Thursday, 9 October 2008

Fingerprint biometrics

Principles of fingerprint biometrics

A fingerprint is made of a number of ridges and valleys on the surface of the finger. Ridges are the upper skin layer segments of the finger and valleys are the lower segments. The ridges form so-called minutia points: ridge endings (where a ridge ends) and ridge bifurcations (where a ridge splits in two). Many types of minutiae exist, including dots (very small ridges), islands (ridges slightly longer than dots, occupying a middle space between two temporarily divergent ridges), ponds or lakes (empty spaces between two temporarily divergent ridges), spurs (a notch protruding from a ridge), bridges (small ridges joining two longer adjacent ridges), and crossovers (two ridges which cross each other).

The uniqueness of a fingerprint can be determined by the pattern of ridges and furrows as well as the minutiae points. There are five basic fingerprint patterns: arch, tented arch, left loop, right loop and whorl. Loops make up 60% of all fingerprints, whorls account for 30%, and arches for 10%.

Issues with fingerprint systems
The tip of the finger is a small area from which to take measurements, and ridge patterns can be affected by cuts, dirt, or even wear and tear. Acquiring high-quality images of distinctive fingerprint ridges and minutiae is a complicated task.


People with few or no minutia points (surgeons, as they often wash their hands with strong detergents; builders; people with special skin conditions) cannot enroll in or use the system. The number of minutia points can be a limiting factor for the security of the algorithm. Results can also be confused by false minutia points (areas of obfuscation that appear due to low-quality enrollment, imaging, or fingerprint ridge detail).

Note: There is some controversy over the uniqueness of fingerprints. The quality of partial prints is, however, the limiting factor. As the number of defining points of the fingerprint becomes smaller, the degree of certainty of identity declines. There have been a few well-documented cases of people being wrongly accused on the basis of partial fingerprints.

Benefits of fingerprint biometric systems
Easy to use
Cheap
Small size
Low power
Non-intrusive
Large database already available

Applications of fingerprint biometrics
Fingerprint sensors are best for devices such as cell phones, USB flash drives, notebook computers and other applications where price, size, and low power are key requirements. Fingerprint biometric systems are also used for law enforcement, background searches to screen job applicants, healthcare and welfare.

Fingerprints are usually considered to be unique, with no two fingers having the exact same dermal ridge characteristics.

How does fingerprint biometrics work?
The main technologies used to capture the fingerprint image with sufficient detail are optical, silicon, and ultrasound.


There are two main algorithm families to recognize fingerprints:

Minutia matching compares specific details within the fingerprint ridges. At registration (also called enrollment), the minutia points are located, together with their relative positions to each other and their directions. At the matching stage, the fingerprint image is processed to extract its minutia points, which are then compared with the registered template.

Pattern matching compares the overall characteristics of the fingerprints, not only individual points. Fingerprint characteristics can include sub-areas of certain interest including ridge thickness, curvature, or density. During enrollment, small sections of the fingerprint and their relative distances are extracted from the fingerprint. Areas of interest are the area around a minutia point, areas with low curvature radius, and areas with unusual combinations of ridges.
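To make the minutia-matching idea concrete, here is a minimal C# sketch (not any vendor's actual algorithm): it counts how many template minutiae have a candidate minutia within a small distance and angle tolerance. Real matchers also align the two prints for rotation and translation first; that step is omitted here.

using System;
using System.Collections.Generic;

struct Minutia
{
    public double X, Y, Angle; // position and local ridge direction
    public Minutia(double x, double y, double angle) { X = x; Y = y; Angle = angle; }
}

static class MinutiaMatcher
{
    // Returns the fraction of template minutiae matched by the candidate print.
    public static double Score(List<Minutia> template, List<Minutia> candidate,
                               double maxDistance, double maxAngle)
    {
        if (template.Count == 0) return 0;
        int matched = 0;
        foreach (Minutia t in template)
        {
            foreach (Minutia c in candidate)
            {
                double dx = t.X - c.X, dy = t.Y - c.Y;
                if (Math.Sqrt(dx * dx + dy * dy) <= maxDistance &&
                    Math.Abs(t.Angle - c.Angle) <= maxAngle)
                {
                    matched++;
                    break; // count each template minutia at most once
                }
            }
        }
        return (double)matched / template.Count;
    }
}

A score close to 1.0 means the candidate accounts for nearly all registered minutiae; a real deployment would compare the score against a tuned acceptance threshold.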

Biometrics

What is “biometrics”?

Biometrics is a field of security and identification technology based on the measurement of unique physical characteristics such as fingerprints, retinal patterns, and facial structure. To verify an individual's identity, biometric devices scan certain characteristics and compare them with a stored entry in a computer database. While the technology goes back years and has been used in highly sensitive institutions such as defense and nuclear facilities, the proliferation of electronic data exchange generated new demand for biometric applications that can secure electronically stored data and online transactions.

Biometrics is the practice of automatically identifying people by one or more physical characteristics.

TYPES OF BIOMETRIC SYSTEMS

FINGERPRINTS.
Fingerprint-based biometric systems scan the dimensions, patterns, and topography of fingers, thumbs, and palms. The most common biometric in forensic and governmental databases, fingerprints contain up to 60 possibilities for minute variation, and extremely large and increasingly integrated networks of these stored databases already exist. The largest of these is the Federal Bureau of Investigation's (FBI) Integrated Automated Fingerprint Identification System, with more than 630 million fingerprint images.

FACIAL RECOGNITION.
Facial recognition systems vary according to the features they measure. Some look at the shadow patterns under a set lighting pattern, while others scan heat patterns or thermal images using an infrared camera that illuminates the eyes and cheekbones. These systems are powerful enough to scope out the minutest differences in facial patterns, even between identical twins. The hardware for facial recognition systems is relatively inexpensive, and is increasingly installed in computer monitors.

EYE SCANS.
There are two main features of the eye that are targeted by biometric systems: the retina and the iris. Each contains more points of identification than a fingerprint. Retina scanners trace the pattern of blood vessels behind the retina by quickly flashing an infrared light into the eye. Iris scanners create a unique biological bar code by scanning the eye's distinctive color patterns. Eye scans tend to occupy less space in a computer and thus operate relatively quickly, although some users are squeamish about having beams of light shot into their eyes.

VOICE VERIFICATION.
Although voices can sound similar and can be consciously altered, the topography of the mouth, teeth, and vocal cords produces distinct pitch, cadence, tone, and dynamics that give away would-be impersonators. Widely used in phone-based identification systems, voice-verification biometrics is also used with personal computers.

HAND GEOMETRY.
Hand-geometry biometric systems take two infrared photographs—one from the side and one from above—of an individual's hand. These images measure up to 90 different characteristics, such as height, width, thickness, finger shape, and joint positions, and compare them with stored data.

KEYSTROKE DYNAMICS.
A biometric system that is tailor-made for personal computers, keystroke-dynamic biometrics measures unique patterns in the way an individual uses a keyboard—such as speed, force, the variation of force on different parts of the keyboard, and multiple-key functions—and exploits them as a means of identification.
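As a rough illustration of the raw signal such a system works with, here is a small hypothetical C# sketch that records the inter-key intervals of a typed phrase. A real product would also capture key hold times and, where hardware allows, force, and would feed the profile into a statistical matcher.

using System;
using System.Collections.Generic;
using System.Diagnostics;

class KeystrokeRhythm
{
    static void Main()
    {
        Console.WriteLine("Type a phrase and press Enter:");
        List<long> intervals = new List<long>();
        Stopwatch watch = Stopwatch.StartNew();
        long last = -1;
        while (Console.ReadKey(true).Key != ConsoleKey.Enter)
        {
            long now = watch.ElapsedMilliseconds;
            if (last >= 0) intervals.Add(now - last); // gap since previous keystroke
            last = now;
        }
        foreach (long ms in intervals)
            Console.Write(ms + " "); // this rhythm is the user's typing profile
        Console.WriteLine();
    }
}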

These topics are indeed very interesting, and it would be better if I explained each and every type in detail for all of you...

Tuesday, 13 May 2008

BitTorrent (Protocol)


BitTorrent is a protocol designed for transferring files. It is peer-to-peer in nature, as users connect to each other directly to send and receive portions of the file.

BitTorrent is a method of distributing large amounts of data widely without the original distributor incurring the entire costs of hardware, hosting, and bandwidth resources.

However, there is a central server (called a tracker) which coordinates the action of all such peers. The tracker only manages connections; it does not have any knowledge of the contents of the files being distributed, and therefore a large number of users can be supported with relatively limited tracker bandwidth. The key philosophy of BitTorrent is that users should upload (transmit outbound) at the same time they are downloading (receiving inbound). In this manner, network bandwidth is utilized as efficiently as possible. BitTorrent is designed to work better as the number of people interested in a certain file increases, in contrast to other file transfer protocols.

Instead, when data is distributed using the BitTorrent protocol, each recipient supplies pieces of the data to newer recipients, reducing the cost and burden on any given individual source, providing redundancy against system problems, and reducing dependence on the original distributor.

The most common method by which files are transferred on the Internet is the client-server model. A central server sends the entire file to each client that requests it -- this is how both HTTP and FTP work. The clients only speak to the server, and never to each other. The main advantages of this method are that it's simple to set up, and the files are usually available, since the servers tend to be dedicated to the task of serving and are always on and connected to the Internet. However, this model has a significant problem with files that are large or very popular, or both.

Namely, it takes a great deal of bandwidth and server resources to distribute such a file, since the server must transmit the entire file to each client. Perhaps you may have tried to download a demo of a new game just released, or CD images of a new Linux distribution, and found that all the servers report "too many users," or there is a long queue that you have to wait through. The concept of mirrors partially addresses this shortcoming by distributing the load across multiple servers. But it requires a lot of coordination and effort to set up an efficient network of mirrors, and it's usually only feasible for the busiest of sites.

Another method of transferring files has become popular recently: the peer-to-peer network, in systems such as Kazaa, eDonkey, Gnutella, Direct Connect, etc. In most of these networks, ordinary Internet users trade files by directly connecting one-to-one. The advantage here is that files can be shared without having access to a proper server, and because of this there is little accountability for the contents of the files. Hence, these networks tend to be very popular for illicit files such as music, movies, pirated software, etc. Typically, a downloader receives a file from a single source; however, the newest versions of some clients allow downloading a single file from multiple sources for higher speeds.

A BitTorrent client is any program that implements the BitTorrent protocol. Each client is capable of preparing, requesting, and transmitting any type of computer file over a network, using the protocol. A peer is any computer running an instance of a client.
To share a file or group of files, a peer first creates a small file called a "torrent" (e.g. MyFile.torrent). This file contains metadata about the files to be shared and about the tracker, the computer that coordinates the file distribution. Peers that want to download the file first obtain a torrent file for it, and connect to the specified tracker, which tells them from which other peers to download the pieces of the file.
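The torrent file itself uses "bencoding", a simple format of integers (i42e), length-prefixed byte strings (4:spam), lists (l...e) and dictionaries (d...e). As a hedged sketch (not a production parser), a minimal C# decoder looks like this:

using System;
using System.Collections.Generic;
using System.Text;

static class Bencode
{
    public static object Decode(byte[] data, ref int pos)
    {
        char c = (char)data[pos];
        if (c == 'i') // integer: i<digits>e
        {
            int end = Array.IndexOf(data, (byte)'e', pos);
            long n = long.Parse(Encoding.ASCII.GetString(data, pos + 1, end - pos - 1));
            pos = end + 1;
            return n;
        }
        if (c == 'l') // list: l<items>e
        {
            pos++;
            List<object> list = new List<object>();
            while (data[pos] != (byte)'e') list.Add(Decode(data, ref pos));
            pos++;
            return list;
        }
        if (c == 'd') // dictionary: d<key><value>...e (the top level of a .torrent)
        {
            pos++;
            Dictionary<string, object> dict = new Dictionary<string, object>();
            while (data[pos] != (byte)'e')
            {
                string key = Encoding.ASCII.GetString((byte[])Decode(data, ref pos));
                dict[key] = Decode(data, ref pos);
            }
            pos++;
            return dict;
        }
        // byte string: <length>:<bytes> (may be raw binary, e.g. the piece hashes)
        int colon = Array.IndexOf(data, (byte)':', pos);
        int len = int.Parse(Encoding.ASCII.GetString(data, pos, colon - pos));
        byte[] s = new byte[len];
        Array.Copy(data, colon + 1, s, 0, len);
        pos = colon + 1 + len;
        return s;
    }
}

Decoding MyFile.torrent this way yields a dictionary whose "announce" entry is the tracker URL and whose "info" entry describes the file names, the piece length, and the SHA-1 hash of each piece.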

Though both ultimately transfer files over a network, a BitTorrent download differs from a classic full-file HTTP request in several fundamental ways:

BitTorrent makes many small data requests over different TCP sockets, while web browsers typically make a single HTTP GET request over a single TCP socket. BitTorrent downloads pieces in a random or "rarest-first"[2] order that ensures high availability, while HTTP downloads in a sequential manner. Taken together, these differences allow BitTorrent to achieve much lower cost, much higher redundancy, and much greater resistance to abuse or to "flash crowds" than a regular HTTP server. However, this protection comes at a cost: downloads can take time to rise to full speed because it may take time for enough peer connections to be established, and it takes time for a node to receive sufficient data to become an effective uploader. As such, a typical BitTorrent download will gradually rise to very high speeds, and then slowly fall back down toward the end of the download. This contrasts with an HTTP server that, while more vulnerable to overload and abuse, rises to full speed very quickly and maintains this speed throughout.
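A hedged C# sketch of the rarest-first idea (real clients also randomize among ties and switch strategies at the very start and end of a download):

// Pick the piece we still need that the fewest connected peers have.
// havePiece[i] is true once piece i is downloaded; peerAvailability[i]
// counts how many connected peers have piece i.
static int PickRarestPiece(bool[] havePiece, int[] peerAvailability)
{
    int rarest = -1;
    for (int i = 0; i < havePiece.Length; i++)
    {
        if (havePiece[i] || peerAvailability[i] == 0) continue;
        if (rarest == -1 || peerAvailability[i] < peerAvailability[rarest])
            rarest = i;
    }
    return rarest; // -1 means nothing is left to request
}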

Monday, 12 May 2008

The Types of Caching in ASP.NET

Introduction
The main benefits of caching are performance-related: operations like accessing database information can be among the most expensive operations of an ASP.NET page's life cycle.

If the database information is fairly static, this information can be cached.

When information is cached, it stays cached either indefinitely, until some relative time, or until some absolute time. Most commonly, information is cached for a relative time frame. That is, our database information may be fairly static, updated just a few times a week. Therefore, we might want to invalidate the cache every other day, meaning every other day the cached content is rebuilt from the database.

Caching in classic ASP was a bit of a chore, but it is quite easy in ASP.NET. There are a number of classes in the .NET Framework designed to aid with caching information. In this article, I will explain how .NET supports caching and explain in detail how to properly incorporate each supported method into Web-based applications.

Caching Options in ASP.NET
ASP.NET supports three types of caching for Web-based applications:
Page Level Caching (called Output Caching)
Page Fragment Caching (often called Partial-Page Output Caching)
Programmatic or Data Caching


Output Caching:
Caches the output from an entire page and returns it for future requests instead of re-executing the requested page.

Fragment Caching:
Caches just a part of a page, which can then be reused even while other parts of the page are being dynamically generated.

Data Caching:
Programmatically caches arbitrary objects for later reuse without re-incurring the overhead of creating them.

In Detail:
Output Caching
Output caching is the simplest of the caching options offered by ASP.NET. It is useful when an entire page can be cached as a whole, and is analogous to most of the caching solutions that were available under classic ASP. It takes a dynamically generated page and stores the HTML result right before it is sent to the client. Then it reuses this HTML for future requests, bypassing the execution of the original code.

Telling ASP.NET to cache a page is extremely simple. You simply add the OutputCache directive to the page you wish to cache:

<%@ OutputCache Duration="30" VaryByParam="none" %>
The resulting caching is similar to the caching done by browsers and proxy servers, but does have one extremely important difference: you can tell ASP.NET which parameters to the page will have an effect on the output, and the caching engine will cache separate versions based on the parameters you specify. This is done using the VaryByParam attribute of the OutputCache directive.

This is illustrated by a very simple example of output caching:

<%@ Page Language="C#" %>
<%@ OutputCache Duration="30" VaryByParam="test" %>
<%= DateTime.Now %>
This piece of code will cache the result for 30 seconds, keeping a separate cached copy for each distinct value of the test parameter. During that time, all requests for the page will be served from the cache.

Fragment Caching
Sometimes it's not possible to cache an entire page. For example, many shopping sites like to greet their users by name. It wouldn't look very good if you went to a site and instead of using your name to greet you it used mine! In the past this often meant that caching wasn't a viable option for these pages. ASP.NET handles this with what it calls fragment caching.

More often than not, it is impractical to cache entire pages. For example, you may have some content on your page that is fairly static, such as a listing of current inventory, but you may have other information, such as the user's shopping cart, or the current stock price of the company, that you wish to not be cached at all. Since Output Caching caches the HTML of the entire ASP.NET Web page, clearly Output Caching cannot be used for these scenarios: enter Partial-Page Output Caching.

Partial-Page Output Caching, or page fragment caching, allows specific regions of pages to be cached. ASP.NET provides a way to take advantage of this powerful technique, requiring that the part(s) of the page you wish to have cached appear in a User Control. One way to specify that the contents of a User Control should be cached is to supply an OutputCache directive at the top of the User Control. That's it! The content inside the User Control will now be cached for the specified period, while the ASP.NET Web page that contains the User Control will continue to serve dynamic content. (Note that for this you should not place an OutputCache directive in the ASP.NET Web page that contains the User Control - just inside of the User Control.)
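As a minimal sketch, a hypothetical Inventory.ascx User Control cached this way might look like the following; the hosting page stays fully dynamic while this fragment is served from the cache for 60 seconds:

<%@ Control Language="C#" %>
<%@ OutputCache Duration="60" VaryByParam="none" %>
Current inventory as of <%= DateTime.Now %>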

Data Caching
This is the most powerful of the caching options available in ASP.NET. Using data caching you can programmatically cache anything you want for as long as you want. The caching system exposes itself in a dictionary type format meaning items are stored in name/value pairs. You cache an item under a certain name and then when you request that name you get the item back. It's similar to an array or even a simple variable.

In addition to just placing an object into the cache you can set all sorts of properties. The object can be set to expire at a fixed time and date, after a period of inactivity, or when a file or other object in the cache is changed.

The main thing to watch out for with data caching is that items you place in the cache are not guaranteed to be there when you want them back. While it does add some work (you always have to check that your object exists after you retrieve it), this scavenging really is a good thing. It gives the caching engine the flexibility to dispose of things that aren't being used or dump parts of the cache if the system starts running out of memory.

Sometimes, more control over what gets cached is desired. ASP.NET provides this power and flexibility by providing a cache engine. Programmatic or data caching takes advantage of the .NET Runtime cache engine to store any data or object between responses. That is, you can store objects into a cache, similar to the storing of objects in Application scope in classic ASP. (As with classic ASP, do not store open database connections in the cache!)
Realize that this data cache is kept in memory and "lives" as long as the host application does. In other words, when the ASP.NET application using data caching is restarted, the cache is destroyed and recreated. Data Caching is almost as easy to use as Output Caching or Fragment caching: you simply interact with it as you would any simple dictionary object. To store a value in the cache, use syntax like this:


Cache["Nikky"] = bar; // C#

To retrieve a value, simply reverse the syntax like this:

bar = Cache["Nikky"]; // C#

Note that after you retrieve a cache value in the above manner, you should first verify that the cache value is not null prior to doing something with the data. Since Data Caching uses an in-memory cache, there are times when cache elements may need to be evicted. That is, if there is not enough memory and you attempt to insert something new into the cache, something else has gotta go! The Data Cache engine does all of this scavenging for you behind the scenes, of course. However, don't forget that you should always check to ensure that the cache value is there before using it. This is fairly simple to do - just check to ensure that the value isn't null/Nothing. If it is, then you need to dynamically retrieve the object and restore it into the cache.
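Putting those pieces together, a typical cache access looks like this hedged sketch (GetProductsFromDatabase is a hypothetical data-access helper, and the ten-minute expiration is an arbitrary choice):

// Check the cache first; on a miss, rebuild the data and re-insert it.
DataTable products = Cache["Products"] as DataTable;
if (products == null)
{
    products = GetProductsFromDatabase(); // hypothetical helper
    Cache.Insert("Products", products,
                 null,                                 // no file/key dependency
                 DateTime.Now.AddMinutes(10),          // absolute expiration
                 System.Web.Caching.Cache.NoSlidingExpiration);
}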







Thursday, 8 May 2008

'A Leader Should Know How to Manage Failure'

(Former President of India APJ Abdul Kalam at the Wharton India Economic Forum, Philadelphia, March 22, 2008)
Kalam was asked:

Could you give an example, from your own experience, of 'How Leaders Should Manage Failure'?

Kalam answered:
Let me tell you about my experience. In 1973 I became the project director of India's satellite launch vehicle program, commonly called the SLV-3. Our goal was to put India's "Rohini" satellite into orbit by 1980. I was given funds and human resources -- but was told clearly that by 1980 we had to launch the satellite into space. Thousands of people worked together in scientific and technical teams towards that goal.

By 1979 -- I think the month was August -- we thought we were ready. As the project director, I went to the control center for the launch. At four minutes before the satellite launch, the computer began to go through the checklist of items that needed to be checked. One minute later, the computer program put the launch on hold; the display showed that some control components were not in order. My experts -- I had four or five of them with me -- told me not to worry; they had done their calculations and there was enough reserve fuel. So I bypassed the computer, switched to manual mode, and launched the rocket. In the first stage, everything worked fine. In the second stage, a problem developed. Instead of the satellite going into orbit, the whole rocket system plunged into the Bay of Bengal. It was a big failure.
That day, the chairman of the Indian Space Research Organization, Prof. Satish Dhawan, had called a press conference. The launch was at 7:00 am, and the press conference -- where journalists from around the world were present -- was at 7:45 am at ISRO's satellite launch range in Sriharikota [in Andhra Pradesh in southern India]. Prof. Dhawan, the leader of the organization, conducted the press conference himself. He took responsibility for the failure -- he said that the team had worked very hard, but that it needed more technological support. He assured the media that in another year, the team would definitely succeed. Now, I was the project director, and it was my failure, but instead, he took responsibility for the failure as chairman of the organization.

The next year, in July 1980, we tried again to launch the satellite -- and this time we succeeded. The whole nation was jubilant. Again, there was a press conference. Prof. Dhawan called me aside and told me, "You conduct the press conference today." I learned a very important lesson that day. When failure occurred, the leader of the organization owned that failure. When success came, he gave it to his team.

The best management lesson I have learned did not come to me from reading a book; it came from that experience.

Saturday, 19 April 2008

What is Sharepoint

Overview

A SharePoint page is built by combining web parts into a web page, which is accessed using a browser. Any web editor supporting ASP.NET can be used for this purpose, even though Microsoft Office SharePoint Designer is the preferred editor. The extent of customization of the page depends on its design.

SharePoint is a web-based collaboration and document management platform from Microsoft. It can be used to host web sites which can be used to access shared workspaces and documents, as well as specialized applications such as wikis, blogs and many other forms of applications, from within a browser. SharePoint functionality is exposed as web parts, such as a task list or discussion pane. These web parts are composed into web pages, which are then hosted in the SharePoint portal. SharePoint sites are actually ASP.NET applications, which are served using IIS and use a SQL Server database as the data storage backend.

The term 'SharePoint' is commonly used to refer to one of the following two products:
Windows SharePoint Services (WSS)
Microsoft Office SharePoint Server 2007 (MOSS)

In addition, previous versions of this software used different names (SharePoint Portal Server, for example) but are referred to as "SharePoint". The SharePoint family also includes the Microsoft Office SharePoint Designer (SPD).



WSS pages are ASP.NET applications; as such, SharePoint web parts use the ASP.NET web parts infrastructure, and web parts can be written using the ASP.NET APIs to extend the functionality of WSS. In terms of programmability, WSS exposes an API and object model to programmatically create and manage portals, workspaces and users. In contrast, the MOSS API is more geared towards automation of tasks and integration with other applications.[1] Both WSS and MOSS can use the web parts API to enhance the end user functionality. In addition, WSS document libraries can be exposed over ADO.NET connections to programmatically access the files and revisions in them.
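For instance, a hedged sketch of creating a list item through the WSS object model (the site URL and list name here are hypothetical):

using (SPSite site = new SPSite("http://server/sites/team"))
using (SPWeb web = site.OpenWeb())
{
    SPList tasks = web.Lists["Tasks"];
    SPListItem item = tasks.Items.Add();
    item["Title"] = "Review the draft specification";
    item.Update(); // persists the item to the SQL Server content database
}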

At the web server level, WSS configures IIS to forward all requests, regardless of file and content types, to the ASP.NET session hosting the WSS web application, which either makes a certain revision of a certain file available from the database or takes other actions. Unlike regular ASP.NET applications, the .aspx files which contain the WSS (and MOSS) application code reside in SQL Server databases instead of the filesystem. As such, the regular ASP.NET runtime cannot process the files. Instead, WSS plugs a custom Virtual Path Provider component[2] into the ASP.NET pipeline, which fetches the .aspx files from the database for processing. With this feature, introduced with WSS 3.0, both the WSS application as well as the data it generates and manages can be stored in a database.

The first version, called SharePoint Team Services (usually abbreviated to STS), was released at the same time as Office XP and was available as part of Microsoft FrontPage. STS could run on Windows 2000 Server or Windows XP.

Windows SharePoint Services 2.0 was marketed as an upgrade to SharePoint Team Services, but was in fact a completely redesigned application. SharePoint Team Services stored documents in ordinary file storage, keeping document metadata in a database. Windows SharePoint Services 2.0, on the other hand, stores both the document and the metadata in a database, and supports basic document versioning for items in Document Libraries. Service Pack 2 for WSS added support for SQL Server 2005 and the use of the .NET Framework 2.0.
Windows SharePoint Services 3.0 was released on November 16, 2006 as part of the Microsoft Office 2007 suite and Windows Server 2008. In fact, Windows Server 2008 supports a separate server role for SharePoint services. WSS 3.0 is built using .NET Framework 2.0, with Windows Workflow Foundation from .NET Framework 3.0 adding workflow capabilities to the basic suite. By the beginning of 2007, WSS 3.0 was made available to the public. Windows 2000 Server is not supported by WSS 3.0, nor is SQL Server 2000.


The WSS 3.0 wiki allows RSS export of content and, when viewed in Internet Explorer, provides a WYSIWYG editor. As with MediaWiki, it produces hyperlinks with double square brackets, but unlike MediaWiki it uses HTML for markup. An enhanced wiki is available for SharePoint on CodePlex and is free to download and install.

SharePoint solves four main problems:
· It’s difficult to keep track of all the documents in even a small office
· Email isn’t a great way to share files
· We work all over the place
· It’s hard to create/maintain web sites on your own



The SharePoint Family

Windows SharePoint Services (WSS)
Windows SharePoint Services (WSS) is a free add-on to Windows Server. WSS offers the base collaborative infrastructure, supporting HTTP- and HTTPS-based editing of documents, as well as document organization in document libraries, version control capabilities, wikis, and blogs. It also includes end user functionality such as workflows, to-do lists, alerts and discussion boards, which are exposed as web parts to be embedded into SharePoint pages. WSS was previously known as SharePoint Team Services. Though workflows can be created for WSS in SharePoint Designer or VS.NET, unlike with MOSS no workflows come installed out of the box.


Microsoft Search Server
Microsoft Search Server (MSS) is an enterprise search platform from Microsoft, based on the search capabilities of Microsoft Office SharePoint Server.[2] MSS shares its architectural underpinnings with the Windows Search platform for both the querying engine and the indexer. MOSS search provides the ability to search metadata attached to documents.
Microsoft Search Server has been made available as Search Server 2008, which was released in March 2008. A free version, Search Server Express 2008, is also available. The express edition offers the same feature set as the commercial edition, including no limitation on the number of files indexed; however, it is limited to a stand-alone installation and cannot be scaled out to a cluster.


Microsoft Office SharePoint Server (MOSS)
Microsoft Office SharePoint Server (MOSS) is a paid component of the Microsoft Office server suite. MOSS is built on top of WSS and adds more functionality to it, including better document management, enterprise search functionality, navigation features, RSS support, as well as features from Microsoft Content Management Server. The Enterprise edition of MOSS also includes features for business data analysis, such as Excel Services and the Business Data Catalog. MOSS also provides integration with Microsoft Office applications, such as project management capabilities with Microsoft Project Server and the ability to expose Microsoft Office InfoPath forms via a browser.[4] It can also host specific libraries, such as PowerPoint Template Libraries, provided the server components of the specific application are installed. MOSS was previously known as SharePoint Server and SharePoint Portal Server.

Microsoft SharePoint Designer (SPD)
Microsoft Office SharePoint Designer (SPD) is a WYSIWYG HTML editor, which is primarily aimed at designing SharePoint sites and end-user workflows for WSS sites. It shares its rendering engine with Microsoft Expression Web, its general web designing sibling, and Microsoft's Visual Studio 2008 IDE.


Windows SharePoint Services (WSS) or Windows SharePoint is the basic part of SharePoint, offering collaboration and document management functionality by means of web portals, by providing a centralized repository for shared documents, as well as browser-based management and administration of them. It allows creation of Document libraries, which are collections of files that can be shared for collaborative editing. SharePoint provides access control and revision control for documents in a library.

It also includes a collection of web parts, which are web widgets that can be embedded into web pages to provide a certain functionality. SharePoint includes web parts such as workspaces and dashboards, navigation tools, lists, alerts (including e-mail alerts), shared calendar, contact lists and discussion boards. It can be configured to return separate content for Intranet, Extranet and Internet locations. It uses a similar permissions model to Microsoft Windows, via groups of users. Active Directory groups can be added to SharePoint groups to easily tie in permissions. Alternatively, other authentication providers can be added through HTML Forms authentication.


Friday, 18 April 2008

Assemblies Overview (.NET, C# )


Environment: C#, .NET

What is Assembly in .NET?

An assembly is a file that is automatically generated by the compiler upon successful compilation of every .NET application. It can be either a Dynamic Link Library or an executable file. It is generated only once for an application, and upon each subsequent compilation the assembly gets updated. The entire process runs in the background of your application; there is no need for you to learn deeply about assemblies. However, a basic knowledge of this topic will help you to understand the architecture behind a .NET application.

An Assembly contains Intermediate Language (IL) code, which is similar to Java byte code. Alongside the IL, it contains metadata. Metadata enumerates the features of every "type" inside the assembly or the binary. In addition to metadata, assemblies also have a special file called the Manifest. It contains information about the current version of the assembly and other related information.

In .NET, there are two kinds of assemblies: single file and multi file. A single file assembly contains all the required information (IL, Metadata, and Manifest) in a single package. The majority of assemblies in .NET are single file assemblies. Multi file assemblies are composed of numerous .NET binaries, or modules, and are generated for larger applications. One of the modules will contain the manifest and the others will have the IL and Metadata instructions.

The main benefit of Intermediate Language is its power to integrate with all .NET languages. This is because all .NET languages produce the same IL code upon successful compilation; hence, they can interact with each other very easily.

However, .NET is not yet declared as a platform-independent language; efforts are on at Microsoft to achieve this objective.

As of today, .NET applications are equipped to run only on Windows.


An assembly is a fundamental building block of any .NET Framework application. For example, when you build a simple C# application, Visual Studio creates an assembly in the form of a single portable executable (PE) file, specifically an EXE or DLL.

Assemblies contain metadata that describe their own internal version number and details of all the data and object types they contain. For more information see Assembly Manifest.

Assemblies are only loaded as they are required. If they are not used, they are not loaded. This means that assemblies can be an efficient way to manage resources in larger projects.

Assemblies can contain one or more modules. For example, larger projects may be planned in such a way that several individual developers work on separate modules, all coming together to create a single assembly. For more information on modules, see the topic How to: Build a Multifile Assembly.

Assemblies have the following properties:

Assemblies are implemented as .exe or .dll files.

You can share an assembly between applications by placing it in the Global Assembly Cache.

Assemblies must be strong-named before they can be placed in the Global Assembly Cache. For more information, see Strong-Named Assemblies.

Assemblies are only loaded into memory if they are required.

You can programmatically obtain information about an assembly using reflection, as shown in the sketch after this list. For more information, see the topic Reflection.

If you want to load an assembly only to inspect it, use a method such as ReflectionOnlyLoadFrom.

You can use two versions of the same assembly in a single application. For more information, see extern alias.
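As noted in the list above, reflection can read an assembly's metadata back out. A minimal C# sketch (the file name is hypothetical):

using System;
using System.Reflection;

class AssemblyInspector
{
    static void Main()
    {
        Assembly asm = Assembly.LoadFrom("MyLibrary.dll"); // hypothetical assembly
        Console.WriteLine(asm.FullName); // simple name, version, culture, public key token
        foreach (Type t in asm.GetTypes())
            Console.WriteLine(t.FullName); // every type enumerated by the metadata
    }
}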

Strong-Named Assemblies

A strong name consists of the assembly's identity—its simple text name, version number, and culture information (if provided)—plus a public key and a digital signature. It is generated from an assembly file (the file that contains the assembly manifest, which in turn contains the names and hashes of all the files that make up the assembly), using the corresponding private key. Microsoft® Visual Studio® .NET and other development tools provided in the .NET Framework SDK can assign strong names to an assembly. Assemblies with the same strong name are expected to be identical.
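As a hedged illustration (file names hypothetical), a key pair can be generated with the Strong Name tool from the .NET Framework SDK and then referenced from the assembly's attributes, typically in AssemblyInfo.cs:

// At the command line:
//   sn -k MyKeyPair.snk
using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyKeyFile("MyKeyPair.snk")] // the compiler signs the assembly at build time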

You can ensure that a name is globally unique by signing an assembly with a strong name. In particular, strong names satisfy the following requirements:

Strong names guarantee name uniqueness by relying on unique key pairs. No one can generate the same assembly name that you can, because an assembly generated with one private key has a different name than an assembly generated with another private key.

Strong names protect the version lineage of an assembly. A strong name can ensure that no one can produce a subsequent version of your assembly. Users can be sure that a version of the assembly they are loading comes from the same publisher that created the version the application was built with.

Strong names provide a strong integrity check. Passing the .NET Framework security checks guarantees that the contents of the assembly have not been changed since it was built. Note, however, that strong names in and of themselves do not imply a level of trust like that provided, for example, by a digital signature and supporting certificate.

When you reference a strong-named assembly, you expect to get certain benefits, such as versioning and naming protection. If the strong-named assembly then references an assembly with a simple name, which does not have these benefits, you lose the benefits you would derive from using a strong-named assembly and revert to DLL conflicts. Therefore, strong-named assemblies can only reference other strong-named assemblies.

A Kiss of Love

A married couple was in a terrible accident where the man's face was severely burned.

The doctor told the husband that they couldn't graft any skin from his body because he was too skinny. So the wife offered to donate some of her own skin.

However, the only skin on her body that the doctor felt was suitable would have to come from her buttocks. The husband and wife agreed that they would tell no one about where the skin came from, and they requested that the doctor also honor their secret.

After all, this was a very delicate matter.

After the surgery was completed, everyone was astounded at the man's new face. He looked more handsome than he ever had before!

All their friends and relatives just went on and on about his youthful beauty! One day, he was alone with his wife, and he was overcome with emotion at her sacrifice.

He said, "Dear, I just want to thank you for everything you did for me.
How can I possibly repay you?"


"My darling," she replied,

"I get all the thanks I need every time I see your mother Kiss you on your cheeks."

Thursday, 10 April 2008

The Frame Buffer

Overview:

Rasterization generates a stream of source pixels from graphic primitives, which are combined with destination pixels in the frame buffer. The term frame buffer originates from the early days of raster graphics and referred to a bank of memory that contained a single image, or frame. As computer graphics evolved, the term came to encompass the image data as well as any ancillary data needed during graphic rendering.

In Direct3D, the frame buffer encompasses the currently selected render target surface and depth/stencil surfaces. If multisampling is used, additional memory is required, but it is not explicitly exposed as directly manipulable surfaces.

After rasterization, each source pixel contains an RGB color, an associated transparency value in its alpha channel, and an associated depth in the scene in its Z value. The Z value is a fixed-point precision quantity produced by rasterization. Fog may then be applied to the pixel before it is incorporated into the render target. The application of fog mixes the pixel's color value, but not its alpha value, from rasterization with a fog color based on a function of the pixel's depth in the scene.

Fog, also referred to as depth cueing, can be used to diminish the intensity of an object as it recedes from the camera, placing more emphasis on objects closer to the viewer.

After fog application, pixels can be rejected on the basis of their transparency, their depth in the scene, or by stenciling operations. Stencil operations allow arbitrary regions of the frame buffer to be masked away from rendering, among other things. Unlike the associated alpha and depth produced for each pixel during rasterization, the stencil value associated with the source pixel is obtained from a render state.

Double buffering is a concept you need to be familiar with before moving on. When data goes through the rendering pipeline, it does not exit the other end straight to your screen. As you know, the rendering pipeline takes in 3D data and outputs pixels. These pixels are outputted to a rectangular grid of pixels, known as the "frame buffer".

Eventually, the frame buffer is displayed on the monitor. Now the problem with this is that the displaying of the frame buffer to the monitor is not completely in your control. Imagine you want to draw a scene of a town with a few people roaming around. You need various different 3D models to create a believable town scene. A few buildings, some houses, some shops, different people models, maybe a few props like benches and lamp posts, and then the model of the ground to put all the stuff on. Now you’re happily sending data to your graphics card and everything is going fine, you send it house one, house two, the shop, a few people, but then before you can send in the data that represents another person, your graphics card decides to give the frame buffer to the monitor.

And what do you get on the screen?

You get a scene of a town that has a few people displayed, a few buildings, half a human (because the frame buffer was sent to the monitor before you finished sending the entire data for the human figure you were rendering) and no streets (because you haven’t sent that data in yet) or props.

This is obviously no good for business. So we need a way to counter this problem. This is where double buffering comes in. The trick is to have two frame buffers. One called the front buffer, and the second one called the back buffer.

The front buffer is the one that is always displayed on your screen, not the back buffer. The back buffer is used as the rectangular grid that the rendering pipeline outputs the pixels to. So all your rendering goes straight to the back buffer. Only when you tell D3D to move the back buffer to the front buffer will your scene be displayed on the monitor. And by the time you tell D3D to move the back buffer to the front, you would’ve already finished rendering the entire scene. So using the example above, when you are sending the data for the human 3D model and your system displays the frame buffer – instead of seeing an incomplete scene, you will see whatever is on the front buffer (which is nothing at this time).

Then you can continue rendering the rest of the scene to the back buffer and move the back buffer to the front buffer when you’re done.
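In Managed DirectX terms, one frame of the render loop might look like this hedged sketch, where device is assumed to be an initialized Microsoft.DirectX.Direct3D.Device:

device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0);
device.BeginScene();
// ... draw every building, person, and prop into the back buffer here ...
device.EndScene();
device.Present(); // only now does the finished back buffer reach the screen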





You keep on sending data through the pipeline, and it outputs pixels to the back buffer at the other end. Only when you tell D3D to switch the buffers will D3D take the data that is in the back buffer and put it in the front buffer.


Depth Buffers :

The depth buffer is also known as a z-buffer, because the z-axis usually represents "depth". A depth buffer has a certain level of accuracy associated with it, just as in C++ you can have a "float" data type which allows for 32 bits of floating-point precision, or a "double" data type which allows for 64 bits of floating-point precision.


When we talk about “16” or “32” bits per pixel for a depth buffer, we are discussing the accuracy of the device for determining how to arrange our objects. The higher the bit depth, the more accurate this arrangement is. This accuracy can also come at a cost of performance though, so make sure you try to test the scene using both.
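A hedged sketch of requesting a particular depth-buffer precision with Managed DirectX (assuming a windowed device is being created):

PresentParameters pp = new PresentParameters();
pp.Windowed = true;
pp.SwapEffect = SwapEffect.Discard;
pp.EnableAutoDepthStencil = true;            // ask D3D to manage a depth buffer
pp.AutoDepthStencilFormat = DepthFormat.D16; // try D16 versus D32 and measure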






Big and Little Endian

Basic Memory Concepts

In order to understand the concept of big and little endian, you need to understand memory. Fortunately, we only need a very high level abstraction for memory. You don't need to know all the little details of how memory works.

All you need to know about memory is that it's one large array. But one large array containing what? The array contains bytes. In computer organization, people don't use the term "index" to refer to the array locations. Instead, we use the term "address". "address" and "index" mean the same, so if you're getting confused, just think of "address" as "index".

Each address stores one element of the memory "array". Each element is typically one byte. There are some memory configurations where each address stores something besides a byte. For example, you might store a nybble or a bit. However, those are exceedingly rare, so for now, we make the broad assumption that all memory addresses store bytes.

I will sometimes say that memory is byte-addressable. This is just a fancy way of saying that each address stores one byte. If I say memory is nybble-addressable, that means each memory address stores one nybble.

Storing Words in Memory
We've defined a word to mean 32 bits. This is the same as 4 bytes. Integers, single-precision floating point numbers, and MIPS instructions are all 32 bits long. How can we store these values into memory? After all, each memory address can store a single byte, not 4 bytes.
The answer is simple. We split the 32-bit quantity into 4 bytes. For example, suppose we have a 32-bit quantity, written in hexadecimal as 0x90AB12CD. Since each hex digit is 4 bits, we need 8 hex digits to represent the 32-bit value.
So, the 4 bytes are: 90, AB, 12, CD, where each byte requires 2 hex digits.
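In C#, that split is just a sequence of shifts and casts:

uint word = 0x90AB12CD;
byte b0 = (byte)(word >> 24); // 0x90, most significant byte
byte b1 = (byte)(word >> 16); // 0xAB
byte b2 = (byte)(word >> 8);  // 0x12
byte b3 = (byte)word;         // 0xCD, least significant byte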


Big Endian :

In big endian, you store the most significant byte in the smallest address. Here's how it would look:

Address   1000  1001  1002  1003
Value       90    AB    12    CD

Little Endian

In little endian, you store the least significant byte in the smallest address. Here's how it would look:

Address   1000  1001  1002  1003
Value       CD    12    AB    90


Which Way Makes Sense?

Different ISAs use different endianness. While one way may seem more natural to you (most people think big-endian is more natural), there is justification for either one.

For example, DEC and Intel x86 processors are little endian, while Motorolas and Suns are big endian. MIPS processors allowed you to select a configuration where it would be big or little endian.


Why is endianness so important?


Suppose you store int values to a file, then send the file to a machine which uses the opposite endianness and read the values back in. You'll run into problems because of endianness. You'll read in reversed values that won't make sense.
Endianness is also a big issue when sending numbers over the network. Again, if you send a value from a machine of one endianness to a machine of the opposite endianness, you'll have problems. This is even worse over the network, because you might not be able to determine the endianness of the machine that sent you the data.


The solution is to send 4-byte quantities using network byte order, which is defined to be big endian. If your machine has the same endianness as network byte order, then great, no change is needed. If not, then you must reverse the bytes.
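A small C# sketch ties these ideas together; BitConverter follows the host's byte order, and IPAddress.HostToNetworkOrder converts to big-endian network byte order:

using System;
using System.Net;

class EndianDemo
{
    static void Main()
    {
        uint value = 0x90AB12CD;
        byte[] bytes = BitConverter.GetBytes(value); // laid out in host byte order
        Console.WriteLine(BitConverter.IsLittleEndian
            ? "This machine is little endian"
            : "This machine is big endian");
        Console.WriteLine(BitConverter.ToString(bytes)); // CD-12-AB-90 on x86
        // Reversed on little-endian hosts, unchanged on big-endian ones:
        int network = IPAddress.HostToNetworkOrder((int)value);
        Console.WriteLine(network.ToString("X8"));
    }
}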

Tuesday, 8 April 2008

Scientists - After Death in Heaven


Once, all the scientists die and go to heaven. They decide to play hide-n-seek.

Unfortunately Einstein is the one who has the den.........

..He is supposed to count up to 100... and then start searching... ..

Everyone starts hiding except Newton......

...Newton just draws a square of 1 meter and stands in it, right in front of Einstein.

Einstein counts 1, 2, 3...... 97, 98, 99.... 100.....
He opens his eyes and finds Newton standing in front.......

Einstein says, "Newton's out.. Newton's out.....

Newton denies it and says, "I am not out........ I am not Newton......" All the scientists come out to see how he proves that he is not Newton.

Newton says, "I am standing in a square of area 1 meter squared.....

That makes me Newton per meter squared..... Since one Newton per meter squared is one Pascal, I'm Pascal.....

Therefore Pascal is OUT.......!

Howwwwwwwwwwwwzzzzzzzzzzzz That !!!!!!!!!!!

Saturday, 5 April 2008

Linux\UNIX Commands

In an earlier post I explained how to do debugging in the Linux environment. Here I am exploring some important commands which are required to get comfortable with the Linux operating system.

Below is a listing of Unix / Linux commands, each with a brief explanation of what the command does. I have tried to sort it alphabetically.

I bet it is going to be very, very useful, and you will be able to get all the content in one place.

Command Description
ac :: Prints statistics about users' connect time.
alias :: Create a name for another command or long command string.
at :: Command scheduler.

basename :: Deletes any specified prefix from a string.
bash :: Bourne-Again shell command interpreter.
bc :: Calculator.
bdiff :: Compare large files.
bfs :: Editor for large files.
bg :: Continues a program running in the background.
biff :: Enable / disable incoming mail notifications.
break :: Break out of while, for, foreach, or until loop.
bye :: Alias often used for the exit command.

cal :: Calendar
calendar :: Display appointments and reminders.
cancel :: Cancels a print job.
cat :: View and/or modify a file.
cc :: C compiler.
cd :: Change directory.
chdir :: Change directory.
checkeq :: Language processors to assist in describing equations.
checknr :: Check nroff and troff files for any errors.
chfn :: Modify your own information or if super user or root modify another users information.
chgrp :: Change a groups access to a file or directory.
chkey :: Change the secure RPC key pair.
chmod :: Change the permission of a file.
chown :: Change the ownership of a file.
chsh :: Change login shell.
cksum :: Display and calculate a CRC for files.
clear :: Clears screen.
cls :: Alias often used to clear a screen.
cmp :: Compare files.
col :: Reverse line-feeds filter.
comm :: Compare files and select or reject lines that are common.
compress :: Compress files on a computer.
continue :: Break out of while, for, foreach, or until loop.
cp :: Copy files.
cpio :: Creates archived CPIO files.
crontab :: Create and list files that you wish to run on a regular schedule.
csh :: Execute the C shell command interpreter
csplit :: Split files based on context.
ctags :: Create a tag file for use with ex and vi.
cu :: Calls or connects to another Unix system, terminal or non-Unix system.
curl :: Transfer a URL.
cut :: Cut out selected fields of each line of a file.

date :: Tells you the date and time in Unix.
dc :: An arbitrary precision arithmetic package.
deroff :: Removes nroff/troff, tbl, and eqn constructs.
df :: Display the available disk space for each mount.
diff :: Displays two files and prints the lines that are different.
dig :: DNS lookup utility.
dircmp :: Lists the different files when comparing directories.
dirname :: Deliver portions of path names.
dmesg :: Print or control the kernel ring buffer.
dos2unix :: Converts text files between DOS and Unix formats.
dpost :: Translates files created by troff into PostScript.
du :: Tells you how much space a file occupies.

echo :: Displays text after echo to the terminal.
ed :: Line oriented file editor.
egrep :: Search a file for a pattern using full regular expressions.
elm :: Program command used to send and receive e-mail.
emacs :: Text editor.
enable :: Enables / Disables LP printers.
env :: Displays environment variables.
eqn :: Language processors to assist in describing equations.
ex :: Line-editor mode of the vi text editor.
exit :: Exit from a program, shell or log you out of a Unix network.
expr :: Evaluate arguments as an expression.

fc :: Lists, edits, or re-executes commands previously entered to an interactive shell.
fg :: Continues a stopped job by running it in the foreground.
fgrep :: Search a file for a fixed-character string.
file :: Tells you if the object you are looking at is a file or if it is a directory.
find :: Finds one or more files assuming that you know their approximate filenames.
findsmb :: List info about machines that respond to SMB name queries on a subnet.

fmt :: Simple text formatters.
fromdos :: Converts text files between DOS and Unix formats.
fsck :: Check and repair a Linux file system.
ftp :: Enables ftp access to another terminal.

getfacl :: Display discretionary file information.
gprof :: The gprof utility produces an execution profile of a program.
grep :: Finds text within a file.
groupadd :: Creates a new group account.
groupdel :: Enables a super user or root to remove a group.
groupmod :: Enables a super user or root to modify a group.
gunzip :: Expand compressed files.
gview :: A programmer's text editor.
gvim :: A programmer's text editor.
gzip :: Compress files.

halt :: Stop the computer.
hash :: Remove internal hash table.
hashstat :: Display the hash stats.
head :: Displays the first ten lines of a file, unless otherwise stated.
help :: If the computer has online help documentation installed, this command will display it.
history :: Display the history of commands typed.

ifconfig :: Sets up network interfaces.
ifdown :: Takes a network interface down.
ifup :: Brings a network interface up.

join :: Joins lines of two files on a common field.

keylogin :: Decrypt the user's secret key.
kill :: Cancels a job.
ksh :: Korn shell command interpreter.

ld :: Link-editor for object files.
ldd :: List dynamic dependencies of executable files or shared objects.
less :: Pager like the more command, but also allows backward movement.
lex :: Generate programs for lexical tasks.
link :: Calls the link function to create a link to a file.
ln :: Creates a link to a file.
locate :: List files in databases that match a pattern.
login :: Signs into a new system.
logname :: Returns the user's login name.
logout :: Logs out of a system.
lp :: Prints a file on System V systems.
lpadmin :: Configure the LP print service.
lpc :: Line printer control program.
lpq :: Lists the status of all the available printers.
lpr :: Submits print requests.
lprm :: Removes print requests from the print queue.
lpstat :: Lists status of the LP print services.
ls :: Lists the contents of a directory.

mach :: Display the processor type of the current host.
mail :: One of the ways that allows you to read/send E-Mail.
mailcompat :: Provide SunOS 4.x compatibility for the Solaris mailbox format.
mailx :: Mail interactive message processing system.
make :: Executes a list of shell commands associated with each target.
man :: Display a manual of a command.
mesg :: Control if non-root users can send text messages to you.
mkdir :: Create a directory.
mkfs :: Build a Linux file system, usually a hard disk partition.
more :: Displays text one screen at a time.
mount :: Attaches (mounts) file systems and remote resources.
mt :: Magnetic tape control.
mv :: Renames a file or moves it from one directory to another directory.

nc :: TCP/IP Swiss army knife.
neqn :: Language processors to assist in describing equations.
netstat :: Shows network status.
newalias :: Install new elm aliases for user and/or system.
newform :: Change the format of a text file.
newgrp :: Log into a new group.
nice :: Invokes a command with an altered scheduling priority.
niscat :: Display NIS+ tables and objects.
nischmod :: Change access rights on a NIS+ object.
nischown :: Change the owner of a NIS+ object.
nischttl :: Change the time to live value of a NIS+ object.
nisdefaults :: Display NIS+ default values.
nisgrep :: Utilities for searching NIS+ tables.
nismatch :: Utilities for searching NIS+ tables.
nispasswd :: Change NIS+ password information.
nistbladm :: NIS+ table administration command.
nmap :: Network exploration tool and security / port scanner.
nohup :: Runs a command even if the session is disconnected or the user logs out.
nroff :: Formats documents for display or line-printer.
nslookup :: Queries a name server for a host or domain lookup.

on :: Execute a command on a remote system, but with the local environment.
onintr :: Shell built-in functions to respond to (hardware) signals.
optisa :: Determine which variant instruction set is optimal to use.

pack :: Shrinks a file into a compressed file.
pagesize :: Display the size of a page of memory in bytes, as returned by getpagesize.
passwd :: Allows you to change your password.
paste :: Merge corresponding or subsequent lines of files.
pax :: Reads and writes archive files, lists their members, and copies directory hierarchies.
pcat :: Displays the uncompressed contents of a packed file.
pg :: File perusal filter for CRTs.
pgrep :: Examines the active processes on the system and reports the process IDs that match a pattern.
pine :: Command line program for Internet News and Email.
ping :: Sends ICMP ECHO_REQUEST packets to network hosts.
pkill :: Examines the active processes on the system and signals those that match a pattern.
poweroff :: Stop the computer.
pr :: Formats a file to make it look better when printed.
printf :: Write formatted output.
ps :: Reports the process status.
pwd :: Print the current working directory.

quit :: Allows you to exit from a program, shell or log you out of a Unix network.

rcp :: Copies files from one computer to another computer.
reboot :: Stops and restarts the computer.
red :: Line oriented file editor.
rehash :: Recomputes the internal hash table of the contents of directories listed in the path.
remsh :: Runs a command on another computer.
repeat :: Shell built-in functions to repeatedly execute action(s) for a selected number of times.
rgview :: A programmer's text editor.
rgvim :: A programmer's text editor.
rlogin :: Establish a remote connection from your terminal to a remote machine.
rm :: Deletes a file without confirmation (by default).
rmail :: One of the ways that allows you to read/send E-Mail.
rmdir :: Deletes a directory.
rn :: Reads newsgroups.
route :: Show / manipulate the IP routing table.
rpcinfo :: Report RPC information.
rsh :: Runs a command on another computer.
rsync :: Faster, flexible replacement for rcp.
rview :: A programmer's text editor.
rvim :: A programmer's text editor.

s2p :: Convert a sed script into a Perl script.
sag :: Graphically displays the system activity data stored in a binary data file by a previous sar run.
sar :: Displays the activity for the CPU.
script :: Records everything printed on your screen.
sdiff :: Compares two files, side-by-side.
sed :: Stream editor; applies pre-recorded commands to make changes to text.
sendmail :: Sends mail over the Internet.
set :: Set the value of an environment variable.
setenv :: Set the value of an environment variable.
setfacl :: Modify the Access Control List (ACL) for a file or files.
settime :: Change file access and modification time.
sftp :: Secure file transfer program.
sh :: Runs or processes jobs through the Bourne shell.
shred :: Delete a file securely, first overwriting it to hide its contents.
shutdown :: Turn off the computer immediately or at a specified time.
sleep :: Waits a given number of seconds.
slogin :: OpenSSH SSH client (remote login program).
smbclient :: An ftp-like client to access SMB/CIFS resources on servers.
sort :: Sorts the lines in a text file.
spell :: Checks a text file and reports any words that are not in the dictionary.
split :: Split a file into pieces.
stat :: Display file or filesystem status.
stop :: Control process execution.
strip :: Discard symbols from object files.
stty :: Sets options for your terminal.
su :: Become super user or another user.
sysinfo :: Get and set system information strings.
sysklogd :: Linux system logging utilities.

tabs :: Set tabs on a terminal.
tac :: Concatenate and print files in reverse.
tail :: Delivers the last part of a file.
talk :: Talk with other logged-in users.
tar :: Create tape archives and add or extract files.
tbl :: Preprocessor for formatting tables for nroff or troff.
tcopy :: Copy a magnetic tape.
tcpdump :: Dump traffic on a network.
tee :: Reads from standard input and writes to standard output and to files.
telnet :: Uses the telnet protocol to connect to another remote computer.
time :: Used to time a simple command.
timex :: The timex command times a command; reports process data and system activity.
todos :: Converts text files between DOS and Unix formats.
top :: Display Linux tasks.
touch :: Change file access and modification time.
tput :: Initialize a terminal or query terminfo database.
tr :: Translate characters.
traceroute :: Print the route packets take to network host.
troff :: Typeset or format documents.

ul :: Reads the named files (or the terminal) and performs underlining.
umask :: Get or set the file mode creation mask.
unalias :: Remove an alias.
unhash :: Remove internal hash table.
uname :: Print name of current system.
uncompress :: Expands compressed files.
uniq :: Report or filter out repeated lines in a file.
umount :: Detaches (unmounts) file systems and remote resources.
unpack :: Expands a compressed file.
untar :: Extracts files from a tar archive.
until :: Execute a set of actions while/until conditions are evaluated TRUE.
useradd :: Creates a new user or updates default new-user information.
userdel :: Removes a user's account.
usermod :: Modifies a user's account.

vacation :: Reply to mail automatically.
vedit :: Screen-oriented (visual) display editor based on ex.
vgrind :: Grinds nice program listings (formats source code for printing).
vi :: Screen-oriented (visual) display editor based on ex.
view :: A programmer's text editor (vi in read-only mode).
vim :: A programmer's text editor.

w :: Show who is logged on and what they are doing.
wait :: Await process completion.
wc :: Displays a count of lines, words, and characters in a file.
whereis :: Locates the binary, source, and manual page files for a command.
which :: Locate a command.
while :: Repetitively execute a set of actions while/until conditions are evaluated TRUE.
who :: Displays who is on the system.
whois :: Internet user name directory service.
write :: Send a message to another user.

X :: Execute the X windows system.
xfd :: Display all the characters in an X font.
xlsfonts :: Server font list displayer for X.
xrdb :: X server resource database utility.
xset :: User preference utility for X.
xterm :: Terminal emulator for X.

yacc :: Short for "yet another compiler-compiler"; generates a parser from a grammar description.
yes :: Repeatedly output a line with all specified STRING(s), or 'y'.
yppasswd :: Changes network password in the NIS database.

zcat :: Prints the uncompressed contents of compressed files to standard output.

Debugging in the Linux environment


Introduction


The Linux environment generally uses the GNU debugger, gdb, from the shell. Gdb lets you see the internal structure of a program, print out variable values, set breakpoints and single-step through source code. It is an extremely powerful tool for fixing problems in program code. In this article, let's discuss how to use it.


The Linux development environment has several debugging alternatives. This article also explores the tools available for debugging applications, ranging from simple print statements to specialized tools (e.g. memory debuggers).


Print statements
Adding printf() statements to your code is a traditional and time-honored way to debug it. The downside is that you need to modify and recompile the code whenever you want more or less debug information.
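As a minimal sketch of this approach (the DEBUG_PRINT macro and the build flag are my own illustration, not from the original article), the print statements can be wrapped in a macro so that a release build compiles them away:

#include <cstdio>

// Compiled in only when DEBUG is defined, e.g. 'g++ -DDEBUG file.cpp -o file'.
#ifdef DEBUG
#define DEBUG_PRINT(...) fprintf(stderr, __VA_ARGS__)
#else
#define DEBUG_PRINT(...)
#endif

int main() {
    int x = 42;
    DEBUG_PRINT("x is %d\n", x);  // appears only in debug builds
    return 0;
}

This still requires a recompile to switch the output on or off, which is exactly the downside noted above.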

Strace utility
Strace outputs all the kernel calls that the application makes and is a great way to find out, for example, which file the program is trying to access and whether it succeeded. For instance, calls to read() and write() will show how much data the program tried to read or write, how much was actually transferred, and the beginning of the data in question. You can use strace without recompiling, and it works on any program you can run.
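Typical invocations look like this (the program name myprog is a placeholder; the flags are standard strace options):

strace ./myprog                     # trace all system calls to stderr
strace -o trace.log ./myprog        # write the trace to a file instead
strace -e trace=open,read ./myprog  # trace only the listed calls
strace -p 1234                      # attach to an already running process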


Ltrace utility
Ltrace outputs all the dynamic library function calls that the application makes. It can also show system calls, like 'strace'.
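Usage mirrors strace (myprog is again a placeholder):

ltrace ./myprog      # show dynamic library calls
ltrace -S ./myprog   # show system calls as well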

Using GDB debugger
With the gdb debugger you can examine all the symbols in the program and the program's runtime state, and follow program function calls. If the trace utilities and the source code don't give you enough information to solve the problem, a debugger is the next step. Gdb is a console tool, but there are some nice debugging GUIs available that work on top of gdb. One such is ddd.


Gdb can help with debugging programs written in C, C++, Fortran, Java, Chill, assembly and Modula-2. You need to have compiled these programs with the GNU Compiler Collection (gcc) tools.

Besides supporting multiple languages gdb also supports multiple hardware architectures,
including several embedded hardware architectures. It’s also possible to compile
a special version of GDB for debugging (Linux) kernel code.

Running gdb
Gdb is run from the shell with the command 'gdb' with the program name as a parameter, for example 'gdb file', or you can use the file command once inside gdb to load a program for debugging, for example 'file file'. Both of these assume you execute the commands from the same directory as the program. Once loaded, the program can be started with the gdb command 'run'.


Preparing for GDB use
You need to compile all the C/C++ code you want to debug with debugging information included in the binary (use '-g' and do not use the '-fomit-frame-pointer' option when compiling the code), and the code must not be stripped of symbols (do not use the '-s' flag in the compilation). Note that all the libraries used by the program should also be compiled this way, as a missing stack frame pointer can confuse GDB.
It's better to use static linking (e.g. the '-static' option in your Makefile), because that way gdb won't have trouble finding the symbols for the libraries the program uses (for example, if your program uses different libraries than the ones your compilation machine normally uses).

Before you can get started, the program you want to debug has to be compiled with debugging information in it. This is so gdb can work out the variables, lines and functions being used. To do this, compile your program under gcc (or g++) with an extra '-g' option:
gcc -g file.cpp -o file
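Putting the earlier advice together, a full debug build might combine these flags ('-O0' additionally disables optimization so variable values stay visible; the file names are placeholders):

g++ -g -O0 -static file.cpp -o file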



Using GDB
There are two ways to use a debugger:
- Using the debugger to examine the code's runtime behavior. You start gdb with the program binary ('gdb <program>') in the program directory. Then you can either run the program inside gdb with 'run <arguments>' or attach to an already running instance of the program with 'attach <pid>'. The latter is handy when you don't have a working gdb inside the debugging environment. You can then set breakpoints, examine the code, variables, stack, etc., and call the program's functions.


- Using the debugger to examine a post-mortem 'core' dump of a process. You start gdb with the program binary and the core dump ('gdb <program> <core file>') in the program code directory. You can examine where the program crashed and what the state of all the program variables was. If your program crashes but doesn't produce a core file, check 'ulimit -a' to see whether your environment allows core files, and fix it with 'ulimit -c unlimited'.
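A typical post-mortem session therefore looks like this (myprog is a placeholder):

ulimit -c unlimited   # allow core files in this shell
./myprog              # the program crashes and dumps core
gdb ./myprog core     # open the binary together with the core file
(gdb) bt              # show where the crash happened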


The downside of the debugger approach is that debug versions of the binaries are large, and statically linked debug binaries are huge. If the target doesn't have enough memory for this, you can use 'gdbserver' and stripped binaries on the target. On the host on which you do the debugging, you use gdb with the 'target remote' option and give the host gdb the binary with all the debugging symbols.
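A sketch of the remote setup (the port and IP address are invented for the example):

On the target:  gdbserver :2345 ./myprog
On the host:    gdb ./myprog-with-symbols
                (gdb) target remote 192.168.0.5:2345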

Short introduction to use of gdb

You get back to the GDB prompt when your program either:
- Crashes.
- Reaches a breakpoint.
- Is suspended, or you otherwise send a signal to the program.

In the latter two cases you can use 'cont' to continue the program's execution.


In the GDB prompt you can do one of the following:
- Set a breakpoint with 'break <function or line number>'. You can delete a breakpoint by saying 'delete <breakpoint number>'.
- Examine the current program execution trace with 'bt', which will show you where the program execution was interrupted (see above), how it got there and what the function arguments were.
- Move up and down in the execution stack frames by typing 'up' or 'down'.
- Examine program state with 'info locals', which shows you the state of the variables in the current context (function), or use 'print <expression>' to show the value of a given variable or function. Any valid C expression can be used, even function calls!
- View your program code with 'list <function>', which will list the code of the given function.
You can step through the program code with the following commands:
- 'step' will execute the current line and go to the next command. If the code line is a function call, step will enter the function.
- 'next' works like 'step', but function calls are executed as a single instruction.
- 'finish' will execute to the end of the current scope (function).
- 'cont' will continue program execution.
Typing return will repeat the previous command. Gdb can also complete function and variable names with the TAB key.
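To tie these commands together, here is a sketch of a short session (the source file and function names are invented for the example):

// example.cpp
#include <cstdio>

int square(int n) {
    return n * n;  // a good place for a breakpoint
}

int main() {
    int total = 0;
    for (int i = 1; i <= 3; i++)
        total += square(i);
    printf("total = %d\n", total);
    return 0;
}

Compile with debug information and step through it:

g++ -g -O0 example.cpp -o example
gdb ./example
(gdb) break square     # stop whenever square() is entered
(gdb) run              # start the program
(gdb) bt               # show how we got here
(gdb) print n          # inspect the argument
(gdb) finish           # run to the end of square()
(gdb) cont             # continue to the next breakpoint hit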

Gdb problems
When optimization is used in code compilation, the variable values shown by gdb may sometimes be valid only for the variables used on the line the program executed last, not for the whole scope in which the variables are declared. The use of inlines, especially in C++ code (e.g. methods with their body in the header file), may confuse gdb so that it shows either a wrong filename or a wrong line for the fault.
In these cases it's better not to optimize the code so much: use only the '-O' optimization flag, and forbid inlining completely with the '-fno-default-inline' compilation flag.

Friday, 4 April 2008

Frequently asked interview questions in Unix

Most commonly asked Questions in Unix.

I bet that anyone who prepares these questions will have won half the interview battle.

1) Advantages/disadvantages of script vs compiled program.

2) Name a replacement for PHP/Perl/MySQL/Linux/Apache and show main differences.

3) Why have you chosen such a combination of products?

4) Differences between the last two MySQL versions. Which one would you choose, and when/why?

5) Main differences between Apache 1.x and 2.x. Why is 2.x not so popular?
Which one would you choose and when/why?

6) Which Linux distros do you have experience with?

7) Which distro do you prefer? Why?

8) Which tool would you use to update Debian / Slackware / RedHat / Mandrake / SuSE?

9) You're asked to write an Apache module. What would you do?

10) Which tool do you prefer for Apache log reports?

11) What does the 'route' command do?

12) Differences between ipchains and iptables.

13) What are eth0, ppp0, wlan0, ttyS0, etc.?

14) What are the different directories in / for?

15) Partitioning scheme for new webserver. Why?

16) What is a Makefile?

17) Could you tell something about the Unix System Kernel?

18) Difference between the fork(), exec() and system() system calls?

19) How can you tell what shell you are running on a UNIX system?

20) What is 'the principle of locality'? What are the conditions for a machine to support demand paging?

21) What are the conditions under which deadlock can occur while swapping processes?

22) What are the processes that are not bothered by the swapper? Give reasons.

23) How does the swapper work?

24) What do you mean by the u-area (user area) or u-block?

25) What is a region?

26) What scheme does the kernel in Unix System V follow while choosing a swap device among multiple swap devices?

27) What is the main goal of memory management?

28) What is the difference between swapping and paging?

29) List the system calls used for process management.

30) How do you change file access permissions?

For more, refer to this site:
http://www.geekinterview.com/Interview-Questions/Operating-System/UNIX

Wednesday, 2 April 2008

UML

Unified Modeling Language?

Here I have tried to explain the term UML in simple words.

In software engineering, whenever a pictorial or graphical representation is required (as when a process document, an application architecture, or a design needs to be elaborated), it is done with the help of UML. Simply defined, the Unified Modeling Language (UML) is a standardized visual specification language for object modeling.

Defined in other terms, UML is a general-purpose modeling language that includes a graphical notation used to create an abstract model of a system, referred to as a UML model.

Going into a bit more detail, UML is officially defined at the Object Management Group (OMG) by the UML metamodel, a Meta-Object Facility (MOF) metamodel. Like other MOF-based specifications, the UML metamodel and UML models may be serialized in XML. UML was designed to specify, visualize, construct, and document software-intensive systems.

To understand more about the language, I have framed and compiled some general FAQs. I believe that just going through these answered questions is enough to develop a basic understanding of UML.

First of all, in layman's terms, the basic query.

What is UML?
- Unified Modeling Language (UML) is the industry-standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems. Using UML, programmers and application architects can make a blueprint of a project, which, in turn, makes the actual software development process easier.


What can I use UML for?
- UML was designed with these primary purposes in mind:
· Business process modeling with use cases
· Class and object modeling
· Component modeling
· Distribution and deployment modeling


What's the actual need for UML? Isn't it enough to write something about our application's design and architecture somewhere in the documentation, without using the pictorial symbols of UML?
- While it's certainly possible to describe interrelated processes and code architecture in words, many people prefer to use a diagram to visualize the relationship of elements to one another. UML is a standard way to create these diagrams. As a result, it makes it easier for programmers and software architects to communicate.


Different types of diagrams available in UML?
- UML provides many different models of a system. The following is a list of them:
1) The Use Case Diagrams - A use case is a description of the system's behavior from a user's viewpoint.
2) The Class Diagrams - "What objects do we need? How will they be related?"
3) Collaboration Diagrams - "How will the objects interact?"
4) Sequence Diagrams - "How will the objects interact?"
5) State Diagrams - "What state should our object be in"
6) Package Diagrams - "How are we going to modularize our development?"
7) Component Diagrams - "How will our software components be related?"
8) Deployment Diagrams - "How will the software be deployed?"



What is a "use case"?
- A complete end-to-end business process that satisfies the needs of a user.


What are different categories of use cases?
- Detail level: High level / Expanded
- Task level: Super / Sub (Abstract; Equal Alternatives; Complete vs. Partial)
- Importance: Primary / Secondary (use Secondary for exceptional processes)
- Abstraction: Essential / Real


What is the difference between a real and essential use case?
- Essential: describes the "essence" of the problem; technology-independent.
- Real: good for GUI design; shows the problem as related to technology decisions.


In a System Sequence Diagram, what is a System Event?
- It is from the expanded use case. It is an actor action the system directly responds to.


Give an example of a situation which a State Diagram could effectively model.
- Think of a cake and its different stages through the baking process: dough, baked, burned.


For what are Operations Contracts written?
- System Events.


In an Operations Contract's postconditions, four types of activities are specified. What are they?
- They are:
· Instances created
· Associations formed
· Associations broken
· Attributes changed



What does an Operations Contract do?
- Provides a snapshot of the system's state before and after a System Event. It is not interested in the event's specific behavior.


What does a Collaboration Diagram (or Sequence Event, depending on the process) model?
- A System Event's behavior.


How does one model a class in a Collaboration Diagram? An instance?
- A box will represent both; however, a class is written as MyClass whereas an instance is written as myInstance:MyClass.


What are the three parts of a class in a Class Diagram?
- Name, Attributes, Methods.


In Analysis, we are interested in documenting concepts within the relevant problem domain. What is a concept?
- A person, place, thing, or idea that relates to the problem domain. They are candidates for objects.


Does a concept HAVE to become a class in Design?
- No.


In a Class Diagram, what does a line with an arrow from one class to another denote?
- Attribute visibility.


What are the four types of visibility between objects?

- They are:
· Local
· Parameter
· Attribute
· Global


When do you use inheritance as opposed to aggregation?
- Inheritance models an "is a" relationship; an aggregation is a "has a" relationship, and it is represented in the UML by a clear diamond.


When would I prefer to use composition rather than aggregation?
- Composition is a stronger form of aggregation. The object which is "contained" in another object is expected to live and die with the object which "contains" it. Composition is represented in the UML by a darkened diamond.
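As a hedged C++ illustration (the class names are invented for the example), composition embeds the part inside the whole so they live and die together, while aggregation merely refers to an object that is owned elsewhere:

class Engine { };
class Driver { };

// Composition: the Engine member is constructed and destroyed
// together with the Car that contains it (darkened diamond in UML).
class Car {
    Engine engine;
};

// Aggregation: the Team refers to a Driver that exists independently
// and may outlive the Team (clear diamond in UML).
class Team {
    Driver* driver;  // not owned, just a "has a" reference
public:
    explicit Team(Driver* d) : driver(d) {}
};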

Is the UML a process, method, or notation?
- It is a notation. A process is Objectory, Booch, OMT, or the Unified Process. A process and a notation together make an OO method.



Friday, 28 March 2008

Design Patterns in OOPs

Introduction:


To start with design patterns, the prerequisite is that one should be familiar with object-oriented concepts.

Here, we will try to simplify the term and purpose of the Design Pattern.

The first normal and very common query could be: what are design patterns, and why are they used?

In simple layman's terms, a design pattern is a general reusable solution to a commonly occurring problem in software design.

Defining Design Patterns more descriptively...

In software engineering, a design pattern is a general repeatable solution to a commonly occurring problem in software design. A design pattern isn't a finished design that can be transformed directly into code.

It is simply a description of how to solve a problem, one that can be used in many different situations.

Design patterns are used to speed up the development process by providing tested, proven development methods or paradigms. Effective software design requires considering issues that may not become visible until later in the implementation.

Reusing design patterns helps to prevent subtle issues that can cause major problems, and it also improves code readability for coders and architects who are familiar with the patterns.
Digging deeper, design patterns are categorized into three different groups:

Creational Patterns
Structural Patterns
Behavioral Patterns.


Creational Patterns
These design patterns are all about class instantiation.

They can be further divided into class-creation patterns and object-creation patterns.

Abstract Factory : Creates an instance of several families of classes

Factory : Creates an instance of several derived classes
Builder : Separates object construction from its representation

Prototype : A fully initialized instance to be copied or cloned
Singleton : A class of which only a single instance can exist
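As a quick taste of the creational group, here is a minimal Singleton sketch in C++ (the Logger class is invented for the example); a fuller Factory example appears later in this post:

class Logger {
public:
    // The single instance is created on first use,
    // and no other instance can ever exist.
    static Logger& instance() {
        static Logger theInstance;
        return theInstance;
    }
    void log(const char* msg) { /* write msg somewhere */ }
private:
    Logger() {}                        // constructor is private
    Logger(const Logger&);             // copying is forbidden
    Logger& operator=(const Logger&);  // (declared, never defined)
};

// Usage: Logger::instance().log("starting up");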

Structural patterns

These design patterns are all about class and object composition. Structural class-creation patterns use inheritance to compose interfaces. Structural object-patterns define ways to compose objects to obtain new functionality.

Adapter : Match interfaces of different classes
Bridge : Separates an object’s interface from its implementation
Decorator : Add responsibilities to objects dynamically
Facade : A single class that represents an entire subsystem
Proxy : An object representing another object

Behavioral patterns
These design patterns are all about communication between a class's objects. Behavioral patterns are those patterns that are most specifically concerned with communication between objects.

Chain of responsibility : A way of passing a request between a chain of objects
Command : Encapsulate a command request as an object
Interpreter : A way to include language elements in a program
Iterator : Sequentially access the elements of a collection
Observer : A way of notifying change to a number of classes
State : Alter an object's behavior when its state changes
Visitor : Defines a new operation on a class without changing it




Now we will pick a popular and commonly used design pattern and explain it in detail, starting with the Factory Pattern.


What is a "factory" and why would you want to use one?
A factory, in this context, is a piece of software that implements one of the "factory" design patterns introduced in the GoF book. In general, a factory implementation is useful when you need one object to control the creation of and/or access to other objects. By using a factory in RMI, you can reduce the number of objects that you need to register with the RMI registry.

Examples of Factories in the Real World:
The Bank
The Library
These two examples are sufficient to understand the actual process of the factory patterns. They are taken from real life.

Let's take the example commonly illustrated in the GoF textbook: widgets in a GUI environment.
This example explains how an Abstract Factory can be used to create widgets for a GUI environment. By designing the client to use the Abstract Factory interface, different factories can be created to generate different sets of widgets without requiring changes to the clients. This example is not intended to illustrate a useful implementation of Abstract Factory, as Java's AWT and Swing do the work for you, but it does show how easily design patterns can be implemented in RMI.
One other point to notice is how the pattern is split between client and server. This split was arbitrarily chosen, and with RMI's object-oriented nature you could decide to make the split in the pattern elsewhere.

In general, this pattern consists of two class hierarchies, one of Products and one of Creators. Each ConcreteCreator class creates instances of specific ConcreteProduct classes using a factory method.

The factory pattern can help solve application issues. For example, developers often must respond to users differently depending on each user's machine, which creates hardware-support complications; the factory pattern can streamline handling them. The concept is simple, but the solutions created with the factory pattern are powerful.

For more understanding, let's take one more example.
Communications breakdown
Consider the example of a data collection application where various field devices supply data to the application via TCP/IP sockets. The application was originally written to communicate with one device but was expanded when the company produced a newer version. Unfortunately, this new hardware did not speak the same language as the previous version. Marketing required the application to support both hardware versions, since customers might buy new units and install them in tandem with older units. The factory pattern eased the burden of supporting multiple device types.

In short, an effective explanation of the factory pattern is:
=> if we have a superclass and n subclasses, and based on the data provided we have to return the object of one of the subclasses, we use a factory pattern.
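A minimal C++ sketch of that rule (Shape, Circle, and Square are invented for the example):

#include <string>

// The superclass.
class Shape {
public:
    virtual ~Shape() {}
    virtual double area() const = 0;
};

// Two of the n subclasses.
class Circle : public Shape {
    double r;
public:
    explicit Circle(double r) : r(r) {}
    double area() const { return 3.14159265 * r * r; }
};

class Square : public Shape {
    double s;
public:
    explicit Square(double s) : s(s) {}
    double area() const { return s * s; }
};

// The factory: returns one of the subclasses based on the data provided.
// The caller owns the returned object and must delete it.
Shape* makeShape(const std::string& kind, double size) {
    if (kind == "circle") return new Circle(size);
    if (kind == "square") return new Square(size);
    return 0;  // unknown kind
}

// Usage:
//   Shape* s = makeShape("circle", 2.0);
//   double a = s->area();
//   delete s;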

Now, we will focus on the positive and negative points of using the factory pattern.
Positives:
- Eliminates the need to bind application-specific classes into your code.
- Provides hooks for subclassing. Creating objects inside a class with a factory method is always more flexible than creating an object directly. This method gives subclasses a hook for providing an extended version of an object.
- Connects parallel hierarchies. A factory method localises knowledge of which classes belong together. Parallel class hierarchies result when a class delegates some of its responsibilities to a separate class.
Negatives:
- Clients might have to subclass the Creator class just to create a particular Concrete object.
