Saturday, 31 January 2009
जो बीत गई सो बात गई (What has passed is past; a poem by Harivansh Rai Bachchan)
There was a star in your life;
granted, it was dearly loved.
When it set, it set for good.
Look at the courtyard of the sky:
how many of its stars have fallen,
how many of its dear ones lost.
But tell me, over fallen stars
when does the sky ever mourn?
What has passed is past...
There was a blossom in your life;
you devoted yourself to it daily.
When it withered, it withered for good.
Look at the breast of the garden:
how many of its buds have dried,
and those that wilted never bloomed again.
But tell me, over withered flowers
when does the garden ever mourn?
What has passed is past...
There was a cup of wine in your life;
you gave it your body and soul.
When it broke, it broke for good.
Look at the courtyard of the tavern:
how many cups are jostled there,
fall, and mingle with the clay,
and those that fall never rise again.
But tell me, over broken cups when does the tavern ever grieve?
What has passed is past...
Moulded of soft clay as they are,
wine pitchers are bound to crack;
come with only a brief life,
cups are bound to break.
And yet inside the tavern still
there are pitchers of wine, cups of wine;
those smitten with its intoxication
go on draining the wine.
A raw drinker is he
whose heart clings to pitcher and cup;
one who has burned with the true wine,
when does he weep, when does he wail?
What has passed is past...
Thursday, 8 January 2009
Thursday, 9 October 2008
Fingerprint biometrics
A fingerprint is made of a number of ridges and valleys on the surface of the finger. Ridges are the upper skin-layer segments of the finger and valleys are the lower segments. The ridges form so-called minutia points: ridge endings (where a ridge ends) and ridge bifurcations (where a ridge splits in two). Many types of minutiae exist, including dots (very small ridges), islands (ridges slightly longer than dots, occupying a middle space between two temporarily divergent ridges), ponds or lakes (empty spaces between two temporarily divergent ridges), spurs (notches protruding from a ridge), bridges (small ridges joining two longer adjacent ridges), and crossovers (two ridges that cross each other).
The uniqueness of a fingerprint can be determined by the pattern of ridges and furrows as well as the minutiae points. There are five basic fingerprint patterns: arch, tented arch, left loop, right loop and whorl. Loops make up 60% of all fingerprints, whorls account for 30%, and arches for 10%.
Issues with fingerprint systems
The tip of the finger is a small area from which to take measurements, and ridge patterns can be affected by cuts, dirt, or even wear and tear. Acquiring high-quality images of distinctive fingerprint ridges and minutiae is a complicated task.
People with few or no minutia points (surgeons, who often wash their hands with strong detergents; builders; people with certain skin conditions) cannot enroll in or use the system. The number of minutia points can be a limiting factor for the security of the algorithm. Results can also be confused by false minutia points (areas of obfuscation that appear due to low-quality enrollment, imaging, or fingerprint ridge detail).
Note: There is some controversy over the uniqueness of fingerprints. The quality of partial prints is, however, the limiting factor. As the number of defining points of the fingerprint becomes smaller, the degree of certainty of identity declines. There have been a few well-documented cases of people being wrongly accused on the basis of partial fingerprints.
Benefits of fingerprint biometric systems
Easy to use
Cheap
Small size
Low power
Non-intrusive
Large database already available
Applications of fingerprint biometrics
Fingerprint sensors are best for devices such as cell phones, USB flash drives, notebook computers and other applications where size, cost and low power are key requirements. Fingerprint biometric systems are also used for law enforcement, background searches to screen job applicants, healthcare and welfare.
Fingerprints are usually considered to be unique, with no two fingers having the exact same dermal ridge characteristics.
How fingerprint biometrics works
The main technologies used to capture the fingerprint image with sufficient detail are optical, silicon, and ultrasound.
There are two main algorithm families to recognize fingerprints:
Minutia matching compares specific details within the fingerprint ridges. At registration (also called enrollment), the minutia points are located, together with their relative positions to each other and their directions. At the matching stage, the fingerprint image is processed to extract its minutia points, which are then compared with the registered template.
Pattern matching compares the overall characteristics of the fingerprints, not only individual points. Fingerprint characteristics can include sub-areas of certain interest including ridge thickness, curvature, or density. During enrollment, small sections of the fingerprint and their relative distances are extracted from the fingerprint. Areas of interest are the area around a minutia point, areas with low curvature radius, and areas with unusual combinations of ridges.
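A minimal sketch of the minutia-matching idea described above, assuming the two prints are already aligned and representing each minutia as an (x, y, angle) triple; real matchers must also handle rotation, translation, and skin distortion, and the sample points below are invented:

```python
import math

def minutiae_match_score(template, candidate, dist_tol=10.0, angle_tol=0.35):
    """Fraction of template minutiae that have a close counterpart in the
    candidate. Each minutia is an (x, y, theta) triple: position plus ridge
    direction in radians. This naive version assumes pre-aligned prints.
    """
    matched = 0
    used = set()
    for (x1, y1, t1) in template:
        for j, (x2, y2, t2) in enumerate(candidate):
            if j in used:
                continue
            close = math.hypot(x1 - x2, y1 - y2) <= dist_tol
            # compare ridge directions on a circle, not as raw numbers
            dtheta = abs((t1 - t2 + math.pi) % (2 * math.pi) - math.pi)
            if close and dtheta <= angle_tol:
                matched += 1
                used.add(j)
                break
    return matched / max(len(template), 1)

template = [(10, 20, 0.1), (40, 55, 1.2), (70, 30, 2.0)]
candidate = [(12, 21, 0.15), (41, 53, 1.25), (200, 200, 0.0)]
print(minutiae_match_score(template, candidate))  # 2 of 3 points match
```

A real system would compare this score against a threshold chosen to balance false accepts against false rejects.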
Biometrics
Biometrics is a field of security and identification technology based on the measurement of unique physical characteristics such as fingerprints, retinal patterns, and facial structure. To verify an individual's identity, biometric devices scan certain characteristics and compare them with a stored entry in a computer database. While the technology goes back years and has been used in highly sensitive institutions such as defense and nuclear facilities, the proliferation of electronic data exchange generated new demand for biometric applications that can secure electronically stored data and online transactions.
Biometrics is the practice of automatically identifying people by one or more physical characteristics.
TYPES OF BIOMETRIC SYSTEMS
FINGERPRINTS.
Fingerprint-based biometric systems scan the dimensions, patterns, and topography of fingers, thumbs, and palms. The most common biometric in forensic and governmental databases, fingerprints contain up to 60 possibilities for minute variation, and extremely large, increasingly integrated networks of fingerprint databases already exist. The largest of these is the Federal Bureau of Investigation's (FBI) Automated Fingerprint Identification System, with more than 630 million fingerprint images.
FACIAL RECOGNITION.
Facial recognition systems vary according to the features they measure. Some look at the shadow patterns under a set lighting pattern, while others scan heat patterns or thermal images using an infrared camera that illuminates the eyes and cheekbones. These systems are powerful enough to pick out the minutest differences in facial patterns, even between identical twins. The hardware for facial recognition systems is relatively inexpensive, and is increasingly installed in computer monitors.
EYE SCANS.
There are two main features of the eye that are targeted by biometric systems: the retina and the iris. Each contains more points of identification than a fingerprint. Retina scanners trace the pattern of blood vessels behind the retina by quickly flashing an infrared light into the eye. Iris scanners create a unique biological bar code by scanning the eye's distinctive color patterns. Eye scans tend to occupy less space in a computer and thus operate relatively quickly, although some users are squeamish about having beams of light shot into their eyes.
VOICE VERIFICATION.
Although voices can sound similar and can be consciously altered, the topography of the mouth, teeth, and vocal cords produces distinct pitch, cadence, tone, and dynamics that give away would-be impersonators. Widely used in phone-based identification systems, voice-verification biometrics also is used with personal computers.
HAND GEOMETRY.
Hand-geometry biometric systems take two infrared photographs—one from the side and one from above—of an individual's hand. From these images the system measures up to 90 different characteristics, such as height, width, thickness, finger shape, and joint positions, and compares them with stored data.
KEYSTROKE DYNAMICS.
A biometric system that is tailor-made for personal computers, keystroke-dynamic biometrics measures unique patterns in the way an individual uses a keyboard—such as speed, force, the variation of force on different parts of the keyboard, and multiple-key functions—and exploits them as a means of identification.
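As a toy illustration of the keystroke-dynamics idea (not any product's actual algorithm, and with invented timing numbers), one can compare two typing samples of the same phrase by their inter-key intervals:

```python
def keystroke_distance(profile, attempt):
    """Compare two typing samples of the same key sequence by their
    inter-key timing intervals (seconds); lower distance means a more
    similar typist. A real system would also use key hold times and
    force, and a statistical model rather than Euclidean distance.
    """
    if len(profile) != len(attempt):
        raise ValueError("samples must cover the same key sequence")
    return sum((p - a) ** 2 for p, a in zip(profile, attempt)) ** 0.5

# enrolled timing profile vs. two login attempts (hypothetical numbers)
enrolled = [0.12, 0.25, 0.18, 0.30]
genuine  = [0.13, 0.24, 0.19, 0.29]
impostor = [0.30, 0.10, 0.40, 0.15]

print(keystroke_distance(enrolled, genuine)
      < keystroke_distance(enrolled, impostor))  # True
```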
These topics are indeed very interesting, and it would be better if I explained each of these types in detail to all of you...
Tuesday, 13 May 2008
BitTorrent (Protocol)
BitTorrent is a protocol designed for transferring files. It is peer-to-peer in nature, as users connect to each other directly to send and receive portions of the file.
BitTorrent is a method of distributing large amounts of data widely without the original distributor incurring the entire costs of hardware, hosting, and bandwidth resources.
When data is distributed using the BitTorrent protocol, each recipient supplies pieces of the data to newer recipients, reducing the cost and burden on any given individual source, providing redundancy against system problems, and reducing dependence on the original distributor.
There is, however, a central server (called a tracker) which coordinates the action of all the peers. The tracker only manages connections; it does not have any knowledge of the contents of the files being distributed, and therefore a large number of users can be supported with relatively limited tracker bandwidth. The key philosophy of BitTorrent is that users should upload (transmit outbound) at the same time they are downloading (receiving inbound). In this manner, network bandwidth is utilized as efficiently as possible. BitTorrent is designed to work better as the number of people interested in a certain file increases, in contrast to other file transfer protocols.
The most common method by which files are transferred on the Internet is the client-server model. A central server sends the entire file to each client that requests it -- this is how both HTTP and FTP work. The clients only speak to the server, and never to each other. The main advantages of this method are that it's simple to set up and that files are usually available, since the servers tend to be dedicated to the task of serving, and are always on and connected to the Internet. However, this model has a significant problem with files that are large or very popular, or both.
Namely, it takes a great deal of bandwidth and server resources to distribute such a file, since the server must transmit the entire file to each client. Perhaps you have tried to download a demo of a newly released game, or CD images of a new Linux distribution, and found that all the servers report "too many users," or that there is a long queue you have to wait through. The concept of mirrors partially addresses this shortcoming by distributing the load across multiple servers, but it requires a lot of coordination and effort to set up an efficient network of mirrors, and it's usually only feasible for the busiest of sites.
Another method of transferring files has become popular recently: the peer-to-peer network, in systems such as Kazaa, eDonkey, Gnutella, Direct Connect, etc. In most of these networks, ordinary Internet users trade files by directly connecting one-to-one. The advantage here is that files can be shared without having access to a proper server, and because of this there is little accountability for the contents of the files. Hence, these networks tend to be very popular for illicit files such as music, movies, pirated software, etc. Typically, a downloader receives a file from a single source; however, newer versions of some clients allow downloading a single file from multiple sources for higher speeds.
A BitTorrent client is any program that implements the BitTorrent protocol. Each client is capable of preparing, requesting, and transmitting any type of computer file over a network, using the protocol. A peer is any computer running an instance of a client.
To share a file or group of files, a peer first creates a small file called a "torrent" (e.g. MyFile.torrent). This file contains metadata about the files to be shared and about the tracker, the computer that coordinates the file distribution. Peers that want to download the file first obtain a torrent file for it, and connect to the specified tracker, which tells them from which other peers to download the pieces of the file.
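The metadata in a torrent file is stored in a simple encoding called bencoding. As a rough illustration (a real .torrent also carries the piece length and piece hashes, and the tracker URL below is hypothetical), a minimal bencode decoder might look like this:

```python
def bdecode(data, i=0):
    """Decode one bencoded value from bytes, returning (value, next_index).

    Bencoding is the format used by .torrent files: integers as i<num>e,
    byte strings as <len>:<bytes>, lists as l...e, dictionaries as d...e.
    """
    c = data[i:i + 1]
    if c == b"i":                      # integer: i42e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":                      # list: l ... e
        i += 1
        out = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            out.append(item)
        return out, i + 1
    if c == b"d":                      # dictionary: d ... e
        i += 1
        out = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            val, i = bdecode(data, i)
            out[key] = val
        return out, i + 1
    colon = data.index(b":", i)        # byte string: 4:spam
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

meta, _ = bdecode(
    b"d8:announce30:http://tracker.example.com/ann"
    b"4:infod4:name10:MyFile.txtee")
print(meta[b"announce"])  # b'http://tracker.example.com/ann'
```

The `announce` key is how a client finds the tracker, and the `info` dictionary describes the files to be shared.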
Though both ultimately transfer files over a network, a BitTorrent download differs from a classic full-file HTTP request in several fundamental ways:
BitTorrent makes many small data requests over different TCP sockets, while web browsers typically make a single HTTP GET request over a single TCP socket. BitTorrent downloads pieces in a random or "rarest-first"[2] order that ensures high availability, while HTTP downloads sequentially. Taken together, these differences allow BitTorrent to achieve much lower cost, much higher redundancy, and much greater resistance to abuse or to "flash crowds" than a regular HTTP server. However, this protection comes at a cost: downloads can take time to rise to full speed, because it may take time for enough peer connections to be established and for a node to receive sufficient data to become an effective uploader. As such, a typical BitTorrent download will gradually rise to very high speeds, and then slowly fall back down toward the end of the download. This contrasts with an HTTP server that, while more vulnerable to overload and abuse, rises to full speed very quickly and maintains this speed throughout.
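The rarest-first idea can be sketched as follows; this is an illustrative sketch, not the actual selection logic of any particular client, and the example swarm data is made up:

```python
import random
from collections import Counter

def pick_piece(needed, peer_bitfields, rng=random):
    """Rarest-first piece selection: among pieces we still need that at
    least one connected peer has, choose one with the fewest copies in
    the swarm (ties broken randomly, so downloaders don't all converge
    on the same piece).
    """
    counts = Counter()
    for bitfield in peer_bitfields:     # the set of pieces each peer holds
        for piece in bitfield:
            if piece in needed:
                counts[piece] += 1
    if not counts:
        return None                     # nothing useful available yet
    rarest = min(counts.values())
    candidates = [p for p, c in counts.items() if c == rarest]
    return rng.choice(candidates)

needed = {0, 1, 2, 3}
peers = [{0, 1, 2}, {1, 2}, {2, 3}, {2}]
# pieces 0 and 3 each exist on only one peer, so one of them goes first
print(pick_piece(needed, peers) in {0, 3})  # True
```

Fetching the rarest pieces first keeps every piece well replicated, so the file survives peers leaving the swarm.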
Monday, 12 May 2008
The Types of Caching in ASP.NET
The main benefits of caching are performance-related: operations like accessing database information can be among the most expensive in an ASP.NET page's life cycle.
If the database information is fairly static, it can be cached.
When information is cached, it stays cached indefinitely, until some relative time, or until some absolute time. Most commonly, information is cached for a relative time frame. That is, our database information may be fairly static, updated just a few times a week. Therefore, we might want to invalidate the cache every other day, meaning that every other day the cached content is rebuilt from the database.
Caching in classic ASP was a bit of a chore, but it is quite easy in ASP.NET. There are a number of classes in the .NET Framework designed to aid with caching information. In this article, I will explain how .NET supports caching and explain in detail how to properly incorporate each supported method into Web-based applications.
Caching Options in ASP.NET
ASP.NET supports three types of caching for Web-based applications:
Page Level Caching (called Output Caching)
Page Fragment Caching (often called Partial-Page Output Caching)
Programmatic or Data Caching
Output Caching:
Caches the output from an entire page and returns it for future requests instead of re-executing the requested page.
Fragment Caching:
Caches just a part of a page which can then be reused even while other parts of the page are being dynamically generated.
Data Caching:
Programmatically caches arbitrary objects for later reuse without re-incurring the overhead of creating them.
In Detail:
Output Caching
Output caching is the simplest of the caching options offered by ASP.NET. It is useful when an entire page can be cached as a whole and is analogous to most of the caching solutions that were available under classic ASP. It takes a dynamically generated page and stores the HTML result right before it is sent to the client. Then it reuses this HTML for future requests bypassing the execution of the original code.
Telling ASP.NET to cache a page is extremely simple. You simply add the OutputCache directive to the page you wish to cache. <%@ OutputCache Duration="30" VaryByParam="none" %>
The resulting caching is similar to the caching done by browsers and proxy servers, but it has one extremely important difference: you can tell the caching engine which parameters to the page affect the output, and it will cache separate versions based on the parameters you specify. This is done using the VaryByParam attribute of the OutputCache directive.
This is illustrated by a very simple example of output caching:
<%@ Page Language="VB" %>
<%@ OutputCache Duration="30" VaryByParam="test" %>
<%= Now() %>
This page will cache its result for 30 seconds, keeping a separate cached version for each distinct value of the "test" parameter. During that time, requests for the page are served from the cache.
Fragment Caching
Sometimes it's not possible to cache an entire page. For example, many shopping sites like to greet their users by name. It wouldn't look very good if you went to a site and, instead of using your name to greet you, it used mine! In the past this often meant that caching wasn't a viable option for these pages. ASP.NET handles this with what it calls fragment caching.
More often than not, it is impractical to cache entire pages. For example, you may have some content on your page that is fairly static, such as a listing of current inventory, but you may have other information, such as the user's shopping cart, or the current stock price of the company, that you wish to not be cached at all. Since Output Caching caches the HTML of the entire ASP.NET Web page, clearly Output Caching cannot be used for these scenarios: enter Partial-Page Output Caching.
Partial-Page Output Caching, or page fragment caching, allows specific regions of pages to be cached. ASP.NET provides a way to take advantage of this powerful technique, requiring that the part(s) of the page you wish to have cached appear in a User Control. One way to specify that the contents of a User Control should be cached is to supply an OutputCache directive at the top of the User Control. That's it! The content inside the User Control will now be cached for the specified period, while the ASP.NET Web page that contains the User Control will continue to serve dynamic content. (Note that for this you should not place an OutputCache directive in the ASP.NET Web page that contains the User Control - just inside of the User Control.)
Data Caching
This is the most powerful of the caching options available in ASP.NET. Using data caching you can programmatically cache anything you want for as long as you want. The caching system exposes itself in a dictionary type format meaning items are stored in name/value pairs. You cache an item under a certain name and then when you request that name you get the item back. It's similar to an array or even a simple variable.
In addition to just placing an object into the cache you can set all sorts of properties. The object can be set to expire at a fixed time and date, after a period of inactivity, or when a file or other object in the cache is changed.
The main thing to watch out for with data caching is that items you place in the cache are not guaranteed to be there when you want them back. While it does add some work (you always have to check that your object exists after you retrieve it), this scavenging really is a good thing. It gives the caching engine the flexibility to dispose of things that aren't being used or dump parts of the cache if the system starts running out of memory.
Sometimes, more control over what gets cached is desired. ASP.NET provides this power and flexibility by providing a cache engine. Programmatic or data caching takes advantage of the .NET Runtime cache engine to store any data or object between responses. That is, you can store objects into a cache, similar to the storing of objects in Application scope in classic ASP. (As with classic ASP, do not store open database connections in the cache!)
Realize that this data cache is kept in memory and "lives" as long as the host application does. In other words, when the ASP.NET application using data caching is restarted, the cache is destroyed and recreated. Data Caching is almost as easy to use as Output Caching or Fragment caching: you simply interact with it as you would any simple dictionary object. To store a value in the cache, use syntax like this:
Cache["Nikky"] = bar; // C#
To retrieve a value, simply reverse the syntax like this:
bar = Cache["Nikky"]; // C#
Note that after you retrieve a cache value in the above manner, you should first verify that the value is not null before doing anything with it. Since Data Caching uses an in-memory cache, there are times when cache elements may need to be evicted. That is, if there is not enough memory and you attempt to insert something new into the cache, something else has to go! The Data Cache engine does all of this scavenging for you behind the scenes, of course. However, don't forget that you should always check that a cached value is there before using it. This is fairly simple to do - just check that the value isn't null/Nothing. If it is, you need to dynamically retrieve the object and restore it into the cache.
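The check-then-repopulate pattern described above is not specific to ASP.NET. The following toy cache (a Python sketch purely for illustration, with made-up data standing in for a real database call) shows absolute expiration and the check on retrieval:

```python
import time

class SimpleCache:
    """A toy in-memory cache with absolute expiration, illustrating the
    check-then-repopulate pattern. ASP.NET's Cache object adds sliding
    expiration, dependencies, and automatic scavenging on top of this.
    """
    def __init__(self):
        self._items = {}

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:   # expired: evict and report a miss
            del self._items[key]
            return None
        return value

    def put(self, key, value, ttl_seconds):
        self._items[key] = (value, time.monotonic() + ttl_seconds)

cache = SimpleCache()

def get_inventory():
    # check the cache first, and rebuild the entry on a miss
    items = cache.get("inventory")
    if items is None:                        # not there (or expired)...
        items = ["widget", "gadget"]         # ...pretend this hits the database
        cache.put("inventory", items, ttl_seconds=0.05)
    return items

print(get_inventory())  # ['widget', 'gadget']
```

The caller never assumes the entry is present; every read goes through the miss-and-rebuild path, which is exactly the null check the article insists on.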
Thursday, 8 May 2008
'A Leader Should Know How to Manage Failure'
Kalam was asked:
Could you give an example, from your own experience, of ' How Leaders Should Manage Failure' ?
Kalam answered:
Let me tell you about my experience. In 1973 I became the project director of India's satellite launch vehicle program, commonly called the SLV-3. Our goal was to put India's "Rohini" satellite into orbit by 1980. I was given funds and human resources -- but was told clearly that by 1980 we had to launch the satellite into space. Thousands of people worked together in scientific and technical teams towards that goal.
By 1979 -- I think the month was August -- we thought we were ready. As the project director, I went to the control center for the launch. At four minutes before the satellite launch, the computer began to go through the checklist of items that needed to be checked. One minute later, the computer program put the launch on hold; the display showed that some control components were not in order. My experts -- I had four or five of them with me -- told me not to worry; they had done their calculations and there was enough reserve fuel. So I bypassed the computer, switched to manual mode, and launched the rocket. In the first stage, everything worked fine. In the second stage, a problem developed. Instead of the satellite going into orbit, the whole rocket system plunged into the Bay of Bengal. It was a big failure.
That day, the chairman of the Indian Space Research Organization, Prof. Satish Dhawan, had called a press conference. The launch was at 7:00 am, and the press conference -- where journalists from around the world were present -- was at 7:45 am at ISRO's satellite launch range in Sriharikota [in Andhra Pradesh in southern India]. Prof. Dhawan, the leader of the organization, conducted the press conference himself. He took responsibility for the failure -- he said that the team had worked very hard, but that it needed more technological support. He assured the media that in another year, the team would definitely succeed. Now, I was the project director, and it was my failure, but instead, he took responsibility for the failure as chairman of the organization.
The next year, in July 1980, we tried again to launch the satellite -- and this time we succeeded. The whole nation was jubilant. Again, there was a press conference. Prof. Dhawan called me aside and told me, "You conduct the press conference today." I learned a very important lesson that day. When failure occurred, the leader of the organization owned that failure. When success came, he gave it to his team.
The best management lesson I have learned did not come to me from reading a book; it came from that experience.