Online Earning Sources (Without Investment)

Sunday, August 30, 2009

Three ways to gain programming experience

Author: Justin James 

Justin James offers advice to a reader who can't find work because he has very little on-the-job experience. Check out these recommendations for picking up programming experience, sometimes even without having a job in the field.

—————————————————————————————

A TechRepublic member is trapped in the chicken/egg situation that far too many entry-level IT programmers find themselves in: Businesses do not like to hire people without experience, and many businesses are not willing to train. If so many companies aren't open to hiring people without experience, how does someone get experience? Unfortunately, this scenario is a major issue for many IT pros.

In my long-running, back-and-forth discussion with this member, here are three ways I suggested that he kick his career into high gear.
#1: Work for free (or close to it)

While the corporate world may not always be eager to hire people with little or no experience, the non-profit world is often delighted (or at least willing) to take volunteers with little or no experience. I got my start as a programmer in high school by volunteering for a local home for developmentally disabled adults. I worked on Excel spreadsheets to manage their finances, I put together a Web site for them, and so on. Was it glamorous? Heck, no. I was working for free on my afternoons and weekends. The only perk was that the place had a stocked pantry that I could hit whenever I wanted. Aside from the emotional satisfaction of doing something positive for the community, it gave me experience that I could put on a resume, and it gave me a reference. Some non-profits will be able to pay you a small amount of money.

And there are plenty of open source projects that can use some help. Or, you could pick up an "abandoned" open source project and revive it. Open source work is a great resume builder.

If you can't find a local charity or non-profit, maybe you can work for family. Perhaps a relative has a business that needs some programming work. Offer to do it for free, and I bet that you will find that Uncle Jimmy or Aunt Betty would be delighted to have you on the team.
#2: Work like a dog


If you want to get ahead, you're going to have to hustle; I haven't met any developers who were handed opportunities on a silver platter. I suppose a few developers got lucky, and maybe a relative hired them at a very nice salary right out of school. And a few other developers managed to get great internships that led to other good opportunities. But for the vast majority of the people currently in college or just out of college, the only way to differentiate yourself and get the experience is to work, work, work. Period.

Your boss probably won't let you spend huge amounts of time writing code instead of manning the help desk. So, if you want to turn that help desk job into experience developing software, you're going to have to make the time. Code through lunch break? Check. Work after hours? Check. Plan and develop at home? Check.

I know, I know… working for free and working more than what is expected of you doesn't sound like much fun. It could be worse, though. Ever look into what doctors do during their residency (not to mention their pay)? Think of this period as your residency. You're going to bust your buns for a few months or years to get some experience, and your next job, though it won't be any easier, will likely pay better.

There are ways to get experience and get paid; the trick is to sneak in through the "back door" of employment. For example, I had a job where I was doing network management and monitoring. It had been a few years since I had been a professional programmer, and I knew I wanted to get back to it. But between the fact that most of my experience was in Perl (which was fairly dead by that point), and the years since I had been programming, I knew I needed to freshen my experience before I would be employable. So what did I do? I started writing applications to help my department in my free time; on occasion, I would even write code while not on the clock — all to get some experience under my belt and a reference.

Maybe you can't get a job as a developer, but you might be able to get a job as, say, a desktop technician or in the help desk. From there, you can start flexing your coding muscles and either build up a good resume and leave or get promoted. In fact, working at a help desk or as a desktop technician (or a "computer operator") is one of the oldest ways of getting your feet wet in this industry.
#3: Work at home

Maybe you can't find anyone willing to let you code for free. Perhaps there is no way that you are able to fit programming into your nonprogramming job (such as an hourly worker who can't get authorization for overtime). That's where your home comes into play. If all else fails (or to supplement your existing efforts), do some work at home. Find an application you really like and write your own version of it. Or, think of an application you always wish you had and write it.

When you work at home, try to emulate software development in professional environments as much as possible. Write a project plan, create unit tests, set up a nightly build, and so on. I guarantee that you will become a better programmer for it, and you'll have something to show prospective employers, which is actually quite important.
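Unit tests are the easiest of these habits to start with. Here is a minimal sketch using Python's standard unittest module; title_case is a made-up helper standing in for whatever your home project actually does:

```python
import unittest

def title_case(s):
    """Hypothetical helper from a home project: capitalize each word."""
    return " ".join(word.capitalize() for word in s.split())

class TestTitleCase(unittest.TestCase):
    # Writing tests like these is the professional habit worth practicing.
    def test_basic(self):
        self.assertEqual(title_case("hello world"), "Hello World")

    def test_empty_string(self):
        self.assertEqual(title_case(""), "")
```

Running `python -m unittest` against the file executes every `test_` method, and a nightly build can be as simple as a scheduled job that runs that command and records the failures.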

I have never worked somewhere where I could take my labor and show it to potential employers. Not only would it violate my employment contract, but it would often violate my employer's contracts with their customers. But when I do something at home on my own time and on my own dime, it becomes something I can show to potential employers. For example, I wanted to get a job doing more Web development and less Webmaster work, so I put together a Flash presentation that had highlights from my resume, quotes from my references, and so on. I even packaged it in a nice CD case and gave it an Autorun launcher, so potential employers could just pop the CD in. The CD got me a job in the middle of the dot-com bust in an instant. It was a real game changer.

As someone who has been on both sides of the interview table many times, I can tell you that it's impressive to have a candidate come in and talk about work they're doing on their own. Does it get the same level of consideration as paid, professional work? Sometimes. From what I can tell, doing "real work" on a credible open source application is just as good as a paid job; the only time it can hurt you is if the application is awful, and you show it to the interviewer anyway. So, yes, this is another "work without pay" suggestion, but it's often the only differentiator between you and the two dozen other entry-level developers who apply for the job.


Wednesday, August 26, 2009

10 habits of superstitious users

Author: Jaime Henriquez

For some users, the computer is unfathomable - leading them to make bizarre assumptions about technology and the effect of their own actions. Here are a few irrational beliefs such users develop.


Superstition: A belief, not based on human reason or scientific knowledge, that future events may be influenced by one's behavior in some magical or mystical way (Wiktionary).

In 1948, the psychologist B. F. Skinner reported a series of experiments in which pigeons could push a lever that would randomly either give them a food pellet, or nothing. Think of it as a sort of one-armed bandit that the pigeons played for free. Skinner found, after a while, that some of the pigeons started acting oddly before pushing the lever. One moved in counterclockwise circles, one repeatedly stuck its head into the upper corner of the cage, and two others would swing their heads back and forth in a sort of pendulum motion. He suggested that the birds had developed "superstitious behaviors" by associating getting the food with something they happened to be doing when they actually got it — and they had wrongly concluded that if they did it again, they were more likely to get the pellet. Essentially, they were doing a sort of food-pellet dance to better their odds.

Although computer users are undoubtedly smarter than pigeons, users who really don't understand how a computer works may also wrongly connect some action of theirs with success (and repeat it), or associate it with failure (and avoid it like the plague). Here are some of the user superstitions I've encountered.


1: Refusing to reboot

Some users seem to regard a computer that's up and running and doing what they want as a sort of miracle, achieved against all odds, and unlikely ever to be repeated … certainly not by them. Reboot? Not on your life! If it ain't broke, don't fix it. Why take the risk?

2: Excessive fear of upgrades

Exercising caution when it comes to upgrades is a good idea. But some users go well beyond that, into the realm of the irrational. It may take only one or two bad experiences. In particular, if an upgrade causes problems that don't seem to be related to the upgrade itself, this can lead to a superstitious fear of change because it confirms their belief that they have no idea how the computer really works — and therefore no chance of correctly judging whether an upgrade is worth it or just asking for trouble. Better to stay away from any change at all, right?

3: Kneejerk repetition of commands

These are the people who, when their print command fails to produce output in a timely manner, start pounding the keys. They treat the computer like a recalcitrant child who just isn't paying attention or doesn't believe they really mean it. Users may get the impression that this superstition is justified because the computer sometimes does seem to be ignoring them — when it fails to execute a double-click because they twitched the mouse or when they have inadvertently dropped out of input mode. Or it may come from the tendency of knowledgeable helpers to make inconspicuous adjustments and then say, "Try it again."

4: Insisting on using particular hardware when other equally good hardware is available

Whenever you go to the trouble of providing your users with multiple options — computers, printers, servers, etc. — they will develop favorite choices. Some users will conclude, however, based on their previous experience (or sometimes just based on rumor), that only this particular piece of hardware will do. The beauty of interchangeability is wasted on them.

5: "I broke it!"

Many users blame the computer for any problems (or they blame the IT department). But some users assume that when something goes wrong, they did it.

They don't think about all the tiny voltages and magnetic charges, timed to the nanosecond, all of which have to occur in the proper sequence for anything to work. In fact, there are plenty of chances for things to go wrong without the user's help, and things often do. But then, all those possible sources of error are hidden from the user — invisible by their nature and tucked away inside the box. The only place complexity isn't hidden is in the interface, and the most obviously fallible part of that is … them. It may take only a few cases of it actually being the user's fault to get this superstition rolling.

6: Magical thinking

These are the users who have memorized the formula for getting the computer to do what they want but have no clue how it works. As in magic, as long as you get the incantation exactly right, the result "just happens." The unforgiving nature of computer commands tends to feed this belief. The user whose long-running struggle to connect to the Web is resolved by, "Oh, here's your problem, you left out the colon…" is a prime candidate to develop this superstition.
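The missing-colon anecdote is easy to reproduce. In this Python sketch, urllib.parse shows how one dropped character silently changes what the computer understands (the URLs are placeholders):

```python
from urllib.parse import urlparse

# A well-formed URL parses into the pieces the user expects.
good = urlparse("http://www.example.com/index.html")

# Leave out the colon and nothing errors out; the whole string
# quietly becomes a relative path with no scheme and no host.
bad = urlparse("http//www.example.com/index.html")

print(good.scheme, good.netloc)  # the scheme and host are recognized
print(repr(bad.scheme), repr(bad.netloc), bad.path)  # both empty; all path
```

To the user it looks like magic failed; to the parser it was simply a different, equally valid incantation.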

Once on the path to magical thinking, some users give up trying to understand the computer as a tool to work with and instead treat it like some powerful but incomprehensible entity that must be negotiated with. For them, the computer works in mysterious ways, and superstitions begin to have more to do with what the computer is than how they use it.

7: Attributing personality to the machine

This is the user who claims in all honesty, "The computer hates me," and will give you a long list of experiences supporting their conclusion, or the one who refuses to use a computer or printer that had a problem earlier but which you have now fixed. No, no, it failed before and the user is not going to forget it.

8: Believing the computer sees all and knows all

Things this user says betray the belief that behind all the hardware and software there is a single Giant Brain that sees all and knows all — or should. They're surprised when things they've done don't seem to "stick," as in "I changed my email address; why does it keep using my old one?" or "Did you change it everywhere?"  "… Huh?" or "My new car always knows where I am, how come I have to tell Google Maps where I live?" or the ever-popular "You mean when you open up my document you see something different?"

9: Assuming the computer is always right

This user fails to recognize that the modern computer is more like television than the Delphic oracle. Even the most credulous people recognize that not everything they see on television is true, but some users think the computer is different. "There's something wrong with the company server." "What makes you think that?" "Because when I try to log in, it says server not found." … "Why did you click on that pop-up?" "It said I had a virus and that I had to."

10: "It's POSSESSED!!"

Users who are ordinarily rational can still succumb to superstition when the computer or its peripherals seem to stop paying any attention to them and start acting crazy — like when the screen suddenly fills with a code dump, or a keyboard problem overrides their input, or a newly revived printer spews out pages of gibberish. It serves to validate the secretly held suspicion that computers have a mind of their own — and that mind isn't particularly stable.

Magic?

We're used to seeing superstitions among gamblers and athletes, who frequently engage in high-stakes performances with largely unpredictable outcomes. That superstitions also show up when people use computers — algorithmic devices designed to be completely predictable — is either evidence of human irrationality or an interesting borderline case of Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic."

10 fundamental differences between Linux and Windows

By Jack Wallen

I have been around the Linux community for more than 10 years now. From the very beginning, I have known that there are basic differences between Linux and Windows that will always set them apart. This is not, in the least, to say one is better than the other. It's just to say that they are fundamentally different. Many people, looking from the view of one operating system or the other, don't quite get the differences between these two powerhouses. So I decided it might serve the public well to list 10 of the primary differences between Linux and Windows.

Full access vs. no access
Having access to the source code is probably the single most significant difference between Linux and Windows. The fact that Linux is licensed under the GNU General Public License (GPL) ensures that users (of all sorts) can access (and alter) the code to the very kernel that serves as the foundation of the Linux operating system. You want to peer at the Windows code? Good luck. Unless you are a member of a very select (and elite, to many) group, you will never lay eyes on the code making up the Windows operating system.
You can look at this from both sides of the fence. Some say giving the public access to the code opens the operating system (and the software that runs on top of it) to malicious developers who will take advantage of any weakness they find. Others say that having full access to the code helps bring about faster improvements and bug fixes to keep those malicious developers from being able to bring the system down. I have, on occasion, dipped into the code of one Linux application or another, and when all was said and done, was happy with the results. Could I have done that with a closed-source Windows application? No.

Licensing freedom vs. licensing restrictions
Along with access comes the difference between the licenses. I'm sure that every IT professional could go on and on about licensing of PC software. But let's just look at the key aspect of the licenses (without getting into legalese). With a Linux GPL-licensed operating system, you are free to modify that software and use and even republish or sell it (so long as you make the code available). Also, with the GPL, you can download a single copy of a Linux distribution (or application) and install it on as many machines as you like. With the Microsoft license, you can do none of the above. You are bound to the number of licenses you purchase, so if you purchase 10 licenses, you can legally install that operating system (or application) on only 10 machines.

Online peer support vs. paid help-desk support
This is one issue where most companies turn their backs on Linux. But it's really not necessary. With Linux, you have the support of a huge community via forums, online search, and plenty of dedicated Web sites. And of course, if you feel the need, you can purchase support contracts from some of the bigger Linux companies (Red Hat and Novell for instance).
However, when you rely on the peer support inherent in Linux, you are at the mercy of time. You could have an issue with something, send out e-mail to a mailing list or post on a forum, and within 10 minutes be flooded with suggestions. Or these suggestions could take hours or days to come in. It sometimes seems to be all up to chance. Still, generally speaking, most problems with Linux have been encountered and documented. So chances are good you'll find your solution fairly quickly.
On the other side of the coin is support for Windows. Yes, you can go the same route with Microsoft and depend upon your peers for solutions. There are just as many help sites/lists/forums for Windows as there are for Linux. And you can purchase support from Microsoft itself. Most corporate higher-ups easily fall victim to the safety net that having a support contract brings. But most higher-ups haven't had to depend upon said support contract. Of the various people I know who have used either a Linux paid support contract or a Microsoft paid support contract, I can't say one was more pleased than the other. This of course raises the question: Why do so many say that Microsoft support is superior to Linux paid support?

Full vs. partial hardware support
One issue that is slowly becoming nonexistent is hardware support. Years ago, if you wanted to install Linux on a machine you had to make sure you hand-picked each piece of hardware or your installation would not work 100 percent. I can remember, back in 1997-ish, trying to figure out why I couldn't get Caldera Linux or Red Hat Linux to see my modem. After much looking around, I found I was the proud owner of a Winmodem. So I had to go out and purchase a US Robotics external modem because that was the one modem I knew would work. This is not so much the case now. You can grab a PC (or laptop) and most likely get one or more Linux distributions to install and work nearly 100 percent. But there are still some exceptions. For instance, hibernate/suspend remains a problem with many laptops, although it has come a long way.
With Windows, you know that most every piece of hardware will work with the operating system. Of course, there are times (and I have experienced this over and over) when you will wind up spending much of the day searching for the correct drivers for that piece of hardware you no longer have the install disk for. But you can go out and buy that 10-cent Ethernet card and know it'll work on your machine (so long as you have, or can find, the drivers). You also can rest assured that when you purchase that insanely powerful graphics card, you will probably be able to take full advantage of its power.

Command line vs. no command line
No matter how far the Linux operating system has come and how amazing the desktop environment becomes, the command line will always be an invaluable tool for administration purposes. Nothing will ever replace my favorite text-based editor, ssh, and any given command-line tool. I can't imagine administering a Linux machine without the command line. But for the end user -- not so much. You could use a Linux machine for years and never touch the command line. Same with Windows. You can still use the command line with Windows, but not nearly to the extent as with Linux. And Microsoft tends to hide the command prompt from users. Without going to Run and entering cmd (or command, or whichever it is these days), the user won't even know the command-line tool exists. And if a user does get the Windows command line up and running, how useful is it really?

Centralized vs. noncentralized application installation
The heading for this point might have thrown you for a loop. But let's think about this for a second. With Linux you have (with nearly every distribution) a centralized location where you can search for, add, or remove software. I'm talking about package management systems, such as Synaptic. With Synaptic, you can open up one tool, search for an application (or group of applications), and install that application without having to do any Web searching (or purchasing).
Windows has nothing like this. With Windows, you must know where to find the software you want, download it (or put the CD into your machine), and run setup.exe or install.exe with a simple double-click. For many years, it was thought that installing applications on Windows was far easier than on Linux. And for many years, that thought was right on target. Not so much now. Installation under Linux is simple, painless, and centralized.
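To make the contrast concrete, here is a sketch of the kind of command a front end such as Synaptic ultimately issues on a Debian-style system. apt_get_command is a hypothetical helper written for illustration, not part of any real tool's API:

```python
def apt_get_command(action, packages, assume_yes=True):
    """Build an apt-get invocation like the ones a package manager
    front end runs behind the scenes."""
    cmd = ["apt-get", action]
    if assume_yes:
        cmd.append("-y")  # don't stop to ask for confirmation
    cmd.extend(packages)
    return cmd

# A front end would hand the result to the system, e.g.:
#   subprocess.run(apt_get_command("install", ["gimp"]), check=True)
```

The point of centralization is that this one command line (search, install, remove) covers nearly every application in the distribution's repository, with no Web hunting required.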

Flexibility vs. rigidity
I always compare Linux (especially the desktop) and Windows to a room where the floor and ceiling are either movable or not. With Linux, you have a room where the floor and ceiling can be raised or lowered, at will, as high or low as you want to make them. With Windows, that floor and ceiling are immovable. You can't go further than Microsoft has deemed it necessary to go.
Take, for instance, the desktop. Unless you are willing to pay for and install a third-party application that can alter the desktop appearance, with Windows you are stuck with what Microsoft has declared is the ideal desktop for you. With Linux, you can pretty much make your desktop look and feel exactly how you want/need. You can have as much or as little on your desktop as you want. From simple flat Fluxbox to a full-blown 3D Compiz experience, the Linux desktop is as flexible an environment as there is on a computer.

Fanboys vs. corporate types
I wanted to add this because even though Linux has reached well beyond its school-project roots, Linux users tend to be soapbox-dwelling fanatics who are quick to spout off about why you should be choosing Linux over Windows. I am guilty of this on a daily basis (I try hard to recruit new fanboys/girls), and it's a badge I wear proudly. Of course, this is seen as less than professional by some. After all, why would something worthy of a corporate environment have or need cheerleaders? Shouldn't the software sell itself? Because of the open source nature of Linux, it has to make do without the help of the marketing budgets and deep pockets of Microsoft. With that comes the need for fans to help spread the word. And word of mouth is the best friend of Linux.
Some see the fanaticism as the same college-level hoorah that keeps Linux in the basements for LUG meetings and science projects. But I beg to differ. Another company, thanks to the phenomenon of a simple music player and phone, has fallen into the same fanboy fanaticism, and yet that company's image has not been besmirched because of that fanaticism. Windows does not have these same fans. Instead, Windows has a league of paper-certified administrators who believe the hype when they hear the misrepresented market share numbers reassuring them they will be employable until the end of time.

Automated vs. nonautomated removable media
I remember the days of old when you had to mount your floppy to use it and unmount it to remove it. Well, those times are drawing to a close -- but not completely. One issue that plagues new Linux users is how removable media is used. The idea of having to manually "mount" a CD drive to access the contents of a CD is completely foreign to new users. There is a reason this is the way it is. Because Linux has always been a multiuser platform, it was thought that forcing a user to mount media to use it would keep the user's files from being overwritten by another user. Think about it: On a multiuser system, if everyone had instant access to a disk that had been inserted, what would stop them from deleting or overwriting a file you had just added to the media? Things have now evolved to the point where Linux subsystems are set up so that you can use a removable device in the same way you use them in Windows. But it's not the norm. And besides, who doesn't want to manually edit the /etc/fstab file?

Multilayered run levels vs. a single-layered run level
I couldn't figure out how best to title this point, so I went with a description. What I'm talking about is Linux's inherent ability to stop at different run levels. With this, you can work from either the command line (run level 3) or the GUI (run level 5). This can really save your socks when X Windows is fubared and you need to figure out the problem. You can do this by booting into run level 3, logging in as root, and finding/fixing the problem.
With Windows, you're lucky to get to a command line via safe mode -- and then you may or may not have the tools you need to fix the problem. In Linux, even in run level 3, you can still get and install a tool to help you out (hello apt-get install APPLICATION via the command line). Having different run levels is helpful in another way. Say the machine in question is a Web or mail server. You want to give it all the memory you have, so you don't want the machine to boot into run level 5. However, there are times when you do want the GUI for administrative purposes (even though you can fully administer a Linux server from the command line). Because you can run the startx command from the command line at run level 3, you can still start up X Windows and have your GUI as well. With Windows, you are stuck at the Graphical run level unless you hit a serious problem.
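On systems of that era, the default run level lived in /etc/inittab on a line like id:5:initdefault:. A small Python sketch shows how that entry is read; default_runlevel is a hypothetical helper, not a real system call:

```python
def default_runlevel(inittab_text):
    """Return the default run level from /etc/inittab-style text,
    e.g. the line 'id:5:initdefault:' yields '5'."""
    for line in inittab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = line.split(":")
        if len(fields) >= 3 and fields[2] == "initdefault":
            return fields[1]
    return None  # no initdefault entry found

sample = "# The default runlevel.\nid:3:initdefault:\n"
```

Changing that single digit between 3 and 5 is all it takes to decide whether the machine boots to a console or to X.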

10 Windows XP services you should never disable

Author: Scott Lowe

    Disabling certain Windows XP services can enhance performance and security - but it's essential to know which ones you can safely turn off. Scott Lowe identifies 10 critical services and explains why they should be left alone.


    There are dozens of guides out there that help you determine which services you can safely disable on your Windows XP desktop. Disabling unnecessary services can improve system performance and overall system security, as the system's attack surface is reduced. However, these lists rarely indicate which services you should not disable. All of the services that run on a Windows system serve a specific purpose, and many of the services are critical to the proper and expected functioning of the desktop computing environment. In this article, you'll learn about 10 critical Windows XP services you shouldn't disable (and why).


    1: DNS Client

    This service resolves and caches DNS names, allowing the system to communicate with canonical names rather than strictly by IP address. DNS is the reason that you can, in a Web browser, type http://www.techrepublic.com rather than having to remember that http://216.239.113.101 is the site's IP address.

    If you stop this service, you will disable your computer's ability to resolve names to IP addresses, basically rendering Web browsing all but impossible.
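The caching half of the service's job can be sketched in a few lines of Python. resolve is a hypothetical stand-in for the real Windows service; it simply remembers earlier answers so repeat lookups never touch the network:

```python
import socket

_cache = {}

def resolve(hostname, resolver=socket.gethostbyname):
    """Answer from the cache when possible, the way a caching
    DNS client does; only a cache miss calls the real resolver."""
    if hostname not in _cache:
        _cache[hostname] = resolver(hostname)
    return _cache[hostname]
```

Because the second lookup for the same name is served from memory, pages you revisit load without a fresh round trip to the DNS server.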

    2: Network Connections

    The Network Connections service manages the network and dial-up connections for your computer, including network status notification and configuration. These days, a standalone, non-networked PC is just about as useful as an abacus — maybe less so. The Network Connections service is the element responsible for making sure that your computer can communicate with other computers and with the Internet.

    If this service is disabled, network configuration is not possible. New network connections can't be created and services that need network information will fail.

    3: Plug and Play

    The Plug and Play service (formerly known as the "Plug and Pray" service, due to its past unreliability) is kicked off whenever new hardware is added to the computer. This service detects the new hardware and attempts to automatically configure it for use with the computer. The Plug and Play service is often confused with the Universal Plug and Play (UPnP) service, which is a way that the Windows XP computer can detect new network resources (as opposed to local hardware resources). The Plug and Play service is pretty critical as, without it, your system can become unstable and will not recognize new hardware. On the other hand, UPnP is not generally necessary and can be disabled without worry. Along with UPnP, disable the SSDP Discovery Service, as it goes hand-in-hand with UPnP.

    Historical note: Way back in 2001, UPnP was implicated in some pretty serious security breaches.

    If you disable Plug and Play, your computer will be unstable and incapable of detecting hardware changes.

    4: Print Spooler

    Just about every computer out there needs to print at some point. If you want your computer to be able to print, don't plan on disabling the Print Spooler service. It manages all printing activities for your system. You may think that the lack of a printer makes it safe to disable the Print Spooler service. While that's technically true, there's really no point in doing so; after all, if you ever do decide to get a printer, you'll need to remember to re-enable the service, and you might end up frustrating yourself.

    When the Print Spooler service is not running, printing on the local machine is not possible.

    5: Remote Procedure Call (RPC)

    Windows is a pretty complex beast, and many of its underlying processes need to communicate with one another. The service that makes this possible is the Remote Procedure Call (RPC) service. RPC allows processes to communicate with one another, both locally and across the network. A ton of other critical services, including the Print Spooler and the Network Connections service, depend on the RPC service to function. So what happens when you disable it?

    Bad news. The system will not boot. Don't disable this service.

    6: Workstation

    As is the case for many services, the Workstation service is responsible for handling connections to remote network resources. Specifically, this service provides network connections and communications capability for resources found using Microsoft Network services. Years ago, I would have said that disabling this service was a good idea, but that was before the rise of the home network and everything that goes along with it, including shared printers, remote Windows Media devices, Windows Home Server, and much more. Today, you don't gain much by eliminating this service, but you lose a lot.

    Disable the Workstation service and your computer will be unable to connect to remote Microsoft Network resources.

    7: Network Location Awareness (NLA)

    As was the case with the Workstation service, disabling the Network Location Awareness service might have made sense a few years ago — at least for a standalone, non-networked computer. With today's WiFi-everywhere culture, mobility has become a primary driver. The Network Location Awareness service is responsible for collecting and storing network configuration and location information and notifying applications when this information changes. For example, as you make the move from the local coffee shop's wireless network back home to your wired docking station, NLA makes sure that applications are aware of the change. Further, some other services depend on this service's availability.

    Your computer will not be able to fully connect to and use wireless networks. Problems abound!

    8: DHCP Client

    Dynamic Host Configuration Protocol (DHCP) is a critical service that makes the task of getting computers on the network nearly effortless. Before the days of DHCP, poor network administrators had to manually assign network addresses to every computer. Over the years, DHCP has been extended to automatically assign all kinds of information to computers from a central configuration repository. DHCP allows the system to automatically obtain IP addressing information, WINS server information, routing information, and so forth; it's required to update records in dynamic DNS systems, such as Microsoft's Active Directory-integrated DNS service. This is one service that, if disabled, won't necessarily cripple your computer but will make administration much more difficult.

    Without the DHCP Client service, you'll need to manually assign static IP addresses to every Windows XP system on your network. If you use DHCP to assign other parameters, such as WINS information, you'll need to provide that information manually as well.

    9: Cryptographic Services

    Every month, Microsoft provides new fixes and updates on what has become known as "Patch Tuesday" because the updates are released on the second Tuesday of the month. Why do I bring this up? Well, one service supported by Cryptographic Services happens to be Automatic Updates. Further, Cryptographic Services provides three other management services: Catalog Database Service, which confirms the signatures of Windows files; Protected Root Service, which adds and removes Trusted Root Certification Authority certificates from this computer; and Key Service, which helps enroll this computer for certificates. Finally, Cryptographic Services also supports some elements of Task Manager.

    Disable Cryptographic Services at your peril! Automatic Updates will not function and you will have problems with Task Manager as well as other security mechanisms.

    10: Automatic Updates

    Keeping your machine current with patches is pretty darn important, and that's where Automatic Updates comes into play. When Automatic Updates is enabled, your computer stays current with new updates from Microsoft. When disabled, you have to manually get updates by visiting Microsoft's update site.

    Thursday, August 20, 2009

    Recover lost data with Disk Commander

    Takeaway: When a user mistakenly deletes a file, suffers a hard drive failure, or corrupts their OS, you need a tool to recover their data quickly and successfully. Find out why Disk Commander is one of the best data recovery tools available.


    Disk Commander from Winternals Software is one of the most comprehensive data recovery products that I've ever used. It has helped me bail myself and my end users out of several tough situations, allowing me to recover data I thought was lost forever. No help desk should be without this tool.

    File recovery and a whole lot more
    Unlike many disk recovery utilities, Disk Commander isn't just a deleted file recovery utility (although it can recover deleted files). Instead, the utility actually reconstructs damaged files. It can also rebuild a corrupt partition and recover data from a formatted hard disk, even if the disk is unbootable. While many other disk recovery utilities limit you to recovering data from a single hard disk, Disk Commander allows you to recover data from stripe sets, mirror sets, and volume sets. The only prerequisite is that the hard disk must be physically functional. If the disk has a problem, such as a dead motor, Winternals recommends shipping the drive to a data recovery lab.

    Disk Commander offers flexible installation. You can install it on a functioning hard disk, run it from a set of boot floppies, or run it from a floppy disk at a DOS prompt (Figure A).

    Figure A


    However, only running from a set of boot floppies gives you access to the product's full functionality. The hard disk installation and the DOS installation are both subject to restrictions of the underlying operating system. For example, when run from DOS, long filenames aren't supported and neither are normal RAID devices.

    Using Disk Commander
    Disk Commander is fairly simple and straightforward to use. The software wizard asks you several questions about your data recovery needs. You'll begin by selecting the drive letter associated with the damaged hard disk. You can then choose to try to salvage deleted files. If you need to perform any other type of repair on the volume, you must tell Disk Commander that no drive letter is associated with the hard disk.

    Next you must tell Disk Commander whether you want to recover regular and damaged files or files that have been deleted from the partition. If you choose to salvage deleted files, Disk Commander scans the hard disk for anything that can be recovered. It will then present you with a directory tree style view of salvageable files (Figure B). You must then select which files you want to recover along with a location for Disk Commander to copy those files to.

    Figure B


    If you tell Disk Commander that the damaged hard disk (or partition) doesn't have a drive letter, Disk Commander performs a thorough scan of the hard drive. The scan might take a while to complete, but the results are worth the wait.

    When the scan completes, Disk Commander will show you a report of the disk's partition scheme. The wizard then asks you whether the partition scheme accurately displays what should be on the disk. If you answer No, it will launch a more thorough scan, which can take an entire day to complete. At the end of the scan, Disk Commander will display a graphical representation of the partition table, including missing partitions and volumes.

    You may then select an area of the partition table and an action, and then click Next to perform the action. For example, you could select a damaged partition and click the Recover Entry button. Or you could select a damaged master boot record (MBR) and click the Rewrite MBR button. Also, the software includes a Volume Details button that allows you to gain detailed information on a partition or volume you're about to repair, which is a nice touch.

    Before executing any instruction that will modify the partition table, Disk Commander gives you the chance to copy the partition table to a floppy disk. That way, you can revert the system to its current state should you make a mistake that damages the partition table more than it already is.

    Well worth the cost
    While there are plenty of other data recovery tools out there, each technique you use unsuccessfully decreases your chances of a successful recovery through another method. Most disk recovery utilities modify the data on the hard disk, and once a utility has modified the already damaged hard disk, it becomes that much tougher for another utility to pick up the pieces. So, if your data is important to you, I recommend spending a few bucks for Disk Commander instead of risking further damage to your data with a lower-budget data recovery utility.

    Disk Commander is designed to work on a system with Windows 9x, NT, 2000, XP, or Me—although the operating system doesn't have to be functional. You can buy a copy of Disk Commander for $299 directly from Winternals Software. Volume licensing discounts are also available.

    Increase XP NTFS performance

    Takeaway: Make NTFS perform faster and more efficiently.

    A lot of things go into making a workstation operate at peak performance. Much of it, such as the amount of RAM in the system, the CPU speed, or the speed of the system's hard drive, is hardware-controlled. However, there are other aspects of the operating system that can impact system performance as well.

    One of the mechanisms that can greatly affect a workstation's efficiency is the file system used by the operating system to save files. If the file system is inefficient, then no matter how fast a CPU or hard drive is, the system will waste time retrieving data. XP's default file system, NTFS, is more efficient than Windows 9x's old FAT system under normal circumstances, but you can do more to make it even faster.



    Danger!
    This article discusses making changes to your workstation's registry. Before using any technique in this article, make sure you have a complete backup of your workstation. If you make a mistake when editing the registry, you may render your workstation unbootable, which would require a reinstallation of Windows to correct. Proceed with extreme caution.

    NTFS vs. FAT
    NTFS has been around since Microsoft introduced the first version of Windows NT. Its goal was to overcome the limitations of the venerable FAT file system, which had been around since the first version of DOS in 1981. Some of the key benefits of NTFS over FAT include:
    • Smaller cluster sizes on drives over 1 GB
    • Added security through permissions
    • Support for larger drive sizes
    • Better fault tolerance through transaction logging

    Windows XP supports both NTFS and FAT, as well as FAT's newer cousin, FAT32. Chances are that you'll never see an XP workstation running the FAT-related file systems. About the only time you'll find FAT on an XP workstation is if someone upgraded a Windows 9x workstation to Windows XP and didn't convert the file system.

    Last access time stamps
    XP automatically updates each file's date and time stamp to record the last time you accessed the file. Not only does it mark the file, but it also updates the directory the file is located in as well as any directories above it. If you have a large hard drive with many subdirectories on it, this updating can slow down your system.

    To disable the updating, start the Registry Editor by selecting Run from the Start menu, typing regedit in the Open text box, and clicking OK. When the Registry Editor window opens, navigate through the left pane until you get to

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Filesystem

    In the right pane, look for the value named NtfsDisableLastAccessUpdate. If the value exists, it's probably set to 0. To change the value, double-click it. You'll then see the Edit DWORD Value screen. Enter 1 in the Value Data field and click OK.

    If the value doesn't exist, you'll need to add it. Select New | DWORD Value from the Edit menu. The new value will appear in the right pane, prompting you for a value name. Type NtfsDisableLastAccessUpdate and press [Enter]. Double-click the new value. You'll then see the Edit DWORD Value screen. Enter 1 in the Value Data field and click OK. When you're done, close Regedit. Your registry changes will be saved automatically. Reboot your workstation.
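    If you'd rather not edit the value by hand, the change above can be captured in a .reg file and merged by double-clicking it. This is a sketch, not an official Microsoft file; back up your registry first, and remember that the reboot is still required:

```reg
Windows Registry Editor Version 5.00

; Stop NTFS from updating last-access time stamps
; (merge this file, then reboot the workstation)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisableLastAccessUpdate"=dword:00000001
```

    To undo the change later, merge the same file with the dword set to 00000000.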

    The Master File Table
    The Master File Table (MFT) keeps track of files on disks. This file logs all the files that are stored on a given disk, including an entry for the MFT itself. It works like an index of everything on the hard disk in much the same way that a phone book stores phone numbers.

    NTFS reserves a section of each disk just for the MFT, which allows the MFT to grow as the contents of the disk change without becoming overly fragmented. This zone matters because Windows NT provided no way to defragment the MFT. The Disk Defragmenter in Windows 2000 and Windows XP will defragment the MFT only if there's enough free space on the hard drive to bring all of the MFT segments together in one location.

    As the MFT file grows, it can become fragmented. Fortunately, you can control the initial size of the MFT by making a change in the registry. Making the MFT larger prevents it from fragmenting, but at the cost of storage space: every kilobyte NTFS reserves for the MFT is a kilobyte it can't use for your data.

    To limit the size of the MFT, start the Registry Editor by selecting Run from the Start menu, typing regedit in the Open text box, and clicking OK. When the Registry Editor window opens, navigate through the left pane until you get to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Filesystem.

    In the right pane, look for the value named NtfsMftZoneReservation. If the value doesn't exist, you'll need to add it. Select New | DWORD Value from the Edit menu. The new value will appear in the right pane, prompting you for a value name. Type NtfsMftZoneReservation and press [Enter]. Double-click the new value. You'll then see the Edit DWORD Value screen.

    The default value for this key is 1. This is good for a drive that will contain relatively few large files. Other options include:
    • 2—Medium file allocation
    • 3—Larger file allocation
    • 4—Maximum file allocation

    To change the value, double-click it. When the Edit DWORD Value screen appears, enter the value you want and click OK. Unfortunately, Microsoft doesn't give any clear guidelines as to what distinguishes Medium from Larger and Maximum levels of files. Suffice it to say, if you plan to store lots of files on your workstation, you may want to consider a value of 3 or 4 instead of the default value of 1.

    When you're done, close Regedit. Your registry changes will be saved automatically. Reboot your workstation. Unlike the other registry changes described here, NtfsMftZoneReservation works best on freshly formatted hard drives, because XP can then create the MFT in one contiguous space. On an existing volume, it merely resizes the current MFT, instantly fragmenting it. It's therefore most useful if you plan to keep one drive for applications and a separate, newly formatted drive for data.
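    This change, too, can be scripted as a .reg file. The value 2 below is only an example (medium allocation); substitute 3 or 4 as discussed above:

```reg
Windows Registry Editor Version 5.00

; Reserve a larger MFT zone: 1 = default, 2 = medium,
; 3 = larger, 4 = maximum file allocation
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsMftZoneReservation"=dword:00000002
```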

    Short filenames
    Even though NTFS supports filenames of up to 255 characters, Windows XP stores each filename in the old 8.3 format as well as in its native format in order to maintain backward compatibility with DOS and Windows 3.x workstations. For example, if this article is named "Increase XP NTFS performance.doc," Windows XP will save the file under that filename as well as INCREA~1.DOC.

    To change this in the registry, start the Registry Editor. When the Registry Editor window opens, navigate through the left pane until you get to

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Filesystem

    In the right pane, look for the value named NtfsDisable8dot3NameCreation. If the value exists, it's probably set to 0. To change the value, double-click it. In the Edit DWORD Value screen, enter 1 in the Value Data field and click OK.

    If the value doesn't exist, you'll need to add it. Select New | DWORD Value from the Edit menu. The new value will appear in the right pane, prompting you for a value name. Type NtfsDisable8dot3NameCreation and press [Enter]. Double-click the new value. You'll then see the Edit DWORD Value screen. Enter 1 in the Value Data field and click OK. When you're done, close Regedit. Your registry changes will be saved automatically. Reboot your workstation.
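    As with the other tweaks, this one can be packaged as a .reg file. A sketch follows; be aware that some older setup programs still depend on 8.3 names, so test before rolling it out widely:

```reg
Windows Registry Editor Version 5.00

; Stop generating 8.3 short filenames on NTFS volumes
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisable8dot3NameCreation"=dword:00000001
```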

    Other ways to speed drive access
    There are other ways to speed drive access that aren't NTFS-specific. These include:
    • Caching—If your XP workstation has more than 256 MB of RAM, you might be able to increase hard drive access speeds by tweaking the amount of RAM cache that XP uses. For more information about how to do this, see the article "Squeeze more performance out of Windows XP with CachemanXP 1.1."
    • Striping—If you have more than one hard drive on your system, you can use XP's striping feature to have the file system store data across multiple drives. This feature works best with SCSI drives, but it can work with multiple ATA drives as well. You'll make the change using the Disk Management snap-in in the Computer Management console.
    • Defragmenting—Even though NTFS is more resistant to fragmentation than FAT, it can and does still fragment. You can either use XP's built-in defragmenter or a third-party utility such as Diskeeper.
    • Disable Compression—Compressing files may save space on your workstation's hard drive, but compressing and decompressing files can slow down your system. With the relative low cost of hard drives today, investing in an additional hard drive is better than compressing files on a workstation.

    Improve Windows XP's hard drive performance with disk striping

    Takeaway: Learn what disk striping is, how it can boost performance, and how to implement it


    Some applications need a higher level of performance than a standard installation can generally provide. For example, the process of creating DVDs requires the hard disk to read information at a very high speed. Fortunately, there's a relatively easy way of ensuring that Windows XP's performance meets your needs: Boost your disk performance by implementing disk striping. In this article, I'll explain disk striping and show you how to implement it.

    What is disk striping?
    Disk striping is a technique by which data spans multiple hard drives. All hard drives involved in the stripe set are read from and written to simultaneously. For example, if a striped set consists of three hard drives, data can be read and written up to roughly three times faster, because Windows distributes the workload among the three drives. Creating a striped set is an inexpensive way of dramatically increasing performance.

    Before you begin
    In Windows XP, striped sets with parity aren't supported. This means that if any of the drives associated with the striped set have a problem, the entire volume (striped set) will be lost. Therefore, you'll have to back up frequently.

    Also, once you create a striped set, only Windows XP will be able to read it. There's a way to make the set readable from Windows 2000, but on a dual-boot system you should generally assume that no other OS will be able to access the striped set.


    Creating a striped set
    To set up a striped set, first, install the hard drives. However, your primary hard drive cannot be included in the striped set because you can only create striped sets on empty hard drives. You need a minimum of two new hard drives to create a striped set, but you can use up to 32 hard drives in the set. Because this is a software-implemented striped set, there is no requirement as to what type of hard drive you must use. IDE and SCSI are both acceptable.

    Once you've physically installed the drives, boot Windows XP and log in as the Administrator. Next, enter the DISKMGMT.MSC command at the Run prompt to open the Disk Management console shown in Figure A.

    Figure A


    When the Disk Management console opens, locate the new disks and right-click them. Be sure to right-click the reference to the disk itself, not the space on the disk. Select the Convert To Dynamic Disk command from the context menu. When you do, a wizard will open, verifying that you want to convert the disk into a dynamic disk. Click Yes. When the conversion completes, repeat the process for each disk in the striped set.

    To create the striped set, right-click in the empty space on one of your new disks and select the New Volume command from the context menu. Windows will then launch the New Volume wizard. When the wizard asks what type of volume you want to create, select Striped. Then, follow the instructions to complete the wizard. The process involves simply selecting which disks should be included in the striped set. Your striped set is now ready to use.
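    The same steps can be scripted with the DISKPART command-line tool. The sketch below assumes the two new drives appear as disks 1 and 2 and that drive letter S is free; adjust the numbers for your system, and double-check them, because these operations destroy any data on the selected disks:

```
REM stripe.txt - run with: diskpart /s stripe.txt
select disk 1
convert dynamic
select disk 2
convert dynamic
REM create a striped volume spanning both dynamic disks
create volume stripe disk=1,2
assign letter=S
```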

    Conclusion
    Creating a striped set is a low cost way of giving your PC a serious performance boost. Just remember to back up your striped set often, because it is more prone to failure than standard partitions due to the number of disks involved.

    Monday, August 17, 2009

    ASP.NET Page Life Cycle Overview

    When an ASP.NET page runs, the page goes through a life cycle in which it performs a series of processing steps. These include initialization, instantiating controls, restoring and maintaining state, running event handler code, and rendering. It is important for you to understand the page life cycle so that you can write code at the appropriate life-cycle stage for the effect you intend. Additionally, if you develop custom controls, you must be familiar with the page life cycle in order to correctly initialize controls, populate control properties with view-state data, and run any control behavior code. (The life cycle of a control is based on the page life cycle, but the page raises more events for a control than are available for an ASP.NET page alone.)

     General Page Life-cycle Stages

    In general terms, the page goes through the stages outlined in the following table. In addition to the page life-cycle stages, there are application stages that occur before and after a request but are not specific to a page. For more information, see ASP.NET Application Life Cycle Overview for IIS 7.0.

    Stage

    Description

    Page request

    The page request occurs before the page life cycle begins. When the page is requested by a user, ASP.NET determines whether the page needs to be parsed and compiled (therefore beginning the life of a page), or whether a cached version of the page can be sent in response without running the page.

    Start

    In the start step, page properties such as Request and Response are set. At this stage, the page also determines whether the request is a postback or a new request and sets the IsPostBack property. Additionally, during the start step, the page's UICulture property is set.

    Page initialization

    During page initialization, controls on the page are available and each control's UniqueID property is set. Any themes are also applied to the page. If the current request is a postback, the postback data has not yet been loaded and control property values have not been restored to the values from view state.

    Load

    During load, if the current request is a postback, control properties are loaded with information recovered from view state and control state.

    Validation

    During validation, the Validate method of all validator controls is called, which sets the IsValid property of individual validator controls and of the page.

    Postback event handling

    If the request is a postback, any event handlers are called.

    Rendering

    Before rendering, view state is saved for the page and all controls. During the rendering phase, the page calls the Render method for each control, providing a text writer that writes its output to the OutputStream of the page's Response property.

    Unload

    Unload is called after the page has been fully rendered, sent to the client, and is ready to be discarded. At this point, page properties such as Response and Request are unloaded and any cleanup is performed.

     Life-cycle Events

    Within each stage of the life cycle of a page, the page raises events that you can handle to run your own code. For control events, you bind the event handler to the event, either declaratively using attributes such as onclick, or in code.

    Pages also support automatic event wire-up, meaning that ASP.NET looks for methods with particular names and automatically runs those methods when certain events are raised. If the AutoEventWireup attribute of the @ Page directive is set to true (or if it is not defined, since by default it is true), page events are automatically bound to methods that use the naming convention of Page_event, such as Page_Load and Page_Init. For more information on automatic event wire-up, see ASP.NET Web Server Control Event Model.
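    With automatic event wire-up, a code-behind class needs nothing more than correctly named methods. The class name in this sketch is illustrative:

```csharp
using System;

// Hypothetical code-behind; ASP.NET binds these handlers by name
// when AutoEventWireup="true" in the @ Page directive.
public partial class DemoPage : System.Web.UI.Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // Controls exist here, but view state has not been loaded yet.
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // First request for the page: populate controls, bind data, etc.
        }
    }
}
```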

    The following table lists the page life-cycle events that you will use most frequently. There are more events than those listed; however, they are not used for most page processing scenarios. Instead, they are primarily used by server controls on the ASP.NET Web page to initialize and render themselves. If you want to write your own ASP.NET server controls, you need to understand more about these stages. For information about creating custom controls, see Developing Custom ASP.NET Server Controls.

    Page Event

    Typical Use

    PreInit

    Use this event for the following:

    ·         Check the IsPostBack property to determine whether this is the first time the page is being processed.

    ·         Create or re-create dynamic controls.

    ·         Set a master page dynamically.

    ·         Set the Theme property dynamically.

    ·         Read or set profile property values.

    Note:

    If the request is a postback, the values of the controls have not yet been restored from view state. If you set a control property at this stage, its value might be overwritten in the next event.

    Init

    Raised after all controls have been initialized and any skin settings have been applied. Use this event to read or initialize control properties.

    InitComplete

    Raised by the Page object. Use this event for processing tasks that require all initialization be complete.

    PreLoad

    Use this event if you need to perform processing on your page or control before the Load event.

    Before the Page instance raises this event, it loads view state for itself and all controls, and then processes any postback data included with the Request instance.

    Load

    The Page calls the OnLoad event method on the Page, then recursively does the same for each child control, which does the same for each of its child controls until the page and all controls are loaded.

    Use the OnLoad event method to set properties in controls and establish database connections.

    Control events

    Use these events to handle specific control events, such as a Button control's Click event or a TextBox control's TextChanged event.

    Note:

    In a postback request, if the page contains validator controls, check the IsValid property of the Page and of individual validation controls before performing any processing.

    LoadComplete

    Use this event for tasks that require that all other controls on the page be loaded.

    PreRender

    Before this event occurs:

    ·         The Page object calls EnsureChildControls for each control and for the page.

    ·         Each data bound control whose DataSourceID property is set calls its DataBind method. For more information, see Data Binding Events for Data-Bound Controls later in this topic.

    The PreRender event occurs for each control on the page. Use the event to make final changes to the contents of the page or its controls.

    SaveStateComplete

    Before this event occurs, ViewState has been saved for the page and for all controls. Any changes to the page or controls at this point will be ignored.

    Use this event to perform tasks that require view state to be saved, but that do not make any changes to controls.

    Render

    This is not an event; instead, at this stage of processing, the Page object calls this method on each control. All ASP.NET Web server controls have a Render method that writes out the control's markup that is sent to the browser.

    If you create a custom control, you typically override this method to output the control's markup. However, if your custom control incorporates only standard ASP.NET Web server controls and no custom markup, you do not need to override the Render method. For more information, see Developing Custom ASP.NET Server Controls.

    A user control (an .ascx file) automatically incorporates rendering, so you do not need to explicitly render the control in code.

    Unload

    This event occurs for each control and then for the page. In controls, use this event to do final cleanup for specific controls, such as closing control-specific database connections.

    For the page itself, use this event to do final cleanup work, such as closing open files and database connections, or finishing up logging or other request-specific tasks.

    Note:

    During the unload stage, the page and its controls have been rendered, so you cannot make further changes to the response stream. If you attempt to call a method such as the Response.Write method, the page will throw an exception.
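    To illustrate the validator note under Control events above: in a postback handler, always confirm that validation succeeded before acting on the submitted data. The button name here is hypothetical:

```csharp
// Hypothetical Click handler for a Button with CausesValidation="true".
protected void SubmitButton_Click(object sender, EventArgs e)
{
    if (!Page.IsValid)
    {
        // A validator failed; let the page redisplay its error messages.
        return;
    }

    // All validators passed; it's safe to process the submitted values.
}
```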

     

    Saturday, August 15, 2009

    Working with the ASP.NET Global.asax file

    Takeaway: The ASP.NET Global.asax file allows you to implement a variety of tasks, including application security. Find out how you may use this file in your application development efforts.

    The Global.asax file, sometimes called the ASP.NET application file, provides a way to respond to application or module level events in one central location. You can use this file to implement application security, as well as other tasks. Let's take a closer look at how you may use it in your application development efforts.

    Overview

    The Global.asax file is in the root application directory. While Visual Studio .NET automatically inserts it in all new ASP.NET projects, it's actually an optional file. It's okay to delete it—if you aren't using it. The .asax file extension signals that it's an application file rather than an ASP.NET page, which uses .aspx.

    The Global.asax file is configured so that any direct HTTP request (via URL) is rejected automatically, so users cannot download or view its contents. The ASP.NET page framework automatically recognizes any changes made to the Global.asax file and restarts the application, which closes all browser sessions, flushes all state information, and restarts the application domain.

    Programming

    The class behind the Global.asax file is derived from the HttpApplication class; ASP.NET maintains a pool of HttpApplication objects and assigns them to application requests as needed. The Global.asax file can contain handlers for the following events:

    • Application_Init: Fired when an application initializes or is first called. It's invoked for all HttpApplication object instances.
    • Application_Disposed: Fired just before an application is destroyed. This is the ideal location for cleaning up previously used resources.
    • Application_Error: Fired when an unhandled exception is encountered within the application.
    • Application_Start: Fired when the first instance of the HttpApplication class is created. It allows you to create objects that are accessible by all HttpApplication instances.
    • Application_End: Fired when the last instance of an HttpApplication class is destroyed. It's fired only once during an application's lifetime.
    • Application_BeginRequest: Fired when an application request is received. It's the first event fired for a request, which is often a page request (URL) that a user enters.
    • Application_EndRequest: The last event fired for an application request.
    • Application_PreRequestHandlerExecute: Fired before the ASP.NET page framework begins executing an event handler like a page or Web service.
    • Application_PostRequestHandlerExecute: Fired when the ASP.NET page framework is finished executing an event handler.
    • Application_PreSendRequestHeaders: Fired before the ASP.NET page framework sends HTTP headers to a requesting client (browser).
    • Application_PreSendRequestContent: Fired before the ASP.NET page framework sends content to a requesting client (browser).
    • Application_AcquireRequestState: Fired when the ASP.NET page framework gets the current state (Session state) related to the current request.
    • Application_ReleaseRequestState: Fired when the ASP.NET page framework completes execution of all event handlers. This causes all state modules to save their current state data.
    • Application_ResolveRequestCache: Fired when the ASP.NET page framework completes an authorization request. It allows caching modules to serve the request from the cache, thus bypassing handler execution.
    • Application_UpdateRequestCache: Fired when the ASP.NET page framework completes handler execution to allow caching modules to store responses to be used to handle subsequent requests.
    • Application_AuthenticateRequest: Fired when the security module has established the current user's identity as valid. At this point, the user's credentials have been validated.
    • Application_AuthorizeRequest: Fired when the security module has verified that a user can access resources.
    • Session_Start: Fired when a new user visits the application Web site.
    • Session_End: Fired when a user's session times out or ends. Note that this event is raised only when session state is stored in-process (the default InProc mode).

    The event list may seem daunting, but it can be useful in various circumstances.

    A key issue in taking advantage of these events is knowing the order in which they're triggered. The Application_Init and Application_Start events are fired once, when the application first starts. Likewise, Application_Disposed and Application_End are fired only once, when the application terminates. In addition, the session-based events (Session_Start and Session_End) fire only when users enter and leave the site. The remaining events deal with application requests, and they're triggered in the following order:

    • Application_BeginRequest
    • Application_AuthenticateRequest
    • Application_AuthorizeRequest
    • Application_ResolveRequestCache
    • Application_AcquireRequestState
    • Application_PreRequestHandlerExecute
    • Application_PreSendRequestHeaders
    • Application_PreSendRequestContent
    • <<code is executed>>
    • Application_PostRequestHandlerExecute
    • Application_ReleaseRequestState
    • Application_UpdateRequestCache
    • Application_EndRequest
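    One way to observe this ordering for yourself during development is to trace a few of the stages from Global.asax. A minimal sketch (the Debug output appears in the attached debugger's output window):

```csharp
protected void Application_BeginRequest(Object sender, EventArgs e) {
    // First event for every request
    System.Diagnostics.Debug.WriteLine("BeginRequest: " + Context.Request.Path);
}

protected void Application_PreRequestHandlerExecute(Object sender, EventArgs e) {
    // Fires just before the page or Web service handler runs
    System.Diagnostics.Debug.WriteLine("PreRequestHandlerExecute");
}

protected void Application_EndRequest(Object sender, EventArgs e) {
    // Last event for the request
    System.Diagnostics.Debug.WriteLine("EndRequest");
}
```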

    A common use of these events is security. The following C# example demonstrates several Global.asax events, with Application_AuthenticateRequest used to facilitate forms-based authentication via a cookie. In addition, the Application_Start event populates an application variable, Session_Start populates a session variable, and the Application_Error event displays a simple message stating that an error has occurred.

    protected void Application_Start(Object sender, EventArgs e) {
        Application["Title"] = "Builder.com Sample";
    }

    protected void Session_Start(Object sender, EventArgs e) {
        Session["startValue"] = 0;
    }

    protected void Application_AuthenticateRequest(Object sender, EventArgs e) {
        // Extract the forms authentication cookie
        string cookieName = FormsAuthentication.FormsCookieName;
        HttpCookie authCookie = Context.Request.Cookies[cookieName];
        if (null == authCookie) {
            // There is no authentication cookie.
            return;
        }
        FormsAuthenticationTicket authTicket = null;
        try {
            authTicket = FormsAuthentication.Decrypt(authCookie.Value);
        } catch (Exception) {
            // Log exception details (omitted for simplicity)
            return;
        }
        if (null == authTicket) {
            // Cookie failed to decrypt.
            return;
        }
        // When the ticket was created, the UserData property was assigned
        // a pipe-delimited string of role names.
        string[] roles = authTicket.UserData.Split('|');
        // Create an Identity object
        FormsIdentity id = new FormsIdentity(authTicket);
        // This principal will flow throughout the request.
        GenericPrincipal principal = new GenericPrincipal(id, roles);
        // Attach the new principal object to the current HttpContext object
        Context.User = principal;
    }

    protected void Application_Error(Object sender, EventArgs e) {
        Response.Write("Error encountered.");
    }
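    For the authentication handler above to receive a forms-authentication cookie at all, the application must be configured for forms authentication in web.config. A minimal sketch (the loginUrl value is illustrative; .ASPXAUTH is the default cookie name):

```xml
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms name=".ASPXAUTH" loginUrl="Login.aspx" />
    </authentication>
  </system.web>
</configuration>
```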

    This example provides a peek at the usefulness of the events contained in the Global.asax file. It's important to realize that these events are scoped to the entire application; consequently, any handlers placed in the file apply application-wide, hence the Global name.

    Here's the VB.NET equivalent of the previous code:

    Sub Application_Start(ByVal sender As Object, ByVal e As EventArgs)
        Application("Title") = "Builder.com Sample"
    End Sub

    Sub Session_Start(ByVal sender As Object, ByVal e As EventArgs)
        Session("startValue") = 0
    End Sub

    Sub Application_AuthenticateRequest(ByVal sender As Object, ByVal e As EventArgs)
        ' Extract the forms authentication cookie
        Dim cookieName As String = FormsAuthentication.FormsCookieName
        Dim authCookie As HttpCookie = Context.Request.Cookies(cookieName)
        If (authCookie Is Nothing) Then
            ' There is no authentication cookie.
            Return
        End If
        Dim authTicket As FormsAuthenticationTicket = Nothing
        Try
            authTicket = FormsAuthentication.Decrypt(authCookie.Value)
        Catch ex As Exception
            ' Log exception details (omitted for simplicity)
            Return
        End Try
        If (authTicket Is Nothing) Then
            ' Cookie failed to decrypt.
            Return
        End If
        ' The ticket's UserData property holds a pipe-delimited string of role names.
        Dim roles As String() = authTicket.UserData.Split("|"c)
        ' Create an Identity object
        Dim id As New FormsIdentity(authTicket)
        ' This principal will flow throughout the request.
        Dim principal As New GenericPrincipal(id, roles)
        ' Attach the new principal object to the current HttpContext object
        Context.User = principal
    End Sub

    Sub Application_Error(ByVal sender As Object, ByVal e As EventArgs)
        Response.Write("Error encountered.")
    End Sub
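    Once the principal is attached in either version, code later in the same request (in a page, for instance) can check role membership through the current HttpContext. A brief C# sketch, assuming a role name "One" was stored in the ticket's UserData:

```csharp
if (HttpContext.Current.User.IsInRole("One")) {
    // The current user carries the "One" role parsed from the ticket
}
```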

    A good resource

    The Global.asax file is the central point of an ASP.NET application. It provides numerous events for handling application-wide tasks such as user authentication, application startup, and user sessions. You should be familiar with this optional file if you want to build robust ASP.NET applications.
