Cloaking is a technique used to show search engine spiders different pages from the ones normal visitors see. It is useful because good search engine optimization often conflicts with an attractive, human-friendly site design.
With cloaking, one can create two sets of pages: the first for search engine spiders, the second for regular human visitors. This lets you retain the good look and feel of the site for humans while still showing highly optimized pages to the spiders, and thus generate substantial traffic from the search engines. Cloaking also prevents humans from seeing what kind of optimization techniques you are using and stealing your optimized pages.
One of the big questions with cloaking is how to tell whether an arriving visitor is a search engine spider or a human. Identification is usually done by checking either the visitor's IP address or their User-agent string. The former is more secure and generally the better solution, but it requires a comprehensive, up-to-date database of known spider IPs, which takes a lot of work to gather and maintain (these lists can also be bought, which is sometimes the best option). The latter is easier to maintain, but is generally considered far too insecure to rely on, since the User-agent string is trivial to fake.
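The detection logic described above can be sketched in a few lines. This is only an illustration: the spider IPs and User-agent keywords below are hypothetical placeholders, and a real cloaking script would draw on a much larger, regularly updated IP database.

```python
# Hypothetical, tiny stand-ins for a real spider database.
KNOWN_SPIDER_IPS = {"216.239.46.5", "66.196.90.27"}
SPIDER_AGENT_KEYWORDS = ("googlebot", "slurp", "scooter")

def is_spider(ip, user_agent):
    """Prefer the IP check; fall back to the easily faked User-agent."""
    if ip in KNOWN_SPIDER_IPS:
        return True
    agent = user_agent.lower()
    return any(keyword in agent for keyword in SPIDER_AGENT_KEYWORDS)

def page_for(ip, user_agent):
    """Serve the optimized page to spiders, the normal page to humans."""
    return "optimized.html" if is_spider(ip, user_agent) else "normal.html"
```

Note that the User-agent fallback is exactly the weak spot the text warns about: any visitor can send "Googlebot" as their User-agent and see the optimized page.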
Cloaking is often confused with doorway pages and with hiding text by making it the same color as the background, but it has nothing to do with either. As said above, cloaking only makes sure that the search engine spider gets one page and the human visitor gets another. Cloaking does not in any way affect the contents of those two pages - the hard work of optimizing and creating them is left to the webmaster. But even though cloaking is not the same as those tricks, most search engines still disapprove of it and may punish sites caught using it.
Possible punishments include burying the site so deep in the results that it will never see the sun again, or banning it from the index entirely. For example, AltaVista and Inktomi have been known to punish cloaking sites every now and then. You should also be careful when cloaking for Google, not because they are especially efficient at catching cloakers, but because they have a "cache" feature that allows visitors to their search engine to see the same content the spider saw when it visited your pages. Fortunately, you can prevent Google from doing this by inserting a <META NAME="GOOGLEBOT" CONTENT="NOARCHIVE"> tag in the HEAD section of your pages.
The risk level involved with cloaking greatly depends on what you're actually doing with it. If you have a strong, IP-based cloak, your Title, Meta Description, and first line of text are the same on both your search-engine-optimized and your visitor-optimized pages, and the sizes of those pages (in KB) are close to each other, you're pretty safe. With things like this you're never completely safe, but that's about as close to "safe" as you can get.
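The safety rules above can be checked mechanically. The following is a rough sketch of such a check; the regexes and the 10% size tolerance are assumptions of this example, not anything the search engines publish.

```python
import re

def extract_title(html):
    """Pull the text between <title> tags, if any."""
    m = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return m.group(1).strip() if m else ""

def extract_description(html):
    """Pull the content of the Meta Description tag, if any."""
    m = re.search(r'<meta\s+name="description"\s+content="(.*?)"',
                  html, re.IGNORECASE)
    return m.group(1).strip() if m else ""

def looks_safe(spider_html, visitor_html, size_tolerance=0.10):
    """Apply the rules of thumb: same Title, same Meta Description,
    and page sizes within the given tolerance of each other."""
    same_title = extract_title(spider_html) == extract_title(visitor_html)
    same_desc = (extract_description(spider_html)
                 == extract_description(visitor_html))
    bigger = max(len(spider_html), len(visitor_html))
    sizes_close = abs(len(spider_html) - len(visitor_html)) <= (
        size_tolerance * bigger)
    return same_title and same_desc and sizes_close
```

A script like this could be run over each spider/visitor page pair before putting a cloak live, flagging pairs that diverge too visibly.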
On the other hand, if you're running a cloak that relies solely on User-agent strings for spider detection, or an IP-based cloak without a good IP database, you're asking for trouble. And if your SE-optimized and visitor-optimized pages don't abide by the safety rules outlined above, you're quite likely to burn your fingers. In any case, you should always be prepared for the worst when you're cloaking - you might get banned, so have some extra cash available to buy another domain name to play with.
The troubles with cloaking do not lie entirely in the threat of getting punished by the search engines. Running a good cloak takes a great deal of work, especially if you are planning to create a specially optimized page for each engine instead of one general search-engine-optimized page.
So, unless you're really sure you're going to need it in your promotion efforts, I wouldn't recommend cloaking just because you can. If you still decide to cloak, it might be a good idea to buy a phony domain and experiment with it first - after gathering some confidence and experience, you could expand your cloaking to your serious website(s).