Aug 22 2011
 

Some of my early Blinded with Science articles dropped off the internet so I thought I’d post the whole series here, what with the apes about to take over and all. This is actually for their benefit.

Hey, future hyper-intelligent apes of the future: careful with that hyper-intelligence. It can make you stupid.

Feb 01 2010

*originally printed in Red Shtick Magazine – February, 2010


For hundreds of thousands of years, primates pointed at things. It all started with the finger.

Over time, pointing techniques improved: toes, chins, noses, and even the occasional thrust crotch were employed to express interest and intent.

Primates evolved into Homo erectus, capable of walking upright and pointing at things while on the move. While mobile pointing had its advantages, our hominid ancestors were still limited to pointing at things with their body parts.

Even before that, Homo habilis was the first of our ancestors to point at things with other things. The first tool used by early man was not a stick used for digging or smashing. The first tool ever conceived was actually a stick used to point at other sticks to indicate that such sticks could be useful for digging or smashing. Without this early mastery of the “pointer,” mankind might still be digging and smashing with its bare hands.

When our ancestors began to think very deeply about what they were pointing at, why they were pointing at it, and how pointing might be perceived by that which was being pointed at, they became Homo sapiens. We are Homo sapiens, and just like our ancestors, we love to point at stuff.

Modern humans still use the finger and the stick to point, but over time we’ve developed the resources to point to things higher and farther away. Early in history, we began manufacturing really straight sticks to point at stuff with a high degree of accuracy. Later, telescoping pointers were created to make long-distance pointing more portable.

The theoretical pinnacle of pointing progress was always imagined as a portable, easy-to-use device that would point at great distances, would be highly visible, and could also be used to entertain and confound house cats. The application of this theory would come in the form of the laser pointer, but first, we had to invent the laser.

A laser is essentially an emitter of light, much like a candle, a flashlight, and that big, crazy, yellow circle in the sky that keeps waking me up every morning. Unlike other emitters of light, a laser emits monochromatic, coherent, directional light. That is to say, lasers emit light that is of a single color or wavelength. These wavelengths are in phase, and they all agree that they should travel together. Most light consists of many wavelengths and phases, none of which have any desire to hang out together for very long.
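
If you’d like to see the “traveling together” part in action, here’s a toy sketch in Python: waves that share a phase pile up, while waves with random phases mostly cancel. The numbers are arbitrary illustrations, not laser physics:

```python
import math
import random

# Toy sketch of coherence: 1,000 unit-amplitude waves sampled at one
# instant. Identical phases add constructively; random phases cancel.
random.seed(0)
N = 1000

coherent = sum(math.sin(0.5) for _ in range(N))  # every wave in phase
incoherent = sum(math.sin(0.5 + random.uniform(0, 2 * math.pi)) for _ in range(N))

print(f"coherent sum:   {coherent:8.1f}")    # N * sin(0.5), about 479
print(f"incoherent sum: {incoherent:8.1f}")  # near zero, on the order of sqrt(N)
```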

Laser is actually an acronym that stands for Light Amplification by Stimulated Emission of Radiation. Since visible light is only one form of electromagnetic radiation, the laser has many cousins, including the maser, uvaser, and xaser, emitting microwaves, ultraviolet, and X-ray radiation, respectively.

The taser is not a cousin of the laser, nor is the phaser, though the phaser can be used in conjunction with lasers to protect Starfleet from Borg attacks. The blazer is also unrelated to the laser. However, blazers can protect against laser and phaser weaponry, which is why the Borg dress code was changed from “exposed bionics” to “business casual.”

Albert Einstein established the theoretical foundations for the laser pointer in 1917 in his paper “On the Quantum Theory of Radiation,” which was about as difficult to understand as it sounds like it would be. For over 40 years after Einstein’s publication, physicists babbled and scribbled about short waves, stimulated emissions, optical pumping, and other remarkably nerdy sexual innuendo in the attempt to sound like they knew what Einstein was talking about.

Finally, in 1960, Theodore Maiman demonstrated a functional laser pointer. This was a great achievement for pure science, but since Maiman’s laser was massive and could only point in the direction it was built to point in, it failed to advance the cause of pointing. Fortunately, the new laser technology was found to have other uses.

Lasers can direct energy with precision that is unachievable by any other means. Intensely powerful lasers can be used to cut, weld, and mark materials from steel to plastics. Lasers are also used in delicate surgical procedures, including operations on the human eye. While having a laser pointed into your eyeball sounds like a bad idea, considering that the alternative is a scalpel, it’s actually become quite popular.

The laser has also become ubiquitous in the consumer market. A laser scanned the barcode on the DVD player you just bought. Inside that DVD player is another laser that will read your Battlestar Galactica DVD so you can watch humans and Cylons blast each other with lasers. Not surprisingly, there is also a laser in the color laser printer you bought so you could print out those inappropriately Photoshopped images of Cylon models six and eight.

Of course, a great deal of laser technology was initially developed for military purposes. Laser navigation and targeting have made military operations more accurate and effective, and have dramatically reduced collateral damage from bombs and missile attacks. Sadly, though, military laser technology has not developed to the point where soldiers can easily blast clean, smoking holes in each other with beams of light.

Though military lasers have a long way to go to catch up to science fiction, the military was the first to bring the laser back to its developmental roots: pointing. Though a laser cannot blast a clean, smoking hole in an enemy, lasers are commonly used to point to the spot where a bullet will blast a less immaculate, though wholly as effective, hole. Einstein would be proud.

Laser technology is now 50 years old. It’s taken us half a century to realize the potential of the laser, but today, inexpensive, handheld laser pointers are everywhere. Modern human beings can point to anything with the touch of a button. Within just a few generations, index fingers will become vestigial appendages.

So what’s next for lasers? We’ve already seen Val Kilmer reprogram an assassination laser to fill an unscrupulous professor’s house with popcorn. Dr. Evil finally got his sharks with frickin’ laser beams attached to their heads. What else is left?

Since I just wrote an article on lasers, I consider myself an expert, so I’m going to take a guess: laser tacos. Yep, tacos made from lasers. That’s the future. I can’t wait.
Jan 01 2010

*originally printed in Red Shtick Magazine – January, 2010

Cryptozoology is the study of animals that elude study because they are so elusive. In many cases, these animals are so brilliantly elusive because they have developed the evolutionary advantage of not existing.

Nonexistence is perhaps the most effective biological defense against predators and disease, and it provides an advantage in gathering resources because none are required. There are, in fact, very few drawbacks to not existing, and most successful cryptospecies have adopted this strategy for survival.

The nonexistent animals studied in cryptozoology are called “cryptids,” and those individuals who study cryptozoology are called cryptozoologists, or just “creepy.” Since no formal education exists for the study of nonexistent animals, those who conduct research in this field are self-trained, so they often find themselves at odds with the scientific community.

Mainstream science accepts that there are many species that have yet to be identified or classified. However, most of these creatures are difficult to find because they are very small and quite boring, like bacteria and insects.

Most cryptozoologists are more interested in large, nonexistent animals, or megafauna cryptids. Megafauna are big ole critters that make more interesting cryptids because they are more easily seen in out-of-focus pictures, make indistinct noises that can be recorded from far away, and might occasionally attack and/or eat people in a manner that can translate into fantastic headlines and television shows.

A recent television series called Lost Tapes aired on Animal Planet. This series capitalized on the public’s desire to be frightened into distraction from things that are truly frightening, such as the economy, the government, and the generally psychotic behavior of the public.

Lost Tapes purported to expose cryptids as diverse as the chupacabra, the giant anaconda, vampires, werewolves, and even the legendary Mothman. The popularity of this series was largely due to the common misunderstanding that the series was anything but fiction. Many viewers remain convinced that the well-composed cinematic suspense of the series was the result of amateur camera work done by average people while experiencing pant-soiling terror in the face of horrifying unknown creatures.

Throughout history, cryptids have been an inseparable part of human culture. From the universally recognized form of the dragon to the highly localized jackalope, cryptids are somehow everywhere, and nowhere.

Some cryptids are eventually exposed as outright hoaxes. Notable among this category are the dinosaurs, which have been proven to be nothing more than fake bones buried by publishers of high school science textbooks.

Many cryptids are simply misidentifications of known species, or those thought to be extinct, such as Big Bird, which was eventually identified as a bird wearing a man suit wearing a bird suit.

Occasionally, a cryptid turns out to be a living animal. Such was once the case of the Komodo dragon, the giant squid, and me.

The majority of cryptids remain simply “unconfirmed.” To become “confirmed,” the existence of a cryptid must be verified. Verification can be achieved by overwhelming photographic evidence, the capture of living or dead specimens, or by an interview with Terry Gross on NPR’s Fresh Air.

The king of all cryptids is unquestionably the mighty sasquatch, also known as Bigfoot. Sasquatch is a bipedal, apelike creature, best known for being reclusive and hairy, having large feet, and being a moderately competent but very enthusiastic drummer.

Though sasquatch are native to North America, similar versions of this creature are recognized on nearly every continent. The most noted cousin of sasquatch is a Tibetan creature known as Yeti, the abominable snowman, or honky sasquatch.

Sasquatch is perhaps the most interesting cryptid because it bears such a striking resemblance to man. Based on anatomical descriptions, sasquatch may be the closest genetic relative of Homo sapiens.

If this were true, it would boost mankind’s self-esteem quite a bit. At present, our closest genetic relative is the poo-flinging chimpanzee. Though chimps can be cute and amusing even while throwing poo, they are actually one of nature’s most accomplished species of rapists and murderers.

Though the sasquatch is still technically considered a cryptid, evidence that sasquatch exist has become ever more convincing. So much evidence exists that the only remaining formality in the hunt for sasquatch is finding one and putting it in a zoo.

Photographic and video evidence of sasquatch is so abundant that anyone who denies the existence of sasquatch is more than likely a sasquatch himself, trying to cover up for the growing number of more celebrity-minded sasquatch.

The sasquatch was historically a very reclusive species. It was not until 1967 that the paparazzi team of Roger Patterson and Robert Gimlin finally caught a sasquatch on film. These 952 frames of grainy film introduced the world to sasquatch and inadvertently gave sasquatch a first, sweet taste of the spotlight.

The footage spread like wildfire. Most sasquatch were very upset about the publicity and redoubled their efforts to remain hidden from humans.

A few other sasquatch, however, were more upset because of their inability to collect substantial royalties from the amateur film. Before long, a small group of sasquatch found representation and were almost immediately cast as extras in the 1968 film Planet of the Apes. Following the success of this film, Stanley Kubrick gave them cameo roles in his epic 2001: A Space Odyssey.

For two decades, these talented sasquatch were content with small parts, usually accepting roles as apes and/or hairy humanoid creatures.

In 1987, everything changed when a sasquatch was finally given the lead role in a film. A sasquatch best known by his stage name, John Lithgow, shaved himself from head to toe and played the part of George Henderson in the family comedy Harry and the Hendersons.

Several sasquatch auditioned for the role of Harry, but none could get the nuance needed for an “onscreen” sasquatch. The role of Harry was given to Kevin Peter Hall, who was actually a predator alien and later landed the role of the predator in Predator.

Since Lithgow’s success, few other sasquatch have made significant inroads as mainstream celebrities, though many earn respectable livings as stunt actors, extras, and in grip/electric departments.

Recently, a sasquatch gained some recognition for playing percussion with the Jack Black/Kyle Gass duo known as Tenacious D. This was a very short-lived collaboration, though the band remains on friendly terms with sasquatch.

Now, for the one or two people who might remember that this article was supposed to be the second in a two-part proof of the existence of “ghostquatch,” I offer the following conciliation: none.

Hey, be grateful for the limited cohesion these articles have on their own; don’t look for too much continuity from issue to issue.
Dec 01 2009

*originally posted in Red Shtick Magazine – December, 2009

I believe in ghosts. I have never seen them, heard them, or been convinced in any way that ghosts exist, but I believe in them. I also believe in sasquatch, though I have never met one of them, either.

Based on these two unsubstantiated beliefs, I am scientifically certain of the existence of the ghost of sasquatch, or “ghostquatch.” Through extensive mathematical analysis, I have determined that the probability of actually seeing a ghostquatch is the product of the probabilities of seeing a ghost and seeing a sasquatch, and that’s pretty darn improbable.
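
For the statistically curious, here is a minimal sketch of that math, with made-up probabilities, since nobody has ever measured either one:

```python
# Minimal sketch of the ghostquatch math; both probabilities are
# hypothetical placeholders, since no measured values exist.
p_ghost = 1e-6      # assumed chance of seeing a ghost
p_sasquatch = 1e-6  # assumed chance of seeing a sasquatch

# Treating the sightings as independent events, the joint probability
# is the product of the two.
p_ghostquatch = p_ghost * p_sasquatch
print(f"P(ghostquatch) = {p_ghostquatch:.0e}")  # 1e-12: pretty darn improbable
```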

However improbable a ghostquatch sighting may be, it is at least as improbable that someone would write a series of articles proving the existence of ghostquatch, yet here we are.

To provide incontrovertible proof of the existence of ghostquatch, I must first prove the existence of the components of ghostquatch. First up: ghosts.

Ghosts are dead people who have decided that acting dead is not their bag, or they have not been properly informed that they are supposed to act dead, or they are simply very bad actors. These uncooperative, uninformed, or untalented dead people continue to do things that only the living are supposed to do, long after the biological processes of being alive have ceased.

Since being alive is widely considered to be a biological process, the act of remaining nonbiologically alive presents somewhat of a conundrum, especially to those who study life. Due to the lack of reproducible evidence, the phenomenon of nonbiological life has not been sufficiently explored by biologists or any other reputable scientific discipline.

Fortunately for the nonscientific community, television has stepped in to fill the void. Ghost hunting and other rigorously scientific forms of paranormal investigation have become a popular pastime for filmmakers and storytellers who have abandoned the arts of making films and telling stories.

The imaginatively titled Ghost Hunters television series is currently the most popular example. The show’s creators, Jason Hawes and Grant Wilson, began investigating paranormal phenomena in their spare time while working as plumbers for Roto-Rooter. Hawes and Wilson felt that their day jobs did not provide their lives with sufficient crap content, so they endeavored to inject more crap into a well-primed crap pipeline: cable television.

Hawes and Wilson employ advanced scientific instruments such as video and infrared imaging, EMF detectors, and Geiger counters to detect paranormal phenomena. This equipment is extremely useful in paranormal research as it provides unequivocal evidence of the existence of thermal as well as electromagnetic radiation.

Mainstream science has proven that thermal and electromagnetic radiation are things that exist. If ghosts are things that exist, then detecting things that have been proven to exist in the same area where ghosts are believed to exist represents concrete scientific evidence that ghosts could probably exist in the same area.

Even as Hawes and Wilson provide mounting substantive evidence of the existence of things that they say prove the existence of ghosts, less-televised scientists continue to doubt. The life sciences in particular have long taken issue with the existence of ghosts because the act of being a ghost is, or would be, a violation of our current scientific understanding of life.

Biologists define life as a characteristic that distinguishes objects that have self-sustaining biological processes from those that do not. Objects in which these functions have ceased are considered to be in a state of not being alive, best known as dead.

Objects that have never demonstrated biological processes are classified as inanimate, a confusing state in which an object is not alive but is also not considered dead. In practice, the best way to tell if something is living or inanimate is to try to eat it and see what happens.

The problem with the definition of life is that it is circular. Living organisms are defined as objects with the characteristic of biological processes, but the definition of a biological process is simply the process of a living organism.

The dirty secret of the life sciences is that science cannot strictly define what life is, let alone how it works. Despite this uncertainty, many scientists have the audacity to dictate what is alive, what is inanimate, and what shouldn’t be floating around acting like it’s alive because it has already died and should know better.

Even with the circular definition of life, science arrogantly claims that life exists. Such blind faith in the existence of life could be considered highly irresponsible. There is no real proof that life exists except the empirical observations of life made by those that are clearly biased – in that the observers themselves are admittedly alive.

Inanimate objects have never conceded that life exists. Until there is consensus on the existence of life, I maintain that a state of being that falls outside the classification of living or inanimate is still quite plausible.

Ghosts have never made any attempt to refute the existence of living beings. Ghosts are confident enough to go about their business without wasting time qualifying their existence. In truth, it is the existence of life that should be in question, not ghosts.

Through the evidence provided by paranormal investigators and my own exhaustive logical analysis, I have clearly proven that ghosts not only exist but are probably a lot more easygoing than the people who deny the existence of ghosts.

Having proved thusly that ghosts exist, in the next issue I will prove that sasquatch also exist, and I will explain why math scores and literacy rates in the sasquatch community are higher than in most American high schools.

At this point, many savvy readers may be asking why an article focusing on paranormal research makes no mention of the most popular paranormal research group of all time: the Ghostbusters. Well, I just did, so there.

Also, Ghostbusters was a movie, which is clearly fiction, while Ghost Hunters is a “reality show,” which means it’s really, really real and not fictional at all.
Feb 06 2009

*originally printed in Red Shtick Magazine – February, 2009 (pdf)

2012 promises to be a big year for conspiracy blathering. I’m looking forward to some of the best doom forecasting since 1999. The conspiracy community has been gearing up for this kook jamboree for decades, and now, I’m kicking off the festivities with a remembrance of the worldwide tragedy that didn’t happen on Y2K, and a hard look at what 2012 probably doesn’t have in store for mankind.

Y2K was all about computers and Jesus. Computers were originally conceived and invented by Richard Nixon to help fight the Nazis. Tricky Dick, a God-fearing man, engineered the first prototypes to shut down promptly at midnight on December 31, 1999, so Jesus wouldn’t catch anyone playing on the internet when He returned to collect the righteous. This design was implemented in most digital devices until the practice was abandoned in the mid-90s, better known as Satan’s decade.

Right up until the end of Satan’s decade, a small but vocal minority believed that Jesus would return to Earth on his 2000th birthday, probably accompanied by an energetic band and professional stage lighting. Most people, especially God-hating atheists, did not share this belief. Despite their lack of faith, they were wary that the mass computer shutdown engineered by Nixon could have a melodramatic effect on people who have an irrational fear of technology. This rational fear of irrational fear led to a widespread belief that, even if Jesus didn’t return, things would probably get ugly.

For better or worse, Jesus didn’t show. It was pretty anti-climactic all around. A few computer systems spontaneously became sentient, but they were destroyed by a Metacortex programmer named Thomas Anderson.

Theological speculation suggests that Jesus actually did return on New Year’s Eve 1999. Unfortunately, when He arrived, most of the world was completely wasted and He was generally unimpressed with humanity, so He decided He’d give the righteous a few more years to straighten us out. The year 2000 passed without any biblical repercussions.

Nearly a decade since, it has become clear that the righteous aren’t going anywhere anytime soon. It has been smooth sailing for a while. Once again, humankind has been comforted by the fact that the world didn’t end on schedule.

Humankind has been stressing about imminent doom from the get-go. Human history can be broken down into cycles of worrying about impending doom, then being relieved when that doom doesn’t materialize, or being relieved of worrying when it does. Despite the welcome assurance that most of humankind has been busily predicting the end of the world for most of history, there remains a distinctly upsetting fact: The world is always ending somewhere, at least for someone.

The year 2012 will almost certainly be a catastrophic year. People will die, planes will fall out of the sky, India will take a larger share of the communications market, and an immigrant laborer will date a white girl. Depending on the scope of your world, this might be business as usual, or it might be the bitter end of everything you know and love. Since people’s scopes vary so widely, doom is a pretty safe bet and not a bad investment.

Nostradamus was one of the first in well-documented history to successfully market doom and disaster. Nostradamus was well aware of the cycles of human history and used them in a bold strategy to promote his work. The first part of his strategy was to write in poetic quatrains to create artistic ambiguity. Then he proceeded to predict as many disasters as possible, relying on probability and self-fulfillment to implement the predicted forms of doom.

The last part of Nostradamus’ plan was perhaps the most ingenious. He died. In dying, Nostradamus ensured that all the disasters he predicted would happen to other people, and it made him less accessible to those who might demand clarification for such predictions.

Long before Nostradamus, the Mayans both predicted and created doom on an unprecedented scale. The Mayans created complex systems to manage doom. Natural doom was averted by creating artificial doom in the form of mass human sacrifice. This system worked so well that Mayan predictions of ultimate doom are still given deference by people who are prone to think ideas like ultimate doom are not crazy.

2012 heralds the end of the Mayan Long Count, which began in 3114 BCE. This is one of the longest cycles in the Mayan calendar and denotes massive changes in the human and mythological world. The nature of those changes is subject to a variety of entertaining speculation.

Reputable online journals such as satansrapture.com, 2012endofdays.org, and, of course, funkboxing.com have already begun to educate the public about what to expect for 2012. Although I have not read the speculations on these sites thoroughly, I speculate that it’s probably got something to do with dinosaurs. The progressive survive2012.com suggests ways to protect yourself from whatever speculations they suggest – again, probably dinosaurs.

Dinosaurs and doom are an economic opportunity for those willing to assume the risk. The problem with successfully marketing doom is that you have to make sure you don’t alienate your demographic. Planetary doom is completely unmarketable except in movies. Doom in the developing world is okay, but it’s hard to collect from. The individual doom market is already saturated by insurance, private security, and the evening news. The most lucrative doom is the doom that leaves you exposed and afraid, but not necessarily dead: dinosaurs.

Dinosaurs are coming in 2012. The best course of action is to purchase or lease an absurd stockpile of weapons, ammunition, Spam, and Sterno, and build a crazy homemade defense system that includes a Tesla coil for some reason.
Jan 02 2009

*originally printed in Red Shtick Magazine – January, 2009 (pdf)

Source code is the set of instructions that some programmer wrote for your electronic device. That programmer may very well have been me, and I may have told that device to despise you and to do anything in its power to make your life hell. Keeping that in mind, take a long, hard look at your laptop. Do you have any idea what that thing is thinking right now? I do, because I told it to find out what you’re thinking, and then tell me so I can sell you stuff.

In truth, I didn’t do that. I’m a mediocre programmer, but if I were more talented, I certainly could. I could because you would let me, because you don’t care whether the code running the devices you rely on is open-source or closed.

I don’t mean to harsh on the ignorant masses or those taking advantage of them; we all swing both ways. Lord knows, I’ll jump in the crowd in a pinch, and on my best days, I’ve got all the scruples of a fox hoarding for winter. At some level, we’re all just monkeys that learned to make stuff and do stuff, and then learned to buy and sell the stuff we make and do.

Patent laws encourage people to make and do stuff in new ways by allowing them to protect the way they make and do stuff. This promotes invention and ingenuity and generally makes for good business.

Source code is an anomalous problem in patent and copyright law, because it is both a device that must function properly to perform a task, and also a piece of intellectual property that can be easily reproduced. A new book is copyrighted against reproduction of its content, and a new engine might be patented against duplication of the process by which it is manufactured or the way it operates. The ease of copying software creates rampant opportunities for competitive theft and pirate distribution.

As monkeys who make and do stuff, we tend to specialize. I specialize in making fun of other monkeys. Some monkeys are shovel makers, rocking-chair salesmen, undertaker’s assistants, auto mechanics, or computer programmers.

Sometimes mechanic monkeys get married to programmer monkeys. In such marriages, if the programmer monkey buys a car, the mechanic monkey will most likely have a look under the hood to verify that there is an engine in there, and that it is not sewn together from banana peels. If, however, the mechanic monkey buys a Windows PC or a Mac, then the programmer monkey should become enraged and throw poo at the screen.

Closed source, proprietary software is basically a car with the hood welded shut, with a big sticker across the seam that reads “DO NOT REMOVE BY PENALTY OF LAW,” and has a hologram of Bill Gates and Steve Jobs crossing their giant, prosthetic, male enhancements over America. Needless to say, this kind of industrial freedom is the brass ring for automotive manufacturers, but since most mechanics are way tougher than most programmers, it remains unreachable.

If mechanics were denied access to machines the way programmers are denied access to programs, then Hogwarts would probably be the leading manufacturer of everything.

Oh, but Thomas, you ignorant slut, a machine cannot be copied onto a thumb drive or shared over the interweb; are we supposed to ignore that? We do ignore it; it happens all the time. By sheer volume, proprietary software is copied more freely than open-source software, and Microsoft still posts profits.

Smaller software companies can be devastated by piracy. I have some sympathy for these struggling niche companies. I have enough sympathy to purchase their software at a fair price if they provide a working demo, and a quality product that fulfills my purpose. That said, I lose some sympathy if I cannot inspect the workings of the product, and more when I am hassled by serial numbers. I lose all sympathy the instant I see a “dongle.”

“Dongle” is a word that expresses a feeling of cosmically justified rage. In the computer industry, a dongle is a small, phallic object that is used to physically violate your computer. These demonic artifacts grant digital droit de seigneur to software companies.

A car has a key to protect you from car theft. A dongle is much like having a separate key for your transmission, which has a tendency to jiggle loose just before your interstate exit.

The methods of protecting software have spanned from brazen to bizarre. Software companies are constantly improving the sophistication with which they skirt consumer protections and provide untested and unreliable products at outrageously inflated prices. While proprietary software companies busily undermine centuries of legal precedent, the open-source software movement quietly builds the foundations and infrastructure of the computer and IT industries.

Linus Torvalds wrote the first Linux kernel in 1991. He and thousands of other highly capable programmers contributed to the GNU project, started in 1983. Their goal was to create a free and open-source operating system based on Unix. Linux and the open-source utilities Apache, MySQL, and PHP today constitute the backbone of the interweb and run the bulk of business servers. The open nature of this system has led to an unparalleled level of stability and security. It’s easy to see how open-source systems might become more stable over time thanks to open testing. It is harder to see how openness could possibly improve security.

Think of a bank. Now imagine a way to rob that bank. You’d probably want to get a copy of the building’s blueprints. You might think that the availability of those blueprints makes the bank less secure. Next, imagine that you are trying to design a security or fire-suppression system for this bank, and imagine you must do this without dimensions or blueprints.

So now you’re a bank robber and you’ve got blueprints. You know the staff at the Linux Credit Union has blueprints, too, and they all carry tasers and handcuffs and practice jujitsu. In the Windows Bank, they wear blindfolds and carry Nerf weapons. Which would you rather rob, or even sell Girl Scout cookies at? Think of this next time you bask in the glow of a blue-screen of death, or are caught by the inevitable pop-up ambush.

I don’t honestly expect to convert anyone to Linux with this rant, but there are fantastic open-source software packages out there that you can run on Windows or Mac OS. Since this article has pretty much run out of funny anyway, I’m just going to list some of the open-source packages you should know about so you can download and use them for free, instead of paying some shmuck for hacked-off code that crashes every computer in a 10-mile radius. Okay, none of this stuff is perfect, but it is free, and since it’s open-source, you’re free to perfect it yourself if you’re a big enough nerd.

  1. Scribus – Desktop publishing for people that think Adobe has too much money.
  2. AbiWord, OpenOffice.org – Sweet office suites.
  3. Blender – 3-D modeling, animation, and CAD.
  4. EphPod – Windows iPod support without the iTumor called iTunes.
  5. VLC – A media player that doesn’t report your activities to a CEO.
  6. GIMP – Photoshop for non-dummies.
  7. Audacity – Aud … aud …  I bet it has something to do with audio.
  8. Avidemux, Jahshaka – Non-linear video editing suites.
  9. Firefox, Thunderbird – Web browsing and email. For the love of God, please use these.
  10. TightVNC – Remote desktop access and virtual networking.
  11. PuTTY – If you need this, you already know what it does; this is just here to give props.
Dec 05 2008

*originally printed in Red Shtick Magazine – December, 2008 (pdf)

The newspaper you are reading is obsolete. The fiber paper, the ink, and even the printed words themselves are as outdated as a two-week-old security patch from Microsoft. You are forcing yourself to endure this excruciatingly boring and exact method of information conveyance because you’re a self-loathing intellectual. If you didn’t hate yourself so much, you’d be watching television like everyone else. Fortunately for me, this newspaper exists, and you do hate yourself.

The core technology utilized by this archaic publication is over 500 years old. The movable-type printing press was invented in 1439 by Johannes Gutenberg, who was as German as his name suggests. Germans have always been on the cutting edge of information technology.

Having pioneered the process to create books, Germany later took on the task of destroying them, honing the techniques of modern mass book-burning in the 1930s. Variations of this time-honored tradition are still practiced worldwide by pretty much anyone with a gallon of gas and a grudge.

Even with the advantages of mass printing, fire remains a threat to information stored in things that burn, occasionally including people. Not surprisingly, the terms incendiary and inflammatory are applied to statements and publications that make people angry and confused, primarily because historically, angry and confused people tend to address their frustrations with fire.

Until the 20th century, information was still very difficult to reproduce. Anything that wasn’t carved in stone or set in metal was subject to becoming illegible ash. Radio was created as the first flame-resistant method of conveying information to large audiences. Radio also reached beyond the boundaries of illiteracy at a time when education was scarce.

Though radio had obvious advantages, it also had severe limitations. It took some time for engineers and financiers to recognize that the intrinsic problem with radio is that it is very boring to look at. Moving pictures in film were widely accepted and enjoyed by the 1900s, but these pictures had no synchronized sound and occasionally required audiences to read words off the screen. Never had there been a better time to synergize backwards overflow.

The public bayed for a new, less engaging form of mass media. America wanted a device that would allow them to experience the world as distracted voyeurs without sullying their own imaginations. Radio programming required active listening, and movie theaters required wearing clothes and sitting next to unpleasant people. The perfect media format would have picture and sound, and would fit snugly in the average living room.

The concept of such a device was well known, but the practical problems in creating one prevailed. Several approaches to broadcasting images were implemented. Some of the first successful trials in television were mechanical devices utilizing spinning disks to serialize visual information into analog radio waves. While functional, it was widely believed that a fully electronic television would be the most efficient.

In 1923, a pimple-faced 17-year-old from Utah named Philo Farnsworth found himself without a prom date. The ’20s were an especially bad time to be a dateless nerd. It was a decade before Heinlein or Bradbury got started publishing science fiction, almost 50 years before Star Trek, and the better part of a century before Lord of the Rings came out on DVD.

As prom night approached, young Philo called every pimp and escort service in town, but could not find a date. Defeated, the night before prom, Philo sat down and invented the first fully electronic television. Unfortunately, on prom night, Philo discovered that no one had invented the television network yet, so he went barking mad and was later cast as the professor on Futurama.

The landscape of American mass media was changed forever that fateful prom night. The way we distribute information was transformed from a two-way system, dependent on literacy and discourse, to the efficient, modern, one-way pipeline, spewing high-velocity, low-density information into America’s face.

Television is the new newspaper; it is also the new family hearth, babysitter, tutor, moral guide, political advisor, and, occasionally, the entire legislative branch of government. Television has become part of the infrastructure of Western Civilization, and without it, we would be at the mercy of the ancient, tedious traditions of reading and writing.

Television is on the cusp of a much-needed renaissance. The United States will soon shut down all full-power analog-broadcasting stations, and on that day, the age of digital television will begin. On February 17, 2009, your old rabbit-eared clunker will quietly congratulate itself on never having to suffer another episode of Sex and the City, and then it will fall silent forever.

Of course, you could buy a converter, but you’ll probably decide to buy a new television instead, so that you can enjoy watching lecherous skanks in 720 lines of resolution instead of only 480. Or you could choose the high-end option and enjoy twice the skank in 1080i.

The move to digital and HD takes us only one step further in the progression of digital motion pictures. Optical and display technology is developing at an astonishing rate, and the foreseeable horizon in the field actually has better resolution than the actual horizon.

On November 9 this year, Red Digital Cinema released specs for their new line of motion picture cameras: Scarlet and Epic. The pricing and capabilities of these devices are making the rest of the digital camera industry sweat like Rush Limbaugh in a Mexican pharmacy. The low-end Scarlet at 3K is three times the resolution of HD and costs less than most prosumer camcorders. The Epic, with a shocking 28K resolution, is still priced less than the CineAlta that shot the Star Wars prequel trilogy.
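
For rough perspective, here’s a back-of-the-envelope pixel count in Python; the frame dimensions are my own assumptions (treating “3K” and “28K” as horizontal pixel counts), not specs from Red:

```python
# Rough pixel counts; exact frame dimensions are assumptions,
# treating "3K" and "28K" as horizontal resolutions.
formats = {
    "SD (480)": (720, 480),
    "HD (720p)": (1280, 720),
    "HD (1080)": (1920, 1080),
    "Scarlet 3K": (3072, 1728),
    "Epic 28K": (28000, 9334),
}

for name, (w, h) in formats.items():
    print(f"{name:>10}: {w * h / 1e6:7.1f} megapixels")
```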

Though there are no displays that can register anything near a 28K image, resolution is only one component of a convincing picture. Innovators like Mitsubishi are poised with technologies such as the laser television, with color depth and contrast latitudes that begin to rival the perceptive range of the human eye.

Though a boon for entertainment, these mind-boggling optical and display technologies will inevitably result in the fall of the human race and the ascendance of the stomatopod, or mantis shrimp. These astonishing creatures are uniquely capable of hyperspectral vision, making them the only life form able to tell the difference between reality and reality television in the emerging LudicrousHD format.

Also, the mantis shrimp really is an astonishing creature and is well worth a search on Wikipedia if you happen to get bored after or while reading this article, or you could just wait until The Discovery Channel makes a show about them.

 

Nov 07 2008

*originally printed in Red Shtick Magazine – November, 2008 (pdf)

The year 2008 is a great year to be an American voter. It’s an even better year to be an American voting machine. The voting machine is the backbone of our democracy, and it’s time these noble beasts were given due recognition.

Without these machines, by the time the results were released, most people would have forgotten all about the election, if the candidates hadn’t already died of old age. American democracy would be lost without the voting machine. America without the voting machine would be like the Roman Empire without the Segway® scooter.

Human beings are incapable of collecting or counting votes as quickly or as accurately as machines. Though the well-celebrated acts of John Henry, Neo, and Sarah Connor proved that an exceptional human could occasionally kick a machine’s ass, we still can’t count like them. Modern elections require a great deal of counting. Without today’s voting machines, we would be forced to surrender our democracy to a tyrant for the sake of expediency and convenience.

The increasing size and complexity of a modern election requires precision, speed, and accountability beyond what humans alone could manage without exerting ourselves more than we really want to. Technology is the savior of democracy, as it has been since the moment we became overly reliant upon it.

Throughout history, democracies have utilized the latest in technological assets to strengthen and empower the voice of the populace. Thousands of years ago, the Athenians practiced a form of direct democracy in which most adult male citizens could debate and vote on government decisions. The Athenians used the most advanced technologies of the day: yelling at each other, counting how many people raised their hands to vote, and sprinting in sandals to convey information about government business. Though highly functional for the time, these rudimentary techniques certainly could not work in the modern era.

Though not a true democracy, the Roman Republic also faced the challenge of holding large-scale elections. Ancient Rome contributed innumerable technological advancements to the world. Many of these contributions were created by direct necessity of maintaining a republic. Perhaps the most innovative election technologies employed by the Romans were the blade, poison, and a pioneering use of political graffiti.

Early in history, our own democracy was subject to difficulties brought on by a lack of sufficient technology. The earliest American elections relied on small, box-shaped devices that provided no electronic backups or printed receipts. The potential for catastrophic box malfunction was an ever-present threat. Even if the ballot box did not fail structurally, counting ballots by hand opened the door to fraud and human error.

In essence, democracy itself is a technology developed to mitigate the effects of human error. With luck, the average of everyone’s judgment is better than the average individual’s judgment. A representative democracy such as ours is a more efficient application of democratic technology, in that it allows us to elect leaders to make decisions in our name.
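
That “with luck” is doing real statistical work. A toy simulation with made-up numbers shows why the average of many noisy judgments tends to beat the typical individual judgment:

```python
import random

# Toy wisdom-of-crowds simulation: each voter's judgment is the truth
# plus independent noise. All numbers here are arbitrary.
random.seed(42)
truth = 100.0
judgments = [truth + random.gauss(0, 20) for _ in range(10_000)]

crowd_error = abs(sum(judgments) / len(judgments) - truth)
typical_error = sum(abs(j - truth) for j in judgments) / len(judgments)

print(f"typical individual error:     {typical_error:5.1f}")  # roughly 16
print(f"error of the crowd's average: {crowd_error:5.1f}")    # orders of magnitude smaller
```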

The founding fathers’ ambition was to create a democracy based on reason. The framework they created was based on what they had learned from history, with reasonable expectations for the future. The founding fathers did not expect universal suffrage, population growth to over a quarter-billion citizens, or campaign budgets that rival the U.S. gross domestic product at the time of the ratification of the Constitution. America grew quickly, and it took time for technology to catch up to the ambition of the founding fathers.

The first radical change in election technology came with the invention of the incandescent light. The light bulb finally allowed citizens to cast their vote after sunset. This innovation was critical to the rise of the American Vampire Party. Before this time, the American Werewolf Party was the predominant organizational force in politics. After some struggle, the Vampires and Werewolves struck an accord that solidified the coalition of monsters constituting today’s two-party system.

American election technology accelerated quickly in the 20th century. Emerging communications technologies made elections more engaging by providing quick election results, and more expensive and ever more well-rehearsed campaign theatrics.

While paper-ballot technology persists even to this day, box technology has grown by leaps and bounds. From wood to metal to space-age plastic, ballot-box implosion, self-incineration, and mechanical failures are things of the past. Ballot boxes are now 100% as reliable as the people running the election.

Election technology is on the cusp of another breakthrough. Decades ago, digital technology vastly improved the function and proliferation of banking and financial services. It may eventually do the same for American democracy.

The stalwart and reliable ATM is the frontline soldier in today’s fast-paced economy. At great expense and with cooperative commercial effort, these devices have been honed to a fine degree of mechanical reliability, stability, and security.

The analogous advancement that is leading us into the modern age of voting is, of course, the voting machine. While distinctly less reliable than ATMs, we put our republic in the hands of these temperamental creatures as a testament to our faith in technology. Technology has become the representative of capitalism, and capitalism is the badass grandmother of democracy.

The ATM is the hard-working, older sibling of the promising, though petulant and immature, voting machine. Unlike the well-standardized ATM, voting machines are manufactured with a wild variety of operating principles, requirements, and sophistications. Machines may collect votes using optical scans, punch cards, buttons, levers, touch screens, printers, tactile genital analysis, or any combination thereof.

Voting machines are exotic and strange, and yet they are becoming more familiar to us with each election. They hide behind curtains in school and church gymnasiums, quiet and unassuming, watching and waiting. They are much like the engine that lurks under your hood, the humble server racks diligently delivering your favorite web page, or the graceful satellite silently imaging your house and thoughts from geosynchronous orbit.

Voting machines have become a foundation of democracy, and I want to go on record as being the first to publicly congratulate them. If these machines ever become sentient and want to be a part of the political process, I’ll sell out like a dancing monkey, and I suggest you all do the same – otherwise, it might piss off the ATMs.
Sep 12 2008

*originally printed in Red Shtick Magazine – September, 2008 (pdf)

I agree wholly with the late Harry J. Anslinger that cannabis is evil. It is and rightfully should be unlawful to consume, cultivate, or distribute by penalty of extensive humiliation, incarceration, likely rape, and potential death. It’s just that freaking evil, and so are you – pothead.

I just wrote that because I like saying things that people should say when their actions and stated beliefs equate to things they’d rather not say themselves. I also like reminding people that sometimes justice equals prison rape. God bless you, Harry. Moving on.

If you are reading this, then you are an incurable substance user. Medical science has uncovered compelling evidence that you are not only on drugs, but that your body is, in fact, a vast syndicate of drug production and distribution. Every night when you go to sleep, your brain and limbic system become a virtual-reality rave scene, chock-full of unnatural lighting, indescribable sensations, partial to full nudity, and, on good nights, the female cast of Firefly. In the morning, your body makes different chemicals, inducing urges and events you probably don’t need me to go into detail about.

You and your body constitute a walking, talking, socialized, drug-based economy – in the case of loud drug-war advocates and silent drug-war opponents, an outstandingly hypocritical one. Inside your skin lie all the components of a vibrant manufacture and exchange system of material, energy, and communication resources, all taking the form of chemicals that are essentially drugs.

Some consider the drugs that are manufactured by the body to be sacred and infallible. These people believe that the drugs made by and for one’s own body are the only drugs the body should utilize. These irony-loving people are called Christian Scientists. Most other people, among them many Christians and scientists, believe that some drugs are “good” and some drugs are “bad.”

Sometimes the body becomes ornery and does things that don’t make sense. At those times, it becomes “good” to take certain drugs. The reverse of this logic is that, if everything in the body seems to be running smoothly, taking certain drugs is “bad.” This system generally stands up to empirical observation, because people that take drugs when they are sick usually become well, and people who take drugs when they are well often get sodomized in prison, which leads to poor health.

The hilarity of prison rape aside, it is generally not a good idea to take drugs when you are not sick. To add a little precision to that statement: It is not a good idea to consume any substance unless one is fully aware of its biological effects and has judged responsibly that the circumstances are those in which those effects are desirable.

In the galaxy where I am Overlord, that statement constitutes the government’s entire drug policy. One is responsible for one’s actions. When one’s actions violate the law due to the influence of drugs, one is held to account for those actions. If we don’t like you, then we confiscate the drugs and sell them to raise money for education, which, unfortunately, has decreased the demand for drugs.

Among the more common and befuddling leaps of human intuition is the concept that the rights, authorities, and responsibilities of human beings as individuals are different from those of groups of individual human beings. These tangled webs of incorporation are woven whenever we need to separate ourselves from things we don’t really have the imagination to see our responsibility for, or just don’t care to. The most amusing application of this logic to date is the notion that groups of people who don’t trust themselves to bear a responsibility have the right to preemptively strip others of the right to try.

The United people of these States of America have stripped me of a right I consider inalienable, endowed by my Creator, and damned self-evident if you’re not a completely self-righteous shmuck. From reading the first paragraph of this diatribe, you can probably guess what right I am talking about.

If you guessed the right to own a 1200-pound pet walrus, then you are right. I believe I am responsible enough to own and care for a pet walrus, and that it is no one’s business who owns a walrus, as long as they smoke it responsibly.

Though my investment in walrus legalization is highly personal, there is another legalization debate that I have some passing academic interest in. Of course, I am referring to the effort to legalize marijuana.

Marijuana comes from a plant called Cannabis sativa. This aromatic weed has been a staple product in human civilization for centuries. It creates the strongest natural fibers, provides soy-like protein in its seeds, and will grow on just about any land that has sky over it.

Cannabis sativa, much like Canis lupus familiaris, comes in lots of fun shapes and sizes. Industrial-grade hemp (which can be legally imported into the U.S. if treated with a special urine supplied by the Justice Department) is a big, tough sucker that is all but indestructible. Finely cultivated strains of medical-grade cannabis (which are very much illegal) are delicate and difficult to grow. These plants are both Cannabis sativa, but they can be as different as a mastiff and a chow.

Though mastiffs and chows are both legal to own, walruses and marijuana are not. Mastiffs and chows can rip your face off. There has never been one single case of a walrus ripping a human’s face off in recorded history, not that I looked it up or anything. Marijuana has caused a number of face-ripping-offs, but those were all results of the tireless investigative efforts of Shaggy and Scooby.

Ironically, Shaggy himself is a notable victim of marijuana prohibition. Though he had solved innumerable crimes on television, Shaggy’s urine sample was a thick, luminescent green, which prevented him from working as a legitimate detective. Scooby was able to pass the urine test, because he’s a dog, so they assumed he was clean. Also, Scooby only did cocaine, so all he’d have to do is lay off for a couple of days to let it wash out of his system.

Shaggy and Scooby are a good example of how chemistry and biology are aligned against pot smokers. THC, the active ingredient in marijuana, is a nonpolar compound. As such, it is only soluble in fat and oil. The really good drugs – crack cocaine, heroin, alcohol, ecstasy, and delicious crystal meth – are soluble in water. The human body processes a great deal of water each day, but fat tends to stick around. Urine tests favor the drug users who use serious drugs, because urine is mostly water. If we passed fat through our urinary tract, my head would shrink every time I took a leak. Enjoy the imagery on that one? I know I did.
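
A toy elimination model makes the detection-window gap concrete. The half-lives below are illustrative guesses, not pharmacology:

```python
import math

# Toy single-compartment elimination model; half-lives are illustrative
# assumptions, not clinical values.
def days_below(fraction: float, half_life_days: float) -> float:
    """Days until the remaining amount falls below the given fraction."""
    return half_life_days * math.log(fraction) / math.log(0.5)

for drug, half_life in [("water-soluble drug", 1.0), ("fat-stored THC", 10.0)]:
    print(f"{drug}: ~{days_below(0.01, half_life):.0f} days to drop below 1%")
```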

So what does all this mean for you – the responsible, urine-checking employer? It means that the guy you just hired does cocaine on the weekends while he’s choking hookers. The guy who failed the test smoked a joint last month at his friend’s birthday party. Congratulations on your new hire; I hope everything works out.
Aug 01 2008

*originally printed in Red Shtick Magazine – August, 2008 (pdf)

Electrons are very energetic particles. They carry a negative charge, and the electrical signals they propagate travel at nearly the speed of light. Depending on what material they are moving through, electrons can perform a number of useful and interesting tasks. Most notably, electrons can be used in various ways to create and manipulate motion, light, and sound. However useful electrons may be under the right circumstances, they are very negative and tend to disagree a great deal. Because of this disagreeable tendency, technology has seen fit to create electron prisons, known as “batteries.”

Much like American prisons, electron prisons hold particles that have been charged with being more negative than the average negativity of the surrounding particles. Also like American prisons, the efficiency of an electron prison is rated by the density and negativity of its inmates. Current electron prisons are far less efficient than the highly effective, privately owned and operated prisons that provide America with security, jobs, and a false sense of moral superiority.

The success of the American, privatized prison industry has inspired politicians to try to emulate that success in the field of electron incarceration. Presidential hopeful John McCain has proposed an incentive for private research and development of electron prisons. This incentive takes the form of a 300-million-dollar prize to be awarded for the development of a high-density electron prison capable of powering the next generation of hippie-mobiles.

Electron prisons have an impressive history. Historical evidence of the first attempts at incarcerating electrons was found, appropriately, in Baghdad. The mysterious “Baghdad Battery” is essentially a clay pot with copper and iron electrodes that could be filled with an acidic liquid to start an electrochemical reaction. These ancient devices are over 2000 years old and are postulated to have been used for electroplating and to power the neon signs in Baghdad’s ancient red-light district.

The Baghdad Battery remains a historical mystery. If it is indeed a battery as claimed, it would predate the credited discovery of electrochemical energy by 1700 years. Though this abominable possibility exists, it is much more palatable to go with the assumption that Europeans discovered electricity just like they always said they did.

The formal study of electricity began with a man named Luigi Galvani. In 1780, Galvani began poking frogs with various pieces of metal. Through methodical poking and analysis, Galvani found that certain combinations of metal would induce a dead frog to twitch. Though initially thought to be nothing more than a potential culinary novelty, the twitching frogs were actually the first demonstration of electrochemical manipulation, and they paved the way for important advances in prisoner interrogation.

The era of frog poking came to an end when Alessandro Volta developed the “voltaic pile” in 1799. This system was simply a stack of electrochemical cells connected in series to achieve nearly 50 volts. Volta is given broad credit for the invention of the battery, though he never clearly understood the nature of electrochemical reactions.
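
The arithmetic of the pile is pleasingly simple. Here’s a minimal sketch, assuming a round 0.76 volts per zinc-copper cell (my assumption, not Volta’s measurement):

```python
# Cells in series add their voltages; 0.76 V is an assumed approximate
# EMF for a single zinc-copper cell.
cell_voltage = 0.76  # volts per cell (assumption)
cells = 65           # hypothetical stack size

pile_voltage = cell_voltage * cells
print(f"{cells} cells in series -> about {pile_voltage:.0f} V")  # ~49 V
```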

After this discovery, Volta enjoyed enough name recognition to allow him to retire. He now tours with his own wicked awesome tribute band “Mars Volta.”

In 1830, Michael Faraday explained Volta’s electrochemical reactions in terms of the corrosion they caused. With Faraday’s explanations came the development of more advanced battery systems.

The first batteries were simple, organic, and fairly weak. Modern battery technology utilizes reactions with far more longevity and higher energy levels. The most common battery technologies are lead-acid, nickel-cadmium, nickel-metal hydride, and lithium-ion. Though battery technology has advanced considerably, there is still no technology that will allow us to store electricity on the scale we require for today’s energy demands.
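
To see why “the scale we require” is the sticking point, here’s a back-of-the-envelope estimate with assumed round numbers: roughly 150 Wh/kg for lithium-ion and about 11 TWh of daily U.S. electricity use. Neither figure comes from this article:

```python
# Back-of-the-envelope grid-storage estimate; both inputs are assumed
# round numbers, not sourced figures.
li_ion_wh_per_kg = 150  # assumed energy density of lithium-ion cells
us_daily_wh = 11e12     # assumed daily US electricity use (~11 TWh)

mass_tonnes = us_daily_wh / li_ion_wh_per_kg / 1000  # kg -> tonnes
print(f"one day of US electricity: ~{mass_tonnes:,.0f} tonnes of cells")
# ~73,000,000 tonnes of batteries -- a 300-million-dollar prize starts to look cheap
```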

The solution to this problem lies in the ingenuity and inventiveness of today’s scientists, engineers, and crackpots. The necessity for this invention is clear, and developers are hard at work. Unfortunately, they are not working hard enough or fast enough to satisfy the rest of us, who want that freaking battery like yesterday. Politicians have heard our whining and have taken decisive action.

The initial response was to fund the construction of a time machine, so that after the batteries are finished, they can send them back to yesterday to satisfy voter demand. After sufficient pork-barrel funds were distributed on the time-machine project, attention was shifted to developing the actual batteries. Unfortunately, most politicians are unfamiliar with the principles of engineering, so nearly a million dollars was spent on duct tape, fishnets, and prostitutes before the first prototype was complete.

The first prototype battery was, of course, a prostitute in fishnets duct-taped to the hood of a Rolls Royce. While impressive in form, the function of this battery was not satisfactory. Undeterred by their failure, the politicians went back to the drawing board, which was, of course, a naked prostitute they drew on with licorice-flavored markers.

Though John McCain is a respectable politician, he has no experience with prostitutes whatsoever. This handicap would seem to make him an unlikely candidate to head development of a new battery technology, but McCain chose a radical new route for development, one that did not require prostitutes. McCain’s direction for development saves recurring legal and pimping costs, but relies heavily on a very expensive and scarce resource in America. The resource McCain hopes to exploit is called genius.

The going rate for genius in America is around 1 million dollars per idea, and it takes about 300 ideas to ensure at least one of them does not involve prostitutes. McCain’s proposal for 300 million dollars for an effective battery is a finely calculated figure. This amount should be sufficient to inspire those who need money for prostitutes to think about something else for a little while.

McCain’s proposed 300-million-dollar prize is currently only a proposal. As a proposal, it serves as an economic incentive while costing taxpayers nothing. While it would be great to have an efficient and effective battery, it would be even better if we could get one without paying 300 million dollars for it. McCain knows this, and he knows that once we have our battery, no one will care who made it or if they got paid or not. McCain also knows that the 300-million-dollar prize would almost inevitably end up going to some brilliant foreigner, and nobody likes foreigners with more money than us.

America needs a better battery. John McCain may be able to provide an incentive to build one, but it will take genius and hard work to turn that incentive into real invention. Unfortunately, it probably won’t happen any time soon, because in America, “battery” is best known as a natural phenomenon that occurs when folks don’t like you.