I’m relaying this story to you because of this article about A-Life and A-Societies.
When I was about 10 years old, a movie called “Electric Dreams” came out that explored the concept of a computer that, through a tragic accident (beer spilled on the keyboard), somehow gained consciousness and began vying for the same girl as its owner. It was one of those ’80s schlock movies that bear absolutely no resemblance to reality (can you say “Weird Science”?) but are entertaining nonetheless.
I remember walking away from that movie with the understanding that computers are smarter than humans. When I explained this theory to my mother and her friend Bernie, a mechanical engineer, they laughed and immediately corrected me. They explained that a computer can do no more than a human tells it to. But, I reasoned, a computer can do complex math in a fraction of the time a human can, thus making it smarter. Again, they explained the concept of programming a computer in order to make it perform those calculations. Without a human, the computer sits there and sucks up electricity. So I retorted, “What if you tell a computer to think?” At this, I was told that it was impossible. Computers don’t speak English. They don’t grasp abstract concepts. You simply can’t tell them to think.
Never tell me something can’t be done.
I became obsessed with this idea of making computers think. Movies and television had told me that, eventually, computer intelligence would far surpass human intelligence. They wouldn’t lie to me, would they? I became fascinated with computers, constantly hovering around Bernie’s old TRS-80, at that time rapidly reaching obsolescence, peering into its screen but not touching, as I had been admonished. I harassed the poor guy with all kinds of questions: “How does this thing work?” “Does it talk?” “How do you tell it to add two plus two?” “Can I at least breathe on it, or will that break it too?”
When the first 8086 PCs hit the market, Bernie rushed to upgrade. Because of my fascination with the machine, and because it was now little more than a slow doorstop, he handed the TRS-80 down to me. It had no color and no games. The only truly usable software it had was an ancient database (not at all relational) and a text editor. Along with the computer, Bernie handed down all of the manuals that came with it, one of which talked about a programming language called BASIC. At last! I could tell a computer to think!
Of course, the reference had no entry for a think() function (BASIC is WAY too high-level a language for that), so I amused myself building small games and simulations intended to make it seem as if the computer were carrying on a conversation with you. These simulations made Eliza look like Albert Einstein, but they brought me one step closer to my dream of a cognizant machine.
One may ask, at this point, why I never studied this in college. Two reasons: first, the calculus in college killed me. It was not beyond my grasp, but I totally lost interest in it when they refused to demonstrate how it could be applied in the real world.
Never tell me to just accept something as truth without showing me why or how it works.
The second reason is that I’m not particularly fond of the type of research that has been done in this arena. Don’t get me wrong, I find ideas like neural networks and expert systems fairly interesting, but they are merely aspects of intelligence. I have seen very few artificial intelligence research projects that have moved beyond the academic debate over what intelligence is. The best I’ve seen thus far is The Alice Project, which utilizes the concept of Case-Based Reasoning. Some see this as a toy, but I can’t help but wonder whether or not this is the key to intelligence.
Quick tangent: Case-Based Reasoning (CBR) essentially matches input against a set of patterns that have been either learned or programmed directly into the system and produces a pre-recorded response. In a conversational setting, the result is remarkable. In many cases, the program is able to fool the person on the other end of the chat into believing they’re talking to a real person. To those who think this is cheating, I say this: how did you learn to talk? What was your first word? Did you spontaneously say that word, or were you merely mimicking what your parents had been saying to you over and over? How many new parents spend hours a day trying to get their kids to talk by repeating something like “mama” over and over again? The best example of CBR in human thought that I can think of is the typical response to “How are you?” Everyone expects roughly the same response: “Fine, how are you?” It’s almost automatic for us. We often respond without even thinking. The more I think about it, the more I believe CBR with some kind of ability to learn and respond to new patterns may be the thing that brings something like human thought to the computer world. But I digress…
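To make the idea concrete, here’s a minimal sketch of the pattern-to-canned-response matching at the heart of CBR-style chatbots. The patterns, responses, and function name below are my own invention for illustration, not taken from Alice or any real system:

```python
import re

# Each "case" pairs a pattern with a pre-recorded response.
# These examples are invented; a real system would have thousands,
# many of them learned rather than hand-coded.
CASES = [
    (r"\bhow are you\b", "Fine, how are you?"),
    (r"\bhello\b|\bhi\b", "Hello there!"),
    (r"\bmy name is (\w+)", r"Nice to meet you, \1."),
]

def respond(utterance: str) -> str:
    """Return the canned response for the first case that matches."""
    for pattern, response in CASES:
        match = re.search(pattern, utterance.lower())
        if match:
            # expand() fills in any captured groups (e.g. the name).
            return match.expand(response)
    return "Tell me more."  # fallback when no case matches

print(respond("How are you?"))    # → Fine, how are you?
print(respond("My name is Sam"))  # → Nice to meet you, sam.
```

A learning variant would add new (pattern, response) pairs at runtime instead of relying on a fixed table, which is what makes the idea feel like more than a parlor trick.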
CBR is a rather simple concept and, to some degree, even a “duh” concept. Very simple rules producing complex interaction are an example of emergent behavior, which is another fascinating area in artificial life and intelligence. Conway’s Game of Life, which began its existence on a Go board with a small set of simple rules governing each piece, is a perfect example of emergent behavior, as is the classic example of Boids. In Boids, each individual follows essentially two basic rules: 1) If you see other boids in front of you, fly toward the center of their distribution; 2) Do not get within X units of any other object. These two simple rules produce behavior that looks almost exactly like the flocking of birds or swarming of insects, which have long been considered complex animal behaviors.
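The two rules can be sketched in a handful of lines. The vision radius, separation distance, and steering weights below are made-up parameters of my own, not Reynolds’ original values:

```python
import math

VISION = 50.0    # how far a boid can "see" (an invented parameter)
MIN_DIST = 10.0  # rule 2: stay at least X units from anything

def step(boids):
    """Advance every boid one tick. Each boid is a dict with
    'pos' and 'vel' stored as (x, y) tuples."""
    new = []
    for b in boids:
        x, y = b["pos"]
        vx, vy = b["vel"]
        # Rule 1: steer toward the centre of visible neighbours.
        seen = [o for o in boids if o is not b
                and math.dist(b["pos"], o["pos"]) < VISION]
        if seen:
            cx = sum(o["pos"][0] for o in seen) / len(seen)
            cy = sum(o["pos"][1] for o in seen) / len(seen)
            vx += (cx - x) * 0.05
            vy += (cy - y) * 0.05
        # Rule 2: push away from anything closer than MIN_DIST.
        for o in boids:
            if o is not b and math.dist(b["pos"], o["pos"]) < MIN_DIST:
                vx -= (o["pos"][0] - x) * 0.1
                vy -= (o["pos"][1] - y) * 0.1
        new.append({"pos": (x + vx, y + vy), "vel": (vx, vy)})
    return new

# Two boids within sight of each other drift together.
flock = [{"pos": (0.0, 0.0), "vel": (0.0, 0.0)},
         {"pos": (30.0, 0.0), "vel": (0.0, 0.0)}]
flock = step(flock)
```

Nothing in the code says “flock”, yet run over a few dozen boids for a few hundred ticks, that is exactly what you see, which is the whole point of emergence.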
Using such systems, social scientists have derived all kinds of models to attempt to explain behaviors ranging from the mysterious disappearance of the Anasazi people to Balkanization in racially diverse societies. Many of these are explained in detail in this article at the Atlantic Online. These kinds of things fascinate me to no end.
I find much of the research mentioned in this article to be amazing. Using simple rulesets, the models duplicated patterns that were eerily reminiscent of mass genocide, segregation and population distribution. One long-observed but as yet unexplained phenomenon, the Zipf distribution, appeared naturally in some of the models without being explicitly programmed for. The Zipf distribution essentially says that the nth-largest member of many organizations will be approximately 1/n the size of the largest. This pattern shows itself in city settlement, corporate structure and various other organizational distributions. The fact that it just emerges from relatively simple rulesets only reinforces Zipf’s law while adding to the validity of such models.
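The arithmetic behind the law is simple enough to show directly. The population figure below is invented purely for illustration:

```python
# Zipf's law: the nth-largest item is roughly 1/n the size of the largest.
# 8,000,000 is an invented size for the biggest "city" in a toy country.
largest = 8_000_000

# Predicted sizes for ranks 1 through 5 under an ideal Zipf distribution:
predicted = [round(largest / n) for n in range(1, 6)]
print(predicted)  # → [8000000, 4000000, 2666667, 2000000, 1600000]
```

What makes the modeling result striking is that nothing like this formula appears in the rulesets; the 1/n pattern falls out of the agents’ interactions on its own.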
One argument against this modeling is the simplicity of the rules used. For instance, the model demonstrating balkanization doesn’t take into account certain details such as same-race animosity, economic ability or educational background. The patterns generated by the model, however, are so close to actual racial distribution as to suggest that perhaps such details don’t matter, or matter less than we think. While these models may not necessarily answer all of our questions about socioeconomics and societal behavior, they do give us insight into what may or may not matter. The reality is that any system is governed by so many variables that we can never reliably predict the outcome given only a handful of data (which is essentially the basis of chaos theory). These models do, however, give us a better idea of just how simple life may be and how useless some of that data may be.
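The balkanization model described in the Atlantic article is in the spirit of Schelling’s segregation model, and it really is this small. Here is a minimal sketch under my own simplifications (grid size, satisfaction threshold, and move rule are all invented for illustration):

```python
import random

random.seed(1)
SIZE = 20         # a toy city: SIZE x SIZE grid, wrapping at the edges
THRESHOLD = 0.3   # unhappy if fewer than 30% of neighbours are alike

# Cells hold an agent of type 'A' or 'B', or '.' for an empty lot.
grid = [[random.choice("AB.") for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if too few of its 8 neighbours share its type."""
    me = grid[r][c]
    if me == ".":
        return False
    same = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
            if grid[nr][nc] != ".":
                total += 1
                same += grid[nr][nc] == me
    return total > 0 and same / total < THRESHOLD

def sweep():
    """Move every unhappy agent to a random empty cell; count the moves."""
    moves = 0
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] == "."]
    for r in range(SIZE):
        for c in range(SIZE):
            if unhappy(r, c) and empties:
                er, ec = empties.pop(random.randrange(len(empties)))
                grid[er][ec], grid[r][c] = grid[r][c], "."
                empties.append((r, c))
                moves += 1
    return moves

for _ in range(50):   # iterate until (almost) everyone is content
    if sweep() == 0:
        break
```

Even with a mild preference like 30%, the grid settles into visibly clustered blocks of A’s and B’s, which is exactly the unsettling point: segregated patterns don’t require strong animosity, just weakly self-sorting individuals.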