Using broken minds as a key to Artificial Intelligence

I tend to think of the human brain as a very complex computer program that we could, in principle, re-create if we had enough variables and an effective method for teaching it to "learn" and adapt. In computer programming, one aspect common to all programs is bugs: quirks in the code that, when triggered, produce unintended results. These results aren't always bad, hence the humorous phrase, "that's not a bug, that's a feature!" By studying a program's bugs, you can learn a lot about how it works without ever looking at the source code.
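To make that last point concrete, here's a toy sketch in Python (a hypothetical example of my own, not from any real system): treating a function as a black box, a single surprising output is enough to reveal something about how it works inside.

```python
# A black-box "program": imagine we can't see how it represents numbers.
def add(a, b):
    return a + b

# Probing with ordinary inputs, everything looks fine:
print(add(1, 2))      # 3

# But this input triggers a "bug" that leaks an implementation detail:
print(add(0.1, 0.2))  # 0.30000000000000004
# The surprising result tells us the program uses binary floating-point
# arithmetic internally. We inferred part of its inner structure purely
# from its failure mode, without ever reading the source code.
```

The same logic, applied to minds instead of programs, is what the rest of this post is about.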

In my job, I work with people whose minds have malfunctioned in one way or another. Last night, one of my residents was rambling on and on as I was getting her dressed. It reminded me of a memory dump from a crashing program, and that's when it hit me: it might be more fruitful to create an artificial intelligence in which the kinds of mental illness humans face are actually possible.

Currently, those working on artificial intelligence focus on reproducing the visible functions of the human mind without reproducing the system behind them. By examining the mind's bugs (schizophrenia, memory loss, paradoxical thinking, logical fallacies, gullibility, and so on), we could gain a far greater understanding of the underlying structure.

My favorite (fictional) example of an artificial intelligence's neurosis is HAL in 2001: A Space Odyssey. He is driven by the directive to make sure his mission succeeds at any cost, and comes to see the humans as unnecessary, untrustworthy, flawed companions. He kills off most of the crew before being shut down by the protagonist. If we develop a computer with the capacity to become mentally deranged, we will be forced to trade our reverent attitude toward the machine for one of caution, the same caution we extend to humans we do not know. True artificial intelligence would be a dangerous thing, for it would represent a mental force as strong as ours, but malleable in ways that human minds are resistant to.
