The Great AI Debate
(Compare and Contrast Paper)
In the Star Trek episode titled "The Measure of a Man," a villainous cybernetics researcher obtains authorization to dismantle Data, the android, in order to learn more about his construction. Data's friends resist, and a legal battle ensues to determine whether Data is a life form with rights.

Riker, appointed to present the researcher's case, argues that because Data is composed of circuits and wires, he is nothing more than a sophisticated computing machine. (His case seems almost rock-solid when he forcibly switches Data off during the trial.) Later, Data's defense presents testimony showing that because Data has had many human-like experiences, including even an intimate relationship with another crew member, he must be ruled a sentient life form with all the rights of a human.

Normally, Star Trek has a reputation for portraying a future society that has solved the problems vexing us today. "The Measure of a Man," however, raises issues that are still debated in the 20th century. If Star Trek is any reliable predictor of our world's future (hah!), then the question of whether machines can be alive won't be resolved any time soon.

Akin to the debate over whether machines can live is the debate over whether machines can be intelligent. This is the great Artificial Intelligence debate, one that has not been resolved and probably never will be.

Advocates of a view called "strong AI," such as Marvin Minsky, believe that computers are capable of true intelligence. These "optimists" argue that what humans perceive as consciousness is strictly algorithmic, i.e., a program running in a complex but predictable system of electro-chemical components (neurons). Although the term "strong AI" has yet to be conclusively defined [Sloman 1992], many of its supporters believe that the computer and the brain have equivalent computing power, and that with sufficient technology it will someday be possible to create machines that enjoy the same type of consciousness as humans.

Some supporters of strong AI expect that it will someday be possible to represent the brain using formal mathematical constructs [Fischler 1987]. However, strong AI's dramatic reduction of consciousness to an algorithm is difficult for many to accept.

The "weak AI" thesis claims that machines, even if they appear intelligent, can only simulate intelligence [Bringsjord 1998] and will never actually be aware of what they are doing. Some weak AI proponents [Bringsjord 1997, Penrose 1990] believe that human intelligence results from a superior computing mechanism which, while exercised in the brain, will never be present in a Turing-equivalent computer.

To promote the weak AI position, John R. Searle, a prominent and respected scholar in the AI community, offered the "Chinese room" parable [Searle 1980]. The parable, as summarized by [Baumgartner 1995], runs as follows:

He imagines himself locked in a room, in which there are various slips of paper with doodles on them, a slot through which people can pass slips of paper to him and through which he can pass them out; and a book of rules telling him how to respond to the doodles, which are identified by their shape. One rule, for example, instructs him that when squiggle-squiggle is passed in to him, he should pass squoggle-squoggle out. So far as the person in the room is concerned, the doodles are meaningless. But unbeknownst to him, they are Chinese characters, and the people outside the room, being Chinese, interpret them as such. When the rules happen to be such that the questions are paired with what the Chinese people outside recognize as a sensible answer, they will interpret the Chinese characters as meaningful answers. But the person inside the room knows nothing of this. He is instantiating a computer program — that is, he is performing purely formal manipulations of uninterpreted patterns; the program is all syntax and has no semantics.

In this parable, Searle demonstrates that although the system may appear intelligent, it is in fact just following orders, without intent or knowledge of what it is accomplishing. He says that machines lack intentionality. Searle's argument has been influential in the AI community and is referenced in much of the literature.
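
The purely formal character of the room can be illustrated in a few lines of code. The sketch below is only an illustration: the tokens and rules are invented placeholders, not anything from Searle's paper, and the point is simply that the lookup manipulates uninterpreted shapes.

```python
# A minimal sketch of the Chinese room's rulebook: a purely syntactic
# lookup table. The "characters" are invented placeholder tokens;
# nothing in the code knows (or needs to know) what they mean.

RULEBOOK = {
    "squiggle-squiggle": "squoggle-squoggle",  # the rule from the parable
    "doodle-A": "doodle-B",                    # hypothetical additional rules
    "doodle-C": "doodle-D",
}

def person_in_room(slip_passed_in: str) -> str:
    """Match the incoming shape against the rulebook and pass back
    whatever the rules dictate. No step involves understanding."""
    return RULEBOOK.get(slip_passed_in, "shrug")  # no rule, no sensible reply

if __name__ == "__main__":
    # To the outside observer the exchange may look like conversation;
    # inside, it is only pattern matching on shapes.
    print(person_in_room("squiggle-squiggle"))  # -> squoggle-squoggle
```

However long such a rulebook grows, nothing in the program ever attaches meaning to the symbols it shuffles, which is exactly the "all syntax, no semantics" point of the parable.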

It is tempting for spiritually inclined people to conclude that the weak AI vs. strong AI debate is about mind-body duality, the existence of a soul, and whether a phenomenon separate from the body is necessary for intelligence. Far from it: the predominant opinion in the AI community, on both sides of the strong/weak issue, is that the mind is a strictly physical phenomenon [Fischler 1987]. Even Searle, a weak AI advocate, believes that:

Mental phenomena, whether conscious or unconscious, visual or auditory, pains, tickles, itches, thoughts, indeed, all of our mental life, are caused by processes going on in the brain. [Searle 1984]

The AI debate is primarily concerned with whether our current, algorithmic computing paradigm is sufficient to achieve intelligence once "the right algorithm" has been found. The prevailing attitude, in favor of weak AI, asserts that "the syntax of the program is not by itself sufficient for the semantics of the mind" [Baumgartner 1995, quoting Searle].

The apparent failure of traditional (strong) AI has led researchers to consider new computing paradigms. For example, researchers have noted that the traditional von Neumann "stored-program" architecture, which is the basis of most of the world's computers today, is radically different from the neural structure of the brain. "Connectionists" hope to build machines whose organization more closely resembles that of the brain and its neural structure, using numerous simple processing components connected in a massively parallel manner. An early attempt at this was the Connection Machine, introduced by Thinking Machines Corporation in 1986, which had up to 65,536 processors, massively connected and capable of fully parallel operation.
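
To make the connectionist idea concrete, here is a deliberately tiny sketch. The layer sizes and random weights are arbitrary assumptions chosen purely for illustration; this is not a model of the Connection Machine or of any real neural tissue, only a picture of many simple units whose individual job is nothing more than a weighted sum.

```python
# A toy "connectionist" layer: many simple units, each computing only a
# weighted sum followed by a nonlinearity. Any interesting behavior is
# meant to come from the pattern of connections, not from any one unit.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_units = 8, 16                       # arbitrary illustrative sizes
weights = rng.normal(size=(n_units, n_inputs))  # connection strengths
biases = rng.normal(size=n_units)

def layer(x: np.ndarray) -> np.ndarray:
    """All units 'fire' together: one matrix-vector product computes every
    unit's weighted sum at once, then an elementwise nonlinearity is applied."""
    return np.tanh(weights @ x + biases)

if __name__ == "__main__":
    stimulus = rng.normal(size=n_inputs)  # an arbitrary input pattern
    print(layer(stimulus))                # activation of each of the 16 units
```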

Still, there are those who cling desperately to the strong AI dream. Searle says in [Baumgartner 1995] that these people have built their professional lives on the assumption that strong Artificial Intelligence is true: "And then it becomes like a religion. Then you do not refute it; you do not convince its adherents just by presenting an argument. With all religions, facts do not matter, and rational arguments do not matter. In some quarters, the faith that the mind is just a computer program is like a religious faith."

Although Searle's statement is biased, and his generalization about
