Artificial Intelligence
Introduction
The invention of computers, building on the work of Alan Turing in the 1930s and John von Neumann in the 1940s and 1950s, quickly gave rise to the notion of artificial intelligence (AI): the claim that such nonhuman machines can exhibit intelligence because they mimic (or so proponents argue) what humans do when they do things we regard as evidence of intelligence.

From about the late 1960s to the middle of the 1980s there was a great deal of excitement and debate among philosophers, psychologists, learning theorists, and others concerning the possibility and status of AI. Mostly there were AI champions and AI detractors, with little middle ground. That controversy seems to have cooled of late, but new developments in the computer-engineering field may now take us past those earlier debates. (Agre & Chapman 1987)

Research in the provocatively named field of artificial intelligence (AI) evokes both spirited and divisive arguments from friends and foes alike. The very concept of a “thinking machine” has provided fodder for the mills of philosophers, science fiction writers, and other thinkers of deep thoughts. Some postulate that it will lead to a frightening future in which superhuman machines rule the earth with humans as their slaves, while others foresee utopian societies supported by mechanical marvels beyond present ken. Cultural icons such as Lieutenant Commander Data, the superhuman android of Star Trek: The Next Generation, show a popular willingness to accept intelligent machines as realistic possibilities in a technologically advanced future. (Albus 1996)

However, superhuman artificial intelligence is far from the current state of the art and probably beyond the range of projection for even the most optimistic AI researcher. This seeming lack of success has led many to think of the field of artificial intelligence as an overhyped failure, yesterday's news. Where, after all, are even the simple robots to help vacuum the house or load the dishwasher, let alone the Lieutenant Commander Datas? It therefore may amaze the reader, particularly in light of critical articles, to learn that the field of artificial intelligence has actually been a significant commercial success.

In fact, according to a 1994 report issued by the U.S. Department of Commerce (Critical Technology Assessment 1994), the world market for AI products in 1993 was estimated to be over $900 million!

The reason for this is, in part, that the fruits of AI have not been intelligent systems that carry out complex tasks independently. Instead, AI research to date has primarily resulted in small improvements to existing systems or relatively simple applications that interact with humans in narrow technical domains. While selling well, these products certainly don't come close to challenging human dominance, even in today's highly computerized and networked society.

Some systems that have grown out of AI technology might surprise you. For example, at tax time many of you were probably sitting in front of your home computer running packages such as TURBOTAX, MACINTAX, and other “rule-based” tax-preparation software. Fifteen years ago, that very technology sparked a major revolution in the field of AI, resulting in some of the first commercial successes of a burgeoning applied-research area. Perhaps on the same machine you might be writing your own articles, using GRAMMATIK or other grammar-checking programs. (Araujo & Grupen 1996) These grew out of technology in the AI subfield of “natural language processing,” a research area “proven” to be impossible back in the late 1960s. Other examples range from computer chips in Japanese cameras and TVs that use a technique ironically called “fuzzy logic” to improve image quality and reduce vibration, to an industrial-scale “expert system” that plans the loading and unloading of cargo ships in Singapore.
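To make the phrase “rule-based” concrete, the sketch below shows, in Python, the kind of forward-chaining rule engine such packages are built around: each rule inspects the known facts and, when its condition holds, adds a new conclusion, and the engine keeps cycling through the rules until nothing new can be added. It is a minimal illustration only; the rule names, the deduction fields, and the dollar threshold are hypothetical and are not drawn from TURBOTAX, MACINTAX, or any other product named above.

    # Minimal sketch of a forward-chaining rule-based system.
    # The rules, field names, and threshold below are hypothetical.

    def rule_standard_deduction(facts):
        """Suggest the standard deduction when no itemized amounts are given."""
        if "itemized_deductions" not in facts and "deduction" not in facts:
            return {"deduction": "standard"}

    def rule_itemize(facts):
        """Prefer itemizing when itemized deductions exceed a (made-up) threshold."""
        if facts.get("itemized_deductions", 0) > 6000 and "deduction" not in facts:
            return {"deduction": "itemized"}

    def run_rules(facts, rules):
        """Apply rules repeatedly until no rule adds a new conclusion."""
        facts = dict(facts)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                new = rule(facts) or {}
                for key, value in new.items():
                    if key not in facts:
                        facts[key] = value
                        changed = True
        return facts

    print(run_rules({"itemized_deductions": 7500},
                    [rule_standard_deduction, rule_itemize]))
    # -> {'itemized_deductions': 7500, 'deduction': 'itemized'}

Early expert systems worked on essentially this pattern, scaled up to hundreds or thousands of rules written by domain specialists rather than programmers.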

If you weren't aware of this, you are not alone. Rarely have the hype and controversy surrounding an entire research discipline been as overwhelming as they have for the AI field. (AI: The Tumultuous History 1993) The history of AI includes raised expectations and dashed hopes, small successes sold as major innovations, significant research progress taking place quietly in an era of funding cuts, and an emerging technology that may play a major role in helping shape the way people interact with the information overload in our current data-rich society.

Where Artificial Intelligence Has Been
Roughly speaking, AI is more than fifty years old, the field as a coherent area of research usually being dated from the 1956 Dartmouth conference. That summer-long conference gathered ten young researchers united by a common dream: to use the newly designed electronic computer to model the ways that humans think. They started from a relatively simple-sounding hypothesis: that the mechanisms of human thought could be precisely modeled and simulated on a digital computer. This hypothesis forms what is, essentially, the technological foundation on which AI is based.

In that day and age, such an endeavor was incredibly ambitious. Now, surrounded by computers, we often forget what the machines of forty years ago looked like. In those early days, AI was largely performed by entering a program on difficult-to-use, noisy teletypes interfaced with large, snail-paced computers. After starting the program (assuming one had access to one of the few interactive, as opposed to batch, machines), one would head off to lunch, hoping the program would complete a run before the computer crashed. In those days, 8K of core memory was considered a major amount of computing memory, and a 16K disk was sometimes available to supplement the main memory. In fact, anecdote has it that some of the runs of Herb Simon's earliest AI systems used his family and students to simulate the computations; it was faster than using the computer! (Beer 1990)

Within a few years, however, AI seemed really to take off. Early versions of many ambitious programs seemed to do well, and the thinking was that the systems would progress at the same pace.

In fact, the flush of success in the young field led many of the early researchers to believe that progress would continue at this pace and that intelligent machines would be achieved in their lifetimes. A checkers-playing program beat human opponents, so could chess be far behind? Computers could translate sentences from codes (like those developed for military use during World War II and the Korean War) into human-understandable words, so could translation from one human language to another be that much harder? Learning to identify some
