Robert J. Marks, director of the Walter Bradley Center for Natural & Artificial Intelligence, likes to explain AI by saying “AI is anything computers do that is kind of amazing.” (“Human Exceptionalism,” Reasons to Believe, August 8, 2020). Using this definition, AI is a general term that covers a collection of computer science technologies. AI is fluid.
Dr. Elaine Rich, noted computer scientist and an author of Artificial Intelligence, offers a more specific definition: “AI is the study of how to make computers do things which, at the moment, people do better.” (Accessed February 17, 2021)
Relying on this definition, John Hsia observes: “By definition, once a computer can do what people used to do better, it’s no longer AI.” (The Evolution of AI, AT&T Developer Program, December 4, 2018; accessed February 17, 2021)
The scope of AI changes over time. New techniques capture the imagination and older techniques become mundane. The current centers of AI seem to be machine learning and deep learning. The technologies tend to be described in collective terms, representing the current set of hardware, software, and tools and the techniques used to implement them.
If there are important advances in the hardware, software, algorithms, or any other aspect of AI, a new term is often, though significantly not always, coined to capture the difference. For example, machine learning (ML) assumes what is commonly called a stack: the hardware, software, tools, and techniques used for implementation. Different levels of abstraction are possible, and the degree of abstraction determines the tools and methods needed for implementation. As an example, as of February 17, 2021, AWS (Amazon Web Services) offers three levels of ML abstraction to its customers:
· Artificial Intelligence (AI) Services
· ML Services
· ML Frameworks and Infrastructure
This example from AWS illustrates a further point about the definition of AI. To build from John Hsia’s observation that “By definition, once a computer can do what people used to do better, it’s no longer AI,” it is helpful to observe that what AWS is offering has clearly crossed that boundary. It is now a service for those who want their business to benefit from ML. That AWS continues to use the term “AI” illustrates the fact that, once a new technology breaks through a barrier, it continues to be called “AI” for a while.
Perhaps the use of the term “AI” falls out of favor as the limits of the new technology are learned and it is clear that people continue to surpass computers in significant ways. But at first the limits of a new technology are not known or widely appreciated. With time it does become clear that, as remarkable as the new technology may be, there are fundamental limits it will never exceed. At that point the technology stops being called AI and AI comes to mean some new technology or set of technologies which have yet to have their limits determined. With this understanding, AI is a general term for any technology that has the potential for surpassing human ability but has not yet had its own limits determined.
The most popular computer languages used to implement these techniques change with time but also with the application. Early AI research was typically done using LISP. Today these techniques are largely implemented through programs written in Python. In the future, OWL, DAML, or other languages may become more prominent. A continuing progression of new programming languages tailored to the implementation of AI technologies seems assured.
AI, then, is the recursive search for technologies that enable a machine to do something better than a human can. It could be argued that this search began with the invention of the simple machines: the inclined plane, lever, wedge, wheel and axle, pulley, and screw. Computers have extended the string of successful applications begun with mechanical devices; they have long been able to perform calculations more quickly and accurately than any human could. The current center of AI research appears to be ML and deep structured learning (DSL). These techniques have been or are currently
…applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.
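The pattern-recognition idea underlying much of this work can be suggested, in deliberately simplified form, by a nearest-neighbor classifier: label a new observation by finding the most similar example the machine has already seen. The sketch below uses only the Python standard library; the data points and labels are invented purely for illustration.

```python
# A minimal sketch of pattern recognition via a 1-nearest-neighbor
# classifier. The coordinates and labels are toy data, not from any
# real dataset.
import math

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of ((x, y), label) pairs; distance is Euclidean.
    """
    return min(train, key=lambda pair: math.dist(pair[0], query))[1]

# Toy "learned" examples: two clusters the machine has seen before.
train = [((0.0, 0.0), "cat"), ((0.2, 0.1), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

print(nearest_neighbor(train, (0.1, 0.2)))  # → cat
print(nearest_neighbor(train, (5.1, 4.9)))  # → dog
```

Real systems replace this brute-force search with learned models over millions of examples, but the core task is the same: deciding which known pattern a new input most resembles.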
Given the impressive record of machines, and particularly computers, for surpassing human ability, it is perhaps understandable that some would extrapolate from this string of successes and predict a future in which a computer will exceed human ability in every conceivable way.* Some would make decisions based on this extrapolation. It must be observed that both the extrapolation and, even more, decision-making based on it constitute a belief system accepted by faith. A trend of past successes provides no assurance of extension without limit.
Most trends reach natural limits. It is equally logical, arguably more logical, to assume that there are boundaries and barriers that no computer will be able to overcome. The fact that AI has added to the successes of machines surpassing human ability is no evidence that insurmountable barriers do not exist. The point I would make is that, at the point of choosing a belief system, everybody operates equally by faith. There may be, and typically is, evidence to support different beliefs, but evidence is not proof. We can appreciate the evidence that supports a belief system we do not choose to accept; if we find different evidence more compelling, then accepting a different belief system by faith is equally logical.
It should also be observed that there is a great deal of filtering in the data. People work on applications they think hold promise; not recorded are all the other things they might have worked on but rejected as unlikely to succeed. The string of impressive computer achievements in exceeding human ability in various ways is the result of some very bright people choosing to work on specific applications while rejecting a great many more. Bright people seldom work on something they do not think will succeed. In focusing on the successes, most of us are unaware of a much larger body of efforts that were never attempted because they were judged to be a waste of time.
To quote Frank Lloyd Wright, “A doctor can bury his mistakes, but an architect can only advise his clients to plant vines.” (Goodreads, accessed February 17, 2021)
*Note: For an interesting and insightful discussion about what AI currently is and how it is likely to develop over the next five years, see the first panel discussion of the Silicon Flatirons symposium Artificial Intelligence, the Future of Employment, and the Law, April 29, 2020. One observation from this discussion is that much current AI effort focuses on pattern recognition, the ability of machines to recognize patterns better than humans, and on applying that ability to problems in various fields.