Will the Real AI Language Please Stand Up?

by Harvey Newquist III

from the July 1987 issue of Computer Language

Artificial intelligence is perhaps the most overused—and abused—buzzword in the current age of computer science. The AI arena has been divided into several factions, all of which help distort its real benefits and potential.

On the one hand we have the popular business media, which has billed AI as everything from the greatest creation since sliced bread to a worthless endeavor perpetuated by computer hucksters.

Then we have the vendors and users, caught in a continual tug-of-war over how to properly implement AI, how to integrate it with existing applications, which machines to run it on, where it fits best in a company’s computing strategy, etc.

Finally, we have the hard-core researchers—those people who gave us AI in the first place. They spend their time thinking about thinking and coming up with new ways to create machines that more closely resemble humans and human capabilities.

Though these groups are quite separate, one simple question has begun to bind them together: Which language is best for developing AI applications, particularly expert systems and natural language interfaces?

Not since the whole concept of AI was developed has there been such chaos and confusion in the marketplace over the suitability of a programming language to the needs of the developer and user. So let’s look briefly at the history of languages in AI and then see what’s being done with them in 1987.

How it all began

The notion of artificial intelligence was conceived in 1956, when a group of gentlemen assembled for the Dartmouth Summer Conference to discuss ways to improve the capabilities of computers.

This small group was led by Massachusetts Institute of Technology researcher John McCarthy, who presented the term “artificial intelligence” to the group as a concept that they should all work to achieve.

Not coincidentally, McCarthy was also the inventor of a brand-new language called LISP (LISt Processing). McCarthy’s new programming language was symbolically oriented: it shunned the numerical processing that standard computer languages were built around. It was with this language that the group hoped to develop computers and computer programs that would become truly intelligent.
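To make “symbolic processing” concrete, here is a rough sketch in C of the kind of cons-cell list that LISP programs are built from; the structure and the symbols in it are purely illustrative and come from no particular system.

#include <stdio.h>
#include <stdlib.h>

/* A minimal cons cell: each node pairs a symbol (a string, not a number)
   with a pointer to the rest of the list. This is the structure a LISP
   program manipulates when it does "symbolic" rather than numerical work. */
struct cell {
    const char  *symbol;   /* the atom held in this cell            */
    struct cell *rest;     /* the remainder of the list; NULL = nil */
};

/* cons: build a new cell onto the front of an existing list. */
static struct cell *cons(const char *symbol, struct cell *rest)
{
    struct cell *c = malloc(sizeof *c);
    c->symbol = symbol;
    c->rest   = rest;
    return c;
}

int main(void)
{
    /* The list (LIKES ROBBIE BLOCKS), built one cell at a time. */
    struct cell *fact = cons("LIKES", cons("ROBBIE", cons("BLOCKS", NULL)));
    struct cell *c;

    for (c = fact; c != NULL; c = c->rest)
        printf("%s ", c->symbol);
    printf("\n");
    return 0;
}

Where a conventional program of the day crunched arrays of numbers, a LISP program spends its time walking and rebuilding structures like this one.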

After this first conference, the attendees returned to their respective universities—primarily MIT, Cambridge, Mass., and Carnegie-Mellon University, Pittsburgh, Pa. Small AI labs were fostered at these two schools to work on problems in artificial intelligence, particularly expert systems and robotics.

Within a few years, however, McCarthy’s LISP began to mutate. As different organizations began to use LISP, they also began to customize the language to suit their own needs.

For instance, Bolt Beranek & Newman, the research and contract company, began developing its own version of LISP, known as BBN LISP, for internal use. But some of the people working on the project left to go to the Xerox Palo Alto Research Center (PARC) in Palo Alto, Calif., and they took this LISP with them. There it evolved again and became known as InterLISP-D.

At about the same time, researchers were leaving CMU and going to Stanford University, Stanford, Calif., where McCarthy had gone after leaving MIT. More LISPs made their way around the country, and no fewer than two dozen versions of the language appeared, some incompatible with others.

Since this was the researchers’ language of choice, it was the first one used extensively for developing AI applications. Expert systems such as the medical programs CADUCEUS and MYCIN were written in LISP, as was the pioneering molecular analysis program known as DENDRAL.

Some of these programs were developed specifically in InterLISP. Extensive use of this dialect was the result of a time-sharing program set up by the National Science Foundation and Stanford, where the main computer ran—you guessed it—InterLISP.

So the initial AI developers were actually a fairly tight-knit group of fewer than 50 researchers who all decided to use the same basic language for their work. Most of these individuals knew each other as well, so they developed something of a monopoly on what was going on in AI and what languages were to be used. Thus LISP became the standard language for the AI community, with little or no competition from other forms of programming languages.

On the foreign front

Now we turn our sights to Europe. Both France and Sweden had emerging AI projects under way at a number of universities in the early 1970s, most notably Sweden’s Uppsala University and France’s University of Marseilles.

The primary work coming out of Europe involved logic programming, another approach to emulating human thought processes. The French research project developed a language known as PROLOG (PROgramming in LOGic). It was more structured than LISP and better suited to following lines of logic.

PROLOG finally made its way across the Atlantic in the early 1980s, where it met with huge resistance from the established AI community in the U.S. Much of the time between 1981 and 1986 was spent arguing the pros and cons of one language or the other.

By 1986, LISP had all but triumphed. Its proponents had kept PROLOG at bay by convincing U.S. customers that it was more of a methodology than a full-featured language. The number of companies selling PROLOG in the U.S. at that time was fewer than a dozen.

But PROLOG had gone to Japan. There the Japanese Fifth Generation Project—designed to create knowledge-based hardware and software—decided to use PROLOG as the basis for all of its operations. This decision was based in large part on the fact that PROLOG was not a U.S. product and the Fifth Generation was aimed directly at assaulting the U.S. claim to the highest of high technology.

AI vs. conventional languages

This brings us to the present state of affairs, but we have not yet looked at other languages. Just because the original AI researchers chose LISP doesn’t mean that BASIC, FORTRAN, or COBOL won’t work.

The difficulty with employing these languages in AI programming is that they don’t have all the logic or symbolic facilities that make LISP or PROLOG a little more efficient at the development stage. But a good programmer can work around these shortcomings and create full-featured expert systems in any language.
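To see what such a workaround looks like, consider a rough sketch in C of the simplest possible rule-and-fact arrangement. The automotive rules are invented purely for illustration; they stand in for the bookkeeping that a LISP or PROLOG environment provides for free.

#include <stdio.h>
#include <string.h>

/* A toy expert-system fragment in plain C. Facts are strings, and each
   hand-coded rule fires when all of its conditions are among the known
   facts. The rules about a stalled car are invented for illustration. */
#define MAX_FACTS 32

static const char *facts[MAX_FACTS] = { "engine-cranks", "no-spark" };
static int nfacts = 2;

/* known: is this fact already in working memory? */
static int known(const char *f)
{
    int i;
    for (i = 0; i < nfacts; i++)
        if (strcmp(facts[i], f) == 0)
            return 1;
    return 0;
}

/* assert_fact: add a new conclusion to working memory. */
static void assert_fact(const char *f)
{
    if (!known(f) && nfacts < MAX_FACTS) {
        facts[nfacts++] = f;
        printf("concluded: %s\n", f);
    }
}

int main(void)
{
    /* IF engine-cranks AND no-spark THEN suspect-ignition */
    if (known("engine-cranks") && known("no-spark"))
        assert_fact("suspect-ignition");

    /* IF suspect-ignition THEN check-distributor */
    if (known("suspect-ignition"))
        assert_fact("check-distributor");

    return 0;
}

A real shell would loop over a table of rules until nothing new could be concluded, but the point is only that nothing about the if/then structure itself demands a special-purpose language—just more bookkeeping from the programmer.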

For instance, one of the first expert system shells for the PC was developed by the U.K.’s Donald Michie in FORTRAN. Upon its introduction, this package—known as Expert-Ease—was the most successful PC tool available.

But languages of all sizes and shapes can be used for AI, and not just at the PC level. A division of Cullinet, Distributed Management Systems, has a COBOL expert system for IBM environments known as IMPACT. And IBM itself has an internal expert system for teaching music theory that was written in Pascal. With a little imagination, any language can be appropriate, given the programmer’s familiarity with all its features and idiosyncrasies.

Another way expert systems and various AI applications can be developed is through the use of object-oriented and rule-based languages such as Smalltalk, Flavors, and OPS5. Each of these has been employed for the creation of some very comprehensive applications, such as XCON, the famous configuration expert used at Digital Equipment Corp.

XCON was developed in 1980 by DEC and CMU and was written in OPS5, a rule-based language developed at CMU. XCON is believed to be the largest expert system in day-to-day use anywhere in the world.

Many organizations see object orientation as a way to add intelligent capabilities to existing languages and create an enhanced programming environment. Because object-oriented features are grafted onto a language the programmer is already fluent in, there is no need to relearn a completely separate one. In just the last two years, object orientation has grown from an AI curiosity into a very important part of programming at large.

Winds of change

So here we are in 1987. A major shift in control of the AI community has taken place, from a small group of researchers to the users.

Along with this shift has come a marked change in what types of languages are being used. There really aren’t that many fully trained LISP hackers out there, simply because there hasn’t been the time to train them. Plus, the specialized machines used to run LISP are expensive and require good, experienced programmers.

Users began looking for alternatives to LISP in early 1986 and saw that with a little work they could use C for their applications—or even run LISP on standard, general-purpose machines. This gave users a choice that enabled them to save money and use their resources more economically.

As a result, the LISP vs. PROLOG argument has become moot; now it’s C vs. LISP. PROLOG appears to be taking a back seat to this new debate, but not for lack of trying. Case in point: Borland International’s Turbo PROLOG has allegedly sold close to 100,000 copies since its introduction last year, allowing a whole new generation of programmers to become acquainted with the language.

But the pervasiveness of C and its relationship to UNIX, coupled with the voices of LISP vendors and researchers, has made these two languages the center of AI attention.

Some expert system companies have dropped LISP altogether, focusing on more flexible languages. Teknowledge, for instance, switched its LISP-based S.1 tool to C in late 1985 and now only supports its C versions. Inference Corp. and The Carnegie Group are still keeping the LISP versions of their expert system tools but plan on having C implementations available by the end of this year. Plus, the recent demise of LISP Machine Inc. and the troubles at LISP leader Symbolics have not made things any easier for those organizations trying to decide on which path to follow in developing AI applications.

On the other hand, a number of companies such as Lucid, Sun Microsystems, Apollo, and Franz Inc. have all attempted to make it easier for the mass market to use generally available machines and tools for AI development instead of relying on specialized machines and tools. These companies are focusing on general-purpose environments such as UNIX to help the spread of AI development.

So now we have C vs. LISP. Which is better? Depends on what you need. LISP is a tough language to port to a lot of machines; that happens to be one of C’s strengths. C isn’t so great at some of the screen interfaces found in a majority of expert systems; LISP has enough interface capabilities to keep even the most jaded programmer happy. And the list goes on.

The fact remains, however, that AI is not any particular language nor any particular programming style. No AI language is required for developing applications, just as no specific brand of gasoline is required to get your car running.

Developing an expert system or natural language application all boils down to a few very simple things. First, how comfortable is the programmer with a specific language? Second, what kind of machine is the application going to run on? Third, how much is the use of a certain language going to cost? (Management may argue that this should be the first consideration.)

And last, but certainly not least: Does it work?


Harvey Newquist III’s column In Practice appears each month in AI EXPERT. In addition, he is editor of the AI TRENDS newsletter and a contributing editor to COMPUTERWORLD.