Email: S.N.K.Watt@open.ac.uk
`The whole of this unfortunate business,' said Dr Lyster, `has been the result of pride and prejudice.' --- Cecilia, Fanny Burney
Lisp has a unique style among programming languages. It has kept a conceptual cleanliness throughout its history, remaining both simple and powerful. While other languages were being deliberately shaped to encourage particular design approaches, Lisp remained passive; it could mould itself around any approach that seemed appropriate without changing in substance. It doesn't interfere with the design process; it just offers a wider scope: programs and data can be structured so flexibly that people can express themselves in many different ways within the same framework. This makes Lisp ideal as a teaching language: in many institutions Scheme---a dialect of Lisp---is taught because it allows different styles of procedural, constraint, and logic programming to be used all within the same basic language [1]. Besides this, Lisp is an incremental language, so new procedures can be tried as soon as they have been defined. This, together with type testing during execution and true abstraction from the underlying computer hardware, means that it has all the qualities which make it ideal for developing both systems and programming skill quickly.
Today, Lisp is still a vibrant and thriving language; indeed, it has always pioneered new concepts and approaches. There have, however, been fundamental changes both in its design and in its implementation, and now it runs easily on desktop and laptop computers, it is fast, and it is used for many different kinds of application. These changes mean that Lisp can offer more than it ever could in the past; many of the problems once associated with Lisp are no longer applicable.
Lisp has changed with the times. While the essence of the language has remained the same, its availability, portability and performance have all improved dramatically. This paper gives an overview of Lisp: a brief taste of what the language is about, and some indication of the kinds of task for which Lisp may be more or less suitable than conventional languages.
Since then, there have been many different dialects of Lisp. Figure 1 shows the relationship between some of the more commercially successful Lisp dialects; although it should be noted that in practice every dialect borrowed ideas from just about every other dialect that preceded it. Fortunately, most of these have now passed away, and many are best left in their unmarked graves. For a more detailed history, see [5] (up to 1960) and [10] (from 1960 to 1993).
There are a number of key features that remain from these original Lisp dialects. First, symbolic expressions, a simple and almost ultimately general data structure; second, functions are first-class objects, so programs can be used as data; and third, programs can be written in the style of function application as well as procedurally. Almost everything else in the language has changed quite fundamentally through the years. Binding has changed from dynamic to lexical, the early interpreters have been replaced by fast and efficient compilers themselves written in Lisp, and many new kinds of data structure (strings, arrays, hash-tables, and objects) have all been built into the language since. Whole new programming paradigms have been added, such as object-oriented programming and data-flow programming, and all this lies within the same basic framework.
Lisp is normally called a `weakly typed' language, but this is an extremely deceptive term. A far better term would be `dynamically typed', as the truth of the matter is that in Lisp, the type is associated with the value rather than with the variable, and that error checking is provided at run-time rather than at compile-time. Of course, this is a little slower, but a good Lisp compiler is capable of inferring the possible types of an object at any point in the program, and skipping type checks when they aren't needed. The advantage of this method is that it is actually stricter about the typing, as silly values will signal an error immediately, rather than letting the program run on and do something extremely nasty to the memory which may only show up later. It is also useful when abstract objects are added to the language, as they also need to have the types associated with the object rather than the variable to do proper message handling. By contrast, statically typed languages usually add abstract objects by adding a second type system: in parallel with the static types they keep object descriptions around while the program runs---leaving problems to the programmer in understanding which type system is actually being used in a particular reference.
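As a purely illustrative sketch (not taken from the original text), the following session shows the type travelling with the value rather than with the variable; the exact printed type names vary between implementations:

```lisp
(defvar *x* 42)
(type-of *x*)       ; => a numeric type such as FIXNUM
(setf *x* "hello")  ; legal: the variable itself carries no type
(type-of *x*)       ; => a string type such as (SIMPLE-ARRAY CHARACTER (5))
(+ *x* 1)           ; signals an error at once: "hello" is not a number
```

The error in the last line is raised at the point of misuse, rather than letting a bad value silently corrupt memory.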
In Lisp, despite prejudices to the contrary, it is even possible to add type declarations, so that even if the compiler can't infer the types of variables, it can be told them so that further optimisation and checking may be possible. In Common Lisp, for instance, there is a `safety switch' so that at high safety, these declarations are checked as assertions, and at low safety, they are used to optimise out any type-checking code that would be provided by default. Programs can be developed and tested with high safety, and when they are free of bugs the safety can be switched to generate a smaller and faster version with these additional optimisations. These features are shared by most dialects of Lisp, but standards have evolved. Today, there are just two dialects in active use: Common Lisp and Scheme; they differ greatly in their size and even in their syntax, but the most important distinction between them is in their different aims.
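A sketch of how such declarations look in Common Lisp (the function name here is invented for illustration):

```lisp
;; With safety low, the compiler trusts the declarations and omits
;; run-time checks; with safety high, it checks them as assertions.
(defun add-fixnums (a b)
  (declare (type fixnum a b)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ a b)))
```

Recompiling the same source with `(optimize (speed 0) (safety 3))` turns the declarations back into checked assertions for testing.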
Common Lisp [9] was intended as an industrial strength language, and emerged as an attempt to create a single standard from the plethora of dialects in active use in industry in the early 1980s, most of which were variants of MacLisp. It succeeded, after some initial resistance. Today it can be used on almost every conceivable platform with good performance. Common Lisp is a big language, although at its core there is a small one trying to get out. Taken as a whole, though, it is rather larger than Ada, and this makes it difficult for implementors, users, and novice programmers alike. Learning Lisp is fairly quick and the benefits are reaped soon, but the scale of Common Lisp does make life rather hard for its users in their early days.
Scheme [7,11], by contrast, was conceived almost by accident in laboratory experiments studying the connection between denotational semantics and message-passing languages. The upshot was the abandoning of the ``go to'' concept, even implicitly, as part of the language, and its replacement by the more general notion of a continuation function. The result was a very small, clean language ideal for teaching. It also has carefully defined semantics; semantics which are truer to the original lambda calculus than other dialects of Lisp. Many of the higher-level elements of the Common Lisp standard, such as the object system and sorting procedures, are intentionally not part of Scheme. This allows exploration of many different possible program structures in the learning process; much more than with most conventional languages which are generally biased towards a particular kind of organisation. Besides which, it keeps the language small and means that students can understand these rather fancy elements by actually building them for themselves as part of the learning process.
Whichever dialect is being used, much of the language shares the same basic principles: the principles which have kept the language alive from its early days by allowing it to adapt to changes in requirements. These early foundations provided a degree of flexibility almost unparalleled in other languages. A large part of this flexibility was originally due to its use of symbolic expressions, a simple and flexible data structuring system constructed from cons cells, each of which contains two pointers to other objects, called the car and cdr for arcane historical reasons associated with the original IBM hardware underlying the first implementations. Each pointer can point to either another cons cell or to something else, such as a symbol or a number, so cons cells could be daisy-chained into long pointer-linked structures, and lists were born.
(defun . (foo . ((n . nil) . ((+ . (n . (1 . nil))) . nil))))

but this wouldn't make much sense to anybody. The Lisp syntax abbreviates this by rewriting chains of conses as lists of elements separated by spaces and leaving out the symbol nil marking the end of the list. That is, (a . (b . (c . nil))) can be written as (a b c).
The usual syntax for the structure in figure 2 is therefore:
(defun foo (n) (+ n 1))

which corresponds to defining a function taking a single parameter n, adding one to it, and returning the value.
In summary, Lisp is a dynamically-typed language originally derived from the mathematical foundations of lambda-calculus, but throughout its history it has continually been developed to be better tuned to the requirements of the time. The positive elements, such as symbolic expressions and its consistent treatment of programs and data, have been retained, but problem areas, like Lisp's use of interpreters and dynamic binding, have been replaced with fast and efficient compilers using static binding. Whole new features have been added where the language was deficient: more data types such as strings and hash-tables, object-oriented programming, and powerful numeric capabilities. Lisp is a survivor, and it has survived by adapting to today's needs.
But consider: the function that is applied---in this case list---can be considered as a parameter to this function. If the two lists were of numbers we might want to add corresponding elements by using +, or multiply them by *. mapcar does exactly this: it provides a general iteration framework for free, and uses a function passed by its caller to generate values it can then combine into a result list.
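For example (an illustration in the spirit of the text), the same mapcar framework pairs, adds, or multiplies corresponding elements, depending only on the function passed in:

```lisp
(mapcar #'list '(1 2 3) '(10 20 30))  ; => ((1 10) (2 20) (3 30))
(mapcar #'+    '(1 2 3) '(10 20 30))  ; => (11 22 33)
(mapcar #'*    '(1 2 3) '(10 20 30))  ; => (10 40 90)
```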
Not only is this version much shorter, it is also much more reliable, because the iteration framework only needs to be written once. mapcar circumvents all the potential errors in having to write iteration frameworks all over the place. The compiler then takes the higher-order function mapcar and unravels it to generate code which will, in practice, be identical to that in the iterative version. The nastiness of the basic iterative version isn't a problem, because in practice nobody would bother writing it when there is no performance advantage, but the power of the general iteration framework in do is available for exceptional circumstances.
Conventional languages such as Pascal and C also allow higher-order functions, but their static type-checking removes much of their utility: an equivalent of mapcar could not be used with both list and +, because one returns a list and the other an integer, and these cannot be accommodated in a single function result in a statically-typed language.
Lisp's sequence functions can go even further than this, because they treat all the different kinds of object equivalently. It is possible to map over vectors and lists in the same way. After all, both contain an ordered sequence of values. Common Lisp contains a key set of sequence-generic functions: functions which will operate on any kind of sequence; lists, vectors, strings, and so on. These functions can be used to find, count, or delete elements, for instance, without the programmer needing to know anything about the underlying representation. Lisp provides true independence from the underlying representation, so that in a well-designed system data representations can be changed without any effect on the code that uses it, without needing to encapsulate everything so that it becomes difficult to debug.
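For instance (an illustrative sketch), the same generic functions apply unchanged to lists, vectors, and strings:

```lisp
(find 3 '(1 2 3 4))      ; => 3
(find 3 #(1 2 3 4))      ; => 3  (same call, now on a vector)
(count #\l "hello")      ; => 2
(remove 2 '(1 2 3 2 1))  ; => (1 3 1)
```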
In Lisp, another common kind of higher order function is a macro. Unlike conventional languages where macros operate by transforming program text at the lexical level, Lisp macros are themselves written in Lisp, and they work by transforming one Lisp program into another. As Lisp programs are just Lisp lists, this is both quick and simple to do, and the effect is to give macros access to the whole power of the language. Macros are used extensively in Common Lisp, indeed most of the control structures programmers use are actually macros, implemented in terms of a small number of primitive `special forms' which are all that the compiler needs to worry about. Scheme has about ten special forms, and Common Lisp about thirty. As an example, there isn't a Pascal-like repeat form in Common Lisp, so let's write one.
(defmacro repeat (form predicate)
  `(loop
     ,form
     (when ,predicate (return))))
The new macro creates a control structure which repeatedly executes its body, and then evaluates a predicate; when the predicate is true the loop exits. The backquote (`) syntax marks whatever follows as a kind of template: everything that follows is used literally to construct bits of program, except that forms preceded by a comma (,) are evaluated first and their value put in instead. From now on, the macro will be expanded when it is used in code, so the definition:
(let ((command ()))
  (repeat (setf command (read-command))
          (null command)))
will be transformed before the compilation proper into:
(let ((command ()))
  (loop
    (setf command (read-command))
    (when (null command) (return))))
Other macros in this form, notably loop and when, will then be expanded until only special forms are left, and then the compiler will be run to generate the corresponding executable code.
In the same manner whole new languages can be constructed, with entirely different syntax and semantics to those of Common Lisp, and indeed this is a very typical way of programming in Lisp. First, a more applicable (and usually quite reusable) general purpose language is built, and then this is gradually specialised to the application in hand. Lisp is a natural for writing compilers and interpreters for other languages.
Often this is done by adding a code walker, a program which can go through the nested list structure of a segment of program code and can process it according to the semantics of the special forms it contains. This adds the whole power of a compiler to the macro facility and is often used to generate extensions to the language: this is exactly how the data flow and nondeterministic choice extensions which will be described in sections 5 and 6 are implemented.
Some of the advantages of Lisp lie not in the language itself, but in the environments which are its natural consequence. Lisp's ancestry as an interpreted language lives on in its incremental nature; changes can be made to existing definitions while in the middle of testing, and, should an error occur, even in the error context itself, within the Lisp error handler. The edit-debug cycle is very short in Lisp compared to almost any other language.
Lisp environments are usually centred around a loop, called the REP loop (for read, eval, and print, the functions it calls repeatedly). This is a bit like a command-line interface: Lisp repeatedly reads an expression, evaluates it, and prints the answer. Functions can be defined, functions called, or variables evaluated all by typing in a form to be evaluated. If there is an error, for instance in the middle of evaluating a function call, the Lisp error handler is invoked and this creates a new, inner REP loop. In this loop, functions and variables can be evaluated as before, but additionally local variables bound in function calls on the stack can be inspected, and even changed, before execution is continued by returning a value from one of the calls on the stack. An error caused by faulty code can be fixed in the source, the function redefined, and the program run continued without needing to quit or rebuild the system to get it back into the right context for continued testing. A new generation of C environments also provide this facility: many of the companies that supply these were once Lisp houses and their environments use the same technologies and techniques which enabled today's advanced Lisp systems.
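A typical interaction with the REP loop might look like this (the `?' prompt and the exact output are implementation-dependent):

```lisp
? (defun square (x) (* x x))
SQUARE
? (square 7)
49
```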
Since Lisp associates data types with values rather than with variables, and so the data type information is around at run-time, Lisp environments usually provide an `inspector' to allow the contents of structures and objects to be browsed and even changed. Figure 5 shows a typical inspector display; the upper window shows a rectangle object with its slots left, top, width, and height, and the lower window shows its class. The class is a first-class object describing the slots and structure of all rectangle objects, and it too consists of slots with values, just like the basic rectangle object. The power of the inspector lies in that it can be used at run-time to check and even change the actual values of variables; as such it is an invaluable debugging tool.
As with many other programming language features, Lisp was first used as an experimental laboratory for this new approach to program design, and the result was the gradual emergence of a synthesis which added value in the process, without too many effects on the existing language. Object-oriented programming (OOP) wasn't added to Lisp as a `bolt-on goody'. The Common Lisp object-oriented programming system took many years to develop, and was actually a hybrid of two existing OOP systems: flavors, which was part of the ZetaLisp [13] environment, and Common Loops, which was part of Xerox's InterLisp environment. After a small group of proponents from each camp had been locked in a room for several years, they emerged with the Common Lisp Object System, or CLOS; a standard that everybody (eventually) agreed was better than any of its antecedents.
Defining a class in CLOS is simple. Like the defun form used to define functions, there is a defclass form to define classes. The following example defines the class rectangle, as shown in the inspector window example shown in figure 5.
(defclass rectangle ()
  ((left :initarg :left)
   (top :initarg :top)
   (width :initarg :width)
   (height :initarg :height)))
Objects are created by calling the function make-instance, which takes a class and returns a newly created instance of that class. The following example code will create and inspect a rectangle---the display that it creates is shown in the top window in figure 5.
(inspect (make-instance 'rectangle :left 100 :top 100 :width 128 :height 64))
Methods can be written using defmethod, which is rather like defun, except that for each parameter there is a corresponding class. For instance, to write a function to return the right boundary of a rectangle, the slots left and width must be added together. with-slots creates a local context where the slots can be used as if they were simple variables.
(defmethod rectangle-right ((object rectangle))
  (with-slots (width left) object
    (+ width left)))
These methods are bound up by CLOS into a generic function, which corresponds roughly to a message in Smalltalk. Generic functions combine all the different methods together with some hidden glue so that when the generic function is called the right method is run.
By extending functions rather than using message passing as the structure to hang methods on, CLOS opened up the possibility of one of its main innovations: multimethods, methods which dispatch on more than one parameter. Although this has been criticised as against the spirit of message passing (which it is) and even object-oriented programming itself (which it isn't), there is no doubt that multimethods are used, both extensively and effectively, in real applications. For example, drawing a shape depends on both the class of the shape and the class of the device it should be drawn to. These examples (derived from [4]) show a generic function draw which dispatches on two parameters, and how different methods can be defined for the different combinations. The first method will only be run if the first parameter is a rectangle and the second a bitmap-device, the second method for a rectangle and a postscript-device, and the third for a circle and a postscript-device. The generic function draw will check the types of its two parameters to decide which should be run. A standard message passing approach can achieve a similar effect by chaining together messages, but this is not so elegant or clear a solution as these draw methods.
(defmethod draw ((shape rectangle) (device bitmap-device))
  ... paint pixels on display ...)

(defmethod draw ((shape rectangle) (device postscript-device))
  ... output PostScript commands ...)

(defmethod draw ((shape circle) (device postscript-device))
  ... output PostScript commands ...)
Because the method combination glue is program code rather than a table, CLOS allows methods to be written on standard Lisp classes, such as list and string, and on individual objects, like the number 1. We can add another draw method, which draws strings by dispatching on the built-in class string.
(defmethod draw ((shape string) (device postscript-device))
  ... output PostScript commands ...)
Much of the power in CLOS comes from the way that the different methods can be combined. For instance, we might define a class of shadowed objects which has a draw method like this:
(defmethod draw ((shape shadowed) device)
  ... set shadowed context and offset shape ...
  (call-next-method)
  ... restore original context and shape ...
  (call-next-method))
This shows the use of the standard function call-next-method which invokes the next most specific method. If we defined a new class which inherited from both shadowed and rectangle the draw method for shadowed would be run first. The method above would then set up the context to draw a shadow, and use call-next-method to call the draw method for rectangle. The context would then be restored and this method called again to draw a normal rectangle over the shadow. The power of the multimethod combination approach means that shadowed can also be mixed with other classes of shape, such as circle, and it will still work. Also, because this draw method doesn't need to be specialised to postscript-device, the same shadowing code can be used for many different kinds of device.
Method combination like this, multimethods, and the integration with base Common Lisp classes give CLOS a qualitative advance over other object-oriented programming systems. Finally, if CLOS isn't enough, there is a whole metaobject protocol [4] which allows the basic patterns of inheritance, slot access, and method writing to all be specialised to provide, for instance, object-oriented front-ends to data bases. All this power is there should it be needed, although most people get by with simple classes and methods.
Series can provide very elegant solutions to many problems; solutions which are as efficient as code written iteratively, but with a style and elegance normally only possible with inefficient approaches such as lazy evaluation.
The series package is implemented as a set of macros which, using the Common Lisp macro facility, apply the whole power of the language to analyse a user's code and to transform it, interleaving code from the component functions to create a single, although complex, iterative function.
As an example, Lisp people frequently try out an implementation with the factorial function, which is easy to define recursively as follows:
(defun fact (n)
  (if (= n 1)
      1
      (* n (fact (- n 1)))))
In words, the factorial of n is n multiplied by the factorial of n-1, with the special case that the factorial of 1 is 1. The same function can be written using the series macro package as follows:
(defun fact (n)
  (collect-product (scan-range :start 1 :upto n)))
The scan-range function generates a series containing the elements 1 to n inclusive. The collect-product function actually isn't part of series (collect-sum is), but collect-product can be trivially defined similarly. The series version of the function is shorter, and is often faster because the code that is generated is iterative rather than recursive.
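One plausible definition, by analogy with collect-sum, uses the series reduction primitive collect-fn; this is a sketch only, and the exact form depends on the series implementation in use:

```lisp
;; Hypothetical sketch: fold the series with *, starting from 1.
(defun collect-product (items)
  (collect-fn 'number #'(lambda () 1) #'* items))
```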
Series have a fair number of restrictions, some of which are problematic, but code which is written using series is generally both efficient and elegant. Series, as with CLOS, coexists with the rest of Lisp, although it tends to supersede some aspects of the core language; mapcar for instance, is rather redundant because the series function mapping does the same kind of thing.
As an example, consider the following definition:
(defun an-integer-between (low high)
  (if (> low high)
      (fail)
      (either low (an-integer-between (1+ low) high))))
This function nondeterministically returns an integer between the given bounds. The set of all possible values can be collected into a list by all-values, which is roughly equivalent to the bag-of primitive in Prolog.
(all-values (an-integer-between 1 5)) => (1 2 3 4 5)
Nondeterminism is a natural tool to apply in search problems: for instance, in a difficult scheduling task there will be two components to the problem, a search tree and a heuristic. Nondeterministic choice operators allow the search tree to be efficiently constructed. It is particularly good for solving `constraint satisfaction' problems, where there are a number of variables which must be given values while maintaining certain constraints on the relationships between the values. This is a special kind of search problem, one which packages like Screamer can make very easy, and one which can be tackled in a very natural way for many different kinds of task, particularly planning and scheduling problems.
In a conventional language, it might be claimed that the features of a data-flow language (like series) and nondeterminism (like Screamer) could be implemented by a library or a preprocessor, but the Lisp macro package strategy is very different. In Lisp, the languages are built one upon the other, so the result is a single uniform language, which has the power of the new paradigm and retains the generality of Lisp. The ability to construct a whole new language is beyond the capabilities of a library in a conventional language, and using a preprocessor creates a new language rather than combining the two. It is Lisp's ability to manipulate programs as data that makes packages like the series package and Screamer possible in a way they could never be in any conventional language.
Object-oriented and data-flow programming and nondeterminism aren't the whole story. Most Lisp programming is done in this style; designing more specialised languages within the basic Lisp framework. Production rule languages are a common class of language, but others include constraint solvers and generic editors. It is this ability to unify many different specialised and reusable languages within a single framework that makes Lisp a uniquely flexible language.
Lisp, a single language, can be programmed in many different ways. It can be programmed in traditional Lisp style as a combination of procedures and function application. It can be programmed as an object-oriented language, complete with multiple inheritance. It can be programmed as a data-flow language, with nondeterministic search, or as a rule based inference engine. All these different styles can be wrapped into a single language, so that the most appropriate style can be chosen for the problem in hand.
Perhaps the most important of these is Lisp's treatment of programs as data. In an interface, this means that a procedure can be installed as a `callback', so that when the user selects something in an interface, say an ``OK'' button or a ``Print...'' menu item, that procedure is run to carry out the corresponding action.
Figure 6 shows the Lisp code needed to create a dialog interface to the Lisp evaluator; this has a box where a form can be typed in, and the ``OK'' pressed to evaluate it and print the answer. There is also a ``Cancel'' button which quits the dialog without evaluating the form. This example is written using ``Common Graphics'', the user interface package used by Procyon Common Lisp on the Macintosh and Allegro Common Lisp under Windows 3.
(defparameter text-item
  (make-dialog-item :widget 'lisp-text
                    :box (make-box 18 35 250 60)))

(defparameter dialog-items
  (list (make-dialog-item :widget 'default-button
                          :title "OK"
                          :box (make-box 18 74 89 96)
                          :set-value-fn
                          #'(lambda (item new-value old-value)
                              (print (eval (dialog-item-value text-item)))
                              (values t t)))
        (make-dialog-item :widget 'cancel-button
                          :title "Cancel"
                          :box (make-box 177 74 250 96)
                          :set-value-fn
                          #'(lambda (item new-value old-value)
                              (values t t)))
        (make-dialog-item :widget 'static-text
                          :box (make-box 18 10 250 25)
                          :value "Please enter a form to be evaluated")
        text-item))

(defun evaluate ()
  (let ((dialog (cg:open-dialog dialog-items 'dialog *screen*
                                :pop-up-p t
                                :title "Evaluate"
                                :window-interior (make-box 18 34 285 142))))
    (pop-up-dialog dialog)
    (close dialog)))
Even though Lisp doesn't have a standard graphical user interface system, it has features which make the development of user interfaces rather easier than in many other languages. It is incremental, so interfaces can be developed bit by bit without constantly needing to rebuild everything. It treats programs as data, so callback procedures can be used to handle menu and dialog selection, and it is object-oriented, which allows new kinds of object to be written and added to interfaces without unnecessarily duplicating or breaking what is already there. Lisp is an easy and natural language for the development of graphical user interfaces.
First and foremost of these was delivery; in the early to mid 1980s it simply wasn't feasible to deliver a Lisp application of any size on hardware that could be widely used. It either required a special card or extra memory, and this pushed the cost up too high to be commercially viable.
These problems were all solved in the late 1980s, but the relationship between the Lisp and the artificial intelligence communities has remained a bit strained. There has, though, been a gradual change since Common Lisp was introduced, and now virtually every major computer hardware supplier has endorsed one Lisp implementation or another, although nowadays few suppliers mention artificial intelligence explicitly when they try to sell Lisp.
Ever since artificial intelligence became a bag of tools which could be applied to solve previously hard problems, the success of Lisp has grown step-by-step, as it provides exactly the right framework to implement these tools with the minimum of fuss. Although the two fields are now separate, there is still a positive feedback which benefits both.
Lisp is still used by artificial intelligence researchers, but its incremental nature has opened up a whole new range of opportunities. Lisp, especially with the CLOS, has proved very powerful in human interface design, where it allows the interface to be developed step-by-step without ever leaving the Lisp environment. In short, whenever some kind of language is needed, even if it isn't textual, Lisp's ability to stretch into new forms incrementally has proved an immensely valuable characteristic.
Common Lisp especially is a big language, and as recently as five years ago delivering applications widely on cheap hardware was almost impossible, but as hardware costs have dropped it has become much more viable. Common Lisp applications can and have been delivered on PCs with an 80386 processor and 4Mb of memory under Windows 3.1 [3]; a cheap and fairly typical system configuration.
Lisp does have one severe problem in real-time systems. Most languages handle memory dynamically by requiring programmers to explicitly free memory when they have finished with it. This is actually pretty difficult for the programmer, and adds significantly to the cost of development. It is usually necessary for any statically typed language, because such languages throw away all the type information at run time, so when a memory reference can no longer be accessed the run-time environment has no information as to what to do with the memory it referred to. Dynamically typed languages keep the type information at run-time, so when memory runs out, everything that can no longer be referred to can be implicitly deallocated and its space recycled. Lisp usually implements this with a garbage collector. Only if memory is genuinely full of accessible data does Lisp really run out of memory. This makes a programmer's life significantly easier, but means that occasionally the Lisp system stops to reorganise all its memory. For real-time systems, this stopping usually isn't acceptable, and although there are complicated variations on garbage collection that allow processing to continue while the memory is being reorganised, in practice it means that Lisp is rarely used for real-time systems. For a conventional interactive system, however, the occasional garbage collection isn't regarded as a problem, even by users [3,6].
Perhaps the biggest problem with Lisp, though, is the shortage of skilled people. Lisp isn't used much as a teaching language (although this is on the increase), and transfer of skills from conventional languages isn't always natural, or good for developing skill with Lisp. People normally need three to six months' experience to gain an adequate grasp of Common Lisp, and much less for Scheme, although their technique and style will continue to develop for many years. This may seem like a long time, but even while learning Lisp people can still build systems more quickly than their more skilled counterparts using a conventional language. For equivalent levels of skill and experience, Lisp programmers are roughly twice as productive as C/C++ programmers, even assuming a C/C++ environment that offers the same kind of incremental development; if the environment isn't incremental, the difference is far larger.
Lisp's principal advantage over other languages is development time. A substantial program may take two people a few months to develop, where another language would require a bigger team and take longer, perhaps needing two to five times as much effort. The resulting system need not even look like a Lisp program: as far as its users are concerned, it could have been written in C or Pascal. By cutting the time, and so the cost, of development, Lisp can be a very attractive language for niche applications, where the market is small and a high development cost would make product development uneconomic.
One example of just such a niche application is Syllabus [3]---a program to help schools construct timetables. This is a significant problem: every year a typical deputy head may spend six weeks locked away constructing a timetable, with no guarantee that the result is even consistent. Two people developed a program for the Apple Macintosh to help in this process; it took a little over a man-year to bring the program to the beta-test stage, after which constructing a timetable with the program's assistance would typically take two days.
Syllabus was one of the first Lisp applications to be delivered widely on standard hardware. At the time of its launch (1990), the minimal hardware configuration cost about £1000: today this would be about £400, or even less.
This is the kind of system Lisp is ideal for. There isn't a very large market for this application, so the low development cost made the product viable. The application, however, is still valuable, and was competitive against existing products---even those written in more conventional programming languages. Of course, the system's users were generally unaware of the choice of language (although some did ask); the human interface hid any evidence of the underlying language anyway. All that mattered was that the program did the job.
Other Lisp applications developed for use in industry are similar; applications that have fairly small markets, such as aircraft scheduling and design of analogue electronic devices, but which nevertheless are valuable tools within those markets. Lisp makes the development of these products cost-effective in a way that other languages can't.
Perhaps the most exciting prospect for the future is Apple's decision to define a new dialect: Dylan [2]. This was motivated by the problems with Common Lisp as an industrial language; Apple split their effort into two branches, the first concentrating on Macintosh Common Lisp, and the second on Dylan. The Dylan dialect is a direct descendant of Scheme, but incorporates an object system with the scope and style of CLOS and the consistency of Smalltalk. All its functions, even the system primitives, are generic functions, and new methods can be written on them.
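The flavour of this generic-function style can be seen in CLOS, from which Dylan inherits it. A minimal Common Lisp sketch (the class and function names here are illustrative, not part of either language's built-in vocabulary):

```lisp
;; A generic function names an operation; methods supply behaviour
;; for particular classes, and new methods can be added at any time.
(defgeneric area (shape)
  (:documentation "Return the area of SHAPE."))

(defclass circle ()
  ((radius :initarg :radius :reader radius)))

(defmethod area ((c circle))
  (* pi (radius c) (radius c)))

;; Dispatch picks the method from the argument's class at run time:
(area (make-instance 'circle :radius 2.0))
```

Dylan takes this design further than CLOS by making even the system primitives generic, so the same extension mechanism applies uniformly across the whole language.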
The intent behind Dylan was to create a new, object-oriented but dynamic language attractive enough to entice C++ programmers to use it. Accordingly, the language has a syntax closer to C++ than to traditional Lisp, one hopefully more congenial to those who blanch at the sight of all those nested parentheses. Underneath, the whole power of Lisp is available for rapid development. In fact, Dylan is a language designed to be fast, small, without lots of parentheses, and to be used for developing conventional software applications---the antithesis of everything that Lisp is supposed to be.
To give a hint of the correspondence between Dylan and Common Lisp, here is the Dylan definition equivalent to the factorial function shown earlier.
define method fact (n)
  if (n = 1)
    1;
  else
    n * fact (n - 1);
  end if;
end method fact;
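For readers without the earlier page to hand, a standard Common Lisp rendering of the same function (not necessarily character-for-character the version shown earlier) reads:

```lisp
;; Standard Common Lisp factorial, for side-by-side comparison
;; with the Dylan definition above.
(defun fact (n)
  (if (= n 1)
      1
      (* n (fact (- n 1)))))
```

The structure is identical; only the surface syntax differs.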
Apart from anything else, Dylan sets out to overcome the criticisms of Common Lisp as an industrial programming language: it is smaller, more consistent, and likely to perform better. Whether or not Dylan will be a success isn't yet clear, although a strong industrial consortium has committed to its development. The signs of potential success are certainly visible, but much hangs on the first implementations and on the acceptability of the alternative syntax to programmers familiar with more conventional languages.
And the Lisp community shouldn't complain about the prejudices of people outside; this is the very force which encouraged the development of more practically useful dialects---the very force which will ensure the survival of Lisp as a useful language. As Burney's character Dr Lyster went on to say, `Yet this, however, remember: if to pride and prejudice you owe your miseries, so wonderfully is good and evil balanced, that to pride and prejudice you will also owe their termination.' 'Nuff said.
[2] Apple Computer Inc. Dylan: An Object-Oriented Dynamic Language. Apple Computer Inc., 1991.
[3] Rick Evertsz, Mark Dalgarno, Geoffrey A. Forster, and Stuart N. K. Watt. Syllabus: A solution to the school timetabling problem. In Proceedings of the First European Conference on the Practical Application of Lisp, Cambridge, UK, 1990.
[4] Gregor Kiczales, Jim des Rivières, and Daniel G. Bobrow. The Art of the Metaobject Protocol. MIT Press, 1991.
[5] John McCarthy. History of Lisp. ACM SIGPLAN Notices, 13(8), 1978.
[6] J. Nurminen. RFT design system: Experiences in the development and deployment of a Lisp application. In Proceedings of the First European Conference on the Practical Application of Lisp, Cambridge, UK, 1990.
[7] Jonathan Rees and William Clinger. Revised revised revised revised report on the algorithmic language Scheme (R4RS). AI Memo 848b, MIT AI Laboratory, 1992.
[8] Jeffrey Mark Siskind and David Allen McAllester. Screamer: A portable efficient implementation of nondeterministic Common Lisp. Technical Report IRCS--93--03, University of Pennsylvania Institute for Research in Cognitive Science, 1993.
[9] Guy Lewis Steele, Jr. Common Lisp: The Language. Digital Press, second edition, 1990.
[10] Guy Lewis Steele, Jr. and Richard P. Gabriel. The evolution of Lisp. ACM SIGPLAN Notices, 28(3), 1993.
[11] Guy Lewis Steele, Jr. and Gerald Jay Sussman. An interpreter for the extended lambda calculus. AI Memo 349, MIT AI Laboratory, 1975.
[12] Richard C. Waters. Optimization of series expressions, Part I: User's manual for the Series macro package. AI Memo 1082, MIT AI Laboratory, 1989.
[13] Daniel Weinreb and David Moon. Lisp Machine Manual. MIT AI Laboratory, fourth edition, 1981.