Design of the SCHEME-78 Lisp-based microprocessor (1980)
dl.acm.org | 106 points by fanf2 5 days ago
In an alternate universe, we'd all be using descendants of these instead of the 8086.
Part of what killed it, besides not having the economy of scale in microprocessors, is that optimizing compilers got good at turning loops over integers into fast code for regular CPUs, and these are pretty important for overall system performance. So no matter how much faster a custom CPU could run eval, it wasn't going to be competitive at integer loops.
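Roughly the contrast I have in mind, as a toy Scheme sketch (purely illustrative, nothing from the paper): the first loop is the kind of integer/index code that ordinary compilers learned to turn into a handful of tight register operations, while the second is the pointer chasing a Lisp-oriented machine speeds up, and that stays bound by memory latency no matter how fast eval itself is.

    ;; Integer loop over a vector: index arithmetic a conventional
    ;; compiler maps onto tight register/increment/compare code.
    (define (sum-vector v)
      (let loop ((i 0) (acc 0))
        (if (= i (vector-length v))
            acc
            (loop (+ i 1) (+ acc (vector-ref v i))))))

    ;; Pointer chasing over a list: each step is a dependent load
    ;; (cdr), so it is bounded by memory latency, not by eval speed.
    (define (sum-list lst)
      (if (null? lst)
          0
          (+ (car lst) (sum-list (cdr lst)))))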
I think it's a little more than that. The Lisp machines were very expensive (though the single chip version made things like a MacIvory or MicroExplorer somewhat more affordable), very proprietary, and generally didn't run the software the mass market was looking for (not that they couldn't...there just was never a Lotus-123 or Wordstar for Symbolics). So there was probably never a path to real economy of scale.
But otherwise I think you're spot on, and it's a common arc in this industry: a company comes up with a true innovation solving a problem better than the competitors, Moore's law does its thing, suddenly the COTS products are competitive through brute force if nothing else at a lower price point, and the company either has to innovate again or embrace a COTS platform. Jim Gettys wrote a neat paper about how this affected high-end video cards back in the early X11 days (admittedly, no longer directly applicable since GPUs are the COTS solution now, but the principle stands).
Had these been picked for the IBM PC, it would have died and been replaced by some RISC.
Lisp machines only made any sense in that very brief window when DRAM was faster than the CPUs that used it. Partly because of process technology, partly because of lack of understanding of CPU design principles. With x86, you could choose to ignore the parts of the design that made it slow, and just build on the basics to bring the design forward. With a lisp machine, that would have never worked.
It wouldn't have been long until you would have been able to buy machines that would have been more than 10x faster on real workloads, and probably cheaper too.
Maybe that's true for this project (implementing pointer chasing list interpreter directly in hardware), but it's much less clear to me why it would be true of the much more commonly remembered examples of "lisp machines" like symbolics, TI, etc.
Most likely the IBM PC would have died had it not been for the way clones came to be.
> In an alternate universe, we'd all be using descendants of these instead of the 8086.
Unlikely. For this to happen, the developers would have had to target the mass market (i.e. working hard to make the processor and a computer based on it rather cheaply available for the masses). This is clearly not what SCHEME-78 was developed for.
Programmers may have wanted to program in other languages but users wanted speed.
People were already buying computers that cost more than cars and they ran like molasses.
You have to deliver what customers want on their terms, not program on your own terms.
> users wanted speed
I'd argue users wanted off the shelf software, not bespoke solutions, which is why they were cool with slow machines that ran 123, dBase and Wordperfect.
What I remember from the 80s/90s is that computer speed was never fast enough to keep up with use cases. Users really did want speed and marketers focused on speed.
Computers are still never fast enough. But for 95% of users, if you said "you can have a computer that's 5x as fast, but won't run any of the software you use every day", they'd pass.
I don't know what you're trying to say here, all those programs would have run terribly if they were written in lisp or scheme.
Compiled languages with automatic memory management were already a thing on 16-bit home computers, and they ran fast enough for boring data-entry business software, e.g. anything xBASE, compiled BASIC.
For anything to do with graphics and games, yes, only Assembly would do the trick.
Even plain word processors and spreadsheets were not fast enough for general use in the 80s. Everyone was mostly excited about making them faster, not about graphics and games.
Still, not all of them were fully written in Assembly, the way games and graphics were.
I was there, and wrote my share of Clipper and DBase III code, among many other things.
You think hit software back then was written in basic or anything close to scheme?
dBase was written in assembly first; WordPerfect was written in assembly first.
Yes, DBase was written in Assembly, like many compilers and interpreters back then, and I am certain you would be able to dig out a Lisp or Scheme compiler for CP/M equally written in Assembly.
Yet, the folks down at the bank, insurance companies, and video rentals, were using DBase applications written in xBase, compiled via Clipper, not Assembly.
My first computer was a Timex 2068, I kind of know what was being written with what.
I don't know what you're trying to say here. This thread was about someone saying in an alternate universe lisp computers might have caught on.
They were never going to catch on and it was never going to work because people didn't want them. Programmers might have wanted to program in lisp, but users didn't want to buy software made in lisp.
For some reason you're bringing up that someone may have written some scripts in an interpreted language which if anything reinforces that lisp machines weren't necessary.
> I'd argue users wanted off the shelf software, not bespoke solutions, which is why they were cool with slow machines that ran 123, dBase and Wordperfect.
Followed by
> I don't know what you're trying to say here, all those programs would have run terribly if they were written in lisp or scheme.
While dBase, the interpreter, was written in Assembly, Clipper, the compiler, was written in C, and the dBase software itself was written in the xBase programming language, a language that by the Clipper 5 days was comparable to Lisp or Scheme in capabilities: garbage collected, capable of functional programming and OOP, in 640 KB, running on 20 MHz CPUs.
What does this have to do with people wanting lisp machines?
They couldn't do what people wanted. People didn't want their software written in lisp, they didn't want expensive hardware and slow software. Who cares if that software had interpreters in it?
Does anyone know the connection between this group and Danny Hillis who created the Connection Machine a few years later (see what I did there)? Presumably they were all at MIT at about the same time.
https://en.m.wikipedia.org/wiki/Thinking_Machines_Corporatio...
I know that Guy Steele joined Thinking Machines, but after they'd at least designed the CM-1. He talked a little about it in A Conversation with Guy Steele Jr. in the April 2005 issue of Dr. Dobb's Journal. I don't have a link, but I am sure he has talked about it elsewhere too.
For a somewhat more commercially successful take on this theme, there are a number of papers on the Symbolics 'Ivory' lisp processor that are worth a read. TI made a Lisp processor as well, but I'm less familiar with it.
And some fortunate ones also used Interlisp-D.
Nowadays rescued at https://interlisp.org/ , with a browser emulation available.
Emacs with SBCL still isn't close to this; maybe if you booted straight into Emacs, and even then.
I assume you meant to reply to some other comment, since this has nothing to do with Ivory, or even chips.
It has to do with Xerox PARC microcoded CPUs, which also supported Xerox PARC idea of what a Lisp Machine was supposed to look like.
So, nothing to do with lisp-based microprocessor chips. Got it.
While the Alto and Dorado weren't microprocessors, a couple of years after the Dorado they could have been. And there was a period of time when RISC processors were wasting their potential advantage of running user code at microcode-like one-cycle-per-instruction speeds by not including any instruction cache, as Alan Kay has argued on Quora. If Chuck Thacker had gone to work at Motorola or National and gotten free rein to build a Dorado microprocessor, it would have kicked butt. Also, Superman would beat Spider-Man.
But Xerox wasn't organized to sell even microcomputers, much less microprocessors, so it would have had to be another company that did it. Motorola, say, or Apple, but although they flirted with doing their own processor for the Lisa, they didn't do it in real life until maybe the Mac M1. (PowerPC and Newton ARM weren't really Apple designs.)
The Scheme chip is pretty different from the Dorado or from the Symbolics processors, which at the time also weren't microprocessors. But I think it's a mistake to think they had nothing to do with each other.
I have to assume that the TI Lisp chip was effectively just a CADR-on-a-chip (actually Lambda-on-a-chip) rather than a new architecture, necessary to support their software base.
A major difference between the Lambda and the TI machines was a switch from 24-bit fixnums (and addresses) to 25-bit fixnums (and addresses).
You could run 25-bit images and microcode on any of the CADR/Lambda/TI machines.
TIL.
The only mentions I have seen were of changes for the TI Explorer relative to the Lambda, so I could not figure out whether it involved any kind of different circuitry.
The change was to remove the "flag bit" from each memory word. This was done before MIT System 99 which was before the first LMI release.
The Lambda had memory management hardware that could make use of the extra virtual address space, the CADR didn't. Any of the machines could handle 24 or 25 bit integers.
MIT System 99 was made after the first LMI release, which was based on MIT System 98 (and even slightly older releases) - which removed the flag bit. See the release notes here https://tumbleweed.nu/lm-3/sys-msg/sys98-msg.html
] Pointer Fields Now 25 Bits; Flag Bit Gone.
Each typed data word in Lisp machine memory used to have one bit called the "flag bit" which was not considered part of the contents of the word. This is no longer so. There is no longer a flag bit; instead, the pointer field
of the word is one bit larger, making it 25 bits in all. This extra bit extends the range of integers that can be represented without allocation of storage, and also extends the precision of small-floats by one bit.
On the Lambda processor, the maximum size of virtual memory is doubled. This is the primary reason for the change. Unfortunately, the CADR mapping hardware is not able to use the extra bit as an address bit, so the maximum virtual memory size on a CADR is unchanged.
The functions %24-BIT-PLUS, %24-BIT-DIFFERENCE and %24-BIT-TIMES still produce only 24 bits of result. If you wish to have a result the full size of the pointer field, however wide that is, you should use the functions %POINTER-DIFFERENCE and %POINTER-TIMES (the last is new), and %MAKE-POINTER-OFFSET with a first argument of DTP-FIX to do addition.
The functions %FLOAT-DOUBLE, %DIVIDE-DOUBLE, %REMAINDER-DOUBLE and %MULTIPLY-FRACTIONS use the full width of the pointer field.
The values returned by SXHASH have not changed! They are always positive fixnums less than 2^23.
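For a rough sense of what that extra pointer bit buys, here is a sketch assuming two's-complement fixnums stored directly in the pointer field (my reading of the note above):

    ;; Range of fixnums representable without heap allocation,
    ;; assuming a signed value occupying the whole pointer field.
    (define (fixnum-range bits)
      (cons (- (expt 2 (- bits 1)))
            (- (expt 2 (- bits 1)) 1)))

    (fixnum-range 24)  ; => (-8388608 . 8388607)    24-bit field
    (fixnum-range 25)  ; => (-16777216 . 16777215)  25-bit field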
https://news.ycombinator.com/item?id=31758139
40 points by DonHopkins on June 15, 2022 | on: Purdue Starts Comprehensive Semiconductor Degree P...
Lynn Conway, co-author along with Carver Mead of "the textbook" on VLSI design, "Introduction to VLSI Systems", created and taught this historic VLSI Design Course in 1978, which was the first time students designed and fabricated their own integrated circuits:
>"Importantly, these weren’t just any designs, for many pushed the envelope of system architecture. Jim Clark, for instance, prototyped the Geometry Engine and went on to launch Silicon Graphics Incorporated based on that work (see Fig. 16). Guy Steele, Gerry Sussman, Jack Holloway and Alan Bell created the follow-on ‘Scheme’ (a dialect of LISP) microprocessor, another stunning design."
THE M.I.T. 1978 VLSI SYSTEM DESIGN COURSE:
https://ai.eecs.umich.edu/people/conway/VLSI/MIT78/MIT78.htm...
A Guidebook for the Instructor of VLSI System Design:
https://ai.eecs.umich.edu/people/conway/VLSI/InstGuide/InstG...
That book and course catalyzed the "Mead–Conway VLSI chip design revolution":
https://en.wikipedia.org/wiki/Mead%E2%80%93Conway_VLSI_chip_...
https://ai.eecs.umich.edu/people/conway/conway.html
https://en.wikipedia.org/wiki/Lynn_Conway
https://en.wikipedia.org/wiki/Carver_Mead
Lynn Conway's "Reminiscences of the VLSI Revolution: How a series of failures triggered a paradigm shift in digital design":
https://ai.eecs.umich.edu/people/conway/Memoirs/VLSI/Lynn_Co...
Also:
https://news.ycombinator.com/item?id=25964865
Here's some historic Vintage VLSI Porn that I posted 6 years ago, from Lynn Conway's famous VLSI Design course at MIT: https://en.wikipedia.org/wiki/Lynn_Conway
https://ai.eecs.umich.edu/people/conway/conway.html
https://news.ycombinator.com/item?id=8860722
DonHopkins on Jan 9, 2015 | on: Design of Lisp-Based Processors Or, LAMBDA: The Ul...
I believe this is about the Lisp Microprocessor that Guy Steele created in Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course:
http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/MIT78.html
My friend David Levitt is crouching down in this class photo so his big 1978 hair doesn't block Guy Steele's face:
The class photo is in two parts, left and right:
http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Class2s.jp...
http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Class3s.jp...
Here are hires images of the two halves of the chip the class made:
http://ai.eecs.umich.edu/people/conway/VLSI/InstGuide/MIT78c...
http://ai.eecs.umich.edu/people/conway/VLSI/InstGuide/MIT78c...
The Great Quux's Lisp Microprocessor is the big one on the left of the second image, and you can see his name "(C) 1978 GUY L STEELE JR" if you zoom in. David's project is in the lower right corner of the first image, and you can see his name "LEVITT" if you zoom way in.
Here is a photo of a chalkboard with status of the various projects:
http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Status%20E...
The final sanity check before maskmaking: A wall-sized overall check plot made at Xerox PARC from Arpanet-transmitted design files, showing the student design projects merged into multiproject chip set.
http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Checkplot%...
One of the wafers just off the HP fab line containing the MIT'78 VLSI design projects: Wafers were then diced into chips, and the chips packaged and wire bonded to specific projects, which were then tested back at M.I.T.
http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Wafer%20s....
Design of a LISP-based microprocessor
http://dl.acm.org/citation.cfm?id=359031
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf
Page 22 has a map of the processor layout:
http://i.imgur.com/zwaJMQC.jpg
We present a design for a class of computers whose “instruction sets” are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data, and so is a suitable basis for a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. An instruction set can be designed for programs expressed as trees of record structures. A processor can interpret these program trees in a recursive fashion and provide automatic storage management for the record structures. We discuss a small-scale prototype VLSI microprocessor which has been designed and fabricated, containing a sufficiently complete instruction interpreter to execute small programs and a rudimentary storage allocator.
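To make the abstract's point concrete, here is a toy Scheme sketch (my own illustration, not the chip's actual microcode or storage format): the program being run is itself a tree of cons cells, and the "instruction set" is just a recursive walk over that tree, roughly the traversal the paper describes the processor as interpreting.

    ;; Toy evaluator over programs stored as linked cons-cell trees.
    (define (toy-eval expr env)
      (cond ((number? expr) expr)
            ((symbol? expr) (cdr (assq expr env)))        ; variable lookup
            ((eq? (car expr) 'quote) (cadr expr))         ; (quote x)
            ((eq? (car expr) 'if)                         ; (if test then else)
             (if (toy-eval (cadr expr) env)
                 (toy-eval (caddr expr) env)
                 (toy-eval (cadddr expr) env)))
            ((eq? (car expr) 'lambda)                     ; close over env
             (list 'closure expr env))
            (else                                         ; application
             (toy-apply (toy-eval (car expr) env)
                        (map (lambda (a) (toy-eval a env)) (cdr expr))))))

    (define (toy-apply fn args)
      (let ((lam (cadr fn)) (env (caddr fn)))
        (toy-eval (caddr lam)                             ; body of the lambda
                  (append (map cons (cadr lam) args) env))))

    ;; (toy-eval '((lambda (x y) (if x x y)) 1 2) '())  ; => 1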
Here's a map of the projects on that chip, and a list of the people who made them and what they did:
http://ai.eecs.umich.edu/people/conway/VLSI/MPCAdv/SU-BK1.jp...
1. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: Charge flow transistors (moisture sensors) integrated into digital subsystem for testing.
2. Andy Boughton, J. Dean Brock, Randy Bryant, Clement Leung: Serial data manipulator subsystem for searching and sorting data base operations.
3. Jim Cherry: Graphics memory subsystem for mirroring/rotating image data.
4. Mike Coln: Switched capacitor, serial quantizing D/A converter.
5. Steve Frank: Writeable PLA project, based on the 3-transistor ram cell.
6. Jim Frankel: Data path portion of a bit-slice microprocessor.
7. Nelson Goldikener, Scott Westbrook: Electrical test patterns for chip set.
8. Tak Hiratsuka: Subsystem for data base operations.
9. Siu Ho Lam: Autocorrelator subsystem.
10. Dave Levitt: Synchronously timed FIFO.
11. Craig Olson: Bus interface for 7-segment display data.
12. Dave Otten: Bus interfaceable real time clock/calendar.
13. Ernesto Perea: 4-Bit slice microprogram sequencer.
14. Gerald Roylance: LRU virtual memory paging subsystem.
15. Dave Shaver: Multi-function smart memory.
16. Alan Snyder: Associative memory.
17. Guy Steele: LISP microprocessor (LISP expression evaluator and associated memory manager; operates directly on LISP expressions stored in memory).
18. Richard Stern: Finite impulse response digital filter.
19. Runchan Yang: Armstrong type bubble sorting memory.
The following projects were completed but not quite in time for inclusion in the project set:
20. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: In addition to project 1 above, this team completed a CRT controller project.
21. Martin Fraeman: Programmable interval clock.
22. Bob Baldwin: LCS net nametable project.
23. Moshe Bain: Programmable word generator.
24. Rae McLellan: Chaos net address matcher.
25. Robert Reynolds: Digital Subsystem to be used with project 4.
Also, Jim Clark (SGI, Netscape) was one of Lynn Conway's students, and she taught him how to make his first prototype "Geometry Engine"!
http://ai.eecs.umich.edu/people/conway/VLSI/MPCAdv/MPCAdv.ht...
Just 29 days after the design deadline time at the end of the courses, packaged custom wire-bonded chips were shipped back to all the MPC79 designers. Many of these worked as planned, and the overall activity was a great success. I'll now project photos of several interesting MPC79 projects. First is one of the multiproject chips produced by students and faculty researchers at Stanford University (Fig. 5). Among these is the first prototype of the "Geometry Engine", a high performance computer graphics image-generation system, designed by Jim Clark. That project has since evolved into a very interesting architectural exploration and development project.[9]
Figure 5. Photo of MPC79 Die-Type BK (containing projects from Stanford University):
http://ai.eecs.umich.edu/people/conway/VLSI/MPCAdv/SU-BK1.jp...
[...]
The text itself passed through drafts, became a manuscript, went on to become a published text. Design environments evolved from primitive CIF editors and CIF plotting software on to include all sorts of advanced symbolic layout generators and analysis aids. Some new architectural paradigms have begun to similarly evolve. An example is the series of designs produced by the OM project here at Caltech. At MIT there has been the work on evolving the LISP microprocessors [3,10]. At Stanford, Jim Clark's prototype geometry engine, done as a project for MPC79, has gone on to become the basis of a very powerful graphics processing system architecture [9], involving a later iteration of his prototype plus new work by Marc Hannah on an image memory processor [20].
[...]
For example, the early circuit extractor work done by Clark Baker [16] at MIT became very widely known because Clark made access to the program available to a number of people in the network community. From Clark's viewpoint, this further tested the program and validated the concepts involved. But Clark's use of the network made many, many people aware of what the concept was about. The extractor proved so useful that knowledge about it propagated very rapidly through the community. (Another factor may have been the clever and often bizarre error-messages that Clark's program generated when it found an error in a user's design!)
9. J. Clark, "A VLSI Geometry Processor for Graphics", Computer, Vol. 13, No. 7, July, 1980.
[...]
The above is all from Lynn Conway's fascinating web site, which includes her great book "VLSI Reminiscence" available for free:
http://ai.eecs.umich.edu/people/conway/
These photos look very beautiful to me, and it's interesting to scroll around the hires image of the Quux's Lisp Microprocessor while looking at the map from page 22 that I linked to above. There really isn't that much to it; even though it's the biggest one, it isn't all that complicated, so I'd say that "SIMPLE" graffiti is not totally inappropriate. (It's microcoded, and you can actually see the rough but semi-regular "texture" of the code!)
This paper has lots more beautiful Vintage VLSI Porn, if you're into that kind of stuff like I am:
http://ai.eecs.umich.edu/people/conway/VLSI/MPC79/Photos/PDF...
A full color hires image of the chip including James Clark's Geometry Engine is on page 23, model "MPC79BK", upside down in the upper right corner, "Geometry Engine (C) 1979 James Clark", with a close-up "centerfold spread" on page 27.
Is the "document chip" on page 20, model "MPC79AH", a hardware implementation of Literate Programming?
If somebody catches you looking at page 27, you can quickly flip to page 20, and tell them that you only look at Vintage VLSI Porn Magazines for the articles!
There is quite literally a Playboy Bunny logo on page 21, model "MPC79B1", so who knows what else you might find in there by zooming in and scrolling around stuff like the "infamous buffalo chip"?
http://ai.eecs.umich.edu/people/conway/VLSI/VLSIarchive.html
http://ai.eecs.umich.edu/people/conway/VLSI/VLSI.archive.spr...
Mandatory xkcd link: https://xkcd.com/297/
Related performance-oriented discussion: https://news.ycombinator.com/item?id=40296932
Lisp will always be the 'what if' watering hole in the compsci space.