"High-level CPU": follow-up

February 2nd, 2008

This is a follow-up on the previous entry, the "high-level CPU" challenge. I'll try to summarize the replies and my opinion on the various proposals. But first, a summary of my original points:

  1. "Very" high-level languages have a cost. Attributing this cost to the underlying hardware architecture is wrong. You could move the cost from software to hardware, but that wouldn't eliminate it. I primarily referred to languages characterized by levels of indirection and late binding of user-defined operations, such as Lisp and Python, and to a lesser extent/confidence to side-effect-free languages like Haskell; the sketch right after this list shows the kind of run-time work I mean. I didn't mean to say that high-level languages should not be used; in fact I think that their cost is wildly overestimated by many. However, denying the existence of any intrinsic cost guarantees that people will keep overestimating it, because if the cost weren't really that high, why would anyone lie about it? I mean this very seriously; horrible tech marketing is responsible for the death (or coma) of many great things.
  2. Of all systems with similar cost and features, the one that has the least stuff implemented in hardware is the best, because you can change more things. The idea that moving things to hardware is a sure way to make them efficient is a misconception. Hardware can't do "anything in one cycle"; there are many constraints involved. Therefore, it's better to let the software explicitly control a set of low-level components than to build hardware logic implementing high-level interfaces to them. For example, to add 2 numbers on a RISC machine, you load them into registers, then add. You could have an instruction that adds operands straight from memory; it wouldn't run faster, because the hardware would still have to spend cycles loading the operands into (implicit) registers. Hardware doesn't have to be a RISC machine, but it's always better to move as much control to software as possible under the given system cost constraints.

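To make the cost in point 1 concrete, here's a minimal C sketch of what a dynamic language's generic "+" has to do at run time, next to the statically typed case. The tag layout and the names are made up for illustration; no particular Lisp or Python implementation does exactly this.

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical tagged representation: fixnums are stored shifted left
       by two bits, so the low two bits of a fixnum are zero. */
    typedef uintptr_t value;
    enum { TAG_FIXNUM = 0, TAG_MASK = 3 };

    static value slow_path_add(value a, value b) {
        (void)a; (void)b;
        abort();            /* stand-in for the late-bound dispatch on '+' */
    }

    /* What a dynamic '+' pays for on every call: tag checks, plus a
       possible dispatch to boxed arithmetic or a user-defined operation. */
    value generic_add(value a, value b) {
        if (((a | b) & TAG_MASK) == TAG_FIXNUM)
            return a + b;                  /* fixnum + fixnum; tags stay 0 */
        return slow_path_add(a, b);
    }

    /* The statically typed version: the compiler just emits an add
       (plus register loads if the operands live in memory); no checks,
       no dispatch. */
    int32_t native_add(int32_t a, int32_t b) {
        return a + b;
    }
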
I basically asked people to refute point 1 ("HLLs are costly"). What follows describes the attempts people made at it.

Computers you can't program

[image: rainbow mandala]

Several readers managed to ignore my references to specific high-level languages and used the opportunity to pimp hardware architectures that can't run those languages. Or any other programming languages designed for human beings, for that matter. Example architectures: cellular automata and neural networks.

It is my opinion that the fans of this family of hardware/vaporware, consistent advocates of The New Age of Computing, have serious AI problems. Here's a sample quote on cellular automata: "I guess they really are like us." Well, if you want to build a computing device in order to have a relationship with it, maybe a cellular automaton will do the trick. Although I'd recommend first checking the fine selection of Homo Sapiens we have here on Planet Earth. Because those come with lots of features you'd like in a friend, a foe, a spouse or an employee already built-in, while computer hardware has a certain gap to fill in this department.

[image: cellular automaton]

Me, I want to build machines to do stuff that someone "like us" wouldn't want to do, for any of several reasons (the job is hard/boring/stinky/whatever). And once I've built them, I want people to be able to use them. Please note this last point. People and other "nature's computers", like animals and fungi, aren't supposed to be "used". In fact, all those systems spend a huge amount of resources to avoid being used. Machines aren't supposed to be like that. Machines are supposed to do what you want. Which means that both the designer and the user need to control them. Now, a computer that can't even be tricked into parsing HTML in a straightforward way doesn't look like it's built to be controlled, does it?

Let me supply you with an example: Prolog. Prolog is an order of magnitude more tame than a neural net (and two orders of magnitude compared to a cellular automaton) when it comes to "control" – you can implement HTML parsing with it. But Prolog does show alarming signs of independence – it spends most of its time in its inference engine, an elaborate mechanism running lengthy non-trivial loops, which sometimes turn out to be infinite. You aren't supposed to single-step those loops; you're supposed to specify truths about your world, and Prolog will derive more truths for you. Prolog was supposed to be the wave of the future about 25 years ago. I think it can be safely called dead by now, despite the fair amount of money poured into it. I think it died because it's extremely frustrating to use – you just can't tell why the hell it worked that way in each particular case. I've never seen anything remotely as annoying as Prolog, with the notable exception of Makefiles, running on top of a wonderful inference engine of their own.

My current opinion is that neural networks rarely deserve a special hardware implementation – if you need them, build a more traditional computer and run them on top of that; and cellular automata are just stillborn. I might be wrong in the sense that a hardware implementation of these models is the optimal solution for some problem, hence we'll see those beasts in some corner of a successful real-world system. But the vast majority of computing, including AI apps, will run on machines that support basic bread-and-butter programmer things simply and straightforwardly. Here's a Computing Technology Acceptance Lower Bound for ya: if you can't parse a frigging log file with it, you can't do anything with it.

Self-assembly computers

Our next contestant is a machine that you surely can program, once you've built it from the pieces which came in the box. Some people mentioned "FPGA", others failed to call it by its name (one comment mentioned a "giant hypercube of gates", for example). In this part, I'm talking about the suggestions to use an FPGA without further advice on exactly how it should be used; that is, FPGA as the architecture, not FPGA used to prototype an architecture.

Maybe people think that with an FPGA, "everything is possible", so in particular, you could easily build a processor efficiently implementing a HLL. Well, an FPGA is just a way to implement hardware that lets you trade NRE (non-recurring engineering cost) for unit cost. And with hardware, some things are possible and some aren't, or so I claim – for example, you can't magically make the cost of HLLs go away. If you can't think of a way to reduce the overhead HLLs impose on the system cost, citing FPGAs doesn't make your argument look any better. On the contrary – you've saved NRE, but you've raised the cost of the hardware by a factor of 5.

Another angle: can you build a compiler? Probably so. Would you like to start your project with building a compiler? Probably not. Now, what makes people think that they want to build hardware themselves? I really don't know. Building hardware is gnarly, FPGA or not – there are lots of constraints you have to think about to make the thing efficient, and it's extremely easy to err on the side of not having enough flexibility. The latter typically happens because you try to implement overly high-level interfaces; it then turns out that you need the same low-level components to do something slightly different.

And changing hardware isn't quite as easy as changing software, even with FPGA, because hardware description code, with its massive parallelism and underlying synthesis constraints, is fairly tricky. FPGA is a perfectly legitimate platform for hardware vendors, but an awful interface for application programmers. If you deliver FPGAs, make it your implementation detail; giving it to application programmers isn't very likely to make them happy in the long run.

At the other end of the spectrum, there's the kind of "self-assembly computer" that reassembles itself automatically, "adapting to the user's needs". Even if it made any sense (and it doesn't), it still wouldn't answer the question: how should this magical hardware adapt to handle HLLs, for example, indirect memory access?

Actual computers designed to run HLLs

Some people mentioned actual hardware which was built to run HLLs, including the Reduceron, Tcl on Board, Lisp Machines, Rekursiv, and ARM's Jazelle instruction set. For some reason, nobody mentioned Intel's iAPX 432, an object-oriented microprocessor which was supposed to replace x86, but was, among other things, too slow. This illustrates that the existence of a "high-level processor" doesn't mean that it was a good idea (of course it doesn't mean the opposite, either).

I'll now talk about these machines in increasing order of my confidence that the architecture doesn't remove the overhead posed by the HLL it's supposed to run.

Stock computers with bells and whistles

Finally, there was a bunch of suggestions to add specific features to traditional processors.

The good stuff

At the bottom line, there were two hardware-related things which captured my intoxicated imagination: the Reduceron and content-addressable memories. If anything ever materializes around this, I'll send out some samples. In the meanwhile – thanks!

1. uaf1989 (Feb 3, 2008)

Please allow me to present a subclass of problems that would be amenable to this approach. If you had a language that could compile a purely logical design, say a logic table for traffic-light controllers, it could produce code that would be able to configure a logic gate array in the most efficient manner, a la Karnaugh mapping. Then the gate array would be able to execute the logical functions in the most efficient way possible. Scale this up for encryption, perhaps. I have often thought it would be nice to have a programmable array that could be referenced in software after it had been preprogrammed to perform some complex operation.
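
To make the "logic table" idea concrete, here is a minimal C sketch of the kind of truth table such a tool would minimize and map onto a gate array. The states and the sensor input are made up for illustration.

    #include <stdint.h>

    /* A hypothetical traffic-light controller as a pure logic table:
       (state, sensor bit) -> next state. A synthesis tool could minimize
       this table (Karnaugh-map style) and burn it into a gate array
       instead of executing it as code. */
    enum { RED, GREEN, YELLOW };

    static const uint8_t next_state[3][2] = {
        /*           no car waiting  car waiting */
        [RED]    = { RED,            GREEN  },
        [GREEN]  = { GREEN,          YELLOW },
        [YELLOW] = { RED,            RED    },
    };

    uint8_t step(uint8_t state, uint8_t car_waiting) {
        return next_state[state][car_waiting & 1];
    }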

2. uaf1989 (Feb 3, 2008)

I would add that the gate array itself could consist entirely of NAND gates.

3. bjorn.lindberg (Feb 3, 2008)

One application that actually uses specialized hardware successfully is graphics cards: typically drawing a lot of polygons really fast. I suppose you have to find some key function that is used a lot before you could justify a special hardware-implemented instruction.

4. macoovacany (Feb 3, 2008)

scheme-79 chip:
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-559.pdf

LISP Machine :
ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-514.ps

[ps|pdf].

Timbo

5. hotmichel (Feb 4, 2008)

It is possible to make a Java CPU which greatly outperforms traditional CPUs (Intel, AMD, ...). Look at
http://www.azulsystems.com/
They have been marketing Java CPUs for some time now. Their Vega 2 processor seems to be the first 48-core chip designed and optimized for Java, and achieves much greater performance than general-purpose CPUs.

6. Yossi Kreinin (Feb 4, 2008)

Vega 2: (1) Java is fairly low-level compared to the languages I cited; (2) Vega 2 is supposed to run a gazillion threads, and when you have trivially parallelizable workloads (say, in servers), life is good.

Programmable gate arrays: actual FPGAs are actually more flexible than that, they're just gnarly to program.

Graphics cards: I know close to nothing about those; I was shocked to hear that Michael Abrash basically plans to make them obsolete: http://www.radgametools.com/pixomain.htm

The thing is, the less software-configurable a piece of hardware is, the less chances it has to survive, because people want features, and ought to be able to tweak things.

7. yahoolian (Feb 5, 2008)

Side effects limit optimization. If the code must be run in a certain order, the optimizer cannot reorder it to make it faster. Does C have a cost compared to assembly? I doubt humans could consistently outperform a C compiler by coding in assembly; there are too many things to keep track of. There is no intrinsic cost to using C vs assembly, and there is no intrinsic cost to using Haskell vs C; it just depends on which implementation has better optimizations implemented.

By the way, RISC uses more memory bandwidth for the instructions than x86. This results in slower performance, as memory is usually the bottleneck. The Reduceron is faster because it can access memory blocks and compute in parallel, instead of being forced to run sequentially on a von Neumann architecture; a von Neumann processor runs sequentially. By the way, all processors are solving optimization problems. Yes, you could go lower-level than Haskell and compile directly to an FPGA to get parallel access to memory, but that is much more work, and you'd have to reconfigure the FPGA every time you want to run a different program, which would be quite slow.

Here's an interesting chip, based on the Parallel Random Access Machine (PRAM) model:
http://www.umiacs.umd.edu/~vishkin/XMT/index.shtml

And with respect to memory management, a compiler can automatically detect where memory is deallocated and allocated. A deallocation followed by an allocation can simply reuse the memory. This compile-time garbage collection is implemented in Mercury. http://www.cs.mu.oz.au/research/mercury/information/papers.html#iclp2001_ctgc
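
Roughly, in C terms, the transformation such a compiler performs automatically looks like this (a hand-written illustration with made-up names, not Mercury's actual output):

    #include <stdlib.h>

    typedef struct cell { int head; struct cell *tail; } cell;

    /* Before: the natural code frees a dead cell and immediately
       allocates a fresh one of the same size. */
    cell *replace_head_naive(cell *old, int new_head) {
        cell *rest = old->tail;
        free(old);
        cell *fresh = malloc(sizeof *fresh);
        if (!fresh) abort();
        fresh->head = new_head;
        fresh->tail = rest;
        return fresh;
    }

    /* After: compile-time garbage collection sees the deallocation
       immediately followed by a same-sized allocation, and reuses the
       dead cell's storage in place, with no allocator calls at all. */
    cell *replace_head_ctgc(cell *old, int new_head) {
        old->head = new_head;
        return old;
    }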

Pixomatic only gives DX7 features, and doesn't come anywhere close to current GPU performance. The goal of Pixomatic is not to make GPUs obsolete.

8. kragen (Apr 23, 2008)

The old machine Kay was talking about as being fast for Smalltalk was a Dorado, not a B5000; the B5000 came out something like 15 years earlier, as you know but maybe not all your readers do. The Dorado's approach was the microcode thingy that lost out to RISC.

As far as I can tell, Kay is basically wrong about the Moore's Law thing; I wrote about it in this kragen-tol post: http://lists.canonical.org/pipermail/kragen-tol/2007-March/000850.html

In theory that tells you what benchmarks Kay was probably thinking about.

Basically he was wrong, first, because the Dorado was built out of 3000 extremely expensive, high-performance ECL chips. The appropriate comparison is not to a laptop but to a Cray; that accounts for two of the three "missing" orders of magnitude in performance.

Second, he was wrong because he's talking about Squeak's performance, and Squeak is a bytecode interpreter. If you compile your bytecodes down to machine code with some PICs, you get back the other "missing" order of magnitude.

So here are the approaches that have been suggested either in Yossi's original post or in the comments:
- data word tagging: Yosef doesn't like this because it doesn't work for tiny data (say, 8 bits).
- FPGAs for specialized coprocessors. I think this is a fantastic idea, especially for things like image-processing, and maybe you can use it for things like Reduceron-style combinator-graph reduction too. The Reduceron is a perfect example of how FPGAs do more than allow you to trade NRE for unit cost: you have to reprogram the gate array to make it run a different Haskell program, because that Haskell program is embodied in the set of supercombinators it knows how to reduce.
- CAMs for hash tables. I don't know if that will work.
- reference counting hardware. Maybe a good idea, I don't know.
- lightweight parallelism. Good idea, but more of a design goal than an architectural design feature.
- integer-only cores (maybe "core diversity" is a better term?) probably will result in better overall efficiency, but might be harder to program. Seems like a step backwards if what you want is to reduce the penalty to run high-level languages; what you really need to do is come up with a software system that would hide this complexity from the programmer.
- private per-core memory. Same comments as integer-only cores.
- cellular automata. For what it's worth, ERIM's Cytocomputer (a pipelined raster-scanning cellular-automaton machine on custom silicon) was pretty damn fast for certain kinds of image processing. (You used the CA matrix to perform bulk nearest-neighbor operations SIMD-style on, in effect, the whole image at once.) But, again, not helpful for high-level languages. I don't think anyone ever used a Cytocomputer or a CAM-8 to parse a frigging log file.
- combinator-graph reduction machines. One of the first ones of these was the SKIM ("S-K-I-Machine") back in I think the 1970s; the Reduceron is a supercombinator-based modern equivalent.
- zeroing newly allocated cache lines without reading from DRAM. Modern CPUs can do this already.
- the Reduceron. Yup, awesome.
- the Tera MTA, which was the 128-register-set machine.

I'm going to tell the story of what I think happened to the Tera. Contrary to another commenter's assertion, the problem with the MTA was not that software wasn't available to take advantage of the machine; Tera ported LS-DYNA3D to it, supported all the usual HPC stuff in C and Fortran, and got some really impressive performance numbers IIRC. The problem seemed to be that they were competing on performance with Intel, the Digital Alpha team, and the StrongARM team, all of whom had enormous market volumes and could afford to spend orders of magnitude more money on their CPU designs. I think they only ever shipped two or three generations of their hardware over the course of ten years or so; the last one was a CMOS design actually designed by engineers at Cadence. Fortunately they got enough money that they were able to buy Cray, took the Cray name, and eventually de-emphasized the MTA and even the old Cray vector line in favor of huge clusters of commodity microprocessors.

There's an important lesson here for would-be higher-performance CPUs. It's not enough that the CPU be a lot faster than the other CPUs at the time that you get the idea to build it; it also needs to be a lot faster than the other CPUs when it actually ships. The Itanic and Symbolics can also tell this sad tale.

Yossi also said, about Jamie Zawinski's assertion that Lisp can be fast on stock hardware:

> If you use Lisp in the Lispy way that makes it so attractive in the first place, how on Earth can you optimize out the dynamic type checks and binding? You'd have to solve undecidable problems to make sense of the data flow.

The answer is that there are decidable approximations that give you good speedups. Olin Shivers's dissertation was a starting point for a lot of this research, but it has continued since then. Also, you can take the specializing-JIT approach Java has taken, although I don't think anyone has.

Remember that Jamie got his start at Lucid, whose claim to fame was precisely that they proved you didn't need specialized hardware to run Lisp fast.

I don't really know what the Lisp Machines had in order to help them run Lisp fast. I know they tagged each word, and had tag-checking instructions that checked the tags in parallel with doing the most likely operation, and I'm not confident that you can actually do that in software even on a modern out-of-order CPU.

A couple of times the "tag every byte" thing has come up. In my experience, whenever I'm dealing with large arrays of small numbers (1, 8, 16, or 32 bits) in a high-level language, it's a lot easier to put all of those numbers into a single tagged blob object, rather than putting tagged versions of all those numbers into a potentially-heterogeneous vector. Then you have SIMD instructions that add, or multiply, or subtract, corresponding members, or whatever. This makes interpretation overhead pretty much irrelevant. I've done real-time audio and video synthesis in Python this way, on my 700MHz laptop. I've done OLAP in Python this way. It's a very popular approach (it's how Matlab works), and it dates back to APL360, so it's not new either.

So if you had a computer with all kinds of crazy architectural features to execute high-level languages quickly, you probably still wouldn't want to use them most of the time to grovel over images pixel by pixel. You probably want to use something like Numeric, or Matlab, or Sisal, or APL, or J, or A+, or Lush, or the Perl Data Language, or PV-WAVE IDL. The speed of your high-level-language code is basically irrelevant here.
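
For concreteness, such a bulk primitive boils down to something like the following C sketch (hypothetical name; Numeric and Matlab obviously ship far richer kernels): the interpreter pays its dynamic-dispatch cost once per call, and the whole blob is then processed in a tight loop that a compiler or a hand-written SIMD kernel can run at full speed.

    #include <stddef.h>
    #include <stdint.h>

    /* One "blob" primitive of the Numeric/Matlab kind: saturating
       elementwise addition of two byte arrays. One dispatch, then a
       tight, vectorizable loop over all n elements. */
    void add_u8_saturating(uint8_t *dst, const uint8_t *a,
                           const uint8_t *b, size_t n) {
        for (size_t i = 0; i < n; i++) {
            unsigned sum = (unsigned)a[i] + b[i];
            dst[i] = (uint8_t)(sum > 255 ? 255 : sum);
        }
    }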

Anyway, I'm no expert. I've never even built a CPU of my own, let alone designed one, let alone designed a cutting-edge world-beating fast CPU, and I've only ever written one compiler for a high-level language, and it was a toy. (It's called Ur-Scheme; it compiles itself.) But here are some approaches that I think show some promise:

- a conditional call instruction, or a "previous PC" register that always points to the previous instruction executed, or a "PC before last jump", or something. It would be really great if handling an unexpected operand type could be just a type test followed by a conditional jump, rather than conditionally jumping around a call instruction so that you know where the type fault happened. (This is useful both for safety and for dynamic dispatch, e.g. PICs, i.e. polymorphic inline caches.)

- stack computers. If you're serious about packing more cores into less area, well, they're much smaller in silicon area than register machines, and often with less function-call overhead. IntellaSys's new SeaForth chip sounds really interesting, although limited in memory, and I imagine you could synthesize something similar that took only 10x as much silicon, and then it wouldn't take you ten years to ship a working chip.

- associative polymorphic caches. Modern CPUs have branch target buffers, or BTBs, which speed up register-indirect jumps and calls by caching the most likely destination of the jump. This is great for cutting down the interpretation penalty of a bytecode interpreter (as long as you put an indirect jump at the end of each bytecode instead of jumping to a central dispatcher) but I have the impression that they still leave a bunch of pipeline stalls in late-bound method calls. Which is why JITs and similar machines implement, essentially, a BTB in software in the form of a PIC.

- user-level memory fault handling. It's really fast to allocate, say, 12 bytes of memory if you have a copying collector: you just copy your heap pointer register to another register and add 12 to the heap pointer. Except that then you have to do a compare and conditional jump to check for heap overflow, which is a lot more expensive (there's a sketch of this right after this list). You can use virtual-memory hardware to trap your heap overflow cheaply, but that typically involves context-switching into the kernel and back. There's no reason it has to.

- memory protection between objects, as in Erlang and KeyKOS. The idea is that you divide state into "domains" or "processes" that share nothing and communicate only by sending each other messages, each with its own event loop to handle those messages. KeyKOS used the IBM 370 virtual memory architecture to do this, so the objects could be written in whatever language you wanted, with an expected object granularity of about a memory page and maybe 16 "keys" or references to other objects per object. Erlang uses a virtual machine instead. IIRC the Intel 432 was built more or less with this in mind, but there were a bunch of "capability hardware" machines back in the 1960s with the same idea. I think the AS/400 ('scuse me, iSeries) still works this way, and has from the beginning. But you don't have to totally redesign your CPU architecture to support memory protection between objects better; just support a smaller page size and have enough MMU contexts that you don't have to flush a TLB every time you context-switch from one object to another.

- tag-checking instructions as on the SPARC, maybe.
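
To make the user-level-fault-handling bullet concrete, here is a minimal C sketch of bump allocation in a copying collector's nursery (toy sizes and made-up names); the compare-and-branch is exactly the part that a cheap, user-level trap on a guard page could remove.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    static uint8_t  nursery[1 << 20];                 /* toy 1 MB nursery */
    static uint8_t *heap_ptr = nursery;               /* next free byte   */
    static uint8_t *heap_end = nursery + sizeof nursery;

    static void *gc_slow_path(size_t n) {
        (void)n;
        abort();  /* a real collector would evacuate live objects and retry */
    }

    static inline void *gc_alloc(size_t n) {
        uint8_t *obj  = heap_ptr;    /* copy the heap pointer...             */
        uint8_t *next = obj + n;     /* ...and bump it by n                  */
        if (next > heap_end)         /* the compare+branch that a guard-page */
            return gc_slow_path(n);  /* trap could make disappear            */
        heap_ptr = next;
        return obj;
    }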

Mark Lentczner recommends reading David Ungar's dissertation about what you'd want in a CPU to make it run Smalltalk faster.

9. kragen (Apr 23, 2008)

Oh, and I don't know if HLLs are expensive. Ur-Scheme compiles code to run surprisingly fast, like only 5x slower than GCC, despite being totally type-checked and bounds-checked at run-time and not doing anything you could call "optimization". But Scheme is a long way from Python, and 5x is still 4 Moore-years. On the other hand, Chambers's dissertation explained how they got Self within a factor of 2 of C on the SPARC, and I don't think there are any languages more dynamic than Self at the moment. (Well, maybe Bicicleta, sort of.)

10. Yossi Kreinin (Apr 24, 2008)

Regarding http://lists.canonical.org/pipermail/kragen-tol/2007-March/000850.html – nice piece of research. I wonder what really happened with all that benchmark business. Likely, Alan Kay cited it casually without giving it much thought, and then we all came along with our nitpicking... The numbers turned out to be waaaay too impressive.

"Ur-Scheme compiles code to run surprisingly fast, like only 5x slower than GCC"

Not on image processing code I'd guess :) Measuring efficiency is damn tricky.

Regarding cellular automaton for image processing: I'd love to see a competitor build one of those. Down would they go.

11. kragen (Apr 24, 2008)

The 5x figure was from (define (fib n) (if (< n 2) 1 (+ (fib (1- n)) (fib (- n 2))))) and its C equivalent, so indeed it might not generalize to more realistic programs. On the other hand, that program consists entirely of integer operations and function calls, and Ur-Scheme is particularly bad at both of those (it has to check and fix up type tags for integer operations, and function calls indirect through a global variable, pass and check the number of arguments every time, and its function prologues and epilogues are horrible crap) so the gap might be less rather than more. (On the other hand, the basic blocks are so short that there's not that much optimizing GCC can do.)

However, since it's an MFTL (a "my favorite toy language" compiler), it doesn't implement anything that isn't needed to compile itself. And it doesn't use vectors, so it doesn't have vectors. It would clearly need to have those!

Does GCC SSA-vectorize image-processing-type loops yet? If not, it seems like the quality of your library of Matlab-like primitives would matter a lot more than the code generated by your compiler. And I'm going to argue that for image-processing code, code written with Matlab-like or even K-like primitives is "higher level" than the equivalent nest of loops in a language like Python (without Numeric) or C, in the sense that they more closely approximate the terminology and concepts of the problem domain, and contain fewer irrelevant details.

The Cytocomputer was pretty capable; in a pipeline of ten gate arrays, it was able to do on the order of 100 million nearest-neighbor operations (Sobel, dilation, erosion, stuff like that) per second, with a throughput of 10 megapixels per second. Your laptop CPU can probably do a few times more than that now, but it's not implemented in a gate array, and it has 28 more years of Moore's Law behind it. The Cytocomputer was contemporary with the Cray X-MP and the 6510, but I think it was actually faster than the X-MP at the stuff it was good for. (No doubt your GPU can do one or two orders of magnitude more than your CPU at things like that? I haven't really looked into GPGPU programming.)

The best information online I've found about the Cytocomputer is in a patent from 1989, which points at the original Cytocomputer patents: http://www.freepatentsonline.com/4928313.html

I think it got deployed in a bunch of machine vision applications throughout the 1980s, but I'm not clear on whether those were production or experimental.

I don't know if its approach is still valuable, or if the stuff in a modern GPU or something is a Pareto improvement (faster, cheaper, and more flexible, or something). But it seems like, for stuff that you can phrase in terms of nearest-neighbor operations, it should be nearly optimal; and it should be possible to support multiple-image elementwise operations by sticking branches into the linear pipeline.

But maybe I'm suffering from an AVM problem :)

Oh, about method dispatch. I chatted with a buddy of mine who prototyped a BTB implementation at Transmeta. Apparently BTBs speed up C++-style virtual method dispatch quite a lot, so maybe the associative polymorphic cache would only save a few cycles.

12. kragen (Apr 24, 2008)

Um, obviously I meant SSE, not SSA.

13. snicolai (May 17, 2008)

I'll take a slightly different angle to the question. What can you leave out of a CPU to improve performance? Have you looked at the Singularity project from MS Research?

http://research.microsoft.com/os/singularity/publications/OSR2007_RethinkingSoftwareStack.pdf gives some details. Section 4.2.1 talks about the "unsafe code tax" imposed by the hardware protection mechanisms in the CPU. They measured it as high as 37%.

I see this just as a continuation of the RISC trend, moving things that were traditionally done in hardware to software. Ultimately what I want from the hardware is memory load/store throughput, enough registers/cache in the processor to hide the latency of that throughput and balanced processing elements to operate on that data.

Current memory management hardware tracks the state of each page (dirty, etc.) and a virtual address for that page. How much would dumping the virtual addressing mechanisms (but keeping the page state mechanisms, which are useful for garbage collection) speed up a CPU? I haven't kept up with modern CPUs, but getting rid of virtual address translation should save a stage or two in the pipeline.

A VM operating on physical addresses would now be able to do memory layout optimizations to take advantage of extra banks of memory in one machine vs another. The virtual addressing layer makes that difficult now.

The VM could still page items in and out; it would just replace references to an item being paged out with a proxy object that reads the item back in before it is accessed.

The wonderful advantage we now have with many of the high level languages today is that they are defined in terms of a VM. This gives us a layer under any application written to that VM to change and implement new ways of executing the application transparently to the application. Witness the progress made in speeding up the JVM over the last 10 years. Theoretically, the application doesn't change, but executes faster on newer versions of the JVM.

14. Yossi Kreinin (May 18, 2008)

Regarding the cost of page table management: I currently work on embedded systems with page table support unavailable or turned off. I wouldn't know the cost of address translation, but it appears to be passable because they cache mappings, and comparisons/negations are fast. That said, I'd rather have instructions for checked array access, and with those plus a VM taking care of type safety, I think you could get better security and safety than the process model gives. I think I'm actually speaking in the spirit of the Singularity project here.

Regarding JVM: it started out slow as hell, despite being relatively low-level, so you could expect improvement. Today, AFAIK it's comparable to C for object grinding, but is hardly an optimal platform for image processing or non-hardware-accelerated computer graphics (I'd guess array boundary checking is the main problem, and maybe object inlining; C# has structures to avoid the latter, and unsafe code to avoid the former).

15. Yossi Kreinin (May 18, 2008)

Also regarding JVM efficiency and programs executing on different versions of the VM out there – a post by John Carmack:

http://www.armadilloaerospace.com/n.x/johnc/recent%20updates/archive?news_id=295

16. Yossi Kreinin (May 18, 2008)

To kragen:

Regarding vectorization – I don't believe in automatic vectorization by compilers, nor do I believe in vectorized operations built into a programming language. I believe in hand-coding with intrinsics. I believe it to yield at least 5x more efficient code, on average. No figures to back that up, of course, but lots of confidence. So vectorization is out of the scope.

Now, with vectorization being out of the scope, a language with unchecked array access and support for small data types will beat a language without those by a large margin. However, I think you can make Lisp support small data types in memory, and you could make its array accesses unsafe. Now, if CPUs supported commands for checked array access, you have optimized, safe, portable Lisp.
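
Here is a minimal C sketch (made-up struct and names) of what that safety costs in software today; a checked-load instruction would fold the compare-and-branch into the access itself.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { uint8_t *data; size_t len; } byte_array;

    /* Checked array access in software: a compare and a branch wrapped
       around every load. A hypothetical checked-load instruction would
       perform the check as part of the access and trap on failure, so
       safe code could run at unchecked speed. */
    static inline uint8_t checked_get(const byte_array *a, size_t i) {
        if (i >= a->len)
            abort();             /* stand-in for the hardware trap */
        return a->data[i];
    }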

Regarding machines executing gazillions of operations on neighbors: they usually bite the dust when you get beyond shamelessly parallel image processing. I'd rather play with a multi-core DSP which can handle data-parallel scenarios, not just task-parallel ones.

17. davidmathers (Jun 2, 2008)

I don't know enough to know how relevant this is but...

Design of a LISP-Based Microprocessor by Guy Steele and Gerald Sussman

LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. An instruction set can be designed for programs expressed as trees of record structures. A processor can interpret these program trees in a recursive fashion and provide automatic storage management for the record structures.

We discuss a small-scale prototype VLSI microprocessor which has been designed and fabricated, containing a sufficiently complete instruction interpreter to execute small programs and a rudimentary storage allocator.

http://delivery.acm.org/10.1145/360000/359031/p628-steele.pdf?key1=359031&key2=3630878911&coll=ACM&dl=ACM&CFID=15151515&CFTOKEN=6184618

18. Yossi Kreinin (Jun 4, 2008)

I didn't read the whole thing, and I might at some point. However, it seems that it's a basic implementation of the core Lisp evaluator, using type tags for dispatching. Tagged memory has efficiency problems if you use it straightforwardly; for example, you can't compactly represent byte arrays. Of course there could be ways to save tags, I just don't see what a complete result would look like. I'll delve into the paper some more.

BTW, I wish we had something like Lisp machines rather than the buggy insecure desktop hardware/software towers of today. In this posting, I'm explicitly approaching the problem from a completely anal-retentive perspective I obtained in the world of embedded apps; it's hardly the right way to look at other things.

19. London Dry Gin (Jan 28, 2009)

> Regular expression and string functions in hardware

SSE4.2 introduced some string functions in hardware, http://en.wikipedia.org/wiki/SSE4#SSE4.2

20. Evan (May 24, 2012)

PSA:

The David Moon conversation has moved to this link since this thread was posted:

http://development.azuldev.com/blog/cliff/2008-11-18-brief-conversation-david-moon

21. ariscop (Jan 13, 2017)

Jazelle is more of a hardware-assisted interpreter; it requires an actual software interpreter for unimplemented (i.e. higher-level) opcodes. For J2ME phones it was a 'free' speedup, needing no additional memory.

Modern chips preserve compatibility by jumping to the interpreter for every opcode

22. geo (Feb 4, 2017)

I know this is already a very old post, but I just want to share what I am currently building, which might be a good candidate for this challenge.

I have implemented the PicoLisp VM in hardware using an FPGA, and it is now running on actual hardware. I have built the spread-out board, and once everything is set I will post a demo video.

23. C U Anon (Nov 30, 2018)

Yossi,

I don't know if you are even still interested in "build your own computer", but there are a few points that you have to consider. The first is the speed of light: its derating by dielectric and other effects puts a very, very hard limit on the size or speed of things, and there's no magic to make that go away, even in 3D chip stacking. The other problem is heat: the more stuff you have running at the full clock rate, the fewer devices you can have in a given area. There are ways to cheat on this, but all those go-faster stripes that gave rise to that little Christmas gift that keeps on giving, Spectre/Meltdown, are in "saber-toothed tiger" territory and cost more in power and heat than they are actually worth.

The solution is large amounts of very local memory wrapped around the fastest stripped-down RISC-type core, running programs that remain entirely inside the local memory. The memory is connected as "registers", or arrays of registers, etc., by using programmable addressing. But Content Addressable Memory (CAM), and I'm possibly one of the few people around to have actually "designed it in", is not really worthwhile, as it is still a solution looking for problems to solve.

From a security perspective, the CPUs are isolated from the main system via an MMU/switch that is controlled by the security hypervisor. The design of the switch is important, as it will be the main bottleneck in many types of program. However, Seymour Cray in part designed the switch issue out (in a design Sun later acquired); likewise IBM designed the switch issue out in its Z servers. Neither works all the time, but the point is that no high-performance solution will be totally general-purpose / use-agnostic. The one thing that is certain is that sequential CPUs have more or less splattered into real limits such as the speed of light. Thus the future is parallel, like it or not; the trick will be "reprogramming programmers" to stop wearing those sequential-programming blinkers. There are several ways you can do this, but the simple fact is that many programmers are not going to be able to make the transition, and so will fall by the wayside.

There are tricks where code is built from tasks in which the parallelism issues have, to a certain extent, been abstracted away.

However, from a security perspective, HLLs are nowhere near high-level enough... Think *nix scripting using utilities, where the utilities have been written with security in mind, such that they have "signatures" that can be used by sensors for a security hypervisor. The advantage of going higher in level, aside from the security aspect, is that whilst errors per line of code appear to be a constant for the "average" programmer, each step up in level gives you fewer lines of code for the same functionality. Thus the "script writers" not only do not have to be security-trained, they will also be churning out several times the number of productive programs...

There have been discussions on this over on the Schneier blog under "Castles-v-Prisons" or "C-v-P" that you might want to go and look at.


