Conventional programming languages are growing ever more enormous, but not stronger.
Inherent defects at the most basic level cause them to be both fat and weak: their
primitive word-at-a-time style of programming inherited from their common ancestor—...
This is very interesting, but it's also worth mentioning that this is a paper from 1978. I didn't check the date at first and got very excited when I read the passage quoted above, thinking some new developments were happening today.
On this note: do we have a fairly good understanding of why none of these alternative systems took off?
Yes. Short-sightedness and externalities. I saw the same thing happen with things like stack machines. Here's the rough script:

1. There's research showing that technology N has the potential to be superior to technology O.
2. Prototypes of N, built by a couple of people in a research lab, naturally fail to beat existing implementations of O, which have had literally hundreds of person-years of optimization effort poured into them.
3. Non-technical leadership (political or corporate) consults with vested-in-O technical leadership and is told N is "unproven technology".
4. The status quo continues, squeezing as much out of O as possible, while N goes unfunded and unresearched except for a few academics writing papers nobody reads.
This, too, is why your "ultramodern" CPU (whether x86, ARM, or increasingly RISC-V), complete with its out-of-order execution model and a myriad of other wondrous things under the hood, presents itself to you as a very fast PDP-11: C was made for the PDP-11, and it has set the dominant programming model for half a century now. It's why processors made of hundreds of small, parallel cores (as the GreenArrays line was) don't catch on: they can't really be programmed meaningfully in the C mindset. It's part of why FPGAs are a big box of blacklegging binary mystery bits instead of a normal way to enhance program performance. (The other part is that FPGA vendors are idiotically closed, though that is finally loosening.)
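To make the contrast concrete, here is a minimal sketch in Haskell (the function names are just for illustration) built around the paper's well-known inner-product example: a word-at-a-time accumulator loop of the kind the C mindset produces, next to a function-level version in the spirit of Backus's definition (Insert +) ∘ (ApplyToAll ×) ∘ Transpose, where the elementwise products have no sequential dependency and could, in principle, be evaluated in parallel.

    -- Word-at-a-time style: a single accumulator threaded through the
    -- lists one element per step, the exact shape of a sequential C loop.
    ipWordAtATime :: Num a => [a] -> [a] -> a
    ipWordAtATime = go 0
      where
        go acc (x:xs) (y:ys) = go (acc + x * y) xs ys
        go acc _      _      = acc

    -- Function-level style, roughly (Insert +) . (ApplyToAll *) . Transpose:
    -- no loop counter, no accumulator, and the products are independent
    -- of one another, so nothing forces them to be computed in order.
    ipFunctional :: Num a => [a] -> [a] -> a
    ipFunctional xs ys = sum (zipWith (*) xs ys)

    main :: IO ()
    main = do
      print (ipWordAtATime [1, 2, 3] [4, 5, 6])  -- 32
      print (ipFunctional  [1, 2, 3] [4, 5, 6])  -- 32

The second form is the shape that many-core and dataflow hardware can actually exploit; the first is what fifty years of the PDP-11 model have trained programmers to write.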
As long as we're stuck with the plodding von Neumann (or related, like Harvard) approach to things, we're never going to see any real improvement. Not until the day we reach the hard limit of what these machines can actually accomplish, no matter how much money is thrown at them, and we're forced to look into new ways of doing things.
I’ll be long dead before that happens, unfortunately.
Best explanation/simplification ever!