Communications of the ACM: Vol 21, No 8

  • ttmrichter · 3 years ago

    Yes. Short-sightedness and externalities. I saw the same thing happen with things like stack machines. Here’s the rough script:

    • There’s research that shows tech N has potential to be superior to tech O.
    • Prototypes of N, made by a couple of people in a research lab, naturally fail to outperform existing implementations of O, which embody literally hundreds of person-years of effort to be as optimized as possible.
    • Non-technical leadership (political or corporate) consults with vested-in-O technical leadership and is told N is “unproven technology”.
    • The status quo continues, squeezing as much out of O as possible while N goes unfunded and unresearched, except by a few academics writing papers nobody reads.

    This, too, is why your “ultramodern” CPU (whether x86, ARM, or increasingly RISC-V), complete with its out-of-order execution model and a myriad of other wondrous things under the hood, presents itself to you as a very fast PDP-11: C was made for the PDP-11 and has set the dominant programming model for half a century now. It’s why processors made with hundreds of small, parallel cores (like the GreenArrays line) don’t catch on: they can’t really be meaningfully programmed in the C mindset. It’s part of why FPGAs are a big black box of binary mystery bits instead of a normal way to enhance program performance. (The other part is that FPGA vendors are idiotically closed, though this is finally loosening.)
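    One small, concrete instance of that mismatch (my sketch, not from the article): the C abstract machine is sequential, so unless the programmer supplies extra promises like C99’s `restrict`, the compiler must assume pointers may alias and keep loop iterations ordered, hiding parallelism the hardware could otherwise exploit. The function names here are made up for illustration.

    ```c
    /* Sketch: aliasing vs. non-aliasing loops in C.
       In scale_may_alias, dst and src might overlap, so the compiler
       must preserve one-at-a-time, in-order iteration semantics.
       In scale_no_alias, restrict promises no overlap, freeing the
       compiler to reorder or vectorize -- information that parallel
       hardware wants but that base C cannot express by default. */
    #include <assert.h>
    #include <stddef.h>

    void scale_may_alias(float *dst, const float *src, size_t n, float k) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;   /* must appear strictly sequential */
    }

    void scale_no_alias(float *restrict dst, const float *restrict src,
                        size_t n, float k) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;   /* iterations provably independent */
    }

    int main(void) {
        float a[4] = {1, 2, 3, 4}, b[4];
        scale_no_alias(b, a, 4, 2.0f);
        assert(b[3] == 8.0f);      /* both versions compute the same thing */
        return 0;
    }
    ```

    Both functions compute identical results; the difference is only in what the compiler is allowed to assume, which is exactly the kind of sequential-by-default contract that keeps hardware pretending to be a fast PDP-11.
    
    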

    While we’re stuck with the plodding von Neumann (or related, like Harvard) approach to things, we’re never going to see real improvement again. One day we’ll reach the hard limit of what these things can actually accomplish, no matter how much money is thrown at them, and we’ll be forced to look into new ways of doing things.

    I’ll be long dead before that happens, unfortunately.