Communications of the ACM: Vol 21, No 8

  • Ravn · 2 years ago

    This is very interesting, but it's also worth mentioning that this is a paper from 1978. I didn’t check the date at first and got very excited when I read

    A new class of computing systems uses the functional programming style both in its programming language and in its state transition rules.

    thinking some new developments were happening today.

    • Ravn · 2 years ago

      On this note: do we have a fairly good understanding of why none of these alternative systems took off?

      • ttmrichter · 2 years ago

        Yes. Short-sightedness and externalities. I saw the same thing happen with things like stack machines. Here’s the rough script:

        • There’s research that shows tech N has potential to be superior to tech O.
        • Prototypes of N, made by a couple of people in a research lab, naturally fail to be superior to existing implementations of O, which have had literally hundreds of person-years’ effort poured into optimizing them.
        • Non-technical leadership (political or corporate) consults with vested-in-O technical leadership and is told N is “unproven technology”.
        • The status quo continues, trying to squeeze as much out of O as possible while N goes unfunded and unresearched except for a few academics writing papers nobody reads.

        This, too, is why your “ultramodern” CPU (whether x86, ARM, or increasingly RISC-V), complete with its out-of-order execution model and a whole myriad of other wondrous things under the hood, presents itself to you as a very fast PDP-11: C was made for the PDP-11 and has set the dominant programming model for half a century now. It’s why processors made with hundreds of small, parallel cores (like the GreenArrays line) don’t catch on: they can’t really be meaningfully programmed in the C mindset. It’s part of why FPGAs are a big box of blacklegging binary mystery bits instead of a normal way to enhance program performance. (The other part is that FPGA vendors are idiotically closed, though this is finally loosening.)
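        A minimal C sketch of that point (my own illustration, not from the paper): the language expresses everything as a serial instruction stream, whether or not the work is actually serial. The hardware then has to reverse-engineer the parallelism C threw away, while preserving the illusion of in-order execution.

        ```c
        #include <stdio.h>

        #define N 8

        int main(void) {
            int in[N] = {1, 2, 3, 4, 5, 6, 7, 8};
            int out[N];

            /* Independent per-element work: each iteration could run on
               its own core, but C can only say "do these one at a time,
               in this order". The CPU must rediscover the independence. */
            for (int i = 0; i < N; i++)
                out[i] = in[i] * in[i];

            /* Loop-carried dependency: here the serial order genuinely
               matters, and nothing in the source distinguishes this loop
               from the one above. */
            int sum = 0;
            for (int i = 0; i < N; i++)
                sum += out[i];

            printf("%d\n", sum); /* sum of squares 1..8 = 204 */
            return 0;
        }
        ```

        A many-core design like the GreenArrays parts wants the first loop expressed as "apply this to everything at once", which C has no way to say natively.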

        While we’re stuck with the plodding von Neumann (or related, like Harvard) approach to things, we’re not going to see any real improvement. One day we’ll hit the hard limit of what these things can actually accomplish, no matter how much money is thrown at them, and be forced to look into new ways of doing things.

        I’ll be long dead before that happens, unfortunately.