Most of the stuff went over my head, Why should I care that C is no longer low-level? What exactly is considered close-to-metal in today’s time, apart from binary and assembly?

  • mozz
    2 points • 2 months ago

    ITT: People who didn’t understand the article

    OP: You should not be bothered. The author’s arguments are perfectly valid IMO, but they’re way way beyond a beginner level. C is already a fairly challenging language to get your head around, and the author is going way beyond that into arguments about the fundamental theoretical underpinnings of C and its machine model, and the hellish complexities of modern microcode-and-silicon CPU design. You don’t need to worry about it. You can progress your development through:

    • Basic computer science, data structures, Python and the like
    • C and the byte-for-byte realities <- You are here
    • Step 3
    • Step 4
    • Microcode realities like this guy is talking about

    … and not worry about step 5 until much much later.

    • @velox_vulnusOP
      0 points • 2 months ago

      At the time of writing this comment, does there exist any programming language ecosystem that does not stick to the “primitive” PDP-11 abstraction/virtual machine/whatever the author is trying to say? I’m just interested to know if such options do exist.

      • mozz
        1 point • 2 months ago

        Er… sort of. He brings up some towards the end:

        There is a common myth in software development that parallel programming is hard. This would come as a surprise to Alan Kay, who was able to teach an actor-model language to young children, with which they wrote working programs with more than 200 threads. It comes as a surprise to Erlang programmers, who commonly write programs with thousands of parallel components. It’s more accurate to say that parallel programming in a language with a C-like abstract machine is difficult, and given the prevalence of parallel hardware, from multicore CPUs to many-core GPUs, that’s just another way of saying that C doesn’t map to modern hardware very well.
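        To make the "thousands of parallel components" point concrete: in a language whose runtime schedules lightweight processes, spawning thousands of concurrent units is cheap. A minimal sketch in Go (goroutines standing in for Erlang-style processes — my illustration, not the author's example):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Spawn 10,000 goroutines -- far more than you'd dare with OS threads.
	// Each one does a trivial unit of work and reports over a channel.
	const n = 10000
	results := make(chan int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			results <- id * 2
		}(i)
	}
	wg.Wait()
	close(results)

	sum := 0
	for v := range results {
		sum += v
	}
	fmt.Println(sum) // 2 * (0 + 1 + ... + 9999) = 99990000
}
```

        The runtime multiplexes all of those goroutines onto a handful of OS threads, which is what makes this style practical.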

        I would add to that Go with its channel model of concurrency, which I quite like, and numpy, which in my experience does an excellent job of giving you fast parallelized operations on big array structures while still giving you a simple imperative model for quick one-off operations. There are also languages like Erlang or ML that try to do things in a totally different way, which in theory can lend itself to much better use of parallelism, but I’m not really familiar with them and I have no idea how well the theoretical promise works out in terms of real-world results.
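        The channel model mentioned above amounts to "share memory by communicating": a worker owns its data and exchanges values over channels instead of through shared state. A rough sketch of that pattern (names here are my own, purely illustrative):

```go
package main

import "fmt"

// square reads jobs from in and writes results to out,
// communicating purely over channels -- no shared mutable state.
func square(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * v
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)

	go square(in, out) // the worker runs concurrently with main

	// Feed jobs from another goroutine so main is free to consume results.
	go func() {
		for i := 1; i <= 5; i++ {
			in <- i
		}
		close(in)
	}()

	// With a single worker, results arrive in submission order.
	for v := range out {
		fmt.Println(v) // prints 1, 4, 9, 16, 25
	}
}
```

        Because all coordination goes through the channels, adding more workers later changes the scheduling, not the data-race story.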

        I’d be interested to see someone with this guy’s level of knowledge talk about how well any of that maps onto genuinely well-parallelized operations when solving real problems on real-world CPUs (in the specific sense he means when criticizing how well C maps to them), because personally I don’t really know.