Most of this went over my head. Why should I care that C is no longer low-level? What exactly is considered close-to-metal today, apart from binary and assembly?

  • @neidu2@feddit.nl

    C may not meet the author’s definition of low level, but it is still far lower level than most viable alternatives.

  • @Blue_Morpho@lemmy.world

    The author is confusing two completely different things and comes to a wrong conclusion.

    He states that C isn’t low level because CPUs are much more complex today. But those two things aren’t related. His argument would be no different if he claimed that assembly isn’t a low-level language.

    That the CPU speculatively executes instructions and maintains many levels of cache doesn’t change the fact that C is low level. Even if you wrote the program in raw opcodes, you couldn’t change that.

    There was only a single paragraph supporting his argument: that optimizing compilers can generate machine code wildly different from what you might expect.
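
    To give a flavor of that point (a minimal sketch of my own, not an example from the article): an optimizing compiler is free to throw away the loop you wrote and emit something with a completely different shape.

    ```c
    #include <stdio.h>

    /* At -O2, GCC and Clang will typically replace this loop with a
       closed-form computation (roughly n*(n-1)/2), so the emitted
       machine code barely resembles the source. */
    static unsigned sum_to(unsigned n) {
        unsigned total = 0;
        for (unsigned i = 0; i < n; i++)
            total += i;
        return total;
    }

    int main(void) {
        printf("%u\n", sum_to(100)); /* prints 4950 */
        return 0;
    }
    ```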

    Then he goes off on a complete tangent about how C isn’t good for parallel processing, which has nothing to do with his thesis.

    • @sushibowl@feddit.nl

      I think for these types of discussions it’s really necessary to clearly define what “low level” means, something both you and the author kinda skip over. I think a reasonable definition is the number of layers of abstraction between the language’s model of the machine and the actual hardware.

      The author is correct that nowadays, on lots of hardware, there are considerably more abstractions in place and the C abstract machine does not accurately represent high performance modern consumer processors. So the language is not as low level as it was before. At the same time, many languages exist that are still way higher level than C is.

      I’d say C is still in the same place on the abstraction ladder it’s always been, but the floor is deeper nowadays (and the top probably higher as well).

      • @Blue_Morpho@lemmy.world

        > So the language is not as low level as it was before.

        But it’s the hardware that has changed, not C. As I said, by his argument assembly isn’t a low-level programming language either.

        Besides, early RISC CPUs from the ’80s had out-of-order writeback, so this isn’t new. By the ’90s all RISC designs were out of order. The first out-of-order machine was the IBM System/360 Model 91, back in the 1960s.

        > I’d say C is still in the same place on the abstraction ladder it’s always been, but the floor is deeper nowadays (and the top probably higher as well).

        I agree!

  • @Tolookah@discuss.tchncs.de

    I see the argument that C on an Intel CPU is not low level enough for this person. On Arm Cortex-M and Cortex-R series CPUs, it’s low enough. (I don’t know enough about the A series to say the same.)

    The gripe is mostly that there’s microcode in the pipeline for branch prediction, and that takes control away from them. If you want to own that control, you’re going to lose on speed.

    Should you be bothered? Generally, no. If you are, there are CPU options out there with smaller pipelines and much less prediction going on. But don’t expect them to compete in the same arena as an application-class CPU (Intel, AMD, likely the Arm A series).
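
    To make that concrete (a rough, hedged sketch of my own, not a proper benchmark): the exact same C loop can run at very different speeds depending on how predictable its branch is, even though nothing in the source mentions prediction at all.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 10000000

    /* Sums only the "large" elements. With sorted input the branch is
       highly predictable; with random input it mispredicts constantly. */
    static long sum_large(const int *data, size_t n) {
        long total = 0;
        for (size_t i = 0; i < n; i++)
            if (data[i] >= 128)
                total += data[i];
        return total;
    }

    static int cmp_int(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int *data = malloc(N * sizeof *data);
        if (!data)
            return 1;
        for (size_t i = 0; i < N; i++)
            data[i] = rand() % 256;

        clock_t t0 = clock();
        long r1 = sum_large(data, N);   /* random order */
        clock_t t1 = clock();

        qsort(data, N, sizeof *data, cmp_int);

        clock_t t2 = clock();
        long r2 = sum_large(data, N);   /* sorted order */
        clock_t t3 = clock();

        printf("random: %ld in %.3fs, sorted: %ld in %.3fs\n",
               r1, (double)(t1 - t0) / CLOCKS_PER_SEC,
               r2, (double)(t3 - t2) / CLOCKS_PER_SEC);
        free(data);
        return 0;
    }
    ```

    (Caveat: at higher optimization levels the compiler may turn the branch into a conditional move or vectorize the loop, which hides the effect entirely; that is exactly the kind of control you don’t get from C.)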

  • @AnarchoSnowPlow@midwest.social

    Short answer: No, this guy is all the way up his own rear end.

    Longer answer:

    Author: “C is not ‘close to hardware’”

    Also Author: “Successful one to one struct comparisons may require padding, which isn’t automatically applied!!!”

    Like if you have an entire PhD on this stuff and you don’t understand how and why you need to pad, when you need to do it, and how to calculate the proper amount of padding, maybe somebody should’ve stopped you before you showed your whole ass on the Internet like that.

    (Padding is applied to align chunks of data to the size of memory accesses possible on a given architecture. It is extremely system-dependent, and you use it in very specific circumstances that you, a beginner, do not need to understand right now, other than to say that if the senior says thou shalt not fuck with my struct, you’d better not.)
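
    For illustration, a minimal sketch (made-up struct and field names) of what that padding looks like in practice and why byte-for-byte struct comparisons can bite you:

    ```c
    #include <stdio.h>
    #include <stddef.h>

    /* The compiler inserts padding so that 'b' sits on a 4-byte boundary
       and the overall size is a multiple of the strictest member
       alignment. The exact layout is ABI- and architecture-dependent. */
    struct example {
        char a;   /* 1 byte, then typically 3 bytes of padding */
        int  b;   /* 4 bytes, naturally aligned */
        char c;   /* 1 byte, then typically 3 bytes of tail padding */
    };

    int main(void) {
        printf("sizeof(struct example) = %zu\n", sizeof(struct example)); /* often 12, not 6 */
        printf("offsetof(b) = %zu, offsetof(c) = %zu\n",
               offsetof(struct example, b), offsetof(struct example, c));
        return 0;
    }
    ```

    The pad bytes have unspecified contents, which is why a memcmp of two structs can report a difference even when every named field is equal.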

  • magic_lobster_party

    By this logic assembly isn’t low level either.

    C is low level because it allows you to manipulate pointers. Most higher level languages don’t let you do that.
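
    A tiny sketch of what that means in practice (illustrative only): C lets you take the address of anything and walk its storage as raw bytes.

    ```c
    #include <stdio.h>

    int main(void) {
        int value = 0x11223344;

        /* Reinterpret the int's storage as raw bytes and walk it with
           pointer arithmetic; the byte order you see is whatever the
           hardware uses (little-endian on x86). */
        unsigned char *p = (unsigned char *)&value;
        for (size_t i = 0; i < sizeof value; i++)
            printf("byte %zu: 0x%02x\n", i, p[i]);

        return 0;
    }
    ```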

  • smpl

    The C compiler or third-party libraries can provide support for parallel execution and SIMD. That article just gets used by people trying to argue that C’s strength as a good low-level abstraction is false, which it isn’t. C is the best portable abstraction over a generic CPU that I know of. If you start adding parallel features and SIMD like the article suggests, you’ll end up with something that’s no longer a portable low-level abstraction. To be portable, those features would have to be implemented as slow fake variants on platforms that don’t support them. We can always argue about where to draw the line, but I think C nailed it pretty well on what to include in the language and what to leave up to extensions and libraries.
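
    For example (a minimal sketch, assuming an x86-64 target and a compiler that ships <immintrin.h>, such as GCC or Clang), SIMD is reached through intrinsics and extensions rather than through the C language itself:

    ```c
    #include <stdio.h>
    #include <immintrin.h>  /* x86 SSE/AVX intrinsics; not portable C */

    int main(void) {
        float a[4]   = {1.0f, 2.0f, 3.0f, 4.0f};
        float b[4]   = {10.0f, 20.0f, 30.0f, 40.0f};
        float out[4];

        /* Load four floats, add them with one SIMD instruction, store back. */
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(out, _mm_add_ps(va, vb));

        printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }
    ```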

    C is not a perfect abstraction, precisely because it is portable. Non-portable features of specific architectures are accessed through libraries or compiler extensions, and many compilers even include memory-safe features. It’s a big advantage, though, to know assembly for your target platform when developing in C, so that you become aware, for example, that x86 actually detects integer overflow and sets an overflow flag, even though that’s not directly accessible from C. Compilers often implement extensions for such features, but you can also extend C yourself with small assembly functions for architecture-specific features.
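
    As a concrete, hedged example of such an extension: GCC and Clang provide checked-arithmetic builtins, and on x86 these typically compile down to an add followed by a branch on the very overflow flag mentioned above.

    ```c
    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        int a = INT_MAX, b = 1, sum;

        /* GCC/Clang extension (not ISO C): returns true if the addition
           overflowed; the (possibly wrapped) result is stored in 'sum'. */
        if (__builtin_add_overflow(a, b, &sum))
            printf("overflow detected\n");
        else
            printf("sum = %d\n", sum);

        return 0;
    }
    ```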

  • mozz

    ITT: People who didn’t understand the article

    OP: You should not be bothered. The author’s arguments are perfectly valid IMO, but they’re way way beyond a beginner level. C is already a fairly challenging language to get your head around, and the author is going way beyond that into arguments about the fundamental theoretical underpinnings of C and its machine model, and the hellish complexities of modern microcode-and-silicon CPU design. You don’t need to worry about it. You can progress your development through:

    • Basic computer science, data structures, Python and the like
    • C and the byte-for-byte realities <- You are here
    • Step 3
    • Step 4
    • Microcode realities like this guy is talking about

    … and not worry about step 5 until much much later.

    • @velox_vulnus (OP)

      At the time of writing this comment, does any programming-language ecosystem exist that does not stick to the “primitive” PDP-11 abstraction/virtual machine/whatever the author is trying to say? I’m just interested to know whether such options exist.

      • mozz

        Er… sort of. He brings up some towards the end:

        There is a common myth in software development that parallel programming is hard. This would come as a surprise to Alan Kay, who was able to teach an actor-model language to young children, with which they wrote working programs with more than 200 threads. It comes as a surprise to Erlang programmers, who commonly write programs with thousands of parallel components. It’s more accurate to say that parallel programming in a language with a C-like abstract machine is difficult, and given the prevalence of parallel hardware, from multicore CPUs to many-core GPUs, that’s just another way of saying that C doesn’t map to modern hardware very well.

        I would add to that Go, with its channel model of concurrency, which I quite like, and numpy, which in my experience does an excellent job of giving you fast parallelized operations on big parallel structures while still giving you a simple imperative model for quick, simple operations. There are also languages like Erlang or ML that try to do things in a totally different way, which in theory can lend itself to much better use of parallelism, but I’m not really familiar with them and I have no idea how well the theoretical promise works out in terms of real-world results.

        I’d be interested to see someone with this guy’s level of knowledge talk about how well any of that maps onto actually well-parallelized operations when solving real problems on real-world CPUs (in the specific sense he means when criticizing how well C maps to them), because personally I don’t really know.