• OneMeaningManyNames · 15 days ago

    I can’t speak to it, but perhaps Andrew Koenig and Barbara Moo, who wrote the Accelerated C++ textbook, can:

    We can be confident about the program’s performance because the C++ standard imposes performance requirements on the library’s implementation. Not only must the library meet the specifications for behavior, but it must also achieve well-specified performance goals. Every standard-conforming C++ implementation must

    • Implement vector so that appending a large number of elements to a vector is no worse than proportional to the number of elements
    • Implement sort to be no slower than n log(n), where n is the number of elements being sorted

    The whole program is therefore guaranteed to run in n log(n) time or better on any standard-conforming implementation. In fact, the standard library was designed with a ruthless attention to performance. C++ is designed for use in performance-critical applications, and the emphasis on speed pervades the library as well.
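
    The two guarantees the quote cites can be seen in a minimal sketch (the function name `build_and_sort` is just illustrative, not from the book):

    ```cpp
    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Append n elements in descending order, then sort them.
    // - push_back is amortized O(1), so the appends are O(n) total,
    //   because the standard requires geometric growth of the buffer.
    // - std::sort must perform O(n log n) comparisons (worst case
    //   since C++11; C++98 required that only on average).
    std::vector<int> build_and_sort(int n) {
        std::vector<int> v;
        for (int i = 0; i < n; ++i)
            v.push_back(n - i);  // descending input gives the sort real work
        std::sort(v.begin(), v.end());
        return v;
    }

    int main() {
        auto v = build_and_sort(1'000'000);
        std::cout << v.front() << ' ' << v.back() << '\n';  // prints "1 1000000"
    }
    ```

    Together those two bounds give the n log(n) total the authors claim for the whole program.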

  • Windex007@lemmy.world · 15 days ago

    If you look at the linked repo, they’re not using any standard library functions that would have impacted the timing.

    The fundamental crux of the issue is that the author doesn’t seem to understand that at the end of the day, after all of the hand waving and sleight of hand, all code is executed on a processor using whatever that processor’s instruction set is. ALL of them.

    The question isn’t “what language is fastest”, it is “given that the instruction set has an optimal solution, how close can I get to that optimal solution with different languages?”

    There is really no guarantee that the code samples are comparable either. Unless the author actually examines the resulting instructions, it’s entirely possible one implementation is “cheating”. If I have a collection known to have certain properties (like being a range), for all you know it’s pulling a Gauss under the hood. If, say, Java streams did that, then why make the C compiler write a program with two billion 64-bit additions?
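
    A concrete sketch of the “pulling a Gauss” worry (function names are illustrative): an optimizer may recognize the loop below and replace it with the closed form, so the two versions can compile to essentially the same constant-time code even though one *looks* like n additions.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Naive loop: nominally n additions (two billion if n is two billion).
    std::uint64_t sum_loop(std::uint64_t n) {
        std::uint64_t total = 0;
        for (std::uint64_t i = 1; i <= n; ++i)
            total += i;
        return total;
    }

    // The "Gauss" closed form: constant time regardless of n.
    std::uint64_t sum_gauss(std::uint64_t n) {
        return n * (n + 1) / 2;
    }

    int main() {
        assert(sum_loop(100) == 5050);
        assert(sum_gauss(100) == 5050);
        assert(sum_gauss(2'000'000'000ULL) == 2'000'000'001'000'000'000ULL);
    }
    ```

    If one language’s benchmark harness happens to hit such an optimization and another’s doesn’t, the comparison is no longer measuring the same work.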

    Anyways, I think these types of articles/investigations are neat. The problem is that they’re often accompanied by commentary and conclusions that the authors don’t have the knowledge to provide.

    Anyhow, long story short, if you look at the repo this wasn’t an apples-to-apples comparison to begin with.