  • palordrolap@kbin.social · 11 points · 9 months ago

    If you want the historical reason, you need to look at ASCII and its teletype forerunners. America, for better or worse, was the one that laid down the basis for a lot of the standards we still use today. UTF-8 covers the gamut of Unicode, but its first 95 printable characters, from 32 (space) onwards? Identical to 1967 ASCII.

    And since ASCII ended up as the default, most programming languages only allowed symbols from those original 95.

    In some really bad cases, in order to support keyboards with even fewer symbols, things like digraphs and trigraphs were kludged into some languages. (C, for example, is only now getting rid of trigraphs, in C23; digraphs are still in the standard. There's a small example below.)

    Those systems tended to be completely alien to ASCII or descended from the 6-bit teletype code where there are only 60-some usable characters. (Two character cases at the same time? Luxury! And what the heck is a curly brace?)
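
    To make the digraph/trigraph kludge concrete, here's a small, purely illustrative C snippet written with digraphs, which are alternative spellings of the punctuation tokens and still compile today. The older trigraph spellings (??= for #, ??< for {, and so on) did the same job for even more restricted character sets, and those are what C23 finally drops:

        %:include <stdio.h>            /* %: is the digraph spelling of # */

        int main(void)
        <%                             /* <% and %> stand in for { and } */
            int a<:3:> = <%1, 2, 3%>;  /* <: :> stand in for [ ]: int a[3] = {1, 2, 3}; */
            printf("%d\n", a<:1:>);    /* prints 2 */
            return 0;
        %>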

    Now that UTF-8 is gaining a foothold, some languages are daring to use Unicode in their base syntax, so we might soon see more and more interesting characters being allowed. (See Raku, for example. Or don’t. You may go blind.)

    On the other hand, there’s APL, which has been doing its own thing with weird symbols since 1966, paying no attention to ASCII or anything else.

    • wieson@lemmy.world · 2 points · 9 months ago

      What’s the programming language that allows emojis as variable names?

      Looked it up a little: it’s any language that supports extended characters in identifiers, not just ASCII. But it looks pretty unreadable.
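
      For a flavour of what that looks like, here's a minimal C sketch (an illustration, not any particular project's code). It assumes a compiler that accepts UTF-8 extended identifiers in source, e.g. GCC 10 or newer. Note that C itself is one of the stricter cases: its identifier rules allow letters like Greek π but exclude emoji, so emoji variable names only work in languages with looser rules (Swift, for instance, accepts them).

          #include <stdio.h>

          int main(void)
          {
              double π = 3.14159265358979;   /* Greek letters are valid identifier characters */
              double r = 2.0;
              /* double 🐶 = 1.0; */          /* emoji are not, so C rejects this line if uncommented */

              printf("area: %f\n", π * r * r);
              return 0;
          }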