• MartianSands@sh.itjust.works

      That’s easy. The 2038 problem is fixed by using 64-bit processors running 64-bit applications. Just about everything built in the last 15 years has already got the fix

      Using that fix, the problem doesn’t come up again for about 300 billion years
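
      For what it’s worth, the figure falls out of the type size alone. A quick back-of-the-envelope sketch in C (not tied to any particular codebase):

      ```c
      /* Rough check of the "about 300 billion years" figure:
       * INT64_MAX seconds divided by the length of a year. */
      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          double years = (double)INT64_MAX / (365.2425 * 24.0 * 3600.0);
          printf("a signed 64-bit second counter lasts ~%.0f billion years\n",
                 years / 1e9);   /* prints roughly 292 */
          return 0;
      }
      ```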

      • cmnybo@discuss.tchncs.de

        You don’t need 64-bit programs or CPUs to fix the 2038 problem. You just need to use a 64-bit time_t. It will work fine on 32-bit CPUs or even 8-bit microcontrollers.
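
        For example, something like this builds and runs the same on a 32-bit target, because a 64-bit counter is an integer type choice, not a CPU feature (the my_time64_t name is made up for illustration):

        ```c
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical 64-bit time type; a 64-bit time_t amounts to this. */
        typedef int64_t my_time64_t;

        int main(void) {
            my_time64_t t = INT32_MAX;  /* the moment a signed 32-bit counter overflows */
            t += 1;                     /* one second later: no overflow in 64 bits */
            printf("seconds since epoch: %" PRId64 "\n", t);
            return 0;
        }
        ```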

      • pimeys@lemmy.nauk.io

        And not using 32-bit integers to calculate time, which is still a thing in many, many, many codebases written in C or C++…
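
        The classic shape of the bug is something like this (a sketch, assuming the build platform itself already has a 64-bit time_t):

        ```c
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        int main(void) {
            /* Stuffing the current time into a 32-bit int: wraps negative
             * after 2038-01-19T03:14:07Z. */
            int32_t bad  = (int32_t)time(NULL);
            /* Keeping it in 64 bits is fine for ~292 billion years. */
            int64_t good = (int64_t)time(NULL);
            printf("32-bit copy: %d\n64-bit copy: %lld\n", (int)bad, (long long)good);
            return 0;
        }
        ```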

        • JustEnoughDucks@feddit.nl

          32-bit embedded processors use a lot of 32-bit time, though I am not sure if the date/time libraries in vendor SDKs have been updated to use 64-bit time.

          • pimeys@lemmy.nauk.io

            The Linux kernel only moved to 64-bit time quite recently. In 2038 I can guarantee somebody in a very serious business will still be running an ancient RHEL and will have issues.
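
            If you want to check a 32-bit build, something like this tells you whether it actually got the wide type (with current glibc a 32-bit target needs -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 at compile time):

            ```c
            #include <stdio.h>
            #include <time.h>

            int main(void) {
                /* 64 means the build is safe past 2038; 32 means it is not. */
                printf("time_t is %zu bits\n", sizeof(time_t) * 8);
                return 0;
            }
            ```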

      • r00ty@kbin.life

        Not really processor based. The timestamp needs to be ulong (not advised, but good for dates up to around 2106, although it cannot express dates before 1970) or llong (long long). I think it’s a bad idea, but I bet some people too lazy to change their database schema will just do this internally.

        The time_t type in Linux is now 64-bit regardless, so applications that used it just need recompiling and will be fine. Of course it’s a problem if the database is storing 32-bit signed integers, but the column type on the database can be changed too and that isn’t really hard.
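
        To put numbers on those ranges, a quick sketch (assuming a platform where time_t is already 64-bit, so gmtime_r() can digest both values):

        ```c
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        static void show(const char *label, int64_t secs) {
            time_t t = (time_t)secs;
            struct tm tm;
            if (gmtime_r(&t, &tm) != NULL)
                printf("%-26s %04d-%02d-%02d\n", label,
                       tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday);
        }

        int main(void) {
            show("signed 32-bit runs out:", INT32_MAX);     /* 2038-01-19 */
            show("unsigned 32-bit runs out:", UINT32_MAX);  /* 2106-02-07 */
            return 0;
        }
        ```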

        As for the Y10K problem, I think it will almost entirely be a formatting problem. In the 80s and 90s storage was at a premium and databases were generally much simpler, so dates were very often stored as YYMMDD. There also wasn’t so much use of standard libraries, which meant fixing the Y2K problem required quite some work. In some cases there wasn’t time to make a proper solution. Where I was working there was a two-step solution.

        One team made the interim change: everywhere a date was read, anything <30 (it wasn’t 30, it was another number, but I forget which) was treated as 2000+number and everything else as 1900+number. This meant the existing product would be fine for another 30 years or so.
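
        In code, that interim fix is just a windowing function, something like the sketch below (the pivot of 30 is illustrative, since the real value is long forgotten):

        ```c
        #include <stdio.h>

        /* Expand a two-digit year around a pivot. */
        static int expand_two_digit_year(int yy) {
            return (yy < 30) ? 2000 + yy : 1900 + yy;
        }

        int main(void) {
            printf("05 -> %d\n", expand_two_digit_year(5));   /* 2005 */
            printf("87 -> %d\n", expand_two_digit_year(87));  /* 1987 */
            return 0;
        }
        ```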

        The other team was writing the new version of the software, which used MSSQL server as a back-end, with proper datetime typed columns and worked properly with years before and after 2000.

        I suspect this wasn’t unusual in terms of approach and most software is using some form of epoch datatype which should be fine in terms of storing, reading and writing dates beyond Y10K. But some hard-coded date format strings will need to be changed.
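
        Here’s a sketch of the kind of format-string assumption that would bite at Y10K: a buffer and pattern sized for exactly four year digits.

        ```c
        #include <stdio.h>

        int main(void) {
            char buf[11];  /* exactly enough for "YYYY-MM-DD" plus the NUL */
            int needed = snprintf(buf, sizeof buf, "%04d-%02d-%02d", 10000, 1, 1);
            /* Year 10000 needs 11 visible characters, so the output is cut short. */
            printf("needed %d chars, buffer stores %zu: '%s'\n",
                   needed, sizeof buf - 1, buf);
            return 0;
        }
        ```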

        Source: I was there, 3000 years ago.

      • 0x0@programming.dev

        Just about everything built in the last 15 years has already got the fix

        You mean regular PCs? Sure…
        Less COTS stuff? Not necessarily.

  • ImplyingImplications@lemmy.ca

    You hear the one about the COBOL programmer? In 1999 many people were worried about a bug with dates that could crash their systems. Unfortunately, a lot of really important systems were still using the outdated COBOL programming language. A search was done to find someone who could fix these systems. The last programmer who knew COBOL was found, and they made a lot of money fixing up these systems. So much money that, once they received a cancer diagnosis, they cryogenically froze themselves hoping that a cure for their disease would be found in the future. The programmer is awoken one day in a lab filled with high-tech robotics. A person who is seemingly half machine themselves stands before them. The programmer asks, “Have you found a cure for my disease?” The person replies, “We have cured all diseases. You are the COBOL programmer, correct?” “That’s right.” “Great! It is the year 9999 and we have a problem we need your help with.”

  • callmepk@lemmy.world

    Then there are the Japanese, who have to worry whether their systems will still be working tomorrow because of the Showa 100 Year Problem.

    Short summary: it’s basically similar to Y2K. When Japan changed era the time before last, from Showa to Heisei, programmers of the day kept using Showa as the base year to save space, so Heisei 2 was actually recorded in computers as Showa 65. Showa 1 is 1926, so tomorrow, which is 2025, is Showa 100. And you get the idea.