Microsoft is pivoting its company culture to make security a top priority, President Brad Smith testified to Congress on Thursday, promising that security will be “more important even than the company’s work on artificial intelligence.”

Satya Nadella, Microsoft’s CEO, “has taken on the responsibility personally to serve as the senior executive with overall accountability for Microsoft’s security,” Smith told Congress.

His testimony comes after Microsoft admitted that it could have taken steps to prevent two aggressive nation-state cyberattacks from China and Russia.

According to Microsoft whistleblower Andrew Harris, Microsoft spent years ignoring a vulnerability while he proposed fixes to the “security nightmare.” Instead, Microsoft feared it might lose its government contract by warning about the bug and allegedly downplayed the problem, choosing profits over security, ProPublica reported.

This apparent negligence led to one of the largest cyberattacks in US history, and officials’ sensitive data was compromised due to Microsoft’s security failures. The China-linked hackers stole 60,000 US State Department emails, Reuters reported. And several federal agencies were hit, giving attackers access to sensitive government information, including data from the National Nuclear Security Administration and the National Institutes of Health, ProPublica reported. Even Microsoft itself was breached, with a Russian group accessing senior staff emails this year, including their “correspondence with government officials,” Reuters reported.

  • Dudewitbow@lemmy.zip · 5 months ago

    You can have a proprietary OS that's secure, but the problem is once you get to the point where you're selling data and allowing anything to be installed, of course it's no longer secure.

    • tabular@lemmy.world · 5 months ago

      You can’t verify it’s secure if it’s proprietary, so it’s never secure? Having control over other people’s computing creates bad incentives to gain at your users’ expense, so you should lose trust from day 1.

        • tabular@lemmy.world · 5 months ago

          That just moves the required trust from the 1st party to a 2nd or 3rd party. Unreasonable trust.

            • circuscritic@lemmy.ca · 5 months ago

              Wait…you don’t audit every package and dependency before you compile and install?

              That’s crazy risky my man.

              Me? I know security and take it seriously, unlike some people here. I’m actually almost done with my audit and should be ready to finally boot Fedora 8 within the next 6-8 months.

            • tabular@lemmy.world · 5 months ago

              This is like asking if you do scientific experiments yourself or do you trust others’ results. I distrust private prejudice and trust public, verifiable evidence that’s survived peer review.

              • TropicalDingdong@lemmy.world · 5 months ago

                Scientists in the room who have to base their experiments off other people’s data and results:

                Tongue in cheek, but this is actually giving me a particular headache because of some results (not mine) that should never have been published.

                • tabular@lemmy.world · 5 months ago

                  That sucks, but the answer to bad results is still more/better tests 😇

                • Daniel Feldman@hachyderm.io · 5 months ago

                  @fuckwit_mcbumcrumble @tabular I’ve never worked at Microsoft, but I worked at a different enterprise company and they did indeed fly in representatives of different governments who got free access to the code on a company laptop in a conference room to look for any back doors. I always thought it was silly because it is impossible to read all the code.

                • tabular@lemmy.world · 5 months ago

                  If I’m a government I’m hella criminalising the sharing of proprietary software.

      • Dudewitbow@lemmy.zip · 5 months ago

        I’d argue the unknown can’t be used to say it’s technically secure, nor insecure. If that logic is applied, then any OS using non-open-source hardware is insecure, because the VHDL/Verilog code is not verifiable.

        Unless everyone is running an open-source RISC-V design or an FPGA for their hardware, it’s a game of goalposts: where does someone plant said flag?

        • tabular@lemmy.world · 5 months ago

          Consider people counting paper votes in an election. Multiple political parties are motivated by their own self-interest to watch the counting and prevent each other from faking votes. That is a security feature; without it, the validity of the election has a critical unknown, making it very sussy.

          An OS using proprietary software is like an electronic voting machine: we pretend it’s secure to feel better about a failing we can’t change.

            • Dudewitbow@lemmy.zip · 5 months ago

              The problem is that the bad actors have direct access to said voting machines. In the case of security, the people creating the OS are not typically the bad actors in question, which goes back to the goalpost situation. Unless you know how everything is designed from the ground up (including the hardware code, in whatever language it’s written), you’re just setting an arbitrary goalpost: the typical NSA-backdoor, or foreign-backdoor-via-hardware, situation is independent of the OS. To bluntly place it only at the OS stage is to set the goalpost there, when you can really apply it to any part of the line (the chip designer, the hardware assembler, the OS designer, the software maker). Setting it at the OS level fundamentally means all OSes are insecure by nature unless you’re actively running on an FPGA that’s constantly getting updates.

              For instance, any CPU with speculative execution is fundamentally insecure, and that’s virtually all modern processors. Never mind the OS when the door is already open at the hardware level.
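
On Linux, the kernel publishes its own assessment of exactly these hardware-level holes under sysfs. A minimal Python sketch to read it (the sysfs path is standard on Linux; the function name is illustrative, and it returns an empty dict on other platforms):

```python
from pathlib import Path

# Linux exposes per-CPU speculative-execution vulnerability status
# (spectre_v1, spectre_v2, meltdown, etc.) as one file per issue.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def cpu_vuln_report() -> dict:
    """Map vulnerability name -> kernel status string, e.g.
    'spectre_v2' -> 'Mitigation: Retpolines'. Empty off-Linux."""
    if not VULN_DIR.is_dir():
        return {}
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}
```

Any entry whose status begins with "Vulnerable" is an open door regardless of which OS sits on top.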

              • tabular@lemmy.world · 5 months ago

              When I think of bad actors and software, I think of security against 3rd parties beyond the intentions of the authors. Not just security but also privacy and any other anti-features users wouldn’t want. That applies to the OS, apps, or drivers. Hardware indeed has concerns like software does; it’s all a wider conversation about security, which in turn is part of user/consumer rights.

      • TORFdot0@lemmy.world · 5 months ago

        I mean, you can provide audit findings and results, and that’s a pretty big part of vendor management and due diligence. But at some point you either accept the risk of using open source software that can be susceptible to supply chain hacks, might be poorly maintained, etc., or accept the risk of taking the closed-source company’s documentation at face value (which can also be poorly maintained and susceptible to supply chain attacks).

        There’s got to be some level of risk tolerance to do business, and open source doesn’t automatically reduce risk. But it can at least reduce enshittification.
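
One concrete way to cap that risk on either side is to pin the exact artifacts you vetted. A Python sketch that emits a line for pip’s hash-checking mode (the `--hash=sha256:` syntax is pip’s real requirements format; the package name and file path here are illustrative):

```python
import hashlib
from pathlib import Path

def pin_line(package: str, version: str, artifact_path: str) -> str:
    """Build a requirements.txt line for pip's hash-checking mode.
    With such a line, `pip install -r requirements.txt` refuses any
    download whose bytes differ from the artifact that was vetted."""
    digest = hashlib.sha256(Path(artifact_path).read_bytes()).hexdigest()
    return f"{package}=={version} --hash=sha256:{digest}"
```

Pinning doesn’t make the vetted code good; it only guarantees you keep getting the code you vetted.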

        • cybersandwich@lemmy.world · 5 months ago

          It’s pretty hilarious when people act like being open source means it’s “more secure”. It can be, but it’s absolutely not guaranteed. The xz debacle comes to mind.

          There are tons of bugs in open source software. Linux has had its fair share.

            • PopOfAfrica@lemmy.world · 5 months ago

            The XZ thing is actually a great point in open source’s favor. All it took was some dude to figure it out.

            If you try to inject malicious code, you can be found out. That kind of outside discovery can’t happen with proprietary software.

              • cybersandwich@lemmy.world · 5 months ago

              It highlighted some pretty glaring weaknesses in OSS as well: overworked maintainers, unvetted contributors, etc.

              The XZ thing seems like we got “lucky” more than anything. But that type of attack may have been successful already or in progress elsewhere. It’s not like people are auditing every line of every open source tool/library. It takes really talented devs and researchers to truly audit code.

              I mean, I certainly couldn’t do it for anything semi advanced, super clever, or obfuscated the way the XZ thing was.

              But I agree, that the fact we could audit it at all is a plus. The flip side is: an unvetted bad actor was able to publish these changes because of the nature of open source. I’m not saying bad actors can’t weasel their way into Microsoft, but that’s a much higher bar in terms of vetting.
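
That gap is worth making concrete: a checksum only proves you got the bytes the publisher signed, not that those bytes match the reviewed source. In the xz case the backdoor lived in the release tarball rather than the git tree, so comparing tarball contents to the tagged source was the check that mattered. A minimal Python sketch of the digest side of such a check (function names are illustrative):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Hex SHA-256 of a file, read in chunks to handle large archives."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path: str, published_digest: str) -> bool:
    """True if the download matches a digest published out-of-band.
    This catches a tampered mirror, but not a release that was already
    malicious when its digest was published -- the xz situation."""
    return sha256_file(path) == published_digest.lower()
```

The limitation in the second docstring is the whole point of the thread above: the tooling is easy; deciding what the trusted reference is, is hard.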

          • tabular@lemmy.world · 4 months ago

            Proprietary software has to be caught being insecure to be “guilty” of being insecure. Free software can be publicly verified, effectively “proven innocent” - a much higher standard.

    • TWeaK@lemm.ee · 5 months ago

      That’s the crux of it here. Microsoft wanted to get into the data game they saw Facebook and Google reaping rewards from. However, Microsoft still charges you for the software it uses to harvest your data.