Websites, mobile apps, desktop apps and mobile OSes are developed and updated using desktop OSes, which I would call the ‘master OS’. But who updates the ‘master’? How do the devs upgrade Windows 10 to Windows 11 using Windows 10?

I have some experience in computing, but software development for operating systems is completely mysterious to me. I have had this question ever since I learned about software development in general.

I saw Apple say that they use Macs to build all of their other products and software, but they never explain how they build macOS itself. I understand how these companies could design an upgraded or a brand-new computer: they design the new architecture, as well as the circuitry and the components underneath, with the help of CAD software. What I don’t understand is how they upgrade the existing software they themselves work in, especially when it has completely new features the old one doesn’t have. It feels similar to a person performing brain surgery on himself.

I would really appreciate it if someone could ELI5, but only dumb it down enough for a person who understands how to really work with computers and knows the general theory of programming, like an amateur or the family IT guy.

  • Max-P@lemmy.max-p.me · 1 year ago

    They use the old one to build the new one. It’s the same deal with compilers: to build the new version, you use your existing C compiler to compile the new compiler’s source, then you use that freshly built compiler to recompile itself (so you get the latest codegen from it), which gives you the final compiler.
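    Here’s a rough sketch of that staged rebuild, driven from Python purely for illustration. The single-file compiler source (new_compiler.c) and the stage names are hypothetical, not any real project’s layout; the only assumption is that some existing C compiler (cc) is already installed.

    ```python
    # Toy illustration of a staged ("bootstrapped") compiler build.
    # File names are hypothetical; "cc" stands for whatever compiler you already have.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Stage 1: the existing compiler builds the new compiler's source.
    run(["cc", "new_compiler.c", "-o", "stage1"])

    # Stage 2: the freshly built compiler rebuilds itself, so the resulting
    # binary is produced by the new code generator instead of the old one.
    run(["./stage1", "new_compiler.c", "-o", "stage2"])

    # Stage 3 (optional sanity check): rebuild once more; stage2 and stage3
    # should come out identical if the bootstrap converged.
    run(["./stage2", "new_compiler.c", "-o", "stage3"])
    ```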

    So for Windows 11, they developed it on Windows 10.

    For macOS, they use the previous generation of Mac to build the new build of macOS, then they boot it up on another Mac to test it, with the first Mac acting as the host for a debugger connection and whatnot. For new hardware, it’s essentially the same deal: they developed the M1 Macs using the previous-generation Intel Macs, and once the hardware was ready, they built the new OS on an Intel Mac, booted it up on the M1 Mac, tested it, and so on.
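    The “old machine builds for the new machine” part is just cross-compilation. A minimal sketch, assuming an Intel Mac with Apple’s clang installed; hello.c and the output name are illustrative:

    ```python
    # Cross-compile a program for arm64 on an x86_64 host. The host only
    # builds the binary; it never has to run it -- the new machine does.
    import subprocess

    subprocess.run(
        ["clang", "-arch", "arm64", "hello.c", "-o", "hello_arm64"],
        check=True,
    )
    ```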

    It’s called the bootstrap process, and it goes essentially all the way back to punch cards and mainframes. The first assembler was written directly in machine code; with that first assembler you could make a better assembler, and with the better assembler you could build the first C compiler. With the first C compiler you could make a better and more complex C compiler, with that you could make the first C++ compiler, and on and on. You just use whatever you already have and build the new thing with it.

    The same goes for CAD design: first it was done on paper, then we had computers on which we could write CAD software to design better computers, which run better CAD software, which lets you design even better computers, and so on.

    > What I don’t understand is how they upgrade the existing software they themselves work in, especially when it has completely new features the old one doesn’t have. It feels similar to a person performing brain surgery on himself.

    Basically, they don’t. You build the next version using only features that the current version supports. In the case of OS development specifically, you don’t really use OS features anyway; you use compiler features, and those don’t evolve the same way at all. But the process is the same: you implement the new features using only what the current compiler supports, then once you have that new compiler, you can change the code again to make use of those new features and build the new version of the compiler with it. And then you recompile it once more for good measure.
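    As a sketch of that feature-gating dance (file and compiler names hypothetical): say the new compiler adds a language feature its own source would like to use. The source is first kept free of that feature so the old compiler can still build it, and only after that build exists does the source get rewritten to use it.

    ```python
    # Hypothetical example of adding a new language feature to a self-hosted compiler.
    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Step 1: the new compiler's source avoids the new feature, so the current
    # ("old") compiler can still build it.
    run(["old_cc", "compiler_without_new_feature.c", "-o", "stage1"])

    # Step 2: stage1 understands the new feature, so the source can now be
    # rewritten to use it, and stage1 builds that version.
    run(["./stage1", "compiler_using_new_feature.c", "-o", "stage2"])

    # Step 3: rebuild once more with stage2 "for good measure" -- the final
    # binary is then produced entirely by the new code generator.
    run(["./stage2", "compiler_using_new_feature.c", "-o", "final_compiler"])
    ```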

    See: https://rustc-dev-guide.rust-lang.org/building/bootstrapping.html