In practice, the Linux community is the wild wild west: sweeping changes are infamously difficult to achieve consensus on, and this is by far the broadest sweeping change ever proposed for the project. Every subsystem is a private fiefdom, subject to the whims of each one of Linux’s 1,700+ maintainers, almost all of whom have a dog in this race. It’s herding cats: introducing Rust effectively is one part coding work and ninety-nine parts political work – and it’s a lot of coding work. Every subsystem has its own unique culture and its own strongly held beliefs and values.

The consequence of these factors is that Rust-for-Linux has become a burnout machine. My heart goes out to the developers who have been burned in this project. It’s not fair. Free software is about putting in the work, it’s a classical do-ocracy… until it isn’t, and people get hurt. In spite of my critiques of the project, I recognize the talent and humanity of everyone involved, and wouldn’t have wished these outcomes on them. I also have sympathy for many of the established Linux developers who didn’t exactly want this on their plate… but that’s neither here nor there for the purpose of this post. Any of those developers and their fiefdoms who went out of their way to make life difficult for the Rust developers, above and beyond what was needed to ensure technical excellence, are accountable for these shitty outcomes.

Here’s the pitch: a motivated group of talented Rust OS developers could build a Linux-compatible kernel, from scratch, very quickly, with no need to engage in LKML politics. You would be astonished by how quickly you can make meaningful gains in this kind of environment; I think if the amount of effort being put into Rust-for-Linux were applied to a new Linux-compatible OS we could have something production ready for some use-cases within a few years.

Having a clear, well-proven goal in mind can also help to attract the same people who want to make an impact in a way that a speculative research project might not. Freeing yourselves of the LKML political battles would probably be a big win for the ambitions of bringing Rust into kernel space. Such an effort would also be a great way to mentor a new generation of kernel hackers who are comfortable with Rust in kernel space and ready to deploy their skillset to the research projects that will build a next-generation OS like Redox. The labor pool of serious OS developers badly needs a project like this to make that happen.

Follow-up to: “One Of The Rust Linux Kernel Maintainers Steps Down - Cites ‘Nontechnical Nonsense’” and “On Rust, Linux, developers, maintainers, and Asahi Lina’s experience working on Rust code in the kernel”

  • JASN_DE@lemmy.world

    a motivated group of talented Rust OS developers could build a Linux-compatible kernel, from scratch, very quickly, with no need to engage in LKML politics

    Riiight… Because those developers would always be of the same opinion… Good luck with that.

    • Ephera

      You don’t need to always be of the same opinion for it to be much less loaded than Linux politics…

        • Ephera

          Yes? Again, I’m not saying there won’t be disagreements or politics, I’m just saying that it’s going to be less loaded than Linux kernel politics.

        • DigitalDilemma

          (Ignoring the ageist and sexist “old men” statements in this thread because they’re irrelevant)

          They will die out,

          … and be replaced with other technically invested people who are resistant to change. Such as with every massive project ever - at least until you get a tyrant who ignores the feelings and work of others and is in a position to push through their own vision.

          • femtech@midwest.social

            They are not being replaced at the same rate though. Knowledge brings understanding, understanding brings empathy, empathy brings change.

            • DigitalDilemma

              Bless you for being an optimist, but I don’t think it works like that. I really wish it did though.

              • femtech@midwest.social

                Why else would Republicans ban books and keep their children in the dark about other religions and people? So they can keep them on the hate train.

                • DigitalDilemma

                  If it did, then the world would be a perfect place by now. Indeed, many things are better - but there’s enough people hard at work sowing discontent and hate to ensure it isn’t.

  • AllNewTypeFace@leminal.space

    Drew DeVault recently wrote a simple but functional UNIX kernel in a new systems programming language named Hare in about a month, which suggests that doing something similar in Rust would be equally feasible. One or two motivated individuals could get something semi-useful up (it runs on a common x86 PC, has a console, a filesystem, functional if not necessarily high-performance scheduling, and enough of the POSIX API to compile userspace programs against); after that, what remained would be a lot of finishing work (device drivers, networking, and such), though not all of it necessary for all users. Doing this while keeping the goal of making it a drop-in replacement for the Linux kernel (as in, you can have both and select the one you boot into in your GRUB menu; eventually the new one will do enough, well enough, to replace Linux) sounds entirely feasible, and a new kernel codebase, implemented in a more structured, safer language, sounds like it could deliver a good value proposition over the incumbent.
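
    As a rough illustration of what “Linux-compatible” means at the lowest level (a toy sketch, not anyone’s actual implementation): userspace binaries invoke syscalls by number, so a replacement kernel largely comes down to a dispatch table keyed by the same numbers Linux uses. The syscall numbers below are the real x86_64 Linux values; the handler names and bodies are made up, and the whole thing runs as an ordinary userspace program rather than kernel code.

    ```rust
    // Hypothetical sketch: dispatching on Linux's x86_64 syscall numbers.
    // write = 1 and getpid = 39 are the real numbers; the handlers are toy
    // stand-ins, and -38 is -ENOSYS, the "not implemented" answer a partial
    // kernel would return while it grows coverage of the API.

    type SyscallHandler = fn(&[u64; 3]) -> i64;

    // Toy write(2): a real kernel would copy the buffer out of the calling
    // process's memory; here we only acknowledge the requested length.
    fn sys_write(args: &[u64; 3]) -> i64 {
        let len = args[2] as i64;
        println!("write: fd {} gets {} bytes", args[0], len);
        len
    }

    // Toy getpid(2): report our own process id.
    fn sys_getpid(_args: &[u64; 3]) -> i64 {
        std::process::id() as i64
    }

    fn dispatch(nr: u64, args: &[u64; 3]) -> i64 {
        let handler: Option<SyscallHandler> = match nr {
            1 => Some(sys_write),   // write
            39 => Some(sys_getpid), // getpid
            _ => None,
        };
        handler.map(|h| h(args)).unwrap_or(-38) // -ENOSYS for anything missing
    }

    fn main() {
        println!("getpid  -> {}", dispatch(39, &[0, 0, 0]));
        println!("write   -> {}", dispatch(1, &[1, 0, 5]));
        println!("mystery -> {}", dispatch(9999, &[0, 0, 0]));
    }
    ```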

    • schizo@forum.uncomfortable.business

      I’m not sure I’d call device drivers ‘finishing work’.

      The MAJORITY of the kernel is that pesky bit of ‘finishing work’; there’s a fuckton of effort in writing drivers and support for damn near every architecture and piece of hardware made in the last 30 years.

      You could argue you don’t NEED to support that much stuff, and I’d probably agree, but let’s at least be honest that the device driver work is likely to be 90% of the work and 90% of the user complaints if something doesn’t work.

      • chameleon@fedia.io

        It depends on if you can feasibly implement compatibility layers for large parts of the “required” but very work-intensive drivers. FreeBSD has the same driver struggles and ended up with LinuxKPI to support AMD/Intel GPUs. I know there’s a whole bunch of toy kernels that implemented compatibility layers for parts of Linux in some fashion too.

        It’s a ton of work overall but there’s room to lift enough already existing stuff from Linux to get the ball rolling.
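
        As a rough sketch of what such a compatibility layer amounts to (illustrative only, not LinuxKPI’s actual interface): the shim re-exports the API a Linux driver expects and implements it on top of whatever the host environment provides. The kzalloc/kfree-style names below mirror Linux’s C API purely for illustration, and Rust’s standard allocator stands in for the host kernel’s allocator.

        ```rust
        // Hypothetical compatibility shim: the names echo Linux's kzalloc()/
        // kfree(), but the implementation is just Rust's standard allocator
        // standing in for whatever the host kernel really provides.
        use std::alloc::{alloc_zeroed, dealloc, Layout};

        /// Allocate `size` zeroed bytes, as a ported driver would expect.
        unsafe fn shim_kzalloc(size: usize) -> *mut u8 {
            let layout = Layout::from_size_align(size, 8).expect("bad layout");
            alloc_zeroed(layout)
        }

        /// Free a pointer from `shim_kzalloc`; the size is needed here only
        /// because this toy backend tracks nothing on its own.
        unsafe fn shim_kfree(ptr: *mut u8, size: usize) {
            let layout = Layout::from_size_align(size, 8).expect("bad layout");
            dealloc(ptr, layout);
        }

        fn main() {
            unsafe {
                // A "ported driver" calls the shim exactly as it would call
                // the original API; only the backend underneath changed.
                let buf = shim_kzalloc(64);
                assert!(!buf.is_null());
                shim_kfree(buf, 64);
            }
            println!("shim round-trip OK");
        }
        ```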

        • AllNewTypeFace@leminal.space

          It’s possible though less than ideal. Drivers that connect to devices are part of the attack surface, and probably the part you’d least want implemented in C when the rest of the kernel is in Rust.

          • Tanoh@lemmy.world

            Sure, but even if they started tomorrow, it would probably be years before it could even be considered experimental outside of the most daring early adopters.

            Having a compatibility layer is not ideal, but it would mean they could have something working for more users faster, and at the same time see which modules/drivers they should focus on.

        • schizo@forum.uncomfortable.business

          Sure, but isn’t the mixture of Rust- and C-based device drivers and kernel ABIs kinda the whole kerfuffle that’s going on right now? Seems like the Rust faction would probably NOT want to grab some duct tape and stuff all this back together, but slightly differently this time.

      • cakeistheanswer@lemmy.dbzer0.com

        This is incredibly true. The hardware manufacturing process is a slow-turning and cost-centric wheel, but it’s always forward-looking. If it doesn’t exist today, you are building around compromises made outside the scope of your concerns.

        Anyone who’s had to work on DEC or Sun hardware can describe in excruciating detail how minor implementation differences in hardware cascade down the chain. (Missing) rubber washers once determined a SAN’s maximum write rate, lest the vibrating platters cause the chassis to walk across the floor.

        ‘Universal’ support is always a myth, and carving up which segment to target is shooting at one moving target while standing on another, unless you have exclusive control over the implementation of the whole chain (Apple).

      • AllNewTypeFace@leminal.space

        There’s a Pareto effect when it comes to them, in that you can cover a large proportion of use cases with a small amount of work, but the remaining special cases consume proportionately more effort. For an MVP, you could restrict support to standard USB and SATA devices, and get a device you can run headless, tethered to the network through a USB Ethernet adapter. For desktop support, you’d need to add video display support, and support for the wired/wireless networking capabilities of common chipsets would be useful. And assuming that you’re aiming only for current hardware (i.e. Intel/AMD boards and ARM/RISC-V SoCs), there are a lot of legacy drivers in Linux that you don’t need to bring along, from floppy drives to the framebuffers of old UNIX workstations. (I mean, if a hobbyist wants to get the kernel running on their vintage Sun SPARCstation, they can do so, but it won’t be a mainstream feature. A new Linux-compatible kernel can leave a lot of legacy devices behind and still be useful.)

        • schizo@forum.uncomfortable.business

          I think we’re kind of saying the same thing: making something that boots as an MVP isn’t the most difficult thing but is still a much different and simpler project than making a replacement kernel for Linux.

          If you really wanted to be a legitimate actual replaces-Linux-for-all-the-things-you-use-Linux-for kernel, you’d be biting off years and years and years of work on drivers. I mean, just look how long it took Nouveau to go from ‘kinda works’ to ‘actually viable’, and that’s just a subset of GPUs from one vendor.

          I’d also add that if you cared about server hardware, you’ve got a much larger driver footprint with a lot more weird behavior shit than on desktops, which are, honestly, just a couple dozen combinations of chipset, sound, Ethernet, and USB controller in any given generation.

          And sure, you could lean on the work that’s already been done at this point and probably do it faster, but it’s still a massive undertaking.

        • progandy@feddit.org

          ReactOS is probably a good indicator of how far you can get with some limited generic drivers.

      • AmalgamatedIllusions

        Drew mentions this and points out that it’s a new OS design and will therefore take a long time. He argues that an OS based on the Linux design would be much easier.

        • Michael Murphy (S76)@lemmy.world

          I’d recommend spending some time reading about it. It’s not as hard as he thinks. Applications developed for Linux are quite easy to port to Redox. It supports many of the same system calls and has a compatible libc implementation. The kernel does have abstractions to ease the porting process. And if you’re going to make a new kernel today, you should do it right and make a microkernel like Redox. One of the benefits of having a microkernel is that it doesn’t matter what language you write drivers in. They’re isolated to their own processes. Rust, C, C++, whatever.
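
          A minimal sketch of that isolation idea (conceptual only: pipes between ordinary processes stand in for a real microkernel’s IPC, and the `--driver` flag is made up): the “driver” is just a separate process answering requests, so nothing on the kernel side cares what language it was compiled from.

          ```rust
          // Conceptual sketch: a "driver" as an isolated process. The same
          // binary plays both roles; run with the (made-up) --driver flag it
          // answers requests, otherwise it spawns itself as the driver and
          // talks to it over pipes, standing in for real microkernel IPC.
          use std::env;
          use std::io::{BufRead, BufReader, Write};
          use std::process::{Command, Stdio};

          fn driver_loop() {
              // The "driver": read block numbers on stdin, reply on stdout.
              let stdin = std::io::stdin();
              let mut out = std::io::stdout();
              for line in stdin.lock().lines() {
                  let block: u64 = line.unwrap().trim().parse().unwrap();
                  writeln!(out, "data-for-block-{block}").unwrap();
                  out.flush().unwrap();
              }
          }

          fn main() {
              if env::args().any(|a| a == "--driver") {
                  return driver_loop();
              }
              // The "kernel" side: spawn the driver in its own process and
              // send it one request; a crash over there cannot corrupt us.
              let mut child = Command::new(env::current_exe().unwrap())
                  .arg("--driver")
                  .stdin(Stdio::piped())
                  .stdout(Stdio::piped())
                  .spawn()
                  .expect("failed to spawn driver process");

              let mut req = child.stdin.take().unwrap();
              writeln!(req, "42").unwrap();
              drop(req); // closing the pipe lets the driver loop finish

              let reply = BufReader::new(child.stdout.take().unwrap())
                  .lines()
                  .next()
                  .unwrap()
                  .unwrap();
              println!("kernel received: {reply}");
              child.wait().unwrap();
          }
          ```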

        • Ephera

          Yeah, I did read that, admittedly after making my comment, but thanks for pointing it out anyways. 🙂

    • Telorand@reddthat.com

      I kinda hope this happens, and I even think it’s likely. GNU/Linux came about because of the corporate direction of Unix. If Linux kernel devs are similarly going to shut out Rust devs, it seems like a reasonable path forward is to diverge and start something new.

  • fruitycoder@sh.itjust.works

    Monolithic kernels were a mistake /s

    But honestly it’s too bad that adopting new technology is so difficult in large projects like the Linux kernel, where arguably trying to replicate the success of Linux AND adopt new technology at the same time sounds extremely burdensome.

    Worse, I fear the loss of the libre ethos in new projects as they feel the need to bend that ethical barrier more in order to better “compete” with Linux.

  • emax_gomax@lemmy.world

    Man, despite loving FOSS, this whole debacle is so disillusioning for anyone who ever wanted to pivot to working on it full time. You don’t have to agree with people wanting to try new things, but the bare minimum is not to spew vitriol to keep them quiet, or to claim you’ll break their stuff and that it’s their problem because they aren’t doing things the same way as you while still depending on a shared ecosystem. All we have to do is be bloody polite to each other and build cool sh*t; why is that so hard? All the best to these folks and the Rust rewrite, but to me it just feels like both projects are losing here: Linux losing the passion and drive for adopting more modern stuff, and all the folks with that drive opting to restart from scratch because too many people refuse to get along.

  • nyan@sh.itjust.works

    Large organizations always have politics—it’s human nature. 1700 people is quite a large organization. Therefore, the kernel maintainers have politics. The presence of politics always means that some people will get stomped on unfairly.

    This is all business as usual, in other words, and it will not go away. At best, you can shift the culture of the group and the politics along with it, but that takes time and effort and people-handling.

    • gerdesj

      ♬rope and pull and brand 'em ♬

  • Cysioland@lemmygrad.ml

    I wouldn’t be surprised if a Rust-based Linux fork eventually happened (possibly funded by some techbro wrecker)

  • gerdesj

    “Every subsystem is a private fiefdom, subject to the whims of each one of Linux’s 1,700+ maintainers, almost all of whom have a dog in this race. It’s herding cats”

    There are three metaphors in that quote. When your considerations are that disorganized, you have not finished thinking everything through. Fiefdoms, dogs and cats … oh my! That’s on top of the wild west and other trite, well-worn and rather silly metaphors.

    Make your argument without recourse to inflammatory terminology and metaphor, and you lessen the risk of pissing people off.

    Clarity is in the eye of the beholder, or, as someone once said: “You do you”.