Hi, mostly I use RHEL-based distros like CentOS/Rocky/Oracle for the solutions I develop, but it seems it’s time to leave…

What good server/minimal distro do you use?

Will start to test Debian stable.

      • @cloudless
        link
        English
        15
        edit-2
        9 months ago

        deleted by creator

        • @Turtle@lemmy.world
          link
          fedilink
          English
          5
          edit-2
          11 months ago

          I’ve used Debian on and off since the late 90s; what stands out about Bookworm? The releases have been mostly the same to me, not that that’s a bad thing.

          • @themoonisacheese@sh.itjust.works
            link
            fedilink
            English
            1
            11 months ago

            For me Bookworm stands out for its lack of standing out, if that makes sense. It’s very well polished and stable, and I am having considerably fewer problems since I upgraded.

        • @eoli3n
          link
          English
          1
          11 months ago

          “Blue is the absolute color”… why?

    • Cal🦉B
      link
      7
      11 months ago

      I’m going to throw my support behind this one as well. I’m circling back to Debian after a long stint on Fedora on my primary machine. I’ve been running Debian 12 on my desktop for several weeks now and it’s been pretty great.

      It is one version behind Fedora in GNOME releases, so I installed the latest GNOME from the experimental repos, and that worked pretty well. I don’t know if I would recommend that for anyone else, but it worked for me.

      I have a few personal servers still running CentOS 7, but I will be migrating them to Debian slowly over the next few months. I suspect it will go fine. I trust the Debian organization to maintain FOSS ideals over the next 5 to 10 years, so it seems like a good default for me.

      I have read about Vanilla OS. It is Debian based with some neat features stacked on top that might be fun for a desktop OS. I can see myself switching to that on the desktop if they deliver on all their promises.

      • The Bard in Green
        link
        fedilink
        11
        edit-2
        11 months ago

        Lifelong Debian (and Debian derivatives) user (23 years and counting). I have pretty much settled down into the following (this has been true for years):

        • Debian for servers.
        • Mint for workstations (that you want to just work and don’t want to spend time troubleshooting / tinkering). Mint is Linux your grandma can use (my Boomer real estate broker father has been running Mint laptops for the last 5 years).
        • Ubuntu for junior engineers who want to learn Linux.
        • Qubes (with Debian VMs) for workstations that must be secure (I’ve been working recently with several organizations that are prime targets of the CCP or have DFARS / NIST compliance requirements).
        • tool
          link
          fedilink
          1
          11 months ago

          Qubes (with Debian VMs) for workstations that must be secure (I’ve been working recently with several organizations that are prime targets of the CCP or have DFARS / NIST compliance requirements).

          Can you apply STIGs to that?

  • @Borgzilla@lemmy.ca
    link
    fedilink
    44
    edit-2
    11 months ago

    As an old fart, I’m happy to see that Debian is still cool. All of this arch-manjaro-nix-os-awesome-bspwm-i3-xmonad-flatsnap whippersnapper stuff is over my head.

    • @Nyanix@beehaw.org
      link
      fedilink
      16
      11 months ago

      Realistically, it doesn’t make sense for folks to be using bleeding-edge distros like Arch for a server anyway. LTS releases of Debian or even Ubuntu are definitely the right answer.

      • @OsrsNeedsF2P
        link
        6
        11 months ago

        Back when I was hyper into Arch I used it for my servers. “Why not make it the same as your development environment?” Anyways, that immediately stops working when your development environment changes. For a server, just use Debian or Ubuntu.

    • @Vani@lemmy.world
      link
      fedilink
      11
      edit-2
      11 months ago

      I’m all for using Debian and such, and I think that, out of all the new and hip things people brag about, Flatpak is the most useful for the average user experience and worth checking out. (Almost) everything else is just extra.

  • @bloodfart
    link
    32
    11 months ago

    You already figured it out. It’s Debian stable.

  • @dotancohen@lemmy.world
    link
    fedilink
    22
    11 months ago

    Will start to test Debian stable.

    This is a smart move.

    Debian makes for very good servers. I’ve been using Debian servers since moving my desktop from Fedora (when it was still called Fedora Core) to Ubuntu. I don’t regret it one bit. The community is excellent, and there is ample information available online without having to ask a new question.

    • @absentbird@lemm.ee
      link
      fedilink
      Deutsch
      1
      11 months ago

      This is how I feel. I was using Debian for both for a long time, but Arch has won me over.

      The variety of software and ease of sharing it via AUR is just so convenient.

    • frozen
      link
      fedilink
      9
      11 months ago

      Huge fan of openSUSE Tumbleweed. Rolling release like Arch, with the backing of a decently sized organization.

    • @VerbTheNoun95@sopuli.xyz
      link
      fedilink
      English
      7
      11 months ago

      I think openSUSE is really the best alternative. As much as I like Debian, openSUSE will be pretty comfy for someone coming from RHEL.

    • Sw00$h
      link
      fedilink
      1
      11 months ago

      Until it is clear what Leap 16 will look like, I would not start using it now.

  • My vote is Arch Linux. Debian is sometimes a little too “optimistic” when backporting security fixes, and upgrading from oldstable to stable always comes with manual intervention.

    Release-based distros tend to be deployed and left to fend for themselves for years; when it is finally time to upgrade, it is often a large manual migration process, depending on the deployed software. A rolling release does not have those issues: you just keep upgrading continuously.

    Arch Linux performs excellently as a lightweight server distro. Kernel updates do not affect VM hardware the same way they do your laptop, so no issues there. Same for drivers. It just works.

    Bonus: it is extremely easy to build and maintain your own packages, so administration of many instances with customized software is very convenient.

    • Tyr3al
      link
      fedilink
      3
      11 months ago

      Regarding the kernel upgrades: using the linux-lts package / kernel gets you a pretty reliable setup.

    • Sw00$h
      link
      fedilink
      1
      11 months ago

      You basically recommend burning money.

      Not because of Arch itself and its quality, but because you need to constantly monitor the mailing list for issues and you need to plan a lot more downtime due to reboots. This is not gonna happen in businesses.

      • @EddyBot@feddit.de
        link
        fedilink
        3
        11 months ago

        If you need reliable uptime you are in need of redundant servers, and at that point you can just apply updates and reboot the servers concurrently.

        • Sw00$h
          link
          fedilink
          1
          11 months ago

          Businesses rely on stable servers and applications. Stable in the sense of API/ABI-stable. You want an application to behave exactly the same on day one and on the last day before the EOL of the server OS.

          Arch is pure chaos: it could completely change how things work and break commercial third-party apps on that server on potentially any day. And you would not necessarily notice the error until it’s too late and your data is corrupted.

          You don’t throw money at your server infrastructure to get redundant servers just to finally be able to use Arch somewhat stably. And why shouldn’t a business use that redundancy with an LTS distro to get even more stability and safety of operations?

    • @CAPSLOCKFTW
      link
      1
      11 months ago

      I have tried using Arch on a (personal) server, and while it certainly works, I would not recommend it for production. There are just too many moving parts, and while breakage is extremely rare (especially when carefully reading the news), it still happens, and most companies or organisations prefer a cumbersome, planned upgrade to the next version over many very small chances of a catastrophe.

      The ease of building packages is indeed one of Arch’s strengths; it enables the AUR, and therefore Arch has one of the easiest-to-use and largest repos out there. But again, for production, you should go with containerisation IMO. Docker images are easy to build as well, OS-independent and, most importantly, independent of system libraries.

      Imagine the following scenario: you’re running a SaaS based on some Python framework.

      Option 1: you run it as a pacman package. If you have to use a library that isn’t in the official repos, you’re fucked when a Python update comes that requires updated libraries; you have to update your own app AND the library yourself, or do some nasty downgrading, with lots of manual intervention required. Or a library you use updates and changes its functionality, so you have to adjust your app.

      Option 2: you trash the easy-to-build system packages and go for a virtual environment. Now you’re at least protected from the library problems, since everything stays put unless you update it with pip, but you’ve lost the easy packaging aspect. Breakage still isn’t 100% ruled out: even with the venv you can run into problems with updates to the stuff that sits between your app and the web, since all the system-level parts could change in any -Syu. And venv is a Python-only solution, so anything with non-Python parts is out.

      So, Option 3: build containers, connect them, deploy them, and run them on systems with very few changes to system applications.
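
      To make Option 2 concrete, here is a minimal sketch of the venv route; the install path and requirements file are made up for illustration, not taken from the comment above:

      ```python
      # Hypothetical sketch of "Option 2": isolate the app in its own venv so
      # pacman updates to system Python libraries can't silently swap the app's
      # dependencies underneath it. Path and requirements file are illustrative.
      import subprocess
      import venv

      ENV_DIR = "/opt/mysaas/venv"  # assumed install location

      venv.EnvBuilder(with_pip=True).create(ENV_DIR)
      subprocess.run(
          [f"{ENV_DIR}/bin/pip", "install", "-r", "requirements.txt"],
          check=True,  # pinned versions stay put until you deliberately update them
      )
      ```

      As the comment notes, this only freezes the Python libraries; the system-level parts can still change on any -Syu, which is what pushes the author toward Option 3.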

    • Kwozyman
      link
      fedilink
      111 months ago

      I don’t think Arch (or any rolling distro, for that matter) is the best solution for a server deployment. If you update rarely, you’re bound to have to do manual interventions to fix the update. If you update too often, you might hit some distro-breaking bug, and you’re rebooting very often as well. Neither option is great for something requiring stability.

      • Once a year there is a manual intervention. The last one was the repo merge, and even that did not break anything. Before that… hmmm… I don’t even remember.

        On the desktop, with Nvidia and a lot of other AUR stuff, it is more work, but the servers run smooth as butter.

    • @Shareni@programming.dev
      link
      fedilink
      -2
      11 months ago

      RHEL is designed to be the terminator: a bit outdated, but never stopping and never giving up until it’s completely destroyed.

      Arch is a house that’s being built by a drunk tradie: everything is probably going fine, but you might end up with a front door that opens up to a solid brick wall.

      The main benefit of arch is that it has a huge repo of cutting edge packages. That is pretty much completely useless for both development and infrastructure.

      Devs don’t use cutting edge packages because that can introduce a whole lot of work for no benefits. So for example instead of installing node (cutting edge on arch), they use node-14-lts, just like their infra, until it stops getting support or a feature they need comes out in a newer lts version. And if your app is running on lts packages, you most certainly don’t need cutting edge system packages and all of the issues that come with them.

      Debian is sometimes a little too “optimistic” when backporting security fixes

      You’re not going to be hacked because of a system package. It’s going to be a bad library, or your own bad code. Either way, it’s got nothing to do with pacman.

      Release-based distros tend to be deployed and left to fend for themselves for years; when it is finally time to upgrade, it is often a large manual migration process, depending on the deployed software. A rolling release does not have those issues: you just keep upgrading continuously.

      We’re not back in the early 2000s, upgrading the OS is trivial when you’re using tools like terraform, ansible, and docker.

      Bonus: it is extremely easy to build and maintain your own packages, so administration of many instances with customized software is very convenient.

      Sure, you can write a package for pacman and have it available on Arch. Or you can write a Guix package and have it available on any Linux distro. Or you can write a Nix package and then run it on macOS as well. Windows is covered by both of these because of WSL.

      I’ve recently had to write a package for both arch and guix, the guix one was a lot easier and the whole process was a lot smoother. Also you get nice features like transformations, allowing you to only modify the existing package instead of having to rewrite it.

      Arch Linux performs excellently as a lightweight server distro. Kernel updates do not affect VM hardware the same way they do your laptop, so no issues there. Same for drivers. It just works.

      I haven’t used it as a server distro, but it was my main desktop distro for the last ~4 years. It crashed every month or two, and failed to boot at least 3 times even with regular -Syu’s. Before that I ran Mint for 2+ years. It never crashed, it never failed to boot. Other machines I wouldn’t update for months; Mint had no issues with that and updated perfectly fine. Arch would often crap itself completely and fail to boot; I’d do a btrfs rollback and try again in a week or two. Sometimes that would be enough, other times I had to wait a bit more for shit to settle.

      Arch has possible minor benefits, and a lot of possible downsides. It just doesn’t make sense to use it on a server, when you can take a rock solid foundation like Debian, and then build on top of it with nix/guix.

      • We use Ansible as well; it keeps all servers happily upgraded and all packages in working order - even the weirdest custom software instances. Node.js is available as LTS packages in Arch and it, again, just works.

        I have zero issues with upgrades on desktop and server except once last year when my old Core2Duo notebook I use in the kitchen did not suspend correctly for a whole week until the Kernel bug was fixed. (I ran linux-lts for a week, it was… smooth sailing).

        During that time we had 3 failed migrations of old PHP software to the new Ubuntu LTS and were fighting almighty RHEL because it simply did not provide the packages the customer required - we are now running an Arch container on the RHEL box…

        I know this discussion is a little bit like religion, and obviously luck and good circumstances play a role. We both speak from experience and OP can make their own decision.

  • @TheAnonymouseJoker
    link
    17
    11 months ago

    Debian

    Debian 12 Bookworm is their best release ever, and I am suddenly seeing a lot of positive opinions about it. It may be an Ubuntu 16.04 moment.

    • @pproe
      link
      English
      6
      11 months ago

      Bookworm was the final straw that made me switch to Debian (and Linux in general, full time). Such a polished OS. And if the release cycle doesn’t suit your workflow, it’s a very smooth changeover to one of the many Debian-based distros.

      • @TheAnonymouseJoker
        link
        English
        -2
        edit-2
        11 months ago

        I am on Ubuntu 22.04 LTS (and have been on Ubuntu since 16.04), and I will absolutely try it. The non-free firmware support suggests Debian has become more open-minded and pragmatic, and I like that. Ubuntu is safe, but I have become comfortable enough with Linux to step up my game with a more solid distro.

      • @TheAnonymouseJoker
        link
        -4
        11 months ago

        Go check The Linux Experiment’s video, among a lot of other videos and discussion forums.

        • @eoli3n
          link
          2
          edit-2
          11 months ago

          Those are not “details”, but “blur sources”.

          • @TheAnonymouseJoker
            link
            -5
            11 months ago

            Have you bothered to research the consensus and what Debian’s new release has? It is literally 2 minutes away if you search the internet instead of replying. Do not expect spoonfeeding.

            • @turdas@suppo.fi
              link
              fedilink
              3
              edit-2
              11 months ago

              Substantiating your claims isn’t “spoonfeeding”, it’s just common courtesy to reassure others that you aren’t talking completely out of your arse.

            • @eoli3n
              link
              0
              edit-2
              11 months ago

              Bla bla bla, so much energy to just not give what I ask for.

    • @jsonborne
      link
      1
      11 months ago

      Does Debian let you specify regulatory compliance at install time? Or is it a do-it-yourself situation where you manually write an Ansible playbook?

  • @phil_m
    link
    16
    11 months ago

    If you’re up for it: NixOS!

    It’s quite a steep learning curve, but after some time (after you’ve configured your “dream-system”) you don’t want to go back/switch to any different distro.

    Servers specifically are IMHO a great use case for NixOS. A server is usually simpler to configure than a desktop distro, and it has fewer of the usual pain points of “dirty” software (like hardcoded dynamic library paths that exist on most systems, with Ubuntu as the reference).

    I have much less fear maintaining my servers with NixOS, because of its declarative, functional reproducibility and “transactional” upgrade system, than I did previously (when I mostly used Debian).

    • ShittyKopper [they/them]
      link
      fedilink
      English
      5
      11 months ago

      The thing about NixOS is that while using packages is easy, creating them is still really hard and/or undocumented.

      With most popular services already being packaged by people who know what they’re doing this isn’t that big of a deal, but when I want to try out something from Joe Schmoe’s GitHub (or worse, something I made myself) it is much easier for me to throw together a “good enough” Dockerfile and compose.yml in barely an hour of work than to dig into Nixpkgs internals and wrestle with Nix’s syntax.

      • @lloram239@feddit.de
        link
        fedilink
        English
        5
        11 months ago

        Kind of depends what you want to package. For projects that force you to provide dependencies yourself (e.g. most C or C++ projects), Nix packaging is very easy to use. Just slap a flake.nix together with the necessary dependencies, where to get the source from and how to build it.

        Where Nix gets really difficult is with packages that reinvent their own packaging system and do dynamic downloads at compile or even runtime. Those really do not harmonize with Nix, as the Nix build process happens in isolation without network access and wants to have all dependencies specified beforehand, with checksum and all.

        When it comes to languages with their own package manager it also gets a bit complicated, as while Nix does come with workarounds for all the common cases, there are generally multiple ways to do it, e.g. you can use mach-nix, pypi2nix, buildFHSUserEnv or buildPythonPackage to build Python packages and it’s not always obvious which is the best approach or which will even work.

        Packages that softly depend on other packages via some kind of plugin mechanism are also tricky, due to Nix packages all being isolated in their own directories. Again, which workaround works best here can be tricky, some packages require specifying all the plugins at package build time others use environment variables or other means to locate plugins.

        All that said, these issues are kind of fundamental when you want to have a proper reproducible packaging system and hard to avoid. I do prefer a system that forces some cleanliness from the ground up instead of adding ever more ugly patchwork on top, but I can understand why that can be at times very frustrating.

      • @phil_m
        link
        English
        2
        11 months ago

        Well, I guess it depends how deep you’re into the rabbit hole already; I think it’s relatively easy for me at this point to create a new package (I’m already a maintainer of quite a few). But yeah… steep learning curve. Less so with Nix itself, though; nonetheless, it’s a simple functional programming language with a new paradigm (derivations). The curve is rather in the NixOS/nixpkgs Nix magic. For example, there’s a dynamic, dependently typed type system built on top of untyped Nix in the NixOS module system that is spun up at evaluation time.

        But I understand your point; at the beginning of my NixOS journey I also tended to create a “good enough” Dockerfile. Depending on the exact context I still do this nowadays (often because there’s an official, well-maintained Docker image compared to a not-so-well-maintained Nix one, and the context is too complex to maintain/develop/extend it myself). But if there’s a good solution in Nix I rather use that, and it is often less of a headache than setting up a service with e.g. docker-compose. I also use flakes mostly for dev environments; if you’re a little bit deeper into it, you can spin up a relatively clean dev env in a short time (I often copy-paste the ones I have written for other projects and change the packages/dependencies).

    • @eoli3n
      link
      3
      11 months ago

      I had a really bad experience with NixOS. The idea is great, but I had a lot of trouble at each generation switch. I don’t like it because I had to learn a lot of specific tools that only apply to that OS, and it was (really) hard. I prefer a classic distro, maybe Debian (or FreeBSD if not Linux), with Ansible for declarative config and ZFS storage, to be able to revert to a snapshot if I have any kind of problem.

      • @phil_m
        link
        1
        edit-2
        11 months ago

        As I said, it has a steep learning curve, and the documentation is pretty much the nixpkgs repo itself (well, after understanding the basics of Nix and NixOS at least, mostly in combination with https://nixos.wiki IMO). It also takes some time to get used to the quirks of NixOS (and to understand the practical design decisions behind those quirks).

        But nowadays I seldom have trouble switching generations (i.e. nixos-rebuild switch), unless you’re updating flake inputs or (legacy) channels (where e.g. a new kernel might be used); in that case it makes sense to reboot into the new configuration. Also, that can obviously lead to short downtimes (including just restarting a systemd service, if a service has changed between generations); if that is unacceptable, there obviously needs to be a more sophisticated solution, like Kubernetes via e.g. kubnix. I’m not sure how much of that can be achieved with Ansible, as I haven’t used it that much because I disliked the “programming” capabilities of the Ansible YAML syntax (which feels kinda hacky IMHO).

        But apart from NixOS, one can also just use Nix on a different system to e.g. deploy or create Docker images (which can be really compact, as only the necessary dependencies for a package are included), which in turn could e.g. be managed with Ansible or something…

  • Sophia
    link
    fedilink
    16
    11 months ago

    Honestly, Debian stable has always been my first option. I’ll continue using Arch for my desktops and Debian on servers and stuff.

    • @CAPSLOCKFTW
      link
      3
      edit-2
      11 months ago

      Same here. Went from CentOS to Debian (edit: on servers) when this whole shit show started and never looked back.

    • @darkmugglet@lemm.ee
      link
      fedilink
      1
      11 months ago

      Agreed. I like Fedora and there’s some awesome stuff like Podman in it. But Debian or Alpine for container images and Debian Stable for servers. I mean, let’s be honest, if you’re a professional you want boring for a server, and Debian Stable is dreadfully boring; it just works.

  • @yarr@lemmy.fmhy.ml
    link
    fedilink
    English
    13
    11 months ago

    Debian stable. The mix of having a stable host but being able to pull in flatpak / appimage / docker containers with newer software is awesome.

    • @itchy_lizard@feddit.it
      link
      fedilink
      English
      10
      edit-2
      11 months ago

      Debian yes, but don’t install from flatpaks or docker. Neither is secure.

      AppImage can be secure if the release is signed.

      Docker can pull images securely, but it’s disabled by default and many developers don’t sign their releases, so even if you enable it client-side there’s a risk you’ll download something malicious.

      Flatpak is never secure because it doesn’t support signing of releases at all.

      Apt is always secure because all packages must be cryptographically signed (by default).
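
      On the Docker point above: the client-side check in question is, presumably, Docker Content Trust, which is toggled with an environment variable. A minimal sketch of enabling it for a pull (the image tag is an arbitrary example):

      ```python
      # Sketch: pull with Docker Content Trust enabled so the client verifies
      # publisher signatures; unsigned tags are refused. It is off by default.
      import os
      import subprocess

      env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
      subprocess.run(["docker", "pull", "debian:stable-slim"], env=env, check=True)
      ```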

      • ono
        link
        fedilink
        English
        9
        11 months ago

        Flatpak is never secure because it doesn’t support signing of releases at all

        Can you elaborate on this? I ask because I build my own flatpaks, and signing is part of the publishing process.

          • ono
            link
            fedilink
            English
            7
            edit-2
            11 months ago

            Your earlier comment complains about pulling images securely, presumably meaning signature verification, which I believe Flatpak does.

            The report you linked is about tying downloaded sources to their author using public key infrastructure, which is a different issue. APT and dpkg don’t do that, either. (I know this because I build and publish with those, too.)

            Can you name a packaging system that does? I can’t. I would like to see it (along with reproducible builds) integrated into the software ecosystem, and I think we’re moving in that direction, but it will take time to become common.

            I have my own criticisms of Flatpak, mostly regarding the backwards permissions model (packages grant themselves permissions by default) and sloppy sandboxing policies on Flathub, so I caution against blindly assuming it’s safe. But claiming that it doesn’t support signing of releases is just plain false.

            • @federico3
              link
              English
              1
              edit-2
              11 months ago

              This is not correct. APT always verifies cryptographic signatures unless you explicitly disable it. Yet it’s very important to understand who is signing packages. What kind of review process did the software go through? What kind of vetting did the package maintainer themselves go through?

              If software is signed only by the upstream developer and no 3rd-party review is done by a distribution, this means trusting a stranger’s account on a software forge.

              Update: the Debian infrastructure supports checking GPG signatures from upstream developers, i.e. on the tarballs published on software forges.

              • ono
                link
                fedilink
                English
                1
                11 months ago

                This is not correct. APT always verifies cryptographic signatures unless you explicitly disable it.

                You’ve misunderstood what I wrote.

            • @itchy_lizard@feddit.it
              link
              fedilink
              English
              0
              edit-2
              11 months ago

              I believe Flatpak does.

              Flatpak does not authenticate files that it downloads. Please stop spreading misinformation that flatpak is secure. It’s not.

              If the flatpak (flathub?) repo was compromised and started serving malicious packages, the client would happily download & install them because it doesn’t have any cryptographic authenticity checks.

              APT and dpkg don’t do that

              Apt does verify the authenticity of everything it downloads (by default) using PGP signatures on SHA256SUMS manifest files. This provides cryptographic authenticity for everything it downloads (see the sketch at the end of this comment). Flatpak doesn’t do this.

              Again, this is clearly documented here https://wiki.debian.org/SecureApt
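
              Conceptually that is a two-step chain: the signed manifest is PGP-verified first, and then every downloaded file is checked against the hashes it lists. Here is a rough sketch of that second step; the function names are placeholders, not apt internals:

              ```python
              # Rough sketch of the checksum half of the chain described above.
              # The manifest is assumed to have already passed PGP verification;
              # names and the expected hash are placeholders for illustration.
              import hashlib

              def sha256_of(path: str) -> str:
                  digest = hashlib.sha256()
                  with open(path, "rb") as handle:
                      for chunk in iter(lambda: handle.read(1 << 16), b""):
                          digest.update(chunk)
                  return digest.hexdigest()

              def verify_download(path: str, expected_hex: str) -> None:
                  if sha256_of(path) != expected_hex:
                      raise ValueError(f"checksum mismatch for {path}")
              ```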

              • ono
                link
                fedilink
                English
                5
                edit-2
                11 months ago

                Again, you’re confusing two different things (sources vs. packages). I’m not going to argue with you, though. Good day.

                • @itchy_lizard@feddit.it
                  link
                  fedilink
                  English
                  1
                  edit-2
                  11 months ago

                  I’m talking about the end-user securely downloading packages from the repo, not how the package maintainer obtains the software upstream.

                  How a package maintainer obtains the software from the source is dynamic and depends on the package. Ideally those releases are signed by the developer. In any case, if the package is poisoned when grabbing the source, it’s much easier for the community to detect than a targeted MITM attack on a client obtaining it from the repo.

                  I can say that I do maintain a software project that’s in the repo, and we do sign it with our PGP release key. Our Debian package maintainer does verify its authenticity by checking the release’s signature. So the authenticity is checked both at the source and when downloading the package.

      • @KindaABigDyl@programming.dev
        link
        fedilink
        4
        edit-2
        11 months ago

        Eh. I mean it’s certainly a smaller curve than other “hard” distros like Arch or Gentoo, and there really isn’t one at all since the installer does most of the complicated stuff for you.

        Would I recommend it to beginners? Probably not, as they wouldn’t be willing to do any reading, configuring, or time sinking at all.

        However, for this use case of building solutions as an experienced Linux user, the 30 minutes to an hour of learning is really not a lot when it would save a ton of time down the line. It’s not like you need to be a Nix language or NixOS expert to use it effectively.

        • @mangopuncher
          link
          3
          11 months ago

          I mostly agree with this; I have it on my laptop. Took an hour or two to learn it, and I used a live image from the website just like any distro. Not for beginners, but for someone who is used to Arch, after you RTFM it’s fine.

        • @huiledolive@sh.itjust.works
          link
          fedilink
          English
          2
          edit-2
          11 months ago

          I see more and more people mentioning NixOS; until I read your message I thought it’d be more complicated than that to use it. But I have a beginner question: do the Nix repositories contain many packages that you’d want, or do you find yourself installing stuff manually?

          • @KindaABigDyl@programming.dev
            link
            fedilink
            English
            2
            edit-2
            11 months ago

            That’s actually one of its selling points. 80k packages. It’s more than the AUR (or any other package manager, for that matter).

            I’ve only had 3 programs not be available so far: a tool someone made for RGB setup on MSI laptops (a somewhat niche tool), and Slippi & Project+, which were only available as AppImages that for some reason wouldn’t run and needed their own environment (other AppImages seem to work fine).

            Very rarely will something not be available, and even then, someone has probably already figured out how to install it; it’s just not in the main repo, so a quick internet search will remedy it without you having to do any thinking yourself. I didn’t solve the Slippi thing myself.

    • @Auli@lemmy.ca
      link
      fedilink
      2
      11 months ago

      I keep hearing that, but I definitely managed to break it. And yes, it wouldn’t even boot when I rolled back.