I want to find the most sustainable operating system. Computers nowadays waste a lot of energy on data collection and data processing, and avoiding unnecessary processes and using resources mindfully could reduce CO2 output worldwide.

This discussion grew very fast, so I have put all links to the other platforms at the end of the blog article.

  • nachtigall@feddit.de · 2 years ago

    Interesting thoughts. I very recently considered this issue too. As a Gentoo user I had to recompile software on updates, which sometimes took a whole battery charge for large packages such as LLVM or rustc. I finally decided to drop it in favor of Debian—the ecological aspect is more important to me than some minor tweaks, after all.

    • maxmoonOP · 2 years ago

      After discussing this topic for days, I’ve now decided that my next operating system will be Debian (stable). And if I need a newer version of some software, I can still use Flatpak (or similar). Someone even told me that Signal-Desktop (via Flatpak) only used so much CPU because it was badly configured. Maybe this will be different on Debian.

      And a very good point is using the right hardware. Switching to ARM could save a lot of energy. But there are even more aspects I’ve learnt about that I hadn’t even considered; those will be covered in the next parts of the blog series.

        • maxmoonOP · 2 years ago

          So it’s safer to use AppImage, but it needs more resources.

        • maxmoonOP · 2 years ago

          I use i3wm and want to test dwm, because it uses far fewer resources than i3wm or awesomewm but might still do everything I want. Desktop environments use a lot of energy, but it might be interesting to figure out which one uses the least, just to have a good recommendation for people who need a DE. My guess would be XFCE, but that’s only a theory, because I think it’s the smallest one without bloatware.
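          A crude way to put numbers on that yourself (crude because RSS double-counts memory shared between processes) is to sum resident memory per process name:

          ```shell
          # Sum resident set size (RSS) per process name, in MiB, sorted
          # descending -- a rough proxy for a WM's or DE's memory footprint.
          ps -eo rss=,comm= | awk '
              { sum[$2] += $1 }
              END { for (p in sum) printf "%.1f MiB\t%s\n", sum[p]/1024, p }
          ' | sort -rn
          ```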

          The OS you are using matters a lot when it comes to sustainability (CO2 emissions). If everyone left Windows and used a lightweight Linux, energy consumption would be almost halved overnight—and that’s only the energy saved by the workstations. Avoiding Microsoft and other big companies can save much more energy, but more about this in my next blog article.

            • maxmoonOP · 2 years ago

              Okay, I haven’t used a DE for years and shouldn’t create theories about them. XFCE was just one of the lightest DEs back then, but nowadays there are so many new ones I’ve never seen in action.

              https://fedoramagazine.org/fedora-desktops-memory-footprints/

              Thanks for this useful link! I will definitely use it in my work.

              A few days ago someone mentioned a link to a website with the resource usage of window managers in a forum, but I can’t find it anymore. I will post it here if I find it again, just to complete the numbers.

  • Zerush · 2 years ago

    I remember many years ago they experimented with an online OS, which only required a ‘dumb’ PC with an internet connection. In principle it is not such a bad idea, if it weren’t for the fact that this online OS was centralized and in the hands of a private company. But perhaps it wouldn’t be so bad as something decentralized; this would also greatly facilitate collaboration and save enormously on hardware through reuse. I don’t know if this would be feasible in any way—here I defer to what our experts say.

    • maxmoonOP · 2 years ago

      So in general it is a remote desktop, right?

      If the main hardware is somewhere else and you only have the bare minimum of hardware, just enough to connect to the better remote machine and show its content, I would say it isn’t sustainable at all. Permanently transferring huge amounts of data over the internet produces a lot of CO2.

      And two computers always have a higher carbon footprint than one, especially when that one could be a very lightweight system which has everything it needs to work offline too (like LibreOffice for offline office work). And the remote computer must be online all the time, even when your dumb PC is offline.

      But I don’t understand this part: “this would also greatly facilitate collaborations and save enormously on hardware reuse.” Could you explain this a little more?

      • Zerush · 2 years ago

        I meant that a lot of people are connected to the same OS, so real-time collaboration becomes very easy. In a similar way to a decentralized social network, but an OS instead. Currently there are some projects out there for online collaboration; maybe the most complete is the French System D, which I am using (not to be confused with systemd). I think this type of service could be improved into a full OS in the future (it’s FOSS).

        • maxmoonOP · 2 years ago

          Ok, I think I get it now. So not only can the same hardware be shared for different desktop environments, but even the same tools for working together. It would definitely be an interesting project, but it would still use a lot of energy, because everyone connected to it has to stream the desktop permanently. Just using collaboration software would use less energy.

          Btw, CryptPad is a really cool open-source collaboration tool.

          • Zerush · 2 years ago

            Yes, there are also some others. Energy use isn’t so relevant if it comes from renewable sources (“green energy”). A server hosting a normal social network, like Lemmy or others, with a lot of users also needs a lot of energy; whether it hosts a social network or an OS doesn’t matter so much, I think.

  • jollyrogue · 2 years ago

    • Have a stable and secure system

    • Have the newest/fanciest updates for a few applications

    This can be done with Fedora. dnf update --security applies security updates only.

    After that you can cherry-pick which applications to update. The cherry-picking can be accomplished via an Ansible playbook.
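    A minimal sketch of that workflow (the package name is only an illustration; which updates count as security fixes depends on the advisories in your repos):

    ```shell
    # Apply security fixes only, leaving every other package at its
    # current version.
    sudo dnf update --security

    # Then cherry-pick the handful of applications you do want on the
    # latest version ("firefox" is just an example).
    sudo dnf update firefox
    ```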

    • maxmoonOP · 2 years ago

      Does it make sense to only update security packages? Or will it become unstable after updating everything?

      I don’t know what an Ansible playbook is, but after doing a little research it just looks overpowered. Wouldn’t it be better to have the applications which must have the fanciest updates as Flatpaks and then just update those?

      Btw, could you delete the redundant posts, please? You accidentally posted it 4 times.

      • jollyrogue · 2 years ago

        Does it make sense to only update security packages?

        Yes. “Update for security fixes, and then bump versions only when necessary for features” is how updates are supposed to work, but nobody does this.

        Or will it even be unstable after updating everything?

        Red Hat’s release engineering is fantastic. I usually give new Fedora releases a month or two before upgrading my work desktop, but normal updates are uneventful.

        Fedora is experimental compared to RHEL, but in the grand scheme of things, it’s a moderate distro. It does more testing than Arch, they try to upstream as much as possible, they don’t ship software with license or patent problems, and it’s a semi-rolling release distro. A few packages are pinned, but most packages get updated as the package maintainer has time, which is usually shortly after release.

        Wouldn’t it be better to have the applications which must have the fanciest updates as Flatpaks and then just update those?

        That’s up to you. Some people like Flatpak, and some people don’t. I also don’t know how to only install security updates for Flatpak applications.

        I use a mixture. Some programs aren’t packaged as a Flatpak, some are only packaged as a Flatpak, and some are better from the distro package.

        I’ve run Fedora and RHEL/CentOS for over a decade at this point, and it’s been solid. The times things have gotten weird are when I’ve added third-party repos which replace system packages instead of installing into their own path. This problem has mostly been fixed now.

        Btw, could you delete the redundant posts, please? You accidentally posted it 4 times.

        Yeah. I was posting with Remmel, and it’s a little wonky. Four errors, four posts. :\

        • maxmoonOP · 2 years ago

          Yes. “Update for security fixes, and then bump versions only when necessary for features” is how updates are supposed to work, but nobody does this.

          Isn’t this a hard way? In general I want to update programs because I hope the bug which annoys me will be fixed with the next update. So I will keep updating until the bug gets fixed, even if I get features I really don’t want. The only way to figure out whether an update has the fix is to read all the release notes since the version you own. Nobody has time for that.

          I totally get that software I own and like as it is shouldn’t be changed, but sometimes that’s not possible. I remember that Firefox’s GUI was once pretty lightweight; then they implemented developer tools, which no normal user needs, then a whole customization tool, and then a tool to synchronize your data with other devices, and so on. And that’s the case with a lot of software: it reaches a point where it is functional and everything works, and then it gets screwed up by tons of features only a minority uses. But a browser should be up-to-date, because it could be very dangerous if not.

          Fedora is experimental compared to RHEL

          Do I understand it right that RHEL is like Debian stable, but you have to buy it?

          I use a mixture. Some programs aren’t packaged as a Flatpak, some are only packaged as a Flatpak, and some are better from the distro package.

          And this is pretty annoying imho, but it might only be the current situation, because I read somewhere that those universal package formats (I don’t know what else to call them) will be the future, because there will be only one package to maintain, which will work on all Linux distributions. But is this a good thing?

          Currently my conclusion for a sustainable setup would be to use Debian (stable) with AppImage and AppImageUpdate for partial updates. Would you say there is a better solution for a sustainable system? Would you even say Fedora is more sustainable?

            • jollyrogue · 2 years ago

              Isn’t this a hard way? … Nobody has time for that.

            Using the minimal viable version is the correct way, but yeah, most people live and die by the @latest YOLO method.

            Updates can be done piecemeal in a much more purposeful way to minimize churn, or updates can be blasted out with one command.

            Do I understand it right that RHEL is like Debian stable, but you have to buy it?

              You’re correct; RHEL is equivalent to Debian stable.

            There’s an “up to 16 installs” free tier. I haven’t bothered with it since CentOS is only slightly ahead of RHEL, and I don’t have to figure out entitlements with CentOS.

            For a desktop/laptop/workstation, I would stick with Fedora though. It has BTRFS, more desktop software, and more features.

              In the past, running RHEL/CentOS as a desktop was a much more advanced project than most people wanted. I was doing lots of custom compilation and upgrade planning for the desktop software I wanted to use. I’m not sure how the new 3-year cadence is going to affect things.

              And this is pretty annoying imho, but it might only be the current situation, because I read somewhere that those universal package formats (I don’t know what else to call them) will be the future, because there will be only one package to maintain, which will work on all Linux distributions. But is this a good thing?

            Flatpaks are built for desktop applications. Server applications or development tools don’t really fit into the Flatpak model, and I use server applications and development tools frequently.

            It is a good thing. Once a Flatpak is created it is portable across the ecosystem which enhances the software selection for all distros.

            Previously, some applications were locked to the big distros, and the smaller distros struggled to port software.

            Also, Flatpak is designed to work around some shortfalls of current package managers.

              Flatpak can run without root permissions, and it can install applications in the invoking user’s home dir. Most package managers assume the package will be installed system-wide, and they don’t have provisions to be run by accounts other than root.

              Current package managers aren’t built to version libraries, and this is something else Flatpak has addressed.
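              A concrete sketch of that rootless, per-user mode (Signal’s Flathub ID is used purely as an example):

              ```shell
              # Add the Flathub remote for the current user only -- no root
              # needed; everything lands under ~/.local/share/flatpak.
              flatpak --user remote-add --if-not-exists flathub \
                  https://flathub.org/repo/flathub.flatpakrepo

              # Install and later update an application as that user.
              flatpak --user install flathub org.signal.Signal
              flatpak --user update
              ```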

              Currently my conclusion for a sustainable setup would be to use Debian (stable) with AppImage and AppImageUpdate for partial updates.

              Debian is fine. I’m just familiar with the challenges of running a point-in-time distro as a desktop.

            I haven’t tried AppImageUpdate. I favor Flatpak over AppImage these days.

            Would you say there is a better solution for a sustainable system?

            Not a good one. :)

            Would you even say Fedora is more sustainable?

            It’s as sustainable as any Linux distro. From a user experience point of view, it’s easier to live with on a desktop.

              Now that I think about it, a local repo can be set up, and the local repo can be used to update the system.

              Mirror the repos to an SD card, flash drive, or external HD, and then take the drive to each machine for updates. That would reduce network usage, and reading from local storage is higher bandwidth than the network, which would reduce CPU time.

            I’m not familiar with apt, but there might be something similar.
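              On the Fedora/dnf side, that mirroring idea can be sketched roughly like this (the repo ID and paths are placeholders):

              ```shell
              # Mirror the "updates" repo, including metadata, onto an
              # external drive.
              sudo dnf reposync --repoid=updates --download-metadata \
                  --download-path=/mnt/usb/repos

              # On each target machine, add a .repo file pointing at the
              # mirror, e.g. /etc/yum.repos.d/local-updates.repo containing:
              #   [local-updates]
              #   name=Local updates mirror
              #   baseurl=file:///mnt/usb/repos/updates
              #   gpgcheck=1

              # Then update from the local mirror only.
              sudo dnf --disablerepo='*' --enablerepo=local-updates update
              ```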

              • maxmoonOP · 2 years ago

              Updates can be done piecemeal in a much more purposeful way to minimize churn, or updates can be blasted out with one command.

                It’s interesting that Fedora (and some other distros) can do partial updates. This is a huge plus imho, because updates are really energy-intensive (even if nothing is compiled locally), and the reason I want Debian (stable) is that it gets fewer updates—in the sense that a package must be stable before it reaches stable. In the testing branch you would get the bugs as one update and then the fix as another.

                My theory is that I can avoid unwanted updates by just using a stable system. And at this point I am afraid that Fedora wouldn’t fit this logic, because it is like the testing branch. On the other hand it gets the fanciest stuff, and I wouldn’t need Flatpak/AppImage anymore. Okay, I’m walking in circles again…

              There’s an “up to 16 installs” free tier. I haven’t bothered with it since CentOS is only slightly ahead of RHEL, and I don’t have to figure out entitlements with CentOS.

                CentOS as a workstation? This sounds interesting! I only know it for servers and I thought it was created for servers, but DistroWatch says it comes with a DE—and that it was even discontinued in 2020. I hadn’t heard anything about that.

                And it might make life easier, because you don’t have to learn different things, like different package managers, if you have the same system on your workstation and on your server. Which reminds me… I still have a v-server running some stuff on CentOS release 6.10 (Final). And the only reason it is still on 6 is that I found it too complex to update and I was afraid I would break it.

                But I have to say I’ve never got comfortable with yum, maybe because I’ve just used apt and pacman too much in my life. Wait… Fedora uses dnf… I thought CentOS was based on Fedora/RHEL. Doesn’t that mean they use the same package manager? Do I have to learn different package managers if I use Fedora as a workstation and CentOS for servers?

                And I had the problem that there was no python3 on CentOS and it had to be installed by hand, which was a mess I wouldn’t repeat.

              Server applications or development tools don’t really fit into the Flatpak model, and I use server applications and development tools frequently.

                What package manager do you use on servers to get the applications you need? Docker? Is that the way to install stuff like Python on CentOS?

              Now that I think about it. A local repo can be setup, and the local repo can be used to update the system.

                In another discussion someone already mentioned that once you have updated one Fedora workstation, it can be used as an update server, and it doesn’t sound hard to configure all the other PCs on the same network to use that one PC as their update source. It would definitely reduce the carbon footprint compared to updating all workstations separately over the internet. Do you know what this is called? I forgot, and I would really like to find a tutorial for it.
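                One generic way to do the serving part, assuming a repo mirror already exists on the updated workstation (the directory, port, and address below are illustrative):

                ```shell
                # On the updated workstation: expose the mirrored repo
                # directory over the LAN with a throwaway HTTP server.
                cd /srv/repo-mirror && python3 -m http.server 8080

                # On the other machines: point the package manager at it,
                # e.g. a .repo file with
                #   baseurl=http://192.168.1.10:8080/updates
                ```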

                • jollyrogue · 2 years ago

                CentOS as a workstation?

                It’s multi-purpose. It mostly gets used for servers, but it can be used as a client.

                it was even discontinued in 2020.

                The CentOS project went through a repositioning in the last couple of years, and things got weird there for a minute.

                  CentOS 8 is EoL. CentOS Stream 8 is still supported, and there weren’t significant differences between 8 and Stream 8. CentOS Stream 9 is the latest version, and it’s supported.

                  CentOS was repositioned to be the upstream of RHEL instead of downstream. In practical terms, CentOS gets packages slightly before RHEL does, and there are more companies and people working on adding software to CentOS than to RHEL.

                There are a few true downstream rebuilds of RHEL, like Rocky Linux, but it’s too early to tell if they’re going to be around long term.

                But I have to say I’ve never got comfy with yum, maybe because I just used too much apt and pacman in life. Wait… Fedora is using dnf… I thought CentOS is based on Fedora/RHEL. Doesn’t it mean they use the same package manager? Do I have to learn different package managers if I use Fedora as workstation and CentOS for servers?

                  dnf is included in CentOS Stream 8. There is also a yum compatibility package installed, which aliases yum to dnf.

                dnf and yum work the same way, as far as users are concerned. Knowing one is basically knowing the other.

                Going forward, dnf is the package manager for the Red Hat ecosystem.

                  And the only reason it is still on 6 is that I found it too complex to update and I was afraid I would break it.

                  That’s another thing: Fedora can be upgraded in place. CentOS and RHEL subscribe to the clean-install philosophy.

                And I had the problem, that there was no python3 on CentOS and it must be installed by hand, which was a mess and I wouldn’t do it again.

                Python3 has been included in the repos since CentOS 7.

                What package manager do you use for servers to have the applications you need?

                It’s a mixture of things depending on what I need.

                I do ops and dev work on my desktops/laptops, so there’s Flatpak for GUI tools, GUI and CLI tools from RPMs, services installed from RPMs, some container tools, and custom installs. It’s very much not a basic install.

                Servers have stuff from RPMs, containers, and custom installs.

                Docker? Is this the way to install stuff like python on CentOS?

                It depends on what you need. If you want to do some Python development for yourself, using a newer version of CentOS and installing Python from the repos is the easiest way.
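                  On a current CentOS Stream or Fedora release, that step is a single package install from the default repos:

                  ```shell
                  # Install Python 3 from the distribution repos -- no
                  # manual build needed.
                  sudo dnf install python3
                  python3 --version
                  ```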

                Containers are a good way to isolate software from the base system, but they add more complexity and systems to manage.

                Toolbx is a good way to create disposable environments to work in.

                  Toolbx: https://containertoolbx.org/
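                  A minimal Toolbx session looks roughly like this (the container name is just an example):

                  ```shell
                  # Create a disposable container and enter it.
                  toolbox create dev
                  toolbox enter dev

                  # Inside, install tools without touching the host system...
                  sudo dnf install python3

                  # ...then leave and throw the container away when finished.
                  exit
                  toolbox rm --force dev
                  ```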

                Pkgs.org is a good resource to find packages in the various repos.

                https://pkgs.org/search/?q=python3