I have an HP Stream 11 that I want to use for word processing and some light web browsing - I’m a writer and it’s a lightweight laptop to bring to the library or coffee shop to write on. Right now it’s got Windows and it’s unusable due to lack of hard drive space for updates. Someone had luck with Xubuntu, but it’s been a few years and it seems like Xubuntu is no longer trying to be a lightweight distro for use cases like this.

My experience with Linux is very limited - I played around with Peppermint Linux a bit back when it was a Lubuntu fork and I used Ubuntu on the lab computers in college. I can follow instructions to make a live boot and I can do an apt-get (so something Debian-based might be best for compatibility and familiarity) but I mostly have no idea what I’m doing, lol. I used to do DOS gaming as a kid so having to do the occasional thing via command line isn’t going to scare me off but I’m not going to pretend to have knowledge I don’t. I’m probably going to go with Mint on my gaming laptop next year but I suspect it’s not the best choice for my blue bezeled potato (although I might try it anyway).

  • 7heo · edit-2 · 9 months ago

    Note: this comment is long, because it is important and the idea that “systemd is always better, no matter the situation” is absolutely dangerous for the entire FOSS ecosystem: both diversity and rationality are essential.

    Systemd can get more efficient than running hundreds of poorly integrated scripts

    In theory yes. In practice, systemd is a huge monolithic single-point-of-failure system, with several bottlenecks and reinventing-the-wheel galore. And openrc is a far cry from “hundreds of poorly integrated scripts”.

    I think it is crucial we stop having dogmatic “arguments” with argumentum ad populum or arguments of authority, or we will end up recreating a Microsoft-like environment in free software.

    Let’s stop trying to shoehorn popular solutions into ill-suited use cases just because they are used elsewhere, under different limitations.

    Systemd might make sense for most people on desktop targets (CPUs with several cores, and several GB of RAM) because of convenience and comfort (which systemd excels at, let’s be honest), but as we approach “embedded” targets, simpler and smaller is always better.

    And no matter how much optimisation you cram into the bigger software, it will just not perform like the simpler software, especially with limited resources.

    Now, I take OpenRC as an example here because it is, AFAIR, the default in Devuan, which also supports runit, sinit, s6 and shepherd.

    And using s6, you just can’t say “systemd is flat out better in all cases”, that would be simply stupid.
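    For a sense of scale, a modern OpenRC service file is a short declarative script, not a hundred-line shell monolith. A hypothetical example (the daemon name, paths and options are invented purely for illustration):

    ```sh
    #!/sbin/openrc-run
    # Hypothetical service script for an imaginary "mydaemon" binary,
    # shown only to illustrate how small a declarative openrc-run
    # service file is. All names and paths here are made up.

    name="mydaemon"
    command="/usr/bin/mydaemon"
    command_args="--config /etc/mydaemon.conf"
    command_background="yes"
    pidfile="/run/${RC_SVCNAME}.pid"

    depend() {
        need net        # start only once networking is up
        after firewall  # order after the firewall service
    }
    ```

    Everything else (start/stop/status handling) comes from openrc-run itself, so the per-service script stays tiny.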

    • lemmyreader · English · 9 months ago

      For the record: OpenRC is the default on Alpine Linux, which probably runs on millions of Docker installations.

    • TCB13@lemmy.world · English · 9 months ago

      “systemd is always better, no matter the situation” is absolutely dangerous for the entire FOSS ecosystem: both diversity and rationality are essential.

      I agree with this, however the rest is more open to discussion.

      Systemd might make sense for most people on desktop targets (…) “embedded” targets, simpler and smaller is always better.

      A few years ago I was working on a bunch of “embedded” devices (4 x ARM @ 800 MHz + 256MB of RAM). We tried the popular alternatives, and the truth is that only with systemd were we able to boot and have a usable system (timers, full dual-stack DHCP/SLAAC networking, network time, secure DNS) without running out of resources for our daemons later on.

      The issue with sysvinit, OpenRC, etc. isn’t that they aren’t good, it’s that they’re simply init systems and nothing more. To get just the bare features above, we would have had to depend on tons of other small packages and daemons that would all eat up RAM, and deal with all the integration pain because they weren’t designed to work together. Are you aware of how many things you have to set up just to get dual-stack networking? With systemd you cut a lot of those smaller daemons and end up with a few that have a much smaller RAM footprint and are actually made to work with each other.
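      As a sketch of what that consolidation looks like: dual-stack networking, secure DNS and network time can all be declared in a single systemd-networkd file (the interface name and file path below are illustrative, not from the original setup):

      ```ini
      # /etc/systemd/network/20-wired.network  (path and interface
      # name are examples only)
      [Match]
      Name=eth0

      [Network]
      DHCP=yes                  # DHCPv4, plus DHCPv6 where offered
      IPv6AcceptRA=yes          # SLAAC via router advertisements
      DNSOverTLS=opportunistic  # secure DNS, handled by systemd-resolved
      NTP=pool.ntp.org          # network time, handled by systemd-timesyncd
      ```

      One file replaces what would otherwise be separate DHCP, SLAAC, DNS and NTP daemons, each with its own config and RAM footprint.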

      Systemd also provides very useful features like socket-activated services, which can be leveraged to have the system wait for incoming connections and only launch a program once one arrives. Without systemd, that would’ve been one more constantly running daemon. It also gave us the ability to monitor whether all required services were running, kill things going over the line, restart them on specific conditions, and even trigger alerts.
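      A minimal sketch of that socket-activation pattern, assuming a hypothetical mydaemon binary (all names and the port are invented):

      ```ini
      # mydaemon.socket - systemd holds the listening socket;
      # nothing runs until the first client connects
      [Socket]
      ListenStream=8080

      [Install]
      WantedBy=sockets.target

      # mydaemon.service - launched on the first connection,
      # with supervision and a resource cap
      [Service]
      ExecStart=/usr/bin/mydaemon
      Restart=on-failure   # restart on specific conditions
      MemoryMax=32M        # kill anything going over the line
      ```

      Until a connection arrives, the only cost is the socket itself, which matters a lot inside a 256MB budget.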

      Yes, you can do all of the above without systemd, but the amount of stuff required didn’t fit our 256MB target, nor the power budget - we tried it, trust me. Besides, with fewer moving parts, relying on systemd made our solution way more robust and easier to develop and debug.