I have a trusty UnRaid server that has been running great for almost 3 years now, with some kinks and headaches here and there, but mostly very stable. Now I’m entertaining the idea of setting that box up with Proxmox and running UnRaid virtualized. The reason being that I want to use UnRaid exclusively as a NAS and then run all my Docker containers and VMs on Proxmox (at least that’s how I’m picturing it). I would like to know your opinion on this idea.

All I have is Nextcloud, Immich, Vaultwarden, Jellyfin, Calibre, Kavita, and a Windows VM I use to update some hardware every now and then. I mainly want to do this for the backup capabilities Proxmox offers for each instance. Storage is not a concern, and I have 64GB of ECC RAM in that box. What are the pros and cons, or is it even worth it to move all this to Proxmox?

  • just_another_person@lemmy.world · 8 months ago

    There’s the question of “CAN I do this?” vs “SHOULD I do this?”. I don’t think abstracting your main storage-handling software away from where it definitely needs to be is going to net you anything positive; it will only add more issues and complications.

    I’m sure you can find videos of people running drivers out of containers just because it’s possible. Should you though? Nope.

    • youmaynotknowOP · 8 months ago

      I do have the advantage of having a mirror of my server 2.5K miles away in my brother’s house. That’s probably why I’m thinking about being so candidly careless.

      I appreciate the great advice. But now I’m willing to take one for the team and come back with either a horror story or an epic win.

      BRB.

      • just_another_person@lemmy.world · 8 months ago

        You’re thinking about this the wrong way, though. Why are you trying to abstract the thing that keeps your disks working properly? What’s your gain here?

        • youmaynotknowOP · edited · 7 months ago

          Oh, ok. Mainly 3 things:

          1. Manage all my containers and VMs through Proxmox instead of inside UnRaid directly, effectively leaving UnRaid to manage storage only.
          2. This, from my understanding, will in turn allow me to play with container options other than Docker (Docker is awesome, I know, but it also has limitations), opening new roads of knowledge to me. UnRaid doesn’t even support Kubernetes or LXC.
          3. Easier VLAN management on the server side. I have to play with firewall permissions on my pfSense to allow some containers to talk to others. Proxmox, being VLAN-aware, would let me eliminate those permissions from pfSense and just manage interconnectivity via Proxmox.
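          From my limited playing so far, point 3 mostly comes down to marking the Proxmox bridge as VLAN-aware in /etc/network/interfaces; a sketch (interface names and addresses are hypothetical):

          ```
          auto vmbr0
          iface vmbr0 inet static
              address 192.168.10.5/24
              gateway 192.168.10.1
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094
          ```

          Each guest NIC then just gets a VLAN tag (e.g. `tag=30` on `net0`), and the bridge puts its traffic on the right VLAN.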

          While I’m aware that I can use Docker Compose in UnRaid when there’s no UnRaid Docker template available, it’s not the most user-friendly way to manage those containers, in my opinion.

          Another reason is that I’m always trying to learn new things, and from my limited experience with Proxmox (I’ve only been playing with it for about a month or so on an old rig), it is incredibly easy and powerful when it comes to container and VM deployment. The management options seem to be endless.

          Your point is very solid, which is why I’m contemplating segregating UnRaid and Proxmox onto two separate rigs as opposed to virtualizing UnRaid.

          These are hard decisions. Keep just one rig and spend far more time (and probably some migraines) configuring this, or build a new rig for Proxmox and migrate all my containers and VMs to it, which is faster but comes at a higher monetary cost, including power consumption.

          • just_another_person@lemmy.world · 7 months ago

            Just get a separate host for whatever VM stuff you want. You won’t need to worry about messing up anything related to storage, AND you’ll be able to mess with all the networking stuff without impacting your NAS.

            If you’re just trying to run some simple services, just get a $300 Ryzen mini PC. Plenty powerful for what it sounds like you’re looking to do.

            • youmaynotknowOP · 7 months ago

              Yeah. I told my wife what I wanted to do, and she actually would rather have me spend the money than risk spending too much time if and when I break something. I’m thinking a mini PC with a Ryzen 9, or a Ryzen 7 Venus model, set up with a 4TB NVMe. That should do the trick. It’s a bit over 300 bucks, but it will be a bit more future-proof. 64GB DDR5, and fire it up.

      • Pyrosis@lemmy.world · 7 months ago

        Have you considered the increase in disk I/O, and that hypervisors prefer to be in control of all hardware? Including disks…

        If you are set on Proxmox, consider that it can share your data directly itself. This can be made easy with Cockpit and the ZFS plugin. The plugin helps if you have existing pools. Both can be installed directly on Proxmox and present a separate web UI with different options for system management.

        The safe things to use here are the file-sharing and pool-management operations. Basically, use the Proxmox web UI for everything it permits first.
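        A rough sketch of that setup on the Proxmox host; Cockpit itself comes from the Debian repos, while the ZFS plugin is 45Drives’ cockpit-zfs-manager, copied in from its Git repo (repo location assumed current, so double-check its docs):

        ```shell
        # Cockpit from the standard Debian repos (Proxmox is Debian underneath)
        apt update && apt install -y cockpit

        # ZFS plugin: drop the cockpit-zfs-manager module into Cockpit's module dir
        git clone https://github.com/45drives/cockpit-zfs-manager.git
        cp -r cockpit-zfs-manager/zfs /usr/share/cockpit/
        ```

        Cockpit’s web UI then listens on port 9090, separate from the Proxmox UI on 8006.
        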

        Either way have fun.

        • youmaynotknowOP · 7 months ago

          I actually never considered this. If I’m understanding you correctly, it would render UnRaid unnecessary.

          This is great info. I’m going to fit my current Proxmox test rig with a few disks I have on hand (old, small drives replaced over the years that still work) and test this option first. It might make things easier.

          If this works out, I can still keep the server I set up off-site to mirror my storage, right? Even if that is still UnRaid? I need more coffee.

          • Pyrosis@lemmy.world · 7 months ago

            Yup, you can. In fact, you likely should, and you’ll probably find disk I/O improving dramatically compared to your original plan. In my opinion it’s better to let the hypervisor manage disk operations, which means it should also share files over SMB and NFS, especially if you are already considering NAS-type operations.

            Since Proxmox supports ZFS out of the box, along with Btrfs and even XFS, you have a myriad of options. Combine that with Cockpit and you have a nice management interface.

            I went the ZFS route because I’m familiar with it and I appreciate its native sharing options built into the filesystem. It’s cool to have the option to create a new dataset off the pool and pass it directly into a new LXC container.
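            That last trick is just a dataset plus a bind mount point; a sketch with hypothetical pool, dataset, and container names:

            ```shell
            # Create a new dataset off the existing pool
            zfs create tank/media

            # Bind-mount it into LXC container 101 at /mnt/media
            pct set 101 -mp0 /tank/media,mp=/mnt/media
            ```
            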

            • youmaynotknowOP · 7 months ago

              I’m very inclined to use this method instead.

              I would like to ask for some suggestions on the initial process to migrate the data from UnRaid.

              Considering that:

              • My disk pool is made up of two 10TB disks, for a total of 20TB
              • It also has a 10TB parity disk
              • The pool is using just ~6TB of the storage

              The option I see is:

              • Get another 10TB disk
              • Clear the parity drive and copy my data from the pool to that disk for the migration
              • Configure the pool disks as RAIDZ and, once that’s complete, use the other two disks for parity

              Or I bite the bullet and get twelve brand-new 10TB disks to make it RAIDZ2, for a storage pool of 40TB (35 usable?). I’m thinking four groups of three disks each should do the trick. Then use the same method to migrate my data.
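              If it helps to see it, the first option would look roughly like this (device names and source path hypothetical, assuming the old UnRaid data is reachable from the new host):

              ```shell
              # New 3-disk RAIDZ pool from the freed disks (ashift=12 for 4K-sector spinners)
              zpool create -o ashift=12 tank raidz /dev/sdb /dev/sdc /dev/sdd

              # Copy the ~6TB over, preserving attributes; re-run until it completes clean
              rsync -aHAX --progress /mnt/unraid-data/ /tank/
              ```
              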

              With 64GB of ECC RAM, I should get pretty swift storage IOPS that way.

              • Pyrosis@lemmy.world · 7 months ago

                Another thing to keep in mind with ZFS is that underlying VM disks will perform better if the pool is a mirror or a stripe of mirrors. Z1/Z2-type pools are better for media and files; VM disk I/O improves dramatically on the mirror-style layouts. Just passing along what I’ve learned over time optimizing systems.
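                In other words, for VM storage you’d lean toward something like this rather than RAIDZ (pool and device names hypothetical):

                ```shell
                # RAID10-style pool: two mirrored pairs striped together, good for VM disk I/O
                zpool create -o ashift=12 vmpool \
                  mirror /dev/sda /dev/sdb \
                  mirror /dev/sdc /dev/sdd
                ```
                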

                • youmaynotknowOP · 7 months ago

                  I’ll be studying that link you sent me deeply before I start my adventure here.

                  I didn’t know this rabbit hole was so deep. Love it!

              • Pyrosis@lemmy.world · 7 months ago

                Bookmark this if you utilize zfs at all. It will serve you well.

                https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

                You will be amazed at the ZFS performance possible in Proxmox with all the tuning available. If this is going to be an existing ZFS pool, keep in mind it’s easier to just install Proxmox with the ZFS option and let it create a ZFS rpool during setup. For the rpool, tweak a couple of options: make sure ashift is at least 12 during the install, or 13 if you are using some crazy-fast SSD as the Proxmox disk for the rpool.

                It needs to be 12 for a modern spinner, and that’s probably a good setting for most SSDs too. Do not go over 12 if it’s a spinning disk.

                Beyond that, you can directly import an existing ZFS pool into Proxmox with a single import command, assuming you have one.
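                The import itself is a one-liner (pool name hypothetical):

                ```shell
                # Scan attached disks for importable pools, then import by name
                zpool import
                zpool import tank

                # Sanity-check the sector-size setting on the imported pool
                zpool get ashift tank
                ```
                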

                In this scenario zfs would be fully maintaining disk operations for both an rpool and a media pool.

                You should consider tweaking a couple of things to really improve performance, via the guide I linked.

                Proxmox VMs/zvols live in their own dataset. Before you start getting too crazy creating VMs, make sure you are taking advantage of all the performance tweaks you can. By default Proxmox sets the record size for all datasets to 128k. qcow2, raw, and even zvols will benefit from a record size of 64k, because it tends to improve the performance of the underlying guest filesystems like ext4, XFS, even UFS. IMO it’s silly to create VM filesystems like Btrfs if your VM is sitting on top of a CoW filesystem.
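                For file-backed images that’s one property on the dataset holding them (rpool/data is the usual Proxmox default on a ZFS install; for zvols the equivalent knob is volblocksize, which is set at creation time):

                ```shell
                # 64k records for the dataset holding VM disk images
                zfs set recordsize=64k rpool/data
                zfs get recordsize rpool/data
                ```
                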

                Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for ZFS. The newer zstd is pretty good but can slow things down a bit for active operations like live VM disks. So make sure your default compression is lz4 for datasets with VM disks; honestly, it’s just a good default to specify for the entire pool. You can select other compression for datasets with more static data.

                If you have a media dataset full of files like music, videos, and pictures, setting a record size of 1M will heavily improve disk I/O.
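                Both tweaks are single zfs commands (pool and dataset names hypothetical):

                ```shell
                # lz4 as the pool-wide default compression
                zfs set compression=lz4 tank

                # Large records for a dataset of big, mostly-static media files
                zfs create -o recordsize=1M tank/media
                ```
                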

                By default, Proxmox will grab half of your memory for the ZFS ARC. Make sure you change that after install: it’s a modprobe config file that defines zfs_arc_max in bytes. Set the max to something more reasonable if you have 64 gigs of memory. You can also define zfs_arc_min.
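                Concretely, that’s a module option in a modprobe config file; a sketch capping the ARC at 16 GiB on a 64 GB host (the exact ceiling is your call):

                ```shell
                # 16 GiB expressed in bytes for zfs_arc_max
                ARC_MAX=$((16 * 1024 * 1024 * 1024))
                echo "$ARC_MAX"   # 17179869184

                # Persist it (as root), then rebuild the initramfs so it applies at boot:
                # echo "options zfs zfs_arc_max=$ARC_MAX" > /etc/modprobe.d/zfs.conf
                # update-initramfs -u
                ```
                
                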

                Some other huge improvements? If you are using an SSD for your Proxmox install, I highly recommend installing log2ram on the hypervisor. It stops the constant log writes to your SSD, syncing logs to disk on a timer and at shutdown/reboot. Migrating /tmp and /var/tmp to tmpfs is also a big performance and SSD-lifespan improvement.
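                The tmpfs part is just two fstab lines; log2ram itself comes from a third-party Debian repo (azlux’s, as I recall), so check its docs for the repo setup:

                ```
                # /etc/fstab additions to keep temp-file churn off the SSD
                tmpfs  /tmp      tmpfs  defaults,nosuid,nodev  0  0
                tmpfs  /var/tmp  tmpfs  defaults,nosuid,nodev  0  0
                ```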

                So many knobs to turn. I hope you have fun playing with this.

                • youmaynotknowOP · 7 months ago

                  Thanks so much.

                  All this info brought me back to the drawing board.

                  This led me to start searching for new components, as I’m pretty sure that I will want to build a new rig and just probably donate my current box.

                  Thank you, I really appreciate it. My bank account, not so much 🤣🤣

        • glasgitarrewelt@feddit.de · 7 months ago

          That sounds like a great idea.

          At the moment I am using OpenMediaVault as a VM within Proxmox, and I pass my HDDs through to this VM. OpenMediaVault lets me do all the stuff I want: sharing folders via SSH and NFS, plus RAID management.
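          For context, I do the per-disk passthrough with qm set, using the stable by-id paths (VM ID and disk serial below are illustrative):

          ```shell
          # Attach a whole physical disk to VM 100 as its scsi1 device
          qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL
          ```
          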

          Do you know if I can do the same with Proxmox directly? Do you maybe have a link where this approach is described in detail?

          • Pyrosis@lemmy.world · 7 months ago

            At its core, Cockpit is like a modern-day Webmin that allows full system management. So yes, it can help with creating RAID devices and even LVMs. It can help with mount points and encryption as well.

            I do know it can share whatever you like over SMB and NFS. Just have a look at the plugins.

            As for Proxmox, it’s just Debian underneath, and that Debian happens to be optimized for virtualization with native ZFS support baked in.

            https://cockpit-project.org/applications