• Edit: I set the machine to work last night running memtester and a read-only badblocks pass; both tests came back clean, so I assumed I was in the clear. Today, wanting to be extra sure, I ran a read-write badblocks test and watched dmesg while it worked. I got the same errors, this time on ata3.00. Given that the memory test came back clean, and smartctl came back clean as well, I can only assume the problem is with the ata module, or somewhere between the CPU and the SATA bus. I'll be doing a BIOS update this morning and then trying again, but it seems to me like this machine was a bad purchase. I'll see what options I have for a replacement.

  • Edit 2: I retract my last statement. It appears that only one of the drives is still having issues: the SSD from the original build. All write interactions with the SSD produce I/O errors (including re-partitioning the drive), while there appear to be no errors reading from or writing to the HDD. Still unsure what caused the issue on the HDD. Still testing (running a read-write badblocks pass on the HDD; I might try to reproduce the issue under heavy load). Safe to say the SSD needs repair or to be pitched. I'm curious whether the SSD itself got damaged, which would explain why the issue remains after it was zeroed out and re-written, and why the HDD now seems fine. Or maybe multiple SATA ports have failed now?


I have no idea if this is the forum to ask these types of questions, but it felt a bit like a murder mystery that would be fun to solve. Please let me know if this type of post is unwelcome and I will immediately take it down and return to lurking.

Background:

I am very new to Linux. Last week I purchased a cheap refurbished headless desktop so I could build a home media server, as well as play around with VMs and programming projects. This is my first real exposure to Linux, but I consider myself otherwise pretty tech-savvy (I dabble in Python scripting in my spare time, but not much beyond that).

This week, I finally got around to getting the server software installed and operating (see details of the build below). Plex was successfully pulling from my media storage and streaming with no problems. As I was getting the docker containers up, I started getting "not enough storage" errors on new installs. I tried purging docker a couple of times and still couldn't proceed, so I attempted to expand the virtual storage in the VM. I definitely messed this up: Plex immediately stopped working, and no files were visible on the share anymore. It looked to me as if it had tried to take storage from the SMB share to add to the system-files partition. I/O errors on the OMV virtual machine for days.
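In hindsight, the safe way to do that expansion is probably something like the sketch below. The VM ID, disk name, and partition layout are assumptions about my setup, and every command sits behind an echo so nothing runs by accident:

```shell
# Sketch of growing a VM disk the right way: grow the virtual disk on
# the Proxmox host first, then the partition and filesystem inside the
# guest. VMID, disk name, and partition are assumptions.
VMID=130
DISK=scsi0

# Host side: qm resize can only grow a disk, never shrink it.
echo "qm resize $VMID $DISK +16G"

# Guest side (run inside the VM), assuming a plain ext4 root on sda3:
echo "growpart /dev/sda 3"
echo "resize2fs /dev/sda3"
```

Drop the echoes to actually run it; the key point is that the share's storage is never touched.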

Take two.

I got a new HDD (so I could keep working while I attempted recovery on the SSD) and got everything back up (created whole new VMs for docker and OMV). I gave the docker VM more storage this time (I think I was just reckless with my package downloads anyway) and made sure the SMB share was properly mounted. As I got the download client running (it made a few downloads), I noticed the OMV virtual machine redlining on memory in the Proxmox window. Thought, "uh oh, I should fix that." I tried taking everything down so I could reboot OMV with a bigger memory allocation, but the shutdown process hung on the OMV VM. I made sure all my devices on the network were disconnected, then stopped the VM from the Proxmox window.

On OMV reboot, I noticed all kinds of I/O errors on both the virtual boot drive and the mounted SSD. I could still see files in the share from my LAN devices, but any attempt to interact with the folder stalled and errored out.

I powered down all the VMs, and now I'm trying to figure out where I went wrong. I'm tempted to abandon the VMs and install everything directly on Ubuntu, but I like the flexibility of having VMs to spin up new OSes and try things out. The added complexity is obviously over my head, but if I can understand it better I'll give it another go.

Here’s the build info:

Build:

  • HP ProDesk 600 G1
  • Intel i5
  • upgraded 32GB after-market DDR3-1600 Patriot RAM
  • KingFlash 250GB SSD
  • WD 4TB SSD (originally an NTFS drive from my Windows PC with ~2TB of existing data)
  • WD 4TB HDD (bought after the SSD corrupted, so I could get the server back up while I dealt with the SSD)
  • 500Mbps Ethernet connection

Hypervisor

  • Proxmox (latest), Ubuntu kernel
  • VM110: Ubuntu 22.04.3 live-server amd64, OpenMediaVault 6.5.0
  • VM130: Ubuntu 22.04.3 live-server, docker engine, Portainer
    • Containers: Gluetun, qBittorrent, Sonarr, Radarr, Prowlarr
  • LXC101: Ubuntu 22.04.3, Plex server
  • Allocations
    • VM110: 4GB memory, 2 cores (ballooning and swap ON)
    • VM130: 30GB memory, 4 cores (ballooning and swap ON)

Shared Media Architecture (attempt 1)

  • Direct-mounted the WD SSD to VM110. Partitioned and formatted the filesystem inside the GUI, created a folder share, set permissions for my share user, and shared it over SMB/CIFS
  • Bind-mounted the shared folder to a local folder in VM130 (/media/data)
  • Passed the mounted folder to the necessary docker containers as volumes in the docker-compose file (e.g. volumes: /media/data:/data, etc.)
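For concreteness, the VM130 side of that plumbing looked roughly like this. The IP and share name match the errors in my logs below, but the mount options are an approximation of what I used, and the commands are echoed rather than run:

```shell
# Sketch of mounting the OMV SMB share inside VM130 and handing it to
# docker. Mount options are an approximation of my setup.
SHARE=//192.168.0.135/plex_media
MNT=/media/data

echo "mkdir -p $MNT"
echo "mount -t cifs $SHARE $MNT -o credentials=/etc/smb-creds,uid=1000,gid=1000"

# The mount point then goes to containers in docker-compose.yml as:
#   volumes:
#     - /media/data:/data
```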

No shame in being told I did something incredibly dumb; I'm here to learn, anyway. Maybe just not in a way that destroys 6 months of DVD rips in the process.

  • @mortrek
    23
    7 months ago

    If you are getting actual hardware/SATA errors on the host (I'm not sure that's exactly what's happening from your description), and multiple drives have had a similar problem, I'd suspect the SATA cable or the controller/motherboard. Intel had a lot of weird SATA issues on their older chipsets, so I'd also recommend making sure it has the latest BIOS update. Could you be more specific about what kind of hardware errors are showing up? Like, maybe post parts of the logs.

    • Human Crayon
      3
      7 months ago

      Came here to say I had something similar happen with my NAS a year back. I thought it was the drives, then the controller they were attached to. It turned out to be some crappy blue breakout cables causing the drives to error out and disconnect.

      Ordered new breakout cables from a different brand and have had zero errors since.

    • archomrade [he/him] OP
      2
      7 months ago

      I'm going back and looking, but I may have deleted the logs for the VMs when I deleted the VM and started the repair.

      Here's a readout of one of the instances of trying to shut down the VM and having to ssh in and 'force' a shutdown (I didn't think I was forcing it from the terminal window, but maybe I did?). It doesn't give much more information.

      /var/log/pve/tasks/D/UPID:pve1:00000AD5:0000AEFC:652DD26D:qmshutdown:110:root@pam::TASK ERROR: VM quit/powerdown failed - got timeout

      I'm still looking for more detailed logs, but I'm starting to wonder if you're right. This makes me sadder than if I'd messed something up myself, because fixing it would mean buying more hardware :(
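      For reference, the difference between asking nicely and forcing it from the CLI is roughly this (110 is my OMV VM; commands are echoed, not run):

```shell
# Proxmox CLI: graceful shutdown vs. hard stop for VM 110 (my OMV VM).
VMID=110

# Graceful ACPI shutdown -- the call that timed out in the log:
echo "qm shutdown $VMID --timeout 60"

# Hard stop, roughly equivalent to pulling the power:
echo "qm stop $VMID"
```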

      Oop, just found some better ones in journalctl. These were happening way earlier than I thought:

      Oct 13 17:28:42 pve1 kernel: SMB2_read: 36 callbacks suppressed
      Oct 13 17:28:42 pve1 kernel: CIFS: VFS: Send error in read = -5
      Oct 13 17:28:42 pve1 kernel: CIFS: Status code returned 0xc0000185 STATUS_IO_DEVICE_ERROR
      Oct 13 17:28:42 pve1 kernel: CIFS: VFS: Send error in read = -5
      Oct 13 17:28:42 pve1 kernel: CIFS: Status code returned 0xc0000185 STATUS_IO_DEVICE_ERROR
      

      there’s a million of these.

      and here are some of the ones I was seeing when I popped open the console while it was happening. pve1 was the host mounting the share from the OMV VM, I think:

      Oct 14 23:14:14 pve1 kernel: I/O error, dev sdb, sector 1801348384 op 0x0:(READ) flags 0x1000000 phys_seg >

      and a bunch of these; it looks like they show up after a lot of I/O errors, once the system can't reach the SMB server anymore:

      Oct 16 13:25:04 pve1 kernel: CIFS: VFS: \\192.168.0.135\plex_media BAD_NETWORK_NAME: \\192.168.0.135\plex_media

      Here are some from yesterday, probably from around the time I was getting the new HDD up and running again. These call out the SATA port specifically, and they repeat in a loop:

      Oct 18 21:52:22 pve1 kernel: ata4.00: configured for UDMA/133
      Oct 18 21:52:22 pve1 kernel: ata4: EH complete
      Oct 18 21:52:22 pve1 kernel: ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
      Oct 18 21:52:22 pve1 kernel: ata4.00: irq_stat 0x40000001
      Oct 18 21:52:22 pve1 kernel: ata4.00: failed command: WRITE DMA EXT
      Oct 18 21:52:22 pve1 kernel: ata4.00: cmd 35/00:a8:00:38:ff/00:00:27:01:00/e0 tag 18 dma 86016 out
                                            res 51/04:a8:00:38:ff/00:00:27:01:00/e0 Emask 0x1 (device error)
      Oct 18 21:52:22 pve1 kernel: ata4.00: status: { DRDY ERR }
      Oct 18 21:52:22 pve1 kernel: ata4.00: error: { ABRT }
      

      and here’s some more implicating the sdc device:

      Oct 18 21:57:23 pve1 kernel: sd 3:0:0:0: [sdc] tag#25 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=2s
      Oct 18 21:57:23 pve1 kernel: sd 3:0:0:0: [sdc] tag#25 Sense Key : Illegal Request [current]
      Oct 18 21:57:23 pve1 kernel: sd 3:0:0:0: [sdc] tag#25 Add. Sense: Unaligned write command
      Oct 18 21:57:23 pve1 kernel: sd 3:0:0:0: [sdc] tag#25 CDB: Write(16) 8a 00 00 00 00 00 e8 c4 08 00 00 00 00 08 00 00
      Oct 18 21:57:23 pve1 kernel: I/O error, dev sdc, sector 3905161216 op 0x1:(WRITE) flags 0x1008800 phys_seg 1 prio class 2
      

      These are actually kind of painting a picture. The culprit looks like that SATA port; I'll see if I can switch to another one and do some test writes. Maybe that'll fix it.

      • Atemu
        2
        7 months ago

        The CIFS errors and logs inside the VMs are rather uninteresting, as they're just passing through the underlying hardware's issue.

        The logs presented here definitely indicate an issue between the CPU and the drives. It could also be RAM, but I'd check the SATA cables and controller first.

        • archomrade [he/him] OP
          4
          edit-2
          7 months ago

          Yup, after scrubbing the log file, the problem port is ONLY ATA port 4.00. No other ports have thrown errors, BUT, I just did a block check on all the boot drive partitions, and it looks like they all have bad superblocks… Not sure if the issue is with that specific SATA port, or if it originates in the memory, or if the bad blocks got propagated to the other drives? Unclear.

          Oct 19 09:59:17 pve1 kernel: ata4.00: cmd 35/00:08:00:08:c4/00:00:e8:00:00/e0 tag 15 dma 4096 out
          Oct 19 09:59:17 pve1 kernel: ata4.00: status: { DRDY ERR }
          Oct 19 09:59:17 pve1 kernel: ata4.00: error: { ABRT }
          Oct 19 09:59:17 pve1 kernel: ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
          Oct 19 09:59:17 pve1 kernel: ata4.00: irq_stat 0x40000001
          Oct 19 09:59:17 pve1 kernel: ata4.00: failed command: WRITE DMA EXT
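          For anyone curious, the superblock check I mean is roughly this. It assumes ext4 and that the partition is /dev/sda1 (a placeholder), and the commands are echoed so nothing runs by accident:

```shell
# Sketch of checking a partition's superblocks. The device is an
# assumption -- confirm the real one with lsblk first.
PART=/dev/sda1

# List primary and backup superblock locations without writing anything:
echo "dumpe2fs $PART | grep -i superblock"

# Read-only consistency check (-n answers 'no' to every repair prompt):
echo "e2fsck -n $PART"

# If the primary superblock is bad, point e2fsck at a backup copy:
echo "e2fsck -n -b 32768 $PART"
```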
          

          I'll do:

          • a memory test
          • swap the drives' ports to specifically avoid port 4.00
          • a read-write test to make sure the issue doesn't reappear.
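          Roughly what I have in mind, for anyone following along. The device name is a placeholder, and since badblocks -w destroys everything on the drive, the commands are echoed rather than run:

```shell
# Sketch of the test plan above. DEV is a placeholder -- check lsblk.
DEV=/dev/sdb

# 1. Memory test: 3 passes over 2 GiB of RAM, run on the host.
echo "memtester 2G 3"

# 2. With the drive moved off ata4, a read-write surface test.
#    WARNING: -w overwrites the entire drive.
echo "badblocks -wsv $DEV"
# Non-destructive read-write alternative if the data still matters:
echo "badblocks -nsv $DEV"

# 3. SMART long self-test, then dump the attributes and error log:
echo "smartctl -t long $DEV"
echo "smartctl -a $DEV"
```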

          If none of the above solves the mystery, I suppose I can splurge on another junker and see if I have better luck with the next one. I just have to decide whether to wait for ddrescue to finish or just start now… Probably start now, on the off-chance I'm just creating more bad blocks on the backup.
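          The ddrescue run itself, for reference, is roughly this (source and destination paths are placeholders for my setup; the map file is what makes the run resumable):

```shell
# Sketch of the ddrescue recovery: failing SSD to an image file on the
# new HDD. Paths are placeholders; commands are echoed, not run.
SRC=/dev/sdb
IMG=/mnt/hdd/ssd-rescue.img
MAP=/mnt/hdd/ssd-rescue.map

# First pass: copy everything readable, skip bad areas quickly (-n):
echo "ddrescue -n $SRC $IMG $MAP"
# Second pass: retry just the bad areas, up to three times (-r3):
echo "ddrescue -r3 $SRC $IMG $MAP"
```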

            • archomrade [he/him] OP
              1
              7 months ago

              Fuck me, that test was damning. The read test was fine, but starting a read-write test revealed all the same I/O errors as before, this time on a different port.

              • @mortrek
                1
                7 months ago

                Have you tried running a different distro live from USB or something like that? It doesn't seem likely to help, but who knows…

                It's unlikely the kernel or other low-level code is the problem on 10-year-old Intel hardware, though. I've run numerous distros on numerous machines, many of them Intel-based, over the last couple of decades, and I've never had this kind of basic, low-level SATA problem that didn't turn out to be the cable or the controller. Oh, I just remembered: check the PSU as well if you can. A faulty PSU can have a bad rail or wire that leads to exactly these problems. If you have a known-good one lying around, depending on the motherboard, you could try temporarily hooking it up to the board and drives and see if anything changes.

                To eliminate Linux as a potential culprit, you could also install Windows (7, 8, 10, whatever) and see if it exhibits the same problems.

        • archomrade [he/him] OP
          1
          7 months ago

          Well, shit. Looks like the other SATA ports are having the same problem.

          I'm trying to get a hardware probe running, but what are the chances I need to replace the motherboard or the whole machine? It looks likely that the problem is upstream of the SATA drives themselves; I just don't know if it's worth swapping the CPU before ditching the machine entirely (I don't have a spare CPU lying around to test with). memtester came back clean after 5 passes.

          • Atemu
            1
            7 months ago

            I’d honestly just abandon the hardware. It’s not worth your time to deal with that.