I noticed my home server's SSD was running out of space, and it turned out to be my Jellyfin Docker container, which wasn't correctly clearing its transcode directory at /var/lib/jellyfin/transcodes.

I simply created a new directory on my media hard drive and bind mounted the transcode directory to it. Now Jellyfin has over 1 TB of free space it could theoretically clutter, so to prevent that I also created a cron job that deletes old files in case Jellyfin doesn't:

@daily /usr/bin/find /path/to/transcodes -mtime +1 -delete
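
For reference, the bind mount itself can be set up with a line in /etc/fstab (or as an extra volume mapping in the container definition); the paths here are just examples, not my actual layout:

/mnt/media/jellyfin-transcodes /var/lib/jellyfin/transcodes none bind 0 0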

Easy!

      • Novi@sh.itjust.works · 8 months ago

        tmpfs is the filesystem you are looking for. You can mount it like any other filesystem in /etc/fstab.

        tmpfs /path/to/transcode/dir tmpfs defaults 0 0
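
        If RAM usage is a concern, you can also cap the tmpfs with a size option (the 8G here is just an example value):

        tmpfs /path/to/transcode/dir tmpfs defaults,size=8G 0 0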

      • Shadow@lemmy.ca · 8 months ago

        You can just point it at /dev/shm as the transcoding folder for a quick and dirty fix.

        Otherwise you’d mount a tmpfs.
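
        For the /dev/shm route with Docker Compose, that's roughly an extra volume mapping like this (the container-side path is just a placeholder for whatever transcode path your Jellyfin is configured to use):

        volumes:
          - /dev/shm:/config/transcodes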

    • Dataprolet@lemmy.dbzer0.com (OP) · 8 months ago

      I have about a dozen people using my Jellyfin, and sometimes 3-4 of them watch something at the same time, which produces a lot of transcoding data. At the moment my transcode directory (which is cleaned every 24 hours) is almost 8 GB. I don't have the RAM for that.

  • entropicdrift@lemmy.sdf.org · 8 months ago

    Personally I have a secondary external SSD I use for my cache and transcode directories so that my transcodes aren’t throttled by being read from and written to the same disk.

    Also of note: Jellyfin has a built-in scheduled task to clear the transcodes directory. You can find it under Dashboard -> Scheduled Tasks -> Clean Transcode Directory. I have mine set to run every 24 hours.

    • Dataprolet@lemmy.dbzer0.com (OP) · 8 months ago

      Yeah, but the cleanup job doesn't seem to work reliably. I only noticed when my home server ran out of disk space and the transcode directory had grown to over 30 GB.

  • stuckgum · 8 months ago

    Why not write to RAM instead?

      • Dataprolet@lemmy.dbzer0.com (OP) · 8 months ago

        Every transcode can need as much disk space as the file you're playing. If a media file is bigger than your available RAM, the transcode will probably cause problems because you'll run out of memory.