This is my second try (original post: https://feddit.de/post/426890) at getting an answer; this time I'll be more specific about what I'm thinking of doing. I thought a more generalized question would be enough. Sorry for that.

A PeerTube server needs lots of storage, and many of the videos will hardly get any views. Storage space on a VPS is pretty expensive, and storage space in general isn't cheap. So my thought was to

have a disk at home (maybe an external disk on a Raspberry Pi) and a VPS.

The VPS only has a very limited amount of storage, but is otherwise perfectly able to run PeerTube. So why not have a virtual file system on the VPS that looks like it has the size of the home HDD, but uses a specified amount of VPS storage for caching? If someone watches a popular part of a popular video, the VPS can serve it from the local cache. If someone wants the video that nobody ever watches, that's not a problem either, since the uplink from home can easily deliver it without the video taking up precious VPS storage. Block caching would be best: with file caching, video files can be really big in some cases, so a single very long video would fill the cache even if only parts of it are needed.
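
One way to get the disk from home onto the VPS as a block device would be NBD (network block device). A rough sketch of what I mean (hostname, export name and device names are all made up):

    # on the Raspberry Pi: /etc/nbd-server/config
    [generic]
    [peertube]
    exportname = /dev/sda1

    # on the VPS: attach the export as a local block device
    nbd-client -N peertube home.example.org /dev/nbd0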

The remote storage doesn't need to be at home, of course; it could be cheap cloud storage. I know that PeerTube works with S3, but it will only move transcoded videos into a bucket and then serve them directly from there. I don't want that from home, and it also wouldn't use the upload bandwidth of the VPS for popular videos.
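
For reference, PeerTube's object storage support is configured in production.yaml roughly like this (key names are from memory and all values are placeholders, so check the sample config that ships with PeerTube):

    # production.yaml (excerpt, placeholder values)
    object_storage:
      enabled: true
      endpoint: 's3.example.com'
      credentials:
        access_key_id: 'XXX'
        secret_access_key: 'XXX'
      videos:
        bucket_name: 'peertube-videos'
      streaming_playlists:
        bucket_name: 'peertube-playlists'

Once a video is transcoded and moved into a bucket, clients fetch it straight from there, which is exactly the behaviour I want to avoid with a slow home uplink.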

Any thoughts? Good idea or not?

I have worked with bcache in the past and was always very impressed with the performance, so I think my scenario could really work.
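
For illustration, bcache layered over such a network block device could look roughly like this (device names and paths are made up, and I haven't tested this over a WAN):

    # backing device = the slow remote disk, cache device = a spare local SSD partition
    make-bcache -B /dev/nbd0
    make-bcache -C /dev/vda3
    # register both (udev usually does this automatically)
    echo /dev/nbd0 > /sys/fs/bcache/register
    echo /dev/vda3 > /sys/fs/bcache/register
    # attach the cache set; the UUID comes from bcache-super-show /dev/vda3
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    mkfs.ext4 /dev/bcache0
    mount /dev/bcache0 /var/lib/peertube/storage

One thing to keep in mind: if the NBD connection drops, bcache just sees I/O errors on its backing device, so the link would have to be very reliable.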

  • sexy_peach@feddit.de (OP) · edited · 2 years ago

    Okay, so I managed to do this with rclone and a simple SFTP connection. It's pretty slow though; I think rclone isn't a good solution for caching data over a slow connection. So I'd call this a partial success, at best.

    This is my rclone.service file:
    # /etc/systemd/system/rclone.service
    [Unit]
    Description=externalRemote (rclone)
    AssertPathIsDirectory=/mounting/directory
    # network mount: wait for the network to be up on boot
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=simple
    # --vfs-cache-mode full caches read data on local disk, capped at
    # roughly 15G (--vfs-cache-max-size); --vfs-cache-max-age 240000h keeps
    # cached chunks essentially forever, so only the size limit evicts
    ExecStart=/usr/bin/rclone mount \
            --config=/root/.config/rclone/rclone.conf \
            --allow-other \
            --vfs-fast-fingerprint \
            --dir-cache-time 1h \
            --vfs-cache-mode full \
            --vfs-cache-max-age 240000h \
            --vfs-cache-max-size 15G \
            --vfs-cache-poll-interval 5m \
            --vfs-read-chunk-size 1M \
            --vfs-read-chunk-size-limit 10M \
            --buffer-size 5M \
            rcloneRemote:/remote/directory /mounting/directory
    ExecStop=/bin/fusermount -u /mounting/directory
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=default.target
    
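    For completeness, the matching SFTP remote in rclone.conf looks like this (host, user and key path are placeholders):

    # /root/.config/rclone/rclone.conf
    [rcloneRemote]
    type = sftp
    host = home.example.org
    user = pi
    key_file = /root/.ssh/id_ed25519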

    If I have the time, I'm interested in trying NFS with FS-Cache. I think that might work better.
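
    What I have in mind is roughly this (hostname and paths are placeholders; the cache directory and limits live in /etc/cachefilesd.conf):

    # on the Pi: /etc/exports
    /srv/peertube  <vps-ip>(rw,no_subtree_check)

    # on the VPS: cachefilesd provides the on-disk FS-Cache backend
    apt install cachefilesd
    # on Debian you may also need RUN=yes in /etc/default/cachefilesd
    systemctl enable --now cachefilesd
    # the fsc mount option enables FS-Cache for this NFS mount
    mount -t nfs -o fsc home.example.org:/srv/peertube /mounting/directory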