All the cloud apps I've seen advertise "easy installs," but I always end up with 30 different problems I can't solve. Are there any cloud apps I can just install on my Linux Mint (Ubuntu-based) server without issues?

  • dev_all_the_ops@alien.topB · 8 months ago

    CasaOS is about as easy as it gets.

    curl -fsSL https://get.casaos.io | sudo bash  
    

    It provides a GUI front end for Docker, and you can install it on any Debian-based system (which Mint is). Combine that with the Portainer app and there isn't much you can't do.

    • BearOfaTime@lemm.ee · 8 months ago

      Oh wow, just glancing at the GitHub page, this looks really intriguing. It's like a front end for assembling your own cloud from existing apps. They even mention running it on older hardware.

      You may have just saved me some effort.

      Thanks!

  • lesigh@alien.topB · 8 months ago

    Docker Compose files are pretty easy and straightforward. Find a service and google "x docker compose".
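
    For example, here's about the smallest compose setup that does something useful. nginx is just a stand-in; the image, port 8080, and the ./html directory are all placeholders to swap for whatever the service's docs say:

    ```shell
    mkdir -p ~/compose-demo && cd ~/compose-demo
    # write a minimal compose file; "services:" is the only required top-level key
    cat > docker-compose.yml <<'EOF'
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"                          # host:container
        volumes:
          - ./html:/usr/share/nginx/html:ro    # host dir mapped into the container
        restart: unless-stopped
    EOF
    # docker compose up -d     # starts it; "docker compose down" stops it
    ```

    Once you have one of these working, every other service is the same file with different image/ports/volumes.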

  • linkthepirate@alien.topB · 8 months ago

    Check out the Docker container filebrowser/filebrowser. Map that drive to /srv and it will act like a cloud web file browser. There's no sync, but you can download/upload and share links.

    Though syncthing would work for that.
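
    A sketch of that setup (the published port 8081 and the database location are my assumptions, not gospel). One gotcha: the database file has to exist before you map it, or Docker creates a directory in its place:

    ```shell
    mkdir -p "$HOME/filebrowser"
    touch "$HOME/filebrowser/filebrowser.db"   # pre-create so Docker maps a file, not a dir
    # guard: only launch if Docker is actually installed
    if command -v docker >/dev/null 2>&1; then
      docker run -d \
        --name filebrowser \
        --restart unless-stopped \
        -v /srv:/srv \
        -v "$HOME/filebrowser/filebrowser.db":/database/filebrowser.db \
        -p 8081:80 \
        filebrowser/filebrowser
    fi
    ```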

  • TheBoatyMcBoatFace@alien.topB · 8 months ago

    Install Docker and then Portainer. This will make your life so much easier. I'll share a guide in the AM when I'm back at my computer.
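
    Until then, the short version from memory (this is the standard Portainer CE install; double-check the flags against their docs):

    ```shell
    # guard: skip cleanly if Docker isn't installed yet
    if command -v docker >/dev/null 2>&1; then
      docker volume create portainer_data     # named volume for Portainer's own data
      docker run -d \
        -p 9443:9443 \
        --name portainer \
        --restart always \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v portainer_data:/data \
        portainer/portainer-ce:latest
    fi
    PORTAINER_URL="https://localhost:9443"    # the web UI once it's up
    echo "Portainer UI: $PORTAINER_URL"
    ```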

  • Still-Snow-3743@alien.topB · 8 months ago

    Your whole life becomes much simpler when you use docker.

    Elevator pitch: Docker containers are preconfigured services that run isolated from the rest of your system and only expose the individual directories you map into the container. Those directories are the persistence part of the application and survive a restart of the container or the host system. Just back up your scripts and the data directories and you have backed up your entire server.
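
    That backup can be as dumb as a tar of the parent directory. A sketch, assuming a layout like ~/docker where each service keeps its run script and data dirs side by side:

    ```shell
    SRC_DIR="$HOME/docker"                     # assumed parent dir of scripts + data dirs
    mkdir -p "$SRC_DIR"                        # created here so the sketch is self-contained
    BACKUP_FILE="$HOME/backups/docker-$(date +%F).tar.gz"
    mkdir -p "$(dirname "$BACKUP_FILE")"
    # -C makes paths inside the archive relative to $HOME
    tar czf "$BACKUP_FILE" -C "$HOME" docker
    ```

    Restoring a server is then: install Docker, untar, rerun the scripts.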

    I have a few scripts as examples. The line cd "$(dirname "$0")" changes to the directory the script is stored in, so data directories are created and mapped relative to wherever the script lives.

    The Let's Encrypt nginx proxy companion sets up a single listener for HTTP and HTTPS traffic, creates virtual hosts automatically, and handles SSL certificates, all automatically.

    First, you need the nginx proxy and its Let's Encrypt companion:

    #!/bin/bash

    cd "$(dirname "$0")"

    docker run --detach \
    --restart always \
    --name nginx-proxy \
    --publish 80:80 \
    --publish 443:443 \
    --volume $(pwd)/certs:/etc/nginx/certs \
    --volume $(pwd)/vhost:/etc/nginx/vhost.d \
    --volume $(pwd)/conf:/etc/nginx/conf.d \
    --volume $(pwd)/html:/usr/share/nginx/html \
    --volume /var/run/docker.sock:/tmp/docker.sock:ro \
    --volume $(pwd)/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro \
    --volume $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
    --volume $(pwd)/acme:/etc/acme.sh \
    jwilder/nginx-proxy

    docker run --detach \
    --restart always \
    --name nginx-proxy-letsencrypt \
    --volumes-from nginx-proxy \
    --volume /var/run/docker.sock:/var/run/docker.sock:ro \
    --env "DEFAULT_EMAIL=YOUR_EMAIL_ADDRESS_GOES_HERE@MYDOMAIN.COM" \
    jrcs/letsencrypt-nginx-proxy-companion

    Then each service can be started with a similar docker run command plus a few extra environment variables. Here is one for Nextcloud:

    docker run -d \
    --name nextcloud \
    --hostname cloud.MYDOMAIN.COM \
    -v $(pwd)/data:/var/www/html \
    -v $(pwd)/php.ini:/usr/local/etc/php/conf.d/zzz-custom.ini \
    --env "VIRTUAL_HOST=cloud.MYDOMAIN.COM" \
    --env "LETSENCRYPT_HOST=cloud.MYDOMAIN.COM" \
    --env "VIRTUAL_PROTO=http" \
    --env "VIRTUAL_PORT=80" \
    --env "OVERWRITEHOST=cloud.MYDOMAIN.COM" \
    --env "OVERWRITEPORT=443" \
    --env "OVERWRITEPROTOCOL=https" \
    --restart unless-stopped \
    nextcloud:25.0.0

    And Plex (/dev/dri is Intel Quick Sync, for hardware transcoding):

    docker run \
    --device /dev/dri:/dev/dri \
    --restart always \
    -d \
    --name plex \
    --network host \
    -e TZ="America/Chicago" \
    -e PLEX_CLAIM="claim-somerandomcharactershere" \
    -v $(pwd)/config:/config \
    -v /my/media/directory/on/host/system:/media \
    plexinc/pms-docker
    

    Obsidian:

    docker run --rm -d \
    --name obsidian \
    -v $(pwd)/vaults:/vaults \
    -v $(pwd)/config:/config \
    --env "VIRTUAL_HOST=obsidian.MYDOMAIN.COM" \
    --env "LETSENCRYPT_HOST=obsidian.MYDOMAIN.COM" \
    --env "VIRTUAL_PROTO=http" \
    --env "VIRTUAL_PORT=8080" \
    ghcr.io/sytone/obsidian-remote:latest

  • Docccc@alien.topB · 8 months ago

    "I don't want to spend money on cloud services and I don't want to spend time on self-hosting." Pick one, buddy.

  • DearJudge@alien.topB · 8 months ago

    The answer is Docker and docker-compose. The problem you're describing is the reason they exist. Each app is isolated and runs as though it has its own dedicated system, but you can map directories and ports in to keep data persistent and make sure it all just works. That includes mapping in your whole HDD, or even /dev if you so desire (don't do this). It honestly makes it trivial to get most things up and running.
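
    The mapping bit in one concrete line, using nginx purely as a stand-in (the host paths and port are arbitrary):

    ```shell
    # create a host directory and some content to map into the container
    mkdir -p /tmp/www && echo '<h1>hello</h1>' > /tmp/www/index.html
    # -v maps a host dir onto a container path; -p maps a host port to a container port
    if command -v docker >/dev/null 2>&1; then
      docker run -d --name demo-web \
        -v /tmp/www:/usr/share/nginx/html:ro \
        -p 8082:80 \
        nginx:alpine
    fi
    ```

    Delete the container and /tmp/www is untouched; that's the persistence model in a nutshell.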