Any helpful tips for general care and feeding I should be doing on a regular basis?

I know I need to keep an eye on updates and re-run my ansible setup from time to time to stay up to date.

But I have also been keeping an eye on my VPS metrics to see when/if I need to beef up the server.

One thing I am noticing is steadily increasing disk utilization (which mostly makes sense, except it seems to be growing a bit faster than I expected, since almost all media is linked from external sites rather than uploaded directly to my instance).

Anything I can do to manage that short of just adding more space? Like are there logs/cached content that need to be purged from time to time?

Thank you!

  • RGB@lemmyfi.com · 1 year ago

    Just keep an eye on the GitHub page if you want to stay updated at all times. Other than that, just check up on your storage use from time to time. You can also set up a job to restart the server every once in a while, but that’s not really necessary.
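
    If you do want a scheduled restart, a root cron entry along these lines is one way to do it (the install path and schedule here are just examples, so adjust for your setup):

    # Example: restart the Lemmy containers every Sunday at 04:00
    0 4 * * 0  cd /srv/lemmy/{domain.name} && docker-compose restart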

    • OdiousStoogeOP · 1 year ago

      Gotcha, thanks! That's good to know. Based on the originating ticket: https://github.com/LemmyNet/lemmy/issues/1133

      Sounds like it might be safe for me to purge that table a bit more often as well.
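
      For my own reference, the kind of purge I have in mind looks like this. Note this assumes the table from that issue is activity with a published timestamp column, which may differ by Lemmy version, so check the schema first (\d activity in psql) and take a backup before running anything:

      -- Assumes an `activity` table with a `published` column; verify against your schema first
      DELETE FROM activity
      WHERE published < NOW() - INTERVAL '3 months';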

      Dumb question: how are you profiling your DB (re: your mention of getting a better idea of which tables might be bloated)? Just SSHing into your box and connecting directly to the DB, or are there other recommended workflows?

      • OdiousStoogeOP · 1 year ago

        If anyone stumbles across this, the command to connect to the DB is (run from the root of your lemmy install - assuming a docker install):

        docker-compose exec postgres psql -U lemmy
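
        And once connected, a plain Postgres catalog query like this shows which tables are taking the most space (nothing Lemmy-specific about it):

        -- Top 10 tables by total size (table + indexes + TOAST)
        SELECT relname AS table_name,
               pg_size_pretty(pg_total_relation_size(relid)) AS total_size
        FROM pg_catalog.pg_statio_user_tables
        ORDER BY pg_total_relation_size(relid) DESC
        LIMIT 10;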

  • OdiousStoogeOP · 1 year ago (edited)

    UPDATE:

    If anyone else is running into consistently rising disk usage, I am pretty sure this was my issue (re: logs running with no cap):

    https://lemmy.eus/post/172518

    Trying out ^ and will update with my findings if it helps.

    • OdiousStoogeOP · 1 year ago

      After some tinkering, **yes**, this was indeed my issue. The logs for pictrs and lemmy in particular were between 3 and 8 GB after only a couple of weeks of info-level logging.
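
      If you just want a quick sense of how big the container logs have gotten before touching anything, this works (assuming the default json-file log driver and the standard Docker data directory):

      # Total size of all container logs under the default json-file location
      sudo du -ch /var/lib/docker/containers/*/*-json.log | tail -n 1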

      Steps to fix (the post above has more detail, but I'm adding my full workflow in case it helps folks; some of this wasn't super apparent to me). These steps assume a docker/ansible install:

      1. SSH to your instance.

      2. Change to your instance install dir

      most likely: cd /srv/lemmy/{domain.name}

      3. List the currently running container names

      docker ps --format '{{.Names}}'

      Now for each docker container name:

      1. Find the path/name of the associated log file:

      docker inspect --format='{{.LogPath}}' {one of the container names from above}

      2. Optionally check the file size of the log

      ls -lh {path to log file from the inspect command}

      3. Clear the log (you will likely need sudo, since the Docker log files are owned by root)

      truncate -s 0 {path to log file from the inspect command}

      After you have cleared any logs you want to clear:

      1. Modify docker-compose.yml, adding the following under each service (there's a fuller sketch of where this goes after these steps):

      logging:
        driver: "json-file"
        options:
          max-size: "100m"
      2. Recreate the containers so the new logging options take effect

      docker-compose up -d

      (A plain docker-compose restart won't pick up changes to docker-compose.yml; the containers need to be recreated for the new log settings to apply.)
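
      For anyone unsure where the logging block actually goes, the end result in docker-compose.yml looks roughly like this (service names and the rest of each service's settings are placeholders from my install; adapt as needed):

      services:
        lemmy:
          # ...existing image/ports/volumes settings stay as-is...
          logging:
            driver: "json-file"
            options:
              max-size: "100m"
        pictrs:
          # ...existing settings...
          logging:
            driver: "json-file"
            options:
              max-size: "100m"

      Once the containers are back up, you can confirm the cap took effect with:

      docker inspect --format='{{.HostConfig.LogConfig}}' {one of the container names from above}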