Any helpful tips for general care and feeding I should be doing on a regular basis?
I know I need to keep an eye on updates and re-run my ansible setup from time to time to stay up to date.
But I have also been keeping an eye on my VPS metrics to see when/if I need to beef up the server.
One thing I am noticing is a steadily increasing disk utilization (which mostly makes sense, except it's rising a bit faster than I expected, since almost all media is linked from external sites rather than uploaded directly to my instance).
Anything I can do to manage that short of just adding more space? Like are there logs/cached content that need to be purged from time to time?
Thank you!
Just keep an eye on the GitHub page if you want to stay updated at all times. Other than that, just check up on your storage use from time to time. You can also set up a job to restart the server every once in a while, but that's not really necessary.
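If it helps, here are a couple of quick ways to eyeball storage from the shell (assuming a docker install with data under /srv/lemmy - adjust paths to your setup):
- Overall disk usage on the host
df -h
- Size of your Lemmy install dir (postgres data, pictrs, etc.)
du -sh /srv/lemmy/*
- Docker's own view of image/container/volume usage
docker system df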
The main cause for the steady rise in disk usage that I'm seeing is the `activities` table, which contains the full JSON of all ActivityPub messages seen by the instance. It appears Lemmy automatically removes entries older than 6 months, though.

Gotcha, thanks! That's good to know. Based on the originating ticket: https://github.com/LemmyNet/lemmy/issues/1133
Sounds like it might be safe for me to purge that table a bit more often as well.
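If you do end up trimming it by hand, something along these lines should work (just a sketch, not from the ticket - I'm assuming the table is named activity with a published timestamp column, so double-check with \d first, since names differ between Lemmy versions):
# assumes an 'activity' table with a 'published' column - verify with \d before running
docker-compose exec postgres psql -U lemmy -c "DELETE FROM activity WHERE published < now() - interval '30 days';"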
Dumb question: how are you profiling your DB (RE your mention of getting a better idea of which tables might be bloated)? Just SSHing into your box and connecting directly to the DB? Or are there other recommended workflows?
If anyone stumbles across this, the command to connect to the DB is (run from the root of your lemmy install - assuming a docker install):
docker-compose exec postgres psql -U lemmy
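And to see which tables are taking up the most space, this is standard Postgres (nothing Lemmy-specific), run here as a one-liner:
docker-compose exec postgres psql -U lemmy -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size FROM pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;"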
UPDATE:
If anyone else is running into consistently rising disk usage, I am pretty sure this is my issue (RE logs running with no cap):
Trying out ^ and will update with my findings if it helps.
After some tinkering, **yes**, this indeed was my issue. The logs for `pictrs` and `lemmy` in particular were between 3 and 8 GB after only a couple weeks of `info`-level logging.

Steps to fix (the post above has more detail, but I'm adding my full workflow in case it helps folks, since some of this wasn't super apparent to me) - these steps assume a docker/ansible install:
- SSH to your instance.
- Change to your instance install dir, most likely:
cd /srv/lemmy/{domain.name}
- List currently running containers
docker ps --format '{{.Names}}'
Now for each docker container name:
- Find the path/name of the associated log file:
docker inspect --format='{{.LogPath}}' {one of the container names from above}
- Optionally check the file size of the log
ls -lh {path to log file from the inspect command}
- Clear the log
truncate -s 0 {path to log file from the inspect command}
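If you'd rather not do that container by container, a quick loop handles all of them (a sketch - run it as root, since the json-file logs live under /var/lib/docker):
# truncate every running container's log in one pass
for name in $(docker ps --format '{{.Names}}'); do
  truncate -s 0 "$(docker inspect --format='{{.LogPath}}' "$name")"
done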
After you have cleared any logs you want to clear:
- Modify docker-compose.yml adding the following to each container:
logging:
  driver: "json-file"
  options:
    max-size: "100m"
- Recreate the containers so the new logging settings take effect (a plain docker-compose restart won't re-read docker-compose.yml)
docker-compose up -d
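- Optionally confirm the new log settings are applied to a container
docker inspect --format='{{.HostConfig.LogConfig}}' {one of the container names from above}
It should show json-file along with your max-size option.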