OdiousStooge

  • 9 Posts
  • 16 Comments
Joined 2 years ago
Cake day: May 3rd, 2023

  • Huge thank you! I had a feeling something like this was going on but had no idea how to troubleshoot/fix.

    The logs for my pictrs and lemmy containers were the biggest, between 3 and 8 GB (significant for a smaller instance), after a couple of weeks.

    For anyone who finds this, in addition to what OP provided here, another command I found helpful (since I am a docker noob 🙂) for finding the names of the currently running containers:

    docker ps --format '{{.Names}}'
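
    To see how big each container's log actually is, something like this also works (a rough sketch, assuming the default json-file log driver and a root shell, since the log files live under /var/lib/docker):

    # print each running container's name and the size of its log file
    for c in $(docker ps --format '{{.Names}}'); do
      log=$(docker inspect --format '{{.LogPath}}' "$c")
      echo "$c: $(du -h "$log" | cut -f1)"
    done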



  • After some tinkering, **yes**, this was indeed my issue. The logs for pictrs and lemmy in particular were between 3 and 8 GB after only a couple of weeks of info-level logging.

    Steps to fix (the post above has more detail, but I'm adding my full workflow in case it helps folks; some of this wasn't super apparent to me). These steps assume a Docker/Ansible install:

    1. SSH to your instance.

    2. Change to your instance install dir

    most likely: cd /srv/lemmy/{domain.name}

    3. List the currently running containers:

    docker ps --format '{{.Names}}'

    Now, for each container name (there's a loop sketch after this list if you'd rather do them all at once):

    1. Find the path/name of the associated log file:

    docker inspect --format='{{.LogPath}}' {one of the container names from above}

    2. Optionally, check the size of the log file:

    ls -lh {path to log file from the inspect command}

    3. Clear the log:

    truncate -s 0 {path to log file from the inspect command}

    After you have cleared any logs you want to clear:

    1. Modify docker-compose.yml, adding the following to each service:

    logging:
      driver: "json-file"
      options:
        max-size: "100m"
    
    2. Recreate the containers so the new logging options take effect (a plain docker-compose restart won't pick up compose file changes):

    docker-compose up -d
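
    If you'd rather clear every container's log in one shot instead of repeating the per-container steps, a rough loop sketch (assuming the default json-file driver and a root shell):

    # truncate the log file of every running container
    for c in $(docker ps --format '{{.Names}}'); do
      truncate -s 0 "$(docker inspect --format '{{.LogPath}}' "$c")"
    done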








  • OdiousStooge to Technology@beehaw.org · Reddit’s API rug pull · 2 years ago

    Fireship is great. Feels like it’s the perfect mix of memery and well-researched hot takes.

    Hadn’t heard the note he makes about VCs before. Interesting… Defo assumed some of this was sparked by ChatGPT’s success though, and how much they leveraged the data API for training sets.



  • I flip-flop on TS. I really liked it when I was using Angular a bunch, as it felt like a first-class citizen there.

    In React projects it’s been a bit of a pain having to sift through starter kits / set up my own config for TS and such.

    I defo like the dynamism of JS, but at the end of the day, if I’m working on a project with other peeps, it’s nice to have some type safety and data contracts/type-ahead.








  • OdiousStooge to Fediverse · *Permanently Deleted* · 2 years ago

    Thanks! Yeah, I gotta figure that out. Something funky is going on.

    There are several communities in there atm but I’m getting some “odd” federation behavior.

    If you are interested in federating, there are two ways I have seen it work (again, a warning: it is NSFW 😀):

    • Use the search feature on your main instance and enter !butt_frenzy@booty.world; then you can subscribe.

    • Or, from your main instance, alter your URL to {your main instance}/c/butt_frenzy@booty.world. For example, if your main instance is lemmy.ml, that would be: https://lemmy.ml/c/butt_frenzy@booty.world

    ^ butt_frenzy is one of the communities on booty.world for example.

    Gotta be a better way, and I am probs doing something wrong. Will work on it. Open to feedback if anyone has any advice though haha.
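
    For the “odd” federation behavior, one sanity check is asking your instance whether it has actually pulled the community in yet. A rough sketch (hypothetical example; the exact API shape can differ between Lemmy versions, and you’d swap lemmy.ml for your own instance):

    # ask your instance whether it already knows about the remote community
    curl -s "https://lemmy.ml/api/v3/community?name=butt_frenzy@booty.world"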

    Thanks.





  • Nice! Thanks, your mention of the config.hjson makes me wonder lol. I probably goofed that too.

    I did the Ansible install, which I believe just adds orchestration on top of the Docker install. I’ll SSH in and try the docker-compose command.

    Do you know, if I did goof the email config, whether I can just tweak the config locally and then re-run Ansible? Or do I need to make some manual tweaks to the deployed solution? Or I suppose at this point it might be easier to just blow the instance away and start fresh.
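
    If re-running is the way to go, I’m assuming it looks roughly like this (a sketch based on the standard lemmy-ansible layout; the config path and flags may differ in your checkout):

    # from the local lemmy-ansible checkout, after editing the config.hjson template
    ansible-playbook -i inventory/hosts lemmy.yml --become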