I’m the sysadmin for beehaw.org, and our user base has almost tripled in 24 hours. Our site has been crashing repeatedly over this period.

We do have some volunteers who are trying to figure out a reasonable solution for when this happens again four weeks from now.

Do you have any recommendations?

  • @nutomic (admin)
    29
    11 months ago

    So far there are no major problems. The disk was filling up, but that’s probably unrelated. I guess beehaw grew a lot more in percentage terms than lemmy.ml during this time.

    The main things I can think of which might help are increasing the database pool size (in the config file) and the number of federation workers (under /admin). Also create a swapfile if you are running low on RAM; a rough sketch of both is below.
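
    For example (a sketch only; config option names vary between Lemmy versions, so check them against your own lemmy.hjson):

      # lemmy.hjson (assumed layout; verify against your version)
      database: {
        # number of connections in the pool; the default is small
        pool_size: 30
      }

      # create and enable a 4 GB swapfile (run as root)
      fallocate -l 4G /swapfile
      chmod 600 /swapfile
      mkswap /swapfile
      swapon /swapfile
      # keep it across reboots
      echo '/swapfile none swap sw 0 0' >> /etc/fstab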

    • @kai@beehaw.org
      4
      11 months ago

      I mean, I’ve been able to verify with a number of users in America and in Europe that signups just “spin” on lemmy.ml, and it’s been that way for 24 hours or so. I’ve personally tested it on iOS and Windows (Safari, LibreWolf, and Vivaldi), as well as from multiple IPs, and had a few friends in multiple countries (England, France, and Germany) try as well. Something is hung up somewhere, for a few people at least. I made a post about it, shrug.

      Also, there are a few posts on r/lemmy from others having this same problem.

      • @nutomic (admin)
        2
        11 months ago

        You can ask them to check the browser console for errors, or to sign up somewhere else. Anyway, we are getting at least a dozen signups per hour, so it’s working for many.

      • @szeraax@lemmy.dcrich.net
        2
        11 months ago

        On my instance at least, spinning on click normally meant an issue with email submission, but there wasn’t any good feedback like “failed to send email”. The email block of the config is the first thing I’d check; a sketch is below.
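
        Something along these lines (assuming a lemmy.hjson email section like the one in recent versions; field names may differ on yours):

          # lemmy.hjson email settings to verify (illustrative values)
          email: {
            # host:port of your SMTP relay
            smtp_server: "localhost:25"
            # address signup confirmations are sent from
            smtp_from_address: "noreply@example.com"
            # "none", "tls", or "starttls", depending on the relay
            tls_type: "none"
          }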

        • @kai@beehaw.org
          3
          11 months ago

          No issue using the same email server and naming structure to sign up on beehaw (as far as the user email field data goes).

          Update: Yup, just checked using a different email address and server.

  • Dessalines (mod, admin)
    17
    11 months ago

    It’s a kludge, but for the time being you can also add a docker-compose restart to your server’s cron every day or every few hours (see the sketch below). Lemmy restarts are pretty seamless, as the websockets automatically reconnect.
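
    For example (assuming the compose file lives in /srv/lemmy, a hypothetical path; adjust the path and schedule to taste):

      # /etc/crontab: restart the Lemmy stack every 6 hours
      0 */6 * * * root cd /srv/lemmy && docker-compose restart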

    The next release will have websockets removed, which should fix some of the stability and memory issues.

  • db0
    6
    10 months ago

    What’s your server load look like?

    • @suspended (OP)
      1
      10 months ago

      Fortunately, we’ve been able to resolve all of our server issues.

  • nick
    3
    10 months ago

    I made a Kubernetes deployment for my Lemmy service and use object storage for image hosting. Everything Lemmy-side looks like it should scale fine. I’m not doing open registrations, though, so it won’t impact me. The key bottleneck in my case is the database, but if need be, a larger node can be provisioned to let the DB expand a bit.

    I think for larger instances some form of server abstraction will be useful for scaling (e.g. k8s, Cloud Run, EKS, etc.); a rough sketch of the kind of Deployment I mean is below.
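
    Something like this (a minimal sketch, not my actual manifest; the image tag, replica count, and config names are placeholders to adapt):

      # deployment.yaml: illustrative Lemmy backend Deployment
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: lemmy
      spec:
        replicas: 2                # the stateless backend can scale horizontally
        selector:
          matchLabels:
            app: lemmy
        template:
          metadata:
            labels:
              app: lemmy
          spec:
            containers:
              - name: lemmy
                image: dessalines/lemmy:0.17.4   # pin the tag you actually run
                ports:
                  - containerPort: 8536          # Lemmy's default HTTP port
                volumeMounts:
                  - name: config
                    mountPath: /config/config.hjson
                    subPath: config.hjson
            volumes:
              - name: config
                configMap:
                  name: lemmy-config             # holds your lemmy.hjson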

    • @suspended (OP)
      1
      10 months ago

      Thanks for the feedback… I’ll pass this along to the other sysadmins.

  • @Rulasmur@mhl.onl
    1
    10 months ago

    Hey, a little late, but maybe contact the admin of lemm.ee; they have horizontally scaled Lemmy and it seems to be working. From what I remember, there is a config setting to disable scheduled tasks, which should be disabled on all replicas but one, and then you load balance across the rest. Something like the sketch below.
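
    Roughly like this (a sketch from memory; the exact flag name, and whether it is a CLI flag or a config key, depends on your Lemmy version, so treat it as an assumption):

      # docker-compose fragment: two backend replicas (illustrative)
      lemmy-1:
        image: dessalines/lemmy:0.19.3
        # scheduled tasks stay enabled on this replica only

      lemmy-2:
        image: dessalines/lemmy:0.19.3
        # flag name assumed; check `lemmy_server --help` on your version
        command: lemmy_server --disable-scheduled-tasks

      # nginx: spread requests across the replicas
      upstream lemmy_backend {
          server lemmy-1:8536;
          server lemmy-2:8536;
      }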