Hey folks

This is a heads up that I will be performing some maintenance and hardware upgrades on our database this Saturday.

We are currently experiencing several load spikes throughout the day that overload our database, resulting in degraded performance for many users. The spikes come from a combination of the database's continued growth, some expensive periodic scheduled tasks that Lemmy runs, and fluctuating traffic patterns. Some of this can be optimized at the code level in the future, but for now the best way to deal with it is to add resources to our database server.
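As an aside, one example of the kind of code-level optimization that can help with synchronized periodic tasks is adding random jitter to each run, so the work spreads out instead of every run landing at the same moment. A minimal sketch in Python; the interval and function name here are illustrative, not Lemmy's actual scheduler:

```python
import random

def jittered_delay(base_interval_s: float, jitter_fraction: float = 0.2) -> float:
    """Return the base interval shifted by up to +/- jitter_fraction of itself."""
    jitter = base_interval_s * jitter_fraction
    return base_interval_s + random.uniform(-jitter, jitter)

# A task nominally scheduled every 3600 s now fires somewhere in the
# 2880-4320 s window, so separate runs stop spiking in lockstep.
delay = jittered_delay(3600)
```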

I intend to switch to slightly different hardware in this upgrade, and I will be unable to make the switch without downtime, so unfortunately lemm.ee will be unavailable for the duration.

As our database has grown quite a bit, cloning it will most likely take a few hours, so I expect the downtime to last 2-3 hours. Sorry for the inconvenience; I am hopeful that it will be worth it and that this upgrade will significantly reduce the long page load times some of you have seen recently!
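As a rough sanity check on that estimate: copying a database is limited by sustained dump/restore throughput, which is often only tens of MB/s once indexes are rebuilt, so even a moderately sized database takes hours. A back-of-the-envelope sketch with illustrative numbers (not our actual database size or transfer rate):

```python
def clone_hours(db_size_gb: float, throughput_mb_per_s: float) -> float:
    """Estimate hours to copy a database at a sustained transfer rate."""
    seconds = db_size_gb * 1024 / throughput_mb_per_s
    return seconds / 3600

# e.g. ~150 GB moved at a sustained ~15 MB/s works out to roughly 2.8 hours,
# which is the scale a 2-3 hour downtime window comes from.
estimate = clone_hours(150, 15)
```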


Edit: upgrade complete!

I have now migrated the lemm.ee database from the original DigitalOcean managed database service to a dedicated server on Hetzner.

As part of this migration, I have also moved all of our Lemmy servers from DigitalOcean to Hetzner Cloud. I always want the servers to be as close as possible to the database in order to keep latencies low, but at the same time I want the ability to dynamically spin servers up and down as needed, so a cloud-type solution is ideal for that. Fortunately, Hetzner allows connecting cloud servers to their dedicated servers through a private network, so we can take advantage of a powerful dedicated server for the database while retaining the flexibility of the cloud approach for the rest of our servers. I’m really happy with this setup.
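To make the latency point concrete: every query pays at least one network round trip, and a chatty workload pays it many times per page, so a few extra milliseconds per trip adds up quickly. One way to get a feel for that cost is to time a small TCP exchange; the sketch below uses a throwaway local echo server as a stand-in for a real database endpoint:

```python
import socket
import threading
import time

def measure_rtt(host: str, port: int) -> float:
    """Time one small send/receive round trip over TCP, in seconds."""
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        sock.sendall(b"ping")
        sock.recv(4)
        return time.perf_counter() - start

def echo_once(server: socket.socket) -> None:
    """Accept one connection and echo a single 4-byte message back."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(4))

# Stand-in for the real database host: a local echo server on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()
rtt = measure_rtt("127.0.0.1", server.getsockname()[1])
server.close()
```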

In terms of results, I am already seeing far better page load times and far less resource use on the new hardware, so I think the migration has been a success. I will keep monitoring things and tuning as necessary.

  • Nelots@lemm.ee · 11 months ago

    So I wasn’t going crazy, the long loads times were real. Glad to see upgrades coming soon!

    • JimmyBigSausage@lemm.ee · 11 months ago

      Although the long page load times have been real, you still might be going crazy. Definitely a possibility on Lemmy (づ ̄ ³ ̄)づ

    • kratoz29@lemm.ee · 11 months ago

      My experience has degraded since the latest Lemmy update, so all these fixes and workarounds are very welcome if they get things back to a stable state!

  • GrayBackgroundMusic@lemm.ee · 11 months ago

    No worries about the downtime. If that’s what’s needed to get back to 0.18 levels of performance, go for it! Thanks for all the work y’all do.

  • argo_yamato@lemm.ee · 11 months ago

    Thank you for your work on lemm.ee! I was actually looking to post somewhere to see if there was some slowness or if it was just me, looks like you answered my question.

    • sunaurus@lemm.ee (OP, mod) · edited · 11 months ago

      Currently the database is a managed DigitalOcean Postgres instance, but I am going to migrate it to a dedicated server with 32 threads and 128 GB of RAM.

      It’s something I was hoping we wouldn’t need, as the managed database service has allowed me to not worry about patches, backups, etc. (they took care of all of that automatically). Unfortunately, it is now clear that further upgrades on that service are too costly, and the amount of configuration and tuning I would like to do is simply not possible there. So for now, moving the database to a dedicated server is the only real option.
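      For illustration, the kind of tuning I mean is sizing memory and parallelism parameters to the hardware, which managed services typically fix for you. The values below are a hypothetical starting point for a 32-thread / 128 GB machine, based on common Postgres rules of thumb, not a final configuration:

      ```
      # postgresql.conf -- illustrative values for 32 threads / 128 GB RAM
      shared_buffers = 32GB            # ~25% of RAM is a common starting point
      effective_cache_size = 96GB      # ~75% of RAM; a planner hint, not an allocation
      maintenance_work_mem = 2GB       # speeds up VACUUM and index builds
      work_mem = 64MB                  # per sort/hash operation, so keep it modest
      max_worker_processes = 32        # match the hardware thread count
      max_parallel_workers = 32
      max_parallel_workers_per_gather = 8
      ```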

      • Neuromancer@lemm.ee · 11 months ago

        Makes sense. I appreciate the reply. I used to work on highly available, fault-tolerant systems, so I’m always interested in how much hardware it takes to run something.

        Sounds like some serious optimization needs to be done, or there are a lot more transactions than I was expecting.

  • Xepher@lemm.ee · 11 months ago

    Really appreciate the advance heads-up. Thanks for being such an awesome instance admin!

  • CluckN@lemmy.world · 11 months ago

    When this comment is 2 hours old, I’ll have completed taking a colossal dump.

  • SurvivalMariner@lemm.ee · 11 months ago

    Thanks for the update and information. You really set an example for communication and level headedness in decision making. Thanks for providing this space for us.

  • Aniki 🌱🌿@lemm.ee · 11 months ago

    I just wanted to tell you both, “Good luck. We’re all counting on you.”

    - Frank Drebin, Police Squad