Credit to @eiknat@dev.lemmy.ml and masterstur for their help with this effort.

Our postgres CPU usage has gone wayyyy down.

  • Dessalines (OP)
    17 points · 4 years ago

    So in the early days of Lemmy we didn’t use any caching, but as we grew, in order to speed up reads, I added a SQL caching technique called materialized views. This instantly helped and made Lemmy a lot faster.
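
    A minimal sketch of the technique (the table and column names here are hypothetical, not necessarily Lemmy’s schema):

    ```sql
    -- A materialized view stores the result of an expensive query on
    -- disk, so reads hit precomputed rows instead of re-running the
    -- aggregation. Names are illustrative only.
    CREATE MATERIALIZED VIEW post_aggregates_mv AS
    SELECT p.id,
           p.name,
           (SELECT count(*)
            FROM comment c
            WHERE c.post_id = p.id) AS comment_count,
           (SELECT coalesce(sum(pl.score), 0)
            FROM post_like pl
            WHERE pl.post_id = p.id) AS score
    FROM post p;

    -- Reads become a cheap scan over the cached rows.
    SELECT id, name, score
    FROM post_aggregates_mv
    ORDER BY score DESC
    LIMIT 25;
    ```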

    Then we grew some more, and started to get a lot of active users doing actions at the same time. Unfortunately, the only way to update data in a materialized view is to refresh the entire thing, which at first was extremely fast, but as we grew started to take at least a second every time. Even worse, the refresh locked out all reads on the table. So reads, when they could occur, were very fast, but they had to wait for all the locking writes to finish.
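
    The refresh in question is a single Postgres command, and its locking behavior is the problem (same hypothetical view as above):

    ```sql
    -- The only way to update the view: rebuild it from scratch.
    REFRESH MATERIALIZED VIEW post_aggregates_mv;
    -- A plain REFRESH takes an ACCESS EXCLUSIVE lock on the view for
    -- the whole rebuild, so every concurrent read has to wait for it.
    ```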

    In the user explosion last week this became unbearable, and I made it my top priority to fix it. Now Lemmy uses a different caching technique called fast tables, where updates to the core data tables run SQL triggers that update only the specifically changed rows in the fast tables. This makes the locking writes extremely fast and far less common.
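
    A minimal sketch of the fast-table pattern, reusing the hypothetical schema from above (the general technique, not Lemmy’s actual migration):

    ```sql
    -- The "fast table" is a plain table caching the same aggregates.
    CREATE TABLE post_aggregates_fast (
      id integer PRIMARY KEY,
      name text NOT NULL,
      comment_count integer NOT NULL DEFAULT 0,
      score bigint NOT NULL DEFAULT 0
    );

    -- Trigger function: touch only the one cached row that changed.
    CREATE FUNCTION post_fast_comment_count() RETURNS trigger AS $$
    BEGIN
      IF TG_OP = 'INSERT' THEN
        UPDATE post_aggregates_fast
        SET comment_count = comment_count + 1
        WHERE id = NEW.post_id;
      ELSIF TG_OP = 'DELETE' THEN
        UPDATE post_aggregates_fast
        SET comment_count = comment_count - 1
        WHERE id = OLD.post_id;
      END IF;
      RETURN NULL;  -- AFTER triggers ignore the return value
    END
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER comment_count_fast
    AFTER INSERT OR DELETE ON comment
    FOR EACH ROW EXECUTE PROCEDURE post_fast_comment_count();
    ```

    Each write now updates a single cached row instead of rebuilding the whole table, so locks are held for microseconds instead of a second or more.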

    There’s still room for improvement of course, and luckily this last PR was the first time I’ve ever gotten another set of eyes on the DB besides my own. But yeah, that’s the long version.

    • @oriond
      4 points · 4 years ago

      Thank you so much for the hard work!

    • @k_o_t
      2 points · 4 years ago

      nice, thanks :)

    • @theafterman
      1 point · 4 years ago

      Interesting. I’m a bit ignorant when it comes to DB administration, but is this fast-tables technique something you developed on your own, or is it a known, documented thing? I can’t seem to find anything.

      • Dessalines (OP)
        2 points · 4 years ago

        It’s used elsewhere, but if you look up SQL procedural functions or audit tables, you’ll find more.
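
        For anyone searching, an audit table is the same trigger machinery pointed at change logging instead of caching. A minimal sketch with hypothetical names:

        ```sql
        -- Every write to "post" gets logged as a row in "post_audit".
        CREATE TABLE post_audit (
          audit_id bigserial PRIMARY KEY,
          post_id integer NOT NULL,
          operation text NOT NULL,
          changed_at timestamptz NOT NULL DEFAULT now()
        );

        CREATE FUNCTION log_post_change() RETURNS trigger AS $$
        BEGIN
          INSERT INTO post_audit (post_id, operation)
          VALUES (coalesce(NEW.id, OLD.id), TG_OP);
          RETURN NULL;
        END
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER post_audit_trigger
        AFTER INSERT OR UPDATE OR DELETE ON post
        FOR EACH ROW EXECUTE PROCEDURE log_post_change();
        ```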