Heyho,

As a PostgreSQL guy, I'm currently working on a tooling environment to simulate load on a Lemmy instance and measure the database usage.

The tooling is written in Go (just because it is easy to write parallel load generators with it), and I'm using tools like PoWA to take a deep look at what happens on the database.
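
For context, the generators follow roughly this pattern (a minimal sketch only, not the actual tool; the worker counts and the endpoint are placeholders):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
)

// A minimal parallel load generator: N workers hammer one endpoint.
// The real tooling would add timing, metrics, and request mixes on top.
func main() {
	const (
		workers  = 50
		requests = 200 // per worker
		target   = "http://localhost:8536/api/v3/post/list" // placeholder Lemmy endpoint
	)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			client := &http.Client{}
			for i := 0; i < requests; i++ {
				resp, err := client.Get(target)
				if err != nil {
					log.Printf("worker %d: %v", id, err)
					continue
				}
				resp.Body.Close()
			}
			fmt.Printf("worker %d done\n", id)
		}(w)
	}
	wg.Wait()
}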

Currently, I have some trouble with Lemmy itself that makes it hard to really stress the database. For example, the worker pool is far too small to push the database anywhere near a real performance bottleneck. Rate limiting per IP is also an issue.

I thought about bypassing the reverse proxy in front of the Lemmy API and just spoofing X-Forwarded-For headers to work around it.
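
Roughly like this (a sketch; it assumes the load generator reaches Lemmy directly on its default port and that the X-Forwarded-For value gets trusted as the client address):

```go
package main

import (
	"fmt"
	"log"
	"math/rand"
	"net/http"
)

// sendSpoofed issues one request with a randomized X-Forwarded-For header,
// so consecutive requests land in different per-IP rate-limit buckets.
// This only helps if Lemmy is reached directly (or the proxy forwards
// the header unchanged) and trusts it as the client address.
func sendSpoofed(client *http.Client, target string) (int, error) {
	req, err := http.NewRequest(http.MethodGet, target, nil)
	if err != nil {
		return 0, err
	}
	fakeIP := fmt.Sprintf("10.%d.%d.%d", rand.Intn(256), rand.Intn(256), rand.Intn(256))
	req.Header.Set("X-Forwarded-For", fakeIP)
	resp, err := client.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	// Placeholder endpoint; any cheap GET route works for testing the bypass.
	status, err := sendSpoofed(http.DefaultClient, "http://localhost:8536/api/v3/site")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("status:", status)
}
```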

Any ideas are welcome, and so is anyone willing to help.

The goals of this should be:

  • Get a good feeling for the current database usage of Lemmy (see the sketch after this list)
  • Optimize statements and DB layout
  • Find possible improvements by caching
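
For the first goal, a starting point could look like the following (a sketch; it assumes pg_stat_statements is installed, which PoWA requires anyway, and PostgreSQL >= 13 column names):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver
)

// Top statements by total execution time, via pg_stat_statements
// (the same extension PoWA builds on). Connection string is made up.
func main() {
	db, err := sql.Open("postgres", "postgres://lemmy:secret@localhost/lemmy?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(`
		SELECT query, calls, total_exec_time, mean_exec_time
		FROM pg_stat_statements
		ORDER BY total_exec_time DESC
		LIMIT 10`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var query string
		var calls int64
		var total, mean float64
		if err := rows.Scan(&query, &calls, &total, &mean); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%8d calls  %10.1f ms total  %8.2f ms mean  %.60s\n", calls, total, mean, query)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```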

As our beloved Lemmy core devs have a large issue-tracker backlog, some folks who speak Rust fluently would be great, so we can work on this dedicated topic and provide finished PRs.

Greetings, Loki (@tbe on GitHub)

  • RoundSparrowM · 2 years ago

    One of the big concerns I have is that there seems to be no clear sense of the scale of the problems being faced. The project was built around very little data for years, and growing pains abound.

    As of today, lemmy.ml says this is the local posting with the most comments, 852: https://lemmy.ml/post/1186515. This federated posting from Beehaw has over 1,000: https://lemmy.ml/post/1265302

    On Reddit, a “large” news event, such as this week’s discovery of the Titan submersible debris near the Titanic, can have 10,000 comments: https://old.reddit.com/r/news/comments/14g7ipn/debris_field_discovered_within_search_area_near/

    And that isn’t even a major breaking-news event on the order of a terrorist attack, a Japan earthquake/nuclear incident, or a famous person being shot.

    • vapelokiOP · 2 years ago

      Yes, I see this issue too. I would assume that the statements used here tend to get very bad plans due to data skew (specific IDs will have far more entries than others).
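
      One way to check that is a quick EXPLAIN comparison (a hypothetical sketch; it assumes Lemmy’s comment table with a post_id column, and both ids are made up):

      ```go
      package main

      import (
          "database/sql"
          "fmt"
          "log"

          _ "github.com/lib/pq" // PostgreSQL driver
      )

      // printPlan shows the plan PostgreSQL picks for one post id. Comparing a
      // "hot" id (hundreds of comments) with a "cold" one reveals plan flips.
      func printPlan(db *sql.DB, postID int) {
          // postID is an int, so formatting it into the SQL string is injection-safe.
          q := fmt.Sprintf("EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM comment WHERE post_id = %d", postID)
          rows, err := db.Query(q)
          if err != nil {
              log.Fatal(err)
          }
          defer rows.Close()
          for rows.Next() {
              var line string
              if err := rows.Scan(&line); err != nil {
                  log.Fatal(err)
              }
              fmt.Println(line)
          }
      }

      func main() {
          db, err := sql.Open("postgres", "postgres://lemmy:secret@localhost/lemmy?sslmode=disable")
          if err != nil {
              log.Fatal(err)
          }
          defer db.Close()

          for _, id := range []int{1186515, 42} { // made-up hot and cold ids
              fmt.Printf("--- post_id=%d ---\n", id)
              printPlan(db, id)
          }
      }
      ```

      If the hot ids get a noticeably different (worse) plan than the cold ones, that would confirm the skew theory.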

      This is one of the reasons for my current setup.

      But when it comes to optimizing databases, I think I'm pretty skilled, and I have seen much worse scenarios (billing systems processing more than 100,000,000 entries per billing run, with tough time constraints).

      • RoundSparrowM · 2 years ago

        Just now they found out that Lemmy is falling over on threads with 300 comments.

        • vapelokiOP · 2 years ago

          Thx for the heads-up. Now we’re talking database ;)