It was last Sunday, July 23, that the issue of site_aggregates updating 1500 rows was discovered.

Lemmy.ca made a clone of their database.

The UPDATE of 1500 rows in site_aggregates is an example of what keeps slipping through the cracks.

lemmy.ml, as a long-running server, may have data patterns that a fresh-install server cannot easily reproduce.

Lemmy.ca cloning a live database to run EXPLAIN was exactly the kind of thing I wanted to see from lemmy.ml 2 months ago.
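For reference, that clone-and-EXPLAIN workflow can be sketched roughly as follows with standard PostgreSQL tooling. The database names and the UPDATE statement here are illustrative assumptions, not taken from lemmy.ca's actual procedure:

```shell
# Dump the live database and restore it into a scratch copy
# (names "lemmy" and "lemmy_clone" are hypothetical).
pg_dump --format=custom lemmy > lemmy.dump
createdb lemmy_clone
pg_restore --dbname=lemmy_clone lemmy.dump

# EXPLAIN ANALYZE actually executes the statement, so run it inside a
# transaction that is rolled back, keeping the clone pristine.
# The UPDATE below is only a stand-in for the 1500-row site_aggregates update.
psql lemmy_clone <<'SQL'
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
  UPDATE site_aggregates
  SET users = users;
ROLLBACK;
SQL
```

The point of working on a clone is exactly what lemmy.ca demonstrated: you can measure the real plan against production-shaped data without risking the live server.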

  • RoundSparrowOP · 1 year ago

    The bot cleanup work has some interesting numbers regarding data in the database: https://sh.itjust.works/post/1823812

    lemmy.ml has a much wider range of dates on communities, posts, comments, and user accounts than new testing would generate. Even if you install a test server with the same quantity of data, the date patterns would come out very different from those of the organically grown lemmy.ml.

    All I know is that lemmy.ml errors out every single day during routine browsing, and I haven't seen any website throw this many errors in many, many years. Account deletions could also be causing these 2-to-4-minute periods of overload, even with the 0.18.3 fixes.