Things have been incredibly unstable there. Until things stabilise, they should force the traffic elsewhere.

  • RoundSparrow · 1 year ago

    Things have been incredibly unstable there.

    I wish lemmy.ml (also unstable) or lemmy.world would hand out a (nearly) full copy of the database so we can get more analysis done on PostgreSQL performance behaviors. Remove the private messages and the password/2FA user columns, or whitelist only the comment/post/community/person tables; most everything else should already be public information that’s shared via the API or federation anyway. It’s the quantity, grouping, and age of the data that’s hard to reproduce in testing, along with the records about other federated servers, including data that may have been generated by older versions of Lemmy that new versions can’t reproduce.
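
    A minimal sketch of what that could look like is below, assuming roughly the 0.18-era table names (person, post, comment, community) and hypothetical column choices; it is not anyone’s actual dump script, just the shape of a whitelisted, credential-free copy:

    ```sql
    -- Hypothetical sketch: copy only whitelisted tables into a "share" schema,
    -- dropping credential columns, so the result can be dumped and handed out.
    BEGIN;
    CREATE SCHEMA share;
    -- keep only public profile fields; this column list is an assumption
    CREATE TABLE share.person AS
        SELECT id, name, actor_id, published
        FROM person;
    CREATE TABLE share.post      AS SELECT * FROM post;
    CREATE TABLE share.comment   AS SELECT * FROM comment;
    CREATE TABLE share.community AS SELECT * FROM community;
    COMMIT;
    -- then dump just that schema:
    --   pg_dump --schema=share -Fc lemmy > lemmy_share.dump
    ```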

    It’s been over 60 days of constant PostgreSQL overload problems. Last week Lemmy.ca made a clone of their database to study offline with auto_explain, which surfaced a major overload on new comments and posts related to site_aggregates counting: each new post/comment was being counted against every known server, not just the single database row for the home server.
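
    For anyone wanting to try the same thing, auto_explain is a stock PostgreSQL module, and the snippet below shows how it can be enabled per session plus an illustration of the class of counting bug described above. The trigger is not the actual Lemmy code, just a hypothetical example of an UPDATE with no WHERE clause hitting every site_aggregates row:

    ```sql
    -- auto_explain is stock PostgreSQL; these settings log plans for slow statements
    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '250ms';
    SET auto_explain.log_analyze = on;

    -- Illustration only (not the actual Lemmy trigger): every new comment
    -- bumps the counter on EVERY site_aggregates row, one per known server.
    CREATE OR REPLACE FUNCTION comment_count_bug() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        UPDATE site_aggregates SET comments = comments + 1;  -- no WHERE clause
        -- the fix is to scope the update to the single local row, e.g.:
        -- UPDATE site_aggregates SET comments = comments + 1
        --     WHERE site_id = (SELECT site_id FROM local_site LIMIT 1);
        RETURN NULL;
    END $$;
    ```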

    I have an account over on World too, and every major Lemmy server I use throws errors with casual usage. It’s been discouraging; I haven’t visited a website with this many errors in years. Today (Sunday) has actually been better than yesterday, but I do not see many new postings being created on lemmy.ml today.

      • RoundSparrow · 1 year ago

        subscribe to every community, and let federation load overwhelm your server.

        Did that; it takes lots of time to wait for the content to come in… and there is no backfill. Plus I suspect that the oldest servers (online for several years) have some migration/upgrade-related data that isn’t being accounted for.