https://github.com/LemmyNet/lemmy/issues/3395

This GitHub issue is getting ignored, just as data-size related crashes in general aren’t being reported to GitHub by the major site operators. The admin of lemmy.world posted yesterday that he “talked to the devs”, but all of that talk seems to be happening behind the scenes: no GitHub issue was opened about the 0.18.1 performance problems, and no server logs were shared as details. The lack of server logs being shared from high-activity, data-heavy servers is really holding back Lemmy as a platform.

Is an upvote spawning backend federation activity processing into the queue, and is that what is slow? Or are database INSERTs into the comment_like table on that server taking so long because of the number of rows accumulated in the table?
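
One way to answer the second question directly on the database would be to ask PostgreSQL itself how long those INSERTs are taking. A minimal sketch, assuming the pg_stat_statements extension is enabled and PostgreSQL 13 or newer (timing column names differ on older versions):

```sql
-- How long are INSERTs into comment_like actually taking?
SELECT calls, mean_exec_time, max_exec_time, query
FROM pg_stat_statements
WHERE query ILIKE 'INSERT INTO comment_like%'
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Roughly how much accumulated data is in the table (estimate, avoids a slow count)?
SELECT reltuples::bigint AS approx_rows,
       pg_size_pretty(pg_total_relation_size('comment_like')) AS total_size
FROM pg_class
WHERE relname = 'comment_like';
```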

Federation performance is also showing signs of serious backend delays that the server logs would reveal. Adding emergency logging to the code would help establish exactly what is going on with timeouts, retries, etc.
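
Until code-level logging lands, a stopgap that needs no Lemmy changes at all is to have PostgreSQL log every statement slower than some threshold, so the server logs at least show whether the delays are on the database side. A sketch, assuming superuser access to the Lemmy database (the 250 ms threshold is an arbitrary starting point to tune):

```sql
-- Log any statement that takes longer than 250 ms.
ALTER SYSTEM SET log_min_duration_statement = '250ms';
SELECT pg_reload_conf();
```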

  • RoundSparrowOP · 1 year ago
    If it is indeed the SQL INSERT that is the bottleneck, emergency measures (code changes) could be put in place:

    1. Turn off the comment-return on like and ship it as a hotfix. Or fake it: just do a comment read for the like request and echo back the vote parameter in the JSON.

    2. Insert into a new table in the database, comment_like_queue, that has none of the index and record-locking overhead of the massive primary table (see the sketch after this list).

    3. Batch-insert all likes from the comment_like_queue table into the live table every 3 minutes with an outside shell script or other quick-and-dirty application that can be easily tweaked and monitored.
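
    Points 2 and 3 together are essentially a write-behind queue. A minimal sketch of what that could look like in SQL, with the caveat that the column list and the unique (comment_id, person_id) constraint are assumptions based on the Lemmy schema and would need checking against the real database:

    ```sql
    -- Hypothetical queue table: same columns as comment_like, but no indexes,
    -- no foreign keys, no triggers, so INSERTs are as cheap as possible.
    CREATE TABLE comment_like_queue (
        person_id  int         NOT NULL,
        comment_id int         NOT NULL,
        post_id    int         NOT NULL,
        score      smallint    NOT NULL,
        published  timestamptz NOT NULL DEFAULT now()
    );

    -- Drain job, run every few minutes (e.g. from cron via psql).
    -- Atomically pulls everything out of the queue and folds it into the live
    -- table, keeping only the newest vote per (comment_id, person_id).
    BEGIN;
    WITH drained AS (
        DELETE FROM comment_like_queue
        RETURNING person_id, comment_id, post_id, score, published
    ),
    dedup AS (
        SELECT DISTINCT ON (comment_id, person_id)
               person_id, comment_id, post_id, score, published
        FROM drained
        ORDER BY comment_id, person_id, published DESC
    )
    INSERT INTO comment_like (person_id, comment_id, post_id, score, published)
    SELECT person_id, comment_id, post_id, score, published
    FROM dedup
    -- Conflict target assumes comment_like has a unique (comment_id, person_id)
    -- constraint; adjust to the actual schema.
    ON CONFLICT (comment_id, person_id) DO UPDATE SET score = EXCLUDED.score;
    COMMIT;
    ```

    The trade-off is that votes become visible with up to a few minutes of delay, and a failed drain (for example a deleted comment breaking a foreign key on the live table) rolls the whole batch back, so error handling in the wrapper script matters.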