A single Like is measuring as taking significant time on the PostgreSQL database for its INSERT, and comments and postings are even higher. As in 1/2 second of real time for each and every one.

The lemmy_server code that does HTTPS outbound pushes is also not thoughtful about concurrency: 20 likes against the same server hosting the community trigger 20 simultaneous network and database connections. As I read the current 0.17.4 code, the queuing doesn’t consider concurrent activity against the same target at all; it blindly starts going into timing loops without noticing that the target server already has too much activity in flight.
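For illustration only, here is a minimal sketch of the kind of per-target-host cap the push queue could apply. This is hypothetical Rust (lemmy_server is written in Rust, but this is not its actual code); `HostLimiter` and `max_per_host` are made-up names:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical sketch (not lemmy_server's real queue): cap how many
// outbound activity pushes may run against one target host at a time.
pub struct HostLimiter {
    max_per_host: usize,
    in_flight: Mutex<HashMap<String, usize>>,
}

impl HostLimiter {
    pub fn new(max_per_host: usize) -> Self {
        HostLimiter {
            max_per_host,
            in_flight: Mutex::new(HashMap::new()),
        }
    }

    // Reserve a delivery slot for `host`; returns false when the host is
    // already at the cap, so the caller should queue instead of connecting.
    pub fn try_acquire(&self, host: &str) -> bool {
        let mut map = self.in_flight.lock().unwrap();
        let count = map.entry(host.to_string()).or_insert(0);
        if *count >= self.max_per_host {
            false
        } else {
            *count += 1;
            true
        }
    }

    // Give the slot back once the HTTPS push has finished (or failed).
    pub fn release(&self, host: &str) {
        let mut map = self.in_flight.lock().unwrap();
        if let Some(count) = map.get_mut(host) {
            *count = count.saturating_sub(1);
        }
    }
}
```

A delivery worker would call `try_acquire` before opening the HTTPS connection and `release` afterward, so 20 likes aimed at one instance drain through a few connections instead of all 20 at once.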

  • RoundSparrow (OP), edited, 1 year ago

    I installed pg_stat_statements module on the server, tuned PG to run with 6GB shared_buffers, on a 24GB RAM server running only Lemmy.
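    For reference, enabling the module looks roughly like this (the 6GB figure matches the tuning described above; the ordering and LIMIT are just one way to surface the slow statements):

    ```sql
    -- postgresql.conf (requires a server restart):
    --   shared_preload_libraries = 'pg_stat_statements'
    --   shared_buffers = 6GB
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Surface the slowest statements by mean execution time:
    SELECT calls, mean_exec_time, query
    FROM pg_stat_statements
    ORDER BY mean_exec_time DESC
    LIMIT 10;
    ```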

    INSERT INTO "comment_like" ("person_id", "comment_id", "post_id", "score")
    VALUES ($1, $2, $3, $4)
    ON CONFLICT ("comment_id", "person_id")
    DO UPDATE SET "person_id" = $5, "comment_id" = $6, "post_id" = $7, "score" = $8
    RETURNING "comment_like"."id", "comment_like"."person_id", "comment_like"."comment_id", "comment_like"."post_id", "comment_like"."score", "comment_like"."published"

    is what pg_stat_statements shows as the SQL statement being executed, and its mean_exec_time shifts between 0.4 and 0.5 seconds. The disk I/O on my Oracle cloud server isn’t that great, typical IDE hard drive speeds (100 MB/s measured by a gddrescue quick test)… and logs + data are on the same storage device.
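    One thing worth noting about the statement itself: the DO UPDATE clause rewrites every column, including the conflict-key columns, with a second set of bind parameters, even though on a conflict only the score can actually change. A leaner equivalent form using PostgreSQL's EXCLUDED pseudo-table (hypothetical, not tested against Lemmy's Diesel-generated SQL) would be:

    ```sql
    INSERT INTO "comment_like" ("person_id", "comment_id", "post_id", "score")
    VALUES ($1, $2, $3, $4)
    ON CONFLICT ("comment_id", "person_id")
    DO UPDATE SET "score" = EXCLUDED."score"
    RETURNING "comment_like".*;  -- same rows returned, half the parameters
    ```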

    That’s with 429948 rows in the comment_like table so far, and 82556 rows in the comment table. The server is only doing federation; I’m the only interactive user.