I suggest that Lemmy queue incoming federation inserts of votes, comments, and possibly posts, so that concurrent INSERT operations into these very large database tables are serialized and local-instance interactive web and API (app) users are given performance priority.
This could also be a way to keep server operating costs more predictable when using cloud-hosted PostgreSQL services.
There are several approaches that could be taken: a message-queue system, queuing to files on disk, queuing to an empty PostgreSQL table, queuing to another database system such as SQLite, etc.
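As one illustration of the "queue to disk files" approach, here is a minimal sketch of an append-only spool: each incoming activity (already serialized) is appended as one line, and a later pass replays the spool in arrival order. The file name, function names, and line-based format are all hypothetical, not anything from the Lemmy codebase.

```rust
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, Write};

// Append one serialized incoming activity to the on-disk spool.
// The HTTP handler can call this and return immediately, without
// touching the main PostgreSQL tables at all.
fn spool_append(path: &str, payload: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(f, "{payload}")
}

// Replay the spool in arrival order, e.g. to feed a later pass that
// performs linear one-at-a-time inserts.
fn spool_replay(path: &str) -> std::io::Result<Vec<String>> {
    let f = File::open(path)?;
    BufReader::new(f).lines().collect()
}

fn main() -> std::io::Result<()> {
    let path = "federation_spool.log"; // hypothetical spool file
    spool_append(path, r#"{"type":"comment","ap_id":"https://example.org/c/1"}"#)?;
    let replayed = spool_replay(path)?;
    assert_eq!(replayed.len(), 1);
    std::fs::remove_file(path)?;
    Ok(())
}
```

A real implementation would need crash-safe framing (length prefixes or fsync per batch) and deduplication on replay, but the basic shape is just an append-only log.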
This would also lay the groundwork for accepting incoming federation data while PostgreSQL is down or the website is offline for upgrades or other maintenance.
I would also suggest moving the code for incoming federation data into a separate service rather than running it in-process in lemmy_server. This would be a step toward allowing replication integrity checks, backfill operations, firewall rules, CDN bypassing, etc.
EDIT: Much of this applies to outgoing federation as well, but that has gotten more attention in the 0.17.4 time period. Ultimately, I suspect the incoming backend transactions are a big part of why the outbound queues are bunching up so much.
The comment database table is going to have a lot of concurrency concerns: remote federated servers will all be connecting at the same time with INSERT transactions into that table. The primary key on that table will see a lot of contention, and interactive website users should be given the highest priority.
Let the HTTPS connection be accepted, read the data, queue it somewhere that does not require locking the comment table, and release the HTTPS connection. Then do a linear insert of those new records, one at a time.
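The accept → queue → release → linear-insert flow above can be sketched with a standard-library channel: many producer threads (standing in for HTTP handlers) enqueue and return immediately, while a single consumer drains the queue and "inserts" one record at a time, so there is never more than one writer. The struct and function names are illustrative only; a real version would replace the in-memory Vec with an actual single-row INSERT.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical payload for an incoming federated comment.
#[derive(Debug, Clone, PartialEq)]
struct IncomingComment {
    ap_id: String, // ActivityPub object id
    content: String,
}

// Spawn `n` "HTTP handler" threads that each queue one comment and
// return, then drain the queue with a single consumer doing linear,
// one-at-a-time "inserts". Returns how many rows were inserted.
fn queue_and_drain(n: usize) -> usize {
    let (tx, rx) = mpsc::channel::<IncomingComment>();

    let handles: Vec<_> = (0..n)
        .map(|i| {
            let tx = tx.clone();
            thread::spawn(move || {
                // Handler: queue the activity; no table lock is taken here.
                tx.send(IncomingComment {
                    ap_id: format!("https://example.org/comment/{i}"),
                    content: format!("comment {i}"),
                })
                .unwrap();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    drop(tx); // close the channel so the drain loop below terminates

    // Single consumer: sequential inserts, so the comment table's
    // primary key never sees concurrent federation writers.
    let mut inserted = Vec::new();
    for comment in rx {
        inserted.push(comment); // stand-in for one INSERT transaction
    }
    inserted.len()
}

fn main() {
    assert_eq!(queue_and_drain(4), 4);
    println!("inserted 4 comments sequentially");
}
```

In a real deployment the channel would be replaced by a durable queue (one of the options listed earlier), but the key property is the same: a single writer performing inserts in arrival order.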