Over the past 48 hours I have been glued to my screen trying to figure out how to make Beehaw more robust during this Reddit exodus.

My eyes are burning, but I am thankful for so much financial support, as well as the work of two sysadmins who have given us all a breath of fresh air.

One of the sysadmins was up until 2:30 am helping us out as a volunteer. I am so very grateful for people like them.

Thank you all for your continued support and patience.

  • darkfoe@lemmy.serverfail.party · 1 year ago

    I’m no dev on the project myself, and I haven’t studied that query enough to know, but yeah, those are some monster queries. I’d have to fire up pgAdmin and try them out on my personal instance to understand them better.

    But as for your curiosity: I had an issue with a microservice at my job that is very sensitive to database latency (it makes one call, roughly 600 requests per second on average, up to 1200 in spikes). We solved a problem with some of the joins by creating a materialised view for data we knew didn’t change more than once per day, which we then scheduled with pg_cron to refresh concurrently (concurrently being key so we don’t lock out reads). That reduced our query times significantly, i.e. down to milliseconds vs. up to 20 seconds.
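    A minimal sketch of that pattern, assuming a hypothetical slow_join_summary view over made-up users/posts tables (the real tables and join are specific to our service):

    ```sql
    -- Materialise the expensive join once, instead of running it on every request.
    -- Table, column, and view names here are hypothetical placeholders.
    CREATE MATERIALIZED VIEW slow_join_summary AS
    SELECT u.id       AS user_id,
           count(p.id) AS post_count
    FROM users u
    LEFT JOIN posts p ON p.user_id = u.id
    GROUP BY u.id;

    -- REFRESH ... CONCURRENTLY requires a unique index on the materialised view.
    CREATE UNIQUE INDEX ON slow_join_summary (user_id);

    -- pg_cron: rebuild the view daily at 03:00 without blocking readers.
    SELECT cron.schedule(
        'refresh-slow-join-summary',
        '0 3 * * *',
        'REFRESH MATERIALIZED VIEW CONCURRENTLY slow_join_summary;'
    );
    ```

    Queries then hit slow_join_summary directly instead of redoing the join; if the data ever needs to be fresher than once a day, the same trick works with a tighter cron schedule.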

    It really boils down to how often the data actually needs to change; once you know that, you can work out a sensible way to cache it.