I’m the sysadmin for beehaw.org, and our user base has almost tripled in 24 hours. Our site has been crashing constantly over this period.
We do have some volunteers trying to figure out a reasonable solution for when this happens again in four weeks.
Do you have any recommendations?
So far there are no major problems. Our disk was filling up, but that’s probably unrelated. I’d guess beehaw grew a lot more, percentage-wise, than lemmy.ml during this time.
The main things I can think of that might help are increasing the database pool size (in the config file) and the number of federation workers (under /admin). Also, create a swapfile if you are running low on RAM.
Thanks!
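For reference, the pool size lives in the server config; a minimal sketch (exact key names may vary between Lemmy versions, so treat this as illustrative):

```hjson
{
  # lemmy.hjson (fragment) -- enlarge the Postgres connection pool
  database: {
    # the default is fairly small; raise it to match expected load,
    # but keep it below Postgres's own max_connections limit
    pool_size: 30
  }
}
```

For the swapfile, the usual sequence as root is `fallocate -l 4G /swapfile`, `chmod 600 /swapfile`, `mkswap /swapfile`, `swapon /swapfile` (size is an example; pick what fits your box).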
I mean, I’ve been able to verify with a number of users in America and in Europe that signups just “spin” on lemmy.ml, and have been that way for 24 hours or so. I’ve personally tested it on iOS and Windows (Safari, LibreWolf, and Vivaldi) as well as on multiple IPs, and had a few friends in multiple countries (England, France, and Germany) try as well. Something is hung up somewhere for a few people at least. I made a post about it, shrug.
Also, there are a few posts on r/lemmy from others having this same problem.
You can ask them to check the browser console for errors, or to sign up somewhere else. Anyway, we are getting at least a dozen signups per hour, so it’s working for many.
On my instance at least, spinning on click usually meant an issue with email submission, but there wasn’t any good feedback like “failed to send email”.
There was no issue using the same email server and address structure to sign up on beehaw (as far as the user email field goes).
Update: Yup, just confirmed using a different email address and server.
Please consider running lemmy-ui on worker threads.
It’s a kludge, but in the meantime you can also add a `docker-compose restart` to your server cron every day or every few hours. Lemmy restarts are pretty seamless, as the websockets automatically reconnect.
The next release will have websockets removed, which should fix some of the stability and memory issues.
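As a concrete sketch of the cron approach (the install path and schedule are assumptions, adjust for your setup):

```crontab
# /etc/cron.d/lemmy-restart -- restart the Lemmy stack every 6 hours
# assumes the compose project lives in /srv/lemmy
0 */6 * * * root cd /srv/lemmy && docker-compose restart
```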
Are they being replaced, or is it just going to be all REST?
HTTP / rest client only.
/me goes and disables WSS support in nginx… :P
Some of the discussion of it is here. I was staunchly pro-websocket, but it’s become too much work to maintain, and it doesn’t scale well.
Thanks!
No probs.
What’s your server load look like?
Fortunately, we’ve been able to resolve all of our server issues.
I made a Kubernetes deployment for my Lemmy service and use object storage for image hosting. Everything Lemmy-side looks like it should scale fine. I’m not doing open registrations, though, so the signup wave won’t impact me. The key bottleneck in my case is the database, but if need be a larger node can be provisioned to let the DB expand a bit.
I think for larger instances some form of server abstraction will be useful for scaling (e.g., k8s, Cloud Run, EKS, etc.).
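For illustration, a minimal Deployment along those lines might look like this (the image tag, replica count, and labels are placeholders, not a tested production config):

```yaml
# Example: run several lemmy-ui replicas behind a Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lemmy-ui
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lemmy-ui
  template:
    metadata:
      labels:
        app: lemmy-ui
    spec:
      containers:
        - name: lemmy-ui
          image: dessalines/lemmy-ui:0.18.0   # pin to your version
          ports:
            - containerPort: 1234             # lemmy-ui's default port
```

The stateless frontend scales out easily like this; the database stays the vertical-scaling bottleneck, as noted above.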
Thanks for the feedback; I’ll pass this along to the other sysadmins.
Hey, a little late, but maybe contact the admin for lemm.ee; they have horizontally scaled Lemmy and it seems to be working. From what I remember, there is a config setting to disable scheduled tasks, which should be left enabled on only one backend; you then load-balance across the rest.
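A sketch of what an extra backend replica might look like in docker-compose (the flag name and image tag are assumptions from memory; check the Lemmy docs for your version before relying on them):

```yaml
# Additional Lemmy backend for load balancing; only the primary
# backend should run scheduled tasks
services:
  lemmy-worker:
    image: dessalines/lemmy:0.18.0            # pin to your version
    volumes:
      - ./lemmy.hjson:/config/config.hjson    # share the same config
    command: ["lemmy_server", "--disable-scheduled-tasks"]
```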