No no, not down… “Elevated error rates”. I think the elevation was from 0 to 100%.
Ah! I see! So it’s not down but up!
Q: I wish there was an unstable AWS region to test the resilience of my app.
A: We already have one, it’s called us-east-1
Any news as to the cause?
lemmy’d :P
That or the old classic - the cleaning lady plugs her vacuum into the UPS.
1:50 PM PDT: Beginning at 11:49 AM PDT, customers began experiencing errors and latencies with multiple AWS services in the US-EAST-1 Region. Our engineering teams were immediately engaged and began investigating. We quickly narrowed down the root cause to be an issue with a subsystem responsible for capacity management for AWS Lambda, which caused errors directly for customers (including through API Gateway) and indirectly through the use by other AWS services. We have associated other services that are impacted by this issue to this post on the Health Dashboard. Additionally, customers may experience authentication or sign-in errors when using the AWS Management Console, or authenticating through Cognito or IAM STS. Customers may also experience intermittent issues when attempting to call or initiate a chat to AWS Support. We are now observing sustained recovery of the Lambda invoke error rates, and recovery of other affected AWS services. We are continuing to monitor closely as we work towards full recovery across all services.
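Since the status post talks about elevated Lambda invoke error rates hitting callers directly, here’s a minimal client-side sketch of riding out that kind of incident with retries plus a graceful fallback. Purely illustrative and not from the status post: the function name, region, and retry settings are assumptions.

```python
import json

import boto3
from botocore.config import Config

# "standard" retry mode adds exponential backoff with jitter for throttling
# and transient 5xx errors; max_attempts caps total tries per call.
# The region and function name below are placeholders, not from the thread.
lambda_client = boto3.client(
    "lambda",
    region_name="us-east-1",
    config=Config(retries={"max_attempts": 5, "mode": "standard"}),
)


def invoke_with_fallback(payload: dict):
    """Invoke a Lambda function, returning None if the service is erroring."""
    try:
        resp = lambda_client.invoke(
            FunctionName="my-function",  # hypothetical function name
            Payload=json.dumps(payload).encode(),
        )
        return json.loads(resp["Payload"].read())
    except lambda_client.exceptions.ServiceException:
        # During a regional incident the retries may still be exhausted;
        # degrade gracefully instead of hard-failing the caller.
        return None
```

The idea is just that when a whole region is having a bad day, client-side backoff and a degraded-but-working fallback path beat hammering the API until it recovers.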
This happens but once in a blue moon. The overall availability of AWS is still immense.
lemmy.world and lemmy.ml are down too, wonder if it’s related…
It’s on a single dedicated server from OVH.
Yayyyyyyyyyyy!
took the day off posting on lemmy