Is it working? -> (Yes) --> Fix DNS
Does anyone else have the thought that maybe it’s time to just replace these 30+ year old ancient protocols? Seems like the entire networking stack is held together with string and duct tape and unnecessarily complicated.
A lot of the decisions made sense somewhat in the 80s and 90s, but seems ridiculous in this day and age lmao
Some ancient protocols get replaced gradually though. Look at HTTP/3 not using TCP anymore. I mean at least it’s something.
Wait till you hear about when IPv6 was first introduced (90s) and how 50% of the internet still doesn’t work with it.
Businesses don’t want to change shit that “works” so you still have stuff like the original KAME project code floating around from the 90s.
Data Link layer be pretty stable to be fair
I definitely would love to see a rework of the network stack at large but idk how you’d do it without an insane amount of cooperation among tech giants which seems sort of impossible
I may be waaaay off here, but the internet as it exists is pretty much built on DNS, isn’t it? I mean, the whole idea of ARPANET back in the 60s and 70s was to build a robust, redundant, self-healing network that could survive nuclear armageddon, and except when humans f it up (intentionally or otherwise), it generally does what it says on the tin.
Now, there are arguments to be made about securing the protocol, but if you were to rip and replace the routing protocols, I think you’d have to call it something other than the Internet.
Making a typo in the BGP config is the internet’s version of nuclear Armageddon
Same unfortunately goes for a big chunk of the law on a global scale… Constant progress, new possibilities and technologies, and change in general are really outpacing some dusty and constantly abused solutions. With every second that goes by, any “somehow still holding” relic comes under more pressure. As a species we can have some really great ideas, but long-term planning and future-proofing are still not our strongest suit.
Me last week when my pi-hole was down
Oh dang, that reminds me, I need to rebuild that one as well. Still running on Buster…
Why do Canadians make such good network engineers?
We always make sure to check the Eh Record.
Literally this, literally today.
Same here, quite literally this morning, it was fucking DNS
Those little bastards, so sneaky. I’ve checked if d(uck)dns is working before my local DNS.
Am I the only one who can’t think of a time DNS has caused a production outage on a platform I worked on?
Lots of other problems over the years, but never DNS.
I have a coworker who always forgets TTL is a thing, and never plans ahead. On multiple occasions they’ve moved a database, updated DNS to reflect the change, and are confused why everything is broken for 10-20 minutes.
I really wish they’d learned the first time, but every once in a while they come to me to troubleshoot the same issue.
How would you prevent that?
While planning your change (or a project requiring such a change), check the relevant (* see edit) DNS TTLs. Figure out the point in the future when you want to make the actual change (time T), and set the TTL to 60 seconds at T-(TTL*2) or earlier. Then when it comes time to make your DNS change, the TTL is reasonable and you can verify your change within a few minutes instead of wondering for hours.
Edit: literally check all host names involved. They are all suspect
This. For example, if you have a DNS entry for your DB and the TTL is set to 1 hour, an hour before you intend to make the changes, just lower the TTL of the record to a minute. This allows all clients to be told to only cache for a minute and to do lookups every minute. Then after an hour, make the necessary changes to the record. Within a minute of the changes, the clients should all be using the new record. Once you’ve confirmed that everything is good, you can then raise TTL to 1 hour again.
This approach does require some more planning and two or three updates to DNS, but minimizes downtime. The reason you may need to keep TTL high is if you have thousands of clients and you know the DNS won’t be updated often. Since most providers charge per thousand or million lookups, that adds up quickly when you have thousands of clients who would be doing unnecessary lookups often. Also a larger TTL would minimize the impact of a loss of DNS servers.
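The reason you have to lower the TTL *ahead of* the cutover (not at cutover time) is that clients who already cached the record keep the old TTL until it expires. Here’s a toy client-side cache sketch in Python that illustrates the timeline; all names and numbers (`old-db`, `new-db`, the 3600s/60s TTLs) are hypothetical, and this is obviously not a real resolver:

```python
# Toy client-side DNS cache: serves the cached answer until it expires,
# then re-fetches from the "authoritative" value with whatever TTL is
# current at that moment.
class TtlCache:
    def __init__(self):
        self.entry = None  # (value, expires_at) or None

    def resolve(self, now, authoritative_value, ttl):
        if self.entry and now < self.entry[1]:
            return self.entry[0]  # still cached, old answer wins
        self.entry = (authoritative_value, now + ttl)
        return authoritative_value

cache = TtlCache()

# t=0: client resolves the DB record, TTL is still 1 hour.
assert cache.resolve(0, "old-db", 3600) == "old-db"

# t=1000: we lower the TTL to 60s on the server side, but this client
# cached the answer with the OLD TTL, so nothing changes for it yet.
assert cache.resolve(1000, "old-db", 60) == "old-db"

# t=3700: past the old expiry, the client re-fetches and now caches
# with the short 60s TTL.
assert cache.resolve(3700, "old-db", 60) == "old-db"

# Cutover happens; the stale window is now at most 60 seconds.
assert cache.resolve(3750, "new-db", 60) == "old-db"  # cached until t=3760
assert cache.resolve(3761, "new-db", 60) == "new-db"  # picked up within a minute
print("stale window bounded by the short TTL")
```

Same story as above: the worst-case staleness at any moment is whatever TTL the client happened to cache with, which is why the coworker’s 10–20 minute breakage window is just the old TTL playing out.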
Set it to 5 seconds ??? Profit
??? Is when the underwear gnomes send you a massive bill because you’re paying per 1k lookups. They profit, you don’t
“yes boss we need another 20 dns servers” “idk why dns traffic is so heavy these days”
For real, I’ve had problems where I specifically checked if it was DNS, concluded it was not, but it still turned out to be DNS.
The problem is the cache. Always.
Actually, while for me personally it’s sometimes DNS, when I see an internet-wide outage it’s usually BGP.
I feel like there’s some context here I’m missing…
It’s a haiku about network issues
No words describe such 🤌
Nice painting!
Not uncommon.
I’m not going to get old at the beach
There’s no way it doesn’t hold logically
I got old
I have this one hanging up in my cube