I can’t help but feel overwhelmed by the sheer complexity of self-hosting modern web applications (if you look under the surface!)
Most modern web applications are designed to basically run standalone on a server. Integrating them into an existing environment is a real challenge, if not impossible. They often come with their own set of requirements and dependencies that don’t easily align with an established infrastructure.
“So you have an already running and fully configured web server? Too bad for you, bind me to port 443 or GTFO. Reverse-proxying by subdomain? Never heard of that. I won’t work. Deal with it. Oh, and your TLS certificates? Screw them, I ship my own!”
Attempting to merge everything together requires meticulous planning, extensive configuration, and often annoying development work to find workarounds.
Modern web applications, with their elusive promises of flexibility and power, have instead become a source of maddening frustration whenever they are not the only application being served.
My frustration with this is real. Self-hosting modern web applications is an uphill battle, not only in terms of technology but also when it comes to setting up the hosting environment.
I just want to drop some PHP files into a directory and call it a day. A PHP interpreter and a simple HTTP server – that’s all I want to need for hosting my applications.
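For what it’s worth, PHP’s built-in development server gets pretty close to that workflow – one interpreter, one command, files in a directory (the port and docroot below are just examples, and it’s meant for development rather than production):

```sh
# Serve the PHP files in ./public on port 8080
php -S 0.0.0.0:8080 -t public
```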
this… is not the way.
If you have containers named a, b, and c which all want port 443, then you don’t bind any of them to the host; just point your reverse proxy at a:443, b:443, and c:443. The containers just need to be on the same network.
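A minimal sketch of that setup, assuming Docker Compose and nginx (all names, images, and paths here are placeholders): none of the app containers publish host ports, only the proxy binds 80/443, and it reaches the apps by service name over a shared network.

```yaml
services:
  proxy:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    networks: [web]

  a:
    image: example/app-a   # listens on 443 inside the container
    networks: [web]

  b:
    image: example/app-b
    networks: [web]

  c:
    image: example/app-c
    networks: [web]

networks:
  web:
```

Inside nginx.conf, one server block per subdomain then proxies to the matching container, for example:

```nginx
server {
    listen 443 ssl;
    server_name a.example.com;
    # your own certificates, not the ones the app ships with
    ssl_certificate     /etc/ssl/example/fullchain.pem;
    ssl_certificate_key /etc/ssl/example/privkey.pem;

    location / {
        proxy_pass https://a:443;   # service name "a" resolves on the shared network
        proxy_set_header Host $host;
    }
}
```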
Also, there’s a footgun with the approach you mentioned which I only just learned about: ports published by Docker bypass your iptables rules, because Docker handles them in its own chains rather than the INPUT chain your rules usually live in. So even if iptables is denying access to anything other than 80 & 443, ports published by containers are still reachable from outside.
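If something really does have to be published on the host, one way around that footgun – just a sketch, the service name and ports are made up – is to publish it on the loopback address only, so Docker doesn’t open it to the outside:

```yaml
services:
  app:
    image: example/app
    ports:
      - "127.0.0.1:8080:80"   # reachable from the host itself, not from other machines
```

Rules that should apply to container traffic can also go into the DOCKER-USER iptables chain, which Docker evaluates before its own rules.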