I can’t help but feel overwhelmed by the sheer complexity of self-hosting modern web applications (if you look under the surface!)
Most modern web applications are designed to basically run standalone on a server. Integration into an existing environment is a real challenge, if not impossible. They often come with their own set of requirements and dependencies that don’t easily align with an established infrastructure.
“So you have an already running and fully configured web server? Too bad for you, bind me to port 443 or GTFO. Reverse-proxying by subdomain? Never heard of that. I won’t work. Deal with it. Oh, and your TLS certificates? Screw them, I ship my own!”
Attempting to merge everything together requires meticulous planning, extensive configuration, and often annoying development work to find workarounds.
Modern web applications, with their elusive promises of flexibility and power, have instead become a source of maddening frustration whenever they aren’t the only application being served.
My frustration about this is real. Self-hosting modern web applications is an uphill battle, not only in terms of technology but also when it comes to setting up the hosting environment.
I just want to drop some PHP files into a directory and call it a day. A PHP interpreter and a simple HTTP server – that’s all I want to need for hosting my applications.
Yes, containers could be the way – if every application came in a container, or if it were super easy to containerize them without the application knowing it.
Can I run half a dozen applications in containers that all need port 443, and how annoying is it to set that up?
Yes, you can just map the internal port 443 to a different port outside of each container and then reverse-proxy them all.
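For illustration, a minimal docker-compose sketch of that approach (the image names and host ports are made up):

```yaml
# Each app's internal 443 gets mapped to a different host port;
# a reverse proxy on the host then forwards to localhost:8443, localhost:9443, etc.
services:
  app-a:
    image: example/app-a   # placeholder image name
    ports:
      - "8443:443"         # host 8443 -> container 443
  app-b:
    image: example/app-b   # placeholder image name
    ports:
      - "9443:443"
```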
this… is not the way.
If you have containers named a, b, and c which all want port 443, then you don’t bind any of them – just point your reverse proxy at a:443, b:443, and c:443. The containers just need to be on the same network.
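Roughly, and with made-up names, that looks like this – only the proxy publishes a port, and it reaches the apps by container name over the shared Docker network:

```yaml
services:
  proxy:
    image: nginx
    ports:
      - "443:443"          # only the reverse proxy is published to the host
    networks: [web]
  a:
    image: example/app-a   # placeholder; no ports published at all
    networks: [web]
  b:
    image: example/app-b
    networks: [web]
networks:
  web:
```

Inside the nginx config, each `server_name` block would then `proxy_pass https://a:443;`, `https://b:443;`, and so on – Docker’s internal DNS resolves the container names on that network.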
Also, there’s a footgun with the approach you mentioned which I only just learned about: ports published by Docker bypass iptables. So even if iptables denies access to anything other than 80 and 443, a container’s published ports are still accessible.
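One common mitigation (not a full fix) is to bind published ports to the loopback interface, so they’re only reachable through the host’s own reverse proxy:

```yaml
services:
  app-a:
    image: example/app-a       # placeholder image
    ports:
      - "127.0.0.1:8443:443"   # reachable from the host itself only,
                               # not from other machines on the network
```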
This is what I mean. I don’t think it is easy. I also don’t like it; I find it utterly annoying. And yes, I need all of my services to listen on port 443 or other publicly reachable ports. Right now my router just forwards the ports to the machine the applications run on.
My dynamic DNS provider does not offer wildcards or unlimited subdomains. This was never an issue during the last 10+ years.
But it seems to be impossible to self-host anything nowadays without reimplementing the whole tech stack for each individual application – ending up with half a dozen operating systems and different configurations I need to maintain.
In my opinion, it’s actually becoming easier to self-host these days – there’s no more dependency hell to deal with.
Most apps offer Docker support, which makes them easier to deploy. You no longer waste time setting up the perfect environment for that new app you want to run, sometimes messing up your current setup in the process.
Which apps are you using that make it seem so bad for you – would you like to share? I think you should at least try it once. Managing ports is really easy with containerized apps and offers exactly the solution you need: map port 443 of all your favourite apps to different host ports, set up a reverse proxy, and access them through it.
I’d recommend you try Pi-hole with docker compose – you’ll be surprised.
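For reference, a minimal Pi-hole compose sketch (the password, timezone, and host ports are placeholders, and the exact environment variables depend on the image version):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"          # web UI on host port 8080 instead of 80
    environment:
      TZ: "Europe/Berlin"      # placeholder timezone
      WEBPASSWORD: "changeme"  # placeholder admin password
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
```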
Pretty much all apps that I want to publicly use.
While I could bind a GoToSocial instance to any port I want, I highly doubt there is a way to tell every federating instance to use a port other than 443. At the same time, I want to self-host a publicly available webserver using HTTPS only, so this one also needs port 443.
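For what it’s worth, both services can share port 443 on one machine if separate hostnames are available – a rough nginx sketch with made-up hostnames and paths:

```nginx
# Two services share port 443; nginx picks the backend by the requested hostname.
server {
    listen 443 ssl;
    server_name social.example.org;            # hypothetical hostname
    ssl_certificate     /etc/ssl/social.crt;   # placeholder cert paths
    ssl_certificate_key /etc/ssl/social.key;
    location / {
        proxy_pass http://127.0.0.1:8080;      # GoToSocial bound to a high local port
        proxy_set_header Host $host;
    }
}
server {
    listen 443 ssl;
    server_name www.example.org;               # hypothetical hostname
    ssl_certificate     /etc/ssl/www.crt;
    ssl_certificate_key /etc/ssl/www.key;
    root /var/www/html;                        # the plain static webserver
}
```

Though, as noted above, this only works if your DNS provider actually gives you those subdomains.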