I can’t help but feel overwhelmed by the sheer complexity of self-hosting modern web applications (if you look under the surface!)
Most modern web applications are designed to basically run standalone on a server. Integrating them into an existing environment is a real challenge, if not impossible. They often come with their own set of requirements and dependencies that don’t easily align with an established infrastructure.
“So you have an already running and fully configured web server? Too bad for you, bind me to port 443 or GTFO. Reverse-proxying by subdomain? Never heard of that. I won’t work. Deal with it. Oh, and your TLS certificates? Screw them, I ship my own!”
Attempting to merge everything together requires meticulous planning, extensive configuration, and often annoying development work to find workarounds.
Modern web applications, with their elusive promises of flexibility and power, have instead become a source of maddening frustration whenever they aren’t the only application being served.
My frustration about this is real. Self-hosting modern web applications is an uphill battle, not only in terms of technology but also when it comes to setting up the hosting environment.
I just want to drop some PHP files into a directory and call it a day. A PHP interpreter and a simple HTTP server – that’s all I want to need for hosting my applications.
Containers really shine in today’s self-hosting world. Complete userspace isolation, basically no worries about dependencies or conflicts since everything is shipped and pre-configured internally, easy port mapping, immutable “system” files and volume mounting for persistent data… And much more. If built properly, container images solve almost all the problems you’re grappling with.
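As a rough sketch of what that looks like in practice (the image tag, paths, and ports are just examples, not a recommendation), a single Compose service covers the port mapping, an immutable root filesystem, and a volume for persistent data:

```yaml
services:
  app:
    image: php:8-apache          # example image; swap in whatever your app ships
    ports:
      - "8080:80"                # map host port 8080 to the container's 80
    read_only: true              # immutable "system" files
    tmpfs:                       # writable scratch space Apache needs at runtime
      - /run
      - /tmp
    volumes:
      - ./src:/var/www/html:ro   # drop your PHP files here
      - app-data:/var/data       # persistent data survives image rebuilds
volumes:
  app-data:
```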
I can’t imagine ever building another application without containerization again. I can’t remember the last time I installed any kind of server-side software directly on a host, with the exception of packages the host unavoidably needs to support containers or to improve its security posture.
In my (admittedly strong) opinion, it’s absolute madness – dare I say, reckless and incomprehensible – that anybody would create a brand-new product that doesn’t ship via container images in this day and age, provided they have the required knowledge to make it happen, or the capacity to learn how to do it (properly and following best practices, of course) in time to meet a deadline.
I’m sure some would disagree or could cite special use-cases where containers wouldn’t be a good fit for a product or solution, but I’m pretty confident those are really niche cases that apply to barely anyone.
While PHP is still cool… join the dark side and start using containers 😏
Yeah, I can’t imagine going back to not using containers. Call me a script kiddie if you want, but I can copy-paste some environment variables into a Docker Compose file and stand up a new service in ten minutes.
I’m not going to say it’s always smooth sailing. I’ve definitely had containers with frustrating complications that took some sorting out. But man, if you want to just drop some files in a directory and go? Just get on board the Docker train and save yourself the headache.
deleted by creator
Yes, containers could be the way – if every application came in a container, or if it were super easy to containerize them without the applications knowing it.
Can I run half a dozen applications in containers that all need port 443, and how annoying is it to set that up?
Yes, you can just map the internal 443 port to another port outside of the container and then reverse-proxy them all.
this… is not the way.
If you have containers named a, b, and c which all want port 443, then you don’t bind any of them – just point your reverse proxy at a:443, b:443, and c:443. The containers just need to be on the same Docker network.
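That setup fits in a short Compose file (the service names, images, and network name here are made up; any reverse proxy works the same way):

```yaml
services:
  proxy:
    image: nginx:alpine      # or Caddy, Traefik, Nginx Proxy Manager, …
    ports:
      - "443:443"            # the only port published on the host
    networks: [web]
  a:
    image: example/app-a     # hypothetical apps; none of them publish a port
    networks: [web]
  b:
    image: example/app-b
    networks: [web]
networks:
  web: {}
```

Inside the `web` network the proxy reaches the backends as `a:443` and `b:443` by container name, via Docker’s built-in DNS.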
Also, there’s a footgun with the approach you mentioned which I only just learned: exposed Docker ports bypass iptables. So even if iptables is denying access to anything other than 80 & 443, ports published by Docker containers are still accessible.
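For anyone bitten by this: published ports are handled in the FORWARD chain, so host-level INPUT rules never see that traffic. Docker provides the `DOCKER-USER` chain for your own filtering. A sketch (the interface name `eth0` is an example; adapt ports and interface to your setup):

```
# DOCKER-USER is evaluated before Docker's own rules. -I inserts at the
# top of the chain, so these run in reverse order of final evaluation:
iptables -I DOCKER-USER -i eth0 -j DROP
iptables -I DOCKER-USER -i eth0 -p tcp -m multiport --dports 80,443 -j ACCEPT
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

End result: established connections and new TCP on 80/443 are allowed in from eth0, everything else headed for a published container port is dropped.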
deleted by creator
For IPv4 you can use tricks like an SNI proxy to sniff the SNI header and transparently redirect applications to the right IPv6 host. HTTP requests can just be proxied with any reverse proxy. For non-HTTP, non-TLS traffic, you’ll need more complicated solutions, though
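The SNI trick can be done with stock nginx via the stream module’s `ssl_preread` – the hostnames and backend addresses below are placeholders:

```nginx
stream {
    # Route by the SNI name in the TLS ClientHello, without terminating TLS.
    map $ssl_preread_server_name $backend {
        social.example.org  10.0.0.11:443;  # e.g. a fediverse host
        www.example.org     10.0.0.12:443;
        default             10.0.0.12:443;
    }
    server {
        listen 443;
        ssl_preread on;       # peek at the handshake, then pass it through
        proxy_pass $backend;
    }
}
```

Since the TLS session isn’t terminated at the proxy, each backend keeps negotiating with its own certificate.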
This is what I mean. I don’t think it is easy. I also don’t like it. I find it utterly annoying. And yes, I need all of my services to listen on port 443 or other publicly reachable ports. Right now my router just forwards the ports to the machine the applications run on.
My dynamic DNS Provider does not offer wildcards or unlimited subdomains. This was never an issue during the last 10+ years.
But it seems to be impossible to self-host anything nowadays without reimplementing the whole tech stack for each individual application – ending up with half a dozen operating systems and different configurations I need to maintain.
In my opinion, it’s actually becoming easier to self-host these days – no more dependency hell to deal with.
Most apps offer Docker support now, which makes them easier to deploy: you don’t waste time setting up the perfect environment for each new app, and you don’t risk messing up your current setup in the process.
Which apps are you using that make it seem so bad for you – would you like to share? I think you should at least try it once: managing ports is really easy with containerized apps and offers exactly the solution you need. Map port 443 of all your favourite apps to different host ports, set up a reverse proxy, and access them through it.
I’d recommend trying Pi-hole with Docker Compose – you’ll be surprised.
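For reference, a minimal Pi-hole Compose file looks roughly like this (adapted from memory of the upstream example – the host ports, timezone, and volume paths are mine, so double-check against Pi-hole’s own docs):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"          # web UI on a non-conflicting host port
    environment:
      TZ: "Europe/Berlin"      # adjust to your timezone
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```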
Which apps
Pretty much all apps that I want to publicly use.
While I could bind a GoToSocial instance to any port I want, I highly doubt there is a way to tell every federating instance to use a port other than 443. At the same time I want to self-host a publicly available webserver using HTTPS only, so this one also needs port 443.
And that’s why my rule is: if it doesn’t come in a container, it doesn’t go on my server. If I can’t get the application crammed into my Docker Compose stack, I look for an alternative. Hell, I even run Pi-hole and OctoPrint inside containers.
What “modern web application” doesn’t work with rev proxy by subdomain? (Esp one that can’t be remedied by rewriting the host header at the proxy).
Furthermore, which of these apps require binding to 443 and issue their own certs? This sounds strange if a listening port can’t be specified.
Sadly, a PHP dev environment and a webserver are not enough for modern devs.
I just ended up installing Proxmox, and everything I install gets its own VM. It binds to whatever port it wants, and my public IP’s port 443 is bound to a VM with nginx. If you hit a subdomain, nginx proxies the request to the actual server and port. Servers can ship whatever certificates they want; my nginx is the one clients negotiate TLS with, so it has its own certificate. The only other thing running on that server is certbot.
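One nginx server block per subdomain is all that setup takes. A sketch with made-up names, IPs, and cert paths (certbot manages the certificate files):

```nginx
server {
    listen 443 ssl;
    server_name app.example.org;

    # Issued and renewed by certbot; paths are the Let's Encrypt defaults.
    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        proxy_pass http://10.0.0.21:8080;   # the app's VM and port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```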
It’s honestly much simpler this way. Need to restart a machine after an install? Everything else stays up. One piece of software needs glibc version fuck-my-ass? Don’t care – that machine will have that version of glibc and I will not touch it. Software has a memory leak? QEMU doesn’t, and the VM is limited in RAM, so only that one thing crashes.
Just make sure your VM template is good (and has your SSH key installed) and you’re golden. Before this week’s internet outage, I had 99.999% uptime with a single hypervisor, and the only monitoring I have is the uptime of all services as seen from AWS. I don’t even have alerts.
I sometimes long for the days (that I missed – I’m only 24) of monolithic Linux servers where you have a webserver, a database server, and that’s it. Sadly, VMs are cheap and dependencies are hell. It’s still quite fun to tinker in the virtualized world. It’s just not the same as what once was.
retvrn to cgi-bin
Oh I’d love to!
I recently(ish) installed Unraid on a new NAS, as I’d heard good things but knew nothing about it. I didn’t really intend to install much on it, but got playing around with the Docker stuff built into it and… fuck me. The amount of time I used to spend installing dependencies, configuring stuff, trying to work out why the hell it wasn’t working. With really not much work I’ve got a fully fledged Arr setup with Jellyfin, a full dev environment, Grafana and Influx for monitoring, automated TLS certs, and a bunch of other things all working pretty damn flawlessly.
Containers are awesome.
Sometimes venting a little helps. I finally sat down and learned the basics of Docker and found an easy-to-follow video series on how to set up Docker with Portainer and Nginx Proxy Manager. Works like a charm. I also set up my GoToSocial instance again, but failed at setting up a Lemmy instance… but I guess that’s for another discussion :)
Care to share what the helpful series was?
Sure. It’s a 4-part video series by the German YouTuber Raspberry Pi Cloud, created at the end of 2021 (but it still works with the most recent versions).
He goes from technical background through basic system preparation and Docker installation to a fully featured setup. I skipped lots of content up to the point where he was done with the Docker installation (I had prepared and “cleaned” my system and installed Docker beforehand.)
- Basics and somewhat not-so-nice manual Docker installation not using the system’s package manager: https://www.youtube.com/watch?v=8QgBqu-tE-I
- Portainer installation and setup and general usage: https://www.youtube.com/watch?v=ZYgCYgxbKgQ
- Nginx Proxy Manager and reverse-proxying, Vaultwarden installation: https://www.youtube.com/watch?v=SsnrH-5_ORE
- More on Nginx Proxy Manager (Let’s Encrypt), Pi-Hole installation and setup: https://www.youtube.com/watch?v=D6aOdey5nj8
The thing that boils my blood is secret SQLite databases. I just want to store my volumes on a NAS using NFS and run the stacks on a server built for it. Having a container randomly blow up because an undocumented SQLite database failed to get a lock sucks ass.
secret sqlite databases
The thing is: “secret”. SQLite databases in general are awesome. Basically no configuration needed, they just work, they don’t even need their own server, and in 99% of all cases they’re absolutely enough for what they’re used for. I’d always choose a SQLite database over anything else – but it should be made clear that such a database is used.
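One low-tech way to find out before NFS bites you: scan your volume directories for SQLite files so no database stays “secret”. A sketch – the paths here are hypothetical, and a stray DB is simulated in a temp dir; point `VOLUME_ROOT` at your real (e.g. NFS-backed) volume path instead:

```shell
# Simulated volume tree with an undocumented SQLite database in it.
VOLUME_ROOT=$(mktemp -d)
mkdir -p "$VOLUME_ROOT/someapp"
touch "$VOLUME_ROOT/someapp/db.sqlite3"

# List files with common SQLite filename suffixes; adjust patterns as needed.
find "$VOLUME_ROOT" -type f \
  \( -name '*.sqlite' -o -name '*.sqlite3' -o -name '*.db' \) -print
```

Anything this turns up is a volume you probably don’t want sitting on an NFS share with flaky locking.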
Perhaps a solution like CloudPanel or Cloudron would make self-hosting multiple sites/apps easier for you. I use CloudPanel to host multiple WordPress websites and it works very well. I use Cloudron to quickly deploy various open-source apps on one VPS.
Docker containers do pretty much solve that: drop a `docker-compose.yml` file in place, maybe tweak a few lines, and that’s all.

Not sure what the problem is, though. Pull up a reverse proxy, give all the crappy shit a private IP and whatever port they want, and access it through the proxy – everyone can be on 443. `127.42.1.123:443`, whatever. Maybe use real containers, or that crappy Docker shit; both offer you independent namespaces with all the ports and whatnot.
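Binding a published port to a specific local address instead of 0.0.0.0 is one line in Compose – the address (from the 127/8 loopback range) and image name are just examples:

```yaml
services:
  app:
    image: example/app            # hypothetical image
    ports:
      - "127.42.1.123:443:443"    # only reachable via that local address,
                                  # i.e. by the reverse proxy on this host
```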