I can’t help but feel overwhelmed by the sheer complexity of self-hosting modern web applications (if you look under the surface!)

Most modern web applications are designed to basically run standalone on a server. Integrating them into an existing environment is a real challenge, if not impossible. They often come with their own set of requirements and dependencies that don’t easily align with an established infrastructure.

“So you have an already running and fully configured web server? Too bad for you, bind me to port 443 or GTFO. Reverse-proxying by subdomain? Never heard of that. I won’t work. Deal with it. Oh, and your TLS certificates? Screw them, I ship my own!”

Attempting to merge everything together requires meticulous planning, extensive configuration, and often annoying development work and workarounds.

Modern web applications, with their elusive promises of flexibility and power, have instead become a source of maddening frustration whenever they are not the only application being served.

My frustration about this is real. Self-hosting modern web applications is an uphill battle, not only in terms of technology but also when it comes to setting up the hosting environment.

I just want to drop some PHP files into a directory and call it a day. A PHP interpreter and a simple HTTP server – that’s all I want to need for hosting my applications.
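
Something like PHP’s built-in web server is the level of simplicity I have in mind – a minimal sketch, serving the current directory (the PHP docs pitch it for development and testing, not production):

```sh
# Serve the PHP files in the current directory on port 8080
# using PHP's built-in web server (development/testing use).
php -S 0.0.0.0:8080 -t .
```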

    • 𝘋𝘪𝘳𝘬OP

      For IPv4 you can use tricks like an SNI proxy to sniff the SNI header and transparently redirect applications to the right IPv6 host. HTTP requests can just be proxied with any reverse proxy. For non-HTTP, non-TLS traffic, you’ll need more complicated solutions, though
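
      As a rough illustration of that kind of trick (a sketch, not a drop-in config – hostnames and backend addresses are placeholders and could just as well be IPv6 hosts): nginx’s stream module with ssl_preread can route incoming TLS connections by SNI without terminating TLS.

      ```nginx
      # stream context in nginx.conf – needs ngx_stream_ssl_preread_module
      stream {
          # choose a backend based on the SNI name in the TLS ClientHello
          map $ssl_preread_server_name $backend {
              social.example.org   127.0.0.1:8443;   # e.g. a fediverse app
              www.example.org      127.0.0.1:9443;   # e.g. a plain web server
              default              127.0.0.1:9443;
          }

          server {
              listen 443;            # the only publicly exposed TLS port
              ssl_preread on;        # peek at the SNI, don't terminate TLS
              proxy_pass $backend;   # forward the raw TLS stream
          }
      }
      ```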

      This is what I mean. I don’t think it is easy. I also don’t like it. I find that utterly annoying. And yes, I need all of my services to listen on port 443 or other publicly reachable ports. Right now my router just forwards the ports to the machine the applications run on.

      My dynamic DNS provider does not offer wildcards or unlimited subdomains. This was never an issue during the last 10+ years.

      But it seems to be impossible to self-host anything nowadays without reimplementing the whole tech stack for each individual application – ending up with half a dozen operating systems and different configurations I need to maintain.

      • microair2

        In my opinion, it’s actually becoming easier and easier to self-host these days – there’s no more dependency hell to deal with.

        Most apps now offer Docker support, which makes them easier to deploy: you don’t waste time setting up the perfect environment for that new app you want to run, and you don’t risk messing up your current setup in the process.

        Which apps are you using that make it seem so bad for you? Would you like to share? I think you should at least try it once – managing ports is really easy with containerized apps and gives you exactly what you need: map port 443 of all your favourite apps to different host ports, put a single reverse proxy in front, and access them through that (see the sketch below).
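
        For example, a container that insists on port 443 internally can be published on any free host port (service and image names below are made up):

        ```yaml
        # docker-compose.yml fragment – names are placeholders
        services:
          someapp:
            image: example/someapp:latest
            ports:
              - "127.0.0.1:8443:443"   # host port 8443 -> container port 443, bound to localhost only
        ```

        The reverse proxy on the host then owns the real port 443 and forwards requests for the app’s hostname to 127.0.0.1:8443.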

        I’d recommend trying Pi-hole with Docker Compose – you’ll be surprised.
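
        Roughly along these lines (a minimal sketch based on the upstream compose example – timezone, password, the remapped web port and the volume paths are placeholders, and the exact environment variables depend on the Pi-hole version, so check the image docs):

        ```yaml
        # docker-compose.yml – minimal Pi-hole sketch
        services:
          pihole:
            image: pihole/pihole:latest
            ports:
              - "53:53/tcp"
              - "53:53/udp"
              - "8081:80/tcp"          # web UI moved off the host's port 80
            environment:
              TZ: "Europe/Berlin"
              WEBPASSWORD: "changeme"  # admin password for the web UI
            volumes:
              - ./etc-pihole:/etc/pihole
              - ./etc-dnsmasq.d:/etc/dnsmasq.d
            restart: unless-stopped
        ```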

        • 𝘋𝘪𝘳𝘬OP

          “Which apps …”

          Pretty much all apps that I want to publicly use.

          While I could bind a GoToSocial instance to any port I want, I highly doubt there is a way to tell every federating instance to use a port other than 443. At the same time I want to self-host a publicly available web server using HTTPS only, so this one also needs port 443.
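
          For reference, this is the kind of per-hostname reverse-proxy setup the other replies point at: one proxy owns 443 and dispatches by server name, so GoToSocial can stay on an internal port while federation still talks to 443. A minimal nginx sketch, assuming GoToSocial listens on 127.0.0.1:8080; hostnames and certificate paths are placeholders:

          ```nginx
          # both hostnames share the public port 443 – only nginx binds it
          server {
              listen 443 ssl;
              server_name social.example.org;

              ssl_certificate     /etc/ssl/social.example.org/fullchain.pem;
              ssl_certificate_key /etc/ssl/social.example.org/privkey.pem;

              location / {
                  proxy_pass http://127.0.0.1:8080;   # GoToSocial's internal port (assumed)
                  proxy_set_header Host $host;
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_set_header X-Forwarded-Proto $scheme;
              }
          }

          server {
              listen 443 ssl;
              server_name www.example.org;            # the plain HTTPS web server

              ssl_certificate     /etc/ssl/www.example.org/fullchain.pem;
              ssl_certificate_key /etc/ssl/www.example.org/privkey.pem;

              root /var/www/html;
          }
          ```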