I’m sure I’m massively overthinking this, but any help would be greatly appreciated.
I have a domain name that I bought through NameCheap and I’ve pointed it to Cloudflare (i.e. updated the name servers). I have a Synology NAS on which I run Docker and a few containers. Up until now I’ve done this using IP addresses and ports to access everything (I have a Homepage container running and just link to everything from there).
But I want to set up SSL and start running Vaultwarden, hence purchasing a domain name to make it all easier.
I tried creating an A record in Cloudflare to point to the internal IP of my NAS (and obviously, this couldn’t be orange-clouded through CF because it’s internal to my LAN). I’m very reluctant to point the A record to the external IP of my NAS (which, for added headache is dynamic, so I’d need to get some kind of DDNS) because I don’t want to expose everything on my NAS to the Internet. In actual fact, I’m not precious about accessing any of this stuff over the internet - if I need remote access I have a Tailscale container running that I can connect to (more on that later in the post). The domain name was purely for ease of setting up SSL and Vaultwarden.
So I guess my questions are:
- What is the best way to go about this - do I set up DDNS on the NAS, point my domain in Cloudflare at that external IP address, then use Traefik to expose only the containers I want accessible, using subdomains?
- If so, then how do I know that all other ports aren’t accessible (I assume because I’m only going to expose ports 80 and 443 in Traefik?)
- What do other people see (i.e. outside my network) if they go to my domain? How do I ensure they can’t access my NAS and see some kind of page?
- Is there a benefit to using Cloudflare?
- How would Pi-hole and local DNS fit into this? I guess I could point my router at Pi-hole for DNS and create my A records on Pi-hole for all my subdomains - but what do I need to set up initially in Cloudflare?
- I also have a RPi that has a (very basic) website on it - how do I setup an A record to have Cloudflare point a sub-domain to the Pi’s IP address?
- Going back to the Tailscale thing - is it possible to point the domain to the IP address of the Tailscale container, so that the domain is only accessible when I switch on the Tailscale VPN? Is this a good idea/bad idea? Is there a better way to do it?
I’m sure these are all noob-type questions, but for the past 6-7 years I’ve purely used this internally using IP:port combinations, so never had to worry about domain names and external exposure, etc.
Many thanks in advance!
I do this for some dockers on my Unraid, except I use the Zero Trust tunnels. MUCH easier, can use SSL, and can set up a login page for users. Also, you don’t have to open any ports on your router!
I’m not sure about Synology, but I would assume you can find a “cloudflared” docker in the app store.
check out this youtube video for a good explanation: https://www.youtube.com/watch?v=ZvIdFs3M5ic
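For reference, a minimal Compose sketch of what that looks like (assuming you’ve already created a tunnel in the Zero Trust dashboard and grabbed its token; the hostname-to-service mappings are configured in the dashboard, not here):

    # sketch only - the tunnel and its token come from the Cloudflare Zero Trust dashboard
    services:
      cloudflared:
        image: cloudflare/cloudflared:latest
        container_name: cloudflared
        restart: unless-stopped
        command: tunnel --no-autoupdate run --token ${CLOUDFLARE_TUNNEL_TOKEN}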
A hundred times this. It’s going to be the easiest to set up by a wide margin. https://www.cloudflare.com/products/tunnel/
Interesting, I’ve never considered Cloudflare Tunnels. Thanks.
However I do remember seeing this video the other day, that suggests perhaps it’s not always the best solution? Not sure this applies here, though: https://www.youtube.com/watch?v=oqy3krzmSMA.
Christian brings up some great points worthy of consideration; however, if you’re going to use traditional routing through their network (A/CNAME records) you’re still doing the same thing. CF will still see your traffic.
The second thing I should say is that I only use Zero Trust for websites I share with family. So I have SearXNG and wef/voyager dockers running through Zero Trust.
For admin, Home Assistant/IoT/IP cams, I use an always-on IPsec VPN on my iPhone, iPad, and Steam Deck (I take it to work and plug into a 3rd monitor)… this is cool because I get 24/7 ad blocking no matter where I am, since it routes all my traffic through my Pi-hole at home. This is a great solution for a single person, but I do not want to manage VPN access for multiple people. So I agree with Christian in NOT putting admin stuff/sensitive info behind CF at all (Zero Trust OR traditional web routing) unless you fully trust them. Otherwise do a 24/7 VPN like I do.
I don’t plan on exposing any of this stuff to anybody other than me. I do plan on spinning up SearX but it’ll only be me using it. I’ve given up trying to convince my family to move away from Google to even DuckDuckGo or Startpage, so there’s no way I’ll convince them to use SearX!
I think, therefore, for accessing away from home I’ll perhaps set up a subdomain that points to the IP of my Tailscale container — that means it’ll be accessible externally but only when I turn on the VPN.
When I’m on my home network I have a VPN on my Mac anyway.
If using Docker, then just set up NGINX Proxy Manager. It has Let’s Encrypt built in, so you literally just fill out a few fields, ask for a new certificate, provide your email, and BAM!, all done.
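For reference, the quick-setup compose file from the NPM docs looks roughly like this (untested sketch; adjust the volume paths to wherever you keep your Docker data):

    services:
      nginx-proxy-manager:
        image: jc21/nginx-proxy-manager:latest
        restart: unless-stopped
        ports:
          - 80:80     # HTTP traffic to be proxied
          - 443:443   # HTTPS traffic to be proxied
          - 81:81     # NPM admin web UI
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt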
Before I was using Traefik I used to use plain NGINX and was pretty happy with it. I made the switch to Traefik after reading some good things about it on Reddit.
More than happy to switch to NPM and give it a try. At this point I have no reverse proxy running at all, so it’s not even like I have to swap out Traefik — there’s nothing there to begin with.
NPM is such a blessing! It works absolutely flawlessly!
Easiest Solution imo:
- get a wildcard DNS record, point it to the public IP of your NAS
- deploy the SSL cert (covering your main domain and the subdomains for your Docker containers)
- configure the reverse proxy in Synology to proxy requests for the subdomains to your Docker containers (you can enforce local-only access to certain services too)
- static route or local DNS (Pi-hole) to redirect local requests for your public IP to the private IP of your NAS
- done!
Thanks, I’d like to know more about how to go about this approach.
I guess in my head, I want to achieve the following (however I go about it):
- Access https://mydomain.com from outside my network and hit some kind of blank page that wouldn’t necessarily suggest to the public that anything exists here
- Access https://mydomain.com from inside my network and hit a login page of some kind (Authelia or otherwise), to then gain access to the Homepage container running in Docker (essentially a dashboard to all my services)
- Access https://secure.mydomain.com from outside my network and route through to the same as above, only this would be via the Tailscale IP address/container running on my stack to allow for remote access
- Route all HTTP requests to HTTPS
- Use the added protection that Cloudflare brings (orange clouds where possible)
- SSL certificates for all services
- Ability to turn up extra Docker containers and auto-obtain SSL certs for them
- Ensure that everything else on my NAS and network is secure/inaccessible other than the services I expose through Traefik.
I have no idea where Cloudflare factors in (if at all), nor how Pi-hole factors in (if at all).
Internal stuff I’ve been absolutely fine with. Stick a domain name, a reverse proxy and DNS in front of me and it’s like I’m learning how to code a Hello World app all over again.
How would Pi-hole and local DNS fit into this?
Pihole/local DNS would resolve all your queries when on your local network. So you would add the A/CNAME records for your services there with local IPs.
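For illustration, on Pi-hole v5 the Local DNS Records page writes hosts-style A records to custom.list, and CNAMEs land in a dnsmasq config file; the names and IPs below are just placeholders:

    # /etc/pihole/custom.list - A records in hosts format
    192.168.1.10  mydomain.com
    192.168.1.10  traefik.mydomain.com
    # /etc/dnsmasq.d/05-pihole-custom-cname.conf - CNAME records
    cname=home.mydomain.com,mydomain.com

If you’d rather catch every subdomain in one line, a dnsmasq wildcard like address=/mydomain.com/192.168.1.10 in a custom file under /etc/dnsmasq.d/ works too.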
but what do I need to set up initially in Cloudflare?
Nothing if you just want local usage of the domain name; queries never hit Cloudflare. But you do want the domain at least added to Cloudflare so you can issue SSL certs using Let’s Encrypt and its DNS-01 challenge.
What do other people see (i.e. outside my network) if they go to my domain? How do I ensure they can’t access my NAS and see some kind of page?
If you don’t open ports on your firewall they wouldn’t have any access. Otherwise if you do open the web ports, they generally go to a reverse proxy running somewhere that routes traffic as needed, so you could choose to display some kind of page or just show nothing.
I also have a RPi that has a (very basic) website on it - how do I setup an A record to have Cloudflare point a sub-domain to the Pi’s IP address?
You would need a reverse proxy running either on the Pi or on the NAS that cloudflare points to, then that proxy takes the subdomain and routes it to the appropriate internal IP/service.
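If that proxy ends up being Traefik on the NAS, it would be a dynamic config via the file provider, roughly like this (a sketch; the subdomain, the Pi’s IP, and the file provider itself are assumptions on top of your current setup):

    # dynamic config loaded by Traefik's file provider (placeholder names/IP)
    http:
      routers:
        pi-site:
          rule: Host(`pi.mydomain.com`)
          entryPoints:
            - web
          service: pi-site
      services:
        pi-site:
          loadBalancer:
            servers:
              - url: "http://192.168.1.20:80"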
Thanks. There’s definitely stuff in here I want to do, I just need to figure out the order of play and break it down a bit.
As per reply to another comment.
Do I have to port forward 80 and 443 no matter what? Ideally I don’t want to forward anything.
Do I need DDNS in here somewhere, i.e. create a DDNS and link it to my NAS, create an A record in Cloudflare to point my domain to the external IP of the DDNS? Is that how I get into my NAS from the domain without worrying about the IP changing? How do I then prevent anybody accessing the NAS admin on port 5000/5001, as well as anything else except the containers I expose via Traefik?
Do I have to port forward 80 and 443 no matter what? Ideally I don’t want to forward anything.
You only need to port forward if you want external access without using a VPN or something like that. Like if you wanted friends to be able to access your server for example.
Do I need DDNS in here somewhere, i.e. create a DDNS and link it to my NAS, create an A record in Cloudflare to point my domain to the external IP of the DDNS?
Yes, but only if you want to port forward and have external access. If you want local access only then you don’t need port forwarding, DDNS, or any A records in cloudflare.
How do I then prevent anybody accessing the NAS admin on port 5000/5001, as well as anything else except the containers I expose via Traefik?
Assuming you did port forward 80/443, then the NAS admin wouldn’t be exposed since it’s on different ports.
Thanks. I realise they’re all pretty basic questions. But brace yourself: more are on their way!
So… no, I don’t want to give external access - I’m not running any services that anyone would want/need access to - other than perhaps my Jellyfin server, but not sure I even want anyone accessing that. So let’s assume for right now, no access to the outside world. Therefore, no port forwarding required.
So to get access to my internal network from the domain, do I simply set up local DNS records in something like Pi-hole, to point mydomain.com to the internal IP of my NAS? Kind of like a network-wide equivalent of modding the /etc/hosts file on my machine?
Perhaps a(nother) silly question but, what’s to stop me doing that now with a completely random domain name? Is there some kind of authentication I’d need to go through to prove that mydomain.com is, in fact, mine? Or does it simply not matter since it’s internal only?
If I’ve understood correctly, then, I don’t need Cloudflare at all in my setup if there’s no external access? Nothing to proxy, nothing to protect?
Assuming I get all of the above working and traffic routing to my containers, how would I then go about setting up SSL? Can that be done through Traefik rather than Cloudflare? Even if the domain isn’t external?
do I simply set up local DNS records in something like Pi-hole, to point mydomain.com to the internal IP of my NAS? Kind of like a network-wide equivalent of modding the /etc/hosts file on my machine?
Yep exactly!
Perhaps a(nother) silly question but, what’s to stop me doing that now with a completely random domain name?
Nothing, it’s local to your network only so it only affects you. You could set google.com to return whatever IP you want, for example, but it would prevent you from actually accessing Google.
If I’ve understood correctly, then, I don’t need Cloudflare at all in my setup if there’s no external access? Nothing to proxy, nothing to protect?
The only thing you need Cloudflare (or another DNS-01-supported provider) for is getting Let’s Encrypt SSL certificates, since the DNS-01 challenge uses automatically generated public DNS records on your domain name to verify that you own it.
Can that be done through Traefik rather than Cloudflare? Even if the domain isn’t external?
Yep it’s done through Traefik either way, their docs should have a section on SSL with cloudflare IIRC.
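Roughly, your static traefik.yml grows a certificates resolver like the sketch below (the email and resolver name are placeholders, and the container needs a Cloudflare API token, e.g. CF_DNS_API_TOKEN, in its environment so Traefik can create the TXT records for the challenge):

    # traefik.yml (static config) - Let's Encrypt via the DNS-01 challenge on Cloudflare
    certificatesResolvers:
      letsencrypt:
        acme:
          email: you@example.com   # placeholder
          storage: /acme.json
          dnsChallenge:
            provider: cloudflare

Each container’s router then picks it up with a label along the lines of traefik.http.routers.whatever.tls.certresolver=letsencrypt.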
Absolute superstar, thanks for your help so far. I’ll make a start on some of this tomorrow and see how far I get — either with Traefik or NPM.
Do I need to do anything with the domain itself on Cloudflare at the moment? Or do I just leave it with its current A record pointing at an IP address (it was done as part of the setup in Cloudflare so I have no idea what that IP address is).
Obviously that domain in reality will just sit there doing nothing.
Yeah you can just leave it, delete the A record if you want to.
OK so made a start with this. Spun up a Pi-hole container, added mydomain.com as an A record in Local DNS, and created a CNAME for traefik.mydomain.com to point to mydomain.com.
In Cloudflare, I removed the mydomain.com A record and the www CNAME record.
Doing an nslookup on mydomain.com I get
    Non-authoritative answer:
    *** Can't find mydomain.com: No answer
Which I guess is to be expected.
However, when I then navigate to http://traefik.mydomain.com in my browser, I’m met with a Cloudflare error page: https://imgur.com/XhKOywo.
Below is the docker-compose of my traefik container:
    traefik:
      container_name: traefik
      image: traefik:latest
      restart: unless-stopped
      networks:
        - medianet
      ports:
        - 80:80
        - 443:443
      expose:
        - 8080
      volumes:
        - /etc/localtime:/etc/localtime:ro
        - /var/run/docker.sock:/var/run/docker.sock:ro
        - /volume1/docker/traefik:/etc/traefik
        - /volume1/docker/traefik/access.log:/logs/access.log
        - /volume1/docker/traefik/traefik.log:/logs/traefik.log
        - /volume1/docker/traefik/acme/acme.json:/acme.json
      environment:
        - TZ=Europe/London
      labels:
        - traefik.enable=true
        - traefik.http.routers.traefik.rule=Host(`$TRAEFIK_DASHBOARD_HOST`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
        - traefik.http.routers.traefik.service=api@internal
        - traefik.http.routers.traefik.entrypoints=traefik
My traefik.yml is also nice and basic at this point:
    global:
      sendAnonymousUsage: false
    entryPoints:
      web:
        address: ":80"
      traefik:
        address: "8080"
    api:
      dashboard: true
      insecure: true
    providers:
      docker:
        endpoint: "unix:///var/run/docker.sock"
        watch: true
        exposedByDefault: false
    log:
      filePath: traefik.log
      level: DEBUG
    accessLog:
      filePath: access.log
      bufferingSize: 100
Any ideas what’s going wrong? I’m unclear on why the domain is still routing to Cloudflare.
You’re on the right track. I’m on mobile so will be brief, edit from a laptop in a while.
You can use subdomains, which is my preferred way of making services work with Traefik, but you could also route on a path, say example.com/potato, to get to the potato service; this may work better with DDNS.
Edit: each subdomain needs to be updated, but you might be able to get away with making them all a CNAME that points at the DDNS.
You’re correct in your assessment that you only expose 80 and 443 for the Traefik container and access everything else through that. Also only use 80 to redirect to 443.
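In Traefik that redirect can live on the entrypoints themselves, something like this (a sketch; web/websecure are just the conventional names):

    # traefik.yml (static config) - listen on 80 only to bounce everything to 443
    entryPoints:
      web:
        address: ":80"
        http:
          redirections:
            entryPoint:
              to: websecure
              scheme: https
      websecure:
        address: ":443"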
Don’t expose the NAS directly to the web; instead look at port forwarding on your router, which should be able to forward requests received on only ports 80 and 443 to the NAS while still blocking everything else.
My only complaint about Synology stuff is that I couldn’t get Traefik in swarm mode going!
Any questions reach out.
Edit 2: consider looking at a cheap VPS or a static IP to eliminate the requirement to expose your NAS directly to the web. Alternatively, run your own internal DNS for stuff (including SSL certs from Let’s Encrypt) and VPN in (I use WireGuard) when you want to access it.
Thanks. Yep, subdomains was what I’d planned on: traefik.mydomain.com to access the Traefik dashboard; home.mydomain.com to access the Homepage container. I was planning on spinning up an Authelia container as well to provide 2FA for the services I want protecting. I guess it’d also be nice to have some kind of landing page for traffic coming directly to www.mydomain.com or mydomain.com as well.
Ideally I don’t want to port forward, so would I need to rely on Traefik to redirect the traffic from port 80 to port 443, and then proxy from port 443 to the required container? How do I therefore stop traffic from hitting the DSM admin on ports 5000/5001 for example?
I need to figure out a starting point to get traffic from my domain into my NAS (safely) then start spinning up containers and have Traefik route them appropriately, then I can look at Pi-hole/local DNS and Tailscale. And then I guess SSL.
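For something like home.mydomain.com, I’m picturing labels on the Homepage container roughly like this (untested sketch; the image name and the letsencrypt resolver name are assumptions until I’ve actually got certs working):

    homepage:
      image: ghcr.io/gethomepage/homepage:latest   # assumed image/tag
      container_name: homepage
      networks:
        - medianet
      labels:
        - traefik.enable=true
        - traefik.http.routers.homepage.rule=Host(`home.mydomain.com`)
        - traefik.http.routers.homepage.entrypoints=websecure
        - traefik.http.routers.homepage.tls.certresolver=letsencrypt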
Ideally I don’t want to port forward, so would I need to rely on Traefik to redirect the traffic from port 80 to port 443, and then proxy from port 443 to the required container? How do I therefore stop traffic from hitting the DSM admin on ports 5000/5001 for example?
That’s not quite how it works - the port forwarding is on your internet gateway to allow traffic on those ports to a specific host internal to your network. That’s your only option if you want these services to be available on the wider web.
My recommendation around using 80 to redirect to 443 is because in 2023 there’s no reason for that traffic to be unencrypted - just listen on 80 and say “Hey, go to https://example.com” instead.
If you don’t care about that you can do internal only DNS + VPN into the network and still get the benefits of free SSL certificates via the LetsEncrypt DNS01 challenge.
Thanks, and yeah sorry, what I meant was to listen on both ports 80 and 443 and have a redirect in Traefik from 80 to 443 - I don’t plan on having anything directly accessible over port 80.
As per another post, I’ve hit a stumbling block:
OK so made a start with this. Spun up a Pi-hole container, added mydomain.com as an A record in Local DNS, and created a CNAME for traefik.mydomain.com to point to mydomain.com.
In Cloudflare, I removed the mydomain.com A record and the www CNAME record.
Doing an nslookup on mydomain.com I get
    Non-authoritative answer:
    *** Can't find mydomain.com: No answer
Which I guess is to be expected.
However, when I then navigate to http://traefik.mydomain.com in my browser, I’m met with a Cloudflare error page: https://imgur.com/XhKOywo.
Below is the docker-compose of my traefik container:
    traefik:
      container_name: traefik
      image: traefik:latest
      restart: unless-stopped
      networks:
        - medianet
      ports:
        - 80:80
      volumes:
        - /etc/localtime:/etc/localtime:ro
        - /var/run/docker.sock:/var/run/docker.sock:ro
        - /volume1/docker/traefik:/etc/traefik
        - /volume1/docker/traefik/access.log:/logs/access.log
        - /volume1/docker/traefik/traefik.log:/logs/traefik.log
        - /volume1/docker/traefik/acme/acme.json:/acme.json
      environment:
        - TZ=Europe/London
      labels:
        - traefik.enable=true
        - traefik.http.routers.traefik.rule=Host(`$TRAEFIK_DASHBOARD_HOST`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
        - traefik.http.routers.traefik.service=api@internal
My traefik.yml is also nice and basic at this point:
    global:
      sendAnonymousUsage: false
    entryPoints:
      web:
        address: ":80"
    api:
      dashboard: true
      insecure: true
    providers:
      docker:
        endpoint: "unix:///var/run/docker.sock"
        watch: true
        exposedByDefault: false
    log:
      filePath: traefik.log
      level: DEBUG
    accessLog:
      filePath: access.log
      bufferingSize: 100
Any ideas what’s going wrong? I’m unclear on why the domain is still routing to Cloudflare.
Going back to the Tailscale thing - is it possible to point the domain to the IP address of the Tailscale container, so that the domain is only accessible when I switch on the Tailscale VPN? Is this a good idea/bad idea? Is there a better way to do it?
Yeah that works perfectly. The domain will point to your Tailscale IP, but that IP is not reachable unless you are in the VPN.
On my box I have a Caddy container with the Cloudflare plugin that automatically generates Let’s Encrypt certificates, and I can use it to point (sub)domains at specific Docker containers. (see: https://caddy.community/t/how-to-guide-caddy-v2-cloudflare-dns-01-via-docker/8007 )
Thanks.
I guess the issue with this, though, is that I don’t always need to access it via Tailscale - I’d only do that when away from home. Perhaps there’s a way to point a subdomain to the Tailscale IP, and that’s only accessible when Tailscale is active? And then use an alternative subdomain to access it the rest of the time? Is that achievable?