I have a few self-hosted services, but I’m slowly adding more. Currently, they’re all on subdomains like linkding.sekoia.example etc. However, that adds DNS records to fetch and means more setup. Is there some reason I shouldn’t put all my services under a single subdomain with paths (using a reverse proxy), like selfhosted.sekoia.example/linkding?

  • dan@lemm.ee · 1 year ago

    The only problem with using paths is that the service might not support it (i.e. it might generate absolute URLs without the path prefix, rather than using relative URLs).

    Subdomains are probably the cleanest way to go.

  • Sascamooch@lemmy.sascamooch.com · 1 year ago

    I prefer subdomains, personally. A lot of services expect to run on the root of the web server, so although you can sometimes configure them to use a path, it’s kind of a pain.

    Also, migrating services from one server to another will be a lot easier with subdomains since all you have to do is change the A and AAAA records. I use ZeroTier for a lot of my services, and that’s really nice since, even if I move a container to another machine, the container’s ZeroTier IP address will stay the same, so I don’t even need to update DNS. With paths, migration would involve a lot more work.

  • Jeena@jemmy.jeena.net · 1 year ago

    I started with paths because I didn’t want to pay for an expensive SSL certificate for each service I’m running (no problem anymore now with Let’s Encrypt). But that turned out to be a terrible idea. Once I wanted to host a service on a different server, the problems started. With subdomains you just point your DNS to the correct IP address and that’s it. With paths you have to proxy everything through your one vhost and it gets really messy. And to be honest, most services expect to run in the root directory, not under a path.

    • SocialDoki@lemmy.blahaj.zone · 1 year ago

      Yeah, this is it. The only exception would be if you’re running everything off a single non-virtualized, non-containerized server, which is a bad idea for a whole host of reasons.

  • lvl@beehaw.org · 1 year ago

    Try not to use paths, you’ll have some weird cross-interactions when two pieces of software set the same cookie (session cookies for example), which will make you reauthenticate for every path.

    Subdomains are the way to go, especially with wildcard DNS entries and DNS-01 letsencrypt challenges.
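    A minimal sketch of that combination, assuming Caddy built with a DNS provider plugin (cloudflare is just an example) and placeholder hostnames and ports:

```caddyfile
# Wildcard cert via the DNS-01 challenge; requires a Caddy build with
# your DNS provider's plugin (cloudflare shown here as an assumption).
*.sekoia.example {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@linkding host linkding.sekoia.example
	handle @linkding {
		reverse_proxy 127.0.0.1:9090
	}

	# Unmatched subdomains get a 404 instead of the wrong service.
	handle {
		respond 404
	}
}
```

    Because the cert is issued for `*.sekoia.example` via DNS-01, no per-service HTTP challenge or extra DNS record is needed when you add a service.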

  • surfrock66@lemmy.world · 1 year ago

    Subdomain; overall cheaper after a certain point to get a wildcard cert, and if you split your services up without a reverse proxy it’s easier to direct names to different servers.

    • witten@lemmy.world · 1 year ago

      Who still pays for certs?? (I say this as non-snarkily as possible.) I just imagined everyone self-hosting uses Let’s Encrypt.

      • surfrock66@lemmy.world · 1 year ago

        Let’s Encrypt is fine for encryption but not identification. I have some stuff I prefer that for, specifically around demonstrating services that I host at home in the workplace. Having full verification just reduces the questions I have to deal with. It’s like $90/year for a wildcard.

  • Midas@ymmel.nl · 1 year ago

    I’ve kinda been trimming the number of services I expose through subdomains; it grew so wild because it was pretty easy. I’d just set a wildcard subdomain pointing to my IP, and the Caddy reverse proxy created the subdomains.

    Just have a wildcard A record that points *. to your IP address.

    Even works with nested subdomains like “home.” and then “*.home”
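    The DNS side of that is just a few records, sketched here with a placeholder IP. Note that once `home.` exists as its own name, the top-level wildcard no longer matches names beneath it, hence the explicit `*.home` entry:

```
; BIND-style zone fragment (203.0.113.10 is a placeholder address)
*.sekoia.example.       300  IN  A  203.0.113.10
home.sekoia.example.    300  IN  A  203.0.113.10
*.home.sekoia.example.  300  IN  A  203.0.113.10
```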

  • preciouspupp@sopuli.xyz · 1 year ago

    You can make a wildcard domain and point it to the reverse proxy to route based on SNI. That works if you have HTTPS sites only. Just an idea.
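    As a sketch, nginx’s stream module can do this kind of SNI routing without terminating TLS itself. The hostnames and backend addresses below are placeholders:

```nginx
# Route TLS connections by SNI without decrypting them.
# Requires nginx built with the ngx_stream_ssl_preread module.
stream {
    map $ssl_preread_server_name $backend {
        linkding.sekoia.example  127.0.0.1:8443;
        grafana.sekoia.example   127.0.0.1:9443;
        default                  127.0.0.1:4433;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```

    Since the proxy only peeks at the ClientHello, each backend keeps its own certificate, which is why this only works for HTTPS traffic.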

  • Oida@lemmy.world · 1 year ago

    Depends on the usage and also the service. I’m using subfolders for all my Tasmota switches, like https://switch.domain.org/garage. This makes them easier to maintain because I don’t need to mess around with a new subdomain for every new device. On the other hand, I like unique services on a subdomain: video or audio. I can switch the application behind it, but the entry point remains.
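    A sketch of that one-vhost, one-path-per-device layout in nginx (device IPs are placeholders; the trailing slash on `proxy_pass` strips the path prefix before handing off, which works as long as the device UI uses relative URLs):

```nginx
server {
    listen 443 ssl;
    server_name switch.domain.org;

    location /garage/ {
        # /garage/foo is forwarded to the device as /foo
        proxy_pass http://192.168.1.50/;
    }

    location /porch/ {
        proxy_pass http://192.168.1.51/;
    }
}
```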

  • I eat words@group.lt · 1 year ago

    in addition to all the other good comments - if you ever decide to move a service to another server or something like that, moving a subdomain will be much easier.

  • LordChaos82@fosstodon.org · 1 year ago

    @Sekoia I like using subdomains as it is easy to configure in a lot of services. Also, easier to remember if you are giving the URL to someone for access.

  • drdaeman@lemmy.zhukov.al · 1 year ago

    Some apps have hardcoded assumptions about their paths, making that kind of setup harder to achieve (you’ll have to patch the apps or do on-the-fly rewrites).

    Then there’s also a potential cookie sharing/collision issue. If apps don’t scope cookies to specific paths, they may both use a same-named cookie, and this can cause weird behavior.

    And if one of the apps is compromised (e.g. has an XSS issue), it’s a bit less secure with paths than with subdomains.

    But don’t let me completely dissuade you - paths are a totally valid approach, especially if you group multiple closely related things (e.g. Grafana and Prometheus) under the same domain name.

    However, if you feel that setting up a new domain name is a lot of effort, I’d recommend investing some time in automating it.

  • shrugal@lemmy.world · 1 year ago

    If you don’t have any restrictions (limited subdomains, service only works on the server root etc.) then it’s really just a personal preference. I usually try paths first, and switch to subdomains if that doesn’t work.

  • gaurhoth@lemmy.world · 1 year ago

    You can certainly do it with paths, but it’s generally cleaner and easier to do subdomains. Some apps don’t like paths without additional setup and/or reverse proxy configuration because they hard-code redirects to specific paths.

    In some cases (if you are hosting services both internal and externally), you’ll want to configure a split brain DNS (a local DNS server that resolves internal host to internal IPs and external DNS resolves to public IPs).

    Yes there’s some setup with that, but once you really get into it – you’ll start automating that :) I have a script that reads all of my Traefik http routers via the rest API and updates my unbound DNS server automagically.
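    The core of such a script can be quite small. This is a hedged sketch of the idea, not the commenter’s actual code: it parses router rules shaped like Traefik v2’s `/api/http/routers` response and emits unbound `local-data` lines. The internal IP and sample payload are assumptions.

```python
import json
import re

# Matches Traefik rules like: Host(`linkding.sekoia.example`)
HOST_RULE = re.compile(r"Host\(`([^`]+)`\)")

def unbound_records(routers_json: str, internal_ip: str) -> list[str]:
    """Emit one unbound 'local-data' A record per Host() rule."""
    records = []
    for router in json.loads(routers_json):
        for host in HOST_RULE.findall(router.get("rule", "")):
            records.append(f'local-data: "{host}. A {internal_ip}"')
    return records

# Shaped like a (trimmed) /api/http/routers response:
sample = json.dumps([
    {"name": "linkding@docker", "rule": "Host(`linkding.sekoia.example`)"},
    {"name": "grafana@docker",  "rule": "Host(`grafana.sekoia.example`)"},
])

for record in unbound_records(sample, "192.168.1.10"):
    print(record)
```

    In a real setup you’d fetch the JSON from the Traefik API and feed the output to `unbound-control` instead of printing it.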

  • Freeman@lemmy.pub · 1 year ago

    > With paths you can use https://192etc/example, but if you use subdomains, how do you connect internally with HTTPS? https://example.192etc won’t work, as you can’t mix an IP address with domain resolution.

    You can do this. The reality is it depends on the app.

    But ultimately I used both and pass them through a nginx proxy. The proxy listens for the SNI and passes traffic based on that.

    For example homeassistant doesn’t do well with paths. So it goes to ha.contoso.com.

    Miniflux does handle paths. So it uses contoso.com/rss.

    Plex needs a shitload of headers and paths so I use the default of contoso.com to pass to it along with /web.

    My photo albums use both. And some things even use a separate gTLD.

    But they all run through the same nginx box at the border.
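    On the quoted question about reaching subdomains internally over HTTPS: the usual answer is a local DNS override so the name resolves to the internal address of the proxy. A dnsmasq sketch with a placeholder IP:

```
# dnsmasq on the LAN resolver: every *.contoso.com query
# answers with the internal reverse-proxy address.
address=/contoso.com/192.168.1.20
```

    The browser still sees the real hostname, so the certificate validates normally; only the resolved address differs inside the LAN.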