Welcome to today’s daily kōrero!

Anyone can make the thread, first in, first served. If you're here and there's no daily thread yet, feel free to create it!

Anyway, it’s just a chance to talk about your day, what you have planned, what you have done, etc.

So, how’s it going?

  • Xcf456@lemmy.nz

    I’ve gotten back into HA recently and I’m running it in a docker container. It confuses the hell out of me. Like, how do you access the files that are inside the docker container? I can’t work out where they actually are on my system.

    I think the main difference between docker and Home Assistant OS is that some things, add-ons in particular, aren’t available in docker unless you install them manually.

    • Dave@lemmy.nzM

      Docker is fantastic, but there is definitely a learning curve. I use docker compose for everything, which makes things easier (especially blowing away and recreating a container).
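
      For example, the whole recreate cycle is just a couple of standard compose commands run from the directory that holds the docker-compose.yml (nothing specific to my setup):

        docker compose pull      # grab newer images, if there are any
        docker compose down      # stop and remove the containers (bind mounts and named volumes are left alone)
        docker compose up -d     # recreate the containers and start them in the background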

      Instead of trying to work out where stuff lives, I create bind mounts. Basically, map a directory on your system to a directory within the container. Most self-hosted services these days will provide docker-compose.yml example files with bind mounts specified, and docker now ships with compose baked in.
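
      For Home Assistant specifically, the volumes line is the part that answers the “where are my files” question. A rough sketch of what that looks like (the official install docs have the full example, which adds host networking and a few other options):

        services:
          homeassistant:
            image: ghcr.io/home-assistant/home-assistant:stable
            volumes:
              - ./config:/config    # ./config on the host is /config inside the container
            restart: unless-stopped

      With that bind mount, configuration.yaml and everything else Home Assistant writes ends up in ./config next to the compose file, instead of being buried somewhere in docker’s internal storage.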

      I create one directory to hold everything, then one directory per service inside that, and put the docker-compose.yml file in there. Then I can back up just that top-level directory and everything important is covered (I do things differently for lemmy because I want live backups, but all my personal stuff is done like that).
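
      So on disk it ends up looking something like this (the names here are just examples):

        ~/docker/
          homeassistant/
            docker-compose.yml
            config/            <- bind mounted into the container
          another-service/
            docker-compose.yml
            ...

      and backing up ~/docker gets all of it.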

        • Dave@lemmy.nzM

          Two parts. The first, and the important one, is the database. I run a scheduled database dump for this (the lemmy documentation actually explains how to do a database dump and how to restore it - I just have a bash script triggered by a cron job).
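
          Roughly what that script looks like (the container name, database user, and paths here are assumptions - the lemmy backup docs have the exact commands for your setup):

            #!/bin/bash
            # Dump the lemmy postgres database from inside its container and
            # keep a dated, compressed copy.
            set -euo pipefail
            BACKUP_DIR=/backups/lemmy
            docker compose -f /srv/lemmy/docker-compose.yml exec -T postgres \
              pg_dumpall -U lemmy | gzip > "$BACKUP_DIR/lemmy-db-$(date +%F).sql.gz"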

          The second part is the images. Uploaded images and the image cache (thumbnails for posts on other servers, etc.) are virtually impossible to tell apart, so it all gets backed up (about 150GB just for images). For this I just do a tarball of the whole directory. Any in-progress changes get skipped by the process, but my theory is that since images are only ever added or deleted, anything being added right now would have been missed anyway if the backup had run 30 seconds earlier, so it can wait until tomorrow’s run. The vast, vast majority are cached images rather than uploads anyway.
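
          The image backup is basically one tar command (the pict-rs path is an assumption - it depends on where your volumes live):

            # Archive the pict-rs data directory. --warning=no-file-changed just
            # silences the "file changed as we read it" noise from images that
            # are mid-upload; they get picked up properly in the next night's run.
            tar --warning=no-file-changed -czf \
              "/backups/lemmy/lemmy-images-$(date +%F).tar.gz" /srv/lemmy/volumes/pictrs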

          These two things run overnight NZ time since they can be resource intensive, but I’ve done daytime backups before when needed and they don’t have much of an impact on site performance anyway.
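
          Scheduling-wise it’s just two crontab entries (the times and script paths here are made up for the example - pick whatever lands overnight NZT for your server’s timezone):

            # min hour day month weekday  command
            # Assuming the server clock is on UTC, 14:00 UTC is roughly 2-3am NZ time.
            0 14 * * * /srv/backups/lemmy-db-dump.sh
            0 15 * * * /srv/backups/lemmy-images-backup.sh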