Not containers and data, but the images. The point would be reproducibility in case a remote registry no longer contains a certain image. Do you do that, and how?
I also run (well, ran) a local registry. It ended up being more trouble than it was worth.
Would you have to `docker load` them all when rebuilding a host?
Only if you want to ensure you bring the replacement stack back up with the exact same version of everything, or need to bring it up while you're offline. I'm bad about using the `:latest` tag, so this is my way of version-controlling. I've had things break (cough Authelia cough) when I moved a service to another server and it pulled a newer image that had breaking config changes.

For me, it's about having everything I need on hand in order to quickly move a service or restore it from a backup. It also depends on what your needs are and the challenges you are trying to overcome. For example, when I started doing this style of deployment, I had slow, unreliable, and heavily data-capped internet. Even if my connection was up, pulling a bunch of images was time-consuming and ate away at my measly satellite internet data cap. Having the ability to rebuild stuff offline was a hard requirement when I started doing things this way. That's no longer a limitation, but I like the way this works, so I've stuck with it.
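To answer the `docker load` question concretely, the round trip is just this (the image, tag, and filename here are only examples; pin whatever version you're actually running):

```sh
# On the running host: dump the exact image the stack uses
docker save authelia/authelia:4.38 | gzip > authelia-4.38-amd64.tar.gz

# On the rebuilt host, offline if need be: load it back in
gunzip -c authelia-4.38-amd64.tar.gz | docker load

# With the tag pinned in the compose file, compose finds the image
# locally and never needs to reach the remote registry
docker compose up -d
```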
Everything a service (or stack of services) needs lives in my deploy directory, which looks like this:
```
/apps/{app_name}/
    docker-compose.yml
    .env
    build/
        Dockerfile
        {build assets}
    data/
        {app_name}
        {app2_name}    # If there are multiple applications in the stack
        ...
    conf/              # If separate from the app data
        {app_name}
        {app2_name}
        ...
    images/
        {app_name}-{tag}-{arch}.tar.gz
        {app2_name}-{tag}-{arch}.tar.gz
```

When I run backups, I tar.gz the whole base `{app_name}` folder, which includes the deploy file, data, config, and dumps of its services' images, and pipe that over SSH to my backup server (rsync also works for this). The only ones I handle differently are stacks with in-stack databases that need a consistent snapshot.
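In practice the backup is a one-liner per stack, something like this (hostname and app name are placeholders):

```sh
# Archive the whole /apps/myapp folder (compose file, data, config, image dumps)
# and stream it straight to the backup server over SSH
tar -czf - -C /apps myapp \
  | ssh backup.example.com "cat > /backups/myapp-$(date +%F).tar.gz"
```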
When I pull new images to update the stack, I move the old images aside and `docker save` the now-current ones. The old images get deleted after the update is considered successful (so usually within 3-5 days).
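The rotation is nothing fancy, roughly along these lines (image names and tags are placeholders):

```sh
# Keep the previous dump around until the new version has proven itself
mv images/nginx-1.25-amd64.tar.gz images/nginx-1.25-amd64.tar.gz.old

# Pull the new pinned tag and dump it next to the stack
docker pull nginx:1.26
docker save nginx:1.26 | gzip > images/nginx-1.26-amd64.tar.gz

# A few days after a successful update, drop the old dump
rm images/nginx-1.25-amd64.tar.gz.old
```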
A local registry would work, but you would have to re-tag all of the pre-made images to your registry (e.g. `docker tag library/nginx docker.example.com/nginx`) in order to push them to it. That makes updates more involved and was a frequent cause of me running 2+ year old versions of some images.

Plus, you'd need the registry server and any infrastructure it needs, such as DNS, a file server, a reverse proxy, etc., before you could bootstrap anything else. Or, if you're deploying your stack to a different environment outside your own, your registry server might not be available.
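For comparison, the registry route means doing roughly this for every upstream image you use (registry hostname made up):

```sh
# Re-tag the upstream image under your own registry's namespace, then push it
docker pull nginx:1.26
docker tag nginx:1.26 docker.example.com/nginx:1.26
docker push docker.example.com/nginx:1.26
```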
Bottom line: I'm a big fan of using Docker to make my complex stacks easy to port around, back up, and restore. There are many ways to do that, but this is what works best for me.