floof.org

Every time I look up advice/details of how to do something on Linux and the project/guide doesn't explain what to do, but instead has a docker image, my resolve to never use docker increases a little bit more.

I get why docker exists and I'm not saying that it's not useful, but wow, I really do not want the question "How do I do X?" to be answered with "Use this docker image".

The year is 2034. The Linux command "man" is now distributed as a Docker image. To find out how to deploy it, you have to join a Discord server, the client for which is also distributed as a Docker image.
Ikani mastodon (AP)
@tryst funny thing on that. For Mint, the only listed install for Discord is flatpak, which is a docker image. It's a 1.8GB download and the image is unable to log into Discord for me. It goes into a captcha loop.
This Reddit post kinda sums up how I feel about docker:

Honestly, if you like docker then that's great, but hear me out:

Docker on enterprise servers? βœ… Yep
Docker instead of VMs? βœ… Sure why not?
Docker because you want to? βœ… Of course!
Docker on a single board computer for one job? ❌ Nonononono please just tell me the steps involved so I can learn how the system works!

I've got a couple of things running in Docker on SBCs where they're the only thing doing anything on that machine. It's kinda overkill, but it allows me to more easily integrate it into my larger remote management system, and makes monitoring, alerting, and updates a looot easier.

@garrwolfdog Sorry I didn't mean to come across as "never use docker at all" but that I dislike that answers have in some cases become "use this docker image"

For example I want a SBC to monitor the temperature of my hot water tank. The first guide I found said that I should use multiple docker images to provide Prometheus and Grafana, and other guides were similar.

In the end Darac pointed me to Munin and that's exactly what I want. :)

@garrwolfdog Like in your case if you're already au fait with docker and it fits into your network then it makes sense, but for me who's still running servers with multiple services for an internal home network I'd prefer to have the details of how to configure it myself :)

It wouldn't be an issue if it was "here's how to do it from scratch but also there's a docker image if you want" but I keep seeing guides that are "you must use docker"

I'm not totally sure I follow. Even if you're using docker you still need to configure things as much as you would if you were setting it up on bare metal. The only real difference is that you don't need to compile the binaries yourself, and it will have a kind of sandbox to run in. Unless you're wanting to tinker with the source code itself, I guess?
Ooor this might just be confusion about how docker works? It can be a little unclear, tbh.

@garrwolfdog Sorry, let me clarify; I know nothing about docker, and the first time I tried to follow one of these guides I ran into a problem with no way to troubleshoot the fault. I couldn't find an easy answer for how to look at the logs or files within the docker container, so I had no idea what was going on.

That one did have all the code/scripts/etc. available outside the docker image, and the first time I ran it all I found the fault straight away just by looking at the system logs.

@garrwolfdog It turned out that the python code was pointing to a folder that didn't exist. Changing that code fixed the problem, but the docker image pulled the code directly from GitHub, so short of forking the project and making my own changes there, I wasn't able to find a way to change the files in the docker container
@garrwolfdog I guess my point is that while I'm futzing around with my own little projects I know how to troubleshoot if something doesn't work. I've no idea how to do that on a docker container, and I haven't yet found anything that explains it in a way I can understand, if it even is possible to troubleshoot like one would on a bare metal machine.
Kootenay mastodon (AP)
@garrwolfdog Really, each container is just a little Linux server running in its own space on top of actual Linux. You can get into it and see the filesystem with 'docker exec -ti <container id|name> /bin/bash' (the flags have to come before the container name, otherwise they're passed to the command you're running).
Incoming network connections are mapped (on startup) from your host to the container.
Logs may be in /var/log in the container, or sometimes they're set to go to stderr, in which case use 'docker logs -f <container>' to see them.
Once you know all this, you can debug. :)
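To make that concrete, here's a rough sketch of a debugging session. The container name and file paths are placeholders, not from any real image:

```shell
docker ps                                  # list running containers, find its name/ID
docker logs -f mycontainer                 # follow whatever it writes to stdout/stderr
docker exec -ti mycontainer /bin/bash      # open a shell inside (the image must ship bash)
docker cp mycontainer:/app/settings.py .   # copy a file out to inspect or edit it
docker cp settings.py mycontainer:/app/    # ...and push the edited copy back in
```

Note that edits made with 'docker cp' only last until the container is recreated; for a persistent change you'd bind-mount your own copy over that path with '-v' when starting the container.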
Yikes! Yeeeeeah, that is NOT how you should be building your container images. The whole point of containerisation is to avoid problems like that! No one should be releasing images that pull code from third-party sources without locking it to a version; that's dodgy as hell!
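For contrast, a hedged sketch of how the pinned alternative looks (the repository and tag names here are made up): build your own image from a specific, known-good commit instead of letting the image fetch whatever is on the default branch.

```shell
# Clone the upstream project and pin it to an exact release
git clone https://github.com/example/project.git   # hypothetical repo
cd project
git checkout v1.2.3                                # known-good tag or commit hash

# Build and tag a local image from that pinned checkout
docker build -t project:v1.2.3 .

# Run the image you built, not an opaque one from a registry
docker run -d --name project project:v1.2.3
```

That way the code inside the container never changes out from under you, and you can edit it locally before building.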

@garrwolfdog That's how I've seen a lot of people using it for small projects, hence my aversion to it in small projects.

I've always seen it as one of those things that you have to know/be invested in learning before you use it in a production environment but some people are treating it like FlatPak/AppImage

Honestly, if you're running self-hosted home systems, then it's worth learning how to use docker. It can make spinning up and testing out services sooo much quicker and clearer.
@garrwolfdog I want to eventually when I can brain properly for it; I want to set up four TV channels running from a server and being output into an old hotel CATV distribution board, and having each channel in its own docker container would be helpful for monitoring them.
If you ever need some pointers to get you going, we're always happy to help!
Ok, unrelated to you, but I get to be a pedant for a moment XD
It's named after the raven, so it's a nominative singular masculine proper noun, and it should be "Muninn", not "Munin". Why people don't consult a linguist before naming their software, I'll never know!
Kootenay mastodon (AP)
@garrwolfdog Woo! Corvid pedantry! I approve! :>
Kootenay mastodon (AP)
@garrwolfdog It’s always Corvid Time.
Pippin friendica
@Epoxy / Renby πŸ’œπŸ³οΈβ€βš§οΈ The reason I hate the idea of using docker is that I want/need to *understand* things properly before I use them. I love things like postfix, because it has a full set of man pages which explain every little detail of how to configure it and how it behaves, and it also has documentation designed to help you understand how to use it. Docker and git and various other things… I've never come across documentation that actually *helped*. The more complex it is, the more documentation is needed to explain it and the better organised that documentation needs to be. I have a bit of ambivalence regarding qemu - it's pretty much essential for my business, but the documentation is severely, *severely* lacking, to the point where I've had to refer to the source code. And even that is full of unexplained abstractions and very hard to understand.
Pippin friendica
@Epoxy / Renby πŸ’œπŸ³οΈβ€βš§οΈ Actually, thinking about this properly (and I haven't thought about docker other than in an "ew, no" way for a long time now) I suspect the thing that originally put me off was that, if I remember right, its normal mode of operation is to download stuff from unidentified, unexplained servers on the internet and execute it on my machine. This is kinda the same reason I don't like/use build systems that do that kind of thing, like pip and pear and composer and npm and so on. I just about trust Debian's distribution network so I'll install dependencies from there thankyouverymuch, stop trying to grab them from places I've never even heard of, and especially don't just replace whole dependencies with newer possibly-improved-possibly-compromised versions on a whim. I certainly don't want entire containers obtained that way.

@pippin part of the point of the containers is to avoid the very issue it sounds like you're worried they cause. There are potential Escape Routes (usually if run with too many permissions) but the idea is almost more "I don't trust this to _not_ get compromised so I'm isolating this with limited connections for networking/data out of it" with the added benefit of "I also don't have to worry about package collisions or it fucking with local packages".

Outside of official containers, I tend not to trust ones where I can't see the Dockerfile and read how the container image was built and what it'll do inside itself. That's sometimes useful for writing my own Dockerfile stuff too, like for the mastodon image I use.

But yeah, the dual purpose is definitely "contain" first, hence the name, with "isolate libraries" as the second benefit: if your container ever goes sideways you can just tear it down and not have to worry about "alright, what files got fucked up by building or package management?" It also makes the data a little more portable, which definitely makes migrating/moving stuff a lot less painful.
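As an illustration of that "contain first" idea, here's a rough sketch (the image name is hypothetical) of running a service with most of its escape hatches closed off:

```shell
# --network none : no network access at all
# --read-only    : root filesystem mounted read-only
# --cap-drop ALL : drop every Linux capability
# -v ...         : the only writable, shared directory
docker run --rm --network none --read-only --cap-drop ALL \
  -v "$PWD/data:/data" example/service:1.0
```

Even if the code inside is compromised, it can't reach the network or touch anything on the host outside that one mounted directory.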

Pippin friendica

@Kay Ohtie @Epoxy / Renby πŸ’œπŸ³οΈβ€βš§οΈ I don't drive recklessly just because I'm wearing a seatbelt, though. πŸ€·β€β™‚οΈ

I'm just very dubious about the benefits, haven't had the time and motivation to spend to learn this whole new thing, and haven't had any problems doing it the way I've always done it.

(I'm probably in the "anything invented after you turn 30 is newfangled trash" phase, too.)

Yeah, I can sort of get it from an avoiding-dependency-hell perspective, but I'm absolutely not running it on the Pi Zero.
Kevin mastodon (AP)

I'm the same with snap/flatpak/appimages, for local desktop use I want a bloody package I can keep up to date with standard utilities.

Docker is for remote systems IMO.
