I did something (slightly) similar via proot, called Bag [1], though I wouldn't describe it as a docker alternative: it has nothing to do with cgroups, and the CLI deviates from docker's.
The backstory: To bypass internet censorship and deep packet inspection, I had written a proxy chain solution that masqueraded as plain HTML traffic. I needed it constantly running everywhere I went, but I didn't want to port it to a native Android app. I wanted to run it through Termux, and at the time Termux had no JDK/JRE. Proot could spawn an Arch Linux env, and there a JDK was indeed available.
The Arch env within Termux turned out to be generally more suitable for all tasks. Creating and destroying ephemeral envs with different setups, and prooting into them to run a single command, is easily automated with a script; I named it bag.sh, a drastically smaller form of a shipping container.
Funnily, bag.sh also has a roadmap/todo in there, untouched for 5 years! It was written on a mobile screen, hence mostly formatted to 40-column lines to fit on the display without scrolling.
[1]: https://github.com/hkoosha/bag
I guess a lot of us have stories like this. I needed to package a bunch of things into a single environment where a VM was unsuitable. I cooked up something using chroot and debootstrap, and made an installer using makeself. It created a mini Debian inside /opt which held the software and all the dependencies I needed (MySQL etc.). Worked pretty well, and the company I made this for used it till they got acquired in 2016 or so.
More generally though, implementing a crude version of a larger application is one of the best ways of learning how things work inside it. I'm a big fan of the approach.
FYI, I think you forgot some important quotes in your script. Try shellcheck?
> mkdir -p $(dirname "$2")
That'll handle whitespace in paths, but if you want it to handle all path characters, dirname and mkdir need "--" here too.
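The fully defensive version being suggested, as a sketch (the sample path below is made up for illustration; the `$2`/dirname snippet is from the script):

```shell
# defensive version of the quoted snippet: quote the command
# substitution, and pass -- so a path starting with "-" isn't
# parsed as an option by dirname or mkdir
dst="-odd dir/with space/file.txt"   # hypothetical path
mkdir -p -- "$(dirname -- "$dst")"
ls -d -- "-odd dir/with space"
```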
Wow
I love these. Been a fan of minimal bash stuff. Here's a proof of concept for an intra-cluster load balancer in 40 lines of bash, done during a hackathon I organized to promote distributed infra with Docker, Mesos, etc. about a decade ago: https://github.com/cell-os/metal-cell/blob/master/discovery/...
I likely lost it, but I had a redundant and distributed colo-to-cloud transfer tool based on reverse SSH tunnels.
Shell Fu and others have good collections of these https://www.shell-fu.org/
> Because most distributions do not ship a new enough version of util-linux you will probably need to grab the sources from here and compile it yourself.
Careful. The default installation prefix is /usr/bin, and the install will happily clobber your mount command with one that requires a library that doesn't exist. Then next time you boot, the kernel will mount the file system read-only.
Should also be /usr/local/bin.
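For anyone compiling it, something along these lines keeps the distro's own mount out of harm's way (a sketch of the usual autotools flow, not tested against any particular util-linux release):

```shell
# install under /usr/local so /usr/bin/mount (and your next boot)
# stay untouched; PATH order then decides which copy you run
./configure --prefix=/usr/local
make
sudo make install
```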
I like when repos say "not implemented yet" or "to-do" or "working on" and the last commit was years ago. Makes me feel better about not going back to my to-dos I drop through my code. (Not meaning to throw shade on this author, just finding it comforting)
I think it's good. I guess it's possible for something to be simply done, and you don't always have to have a bunch of next ideas, but I generally always have next ideas.
If there are always next ideas, then by definition you must always have todos that never get done. That should actually be the normal state of every single project.
I feel like most, if not all, projects are never done. Knowing when to stop is important.
Yeah, it's weird: I feel like a repo is untrustworthy if it wasn't committed to in the past year, but sometimes a project is just done. In actuality there would likely be work on my end to update it for integration with modern tools/devices, but there's a repo from 12 years ago I've been considering using. Maybe it'll just work, maybe it'll be trash.
Great point! It is not shade at all; you are trying to normalize this, which I like. For unpaid, volunteer, or hobby code, feeling a _need_ to keep working on it because it's public can make coding less fun, or prevent people from sharing code they otherwise would.
Totally ok! As soon as the program does what I want and my task is complete, I stop developing. Software is not my hobby.
I wonder why Bocker makes the frontpage so often. Is Docker still that controversial, even in 2024? Why don't people recognize that it actually brought something useful (mainly, software distribution and ease of "run everywhere") to the table?
It's just a learning tool to see how docker works.
Docker is just a combination of kernel tech that already existed: namespaces, cgroups, union file systems, and probably a few others.
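You can poke at those building blocks from any Linux shell, no docker involved; each symlink under `/proc/self/ns` names one namespace the current process lives in:

```shell
# every process belongs to one namespace of each type; a
# "container" is just a process placed in fresh ones
ls /proc/self/ns             # cgroup ipc mnt net pid user uts ...
readlink /proc/self/ns/pid   # something like pid:[4026531836]
```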
Exactly. "Docker" is boring, everyone uses it, everyone knows it, no one really wants to rewrite it (on Linux) except for parochial infighting or religious license reasons.
But Linux containers[1] are actually fascinating stuff, really powerful, and (even for the Docker experts) poorly understood. The point of Bocker isn't "see how easy it is to rewrite Docker" it's "See how simple and powerful the container ecosystem is!".
[1] Also btrfs snapshots, which are used very cleverly in Bocker.
It hits the frontpage often because people assume that Docker is this super complex thing, but (at its most fundamental), it's actually quite elegant and understandable, which is interesting - a perfect HN story, in fact.
It is kinda complex, but all the complexity is in Linux, ~none is in 'Docker'.
It's possible it's not climbing the front page to slight docker, but rather that people are seeing that docker is something useful and want to know how it works. Bocker can be an entrypoint into the technologies.
Yes, it's a wonderful little read. Besides, without volumes and port forwarding, few would ever deploy this to production.
The reason people use docker over Podman and rolling their own is because of the ecosystem and ubiquity of docker.
I'm bringing overlayfs to people at my company to save time on a lengthy CI process, and they are in awe at the speedup. But after demoing it to a few people, I realized they could just have used docker (or I could have brought them docker) instead.
How does overlayfs speed up CI processes?
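For context, a minimal sketch of the idea (directory names made up): the expensive prebuilt state sits in a read-only lower layer, and each CI job writes only its own changes into a throwaway upper layer. On kernels that allow unprivileged user namespaces this even works without sudo:

```shell
# merged/ shows lower+upper combined; writes land in upper/ only,
# so the prebuilt lower layer is reused untouched across jobs
mkdir -p lower upper work merged
echo "prebuilt deps" > lower/deps.txt
unshare --user --mount --map-root-user sh -c '
  mount -t overlay overlay \
    -o lowerdir=lower,upperdir=upper,workdir=work merged
  cat merged/deps.txt
  echo "job output" > merged/result.txt
'
cat upper/result.txt   # the write landed in the upper layer
```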
Surprised no one's mentioned lazydocker as a great alternative for Docker Desktop (on Linux/macOS/Windows) [1].
It's a fairly full-featured Terminal UI that has the benefit of running over ssh:
[1] https://github.com/jesseduffield/lazydocker
Literally a few days ago: https://news.ycombinator.com/item?id=42214873
Lazydocker sure looks interesting, but self-promotional ads - for products in an entirely different space - in an OSS project's README.md? Seriously? At least for me it is the first time I have come across anything like this. I'm wondering if advertising like this is even allowed under GitHub's TOS and AUP.
A brother from another mother: https://bastillebsd.org/ Bastille manages jails using shell with many of the same constructs you'd find in docker. I like it over other jail management software in BSD because it has so few dependencies.
Absolutely, it adds a lot of value for a shell script that is about 100 LoC.
By the way, it took me a while to get why it was named Bastille: La Bastille was a fortress built to defend Paris from English attacks during the Hundred Years' War, and later turned into a prison.
Fun fact: docker started as bash, then moved to python before settling on golang.
Also, in a 2013 docker meetup, someone wrote a docker clone in bash.
People want to learn! Hopefully things like this help them.
This was written in 2015, I think we can get this down to 69 lines or less in brainfuck
The fact that it is so simple to re-implement a large part of Docker (because all it fundamentally is, is a bit of glue code over the kernel) is the biggest problem Docker-the-company faced and still faces.
Where Docker adds real value is not (just) Docker Hub but Docker for Windows and Mac. The integrations offer a vastly superior experience to messing around with VirtualBox and Vagrant by hand (been there, done that) to get Docker running on one's development machine.
Rancher Desktop is also a viable option, and free. Many, including my workplace, moved to it after Docker's new licensing kicked in.
IMO the real magic of Docker was the Docker/OCI image format. It's a brilliant way to perform caching and distribute container images, and it's really what still differentiates the workflow from "full" VMs.
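The core trick in that format is content addressing: every blob (layer tarball, config JSON) is named by its sha256 digest, which is what makes layer caching and dedup across images fall out for free. A toy sketch of the layout (file names made up, not a real image):

```shell
# store a fake "layer" under its own digest, the way an OCI
# layout's blobs/sha256/ directory does; identical content always
# produces the identical name, hence free dedup and cache hits
printf 'pretend layer tarball' > layer.bin
digest=$(sha256sum layer.bin | cut -d' ' -f1)
mkdir -p blobs/sha256
cp layer.bin "blobs/sha256/$digest"
ls blobs/sha256
```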
My main dev machine is Linux, so I use Rancher Desktop, but I also have a MacBook Pro M1 machine. Orbstack is so much better than Rancher and Docker Desktop. I know they are a small company, but hell if their product isn't significantly more efficient and better.
Completely agree. I moved from docker desktop to rancher after an update blew away my kubernetes cluster, and then from Rancher to Orbstack due to a number of bugs that were crashing the underlying VM. Orbstack has been rock solid (aside from one annoying networking issue), and it uses significantly less battery. They’ve done a fantastic job.
Only complaint is that my home network assigns IPv6 addresses and that fucks up external dns lookups for pods in Orbstack.
Podman Desktop is also great because it now has GPU support on macOS (for the Linux container).
Love to hear that :) sent you an email about the k8s IPv6 issue — should be able to get it fixed in OrbStack
Related to image formats: has anyone tried alternative ones? There was a different format/filesystem for containers to leverage deduplication between different images (so the node won't need to fetch yet another copy of CUDA/PyTorch).
This is common in the Bazel community.
Docker Desktop on Mac is a handicapped, underprivileged mess. Docker CLI for Mac with Colima is still underprivileged, but at least you can skip the BS license and Docker's GUI. On Windows you can at least use Docker on WSL, which works great. Why anyone uses Docker Desktop is beyond me.
> Why use Docker Desktop is beyond me.
I lived through a failed attempt to migrate from Docker Desktop for Mac to an open source alternative (minikube+portainer, IIRC). A lot of test scripts developers relied on – to run parts of the integration test suite on their laptops for debugging – broke, because Docker Desktop for Mac went to a lot of effort to make macOS look like you were running Docker on Linux, whereas the open source replacement wasn't as seamless. Some of these test scripts contained Java code directly talking to the Docker daemon over its Unix domain socket, so they needed the same API implemented. Many other scripts made heavy use of the Docker CLI. After spending a lot of time on it, it was decided to just go back to Docker Desktop for Mac. The failed migration resulted in highly paid engineers spending time pulling their hair out trying to get test scripts to work instead of actually fixing bugs and delivering new features.
That was 2+ years ago now, and maybe the open source alternatives have caught up since, or maybe we picked the wrong one or made some other mistake. But I'm not rushing to try it again.
I would look at Orbstack. Yes it costs money but it is pretty great.
Your situation sounds very similar to the company I work for. Orbstack has been a drop-in replacement except for one issue: any dev using IPv6 assignment on their home network has issues where pods try to hit external DNS over IPv6, and I don't think the Orbstack k8s instance is dual stack.
There are hacks to get around it, but if I could get Orbstack to address this one issue, I couldn't name a single other complaint.
Orbstack is crazy fast and way better than docker desktop overall
I used it for a year or so, then finally subscribed the other day. It really is well worth the money.
I've heard from someone who would know that you should be using Orbstack.
I have a feeling we work at the same company. Well, maybe not, but we went through a strikingly similar experience around the same timeframe.
I just use a Debian ARM virtual machine and am done with it (M1). If I'm going to run a VM regardless, I may as well go with a full-fledged one.
A fair amount of the Docker Desktop use, on both Mac and Windows, is driven by its internal workarounds for brain-dead corporate VPNs.
Docker for Mac does run on Linux, just in a stripped-down lightweight VM. That's why file IO is complete shit: it's a network share.
Use either the cached or delegated options for the volume [1]; then even NodeJS becomes decently performant.
[1] https://tkacz.pro/docker-volumes-cached-vs-delegated/
Colima is the way to work with Docker on mac nowadays. I appreciate Docker Inc folks trying to get some money, but Docker Desktop is just not worth it.
I've been using Docker CLI for Mac happily for years. What am I missing?
I just use colima on macos, its a far better experience. Much lighter weight
A lot of popular wealthy systems are 'easy' to re-implement. I thought the value was in Docker images? Or is that not how Docker is used? The only way I've used it is to be able to import someone's virtual build setup so I could build something from years ago.
Nah, they should have prioritized building some sort of PaaS solution like Cloud Run, Render, or Fly so they could sell it to enterprises for $$$. Instead they did the half-baked Docker Swarm, which never really worked reliably, and then rapidly lost ground to k8s.
Docker was a spinoff of an internal tool used to build exactly the type of PaaS you're describing. It was like a better Heroku and I loved it, but they shut it down when they focused on commercializing Docker itself.
dotCloud, yes?
I was surprised when they shut that down too.
That's what people usually say, but they tried to do just that a few years ago and it didn't really work. Docker Inc has been doing great since they shifted towards even more standardization in their container runtime and focused on dev tooling. They became profitable when they focused on Docker Desktop and Docker Hub instead of trying to build a clunky alternative to Kubernetes or yet another cloud orchestration tool/platform.
Didn’t they buy at least one of these? It was garbage, and no one cared.
I think Docker is really lucky that devs still think container=Docker.
Podman is in many aspects superior, while still being able to function as a drop-in replacement.
But Rancher Desktop does the same too (and is also open source).
Docker for Mac is just unusable. They're not really adding any value there.
Have you tried out Orbstack?
+1 on orbstack. near-perfect drop in
Docker for Windows and Mac are both bloated pieces of software, outperformed by Rancher Desktop and Orbstack.
Docker's only real innovation was the OCI format, which it had to give away for it to become an industry standard, and now doesn't own.
Docker on Windows can use WSL2 engine for near native performance.
If the author happens to see this: the link to your homepage on GitHub is broken - drop the "www."
Practicality aside, there seems to be a lot we can learn from the implementation.
Isn’t this how Docker started?
Does it require root access to the machine I have a user account on?
Yes, from the README:
> Bocker runs as root and among other things needs to make changes to your network interfaces, routing table, and firewall rules. I can make no guarantees that it won't trash your system.
Linux makes it quite hard to run "containers" as an unprivileged user. Not impossible! https://github.com/rootless-containers/rootlesskit is one approach and demonstrates much of the difficulty involved. Networking is perhaps the most problematic. Your choices are either setuid binaries (so basically less-root as opposed to root-less) or usermode networking. slirp4netns is the state of the art here as far as I know, but not without security and performance tradeoffs.
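The user-namespace half of that is easy to poke at yourself; it's the networking side that needs slirp4netns or a setuid helper. Inside a user namespace your unprivileged uid simply maps to 0 (assuming the kernel allows unprivileged user namespaces):

```shell
# outside: a normal uid; inside the namespace: "root", but only
# over resources the namespace owns, no real privilege gained
id -u
unshare --user --map-root-user sh -c 'id -u'   # prints 0 inside
```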
Is there any Docker alternative on Mac that can utilize the MPS device in a container? ML stuff is many times slower in a container on my Mac than running outside
The issue you're running into is that to run docker on a Mac, you have to run it in a VM. Docker is fundamentally a Linux technology, so you first emulate x86_64 Linux, then run the container. That's going to be slow.
There are native macOS containers, but they aren't very popular.
Docker can run an ARM64 Linux kernel; there's no need to emulate x86.
You still pay the VM penalty, though it's a lot less bad than it used to be. And the ARM MacBooks are fast enough that IME they generally compare well against Intel Linux laptops even then. But it sounds like first-class GPU access (not too surprisingly) isn't there yet.
Podman-Desktop can do it
Makes me wonder why docker still hasn't made it into the Ubuntu/Debian repositories. It would be such an easy net benefit.
What do you mean? It's been there for years: https://packages.debian.org/docker.io
It’s an old version, and I think it isn’t supported by Docker Inc (for the reasons mentioned in the sibling comment), but it’s there.
Damn, good to know! I've been gaslit by the ever-changing docker install instructions. Of course it would be a lagging version, but I think the docker feature set converged years ago; why would I care any more about the docker version than, e.g., the version of grep?
The buildx/build changes and the docker-compose vs. 'docker compose' switch are recent updates.
(a) Docker wants to bundle vendor libraries instead of using other packages and (b) Canonical uses LXD and MicroK8s instead.
While not good for daily driving, this gives you an idea of what docker is and how it works.
On Linux, docker is basically fancy chroot.
On macOS/Windows/etc., docker is basically fancy chroot in a linux VM.
Very interesting. With how standard containerization has become, we sorely need a FOSS solution.
Don’t we have them? I only casually use containers, but what about podman, runc, systemd-nspawn, LXC etc?
If Docker isn't open enough for you, check out Podman (now with extra CNCF).
how is docker open in any way?
https://i.imgur.com/2F0JmUw.png
In this way: https://imgur.com/a/PIkm7Eb
How exactly is docker (buildkit, compose, the runtime, the daemon, etc) not open source? Docker desktop isn't, but that's almost entirely unrelated to the containerization technology that it uses or that people refer to when they talk about docker.
That service agreement is for using the Docker Desktop GUI tool which isn't open source (though free to use for small businesses etc) whereas the basic docker CLI commands are all open source.
Why not podman?
Agreed. Not sure about Mac, but on Windows, Podman + WSL works well. No need for Podman Desktop either; the CLI version is fine.
Is the original docker just a script? Have they not added anything to the container story themselves?