You can use systemd-run with --shell (or a subset of options enabled by --shell) and -p to specify service properties to run commands interactively in a similar environment as your service.
This can help troubleshoot issues and makes experimenting with systemd options faster.
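As a rough sketch (the properties here are just illustrative; copy whichever ones your actual unit sets):

```shell
# Open an interactive shell in a transient unit that applies the same
# sandboxing properties as the real service (adjust to match your unit).
systemd-run --shell \
  -p ProtectHome=yes \
  -p PrivateTmp=yes \
  -p ProtectSystem=strict

# Or run a single command non-interactively and wait for it to finish:
systemd-run --wait --pipe -p PrivateTmp=yes -- ls /tmp
```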
I think there's been some talk about adding a built-in way for systemd-run to copy settings out of a .service file, but it doesn't exist yet.
I've written Perl/Python scripts to do this for me. They're not really aimed at working with arbitrary services, but it should be possible to adapt them to different scenarios.
There are some gotchas I ran into. For example, with RuntimeDirectory: systemd deletes the directory once the process exits, even if there's still another process running with the same RuntimeDirectory value set.
It's also really useful for doing parallel builds of modules that may actually consume all available memory when you can't force the build system to use fewer cores than you have available.
Both in terms of artificially reducing the number of CPUs you expose, but also in terms of enforcing a memory limit that will kill all processes in the build before the broader kernel OOM killer will act, in case you screw up the number of CPUs.
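A hedged sketch of what this looks like in practice (the limits and `make -j4` are placeholders for your actual build):

```shell
# Run a parallel build in a transient scope pinned to 4 CPUs with a
# hard memory cap. If the build exceeds MemoryMax, systemd kills the
# whole cgroup -- every build process -- before the kernel OOM killer acts.
systemd-run --scope \
  -p AllowedCPUs=0-3 \
  -p MemoryMax=8G \
  -p MemorySwapMax=0 \
  make -j4
```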
woah that's actually awesome.
I feel like adding storage usage limits could be easy as well.
But the one thing I always wonder about is virtualization, in the sense of something like Docker: just containerizing things, or some way of running them in a sandbox without much performance cost. I'm interested in what the best way of doing so might be (is podman the right way, or something else like bubblewrap?)
Edit: just discovered in a comment below the (grandparent?) comment that there is systemd isolation too. That sounds very interesting, and it's the first time I've personally heard of it, hmm.
You can achieve similar results with podman and bubblewrap, but podman handles things like networking, resource and image management that bubblewrap doesn't by itself.
Bubblewrap really is more for sandboxing "transient" containers and being able to separate specific things from the host (such as libraries), with other applications handling the image management, which makes sense because its primary users are Flatpak and Steam. Once the application inside the container exits, the sandbox is destroyed; its job is done.
Podman is a Docker clone; it's for development or persistent containers. It will monitor containers, restart them, pull image updates, set up networks between them, etc.
They both use namespacing and cgroups under the hood, but for different results and purposes.
You're right that systemd has sandboxing too, and it uses the same kernel features. Podman can also export its services to be managed by systemd.
There's literally so much choice when it comes to making containers on Linux.
> but podman handles things like networking, resource and image management
Btw, you can do all of this with systemd too
> the sandbox is destroyed, it's job is done.
I think most container systems have an ephemeral option. If you're looking at systemd then look at the man pages for either systemd-nspawn or systemd-vmspawn and look under Image Options. More specifically `-x, --ephemeral`. It's a pretty handy option.
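A minimal sketch (the machine name is hypothetical and assumed to already exist under /var/lib/machines):

```shell
# Boot a throwaway snapshot of the container; all changes are discarded
# when it exits.
sudo systemd-nspawn --ephemeral -M mymachine -b
```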
> Podman can also export it's services to be managed by systemd.
But in that case, why not just use systemd? ;)
> There's literally so much choice when it comes to making containers on Linux.
Despite my joke above, I actually love this. Having options is great and I think it ends up pushing all of them to be better. The competition is great. I'm hyping systemd up a bit but honestly there's gives and takes with each of the different methods. There's healthy competition right now, but I do think systemd deserves a bit more love than it currently gets.
Podman also offering a (nicer?) transition from Docker is a plus as well.
There are a lot of PaaS offerings nowadays which use Docker under the hood. I would love to see a future where a PaaS actually manages things using systemd.
I think this might be really nice, giving an almost-standard way of installing software.
I really want to try creating something like dokku, or some GUI for making systemd management easier, but I'll look at existing alternatives first. Thanks for sharing it!
I wrote my comment before I saw yours, but you'll probably be interested in it[0].
The best thing about systemd is also the worst thing: it's monolithic. You can containerize applications lightly all the way to having a full-fledged VM. You can run as user or root. You can limit system access like CPU, RAM, network, and even the physical hardware. You even have homed which gives you more control over your user environments. There's systemd mounts[1], boot, machines, timers, networks, and more. It's overwhelming.
I think two commands everyone should know if dealing with systemd services is:
- `systemctl edit foo.service` to create an override file which sits on top of the existing service file (so your changes don't disappear when you upgrade)
- `systemd-analyze security foo.service` which will give you a short description of the security options and a score specifying your exposure level.
These really helped me go down the rabbit hole and I think most people should have some basic idea of how to restrict their services. A little goes a long way, so even if you're just adding `PrivateTmp=yes` to a service, you're improving it.
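For instance, `systemctl edit foo.service` opens an editor for a drop-in like this (foo.service is a made-up name; the directives are real systemd options):

```ini
# /etc/systemd/system/foo.service.d/override.conf
[Service]
# Give the service its own private /tmp and /var/tmp.
PrivateTmp=yes
# Hide everyone's home directories (only if the service doesn't need them).
ProtectHome=yes
```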
I've replaced all my cron jobs with systemd timers now, and while it is a bit more work up front (just copy-paste templates...) there are huge benefits to be had: way more flexibility in scheduling, and you're not limited by restrictions like your computer being off at the scheduled time[3]
[1] I've found mounts really helpful and they can really speed up boot times. You can make your drives mount in the background and after any service you want. You can also set timeouts so that they will power down and automount as needed. That can save you a good amount on electricity if you run a storage server. This might also be a good time to remind people that you likely want to add `noatime` to your mount options (even if you use fstab)[2].
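A sketch of such a mount/automount pair (the device label and mount point are examples; unit file names must match the mount point):

```ini
# /etc/systemd/system/mnt-storage.mount
[Mount]
What=/dev/disk/by-label/storage
Where=/mnt/storage
Options=noatime

# /etc/systemd/system/mnt-storage.automount -- enable this one, not the .mount
[Automount]
Where=/mnt/storage
# Unmount after 10 minutes idle so the drive can spin down.
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target
```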
I'm fairly confident that systemd, docker, podman, bubblewrap, unshare, and probably other tools are all wrapping the same kernel features, so I'd expect a certain degree of convergence in what they provide.
I feel like Docker and other containerization tools are becoming even less relevant given that systemd can twiddle the same isolation bits so there's no real difference in terms of security that using a container tool grants.
Seeing that podman can run containers as systemd services (see https://codesmash.dev/why-i-ditched-docker-for-podman-and-yo... ), it seems like using containers other than as a distribution mechanism has few advantages, and many disadvantages in terms of dependency updates requiring container rebuilds.
> I feel like Docker and other containerization tools are becoming even less relevant given that systemd can twiddle the same isolation bits so there's no real difference in terms of security that using a container tool grants.
I see it as _exactly_ the opposite. Podman gives me more or less the same security controls as systemd and the package/delivery problem is solved.
Call me when `systemctl pull ...` fetches the binary and everything else needed to run it _and_ puts the .service file in the right spot.
nixos kind of does that, except better. Usually you just set services.foo.enable to true along with any other config you want. It's also super easy to wrap services in a container if you want, and doing so is kept conceptually separate from dependency management. If you want to make your own systemd service, then referencing a package in `ExecStart` or whatever will make it automatically get pulled in as a dependency.
I can already hear the systemd-haters complaining about The One True Unix Way™ is to have tools that only do one thing even if that leaves holes in their functionality.
Isn't this literally what podman-systemd does? You don't exactly run a command to pull a container, but just like systemd you place a config file in the right directory, tell podman-systemd to reconfigure itself, and run the service the standard systemd way.
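For reference, a minimal Quadlet file looks something like this (the image and ports are examples):

```ini
# ~/.config/containers/systemd/myapp.container
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, it's just `systemctl --user start myapp.service` like any other unit.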
1) the usual `curl` or `wget` to fetch the binary and the lib(s) and all the work of validating and putting them in place and the like and _then_ you can use a systemd/.service file to set up controls for the bin
2) podman pull and then either ask podman to make a .service file for you or write your own
because only one of the two approaches has solved the package/distribution issue, containers are _not_ "less relevant given that systemd can twiddle the same isolation bits"
What "validating" does docker/podman pull do that is in excess of a curl of a file?
One of the advantages of a real package manager is that it checks signatures on the content that is downloaded. The supply chain on a linux distro's package repos is much harder to break into than typosquatting into a docker registry somewhere.
IMO, docker layering over the OS's built-in package management and update lifecycle in an incompatible ways is far worse than systemd replacing the init system and other service management functionality.
Back in the old days (late 90's, early 2k's) as a sysadmin I'd often write scripts to chroot or in other ways isolate services rather than run them as root, so extending the init system to handle those features feels like a logical extension, not an incompatible replacement.
systemd-sysupdate already exists. systemd won't run the software repository of course, but with systemd-sysupdate together with some overlay mounts you can get Steam Deck-like ease of use system updates.
For software management in R/W environments, there's the podman + systemd combo that'll let you run containers like normal systemd services.
Container rebuilds are disadvantages?
Using mkosi and systemd-nspawn for containers, it doesn't really feel that way; it's still a lot easier to build some distroless app container than to finagle a service into having zero access to other binaries, libraries, or other data entirely.
I don't get the distribution "advantage" building them with mkosi, and I'd argue it's a weakness, as far too many people are running containers with who-knows-what inside them.
Docker is absolutely less relevant. My personal machines haven't run Docker for months and my employer is finishing our migration away from Docker in a few months.
What makes me scratch my head is why the failed access violations are not easy to show and log. A correctly configured service should not attempt to access things it is not intended to access. If it has to check whether it has access and act conditionally, this also should be made explicit, either in the service code or in its configuration.
There should be an strace-like tool that would collect a log of such "access denied" errors for troubleshooting. Even better, each service should run in its own process group, and tracing could be switched on/off for a particular process group.
> A correctly configured service should not attempt to access things it is not intended to access. If it has to check whether it has access and act conditionally
It's normally recommended to attempt the access and handle the denial, instead of doing two separate steps (checking for access and doing the access); the later can lead to security issues (https://en.wikipedia.org/wiki/TOCTOU).
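A tiny shell illustration of the two styles (the path is a placeholder):

```shell
conf=/etc/myapp.conf  # hypothetical file

# Racy check-then-use (the TOCTOU window): the file's permissions or
# existence can change between the test and the read.
if [ -r "$conf" ]; then
    cat "$conf"
fi

# Safer: just attempt the access and handle the failure.
if ! cat "$conf" 2>/dev/null; then
    echo "cannot read $conf (missing or access denied)" >&2
fi
```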
The fact that systemd continues to get hate, ~15 years after mass adoption, is a cultural phenomenon worth understanding. Benno Rice of freebsd gave a super interesting talk about this: The Tragedy of systemd: https://www.youtube.com/watch?v=o_AIw9bGogo
I can only imagine how long the Wayland haters will be writing blogs once LTS distros start shipping Wayland-first desktops. Looking at the whole upstart/systemd drama, I'm guessing we'll hit the 2038 bug before they find something new to write about.
Not only is systemd strictly better, its developers really went out of their way to make migrating services as simple as possible rather than asserting you have to follow a new status quo entirely.
Allowing services to incrementally and optionally adopt features was the key part.
but you can adopt incrementally, thanks to XWayland. Sure it's not the same, but unlike systemd vs sysv-init, you can't run two windowing systems side by side with equal privileges, unless maybe you have two monitors and graphics cards. One has to be the one that controls the screen, and the other must necessarily run as a client inside it. Wayland-on-X may have been possible, but it would have limited Wayland's capabilities and development.
I am willing to bet that there are systemd haters out there who love Wayland and would make the exact reverse claim.
> but you can adopt incrementally, thanks to XWayland.
Wayland's weakest point is a11y and automation tools, which XWayland doesn't work for.
> sure it's not the same, but unlike systemd vs sysv-init, you can't run two windowing systems side by side with equal privileges unless maybe you have two monitors and graphic cards. one has to be the one that controls the screen. and the other must necessarily run as a client inside it. wayland-on-X may have been possible, but it would have limited waylands capabilities and development.
You can do both, actually; XWayland can run an X server in a window, and many Wayland compositors will run in a window on top of an X11 server. It's not seamless, of course, but it does work.
You don't understand. Everybody who doesn't like the things I like is the same: bad and stupid. Or maybe you do understand, because suddenly Wayland came up, and since you personally are annoyed by it, now this style of argument is "gaslighting."
It's not "gaslighting" it's just name-calling and argument through insinuation about other people's characters, rather than substance. It's not even ad hominem, because you assume people are arguing in bad faith because of the positions they've taken, not because you know a thing about them.
As a systemd user but Wayland "hater", to me the big difference is that you can adopt systemd without losing functionality - e.g. you can configure systemd to run sysV init style init scripts if you insist and no functionality is lost. The "complaints" in the linked article, are minor and about options that can just be turned off and that are offering useful additional capabilities without taking away the old.
Whereas with Wayland the effort to transition is significant and most compositors still have limitations X doesn't (and yes, I realise some of those means X is more vulnerable) - especially for people with non-standard setups. Like me.
I use my own wm. I could start with ~40-50 lines of code and expand to add functionality. That made it viable. I was productively using my own wm within a few days, including to develop my wm.
With Wayland, even if I start with an existing compositor, the barrier is far larger, and everything changes. I'm not going to do that. Instead I'll stick with X. The day an app I actually depend on stops supporting X, I'll just wrap those apps in a Wayland compositor running under X.
And so I won't be writing blog posts about how much I hate Wayland, and hence the quotes around "hater" above. But maybe I will one day write some about how to avoid running Wayland system-wide.
If Wayland gave me something I cared about, I'd take the pain and switch. It doesn't.
Systemd did, so even if I hadn't liked it better than SysVinit, I'd still have just accepted the switch.
If I one day give up Xorg, my expectation is that it'll be a passive-aggressive move to a custom franken-server that is enough-of-X to run modern X apps coupled to enough-of-Wayland to run the few non-X-apps I might care about directly (I suspect the first/only ones that will matter to me that might eventually drop X will be browsers), just because I'd get some of the more ardent Wayland proponents worked up.
I remember the good old days of xfree86. It was arse but mostly worked OK on a PC. Then this blasted Xorg thing rocked up and it was worse for a while! Nowadays I can barely remember the last time I had to create an xorg.conf.
Wayland has a few years to go yet and I'm sure it will be worth the wait. For me, it seems to work OK already.
That's interesting, but I really would rather write something stripped down with X as the base, though. This might be a good intermediary step, though.
I mean unless you're going to commit to maintaining xlibre or something, wayback seems like the future for x-based desktops.
> [wayback] is intended to eventually replace the classic X.Org server, thus reducing maintenance burden of X11 applications, but a lot of work needs to be done first.
> With Wayland, even if I start with an existing compositor, the barrier is far larger, and everything changes.
I mean, no one puts a gun against your head to use Wayland, X will be on life support for decades and will likely work without any issue.
But with this stance, no evolution could ever happen and every change would be automatically "bad". Sure, changes have a downside of course, but that shouldn't deter us in every case.
Plenty of evolution can happen. The problem with Wayland to me is that it's not evolution, but a step backward. It's forcing a tremendous amount of boilerplate and duplication of code.
X can evolve both by extensions and by "pseudo extensions" that effectively tell the server it's okay to make breaking changes to the protocol. There are also plenty of changes you could just make and accept the breakage, because the clients it breaks are limited.
I don't mind breaking changes if they actually bring me benefits I care about, but to me Wayland is all downside and no upsides that matter to me.
Dude’s been arguing with people since at least 2012 that systemd is a good thing. It took me less than a minute to figure that out by searching his blog.
PulseAudio also drew a lot of disapproval, until Pipewire appeared and finally did the same thing (and more) well.
Maybe systemd (service management, logind, the DNS resolver, the logging system, etc) will eventually be re-implemented in a way that does not have the irritating properties of the original systemd.
/* I'd say that systemd feels like typical corporate software. It has a ton of features to check all the requisite boxes, it's not very ergonomic, it does things the way authors wanted and could sell to the corporate customers (who are not the end users), not the way end users prefer. It also used to be bug-ridden for quite some time, after having been pushed upon users. It comes from Red Hat (which is now a part of IBM), so you could say: here's why! But, say, podman also comes from Red Hat, and does not feel like such corporate software; to the contrary, end users enjoy it. */
Hey, now I am interested in more software like this overall.
Like, imagine a list with a form where people can submit software and give their reasoning, or just something like that.
What if I created a GitHub repo and issues regarding this so that we can discuss them? I could create a website later if there's interest, but it's a really nice thought experiment.
Are we talking about every kind of software, including proprietary too?
Are we talking about, let's say, websites too, or services? (What if we extend it to services like websites, or even products outside the software niche into things beyond? That's interesting too.)
Another interesting point that comes to mind: cryptocoins might be the lowest inverse of this software project, in the sense that I believe very little net positive was done for humanity in general. Sure, the privacy aspects are nice, but still, it's not worth having people invest their life savings into it thinking it's going to 100x, y'know. I've written a whole article about it, frustrated by this idea people have of crypto as an investment when it could very well be a crypto"currency", but that's a yap for another day.
I really nerded over this and I think I loved it, we need a really good discussion about it :>
My mom and dad live in a country ruled by a dictator, and there are restrictions on sending money there. I'm happy that I can send crypto (USDC over Solana), and the cost is basically a few cents to get it to my family.
I am all for stable cryptocurrencies but not cryptocurrencies which prey on desperate people wanting 100x returns or promising too much that we all know is BS for hype and not delivering on it.
I myself am a teenager and I've gotten about a hundred dollars from winning competitions in the crypto space (not that much, but money I'm kinda proud of), and I couldn't really have gotten that money if it wasn't for crypto, y'know.
If I haven't made my stance clear: I am all for stable cryptocoins, but they're just a very minor part of everything in crypto, and almost all the rest generally does harm. Sure, there are some outliers, but here's an article I wrote one day when some crypto thing ragebaited me so hard that I basically just had to try to explain my situation.
"This whole space is still full of scam. 99% of the times everyone has ulterior incentive (to earn money) but still I mean, it kinda exists and I mean still crypto (stablecoins?) are the only sane way to transfer money without kyc"
This is one of the comments that I had written, and I hope I can make my stance clear. I just searched and found that I've mentioned stablecoins almost 10 separate times in the article. I am all for stablecoins, and I think even in my original comment I may have said that stablecoins are the only thing that is remotely nice about cryptocoins, maybe Monero if we are really stretching it.
I saw an r/cryptocurrency poll where someone said they are 97% in it for the money and 3% for the tech. At that point this is almost like Polymarket gambling, except worse and more out of your control; investing in some memecoin is almost just giving someone your money.
I kinda (like?) stablecoins (even gold-backed ones like PAXG are good) but not much anything else as a baseline token for the most part. I myself hold USDC on chains like Stellar/Polygon, which also have low gas fees for the most part.
So what are your thoughts? I hope I've made my stance clear lol.
Yeah, I understand what you mean. Actually I've been investing in Solana, since it's the one that allows for low-cost transactions, and I feel like it's the network that's actually being used for useful stuff (Stripe and Visa are working on building on top of it), so I made a 30% return. Since cryptos are risky I didn't put much money in and only made $50, but I'm hopeful about the development of Solana, stablecoins and the like.
The point I am trying to make is that sure, you might invest in Solana for low gas fees, but you used it with USDC. I am not sure if Solana requires some minimum balance of SOL or something (I think it does), but just because people can pay less in fees while sending USDC on the Solana chain doesn't make me believe that Solana's native token price should increase if the gas fees are low...
Like, quite frankly, they are all driven by speculation. Sure, there is some aspect of investment, but the reason I believe in stock markets, or even money, is that they grow because people get somewhat more productive over time in aggregate, whether through tech, innovation, or just learning from mistakes.
Crypto might grow and it might not; it's certainly riskier, and risk can mean more profit or more loss at the same time.
See, the thing is, I believe that things are for the most part kinda accurate in their prices, and if they aren't, then I shouldn't want to mess around trying to prove the market wrong. It's mostly an efficient market, with robots that can trade in milliseconds, but it was efficient even before that.
I am sure that Solana's price is effectively weighed in, and if it's not, then well, I still wouldn't want +/- 30%.
Stripe is also building its own chain (Tempo), I think. I am all for Stripe having a cryptocurrency too, but just a stablecoin.
Idk man, I see these people making 30% returns and I think wow, but then I see the amount they put in and the time they invest, and they say that they are "learning", which is great, but still, my question is whether this is even something you can "learn".
Because the thing is, if you could guarantee me 30% returns for 30 years, or even show some historical data to back that up, that would be great. But even historical data isn't enough sometimes, which is why I think of productivity as the final benchmark.
My brother had like $1 in a Polymarket "will BTC go up or down" market, and he made 100% in an hour. He gave me all the stats about these curves and how they go, and I appreciate my brother a lot, but I can't help it, I just don't like that he feels like he needs to earn money on top of money.
Idk, maybe it's me, but I literally can't predict whether Solana gets hacked tomorrow and it all goes to 0. I saw some project on HN about how Sui, a literal billion-dollar coin, is suffering from some not-well-set-up nodes, and a malicious actor could theoretically stall the Byzantine consensus and might even bring the network down lol.
Maybe I'm old-school for a literal teenager, but there's almost a pipeline that I feel "investors" get into.
I have thought of actually writing my own coin with low fees and slightly more programmability than Nano, to integrate it into something like cosmic while having maybe 0 gas fees. But honestly, I would do it for the tech, not for the money, even if the money is lucrative too.
If you really want, there are some better ways, like this 2048 thingy launched by some crypto project that I kinda won: 0 fees, I got like 100 bucks, and I had invested nothing. But I think it ended.
I feel like the only times I made money were when I was lucky. If I'm honest, I kinda made like infinity percent the first few times just doing something I like / messing around, but that's not really sustainable or predictable. Then again, neither is the 30%, imo.
Also, yeah, some chains might require some base coin, like Stellar required 0.5 stellar and 1 to get USDC, but I doubt that just because of that, something like Stellar's price can grow. And there is this USDC project whose aim is to make gas fees payable in USDC too, and I hope those gas fees could be low, as that could be extremely useful imo.
To each their own, I don't even trust S&P 500 at this point given how concentrated they are into AI.
Would love it if you could mail me (see my about me!). I love talking about it!
I hope you can read the article that I shared and I would love hearing your thoughts.
The tech is cool, but people have ulterior incentives. When I say not to, and to invest in something as boring as world index funds, I legit don't have an ulterior motive for the most part, aside from bringing what I believe is a reasonable financial opinion to everybody and hearing them out.
Also, side note, but I didn't know that you can't really name gists. Or at least I might have the skill issue of not finding a way to change the gist name from this 36b4 thing to something better so that it might be search-indexed, but idk lol.
I mean, I just thought of a way of curating the software list/discussion without bloating Hacker News, which might make it a bit difficult to find if someone is looking for something similar imo. I can make an Ask HN too, but I am more than willing to hear suggestions if you have any :p
I think I agree. I’m curious what software would be in places 2-10. If we’re talking about HN, maybe excel/google sheets? Maybe C++? Recent versions of macOS always seem to get hate, but I think macOS is in a different category.
I think excel/google sheets are generally well regarded in online circles. I also don’t see that much C++ hate, at least not the same kind of visceral hate systemd receives.
I don't understand how people consider this article "systemd hate".
The article is informative. Even a bit bland when it comes to opinion on the matter of systemd as a whole. The article is literally just saying "if you write services, complain loudly and with context about permission errors" and "if you use systemd with hardening enabled, consider it alongside discretionary access control and mandatory access control when troubleshooting permissions errors".
> One of the traditional rites of passage for Linux system administrators is having a daemon not work in the normal system configuration (eg, when you boot the system) but work when you manually run it as root.
I don't remember the last time I ran a daemon by hand (one I wasn't developing myself). I always just run the systemd unit via systemctl and debug that.
> A standard thing I do when troubleshooting a chain of programs executing programs executing programs is to shim in diagnostics that dump information to /tmp.
This seems like a very esoteric case in the days of structured logging and log levels.
> A mailer usually can't really tell the difference between 'no one has .forward files' and 'I'm mysteriously not able to see people's home directories to find .forward files in them'
Obviously a daemon that needs to access files in people's home directories shouldn't have ProtectHome=true. It's the responsibility of the daemon developer or the package maintainer to set appropriate flags based on what the daemon does. Someone had to explicitly write "ProtectHome=true". It's not the default, and it doesn't just appear in the service file.
When in doubt don't set security options at all, instead of shipping a broken daemon that you don't understand why it doesn't work.
Note: please base your daemon on D-Bus or a socket in /run and not on reading arbitrary files from my home directory.
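For a mailer-like daemon that does need to see home directories, a drop-in might look like this (the directives are real systemd options; how far you can go depends on the daemon):

```ini
[Service]
# ProtectHome=yes would hide /home entirely and break .forward handling;
# read-only still lets the daemon read the files.
ProtectHome=read-only
PrivateTmp=yes
NoNewPrivileges=yes
```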
I also don't understand the larger perspective. Should we not make our daemons run in more secure environments?
The private /tmp bit us when we updated to Debian 12 servers and found that a batch process could not access the same temporary files as our web application.
Luckily, it's very easy to fix by adding an extra systemd override file to disable that feature on the Tomcat service.
The reason for this is that it creates an override file rather than editing the systemd unit file directly. This means you'll keep your changes even if the unit file changes in an upgrade. It obviously also helps you roll back and make sure you got things right. Another side benefit is that you can put these on GitHub. Also, pretty much every service should use `PrivateTmp=yes`, and this is surprisingly uncommon.
Then run
systemd-analyze security foo.service
It'll give you a nice little rundown of what is restricted and what isn't. There's a short description of what each of the capabilities are, but I also like this gist[0]. The docs kinda suck (across systemd) but play around and it starts to make sense.
This stuff is surprisingly quite powerful and it's caused me to go on a systemd binge for the last year. You can restrict access to basically anything, effectively giving you near container like restrictions. You can restrict paths (and permissions to those paths), executables, what conditions will cause the service to start/stop, how much CPU and memory usage a service can grab, and all sorts of things. That last one can be really helpful when you have a service that is being too greedy, giving you effectively an extremely lightweight container.
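As a sketch of the kinds of knobs involved (the values and paths are arbitrary examples):

```ini
[Service]
# Roughly two CPUs' worth of time.
CPUQuota=200%
# Hard memory cap; the whole cgroup is killed if it's exceeded.
MemoryMax=2G
# Throttle reads from a specific block device.
IOReadBandwidthMax=/dev/sda 10M
# Mount most of the filesystem read-only, whitelisting one writable path.
ProtectSystem=strict
ReadWritePaths=/var/lib/myservice
```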
If you want to go further, then look into systemd-nspawn and systemd-vmspawn. This will give you a "chroot on steroids" (lightweight container, which can be used in place of things like docker) and a full fledged VM, respectively. Both of these have two complementary commands: machinectl and importctl.
`machinectl` will allow you to make these things act just like services, even allowing you to define conditions in which they can autoactivate.
`importctl` gives you the ability to download, import, and export these machines as images.
You can use importctl to get an image from an arbitrary URL, so if a project decided to provide this in addition to (or as a replacement for) a docker image, you could have a pretty similar streamlined process. I'm no expert, but I've not run across a case where you can't use systemd as a full-on replacement for docker. While it's more annoying to make these images, I've found that I can get better control over them and feel a lot less like I'm fighting the system. A big advantage too is that these can run with fewer resources than docker, but this really depends on tuning. The more locked down, the slower it will be, so this is only a "can", not a "will" (IIRC defaults on vmspawn are slower[1]).
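The flow being described looks roughly like this (the URL and machine name are placeholders):

```shell
# fetch and register an image, then run it as a machine
importctl pull-tar https://example.com/myimage.tar.gz myimage
systemd-nspawn -M myimage     # interactive, one-off
machinectl start myimage      # or run it as a managed machine
machinectl enable myimage     # autostart it at boot
```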
Finally, note that all these have a `--user` flag, so you don't always need to use sudo. That itself gives a big advantage because you're automatically limiting capabilities by running it as a user service. No more fucking around with groups and all that.
Honestly, the biggest downside of systemd is the lack of documentation and the lack of wider adoption and examples to pull from. But that's a solvable problem and the exact kinda thing HN can help solve (this comment is even doing some of that as well as some others in this thread). Systemd is far from perfect and there's without a doubt a good learning curve (i.e. lack of docs), but IME it's been worth the effort.
That seems like an unnecessary foot-gun. Needing a special command to safely edit what appears to be an ordinary config file indicates some other part of the system is trying to be too clever. In this case, I think it's a combination of filling /etc with symlinks to stuff in /usr, and package managers casually overwriting stuff in /usr without the careful handling that stuff in /etc deserves.
You can always edit the service directly. There's nothing stopping you from doing that, and most people probably do it this way. Doing that would make it just like any other application with a config file.
BUT have you ever had configs overwritten by an update? Have you ever found an update to break your config? These are really quite annoying problems to deal with. Having the override file basically means you can keep the maintainer's config as your "gold standard" and then edit it without worrying about fucking things up.
This is the difference to me:
systemctl edit:
It is clearly defined what you changed. Your changes will not be overwritten by an update.
directly editing config:
File may change with an update. You won't be notified of said change. If you didn't write down what your changes were, you need to redo that work and figure it out all over again.
The pros outweigh the cons IMO. I had to get myself into the habit of doing `systemctl edit foo` instead of `sudo vim /etc/systemd/system/foo.service`, but that's provided more benefits than the annoyance of building this habit.
systemd restrictions on daemons are an excellent feature for daemon security, especially for publicly accessible services. The restrictions are fully configurable and can be removed, so there is nothing to lose if you control the service definition file. Moreover, LLMs (at least GPT) are well versed in them, making unit files easy to write or update.
Nah, this blog is neutral about it at worst. He publishes daily, and it's usually tips or notes on how things work, especially if it's different from the "traditional" way of doing things. One of my favorite blogs for letting me know how things work nowadays
systemd seems to violate this at every turn, but sadly they are not the only ones to do that in Linux. I wonder at what point we should start considering Linux to be its own thing instead of "UN*X like".
I know people like to complain, but these comments are not really based in reality. systemd is really a suite of applications that work together to provide init, service management, network management, &c. there's no single giant program that does all this. each component has a specific focused task.
you could break this out into individually named projects, but nobody complains about coreutils enabling dozens of different workflows.
What about problems that are not "one thing" only? Like, should a web browser be 640 piped subprocesses in a trenchcoat? Would that really be maintainable, or easier to see how it works?
Systemd, as usual, randomly and suddenly breaking things that worked for a decade, for people who asked for nothing.
Because they know better what you need...
And what's your preferred alternative to what's described in the article? Packaging every single service in its own 500mb ubuntu chroot and using docker? Running a local dhcp server and a bridge interface so that you can selectively expose ports?
Here's an alternative title for this post: these days, two lines in a systemd service file can easily constrain arbitrary applications to just the files and resources they need, and only those.
tl;dr: systemd isn't meant to be an init system, it's meant to manage services, and the alternative world where you don't have a unified system for managing services and events actually sucks.
SELinux doesn't really provide anything like ProtectHome or PrivateTmp mentioned in the article. SELinux only does access control, while systemd can create new resources that are scoped to a specific service.
The problem is the “more”. SELinux is extremely flexible and does what the configuration tells it to do. And it does not compose well. Want to point whateverd at /var/lib/whatever? Probably works if the distro packages are correct. Want to make /var/lib/whatever be a symlink? Probably does not do what you expect. Want to run a different daemon that accesses /var/lib/whatever or mount it into a container? Good luck. Want to run a second copy of the distro’s whateverd and point it at a different directory? The result depends on how the policy works.
And worst: want to understand what the actual security properties of your policy are? The answer is buried very, very deep.
Did you read the article? The author is complaining that systemd introduced _optional_ security mechanisms for units. If you don't like these mechanisms, don't use them in your units.
Systemd didn't "break" anything at all here. This author's arcane debugging workflow doesn't work for certain units that have opted into the new security mechanisms. But that is hardly systemd's fault.
“Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".” --https://news.ycombinator.com/newsguidelines.html
You can use systemd-run with --shell (or a subset of options enabled by --shell) and -p to specify service properties to run commands interactively in a similar environment as your service.
This can help troubleshoot issues and makes experimenting with systemd options faster.
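For example, to get an interactive shell under roughly the same sandbox as a hardened unit (the properties here are whatever your own service sets):

```shell
sudo systemd-run --shell \
    -p ProtectHome=yes \
    -p PrivateTmp=yes \
    -p ProtectSystem=strict
# now poke around: ls /root, touch /tmp/x, etc., and see what gets denied
```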
I think there's been some talk about adding a built-in way for systemd-run to copy settings out of a .service file, but it doesn't exist yet.
I've written Perl/Python scripts to do this for me. They're not really aimed at working with arbitrary services, but it should be possible to adapt to different scenarios.
https://gist.github.com/dextercd/59a7e5e25b125d3506c78caa3dd...
There are some gotchas I ran into. For example, with RuntimeDirectory: systemd deletes the directory once the process exits, even if there's still another process running with the same RuntimeDirectory value set.
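If that cleanup bites you, `RuntimeDirectoryPreserve=` can keep the directory around; a sketch (`foo` is a placeholder):

```ini
[Service]
RuntimeDirectory=foo           # systemd creates /run/foo owned by the service user
RuntimeDirectoryPreserve=yes   # don't delete it when the service stops
```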
I use systemd-run very often to impose CPU usage limits on software. Awesome feature.
It's also really useful for doing parallel builds of modules that may actually consume all available memory when you can't force the build system to use fewer cores than you have available.
Both in terms of artificially reducing the number of CPUs you expose, but also in terms of enforcing a memory limit that will kill all processes in the build before the broader kernel OOM killer will act, in case you screw up the number of CPUs.
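A sketch of that pattern as a transient scope (the core count and memory limit are arbitrary, and `--user` may need cgroup delegation for the cpuset controller):

```shell
# pin the build to 4 CPUs and cap it at 8G; if the limit is hit the whole
# cgroup is killed together, instead of the kernel OOM killer picking
# random victims system-wide
systemd-run --user --scope \
    -p AllowedCPUs=0-3 \
    -p MemoryMax=8G \
    -p MemorySwapMax=0 \
    make -j4
```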
Woah, that's actually awesome. I feel like adding storage usage limits could be easy as well.
But the one thing I always wonder about is virtualization, in the sense of something like Docker: just containerizing, or some way of running things in a sandbox without much performance cost. I'm interested in what the best way of doing that might be (is podman the right way, or something else like bubblewrap?)
Edit: just discovered in a comment below the (parent's parent's?) comment that there is systemd isolation too. That sounds very interesting, and it's the first time I've personally heard of it, hmm.
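On the storage point: bandwidth limits at least already exist as unit directives (the device path and numbers are illustrative):

```ini
[Service]
IOReadBandwidthMax=/dev/sda 50M
IOWriteBandwidthMax=/dev/sda 10M
```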
You can achieve similar results with podman and bubblewrap, but podman handles things like networking, resource and image management that bubblewrap doesn't by itself
Bubblewrap really is more for sandboxing "transient" containers and being able to separate specific things from the host (such as libraries), with other applications handling the image management, which makes sense because its primary users are Flatpak and Steam. Once the application inside the container exits, the sandbox is destroyed; its job is done.
Podman is a Docker clone; it's for development or persistent containers. It will monitor containers, restart them, pull image updates, set up networks between them, etc.
They both use namespacing and cgroups under the hood, but for different results and purposes.
You're right that systemd has sandboxing too, and it uses the same kernel features. Podman can also export its containers to be managed by systemd.
There's literally so much choice when it comes to making containers on Linux.
podman + systemd integration seems really nice now.
Given that podman also offers a (nicer?) transition from Docker, that's a plus as well.
There are a lot of PaaS offerings nowadays which use docker under the hood. I would love to see a future where a PaaS actually manages things using systemd.
I think this might be really nice, giving an almost standard way of installing software.
I really want to try to create something like dokku, or some GUI for making systemd management easier, but I'll look at the existing alternatives first. Thanks for sharing!
I wrote my comment before I saw yours, but you'll probably be interested in it[0].
The best thing about systemd is also the worst thing: it's monolithic. You can containerize applications lightly, all the way up to having a full-fledged VM. You can run as user or root. You can limit system access like CPU, RAM, network, and even the physical hardware. You even have homed, which gives you more control over your user environments. There are systemd mounts[1], boot, machines, timers, networks, and more. It's overwhelming.
I think there are two commands everyone dealing with systemd services should know:
These really helped me go down the rabbit hole, and I think most people should have some basic idea of how to restrict their services. A little goes a long way, so even if you're just adding `PrivateTmp=yes` to a service, you're improving it. I've replaced all my cron jobs with systemd jobs now, and while it is a bit more work up front (just copy-paste templates...) there are huge benefits to be had: way more flexibility in scheduling, and you're not limited by things like your computer being off[3].
[0] https://news.ycombinator.com/item?id=45318649
[1] I've found mounts really helpful and can really speed up boot times. You can make your drives mount in the background and after any service you want. You can also set timeouts so that they will power down and automount as needed. That can save you a good amount on electricity if you got a storage service. This might also be a good time to remind people that you likely want to add `noatime` to your mount options (even if you use fstab)[2].
[2] https://opensource.com/article/20/6/linux-noatime
[3] You can have it run the service on the next boot (or whenever) if it was supposed to run when the machine was powered off.
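For anyone replacing cron jobs, a minimal timer/service pair looks like this (names and paths are placeholders); `Persistent=true` is what gives you the catch-up-after-poweroff behaviour from footnote [3]:

```ini
# backup.timer
[Timer]
OnCalendar=daily
Persistent=true           # run at next boot if the machine was off at the scheduled time
RandomizedDelaySec=15m    # spread the start time to avoid thundering herds

[Install]
WantedBy=timers.target

# backup.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```

Enable with `systemctl enable --now backup.timer`; `systemctl list-timers` shows the next scheduled run.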
I'm fairly confident that systemd, docker, podman, bubblewrap, unshare, and probably other tools are all wrapping the same kernel features, so I'd expect a certain degree of convergence in what they provide.
Kubernetes is also this but for the cloud. CPU/Mem/Storage limits
I feel like Docker and other containerization tools are becoming even less relevant given that systemd can twiddle the same isolation bits so there's no real difference in terms of security that using a container tool grants.
Seeing that podman can run containers as systemd services (see https://codesmash.dev/why-i-ditched-docker-for-podman-and-yo... ), it seems like using containers other than as a distribution mechanism has few advantages, and many disadvantages in terms of dependency updates requiring container rebuilds.
> I feel like Docker and other containerization tools are becoming even less relevant given that systemd can twiddle the same isolation bits so there's no real difference in terms of security that using a container tool grants.
I see it as _exactly_ the opposite. Podman gives me more or less the same security controls as systemd and the package/delivery problem is solved.
Call me when `systemctl pull ...` fetches the binary and everything else needed to run it _and_ puts the .service file in the right spot.
Literally exists.
importctl pull-tar https://example.com/image.tar.gz && portablectl attach image
Did you call him?
with podman-systemd/"Quadlet" we're basically there:
https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
I replaced all my home server services with this and uninstalled docker entirely. It's been very nice.
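A Quadlet unit is just a `.container` file that systemd generates a regular service from; a minimal sketch (image and port are illustrative):

```ini
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example container managed by systemd via Quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, it shows up as `whoami.service` and can be started, stopped, and journaled like any other unit.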
NixOS kind of does that, except better. Usually you just set `services.foo.enable` to true, along with any other config you want. It's also super easy to wrap services in a container if you want, and doing so is kept conceptually separate from dependency management. If you want to make your own systemd service, then referencing a package in `ExecStart` or wherever will make it automatically get pulled in as a dependency.
That, and dependency management, no? I’m not going back to installing libwhathaveyou-dev-0.28c1 ever again.
Containers don't solve dependency management; they just push it a layer up so it's only someone else's problem.
That sounds like solving dependency management.
> Call me when `systemctl pull ...` fetches the binary and everything else needed to run it _and_ puts the .service file in the right spot.
That would be pretty awesome, actually.
I can already hear the systemd-haters complaining about The One True Unix Way™ is to have tools that only do one thing even if that leaves holes in their functionality.
That seems like a `machinectl` task though.
Isn't this literally what podman-systemd does? You don't exactly run a command to pull a container, but just like systemd you place a config file in the right directory, tell podman-systemd to reconfigure itself, and run the service the standard systemd way.
> Isn't this literally what podman-systemd does?
That was my point, basically.
You have two options:
1) the usual `curl` or `wget` to fetch the binary and the lib(s) and all the work of validating and putting them in place and the like and _then_ you can use a systemd/.service file to set up controls for the bin
2) podman pull and then either ask podman to make a .service file for you or write your own
because only one of the two approaches has solved the package/distribution issue, containers are _not_ "less relevant given that systemd can twiddle the same isolation bits"
What "validating" does docker/podman pull do that is in excess of a curl of a file?
One of the advantages of a real package manager is that it checks signatures on the content that is downloaded. The supply chain on a linux distro's package repos is much harder to break into than typosquatting into a docker registry somewhere.
That would mean systemd entering package management territory. Now THAT would not be well received.
IMO, docker layering over the OS's built-in package management and update lifecycle in an incompatible ways is far worse than systemd replacing the init system and other service management functionality.
Back in the old days (late 90s, early 2000s) as a sysadmin I'd often write scripts to chroot or otherwise isolate services rather than run them as root, so extending the init system to handle those features feels like a logical extension, not an incompatible replacement.
systemd-sysupdate already exists. systemd won't run the software repository of course, but with systemd-sysupdate together with some overlay mounts you can get Steam Deck-like ease of use system updates.
For software management in R/W environments, there's the podman + systemd combo that'll let you run containers like normal systemd services.
Container rebuilds are a disadvantage? Using mkosi and systemd-nspawn for containers, it doesn't really feel that way; it's still a lot easier to build some distroless app container than to finagle a service into having zero access to other binaries, libraries, or other data entirely.
I don't get the distribution "advantage" building them with mkosi, and I'd argue it's a weakness: far too many people are running containers with who-knows-what inside them.
Oddly, "mkosi" is "misfortune" in Swahili.
> I feel like Docker and other containerization tools are becoming even less relevant
Do you work in the software industry?
Docker is absolutely less relevant. My personal machines haven't run Docker for months and my employer is finishing our migration away from Docker in a few months.
Containers are as relevant as ever, of course.
Systemd hardening is great, but each service needs its own bespoke config and that takes a bit of time and trial & error. Here's the override I've been using for Jellyfin: https://gist.github.com/radupotop/61d59052ff0a81cc5a32c92b3b...
Some references:
- https://docs.arbitrary.ch/security/systemd.html
- https://gist.github.com/ageis/f5595e59b1cddb1513d1b425a323db...
What makes me scratch my head is why the failed access attempts are not easy to show and log. A correctly configured service should not attempt to access things it is not intended to access. If it has to check whether it has access and act conditionally, this also should be made explicit, either in the service code or in its configuration.
There should be an strace-like tool that collects a log of such "access denied" errors for troubleshooting. Even better, each service should run in its own process group, and tracing could be switched on/off for a particular process group.
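A partial stand-in exists already: recent strace (5.2+) can print only failing syscalls, which works well for spotting sandbox denials (the unit name is a placeholder):

```shell
# attach to a running service's main process and log only failed syscalls
strace -f -Z -p "$(systemctl show --property=MainPID --value foo.service)"
```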
> A correctly configured service should not attempt to access things is is not intended to access. If it has to check if it has access and act conditionally
It's normally recommended to attempt the access and handle the denial, instead of doing two separate steps (checking for access and doing the access); the later can lead to security issues (https://en.wikipedia.org/wiki/TOCTOU).
Yes, and this explicit access attempt is exactly what should be logged by the service.
Systemd haters really are often a masterclass in finding problems with flexible, sanely configurable systems.
The fact that systemd continues to get hate, ~15 years after mass adoption, is a cultural phenomenon worth understanding. Benno Rice of freebsd gave a super interesting talk about this: The Tragedy of systemd: https://www.youtube.com/watch?v=o_AIw9bGogo
I can only imagine how long the Wayland haters will be writing blogs once LTS distro start shipping Wayland-first desktops. Looking at the whole upstart/systemd drama, I'm guessing we'll hit the 2k38 bug before they'll find something new to write about.
It's gaslighting to equate the two at this point.
Systemd is strictly better than what came before it, while Wayland still has missing functionality and breaks a lot of use cases.
Not only is systemd strictly better, they had really extended themselves to make migrating services as simple as possible rather than assert you have to follow a new status quo entirely. Allowing services to incrementally and optionally adopt features was the key part.
but you can adopt incrementally, thanks to XWayland. sure it's not the same, but unlike systemd vs sysv-init, you can't run two windowing systems side by side with equal privileges unless maybe you have two monitors and graphics cards. one has to be the one that controls the screen, and the other must necessarily run as a client inside it. wayland-on-X may have been possible, but it would have limited Wayland's capabilities and development.
i am willing to bet that there are systemd haters out there that love wayland and would make the exact reverse claim.
> but you can adopt incrementally, thanks to XWayland.
Wayland's weakest point is a11y and automation tools, which XWayland doesn't work for.
> sure it's not the same, but unlike systemd vs sysv-init, you can't run two windowing systems side by side with equal privileges unless maybe you have two monitors and graphic cards. one has to be the one that controls the screen. and the other must necessarily run as a client inside it. wayland-on-X may have been possible, but it would have limited waylands capabilities and development.
You can do both, actually; XWayland can run an X server in a window, and many Wayland compositors will run in a window on top of an X11 server. It's not seamless, of course, but it does work.
You don't understand. Everybody who doesn't like the things I like is the same: bad and stupid. Or maybe you do understand, because suddenly Wayland came up, and since you personally are annoyed by it, now this style of argument is "gaslighting."
It's not "gaslighting" it's just name-calling and argument through insinuation about other people's characters, rather than substance. It's not even ad hominem, because you assume people are arguing in bad faith because of the positions they've taken, not because you know a thing about them.
As a systemd user but Wayland "hater", to me the big difference is that you can adopt systemd without losing functionality - e.g. you can configure systemd to run sysV init style init scripts if you insist and no functionality is lost. The "complaints" in the linked article, are minor and about options that can just be turned off and that are offering useful additional capabilities without taking away the old.
Whereas with Wayland the effort to transition is significant and most compositors still have limitations X doesn't (and yes, I realise some of those means X is more vulnerable) - especially for people with non-standard setups. Like me.
I use my own wm. I could start with ~40-50 lines of code and expand to add functionality. That made it viable. I was productively using my own wm within a few days, including to develop my wm.
With Wayland, even if I start with an existing compositor, the barrier is far larger, and everything changes. I'm not going to do that. Instead I'll stick with X. The day an app I actually depend on stops supporting X, I'll just wrap those apps in a Wayland compositor running under X.
And so I won't be writing blog posts about how much I hate Wayland, and hence the quotes around "hater" above. But maybe I will one day write some about how to avoid running Wayland system-wide.
If Wayland gave me something I cared about, I'd take the pain and switch. It doesn't. Systemd did, so even if I hadn't liked it better than SysVinit, I'd still have just accepted the switch.
If I one day give up Xorg, my expectation is that it'll be a passive-aggressive move to a custom franken-server that is enough-of-X to run modern X apps coupled to enough-of-Wayland to run the few non-X-apps I might care about directly (I suspect the first/only ones that will matter to me that might eventually drop X will be browsers), just because I'd get some of the more ardent Wayland proponents worked up.
I remember the good old days of xfree86. It was arse but mostly worked OK on a PC. Then this blasted Xorg thing rocked up and it was worse for a while! Nowadays I can barely remember the last time I had to create an xorg.conf.
Wayland has a few years to go yet and I'm sure it will be worth the wait. For me, it seems to work OK already.
wayback has you covered https://gitlab.freedesktop.org/wayback/wayback
the idea here is to make it easy for x-based wms to keep working like they always have!
That's interesting, but I really would rather write something stripped down with X as the base. This might be a good intermediary step, though.
I mean unless you're going to commit to maintaining xlibre or something, wayback seems like the future for x-based desktops.
> [wayback] is intended to eventually replace the classic X.Org server, thus reducing maintenance burden of X11 applications, but a lot of work needs to be done first.
As I said, I'd rather write something from scratch when the time comes that Xorg becomes a challenge.
Wayback is really the only good step in the X11 space I've seen. They could use your help.
It also has the benefit that if it gets enough traction, then you can displace the backend off of Wayland and go directly to hardware.
Killing Wayland would just be a bonus...
> With Wayland, even if I start with an existing compositor, the barrier is far larger, and everything changes.
I mean, no one puts a gun against your head to use Wayland, X will be on life support for decades and will likely work without any issue.
But with this stance, no evolution could ever happen and every change would be automatically "bad". Sure, changes have a downside of course, but that shouldn't deter us in every case.
Plenty of evolution can happen. The problem with Wayland to me is that it's not evolution, but a step backward. It's forcing a tremendous amount of boilerplate and duplication of code.
X can evolve both by extensions and by "pseudo extensions" that effectively tell the server it's okay to make breaking changes to the protocol. There are also plenty of changes you could just make and accept the breakage because the clients it's break are limited.
I don't mind breaking changes if they actually bring me benefits I care about, but to me Wayland is all downside and no upsides that matter to me.
I haven't seen this before. It's very interesting so far!
I used to be a systemd hater about 10 years ago, now it's probably my favorite part of my distro.
When you see a large number of masters spanning diverse skill levels across a population, maybe it's an easy skill to acquire.
Dude’s been arguing with people since at least 2012 that systemd is a good thing. It took me less than a minute to figure that out by searching his blog.
I genuinely believe that systemd might have the highest “haters” to “benefit-to-humanity” ratio, out of any software project in history.
PulseAudio also drew a lot of disapproval, until Pipewire appeared and finally did the same thing (and more) well.
Maybe systemd (service management, logind, the DNS resolver, the logging system, etc.) will eventually be re-implemented in a way that does not have the irritating properties of the original systemd.
I'd say that systemd feels like typical corporate software. It has a ton of features to check all the requisite boxes, it's not very ergonomic, and it does things the way the authors wanted and could sell to corporate customers (who are not the end users), not the way end users prefer. It also used to be bug-ridden for quite some time after being pushed upon users. It comes from Red Hat (now part of IBM), so you could say: here's why! But, say, podman also comes from Red Hat, and does not feel like such corporate software; to the contrary, end users enjoy it.
Pipewire also comes from Red Hat FWIW
And maybe people didn't hate Pulseaudio because it came from red hat? But maybe people hated red hat after they pressured gnome to depend on it?
Maybe Red Hat didn't actually "pressure GNOME to depend on it", and that's mostly a meme?
You mean the highest combined amount of haters and benefit? A high ratio means many haters, little benefit.
Hey, now I'm interested in more such software overall.
Like, imagine a list where people can submit entries and give their reasoning.
What if I create a GitHub repo with issues for this, so we can discuss them? I could create a website later if there's interest. It's a really nice thought experiment.
Are we talking about every kind of software, including proprietary?
Are we talking about, say, websites too, or services? (What if we extend it to services like websites, or even products beyond the software niche entirely? That's interesting too.)
Another interesting point that comes to mind: cryptocoins might sit at the opposite extreme from this project, in the sense that I believe very little net positive has been done for humanity in general. Sure, the privacy aspects are nice, but it's not worth having people invest their life savings thinking it's going to 100x, y'know. I wrote a whole article about this, frustrated by the way people think of crypto as an investment when it could very well be a crypto"currency", but that's a yap for another day.
I really nerded out over this and I loved it; we need a really good discussion about it :>
My mom and dad live in a country ruled by a dictator, and there are restrictions on sending money there. I'm happy that I can send crypto (USDC over Solana) to my family for basically a few cents.
Hey mate, I myself understand this usecase.
Sorry if I wasn't being clear.
I am all for stable cryptocurrencies but not cryptocurrencies which prey on desperate people wanting 100x returns or promising too much that we all know is BS for hype and not delivering on it.
I myself am a teenager and have gotten some 100-ish dollars from winning competitions in the crypto space (not that much, but I'm proud of it), and I couldn't really have gotten that money if it wasn't for crypto, y'know.
If I haven't made my stance clear: I am all for stable cryptocoins, but those are just a very minor part of crypto, and most of the rest generally causes harm. Sure, there are some outliers. Here's an article I wrote one day when some crypto thing ragebaited me so hard that I basically just wrote a whole piece trying to explain my position:
https://justforhn.mataroa.blog/blog/most-crypto-is-doomed-to...
"This whole space is still full of scam. 99% of the times everyone has ulterior incentive (to earn money) but still I mean, it kinda exists and I mean still crypto (stablecoins?) are the only sane way to transfer money without kyc"
This is one of the comments I had written, and I hope it makes my stance clear. I just searched and found that I mention stablecoins almost 10 separate times in the article; I am all for stablecoins, and I think even in my original comment I said that stablecoins are the only thing that is remotely nice about cryptocoins (maybe Monero, if we are really stretching it).
I saw an r/cryptocurrency poll where someone said they are 97% in it for the money and 3% for the tech, and at that point this is almost like Polymarket gambling, except worse and more out of your control; investing in some memecoin is almost just giving someone your money.
I kinda like stablecoins (even gold-backed ones like PAXG are good), but not much else as a baseline token for the most part. I myself hold USDC on chains like Stellar/Polygon, which also have low gas fees for the most part.
So what are your thoughts? I hope I've made my stance clear, lol.
Yeah, I understand what you mean. Actually, I've been investing in Solana, since it's the one that allows low-cost transactions and I feel like it's the network that's actually being used for useful stuff (Stripe and Visa are working on building on top of it), and I made a 30% return. Since cryptos are risky I didn't put in much money, so I only made $50. But I'm hopeful about the development of Solana, stablecoins and the like.
I have built things on top of Nano, which has literally zero fees.
https://nanotimestamps.org/appseed (scroll to the end, otherwise it's all about AppSeed, which was something that I vibe coded lol)
Uh, the point I am trying to make is: sure, you might invest in Solana for the low gas fees, but you used it with USDC. I'm not sure if Solana requires some minimum balance of SOL or something (I think it does), but the fact that people can pay lower fees while transacting in USDC on the Solana chain doesn't make me believe that Solana's native token price should increase just because gas fees are low...
Like, quite frankly, they are all driven by speculation. Sure, there is some aspect of investment, but the reason I believe in stock markets, or even money, is that they grow because people in aggregate get somewhat more productive over time, whether through tech, innovation, or just learning from mistakes.
Crypto might grow and it might not. It's certainly riskier, and that risk can mean bigger profits or bigger losses.
See, the thing is, I believe prices are for the most part roughly accurate, and if they aren't, I shouldn't want to mess around trying to prove the market wrong. It's mostly an efficient market, with bots that can trade in milliseconds, but it was fairly efficient even before that.
I am sure Solana's prospects are effectively priced in, and even if they're not, well, I still wouldn't want +/-30% swings.
Stripe is also building its own chain (Tempo), I think. I am all for Stripe having a cryptocurrency too, but just a stablecoin.
Uh, idk man, I see these people making 30% returns and I think wow, but then I see the amounts they put in and the time they invest, and they say they are "learning", which is great, but my question is whether this is even something you can "learn".
Because the thing is, if you could guarantee me 30% returns for 30 years, or even one guaranteed 30%, or some historical data to back it up, that would be great. But even historical data isn't always enough, which is why I think of productivity as the final benchmark.
My brother put like $1 into a Polymarket "will BTC go up or down" market and made 100% in an hour. He gave me all the stats about these max-something curves and how they move, and I appreciate my brother a lot, but I can't help it; I just don't like that he feels he needs to earn money on top of money.
Idk, maybe it's just me, but I literally can't predict whether Solana gets hacked tomorrow, making it all worth 0. I saw some project on HN about how Sui, a literal billion-dollar coin, suffers from poorly set up nodes, and a malicious actor could theoretically stall the Byzantine consensus and maybe even bring the network down lol.
Maybe I'm old-school for a literal teenager, but here's the pipeline that I feel "investors" get sucked into:
investing -> trading -> options trading/derivatives / forex -> cryptocurrencies -> memecoins -> polymarket trading / crypto trading
I have thought about writing my own coin with low fees and slightly more programmability than Nano, to integrate into something like cosmic, while maybe having zero gas fees. But honestly, I would do it for the tech, not the money, even though the money is lucrative too.
If you really want, there are some better ways, like this 2048 game launched by some crypto project that I kinda won: zero fees, got like 100 bucks, and I had invested nothing. But I think it ended.
I feel like the only times I made money were when I was lucky. If I'm honest, I made like infinity percent the first few times just doing something I like / messing around, but that's not really sustainable or predictable. Neither is the 30%, imo.
Also, uh, yeah, some chains require a base coin; Stellar required 0.5 XLM plus 1 more to hold USDC, but I doubt something like Stellar's price can grow just because of that. There is also a USDC project whose aim is to make gas fees payable in USDC, and I hope those fees will be low, as that could be extremely useful imo.
To each their own; I don't even trust the S&P 500 at this point, given how concentrated it is in AI.
Would love it if you could email me (see my about me!). I love talking about this!
I hope you'll read the article I shared; I would love to hear your thoughts.
The tech is cool, but people have ulterior incentives. When I say not to, and to invest in something as boring as world index funds, I legit don't have an ulterior motive, aside from offering what I believe is a reasonable financial opinion and hearing people out.
(Edit?): I finally made the list [1]
For now I've made a gist, but I might make some Fediverse or Reddit or Bluesky posts in the future too.
[1]: https://gist.github.com/SerJaimeLannister/36b4cdc7e9bb790929...
Also, side note, but I didn't know that you can't really name gists. Or at least I may lack the skills to find a way to change the gist name from this 36b4 thing to something better so that it might be search-indexed too, but idk lol.
I mean, I just thought of it as a way of curating the software list/discussion without bloating Hacker News, which might make things a bit difficult to find if someone is looking for something similar imo. I can make an Ask HN too, but I'm more than willing to hear suggestions if you have any :p
I think I agree. I’m curious what software would be in places 2-10. If we’re talking about HN, maybe excel/google sheets? Maybe C++? Recent versions of macOS always seem to get hate, but I think macOS is in a different category.
I think excel/google sheets are generally well regarded in online circles. I also don't see that much C++ hate, at least not the same kind of visceral hate systemd receives.
C++ hate is somewhat like: nooo, memory safety better; Rust gets depicted with the chad emoji and C++ as the soyjak.
It's there, but most people don't care. C++ is fine imo; whatever works, works.
It's very little hate compared to systemd, imo.
Such a goofy post.
"People who hate person X are often a masterclass in finding fault in wonderful, intelligent, faithful, generous men."
People who talk like this are worse than systemd.
Ah yes, sub-reply spewing false equivalence. Surely that proves my point wrong oh enlightened one.
I don't understand how people consider this article "systemd hate".
The article is informative, even a bit bland when it comes to opinions on systemd as a whole. It's literally just saying "if you write services, complain loudly and with context about permission errors" and "if you use systemd with hardening enabled, consider it alongside discretionary and mandatory access control when troubleshooting permission errors".
I'm ok with it as long as it doesn't cause __any__ confusion whatsoever.
> One of the traditional rites of passage for Linux system administrators is having a daemon not work in the normal system configuration (eg, when you boot the system) but work when you manually run it as root.
I don't remember the last time I ran a daemon by hand (one that I hadn't developed myself). I always just run the systemd unit via systemctl and debug that.
> A standard thing I do when troubleshooting a chain of programs executing programs executing programs is to shim in diagnostics that dump information to /tmp.
This seems like a very esoteric case in the days of structured logging and log levels.
> A mailer usually can't really tell the difference between 'no one has .forward files' and 'I'm mysteriously not able to see people's home directories to find .forward files in them'
Obviously a daemon that should access files in people's home directories shouldn't have ProtectHome=true. It's the responsibility of the daemon developer or the package maintainer to set appropriate flags based on what the daemon does. Someone had to explicitly write "ProtectHome=true". It's not the default, and it doesn't just appear in the service file.
When in doubt don't set security options at all, instead of shipping a broken daemon that you don't understand why it doesn't work.
Note: please base your daemon on D-Bus or a socket in /run and not on reading arbitrary files from my home directory.
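A minimal sketch of the /run-socket approach in Python (the socket path, the one-shot lifetime, and the echo protocol are all made up for illustration; a real daemon would bind under /run, loop forever, and ideally accept the socket from systemd via socket activation):

```python
import os
import socket
import threading
import time

# Hypothetical path; a real daemon would use something like /run/mydaemon/ctl.sock
SOCK_PATH = "/tmp/mydaemon-demo.sock"

def serve_once(path: str) -> None:
    """Accept a single connection, echo its message back, then exit."""
    if os.path.exists(path):
        os.unlink(path)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(path)
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

def query(path: str, msg: bytes) -> bytes:
    """Connect to the daemon's socket, retrying briefly while it starts up."""
    for _ in range(100):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as c:
                c.connect(path)
                c.sendall(msg)
                return c.recv(1024)
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.05)
    raise RuntimeError("daemon socket never became available")

if __name__ == "__main__":
    t = threading.Thread(target=serve_once, args=(SOCK_PATH,))
    t.start()
    print(query(SOCK_PATH, b"ping").decode())  # prints "ping"
    t.join()
```

The point being: a socket in /run is something ProtectHome can never break, whereas per-user dotfiles are exactly what it hides.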
I also don't understand the larger complaint. Should we not make our daemons run in more secure environments?
The private /tmp bit us when we updated our servers to Debian 12 and found that a batch process could not access the same temporary files as our web application. Luckily, it was very easy to fix by adding an extra systemd override file to disable that feature on the Tomcat service.
Make sure to do

    systemctl edit foo.service

The reason for this is that it creates an override file rather than editing the unit file directly. This means you'll keep your changes even if the packaged unit file changes in an upgrade. It obviously also helps you roll back and check that you got things right. Another side benefit is that you can put these override files on GitHub. Also, pretty much every service should use `PrivateTmp=yes`, and this is surprisingly uncommon.

Then run

    systemd-analyze security foo.service

It'll give you a nice little rundown of what is restricted and what isn't. There's a short description of each of the capabilities, but I also like this gist[0]. The docs kinda suck (across systemd), but play around and it starts to make sense.

This stuff is surprisingly powerful, and it's sent me on a systemd binge for the last year. You can restrict access to basically anything, effectively giving you near container-like restrictions. You can restrict paths (and permissions on those paths), executables, the conditions that cause the service to start/stop, how much CPU and memory a service can grab, and all sorts of things. That last one can be really helpful when a service is being too greedy, effectively giving you an extremely lightweight container.
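For a sense of what such an override can lock down, here's a sketch of a hardening drop-in (the directives are real systemd options from systemd.exec(5) and systemd.resource-control(5), but the service name, path, and values are made up):

```ini
# /etc/systemd/system/greedy.service.d/override.conf  (hypothetical service)
[Service]
# Filesystem isolation
PrivateTmp=yes
ProtectHome=yes
ProtectSystem=strict
ReadWritePaths=/var/lib/greedy
# Resource caps: the service's cgroup is throttled or OOM-killed
# before it can starve the rest of the host
CPUQuota=200%
MemoryMax=2G
```

After a `systemctl daemon-reload` and restart, `systemd-analyze security` should reflect the tighter sandbox in its exposure report.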
If you want to go further, then look into systemd-nspawn and systemd-vmspawn. This will give you a "chroot on steroids" (lightweight container, which can be used in place of things like docker) and a full fledged VM, respectively. Both of these have two complementary commands: machinectl and importctl.
You can use importctl to get an image from an arbitrary URL, so if a project decided to provide one in addition to (or as a replacement for) a Docker image, you could have a pretty similar streamlined process. I'm no expert, but I've not run across a case where you can't use systemd as a full-on replacement for Docker. While it's more annoying to make these images, I've found that I get better control over them and feel a lot less like I'm fighting the system. A big advantage is that these can run with fewer resources than Docker, but this really depends on tuning. The more locked down, the slower it will be, so this is only a "can", not a "will" (IIRC the defaults on vmspawn are slower[1]).

Finally, note that all of these have a `--user` flag, so you don't always need sudo. That in itself is a big advantage, because you're automatically limiting capabilities by running it as a user service. No more fucking around with groups and all that.
Honestly, the biggest downside of systemd is the lack of documentation and the lack of wider adoption and examples to pull from. But that's a solvable problem and the exact kinda thing HN can help solve (this comment is even doing some of that as well as some others in this thread). Systemd is far from perfect and there's without a doubt a good learning curve (i.e. lack of docs), but IME it's been worth the effort.
[0] https://gist.github.com/ageis/f5595e59b1cddb1513d1b425a323db...
[1] https://github.com/systemd/systemd/issues/18370
> systemctl edit foo.service
That seems like an unnecessary foot-gun. Needing a special command to safely edit what appears to be an ordinary config file indicates some other part of the system is trying to be too clever. In this case, I think it's a combination of filling /etc with symlinks to stuff in /usr, and package managers casually overwriting stuff in /usr without the careful handling that stuff in /etc deserves.
You can always edit the service directly. There's nothing stopping you from doing that, and most people probably do it this way. Doing that would make it just like any other application with a config file.
BUT have you ever had configs overwritten by an update? Have you ever found an update that broke your config? These are really quite annoying problems to deal with. Having the override file basically means you can keep the maintainer's config as your "gold standard" and then edit on top of it without worrying about fucking things up.
This is the difference to me:
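Roughly, the difference looks like this (the service name `foo` and the option shown are just illustrative):

```ini
# Editing in place: /etc/systemd/system/foo.service
#   your whole copy shadows the packaged unit and drifts from it on upgrades.
#
# Editing via `systemctl edit foo`: the packaged unit stays untouched and
# only your delta lives in a drop-in:
#   /etc/systemd/system/foo.service.d/override.conf
[Service]
PrivateTmp=yes
```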
The pros outweigh the cons IMO. I had to get myself into the habit of doing `systemctl edit foo` instead of `sudo vim /etc/systemd/system/foo.service`, but it's provided more benefits than the annoyance of building the habit.

systemd restrictions on daemons are an excellent feature for daemon security, especially for publicly accessible services. The restrictions are fully configurable and can be removed, so there is nothing to lose as long as you control the service definition file. Moreover, at least the GPT LLMs are well versed in them, making unit files easy to write or update.
Old Man Yells at Cloud.
Nah, this blog is neutral about it at worst. He publishes daily, and it's usually tips or notes on how things work, especially if it's different from the "traditional" way of doing things. One of my favorite blogs for letting me know how things work nowadays
>Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.
https://cscie2x.dce.harvard.edu/hw/ch01s06.html
systemd seems to violate this at every turn, but sadly they are not the only ones to do that in Linux. I wonder at what point we should start considering Linux to be its own thing instead of "UN*X like".
I know people like to complain, but these comments are not really based in reality. systemd is really a suite of applications that work together to provide init, service management, network management, &c. there's no single giant program that does all this. each component has a specific focused task.
you could break this out into individually named projects, but nobody complains about coreutils enabling dozens of different workflows.
What about problems that are not "one thing" only? Like, should a web browser be 640 piped subprocesses in a trenchcoat? Would that really be more maintainable or easier to understand?
A lot of this comes from the kernel, though; cgroups and processes have tons of knobs to tweak, and systemd merely exposes them.
Systemd, as usual, randomly and suddenly breaking things that worked for a decade, for people who asked for nothing. Because they know better what you need...
And what's your preferred alternative to what's described in the article? Packaging every single service in its own 500mb ubuntu chroot and using docker? Running a local dhcp server and a bridge interface so that you can selectively expose ports?
Here's an alternative title for this post: these days, two lines in a systemd service file can easily constrain arbitrary applications to just the files and resources they need, and only those.
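For instance, the two lines might be something like (the directives are real systemd options; which service they'd go in is left open):

```ini
[Service]
ProtectHome=yes
PrivateTmp=yes
```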
My grumpy preferred alternative would be "you're supposed to be an init service. That's not your job".
> systemd is a suite of basic building blocks for a Linux system.
You can always use a simpler init system if you want
I linked it elsewhere in this thread, but you should really watch this talk, particularly 12:45 through 16:20: https://www.youtube.com/watch?v=o_AIw9bGogo
tl;dr: systemd isn't meant to be an init system, it's meant to manage services, and the alternative world where you don't have a unified system for managing services and events actually sucks.
Doesn't SELinux do that (and more)?
SELinux doesn't really provide anything like the ProtectHome or PrivateTmp mentioned in the article. SELinux only does access control, while systemd can create new resources that are scoped to a specific service.
The problem is the “more”. SELinux is extremely flexible and does what the configuration tells it to do. And it does not compose well. Want to point whateverd at /var/lib/whatever? Probably works if the distro packages are correct. Want to make /var/lib/whatever be a symlink? Probably does not do what you expect. Want to run a different daemon that accesses /var/lib/whatever or mount it into a container? Good luck. Want to run a second copy of the distro’s whateverd and point it at a different directory? The result depends on how the policy works.
And worst: want to understand what the actual security properties of your policy are? The answer is buried very, very deep.
Systemd is probably the part of my distro that works best.
The example given is a distro changing their bundled systemd unit files to use new features, yet you choose to blame systemd?
You do realize distros can also change SysV shell scripts in ways that break your use case as well, right?
Did you read the article? The author is complaining that systemd introduced _optional_ security mechanisms for units. If you don't like these mechanisms, don't use them in your units.
Systemd didn’t “break” anything at all here. This author’s arcane debugging workflow doesn’t work for certain units that have opted into the new security mechanisms. But that is hardly systemd’s fault.
“Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".” --https://news.ycombinator.com/newsguidelines.html