Not much we didn't know (you're basically SOL since an owner was compromised), however we now have a small peek into the actual meat of the social engineering, which is the only interesting news imho: https://github.com/axios/axios/issues/10636#issuecomment-418...
jasonsaayman and voxpelli had useful write-ups from the "head on a swivel" perspective of what to watch out for. Jason mentioned "the meeting said something on my system was out of date." They were using a fake Microsoft Teams meeting, and that's how the attackers got RCE. Would love more color on that.
they are cloning Zoom and MS Teams, and try to get people to either copy a script or download and run a binary/.dmg. The script sits in a textarea that's conveniently too small to show the whole thing, with scrollbars hidden by CSS and a copy button; when you paste it into the terminal you'll only see the last few lines, which also look innocent, but there's a `curl | zsh` or `mshta` buried somewhere in there. The binary might even be signed by "GoogIe LLC." (a capital I in place of the lowercase l, a name chosen to look right in the usual typeface used on macOS).
...
it seems the correct muscle-memory response to train into people is: "if a meeting link someone sent you doesn't work, create one yourself and send them the link"
(and of course never download and execute anything, don't copy scripts into terminals, but it seems even veteran maintainers do this, etc...)
see Infection Chain here https://cloud.google.com/blog/topics/threat-intelligence/unc...
textarea at the bottom of this comment: https://github.com/axios/axios/issues/10636#issuecomment-418...
An owner being compromised is absolutely survivable on a responsibly run FOSS project with proper commit/review/push signing.
This and every other recent supply chain attack was completely preventable.
So much so I am very comfortable victim blaming at this point.
This is absolutely on the Axios team.
Go set up some smartcards for signing git pushes/commits and publish those keys widely. Mandate signed merge commits so nothing lands on main without two maintainer sigs, and no more single points of failure.
Did you investigate the maintainer compromise and publication path? The malicious version was never committed or pushed via git. The maintainer signs his commits, and v1 releases were using OIDC and provenance attestations. The malicious package versions were published locally using the npm cli after the maintainer's machine was compromised via a RAT; there's no way for package maintainers to disable/forbid local publication on npmjs.
It seems the Axios team was largely practicing what you're preaching. To the extent they aren't: it still wouldn't have prevented this compromise.
I cannot find a single signed recent commit on the axios repo. It is totally yolo mode. Those "signed by GitHub" signatures are meaningless. I stand by my comment in full.
One must sign commits -universally- and -also- sign reviews/merges (multi-party) and then -also- do multi party signing on releases. Doing only one step of basic supply chain security unfortunately buys you about as much defense as locking only a single door.
I do, however, assign significant blame to the NPM team for repeatedly refusing optional package-signing support, which would let the server and client refuse a signing-enabled package that isn't signed by a quorum of pinned keys. But even aside from that, if packages were signed manually, canary tools could have detected this immediately.
What you sign or don't sign in your Git repo doesn't matter because NPM doesn't publish from a Git repo. Signing commits is still useful for your contributors and downstream forks but it won't have any effect on the users who use your package via NPM.
I think NPM is fully to blame here. Packages that exceed a certain level of popularity should require signing/strong 2FA. They should implement more schemes that publishers can optionally enable, like requiring mandatory sign-off from more than 1 maintainer before the package is available to download.
Then on the package page it should say: "[Warning] Weak publishing protection" or "[Checkmark] This package requires sign-off from accountA and accountB to publish".
2FA was mandated by npm
they had 2FA, but likely software TOTP (so it was either autofilled via 1Password or similar, or they were able to steal the seed)
at this point I think publishing an npm app and asking people to scan a QR with it is the easiest way (so people don't end up with 1 actual factor)
It wasn’t done through git. It was a direct npm publish from the compromised machine. If you read further down in the comments (https://github.com/axios/axios/issues/10636#issuecomment-418...), it seems difficult to pick the right npm settings to prevent this attack.
If I understand it correctly, your suggestions wouldn’t have prevented it, which is evidence that this is not as trivially fixable as you believe it is.
To prevent supply chain attacks you need multi-party cryptographic attestation at every layer, which is pretty straightforward. But you are correct, NPM and GitHub controls absolutely will not save you. Microsoft insists their centralized approach can work, but we have plenty of evidence it does not.
Operate under the assumption all accounts will be taken over because centralized corporate auth systems are fundamentally vulnerable.
This is how you actually fix it:
1. Every commit must be signed by a maintainer key listed in the MAINTAINERS file or similar
2. Every review/merge must be signed by a -second- maintainer key
3. Every artifact must be built deterministically and signed by multiple maintainers.
4. Have only one online npm publish key maintained in a deterministic and remotely attestable enclave that validates multiple valid maintainer signatures
5. Automatically sound the alarm if an NPM release is pushed any other way, and automatically revoke it.
And for 5, there should be help on the NPM end so the alarms can fire before the new update is actually revealed to the public. There could be a short staging window during which a release can be revoked before any harm is done. During this staging window NPM should also run the package through a malware scanner before allowing it to go public.
The interesting detail from this thread is that every legitimate v1 release had OIDC provenance attestations and the malicious one didn't, but nobody checks. Even simpler, if you're diffing your lockfile between deploys, a brand new dependency appearing in a patch release is a pretty obvious red flag.
To be honest, I would have assumed the tooling would do attestation verification for me. The diffing the lockfile would be on me though.
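The lockfile diff mentioned above is cheap to automate. A minimal sketch, assuming npm's modern lockfile shape where installed packages are keyed under "packages" (the file names and package names below are hypothetical):

```javascript
// Flag any package path present in the new package-lock.json that the
// old one never had; a brand-new dependency in a patch bump is suspect.
function newDependencies(oldLock, newLock) {
  const oldNames = new Set(Object.keys(oldLock.packages || {}));
  return Object.keys(newLock.packages || {}).filter((p) => !oldNames.has(p));
}

// Hypothetical before/after lockfile fragments:
const before = { packages: { '': {}, 'node_modules/axios': { version: '1.7.0' } } };
const after = {
  packages: {
    '': {},
    'node_modules/axios': { version: '1.7.1' },
    'node_modules/evil-helper': { version: '0.0.1' }, // new in a patch release
  },
};
console.log(newDependencies(before, after)); // [ 'node_modules/evil-helper' ]
```

Wired into CI as a pre-deploy gate, this turns "nobody checks" into a diff that has to be acknowledged before shipping.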
npm could solve half of this by letting packages opt into OIDC-only publishing at the registry level. v1 already had provenance attestations but the registry happily accepted the malicious publish without them.
Looks like a very sophisticated operation, and I feel for the maintainer who had his machine compromised.
The next incarnation of this, I worry, is that the malware hibernates somehow (e.g., if (Date.now() < 1776188434046) { exit(); }) to maximize the damage.
Isn't that already how it is?
I mean the compromised machine registers itself on the command server and occasionally checks for workloads.
The hacker then decides his next actions - depending on the machine they compromised they'll either try to spread (like this time) and make a broad attack or they may go more in-depth and try to exfiltrate data/spread internally if eg a build node has been compromised
> something on my system was out of date. i installed the missing item
Given that "extreme vigilance" at the primitive "don't install unknown software on your machine" level is unattainable, can there really be an effective project-level solution?
Mandatory involvement of more people, in the hope that not everyone installs random stuff, at least not at the same time? (though you might not even have more people...)
Incredible uptick in supply chain attacks over the last few weeks.
I feel like npm specifically needs to up their game on static analysis of malicious code embedded in public projects.
That's the reality of modern war. Many countries are likely planting malware on a wide scale. You can't even really prove where an attack originated, so uninvolved countries would also be smart to take advantage of the current conflict. For example, if you primarily wrote in German, you would translate your malware to Chinese, Farsi, English, or Hebrew, and take other steps to make it appear to come from one of those warring countries. Any country making a long-term plan involving malware would likely do it around this time.
You can write code in Chinese and Farsi?
You can deliberately put comments and descriptions in those languages.
NPM is designed to let you run untrusted code on your machine. It will never work. There is no game to step up. It's like asking an ostrich to start flying.
It’s far from a complete solution, but to mitigate this specific avenue of supply chain compromise, couldn’t GitHub/npm issue single-purpose physical hardware tokens and allow (or even mandate, for the most popular projects) that maintainers use these hardware tokens as a form of 2FA?
What would a physical token give you that totp doesn't?
Edit: wait, did the attacker intercept the totp code as it was entered? Trying to make sense of the thread
The attacker installed a RAT on the contributor’s machine, so if they had configured TOTP or saved the recovery codes anywhere on that machine, the attacker could defeat 2FA.
Oh, yes, I missed that the TOTP machine was compromised :\ Would that then imply it would have been okay if codes came from a separate device, e.g. a TOTP app on a Palm OS device with zero network connectivity? (Or maybe these days the easiest airgapped option is an old Android phone that stays in airplane mode...)
The easiest approach is a provider-issued hardware dongle like a SecurID or Yubikey. Lack of end-user programmability is a feature, not a bug.
> Lack of end-user programmability is a feature, not a bug.
I would argue that the problem is network accessibility, not programmability.
When designing a system for secure attestation, end-user programmability is not a feature.
It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.
I mean, I guess attestation might have some value, but it feels like moving the goalposts. Under the threat model of a remote attacker who can compromise a normal networked computer, I can't think of an attack that would succeed with a programmable TOTP code generator that would fail if that code generator was not reprogrammable. Can you?
> It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.
Er, most door locks are infinitely reprogrammable, because being able to rekey them without having to replace the whole unit is a huge advantage and the liability/disadvantage is minimal (falling under "It rather involved being on the other side of this airtight hatchway" in an unusually almost-literal sense where you have to be inside the house in order to rekey the lock, at which point you could also do anything else).
Sorry, attestation is the goalpost. The community wants certainty that the package was published by a human with authority, and not just by someone who had access to an authority’s private keys. That is what distinguishes attestation from authentication or authorization.
Yes, unfortunately authenticator apps just generate TOTP codes based on a binary key sitting in plain sight without any encryption. Not that it would help if the encrypting/decrypting machine is pwned.
code becomes trusted by review, but these crowd sourcing efforts to do so fizzled out, so in practice we have weak proxies like number of downloads
the implicit trust we have in maintainers is easily faked as we see
All maintainers need to do is code signing. This is a solved problem but the NPM team has been actively rejecting optional signing support for over a decade now. Even so maintainers could sign their commits anyway, but most are too lazy to spend a few minutes to prevent themselves from being impersonated.
If the solution is 'maintainers just need to do xyz', then it's not a solution, sorry. It's not scalable and which projects become 'successful' and which maintainers accidentally become critical parts of worldwide codebases, is almost pure chance. You will never be able to get all the maintainers you need to 'just' do xyz. Just like you will never be able to get humans to 'just' stop making mistakes. So you had better start looking for a solution that doesn't rely on humans not making mistakes.
"Discipline doesn't scale" as become one of my favourite quotes for a reason.
Any good payload analysis been published yet? Really curious if this was just a one and done info stealer or if it potentially could have clawed its way deeper into affected systems.
This article[0] investigated the payload. It's a RAT, so it's capable of executing whatever shell commands it receives, instead of just stealing credentials.
[0]: https://safedep.io/axios-npm-supply-chain-compromise/
> March 31, around 01:00 UTC: community members file issues reporting the compromise. The attacker deletes them using the compromised account.
Interesting it got caught when it did.
Check if your machine was affected with this tool: https://github.com/aeneasr/was-i-axios-pwned
How do we know this is not the next tool in line to compromise a machine?
Seems to me the root of the problem was that the guy was using the same device for all sorts of stuff.
Seems to me that one drastic tactic NPM could employ to prevent attacks like this is to use hardware security. NPM could procure and configure laptops with identity rooted in the laptop TPM instead of 2FA. Configure the NPM servers so that for certain repos only updates signed with the private key in the laptop TPM can be pushed to NPM. Each high profile repo would have certain laptops that can upload for that repo. Set up the laptop with a minimal version of Linux with just the command line tools to upload to NPM, not even a browser or desktop environment. Give those laptops to maintainers of high profile repos for free to use for updates.
Then at update time, the maintainer just transfers the code from their dev machine to the secure laptop via USB drive or CD and pushes to NPM from the special laptop.
they can simply make an app that requires tapping a button, so people don't end up with TOTP seeds stored in their password manager on the same notebook where they run 'publish' from
This is why I pin every dependency hash in my Python projects. `pip install --require-hashes` with a locked requirements file catches exactly this: if a package hash changes unexpectedly, the install fails. Surprised this isn't the default in the npm ecosystem.
Npm and the other JavaScript package managers do generate and check lockfiles with hashes by default. This was a new release, not a republishing of an old version (which isn’t possible on the npm registry anyway).
Does OIDC flow block this same issue of being able to use a RAT to publish a malicious package?
Nope, the most restrictive option available is to disallow tokens and require 2FA. I think that using exclusively hardware 2FA and not having the backup codes on the compromised machine probably would have prevented this attack though.
Someone in the linked Github thread describes an attack where the attackers waited for the victim to use their Yubikey for an AWS login, giving the attackers access to AWS as well. I don't think hardware 2FA is safe against a RAT.
No. axios (v1 at least; not v0) were setup to publish via OIDC, but there's no option on npmjs for package maintainers to restrict their package to *only* using OIDC. The maintainer says his machine was infected via RAT, so if he was using software-based 2FA, nothing could have prevented this.
No, once the computer is compromised nothing really helps assuming the attacker is patient enough.
I never understood why all the CAS tutorials pushed axios. This was before vite and build-scripts was how you did react. After the compromise I reviewed some projects and converted them to pure JS fetch and vite.
what's CAS?
I ask this on every supply chain security fail: Can we please mandate signing packages? Or at least commits?
NPM rejected PRs to support optional signing multiple times more than a decade ago now, and this choice has not aged well.
Anyone who cannot take 5 minutes to set up commit signing with a $40 USB smartcard to prevent impersonation has absolutely no business writing widely depended-upon FOSS software.
Normalized negligence is still negligence.
Is the onus really on people who write code here? It really should be on those who choose to use this unsigned code, surely?
Perhaps, but if it's gotten to the point where millions of people download the unsigned code, signing should probably become required. Maybe even reproducible builds.
Required by who though? If your business etc depends upon some code, it's up to you to ensure its quality, surely? You copy some code onto your machine then it's your codebase, right?
While I think anyone unwilling to sign their code is negligent, I also feel anyone unwilling to ensure credible review of code has been done before pushing it to production is equally negligent.
Anyone that maintains code for others to consume has a basic obligation to do the bare minimum to make sure their reputations are not hijacked by bad actors.
Just sign commits and reviews. It is so easy to stop these attacks that not doing so is like a doctor who refuses to wash their hands between patients.
If you are not going to wash your hands do not be a doctor.
If you are not going to sign your code do not be a FOSS maintainer.
No they don't! They have literally no obligations to you - and you've got the MIT/APL/GPL license to prove it. You're getting the benefit of their labour for free!
Even if they did sign the code, what's stopping them from slipping some crypto link in? And do they also need to check all the transitive dependencies in their code?
If you're paid then sure. Otherwise... It depends.
"Anyone that cannot spend $40+ to give every FOSS maintainer a smartcard and maybe even separate machines for releases and make the more secure workflow truly 5 minutes has absolutely no business widely depending upon FOSS"