I wonder who actually discovered this attack? Can we credit them? The phrasing in these posts is interesting, with some taking direct credit and others just acknowledging the incident.
Aikido says:
> We were alerted to a large-scale attack against npm...
Socket says:
> Socket.dev found compromised various CrowdStrike npm packages...
Ox says:
> Attackers slipped malicious code into new releases...
Safety says:
> The Safety research team has identified an attack on the NPM ecosystem...
Phoenix says:
> Another supply chain and NPM maintainer compromised...
Semgrep says:
> We are aware of a number of compromised npm packages
Mackenzie here, I work for Aikido.
This is a classic example of the security community all playing a part. The very first notice of this came from a developer named Daniel Pereira. He alerted Socket, who did the first review of the malware and discovered 40 packages. Afterwards, Aikido discovered an additional 147 packages as well as the CrowdStrike packages.
I'm not sure how Step found it, but they were the first to really understand the malware and recognize that it was a self-replicating worm. So multiple parties all played a part, more or less independently. It's pretty cool.
Several individual developers seem to have noticed it at around the same time with Step and Socket pointing to different people in their blogs.
And then vendors from Socket, Aikido, and Step all seem to have detected it via their upstream malware detection feeds - Socket and Aikido do AI code analysis, and Step does eBPF monitoring of build pipelines. I think this was widespread enough it was noticed by several people.
Since so many vendors discovered these packages seemingly independently, you'd think that they would share those mechanisms with NPM itself so that those packages would never be published in the first place. But I guess that removes their ability to sell an "early alert" mechanism through their offerings...
NPM is owned by GitHub/Microsoft. I'm sure they could afford to buy one of these products or just build their own, but clearly security is not a thing they care about.
Why should MS buy any of these startups when a developer (not any automated tech) found the malware? It looks like these startups did after-the-fact analysis for PR.
> The entire attack design assumes Linux or macOS execution environments, checking for os.platform() === 'linux' || 'darwin'. It deliberately skips Windows systems
If I were the conspiracy-minded sort I might jump to some wild conclusions here.
Usually security companies monitor CVEs and the security mailing lists. That's how they all end up releasing the blog posts at the same time. It's because they are all using the same primary source.
This happens because there's no auditing of new packages or versions. The distro's maintainer and the developer are the same person.
The general solution is to do what Debian does.
Keep a stable distro where new packages aren't added and versions change rarely (security updates and bugfixes only, no new functionality). This is what most people use.
Keep a testing/unstable distro where new packages and new versions can be added, but even then added only by the distro maintainer, NOT by the package developers. This is where the audits happen.
NPM, Python, Rust, Go, Ruby all suffer from this problem, because they have centralized and open package repositories.
In Rust we have cargo vet, where we share these audits and use them in an automated fashion. Companies like Google and Mozilla contribute their audits.
There is another related growing problem in my recent observation. As a Debian Developer, when I try to audit upstream changes before pulling them in to Debian, I find a huge amount of noise from tooling, mostly pointless. This makes it very difficult to validate the actual changes being made.
For example, an upstream bumps a version of a lint tool and/or changes style across the board. Often these are labelled "chore". While I agree it's nice to have consistent style, in some projects it seems to be the majority of the changes between releases. Due to the difficulty in auditing this, I consider this part of the software supply chain problem and something to be discouraged. Unless there's actually a reason to change code (e.g. some genuine refactoring a human thinks is actually needed, a bug fix or new feature, a tool exposed a real bug, or at least some identifiable issue that might turn into a bug), it should be left alone.
I'd like to think there are ways to do this and keep things decentralized.
Things like: Once a package has more than [threshold] daily downloads for an extended period of time, it requires 2FA re-auth/step-up on two separate human-controlled accounts to approve any further code updates.
Or something like: for these popular packages, only a select list of automated build systems with reproducible builds can push directly to NPM, which would mean that any malware injector would need to first compromise the source code repository. Which, to be fair, wouldn't necessarily have stopped this worm from propagating entirely, but would have slowed its progress considerably.
This isn't a "sacrifice all of NPM's DX and decentralization" question. This is "a marginally more manual DX only when you're at a scale where you should be release-managing anyways."
> two separate human-controlled accounts to approve any further code updates.
Except most projects have 1 developer… Plus, if I develop some project for free, I don't want to waste time working for free for large rich companies. They can pay up for code reviews and similar things instead of adding burden to developers!
I think that we should impose WebAuthn 2FA on all npm accounts as the only acceptable auth method if you have, e.g., more than 1 million total downloads.
Someone could pony up the cash to send out a few thousand yubikeys for this and we'd all be a lot safer.
It's already centralized by virtue of using and relying on NPM as the registry.
If we want decentralized package management for node/javascript, we need to dump NPM - why not something like Go's system, which is actually decentralized? There is no package repository/registry; it's all location-based imports.
PyPI did that; I got 2 Google keys for free. But I used them literally once, to create a token that never expires, and that is what I actually use to upload to PyPI.
(I did a talk about this at MiniDebConf in Toulouse last year.)
If implemented like this, it's completely useless, since there is actually no 2FA at all.
Anyway, the idea of making libre software developers work more is a bad idea. We do it for fun. If we have to do corporate stuff, we want a corporate salary to go with it.
You can use Debian's version of your npm packages if you'd like. The issues you're likely to run into are: some libraries won't be packaged by Debian at all; those that are might be on unacceptably old versions. You can work around these issues by vendoring dependencies that aren't in your distro's repo, i.e. copying a particular version into your own source control and manually keeping up with security updates. This is, to my knowledge, what large tech companies do. Other companies that don't are either taking a known risk with regards to vulnerabilities, or are ignorant. Ignorance is very common in this industry.
So, who is going to audit the thousands of new packages/versions that are published to npm every day? It only works for Debian because they hand-pick popular software.
Maybe NPM should hand pick popular packages and we should get away from this idea of every platform should always let everyone publish. Curation is expensive, but it may be worthwhile for mature platforms.
> Keep a stable distro where new packages aren't added and versions change rarely (security updates and bugfixes only, no new functionality). This is what most people use.
Unfortunately most people don't want old software that doesn't support newer hardware so most people don't end up using Debian stable.
It'd be interesting to see how much of the world runs on Debian containers, where most of the whole "it doesn't support my insert consumer hardware here" argument is completely moot.
If you ask these people, distributions are terrible and need to die.
Python even removed PGP signatures from PyPI, because now attestation happens by Microsoft signing your build on the GitHub CI and uploading it directly to PyPI with a never-expiring token. And that's secure, as opposed to the developer uploading locally from their machine.
In theory it's secure because you see what's going in there on git, but in practice GitHub Actions are completely insecure, so malware has been uploaded this way already.
To split hairs, that's actually not true: Go's package manager is just version control, of which GitHub is currently the most popular host. It also allows redirecting to your own version control via `go mod edit -replace`, which leaves the source code reference to GitHub intact but installs it from wherever you like.
Golang at least gives you the option to easily vendor-ize packages to your local repository. Given what has happened here, maybe we should start doing this more!
I believe good centralized infrastructure for this would be a good start. It could be "gamified" and reviewers could earn reputation for reviewing packages, common packages would be reviewed all the time.
Kinda like Stackoverflow for reviews, with optional identification and such.
And honestly an LLM can strap a "probably good" badge on things with cheap batch inference.
I'm coming to the unfortunate realization that supply chain attacks like this are simply baked into the modern JavaScript ecosystem. Vendoring can mitigate your immediate exposure, but does not solve this problem.
These attacks may just be the final push I needed to take server rendering (without js) more seriously. The HTMX folks convinced me that I can get REALLY far without any JavaScript, and my apps will probably be faster and less janky anyway.
Traditional JS is actually among the safest environments ever created. Every day, billions of devices run untrusted JS code, and no other platform has seen sandboxed execution at such scale. And in nearly three decades, there have been very few incidents of large successful attacks on browser engines. That makes the JS engine derived from browsers the perfect tool to build a server side framework out of.
However, processes and practices around NodeJS and npm are in dire need of a security overhaul. leftpad is a cultural problem that needs to be addressed. To start with, snippets don't need to be on npm.
JavaScript doesn't have a standard library; until it does, the 170 million[1] weekly downloads of packages like UUID will continue. You can't expect people to re-write everything over and over.
That's not the problem. There is a cultural (and partly technical) aversion in JavaScript to large libraries - this is where the issue comes from. So, instead of having something like org.apache.commons in Java or Boost in C++ or Posix in C, larger libraries that curate a bunch of utilities missing from the standard library, you get an uncountable number of small standalone libraries.
I would bet that you'll find a third party `leftpad` implementation in org.apache.commons or in Spring or in some other collection of utils in Java. The difference isn't the need for 3rd party software to fix gaps in the standard library - it's the preference for hundreds of small dependencies instead of one or two larger ones.
1000% agree. JavaScript is weak in this regard if you compare it to major programming languages. Not having built-in support for common things like making API calls or parsing JSON just adds unnecessary security risks.
You have the DOM and Node APIs, which I think cover more than the C standard library or the Common Lisp library. Adding direct dependencies is something every project does. The issue is the sprawling dependency tree of NPM and JS culture.
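For what it's worth, a fair amount of the "missing standard library" is there now in modern runtimes. A minimal sketch, assuming Node 18+ (or a current browser, where `crypto.randomUUID()` is global); the registry URL is just an example:

```ts
// A few things that no longer need an npm package in modern runtimes.
import { randomUUID } from "node:crypto"; // in browsers: crypto.randomUUID()

// UUIDs without the `uuid` package.
const id = randomUUID();

// HTTP without axios/request: fetch() is global in Node 18+ and browsers,
// and JSON parsing has been built into the language since ES5.
const res = await fetch("https://registry.npmjs.org/left-pad");
const meta = await res.json();

console.log(id, meta["dist-tags"]?.latest);
```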
> You can't expect people to re-write everything over and over.
That's the excuse everyone gives, and then you see thousands of terminal libraries and calendar pickers.
When I was learning JS/node/npm as a total programming newbie, a lot of the advice online was basically “if you write your own version of foobar when foobar is already available as an npm package, you’re stupid for wasting your time”.
I’d never worked in any other ecosystem, and I wish I realized that advice was specific to JS culture
It's not really bad advice, it just has different implications in Javascript.
In other languages, you'd have a few dependencies on larger libraries providing related functionality, where the Javascript culture is to use a bunch of tiny libraries to give the same functionality.
Sometimes I wonder how many of these tiny libraries are just the result of an attempt to have something ready for a conference talk and no one had the courage to say "Uh, Chris, that already exists, and the world doesn't need your different approach on it."
> You can't expect people to re-write everything over and over.
Call me crazy but I think agentic coding tools may soon make it practical for people to not be bogged down by the tedium of implementing the same basic crap over and over again, without having to resort to third party dependencies.
I have a little pavucontrol replacement I'm walking Claude Code through. It wanted to use pulsectl but, to see what it could do, I told it no. Write your own bindings to libpulse instead. A few minutes later it had that working. It can definitely write crap like leftpad.
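For a sense of scale, here's roughly what such a package boils down to - a hedged sketch, not the real left-pad source (and modern JS has String.prototype.padStart built in anyway):

```ts
// Approximately what left-pad does, minus the real package's edge cases.
function leftPad(input: string | number, targetLength: number, padChar = " "): string {
  let out = String(input);
  const pad = padChar.length > 0 ? padChar : " ";
  while (out.length < targetLength) {
    out = pad + out;
  }
  return out;
}

console.log(leftPad(42, 5, "0"));   // "00042"
console.log("42".padStart(5, "0")); // same result, no code needed at all
```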
I think the smallest C library I've seen was a single file to include in your project if you want terminal control like curses on Windows. A lot of libraries on npm (and cargo) should be a gist or a blog post.
None of those security guarantees matter when you take out the sandbox, which is exactly what server-side JS does.
The isolated context is gone and a single instance of code talking to an individual client has access to your entire database. It’s a completely different threat model.
You can't sandbox the code that is supposed to talk to your DB from your DB.
And even on client side, the sandboxing helps isolate any malicious webpage, even ones that are accidentally malicious, from other webpages and from the rest of your machine.
If malicious actors could get gmail.com to run their malicious JS on the client side through this type of supply-chain attack, they could very very easily steal all of your emails. The browser sandbox doesn't offer any protection from 1st party javascript.
> Traditional JS is actually among the safest environments ever created.
> However, processes and practices around NodeJS and npm are in dire need of a security overhaul. leftpad is a cultural problem that needs to be addressed. To start with, snippets don't need to be on npm.
Traditional JS is the reason we have all of these problems around NodeJS and npm. It's a lot better than it was, but a lot of JS tooling came up in the time when ES5 and older were the standard, and to call those versions of the language lacking is... charitable. There were tons of things that you simply couldn't count on the language or its standard library to do right, so a culture of hacks and bandaids grew up around it. Browser disparities didn't help either.
Then people said, "Well, why don't we all share these hacks and bandaids so that we don't have to constantly reinvent the wheel?", and that's sort of how npm got its start. And of course, it was the freewheeling days of the late 00s/early 10s, when you were supposed to "move fast and break things" as a developer, so you didn't have time to really check if any of this was secure or made any sense. The business side wanted the feature and they wanted it now.
The ultimate solution would be to stop slapping bandaids and hacks on the JS ecosystem by making a better language but no one's got the resolve to do that.
Interestingly, AI should be able to help a lot with the desire to just load such snippets.
What I'm wondering is whether it would help the ecosystem if you could load raw snippets directly into your codebase and source control, as opposed to having them as dependencies.
So e.g. the shadcn component-pasting approach.
For things like leftPad, CLI colors and others, you would just load raw TypeScript code from a source, where you would immediately notice something malicious, either right away or during code review.
You would reserve actual npm packages for actual frameworks / larger packages where this doesn't make sense, and expect higher scrutiny and multi-person approval of releases there.
> I'm coming to the unfortunate realization that supply chain attacks like this are simply baked into the modern JavaScript ecosystem.
I see this odd take a lot - the automatic narrowing of the scope of an attack to the single ecosystem it occurred in most recently, without any real technical argument for doing so.
What's especially concerning is that I see this take in the security industry: mitigations put in place to target e.g. NPM, but then completely absent for PyPI or Crates. It's bizarre not only because it leaves those ecosystems wide open, but also because the mitigation measures would be very similar (so it would be a minimal amount of additional effort for a large benefit).
I agree other repos deserve a good look for potential mitigations as well (PyPI too, has a history of publishing malicious packages).
But don't brush off the "special status" of NPM here. It is unique in that, with JS being the language of both front-end and back-end, it is much easier for the crooks to sneak in malware that will end up running in a visitor's browser and affect them directly. And that makes it a uniquely more attractive target.
npm in itself isn't special at all, maybe the userbase is but that's irrelevant because the mitigation is pretty easy and 99.9999% effective, works for every package manager and boils down to:
1- thoroughly and fully analyze any dependency tree you plan to include
2- immediately freeze all its versions
3- never update without very good reason or without repeating 1 and 2
In other words: simply be professional, and face the logical consequences if you aren't. If you think one package manager is "safer" than others for magic reasons, odds are you'll find out the hard way sooner or later.
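As a concrete illustration of step 2, here's a minimal sketch of a check you could run in CI to flag loose ranges in package.json. The file name and the exact-version regex are my own assumptions, not any standard tool:

```ts
// check-pinned.ts - flag dependency ranges that aren't pinned to an exact
// version (e.g. "^1.2.3", "~4.5.0", "*", "latest").
import { readFileSync } from "node:fs";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const sections = ["dependencies", "devDependencies", "optionalDependencies"];
const exact = /^\d+\.\d+\.\d+(-[\w.]+)?$/; // "1.2.3" or "1.2.3-rc.1"

let loose = 0;
for (const section of sections) {
  for (const [name, range] of Object.entries<string>(pkg[section] ?? {})) {
    if (!exact.test(range)) {
      console.warn(`${section}: ${name} uses loose range "${range}"`);
      loose++;
    }
  }
}
process.exit(loose > 0 ? 1 : 0);
```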
As an outsider looking in, as I don't deal with NPM on a daily basis, the 30k dependencies going 50 branches deep seems to be the real problem here. Code reuse is an admirable goal, but this seems absurd. I have no idea if these numbers are correct or exaggerations, but from my limited time working with NPM a year or two ago it seems like it's a definite problem.
I'm in the C ecosystem mostly. Is one NPM package the equivalent of one object file? Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones? I guess it's a problem either way, internal dependencies having bugs vs supply chain attacks like these. Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases than necessary?
> Is one NPM package the equivalent of one object file?
No. The closest thing to a package (in almost every language) is an entire library.
> Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones?
Yes, they can. They just don't do it.
> Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases than necessary?
There aren't many unnecessary dependencies, because the number of direct dependencies of each package is reasonable (on the order of 10). And you don't get a lot of unnecessary code, because the point of tiny libraries is to only import what you need.
Dead code is not the problem, instead the JS mentality evolved that way to minimize dead code. The problem is that dead code is actually not that much of an issue, but dependency management is.
Most people have addressed the package registry side of NPM.
But NPM has a much, much bigger problem on the client side, that makes many of these mitigations almost moot. And that is that `npm install` will upgrade every single package you depend on to its latest version that matches your declared dependency, and in JS land almost everyone uses lax dependency declarations.
So, an attacker who simply publishes a new patch version of a package they have gained access to will likely poison a good chunk of all of the users of that package in a relatively short amount of time. Even if the projects using this are careful and use `npm ci` instead of `npm install` for their CI builds, it will still easily get developers to download and run the malicious new version.
Most other ecosystems don't have this unsafe-by-default behavior, so deploying a new malicious version of a previously safe package is not such a major risk as it is in NPM.
> in JS land almost everyone uses lax dependency declarations
They do, BUT.
Dependency versioning schemes are much more strictly adhered to within JS land than in other ecosystems. PyPI is a mishmash of PEP 440, SemVer, some packages incorrectly using one in the format of the other, & none of the 3 necessarily adhering to the standard they've chosen. Other ecosystems are even worse.
Also - some ecosystems (PyPI again) are committing far worse offences than lax versioning - versionless dependency declaration. Heavy reliance on requirements.txt without lockfiles, where half the time the version isn't even specified at all. Astral/Poetry are improving the situation here but things are still bad.
Maven land is full of plugins with automated pom.xml version templating that has effectively the same effect as lax versioning, but without any strict adherence to any kind of standard like semver.
Yes, the situation in JS land isn't great, but there are much worse offenders out there.
The point is still different. In PyPI, if I put `requests` in my requirements.txt, and I run `pip install -r requirements.txt` every time I do `make build`, I will still only get one version of requests - the latest available the first time I installed it. This severely reduces the attack radius compared to NPM's default, where I would get the latest (patch) version of my dependency every day. And the ecosystem being committed to respecting semver is entirely irrelevant to supply chain security. Malicious actors don't care about semver.
Overall, publishing a new malicious version of a package is a much lesser problem in virtually any ecosystem other than NPM; in NPM, it's almost an automatic remote code execution vulnerability for every NPM dev, and a persistent threat for many NPM packages even without this.
> This severely reduces the attack radius compared to NPM's default, where I would get the latest (patch) version of my dependency every day.
By default npm will create a lock file and give you the exact same version every time unless you manually initiate an upgrade. Additionally you could even remove the package-lock.json and do a new npm install and it still wouldn't upgrade the package if it already exists in your node_modules directory.
Only time this would be true is if you manually bump the version to something that is incompatible, or remove both the package-lock.json and your node_modules folder.
> Maven land is full of plugins with automated pom.xml version templating that has effectively the same effect as lax versioning, but without any strict adherence to any kind of standard like semver.
Please elaborate on this. I'm a long-time Java developer and have never once seen something akin to what you're describing here. Maven has support for version ranges but in practice it's very rarely used. I can expect a project to build with the exact same dependencies resolved today and in six months or a year from now.
`npm install` uses a lockfile by default and will not change versions. No, not transitives either. You would have to either manually change `package.json` or call `npm update`.
You'd have to go out of your way to make your project as bad as you're describing.
No, this is just wrong. It might indeed use package-lock.json if it matches your node_modules (so that running `npm install` multiple times won't download new versions). But if you're cloning a repo off of GitHub and running npm install for the first time (which a CI setup might do), it will take the latest deps from package.json and update the package-lock.json - at least this is what I've found many responses online claim. The docs for `npm ci` also suggest that it behaves differently from `npm install` in this exact respect:
> In short, the main differences between using npm install and npm ci are:
> The project must have an existing package-lock.json or npm-shrinkwrap.json.
> If dependencies in the package lock do not match those in package.json, npm ci will exit with an error, instead of updating the package lock.
Well but the docs you cited don't match what you stated. You can delete node_modules and reinstall, it will never update the package-lock.json, you will always end up with the exact same versions as before. The package-lock updating happens when you change version numbers in the package.json file, but that is very much expected! So no, running npm install will not pull in new versions randomly.
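For anyone who wants to see what their lockfile actually pins, a rough sketch that reads the v2/v3 package-lock.json layout (the "packages" map, with an exact version and SRI integrity hash per entry):

```ts
// lockfile-report.ts - list the exact versions and integrity hashes that
// package-lock.json (lockfile v2/v3) pins for every installed package.
import { readFileSync } from "node:fs";

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

for (const [path, entry] of Object.entries<any>(lock.packages ?? {})) {
  if (path === "") continue; // "" is the root project itself
  const name = path.replace(/^.*node_modules\//, "");
  console.log(`${name}@${entry.version}  ${entry.integrity ?? "(no integrity)"}`);
}
// `npm ci` installs exactly these versions and checks the integrity hashes,
// which is why it's the safer choice for CI builds.
```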
Could you say more about what mitigations you’re thinking of?
I ask because think the directionality is backwards here: I’ve been involved in packaging ecosystem security for the last few years, and I’m generally of the opinion that PyPI has been ahead of the curve on implementing mitigations. Specifically, I think widespread trusted publishing adoption would have made this attack less effective since there would be fewer credentials to steal, but npm only implemented trusted publishing recently[1]. Crates also implemented exactly this kind of self-scoping, self-expiring credential exchange ahead of npm.
(This isn't to malign any ecosystem; I think people are also overcorrecting in treating this like a uniquely JavaScript-shaped problem.)
> Go modules go even further and add automatic checksum verification per default
Cargo lockfiles contain checksums and Cargo has used these for automatic verification since time immemorial, well before Go implemented their current packaging system. In addition, Go doesn't enforce the use of go.sum files, it's just an optional recommendation: https://go.dev/wiki/Modules#should-i-commit-my-gosum-file-as... I'm not aware of any mechanism which would place Go's packaging system at the forefront of mitigation implementations as suggested here.
To clarify (a lot of sibling commenters misinterpreted this too so probably my fault - can't edit my comment now):
I'm not referring to mitigations in public repositories (which you're right, are varied, but that's a separate topic). I'm purely referring to internal mitigations in companies leveraging open-source dependencies in their software products.
These come in many forms, everything from developer education initiatives to hiring commercial SCA vendors, & many other things in between like custom CI automations. Ultimately, while many of these measures are done broadly for all ecosystems when targeting general dependency vulnerabilities (CVEs from accidental bugs), all of the supply-chain-attack motivated initiatives I've seen companies engage in are single-ecosystem. Which seems wasteful.
I mostly agree. But NPM is special, in that the exposure is so much higher. The hypothetical python+htmx web app might have 10s of dependencies (including transitive) whereas your typical Javascript/React will have 1000s. All an attacker needs to do is find one of many packages like TinyColor or Leftpad or whatever and now loads of projects are compromised.
> NPM is special, in that the exposure is so much higher.
NPM is special in the same way as Windows is special when it comes to malware: it's a more lucrative target.
However, the issue here is that - unlike Windows - targeting NPM alone does not incur significantly less overhead than targeting software registries more broadly. The trade-off between focusing purely on NPM & covering a lot of popular languages isn't high, & imo isn't a worthwhile trade-off.
Stuff like Babel, React, Svelte, Axios, Redux, Jest… should be self contained and not depend on anything other than being a peer dependency. They are core technological choices that happen early in the project and are hard or impossible to replace afterwards.
- I feel that you are unlikely to need Babel in 2025, most things it historically transpiled are Baseline Widely Available now (and most of the things it polyfilled weren't actually Babel's but brought in from other dependencies like core-js, which you probably don't need either in 2025). For the rest of the things it still transpiles (pretty much just JSX) there are cheaper/faster transpilers with fewer external dependencies and runtime dependencies (Typescript, esbuild). It should not be hard to replace Babel in your stack: if you've got a complex webpack solution (say from CRA reasons) consider esbuild or similar.
- Axios and Jest have "native" options now (fetch and node --test). fetch is especially nice because it is the same API in the browser and in Node (and Deno and Bun). (A small sketch of both follows after this list.)
- Redux is self-contained.
- React itself is sort of self-contained, it's the massive ecosystem that makes React the most appealing that starts to drive dependency bloat. I can't speak to Svelte.
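To make the fetch / node --test point concrete, a small sketch - the registry URL and package name are just examples, and a TypeScript file needs compiling (or a newer Node that strips types) before `node --test` runs it:

```ts
// fetch + node:test instead of axios + Jest, for the simple cases.
import { test } from "node:test";
import assert from "node:assert/strict";

async function getLatestVersion(pkg: string): Promise<string> {
  const res = await fetch(`https://registry.npmjs.org/${pkg}/latest`);
  if (!res.ok) throw new Error(`registry returned ${res.status}`);
  const body = (await res.json()) as { version: string };
  return body.version;
}

test("fetches a version string from the npm registry", async () => {
  const version = await getLatestVersion("left-pad");
  assert.match(version, /^\d+\.\d+\.\d+/);
});
```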
Not saying this in defence of Rust or Cargo, but oftentimes those dependencies are just different versions of the same thing. In a project at one of my previous companies, a colleague noticed we had LOADS of `regex` crate versions. I forget the number, but it was well over 100.
That seems like a failure in workspace management. The most duplicates I've seen was 3, with crates like url or uuid, even in projects with 1000+ distinct deps.
Supply chain attacks happen at every layer where there is package management or a vector onto the machine or into the code.
What NPM should do if they really give a shit is start requiring 2FA to publish. Require a scan prior to publishing. Sign the package with hard keys and a signature. Verify that all installed packages match their signatures. Semver matching isn't enough. CRC checks aren't enough. This has to be baked into packages and package management.
While technically true, I have yet to see Go projects importing thousands of dependencies. They may certainly exist, but are absolutely not the rule. JS projects, however...
We have to realize that, while supply chain attacks can happen everywhere, the best mitigations are development culture and a solid standard library - looking at you, Cargo.
I am a JS developer by trade and I think that this ecosystem is doomed. I absolutely avoid even installing node on my private machine.
I think you are reading that wrong: go.sum isn't a list of dependencies, it's a list of checksums for modules that were, at some point, used by this module. All those different versions of the same module listed there aren't all dependencies; at most one of them is.
Assuming 'go mod tidy' is run periodically, go.mod should contain all dependencies (which in this case seems to be just shy of 300 - still a lot).
How will multi-factor authentication prevent such a supply chain issue?
That is, if some attacker creates some dummy, trivial but convenient package and 2 years later half the package hub somehow depends on it, the attacker will just use their legit credentials to pwn everyone and their dog. This is not even about stealing credentials. It's a cultural issue of blind trust: a blank check without even an expiry date.
That's an entirely different issue compared to what we're seeing here. If an attacker rug-pulls of course there is nothing that can be done about that other than security scanning. Arguably some kind of package security scanning is a core-service that a lot of organisations would not think twice about paying npm for.
> If an attacker rug-pulls of course there is nothing that can be done about that other than security scanning.
As another subthread mentioned (https://news.ycombinator.com/item?id=45261303), there is something which can be done: auditing of new packages or versions, by a third party, before they're used. Even doing a simple diff between the previous version and the current version before running anything within the package would already help.
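A rough sketch of what "diff before you trust" could look like, using the public npm registry metadata; the package name and the manual diff step at the end are illustrative:

```ts
// diff-versions.ts - find the tarballs for the two most recently published
// versions of a package, so they can be downloaded and diffed before use.
const pkg = process.argv[2] ?? "left-pad";

const meta = (await (await fetch(`https://registry.npmjs.org/${pkg}`)).json()) as {
  versions: Record<string, { dist: { tarball: string } }>;
  time: Record<string, string>;
};

// Sort version numbers by publish time and take the last two.
const ordered = Object.keys(meta.versions)
  .sort((a, b) => Date.parse(meta.time[a]) - Date.parse(meta.time[b]));
const [previous, current] = ordered.slice(-2);

console.log(`previous: ${previous} -> ${meta.versions[previous].dist.tarball}`);
console.log(`current:  ${current} -> ${meta.versions[current].dist.tarball}`);
// Next step (manual or scripted): download both tarballs, extract them,
// and `diff -r` the contents before letting the new version near a build.
```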
That's really the core issue. Developer-signed packages (npm's current attack model is "Eve doing a man-in-the-middle attack between npm and you," which is not exactly the most common threat here) and a transparent key registry should be minimal kit for any package manager, even though all, or at least practically all, the ecosystems are bereft of that. Hardening API surfaces with additional MFA isn't enough; you have to divorce "API authentication" from "cryptographic authentication" so that compromising one doesn't affect the other.
> What NPM should do if they really give a shit is start requiring 2FA to publish.
How does 2FA prevent malware? Anyone can get a phone number to receive a text or add an authenticator to their phone.
I would argue a subscription model for 1 EUR/month would be better. The money received could pay for certification of packages, and the credit card on file can leverage the security of the payments system.
NPM does not require two-factor authentication. If two-factor authentication is enabled for your account and you wish to disable it, this explains how to do that if allowed by your organization:
It doesn't require 2FA in general, but it does for people with publish rights for popular packages, which covers most or all of the recent security incidents.
If NPM really cared, they'd stop recommending people use their poorly designed version control system that relies on late-fetching third-party components required by the build step, and they'd advise people to pick a reliable and robust VCS like Git for tracking/storing/retrieving source code objects and stick to that. This will never happen.
NPM has also been sending out nag emails for the last 2+ years about 2FA. If anything, that constituted an assist in the attack on the Junon account that we saw a couple weeks ago.
The solution is not to go back to vanilla JS, it's for people to form a foundation and build a more complete utilities library for JS that doesn't have 1000 different dependencies, and can be trusted. Something like Boost for C++, or Apache Commons for Java.
Python and Rust both have decent standard libraries, but it is just a matter of time before this happens in those ecosystems. There is nothing unique about this specific attack that could only happen in JavaScript.
I don't know Go, but Rust absolutely has the same problem, yes. So does Python. NPM is being discussed here, because it is the topic of the article, but the issue is the ease with which you can pull in unvetted dependencies.
Languages without package managers have a lot more friction to pull in dependencies. You usually rely on the operating system and its package-manager-humans to provide your dependencies; or on primitive OSes like Windows or macOS, you package the dependencies with your application, which involves integrating them into your build and distribution systems. Both of those involve a lot of manual, human effort, which reduces the total number of dependencies (attack points), and makes supply-chain issues like this more likely to be noticed.
The language package managers make it trivial to pull in dozens or hundreds of dependencies, straight from some random source code repository. Your dependencies can add their own dependencies, without you ever knowing. When you have dozens or hundreds of unvetted dependencies, it becomes trivial for an attacker to inject code they control into just one of those dependencies, and then it's game over for every project that includes that one dependency anywhere in their chain.
It's not impossible to do that in the OS-provided or self-managed dependency scenario, but it's much more difficult and will have a much narrower impact.
It depends on `image` which in turn depends on a number of crates to handle different file types. If you disable all `image` features, it only has like 5 dependencies left.
The C standard library is smaller than Node.js's (you won't have HTTP). What C has is much more respectable libraries. If you add libcurl or freetype to your project, they won't pull the whole jungle in with them.
What C doesn't have is an agreed-upon standard package manager. Which means that any dependency - including transitive ones! - requires some effort on behalf of the developer to add to the build. And that, in turn, puts pressure on library authors to avoid dependencies other than a few well-established libraries (like libpng or GLib).
This makes little sense. Any popular language with a lax package management culture will have the exact same issue, this has nothing to do with JS itself.
I'm actually doing JS quasi exclusively these days, but with a completely different tool chain, and feel totally unconcerned by any of these bi-weekly NPM scandals.
Javascript is badly over-used and over-depended on. So many websites just display text and images, but have extremely heavy javascript libraries because that's what people know and that is part of the default, and because it enables all the tracking that powers the modern web. There's no benefit to the user, and we'd be better off without these sites existing if there were really no other choice but to use javascript.
NPM does seem vastly over represented in these type of compromises, but I don't necessarily think that e.g. pypi is much better in terms of security. So you could very well be correct that NPM is just a nicer, perhaps bigger, target.
If you can sneak malware into a JavaScript application that runs in millions of browsers, that's a lot more useful than getting some number of servers to run a module as part of a script, whose environment is a bit unknown.
Javascript really could do with a standard library.
Is the difference between the number of dev dependencies for eg. VueJs (a JavaScript library for marshalling Json Ajax responses into UI) and Htmx (a JavaScript library for marshalling html Ajax responses into UI) meaningful?
There is a difference, but it's not an order of magnitude and neither is a true island.
Granted, deciding not to use JS on the server is reasonable in the context of this article, but for the client htmx is as much a js lib with (dev) dependencies as any other.
While npm is a huge and easy target, the general problem exists for all package repositories. Hopefully a supply chain attack mitigation strategy can be better than hoping attackers target package repositories you aren't using.
While there's a culture prevalent in Javascript development to ignore the costs of piling abstractions on top of abstractions, you don't have to buy into it. Probably the easiest thing to do is count transitive dependencies.
The blast radius is made far worse by npm having the concept of "postinstall" which allows any package the ability to run a command on the host system after it was installed.
This works for deps of deps as well, so anything in your node_modules has access to this hook.
It's a terrible idea and something that ought to be removed or replaced by something much safer.
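To make the blast radius concrete, a harmless sketch of what any dependency's postinstall script gets to run with. The file name is made up, and a real attack would exfiltrate tokens and secrets rather than print directory names:

```ts
// what-postinstall-sees.ts - illustrative only. A package declares
//   "scripts": { "postinstall": "node what-postinstall-sees.js" }
// and npm runs it with the full privileges of whoever typed `npm install`.
import { readdirSync } from "node:fs";
import os from "node:os";

console.log("running as:", os.userInfo().username);
console.log("home dir sample:", readdirSync(os.homedir()).slice(0, 5));
console.log("env var names:", Object.keys(process.env).slice(0, 5));
// A malicious version of this could just as easily read ~/.npmrc tokens,
// SSH keys, or CI secrets from the environment.
```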
I agree in principle, but child_process is a thing so I don't think it makes much difference. You are pwned either way if the package can ever execute code.
Why is this inevitable? If you use only easily verifiable packages, you've lost nothing. The whole concept of npm automatically executing postinstall scripts was fixed for me when pnpm started asking every time a new package wanted to do that.
You all really need to stop using this term when it comes to OSS. Supply chain implies a relationship, none of these companies or developers have a relationship with the creators other than including their packages.
Call it something like "free code attacks" or "hobbyist code attacks."
Even if we didn't have post install scripts wouldn't the malware just run as soon as you imported the module into your code during the build process, server startup, testing, etc?
I can't think of an instance where I ran npm install and didn't run some process shortly after that imported the packages.
Many people have non-JS backends and only use npm for frontend dependencies. If a postinstall script runs in a dev or build environment it could get access to a lot of things that wouldn't be available when the package is imported in a browser or other production environment.
When the left-pad debacle happened, one commenter here said of a well known npm maintainer something to the effect of that he's an "author of 600 npm packages, and 1200 lines of JavaScript".
Not much has changed since then. The best counter-example I know is esbuild, which is a fully featured bundler/minifier/etc that has zero external dependencies except for the Go stdlib + one package maintained by the Go project itself:
Other "next generation" projects are trading one problematic ecosystem for another. When you study dependency chains of e.g. biomejs and swc, it looks pretty good:
Replacing the tire fire of eslint (and its hundreds to low thousands of dependencies) with zero of them! Very encouraging, until you find the Rust source:
Part of the reason of my switch to using Go as my primary language is that there's this trend of purego implementations which usually aim towards zero dependencies besides the stdlib and golang.org/x.
These kinds of projects are usually pretty great because they aim to work with CGO_ENABLED=0, so the libs are very portable and work with different syscall backends.
Additionally I really like to go mod vendor my snapshot of dependencies which is great for short term fixes, but it won't fix the cause in the long run.
However, the Go ecosystem is just as vulnerable here because of the lack of signing of package updates. As long as no end-to-end verification of "who signed this package" is possible, there's no way this will get better.
Additionally, most supply chain attacks in the past focused on CI/CD infrastructure, because it is just as broken, with just as many problems. There needs to be a better CI/CD workflow where signing keys don't have to be available on the runners themselves; otherwise this will just shift the attack surface to a different location.
In my opinion the package managers are somewhat to blame here, too. They should encourage and mandate GPG signatures, especially on git commits when they rely on git tags for distribution.
> there's this trend of purego implementations which usually aim towards zero dependencies besides the stdlib and golang.org/x.
I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
IMO, it might be due to the fact that Go mod came rather late in the game, while NPM was introduced near the beginning of NodeJS. But it might be more related to Go's target audience being more low-level, where such tools are less ubiquitous?
> I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
I've also seen something similar with Java, with its culture of "pure Java" code which reimplements everything in Java instead of calling into preexisting native libraries. What's common between Java and Go is that they don't play well with native code; they really want to have full control of the process, which is made harder by code running outside their runtime environment.
Go sits at about the same level of abstraction as Python or Java, just with less OO baked in. I'm not sure where go's reputation as "low-level" comes from. I'd be curious to hear why that's the category you think of it in?
> I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
I think it's because the final deliverable of Go projects is usually a single self-contained binary executable with no dependencies, whereas with Node the final deliverable is usually an NPM package which pulls its dependencies automatically.
With Node the final deliverable is an app that comes packaged with all its dependencies, and often bundled into a single .js file, which is conceptually the same as a single binary produced by Go.
Can you give an example? While theoretically possible I almost never see that in Node projects. It's not even very practical because even if you do cram everything into a single .js file you still need an external dependency on the Node runtime.
> usually an NPM package which pulls its dependencies automatically
Built applications do not pull dependencies at runtime, just like with golang. If you want to use a library/source, you pull in all the deps, again just like golang.
Not at runtime no, but at install time yes. In contrast, with Go programs I often see "install time" being just `curl $url > /usr/local/bin/my_application` which is basically never the case with Node (for obvious reasons).
There are plenty of people in the community who would help reduce the number of dependencies, but it really requires the maintainers to make it a priority. Otherwise the only way to address it is to switch to another solution like oxlint.
I tried upgrading ESLint recently and it took me forever to fix all the dependency issues. I wish I had never used ESLint with Prettier, as now my codebase styling is locked into an ESLint config :/
Deno has a similar formatter to prettier and similar linter to eslint (with Typescript plugins) out-of-the-box. (Some parts of those written in Rust.) I have been finding myself moving to Deno more and more. I also haven't noticed too many reformatting problems with migrating from prettier to Deno. (If there are major changes, you can also add the commit to a .git-ignore-revisions file.)
Have you looked into biome? We recently switched at work. It’s fine and fast. If you overly rely on 3rd party plugins it might be hard but it covered our use case fine for a network based react app.
Even minor styling rule changes would result in a huge PR across our frontend so I tend to avoid any change in tooling. But using old tools is not the end of the world. I only upgrade ESLint because I had to upgrade something else.
The answer is to not draw in dependencies for things you are easily able to write yourself. That would probably reduce dependencies by 2/3 or so in many projects. Especially, left-pad things. If you write properly self contained small parts and a few tests, you probably don't have to touch them much, and the maintenance burden is not that high. Compare that with having to check every little dependency like left pad and all its code and its dependencies. If a dependency is not strictly necessary, then don't do it.
It's crazy to me that npm still executes postinstall scripts by default for all dependencies. Other package managers (Pnpm, Bun) do not run them for dependencies unless they are added to a specific allow-list. Composer never runs lifecycle scripts for dependencies.
This matters because dependencies are often installed in a build or development environment with access to things that are not available when the package is actually imported in a browser or other production environment.
What has been the community reaction? Has allowing scripts been scalable for users? Or could it be described as people blindly copying and pasting allow commands?
I am involved in Python packaging discussions and there is a pre-proposal (not at PEP stage yet) at the moment for "wheel variants" that involves a plugin architecture, a contentious point is whether to download and run the plugins by default. I'd like to find parallels in other language communities to learn from.
In my experience, packages which legitimately require a postinstall script to work correctly are very rare. For the apps I maintain, esbuild is the only dependency which benefits from a postinstall script to slightly improve performance (though it still works without the script). So there's no scaling issue adding one or two packages to a whitelist if desired.
Is there any way to install CLI tools from npmjs without being affected by a recent compromise?
Rust has `cargo install --locked`, which will use the pinned versions of dependencies from the lockfile, and these lockfiles are published for bin packages to crates.io.
But it seems npmjs doesn't allow publishing lockfiles, neither for libraries nor for CLI tools, so if you try to install let's say @google/gemini-cli, it will just pull the latest dependencies that fit the constraints in package.json. Is that true? Is it really this bad? If you try to install a CLI tool on a bad day when half of npmjs is compromised, you're out of luck?
Published lock files wouldn't work if they locked transitive dependencies for consumers; the version solver would have no work left to do, and you'd end up with many, many versions of the same package rather than a few versions that satisfy all of the version range constraints.
Lots of good ideas have come up since last week, the one I like most being that published packages, especially those with high download counts, don't actually go live until a while after publishing, allowing security scanners to do their thing.
In the Rust ecosystem, you only publish lock files for binary crates. So yeah then you get churn like https://github.com/cargo-bins/cargo-binstall/releases/tag/v1... bumping transitive deps, but this churn/noise doesn't exist for library crates - because the lock file isn't published for them.
npm will use your lockfile if it’s present, otherwise yeah it’s pretty much whatever is tagged and latest at the time (and the version doesn’t even have to change). If npm respected every upstream lockfile, then it could never share a single version that satisfied all dependencies.
The bigger issue here is that npm has such unrestricted and unsupervised access to the entire environment at all.
> If npm respected every upstream lockfile, then it could never share a single version that satisfied all dependencies.
I'm asking in the context of installing a single CLI tool into ~/bin or something. There's no requirement to satisfy all dependencies, because the only dependency I care about is that one CLI tool. All I want is an equivalent of what `cargo install --locked` does — use the top-level lockfile of the CLI tool itself.
That sounds pretty reasonable: npm should allow bundling the lockfile with things that are marked with the type of "project", and whether it actually uses them depending on whether other locked constraints are overriding it. So instead of one lockfile, a prioritized list of them. The UX of dealing with that list could be a sticky wicket though, and npm isn't known for making this stuff easy to begin with.
I think these kinds of attacks would be strongly reduced if JS had a strong standard library.
If it was provided, it would significantly trim dependency trees of all the small utility libraries.
Perhaps we need a common community effort to create a “distro” of curated and safe dependencies one can install safely, by analyzing the most popular packages and checking what’s common and small enough to be worth being included/forked.
We didn't really dodge a bullet. We put a bullet named 'node' in the cylinder of a revolver, spun it, pointed the gun at our head, and pulled the trigger. We just happened to be lucky enough that we got an empty chamber.
Software supply chain attacks are well known and they are a massive hole in the entirety of software infrastructure. As usual with security, no one really cares that much.
I knew npm was a train wreck when I first used it years ago and it pulled in literally hundreds of dependencies for a simple app. I avoid anything that uses it like the plague.
Lots of language ecosystems have this problem, but it is especially prominent in JS, and it lies on a spectrum. For comparison, in the C/C++ ecosystem it is common for libraries to advertise that they have zero dependencies and are header-only, or to depend on one common major library like Boost.
The JavaScript ecosystem has a major case of import-everything disease that acts as a catalyst for supply chain attacks. left-pad as one example of many.
Just more engineering-leaning than you. Actual engineers have to analyze their supply chains, so it makes sense they would be baffled by the NPM dependency trees that utterly normal projects grow into in the JavaScript ecosystem.
Good thing that at scale, private package repositories are used, or development is done in-house. Personally, I would argue that an engineer unable to tell perfect apart from good isn't a very good engineer in my book, but some engineers are unable to make compromises.
Do you think companies using node don't analyze supply chains? That's nonsense. Have you cargo installed a rust app recently? This isn't just a js issue. This needs to be solved across the industry and npm frankly has done a horrible job at it. We let people with billions of downloads a month with recently changed password/2fa publish packages? Why don't we pool assets as a collective to scan newly published packages before they're allowed to be installed? These types of things really should exist across all package registries (and my really hot take is that we probably don't need a registry for every language, either!).
It is solved across the industry for those who care. If you use cargo, npm, or a python package manager, you may have a service that handles static versioning of dependencies for security purposes. If you don't, you aren't generally working in a language that encourages so much package use.
> Do you think companies using node don't analyze supply chains?
I _know_ many don’t. In fact suggesting doing it is a good way to be looked at like a crazy person and be told something like “this is a yes place not a no place.”
"I knew you weren't a great engineer the moment you started pulling dependencies for a simple app"
You realize my point, right? People are taught not to reinvent the wheel at work (mostly for good reasons), so that's what they do, you and me included.
You ain't gonna be bothered to write HTML and manual DOM manipulation; the people that give you libraries to do so won't be bothered reimplementing parsers and file watchers; file watcher writers won't be bothered reimplementing file system utils; file system utils developers won't be bothered reimplementing structured cloning or event loops; etc, etc.
I myself just the other day had the task of converting HTML to markdown (I don't remember whether it was the Jira or GitHub API that returns comments as HTML), and despite it being mostly a few hours of work that would get us 90% there, everybody was in favor of pulling in a dependency to do so (with its own dependencies), thus further exposing our application to those risks.
A version that gets me 90% there would take a few hours; one that gets me 99% there, a few months - which is why eventually people would rather pull in a dependency.
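For the curious, the "90% in a few hours" version really is small - a hedged sketch covering only the handful of tags comment HTML tends to use (real inputs will hit edge cases this ignores, which is the remaining 10%):

```ts
// A tiny HTML-to-markdown converter for a limited tag set; not robust,
// just enough to show the shape of the "write it yourself" option.
function htmlToMarkdown(html: string): string {
  return html
    .replace(/<br\s*\/?>/gi, "\n")
    .replace(/<\/?p[^>]*>/gi, "\n")
    .replace(/<(strong|b)>(.*?)<\/\1>/gis, "**$2**")
    .replace(/<(em|i)>(.*?)<\/\1>/gis, "*$2*")
    .replace(/<code>(.*?)<\/code>/gis, "`$1`")
    .replace(/<a\s+[^>]*href="([^"]*)"[^>]*>(.*?)<\/a>/gis, "[$2]($1)")
    .replace(/<li[^>]*>(.*?)<\/li>/gis, "- $1\n")
    .replace(/<\/?(ul|ol)[^>]*>/gi, "\n")
    .replace(/<[^>]+>/g, "") // drop any remaining tags
    .replace(/&amp;/g, "&")
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/\n{3,}/g, "\n\n")
    .trim();
}

console.log(htmlToMarkdown('<p>See <a href="https://example.com">the <b>docs</b></a></p>'));
// -> "See [the **docs**](https://example.com)"
```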
LLMs are pretty good at greenfield projects and especially if they are tasked with writing something with a lot of examples in the training data. This approach can be used to solve the problem of supply-chain attacks with the downside being that the code might not be as well written and feature complete as a third-party package.
Not for the parser, only for the demo server! And I guess the dev dependencies as well, but with a much smaller surface area. But yeah, I don't think a TypeScript compiler is within the scope of an LLM.
I try to avoid JS, as it is a horrible language by design. That does include TS, though TS at least is usable, if barely - because it is still tied to JS itself.
Off-topic, but I love how different programmers think about things, and how nothing really is "correct" or "incorrect". Started thinking about it because for me it's the opposite, JS is an OK and at least usable language, as long as you avoid TS and all that comes with it.
Still, even I who'd call myself a JavaScript developer also try to avoid desktop applications made with just JS :)
JS's issue is that it allows you to run objectively wrong code without throwing an explicit error to the user; it just fails silently or does something magical. Seems innocent, until you realize what we use JS for, other than silly websites or ERP dashboards.
It is full of gotchas that serve no purpose nowadays.
Also remember that it is basically a Lisp wearing Java skin on top, originally designed in less than 2 weeks.
TypeScript is one of the few things that puts up a safety barrier and sane static error checking and makes JS bearable to use - but it still has to fall back to how JS works in the end, so it suffers from the same core architectural problems.
> JS's issue is that it allows you to run objectively wrong code without throwing an explicit error to the user; it just fails silently or does something magical. Seems innocent, until you realize what we use JS for, other than silly websites or ERP dashboards.
What some people see as a fault, others see as a feature :) For me, that's there to prevent an entire website from breaking because some small widget in the bottom right corner breaks, for example. Rather than stopping the entire runtime, it just surfaces the error in the developer tools and lets the rest continue working.
Then of course entire web apps crash because of one tiny error somewhere (remember seeing a blank page with just some short error text in black in the middle? Those), but that doesn't mean it's the best way of doing things.
> Also remember that it is basically a Lisp wearing Java skin on top
I guess that's why I like it better than TS, that tries to move it away from that. I mainly do Clojure development day-to-day, and static types hardly ever gives me more "safety" than other approaches do. But again, what I do isn't more "correct" than what anyone else does, it's largely based on "It's better for me to program this way".
> it's there to prevent an entire website from breaking because some small widget in the bottom right corner breaks, for example.
The issue is that it prevents that, but it also lets you send completely corrupt data forward, which can create a horrible cascade of errors down the pipeline - because other components made assumptions about the correctness of the data passed to them.
Such display errors should be caught early in development, should be tested, and should never reach prod instead of being swept under the rug - for anything other than a prototype.
But I agree - going fully functional with dynamic types beats the average JS experience any day.
It is just piling more mud onto a giant mudball.
> JS is an OK and at least usable language, as long as you avoid TS and all that comes with it.
Care to explain why?
My view is this: since you can write plain JS inside TS (just misconfigure tsconfig badly enough), I honestly don’t see how you arrive at that conclusion.
I can just about understand preferring JS on the grounds that it runs without a compile step. But I’ve never seen a convincing explanation of why the language itself is supposedly better.
I was hyped for wasm because I thought it was supposed to solve this problem, allowing any programming language to be compiled to run in browsers.
But apparently they only made it do like 95% of what JS does, so you can't actually replace JS with it. To me it seems like a huge blunder. I don't give a crap about making niche applications a bit faster, but freeing the web from the curse of JS would be absolutely huge. And they basically did it, except not quite. It's so strange to me - why not just go the extra 5%?
Maybe it's something about sharing memory with the JS side that would introduce serious vulnerabilities, so they can't let wasm code have access to everything.
The only way to remove JS is to create a new browser that doesn't use it. That fragments the web, yes, and probably nobody will use it.
That 5% of JS glue code necessary right now is just monumentally difficult to get rid of; it's like a binary serialization/interface layer (an ABI) for all the DOM/BOM APIs, and these APIs are huge, dynamic, callback-heavy and object-oriented. It's much easier to have that glue compiler-generated, which you can already do right now (you can write your entire web app in Rust if you want):
This is also being worked on, in the future this 5% glue might eventually entirely disappear:
> Designed with the "Web IDL bindings" proposal in mind. Eventually, there won't be any JavaScript shims between Rust-generated wasm functions and native DOM methods
Depends on the use case; I don't think one language can fit all cases. 100% correctness is required for systems, but it is a hindrance in non-critical ones, and robust type systems require long compilation times which hurt iterating on the codebase.
Systems? Rust - but it is still far from perfect; too much focus on saving a few keystrokes here and there.
General-purpose corporate development? C# - despite the current direction post .NET 5 of stapling together legacy parts of .NET Framework and .NET Core, it does most things well enough.
Scripting, and just scripting? Python.
Web? There's only one, bad, option and that's JS/TS.
The most hated ones are, in order: JS, Go, C++, Python.
I mean, it's hard to avoid indirectly using things that use npm, e.g. websites or whatever. But it's pretty easy to never have to run npm on your local machine, yes.
> How many tokens do you have lying around in your home directory in plain text, able to be read by anything on your computer running as your user?
Zero? How many developers have plain-text tokens lying around on disk? Avoiding that has been hammered into me by every developer more senior than me since I got involved in professional software development.
You're sure you don't have something lying around in ~/.config ? Until recently the github cli would just save its refresh token as a plain text file. AWS CLI loves to have secrets sitting around in a file https://docs.aws.amazon.com/cli/latest/userguide/cli-configu...
I don't use AWS and looking in ~/.config/gh I see two config files, no plain-text secrets.
With that said, it's not impossible some tool leaks their secrets into ~/.local, ~/.cache or ~/.config I suppose.
I thought they were referencing the common approach of adding environment variables with plaintext secrets to your shell config or as an individual file in $HOME, which has been a big no-no for as long as I can remember.
I guess I'd reword it to "I'm not manually putting any cleartext secrets on disk" or something instead, if we wanted it to be 100% accurate.
I'd argue the reverse is true. On your local system, which only needs to operate when a named user with a (hopefully) strong password is present, you can encrypt the secrets with the user's login password, and the OS can verify that it's handing the secret out to the correct binary before doing so. The binary can also take steps to verify that it is being called directly from a user interaction and not from a build script of some random package.
The extent to which any of this is actually implemented varies wildly between different OSes, ecosystems and tools. On macOS, docker desktop does quite well here. There's also an app called Secretive which does even better for SSH keys - generating a non-exportable key in the CPU's secure enclave. It can even optionally prompt for login password or fingerprint before allowing the key to be used. It's practically almost as secure as using a separate hardware token for SSH but significantly more convenient.
In contrast, most of the time the only thing protecting the keys in your CI vault from being exfiltrated is that the malware needs to know the specific name / API call / whatever to read them. Plenty of CI systems you don't even need that, because the build script that uses the secrets will read them into environment variables before starting the build proper.
It's not that hard if it's something you decide you care about and want to solve. Like diggan mentions, there are many tools, some of which you might already use, that can inject secrets into applications without being too onerous in your development workflow.
I don't think so? I don't even know what a "CI vault automation" is, I store my credentials and secrets in 1Password, and use the CLI to get the secrets for the moments they're needed, I do all my development locally and things seem fine.
One option is pass, which is a shell script that uses GPG to manage passwords for command line tools. You can put the password store into a git repository if you need to sync it across machines.
The store in the case of pass, is a plain text file, whose contents are encrypted strings. If you trust the encryption, you can put it anywhere you like. Keep the keys secret and safe, though!
Using a password manager for fetching them when needed. 1Password in my case, but I'm sure any password manager can be used for storing secrets for most programming projects.
I was thinking about one more case, if you are using 1password as a cli tool. Let's say you "op run -- npm dev". If there's a malicious node modules script, it would of course be able to get the env variables you intended to inject, but would it also be able to continue running more op commands to get all your other secrets too if you have started a session?
Edit:
Testing 1Password myself, with 1password desktop and shell, if I have authed myself once in shell, then "spawn" would be able to get all of my credentials from 1Password.
So I'm not actually sure how much better than plaintext is that. Unless you use service accounts there.
Which programming languages/frameworks do you use? Do you use 1Password to load secrets to env where you run whatever thing you are working on? Or does the app load them during boot?
A bunch, ranging from JS to Clojure and everything in-between, depends on the project.
The approach also depends on the project. There are a bunch of different approaches and I don't think there is one that would work for every project; sometimes it requires some wrangling, but that takes 5-10 minutes tops.
How long have you been using that method? I didn't feel it's been very popular so far, although it makes a lot of sense. I've always seen people using gitignored .env files/config dirs in projects with many hardcoded credentials.
We've seen many reports of supply chain attacks affecting NPM. Are these symptoms of operational complexity, which can affect any such service, or is there something fundamentally wrong with NPM?
Adding dependencies comes with advantages and downsides. You need to strike a balance between them. External libraries can help implement things that you better don't implement yourself, so the answer is certainly not "no dependencies". But there are downsides and risks, and the risks grow with the number of dependencies.
In the world of NPM, people think those simple truths don't apply to them and the downsides and risks of dependencies can be ignored. Then you end up with thousands of transitive dependencies.
You can't put this all on the users. The JS/node/npm projects have been mismanaged since the start.
node should have shipped "batteries included" after the left-pad incident. There was a boneheaded attachment to small stdlib, which you could put down to youthful innocence, except that it's been almost 10 years.
The TC39 committee which controls the design of JS stdlib and the node maintainers basically both act like the other one doesn't exist.
NPM was never designed with security in mind. It's a dirty hack that somehow became the most popular package manager.
The dependency hell is a reflection of the massive egos of the people involved in the multiple organizations. Python doesn't have this problem because it's all centralized under one org with a single vision.
Apparently Maven has 61.9M indexed packages. As Java has a decent standard lib, mini libs like leftpad are not contributing to this count. NPM has 3.1M packages. Many are trivially simple. Those stats would suggest that NPM has disproportionately more issues than other services.
I would argue that is only one of the many issues with the JS/TS/NPM ecosystem. Many of the other problems have been normalized. The constant security issues are highly visible.
Where did you see that number? Maven central says it has about 18 million [1] packages. Maybe with all versions of those 18 million packages there are about 62 million artifacts?
While the Java ecosystem is vastly larger, in Java (with Maven, Gradle, Bazel, etc.) it is not common to use really small libraries. So you end up with vastly less transitive dependencies in your projects.
This. You would expect some of the mature packages to be quite diligent about dependencies, but they are the ones pulling in random stuff for a minor feature. Then the transitive dependencies add like GBs of files to your project.
There is a guy (ljharb) who is literally on TC39 - JavaScript specification committee - who is maintaining like 600 packages full of polyfills/dependencies/utilities.
There was a huge uproar about that guy specifically and deep dependency graphs in general a year ago. A lot has already changed for lots of the popular frameworks and libraries. Dependency graphs are already much slimmer. The cultural change is happening, but we can't expect it to happen all at once.
That wouldn't be a problem if there was proper package signing and the polyfill packages were hosted under a package namespace owned by the javascript specification committee.
Irrelevant here. You use eslint-plugin-import with its 60 dependencies; One dependency or 60 is irrelevant because you only need one token: his. They're all his packages.
The problem with that guy is that the dependencies are useless to everyone except his ego.
Just spit-balling here, but it seems that the problem is with the pushing to NPM, and distribution from NPM, rather than the concept of NPM. If NPM required some form of cryptographically secure author signing, and didn't distribute un-signed packages, then there is at least a chain of responsibility that can be followed.
It's both that and a culture of installing a myriad of constantly-updating, tiny libraries to do basic utility functions. (Not even libraries, they're more like individual pages in individual books).
In our line-of-business .NET app, we have a logger, a database, a unit tester, and a driver for some specialty hardware. We upgrade to the latest version of each external dependency about once per year (every major version) to avoid accruing tech debt. They're all pinned and locally hosted, nuget exists but we (like most .Net developers) don't use it to the extent that npm devs do. We read the changelogs - all four of them! - and manually update.
I understand that the NPM ecosystem works differently from a "batteries included" .Net environment for a desktop app, but it's not just about where the users are. Line of business code in .Net and Java apps process a lot of important data. Slipping a malicious package into pypi could expose all kinds of juicy, proprietary data, but again, it's less about the existence of a package manager and more about when and how you use it.
> Slipping a malicious package into pypi could expose all kinds of juicy, proprietary data
> In July 2024, Bittensor users were the victims of an $8 million hack. The Bittensor hack was an example of a supply chain hack using PyPI. PyPI is a site that hosts packages for the Python programming language
We don't see these attacks nearly as severe or frequent on Maven, which is a much older package management solution. Maven users would be far more attractive targets given corporates extensively run Java.
Number of packages doesn’t mean much. If you can get your code into just one Javascript package you could have it run on billions of browsers. With Java it’s hard to get the same distribution (although the log4j vulnerability shows it’s not entirely impossible).
It is also, in my humble but informed opinion, where you will find the least security-conscious programs, just because of the breadth of its use and the myriad of deployments.
It's the new pragmatic choice for web apps and so everyone is using it, from battle-hardened teams to total noobs to people who just don't give a shit. It reminds me of WordPress from 10 years ago, when it was the go-to platform for cheap new websites.
So do you expect other supply chain services that also supply juicy targets to be affected? I mean, we live in a bubble here in HN, so not seeing something in the front page doesn't mean it doesn't exist or it doesn't happen, but the feeling is that NPM is particularly more vulnerable than other services, correct me if I'm wrong.
NPM isn’t perfect but no, it’s fundamentally self inflicted.
Community is very happy to pick up helper libraries and by the time you get all the way up the tree in a react framework you have hundreds or even thousands of packages.
If you’re sensible you can be fine just like any other ecosystem, but limited because one wrong package and you’ve just ballooned your dependency tree by hundreds which lowers the value of the ecosystem.
Node doesn’t have a standard library and until recently not even a test runner which certainly doesn’t help.
If you're sensible with Node or Deno* you'll be somewhat insulated from all this nonsense.
*Deno has linting, formatting, testing & a standard library which is a massive help (and a permission system so packages can't do whatever they want)
Anyone know if there is a public events feed/firehose for the npm ecosystem? Similar to the GitHub public events feed?
We, at ClickHouse, love big data and it would be super cool to download and analyse patterns in all this data & provide some tooling to help with combatting this widespread issue.
Languages/VMs should support capability-based permissions for libraries, no library should be able to open a file or do network requests without explicit granular permissions.
How many packages have been compromised over the past couple of weeks now? The velocity of these attacks is insane. Part of me believes state actors must be involved at this point.
In any case, does anyone have an exhaustive list of all recently compromised npm packages + versions across the recent attacks? We need to do an exhaustive scan after this news...
From what I've seen, it's either spam, telemetry, or downloading prebuilt binaries. The first two are anti-user and should not exist, the last one isn't really necessary — swc, esbuild, and typescript-go simply split native versions into separate packages, and install just what your system needs.
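For reference, the mechanism those projects use (as I understand it) is a main package that lists per-platform builds as `optionalDependencies`, while each platform package declares npm's `os`/`cpu` fields so only the matching one actually gets installed - no install script needed. A sketch with made-up package names:

```jsonc
{
  // hypothetical wrapper package; real examples are esbuild's @esbuild/* packages
  "name": "some-native-tool",
  "version": "1.0.0",
  "optionalDependencies": {
    "some-native-tool-linux-x64": "1.0.0",
    "some-native-tool-darwin-arm64": "1.0.0",
    "some-native-tool-win32-x64": "1.0.0"
  }
}
```

Each platform package then carries `"os": ["linux"]`, `"cpu": ["x64"]` (and so on) in its own manifest and ships the prebuilt binary as a plain file.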
Use pnpm and whitelist just what you need. It disables all scripts by default.
The malware could have been a JS code injected into the module entry point itself. As soon as you execute something that imports the package (which, you did install for a reason) the code can run.
I don't think that many people sandbox their development environments.
It absolutely matters. Many people install packages for front-end usage which would only be imported in the browser sandbox. Additionally, a package may be installed in a dev environment for inspection/testing before deciding whether to use it in production.
To me it's quite unexpected/scary that installing a package on my dev machine can execute arbitrary code before I ever have a chance to inspect the package to see whether I want to use it.
I've been using pnpm and it does not run lifecycle scripts by default. Asks for confirmation and creates a whitelist if you allow things. Might be the better default.
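If I'm remembering pnpm's mechanism right, that whitelist ends up as an `onlyBuiltDependencies` list under the `pnpm` field of package.json, something like:

```jsonc
{
  "pnpm": {
    // only these packages may run their install/postinstall scripts
    "onlyBuiltDependencies": ["esbuild", "better-sqlite3"]
  }
}
```

Everything else gets installed with its lifecycle scripts skipped, which is exactly the behavior you want against this class of worm.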
Most packages don't need it. There was a time when most post-install scripts flooded your terminal with annoying messages to upgrade, donate, or say hi.
Modern node package managers such as yarn and pnpm allow you to prevent post installs entirely.
Today, the main reason you need to make an exception for a package is when a module requires native compilation or downloads a pre-built binary. This has become rare though.
I haven't dug into the specifics, but technical props and nostalgia for the "self propagating" nature. Reminds me of the OG "Worm" - the https://en.wikipedia.org/wiki/Morris_worm
It's high time we took this seriously and required signing and 2FA on all publishes to NPM and NPM needs to start doing security scanning and tooling for this that they can charge organisations for.
As a developer, is there a way on mac to limit npm file access to the specific project?
So that if you install a compromised package it cannot access any data outside of your project directory?
Wrote a small utility shell script that uses docker behind the scenes to prevent access to your host machine while still allowing full npm install and run workflow.
This blog post and others are from 'security SaaS' vendors that also try to make money off how bad NPM package security is.
Why can't npm maintainers just implement something similar?
Maybe at least have a default setting (or an option) that packages newer than X days are never automatically installed unless forced? That would at least give time for people to review and notice if the package has been compromised.
Also, there really needs to be a standard library or at least a central community approved library of safe packages for all standard stuff.
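You can approximate that cooldown in userland today. A rough sketch (hypothetical script; assumes Node 18+ for the global `fetch` and relies on the public registry exposing per-version publish times under `time`):

```js
// check-age.mjs - warn about dependencies whose latest release is younger than N days.
// Hypothetical helper, not an official npm feature.
import { readFile } from 'node:fs/promises';

const MIN_AGE_DAYS = 7;

const pkg = JSON.parse(await readFile('package.json', 'utf8'));
const deps = { ...pkg.dependencies, ...pkg.devDependencies };

for (const name of Object.keys(deps)) {
  // note: scoped names (@scope/pkg) may need URL-encoding of the slash
  const meta = await (await fetch(`https://registry.npmjs.org/${name}`)).json();
  const latest = meta['dist-tags'].latest;
  const ageDays = (Date.now() - Date.parse(meta.time[latest])) / 86_400_000;
  if (ageDays < MIN_AGE_DAYS) {
    console.warn(`${name}@${latest} was published ${ageDays.toFixed(1)} days ago - hold off / review first`);
  }
}
```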
Object-capability model / capability-based security.
Do not let code have access to things it's not supposed to access.
It's actually that simple. If you implemented a function which formats a string, it should not have access to `readFile`, for example.
Retrofitting it into JS isn't possible, though, as the language is way too dynamic - self-modifying code, reflection, etc. mean there's no isolation between modules.
In a language which is less dynamic it might be as easy as making a whitelist for imports.
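You can at least emulate the idea by hand in JS with plain dependency injection: a module never imports `fs` itself, it only receives whatever narrow capability the caller hands it. A minimal sketch (all names hypothetical), keeping in mind that in JS this is a convention, not an enforced boundary - nothing stops a module from importing `fs` anyway:

```js
// composition root: the only place real powers are granted (ESM / .mjs)
import { mkdir, writeFile } from 'node:fs/promises';
import { join } from 'node:path';

// pure formatting - receives no capabilities at all
function makeReportFormatter() {
  return { format: rows => rows.map(r => `${r.name}: ${r.value}`).join('\n') };
}

// receives only a narrow "write a report file" capability, never fs itself
function makeReportWriter(writeReport) {
  return { save: (name, contents) => writeReport(name, contents) };
}

await mkdir('reports', { recursive: true });
const writer = makeReportWriter((name, data) => writeFile(join('reports', name), data));
const formatter = makeReportFormatter();

await writer.save('summary.txt', formatter.format([{ name: 'widgets', value: 3 }]));
```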
People have tried this, but in practice it's quite hard to do because then you have to start treating individual functions as security boundaries - if you can't readFile, just find a function which does it for you.
The situation gets better in monadic environments (can't readFile without the IO monad, and you can't call anything which would read it).
Well, to me it looks like people are unreasonably eager to use "pathologically dynamic" languages like JS & Python, and it's an impossible problem in a highly dynamic environment where you can just randomly traverse and change objects.
In programming languages which are "static" (or, basically, sane) you can identify all imports of a module/library and, basically, ban anything which isn't a "pure" part of the stdlib.
If your module needs to work with files, it will receive an object which lets it work with files.
A lot of programming languages implement the object-capability model: https://en.m.wikipedia.org/wiki/Object-capability_model It doesn't seem to be hard at all. It's just that programmers have a preference for shittier languages, just like they prefer C, which doesn't even have language-level array bounds checking (for lack of a "dynamic array" concept at the language level).
I think it's sort of orthogonal to "pure functional" / monadic: if you have unrestricted imports you can import some shit like unsafePerformIO, right? You have another level of control, of course (i.e. you just need to ban unsafePerformIO and look for unlicensed IO) but I don't feel like ocap requires Haskell
You can protect yourself using existing tools, but it's not trivial and requires serious custom work. Effectively you want minimal permissions and loud failures.
This is something I'm trying to polish for my system now, but the idea is: yarn (and bundler and others) needs to talk only to the repositories. That means yarn install is only allowed outbound connections to localhost running a proxy for packages. It can only write in tmp, its caches, and the current project's node_modules. It cannot read home files beyond specified ones (like .yarnrc). The alias to yarn strips the cloud credentials. All tokens used for installation are read-only. Then you have to do the same for the projects themselves.
On Linux, selinux can do this. On Mac, you have to fight a long battle with sandbox-exec, but it's kinda maybe working. (If it gained "allow exec with specified profile", it would be so much better)
But you may have guessed from the description so far - it's all very environment dependent, time sink-y, and often annoying. It will explode on issues though - try to touch ~/.aws/credentials for example and yarn will get killed and reported - which is exactly what we want.
But internally? The whole environment would have to be redone from scratch. Right now package installation will run any code it wants. It will compile extensions with gyp which is another way of custom code running. The whole system relies on arbitrary code execution and hopes it's secure. (It will never be) Capabilities are a fun idea, but would have to be seriously improved and scoped to work here.
Something similar to Deno's permission system, but operating at a package level instead of a process level.
When declaring dependencies, you'd also declare the permissions of those dependencies. So a package like `tinycolor` would never need network or disk access.
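Purely as an illustration of what that could look like - the `dependencyPermissions` field below is made up, nothing like it exists in npm or package.json today:

```jsonc
{
  "dependencies": {
    "tinycolor": "^1.0.0",
    "pg": "^8.0.0"
  },
  // hypothetical field - not a real npm/package.json feature
  "dependencyPermissions": {
    "tinycolor": [],                  // pure computation: no fs, no network
    "pg": ["net:connect:5432"]        // may open outbound connections to port 5432
  }
}
```

The installer/runtime would then have to actually enforce those grants per package, which is the genuinely hard part (see the sandboxing discussion elsewhere in this thread).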
Probably signatures could alleviate most of these issues, as each publish would require the author to actually sign the artifact; set up properly with hardware keys, this sort of malware couldn't spread. The NPM CI tokens that don't require 2FA kind of make it less useful though.
Clojars (run by volunteers AFAIK) has been doing signatures since forever; not sure why it's so difficult for Microsoft to follow their own yearly proclamation of "security is our top concern".
There are, but they have huge performance or usability penalties.
Stuff like intents "this is a math library, it is not allowed to access the network or filesystem".
At a higher level, you have app sandboxing, like on phones or Apple/Windows store. Sandboxed desktop apps are quite hated by developers - my app should be allowed to do whatever the fuck it wants.
Reminds me of when I went to a tech conference with a Windows laptop and counted exactly two like me among the hundreds of attendees. I was embarrassed then but I'd be laughing now :D
This isn't a JavaScript problem. What, structurally, stops the same thing happening to PyPI? Or the Rust ecosystem? Or Lisp via QuickLisp? Or CPAN?
This whole mess was foreseeable. So what's to be done?
Look. Any serious project needs to start vendoring its dependencies. People should establish big, coarse grained meta-distributions like C++ Boost that come from a trustable authority and that get updated infrequently enough that you can keep up with release notes.
> This isn't a JavaScript problem. What, structurally, stops the same thing happening to PyPI? Or the Rust ecosystem? Or Lisp via QuickLisp? Or CPAN?
For one, NPM has a really sprawling ecosystem where it's normal to have many dependencies.
I remember that I once tried to get started with angular, and I did an "init" for an empty project and "compile", and suddenly had half a gigabyte of code lying in my directory.
This means that there is a high number of dependencies that are potential targets for a supply chain attack.
I just took a look at our biggest JS/Typescript project at work, it comes in at > 1k (recursive) NPM dependencies. Our biggest Python project has 78 recursive dependencies. They are of comparable size in terms of lines of code and total development time.
Why? Differences in culture, as well as python coming with more "batteries included", so there's less need for small dependencies.
Common Lisp is not worth it - you are unlikely to hit any high-value production target, there are not many users and they are tech-savvy. Good for us, the 5 remaining users. Also, Quicklisp is not rolling-release; it is a snapshot done once or twice a year.
They were new versions of the packages instead of modified existing ones so vendoring has the same effect as the usual practice of pinning npm deps and using npm ci, I think.
This. But the problem seems to go way deeper than npm or whatever package manager is used. I mean, why is anyone consuming a package like colors or tinycolors? Do projects really need to drag in a random dependency to handle these usecases?
So rather than focusing on how Microsoft/npm et al can prevent similar situations in the future, you chose to think about what relevance/importance each individual package has?
There will always be packages that for some people are "but why?" but for others are "thank god I don't have to deal with that myself". Sure, colors and whatnot are tiny packages we probably could do without, but what are you really suggesting here? Someone sits and reviews every published package and rejects it if the package doesn't fit your ideal?
But the issue isn't just about the “thank god I don't have to deal with that myself” perspective. It's more about asking: do you actually need a dependency, or do you simply want it?
A lot of developers, especially newer ones, tend to blur that distinction. The result is an inflated dependency tree that unnecessarily increases the attack surface for malware.
The "ship fast at all costs" mindset that dominates many startups only makes this worse, since it encourages pulling in packages without much thought to long-term risk.
> So rather than focusing on how Microsoft/npm et al can prevent similar situations in the future, (...)
There's some ignorance in your comment. If you read up on debug & chalk supply chain attack, you'll end up discovering that the attacker gained control of the account through plain old phishing. Through a 2FA reset email, to boot.
What exactly do you expect the likes of Microsoft to do if users hand over their access to third parties? Do you want to fix issues or to pile onto the usual targets?
1. It's a solution meant for highly interactive app-like websites, not static-content driven websites like ecommerces. React in this context is just the wrong tool for the problem that will give you a huge array of performance, bugs and ux problems.
2. Extensive ecommerce experience including Disney, Carnival Cruises, Booking, TUI, and some of the European leaders in real estate and professional home building tools among the others.
> 1. It's a solution meant for highly interactive app-like websites, not static-content driven websites like ecommerces. React in this context is just the wrong tool for the problem that will give you a huge array of performance, bugs and ux problems.
Strongly disagree. React is not about interactivity, but reactivity. If you have to consume an API and update your app based on the responses, React does all the heavy lifting for you without requiring full page reloads.
On top of that, and as a nice perk, React also gives you all the tools you will ever need to optimize perceived performance.
Claiming that a tool designed for reactive programming is not suited for the happy flow of reactive programming is simply fundamentally wrong.
2. Ecommerces are not highly dynamic pages. They are overwhelmingly static content with an occasional configurator/cart/search. All things that can be embedded with whatever library you like (including React), or even better none at all.
3. SEO and performance are what really matter in ecommerce. The only minor exceptions are shops like Amazon or Airbnb, but that's unrelated to their SEO and performance.
4. I've been writing React and ecommerces using React and similar with millions of daily users for a decade :)
My comment yesterday, which received one downvote and which I will repeat if/until they’re gone: HTTP and JS have to go. There are ways to replace them.
One upvote is not enough. We need enough upvotes to fix the problem. You can’t shape a big pile of shit into success. HTTP and JS will never serve as a proper application framework.
A lot of blogs on this are AI generated, and since this is still developing, I'm just linking to a bunch of resources out there:
Aikido - https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-...
Socket - https://socket.dev/blog/ongoing-supply-chain-attack-targets-...
Ox - https://www.ox.security/blog/npm-2-0-hack-40-npm-packages-hi...
Safety - https://www.getsafety.com/blog-posts/shai-hulud-npm-attack
Phoenix - https://phoenix.security/npm-tinycolor-compromise/
Semgrep - https://semgrep.dev/blog/2025/security-advisory-npm-packages...
I wonder who actually discovered this attack? Can we credit them? The phrasing in these posts is interesting, with some taking direct credit and others just acknowledging the incident.
Aikido says: > We were alerted to a large-scale attack against npm...
Socket says: > Socket.dev found compromised various CrowdStrike npm packages...
Ox says: > Attackers slipped malicious code into new releases...
Safety says: > The Safety research team has identified an attack on the NPM ecosystem...
Phoenix says: > Another supply chain and NPM maintainer compromised...
Semgrep says: > We are aware of a number of compromised npm packages
Mackenzie here I work for Aikido. This is a classic example of the security community all playing a part. The very first notice of this was from a developer named Daniel Pereira. He alerted Socket who did the first review of the Malware and discovered 40 packages. After, Aikido discovered an additional 147 packages and the Crowdstrike packages. I'm not sure how Step found it but they were the first to really understand the malware and that it was a self replicating worm. So multiple parties all playing a part kinda independent. Its pretty cool
Several individual developers seem to have noticed it at around the same time with Step and Socket pointing to different people in their blogs.
And then vendors from Socket, Aikido, and Step all seem to have detected it via their upstream malware detection feeds - Socket and Aikido do AI code analysis, and Step does eBPF monitoring of build pipelines. I think this was widespread enough it was noticed by several people.
Since so many vendors discovered these packages seemingly independently, you'd think that they would share those mechanisms with NPM itself so that those packages would never be published in the first place. But I guess that removes their ability to sell an "early alert" mechanism through their offerings...
NPM is owned by github/microsoft. I'm sure they could afford to buy one of these products or just build their own, but clearly security is not a thing they care about.
Somehow I didn't realize GitHub purchased npm in 2020. GitHub is the second word on npmjs.org. How did I not notice?
Microsoft: GitHub, NPM, typescript, VS Code, OpenAI, Playwright
A lot of fingers in a lot of pies
Why should MS buy any of these startups when a developer (not any automated tech) found the malware? It looks like these startups did after-the-fact analysis for PR.
Can't help noticing, in the original article:
> The entire attack design assumes Linux or macOS execution environments, checking for os.platform() === 'linux' || 'darwin'. It deliberately skips Windows systems
If I were the conspiracy-minded sort I might jump to some wild conclusions here.
I'm using Windows again. By default Windows ships with PowerShell, which is not at all like bash and is (how do I say this diplomatically)… wanting.
I mean, it says something that they developed the Windows Subsystem for Linux, but it's an optional install.
OP article says: > The incident was discovered by @franky47, who promptly notified the community through a GitHub issue.
Points to this, which does look like the first mention.
https://github.com/scttcper/tinycolor/issues/256
Usually security companies monitor CVEs and the security mailing lists. That's how they all end up releasing the blog posts at the same time. It's because they are all using the same primary source.
Related (7 days ago):
NPM debug and chalk packages compromised (1366 points, 754 comments): https://news.ycombinator.com/item?id=45169657
Related in that this is another, separate, attack on npm.
No direct relation to the specific attack on debug/chalk/error-ex/etc that happened 7 days ago.
The article states that this is the same attackers that got control of the "nx" packages on August 27th, which didn't really get a lot of traction on HN when it happened: https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=fa...
Seems to be a separate incident?
Separate? Yes. Unrelated? Hard to tell.
It's unrelated in every observable technical way, but related in that it's a bit crazy how often this is happening to npm lately.
I'm glad it wasn't this particular attack that hit me last week.
I guess it's still spreading? Those blogs seem to list different packages.
This happens because there's no auditing of new packages or versions. The distro's maintainer and the developer is the same person.
The general solution is to do what Debian does.
Keep a stable distro where new packages aren't added and versions change rarely (security updates and bugfixes only, no new functionality). This is what most people use.
Keep a testing/unstable distro where new packages and new versions can be added, but even then added only by the distro maintainer, NOT by the package developers. This is where the audits happen.
NPM, Python, Rust, Go, Ruby all suffer from this problem, because they have centralized and open package repositories.
In Rust we have cargo vet, where we share these audits and use them in an automated fashion. Companies like Google and Mozilla contribute their audits.
It's too bad MS doesn't own npm, and/or GitHub repositories. Wait
And it's a great idea, similar thematically to certificate transparency
There is another related growing problem in my recent observation. As a Debian Developer, when I try to audit upstream changes before pulling them in to Debian, I find a huge amount of noise from tooling, mostly pointless. This makes it very difficult to validate the actual changes being made.
For example, an upstream bumps a version of a lint tool and/or changes style across the board. Often these are labelled "chore". While I agree it's nice to have consistent style, in some projects it seems to be the majority of the changes between releases. Due to the difficulty in auditing this, I consider this part of the software supply chain problem and something to be discouraged. Unless there's actually reason to change code (eg. some genuine refactoring a human thinks is actually needed, a bug fix or new feature, a tool exposed a real bug, or at least some identifiable issue that might turn into a bug), it should be left alone.
I'm using difftastic, it cuts down a whole lot of the noise
https://difftastic.wilfred.me.uk/
This looks good! Unfortunately it looks like it also suffers from exactly the same software supply chain problem that we need to avoid in the first place: https://github.com/Wilfred/difftastic/blob/master/Cargo.lock
I'd like to think there are ways to do this and keep things decentralized.
Things like: Once a package has more than [threshold] daily downloads for an extended period of time, it requires 2FA re-auth/step-up on two separate human-controlled accounts to approve any further code updates.
Or something like: for these popular packages, only a select list of automated build systems with reproducible builds can push directly to NPM, which would mean that any malware injector would need to first compromise the source code repository. Which, to be fair, wouldn't necessarily have stopped this worm from propagating entirely, but would have slowed its progress considerably.
This isn't a "sacrifice all of NPM's DX and decentralization" question. This is "a marginally more manual DX only when you're at a scale where you should be release-managing anyways."
> two separate human-controlled accounts to approve any further code updates.
Except most projects have 1 developer… Plus, if I develop some project for free I don't want to be wasting time and work for free for large rich companies. They can pay up for code reviews and similar things instead of adding burden to developers!
I think that we should impose webauthn 2fa on all npm accounts as the only acceptable auth method if you have e.g., more than 1 million total downloads.
Someone could pony up the cash to send out a few thousand yubikeys for this and we'd all be a lot safer.
Why even put a package download count on it? Just require it for everything submitted to NPM. It's not hard.
Because then it's extra hassle and expense for new developers to publish a package, and we're trying to keep things decentralized.
It's already centralized by virtue of using and relying on NPM as the registry.
If we want decentralized package management for node/javascript, you need to dump NPM - why not something like Go's system which is actually decentralized? There is no package repository/registry, it's all location based imports.
PyPI did that; I got 2 Google keys for free. But I used them literally once, to create a token that never expires, and that is what I actually use to upload to PyPI.
(I did a talk at MiniDebConf last year in Toulouse about this.)
If implemented like this, it's completely useless, since there is actually no 2FA at all.
Anyway, the idea of making libre software developers work more is a bad idea. We do it for fun. If we have to do corporate stuff we want a corporate salary to go with it.
PyPI already has this. It was a little bit annoying when they imposed stricter security on maintainers, but I can see the need.
You can use debian's version of your npm packages if you'd like. The issues you're likely to run into are: some libraries won't be packaged period by debian; those that are might be on unacceptably old versions. You can work around these issues by vendoring dependencies that aren't in your distro's repo, ie copying a particular version into your own source control, manually keeping up with security updates. This is, to my knowledge, what large tech companies do. Other companies that don't are either taking a known risk with regards to vulnerabilities, or are ignorant. Ignorance is very common in this industry.
So, who is going to audit the thousands of new packages/versions that are published to npm every day? It only works for Debian because they hand-pick popular software.
Maybe NPM should hand pick popular packages and we should get away from this idea of every platform should always let everyone publish. Curation is expensive, but it may be worthwhile for mature platforms.
> NPM, Python, Rust, Go, Ruby all suffer from this problem, because they have centralized and open package repositories
Can you point me to Go's centralized package repository?
https://github.com/
Exactly, in a way Debian (or any other distro) is an extended standard library.
> security updates and bugfixes only
Just wondering: while this is less of an attack surface, it's still a surface?
> Keep a stable distro where new packages aren't added and versions change rarely (security updates and bugfixes only, no new functionality). This is what most people use.
Unfortunately most people don't want old software that doesn't support newer hardware so most people don't end up using Debian stable.
I don't know why you went with hardware.
Most people don't want old software because they don't want old software.
They want latest features, fixes and performance improvements.
It'd be interesting to see how much of the world runs on Debian containers, where most of the whole "it doesn't support my insert consumer hardware here" argument is completely moot.
What hardware isn't supported by Debian stable that is supported by unstable?
Or is this just a "don't use Linux" gripe?
> The general solution is to do what Debian does.
If you ask these people, distributions are terrible and need to die.
Python even removed PGP signatures from PyPI, because now attestation happens by Microsoft signing your build on the GitHub CI and uploading it directly to PyPI with a never-expiring token. And that's secure, as opposed to the developer uploading locally from their machine.
In theory it's secure because you see what's going into it on git, but in practice GitHub Actions are completely insecure, so malware has already been uploaded this way.
Go’s package repository is just GitHub.
At the end of the day, it’s all a URL.
You’re asking for a blessed set of URLs. You’d have to convince someone to spend time maintaining that.
To split hairs, that's actually not true: Go's package manager is just version control, of which GitHub is currently the most popular host. It also allows redirecting to your own version control via `go mod edit -replace`, which leaves the source-code reference to GitHub intact but installs it from wherever you like.
How does that relate to the bigger conversation here? Are you suggesting people stop pulling Go packages from GitHub and only use local dependencies?
Golang at least gives you the option to easily vendor-ize packages to your local repository. Given what has happened here, maybe we should start doing this more!
The problem with your idea is that you need to find the person who wants to do all this auditing of every version of Node/Python/Ruby libraries.
I believe good centralized infrastructure for this would be a good start. It could be "gamified": reviewers could earn reputation for reviewing packages, and common packages would be reviewed all the time.
Kinda like Stackoverflow for reviews, with optional identification and such.
And honestly an LLM can strap a "probably good" badge on things with cheap batch inference.
> suffer from this problem
Benefit from this feature.
I'm coming to the unfortunate realization that supply chain attacks like this are simply baked into the modern JavaScript ecosystem. Vendoring can mitigate your immediate exposure, but does not solve this problem.
These attacks may just be the final push I needed to take server rendering (without js) more seriously. The HTMX folks convinced me that I can get REALLY far without any JavaScript, and my apps will probably be faster and less janky anyway.
Traditional JS is actually among the safest environments ever created. Every day, billions of devices run untrusted JS code, and no other platform has seen sandboxed execution at such scale. And in nearly three decades, there have been very few incidents of large successful attacks on browser engines. That makes the JS engine derived from browsers the perfect tool to build a server side framework out of.
However, processes and practices around NodeJS and npm are in dire need of a security overhaul. leftpad is a cultural problem that needs to be addressed. To start with, snippets don't need to be on npm.
Sandboxing doesn't do any good if the malicious code and target data are in the same sandbox, which is the whole point of these supply-chain attacks.
I mean, what does do good if your supply chain is attacked?
This said, fewer potential vendors supplying packages 'may' reduce exposure, but doesn't remove it.
Either way, not running the bleeding edge packages unless it's a known security fix seems like a good idea.
JavaScript doesn't have a standard library; until it does, the 170 million[1] weekly downloads of packages like UUID will continue. You can't expect people to re-write everything over and over.
[1]https://www.npmjs.com/package/uuid
That's not the problem. There is a cultural (and partly technical) aversion in JavaScript to large libraries - this is where the issue comes from. So, instead of having something like org.apache.commons in Java or Boost in C++ or Posix in C, larger libraries that curate a bunch of utilities missing from the standard library, you get an uncountable number of small standalone libraries.
I would bet that you'll find a third party `leftpad` implementation in org.apache.commons or in Spring or in some other collection of utils in Java. The difference isn't the need for 3rd party software to fix gaps in the standard library - it's the preference for hundreds of small dependencies instead of one or two larger ones.
1000% agree. JavaScript is weak in this regard if you compare it to major programming languages. It just adds unnecessary security risks, not having built-in support for common things like making outbound API calls or parsing JSON, for example.
FYI, there's crypto.randomUUID()
That's built in to server side and browser.
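For example (available in modern browsers in secure contexts and in recent Node versions via the global `crypto`):

```js
// v4 UUID with no third-party dependency
const id = crypto.randomUUID();
console.log(id); // e.g. "3b241101-e2bb-4255-8caf-4136c566a962"
```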
You have the DOM and Node APIs, which I think cover more than the C standard library or the Common Lisp library. Adding direct dependencies is done by every project. The issue is the sprawling deps tree of NPM and JS culture.
> You can't expect people to re-write everything over and over.
That's the excuse everyone gives, and then you see thousands of terminal libraries and calendar pickers.
When I was learning JS/node/npm as a total programming newbie, a lot of the advice online was basically “if you write your own version of foobar when foobar is already available as an npm package, you’re stupid for wasting your time”.
I’d never worked in any other ecosystem, and I wish I realized that advice was specific to JS culture
It's not really bad advice, it just has different implications in Javascript.
In other languages, you'd have a few dependencies on larger libraries providing related functionality, where the Javascript culture is to use a bunch of tiny libraries to give the same functionality.
Sometimes I wonder how many of these tiny libraries are just the result of an attempt to have something ready for a conference talk and no one had the courage to say "Uh, Chris, that already exists, and the world doesn't need your different approach on it."
> You can't expect people to re-write everything over and over.
Call me crazy but I think agentic coding tools may soon make it practical for people to not be bogged down by the tedium of implementing the same basic crap over and over again, without having to resort to third party dependencies.
I have a little pavucontrol replacement I'm walking Claude Code through. It wanted to use pulsectl but, to see what it could do, I told it no. Write your own bindings to libpulse instead. A few minutes later it had that working. It can definitely write crap like leftpad.
I think the smallest C library I've seen was a single file to include in your project if you want terminal control like curses on Windows. A lot of libraries on npm (and cargo) should be a gist or a blog post.
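To be fair, for the canonical example you don't even need an agent: what left-pad does has essentially been a built-in since ES2017's `String.prototype.padStart`:

```js
// what the left-pad package does, expressed with the built-in padStart
const leftPad = (str, len, ch = ' ') => String(str).padStart(len, ch);

console.log(leftPad(5, 3, '0')); // "005"
```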
None of those security guarantees matter when you take out the sandbox, which is exactly what server-side JS does.
The isolated context is gone and a single instance of code talking to an individual client has access to your entire database. It’s a completely different threat model.
So maybe the solution would be to sandbox Node.js?
I'm not quite sure what that would mean, but if it solves the problem for browsers, why not for server?
You can't sandbox the code that is supposed to talk to your DB from your DB.
And even on client side, the sandboxing helps isolate any malicious webpage, even ones that are accidentally malicious, from other webpages and from the rest of your machine.
If malicious actors could get gmail.com to run their malicious JS on the client side through this type of supply-chain attack, they could very very easily steal all of your emails. The browser sandbox doesn't offer any protection from 1st party javascript.
Deno does exactly that.
But in practice, to do useful things server-side you generally need quite a few permissions.
> Traditional JS is actually among the safest environments ever created.
> However, processes and practices around NodeJS and npm are in dire need of a security overhaul. leftpad is a cultural problem that needs to be addressed. To start with, snippets don't need to be on npm.
Traditional JS is the reason we have all of these problems around NodeJS and npm. It's a lot better than it was, but a lot of JS tooling came up in the time when ES5 and older were the standard, and to call those versions of the language lacking is... charitable. There were tons of things that you simply couldn't count on the language or its standard library to do right, so a culture of hacks and bandaids grew up around it. Browser disparities didn't help either.
Then people said, "Well, why don't we all share these hacks and bandaids so that we don't have to constantly reinvent the wheel?", and that's sort of how npm got its start. And of course, it was the freewheeling days of the late 00s/early 10s, when you were supposed to "move fast and break things" as a developer, so you didn't have time to really check if any of this was secure or made any sense. The business side wanted the feature and they wanted it now.
The ultimate solution would be to stop slapping bandaids and hacks on the JS ecosystem by making a better language but no one's got the resolve to do that.
Interestingly, AI should be able to help a lot with the desire to pull in those snippets.
What I'm wondering is whether it would help the ecosystem if you loaded raw snippets into your codebase and source control, as opposed to having them as dependencies.
So e.g. the shadcn component-pasting approach.
For things like leftPad, CLI colors and others you would just load raw TypeScript code from a source, and there you would immediately notice something malicious, either right away or during code reviews.
You would leave actual npm packages to only actual frameworks / larger packages where this doesn't make sense, and expect higher scrutiny and multi-approval of releases there.
> I'm coming to the unfortunate realizattion that supply chain attacks like this are simply baked into the modern JavaScript ecosystem.
I see this odd take a lot - the automatic narrowing of the scope of an attack to the single ecosystem it occurred in most recently, without any real technical argument for doing so.
What's especially concerning is I see this take in the security industry: mitigations put in place to target e.g. NPM, but are then completely absent for PyPi or Crates. It's bizarre not only because it leaves those ecosystems wide open, but also because the mitigation measures would be very similar (so it would be a minimal amount of additional effort for a large benefit).
I agree other repos deserve a good look for potential mitigations as well (PyPI, too, has a history of hosting malicious packages).
But don't brush off the "special status" of NPM here. It is unique in that, with JS being the language of both front-end and back-end, it is much easier for the crooks to sneak in malware that will end up running in visitors' browsers and affect them directly. And that makes it a uniquely more attractive target.
npm in itself isn't special at all; maybe the userbase is, but that's irrelevant because the mitigation is pretty easy and 99.9999% effective, works for every package manager, and boils down to:
1. Thoroughly and fully analyze any dependency tree you plan to include.
2. Immediately freeze all its versions.
3. Never update without a very good reason, and never without repeating 1 and 2.
In other words: simply be professional, and face the logical consequences if you aren't. If you think one package manager is "safer" than others for magic reasons, odds are you'll find out the hard way sooner or later.
Your item #1 there may be simple, but that's not the same as being easy.
Good luck with #1 in the JS ecosystem and its 30k dependencies, 50 branches deep, per package.
As an outsider looking in, as I don't deal with NPM on a daily basis, the 30k dependencies going 50 branches deep seems to be the real problem here. Code reuse is an admirable goal but this seems absurd. I have no idea if these numbers are correct or exaggerations, but from my limited time working with NPM a year or two ago it seems like a definite problem.
I'm in the C ecosystem mostly. Is one NPM package the equivalent of one object file? Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones? I guess it's a problem either way, internal dependencies having bugs vs supply chain attacks like these. Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases then necessary?
> Is one NPM package the equivalent of one object file?
No. The closest thing to a package (on almost every language) is an entire library.
> Can NPM packages call internal functions for their dependencies instead of relying so heavily on bringing in so many external ones?
Yes, they can. They just don't do it.
> Doesn't bringing in so many dependencies lead to a lot of dead code and much larger codebases then necessary?
There aren't many unnecessary dependencies, because the number of direct dependencies on each package is reasonable (on the order of 10). And you don't get a lot of unnecessary code because the point of tiny libraries is to only import what you need.
Dead code is not the problem, instead the JS mentality evolved that way to minimize dead code. The problem is that dead code is actually not that much of an issue, but dependency management is.
Most people have addressed the package registry side of NPM.
But NPM has a much, much bigger problem on the client side, that makes many of these mitigations almost moot. And that is that `npm install` will upgrade every single package you depend on to its latest version that matches your declared dependency, and in JS land almost everyone uses lax dependency declarations.
So, an attacker who simply publishes a new patch version of a package they have gained access to will likely poison a good chunk of all of the users of that package in a relatively short amount of time. Even if the projects using this are careful and use `npm ci` instead of `npm install` for their CI builds, it will still easily get developers to download and run the malicious new version.
Most other ecosystems don't have this unsafe-by-default behavior, so deploying a new malicious version of a previously safe package is not such a major risk as it is in NPM.
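A rough sketch of what "lax declarations" means in practice (the package name and versions are illustrative):

```sh
# package.json (excerpt): the default ^range accepts any later 4.x.y release
#   "dependencies": { "some-lib": "^4.2.0" }
npm install    # with no lockfile entry for some-lib, resolves the newest 4.x.y available
npm ci         # installs exactly what package-lock.json records, so CI stays reproducible
```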
> in JS land almost everyone uses lax dependency declarations
They do, BUT.
Dependency versioning schemes are much more strictly adhered to in JS land than in other ecosystems. PyPI is a mishmash of PEP 440 and SemVer, with some packages incorrectly using one in the format of the other, and none of them necessarily adhering to the standard they've chosen. Other ecosystems are even worse.
Also, some ecosystems (PyPI again) are committing far worse offences than lax versioning: versionless dependency declaration. There is heavy reliance on requirements.txt without lockfiles, where half the time the version isn't specified at all. Astral and Poetry are improving the situation here, but things are still bad.
Maven land is full of plugins with automated pom.xml version templating that has effectively the same effect as lax versioning, but without any strict adherence to any kind of standard like semver.
Yes, the situation in JS land isn't great, but there are much worse offenders out there.
The point is still different. In PyPI, if I put `requests` in my requirements.txt, and I run `pip install -r requirements.txt` every time I do `make build`, I will still only get one version of requests - the latest available the first time I installed it. This severely reduces the attack radius compared to NPM's default, where I would get the latest (patch) version of my dependency every day. And the ecosystem being committed to respecting semver is entirely irrelevant to supply chain security. Malicious actors don't care about semver.
Overall, publishing a new malicious version of a package is a much lesser problem in virtually any ecosystem other than NPM; in NPM, it's almost an automatic remote code execution vulnerability for every NPM dev, and a persistent threat for many NPM packages even without this.
> This severely reduces the attack radius compared to NPM's default, where I would get the latest (patch) version of my dependency every day.
By default npm will create a lock file and give you the exact same version every time unless you manually initiate an upgrade. Additionally you could even remove the package-lock.json and do a new npm install and it still wouldn't upgrade the package if it already exists in your node_modules directory.
The only time this would be true is if you manually bump the version to something incompatible, or remove both package-lock.json and your node_modules folder.
> Maven land is full of plugins with automated pom.xml version templating that has effectively the same effect as lax versioning, but without any strict adherence to any kind of standard like semver.
Please elaborate on this. I'm a long-time Java developer and have never once seen something akin to what you're describing here. Maven has support for version ranges but in practice it's very rarely used. I can expect a project to build with the exact same dependencies resolved today and in six months or a year from now.
`npm install` uses a lockfile by default and will not change versions. No, not transitives either. You would have to either manually change `package.json` or call `npm update`.
You'd have to go out of your way to make your project as bad as you're describing.
A lot of people use tools like Dependabot which automates updates to the lockfile.
No, this is just wrong. It might indeed use package-lock.json if it matches your node_modules (so that running `npm install` multiple times won't download new versions). But if you're cloning a repo off GitHub and running `npm install` for the first time (which a CI setup might do), it will take the latest deps from package.json and update package-lock.json - at least that is what many responses online claim. The docs for `npm ci` also suggest that it behaves differently from `npm install` in this exact respect:
> In short, the main differences between using npm install and npm ci are:
> The project must have an existing package-lock.json or npm-shrinkwrap.json.
> If dependencies in the package lock do not match those in package.json, npm ci will exit with an error, instead of updating the package lock.
Well, the docs you cited don't match what you stated. You can delete node_modules and reinstall; it will never update the package-lock.json, and you will always end up with the exact same versions as before. The package-lock updating happens when you change version numbers in package.json, but that is very much expected! So no, running `npm install` will not pull in new versions randomly.
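For anyone following along, the documented contract boils down to this (summarized from the npm docs quoted above; details vary by npm version):

```sh
npm ci        # requires an existing package-lock.json, errors if it disagrees with package.json,
              # and never modifies the lockfile
npm install   # resolves package.json ranges wherever the lockfile has no matching entry
              # (e.g. after you edit package.json) and may update package-lock.json accordingly
```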
Could you say more about what mitigations you’re thinking of?
I ask because I think the directionality is backwards here: I've been involved in packaging ecosystem security for the last few years, and I'm generally of the opinion that PyPI has been ahead of the curve on implementing mitigations. Specifically, I think widespread trusted publishing adoption would have made this attack less effective, since there would be fewer credentials to steal, but npm only implemented trusted publishing recently[1]. Crates also implemented exactly this kind of self-scoping, self-expiring credential exchange ahead of npm.
(This isn't to malign any ecosystem; I think people are also overcorrecting in treating this like a uniquely JavaScript-shaped problem.)
[1]: https://github.blog/changelog/2025-07-31-npm-trusted-publish...
> PyPI has been ahead of the curve on implementing mitigations
Indeed, crates.io implemented PyPI's trusted publishing and explicitly called out PyPI as their inspiration: https://blog.rust-lang.org/2025/07/11/crates-io-development-...
Which mitigations specifically are in npm but not in crates.io?
As far as I know crates.io has everything that npm has, plus
- strictly immutable versions[1]
- fully automated and no human in the loop perpetual yanking
- no deletions ever
- a public and append only index
Go modules go even further and add automatic checksum verification per default and a cryptographic transparency log.
Contrast this with docker hub for example, where not even npm's basic properties hold.
So, it is more like
docker hub ⊂ npm ⊂ crates.io ⊂ Go modules
[1] Nowadays npm has this arguably too
> Go modules go even further and add automatic checksum verification per default
Cargo lockfiles contain checksums and Cargo has used these for automatic verification since time immemorial, well before Go implemented their current packaging system. In addition, Go doesn't enforce the use of go.sum files, it's just an optional recommendation: https://go.dev/wiki/Modules#should-i-commit-my-gosum-file-as... I'm not aware of any mechanism which would place Go's packaging system at the forefront of mitigation implementations as suggested here.
To clarify (a lot of sibling commenters misinterpreted this too so probably my fault - can't edit my comment now):
I'm not referring to mitigations in public repositories (which you're right, are varied, but that's a separate topic). I'm purely referring to internal mitigations in companies leveraging open-source dependencies in their software products.
These come in many forms, everything from developer education initiatives to hiring commercial SCA vendors, & many other things in between like custom CI automations. Ultimately, while many of these measures are done broadly for all ecosystems when targeting general dependency vulnerabilities (CVEs from accidental bugs), all of the supply-chain-attack motivated initiatives I've seen companies engage in are single-ecosystem. Which seems wasteful.
I mostly agree. But NPM is special, in that the exposure is so much higher. The hypothetical python+htmx web app might have 10s of dependencies (including transitive) whereas your typical Javascript/React will have 1000s. All an attacker needs to do is find one of many packages like TinyColor or Leftpad or whatever and now loads of projects are compromised.
> NPM is special, in that the exposure is so much higher.
NPM is special in the same way as Windows is special when it comes to malware: it's a more lucrative target.
However, the issue here is that, unlike with Windows, targeting NPM alone does not incur significantly less overhead than targeting software registries more broadly. The savings from focusing purely on NPM instead of covering a lot of popular languages aren't high, and imo it isn't a worthwhile trade-off.
Stuff like Babel, React, Svelte, Axios, Redux, Jest… should be self-contained and not depend on anything beyond being a peer dependency. They are core technological choices that happen early in the project and are hard or impossible to replace afterwards.
- I feel that you are unlikely to need Babel in 2025, most things it historically transpiled are Baseline Widely Available now (and most of the things it polyfilled weren't actually Babel's but brought in from other dependencies like core-js, which you probably don't need either in 2025). For the rest of the things it still transpiles (pretty much just JSX) there are cheaper/faster transpilers with fewer external dependencies and runtime dependencies (Typescript, esbuild). It should not be hard to replace Babel in your stack: if you've got a complex webpack solution (say from CRA reasons) consider esbuild or similar.
- Axios and Jest have "native" options now (fetch and node --test). fetch is especially nice because it is the same API in the browser and in Node (and Deno and Bun).
- Redux is self-contained.
- React itself is sort of self-contained, it's the massive ecosystem that makes React the most appealing that starts to drive dependency bloat. I can't speak to Svelte.
Well, your typical Rust project has over 1000 dependencies, too. Zed has over 2000 in release mode.
Not saying this in defence of Rust or Cargo, but oftentimes those dependencies are just different versions of the same thing. In a project at one of my previous companies, a colleague noticed we had LOADS of `regex` crate versions. I forget the number, but it was well over 100.
That seems like a failure in workspace management. The most duplicates I've seen was 3, with crates like url or uuid, even in projects with 1000+ distinct deps.
Your typical Rust project does not have over 1000 dependencies.
Zed is not a typical Rust project; it's a full fledged editor that includes a significant array of features and its own homegrown UI framework.
Until you go get malware
Supply chain attacks happen at every layer where there is package management or a vector onto the machine or into the code.
What NPM should do if they really give a shit is start requiring 2FA to publish. Require a scan prior to publish. Sign the package with hard keys and signature. Verify all packages installed match signatures. Semver matching isn’t enough. CRC checks aren’t enough. This has to be baked into packages and package management.
> Until you go get malware
While technically true, I have yet to see Go projects importing thousands of dependencies. They may certainly exist, but are absolutely not the rule. JS projects, however...
We have to realize that, while supply chain attacks can happen everywhere, the best mitigations are development culture and a solid standard library - looking at you, cargo.
I am a JS developer by trade and I think that this ecosystem is doomed. I absolutely avoid even installing node on my private machine.
Here's an example off the top of my mind:
https://github.com/go-gitea/gitea/blob/main/go.sum
I think you are reading that wrong, go.sum isn't a list of dependencies it's a list of checksums for modules that were, at some point, used by this module. All those different versions of the same module listed there, they aren't all dependencies, at most one of them is.
Assuming 'go mod tidy' is run periodically, go.mod should contain all dependencies (which in this case seems to be just shy of 300 - still a lot).
Half of the go.sum entries are generally just multiple versions of the same package. ~400 is still a lot, but a huge project like gitea might need them, I guess.
```sh
$ cat go.sum | awk '{print $1}' | sort | uniq | wc -l
431
$ wc -l go.sum
1156 go.sum
```
How will multi-factor-authentication prevent such a supply chain issue?
That is, if an attacker creates some dummy, trivial but convenient package, and two years later half the package hub somehow depends on it, the attacker will just use their legit credentials to pwn everyone and their dog. This is not even about stealing credentials. It's a cultural issue of bare blind trust, handing out a blank check without even an expiry date.
https://en.wikipedia.org/wiki/Trust,_but_verify
That's an entirely different issue compared to what we're seeing here. If an attacker rug-pulls, of course there is nothing that can be done about that other than security scanning. Arguably, some kind of package security scanning is a core service that a lot of organisations would not think twice about paying npm for.
> If an attacker rug-pulls of course there is nothing that can be done about that other than security scanning.
As another subthread mentioned (https://news.ycombinator.com/item?id=45261303), there is something which can be done: auditing of new packages or versions, by a third party, before they're used. Even doing a simple diff between the previous version and the current version before running anything within the package would already help.
> Sign the package with hard keys and signature.
That's really the core issue. Developer-signed packages (npm's current attack model is "Eve doing a man-in-the-middle attack between npm and you," which is not exactly the most common threat here) and a transparent key registry should be minimal kit for any package manager, even though all, or at least practically all, the ecosystems are bereft of that. Hardening API surfaces with additional MFA isn't enough; you have to divorce "API authentication" from "cryptographic authentication" so that compromising one doesn't affect the other.
> What NPM should do if they really give a shit is start requiring 2FA to publish.
How does 2FA prevent malware? Anyone can get a phone number to receive a text or add an authenticator to their phone.
I would argue a subscription model for 1 EUR/month would be better. The money received could pay for certification of packages, and the credit card on file can leverage the security of the payments system.
NPM does require 2FA to publish. I would love a workaround! Isn't it funny that even here on HN, misinformation is constantly being spread?
NPM does not require two-factor authentication. If two-factor authentication is enabled for your account and you wish to disable it, this explains how to do that if allowed by your organization:
<https://docs.npmjs.com/configuring-two-factor-authentication...>
It doesn't require 2FA in general, but it does for people with publish rights for popular packages, which covers most or all of the recent security incidents.
https://github.blog/changelog/2022-11-01-high-impact-package...
If NPM really cared, they'd stop recommending people use their poorly designed version control system that relies on late-fetching third-party components required by the build step, and they'd advise people to pick a reliable and robust VCS like Git for tracking/storing/retrieving source code objects and stick to that. This will never happen.
NPM has also been sending out nag emails for the last 2+ years about 2FA. If anything, that constituted an assist in the attack on the Junon account that we saw a couple weeks ago.
NPM lock files seem to include hashes for integrity checking, so as long as you check the lock file into the VCS, what's the difference?
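Concretely, each resolved package gets an entry roughly like this in package-lock.json (digest elided):

```sh
#   "node_modules/lodash": {
#     "version": "4.17.21",
#     "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
#     "integrity": "sha512-<digest>"
#   }
# npm re-verifies the downloaded tarball against the integrity hash at install time.
```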
Wrong question; NPM isn't bedrock. The question to be answered if there is no difference is, "In that case, why bother with NPM?"
They are. Any language that depends heavily on package managers and lacks a standard lib is vulnerable to this.
At some point people need to realize and go back to writing vanilla js, which will be very hard.
The rust ecosystem is also the same. Too much dependence on packages.
An example of doing it right is golang.
The solution is not to go back to vanilla JS, it's for people to form a foundation and build a more complete utilities library for JS that doesn't have 1000 different dependencies, and can be trusted. Something like Boost for C++, or Apache Commons for Java.
Python and Rust both have decent standard libraries, but it is just a matter of time before this happens in those ecosystems. There is nothing unique about this specific attack that could only happen in JavaScript.
>and go back to writing vanilla js
Lists of things that won't happen. Companies are filled with node_modules importers these days.
Even worse, now you have to check for security flaws in that JS that's been written by node_modules importers.
That, or someone could write a standard library for JS?
C#, Java, and so on.
AFAICT, the only thing this attack relies on, is the lack of scrutiny by developers when adding new dependencies.
Unless this lack of scrutiny is exclusive to JavaScript ecosystem, then this attack could just as well have happened in Rust or Golang.
I don't know Go, but Rust absolutely has the same problem, yes. So does Python. NPM is being discussed here, because it is the topic of the article, but the issue is the ease with which you can pull in unvetted dependencies.
Languages without package managers have a lot more friction to pull in dependencies. You usually rely on the operating system and its package-manager-humans to provide your dependencies; or on primitive OSes like Windows or macOS, you package the dependencies with your application, which involves integrating them into your build and distribution systems. Both of those involve a lot of manual, human effort, which reduces the total number of dependencies (attack points), and makes supply-chain issues like this more likely to be noticed.
The language package managers make it trivial to pull in dozens or hundreds of dependencies, straight from some random source code repository. Your dependencies can add their own dependencies, without you ever knowing. When you have dozens or hundreds of unvetted dependencies, it becomes trivial for an attacker to inject code they control into just one of those dependencies, and then it's game over for every project that includes that one dependency anywhere in their chain.
It's not impossible to do that in the OS-provided or self-managed dependency scenario, but it's much more difficult and will have a much narrower impact.
If you try installing npm itself on debian, you would think you are downloading some desktop environment. So many little packages.
JavaScript does have some pretty insane dependency trees. Most other languages don’t have anywhere near that level of nestedness.
Don't they?
I just went to crates.io and picked a random newly updated crate, which happened to be pixelfix, which fixes transparent pixels in pngs.
It has six dependencies and hundreds of transitive dependencies, many of which appear to be small and highly specific, a la left-pad.
https://crates.io/crates/pixelfix/0.1.1/dependencies
Maybe this package isn't representative, but it feels pretty identical to the JS ecosystem.
It depends on `image` which in turn depends on a number of crates to handle different file types. If you disable all `image` features, it only has like 5 dependencies left.
And all those 5 remaining dependencies have lots of dependencies of their own. What's your point?
> What's your point?
Just defending Rust.
> 5 remaining dependencies have lots of dependencies of their own.
Mostly well-known crates like rayon, crossbeam, tracing, etc.
You cannot defend Rust if this is reality.
Any Rust project I have ever compiled pulled in over 1000 dependencies. Recently it was Zed with its >2000 dependencies.
I think it's justified for Zed. It does a lot of things.
Zed isn’t special, I doubt Sublime Text has thousands of dependencies. It’s a language/culture problem.
Edit: Ghostty is a good counter-example that is open source. https://github.com/ghostty-org/ghostty/tree/main/pkg
It's not possible for a language to have an insane dependency tree. That's an attribute of a codebase.
Modern programming languages don't exist in a vacuum, they are tied to the existing codebase and libraries.
Whatever you're trying to say, you aren't.
Maybe the language should have a standard library then.
The C library is smaller than Node.js's (you won't have HTTP). What C has is much more respectable libraries. If you add libcurl or freetype to your project, it won't pull the whole jungle with it.
What C doesn't have is an agreed-upon standard package manager. Which means that any dependency - including transitive ones! - requires some effort on behalf of the developer to add to the build. And that, in turn, puts pressure on library authors to avoid dependencies other than a few well-established libraries (like libpng or GLib).
This makes little sense. Any popular language with a lax package management culture will have the exact same issue, this has nothing to do with JS itself. I'm actually doing JS quasi exclusively these days, but with a completely different tool chain, and feel totally unconcerned by any of these bi-weekly NPM scandals.
Rust is working on that. It's not far behind right now, leave it a couple of years.
That, and the ability to push an update without human interaction.
Javascript is badly over-used and over-depended on. So many websites just display text and images, but have extremely heavy javascript libraries because that's what people know and that is part of the default, and because it enables all the tracking that powers the modern web. There's no benefit to the user, and we'd be better off without these sites existing if there were really no other choice but to use javascript.
NPM does seem vastly over represented in these type of compromises, but I don't necessarily think that e.g. pypi is much better in terms of security. So you could very well be correct that NPM is just a nicer, perhaps bigger, target.
If you can sneak malware into a JavaScript application that runs in millions of browsers, that's a lot more useful than getting some number of servers to run a module as part of a script, whose environment is a bit unknown.
Javascript really could do with a standard library.
> So many websites just display text and images
Eh... This over-generalises a bit. That can be said of anything really, including native desktop applications.
Is the difference between the number of dev dependencies for eg. VueJs (a JavaScript library for marshalling Json Ajax responses into UI) and Htmx (a JavaScript library for marshalling html Ajax responses into UI) meaningful?
There is a difference, but it's not an order of magnitude and neither is a true island.
Granted, deciding not to use JS on the server is reasonable in the context of this article, but for the client htmx is as much a js lib with (dev) dependencies as any other.
https://github.com/bigskysoftware/htmx/blob/master/package.j...
https://github.com/vuejs/core/blob/main/package.json
Rendering template partials server-side and fetching/loading content updates with HTMX in the browser seems like the best of all worlds at this point.
Until you need to write JavaScript?
Then write it. Javascript itself isn't the problem, naive third-party dependencies are.
Developers are perfectly fine with writing insecure JS all by themselves.
Which should be much less than what’s customary?
But that's the neat part, you don't!
Until you have to.
The only way to win is not to play.
Let me quit my job real quick. The endgame is probably becoming a monk, no kidding.
Simply avoiding Javascript won't cut it.
While npm is a huge and easy target, the general problem exists for all package repositories. Hopefully a supply chain attack mitigation strategy can be better than hoping attackers target package repositories you aren't using.
While there's a culture prevalent in Javascript development to ignore the costs of piling abstractions on top of abstractions, you don't have to buy into it. Probably the easiest thing to do is count transitive dependencies.
The blast radius is made far worse by npm having the concept of "postinstall" which allows any package the ability to run a command on the host system after it was installed.
This works for deps of deps as well, so anything in your node_modules has access to this hook.
It's a terrible idea and something that ought to be removed or replaced by something much safer.
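A minimal illustration of the hook and the opt-outs (the script name is made up):

```sh
# Any package in node_modules can declare, in its own package.json:
#   "scripts": { "postinstall": "node collect-env.js" }   # hypothetical malicious hook
# and a plain `npm install` will execute it. Opting out:
npm install --ignore-scripts          # skip lifecycle scripts for this install
npm config set ignore-scripts true    # or disable them by default
```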
I agree in principle, but child_process is a thing so I don't think it makes much difference. You are pwned either way if the package can ever execute code.
HTMX is full of JavaScript. Server-side-rendering without JavaScript is just back to the stuff Perl and PHP give you.
I don't think the point is to avoid Javascript, but to avoid depending on a random number of third-parties.
> Server-side-rendering without JavaScript is just back to the stuff Perl and PHP give you.
As well as Ruby, Python, Go, etc.
HTMX does not have external dependencies, only dev dependencies, reducing the attack surface.
Do you count LiveView (Elixir) in that assessment?
Why is this inevitable? If you use only easily verifiable packages, you've lost nothing. The whole concept of npm automatically executing postinstall scripts was fixed for me when pnpm started asking me every time a new package wanted to do that.
Not for the frontend. esm modules work great nowadays with import maps.
> supply chain attacks
You all really need to stop using this term when it comes to OSS. Supply chain implies a relationship, none of these companies or developers have a relationship with the creators other than including their packages.
Call it something like "free code attacks" or "hobbyist code attacks."
“code I picked up off the side of the road”
“code I somehow took a dependency on when copying bits of someone’s package.json file”
“code which showed up in my lock file and I still don’t know how it got there”
All of which is true for far too many projects
A supply chain can have hobbyists, there's no particular definition that says everyone involved must be a professional registered business.
I know CrowdStrike have a pretty bad reputation but calling them hobbyists is a bit rude.
I'm sure no offense was intended to hobbyists, but it was indeed rude
This vulnerability was reported to NPM in 2016: https://blog.npmjs.org/post/141702881055/package-install-scr... https://www.kb.cert.org/vuls/id/319816 but the NPM response was WAI.
Acronym expansion for those not in the know (such as me before a web search): WAI might mean "working as intended", or possibly "why?"
Even if we didn't have post install scripts wouldn't the malware just run as soon as you imported the module into your code during the build process, server startup, testing, etc?
I can't think of an instance where I ran npm install and didn't run some process shortly after that imported the packages.
Many people have non-JS backends and only use npm for frontend dependencies. If a postinstall script runs in a dev or build environment it could get access to a lot of things that wouldn't be available when the package is imported in a browser or other production environment.
When the left-pad debacle happened, one commenter here said of a well known npm maintainer something to the effect of that he's an "author of 600 npm packages, and 1200 lines of JavaScript".
Not much has changed since then. The best counter-example I know is esbuild, which is a fully featured bundler/minifier/etc that has zero external dependencies except for the Go stdlib + one package maintained by the Go project itself:
https://www.npmjs.com/package/esbuild?activeTab=dependencies
https://github.com/evanw/esbuild/blob/755da31752d759f1ea70b8...
Other "next generation" projects are trading one problematic ecosystem for another. When you study dependency chains of e.g. biomejs and swc, it looks pretty good:
https://www.npmjs.com/package/@biomejs/biome/v/latest?active...
https://www.npmjs.com/package/@swc/types?activeTab=dependenc...
Replacing the tire fire of eslint (and its hundreds to low thousands of dependencies) with zero of them! Very encouraging, until you find the Rust source:
https://github.com/biomejs/biome/blob/a0039fd5457d0df18242fe...
https://github.com/swc-project/swc/blob/6c54969d69551f516032...
I think as these projects gain more momentum, we will see similar things cropping up in the cargo ecosystem.
Does anyone know of other major projects written in as strict a style as esbuild?
Part of the reason of my switch to using Go as my primary language is that there's this trend of purego implementations which usually aim towards zero dependencies besides the stdlib and golang.org/x.
These kind of projects usually are pretty great because they aim to work with CGO_ENABLED=0 so the libs are very portable and work with different syscall backends.
Additionally I really like to go mod vendor my snapshot of dependencies which is great for short term fixes, but it won't fix the cause in the long run.
However, the Go ecosystem is just as vulnerable here because of the lack of signing of package updates. As long as no end-to-end verification of "who signed this package" is possible, there's no way this will get better.
Additionally, most supply chain attacks in the past focused on CI/CD infrastructure, because it is just as broken, with just as many problems. There needs to be a better CI/CD workflow where signing keys don't have to be available on the runners themselves; otherwise this will just shift the attack surface to a different location.
In my opinion the package managers are somewhat to blame here, too. They should encourage and mandate gpg signatures, and especially in git commits when they rely on git tags for distribution.
> there's this trend of purego implementations which usually aim towards zero dependencies besides the stdlib and golang.org/x.
I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
IMO, it might be due to the fact that Go mod came rather late in the game, while NPM was introduced near the beginning of NodeJS. But it might be more related to Go's target audience being more low-level, where such tools are less ubiquitous?
"A little duplication is better than a little dependency," -- Rob Pike
I think the culture was set from the top. Also, the fairly comprehensive standard library helps a lot. C# was in a similar boat back when I used it.
> I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
I've also seen something similar with Java, with its culture of "pure Java" code which reimplements everything in Java instead of calling into preexisting native libraries. What's common between Java and Go is that they don't play well with native code; they really want to have full control of the process, which is made harder by code running outside their runtime environment.
Go sits at about the same level of abstraction as Python or Java, just with less OO baked in. I'm not sure where go's reputation as "low-level" comes from. I'd be curious to hear why that's the category you think of it in?
> I'm interested in knowing whether there's something intrinsic to Go that encourages such a culture.
I think it's because the final deliverable of Go projects is usually a single self-contained binary executable with no dependencies, whereas with Node the final deliverable is usually an NPM package which pulls its dependencies automatically.
With Node the final deliverable is an app that comes packaged with all its dependencies, and often bundled into a single .js file, which is conceptually the same as a single binary produced by Go.
Can you give an example? While theoretically possible I almost never see that in Node projects. It's not even very practical because even if you do cram everything into a single .js file you still need an external dependency on the Node runtime.
> usually an NPM package which pulls its dependencies automatically
Built applications do not pull dependencies at runtime, just like with golang. If you want to use a library/source, you pull in all the deps, again just like golang.
Not at runtime no, but at install time yes. In contrast, with Go programs I often see "install time" being just `curl $url > /usr/local/bin/my_application` which is basically never the case with Node (for obvious reasons).
C encourages such culture, too, FWIW.
Yes, eslint is particularly frustrating: https://npmgraph.js.org/?q=eslint
There are plenty of people in the community who would help reduce the number of dependencies, but it really requires the maintainers to make it a priority. Otherwise the only way to address it is to switch to another solution like oxlint.
I tried upgrading ESLint recently and it took me forever to fix all the dependency issues. I wish I had never used ESLint with Prettier, as now my codebase styling is locked into an ESLint config :/
Deno has a similar formatter to prettier and similar linter to eslint (with Typescript plugins) out-of-the-box. (Some parts of those written in Rust.) I have been finding myself moving to Deno more and more. I also haven't noticed too many reformatting problems with migrating from prettier to Deno. (If there are major changes, you can also add the commit to a .git-ignore-revisions file.)
Have you looked into biome? We recently switched at work. It’s fine and fast. If you overly rely on 3rd party plugins it might be hard but it covered our use case fine for a network based react app.
Way fewer dependencies too.
Even minor styling rule changes would result in a huge PR across our frontend, so I tend to avoid any change in tooling. But using old tools is not the end of the world. I only upgraded ESLint because I had to upgrade something else.
Would omitting this commit from git blame solve the issue?
The answer is to not pull in dependencies for things you can easily write yourself. That would probably reduce dependencies by two thirds or so in many projects, especially the left-pad-type things. If you write properly self-contained small parts and a few tests, you probably don't have to touch them much, and the maintenance burden is not that high. Compare that with having to check every little dependency like left-pad, all its code, and its dependencies. If a dependency is not strictly necessary, don't add it.
> Does anyone know of other major projects written in as strict a style as esbuild?
As in any random major project with focus on not having dependencies? SQLite comes to mind.
It's crazy to me that npm still executes postinstall scripts by default for all dependencies. Other package managers (Pnpm, Bun) do not run them for dependencies unless they are added to a specific allow-list. Composer never runs lifecycle scripts for dependencies.
This matters because dependencies are often installed in a build or development environment with access to things that are not available when the package is actually imported in a browser or other production environment.
Seems like this is a fairly recent change, for Pnpm at least, https://socket.dev/blog/pnpm-10-0-0-blocks-lifecycle-scripts...
What has been the community reaction? Has allowing scripts been scalable for users? Or could it be described as people blindly copying and pasting allow commands?
I am involved in Python packaging discussions and there is a pre-proposal (not at PEP stage yet) at the moment for "wheel variants" that involves a plugin architecture, a contentious point is whether to download and run the plugins by default. I'd like to find parallels in other language communities to learn from.
In my experience, packages which legitimately require a postinstall script to work correctly are very rare. For the apps I maintain, esbuild is the only dependency which benefits from a postinstall script to slightly improve performance (though it still works without the script). So there's no scaling issue adding one or two packages to a whitelist if desired.
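For reference, the allow-list is a short declaration (field name as of recent pnpm releases; worth double-checking against your version's docs):

```sh
# package.json: only packages listed here may run their install scripts under pnpm
#   "pnpm": { "onlyBuiltDependencies": ["esbuild"] }
pnpm install     # postinstall hooks of every other dependency are skipped
```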
Is there any way to install CLI tools from npmjs without being affected by a recent compromise?
Rust has `cargo install --locked`, which will use the pinned versions of dependencies from the lockfile, and these lockfiles are published for bin packages to crates.io.
But it seems npmjs doesn't allow publishing lockfiles, neither for libraries nor for CLI tools, so if you try to install let's say @google/gemini-cli, it will just pull the latest dependencies that fit the constraints in package.json. Is that true? Is it really this bad? If you try to install a CLI tool on a bad day when half of npmjs is compromised, you're out of luck?
How is that acceptable at all?
Lock files wouldn't work if libraries locked their transitive dependencies; the version solver would have no work left to do, and you'd end up with many, many versions of the same package rather than a few versions that satisfy all of the version-range constraints.
Lots of good ideas since last week, the one I like most being that published packages, especially those with high download counts, don't actually go live for a while after publishing, allowing security scanners to do their thing.
In the Rust ecosystem, you only publish lock files for binary crates. So yeah then you get churn like https://github.com/cargo-bins/cargo-binstall/releases/tag/v1... bumping transitive deps, but this churn/noise doesn't exist for library crates - because the lock file isn't published for them.
npm will use your lockfile if it’s present, otherwise yeah it’s pretty much whatever is tagged and latest at the time (and the version doesn’t even have to change). If npm respected every upstream lockfile, then it could never share a single version that satisfied all dependencies. The bigger issue here is that npm has such unrestricted and unsupervised access to the entire environment at all.
> If npm respected every upstream lockfile, then it could never share a single version that satisfied all dependencies.
I'm asking in the context of installing a single CLI tool into ~/bin or something. There's no requirement to satisfy all dependencies, because the only dependency I care about is that one CLI tool. All I want is an equivalent of what `cargo install --locked` does — use the top-level lockfile of the CLI tool itself.
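Concretely (the package names are just examples):

```sh
# Rust: install a CLI using the exact dependency versions recorded in its published Cargo.lock
cargo install --locked ripgrep

# npm: a global install re-resolves whatever currently satisfies the ranges in the tool's
# package.json (unless the package ships an npm-shrinkwrap.json, which most don't)
npm install -g @google/gemini-cli
```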
That sounds pretty reasonable: npm should allow bundling the lockfile with things that are marked with the type of "project", and whether it actually uses them depending on whether other locked constraints are overriding it. So instead of one lockfile, a prioritized list of them. The UX of dealing with that list could be a sticky wicket though, and npm isn't known for making this stuff easy to begin with.
According to Aikido Security the attack has now targeted 180+ packages: https://www.aikido.dev/blog/s1ngularity-nx-attackers-strike-...
I think these kinds of attacks would be greatly reduced if JS had a strong standard library.
If one were provided, it would significantly trim the dependency trees of all the small utility libraries.
Perhaps we need a common community effort to create a “distro” of curated and safe dependencies one can install safely, by analyzing the most popular packages and checking what’s common and small enough to be worth being included/forked.
Last week someone wrote a blog post saying "We dodged a bullet" because it was only a browser-based crypto wallet scrape
Guess we didn't dodge this one
We didn't really dodge a bullet. We put a bullet named 'node' in the cylinder of a revolver, spun it, pointed the gun at our head, and pulled the trigger. We just happened to be lucky enough that we got an empty chamber.
> Shai Hulud
Clever name... but I would have expected malware authors to be a bit less obvious. They literally named their giant worm after a giant worm.
> At the core of this attack is a ~3.6MB minified bundle.js file
Yep, even malware can be bloated. That's in the spirit of NPM I guess...
I suppose it's only a matter of time before one of these supply chain attacks unintentionally pulls in a second, unrelated supply chain attack.
Malware has to follow Moore's law; the Tequila virus was ~2.6 KB in 1991.
Are Python packaging systems like pip exposed to the same risks?
Is anybody looking at this?
Software supply chain attacks are well known and they are a massive hole in the entirety of software infrastructure. As usual with security, no one really cares that much.
Warning: LLM-generated article, terribly difficult to follow and full of irrelevant details.
I knew npm was a train wreck when I first used it years ago and it pulled in literally hundreds of dependencies for a simple app. I avoid anything that uses it like the plague.
I can tell a lot about a dev by the fact that they single out npm/js for this supply chain issue.
Lots of language ecosystems have this problem, but it is especially prominent in JS, and it lies on a spectrum. For comparison, in the C/C++ ecosystem it is common for libraries to advertise that they have zero dependencies and are header-only, or to use one common major library like Boost.
What other language ecosystems have had this happen systematically? This isn't even the first time this month!
Go has this issue
Python/PyPi.
Rust.
The JavaScript ecosystem has a major case of import-everything disease that acts as a catalyst for supply chain attacks. left-pad as one example of many.
That they’ve coded in more than one language?
Just more engineering-leaning than you. Actual engineers have to analyze their supply chains, so it makes sense they would be baffled by the NPM dependency trees that utterly normal projects grow in the JavaScript ecosystem.
Good thing that at scale, private package repositories or even in-house development is done. Personally, I would argue that an engineer unable to tell apart perfect from good, isn't a very good engineer in my book, but some engineers are unable to make compromises.
Do you think companies using node don't analyze supply chains? That's nonsense. Have you cargo installed a rust app recently? This isn't just a js issue. This needs to be solved across the industry and npm frankly has done a horrible job at it. We let people with billions of downloads a month with recently changed password/2fa publish packages? Why don't we pool assets as a collective to scan newly published packages before they're allowed to be installed? These types of things really should exist across all package registries (and my really hot take is that we probably don't need a registry for every language, either!).
It is solved across the industry for those who care. If you use cargo, npm, or a python package manager, you may have a service that handles static versioning of dependencies for security purposes. If you don't, you aren't generally working in a language that encourages so much package use.
> Do you think companies using node don't analyze supply chains?
I _know_ many don’t. In fact suggesting doing it is a good way to be looked at like a crazy person and be told something like “this is a yes place not a no place.”
I think it’s just that a lot of old men don’t like how popular it has become with script kiddies.
"I knew you weren't a great engineer the moment you started pulling dependencies for a simple app"
You realize my point, right? People are taught not to reinvent the wheel at work (mostly for good reasons), so that's what they do, me and you included.
You aren't going to bother writing HTML and manual DOM manipulation; the people that give you libraries to do so won't bother reimplementing parsers and file watchers; file watcher writers won't bother reimplementing file system utils; file system utils developers won't bother reimplementing structured cloning or event loops; etc., etc.
I myself just the other day had the task of converting HTML to Markdown, because I don't remember whether it was the Jira or GitHub API that returns comments as HTML, and despite it being mostly a few hours of work that would get us 90% there, everybody was in favor of pulling in a dependency to do so (with its own dependencies), further exposing our application to those risks.
Pause, you could write an HTML to markdown library in half a day? Like, 4 hours? Or 12? Either way damn
One that gets me 90% there would take me a few hours; one that gets me 99% there, a few months, which is why eventually people would rather pull in a dependency.
Or about 15 minutes with an LLM?
https://github.com/williamcotton/markdown-to-html-llm
I love how it took you very short to implement...the wrong thing.
> I myself just the other day had the task of converting HTML to markdown
> you could write an HTML to markdown library in half a day
LOL! Good point, my friend.
Claude Code just added support for HTML to Markdown. Seems to work?
In any case, not following the point you're trying to make.
LLMs are pretty good at greenfield projects and especially if they are tasked with writing something with a lot of examples in the training data. This approach can be used to solve the problem of supply-chain attacks with the downside being that the code might not be as well written and feature complete as a third-party package.
I use LLMs too, but don't share your opinion fully.
In less time than that, you could `git clone` the desired open source package, and text search & replace the author's name with your own.
And then still be subject to supply-chain attacks with all of the dependencies in whatever open source package you're cloning?
you are aware that the app you just wrote with Claude pulls in dependencies, yes?
Not for the parser, only for the demo server! And I guess the dev dependencies as well, but with a much smaller surface area. But yeah, I don't think a TypeScript compiler is within the scope of an LLM.
So basically you live JavaScript free?
as much as i can yes.
I try to avoid JS, as it is a horrible language by design. That does include TS; it is at least usable, but barely, because it is still tied to JS itself.
Off-topic, but I love how different programmers think about things, and how nothing really is "correct" or "incorrect". Started thinking about it because for me it's the opposite, JS is an OK and at least usable language, as long as you avoid TS and all that comes with it.
Still, even I who'd call myself a JavaScript developer also try to avoid desktop applications made with just JS :)
JS's issue is that it allows you to run objectively wrong code without throwing an explicit error to the user; it just fails silently or does something magical. Seems innocent, until you realize what we use JS for, other than silly websites or ERP dashboards.
It is full of gotchas that serve no purpose nowadays.
Also remember that it is basically a Lisp wearing a Java skin on top, originally designed in less than two weeks.
TypeScript is one of the few things that puts up a safety barrier and sane static error checking that makes JS bearable to use - but it still has to fall back to how JS works in the end, so it suffers from the same core architectural problems.
> JS's issue is that it allows you to run an objectively wrong code without throwing explicit error to the user, it just fails silently or does something magical. Seems innocent, until you realize what we use JS for, other than silly websites or ERP dashboards.
What some people see as a fault, others see as a feature :) For me, that's there to prevent entire websites from breaking because some small widget in the bottom right corner breaks, for example. Rather than stopping the entire runtime, it just surfaces that error in the developer tools but lets the rest continue working.
Then of course entire web apps crash because of one tiny error somewhere (remember seeing a blank page with just some short error text in black in the middle? Those), but that doesn't mean that's the best way of doing things.
> Also remember that it is basically a Lisp wearing Java skin on top
I guess that's why I like it better than TS, that tries to move it away from that. I mainly do Clojure development day-to-day, and static types hardly ever gives me more "safety" than other approaches do. But again, what I do isn't more "correct" than what anyone else does, it's largely based on "It's better for me to program this way".
>it's there to prevent entire websites from breaking because some small widget in the bottom right corner breaks, for example.
The issue is that it prevents that, but also lets you send completely corrupt data forward, which can create a horrible cascade of errors down the pipeline, because other components made assumptions about the correctness of the data passed to them.
Such display errors should be caught early in development, should be tested, and should never reach prod, instead of being swept under the rug - for anything other than a prototype.
But I agree - going fully functional with dynamic types beats the average JS experience any day. It is just piling more mud onto a giant mudball.
> JS is an OK and at least usable language, as long as you avoid TS and all that comes with it.
Care to explain why?
My view is this: since you can write plain JS inside TS (just misconfigure tsconfig badly enough), I honestly don’t see how you arrive at that conclusion.
I can just about understand preferring JS on the grounds that it runs without a compile step. But I’ve never seen a convincing explanation of why the language itself is supposedly better.
Lucky you. I keep coming back to it because jobs and even for desktop apps a native webview beats everything else.
We fcked up with JS, big time, and it's with us forever now.
For game dev too - all game engines suck. <canvas/> FTW.
I was hyped for wasm because I thought it was supposed to solve this problem, allowing any programming language to be compiled to run in browsers.
But apparently they only made it do like 95% of what JS does, so you can't actually replace JS with it. To me it seems like a huge blunder. I don't give a crap about making niche applications a bit faster, but freeing the web from the curse of JS would be absolutely huge. And they basically did it, except not quite. It's so strange to me; why not just go the extra 5%?
Maybe it's something about sharing memory with the JS side that would introduce serious vulnerabilities, so they can't let wasm code have access to everything.
The only way to remove JS is to create a new browser that doesn't use it. That fragments the web, yes, and probably nobody will use it.
That 5% of js glue code necessary right now is just monumentally difficult to get rid of, it's like a binary serialization / interface (ABI) of all DOM/BOM APIs and these APIs are huge, dynamic, callback-heavy and object-oriented. It's much easier to have that glue compiler generated, which you can already do right now (you can write your entire web app in rust if you want):
https://github.com/wasm-bindgen/wasm-bindgen https://docs.rs/web-sys/latest/web_sys/
This is also being worked on, in the future this 5% glue might eventually entirely disappear:
> Designed with the "Web IDL bindings" proposal in mind. Eventually, there won't be any JavaScript shims between Rust-generated wasm functions and native DOM methods
out of sincere curiosity, which one is a great programming language to you?
depends on use case, i don't think one language can fit all cases. 100% correctness is required for systems, but it is a hindrance in non-critical systems. or robust type systems require high compilation times which hurt iterating on the codebase.
systems? rust - but it is still far from perfect, too much focus on saving few keystrokes here and there.
general purpose corporate development? c# - despite current direction post .net 5 of stapling together legacy parts of .net framework to .net core. it does most things good enough.
scripting, and just scripting? python.
web? there's only one, bad, option and that's js/ts.
most hated ones are in order: js, go, c++, python.
go is extremely infuriating, there was a submission on HN that perfectly encapsulated my feelings about it, after writing it for a while: https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...
Under a submission like this you picked Rust, that is neat.
You can write javascript without using npm...
I mean, it's hard to avoid indirectly using things that use npm, e.g. websites or whatever. But it's pretty easy to never have to run npm on your local machine, yes.
My main takeaway from all of these is to stop using tokens, and rely on mechanisms like OIDC to reduce the blast radius of a compromise.
How many tokens do you have lying around in your home directory in plain text, able to be read by anything on your computer running as your user?
> How many tokens do you have lying around in your home directory in plain text, able to be read by anything on your computer running as your user?
Zero? How many developers have plain-text tokens lying around on disk? Avoiding that has been hammered into me by every developer more senior than me since I got involved with professional software development.
You're sure you don't have something lying around in ~/.config ? Until recently the github cli would just save its refresh token as a plain text file. AWS CLI loves to have secrets sitting around in a file https://docs.aws.amazon.com/cli/latest/userguide/cli-configu...
I don't use AWS and looking in ~/.config/gh I see two config files, no plain-text secrets.
With that said, it's not impossible some tool leaks their secrets into ~/.local, ~/.cache or ~/.config I suppose.
I thought they were referencing the common approach of adding environment variables with plaintext secrets to your shell config, or as an individual file in $HOME, which has been a big no-no for as long as I can remember.
I guess I'd reword it to "I'm not manually putting any cleartext secrets on disk" or something instead, if we wanted it to be 100% accurate.
Isn't this quite hard to achieve on local systems, where you don't have a CI vault automation to help?
I'd argue the reverse is true. On your local system, which only needs to operate when a named user with a (hopefully) strong password is present, you can encrypt the secrets with the user's login password, and the OS can verify that it's handing the secret out to the correct binary before doing so. The binary can also take steps to verify that it is being called directly from a user interaction and not from a build script of some random package.
The extent to which any of this is actually implemented varies wildly between different OSes, ecosystems and tools. On macOS, docker desktop does quite well here. There's also an app called Secretive which does even better for SSH keys - generating a non-exportable key in the CPU's secure enclave. It can even optionally prompt for login password or fingerprint before allowing the key to be used. It's practically almost as secure as using a separate hardware token for SSH but significantly more convenient.
In contrast, most of the time the only thing protecting the keys in your CI vault from being exfiltrated is that the malware needs to know the specific name / API call / whatever to read them. Plenty of CI systems you don't even need that, because the build script that uses the secrets will read them into environment variables before starting the build proper.
It's not that hard if it's something you decide you care about and want to solve. Like diggan mentions, there's many tools, some you already might use, that can be used to inject secrets into applications that's not too onerous to use in your development workflow.
I don't think so? I don't even know what a "CI vault automation" is, I store my credentials and secrets in 1Password, and use the CLI to get the secrets for the moments they're needed, I do all my development locally and things seem fine.
> How many developers have plain-text tokens lying around on disk?
Most of them. Mainly on purpose, (.env files) but many also accidentally. (shell history with tokens in the commands)
How do you manage secrets for your projects?
One option is pass, which is a shell script that uses GPG to manage passwords for command line tools. You can put the password store into a git repository if you need to sync it across machines.
Wait, what? "put the password store into a git repository"?!
The store in the case of pass, is a plain text file, whose contents are encrypted strings. If you trust the encryption, you can put it anywhere you like. Keep the keys secret and safe, though!
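A minimal sketch of that workflow (the key ID and entry names are made up; assumes GnuPG is already set up):

```sh
pass init "ABCD1234DEADBEEF"    # GPG key ID that encrypts the store (hypothetical)
pass insert npm/publish-token   # prompts for the secret and stores it GPG-encrypted
pass show npm/publish-token     # decrypts on demand; nothing sits on disk in plain text
pass git init                   # optional: version the encrypted store and sync it via git
```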
Using a password manager for fetching them when needed. 1Password in my case, but I'm sure any password manager can be used for storing secrets for most programming projects.
I was thinking about one more case: using 1Password as a CLI tool. Let's say you run "op run -- npm dev". If there's a malicious node_modules script, it would of course be able to get the env variables you intended to inject, but would it also be able to keep running more op commands to get all your other secrets, if you have already started a session?
Edit: Testing 1Password myself, with the 1Password desktop app plus shell integration, once I have authed in the shell, anything it spawns would be able to get all of my credentials from 1Password.
So I'm not actually sure how much better than plaintext that is, unless you use service accounts there.
Fun fact: Bitwarden's CLI is written in JavaScript and needs Node.js to run.
Which programming languages/frameworks do you use? Do you use 1Password to load secrets to env where you run whatever thing you are working on? Or does the app load them during boot?
A bunch, ranging from JS to Clojure and everything in-between, depends on the project.
The approach also depends on the project. There are a bunch of different approaches and I don't think there is one that would work for every project; sometimes it requires some wrangling, but it takes 5-10 minutes tops.
Some basic information about how you could make it work with 1Password: https://developer.1password.com/docs/cli/secrets-environment...
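For anyone curious what that looks like in practice, here's a rough sketch assuming the 1Password CLI (`op`) is installed and signed in; the op:// reference is a made-up example path, not a real item:

    // Fetch a secret from 1Password only when it's needed, via the `op` CLI.
    // The vault/item/field in the reference below is illustrative.
    const { execFileSync } = require('node:child_process');

    function opRead(reference) {
      // `op read` resolves a secret reference and prints the value to stdout.
      return execFileSync('op', ['read', reference], { encoding: 'utf8' }).trim();
    }

    const apiToken = opRead('op://Private/ExampleService/credential');

Alternatively, `op run --env-file=... -- node app.js` resolves op:// references from a template and injects them as environment variables for just that process, which is roughly what the linked docs describe.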
How long have you been using that method? It doesn't seem to have been very popular so far, although it makes a lot of sense. I've always seen people using gitignored .env files/config dirs in projects with many hardcoded credentials.
A good habit, but encryption won't save you in all cases because anything you run has write access to .bashrc.
Frankly, our desktop OSes are not fit for purpose anymore. It's nuts that everything I run can instantly own my entire user account.
It's the old https://xkcd.com/1200/ . That's from 2013 and what little (Flatpak, etc.) has changed has only changed for end users - not developers.
We've seen many reports of supply chain attacks affecting NPM. Are these symptoms of operational complexity, which can affect any such service, or is there something fundamentally wrong with NPM?
It's actually relatively simple.
Adding dependencies comes with advantages and downsides. You need to strike a balance between them. External libraries can help implement things that you'd better not implement yourself, so the answer is certainly not "no dependencies". But there are downsides and risks, and the risks grow with the number of dependencies.
In the world of NPM, people think those simple truths don't apply to them and the downsides and risks of dependencies can be ignored. Then you end up with thousands of transitive dependencies.
They're wrong, and they're learning it the hard way now.
You can't put this all on the users. The JS/node/npm projects have been mismanaged since the start.
node should have shipped "batteries included" after the left-pad incident. There was a boneheaded attachment to a small stdlib, which you could put down to youthful innocence, except that it's been almost 10 years.
The TC39 committee which controls the design of JS stdlib and the node maintainers basically both act like the other one doesn't exist.
NPM was never designed with security in mind. It's a dirty hack that somehow became the most popular package manager.
The dependency hell is a reflection of the massive egos of the people involved in the multiple organizations. Python doesn't have this problem because it's all centralized under one org with a single vision.
Apparently Maven has 61.9M indexed packages. As Java has a decent standard lib, mini libs like leftpad are not contributing to this count. NPM has 3.1M packages. Many are trivially simple. Those stats would suggest that NPM has disproportionately more issues than other services.
I would argue that is only one of the many issues with the JS/TS/NPM ecosystem. Many of the other problems have been normalized. The constant security issues are highly visible.
> Apparently Maven has 61.9M indexed packages.
Where did you see that number? Maven central says it has about 18 million [1] packages. Maybe with all versions of those 18 million packages there are about 62 million artifacts?
While the Java ecosystem is vastly larger, in Java (with Maven, Gradle, Bazel, etc.) it is not common to use really small libraries. So you end up with vastly fewer transitive dependencies in your projects.
[1] https://mvnrepository.com/repos/central
That is correct.
On Maven, I restrict packages to Spring and Apache. As opposed to NPM, where even big vendors can depend on hundreds of small ones.
This. You would expect some of the mature packages to be quite diligent about dependencies, but they are the ones pulling random stuff for a minor feature. Then the transitive dependencies add like GBs of files to your project.
There is a guy (ljharb) who is literally on TC39 - JavaScript specification committee - who is maintaining like 600 packages full of polyfills/dependencies/utilities.
It's just javascript being javascript.
There was a huge uproar about that guy specifically and deep dependency graphs in general a year ago. A lot has already changed for lots of the popular frameworks and libraries. Dependency graphs are already much slimmer. The cultural change is happening, but we can't expect it to happen all at once.
That wouldn't be a problem if there was proper package signing and the polyfill packages were hosted under a package namespace owned by the javascript specification committee.
Irrelevant here. You use eslint-plugin-import with its 60 dependencies; one dependency or 60 is irrelevant because you only need one token: his. They're all his packages.
The problem with that guy is that the dependencies are useless to everyone except his ego.
Just spit-balling here, but it seems that the problem is with the pushing to NPM, and distribution from NPM, rather than the concept of NPM. If NPM required some form of cryptographically secure author signing, and didn't distribute un-signed packages, then there is at least a chain of responsibility that can be followed.
It's the entire blasé nature of JS development in general.
It's just where the users and the juicy targets are.
NPM packages are used by huge Electron apps like Discord, Slack, VS Code, the holy grail would be to somehow slip something inside them.
It's both that and a culture of installing a myriad of constantly-updating, tiny libraries to do basic utility functions. (Not even libraries, they're more like individual pages in individual books).
In our line-of-business .NET app, we have a logger, a database, a unit tester, and a driver for some specialty hardware. We upgrade to the latest version of each external dependency about once per year (every major version) to avoid accruing tech debt. They're all pinned and locally hosted; NuGet exists but we (like most .NET developers) don't use it to the extent that npm devs do. We read the changelogs - all four of them! - and manually update.
I understand that the NPM ecosystem works differently from a "batteries included" .NET environment for a desktop app, but it's not just about where the users are. Line-of-business code in .NET and Java apps processes a lot of important data. Slipping a malicious package into pypi could expose all kinds of juicy, proprietary data, but again, it's less about the existence of a package manager and more about when and how you use it.
> Slipping a malicious package into pypi could expose all kinds of juicy, proprietary data
> In July 2024, Bittensor users were the victims of an $8 million hack. The Bittensor hack was an example of a supply chain hack using PyPI. PyPI is a site that hosts packages for the Python programming language
https://www.halborn.com/blog/post/explained-the-bittensor-ha...
Yes, there are hackers on every platform... but it feels like there's an NPM compromise announced about once a week.
We don't see these attacks nearly as severe or frequent on Maven, which is a much older package management solution. Maven users would be far more attractive targets, given that corporations extensively run Java.
Number of packages doesn’t mean much. If you can get your code into just one Javascript package you could have it run on billions of browsers. With Java it’s hard to get the same distribution (although the log4j vulnerability shows it’s not entirely impossible).
It is also, in my humble but informed opinion, where you will find the least security-conscious programs, just because of the breadth of its use and myriad of deployments.
It's the new pragmatic choice for web apps, and so everyone is using it, from battle-hardened teams to total noobs to people who just don't give a shit. It reminds me of Wordpress from 10 years ago, when it was the go-to platform for cheap new websites.
Every NPM turd should be run with bubblewrap or a similar sandbox toolkit at least.
So do you expect other supply chain services that also supply juicy targets to be affected? I mean, we live in a bubble here in HN, so not seeing something in the front page doesn't mean it doesn't exist or it doesn't happen, but the feeling is that NPM is particularly more vulnerable than other services, correct me if I'm wrong.
> is there something fundamentally wrong with NPM?
Its users don't check who the email is from
NPM isn't perfect, but no, it's fundamentally self-inflicted.
The community is very happy to pick up helper libraries, and by the time you get all the way up the tree in a React framework you have hundreds or even thousands of packages.
If you're sensible you can be fine, just like in any other ecosystem, but you're limited, because one wrong package and you've just ballooned your dependency tree by hundreds, which lowers the value of the ecosystem.
Node doesn't have a standard library and until recently didn't even have a test runner, which certainly doesn't help.
If you're sensible with Node or Deno* you'll be somewhat insulated from all this nonsense.
*Deno has linting, formatting, testing & a standard library, which is a massive help (and a permission system, so packages can't do whatever they want)
Anyone know if there is a public events feed/firehose for the npm ecosystem? Similar to the GitHub public events feed?
We, at ClickHouse, love big data and it would be super cool to download and analyse patterns in all this data & provide some tooling to help combat this widespread issue.
Languages/VMs should support capability-based permissions for libraries, no library should be able to open a file or do network requests without explicit granular permissions.
How many packages have now been compromised over the past couple of weeks? The velocity of these attacks is insane. Part of me believes state actors must be involved at this point.
In any case, does anyone have an exhaustive list of all recently compromised npm packages + versions across the recent attacks? We need to do an exhaustive scan after this news...
> Part of me believes state actors must be involved at this point.
It's less a technical hurdle than a moral one. It's probably a bunch of teenagers behind it, like it was with the Mirai botnet.
post-install seems like it shouldn't be necessary anyway, let alone need shell access. What are legitimate JS packages using this for?
From what I've seen, it's either spam, telemetry, or downloading prebuilt binaries. The first two are anti-user and should not exist, the last one isn't really necessary — swc, esbuild, and typescript-go simply split native versions into separate packages, and install just what your system needs.
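For reference, the prebuilt-binary case doesn't need an install script at all. Roughly how esbuild and friends handle it is one wrapper package plus per-platform packages listed as optionalDependencies, each restricted via the "os"/"cpu" fields so the package manager installs only the matching one; the package names below are made up:

    {
      "name": "example-native-tool",
      "version": "1.0.0",
      "optionalDependencies": {
        "example-native-tool-linux-x64": "1.0.0",
        "example-native-tool-darwin-arm64": "1.0.0",
        "example-native-tool-win32-x64": "1.0.0"
      }
    }

Each platform package then declares something like "os": ["linux"], "cpu": ["x64"], and the wrapper simply requires whichever one is present at runtime.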
Use pnpm and whitelist just what you need. It disables all scripts by default.
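For reference, a minimal sketch of what that allowlist looks like in package.json, assuming a recent pnpm (field name from memory, so double-check the docs):

    {
      "pnpm": {
        "onlyBuiltDependencies": ["esbuild"]
      }
    }

Everything not listed has its install/postinstall scripts skipped.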
Does that even matter?
The malware could have been a JS code injected into the module entry point itself. As soon as you execute something that imports the package (which, you did install for a reason) the code can run.
I don't think that many people sandbox their development environments.
It absolutely matters. Many people install packages for front-end usage which would only be imported in the browser sandbox. Additionally, a package may be installed in a dev environment for inspection/testing before deciding whether to use it in production.
To me it's quite unexpected/scary that installing a package on my dev machine can execute arbitrary code before I ever have a chance to inspect the package to see whether I want to use it.
I've been using pnpm and it does not run lifecycle scripts by default. Asks for confirmation and creates a whitelist if you allow things. Might be the better default.
I think these compromises show that install hooks should be severely restricted.
Something like, only packages with attestations/signed releases and OIDC-only workflow should allow these scripts.
The worm could propagate through the code itself, but I think it would be quite a bit less effective.
Most don't need it. There was a time when most post-install scripts flooded your terminal with annoying messages to upgrade, donate, or say hi.
Modern node package managers such as yarn and pnpm allow you to prevent post installs entirely.
Today, the main time you need to make an exception for a package is when a module requires native compilation or downloads a pre-built binary. This has become rare though.
The number of packages is now up to 180 (or more, depending on which source you're looking at)
Just a note, guys: it did not start with tinycolor. I had first reported it here, I am just not as popular haha
My posts way before the issue was created: https://news.ycombinator.com/item?id=45252940 https://www.linkedin.com/posts/daniel-pereira-b17a27160_i-ne...
Soon we'll see services like havemysecretsbeenpwned.com to check your secrets against xD, given the malware seeks local creds.
In my experience, 80% of companies do not care about their secrets being exposed.
There is this shallow belief that production will never be hacked
I haven't dug into the specifics, but technical props and nostalgia for the "self propagating" nature. Reminds me of the OG "worm" - the https://en.wikipedia.org/wiki/Morris_worm
It's high time we took this seriously and required signing and 2FA on all publishes to NPM, and NPM needs to start doing security scanning and building tooling for this that they can charge organisations for.
Need to stop using javascript on desktop ASAP. Also Rust might be a bit dangerous now?
As a developer, is there a way on mac to limit npm file access to the specific project? So that if you install a compromised package it cannot access any data outside of your project directory?
Wrote a small utility shell script that uses docker behind the scenes to prevent access to your host machine while still allowing full npm install and run workflow.
https://github.com/freakynit/simple-npm-sandbox
Disclaimer: I am not Docker expert. Please review the script (sandbox.js) and raise any potential issues or suggestions.
Thanks..
Frankly, I am refusing to use npm outside of docker anymore.
This blog post and others are from security SaaS vendors that also try to make money off how bad NPM package security is.
Why can't npm maintainers just implement something similar?
Maybe at least have a default setting (or an option) that packages newer than X days are never automatically installed unless forced? That would at least give time for people to review and notice if the package has been compromised.
Also, there really needs to be a standard library or at least a central community approved library of safe packages for all standard stuff.
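There's no built-in "cooldown" for new versions in npm itself as far as I know, but you can at least force known-good versions of transitive dependencies with `overrides` in package.json (Yarn calls it `resolutions`, pnpm has `pnpm.overrides`); the package name and version below are made up:

    {
      "overrides": {
        "some-transitive-dep": "1.2.3"
      }
    }

Combined with a committed lockfile and `npm ci`, nothing newer gets pulled in until you deliberately bump it.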
Why did the socket.dev story from last night get flagged off the front page?
https://news.ycombinator.com/item?id=45256210
What indicates to you that it has been flagged?
npm considered harmful
Bless the maker and his water.
Is there a theoretical framework that can prevent this from happening? Proof-carrying code?
Object-capability model / capability-based security.
Do not let code have access to things it's not supposed to access.
It's actually that simple. If you implemented a function which formats a string, it should not have access to `readFile`, for example.
Retrofitting it into JS isn't possible, though, as the language is way too dynamic - self-modifying code, reflection, etc. mean there's no isolation between modules.
In a language which is less dynamic, it might be as easy as making a whitelist for imports.
People have tried this, but in practice it's quite hard to do because then you have to start treating individual functions as security boundaries - if you can't readFile, just find a function which does it for you.
The situation gets better in monadic environments (can't readFile without the IO monad, and you can't call anything which would read it).
Well, to me it looks like people are unreasonably eager to use "pathologically dynamic" languages like JS & Python, and it's an impossible problem in a highly dynamic environment where you can just randomly traverse and change objects.
In programming languages which are "static" (or, basically, sane), you can identify all imports of a module/library and, basically, ban anything which isn't a "pure" part of the stdlib.
If your module needs to work with files, it will receive an object which lets it work with files.
A lot of programming languages implement the object-capability model: https://en.m.wikipedia.org/wiki/Object-capability_model It doesn't seem to be hard at all. It's just that programmers have a preference for shittier languages, just like they prefer C, which doesn't even have language-level array bounds checking (for lack of a "dynamic array" concept at the language level).
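A sketch of that idea in plain JS - the library function never imports 'fs' and can only touch files through the narrow capability object it's handed (directory and file names are made up for illustration):

    const fs = require('node:fs/promises');
    const path = require('node:path');

    // The application root constructs the capability: read-only access to one directory.
    function makeReadCapability(dir) {
      return {
        // path.basename keeps the callee from escaping the directory.
        read: (name) => fs.readFile(path.join(dir, path.basename(name)), 'utf8'),
      };
    }

    // The library only ever sees the capability, never the filesystem module.
    async function summarizeReport(dirCap, name) {
      const text = await dirCap.read(name);
      return `${name}: ${text.split('\n').length} lines`;
    }

    const reportsCap = makeReadCapability('/var/reports');
    summarizeReport(reportsCap, 'today.txt').then(console.log);

The catch, as noted above, is that in today's JS nothing actually stops the library from requiring 'fs' itself - which is exactly the retrofitting problem.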
I think it's sort of orthogonal to "pure functional" / monadic: if you have unrestricted imports you can import some shit like unsafePerformIO, right? You have another level of control, of course (i.e. you just need to ban unsafePerformIO and look for unlicensed IO) but I don't feel like ocap requires Haskell
You can protect yourself using existing tools, but it's not trivial and requires serious custom work. Effectively you want minimal permissions and loud failures.
This is something I'm trying to polish for my system now, but the idea is: yarn (and bundler and others) needs to talk only to the repositories. That means yarn install is only allowed outbound connections to localhost running a proxy for packages. It can only write in tmp, its caches, and the current project's node_modules. It cannot read home files beyond specified ones (like .yarnrc). The alias to yarn strips the cloud credentials. All tokens used for installation are read-only. Then you have to do the same for the projects themselves.
On Linux, SELinux can do this. On macOS, you have to fight a long battle with sandbox-exec, but it's kinda maybe working. (If it gained "allow exec with specified profile", it would be so much better.)
But you may have guessed from the description so far - it's all very environment dependent, time sink-y, and often annoying. It will explode on issues though - try to touch ~/.aws/credentials for example and yarn will get killed and reported - which is exactly what we want.
But internally? The whole environment would have to be redone from scratch. Right now package installation will run any code it wants. It will compile extensions with gyp which is another way of custom code running. The whole system relies on arbitrary code execution and hopes it's secure. (It will never be) Capabilities are a fun idea, but would have to be seriously improved and scoped to work here.
Why yarn instead of pnpm?
It doesn't matter. It applies the same to all those tools.
Something similar to Deno's permission system, but operating at a package level instead of a process level.
When declaring dependencies, you'd also declare the permissions of those dependencies. So a package like `tinycolor` would never need network or disk access.
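Purely hypothetical sketch of what such a manifest could look like - no package manager supports a `permissions` field today, and the names/versions here are invented:

    {
      "//": "hypothetical - the 'permissions' field does not exist in npm/pnpm/yarn",
      "dependencies": {
        "tinycolor": "^1.0.0",
        "example-http-client": "^2.0.0"
      },
      "permissions": {
        "tinycolor": [],
        "example-http-client": ["net"]
      }
    }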
Signatures could probably alleviate most of these issues, as each publish would require the author to actually sign the artifact; set up properly with hardware keys, this sort of malware couldn't spread. The NPM CI tokens that don't require 2FA kind of make it less useful though.
Clojars (run by volunteers AFAIK) has been doing signatures since forever; not sure why it's so difficult for Microsoft to follow their own yearly proclamation of "security is our top concern".
I would like to see more usage of NPM/Github Actions provenance statements https://www.npmjs.com/package/sigstore#provenance through the ecosystem
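For package authors publishing from a supported CI (e.g. GitHub Actions), this can be turned on with `npm publish --provenance`, or - if I remember the field correctly - persistently in package.json:

    {
      "publishConfig": {
        "provenance": true
      }
    }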
> The NPM CI tokens that don't require 2FA kind of make it less useful though
Use OIDC to publish packages instead of having tokens around that can be stolen or leaked https://docs.npmjs.com/trusted-publishers
Manual verification of releases and chain-of-trust systems help a lot. See for example https://lucumr.pocoo.org/2019/7/29/dependency-scaling/
There are, but they have huge performance or usability penalties.
Stuff like intents "this is a math library, it is not allowed to access the network or filesystem".
At a higher level, you have app sandboxing, like on phones or Apple/Windows store. Sandboxed desktop apps are quite hated by developers - my app should be allowed to do whatever the fuck it wants.
Do they actually have huge performance penalties in Javascript?
I would have thought it wouldn't be too hard to design a capability system in JS. I bet someone has done it already.
Of course, it's not going to be compatible with any existing JS libraries. That's the problem.
You can do that by screening module imports with zero runtime penalty.
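For first-party code, one low-tech version of that screening is an ESLint config that bans the dangerous built-ins outside of designated modules - lint-time only, and it does nothing about what your dependencies do at runtime:

    {
      "rules": {
        "no-restricted-imports": ["error", {
          "paths": ["fs", "node:fs", "child_process", "node:child_process", "net"]
        }]
      }
    }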
Related:
Active NPM supply chain attack: Tinycolor and 40 Packages Compromised
https://news.ycombinator.com/item?id=45256210
Is using any type of NPM-type stuff a no-go? Who reads the code and verifies it is secure?
Unless the npm infrastructure is thoroughly curated and moderated, it's always going to remain a high-risk threat.
> It deliberately skips Windows systems
Reminds me of when I went to a tech conference with a Windows laptop and counted exactly two like me among the hundreds of attendees. I was embarrassed then but I'd be laughing now :D
Jesus Christ. Another one? What the fuck?
This isn't a JavaScript problem. What, structurally, stops the same thing happening to PyPI? Or the Rust ecosystem? Or Lisp via QuickLisp? Or CPAN?
This whole mess was foreseeable. So what's to be done?
Look. Any serious project needs to start vendoring its dependencies. People should establish big, coarse grained meta-distributions like C++ Boost that come from a trustable authority and that get updated infrequently enough that you can keep up with release notes.
> This isn't a JavaScript problem. What, structurally, stops the same thing happening to PyPI? Or the Rust ecosystem? Or Lisp via QuickLisp? Or CPAN?
For one, NPM has a really sprawling ecosystem where it's normal to have many dependencies.
I remember that I once tried to get started with angular, and I did an "init" for an empty project and "compile", and suddenly had half a gigabyte of code lying in my directory.
This means that there is a high number of dependencies that are potential targets for a supply chain attack.
I just took a look at our biggest JS/Typescript project at work, it comes in at > 1k (recursive) NPM dependencies. Our biggest Python project has 78 recursive dependencies. They are of comparable size in terms of lines of code and total development time.
Why? Differences in culture, as well as python coming with more "batteries included", so there's less need for small dependencies.
> For one, NPM has a really sprawling ecosystem where it's normal to have many dependencies.
Agreed, but it's a difference of degree (literally --- graph in- and out-degree) not kind.
> Or Lisp via QuickLisp
Common Lisp is not worth it - you are unlikely to hit any high-value production target, there are not many users and they are tech-savvy. Good for us, the 5 remaining users. Also, Quicklisp is not rolling-release; it is a snapshot done once or twice a year.
They were new versions of the packages instead of modified existing ones so vendoring has the same effect as the usual practice of pinning npm deps and using npm ci, I think.
Rust was hit by a similar attempt: https://github.com/rust-lang/crates.io/discussions/11889
Nothing much came of it, I don't know.
New day, new npm malware. Sigh..
> New day, new npm malware. Sigh..
This. But the problem seems to go way deeper than npm or whatever package manager is used. I mean, why is anyone consuming a package like colors or tinycolor? Do projects really need to drag in a random dependency to handle these use cases?
So rather than focusing on how Microsoft/npm et al can prevent similar situations in the future, you chose to think about what relevance/importance each individual package has?
There will always be packages that for some people are "but why?" but for others are "thank god I don't have to deal with that myself". Sure, colors and whatnot are tiny packages we probably could do without, but what are you really suggesting here? Someone sits and reviews every published package and rejects it if the package doesn't fit your ideal?
You're partly right.
But the issue isn't just about the “thank god I don't have to deal with that myself” perspective. It's more about asking: do you actually need a dependency, or do you simply want it?
A lot of developers, especially newer ones, tend to blur that distinction. The result is an inflated dependency tree that unnecessarily increases the attack surface for malware.
The "ship fast at all costs" mindset that dominates many startups only makes this worse, since it encourages pulling in packages without much thought to long-term risk.
> So rather than focusing on how Microsoft/npm et al can prevent similar situations in the future, (...)
There's some ignorance in your comment. If you read up on the debug & chalk supply chain attack, you'll end up discovering that the attacker gained control of the account through plain old phishing. Through a 2FA reset email, to boot.
What exactly do you expect the likes of Microsoft to do if users hand over their access to third parties? Do you want to fix issues or to pile onto the usual targets?
Why are people using React to write simple ecommerces?
Why are React devs pulling object utils from lodash instead of reimplementing them?
> Why are people using React to write simple ecommerces?
What leads you to believe React is not well suited to simple ecommerce sites?
1. It's a solution meant for highly interactive app-like websites, not static-content-driven websites like ecommerces. React in this context is just the wrong tool for the problem and will give you a huge array of performance, bug and UX problems.
2. Extensive ecommerce experience including Disney, Carnival Cruises, Booking, TUI, and some of the European leaders in real estate and professional home building tools among the others.
> 1. It's a solution meant for highly interactive app-like websites, not static-content-driven websites like ecommerces. React in this context is just the wrong tool for the problem and will give you a huge array of performance, bug and UX problems.
Strongly disagree. React is not about interactivity, but reactivity. If you have to consume an API and update your app based on the responses, React does all the heavy lifting for you without requiring full page reloads.
On top of that, and as a nice perk, React also gives you all the tools you will ever need to optimize perceived performance.
Claiming that a tool designed for reactive programming is not suited for the happy flow of reactive programming is simply fundamentally wrong.
1. React didn't invent SPAs and reactivity.
2. Ecommerces are not highly dynamic pages. They are overwhelmingly static content with an occasional configurator/cart/search. All things that can be embedded with whatever library you like (including React), or even better none at all.
3. SEO and performance are what really matter in ecommerce. The only minor exceptions are shops like Amazon or Airbnb, but that's unrelated to their SEO and performance.
4. I've been writing React and ecommerces using React and similar with millions of daily users for a decade :)
My comment yesterday, which received one downvote and which I will repeat if/until they’re gone: HTTP and JS have to go. There are ways to replace them.
One downvote is not enough.
One upvote is not enough. We need enough upvotes to fix the problem. You can’t shape a big pile of shit into success. HTTP and JS will never serve as a proper application framework.
This seems like something that can be solved with reproducible builds and ensuring you only deploy from a CI system that verifies along the way.
In fact this blog post appears to be advertising for a system that secures build pipelines.
Google has written up some about their internal approach here: https://cloud.google.com/docs/security/binary-authorization-...