React Server Components always felt uncomfortable to me because they make it hard to look at a piece of JavaScript code and derive which parts of it are going to run on the client and which parts will run on the server.
It turns out this introduces another problem too: to make it work, you need to implement a deep serialization RPC mechanism, which is largely opaque to the developer and, as we've recently seen, a risky spot for security vulnerabilities.
I was a fan of NextJS in the pages router era. You knew exactly where the line was between server and client code, and it was pretty easy to keep track of. Then I began a new project and wanted to try out the app router, and I hated it. So many things that are common to me were just not possible, because the code can run on the client and on the server, so headers might not always be available. It was just pure confusion about what's running where.
I think we (the Next.js user community) need to organize and either convince Vercel to announce official support of the Pages router forever (or at least indefinitely, and stop posturing it as a deprecated-ish thing), or else fork Next.js and maintain the stable version of it that so many of us enjoyed. Every time Next comes up I see a ton of comments like this, everyone I talk to says this, and I almost never hear anyone say they like the App Router (and this is a pretty contrarian site, so if they existed I’d expect to see them here).
I would highly recommend just checking out TanStack Router/Start instead. It fills a different niche, with a slightly different approach, that the Next.js app router just hasn't prioritized enabling anymore.
What app router has become has its ideal uses, but if you explicitly preferred the DX of the pages router, you might enjoy TanStack Router/Start even more.
Last time I tried TanStack Router, I spent half a day trying to get breadcrumbs to work. Nit: I also can't stand their docs site.
OK, I am personally surprised that anyone likes the Pages router. Pages routing has all the benefits (simple to get started the first time) and all the downsides (maintainability of larger projects goes to hell) of having your routing determined by where things sit in the file system.
I don't care about having things simple to get started the first time, because soon I will have to start things a second or third time. If I have a little bit more complexity to get things started because routing is handled by code and not filesystem placement then I will pretty quickly develop templates to handle this, and in the end it will be easier to get things started the nth time than it is with the simple version.
Do I like the app router? No. Vercel does a crap job on at least two things - routing and building (server code, etc. can be considered a subset of the routing problem) - but saying I dislike the app router is praising the pages router with too faint a damnation.
Remix 2 is beautiful in its abstractions. The thing with the NextJS roadmap is that it is tightly coupled with Vercel's financial incentives: more complexity and more server-side code mean more $$$ for them. I don't see the community being able to change much, just like how useContextSelector was deprioritized by the React Core team.
Align early with the values of a framework, and take a closer look at the funder's incentives.
I've been using React since its initial release; I think both RSC and App Router are great, and things are better than ever.
It's the first stack that allows me to avoid REST or GraphQL endpoints by default, which was the main source of frontend overhead before RSC. Previously I had to make choices about how to organize the API, which GraphQL client to choose (none of them are perfect), how to optimize routes and waterfalls, etc. Now I just write exactly what I mean, with a very minimal set of external helper libs (nuqs and next-safe-action), and the framework matches my mental model very well.
Anti-React and anti-Next.js bias on HN is something that confuses me a lot; for many other topics here I feel pretty aligned with the crowd opinion on things, but not on this.
Can you describe how RSC allows you to avoid REST endpoints? Are you just putting your RSC server directly on top of your database?
If I control both the backend and the frontend, yes. Server-only async components on top of the layout/page component hierarchy: components -> DTO layer -> Prisma. Similar to this: https://nextjs.org/blog/security-nextjs-server-components-ac...
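A minimal sketch of that shape, assuming Next.js App Router conventions and a Prisma schema with a User model (the DTO helper and field names are illustrative):

```tsx
// app/users/page.tsx - a server-only async component; it never ships to the client.
import "server-only";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// DTO layer: expose only the fields the page actually needs.
function toUserDTO(user: { id: number; name: string; email: string }) {
  return { id: user.id, name: user.name };
}

export default async function UsersPage() {
  const users = await prisma.user.findMany();
  return (
    <ul>
      {users.map((u) => {
        const dto = toUserDTO(u);
        return <li key={dto.id}>{dto.name}</li>;
      })}
    </ul>
  );
}
```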
You still need API routes for stuff like data-heavy async dropdowns, or anything else that's hard to express as a pure URL -> HTML, but it cuts down the number of routes you need by 90% or more.
You're just shifting the problem from HTTP to an ad-hoc protocol on top of it.
Yes, but they're also shifting the problem from one they explicitly have to deal with themselves to one the framework handles for them.
Personally I don't like it, but I do understand the appeal.
Maybe, but you go from one of the most-tested protocols, with a lot of tooling, to another without even a specification.
Some of the anti-Next sentiment might be from things like solid-start and tanstack-start existing, which can do similar things but without the whole "you've used state without marking this as a client component, thus I will stop everything" factor of NextJS.
Not to mention the whole middleware story, and being able to access the incoming request wherever you like.
And Vercel.
That's true; can't blame people for having a bad taste in their mouths about VC-funded companies taking the reins of open source projects.
Personally, I love App Router: it reminds me of the Meta monorepos, where everything related to a certain domain is kept in the same directory. For example, anything related to user login/creation/deletion might be kept in the /app/users directory, etc.
But I really, really do not like React Server Components as they work today. I think it's probably better to strip them out in favor of just a route.ts file in the directory, rather than the actions files with "use server" and all the associated complexity.
Technically, you can build apps like that using App Router by just not having "use server" anywhere! But it's an annoying, sometimes quite dangerous footgun to have all the associated baggage there waiting for an exploit... The underlying code is there even if you aren't using it.
I think my ideal setup would be:
1. route.ts for RESTful routes
2. actions/SOME_FORM_NAME.ts for built-in form parsing + handling. Those files can only expose a POST, and are basically a named route file that has form-data parsing. There's no auto-RPC; it's just an HTTP handler that accepts form data at the named path. (See the sketch after this list.)
3. no other built-in magic.
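A rough sketch of what 1 and 2 could look like (the route handler is standard App Router; the actions/ file convention is hypothetical - it's the wished-for setup, not something Next.js ships):

```ts
// app/api/users/route.ts - an ordinary HTTP endpoint; no RPC involved.
export async function GET() {
  return Response.json({ users: [] });
}

// actions/create-user.ts - hypothetical: a named, POST-only form handler.
export async function POST(request: Request) {
  const form = await request.formData();
  const name = String(form.get("name") ?? "");
  if (!name) {
    return Response.json({ error: "name is required" }, { status: 400 });
  }
  // ...persist the user here, then respond or redirect
  return Response.json({ ok: true });
}
```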
Route files are still RSCs. Actions/"use server" are unrelated.
Route files are no different than the pages router that preceded them, except they sit in a different filepath. They're not React components, and definitely not React Server Components. They're not even tsx/jsx files, which should hint at the fact that they're not components! They just declare ordinary HTTP endpoints.
RSCs are React components that call server-side code: https://react.dev/reference/rsc/server-components
Actions/"use server" functions are part of RSC: https://react.dev/reference/rsc/server-functions They're the RPC system used by client components to call server functions.
And they're what everyone here is talking about: the vulnerabilities were all in the action/use server codepaths. I suppose the clearest thing I could have said is that I like App Router + route files, but I dislike the magic RPC system: IMO React should simplify to JSON+HTTP and forms+HTTP, rather than a novel RPC system that doesn't interoperate with anything else and is much more difficult to secure.
I find myself just wanting to go all the way back to SPAs—no more server-side rendering at all. The arguments about performance, time to first paint, and whatever else we're supposed to care about just don't seem to matter on any projects I've worked on.
Vercel has become a merchant of complexity, as DHH likes to say.
Htmx does full server rendering and it works beautifully. Everything is RESTful: endpoints are resources, you GET (HTML) and POST (HTTP forms) on well-defined routes, and it works with any backend. Performance, including time to interactive and user-device battery life, is great.
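That model is easy to picture as code. A minimal Express-style sketch (route and markup are made up; any backend that speaks HTML works the same way):

```ts
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// GET a resource -> an HTML fragment that htmx swaps into the page.
app.get("/contacts", (_req, res) => {
  res.send("<ul><li>Ada</li><li>Grace</li></ul>");
});

// POST a plain HTML form -> updated HTML back.
app.post("/contacts", (req, res) => {
  res.send(`<li>${String(req.body.name ?? "")}</li>`);
});

app.listen(3000);
```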
I think the context matters here - for SEO-heavy marketing pages I still see Google executing a full browser-based crawl for only a subset of pages, so SSR matters for the remainder.
SPAs can still be server rendered.
Probably an unpopular take, but I really think Vercel has lost the plot. I don't know what happened to the company internally. But, it feels like the first few, early, iterations of Next were great, and then it all started progressively turning into slop from a design perspective.
An example of this is filesystem routing. Started off great, but now most Next projects look like the blast radius of a shell script gone terribly wrong.
There's also a(n in)famous GitHub response from one of the maintainers backwards-rationalising tech debt and accidental complexity as necessary. They're clearly smart, but the feeling I got from reading that comment was that they'd developed Stockholm syndrome towards their own codebase.
We're migrating away from both Next and Vercel post-haste.
What are you migrating to? Vanilla React?
Vanilla React, ts-rest.
I pretty much dumped a side project that was using Next over the new router. It's so much more convoluted, with way too many limitations. Who even really wants to make database queries in front-end code? That's sketchy as heck.
A lot of the functionality is obviously designed for Vercel's hosting platform, with local equivalents as an afterthought.
This is what I asked my small dev team after I recently joined and saw that we were using Next for the product: do we know how this works? Do we have even a partial mental model of what's happening? The answers were, sadly, pretty obvious. It was hard enough to get people to understand how hooks worked when they were introduced, but the newer Next versions seem even more difficult to grok.
I do respect the things React + Next team is trying to accomplish and it does feel like magic when it works but I find myself caring more and more about predictability when working with a team and with every major version of Next + React, that aspect seems to be drifting further and further away.
I feel the same. In fact, I'll soon be preparing a lunch and learn on trying out Solid.js. I'm hoping to convince the team that we should at least try a different mental model and see if we like it.
Should just use Vue.
Should just use Svelte.
This is why I'm a big advocate of Inertia.js [1]. For me it's the right balance between "serious" batteries-included traditional MVC backends like Laravel, Rails, Adonis, Django, etc., and modern component-based frontend tools like React, Vue, Svelte, etc. Responsibilities are clear, working in it is easy, and every single time I use it, it feels like I'm using the right tool for each task.
I can't recommend it enough. If you've never tried it or learnt about it, check it out. Unless you're building an offline-first app, it's 100% the safest way to go, in my opinion, for 99.9% of projects.
[1] https://inertiajs.com/
I am also in love with Inertia. It lets you use a React frontend and a Laravel backend without a dedicated API or endpoints, it's so much faster to develop and iterate, and you don't need to change your approach or mental model. It just makes total sense.
Instead of creating routes and using fetch(), you just pass the data directly to the client-side React JSX template; Inertia automatically injects the needed data as JSON into the client page.
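On the React side, that looks roughly like this (page and prop names are made up; the Laravel controller does the equivalent of Inertia::render('Users/Index', ['users' => $users])):

```tsx
// resources/js/Pages/Users/Index.tsx - props arrive from the controller,
// injected as JSON into the page by Inertia; no fetch() and no API route.
type User = { id: number; name: string };

export default function Index({ users }: { users: User[] }) {
  return (
    <ul>
      {users.map((u) => (
        <li key={u.id}>{u.name}</li>
      ))}
    </ul>
  );
}
```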
I do think RSC, and server-side rendering in general, were over-adopted.
Have a landing/marketing page? Then yes, by all means render on the server (or better yet, statically render to HTML files) so you squeeze every last millisecond you can out of that FCP. It's also easy to see the appeal for ecommerce or for social media sites like Facebook, Medium, and so on. Though these are also the use cases that probably benefit the least from React to begin with.
But for the "app" part of most online platforms, it's like, who cares? The time to load the JS bundle is a one time cost. If loading your SaaS dashboard after first login takes 2 seconds versus 3 seconds, who cares? The amount of complexity added by SSR and RSC is immense, I think the payout would have to be much more than it is.
I've been at an embarrassing number of places where turning off server-side rendering improved performance, because the number of browsers rendering content scales with the number of users, but the server-side rendering provisioning doesn't.
I'm no javascript framework expert, but how vulnerable do people estimate other frameworks like Angular, Sveltekit and Nuxt to be to this sort of thing? Is React more disposed to be at risk? Is it just because there are more eyes on React due to its popularity?
Nuxt, SvelteKit, etc. don't have an RSC equivalent, and won't in the future either. Vue has discussed it and explicitly rejected it. RSC was also proposed to SvelteKit; they rejected it too, citing that public endpoints should not be hidden.
They may get other vulnerabilities, since they are also in JS, but RSC-class vulnerabilities won't be there.
I 100% agree. I didn't even bother to think about the security implications - why worry about security implications if the whole thing seems like a bad idea?
In retrospect I should have given it more thought, since React Server Components are being pushed in many places!
Yeah. Being able to write code that's polymorphic between server and client is great, but it needs to be explicit and checked rather than invisible and magic. I see an analogy with e.g. code that can operate on many different types: it's a great feature, but really you want a generics feature where you can control which types which pieces of code operate on, not a completely untyped language.
You have two poison pills (`import "server-only"` and `import "client-only"`) that cause a build error when transitively imported from the wrong environment. This lets you, for example, constrain that a database layer or an env file can never make it into the client bundle (or that some logic that requires client state can never be accidentally used from the stateless request/response cycle). You also have two directives that explicitly expose entry points between the two worlds.
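For example (a minimal sketch; the module and endpoint are made up, but server-only is a real package that fails the build if the import ever reaches client-bundled code):

```ts
// lib/db.ts - the build breaks if any client component transitively imports this.
import "server-only";

export async function getUsers() {
  // Server-side credentials can never leak into the client bundle.
  const res = await fetch("https://internal-api.example/users", {
    headers: { authorization: `Bearer ${process.env.DB_TOKEN}` },
  });
  return res.json();
}
```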
The vulnerabilities in question aren't about wrong code/data getting pulled into a wrong environment. They're about weaknesses in the (de)serialization protocol, which relied on the dynamic nature of JavaScript (shared prototypes being writable, functions having a string constructor, etc.) to trick the server into executing code or looping. These are bad, yes, but they're not due to the client/server split being implicit. They're in the space of (de)serialization.
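The flavor of bug is similar to classic prototype pollution in hand-rolled deserializers. A generic illustration (not the actual RSC protocol code):

```ts
// A naive recursive merge, typical of ad-hoc deserializers:
function merge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (source[key] !== null && typeof source[key] === "object") {
      target[key] = merge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as a plain own property...
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
// ...but the merge walks through it into Object.prototype:
merge({}, payload);
console.log(({} as any).isAdmin); // true - every object in the process is affected
```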
I had this issue with a React app I inherited: there was a .env with credentials, and I couldn't figure out whether it was being read from the frontend or the backend.
So I ran a static analysis (grep) on the generated APK and...
Why would you have anything for the backend in an APK? Wouldn't that be an app, which by definition runs on the client?
Most frameworks also by default block ALL environment variables on the client side unless the name is prefixed with something specific, like NEXT_PUBLIC_*.
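In Next.js, for instance (variable names here are made up; only the NEXT_PUBLIC_ prefix is the real convention):

```ts
// Inlined into the client bundle at build time - safe to expose:
const apiBase = process.env.NEXT_PUBLIC_API_BASE;

// Never inlined into client code - undefined in the browser,
// readable only on the server:
const dbPassword = process.env.DATABASE_PASSWORD;
```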
> and how little documentation there seems to be on it
DISCLAIMER: After years of using Angular/Ember/jQuery/vanilla JS, jumping into React's functional components made me enjoy building front-ends again (and it still remains that way to this very day). That being said:
This has been maybe the biggest issue in React land for the last 5 years at least. And not just for RSC, but across the board.
It took them forever to put out clear guidance on how to start a new React project. They STILL refuse to even acknowledge CRA exist(s/ed). The maintainers have actively fought with library makers on this exact point, over and over and over again.
The new useEffect docs are great, but years late. It'll take another 3-4 years before the code LLMs spit out even resembles that guidance because of it.
And like sure, in 2020 maybe it didn't make sense to spell out the internals of RSC because it was still in active development. But it's 2025. And people are using it for real things. Either you want people to be successful or you want to put out shiny new toys. Maybe Guillermo needs to stop palling around with war criminals and actually build some shit.
It might be one of the most absurd things about React's team: their constitutional refusal to provide good docs until they're backed into a corner.
I remember when the point of an SPA was to not have all these elaborate conversations with the server. Just "here's the whole app, now only ask me for raw data."
It's funny (in a "wtf" sort of way) how in C# right now, the new hotness Microsoft is pushing is Blazor Server, which is basically old-school .aspx Web Forms but with websockets instead of full page reloads.
Every action, every button click, basically every input is sent to the server, and the changed dom is sent back to the client. And we're all just supposed to act like this isn't absolutely insane.
This is how client-server applications have been done for decades, it's basically only the browser that does the whole "big ole requests" thing.
The problem with API + frontend is:
1. You have two applications you have to ensure are always in sync and consistent.
2. Code is duplicated.
3. Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).
The idea of Blazor Server or Phoenix live view is "the server runs the show". There's now one source of truth, and you don't have to spend time making sure it's consistent.
I would say, really, 80% of bugs in web applications come from the client and server being out of sync. Even if you think about vulnerabilities like unauthorized access, it's usually just this. If you can eliminate or mitigate those 80%, then that's huge.
Oh, and that's not even touching on the performance implications. APIs can be performant, but they usually aren't. Usually adding or editing an API is treated as such a high-risk activity that people just don't do it - so instead they contort, like, 10 API calls together and discard 99% of the data to get the thing they want on the frontend.
No, it's not. I've built native Windows client-server applications, and many old-school web applications. I never once sent data to the server on every click, keydown, keyup, etc. That's the sort of thing that happens with a naive "livewire-like" approach. Most of the new tools do ship a little JavaScript, and make it slightly less chatty, but it's still not a great way to do it.
A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.
Blazor (and old-school .NET Web Forms) do a lot more back-and-forth than either of those two approaches.
The problem with all-backend is that to change the order of a couple buttons, you now need buy-in from the backend team. There's definitely a happy medium or several between these extremes: one of them is that you have full-stack devs and don't rigidly separate teams by the implementation technology. Some devs will of course specialize in one area more than others, but that's the point of having a diverse team. There's no good reason that communicating over http has to come with an automatic political boundary.
> You have two applications you have to ensure are always in sync and consistent.
No, the point of the API is to loosely couple the frontend and backend with a contract. The frontend doesn't need to model the backend, and the backend doesn't need to know what's happening on the frontend, it just needs to respect the API output. Changes/additions in the API are handled by API versioning, allowing overlap between old and new.
> Code is duplicated.
Not if the frontend isn't trying to model the internals of the backend.
> Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).
Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.
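Concretely, the overlap can be as simple as mounting both versions side by side for a while (an Express-style sketch; paths and response shapes are illustrative):

```ts
import express from "express";

const app = express();

// v1 keeps serving the old contract while clients migrate...
app.get("/api/v1/users/:id", (req, res) => {
  res.json({ id: req.params.id, name: "Ada Lovelace" });
});

// ...and v2 ships the new shape without breaking anyone.
app.get("/api/v2/users/:id", (req, res) => {
  res.json({ id: req.params.id, firstName: "Ada", lastName: "Lovelace" });
});

app.listen(3000);
```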
Yes, I say this every time this topic comes up: it took many years to finally have mainstream adoption of client-side interactivity so that things are finally mostly usable on high latency/lossy connections, but now people who’re always on 10ms connections are trying to snatch that away so that entirely local interactions like expanding/collapsing some panels are fucked up the moment a WebSocket is disconnected. Plus nice and simple stateless servers now need to hold all those long-lived connections. WTF. (Before you tell me about Alpine.js, have you actually tried mutating state on both client and server? I have with Phoenix and it sucks.)
Isn't that what Phoenix (Elixir) is? All server side, a small JS lib for partial loads; each individual website user gets their own process on the backend with its own state, and everything is tied together with websockets.
Basically you write only backend code, with all the tools available there, and a thin library makes sure to stitch the user input to your backend functions and the output to the frontend code.
Websockets + thin JS are best for real-time stuff more than standard CRUD forms. They fill in for a ton of high-interactivity use cases where people often reach for React/Vue (and then end up pushing absolutely everything needlessly into JS), while keeping the most important logic on the server with far less duplication.
For simple forms, personally, I find the server-by-default solution of https://turbo.hotwired.dev/ to be far better: the server just sends HTML over the wire and a JS library morph-replaces a subset of the DOM instead of doing full page reloads (i.e., clicking edit changes a small form in place instead of redirecting to one big form).
Idk about Phoenix, but having tried Blazor, the DX is really nice. It's just a terrible technical solution, and network latency / spotty wifi makes the page feel laggy. Not to mention it eats up server resources to do what could be done on the client instead with way fewer moving parts. Really the only advantage is you don't have to write JS.
It's basically what Phoenix LiveView specifically is. That's only one way to do it, and Phoenix is completely capable of traditional server rendering and SPA style development as well.
LiveView does provide the tools to simulate latency and move some interactions to be purely client side, but it's the developers' responsibility to take advantage of those and we know how that usually goes...
It's extremely nice! Coming from the React and Next.js world there is very little that I miss. I prefer to obsess over tests, business logic, scale and maintainability, but the price I pay is that I am no longer able to obsess over frontend micro-interactions.
Not the right platform for every product obviously, but I am starting to believe it is a very good choice for most.
Server side rendering has been with us since the beginning, and it still works great.
Client side page manipulation has its place in the world, but there's nothing wrong with the server sending page fragments, especially when you can work with a nice tech stack on the backend to generate it.
Sure. The problem with some frameworks is that they attached server events to things that should be handled on the front-end without a roundtrip.
For instance, I've seen pages with a server-linked HTML button that would open a details panel. That button should open the panel without resorting to sending the event and waiting for a response from the server, unless there is a very, very specific reason for it.
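That is, the kind of interaction that should stay entirely in the browser - a plain React sketch:

```tsx
import { useState, type ReactNode } from "react";

// Opening a details panel is pure client state: no server round trip required.
export function DetailsPanel({ summary, children }: { summary: string; children: ReactNode }) {
  const [open, setOpen] = useState(false);
  return (
    <div>
      <button onClick={() => setOpen((o) => !o)}>{summary}</button>
      {open && <div>{children}</div>}
    </div>
  );
}
```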
Yeah, I kind of hate it... Blazor has a massive payload and/or you're waiting seconds to see a response to a click event. I'm not fond of RSC either... and I say this as someone absolutely and more than happy with React, Redux and MUI for a long while at this point.
I've been loosely following the Rust equivalents (Leptos, Yew, Dioxus) for a while in the hopes that one of them would see a component library near the level of Mantine or MUI (Leptos + Thaw is pretty close). It feels a little safer in the long term than Blazor, IMO, and again, RSC for React feels icky at best.
> And we're all just supposed to act like this isn't absolutely insane.
This is insane to you only if you didn't experience the emergence of this technique 20-25 years ago. Almost all server-side templates were already partials of some sort in almost all the server-side environments, so why not just send the filled in partial?
Business logic belongs on the server, not the client. Never the client. The instant you start having to make the client smart enough to think about business logic, you are doomed.
> Then they rediscovered PHP, Rails, Java EE/Spring, ASP.NET, and rebooted SPAs into fullstack frameworks.
I can understand the dislike for Next but this is such a poor comparison. If any of those frameworks at any point did half the things React + Next-like frameworks accomplished and the apps/experiences we got since then, we wouldn't be having this discussion.
> If any of those frameworks at any point did half the things React + Next-like frameworks accomplished and the apps/experiences we got since then, we wouldn't be having this discussion.
This is interesting because every Next/React project I see has a slower velocity than the median Rails/Django product 15 years ago. They’re just as busy, but pushing so much complexity around means any productivity savings is cancelled out by maintenance and how much harder state management and security are. Theoretically performance is the justification for this but the multi-second page load times are unconvincing.
From my perspective, it really supports the criticism about culture in our field: none of this is magic, we can measure things like page-weight, response times, or time to complete common tasks (either for developers or our users), but so much of it is driven by what’s in vogue now rather than data.
+1 to this. I seriously believe frontend was more productive in the 2010-2015 era than now, despite the flaws in legacy tech. Projects today have longer timelines, are more complex, slower, harder to deploy, and a maintenance nightmare.
I remember maintaining webpack-based projects, and those were not exactly a model of simplicity. Nor was managing a fleet of pet dev instances with Puppet.
Puppet isn't a frontend problem, but I do agree on Webpack - which is one reason it wasn't super common. A lot of sites either didn't try to bundle things or had simple Make-level workflows, which were at least very simple, and at the time I noted that these often performed similarly. People did, and still do, want to believe there's a magic go-faster switch for their front end that obviates the need to reconsider their architectural choices, but anyone who actually measured it knew that bundlers just didn't deliver savings on that scale.
I still remember the joy of using the flagship Rails application - Basecamp.
Minimal JS, at least compared to now, mostly backend rendering, everything felt really fast and magical to use.
Now they accomplished this by imposing a lot of constraints on what you could do, but honestly it was solid UX at the time so it was fine.
Like the things you could do were just sane things to do in the first place, thus it felt quite ok as a dev.
React apps, _especially_ ones hosted on Next.js, rarely feel as snappy, and that is with the benefit of 15 years of engineering and a few orders of magnitude of perf improvements to most of the tech pieces of the stack.
It's just wild to me that we had faster web apps, with better organization, better dev ex, that were faster to build and easier to maintain.
The only "wins" I can see for a NextJS project are flexibility, animation (though this is also debatable), and maybe deployment cost, but again, I'm comparing to deploying Rails 15 years ago; things have improved there as well, I'm sure.
I know react can accomplish _a ton_ more on the front end but few projects actually need that power.
We are having this discussion because at some point the people behind React decided it should be profitable and made it the gateway drug for NextJS/Vercel.
I sometimes feel like I go on and on about this... but there is a difference between applications and pages (even if it's blurry at times), and Next is the result of people building pages adopting React, which was designed for applications, when they shouldn't have.
Yeah, but then people started building bloated static websites with those libraries instead of using a saner template engine + JavaScript approach, which is fast, easy to cache and debug, and has stellar performance and SEO.
It helped little that even React developers were saying it was the wrong tool for plenty of use cases.
Worst of all?
The entire nuance of choosing the right tool for the job has been long lost on most developers. Even the comments I read on HN make me question where the engineering part of the job starts.
It also doesn't help that non-technical stakeholders sometimes want a say in a tech stack conversation as well. I've been at more than one company where either the product team or the acquiring firm wanted us to migrate away from a tried and true Rails setup to a fullstack JS platform simply because they either wanted the UI development flexibility or to not have to hire Ruby devs.
Non-technical MBAs seem to have a hard time grasping that a JS-only platform is not a panacea and comes with serious tradeoffs.
Correct, their main purpose is ecosystem lock-in. Because why return JSON when you can return HTML? Why even build an SPA when the old-school model of server-side includes and PHP worked just fine? TS with Koa and htmx if you must, but server-side React components are kind of a waste of time. Give me one example where server-side React components are the answer over a fetch and JSON, or just fetching an HTML page.
The only example that has any traction, in my view, is web shops, which claim that time-to-render and time-to-interactivity are critical for customer retention.
Surely there are not so many people building e-commerce sites that server components should have ever become so popular.
The thing is, time to render and interactivity are much more reliant on the database queries and the internet connection of the user than anything else. Before, it was a spinner or a progress bar in the toolbar of the browser; now I get skeleton loaders and use half a GB for one tab.
Not to defend the practice (I've never partaken), but I think there are some legit timing arguments: a server renderer can integrate more requests faster thanks to being colocated with services and DBs.
Which brings me back to my main point about the web 1.0 architecture: serving pages from the server side, where the data lives. We've come full circle.
It also decoupled frontend and backend. You could use the same APIs for, say, mobile, desktop, and web. Teams didn't have to cross streams, allowing for deeper expertise on each side.
Now they are shoving server rendering into React Native…
It's really concerning that the biggest, most eye-grabbing part of this posting is the note with the following: "It’s common for critical CVEs to uncover follow‑up vulnerabilities."
Trying to justify the CVE before fully explaining the scope of the CVE, who is affected, or how to mitigate it -- yikes.
What’s concerning about it? The first thing I thought when I read the headline was “wow, another react CVE?” It’s not a justification, it’s an explanation to the most obvious immediate question.
It's definitely a defensive statement, proactively covering the situation as "normal". Normal it may be, but emphasizing that in the limited space of a tweet thread definitely indicates where their mind is on this, I'd think.
fwiw, the goal here wasn't to downplay the severity, but to explain the context to an audience who might not be familiar with CVEs and what's considered normal. I moved the note down so the more important information like severity, impacted versions, and upgrade instructions are first.
If there are so many React developers out there using server side components while not familiar with the concept of CVEs, we’re in very serious trouble.
It's ok, you gotta play the game. I'm more concerned about the fact that the downtime issue ranks higher than the security issue. But I'm assuming it relates to the specifics of the issue rather than reflecting on the priorities of the project as a whole.
We pioneered a lot of things with Opa, 15 years ago now. Opa featured automatic code "splitting" between client and server, and introduced the JSX syntax, although it wasn't called that at the time (Jordan at Facebook used Opa before creating React, though the discussions around the syntax happened at the W3C, notably with another Facebook employee, Tobie).
Since the Opa compiler was implemented in OCaml (we looked more like Svelte than like React as a pure lib), we performed a lot of static analysis to prevent the wide range of attacks on frontend code (XSS, CSRF, etc.) and backend code. The Opa compiler became a huge beast, in part because of that.
In retrospect, better separation of concerns and foregoing completely the idea of automatic code splitting (what React Server Components is), or even of having a single-app semantics, is probably better for the near future. Our vision (way too early) was that we could design a simple language for the semantics and a perfect, advanced compiler that would magically output both the client and the server from that specification. Maybe it's still doable with deterministic methods. Maybe LLMs will get to automatic code generation of all parts in one shot before.
Note that the exploits so far haven’t had much to do with “server code/data getting bundled into the client code” or similar which you’re alluding to. Also, RSC does not try to “guess” how to split code — it is deterministic and always user-controlled.
The vulnerabilities so far were weaknesses in the (de)serializer stemming from the dynamism of JavaScript — ability to hijack root object prototype, ability to toString functions to get their code, ability to override a Promise then implementation, ability to construct a function from a string. The patches are patching the (de)serializer to work around those dynamic pieces of JavaScript to avoid those gaps. This is similar to mistakes in parsers where they’re fooled by properties called hasOwnProperty/constructor/etc.
The serialization format is essentially "JSON with Promises and code chunk references", and it seems like there are enough places where the dynamic nature of JS can leak in that needed to be plugged. Hopefully, with more scrutiny on the protocol, these will be well understood by the team. The surface area there isn't growing much anymore (it's close to being feature-complete), and the (de)serializers themselves are roughly 5 kloc each.
The problem you had in Opa is solved in RSC with build-time assertions (import "server-only" is the server environment poison pill, and import "client-only" is the client environment poison pill). These poison pills work transitively up the module import stack and are statically enforced and prevent code (eg DB code, secrets, etc) from being pulled into the wrong environment. Of course this doesn’t prevent bugs in the (de)serializer but it’s why the overall approach is sound, in the absence of (de)serialization vulnerabilities.
Wouldn't it make more sense to keep React smaller and leave those features to frameworks? I liked it more when it was marketed as the View in MVC. It surely can still be used like that today, but it still feels bloated.
This code doesn’t exist in `react` or `react-dom`, no. Packages are released in lockstep to avoid confusion which is why everything got a version bump.
The vulnerable packages are the ones starting with `react-server-` (like `react-server-dom-webpack`) or anything that vendors their code (like `next` does).
React Server Components are something that should've never existed anyway.
Are people shipping faster because of them? Or is it all complexity and security vulnerabilities like this? You're not Facebook. Render HTML the classic way if you need server-rendered HTML. If you really do need an SPA - which is maybe 5% of the apps out there - then yeah, use client-side React, Vue, Svelte, etc., with none of those RPC server actions.
Agreed. Unfortunately, there's an entire content/bootcamp ecosystem pushing this stuff that came of age largely during the tech boom, as well as a bunch of early and mid-career devs and companies that are deeply tied to it. With major VC funding backing projects like Bun, Vercel, etc., I don't think this deeply flawed approach to development is going anywhere because the utopian JS fantasy of "it just works and runs everywhere flawlessly" is the ultimate myth of building for the web.
I'm not going to let go of my argument with Dan Abramov on X three years ago, where he held up RSC as an amazing feature and I told him over and over he was making a footgun. Ta-dah!
I'm a nobody PHP dev. He's a brilliant developer. I can't understand why he couldn't see this coming.
For what it’s worth, I’ve just built an app for myself with RSC, and I’m still a huge fan of this way of building and structuring web software.
I agree I underestimated the likelihood of bugs like this in the protocol, though that’s different from most discussions I’ve had about RSC (where concerns were about user code). The protocol itself has a fairly limited surface area (the serializer and deserializer are a few kloc each), and that’s where all of the exploits so far have concentrated.
Vulnerabilities are frustrating, and this seems to be the first time the protocol is getting a very close look from the security community. I wish this was something the team had done proactively. We’ll probably hear more from the team after things stabilize a bit.
A tale as old as time: hubris. A successful system is destined to either stop growing or morph into a monstrosity by taking on too many responsibilities. It's hard to know when to stop.
React lost me when it stopped being a rendering library and became a "runtime" instead. What do you know, when a runtime starts collapsing rendering, data fetching, caching, authorization boundaries, server and client into a single abstraction, the blast radius of any mistake becomes enormous.
I never saw brilliance in his contributions. Specially as React keeps being duct-taped.
Making complex things complex is easy.
Vue, on the other hand, is just brilliant. No wonder its creator, Evan You, went on to also create Vite: a creation so superior that it couldn't be confined to Vue, and the React community adopted it.
There's no need to take down and diminish other's contributions, especially in open source where everybody's free to bring a better solution to the table.
Or just fork if the maintainers want to go their way. If your solution has its merits it will find its fans.
I'm not defending React and this feature, and I also don't use it, but when making a statement like that the odds are stacked in your favor. It's much more likely that something's a bad idea than a good idea, just as a baseball player will at best fail just 65-70% of the time at the plate. Saying for every little thing that it's a bad idea will make you right most of the time.
But sometimes, occasionally, a moonshot idea becomes a home run. That's why I dislike cynicism and grizzled veterans for whom nothing will ever work.
I do hope this means we can finally stop hearing about RSC. The idea is an interesting solution to problems that never should exist in the first place.
A framework designed to blur the line between code running on the client and code running on the server — forgot the distinction between code running on the client and code running on the server. I don't know what they expected.
(The same confusion comes up regularly whenever you touch Next.js apps.)
I really like having a good old RESTful API (well, maybe I'm kind of faking the name, because you usually don't need HATEOAS)!
Except I find most frontend stacks lead to endless configuration (e.g. Vue with Pinia, router, translation, Tailwind, maybe PrimeVue, and a bunch of logic for handling sessions, redirects, toast messages, and whatnot), and I feel the pull to just go use Django or Laravel or Ruby on Rails, mostly with server-side templates. I much prefer that simplicity, even if it feels a bit icky to couple your frontend and backend like that.
I really wonder why the swarm intelligence of software developers still hasn’t decided on a single best clearly defined architecture for serving web applications, decades after the building blocks have been in place.
Let the server render everything. Let JS render everything, server is only providing the initial div and serves only JSON from then on. Actually let JS render partial HTML rendered on the server! Websockets anyone?
Imagine if SQL server architecture or iOS development had this kind of ADHD.
I've noticed a pattern in the security reports for a project I'm involved in. After a CVE is released, for the next month or so there will likely be additional reports targeting the same (or similar) areas of the framework. There is definitely a competitive spirit amongst security researchers as they try to get more CVEs credited to them (and potentially bounties).
Have you seen the code of Next.js? It's completely impenetrable, and the packages have legacy versions of the same files coexisting. It's like a huge hairball.
Our team is working to fix our Next.js project. It's so painful.
Now I'm doubting whether RSC is good engineering technology or good practice. The real world is tradeoffs: RSC really helped us improve our development speed, since we have good teammates with a solid understanding of the full stack.
How is it painful? You need to bump a minor version? It took me less than 5 minutes, 90% of which was waiting for CI. There's even a tool you can run to do it for you.
For the vast majority of projects, it seems like the disadvantages of these highly complex RPC systems far exceed the benefits - not just in terms of security, but also in the reduced observability compared to simple JSON.
I noticed requests that were exploiting the vulnerability were turning into timeouts pretty much immediately after rolling out the patch. I’m surprised it took so long for it to be announced.
I'm confused: did the update from last week for the RCE bug also include fixes for these new CVEs, or will I need to update again? npm audit says there are no issues.
My Umami stats box got "pwned" about 15 mins after the last CVE was published and I spent an hour or so cleaning up that mess and upgrading everything. Not looking forward to doing it again today.
No, but it's primarily because Meta has their own server infrastructure already. RSCs are essentially the React team trying to generalize the data fetching patterns from Meta's infrastructure into React itself so they can be used more broadly.
I wrote an extensive post and did a conference talk earlier this year recapping the overall development history and intent of RSCs, as best as I understand it from a mostly-external perspective:
Like I said above and in the post: it was an attempt to generalize the data fetching patterns developed inside of Meta and make them available to all React devs.
If you watch the various talks and articles done by the React team for the last 8 years, the general themes are around trying to improve page loading and data fetching experience.
Former React team member Dan Abramov did a whole series of posts earlier this year with differently-focused explanations of how to grok RSCs: "customizable Backend for Frontend", "avoiding unnecessary roundtrips", etc:
Conceptually, the one-liner Dan came up with that I liked is "extending React's component model to the server". It's still parent components passing props to child components, "just" spread across multiple computers.
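In code, that looks something like the sketch below (getNote and the note shape are made up; the "use client"/server split follows the documented RSC conventions):

```tsx
// app/note/page.tsx (server) - may read data directly; never ships to the browser.
import { getNote } from "@/lib/notes"; // hypothetical server-side data helper
import { LikeButton } from "./LikeButton";

export default async function NotePage() {
  const note = await getNote("intro");
  // Props crossing this line are serialized and sent over the network
  // from the server component to the client component:
  return <LikeButton title={note.title} initialLikes={note.likes} />;
}

// app/note/LikeButton.tsx (client) - its own file, starting with "use client":
//   "use client";
//   import { useState } from "react";
//   export function LikeButton({ title, initialLikes }: { title: string; initialLikes: number }) {
//     const [likes, setLikes] = useState(initialLikes);
//     return <button onClick={() => setLikes(likes + 1)}>{title}: {likes}</button>;
//   }
```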
Yeah the "just" is doing a lot of things, nobody asked for a react server but it turns out it could be the base for a $10B cloud company. Classical open source rugpull.
LOL. I must have divination powers.
I am currently working on a UI framework and opened an issue just 3 weeks ago that says:
***
Seems that server functions are all the rage.
We are unlikely to have them.
The main reason is that it ties the frontend and the backend together in undesirable ways.
It forces a JS backend upon people (what if I want to use Go, for instance?).
The API is not client agnostic anymore.
How to specify middleware is not clear.
Requires a bundler, so destroys isomorphism (isomorphic code requires no difference between the client and the server/ environment agnostic).
Even if it requires a bundler because it separates client and server implementation files, it blurs the data scoping (especially worrying for sensitive data)
Do one thing and do it well: separate frontend and backend.
It might be something that is useful for people who only plan on having a javascript web frontend server separate from the API server that links to the backend service.
Besides, it is really not obvious to me how it becomes architecturally clearer. It would double the work in terms of security wrt authorization etc. This is at least not a generic pattern.
So I'd tend to go opposite to the trend and say no.
Who knows, we might revisit it if anything changes in the future.
***
And boy, look at the future 3 weeks later...
To be fair, the one good thing is that they are hardening their implementation thanks to these discoveries. But it still seems to me that this is wholly unnecessary and possibly will never be safe enough.
Anyway, not to toot my own horn, I know for a fact these things are difficult. Just found the timing funny. :)
React patches one vulnerability and two more are revealed, just like a Hydra.
At this point you might as well deprecate RSC as it is clearly a contraption for someone trying to justify a promotion at Meta.
Maybe they are going to silently remove “Built RSC at Meta!” in their LinkedIn bios after this. So what other vulnerabilities are going to be revealed in React after this one?
> We are not using RSC at Meta yet, bc of limits of our packaging infra (it’s great at different things) and because Relay+GraphQL gives us many of the same benefits as RSCs. But we are fans and users of server driven UI and incrementally working toward RSC.
Interesting how DoS ranks higher than code exposure in severity.
I personally think it's the other way around, since code exposure increases the odds that a security breach happens, while DoS does not increase chances of exposure, but affects reliability.
Obviously we are simplifying a multidimensional severity to one dimension, but I personally think that breaches are more important than reliability. I'd rather have my app go down than be breached.
And I don't think it's a trivial difference: if you'd rather have a breach than downtime, you will have a breach.
I remember some podcast interview with Miško Hevery talking about how Qwik was very emphatic about what code ran on the server and what ran on the client. Seems self-evident and prescient. It was a great interview as Miško Hevery is extremely articulate about the problems at hand. If I find it, I'll post.
Oh boy, I somehow missed that React was offering these.
Google has a similar technology in-house, and it was a bit of a nightmare a few years back; the necessary steps to get it working correctly required some very delicate dancing.
The JavaScript fanatics will downvote me for saying this, but I'll say this, "using a single JavaScript codebase on your client-side and server-side is like cooking food in your toilet, sooner or later, contamination is guaranteed" [1]
This isn't a JavaScript problem, this is a React problem. You could theoretically rewrite React and RSC in any language and the outcome would be the same. Say Python ran in the browser natively, and you reimplemented React on browser and server in Python. Same problem, not JavaScript.
> Say Python ran in the browser natively, and you reimplemented React on browser and server in Python. Same problem, not JavaScript.
Yes.
And since Python does not natively run in the browser, that mistake never happens.
With JavaScript, the desire to have "backend and frontend in a single codebase" requires active resistance.
It's the same vulnerabilities because Next uses the vulnerable parts of React.
Your rationale is quite poor, as I can write an isomorphic web app in C or Rust or Go and run parts in the browser - what then? Look, many of us also strongly dislike JavaScript, but generally that distaste is based on its actual shortcomings and failures; you don't have to invent new ones, plenty already exist.
> I can write an isomorphic web app in C or Rust or Go and run parts in the browser - what then?
If you have a single codebase for Go-based code running in an untrusted browser (the "toilet") and a trusted backend (the "kitchen"), then the same contamination is highly likely.
Personally I prefer simple software without bugs! This security vulnerability highlights a serious issue with React. It’s a SPA framework, a server side framework, and a functional component library all at the same time. And it’s apparently getting so complex that it’s introducing source code exposures.
I’m not interested in flame wars per se, but I can tell you there are better alternatives, and that the closer you stay towards targeting the browser itself the better, because browser APIs are at least an order of magnitude more secure and performant than equivalent JS operations.
After Log4Shell, additional CVEs were reported as well.
It’s common for critical CVEs to uncover follow‑up vulnerabilities because researchers scrutinize adjacent code paths looking for variant exploit techniques to test whether the initial mitigation can be bypassed.
React Server Components always felt uncomfortable to me because they make it hard to look at a piece of JavaScript code and derive which parts of it are going to run on the client and which parts will run on the server.
It turns out this introduces another problem too: in order to get that to work you need to implement some kind of DEEP serialization RPC mechanism - which is kind of opaque to the developer and, as we've recently seen, is a risky spot in terms of potential security vulnerabilities.
I was a fan of NextJS in the pages router era. You knew exactly where the line was between server and client code and it was pretty easy to keep track of that. Then I've began a new project and wanted to try out app router and I hated it. So many (to me common things) where just not possible because the code can run in the client and on the server so Headers might not always be available and it was just pure confusion whats running where.
I think we (the Next.js user community) need to organize and either convince Vercel to announce official support of the Pages router forever (or at least indefinitely, and stop posturing it as a deprecated-ish thing), or else fork Next.js and maintain the stable version of it that so many of us enjoyed. Every time Next comes up I see a ton of comments like this, everyone I talk to says this, and I almost never hear anyone say they like the App Router (and this is a pretty contrarian site, so if they existed I’d expect to see them here).
I would highly recommend just checking out TanStack Router/Start instead. It fills a different niche, with a slightly different approach, that the Next.js app router just hasn't prioritized enabling anymore.
What app router has become has its ideal uses, but if you explicitly preferred the DX of the pages router, you might enjoy TanStack Router/Start even more.
Last time I tried tanstack router, I spent half a day trying to get breadcrumbs to work. Nit: I also can't stand their docs site.
OK I am personally surprised that anyone likes the Pages router? Pages routing has all the benefits (simple to get started the first time) and all the downsides (maintainability of larger projects goes to hell) of having your routing being determined by where in the file system things are.
I don't care about having things simple to get started the first time, because soon I will have to start things a second or third time. If I have a little bit more complexity to get things started because routing is handled by code and not filesystem placement then I will pretty quickly develop templates to handle this, and in the end it will be easier to get things started the nth time than it is with the simple version.
Do I like the app router? No, Vercel does a crap job on at least two things - routing and building (server codes etc. can be considered as a subset of the routing problem), but saying I dislike app router is praising page router with too faint a damnation.
Remix 2 is beautiful in its abstractions. The thing with NextJS Roadmap is that it is tightly coupled with Vercel's financial incentives. A more complex & more server code runs ensure more $$$ for them. I don't see community being able to do much change just like how useContextSelector was deprioritized by the React Core team.
Align early on wrt values of a framework and take a closer look at the funder's incentives.
I've been using React since its initial release; I think both RSC and App Router are great, and things are better than ever.
It's the first stack that allows me to avoid REST or GraphQL endpoints by default, which was the main source of frontend overhead before RSC. Previously I had to make choices on how to organize API, which GraphQL client to choose (and none of them are perfect), how to optimize routes and waterfalls, etc. Now I just write exactly what I mean, with the very minimal set of external helper libs (nuqs and next-safe-action), and the framework matches my mental model of where I want to get very well.
Anti-React and anti-Next.js bias on HN is something that confuses me a lot; for many other topics here I feel pretty aligned with the crowd opinion on things, but not on this.
Can you describe how rsc allows you to avoid rest endpoints? Are you just putting your rsc server directly on top of your database?
If I control both the backend and the frontend, yes. Server-only async components on top of layout/page component hierarchy, components -> DTO layer -> Prisma. Similar to this: https://nextjs.org/blog/security-nextjs-server-components-ac...
You still need API routes for stuff like data-heavy async dropdowns, or anything else that's hard to express as a pure URL -> HTML, but it cuts down the number of routes you need by 90% or more.
You’re just shifting the problem from HTTP to an adhoc protocol on top of it.
Yes but they’re also shifting the problem from one they explicitly have to deal with themselves to one the framework handles for them.
Personally I don’t like it but I do understand the appeal.
Maybe, but you go from one of the most tested protocol with a lot of tooling to another with not even a specification.
Some of the anti-next might be from things like solid-start and tanstack-start existing, which can do similar things but without the whole "you've used state without marking as a client component thus I will stop everything" factor of nextjs.
Not to mention the whole middleware and being able to access the incoming request wherever you like.
And vercel
That's true, can't blame people for having a bad taste of VC funded companies taking the reigns on open source projects.
Personally, I love App Router: it reminds me of the Meta monorepos, where everything related to a certain domain is kept in the same directory. For example, anything related to user login/creation/deletion might be kept in the /app/users directory, etc.
But I really, really do not like React Server Components as they work today. I think it's probably better to strip them out in favor of just a route.ts file in the directory, rather than the actions files with "use server" and all the associated complexity.
Technically, you can build apps like that using App Router by just not having "use server" anywhere! But it's an annoying, sometimes quite dangerous footgun to have all the associated baggage there waiting for an exploit... The underlying code is there even if you aren't using it.
I think my ideal setup would be:
1. route.ts for RESTful routes
2. actions/SOME_FORM_NAME.ts for built-in form parsing + handling. Those files can only expose a POST, and are basically a named route file that has form data parsing. There's no auto-RPC, it's just an HTTP handler that accepts form data at the named path.
3. no other built-in magic.
Route files are still RSCs. Actions/“use server” are unrelated.
Route files are no different than the pages router that preceded them, except they sit in a different filepath. They're not React components, and definitely not React Server Components. They're not even tsx/jsx files, which should hint at the fact that they're not components! They just declare ordinary HTTP endpoints.
RSCs are React components that call server side code. https://react.dev/reference/rsc/server-components
Actions/"use server" functions are part of RSC: https://react.dev/reference/rsc/server-functions They're the RPC system used by client components to call server functions.
And they're what everyone here is talking about: the vulnerabilities were all in the action/use server codepaths. I suppose the clearest thing I could have said is that I like App Router + route files, but I dislike the magic RPC system: IMO React should simplify to JSON+HTTP and forms+HTTP, rather than a novel RPC system that doesn't interoperate with anything else and is much more difficult to secure.
I find myself just wanting to go all the way back to SPAs—no more server-side rendering at all. The arguments about performance, time to first paint, and whatever else we're supposed to care about just don't seem to matter on any projects I've worked on.
Vercel has become a merchant of complexity, as DHH likes to say.
Htmx does full server rendering and it works beautifully. Everything is RESTful: endpoints are resources, you GET (HTML) and POST (HTTP forms) on well-defined routes, and it works with any backend. Performance, including time to interactive and user-device battery life, is great.
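A sketch of that contract from a TypeScript backend (Express is an assumption here; htmx itself is backend-agnostic): endpoints return HTML fragments, and forms POST ordinary urlencoded bodies.

```ts
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false })); // parse HTML form posts

const escapeHtml = (s: string) =>
  s.replace(/[&<>"']/g, (c) => `&#${c.charCodeAt(0)};`);

const todos: string[] = [];
const renderList = () =>
  `<ul>${todos.map((t) => `<li>${escapeHtml(t)}</li>`).join("")}</ul>`;

// GET a resource as HTML.
app.get("/todos", (_req, res) => {
  res.send(renderList());
});

// POST a form; respond with the updated fragment for htmx to swap in.
app.post("/todos", (req, res) => {
  todos.push(String(req.body.title ?? ""));
  res.send(renderList());
});

app.listen(3000);
```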
I think the context matters here - for SEO heavy marketing pages I still see google only executing a full browser based crawl for a subset of pages. So SSR matters for the remainder.
SPAs can still be server rendered.
Probably an unpopular take, but I really think Vercel has lost the plot. I don't know what happened to the company internally. But, it feels like the first few, early, iterations of Next were great, and then it all started progressively turning into slop from a design perspective.
An example of this is filesystem routing. Started off great, but now most Next projects look like the blast radius of a shell script gone terribly wrong.
There's also a(n in)famous GitHub response from one of the maintainers backwards-rationalising tech debt and accidental complexity as necessary. They're clearly smart, but the feeling I got from reading that comment was that they developed Stockholm syndrome towards their own codebase.
We're migrating away from both Next and Vercel post-haste
What are you migrating to? Vanilla React?
Vanilla react, ts-rest
I pretty much dumped a side project that was using next over the new router. It's so much more convoluted, way too many limitations. Who even really wants to make database queries in front end code? That's sketchy as heck.
A lot of functionality is obviously designed for Vercel's hosting platform, with local equivalents as an afterthought.
This is what I asked my small dev team after I recently joined and saw that we were using Next for the product — do we know how this works? Do we have even a partial mental model of what's happening? The answers were sadly, pretty obvious. It was hard enough to get people to understand how hooks worked when they were introduced, but the newer Next versions seem even more difficult to grok.
I do respect the things the React + Next team is trying to accomplish, and it does feel like magic when it works, but I find myself caring more and more about predictability when working with a team, and with every major version of Next + React that aspect seems to be drifting further and further away.
I feel the same. In fact, I'll soon be preparing a lunch and learn on trying out Solid.js. I'm hoping to convince the team that we should at least try a different mental model and see if we like it.
Should just use Vue.
Should just use Svelte.
This is why I'm a big advocate of Inertia.js [1]. For me it strikes the right balance between "serious" batteries-included traditional MVC backends like Laravel, Rails, Adonis, Django, etc., and modern component-based frontend tools like React, Vue, Svelte, etc. Responsibilities are clear, working in it is easy, and every single time I use it, it feels like I'm using the right tool for each task.
I can't recommend it enough. If you've never tried it or learnt about it, check it out. Unless you're building an offline-first app, it's 100% the safest way to go in my opinion for 99.9% of projects.
[1] https://inertiajs.com/
I am also in love with Inertia. It lets you use a React frontend and a Laravel backend without a dedicated API or endpoints; it's so much faster to develop and iterate, and you don't need to change your approach or mental model. It just makes total sense.
Instead of creating routes and using fetch(), you just pass the data directly to the client-side React JSX template; Inertia automatically injects the needed data as JSON into the client page.
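A minimal sketch of what that looks like on the React side, using the @inertiajs/react adapter (file path and prop shape are hypothetical). The server route, e.g. in Laravel, returns Inertia::render('Users/Show', ['user' => $user]), and the props arrive as plain JSON with no fetch() and no API route:

```tsx
// resources/js/Pages/Users/Show.tsx
import { Head } from "@inertiajs/react";

interface User {
  id: number;
  name: string;
}

// Inertia injects `user` as a prop from the server-side render call.
export default function Show({ user }: { user: User }) {
  return (
    <>
      <Head title={user.name} />
      <h1>{user.name}</h1>
    </>
  );
}
```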
I do think RSC and server side rendering in general was over adopted.
Have a Landing/marketing page? Then, yes, by all means render on the server (or better yet statically render to html files) so you squeeze every last millisecond you can out of that FCP. Also easy to see the appeal for ecommerce or social media sites like facebook, medium, and so on. Though these are also use cases that probably benefit the least from React to begin with.
But for the "app" part of most online platforms, it's like, who cares? The time to load the JS bundle is a one time cost. If loading your SaaS dashboard after first login takes 2 seconds versus 3 seconds, who cares? The amount of complexity added by SSR and RSC is immense, I think the payout would have to be much more than it is.
Deeply agree.
I've been at an embarrassing number of places where turning off server-side rendering improved performance: the number of browsers rendering content scales with the number of users, but the server-side rendering provisioning doesn't.
I'm no javascript framework expert, but how vulnerable do people estimate other frameworks like Angular, Sveltekit and Nuxt to be to this sort of thing? Is React more disposed to be at risk? Is it just because there are more eyes on React due to its popularity?
Nuxt, SvelteKit, etc. don't have an RSC equivalent, and won't in the future either. Vue has discussed it and explicitly rejected it. RSC was also proposed to SvelteKit; they rejected it too, citing that public endpoints should not be hidden.
They may get other vulnerabilities, since they are also in JS, but RSC-class vulnerabilities won't be there.
I 100% agree. I didn't even bother to think about the security implications - why worry about security implications if the whole thing seems like a bad idea?
In retrospect I should have given it more thought, since React Server Components are pushed in many places!
This happens in Next.js as well https://github.com/vercel/next.js/discussions/11106
Yeah. Being able to write code that's polymorphic between server and client is great, but it needs to be explicit and checked rather than invisible and magic. I see an analogy with e.g. code that can operate on many different types: it's a great feature, but really you want a generics feature where you can control which types which pieces of code operate on, not a completely untyped language.
It is explicit and checked.
You have two poison pills (`import "server-only"` and `import "client-only"`) that cause a build error when transitively imported from the wrong environment. This lets you, for example, constrain that a database layer or an env file can never make it into the client bundle (or that some logic that requires client state can never be accidentally used from the stateless request/response cycle). You also have two directives that explicitly expose entry points between the two worlds.
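A minimal sketch of the server-side pill, assuming the standard `server-only` npm package and a hypothetical Prisma model:

```ts
// lib/reports.ts: the build fails if anything in a client bundle
// (transitively) imports this module; `client-only` is the mirror image.
import "server-only";

import { PrismaClient } from "@prisma/client"; // example server-only dependency

const prisma = new PrismaClient();

export async function getSecretReports(userId: string) {
  // The poison pill guarantees this query, and the credentials behind
  // it, can never be bundled into the browser.
  return prisma.report.findMany({ where: { userId } }); // hypothetical model
}
```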
The vulnerabilities in question aren't about wrong code/data getting pulled into the wrong environment. They're about weaknesses in the (de)serialization protocol, which relied on the dynamic nature of JavaScript (shared prototypes being writable, functions having a string constructor, etc.) to trick the server into executing code or looping. These are bad, yes, but they're not due to the client/server split being implicit. They're in the space of (de)serialization.
I had this issue with a React app I inherited: there was a .env with credentials, and I couldn't figure out whether it was being read from the frontend or the backend.
So I ran a static analysis (grep) on the apk generated and
points light at face dramatically
the credentials were inside the frontend!
Why would you have anything for the backend in an APK? Wouldn't that be an app, which by definition runs on the client?
Most frameworks also by default block ALL environment variables on the client side unless the name is prefixed with something specific, like NEXT_PUBLIC_*
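A sketch of that convention, using Next.js naming as the example (variable names hypothetical):

```ts
// config.ts: at build time, client bundles only inline variables with
// the public prefix; everything else reads as undefined in the browser.
export const apiBase = process.env.NEXT_PUBLIC_API_BASE_URL; // exposed deliberately

// Readable only in server code; never inlined into client JS:
export const dbUrl = process.env.DATABASE_URL;
```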
> Most frameworks also by default block ALL environment variables on the client side
I’ve been out of full stack dev for ~5 years now, and this statement is breaking my brain
Why would you have anything for the backend in a browser app? Wouldn't that by definition run on the client?
These kinds of Node + mobile apps typically use an embedded browser like Electron or a built-in webview; it's not much different from a web app.
Turns out separation of concerns has been a valid approach for decades.
The React team reinvents the wheel again and again, and now we're back at Laravel.
When I looked into RSC last week, I was struck by how complex it was, and how little documentation there seems to be on it.
In fairness, React presents it as an "experimental" feature, although that didn't stop Next.js from deploying it widely.
I suspect there will be many more security issues found in it over the next few weeks.
Next.js ups the complexity by orders of magnitude; I couldn't even figure out how to set any breakpoints on the RSC code within Next.
Next vendors most of their dependencies, and they have an enormously complex build system.
The benefits that Next and RSC offer really don't seem to be worth the cost.
People did complain about next exposing "react, not ready for production" things as "the latest and greatest thing from nextjs" for quite a while now
I had moved off Next.js for reasons like these; the mental load was getting too heavy for not much benefit.
> and how little documentation there seems to be on it
DISCLAIMER: After years of using Angular/Ember/Jquery/VanillaJs, jumping into React's functional components made me enjoy building front-ends again (and still remains that way to this very day). That being said:
This has been maybe the biggest issue in React land for the last 5 years at least. And not just for RSC, but across the board.
It took them forever to put out clear guidance on how to start a new React project. They STILL refuse to even acknowledge CRA exist(s/ed). The maintainers have actively fought with library makers on this exact point, over and over and over again.
The new useEffect docs are great, but years late. It'll take another 3-4 years before the code LLMs spit out even resembles that guidance because of it.
And like sure, in 2020 maybe it didn't make sense to spell out the internals of RSC because it was still in active development. But it's 2025. And people are using it for real things. Either you want people to be successful or you want to put out shiny new toys. Maybe Guillermo needs to stop palling around with war criminals and actually build some shit.
It might be one of the most absurd things about React's team: their constitutional refusal to provide good docs until they're backed into a corner.
I remember when the point of an SPA was to not have all these elaborate conversations with the server. Just "here's the whole app, now only ask me for raw data."
It's funny (in a "wtf" sort of way) how in C# right now, the new hotness Microsoft is pushing is Blazor Server, which is basically old-school .aspx Web Forms but with websockets instead of full page reloads.
Every action, every button click, basically every input is sent to the server, and the changed dom is sent back to the client. And we're all just supposed to act like this isn't absolutely insane.
This is how client-server applications have been done for decades, it's basically only the browser that does the whole "big ole requests" thing.
The problem with API + frontend is:
1. You have two applications you have to ensure are always in sync and consistent.
2. Code is duplicated.
3. Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).
The idea of Blazor Server or Phoenix live view is "the server runs the show". There's now one source of truth, and you don't have to spend time making sure it's consistent.
I would say, really, 80% of bugs in web applications come from the client and server being out of sync. Even if you think about vulnerability like unauthorized access, it's usually just this. If you can eliminate those 80% or mitigate them, then that's huge.
Oh, and that's not even touching on the performance implications. APIs can be performant, but they usually aren't. Usually adding or editing an API is treated as such a high-risk activity that people just don't do it - so instead they contort, like, 10 API calls together and discard 99% of the data to get the thing they want on the frontend.
No, it's not. I've built native Windows client-server applications, and many old-school web applications. I never once sent data to the server on every click, keydown, keyup, etc. That's the sort of thing that happens with a naive "livewire-like" approach. Most of the new tools do ship a little JavaScript, and make it slightly less chatty, but it's still not a great way to do it.
A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.
Blazor (and old-school .NET Web Forms) do a lot more back-and-forth than either of those two approaches.
The problem with all-backend is that to change the order of a couple buttons, you now need buy-in from the backend team. There's definitely a happy medium or several between these extremes: one of them is that you have full-stack devs and don't rigidly separate teams by the implementation technology. Some devs will of course specialize in one area more than others, but that's the point of having a diverse team. There's no good reason that communicating over http has to come with an automatic political boundary.
Stop having backend and frontend teams. Start having crossfunctional teams. Problem solved.
> You have two applications you have to ensure are always in sync and consistent.
No, the point of the API is to loosely couple the frontend and backend with a contract. The frontend doesn't need to model the backend, and the backend doesn't need to know what's happening on the frontend, it just needs to respect the API output. Changes/additions in the API are handled by API versioning, allowing overlap between old and new.
> Code is duplicated.
Not if the frontend isn't trying to model the internals of the backend.
> Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).
Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.
Yes, I say this every time this topic comes up: it took many years to finally have mainstream adoption of client-side interactivity so that things are finally mostly usable on high latency/lossy connections, but now people who’re always on 10ms connections are trying to snatch that away so that entirely local interactions like expanding/collapsing some panels are fucked up the moment a WebSocket is disconnected. Plus nice and simple stateless servers now need to hold all those long-lived connections. WTF. (Before you tell me about Alpine.js, have you actually tried mutating state on both client and server? I have with Phoenix and it sucks.)
Isn’t that what Phoenix (Elixir) is? All server side, small js lib for partial loads, each individual website user gets their own thread on the backend with its own state and everything is tied together with websockets.
Basically you write only backend code, with all the tools available there, and a thin library makes sure to stitch the user input to your backend functions and the output to the frontend code.
Honestly it is kinda nice.
Also what https://anycable.io/ does in Rails (with a server written in Go)
WebSockets + thin JS are best for real-time stuff more than standard CRUD forms. They fill in for a ton of high-interactivity use cases where people often reach for React/Vue (and then end up pushing absolutely everything needlessly into JS), while keeping the most important logic on the server with far less duplication.
For simple forms, personally I find the server-by-default solution of https://turbo.hotwired.dev/ to be far better: the server just sends HTML over the wire and a JS library morph-replaces a subset of the DOM instead of doing full page reloads (i.e., clicking edit changes a small form in place, instead of redirecting to one big form).
Idk about Phoenix, but having tried Blazor, the DX is really nice. It's just a terrible technical solution, and network latency / spotty wifi makes the page feel laggy. Not to mention it eats up server resources to do what could be done on the client instead with way fewer moving parts. Really the only advantage is you don't have to write JS.
It's basically what Phoenix LiveView specifically is. That's only one way to do it, and Phoenix is completely capable of traditional server rendering and SPA style development as well.
LiveView does provide the tools to simulate latency and move some interactions to be purely client side, but it's the developers' responsibility to take advantage of those and we know how that usually goes...
> Honestly it is kinda nice.
It's extremely nice! Coming from the React and Next.js world there is very little that I miss. I prefer to obsess over tests, business logic, scale and maintainability, but the price I pay is that I am no longer able to obsess over frontend micro-interactions.
Not the right platform for every product obviously, but I am starting to believe it is a very good choice for most.
Well, maybe it isn't so insane?
Server side rendering has been with us since the beginning, and it still works great.
Client side page manipulation has its place in the world, but there's nothing wrong with the server sending page fragments, especially when you can work with a nice tech stack on the backend to generate it.
Sure. The problem with some frameworks is that they attached server events to things that should be handled on the front-end without a roundtrip.
For instance, I've seen pages with a server-linked HTML button that would open a details panel. That button should open the panel without resorting to sending the event and waiting for a response from the server, unless there is a very, very specific reason for it.
Yeah, I kind of hate it... Blazor has a massive payload and/or you're waiting seconds to see a response to a click event. I'm not fond of RSC either... and I say this as someone absolutely and more than happy with React, Redux and MUI for a long while at this point.
I've been loosely following the Rust equivalents (Leptos, Yew, Dioxus) for a while in the hopes that one of them would see a component library near the level of Mantine or MUI (Leptos + Thaw is pretty close). It feels a little safer in the longer term than Blazor IMO, and again, RSC for React feels icky at best.
Hotwire et al. are also doing part of this. It isn't a new concept, but it seems to come and go in terms of popularity.
It's kinda nice.
Main downside is the hot reload is not nearly as nice as TS.
But the coding experience with a C# BE/stack is really nice for admin/internal tools.
I saw this kind of interactivity in the Apache Wicket Java framework. It's a very interesting approach.
> And we're all just supposed to act like this isn't absolutely insane.
This is insane to you only if you didn't experience the emergence of this technique 20-25 years ago. Almost all server-side templates were already partials of some sort in almost all the server-side environments, so why not just send the filled in partial?
Business logic belongs on the server, not the client. Never the client. The instant you start having to make the client smart enough to think about business logic, you are doomed.
Until they discovered why so many of us have kept with server side rendering, and only as much JS as needed.
Then they rediscovered PHP, Rails, Java EE/Spring, ASP.NET, and rebooted SPAs into fullstack frameworks.
> Then they rediscovered PHP, Rails, Java EE/Spring, ASP.NET, and rebooted SPAs into fullstack frameworks.
I can understand the dislike for Next but this is such a poor comparison. If any of those frameworks at any point did half the things React + Next-like frameworks accomplished and the apps/experiences we got since then, we wouldn't be having this discussion.
> If any of those frameworks at any point did half the things React + Next-like frameworks accomplished and the apps/experiences we got since then, we wouldn't be having this discussion.
This is interesting because every Next/React project I see has slower velocity than the median Rails/Django product 15 years ago. They're just as busy, but pushing so much complexity around means any productivity savings are cancelled out by maintenance and by how much harder state management and security are. Theoretically performance is the justification for this, but the multi-second page load times are unconvincing.
From my perspective, it really supports the criticism about culture in our field: none of this is magic, we can measure things like page-weight, response times, or time to complete common tasks (either for developers or our users), but so much of it is driven by what’s in vogue now rather than data.
+1 to this. I seriously believe frontend was more productive in the 2010-2015 era than now, despite the flaws in legacy tech. Projects today have longer timelines, are more complex, slower, harder to deploy, and a maintenance nightmare.
I remember maintaining webpack-based projects, and those were not exactly a model of simplicity. Nor was managing a fleet of pet dev instances with Puppet.
Puppet isn’t a front end problem, but I do agree on Webpack - which is one reason it wasn’t super common. A lot of sites either didn’t try to bundle things or had simple Make-level workflows which were at least very simple, and at the time I noted that these often performed similarly: people did, and still do, want to believe there’s a magic go-faster switch for their front end which obviates the need to reconsider their architectural choices but anyone who actually measured it knew that bundlers just didn’t deliver savings on that scale.
I do kind of miss gulp and wish there was a modern TS version. Vite is mighty powerful, but pretty opaque.
I'm not so sure those woes are unique to frontend development.
I still remember the joy of using the flagship rails application - basecamp. Minimal JS, at least compared to now, mostly backend rendering, everything felt really fast and magical to use.
Now they accomplished this by imposing a lot of constraints on what you could do, but honestly it was solid UX at the time so it was fine.
Like the things you could do were just sane things to do in the first place, thus it felt quite ok as a dev.
React apps, _especially_ ones hosted on Next.js, rarely feel as snappy, and that is with the benefit of 15 years of engineering and a few orders of magnitude of perf improvements to most pieces of the stack.
It’s just wild to me that we had faster web apps, with better organization, better dev ex, faster to build and easier to maintain.
The only “wins” I can see for a nextjs project is flexibility, animation (though this is also debatable), and maybe deployment cost, but again I’m comparing to deploying rails 15 years ago, things have improved there as well I’m sure.
I know react can accomplish _a ton_ more on the front end but few projects actually need that power.
How does Next accomplish more than a PHP/Ruby/whatever backend with a React frontend?
If anything the latter is much easier to maintain and to develop for.
Blazor? Razor pages?
They weren't the new shiny to pump up the CV and fill the GitHub repo for job applications.
We are having this discussion because at some point the people behind React decided it should be profitable and made it the gateway drug for Next.js/Vercel.
Worse, because Vercel then started its marketing wave, many SaaS products now only support React/Next.js as extension points.
Using anything else requires yak shaving instead of coding the application code.
That is the only reason I get to use them.
I sometimes feel like I go on and on about this... but there is a difference between applications and pages (even if blurry at times), and Next is the result of people doing pages adopting React, which was designed for applications, when they shouldn't have.
Yeah, but then people started building bloated static websites with those libraries instead of using a saner template engine + JavaScript approach, which is fast, easy to cache and debug, and has stellar performance and SEO.
It didn't help that even React developers were saying it was the wrong tool for plenty of use cases.
Worst of all?
The entire nuance of choosing the right tool for the job has been long lost on most developers. Even the comments I read on HN make me question where the engineering part of the job starts.
It also doesn't help that non-technical stakeholders sometimes want a say in a tech stack conversation as well. I've been at more than one company where either the product team or the acquiring firm wanted us to migrate away from a tried and true Rails setup to a fullstack JS platform simply because they either wanted the UI development flexibility or to not have to hire Ruby devs.
Non-technical MBA's seem to have a hard time grasping that a JS-only platform is not a panacea and comes with serious tradeoffs.
That was indeed one of the main points of SPAs, but React Server Components are generally not used for pure SPAs.
Correct: their main purpose is ecosystem lock-in. Because why return JSON when you can return HTML? Why even build an SPA when the old-school model of server-side includes and PHP worked just fine? TS with Koa and htmx if you must, but server-side React components are kind of a waste of time. Give me one example where server-side React components are the answer over a fetch and JSON, or just fetching an HTML page.
I like RSCs and mostly dislike SPAs, but I also understand your sentiment.
The only example with any traction, in my view, is web shops, which claim that time-to-render and time-to-interactivity are critical for customer retention.
Surely there are not so many people building e-commerce sites that server components should have ever become so popular.
The thing is, time to render and interactivity are much more reliant on the database queries and the user's internet connection than anything else. Now, instead of a spinner or a progress bar in the browser's toolbar, I get skeleton loaders and half a GB of memory used for one tab.
Not to defend the practice, I’ve never partaken, but I think there are some legit timing arguments that a server renderer can integrate more requests faster thanks to being colocated with services and DBs.
Which brings me back to my main point about the web 1.0 architecture: serving pages from the server side, where the data lives. We've come full circle.
Sure they are. Next sites are SPAs.
I'd be interested in adopting a sole-purpose framework like that.
It also decoupled frontend and backend. You could use the same APIs for, say, mobile, desktop, and web. Teams didn't have to cross streams, allowing for deeper expertise on each side.
Now they are shoving server rendering into react native…
I think people just never understood SPAs.
Like with almost everything, people then shit on something they don't understand.
It's really concerning that the biggest, most eye-grabbing part of this posting is the note with the following: "It’s common for critical CVEs to uncover follow‑up vulnerabilities."
Trying to justify the CVE before fully explaining the scope of the CVE, who is affected, or how to mitigate it -- yikes.
What’s concerning about it? The first thing I thought when I read the headline was “wow, another react CVE?” It’s not a justification, it’s an explanation to the most obvious immediate question.
It's definitely a defensive statement, proactively covering the situation as "normal". Normal it may be, but emphasizing that in the limited space of a tweet thread definitely indicates where their mind is on this, I'd think.
Are you reading a different link? This statement is on a React blog post, not a Twitter thread.
But it is another React CVE. Doesn't really matter why it was uncovered, it's bad that it existed either way
Insecure software will have multiple CVEs, not necessarily related to each other. Those 3 are probably not the only ones.
Thanks for the feedback, I adjusted it here so the first note is related to the impacted versions:
https://github.com/reactjs/react.dev/pull/8195
I appreciate the follow up! I think it looks great now and doesn’t read as defensively anymore!
Yeah agreed, thanks again for the feedback. The priority here is clear disclosure and upgrade steps.
Perception management
https://en.wikipedia.org/wiki/Perception_management
There are a lot of careers riding on the optics here.
No, there aren't. The react team isn't going to axe half the team because there's a high severity CVE.
I think the same. To me it looks like a Vercel marketing employee wrote that.
Also kind of funny that they're comparing it to Log4Shell. Maybe not the best sort of company to be keeping...
React is the new JavaBean
Welcome to the React, Next, Vercel ecosystem. Our tech may be shite but we look fancy.
The Vercel CEO post congratulating his team for how they managed the vulnerability was funny
Very standard in security, announcements always always always try to downplay their severity.
fwiw, the goal here wasn't to downplay the severity, but to explain the context to an audience who might not be familiar with CVEs and what's considered normal. I moved the note down so the more important information like severity, impacted versions, and upgrade instructions are first.
> an audience who might not be familiar with CVEs
If there are so many React developers out there using server side components while not familiar with the concept of CVEs, we’re in very serious trouble.
It's ok, you gotta play the game. I'm more concerned about the fact that the downtime issue ranks higher than the security issue. But I'm assuming it relates to the specifics of the issue rather than reflecting on the priorities of the project as a whole.
We pioneered a lot of things with Opa, 15 years ago now. Opa featured automatic code "splitting" between client and server, introduced the JSX syntax although it wasn't called that way (Jordan at Facebook used Opa before creating React, but the discussions around the syntax happened at W3C notably with another Facebook employee, Tobie).
Since the Opa compiler was implemented in OCaml (we looked more like Svelte than React-as-a-pure-lib), we performed a lot of static analysis to prevent the wide range of attacks on frontend code (XSS, CSRF, etc.) and backend code. The Opa compiler became a huge beast in part because of that.
In retrospect, better separation of concerns and completely forgoing the idea of automatic code splitting (what React Server Components is), or even a single-app semantics, is probably better for the near future. Our vision (way too early) was that we could design a simple language for the semantics and a perfect, advanced compiler that would magically output both the client and the server from that specification. Maybe it's still doable with deterministic methods. Maybe LLMs will get to automatic code generation of all parts in one shot before.
Note that the exploits so far haven’t had much to do with “server code/data getting bundled into the client code” or similar which you’re alluding to. Also, RSC does not try to “guess” how to split code — it is deterministic and always user-controlled.
The vulnerabilities so far were weaknesses in the (de)serializer stemming from the dynamism of JavaScript — ability to hijack root object prototype, ability to toString functions to get their code, ability to override a Promise then implementation, ability to construct a function from a string. The patches are patching the (de)serializer to work around those dynamic pieces of JavaScript to avoid those gaps. This is similar to mistakes in parsers where they’re fooled by properties called hasOwnProperty/constructor/etc.
The serialization format is essentially “JSON with Promises and code chunk references”, and it seems like there’s enough pieces where dynamic nature of JS can leak that needed to be plugged. Hopefully with more scrutiny on the protocol, these will be well-understood by the team. The surface area there isn’t growing much anymore (it’s close to being feature-complete), and the (de)serializers themselves are roughly 5 kloc each.
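To make that class of weakness concrete, here's a toy illustration (emphatically not the actual RSC wire format) of how JavaScript's dynamism bites naive deserializers:

```ts
// A recursive merge of untrusted JSON with no guard on special keys.
function naiveDeepMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      // BUG: when key === "__proto__", reading target[key] yields
      // Object.prototype, so the recursion writes attacker-controlled
      // properties onto the prototype shared by every plain object.
      target[key] = naiveDeepMerge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

naiveDeepMerge({}, JSON.parse('{"__proto__": {"isAdmin": true}}'));
console.log(({} as any).isAdmin); // true: process-wide state corrupted

// Mitigations are the usual ones: null-prototype objects, Maps, or an
// explicit denylist for "__proto__", "constructor", and "prototype".
```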
The problem you had in Opa is solved in RSC with build-time assertions (import "server-only" is the server environment poison pill, and import "client-only" is the client environment poison pill). These poison pills work transitively up the module import stack and are statically enforced and prevent code (eg DB code, secrets, etc) from being pulled into the wrong environment. Of course this doesn’t prevent bugs in the (de)serializer but it’s why the overall approach is sound, in the absence of (de)serialization vulnerabilities.
You might be interested in Electric Clojure [1], although I must admit that I have not used it myself.
[1]: https://github.com/hyperfiddle/electric
Ocsigen Eliom did it before Opa, no?
Wouldn't it make more sense to keep React smaller and leave those features to frameworks? I liked it more when it was marketed as the View in MVC. It can surely still be used like that today, but it feels bloated.
But the server components packages are separate libraries; they are not installed by default.
Huh? AFAIK React Server Components made it to core.
They shouldn't be loaded in a React SPA at least, e.g. `react-dom` and `react` packages should be unaffected.
So they are part of the standard distribution (like through npm install react), but are unused by default? Something like that?
This code doesn’t exist in `react` or `react-dom`, no. Packages are released in lockstep to avoid confusion which is why everything got a version bump.
The vulnerable packages are the ones starting with `react-server-` (like `react-server-dom-webpack`) or anything that vendors their code (like `next` does).
git checkout v15.0.0
There we go.
Can I have v15 with the rendering optimizations of further versions?
react server components is something that should've never existed anyway.
Are people shipping faster due to them? Or is it all complexity and security vulnerabilities like this? You're not Facebook. Render HTML the classic way if you need server-rendered HTML. If you really do need an SPA (which is maybe 5% of the apps out there), then yeah, use client-side React, Vue, Svelte, etc. None of those RPC server actions, etc.
Agreed. Unfortunately, there's an entire content/bootcamp ecosystem pushing this stuff that came of age largely during the tech boom, as well as a bunch of early and mid-career devs and companies that are deeply tied to it. With major VC funding backing projects like Bun, Vercel, etc., I don't think this deeply flawed approach to development is going anywhere because the utopian JS fantasy of "it just works and runs everywhere flawlessly" is the ultimate myth of building for the web.
I'm not going to let go of my argument with Dan Abramov on X three years ago, where he held up RSC as an amazing feature and I told him over and over he was making a footgun. Ta-dah!
I'm a nobody PHP dev. He's a brilliant developer. I can't understand why he couldn't see this coming.
For what it’s worth, I’ve just built an app for myself with RSC, and I’m still a huge fan of this way of building and structuring web software.
I agree I underestimated the likelihood of bugs like this in the protocol, though that’s different from most discussions I’ve had about RSC (where concerns were about user code). The protocol itself has a fairly limited surface area (the serializer and deserializer are a few kloc each), and that’s where all of the exploits so far have concentrated.
Vulnerabilities are frustrating, and this seems to be the first time the protocol is getting a very close look from the security community. I wish this was something the team had done proactively. We’ll probably hear more from the team after things stabilize a bit.
A tale as old as time: hubris. A successful system is destined to either stop growing or morph into a monstrosity by taking on too many responsibilities. It's hard to know when to stop.
React lost me when it stopped being a rendering library and became a "runtime" instead. What do you know, when a runtime starts collapsing rendering, data fetching, caching, authorization boundaries, server and client into a single abstraction, the blast radius of any mistake becomes enormous.
I never saw brilliance in his contributions. Especially as React keeps being duct-taped.
Making complex things complex is easy.
Vue, on the other hand, is just brilliant. No wonder its creator, Evan You, went on to also create Vite: a creation so superior that it couldn't be confined to Vue, and the React community adopted it.
https://evanyou.me
There's no need to take down and diminish other's contributions, especially in open source where everybody's free to bring a better solution to the table.
Or just fork if the maintainers want to go their way. If your solution has its merits it will find its fans.
I'm not defending React and this feature, and I also don't use it, but when making a statement like that the odds are stacked in your favor. It's much more likely that something's a bad idea than a good idea, just as a baseball player will at best fail just 65-70% of the time at the plate. Saying for every little thing that it's a bad idea will make you right most of the time.
But sometimes, occasionally, a moonshot idea becomes a home run. That's why I dislike cynicism and grizzled veterans for whom nothing will ever work.
You're probably right. This one just felt like Groundhog Day, but I can't argue with "nothing ventured nothing gained".
You might be more brilliant than you think.
React server component is frontend's attempt of "eating" backend.
On the contrary, HTMX is the attempt of backend "eating" frontend.
HTMX preserves the boundary between client and server, so it's safer on the backend but less safe on the frontend (risk of XSS).
Htmx doesn't really have an XSS problem; this was solved by templating languages long ago. See https://htmx.org/essays/web-security-basics-with-htmx/#alway...
Next team just published this: https://nextjs.org/blog/security-update-2025-12-11
Seems to affect 14.x, 15.x and 16.x.
I do hope this means we can finally stop hearing about RSC. The idea is an interesting solution to problems that should never have existed in the first place.
A framework designed to blur the line between code running on the client and code running on the server — forgot the distinction between code running on the client and code running on the server. I don't know what they expected.
(The same confusion comes up regularly whenever you touch Next.js apps.)
So we have a new React CVE and tomorrow is Friday, so please be prepared for a new outage brought to you by the super-engineers at Cloudflare.
Look on the bright side: maybe GitHub will be down first, so nobody can upload any vulnerable code right before the weekend.
Just exchange json.
Backend in python/ruby/go/rust.
Frontend in javascript/typescript.
Scripts in bash/zsh/nushell.
Once upon a time there was a low amount of friction and boilerplate with this approach, but with Claude and Codex it's changed from low to none.
I really like having a good old RESTful API (well, maybe kinda faking the name, because you usually don't need HATEOAS)!
Except I find most frontend stacks lead to endless configuration (e.g. Vue with Pinia, router, translation, Tailwind, maybe PrimeVue, and a bunch of logic for handling sessions, redirects, toast messages, and whatnot), and I feel the pull to just go use Django or Laravel or Ruby on Rails, mostly with server-side templates. I much prefer that simplicity, even if it feels a bit icky to couple your frontend and backend like that.
I really wonder why the swarm intelligence of software developers still hasn’t decided on a single best clearly defined architecture for serving web applications, decades after the building blocks have been in place.
Let the server render everything. Let JS render everything, server is only providing the initial div and serves only JSON from then on. Actually let JS render partial HTML rendered on the server! Websockets anyone?
Imagine if SQL server architecture or iOS development had this kind of ADHD.
I remember when my greatest fear was sql injection. It’s great to see we have become more secure with our technology.
Were there not enough eyes on React Server Components before the patches from last week?
I've noticed a pattern in the security reports for a project I'm involved in. After a CVE is released, for the next month or so there will likely be additional reports targeting the same (or similar) areas of the framework. There is definitely a competitive spirit amongst security researchers as they try to get more CVEs credited to them (and potentially bounties).
Have you seen the code of Next.js? It's completely impenetrable, and the packages have legacy versions of the same files coexisting; it's like a huge hairball.
Our team is working to fix our Next.js project. It's so painful.
Now I'm doubting whether RSC is a good engineering technology or a good practice. The real world is tradeoffs: RSC really did help us improve our development speed, since we have good teammates with a solid understanding of fullstack.
I do hope such things won't happen again.
How is it painful? You need to bump a minor version? It took me less than 5 minutes, 90% of which was waiting for CI. There's even a tool you can run to do it for you.
For the vast majority of projects, it seems like the disadvantages of these highly complex RPC systems far exceed the benefits... not just in terms of security but also in the reduced observability compared to simple JSON.
In my view, this is well deserved by anyone seriously using RSC in production, despite that being a very bad idea...
I noticed requests that were exploiting the vulnerability were turning into timeouts pretty much immediately after rolling out the patch. I’m surprised it took so long for it to be announced.
Any attempt that blurs the boundary between client and server is unsafe.
I'm confused: did the update from last week for the RCE bug also include fixes for these new CVEs, or will I need to update again? npm audit says there are no issues.
is it not obvious?
> These issues are present in the patches published last week.
> The patches published last week are vulnerable.
> If you already updated for the Critical Security Vulnerability, you will need to update again.
GitHub has to review the advisories and publish it for it to show in `npm audit`, so it's delayed.
You need to update again.
This could be the Next.js motto.
You need to upgrade again, and no the docs aren’t finished (and they won’t be before the new new version).
My Umami stats box got "pwned" about 15 mins after the last CVE was published and I spent an hour or so cleaning up that mess and upgrading everything. Not looking forward to doing it again today.
I wonder what these vulnerabilities mean for Facebook. As far as I know, Facebook is the biggest web app written in React.
Does Facebook actually use RSC? I thought it was mainly pushed by the Nextjs/Vercel side of the React team.
No, but it's primarily because Meta has their own server infrastructure already. RSCs are essentially the React team trying to generalize the data fetching patterns from Meta's infrastructure into React itself so they can be used more broadly.
I wrote an extensive post and did a conference talk earlier this year recapping the overall development history and intent of RSCs, as best as I understand it from a mostly-external perspective:
- https://blog.isquaredsoftware.com/2025/06/react-community-20...
- https://blog.isquaredsoftware.com/2025/06/presentations-reac...
So, contrary to all the other changes, this one was not done for Facebook's own use. What was the reason behind RSC, then?
Like I said above and in the post: it was an attempt to generalize the data fetching patterns developed inside of Meta and make them available to all React devs.
If you watch the various talks and articles done by the React team for the last 8 years, the general themes are around trying to improve page loading and data fetching experience.
Former React team member Dan Abramov did a whole series of posts earlier this year with differently-focused explanations of how to grok RSCs: "customizable Backend for Frontend", "avoiding unnecessary roundtrips", etc:
- https://overreacted.io
Conceptually, the one-liner Dan came up with that I liked is "extending React's component model to the server". It's still parent components passing props to child components, "just" spread across multiple computers.
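A sketch of that one-liner (file paths and helpers are hypothetical): the parent runs only on the server, the child runs in the browser, and the props cross the network as serialized data.

```tsx
// app/feed/page.tsx: a server component. <LikeButton> is assumed to be
// a separate file whose first statement is the "use client" directive.
import LikeButton from "./LikeButton"; // hypothetical "use client" component
import { fetchLikeCount } from "@/lib/feed"; // hypothetical server-side query

export default async function Feed() {
  // e.g. a direct DB read; this code never ships to the client.
  const likes = await fetchLikeCount();
  // `initialCount` is serialized and sent over the wire as a prop.
  return <LikeButton initialCount={likes} />;
}
```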
Yeah the "just" is doing a lot of things, nobody asked for a react server but it turns out it could be the base for a $10B cloud company. Classical open source rugpull.
Market capture?
That seems to be the case. They killed React for that.
No they don't. I think Meta is just big enough that they don't really care what is happening with React anymore haha.
This is about React Server Components, a subset/feature of React that can optionally be installed and used.
Apps that use React without server components are not affected.
I'm only surprised it took this long for an exposure of backend data to the frontend to be discovered in RSC.
LOL. I must have divination powers. I am currently working on a UI framework and opened an issue just 3 weeks ago that says:
***
Seems that server functions are all the rage. We are unlikely to have them.
The main reason is that it ties the frontend and the backend together in undesirable ways.
It forces a JS backend upon people (what if I want to use Go, for instance?).
The API is not client-agnostic anymore. How to specify middleware is not clear.
It requires a bundler, so it destroys isomorphism (isomorphic code requires no difference between the client and the server; environment-agnostic).
Even though it requires a bundler because it separates client and server implementation files, it blurs data scoping (especially worrying for sensitive data). Do one thing and do it well: separate frontend and backend.
It might be something that is useful for people who only plan on having a javascript web frontend server separate from the API server that links to the backend service.
Besides, it is really not obvious to me how it becomes architecturally clearer. It would double the work in terms of security wrt authorization etc. This is at least not a generic pattern.
So I'd tend to go opposite to the trend and say no. Who knows, we might revisit it if anything changes in the future.
***
And boy, look at the future 3 weeks later...
To be fair, the one good thing is that they are hardening their implementation thanks to these discoveries. But it still seems to me that this is wholly unnecessary and possibly will never be safe enough.
Anyway, not to toot my own horn, I know for a fact these things are difficult. Just found the timing funny. :)
I'm curious about your UI framework, is it public?
Not public yet. Under review.
React patches one vulnerability and two more are revealed, just like a Hydra.
At this point you might as well deprecate RSC as it is clearly a contraption for someone trying to justify a promotion at Meta.
Maybe they are going to silently remove “Built RSC at Meta!” in their LinkedIn bios after this. So what other vulnerabilities are going to be revealed in React after this one?
Meta don’t use RSC: https://bsky.app/profile/en-js.bsky.social/post/3lmvwmr5rfs2...
> We are not using RSC at Meta yet, bc of limits of our packaging infra (it’s great at different things) and because Relay+GraphQL gives us many of the same benefits as RSCs. But we are fans and users of server driven UI and incrementally working toward RSC.
(as of April 2025)
Interesting how DoS ranks higher than code exposure in severity.
I personally think it's the other way around, since code exposure increases the odds that a security breach happens, while DoS does not increase chances of exposure, but affects reliability.
Obviously we are simplifying a multidimensional severity to one dimension, but I personally think that breaches are more important than reliability. I'd rather have my app go down than be breached.
And I don't think it's a trivial difference: if you'd rather have a breach than downtime, you will have a breach.
“use insecure”
I remember some podcast interview with Miško Hevery talking about how Qwik was very emphatic about what code ran on the server and what ran on the client. Seems self-evident and prescient. It was a great interview as Miško Hevery is extremely articulate about the problems at hand. If I find it, I'll post.
Oh boy, I somehow missed that React was offering these.
Google has a similar technology in-house, and it was a bit of a nightmare a few years back; the necessary steps to get it working correctly required some very delicate dancing.
I assume it's gotten better given time.
Related:
React2Shell and related RSC vulnerabilities threat brief - Cloudflare
https://blog.cloudflare.com/react2shell-rsc-vulnerabilities-... (https://news.ycombinator.com/item?id=46237515)
The JavaScript fanatics will downvote me for this, but I'll say it anyway: "using a single JavaScript codebase on your client-side and server-side is like cooking food in your toilet, sooner or later, contamination is guaranteed" [1]
1 - https://ashishb.net/tech/javascript/
You can still have separate codebases for server and client in JS/TS...
> You can still have separate codebases for server and client in JS/TS...
Indeed, but unlike with Go/Python (backend) and TS/JS (frontend), the separation is surmountable, and the push to "reuse" is high.
You're mixing programming languages with software architecture.
> You're mixing programming languages with software architecture.
Programming languages do lead to certain software architectures. These are independent but not orthogonal issues.
This isn't a Javascript problem, this is a React problem. You could theoretically rewrite React and RSC in any language and the outcome would be the same. Say Python ran in the browser natively, and you reimplented React on browser and server in Python. Same problem, not Javascript.
> This isn't a Javascript problem, this is a React problem.
It happened with Next.js as well https://github.com/vercel/next.js/discussions/11106
> Say Python ran in the browser natively, and you reimplented React on browser and server in Python. Same problem, not Javascript.
Yes.
And since Python does not natively run in the browser, that mistake never happens. With JavaScript, the desire to have "backend and frontend in a single codebase" requires active resistance.
> It happened with Next.js as well
It's the same vulnerabilities because Next uses the vulnerable parts of React.
Your rationale is quite poor, as I can write an isomorphic web app in C or Rust or Go and run parts in the browser; what then? Look, many of us also strongly dislike JavaScript, but generally that distaste is based on its actual shortcomings and failures. You don't have to invent new ones; plenty already exist.
> I can write an isomorphic web app in C or Rust or Go and run parts in the browser, what then?
If you have a single codebase for Go-based code running in an untrusted browser (the "toilet") and a trusted backend (the "kitchen"), then the same contamination is highly likely.
>And since Python does not natively run in the browser, that mistake never happens.
Did you even bother to read my comment? Try again, please. Next time don't skip over parts.
dammit
SSR is just dumb. It's very rare that you would benefit much from this approach; the only thing you get is a complexity bomb exploding in your face.
How about either just return html (maybe with htmx), or have a "now classic" SPA.
The frontend must be the most over-engineered shitshow we as devs have ever created. It's where hype meets the metal.
Y'all are so pessimistic. React Server Components are great. React is a complex piece of software. Bugs happen.
Personally I prefer simple software without bugs! This security vulnerability highlights a serious issue with React: it's an SPA framework, a server-side framework, and a functional component library all at the same time. And it's apparently getting so complex that it's introducing source code exposures.
I’m not interested in flame wars per se, but I can tell you there are better alternatives, and that the closer you stay towards targeting the browser itself the better, because browser APIs are at least an order of magnitude more secure and performant than equivalent JS operations.
After Log4Shell, additional CVEs were reported as well.
It’s common for critical CVEs to uncover follow‑up vulnerabilities because researchers scrutinize adjacent code paths looking for variant exploit techniques to test whether the initial mitigation can be bypassed.
The existence of the vulnerabilities is not a consequence of previous CVEs, so this seems like an irrelevant non sequitur to keep mentioning everywhere.