Yeah, I too was surprised to find the dev experience very good: all JetBrains IDEs work well, Visual Studio appears to work fine, and most language toolchains seem well supported.
Have I had any app compatibility issues?
To quote Hamlet, Act 3, Scene 3, Line 87: "No."
The Prism binary emulation for x86 apps that don't have an ARM equivalent has been stellar with near-native performance (better than Rosetta in macOS). And I've tried some really obscure stuff!
I suspect that's due to the GPU and not due to Prism, because they basically just took a mobile GPU and stuffed it into a laptop chip. Generally performance seems to be on par with whatever a typical flagship Android device can do.
Desktop games that have mobile ports generally seem to run well, emulation is pretty solid too (e.g. Dolphin). Warcraft III runs OK-ish.
The GPUs don't go toe-to-toe with current gen desktop GPUs, but they should be significantly better than the GTX 650, a mid-range desktop GPU from 2012, which the game (2019) lists as recommended. It does sound like something odder is going on than just a lack of hardware.
That something odd is called GPU drivers. Even Intel struggled to get games running on their iGPUs (they recently announced they are dropping driver development for all GPUs older than Alchemist).
Ironically, the app I've had the most trouble with is Visual Studio 2022. Since it has a native ARM64 build and installation of the x64 version is blocked, there are a bunch of IDE extensions that are unavailable.
We’ve been using X Elite Snapdragon laptops (Thinkpad T14s and Yoga Slim running Ubuntu’s concept images) to build large amounts of ARM software without the need for cross-compiling. The hardware peripheral support isn’t 100% yet (good enough) but I’ve been impressed with the performance.
ARM seems to be popular in the server space and it’s nice to see it trickling down to the PC market.
Does anybody know if the X2 supports the x86 Total Store Ordering (TSO) memory model? That's how Apple silicon does such efficient emulation of x86. I'd think that would be even MORE important for a Windows ARM64 laptop, where there is so much more legacy x86 software going back decades.
Does anyone have benchmarks for Rosetta with TSO vs the Linux version with no TSO? I guess it might be a bit challenging to achieve apples to apples, although you could run a benchmark on macOS and then on Asahi on the same hardware, I think?
I've always been curious about just how much Rosetta magic is the implementation and how much is TSO; Prism in Windows 24H2 is also no slouch. If the recompiler is decent at tracing data dependencies it might not have to fence that much on a lot of workloads even without hardware TSO.
People who have worked on the Windows x64 emulator argue that TSO isn't as big a deal as claimed; other factors, like enhanced hardware support for flag conversion and function call optimizations, play a significant role too:
> People who have worked on the Windows x64 emulator argue that TSO isn't as big a deal as claimed
This is a misinterpretation of what the author wrote! There is a real and significant performance impact in emulating x86 TSO semantics on non-TSO hardware. What the author argues is that enabling TSO process-wide (like macOS does with Rosetta) resolves this impact but it carries counteracting overhead in non-emulated code (such as the emulator itself or in ARM64EC).
The claimed conclusion is that it's better to optimize TSO emulation itself rather than bruteforce it on the hardware level. The way Microsoft achieved this is by having their compiler generate metadata about code that requires TSO and by using ARM64EC, which forwards any API calls to x86 system libraries to native ARM64 builds of the same libraries. Note how the latter in particular will shift the balance in favor of software-based TSO emulation since a hardware-based feature would slow down the native system libraries.
Without ecosystem control, this isn't feasible to implement in other x86 emulators. We have a library forwarding feature in FEX, but adding libraries is much more involved (and hence currently limited to OpenGL and Vulkan). We're also working on detecting code that needs TSO using heuristics, but even that will only ever get us so far. FEX is mainly used for gaming though, where we have a ton of x86 code that may require TSO (e.g. mono/Unity) but wouldn't be handled by ARM64EC, so the balance may be in favor of hardware TSO either way here.
For reference, this is the paragraph (I think) you were referring to:
> Another common misconception about Rosetta is that it is fast because the hardware enforces Intel memory ordering, something called Total Store Ordering. I will make the argument that TSO is the last thing you want, since I know from experience the emulator has to access its own private memory and none of those memory accesses needs to be ordered. In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
Really old software tends not to make good use of multiple cores anyway, and you can simply emulate just a single core to achieve total store ordering.
Anything modern and popular you can probably get recompiled to ARM64.
Unfortunately games are the most common demanding multithread applications. Studios throw a binary over the fence and then get dissolved. Seems to be the way the entire industry operates.
Maybe more ISA diversity will incentivize publishers to improve long-term software support but I have little hope.
Their top model still only has "Up to 228 GB/s" bandwidth, which places it in the low-end category for anything AI-related; for comparison, Apple Silicon is up to 800GB/s and Nvidia cards around 1800GB/s. And there's no word on whether it supports 256-512GB of memory.
> Their top model still only has "Up to 228 GB/s" bandwidth, which places it in the low-end category for anything AI-related; for comparison, Apple Silicon is up to 800GB/s
Most Apple Silicon is much less than 800 GB/s.
The base M4 is only 120GB/s and the next step up M4 Pro is 273GB/s. That’s in the same range as this part.
It’s not until you step up to the high end M4 Max parts that Apple’s memory bandwidth starts to diverge.
For the target market with long battery life as a high priority target, this memory bandwidth is reasonable. Buying one of these as a local LLM machine isn’t a good idea.
This, and always check benchmarks instead of assuming memory bandwidth is the only possible bottleneck. Apple Silicon definitely does not fully use its advertised memory bandwidth when running LLMs.
The base model X2 Elite has memory bandwidth of 152 GB/s. M4 Pro is a modest win against the Extreme as mentioned, and Qualcomm has no M4 Max competitor that I'm aware of.
I think the pure hardware specs compare reasonably against AS, aside from the lack of a Max of course. Apple's vertical integration and power efficiency make their product much more compelling though, at least to me. (Qualcomm, call me when the Linux support is good.)
Yet the apps top the App Store charts. Considering that these are not upgradable I think the specs are relevant. Just as I thought Apple shipping systems with 8 GB minimums was not good future proofing.
Today, Qualcomm's CEO stated[0] that the combination of Android and ChromeOS, i.e. Android Computers, will be available on Snapdragon laptops. Maybe these X2 CPUs will be in those laptops.
If you look at the verified hardware list for ChromeOS Flex[0], you can get an idea of what ChromeOS devices are being deployed for. Apart from education and companies that use Google Workspace, there's a lot of ChromeOS devices deployed as kiosks and call center computers. This is reflected not only in obscure documentation, but also in the marketing material[1].
The "enterprise" managability and reduced attack surface is driving Google to jack up Chromebook prices. The "Chromebook Plus" models are nearing the same price as a midrange Dell Inspiron, HP OmniBook, or Lenovo IdeaPad. You may have also noticed M4 MacBook Airs can be bought for the price of an iPhone 17, and I suspect that's partially a response from Apple to the Chromebook price increases. Buying a $600 Chromebook might have been sane for someone tired of Microsoft and not interested in a $1000 Macbook Air, but in 2025, with the Macbook Air prices going down significantly[2], Chromebooks are not as appealing to regular consumers (different story for businesses).
For people complaining about battery control and android emulation on linux, ChromeOS is a boon.
You effectively get an actual Linux distro + most of Android, with a side of Chrome. It's way closer to "a real computer" than an iPad for instance, and only loses to the Surface Pro/Z13 line in terms of versatility IMHO.
It really wasn't bad, my only deal breakers were keyboard remapping being non existent and the bluetooth stack being flaky.
I got a ChromeOS device a few years ago and it was great. I think they get an undeserved bad reputation from being the locked-down devices you're forced to use in schools, but a personal ChromeOS device is a capable computer that can run any Android app or desktop Linux app.
Though having said that, in the past year I've replaced ChromeOS with desktop Linux (postmarketOS) and I love it even more now. 4GB of RAM was a bit slim for running everything in micro-VMs for "security," which is what ChromeOS does. I've had no trouble with battery life or Android emulation (Waydroid) since switching.
Sorry, but "CLI stuff" is not "as far as it goes" with desktop Linux apps on ChromeOS. ChromeOS provides Wayland and PulseAudio servers to the apps as well so GUI and audio works too. It even synchronises file associations and installs a ChromeOS-like GTK theme into the container. The Linux GUI apps I had installed back when I used it felt completely native.
It worked on my device. The page you linked looks very outdated and doesn't have my device's board or any device made in the past 5 years. The lists of unsupported devices also look pretty reasonable - old kernels, CPUs that don't support virtualisation and 32-bit ARM. Since modern ChromeOS uses the same virtualisation to run Android apps, I doubt there's a modern device where it doesn't work.
If Snapdragon (or ARM players in general) wanted to challenge x86 and Apple dominance, do they need to compete in the exact same arena? Could they carve out a niche (example: ultra-efficient always-on machines) and then expand?
Exactly! That makes this move all the more interesting. The smartphone SoC market is saturated, and margins are shrinking. Laptops/PCs give Qualcomm a chance to leverage its IP in a higher-ASP segment. Expanding is logical, but the competitive bar is way higher.
“ARM chip” is a pretty broad umbrella. Apple’s M-series is based on the ARM ISA, the microarchitecture is Apple’s own design, and the SoCs are built with very different cache hierarchies, memory bandwidth, and custom accelerators. I was simply using Apple as an example of another big player.
“Multi-day” battery life sounds wild! That’s probably the biggest thing for users. It would be good for Apple to get some competition because their M-chips seemed so far ahead of everything else.
Still, even if someone uses it for two hours a day and then just closes it, being able to run for multiple days without charging the way Macs can is fantastic.
I agree it seems incredibly unlikely that you’re doing multiple days of eight hours of work without charging.
Longer is always better, so if it’s true at all great for them.
Any battery life claim needs to be aligned with the consumer-class operating system and application layer (iOS, Android, etc). Multi-day battery life on a non-Google-Pixel Android device with typical usage would be interesting.
AFAIK Windows on ARM is completely pushed by Microsoft (obviously they're limited by their own competence) and Qualcomm has been kind of phoning it in.
I trust MS in this. NT has been multi-arch since day one. x86 wasn’t even the original lead architecture.
They also know the score. Intel is not in a good place, and Apple has been showing them up in lower power segments like laptops, which happen to be the #1 non-server segment by far.
They don’t want to risk getting stuck the way Apple did three times (68k, PPC, Intel) where someone else was limiting their sales.
So they’re laying groundwork. If it’s a backup plan, they’re ready. If ARM takes off and x86 keeps going well, they’re even better off.
FOSS support for Windows ARM has been hampered by Github (owned by MS) not supporting free Windows ARM runners. They may be finally getting their act together but are years late to the game.
These all have nightmarish support. They're not a big deal for Qualcomm so the driver support is garbage. And you're stuck on their kernel like one of those Raspberry Pi knock offs. It's just really hard to take them seriously.
When laptop OEMs stop catering to the lowest-common-denominator corporate IT purchasers (departments which don't care about screen quality, speaker quality, or much of anything else beyond whether the spec sheet matches their requirements and whether it's cheap).
I have a Yoga Slim 7x, which has the ARM chip. Screen quality is fantastic along with build quality, touchpad and keyboard feel :shrug:
It really depends on what laptop line you buy. Dells have overwhelmingly become garbage, right next to HP.
Speaker quality on a laptop, otoh? Couldn't care less; I use headphones/earbuds 99% of the time, because if I'm using a portable computer, I'm traveling, and I don't want to be an inconsiderate arse.
The Yoga Slim 7x is a rather unique outlier. I was on the market for a non-Mac laptop a little while ago, and that was literally the only one that met my standards.
> departments which don't care about screen quality, speaker quality, or much of anything else beyond whether the spec sheet matches their requirements and whether it's cheap
Translation: departments which don't care about worker's wellbeing.
Looking at the SOCs used, only Dell, Microsoft, and Samsung used the 2nd fastest SoC, the X1E-80-100 - the Dell and Microsoft laptops could be configured with 64GB soldered.
Samsung also used the fastest SoC (the only OEM to do so), the X1E-84-100. From a search of their USA website, you're stuck with only 16GB on any of their Snapdragon laptops. :(
I'd hope whichever OEM(s) uses the Snapdragon X2 Elite Extreme SoC (X2E-96-100) allows users to configure RAM up to 64GB or 128GB.
I'm not holding my breath, though. I have a Samsung Edge 4 laptop and I didn't find the battery life impressive - prob got around 6 hours under coding / programming tasks. GPU performance is terrible too.
I feel like I'm constantly charger-tending all my non-Apple silicon laptops.
M-series instant wake from sleep is also years ahead of the Windows wakeup roulette, so even if this new processor helps with time away from chargers... we still have the Windows sleep/hibernate experience.
The Snapdragon X2 Elite Extreme (X2E-96-100) SoC supports "128GB+", but Qualcomm hasn't specified what the max limit is. This SoC also has higher memory bandwidth (228GB/s over a 192-bit bus) than the X2 Elite.
Linux support is still basically non-existent for the first gen, and they made a big deal about supporting Linux and the open source community. This is to say: don't trust them.
That'd definitely fit the Qualcomm pattern of trying to force you to upgrade by not upstreaming their Linux drivers.
This is one place where Windows has an advantage over Linux. Windows' long-term support for device drivers is generally really good. A driver written for Vista is likely to run on 11.
Old situation: "Android drivers" are technically Linux drivers in that they are drivers which are built for a specific, usually ancient, version of Linux with no effort to upstream, minimal effort to rebase against newer kernels, and such poor quality that there's a reason they're not upstreamed.
New situation: "Android drivers" are largely moved to userspace, which does have the benefit of allowing Google to give them a stable ABI so they might work against newer kernels with little to no porting effort. But now they're not really Linux drivers.
In neither case does it really help as much as you'd hope.
Not surprising considering I haven't seen a programming manual or actual datasheet for these things in the first place. It usually helps if you tell the community how to interact with your hardware...
Not even true: Arm, Intel, AMD, and most other hardware vendors (who are actively making an effort to support Linux on their parts) actually publish useful[^1] documentation.
edit: Also, not knocking the Qualcomm folks working on Linux here, just observing that the lack of hardware documentation doesn't exactly help reeling in contributors.
[^1]: Maybe in some cases not as useful as it could be when bringing up some OS on hardware, but certainly better than nothing
As someone who has used the Snapdragon X Elite (12 core Oryon) Dev Kit as a daily driver for the past year, I find this exciting. The X Elite performance still blows my mind today - so the new X2 Elite with 18 cores is likely going to be even more impressive from a performance perspective!
I can't speak to the battery life, however, since it is dismal on my Dev Kit ;-)
Unless they added low-power cores to it, it probably isn't great. The chip design was originally for datacenters.
Didn't laptops with Snapdragon X Elite CPUs have pretty good battery life?
https://www.pcworld.com/article/2375677/surface-laptop-2024-...
X2 Elite shouldn't be that different I think.
Wait, you got one of those Dev kits? How? I thought they were all cancelled.
Edit: apparently they did end up shipping.
They got cancelled after they started shipping, and even people who received the hardware got refunded.
How's the compatibility? Are there any apps that don't work that are critical?
Surface Pro 11 owner here. SQL Server won't install on ARM without hacks. Hyper-V does not support nested virtualization on ARM. Most games are broken with unplayable graphical glitches with Qualcomm video drivers, but fortunately not all. Most Windows recovery tools do not support ARM: no Media Creation Tool, no Installation Assistant, and recovery drives created on x64 machines aren't compatible [EDIT: see reply, I might be mistaken on this]. Creation of a recovery drive for a Snapdragon-based Surface (which you have to do from a working Snapdragon-based Surface) requires typing your serial code into a Microsoft website, then downloading a .zip of drivers that you manually overwrite onto the recovery media that Windows 11 creates for you.
Day-to-day, it's all fine, but I may be returning to x64 next time around. I'm not sure that I'm receiving an offsetting benefit for these downsides. Battery life isn't something that matters for me.
You ABSOLUTELY do not have to create a recovery drive from a Snapdragon based device. I've done it multiple times from x64 Windows for both a SPX and 11.
Hmm, thank you, that's good to know. Did you just apply the Snapdragon driver zip over the x64 recovery drive? It didn't work for me when my OS killed itself but I could easily have done something wrong in my panic over the machine not working. Since I only have the one Snapdragon device, I was making the assumption that it would have worked if I had a second one, but I didn't actually know that.
Yes, just copy the zip over like the instructions say.
That’s brutal... I wonder why the Apple Silicon transition seemed so much smoother in comparison.
Apple had a great translation layer (Rosetta) that allows you to run x64 code, and it's very fast. However, Apple being Apple, they are going to discontinue this feature in 2026; that's when we'll see some Apple users really struggling to go fully ARM, or just ditching their MacBook. I know if Apple does follow through with killing Rosetta, I'll do the latter.
It's a transpiler that takes the x86-64 binary assembly and spits out the aarch64 assembly only on the first run AFAIK. This is then cached on storage for consecutive runs.
Apple also implemented x86 memory semantics for aarch64 to allow for simpler translation and faster execution.
For one thing Apple dropped 32-bit before they transitioned to ARM while Windows compatibility goes back 30 years.
Did it? From that list: SQL Server doesn't work on Mac and there's no Apple equivalent; virtualisation is built into the system, so that kind of worked but with restrictions; games barely exist on Mac, so the few that cared did ports, but it's still minimal. There's basically no installation media for Macs in the same way as Windows in general.
What I'm trying to say is - the scope is very different / smaller there. There's a tonne of things that didn't work on Macs both before and after and the migration was not that perfect either.
Out of the gate, Apple silicon lacked nested virtualization, too. They added it in the M3 chip and macOS 15. Macs have different needs than Windows though; I think it's less of a big deal there. On Windows we need it for running WSL2 inside a VM.
I'd guess the M3 features aren't required for nested virtualization, and it was more of a software design decision to only add the support once some helpful hardware features had shipped too. E.g. here's nested virtualization support for ARM on Linux in 2017: https://lwn.net/Articles/728193/
Nested virt does need hardware support to implement efficiently and securely. The Apple chips added that over time, eg M2 actually had somewhat workable support but still incomplete and hacky https://lwn.net/Articles/928426/ - the GIC (interrupt controller) was a mess to virtualise in older versions, which is different from the instruction set of the CPU.
On Windows, nested virtualization already existed before WSL: the kernel and device driver security features introduced in Windows 10, and made always-on in Windows 11, require running Hyper-V, which is a type 1 hypervisor.
So it is rather easy to end up dealing with nested virtualization, even for those of us who seldom use WSL.
Because Apple controls everything, vs the Windows/Linux world where hundreds (thousands?) of OEMs create things?
I agree with you on the Windows side.
Linux is different. Decades of being tied to x86 made the OS way more coupled with the processor family than one might think.
Decades of bugfixes, optimizations and workarounds were made assuming standard BIOS and ACPI behavior.
Especially on the desktop side.
That, and the fact that SoC vendors are decades behind on driver quality. They remind me of the NDiswrapper era.
Also, a personal theory of mine is that people have unfair expectations of ARM Linux. Back when x86 Linux had similar compatibility problems, there was nothing to compare it with, so people just accepted that Linux was going to be a pain and that was it.
Now the bar is higher. People expect Linux to work the way it does on x86, in 2025.
And manpower in FOSS is always limited.
> Decades of being tied to x86
This doesn't pass the smell test when Linux powers so many smart or integrated devices and IoT on architectures like ARM, MIPS, Xtensa, and has done so for decades.
I didn't even count Android here, which runs the Linux kernel as a first-class citizen on billions of mostly ARM-based phones.
My Asahi Linux M1 MacBook Air would disagree with you.
You are talking out of your ass here. If you make bold statements like this you need to provide evidence. Linux works fine on many platforms...
Every Mac transitioned to ARM; only a very small number of Windows PCs are running ARM. So right now there's not a large user base to incentivise software being written for it.
You are right that Windows on ARM cannot be called a success. But if you make Windows/macOS cross platform software then your software needs to be written for ARM anyway.
So if you support macOS/x86, macOS/ARM, and Windows/x86, then the additional work to add Windows/ARM is rather small, unless you do low-level stuff (I remember the Fortnite WoA port taking almost a year from announcement to release due to anticheat).
The first few months were a little tricky depending on what software you needed, but it did smooth out pretty quickly.
Apple already went through this before with PowerPC -> x86. They had universal binaries, Rosetta, etc. to build off of. And they got to do it with their own hardware, which includes some special instructions intended to help with emulation.
> Apple already went through this before with PowerPC -> x86
Not to mention 68K -> PowerPC.
Rhapsody supported x86, and I think during the PowerPC era Apple kept creating x86 builds of OS X just in case. This may have helped to keep things like byte order dependencies from creeping in.
Because it was handled by the only tech company left that actually cares about the end user. Not exactly a mystery.
Having a narrow product line helped Apple a lot. Similarly being able to deprecate things faster than business-oriented Microsoft. Apple also controls silicon implementation. So they could design hardware features that enabled low to zero overhead x86 emulation. All in all Rosetta 2 was a pretty good implementation.
Microsoft is trying to retain binary compatibility across architectures with ARM64EC stuff which is intriguing and horrifying. They, however, didn't put any effort into ensuring Qualcomm is implementing the hardware side well. Unlike Apple, Qualcomm has no experience in making good desktop systems and it shows.
> Apple also controls silicon implementation.
People sometimes say that as if it came without foresight or cost or other complexities in their business.
No, in the end they are hyper strategic and it pays off.
Given how Apple makes it maintenance hostile and secures against their end customers, no.
Does Remote Desktop into the Surface work well?
When I'm home, I often just remote desktop into my laptop.
I'm wondering if remoting into ARM Windows is as good?
I have a similar Windows Arm64 machine (Lenovo "IdeaPad 5 Slim"), RDP into it works OK.
There is one issue I ran into that I haven't on my (self-built) Windows desktops: when Windows Hello (fingerprint lock) is enabled, and neither machine is on a Windows domain, the RDP client will just refuse to authenticate.
I had to use a trick to "cache" the password on the "server" end first, see https://superuser.com/questions/1715525/how-to-login-windows...
Yes, everything in user space works as expected. Note that NT has supported non-x86 processors since 1992.
According to some accounts, the name NT even was a reference to the Intel i860, which was the original target processor.
On the bright side, there's a good chance that Windows on ARM is not well supported by malware. There's a situation where you benefit from things being broken.
Most apps for dev work actually work:
- RStudio
- VS Code
- WSL2
- Fusion 360
- Docker

The only major exception is Android Studio's emulator (although the IDE itself does work).
Yeah, I too was surprised to find the dev experience very good: all JetBrains IDEs work well, Visual Studio appears to work fine, and most language toolchains seem well supported.
JetBrains stuff (love it!) is built on Java, so I’m not terribly surprised. I don’t know how much native code there is though.
Plus they’ve been through the Apple Silicon change, so it’s not the first time they’ve been on non-x86 either.
Have I had any app compatibility issues? To quote Hamlet, Act 3, Scene 3, Line 87: "No."
The Prism binary emulation for x86 apps that don't have an ARM equivalent has been stellar with near-native performance (better than Rosetta in macOS). And I've tried some really obscure stuff!
Same here. I've not had any issues with my Surface Pro 11.
That's certainly not what the reviews say.
Adobe apps that ran fine on Rosetta didn't work at all on Prism.
https://www.pcmag.com/articles/how-well-does-windows-on-arms...
For me it is too slow to run Age of Empires 2: DE multiplayer. Laptops with Intel chips more than ten years old are faster there.
I suspect that's due to the GPU and not due to Prism, because they basically just took a mobile GPU and stuffed it into a laptop chip. Generally performance seems to be on par with whatever a typical flagship Android device can do.
Desktop games that have mobile ports generally seem to run well, emulation is pretty solid too (e.g. Dolphin). Warcraft III runs OK-ish.
The GPUs don't go toe-to-toe with current-gen desktop GPUs, but they should be significantly better than the GTX 650, a mid-range desktop GPU from 2012 that the game (2019) lists as recommended. It does sound like something odder is going on than just a lack of hardware.
https://www.videocardbenchmark.net/gpu.php?gpu=Snapdragon+X+...
https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+6...
There are also some architectural differences between mobile & desktop GPUs which may impact games that are not optimized for the platform: https://chipsandcheese.com/p/the-snapdragon-x-elites-adreno-...
That something odd is called GPU drivers. Even Intel struggled to get games running on their iGPUs (they recently announced that they are dropping driver development for all GPUs older than Alchemist).
Ironically, the app I've had the most trouble with is Visual Studio 2022. Since it has a native ARM64 build and installation of the x64 version is blocked, there are a bunch of IDE extensions that are unavailable.
We’ve been using X Elite Snapdragon laptops (ThinkPad T14s and Yoga Slim running Ubuntu’s concept images) to build large amounts of ARM software without needing to cross-compile. The hardware peripheral support isn’t 100% yet (but good enough), and I’ve been impressed with the performance.
ARM seems to be popular in the server space and it’s nice to see it trickling down to the PC market.
How's the battery life?
Does anybody know if the X2 supports the x86 Total store ordering (TSO) memory ordering model? That's how Apple silicon does such efficient emulation of x86. I'd think that would be even MORE important for a Windows ARM64 laptop where there is so much more legacy x86 software going back decades.
Does anyone have benchmarks for Rosetta with TSO vs the Linux version with no-TSO? I guess it might be a bit challenging to achieve apples to apples, although you could run a test benchmark on OSX and then Asahi on the same hardware, I think?
I've always been curious about just how much Rosetta magic is the implementation and how much is TSO; Prism in Windows 24H2 is also no slouch. If the recompiler is decent at tracing data dependencies it might not have to fence that much on a lot of workloads even without hardware TSO.
People who have worked on the Windows x64 emulator claim that TSO isn't as big a deal as claimed; other factors, like enhanced hardware flag-conversion support and function-call optimizations, play a significant role too:
http://www.emulators.com/docs/abc_exit_xta.htm
> People who have worked on the Windows x64 emulator claim that TSO isn't as big a deal as claimed
This is a misinterpretation of what the author wrote! There is a real and significant performance impact in emulating x86 TSO semantics on non-TSO hardware. What the author argues is that enabling TSO process-wide (like macOS does with Rosetta) resolves this impact but it carries counteracting overhead in non-emulated code (such as the emulator itself or in ARM64EC).
The claimed conclusion is that it's better to optimize TSO emulation itself rather than bruteforce it on the hardware level. The way Microsoft achieved this is by having their compiler generate metadata about code that requires TSO and by using ARM64EC, which forwards any API calls to x86 system libraries to native ARM64 builds of the same libraries. Note how the latter in particular will shift the balance in favor of software-based TSO emulation since a hardware-based feature would slow down the native system libraries.
Without ecosystem control, this isn't feasible to implement in other x86 emulators. We have a library forwarding feature in FEX, but adding libraries is much more involved (and hence currently limited to OpenGL and Vulkan). We're also working on detecting code that needs TSO using heuristics, but even that will only ever get us so far. FEX is mainly used for gaming though, where we have a ton of x86 code that may require TSO (e.g. mono/Unity) but wouldn't be handled by ARM64EC, so the balance may be in favor of hardware TSO either way here.
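To make that trade-off concrete, here is a toy sketch (plain Python, not a real JIT; the op names are just the usual ARM mnemonics) of the two strategies described above: blanket software TSO enforcement, where every memory access is emitted as an ordered access, versus the metadata-driven approach, where only accesses flagged by compiler hints pay the ordering cost.

```python
def translate(ops, tso_hints=None):
    """Translate abstract ('load'|'store', addr) ops into ARM-style ops.

    Naive mode (tso_hints=None): every access is ordered, which is a
    conservative way to preserve x86 TSO on weakly ordered hardware.
    Hinted mode: only ops whose index appears in tso_hints (standing in
    for compiler-generated metadata) are ordered; the rest use plain,
    faster unordered accesses.
    """
    out = []
    for i, (kind, addr) in enumerate(ops):
        needs_order = tso_hints is None or i in tso_hints
        if kind == "store":
            # stlr = store-release, str = plain store
            out.append(("stlr" if needs_order else "str", addr))
        else:
            # ldar = load-acquire, ldr = plain load
            out.append(("ldar" if needs_order else "ldr", addr))
    return out

program = [("store", 0x10), ("load", 0x20), ("store", 0x30)]
naive = translate(program)                  # every access ordered
hinted = translate(program, tso_hints={0})  # only op 0 needs TSO
```

The hinted output keeps two of the three accesses unordered, which is the whole point of the metadata approach: hardware TSO would order everything, including accesses (like the emulator's own bookkeeping) that never needed it.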
For reference, this is the paragraph (I think) you were referring to:
> Another common misconception about Rosetta is that it is fast because the hardware enforces Intel memory ordering, something called Total Store Ordering. I will make the argument that TSO is the last thing you want, since I know from experience the emulator has to access its own private memory and none of those memory accesses needs to be ordered. In my opinion, TSO is a red herring that isn't really improving performance, but it sounds nice on paper.
This is more like what I’d expect! This is a great article too, thank you, this is the kind of thing I come to HN for :)
There was a paper with benchmarks posted recently here but I cant find it immediately. I think it was 6-10% from memory.
For really old software, it tends not to make good use of multiple cores anyway and you can simply emulate just a single core to achieve total store ordering.
Anything modern and popular and you can probably get it recompiled to ARM64
Unfortunately games are the most common demanding multithread applications. Studios throw a binary over the fence and then get dissolved. Seems to be the way the entire industry operates.
Maybe more ISA diversity will incentivize publishers to improve long-term software support but I have little hope.
Their top model still only has "Up to 228 GB/s" bandwidth, which places it in the low-end category for anything AI-related. For comparison, Apple Silicon goes up to 800 GB/s and Nvidia cards are around 1800 GB/s, and there's no word on whether it supports 256-512 GB of memory.
> Their top model still only has "Up to 228 GB/s" bandwidth which places it in the low end category for anything AI related, for comparison Apple Silicon is up to 800GB/s
Most Apple Silicon is much less than 800 GB/s.
The base M4 is only 120GB/s and the next step up M4 Pro is 273GB/s. That’s in the same range as this part.
It’s not until you step up to the high end M4 Max parts that Apple’s memory bandwidth starts to diverge.
For the target market with long battery life as a high priority target, this memory bandwidth is reasonable. Buying one of these as a local LLM machine isn’t a good idea.
This, and always check benchmarks instead of assuming memory bandwidth is the only possible bottleneck. Apple Silicon definitely does not fully use its advertised memory bandwidth when running LLMs.
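For a rough sense of why bandwidth dominates here: single-stream token generation has to read every weight once per token, so memory bandwidth sets a hard ceiling on tokens/sec. A back-of-envelope sketch (bandwidth figures are the ones quoted in this thread; treat them as approximate):

```python
# Upper bound on decode speed for a memory-bandwidth-bound LLM:
# every generated token streams all weights once, so
# tokens/sec <= bandwidth / model size in bytes.

def max_tokens_per_sec(bandwidth_gb_s: float,
                       params_billions: float,
                       bytes_per_param: float = 2.0) -> float:
    model_gb = params_billions * bytes_per_param  # fp16/bf16 weights
    return bandwidth_gb_s / model_gb

# 7B model at fp16 -> 14 GB of weights:
print(max_tokens_per_sec(228, 7))  # X2 Elite Extreme: ~16 tok/s ceiling
print(max_tokens_per_sec(120, 7))  # base M4: ~8.6 tok/s ceiling
```

Real-world throughput comes in below these ceilings (KV-cache traffic, compute limits, software overhead), which is exactly why checking benchmarks beats comparing spec sheets.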
As I stated this is the top Qualcomm model we're talking about, not the base which is significantly lower.
Given that their top model underperforms the most common M4 chip, and that the M5 is about to be released, it's not very impressive at all.
Even the old M2 Max in my early 2023 MacBook Pro has 400GB/s.
The base model X2 Elite has memory bandwidth of 152 GB/s. M4 Pro is a modest win against the Extreme as mentioned, and Qualcomm has no M4 Max competitor that I'm aware of.
https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets...
I think the pure hardware specs compare reasonably against AS, aside from the lack of a Max of course. Apple's vertical integration and power efficiency make their product much more compelling though, at least to me. (Qualcomm, call me when the Linux support is good.)
Most consumers don’t care about local LLMs anyway.
Yet the apps top the App Store charts. Considering that these machines are not upgradable, I think the specs are relevant. Just as I thought Apple shipping systems with 8 GB minimums was not good future-proofing.
Looking at the Mac App Store in the US, no they don't. There's not an LLM app in sight (local or otherwise).
What apps with local llm top app store charts?
They asked ChatGPT.
Today the Qualcomm CEO stated[0] that the combination of Android and ChromeOS, i.e. Android Computers, will be available on Snapdragon laptops. Maybe these X2 CPUs will be in those laptops.
[0] https://www.techradar.com/phones/android/ive-seen-it-its-inc...
Does anyone buy these?
If you look at the verified hardware list for ChromeOS Flex[0], you can get an idea of what ChromeOS devices are being deployed for. Apart from education and companies that use Google Workspace, there's a lot of ChromeOS devices deployed as kiosks and call center computers. This is reflected not only in obscure documentation, but also in the marketing material[1].
The "enterprise" manageability and reduced attack surface are driving Google to jack up Chromebook prices. The "Chromebook Plus" models are nearing the same price as a midrange Dell Inspiron, HP OmniBook, or Lenovo IdeaPad. You may have also noticed M4 MacBook Airs can be bought for the price of an iPhone 17, and I suspect that's partially a response from Apple to the Chromebook price increases. Buying a $600 Chromebook might have been sane for someone tired of Microsoft and not interested in a $1000 MacBook Air, but in 2025, with MacBook Air prices going down significantly[2], Chromebooks are not as appealing to regular consumers (different story for businesses).
[0] https://support.google.com/chromeosflex/answer/11513094?sjid...
[1] https://chromeos.google/business-solutions/use-case/contact-...
[2] https://www.zdnet.com/article/the-m4-macbook-air-is-selling-...
For people complaining about battery control and android emulation on linux, ChromeOS is a boon.
You effectively get an actual Linux distro + most of Android, with a side of Chrome. It's way closer to "a real computer" than an iPad, for instance, and only loses to the Surface Pro/Z13 line in terms of versatility IMHO.
It really wasn't bad; my only deal-breakers were keyboard remapping being nonexistent and the Bluetooth stack being flaky.
I got a ChromeOS device a few years ago and it was great. I think they get an undeserved bad reputation from being the locked-down devices you're forced to use in schools, but a personal ChromeOS device is a capable computer that can run any Android app or desktop Linux app.
Though having said that, in the past year I've replaced ChromeOS with desktop Linux (postmarketOS) and I love it even more now. 4GB of RAM was a bit slim for running everything in micro-VMs for "security," which is what ChromeOS does. I've had no trouble with battery life or Android emulation (Waydroid) since switching.
Not really any; Crostini has plenty of restrictions.
Cool if one wants to do CLI stuff alongside Web and Android apps, but that is as far as it goes for GNU/Linux, with many a "yes, but".
https://chromium.googlesource.com/chromiumos/docs/+/1792b43f...
Sorry, but "CLI stuff" is not "as far as it goes" with desktop Linux apps on ChromeOS. ChromeOS provides Wayland and PulseAudio servers to the apps as well so GUI and audio works too. It even synchronises file associations and installs a ChromeOS-like GTK theme into the container. The Linux GUI apps I had installed back when I used it felt completely native.
Without hardware acceleration, and with sound issues depending on the model. That is why I linked the page, as I was expecting such a reply.
It worked on my device. The page you linked looks very outdated and doesn't have my device's board or any device made in the past 5 years. The lists of unsupported devices also look pretty reasonable - old kernels, CPUs that don't support virtualisation and 32-bit ARM. Since modern ChromeOS uses the same virtualisation to run Android apps, I doubt there's a modern device where it doesn't work.
ChromeOS is popular in schools and for extremely locked down, managed corporate devices.
Really hope they sort out Linux support on these. Seems like it would make a great travel laptop
If Snapdragon (or ARM players in general) wanted to challenge x86 and Apple dominance, do they need to compete in the exact same arena? Could they carve out a niche (example: ultra-efficient always-on machines) and then expand?
Are you aware of countless SoCs meant for use in smartphones and below? This is them expanding.
Exactly! That makes this move all the more interesting. The smartphone SoC market is saturated, and margins are shrinking. Laptops/PCs give Qualcomm a chance to leverage its IP in a higher-ASP segment. Expanding is logical, but the competitive bar is way higher.
Also a bunch of Chromebooks with MediaTek chips.
Apple chips are ARM chips.
“ARM chip” is a pretty broad umbrella. Apple’s M-series is based on the ARM ISA, the microarchitecture is Apple’s own design, and the SoCs are built with very different cache hierarchies, memory bandwidth, and custom accelerators. I was simply using Apple as an example of another big player.
Well, so is the Snapdragon X Elite, including the older Snapdragons (anyone remember Scorpion cores on the QSD8x50?)
Any thermal design power data? It's difficult to evaluate their efficiency claims (work per watt) without it.
“Multi-day” battery life sounds wild! That’s probably the biggest thing for users. It would be good for Apple to get some competition because their M-chips seemed so far away from everything else.
Careful; the multi-day claims may depend on having an unrealistically huge battery, or being active only sporadically across the time period.
Still, even if someone uses it for two hours a day and then just closes it, being able to run for multiple days without charging the way Macs can is fantastic.
I agree it seems incredibly unlikely that you’re doing multiple days of eight hours of work without charging.
Longer is always better, so if it’s true at all great for them.
Any battery life claim needs to be aligned with the consumer-class operating system and application layer (iOS, Android, etc). Multi-day battery life on a non-Google-Pixel Android device with typical usage would be interesting.
Not a single benchmark even against the previous generation. Just a "legendary leap in performance".
Bigly fast, trust them!
Blazingly fast, even
They showed benchmarks in the video but it's probably best to wait for independent reviews anyway.
Phoronix!
18 cores = 12 Prime and 6 Performance Cores
Not sure what a prime core is.
For comparison the M4 Pro can go as high as 10 performance cores and 4 efficiency cores.
Looks like some benchmarks have started leaking: https://www.notebookcheck.net/Snapdragon-8-Elite-Gen-5-perfo...
Mind you, Geekerwan managed to push the A19 Pro to 4019 in Geekbench 6 by using active cooling. https://youtu.be/Y9SwluJ9qPI
Today I learned that people are overclocking phone CPUs/SoCs
Seems to be the first Arm CPU to hit 5 GHz. I couldn't find the ISA details, and I'm curious whether they will support SME like the M-series Apple chips do.
Single core only @turbo-boost.
It does have SME.
It doesn't say which generation of core it is. Are they the same as the ones in the Elite Gen 5?
Has Microsoft actually pushed for the ARM changes? Because I don't believe Qualcomm can do it alone.
Yes, it's the same Oryon V3.
AFAIK Windows on ARM is completely pushed by Microsoft (obviously they're limited by their own competence) and Qualcomm has been kind of phoning it in.
I trust MS in this. NT has been multi-arch since day one. x86 wasn’t even the original lead architecture.
They also know the score. Intel is not in a good place, and Apple has been showing them up in lower power segments like laptops, which happen to be the #1 non-server segment by far.
They don’t want to risk getting stuck the way Apple did three times (68k, PowerPC, Intel), where someone else was limiting their sales.
So they’re laying groundwork. If it’s a backup plan, they’re ready. If ARM takes off and x86 keeps going well, they’re even better off.
FOSS support for Windows ARM has been hampered by Github (owned by MS) not supporting free Windows ARM runners. They may be finally getting their act together but are years late to the game.
These all have nightmarish support. They're not a big deal for Qualcomm, so the driver support is garbage. And you're stuck on their kernel, like one of those Raspberry Pi knock-offs. It's just really hard to take them seriously.
Ironically, the M1 chip is better supported on Linux.
Yes, but the M1/M2 only…
why is it so hard for these companies to do any kind of decent marketing? more importantly, when do we get decent macbook air competitors?
> when do we get decent macbook air competitors
When laptop OEMs stop catering to the lowest common denominator corporate IT purchasers (departments which don't care about screen quality, speaker quality, or much of anything else outside of does the spec sheet on paper match our requirements and is it cheap).
I have a Yoga Slim 7x, which has the ARM chip. Screen quality is fantastic, along with build quality, touchpad and keyboard feel :shrug:
It really depends on what Laptop line you buy. Dells have overwhelmingly become garbage, right next to HP.
Speaker quality on a laptop, OTOH? Couldn't care less; I use headphones/earbuds 99% of the time, because if I'm going portable, I'm traveling, and I don't want to be an inconsiderate arse.
The Yoga Slim 7x is a rather unique outlier. I was in the market for a non-Mac laptop a little while ago, and that was literally the only one that met my standards.
> departments which don't care about screen quality, speaker quality, or much of anything else outside of does the spec sheet on paper match our requirements and is it cheap)
Translation: departments which don't care about worker's wellbeing.
This is just a laptop cpu, not an end consumer product…
They’re not marketing to consumers, or even really enthusiasts though right?
They’re marketing to OEMs.
Who is likely to package this into existing lines, from the majors? Is this a future Lenovo ThinkPad X1 Carbon?
I would assume it'll follow the same path as the first X Elite.
MS put out the Surface and Surface Laptop with it, Lenovo did the ThinkPad X1 with it, and Dell put it in the XPS line.
the OEMs who used the Snapdragon X1 Elite in windows laptops, from https://en.wikipedia.org/wiki/List_of_devices_using_Qualcomm... :
Acer, Asus, Dell, HP, Lenovo, Microsoft, Samsung
Looking at the SOCs used, only Dell, Microsoft, and Samsung used the 2nd fastest SoC, the X1E-80-100 - the Dell and Microsoft laptops could be configured with 64GB soldered.
Samsung also used the fastest SoC (the only OEM to do so), the X1E-84-100. From a search of their USA website, you're stuck with only 16GB on any of their Snapdragon laptops. :(
I'd hope whichever OEM(s) uses the Snapdragon X2 Elite Extreme SoC (X2E-96-100) allows users to configure RAM up to 64GB or 128GB.
X1 Carbon is part of the Intel Evo Platform. These are co-developed with Intel and therefore this line is exclusive to them.
X13s was confirmed to be sunset, another T14s is the most likely candidate among the ThinkPads.
It's likely to be in Thinkpads (unless Lenovo lost so much money on the X Elite that they ragequit ARM). They also had a testimonial from HP.
Why can't I scroll on this page with the trackpad? Mouse scroll and arrow scroll both work fine.
I'm not holding my breath, though. I have a Samsung Edge 4 laptop and I didn't find the battery life impressive - prob got around 6 hours under coding/programming tasks. GPU performance is terrible too.
I feel like I'm constantly charger-tending all my non-Apple silicon laptops.
M-series instant wake from sleep is also years ahead of the Windows wakeup roulette, so even if this new processor helps with time away from chargers... we still have the Windows sleep/hibernate experience.
i wonder if intel and nvidia will catch up before they manage to deliver decent linux support...
Those memory bandwidth numbers are making me proud of being an LPDDR4 holdout.
how much ram can these support ?
Supposedly 128 GB although I doubt vendors will ship that much.
the snapdragon x2 elite extreme (X2E-96-100) SoC supports "128GB+" but qualcomm hasn't specified what the max limit is. this soc also has higher memory bandwidth (228GB/s over 192-bit bus) than the x2 elite.
also see https://wccftech.com/snapdragon-x2-elite-extreme-die-package...
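As a sanity check on those figures: peak bandwidth is just bus width times transfer rate, so you can work backwards from 228 GB/s on a 192-bit bus to the implied per-pin data rate. (The LPDDR5X label below is my assumption; Qualcomm hasn't confirmed the exact memory spec here.)

```python
# Peak bandwidth (bytes/s) = (bus_width_bits / 8) * transfers_per_sec,
# so the implied transfer rate is bandwidth divided by bytes per transfer.

def implied_transfer_rate_gt_s(bandwidth_gb_s: float, bus_bits: int) -> float:
    bytes_per_transfer = bus_bits / 8  # 192-bit bus -> 24 bytes per transfer
    return bandwidth_gb_s / bytes_per_transfer

print(implied_transfer_rate_gt_s(228, 192))  # 9.5 GT/s, LPDDR5X-9500-class
```

Running the numbers the other way, a wider 256-bit bus at the same rate would give 304 GB/s, which shows how much of the M4 Max-class gap is simply bus width.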
Linux support is still basically non-existent for the first gen, and they made a big deal about supporting Linux and the open-source community. This is to say: don't trust them.
The truth is much more subtle than "nonexistent" IMO [1].
Clearly it's a priority, because ChromeOS/Android support is a big headline this year.
[1] https://discourse.ubuntu.com/t/ubuntu-24-10-concept-snapdrag...
Also worth noting that not all the bits needing support are inside of the Snapdragon, so specific vendor support from Dell, Lenovo etc is required.
My (admittedly cynical) interpretation is that they are dropping support for desktop Linux completely and shipping Android drivers instead.
That'd definitely fit the Qualcomm pattern of trying to force you to upgrade by not upstreaming their Linux drivers.
This is one place where Windows has an advantage over Linux. Windows' long-term support for device drivers is generally really good. A driver written for Vista is likely to run on 11.
A stable driver ABI will do that. And a couple billion in revenue to fund bending over backwards to make sure stuff doesn't break.
I thought “Android drivers” were Linux drivers?
I think the situation is:
Old situation: "Android drivers" are technically Linux drivers in that they are drivers which are built for a specific, usually ancient, version of Linux with no effort to upstream, minimal effort to rebase against newer kernels, and such poor quality that there's a reason they're not upstreamed.
New situation: "Android drivers" are largely moved to userspace, which does have the benefit of allowing Google to give them a stable ABI so they might work against newer kernels with little to no porting effort. But now they're not really Linux drivers.
In neither case does it really help as much as you'd hope.
Old Android also had a bunch of weird kernel drivers that were not upstream; they mostly are now so Android kernel is converging on Linux finally.
Android drivers don't support Wayland etc.
They “supported Linux” by putting it in a virtual machine guarded by the hardware against the machine’s owner. No thank you.
Not surprising considering I haven't seen a programming manual or actual datasheet for these things in the first place. Usually helps if you tell the community how to interact with your hardware ..
That ended 10-20 years ago. The best you can hope for now is vendor-provided drivers.
Not even true: Arm, Intel, AMD, and most other hardware vendors (who are actively making an effort to support Linux on their parts) actually publish useful[^1] documentation.
edit: Also, not knocking the Qualcomm folks working on Linux here, just observing that the lack of hardware documentation doesn't exactly help reeling in contributors.
[^1]: Maybe in some cases not as useful as it could be when bringing up some OS on hardware, but certainly better than nothing
They expected linux devs to build it for free
In some cases the linux devs want to build it for free, but they still need enough information to work with
How's the WSL2 support on these Aarch64 Windows systems?
I'm not a huge fan of working in WSL, because I actively dislike the Windows GUI.
I have both Ubuntu and Docker Desktop set up in WSL2 on my X Elite laptop, they both work great, no issues (at least none that I have run into).