Impressive that this was done in 3 days at all, but to anyone even passingly familiar with System 7's appearance, the screenshot is almost comically "off" and gives away that this is not a straight port so much as some kind of clean-room reimplementation. The attached paper is more reserved, calling this a "bootable prototype".
It's likely that they didn't have the rights to use the original fonts or icons.
And yet they advertise: "Chicago Bitmap Font: Pixel-perfect rendering of the classic Mac font"
Worth mentioning this isn't a port of the entire system, more a reimplementation that lacks MANY features of the real System 7
You know, this makes me wonder about porting Windows XP or such to ARM for double the fun nowadays.
is there a blogpost about the process/what tools were used to do so?
This is _not_ a port of the (leaked) System 7 sources from 68k/PPC to x86. It rather seems to be reverse engineered from the 68k binaries.
From the project's README.md: "This is a reimplementation project for educational and preservation purposes."
See https://zenodo.org/records/17196870 for the related paper:
"we reconstructed a bootable prototype of Apple System 7.1 directly from 68k binaries in three days"
"We present an AI-assisted reverse engineering framework that achieves dramatic speedups—on the order of hundreds of times faster than traditional manual methods—by orchestrating specialized agents for evidence curation, struct recovery, and code drafting. Using this approach, we recreated a bootable prototype of Apple System 7.1 from binary analysis in just 3 days."
One of my hopes for Large LANGUAGE Models is that they aid in JIT emulation of the "languages" of OSes and assembly between architectures.
The amount of software preservation that could occur by having LLMs port binaries to new architectures (and maybe do reverse engineering of the source code) is something that is well short of AGI, but would be tremendously useful.
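To make the "port binaries to new architectures" idea concrete, here is a toy sketch (my own illustration, not anything from the paper or project) of the instruction-level lifting step such a pipeline would start from: decoding a couple of real 68k opcode encodings into C-like pseudocode. A real lifter, or an LLM-assisted one, would handle the full instruction set, addressing modes, and control flow.

```python
def lift_68k(words):
    """Lift a list of 16-bit 68k opcode words to C-like pseudocode lines.

    Toy example covering only two encodings:
      MOVEQ #imm, Dn  ->  0111 rrr 0 dddddddd
      RTS             ->  0x4E75
    """
    out = []
    for w in words:
        if (w & 0xF100) == 0x7000:          # MOVEQ: top nibble 0111, bit 8 clear
            reg = (w >> 9) & 7              # destination data register Dn
            imm = w & 0xFF
            if imm >= 0x80:                 # MOVEQ sign-extends its byte operand
                imm -= 0x100
            out.append(f"d{reg} = {imm};")
        elif w == 0x4E75:                   # RTS: return from subroutine
            out.append("return;")
        else:
            out.append(f"/* unhandled opcode 0x{w:04X} */")
    return out

# 0x7001 = MOVEQ #1, D0; 0x7203 = MOVEQ #3, D1; 0x4E75 = RTS
print(lift_68k([0x7001, 0x7203, 0x4E75]))
# → ['d0 = 1;', 'd1 = 3;', 'return;']
```

The hard part, of course, isn't the mechanical decode (disassemblers already do that); it's recovering structs, calling conventions, and intent from the lifted soup, which is where the paper claims the LLM agents help.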
Alas, I don't think any LLM vendor will pay much attention to this; there is too much money in JavaScript/HTML primarily and the other mainstream langs secondarily.
But LLMs should in theory be able to navigate the edge cases of doing things in different OSes / Windowing toolkits / etc better than straight decompiler/recompilers would be able to.
This is related to a big potential area for LLMs: porting legacy enterprise code to newer systems, just like this guy did.
One of the things that LLMs seem to do quite well in natural language is "now do this text in the style of $famousAuthor". That seems related.