I’ve been on/off playing with 9front on an old laptop. I’ve been having a lot of fun with it, it’s fun to write code for, but i have had a hard time using it as anything but a toy.
I would love to use it as my main desktop, but ultimately (and kind of unsurprisingly), the blocker is the lack of a modern browser and a lack of video acceleration.
I’m sure I could hobble something together with virtualization for the former but I don’t see a fix for video acceleration coming.
Maybe I could install it on a server or something.
I did the same with an old Thinkpad but somehow found it relies too heavily on the mouse. I might still go back to it because I love how far they've taken the "everything is a file" idea and would like to experiment more with that.
I saw on HN in a different Plan 9 thread (though I'm having a bit of trouble finding it), where someone mentioned the idea of using Plan 9 to build a custom router.
I have a very rough mental model of how that could be done, and I think it would be cool to say I have that, but I haven't been bothered to replace my beloved NixOS router yet.
Same here. I made a custom router (debian, nftables, dnsmasq, etc and python code to manage it all over ssh) and I spent way too much time on it to replace it :-)
IMO, the biggest curse of the Internet age is how Distributed OS's did not become mainstream. Maybe we should repackage these as Unikernels and run our apps using their distribution services directly on a hypervisor.
k8s is really just a distributed OS implemented on top of Linux containers, only with extra facilities for automated tuning, scaling and overall management that are lacking on bare plan9.
9front is far ahead of docker and crappy namespaces running on a libre reimplementation of a dead-end Unix version. They did things right from the start. bind is far superior to anything else.
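But... muh scalability!
Why did BSD make Unix sockets something outside of the file system?
I can do this in bash but I always thought it would be more elegant to do a similar thing in C. I thought Plan 9 handled it more like this?
    cat < /dev/tcp/localhost/22
    SSH-2.0-OpenSSH_10.0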
They did not have the original Unix vision, and it is a lot easier to design an interface as a programming interface than to shoehorn it into a filesystem interface.
I think having a filesystem interface is pretty great, and plan9 showed it could be done, but having to describe all your IO in the [database_key, open(), close(), read(), write(), seek()] interface can be tricky and limiting for the developer. It is pretty great for the end user however. Having a single API for all IO is a super power for adaptive access patterns.
I think the thing that bothers me the most about the BSD socket interface is how close it is to a fs interface: connect()/bind() instead of open(), recv()/send() instead of read()/write(), but it still uses file descriptors so that stuff tends to work the same. We almost had it.
As much as I like BSD and as great an achievement that the socket interface was, I still think this was their big failure.
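I just think this sounds very elegant
https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs#/net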
> Plan 9 does not have specialised system calls or ioctls for accessing the networking stack or networking hardware. Instead, the /net file system is used. Network connections are controlled by reading and writing control messages to control files. Sub-directories such as /net/tcp and /net/udp are used as an interface to their respective protocols.
> Combining the design concepts
> Though interesting on their own, the design concepts of Plan 9 were supposed to be most useful when combined. For example, to implement a network address translation (NAT) server, a union directory can be created, overlaying the router's /net directory tree with its own /net. Similarly, a virtual private network (VPN) can be implemented by overlaying in a union directory a /net hierarchy from a remote gateway, using secured 9P over the public Internet. A union directory with the /net hierarchy and filters can be used to sandbox an untrusted application or to implement a firewall.[43] In the same manner, a distributed computing network can be composed with a union directory of /proc hierarchies from remote hosts, which allows interacting with them as if they are local.
> When used together, these features allow for assembling a complex distributed computing environment by reusing the existing hierarchical name system
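To make that concrete, dialing a TCP connection through /net looks roughly like this; a sketch in Plan 9 C from memory, untested and with most error handling omitted:

    /* Sketch (Plan 9 C, details from memory): dialing a TCP connection
     * by hand through /net, roughly what dial(2) does under the hood.
     * No socket calls, just open/read/write on control files. */
    #include <u.h>
    #include <libc.h>

    int
    manualdial(char *addr)    /* e.g. "10.0.0.1!80" */
    {
        char num[32], path[64];
        int cfd, dfd, n;

        /* opening the clone file allocates a fresh connection dir /net/tcp/N */
        cfd = open("/net/tcp/clone", ORDWR);
        if(cfd < 0)
            return -1;
        n = read(cfd, num, sizeof num - 1);    /* reads back the number N */
        if(n <= 0)
            return -1;
        num[n] = 0;

        /* the clone fd is the connection's ctl file: write a textual command */
        if(fprint(cfd, "connect %s", addr) < 0)
            return -1;

        /* the data file now carries the byte stream for this connection */
        snprint(path, sizeof path, "/net/tcp/%d/data", atoi(num));
        dfd = open(path, ORDWR);
        /* cfd is deliberately left open here; real code would track it */
        return dfd;
    }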
I remember first setting up NAT or IP masquerading around 1998. It seemed like an ugly hack and some custom protocols did not work.
I use a bunch of VPNs now and it still seems like a hack.
The Plan 9 way just seems very clean although you now have to secure the server more strongly because you are exporting filesystems from it and others are mounting it.
> The Plan 9 way just seems very clean although you now have to secure the server more strongly because you are exporting filesystems from it and others are mounting it.
With that in mind I wish (the standard Unix gripe!) 9P had a more complex permissions model... 9P's flexibility and per-process namespaces get you a long way, but it's not a natural way to express them.
> The Plan 9 way just seems very clean although you now have to secure the server more strongly because you are exporting filesystems from it and others are mounting it.
aye. this was my first thought too. I seem to recall older Windows doing something like the same thing -- e.g. internet controls tied to the same system as the files -- and that's how we got the 90s-2000s malware 'asplosion.
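> I just think this sounds very elegant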
Where the elegance starts to fade for me is when you see all the ad hoc syntaxes for specifying what to connect to and what to mount. I have no love for tcp!10.0.0.1!80 or #c or #I. I want to move away from parsing strings in trusted code, especially when that code is C.
I also have no love for "read a magic file to have a new connection added to your process".
9P is neat but utterly unprepared for modern day use where caching is crucial for performance.
Clean doesn't mean easy to use. I've worked with a system before that had a very clean, elegant design (message-passing/mailboxes), easy to implement, easy to apply security measures to, small, efficient, everything you could ask for, and pretty much the first thing anyone who used it did was write a set of wrappers for it to make it look and feel more natural.
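> can be tricky and limiting for the developer. It is pretty great for the end user however.
This seems to be a great general principle of api design! The best apis are those that are hated by the developer and loved by the end users.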
> The best apis are those that are hated by the developer and loved by the end users.
No, just those loved by the API consumer. Negative emotions on one end don't do anything positive.
In the case of plan9, not everything can be described elegantly in the filesystem paradigm and a lot of things end up having really awkward "ctl" files which you write command strings to that the fileserver needs to parse. It also handicaps performance due to the number of filesystem operation roundtrips you usually end up making.
Maybe if combined with something io_uring-esque, but the complexity of that wouldn't be very plan9-esque.
> a lot of things end up having really awkward "ctl" files which you write command strings to that the fileserver needs to parse.
These are no different in principle than ioctl calls in *ix systems. The `ctl` approach is at least a proper generalization. Being able to use simple read/write primitives for everything else is nonetheless a significant gain.
They are very different. ioctl's on a file take an operation and arguments that are often userspace pointers as the kernel can freely access any process's memory space. ctl files on the other hand are merely human-readable strings that are parsed.
Say, imagine an API where you need to provide a 1 KiB string. The plan9 version would have to process the input byte by byte to sort out what the command was, then read the string into a dynamic buffer while unescaping it until it finds, say, the newline character.
The ioctl would just have an integer for the operation, and if it wanted to it could set the source page up for CoW so it didn't even have to read or copy the data at all.
Then we have to add the consideration of context switches: The traditional ioctl approach is just calling process, kernel and back. Under plan9, you must switch from calling process, to kernel, to fileserver process, to kernel, to fileserver process (repeat multiple times for multiple read calls), to kernel, and finally to calling process to complete the write. Now if you need a result you need to read a file, and so you get to repeat the entire process for the read operation!
Under Linux we're upset with the cost of the ioctl approach, and for some APIs plan to let io_uring batch up ioctls - the plan9 approach would be considered unfathomably expensive.
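For what it's worth, the shape of the two styles side by side (FIONBIO is a real ioctl; the /some/dev/ctl path and its command string are invented for illustration):

    /* Sketch (POSIX-ish C): the same "make this descriptor non-blocking"
     * request expressed as a classic ioctl versus a Plan 9-style ctl write. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int
    main(void)
    {
        int pfd[2];
        if(pipe(pfd) < 0)
            return 1;

        /* ioctl style: a numeric request plus a pointer the kernel can
         * dereference directly -- one syscall, no string parsing */
        int on = 1;
        if(ioctl(pfd[0], FIONBIO, &on) < 0)
            perror("ioctl");

        /* ctl-file style: a human-readable command that some file server
         * has to read and parse, typically in another process */
        int ctl = open("/some/dev/ctl", O_WRONLY);    /* hypothetical path */
        if(ctl >= 0){
            const char *cmd = "nonblock on";
            if(write(ctl, cmd, strlen(cmd)) < 0)
                perror("write ctl");
            close(ctl);
        }
        return 0;
    }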
> The `ctl` approach is at least a proper generalization.
ioctl is already a proper generalization of "call operation on file with arguments", but because it was frowned upon originally it never really got the beauty-treatment it needed to not just be a lot of header file defines.
However, ioctl'ing a magic define is no different than writing a magic string.
It's perfectly possible to provide binary interfaces that don't need byte-wise parsing or that work more like io_uring as part of a Plan9 approach, it's just not idiomatic. Providing zero-copy communication of any "source" range of pages across processes is also a facility that could be provided by any plan9-like kernel via segment(3) and segattach(2), though the details would of course be somewhat hardware-dependent and making this "sharing" available across the network might be a bit harder.
Indeed, you can disregard plan9 common practice and adopt the ioctl pattern, but then you just created ioctl under a different name, having gained nothing over it.
You will still have the significant context switching overhead, and you will still need distinct write-then-read phases for any return value. Manual buffer sharing is also notably more cumbersome than having a kernel just look directly at the value, and the neat part of being able to operate these fileservers by hand from a shell is lost.
So while I don't disagree with you on a technical level, taking that approach seems like it misses the point of the plan9 paradigm entirely and converts it to a worse form of the ioctl-based approach that it is seen as a cleaner alternative to.
Being able to do everything in user space looks like it might be a worthwhile gain in some scenarios. You're right that there can be some context switching overhead to deal with, though even that might possibly be mitigated; the rendezvous(2) mechanism (which works in combination with segattach(2) in plan9) is relevant, depending on how exactly it's implemented under the hood.
I must admit that the ability to randomly bind on top of your "drivers" to arbitrarily overwrite functionality, whether to VPN somewhere by binding a target machine's network files, or how rio's windows were merely /dev/draw proxies and you could forward windows by just binding your own /dev/draw on the target, holds a special place in my heart. 9front, if nothing else, is fun to play with. I just don't necessarily consider it the most optimal or most performant design.
(I also have an entirely irrational love for the idea of the singular /bin folder with no concept of $PATH, simply having everything you need bound on top... I hate $PATH and the gross profile scripts that go with it with a passion.)
> I also have an entirely irrational love for the idea the singular /bin folder with no concept of $PATH, simply having everything you need bound on top
That's really an easy special case of what's called containerization or namespacing on Linux-like systems. It's just how the system works natively in plan9.
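For instance (a Plan 9 C sketch from memory, untested; the /usr/glenda path is just the stock home directory), a process can take a private copy of its namespace and union a directory into /bin in a couple of calls:

    /* Sketch: give this process its own copy of the namespace, then union
     * another directory into /bin -- the "no $PATH, just bind what you need"
     * idea from the comments above. */
    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        char *argv[] = { "ls", "/bin", nil };

        if(rfork(RFNAMEG) < 0)               /* private copy of the namespace */
            sysfatal("rfork: %r");

        /* union a per-user bin after whatever is already in /bin */
        if(bind("/usr/glenda/bin/rc", "/bin", MAFTER) < 0)
            sysfatal("bind: %r");

        /* from here on, anything in that directory is simply in /bin */
        exec("/bin/ls", argv);
        sysfatal("exec: %r");
    }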
can you paper over the *ixian abstraction using transformer based metamodeling language oriented programming and the individual process namespace Lincos style message note passing hierarchy lets the Minsky society of mind idea fall out?
Alef lacked GC, which Rob Pike considered the main reason for its failure on Plan 9, and initially bounds checking was also missing.
Two key design differences from Go and its two predecessors.
Dis is an implementation detail; Go could offer the same dynamism with an AOT toolchain, as proven by other languages with ahead-of-time toolchains available.
So does Rust. Rust is 'smarter' than Limbo, in that it can avoid using its GC in a lot of cases (but not all, hence why it still has GC to fall back on when necessary), but, I mean, that discovery was the reason for why Rust was created. Limbo was already there otherwise. Every new language needs to try to add something into the mix.
Still, the thoughts were in common, even though the final solution didn't end up being exactly the same.
Rust does not have a garbage collector in any way or form. It's just automatic memory like we're used to (e.g., stack in C++), with the compiler injecting free/drop when an object goes out of scope.
What Rust brings is ownership with very extensive lifecycle tracking, but that is a guard rail that gives compile-time failures, not something that powers memory management.
(If you consider the presence of Rc<T> to make Rust garbage collected, then so is C garbage collected as developers often add refcounting to their structs.)
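The hand-rolled refcounting meant here is nothing more than ordinary library code, something like this sketch; the language itself knows nothing about it:

    /* Sketch of hand-rolled refcounting on a struct: plain C, no language
     * support involved. */
    #include <stdlib.h>

    typedef struct Buf {
        int   refs;
        char *data;
    } Buf;

    Buf *
    buf_new(size_t n)
    {
        Buf *b = malloc(sizeof *b);
        if(b == NULL)
            return NULL;
        b->refs = 1;
        b->data = malloc(n);
        return b;
    }

    Buf *
    buf_retain(Buf *b)    /* share: bump the count */
    {
        b->refs++;
        return b;
    }

    void
    buf_release(Buf *b)   /* drop: free when the last reference goes away */
    {
        if(b != NULL && --b->refs == 0){
            free(b->data);
            free(b);
        }
    }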
> so is C garbage collected as developers often add refcounting to their structs.
Absolutely, C also can use a garbage collector. Obviously you can make any programming language do whatever you want if you are willing to implement the necessary pieces. That isn't any kind of revelation. It is all just 1s and 0s in the end. C, however, does not come with an expectation of it being provided. For all practical purposes you are going to have to implement it yourself or use some kind of third-party solution.
The difference with Limbo and Rust is that they include reference counting GCs out of the box. It is not just something you have the option of bolting onto the side if you are willing to put in the effort. It is something that is already there to use from day one.
It is not C using the garbage collector - it is you writing a garbage collector in C. The application or library code you develop with the language is not itself a feature of the language, and the language you wrote the code in is not considered to be "using" your code.
Rust and C are unaware of any type of garbage collection, and therefore never "use" garbage collection. They just have all the bells and whistles to allow you to reference count whatever type you'd like, and in the case of Rust there's just a convenient wrapper in the standard library to save you from writing a few lines of code. However, this wrapper is entirely "bolted onto the side": You can write your own Rc<T>, and there would be no notable difference from the std version.
So no, neither Rust nor C can use a garbage collector, but you can write code with garbage collection in any feature-complete language. This is importantly very different from languages that have garbage collection as a feature, like Limbo, Go, JavaScript, etc.
That's right, you can write a garbage collector in C for C to use. You can also write a garbage collector in C for Javascript to use, you could even write a garbage collector in C for Rust to use, but in this case we are talking about garbage collector for C to use.
If you are writing a garbage collector that will not be used, why bother?
This feels like it's turning into trolling, so one last round:
The C language does not use your C code, it is your C code that uses the C language.
The tools available to C is what the language specification dictated and what the compiler implemented. For example, C might use stack memory and related CPU instructions, because the C specification described "automatic memory" and the compiler implemented it with the CPU's stack functionality. It might insert calls to "memcpy" as this function is part of the C language spec. For C++, the compiler will insert calls to constructors and destructors as the language specified them.
The C language does not specify a garbage collector so it can never use one.
You, however, can use C to write a garbage collector to manually use in your C code. C remains entirely unaware of the garbage collector's existence as it has no idea what the code you write does - it will never call it on its own and the compiler will never make any decisions based on its existence. From C's perspective, it's still just memory managed manually by your application with your logic.
In JavaScript and Go, the language specifies the presence of garbage collection and how that should work, and so any runtime is required to implement it accordingly. You can write that runtime in C, but the C code and C compiler will still not be garbage collected.
The C standard is actually carefully written to allow for placing distinct "objects" in separate memory segments of a non-flat address space, such that ordinary pointer arithmetic cannot be expected to reach across to a separate "object". This is not far from allowing for some sort of GC as part of low-level C implementation, and in fact the modern Fil-C relies on it.
This is actually quite an interesting topic in its own right, but not quite the discussion above, which is about whether C or Rust include a GC or is considered to "use" the GC you hand-rolled or pulled in from a library.
I wouldn't consider Fil-C's GC a GC in the conventional sense either, in that code compiled with Fil-C is still managed manually, with the (fully-fledged) GC effectively only serving as a way to do runtime validation and to turn what would otherwise be a UAF with undefined behavior into a well-defined and immediate panic. This aspect is in essence an alternative approach to what is done by AddressSanitizer.
I'll have to look a bit more into Fil-C though. Might be interesting to see how it compares to the usual sanitizers in practice.
Your impression that there is a semantic authority misses the mark. While you are free to use English as you see fit, so too is everyone else. We already agreed on the intent of the message, so when I say something like "C uses C code", it absolutely does, even if you wouldn't say it that way yourself. I could be alone in this usage and it would remain valid. Only intent is significant.
However, I am clearly not alone in that style of usage. I read things like "Rust can use code written in C" on here and in other developer venues all the time. Nobody ever appears confused by such a statement even. If Rust can use code written in C, why can't C use code written in C?
> The C language does not specify a garbage collector so it can never use one.
The C language also does not specify a linked list. Go tell your developer friends that C can never use a linked list. Please take a photo when they look at you like you have two heads. Admittedly I lack the ability to say something so outlandish to another human with a straight face, but for the sake of science I put that into an LLM. It called me out on the bullshit, pointing out that C can, in fact, use a linked list.
For what it is worth, I also put "C can never use a garbage collector" into an LLM. It also called me out on that bullshit just the same. LLMs are really good at figuring out how humans generally use terminology. It is inherent in how they are trained. If an LLM is making that connection, so too would many humans.
> In JavaScript and Go, the language specifies the presence of garbage collection
The Go language spec does, no doubt as a result of Pike's experience with Alef. The JavaScript spec[1] does not. Assuming you aren't making things up, I am afraid your intent was lost. What were you actually trying to say?
> C code and C compiler will still not be garbage collected.
That depends. GC use isn't typical in the C ecosystem, granted, but you absolutely can use garbage collection in a C program. You can even use something like the CCured compiler to have GC added automatically. The world is your oyster. There is no way you couldn't have already realized that, though, especially since we already went over it earlier. It is apparent that your intent wasn't successfully transferred again. What are you actually trying to say here?
> This is turning into trolling.
The mightiest tree in the forest could be cut down with that red herring!
[1] The standard calls itself ECMAScript, but I believe your intent here is understood.
> The C language also does not specify a linked list. Go tell your developer friends that C can never use a linked list.
They would not blink because the statement is accurate. To the C language and to the C compiler, there are no linked lists - just random structs with random pointers pointing to god knows what. C does know about arrays though.
> The JavaScript spec[1] does not [specify garbage collection].
I have good reason to believe that you are not familiar with the specification, although to be fair most developers would not be familiar with its innards.
The specification spends quite a while outlining object liveness and rules for when garbage is allowed to be collected. WeakRefs, FinalizationRegistries, the KeptObjects list on the agent record, ...
Just like with Go, it is perfectly valid to have an implementation of a "garbage collector" that is a no-op that never collects anything, which means that the application will continuously leak memory until it runs out and crashes as the language provides no mechanism to free memory - for Go, you can switch to this with `GOGC=off`. The specific wording from ECMA-262:
> This specification does not make any guarantees that any object or symbol will be garbage collected. Objects or symbols which are not live may be released after long periods of time, or never at all. For this reason, this specification uses the term "may" when describing behaviour triggered by garbage collection.
If you're not used to reading language specs the rest of the details can be a bit dry to extract, but the general idea is that the spec outlines automatic allocation, permission for a runtime to deallocate things that are not considered "live", and the rules under which something is considered to be "live". And importantly, it provides no means within the language to take on the task of managing memory yourself.
This is how languages specify garbage collection as the language does not want to limit you to a specific garbage collection algorithm, and only care about what the language needs to guarantee.
> [1] The standard calls itself ECMAScript, but I believe your intent here is understood.
sigh.
> For what it is worth, I also put "C can never use a garbage collector" into an LLM. It also called me out on that bullshit just the same.
more sigh. LLMs always just wag their tails when you beg the question on an opinion. They do not do critical thinking for you or have any strong opinions to give of their own.
> To the C language and to the C compiler, there are no linked lists
But they are most certainly able to use one. Just as they can use a garbage collector. You are quite right that these are not provided out of the box, though. If you want to use them, you are on your own. Both Limbo and Rust do provide a garbage collector to use out of the box, though, so that's something different.
> The specification spends quite a while outlining object liveness and rules for when garbage is allowed to be collected. WeakRefs, FinalizationRegistries, the KeptObjects list on the agent record, ...
But, again, does not specify use of a garbage collector. It could use one, or not. That is left up to the implementer.
> it is perfectly valid to have an implementation of a "garbage collector" that is a no-op that never collects anything
It's perfectly valid as far as the computer is concerned, but in the case of Go not spec-compliant. Obviously you don't have to follow the spec. It is not some fundamental law of the universe. But if you want to be compliant, that is not an option. I get you haven't actually read the spec, but you didn't have to either as this was already explained in the earlier comment.
> This is how languages specify garbage collection
That is how some languages specify how you could add garbage collection if you so choose. It is optional, though. At very least you can always leak memory. Go, however, explicitly states that it is garbage collected, always. An implementation of Go that is GC-less and leaks memory, while absolutely possible to do and something the computer will happily execute, does not meet the conditions of the spec.
I would say that RC is GC, yes, as it is most definitely technically true. But it was pjmlp who suggested it originally (Limbo also uses reference counting), so we have clear evidence that others also see reference counting as being GC. We wouldn't have a discussion here otherwise.
While RC is a GC algorithm (chapter 5 of the GC handbook), it doesn't count when it isn't part of the type system, because then it becomes optional and not part of the regular use of the programming language.
Additionally, Limbo's GC is a bit more complicated than a plain add_ref()/release() pair of library calls.
ZeroFS [0] is very thankful for what it brought to Linux with the v9fs [1] subsystem which is very nice to work with (network native) compared to fuse :)
I believe that the Windows Subsystem for Linux (WSL, really a Linux subsystem on Windows) uses the Plan 9 network protocol, 9p, to expose the host Windows filesystem to the Linux virtual environment.
I don't know. I use a lot of Swift and C++ and while both are OK languages there is an absurd amount of complexity in these languages that doesn't seem to serve any real purpose. Just a lot of foot traps, really.
Coming back to Plan9 from that world is a breeze, the simplicity is like a therapy for me. So enjoyable.
If "modern" means complex, I don't think it fits Plan9.
I don't know about Swift, but in C++, the complexity serves at least three purposes:
1. Backwards compatibility, in particular syntax-wise. New language-level functionality is introduced without changing existing syntax, but by exploiting what had previously been ill-formed constructs.
2. Catering to the principle of "you don't pay for what you don't use" - and that means that the built-ins are rather spartan, and for convenience you have to build up complex structures of code yourself.
3. A multi-paradigmatic approach and multiple, sometimes conflicting, usage scenarios for features (which detractors might call "can't make up your mind" or "design by committee").
The crazy thing is that over the years, the added complexity makes the code for many tasks simpler than it used to be. It may involve a lot of complexity in libraries and under-the-hood, but paradoxically, and for the lay users, C++ can be said to have gotten simpler. Until you have to go down the rabbit hole of course.
AFAIK there is no Rust compiler for Plan 9 or 9front. The project is using a dialect of C and its own C compiler(s). I doubt adding Rust to the mix will help. For a research OS, C is a nice clean language and the Plan 9 dialect has some niceties not found in standard C.
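One such nicety, from memory, is unnamed struct members, whose fields get promoted into the enclosing struct; a rough sketch:

    /* Sketch (Plan 9 C dialect): an unnamed struct member -- the embedded
     * Point's fields are reachable directly, a bit like what C11 later
     * allowed for anonymous members. */
    #include <u.h>
    #include <libc.h>

    typedef struct Point Point;
    typedef struct Circle Circle;

    struct Point  { int x, y; };
    struct Circle {
        Point;          /* unnamed member */
        int r;
    };

    void
    main(void)
    {
        Circle c;

        c.x = 1;        /* reaches the embedded Point directly */
        c.y = 2;
        c.r = 3;
        print("%d %d %d\n", c.x, c.y, c.r);
        exits(nil);
    }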
If you really want Rust, check this https://github.com/r9os/r9 it is Plan 9 reimplemented in Rust (no idea about the project quality):
R9 is a reimplementation of the plan9 kernel in Rust. It is not only inspired by but in many ways derived from the original Plan 9 source code.
There isn't, though you can run Rust code compiled to wasm on it. I tried it a while back with a port of the w2c2 transpiler (https://github.com/euclaise/w2c9/), but something like wazero is a more obvious choice.
It is kind of interesting that C's inventors, contrary to the folks that worship C, not only did not care about ANSI/ISO compatibility, they ended up exploring Alef, Limbo and Go.
Meanwhile, AT&T Labs (together with Cornell) eventually started Cyclone, which ended up influencing Rust.
That's interesting, thanks. I feel a need for a simple multitasking/networking OS for a synthesizable RV32I core (not RTOS-like, but more like Unix or CP/M). Would be nice to try Plan9 on it once a port is out.
I’m not sure it still makes sense to do OS research so close to the metal. Most computing is done up on the application level, and our abstractions there suck, and I haven’t seen any evidence that “everything is a file” helps much in a world of web APIs and SQL databases
Some of us are still interested in the world underneath all that web stuff!
Multiple experimental operating systems at multiple abstraction levels sounds like a good idea, though. What sort of system software would you like to build?
I’m actually building an “OS” that’s up a level. It’s more like git: it has a concept of files, but they’re documents in a distributed store. I can experiment with interaction patterns without caring about device drivers.
Operating systems are where device drivers live. It sounds awfully impractical to develop alternatives at this stage. I think OP is right.
I think OSes should just freeze all their features right now. Does anyone remember all the weird churn in the world of Linux, where (i) KDE changed from version 3 to 4, which broke everyone's KDE completely unnecessarily (ii) GNOME changed from version 2 to 3, which did the same (iii) Ubuntu Linux decided to change their desktop environment away from GNOME for no reason - but then unchanged it a few years later? When all was said and done, nothing substantive really got done.
So stop changing things at the OS level. Only make conservative changes which don't break the APIs and UIs. Time to feature-freeze, and work on the layers above. If the upper layers take over the work of the lower layers, then over time the lower layers can get silently replaced.
I have never had so much negative feedback and ad-hom attacks on HN as for that story, I think. :-D
Short version, the chronology goes like this:
2004: Ubuntu does the first more-or-less consumer-quality desktop Linux that is 100% free of charge. No paid version. It uses the current best of breed FOSS components and they choose GNOME 2, Mozilla, and OpenOffice.
By 2006 Ubuntu 6.06 "Dapper Drake" comes out, the first LTS. It is catching on a bit.
Fedora Core 6 and RHEL 4 are also getting established, and both use GNOME 2. Every major distro offers GNOME 2, even KDE-centric ones like SUSE. Paid distros like Mandriva and SUSE are starting to get into some trouble -- why pay when Ubuntu does the job?
Even Solaris uses GNOME 2.
2006-2007, MS is getting worried and starts talking about suing. It doesn't know who yet so it just starts saying intentionally not-vague-at-all things like the Linux desktop infringes "about 265 patents".
This is visibly true if you are 35-40 years old: if you remember desktop GUI OSes before 1995, they were all over the place. Most had desktop drive icons. Most had a global menu bar at the top. This is because most copied MacOS. Windows was an ugly mess and only lunatics copied that. (Enter the Open Group with Motif.)
But then came Win95. Huge hit.
After 1995, every GUI gets a task bar, it gets buttons for apps, even window managers like Fvwm95 and soon after IceWM. QNX Neutrino looks like it. OS/2 Warp 4 looks like it. Everyone copies it.
Around the time NT 4 is out and Win98 is taking shape, both KDE and GNOME get going and copy the Win9x look and feel. Xfce dumps its CDE look and feel, goes FOSS, and becomes a Win95 copy.
MS had a case. Everyone had copied them. MS is not stupid and it's been sued lots of times. You betcha it patented everything and kept the receipts. The only problem it has is: who does it sue?
RH says no. GNOME 3 says "oh noes, our industry-leading GUI is, er, yeah, stale, it's stagnant, it's not changing, so what we're gonna do is rip it up and start again! With no taskbar and no hierarchical start menu and no menu bars in windows and no OK and CANCEL buttons at the bottom" and all the other things that they can identify that are from Win9x.
GNOME is mainly sponsored by Red Hat.
Canonical tries to get involved; RH says fsck off. It can't use KDE, that's visibly a ripoff. Ditto Xfce, Enlightenment, etc. LXDE doesn't exist yet.
So it does its own thing based on the Netbook Launcher. If it daren't imitate Windows then what's the leading other candidate? This Mac OS X thing is taking off. It has borrowed some stuff from Windows like Cmd+Tab and Fast User Switching and stuff and got away with it. Let's do that, then.
SUSE just wearily says "OK, how much? Where do we sign?"
RISC OS had a recognizable task bar around 1987, so 2006-2007 is just long enough for any patent on that concept to definitely expire. This story doesn't make any sense. As for dialog boxes with buttons at the bottom and plenty of buttons inside apps, the Amiga had them in 1984.
Yes, the Icon Bar is prior art, but there are a few problems with that.
1. It directly inspired the NeXTstep Dock.
This is unprovable after so long, but the strong suspicion is that the Dock inspired Windows 4 "Chicago" (later Windows 95) -- MS definitely knew of NeXT, but probably never heard of Acorn.
So it's 2nd hand inspiration.
2. The Dock isn't a taskbar either.
3. What the prior art may be doesn't matter unless Acorn asserted it, which AFAIK it didn't, as it no longer existed by the time of the legal threats. Nobody else did either.
4. The product development of Win95 is well documented and you can see WIP versions, get them from the Internet Archive and run them, or just peruse screenshot galleries.
The odd thing is that the early development versions look less like the Dock or Icon Bar than later ones. It's not a direct copy: it's convergent evolution. If they'd copied, they would have got there a lot sooner, and it would be more similar than it is.
> so 2006-2007 is just long enough for any patent on that concept to definitely expire.
RISC OS as Arthur: 1987
NeXTstep 0.8 demo: 1988
Windows "Chicago" test builds: 1993, 5Y later, well inside a 20Y patent lifespan
Win95 release: 8Y later
KDE first release: 1998
GNOME first release: 1999
The chronology doesn't add up, IMHO.
> This story doesn't make any sense. As for dialog boxes with buttons at the bottom and plenty of buttons inside apps, the Amiga had them in 1984.
You're missing a different point here.
Buttons at the bottom date back to at least the Lisa.
The point is that GNOME 3 visibly and demonstrably was trying to avoid potential litigation by moving them to the CSD bar at the top. Just as in 1983 or so GEM made its menu bar drop-down instead of pull-down (menus open on mouseover, not on click) and in 1985 or so AmigaOS made them appear and open only on a right-click -- in attempts to avoid getting sued by Apple.
> The point is that GNOME 3 visibly and demonstrably was trying to avoid potential litigation by moving them to the CSD bar at the top.
Well, the buttons in the titlebar at the top are reminiscent of old Windows CE dialog boxes, so I guess they're not really original either! What both Unity and GNOME 3 look like to me is an honest attempt to immediately lead in "convergence" with mobile touch-based solutions. They first came up in the netbook era, where making Linux run out-of-the-box on a market-leading small-screen, perhaps touch-based device was quite easy - a kind of ease we're only now getting back to, in fact.
That's why it's a research OS: a lot of people (or at least some) think that the current range of mainstream OSes is not very well designed, and that we can do better.
I'm not saying Plan 9 is the alternative, but it is kind of amazing how un-networked modern Operating Systems are, and we just rely on disparate apps and protocols to make it feel like the OS is integrated into networks, but they only semi-are.
I didn’t really see the appeal until I learned how to use FUSE.
There’s something elegant about filesystems. Even more than pipes, filesystems can be used to glue programs together. Want to control your webcam with Vim? Expose a writable file. Want to share a device across the network? Expose it as a file system, mount that filesystem on your computer.
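As a toy illustration of that kind of glue (not FUSE itself, just a named pipe standing in for the exposed file; the /tmp/webcam-ctl path and the commands are made up):

    /* Toy stand-in for the "control a program by writing a file" idea:
     * expose a writable "file" (a FIFO at a hypothetical path) and let any
     * program that can write a file drive it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char *ctl = "/tmp/webcam-ctl";    /* hypothetical control file */
        char buf[256];
        ssize_t n;
        int fd;

        mkfifo(ctl, 0600);                      /* ignore EEXIST for brevity */
        fd = open(ctl, O_RDONLY);               /* blocks until a writer shows up */
        if(fd < 0){
            perror("open");
            return 1;
        }
        while((n = read(fd, buf, sizeof buf - 1)) > 0){
            buf[n] = '\0';
            /* a real driver would parse commands ("zoom 2", "snapshot", ...)
             * and poke the device; here we just show what arrived */
            printf("got command: %s", buf);
        }
        close(fd);
        return 0;
    }

At that point `echo snapshot > /tmp/webcam-ctl`, or writing a buffer to that path from an editor, is the entire client-side "API".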
Idk, I still find low-level OS stuff super interesting because it hasn't had a rework in so long, even with everything we've learnt since the early days of modern computing: drives larger than a few MBs, super fast memory and fast cryptography, to name a few.
It's interesting to imagine a new OS that incorporates these changes from its infancy.
I appreciate all of the effort Linux, BSD, Android, QNX and closed-source OSes have put into building upon existing ideas and innovating gradually on them. But man, I really want to see something better than everything is a file. I really enjoyed the stuff BeOS was pitching.
The "everything is a file" approach is nice in many cases, I'm worried though if it works everywhere. Maybe if done right. Subversion (SVN) shows branches as separate file trees.. and ClearCase too (though I'm on thin ice with ClearCase, having used it very little). And I just can't stand the file-oriented way SVN works, I could never get used to it.
But there are a lot of other cases where "it's a file" does work, I've experimented with creating Fuse filesystem interfaces to some stuff now and then.
You're going to have to explain to me how a parametrized request/response system like calling a Web API or making a SQL query can be mapped to reading files. I've seen some stuff that people do with FUSE and it looks like ridiculous circus hoop jumping to making the Brainfuck-is-Turing-complete version of a query system. We have syntax for a reason.
Typically, if you were writing your hypothetical sql client in rc shell, you'd implement an interface that looks something like:
    <>/mnt/sql/clone {
        echo 'SELECT * from ...' >[1=0]
        cat /mnt/sql/^`{read}^/data # or awk, or whatever
    }
This is also roughly how webfs works. Making network connections from the shell follows the same pattern. So, for that matter, does making network connections from C, just the file descriptor management is in C.
This is... I don't know. I don't get why I would care to sling SQL over a file system versus a network socket.
I mean, Postgres could offer an SSH interface as a dumb pipe to psql to just have you push text SQL queries in your application. But it doesn't, it offers a binary protocol over a network socket. All the database engines have had the same decision point and have basically gone down the same path of implementing a wire protocol over a persistent socket connection.
So yeah, I don't get what doing things this way would give me as either a service provider or a service consumer. It looks like video game achievements for OS development nerds, "unlocked 'everything is a file'." But it doesn't look like it actually enables anything meaningful.
But if it requires understanding of a data protocol, it doesn't really matter if it's over the file system or a socket or flock of coked-up carrier pigeons. You still need to write custom user space code somewhere. Exposing it over the file system doesn't magically make composable applications, it just shuffles the code around a bit.
In other words, the transport protocol is just not the hard part of anything.
It's not hard, but it's sure a huge portion of the repeated boilerplate glue. Additionally, the data protocols are also fairly standardized in Plan 9; The typical format is tabular plain text with '%q'-verb quoting.
There's a reason that the 9front implementation of things usually ends up at about 10% the size of the upstream.
The benefit is that you can allocate arbitrary computers to compute arbitrary things. As it is now, you have to use kubernetes and it's a comedy. Though perhaps the same in effect, there are dozens of layers of abstraction that will forever sting you.
You're thinking from the perspective of the terminal user—ie, a drooling, barely-conscious human trying to grasp syntax and legal oddities of long-dead humans. Instead you need to think from the perspective of a star trek captain. Presumably they aren't manually slinging sql queries. Such tasks are best automated. We are all the drooling terminal user in the end, but plan9 enabled you to at least pretend to be competent.
Plan9 allows for implementing file servers in user space and exporting a whole file tree as a virtual "folder", so it's really more of "everything as a file server". No different than FUSE, really.
From what I've seen, Plan 9 fans turn their noses up at FUSE. They say FUSE is not "it", but don't really seem to explain what "it" is to differentiate it from FUSE.
And as Feynman said, you don't truly understand a thing until you can teach it. So that leaves us in a weird predicament where the biggest proponents of Plan 9 apparently don't understand Plan 9 well enough to teach it to the rest of us.
It depends what you mean by "it". FUSE clearly doesn't give you every feature in plan9, and in fact you can't have that without giving up the current Linux syscall API completely and replacing it with something vastly simpler that leaves a lot more to be done in user space. That's not something that Linux is going to do by default, seeing as they have a backward compatibility guarantee for existing software. Which is totally OK as far as it goes; the two systems just have different underlying goals.
Plan 9 supports file server processes natively, and that's the part that's most FUSE-like. The full OS also has many other worthwhile features that are not really addressed by FUSE on its own, or even by Linux taken as a whole.
One key difference is that the equivalent to kernel syscalls on *nix generally involves userland-provided services, and this applies to a lot more than just ordinary file access. The local equivalents to arbitrary "containerization/namespacing" and "sandboxing" are just natively available and inherent to how the system works. You can't do this out of the box on *nix where every syscall directly involves kernel facilities, so the kernel must have special provisions to containerize, sandbox, delegate specific things to userland services etc.
In addition to the sibling comment, you might also consider simply not using the APIs or SQL queries to begin with. Many people have entire careers without touching either.
I think you're failing to get that using a filesystem API to work with things that aren't naturally anything like filesystems might get perverse. And standard filesystems are a pretty unnatural way to lay out information anyway, given that they force everything into a tree structure.
This is what I was trying to get at. A lot of the data I deal with is directed, cyclic graphs. Actually, I personally think most data sets we care about are actually directed graphs of some kind, but we've gotten so used to thinking of them as trees that we force the metaphor too far. I mean, file systems are an excellent example of a thing we actually want to be a graph but we've forced into being a tree. Because otherwise why would we have ever invented symlinks?
There's a bunch of literature about accessing graphs through tree lenses. I'm not sure exactly what you're looking for.
SQL certainly forces you to look at graphs as trees. Do you have an specific interface you're trying to access? If you're trying to use a graph database, why mention APIs and SQL?
I just assumed they wanted to interface with existing json over http apis rather than write their own code. The sibling of my previous comment addresses that concern.
Can Plan 9 do transactions? If not, it is unsuitable for being a database. It can run databases, because those can work without transactions. But it can't do native writes without them. Can it do transactional reads? How would you represent isolation levels?
How do you do a join on Plan 9? I get the impression that these are coded in each client. But complicated queries need indexes and optimizer. SQL database has advantage that can feed it and it figures out the plan.
Plan 9 is just a brand smeared across a codebase, just like every other operating system.
> If not, it is unsuitable for being a database. It can run databases, because those can work without transactions. But it can't do native writes without them. Can it do transactional reads? How would you represent isolation levels?
Indeed, no, we shouldn't be sure everything-is-a-file makes sense as a basis for OS research. I don't think it necessarily needs to be considered close to the metal, either. But it is OS research.
I think you're right about where computing is today. It's mostly at the app level.
I think you once again hit a super hard conventionality chord & speak to where we are by saying we don't have much evidence of "everything is a file" helping, anywhere. Broadly.
But analyzing where we are & assessing that everything-is-a-file isn't a sure thing doesn't dissuade me. Apps have wanted control, and there have been few drivers to try to unite & tie together computing. App makers would actively resist if not drag their feet against giving up total dominion of the user experience. OS makers don't have the capital to take over the power from apps. The strain of unweaving these corporate power interests is immense.
There have been some attempts. BeOS tried to do interesting things with enriching files, making them more of a database. Microsoft's cancelled WinFS is rumored to have similarly made a sort of OS filesystem/database hybrid that would be useful to the users without the apps. But these are some of the few examples we have of trying anything.
We're in this era where agents are happening, and it's clear that there's very few clear good paths available to us now for agents to actuate & articulate the changes they could and should be doing. Which is just a reflection of app design where the system state is all bundled up deeply inside these bespoke awkward UIs. App design doesn't afford good access, and part of the proof is that other machines can't control apps short of enormous visual processing, which leaves much ambiguity. If agents can't it also strongly implies humans had little chance to master and advance their experience too.
I strongly think we should have some frontiers for active OS research that are user impactful. We ought be figuring out how to allow better for users, in ways that will work broadly & cross cuttingly. Everything is a file seems like one very strong candidate here, for liberating some of the power out of the narrow & super specific rigid & closed application layer.
I think Dan was also super on point writing A Social Filesystem. Which is that social networks & many online systems are everything-as-a-file under the hood. And that there is a generic networked multi-party social networking platform available, that we have a super OS already here that does files super interestingly. And Dan points out how it unlocks things, how not having one specific app but having our online data allow multiple consumers, multiple tools, is a super interesting opening.
So, everything is a file is very webful. A URL logically ought be a file. A multi-media personal data server for every file you can imagine creates an interestingly powerful OS, and a networked OS.
And users have been warped into fitting the small box their apps demand of them so far. They've had no option about it. All incentive has been to trap users more and more to have no off roads to keep your tool being the one tool for the job.
Distribute the power. Decentralize off the app. Allow other tools. Empower broader OS or platform to let users work across media types and to combine multiple tools and views in their workflow. Allow them to script and control the world around them, to #m2m orchestrate & drive tool use.
I don't disagree with anything you said, I just think it's a 30 year old basis you stand from, one that hasn't helped, hasn't gotten better, and which has ongoingly shrunk what is possible & limited the ability to even start trying for more or better. I don't think we are served by what it feels like you are trying to highlight. And I think "everything is a file" could be an incredible way to start opening up better, possibly, maybe!! but I'm very down to hear other reasonable or out there ideas!! I'm just not interested in staying in the disgraceful anti-user app-controlled unyielding quagmire we have been trapped in for decades.
I guess I feel like if we’re rewriting device drivers then we’re in a Turing tarpit. I think there’s room for innovation at what is traditionally considered the application level - we run git, postgres, document stores etc as applications. I think the way to solve the next generation of coordination is by doing more interesting stuff on this layer.
Plan 9 is still alive and kicking -- The next Plan 9 conference will be in Victoria, BC in Canada later this year.
https://iwp9.org/
9front averages several commits a day:
https://git.9front.org/plan9front/9front/HEAD/log.html
As an aside, it's fun to see someone on the organizing committee affiliated with Cray, Inc.
https://git.9front.org/plan9front/9front/b18221b10c83d81a9eb...
> Theo is more specific than troll; it presents insults from OpenBSD founder Theo de Raadt.
Check the fortune(1) command.
That makes a lot more sense, especially as someone who has been on the receiving end of Theo on the OpenBSD mailing list.
It's a kind of mark of distinction, like Rodney Rude fans being personally insulted by him.
I wonder if there's various grades? Does a personal insult in private email rate higher than a general one on a mailing list?
People wanting a Retina-capable drawterm to access Plan9/9front from their Macs are welcome to have a look at https://github.com/rcarmo/drawterm
Thanks for the improvements! Two small quality of life fixes over the original that I particularly appreciate:
That last one has been really nice when screen sharing with colleagues.
It had to happen. I was constantly annoyed at having to RDP over to a Linux box to try stuff. And I might end up doing a Plan9 RDP server as well.
Ooh la la
> The best apis are those that are hated by the developer and loved by the end users.
No, just those loved by the API consumer. Negative emotions on one end doens't do anything positive.
In the case of plan9, not everything can be described elegantly in the filesystem paradigm and a lot of things end up having really awkward "ctl" files which you write command strings to that the fileserver needs to parse. It also handicaps performance due to the number of filesystem operation roundtrips you usually end up making.
Maybe if combined with something io_uring-esque, but the complexity of that wouldn't be very plan9-esque.
> a lot of things end up having really awkward "ctl" files which you write command strings to that the fileserver needs to parse.
These are no different in principle than ioctl calls in *ix systems. The `ctl` approach is at least a proper generalization. Being able to use simple read/write primitives for everything else is nonetheless a significant gain.
They are very different. ioctls on a file take an operation and arguments that are often userspace pointers, which works because the kernel can freely access any process's memory space. ctl files, on the other hand, are merely human-readable strings that have to be parsed.
Say, imagine an API where you need to provide a 1 KiB string. The plan9 version would have to process the input byte by byte to sort out what the command was, then read the string into a dynamic buffer while unescaping it until it finds, say, the newline character.
The ioctl would just have an integer for the operation, and if it wanted to it could set the source page up for CoW so it didn't even have to read or copy the data at all.
Then we have to add the consideration of context switches: The traditional ioctl approach is just calling process, kernel and back. Under plan9, you must switch from calling process, to kernel, to fileserver process, to kernel, to fileserver process (repeat multiple times for multiple read calls), to kernel, and finally to calling process to complete the write. Now if you need a result you need to read a file, and so you get to repeat the entire process for the read operation!
Under Linux we're upset with the cost of the ioctl approach, and for some APIs plan to let io_uring batch up ioctls - the plan9 approach would be considered unfathomably expensive.
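For a concrete, if contrived, side-by-side comparison in C: the ioctl request number and the "rate" ctl command below are both invented for illustration, standing in for whatever a real driver or fileserver would expose, but they show where the parsing and the extra round trips come from.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Hypothetical device knob, two ways. */

    struct myrate { int hz; };

    int
    set_rate_ioctl(int fd, int hz)
    {
        struct myrate r = { hz };
        /* one syscall; the kernel reads the struct straight out of our memory */
        return ioctl(fd, 0xABCD, &r);
    }

    int
    set_rate_ctl(int hz)
    {
        /* the ctl-file style: write a text command that the (possibly
           userspace) fileserver has to parse; getting a result back would
           be a second open/read round trip */
        char cmd[32];
        int fd, n;
        ssize_t w;

        fd = open("/dev/mydev/ctl", O_WRONLY);
        if (fd < 0)
            return -1;
        n = snprintf(cmd, sizeof cmd, "rate %d\n", hz);
        w = write(fd, cmd, n);
        close(fd);
        return w == n ? 0 : -1;
    }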
> The `ctl` approach is at least a proper generalization.
ioctl is already a proper generalization of "call operation on file with arguments", but because it was frowned upon originally it never really got the beauty-treatment it needed to not just be a lot of header file defines.
However, ioctl'ing a magic define is no different than writing a magic string.
It's perfectly possible to provide binary interfaces that don't need byte-wise parsing or that work more like io_uring as part of a Plan9 approach, it's just not idiomatic. Providing zero-copy communication of any "source" range of pages across processes is also a facility that could be provided by any plan9-like kernel via segment(3) and segattach(2), though the details would of course be somewhat hardware-dependent and making this "sharing" available across the network might be a bit harder.
Indeed, you can disregard plan9 common practice and adopt the ioctl pattern, but then you just created ioctl under a different name, having gained nothing over it.
You will still have the significant context switching overhead, and you will still need distinct write-then-read phases for any return value. Manual buffer sharing is also notably more cumbersome than having a kernel just look directly at the value, and the neat part of being able to operate these fileservers by hand from a shell is lost.
So while I don't disagree with you on a technical level, taking that approach seems like it misses the point of the plan9 paradigm entirely and converts it to a worse form of the ioctl-based approach that it is seen as a cleaner alternative to.
Being able to do everything in user space looks like it might be a worthwhile gain in some scenarios. You're right that there can be some context switching overhead to deal with, though even that might possibly be mitigated; the rendezvous(2) mechanism (which works in combination with segattach(2) in plan9) is relevant, depending on how exactly it's implemented under the hood.
I must admit that the ability to randomly bind on top of your "drivers" to arbitrarily overwrite functionality, whether to VPN somewhere by binding a target machine's network files, or how rio's windows were merely /dev/draw proxies and you could forward windows by just binding your own /dev/draw on the target, holds a special place in my heart. 9front, if nothing else, is fun to play with. I just don't necessarily consider it the most optimal or most performant design.
(I also have an entirely irrational love for the idea of a singular /bin folder with no concept of $PATH, simply having everything you need bound on top... I hate $PATH and the gross profile scripts that go with it with a passion.)
> I also have an entirely irrational love for the idea of a singular /bin folder with no concept of $PATH, simply having everything you need bound on top
That's really an easy special case of what's called containerization or namespacing on Linux-like systems. It's just how the system works natively in plan9.
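In Plan 9 C that native case is just a couple of system calls. A rough sketch, assuming a stock 9front-style layout for the source paths and with minimal error handling:

    #include <u.h>
    #include <libc.h>

    /* Give this process (and its children) a private namespace, then
       assemble /bin from a few sources.  No $PATH anywhere: exec only
       ever looks in /bin, and /bin is whatever we bound there. */
    void
    setupns(void)
    {
        if(rfork(RFNAMEG) < 0)                      /* our own copy of the namespace */
            sysfatal("rfork: %r");
        if(bind("/amd64/bin", "/bin", MREPL) < 0)   /* architecture binaries */
            sysfatal("bind: %r");
        bind("/rc/bin", "/bin", MAFTER);            /* shell scripts */
        bind("/usr/glenda/bin/rc", "/bin", MAFTER); /* per-user additions */
    }

    void
    main(void)
    {
        char *argv[] = { "rc", nil };

        setupns();
        exec("/bin/rc", argv);      /* runs with the /bin assembled above */
        sysfatal("exec: %r");
    }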
can you give a few examples of this "lot of things"? What operations do not map naturally to file access?
Can you paper over the *ixian abstraction using transformer-based metamodeling, language-oriented programming, and the individual-process-namespace, Lincos-style message/note-passing hierarchy, and let the Minsky "society of mind" idea fall out?
Plan 9 was the transition step between UNIX and Inferno, and between C and Limbo as the main userspace language, made by the same authors.
Which tends to be forgotten when praising Plan 9.
Is it correct to say Golang is bringing Limbo to the masses?
Partially, Go still doesn't support a few Limbo features.
However the influence is quite clear, plus the Oberon-2 style methods and SYSTEM package.
No, it's bringing Alef to the masses. Limbo is a cousin, and Dis was certainly very interesting and something I wish had caught on.
Alef lacked GC, which Rob Pike considered the main reason for its implementation failure on Plan 9, and initially bounds checking was also missing.
Two key design differences between Go and its two predecessors.
Dis is an implementation detail; Go could offer the same dynamism with an AOT toolchain, as proven by other languages with ahead-of-time toolchains available.
Dis is not an implementation detail for Inferno, though. And I wish it had gone much further.
I agree, but that is another matter.
However, I will commit the sacrilege of suggesting that Android is the closest we got to that on mainstream OSes.
That might be Rust, actually. They have more in common with thoughts about type systems, built-in constructs, deterministic memory usage, etc.
Limbo looks more like Go on the concurrency front, but that was inherited from Alef/Plan 9. That wasn't what Limbo brought to the table.
Limbo uses a garbage collector, though.
So does Rust. Rust is 'smarter' than Limbo, in that it can avoid using its GC in a lot of cases (but not all, hence why it still has GC to fall back on when necessary), but, I mean, that discovery was the reason for why Rust was created. Limbo was already there otherwise. Every new language needs to try to add something into the mix.
Still, the thoughts were in common, even though the final solution didn't end up being exactly the same.
Rust does not have a garbage collector in any way or form. It's just automatic memory management like we're used to (e.g., the stack in C++), with the compiler injecting free/drop when an object goes out of scope.
What Rust brings is ownership with very extensive lifetime tracking, but that is a guard rail that gives compile-time failures, not something that powers memory management.
(If you consider the presence of Rc<T> to make Rust garbage collected, then so is C garbage collected as developers often add refcounting to their structs.)
> so is C garbage collected as developers often add refcounting to their structs.
Absolutely, C also can use a garbage collector. Obviously you can make any programming language do whatever you want if you are willing to implement the necessary pieces. That isn't any kind of revelation. It is all just 1s and 0s in the end. C, however, does not come with an expectation of it being provided. For all practical purposes you are going to have to implement it yourself or use some kind of third-party solution.
The difference with Limbo and Rust is that they include reference counting GCs out of the box. It is not just something you have the option of bolting onto the side if you are willing to put in the effort. It is something that is already there to use from day one.
> Absolutely, C also can use a garbage collector.
It is not C using the garbage collector - it is you writing a garbage collector in C. The application or library code you develop with the language is not itself a feature of the language, and the language you wrote the code in is not considered to be "using" your code.
Rust and C are unaware of any type of garbage collection, and therefore never "use" garbage collection. They just have all the bells and whistles to allow you to reference count whatever type you'd like, and in case of Rust there's just a convenient wrapper in the standard library to save you from writing a few lines of code. However, this wrapper is entirely "bolted onto the side": You can write your own Rc<T>, and there would be no notable difference to the std version.
So no, neither Rust nor C can use a garbage collector, but you can write code with garbage collection in any feature-complete language. This is importantly very different from languages that have garbage collection as a feature, like Limbo, Go, JavaScript, etc.
> it is you writing a garbage collector in C.
That's right, you can write a garbage collector in C for C to use. You can also write a garbage collector in C for Javascript to use, you could even write a garbage collector in C for Rust to use, but in this case we are talking about a garbage collector for C to use.
If you are writing a garbage collector that will not be used, why bother?
This feels like it's turning into trolling, so one last round:
The C language does not use your C code, it is your C code that uses the C language.
The tools available to C are what the language specification dictated and what the compiler implemented. For example, C might use stack memory and related CPU instructions, because the C specification described "automatic memory" and the compiler implemented it with the CPU's stack functionality. It might insert calls to "memcpy" as this function is part of the C language spec. For C++, the compiler will insert calls to constructors and destructors as the language specified them.
The C language does not specify a garbage collector so it can never use one.
You, however, can use C to write a garbage collector to manually use in your C code. C remains entirely unaware of the garbage collector's existence as it has no idea what the code you write does - it will never call it on its own and the compiler will never make any decisions based on its existence. From C's perspective, it's still just memory managed manually by your application with your logic.
In JavaScript and Go, the language specifies the presence of garbage collection and how that should work, and so any runtime is required to implement it accordingly. You can write that runtime in C, but the C code and C compiler will still not be garbage collected.
The C standard is actually carefully written to allow for placing distinct "objects" in separate memory segments of a non-flat address space, such that ordinary pointer arithmetic cannot be expected to reach across to a separate "object". This is not far from allowing for some sort of GC as part of low-level C implementation, and in fact the modern Fil-C relies on it.
This is actually quite an interesting topic in its own right, but not quite the discussion above, which is about whether C or Rust include a GC or is considered to "use" the GC you hand-rolled or pulled in from a library.
I wouldn't consider Fil-C's GC a GC in the conventional sense either, in that code compiled with Fil-C is still managed manually, with the (fully-fledged) GC effectively only serving as a way to do runtime validation and to turn what would otherwise be a use-after-free with undefined behavior into a well-defined and immediate panic. This aspect is in essence an alternative approach to what is done by AddressSanitizer.
I'll have to look a bit more into Fil-C though. Might be interesting to see how it compares to the usual sanitizers in practice.
> The C language does not use your C code
Your impression that there is a semantic authority misses the mark. While you are free to use English as you see fit, so too is everyone else. We already agreed on the intent of the message, so when I say something like "C uses C code", it absolutely does, even if you wouldn't say it that way yourself. I could be alone in this usage and it would remain valid. Only intent is significant.
However, I am clearly not alone in that style of usage. I read things like "Rust can use code written in C" on here and in other developer venues all the time. Nobody ever appears confused by such a statement even. If Rust can use code written in C, why can't C use code written in C?
> The C language does not specify a garbage collector so it can never use one.
The C language also does not specify a linked list. Go tell your developer friends that C can never use a linked list. Please take a photo when they look at you like you have two heads. Admittedly I lack the ability to say something so outlandish to another human with a straight face, but for the sake of science I put that into an LLM. It called me out on the bullshit, pointing out that C can, in fact, use a linked list.
For what it is worth, I also put "C can never use a garbage collector" into an LLM. It also called me out on that bullshit just the same. LLMs are really good at figuring out how humans generally use terminology. It is inherent to how they are trained. If an LLM is making that connection, so too would many humans.
> In JavaScript and Go, the language specifies the presence of garbage collection
The Go language spec does, no doubt as a result of Pike's experience with Alef. The JavaScript spec[1] does not. Assuming you aren't making things up, I am afraid your intent was lost. What were you actually trying to say?
> C code and C compiler will still not be garbage collected.
That depends. GC use isn't typical in the C ecosystem, granted, but you absolutely can use garbage collection in a C program. You can even use something like the CCured compiler to have GC added automatically. The world is your oyster. There is no way you couldn't have already realized that, though, especially since we already went over it earlier. It is apparent that your intent wasn't successfully transferred again. What are you actually trying to say here?
> This is turning into trolling.
The mightiest tree in the forest could be cut down with that red herring!
[1] The standard calls itself ECMAScript, but I believe your intent here is understood.
> The C language also does not specify a linked list. Go tell your developer friends that C can never use a linked list.
They would not blink because the statement is accurate. To the C language and to the C compiler, there are no linked lists - just random structs with random pointers pointing to god knows what. C does know about arrays though.
> The JavaScript spec[1] does not [specify garbage collection].
I have good reason to believe that you are not familiar with the specification, although to be fair most developers would not be familiar with its innards.
The specification spends quite a while outlining object liveness and rules for when garbage is allowed to be collected. WeakRefs, FinalizationRegistries, the KeptObjects list on the agent record, ...
Just like with Go, it is perfectly valid to have an implementation of a "garbage collector" that is a no-op that never collects anything, which means that the application will continuously leak memory until it runs out and crashes, as the language provides no mechanism to free memory - for Go, you can switch to this with `GOGC=off`. The specific wording from ECMA-262:
> This specification does not make any guarantees that any object or symbol will be garbage collected. Objects or symbols which are not live may be released after long periods of time, or never at all. For this reason, this specification uses the term "may" when describing behaviour triggered by garbage collection.
If you're not used to reading language specs the rest of the details can be a bit dry to extract, but the general idea is that the spec outlines automatic allocation, permission for a runtime to deallocate things that are not considered "live", and the rules under which something is considered to be "live". And importantly, it provides no means within the language to take on the task of managing memory yourself.
This is how languages specify garbage collection, as the language does not want to limit you to a specific garbage collection algorithm and only cares about what the language needs to guarantee.
> [1] The standard calls itself ECMAScript, but I believe your intent here is understood.
sigh.
> For what it is worth, I also put "C can never use a garbage collector" into an LLM. It also called me out on that bullshit just the same.
more sigh. LLMs always just wag their tails when you beg the question on an opinion. They do not do critical thinking for you or have any strong opinions to give of their own.
I'm done, have a nice day.
> To the C language and to the C compiler, there are no linked lists
But they are most certainly able to use one. Just as they can use a garbage collector. You are quite right that these are not provided out of the box, though. If you want to use them, you are on your own. Both Limbo and Rust do provide a garbage collector to use out of the box, though, so that's something different.
> The specification spends quite a while outlining object liveness and rules for when garbage is allowed to be collected. WeakRefs, FinalizationRegistries, the KeptObjects list on the agent record, ...
But, again, does not specify use of a garbage collector. It could use one, or not. That is left up to the implementer.
> it is perfectly valid to have an implementation of a "garbage collector" that is a no-op that never collects anything
It's perfectly valid as far as the computer is concerned, but in the case of Go not spec-compliant. Obviously you don't have to follow the spec. It is not some fundamental law of the universe. But if you want to be compliant, that is not an option. I get you haven't actually read the spec, but you didn't have to either as this was already explained in the earlier comment.
> This is how languages specify garbage collection
That is how some languages specify how you could add garbage collection if you so choose. It is optional, though. At very least you can always leak memory. Go, however, explicitly states that it is garbage collected, always. An implementation of Go that is GC-less and leaks memory, while absolutely possible to do and something the computer will happily execute, does not meet the conditions of the spec.
> I'm done
Done what? It is not clear what you started.
No it doesn't, it uses affine types.
Also, the first language to use that idea was Cyclone, a research language at Bell Labs with the goal of replacing C.
Are you trying to say that Rc/Arc are GCs? I guess you're technically correct, but no one sees it that way.
I would say that RC is GC, yes, as it is most definitely technically true. But it was pjmlp who suggested it originally (Limbo also uses reference counting), so we have clear evidence that others also see reference counting as being GC. We wouldn't have a discussion here otherwise.
While RC is a GC algorithm (chapter 5 of the GC Handbook), it doesn't count when it isn't part of the type system, because then it becomes optional and not part of the regular use of the programming language.
Additionally, Limbo's GC is a bit more complicated than a plain add_ref()/release() pair of library calls.
https://doc.cat-v.org/inferno/concurrent_gc/concurrent_gc.pd...
> because then it becomes optional
Exactly. Optional implies use. So, in case you forgot to read the thread, both Limbo and Rust use GC.
ZeroFS [0] is very thankful for what Plan 9 brought to Linux with the v9fs [1] subsystem, which is very nice to work with (network native) compared to FUSE :)
[0] https://github.com/Barre/ZeroFS
[1] https://docs.kernel.org/filesystems/9p.html
I believe that the Windows Subsystem for Linux (WSL, really a Linux subsystem on Windows) uses the Plan 9 network protocol, 9p, to expose the host Windows filesystem to the Linux virtual environment.
I was hoping it would explain "what is Plan 9", or rather, "why is it called plan 9, and what were the other 8 plans?"...
Its name ("Plan 9 from Bell Labs") is a reference to the movie "Plan 9 from Outer Space" :-)
> A printed version of the proceedings will be provided to the attendees
How adorable!
Modern Plan9 web version https://github.com/tractordev/apptron
That's cool, but what about it is plan9-like?
I would love to see more Rust-on-Plan9 implementations; IMHO, it could be a good modern combination.
I don't know. I use a lot of Swift and C++ and while both are OK languages there is an absurd amount of complexity in these languages that doesn't seem to serve any real purpose. Just a lot of foot traps, really. Coming back to Plan9 from that world is a breeze, the simplicity is like a therapy for me. So enjoyable.
If "modern" means complex, I don't think it fits Plan9.
I don't know about Swift, but in C++, the complexity serves at least three purposes:
1. Backwards compatibility, in particular syntax-wise. New language-level functionality is introduced without changing existing syntax, but by exploiting what had previously been ill-formed constructs.
2. Catering to the principle of "you don't pay for what you don't use" - and that means that the built-ins are rather spartan, and for convenience you have to build up complex structures of code yourself.
3. A multi-paradigmatic approach and multiple, sometimes conflicting, usage scenarios for features (which detractors might call "can't make up your mind" or "design by committee").
The crazy thing is that over the years, the added complexity makes the code for many tasks simpler than it used to be. It may involve a lot of complexity in libraries and under-the-hood, but paradoxically, and for the lay users, C++ can be said to have gotten simpler. Until you have to go down the rabbit hole of course.
As a Swift noob, I would appreciate hearing what these foot traps are. This is in the context of Swift as a systems programming language?
AFAIK there is no Rust compiler for Plan 9 or 9front. The project is using a dialect of C and its own C compiler(s). I doubt adding Rust to the mix will help. For a research OS, C is a nice clean language and the Plan 9 dialect has some niceties not found in standard C.
If you really want Rust, check this: https://github.com/r9os/r9 -- it is Plan 9 reimplemented in Rust (no idea about the project quality):
R9 is a reimplementation of the plan9 kernel in Rust. It is not only inspired by but in many ways derived from the original Plan 9 source code.
There isn't, though you can run Rust code on it via wasm. I tried it a while back with a port of the w2c2 transpiler (https://github.com/euclaise/w2c9/), but something like wazero is a more obvious choice
It is kind of interesting that C's inventors, contrary to the folks that worship C, not only did not care about ANSI/ISO compatibility, they ended up exploring Alef, Limbo and Go.
Meanwhile, Bell Labs eventually started Cyclone, which ended up influencing Rust.
I’m fairly sure that the Rust compiler is bigger than the entire 9front (and 9front has Doom in it).
Since Rust depends on LLVM, which is massive, that is almost certainly true. It seems likely even if you don't include LLVM though.
You would like Golang more than Rust. At least the authors (and ex-authors) are certainly aware of Go; they invented it too.
An author of the Golang ARM port has ideas for how Go could be done better. Not throwing away names in the compile process is one general example.
Is there a Plan9 port for RISC-V (RV32I)?
There's a 9legacy port, and an in-progress 9front port.
https://m.youtube.com/watch?v=EOg6UzSss2A
That's interesting, thanks. I feel a need for a simple multitasking/networking OS for a synthesizable RV32I core (not RTOS-like, but more like Unix or CP/M). Would be nice to try Plan9 on it once the port is out.
Probably not. And there aren't many 32-bit RISC-V cores with an MMU. I guess you can use a simulator if you found one.
I use one written in SpinalHDL. :-)
Next question: how much RAM does it need to boot, and can it be used without rio?
>9front.org frequently questioned answers
Knowing that project am I going to be rickrolled?
I’m not sure it still makes sense to do OS research so close to the metal. Most computing is done up on the application level, and our abstractions there suck, and I haven’t seen any evidence that “everything is a file” helps much in a world of web APIs and SQL databases
Some of us are still interested in the world underneath all that web stuff!
Multiple experimental operating systems at multiple abstraction levels sounds like a good idea, though. What sort of system software would you like to build?
I’m actually building an “OS” that’s up a level. it’s more like git, it has a concept of files but they’re documents in a distributed store. I can experiment with interaction patterns without caring about device drivers
Operating systems are where device drivers live. It sounds awfully impractical to develop alternatives at this stage. I think OP is right.
I think OSes should just freeze all their features right now. Does anyone remember all the weird churn in the world of Linux, where (i) KDE changed from version 3 to 4, which broke everyone's KDE completely unnecessarily (ii) GNOME changed from version 2 to 3, which did the same (iii) Ubuntu Linux decided to change their desktop environment away from GNOME for no reason - but then unchanged it a few years later? When all was said and done, nothing substantive really got done.
So stop changing things at the OS level. Only make conservative changes which don't break the APIs and UIs. Time to feature-freeze, and work on the layers above. If the upper layers take over the work of the lower layers, then over time the lower layers can get silently replaced.
> Ubuntu Linux decided to change their desktop environment away from GNOME for no reason
Oh, there absolutely were reasons. I covered them here:
https://www.theregister.com/Print/2013/06/03/thank_microsoft...
I have never had so much negative feedback and ad-hom attacks on HN as for that story, I think. :-D
Short version, the chronology goes like this:
2004: Ubuntu does the first more-or-less consumer-quality desktop Linux that is 100% free of charge. No paid version. It uses the current best of breed FOSS components and they choose GNOME 2, Mozilla, and OpenOffice.
By 2006 Ubuntu 6.06 "Dapper Drake" comes out, the first LTS. It is catching on a bit.
Fedora Core 6 and RHEL 4 are also getting established, and both use GNOME 2. Every major distro offers GNOME 2, even KDE-centric ones like SUSE. Paid distros like Mandriva and SUSE are starting to get into some trouble -- why pay when Ubuntu does the job?
Even Solaris uses GNOME 2.
2006-2007, MS is getting worried and starts talking about suing. It doesn't know who yet so it just starts saying intentionally not-vague-at-all things like the Linux desktop infringes "about 265 patents".
This is visibly true if you are 35-40 years old: if you remember desktop GUI OSes before 1995, they were all over the place. Most had desktop drive icons. Most had a global menu bar at the top. This is because most copied MacOS. Windows was an ugly mess and only lunatics copied that. (Enter the Open Group with Motif.)
But then came Win95. Huge hit.
After 1995, every GUI gets a task bar, it gets buttons for apps, even window managers like Fvwm95 and soon after IceWM. QNX Neutrino looks like it. OS/2 Warp 4 looks like it. Everyone copies it.
Around the time NT 4 is out and Win98 is taking shape, both KDE and GNOME get going and copy the Win9x look and feel. Xfce dumps its CDE look and feel, goes FOSS, and becomes a Win95 copy.
MS had a case. Everyone had copied them. MS is not stupid and it's been sued lots of times. You betcha it patented everything and kept the receipts. The only problem it has is: who does it sue?
RH says no. GNOME 3 says "oh noes our industry-leading GUI is, er, yeah, stale, it's stagnant, it's not changing, so what we're gonna do is rip it up and start again! With no taskbar and no hierarchical start menu and no menu bars in windows and no OK and CANCEL buttons at the bottom" and all the other things that they can identify that are from Win9x.
GNOME is mainly sponsored by Red Hat.
Canonical tries to get involved; RH says fsck off. It can't use KDE, that's visibly a ripoff. Ditto Xfce, Enlightenment, etc. LXDE doesn't exist yet.
So it does its own thing based on the Netbook Launcher. If it daren't imitate Windows then what's the leading other candidate? This Mac OS X thing is taking off. It has borrowed some stuff from Windows like Cmd+Tab and Fast User Switching and stuff and got away with it. Let's do that, then.
SUSE just wearily says "OK, how much? Where do we sign?"
RISC OS had a recognizable task bar around 1987, so 2006-2007 is just long enough for any patent on that concept to definitely expire. This story doesn't make any sense. As for dialog boxes with buttons at the bottom and plenty of buttons inside apps, the Amiga had them in 1984.
> RISC OS had a recognizable task bar around 1987
Absolutely not the same thing -- and I bought my first Archimedes in 1989.
It's a bar and it contains icons, but it does not have:
* a hierarchical app launcher at one end
* buttons for open _windows_
* a separate area of smaller icons for notifications & controls
* any way to be repositioned or placed in portrait orientation
I am more familiar with this subject than you might realise. I arranged for the project lead of RISC OS to do this talk:
https://www.rougol.jellybaby.net/meetings/2012/PaulFellows/
Then a decade later I interviewed him:
https://www.theregister.com/2022/06/23/how_risc_os_happened/
Yes, the Icon Bar is prior art, but there are a few problems with that.
1. It directly inspired the NeXTstep Dock.
This is unprovable after so long, but the strong suspicion is that the Dock inspired Windows 4 "Chicago" (later Windows 95) -- MS definitely knew of NeXT, but probably never heard of Acorn.
So it's 2nd hand inspiration.
2. The Dock isn't a taskbar either.
3. What the prior art may be doesn't matter unless Acorn asserted it, which AFAIK it didn't, as it no longer existed by the time of the legal threats. Nobody else did either.
4. The product development of Win95 is well documented and you can see WIP versions, get them from the Internet Archive and run them, or just peruse screenshot galleries.
http://toastytech.com/guis/c73.html
The odd thing is that the early development versions look less like the Dock or Icon Bar than later ones. It's not a direct copy: it's convergent evolution. If they'd copied, they would have got there a lot sooner, and it would be more similar than it is.
> so 2006-2007 is just long enough for any patent on that concept to definitely expire.
RISC OS as Arthur: 1987
NeXTstep 0.8 demo: 1988
Windows "Chicago" test builds: 1993, 5Y later, well inside a 20Y patent lifespan
Win95 release: 8Y later
KDE first release: 1998
GNOME first release: 1999
The chronology doesn't add up, IMHO.
> This story doesn't make any sense. As for dialog boxes with buttons at the bottom and plenty of buttons inside apps, the Amiga had them in 1984.
You're missing a different point here.
Buttons at the bottom date back to at least the Lisa.
The point is that GNOME 3 visibly and demonstrably was trying to avoid potential litigation by moving them to the CSD bar at the top. Just as in 1983 or so GEM made its menu bar drop-down instead of pull-down (menus open on mouseover, not on click) and in 1985 or so AmigaOS made them appear and open only on a right-click -- in attempts to avoid getting sued by Apple.
> The point is that GNOME 3 visibly and demonstrably was trying to avoid potential litigation by moving them to the CSD bar at the top.
Well, the buttons in the titlebar at the top are reminiscent of old Windows CE dialog boxes, so I guess they're not really original either! What both Unity and GNOME 3 look like to me is an honest attempt to take an early lead in "convergence" with mobile touch-based solutions. They first came up in the netbook era, when making Linux run out-of-the-box on a market-leading small-screen, perhaps touch-based device was quite easy - a kind of ease we're only now getting back to, in fact.
That's why it's a research OS, a lot of people (or at least some) think that the current range of mainstream OS are not very well designed, and we can do better.
I'm not saying Plan 9 is the alternative, but it is kind of amazing how un-networked modern Operating Systems are, and we just rely on disparate apps and protocols to make it feel like the OS is integrated into networks, but they only semi-are.
I didn’t really see the appeal until I learned how to use FUSE.
There’s something elegant about filesystems. Even more than pipes, filesystems can be used to glue programs together. Want to control your webcam with Vim? Expose a writable file. Want to share a device across the network? Expose it as a file system, mount that filesystem on your computer.
Idk, I still find low-level OS stuff super interesting because it hasn't had a rework in so long, despite everything we've learnt since the early days of modern computing: drives larger than a few MBs, super fast memory and fast cryptography, to name a few.
It's interesting to imagine a new OS that incorporates these changes from its infancy.
I appreciate all the effort Linux, BSD, Android, QNX and closed-source OSes have put into building upon existing ideas and innovating gradually on them. But man, I really want to see something better than everything-is-a-file. I really enjoyed the stuff BeOS was pitching.
Well, on the file system side BeOS was pitching "virtual folders" that are really no different than what plan9 provides.
The "everything is a file" approach is nice in many cases, I'm worried though if it works everywhere. Maybe if done right. Subversion (SVN) shows branches as separate file trees.. and ClearCase too (though I'm on thin ice with ClearCase, having used it very little). And I just can't stand the file-oriented way SVN works, I could never get used to it. But there are a lot of other cases where "it's a file" does work, I've experimented with creating Fuse filesystem interfaces to some stuff now and then.
> I haven’t seen any evidence that “everything is a file” helps much in a world of web APIs and SQL databases
Well for one thing, such an abstraction enables you to avoid web apis and sql databases!
You're going to have to explain to me how a parametrized request/response system like calling a Web API or making a SQL query can be mapped to reading files. I've seen some stuff that people do with FUSE and it looks like ridiculous circus hoop-jumping to make the Brainfuck-is-Turing-complete version of a query system. We have syntax for a reason.
Typically, if you were writing your hypothetical sql client in rc shell, you'd implement an interface that looks something like:
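(A C sketch of the same shape follows, since the exact rc code isn't shown here: write the query to one file, read rows back from another. The /n/sql mount point, its query and data files, and the row format are all hypothetical, not a real fileserver.)

    #include <u.h>
    #include <libc.h>

    /* Hypothetical sqlfs mounted at /n/sql: write the query to "query",
       read rows back as plain text from "data". */
    void
    runquery(char *q)
    {
        char buf[8192];
        int qfd, dfd, n;

        qfd = open("/n/sql/query", OWRITE);
        if(qfd < 0)
            sysfatal("open query: %r");
        if(fprint(qfd, "%s", q) < 0)
            sysfatal("write query: %r");
        close(qfd);

        dfd = open("/n/sql/data", OREAD);
        if(dfd < 0)
            sysfatal("open data: %r");
        while((n = read(dfd, buf, sizeof buf)) > 0)
            write(1, buf, n);       /* rows arrive as ordinary text */
        close(dfd);
    }

    void
    main(int argc, char **argv)
    {
        if(argc != 2)
            sysfatal("usage: query statement");
        runquery(argv[1]);
        exits(nil);
    }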
This is also roughly how webfs works. Making network connections from the shell follows the same pattern. So, for that matter, does making network connections from C, just the file descriptor management is in C.
This is... I don't know. I don't get why I would care to sling SQL over a file system versus a network socket.
I mean, Postgres could offer an SSH interface as a dumb pipe to psql to just have you push text SQL queries in your application. But it doesn't, it offers a binary protocol over a network socket. All the database engines have had the same decision point and have basically gone down the same path of implementing a wire protocol over a persistent socket connection.
So yeah, I don't get what doing things this way would give me as either a service provider or a service consumer. It looks like video game achievements for OS development nerds, "unlocked 'everything is a file'." But it doesn't look like it actually enables anything meaningful.
How would you connect to Postgres in 4 lines of shell normally? How would you do it for a rest api? How about any other systems?
For Plan 9, it's all the same, all using the same interfaces, with little library glue.
Opening a window, and running a command in it? Similar interfaces. Adding LSP to your editor? Got it, you mount it and write to the files.
Universal shared conventions are powerful.
But if it requires understanding of a data protocol, it doesn't really matter if it's over the file system or a socket or flock of coked-up carrier pigeons. You still need to write custom user space code somewhere. Exposing it over the file system doesn't magically make composable applications, it just shuffles the code around a bit.
In other words, the transport protocol is just not the hard part of anything.
It's not hard, but it's sure a huge portion of the repeated boilerplate glue. Additionally, the data protocols are also fairly standardized in Plan 9; The typical format is tabular plain text with '%q'-verb quoting.
There's a reason that the 9front implementation of things usually ends up at about 10% the size of the upstream.
The benefit is that you can allocate arbitrary computers to compute arbitrary things. As it is now, you have to use kubernetes and it's a comedy. Though perhaps the same in effect, there are dozens of layers of abstraction that will forever sting you.
You're thinking from the perspective of the terminal user—ie, a drooling, barely-conscious human trying to grasp syntax and legal oddities of long-dead humans. Instead you need to think from the perspective of a star trek captain. Presumably they aren't manually slinging sql queries. Such tasks are best automated. We are all the drooling terminal user in the end, but plan9 enabled you to at least pretend to be competent.
Plan9 allows for implementing file servers in user space and exporting a whole file tree as a virtual "folder", so it's really more of "everything as a file server". No different than FUSE, really.
From what I've seen, Plan 9 fans turn their noses up at FUSE. They say FUSE is not "it", but don't really seem to explain what "it" is to differentiate it from FUSE.
And as Feynman said, you don't truly understand a thing until you can teach it. So that leaves us in a weird predicament where the biggest proponents of Plan 9 apparently don't understand Plan 9 well enough to teach it to the rest of us.
It depends what you mean by "it". FUSE clearly doesn't give you every feature in plan9, and in fact you can't have that without giving up the current Linux syscall API completely and replacing it with something vastly simpler that leaves a lot more to be done in user space. That's not something that Linux is going to do by default, seeing as they have a backward compatibility guarantee for existing software. Which is totally OK as far as it goes; the two systems just have different underlying goals.
You're frustrating me. You replied to me saying "it's basically FUSE" and then after I replied to you, you come back and say, "it's not really FUSE."
Plan 9 supports file server processes natively, and that's the part that's most FUSE-like. The full OS also has many other worthwhile features that are not really addressed by FUSE on its own, or even by Linux taken as a whole.
Like. WHAT!!!???
One key difference is that the equivalent to kernel syscalls on *nix generally involves userland-provided services, and this applies to a lot more than just ordinary file access. The local equivalents to arbitrary "containerization/namespacing" and "sandboxing" are just natively available and inherent to how the system works. You can't do this out of the box on *nix where every syscall directly involves kernel facilities, so the kernel must have special provisions to containerize, sandbox, delegate specific things to userland services etc.
In addition to the sibling comment, you might also consider simply not using the APIs or SQL queries to begin with. Many people have entire careers without touching either.
Why would I ever consider doing that?
That's up to you. Why ask me?
I think you're failing to get that using a filesystem API to work with things that aren't naturally anything like filesystems might get perverse. And standard filesystems are a pretty unnatural way to lay out information anyway, given that they force everything into a tree structure.
This is what I was trying to get at. A lot of the data I deal with is directed, cyclic graphs. Actually, I personally think most data sets we care about are actually directed graphs of some kind, but we've gotten so used to thinking of them as trees that we force the metaphor too far. I mean, file systems are an excellent example of a thing we actually want to be a graph but we've forced into being a tree. Because otherwise why would we have ever invented symlinks?
There's a bunch of literature about accessing graphs through tree lenses. I'm not sure exactly what you're looking for.
SQL certainly forces you to look at graphs as trees. Do you have an specific interface you're trying to access? If you're trying to use a graph database, why mention APIs and SQL?
I just assumed they wanted to interface with existing json over http apis rather than write their own code. The sibling of my previous comment addresses that concern.
Can Plan 9 do transactions? If not, it is unsuitable for being a database. It can run databases, because those can work without OS-level transactions, but it can't do native writes without them. Can it do transactional reads? How would you represent isolation levels?
How do you do a join on Plan 9? I get the impression that these are coded in each client. But complicated queries need indexes and an optimizer. An SQL database has the advantage that you can feed it a query and it figures out the plan.
Plan 9 is just a brand smeared across a codebase, just like every other operating system.
> If so, it is unsuitable for being a database. It can run databases, because those can work without transactions. But can't do native writes without them. Can it do transactional reads? How would you represent isolation levels?
Bruh ask 9front
Indeed, we shouldn't be sure everything-is-a-file makes sense as a direction for OS research. I don't think it necessarily needs to be considered close to the metal, either. But it is OS research.
I think you're right about where computing is today. It's mostly at the app level.
I think you once again hit a super hard conventionality chord & speak to where we are by saying we don't have much evidence of "everything is a file" helping, anywhere, broadly.
But analyzing where we are & assessing that everything-is-a-file isn't a sure thing doesn't dissuade me. Apps have wanted control, and there have been few drivers to try to unite & tie together computing. App makers would actively resist, if not drag their feet against, giving up total dominion of the user experience. OS makers don't have the capital to take over that power from apps. The strain of unweaving these corporate power interests is immense.
There have been some attempts. BeOS tried to do interesting things with enriching files, with making them more of a database. Microsoft's cancelled WinFS is rumored to have similarly made a sort of OS filesystem/database hybrid that would be useful to the users without the apps. But these are some of the few examples we have of trying anything.
We're in this era where agents are happening, and it's clear that there are very few clear good paths available to us now for agents to actuate & articulate the changes they could and should be doing. Which is just a reflection of app design, where the system state is all bundled up deeply inside these bespoke, awkward UIs. App design doesn't afford good access, and part of the proof is that other machines can't control apps short of enormous visual processing, which leaves much ambiguity. If agents can't, it also strongly implies humans have had little chance to master and advance their experience too.
I strongly think we should have some frontiers for active OS research that are user-impactful. We ought to be figuring out how to allow better for users, in ways that will work broadly & cross-cuttingly. Everything-is-a-file seems like one very strong candidate here, for liberating some of the power out of the narrow, super specific, rigid & closed application layer.
I think Dan was also super on point writing A Social Filesystem, which observes that social networks & many online systems are everything-as-a-file under the hood, and that there is a generic networked multi-party social networking platform available: a super OS already here that does files super interestingly. And Dan points out how it unlocks things, how not having one specific app but having our online data allow multiple consumers, multiple tools, is a super interesting opening.
So, everything-is-a-file is very webful. A URL logically ought to be a file. A multi-media personal data server for every file you can imagine creates an interestingly powerful OS, and a networked OS.
And users have been warped into fitting the small box their apps demand of them so far. They've had no option about it. All the incentive has been to trap users more and more, to leave no off-ramps, to keep your tool being the one tool for the job.
Distribute the power. Decentralize off the app. Allow other tools. Empower a broader OS or platform to let users work across media types and to combine multiple tools and views in their workflow. Allow them to script and control the world around them, to orchestrate & drive tool use machine-to-machine.
I don't disagree with anything you said, I just think it's a 30-year-old basis you stand on, one that hasn't helped things get better and which has steadily shrunk what is possible & limited the ability to even start trying for more or better. I don't think we are served by what it feels like you are trying to highlight. And I think "everything is a file" could be an incredible way to start opening up better, possibly, maybe!! but I'm very down to hear other reasonable or out-there ideas!! I'm just not interested in staying in the disgraceful anti-user, app-controlled, unyielding quagmire we have been trapped in for decades.
I guess I feel like if we’re rewriting device drivers then we’re in a Turing tarpit. I think there’s room for innovation at what is traditionally considered the application level - we run git, postgres, document stores etc. as applications. I think the way to solve the next generation of coordination is by doing more interesting stuff in this layer.