Aah, the PowerPC architecture! Very nice! I remember working on the IBM p5 series: really nice machines with both hot-swap and cold-swap components, color-coded for convenience. Great operating system too; AIX was solidly integrated with the architecture. LPARs and soft partitions, really flexible. And highly available: we rarely rebooted the whole machine. In fact, I can probably count on one hand the number of times we rebooted it.
Very cool, we had a couple of these (along with a couple HP-UX and SGI boxes, amongst a sea of Sun workstations) at my first gig as a Unix administrator. It was such a treat to see the diversity of the proprietary *nix world when Linux was taking over (this was the late 00's when their fates were clearly written at a megacorp that kept them around mostly for contractual obligations).
I still have to deal with a handful of UNIX systems at $WORK, mostly AIX, and I don't really like it much compared to all of the Linux boxes that we mostly use. On one hand it seems to be rock solid and all of that, but on the other it's like driving a Ferrari to work instead of a more sensible Toyota. Most of them are being replaced by cheaper Linux servers, where memory is not so pricey and things mostly feel the same, albeit with some memory allocation/caching differences.
I did some work on AIX once. The thing that I remember is that I was granted some kind of zone/slice or whatever they call it for compartmentalization. It didn't even have SSH, so I had to use telnet.
The guy I was supposed to prepare the system for could only install Oracle from some crappy Java UI wizard, so I had to ask the sysadmin to install a lot of Linux libraries and programs to enable X11 over SSH.
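For anyone who hasn't had to do this: running a remote GUI installer like that usually means X11 forwarding over SSH. A minimal sketch (hostname and installer path are hypothetical; the server needs `X11Forwarding yes` in its `sshd_config` and the `xauth` package installed):

```shell
# -X enables X11 forwarding; the remote DISPLAY is pointed at a
# tunnel back to your local X server.
ssh -X oracle@aixhost

# Then, in the remote shell:
echo $DISPLAY      # typically something like localhost:10.0
./runInstaller     # hypothetical name - the Java wizard renders locally
```

On older or locked-down hosts without SSH, the fallback was `export DISPLAY=yourworkstation:0` plus `xhost +` on the client, which is exactly the kind of insecure setup SSH forwarding replaced.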
From memory there was LPAR ("Logical Partition"), which was effectively like a VM, and there was WPAR ("Workload Partition"), which shared the OS and was more like a container.
I had some "interesting" experiences getting stuff to work on WPARs.
IIRC, WPARs could host just a single process or a full OS environment (but sharing the resources of a single AIX instance, which I guess was itself running in an LPAR or directly on the hardware).
But yeah, bit more like a container.
I first learned on an AIX box in college; Cygwin/X gave me X11 access and worked perfectly, although I couldn’t tell you whether that used telnet or ssh. Back then I used telnet a lot without any regard for security.
> crappy java UI wizard
Nicely put (oof!). I believe it also enforced a minimal color depth, which none of our machines could directly support on their own hardware, forcing the use of remote X11 displays.
Is it true that 0x00000000 is a valid memory address on aix? I’m sure I read it somewhere but struggled to confirm it..
Yes, I believe this was an optimization to allow IBM’s compiler to do speculative loads before a null check.
Alien Infested uniX indeed :)
That's true on many systems... nothing special about 0x0, other than NULL happens to be defined as 0 in most toolchains and some functions use NULL to report an error.
Linux still has to copy a few AIX tricks, like the way lazy linking works.
Naive question: by your analogy, would a 1990s Ferrari perform today as it did back then?
I guess yes, although given today's petrol prices and environmental restrictions, it wouldn't be able to drive anywhere (at least in the EU)
Yes and no. Performance-wise, the iconic Ferrari Testarossa from the '80s/'90s does 0-62 mph in 5.8 s. That's in the ballpark of today's family SUV EVs, like the Tesla Model Y (standard version; the 'Performance' does 3.3 s) or Hyundai Ioniq 5 (again the standard version; the performance 'N' does 3.4 s).
But I'm sure the "fun factor" in a Ferrari is much greater and of course there's a nostalgia factor as well... it was "THE" supercar when I was a kid. I would love to drive one today and it would be much cooler than a Tesla Y or Ioniq 5 :-)
Also, 80s/90s Ferraris weren't very reliable... :P
The last Testarossa I saw in the wild was around 2010 parked in Hoxton London. None of the upholstery was holding up and it looked like it might not be driveable. But it got there somehow.
Absolute vs relative performance is important to consider
And normalized performance? :)
> HP's HP-UX hardware being an exception since they just sloppily hacked in standard ATI cards / which means you wouldn't get the extra benefits of running a GXT6500 on AIX as you would with a FireGL X3 on HP-UX. HP probably had the lowest share of the UNIX CAD market so they probably felt little need to invest much R&D: not to mention HP can't make a proper enterprise workstation or server ANYWAYS.
As the kids say: LOL.
Can somebody provide an example why would someone prefer such a workstation over a Windows workstation back then? I.e., which specific programs/applications demanded it?
Silicon Graphics was still viable in 2006, mostly used for engineering (and maybe video production) graphics. Sun and IBM also competed in this space. SGI went bust in 2009 due to competitive pressures from Windows/x86 workstations. 2006 was probably the last hurrah for this type of workstation.
It's sprayed all over TFA: CAD
Mind that this was early Windows XP era. The Windows "workstation" would probably have something like a RIVA TNT with 16MB of graphics memory. Meanwhile the Intellistation had way more powerful options (e.g. 128MB on a single card, or exotic 4 cards x 16MB configurations).
But even if you could beef your PC hardware to similar specs, the CAD software was probably just not there (yet). Not to mention that pre-SP2 Windows XP were pretty terrible on their own.
Nvidia had Quadro (https://en.wikipedia.org/wiki/Quadro). Generally the same GPUs as the consumer parts, but with different memory configurations and firmware.
The ATi equivalent was FireGL.
A TNT was from the late 90s. In 2006 512MB consumer GPUs were common.
It's in the article. CAD workstations (CATIA etc.).
Also AIX was a safer and better certified system back then (think DoD stuff).
From the link:
> At that point Windows XP 32-bit was the most commonly used variant, and while you could run XP 64-bit (and IBM did have native support for it on the IntelliStation 9228), XP 64-bit had so many problems so most users were stuck with 3.9 GB of RAM. Therefore if we were to assume that UNIX and said UNIX hardware offered way more memory, it starts to make sense
An interesting writeup about one of the last Unix workstations from 2006.
I definitely didn't expect something called an "AIX workstation" to have been released in 2006.