Aside from this file, the "fork" concept of Mac file systems caused some wtf moments. "Fork" here isn't fork() but the two-pronged idea in that file system: every file existed as a pair, a resource component and a data component, one holding metadata and the other the file contents. In Unix, the metadata lived in the inode and wasn't bound to the file by any unique formalism; it had to be represented by a distinct structure in tar, cpio, or zip. Implementing Mac-compatible file support in Unix meant treating the resource fork as first class, and the obvious way to do it was, for each file, to have a .file beside it.
You couldn't map all the properties of the resource fork into a UFS inode of the time. It held stuff like the icon. More modern filesystems may have larger directory block structures and can handle that data better.
The resource fork used to contain all the stuff you could edit with ResEdit (good old times!), right? Icons, various GUI resources; it could hold text and translation assets too. For example, Escape Velocity plugins used custom resource types, and a ResEdit plugin made them easy to edit there.
A lot of Classic Mac apps just used the resource fork to store all their data. It was basically used as a Berkeley DB, except the keys were limited to a 32-bit OSType plus a 16-bit integer, and performance was horrible. But it got the job done when the files were small, had low on-disk overhead, and was ridiculously easy to deploy.
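To make the keying concrete, here's a minimal C sketch of that lookup model. All the names here are mine, not the real Resource Manager API (though IIRC the actual GetResource call took the same (type, ID) pair):

    #include <stddef.h>
    #include <stdint.h>

    /* A 32-bit OSType is four ASCII chars packed big-endian. */
    typedef uint32_t OSType;
    #define OSTYPE(a, b, c, d) \
        (((uint32_t)(a) << 24) | ((uint32_t)(b) << 16) | \
         ((uint32_t)(c) << 8)  |  (uint32_t)(d))

    /* One entry in a toy in-memory "resource map". */
    typedef struct {
        OSType   type;    /* e.g. OSTYPE('M','E','N','U') */
        int16_t  id;      /* e.g. 128 */
        uint32_t length;
        uint8_t *data;
    } ToyResource;

    /* The whole key space is just (4-byte type, 16-bit integer),
       which is the Berkeley-DB-with-tiny-keys flavor described above. */
    static ToyResource *toy_get_resource(ToyResource *map, size_t count,
                                         OSType type, int16_t id) {
        for (size_t i = 0; i < count; i++)
            if (map[i].type == type && map[i].id == id)
                return &map[i];
        return NULL; /* resource not found */
    }

A lookup like toy_get_resource(map, n, OSTYPE('T','E','X','T'), 128) was the entire query language; anything fancier was your problem.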
Once you pushed an app beyond the level of usage the developer had exercised in their initial tests, it would slow to a near-halt, thrashing the disk like crazy on every save. Apple's algorithm would shift huge chunks of the file multiple times per set of updates, when it would usually have been better to just rewrite the entire file once. IIRC, part of the problem was an implicit commitment to never strictly require more than a few KB of available disk space.
In a sense, the resource fork was just too easy and accessible. In the long run, Mac users ended up suffering from it more than they benefited. When Apple finally got rid of it, the rejoicing was pretty much universal. There was none of the nostalgia that usually accompanies disappearing Apple tech, especially the kind that gets removed outright instead of upgraded (though one could argue that's what plists, XML, and bundles did).
The rejoicing was definitely not universal. It really felt like the NeXT folks wanted to throw out pretty much the entire Mac (except keep its customer base and apps), and any compatibility had to be fought for through customer complaints.
Personally, I thought MacOS X bundles (directories that the Finder presented as opaque) were a decent enough replacement for resource forks. The problem was that lots of NeXT-derived utilities munged old Mac files by being ignorant of resource forks, and that was not OK.
The 9->X trapeze act was a colossal success, but in retrospect it was brutally risky. I can't think of a successful precedent involving popular tech. The closest parallel is OS/2, which was a flop for the ages.
A large amount of transition code was written in those years. One well-placed design failure could have cratered the whole project. Considering that the Classic environment was a good-enough catch-all solution, I would have also erred on the side of retiring things that were redundant in NeXT-land.
Resource forks were one of the best victims: 1% functionality and 99% technical debt. The one I mourned was the Code Fragment Manager. It was one of Apple's best OS 9 designs and was massively superior to Mach-O (and even more so compared to other Unices). Alas, it didn't bring enough value to justify the porting work, let alone the opportunity cost and risk delta.
I'm still mourning file name extensions and the loss of the spatial Finder.
MacOS X bundles are actually NeXTStep bundles, and the same idea lies behind Java JAR files with their META-INF directory and .NET resources, owing to Objective-C's legacy on all those systems.
NSUserDefaults, the modern programmer's fork DB :)
A bit more detail: the first three extents of the resource and data forks are stored as part of the entry in the catalog (for a total of up to six extents). On HFS each extent can be 2^16 blocks long (I think HFS+ moved to 32-bit lengths). Anything beyond that (due to size or fragmentation) will have its info stored in an overflow catalog. The overflow catalogs are a) normal files and b) keyed by the ID (CNID) of the parent directory. If memory serves, this means that the catalog file itself can become fragmented, and the lookups themselves are a bit slow. There are little shortcuts (threads) that are keyed by the CNID of the file/directory itself, but as far as I can tell they're only commonly written for directories, not files.
tl;dr For either of the forks (data or resource), once you got beyond the capacity of three extents, or started modifying things on a fragmented filesystem, performance would go to shit.
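For reference, the on-disk shapes involved look roughly like this in C. This is from memory of Inside Macintosh, so treat the field names as approximate:

    #include <stdint.h>

    /* Classic HFS extent descriptor: 16-bit start + 16-bit length,
       which is where the 2^16-blocks-per-extent limit comes from. */
    typedef struct {
        uint16_t startBlock;  /* first allocation block */
        uint16_t blockCount;  /* number of allocation blocks */
    } HFSExtent;

    /* Per-fork extent record embedded in the catalog entry:
       three extents for the data fork plus three for the resource
       fork gives the "up to six extents" mentioned above. */
    typedef HFSExtent HFSExtentRecord[3];

    typedef struct {
        HFSExtentRecord dataExtents;  /* first 3 extents, data fork */
        HFSExtentRecord rsrcExtents;  /* first 3 extents, rsrc fork */
        /* ...the real record also carries lengths, Finder info,
           dates, etc. Anything past these six extents spills into
           the extents overflow file. */
    } ToyCatalogFileEntry;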
Here's some background reading: https://arstechnica.com/gadgets/2001/08/metadata/
I credit ResEdit hacking partially for steering my path toward becoming a programmer. I had my Classic Mac OS installs thoroughly customized, as well as the various other programs and games that stored their assets in resource forks.
It was a lot of fun and something I’ve missed in modern computing. Not even desktop Linux really fills that void. ResEdit and the way it exposed everything, complete with built-in editors, was really something special.
ResEdit and using it to modify Escape Velocity is 100% the reason I’m still in this industry.
Same here, but only for joining the industry. Now it's the opposite: the fact that webdev still hasn't reached the maturity of classic Mac OS makes me want to quit.
The other big thing in the resource fork was the executable code segments that made up the application. In fact, applications typically had nothing in the data fork at all. It was all in the resource fork.
I always thought of the resource fork as a good idea poorly implemented. IMO they should have just given you a library that manipulated a regular file. Then you could choose to use it or not, but it would still be a single file. It could have a standard header to identify it, and the system could look inside if that header was there.
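Something like this minimal sketch of the idea (all names invented): a magic number up front so the system can recognize the container, with the two sections located by offsets in the header:

    #include <stdint.h>
    #include <stdio.h>

    #define FORKFILE_MAGIC 0x464F524Bu  /* 'FORK', hypothetical */

    /* Fixed-size header at the start of an ordinary file. Tools that
       don't care treat the whole thing as opaque bytes; tools that
       recognize the magic can find both sections. (A real format
       would pin down endianness; this sketch ignores that.) */
    typedef struct {
        uint32_t magic;        /* FORKFILE_MAGIC identifies the format */
        uint32_t data_offset;  /* byte offset of the "data fork" */
        uint32_t data_length;
        uint32_t rsrc_offset;  /* byte offset of the "resource fork" */
        uint32_t rsrc_length;
    } ForkFileHeader;

    /* Returns 1 if the file starts with our magic, 0 otherwise. */
    static int is_forkfile(FILE *f) {
        ForkFileHeader h;
        if (fseek(f, 0, SEEK_SET) != 0) return 0;
        if (fread(&h, sizeof h, 1, f) != 1) return 0;
        return h.magic == FORKFILE_MAGIC;
    }

IIRC this is more or less what Apple's own AppleSingle interchange format did: both forks plus the Finder metadata packed into one flat file behind a magic number.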
One of the big problems with resource forks was that no other system supported them, so to host a Mac file on a non-Mac drive or an FTP server, etc., the file had to be converted to something that contained both parts, then converted back when brought to the Mac. It was a PITA.
NTFS has alternate data streams. I think it's hardly ever used.
https://en.wikipedia.org/wiki/NTFS#Alternate_data_stream_(AD...
NTFS ACLs (aka file permissions) are stored in alternate data streams.
I work on ReFS and a little bit on NTFS. Alternate data streams are simply seekable bags of bytes, just like the traditional main data file stream. Security descriptors, extended attributes, reparse points and other file metadata are represented as a more general concept called an "attribute".
You can't actually open a security descriptor attribute and modify select bytes of it to create an invalid security descriptor, as you would if it were a general purpose stream.
Help me understand the terminology. I thought alternate data streams were just non-resident attributes. Attributes like "$SECURITY_DESCRIPTOR" have reserved names but, conceptually, I thought they were stored in the same manner as an alternate data stream. (Admittedly, I've never seen the real NTFS source code; I've only perused open source tools and re-implementations.)
Essentially, attribute names directly specify the attribute type, so $SECURITY_DESCRIPTOR declares the entry in the FILE record's attribute list to be a security descriptor. $DATA attributes have a separate name field to handle multiple instances.
I see. So there's one more layer of indirection there that I'm missing.
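For anyone who hasn't poked at them: named streams are addressable straight from the Win32 path syntax. A quick sketch, error handling trimmed:

    #include <windows.h>

    int main(void) {
        /* "demo.txt:notes" names an alternate data stream on demo.txt.
           The main (unnamed) stream is untouched, and a plain dir
           listing won't show the extra bytes. */
        HANDLE h = CreateFileA("demo.txt:notes", GENERIC_WRITE, 0, NULL,
                               OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        DWORD written = 0;
        WriteFile(h, "tucked away", 11, &written, NULL);
        CloseHandle(h);
        return 0;
    }

dir /r will list the streams, and more < demo.txt:notes reads the data back.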
The article said most browsers mark downloaded files.
That's done as part of xattr, or extended attributes. It's a very flexible system. For example you can add comments to a file so they are indexed by Spotlight.
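On macOS the C-level interface looks roughly like this; note the extra options argument compared to the Linux getxattr/listxattr signatures. A small sketch that lists the attribute names on a file:

    #include <stdio.h>
    #include <string.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv) {
        if (argc < 2) return 1;

        /* The names come back as a NUL-separated list, e.g. a freshly
           downloaded file will typically show com.apple.quarantine. */
        char names[4096];
        ssize_t len = listxattr(argv[1], names, sizeof names,
                                XATTR_NOFOLLOW);
        for (ssize_t i = 0; i < len; i += strlen(names + i) + 1)
            printf("%s\n", names + i);

        return 0;
    }

From the shell, xattr -p com.apple.quarantine <file> shows the quarantine value the browsers set.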
Except NTFS does not have "extended attributes" in the Linux/IRIX/HPFS sense.
Every FILE object in the database is ultimately (outside of some low-level metadata) a map of Type-(optional Name)-Length-Value entries, of which the file contents and what people think of as "extended attributes" are just ordinary DATA-type entries (an empty DATA name marks the default stream you open when you do file I/O).
It's similar to ZFS (in its default config) and Solaris UFS, where a file is also a directory.
Very commonly used to hide malware and other things you don't want the average user or Windows admin to find.
I used to dual-boot OS X and Windows on my Mac in the late 2000s. I'm pretty certain that when I opened the HFS+ volume and copied things to the NTFS volume, some stuff became alternate data streams. Windows even had a UI to tell me about it. I didn't understand it then, but my guess would be that was the resource fork.
Used by malware mostly, I think.
It was all of the forked data that made dual format CDs/DVDs "interesting". In the beginning it was a trick. Eventually, the Mac burning software made it a breeze. Making a Mac bootable DVD was also interesting.
I recall seeing CD-ROMs that had both Mac and Windows software on them, and depending on which OS they were mounted on, they would show the Windows EXE or the Mac app... I wonder how that's done. I'm guessing there was a clever trick so files on both filesystems could share the same data (e.g. if the program/game had a movie, it would only store the bytes of the movie once, but it'd be addressable as a file on each filesystem), but that sounds like a nightmare.
I can probably look it up and figure it out myself, ah, the joys of learning about obsolete tech!
There were also the audio CDs that had data on them. Audio CD players would just play the audio, but a CD-ROM drive could access both. Some carried games that would play the audio portion as the game's soundtrack.
If you want to know about the different types of CDs, you'll want to know about the various colors: https://en.wikipedia.org/wiki/Rainbow_Books
Some PlayStation 1 games were set up to also play their soundtrack if you put them in an audio CD player.
MechWarrior 2: Mercenaries (for PC) was the same way. Rocking soundtrack. Beautiful game, provided you had a Voodoo 2.
The Mac version of the original Descent was like this too, with a great redbook audio soundtrack. The game wasn't locked to the original disc though, you could pop out the CD in the middle of the game and replace it with any other audio CD and it'd play that just as well.
I remember listening to the Warcraft 2 soundtrack from the game CD-ROM in the living room audio CD player.
IIRC from that time, those CD-ROMs contained two tracks, one formatted with ISO 9660 and the other with HFS+. Windows didn't come with HFS+ drivers, so it ignored it, and MacOS probably prioritized mounting the HFS+ track.
I've seen some where the combined file size exposed on each track would be larger than a CD could hold, so there had to be something more going on. StarCraft and Brood War come to mind with the large StarDat.mpq / BrooDat.mpq files.
TL;DR ISO9660 provided an area to stuff type-tagged extension information for each directory entry.
In addition, the first 32 kB of an ISO 9660 volume are unused, which allowed tricks like putting another filesystem's metadata there.
By carefully arranging metadata on disk, it was then possible to make essentially overlapping partitions, stuffing each filesystem's metadata into areas unused by the other, with files reusing the same space.
You can hide files from Windows by setting a property on the file. You can hide a file from MacOS by putting its name in a file called ".hidden".
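The Windows half of that is a single attribute bit; a one-liner sketch:

    #include <windows.h>

    /* Set the hidden bit: Explorer and a plain "dir" will skip the
       file afterwards. (The MacOS .hidden mechanism is just a text
       file listing names to hide, as described above.) */
    int main(void) {
        return SetFileAttributesA("secret.txt",
                                  FILE_ATTRIBUTE_HIDDEN) ? 0 : 1;
    }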
As it starts about 32 kB in, the ISO 9660 superblock doesn't inherently conflict with an Apple partition map, which starts at the beginning. Apple also had proprietary ISO 9660 extensions that added extra metadata to the directory entries, much like the Rock Ridge extensions do. Those would get ignored by non-Apple implementations of ISO 9660.
Microsoft went a different route with its long-filename extension (Joliet): they simply created a whole separate (UCS-2/UTF-16-encoded) directory tree. An ISO 9660 implementation that's compatible with Joliet will prefer the Unicode directory hierarchy and look there for files.
You have the same "resource fork" concept in Unix xattrs and NTFS streams.
No, disagree: both came later, IIRC. Melbourne Uni's work on AppleTalk and Apple file system support was in the late 80s, and I believe the POSIX xattr spec work was mid-nineties; NTFS was '93 or so. The fork model in the Apple file store was eighties work.
GP wasn’t arguing about timelines.
NTFS ADS were created to accommodate Mac OS resource forks on network volumes when using AFP.
Gotcha! I assumed they were invented for Windows centric reasons.
The concept of extended file attributes was introduced by HPFS, in OS/2, in 1989.
From HPFS it was taken by SGI XFS (the ancestor of Linux XFS) and MS NTFS, both in 1993.
From there it has spread to various other file systems and specifications.
The concept of resource forks is earlier, but both are examples of using alternate data streams in a file.
I’d say this is not the right way to describe a resource fork. Instead, think of it as two sets of file contents—one called "data" and one called "rsrc". On-disk, they are both just bytestreams.
The catch is that you usually store a specific structure in the resource fork—smaller chunks of data indexed by 4-byte type codes and 2-byte integer IDs. Applications on the 68K normally stored everything in the resource fork: code, menus, dialog boxes, pictures, icons, strings, and whatever else. If you copied an old Mac application to a PC or Unix system without translation, what you got was an empty file. This meant that Mac applications had to be encoded into a single stream to be sent over the network… early on, that meant BinHex .hqx or MacBinary .bin, and later on you saw StuffIt .sit archives.
That’s why these structures don’t fit into an inode—it’s like you’re trying to cram a whole goddamn file in there. The resource fork structure had internal limits that capped it at 16 MB, but you could also just treat it as a separate stream of data and make it as big as you want.
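On OS X and later you can still read that second bytestream through the special ..namedfork path. Here's a sketch that dumps the four big-endian 32-bit fields at the front of a classic resource fork; the layout (offsets and lengths of the data and map areas) is from memory, so treat it as approximate:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t be32(const unsigned char *p) {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
             | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    int main(int argc, char **argv) {
        if (argc < 2) return 1;

        /* "file/..namedfork/rsrc" addresses the resource fork on macOS. */
        char path[4096];
        snprintf(path, sizeof path, "%s/..namedfork/rsrc", argv[1]);

        FILE *f = fopen(path, "rb");
        if (!f) { perror("no resource fork"); return 1; }

        unsigned char hdr[16];
        if (fread(hdr, 1, sizeof hdr, f) == sizeof hdr) {
            printf("data offset: %u\n", be32(hdr));
            printf("map  offset: %u\n", be32(hdr + 4));
            printf("data length: %u\n", be32(hdr + 8));
            printf("map  length: %u\n", be32(hdr + 12));
        }
        fclose(f);
        return 0;
    }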
In Unix, it's said that "Everything is a file" - i.e. that everything on the system that applications need to manage should either be actual files on disk or present themselves to the application as if they were files.
This adage, translated to classic MacOS, becomes "Everything is a resource". The Resource Manager started out as developer cope from Bruce Horn for not having access to Smalltalk anymore[0], but it turned out to completely overtake the entire Macintosh Toolbox API. Packaging everything as type-coded data with standard-ish formats meant cross-cutting concerns like localization or demand paging were brokered through the Resource Manager.
All of this sounds passé today because you can just use directories and files, and have the shell present the whole application as a single object. In fact, this is what all the ex-Apple staff who moved to NeXT wound up doing, which is why OSX has directories that end in .app with a bunch of separate files instead. The reason why they couldn't do this in 1984 is very simple: the Macintosh File System (MFS) that Apple shipped had only partial folder support.
To be clear, MFS did actually have folders[1], but only one directory[2] for the entire volume. What files went in which folders was stored in a separate special file that only the Finder read. There was no Toolbox support for reading folder contents, just the master directory, so applications couldn't actually put files in folders. Not even using the Toolbox file pickers.
And this meant the "sane approach" NeXT and OSX took was actually impossible in the system they were developing. Resources needed to live somewhere, so they added a second bytestream to every file and used it to store something morally equivalent to another directory that only holds resources. The Resource Manager treats an MFS disk as a single pile of files that each holds a single pile of resources.
[0] https://www.folklore.org/The_Grand_Unified_Model.html?sort=d...
[1] As in, a filesystem object that can own other filesystem objects.
[2] As in, a list of filesystem objects. Though in MFS's case it's more like an inode table...
One of the most important technical details about resources in early MacOS is that they allowed the system to swap resources in and out by using doubly indirect pointers (aka handles), with the lock bit stuffed into the upper 8 bits of the 32-bit address. Stealing flag bits from the upper byte, instead of increasing alignment to free up a few lower bits, was fine on the 68000 and 68010 with their 24-bit address space, but exploded in your face on an 020/030 with a real 32-bit address space. It was a nightmare to develop and debug: a mix of assembler, Pascal, and C without memory protection. But at least you could use ResEdit to put insults into menu entries on school computers.
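Roughly what that looked like, as a sketch (exact flag assignments are from memory): the Memory Manager kept flags in the top byte of each master pointer, so code that dereferenced an address without stripping them worked by accident on 24-bit machines and corrupted things on 32-bit ones:

    #include <stdint.h>

    /* Flags stuffed into the high byte of a master pointer on the
       24-bit 68000/68010 Macs (bit positions from memory): */
    #define MP_LOCKED    0x80u  /* block may not be moved */
    #define MP_PURGEABLE 0x40u  /* memory manager may discard it */
    #define MP_RESOURCE  0x20u  /* block belongs to a resource */

    /* The 68000 ignored the top byte of an address, so unmasked
       dereferences "worked". On an 020/030 you had to strip the
       flags first, which was the job of the real StripAddress() trap. */
    static uint32_t strip_address(uint32_t master_ptr) {
        return master_ptr & 0x00FFFFFFu;
    }

    static int is_locked(uint32_t master_ptr) {
        return ((master_ptr >> 24) & MP_LOCKED) ? 1 : 0;
    }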
Good ol' purgeable resources: one of the reasons the early Mac could get away with 128 KB and lots of floppy swapping.
Prefixing the file name with a single dot: is this a file system convention, or just a "good idea"?
It's the Unix convention for hiding files: dotfiles are hidden from ls unless -a is used, but cd .config/ works fine. It matched the use of . for "this dir" and .. for "parent dir", which are also hidden by default. It was in v7 on a PDP-11, my first experience of Unix in 1980, and probably pre-dated that.
Oh sure. I started with v6 on a PDP-10 in 1979. And the leading dot is ingrained in my brain.
But what I'm wondering about is the idea of associating (for example) "myfile.xyz" and ".myfile.xyz". I've never heard of this as a convention for associating metadata.
Resource and data forks were HFS(+) features that appeared in pre-OSX versions of MacOS. Post-OSX made use of the BSD fast file system and a rather nice Unix-style convention from NeXTSTEP, where the on-disk representation of a .app or .pkg (which would appear as a single entity in the GUI) was actually a directory tree. This would rather elegantly include UI resources as well as multiple binaries for cross-platform support.
Application metadata (describing what file types an application could open, and what icons to use for those file types if they matched the application's creator code) was stored in the resource fork of the application, but file metadata was never stored in the resource fork. File types, creator codes, and the lock, invisible, bozo, etc. bits were always stored in the file system.
See for example the description of the MFS disk format at https://wiki.osdev.org/MFS#File_Directory_Blocks