The author might have had better luck by using an external storage device to boot the Mac and delete unneeded files on the internal disk from there:
Use an external storage device as a Mac startup disk https://support.apple.com/en-us/111336
Was surprised to learn that with Apple silicon-based Macs, not all ports are equal when it comes to external booting:
If you're using a Mac computer with Apple silicon, your Mac has one or more USB or Thunderbolt ports that have a type USB-C connector. While you're installing macOS on your storage device, it matters which of these ports you use. After installation is complete, you can connect your storage device to any of them.
* Mac laptop computer: Use any USB-C port except the leftmost USB-C port when facing the ports on the left side of the Mac.
* iMac: Use any USB-C port except the rightmost USB-C port when facing the back of the Mac.
* Mac mini: Use any USB-C port except the leftmost USB-C port when facing the back of the Mac.
* Mac Studio: Use any USB-C port except the rightmost USB-C port when facing the back of the Mac.
* Mac Pro with desktop enclosure: Use any USB-C port except the one on the top of the Mac that is farthest from the power button.
* Mac Pro with rack enclosure: Use any USB-C port except the one on the front of the Mac that's closest to the power button.
If the filesystem itself got into a deadlocked state, booting from anything and going through the FS driver to delete files from it won't work.
What do you mean by "deadlocked state" for a filesystem?
Modern (well, post-ZFS) filesystems work by moving through state changes in which data is not (immediately) destroyed; older versions of the data remain available for various purposes. Similar to an ACID-compliant database, something like a backup or recovery process can still access older snapshots of the filesystem, for various values of "older" that might range from milliseconds to seconds to years.
With that in mind, you can see how we get into a scenario where deleting a file requires a small amount of storage to record the old and new states before it can actually free up space by releasing the old state. There is supposed to be an escape hatch for getting yourself out of a situation where there isn't even enough storage for that little bit of bookkeeping, but either the author didn't know whatever trick is needed or the filesystem code wasn't well-behaved in this area (it's a corner case that isn't often tested).
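Roughly how that plays out on ZFS, for example (pool, dataset, and file names here are made up):

    zfs snapshot tank/data@keep      # pins the current blocks
    rm /tank/data/bigfile            # the file disappears from the live filesystem...
    zfs list -o name,used,refer      # ...but its space is now charged to the snapshot
    zfs destroy tank/data@keep       # only now can the old blocks actually be freed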
i've filled up a zfs array to the point where i could not delete files.
the trick is to truncate a large enough file, or enough small files, to zero.
not sure if this is a universal shell trick, but it worked on those i tried: "> filename"
For reasons I am completely unwilling to research, just doing `> filename` has not worked for me in a while.
Since then I memorized this: `cat /dev/null >! filename`, and it has worked on systems with zsh and bash.
Simple to verify with strace -f bash -c "> file" and man 2 openat.
Sure, but I just get an interactive prompt when I type `> file` and I honestly don't care to troubleshoot. ¯\_(ツ)_/¯
Ok, we'll leave that a mystery then!
"truncate -s0 filename"
I believe "> filename" only works correctly if you're root (at least in my experience, if I remember correctly).
EDIT: To remove <> from filename placeholder which might be confusing, and to put commands in quotes.
Oh yes, that one also worked everywhere I tried, thanks for reminding me.
Pleasure.
It saved me just yesterday when I needed to truncate hundreds of gigabytes of Docker logs on a system that had been having some issues for a while but I didn't want to recreate containers.
"truncate -s 0 /var/lib/docker/containers/**/*-json.log"
Will truncate all of the json logs for all of the containers on the host to 0 bytes.
Of course the system should have had logging configured better (rotation, limits, remote log) in the first place, but it isn't my system.
EDIT: Missing double-star.
That seems to be zsh-specific syntax that is like ">" except that it overrides the CLOBBER setting[1].
However, it won't work in bash. There it will create a file named "!" with the same contents as "filename" - it is equivalent to "cat /dev/null filename > !". (Bash lets you put the redirection almost anywhere, including between one argument and another.)
---
[1] See https://zsh.sourceforge.io/Doc/Release/Redirection.html
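If you want to see the bash behavior for yourself, a quick sanity check (file names are arbitrary):

    printf 'hello\n' > filename
    bash -c 'cat /dev/null >! filename'
    ls        # a file literally named "!" now exists alongside filename
    cat '!'   # prints "hello", i.e. filename's contents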
Yikes, then I have remembered wrong about bash, thank you.
In that case I'll just always use `truncate -s0` then. Safest option to remember without having to carry around context about which shell is running the script, it seems.
It'd be better to do ": >filename"
: is a shell built-in for most shells that does nothing.
It feels like insanity that the default configuration of any filesystem intended for laymen can fail to delete a file due to anything other than an I/O error. If you want to keep a snapshot, at least bypass it when disk space runs out? How many customers do the vendors think would prefer the alternative?!
Pretty much by the time you get to 100% full on ZFS, the latency is going to be atrocious anyway, but from my understanding there are multiple steps you can take (from simplest to worst case) in case you do hit the error:
1. Just remove some files - ZFS will attempt to do the right thing
2. Remove old snapshots
3. Mount the drive from another system (so nothing tries writing to it), then remove some files, reboot back to normal
4. Use `zfs send` to copy the data you want to keep to another, bigger drive temporarily, then either prune the data there, or, if you already filtered out any old snapshots, wipe the original pool and reload it with `zfs send` from before (rough commands for steps 2 and 4 sketched below).
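For steps 2 and 4, it's very roughly something like this (pool, dataset, and snapshot names are placeholders):

    # step 2: find the biggest/oldest snapshots and drop them
    zfs list -t snapshot -o name,used -s used
    zfs destroy tank/data@2019-backup
    # step 4: replicate what you want to keep to a bigger pool, then rebuild and restore
    zfs send -R tank/data@latest | zfs recv bigger/data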
Modern defrag seems very cumbersome xD
Defragmentation and the ability to do it are not free.
You can have cheap defrag but comparatively brittle filesystems by making things modifiable in place.
Or you can have a filesystem whose primary value is "never lose your data", but in exchange defragmentation is expensive.
It's not really just keeping snapshots that is the issue, usually. It's just normal FS operation, meant to prevent data corruption if any of these actions is interrupted, as well as various space-saving measures. Some FSs link files together when saving mass data so that identical blocks between them are only stored once, which means any of those files can only be fully deleted when all of them are. Some FSs log actions onto disk before and after doing them so that they can be restarted if interrupted. Some FSs do genuinely keep files on disk if they're already referenced in a snapshot even if you delete them – this is one instance where a modal about the issue should probably pop up if disk space is low. And some OSes really really really want to move things to .Trash1000 or something else stupid instead of deleting them.
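The block-sharing case is easy to see on a CoW filesystem that supports reflinks, e.g. btrfs or XFS with GNU cp (file names are just examples):

    cp --reflink=always big.iso copy.iso   # near-instant copy that shares the same blocks
    rm big.iso                             # the blocks stay allocated as long as copy.iso references them
    df -h .                                # free space barely changes until the last reference is gone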
I'm most surprised by the lack of testing. Macs tend to ship with much smaller SSDs than other computers because that's how Apple makes money ($600 for 1.5TB of flash vs. $100/2TB if you buy an NVMe SSD), so I'd expect that people run out of space pretty frequently.
And if you make the experience broken and frustrating people will throw the whole computer away and buy a new one since the storage can’t be upgraded.
Some filesystems may require allocating metadata to delete a file. AFAIK it's a non-issue with traditional Berkeley-style filesystems, since metadata and data come from separate pools. Notably, ZFS has this problem.
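You can see that separation on a traditional ext4/UFS-style filesystem, where inodes are preallocated independently of data blocks (mount point is an example):

    df -h /var   # data-block usage
    df -i /var   # inode usage: a separate, fixed-size pool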
btrfs has this problem too, it seems, but there it's usually easy to add a USB stick to extend the filesystem and fix the problem.
i find it really frustrating though. why not just reserve some space?
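The btrfs escape hatch mentioned above looks roughly like this (device and mount point are placeholders):

    btrfs device add /dev/sdX1 /mnt/full-volume      # temporarily grow the filesystem onto a USB stick
    rm /mnt/full-volume/some-huge-file               # deletions now have metadata headroom
    btrfs device remove /dev/sdX1 /mnt/full-volume   # migrate data back off and shrink again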
Yeah, with ZFS some people will make an unused dataset with a small reservation (say 1G) that you can then shrink to delete files if the disk is full.
The recommended solution is to apply a quota on top-level dataset, but that's mainly for preventing fragmentation or runaway writes.
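A sketch of that reservation trick (pool and dataset names are made up):

    zfs create -o reservation=1G tank/spare   # set aside headroom up front and never write to it
    # later, when the pool is "full" and rm fails:
    zfs set reservation=none tank/spare       # release the headroom so deletes/destroys can proceed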
I think the solution is to not use a filesystem that is broken in this way.
Note that ZFS explicitly has safeguards against total failure. No filesystem handles a near-full state well when it comes to fragmentation.
This is whataboutism. Being unable to use the filesystem when it's full, without arcane knowledge, is not the same as "not working well".
This is a broken implementation.
You're misunderstanding. See the sibling thread where p_l says that this problem has been resolved, and any further occurrence would be treated as a bug. Setting the quota is only done now to reduce fragmentation (ZFS's fragmentation avoidance requires sufficient free space to be effective).
No, I'm not. They said the "recommended solution" for this issue is to use a quota.
They also said it was mainly used for other issues, such as fragmentation. In other words, this was stated as a fix for the file delete issue.
How does this invalidate my comment, that this was a broken implementation?
It doesn't matter if it will be fixed in the future, or was just fixed.
According to rincebrain, the "disk too full to delete files" was fixed "shortly after the fork" which means "shortly after 2012." My information was quite out of date.
Well, I'm glad they fixed a bug that made the filesystem unusable. Good on them, and thank you for the clarification.
This hasn't been a problem you should be able to hit in ZFS in a long time.
It reserves a percentage of your pool's total space precisely to avoid having zero actual free space, and it only allows using space from that reserve if the operation is a net gain in free space.
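On Linux you can get a rough sense of that reserve; the fraction is governed by the spa_slop_shift module parameter (pool name is a placeholder):

    cat /sys/module/zfs/parameters/spa_slop_shift   # default 5, i.e. roughly 1/32 of the pool held back (capped)
    zpool list tank                                 # raw pool size and free space
    zfs list tank                                   # AVAIL is smaller; part of the gap is this slop reserve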
Yeah, a situation where your pool gets suspended due to no space and you can't delete files is considered a bug by OpenZFS.
I mean, the pool should never have gotten suspended by that, even before OpenZFS was forked; just ENOSPC on rm.
Oh, that's good to know. I hit it in the past, but it was long enough ago that ZFS still had format versions.
Yeah, the whole dance around slop space, if I remember my archaeology, went in shortly after the fork.
For more details about this slop space, see this comment:
https://github.com/openzfs/zfs/blob/99741bde59d1d1df0963009b...
btrfs does reserve some space for exactly this issue, although it might not always be enough.
https://btrfs.readthedocs.io/en/latest/btrfs-filesystem.html
Where `rm`, or more technically unlink(2), fails due to ENOSPC, like in the article...
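You can see how big that reserve currently is with (mount point is an example):

    btrfs filesystem usage /mnt/data
    # look for the "GlobalReserve" line - space btrfs holds back so metadata updates (like an unlink) can still commit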
Hilariously this failure case doesn't seem to be listed in the docs. https://developer.apple.com/library/archive/documentation/Sy...
Anyone know why you can’t use the first usb-c port on a Mac laptop to make the bootable os?
The ports mentioned expose the serial interface that can be used to restore/revive the machine in DFU mode
https://support.apple.com/en-us/108900
That said, no idea why they can’t be used in this case
My intuitive guess here is how the ports are connected to the T2 security chip. One port is, as you said, a console port that allows access to perform commands to flash/recover/re-provision the T2 chip. Same as an OOB serial port on networking equipment.
For the rest of the ports, the T2 chip has read/write access to the devices connected to them. Since this is an OS drive, I'm guessing it needs to be encrypted and the T2 chip handles this function.
That doesn't make it technically impossible to implement booting from that port.
The firmware is based on the iPhone boot process, from my understanding, and simply does not have space in ROM to implement booting from external storage.
The rest of the code necessary to boot from external sources is located on the main flash.
Yes, but the decision to use this firmware was made by Apple.
This is like saying my software did not work because it was based on an incompatible version of some library. Maybe so, but that is a bad excuse. Implementing systems is hard, and like the rest of us, Apple should not get away with bad excuses. And this is even more true because they control more of the stack.
OTOH, the current implementation works and is sufficient, so Apple could easily decide that it's not worth modifying firmware that already works to solve a nonexistent issue.
Sure, but it also doesn't make it necessary or useful to implement booting from that port - booting from a port IMHO is not a feature that Apple wants to offer to its target audience at all, so it's sufficient if some repair technician can do that according to a manual which says which port to use in which scenario.
Also on my mbpro at least the mentioned port is the one closest to the magsafe connector and may have funny electrical connections to it, perhaps.
What if it's through a USB-C adapter to a USB-A thumb stick?
Or boot it into Target Disk Mode using another machine.
You can't boot the ARM Macs into Target Disk Mode; you can only boot to the recovery OS and share the drive - it shows up as a network share, iirc. I was super annoyed by this a few weeks ago because you can, for example, use Spotlight to search for "target disk mode" and it will show up, and it looks like it will take you to the reboot-in-Target-Disk-Mode option, but once you're there it's just the standard "choose a boot drive" selector.
The author tried essentially the same thing as what you suggest. He booted into recoveryOS (a separate partition), then from there tried to delete files from the main system partition. But rm failed with the same error, "No space left on device". So, as others have suggested, truncating a file might have worked: "echo -n >file".
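In the recoveryOS Terminal that would look something like this, assuming the data volume is unlocked and mounted (the volume name and path below are just the usual default and a placeholder):

    cd "/Volumes/Macintosh HD - Data"
    : > Users/someone/Downloads/huge-file.dmg   # truncate in place instead of rm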
The next step I have used and seen recommended after recoveryOS is single user mode, which is what I think I used to solve the same issue on an old mac. I vaguely remember another reason I used single user mode where recovery mode failed but I do not remember any details.
My bet is that you can get nearly the same functionality with single user mode vs booting from external media, but I only have a vague understanding of the limitations of all three modes from 3-5 uses via tutorials.
iirc, not all ports were equal when it came to charging with the M1 Macs, so this is actually not so surprising.
But charging through many ports requires extra circuitry to support more power on every port, while booting from multiple ports just requires the boot sequence firmware to talk to more than one USB controller (like PC motherboards do, for example)
Why do you think that would work, if using recoveryOS or starting the Mac in Share Disk/Target Disk mode didn't?
This is the kind of comment someone is going to be very happy to read in 8 years when they’re looking for answers for their (then) ancient Mac.