
A disk so full, it couldn't be restored

miles
58 replies
15h53m

The author might have had better luck by using an external storage device to boot the Mac and delete unneeded files on the internal disk from there:

Use an external storage device as a Mac startup disk https://support.apple.com/en-us/111336

Was surprised to learn that with Apple silicon-based Macs, not all ports are equal when it comes to external booting:

If you're using a Mac computer with Apple silicon, your Mac has one or more USB or Thunderbolt ports that have a type USB-C connector. While you're installing macOS on your storage device, it matters which of these ports you use. After installation is complete, you can connect your storage device to any of them.

* Mac laptop computer: Use any USB-C port except the leftmost USB-C port when facing the ports on the left side of the Mac.

* iMac: Use any USB-C port except the rightmost USB-C port when facing the back of the Mac.

* Mac mini: Use any USB-C port except the leftmost USB-C port when facing the back of the Mac.

* Mac Studio: Use any USB-C port except the rightmost USB-C port when facing the back of the Mac.

* Mac Pro with desktop enclosure: Use any USB-C port except the one on the top of the Mac that is farthest from the power button.

* Mac Pro with rack enclosure: Use any USB-C port except the one on the front of the Mac that's closest to the power button.

userbinator
39 replies
15h43m

If the filesystem itself got into a deadlocked state, booting from anything and going through the FS driver to delete files from it won't work.

HumanOstrich
38 replies
15h30m

What do you mean by "deadlocked state" for a filesystem?

syncsynchalt
18 replies
15h6m

Modern (well, post-ZFS) filesystems operate by moving the filesystem through state changes where data is not (immediately) destroyed, but older versions of the data are still available for various purposes. Similar to an ACID-compliant database, something like a backup or recovery process can still access older snapshots of the filesystem, for various values of "older" that might range from milliseconds to seconds to years.

With that in mind, you can see how we get into a scenario where deleting a file requires a small bit of storage to record the old and new states before the filesystem can actually free up space by releasing the old state. There is supposed to be an escape hatch for getting yourself out of a situation where there isn't even enough storage for this little bit of recordkeeping, but either the author didn't know whatever trick is needed or the filesystem code wasn't well-behaved in this area (it's a corner case that isn't often tested).

werid
10 replies
9h48m

i've filled up a zfs array to the point where i could not delete files.

the trick is to truncate a large enough file, or enough small files, to zero.

not sure if this is a universal shell trick, but worked on those i tried: "> filename"

pdimitar
8 replies
8h22m

For reasons I am completely unwilling to research, just doing `> filename` has not worked for me in a while.

Since then I memorized this: `cat /dev/null >! filename`, and it has worked on systems with zsh and bash.

matja
2 replies
6h38m

Simple to verify with strace -f bash -c "> file":

    openat(AT_FDCWD, "file", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3

man 2 openat:

    O_TRUNC
        If the file already exists and is a regular file and the
        access mode allows writing (i.e., is O_RDWR or O_WRONLY) it
        will be truncated to length 0.
        ...

pdimitar
1 replies
6h31m

Sure, but I just get an interactive prompt when I type `> file` and I honestly don't care to troubleshoot. ¯\_(ツ)_/¯

matja
0 replies
6h6m

Ok, we'll leave that a mystery then!

alias_neo
2 replies
7h29m

"truncate -s0 filename"

I believe "> filename" only works correctly if you're root (at least in my experience, if I remember correctly).

EDIT: To remove <> from filename placeholder which might be confusing, and to put commands in quotes.

pdimitar
1 replies
7h26m

Oh yes, that one also worked everywhere I tried, thanks for reminding me.

alias_neo
0 replies
7h20m

Pleasure.

It saved me just yesterday when I needed to truncate hundreds of gigabytes of Docker logs on a system that had been having some issues for a while but I didn't want to recreate containers.

"truncate -s 0 /var/lib/docker/containers/**/*-json.log"

Will truncate all of the json logs for all of the containers on the host to 0 bytes.

Of course the system should have had logging configured better (rotation, limits, remote log) in the first place, but it isn't my system.

EDIT: Missing double-star.*

adrianmonk
1 replies
3h46m

That seems to be zsh-specific syntax that is like ">" except that it overrides the CLOBBER setting[1].

However, it won't work in bash. It will create a file named "!" with the same contents as "filename". It is equivalent to "cat /dev/null filename > !". (Bash lets you put the redirection almost anywhere, including between one argument and another.)

---

[1] See https://zsh.sourceforge.io/Doc/Release/Redirection.html
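
A quick way to see the difference is a throwaway sketch like the following (filenames are arbitrary, run in an empty temp directory):

    cd "$(mktemp -d)"

    # zsh: `>!` truncates the file even when the noclobber option is set
    zsh -c 'setopt noclobber; echo data > demo; cat /dev/null >! demo; wc -c demo'    # prints: 0 demo

    # bash: the same line is parsed as `cat /dev/null demo2 > !`, which leaves
    # demo2 untouched and creates a file literally named "!"
    bash -c 'echo data > demo2; cat /dev/null >! demo2; ls demo2 "!"'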

pdimitar
0 replies
22m

Yikes, then I have remembered wrong about bash, thank you.

In that case I'll just always use `truncate -s0` then. Safest option to remember without having to carry around context about which shell is running the script, it seems.

ralferoo
0 replies
6h23m

It'd be better to do ": >filename"

: is a shell built-in for most shells that does nothing.

dataflow
4 replies
12h54m

It feels like insanity that the default configuration of any filesystem intended for laymen can fail to delete a file due to anything other than an I/O error. If you want to keep a snapshot, at least bypass it when disk space runs out? How many customers do the vendors think would prefer the alternative?!

p_l
2 replies
10h4m

Pretty much by the time you get to 100% full on ZFS, the latency is going to get atrocious anyway, but from my understanding there are multiple steps (from simplest to worst case) that ZFS permits in case you do hit the error:

1. Just remove some files - ZFS will attempt to do the right thing

2. Remove old snapshots

3. Mount the drive from another system (so nothing tries writing to it), then remove some files, reboot back to normal

4. Use `zfs send` to copy the data you want to keep to another bigger drive temporarily, then either prune the data or if you already filtered out any old snapshots, zero the original pool and reload it by `zfs send` from before.
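
For steps 2 and 4, the commands would look roughly like this (pool and dataset names are placeholders):

    # 2. List and destroy old snapshots to reclaim space
    zfs list -t snapshot -o name,used -s used
    zfs destroy tank/data@old-snapshot

    # 4. Replicate the dataset (with its snapshots) to a larger pool, prune,
    #    then send it back once the original pool has been recreated
    zfs snapshot -r tank/data@migrate
    zfs send -R tank/data@migrate | zfs receive bigger/data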

Shorel
1 replies
7h29m

Modern defrag seems very cumbersome xD

p_l
0 replies
5h52m

Defragmentation and the ability to do it are not free.

You can have cheap defrag but comparatively brittle filesystems by making things modifiable in place.

You can have a filesystem that has as its primary value "never lose your data", but in exchange defragmentation is expensive.

kamray23
0 replies
10h22m

It's not really just keeping snapshots that is the issue, usually. It's just normal FS operation, meant to prevent data corruption if any of these actions is interrupted, as well as various space-saving measures. Some FSs link files together when saving mass data so that identical blocks between them are only stored once, which means any of those files can only be fully deleted when all of them are. Some FSs log actions onto disk before and after doing them so that they can be restarted if interrupted. Some FSs do genuinely keep files on disk if they're already referenced in a snapshot even if you delete them – this is one instance where a modal about the issue should probably pop up if disk space is low. And some OSes really really really want to move things to .Trash1000 or something else stupid instead of deleting them.

jrockway
1 replies
12h53m

I'm most surprised by the lack of testing. Macs tend to ship with much smaller SSDs than other computers because that's how Apple makes money ($600 for 1.5TB of flash vs. $100/2TB if you buy an NVMe SSD), so I'd expect that people run out of space pretty frequently.

callalex
0 replies
41m

And if you make the experience broken and frustrating people will throw the whole computer away and buy a new one since the storage can’t be upgraded.

aidenn0
17 replies
15h10m

Some filesystems may require allocating metadata to delete a file. AFAIK it's a non-issue with traditional Berkeley-style filesystems, since metadata and data come from separate pools. Notably, ZFS has this problem.

em-bee
16 replies
13h42m

btrfs has this problem too it seems. but there it is usually easy to add a usb stick to extend the filesystem and fix the problem.

i find it really frustrating though. why not just reserve some space?

aidenn0
14 replies
12h47m

Yeah, with ZFS some will make an unused dataset with a small reservation (say 1G) that you can then shrink to delete files if the disk is full.
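
A sketch of that trick, with placeholder pool/dataset names:

    # Set aside ~1G that ordinary writes can't consume
    zfs create -o reservation=1G tank/reserved

    # When the pool is "full", drop (or shrink) the reservation so deletes can proceed
    zfs set reservation=none tank/reserved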

p_l
7 replies
10h2m

The recommended solution is to apply a quota on the top-level dataset, but that's mainly for preventing fragmentation or runaway writes.

bbarnett
6 replies
5h58m

I think the solution is to not use a filesystem that is broken in this way.

p_l
5 replies
5h55m

Note that ZFS explicitly has safeguards against total failure. No filesystem will work well in a near-full state when it comes to fragmentation.

bbarnett
4 replies
5h9m

This is a whataboutism. Being unable to use the filesystem, due to space full, without arcane knowledge, is not the same as "not working well".

This is a broken implementation.

aidenn0
3 replies
4h59m

You're misunderstanding. See the sibling thread where p_l says that this problem has been resolved, and any further occurrence would be treated as a bug. Setting the quota is only done now to reduce fragmentation (ZFS's fragmentation avoidance requires sufficient free space to be effective).

bbarnett
2 replies
4h45m

No, I'm not. They said the "recommended solution" for this issue is to use a quota.

They also said it was mainly used for other issues, such as fragmentation. In other words, this was stated as a fix for the file delete issue.

How does this invalidate my comment, that this was a broken implementation?

It doesn't matter if it will be fixed in the future, or was just fixed.

aidenn0
1 replies
2h11m

According to rincebrain, the "disk too full to delete files" was fixed "shortly after the fork" which means "shortly after 2012." My information was quite out of date.

bbarnett
0 replies
2h8m

Well I'm glad they fixed a bug, which made the filesystem unusable. Good on them, and thank you for clarification.

rincebrain
5 replies
10h18m

This hasn't been a problem you should be able to hit in ZFS in a long time.

It reserves a percent of your pool's total space precisely to avoid having 0 actual free space and only allows using space from that amount if the operation is a net gain on free space.

p_l
1 replies
10h3m

Yeah, a situation where your pool gets suspended due to no space and you can't delete files is considered a bug by OpenZFS.

rincebrain
0 replies
2h33m

I mean, the pool should never have gotten suspended by that, even before OpenZFS was forked; just ENOSPC on rm.

aidenn0
1 replies
5h1m

Oh, that's good to know. I hit it in the past, but it was long enough ago that ZFS still had format versions.

rincebrain
0 replies
2h34m

Yeah, the whole dance around slop space, if I remember my archaeology, went in shortly after the fork.

steve_rambo
0 replies
11h25m

btrfs does reserve some space for exactly this issue, although it might not always be enough.

https://btrfs.readthedocs.io/en/latest/btrfs-filesystem.html

GlobalReserve is an artificial and internal emergency space. It is used e.g. when the filesystem is full. Its total size is dynamic based on the filesystem size, usually not larger than 512MiB, used may fluctuate.
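
To see how much of that reserve is set aside (and currently used) on a given filesystem, something like this works (mount point is a placeholder):

    # The GlobalReserve line shows the dynamically sized emergency space
    btrfs filesystem df /mnt/data
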
deeth_starr_v
9 replies
13h38m

Anyone know why you can’t use the first usb-c port on a Mac laptop to make the bootable os?

Tsiklon
7 replies
12h28m

The ports mentioned expose the serial interface that can be used to restore/revive the machine in DFU mode

https://support.apple.com/en-us/108900

That said, no idea why they can’t be used in this case

joecool1029
5 replies
12h20m

That said, no idea why they can’t be used in this case

My intuitive guess here is that it's about how the ports are connected to the T2 security chip. One port is, as you said, a console port that allows access to perform commands to flash/recover/re-provision the T2 chip. Same as an OOB serial port on networking equipment.

For the rest of the ports, the T2 chip has read/write access to the devices connected to them. Since this is an OS drive, I'm guessing it needs to be encrypted, and the T2 chip handles this function.

amelius
4 replies
9h51m

That doesn't make it technically impossible to implement booting from that port.

p_l
2 replies
9h1m

The firmware is based on the iPhone boot process, from my understanding, and simply does not have space in ROM to implement boot from external storage.

The rest of the code necessary to boot from external sources is located on the main flash.

amelius
1 replies
8h39m

Yes, but the decision to use this firmware was made by Apple.

This is like saying my software did not work because it was based on an incompatible version of some library. Maybe so, but that is a bad excuse. Implementing systems is hard, and like the rest of us, Apple should not get away with bad excuses. And this is even more true because they control more of the stack.

pjerem
0 replies
2h25m

OTOH, the current implementation works and is sufficient, so Apple could easily decide that it's not worth modifying firmware that already works to solve a nonexistent issue.

PeterisP
0 replies
5h8m

Sure, but it also doesn't make it necessary or useful to implement booting from that port - booting from a port IMHO is not a feature that Apple wants to offer to its target audience at all, so it's sufficient if some repair technician can do that according to a manual which says which port to use in which scenario.

nottorp
0 replies
11h56m

Mac laptop computer: Use any USB-C port except the leftmost USB-C port when facing the ports on the left side of the Mac.

Also, on my MBP at least, the mentioned port is the one closest to the MagSafe connector and may have funny electrical connections to it.

plussed_reader
0 replies
13h33m

What if it's through a USB-C adapter to a USB-A thumb stick?

timcederman
1 replies
15h18m

Or boot it into Target Disk Mode using another machine.

olliej
0 replies
12h10m

you can't boot the arm Macs into target disk mode, you can only boot to the recovery os and share the drive - it shows up as a network share iirc. I was super annoyed by this a few weeks ago because you can, for example, use spotlight to search for "target disk mode" and it will show up, and it looks like it will take you to the reboot-into-target-disk-mode option, but once you're there it's just the standard "choose a boot drive" selector.

mrb
1 replies
6h57m

The author tried essentially the same thing as what you suggest. He booted into recoveryOS (a separate partition) and then tried, from there, to delete files from the main system partition. But rm failed with the same error, "No space left on device". So, as others have suggested, truncating a file might have worked: "echo -n > file".

davorak
0 replies
3h57m

The next step I have used and seen recommended after recoveryOS is single user mode, which is what I think I used to solve the same issue on an old mac. I vaguely remember another reason I used single user mode where recovery mode failed but I do not remember any details.

My bet is that you can get nearly the same functionality with single user mode vs booting from external media, but I only have a vague understanding of the limitations of all three modes from 3-5 uses via tutorials.

Macha
1 replies
6h19m

Was surprised to learn that with Apple silicon-based Macs, not all ports are equal when it comes to external booting

iirc, not all ports were equal when it came to charging with the m1 macs, so this is actually not so surprising.

voidbert
0 replies
6h3m

But charging through many ports requires extra circuitry to support more power on every port, while booting from multiple ports just requires the boot sequence firmware to talk to more than one USB controller (like PC motherboards do, for example)

klausa
0 replies
14h29m

Why do you think that would work, if using recoveryOS or starting Mac Share Disk / Target Disk mode didn't?

appplication
0 replies
13h19m

This is the kind of comment someone is going to be very happy to read in 8 years when they’re looking for answers for their (then) ancient Mac.

voidwtf
13 replies
16h26m

It seems like Time Machine has been steadily declining. I'm not sure why there is no impetus to get it reliable and functioning well. Between sparse bundles becoming corrupt (forcing a new backup from scratch) and other failing functionality, I haven't felt like Time Machine is worth setting up anymore. This is in stark contrast to the iOS/iPadOS backups, which have worked every time.

ksec
4 replies
16h3m

It seems like Time Machine has been steadily declining.

Because they don't sell Time Capsule anymore. And they want you to back up everything to iCloud to grow their Services Revenue.

philistine
1 replies
15h18m

Extrapolate it one more step: Apple is clearly working on an iCloud backup for Macs, cause then that's more services revenue. While they're doing that, why would they fix bugs in Time Machine. People can't surely be using this old thing while we're working on the spiffy new thing!

firecall
0 replies
14h47m

The problem though is that it is likely to be complete environment backup and restore, like iOS.

But we shall see!

Hopefully they will provide the ability to backup and restore file versions!

firecall
1 replies
14h49m

Totally.

But iCloud isn’t a backup.

It's sync.

And it will happily sync corrupt files and does not provide any versioning.

The best Time Machine is an SSD connected locally to the Mac.

The second best is an SSD on a Mac set up with Time Machine Server.

Then you are lucky if backups continue to work. And even luckier if you can sensibly restore anything via the hellscape that is the interstellar wormhole travel interface! ;-)

BackBlaze is very reliable at least! Not cheap with a house full of computers to backup though :-/

realusername
0 replies
11h42m

And it will happily sync corrupt files and does not provide any versioning.

It does not even provide a basic progress bar when used on the phone.

ryukoposting
2 replies
13h49m

As a non-Mac user, this sounds like a catastrophic and inexcusable bug the likes of which would inspire a dogpile of hatred against desktop operating systems with penguin mascots and/or headquarters in Washington.

ipv6ipv4
0 replies
2h33m

Time Machine works fine. Better than anything available on Windows. I’ve used it for more than a decade and a half, with multiple restores, across multiple machines.

My current desktop Mac environment is a direct descendant of my original Mac from 2004 thanks, largely, to Time Machine.

folbec
0 replies
9h56m

That's the magic of genius marketing

aequitas
1 replies
11h3m

I don't share this experience. I've been running Time Machine for years now on a Samba share for multiple Macs, and if anything I've only seen an improvement. Years back I would regularly get a corrupted Time Machine sparse bundle that had to be recreated (or I would restore a previous ZFS snapshot and it would continue off of that), but it also ran over AFP back then I think, not SMB. Lately I've not had any of these issues on any of the machines. I do have specific flags enabled in the smb.conf file that are recommended for Time Machine backups.
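
For reference, the settings usually recommended for a Time Machine share are roughly the vfs_fruit ones below (share name and path are placeholders; the exact flags the parent comment uses aren't specified):

    [timemachine]
        path = /srv/timemachine
        vfs objects = catia fruit streams_xattr
        fruit:time machine = yes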

andruby
0 replies
8h30m

Have you restored from a backup during that time? I have a similar setup (Samba share on a ZFS NAS) and was appalled at how long it took.

Both machines on wired gigabit ethernet, yet the restore took more than 24 hours. And that was for just a 1TB disk.

wlesieutre
0 replies
16h18m

I don’t know how many times I’ve had Time Machine decide it didn’t want to work anymore and I had to wipe the backups and start fresh to make it work again, but it’s a much larger number than it should be.

steve1977
0 replies
13h28m

The whole of desktop macOS has been steadily declining, so I'm not surprised by this story in the slightest.

But hey, we get new emojis and moving desktop wallpapers…

fmajid
0 replies
11h26m

Mac OS quality control has been declining since they fired Scott Forstall, and it wasn’t amazing under his tenure to begin with.

sneak
13 replies
16h16m

Time Machine has been consistently unreliable the entirety of the time since it launched well over a decade ago. It should be common knowledge that it sucks.

Use Backblaze if you don’t care about privacy, rsync+ssh to a selfhosted zfs box if you do.

jen729w
11 replies
16h10m

Backblaze supports encryption, as does the excellent Arq, which is a better solution than rolling your own rsync+ssh.

lh7777
10 replies
15h22m

Backblaze Personal does support encryption, but it's always been incomplete. If you supply your own encryption key, it's true that Backblaze can't read your data at rest. But to restore files, you have to send your key to Backblaze's server, which will then decrypt the data so that you can download it. They say that they never store the key and promptly delete the unencrypted files from the server, but to me this is still an unnecessary risk. There's no reason why they couldn't handle decryption locally on the client device, but they justify on-server decryption in the name of convenience -- you can restore files via the web without downloading an app. If you're concerned about this, the solution is to use B2 with a 3rd party app like Arq.

philistine
8 replies
15h11m

I actually use Arq to send my Time Machine backups and the rest of my NAS to S3 Glacier, in case the house burns down or the drives fail (whichever comes first). It works great and is very cheap!

kstrauser
3 replies
13h25m

Caution: restoring from Glacier can be hellishly expensive. Poke around at https://liangzan.net/aws-glacier-calculator/ and see what prices you see given your data size.

philistine
2 replies
13h14m

Expensive external backups if I ever need it is better than none at all. It's a bet, but hey so is insurance.

EDIT: I checked your tool. It's a 1000 bucks to restore 4 TB in 48 hours. If the house burns down, insurance will cover that. I guess now I know I gotta check those drives a bit more.

kstrauser
0 replies
12h56m

Ok, cool. As long as you know about it up front! I’ve heard nightmare stories of people being very surprised by their bill afterward.

Dylan16807
0 replies
10h42m

It's a 1000 bucks to restore 4 TB in 48 hours.

What? This tool is exceptionally out of date. Retrieval cost is $30/TB at the high end, and for glacier deep archive and a 48 hour window it only costs $2.50/TB. (Plus a few cents per thousand requests, so maybe don't use tiny objects.)

Glacier's percentage-rate-based retrieval pricing was only active from 2012-2016.

The bandwidth charge of $90/TB is still accurate. Though there are ways to reduce it.

sneak
1 replies
10h0m

Arq is closed source and proprietary and its cryptographic functioning and integrity cannot be easily audited or verified.

Why use closed source crypto for money when free software that can be reviewed is available gratis? There are much better options.

pitaj
1 replies
13h57m

I'm curious, can you share more details? For instance, which Glacier tier do you use?

philistine
0 replies
13h13m

The cheapest one. It would take either a long-ass time to restore or cost a lot of money, but I'm betting I'm not going to ever need it.

csnover
0 replies
13h29m

If you supply your own encryption key, it's true that Backblaze can't read your data at rest.

It’s worse than this. The private key for data decryption is sent to their server by the installer before you can even set a PEK. Then, setting the PEK sends the password to them too, since that’s where your private key is stored. So you have to take their word not just that they never store the key and promptly delete unencrypted files during restoration, but also that they destroy the unprotected private key and password when you set up PEK. It’s a terrible scheme that seems almost deliberately designed to lull people into a false sense of security.

abhinavk
0 replies
16h10m

You can use backup tools like restic/borg/kopia to encrypt/compress before uploading to Backblaze or any other cloud service.
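
For example, with restic against a B2 bucket (bucket and repository names are placeholders), data is encrypted client-side before it ever leaves the machine:

    # credentials go in the B2_ACCOUNT_ID / B2_ACCOUNT_KEY environment variables
    restic -r b2:my-bucket:repo init                # create the encrypted repository
    restic -r b2:my-bucket:repo backup ~/Documents  # back up a directory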

caseyy
8 replies
15h30m

Windows with NTFS will also break in mysterious ways when the system disk is full.

There was a time when OSs could deal with the system disk being full quite well. And not so long ago.

tredre3
6 replies
15h22m

I regularly run my Windows 10 NTFS drive down to 0 bytes free.

I'm always amazed at how the file system survives just fine, but the machine doesn't even crash!

I'm not sure where you got your experience from.

caseyy
3 replies
14h33m

My work demands that I generate large amounts of data and I don’t know how much I’ll have to generate up-front. So I run out of disk space a lot.

My experience is that Windows and many of its programs will become very unstable with 0 bytes free on the system drive. And about 3 times out of maybe 50, the system also became unbootable. I've learned to do whatever I can to free up space before restarting, for stability.

The last time I’d regularly run out of space on Win was around Windows 98 times. I never had a problem then. Now in Windows 11 times, it’s a real headache.

Not sure how you’re so lucky.

juitpykyk
1 replies
10h51m

Maybe you should use a secondary partition for work

caseyy
0 replies
7h53m

Hehe, I probably should.

Kwpolska
0 replies
12h2m

I manage to get to 0 or close to that sometimes, usually through uncontrolled pagefile expansion. Some apps may misbehave, but Explorer is stable enough to let me delete something.

comex
0 replies
14h28m

I’ve done the same on my Mac perhaps twice in the last few years. Like you, I encountered no crash or any other obvious consequences… I just deleted data and moved on. Though I didn’t try leaving it with zero bytes free for an extended amount of time, or rebooting. Who knows what would happen then.

Still, whatever this APFS bug is, the conditions to trigger it are more specific than just filling up the disk.

bloomingeek
0 replies
14h40m

I tried, accidentally, to overfill the HDD on a Windows Vista machine. Vista popped up a box telling me I couldn't do it. Unfortunately, in my panic, I didn't take a picture of the warning for posterity.

staticfloat
7 replies
15h3m

I ran into an issue like this in my first ever job! I accidentally filled up a cluster with junk files and the sysadmin started sending me emails saying I needed to fix it ASAP but rm wouldn’t work. He taught me that file truncation usually works when deletion doesn’t, so you can usually do “cat /dev/null > foo” when “rm foo” doesn’t work.

pram
0 replies
15h0m

You can actually just do >file

pdimitar
0 replies
8h19m

To me what works is `cat /dev/null >! filename`.

mjevans
0 replies
14h18m

In shell :>filepath often works...

However, sometimes filesystems can't do that. For those cases, hopefully the filesystem supports resize-grow and resize-shrink, and either accepts additional temporary storage or sits on top of an underlying system which can add/remove backing storage. You may also need to use custom commands to restore the filesystem's structure to one intended for a single block device (btrfs comes to mind here).
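
With btrfs, that dance looks roughly like this (device and mount point are placeholders):

    # Temporarily extend the full filesystem with a spare device, e.g. a USB stick
    btrfs device add /dev/sdX /mnt/data

    # ...delete files or snapshots as usual, then fold the spare device back out;
    # btrfs relocates anything it stored there before releasing the device
    btrfs device remove /dev/sdX /mnt/data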

jaimehrubiks
0 replies
15h1m

Great to know

deltarholamda
0 replies
2h5m

I accidentally filled a ZFS root SSD with a massive samba log file (samba log level set way high to debug a problem, and then forgot to reset it), and had to use truncate to get it back.

I knew that ZFS was better about this, but even so I still got that "oh... hell" sinking feeling when you really bork something.

JohnMakin
0 replies
11h56m

I was once in a situation years ago where a critical piece of infrastructure could brick itself irreparably with a deadlock unless it was always able to write to the filesystem, so I had a backup process just periodically send garbage directly to /dev/null, and as far as I know that dirty hack is still running years later.

/dev/null is magical and worth reading into

JdeBP
0 replies
14h44m

Although note that several comments here report situations where truncation doesn't work either. 21st century filesystem formats are a lot more complex than UFS, and with things like snapshotting and journalling there are new ways for a filesystem to deadlock itself.

userbinator
6 replies
15h58m

My best guess at what happened (based on a little knowledge of HFS+ disk structures, but not APFS) is that the journal file also filled up, and since deletion requires writing to it and possibly expanding it, you get into the unusual situation where deletion requires, at least temporarily, more space.

macOS continued to write files until there was just 41K free on the drive.

I've (accidentally) run both NTFS and FAT32 down to 0 bytes free, and it was always possible to delete something even in that situation.

Digging around in forums, I found that Sonoma has broken the SMB/Samba-based networking mount procedure for Time Machine restores, and no one had found a solution. This appears to still be the case in 14.4.

In my experience SMB became unreliable and just unacceptably buggy many years ago, starting around the 10.12-10.13 timeframe; and now it looks like Apple doesn't care about whether it works at all anymore.

I hate to think what people without decades of Mac experience do when confronted with systemic, cascading failures like this when I felt helpless despite what I thought I knew and all the answers I searched for and found on forums.

I don't have "decades of Mac experience", but the first thing I'd try is a fsck --- odd not to see that mentioned here.

If I were asked to recover from this situation, and couldn't just copy the necessary contents of the disk to another one before formatting it and then copying back, I'd get the APFS documentation (https://developer.apple.com/support/downloads/Apple-File-Sys...) and figure out what to edit (with dd and a hex editor) to get some free space.

wazoox
0 replies
6h57m

Ah, Apple. SMB performance has gone from horribly slow a few years back to barely decent recently, but it is still way slower than NFS or (oh the irony) AppleShare on the exact same hardware.

A few years ago I tested throughput to a big NAS connected over 10GbE from a Hackintosh, using Blackmagic Disk Speed Test:

* running Windows, SMB achieves 900MB/s

* running MacOS, SMB achieves 200MB/s

* running MacOS, NFS and AFP both achieve 1000MB/s

Anything related to professional work is a sad joke in MacOS, alas.

(People keep repeating that AFP is dead, however it still works fine as a client on my Mac Pro -- and performs so much better than SMB that it's almost comical).

greenicon
0 replies
12h13m

For a networked Time Machine restore you can reinstall MacOS without restoring first and then use the migration utility to restore from a remote Time Machine. That seems to use a different smb binary which works. Still, I find it infuriating that restoring, one of the most important things you do on a machine, is broken and was not caught by QA.

chrisjj
0 replies
8h44m

get into the unusual situation where deletion requires, at least temporarily, more space.

s/unusual/usual/ surely.

begueradj
0 replies
12h12m

So far, you're the only one who provided a technical explanation for this.

arghwhat
0 replies
10h25m

That's for a journalling filesystem. For CoW filesystems, the issue is that any change to the filesystem is done by making a new file tree containing your change, and then updating the root to point to the new tree. Later, garbage collection finds files that are no longer part of an active tree and returns their storage to the pool.

Changes are usually batched to keep the number of tree changes manageable. A bonus of this design is that a filesystem snapshot is just another reference to a particular tree.

This requires space, but CoW filesystems also usually reserve an amount of emergency storage for this reason.

qiqitori
5 replies
14h4m

Maybe flamebait, but here is my honest opinion, which I believe is aligned with the hacker ethos: maybe if you were using an open-source operating system you could, with a little experience, write a simple tool that deletes a couple files without allocating new metadata. (Or more likely, somebody else would have been there before you and you could just use their tool.)

yjftsjthsd-h
1 replies
12h40m

Does such a tool exist for BTRFS or ZFS?

bjoli
0 replies
9h54m

I was in this situation with BTRFS, but it was simple to extend the partition with an old 2gb usb stick and the problem resolved itself.

Kwpolska
1 replies
12h6m

Most users on open-source operating systems can't code either. And even if they could, this still requires knowledge or guesswork to find the trick for the deletion. Some people suggest truncation, and that's possible with a shell, but what would you do if it failed as well?

qiqitori
0 replies
11h8m

I have an answer, but I'm not sure you'll find it particularly fulfilling because it is quite hypothetical.

The average user just needs to be able to ask the question in a decent place. E.g., Hacker News or a fitting Stack Exchange site. Some developers not afraid of touching the kernel will see the question (or one like it), and if no workaround (e.g. truncation) is found to be acceptable, someone may decide to look into the kernel source to see if it's feasible at all. They may find a lower level function that deletes without writing metadata. Or they may find the function in the filesystem driver's source code where the metadata is written first, and if that was successful, the actual data is written. In the easiest case, you could create a copy of the function with the calls swapped, and a live CD with the modified driver could be created. (Of course, this solution is quite unsafe, as writing the metadata could still fail for some other or related reason, so it's a bit of an emergency solution.)

There are two other filesystems that were mentioned in the discussion here, btrfs and ZFS.

ZFS solved the problem by reserving space, so creating such a tool isn't needed. (However, ZFS is not part of Linux, so I'm not too interested in digging into the details.)

btrfs users apparently accept this as a fact-of-life, but have what they consider decent-enough workarounds, see e.g. https://www.reddit.com/r/btrfs/comments/ibjrpm/can_i_somehow....

(I use neither ZFS nor btrfs; I prefer boring filesystems, thank you very much.)

Dylan16807
0 replies
11h3m

maybe if you were using an open-source operating system you could, with a little experience, write a simple tool that deletes a couple files without allocating new metadata

For a filesystem where this happens, it would not be simple and it would require a lot of experience to get right.

Or more likely, somebody else would have been there before you and you could just use their tool.

I don't think open-source makes such a tool much more likely to exist.

owyn
5 replies
16h5m

Oh! What I would do is take the disk out, mount it on another machine and delete files then put it back........ /s

I think the whole stack of operating systems and tools that assume that this is possible get in trouble when it's not possible. I don't want my computer to become a locked down sandbox but it seems like this is where we are headed.

userbinator
2 replies
15h52m

If you mounted the disk on another machine, I suspect in this situation the filesystem was so full that you'd get the same error when trying to delete, like what the author here encountered when booting into Recovery OS.

The only solution is to copy all the files you want to keep to another (not full) disk, then reformat and copy them back, or if you don't have another disk to copy to, somehow edit the disk directly to "manually" free some space.

owyn
1 replies
15h46m

I was just joking about how Macbook drives are not replaceable, but some replies on this thread have potential real solutions, which is awesome. I forgot you could attach two Macs during the boot process, which is usually used when you are setting up a new computer and copying files but could be used to fix the broken one too. That's another possible fix: buy a new one with twice the disk space and transfer files on setup?

Gigachad
0 replies
15h38m

MacBooks have a recovery mode you can boot in to which can download a fresh macOS install off the internet to reinstall the whole system.

worddepress
1 replies
13h25m

On Windows it is hellish trying to copy across files like this due to weird permissions issues that crop up. A weird trick that has worked for me is .zip the files you want to copy, copy across the zip, then unzip that.

djmips
0 replies
9h3m

That's an oldie but a goodie!

willyt
4 replies
9h15m

They mentioned that the problem occurred while Steam was downloading. I wonder if, because Steam is ultra cross-platform with a bare-minimum OS-specific UI, it is using something quite low-level to write data to disk? Maybe NSFile does some checks that posix calls can't do while remaining compliant with the spec, or something weird like that. That would explain why people using various low-level 'pro level' cross-platform tools like databases would have issues, but the typical GarageBand user is usually ok. If you're doing database writes you probably don't want the overhead of these checks making your filesystem performance look bad, so it's left to the software to check that it's not going to fill up the filesystem. Stab-in-the-dark hypothesis. I would hope that however we are writing data to the filesystem, it shouldn't be able to lock it up like this. I'd be curious for someone with technical knowledge of this to chime in.

sspiff
2 replies
8h58m

Still, you should not be able to brick your device into a state like this with legitimate, normal, non elevated operations.

If the POSIX API does have some limitation which would prevent this error from occurring with higher level APIs (which I sincerely doubt), macOS should simply start failing with errno = ENOSPC earlier for POSIX operations.

No other system behaves like this, and we wouldn't be making these excuses if Microsoft messed up something this basic.

willyt
1 replies
8h56m

I agree, though others are saying that BTRFS and ZFS can also get into this state.

sspiff
0 replies
7h59m

I'd have to try, but have never encountered something like it on btrfs (though to be fair I've had many other issues and bugs with it over the years!)

I understand the logic, but typically I've seen filesystem implementations block writes once metadata volumes become close enough to full. Also, and I don't know if this is a thing on modern filesystems, you used to be able to reserve free space for root user only, precisely for recovering from issues like this in the past.

themoonisachees
0 replies
8h28m

Steam simply stores games in its install folder, and while downloading the (compressed) game files it keeps them fragmented in a separate directory. As far as I can tell it doesn't employ special low-level APIs, because on lower-power hardware (and even sometimes on gaming gear) the bottleneck is often the decompression step. This is what Steam is doing when you are downloading a game and it stops using the network but the disk usage keeps going and the processor gets pinned at 100%.

I also heard of this happening to regular users downloading stuff with Safari. It is simply terrible design on Apple's part that you can kill a macOS install by filling it up so much that you can no longer delete files.

jen729w
4 replies
16h11m

I had this issue in October 2018 as documented in this Stack Overflow question, whose text I’ll paste below.

I was lucky: I had an additional APFS partition that I could remove, thus freeing up disk space. Took me a while to figure out, during which time I was in a proper panic.

---

https://apple.stackexchange.com/questions/338721/disk-full-t...

I’m in a pickle here. macOS Mojave, just updated the other day. I managed to fill my disk up while creating a .dmg, and the system froze. I rebooted. Kernel panic.

Boot to Recovery mode. Mount the disk. Open Terminal.

–bash–3.2# rm /path/to/large/file

rm: /path/to/large/file: No space left on device

Essentially the same issue as this Unix thread from ‘08! https://www.unix.com/linux/69889-unable-remove-file-using-rm...

I’ve tried echo x > /path/to/large/file, no good.

It’s borked. Does anyone have any suggestions that aren’t “wipe the drive and restore from your backup”?

bombcar
1 replies
15h31m

Sounds like creating a sliver of an extra partition of a gig or so might be valuable insurance.

Kind of like the old Unix file systems that would reserve 5% for root.

pixelfarmer
0 replies
9h54m

Since the SSD days (I started like 15 years ago with that) I keep a bit of space empty for two reasons: Emergency situations where you may need some extra space and to give SSDs a bit more room to relocate blocks to (they have a certain amount internally reserved already).

djmips
0 replies
9h11m

I see in that Stack Exchange post there's since been another potential solution, where you delete your virtual memory partition; if it's large enough, that can give you back enough space to allow deleting of files to happen.

bouke
0 replies
13h13m

With APFS this is not as straightforward though, as containers are only allocated when written to.

whartung
3 replies
15h48m

I had this happen to me, though I can’t recall how I fixed it.

In general I've had good success with Time Machine. I, too, have lost TM volumes. I just erased them and started again. Annoying to be sure, but 99.99% of the time I don't need a year's worth of backups.

The author mentioned copying the Time Machine drive. I have never been able to successfully do that. Last time I tried I quit after 3 days. As I understand it, only Finder can copy a Time Machine drive. Terrible experience.

That said, I’d rather cope with TM. It’s saved me more than it’s hurt me, and even an idiot like me can get it to work.

I did have my machine just complain about one of my partitions being irreparable, but it mounted read only so I was able to copy it, and am currently copying it back.

I don’t know if this is random bit rot, or if something is going wrong with the drive. That would be Bad, it’s a 3TB spinning drive. Backed up with BackBlaze (knock on wood), but I’d rather not have to go through the recovery process if I could avoid it.

Problem is I don’t know how to prevent it. It’s been suggested that SSDs are potentially less susceptible to bit rot, so maybe switching to one of those is a wise plan. But I don’t know.

pronoiac
1 replies
14h23m

I have notes somewhere on roundtripping Time Machine backups between USB drives and network shares. (It's non-trivial, and it's not supported, but it worked.) It was with HFS+ backups, and there were various bits that were "Here Be Dragons", so I never posted them.

desro
0 replies
13h31m

would be interested in a write-up of this if you ever get a chance

skhr0680
0 replies
14h51m

The author mentioned copying the Time Machine drive. I have never been able to successfully do that. Last time I tried I quit after 3 days. As I understand it, only Finder can copy a Time Machine drive. Terrible experience.

rsync -av $SOURCE $DEST has never let me down. Copy or delete on Time Machine files using Finder never worked for me.

Problem is I don’t know how to prevent it. It’s been suggested that SSDs are potentially less susceptible to bit rot, so maybe switching to one of those is a wise plan. But I don’t know.

OpenZFS with two drives should protect you from bit rot. ZFS almost became the Mac file system in Snow Leopard.

thenickdude
3 replies
13h42m

By contrast, ZFS has "slop space" to avoid this very problem (wedging the filesystem by running out of space during a large operation). By default it reserves 3.2% of your volume's space for this, up to 128GB.

So by adjusting the Linux kernel tunable "spa_slop_shift" to shrink the slop space, you can regain up to 128GB of bonus space to successfully complete your file deletion operations:

https://openzfs.github.io/openzfs-docs/Performance%20and%20T...
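
On Linux the tunable can be changed at runtime; as a temporary emergency measure only, something like:

    # Default is 5 (1/32 of the pool); a larger shift means a smaller slop reserve
    echo 7 > /sys/module/zfs/parameters/spa_slop_shift

    # ...delete what you need, then restore the default
    echo 5 > /sys/module/zfs/parameters/spa_slop_shift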

nobody9999
1 replies
12h40m

By contrast, ZFS has "slop space" to avoid this very problem

As does ext4 (although they call the space "reserved blocks"). 'man tune2fs' for details. As well as most other modern (and not so modern[0]) filesystems.

[0] As I recall, the same was true for SunOS'[1] UFS back in the 1980s.

[1] https://en.wikipedia.org/wiki/SunOS

_flux
0 replies
11h44m

In ext[234]fs the reserved blocks are something else though: they are reserved for a specific user, by default root. So if normal users fill up the filesystem, the root user still has some space to write. Sort of a simple quota system.

I believe this problem is only relevant to CoW filesystems. With ext[234]fs you can set the reserved blocks to 0, fill the fs, and always remove files to fix the situation.
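
For reference, the ext4 reserved percentage can be inspected and changed with tune2fs (device name is a placeholder):

    # Show the current reserved block count
    tune2fs -l /dev/sda2 | grep -i reserved

    # Reserve 5% for root (the traditional default); -m 0 disables the reserve
    tune2fs -m 5 /dev/sda2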

bell-cot
0 replies
9h18m

Yes - and reserving a percentage of disk space (for this reason) was a routine feature of "real" filesystems decades before ZFS (or Linux) even existed.

It's kinda like how almost any 1980's MS-DOS shareware terminal program was really good at downloading files over a limited-bandwidth connection, but current versions of MS Windows are utter crap at that should-be-trivial task.

desro
3 replies
15h10m

Impressive. I've never dealt with a situation where even `rm` failed, but I have had the displeasure of using and managing modern Macs with 256 GB (or less) of internal storage. I like to keep a "spaceholder" file of around 16GB so when things inevitably fill up and prevent an update or something else, I can nuke the placeholder without having to surgically prune things with `ncdu`
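
Creating such a placeholder is quick; a sketch on macOS (path and size are arbitrary):

    # Write 16 GB of zeros to hold the space in reserve
    dd if=/dev/zero of=~/spaceholder bs=1m count=16384

    # When the disk inevitably fills up, reclaim the space instantly
    rm ~/spaceholder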

LeifCarrotson
1 replies
14h42m

I find that one of the main benefits of a space holding file is that when it's needed, freeing up that space provides a window of time where you can implement a long-term solution (like buying a new drive with quadruple the storage space of the original for the cost of an hour of that employee/machine's time).

derekp7
0 replies
14h27m

Had an ex Navy Submariner for a manager many years ago. On some problem systems, he created several such files. He called them ballast files.

armchairhacker
2 replies
16h0m

btrfs used to have this issue, the problem being that the filesystem has to append metadata to do any operation including (ironically) deletion: https://serverfault.com/a/478742.

AFAIK it's fixed now, because btrfs reserves some space and reports "disk full" before it's reached. macOS probably does the same (I'd hope), but it seems in this case the boundary wasn't enforced properly and the background snapshot caused it to write beyond.

jclulow
0 replies
15h3m

Yes, ZFS has the same fundamental issue that all COW file systems have here. We have a reserved pool of space that you can't use except for operations like removing stuff.

The new problem with that reserved pool mechanism is that in 2024 it's probably way too big, because it's essentially a small but fixed percentage of the storage size. Don't let people use thresholds of total size without some kind of absolute cap!

Dalewyn
2 replies
16h19m

What would the reasoning be for the file system failing this hard? Filling a partition to 99.999999% capacity always produces a nonsensical situation, but it's usually still recoverable without resorting to DBANing it first.

opello
0 replies
15h49m

I'm so curious about this too. I've done this with ext2 and ext3 in the past and truncating a large file solved the problem if rm wouldn't. It's been long enough I don't remember so specifically though, but certainly wasn't "dd to larger disk and grow the partition" to the rescue.

If it was a journal issue would something akin to using an initramfs (or live environment) and mounting with data=writeback enable removing files? Or maybe APFS doesn't support that?

epmos
0 replies
16h7m

I do not know the details of Apple's file system, but I wouldn't be surprised if it needs to allocate space for the log (journal) and can't do so.

That isn't reasonable in the sense of "this is what the filesystem should do in this situation" but if the log and user data are allocated from the same pool it is quite possible to exhaust both.

wyldfire
1 replies
14h59m

If you can truncate() an existing file (via 'echo > big_file.img' or similar), I would hope the filesystem could deallocate the relevant extents without requiring more space. Seems a bit like a filesystem defect not to reserve enough space to recover from this condition with unlink().

anticensor
0 replies
13h20m

you need delete_and_dealloc(), not unlink()

supermatt
1 replies
9h9m

Sounds like there was no room to write to the journal (journaling is enabled by default on HFS+). Disabling the journaling for that volume (which it doesn’t appear you tried?) will have likely allowed you to perform your deletes.

diskutil disableJournal "/Volumes/Macintosh HD" (or whatever the volume is)

djxfade
0 replies
7h41m

New macOS versions use APFS, not HFS+. As far as I'm aware, you can't disable journaling on APFS.

quechimba
1 replies
14h42m

I had an external hard drive that I overfilled by accident while making a manual backup of media files, and after that I couldn't even mount the APFS volume. Apparently it's something that can happen.

In the end I was able to mount and rescue the data using https://github.com/libyal/libfsapfs

I followed this guide: https://matt.sh/apfs-object-map-free-recovery

djmips
0 replies
9h5m

Hey thanks for the knowledge!

protoman3000
1 replies
9h46m

I found that Sonoma has broken...

All too familiar. I have two Macs. I upgraded one of them to Sonoma and ever since then it has been nothing but headache and disappointment: the upgrade itself failed (meaning I had to completely wipe the disk and install Sonoma from scratch; luckily I still had my data), Handoff has problems, the firewall seems to not work, Excel is very slow, etc.

I don't recommend using Sonoma.

andrelaszlo
0 replies
7h41m

It's terrible :(

Bluetooth audio was a joke even before the update, and now it's almost unusable.

mproud
1 replies
15h7m

Safe Boot is your magical way to have the computer delete purgeable and temporary files, like boot caches. Hold shift down and once it gets to the login window, restart again.

Otherwise, go to Recovery mode, mount the disk in Disk Utility, and then open Terminal and rm some shit.

klausa
0 replies
14h27m

The author mentions trying this in recoveryOS to no success.

entropicgravity
1 replies
13h12m

I ran into a similar situation not long ago on the system partition of a linux installation. The partition was too small to begin with and as new updates piled up there was almost no space left to start deleting stuff. It took me about half an hour to find a subdirectory with a tiny bit of stuff that could be deleted. It was like being in a room so plugged up with junk that you couldn't open the (inward swinging) door to let yourself out.

From the tiny beginning I started being able to delete bigger and bigger spaces until finally it was clear and then of course I resized the partition so that wouldn't happen again. The End.

dataflow
0 replies
13h2m

Confused, why didn't you just expand the partition to begin with?

And I feel like that ought to be the lesson for power users: always leave a bit of slack space after your partition.

Waterluvian
1 replies
7h12m

I’m most surprised that Steam has apparently done this. It never lets me install anything unless I have the space, and it blocks off the space proactively.

deathanatos
0 replies
2h14m

I've seen it both complain that I didn't have sufficient disk for an install, and then, after I've made room on the disk, have the same installation fail due to a lack of disk space.

However they're doing it, their disk space calculations are either wrong or estimates.

TazeTSchnitzel
1 replies
10h1m

I've had a similar experience on my iPhone. The disk became so full that deleting things was seemingly no longer actually doing anything. Rebooting, the phone couldn't be logged into. Rebooting again, it boot-looped. Rebooting once more, it booted into an inconsistent state where app icons still existed on the home screen, but the actual app was missing, so the icon was blank and the app could not be launched. I became concerned about data integrity and ultimately restored from a backup.

I am certain this was a result of APFS being copy-on-write and supporting snapshotting. If no change is immediately permanent, but instead old versions of files stay around in a snapshot, then if you don't have enough space for more snapshot metadata you're in trouble. Maybe they skip the snapshot in low disk space situations, but they still have the copy-on-write metadata problem.

the-golden-one
0 replies
5h29m

I’ve had the exact same thing. Amazing to think in 2024, despite all the clever APFS volume management stuff, you can still put a ‘sealed’ device such as an iPhone into a state where it has to be recovered by DFU just by filling up the user’s storage.

In contrast, after accidentally maxing out the space on my windows 11 office laptop which has a single data and boot volume, I was still able to boot it and sort the issue out.

EVa5I7bHFq9mnYK
1 replies
8h58m

The elephant in the room is that it was too expensive to buy a Macbook with more storage for a child, and now it's impossible to upgrade. I usually just replace an SSD if I run over 50% utilization.

sspiff
0 replies
8h54m

SSDs become slower as they fill up, and wear leveling has less available space to choose from when writing data.

But filling up an SSD should not brick your volume like that. This is a filesystem implementation bug, not a user error.

voytec
0 replies
7h14m

I wonder if emptying some large-ish file via `:>file` would help.

tristor
0 replies
1h42m

A lot of hate in the comments here for Time Machine. Maybe some of that is justified, but I will say that I've been using Macs professionally since 2012 and while I do use other backup services also, there has never been a competing solution that allows the simplified versioning capabilities and performance of a Time Machine backup. On every single one of my Macs I keep a USB3/USB4 SSD connected just for Time Machine as a target, because it's so useful compared to things like Code42, BackBlaze, Carbonite, SpiderOak, et al.

slillibri
0 replies
2h6m

It seems odd that he acknowledges the existence of Time Machine local snapshots but doesn't mention deleting the local snapshots manually. Using `tmutil listlocalsnapshots` and `tmutil deletelocalsnapshots` will actually free up space. I had this experience recently when my local storage was at 99% and simple `rm` wasn't freeing up any space. Once I deleted the local snapshot, 100s of GB were freed.

As a side note, Time Machine has been pretty garbage lately. I back up to a local Synology NAS, and letting it run automatically will just spin with "connecting to backup disk" (or some such message), but running manually works just fine.
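
For anyone needing the exact invocations, they are roughly (the snapshot date stamp is an example):

    # List APFS local snapshots on the boot volume
    tmutil listlocalsnapshots /

    # Delete a specific snapshot, or thin them down to free a target number of bytes
    sudo tmutil deletelocalsnapshots 2024-03-01-123456
    sudo tmutil thinlocalsnapshots / 50000000000 4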

saurik
0 replies
13h50m

I know someone who ran out of disk space on her iPhone and then tried to fix some issues she didn't realize were being caused by it by upgrading the device to a new version of iOS; but then it failed the upgrade as it couldn't resize the disk but had already committed to doing such (I later laboriously figured this out by debugging the process using idevicerestore). I feel like this was a bug in its "how much space will I need to have to install successfully, let's verify I have enough before I begin" calculation, and maybe later versions of iOS have fixed the issue, but sadly they are all even larger and the fixed version would just prevent it from trying to upgrade in the first place, not fix it once it got to this point.

resource_waste
0 replies
9h8m

The sheer number of people who have run into this is a bit mind-boggling.

Clearly Apple knows about this; do they make no effort to fix it?

r618
0 replies
4h19m

i was testing a low-disk-space situation on a Mac Mini used as a small home network storage/server, maybe ~10 years ago (running maybe Yosemite at the time or something similar), out of curiosity - so an Intel mac and a pre-APFS system

not sure if this was an OS or filesystem feature, but it refused to allocate literally anything if free space reached ~100-500MB on the system partition, so it was being kept _always_ usable; even logging was denied IIRC

mihau
0 replies
7h51m

Be careful with iPhones/iOS too. Ignoring "not enough space" warnings will eventually put it into a boot loop.

There are threads on Reddit about this.

For me, flashing it through Finder with the official firmware was the only option. I lost some photos, the rest I was able to restore from iCloud backup.

magicalhippo
0 replies
11h0m

Reminds me of when a customer's database kept crashing with an error code indicating the disk was full. Except Windows Explorer showed the disk having hundreds of gigs free...

Took us a little while to figure out that the problem was the database file was so fragmented NTFS couldn't store more fragments for the file[1].

What had happened was they had been running the database in a VM with very low disk space for a long time, several times actually running out of space, before increasing the virtual disk and resizing the partition to match. Hence all the now-available disk space.

Just copying the main database file and deleting the old solved it.

[1]: https://superuser.com/questions/1315108/ntfs-limitations-max...

m348e912
0 replies
7h15m

I maxed out the inodes on a partition once. That was a head scratcher trying to figure out what was wrong. It was then I learned about the importance of block sizes when formatting a drive.
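
If it helps anyone, a rough sketch of how to confirm that condition and where the inode count gets fixed (Linux/ext4 assumed; the mount point, device name, and ratio below are just example values):

    # Plenty of blocks free but 100% IUse% means out of inodes, not space
    df -h /data
    df -i /data

    # On ext4 the inode count is fixed when the filesystem is created;
    # a smaller bytes-per-inode ratio yields more inodes for lots of tiny files
    mkfs.ext4 -i 8192 /dev/sdX1    # /dev/sdX1 is a placeholder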

kazinator
0 replies
9h15m

In the shell you can truncate files to zero size with a redirection:

  $ > large-file
It's possible that rm / unlink require working space to perform the transaction of removing the file from the directory, while truncation does not.

jerrysievert
0 replies
2h24m

this happened to me last Monday. it took over 2 days to recover.

it gobsmacked me that I needed space on a filesystem to make space on a filesystem.

jaredhallen
0 replies
13h8m

Depending on how dedicated one was, it might work to dd the full contents of the affected disk to a larger one. Then expand the partition, expand the filesystem, and assuming everything is hunky dory, free up some space and reverse all those steps.
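
Roughly, on macOS that might look like the sketch below (disk identifiers are examples, you'd boot from something other than the source disk, and getting the identifiers wrong is destructive):

    # Find the stuffed source disk and the larger destination
    diskutil list

    # Make sure nothing is writing to the source
    diskutil unmountDisk /dev/disk4

    # Raw-clone the whole disk (identifiers are examples)
    sudo dd if=/dev/rdisk4 of=/dev/rdisk5 bs=1m

    # Grow the cloned APFS container into the extra space (0 = use it all)
    sudo diskutil apfs resizeContainer disk5 0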

hobs
0 replies
13h22m

Reminds me of the time I worked at Apple support and got a phone call where a guy had wiped his system drive like 10x+ because he didn't want to lose his data but had run out of inodes, so literally nothing was able to create or do anything, and his entire OS was shitting the bed.

I showed him how he could type rm -rf and then paste the files into the terminal. It took him a good 15 minutes to get it all organized, but once he did he literally cried, because I got his computer back.

Those days were pretty brutal, but there were shining moments.

gadders
0 replies
5h14m

In the old days (2003) I put a web app live with debug level of logging turned on. It filled the disk up so much the Sys Admin had to get a bus to the data centre to reboot the server (Sun e450 from memory) hands on at the keyboard.

farkanoid
0 replies
9h19m

My wife's iPhone 12 Pro Max had the same problem.

She somehow managed to fill up the entire 512GB. Updates were unsuccessful, she couldn't make calls and wasn't able to delete anything to make room.

She couldn't even back up her phone through iTunes; the only option was to purchase an iCloud subscription and back up to the cloud in order to access her photos.

delta_p_delta_x
0 replies
11h55m

Huh, something like this happened to my mother's iPhone, too. She kept taking photos until the storage was filled to the brim.

One day she had discharged the battery completely, shutting down the phone; after recharging, she tried to restart it, only to be sent into a boot loop. There is no (official) way to resolve this except to repeatedly reboot and hope that, at least once, SpringBoard loads and you can immediately jump into Photos and start mass-deleting, or at least connect to a computer and transfer the media out of the phone.

dehrmann
0 replies
16h1m

I wonder if truncate would have worked.

dannyw
0 replies
16h19m

This happened to me too, without Time Machine. It’s devastating how rm on macOS can’t remove files when the disk is full.

atemerev
0 replies
12h19m

APFS is really sensitive to this situation, more than other file systems. I don’t remember how I managed to resolve this situation, but it involved booting into the single user mode and some magic not accessible to a regular Mac user.

artgship
0 replies
5h16m

I had a similar situation happen running multiple virtual machines whose storage was backed by .vdi files - the files grew too big and maxed out my storage. On the next boot Ubuntu would not start, with the same error telling me the disk was too full.

I had a recent backup, so I just reinstalled everything. From that moment on I always make sure there is some free space left on my disks.

amarshall
0 replies
15h24m

> macOS outstripped its ability to throttle filling storage

Does it actually do that? I.e. stop the user from writing new data when storage space is extremely low?

albertzeyer
0 replies
11h23m

When `rm file` gives you "No space left on device", a trick you can do:

    echo > file  # delete content of file first
    rm file  # now it should work

_wire_
0 replies
1h38m

You have to wonder why, in this age, a trillion-dollar computer company that helped usher in the PC revolution would allow its systems to become crippled by a full storage device, especially when these devices serve as indispensable tools for everyday life.

Which raises a question:

In the 70s / 80s there was consumer tort litigation for all kinds of "misrepresentation", which was fair, because there was a huge and growing business of dark patterns: false claims, faulty products, and schemes for exploiting unwitting and/or oppressed customers.

But it also became absurd, like class action suits against RCA for selling TVs with 25-inch screens where the picture only measured 24.5 inches due to the cabinet fascia overlapping the edge of the tube, or the raster not reaching all the way to the edge, etc.

Tort reform became a hot-button political issue because an enormous subsector of "consumer-rights" legal practice developed to milk payouts under consumer law. You still see this today for "personal injury."

So Apple has trillions in its pockets, and all its kit is sold with capacity specs.

Well, customers had better be able to get access to all that capacity, even if it kills their device.

I'm wondering if it's not a bug but a legal calculus, a la intro to Fight Club where Ed Norton is reviewing the share value implications of Ford paying off claims for Pintos that explode when rear-ended versus the cost of a recall?

_wire_
0 replies
11h57m

Ran into precisely this problem with a friend's Ventura Mini last year.

The solution was to boot into recovery and mount the Data partition using Disk Utility.

I don't recall where the Data partition gets mounted but I think it is:

"/System/Volumes/Macintosh HD - Data"

Or just Data, since Sonoma. It will be clear from Disk Utility.

Then close Disk Utility, go into Terminal, and run rm on a big unneeded file.

You can find one using:

    find <data-mnt> -size +100M

Using rm will fail.

Unmount the Data partition and run fsck on it.

This completes the deletion.
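
A condensed sketch of that sequence, with the mount point, file path, and device identifier all as placeholders (use whatever Disk Utility actually reports):

    # In Recovery's Terminal, after mounting the Data volume in Disk Utility
    cd "/System/Volumes/Macintosh HD - Data"   # mount point varies by macOS version
    find . -xdev -size +100M | head            # pick out some large, expendable files
    rm ./some/big/unneeded-file                # expected to fail with "No space left on device"

    # Unmount the Data volume again (Disk Utility or `diskutil unmount`), then:
    fsck_apfs -y /dev/disk3s5                  # example identifier; this finishes the deletion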

From there enough more space can be freed in recovery to have a healthy buffer, then reboot normally and finish cleaning.

It seems that when the volume gets so full that rm doesn't work anymore the filesystem also gets corrupted.

HTH, and I hope I didn't forget anything.

Zetobal
0 replies
5h45m

Fun times - old Android phones had the same bug.

Tade0
0 replies
9h20m

I had a similar issue with a PostgreSQL database - we were running an application that performed simulations and on that day we tried one that was an order of magnitude larger than the previously largest ones.

Suddenly queries started failing, so I investigated and even managed to remove some files on the affected volume, but it wasn't enough. Eventually, as the volume's storage class did not allow for expansion, I reached out to the support team to move the data to a larger one.

Shorel
0 replies
9h12m

This is a serious failure of the backup mechanism, for not being able to restore from a backup, and a serious failure of the operating system, for not being able to delete files on a mounted external disk.

Luckily, all important files were in the cloud, and you could write a blog post describing these monumental failures.

RamRodification
0 replies
11h38m

> ... Samba (the disk-sharing protocol), ...

Pet peeve (or, alternatively, correct me if I'm wrong): Samba is not a protocol. It's a software suite that implements the SMB (Server Message Block) protocol.

LoganDark
0 replies
8h42m

> they had filled macOS’s startup volume storage so full that the operating system was incapable of deleting files

Yeah, this has happened to me too. It became a lot more of an issue when my disk was (forcefully, non-consensually) converted to "APFS" which, along with breaking both alternative operating systems I had installed, also seemed to have a much greater chance of entirely bricking once I ran out of space.

I was consistently able to repair it by running a disk check in recovery mode and then deleting the files from the recovery terminal. However, that is only accessible via a reboot, which by its very nature loses all the work I had open, since it's impossible to save.

I never had this issue with HFS+. Everyone who says it can technically lock up is missing the fact that in practice it was still far more resilient than the newer and supposedly "better" APFS.

JdeBP
0 replies
14h19m

People find it a confusing idea to grasp that deleting things actually requires more space, either temporarily or permanently. Other comments here have gone into the details of why some modern filesystems with snapshotting and journalling and so forth actually end up needing to allocate from free space in order to delete stuff.

In a different field: in Wikipedia's first decade it often had to be explained to people that (at least from roughly 2004 onwards) deleting pages with the intention of saving space on the Wikipedia servers actually did the opposite, since deletion added records to the underlying database.

Related situations:

* In Rahul Dhesi's ZOO archive file format, deleting an archive entry just sets a flag on the entry's header record. ZOO also did VMS-like file versioning, where adding a new version of a file to an archive did not overwrite the old one.

* Back in the days of MS/DR/PC-DOS and FAT, with (sometimes) add-on undeletion utilities installed, deleting a file would need more space to store a new entry into the database that held the restore information for the undeletion utility.

* Back in the days of MS/DR/PC-DOS and FAT, some of the old disc compression utilities compressed metadata as well, leading to (rare but possible) situations where metadata changes could affect compressibility and actually increase the (from the outside point of view) volume size.

"I delete XYZ in order to free space." is a pervasive concept, but it isn't strictly a correct one.

JackYoustra
0 replies
11h35m

This happened to me! My solution was to go to an Apple store, buy one of their portable SSDs right there, cp everything onto the SSD (which didn't appear to use any additional space!), wipe the Mac, then rm some unneeded stuff on the SSD before cp-ing it back, and use their no-fee return to return the SSD. There were a few esoteric issues, but for the most part it worked.

FartyMcFarter
0 replies
8h3m

> Terminal: While Terminal would launch, using the standard Unix rm command resulted in a similar error: “No space left on device.”

This looks like a huge bug, and the elephant in the room.

What's the reason for `rm` requiring space left on the device?

1970-01-01
0 replies
15h17m

A viable solution would be to clone all data onto a bigger drive, then delete some files, wipe the disk, and finally copy everything back.