In wal2 mode, the system uses two wal files instead of one. The files are named "<database>-wal" and "<database>-wal2", where "<database>" is of course the name of the database file. When data is written to the database, the writer begins by appending the new data to the first wal file. Once the first wal file has grown large enough, writers switch to appending data to the second wal file. At this point the first wal file can be checkpointed (after which it can be overwritten). Then, once the second wal file has grown large enough and the first wal file has been checkpointed, writers switch back to the first wal file. And so on.
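For anyone who wants to try it: the wal2 doc says you switch with a journal_mode pragma, but it only takes effect with a SQLite library built from the wal2 branch; a stock build ignores the unrecognized mode and just reports whatever mode it's already in. A rough sketch (Python stdlib sqlite3, file name made up):

```python
import sqlite3

# Attempt to switch a database to wal2. This only works if the underlying
# SQLite library was built from the wal2 branch; a stock library treats the
# unknown mode as a query and simply returns the current journal mode.
con = sqlite3.connect("app.db")
mode = con.execute("PRAGMA journal_mode=wal2").fetchone()[0]
print(mode)  # "wal2" on a wal2-enabled build, otherwise the existing mode
con.close()
```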
Looks so logical that I don't understand why WAL mode was not implemented like this from the get-go. Probably an optimization wrongly dismissed as premature?
Anyways, looking forward to this mode reaching general availability.
I'm not much into the details, but I'm wondering: why not always write to "<database>-wal"? Once it is full, it gets renamed to "<database>-wal-0" and writing starts again into "<database>-wal"; once that is full again, it gets renamed to "<database>-wal-1", and so on?
When doing backups, managing file permissions, etc., it's really convenient to only have a fixed number of known files to worry about.
Switching to WAL already makes handling SQLite databases much less convenient, since you now have three files instead of one and need a filesystem snapshotting mechanism to reliably back them up (so you don't end up with one state in the database and another in the wal). Making the filenames and number of files less predictable would make that mode not worth it for many use cases.
Even the mechanism predating WALv1 (the rollback journal) uses two files. I don't think there is any way to use SQLite crash-safely with just a single file.
Besides, even if the database is a single file, it's still necessary to use filesystem snapshotting for live backup, or you're likely to get an inconsistent copy.
With smallish databases, just pipe the .dump command to gzip. No need to snapshot an entire filesystem just to back up a few hundred megabytes (what I would consider "smallish"). Backup and restore speeds aren't a significant concern at those sizes, anyway, and you get human-readable backups that can be inspected with standard Unix tools.
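For the record, the same thing without shelling out, using Python's stdlib sqlite3 (file names are placeholders; equivalent to `sqlite3 app.db .dump | gzip > backup.sql.gz`):

```python
import gzip
import sqlite3

# Dump the whole database as SQL text and compress it, mirroring the
# ".dump | gzip" pipeline from the sqlite3 CLI.
con = sqlite3.connect("app.db")
with gzip.open("backup.sql.gz", "wt", encoding="utf-8") as out:
    for statement in con.iterdump():  # schema + data as SQL statements
        out.write(statement + "\n")
con.close()
```

Restoring is just feeding that SQL back into a fresh database.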
We would back up double-digit-GB MySQL databases by piping mysqldump into gzip as well. Like you, I'm sure there's a size limit where that doesn't work; I never found out whether it was CPU- or memory-constrained, though.
Where I’m at, the DBAs shifted from pipes to temporary files. They said at the double-digit TB size, it would occasionally fail otherwise. This was with Postgres / barman / pigz, FWIW.
Never saw it myself, so I have no idea what the cause was.
Or use "VACUUM INTO" to get a backup that is even faster (and easier) to restore.
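Roughly like this (Python stdlib sqlite3; needs SQLite 3.27+, and the destination file must not already exist; the paths are made up):

```python
import sqlite3

# VACUUM INTO writes a compacted, transactionally consistent copy of the
# database to a new file; restoring is just swapping that file back in.
con = sqlite3.connect("app.db")
con.execute("VACUUM INTO 'app-backup.db'")  # fails if the target already exists
con.close()
```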
You can't use a filesystem snapshotting mechanism... Even when not using wal mode, you can't just cp the .db file somewhere (in some circumstances it works, but not in general).
Instead, SQLite provides an online backup API specifically for creating backups. It also takes WAL mode into account.
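In Python that API is exposed as Connection.backup(); a minimal sketch (file names are placeholders):

```python
import sqlite3

# Online backup: copies the database page by page while other connections
# can keep reading and writing, and it is WAL-aware, so the result is a
# consistent snapshot rather than a torn copy.
src = sqlite3.connect("app.db")
dst = sqlite3.connect("app-backup.db")
src.backup(dst)  # optionally: pages=..., progress=..., sleep=...
src.close()
dst.close()
```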
VACUUM INTO?
Pretty much every journaling mechanism requires snapshots for reliable backups. This includes either of SQLite's modes (undo journal and WAL).
Presumably because you don't want to keep a directory full of WALs growing up to infinity. This uses WAL_size*2 on disk and keeps it from growing beyond that.
This is to handle the case where transactions keep getting written to the wal file while it is being "checkpointed", preventing it from ever being deleted. You only need to alternate between two files for this: one being checkpointed, one being written to.
Probably because of this.
I don't get how wal2 fixes the long-running reader problem though. Maybe they were just referring to the former problem?
Because with a single wal file you can't checkpoint it during a read since said file may change out from under you.
With two wal files, the one you are actively appending to can be treated like in wal1 mode, but the one that isn't being appended to is immutable for the time being, just like the main database.
This means you can treat the actual db file and the immutable wal file together as one immutable database file with some underlying abstraction. That abstraction then allows you to perform the checkpoint operation since the abstraction can keep all that immutable data accessible in some form or another while reworking the data structure of the db file.
Then once the checkpoint is complete, the abstraction can clear the now-redundant immutable wal file, become transparent, and just present the underlying single DB file.
And now once the wal file you are actively appending to reaches a sufficient size, you "lock" that one, rendering it immutable, and switch over to appending to the cleared wal file you were previously checkpointing. With this you can now checkpoint again without blocking reads or writes.
so there might eventually be wal3 and wal4 files and so on?
Nope. If the wal you are checkpointing isn't done yet, you just wait to "lock" and switch files until that operation is complete.
Checkpointing can be considered "lock free" since the operation will always eventually complete. How long it takes depends on the wal file being checkpointed into the db, but it finishes in some finite amount of time.
Because you know that any given checkpointing operation has to eventually complete, you can simply keep appending to the current "append" wal file and then tackle those changes when you finish the current checkpoint op (at which point the wal file you just finished checkpointing is free to take the appends).
Wouldn't that require the reader to finish? Making the reader block the checkpoint from wrapping up?
Not if you abstract those reads. It's not like the application is directly opening a file handle. Instead, it goes through the SQLite library, which accesses not only the db and the wal file but also a non-persistent shm (shared memory) file used for maintaining a consistent view of the write-ahead log.
When a reader is reading, it puts a shared lock on the specific data it is reading in the shm file. The checkpointer respects that lock and may (potentially) continue working elsewhere in the db file, slowly updating the indices for checkpointed data in the shm file.
The checkpointer won't change the underlying data that the reader has a lock on but they may have created a new location for it. When the reader is finally done reading, the checkpointer can quickly grab an exclusive lock and update the header in the shm for that data to point to the new destination (and then release said lock). Since the checkpointer never holds this lock for very long, the reader can either block when trying to get a shared lock or it can retry the lock a few moments later. Now that the header in the shm only points to the new location, the checkpointer can safely do whatever it needs to with the data in the old location.
Rinse and repeat until the checkpointer has gotten through the entire write-ahead log. At that point there should be no remaining references in the shm to data within the wal file.
Now the wal file can be "unlocked" and if the other wal file is large enough, it can be locked, writes switch over to the other wal, and the cycle repeats anew.
Edit: Importantly, this requires that all readers be on a snapshot that includes at least one commit from the "new" wal file. So compared with wal1, wal2 allows you to have long-running readers as long as they start past the last commit of the "previous" wal file.
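You can see a little of this from the application side even with ordinary WAL (wal2 isn't in stock builds, so this is plain WAL; presumably the behaviour is analogous). A manual checkpoint reports whether readers or writers blocked it and how far it got:

```python
import sqlite3

# Ask for a FULL checkpoint and inspect the result. The returned row is
# (busy, wal_pages, checkpointed_pages): busy is 1 if concurrent readers
# or writers prevented the checkpoint from running to completion.
con = sqlite3.connect("app.db")
con.execute("PRAGMA journal_mode=WAL")
busy, wal_pages, checkpointed = con.execute(
    "PRAGMA wal_checkpoint(FULL)"
).fetchone()
print(busy, wal_pages, checkpointed)
con.close()
```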
Think of them more as “walblue” and “walgreen”
But the wal also keeps "history", so one reader transaction can see x0 from wal1 while another reader sees x1 from wal1. Wal1 cannot be merged into the main db, otherwise x0 is lost?
Yes that is correct. You can't checkpoint data after the oldest snapshot. But to my knowledge there's no way to force a read snapshot to continue to exist.
So while you can make multiple back-to-back reads that use the same snapshot, I believe there's no guarantee that the snapshot will still exist when the next read is opened unless the previous read is still open (if the snapshot is gone, an error is returned).
That seems to set an upper bound on how long a reader can block a checkpoint (unless the reader is intentionally staging reads to block the checkpoint).
Theoretically you could implement checkpoints that flatten everything between snapshots into single commits, but the complexity and overhead probably isn't worth it, given that the only real blocker for wal2 is an edge case that is nigh impossible to hit unless you intentionally try to trigger it.
Opening a BEGIN transaction forces reads from a particular snapshot in SQLite. There are some complications around lock upgrades in that case: BEGIN a, read; BEGIN b, read; then a write from transaction a fails because a read lock cannot be upgraded to a write lock. Other than that, in WAL mode, it works exactly as expected:
BEGIN a, read x0
BEGIN b, write x1, END b
BEGIN c, read will return x1
Back to transaction a, read again, still returns x0.
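Here's that sequence as a runnable sketch (Python stdlib sqlite3; the file name, table, and x0/x1 values are placeholders mirroring the example):

```python
import sqlite3

def connect(path):
    # Manual transaction control so we can issue BEGIN/COMMIT ourselves.
    con = sqlite3.connect(path, isolation_level=None)
    con.execute("PRAGMA journal_mode=WAL")
    return con

a, b, c = (connect("demo.db") for _ in range(3))
a.execute("CREATE TABLE IF NOT EXISTS t(x)")
a.execute("DELETE FROM t")
a.execute("INSERT INTO t VALUES ('x0')")

a.execute("BEGIN")
print(a.execute("SELECT x FROM t").fetchone())  # ('x0',) - a's snapshot starts at this first read

b.execute("BEGIN")
b.execute("UPDATE t SET x = 'x1'")
b.execute("COMMIT")

c.execute("BEGIN")
print(c.execute("SELECT x FROM t").fetchone())  # ('x1',) - c sees b's commit

print(a.execute("SELECT x FROM t").fetchone())  # ('x0',) - a still reads its old snapshot
a.execute("COMMIT")
c.execute("COMMIT")
```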
I suspect that having two WAL files means you can freely checkpoint the one that isn't currently being written to, but that's just a guess.
Now I can't help but wonder if there should be a `waln` mode where the WAL files round-robin instead of alternating between just two, potentially allowing for a much more intense write cadence.
That would make reading slower, since readers need to read from WAL as well.
There is a .shm (shared memory) file that all SQLite connections to the database use to determine which page ranges are in the WAL versus the main DB file. So that overhead already exists when WAL is enabled.
I'm not familiar with its inner workings, but if there were n WAL files, I'd expect the overhead to be a function of n.
I'll give this thread two or three more replies before it reimplements logrotate from first principles.
I'm just speculating here, but in a normal database you would have different processes writing the wal files to the database or archives.
You don't have that with SQLite, so I don't see an obvious advantage for this, unless they now spawn a process or thread to do this concurrently.
Edit: so I read the doc (shame on me) and it has nothing to do with speed. Its purpose is to prevent a wal file from growing too large.
I believe SQLite supports accessing a single database file from multiple processes or threads at once.
There can be many readers simultaneously, but only ever one writer at a time.
I guess it is to make it crash-safe: e.g., if the server dies in the middle of a write, the database would otherwise be corrupt after restart, but with the wal file it can recover.
It may simply have been a design oversight at the time. However, the page notes the following difference of WAL2 from WAL, which could be an issue in some cases:
https://en.wikipedia.org/wiki/Log-structured_merge-tree