B-Trees and Database Indexes

benwilber0
25 replies
18h17m

This is why you should _never_ make your UUID column the primary key.

For one, it's enormous. Now you have to copy that 128bit int to every side of the relation.

Two, in most cases it's completely random. Unless you had the forethought to use something other than UUIDv4 (gen_random_uuid). So now you just have a bunch of humongous random numbers clogging up your indexes and duplicated everywhere for no good reason.

Use regular bigserial (64bit) PKs for internal table relations and UUIDs (128bit) for application-level identifiers and natural keys. Your database will be very happy!
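
A minimal Postgres sketch of that layout (table and column names are made up; assumes Postgres 13+ for the built-in gen_random_uuid):

    CREATE TABLE users (
        id        bigserial PRIMARY KEY,                          -- internal 64-bit key used in joins/FKs
        public_id uuid NOT NULL DEFAULT gen_random_uuid() UNIQUE, -- application-level identifier
        email     text NOT NULL
    );

    CREATE TABLE orders (
        id        bigserial PRIMARY KEY,
        user_id   bigint NOT NULL REFERENCES users (id),          -- the FK stays on the small key
        public_id uuid NOT NULL DEFAULT gen_random_uuid() UNIQUE
    );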

srcreigh
6 replies
14h21m

Nothing wrong with a 128-bit primary key. Postgres is a horrible DB that can't order rows on disk, so it doesn't matter whether you use UUIDs or not; you're SOL.

In fact, you could use a 512-bit primary key with MySQL and range queries would still be 10x faster and take 10x less space in RAM than in Postgres. This article explains why that is.

chipdart
3 replies
12h38m

> Nothing wrong with a 128-bit primary key. Postgres is a horrible DB that can't order rows on disk, so it doesn't matter whether you use UUIDs or not; you're SOL.

Why do you believe a database should order rows on disk?

stoperaticless
0 replies
11h40m

1. Faster access by that column. 2. Other DBs allow it/default to it.

srcreigh
0 replies
4h51m

In Postgres, every read happens in increments of 8 kB.

If your rows are not ordered on disk, the amount of data you need to load to query 100 rows is insane.

A 10 kB query result (100 rows × 100 bytes) can require almost 1 MB of data to be loaded; it's 99% waste.

Postgres can be ~100x slower than other DBs for range queries.

Just about every other major database has this feature; Postgres doesn't.
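
Rough arithmetic behind that claim, assuming the worst case of one matching row per 8 kB page:

    wanted:    100 rows × 100 bytes              =  10 kB
    scattered: 100 rows on 100 different pages   = 100 × 8 kB = 800 kB read (~99% waste)
    clustered: the same rows packed contiguously =   2 × 8 kB =  16 kB read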

jashmatthews
1 replies
13h23m

Clustered tables and sequential keys have their own downsides, though, like lock contention on the "last" page.

srcreigh
0 replies
4h36m

Not really a problem in practice.

Most clustered tables don't have a single last page. For example, it's common to order a table by (user_id, pk) so a user's data is grouped together. Dropbox did this.

For the ones that do have a single last page, it's easy to avoid: you don't have to use sequential IDs. Use a UUID.

Basically, this problem only happens when you cluster globally by mistake. Just don't make that mistake.
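
A minimal MySQL/InnoDB sketch of that grouping pattern (table and column names are made up):

    -- The clustered primary key leads with user_id, so each user's rows are stored
    -- together and inserts spread across many pages instead of one global "last" page.
    CREATE TABLE files (
        user_id BIGINT UNSIGNED NOT NULL,
        id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        name    VARCHAR(255) NOT NULL,
        PRIMARY KEY (user_id, id),
        KEY (id)  -- InnoDB requires the AUTO_INCREMENT column to lead some index
    ) ENGINE=InnoDB;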

Sophistifunk
4 replies
18h11m

It's very interesting to me that we have to keep telling people this, but it hasn't become part of the "hive folk knowledge" we all seem to develop. I think DB vendors have been sleeping on an opportunity to encourage better practices.

paxys
3 replies
17h37m

DB vendors haven't done enough to offer ID generation as a core part of their system. Ideally "what ID do I use for this object" shouldn't even be a consideration, because of course the database should handle it. It is the system of record after all. Yet your options are pretty much limited to UUID or a basic incremental counter that fails to meet any real world production constraints.

hobs
2 replies
5h59m

Basic incremental counters work for most real-world production constraints. Most people are not going to create tables with 4.2 billion rows, even with failed inserts. If you are doing that, it's an extreme case: either you very much know what you are doing, or you very much do not; I have seen both in production.

paxys
1 replies
5h55m

What if I have multiple partitions? Replication? What if I don't want business data to be exposed due to strictly incremental counters? What if I want unique IDs across different tables?

sgarland
0 replies
2h30m

Partitions should not impact the use of an INT PK, except that you’ll need to include the partition key in the PK, e.g. (id, created_at) if partitioning by datetime. The displayed ordering without an explicit ORDER BY may not make sense, but to be fair, there are never any guarantees about implicit order.

Replication should be fine, unless you mean active-active, in which case I suggest (a) not doing that, or (b) using interleaved chunks or a coordinator node that hands them out.

Business data exposure can be avoided (if it’s actually a problem, and not just a theoretical one) in a variety of ways; two of the most common are:

* Don’t use the id in the slug.

* Have an iid column that's random and exposed, while keeping the integer as the PK.

If you need unique IDs across tables, then I question your use of an RDBMS, because you aren’t really making use of the relational aspect.
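
A minimal Postgres sketch of the partition-key-in-PK and exposed-random-iid points above (names are made up):

    CREATE TABLE events (
        id         bigserial,
        created_at timestamptz NOT NULL DEFAULT now(),
        iid        uuid NOT NULL DEFAULT gen_random_uuid(),  -- random, safe to expose
        payload    text,
        PRIMARY KEY (id, created_at)   -- the partition key must be part of the PK
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2024 PARTITION OF events
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

    -- External lookups go through iid; a UNIQUE index on iid alone isn't possible
    -- on the partitioned parent, so this one is non-unique.
    CREATE INDEX events_iid_idx ON events (iid);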

wbsun
3 replies
17h24m

Isn’t there also hash based index for random keys?

benwilber0
2 replies
16h49m

Maybe? idk. Not the default in Postgres; the default index type is a B-Tree. A hash-based index would be terrible for disk seeking in any case.

benwilber0
0 replies
14h27m

Probably the worst PK index of all time. There's a reason why it's barely ever used.

paulryanrogers
2 replies
18h14m

Won't you still need indexes on those UUIDs anyway? And possibly have to do more joins to resolve them?

dspillett
0 replies
8h37m

> Won't you still need indexes on those UUIDs anyway?

Depends on how index pages are structured in the DB. With MySQL (assuming the InnoDB engine) the primary key is pretty much always a clustered index; in MS SQL Server this is the case more often than not, too. This means that any page expansion due to splits from the randomness of the data being inserted affects the whole table, not just the key index, and rebuilding to claim back the wasted space later will take a lot longer as you are moving all your base data around. With the UUID as a supplementary key, likely indexed on its own, all that gets enlarged by excess page splitting is those 128-bit values, not the entire rows, and an index rebuild to fix that up after the fact moves a lot less data around.

So yes, you have an extra index on top of your primary key and other additional ones needed by your apps and reports, but not having a random UUID as your clustering key can be a significant benefit. Using more ordered UUIDs minimises this difference considerably though.

Even ignoring the random-key-causes-page-splits issue, if you've mitigated it with more ordered UUIDs, a wider primary key increases the size of all supplementary indexes assuming MySQL arranges things similarly to SQL Server: rather than having a hidden page/row identifier (as SQL Server does without any clustered key defined, it calls such arrangements heap tables), each supplementary index includes the clustering key value with every row. So while having a 32-bit primary key for internals and the 128-bit UUID as an extra key adds 4 bytes per row (for the extra INT32) to the base data, it saves 12 bytes from each row in each non-clustered index (as the INT32 is included, not the UUID).
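
An illustrative back-of-the-envelope for that trade-off (10M rows and three non-clustered indexes are made-up figures):

    Extra in base data:  10M rows × 4 B (added INT32)              =  40 MB
    Saved per index:     10M rows × 12 B (4 B INT32 vs 16 B UUID)  = 120 MB
    Net with 3 indexes:  3 × 120 MB - 40 MB                        = 320 MB smaller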

> And possibly have to do more joins to resolve them?

Usually not. Usually when you have both, the INT32 (or sometimes INT64) surrogate key is for all internal use and the UUID is only for external references, so the UUIDs are only important as the initial filter and are likely not taking part in a JOIN at all. After the initial filter to find the item you want in the main table of the query, the JOINs to collect data from other tables will all be by the surrogate (INT) keys. The UUID is almost never used as a foreign key reference in this arrangement.

benwilber0
0 replies
18h13m

You only need one index on the UUID, instead of an index everywhere the UUID is referenced from other tables.

fabian2k
1 replies
10h54m

There are cases where it is really useful to have only universally unique IDs, e.g. if you have multi-tenant systems and at some point you need to move tenants to a different server/instance or merge tenants on the same DB.

weaksauce
0 replies
1h47m

They do mention that it is good to have both. Also, using a newer UUID version that is more temporally sortable is wise.

> Use regular bigserial (64bit) PKs for internal table relations and UUIDs (128bit) for application-level identifiers and natural keys. Your database will be very happy!

thesz
0 replies
18h7m

If you sort UUIDs, there will be a lot of common prefixes. B-trees implicitly sort data, so you can factor the common prefix out of all the keys on a B-tree page.

But!

UUIDs are random, and the B-tree will show increased fragmentation after a short while.

I once tried to insert a puny 1 million scale-free graph edges into a B-tree (BerkeleyDB) in 1K batches and it failed miserably: I waited for an hour and then killed it. LSM trees were orders of magnitude faster at 100K edges, so BDB showed that B-trees are no match there.

B-trees are semi-static data structures; they are hard to rebuild incrementally if the data is random. But they shine if the input keys are sorted.

Use UUIDs however you like if your storage engine is an LSM tree. Use staged sorting (an LSM tree in disguise) if you use a B-tree.
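
A rough SQL rendering of that staged-sorting idea (not what the parent did with BerkeleyDB; table names are made up):

    CREATE TABLE edges (
        src uuid NOT NULL,
        dst uuid NOT NULL,
        PRIMARY KEY (src, dst)          -- backed by a B-tree
    );
    CREATE UNLOGGED TABLE edges_staging (src uuid, dst uuid);

    -- ...bulk-load a batch of random edges into edges_staging...

    -- Flush the batch in key order so the B-tree page splits stay localized.
    INSERT INTO edges (src, dst)
    SELECT src, dst FROM edges_staging ORDER BY src, dst;
    TRUNCATE edges_staging;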

kijin
0 replies
13h51m

You're assuming that people store UUIDs as 128-bit int. That's overly generous. I know people who use varchar without a second thought, more than doubling the storage requirement!

espadrine
0 replies
5h15m

> Use regular bigserial (64bit) PKs for internal table relations and UUIDs (128bit) for application-level identifiers and natural keys

These recommendations are database-specific. They are definitely true of MySQL, since its row storage hinges on the primary key (and other indices yield that primary key). To have a hot page cache, you need to always add to the same page, so an incremented primary key is preferred. A UUIDv4 secondary index would still have cold page caches though, defeating the purpose. (Unless using UUIDv7.)

But in CockroachDB, the data is distributed. Random INSERTs actually have a speedup over sequential ones, as those insertions can happen in parallel (on different machines). Similarly, SELECTs are faster if the primary index is random, because a single machine is not overloaded; it effectively acts as a load balancer.

dspillett
0 replies
9h4m

> Use regular bigserial (64bit) PKs for internal table relations and UUIDs (128bit) for application-level identifiers and natural keys.

If you are worried about size ballooning, rather than the randomness, make sure you actually need more than 32-bit standard integers (well, 31-bit, as they tend to be signed and we usually start from 0 or 1, not -2,147,483,648).

Avoiding roll-over issues is sometimes a valid concern with 32-bit INTs, but I've seen people specify 64-bit surrogate keys when their data is not likely to grow to where 31 bits would be an issue in the next few hundred years… There is an argument that 64-bit values are more efficient on modern CPUs, but I've not seen any good tests that show a measurable difference in the context of DB keys, where there are other overheads to consider due to the doubling in size.

VeejayRampay
9 replies
23h47m

beautiful interactive visualizations, this is top shelf in terms of pedagogy and vulgarization

eclectic29
7 replies
23h27m

Slightly off topic: Learnt a new word today 'vulgarization' which seems to have a completely different meaning from the obvious. Thanks.

egwynn
6 replies
23h7m

Note that, in the abstract, “vulgar” means “common” (as in “vulgar latin”). Indeed, its negative connotations come from that same sense: “common” people are unrefined.

hinkley
5 replies
22h51m

The association between vulgarity and propriety (and class distinctions) sort of ruins that word, particularly in the English-speaking West.

I wonder if that's as big of a problem in the Romance languages (which all treat left/right the same way: left = bad, right = good).

jjgreen
2 replies
22h33m

Indeed: are you sinister or dexterous?

nickpeterson
0 replies
19h12m

As a left-handed contrarian, I’ve always enjoyed that sinister and left handed go hand in hand.

hinkley
0 replies
21h38m

In French, the same word for "right" covers the same notions as in English for:

- direction

- straight ahead

- civics

- propriety

egwynn
0 replies
6h7m

This goes pretty deep in English. I'd argue that the semantic intention behind the colloquial usage of "vulgar" is nearly inseparable from the "class distinction" baggage it carries. Consider these common synonyms and their etymologies:

- Rude: "coarse, rough, unfinished, unlearned" (https://www.etymonline.com/word/rude#etymonline_v_16610)

- Mean: "shared by all, common, general" (https://www.etymonline.com/word/mean#etymonline_v_12495)

And even synonyms like obscene, indecent, or disgusting, which don't evoke this distinction directly, still almost always ultimately rely on separating things based on what is "good" and "clean" according to class distinctions.

LtdJorge
0 replies
20h12m

Yes, in Spanish "vulgar" is used to mean inappropriate. We have "el vulgo" (el pueblo, the people), which kinda teaches you the correct meaning: popular, unrefined. But "vulgo" is seldom used.

bddicken
0 replies
23h39m

That's the goal! Thanks for the kind words.

is_true
6 replies
16h33m

The cookie modal doesn't work on Firefox mobile and it takes half the height.

Why not let the user set that up in their browser?

vanderZwan
1 replies
9h0m

> it takes half the height.

Ah so that's what the "planet scale" name refers to /j

is_true
0 replies
5h37m

It takes all the screen space from latitude 0° to -90°!!!

handelaar
0 replies
8h7m

Nor Chrome desktop

crabmusket
0 replies
4h26m

It has such an enticing "reject optional" button next to "accept all" and I was so impressed that they'd actually made the opt-out flow as easy as the opt-in flow... until I tried to use it. It's just maliciously incompetent at this point.

bddicken
0 replies
3h9m

I'm sorry about this! Working on a fix.

IAmLiterallyAB
0 replies
14h56m

Or Chromium mobile

hinkley
6 replies
22h47m

I realized after a few years of doing it that my strategy for keeping Wikis useful is to treat them as B-Trees.

When the landing page gets too full or has too many outgoing links, I start pushing links and paragraphs down into the child pages, to leave space for a fair share of timely links and on-boarding docs.

Similar and older links get pushed down into the sibling that best represents the topic. Then if the destination page is now too big, similar and older links get pushed down to their children. Eventually all of the outdated docs are three levels down from the landing page, where only historians and experts will see them. And sometimes as we finally decide how part of the system really should work, siblings get combined into one page, minus the speculative work that gets pushed down deeper in the tree. It works remarkably well. At the end of the day documentation is a search problem.

I highly recommend it for a Friday afternoon exercise when you want to be productive but you know starting a new task is a complete waste of time.

caseyohara
5 replies
22h7m

Do you have a recommendation for Wiki software you like to use? My team is in need of an internal knowledge base, and I like the structure of wikis. Most of the SaaS products I've tried or looked at are a bit too shiny/fancy and don't seem to match my mental model of how a wiki-style knowledge base should work.

zelphirkalt
0 replies
7h37m

My recommendation is, if you want a wiki for developers, use something that is based on a markup language, sources pages from a git repo, and does not do too much magic behind the scenes. This will result in a more maintained and liked wiki than any bloated SaaS wiki.

shnock
0 replies
17h0m

Confluence, but we're already in deep w Atlassian

left-struck
0 replies
8h44m

If you can self-host, wiki.js is easy to set up in a Docker container. MediaWiki (what Wikipedia runs on) is pretty easy too.

If you have a small team, Obsidian and a syncing solution like git or obsidian sync might work.

I was able to work with my company's IT team to set up a wiki which is only accessible from within the network, including by VPN, and is hosted on a VPS.

infogulch
0 replies
2h46m

Oxide Computer published their Request for Discussion (RFD) site software [1] that they use for internal published documentation and discussion. Many are published to the public, but some are private. [2] They talk about how they use it and where it came from in their most recent podcast: RFDs: The Backbone of Oxide [3].

I suspect this would be a good 'substitute' for documents that are often hosted on internal wikis.

[1]: https://github.com/oxidecomputer/rfd-site

[2]: https://rfd.shared.oxide.computer/rfd/0001

[3]: https://oxide.computer/podcasts/oxide-and-friends/2065190

hinkley
0 replies
19h13m

I don't think it really matters which you use. I've unfortunately been stuck in Atlassian for ages.

But if you were shopping for one, from the standpoint of keeping the docs working, being able to see missing pages and see incoming links to a page are both pretty helpful. I kinda miss the latter.

seanman
2 replies
4h28m

I have been looking for something like this for so long, amazing post. I would love a section on composite indexes. That is something that I still have a hard time visualizing…

tpetry
0 replies
4h1m

I invented a new way of visualising composite ones when I wrote my book about indexing. You can scroll down on the landing page to the fundamentals chapter to see it.

https://sqlfordevs.com/ebooks/indexing

bddicken
0 replies
3h11m

Thanks Sean! Yeah that would be very cool to have an interactive visual for that as well. So many possibilities!

yosri-xp
1 replies
18h23m

Thanks for the amazing visuals. My team and I worked on B+Tree indexing support on top of Aerospike, since we have several huge data sets (5 TB of data) and each data set belongs to a property that is supposed to have its own ordered table index.

The challenging part was evicting expired keys from the B+Tree, since inserted keys carry a TTL. We decided to fuse only one branch level, within the first sibling leaf nodes, because performing cleanup all the way up would be expensive and would cause high lock contention, slowing things down substantially, especially when keys get inserted/deleted/updated.

We also had to do sharding on top of the B+Tree to speed things up and reduce the lock contention. That way we know which shard the keys belong to and we lock only that branch before performing any CUD operation, so we can run highly concurrent operations on multiple shards/branches.

The cleanup process has some caveats and the B+Tree can end up unbalanced, so we had to provide an index-rebuild feature that fixes the gaps when necessary and avoids extra cleanup operations.

Again, thanks for the visuals.

bddicken
0 replies
3h9m

You're welcome! Sharding with B tree indexes... Hmmm, I know a company that does that.

misonic
1 replies
10h29m

May I ask what the v0, v1, ... v10 mean in those graphs? Different pages?

s4i
0 replies
8h0m

I think it's "value #0", "value #1", etc. Confused me too.

tnvmadhav
0 replies
9h46m

Great piece of education. Interactive demos like these help a lot.

sroussey
0 replies
17h6m

Awesome article!

I only wish the reference to InnoDB storing data in the B-tree itself had mentioned that this is otherwise referred to as a clustered index.

MyISAM, before it, was non-clustered.

Oracle and others let you choose.

dorbodwolf
0 replies
12h59m

> If our disk block and B-tree node is 16k, and our keys, values, and child pointers are all 8 bits, this means we could store 682 key/values with 683 child pointers per node. A three level tree could store over 300 million key/value pairs (682 × 682 × 682 = 317,214,568).

Shouldn't that be 8 bytes per element?
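
For reference, the arithmetic only works out with 8-byte elements:

    682 keys + 682 values + 683 child pointers = 2,047 elements
    2,047 × 8 bytes = 16,376 bytes, which fits in a 16,384-byte (16 kB) node
    682 × 682 × 682 = 317,214,568 key/value pairs across three levels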