A MySQL compatible database engine written in pure Go

osigurdson
14 replies
17h20m

I suspect Go is probably better, but as a long-time C# developer I cringe at the idea of implementing a DB in a GC language. It seems you would be fighting the GC all the time and would have to write a lot of non-obvious low-allocation code, using unmanaged structures, unsafe, and so on. All doable of course, but it seems like starting on the wrong foot. Maybe fine for a very small team, but onboarding new devs with the right skill set would be hard.

neonsunset
4 replies
16h59m

You would be surprised by the performance of modern .NET :)

Writing no-alloc code is oftentimes done by reducing complexity and not doing "stupid" tricks that work against JIT and CoreLib features.

For databases specifically, .NET might actually be positioned very well with its low-level features (intrinsics incl. SIMD, FFI, struct generics, though not entirely low-level) and high-throughput GC.

An interesting example of this applied in practice is Garnet[0]/FASTER[1]. Keep in mind that its codebase still has many instances of un-idiomatic C#, and you can do way better by further simplification, but it already does the job well enough.

[0] https://github.com/microsoft/garnet

[1] https://github.com/microsoft/FASTER

osigurdson
3 replies
16h7m

I'm using .NET 6. I agree, performance is generally great / just as fast as its peers (i.e. Java and Go). However, if you need to think about memory a lot, GCed runtimes are an odd choice.

neonsunset
2 replies
15h45m

Why is that?

osigurdson
1 replies
15h18m

Primarily because you not only need to think about memory, but also about the general care and feeding of the GC (the behavior of which can be rather opaque). To each their own, but based on my own (fairly extensive) experience, I would not create a new DB project on a GCed runtime given the choice. That being said, I do think C# is a very nice language with a high-performance runtime and a wonderful ecosystem, and it's a great choice for many / most projects.

neonsunset
0 replies
15h0m

Isn't it characteristic of languages like C++ or Rust that you have to think way more about managing memory (and even more so if you use C)?

Borrow-checker-based drop/deallocation is very, very convenient for data with linear or otherwise trivial lifetimes, but for complex cases you still end up paying with your effort, Arc/Arc<Mutex<T>>, or both. Where it does help is knowing that your code is thread-safe, something Rust is unmatched at.

But otherwise, C# is a quite unique GC-based language in that it offers a full set of tools for low-level control when you do need it. You don't have to fight the GC, because once you use e.g. struct generics for abstractions and stack-allocated/pooled buffers or native memory for data, neatly wrapped into spans, you get something close to C but in a much nicer-looking and more convenient package (the GC is also an optimization, since it acts like an arena allocator, and each individual object costs less than under a purely reference-counted approach).

Another language, albeit one with small adoption, is D, which also has a GC when you want an escape hatch while offering features to compete with C++ (I can't attest to its GC performance, however).

raggi
2 replies
16h5m

Yup, it's bad, and even if you "do everything right" minimization-wise, if you're still using the heap then eventually fragmentation will come for you too.

neonsunset
1 replies
15h43m

Languages using moving garbage collectors, like C# and Java, are particularly good at not having to deal with fragmentation at all, or only marginally.

raggi
0 replies
1h9m

Yup, and exposing pointers in a GC language was a mistake, as it blocks this. It limits (efficient) applications to small deployments.

jillesvangurp
2 replies
7h22m

There are quite a few database products and other data-intensive systems written in Go, Java, and many other languages. Generally this is much less of an issue than you think, and it's offset by the benefits of having some nice primitives for e.g. concurrency and a nice language to work with.

On the JVM you have things like Cassandra, Elasticsearch, Kafka, etc., each of which offers performance and scale. There are lots more examples. As far as I know they don't do any of the things you mention, at least not a lot. And you can use memory-mapped files on the JVM, which helps as well. Elasticsearch uses this a lot, and I imagine Kafka and Cassandra do similar things.

As for skillset, you definitely need to know what you are doing if you are going to write a database. But that would be true regardless of the language.

serial_dev
1 replies
3h44m

While it is true that Cassandra and Kafka are great software that countless developers rely on to handle massive scale...

It is also true that the JVM and the GC are a bottleneck in what they are able to offer. Scylla and Redpanda's pitch is "we are like this essential piece of software, but without the JVM and GC".

Of course, having a database written in Go still has its pros and cons, so each to their own.

jillesvangurp
0 replies
1h9m

The JVM and GC have a learning curve for the people implementing the database, but most users wouldn't get exposed to any of that. And if you manage it properly, things work fine. I've used Elasticsearch (and OpenSearch) for many years. This is not really an issue these days. It was 10 years ago, when JVM garbage collection was a lot less advanced than it is now. These days, that stuff just works. I haven't had to tune GC on the JVM for at least half a decade. It's become a complete and utter non-issue.

There are many valid reasons to pick other products but Elasticsearch is pretty good and fast at what it does. I've seen it ingest content at half a million documents per second. No stutters. Nothing. Just throw data at it and watch it keep up with that for an hour sustained (i.e. a bit over a billion documents). CPUs maxed out. This was about as fast as it went. We threw more data at it and it slowed down but it didn't die.

That data of course came from Kafka, being passed through a bunch of Docker processes (Scala). All JVM based. Zero GC tuning needed. There was lots of other tuning we did, but the JVM wasn't a concern.

winrid
0 replies
17h13m

Well, with a manually managed language you have to do those things pretty much all the time, but with a GC you can pick which parts are manually managed.

Also I suspect this project isn't for holding hundreds of GB of stuff in memory all the time, but I could be wrong.

klabb3
0 replies
7h11m

I think this depends on the level of optimization you go for. At the extreme end, you're not gonna use "vanilla" anything, even in C or Rust. So I doubt you'll get that smooth onboarding experience either way.

In Go, I've found that with a little bit of awareness and a small bag of tricks, you can get very low allocations on hot paths (where they matter). This comes down to using sync.Pool and being clever with slices to avoid copying. That's a footgun-performance tradeoff that's well worth it, and it can get you really far quickly.
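
For illustration, here's a minimal sketch of the sync.Pool half of that bag of tricks (the buffer type and the row-encoding task are made up for the example):

    package main

    import (
        "bytes"
        "fmt"
        "sync"
    )

    // bufPool hands out reusable scratch buffers so the hot path
    // doesn't allocate a fresh one per call.
    var bufPool = sync.Pool{
        New: func() any { return new(bytes.Buffer) },
    }

    // encodeRow serializes fields into a pooled buffer and returns
    // the encoded length. Returning the buffer to the pool amortizes
    // the allocation away under sustained load.
    func encodeRow(fields ...string) int {
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset() // keep capacity, drop old contents
        for _, f := range fields {
            buf.WriteString(f)
            buf.WriteByte('\t')
        }
        n := buf.Len()
        bufPool.Put(buf)
        return n
    }

    func main() {
        fmt.Println(encodeRow("1", "alice", "alice@example.com"))
    }

The slice half is similar: sub-slice one backing array instead of copying rows around.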

hnlmorg
0 replies
9h53m

There are already MySQL-like databases written in non-GC'ed languages, such as MySQL itself.

Odds are that if you need one written in Go, then your requirements are somewhat different, for example the need to stub it out for testing.

geenat
7 replies
20h6m

What's the replication story currently like?

zachmu
6 replies
20h3m

The vanilla package can replicate to or from MySQL via binlog replication. But since it's memory only, that's probably not what you want. You probably want to supply the library with a backend that has persistence, not the built-in memory-only one.

Dolt can do the same two directions of MySQL binlog replication, and also has its own native replication options:

https://docs.dolthub.com/sql-reference/server/replication

geenat
2 replies
19h57m

Interesting!

"If you have an existing MySQL or MariaDB server, you can configure Dolt as a read-replica. As the Dolt read-replica consumes data changes from the primary server, it creates Dolt commits, giving you a read-replica with a versioned history of your data changes."

This is really cool.

timsehn
0 replies
19h46m

Coming soon we'll have the ability to replicate a branch HEAD to MySQL as well.

dice
2 replies
12h58m

Have you benchmarked the replication? Or do you know of anyone who's running it against a primary with a couple of tens of thousands of writes per second?

zachmu
0 replies
5h6m

I don't think we have any benchmarks of replication from MySQL, but I am positive there's no chance it can handle 10,000 TPS.

cess11
0 replies
9h49m

That's a lot. With Percona clusters I started having issues requiring fine-tuning at around a third of that, at quite short peak loads, maybe ten minutes of sustained high load topping out at 6-10k writes/s. Something like 24 cores and 192 GB RAM on the main node.

Not sure how GC works in Golang, but if you see 20k writes/s sustained, that's what I'd be nervous about. If every write is 4 kB, that's 20,000 × 4 kB = 80 MB/s, something like a quarter of a TB per hour, and probably a full TB at the edge due to HTTP overhead. So, yeah, a lot to handle on a single node.

Maybe there are performance tricks I don't know about that make 20k sustained a breeze; I just know that I had to spend time tuning RAM usage and whatnot for peaks well before that load, and at that point I was already planning to shard the traffic.

webprofusion
1 replies
15h12m

This is very cool! Couple of suggestions:

- Don't use "mysql" in the name; it's a trademark of Oracle Corporation, and they can very easily sue you personally if they want to, especially since you're using it to develop a competing database product. Other products getting away with it doesn't mean they won't set their sights on you. This is just my suggestion and you can ignore it if you want to.

- Postgres wire/SQL compatibility. Postgres is for some reason becoming the relational king, so implementing some support sooner rather than later would increase your project's relevance.

sa-code
0 replies
13h53m

Missed an opportunity to call this uSql!

jddj
3 replies
22h43m

I always found the idea behind dolt to be very enticing.

Not enticing enough to build a business around, due to it being that bit too different and the persistence layer being that bit too important. But it's the sort of thing I'd love to see the mainstream DBs adopt.

I didn't realise the engine was written in Go, and honestly the first place my mind wanders is to performance.

zachmu
0 replies
21h38m

We haven't benchmarked the in-memory database implementation bundled with go-mysql-server in a while, but I would be surprised if it's any slower than MySQL, considering that Dolt runs on the same engine and is ~2x slower than MySQL including disk access.

https://docs.dolthub.com/sql-reference/benchmarks/latency

jchanimal
0 replies
21h38m

If you like the idea of the Dolt prolly trees[1], I'm building a database[2] that uses them for indexing, (eventually) allowing for shared index updates across actors. Our core uses open-source JavaScript[3], but there are a few other implementations including RhizomeDB in Rust[4]. I'm excited about the research in this area.

[1] https://docs.dolthub.com/architecture/storage-engine/prolly-...

[2] https://fireproof.storage

[3] https://github.com/mikeal/prolly-trees

[4] https://jzhao.xyz/thoughts/Prolly-Trees

jbverschoor
3 replies
22h43m

This seems to be a wire-protocol proxy for mysql -> SQL.

The default proxied database is Dolt. I'm guessing this was extracted from Dolt itself, as that claims to be wire-compatible with MySQL. Which all makes total sense.

zachmu
2 replies
21h31m

Not a proxy in the traditional sense, no. go-mysql-server is a set of libraries that implements a SQL query engine and server in the abstract. When provided with a compatible database implementation using the provided interfaces, it becomes a MySQL-compatible database server. Dolt [1] is the most complete implementation, but the in-memory database implementation the package ships with is suitable for testing.

We didn't extract go-mysql-server from Dolt. We found it sitting around as abandonware, adopted it, and used it to build Dolt's SQL engine on top of the existing storage engine and command line [2]. We decided to keep it a separate package, and implementation agnostic, in the hopes of getting contributions from other people building their own database implementations on top of it.

[1] https://github.com/dolthub/dolt [2] https://www.dolthub.com/blog/2020-05-04-adopting-go-mysql-se...
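
To make the "not a proxy" point concrete, embedding the engine looks roughly like this (a sketch adapted from the project README; the constructor signatures have shifted between releases, so treat the exact calls as indicative rather than current API):

    package main

    import (
        sqle "github.com/dolthub/go-mysql-server"
        "github.com/dolthub/go-mysql-server/memory"
        "github.com/dolthub/go-mysql-server/server"
    )

    func main() {
        // The bundled in-memory backend; a real deployment supplies
        // its own implementation of the database interfaces instead.
        db := memory.NewDatabase("mydb")
        engine := sqle.NewDefault(memory.NewDBProvider(db))

        // Expose the engine over the MySQL wire protocol.
        cfg := server.Config{
            Protocol: "tcp",
            Address:  "localhost:3306",
        }
        s, err := server.NewDefaultServer(cfg, engine)
        if err != nil {
            panic(err)
        }
        if err := s.Start(); err != nil {
            panic(err)
        }
    }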

jsteenb2
1 replies
18h47m

Really excellent work! For the curious, would you all be creating an in-memory database implementation that is Postgres compatible for the doltgres project?

zachmu
0 replies
18h9m

We are moving in that direction, but it's not a primary goal at this point. Once we have more basic functionality working correctly in doltgres, we will examine splitting off a separate package for it. The in-memory implementation has a bunch of MySQL-specific stuff in it right now, and we're still learning which pieces need to be generalized to share code.

fedxc
3 replies
18h11m

I always look at these implementations and go wow! But then I think, is there any real use for this?

zbuttram
0 replies
18h8m

The readme mentions at least one interesting use which presumably is the impetus for its creation: https://github.com/dolthub/dolt

zachmu
0 replies
18h6m

If your program integrates with MySQL in production, you can use this for much faster local tests. It doesn't have to be a Go program, although that makes it easier.
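
In Go, such a test might look something like this sketch (it assumes an embedded go-mysql-server instance is already listening locally, e.g. started in TestMain as in the README; the DSN and table are made up for illustration):

    package myapp_test

    import (
        "database/sql"
        "testing"

        _ "github.com/go-sql-driver/mysql" // the ordinary MySQL driver
    )

    func TestRoundTrip(t *testing.T) {
        // No real MySQL install needed; the in-memory server
        // speaks the same wire protocol.
        db, err := sql.Open("mysql", "root@tcp(localhost:3306)/mydb")
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec("CREATE TABLE t (id INT PRIMARY KEY)"); err != nil {
            t.Fatal(err)
        }
        if _, err := db.Exec("INSERT INTO t VALUES (1)"); err != nil {
            t.Fatal(err)
        }

        var n int
        if err := db.QueryRow("SELECT COUNT(*) FROM t").Scan(&n); err != nil {
            t.Fatal(err)
        }
        if n != 1 {
            t.Fatalf("want 1 row, got %d", n)
        }
    }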

kitd
0 replies
9h55m

If you want to run arbitrary queries on structured data then SQL is a good language to do it in. This library gives you the opportunity to build such a SQL layer on top of your own custom structured data sources, whatever they may be.

davgoldin
3 replies
11h47m

Congrats, looks like a lot of hard work!

Could I swap the storage engine for my own key-value storage, e.g. RocksDB or similar?

zachmu
2 replies
5h2m

Yes, that's the idea. Writing a simple read-only database backend is not too tough.

davgoldin
1 replies
3h51m

Why read only? What's stopping this engine from using (for example) FoundationDB as storage?

zachmu
0 replies
3h19m

Nothing, it's just quite a bit more complicated to implement a backend that supports writes.

TechTechTech
3 replies
11h43m

It would be great if this evolved to support MySQL-to-PostgreSQL and MySQL-to-SQLite translation.

Then we could finally have support for multiple database engines in WordPress and elsewhere.

aargh_aargh
2 replies
10h0m

It is always the edge cases that will kill you. In the case of WP on PostgreSQL, the reason you want WP in the first place is the plugins, and those will be hit or miss on PostgreSQL. Just give up on the combination of those two.

kreetx
1 replies
8h2m

Isn't there an adapter from MySQL to Postgres which would essentially mimic all the quirks of MySQL on top of an actual Postgres?

aargh_aargh
0 replies
6h5m

I believe this is what Janus (NEXTGRES) does.

To clarify, the wire protocol is the easy part; the semantic differences in how each database does things are a whole other can of worms. Such emulation will never be 100%, quirks and all.

sgammon
2 replies
22h19m

Isn't that........ Vitess?

zachmu
1 replies
21h30m

Vitess (a fork of it, anyway) provides the parser and the server. The query engine is all custom Go code.

sgammon
0 replies
19h43m

Ah, cool, that makes sense. Thanks for clarifying

pizza234
2 replies
22h7m

The compatibility (and functionality in general) is severely limited; it's not usable in production:

No transaction support. Statements like START TRANSACTION, ROLLBACK, and COMMIT are no-ops.

Non-performant index implementation. Indexed lookups and joins perform full table scans on the underlying tables.

I actually wonder if they support triggers, stored procedures etc.

zgk7iqea
0 replies
21h47m

Only for the in-memory implementation. It is also specifically stated that you shouldn't use the in-memory stub in production.

zachmu
0 replies
21h36m

Yes, triggers and stored procedures are supported. Concurrency is the only real limitation in terms of functionality.

The bundled in-memory database implementation is mostly for use in testing, for people who run against MySQL in prod and want a fast, compatible Go library to test against.

For a production-ready database that uses this engine, see Dolt:

https://github.com/dolthub/dolt

maxloh
2 replies
21h14m

I know it is a matter of choice, but why was MySQL chosen instead of PostgreSQL? The latter seems to be more popular on Hacker News.

tobinfekkes
0 replies
21h8m

Typically, things that are more popular on Hacker News are not the most popular with the rest of the world.

kamikaz1k
2 replies
22h20m

Shouldn't these projects have a perf comparison table? There was a post a couple of days ago about an in-memory Postgres with the same problem on the perf front.

If someone is considering running it, they're probably considering it against the actual thing. And I would think the main decision criterion is: _how much faster tho?_

zachmu
1 replies
21h27m

This is a reasonable point, we'll run some benchmarks and publish them.

We expect that it's faster than MySQL at small scale. Dolt is only 2x slower than MySQL, and that includes disk access.

https://docs.dolthub.com/sql-reference/benchmarks/latency

kamikaz1k
0 replies
18h56m

Thanks! Appreciate your response.

In dynamic language land, we tend to use real DBs for test runs. So having a faster DB wouldn't hurt!

neximo64
1 replies
21h53m

Is there anything like this for postgres?

malkia
1 replies
21h52m

Is this for integration/smoke testing?

timsehn
0 replies
21h0m

Most direct users of go-mysql-server use it to test Golang <> MySQL interactions without needing a running server.

We here at DoltHub use it to provide SQL to Dolt.

geenat
1 replies
20h18m

With Vitess likely consolidating its runtimes (vtgate, vtctl, vttablet, etc.) into a single unified binary: https://github.com/vitessio/vitess/issues/7471#issuecomment-...

... it would be a wild future if Vitess replaced the underlying MySQL engine with this (assuming the performance is good enough for Vitess).

zachmu
0 replies
19h3m

I don't think this is in the cards for Vitess; their whole architecture is built around managing sharded MySQL instances.

didip
1 replies
21h58m

TiDB has been around for a while; it is distributed, written in Go and Rust, and MySQL compatible. https://github.com/pingcap/tidb

Somewhat relatedly, StarRocks is also MySQL compatible, written in Java and C++, but it's tackling OLAP use cases. https://github.com/StarRocks/starrocks

But maybe this project is tackling a different angle. The Vitess MySQL library is kind of hard to use. Maybe this could be used to build an ORM-like abstraction layer?

verdverm
0 replies
21h19m

Dolt supports Git-like semantics, so you can commit, pull, merge, etc...

west0n
0 replies
17h51m

Interesting, another project implemented in Go that is compatible with MySQL server, alongside others like Vitess and TiDB.

taf2
0 replies
17h50m

Could this be used as a kind of connection proxy, to allow more clients on a single pool of database servers?

speleding
0 replies
10h34m

How hard would it be to use this as an in-memory replacement for MySQL for testing, let's say, a Rails project?

Given how important the DB layer is, I would be careful about using something like this in production, but if it speeds up the test suite it could be really interesting.

karmakaze
0 replies
22h22m

"Compatible" has many aspects. I'd be interested in the replication protocols.

cvalka
0 replies
19h26m

TiDB!

ceving
0 replies
6h30m

Why not standards-conforming SQL instead of MySQL?

amelius
0 replies
20h56m

Performance comparison against the original?