
Garage: Open-Source Distributed Object Storage

j-pb
30 replies
10h58m

What I'm really missing in this space is something like this for content addressed blob storage.

I feel like a lot of complexity and performance overhead could be removed if you only stored immutable blobs under their hash (e.g. BLAKE3). Combined with a soft delete, this would make all operations idempotent, blobs trivially cacheable, and all state a CRDT: monotonically mergeable and coordination-free.

There is stuff like IPFS at the large scale, but I want this for local deployments as an S3 replacement, with the metadata stored elsewhere, like in git or a database.
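
For illustration, the core of such a store is tiny. A minimal sketch in Rust, assuming a local filesystem backend and the blake3 crate (the BlobStore type and its layout are hypothetical, not from any existing project):

  // Minimal sketch of a content-addressed blob store: the only key is
  // the BLAKE3 hash of the content, so puts are idempotent and gets are
  // self-verifying. Naming and metadata would live elsewhere.
  use std::{fs, io, path::PathBuf};

  struct BlobStore {
      root: PathBuf,
  }

  impl BlobStore {
      fn put(&self, blob: &[u8]) -> io::Result<String> {
          let key = blake3::hash(blob).to_hex().to_string();
          let path = self.root.join(&key);
          if !path.exists() {
              // Same content, same key: writing twice is a no-op.
              fs::write(&path, blob)?;
          }
          Ok(key)
      }

      fn get(&self, key: &str) -> io::Result<Vec<u8>> {
          let blob = fs::read(self.root.join(key))?;
          // Re-hashing on read detects corruption for free.
          assert_eq!(blake3::hash(&blob).to_hex().to_string(), key);
          Ok(blob)
      }
  }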

skinkestek
4 replies
7h35m

Perkeep has (at least as of when I last checked) the very interesting property of being completely impossible for me to make heads or tails of, while also looking extremely interesting and useful.

So in the hope of triggering someone to give me the missing link (maybe even a hyperlink) for me to understand it, here is the situation:

I'm a SW dev who has also done a lot of sysadmin work. Yes, I have managed to install it. And that is about it. There seem to be so many features, but I really, really don't understand how I am supposed to use the product, or the documentation for that matter.

I could start an import of Twitter or something else and it kind of shows up. Same with anything else: photos etc.

It clearly does something, but it was impossible to understand what I am supposed to do next, both from the UI and from the docs.

tgulacsi
0 replies
6h38m

Besides personal photo storage, I use the storage part as a file store at work (basically, indexing is off), with a simplifying wrapper for upload/download: github.com/tgulacsi/camproxy

With the adaptive block hashing (varying block sizes), it beats gzip for compression.

mdaniel
0 replies
1h29m

I was curious to see if I could help, and I wondered if you saw their mailing list? It seems to have some folks complaining about things they wish it did, which strangely enough is often a good indication of what it currently does

There are also "Show Perkeep"-ish posts like this one <https://groups.google.com/g/perkeep/c/mHoUUcBz2Yw> where the user made their own Pocket implementation, complete with original page snapshotting.

The thing that most stood out to me was the number of folks who wanted Perkeep to manage its own content AND serve as the metadata system of record for external content (think: an existing MP3 library owned by an inflexible media player such as iTunes). So between that and your "import Twitter" comment, it seems one of its current hurdles is that the use case one might have for a system like this needs to be "all in", otherwise it becomes the same problem as a removable USB drive for storing stuff: "oh, damn, is that on my computer or on the external drive?"

lockyc
0 replies
7h19m

I agree 100%

breakingcups
0 replies
7h8m

Perkeep is such a cool, interesting concept, but it seems like it's on life-support.

If I'm not mistaken, it used to be funded by creator Brad Fitz, who could afford to hire a full-time developer on his Google salary, but that time has sadly passed.

It suffers from having so many cool use-cases that it struggles to find a balance in presentation.

j-pb
0 replies
5h30m

Yeah, there are plenty of dead and abandoned projects in this space. Maybe the concept is worthless without a tool for metadata management? Also, I should probably have specified that by "missing" I mean "there is nothing well maintained and production grade" ^^'

j-pb
0 replies
5h27m

Yeah, I've been following it on and off since it was Camlistore. Maybe it tried to do too much at once and didn't focus enough on just the blob part, but I feel like it never really reached a coherent state and story.

j-pb
2 replies
5h32m

I get a

Trac detected an internal error:

IOError: [Errno 28] No space left on device

So it looks like it is pretty dead, like most projects in this space?

diggan
1 replies
5h26m

Because the website seems to have a temporary issue, the project must be dead?

Tahoe-LAFS seems alive and continues development, although it seems to not have seen as many updates in 2024 as previous years: https://github.com/tahoe-lafs/tahoe-lafs/graphs/contributors

j-pb
0 replies
5h23m

More like based on the prior that all projects in that space aren't in the best of health. Thanks for the GitHub link, that didn't pop up in my quick Google search.

the_duke
1 replies
10h43m

Garage splits the data into chunks for deduplication, so it basically already does content-addressed storage under the hood.

They probably don't expose it publicly though.

j-pb
0 replies
5h38m

Yeah, and as far as I understood they use the key hash to address the overall object descriptor. So in theory using the hash of the file instead of the hash of the key should be a simple-ish change.

Tbh I'm not sure content-aware chunking isn't a siren's call:

  - It sounds great on paper, but once you start storing encrypted blobs (which you have to do if you want e2e encryption) or compressed blobs (e.g. images), it won't work anymore.

  - Ideally you would store things as blobs fine-grained enough that blob-level deduplication would suffice.

  - Storing a blob across your cluster adds compute, lookup, bookkeeping, and communication overhead, resulting in worse latency. Storing an object as a contiguous unit makes the cache/storage hierarchies happy and allows for optimisations like using `sendfile`.

  - Storing the blobs as a unit makes computational storage easier to implement, where instead of reading the blob and processing it, you would send a small WASM program to the storage server (or drive? https://semiconductor.samsung.com/us/ssd/smart-ssd/) and only receive the computation result back.

singinwhale
1 replies
10h4m

Sounds a little like Kademlia, the DHT implementation that BitTorrent uses.

It's a distributed hash table where the value mapped to a hash is immutable after it is STOREd (at least in the implementations that I know)
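
The characteristic piece is the XOR distance metric: nodes and keys share one ID space, and routing always steps toward the node that minimizes that distance. A toy sketch (u64 IDs for brevity; real Kademlia uses 160-bit IDs):

  // Toy sketch of Kademlia's XOR metric. STORE and lookup requests are
  // routed toward whichever known node minimizes this distance.
  fn xor_distance(a: u64, b: u64) -> u64 {
      a ^ b
  }

  fn closest_node(key: u64, known_nodes: &[u64]) -> Option<u64> {
      known_nodes.iter().copied().min_by_key(|n| xor_distance(*n, key))
  }

  fn main() {
      let nodes = [0x1a2b, 0x9c0d, 0x7e3f];
      // The value for this key would be stored on the closest node.
      println!("{:#x?}", closest_node(0x7e00, &nodes));
  }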

j-pb
0 replies
6h2m

Kademlia could certainly be a part of a solution to this, but it's a long road from the algorithm to the binary that you can start on a bunch of machines to get the service, e.g. something like SeaweedFS. BitTorrent might actually be the closest thing we have to this, but it sits at the opposite end of the latency/distribution spectrum.

ramses0
1 replies
5h35m

Check out SeaweedFS too; it makes some interesting tradeoffs, but I hear you on wanting the properties you're looking for.

tempest_
0 replies
4h29m

I am using Seaweed for a project right now. Some things to consider:

- It works pretty well, at least up to the 15B objects I am using it for, running on 2 machines with about 300TB (500TB raw) storage on each.

- The documentation can be sparse, specifically with regard to operations like how to back up things or the different failure modes of the components.

- One example of the above: I spun up a second filer instance (which is supposed to sync automatically), which caused the master server to emit an error while it was syncing. The only way to know whether it was working was watching the new filer's storage slowly grow.

- Seaweed has a pretty low bus factor (basically one dev), though that dev is pretty responsive and seems to accept PRs at a steady rate.

od0
1 replies
4h43m

Take a look at https://github.com/n0-computer/iroh

An open source project written in Rust that uses BLAKE3 (and QUIC, which you mentioned in another comment).

j-pb
0 replies
4h19m

It certainly has a lot of overlap and is a very interesting project, but like most projects in this space, I feel like it's already doing too much. I think that might be because many of these systems also try to be user-facing?

E.g. it tries to solve the "mutability problem" (having human-readable identifiers point to changing blobs); there are blobs and collections and documents; there is a whole resolver system with their ticket stuff.

All of these things are interesting problems, that I'd definitely like to see solved some day, but I'd be more than happy with an "S3 for blobs" :D.

lima
1 replies
5h23m

The RADOS K/V store is pretty close. Ceph is built on top of it but you can also use it as a standalone database.

yencabulator
0 replies
1h37m

Nothing content-addressed in RADOS. It's just a key-value store with more powerful operations than get/put, and it's more in the strong-consensus camp than the parent's request for coordination-free things.

(Disclaimer: ex-Ceph employee.)

jiggawatts
1 replies
5h43m

Something related that I've been thinking about: there aren't many popular data storage systems out there that use HTTP/3 and/or gRPC for lower latency. I don't just mean object storage, but database servers too.

Recently I benchmarked the latency to some popular RPC, cache, and DB platforms and was shocked at how high it was. Everyone still talks about 1 ms as the latency floor, when it should be the ceiling.

j-pb
0 replies
5h25m

Yeah, QUIC would probably be a good protocol for such a system. Roundtrips are also expensive; ideally your client library would cache as much data as the local disk can hold.

j-pb
0 replies
6h0m

Yeah, the subdirectories and mime-type seemed like an unnecessary complication. Also looks pretty dead.

rkunnamp
0 replies
59m

An IPFS-like "coordination free" local S3 replacement! Yes. That is badly needed.

ianopolous
0 replies
3h24m

That's how we use S3 in Peergos (built on IPFS). You can get S3 to verify the sha256 of a block on write and reject the write if it doesn't match. This means many mutually untrusting users can all write to the same bucket at the same time with no possibility for conflict. We talk about this more here:

https://peergos.org/posts/direct-s3
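
For a sense of what that looks like from a client, here's a hedged sketch using the Rust aws-sdk-s3 crate (bucket and key are placeholders; Peergos' actual code differs): you send the base64-encoded SHA-256 along with the PUT, and the service recomputes it and rejects the write on mismatch.

  // Sketch of verify-on-write against S3 or a compatible store: the
  // client supplies the checksum and the service validates the body.
  use aws_sdk_s3::{primitives::ByteStream, Client};
  use base64::{engine::general_purpose::STANDARD, Engine as _};
  use sha2::{Digest, Sha256};

  async fn put_verified(
      client: &Client,
      bucket: &str,
      key: &str,
      body: Vec<u8>,
  ) -> Result<(), aws_sdk_s3::Error> {
      // S3 expects the base64 encoding of the raw SHA-256 digest.
      let checksum = STANDARD.encode(Sha256::digest(&body));
      client
          .put_object()
          .bucket(bucket)
          .key(key)
          .checksum_sha256(checksum) // mismatch => the PUT is rejected
          .body(ByteStream::from(body))
          .send()
          .await?;
      Ok(())
  }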

amluto
0 replies
6h11m

I would settle for first-class support for object hashes. Let an object have metadata, available in the inventory, that gives zero or more hashes of the data. SHA256, some BLAKE-family hash, and at least one decent tree hash should be supported. There should be a way to ask the store to add a hash to an existing object, and it should work on multipart objects.

IOW I would settle for content verification even without content addressing.

S3 has an extremely half-hearted implementation of this for “integrity”.

fijiaarone
17 replies
16h55m

I don’t understand why everyone wants to replicate AWS APIs for things that are not AWS.

S3 is a horrible interface with a terrible lack of features. It's just file storage without any of the benefits of a file system: no metadata, no directory structures, no ability to search, sort, or filter.

Combine that with high latency network file access and an overly verbose API. You literally have a bucket for storing files, when you used to have a toolbox with drawers, folders, and labels.

Replicating a real file system is not that hard, and when you lose the original reason for using a bucket (because you were stuck in the swamp with nothing else to carry your files in), why keep using it when you're out of the mud?

TheColorYellow
4 replies
16h52m

Because at this point it's a well known API. I bet people want to recreate AWS without the Amazon part, and so this is for them.

Which, to your point, makes no sense because, as you rightly point out, people use S3 because of the Amazon services and ecosystem it is integrated with, not at all because it is "good tech".

acdha
2 replies
16h22m

S3 was the second AWS service, behind SQS, and saw rapid adoption which cannot be explained by integration with services introduced later.

vlovich123
1 replies
13h14m

Storage is generally sticky but I wouldn’t be so quick to dismiss that reason because it might explain why anything would fail to displace it; a bunch of software is written against S3 and the entire ecosystem around it is quite rich. It doesn’t explain the initial popularity but does explain stickiness. Initial popularity was because it was the first good REST API to do cloud storage AND the price was super reasonable.

acdha
0 replies
6h20m

Oh, I’m definitely not saying integration or compatibility have nothing to do with it - only that “horrible interface with a terrible lack of features” seems impossible to reconcile with its immense popularity.

otabdeveloper4
0 replies
6h35m

S3 is just HTTP. There isn't really an ecosystem for S3, unless you just mean all the existing http clients.

vineyardmike
1 replies
14h54m

Does your file system have search? Mine doesn't; instead I have software that implements search on top of it. Does it support filtering? Mine uses software on top again, which an S3 API totally supports too.

Does your remote file server magically avoid network latency? Mine doesn’t.

In case you didn’t know, inside the bucket you can use a full path for S3 files. So you can have directories or folders or whatever.

One benefit of this system (KV-style access) is that it supports concurrent usage better. Not every system needs that, but if you're using an object store you might.

Scaevolus
1 replies
12h25m

S3 exposes effectively all the metadata that POSIX APIs do, in addition to all the custom metadata headers you can add.

Implementing a filesystem versus an object store involves severe tradeoffs in scalability and complexity that are rarely worth it for users that just want a giant bucket to dump things in.

The API doesn't matter that much, but everything already supports S3, so why not save time on client libraries and implement it? It's not like some alternative PUT/GET/DELETE API would be much simpler, though naturally LIST could be implemented myriad ways.

nh2
0 replies
6h16m

There are many POSIX APIs that S3 does not cover. For example directories, and thus efficient renames and atomic moves of sub-hierarchies.

nh2
0 replies
6h37m

I use CephFS.

Blob storage is easier than a POSIX file system, which needs:

Server-client state (the concept of opened files, directories, and their states); locks; and the ability for multiple writers to write to the same file while still providing POSIX guarantees.

All of those need to correctly handle failure of both the client and the server.

CephFS implements that with a metadata server that has lots of logic and needs plenty of RAM.

A distributed file system like CephFS is more convenient than S3 in multiple ways, and I agree it's preferable for most use cases. But it's undoubtedly more complex to build.

klysm
0 replies
16h34m

Replicating a real file system is not that hard

Ummmm what? Replicating a file system is insanely hard

favadi
0 replies
16h31m

S3 is a horrible interface with a terrible lack of features.

Because it turns out that most applications do not require that many features when it comes to persistent storage.

duskwuff
0 replies
16h17m

I don’t understand why everyone wants to replicate AWS APIs for things that are not AWS.

It's mostly just S3, really. You don't see anywhere near as many "clones" of other AWS services like EC2, for instance.

And there's a ton of value in being able to develop against an S3 clone like Garage or Minio and deploy against S3, or being able to retarget an existing application that expected S3 to one of those clones.

didntcheck
0 replies
8h32m

You wouldn't want your "interactive" user filesystem on S3, no, but as the storage backend for a server application it makes sense. In those cases you very often are just storing everything in a single flat folder, with all the associated metadata in your application's DB instead.

By reducing the API surface (to essentially just GET, PUT, DELETE), it increases the flexibility of the backend. It's almost trivial to do a union mount with object storage, where half the files go to one server and half go to another (based on a hash of the name). This can be and is done with POSIX filesystems too, but it requires more work to fully satisfy the semantics. One of the biggest complications is having to support file modification and mmap. With S3 you can instead only modify a file by fully replacing it with PUT. That might again be unacceptable for a desktop OS filesystem, but many server applications already satisfy this constraint by default.
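
The routing for such a union mount can be a one-liner. A hypothetical sketch (the backend URLs are made up):

  // Hypothetical sketch of the union mount: route each object key to a
  // backend by hashing its name. With whole-object PUT/GET/DELETE there
  // is no cross-backend state to keep consistent.
  use std::collections::hash_map::DefaultHasher;
  use std::hash::{Hash, Hasher};

  fn pick_backend<'a>(key: &str, backends: &[&'a str]) -> &'a str {
      let mut h = DefaultHasher::new();
      key.hash(&mut h);
      backends[(h.finish() % backends.len() as u64) as usize]
  }

  fn main() {
      let backends = ["http://store-a.internal", "http://store-b.internal"];
      println!("{}", pick_backend("avatars/42.png", &backends));
  }

A production version would want a hash that is stable across builds (DefaultHasher isn't guaranteed to be), or consistent hashing, so that adding a backend doesn't remap every existing key.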

crabbone
0 replies
9h59m

It's a legitimate question and I'm glad you asked! (I'm not the author of Garage and have no affiliation).

Filesystems impose a lot of data-consistency constraints that make things go slow, in particular when it comes to mutating directory structure. There's also another set of consistency constraints when it comes to dealing with a file's contents. Object stores relax or remove these constraints, which allows them to "go faster". You should, however, carefully consider whether the constraints are really unnecessary for your case. The typical use-case for object stores is something like storing volume snapshots, VM images, layers of layered filesystems, etc. They would perform poorly if you wanted to use them to store the files of your programming project, for example.

acdha
0 replies
16h17m

Replicating a real file system is not that hard

What personal experience do you have in this area? In particular, how have you handled greater than single-server scale, storage-level corruption, network partitions, and atomicity under concurrent access?

Nathanba
0 replies
16h13m

It's because many other cloud services offer sending to S3; that's pretty much it.

computerfan494
14 replies
16h50m

I have used Garage for a long time. It's great, but the AWS sigv4 protocol for accessing it is just frustrating. Why can't I just send my API key as a header? I don't need the full AWS SDK to get and put files, and the AWS sigv4 is a ton of extra complexity to add to my projects. I don't care about the "security benefits" of AWS sigv4. I hope the authors consider a different authentication scheme so I can recommend Garage more readily.

klysm
4 replies
16h35m

Sending your api key in the header is equivalent to basic auth.

vineyardmike
2 replies
15h1m

This is not intended for commercial services. Realistically, this software was made for people who keep servers in their basement. The security profile of LAN users is very different from that of public AWS.

anonzzzies
1 replies
11h29m

The site says it was made for (initially) and used by a commercial French hoster.

vineyardmike
0 replies
10h29m

They’re a self-described “non-profit experimental hosting group”. It’s used to host their website, chat server data, etc.

It’s great they made it (I use it personally!) but that’s more akin to a home lab than a commercial vendor.

computerfan494
0 replies
16h26m

Yep, and that's fine with me. I don't have a problem with basic auth.

computerfan494
1 replies
1h30m

I have done this for my purposes, but it's slow and unnecessary bloat I wish I didn't have to have.

ianopolous
0 replies
29m

5 hmac-sha256's per signature are slow?
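
For reference, those five are the SigV4 key-derivation chain plus the final signature. A sketch with the hmac and sha2 crates:

  // The five HMAC-SHA256s: four derive the signing key (date -> region
  // -> service -> "aws4_request") and the fifth signs the request's
  // canonical string-to-sign.
  use hmac::{Hmac, Mac};
  use sha2::Sha256;

  fn hmac(key: &[u8], data: &[u8]) -> Vec<u8> {
      let mut mac = Hmac::<Sha256>::new_from_slice(key).expect("HMAC takes any key length");
      mac.update(data);
      mac.finalize().into_bytes().to_vec()
  }

  fn sigv4_signature(secret: &str, date: &str, region: &str, string_to_sign: &str) -> Vec<u8> {
      let k_date = hmac(format!("AWS4{secret}").as_bytes(), date.as_bytes());
      let k_region = hmac(&k_date, region.as_bytes());
      let k_service = hmac(&k_region, b"s3");
      let k_signing = hmac(&k_service, b"aws4_request");
      hmac(&k_signing, string_to_sign.as_bytes()) // number five
  }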

surfingdino
1 replies
12h50m

It makes sense to tap into the existing ecosystem of AWS S3-compatible clients.

otabdeveloper4
0 replies
6h38m

Plain HTTP (as in curl without any extra headers) is already an S3-compatible client.

If this 'Garage' doesn't support the plain HTTP use case then it isn't S3 compatible.

zipping1549
0 replies
11h29m

Of course curl has it

6LLvveMx2koXfwn
0 replies
16h38m

Implementing v4 on the server side also requires the service to keep the token in plain text. If it's a persistent password, rather than an ephemeral key, that opens up a whole host of security issues around password storage. And on the flip side, requiring the client to hit an endpoint to receive a session-based token is even more crippling from a performance perspective.

n_ary
4 replies
12h10m

Tried this for my own homelab; either I misconfigured it, or its working memory scales linearly at 2x the stored data. So, for example, if I put in 1GB of data, Seaweed would immediately and constantly consume 2GB of memory!

Edit: memory = RAM

crest
2 replies
11h35m

Are you claiming that SeaweedFS requires twice as much RAM as the sum of the sizes of the stored objects?

n_ary
1 replies
9h23m

Correct. I experimented by varying the data volume; memory use was linearly correlated at 2x the data volume.

ddorian43
0 replies
8h53m

Create a reproducible issue on their GitHub. The developer is very responsive.

TechDebtDevin
0 replies
11h12m

That is odd. It likely has something to do with the index caching and how many replication volumes you configured. By default it indexes all file metadata in RAM (I think) but that wouldn't justify that type of memory usage. I've always used mostly default configurations in Docker Swarm, similar to this:

https://github.com/cycneuramus/seaweedfs-docker-swarm/blob/m...

evanjrowley
0 replies
14h40m

Looks awesome. Been looking for some flexible self-hosted WebDAV solutions and SeaweedFS would be an interesting choice.

makkesk8
4 replies
9h29m

We moved over to Garage after running MinIO in production with ~2PB for about 2 years of headache. MinIO does not deal with small files very well, understandably so, since it doesn't keep a separate index of the files other than straight on disk. While SSDs can mask this issue to some extent, spinning rust, not so much. And speaking of replication, Garage's just works... MinIO's approach, even with synchronous mode turned on, tends to fall behind, and again, small files will pretty much break it altogether.

We saw about 20-30x performance gain overall after moving to garage for our specific use case.

sandGorgon
3 replies
8h37m

Quick question for advice: we have been evaluating MinIO as in-house deployed storage for ML data. This is financial data, for which we have to comply with a crap ton of regulations.

So we wanted lots of compliance features: access logs, access approvals, short-lived (time-bound) access, etc.

How would you compare Garage vs MinIO on that front?

withinboredom
2 replies
6h5m

You will probably put a proxy in front of it, so do your audit logging there (nginx ingress mirror mode works pretty well for that).

mdaniel
1 replies
1h44m

As a competing theory, since both Minio and Garage are open source, if it were my stack I'd patch them to log with the granularity one wished since in my mental model the system of record will always have more information than a simple HTTP proxy in front of them

Plus, in the spirit of open source, it's very likely that if one person has this need then others have this need, too, and thus the whole ecosystem grows versus everyone having one more point of failure in the HTTP traversal

withinboredom
0 replies
1h37m

Hmm... maybe??? If you have a central audit log, what is the probability that whatever gets implemented in all the open (and closed) source projects will be compatible?

CyberDildonics
4 replies
15h5m

What is the difference between a "distributed object storage" and a file system?

vineyardmike
1 replies
14h59m

It’s an S3 api compatible object store that supports distributed storage across different servers.

Object store = store blobs of bytes. Usually by bucket + key accessible over HTTP. No POSIX expectation.

Distributed = works spread across multiple servers in different locations.

CyberDildonics
0 replies
4h40m

store blobs of bytes

Files

by bucket

Directories

key accessible

File names

over HTTP

Web server

crest
0 replies
11h31m

Files are normally stored hierarchically (e.g. atomically move directories), and updated in place. Objects are normally considered to exist in a flat namespace and are written/replaced atomically. Object storage requires less expensive (in a distributed system) metadata operations. This means it's both easier and faster to scale out object storage.

crabbone
0 replies
10h4m

There are a few.

From the perspective of consistency guarantees, object storage gives fewer of them (this is seen as allowing implementations to be faster than typical filesystems). For example, since there isn't a concept of directories in an object store, the implementation doesn't need to deal with the problems that arise while copying or moving directories with files open in those directories.

There are some non-storage functions that only filesystems perform, not object stores: suid bits, for example.

It's also much more common to use object stores for larger chunks of data such as whole-disk snapshots, VM images, etc., while filesystems aim for the middle size (the small end being RDBMSs), such as text files you'd open in a text editor. Subsequently, each is optimized for its objectives: filesystems care a lot about what happens when small, random, incremental, and possibly overlapping updates hit the same file, while object stores care most about the performance of sequential reads and writes.

This excludes the notion of "distributed" as both can be distributed (and in different ways). I suppose you meant to ask about the difference between "distributed object storage" and "distributed filesystem".

neon_me
3 replies
10h33m

What's the motivation behind a project like this one?

We've got Ceph, MinIO, SeaweedFS... and a dozen others. I am genuinely curious: what is the goal here?

koito17
1 replies
10h18m

Minio assumes each node has identical hardware. Garage is designed for use-cases like self-hosting, where nodes are not expected to have identical hardware.

otabdeveloper4
0 replies
6h37m

Minio doesn't; it has bucket replication and it works okay.

WhereIsTheTruth
0 replies
4h9m

performance, therefore cheaper

seaghost
2 replies
6h14m

I want something very simple to run locally that has S3 compatibility, just for dev work and testing. Any recommendations?

surfingdino
1 replies
12h51m

There's also OpenStack Swift.

comvidyarthi
1 replies
16h37m

Is this open source?

anonzzzies
1 replies
9h20m

NLNet sponsored a lot of nice things.

lifty
0 replies
7h43m

The EU, but yeah. NLNet are the ones that judged the applications and disbursed the funds.

Daviey
1 replies
11h42m

Last time I looked at Garage, it only supported paired storage replication, such that if I had a 10GB disk in location A and a 1TB disk in locations B and C, it would only support "RAID1-esque" mirroring, so my storage would be limited to 10GB.

leansensei
0 replies
8h0m

That's a deliberate design decision.

sunshine-o
0 replies
9h23m

I really appreciate the low memory usage of Garage compared to Minio.

The only thing I am missing is the ability to automatically replicate some buckets on AWS S3 for backup.

icy
0 replies
11h25m

I've been running this on K3s at home (for my website and file server) and it's been very well behaved: https://git.icyphox.sh/infra/tree/master/apps/garage

I find it interesting that they chose CRDTs over Raft for distributed consensus.
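
For the curious, the CRDT route means every replica merges state locally instead of agreeing on an operation log. A toy last-writer-wins register shows the shape of it (illustrative only, not Garage's actual types):

  // Toy last-writer-wins register. merge() is commutative, associative,
  // and idempotent, so replicas converge no matter the order in which
  // they see updates, with no leader election or log replication needed.
  #[derive(Clone, Debug)]
  struct LwwRegister<T> {
      timestamp: u64, // logical clock; real systems break ties, e.g. by node ID
      value: T,
  }

  impl<T: Clone> LwwRegister<T> {
      fn merge(&mut self, other: &Self) {
          if other.timestamp > self.timestamp {
              self.timestamp = other.timestamp;
              self.value = other.value.clone();
          }
      }
  }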

MoodyMoon
0 replies
7h34m

Apache Ozone is an alternative object store that runs on top of Hadoop. Maybe someone with experience running it in a production environment can comment on it.

https://ozone.apache.org/