
Willow Protocol

tux3
35 replies
1d4h

How does this compare to IPFS?

I personally found IPFS very disappointing in practice, so I'm very hopeful for a successor.

(The promise of IPFS is great, but it is excruciatingly slow, clunky, and buggy. IPFS has a lot of big ideas but suffers from a lack of polish that would make the Augean stables look clean. And as soon as you scale to larger collections of files, it quickly crumbles under its own weight. You can throw more resources at it, but past some point it just falls over. It simply doesn't work outside of small-scale tests.)

rklaehn
17 replies
1d3h

If you are looking for something similar to IPFS but a bit more minimalistic and performance-oriented, check out iroh: https://github.com/n0-computer/iroh

It is a set of open-source libraries for peer-to-peer networking and content-addressed storage. It is written in Rust, but we have bindings for many languages.

One part of iroh is a work-in-progress implementation of the Willow spec. The lower layers include a networking library similar to libp2p and a library for content-addressed storage and replication based on blake3 verified streaming.

Most iroh developers have been active in the IPFS community for many years and have shared similar frustrations... See this talk from me in 2019 :-)

https://youtu.be/Qzu0xtCT-R0?t=169

ramrunner0xff
4 replies
1d

https://veilid.com/ should also be a great alternative. I haven't had time to use it yet, but it was built to address the performance issues of IPFS and to allow both DHT-style content discovery and direct websocket connections for streaming (and to do all of that in an anonymous fashion).

rklaehn
3 replies
23h30m

This looks very interesting. They made very similar choices to the ones we (iroh) did: Rust, ed25519 keys, blake3.

They seem to do their own streams, while we are adapting QUIC to a more p2p approach. The holepunching approach also seems to be different. But I would love to get more details.

ramrunner0xff
2 replies
22h50m

https://yewtu.be/watch?v=Kb1lKscAMDQ

This was the presentation at DEF CON 31. I will also check out iroh! Thanks for working on building something in this space; it is much, much needed!

rklaehn
1 replies
22h13m

Thanks. This is awesome. I think they are doing more work themselves in terms of crypto, whereas we rely more on QUIC+TLS.

Regarding holepunching, our approach is a bit less pure p2p, but it has quite good success rates. We copy the DERP protocol from Tailscale.

I am confident that we have a better story regarding handling of large blobs. We don't just use blake3, but blake3 verified streaming to allow for range requests.

I also wrote my own Rust library for blake3 verified streaming that reduces the overhead of the verification data: https://crates.io/crates/bao-tree
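
For readers who haven't seen the technique, here is a minimal sketch of the idea (my illustration, assuming the blake3 crate; it is not code from bao-tree or iroh, and the real format authenticates the chunk hashes through a Merkle tree rooted in the blob's overall blake3 hash rather than trusting a flat hash list):

    // Conceptual sketch only: per-chunk hashes let a receiver verify a
    // requested byte range as it streams in, instead of having to
    // download and hash the entire blob first.
    const CHUNK: usize = 1024; // blake3's native chunk size

    fn chunk_hashes(data: &[u8]) -> Vec<blake3::Hash> {
        data.chunks(CHUNK).map(blake3::hash).collect()
    }

    fn verify_chunk(chunk: &[u8], expected: &blake3::Hash) -> bool {
        // blake3::Hash comparison is constant-time.
        blake3::hash(chunk) == *expected
    }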

I tried to get on their Discord at https://veilid.com/discord, but I get an invalid invite. Do you know a better way to get in touch?

ramrunner0xff
0 replies
20h7m

Hmm, this is strange; I tried the invite and it worked for me. If you are on the fediverse, @thegibson@hackers.town is part of the team.

Thanks for the links, I will get in touch personally when I try iroh :)

binary132
4 replies
1d3h

Interesting — does the Rust crate export a C API then?

rklaehn
3 replies
1d2h

Not officially. We currently have bindings for Rust, Python, Golang, and Swift.

These were the most asked-for bindings (Python for ML, Golang for networking, and Swift for iOS apps).

We are using uniffi: https://mozilla.github.io/uniffi-rs/
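
For a flavor of what that looks like (a hypothetical sketch of uniffi's proc-macro mode, not actual iroh binding code):

    // Hypothetical example: annotating a plain Rust function is enough
    // for uniffi to generate the FFI scaffolding plus foreign-language
    // wrappers (Python, Swift, Kotlin; Go via a third-party generator).
    #[uniffi::export]
    pub fn greet(name: String) -> String {
        format!("hello, {name}")
    }

    // Emits the scaffolding for everything exported above.
    uniffi::setup_scaffolding!();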

Would you need C or C++ bindings?

binary132
2 replies
20h28m

Ah, I see. Hm. I might be interested in a C API, since that could be used equally well from C, C++, and Lua. I was really just wondering what the common implementation between the bindings was, since it struck me as unusual that there would be a number of bindings but not C (which, AIUI, is the only interface besides Rust itself that Rust can really export).

rklaehn
1 replies
11h30m

So, I am not the one doing the bindings, so take this with a grain of salt.

It seems uniffi does create C-compatible bindings in order to generate the bindings for all these other languages. But those are internal bindings that are ugly and not intended to be used externally.

binary132
0 replies
3h34m

Perhaps if an issue were opened describing the necessary steps to provide a fluent and stable C API, staying consistent with the uniffi approach you're using for the other wrappers, then someone enterprising could pick up the ball and run with it. :)

snvzz
3 replies
16h56m

iroh seems to have a couple of "killer tools" already, known as dumbpipe[0] and sendme[1].

Although I am concerned that while dumbpipe does mention cryptography, sendme's webpage makes no mention of it (?).

0. https://www.dumbpipe.dev/

1. https://iroh.computer/sendme

rklaehn
2 replies
11h31m

It's using the same transport. Basically sendme is like dumbpipe, but adds blake3 verified streaming from the iroh-bytes crate for content-addressed data transport.

snvzz
1 replies
8h10m

I imagined as much, but the website still does not mention encryption.

That works against it. E2EE is a requirement today.

rklaehn
0 replies
6h49m

Thanks for letting us know. We will add a section about encryption.

These tiny tools are basically one-week projects to show off the tech, but they try to be useful on their own as well.

orthecreedence
2 replies
21h2m

Hi, I'm super intrigued by Willow and your work on iroh. Do you have any kind of documentation on how iroh deviates from Willow, or what parts of Willow are planned to be implemented vs omitted?

rklaehn
1 replies
1h24m

Not yet. We have been busy with other stuff, and the Willow spec has been a bit of a moving target until now.

We would like to take our Rust Willow implementation and separate it a bit more from our code base, so that iroh documents are just users of the willow crate.

orthecreedence
0 replies
17m

That makes sense. I think I might try to really jam through the Willow docs and get a good understanding. If it all looks good, I might be able to help out splitting these things out =].

ComputerGuru
7 replies
1d

Willow solves the biggest problem I have always had with IPFS: it’s content-addressable, which is nice for specific things but not generic enough to make the protocol actually practical and usable in the real world outside of specific use cases. (Namely, you can’t update resources or even track related updates to resources.)

(Mathematically, a name-addressable system is actually a superset of content-addressable systems as you can always use the hash of the content as the name itself.)
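
A toy sketch of that point (hypothetical store, using blake3 for the content hash):

    use std::collections::HashMap;

    struct NamedStore(HashMap<String, Vec<u8>>);

    impl NamedStore {
        // Content-addressed put: the name is derived from the data, so
        // whatever the name resolves to can never silently change.
        fn put_immutable(&mut self, data: Vec<u8>) -> String {
            let name = blake3::hash(&data).to_hex().to_string();
            self.0.insert(name.clone(), data);
            name
        }

        // Name-addressed put: the same name can point to new data later.
        fn put_mutable(&mut self, name: &str, data: Vec<u8>) {
            self.0.insert(name.to_string(), data);
        }
    }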

rklaehn
5 replies
21h50m

To be fair, IPFS offers not just content addressing but also a mechanism for mutability: IPNS. You can think of a Willow namespace (or iroh document) as a key-value store of IPNS entries.

The problem with IPNS is that its performance is... not great... to put it politely, so it is not really a useful primitive to build mutability on.

You end up building your own thing using gossip, at which point you are not really getting a giant benefit anymore.

unshavedyak
4 replies
20h15m

Is the performance a critical design flaw or just an implementation issue?

tadfisher
1 replies
13h28m

Imagine if DNS supplied every URL and not just domain names. You need some mechanism to propagate resource changes. IPNS has two practical mechanisms: a global DHT that takes time to propagate, and a pub/sub that requires peers to actively subscribe to your changes.

anacrolix
0 replies
7h38m

btlink does DNS per domain name, which you could argue is a sweet spot between too many queries and being too broad. At least in the case of the web, it works nicely.

rklaehn
0 replies
9h42m

Difficult to answer.

IPNS uses the IPFS kademlia DHT, which has some performance problems that you can argue are fundamental.

For solving a similar problem with iroh, we would use the BitTorrent mainline DHT, which is the largest DHT in existence and has stood the test of time: it still exists despite lots of powerful entities wanting it to go away.

It also generally has very good performance and a very minimalist design.

There is a Rust crate to interact with the mainline DHT, https://crates.io/crates/mainline , and a more high-level idea to use DHTs as a kind of p2p DNS: https://github.com/nuhvi/pkarr

anacrolix
0 replies
7h38m

It's a design flaw.

nayuki
0 replies
23h14m

a name-addressable system is actually a superset of content-addressable systems as you can always use the hash of the content as the name itself

It's a superset in that sense but not a superset in another sense.

In a content-addressable system, if I post a link to another piece of content by hash, then no one can ever substitute a different piece of content. Like, if I reference the hash of a news article, no one can edit that article after the fact without being detected. This is a super-useful feature of CAS that is not a feature of NAS. Other implications:

* I can review a piece of software, deem that it's not malware in my opinion, and link to the software by hash. No one can substitute a piece of malware without detection.

* Suppose you get a link from a trusted source. Now you can download a copy of the underlying content from any untrusted source, without a care about authentication or trusted identities. This describes BitTorrent.

detourdog
5 replies
1d2h

Can you quantify the breaking point of IPFS in terms of the number of files? I was considering it for a project that has fewer than 200,000 entries.

foobiekr
2 replies
1d

I'd be very curious what project you have for which IPFS is a good solution.

detourdog
1 replies
23h22m

I'm not sold on IPFS, but the idea of using a file system as a top-level global index is attractive to me. I find the two best references for human information are global location and time. I think an operating system structured around those constants could be a winner.

I'm not sold on IPFS and will look at Willow and iroh.

foobiekr
0 replies
20h57m

A global hash-based index is literally an undergraduate project to do well. You could even ride atop BitTorrent if you really had to.

tux3
1 replies
1d2h

It depends on how many files you have, but also on the file sizes. My understanding is that IPFS splits files into 256 kB chunks, each with a content ID (CID), and then, when you expose your project, it tries to advertise every CID of every file to peers.

200,000 files could take a while to advertise, but from memory it should work; it should hang for less than 15 minutes, depending on your hardware, file sizes, quality of connection to your peers, alignment of the planets, etc.

If you add one order of magnitude above that, it starts to become tricky. It is manageable if you shard over several nodes and look for workarounds for the perf issues. But if you keep growing a bit past that point, it can't keep up with publishing every small chunk of every file one by one fast enough.
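
To make the scaling concrete, a quick back-of-envelope (the 1 MiB average file size and the leaf-chunk-only count are assumptions for illustration; IPFS also publishes CIDs for file roots and intermediate DAG nodes):

    fn cids_to_advertise(files: u64, avg_file_size: u64) -> u64 {
        let chunk_size: u64 = 256 * 1024; // default IPFS chunker
        files * avg_file_size.div_ceil(chunk_size)
    }

    fn main() {
        // 200,000 files of ~1 MiB each: ~800,000 provider records.
        println!("{}", cids_to_advertise(200_000, 1 << 20));
        // One order of magnitude more: ~8 million records, which is
        // where announcing every chunk one by one stops keeping up.
        println!("{}", cids_to_advertise(2_000_000, 1 << 20));
    }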

But it's also very possible perf has improved since the last time I tried it, so definitely take this with a grain of salt, you might want to try installing and running the publish command and see what happens.

snvzz
0 replies
17h31m

"hang" sounds pretty bad.

Even if there's a lot of sharding and propagating and whatever to do, it should happen in the background, and never interfere with user experience.

From your description, it seems their implementation has serious issues.

wharvle
0 replies
1d2h

Being written in Go may have made development of the reference client fast (that was the creators' contention when I asked, anyway), but it killed its growth as a standard. The inability to have a portable lib-ipfs that could quickly, easily, and completely give almost any language ecosystem or daemon IPFS capabilities is a real drag.

anacrolix
0 replies
7h37m

Consider https://github.com/anacrolix/btlink. It's a proof of concept and has all the basics. I designed it, I worked for IPFS, and I am the maintainer of a popular DHT and BitTorrent client implementation.

wavemode
26 replies
1d3h

Some questions in protocol design have no clear-cut answer. Should namespaces be identified via human-readable strings, or via the public keys of some digital signature scheme? That depends entirely on the use-case. To sidestep such questions, the Willow data model is generic over certain choices of parameters. You can instantiate Willow to use strings as the identifiers of namespaces, or you could have it use 256 bit integers, or urls, or iris scans, etc.

This makes Willow a higher-order protocol: you supply a set of specific choices for its parameters, and in return you get a concrete protocol that you can then use. If different systems instantiate Willow with non-equal parameters, the results will not be interoperable, even though both systems use Willow.

Help me out here - isn't the point of a protocol that two independently developed systems don't have to agree on how to implement the protocol? What value does Willow have if two systems that both purport to be "Willow-compatible" aren't compatible with each other?

layer8
8 replies
1d2h

Protocols can have parameters. The protocol will be interoperable between parties who choose compatible parameters.

For example, SSH only works if both sides support and can agree on the same cryptographic algorithms, which is something that SSH is parametrized over.

wavemode
7 replies
1d1h

Unless I missed it somewhere, Willow has no specified handshake procedure. So there's no standardized way for the two sides to come to an agreement on how to communicate. (Willow appears to be completely agnostic even of the data encoding used to communicate in the first place.)

In that sense it is even more high level and abstract than parameterized protocols like SSH.

sph
2 replies
20h7m

And I think it's a good idea. Take HTTP, for example. It is much more than its wire format. Freeing it from those details means that what is valid today:

    GET / HTTP/1.1
    Host: example.com
could have the same semantics in any format:

    { 
      "method": "GET",
      "version": "1.1",
      "headers": [{"name": "Host", "value": "example.com"}]
    }

Of course a server accepting the former won't be able to communicate with the latter, but that's just an implementation detail that Willow does not want to commit to at this stage, and it does not make the protocol any less complete. Just a bit impractical.

sillysaurusx
0 replies
18h58m

The history of cryptography failures suggests that it’s better for a protocol to be opinionated and complete rather than open ended.

ianburrell
0 replies
16h19m

Format-agnostic layers are a good thing. HTTP/2 depends on this: a different encoding but the same meaning. But you have to choose some encoding for communication to happen. Otherwise it is a meta-protocol.

jcul
2 replies
19h32m

Not all protocols have to have a handshake to negotiate how they will communicate. Sometimes you just need to have both sides configured the same way, e.g. PTP clocks being in the same domain, having the same delay-request mechanism, etc.

skissane
1 replies
15h5m

Not all protocols have to have a handshake to negotiate how they will communicate. Sometimes you just need to have both sides configured the same way.

True, but the industry has been moving away from "you have to configure both ends" towards autonegotiation.

E.g. with RS-232 you had to configure data rates (e.g. 9600 baud) and encoding (e.g. 8N1: 8 data bits, no parity bit, 1 stop bit), and if you didn't have the same config at both ends, communication wouldn't happen. USB, the primary successor, determines stuff like that by negotiation. Similarly, with Ethernet autonegotiation, you no longer have to worry about manually configuring speed/duplex on each end, which I can remember being a big drama 20 years ago.

jcul
0 replies
11h19m

Yeah, that is true. And even as I typed that I thought: PTP can be fairly hands-off and auto-negotiates most stuff. I guess between E2E and P2P it can't really, because they can represent different physical infrastructure.

Probably not the best example, I just happened to have the PTP spec open at the time!

layer8
0 replies
1d1h

True, it was an example of how “the same protocol” doesn’t necessarily imply compatibility. The algorithm negotiation procedure doesn’t guarantee that there will be an agreement, so it is somewhat secondary to the argument.

smaudet
7 replies
1d3h

No, I think you are right, this isn't a protocol. It's a protocol generator...

"Higher order" is some nonsense, and would make me shy away from using it...

black_puppydog
4 replies
1d2h

Huh? I'd understand your phrase "protocol generator" to be the equivalent of "higher order protocol" in the same way that a "higher order function" can be seen as a "function generator"...

smaudet
3 replies
1d1h

The difference would be that a function, order notwithstanding, is (a) callable (site). "Higher order" merely means one that takes/returns other functions, but they are still, fundamentally, functions (callable sites).

A protocol has the property that it is implementation independent, but also that it has a defined interface (i.e. it is immediately usable).

This is neither (no defined interface, implementation dependent). If it doesn't share either property with a protocol, then you can't claim that it is truly a protocol, "higher order" or otherwise.

This confused verbiage is what should be cause for concern - note that I could claim a "higher order" protocol with JSON or gRPC - it's all the basic building blocks for a protocol, just both sides need to implement the same stuff!

Except neither JSON nor gRPC is crazy enough to claim to be a "higher order" protocol, which to me puts this in the rubbish bin of over-complicated technologies looking for a problem, like SOAP, JavaBeans, OSGi - all of these could be claimed to be "higher order" protocols as well.

The term is meaningless, and so, I assume, is this project.

jazzyjackson
2 replies
1d

It's wild to me that you would dismiss a project as meaningless because you can't immediately understand it.

smaudet
1 replies
1d

Oh I might read their specs: "Willow is a family of specifications:" https://willowprotocol.org/specs/index.html#specifications

This part looks useful.

But is it useful for there to be a "Willow" family of protocols? Probably not.

Their claims on the front page are extraordinary. Extraordinary claims require extraordinary evidence, and heading the page with nonsense is not a good start.

mplewis
0 replies
22h45m

Pretty rude to dismiss a project without reading any of its documentation.

gray_-_wolf
1 replies
1d3h

It is an unusual term in the area of protocols, but it seems understandable that it tries to draw a parallel to higher-order functions. So "some nonsense" might be a bit strong...

smaudet
0 replies
1d1h

Protocols are useful, and as such the costs of using a bad one are high. Confused verbiage does not instill confidence that the authors know what they are doing, or that there is real benefit.

So, perhaps it is strong language, but I think it is a reasonable reaction.

whizzter
1 replies
1d1h

I think the idea here is that Willow is meant to handle "higher order" issues like encryption, and especially the prickly problem of sharing encrypted data within a more cooperative environment, so that application builders can focus on their more specific applications.

Say that I want to implement something Figma-like for designing drug-runner operations; Willow seems to be an excellent building block. (Yes, the example is kinda out there, but it's meant to indicate the genericity intended here.)

conradev
0 replies
1d

I wish the web page came out and said what Willow actually provides and what is up to the developer.

As far as I can tell, this is primarily a cryptographic specification, like Noise (http://www.noiseprotocol.org), except for a stateful key-value store instead of stateless connections.

tenebrisalietum
1 replies
21h45m

Is IP useless because the other end might not support UDP or TCP?

More practically/less absurdly: is SSL useless because side A doesn't support the same ciphers as side B?

Perhaps a Willow negotiation protocol would be needed to reconcile the two sides, but that's a bad idea from a security perspective because it enables downgrade attacks.

smaudet
0 replies
20h59m

Comparing apples and oranges.

A protocol is allowed to have presumptions and then provide an interface.

It's not allowed to have no interface at all and no presumptions (a void).

rklaehn
0 replies
1d2h

Think of it like const generics in languages like Rust or C++.

You can make two data structures with a const parameter.

If the parameter is not the same, they are not compatible (not the same type). The parameter can be tuned according to the specific needs of the application.
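
A minimal Rust sketch of the analogy:

    // Two instantiations of one generic definition are distinct,
    // incompatible types:
    struct Packet<const SIZE: usize> {
        payload: [u8; SIZE],
    }

    fn main() {
        let a: Packet<16> = Packet { payload: [0; 16] };
        // let b: Packet<32> = a; // compile error: mismatched types
        // Likewise, two Willow instantiations with different parameters
        // are both "Willow", yet cannot interoperate.
        let _ = a;
    }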

cobertos
0 replies
1d3h

By having a more generic protocol on top, it allows the same tools to be used for different specific end results. So shared libraries and common debugging tools can benefit more use cases. You might even make a "higher-order" tool that can work with any willow data, at the cost of specific UI affordances that can be used when you know more about the underlying data.

This is sort of how ActivityPub is a thing, but it underpins multiple, sorta-but-not-really interoperable systems like Lemmy and Mastodon.

binary132
0 replies
1d3h

FWIW, HTTP allows subprotocol / backwards-compatible negotiation. Perhaps that could be similar to how implementations of this might need to cooperate.

alex-mohr
0 replies
18h49m

Willow appears closer to a "Protocol Construction Kit" than a protocol itself.

As a construction kit, it has value for people who want to make protocols where they'll control both ends, but don't have to re-implement basic table stakes.

Joker_vD
0 replies
1d2h

What value does Willow have if two systems that both purport to be "Willow-compatible" aren't compatible with each other?

That you can claim to support Willow, or to be Willow-compatible, without actually having to interoperate with your competitors. See e.g. the usage history of X.509 in the nineties.

ynniv
14 replies
1d2h

Total erasure of data.

This is disappointing. What's been read can never be un-read; to say otherwise is deceptive.

black_puppydog
9 replies
1d2h

In the same way, once an attacker can exploit some weakness in a system, it's game over. Yet defense in depth is a thing and makes it much less likely that bad things happen.

In this case, yes, it's impossible to guarantee that some malicious peer doesn't ignore my "plea to delete". But combined with the fact that my data will only be replicated to/by peers I already have a trust relationship with (as opposed to e.g. on a blockchain) it provides another layer of protection that a system without deletion simply doesn't have. Not perfect, but not useless either.

ynniv
8 replies
1d2h

Yes, but that's not what "Total erasure of data" means.

The project's goals are hard and noble. It would be better to under-promise and over-deliver than to make everyone question their claims. Maybe I'm just a grumpy old man at this point, but there are already too many caveat emptors in computing. They could have said "better" erasure of data.

wharvle
3 replies
1d2h

It's a handy feature to have in the protocol if you're operating detached networks using it, and can control all the clients. If you're using it as internal infrastructure. Which, personally, is the only way I've ever been interested in using these sorts of things.

You're right that it's nigh-meaningless for a public cluster.

ynniv
2 replies
1d2h

Ok, but where exactly is that when the FBI is pulling Bitcoin private keys out of files accidentally synchronized to iCloud Files? These are hard problems.

wharvle
1 replies
1d2h

I don't understand your concern. Guaranteed (if you control the clients, for ordinary values of "guaranteed", not, like, mathematically-rigorous ones) deletion is a handy feature if you need to be able to comply with regulations, or just want to be sure you're not wasting disk space on stuff you intended to delete, without having to do extra work.

Attackers are a whole other matter, and their existence doesn't make the feature pointless, for the above reasons.

ynniv
0 replies
1d2h

deletion is a handy feature if you need to be able to comply with regulations

This is a good point, and "GDPR compliant erasure of data" would be a great way to explain it. As a user I can guess what that means, and as an engineer it doesn't sound like magic.

detourdog
2 replies
1d2h

What is the point of not trying to accomplish hard and noble things? I'm sure there are plenty of people willing to take shortcuts.

I appreciate people trying to do something hard and noble.

ynniv
1 replies
1d2h

Me too! But let's be honest about how things work.

detourdog
0 replies
23h26m

I thought I was. I don't think it has much to do with the way the world works; I think it might have more to do with how one works the world. There are plenty of people that don't want to try the impossible, but the impossible should be explored.

rklaehn
0 replies
1d2h

Prefix pruning is a very different approach from tombstones. An update will actually remove data, not just mark the data as removed.

Maybe "total erasure of data" is too strong a promise, but the fact that you cannot force nodes you don't control to unsee things is common knowledge, so in my opinion this does not need a qualifier.

xpe
2 replies
21h16m

I can appreciate the spirit of the comment. I'm more pedantic than is typical -- perhaps even more than is healthy! Even so, I don't think the claim is deceptive.

Willow's claim has to do with erasure of the _networked_ data. It doesn't claim that copies people make are destroyed. Almost everyone understands and expects that if you can view data, you usually can somehow make some kind of copy of it. The question usually comes down to: how good of a copy?

Perhaps the best way to prevent perfect copying of data is to prevent someone from viewing it on a device they control.

aidenn0
1 replies
20h56m

Almost everyone understands and expects that if you can view data, you usually can somehow make some kind of copy of it.

This is true for the target audience of the article, but certainly not for people in general. It might be true for people in their 20s, but I strongly doubt it's true for any other age range.

xpe
0 replies
3h12m

I take your point. How well this is understood is an empirical question. I retract my claim that it holds for "almost everyone" from the broader population.

orthecreedence
0 replies
21h11m

I don't really get what your nitpick is here. There is no conceivable way in the universe to unshare information that has been shared. The idea here is that you can stop further sharing of that information. I think that's a fairly reasonable and obvious interpretation.

catapart
10 replies
1d4h

So this is pure spec? No implementations at all?

b_fiive
4 replies
1d4h

iroh documents are a work-in-progress implementation of Willow: https://github.com/n0-computer/iroh

We've been working with the Willow team as we go and giving feedback on the spec.

Disclosure: I work on iroh.

elmolino89
3 replies
1d4h

The iroh name resembles iRODS a bit, another system for distributed file sharing and fine-grained permissions.

Quick googling did not give me a proper grasp of the use cases for iroh/IPFS vs iRODS.

Would you be willing to list the benefits of iroh vs iRODS?

rklaehn
1 replies
1d3h

I was not aware of iRODS.

Iroh is named after a certain fictional character that likes tea. Any similarity is a coincidence.

But it seems like iRODS is much more high-level than iroh. E.g. iroh certainly does not contain anything for workflow automation. You could probably implement something like iRODS using iroh-net and iroh-bytes.

jw_cook
0 replies
1d1h

Iroh is named after a certain fictional character that likes tea.

"The file was in my sleeve the whole time!"

b_fiive
0 replies
1d3h

Wow, thank you for pointing me to iRODS, I was not aware of the project! The big difference I'm seeing as I read the iRODS docs is that it is a datacenter-grade data management _service_, whereas iroh is a multiplatform SDK for building your own applications.

It seems like one would want iRODS if they have massive amounts of highly sensitive data that needs fine-grained access control. You would want iroh if you're building an app that uses direct connections between end-user devices to scale data sync.

castles
2 replies
1d4h
nathan_phoenix
1 replies
1d4h

Still confused, to be honest...

What does it mean that Earthstar will become a Willow protocol? Isn't it an implementation of Willow?

m3talsmith
0 replies
1d4h

There are at least two implementations on that page:

- One in TypeScript
- One in Rust

candiddevmike
1 replies
1d4h

A spec without an implementation is a lovely idea at best.

rklaehn
0 replies
1d3h

iroh developer here.

Willow was not developed in a vacuum.

The Willow folks have worked with us while we have implemented many ideas from Willow, starting with range-based set reconciliation (https://arxiv.org/abs/2212.13567).

They have been open to removing parts that have turned out to add too much complexity to implementations.

throwaway2562
7 replies
1d3h

Still confused here. What is the actual, concrete ‘as-a-user-I-want-to’ application for which this is meant to be an ideal fit? Sorry if this is a dumb question.

layer8
3 replies
1d2h

It’s like Dropbox, including sharing, but without a centralized service; instead it is peer-to-peer.

dirkf
2 replies
1d1h

So something like https://syncthing.net/ ?

rklaehn
0 replies
1d1h

It's more generic than that.

Syncthing is designed specifically for file-system sync (and does a very good job of it). Willow could be used for file-system tasks, but also for storing app data that is unrelated to file systems, like a KV-store database.

You should be able to write a good Syncthing-like app using the Willow protocol, especially if you choose blake3 as the hash function.

layer8
0 replies
1d1h

Syncthing doesn’t have sharing support AFAICS, but yes, Willow could be used as the underlying protocol for something like Syncthing.

jbverschoor
0 replies
1d

Same here, no clue what it does. It could be a syncthing/dropbox something, it could be some sharing protocol. I dunno.

jauntywundrkind
0 replies
1d1h

It should be able to underpin most apps. The sky is the limit. Or your imagination is the limit; whichever comes first.

This is a protocol for generic shared information spaces, where each person still owns & can manage permissions for their pieces of data in the space. It's a general idea that's present & implicit in most existing online spaces.

bo1024
0 replies
1d2h

Yeah, it would have helped me if they walked through what it actually means to "use" Willow. Do I install something like Dropbox on my computer? Do I write code that calls Willow as a library?

jd3
2 replies
20h15m

Other commenters have mentioned IPFS, Dropbox, Syncthing, etc., but this most closely resembles http://upspin.io/, with the caveat that Willow is p2p and Upspin uses a centralized key server.

https://www.youtube.com/watch?v=ENLWEfi0Tkg

KomoD
1 replies
20h3m

http://upspin.com/

That does not go anywhere.

jd3
0 replies
19h59m

Oops, should be io tld. Fixed!

pmarreck
1 replies
1d2h

My first question (given that IPFS doesn’t seem to do this well) is “does it scale?”

It would still be useful otherwise if not… assuming there’s an actual client/server for it on Mac/Linux…

rklaehn
0 replies
1d2h

This is a good question. But it is worth noting that not everything has to scale globally.

E.g. in iroh-sync (which is an experimental implementation of the Willow protocol) you are not concerned with global scaling. You care only about nodes that are in the same document.

If you request the hash QmciUVE1BqKPXMSvTTGwHZo1ywYdZRm9FfBvEJkB6J4USb via IPFS, you are trying to globally find anybody that has this hash, which is a very difficult task.

If you ask for some content-addressed data in an iroh document, you know to only ask nodes that participate in that particular document, which makes the task much easier.

Edit: regarding clients, iroh is released for macOS, Windows, and Linux. Iroh as a library also works on iOS. Download instructions are here: https://iroh.computer/docs/install

cranberryturkey
1 replies
1d2h
philsnow
0 replies
22h5m

Unreviewed Content

This community has not been reviewed and might contain content inappropriate for certain viewers. View in the Reddit app to continue.

Wow, what absolute horseshit. The march to acquire marketing signals at any cost continues.

waterheater
0 replies
1d2h

Another webpage compares Willow to other protocols like IPFS: https://willowprotocol.org/more/compare/index.html#willow_co...

According to them, data on IPFS is immutable, stateless, and globally-namespaced, whereas data on Willow is mutable, stateful, and conditionally-namespaced. I interpret Willow as an authenticated, permissioned, content-based, globally-addressed, distributed database system, where an address has the hierarchy and expressiveness of a URL.
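
For a feel of what "mutable, stateful, conditionally-namespaced" means, this is roughly the shape of an entry in the data model (the field types here are simplified assumptions; the spec deliberately leaves them as parameters of the instantiation):

    struct Entry {
        namespace_id: [u8; 32],   // which shared space this belongs to
        subspace_id: [u8; 32],    // e.g. an author's public key
        path: Vec<Vec<u8>>,       // hierarchical components, URL-like
        timestamp: u64,           // microseconds; newer entries win
        payload_digest: [u8; 32], // hash of the payload
        payload_length: u64,
    }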

One particularly nice feature about the documentation: if you hover over an underlined word (https://willowprotocol.org/specs/data-model/index.html#data_...), a pop-up box provides a definition or explanation. Importantly, some terms in the pop-up are underlined themselves, so you can dig down into the terminology with great ease. More projects should implement this functionality.

snvzz
0 replies
16h47m

Willow specification published 17/01/2024

Please try to follow RFC3339 when writing dates.

E.g. 11/01/2024 is ambiguous, as it could be January 11th or November 1st, whereas 2024-01-11 is RFC3339-compliant and does not exhibit this problem.

randall
0 replies
1d2h

So stoked about this. Lower level than holepunch, and it sounds like it has everything I need to get going.

orthecreedence
0 replies
21h4m

This is kind of exactly what I've been looking for. I've been trying to weave stuff together with libp2p, but this looks very promising as a way to handle a lot of the lower-level junk I don't care about. While I didn't go in-depth on the docs, I can see that this would be able to model a lot of different applications right off the bat. Very cool.

m3kw9
0 replies
19h11m

Decentralized and no ICO needed

jhardy54
0 replies
1d3h

Aljoscha and gwil are excellent people, I’m excited to see them working together. Looks to me like they’re solving some of the biggest problems with Secure Scuttlebutt.

j-pb
0 replies
7h46m

"Wrangling the complexity of distributed systems shouldn’t mean we trade away basic features like deletion, or accept data structures which can only grow without limit."

The CALM theorem would like a word with you.

You simply can't have consistent non-monotonic systems.

Forgetting is ok, deleting is not.

KomoD
0 replies
20h4m

I really like the illustrations

Kinrany
0 replies
23h45m

What's the purpose of subspaces, given that there are namespaces?

What's the purpose of having separators in the keys?