
I moved my blog from IPFS to a server

p4bl0
62 replies
21h40m

I'm surprised by the beginning of the post talking about pioneering in 2019. Maybe it is the case for ENS (I never cared for it), but regarding IPFS, my website was available over IPFS 3 years before that in 2016 [1]. Granted, I was never IPFS only. I also started publishing a series of articles about censorship-resistant internet (Tor, I2P, IPFS, and ZeroNet) in 2600 Magazine – The Hacker Quarterly – back in 2017 [2].

Anyway, I came to the same conclusion as the author, but several years ago: in the end, nothing is actually decentralized, and maintaining this illusion of decentralization is actually costly, for no real purpose (other than the initial enjoyment of playing with a new tech, that is).

So I stopped maintaining it a few years ago. That decision was also because of the growing involvement of some of these projects with blockchain tech that I never wanted to be a part of. This is also why I cancelled my article series in 2600 before publishing those on IPFS and ZeroNet.

[1] See for example this archive of my HN profile page from 2016 with the link to it: https://web.archive.org/web/20161122210110/https://news.ycom...

[2] https://pablo.rauzy.name/outreach.html#2600

r3trohack3r
23 replies
21h20m

Anyway, I came to the same conclusion as the author, but several years ago: in the end, nothing is actually decentralized, and maintaining this illusion of decentralization is actually costly, for no real purpose (other than the initial enjoyment of playing with a new tech, that is).

Do you have any writing (blog posts, HN comments, etc.) where you explore this thought more? I'm in the thick of building p2p software, very interested in what you came to know during that time.

pphysch
15 replies
20h22m

True P2P networks don't scale, because every node has to store an (accurate if partial) representation of the whole network. Growing the network is easy, but growing every single node is virtually impossible (unless you control them all...). Any attempt to tackle that tends to increase centralization, e.g. in the form of routing authorities.

And if you try to tackle centralization directly (like banning routers or something), you will often create an anti-centralization regulator, which is, you guessed it, another form of centralization.

So your decentralized P2P network is either small and works well, medium-sized and works less well, or large and not actually decentralized.

The best P2P networks know their limits and don't try to scale infinitely. For human-oriented networks, Dunbar's Number (N=~150) is a decent rule of thumb; any P2P network larger than that almost certainly has some form of centralization (like trusted bootstrapping/coordination server addresses that are hard-coded in every client install, etc.)

KMag
14 replies
19h51m

True P2P networks don't scale, because every node has to store an (accurate if partial) representation of the whole network

Former LimeWire dev here... which P2P networks use a fully meshed topology? LimeWire and other Gnutella clients just have a random mesh with a fixed number of (ultra)peers. If the network gets too large, then your constrained broadcast queries hit their hop-count limit before reaching the edge of the network, but that seems fine.

Last I checked, Freenet used a variation on a random mesh.

Kademlia's routing tables take O(log(N)) space and traffic per-peer to maintain (so O(N log(N)) for global total network space and traffic). Same for Chord (though, twice as much traffic due to not using a symmetric distance metric like XOR).

There are plenty of "True" (non-centralized) P2P networks that aren't fully meshed.
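
To make the O(log N) claim concrete, here's a minimal sketch of Kademlia's XOR metric and bucket structure (illustrative Python, not LimeWire's or any real client's code):

    import hashlib

    def node_id(name: str) -> int:
        # 160-bit identifier, as in classic Kademlia
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

    def xor_distance(a: int, b: int) -> int:
        # symmetric metric: d(a, b) == d(b, a), unlike Chord's clockwise distance
        return a ^ b

    def bucket_index(self_id: int, other_id: int) -> int:
        # bucket i holds peers at distance in [2**i, 2**(i+1)), so a node keeps
        # only O(log N) buckets, each with a handful (k) of contacts
        return xor_distance(self_id, other_id).bit_length() - 1

    me = node_id("alice")
    peers = [node_id(f"peer-{i}") for i in range(1000)]
    # a lookup asks the closest known peers for even closer ones each round,
    # roughly halving the remaining distance, hence O(log N) hops
    print(len({bucket_index(me, p) for p in peers}))  # ~10-15 occupied buckets, not 1000 entries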

sanity
12 replies
19h34m

Creator of Freenet here. Freenet[1] relies on peers self-organizing into a small world network[2].

Small world networks have the advantage of being able to find data in log N time, where N is the network size; they're also completely decentralized, self-healing, and distribute load evenly across peers. The principle is similar to DHTs like Kademlia but more flexible and intuitive IMO, while having similar scaling characteristics.

It's surprisingly common for people to confuse small world networks with "scale free networks", but scale free networks rely on a subset of highly connected peers which do a disproportionate amount of the work - which isn't truly decentralized.

The new Freenet design incorporates adaptive learning into the routing algorithm. When a peer is deciding where to route a message, it predicts the response probability and time for each neighboring peer based on past performance and chooses the best. With conventional "greedy routing", peers choose the neighbor with a location closest to the data being retrieved. The new approach is adaptive to actual network performance.
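
As a rough illustration of the difference (a hypothetical Python sketch of the idea, not Freenet's actual Rust code): greedy routing picks the neighbor whose location is closest to the key, while adaptive routing scores neighbors by what it has learned about their past responses.

    def ring_distance(a: float, b: float) -> float:
        # peers and keys live at locations on a [0, 1) ring
        d = abs(a - b)
        return min(d, 1.0 - d)

    def greedy_next_hop(neighbors, key_location):
        # neighbors: peer -> location; forward to whoever is "closest" to the key
        return min(neighbors, key=lambda p: ring_distance(neighbors[p], key_location))

    def adaptive_next_hop(neighbors, stats, key_location):
        # stats: peer -> (success_rate, avg_response_seconds), learned from history.
        # Score by expected time to get an answer, with distance as a tie-breaker
        # and a neutral prior for peers we have no history with.
        def score(p):
            success, avg_time = stats.get(p, (0.5, 1.0))
            return avg_time / max(success, 0.01) + ring_distance(neighbors[p], key_location)
        return min(neighbors, key=score)

    neighbors = {"a": 0.12, "b": 0.48, "c": 0.81}
    stats = {"a": (0.95, 0.20), "b": (0.60, 0.40), "c": (0.99, 0.05)}
    print(greedy_next_hop(neighbors, 0.45))           # "b": nearest location to the key
    print(adaptive_next_hop(neighbors, stats, 0.45))  # "c": farther by location, but fast and reliable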

[1] Both the original Freenet from 1999 and the new sequel we're currently building - see https://freenet.org/ for more. We hope to launch the network in the next few weeks.

[2] https://en.wikipedia.org/wiki/Small-world_network

KMag
5 replies
19h17m

Thanks for the great work, Ian!

As for the second-generation Freenet, I heard I2P started out as a proposed re-factoring and generalization of Freenet's encrypted transport layer. Are there any plans to use I2P to carry Freenet traffic?

babymode
3 replies
18h56m

I think I've been following the dev chat long enough to answer that the new Freenet is a new, separate network from the original (now called Hyphanet, I think) that handles transport by itself, and end-to-end encryption is not in scope of the project but can be built on top.

sanity
2 replies
18h50m

the new Freenet is a new, separate network from the original

This is correct - while old and new Freenet both rely on a small-world network, they are very different and not compatible. Borrowing from our FAQ[1], the main differences are:

Functionality: The previous version of Freenet (now called Hyphanet) was analogous to a decentralized hard drive, while the current version is analogous to a full decentralized computer.

Real-time Interaction: The current version allows users to subscribe to data and be notified immediately if it changes. This is essential for systems like instant messaging or group chat.

Programming Language: Unlike the previous version, which was developed in Java, the current Freenet is implemented in Rust. This allows for better efficiency and integration into a wide variety of platforms (Windows, Mac, Android, etc.).

Transparency: The current version is a drop-in replacement for the world wide web and is just as easy to use.

Anonymity: While the previous version was designed with a focus on anonymity, the current version does not offer built-in anonymity but allows for a choice of anonymizing systems to be layered on top.

[1] https://freenet.org/faq#faq-2

BlueTemplar
1 replies
15h16m

Doesn't Java have the widest variety of hardware running it, thanks to its Virtual Machine?

I can even remember my Motorola Razr being arguably (almost) a smartphone because, while a far cry from Symbian, it could already run Java applications! (Notably, IIRC, Opera Mini?)

P.S.: Also, I tried Freenet around that time too! I'm a bit confused about this being a "new" project... why not name it "Freenet 2" then? Why did Freenet "1" have to change its name?

sanity
0 replies
14h12m

Doesn't Java have the widest variety of hardware running it, thanks to its Virtual Machine?

Java has the advantage that you can run it on a wide variety of hardware platforms without recompilation, but it has largely failed to attain broad usage/support for desktop apps and so it's a bad choice for something like Freenet in 2024.

A systems programming language like Rust gives us a lot more control over things like memory allocation, allowing apps to be a lot more efficient. This is important with Freenet because we need it to run in the background without slowing down the user's computer.

Rust can also be compiled to run on all major platforms, Windows, Mac, Linux, Android, iOS, etc.

P.S.: Also, I tried Freenet around that time too! I'm a bit confused about this being a "new" project... why not name it "Freenet 2" then? Why did Freenet "1" have to change its name?

Using the name for the new software was a difficult decision and not without risk.

The "Freenet" name was never intended to belong to a specific codebase. From the start we viewed the original Java implementation as a prototype, which is one reason we never actually released version 1.0 (even 7 years after the project started we were still on version 0.7). At the time I had no idea that it would be over 20 years before I had a design I thought would be suitable, but here we are.

This new Freenet is the original concept but designed, not as a prototype, but as software that can gain broad adoption. In that sense it is the fulfilment of my original vision.

I did consider calling it Freenet 2, but these days not that many people have heard of the original, so on balance I believe it would have been confusing for the (hopefully) much bigger userbase we hope to reach.

sanity
0 replies
19h2m

Thank you :)

I2P was created by someone who was previously involved with Freenet, but its design is a lot closer to Tor than to Freenet. Both I2P and Tor are anonymizing proxies: they allow services to be hidden, but they're still centralized.

While they are quite different, there is enough overlap that running Freenet over I2P (or Tor) would be wildly inefficient and slow, so I wouldn't recommend it. Freenet is designed to run over UDP directly.

The new Freenet is designed to allow the creation of completely decentralized services. Briefly, it's a global key-value store in which keys are WebAssembly code that specifies what values are permitted under that key, and the conditions under which those values can be modified. This key-value store is observable, so anyone can subscribe to a key and be notified immediately if the value changes.
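
A loose way to picture that model (a toy Python sketch of the idea only; in the real system the validation logic is WebAssembly, the key is derived from the contract, and the store is distributed, none of which is modeled here):

    import hashlib

    class ObservableStore:
        """Toy contract-addressed store: a key is the hash of its validation code,
        and only values that code accepts can be stored under that key."""
        def __init__(self):
            self.contracts = {}    # key -> validation callable (wasm in the real thing)
            self.values = {}       # key -> current value
            self.subscribers = {}  # key -> list of callbacks

        def register_contract(self, validate) -> str:
            key = hashlib.sha256(validate.__code__.co_code).hexdigest()
            self.contracts[key] = validate
            self.values.setdefault(key, None)
            return key

        def put(self, key, new_value) -> bool:
            old_value = self.values[key]
            if not self.contracts[key](old_value, new_value):
                return False                     # contract rejects the update
            self.values[key] = new_value
            for cb in self.subscribers.get(key, []):
                cb(new_value)                    # subscribers notified immediately
            return True

        def subscribe(self, key, callback):
            self.subscribers.setdefault(key, []).append(callback)

    store = ObservableStore()
    # example contract: the value is an append-only list (old value must be a prefix of the new one)
    append_only = lambda old, new: old is None or new[:len(old)] == old
    key = store.register_contract(append_only)
    store.subscribe(key, lambda v: print("update:", v))
    store.put(key, ["hello"])
    store.put(key, ["hello", "world"])
    print(store.put(key, ["rewritten"]))         # False: the contract rejects it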

This is just scratching the surface. For anyone interested in a much more comprehensive explanation of the new Freenet, please see this talk I gave a few months ago [1]. You can also find a FAQ here [2].

[1] https://www.youtube.com/watch?v=yBtyNIqZios

[2] https://freenet.org/faq

dtaht
1 replies
18h54m

Clever. I care about congestion control issues mainly. Got that handled? Tried ECN?

sanity
0 replies
18h41m

The low-level transport is one of the final components we're working on prior to launch - we previously hoped to use something off-the-shelf but on close examination nothing fit our requirements.

We handle congestion by specifying a global maximum upload rate for the peer, which will be a fraction of the peer's total available bandwidth - the goal being to avoid congestion. In the future we'll likely use an isotonic regression to determine the relationship between upstream bandwidth usage and packet loss, so that we can adaptively choose an appropriate maximum rate.
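
Roughly, the isotonic-regression idea would look like this (a hypothetical Python sketch of the concept, not the actual implementation): fit a non-decreasing loss-vs-rate curve to what the peer has observed and pick the highest rate whose fitted loss stays under a target.

    def isotonic_fit(ys):
        # Pool Adjacent Violators: smallest-change non-decreasing fit to ys
        blocks = [[y, 1] for y in ys]            # [mean, count]
        i = 0
        while i < len(blocks) - 1:
            if blocks[i][0] > blocks[i + 1][0]:  # violation: pool the two blocks
                m = blocks[i][0] * blocks[i][1] + blocks[i + 1][0] * blocks[i + 1][1]
                c = blocks[i][1] + blocks[i + 1][1]
                blocks[i:i + 2] = [[m / c, c]]
                i = max(i - 1, 0)
            else:
                i += 1
        fitted = []
        for mean, count in blocks:
            fitted.extend([mean] * count)
        return fitted

    # (rate in KB/s, observed packet-loss fraction) samples, already sorted by rate
    samples = [(100, 0.001), (200, 0.004), (300, 0.002), (400, 0.03), (500, 0.09)]
    rates = [r for r, _ in samples]
    loss_curve = isotonic_fit([l for _, l in samples])  # non-decreasing loss vs. rate

    LOSS_TARGET = 0.01
    max_rate = max((r for r, l in zip(rates, loss_curve) if l <= LOSS_TARGET), default=rates[0])
    print(max_rate)  # highest sampled rate whose fitted loss stays under the target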

Here's a more detailed explanation of the transport protocol; it's quite a bit simpler than ECN, but we'll refine it over time: [1]

At a higher level a peer's resource usage will be proportional to the number of connected peers, and so the peer will adaptively add or remove connections to stay under resource usage limits (which includes bandwidth).

[1] https://github.com/freenet/freenet-core/blob/186770521_port_...

Uptrenda
1 replies
18h4m

Freenet is very cool. You did good work. Absolute giga chad.

sanity
0 replies
17h40m

Thank you, first time I've been called a Chad - I'll take it ;)

Charon77
1 replies
16h10m

I've always taken an interest in p2p, but never heard of Freenet, so thanks for being here!

Question: how good is the latency once connections are already established, say for a real-time video call over Freenet, or is this not possible? Is there any server in the middle that all packets need to route to, especially for peers behind firewalls?

sanity
0 replies
12h29m

how good is the latency once connections are already established, say for a real-time video call over Freenet, or is this not possible?

We're aiming for the delay between a contract's state being modified and all subscribers being notified to be no more than 1 second, which should be acceptable for applications like IM.

If you were doing something like a video call you'd negotiate it over Freenet and then establish a direct connection for the video/audio for minimal latency.

Is there any server in the middle that all packets need to route to, especially for peers behind firewalls?

Freenet uses UDP hole-punching to establish direct connections between peers even if both are behind firewalls. A new peer uses a public (non-firewalled) peer to join the network initially but once joined all further communications can be between firewalled peers.
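
For the curious, the punching step itself is conceptually tiny: once both sides know each other's public address and port (learned via that public peer), each starts sending UDP packets to the other, so each NAT sees outbound traffic first and then accepts the inbound replies. A bare-bones single-side sketch (illustrative Python, not Freenet's transport code):

    import socket

    def punch(local_port, peer_addr, attempts=20):
        # peer_addr: the other side's (public_ip, public_port), learned out of band,
        # e.g. from an already-reachable peer acting as introducer.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", local_port))
        sock.settimeout(0.5)
        for _ in range(attempts):
            sock.sendto(b"punch", peer_addr)   # outbound packet opens a mapping in our NAT
            try:
                data, addr = sock.recvfrom(1500)
                if addr == peer_addr:
                    return sock                # both mappings open: direct path established
            except socket.timeout:
                pass
        raise TimeoutError("no direct path; a real system would fall back to a relay")

    # Both peers run punch() at roughly the same time, each with the other's public endpoint.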

pphysch
0 replies
19h9m

Sure, but ultrapeers/supernodes/routers/etc are forms of centralization. Locally, the network is centralized around these supernodes, and they can be annoying/impossible to bypass. The "inner network" or backbone of supernodes, if it exists, can also represent a central authority. Nothing necessarily wrong with any of this, but it can stretch the meaning of P2P if it really means P2central-authority2P.

Functionally there is almost no difference between me sending you an (anonymous, encrypted) message over Facebook versus over some sophisticated, large, hierarchical "P2P" network. We still have to trust the local authorities, so to speak.

Almondsetat
4 replies
19h49m

Your software cannot be more decentralized than your hardware.

For example, true p2p can only happen if you meet with someone and use a cable, bluetooth or local wifi. Anything over the internet needs to pass through routers and *poof* decentralization's gone and you now need to trust servers to varying degrees.

colordrops
2 replies
19h43m

"varying" is a pretty wide range here. If you mean "trust" as in trust to maintain connectivity, yes, but beyond that there are established ways to create secure channels over untrusted networks. Could you provide specifics about what you mean if anything beyond basic connectivity?

Almondsetat
1 replies
19h39m

Sure: what is the process to start downloading a torrent? what is the process to message someone on Jami? what is the process to call someone on (old) Skype or Jitsi? Answer this and you realize you can only get as decentralized as your hardware infrastructure

Charon77
0 replies
16h17m

Well, for a torrent it starts by contacting one of the various trackers that know other peers. It's not centralized, but there are only a couple of trackers out there.

There's no trust between any of the peers, but each torrent piece has an associated hash, meaning peers cannot give you invalid data without being caught (unless a hash collision occurs).

Peers can be discovered with DHT magic, but ultimately, can only be dialed if the ISP allows peers to receive connections.
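
The per-piece check is simple to picture (a minimal Python sketch; BitTorrent v1 publishes a SHA-1 hash per piece in the metadata, v2 uses SHA-256 Merkle trees per file):

    import hashlib

    PIECE_SIZE = 256 * 1024  # example piece length from the .torrent metadata

    def piece_hashes(data: bytes) -> list:
        # what the torrent creator publishes in the metadata (v1-style, SHA-1 per piece)
        return [hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
                for i in range(0, len(data), PIECE_SIZE)]

    def verify_piece(index: int, piece: bytes, expected: list) -> bool:
        # what a downloader does before accepting data from an untrusted peer
        return hashlib.sha1(piece).digest() == expected[index]

    original = b"x" * (PIECE_SIZE * 2 + 1000)
    expected = piece_hashes(original)
    good = original[:PIECE_SIZE]
    bad = b"y" * PIECE_SIZE
    print(verify_piece(0, good, expected), verify_piece(0, bad, expected))  # True False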

Barrin92
0 replies
12h41m

Anything over the internet needs to pass through routers and poof decentralization's gone

That's not true. Yes, the strict p2p connection is gone, but decentralization is what the name says: a network of connections without a single center. The internet and its routing systems are decentralized. Of course every decentralized system can also be stressed to a point of failure and not every node is automatically trustworthy.

unethical_ban
0 replies
13h45m

I'll add to what others have said better.

Decentralized, globally accessible resources still take some kind of coordination to discover those resources, and a shit-ton of redundant nodes and data and routing. There is always some coordination or consensus.

At least, that's my take on it. Doesn't Tor have official records for exit and bridge nodes?

p4bl0
0 replies
20h0m

The main thing is that "true" (in the sense of absolute) decentralization does not actually work. It doesn't work technically, and it doesn't work socially/politically. We always need some form of collective control over what's going on. We need moderation tools, we need to be able to make errors and fix them, etc. Centralized systems tend to be authoritarian, but the pursuit of absolute decentralization always ends up being very individualistic (and a kind of misplaced elitism). There are middle grounds: federated systems, for example, like Mastodon or email, actually work.

That is not to say that all p2p software is bad, especially since we call a lot of things p2p that are not entirely p2p. For example, BitTorrent is p2p software, but its actual usage by humans relies on multiple more-or-less centralized points: trackers and torrent search engines.

int_19h
20 replies
20h42m

I may be missing something, but name resolution has been touted as one of the more legitimate and sensible uses for blockchain for a very long time. Could you clarify what your issues with it in IPFS context are?

p4bl0
9 replies
19h34m

Well, I do not actually believe that blockchains can do name resolution correctly. First and foremost, the essential thing to understand about blockchains is that the only thing that is guaranteed by writing a piece of information on a blockchain is that this information is written on this blockchain. And that's it. If the writing itself is not performative, in that its mere existence performs what it describes, then nothing has been done. It works for crypto-assets because what makes a transaction happen is that it is written on the blockchain where that crypto-asset lives and where people look to see what happened with that crypto-asset.

But for any other usage, it cannot work; blockchains are useless. Someone or something somewhere has to make sure either that what's written on the blockchain corresponds to the real world, or that the real world is made to correspond to what's written on the blockchain. Either way you need to have a kind of central authority, or at least trusted third parties. And that means you don't actually need a blockchain in the first place. We have better, more secure, more efficient, less costly alternatives to using a blockchain.

Back to name resolutions.

Virtually no one is going to actually host the blockchain where all names are stored locally. That would be way too big and could only get bigger and bigger, as a blockchain stores transactions (i.e., diffs) rather than the current state. So in practice people and their devices would ask resolvers, just like they currently do with DNS. These resolvers would need to keep a database of the state of all names up-to-date because querying a blockchain is way too inefficient, and running such a resolver would be a lot more costly than running a DNS server, so there would be fewer of them. Here we just lost decentralization, which was the point of the system. But that's just a technical problem. There is more: what if someone gets a name and we as a society (i.e., justice, whatever) decide that they should not be in control of it? Either we are able to enforce this decision, which means the system is not actually decentralized (so we don't need a blockchain), or we can't, and that's a problem. What if a private key is lost, are the associated names gone forever? What if your private key is leaked by mistake and someone hostile takes control of your name?

Using a blockchain for names resolution doesn't actually work, not for a human society.

mikegreenberg
4 replies
18h16m

Either way you need to have a kind of central authority, or at least trusted third parties.

You lost me here. Couldn't the local user ('s process) reference the same block chain instead of another trusted party?

delfinom
3 replies
18h6m

The problem with block chains is you need the entire history, or at least a good chunk of it to walk it for data. The latest end of the chain doesn't contain everything, it simply contains the most recent transactions.

This can be hundreds of gigabytes if not more at scale.

This is where the central authority comes in play, in the name of storage and performance efficiency.

Even crypto wallet apps use third party central servers to query your wallet totals. Because you aren't fitting the download of the block chain on your phone.

mikegreenberg
2 replies
17h31m

I don't think the blockchain walk has to be done locally. Much like someone looking up DNS records doesn't need to participate in the DB management to get their records, there can be intermediaries which provide the response and still rely on the blockchain as a source of truth?

The value of the blockchain (in the context of name resolution) would (should) be limited to enabling trustless-ness of the response. I can cryptographically authenticate the origin of the response. If you don't trust that source, zk proofs would enable the user to validate the response is part of the latest version of the blockchain's state without looking at all of the history.

I think the cost of carrying the whole history is a red herring.
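
Setting zk proofs aside (they're beyond a short sketch), the simpler ancestor of this idea is a Merkle inclusion proof, which is roughly what today's light clients use: given only a trusted state root, an untrusted intermediary can hand you a record plus a short proof, and you can check it without holding the history. A toy Python sketch (hypothetical, not any particular chain's format):

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def merkle_root(leaves):
        level = [h(l) for l in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])        # duplicate last node on odd levels
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_proof(leaves, index):
        # list of (sibling_hash, sibling_is_on_the_right) from leaf up to the root
        level, proof = [h(l) for l in leaves], []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            sib = index ^ 1
            proof.append((level[sib], sib > index))
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify(leaf, proof, root):
        acc = h(leaf)
        for sibling, right in proof:
            acc = h(acc + sibling) if right else h(sibling + acc)
        return acc == root

    records = [b"alice.example -> Qm123", b"bob.example -> Qm456", b"carol.example -> Qm789"]
    root = merkle_root(records)            # the only thing the client must trust
    proof = merkle_proof(records, 1)       # supplied by an untrusted intermediary
    print(verify(records[1], proof, root))                # True
    print(verify(b"bob.example -> QmEVIL", proof, root))  # False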

p4bl0
1 replies
10h29m

there can be intermediaries which provide the response and still rely on the blockchain as a source of truth

But then you have to trust the intermediaries. You can verify their claim, but doing so is so costly it's what made you turn to intermediaries in the first place.

I can cryptographically authenticate the origin of the response.

A blockchain is not needed for that, certificates can do that.

zk proofs (…) the latest version of the blockchain's state (…) cost of carrying the whole history

Knowing enough information about the latest version of a blockchain's state to validate responses would require either that you trust the third party which would provide the hash of the last block to you, or that you follow, block after block, what's added to the ledger, verifying each block's integrity and all. I'm not saying that's not doable, but that it either requires some boot-up time or being online all the time; i.e., it more or less amounts to running a node, which is what we seem to agree is not something most people / end devices will do.

heinrich5991
0 replies
8h31m

You should be able to cryptographically prove a) the block height of the current block and b) the state of any account, in constant space, using zero-knowledge proofs.

You don't need to trust a third party and do not need to be online all the time for that.

null0pointer
1 replies
17h37m

Either way you need to have a kind of central authority, or at least trusted third parties.

Not everyone needs to run a node, and not everyone could, but it is totally feasible for an individual to run their own if they decide they can't trust anyone else for whatever reason. Especially if you were running a node specifically for the purpose of name resolution you could discard the vast, vast majority of data on the Ethereum blockchain (for example).

what if someone gets a name and we as a society (i.e., justice, whatever) decides that they should not be in control of it? [...] and that's a problem.

No, that is a feature of a decentralized system. Individual node operators would be able to decide whether or not to serve particular records, but censorship resistance is one of the core features of blockchains in the first place.

What if a private key is lost, the associated names are gone forever?

The association wouldn't be gone, it would just be unchangeable until it eventually expires. This is a known tradeoff if you are choosing ENS over traditional domain name registration.

What if your private key is leaked by mistake and someone hostile take control of your name?

As opposed to today where someone hostile, like for instance the Afghani government (The Taliban), can seize your domain for any reason or no reason at all?

---

I think we just have a fundamental disagreement about what types of features and use cases a name resolution system should have. That's completely fine, you're entitled to your own beliefs. You can use the system that most closely resembles your beliefs, and I'll use the one that most closely resembles mine. Fortunately for us, different name resolution systems can peacefully coexist due to the nature of name mappings. At least for now, none that I know of collide in namespace.

CaptainFever
0 replies
14h39m

No, that is a feature of a decentralized system. Individual node operators would be able to decide whether or not to serve particular records, but censorship resistance is one of the core features of blockchains in the first place.

Exactly, take a look at Sci Hub or The Pirate Bay continuously needing to change domain names due to seizures, for example. I'd want them to be able to actually own their domain names, either via blockchain or private key (e.g. Tor).

In fact Sci Hub tried HNS for some time but seems to have dropped out of it.

hexage1814
1 replies
12h54m

There is more: what if someone gets a name and we as a society (i.e., justice, whatever) decides that they should not be in control of it?[...]or we can't, and that's a problem

That's a feature.

lottin
0 replies
11h24m

Do you think lawlessness is a feature?

edent
7 replies
20h20m

It isn't. Unless you want a long incomprehensible string.

Someone is always going to want a short, unique, and memorable name. And when two people share the same name (McDonald, Nissan, etc) there needs to be a way to disambiguate them.

If people die and are unable to release a desirable name, that just makes the whole system less desirable.

I know one of the canonical hard problems in Computer Science is "naming things" and this is a prime example!

patmorgan23
3 replies
15h29m

Namecoin has existed for a long time. It acts just like a traditional domain registrar. The first person to register a name gets it, and they have to pay a maintenance fee to keep it. Don't pay the maintenance fee and then someone else can register the name.

acdha
2 replies
14h36m

Yes, but statistically nobody uses it due to those problems. Squatters quickly snapped up the most popular DNS names but since nobody uses it there’s no financial benefit from paying Danegeld, and that’s a vicious cycle of irrelevance.

This is the core flaw of most of these systems: people hype them up selling the idea that it’ll become huge later, but without some link to real value or authority there’s no reason to favor one implementation over another or doing nothing at all.

This comes up a lot in the case of IPFS because the core problem is that most people won’t host anything on the internet. It’s expensive and legally risky, which forces you to vet who you host and then you have a less revolutionary pitch which has to be based on cost, performance, etc. and that’s a tough market.

int_19h
1 replies
11h48m

It might not be popular in general, but surely IPFS crowd specifically would be a lot more receptive to such a thing? IPFS itself is similarly niche.

acdha
0 replies
6h3m

Perhaps, but that doesn’t mean they’re suckers. If you’re going to have to deal with bootstrapping an alternate root anyway you probably aren’t motivated to pay off a speculator for the privilege of making their holdings more valuable.

null0pointer
1 replies
17h51m

ENS (which is what the GP refers to) has human readable names. But it doesn't have support for A/AAAA records today (does anyone know why? A-record support was in the original spec). Aside from that, the only reason you wouldn't be able to type "mycoolsite.eth" into your browser's URL bar and have it resolve to an IP is because no browser has an ENS resolver built in. Yet.

dazaidesu
0 replies
16h3m

Brave does, right?

bombcar
0 replies
20h17m

And if you want a long incomprehensible string, we already have that: .onion sites work without a blockchain, too.

ShamelessC
1 replies
20h12m

but name resolution has been touted as one of the more legitimate and sensible uses for blockchain for a very long time.

Blockchain enthusiasts have a history of talking out of their ass and being susceptible to the lies of others.

ShamelessC
0 replies
17h7m

Downvotes, nice. Whatever helps you sleep at night.

chaxor
11 replies
19h26m

I never fully understood the use of ipfs/iroh for websites, but I really like the idea for data caching in data science packages.

It makes more sense to me that someone would be much more willing to serve large databases and neural network weights that they actually use every day, rather than 'that one guy's website they went to that one time'.

I'm very surprised it's not as popular, if not more popular, to just have @iroh-memoize decorators everywhere in people's database ETL code.

That's a better use case (since the user has a vested interest in keeping the data up) than helping people host websites.

wharvle
9 replies
18h43m

IMO the case for something like IPFS gets worse and worse the larger the proportion of clients on battery. This makes it a really poor choice for the modern, public Web, where a whole lot of traffic comes from mobile devices.

Serving things that are mostly or nigh-exclusively used by machines connected to the power grid (and, ideally, great and consistent Internet connections) is a much better use case.

kmeisthax
8 replies
16h5m

This is half the reason why P2P died in the late 2000s. Mobile clients need to leech off a server to function.

The other reason why it died is privacy. Participating in a P2P network reveals your IP address, which can be used to get a subscriber address via DMCA subpoenas, which is how the RIAA, MPAA, and later Prenda Law attacked the shit out of Gnutella and BitTorrent. Centralized systems don't usually expose their users to legal risk like P2P does.

I have to wonder: how does IPFS keep people from learning what websites I've been on, or have pinned, without compromising the security of the network?

marcus_holmes
4 replies
13h26m

Spotify killed music sharing, not the RIAA.

There's still plenty of video and book pirating happening. Until the streaming industry gets its shit together and coalesces into a single provider, or allows peering, then that's going to continue.

The legal and privacy risks of P2P are both mitigated very simply with a VPN.

mjevans
1 replies
11h8m

They also need to just sell a 'license for a "personal private viewing copy"' of a work and provide non-DRM files that users can self archive and maintain.

No, DRM is not necessary, it's already proven that someone, among the 8 billion monkeys (with some really smart ones) hammering away _will_ figure out a way of liberating the data from the shackles. The whole premise is fundamentally broken in that the viewers are distrusted from seeing the data in the clear. It just adds cost, friction, and failure points.

Convenience (EASE OF USE!!!), a fair price, and content that doesn't go away are how alternative distribution methods die. Just low how bootleg booze largely doesn't exist outside of prohibition since the market functions.

zer00eyz
0 replies
10h4m

> Just low[sic] how bootleg booze largely doesn't exist outside of prohibition since the market functions.

Tell me that you hang out with law abiding citizens without telling me...

Moonshine, home brew... people are out there sticking it to the man as much as they can.

If you have made homemade cider, or beer, or yogurt, pickles, canned anything, you know that it's a labor but the product is better and far cheaper than what you can buy.

Convenience, quality, ease of use... People will pay a massive premium for these things. This (to the dismay of HN) is the Apple model. You can bleed the customer if they love you, if you have a good product.

This was a problem in early film, and the paramount decree was a thing: https://www.promarket.org/2022/12/12/the-paramount-decrees-a...

One would think that this should apply to streaming services, but sadly no, they get treated like network television did (does).

And I know that one of you will glom on to the Paramount decree as an argument for the iPhone App Store shenanigans of late. Sadly, they aren't remotely close to each other: Apple isn't restricting your timing, or telling you what your price should be.

crtasm
0 replies
50m

If everyone's on VPNs, nobody can connect to each other. I'm only aware of a couple of VPN services that offer port forwarding.

apitman
0 replies
11h6m

They had a single provider. They purposefully moved away from that model to make more money, and it's working.

BlueTemplar
1 replies
15h28m

How did it "die" in the late oughties, when ISPs were boasting about releasing routers with built-in NAS and torrent support in 2011, and projects like Popcorn Time only got popular in 2014?

anthk
0 replies
7h25m

For a clueless Gen-Z user born with smartphones, maybe. For some media, ED2K and Nicotine+ (the Soulseek network) are the only way to fetch that content. Classic series/movies/comic books/special vinyl editions ripped to FLAC... those won't be at full FLAC/CBZ quality (high-DPI PNGs) on YT/Spotify or whatever website or tablet app.

anthk
0 replies
7h28m

P2P is still pretty much alive, at least for BitTorrent and ED2K.

wuiheerfoj
0 replies
19h19m

Desci Labs (1) do something like this - papers and all the associated data (raw data, code, analyses, peer review comments) are published and linked together - along with all the citations to other papers - in a big DAG.

I believe their data is stored in a p2p network - it might interest you!

1. https://desci.com/

teeray
1 replies
11h35m

I also started publishing a series of articles about censorship-resistant internet (Tor, I2P, IPFS, and ZeroNet) in 2600 Magazine – The Hacker Quarterly – back in 2017

I very much enjoyed your articles on Tor and I2P :) I2P was entirely new to me, so I found that particularly interesting. I did idly wonder when the next article was coming, so I’m glad I didn’t just miss it in some issue. Totally understand where you’re coming from.

p4bl0
0 replies
10h41m

Thanks! It's always great to have feedback on paper-published content :).

DEDLINE
1 replies
19h56m

When evaluating use-cases where blockchain technology is leveraged to disintermediate, I came to your same conclusions. Technically novel? Yes, sure. But, for what?

hanniabu
0 replies
19h48m

For incentive alignment, consensus, trustless, etc

neiman
0 replies
21h28m

Maybe it is the case for ENS

Oh yeah, I was referring strictly to IPFS+ENS websites. I have been working with it for several years so my mind goes for this use-case automatically.

koito17
17 replies
21h23m

it’s quite an inconvenience to run your own IPFS node. But even if you do run your own node, the fact you access a website doesn’t mean you pin it. Not at all.

This has always been my major UX gripe with IPFS. The fact that `ipfs add` in the command line does little but generate a hash and you need to actually pin things in order to "seed" them, so to speak. So "adding a file to IPFS", in the sense of "adding a file to the network", requires the user to know that (1) the "add" in `ipfs add` does not add a file to the network, and (2) you must pin everything you want to replicate manually. I remember as recently as 2021 having to manually pin each file in a directory since pinning the directory does not recursively pin files. Doing this by hand for small folders is okay, but large folders? Not so much.

More importantly, the BitTorrent duplication problems that IPFS has solved are also solved in BitTorrent v2, and BitTorrent v2 IMO solves these problems in a much better way (you can create "hybrid torrents" which allows a great deal of backwards compatibility with existing torrent software).

This isn't a UX issue, but another thing that makes it hard for me to recommend IPFS to friends is the increasing association with "Web3" and cryptocurrency. I don't have any strong opinions on "Web3", but to many people, it's an instant deal-breaker.

hot_gril
14 replies
21h0m

IPFS provides nice stable links to media, and there are HTTP->IPFS gateways if needed. That seems useful for embedding content on multiple apps/sites. Yeah it happens to fit NFTs particularly well, then again we all know what BitTorrent is known for. And yes I agree IPFS has some UI problems.

Would BitTorrent also be suitable for hosting embeddable content? I haven't seen that yet. A magnet URL is mainly a file hash and doesn't seem to encode a particular peer server, kinda like IPFS. But every time I've torrented Ubuntu, it's taken half a minute just to find the peers.

mvdtnz
10 replies
20h33m

IPFS provides nice stable links to media

Anyone who has tried to torrent an old movie or lesser-known television show knows this is simply not true.

hot_gril
9 replies
20h29m

I mean it's not like HTTP where all URLs are tied to a particular webserver and can even be changed on that server. If someone different starts seeding, you'll get the same data again at the same URL, with built-in checksumming.

adamzochowski
8 replies
19h34m

That is something I wanted to know: does IPFS guarantee that the same two files have the same IPFS URLs / hash links?

Otherwise, someone sharing the same data again won't actually be discoverable as the same data, because it will be in a different IPFS folder.

mikegreenberg
2 replies
17h16m

Caveat: The other comments mention the file's contents being the only dependency on the hash, but the algo used to hash would also need to be the same. If the hash algo changes in two cases, the same content would have a different hash in those two cases.

hot_gril
1 replies
16h38m

In this case, would pinning the file make it accessible from either hash? I'd expect it to, but idk, I've only ever seen sha256 hashes on IPFS.

jaccarmac
0 replies
16h25m

Kinda. Shooting from the hip based on fuzzy gatherings from IPFS usage here, but as I understand it: The leaf-level data blocks will be shared between the Merkle trees, but at least the tip (the object a given hash actually refers to) and maybe some of the other structural information will be different.

hot_gril
2 replies
19h30m

Yes, a file's hash is only based on its contents. The way I understand it, a file doesn't really live in a directory, it's more like a directory (which is a kind of file itself) references files. So the same file can be in two directories, yet it'll have the same URL/hash. And if you "add" files to a directory, you're really uploading a separate copy of the dir that'll have a different hash.

I checked myself on this, but someone else might want to check me cause I'm not an expert.
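
That mental model is easy to demo with plain content addressing (a simplified Python sketch, not the real IPFS CID/UnixFS encoding, which also chunks files into blocks and records DAG metadata):

    import hashlib, json

    def file_address(content: bytes) -> str:
        # the address depends only on the bytes, not on any name or location
        return hashlib.sha256(content).hexdigest()

    def directory_address(entries: dict) -> str:
        # a directory is itself content: a mapping of names -> child addresses
        canonical = json.dumps(entries, sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()

    photo = b"\x89PNG...cat picture bytes..."
    addr = file_address(photo)

    dir_a = directory_address({"cat.png": addr})
    dir_b = directory_address({"cat.png": addr, "notes.txt": file_address(b"hi")})

    # same file, same address, regardless of which directory references it:
    print(addr == file_address(photo))   # True
    # "adding" a file produces a *new* directory object with a different address:
    print(dir_a != dir_b)                # True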

wuiheerfoj
1 replies
19h11m

This is generally true, though it’s possible to encode the same data into a slightly different shaped DAG to optimise for eg video streaming performance afaiu (balanced vs imbalanced). UnixFS vs raw bytes may also be different but I’m not 100%

hot_gril
0 replies
18h58m

From the fs's point of view, these are different file contents. But yeah, there's nothing stopping you from pinning something different that looks the same to a person.

ranger_danger
0 replies
14h37m

Basically it depends on specific settings that can be changed in the client as to how the individual block pieces are encoded and therefore what the resulting hash ends up being. So no, there's no inherent guarantee, but you may get lucky with some copies of the same file.

fwip
0 replies
19h21m

Yes, IPFS hashes individual "blocks" (pieces) of files. If two files have the same content, they will share block hashes.

rakoo
2 replies
18h48m

Would BitTorrent also be suitable for hosting embeddable content?

Same as IPFS: gateways can exist. It's not specific to Bittorrent, or IPFS.

A magnet URL is mainly a file hash and doesn't seem to encode a particular peer server, kinda like IPFS.

Magnet links can include an HTTP server that also hosts the content

hot_gril
1 replies
18h40m

I'm sure a BitTorrent gateway can exist, but I'm wondering why it doesn't seem to "be a thing." I've never seen one used, nor do I see an obvious public one when searching. Whereas IPFS gateways are so mainstream that even Cloudflare runs a public one.

rakoo
0 replies
7h15m

It's because of the kind of content that is shared. BitTorrent serves a lot of content you are not allowed to redistribute, so having an open gateway immediately puts you at risk of aiding the distribution of content. But it does work, someone even made something native to browsers so browsers themselves can share content: https://webtorrent.io/. There are even fuse "gateways" to make it native to your computer and pretend the files exist locally: https://github.com/search?q=bittorrent+fuse&type=repositorie...

IPFS doesn't seem to be used for that kind of content much, it seems to be targeted more towards web-native content (html pages, images, that kind of stuff). It's probably safer for Cloudflare to run this.

flashm
0 replies
20h41m

‘ipfs add’ pins the file by default, not sure if that’s recent behaviour though.

Hendrikto
0 replies
7h57m

I remember as recently as 2021 having to manually pin each file in a directory since pinning the directory does not recursively pin files. Doing this by hand for small folders is okay, but large folders? Not so much.

Can't you just use a glob?

MenhirMike
8 replies
21h25m

the more readers it has, the faster it is to use it since some readers help to spread the content (Scalable).

In other words: Once a few big websites are established, no small website will ever be able to gain traction again because the big websites are simply easier to reach and thus more attractive to use. And just like an unpopular old torrent, eventually you run out of seeders and your site is lost forever.

One can argue about the value of low traffic websites, but I got to wonder: Who in their right mind thinks "Yeah, I want to make a website and then have others decide if it's allowed to live". Then again, maybe that kind of "survival of the fittest" is appealing to some folks.

As far as I am concerned, it sounds like a stupid idea. (Which the author goes into in more detail, so that's a good write-up.)

fodkodrasz
6 replies
21h13m

This is a false dilemma. Why would you not "seed" (pin) your own site, rather than be at others' mercy? You pin it, and when others also do so, the readers get faster and more redundant service.

kimixa
2 replies
21h1m

For "unpopular" sites, having a single origin somewhat removes the advantages of IPFS: it's not decentralized, not censorship-resilient, and still costs the publisher ongoing infrastructure to host it. Yet it still has the disadvantages and complexity of IPFS vs a static HTTP server.

So if you're not going to be publishing something that will always have multiple copies floating around, why use IPFS?

fodkodrasz
0 replies
20h31m

1. To give yourself a chance to avoid being slashdotted. 2. To allow anybody who finds it valuable to archive it, or parts of it.

The complexity of IPFS is another thing, which should be solved. However popular or unpopular your site might be, you must host it somewhere somehow, if you wish to be sure it sticks around. It is as simple as that.

cle
0 replies
20h5m

It helps to use more specific terms than "decentralized" and "censorship resilient"; there are a lot of attack vectors for both. IPFS certainly does address some of the attack vectors, but not all. For example, if the "centralized" thing you're worried about is DNS and certificate authorities, then you can avoid those authorities entirely with IPFS. Replication is one aspect of centralization, and IPFS doesn't completely fail at it, it's just more expensive (you can guarantee two-node replication, you will just have to run two nodes). And there are other aspects not addressed by IPFS at all, like its reliance on IP address routing infrastructure.

p4bl0
1 replies
20h53m

If you need to pin your content anyway, it's actually faster and less expensive to host a normal website then. And if you want to get it to readers faster, there are a lot of cheap or free CDNs available, but that's generally not even an issue with the kind of websites we're talking about here when they're served normally, over the web.

fodkodrasz
0 replies
20h30m

Yes, that is the state of affairs now. I can use cloudfront for my site, but cannot use it to pin my ipfs site (should I have one) as far as I know.

You are fighting a strawman. If you don't take care of your site, but expect others to take care of it (pin it), then it is not your site. You must ensure it has at least one pinned version. Others might or might not pin it; it depends on the popularity, or the accessibility of the stack, which is lacking right now according to the article.

kevincox
0 replies
6m

It is also worth noting that most IPFS peers will cache content for some period of time even if not explicitly pinned. So if your site hits the top of Hacker News (and everyone were using a native IPFS browser), you would suddenly have thousands of peers with the content available. So in theory your one node can serve infinite users, since once you serve one copy to the first user, that user can replicate it to others. (The real world is of course more complicated, but the idea works decently well in practice.)

lelandbatey
0 replies
21h18m

It's not up to others alone; you get a say too because you can seed your own content and that can be fast. In the worst case of no interest, then it's approximately the same as you hosting your own website in the world of today. This doesn't exonerate the shortfall of the "old torrent" pattern though, as you say.

schmichael
7 replies
21h43m

I pinned content from my own server and played forever with settings and definitions, but couldn’t get the content to be available everywhere.

The blog you’re reading now is built with Jekyll and is hosted on my own 10$ server.

don’t get me wrong, I’m still an IPFS fanboy.

...how could you still be a fanboy? When IPFS cannot fulfill even the most basic function of globally serving static content, why does it deserve anyone's interest? It's not even new or cutting edge at this point. After 8 years of development how can the most basic functionality still not work for even an expert?

withinboredom
3 replies
21h35m

[wrong thread]

TheRealPomax
2 replies
21h25m

What on earth are you talking about? IPFS has nothing to do with "coins", it's a distributed data management system. You can use IPFS for finance in the same way you can use a database for finance. You can also use IPFS for literally anything else that falls in the "data that you want other people to be able to access" category.

withinboredom
0 replies
21h22m

oh shit, I replied to the wrong comment.

nottorp
0 replies
6h45m

But it seems associated with blockchain and something named filecoin?

There goes all credibility.

neiman
1 replies
21h38m

It worked fine before for many years when it was slightly less popular.

They're having growing pains due to scalability problems and some libraries, like Helia (the JS library of IPFS), being new. I guess I'm also quite stubborn in wanting to do it my way, without the aid of any services, and for the content I pin to be available in all places, including Helia in the browser.

p4bl0
0 replies
21h32m

The official IPFS client in Go has always been very, very hungry for resources. At some point it crashed because it needed too many file descriptors if it ran for too long. Even for a simple static site with infrequent updates it needed some maintenance. But even if one was willing to put the effort in, it was not actually rewarded, because if the server where you pin your website is offline, the truth is that your site is offline too, so what's the point?

heipei
0 replies
21h24m

It's working fabulously for hosting phishing websites fronted by the popular IPFS gateway providers like Cloudflare, so at least there's that...

lindig
7 replies
21h42m

Filecoin, which is based on IPFS, creates a market for unused storage. I think that idea is great but for adoption it needs to be as simple as Dropbox to store files. But visit https://filecoin.io/ and the dropbox-like app that you could be willing to try is nowhere to be found. So maybe it is an enterprise solution? That isn't spelled out either. So I am not surprised that this has little traction and the article further confirms the impression.

pierat
1 replies
21h40m

Here's your $.10/day for that 1GB with bandwidth... but running the filecoin stack will cost you a $50/mo server.

That fucker's a PIG on cpu and ram.

kkielhofner
0 replies
20h11m

IPFS is as well.

Clearly much more going on but take a machine that can serve 10k req/s with [insert 100 things here] without flinching and watch it maybe, just maybe, do 10 with IPFS.

I'm not kidding.

poorman
0 replies
20h39m

That flagship app you are looking for seems to be https://nft.storage/ (by Protocol Labs).

nickstinemates
0 replies
20h27m

this is what storj.io does.

diggan
0 replies
21h11m

to be as simple as Dropbox to store files. But visit https://filecoin.io/ and the dropbox-like app that you could be willing to try is nowhere to be found

I agree with this fully. But as said elsewhere, it's kind of far away from that, and also slightly misdirected.

Imagine asking someone to get started with web development by sending them to https://www.ietf.org/rfc/rfc793.txt (the TCP specification). Filecoin is just the protocol, and won't ever solve that particular problem, as it's not focused on solving that particular problem, it's up to client implementations to solve.

But the ecosystem is for sure missing an easy to use / end-user application like Dropbox for storing files in a decentralized and secure way.

chrisco255
0 replies
13h35m

Fileverse is an app built on ipfs and it is very user friendly: https://fileverse.io/

ahmedfromtunis
0 replies
19h27m

This is, in my opinion, the first and only "solution" to a real problem built using the blockchain.

Distributed file storage, if done correctly, can be a transformative technology. And it can be even more revolutionary implemented at the OS level.

b_fiive
7 replies
21h22m

Totally biased founder here, but I work on https://github.com/n0-computer/iroh, a thing that started off as an IPFS implementation in Rust, but we broke out & ended up doing our own thing. We're not at the point where iroh implements "the full IPFS experience" (some parts border on impossible to do while keeping a decentralized promise), but we're getting closer to the "p2p website hosting" use case each week.

hot_gril
2 replies
21h6m

Is it named after the Avatar character?

joshspankit
0 replies
20h57m

Yes, and that makes me happy every time I see it.

b_fiive
0 replies
20h50m

I can neither confirm nor deny, but oh boy does uncle iroh seem cool

gabesullice
1 replies
16h58m

Super intriguing. Thanks for sharing!

It reminds me a bit of an early Go project called Upspin [1]. And also a bit of Solid [2]. Did you take any inspiration from them?

What excites me about your project is that you're addressing the elephant in the room when it comes to data sovereignty (~nobody wants to self-host a personal database but their personal devices aren't publicly accessible) in an elegant way.

By storing the data on my personal device and (presumably?) paying for a managed relay (and maybe an encrypted backup), I can keep my data in my physical possession, but I won't have to host anything on my own. Is that the idea?

[1] https://upspin.io/ [2] https://solidproject.org/

b_fiive
0 replies
16h30m

Ah <3 Upspin! It's been a minute. I've personally read through & love Upspin. I always found Solid a little too tied to RDF & the semantic web push. The Solid project is/was a super valiant effort, but these days I feel like the semantic web push peaked with HTML & schema.org.

By storing the data on my personal device and (presumably?) paying for a managed relay (and maybe an encrypted backup), I can keep my data in my physical possession, but I won't have to host anything on my own. Is that the idea?

We're hoping to give that exact setup to app developers (maybe that's you :). We still have work to do on encryption at rest to keep the hosted server "dumb", and more plumbing into existing app development ecosystems like flutter, expo, tauri, etc. but yes, that's the hope. Give developers tools to ship apps that renegotiate the "user social contract".

ChadNauseam
1 replies
4h56m

I feel like this type of project is a natural fit as the transport layer for CRDT-based applications. Something like: each user/device has an append-only log of CRDT events, then applications merge events from multiple logs to create a collaborative experience. (I have no idea if iroh supports append-only logs, but it seems like a common thing for projects in this space to support.) What do you think?

b_fiive
0 replies
4h36m

yep! Iroh documents [1] give you a very nice primitive that is technically a CRDT, but in practice most people use it as a key-value store. We really wanted a mutable solution that would support real deletions (instead of tombstones), and starting with append-only logs locks you out of that choice.

With Iroh + CRDTs you have three choices:

1. Use iroh's connection & gossip layers in conjunction with a mature CRDT library like Automerge or Y.js.
2. Build a more sophisticated CRDT on top of iroh documents.
3. Worry a little less about whether your data structures form a semilattice & build on a last-writer-wins key-value store (basically: just use documents).

We've seen uses for all three. Hope that helps!
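
For option 3, a last-writer-wins map is about the simplest convergent structure there is; a toy Python sketch (purely illustrative, unrelated to iroh's actual document API):

    import time, uuid

    class LWWMap:
        """Last-writer-wins map: merging replicas in any order converges to the same
        state, because each entry carries a (timestamp, writer_id) tag and the max wins."""
        def __init__(self):
            self.replica_id = uuid.uuid4().hex
            self.entries = {}  # key -> (timestamp, writer_id, value)

        def set(self, key, value):
            self.entries[key] = (time.time_ns(), self.replica_id, value)

        def get(self, key):
            return self.entries[key][2]

        def merge(self, other):
            for key, entry in other.entries.items():
                if key not in self.entries or entry[:2] > self.entries[key][:2]:
                    self.entries[key] = entry  # ties broken deterministically by writer_id

    a, b = LWWMap(), LWWMap()
    a.set("title", "draft")
    b.set("title", "final")                # the later write (by local clock)
    a.merge(b); b.merge(a)
    print(a.get("title"), b.get("title"))  # both replicas converge to "final"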

[1] https://iroh.computer/docs/layers/documents

yieldcrv
4 replies
21h26m

I host all the static files of my Netlify and Vercel servers on IPFS

it is simple enough and free even on hosted solutions, and it keeps my Netlify and Vercel free during spikes in traffic

but the availability issue is perplexing, just like OP encountered

Some people just randomly won't be able to resolve some assets on your site, sometimes! The gateways go up and down, their cache of your file comes and goes. Browsers don't natively resolve ipfs:// URIs. It's very weird.

indigodaddy
3 replies
20h58m

“ and it keeps my Netlify and Vercel free during spikes in traffic” — how exactly would this help re: potentially breaching outbound transfer limits?

yieldcrv
2 replies
20h55m

If static assets aren't being requested from their server, then they don't contribute to your bandwidth meter.

indigodaddy
1 replies
20h22m

You still have to transfer that data through the network pipe to the end user no? How the server itself accesses the files to do so seems irrelevant to me.

Drblessing
0 replies
19h7m

I think if the assets are served in the HTML from an IPFS gateway, then it doesn't add to your outbound traffic. Also, once browsers natively support ipfs:// static content, the game changes and the IPFS party really gets started.

axegon_
4 replies
21h11m

I see where the author is coming from but I find something else strange: Considering that the blog is in practice a collection of static files, I don't see the benefit of paying for a server at all. Host it on GitHub; if GitHub gets killed off for whatever reason, switch to something else and move on. Seems like an unnecessary overhead to me.

neiman
2 replies
21h5m

I get told that a lot! xD

My original aim was to write an IPFS blogging engine for my personal use, so I needed some dynamic loading from IPFS there.

Now I switched to Jekyll, and it would be easier to host the blog on Github indeed, but I'm kind of playing a quixotic game of trying to minimize the presence of Google/Microsoft/Amazon and other big-tech in my life.

walterbell
1 replies
18h8m

Free tier of indie https://neocities.org supports static sites like Jekyll.

rapnie
0 replies
9h18m

https://codeberg.page .. similar idea to Github Pages.

hot_gril
0 replies
21h9m

Same. IPFS seems far more useful for hosting static content that might be embedded in multiple websites.

nikisweeting
2 replies
9h34m

Is there anything that allows one to mount an IPFS dir as a read/write FUSE drive yet? Once they have that, I'm all in, even if it's slow...

willscott
0 replies
7h24m

https://github.com/djdv/go-filesystem-utils/pull/40 lets you interact with IPFS as an NFS mount

ianopolous
0 replies
9h9m

We have a FUSE mount in Peergos[0][1] (built on IPFS). It lets you mount any folder read or write (within your access).

[0] https://github.com/peergos/peergos [1] https://peergos.org

mbgerring
1 replies
20h29m

Did they ever address the issue with IPFS where running a node necessarily required you to distribute an unknowable number of pieces of material you may not want to be hosting (like CSAM, for example)?

treyd
0 replies
18h16m

That has never been an issue. You only seed what you choose to. It's basically the same model as BitTorrent but with about 15 fewer years of R&D behind it and much less organic user adoption.

geokon
1 replies
10h15m

"This is a huge difference from BitTorrent where the only way to get content is to run your own software, and when you download something you also share it, by default."

As far as I understand this isn't a solved technical problem - it's mostly a cultural quirk, probably just due to how the early torrent clients were configured.

There is for instance a major Chinese torrent client (whose name escapes me) that doesn't seed by default - so the whole thing could easily have not worked. If IPFS clients don't seed by default then that kinda sounds like either a design mistake or a "culture problem".

I've always wondered if there was a way to check whether a client is reseeding (e.g. request and download a bit from a different IP) and then blacklist or throttle them if they refuse to provide the data.
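
Something like this hypothetical probe, where the Peer interface is made up for the sketch and doesn't correspond to any real BitTorrent or IPFS client API:

    // Ask a peer (from a separate identity/IP) for a small piece it should be
    // sharing, and record whether it actually answers.
    interface Peer {
      id: string;
      fetchPiece(pieceIndex: number): Promise<Uint8Array | null>;
    }

    async function probeSeeder(peer: Peer, pieceIndex: number): Promise<boolean> {
      try {
        const piece = await peer.fetchPiece(pieceIndex);
        return piece !== null && piece.length > 0; // answered: the peer really is sharing
      } catch {
        return false;                              // refused or timed out: treat as a non-seeder
      }
    }

    // a client could then throttle or deprioritize peers that fail the probe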

wruza
0 replies
9h46m

They probably don't seed by default for a good reason. While torrents aren't inherently political, unlike Signal and others, their culture sits close to “legal issues”. As a seeder I respect that, and that's why I seed to high ratios. For every seeder of a specific piece of content there are 10x (20x? 50x?) more people who cannot share it back.

But if you want to fence them off, you can use private trackers with ul/dl ratio accounting.

filebase
1 replies
20h37m
neiman
0 replies
20h23m

Oh yes, I even had a free IPNS pinning service (for community members mostly) that I built with a friend.

https://dwebservices.xyz/

anacrolix
1 replies
17h42m

For a BitTorrent based take on IPFS: https://GitHub.com/anacrolix/btlink

Avamander
0 replies
7h53m

This seems significantly more viable than IPFS, if you can leverage all the existing available torrents out there.

zubairq
0 replies
11h20m

I did a similar thing on a project of mine, where I used IPFS as the only long-term storage layer. I still use IPFS, but now more as "long-term unreliable storage". Note that I use IPFS without Filecoin or any external pinning services, and instead pin the content from my own server on a regular basis.
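
For reference, that periodic re-pin can be a small script against the local kubo RPC API (this sketch assumes the default port and uses placeholder CIDs):

    // Re-pin a list of CIDs against a local kubo node (RPC on 127.0.0.1:5001).
    const CIDS = ["<cid-1>", "<cid-2>"]; // content this server wants to keep alive

    async function repinAll(): Promise<void> {
      for (const cid of CIDS) {
        // kubo RPC: POST /api/v0/pin/add?arg=<cid>
        const res = await fetch(`http://127.0.0.1:5001/api/v0/pin/add?arg=${cid}`, { method: "POST" });
        if (!res.ok) console.error(`failed to pin ${cid}: ${res.status}`);
      }
    }

    repinAll(); // run from cron, a systemd timer, or setInterval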

The current status is that I plan to bring IPFS back into heavier use in my project in the future, but I will wait for the ecosystem to mature a bit more first with regard to libraries.

wyck
0 replies
19h29m

I built a blog on IPFS; it's basically reliant on several centralized services to actually work in browsers (DNS, GitHub, Fleek, etc). I wrote about how I built it here; the experience was underwhelming. https://labcabin.eth.limo/writing/how-site-built.html

tempaccount1234
0 replies
20h1m

As a user I'd stay away from sharing over IPFS for legal reasons. Just like with torrenting, by actively distributing content I take on legal responsibility (at least in Europe, where I'm located). That risk is tolerable for a legal torrent because the file doesn't change over time. For a changing web site, I'd constantly have to monitor the site for changes or trust the site 100% - which is not happening as soon as the site is somewhat controversial…

shp0ngle
0 replies
20h41m

Yeah this is exactly my experience with IPFS. Nobody actually uses IPFS directly, and even those few that do never actually pin anything because it's an extra step.

(Also I heard it's computationally costly, but I'm not sure if that's true; I can't imagine why it would actually be the case.)

As a result it's actually more centralised than the web: there are like 3 pinning services that everyone uses. At which point I don't get the extra hoops.

sharperguy
0 replies
6h6m

I think the main difference between IPFS and BitTorrent in terms of usage patterns is that IPFS is being used to host content that could easily be served by a regular HTTP server, whereas BitTorrent hosts data that is highly desired and would be impossible or very expensive to host over HTTP.

And so naturally relays pop up, and the relays end up being more convenient than actually using the underlying protocol.

nonrandomstring
0 replies
21h38m

Well done to the author for writing this up.

Having tried fringe technologies over the years - spun up a server, run them for a few months, struggled and seen all the rough edges and loose threads - I often come to the point of feeling: this technology is good, but it's not ready yet. Time to let more determined people carry the torch.

The upside is:

- you tried, and so contributed to the ecosystem

- you told people what needs improving

Just quitting and not writing about your experience seems a waste for everyone, so good to know why web hosting on IPFS is still rough.

mawise
0 replies
19h12m

What about a cross between IPFS/Nostr and RSS? RSS (or Atom) already provides a widely-adopted syndication structure. All that's missing are signatures, so that a middleman can re-broadcast the same content. Maybe with signatures that's really reinventing SSB[1]. But if we think of the network in a more social sense, where you're connecting to trusted peers (think: IRL friends), maybe the signatures aren't even that important. All that's left then is to separate the identifier for the feed from its location - today both are the URL - so you can fetch a feed from a non-authoritative peer.

[1]: https://en.wikipedia.org/wiki/Secure_Scuttlebutt
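
A sketch of what such signed, location-independent entries could look like (a hypothetical structure, not an existing spec; Node's built-in crypto stands in for whatever signature scheme a real system would pick):

    import { generateKeyPairSync, sign, verify } from "node:crypto";

    // The feed is identified by the author's public key, not by where it was fetched
    // from, so any peer can re-broadcast entries and readers can still verify them.
    interface FeedEntry {
      feedId: string;     // author's public key (PEM)
      published: string;  // ISO timestamp
      content: string;    // e.g. a serialized Atom <entry>
      signature: string;  // base64 Ed25519 signature over published + content
    }

    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    function signEntry(content: string): FeedEntry {
      const published = new Date().toISOString();
      const sig = sign(null, Buffer.from(published + content), privateKey);
      return {
        feedId: publicKey.export({ type: "spki", format: "pem" }).toString(),
        published,
        content,
        signature: sig.toString("base64"),
      };
    }

    function verifyEntry(e: FeedEntry): boolean {
      // works no matter which peer relayed the entry
      return verify(null, Buffer.from(e.published + e.content), e.feedId, Buffer.from(e.signature, "base64"));
    }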

ianopolous
0 replies
9h24m

I've been hosting my website on Peergos (built on ipfs) for years now (founder here). Peergos solves a lot of the issues with mutable data (also privacy, access control). You can see how fast updates show up from an independent server here: https://peergos.org/posts/p2p-web-hosting

My personal website served from a peergos gateway (anyone can run one) is https://ianopolous.peergos.me/

If you want to read more check out our book: https://book.peergos.org

hirako2000
0 replies
9h57m

On the need to run a node: I have a little project to wrap static site content with an IPFS node in JS.

E.g. there is already Helia.

https://github.com/ipfs/helia

Just waiting for running a node in a browser tab to become insignificant, resource-wise.
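
A minimal sketch along those lines, based on Helia's published quick-start (exact APIs may shift between versions):

    // Spin up an in-process IPFS node (Node or a browser bundle) and add content to it.
    import { createHelia } from "helia";
    import { unixfs } from "@helia/unixfs";

    const helia = await createHelia();
    const fs = unixfs(helia);

    const cid = await fs.addBytes(new TextEncoder().encode("hello from the browser tab"));
    console.log("content is now addressable at", cid.toString());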

fsiefken
0 replies
18h59m

Perhaps use the DAT p2p network as an alternative? https://kovah.medium.com/publishing-a-static-website-to-the-...

fractalnetworks
0 replies
13h48m

yup, IPFS literally doesn't work - why else did they need to do an ICO and introduce centralized indexers...

deephire
0 replies
19h7m

Any thoughts on the services that make hosting a blog on IPFS easier?

Services like https://dappling.network, https://spheron.network, https://fleek.co, etc?

I've seen some DeFi protocols use IPFS to add some resiliency to their frontends. If their centralized frontend with vercel or whatever is down, they can direct users to their IPFS/ENS entrypoint.

dannyobrien
0 replies
18h42m

I'm not sure quite how relevant it is to Neiman's work, but this is a pretty interesting blog post on decentralized web apps, and the tradeoffs with using various versions of IPFS in the browser: https://blog.ipfs.tech/dapps-ipfs/

alucart
0 replies
20h24m

I'm exploring a similar project, having a "decentralized" website (hosted on github) which saves users' data in the blockchain itself and then provides that same data to other users through public APIs and/or from actual blockchain clients.

Wonder if there is actual use or need for such a thing.

ChrisArchitect
0 replies
20h58m

Aside: I visited this curious as to what the Nieman Journalism Lab (https://www.niemanlab.org/) was doing with IPFS, if anything. Not the nicest near-collision naming move.