THE Web is generally not decentralised. The Internet is not decentralised, either. DNS is centralised, certificates are centralised (if you rely on the concept of 'trusted' CAs), and with most services you rely on a single address for things to be accessible (it's possible to have multiple servers assigned/connected to the same address, but that's redundancy, not decentralisation).
"We started using it about a year ago, starting with daily bulletins and then adding IRC logs, both in the form of HTML and plain text (the latter was added months later)."IPFS is generally good; when it works, it sure works well (albeit not quickly, the latency is incredibly high, ranging from seconds to minutes, which is unsuitable for some use cases). As I noted in the video above, this week has been more eventful than usual because the IPFS daemon started respawning endlessly and was still malfunctioning. Last night it just completely stopped working all of a sudden. With DHT traffic taking up the lion's share of the pie (unless you serve something such as video), IPFS does not scale well. It's very costly, requiring a lot of energy and bandwidth for relatively small returns. To make matters worse, it occasionally can and would become inaccessible, it can use up all the bandwidth (requiring further configuration), and it's difficult to debug. So adopting IPFS for site-related delivery of content can become a lot of work devoted to maintenance, not to mention CPU cycles and bandwidth. We have a few thousands of objects in it and it's stretching it to the limits, at least for a device with a residential connection. Several other people have reported similar issues, so we know we're not alone. What's ugly is that many of those reports -- like much of the code -- are still hosted by proprietary software (Microsoft GitHub) and are "GitHub Issues", i.e. vendor lock-in. That sends across a negative message; GitHub is an enemy of decentralisation, it's proprietary, and it is a den of arbitrary censorship on behalf of Hollywood, governments, etc.
IPFS can very quickly become utterly wasteful, just like Bitcoin and other digital (or crypto) 'coins'. But unlike with coin mining, timeliness matters here. IPFS can become completely inaccessible for long periods of time, with no fallbacks in place. That means downtime. We've been spending hours on IPFS this past week and it's not even serving the content (it times out); it fails for long periods of time. It's almost impossible to debug because it is decentralised and diagnosing a swarm is incredibly difficult, akin to guesswork or "hocus pocus". One time it works, the next time it might not...
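Since there are no fallbacks built in, the pragmatic workaround is an external watchdog. Below is a minimal, hypothetical sketch (not what we actually run): it probes the local daemon with a known CID and restarts the daemon when retrieval keeps timing out. The CID, the probe interval, and the systemd unit name are all placeholders.

```python
#!/usr/bin/env python3
# Hypothetical watchdog: periodically try to read a small, locally pinned
# object through the local IPFS daemon; if it keeps timing out, restart
# the daemon. TEST_CID and the "ipfs" systemd unit are placeholders.
import subprocess
import time

TEST_CID = "QmYourKnownObjectHere"   # assumption: any small pinned object
FAILURES_BEFORE_RESTART = 3
CHECK_INTERVAL = 300                  # seconds between probes

def cid_reachable(cid, timeout="30s"):
    """Return True if `ipfs cat` streams the object before the timeout."""
    result = subprocess.run(
        ["ipfs", "cat", f"--timeout={timeout}", cid],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

failures = 0
while True:
    if cid_reachable(TEST_CID):
        failures = 0
    else:
        failures += 1
        if failures >= FAILURES_BEFORE_RESTART:
            # Assumption: the daemon runs under systemd as "ipfs.service".
            subprocess.run(["systemctl", "restart", "ipfs"], check=False)
            failures = 0
    time.sleep(CHECK_INTERVAL)
```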
"IPFS can very quickly become utterly wasteful, just like Bitcoin and other digital (or crypto) 'coins'."As noted at the end of this video, adding a new object scales poorly (but linearly, not quadratically/exponentially) as the index of objects needs to be rebuilt from scratch (in the Go implementation at least), which means that when the number of objects doubles it can take twice as long to add new ones. If this carries on for a few years it can take an hour if not hours just to add our daily objects. Hours of CPU cycles! Maybe future/present versions tackle this issue already, so we can be patient and hope IPFS will mature/evolve gracefully. Otherwise, it is untenable for the purposes/work we've assigned to it originally (last October).
The video isn't an admission of mistake or regret; I don't personally regret pouring so much energy into IPFS, I just want to express my thoughts on things that can be improved and probably should be improved. IPFS isn't a very young project (it has been around for quite a while), but its releases are still considered unstable and a work in progress. If we're part of a large experiment and the risk we take is occasional downtime (over IPFS, not Gemini or HTTP), then so be it. ⬆