Visiting a Web Page or a Public URL Should Be Safe, Predictable, and Benign
Today in IRC someone pointed out an issue with Web pages: even when "cleverer" browsers block many of their elements, they still fail to 'shave off' the more malicious ones. Speaking from personal experience, as well as my wife's (another machine, a different version of Debian, not the same browsers), there are pages that, once visited, can very quickly exhaust all the RAM, forcing either a reboot or a very lengthy/complicated recovery process. Then sites wonder why many folks are reluctant to visit or, if they do visit, block lots of things (not just ads)...
RSS feeds have long provided somewhat of a workaround, but they don't address the core issue. Why did the Web become so malicious, and when did merely visiting some address (or clicking some URL/hyperlink) become a risk to one's machine? If there are open files when people reboot, then there's a prospect of data loss too, not just lost time. One hour from now my main laptop will exceed 700 days of uptime (the last reboot was in 2023), but more than a dozen times already I have had to rush to tty2 and killall the browser's processes due to notorious domains, which have the same effect on other browsers on another machine (so this is reproducible).
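For readers who have never had to do this, here is a minimal sketch of that recovery, assuming a Debian-like system with the psmisc tools installed; the process name firefox-esr is only an example, as it varies by browser:

    # Switch from the frozen desktop to a text console:
    # press Ctrl+Alt+F2, then log in.
    # Confirm that the browser is what's eating the RAM:
    free -m
    ps aux --sort=-%mem | head
    # Kill every process by that name; SIGTERM first,
    # SIGKILL only if it's ignored:
    killall firefox-esr
    killall -9 firefox-esr

Done in time, this spares the uptime and any open files; done too late, the kernel's OOM killer (or a hard reboot) decides instead.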
Maybe the issue isn't the Web, Web browsers, or Web sites. Maybe the issue is the growing complexity and the leniency towards sites that run proprietary software on one's machine merely because one visits them.
As usual, simple tends to be better. Today's Web and "Webapps" aren't simple to use and usually aren't easy to navigate either. They're just bloatware, and sometimes they're malicious, even if unintentionally so (due to poor testing or a lack of control over the third parties invoked when one loads a bloated site/page).
It's probably too late to "fix" the Web. We need to adopt alternatives to it and encourage others to do the same. █
