
Techrights Coding Projects: Making the Web Light Again

A boatload of bytes that serve no purpose at all (99% of all the traffic sent from some Web sites)

[Image: a very bloated boat]



Summary: Ongoing technical projects that improve access to information and better organise credible information, preceded by a depressing overview of the health of the Web (it's unbelievably bloated)

OVER the past few months (since spring) we've been working hard on coding automation and improving the back end in various ways. More than 100 hours were spent on this, and it puts us in a better position to grow in the long run and also to improve uptime. Last year we left behind most US/USPTO coverage to better focus on the European Patent Office (EPO) and GNU/Linux -- a subject neglected here for nearly half a decade (more so after we had begun coverage of EPO scandals).



As readers may have noticed, in recent months we were able to produce more daily links (and more per day). About a month ago we reduced the volume of political coverage in these links. Journalism is waning and the quality of reporting -- not to mention sites -- is rapidly declining.

"As readers may have noticed, in recent months we were able to produce more daily links (and more per day)."To quote one of our guys, "looking at the insides of today's web sites has been one of the most depressing things I have experienced in recent decades. I underestimated the cruft in an earlier message. Probably 95% of the bytes transmitted between client and server have nothing to do with content. That's a truly rotten infrastructure upon which society is tottering."

We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam. It's the only way to keep up with quality while leaving out cruft and FUD (and Microsoft's googlebombing). A huge amount of effort goes into this and it takes a lot of time. It's all done manually.
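
An RSS feed is just a small XML file, so surveying dozens of sites this way costs very little bandwidth. As a rough illustration (the feed URL is a placeholder, and this quick-and-dirty extraction assumes a plain RSS 2.0 feed rather than Atom):

$ curl -s https://example.com/feed.xml \
    | grep -o '<link>[^<]*</link>' \
    | sed 's/<[^>]*>//g'       # one article URL per line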

"We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam.""I've been letting wget below run while I am mostly outside painting part of the house," said that guy, having chosen to survey/assess the above-stated problem. "It turns out that the idea that 95% of what web severs send is crap was too optimistic. I spidered the latest URL from each one of the unique sites sent in the links from January through July and measured the raw size for the individual pages and their prerequisites. Each article, including any duds and 404 messages, averaged 42 objects [3] per article. The median, however, was 22 objects. Many had hundreds of objects, not counting cookies or scripts that call in scripts.

"I measured disk space for each article, then I ran lynx over the same URLs to get the approximate size of the content. If one counts everything as content then the lynx output is on average 1% the size of the raw material. If I estimate that only 75% or 50% of the text rendered is actual content then that number obviously goes down proportionally.

"I suppose that means that 99% of the electricity used to push those bits around is wasted as well. By extension, it could also mean that 99% of the greenhouse gases produced by that electricity is produced for no reason.

"The results are not scientifically sound but satisfy my curiosity on the topic, for now.

"Eliminating the dud URLs will produce a much higher object count.

"Using more mainstream sites and fewer tech blogs will drive up the article sizes greatly.

"The work is not peer reviewed or even properly planned. I just tried some spur of the minute checks on article sizes in the first way I could think of," said the guy. We covered this subject before in relation to JavaScript bloat and sites' simplicity, but here we have actual numbers to present.

"The numbers depend on the quality of the data," the guy added, "that is to say the selection of links and the culling the results of 404's, paywall messages, and cookie warnings and so on.

"As mentioned I just took the latest link from each of the sites I have bookmarked this year. That skews it towards lean tech blogs. Though some publishers which should know very much better are real pigs:




$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \
    https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/
. . .

$ lynx --dump https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/ > test.b

$ du -bs ./test.?
2485779	./test.a
35109	./test.b
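
That works out to roughly 1.4% text. The ratio can be computed straight from the du figures shown above, for instance:

$ du -bs ./test.a ./test.b \
    | awk 'NR==1{ raw=$1 } NR==2{ txt=$1 } END{ printf "%.2f%%\n", 100*txt/raw }'
1.41%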



"Trimming some of the lines of cruft from the text version for that article, I get close to two orders of magnitude difference between the original edition versus the trimmed text edition:

$ du -bs ./test.?
2485779	./test.a
35109	./test.b
27147	./test.c
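
The trimming itself was done by hand, but much of a lynx dump's overhead is mechanical: lynx appends a "References" section listing every link on the page, which can be dropped automatically. A sketch of such a trim, assuming the dump sits in test.b as above (this is only an illustration, not necessarily how test.c was made):

$ sed '/^References$/,$d' test.b > test.c   # drop the trailing link list lynx appends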


"Also the trimmed text edition is close to 75% the size of the automated text edition. So, at least for that article, the guess of 75% content may be about right. However, given the quick and dirty approach, of this survey, not much can be said conclusively except 1) there is a lot of waste, 2) there is an opportunity for someone to do an easy piece of research."

Based on links from 2019-08-08 and 2019-08-09, we get one set of results (we extracted all URLs saved from January 2019 through July 2019; http and https only, eliminating PDF and other links to obviously non-HTML material). Technical appendices and footnotes are below for those wishing to explore further and reproduce the results.
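
A sketch of that extraction step, for anybody wishing to reproduce it (the directory of saved daily links is hypothetical; only the filtering logic matters):

$ grep -Ehor 'https?://[^"<> ]+' ./daily-links/2019-0[1-7]/ \
    | grep -Evi '\.(pdf|zip|jpe?g|png|gif|mp[34])([?#].*)?$' \
    | sort -u > l               # one unique candidate URL per line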







+ this only retrieves the first layer of JavaScript, far from all of it
+ some sites gave wget trouble; the agent string should have been fiddled with, --user-agent="" (see the example below)
+ too many sites respond without proper HTTP response headers, which slows collection down intolerably
+ the pages themselves often contain many dead links
+ serial fetching is slow and, because the sites are unique, requests were run in small parallel batches (see footnote [1])
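
On the wget trouble mentioned above: some servers refuse requests bearing wget's default agent string, and overriding it is usually enough. An example of the workaround (the agent string shown is arbitrary, and $u is assumed to hold the URL):

$ wget --continue --page-requisites --timeout=30 \
    --user-agent="Mozilla/5.0 (X11; Linux x86_64)" "$u"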

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
91

$ find . -mindepth 1 -type f -print | wc -l
4171

which is an average of about 46 objects per "article"

+ some sites were tech blogs with lean, hand-crafted HTML; mainstream sites are much heavier, so the above average is skewed towards being too light

Quantity and size of objects associated with articles (not counting cookies or secondary scripts):

$ find . -mindepth 1 -type f -printf '%s\t%p\n' \
    | sort -k1,1n -k2,2 \
    | awk '$1>10{ sum+=$1; c++; s[c]=$1; n[c]=$2 }
        END{
            printf "%10s\t%10s\n","Bytes","Measurement";
            printf "%10d\tSMALLEST\n",s[1];
            for (i in s){ if(i==int(c/2)){ printf "%10d\tMEDIAN SIZE\n",s[i]; } };
            printf "%10d\tLARGEST\n",s[c];
            printf "%10d\tAVG SIZE\n",sum/c;
            printf "%10d\tCOUNT\n",c;
        }'

     Bytes	File Size
        13	SMALLEST
     10056	MEDIAN SIZE
  32035328	LARGEST
     53643	AVG SIZE
     38164	COUNT









Overall article size [1], including only the first layer of scripts:

     Bytes	Article Size
      8442	SMALLEST
    995476	MEDIAN
  61097209	LARGEST
   2319854	AVG
       921	COUNT

Estimated content [2] size, including links, headers, navigation text, etc.:

+ files with errors or warnings were deleted, which was probably a mistake as it skews the results for lynx higher

     Bytes	Article Size
       929	SMALLEST
     18782	MEDIAN
    244311	LARGEST
     23997	AVG
       889	COUNT

+ lynx returns all text within the document, not just the main content; at 75% content the figures are more realistic for some sites:

     Bytes	Measurement
       697	SMALLEST
     14087	MEDIAN
    183233	LARGEST
     17998	AVG
       889	COUNT

At 50% content the figures are more realistic for other sites:

     Bytes	Measurement
       465	SMALLEST
      9391	MEDIAN
    122156	LARGEST
     11999	AVG
       889	COUNT
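
The 75% and 50% tables follow directly from the lynx figures above: each size is multiplied by the assumed content fraction while the COUNT row stays fixed (for example, 18782 × 0.75 ≈ 14087). A sketch of that scaling, assuming the lynx statistics sit in a tab-separated file (hypothetical name lynx-stats; rounding may differ by a byte):

$ awk -F'\t' -v f=0.75 '
    $1+0 == 0 || $2 == "COUNT" { print; next }   # keep the header and COUNT rows as-is
    { printf "%10.0f\t%s\n", $1 * f, $2 }
' lynx-stats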






       


$ du -bs * \
    | sort -k1,1n -k2,2 \
    | awk '$2!="l" && $1 { c++; s[c]=$1; n[c]=$2; sum+=$1 }
        END{
            for (i in s){ if(i==int(c/2)){ m=i }; printf "% 10d\t%s\n", s[i],n[i] };
            printf "% 10s\tArticle Size\n","Bytes";
            printf "% 10d\tSMALLEST %s\n",s[1],n[1];
            printf "% 10d\tMEDIAN %s\n",s[m],n[m];
            printf "% 10d\tLARGEST %s\n",s[c],n[c];
            printf "% 10d\tAVG\n", sum/c;
            printf "% 10d\tCOUNT\n",c;
        }' OFS=$'\t'









[1]

$ time bash -c 'count=0; shuf l \
    | while read u; do
        echo $u;
        wget --continue --page-requisites --timeout=30 "$u" &
        echo $((count++));
        if ((count % 5 == 0)); then wait; fi;
      done;'









[2]

$ count=0; time for i in $(cat l); do
    echo; echo $i;
    lynx -dump "$i" > $count;
    echo $((count++));
  done;








[3]

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
921

$ find . -mindepth 1 -type f -print | wc -l
38249









[4]

$ find . -mindepth 1 -type f -print \
    | awk '{ sub("\./",""); sub("/.*",""); print; }' \
    | uniq -c | sort -k1,1n -k2,2 \
    | awk '$1{ c++; s[c]=$1; sum+=$1; }
        END{
            for (i in s){ if(i == int(c/2)){ m=s[i]; } };
            print "MEDIAN: ", m;
            print "AVG", sum/c;
            print "Quantity", c;
        }'









[5]

$ find . -mindepth 1 -type f -name '*.js' -exec du -sh {} \; \
    | sort -k1,1rh | head
 16M	./www.icij.org/app/themes/icij/dist/scripts/main_8707d181.js
3.4M	./europeanconservative.com/wp-content/themes/Generations/assets/scripts/fontawesome-all.min.js
1.8M	./www.9news.com.au/assets/main.f7ba1448.js
1.8M	./www.technologyreview.com/_next/static/chunks/commons.7eed6fd0fd49f117e780.js
1.8M	./www.thetimes.co.uk/d/js/app-7a9b7f4da3.js
1.5M	./www.crossfit.com/main.997a9d1e71cdc5056c64.js
1.4M	./www.icann.org/assets/application-4366ce9f0552171ee2c82c9421d286b7ae8141d4c034a005c1ac3d7409eb118b.js
1.3M	./www.digitalhealth.net/wp-content/plugins/event-espresso-core-reg/assets/dist/ee-vendor.e12aca2f149e71e409e8.dist.js
1.2M	./www.fresnobee.com/wps/build/webpack/videoStory.bundle-69dae9d5d577db8a7bb4.js
1.2M	./www.ft.lk/assets/libs/angular/angular/angular.js






[6] About page bloat: one can pick just about any page and find from one to close to two orders of magnitude difference between the lynx dump and the full web page. For example,




$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371
. . .

$ lynx --dump \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371 \
    > test.b

$ du -bs ./test.?
250793	./test.a
15385	./test.b
