
Techrights Coding Projects: Making the Web Light Again

A boatload of bytes that serve no purpose at all (99% of all the traffic sent from some Web sites)




Summary: Ongoing technical projects that improve access to information and better organise credible information, preceded by a depressing overview of the health of the Web (it's unbelievably bloated)

OVER the past few months (since spring) we've been working hard on coding automation and improving the back end in various ways. More than 100 hours were spent on this and it puts us in a better position to grow in the long run and also improve uptime. Last year we left behind most US/USPTO coverage to better focus on the European Patent Office (EPO) and GNU/Linux -- a subject neglected here for nearly half a decade (more so after we had begun coverage of EPO scandals).



As readers may have noticed, in recent months we were able to produce more daily links (and more per day). About a month ago we reduced the volume of political coverage in these links. Journalism is waning and the quality of reporting -- not to mention sites -- is rapidly declining.

"As readers may have noticed, in recent months we were able to produce more daily links (and more per day)."To quote one of our guys, "looking at the insides of today's web sites has been one of the most depressing things I have experienced in recent decades. I underestimated the cruft in an earlier message. Probably 95% of the bytes transmitted between client and server have nothing to do with content. That's a truly rotten infrastructure upon which society is tottering."

We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam. It's the only way to keep up with quality while leaving out cruft and FUD (and Microsoft's googlebombing). A huge amount of effort goes into this and it takes a lot of time. It's all done manually.
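For those curious about the mechanics, a rough sketch of pulling item links out of a single feed from the command line might look like this; the feed URL is hypothetical, GNU grep (for -oP) is assumed, and a real feed reader of course does far more:

# Hypothetical feed URL; lists every <link> element in the feed, one per line.
$ curl -s https://example.com/feed.xml | grep -oP '(?<=<link>)[^<]+'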

"We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam.""I've been letting wget below run while I am mostly outside painting part of the house," said that guy, having chosen to survey/assess the above-stated problem. "It turns out that the idea that 95% of what web severs send is crap was too optimistic. I spidered the latest URL from each one of the unique sites sent in the links from January through July and measured the raw size for the individual pages and their prerequisites. Each article, including any duds and 404 messages, averaged 42 objects [3] per article. The median, however, was 22 objects. Many had hundreds of objects, not counting cookies or scripts that call in scripts.

"I measured disk space for each article, then I ran lynx over the same URLs to get the approximate size of the content. If one counts everything as content then the lynx output is on average 1% the size of the raw material. If I estimate that only 75% or 50% of the text rendered is actual content then that number obviously goes down proportionally.

"I suppose that means that 99% of the electricity used to push those bits around is wasted as well. By extension, it could also mean that 99% of the greenhouse gases produced by that electricity is produced for no reason.

"The results are not scientifically sound but satisfy my curiosity on the topic, for now.

"Eliminating the dud URLs will produce a much higher object count.

"Using more mainstream sites and fewer tech blogs will drive up the article sizes greatly.

"The work is not peer reviewed or even properly planned. I just tried some spur of the minute checks on article sizes in the first way I could think of," said the guy. We covered this subject before in relation to JavaScript bloat and sites' simplicity, but here we have actual numbers to present.

"The numbers depend on the quality of the data," the guy added, "that is to say the selection of links and the culling the results of 404's, paywall messages, and cookie warnings and so on.

"As mentioned I just took the latest link from each of the sites I have bookmarked this year. That skews it towards lean tech blogs. Though some publishers which should know very much better are real pigs:




$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \
    https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/
. . .

$ lynx --dump https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/ > test.b

$ du -bs ./test.?
2485779	./test.a
35109	./test.b



"Trimming some of the lines of cruft from the text version for that article, I get close to two orders of magnitude difference between the original edition versus the trimmed text edition:

$ du -bs ./test.?
2485779	./test.a
35109	./test.b
27147	./test.c


"Also the trimmed text edition is close to 75% the size of the automated text edition. So, at least for that article, the guess of 75% content may be about right. However, given the quick and dirty approach, of this survey, not much can be said conclusively except 1) there is a lot of waste, 2) there is an opportunity for someone to do an easy piece of research."

Based on links from 2019-08-08 and 2019-08-09, we get one set of results (we extracted all URLs saved from January 2019 through July 2019; http and https only, eliminating PDF and other links to obviously non-HTML material). Technical appendices and footnotes are below for those wishing to explore further and reproduce the results.
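As a rough sketch of how such a list can be prepared (file names like all-links.txt and l.ok are ours; l matches the footnotes below), one can keep only http/https links, drop obviously non-HTML material, and optionally cull URLs that do not answer 200:

# Keep only http/https links and drop obviously non-HTML material.
$ grep -Ei '^https?://' all-links.txt \
    | grep -Eiv '\.(pdf|jpe?g|png|gif|mp[34]|zip|gz)(\?|$)' > l

# Optionally cull dead links and the like by keeping only URLs that answer 200.
$ while read -r u; do
      code="$(curl -o /dev/null -s -w '%{http_code}' --max-time 30 "$u")"
      [ "$code" = 200 ] && echo "$u"
  done < l > l.ok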







+ this only retrieves the first layer of JavaScript, far from all of it
+ some sites gave wget trouble; the agent string should have been fiddled with (--user-agent="") -- see the sketch after this list
+ too many sites respond without proper HTTP response headers, which slows collection down intolerably
+ the pages themselves often contain many dead links
+ serial fetching is slow and, because the sites are unique, fetches can be run in parallel (as in footnote [1])
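For reference, this is roughly what fiddling the agent string would look like; the string shown is just an illustration and "$u" stands for whatever URL is being fetched:

$ wget --continue --page-requisites --timeout=30 \
       --user-agent="Mozilla/5.0 (X11; Linux x86_64)" "$u"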

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
91

$ find . -mindepth 1 -type f -print | wc -l
4171

which is an average of 78 objects per "article"

+ some sites were tech blogs with lean, hand-crafted HTML; mainstream sites are much heavier, so the above average is skewed towards being too light

Quantity and size of objects associated with articles (not counting cookies or secondary scripts):

$ find . -mindepth 1 -type f -printf '%s\t%p\n' \
    | sort -k1,1n -k2,2 \
    | awk '$1>10{ sum+=$1; c++; s[c]=$1; n[c]=$2 }
           END{ printf "%10s\t%10s\n","Bytes","Measurement";
                printf "%10d\tSMALLEST\n",s[1];
                for (i in s){ if(i==int(c/2)){ printf "%10d\tMEDIAN SIZE\n",s[i]; } };
                printf "%10d\tLARGEST\n",s[c];
                printf "%10d\tAVG SIZE\n",sum/c;
                printf "%10d\tCOUNT\n",c; }'

     Bytes	File Size
        13	SMALLEST
     10056	MEDIAN SIZE
  32035328	LARGEST
     53643	AVG SIZE
     38164	COUNT









Overall article size [1], including only the first layer of scripts:

     Bytes	Article Size
      8442	SMALLEST
    995476	MEDIAN
  61097209	LARGEST
   2319854	AVG
       921	COUNT

Estimated content [2] size including links, headers, navigation text, etc:

+ files with errors or warnings were deleted; probably a mistake, as that skews the lynx results higher

     Bytes	Article Size
       929	SMALLEST
     18782	MEDIAN
    244311	LARGEST
     23997	AVG
       889	COUNT

+ lynx returns all text within the document, not just the main content; at 75% content the figures are more realistic for some sites:

     Bytes	Measurement
       697	SMALLEST
     14087	MEDIAN
    183233	LARGEST
     17998	AVG
       889	COUNT

At 50% content the figures are more realistic for other sites:

       465	SMALLEST
      9391	MEDIAN
    122156	LARGEST
     11999	AVG
       889	COUNT
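The 75% and 50% tables appear to be straightforward scalings of the lynx figures above (SMALLEST, MEDIAN, LARGEST, AVG); rounding the products reproduces them:

$ awk 'BEGIN { printf "75%%: %d %d %d %d\n", int(929*.75+.5), int(18782*.75+.5), int(244311*.75+.5), int(23997*.75+.5);
               printf "50%%: %d %d %d %d\n", int(929*.50+.5), int(18782*.50+.5), int(244311*.50+.5), int(23997*.50+.5) }'
75%: 697 14087 183233 17998
50%: 465 9391 122156 11999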






       


$ du -bs * \
    | sort -k1,1n -k2,2 \
    | awk '$2!="l" && $1 { c++; s[c]=$1; n[c]=$2; sum+=$1 }
           END { for (i in s){ if(i==int(c/2)){ m=i }; printf "% 10d\t%s\n", s[i],n[i] };
                 printf "% 10s\tArticle Size\n","Bytes";
                 printf "% 10d\tSMALLEST %s\n",s[1],n[1];
                 printf "% 10d\tMEDIAN %s\n",s[m],n[m];
                 printf "% 10d\tLARGEST %s\n",s[c],n[c];
                 printf "% 10d\tAVG\n", sum/c;
                 printf "% 10d\tCOUNT\n",c; }' OFS=$'\t'









[1]

$ time bash -c 'count=0; shuf l \
    | while read u; do
        echo $u;
        wget --continue --page-requisites --timeout=30 "$u" &
        echo $((count++));
        if ((count % 5 == 0)); then wait; fi;
      done;'









[2]

$ count=0; time for i in $(cat l); do echo;echo $i; lynx -dump "$i" > $count; echo $((count++)); done;








[3]

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
921

$ find . -mindepth 1 -type f -print | wc -l
38249









[4]

$ find . -mindepth 1 -type f -print \
    | awk '{sub("\./","");sub("/.*","");print;}' \
    | uniq -c | sort -k1,1n -k2,2 \
    | awk '$1{c++;s[c]=$1;sum+=$1;}
           END{for(i in s){if(i == int(c/2)){m=s[i];}};
               print "MEDIAN: ",m;
               print "AVG", sum/c;
               print "Quantity",c; }'









[5]

$ find . -mindepth 1 -type f -name '*.js' -exec du -sh {} \; | sort -k1,1rh | head
16M	./www.icij.org/app/themes/icij/dist/scripts/main_8707d181.js
3.4M	./europeanconservative.com/wp-content/themes/Generations/assets/scripts/fontawesome-all.min.js
1.8M	./www.9news.com.au/assets/main.f7ba1448.js
1.8M	./www.technologyreview.com/_next/static/chunks/commons.7eed6fd0fd49f117e780.js
1.8M	./www.thetimes.co.uk/d/js/app-7a9b7f4da3.js
1.5M	./www.crossfit.com/main.997a9d1e71cdc5056c64.js
1.4M	./www.icann.org/assets/application-4366ce9f0552171ee2c82c9421d286b7ae8141d4c034a005c1ac3d7409eb118b.js
1.3M	./www.digitalhealth.net/wp-content/plugins/event-espresso-core-reg/assets/dist/ee-vendor.e12aca2f149e71e409e8.dist.js
1.2M	./www.fresnobee.com/wps/build/webpack/videoStory.bundle-69dae9d5d577db8a7bb4.js
1.2M	./www.ft.lk/assets/libs/angular/angular/angular.js






[6] Regarding page bloat: one can pick just about any page and find between one and nearly two orders of magnitude of difference between the lynx dump and the full web page. For example:




$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371
. . .

$ lynx --dump \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371 \
    > test.b

$ du -bs ./test.?
250793	./test.a
15385	./test.b
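For that particular page the gap works out to roughly a factor of 16, a bit over one order of magnitude:

$ awk 'BEGIN { printf "%.1f%% of the bytes are text (a factor of about %.0f)\n",
               100*15385/250793, 250793/15385 }'
6.1% of the bytes are text (a factor of about 16)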
