
Techrights Coding Projects: Making the Web Light Again

A boatload of bytes that serve no purpose at all (99% of all the traffic sent from some Web sites)

Very bloated boat



Summary: Ongoing technical projects that improve access to information and better organise credible information, preceded by a depressing overview of the health of the Web (it's unbelievably bloated)

OVER the past few months (since spring) we've been working hard on coding automation and improving the back end in various ways. More than 100 hours were spent on this, and it puts us in a better position to grow in the long run and to improve uptime. Last year we left behind most US/USPTO coverage to better focus on the European Patent Office (EPO) and GNU/Linux -- a subject neglected here for nearly half a decade (more so after we had begun coverage of EPO scandals).



As readers may have noticed, in recent months we were able to produce more daily links (and more per day). About a month ago we reduced the volume of political coverage in these links. Journalism is waning and the quality of reporting -- not to mention sites -- is rapidly declining.

"As readers may have noticed, in recent months we were able to produce more daily links (and more per day)."To quote one of our guys, "looking at the insides of today's web sites has been one of the most depressing things I have experienced in recent decades. I underestimated the cruft in an earlier message. Probably 95% of the bytes transmitted between client and server have nothing to do with content. That's a truly rotten infrastructure upon which society is tottering."

We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam. It's the only way to keep up with quality while leaving out cruft and FUD (and Microsoft's googlebombing). A huge amount of effort goes into this and it takes a lot of time. It's all done manually.
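As a minimal illustration of the idea (not our actual setup; the feed URL below is a placeholder), the headline links in an RSS feed can be pulled with nothing heavier than curl and a couple of text filters:

# Hypothetical example: list the item links in an RSS feed.
# https://example.com/feed.xml is a placeholder, not a real feed.
$ curl -s https://example.com/feed.xml \
    | grep -oE '<link>[^<]*</link>' \
    | sed 's/<[^>]*>//g'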

"We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam.""I've been letting wget below run while I am mostly outside painting part of the house," said that guy, having chosen to survey/assess the above-stated problem. "It turns out that the idea that 95% of what web severs send is crap was too optimistic. I spidered the latest URL from each one of the unique sites sent in the links from January through July and measured the raw size for the individual pages and their prerequisites. Each article, including any duds and 404 messages, averaged 42 objects [3] per article. The median, however, was 22 objects. Many had hundreds of objects, not counting cookies or scripts that call in scripts.

"I measured disk space for each article, then I ran lynx over the same URLs to get the approximate size of the content. If one counts everything as content then the lynx output is on average 1% the size of the raw material. If I estimate that only 75% or 50% of the text rendered is actual content then that number obviously goes down proportionally.

"I suppose that means that 99% of the electricity used to push those bits around is wasted as well. By extension, it could also mean that 99% of the greenhouse gases produced by that electricity is produced for no reason.

"The results are not scientifically sound but satisfy my curiosity on the topic, for now.

"Eliminating the dud URLs will produce a much higher object count.

"Using more mainstream sites and fewer tech blogs will drive up the article sizes greatly.

"The work is not peer reviewed or even properly planned. I just tried some spur of the minute checks on article sizes in the first way I could think of," said the guy. We covered this subject before in relation to JavaScript bloat and sites' simplicity, but here we have actual numbers to present.

"The numbers depend on the quality of the data," the guy added, "that is to say the selection of links and the culling the results of 404's, paywall messages, and cookie warnings and so on.

"As mentioned I just took the latest link from each of the sites I have bookmarked this year. That skews it towards lean tech blogs. Though some publishers which should know very much better are real pigs:




$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \
    https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/
. . .

$ lynx --dump https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/ > test.b

$ du -bs ./test.?
2485779	./test.a
35109	./test.b
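That is, the lynx dump is 35109 bytes against 2485779 bytes of raw material for this one article: 35109 / 2485779 ≈ 1.4%, in line with the roughly 1% average mentioned above.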



"Trimming some of the lines of cruft from the text version for that article, I get close to two orders of magnitude difference between the original edition versus the trimmed text edition:

$ du -bs ./test.?
2485779	./test.a
35109	./test.b
27147	./test.c


"Also the trimmed text edition is close to 75% the size of the automated text edition. So, at least for that article, the guess of 75% content may be about right. However, given the quick and dirty approach, of this survey, not much can be said conclusively except 1) there is a lot of waste, 2) there is an opportunity for someone to do an easy piece of research."

Based on links from 2019-08-08 and 2019-08-09, we get one set of results (we extracted all URLs saved from January 2019 through July 2019, kept http and https only, and eliminated PDF and other links to obviously non-HTML material). Technical appendices and footnotes are below for those wishing to explore further and reproduce the results.
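The exact extraction commands were not preserved; a rough sketch of that filtering step (the input file names are hypothetical) could look like the following, producing the link list l that the appendices below read from:

# Hypothetical sketch: pull http/https URLs out of saved daily-links
# pages, drop obviously non-HTML targets, and keep one URL per site.
# awk keeps the first URL seen per host, so newest-first input keeps
# the latest link from each site.
$ grep -hoE 'https?://[^"<> ]+' links-2019-0[1-7]*.html \
    | grep -viE '\.(pdf|jpe?g|png|gif)([?#]|$)' \
    | awk -F/ '!seen[$3]++' > l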







+ this only retrieves the first layer of JavaScript, far from all of it
+ some sites gave wget trouble; should have fiddled the agent string, --user-agent="" (see the sketch after this list)
+ too many sites respond without proper HTTP response headers, which slows collection down intolerably
+ the pages themselves often contain many dead links
+ serial fetching is slow, but because the sites are unique, several can be fetched in parallel without hammering any one server [1]
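The agent-string fiddling mentioned above might look like this hedged example (the URL is a placeholder): some servers refuse wget's default User-Agent while accepting a browser-like or empty one.

# Hypothetical example: retry a stubborn site with a browser-like
# User-Agent string. The URL is a placeholder.
$ wget --continue --page-requisites --timeout=30 \
    --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0" \
    https://www.example.com/some-article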

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
91

$ find . -mindepth 1 -type f -print | wc -l
4171

which is an average of about 46 objects per "article"

+ some sites were tech blogs with lean, hand-crafted HTML; mainstream sites are much heavier, so the above average is skewed towards being too light

Quantity and size of objects associated with articles (not counting cookies or secondary scripts):

$ find . -mindepth 1 -type f -printf '%s\t%p\n' \
    | sort -k1,1n -k2,2 \
    | awk '$1>10{ sum+=$1; c++; s[c]=$1; n[c]=$2 }
        END{
          printf "%10s\t%10s\n","Bytes","Measurement";
          printf "%10d\tSMALLEST\n",s[1];
          for (i in s){ if(i==int(c/2)){ printf "%10d\tMEDIAN SIZE\n",s[i]; } };
          printf "%10d\tLARGEST\n",s[c];
          printf "%10d\tAVG SIZE\n",sum/c;
          printf "%10d\tCOUNT\n",c;
        }'

     Bytes	File Size
        13	SMALLEST
     10056	MEDIAN SIZE
  32035328	LARGEST
     53643	AVG SIZE
     38164	COUNT









Overall article size [1], including only the first layer of scripts:

     Bytes	Article Size
      8442	SMALLEST
    995476	MEDIAN
  61097209	LARGEST
   2319854	AVG
       921	COUNT

Estimated content [2] size, including links, headers, navigation text, etc.:

+ deleted files with errors or warnings; probably a mistake, as that skews the results for lynx higher

     Bytes	Article Size
       929	SMALLEST
     18782	MEDIAN
    244311	LARGEST
     23997	AVG
       889	COUNT

+ lynx returns all text within the document, not just the main content; at 75% content the figures are more realistic for some sites:

     Bytes	Measurement
       697	SMALLEST
     14087	MEDIAN
    183233	LARGEST
     17998	AVG
       889	COUNT

at 50% content the figures are more realistic for other sites:

       465	SMALLEST
      9391	MEDIAN
    122156	LARGEST
     11999	AVG
       889	COUNT
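These last two tables are simply the full lynx figures scaled by 0.75 and 0.50: for instance, the 23997-byte average becomes 23997 × 0.75 ≈ 17998 and 23997 × 0.50 ≈ 11999, and the 18782-byte median becomes 14087 and 9391 respectively.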






       


For reference, the per-article "Article Size" figures above come from running du over each article's directory (skipping the link-list file l):

$ du -bs * \
    | sort -k1,1n -k2,2 \
    | awk '$2!="l" && $1 { c++; s[c]=$1; n[c]=$2; sum+=$1 }
        END{
          for (i in s){ if(i==int(c/2)){ m=i }; printf "% 10d\t%s\n", s[i],n[i] };
          printf "% 10s\tArticle Size\n","Bytes";
          printf "% 10d\tSMALLEST %s\n",s[1],n[1];
          printf "% 10d\tMEDIAN %s\n",s[m],n[m];
          printf "% 10d\tLARGEST %s\n",s[c],n[c];
          printf "% 10d\tAVG\n", sum/c;
          printf "% 10d\tCOUNT\n",c;
        }' OFS=$'\t'









[1]

$ time bash -c 'count=0; shuf l \
    | while read u; do
        echo $u
        wget --continue --page-requisites --timeout=30 "$u" &
        echo $((count++))
        if ((count % 5 == 0)); then wait; fi
      done;'









[2]

$ count=0; time for i in $(cat l); do echo;echo $i; lynx -dump "$i" > $count; echo $((count++)); done;








[3]

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
921

$ find . -mindepth 1 -type f -print | wc -l
38249
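Those two counts are the basis of the figure quoted earlier: 38249 files across 921 article directories works out to 38249 / 921 ≈ 41.5, i.e. the "average of 42 objects per article".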









[4]

$ find . -mindepth 1 -type f -print \
    | awk '{sub("\./","");sub("/.*","");print;}' \
    | uniq -c | sort -k1,1n -k2,2 \
    | awk '$1{c++;s[c]=$1;sum+=$1;}
        END{
          for(i in s){if(i == int(c/2)){m=s[i];}};
          print "MEDIAN: ",m;
          print "AVG", sum/c;
          print "Quantity",c;
        }'









[5]

$ find . -mindepth 1 -type f -name '*.js' -exec du -sh {} \; \
    | sort -k1,1rh | head
 16M	./www.icij.org/app/themes/icij/dist/scripts/main_8707d181.js
3.4M	./europeanconservative.com/wp-content/themes/Generations/assets/scripts/fontawesome-all.min.js
1.8M	./www.9news.com.au/assets/main.f7ba1448.js
1.8M	./www.technologyreview.com/_next/static/chunks/commons.7eed6fd0fd49f117e780.js
1.8M	./www.thetimes.co.uk/d/js/app-7a9b7f4da3.js
1.5M	./www.crossfit.com/main.997a9d1e71cdc5056c64.js
1.4M	./www.icann.org/assets/application-4366ce9f0552171ee2c82c9421d286b7ae8141d4c034a005c1ac3d7409eb118b.js
1.3M	./www.digitalhealth.net/wp-content/plugins/event-espresso-core-reg/assets/dist/ee-vendor.e12aca2f149e71e409e8.dist.js
1.2M	./www.fresnobee.com/wps/build/webpack/videoStory.bundle-69dae9d5d577db8a7bb4.js
1.2M	./www.ft.lk/assets/libs/angular/angular/angular.js






[6] Regarding page bloat: one can pick just about any page and find from one to nearly two orders of magnitude difference between the lynx dump and the full web page. For example:




$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371
. . .

$ lynx --dump \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371 \
    > test.b

$ du -bs ./test.?
250793	./test.a
15385	./test.b
