
Techrights Coding Projects: Making the Web Light Again

A boatload of bytes that serve no purpose at all (99% of all the traffic sent from some Web sites)




Summary: Ongoing technical projects that improve access to information and better organise credible information preceded by a depressing overview regarding the health of the Web (it's unbelievably bloated)

OVER the past few months (since spring) we've been working hard on coding automation and improving the back end in various ways. More than 100 hours have gone into this, putting us in a better position to grow in the long run and to improve uptime. Last year we left behind most US/USPTO coverage to better focus on the European Patent Office (EPO) and GNU/Linux -- a subject neglected here for nearly half a decade (more so after we had begun coverage of EPO scandals).



As readers may have noticed, in recent months we were able to produce more daily links (and more per day). About a month ago we reduced the volume of political coverage in these links. Journalism is waning and the quality of reporting -- not to mention sites -- is rapidly declining.

"As readers may have noticed, in recent months we were able to produce more daily links (and more per day)."To quote one of our guys, "looking at the insides of today's web sites has been one of the most depressing things I have experienced in recent decades. I underestimated the cruft in an earlier message. Probably 95% of the bytes transmitted between client and server have nothing to do with content. That's a truly rotten infrastructure upon which society is tottering."

We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam. It's the only way to keep up with quality while leaving out cruft and FUD (and Microsoft's googlebombing). A huge amount of effort goes into this and it takes a lot of time. It's all done manually.

"We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam.""I've been letting wget below run while I am mostly outside painting part of the house," said that guy, having chosen to survey/assess the above-stated problem. "It turns out that the idea that 95% of what web severs send is crap was too optimistic. I spidered the latest URL from each one of the unique sites sent in the links from January through July and measured the raw size for the individual pages and their prerequisites. Each article, including any duds and 404 messages, averaged 42 objects [3] per article. The median, however, was 22 objects. Many had hundreds of objects, not counting cookies or scripts that call in scripts.

"I measured disk space for each article, then I ran lynx over the same URLs to get the approximate size of the content. If one counts everything as content then the lynx output is on average 1% the size of the raw material. If I estimate that only 75% or 50% of the text rendered is actual content then that number obviously goes down proportionally.

"I suppose that means that 99% of the electricity used to push those bits around is wasted as well. By extension, it could also mean that 99% of the greenhouse gases produced by that electricity is produced for no reason.

"The results are not scientifically sound but satisfy my curiosity on the topic, for now.

"Eliminating the dud URLs will produce a much higher object count.

"Using more mainstream sites and fewer tech blogs will drive up the article sizes greatly.

"The work is not peer reviewed or even properly planned. I just tried some spur of the minute checks on article sizes in the first way I could think of," said the guy. We covered this subject before in relation to JavaScript bloat and sites' simplicity, but here we have actual numbers to present.

"The numbers depend on the quality of the data," the guy added, "that is to say the selection of links and the culling the results of 404's, paywall messages, and cookie warnings and so on.

"As mentioned I just took the latest link from each of the sites I have bookmarked this year. That skews it towards lean tech blogs. Though some publishers which should know very much better are real pigs:




$ wget --continue --page-requisites --timeout=30 --directory-prefix=./test.a/ https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/
. . .

$ lynx --dump https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/ > test.b

$ du -bs ./test.?
2485779	./test.a
35109	./test.b



"Trimming some of the lines of cruft from the text version for that article, I get close to two orders of magnitude difference between the original edition versus the trimmed text edition:

$ du -bs ./test.?
2485779	./test.a
35109	./test.b
27147	./test.c


"Also the trimmed text edition is close to 75% the size of the automated text edition. So, at least for that article, the guess of 75% content may be about right. However, given the quick and dirty approach, of this survey, not much can be said conclusively except 1) there is a lot of waste, 2) there is an opportunity for someone to do an easy piece of research."

Based on links from 2019-08-08 and 2019-08-09, we get one set of results (all URLs saved from January 2019 through July 2019 were extracted; http and https only, with PDF and other links to obviously non-HTML material eliminated). Technical appendices and footnotes are below for those wishing to explore further and reproduce.
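The extraction step itself is not shown; a rough sketch of that filtering, assuming the saved daily links pages are local HTML files and that the output is the list file "l" used in the footnotes below, would be:

$ grep -hoE 'https?://[^"<> ]+' daily-links-2019-*.html \
    | grep -viE '\.(pdf|jpe?g|png|gif|mp3|mp4|ogg|webm)([?#]|$)' \
    | sort -u > l
$ wc -l l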







+ this only retrieves the first layer of JavaScript, far from all of it

+ some sites gave wget trouble; should have fiddled the agent string, --user-agent="" (a short sketch of this follows these notes)

+ too many sites respond without proper HTTP response headers, which slows collection down intolerably

+ the pages themselves often contain many dead links

+ serial fetching is slow and, because the sites are unique, fetches were run a few at a time in parallel (see [1])

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
91

$ find . -mindepth 1 -type f -print | wc -l
4171

which is an average of 78 objects per "article"

+ some sites were tech blogs with lean, hand-crafted HTML; mainstream sites are much heavier, so the above average is skewed towards being too light
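As for the agent string caveat above, the workaround would look something like this (wget's --user-agent option, here emptied; a browser-like string is another common choice):

$ wget --continue --page-requisites --timeout=30 --user-agent="" "$u"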

Quantity and size of objects associated with articles (not counting cookies or secondary scripts):

$ find . -mindepth 1 -type f -printf '%s\t%p\n' \
    | sort -k1,1n -k2,2 \
    | awk '$1>10{ sum+=$1; c++; s[c]=$1; n[c]=$2 }
           END{ printf "%10s\t%10s\n","Bytes","Measurement";
                printf "%10d\tSMALLEST\n",s[1];
                for (i in s){ if(i==int(c/2)){ printf "%10d\tMEDIAN SIZE\n",s[i]; } };
                printf "%10d\tLARGEST\n",s[c];
                printf "%10d\tAVG SIZE\n",sum/c;
                printf "%10d\tCOUNT\n",c; }'

     Bytes    File Size
        13    SMALLEST
     10056    MEDIAN SIZE
  32035328    LARGEST
     53643    AVG SIZE
     38164    COUNT









Overall article size [1], including only the first layer of scripts:

     Bytes    Article Size
      8442    SMALLEST
    995476    MEDIAN
  61097209    LARGEST
   2319854    AVG
       921    COUNT

Estimated content [2] size, including links, headers, navigation text, etc.:

+ deleted files with errors or warnings, probably a mistake as that skews the results for lynx higher

     Bytes    Article Size
       929    SMALLEST
     18782    MEDIAN
    244311    LARGEST
     23997    AVG
       889    COUNT

+ lynx returns all text within the document, not just the main content; at 75% content the figures are more realistic for some sites:

     Bytes    Measurement
       697    SMALLEST
     14087    MEDIAN
    183233    LARGEST
     17998    AVG
       889    COUNT

at 50% content the figures are more realistic for other sites:

       465    SMALLEST
      9391    MEDIAN
    122156    LARGEST
     11999    AVG
       889    COUNT
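The byte figures in the two scaled tables above appear to be the full lynx figures multiplied by 0.75 and 0.5 and rounded half up (the COUNT stays at 889); a quick check that reproduces them:

$ awk 'BEGIN{ n=split("929 18782 244311 23997", v, " ");
              split("SMALLEST MEDIAN LARGEST AVG", l, " ");
              for (f = 0.75; f >= 0.5; f -= 0.25) {
                  printf "at %d%% content:\n", f*100;
                  for (i = 1; i <= n; i++) printf "%10d\t%s\n", int(v[i]*f + 0.5), l[i];
              } }'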






       


$ du -bs * \
    | sort -k1,1n -k2,2 \
    | awk '$2!="l" && $1 { c++; s[c]=$1; n[c]=$2; sum+=$1 }
           END { for (i in s){ if(i==int(c/2)){ m=i }; printf "% 10d\t%s\n", s[i],n[i] };
                 printf "% 10s\tArticle Size\n","Bytes";
                 printf "% 10d\tSMALLEST %s\n",s[1],n[1];
                 printf "% 10d\tMEDIAN %s\n",s[m],n[m];
                 printf "% 10d\tLARGEST %s\n",s[c],n[c];
                 printf "% 10d\tAVG\n", sum/c;
                 printf "% 10d\tCOUNT\n",c; }' OFS=$'\t'









[1]

$ time bash -c 'count=0; shuf l \
    | while read u; do
        echo $u
        wget --continue --page-requisites --timeout=30 "$u" &
        echo $((count++))
        if ((count % 5 == 0)); then wait; fi
      done;'









[2]

$ count=0; time for i in $(cat l); do echo;echo $i; lynx -dump "$i" > $count; echo $((count++)); done;








[3]

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
921

$ find . -mindepth 1 -type f -print | wc -l
38249
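Dividing the file count by the directory count gives the roughly 42 objects per article mentioned in the text:

$ awk 'BEGIN{ printf "%.1f objects per article\n", 38249/921 }'
41.5 objects per article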









[4]

$ find . -mindepth 1 -type f -print \
    | awk '{sub("\./","");sub("/.*","");print;}' \
    | uniq -c \
    | sort -k1,1n -k2,2 \
    | awk '$1{c++;s[c]=$1;sum+=$1;}
           END{for(i in s){if(i == int(c/2)){m=s[i];}};
               print "MEDIAN: ",m;
               print "AVG", sum/c;
               print "Quantity",c; }'









[5]

$ find . -mindepth 1 -type f -name '*.js' -exec du -sh {} \; | sort -k1,1rh | head
 16M	./www.icij.org/app/themes/icij/dist/scripts/main_8707d181.js
3.4M	./europeanconservative.com/wp-content/themes/Generations/assets/scripts/fontawesome-all.min.js
1.8M	./www.9news.com.au/assets/main.f7ba1448.js
1.8M	./www.technologyreview.com/_next/static/chunks/commons.7eed6fd0fd49f117e780.js
1.8M	./www.thetimes.co.uk/d/js/app-7a9b7f4da3.js
1.5M	./www.crossfit.com/main.997a9d1e71cdc5056c64.js
1.4M	./www.icann.org/assets/application-4366ce9f0552171ee2c82c9421d286b7ae8141d4c034a005c1ac3d7409eb118b.js
1.3M	./www.digitalhealth.net/wp-content/plugins/event-espresso-core-reg/assets/dist/ee-vendor.e12aca2f149e71e409e8.dist.js
1.2M	./www.fresnobee.com/wps/build/webpack/videoStory.bundle-69dae9d5d577db8a7bb4.js
1.2M	./www.ft.lk/assets/libs/angular/angular/angular.js






[6] About page bloat: one can pick just about any page and find from one to close to two orders of magnitude difference between the lynx dump and the full web page. For example:




$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371
. . .

$ lynx --dump \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371 \
    > test.b

$ du -bs ./test.?
250793	./test.a
15385	./test.b
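For this particular example the full page is roughly 16 times larger than the lynx dump, i.e. a bit over one order of magnitude:

$ awk 'BEGIN{ printf "%.1f times larger\n", 250793/15385 }'
16.3 times larger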
