
Techrights Coding Projects: Making the Web Light Again

A boatload of bytes that serve no purpose at all (99% of all the traffic sent from some Web sites)




Summary: Ongoing technical projects that improve access to information and better organise credible information, preceded by a depressing overview of the health of the Web (it's unbelievably bloated)

OVER the past few months (since spring) we've been working hard on coding automation and improving the back end in various ways. More than 100 hours were spent on this; it puts us in a better position to grow in the long run and also to improve uptime. Last year we left behind most US/USPTO coverage to better focus on the European Patent Office (EPO) and GNU/Linux -- a subject neglected here for nearly half a decade (more so after we had begun coverage of EPO scandals).



As readers may have noticed, in recent months we were able to produce more daily links (and more per day). About a month ago we reduced the volume of political coverage in these links. Journalism is waning and the quality of reporting -- not to mention sites -- is rapidly declining.

"As readers may have noticed, in recent months we were able to produce more daily links (and more per day)."To quote one of our guys, "looking at the insides of today's web sites has been one of the most depressing things I have experienced in recent decades. I underestimated the cruft in an earlier message. Probably 95% of the bytes transmitted between client and server have nothing to do with content. That's a truly rotten infrastructure upon which society is tottering."

We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam. It's the only way to keep up with quality while leaving out cruft and FUD (and Microsoft's googlebombing). A huge amount of effort goes into this and it takes a lot of time. It's all done manually.
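For those unfamiliar with the workflow: a feed can be skimmed entirely from the command line. A minimal sketch, using a placeholder feed URL (not one of the feeds we actually follow) and xmllint from libxml2:

$ curl -s https://example.com/feed.xml \
    | xmllint --xpath '//item/title/text()' -

Only the headlines come back; no scripts, no ads, no trackers, which is exactly why feeds keep the process light.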

"We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam.""I've been letting wget below run while I am mostly outside painting part of the house," said that guy, having chosen to survey/assess the above-stated problem. "It turns out that the idea that 95% of what web severs send is crap was too optimistic. I spidered the latest URL from each one of the unique sites sent in the links from January through July and measured the raw size for the individual pages and their prerequisites. Each article, including any duds and 404 messages, averaged 42 objects [3] per article. The median, however, was 22 objects. Many had hundreds of objects, not counting cookies or scripts that call in scripts.

"I measured disk space for each article, then I ran lynx over the same URLs to get the approximate size of the content. If one counts everything as content then the lynx output is on average 1% the size of the raw material. If I estimate that only 75% or 50% of the text rendered is actual content then that number obviously goes down proportionally.

"I suppose that means that 99% of the electricity used to push those bits around is wasted as well. By extension, it could also mean that 99% of the greenhouse gases produced by that electricity is produced for no reason.

"The results are not scientifically sound but satisfy my curiosity on the topic, for now.

"Eliminating the dud URLs will produce a much higher object count.

"Using more mainstream sites and fewer tech blogs will drive up the article sizes greatly.

"The work is not peer reviewed or even properly planned. I just tried some spur of the minute checks on article sizes in the first way I could think of," said the guy. We covered this subject before in relation to JavaScript bloat and sites' simplicity, but here we have actual numbers to present.

"The numbers depend on the quality of the data," the guy added, "that is to say the selection of links and the culling the results of 404's, paywall messages, and cookie warnings and so on.

"As mentioned I just took the latest link from each of the sites I have bookmarked this year. That skews it towards lean tech blogs. Though some publishers which should know very much better are real pigs:




$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \
    https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/
. . .

$ lynx --dump https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard/ > test.b

$ du -bs ./test.?
2485779	./test.a
35109	./test.b
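For that article the lynx dump comes to about 1.4% of the raw payload, in line with the 1% average estimate above:

$ echo "scale=4; 35109 / 2485779 * 100" | bc
1.4100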



"Trimming some of the lines of cruft from the text version for that article, I get close to two orders of magnitude difference between the original edition versus the trimmed text edition:

$ du -bs ./test.?
2485779	./test.a
35109	./test.b
27147	./test.c


"Also the trimmed text edition is close to 75% the size of the automated text edition. So, at least for that article, the guess of 75% content may be about right. However, given the quick and dirty approach, of this survey, not much can be said conclusively except 1) there is a lot of waste, 2) there is an opportunity for someone to do an easy piece of research."

Based on links from 2019-08-08 and 2019-08-09, we get one set of results (we extracted all URLs saved from January 2019 through July 2019, kept http and https only, and eliminated PDF and other links to obviously non-HTML material). Technical appendices and footnotes are below for those wishing to explore further and reproduce.
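The extraction step can be reproduced with standard tools; a rough sketch (the input file names are hypothetical, and the output file l is the link list consumed by the loops in the footnotes below):

$ grep -hEo 'https?://[^"< ]+' daily-links-2019-0[1-7]*.html \
    | grep -ivE '\.(pdf|jpg|png|mp3|mp4)([?#]|$)' \
    | sort -u > l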







+ this only retrieves the first layer of javascript, far from all of it
+ some sites gave wget trouble; should have fiddled the agent string, --user-agent="" (see the example below)
+ too many sites respond without proper HTTP response headers, which slows collection down intolerably
+ the pages themselves often contain many dead links
+ serial fetching is slow and, because the sites are unique, fetches were instead run in parallel batches (see the loop in [1])
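The agent-string fiddling mentioned above is straightforward: either blank it with --user-agent="" or impersonate a browser. The string below is an arbitrary example, and $u stands for a URL from the list:

$ wget --continue --page-requisites --timeout=30 \
    --user-agent="Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0" \
    "$u"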

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
91

$ find . -mindepth 1 -type f -print | wc -l
4171

which is an average of about 46 objects per "article"
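A quick sanity check of that average:

$ echo "scale=1; 4171 / 91" | bc
45.8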

+ some sites were tech blogs with lean, hand-crafted HTML; mainstream sites are much heavier, so the above average is skewed towards being too light

Quantity and size of objects associated with articles (not counting cookies or secondary scripts):

$ find . -mindepth 1 -type f -printf '%s\t%p\n' \
    | sort -k1,1n -k2,2 \
    | awk '$1>10{ sum+=$1; c++; s[c]=$1; n[c]=$2 }
        END{
            printf "%10s\t%10s\n","Bytes","Measurement";
            printf "%10d\tSMALLEST\n",s[1];
            for (i in s){ if(i==int(c/2)){ printf "%10d\tMEDIAN SIZE\n",s[i]; } };
            printf "%10d\tLARGEST\n",s[c];
            printf "%10d\tAVG SIZE\n",sum/c;
            printf "%10d\tCOUNT\n",c;
        }'

     Bytes	File Size
        13	SMALLEST
     10056	MEDIAN SIZE
  32035328	LARGEST
     53643	AVG SIZE
     38164	COUNT
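Those figures show how skewed the distribution is: the largest single object (32035328 bytes) is roughly 600 times the average (53643 bytes), and the average itself is more than five times the median, so a minority of very fat objects does most of the damage.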









Overall article size [1], including only the first layer of scripts:

     Bytes	Article Size
      8442	SMALLEST
    995476	MEDIAN
  61097209	LARGEST
   2319854	AVG
       921	COUNT

Estimated content [2] size including links, headers, navigation text, etc:

+ deleted files with errors or warnings; probably a mistake, as that skews the results for lynx higher

     Bytes	Article Size
       929	SMALLEST
     18782	MEDIAN
    244311	LARGEST
     23997	AVG
       889	COUNT

+ lynx returns all text within the document, not just the main content; at 75% content the figures are more realistic for some sites:

     Bytes	Measurement
       697	SMALLEST
     14087	MEDIAN
    183233	LARGEST
     17998	AVG
       889	COUNT

at 50% content the figures are more realistic for other sites:

       465	SMALLEST
      9391	MEDIAN
    122156	LARGEST
     11999	AVG
       889	COUNT
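These last two tables are just the estimated-content figures above scaled by 0.75 and 0.5; for example the medians: 18782 × 0.75 ≈ 14087 and 18782 × 0.5 = 9391.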






       


$ du -bs * \
    | sort -k1,1n -k2,2 \
    | awk '$2!="l" && $1 { c++; s[c]=$1; n[c]=$2; sum+=$1 }
        END{
            for (i in s){ if(i==int(c/2)){ m=i }; printf "% 10d\t%s\n", s[i],n[i] };
            printf "% 10s\tArticle Size\n","Bytes";
            printf "% 10d\tSMALLEST %s\n",s[1],n[1];
            printf "% 10d\tMEDIAN %s\n",s[m],n[m];
            printf "% 10d\tLARGEST %s\n",s[c],n[c];
            printf "% 10d\tAVG\n", sum/c;
            printf "% 10d\tCOUNT\n",c;
        }' OFS=$'\t'
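This last pipeline aggregates the per-article totals; the $2!="l" test simply excludes the link-list file l (the one fed to shuf and cat in the footnotes) from the per-article figures.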









[1]

$ time bash -c 'count=0; shuf l \
    | while read u; do
        echo $u;
        wget --continue --page-requisites --timeout=30 "$u" &
        echo $((count++));
        if ((count % 5 == 0)); then wait; fi;
    done;'
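Each wget runs in the background and a wait is issued after every fifth launch, so the link list is fetched in parallel batches of five rather than serially.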









[2]

$ count=0; time for i in $(cat l); do
    echo; echo $i;
    lynx -dump "$i" > $count;
    echo $((count++));
done;
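Each dump lands in a file named by a running counter (0, 1, 2, ...), one plain-text file per link in l; these are the files measured for the content-size estimates above.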








[3]

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
921

$ find . -mindepth 1 -type f -print | wc -l
38249
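That matches the average cited in the text: 38249 files spread over 921 article directories comes to roughly 41.5, i.e. about 42 objects per article.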









[4]

$ find . -mindepth 1 -type f -print \
    | awk '{sub("\./","");sub("/.*","");print;}' \
    | uniq -c \
    | sort -k1,1n -k2,2 \
    | awk '$1{ c++; s[c]=$1; sum+=$1; }
        END{
            for(i in s){ if(i == int(c/2)){ m=s[i]; } };
            print "MEDIAN: ", m;
            print "AVG", sum/c;
            print "Quantity", c;
        }'
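In other words, this counts how many files landed under each article's top-level directory and reports the median and average object count per article, along with the number of articles sampled.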









[5]

$ find . -mindepth 1 -type f -name '*.js' -exec du -sh {} \; | sort -k1,1rh | head
 16M	./www.icij.org/app/themes/icij/dist/scripts/main_8707d181.js
3.4M	./europeanconservative.com/wp-content/themes/Generations/assets/scripts/fontawesome-all.min.js
1.8M	./www.9news.com.au/assets/main.f7ba1448.js
1.8M	./www.technologyreview.com/_next/static/chunks/commons.7eed6fd0fd49f117e780.js
1.8M	./www.thetimes.co.uk/d/js/app-7a9b7f4da3.js
1.5M	./www.crossfit.com/main.997a9d1e71cdc5056c64.js
1.4M	./www.icann.org/assets/application-4366ce9f0552171ee2c82c9421d286b7ae8141d4c034a005c1ac3d7409eb118b.js
1.3M	./www.digitalhealth.net/wp-content/plugins/event-espresso-core-reg/assets/dist/ee-vendor.e12aca2f149e71e409e8.dist.js
1.2M	./www.fresnobee.com/wps/build/webpack/videoStory.bundle-69dae9d5d577db8a7bb4.js
1.2M	./www.ft.lk/assets/libs/angular/angular/angular.js






[6] About page bloat: one can pick just about any page and find from one to close to two orders of magnitude difference between the lynx dump and the full web page. For example:




$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371
. . .

$ lynx --dump \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371 \
    > test.b

$ du -bs ./test.?
250793	./test.a
15385	./test.b
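Here the gap sits at the milder end of that range: the text dump is about 6% of the full page.

$ echo "scale=4; 15385 / 250793 * 100" | bc
6.1300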
