

Why Techrights Doesn’t Do Social Control Media

Posted in Site News at 3:36 am by Dr. Roy Schestowitz

More about Social Control than about Media. Standing on one’s own means more freedom of speech and no self-censorship.

One-tree hill

Summary: Being managed and censored by platform owners (sometimes their shareholders) isn’t an alluring proposition when a site challenges conformist norms and the status quo; Techrights belongs on a platform of its own

AFTER posting more than 670,000 tweets and having coined the term "Social Control Media" (which even Julian Assange and Wikileaks adopted), I decided to no longer use Twitter except to check replies once a day. It had become a massive productivity drain and usually an utter waste of time. I still have stuff posted there, albeit only as copies exported from decentralised and Free software platforms such as Diaspora and Pleroma (Mastodon-compatible).

A decade ago Techrights, Boycott Novell and TechBytes had active accounts on Identi.ca (and two of these on Twitter as well). At some stage it became clear that this kind of activity was detrimental to, rather than supportive of, actual journalism. Techrights never had a Twitter account, and Twitter’s character limit remains a major constraint anyway. Over the years surveillance and bloat got a lot worse; almost exactly a year ago Twitter also killed third-party tools by deprecating key APIs. It seems pretty clear where Twitter is going with this; it wants to eventually become another Facebook. We probably don’t have to explain why Facebook is so bad (there are many aspects to that).

There’s nothing to regret here overall; we didn’t participate in these sites and we probably lost nothing by staying out of them. I have personal accounts there, but those express my personal views (on politics) rather than the site’s.

Social Control Media (so-called ‘social’ ‘media’) is neither social nor media; when people socialise they are not managed by the billions by a single company and its shareholders, and media has generally (historically) checked claims and facts. Twitter lacks that.

Social Control Media has sadly ‘replaced’ long-form writing in a lot of blogs. That’s a real shame. Quality is being compromised for the sake of speed and concision. When we try to actually find/syndicate reliable blogs we nowadays come to realise that many are inactive/dormant. Instead of sites they become “accounts” (on someone else’s platform, complete with throttling, censorship and ads); what used to be a site/blog is just some Twitter account that posts and reposts unverified nonsense. Techrights doesn’t wish to ever become something so shallow.


Links Are Not Endorsements

Posted in Site News at 2:10 am by Dr. Roy Schestowitz

No rollerblades

Summary: If the only alternative is to say nothing and link to nothing, then we have a problem; a lot of people still assume that when someone links to something it implies agreement and endorsement

WE recently wrote about how Twitter cheapens if not ruins fact-finding/fact-checking. Weeks later it turned out that someone who had decided to declare a person dead was in fact wrong. That person is still alive. The Web is a fascinating maze of fabrications, hearsay and bad reporting, but also good investigative journalism. It’s not always easy to tell one from the other, and that’s something we work on. Sometimes, including last night, we get feedback; a reader made contact with us to tell us that something we linked to (in daily links) was wrong. It doesn’t happen very often, maybe a couple of times per year, and usually when it’s about more sensitive and divisive subjects like Kashmir.

Our daily links are an effort to make sense of a lot of information. (Mis)information overload is a big problem, and the ‘cheapening’ of information, especially because of social control media, means that any random person can utter false words. These things get rated and shared (passed on) based on emotion rather than adherence to facts, so that’s another problem. The scoring is all wrong. Nowadays a lot of so-called ‘articles’ are based on nothing but a “tweet”; that’s yet another problem.

Sometimes a Web site or an article we link to may offend someone. Sometimes the article is wrong. We don’t link to social control media; we only link to blogs and news sites. That’s still prone to mistakes/issues. But most importantly, when we link to something, that does not imply we endorse the message. Sometimes we link to sites we do not agree with simply because we need to highlight something that was said, done, or happened. In social control media it’s common to specify in profiles/disclaimers that “links (or other) are not endorsements”; the same applies here.

What we argue in our own (original) articles is another matter altogether. At least those can be judged based on more than a link alone.


Techrights Coding Projects: Making the Web Light Again

Posted in Site News at 11:05 am by Dr. Roy Schestowitz

A boatload of bytes that serve no purpose at all (99% of all the traffic sent from some Web sites)

Very bloated boat

Summary: Ongoing technical projects that improve access to information and better organise credible information preceded by a depressing overview regarding the health of the Web (it’s unbelievably bloated)

OVER the past few months (since spring) we’ve been working hard on coding automation and improving the back end in various ways. More than 100 hours were spent on this and it puts us in a better position to grow in the long run and also improve uptime. Last year we left behind most US/USPTO coverage to better focus on the European Patent Office (EPO) and GNU/Linux — a subject neglected here for nearly half a decade (more so after we had begun coverage of EPO scandals).

As readers may have noticed, in recent months we were able to produce more daily links (and more per day). About a month ago we reduced the volume of political coverage in these links. Journalism is waning and the quality of reporting — not to mention sites — is rapidly declining.

To quote one of our guys, “looking at the insides of today’s web sites has been one of the most depressing things I have experienced in recent decades. I underestimated the cruft in an earlier message. Probably 95% of the bytes transmitted between client and server have nothing to do with content. That’s a truly rotten infrastructure upon which society is tottering.”

We typically gather and curate news using RSS feed readers. These keep sites light and tidy. They help us survey the news without wrestling with clickbait, ads, and spam. It’s the only way to keep up with quality while leaving out cruft and FUD (and Microsoft's googlebombing). A huge amount of effort goes into this and it takes a lot of time. It’s all done manually.

“I’ve been letting wget below run while I am mostly outside painting part of the house,” said that guy, having chosen to survey/assess the above-stated problem. “It turns out that the idea that 95% of what web servers send is crap was too optimistic. I spidered the latest URL from each one of the unique sites sent in the links from January through July and measured the raw size for the individual pages and their prerequisites. Each article, including any duds and 404 messages, averaged 42 objects [3] per article. The median, however, was 22 objects. Many had hundreds of objects, not counting cookies or scripts that call in scripts.

“I measured disk space for each article, then I ran lynx over the same URLs to get the approximate size of the content. If one counts everything as content then the lynx output is on average 1% the size of the raw material. If I estimate that only 75% or 50% of the text rendered is actual content then that number obviously goes down proportionally.

“I suppose that means that 99% of the electricity used to push those bits around is wasted as well. By extension, it could also mean that 99% of the greenhouse gases produced by that electricity is produced for no reason.

“The results are not scientifically sound but satisfy my curiosity on the topic, for now.

“Eliminating the dud URLs will produce a much higher object count.

“Using more mainstream sites and fewer tech blogs will drive up the article sizes greatly.

“The work is not peer reviewed or even properly planned. I just tried some spur-of-the-moment checks on article sizes in the first way I could think of,” said the guy. We covered this subject before in relation to JavaScript bloat and sites' simplicity, but here we have actual numbers to present.

“The numbers depend on the quality of the data,” the guy added, “that is to say the selection of links and the culling of 404s, paywall messages, cookie warnings and so on.

“As mentioned, I just took the latest link from each of the sites I have bookmarked this year. That skews it towards lean tech blogs, though some publishers that should know much better are real pigs:

$ wget --continue --page-requisites --timeout=30


. . .

$ lynx --dump


> test.b

$ du -bs ./test.?
2485779	./test.a
35109	./test.b

“Trimming some of the lines of cruft from the text version for that article, I get close to two orders of magnitude difference between the original edition versus the trimmed text edition:

$ du -bs ./test.?
2485779	./test.a
35109	./test.b
27147	./test.c

“Also the trimmed text edition is close to 75% the size of the automated text edition. So, at least for that article, the guess of 75% content may be about right. However, given the quick-and-dirty approach of this survey, not much can be said conclusively except that 1) there is a lot of waste, and 2) there is an opportunity for someone to do an easy piece of research.”
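For reference, the ratios implied by the du figures quoted above can be checked with a few lines of Python (this arithmetic is ours, not part of the quoted survey):

```python
# Sizes in bytes, as reported by `du -bs` in the examples above.
raw_page = 2485779   # test.a: full page with all prerequisites
lynx_dump = 35109    # test.b: lynx text dump of the same page
trimmed = 27147      # test.c: dump with cruft lines removed

# "close to two orders of magnitude" between raw page and text:
print(f"lynx dump is {lynx_dump / raw_page:.1%} of the raw page")
# "close to 75%" of the automated text edition:
print(f"trimmed text is {trimmed / lynx_dump:.0%} of the lynx dump")
```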

Based on links from 2019-08-08 and 2019-08-09, we get one set of results (extracted all URLs saved from January 2019 through July 2019; http and https only, eliminated PDF and other links to obviously non-html material). Technical appendices and footnotes are below for those wishing to explore further and reproduce.

+ this only retrieves the first layer of javascript, far from all of it
+ some sites gave wget trouble; should have fiddled with the agent string
+ too many sites respond without proper HTTP response headers,
	which slows collection down intolerably
+ the pages themselves often contain many dead links
+ serial fetching is slow, and the sites are all unique

$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l
$ find . -mindepth 1 -type f -print | wc -l
which is an average of 78 objects per "article"

+ some sites were tech blogs with lean, hand-crafted HTML,
	mainstream sites are much heavier,
	so the above average is skewed towards being too light

Quantity and size of objects associated with articles;
does not count cookies or secondary scripts:

$ find . -mindepth 1 -type f -printf '%s\t%p\n' \
| sort -k1,1n -k2,2 \
| awk '$1>10{c++;s[c]=$1;sum+=$1;}
	END{
		for (i in s){if (i == int(c/2)){m=s[i];}};
		printf "%10s\t%10s\n","Bytes","File Size";
		printf "%10d\tSMALLEST\n",s[1];
		printf "%10d\tMEDIAN SIZE\n",m;
		printf "%10d\tLARGEST\n",s[c];
		printf "%10d\tAVG SIZE\n",sum/c;
		printf "%10d\tCOUNT\n",c;
	}'

     Bytes      File Size
        13      SMALLEST
     10056      MEDIAN SIZE
  32035328      LARGEST
     53643      AVG SIZE
     38164      COUNT


Overall article size [1], including only the first layer of scripts:

     Bytes      Article Size
      8442      SMALLEST
    995476      MEDIAN
  61097209      LARGEST
   2319854      AVG
       921      COUNT

Estimated content [2] size including links, headers, navigation text, etc:

+ deleted files with errors or warnings,
	probably a mistake as that skews the results for lynx higher

     Bytes      Article Size
       929      SMALLEST
     18782      MEDIAN
    244311      LARGEST
     23997      AVG
       889      COUNT

+ lynx returns all text within the document, not just the main content;
	at 75% content the figures are more realistic for some sites:

     Bytes      Measurement
       697	SMALLEST
     14087	MEDIAN
    183233	LARGEST
     17998	AVG
       889	COUNT

	at 50% content the figures are more realistic for other sites:

       465	SMALLEST
      9391	MEDIAN
    122156	LARGEST
     11999	AVG
       889	COUNT

$ du -bs * \
| sort -k1,1n -k2,2 \
| awk '$2!="l" && $1 {c++;s[c]=$1;n[c]=$2;sum+=$1;}
	END {
		m=int(c/2);
		for (i in s){printf "% 10d\t%s\n",s[i],n[i];}
		printf "% 10s\tArticle Size\n","Bytes";
		printf "% 10d\tSMALLEST %s\n",s[1],n[1];
		printf "% 10d\tMEDIAN %s\n",s[m],n[m];
		printf "% 10d\tLARGEST  %s\n",s[c],n[c];
		printf "% 10d\tAVG\n", sum/c;
		printf "% 10d\tCOUNT\n",c;
	}' OFS=$'\t'


$ time bash -c 'count=0;
shuf l \
| while read u; do
	echo $u;
	wget --continue --page-requisites --timeout=30 "$u" &
	echo $((count++));
	if ((count % 5 == 0)); then
		wait;
	fi;
done'


$ count=0;
time for i in $(cat l); do
	echo;echo $i;
	lynx -dump "$i" > $count;
	echo $((count++));
done


$ find . -mindepth 1 -maxdepth 1 -type d -print | wc -l

$ find . -mindepth 1 -type f -print | wc -l


$ find . -mindepth 1 -type f -print \
| awk '{sub("\./","");sub("/.*","");print;}' \
| uniq -c | sort -k1,1n -k2,2 \
| awk '$1{c++;s[c]=$1;sum+=$1;}
	END{for(i in s){if(i == int(c/2)){m=s[i];}};
	print "MEDIAN: ",m; print "AVG", sum/c; print "Quantity",c; }'


$ find . -mindepth 1 -type f -name '*.js' -exec du -sh {} \; \
| sort -k1,1rh | head
16M     ./www.icij.org/app/themes/icij/dist/scripts/main_8707d181.js
1.8M    ./www.9news.com.au/assets/main.f7ba1448.js
1.8M    ./www.thetimes.co.uk/d/js/app-7a9b7f4da3.js
1.5M    ./www.crossfit.com/main.997a9d1e71cdc5056c64.js
1.2M    ./www.ft.lk/assets/libs/angular/angular/angular.js

[6] About page bloat, one can pick just about any page and find from one to close to two orders of magnitude difference between the lynx dump and the full web page. For example,

$ wget --continue --page-requisites --timeout=30 \
    --directory-prefix=./test.a/ \


. . .

$ lynx --dump \
    https://www.newsweek.com/saudi-uae-war-themselves-yemen-1453371 \
    > test.b

$ du -bs ./test.?
250793	./test.a
 15385	./test.b

Guest Post: Why I Wrote Fig (the Computer Language That Helps Techrights Operations)

Posted in Free/Libre Software, Site News at 8:38 am by Dr. Roy Schestowitz

By figosdev

Fig project

Summary: Fig project introduced; its developer wrote some code for Techrights and has just decided to explain why he uses his own programming language

SOME helper code we recently published for the site [1, 2] had been written in Fig, whose developer explained the language as follows.

Since Techrights is now publishing Fig code, I thought it best to write a couple of articles about it — one explaining how to use it, and the other explaining why.

Stallman didn’t start the GNU project just to have another operating system, and I didn’t write Fig just to have another language. Rather I grew up with fun languages like Basic and Logo, and wanted to use them to teach coding to friends and others.

Logo lives on in modern adaptations, and so does Basic. A lot of people think Basic is bad for you, but that’s due to stuff written about the versions of Basic in the 1960s and 70s, before it gained better commands for structured programming. Both Linus Torvalds and I first learned Basic as our introduction to programming. It’s a fun, simple language when it’s done right.

Python is a good language for beginners, and it has modern advantages that even the most modern versions of Basic do not — it’s also a good shell language, a decent language for making games, as well as applications. But Python is definitely not as simple as Basic, and I wanted to create an even simpler language if possible.

A simple language benefits from having few rules, but it does not need to be completely consistent. If your language only has 2 rules, it will be very consistent but it could be tedious to work within that level of consistency. Fig tries to find a balance between having as few rules as possible, and having enough to be friendly.

The original Basic had very few commands, which made it trivial to learn “the whole thing.” Fig was not my first attempt to create a fun, simple language — in fact it is named after the logo for a previous language project of mine; originally Fig was called “Fig Basic”. Like its predecessor, Fig was an experiment in lightweight syntax.

I had a couple of rules for developing Fig — one, punctuation would only be added as needed, within reason. This produced a language that uses “quotes for strings” and # hashes for comments. Decimal points work the same way in Fig as they do in Python and Basic.

As with punctuation for syntax, the other rule was to only add new statements or functions as needed, when it became too tedious to do without them. This resulted in a language with fewer than 100 commands.

Since Fig compiles to Python code, and has an inline Python feature, it was (and remains) possible to cheat and just use Python pretty much wherever it is needed. You can create a Fig function that is just a wrapper around a Python import, and that’s pretty nice. Then you can call that function using Fig’s lightweight syntax.

Fig can also do shell code, so it can interact with other programs on the computer. But I wrote an extensive (20 short chapters) tutorial for Basic more than 12 years ago, started adapting it to Python, and eventually decided to teach coding in terms of the following near-standard features:

1. variables
2. input
3. output
4. basic math
5. loops
6. conditionals
7. functions
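As a rough illustration (ours, not taken from the tutorial), a dozen lines of Python can exercise all seven features at once; the names are invented for the example, and “input” is simulated with a list so the sketch runs non-interactively:

```python
def guess_game(answer, guesses):        # 7. functions
    tries = 0                           # 1. variables
    for guess in guesses:               # 5. loops
        tries = tries + 1               # 4. basic math
        if guess == answer:             # 6. conditionals
            return tries
    return None

result = guess_game(7, [3, 9, 7])       # 2. input (a list, not a prompt)
print("found in", result, "tries")      # 3. output
```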

I realise that in Python, “functions” are really method calls, but that’s not very important to most people learning programming for the first time. I see a lot of people working their way up to “rock paper scissors” programs when first learning Python, and that’s typical of how they taught Basic as well.

I started creating simple Python tutorials aimed at Basic coders, and those gradually turned into a simple library of Basic-like Python routines, which eventually turned into Fig Basic. And Fig Basic is different from most dialects of Basic, because it took into account a lot of things I learned while trying to teach Basic and Python — namely, the things people had the most problems with in terms of syntax.

While Python is case-sensitive and indented, Fig uses Basic-like (Pascal-like, Bash-like) command pairs and is case-insensitive. Today even most versions of Basic are case-sensitive, but during its peak as a language it was not.

Fig is not parenthetical and has no operator order, other than left to right. It inherits Python’s (and QBasic’s) conventions about newlines — after an if statement, you start a newline (QBasic had a cheat about that, but Fig is consistent.)
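To see what “no operator order, other than left to right” means in practice, here is a small Python sketch of strict left-to-right evaluation with no precedence (an illustration of the idea, not Fig’s actual code):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def eval_left_to_right(tokens):
    """Apply operators strictly left to right, ignoring precedence."""
    it = iter(tokens)
    acc = float(next(it))
    for op in it:
        acc = OPS[op](acc, float(next(it)))
    return acc

# 2 + 3 * 4 becomes (2 + 3) * 4 here, not 2 + (3 * 4):
print(eval_left_to_right("2 + 3 * 4".split()))  # 20.0, not 14.0
```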

So for example, a for loop from 10 to 2 with a step of -2 might look like this:

    for p = (10, 2, -2) 
        now = p ; print

Many things about this are optional — both equals signs, the parentheses, the commas and the indentation. Here is the code with those removed:

    for p 10 2 -2 
    now p print

This is based on the inspiration Fig takes from Logo (specifically turtle graphics), which tends to require less punctuation in its syntax than almost any popular language — certainly among the educational languages.
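For comparison, and assuming Fig’s loop bounds are inclusive as in Basic (Python’s range() excludes its stop value), the stripped-down loop above corresponds to roughly this Python:

```python
# Fig's `for p 10 2 -2` counts 10, 8, 6, 4, 2; Python needs stop=1
# so that 2 is still included.
for p in range(10, 1, -2):
    now = p
    print(now)
```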

The inline Python feature naturally requires indentation.

But I’ve created languages before and since; Fig is simply the best of those. What I really want is for people to learn programming and to learn how to create simple programming languages. The latter is something I’ve taught as a sort of “next step” after coding basics.

I strongly feel that we need a more computer-literate society, and code is really the shortest route to computer literacy. A programming language is basically a text-based interface to the computer itself (built on a number of abstractions… so is a command line, but a command line works like a programming language all on its own.) We need to make it easy for teachers to learn, so they can make it easy for students to learn.

A more computer-literate society would be better able to make political decisions regarding computers, and it would be better able to contribute to Free software, so creating more tools to teach coding to everyone would help immensely in my opinion.

And I feel that if more people worked to create their own language, we would learn a lot more about what sorts of features (and omissions) best suit the task of creating an educational language for everyone. Sure, Basic did very well, Logo has done very well, and Python is doing well. But when it comes to making it so that fewer people struggle, or explaining what makes coding truly easy or truly difficult in the initial steps, there is only so much data we have. We could have a lot more.

Years ago, I wanted to create a piece of software as a “kit” for making it much, much easier to design your own language. Instead of creating a formal specification and writing/generating a parser, you would choose a parser based on the style of your language and tweak it to your needs. There would be a giant collection of features, and a way of turning that into a catalog, from which you would “select” the routines you wanted your language to have.

Meanwhile, Fig is possible to take apart and change. Version 4.6 is a single 58k Python script, with 1,154 unique lines of code. This includes a simple help system, which allows you to search for any command by name (even part of the name) and gives you the syntax and a quick description of the command and what it does.

I would be just as happy for people to adapt Fig or be inspired to create their own version or own language, as I would be for them to adopt Fig for their own use. I would still like to make it easier to teach coding, so that more people are capable, earlier on, with less intimidation — just like in the days when Logo was really, really simple.

I now use Fig for most tasks, and let me apologise for the code that Roy has published so far. I wrote it as code for internal use and took every shortcut; he was totally free to publish it, but I don’t use the same care (recommended with good reason) when naming variables that I do when naming commands for a programming language. I actually put a lot of thought into that sort of thing — but when I name variables, I’m very sloppy. It’s a beginner’s mistake, and I still make it more than a quarter of a century later.

I will write a simple Fig tutorial or two next, but I will try to use better variable names. One convention I use a lot, though: if I need a “throwaway” variable, I will just use “p” or “now.” If the variable is going to be referenced a lot, these are definitely not good variable names at all. Writing Fig has made me a better coder, and designing a language will make you a better coder too; but the variable names thing — sorry about that.

Fig puts almost 100% of variables on the left of each line (almost like creating a named pipe) so they’re easy to find. For loops and Forin loops put the variable a little more to the right, but every “standard” command in Fig begins with a variable:

    howmuch = 2
    p = "hello" ; left howmuch ; ucase ; len ; print ; print ; print
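Assuming `left` takes the first N characters, `ucase` upper-cases, and `len` replaces the value with its length (our reading of the commands above, not official Fig documentation), that pipeline corresponds to something like this Python:

```python
howmuch = 2
p = "hello"
p = p[:howmuch]   # left howmuch -> "he"
p = p.upper()     # ucase        -> "HE"
p = len(p)        # len          -> 2
print(p)
print(p)
print(p)
```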

You can download the latest version of Fig (4.6 is very stable!) here from PyGame. You can also find the language featured in the first issue of DistroWatch Weekly of 2017:

[download fig 4.6] SHA256: 4762871cf854cc8c1b35304a26d7e22c5f73b19a223b699d7946accea98d2783

Whatever language you love most, happy coding!

Licence: Creative Commons cc0 1.0 (public domain)


Techrights Blog Archive and Word Analysis

Posted in Site News at 2:21 am by Dr. Roy Schestowitz

Old library

Summary: Summing up almost 13 years of Techrights and making archival simpler just in case something bad happens some time in the future

IN LIGHT of Linux Journal's shutdown, which was sudden and announced in the middle of the summer holidays (the timing may have been strategic), we’ve already published a list of Techrights blog URLs and our full Wiki as XML files (or a compressed archive of them all). That’s not because we expect anything bad to happen; it’s just better to be prepared in advance rather than when it’s too late. I’ve poured my entire adult life into this site and the analysis/research it takes to write/run it. It’s good to have backups, including elsewhere. Mirroring does no harm; it improves overall resilience when things are decentralised. During the archiving process figosdev wrote some code, which we publish today for the first time. It’s used to generate a keyword index that merges counts for similar words. For the merging, for example, it comes up with the following (number of occurrences on the left):

14191 chrome 9611:chrome. 233:chromebook 1754:chromebook. 121:chromebooks 1513:chromebooks. 156:chromebox 74:chromecast 396:chromeos 333:
14195 e.u. 109:eu 13203:eu-us 107:eu-wide 144:eu. 632:
14271 process 11848:process. 2423:
14278 china 13069:china-based 95:china. 1114:
14371 receive 3588:received 7239:received. 141:receives 1405:receiving 1998:
14411 firefox 14411:
14556 download 9003:download. 1015:downloadable 220:downloaded 1417:downloader 101:downloaders 97:downloading 1100:downloads 1454:downloads. 149:
14739 gates 14158:gates-backed 114:gates-funded 141:gates. 326:
14850 digital 14577:digital. 86:digitally 187:
14894 employed 938:employee 2773:employee. 154:employees 6620:employees. 704:employer 989:employer. 168:employers 644:employing 284:employment 1158:employment. 93:employs 369:
15026 package 6369:package. 512:packaged 533:packagers 85:packages 5762:packages. 767:packaging 998:
15170 menace 148:menacing 100:menos 418:mental 1124:mentality 273:mentally 216:mention 4183:mention. 103:mentioned 6743:mentioned. 155:mentioning 720:mentions 987:
15196 games 14158:games. 1038:
15231 event 7653:event. 954:events 6020:events. 604:
15350 corporate 8067:corporates 77:corporation 2759:corporation. 240:corporations 3790:corporations. 417:
15419 battistelli 14513:battistelli. 906:
15560 browser 9915:browser. 1333:browsers 4024:browsers. 288:
15709 privacy 13613:privacy-focused 124:privacy. 992:privacy/surveillance 980:
15749 press 14852:press. 897:
15759 require 3825:required 4340:required. 380:requirement 1182:requirement. 139:requirements 2226:requirements. 480:requires 3187:
15932 intel 15517:intel-based 101:intel. 232:intel/amd 82:
16050 police 15513:police. 537:
16695 mobile 16432:mobile. 263:
16877 troll 4535:troll. 434:trolles 205:trolling 1127:trolling. 111:trolls 9276:trolls. 1189:
16880 enterprise 13684:enterprise-class 132:enterprise-grade 221:enterprise-ready 131:enterprise. 424:enterprises 2072:enterprises. 216:
16985 hardware 15321:hardware. 1664:
17492 policies 4017:policies. 638:policy 11524:policy. 1313:
17832 reported 9524:reported. 848:reportedly 3159:reporter 1927:reporter. 94:reporters 2141:reporters. 139:
17865 user 16688:user. 826:userbase 82:username 174:usernames 95:
18106 federal 18106:
18157 money 16399:money. 1758:
18170 result 6842:result. 452:resulted 1431:resulting 1346:results 7119:results. 980:
18209 management 17099:management. 1110:
18362 research 11250:research. 727:researched 153:researcher 1346:researchers 4435:researchers. 196:researching 255:
18746 review 12483:review. 723:reviewed 1304:reviewer 149:reviewers 230:reviewing 721:reviews 2931:reviews. 205:
18993 censor 1588:censored 952:censored. 74:censoring 859:censors 823:censorship 12500:censorship. 701:censorship/free 972:censorship/privacy/civil 404:censorship/web 120:
19233 debconf 276:debian 17855:debian-based 626:debian. 476:
19233 name 10460:name. 764:named 4402:names 3607:
19371 facebook 18648:facebook. 723:
19681 question 9688:question. 879:questioned 808:questioning 729:questions 6938:questions. 639:
19804 order 14028:order. 692:ordered 2399:ordering 431:orders 2047:orders. 207:
19861 organisation 3331:organisation. 393:organisational 97:organisations 1789:organisations. 194:organization 7031:organization. 717:organizational 290:organizations 5484:organizations. 535:
20396 national 20321:national-security 75:
20636 member 7943:member. 320:members 11588:members. 785:
21621 copyright 16583:copyright. 470:copyrighted 710:copyrights 3676:copyrights. 182:
21704 official 11777:official. 211:officials 9084:officials. 632:
21809 rights 18893:rights. 1936:rights/policing 980:
21944 cloud 20228:cloud-based 661:cloud. 1055:
21978 nsa 21533:nsa. 445:
22096 industrial 2497:industries 1720:industries. 357:industry 15436:industry. 2086:
22146 kde 21049:kde. 488:kde3 72:kde4 537:
22170 fedora 21663:fedora. 507:
22411 standard 8863:standard. 889:standardisation 232:standardised 77:standardization 422:standardize 182:standardized 329:standardizing 80:standards 9113:standards-based 123:standards. 1265:standards/consortia 836:
22416 market 16524:market. 2932:markets 2361:markets. 599:
22428 protect 6965:protected 2043:protected. 142:protecting 2235:protection 8152:protection. 633:protections 1715:protections. 307:protective 236:
22526 political 13646:politician 758:politicians 3860:politicians. 247:politico 350:politics 3118:politics. 547:
22664 product 8764:product. 1060:products 10698:products. 2142:
22994 publish 2200:published 13513:published. 392:publisher 1348:publisher. 70:publishers 1654:publishers. 127:publishes 825:publishing 2757:publishing. 108:
23532 next 23174:next. 358:
23594 permalink 23594:
23726 foundation 22066:foundation. 1660:
23765 opensource 268:open-source 22369:open-source. 110:open-sourced 387:open-sources 402:open-sourcing 229:
24419 gnome 23574:gnome-based 84:gnome-shell 123:gnome. 497:gnome3 141:
24981 video 18925:video. 596:video> 271:videolan 113:videos 4794:videos. 282:
25022 icons 24892:icons. 130:
25227 plan 8738:plan. 578:planned 3084:planned. 168:planning 3385:planning. 77:plans 8758:plans. 439:
25671 attack 10039:attack. 819:attacked 1573:attacker 739:attackers 1018:attacking 1986:attacks 8438:attacks. 1059:
26054 network 14441:network. 1261:networked 234:networking 4706:networking. 186:networks 4378:networks. 848:
26394 countries 9065:countries. 1438:country 13370:country. 2521:
26827 information 24642:information. 2185:
27074 phone 16072:phone. 891:phones 9315:phones. 796:
28926 http 28821:http/2 105:
29542 posted 28779:posted. 94:poster 429:posters 240:
29681 server-side 196:server 19956:server. 1406:servers 6949:servers. 1174:
29914 office 27490:office. 2424:
30007 platform 19050:platform. 3365:platformer 484:platforms 5649:platforms. 1459:
30132 development 26183:development. 2005:developments 1744:developments. 200:
30137 code 25179:code. 3191:coder 167:coders 349:coding 1168:coding. 83:
30291 media 28467:media. 1824:
30445 feature 9058:feature. 481:features 18736:features. 2170:
30791 kernel 27263:kernel. 2250:kernel-based 284:kernels 870:kernels. 124:
31184 communities 3334:communities. 737:community 23348:community-based 105:community-driven 193:community. 3467:
31828 u.s 201:u.s. 28527:u.s.-backed 113:u.s.-based 132:u.s.-led 93:us-backed 91:us-based 401:us-led 135:usa 1960:usa. 175:
32042 used 31372:used. 670:
32990 program 13901:program. 2001:programmed 179:programmer 803:programmers 1432:programmers. 143:programmes 381:programming 5917:programming. 237:programming/development 818:programs 6212:programs. 966:
33607 reader 4579:reader. 278:readers 28308:readers. 347:readership 95:
34045 internet 30644:internet. 2458:internet/net 943:
34523 america 11784:america. 1032:american 14414:american. 80:americans 6414:americans. 589:americas 210:
34826 page 6591:page. 1001:pages 3250:pages. 23984:
34864 technological 1251:technologically 152:technologies 7327:technologies. 1290:technology 22387:technology. 2457:
35836 presidency 1256:presidency. 275:president 27787:president. 1084:presidential 4834:presidents 600:
37154 distribution 12395:distribution. 1967:distributions 12171:distributions. 1095:distro 5812:distro. 652:distros 2743:distros. 319:
37446 epo 35039:epo. 2075:epo.org 332:
37687 public 35751:public. 1936:
37723 social 37600:social-media 123:
37846 gnu 6426:gnu. 93:gnu/hurd 81:gnu/kfreebsd 85:gnu/linux 29520:gnu/linux-based 91:gnu/linux. 1550:
38367 novell 36653:novell. 1714:
39579 developer 13599:developer. 373:developers 23843:developers. 1764:
40375 case 27904:case. 2515:caselaw 108:cases 8853:cases. 995:
41238 version 33068:version. 1440:versioned 146:versioning 164:versions 5822:versions. 598:
41601 users 37295:users. 4306:
42802 europe 16585:europe. 1902:european 22948:europeans 444:euros 923:
42826 court 34888:court. 1976:courts 5330:courts. 632:
43149 photon 97:photos 2662:photos. 191:photoshop 483:php 2278:php-fpm 105:php5 164:phpmyadmin 164:phrase 911:phrases 272:physical 2586:physically 388:physician 130:physicians 160:physicist 95:physics 466:physx 70:pi 7518:pi. 361:piana 111:pic 72:pichai 173:pick 1939:picked 1183:picket 100:picking 722:picks 900:pico 71:pico-itx 156:pics 99:picture 2355:picture. 321:pictured 311:pictures 1450:pictures. 80:pidgin 230:pie 1637:pie. 72:piece 5026:piece. 171:pieces 2059:pieces. 132:pierre 180:pierre-yves 94:pieter 113:pig 220:pigs 193:pike 97:pile 578:piled 78:piles 129:pilger 118:piling 112:pill 149:pillar 135:pillars 217:pilot 988:pilots 320:pim 201:pimping 70:pin 272:pinch 120:pine 176:pine64 118:
43445 reporting 4831:reporting. 240:report 26127:report. 1583:reports 9493:reports. 1171:
43555 update 20736:update. 1135:updated 8817:updated. 210:updatedx2 85:updater 93:updates 10442:updates. 999:updating 1038:
43583 desktop 36980:desktop. 2291:desktop/gtk 1351:desktops 2468:desktops. 493:
49940 win32 90:windows 46655:windows-based 204:windows-only 192:windows. 2799:
50033 project 32819:project. 3734:projects 11447:projects. 2033:
51865 government 43803:government. 2416:governmental 467:governments 4822:governments. 357:
54220 web 52026:web-based 864:web-browser 150:web. 1180:
54986 goog 218:google 53003:google+ 410:google. 1355:
55737 secure 7203:secure. 417:security 46115:security-focused 77:security. 1925:
60088 support 49775:support! 74:support. 3530:supported 6428:supported. 281:
64412 android 60980:android-based 684:android-powered 403:android-x86 196:android. 1650:android/linux 499:
66673 patenting 1475:patenting. 120:patents 58590:patents. 6408:patents] 80:
68988 people 65485:people. 3115:peoples 388:
71735 ubucon 83:ubuntu 68622:ubuntu! 86:ubuntu-based 643:ubuntu-powered 183:ubuntu. 1791:ubuntu/linux 327:
82416 patent 79808:patent. 1178:patented 1245:patented. 185:
111622 release 54907:release! 153:release. 5083:released 36637:released! 844:released. 1621:releases 11390:releases. 987:
117419 who 86708:why 30100:why. 611:
123764 open 123225:open. 539:
128149 software 111593:software-as-a-service 94:software-based 127:software-defined 636:software-like 535:software-related 106:software. 8488:software/open 6570:
165377 msft 784:microsoft 157420:microsoft-connected 384:microsoft-friendly 239:microsoft-funded 276:microsoft-influenced 73:microsoft-novell 147:microsoft-sponsored 129:microsoft-taxed 102:microsoft. 5451:microsoft/novell 227:microsoft] 145:
199787 linux 189841:linux! 329:linux- 121:linux. 8892:linux-related 104:linux/android 266:linux/unix 150:linux] 84:

The underlying code, trc.fig (in Fig):

#### license: creative commons cc0 1.0 (public domain) 
#### http://creativecommons.org/publicdomain/zero/1.0/ 
# load log.txt (db of entries) and search from trc.txt (output from coll.fig)


nl 10 chr
now "" arr
nowlen arr

function getlink p
    eachr p ltrim
    pg split eachr nl mid 1 1
    pr split pg "ignorethis " join pr "" return pr

function rplace p c t
    now split p c join now t return now

function ctext t
    quot 34 chr
    nl 10 chr
    tab 9 chr
    now t lcase rplace now quot nl swap now t
    now t rplace now " " nl swap now t
    now t rplace now "&" nl swap now t
    now t rplace now ";" nl swap now t

    now t rplace now ")" nl swap now t
    now t rplace now "(" nl swap now t
    now t rplace now "," nl swap now t
    now t rplace now "'" nl swap now t
    now t rplace now ":" nl swap now t
    now t rplace now "?" nl swap now t
    t = t.replace(unichr(8220), chr(10))
    t = t.replace(unichr(8221), chr(10))
    now t rplace now tab nl swap now t return t

function nohtml p
    buf ""
    intag 0 
    forin each p
        ifequal each "<"
            intag 1
        ifequal intag 0
            now buf plus each swap now buf
        ifequal each ">" 
            intag 0
    now return buf

function lookfor findthese db dbcopy origfind
    nl 10 chr
    dblen db len
    buf 1
    for eachlen 1 dblen 1
        each arrget db eachlen
        copyeach arrget dbcopy eachlen

        forin eachofthese findthese
            iftrue eachofthese
                #now eachofthese plus "-" print
                now each split now nl
                found 0
                found instr now eachofthese
                iftrue found

                    iftrue buf
                        now origfind print # print current list of searchwords
                        buf 0

                    now "*[[" prints
                    now copyeach split now "\n" mid 6 2 
                    eurl getlink copyeach prints "|" prints
                    etitle now mid 2 1 ltrim rtrim
                    edate now mid 1 1 ltrim rtrim plus " " plus etitle plus "]]" print
                    iftrue 0
                        now eachofthese print # print found keyword



db "" arr
dbcopy "" arr
dblen 0
    db = open("log.txt").read().split("If you liked this post, consider")
    dbcopy = db[:]
    dblen = len(db)

now "processing entries... " prints
prevcount 0
for eachdb dblen 1 -1
    dbcount eachdb divby dblen times 1000 int divby 10 minus 100 times -1

    ifmore dbcount prevcount
        now dbcount str plus "% " prints
        prevcount dbcount
    p arrget db eachdb 
    now nohtml p ctext now
    db arrset eachdb now

    p arrget dbcopy eachdb 
    now nohtml p
    dbcopy arrset eachdb now

now "" print

pg arropen "trc.txt" 
nowlen pg len

for eachloop nowlen 1 -1
    each pg mid eachloop 1   
    eaches split each " " 
    eacheslen eaches len minus 1
    now "" print
    reaches eaches right eacheslen join reaches " " split reaches ":" 

    iftrue reaches
        findthese arr mid 1 0 
        forin eachreaches reaches
            onereach split eachreaches " " mid 1 1 
            findthese plus onereach 
        iftrue findthese
            now lookfor findthese db dbcopy each
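For readers unfamiliar with Fig, the two text-cleaning helpers above (nohtml and ctext) can be sketched in ordinary Python. This is an illustrative rewrite, not the site's actual code; the two Unicode entries stand in for the curly-quote handling done by the inline-Python lines in ctext:

```python
def nohtml(text):
    # Drop everything between "<" and ">", like the intag flag in the Fig loop.
    buf, intag = [], False
    for ch in text:
        if ch == "<":
            intag = True
        elif ch == ">":
            intag = False
        elif not intag:
            buf.append(ch)
    return "".join(buf)

def ctext(text):
    # Lower-case, then turn each separator into a newline, one per rplace call.
    text = text.lower()
    for sep in ['"', ' ', '&', ';', ')', '(', ',', "'", ':', '?', '\t',
                u'\u201c', u'\u201d']:  # curly quotes, as in the inline Python
        text = text.replace(sep, '\n')
    return text

print(ctext(nohtml("<p>Hello, World</p>")))
```

The result is one token per line, which is what the newline-splitting search in lookfor expects.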


And coll.fig

#### license: creative commons cc0 1.0 (public domain) 
#### http://creativecommons.org/publicdomain/zero/1.0/ 
# combine trranks into trc.txt (with > trc.txt)
fl arropen "trranks"
buf "" arr mid 1 0
buftext ""
bufsum 0
forin each fl
    lentr each rtrim len
    iftrue lentr
        buf plus each
        iftrue buf
            forin beach buf
                now buftext plus beach plus ":" swap now buftext
                teg split beach " " mid 1 1
                rank split beach " " mid 2 1 val
                now bufsum plus rank swap now bufsum
            now bufsum prints " " prints
            now buftext print 
            buf "" arr mid 1 0
            buftext ""
            bufsum 0
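The grouping done by coll.fig is roughly the following, assuming trranks holds "word count" lines in blank-line-separated blocks (a sketch, not the site's code): each block is summed and its entries are joined with ":", producing exactly the "sum word count:word count:…" lines shown at the top of this post, which trc.fig then searches.

```python
def collate(lines):
    # Sum each block's counts and join its "word count" entries with ":".
    out, block = [], []
    for line in list(lines) + [""]:   # sentinel blank line flushes the last block
        if line.strip():
            block.append(line.rstrip())
        elif block:
            total = sum(int(entry.split()[-1]) for entry in block)
            joined = "".join(entry + ":" for entry in block)
            out.append("%d %s" % (total, joined))
            block = []
    return out

for row in collate(["cloud 20228", "cloud-based 661", "cloud. 1055",
                    "", "nsa 21533", "nsa. 445"]):
    print(row)
```

Note that 20228 + 661 + 1055 = 21944, matching the first line of the output above.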

There’s more code we’ve yet to share; it just needs more tidying up. We want everything to be open data, Free/libre software, Open Access and so on. It takes time.


Techrights Available for Download as XML Files

Posted in Site News at 3:09 am by Dr. Roy Schestowitz

Open data, Free/libre software, free society

tar gz file

Summary: Archiving of the site in progress; scripts and data are being made available for all to download

IN ADDITION to Drupal pages and almost 26,000 WordPress blog posts (and videos and podcasts, not to mention loads of IRC logs) we also have nearly 700 Wiki pages; a reader decided to organise an archive of them, along with an index for that archive.

Now available for download as a compressed archive (lots of XML files) is last week’s pile. It’s also available for download from the Internet Archive. Those are all the Wiki pages.

The code in Fig:

# < fig code >
# comments | built-in functions | user functions | inline python

#### license: creative commons cc0 1.0 (public domain) 
#### http://creativecommons.org/publicdomain/zero/1.0/ 
proginf = "wikidump 0.1 jul 2019 mn"
# dump xml for each wiki page on techrights (no edit hist)
# create each actual file
function dumppage url
    urltext = arrcurl url
    title = ""
    forin each urltext
        findtitle = instr each "<title>"
        iftrue = findtitle
            title = each ltrim rtrim split title "<title>" join title "" split title "</title>" join title ""
    iftrue title
        outfilename = "techrights_" plus title plus ".xml" split outfilename "/" join outfilename ":slash:" open "w"
        forin each urltext
            now = each fprint outfilename
        now = outfilename close

# get list of pages
allpages = arrcurl "http://techrights.org/wiki/index.php/Special:AllPages"
allpageslen = allpages len
longest = 0
longestindex = 0
for each 1 allpageslen 1
    eachlen = allpages mid each 1 len
    ifmore eachlen longest
        longest = eachlen
        longestindex = each

# process list of pages and call dumppage for each
quot = 34 chr
pages = allpages mid longestindex 1 split pages quot
forin each pages
    iswiki = instr each "/wiki/index.php/"
    ifequal each "/wiki/index.php/Special:AllPages"
        iftrue iswiki
            now = "http://techrights.org" plus each split now "http://techrights.org/wiki/index.php/" join now "http://techrights.org/wiki/index.php/Special:Export/" dumppage now

# create tgz archive
pos = 0
    if figosname != "nt": pos = 1
iftrue pos
    tm = time split tm ":" join tm "."
    dt = date split dt "/" join dt "-"
    tgzname = "techrightswiki_" plus dt plus "_" plus tm plus ".tar.gz"
    now = "tar -cvzf " plus tgzname plus " techrights_*" shell

Or alternatively in Python (Fig-derived):

#!/usr/bin/env python2
# encoding: utf-8
# fig translator version: fig 4.6
import sys, os
from sys import stdin, stdout
from sys import argv as figargv
try:
    from colorama import init
except ImportError:
    pass # (only) windows users want colorama installed or ansi.sys enabled

try:
    from sys import exit as quit
except ImportError:
    pass

from random import randint
from time import sleep

from os import chdir as figoch
from os import popen as figpo
from os import system as figsh
from os import name as figosname
figsysteme = 0
figfilehandles = {}
figfilecounters = {}
from sys import stdout
def figlocate(x, l = "ignore", c = "ignore"):    
    import sys 
    if l == "ignore" and c == "ignore": pass 
    # do nothing. want it to return an error? 

    elif l < 1 and c != "ignore": 
        sys.stdout.write("\x1b[" + str(c) + "G") # not ansi.sys compatible 
    elif l != "ignore" and c == "ignore": 
        sys.stdout.write("\x1b[" + str(l) + ";" + str(1) + "H") 
    else: sys.stdout.write("\x1b[" + str(l) + ";" + str(c) + "H") 

import time

def fignonz(p, n=None):
    if n==None:
        if p == 0: return 1
    else:
        if p == 0: return n
    return p

def fignot(p):
    if p: return 0
    return -1

figbac = None
figprsbac = None
sub = None
def fignone(p, figbac):
    if p == None: return figbac
    return p
    return -1

def stopgraphics():
    global yourscreen
    global figraphics
    figraphics = 0
    try: pygame.quit()
    except: pass

def figcolortext(x, f):
    b = 0
    if f == None: f = 0
    if b == None: b = 0
    n = "0"
    if f > 7: 
        n = "1"
        f = f - 8

    if   f == 1: f = 4 ## switch ansi colours for qb colours
    elif f == 4: f = 1 ## 1 = blue not red, 4 = red not blue, etc.

    if   f == 3: f = 6
    elif f == 6: f = 3

    if b > 7: b = b - 8

    if   b == 1: b = 4
    elif b == 4: b = 1

    if   b == 3: b = 6
    elif b == 6: b = 3

    stdout.write("\x1b[" + n + ";" + str(30+f) + "m")
    return "\x1b[" + n + ";" + str(30+f) + ";" + str(40+b) + "m"

def figcolourtext(x, f):
    b = 0
    if f == None: f = 0
    if b == None: b = 0
    n = "0"
    if f > 7: 
        n = "1"
        f = f - 8

    if   f == 1: f = 4 ## switch ansi colours for qb colours
    elif f == 4: f = 1 ## 1 = blue not red, 4 = red not blue, etc.

    if   f == 3: f = 6
    elif f == 6: f = 3

    if b > 7: b = b - 8

    if   b == 1: b = 4
    elif b == 4: b = 1

    if   b == 3: b = 6
    elif b == 6: b = 3

    stdout.write("\x1b[" + n + ";" + str(30+f) + "m")
    return "\x1b[" + n + ";" + str(30+f) + ";" + str(40+b) + "m"

figcgapal =   [(0, 0, 0),      (0, 0, 170),    (0, 170, 0),     (0, 170, 170),
(170, 0, 0),   (170, 0, 170),  (170, 85, 0),   (170, 170, 170), 
(85, 85, 85),  (85, 85, 255),  (85, 255, 85),  (85, 255, 255), 
(255, 85, 85), (255, 85, 255), (255, 255, 85), (255, 255, 255)]

def figget(p, s): 
    return s

def fighighlight(x, b):
    f = None
    if f == None: f = 0
    if b == None: b = 0
    n = "0"
    if f > 7: 
        n = "1" 
        f = f - 8

    if   f == 1: f = 4 ## switch ansi colours for qb colours
    elif f == 4: f = 1 ## 1 = blue not red, 4 = red not blue, etc.

    if   f == 3: f = 6
    elif f == 6: f = 3

    if b > 7: b = b - 8

    if   b == 1: b = 4
    elif b == 4: b = 1

    if   b == 3: b = 6
    elif b == 6: b = 3

    stdout.write("\x1b[" + n + ";" + str(40+b) + "m")
    return "\x1b[" + n + ";" + str(40+b) + "m"

def figinstr(x, p, e):
    try: 
        return p.index(e) + 1
    except: 
        return 0

def figchdir(p):
    try: figoch(p)
    except: print "no such file or directory: " + str(p)

def figshell(p):
    global figsysteme
    try: figsysteme = figsh(p)
    except: print "error running shell command: " + chr(34) + str(p) + chr(34)

def figarrshell(c):
    global figsysteme
    try:
        figsysteme = 0
        sh = figpo(c)
        ps = sh.read().replace(chr(13) + chr(10), 
        chr(10)).replace(chr(13), chr(10)).split(chr(10))
        figsysteme = sh.close()
    except: 
        print "error running arrshell command: " + chr(34) + str(c) + chr(34) 
    return ps[:]

def figsgn(p):
    p = float(p)
    if p > 0: 
        return 1
    if p < 0: 
        return -1
    return 0

def figstr(p): 
    return str(p)

def figprint(p): 
    print p

def figchr(p): 
    if type(p) == str:
        if len(p) > 0:
            return p[0]
    return chr(p)

def figprints(p): 
    stdout.write(str(p))

def figleft(p, s): 
    return p[:s]

def figmid(p, s, x):
    arr = 0
    if type(p) == list or type(p) == tuple: 
        arr = 1
    rt = p[s - 1:
        x + s - 1]
    if arr and len(rt) == 1: 
        rt = rt[0]
    return rt

def figright(p, s): 
    return p[-s:]

def figrandint(x, s, f):
    return randint(s, f)

def figlcase(p): 
    return p.lower()

def figucase(p): 
    return p.upper()

def figint(p): 
    return int(p)

def figarrset(x, p, s): 
    if type(s) == tuple:
        if len(s) == 1: fas = s[0]
    elif type(s) == list:
        if len(s) == 1: fas = s[0]
        fas = s
    x[p - 1] = s

def figopen(x, s):
    import fileinput
    if s.lower() == "w":
        if (x) not in figfilehandles.keys(): 
            figfilehandles[x] = open(x[:], s.lower())
    elif s.lower() == "r":
        if (x) not in figfilehandles.keys():
            figfilehandles[x] = fileinput.input(x[:])
            figfilecounters[x] = 0
    else:
        if (x) not in figfilehandles.keys(): figfilehandles[x] = open(x[:], s[:])

def figfprint(x, s):
    fon = figosname
    sep = chr(10)
    if fon == "nt": 
        sep = chr(13) + chr(10)
    figfilehandles[s].write(str(x) + sep)

def figflineinput(x, s):
    try:
        p = figfilehandles[s][figfilecounters[s]].replace(chr(13), 
        "").replace(chr(10), "")
        figfilecounters[s] += 1
    except:
        p = chr(10)
    return p

def figclose(x):
    if (x) in figfilehandles.keys():
        del figfilehandles[x]
    if (x) in figfilecounters.keys():
        del figfilecounters[x]

def figcls(x):
    if figosname == "nt": 
        cls = figsh("cls") 
    else:
        stdout.write("\x1b[2J\x1b[1;1H") # ansi clear screen elsewhere

def figarropen(x, s):
    x = open(s).read().replace(chr(13) + chr(10), chr(10)).replace(chr(13), 
    chr(10)).split(chr(10))
    return x[:]

def figarrcurl(x, s):
    from urllib import urlopen
    x = str(urlopen(s).read())
    x = x.replace(chr(13) + chr(10), 
    chr(10)).replace(chr(13), chr(10)).split(chr(10))
    return x[:]

def figarrstdin(x):
    ps = []
    for p in stdin: 
        ps += [p[:-1]]
    return ps[:]

def figarrget(x, p, s): 
    if 1:
        return p[s - 1]

def figplus(p, s): 
    if type(p) in (float, int):
        if type(s) in (float, int):
            p = p + s
        else:
            p = p + s # float(s) if you want it easier
        if p == float(int(p)): 
            p = int(p)
    else:
        if type(p) == str: p = p + s # str(s) if you want it easier
        if type(p) == list: 
            if type(s) == tuple:
                p = p + list(s)
            elif type(s) == list:
                p = p + s[:]
            else:
                p = p + [s]
        if type(p) == tuple: 
            if type(s) == tuple:
                p = tuple(list(p) + list(s))
            elif type(s) == list:
                p = tuple(list(p) + s[:])
            else:
                p = tuple(list(p) + [s])
    return p

def figjoin(p, x, s):
    t = ""
    if len(x): 
        t = str(x[0])
    for c in range(len(x)):
        if c > 0: t += str(s) + str(x[c]) 
    return t # s.join(x)

def figarr(p):
    if type(p) in (float, int, str): 
        p = [p]
    else:
        p = list(p)
    return p

def figsplit(p, x, s):
    return x.split(s)

def figval(n): 
    n = float(n) 
    if float(int(n)) == float(n): 
        n = int(n) 
    return n
def figtimes(p, s):
    if type(p) in (float, int):
        p = p * s # float(s) if you want it easier
        if p == float(int(p)): 
            p = int(p)
    else:
        if type(p) == list:
            p = p[:] * s # figval(s)
        else:
            p = p * s # figval(s) if you want it easier
    return p

def figdivby(p, s):
    p = float(p) / s
    if p == float(int(p)): 
        p = int(p)
    return p
def figminus(p, s): 
    return p - s

def figtopwr(p, s): 
    p = p ** s
    if p == float(int(p)): 
        p = int(p)
    return p
def figmod(p, s): 
    return p % s

def figcos(p): 
    from math import cos
    p = cos(p)
    if p == float(int(p)): 
        p = int(p)
    return p

def figsin(p): 
    from math import sin 
    p = sin(p)
    if p == float(int(p)): 
        p = int(p)
    return p
def figsqr(p): 
    from math import sqrt
    p = sqrt(p)
    if p == float(int(p)): 
        p = int(p)
    return p

def figltrim(p): 
    return p.lstrip() 
def figlineinput(p): 
    return raw_input() 
def figlen(p): 
    return len(p) 
def figasc(p): 
    return ord(p[0])
def figatn(p):
    from math import atan
    p = atan(p)
    if p == float(int(p)): 
        p = int(p)
    return p

def fighex(p): 
    return hex(p)
def figrtrim(p): 
    return p.rstrip() 
def figstring(x, p, n): 
    if type(n) == str: 
        return n * p 
    return chr(n) * p 
def figtimer(p):
    from time import strftime
    return int(strftime("%H"))*60*60+int(strftime("%M"))*60+int(strftime("%S"))

def figtime(p): 
    from time import strftime
    return strftime("%H:%M:%S")

def figdate(p): 
    from time import strftime
    return strftime("%m/%d/%Y")

def figcommand(p): 
    return figargv[1:]

def figtan(p): 
    from math import tan 
    p = tan(p)
    if p == float(int(p)): 
        p = int(p)
    return p

def figoct(p): return oct(p)

def figsleep(p, s): 
    sleep(s)

def figarrsort(p): 
    p.sort()
    return p

def figdisplay(x): 
    global figraphics, figrupd
    figrupd = 0
    if figraphics == 1:
        pass # graphics updates are not used by this script

def figreverse(p): 
    if type(p) == list: 
        p.reverse()
        return p
    elif type(p) == str:
        p = map(str, p) 
        p.reverse()
        p = "".join(p)
        return p

def figarreverse(p): 
    p.reverse()
    return p

def figfunction(p, s): 
    return p

def figend(x): 
    quit()

def figif(p, s): 
    return p

def figthen(p, s): 
    return p

def figsystem(x): 
    return figsysteme # error level of the last shell command

#### http://creativecommons.org/publicdomain/zero/1.0/ 

figlist = 0
try: figlist = int(type(proginf) == list)
except NameError: pass
if not figlist: proginf = 0
proginf = "wikidump 0.1 jul 2019 mn"

# dump xml for each wiki page on techrights (no edit hist)

# create each actual file
def dumppage(url):

    figlist = 0
    try: figlist = int(type(urltext) == list)
    except NameError: pass
    if not figlist: urltext = 0
    urltext = figarrcurl(urltext, url)

    figlist = 0
    try: figlist = int(type(title) == list)
    except NameError: pass
    if not figlist: title = 0
    title = ""

    for each in urltext:

        figlist = 0
        try: figlist = int(type(findtitle) == list)
        except NameError: pass
        if not figlist: findtitle = 0
        findtitle = figinstr(findtitle, each, "<title>")

        if findtitle:

            figlist = 0
            try: figlist = int(type(title) == list)
            except NameError: pass
            if not figlist: title = 0
            title = each
            title = figltrim(title)
            title = figrtrim(title)
            title = figsplit(title, title, "<title>")
            title = figjoin(title, title, "")
            title = figsplit(title, title, "</title>")
            title = figjoin(title, title, "")

    if title:

        figlist = 0
        try: figlist = int(type(outfilename) == list)
        except NameError: pass
        if not figlist: outfilename = 0
        outfilename = "techrights_"
        outfilename = figplus(outfilename, title)
        outfilename = figplus(outfilename, ".xml")
        outfilename = figsplit(outfilename, outfilename, "/")
        outfilename = figjoin(outfilename, outfilename, ":slash:")
        figopen(outfilename, "w")

        for each in urltext:

            figlist = 0
            try: figlist = int(type(now) == list)
            except NameError: pass
            if not figlist: now = 0
            now = each
            figfprint(now, outfilename)

        figlist = 0
        try: figlist = int(type(now) == list)
        except NameError: pass
        if not figlist: now = 0
        now = outfilename

# get list of pages

figlist = 0
try: figlist = int(type(allpages) == list)
except NameError: pass
if not figlist: allpages = 0
allpages = figarrcurl(allpages, "http://techrights.org/wiki/index.php/Special:AllPages")

figlist = 0
try: figlist = int(type(allpageslen) == list)
except NameError: pass
if not figlist: allpageslen = 0
allpageslen = allpages
allpageslen = figlen(allpageslen)

figlist = 0
try: figlist = int(type(longest) == list)
except NameError: pass
if not figlist: longest = 0
longest = 0

figlist = 0
try: figlist = int(type(longestindex) == list)
except NameError: pass
if not figlist: longestindex = 0
longestindex = 0

for each in range(int(float(1)), int(float(allpageslen)) + figsgn(1), fignonz(int(float(1)))):

    figlist = 0
    try: figlist = int(type(eachlen) == list)
    except NameError: pass
    if not figlist: eachlen = 0
    eachlen = allpages
    eachlen = figmid(eachlen, each, 1)
    eachlen = figlen(eachlen)

    if eachlen > longest:

        figlist = 0
        try: figlist = int(type(longest) == list)
        except NameError: pass
        if not figlist: longest = 0
        longest = eachlen

        figlist = 0
        try: figlist = int(type(longestindex) == list)
        except NameError: pass
        if not figlist: longestindex = 0
        longestindex = each

# process list of pages and call dumppage for each

figlist = 0
try: figlist = int(type(quot) == list)
except NameError: pass
if not figlist: quot = 0
quot = 34
quot = figchr(quot)

figlist = 0
try: figlist = int(type(pages) == list)
except NameError: pass
if not figlist: pages = 0
pages = allpages
pages = figmid(pages, longestindex, 1)
pages = figsplit(pages, pages, quot)

for each in pages:

    figlist = 0
    try: figlist = int(type(iswiki) == list)
    except NameError: pass
    if not figlist: iswiki = 0
    iswiki = figinstr(iswiki, each, "/wiki/index.php/")

    if each == "/wiki/index.php/Special:AllPages":

        figlist = 0
        try: figlist = int(type(ignoreit) == list)
        except NameError: pass
        if not figlist: ignoreit = 0

        if iswiki:

            figlist = 0
            try: figlist = int(type(now) == list)
            except NameError: pass
            if not figlist: now = 0
            now = "http://techrights.org"
            now = figplus(now, each)
            now = figsplit(now, now, "http://techrights.org/wiki/index.php/")
            now = figjoin(now, now, "http://techrights.org/wiki/index.php/Special:Export/")
            figbac = now
            now = dumppage(now) ; now = fignone(now, figbac) ; 

# create tgz archive

figlist = 0
try: figlist = int(type(pos) == list)
except NameError: pass
if not figlist: pos = 0
pos = 0

if figosname != "nt": pos = 1
if pos:

    figlist = 0
    try: figlist = int(type(tm) == list)
    except NameError: pass
    if not figlist: tm = 0
    tm = figtime(tm)
    tm = figsplit(tm, tm, ":")
    tm = figjoin(tm, tm, ".")

    figlist = 0
    try: figlist = int(type(dt) == list)
    except NameError: pass
    if not figlist: dt = 0
    dt = figdate(dt)
    dt = figsplit(dt, dt, "/")
    dt = figjoin(dt, dt, "-")

    figlist = 0
    try: figlist = int(type(tgzname) == list)
    except NameError: pass
    if not figlist: tgzname = 0
    tgzname = "techrightswiki_"
    tgzname = figplus(tgzname, dt)
    tgzname = figplus(tgzname, "_")
    tgzname = figplus(tgzname, tm)
    tgzname = figplus(tgzname, ".tar.gz")

    figlist = 0
    try: figlist = int(type(now) == list)
    except NameError: pass
    if not figlist: now = 0
    now = "tar -cvzf "
    now = figplus(now, tgzname)
    now = figplus(now, " techrights_*")
    figbac = now
    now = figshell(now) ; now = fignone(now, figbac) ; 
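Translator output like the above is verbose by design. Stripped of the scaffolding, the naming logic of dumppage reduces to something like this (a condensed sketch, not the translator's output):

```python
def extract_title(xml_lines):
    # Pick out the <title>...</title> line of the exported XML, as the Fig loop does.
    for line in xml_lines:
        line = line.strip()
        if "<title>" in line:
            return line.replace("<title>", "").replace("</title>", "")
    return ""

def export_filename(title):
    # Same naming as dumppage: "techrights_" prefix, ".xml" suffix, and "/"
    # escaped as ":slash:" so subpage titles stay legal file names.
    return ("techrights_" + title + ".xml").replace("/", ":slash:")

print(export_filename(extract_title(["  <title>Main Page</title>  "])))
```

Each exported page is then written out under that name before the tarball is built.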

We also have a complete set of pages and associated scripts/metadata. We might post these separately some other day. We encourage people to make backups or copies; no site lasts forever and decentralisation is key to long-term preservation.


Five Techrights Prototypes

Posted in Site News at 4:28 am by Dr. Roy Schestowitz

Summary: Thoughts on how to improve the reading experience on this site, whose layout has been the same for almost thirteen years

THIS Web site turns 13 in a few months (November) and it will, at that stage, have published about 26,000 blog posts (that’s an average of about 2,000 a year). The daily links have been around for more than a decade and the same goes for our IRC channels. Thankfully I still have a daytime/nighttime job to help pay my bills, including this site’s bills. I’m personally losing money — not just time — on the site. But it’s worth it. The site’s goal/work is important, as almost nobody else does it (Groklaw did it for almost a decade), and it’s rewarding in the fulfillment sense. I sort of live for it.

“I’m personally losing money — not just time — on the site. But it’s worth it.”Recently some readers suggested changes to the site’s layout. We’ve had lengthy discussions about these (those are actual, working Web pages, but we’re showing just screenshots of these because they’re crude prototypes, nothing beyond that). In the interest of transparency and spirit of collaboration we’ve decided to do a quick (but possibly lengthy) blog post about it. Just to keep readers ‘in the loop’ and invite feedback. There’s more in IRC logs when they become available (soon, some time next month).

To us — and to me personally — what’s most important is the substance, not presentation, but poor presentation can harm a site. I don’t judge sites by their appearance but by their contents; if you’re promoting all sorts of “alternative medicine” and “survival kits” and “gold bars” in the sidebar, that’s an instant credibility loss. If you use words like “fuck” and “asshole” in an article, that too can be a problem (depending on the context). If there are no links in the article, that alone might not detract from the message, but it helps to have ways to verify and trace individual claims back to sources. It inspires confidence. When dealing with companies that have spent billions of dollars on PR, it is inevitable that “shoot the messenger” tactics and nit-picking will be leveraged to dismiss the message. People have been defaming me, even using imposter accounts (with my name), for nearly 15 years.

Due to the size of Techrights (the total number of pages) it has been exceptionally challenging to keep everything up to date, up to standard, consistent etc. But we try our best given our limited capacity. We recently got more volunteers. We’re very thankful. They’re part of us. They’re the family. We’re growing.

“In the interest of transparency and spirit of collaboration we’ve decided to do a quick (but possibly lengthy) blog post about it.”Anyway, about layout…

“I’ve taken a few hours to make small modifications on Techrights stories web page,” one reader told us, “as we chatted in another dialog. These examples are very ugly, and of course are not intended as anything but a proof of concept. In fact, I’m not sure if I fully like them even in concept. So please feel free of [sic] not liking any of it either. But I wanted to give it a try anyways, in my little spare time. Who knows… maybe they end up triggering some actual good idea eventually. [...Y]ou’ll find 4 html files.”

It is based on the saved “/?stories” page from that date.

“Then there are three other versions of it,” the reader explained. “They use the same assets (as they’re all just an edited copy of the html file), and every one has a different modification…”

- “v1” reacts to each post’s height. When height > 1000px, a script limits the post height to 1000px, adds a little gradient at the bottom, and a clickable div for expanding the full content.


- “v2” does the same, but instead of expanding the contents, the div just opens the post URL (given in the post’s title) in a different tab.


- “v3” shrinks the posts to 100px, without checking their height, and adds a div on the post’s top-right corner for expanding or folding it.


“Of course this is all client-side,” the reader noted, “even when there’s no need for it (as in “v2”, or the css classes assignation). The point is just to toy with the idea of scrolling and being able to focus in what I want to read, without trying to go to a “fully personalized” privacy nightmare and/or a 20MB webpage full of unwanted assets, invasive behaviour, and obfuscated javascript. Just tiny tools that should not break anything and could make it better for someone. By trying this stuff, I guess using desktop I liked more something in the lines of “v1” or “v2”, but liked more “v3” on a cellphone. There’s something about the tiny screen and not having too much data in it that feels adequate, but not on the monitor. It would be nice to not have this dialog in private, as I believe a lot of readers may have cool ideas.”

“To us — and to me personally — what’s most important is the substance, not presentation, but poor presentation can harm a site.”We passed that around internally last week. “I haven’t looked in detail,” one person remarked, “but in regards to layout it is 100% possible to avoid any and all JavaScript and use only pure CSS. If it is a matter of ‘bling’, quite a bit can be done with CSS3: there are accordion menus, pull-down menus, and many other things possible with just CSS. However, my main concern is the reflow of the columns on smaller, narrower screens. The [...] v3 at least does not address that, at least not once it has been sanitized and the scripts removed.”

The person then suggested a better model: “Here’s an example of one site that does 1, 2, or 3 columns depending on the browser’s width: https://www.deccanchronicle.com/

“Techrights needs to be able to reflow like that. HTML 5, which should also be used, has “summary” and “details” elements for hiding or revealing additional material. There are work-arounds in CSS for HTML4, but HTML5 ought to be the standard used. Working with WordPress to make those kinds of changes will be the hardest part.”
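For reference, the HTML5 elements mentioned above collapse and expand content with no scripting at all; a minimal sketch (the post title and body text are invented for illustration):

```html
<!-- A post collapsed by default; the browser toggles it natively -->
<details>
  <summary>Example post title</summary>
  <p>Full post body goes here; it stays hidden until the summary is clicked.</p>
</details>

<!-- The boolean "open" attribute makes a post start expanded instead -->
<details open>
  <summary>Another post, expanded by default</summary>
  <p>This body is visible immediately.</p>
</details>
```

The `<summary>` element acts as the built-in click target, which is roughly the role played by the clickable div in the reader’s prototypes.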

This is still a work in progress, and the IRC logs, which we publish once every 4 months (for efficiency’s sake), have more on that.

“When I worked on those files,” the reader added, “my strongest doubt was: “What are the most important issues to tackle here?”. And that’s very subjective, so I needed your feedback to know it: it’s your site, and there are lots of readers. I didn’t want to spend many hours tuning/designing/refactoring not-my-web with only my criteria. And that was especially uncomfortable on the “styling” front. Also, I didn’t want to let the whole thing die without doing anything. So some dirty sketches sounded like a good start, without needing to tackle all possible ideas at once, and without needing to touch a single given style. If that kind of work were well received, I was going to work next on the styles, which includes the flow, but also things like the font size given the screen size, the distribution of the posts, things like “night mode friendly”, and so on.”

“This is still a work in progress, and the IRC logs, which we publish once every 4 months (for efficiency’s sake), have more on that.”Longtime readers are probably aware that the site’s looks have been virtually the same since 2006. This is intentional and it ensures consistency (e.g. the looks and layout of old posts, of which there are many).

We recently ranted about JavaScript. We don’t want to ‘pollute’ the site with it. It would not help the substance.

“There’s also new stuff like CSS grids, flexboxes, and other things I never tried,” the reader said. “But in my experience, working on a CSS+HTML-only solution (which I always try to do) ends up in LOTS of quirks convoluting the code, or in sacrificing something for the sake of purity. Things like the box model, pixels, and margins that render differently in different browsers end up generating weird scrolls, broken lines, overlapped items, and stuff like that. On my blog I refused to program a reflow, because I couldn’t care less about mobile browsers, and it just adapts to screen size; I just zoom with the fingers if I happen to read it on mobile, or just flip the screen. It’s not a big site, nor a news site. So it barely uses any JavaScript, it has my own personal bling that pleases just me, and works well enough. WordPress didn’t make it hard for me to program it. [...] Techrights deserves professional work. That means no trash code, no CSS whack-a-mole against heavy HTML, no gratuitous JavaScript, and yet good SEO and good compatibility with other tools. That means lots of work. And neither you nor the readers should do that without a clearer idea of what should be done.”
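For what it’s worth, the grid layout the reader mentions can express the 1/2/3-column reflow discussed earlier in a handful of declarations; a sketch only, with a hypothetical `.stories` container and an assumed 20em minimum column width:

```css
/* Posts flow into as many 20em-wide columns as fit the viewport:
   typically one on a phone, two or three on a desktop monitor */
.stories {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(20em, 1fr));
  gap: 1em;
}
```

The same effect can be achieved with explicit media queries (fixed breakpoints per screen width), which gives more control at the cost of more rules; `auto-fit` with `minmax()` lets the browser pick the column count itself.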

“We’re open to the idea of making changes site-wide provided they’re sufficiently beneficial and the benefits outweigh harms associated with backward compatibility.”If anyone has ideas, please share them with us in the comments, in IRC, or by e-mail at bytesmedia at bytesmedia.co.uk (a shared inbox). We’re open to the idea of making changes site-wide provided they’re sufficiently beneficial and the benefits outweigh harms associated with backward compatibility.

Another person has meanwhile created a prototype without JavaScript and explained: “I agree with most of that except regarding the importance of CSS. In my experience it remains the only way to keep the pages lightweight. However, to do that, it does have to be written by hand and not use any off-the-shelf libraries. As for the flow and for stripping extraneous material, see the [following]. Don’t mind the background colors or the borders; they are there to show the blocks used.” Prototypes are shown below (as screenshots only, as the underlying code isn’t polished).




RSS Feeds of Sites Techrights Routinely Linked to in Recent Years

Posted in Site News at 6:41 am by Dr. Roy Schestowitz

Newspaper creation

Summary: Newspapers may be a thing of the past, but RSS feeds still exist and they’re an alternative not censored or throttled (based on subjective rankings) by centralised authorities

In an effort to focus on technology — to cover GNU/Linux, FOSS, and patents, and to spend less time on politics or other matters we don’t specialise in — about a month ago we significantly decreased coverage (in daily links) of political matters. There are nonetheless some sites we can recommend; they have RSS feeds, which make them simpler to follow for updates (we do not recommend social control media, which eliminates neutrality).

Here are some of the key ones, in no particular order:

  • ProPublica Articles and Investigations: RSS
  • Jack of Kent blog (my lawyer): RSS
  • Using Our Intelligence: RSS
  • Truthout Stories: RSS
  • American Civil Liberties Union: RSS 1, RSS 2, RSS 3, RSS 4, RSS 5
  • Craig Murray: RSS
  • FAIR Blog: RSS
  • Project Censored: RSS
  • La Quadrature du Net: RSS
  • John Pilger: RSS
  • ORG blog: RSS
  • Shadowproof: RSS
  • Creative Commons: RSS
  • Truthdig: RSS
  • Common Dreams: RSS
  • Robert Reich: RSS
  • Meduza: RSS
  • EFF: RSS
  • Aral Balkan: RSS
  • CounterPunch Index: RSS
  • Techdirt: RSS
  • Pirate Party UK: RSS

Over the years many were deleted owing to loss of credibility and incidents that made them untrustworthy, compromised, or dubious. The above withstood fairly strict standards, so they’re probably sincere.

