01.17.22

The GUI Challenge

Posted in Free/Libre Software, GNU/Linux at 5:13 pm by Guest Editorial Team

Authored by Andy Farnell


Summary: The latest article from Andy concerns the Command Line Challenge

Cheapskate’s wonderful guide is currently running a “One Week Command Line Challenge”. Some of the students I teach now are so young (to an old beard like me) they think this is some “crazy new thing”. Is there new hope and a new perspective to be explored here? Something other than retro and cool. Perhaps the historical baggage, the narrative of how “superior” graphical interfaces replaced “old” consoles, is an obstacle to new visions for the next generation?

As a lifelong textual user interface (TUI) user, this got me thinking. If you were to give me “The GUI Challenge” I’d be sunk! My world (dwm, emacs, w3m, etc.) feels so familiar, it’s in my bones. After thirty or forty years on the command line, if I were forced to use “normal computers” it would cripple my ability to do anything.

The command line is super empowering, but particular. Put me on a Mac or Windows machine and I revert to a child-like flap, randomly clicking around on icons that look promising. I’d be twenty times less productive than my peers, yet, modesty be damned, I’m ten times more effective/productive at average computing tasks than other professionals when in my comfort zone – at the command line. Isn’t this true for us all, that we have our comfy shoes?

Of course this isn’t about some innate inability to use graphical tools. I’ve mastered some jolly complex ones like the Blender and Unreal editors (virtual world building), and ProTools or Ardour (for sound and music). One of the most complex I recall was a VLSI/CAD creator that used two four-button mice (or mouse and ball).

So, is the command line challenge unfair? I am no more capable of quickly learning a new graphical paradigm than an entrenched GUI user is of adopting the keyboard and console. This probably applies at any age or ability level where you are comparing like-for-like paradigm switching.

No, the issue here is deeper and is about utility paradigms. How do people relate to computers as tools at the highest level – at the operating system level and above?

If you dig back in the Usenet and mailing-list archives, you’ll find fascinating, passionate and intelligent debates on the merits of different interfaces going right back to Xerox-PARC. They are really separate computing cultures. There’s a fair historical summary here.

The above history ends in 2001. GUIs did not end there; the debate has moved further, and many new things have not been well analysed. Mobile, which essentially emulates button-based handheld appliances, cannot really be compared to GUI (in its traditional sense), even though it’s technically a computer running a graphical interface.

It’s only since about 2010 that the GUI function of abstracting (hiding away complexity) was subverted by wicked corporations to hide away deception and to effect control. This shift from the abstract to the abstruse and obstructive is what we sometimes call “Dark Computing Patterns”, but really it goes deeper than that – visual computing is its own realm of psychology, politics, semiotics, iconography and subterfuge that in many cases thoroughly bastardises the function of computers qua “tools”.

The GUI/TUI debate can be framed in many ways: preference, freedom, extensibility, cognitive overhead, portability, control (tweakability), depth of understanding (legibility), and more.

For me, tool longevity and stability are important. I still use the same applications and skills I learned in 1980. Some people, foolishly I think, imagine that to be a bad/anti-progressive stance. One of the most underrated abilities in computer programming is knowing when something is finished. As is the ability to just use something instead of worshipping it as a digital artefact (cue NFT “first editions” of brand apps).

By contrast many of my colleagues must re-learn their entire productivity stack every few months at the whim of corporate developers or seemingly random events in “the market”. I literally hear them anthropomorphising:

“Oh, Slack won’t let me do that now”

“Oh, Google ate my email”

“Sorry, something broke, can you resend it please?”

Their “computers” are chaotic mystery machines, magic fun fairs where superstitious ritual ministrations must be performed. This sort of Scooby-Doo “clown computing” has no place in serious business, in my opinion. So, another hugely underrated quality that TUIs favour is stability.

Where did this mess come from? In the 1980s “home computers” created a culture of their own, and from there Apple and Microsoft needed to counter a socially constructed but actually mythical “fear” of computers as nerdy and silly, but also “dangerous”. Remember granny worrying that it would “blow up” if you typed the wrong thing?

Continuing a culture of sysadmins from the time-sharing Unix days, we created the “user” as a particular stereotype. To put it quite bluntly, we manufactured “users” to be idiots. Indeed, use of the word “users” instead of a more neutral term like “operators” is significant. The developer-user relationship today is a power relationship, and often an abusive one (in both directions).

In fact, denigrating attitudes have their roots in the fragility of early software development. The “user” was an enemy who would always find ways to break our software and exhibit extraordinary “stupidity” by failing to understand our non-obvious interface puzzles. We used tropes like PEBKAC (“problem exists between keyboard and chair”) and “lusers”, and treated others with disrespectful and superior smugness.

Computing had its hashtag moment, and markets demanded that perceptions change. Microsoft solved the problem by erecting some soothing blue fire-hazard cladding around a crumbling DOS. Underneath, exposure to “The Registry” was like staring directly into the open core of Chernobyl.

At that point, enter Apple, who could play Good Cop, adding value by simply subtracting (or consolidating) features. For many, Steve Jobs was elevated to the man who “invented computers”. For a certain generation, he did. The ancient science of HCI (human-computer interaction) was beaten and disfigured into the designer denomination of UX/UI that emphasised intuition, feel, and experience, which in turn ushered in the age of performative productivity. This trajectory of form over function culminated in neurotic obsessions with $2000 disposable thin laptops and the Onion’s infamous Apple Wheel parody, which confused many as to whether it was a genuinely good idea.

Meanwhile the command line simply kept calm and carried on. Nothing changed in 30 years. Those who ran the servers, databases, scientific and technical applications never strayed far from the console, except where “presentation” demanded. However, through the mass media and advertising, digital technology became synonymous with these corporate veneers over actual computers, while Hollywood made the command-line a glowing green preserve of malcontents bent on destroying civilisation.

So, although the Command Line Challenge is fun – and I hope it inspires some people to go beyond their comfort zone – let’s be aware that human factors, history and politics play a greater role behind the scenes. Yes, it’s about mental models, rote motor skills and habits, rather than any intrinsic good or bad. But it’s also about culture and popular ideas of what a computer “is”.

The emphasis of Cheapskate’s article is on TUI allowing the use of older computers. That’s a very topical and important concern in the age of climate emergency. If you don’t already know about books like Gerry McGovern’s World Wide Waste, I urge you to read more about e-waste. Making the connections between textual interfacing, more modest tech-minimalist use, and a better society and healthier planet isn’t obvious to everyone.

There are many reasons people may prefer to return to the command line. I vastly prefer TUIs for another reason. As a teacher I deal in ideas, not applications, so it’s a way of imparting lasting concepts instead of ephemeral glitter. Commands are connections of action concepts to words, essential for foundational digital literacy. Almost everything I can teach (train) students to use by GUI will have changed by the time they graduate.

For younger people the difference is foundational. My daughter and I sit down together and do basic shell skills. She can log in, launch an editor, play music and her favourite cartoon videos. We use Unix talk to chat. It’s slow, but great fun, because character-based comms are very expressive, as you see the other person typing. She’s already internalising the Holy Trinity – storage, processing and movement.

To make this work I obviously customised bash, creating a kind of safe sandbox for her with highly simplified syntax. This week we are learning about modifier keys – shift is for SHOUTING and control is for CANCEL (you can’t get around needing to teach CTRL-C). What we are really working on is her typing skills, which are the foundation of digital literacy in my opinion. I think at the age of 5 she is already a long way ahead of her school friends who paw at tablets.
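For a sense of what such a sandbox might look like, here is a minimal sketch of a child-friendly ~/.bashrc; the alias names, media paths, and the use of mpv and talk are illustrative assumptions rather than a description of the actual setup:

    # Sketch of a child's restricted ~/.bashrc (illustrative assumptions only)
    PS1='what next? '                       # a short, friendly prompt
    set -o noclobber                        # '>' will not silently overwrite files
    HISTCONTROL=ignoredups                  # keep the history tidy

    # One word per activity; assumes mpv and the classic Unix 'talk' are installed
    alias music='mpv --no-video ~/Music'    # play the music folder
    alias cartoons='mpv ~/Videos/cartoons'  # play the favourite cartoon videos
    alias write='nano'                      # a forgiving editor to start with
    alias chat='talk dad'                   # character-based chat with another local user

The point is not these particular commands but that each word maps to one clear action, the same connection of action concepts to words described above.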

In conclusion then, the TUI/GUI saga is about much more than interchangeable and superficial ways of interacting with computers. In its essence it is about literacy, the ability to read and write (type). Behind, and ahead of it, are matters of cultural importance relevant to education, autonomy, democracy, self-expression, and the economy. So if you’re a mouser or screen smudger, why not give Cheapskate’s challenge a try?

01.16.22

[Meme] Gemini Space (or Geminispace): From 441 Working Capsules to 1,600 Working Capsules in Just 12 Months

Posted in Free/Libre Software at 6:58 pm by Dr. Roy Schestowitz

1,600 working Gemini capsules

Summary: Gemini space now boasts 1,600 working capsules, massive growth compared to last January, as we noted the other day (1,600 is now official)

Gemini Rings (Like Webrings) and Shared Spaces in Geminispace

Posted in Free/Libre Software at 5:51 pm by Dr. Roy Schestowitz

Video download link | md5sum 00694971abd0b010920fe27e4fa4f1d5
Connected Communities in Geminispace
Creative Commons Attribution-No Derivative Works 4.0

Summary: Much like the Web of 20+ years ago, Gemini lets online communities — real communities (not abused tenants, groomed to be ‘monetised’ like in Facebook or Flickr) — form networks, guilds, and rings

THE ‘old’ Web was, in a lot of ways, far more charming than today’s bloated and hostile ‘Web’, which became increasingly proprietary (Web browsers are rapidly becoming little but a canvas for proprietary software that runs remotely).

The original Web emancipated people, whereas today’s Web mostly oppresses them. “Each service we use that’s operated by someone other than ourselves is another point of failure in our lives,” one Gemini blogger recently noted. “Each of these corporations is handling staggeringly large amounts of personal data.”

In the old Web we had things like Geocities (where I had a site when I was 15 or 16) and shared spaces like chiark, which still remains unchanged (not trying to become more “modern”). There’s something very similar to it in tilde.team, which even includes LEO, “a webring but for Gemini instead of the web” (yes, remember webrings?), and it certainly seems to have grown quite a bit.

Gemini space is a lot bigger than people care to realise and it grows rapidly. Towards the end, the video above shows that there are now close to 2,000 known capsules and we’re very, very close to 1,600 active capsules known to Lupa (maybe it will exceed that number by the end of the day). There are a lot more users than capsules, probably tens of thousands of regular users. Suffice it to say, those are people who install a ‘proper’ client instead of using some Web gateway.

The Corporate Cabal (and Spy Agencies-Enabled Monopolies) Engages in Raiding of the Free Software Community and Hacker Culture

Posted in Deception, Free/Libre Software, GNU/Linux, Microsoft at 8:46 am by Dr. Roy Schestowitz

Video download link | md5sum 6fda57fbbfbb0443719a2afd74df26d5
Raiding the Community
Creative Commons Attribution-No Derivative Works 4.0

Summary: An overt attack on the people who actually did all the work — the geeks who built excellent software, only for it to be gradually privatised through the Linux Foundation (a sort of price-fixing and openwashing cartel for the shared interests of proprietary software firms) — is receiving more widespread condemnation; even the OSI has been bribed to become a part-time Microsoft outsourcer, as organisations are easier to corrupt than communities

FOUR days ago we mentioned what Microsoft had done to Marak, in effect confiscating his work on Free software even though he wasn’t working for Microsoft or being paid for his work. This caused an uproar; does Microsoft covertly own and control everything in GitHub? If so, it’s not free hosting; it’s the passage of one’s work to Microsoft. What about projects that used GitHub for 14 years? Did they ever consent to such a transaction? And what did Microsoft actually pay for when it took over GitHub? Was this a non-consensual sale of code other than the proprietary software of GitHub itself?

The video above talks about an upcoming series regarding the raiding of the Commons, or the privatisation of volunteers’ hard work; Microsoft isn’t the only one doing it, and I mention AWS as another example — raking in all the profits (financial gains) while denying a living wage to those who actually did all the work.

This crisis isn’t new and discussion about it is well overdue. Yes, Free software powers this planet (also our presence in space and Mars to some degree), but who controls this software? Recently, as we noted here several times, Microsoft tried to claim credit for the mission to Mars by merely asserting that everything in GitHub (as in every project with presence there) is property of Microsoft.

“Nobody accidentally makes a billion dollars while working on protecting human rights and democracy. The only way you make a billion dollars is by working on making a billion dollars.”
      –Aral Balkan

01.15.22

Blogging and Microblogging in Geminispace With Gemini Protocol

Posted in Free/Libre Software at 8:01 am by Dr. Roy Schestowitz

Video download link | md5sum b9536100a22dd31d12f3bc226c0f0c11
Gemini Blogging and Microblogging
Creative Commons Attribution-No Derivative Works 4.0

Summary: Writing one’s thoughts and other things in Geminispace — even without setting up a Gemini server — is totally possible; gateways and services do exist for this purpose

THE majority of people who reject Gemini — usually without even trying it or ever giving it a chance — wrongly assume it cannot be used for certain things which are (wrongly) perceived to be essential. But there are gateways for social control media, such as this one, and blogging is possible too, even without setting up one’s own capsule. Sure, the level of features and presentation isn’t on par with the Web, but that’s not the goal. We want an alternative to the Web, not just another Web.

The video above shows some of the things that are possible in Geminispace (or Gemini space) when one wishes to publish essays and short thoughts, having already covered examples of Gemini chat clients, games, and mainstream news operations (those can be accessed via the Gemini protocol as well).

We expect to have a lot more coverage regarding Gemini. As it stands at the moment, we’re just a handful of capsules short of 1,600 active ones, based on Lupa’s catalogue of capsules.

01.14.22

Gemini Clients: Comparing Moonlander, Telescope, Amfora, Kristall, and Lagrange (Newer and Older)

Posted in Free/Libre Software, GNU/Linux at 9:31 pm by Dr. Roy Schestowitz

Video download link | md5sum b203431f98541dcace6b7b6fcf4a1c5f
Comparing Six Gemini Clients
Creative Commons Attribution-No Derivative Works 4.0

Summary: There are many independent implementations of clients (similar to Web browsers) that deal with the Gemini protocol, and today we compare them visually, using Techrights as a test case/capsule

THE Gemini “newcomers” often ask what to download rather than how to install or set up one’s own Gemini capsule (this typically comes next). So we habitually present the differences between Gemini clients, which target different kinds of users with different needs, platforms (operating systems), and system capacity (some lack a GUI and cannot even attach a screen; some are literally blind). Well, the latest addition to the ‘gallery’ is Kristall, which is thus far our favourite Gemini client because of its decent GNU/Linux (and Qt) integration, not to mention built-in support for some very rudimentary HTML. Kristall is officially packaged for OpenBSD and select GNU/Linux distros.
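For readers wondering what these clients actually do under the hood, the protocol itself is simple enough to exercise by hand: open a TLS connection to port 1965, send the full URL followed by CRLF, and read back a status line and the page body. Below is a rough sketch using openssl; the capsule hostname is an assumption for illustration:

    # Fetch a Gemini page with nothing but openssl (SNI required, port 1965).
    # The hostname below is assumed for illustration.
    printf 'gemini://gemini.techrights.org/\r\n' | \
      openssl s_client -quiet -connect gemini.techrights.org:1965 \
        -servername gemini.techrights.org
    # The server answers with a status line such as "20 text/gemini",
    # then the page body, then closes the connection.

Every client in the video is, at heart, doing this plus rendering the returned text/gemini markup.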

The video above shows Moonlander, Telescope, Amfora, Kristall, and Lagrange, of which I have multiple versions installed. In the video the earlier version of Lagrange is shown before the recent one. Lagrange is being developed quite frequently and quickly, whereas Kristall was last worked on back in November.

There are other clients such as Castor, which was last updated 4 months ago. This one was last updated 10 hours ago.

At the time of writing Lupa is aware of 1,590 active capsules, so it’s very likely this count will exceed 1,600 sometime over the weekend.

White House Asking Proprietary Software Companies That Add NSA Back Doors About Their Views on ‘Open Source’ Security

Posted in Deception, Free/Libre Software, Microsoft, Security at 5:34 pm by Dr. Roy Schestowitz

Video download link | md5sum 660351fe04a47c33611de299d17501b4
GAFAM Finger-pointing for White House
Creative Commons Attribution-No Derivative Works 4.0

Summary: The US government wants us to think that in order to tackle security issues we need to reach out to the collective ‘wisdom’ of the very culprits who created the security mess in the first place (even by intention, for imperialistic objectives)

THE very same companies that back-door their own software (i.e. deliberately make their products not secure) have been asked by the American administration for their views on the security of Free software, which isn’t defective by design, maybe just by accident, occasionally.

We’ve already commented on this ludicrous situation in passing (in our Daily Links). The biggest National Security threat (Microsoft) is infiltrating panels on security, diverting attention away from the biggest threats to lesser threats, which are usually the solution, too. Lobbying? Outright political corruption? Both?

Either way, the above video concerns this new article, which is only one of many. We already listed about half a dozen earlier today. The author is so clueless that he calls the Linux Foundation the “Linux Open Source Foundation” and names IBM/Red Hat as if they’re separate entities. The same for GitHub and Microsoft. To quote: “The full tech participant list includes Akamai, Amazon, Apache Software Foundation, Apple, Cloudflare, Facebook/Meta, GitHub, Google, IBM, Linux Open Source Foundation, Microsoft, Oracle, RedHat and VMware.”

Of the above, only the Apache Software Foundation (ASF) actually speaks for Free/Open Source software. Yes, Zemlin’s PAC is little but a front group for some of those other companies.

Why are all the companies invited (assuming Red Hat is just IBM) to discuss this matter dripping with “conflict of interest”, and how can this establish trust? Why don’t they also discuss the threat posed by proprietary software? Some of the headlines that emerged afterwards want us to think that “Open Source” — not Microsoft et al — is the real “national security” threat. We’ll omit links to those “reports”… (FUD)

“…any real plan has to eliminate Microsoft from both the desktop and the supporting infrastructure. That is a staffing problem, not a technical one.”
      –Techrights associate
“Speaking of politics,” an associate noted today, “notice that the US’ concern about critical infrastructure is shifting all of the blame and attention on to FOSS. At the same time only the big, proprietary vendors are invited to the planning sessions with the government. They bring in clowns instead of the big names. They should at least be consulting with Bruce Perens, Bruce Schneier, Dan Geer, Moxie Marlinspike, Eugene Spafford, Daniel Bernstein, Paul Vixie etc. (notice that Spaf’s quote about Windows is now missing from pretty much every page that includes his old quotes…)”

And “even RMS and Linus Torvalds could add benefit if they had not been reframed as controversial by the attackers now moving in and out of DC. Wietse Venema is in the US too… Phil Zimmermann is still around too. Many of those involved in LibreSSL and OpenSSL are in the US as well… the list of knowledgeable, skilled, experienced people is long. No need for them to include any frauds, charlatans, or poseurs. But that’s what we get when Microsoft reps got in on the campaign team. Microsoft created the problems, and therefore is unable to solve them and it would be inappropriate to even have them involved. There’s a famous quote which goes approximately like this, “we cannot solve our problems with the same thinking we used to create them.” As such Microsoft representatives have to be cleared from the room long, long before discussion can start. Ransomware is just one symptom of microsoftianism. Even if Windows is retained for a shorter period on the desktop, servers could run FreeBSD with OpenZFS. The snapshotting feature would make data restoration much less inconvenient. However, any real plan has to eliminate Microsoft from both the desktop and the supporting infrastructure. That is a staffing problem, not a technical one. Even Microsofters, such as Mitchel Lewis, observe that, but most don’t dare speak up. I presume fear of NDAs and non-disparagement clauses in various contracts, especially terminations.”
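Since the associate mentions OpenZFS snapshots as a way to make data restoration less painful, here is a minimal sketch of that workflow; the pool and dataset names are hypothetical:

    # Hypothetical pool/dataset name; the commands are standard OpenZFS.
    zfs snapshot tank/srv@pre-incident     # cheap, near-instant point-in-time copy
    zfs list -t snapshot                   # list snapshots available for restoration
    zfs rollback tank/srv@pre-incident     # return the dataset to that point in time

Individual files can also be copied back out of the read-only .zfs/snapshot directory without rolling back the whole dataset.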

“Microsoft created the problems, and therefore is unable to solve them and it would be inappropriate to even have them involved.”
      –Techrights associate
The number of articles we saw about Log4j that cited Microsoft as if it was a security expert was truly worrying. Since when does Microsoft get to play “concern troll” about “Open Source”?

“About the disappearance of the Spafford quote,” our associate noted: “It used to be cited everywhere but most of those sites are gone and the rest seem to have redacted just that one quote.”

Scientific Excellence and the Debian Social Contract

Posted in Debian, Free/Libre Software, GNU/Linux at 9:53 am by Dr. Roy Schestowitz

Video download link | md5sum 36cf190fdd0c12e45c5f7a57abbf9449
Corporate Politics in Debian
Creative Commons Attribution-No Derivative Works 4.0

Summary: The Debian Project turns 30 next year; in spite of it being so ubiquitous (most of the important distros of GNU/Linux are based on Debian) it is suffering growing pains and some of that boils down to corporate cash and toxic, deeply divisive politics

THE Debian Project, despite the widespread adoption of GNU/Linux globally, certainly isn’t going through easy times. The Debian Social Contract ought not be undermined by political hacks (pseudo-tolerance); it should prioritise science. Yesterday, for the second time in a row, Debian revealed that it had only recruited one Debian Developer per month. As I show in the video above, in past years and even some recent years they could recruit half a dozen or more per month. Last night Dr. Norbert Preining sadly announced that he would leave many Debian packages orphaned; those of us who use Debian know just how important those packages are (even KDE!) and finding a person to fill his shoes would be very difficult as he’s very experienced.

But his decision did not exactly shock me. Going a few years back, he said that his “demotion to Debian Maintainer is – as far as I read the consitution [3], the delegation of DAM [4], and the DAM Wiki page about their rights and powers [5], not legit since besides expulsion there is not procedure laid out for demotion, but I refrained from raising this for the sake of peace.”

They did the same thing to Daniel Pocock and then acted all shocked when he was upset, especially considering the fact that this was done as retribution for his FSFE ‘whistleblowing’ (telling Fellows, as their elected representative, that the FSFE wasn’t giving them their money’s worth). The attacks on Dr. Preining left him bruised as colleagues were choosing sides along superficial lines. People who didn’t (and still don’t) write any code were sucking the fun out of the project and sucking the life out of the community by dividing it along lines such as “pronouns”, not technical work. The video above goes through some of the events that injected toxic politics into this technical project, causing scientists such as Preining to gradually lose interest, at least judging by the frequency of his posts in recent years.

Debian needs to regain stability, not by gagging people but by re-evaluating the way it treats dissent. Suppression of speech in the name of appeasing passive-aggressive bullies is always a bad strategy.

“I presume it is part of the sea change in the project that occurred with the TC takeover / intrigue which shoehorned 4th place choice, systemd, throughout the distro,” an associate of ours noted yesterday. “There have been many other scandals since then. There are two conflicting situations affecting all potential developers there and elsewhere. One is that volunteer project members want to focus on the code and not CoCs and other barriers to focusing on the code. The other is, as RMS points out, you can ignore the politics but the politics won’t ignore you. Those two facts cause problems where they collide.”
