
11.15.17

Benoît Battistelli and Elodie Bergot Have Just Ensured That EPO Will Get Even More Corrupt

Posted in Europe, Patents at 11:02 am by Dr. Roy Schestowitz

Examiners can come and go, with doors revolving ahead and behind them

Reference: Revolving door (politics)

Summary: Revolving door-type tactics will become more widespread at the EPO now that the management (Battistelli and his cronies) hires for low cost rather than skills/quality and minimises staff retention; this is yet another reason to dread anything like the UPC, which prioritises litigation over examination

EARLIER today we wrote about how the EPO moved on to corrupting academia, not just the media. And Bristows repeats the lies today. Team UPC loves it! Lies are OK to them as long as they’re good for their bottom line. A few hours ago EPO boosted LSE’s Antoine Dechezlepretre, who helped the EPO and Team UPC disseminate these lies.

We had very low expectations of Bristows (they would even fabricate/falsify things when it suits them), but now that the EPO bribes the media for puff pieces and does similar things with universities, we must pause and think about what the EPO has become. It’s a real threat to democracy, to society, and to science. The EPO is no longer harmful just to itself and to industry. It’s also toxic to academia, to media, and to the reputation of Europe.

Novagraaf’s Robert Balsters almost drank the Kool-Aid. Earlier today he perpetuated the myth that the UPC is just a matter of time (delay). It’ll never happen, though. Perhaps even he and his firm have come to realise this. “First Brexit,” he wrote, “then the UK and German general elections and now a case pending in the German Federal Constitutional Court have delayed the implementation date of the Unitary Patent and Unified Patent Court (UPC). Is this Europe-wide right ever going to come into being?”

Probably not. An additional issue (which he has not mentioned) is the EPO crisis, as well as the underlying points raised in the constitutional complaint. These sometimes overlap. One cannot discuss the UPC without relating it to the EPO crisis, which is a very profound crisis. It’s not just a “social” conflict but also a technical problem, which itself causes much of this “social” conflict.

As it turns out, many unskilled examiners are being hired on the cheap and much of the work is being passed to unproven algorithms which have, provably, not yielded good results. EPO insiders already speak about this (anonymously, for obvious reasons).

As alluded to earlier today, things are about to get worse in a couple of months or even 6-7 weeks. As an insider explained to me a few hours ago: “Ms Bergot announced her intention to have 100% (yes, you read right: ONE HUNDRED percent) of time-limited contracts for newcomers starting from 1st January 2018 !!! [] Many unanswered questions remain, such as – What will be the impact on the ability to recruit highly educated employees from all member states (knowing that the competition to attract the best ones is now global)? [] What will be the pressure on the new employees, for their performance? [] What will be the impact on the quality of their work? [] What will be the financial impact, particularly on the pension fund if there is a high turnover? [] How is the Office going to minimize the risk of corruption, which is precisely what permanent employment with advantageous working packages is supposed to prevent?”

As it turns out, WIPR is already covering this and has solicited comments from the patent microcosm (EIP, Bird & Bird etc.):

Benoît Battistelli, the president of the European Patent Office (EPO), has proposed an employment plan to recruit staff on renewable contracts of five years.

During a budget and finance committee meeting in October in Munich, Battistelli and Elodie Bergot, principal director of human resources, added a motion to discuss permanent employment at the EPO to the agenda document.

A spokesperson for the EPO said that the office is in a “unique situation” with 97% of its staff hired on a permanent basis.

[...]

David Brinck, partner at EIP, said: “The reputation of the EPO is founded on the quality and experience of its patent examiners.

“It must be a concern that moving to fixed-term contracts would hamper the recruitment of high-quality examiners and also potentially result in a high turnover of examiners with the position of EPO examiner eventually being seen as a CV filler rather than a worthy vocation in itself.”

Wouter Pors, partner at Bird & Bird, added that the EPO needs stability to be able to fill the current gaps.

“Given the uncertainty that employees have experienced in recent years, this is not the right moment to introduce flexibility. It is unlikely that highly educated professionals will give up their current employment for a future at the EPO which may be perceived as uncertain,” he cautioned.

Pors pretends to already know that Campinos will Fix Everything at the EPO. “Pors believes that perhaps in a year from now,” it says, “when the new management has restored trust among the EPO employees, a plan like this could be introduced successfully.”

He said the same about the UPC (always “real soon” or “next year”), but having failed even as a UPC propagandist [1, 2, 3], there’s little reason to trust anything he says. The entire plan is terrifying because it means that examination will be assigned to machines and to operators with insufficient background (to judge or assist these machines). This is all wrong and a hallmark of the EPO’s disregard for academia (except when the EPO can pay it for some propaganda).

Meanwhile, over at IP Kat, these ‘machines’ continue to be discussed, using buzzwords such as AI (which is vastly overrated and nothing new):

“AI renders knowledge workers redundant.”

We’ll see this headline a few more times over the coming years. But it’s not always realistic about the outcomes. Patent search is very amenable to moving many tasks to AI implementations. Done well, it should free examiners up to spend more time on examinations, if costs are to be kept at the current level, or to reduce costs for users, if examiner workload is to be kept constant. Both seem like positive outcomes.

“Curious that IPkat is interested in EPO again,” the next comment said. “Merpel should however know EPO examiners are not allowed to comment on the Office.”

Well, except anonymously.

The next comment asked whether “EPO management isn’t falling into an old pattern of seeking salvation in fashionable tech they fail to understand”. To quote:

In my humble experience, little separates AI from NI (Natural Idiocy). It’s true that there have been impressive improvements in some fields, like image recognition and automated translation, but even in those fields those improvements have only been achieved by force-feeding computers with phenomenal amounts of information, and I doubt that such an investment is yet economical or even possible in such a specialised field as patent searching. In any case, it still remains very much a matter of GIGO (garbage in – garbage out) and, AFAIK, in low tech fields like mechanics, the EPO databases are still quite corrupted by crappy OCR scans of old prior art (which can still be very much relevant in those fields: I’ve known cases where the killer prior art was over a century old).

I’m also slightly perplexed by the assumption that AI is put to better use in search than in examination: maybe AI would be less subject to hindsight bias than a human examiner when applying the Problem-Solution Approach, or determining whether something could be “directly and unambiguously derived” from a disclosure, don’t you think?

Anyway, before starting to wonder whether android examiners would dream of electric mousetraps, perhaps we should employ a dose of realism and ask ourselves whether EPO management isn’t falling into an old pattern of seeking salvation in fashionable tech they fail to understand, underestimating the challenges of implementing it, and spending valuable resources on external consultancies in exchange for underwhelming results…

On the EPO, said the next comment, certain practices “are liable to render the patent office not fit for purpose.”

It also said that “EPO management is falling into the trap of pressing ahead with unproven technology without fully appreciating the possible implications.”

Here is the full comment:

It seems that you and I are in agreement. If you re-read my comment, it is clear that I only placed “off limits” those practices that are liable to render the patent office not fit for purpose.

I am in no way suggesting that investing in new technologies or adopting new ways of working is a bad thing. Instead, what I am saying is that all modernisation / efficiency drives must not render the EPO no longer fit for purpose.

On this point, I am afraid that I share Kant’s concerns that “the purpose of semi-automatic search is to de-skill the task of patent searching so as to enable the highly skilled and experienced examiners to be replaced by unskilled workers on short term contracts”. I would be delighted to be proved wrong. However, even if there is no diabolical plot behind the reforms, I suspect that Glad to be out of the madhouse is correct to wonder whether the EPO management is falling into the trap of pressing ahead with unproven technology without fully appreciating the possible implications.

The next comment had to be posted twice because it did not get through the first time (due to length):

The old EPOQUE databases have long since reached their design limits.
They have improved on that, but some limits will remain.
ANSERA is based on modern database structure, which is much more flexible.
ANSERA was and is therefore necessary.

Re. search approach: EPOQUE with INTERNAL and XFULL is very classification oriented.
In ANSERA, classification symbol limitations often do not work, or even work faultily.

Re. question 7: this is a perception bias. Newcomers still do get training in EPOQUE. But….
- tools training for newcomers has been reduced from 7 weeks to 2 weeks, and encompasses more tools than before.
The correct teaching is left to the tutor, who has to meet high production targets and has NOT received training in how to train.
So newcomers get to lock themselves up and have to find something relevant.
Understanding the classification system isn’t easy, and it is prone to errors, leading to searches in irrelevant fields. If done correctly, your documents would all be highly relevant.
With ANSERA you get a Google effect: many seemingly relevant documents, but very hard to filter down to the most relevant ones, just like Google giving some 20 pages of relevant results. So you’d have to scroll through MANY more documents to see all the relevant ones, which ANSERA is rather unsuitable for, and transferring large amounts to the better document tool is still impractical.
So people look at the top 70 documents or so, according to some random evaluation algorithm.

And because finding loosely relevant documents is easy with ANSERA, non-engineers think it is the better tool. A real engineer gets frustrated, because getting the relevant documents out of the heap of close fields is impossible. But if you were told how to use classification symbol searches only once, within a haystack of other information in a new environment, retrieving that how-to when you need it is not something your brain will manage, so you pick some documents out of the large stack and miss THE relevant classification symbol completely.
And since Quality Nominees get only 2 hours to do this check, including understanding the application, evaluating whether the cited documents have been evaluated correctly (X/Y; technical features), checking clarity issues, and so on, no time remains to read up on the search strategy, or even do a quick and dirty search in the correct classification symbol. And without better documents, there is no reason to mark a low-quality search in the quality recording tool.

To recap: ANSERA itself is a wonderful tool, if the broken features get repaired.
They are making great progress.
But neither ANSERA itself nor EPOQUE is automated.

The usability of the automated pre-search (of which one element uses ANSERA) is extremely dependent on the field and the specific application you want to search. It made progress; then new versions were, for my field, a setback. Other fields improved.

Re question 5: I presume that at some point external users of EPOQUE/INTERNAL/XFULL will also get access to ANSERA. But such access is largely limited to patent offices, or to the cut-down version EspaceNet offers.

The latest comment says this:

Fair comments. My experiences are reflected in your analysis. The new system is a bit of a black box which gives me results – but are they the best? In areas where classification systems are necessary and searching means looking at a lot of similar documents, finding small details isn’t easy. Details of gear boxes which have gears, springs, fixed gears, internal gears etc. generate very general terms which may be closely located within almost every document. Finding the right conglomeration isn’t easy with a simple full-text search for terms.
A difficulty is that the Epoque system still gives very good results and probably better than Ansera – for the moment.

Having already seen IP Kat nuking all comments (about 40 of them), we do try to keep a record of them. A lot of these are coming from EPO insiders, who are otherwise difficult to hear from (they are afraid of getting caught by the regime of terror at Eponia).

Australia is Banning Software Patents and Shelston IP is Complaining as Usual

Posted in Australia, Patents at 10:01 am by Dr. Roy Schestowitz

Because Shelston IP does not care what actual software developers want

Happy Birthday Sydney Harbour Bridge

Summary: The Australian Productivity Commission, which defies copyright and patent bullies, is finally having policies put in place that better serve the interests of Australians, but the legal ‘industry’ is unhappy (as expected)

THE decision to more officially ban software patents in Australia is not news. We wrote quite a few articles about that earlier in the year. As Kluwer Patent Blog put it this morning: “The purpose of the Bill is to implement the Government’s response to the Productivity Commission’s recommendations on Australia’s IP Arrangements.”

The article neglects to say that Australia is cracking down on software patents (software is not being mentioned at all by the author) and instead says that “Australia’s Government introduces draft legislation to abolish innovation patents” (which sounds rather misleading, as can often be expected from sites such as Kluwer Patent Blog). To quote:

The Productivity Commission recommended that Australia abolish the innovation patents regime, the principal reasons being that such patents have a lower inventive step than that of a standard patent and inhibited rather than assisted innovation from small business enterprises. The Government has agreed with this conclusion, noting that neither small business enterprises nor the Australian community at large benefited from it.

Part 4 of the draft Bill contains amendments to commence the abolition of the innovation patent system by preventing the filing of new applications, subject to certain exceptions. For example, existing rights before the commencement of the abolishing Act will remain unaffected, including the right to file divisional applications and convert standard patent applications to innovation patent applications where the patent date and priority date for each claim are before the abolishing Act’s commencement date.

What’s even worse than this is a paid ‘article’ from Shelston IP’s Matthew Ward. It came out a few hours ago and, like their previous interventions (e.g. [1, 2, 3, 4, 5]), all we see here is Shelston IP still attacking patent sanity. To quote:

This is directly at odds with a recent resolution by the International Association for the Protection of Intellectual Property (AIPPI) favoring patent-eligibility of computer software inventions.

AIPPI is a lobby of greedy radicals and Australia has already conducted some surveys that revealed disdain for software patents. Why does Shelston IP insist on looking like an enemy of Australia’s software industry?

Patent Trial and Appeal Board (PTAB) Defended by Technology Giants, by Small Companies, by US Congress and by Judges, So Why Does USPTO Make It Less Accessible?

Posted in America, Law, Patents at 7:36 am by Dr. Roy Schestowitz

It’s not like the Patent Office desperately needs more money (there’s excess)

Reference: United States Patent and Trademark Office at Wikipedia

Summary: In spite of the popularity of PTAB and the growing need/demand for it, the US patent system is apparently determined to help it discriminate against poor petitioners (who probably need PTAB the most)

LAST week the US government dealt with a serious issue we had been writing about for a number of months. CCIA, as it turns out, submitted a letter to the House Judiciary Subcommittee On IP [sic] and yesterday wrote this post:

Yesterday, we submitted a letter for the record to the House Judiciary Committee Subcommittee On Courts, Intellectual Property and the Internet. This letter, written in response to testimony submitted for the Subcommittee’s hearing on Sovereign Immunity and IP, provides the details of our analysis of the patents which Josh Malone and Phil Johnson identified as showing a disagreement on validity between the PTAB and federal courts. In contrast to their allegation of 200 patents, the real figure is far lower. Of the 3,056 patents reviewed by the PTAB which were also at issue in litigation in federal district courts, there are 43 cases (just over 1%) in which the PTAB and a district court have disagreed with one another.

[...]

Conclusion

The data, when correctly understood, shows that the PTAB only rarely disagrees with the federal courts when both review the validity of the same patent. The data also shows that the two venues only rarely review the validity of the same patent. We believe the Subcommittee’s work will benefit from this understanding of the extreme infrequency with which the PTAB and a district court reach different conclusions.
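As a quick arithmetic check of the figures CCIA cites (43 disagreements out of 3,056 patents reviewed by both venues), a minimal sketch:

```python
# Sanity check of CCIA's "just over 1%" claim: of the 3,056 patents
# reviewed by both the PTAB and a district court, the two venues
# disagreed in 43 cases.
disagreements = 43
patents_reviewed_by_both = 3056

rate = disagreements / patents_reviewed_by_both
print(f"Disagreement rate: {rate:.2%}")  # roughly 1.41%
```

This matches the letter’s “just over 1%” characterisation.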

Additionally, William New wrote about this in “US Congress Members Signal Move To Block Allergan Patent Deal With Tribe” (the Mohawk tribe).

To quote the part which is not behind the paywall:

Members of a US congressional subcommittee on intellectual property held a hearing last week that appeared aimed at finding ways to stop companies from “renting” the sovereignty of Native American tribes in order to avoid a process that can lead to the invalidation of patents. Elected officials called a deal between Allergan pharmaceutical company and a northeastern tribe a “sham” and a “mockery”, and signalled the start of the legislative procedure to prevent such deals.

Notice the use of the words “sham” and “mockery”. There’s also “scam” — a popular term in various blogs and comments. A Federal judge called it a “sham”.

One might expect the USPTO to heed the warning and make PTAB even stronger, but instead, based on this post from Patently-O and another one from New’s publication, the USPTO reduces access to PTAB by means of fee hikes.

New wrote:

The United States Patent and Trademark Office (USPTO) today issued changes to some patent fees, including increases in certain areas, including the cost of using the inter partes review process. Following feedback from users, the office went with some proposed increases, while keeping others at existing levels despite proposals to increase them, it said.

In the interests of patent quality, the USPTO ought to make PTAB even more affordable, not less accessible (more expensive). Here is the full press release:

USPTO Finalizes Revised Patent Fee Schedule

WASHINGTON – The U.S. Department of Commerce’s United States Patent and Trademark Office (USPTO) today issued a final rule, “Setting and Adjusting Patent Fees during Fiscal Year 2017” to set or adjust certain patent fees, as authorized by the Leahy-Smith America Invents Act (AIA). The revised fee schedule is projected to recover the aggregate estimated cost of the USPTO’s patent operations, Patent Trial and Appeal Board (PTAB) operations, and administrative services. The additional fee collections will support the USPTO’s progress toward its strategic goals like pendency and backlog reduction, patent quality enhancements, technology modernization, staffing optimization, and financial sustainability.

In response to feedback from patent stakeholders, the USPTO altered several of the fee proposals presented in the Notice of Proposed Rule Making (NPRM). The key differences between the NPRM and the final rule are:

  • In response to stakeholder concerns, the USPTO reduced both plant and design issue fees from the levels proposed in the NPRM. Still, the large entity plant issue fee increases to $800 (+$40) and the large entity design issue fee increases to $700 (+$140). Plant and design patents do not pay maintenance fees, and the majority of plant and design applicants are eligible for small and micro entity fee reductions, which remain available.
  • Stakeholder feedback suggested that increased appeal fees could discourage patent holders’ access to increasingly important USPTO appeal services. In response, the USPTO elected to maintain the existing Notice of Appeal fee at $800 instead of increasing it to $1,000 as proposed in the NPRM. Likewise, the fee for Forwarding an Appeal to the Board increases to $2,240 (+$240) instead of $2,500 as proposed in the NPRM. The revised fees still do not fully recover costs, but taken together should allow continued progress on reducing the backlog of ex parte appeals.  Since the 2013 patent fee rulemaking, ex parte appeal fees have enabled the PTAB to hire more judges and greatly reduce the appeals backlog, from nearly 27,000 in 2012 to just over 13,000 at the end of FY 2017. Additional appeals fee revenue will support further backlog and pendency reductions.
  • Increases to the PTAB AIA trial fees are aimed at better aligning these fees with the USPTO’s costs and aiding the PTAB to continue to meet required AIA deadlines. The Office’s costs for Inter Partes Review requests are consistently outpacing the fees collected for this service.  These fee adjustments seek to more closely align fees and costs. Trial fees and associated costs still remain significantly less than court proceedings for most stakeholders.
    • Inter Partes Review Request Fee – up to 20 Claims increases to $15,500 (+$6,500)
    • Inter Partes Review Post-Institution Fee – Up to 15 Claims increases to $15,000 (+$1,000)

Other fee changes proposed in the NPRM remain the same.

For the full list of the patent fees that are changing and more information on fee setting and adjusting at the USPTO, please visit http://www.uspto.gov/about-us/performance-and-planning/fee-setting-and-adjusting.

PTAB is important, and the cost of a petition matters, especially to small companies which are being targeted by trolls and have limited budgets. PTAB defends them from patent trolls and software patents without their having to go through courts and appeals, which can add up to hundreds of thousands if not over a million dollars in fees (no matter the outcome).
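For a rough sense of scale, here is a back-of-the-envelope sketch using the revised PTAB fees quoted in the press release; the litigation range is an assumption based only on the article’s loose “hundreds of thousands if not over a million dollars” estimate, not an official figure:

```python
# Back-of-the-envelope comparison of PTAB IPR fees versus district court
# litigation. The IPR figures are the revised USPTO fees quoted above;
# the litigation range is this article's rough estimate, not official data.
IPR_REQUEST_FEE = 15_500            # IPR request, up to 20 claims
IPR_POST_INSTITUTION_FEE = 15_000   # IPR post-institution, up to 15 claims

ipr_fees_total = IPR_REQUEST_FEE + IPR_POST_INSTITUTION_FEE

litigation_low, litigation_high = 300_000, 1_000_000  # assumed range

print(f"Total IPR fees: ${ipr_fees_total:,}")  # $30,500
print(f"Litigation costs roughly {litigation_low // ipr_fees_total}x "
      f"to {litigation_high // ipr_fees_total}x the IPR fees")
```

Even after the fee hikes, the gap remains an order of magnitude, which is why accessibility of PTAB matters so much to small petitioners.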

IAM says that, according to Google’s Suzanne Michel, “from [a] tech perspective IPRs have been very effective at reducing a lot of litigation” (a direct quote from IAM, not from Michel herself). She is right.

United for Patent Reform also quotes a report/opinion piece (HTIA’s John Thorne) which we mentioned a week ago: “PTAB and IPR have provided a relatively inexpensive & rapid way for @uspto to take a second & impartial look at the work of examiners & strike down patents that should have never issued in the first place…”

Hence our stubborn defense of PTAB.

Yesterday, IAM noted or highlighted yet another case of PTAB being used to thwart dubious patents, even if the petitioner is a large company (PTAB bashers like to obsess over such points).

The world’s largest oil and gas company Saudi Aramco has filed an inter partes review (IPR) against a Korean petrochemical business in what is a highly unusual move by one of the energy majors.

The Saudi national oil giant, which produces 12.5 million barrels per day, has brought the IPR against SK Innovation, which started life as the Korea Oil Company before morphing into a broad-based energy and chemicals business. The patent in question, number 9,023,979, relates to a method of preparing epoxide/CO2 polycarbonates and was issued in 2015.

It’s not clear what has prompted the review – there is no ongoing patent litigation between the two companies, which might mean that it is related to licensing negotiations that have broken down and Saudi Aramco has brought the IPR in order to gain some leverage in the talks.

[...]

Halliburton is among the most active of these, with 36 IPRs including 32 this year, mostly against its rival Schlumberger. Baker Hughes meanwhile has been involved in 27 IPRs either as petitioner or patent owner.

This should not be mistaken for the Supreme Court case regarding Oil States, but it certainly seems similar in certain aspects.

Declines in Patent Quality at the EPO and ‘Independent’ Judges Can No Longer Say a Thing

Posted in Europe, Patents at 6:36 am by Dr. Roy Schestowitz

They do, however, complain about their loss of independence

A shocked Battistelli

Summary: The EPO’s troubling race to the bottom (of patent quality) concerns the staff examiners and the judges, but they cannot speak about it without facing rather severe consequences

THE EPO, wrongly and arrogantly assuming that the UPC will materialise, is already making judges inside the EPO redundant or subservient, even in defiance of the EPC. This is extremely serious, as it’s a removal of oversight.

Several years ago we said that the EPO had done that (in late 2014) in order to gag those who speak about patent quality and can do so without fear of retribution. “Quality [of patents at the EPO] has dropped drastically the last 3 or 4 years,” one person wrote yesterday*. Ever since then, for obvious reasons, we have seen no dissenting judges (except retired ones). They self-censor, just like staff representatives do (consciously or subconsciously). EPO insiders already know what it means for EPO management to send judges to Haar (a symbolic act) and then try to invite chairs to actually celebrate this. Thankfully, most chairs are snubbing and declining this invitation.

Deep inside, EPO staff representatives don’t really believe much will change when Battistelli leaves. They just give Campinos the benefit of the doubt and act diplomatically. To quote a key paragraph (and the only one which contains new information of any kind in this article):

A source close to SUEPO said that as Campinos did not reject the numerous points issued against the current administration of the EPO, he is clearly aware of the contentions. However, the source added that what Campinos will actually do is still up for discussion, “since his answer is rather (to say the least) very vague”.

SUEPO links to this, so apparently it agrees.

Benoît Battistelli has caused a notable decrease in the number of applications for EPs. He knows this, so right now he’s trying to cook/bake/fake the numbers by reducing costs — an old trick which this person fell for (and then got retweeted by the EPO).

Next time the EPO announces ‘results’ be sure to remember the decline in fees. It’s strategic. It’s designed to mask the decline in so-called ‘demand’. Regarding patent quality, one person said yesterday: “In practice there is no comparison of quality differences being made, as far as I know.”**
________
* With context added:

You’re a bit harsh. Of course it is not off-limits for EPO to consider changing their practices.

Or would you want them to still use index cards and miles of bound volumes of old applications?

There’s nothing wrong with improving efficiency. They can consider, and test, and evaluate all they want and only keep the good stuff.

I do agree that this should not reduce quality. And with the current EPO management that is indeed a worry. Quality has dropped drastically the last 3 or 4 years.

EPC Article 1: “A system of law, common to the Contracting States, for the grant of patents for invention is established by this Convention.”

For the grant of patents, not for their refusal! For the grant of patents, not necessarily for the grant of high-quality patents!

Quality should be assured by a constantly vigilant AC, alas, that is lacking.

** This comment alludes to the new system and the old system:

Back at Merpel’s questions…
1. Don’t know. And I’ve done the training…

Seriously, it is another technique and maybe it does work but it works in parallel with all my experience and doesn’t easily combine with it. It’s a bit like speaking Spanish for years and then one day being told that you would be better in Flemish. Why? Nobody really explains and you have no time to learn it. So you just ignore it. If you only learn Flemish and never learn Spanish (and they stop any Spanish classes), stats will always show Flemish is more popular except with the old fossils.

In practice there is no comparison of quality differences being made, as far as I know.

The EPO is Now Corrupting Academia, Wasting Stakeholders’ Money Lying to Stakeholders About the Unitary Patent (UPC)

Posted in Deception, Europe, Patents at 6:00 am by Dr. Roy Schestowitz

UPC boat

Summary: The Unified Patent Court/Unitary Patent (UPC) is a dying project and the EPO, seeing that it is going nowhere fast, has resorted to new tactics and these tactics cost a lot of money (at the expense of those who are being lied to)

NOT a day goes by without some EPO scandal (large or small). It’s like watching the ‘action’ on the deck of the Titanic while worrying for the fate of helpless passengers aboard.

“Those so-called ‘studies’ published by the EPO are mendacious speak, totally worthless…”
      –Anonymous
Yesterday, as noted by Benjamin Henrion (FFII), the EPO wrote: “New report finds that the #UnitaryPatent could significantly enhance technology transfer in the EU. Other findings here…”

I’ve asked them: “New report or new lies?”

Henrion responded by saying it would rather “enhance ‘patent litigation’ in the EU”.

Because it has nothing to do with “technology transfer”, whatever exactly that means (it usually gets used as a euphemism for “licensing”, amicable or coerced).

“they actually do the opposite by suppressing permanent employment for new recruits from 1st January 2018 at the EPO. The impact on patent quality will be huge. But still, Battistelli prefers burning money by producing pro-UPC lies!”
      –Anonymous
Funnily enough, the EPO has once again linked to localhost:8080 in its official news feed (RSS) — an issue which they only fixed later in the day, several days too late. Are any competent workers left at the EPO? They appear to have misconfigured their software. Did some key IT staff leave? Either way, the ‘news’ at hand (warning: epo.org link) says the report was “carried out by a team of economists from the EPO, the University of Colorado Boulder and the London School of Economics…”

They did that for a fee, or with direct support from the EPO. The chief economist of the EPO seems like an old French mate of Battistelli and we have repeatedly caught him lying about the UPC.

What we see here is the EPO basically wasting a lot of money. It’s paying to produce pro-UPC lies and frame these as scholarly. Stakeholders’ money well spent? Of course not! And worse — it corrupts academia just like the EPO corrupts European media.

“Those so-called ‘studies’ published by the EPO are mendacious speak, totally worthless,” one insider wrote. The “EPO should invest in patent quality,” s/he continued, “but they actually do the opposite by suppressing permanent employment for new recruits from 1st January 2018 at the EPO. The impact on patent quality will be huge. But still, Battistelli prefers burning money by producing pro-UPC lies!”

“What we see here is the EPO basically wasting a lot of money.”

Shame on WIPR for being a mouthpiece for the EPO on this. As far as we can tell, it’s the only publication (so far) that amplified the above. Knowing a little about WIPR’s internal affairs, I am not at all surprised by this. The publisher actively tried to suppress, if not spike, some articles that revealed the ugly truths about the EPO.

See the comments on the corresponding tweet. EPO insiders have beaten me to it, seeing that WIPR is even quoting Benoît Battistelli directly from the press release (it can be found here). It’s almost as though journalism is dead and investigation/fact-checking is actively discouraged. There is money (income) in lying for the EPO now that Battistelli feeds some ‘parrots’ from his palm. He does not use his own money but the EPO’s budget. He scatters it to the wind while lowering everyone’s salary but his own (and his cronies’). If Eponia were really a country, Battistelli would be accused of treason.

“If Eponia were really a country, Battistelli would be accused of treason.”

Yesterday the EPO also wrote: “Negotiation is the preferred way to solve potential infringement issues; litigation is regarded as a last resort.”

Not if the UPC ever got its way and brought patent trolls to Europe. It would be totally chaotic, and chaos of that kind is something litigators earn a lot of money from (at technologists’ expense).

Links 15/11/2017: Fedora 27 Released, Linux Mint Has New Betas

Posted in News Roundup at 12:21 am by Dr. Roy Schestowitz

GNOME bluefish

Contents

GNU/Linux

  • Munich has putsch against Linux [Ed: does not quote any of the other side's arguments; Microsoft played dirty to cause this. It has been well documented.]

    Once the open sauce poster boy, Munich city council’s administrative and personnel committee has decided to purge Linux from its desktops and invite Windows 10 to return by 2020.

    [...]

    She said the cost of the migration will not be made public until November 23, but today about 40 percent of 30,000 users already have Windows machines.

  • My Adventure Migrating Back To Windows

    I have had Linux as my primary OS for about a decade now, and primarily use Ubuntu. But with the latest release I have decided to migrate back to an OS I generally dislike, Windows 10.

  • Top 10 Linux Tools

    One of the benefits to using Linux on the desktop is that there’s no shortage of tools available for it. To further illustrate this point, I’m going to share what I consider to be the top 10 Linux tools.

    This collection of Linux tools helps us in two distinct ways. It shows newer users that there are tools to do just about anything on Linux. It also reminds those of us who have used Linux for a number of years that the tools for just about any task are indeed available.

  • Desktop

    • Take Linux and Run With It

      “How do you run an operating system?” may seem like a simple question, since most of us are accustomed to turning on our computers and seeing our system spin up. However, this common model is only one way of running an operating system. As one of Linux’s greatest strengths is versatility, Linux offers the most methods and environments for running it.

      To unleash the full power of Linux, and maybe even find a use for it you hadn’t thought of, consider some less conventional ways of running it — specifically, ones that don’t even require installation on a computer’s hard drive.

    • Samsung ditches Windows, shows Linux running on Galaxy Note 8 over DeX

      Samsung is now planning to deliver a full-fledged operating system over Samsung DeX with Linux, instead of Windows. While Samsung’s DeX was initially supposed to run the Windows 10 desktop in a virtual environment, the company is now leaning on Linux to offer a desktop experience.

    • Samsung demos Linux running on a Galaxy Note8 smartphone

      It has been known for some time that Samsung has been experimenting with the idea of running Linux distributions through its DeX platform on its Galaxy smartphones. The idea is quite simple: to let users put their device to multiple uses, one of them being a replacement for the traditional desktop.

    • Samsung Demonstrates Ubuntu 16 Running Natively On DeX

      Samsung Electronics is entertaining the idea of bringing the full-fledged Linux operating system to the Samsung DeX platform, and these efforts were highlighted in a recent concept demo video published on YouTube by Samsung Newsroom, showcasing Samsung DeX running the Ubuntu 16 Linux distribution. Assuming that this feature will be implemented, it may place the DeX docking station on the radars of more potential customers as the product could grow in popularity especially amongst Linux users.

    • Dell Rolling Out More Developer-Focused Systems Preloaded With Ubuntu

      Canonical has announced that Dell is rolling out five new systems pre-installed with Ubuntu Linux. These systems cater to developers and range from all-in-one computers to new laptop models.

      Canonical just posted about five new Dell systems with Ubuntu pre-installed. Details are light, as the Dell.com web site still lists these devices with Windows 10 on some pages and does not yet mention the new models in its general Linux areas.

    • New Dell Precision Machines Available With Ubuntu Pre-Installed

      We are excited to announce the availability of 5 new Dell Precision computers that come pre-installed with Ubuntu. These are systems developed by and for developers, and are available in form factors ranging from sleek ultrabooks to powerful workstations. Here’s a quick run-through of the latest offerings!

  • Server

    • Linux Now Powers 100% of the World’s Top 500 Supercomputers

      Linux now powers 100% of the world’s 500 fastest supercomputers. That’s according to the latest stats out from supercomputer hawks TOP500, who post a biannual list of the world’s most powerful commercially available computer systems. Linux has long dominated the TOP500 list, powering the majority of the machines that make it.

    • Linux Now Powers ALL TOP500 Supercomputers In The World | TOP500 List 2017
    • China Now Has More Supercomputers Than Any Other Country

      China now has more of the world’s most powerful computer systems than any other country, replacing the U.S. as the dominant nation on the list of the planet’s 500 fastest supercomputers.

    • China Overtakes US in Latest Top 500 Supercomputer List

      China now claims 202 systems within the Top 500, while the United States — once the dominant player — tumbles to second place with 143 systems represented on the list.

      Only a few months ago, the US had 169 systems within the Top 500 compared to China’s 160.

    • IT disaster recovery: Sysadmins vs. natural disasters

      In terms of natural disasters, 2017 has been one heck of a year. Hurricanes Harvey, Irma, and Maria brought destruction to Houston, Puerto Rico, Florida, and the Caribbean. On top of that, wildfires burned out homes and businesses in the West.

      It’d be easy to respond with yet another finger-wagging article about preparing for disasters—and surely it’s all good advice—but that doesn’t help a network administrator cope with the soggy mess. Most of those well-meant suggestions also assume that the powers that be are cheerfully willing to invest money in implementing them.

    • Linux totally dominates supercomputers

      Linux rules supercomputing. This day has been coming since 1998, when Linux first appeared on the TOP500 Supercomputer list. Today, it finally happened: All 500 of the world’s fastest supercomputers are running Linux.

      The last two non-Linux systems, a pair of Chinese IBM POWER computers running AIX, dropped off the November 2017 TOP500 Supercomputer list.

      Overall, China now leads the supercomputing race with 202 computers to the US’ 144. China also leads the US in aggregate performance. China’s supercomputers represent 35.4 percent of the Top500′s flops, while the US trails with 29.6 percent. With an anti-science regime in charge of the government, America will only continue to see its technological lead decline.
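As a quick sanity check on the figures quoted above, the system-count shares are simple arithmetic, and they differ from the aggregate-flops shares that TOP500 reports separately (machines vary enormously in performance, so the two shares need not match):

```python
# Figures from the November 2017 TOP500 list as quoted above:
# 202 Chinese systems and 144 US systems out of 500 total.
china_systems, us_systems, total = 202, 144, 500

china_count_share = 100 * china_systems / total   # share by machine count
us_count_share = 100 * us_systems / total

# The article's 35.4% / 29.6% figures are shares of aggregate flops,
# a separate metric reported by TOP500 itself.
print(f"China: {china_count_share:.1f}% of systems, 35.4% of flops")
print(f"US:    {us_count_share:.1f}% of systems, 29.6% of flops")
```

So China leads on both metrics, by count (40.4% vs. 28.8%) and by aggregate performance.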

  • Kernel Space

    • XFS For Linux 4.15 Brings “Great Scads of New Stuff”
    • RISC-V Hopes To Get In Linux 4.15, OpenRISC Adds SMP Support
    • ACPI & Power Management Updates For Linux 4.15
    • Linux 4.15 Is Off To A Busy Start
    • AMD Stoney Ridge Audio Supported By Linux 4.15

      The sound driver changes have been submitted for the Linux 4.15 kernel and include, at long last, support for AMD Stoney Ridge hardware.

      Takashi Iwai of SUSE today sent in the sound updates for the Linux 4.15 kernel merge window. The noteworthy mentions are a new AC97 bus implementation and AMD Stoney platform support. There was also some hardening work on the USB audio drivers, cleanups to the Intel ASoC platform code, and a variety of other low-level changes.

    • Linux Foundation

      • ​Kubernetes vendors agree on standardization

        Everyone and their uncle has decided to use Kubernetes for cloud container management. Even Kubernetes’ former rivals, Docker Swarm and Mesosphere, have thrown in the towel. Mesosphere came over in early October and Docker added Kubernetes support later the same month. There was only one question: Would all these Kubernetes implementations work together? Thanks to the Cloud Native Computing Foundation (CNCF), the answer is yes.

    • Graphics Stack

      • Marek Has Been Taking To AMDGPU LLVM Optimizations

        Well-known AMD open-source driver developer Marek Olšák has been ruthlessly optimizing the Radeon Mesa driver stack for years. With RadeonSI getting fine-tuned, already largely outperforming the AMDGPU-PRO OpenGL driver, and most of the big-ticket improvements complete, it appears his latest focus is on further optimizing the AMDGPU LLVM compiler back-end.

        This AMDGPU LLVM compiler back-end is what’s used by RadeonSI but is also leveraged by the RADV Vulkan driver, among other potential use-cases. Lately Marek has been filing patches for optimizing the instructions generated during the shader compilation process.

      • FFmpeg Expands Its NVDEC CUDA-Accelerated Video Decoding

        A few days back I wrote about FFmpeg picking up NVDEC-accelerated H.264 video decoding and since then more FFmpeg improvements have landed.

        As mentioned in the earlier article, NVDEC is the newer NVIDIA video decoding interface: it succeeds the Linux-specific VDPAU in favor of the cross-platform, CUDA-based NVIDIA Video Codec SDK. There’s also NVENC on the video encode side, while the recent FFmpeg work has been focused on NVDEC GPU-based video decoding.

      • Intel Batch Buffer Logger Updated For Mesa

        Intel’s Kevin Rogovin has been working on a “BatchBuffer Logger” for the Intel graphics driver that offers some useful possibilities for assisting in debugging/analyzing problems or performance penalties facing game/application developers.

        The BatchBuffer Logger is designed to allow correlating API calls to data that in turn is added to a batch buffer for execution by the Intel graphics processor. The logger additionally keeps precise track of the GPU state and can report various metrics associated with each API call.

      • OpenGL 4.2 Support Could Soon Land For AMD Cayman GPUs On R600g

        David Airlie is looking to land OpenGL image support in the R600 Gallium3D driver that would be enabled for Radeon HD 5000 “Evergreen” GPUs and newer. For the HD 6900 “Cayman” GPUs, this would be the last step taking it to exposing OpenGL 4.2 compliance.

      • mesa 17.3.0-rc4

        The fourth release candidate for Mesa 17.3.0 is now available.

        As per the issue tracker [1] we still have a number of outstanding bugs blocking the release.

      • Mesa 17.3-RC4 Released, Handful Of Blocker Bugs Still Left

        Emil Velikov of Collabora has just announced the fourth weekly release candidate of the upcoming Mesa 17.3.

        The development cycle for 17.3 is going into overtime with no 17.3.0 stable release yet ready due to open blocker bugs. As of this morning there are still eight open blocker bugs against the 17.3 release tracker. The open issues involve Intel GPU hangs with Counter-Strike: Global Offensive and DiRT Rally, some Intel OpenGL/Vulkan test case failures, a performance regression for i965, and some other Intel issues.

      • VESA Pushes Out DisplayID 2.0 As The Successor To EDID For Monitors & Electronics

        DisplayID 2.0 is now official as the VESA standard to succeed the long-used Extended Display Identification Data (EDID) format used by TVs, monitors, and other consumer electronics.

        DisplayID 2.0 is designed to fill the needs of modern hardware with 4K+ resolutions, High Dynamic Range, Adaptive-Sync, AR/VR, and other use-cases not conceived when EDID first premiered in the ’90s as part of the DDC standard. Compared to EDID and E-EDID, DisplayID switches to a variable-length data structure and makes other fundamental design changes relative to these older identification standards.
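To make the contrast concrete: a base EDID block is a rigid 128-byte structure with a well-known 8-byte header and a trailing checksum byte, which is exactly the fixed-size design DisplayID's variable-length blocks move away from. A minimal sketch of that sanity check, using a synthetic block rather than a real monitor's data:

```python
def looks_like_edid(block: bytes) -> bool:
    """Sanity-check a base EDID block: fixed 128-byte length, the
    well-known 8-byte header, and a checksum byte chosen so the
    whole block sums to 0 modulo 256."""
    header = b"\x00\xff\xff\xff\xff\xff\xff\x00"
    return (len(block) == 128
            and block.startswith(header)
            and sum(block) % 256 == 0)

# A synthetic block: header, zero padding, checksum byte to balance.
blk = bytearray(128)
blk[0:8] = b"\x00\xff\xff\xff\xff\xff\xff\x00"
blk[127] = (256 - sum(blk[:127]) % 256) % 256
print(looks_like_edid(bytes(blk)))  # True
```

DisplayID 2.0 drops this one-size-fits-all layout in favor of typed, variable-length data blocks, so a parser walks a list of structures instead of assuming a fixed 128-byte record.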

      • Stereoscopy/3D Protocol Being Worked On For Wayland

        Collabora consultant Emmanuel Gil Peyrot has sent out a series of patches proposing a new (unstable) protocol for Wayland for dealing with stereoscopic layouts for 3D TV support; it could also be used in the future for VR HMDs, etc.

      • RADV Will Now Enable “Sisched” For The Talos Principle, Boosting Frame Rates

        The RADV Mesa Radeon Vulkan driver will now enable the sisched optimization automatically when running The Talos Principle in order to boost performance.

  • Applications

  • Desktop Environments/WMs

    • K Desktop Environment/KDE SC/Qt

      • Announcing KTechLab 0.40.0

        KTechLab, the IDE for microcontrollers and electronics, has reached a new milestone: its latest release, 0.40.0, does not depend on KDE3 and Qt3, but on KDE4 and Qt4. This means that KTechLab can be compiled and run on current operating systems.

        In the new, KDE4- and Qt4-based release, practically all features of the previous version are kept. Circuits, including PIC microcontrollers, can be simulated; the programs running on PICs can be edited in C, in ASM format, or graphically using Flowcode; and these programs can be easily prepared for programming real PICs. The only feature which has been removed is DCOP integration, which is not available in KDE4 and should be replaced with D-Bus integration.

      • KTechLab Microcontroller/Electronics IDE Ported To KDE4/Qt4
      • Qt WebGL: Cinematic Experience

        Following the previous blog posts related to the Qt WebGL plug-in development updates, we have some features to show.

      • KDevelop 5.2 released

        A little more than half a year after the release of KDevelop 5.1, we are happy to announce the availability of KDevelop 5.2 today. Below is a summary of the significant changes — you can find some additional information in the beta announcement.

        We plan to do a 5.2.1 stabilization release soon, should any major issues show up.

      • KDevelop 5.2 Released With New Analyzers, Better C++ / PHP / Python Support

        KDevelop 5.2 is now available as the newest feature release for this KDE-focused, multi-language integrated development environment.

        Building off the new “Analyzers” menu of KDevelop 5.1, the 5.2 release adds a Heaptrack analyzer for heap memory profiling of C/C++ applications and also integrates cppcheck for static analyzing of C++ code-bases.

      • meg@akademy2017

        It’s been a while since my last post over here. After being drained with a lot of work on the very first edition of QtCon Brasil, we all had to take some rest to recharge our batteries and get ready for some new crazinesses.

        This post is a short summary of the talk I presented at Akademy 2017, in the quite sunny Almería in Spain. Akademy is always a fascinating experience and it’s actually like being at home, meeting old friends and getting recurrently astonished by all awesomeness coming out of KDE community :).

        My talk was about architecting Qt mobile applications (slides here | video here). The talk started with a brief report on our Qt mobile development experiences at IFBa in the last two years and then I explained how we’ve been using lean QML-based architectures and code generators to leverage the productivity and provide flexible and reusable solutions for Qt mobile applications.

    • GNOME Desktop/GTK

      • Igalia is Hiring

        Igalia is hiring web browser developers. If you think you’re a good candidate for one of these jobs, you’ll want to fill out the online application accompanying one of the postings. We’d love to hear from you.

        We’re especially interested in hiring a browser graphics developer. We realize that not many graphics experts also have experience in web browser development, so it’s OK if you haven’t worked with web browsers before. Low-level Linux graphics experience is the more important qualification for this role.

  • Distributions

    • Reviews

      • Antergos 17.11 – the Antagonist

        Antergos shares the same roots as Manjaro. Both of these distributions are in the Top 5 of the DistroWatch list. However, my impressions of these operating systems are very different.
        I liked Manjaro very much, and I was disappointed by Antergos.

        To a certain extent, the disappointment was due to the GNOME 3 desktop environment being used by default. I still dislike it, and it goes against my workflow. But there are some very Antergos-specific “features” that made me frown. Just to name a few: the absence of office software in the default distribution, problems with software installation, and huge memory usage.

        Manjaro and Antergos. Such close brothers, so much difference.

    • New Releases

      • KaOS 2017.11

        Just days after Plasma 5.11.3, KDE Applications 17.08.3 and Frameworks 5.40.0 were announced, you can already see them in this new release. Highlights of Plasma 5.11.3 include making sure passwords are stored for all users when KWallet is disabled, syncing the XWayland DPI font to the Wayland DPI, notifications that optionally store missed and expired notifications in a history, the new Plasma Vault offering strong encryption features presented in a user-friendly way, unified window-title logic between X and Wayland windows, and a default X font DPI of 96 on Wayland. All built on Qt 5.9.2.

        This release introduces Elisa as the default music player; KaOS users chose this option in a recent poll. It took a few years, but the JuK music player is finally ported to KF5 and thus available again in the KaOS repositories.

    • OpenSUSE/SUSE

    • Red Hat Family

    • Debian Family

      • Derivatives

        • Canonical/Ubuntu

          • Ubuntu 17.10 Radeon Performance: Stock vs. X-Swat Updates vs. Oibaf PPA vs. Pkppa vs. Padoka PPA

            There are several Launchpad PPA options for Ubuntu users wanting to update their Mesa-based drivers. For those curious about the state of these different third-party repositories, here are a few words on them and benchmarks.

          • Ubuntu 17.10 Review – For The Record

            So who is the target user base for Ubuntu 17.10? As much as I’d like to say newbies, I simply can’t do that. The help tool is very newbie friendly and would do well to have a variation on other GNOME-based distros. But GNOME 3 itself, even with Ubuntu development tweaks, is simply not going to win over someone used to a traditional menu layout.

            That said, I can say that while I still dislike the handling of GNOME extensions, indicators and other desktop elements, Ubuntu 17.10 is lightning fast, stable and has the basics in place to get the job done for most people used to a Linux desktop.

          • Flavours and Variants

            • Linux Mint 18.3 beta due for release this week

              The final release of the Linux Mint 18 series, Linux Mint 18.3, is due to see its beta release sometime this week. The final release will follow a week or so after the beta. Ever since July, we’ve been tracking the changes that are due for Mint 18.3 “Sylvia”; however, the team behind the distribution has announced several last-minute changes, so it’s worth going over those now.

            • Linux Mint 18.3 “Sylvia” Cinnamon & MATE Beta Officially Out, Here’s What’s New

              Based on Ubuntu 16.04 LTS (Xenial Xerus) and running the Linux 4.10 kernel, Linux Mint 18.3 continues the long-term support (LTS) of the Linux Mint 18 series, which will receive updates and security patches until 2021. Both the Cinnamon and MATE editions have been released today with updated software and many new features.

              The Linux Mint 18.3 Cinnamon Beta edition features the latest Cinnamon 3.6 desktop environment, which comes with support for GNOME Online Accounts, libinput support as a replacement for the Synaptics touchpad driver, a much-improved on-screen keyboard, as well as a revamped configurator for Cinnamon spices.

            • Linux Mint 18.3 “Sylvia” MATE – BETA Release

              Linux Mint 18.3 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

            • Linux Mint 18.3 “Sylvia” Cinnamon – BETA Release

              Linux Mint 18.3 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

  • Devices/Embedded

Free Software/Open Source

  • How becoming open and agile led to customer success

    A few years ago, I worked as a service manager at Basefarm, a European managed services provider. I was part of a team supporting customers with infrastructure and managed services.

    One of our customers was TV4, the largest commercial TV company in Sweden. As part of our agreement, the four engineers on our team would dedicate 400 hours per month to TV4. The client expressed a simple but irritating problem: they always seemed to be waiting for us to implement the changes they wanted.

  • Juniper Builds Turn-Key Telco Cloud with Contrail, Red Hat OpenStack

    Tier 1 service providers, including AT&T, are already using Juniper Networks’ Contrail Networking in their telco clouds. Based on its experience with these operators, the vendor is now offering a turnkey telco cloud system based on its Contrail software-defined networking (SDN) and built on Red Hat’s OpenStack distribution.

    “We realized that what service providers need is a turnkey solution that takes best-of-breed products and takes an easy path to build a telco cloud,” said Pratik Roychowdhury, senior director of product management for Contrail at Juniper.

  • ETSI Open Source MANO announces Release THREE

    ETSI Open Source MANO group (ETSI OSM) announces the general availability of OSM Release THREE, keeping the pace of a release every 6 months. This release includes a large set of new capabilities as well as numerous enhancements in terms of scalability, performance, resiliency, security and user experience that facilitate its adoption in production environments.

  • ETSI debuts Release Three of Open Source MANO

    ETSI Open Source MANO has made OSM Release THREE generally available, illustrating the organization’s efforts to put out a new release every six months to help service providers and businesses with their NFV orchestration transitions.

    Featuring a new role-based access control, OSM Release THREE enables users from different service providers to access the OSM system with the appropriate set of privileges. It facilitates the adoption of complex operation workflows without compromising the security of the network or its operations.

  • Web Browsers

    • Chrome

      • Google: Chrome is backing away from public key pinning, and here’s why

        Google has announced plans to deprecate Chrome support for HTTP public key pinning (HPKP), an IETF standard that Google engineers wrote to improve web security but now consider harmful.

        HPKP, as described in RFC 7469, was designed to reduce the risk of a compromised Certificate Authority misissuing digital certificates for a site, which would allow an attacker to perform a man-in-the-middle attack on encrypted Transport Layer Security (TLS) connections.
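For context on what is being deprecated: an HPKP pin is just the base64 encoding of a SHA-256 digest of the certificate's DER-encoded SubjectPublicKeyInfo, delivered in a `Public-Key-Pins` response header (per RFC 7469). A minimal sketch, using placeholder key bytes rather than a real certificate:

```python
import base64
import hashlib

def hpkp_pin(spki_der: bytes) -> str:
    # RFC 7469 pin-sha256: base64(SHA-256(DER-encoded SubjectPublicKeyInfo))
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder bytes standing in for a real key's SubjectPublicKeyInfo:
spki = b"\x30\x82\x01\x22" + b"\x00" * 32
pin = hpkp_pin(spki)

# The response header a pinning site would have sent (backup pin omitted;
# RFC 7469 requires at least one backup pin in practice):
header = f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000'
print(header)
```

The danger Google cites follows directly from this design: once a browser caches a pin for `max-age` seconds, losing access to the pinned key (or being tricked into pinning an attacker's key) locks legitimate visitors out of the site with no easy recovery.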

    • Mozilla

      • Fast. For good. Launching the new Firefox into the World

        Thirteen years ago, we marked the launch of Firefox 1.0 with a crowdfunded New York Times ad. It listed the names of every single person who contributed — hundreds of people. And it opened a lot of eyes. Why? It showed what committed individuals willing to put their actions and dollars behind a cause they believe in can make happen. In this case, it was launching Firefox, a web browser brought to market by Mozilla, the not-for-profit organization committed to making the internet open and accessible to everyone. And Firefox represented more than just a new and improved browser. It stood for an independent alternative to the corporately controlled Internet Explorer from Microsoft, and a way for people to take back control of their online experience.

      • Introducing the New Firefox: Firefox Quantum

        It’s by far the biggest update we’ve had since we launched Firefox 1.0 in 2004; it’s just flat-out better in every way. If you go and install it right now, you’ll immediately notice the difference, accompanied by a feeling of mild euphoria. If you’re curious about what we did, read on.

      • Firefox’s faster, slicker, slimmer Quantum edition now out

        Mozilla is working on a major overhaul of its Firefox browser, and, with the general release of Firefox 57 today, has reached a major milestone. The version of the browser coming out today has a sleek new interface and, under the hood, major performance enhancements, with Mozilla claiming that it’s as much as twice as fast as it was a year ago. Not only should it be faster to load and render pages, but its user interface should remain quick and responsive even under heavy load with hundreds of tabs.

      • Firefox 57 “Quantum” Is Here, And It’s Awesome

        Firefox 57 is here. It introduces a new look, sees legacy add-ons dropped, and gives the core rendering engine a big old speed boost.

      • Firefox Features Google as Default Search Provider in the U.S., Canada, Hong Kong and Taiwan

        Firefox Quantum was released today. It’s the fastest Firefox yet, built on a completely overhauled engine and with a beautiful new design. As part of our focus on user experience and performance in Firefox Quantum, Google will also become our new default search provider in the United States, Canada, Hong Kong and Taiwan.

        Firefox default search providers in other regions are Yandex in Russia, Turkey, Belarus and Kazakhstan; Baidu in China; and Google in the rest of the world. Firefox offers more choice in search providers than any other browser, with more than 60 pre-installed across more than 90 languages.

      • Firefox 57 Takes Quantum Leap Forward in Speed and Looks
      • Firefox Quantum 57 Is Here To Kill Google Chrome: Download For Windows, Mac, Linux
  • SaaS/Back End

  • CMS

    • Q&A: New CEO bets on open source future for Acquia CMS

      There are a lot of reasons. First of all, there’s a very good fit with Mike. That’s not just a good fit between him and me, but also with our culture and personality and with how we think about different things, like the importance of cloud and open source. I also felt Mike was really well-prepared to lead our business. Mike has 25 years [of] experience with software as a service, enterprise content management and content governance. Mike has worked with small companies, as well as larger companies.

      At HP Enterprise and Micro Focus [acquired by HPE], Mike was responsible for managing more than 30 SaaS products. Acquia is evolving its product strategy to go beyond Drupal and the cloud to become a multiproduct company with Acquia Digital Asset Manager and Acquia Journey. So, our own transformation as a company is going from a single-product company to a multiproduct company. Mike is uniquely qualified to help us with that, based on his experience.

  • Pseudo-Open Source (Openwashing)

    • Open Yet Closed

      In the early days of Free Software, it was a safe assumption that anyone using a computer had coding skills of some sort — even if only for shell scripts. As a consequence, many advocates of Free Software, despite a strong focus on user freedoms, had a high tolerance for software that made source available under free terms without providing binaries.

      That was considered undesirable, but as long as the source code could be used it was not disqualifying. Many other ways evolved to ensure that the software was somehow impractical to deploy without a commercial relationship with a particular vendor, even if the letter of the rules around Free Software was met.

      This tolerance for “open but closed” models continued into the new Open Source movement. As long as code was being liberated under open source licenses, many felt the greater good was being served despite obstacles erected in service of business models.

      But times have changed. Random code liberation is still desirable, but the source of the greatest value to the greatest number is the collaboration and collective innovation open source unlocks. While abstract “open” was tolerated in the 20th century, only “open for collaboration” satisfies the open source communities of the 21st century. Be it “open core”, “scareware”, “delayed open”, “source only for clients”, “patent royalties required” or one of the many other games entrepreneurs play, meeting the letter of the OSD or FSD without actually allowing collaboration is now deprecated.

  • BSD

  • Public Services/Government

    • The Pentagon is set to make a big push toward open source software next year

      Nestled hundreds of pages into the proposed bill to fund the Department of Defense sits a small, unassuming section. The National Defense Authorization Act for Fiscal Year 2018 is the engine that powers the Pentagon, turning legislative will into tangible cash for whatever Congress can fit inside. Thanks to an amendment introduced by Sen. Mike Rounds (R-SD) and co-sponsored by Sen. Elizabeth Warren (D-MA), this year the NDAA could institute a big change: should the bill pass in its present form, the Pentagon will be going open source.

      “Open source” is the industry term for using publicly accessible code, published for all to see and read. It’s contrasted with “closed source” or “proprietary” code, which a company guards closely as a trade secret. Open source, by its nature, is a shared tool, much more like Creative Commons than copyright. One big advantage is that, often, the agreements to run open-source software are much more relaxed than those behind proprietary code, and come without licensing fees. The license to run a copy of Adobe Photoshop for a year is $348; the similar open-source GNU Image Manipulation Program is free.

  • Licensing/Legal

    • Should we still doubt the legality of Copyleft?

      The concept of Copyleft emerged from the libertarian activism of the free software movement, which brought together programmers from all over the world, in the context of the explosion of new technologies, the Internet, and the spread of intangible property.

      Copyleft is a concept invented by Don Hopkins and popularized by Richard Stallman in the 1980s with the GNU project, whose main objective was to promote the free sharing of ideas and information and to encourage inventiveness.

  • Openness/Sharing/Collaboration

    • Open Hardware/Modding

      • This Arduino-Powered “Time Machine” Glove Freezes Things Like A Boss

        Did you ever think about stopping things just by waving your hand? Well, probably, many times after getting some Hollywood adrenaline.

        A YouTuber named MadGyver might have thought the same more often than most of us. So, as a part of his new hack, he turned his gym glove into an Arduino-controlled time stopping glove that makes things ‘appear’ to come to a halt within a fraction of a second.

  • Programming/Development

    • Happy 60th birthday, Fortran

      The Fortran compiler, introduced in April 1957, was the first optimizing compiler, and it paved the way for many technical computing applications over the years. What Cobol did for business computing, Fortran did for scientific computing.

      Fortran may be approaching retirement age, but that doesn’t mean it’s about to stop working. This year marks the 60th anniversary of the first Fortran (then styled “FORTRAN,” for “FORmula TRANslation”) release.

      Even if you can’t write a single line of it, you use Fortran every day: Operational weather forecast models are still largely written in Fortran, for example. Its focus on mathematical performance makes Fortran a common language in many high-performance computing applications, including computational fluid dynamics and computational chemistry. Although Fortran may not have the same popular appeal as newer languages, those languages owe much to the pioneering work of the Fortran development team.

    • Google Contest Exposes Students to Open Source Coding

      Google is opening its eighth annual Code-in on Nov. 28. The challenge calls on pre-university students aged 13 to 17 to complete coding tasks on open source projects, with the aim of exposing teenagers to open source software development.

      To date, some 4,500 students have participated in the GCI contest, completing more than 23,000 tasks. For this year’s Code-in, 25 organizations are providing mentoring for participants, including Ubuntu, Drupal, Wikimedia and JBoss. Projects range from machine translation to games to medical records systems.

    • Why pair writing helps improve documentation

      Pair writing is when two writers work in real time, on the same piece of text, in the same room. This approach improves document quality, speeds up writing, and allows writers to learn from each other. The idea of pair writing is borrowed from pair programming.

Leftovers

  • Not every article needs a picture

    Adults do not need pictures to help them read. I understand that not putting photos on top of every single article might seem like a big undertaking at first, but once a few brave sites take it up, others will quickly follow suit. Putting a generic photo of a cell phone on top of an article about cell phones is insulting. To be clear: I am not an iconoclast. Including images in a story can be a nice addition; the problem is that this has now become a mandatory practice. Not every article should require a picture.

  • DevOps, Agile, and continuous delivery: What IT leaders need to know

    Enterprises across the globe have implemented the Agile methodology of software development and reaped its benefits in terms of shorter development times. Agile has also helped streamline processes in multilevel software development teams. And the methodology builds in feedback loops and drives the pace of innovation. Over time, DevOps and continuous delivery have emerged as more wholesome and upgraded approaches to managing the software development life cycle (SDLC) with a view to improving speed to market, reducing errors, and enhancing quality. In this guide, we will talk more about Agile, how it grew, and how it extended into DevOps and, ultimately, continuous delivery.

  • Health/Nutrition

    • Trump’s Pick for Health Secretary Led Company That Jacked Up Insulin Prices

      The Canadian research team that developed insulin as a breakthrough treatment for diabetes back in 1923 sold the patent for just $3, essentially giving its intellectual property away for the greater good.

      Nowadays, the companies that manufacture this crucial medicine raise the price on a regular basis in order to maximize profits. One of those companies is Eli Lilly and Co., where Alex Azar II, the man President Trump has selected to run the Department of Health and Human Services (HHS), recently worked as a top executive.

      During Azar’s eight-year tenure as president and vice president of Eli Lilly’s operations in the United States, the pharmaceutical giant raised the price of Humalog, a fast-acting form of insulin, from $2,657 per year to $9,172. That’s a 345 percent price increase for a drug that millions of patients depend on, according to Peter Maybarduk, the director of the Access to Medicines Program at the watchdog group Public Citizen.
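
      The 345 percent figure can be checked against the two prices given: $9,172 is about 345 percent *of* $2,657, which corresponds to roughly a 245 percent *increase*. A minimal sketch of both readings (the two prices come from the article; the variable names are illustrative):

```python
# Arithmetic check on the Humalog prices quoted above
# (USD per year, as given in the article).
old_price = 2657
new_price = 9172

# New price expressed as a percentage of the old price.
ratio_pct = new_price / old_price * 100

# Strict percentage increase over the old price.
increase_pct = (new_price - old_price) / old_price * 100

print(round(ratio_pct, 1))     # 345.2
print(round(increase_pct, 1))  # 245.2
```

      Which number one quotes depends on whether “increase” is read as the new-to-old ratio or as the change relative to the old price.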

    • Trump Nominates Former Drug Company Executive as HHS Secretary

      President Trump has nominated Alex Azar, a former top executive of pharmaceutical giant Eli Lilly, to lead the Department of Health and Human Services. The post has been vacant since the resignation of Tom Price. Public Citizen’s Robert Weissman criticized the nomination, saying, “Tom Price supported Big Pharma in the U.S. Congress. Now apparently Trump has decided to cut out the middleman and let a pharmaceutical executive literally run the federal department that protects the health of all Americans.”

    • Big Pharma’s Pushers: the Corporate Roots of the Opioid Crisis

      Sitting in a small cafe in a small town in western Massachusetts, Jordan talks about his problems with opioids. He was a construction worker, but an accident at his work site sent him to a hospital and into the arms of prescription painkillers. Jordan’s doctor did not properly instruct him about the dangers of these pills, which he used to kill the pain that ran down his leg. When the prescription ran out, Jordan found he craved the pills. “I used up my savings buying them on the black market,” he told me. When his own money ran out, Jordan got involved in petty theft. He went to prison for a short stint. The lack of proper care for his addiction in the prison allowed him to spiral into more dangerous drugs, which led to his near-death. Now released, Jordan struggles to make his way in the world.

      With us is Mary, another recovering addict who entered the world of prescription drugs after she had a car accident a few years ago. Her shoulders and neck hurt badly and so Mary’s doctor gave her a prescription for fentanyl, which is 50 to 100 times stronger than morphine. Mary used a fentanyl patch, which allowed the drug to slowly seep into her body through her skin. It was inevitable, Mary told me, that she became addicted to the drug. The pain went away, but the longing for the opioid continued. Mary, like Jordan, is in a de-addiction programme. It is an uphill climb, but Mary is confident. She is a bright person, whose eyes tell a story of great hope behind the fog of her addiction.

    • By legalizing GMOs, we are erasing Ubuntu

      On October 8, 2017, exactly 55 years after Uganda attained its independence, parliament passed the National Biotechnology and Biosafety Bill into law.

      It now awaits the president’s assent to start working. We are some 45 years into the internet age if Wikipedia information is anything to go by. In the past, we had the Stone Age, Iron Age, Bronze Age, steam age, machine age/industrial age, nuclear age, etc.

  • Security

  • Defence/Aggression

    • The Ever-Expanding ‘War on Terror’

      In the shadows, the U.S. special operations war on “terrorists” keeps on expanding around the globe, now reaching into Africa where few detectable American “interests” exist, writes Jonathan Marshall.

  • Transparency/Investigative Reporting

    • UK Gov’t Destroys Key Emails From Julian Assange Case, Shrugs About It

      I guess it all depends on when you ask the question. The second statement could be true pre- or post-email deletion, but probably more likely to be true after the scrubbing. But it’s a bit rich to ask everyone to believe these are simultaneously true — that the contents are unknown but also unlikely to be significant.

      The chance something “significant” may have been deleted remains high. And it will always remain so because the absence of emails means the absence of contradictory evidence. The UK is still interested in Assange and Wikileaks, even though it hasn’t pressed the issue of extradition in quite some time. This is CPS’s excuse for the mass deletion: the communications were related to extradition proceedings that ended in 2012 and contain nothing relevant to ongoing Assange-related government activity. According to CPS, this deletion was per policy.

      [...]

      The ending of an investigation or prosecution shouldn’t trigger a countdown clock that expires this quickly, especially when governments are almost always able to withhold documents while investigations and prosecutions are still ongoing. Generally speaking, government agencies are the only ones that can say definitively when investigations end, leaving document requesters to figure this out through trial and error.

      In this case, Maurizi will be continuing her FOI lawsuit against the CPS, but with some of the targeted documents already deleted, there’s little to be gained.

    • ‘The Atlantic’ Commits Malpractice, Selectively Edits To Smear WikiLeaks

      The author of the Atlantic article, Julia Ioffe, put a period rather than a comma at the end of the text about not wanting to appear pro-Trump or pro-Russia, and completely omitted WikiLeaks’ statement following the comma that it considers those allegations slanderous. This completely changes the way the interaction is perceived.

      This is malpractice. Putting an ellipsis (…) and then omitting the rest of the sentence would have been sleazy and disingenuous enough, because you’re leaving out crucial information but at least communicating to the reader that there is more to the sentence you’ve left out, but replacing the comma with a period obviously communicates to the reader that there is no more to the sentence. If you exclude important information while communicating that you have not, you are blatantly lying to your readers.

      There is a big difference between “because it won’t be perceived as coming from a ‘pro-Trump’ ‘pro-Russia’ source” and “because it won’t be perceived as coming from a ‘pro-Trump’ ‘pro-Russia’ source, which the Clinton campaign is constantly slandering us with.” Those are not the same sentence. At all. Different meanings, different implications. One makes WikiLeaks look like it’s trying to hide a pro-Trump, pro-Russian agenda from the public, and the other conveys the exact opposite impression as WikiLeaks actively works to obtain Donald Trump’s tax returns. This is a big deal.

      [...]

      What Ioffe’s tweets tell us is that she had full copies of the DMs, since she knew that there were more pages missing from the single tweet by Don Jr. that she had read. The deceitful omission that is the subject of this article was clarified in the first Don Jr. tweet she replied to. She read it, she analyzed it enough to figure out what was missing, but she said nothing about the fact that there were a lot more words in the sentence that she selectively edited out to convey the exact opposite of its meaning.

      I’m no detective, but it sure looks like this was a willful omission on Ioffe’s part made deliberately with the intention of damaging WikiLeaks’ reputation. I have been attempting to contact Ioffe, whose other work for the Atlantic includes such titles as “The History of Russian Involvement in America’s Race Wars” and “The Russians Are Glad Trump Detests the New Sanctions”; I will update this article if she has anything she’d like to say.

      Also worth noting is Ioffe’s omission of the fact that we’ve known since July that WikiLeaks had contacted Donald Trump Jr., as well as the fact that Julian Assange’s internet was cut at the time some of the Don Jr. messages were sent, meaning they may have been sent by someone else with access to the WikiLeaks account.

  • Environment/Energy/Wildlife/Nature

    • Solar Companies Are Scrambling to Find a Critical Raw Material

      Prices of polysilicon, the main component of photovoltaic cells, spiked as much as 35 percent in the past four months after environmental regulators in China shut down several factories.

    • More than 15,000 scientists from 184 countries issue ‘warning to humanity’

      William Ripple of Oregon State University’s College of Forestry, who started the campaign, said that he came across the 1992 warning last February, and noticed that this year happened to mark the 25th anniversary.

    • The Ongoing Misery of Puerto Rico

      There are a lot of labor issues going on. People are losing their jobs, businesses are closing, people are not getting paid for days they work. Some businesses have paid their workers even if they could not come in, but those are exceptional cases.

      There has been an inaccurate counting of deaths. The official number is 55 right now but every day you hear of situations where people are dying and whether they are attributed to the storm or not is a matter of great controversy. So many health and mental health issues are connected to the storm. The nursing homes are without air conditioning. There are four confirmed deaths from leptospirosis but we suspect there are a lot more.

  • Finance

  • AstroTurf/Lobbying/Politics

    • Facebook is killing Messenger Day and consolidating it with Facebook as Stories

      Previously, disappearing posts on Messenger Day and Facebook Stories existed separately. Now, Facebook Stories will be synced across both platforms, although camera filters will still remain separate. Along with this change, Facebook is also killing private ephemeral messaging feature Direct. Going forward, all replies to Stories as well as Facebook Camera messages will be directed through Messenger.

    • Facebook fact checkers say efforts are little more than good PR

      “I don’t feel like it’s working at all. The fake information is still going viral and spreading rapidly,” The Guardian quoted one anonymous source as saying. “It’s really difficult to hold [Facebook] accountable. They think of us as doing their work for them. They have a big problem, and they are leaning on other organizations to clean up after them.”

    • ‘Way too little, way too late’: Facebook’s factcheckers say effort is failing

      Several fact checkers who work for independent news organizations and partner with Facebook told the Guardian that they feared their relationships with the technology corporation, some of which are paid, have created a conflict of interest, making it harder for the news outlets to scrutinize and criticize Facebook’s role in spreading misinformation.

    • Trump Jr. Messaged With WikiLeaks During, After Campaign

      Donald Trump Jr.’s release of the messages came hours after The Atlantic magazine, which obtained the string of messages, first reported them. As he released them, he appeared to downplay the exchanges, saying on his Twitter account, “Here is the entire chain of messages with @wikileaks (with my whopping 3 responses) which one of the congressional committees has chosen to selectively leak. How ironic!”

    • Wikileaks’ “Secret Correspondence” with Don Trump Jr. published

      These conversations took place through Twitter DM and would have been accepted by Trump Jr., and could have been blocked at any time. The timing of various tweets, matched with other events, certainly carries the appearance of a bi-directional relationship.

    • The Secret Correspondence Between Donald Trump Jr. and WikiLeaks

      Just before the stroke of midnight on September 20, 2016, at the height of last year’s presidential election, the WikiLeaks Twitter account sent a private direct message to Donald Trump Jr., the Republican nominee’s oldest son and campaign surrogate. “A PAC run anti-Trump site putintrump.org is about to launch,” WikiLeaks wrote. “The PAC is a recycled pro-Iraq war PAC. We have guessed the password. It is ‘putintrump.’ See ‘About’ for who is behind it. Any comments?” (The site, which has since become a joint project with Mother Jones, was founded by Rob Glaser, a tech entrepreneur, and was funded by Progress for USA Political Action Committee.)

      The next morning, about 12 hours later, Trump Jr. responded to WikiLeaks. “Off the record I don’t know who that is, but I’ll ask around,” he wrote on September 21, 2016. “Thanks.”

      [...]

      The messages were turned over to Congress as part of that body’s various ongoing investigations into Russian meddling in the 2016 presidential campaign. American intelligence services have accused the Kremlin of engaging in a deliberate effort to boost President Donald Trump’s chances while bringing down his Democratic rival, Hillary Clinton. That effort—and the president’s response to it—has spawned multiple congressional investigations, and a special counsel inquiry that has led to the indictment of Trump’s former campaign chair, Paul Manafort, for financial crimes.

    • Kansas is embracing an entirely unique brand of secrecy

      Among a list of items cited by the newspaper is the Kansas legislature’s refusal to list the names of the individuals who sponsor legislation, making it difficult for constituents to track whether their elected representatives are trying to push bills that are contrary to their beliefs or their economic interests. The Republican-controlled state house recently voted down an attempt to force the disclosure of legislation’s authors. The state legislature also routinely refuses to even disclose who voted for legislation within its different committees, according to the Star.

    • Why hide in the shadows, Kansas? State government is shrouded in secrecy

      The stories reveal a concerted and disturbing effort by officials at all levels of Kansas government to keep the public’s business secret.

    • Why America’s Future Could Look Like This

      “There is no such thing as tax reduction, only tax burden shifting. When you reduce taxes on the richest Americans, those less rich will pay the difference.”

    • Votes in 18 nations ‘hacked’ in last year

      Elections in 18 separate nations were influenced by online disinformation campaigns last year, suggests research.

      Independent watchdog Freedom House looked at how online discourse was influenced by governments, bots and paid opinion formers.

      In total, 30 governments were actively engaged in using social media to stifle dissent, said the report.

    • Sessions: ‘no reason to doubt’ Roy Moore’s accusers

      U.S. Attorney General Jeff Sessions said on Tuesday he “has no reason to doubt” five women who have accused U.S. Senate candidate Roy Moore of sexual misconduct with them when they were in their teens.

    • Media Who Went to Bat for Shut-Out Critics Should Also Stand Up for Targeted Copwatchers

      In September, the LA Times ran a two-part series on the tax and other benefits the Disney corporation managed to extract in Anaheim, California (9/24/17), and its efforts to influence city council elections (9/26/17). In a particularly hamfisted retaliatory move, Disney (though it didn’t call for any actual corrections) barred LA Times reporters from advance press screenings for its movies.

      Journalists took umbrage: The Washington Post‘s Alyssa Rosenberg (11/6/17) said she’d boycott the screenings until Disney backed down. The New York Times agreed, issuing a statement saying, “A powerful company punishing a news organization for a story they do not like” is a “dangerous precedent and not at all in the public interest.” The National Society of Film Critics and others disqualified Disney from awards (Variety, 11/7/17).

      This week, citing “productive discussions” with LA Times leadership, Disney rescinded the ban. “Journalistic solidarity,” claimed the Washington Post‘s Erik Wemple (11/7/17), served notice to Disney and “all prospective bullies: We media types sometimes do live up to the glorious principles that we mouth at panel discussions.”

      It was indeed a commendable action. So. Maybe now they’ve got this solidarity thing going, with the glorious principles and the concern about the powerful punishing people for stories they don’t like, corporate media could stretch the idea enough to see where solidarity is needed on issues perhaps even more pressing than whether you got your Thor: Ragnarok review before opening night or a day after.

  • Censorship/Free Speech

  • Privacy/Surveillance

    • 11 top tools to assess, implement, and maintain GDPR compliance

      The European Union’s General Data Protection Regulation (GDPR) goes into effect in May 2018, which means that any organization doing business in or with the EU has six months from this writing to comply with the strict new privacy law. The GDPR applies to any organization holding or processing personal data of E.U. citizens, and the penalties for noncompliance can be stiff: up to €20 million (about $24 million) or 4 percent of annual global turnover, whichever is greater. Organizations must be able to identify, protect, and manage all personally identifiable information (PII) of EU residents even if those organizations are not based in the EU.
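
      The fine ceiling described above (“whichever is greater”) reduces to a one-line comparison. A minimal sketch, assuming euro amounts throughout; the function name and the example turnovers are illustrative, not from the regulation’s text:

```python
# Sketch of the GDPR maximum-fine rule quoted above: the cap is
# the greater of EUR 20 million or 4% of annual global turnover.
GDPR_FLAT_CAP_EUR = 20_000_000
GDPR_TURNOVER_RATE = 0.04

def max_gdpr_fine(annual_global_turnover_eur):
    """Return the maximum possible GDPR fine in euros."""
    return max(GDPR_FLAT_CAP_EUR,
               GDPR_TURNOVER_RATE * annual_global_turnover_eur)

# For a EUR 1 billion turnover, the 4% figure dominates:
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# For a EUR 100 million turnover, the EUR 20 million floor applies:
print(max_gdpr_fine(100_000_000))    # 20000000
```

      The “whichever is greater” wording means the flat cap acts as a floor on the maximum exposure: for any organization with annual turnover above €500 million, the 4 percent figure is the binding one.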

    • Texas National Guard Latest Agency To Be Discovered Operating Flying Cell Tower Spoofers

      These aren’t the first DRT boxes to be exposed via public records requests. Law enforcement agencies in Chicago and Los Angeles are also deploying these surveillance devices — with minimal oversight and no public discussion prior to deployment. The same goes for the US Marshals Service, which has been flying its DRT boxes for a few years now with zero transparency or public oversight.

      The same goes for the National Guard in Texas. There doesn’t seem to be any supporting documentation suggesting any public consultation in any form before acquisition and deployment. Not only that, but there’s nothing in the documents obtained that clarifies what legal authority permits National Guard use of flying cell tower spoofers.

    • No snooping on American citizens without a court order

      To hear U.S. Rep. Ted Poe, R-Texas, tell it, NSA stands for “No Strings Attached” when it comes to the way the federal agency sweeps up and examines the most private data of American citizens.

      Poe raised the issue, with some emotion, on Tuesday during the House committee hearing held to question U.S. Attorney General Jeff Sessions. How the National Security Agency can so casually root around in the personal data of Americans who have done no wrong was beyond him.

    • Canadians Are Worried About NSA Spying But Don’t Understand How It Happens

      Four years after NSA contractor Ed Snowden exposed the US government’s massive internet spying apparatus (and incidentally revealed the “five eyes” global surveillance partnership that includes Canada), Canadians are more concerned about their digital privacy than ever before.

      But, according to a new report from the Canadian Internet Registration Authority (CIRA), which manages the .ca top-level domain, the vast majority simply do not understand the risks of being exposed to NSA surveillance, despite their concern. This, to say the least, is concerning.

    • NSA’s Hackers Were Themselves Hacked In Major Cybersecurity Breach

      And let’s talk now about an extraordinary security breach at the NSA. A group known as The Shadow Brokers have stolen sophisticated tools the agency uses to penetrate computer networks. In other words, the NSA’s own hackers have been hacked, it appears. This all began last year, and it looks like The Shadow Brokers have tried to sell some of the NSA’s cyberweapons. Matthew Olsen worked at the NSA as general counsel. He was later director of the National Counterterrorism Center. He’s in our studio this morning. Thanks for coming in.

  • Civil Rights/Policing

    • George H.W. Bush Accused of Child Sexual Assault

      Meanwhile, a sixth woman has come forward to accuse former President George H.W. Bush of groping her. Roslyn Corrigan says she was 16 years old when Bush grabbed her buttocks as she stood next to him for a photograph during a public event at a CIA office in Texas.

    • The Pentagon paid $370,000 to rent an MRI for Guantánamo. It doesn’t work.

      There’s a problem with a mobile MRI unit being leased by the Pentagon for $370,000 to scan a suspected terrorist’s brain as a prelude to his death-penalty trial, a prosecutor announced in court Tuesday.

      Army Col. John Wells disclosed the issue during pretrial hearings in the case against Abd al Rahim al Nashiri, 52, a Saudi man awaiting a death-penalty trial as the suspected architect of the Oct. 12, 2000, bombing of the USS Cole that killed 17 sailors.

      Air Force Col. Vance Spath, the trial judge, ordered the forensic scan in 2015. Wells told Spath that the magnetic resonance imaging equipment “is not functional and operational,” and requires maintenance.

      The equipment has been parked for about a month outside the base’s Navy hospital.

    • Family of man who died after Taser incident gets $5.5 million verdict

      The parents of a 39-year-old who died in a Christmas Eve confrontation with the Los Angeles Police Department in 2014 were awarded $5.5 million by a federal jury on Monday, KPCC radio reports.

      KPCC reports that LAPD officers “hit the man with their batons and fists, pepper sprayed and restrained him.” An officer also stunned the man with a Taser six times in a row. He suffered a heart attack an hour later and died after two days.

      The coroner’s report blamed an enlarged heart, cocaine use, and “police restraint with use of Taser.”

    • City officials should listen to young people in debate over new police academy

      In the last six years, more than 160 young people under the age of 17 have been shot to death in Chicago. More than 1,550 others have been wounded in shootings.

      Isn’t it time grown-ups started listening to what young people have to say about stopping the gun violence?

      If anyone took the time to ask them, they would say unequivocally, “Invest in our future, not our incarceration.”

  • Intellectual Monopolies

    • Copyrights

      • Monkey Selfie Photographer Says He’s Now Going To Sue Wikipedia

        Thought the monkey selfie saga was over? I’m beginning to think that it will never, ever, be over. If you’re unfamiliar with the story, there are too many twists and turns to recount here, but just go down the rabbit hole (monkey hole?) of our monkey selfie tag. Last we’d heard, PETA and photographer David Slater were trying to settle PETA’s totally insane lawsuit — but were trying to do so in an immensely troubling way, where the initial district court ruling saying, clearly, that monkeys don’t get a copyright would get deleted. Not everyone was comfortable with this settlement and some concerns have been brought before the court. As of writing this, the court seems to be sitting on the matter.

      • Microsoft Sued Over ‘Baseless’ Piracy Threats

        Microsoft and the BSA are accusing Rhode Island-based company Hanna Instruments of pirating software. Despite facing threats of millions of dollars in damages, the company maintains its innocence, backed up by license keys and purchase receipts. The BSA’s lawyers are not convinced, however, so Hanna has decided to take the matter to court.

      • Hollywood Studios Force ISPs to Block Popcorn Time & Subtitle Sites

        The Oslo District Court has issued a judgment ordering 14 Internet service providers to block subscriber access to a range of websites offering and used by three Popcorn Time application variants. The ban, obtained by six major Hollywood studios, includes a pair of subtitle sites and extends to YTS, YIFY, and EZTV branded domains.
