Bonum Certa Men Certa

A Soup of Buzzwords From Brussels (the European Commission)

Video download link | md5sum 8e48a186bb1cfab6d4a7ab99da1d0094
Say Hello to Buzzwords
Creative Commons Attribution-No Derivative Works 4.0



Stop Calling Everything AI, Machine-Learning Pioneer Says - IEEE Spectrum

Summary: The European Commission (EC) is very much guilty of what the man on the right recently bemoaned; European officials are shaping our policies based on misconceptions and nebulous terms, brought forth by multinational corporations, their media, and their lobbyists in the Brussels area; today we focus on a new consultation which "sparingly" uses the buzzwords "Hey Hi" and "IoT" (in almost every paragraph, on average)

THIS morning we belatedly published this relatively short post concerning a misguided or misframed consultation, which is probably well-meaning but makes inherently flawed (or outright false) assumptions about the problem at hand (liability for harm and/or defects). The above video is a very short version of what would otherwise take several hours to cover (e.g. addressing pertinent questions). It repeatedly notes that the questions themselves are loaded, wrong, ill-advised, and potentially unhelpful. They compel us to believe that there's this "magic" called "AI", that it cannot be understood or governed, and that nobody can be held accountable for computerised systems anymore. It's a form of defeatism, inducing a sense of helplessness.

In recent years we have pointed out that the gross overuse of the term "AI" (which we've spun as "Hey Hi" for the sake of ridicule) is being exploited by patent maximalists. They want patents to be granted to computers and for patents to also cover computer programs (algorithms) by framing those programs as "AI". Both of these are really bad, as they defy common sense, not just patent law or its raison d'être.

An associate of ours has studied the document and more or less agrees. "I'd say it's more likely misframed," he adds. "By this decade, more ought to know about what general-purpose computing is about, so there is not the excuse of it being novel. Same with machine learning, where the innovation was in the 1970s and 1980s, but the computing power didn't catch up with the theory until the last decade or so."
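To illustrate the associate's point -- that "machine learning" is decades-old, ordinary algorithmic work rather than inscrutable magic -- here is a minimal sketch of the perceptron learning rule (a technique dating back to the late 1950s), written in Python. The toy AND-gate data, function names and parameters are our own illustrative assumptions, not anything taken from the consultation or from the associate's remarks; the point is merely that such a program can be read, understood and audited line by line.

# A minimal sketch, for illustration only: the perceptron learning rule
# (late-1950s vintage) trained on a toy AND-gate dataset. All names and
# values here are our own assumptions, not part of the EC consultation.

def train_perceptron(samples, epochs=20, learning_rate=0.1):
    """Learn weights and a bias so that a simple threshold matches the labels."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            # Predict with the current weights (a plain threshold function).
            prediction = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
            # Nudge the weights towards the correct answer.
            error = label - prediction
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error
    return weights, bias

# Toy training data: the logical AND function.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_gate)
print("learned weights:", weights, "bias:", bias)

Nothing about a routine like this prevents a court or a regulator from asking who wrote it, who trained it, and who deployed it.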

If one visits the page in question right now it says: "This survey has not yet been published or has already been unpublished in the meantime."

We've chosen not to comment until it's officially over ("EU Survey - Adapting liability rules to the digital age and Artificial Intelligence").

As it states right there in the title, it's all about "Artificial Intelligence" -- a concept which is hardly defined at all, or at best ill-defined.

"The sad situation in the world nowadays is that the politicians neither know anything at all about ICT nor know anyone they can turn to who will give then an honest answer" the associate adds. "Therefore it is important that I at least go through the motions of providing the feedback they have requested from EU citizens." The text below concerns Directive 85/374/EEC on liability for defective products (more in [1, 2] and it mentions "AI" about 100 times in total:

2000 character(s) maximum for each of the following:



Question: What do you think is the appropriate approach for consumers to claim compensation when damage is caused by a defective product bought through an online marketplace and there is no EU-based producer or importer?

Question: Please elaborate on your answers or specify other grounds of legal uncertainty regarding liability for damage caused by AI:

Question: Please elaborate on your answers. You may reflect in particular on the recently proposed AI Act and on the complementary roles played by liability rules and the other safety-related strands of the Commission’s AI policy in ensuring trust in AI and promoting the uptake of AI-enabled products and services:

Question: Please elaborate on your answers, in particular on whether your assessment is different for AI-enabled products than for AI-enabled services

Question: Please elaborate on your answers, in particular on whether your assessment is different for AI-enabled products than for AI-enabled services, as well as on other impacts of possible legal fragmentation

Question: Please elaborate on your answers and describe any other measures you may find appropriate:

Question: Please elaborate on your answer, describe any other approaches regarding strict liability you may find appropriate and/or indicate to which specific AI-enabled products and services strict liability should apply:

Question: Please elaborate on your answers, also taking into account the interplay with the other strands of the Commission’s AI policy (in particular the proposed AI Act). Please also describe any other measures you may find appropriate:

Question: Please elaborate on your answer and specify if you would prefer a different approach, e.g. an approach differentiating by area of AI application:

Question: Are there any other issues that should be considered?

-----

European Commission EU Survey

Adapting liability rules to the digital age and Artificial Intelligence

Fields marked with * are mandatory.

Introduction

This public consultation aims to:

confirm the relevance of the issues identified by the 2018 evaluation of the Product Liability Directive (e.g. how to apply the Directive to products in the digital and circular economy), and gather information and views on how to improve the Directive (Section I);

collect information on the need and possible ways to address issues related specifically to damage caused by Artificial Intelligence systems, which concerns both the Product Liability Directive and national civil liability rules (Section II).

You can respond to both sections or just to Section I. It is not possible to respond only to Section II.

About you

* Question: Language of my contribution

* Question: I am giving my contribution as

* Question: First name

* Question: Surname

* Question: Email (this won't be published) ______

* Question: Country of origin. Please add your country of origin, or that of your organisation.

The Commission will publish all contributions to this public consultation. You can choose whether you would prefer to have your details published or to remain anonymous when your contribution is published. For the purpose of transparency, the type of respondent (for example, ‘business association’, ‘consumer association’, ‘EU citizen’), country of origin, organisation name and size, and its transparency register number are always published. Your e-mail address will never be published. Opt in to select the privacy option that best suits you. Privacy options default based on the type of respondent selected.

* Question: I agree with the personal data protection provisions

Section I – Product Liability Directive

This section of the consultation concerns Council Directive 85/374/EEC on liability for defective products (“Product Liability Directive”), which applies to any product marketed in the European Economic Area (27 EU countries plus Iceland, Liechtenstein and Norway). See also Section II for more in-depth questions about the Directive and AI.

According to the Directive, if a defective product causes damage to consumers, the producer must pay compensation. The injured party must prove the product was defective, as well as the causal link between the defect and the damage. But the injured party does not have to prove that the producer was at fault or negligent (‘strict liability’). In certain circumstances, producers are exempted from liability if they prove, e.g. that the product’s defect was not discoverable based on the best scientific knowledge at the time it was placed on the market.

Injured parties can claim compensation for death, personal injury as well as property damage if the property is intended for private use and the damage exceeds EUR 500. The injured party has 3 years to seek compensation. In addition, the producer is freed from liability 10 years after the date the product was put into circulation.

The Evaluation of the Directive in 2018 found that it was effective overall, but difficult to apply to products in the digital and circular economy because of its outdated concepts. The Commission’s 2020 Report on Safety and Liability for AI, Internet of things (IoT) and robotics also confirmed this.

The Evaluation also found that consumers faced obstacles to making compensation claims, due to thresholds and time limits, and obstacles to getting compensation, especially for complex products, due to the burden of proof.

* Question: How familiar are you with the Directive?

Answer: I have detailed knowledge of the Directive, its objectives, rules and application
Answer: I am aware of the Directive and some of its contents
Answer: I am not familiar with the Directive
Answer: No opinion

Adapting the Directive to the digital age

Question: The Directive holds importers strictly liable for damage caused by defective products when the producer is based outside the EU. Nowadays online marketplaces enable consumers to buy products from outside the EU without there being an importer.

Online marketplaces intermediate the sale of products between traders, including those established outside the EU, and consumers. Typically, they are not in contact with the products they intermediate and they frequently intermediate trade between many sellers and consumers.

Under the current rules, online marketplaces are covered by a conditional liability exemption (Article 14 of the e-Commerce Directive). The new proposal for a Digital Services Act includes obligations for online marketplaces to tackle illegal products online, e.g. gathering information on the identity of traders using their services. Moreover, the new proposal for a General Product Safety Regulation includes provisions for online marketplaces to tackle the sale of dangerous products online.

Do you agree or disagree with the following statements?

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

The proposals for a Digital Services Act and General Product Safety Regulation are sufficient to ensure consumer protection as regards products bought through online marketplaces where there is no EU-based producer or importer.

The Product Liability Directive needs to be adapted to ensure consumer protection if damage is caused by defective products bought through online marketplaces where there is no EU-based producer or importer.

Question: What do you think is the appropriate approach for consumers to claim compensation when damage is caused by a defective product bought through an online marketplace and there is no EU-based producer or importer? (2000 character(s) maximum) 0 out of 2000 characters used.

Question: Digital technologies may bring with them new risks and new kinds of damage.

Regarding risks, it is not always clear whether cybersecurity vulnerabilities can be considered a defect under the Directive, particularly as cybersecurity risks evolve throughout a product’s lifetime.

Regarding damage, the Directive harmonises the rights of consumers to claim compensation for physical injury and property damage, although it lets each Member State decide itself whether to compensate for non-material damage (e.g. privacy infringements, psychological harm). National rules on non-material damage differ widely. At EU level both material and non-material damage can be compensated under the General Data Protection Regulation (GDPR) when a data controller or processor infringes the GDPR, and the Environmental Liability Directive provides for the liability of companies for environmental damage.

Do you agree or disagree with the following statements?

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

Producers should potentially be held strictly liable for damages caused as a result of failure to provide necessary security updates for smart products

The Directive should harmonise the right of consumers to claim compensation from producers who are not simultaneously data controllers or processors, for privacy or data protection infringements (e.g. a leak of personal data caused by a defect)

The Directive should harmonise the right of consumers to claim compensation for damage to, or destruction of, data (e.g. data being wiped from a hard drive even if there is no tangible damage)

The Directive should harmonise the right of consumers to claim compensation for psychological harm (e.g. abusive robot in a care setting, home-schooling robot)

Some products, whether digital or not, could also cause environmental damage. The Directive should allow consumers to claim compensation for environmental damage (e.g. caused by chemical products)

Coverage of other types of harm

Adapting the Directive to the circular economy

Question: The Directive addresses defects present at the moment a product is placed on the market. However, changes to products after they are placed on the market are increasingly common, e.g. in the context of circular economy business models.

The Evaluation of the Directive found that it was not always clear who should be strictly liable when repaired, refurbished or remanufactured products were defective and caused damage. It is worth noting here that the Directive concerns the defectiveness of products and not the defectiveness of services. So, a third-party repair that was poorly carried out would not lead to the repairer being held liable under the Directive, although remedies may be available under national law.

Do you agree or disagree with the following statements?

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

Companies that remanufacture a product (e.g. restoring vehicle components to original as-new condition) and place it back on the market should be strictly liable for defects causing damage

Companies that refurbish a product (e.g. restoring functionality of a used smartphone) and place it back on the market should be strictly liable for defects causing damage

The manufacturer of a defective spare part added to a product (e.g. to a washing machine) during a repair should be strictly liable for damage caused by that spare part

Policy approach and impacts of adapting the Directive to the digital and circular economy

Reducing obstacles to getting compensation

Question: The Evaluation of the Directive found that in some cases consumers face significant difficulties in getting compensation for damage caused by defective products.

In particular it found that difficulties in proving the defectiveness of a product and proving that the product caused the damage accounted for 53% of rejected compensation claims. In particular, the technical complexity of certain products (e.g. pharmaceuticals and emerging digital technologies) could make it especially difficult and costly for consumers to actually prove they were defective and that they caused the damage.

To what extent do you think that the following types of product present difficulties in terms of proving defectiveness and causality in the event of damage? (See additional burden of proof question concerning AI in Section II)

To a very large extent To a large extent To a moderate extent To a small extent Not at all Don't know/no answer

All products

Technically complex products

Pharmaceuticals

AI-enabled products

IoT (Internet of Things) products

Question: Other types of product (please specify): (50 character(s) maximum) 0 out of 50 characters used.

Reducing obstacles to making claims

Question: The Evaluation of the Directive found that in some cases consumers faced or could face significant difficulties in making compensation claims for damage caused by defective products. The current rules allow consumers to claim compensation for personal injury or property damage. Time limits apply to all compensation claims and several other limitations apply to compensation for property damage.

To what extent do the following features of the Directive create obstacles to consumers making compensation claims?

To a very large extent To a large extent To a moderate extent To a small extent Not at all Don't know/no answer

Producers are released from liability for death/personal injury 10 years after placing the product on the market

Producers are released from liability for property damage 10 years after placing the product on the market

Consumers have to start legal proceedings within 3 years of becoming aware of the damage

Consumers can claim compensation only for damage to property worth more than EUR 500

Consumers can claim compensation only for damage to property intended and used for private purposes

Policy approach and impacts of reducing obstacles to getting compensation and making claims

End of Section I on Product Liability Directive

*Question

In Section II of this consultation the problems linked to certain types of Artificial Intelligence – which make it difficult to identify the potentially liable person, to prove that person’s fault or to prove the defect of a product and the causal link with the damage – are explored further.

Would you like to continue with Section II on Artificial Intelligence?

Answer: Continue with Section II on Artificial Intelligence
Answer: Close the questionnaire

Section II - Liability for AI

Introduction

As a crucial enabling technology, AI can drive both products and services. AI systems can either be provided with a physical product (e.g. an autonomous delivery vehicle) or placed separately on the market.

To facilitate trust in and the roll-out of AI technologies, the Commission is taking a staged approach. First, on 21 April 2021, it proposed harmonised rules for development, placing on the market and use of certain AI systems (AI Act). The AI Act contains obligations on providers and users of AI systems, e.g. on human oversight, transparency and information. In addition, the recent proposal for a Regulation on Machinery Products (published together with the AI act) also covers new risks originating from emerging technologies, including the integration of AI systems into machinery.

However, safety legislation minimises but cannot fully exclude accidents. The liability frameworks come into play where accidents happen and damage is caused. Therefore, as a next step to complement the recent initiatives aimed at improving the safety of products when they are placed on the EU market, the Commission is considering a revision of the liability framework.

In the White Paper on AI and the accompanying 2020 Report on Safety and Liability, the Commission identified potential problems with liability rules, stemming from the specific properties of certain AI systems. These properties could make it difficult for injured parties to get compensation based on the Product Liability Directive or national fault-based rules. This is because in certain situations, the lack of transparency (opacity) and explainability (complexity) as well as the high degree of autonomy of some AI systems could make it difficult for injured parties to prove a product is defective or to prove fault, and to prove the causal link with the damage.

It may also be uncertain whether and to what extent national strict liability regimes (e.g. for dangerous activities) will apply to the use of AI-enabled products or services. National laws may change, and courts may adapt their interpretation of the law, to address these potential challenges. Regarding national liability rules and their application to AI, these potential problems have been further explored in this recent study.

With this staged approach to AI, the Commission aims to provide the legal certainty necessary for investment and, specifically with this initiative, to ensure that victims of damage caused by AI-enabled products and services have a similar level of protection to victims of technologies that operate without AI. Therefore, this part of the consultation is looking at all three pillars of the existing liability framework.

The Product Liability Directive, for consumer claims against producers of defective products. The injured party has to prove the product was defective and the causal link between that defect and the damage. As regards the Directive, the proposed questions build on the first section of the consultation.

National fault-based liability rules: The injured party has to prove the defendant’s fault (negligence or intent to harm) and a causal link between that fault and the damage.

National strict liability regimes set by each Member State for technologies or activities considered to pose an increased risk to society (e.g. cars or construction activities). Strict liability means that the relevant risk is assigned to someone irrespective of fault. This is usually justified by the fact that the strictly liable individual benefits from exposing the public to a risk.

In addition to this framework, the General Data Protection Regulation (GDPR) gives anyone who has suffered material or non-material damage due to an infringement of the Regulation the right to receive compensation from the controller or processor.

Problems – general

Question: Do you agree or disagree with the following statements?

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

There is uncertainty as to how the Product Liability Directive (i.e. liability for defective products) applies to damage caused by AI

There is uncertainty as to whether and how liability rules under national law apply to damage caused by AI

When AI operates with a high degree of autonomy, it could be difficult to link the damage it caused to the actions or omissions of a human actor

In the case of AI that lacks transparency (opacity) and explainability (complexity), it could be difficult for injured parties to prove that the conditions of liability (such as fault, a defect, or causation) are fulfilled

Because of AI’s specific characteristics, victims of damage caused by AI may in certain cases be less protected than victims of damage that didn't involve AI

It is uncertain how national courts will address possible difficulties of proof and liability gaps in relation to AI

Question: Please elaborate on your answers or specify other grounds of legal uncertainty regarding liability for damage caused by AI: (2000 character(s) maximum) 0 out of 2000 characters used.

Question: Do you agree or disagree with the following statements?

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

The lack of adaptation of the current liability framework to AI may negatively affect trust in AI

The lack of adaptation of the current liability framework to AI may negatively affect the uptake of AI-enabled products and services

Question: Please elaborate on your answers. You may reflect in particular on the recently proposed AI Act and on the complementary roles played by liability rules and the other safety-related strands of the Commission’s AI policy in ensuring trust in AI and promoting the uptake of AI-enabled products and services: (2000 character(s) maximum) 0 out of 2000 characters used.

Question: If the current liability framework is not adapted, to what extent do you expect the following problems to occur in relation to the production, distribution or use of AI-enabled products or services, now or in the foreseeable future? This question is primarily aimed at businesses and business associations.

To a very large extent To a large extent To a moderate extent To a small extent Not at all Don't know/no answer

Companies will face additional costs (e.g. legal information costs, increased insurance costs)

Companies may defer or abandon certain investments in AI technologies

Companies may refrain from using AI when automating certain processes

Companies may limit their cross-border activities related to the production, distribution or use of AI-enabled products or services

Higher prices of AI-enabled products and services

Insurers will increase risk-premiums due to a lack of predictability of liability exposures

It will not be possible to insure some products/services

Negative impact on the roll-out of AI technologies in the internal market

Question: Please elaborate on your answers, in particular on whether your assessment is different for AI-enabled products than for AI-enabled services (2000 character(s) maximum) 0 out of 2000 characters used.

Question: With the growing number of AI-enabled products and services on the market, Member States may adapt their respective liability regimes to the specific challenges of AI, which could lead to increasing differences between national liability rules. The Product Liability Directive could also be interpreted in different ways by national courts for damage caused by AI.

If Member States adapt liability rules for AI in a divergent way, or national courts follow diverging interpretations of existing liability rules, to what extent do you expect this to cause the following problems in the EU? This question is primarily aimed at businesses and business associations.

To a very large extent To a large extent To a moderate extent To a small extent Not at all Don't know/no answer

Additional costs for companies (e.g. legal information costs, increased insurance costs) when producing, distributing or using AI-equipped products or services

Need for technological adaptations when providing AI-based cross-border services

Need to adapt AI technologies, distribution models (e.g. sale versus service provision) and cost management models in light of diverging national liability rules

Companies may limit their cross-border activities related to the production, distribution or use of AI-enabled products or services

Higher prices of AI-enabled products and services

Insurers will increase premiums due to more divergent liability exposures

Negative impact on the roll-out of AI technologies

Question: Please elaborate on your answers, in particular on whether your assessment is different for AI-enabled products than for AI-enabled services, as well as on other impacts of possible legal fragmentation (2000 character(s) maximum) 0 out of 2000 characters used.

Policy options

Question: Due to their specific characteristics, in particular their lack of transparency and explainability (‘black box effect’) and their high degree of autonomy, certain types of AI systems could challenge existing liability rules.

The Commission is considering the policy measures, described in the following questions, to ensure that victims of damage caused by these specific types of AI systems are not left with less protection than victims of damage caused by technologies that operate without AI. Such measures would be based on existing approaches in national liability regimes (e.g. alleviating the burden of proof for the injured party or strict liability for the producer). They would also complement the Commission’s other policy initiatives to ensure the safety of AI, such as the recently proposed AI Act, and provide a safety net in the event that an AI system causes damage.

Please note that the approaches to adapting the liability framework presented below relate only to civil liability, not to state or criminal liability. The proposed approaches focus on measures to ease the victim’s burden of proof (see next question) as well as a possible targeted harmonisation of strict liability and insurance solutions (subsequent questions). They aim to help the victim recover damage more easily.

Do you agree or disagree with the following approaches regarding the burden of proof? The answer options are not mutually exclusive. Regarding the Product Liability Directive, the following approaches build on the general options in the first part of this questionnaire.

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

The defendant (e.g. producer, user, service provider, operator) should be obliged to disclose necessary technical information (e.g. log data) to the injured party to enable the latter to prove the conditions of the claim

If the defendant refuses to disclose the information referred to in the previous answer option, courts should infer that the conditions to be proven by that information are fulfilled

Specifically for claims under the Product Liability Directive: if an AI-enabled product clearly malfunctioned (e.g. driverless vehicle swerving off the road despite no obstacles), courts should infer that it was defective and caused the damage

If the provider of an AI system failed to comply with their safety or other legal obligations to prevent harm (e.g. those proposed under the proposed AI Act), courts should infer that the damage was caused due to that person’s fault or that, for claims under the Product Liability Directive, the AI system was defective

If the user of an AI system failed to comply with their safety or other legal obligations to prevent harm (e.g. those proposed under the proposed AI Act), courts should infer that the damage was caused by that person’s fault

If, in a given case, it is necessary to establish how a complex and/or opaque AI system (i.e. an AI system with limited transparency and explainability) operates in order to substantiate a claim, the burden of proof should be shifted from the victim to the defendant in that respect

Specifically for claims under the Product Liability Directive: if a product integrating an AI system that continuously learns and adapts while in operation causes damage, the producer should be liable irrespective of defectiveness; the victim should have to prove only that the product caused the damage

Certain types of opaque or highly autonomous AI systems should be defined for which the burden of proof regarding fault and causation should always be on the person responsible for that AI system (reversal of burden of proof)

EU action to ease the victim’s burden of proof is not necessary or justified

Question: Please elaborate on your answers and describe any other measures you may find appropriate: (2000 character(s) maximum) 0 out of 2000 characters used.

Question: Separately from the strict liability of producers under the Product Liability Directive, national laws provide for a wide range of different strict liability schemes for the owner/user/operator. Strict liability means that a certain risk of damage is assigned to a person irrespective of fault.

A possible policy option at EU level could be to harmonise strict liability (full or minimum), separately from the Product Liability Directive, for damage caused by the operation of certain AI-enabled products or the provision of certain AI-enabled services. This could notably be considered in cases where the use of AI (e.g. in autonomous vehicles and autonomous drones) exposes the public to the risk of damage to important values like life, health and property. Where strict liability rules already exist in a Member State, e.g. for cars, the EU harmonisation would not lead to an additional strict liability regime.

Do you agree or disagree with the following approaches regarding liability for operating AI-enabled products and providing AI-enabled services creating a serious injury risk (e.g. life, health, property) for the public?

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

Full harmonisation of strict liability for operating AI-enabled products and providing AI-enabled services, limited to cases where these activities pose serious injury risks to the public

Harmonisation of strict liability for the cases mentioned in the previous option, but allowing Member States to maintain broader and/or more far-reaching national strict liability schemes applicable to other AI-enabled products and services

Strict liability for operating AI-enabled products and providing AI-enabled services should not be harmonised at EU level

Question: Please elaborate on your answer, describe any other approaches regarding strict liability you may find appropriate and/or indicate to which specific AI-enabled products and services strict liability should apply: (2000 character(s) maximum) 0 out of 2000 characters used.

Question: The availability, uptake and economic effects of insurance policies covering liability for damage are important factors in assessing the impacts of the measures described in the previous questions. Therefore, this question explores the role of (voluntary or mandatory) insurance solutions in general terms.

The subsequent questions concern possible EU policy measures regarding insurance. To what extent do you agree with the following statements?

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

Parties subject to possible harmonised strict liability rules as described in the previous question would likely be covered by (voluntary or mandatory) insurance

In cases where possible facilitations of the burden of proof would apply (as described in the question on approaches to burden of proof), the potentially liable party would likely be covered by (voluntary or mandatory) liability insurance

Insurance solutions (be they voluntary or mandatory) could limit the costs of potential damage for the liable person to the insurance premium

Insurance solutions (be they voluntary or mandatory) could ensure that the injured person receives compensation

Question: Please elaborate on your answers: (2000 character(s) maximum) 0 out of 2000 characters used.

Question: Under many national strict liability schemes, the person liable is required by law to take out insurance. A similar solution could be chosen at EU level for damage caused by certain types of AI systems that pose serious injury risks (e.g. life, health, property) to the public.

Possible EU rules would ensure that existing insurance requirements are not duplicated: if the operation of a certain product, such as motor vehicles or drones, is already subject to mandatory insurance coverage, using AI in such a product or service would not entail additional insurance requirements.

Do you agree or disagree with the following approach on insurance for the use of AI systems that poses a serious risk of injury to the public?

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

A harmonised insurance obligation should be laid down at EU level, where it does not exist yet, for using AI products and providing AI-based services that pose a serious injury risk (e.g. life, health, property) to the public

Question: Taking into account the description of various options presented in the previous questions, please rank the following options from 1 (like best) to 8 (like least)

1 2 3 4 5 6 7 8

Option 1: (Aside from measures to ease the burden of proof considered in Section I) Amending the Product Liability Directive to ease the burden on victims when proving an AI-enabled product was defective and caused the damage

Option 2: Targeted harmonisation of national rules on proof, e.g. by reversing the burden of proof under certain conditions, to ensure that it is not excessively difficult for victims to prove, as appropriate, fault and/or causation for damage caused by certain AI-enabled products and services

Option 3: Harmonisation of liability irrespective of fault (‘strict liability’) for operators of AI technologies that pose a serious injury risk (e.g. life, health, property) to the public

Option 4: option 3 + mandatory liability insurance for operators subject to strict liability

Option 5: option 1 + option 2

Option 6: option 1 + option 2 + option 3

Option 7: option 1 + option 2 + option 4

Option 8: No EU action. Outside the existing scope of the Product Liability Directive, each Member State would be free to adapt liability rules for AI if and as they see fit

Question: Please elaborate on your answers, also taking into account the interplay with the other strands of the Commission’s AI policy (in particular the proposed AI Act). Please also describe any other measures you may find appropriate: (2000 character(s) maximum) 0 out of 2000 characters used.

Types of compensable harm and admissibility of contractual liability waivers

Question: Aside from bodily injury or damage to physical objects, the use of technology can cause other types of damage, such as immaterial harm (e.g. pain and suffering). This is true not only for AI but also for other potential sources of harm. Coverage for such damage differs widely in Member States.

Do you agree or disagree with harmonising compensation for the following types of harm (aside from bodily injury and property damage), specifically for cases where using AI leads to harm? Please note that this question does not concern the Product Liability Directive – a question on the types of harm for which consumers can claim compensation under this Directive can be found in Section I. The answer options are not mutually exclusive.

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

Pure economic loss (e.g. loss of profit)

Loss of or damage to data (not covered by the GDPR) resulting in a verifiable economic loss

Immaterial harm like pain and suffering, reputational damage or psychological harm

Loss of or damage to data (not covered by the GDPR) not resulting in a verifiable economic loss

All the types of harm mentioned above

Question: Please specify any other types of harm: (500 character(s) maximum) 0 out of 500 characters used.

Question: Sometimes the person who has suffered damage has a contract with the person responsible. That contract may exclude or limit the right to compensation. Some Member States consider it necessary to prohibit or restrict all or certain such clauses. The Product Liability Directive also does not let producers limit or exclude their liability towards the injured person by contract.

If the liability of operators/users for damage caused by AI is harmonised at EU level, do you agree or disagree with the following approaches regarding contractual clauses excluding or limiting in advance the victim’s right to compensation?

Strongly agree Agree Neutral Disagree Strongly disagree No opinion

The admissibility of contractual liability waivers should not be addressed at all

Such contractual clauses should be prohibited vis-à-vis consumers

Such contractual clauses should be prohibited vis-à-vis consumers and between businesses

The contractual exclusion or limitation of liability should be prohibited only for certain types of harm (e.g. to life, body or health) and/or for harm arising from gross negligence or intent

Question: Please elaborate on your answer and specify if you would prefer a different approach, e.g. an approach differentiating by area of AI application: (2000 character(s) maximum) 0 out of 2000 characters used.

Additional information

Question: Are there any other issues that should be considered? (3000 character(s) maximum) 0 out of 3000 characters used.

Question: You can upload relevant quantitative data, reports/studies and position papers to support your views here:

Only files of the type pdf,txt,doc,docx,odt,rtf are allowed

Question: Do you agree to the Commission contacting you for a possible follow-up?

Answer: Yes
Answer: No


Contact: Mark.BEAMISH@ec.europa.eu

EUSurvey is supported by the European Commission's ISA² programme, which promotes interoperability solutions for European public administrations.


Whatever the European Commission comes up with at the end, we keep our hopes low in light of unbridled cronyism.
