EU AI Liability Directive
Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive)
(Text with EEA relevance)
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Article 114 thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Economic and Social Committee,
Having regard to the opinion of the Committee of the Regions,
Acting in accordance with the ordinary legislative procedure,
Whereas:
(1) Artificial Intelligence (‘AI’) is a set of enabling technologies which can contribute to a wide array of benefits across the entire spectrum of the economy and society. It has a large potential for technological progress and allows new business models in many sectors of the digital economy.
(2) At the same time, depending on the circumstances of its specific application and use, AI can generate risks and harm interests and rights that are protected by Union or national law. For instance, the use of AI can adversely affect a number of fundamental rights, including the rights to life, physical integrity, non-discrimination and equal treatment. Regulation (EU) …/… of the European Parliament and of the Council [the AI Act] provides for requirements intended to reduce risks to safety and fundamental rights, while other Union law instruments lay down general and sectoral product safety rules applicable also to AI-enabled machinery products and radio equipment. While such requirements intended to reduce risks to safety and fundamental rights are meant to prevent, monitor and address risks and thus address societal concerns, they do not provide individual relief to those that have suffered damage caused by AI. Existing requirements provide in particular for authorisations, checks, monitoring and administrative sanctions in relation to AI systems in order to prevent damage. They do not provide for compensation of the injured person for damage caused by an output or the failure to produce an output by an AI system.
(3) When an injured person seeks compensation for damage suffered, Member States’ general fault-based liability rules usually require that person to prove a negligent or intentionally damaging act or omission (‘fault’) by the person potentially liable for that damage, as well as a causal link between that fault and the relevant damage. However, when AI is interposed between the act or omission of a person and the damage, the specific characteristics of certain AI systems, such as opacity, autonomous behaviour and complexity, may make it excessively difficult, if not impossible, for the injured person to meet this burden of proof. In particular, it may be excessively difficult to prove that a specific input for which the potentially liable person is responsible had caused a specific AI system output that led to the damage at stake.
(4) In such cases, the level of redress afforded by national civil liability rules may be lower than in cases where technologies other than AI are involved in causing damage. Such compensation gaps may contribute to a lower level of societal acceptance of AI and trust in AI-enabled products and services.
(5) To reap the economic and societal benefits of AI and promote the transition to the digital economy, it is necessary to adapt in a targeted manner certain national civil liability rules to those specific characteristics of certain AI systems. Such adaptations should contribute to societal and consumer trust and thereby promote the roll-out of AI. Such adaptations should also maintain trust in the judicial system, by ensuring that victims of damage caused with the involvement of AI have the same effective compensation as victims of damage caused by other technologies.
(6) Interested stakeholders – injured persons suffering damage, potentially liable persons, insurers – face legal uncertainty as to how national courts, when confronted with the specific challenges of AI, might apply the existing liability rules in individual cases in order to achieve just results. In the absence of Union action, at least some Member States are likely to adapt their civil liability rules to address compensation gaps and legal uncertainty linked to the specific characteristics of certain AI systems. This would create legal fragmentation and internal market barriers for businesses that develop or provide innovative AI-enabled products or services. Small and medium-sized enterprises would be particularly affected.
(7) The purpose of this Directive is to contribute to the proper functioning of the internal market by harmonising certain national non-contractual fault-based liability rules, so as to ensure that persons claiming compensation for damage caused to them by an AI system enjoy a level of protection equivalent to that enjoyed by persons claiming compensation for damage caused without the involvement of an AI system. This objective cannot be sufficiently achieved by the Member States because the relevant internal market obstacles are linked to the risk of unilateral and fragmented regulatory measures at national level. Given the digital nature of the products and services falling within the scope of this Directive, the latter is particularly relevant in a cross-border context.
(8) The objective of ensuring legal certainty and preventing compensation gaps in cases where AI systems are involved can thus be better achieved at Union level. Therefore, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Directive does not go beyond what is necessary in order to achieve that objective.
(9) It is therefore necessary to harmonise in a targeted manner specific aspects of fault-based liability rules at Union level. Such harmonisation should increase legal certainty and create a level playing field for AI systems, thereby improving the functioning of the internal market as regards the production and dissemination of AI-enabled products and services.
(10) To ensure proportionality, it is appropriate to harmonise in a targeted manner only those fault-based liability rules that govern the burden of proof for persons claiming compensation for damage caused by AI systems. This Directive should not harmonise general aspects of civil liability which are regulated in different ways by national civil liability rules, such as the definition of fault or causality, the different types of damage that give rise to claims for damages, the distribution of liability over multiple tortfeasors, contributory conduct, the calculation of damages or limitation periods.
(11) The laws of the Member States concerning the liability of producers for damage caused by the defectiveness of their products are already harmonised at Union level by Council Directive 85/374/EEC. Those laws do not, however, affect Member States’ rules of contractual or non-contractual liability, such as warranty, fault or strict liability, based on grounds other than the defect of the product. While the revision of Council Directive 85/374/EEC seeks to clarify and ensure that injured persons can claim compensation for damage caused by defective AI-enabled products, it should therefore be clarified that the provisions of this Directive do not affect any rights which an injured person may have under national rules implementing Directive 85/374/EEC. In addition, in the field of transport, Union law regulating the liability of transport operators should remain unaffected by this Directive.
(12) [The Digital Services Act (DSA)] fully harmonises the rules applicable to providers of intermediary services in the internal market, covering the societal risks stemming from the services offered by those providers, including as regards the AI systems they use. This Directive does not affect the provisions of [the Digital Services Act (DSA)] that provide a comprehensive and fully harmonised framework for due diligence obligations for algorithmic decision-making by hosting service providers, including the exemption from liability for the dissemination of illegal content uploaded by recipients of their services where the conditions of that Regulation are met.
(13) Other than in respect of the presumptions it lays down, this Directive does not harmonise national laws regarding which party has the burden of proof or which degree of certainty is required as regards the standard of proof.
(14) This Directive should follow a minimum harmonisation approach. Such an approach allows claimants in cases of damage caused by AI systems to invoke more favourable rules of national law. Thus, national laws could, for example, maintain reversals of the burden of proof under national fault-based regimes, or national no-fault liability (referred to as ‘strict liability’) regimes of which there are already a large variety in national laws, possibly applying to damage caused by AI systems.
(15) Consistency with [the AI Act] should also be ensured. It is therefore appropriate for this Directive to use the same definitions in respect of AI systems, providers and users. In addition, this Directive should only cover claims for damages when the damage is caused by an output or the failure to produce an output by an AI system through the fault of a person, for example the provider or the user under [the AI Act]. There is no need to cover liability claims when the damage is caused by a human assessment followed by a human act or omission, while the AI system only provided information or advice which was taken into account by the relevant human actor. In the latter case, it is possible to trace the damage back to a human act or omission, as the AI system output is not interposed between the human act or omission and the damage, so that establishing causality is no more difficult than in situations where an AI system is not involved.
(16) Access to information about specific high-risk AI systems that are suspected of having caused damage is an important factor in ascertaining whether to claim compensation and in substantiating claims for compensation. Moreover, for high-risk AI systems, [the AI Act] provides for specific documentation, information and logging requirements, but does not give the injured person a right of access to that information. It is therefore appropriate to lay down rules on the disclosure of relevant evidence by those that have it at their disposal, for the purposes of establishing liability. This should also provide an additional incentive to comply with the relevant requirements laid down in [the AI Act] to document or record the relevant information.
(17) The large number of people usually involved in the design, development, deployment and operation of high-risk AI systems makes it difficult for injured persons to identify the person potentially liable for damage caused and to prove the conditions for a claim for damages. To allow injured persons to ascertain whether a claim for damages is well-founded, it is appropriate to grant potential claimants a right to request a court to order the disclosure of relevant evidence before submitting a claim for damages. Such disclosure should only be ordered where the potential claimant presents facts and information sufficient to support the plausibility of a claim for damages and has first requested the provider, the person subject to the obligations of a provider or the user to disclose such evidence at their disposal about specific high-risk AI systems that are suspected of having caused damage, and that request has been refused. Ordering disclosure should lead to a reduction of unnecessary litigation and avoid costs for the possible litigants caused by claims which are unjustified or likely to be unsuccessful. The refusal of the provider, the person subject to the obligations of a provider or the user to disclose evidence, prior to the request to the court, should not trigger the presumption of non-compliance with relevant duties of care by the person who refuses such disclosure.
(18) The limitation of disclosure of evidence as regards high-risk AI systems is consistent with [the AI Act], which provides certain specific documentation, record keeping and information obligations for operators involved in the design, development and deployment of high-risk AI systems. Such consistency also ensures the necessary proportionality by avoiding that operators of AI systems posing lower or no risk would be expected to document information to a level similar to that required for high-risk AI systems under [the AI Act].
(19) National courts should be able, in the course of civil proceedings, to order the disclosure or preservation of relevant evidence related to the damage caused by high-risk AI systems from persons who are already under an obligation to document or record information pursuant to [the AI Act], be they providers, persons under the same obligations as providers, or users of an AI system, either as defendants or third parties to the claim. There could be situations where the evidence relevant for the case is held by entities that would not be parties to the claim for damages but which are under an obligation to document or record such evidence pursuant to [the AI Act]. It is thus necessary to provide for the conditions under which such third parties to the claim can be ordered to disclose the relevant evidence.
(20) To maintain the balance between the interests of the parties involved in the claim for damages and of third parties concerned, the courts should order the disclosure of evidence only where this is necessary and proportionate for supporting the claim or potential claim for damages. In this respect, disclosure should only concern evidence that is necessary for a decision on the respective claim for damages, for example only the parts of the relevant records or data sets required to prove non-compliance with a requirement laid down by [the AI Act]. To ensure the proportionality of such disclosure or preservation measures, national courts should have effective means to safeguard the legitimate interests of all parties involved, for instance the protection of trade secrets within the meaning of Directive (EU) 2016/943 of the European Parliament and of the Council and of confidential information, such as information related to public or national security. In respect of trade secrets or alleged trade secrets which the court has identified as confidential within the meaning of Directive (EU) 2016/943, national courts should be empowered to take specific measures to ensure the confidentiality of trade secrets during and after the proceedings, while achieving a fair and proportionate balance between the trade-secret holder’s interest in maintaining secrecy and the interest of the injured person. This should include measures to restrict access to documents containing trade secrets and access to hearings or documents and transcripts thereof to a limited number of people. When deciding on such measures, national courts should take into account the need to ensure the right to an effective remedy and to a fair trial, the legitimate interests of the parties and, where appropriate, of third parties, and any potential harm to either party or, where appropriate, to third parties, resulting from the granting or rejection of such measures.
Moreover, to ensure a proportionate application of a disclosure measure towards third parties in claims for damages, the national courts should order disclosure from third parties only if the evidence cannot be obtained from the defendant.
(21) While national courts have the means of enforcing their orders for disclosure through various measures, any such enforcement measures could delay claims for damages and thus potentially create additional expenses for the litigants. For injured persons, such delays and additional expenses may make their recourse to an effective judicial remedy more difficult. Therefore, where a defendant in a claim for damages fails to disclose evidence at its disposal ordered by a court, it is appropriate to lay down a presumption of non-compliance with those duties of care which that evidence was intended to prove. This rebuttable presumption will reduce the duration of litigation and facilitate more efficient court proceedings. The defendant should be able to rebut that presumption by submitting evidence to the contrary.
(22) In order to address the difficulty of proving that a specific input for which the potentially liable person is responsible had caused a specific AI system output that led to the damage at stake, it is appropriate to provide, under certain conditions, for a presumption of causality. While in a fault-based claim the claimant usually has to prove the damage, the human act or omission constituting fault of the defendant and the causal link between the two, this Directive does not harmonise the conditions under which national courts establish fault. They remain governed by the applicable national law and, where harmonised, by applicable Union law. Similarly, this Directive does not harmonise the conditions related to the damage, for instance what damages are compensable, which are also regulated by applicable national and Union law. For the presumption of causality under this Directive to apply, the fault of the defendant should be established as a human act or omission which does not meet a duty of care under Union law or national law that is directly intended to protect against the damage that occurred. Thus, this presumption can apply, for example, in a claim for damages for physical injury when the court establishes the fault of the defendant for non-compliance with the instructions for use which are meant to prevent harm to natural persons. Non-compliance with duties of care that were not directly intended to protect against the damage that occurred does not lead to the application of the presumption; for example, a provider’s failure to file required documentation with competent authorities would not lead to the application of the presumption in claims for damages due to physical injury. It should also be necessary to establish that it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output.
Finally, the claimant should still be required to prove that the output or failure to produce an output gave rise to the damage.
(23) Such fault can be established in respect of non-compliance with Union rules which specifically regulate high-risk AI systems, such as the requirements introduced for certain high-risk AI systems by [the AI Act], requirements which may be introduced by future sectoral legislation for other high-risk AI systems according to [Article 2(2) of the AI Act], or duties of care which are linked to certain activities and which are applicable irrespective of whether AI is used for that activity. At the same time, this Directive neither creates nor harmonises the requirements or the liability of entities whose activity is regulated under those legal acts, and therefore does not create new liability claims. Establishing a breach of such a requirement that amounts to fault will be done according to the provisions of those applicable rules of Union law, since this Directive neither introduces new requirements nor affects existing requirements. For example, the exemption of liability for providers of intermediary services and the due diligence obligations to which they are subject pursuant to [the Digital Services Act] are not affected by this Directive. Similarly, compliance with the requirements imposed on online platforms to avoid unauthorised communication to the public of copyright-protected works is to be established under Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market and other relevant Union copyright law.
(24) In areas not harmonised by Union law, national law continues to apply and fault is established under the applicable national law. All national liability regimes have duties of care, taking as a standard of conduct different expressions of the principle of how a reasonable person should act, which also ensure the safe operation of AI systems in order to prevent damage to recognised legal interests. Such duties of care could, for instance, require users of AI systems to choose, for certain tasks, a particular AI system with concrete characteristics or to exclude certain segments of a population from being exposed to a particular AI system. National law can also introduce specific obligations meant to prevent risks for certain activities, which are applicable irrespective of whether AI is used for that activity, for example traffic rules, or obligations specifically designed for AI systems, such as additional national requirements for users of high-risk AI systems pursuant to Article 29(2) of [the AI Act]. This Directive neither introduces such requirements nor affects the conditions for establishing fault in case of breach of such requirements.
(25) Even when fault consisting of a non-compliance with a duty of care directly intended to protect against the damage that occurred is established, not every fault should lead to the application of the rebuttable presumption linking it to the output of the AI. Such a presumption should only apply when it can be considered reasonably likely, from the circumstances in which the damage occurred, that such fault has influenced the output produced by the AI system or the failure of the AI system to produce an output that gave rise to the damage. It can be for example considered reasonably likely that the fault has influenced the output or failure to produce an output, when that fault consists in breaching a duty of care in respect of limiting the perimeter of operation of the AI system and the damage occurred outside the perimeter of operation. On the contrary, a breach of a requirement to file certain documents or to register with a given authority, even though this might be foreseen for that particular activity or even be applicable expressly to the operation of an AI system, could not be considered as reasonably likely to have influenced the output produced by the AI system or the failure of the AI system to produce an output.
(26) This Directive covers the fault constituting non-compliance with certain listed requirements laid down in Chapters 2 and 3 of [the AI Act] for providers and users of high-risk AI systems, the non-compliance with which can lead, under certain conditions, to a presumption of causality. The AI Act provides for full harmonisation of requirements for AI systems, unless otherwise explicitly laid down therein. It harmonises the specific requirements for high-risk AI systems. Hence, for the purposes of claims for damages in which a presumption of causality according to this Directive is applied, the potential fault of providers or persons subject to the obligations of a provider pursuant to [the AI Act] is established only through a non-compliance with such requirements. Given that in practice it may be difficult for the claimant to prove such non-compliance when the defendant is a provider of the AI system, and in full consistency with the logic of [the AI Act], this Directive should also provide that the steps undertaken by the provider within the risk management system and the results of the risk management system, i.e. the decision to adopt or not to adopt certain risk management measures, should be taken into account in the determination of whether the provider has complied with the relevant requirements under the AI Act referred to in this Directive. The risk management system put in place by the provider pursuant to [the AI Act] is a continuous iterative process run throughout the lifecycle of the high-risk AI system, whereby the provider ensures compliance with mandatory requirements meant to mitigate risks and can, therefore, be a useful element for the purpose of the assessment of this compliance. This Directive also covers the cases of users’ fault, when this fault consists in non-compliance with certain specific requirements set by [the AI Act]. 
In addition, the fault of users of high-risk AI systems may be established following non-compliance with other duties of care laid down in Union or national law, in light of Article 29 (2) of [the AI Act].
(27) While the specific characteristics of certain AI systems, such as autonomy and opacity, could make it excessively difficult for the claimant to meet the burden of proof, there could be situations where such difficulties do not exist because sufficient evidence and expertise could be available to the claimant to prove the causal link. This could be the case, for example, in respect of high-risk AI systems where the claimant could reasonably access sufficient evidence and expertise through the documentation and logging requirements pursuant to [the AI Act]. In such situations, the court should not apply the presumption.
(28) The presumption of causality could also apply to AI systems that are not high-risk AI systems, because there could be excessive difficulties of proof for the claimant. For example, such difficulties could be assessed in light of the characteristics of certain AI systems, such as autonomy and opacity, which render the explanation of the inner functioning of the AI system very difficult in practice, negatively affecting the ability of the claimant to prove the causal link between the fault of the defendant and the AI output. A national court should apply the presumption where the claimant is in an excessively difficult position to prove causation, since the claimant would otherwise be required to explain how the AI system was led by the human act or omission that constitutes fault to produce the output, or the failure to produce an output, which gave rise to the damage. However, the claimant should neither be required to explain the characteristics of the AI system concerned nor how these characteristics make it harder to establish the causal link.
(29) The application of the presumption of causality is meant to ensure for the injured person a similar level of protection as in situations where AI is not involved and where causality may therefore be easier to prove. Nevertheless, alleviating the burden of proving causation is not always appropriate under this Directive where the defendant is not a professional user but rather a person using the AI system for their private activities. In such circumstances, in order to balance the interests of the injured person and the non-professional user, it needs to be taken into account whether such non-professional users can add to the risk of an AI system causing damage through their behaviour. If the provider of an AI system has complied with all its obligations and, in consequence, that system was deemed sufficiently safe to be put on the market for a given use by non-professional users and it is then used for that task, a presumption of causality should not apply for the simple launch of the operation of such a system by such non-professional users. A non-professional user that buys an AI system and simply launches it according to its purpose, without interfering materially with the conditions of operation, should not be covered by the causality presumption laid down by this Directive. However, if a national court determines that a non-professional user materially interfered with the conditions of operation of an AI system, or was required and able to determine the conditions of operation of the AI system and failed to do so, then the presumption of causality should apply, where all the other conditions are fulfilled. This could be the case, for example, when the non-professional user does not comply with the instructions of use or with other applicable duties of care when choosing the area of operation or when setting performance conditions of the AI system.
This is without prejudice to the fact that the provider should determine the intended purpose of an AI system, including the specific context and conditions of use, and eliminate or minimise the risks of that system as appropriate at the time of the design and development, taking into account the knowledge and expertise of the intended user.
(30) Since this Directive introduces a rebuttable presumption, the defendant should be able to rebut it, in particular by showing that its fault could not have caused the damage.
(31) It is necessary to provide for a review of this Directive [five years] after the end of the transposition period. In particular, that review should examine whether there is a need to create no-fault liability rules for claims against the operator, in so far as these are not already covered by other Union liability rules, in particular Directive 85/374/EEC, combined with mandatory insurance for the operation of certain AI systems, as suggested by the European Parliament. In accordance with the principle of proportionality, it is appropriate to assess such a need in the light of relevant technological and regulatory developments in the coming years, taking into account the effect and impact on the roll-out and uptake of AI systems, especially for SMEs. Such a review should consider, among others, risks involving damage to important legal values like life, health and property of unwitting third parties through the operation of AI-enabled products or services. That review should also analyse the effectiveness of the measures provided for in this Directive in dealing with such risks, as well as the development of appropriate solutions by the insurance market. To ensure the availability of the information necessary to conduct such a review, it is necessary to collect data and other necessary evidence covering the relevant matters.
(32) Given the need to make adaptations to national civil liability and procedural rules to foster the rolling-out of AI-enabled products and services under beneficial internal market conditions, societal acceptance and consumer trust in AI technology and the justice system, it is appropriate to set a deadline of not later than [two years after the entry into force] of this Directive for Member States to adopt the necessary transposition measures.
(33) In accordance with the Joint Political Declaration of 28 September 2011 of Member States and the Commission on explanatory documents, Member States have undertaken to accompany, in justified cases, the notification of their transposition measures with one or more documents explaining the relationship between the components of a directive and the corresponding parts of national transposition instruments. With regard to this Directive, the legislator considers the transmission of such documents to be justified,
HAVE ADOPTED THIS DIRECTIVE:
Article 1
Subject matter and scope
1. This Directive lays down common rules on:
(a) the disclosure of evidence on high-risk artificial intelligence (AI) systems to enable a claimant to substantiate a non-contractual fault-based civil law claim for damages;
(b) the burden of proof in the case of non-contractual fault-based civil law claims brought before national courts for damages caused by an AI system.
2. This Directive applies to non-contractual fault-based civil law claims for damages, in cases where the damage caused by an AI system occurs after [the end of the transposition period].
This Directive does not apply to criminal liability.
3. This Directive shall not affect:
(a) rules of Union law regulating conditions of liability in the field of transport;
(b) any rights which an injured person may have under national rules implementing Directive 85/374/EEC;
(c) the exemptions from liability and the due diligence obligations as laid down in [the Digital Services Act]; and
(d) national rules determining which party has the burden of proof, which degree of certainty is required as regards the standard of proof, or how fault is defined, other than in respect of what is provided for in Articles 3 and 4.
4. Member States may adopt or maintain national rules that are more favourable for claimants to substantiate a non-contractual civil law claim for damages caused by an AI system, provided such rules are compatible with Union law.
Article 2
Definitions
For the purposes of this Directive, the following definitions shall apply:
(1) ‘AI system’ means an AI system as defined in [Article 3(1) of the AI Act];
(2) ‘high-risk AI system’ means an AI system referred to in [Article 6 of the AI Act];
(3) ‘provider’ means a provider as defined in [Article 3(2) of the AI Act];
(4) ‘user’ means a user as defined in [Article 3(4) of the AI Act];
(5) ‘claim for damages’ means a non-contractual fault-based civil law claim for compensation of the damage caused by an output of an AI system or the failure of such a system to produce an output where such an output should have been produced;
(6) ‘claimant’ means a person bringing a claim for damages that:
(a) has been injured by an output of an AI system or by the failure of such a system to produce an output where such an output should have been produced;
(b) has succeeded to or has been subrogated to the right of an injured person by virtue of law or contract; or
(c) is acting on behalf of one or more injured persons, in accordance with Union or national law.
(7) ‘potential claimant’ means a natural or legal person who is considering but has not yet brought a claim for damages;
(8) ‘defendant’ means the person against whom a claim for damages is brought;
(9) ‘duty of care’ means a required standard of conduct, set by national or Union law, in order to avoid damage to legal interests recognised at national or Union law level, including life, physical integrity, property and the protection of fundamental rights.
Article 3
Disclosure of evidence and rebuttable presumption of non-compliance
1. Member States shall ensure that national courts are empowered to order a provider, a person subject to the obligations of a provider pursuant to [Article 24 or Article 28(1) of the AI Act] or a user to disclose relevant evidence at its disposal about a specific high-risk AI system that is suspected of having caused damage. Such an order may be made upon the request of a claimant, or upon the request of a potential claimant who has previously asked one of those persons to disclose that evidence but was refused.
In support of that request, the potential claimant must present facts and evidence sufficient to support the plausibility of a claim for damages.
2. In the context of a claim for damages, the national court shall only order the disclosure of the evidence by one of the persons listed in paragraph 1, if the claimant has undertaken all proportionate attempts at gathering the relevant evidence from the defendant.
3. Member States shall ensure that national courts, upon the request of a claimant, are empowered to order specific measures to preserve the evidence mentioned in paragraph 1.
4. National courts shall limit the disclosure of evidence to that which is necessary and proportionate to support a potential claim or a claim for damages and the preservation to that which is necessary and proportionate to support such a claim for damages.
In determining whether an order for the disclosure or preservation of evidence is proportionate, national courts shall consider the legitimate interests of all parties, including third parties concerned, in particular in relation to the protection of trade secrets within the meaning of Article 2(1) of Directive (EU) 2016/943 and of confidential information, such as information related to public or national security.
Member States shall ensure that, where the disclosure of a trade secret or alleged trade secret which the court has identified as confidential within the meaning of Article 9(1) of Directive (EU) 2016/943 is ordered, national courts are empowered, upon a duly reasoned request of a party or on their own initiative, to take specific measures necessary to preserve confidentiality when that evidence is used or referred to in legal proceedings.
Member States shall also ensure that the person ordered to disclose or to preserve the evidence mentioned in paragraphs 1 or 2 has appropriate procedural remedies in response to such orders.
5. Where a defendant fails to comply with an order by a national court in a claim for damages to disclose or to preserve evidence at its disposal pursuant to paragraphs 1 or 2, a national court shall presume the defendant’s non-compliance with a relevant duty of care, in particular in the circumstances referred to in Article 4(2) or (3), that the evidence requested was intended to prove for the purposes of the relevant claim for damages.
The defendant shall have the right to rebut that presumption.
Article 4
Rebuttable presumption of a causal link in the case of fault
1. Subject to the requirements laid down in this Article, national courts shall presume, for the purposes of applying liability rules to a claim for damages, the causal link between the fault of the defendant and the output produced by the AI system or the failure of the AI system to produce an output, where all of the following conditions are met:
(a) the claimant has demonstrated, or the court has presumed pursuant to Article 3(5), the fault of the defendant, or of a person for whose behaviour the defendant is responsible, consisting in the non-compliance with a duty of care laid down in Union or national law directly intended to protect against the damage that occurred;
(b) it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output;
(c) the claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.
2. In the case of a claim for damages against a provider of a high-risk AI system subject to the requirements laid down in chapters 2 and 3 of Title III of [the AI Act] or a person subject to the provider’s obligations pursuant to [Article 24 or Article 28(1) of the AI Act], the condition of paragraph 1 letter (a) shall be met only where the claimant has demonstrated that the provider or, where relevant, the person subject to the provider’s obligations, failed to comply with any of the following requirements laid down in those chapters, taking into account the steps undertaken in and the results of the risk management system pursuant to [Article 9 and Article 16 point (a) of the AI Act]:
(a) the AI system is a system which makes use of techniques involving the training of models with data and which was not developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in [Article 10(2) to (4) of the AI Act];
(b) the AI system was not designed and developed in a way that meets the transparency requirements laid down in [Article 13 of the AI Act];
(c) the AI system was not designed and developed in a way that allows for an effective oversight by natural persons during the period in which the AI system is in use pursuant to [Article 14 of the AI Act];
(d) the AI system was not designed and developed so as to achieve, in the light of its intended purpose, an appropriate level of accuracy, robustness and cybersecurity pursuant to [Article 15 and Article 16, point (a), of the AI Act]; or
(e) the necessary corrective actions were not immediately taken to bring the AI system in conformity with the obligations laid down in [Title III, Chapter 2 of the AI Act] or to withdraw or recall the system, as appropriate, pursuant to [Article 16, point (g), and Article 21 of the AI Act].
3. In the case of a claim for damages against a user of a high-risk AI system subject to the requirements laid down in chapters 2 and 3 of Title III of [the AI Act], the condition of paragraph 1 letter (a) shall be met where the claimant proves that the user:
(a) did not comply with its obligations to use or monitor the AI system in accordance with the accompanying instructions of use or, where appropriate, suspend or interrupt its use pursuant to [Article 29 of the AI Act]; or
(b) exposed the AI system to input data under its control which is not relevant in view of the system’s intended purpose pursuant to [Article 29(3) of the AI Act].
4. In the case of a claim for damages concerning a high-risk AI system, a national court shall not apply the presumption laid down in paragraph 1 where the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link mentioned in paragraph 1.
5. In the case of a claim for damages concerning an AI system that is not a high-risk AI system, the presumption laid down in paragraph 1 shall only apply where the national court considers it excessively difficult for the claimant to prove the causal link mentioned in paragraph 1.
6. In the case of a claim for damages against a defendant who used the AI system in the course of a personal, non-professional activity, the presumption laid down in paragraph 1 shall apply only where the defendant materially interfered with the conditions of the operation of the AI system or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so.
7. The defendant shall have the right to rebut the presumption laid down in paragraph 1.
Article 5
Evaluation and targeted review
1. By [DATE five years after the end of the transposition period], the Commission shall review the application of this Directive and present a report to the European Parliament, to the Council and to the European Economic and Social Committee, accompanied, where appropriate, by a legislative proposal.
2. The report shall examine the effects of Articles 3 and 4 on achieving the objectives pursued by this Directive. In particular, it should evaluate the appropriateness of no-fault liability rules for claims against the operators of certain AI systems, as long as not already covered by other Union liability rules, and the need for insurance coverage, while taking into account the effect and impact on the roll-out and uptake of AI systems, especially for SMEs.
3. The Commission shall establish a monitoring programme for preparing the report pursuant to paragraphs 1 and 2, setting out how and at what intervals the data and other necessary evidence will be collected. The programme shall specify the action to be taken by the Commission and by the Member States in collecting and analysing the data and other evidence. For the purposes of that programme, Member States shall communicate the relevant data and evidence to the Commission by [31 December of the second full year following the end of the transposition period] and by the end of each subsequent year.
Article 6
Amendment to Directive (EU) 2020/1828
In Annex I to Directive (EU) 2020/1828 40 , the following point (67) is added:
“(67) Directive (EU) …/… of the European Parliament and of the Council of … on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) (OJ L …, …, p. …).”.
Article 7
Transposition
1. Member States shall bring into force the laws, regulations and administrative provisions necessary to comply with this Directive by [two years after the entry into force] at the latest. They shall forthwith communicate to the Commission the text of those provisions.
When Member States adopt those provisions, they shall contain a reference to this Directive or be accompanied by such a reference on the occasion of their official publication. Member States shall determine how such reference is to be made.
2. Member States shall communicate to the Commission the text of the main provisions of national law which they adopt in the field covered by this Directive.
Article 8
Entry into force
This Directive shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.
Article 9
Addressees
This Directive is addressed to the Member States.
Done at Brussels,
For the European Parliament
The President
For the Council
The President
---
(31) [Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) – COM(2021) 206 final]
(32) [Proposal for a Regulation of the European Parliament and of the Council on general product safety (COM(2021) 346 final)]
(33) [Proposal for a Regulation of the European Parliament and of the Council on machinery products (COM(2021) 202 final)]
(34) Commission Delegated Regulation (EU) 2022/30 supplementing Directive 2014/53/EU of the European Parliament and of the Council with regard to the application of the essential requirements referred to in Article 3(3), points (d), (e) and (f), of that Directive (OJ L 7, 12.1.2022, p. 6).
(35) Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (OJ L 210, 7.8.1985, p. 29).
(36) [Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) – COM(2020) 825 final]
(37) Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure (OJ L 157, 15.6.2016, p. 1).
(38) European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)) – OJ C 404, 6.10.2021, p. 107.
(39) OJ C 369, 17.12.2011, p. 14.
(40) Directive (EU) 2020/1828 of the European Parliament and of the Council of 25 November 2020 on representative actions for the protection of the collective interests of consumers and repealing Directive 2009/22/EC (OJ L 409, 4.12.2020, p. 1).