EUROPEAN COMMISSION
Brussels, 19.2.2020
COM(2020) 64 final

REPORT FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL AND THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE

Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics

1. Introduction

Artificial Intelligence (AI)1, the Internet of Things (IoT)2 and robotics will create new opportunities and benefits for our society. The Commission has recognised the importance and potential of these technologies and the need for significant investment in these areas.3 It is committed to making Europe a world leader in AI, the IoT and robotics. In order to achieve this goal, a clear and predictable legal framework addressing the technological challenges is required.

1.1. The existing safety and liability framework

The overall objective of the safety and liability legal frameworks is to ensure that all products and services, including those integrating emerging digital technologies, operate safely, reliably and consistently, and that damage which has occurred is remedied efficiently. High levels of safety for products and systems integrating new digital technologies, and robust mechanisms remedying damage that has occurred (i.e. the liability framework), contribute to better protection of consumers. They also create trust in these technologies, a prerequisite for their uptake by industry and users. This, in turn, will leverage the competitiveness of our industry and contribute to the objectives of the Union4. A clear safety and liability framework is particularly important when new technologies like AI, the IoT and robotics emerge, both with a view to ensuring consumer protection and to providing legal certainty for businesses.

The Union has a robust and reliable safety and product liability regulatory framework and a robust body of safety standards, complemented by national, non-harmonised liability legislation. Together, they ensure the well-being of our citizens in the Single Market and encourage innovation and technological uptake. However, AI, the IoT and robotics are transforming the characteristics of many products and services.

The Communication on Artificial Intelligence for Europe5, adopted on 25 April 2018, announced that the Commission would submit a report assessing the implications of the emerging digital technologies on the existing safety and liability frameworks. This report aims to identify and examine the broader implications for, and potential gaps in, the liability and safety frameworks for AI, the IoT and robotics. The orientations provided in this report, which accompanies the White Paper on Artificial Intelligence, are put forward for discussion and are part of the broader consultation of stakeholders. The safety section builds on the evaluation6 of the Machinery Directive7 and the work with the relevant expert groups8. The liability section builds on the evaluation9 of the Product Liability Directive10, the input of the relevant expert groups11 and contacts with stakeholders. This report does not aim to provide an exhaustive overview of the existing rules for safety and liability, but focuses on the key issues identified so far.

1 The definition of Artificial Intelligence of the High-Level Expert Group (AI HLEG) is available at ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines
2 The definition of the Internet of Things provided by Recommendation ITU-T Y.2060 is available at itu.int/ITU-T/recommendations/rec.aspx?rec=y.2060
3 SWD(2016) 110, COM(2017) 9, COM(2018) 237 and COM(2018) 795.
4 ec.europa.eu/growth/industry/policy_en
5 eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN. The accompanying Staff Working Document SWD(2018) 137 (eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX%3A52018SC0137) provided a first mapping of liability challenges that occur in the context of emerging digital technologies.
6 SWD(2018) 161 final.
1.2. Characteristics of AI, IoT and robotics technologies

AI, IoT and robotics share many characteristics. They can combine connectivity, autonomy and data dependency to perform tasks with little or no human control or supervision. AI-equipped systems can also improve their own performance by learning from experience. Their complexity is reflected both in the plurality of economic operators involved in the supply chain and in the multiplicity of components, parts, software, systems and services which together form the new technological ecosystems. Added to this is their openness to updates and upgrades after their placement on the market. The vast amounts of data involved, the reliance on algorithms and the opacity of AI decision-making make it more difficult to predict the behaviour of an AI-enabled product and to understand the potential causes of damage. Finally, connectivity and openness can also expose AI and IoT products to cyber-threats.

1.3. Opportunities created by AI, IoT and robotics

Increasing users' trust in, and the social acceptance of, emerging technologies, improving products, processes and business models, and helping European manufacturers to become more efficient are only some of the opportunities created by AI, IoT and robotics. Beyond productivity and efficiency gains, AI also promises to enable humans to develop intelligence not yet reached, opening the door to new discoveries and helping to solve some of the world's biggest challenges: from treating chronic diseases, predicting disease outbreaks or reducing fatality rates in traffic accidents, to fighting climate change or anticipating cybersecurity threats.

These technologies can bring many benefits by improving the safety of products, making them less prone to certain risks. For instance, connected and automated vehicles could improve road safety, as most road accidents are currently caused by human errors12. Moreover, IoT systems are designed to receive and process vast amounts of data from […]

7 Directive 2006/42/EC.
8 The Consumer Safety Network, as established in Directive 2001/95/EC on general product safety (GPSD), and the expert groups on the Machinery Directive 2006/42/EC and the Radio Equipment Directive 2014/53/EU, composed of Member States, industry and other stakeholders such as consumer associations.
9 COM(2018) 246 final.
10 Directive 85/374/EEC.
11 The Expert Group on Liability and New Technologies was created to provide the Commission with expertise on the applicability of the Product Liability Directive and national civil liability rules, and with assistance in developing guiding principles for possible adaptations of applicable laws related to new technologies. It consists of two formations, the Product Liability Formation and the New Technologies Formation; see ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupDetail
[…]

The self-learning feature of AI products and systems may enable the machine to take decisions that deviate from what was initially intended by the producers and, consequently, from what is expected by the users. This raises questions about human control, so that humans could choose how and whether to delegate decisions to AI products and systems, in order to accomplish human-chosen objectives42. The existing Union product safety legislation does not explicitly address human oversight in the context of self-learning AI products and systems43. The relevant Union pieces of legislation may foresee specific requirements for human oversight, as a safeguard, from the product design stage and throughout the lifecycle of AI products and systems (see the illustrative sketch below).

The future "behaviour" of AI applications could generate mental health risks44 for users, deriving, for example, from their collaboration with humanoid AI robots and systems, at home or in working environments. In this respect, safety is today generally used to refer to the user's perceived threat of physical harm that may come from the emerging digital technology.

[…] for example, in railway transport legislation, when a railway vehicle is modified after its certification, a specific procedure is imposed on the author of the modification and clear criteria are defined in order to determine whether the authority needs to be involved or not.
36 […] "instructions and safety information in a language which can be easily understood by consumers and other end-users, as determined by the Member State concerned."
37 Article 10(8), referring to the instructions for the end user, and Annex VI, referring to the EU Declaration of Conformity.
38 So far, "self-learning" is used in the context of AI mostly to indicate that machines are capable of learning during their training; it is not yet a requirement that AI machines continue learning after they are deployed; on the contrary, especially in healthcare, AI machines normally stop learning after their training has successfully ended. Thus, at this stage, the autonomous behaviour deriving from AI systems does not imply that the product is performing tasks not foreseen by the developers.
39 This is in line with section 2.1 of the Blue Guide on the implementation of EU product rules 2016.
40 Article 5 of Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety.
41 In case of any change to the railway system that may have an impact on safety (e.g. a technical or operational change, or an organisational change which could impact the operational or maintenance process), the process to follow is described in Annex I to Commission Implementing Regulation (EU) 2015/1136 (OJ L 185, 14.7.2015, p. 6). In case of a significant change, a safety assessment report should be provided to the proposer of the change by an independent assessment body (which could be the national safety authority or another technically competent body). Following the risk analysis process, the proposer of the change will apply the appropriate measures to mitigate the risks (if the proposer is a railway undertaking or an infrastructure manager, the application of the regulation is part of its safety management system, the application of which is supervised by the NSA).
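As a purely illustrative aside, the human-oversight safeguard referred to above can be pictured as a decision gate: the system acts autonomously only above a configured confidence level and otherwise defers to a human operator. The following minimal Python sketch uses hypothetical names throughout (`Decision`, `decide_with_oversight`, the threshold value); it is one possible design, not a requirement drawn from the report or from EU legislation.

```python
# Hypothetical sketch of a human-oversight safeguard: the AI system acts
# autonomously only when its confidence is high; otherwise the decision is
# deferred to a human operator. All names are illustrative stand-ins.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    action: str
    confidence: float  # model's self-reported confidence in [0, 1]


def decide_with_oversight(
    proposed: Decision,
    confidence_threshold: float,
    ask_human: Callable[[Decision], str],
) -> str:
    """Return the action to execute, deferring to a human when the
    model's confidence falls below the configured threshold."""
    if proposed.confidence >= confidence_threshold:
        return proposed.action
    # Safeguard: a human chooses whether and how to delegate the decision.
    return ask_human(proposed)


if __name__ == "__main__":
    proposed = Decision(action="dispense medication", confidence=0.62)
    # In a real deployment this would route to an operator console;
    # here we simply simulate a conservative human override.
    chosen = decide_with_oversight(
        proposed,
        confidence_threshold=0.90,
        ask_human=lambda d: f"escalate: review '{d.action}' manually",
    )
    print(chosen)  # -> escalate: review 'dispense medication' manually
```

The threshold and the escalation path would in practice be design parameters fixed at the product design stage and maintained over the product's lifecycle, in line with the oversight requirements discussed above.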
At the same time, safe products are defined in the Union legal framework as products that do not present any risk, or only the minimum risks, to the safety and health of persons. It is commonly agreed that the definition of health includes both physical and mental well-being. However, mental health risks should be explicitly covered within the concept of product safety in the legislative framework. For example, autonomy should not cause excessive stress and discomfort for extended periods or harm mental health. In this regard, the factors that positively affect the sense of safety of older people45 are considered to be: having secure relationships with health care service staff, having control over daily routines, and being informed about them. Producers of robots interacting with older people should take these factors into consideration to prevent mental health risks. Explicit obligations for producers of, among others, AI humanoid robots to consider the immaterial harm their products could cause to users, in particular vulnerable users such as elderly persons in care environments, could be considered for the scope of relevant EU legislation.

Another essential characteristic of AI-based products and systems is data dependency. Data accuracy and relevance are essential to ensure that AI-based systems and products take the decisions intended by the producer. The Union product safety legislation does not explicitly address the risks to safety derived from faulty data. However, according to the "use" of the product, producers should anticipate, during the design and testing phases, the accuracy of the data and its relevance for safety functions. For example, an AI-based system designed to detect specific objects may have difficulty recognising items in poor lighting conditions, so designers should include data coming from product tests in both typical and poorly lit environments (one way of checking this is sketched below).

Another example relates to agricultural robots, such as fruit-picking robots, aimed at detecting and locating ripe fruits on trees or on the ground. While the algorithms involved already show classification success rates of over 90%, a shortcoming in the datasets fuelling those algorithms may lead those robots to make a poor decision and, as a consequence, injure an animal or a person.

The question arises whether the Union product safety legislation should contain specific requirements addressing the risks to safety of faulty data at the design stage, as well as mechanisms to ensure that the quality of data is maintained throughout the use of AI products and systems.

42 Policy and Investment Recommendations for Trustworthy AI, High-Level Expert Group on Artificial Intelligence, June 2019.
43 This does not, however, exclude that oversight may be necessary in a given situation as a result of some of the existing, more general obligations concerning the placing of the product on the market.
44 WHO Constitution, first bullet point: "Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity." (who.int/about/who-we-are/constitution)
45 Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction, pp. 237-264, Neziha Akalin, Annica Kristoffersson and Amy Loutfi, July 2019.
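To make the lighting example concrete, here is a minimal Python sketch of how a producer could probe such a dataset shortcoming during the testing phase: darken copies of well-lit test images to approximate poor lighting and measure the accuracy gap between the two conditions. Everything here is an illustrative assumption (the `detector` callable, the darkening parameters, the toy data), not a method taken from the report.

```python
# Illustrative robustness check: compare a detector's accuracy on typical
# images against darkened copies that approximate poor lighting.

import numpy as np


def simulate_low_light(image: np.ndarray, gain: float = 0.25,
                       noise_std: float = 8.0) -> np.ndarray:
    """Darken an 8-bit image and add sensor-like noise."""
    dark = image.astype(np.float32) * gain
    dark += np.random.normal(0.0, noise_std, size=image.shape)
    return np.clip(dark, 0, 255).astype(np.uint8)


def accuracy(detector, images, labels) -> float:
    preds = [detector(img) for img in images]
    return float(np.mean([p == y for p, y in zip(preds, labels)]))


def lighting_gap(detector, images, labels) -> float:
    """Accuracy drop between typical and simulated poor lighting; a large
    gap signals a dataset/robustness shortcoming to fix at design time."""
    dark_images = [simulate_low_light(img) for img in images]
    return accuracy(detector, images, labels) - accuracy(detector, dark_images, labels)


# Toy usage with a trivial brightness-threshold "detector" and random data:
rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, size=(8, 8), dtype=np.uint8) for _ in range(20)]
labels = [int(img.mean() > 127) for img in imgs]

def naive(img): return int(img.mean() > 127)

print(f"accuracy gap under low light: {lighting_gap(naive, imgs, labels):.2f}")
```

The naive detector collapses under darkening, which is exactly the kind of gap that including poorly lit test data in the design phase would reveal.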
Opacity is another main characteristic of some AI-based products and systems, which may result from their ability to improve their performance by learning from experience. Depending on the methodological approach, AI-based products and systems can be characterised by various degrees of opacity. This may make the decision-making process of the system difficult to trace ("black box" effect). Humans may not need to understand every single step of the decision-making process, but, as AI algorithms grow more advanced and are deployed in critical domains, it is decisive that humans are able to understand how the algorithmic decisions of the system have been reached. This would be particularly important for the ex-post mechanism of enforcement, as it would allow enforcement authorities to trace the responsibility for the behaviours and choices of AI systems. This is also acknowledged by the Commission Communication on Building Trust in Human-Centric Artificial Intelligence46.

The Union product safety legislation does not explicitly address the increasing risks derived from the opacity of systems based on algorithms. It is therefore necessary to consider requirements for the transparency of algorithms, as well as for robustness, accountability and, when relevant, human oversight and unbiased outcomes47, which are particularly important for the ex-post mechanism of enforcement and for building trust in the use of those technologies. One way of tackling this challenge […]
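As an illustration of what such ex-post traceability could look like in practice, consider a minimal sketch, under assumed names, of an append-only audit log: for every algorithmic decision it records the model version, a hash of the input and the output produced, so that an investigation can later reconstruct how a given decision was reached. None of this is prescribed by the report or by any specific EU requirement.

```python
# Hedged sketch of one possible transparency measure: an append-only audit
# log for algorithmic decisions. Field names are illustrative only.

import hashlib
import json
import time


def log_decision(log_path: str, model_version: str,
                 model_input: bytes, model_output: str) -> None:
    """Append one decision record as a JSON line, linking the output to
    the exact model version and a fingerprint of the input."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(model_input).hexdigest(),
        "output": model_output,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")


# Example: record a single classification so it can be audited later.
log_decision("decisions.log", "fruit-picker-v1.3",
             b"<raw camera frame bytes>", "ripe_apple")
```

Hashing the input rather than storing it verbatim is one possible trade-off between traceability and data minimisation; any real scheme would need to fit the applicable data protection rules.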