Regulatory Alternatives for AI

… ( Braithwaite and Drahos, 2000 ; Drahos, 2017 ). During the second half of the 20th century, an appropriate form for a regulatory scheme was seen as involving a regulatory body that had available to it a comprehensive, gradated range of measures, in the form of an 'enforcement pyramid' or 'compliance pyramid' ( Ayres and Braithwaite, 1992 , p. 35). That model envisages a broad base of encouragement, including education and guidance, which underpins mediation and arbitration, with sanctions and enforcement mechanisms such as directions and restrictions available for use when necessary, and suspension and cancellation powers to deal with serious or repeated breaches.

In recent decades, however, further forms of regulation have emerged, many of them reflecting the power of regulatees to resist and subvert the exercise of power over their behaviour. The notion of governance has been supplanting that of government ( Jordan et al., 2005 ). Much recent literature has focussed on deregulation, through such mechanisms as regulatory impact assessments designed to justify the ratcheting down of measures that constrain corporate freedom, and euphemisms such as 'better regulation' to disguise the easing of corporations' compliance burden. Meanwhile, government agencies resist the application of regulatory frameworks to themselves, resulting in a great deal of waste and corruption going unchecked.

It might seem attractive to organisations to face few legal obligations and hence to be subject to limited compliance exposure. On the other hand, the absence or weakness of regulation encourages behaviour that infringes reasonable public expectations. Cavalier organisational behaviour may be driven by executives, groups and even lone individuals who perceive opportunities. This can give rise to substantial direct and indirect threats to the reputation of every organisation in the sector.
It is therefore in each organisation's own self-interest for a modicum of regulation to exist, in order to provide a protective shield against media exposés, to avoid stimulating a public backlash and regulatory activism. The range of alternative forms that regulatory schemes can take is examined in a later section. First, however, it is important to consider the extent to which natural controls may cause regulatory intervention to be unnecessary and even harmful, and hence to identify the circumstances in which intervention may be justifiable.

3. Natural controls and the justification of intervention

AI technologies and AI-based artefacts and systems may be subject to limitations as a result of processes that are intrinsic to the relevant socio-economic system ( Clarke, 1995, 2014a, 2014b ). AI may even stimulate natural processes whose effect is to limit adoption, or to curb or mitigate negative impacts.

A common example of a natural control is doubt about the technology's ability to deliver on its proponents' promises, resulting in inventions being starved of investment. Where innovative projects succeed in gaining early financing rounds, it may transpire that the development and/or operational costs are too high, or the number of instances it would apply to and/or the benefits to be gained from each application may be too small to justify the investment needed to develop artefacts and to implement and deploy systems. In some circumstances, the realisation of the potential of a technology may suffer from dependence on infrastructure that is unavailable or inadequate.
For example, computing could have exploded in the third quarter of the 19th century, rather than 100 years later, had metallurgy of the day … ( Ostrom, 1999 ). Whereas neo-conservative economists commonly recognise market failure as the sole justification for interventions, Stiglitz (2008) adds market irrationality (e.g. circuit-breakers to stop bandwagon effects in stock markets) and distributive justice (e.g. safety nets and anti-discrimination measures).

In the case of AI, evidence of market failure was noted in the previous article in this series. Despite various technologies being operationally deployed, no meaningful organisational, industry or professional self-regulation exists. Such codes and guidelines as exist cover a fraction of the need, and are in any case unenforceable. Meanwhile, market irrationality is evident in the form of naive acceptance by user organisations of AI promoters' claims; and distributive justice is being negatively impacted by unfair and effectively unappealable decisions in such areas as credit-granting and social welfare administration.

A further important insight that can be gained from a study of natural controls is that regulatory measures can be designed to reinforce natural processes. For example, approaches that are applicable in a wide variety of contexts include adjusting the cost/benefit/risk balance perceived by the players, by subsidising costs, levying revenues and/or assigning risk. For example, applying strict liability to operators of drones and driverless cars could be expected to encourage much more careful risk assessment and risk management.

An appreciation of pre-existing and enhanced natural controls is a vital precursor to any analysis of regulation, because the starting-point needs to be: what is there about the natural order of things that is inadequate, and how will intervention improve the situation?
For example, the first of six principles proposed by the Australian Productivity Commission was: “Governments should not act to address problems through regulation unless a case for action has been clearly established. This should include evaluating and explaining why existing measures are not sufficient to deal with the issue” ( PC, 2006 , p. v). That threshold test is important, in order to ensure a sufficient understanding of the natural controls that exist in the particular context.

In practice, parliaments seldom act in advance of new technologies being deployed. Reasons for this include lack of understanding of the technology and its impacts, prioritisation of economic over social issues and hence a preference for stimulating new business rather than throttling it at birth, and more effective lobbying by innovative corporations than by consumers and advocates for social values.

An argument exists to the effect that, the more impactful the technology, the stronger the case for anticipatory action by parliaments. Examples of technologies that are frequently mentioned in this context are nuclear energy and various forms of large-scale extractive and manufacturing industries whose inadequate regulation has resulted in massive pollution. A precautionary principle has been enunciated ( Wingspread, 1998 ). Its strong form exists in some jurisdictions' environmental laws, along the lines of: “When human activities may lead to morally unacceptable harm that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish that potential harm” ( TvH, 2006 ).
Beyond environmental matters in a number of specific jurisdictions, however, the precautionary principle is merely an ethical norm to the effect that: if an action or policy is suspected of causing harm, and scientific consensus that it is not harmful is lacking, then the burden of proof falls on those taking the action.

The first article in this series argued that AI's threats are readily identifiable and substantial. Even if that contention is not accepted, however, the scale of impact that AI's proponents project as being inevitable is so great that the precautionary principle applies, at the very least in the weaker of its two forms. A strong case for regulatory intervention therefore exists, unless it can be shown that appropriate regulatory measures are already in place. The following section accordingly presents a brief survey of existing regulatory arrangements.

4. Existing laws

This section first considers general provisions of law that may provide suitable protections, or at least contribute to a regulatory framework. It then reviews initiatives that are giving rise to AI-specific laws.

4.1. Generic laws

Applications of new technologies are generally subject to existing laws ( Bennett Moses, 2013 ). These include the various forms of commercial law, particularly contractual obligations including express and implied terms, consumer rights laws, and copyright and patent laws. In some contexts, including …

4.2. AI-specific laws

… ( Scherer, 2016 ; HTR, 2018a, 2018b ), few identify AI-specific laws. Even such vital aspects as worker safety and employer liability appear to depend not on technology-specific laws, but on generic laws, which may or may not have been adapted to reflect the characteristics of the new technologies.

In HTR (2017) , South Korea is identified as having enacted the first national law relating to robotics generally: the Intelligent Robots Development Distribution Promotion Act of 2008. It is almost entirely facilitative and stimulative, and barely even aspirational in relation to regulation of robotics. There is mention of a Charter, “including the provisions prescribed by Presidential Decrees, such as ethics by which the developers, manufacturers, and users of intelligent robots shall abide”, but no such Charter appears to exist. A mock-up of a possible form for such a Charter is provided by Akiko (2012) . HTR (2018c) offers a regulatory specification in relation to research and technology generally, including robotics and AI.

In relation to autonomous motor vehicles, a number of jurisdictions have enacted laws. See Palmerini et al. (2014, pp. 36–73) , Holder et al. (2016) , DMV-CA (2018) , Vellinga (2017) , which reviews laws in the USA at federal level, California, the United Kingdom, and the Netherlands, and Maschmedt and Searle (2018) , which reviews laws in three States of Australia. Such initiatives have generally had a strong focus on economic motivations, the stimulation and facilitation of innovation, exemptions from some existing regulation, and limited new regulation or even guidance. One approach to regulation is to leverage off natural processes. For example, Schellekens (2015) argued that a requirement of obligatory insurance was a sufficient means for regulating liability for harm arising from self-driving cars. In the air, legislatures and regulators have moved very slowly in relation to the regulation of drones ( Clarke and Bennett Moses, 2014 ; Clarke, 2016 ).

Automated decision-making about people has been subject to French data protection law for many years. In mid-2018 this became a feature of European law generally, through the General Data Protection Regulation (GDPR) Art. 22, although doubts have been expressed about that Article's effectiveness ( Wachter et al., 2017 ).

On the one hand, it might be that AI-based technologies are less disruptive than they are claimed to be, and that laws need little adjustment. On the other, a mythology of technology neutrality pervades law-making. Desirable as it might be for laws to encompass both existing and future artefacts and processes, genuinely disruptive technologies have features that render existing laws ambiguous and ineffective.

Not only is AI not subject to adequate natural controls, but such laws as currently apply appear to be inadequate to cope with the substantial threats it embodies. The following section accordingly outlines the various forms of regulatory intervention that could be applied.

5. The hierarchy of regulatory forms

This section reflects the regulatory concepts outlined earlier, and presents alternatives within a hierarchy based on the degree of formality of the regulatory intervention. An earlier section considered Natural Regulation . In Fig. 1 , this is depicted as the bottom-most layer (1) of the hierarchy.

Regulatory theorists commonly refer to instruments and measures that can be used to achieve interventions into natural processes. In principle, their purpose is the curbing of harmful behaviours and excesses; but in some cases the purpose is to give the appearance of doing so, in order to hold off stronger or more effective interventions. Fig. 1 depicts the intentionally-designed regulatory instruments and measures as layers (2)–(6), built on top of natural regulation.

The second-lowest layer in the hierarchy, referred to as (2) Infrastructural Regulation , is a correlate of artefacts like the mechanical steam governor. Features of the infrastructure on which the regulatees depend can reinforce positive aspects of the relevant socio-economic system and inhibit negative aspects. Those features may be byproducts of the artefact's design, or they may be retro-fitted onto it, or architected into it. For example, early steam-engines did not embody adequate controls, and the first governor was a retro-fitted feature; but, in subsequent iterations, controls became intrinsic to steam-engine design.

Information technology (IT) assists what were previously purely mechanical controls, such as where dam sluice-gate … ( Hosein et al., 2003 ). A range of constraints exists within computer and network architecture, including standards and protocols, and within infrastructure, including hardware and software. In the context of AI, a relevant form that 'West Coast Code' could take is the embedment in robots of something resembling 'laws of robotics'. This notion first appeared in an Asimov short story, 'Runaround', published in 1942; but many commentators on robotics cling to it. For example, Devlin (2016) quotes a professor of robotics as perceiving that the British Standards Institute's guidance on ethical design of robots ( BS, 2016 ) represents “the first step towards embedding ethical values into robotics and AI”. On the other hand, a study of Asimov's robot fiction showed that he had comprehensively demonstrated the futility of the idea ( Clarke, 1993 ). No means exists to encode human values into artefacts, nor to embed within them means to reflect differing values among various stakeholders, nor to mediate conflicts among values and objectives ( Weizenbaum, 1976 ; Dreyfus, 1992 ).

Switching attention to the uppermost layer of the regulatory hierarchy, (6) Formal Regulation exercises the power of a parliament through statutes. In common law countries at least, statutes are supplemented by case law that clarifies the application of the legislation. Formal regulation demands … identified in Ta…