AI Now 2017 Report (New York University)

Authors
Alex Campolo, New York University
Madelyn Sanfilippo, New York University
Meredith Whittaker, Google Open Research, New York University, and AI Now
Kate Crawford, Microsoft Research, New York University, and AI Now

Editors
Andrew Selbst, Yale Information Society Project and Data & Society
Solon Barocas, Cornell University

Table of Contents
Recommendations
Executive Summary
Introduction
Labor and Automation
    Research by Sector and Task
    AI and the Nature of Work
    Inequality and Redistribution
Bias and Inclusion
    Where Bias Comes From
    The AI Field is Not Diverse
    Recent Developments in Bias Research
    Emerging Strategies to Address Bias
Rights and Liberties
    Population Registries and Computing Power
    Corporate and Government Entanglements
    AI and the Legal System
    AI and Privacy
Ethics and Governance
    Ethical Concerns in AI
    AI Reflects Its Origins
    Ethical Codes
    Challenges and Concerns Going Forward
Conclusion

Recommendations

These recommendations reflect the views and research of the AI Now Institute at New York University. We thank the experts who contributed to the AI Now 2017 Symposium and Workshop for informing these perspectives, and our research team for helping shape the AI Now 2017 Report.

1. Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g., “high stakes” domains), should no longer use “black box” AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third-party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum they should be available for public auditing, testing, and review, and subject to accountability standards.

2. Before releasing an AI system, companies should run rigorous pre-release trials to ensure that it will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings.

3. After releasing an AI system, companies should continue to monitor its use across different contexts and communities. The methods and outcomes of monitoring should be defined through open, academically rigorous processes, and should be accountable to the public. Particularly in high stakes decision-making contexts, the views and experiences of traditionally marginalized communities should be prioritized.

4. More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR. This research will complement the existing focus on worker replacement via automation. Specific attention should be given to the potential impact on labor rights and practices, especially the potential for behavioral manipulation and the unintended reinforcement of bias in hiring and promotion.

5. Develop standards to track the provenance, development, and use of training datasets throughout their life cycle. This is necessary to better understand and monitor issues of bias and representational skews. In addition to developing better records for how a training dataset was created and maintained, social scientists and measurement researchers within the AI bias research field should continue to examine existing training datasets, and work to understand potential blind spots and biases that may already be at work. (A minimal sketch of one form such a record might take follows this list.)
6. Expand AI bias research and mitigation strategies beyond a narrowly technical approach. Bias issues are long-term and structural, and contending with them necessitates deep interdisciplinary research. Technical approaches that look for a one-time “fix” for fairness risk oversimplifying the complexity of social systems. Within each domain, such as education, healthcare, or criminal justice, legacies of bias and movements toward equality have their own histories and practices. Legacies of bias cannot be “solved” without drawing on domain expertise. Addressing fairness meaningfully will require interdisciplinary collaboration and methods of listening across different disciplines.

7. Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed. Creating such standards will require the perspectives of diverse disciplines and coalitions. The process by which such standards are developed should be publicly accountable, academically rigorous, and subject to periodic review and revision.

8. Companies, universities, conferences, and other stakeholders in the AI field should release data on the participation of women, minorities, and other marginalized groups within AI research and development. Many now recognize that the current lack of diversity in AI is a serious issue, yet there is insufficiently granular data on the scope of the problem, which is needed to measure progress. Beyond this, we need a deeper assessment of workplace cultures in the technology industry, which requires going beyond simply hiring more women and minorities, toward building more genuinely inclusive workplaces.

9. The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision-making power. As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise who can guide the creation and integration of AI into long-standing systems with established practices and norms.

10. Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high-level ethical principles and guidelines for best practices to everyday development processes, promotion, and product release cycles.
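Recommendation 5 calls for standards that track the provenance, development, and use of training datasets throughout their life cycle. As a concrete illustration only, the sketch below shows one minimal form such a machine-readable record might take; the class names, fields, and event vocabulary are assumptions made for this example, not a standard proposed in the report.

```python
"""A minimal sketch of a dataset provenance record (cf. Recommendation 5).
All names and fields are illustrative assumptions, not a proposed standard."""

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class ProvenanceEvent:
    """One step in a dataset's life cycle: collection, cleaning, use, and so on."""
    timestamp: str
    actor: str   # the person or organization responsible for the step
    action: str  # e.g. "collected", "relabeled", "used-to-train"
    notes: str = ""


@dataclass
class DatasetRecord:
    """Records where a training set came from and how it has been used,
    so that bias and representational skews can be audited later."""
    name: str
    version: str
    source_description: str  # how, where, and when the raw data was gathered
    known_skews: List[str]   # documented gaps or over-representations
    events: List[ProvenanceEvent] = field(default_factory=list)

    def log(self, actor: str, action: str, notes: str = "") -> None:
        """Append a timestamped, auditable life-cycle event."""
        now = datetime.now(timezone.utc).isoformat()
        self.events.append(ProvenanceEvent(now, actor, action, notes))


# Hypothetical example: a corpus with one documented representational skew.
record = DatasetRecord(
    name="example-text-corpus",
    version="1.0",
    source_description="Web text crawled 1994-1999 (hypothetical)",
    known_skews=["over-represents English-language newsgroup authors"],
)
record.log(actor="research-team", action="collected")
record.log(actor="vendor-x", action="used-to-train", notes="resume-screening model")
```

Versioning such a record alongside the dataset itself, in the spirit of the versioned test documentation in Recommendation 2, would let auditors see when skews were introduced or mitigated over time.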
Executive Summary

Artificial intelligence (AI) technologies are in a phase of rapid development, and are being adopted widely. While the concept of artificial intelligence has existed for over sixty years, real-world applications have only accelerated in the last decade due to three concurrent developments: better algorithms, increases in networked computing power, and the tech industry's ability to capture and store massive amounts of data. AI systems are already integrated in everyday technologies like smartphones and personal assistants, making predictions and determinations that help personalize experiences and advertise products.

Beyond the familiar, these systems are also being introduced in critical areas like law, finance, policing, and the workplace, where they are increasingly used to predict everything from our taste in music to our likelihood of committing a crime to our fitness for a job or an educational opportunity. AI companies promise that the technologies they create can automate the toil of repetitive work, identify subtle behavioral patterns, and much more. However, the analysis and understanding of artificial intelligence should not be limited to its technical capabilities. The design and implementation of this next generation of computational tools presents deep normative and ethical challenges for our existing social, economic, and political relationships and institutions, and these changes are already underway. Simply put, AI does not exist in a vacuum. We must also ask how broader phenomena like widening inequality, an intensification of concentrated geopolitical power, and populist political movements will shape and be shaped by the development and application of AI technologies.

Building on the inaugural 2016 report, the AI Now 2017 Report addresses the most recent scholarly literature in order to raise critical social questions that will shape our present and near future. A year is a long time in AI research, and this report focuses on new developments in four areas: labor and automation, bias and inclusion, rights and liberties, and ethics and governance. We identify emerging challenges in each of these areas and make recommendations to ensure that the benefits of AI will be shared broadly, and that risks can be identified and mitigated.

Labor and automation: Popular media narratives have emphasized the prospect of mass job loss due to automation and the wide-scale adoption of robots. Such serious scenarios deserve sustained empirical attention, but some of the best recent work on AI and labor has focused instead on specific sectors and tasks. While few jobs will be completely automated in the near term, researchers estimate that about a third of workplace tasks can be automated for the majority of workers. New policies such as Universal Basic Income (UBI) are being designed to address concerns about job loss, but these need much more study. An underexplored area that needs urgent attention is how AI and related algorithmic systems are already changing the balance of workplace power. Machine learning techniques are quickly being integrated into management and hiring decisions, including in the so-called gig economy, where technical systems match workers with jobs, but also across more traditional white-collar industries. New systems make promises of flexibility and efficiency, but they also intensify the surveillance of workers, who often do not know when and how they are being tracked and evaluated, or why they are hired or fired. Furthermore, AI-assisted forms of management may replace more democratic forms of bargaining between workers and employers, increasing owner power under the guise of technical neutrality.

Bias and inclusion: One of the most active areas of critical AI research in the past year has been the study of bias, both in its more formal statistical sense and in the wider legal and normative senses. At their best, AI systems can be used to augment human judgement and reduce both our conscious and unconscious biases. However, training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural assumptions and inequalities. For example, natural language processing techniques trained on a corpus of internet writing from the 1990s may reflect stereotypical and dated word associations: the word “female” might be associated with “receptionist.” If these models are used to make educational or hiring decisions, they may reinforce existing inequalities, regardless of the intentions or even knowledge of systems designers.
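To see how such an association can be measured: embedding models represent words as vectors, and association is conventionally scored as cosine similarity between those vectors. The sketch below uses small fabricated vectors purely to illustrate the measurement; a real audit would compute the same similarities on embeddings actually trained on the corpus in question, for instance via a library such as gensim.

```python
"""Toy illustration of measuring word associations in an embedding space.
The 3-d vectors are fabricated for this example; real embeddings are
learned from a corpus and typically have hundreds of dimensions."""

import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: the standard association score in embedding space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


# Fabricated vectors in which "female" sits close to "receptionist",
# mimicking the dated associations a 1990s web corpus might encode.
vectors = {
    "female":       np.array([0.9, 0.1, 0.1]),
    "male":         np.array([0.1, 0.9, 0.1]),
    "receptionist": np.array([0.8, 0.2, 0.2]),
    "engineer":     np.array([0.2, 0.8, 0.2]),
}

for occupation in ("receptionist", "engineer"):
    for gender in ("female", "male"):
        score = cosine(vectors[gender], vectors[occupation])
        print(f"{gender!r} ~ {occupation!r}: {score:.2f}")

# Any downstream ranking or screening model that consumes these vectors
# inherits the skew, regardless of its designers' intentions.
```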
Those researching, designing, and developing AI systems tend to be male, highly educated, and very well paid. Yet their systems are working to predict and understand the behaviors and preferences of diverse populations with very different life experiences. More diversity within the fields building these systems will help ensure that they reflect a broader variety of viewpoints.

Rights and liberties: The application of AI systems in public and civil institutions is challenging existing political arrangements, especially in a global political context shaped by events such as the election of Donald Trump in the United States. A number of governmental agencies are already partnering with private corporations to deploy AI systems in ways that challenge civil rights and liberties. For example, police body camera footage is being used to train machine vision algorithms for law enforcement, raising privacy and accountability concerns. AI technologies are also…