Preparing for Battle Against the Malicious Use of Artificial Intelligence

Fri., October 12 at 00:00

Time zone: Paris (GMT+02:00)

NYC Seminar & Conference Center
New York, New York

Mission AI: Preparing for Battle Against the Malicious Use of Artificial Intelligence (a research analysis workshop)

[ATTENDEES MUST REGISTER HERE TO BE CONFIRMED: ]

In February 2017, twenty-six renowned experts on AI safety, drones, cybersecurity, lethal autonomous weapon systems, and counter-terrorism, from 14 institutions in academia, civil society, and industry, gathered for a special two-day workshop at the University of Oxford to discuss the malicious use of Artificial Intelligence (AI) by rogue nations, dictators, hackers, terrorists, and other criminals, and how companies and governments should prepare for these coming threats. The result of this meeting was the publication earlier this year of an extensive report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, which has since been widely circulated and cited. It is the only report of its kind that tackles the possibility of threats from human beings who would intentionally manipulate AI for malicious purposes. It is a battle plan, of sorts, for identifying and combating an enemy we can't yet see.

Report Executive Summary:

"Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed."

What does this mean for you? How will this all impact your life, the decisions you make at work, and the choices you make as a consumer? How will the government, big tech companies, and industries protect us from these real threats? What does this important report tell us about how we should all prepare for the coming global battle against the malicious use of AI by criminals, rogue governments, and enemies who will weaponize it?

What We'll Cover In This Workshop

This workshop will be a deep dive into the first 30 pages of this compelling 100-page report. Guest speaker Amauche Emenari (graduate student at the Massachusetts Institute of Technology's Center for Brains, Minds & Machines, specializing in neural circuits for intelligence) will provide a comprehensive overview, related literature, and interactive discussions on the following sections of the report:

Introduction to the Report (why it was produced, an overview of the 26 expert authors and the institutions behind the report, how the report is being used, etc.)

Scope of Material Covered in the Entire Report

Related Literature

General Framework for AI and Security Threats

AI Capabilities

Security-Relevant Properties of AI

General Implications


Digital Security

Physical Security

Political Security

Prerequisites & Preparation

No prerequisites. This workshop is for anyone who wants to level up their AI knowledge and gain a more nuanced understanding of how experts across multiple fields are defining AI security threats, and what they suggest businesses and governments should do to prepare for (and ultimately thwart) threats from humans using AI and machine learning maliciously. This workshop is ideal for product managers, developers and engineers, marketers, tech journalists, investors, entrepreneurs, students, and teachers.

