Category
Artificial Intelligence, China, USA

Comparing U.S. and Chinese Contributions to High-Impact AI Research (Ashwin Acharya, Brian Dunn, CSET)

While the large and growing number of Chinese artificial intelligence publications is well known, the quality of this research is debated. Some observers claim that China is capable of producing a high quantity of AI publications, but lags in original ideas and impactful research. Even Chinese researchers occasionally criticize their country’s academic system for its lack of innovation in AI. In recent years, however, quantitative analyses have found that Chinese AI publications are increasingly influential.

Comparing U.S. and Chinese Contributions to High-Impact AI Research – Center for Security and Emerging Technology (georgetown.edu)

Category
Artificial Intelligence, Technology

Trends in AI Research for the Visual Surveillance of Populations (Ashwin Acharya, Max Langenkamp, James Dunham, CSET)

Progress in artificial intelligence has led to growing concern about the capabilities of AI-powered surveillance systems. This data brief uses bibliometric analysis to chart recent trends in visual surveillance research — what share of overall computer vision research it comprises, which countries are leading the way, and how these trends have shifted over time.

Trends in AI Research for the Visual Surveillance of Populations – Center for Security and Emerging Technology (georgetown.edu)

Category
Artificial Intelligence, Disinformation, Technology

AI and the Future of Disinformation Campaigns. Part 2: A Threat Model (Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, Ido Wulkan, CSET)

Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.

AI and the Future of Disinformation Campaigns – Center for Security and Emerging Technology (georgetown.edu)

Category
Artificial Intelligence, Disinformation, Technology

AI and the Future of Disinformation Campaigns. Part 1: The RICHDATA Framework (Katerina Sedova, Christine McNeill, Aurora Johnson, Aditi Joshi, Ido Wulkan, CSET)

Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework — a disinformation kill chain — this report describes the stages and techniques used by human operators to build disinformation campaigns.

AI and the Future of Disinformation Campaigns – Center for Security and Emerging Technology (georgetown.edu)

Category
Artificial Intelligence, Machine Learning, Technology

Key Concepts in AI Safety: Specification in Machine Learning (Tim G. J. Rudner, Helen Toner, CSET)

This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues—problems of robustness, assurance, and specification—and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.

Key Concepts in AI Safety: Specification in Machine Learning – Center for Security and Emerging Technology (georgetown.edu)

Category
Technology

Wisdom of the Crowd as Arbiter of Expert Disagreement. Case Study: Future of the DOD-Silicon Valley Relationship (Michael Page, CSET)

How can state-of-the-art probabilistic forecasting tools be used to advance expert debates on big policy questions? Using Foretell, a crowd forecasting platform piloted by CSET, we trialed a method to break down a big question—”What is the future of the DOD-Silicon Valley relationship?”—into measurable components, and then leveraged the wisdom of the crowd to reduce uncertainty and arbitrate disagreement among a group of experts.

Wisdom of the Crowd as Arbiter of Expert Disagreement – Center for Security and Emerging Technology (georgetown.edu)

Category
Artificial Intelligence, Technology

Measuring AI Development. A Prototype Methodology to Inform Policy (Jack Clark, Kyle Miller, Rebecca Gelles, CSET)

By combining a versatile and frequently updated bibliometrics tool — the CSET Map of Science — with more hands-on analyses of technical developments, this brief outlines a methodology for measuring the publication growth of AI-related topics, where that growth is occurring, what organizations and individuals are involved, and when technical improvements in performance occur.

Measuring AI Development – Center for Security and Emerging Technology (georgetown.edu)

Category
Artificial Intelligence, Technology

AI for Judges (Jamie Baker, Laurie Hobart, Matthew Mittelsteadt, CSET)

As artificial intelligence transforms the economy and American society, it will also transform the practice of law and the role of courts in regulating its use. What role should, will, or might judges play in addressing the use of AI? And relatedly, how will AI and machine learning impact judicial practice in federal and state courts? This report is intended to provide a framework for judges to address AI.

AI for Judges – Center for Security and Emerging Technology (georgetown.edu)

Category
Artificial Intelligence, Cyber Defense

Making AI Work for Cyber Defense (Wyatt Hoffman, CSET)

Artificial intelligence will play an increasingly important role in cyber defense, but vulnerabilities in AI systems call into question their reliability in the face of evolving offensive campaigns. Because securing AI systems can require trade-offs based on the types of threats, defenders are often caught in a constant balancing act. This report explores the challenges in AI security and their implications for deploying AI-enabled cyber defenses at scale.

Making AI Work for Cyber Defense – Center for Security and Emerging Technology (georgetown.edu)
