While the large and growing number of Chinese artificial intelligence publications is well known, the quality of this research is debated. Some observers claim that China is capable of producing a high quantity of AI publications, but lags in original ideas and impactful research. Even Chinese researchers occasionally criticize their country’s academic system for its lack of innovation in AI. In recent years, however, quantitative analyses have found that Chinese AI publications are increasingly influential.
Progress in artificial intelligence has led to growing concern about the capabilities of AI-powered surveillance systems. This data brief uses bibliometric analysis to chart recent trends in visual surveillance research — what share of overall computer vision research it comprises, which countries are leading the way, and how these trends have shifted over time.
Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.
Artificial intelligence offers enormous promise to advance progress and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.
This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues—problems of robustness, assurance, and specification—and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.
How can state-of-the-art probabilistic forecasting tools be used to advance expert debates on big policy questions? Using Foretell, a crowd forecasting platform piloted by CSET, we trialed a method to break down a big question—“What is the future of the DOD-Silicon Valley relationship?”—into measurable components, and then leveraged the wisdom of the crowd to reduce uncertainty and arbitrate disagreement among a group of experts.
By combining a versatile and frequently updated bibliometrics tool — the CSET Map of Science — with more hands-on analyses of technical developments, this brief outlines a methodology for measuring the publication growth of AI-related topics, where that growth is occurring, what organizations and individuals are involved, and when technical improvements in performance occur.
As artificial intelligence transforms the economy and American society, it will also transform the practice of law and the role of courts in regulating its use. What role should, will, or might judges play in addressing the use of AI? And relatedly, how will AI and machine learning impact judicial practice in federal and state courts? This report is intended to provide a framework for judges to address AI.
Artificial intelligence will play an increasingly important role in cyber defense, but vulnerabilities in AI systems call into question their reliability in the face of evolving offensive campaigns. Because securing AI systems can require trade-offs based on the types of threats, defenders are often caught in a constant balancing act. This report explores the challenges in AI security and their implications for deploying AI-enabled cyber defenses at scale.