While the large and growing number of Chinese artificial intelligence publications is well known, the quality of this research is debated. Some observers claim that China is capable of producing a high quantity of AI publications, but lags in original ideas and impactful research. Even Chinese researchers occasionally criticize their country’s academic system for its lack of innovation in AI. In recent years, however, quantitative analyses have found that Chinese AI publications are increasingly influential.
Progress in artificial intelligence has led to growing concern about the capabilities of AI-powered surveillance systems. This data brief uses bibliometric analysis to chart recent trends in visual surveillance research — what share of overall computer vision research it comprises, which countries are leading the way, and how these trends have shifted over time.
Henry Kissinger and Eric Schmidt discuss the transformational power of artificial intelligence.
Using finely tuned hardware, specialized networking, and large data storage, supercomputers have long been used for computationally intensive projects that require large amounts of data processing. With the rise of artificial intelligence and machine learning, demand for these powerful computers is increasing and, as a result, their processing power is growing rapidly. As such, the growth of AI is inextricably linked to the growth in processing power of these high-performance machines.
Following the outbreak of the pandemic in March 2020, the Covid-19 task force was formed within the Confederation of Laboratories for Artificial Intelligence Research in Europe (Claire), with the aim of supporting crisis management through the use of artificial intelligence.
The working group's near- and longer-term prospects are aligned with those of the task force. The first phase of the work, carried out on a volunteer basis during the emergency period, has come to an end, and the group is now considering how to continue its work on a more stable footing and give it a broader, more lasting, and more general scope.
Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the second installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation campaigns. Building on the RICHDATA framework, this report describes how AI can supercharge current techniques to increase the speed, scale, and personalization of disinformation campaigns.
Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.
This paper is the fourth installment in a series on “AI safety,” an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. The first paper in the series, “Key Concepts in AI Safety: An Overview,” outlined three categories of AI safety issues—problems of robustness, assurance, and specification—and the subsequent two papers described problems of robustness and assurance, respectively. This paper introduces specification as a key element in designing modern machine learning systems that operate as intended.
By combining a versatile and frequently updated bibliometrics tool — the CSET Map of Science — with more hands-on analyses of technical developments, this brief outlines a methodology for measuring the publication growth of AI-related topics, where that growth is occurring, what organizations and individuals are involved, and when technical improvements in performance occur.