Keith E. Sonderling writes: After a year that witnessed unemployment reach levels unseen since the Great Depression, the Great Rehiring is upon us – and AI is likely to play a significant role in it. Employers, especially those who need to hire rapidly and in large numbers, are turning to AI-driven technologies such as resume-screening programs, automated interviews, and mobile hiring apps to rebuild their workforces. To the millions of employees who were displaced by the COVID-19 pandemic, these technologies can mean a fast track back into the workplace. And to the businesses whose doors were shuttered by the pandemic, these technologies are an efficient path back to profitability.
Andrea Tsamados writes: Artificial Intelligence (AI) is slowly moving past the public bewilderment phase that is common to most revolutionary technologies. In the past decade, AI has often been portrayed either as a sentient technology with an apocalyptic destiny, or as a silver bullet for most if not all conceivable problems, ranging from the climate emergency to global crime prevention. This article contributes to a more recent and realistic assessment that is now beginning to emerge: AI is a powerful technology that can help us address a wide range of well-defined problems, given proper design, development, integration and monitoring.
NASA writes: A group of researchers is using artificial intelligence techniques to calibrate some of NASA’s images of the Sun, helping improve the data that scientists use for solar research. The new technique was published in the journal Astronomy & Astrophysics on April 13, 2021.
Federica Maria Rita Livelli writes for Agenda Digitale: The European Parliament's Committee on Civil Liberties, Justice and Home Affairs (LIBE) has drafted a report on the use of AI-based surveillance, highlighting the need for human oversight and sufficient safeguards. In practice, it calls for banning the use of biometric data in public spaces and the use by law enforcement of private facial recognition databases. The demand rests on the fact that AI-based technologies are potential carriers of bias and discrimination, and it is therefore urgent and necessary to intervene so that technical progress does not come at the expense of people's fundamental rights.
Giovanni De Gregorio, Federica Paolucci, and Oreste Pollicino write for Agenda Digitale: Almost two months after the publication of the proposed Regulation on artificial intelligence, there are already many points of debate about the scope of this instrument.
Alex Engler writes for Brookings: Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how the government can deter its harmful or negligent use. Yet there can also be a role for government in making it easier to use AI beneficially—in this niche, the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has now funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this important line of funding for ethical AI.
Matthew Daniels and Ben Chang write for CSET: AI technologies will likely alter great power competitions in foundational ways, changing both how nations create power and their motives for wielding it against one another. This paper is a first step toward thinking more expansively about AI & national power and seeking pragmatic insights for long-term U.S. competition with authoritarian governments.
Joshua A. Kroll writes for Brookings: Work long performed by human decision-makers or organizations increasingly happens via computerized automation. This shift creates new gaps within the governance structures that manage the correctness, fairness, and power dynamics of important decision processes. When existing governance systems are unequipped to handle the speed, scale, and sophistication of these new automated systems, any biased, unintended, or incorrect outcomes can go unnoticed or be difficult to correct even when observed. In this article, I examine what drives these gaps by focusing on why the nature of artificial intelligence (AI) creates inherent, fundamental barriers to governance and accountability within systems that rely on automation. The design of automation in systems is not only a technical question for engineers and implementers, but a governance question for policymakers and requirements holders. If system governance acknowledges and responds to the tendencies in AI to create and reinforce inequality, automated systems should be built to support human values as a strategy for limiting harms such as bias, reduction of individual agency, or the inability to redress harmful outcomes.
See the full article on the Brookings website: Why AI is just automation (brookings.edu)
Manoj Harjani writes for East Asia Forum: On 21 April 2021, the European Union announced draft legislation to harmonise its member states’ artificial intelligence (AI) rules. The legislation’s objectives are potentially desirable for many Southeast Asian countries, especially given the growing concern about the risks associated with AI and intensifying rivalry between China and the United States.