Category
Analysis

Artificial Intelligence/TechInnovation – Is artificial intelligence ready for the great rehiring? (WEF)

The World Economic Forum writes: After a year that witnessed unemployment reach levels unseen since the Great Depression, the Great Rehiring is upon us – and AI is likely to play a significant role in it. Employers, especially those who need to hire rapidly and in large numbers, are turning to AI-driven technologies such as resume-screening programs, automated interviews, and mobile hiring apps to rebuild their workforces. To the millions of employees who were displaced by the COVID-19 pandemic, these technologies can mean a fast track back into the workplace. And to the businesses whose doors were shuttered by the pandemic, these technologies are an efficient path back to profitability.

go to WEF: Is artificial intelligence ready for the great rehiring? | World Economic Forum (weforum.org)

Category
Analysis

Artificial Intelligence – Artificial Intelligence for Social Good: Avoiding the Solutionist Trap (ELIAMEP)

Andreas Tsamados writes: Artificial Intelligence (AI) is slowly moving past the public bewilderment phase that is common to most revolutionary technologies. In the past decade, AI has often been portrayed either as a sentient technology with an apocalyptic destiny or as a silver bullet for most if not all conceivable problems, ranging from the climate emergency to global crime prevention. This article contributes to a more recent and realistic assessment that is beginning to see the light: AI is a powerful technology that can help us address a wide range of well-defined problems, given proper design, development, integration and monitoring.

go to ELIAMEP: Artificial Intelligence for Social Good: Avoiding the Solutionist Trap – Andreas Tsamados : ΕΛΙΑΜΕΠ (eliamep.gr)

Category
Analysis

Artificial Intelligence Helps Improve NASA’s Eyes on the Sun (NASA)

NASA writes: A group of researchers is using artificial intelligence techniques to calibrate some of NASA’s images of the Sun, helping improve the data that scientists use for solar research. The new technique was published in the journal Astronomy & Astrophysics on April 13, 2021. 
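The NASA item above does not detail the technique, but the general idea of learning a per-channel correction for degraded imagery can be illustrated with a minimal, hypothetical sketch. The PyTorch model below predicts one positive scaling factor per channel of a multi-channel solar image; the architecture, the seven-channel input, and names such as CalibrationNet are assumptions for illustration only, not the researchers' published method.

```python
# Hypothetical sketch only: a small CNN that predicts one multiplicative
# calibration factor per channel of a multi-channel solar image.
# Channel count, layer sizes, and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CalibrationNet(nn.Module):
    def __init__(self, n_channels: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # collapse spatial dimensions
        )
        self.head = nn.Linear(32, n_channels)   # one factor per channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, height, width) degraded images
        z = self.features(x).flatten(1)
        return F.softplus(self.head(z))         # strictly positive factors

# Usage: predict correction factors and rescale a batch of images.
model = CalibrationNet()
degraded = torch.rand(4, 7, 128, 128)           # placeholder data, not real solar images
factors = model(degraded)                       # shape (4, 7)
calibrated = degraded * factors[:, :, None, None]
```

In practice such a model would have to be trained against independently calibrated reference observations before its correction factors could be trusted.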

go to NASA: Artificial Intelligence Helps Improve NASA’s Eyes on the Sun | NASA

Category
Analysis

(Europe/Artificial Intelligence) Mass surveillance with AI requires a rigorous approach: the boundaries set by the EU Parliament (Agenda Digitale)

Federica Maria Rita Livelli writes for Agenda Digitale: The European Parliament's Committee on Civil Liberties, Justice and Home Affairs (LIBE) has drafted a report on the use of surveillance based on artificial intelligence, highlighting the need for human oversight and sufficient safeguards. In practice, the aim is to prevent the use of biometric data in public spaces and the use by law enforcement of private facial-recognition databases. The request rests on the fact that AI-based technologies are potential carriers of bias and discrimination, and it is therefore more urgent than ever to intervene so that technical progress does not come at the expense of people's fundamental rights.

go to Agenda Digitale: Sorveglianza di massa con l’IA, serve approccio rigoroso: i paletti del Parlamento Ue | Agenda Digitale

Category
Analysis

(Europe/Artificial Intelligence) Is the EU's artificial intelligence really “human-centric”? The conflicts within the proposal (Agenda Digitale)

Giovanni De Gregorio, Federica Paolucci and Oreste Pollicino write for Agenda Digitale: Almost two months after the publication of the proposed Regulation on artificial intelligence, many points concerning the scope of this instrument are already under discussion.

go to Agenda Digitale: L’intelligenza artificiale made in Ue è davvero “umano-centrica”? I conflitti della proposta | Agenda Digitale

Category
Analysis

(USA) How the National Science Foundation is taking on fairness in AI (Brookings)

Alex Engler writes for Brookings: Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how the government can deter its harmful or negligent use. Yet there can also be a role for government in making it easier to use AI beneficially—in this niche, the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has now funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this important line of funding for ethical AI. 

go to Brookings: How the National Science Foundation is taking on fairness in AI (brookings.edu)

Category
Analysis

(Artificial Intelligence) National Power After AI (CSET)

Matthew Daniels and Ben Chang write for CSET: AI technologies will likely alter great power competitions in foundational ways, changing both how nations create power and their motives for wielding it against one another. This paper is a first step toward thinking more expansively about AI & national power and seeking pragmatic insights for long-term U.S. competition with authoritarian governments.

go to CSET website: National Power After AI – Center for Security and Emerging Technology (georgetown.edu)

Category
Analysis

(TechInnovation) Why AI is just automation (Brookings)

Joshua A. Kroll writes for Brookings: Work long performed by human decision-makers or organizations increasingly happens via computerized automation. This shift creates new gaps within the governance structures that manage the correctness, fairness, and power dynamics of important decision processes. When existing governance systems are unequipped to handle the speed, scale, and sophistication of these new automated systems, any biased, unintended, or incorrect outcomes can go unnoticed or be difficult to correct even when observed. In this article, I examine what drives these gaps by focusing on why the nature of artificial intelligence (AI) creates inherent, fundamental barriers to governance and accountability within systems that rely on automation. The design of automation in systems is not only a technical question for engineers and implementers, but a governance question for policymakers and requirements holders. If system governance acknowledges and responds to the tendencies in AI to create and reinforce inequality, automated systems should be built to support human values as a strategy for limiting harms such as bias, reduction of individual agency, or the inability to redress harmful outcomes.

see the Brookings website: Why AI is just automation (brookings.edu)

Category
Analysis

(Europe/ASEAN) European regulation on artificial intelligence and the interests of ASEAN countries (East Asia Forum)

Manoj Harjani writes for East Asia Forum: On 21 April 2021, the European Union announced draft legislation to harmonise its member states’ artificial intelligence (AI) rules. The legislation’s objectives are potentially desirable for many Southeast Asian countries, especially given the growing concern about the risks associated with AI and intensifying rivalry between China and the United States.

read analysis: Is the EU’s AI legislation a good fit for ASEAN? | East Asia Forum
