In July 2020, the Court of Justice of the European Union (CJEU) invalidated the European Commission's adequacy decision for the EU-U.S. Privacy Shield framework, which until then had governed transatlantic transfers of personal data for commercial purposes. In Data Protection Commissioner v. Facebook Ireland (Schrems II), the CJEU held that U.S. surveillance law provides inadequate safeguards for EU citizens' data. The ruling was a transatlantic bombshell, leaving thousands of companies uncertain about the future of their data flows. Since then, the United States and the European Commission have been negotiating a successor agreement but have yet to announce a path forward.
In discussions of great power competition and cyberattacks designed to slow a U.S. strategic deployment of forces to Eastern Europe, attention has focused on the fort-to-port route within the United States. But we tend to forget that once forces arrive at the major Western European ports of disembarkation, the distance from those ports to eastern Poland is roughly the same as from New York to Chicago.
There is widespread awareness among researchers, companies, policymakers, and the public that the use of artificial intelligence (AI) and big data raises challenges involving justice, privacy, autonomy, transparency, and accountability. Organizations are increasingly expected to address these and other ethical issues. In response, many companies, nongovernmental organizations, and governmental entities have adopted AI or data ethics frameworks and principles, meant both to demonstrate a commitment to addressing the challenges posed by AI and, crucially, to guide organizational efforts to develop and deploy AI in socially and ethically responsible ways.