If you need to treat anxiety in the future, odds are the treatment won’t just be therapy, but also an algorithm. Across the mental-health industry, companies are rapidly building tools for monitoring and treating mental-health issues that rely on just a phone or a wearable device. These products depend on “affective computing” to detect and interpret human emotions. It’s a field that’s forecast to become a $37 billion industry by 2026, and as the COVID-19 pandemic has increasingly forced life online, affective computing has emerged as an attractive tool for governments and corporations to address an ongoing mental health crisis.
As researchers grew to understand COVID-19 during the early days of the pandemic, many built AI algorithms to analyze medical images and measure the extent of the disease in a given patient. Radiologists proposed multiple different scoring systems to categorize what they were seeing in lung scans and developed classification systems for the severity of the disease. These systems were developed and tested in clinical practice, published in academic journals, and modified or revised over time. But the pressure to quickly respond to a global pandemic threw into stark relief the lack of a coherent regulatory framework for certain cutting-edge technologies, which threatened to keep researchers from developing new diagnostic techniques as quickly as possible.
This policy brief, authored in collaboration with the MITRE Corporation, provides a new perspective on the U.S. Department of Defense’s struggle to recruit and retain artificial intelligence talent. The authors find that the DOD already has a cadre of AI and related experts, but that this talent remains hidden. Better leveraging this talent could go a long way toward meeting the DOD’s AI objectives. The authors argue that this can be done through policies that more effectively match AI talent with assignment opportunities, processes that incentivize experimentation and changes in career paths, and investments in the necessary technological infrastructure.
For years now, artificial intelligence has been hailed as both a savior and a destroyer. The technology really can make our lives easier, letting us summon our phones with a “Hey, Siri” and (more importantly) assisting doctors on the operating table. But as any science-fiction reader knows, AI is not an unmitigated good: It can be prone to the same racial biases as humans are, and, as is the case with self-driving cars, it can be forced to make murky split-second decisions that determine who lives and who dies. Like it or not, AI is only going to become a more pervasive force: We’re in a “watershed moment” for the technology, says Eric Schmidt, the former Google CEO.
Intellectual property (IP) is based on the idea that those who combine the spark of imagination with the grit and determination to see their vision become reality in books, film, music, technology, medicines, designs, sculpture, services, and more deserve the opportunity to reap the benefits of their innovation—and that this reward incentivizes more creative output. In the past, IP law operated under the assumption that all creative works would be entirely created by people. However, the advent of artificial intelligence (AI) has raised the prospect that in the future a significant number of works may be created by an autonomous computer system without direct human involvement. Canadian policy should protect the principle on which IP law is based, whether the works are generated by people or computer systems.
ITIF welcomes Canada’s detailed and thoughtful review of copyright implications for AI and the Internet of Things. Central to Canada’s consideration of IP and AI should be a pragmatic and realistic understanding of AI and its capabilities. While AI systems are increasingly autonomous and creative, they still have a considerable way to go before they achieve the sophistication that many people imagine. At the same time, Canada should not create AI-specific requirements for activity that is already legal and being done through non-technical means.
ITIF’s submission focuses on two key recommendations:
- Canada should recognize and protect AI-generated IP, and it should assign ownership of AI-generated works to the person or organization that owns the AI system.
- Canada should allow all users—whether commercial or non-commercial—to use AI for text and data mining (TDM) so long as they have legal access to the material. There should be no additional, special approvals for the use of TDM tools. Canada should avoid making its copyright framework overly complicated with TDM-specific requirements and exceptions, and instead ensure that the use of TDM is consistent with current IP law, whether this relates to infringement or licensing.
A globally competitive AI workforce hinges on the education, development, and sustainment of the best and brightest AI talent. This issue brief compares efforts to integrate AI education in China and the United States, and what advantages and disadvantages this entails. The authors consider key differences in system design and oversight, as well as strategic planning. They then explore implications for the U.S. national security community.
In the five years since we released the first AI100 report, much has been written about the state of artificial intelligence and its influences on society. Nonetheless, AI100 remains unique in its combination of two key features. First, it is written by a Study Panel of core multi-disciplinary researchers in the field—experts who create artificial intelligence algorithms or study their influence on society as their main professional activity, and who have been doing so for many years. The authors are firmly rooted within the field of AI and provide an “insider’s” perspective. Second, it is a longitudinal study, with reports by such Study Panels planned once every five years, for at least one hundred years.
[Figure: AI R&D and new firms are rising as shares of the U.S. total — share of AI-related projects in federal R&D expenditures at U.S. colleges and universities, and firms providing AI solutions as a share of all tech companies. Source: Brookings analysis of Crunchbase and Burning Glass data available via the Stanford HAI 2021 AI Index.]
Much of the U.S. artificial intelligence (AI) discussion revolves around futuristic dreams of both utopia and dystopia. From extreme to extreme, the promises range from solutions to global climate change to a “robot apocalypse.”
However, it bears remembering that AI is also becoming a real-world economic fact with major implications for national and regional economic development as the U.S. crawls out of the COVID-19 pandemic.
There is widespread awareness among researchers, companies, policy makers, and the public that the use of artificial intelligence (AI) and big data raises challenges involving justice, privacy, autonomy, transparency, and accountability. Organizations are increasingly expected to address these and other ethical issues. In response, many companies, nongovernmental organizations, and governmental entities have adopted AI or data ethics frameworks and principles meant to demonstrate a commitment to addressing the challenges posed by AI and, crucially, guide organizational efforts to develop and implement AI in socially and ethically responsible ways.