AI Now Initiative

Researching the social impacts of artificial intelligence now to ensure a more equitable future


Led by Kate Crawford and Meredith Whittaker, AI Now is a New York-based research initiative working across disciplines to understand AI's social impacts.

AI Now | 2017 Symposium

Join us for AI Now’s second annual symposium on July 10, 2017, at the MIT Media Lab. This event is free and open to the public; it will also be live-streamed on our site.

*Tickets are currently sold out. Please sign up for our waiting list through the "register" button below, and we will email you when more tickets become available.*

AI Now Report

The AI Now Report provides recommendations that can help ensure AI is fairer and more equitable. It represents the thinking and research of the experts who attended our first symposium in 2016.

Keep in touch

We're new, but our work is underway. We would love to keep you involved and updated as we go. Follow @AINowInitiative on Twitter, or sign up for our mailing list.

Our research focus

Rights and liberties

As AI systems are employed in criminal justice, law enforcement, housing, hiring, lending, and many other domains, they have the potential to impact basic rights and liberties in profound ways. AI Now is partnering with the ACLU and other stakeholders to better understand and address these impacts.

Labor and automation

Automation and early AI systems are already changing the nature of employment, as well as the types of jobs and working conditions available across the world. AI Now works with social scientists, economists, labor organizers, and others to better understand AI's impact on work, examining who benefits and who bears the cost of these rapid changes.

Bias and inclusion

Data reflects the social and political conditions in which it is collected. AI is only able to "see" what is in the data it's given. This, along with many other factors, can lead to biased and unfair outcomes. AI Now researches and measures the nature of such bias, how bias is defined and by whom, and its impact on diverse populations.

Safety and critical infrastructure

As AI is introduced into our core infrastructures, like hospitals and power grids, the risks posed by errors and blind spots are very high. AI Now studies how AI is being applied within these infrastructures, and works to develop approaches for safe and responsible AI integration and use.