Researching the social impacts of artificial intelligence now to ensure a more equitable future
Led by Kate Crawford and Meredith Whittaker, AI Now is a New York-based research initiative working across disciplines to understand AI's social impacts.
Check back soon for more.
Keep in touch
AI Now will host our second annual symposium on July 10, 2017, at the MIT Media Lab. Stay tuned for more details.
In the meantime, we invite you to peruse our archive of video, topic primers, and more from AI Now's 2016 Symposium, co-hosted with NYU and the Obama White House's Office of Science and Technology Policy.
The AI Now Report provides recommendations that can help ensure AI is fairer and more equitable. It represents the thinking and research of the experts who attended our first symposium, hosted in collaboration with President Obama's White House and held at New York University in 2016.
Read it here
We're new, but our work is underway. We would love to keep you involved and updated as we go. Follow @AINowInitiative on Twitter, or sign up for our mailing list.
As AI systems are employed in criminal justice, law enforcement, housing, hiring, lending, and many other domains, they have the potential to impact basic rights and liberties in profound ways. AI Now is partnering with the ACLU and other stakeholders to better understand and address these impacts.
Automation and early AI systems are already changing the nature of employment and the types of jobs and working conditions available across the world. AI Now works with social scientists, economists, labor organizers, and others to better understand AI's impact on work, examining who benefits and who bears the cost of these rapid changes.
Data reflects the social and political conditions in which it is collected. AI is only able to "see" what is in the data it's given. This, along with many other factors, can lead to biased and unfair outcomes. AI Now researches and measures the nature of such bias, how bias is defined and by whom, and the impact of such bias on diverse populations.
As AI is introduced into our core infrastructures, like hospitals and power grids, the risks posed by errors and blind spots are very high. AI Now studies how AI is being applied within these infrastructures, and works to develop approaches for safe and responsible AI integration and use.