This is part 2 of Natalie’s Permission to Be Uncertain series.
The interviews in this series explore how today’s AI practitioners, entrepreneurs, policy makers, and industry leaders are thinking about the ethical implications of their work, as individuals and as professionals. My goal is to reveal the paradoxes, contradictions, ironies, and uncertainties in the ethics and responsibility debates in the growing field of AI.
I believe that validating the lack of clarity and coherence may, at this stage, be more valuable than prescribing solutions rife with contradictions and blind spots. This initiative instead grants permission to be uncertain, if not confused, and provides a forum for open and honest discussion that can help inform tech policy, research agendas, academic curricula, business strategy, and citizen action.
Interview with David Clark, Senior Research Scientist, MIT Computer Science & Artificial Intelligence Lab
Artificial intelligence has emerged from its most recent winter. Many technical researchers now face a moral dilemma as they watch their work find its way out of the lab and into our lives in ways they had not intended or imagined and, more importantly, in ways they find objectionable.