As she was looking for a camp last summer, Yabesra Ewnetu, who’d just finished eighth grade, found a reference to MIT’s FutureMakers Create-a-thon. Ewnetu had heard that it’s hard to detect bias in artificial intelligence because AI algorithms are so complex, but this didn’t make sense to her. “I was like, well, we’re the ones coding it, shouldn’t we be able to see what it’s doing and explain why?” She signed up for the six-week virtual FutureMakers program so she could delve into AI herself.
FutureMakers is part of the MIT-wide Responsible AI for Social Empowerment and Education (RAISE) initiative launched earlier this year. RAISE is headquartered in the MIT Media Lab and run in collaboration with the MIT Schwarzman College of Computing and MIT Open Learning.
MIT piloted FutureMakers last year with students from across the United States in two formats.
In the one-week, themed FutureMakers Workshops, each organized around a key topic in AI, students learn how AI technologies work, including their social implications, and then build something that uses AI.