Artificial intelligence (AI) can devise methods of wealth distribution that are more popular than systems designed by people, new research suggests.
The findings, made by a team of researchers at UK-based AI company DeepMind, show that machine learning systems aren’t just good at solving complex physics and biology problems, but may also help deliver on more open-ended social objectives, such as the goal of realizing a fair, prosperous society.
Of course, that’s not an easy task. Building a machine that delivers beneficial results humans actually want – called "value alignment" in AI research – is complicated by the fact that people often disagree on how best to resolve all kinds of issues, especially social, economic, and political ones.
"One key hurdle for value alignment is that human society admits a plurality of views, making it unclear to whose preferences AI should align," researchers explain in a new paper, led by first author and DeepMind research scientist Raphael Koster.
"For example, political scientists and economists are often at loggerheads over which mechanisms will make our societies function most fairly or efficiently."
To help bridge the gap, the researchers developed an agent for wealth distribution that had people’s interactions (both real and virtual) built into its training data – in effect, guiding the AI towards human-preferred (and hypothetically fairer overall) outcomes.
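To make that idea concrete, here is a minimal toy sketch of what "optimising a redistribution mechanism against majority preference" can look like. This is not DeepMind's actual method or code: the mechanism parameter, the simulated voters with varying inequality aversion, and the hill-climbing loop are all illustrative assumptions.

```python
# Toy sketch (not DeepMind's pipeline): a redistribution "mechanism" is a
# single parameter r in [0, 1] controlling how much of a shared pot is split
# equally versus in proportion to contributions. Simulated voters with
# heterogeneous fairness preferences compare two candidate mechanisms, and
# the one that wins more votes is kept -- a crude stand-in for training an
# agent against human majority preferences.

import random

random.seed(0)

def payoffs(contributions, r, multiplier=1.6):
    """Each player's payoff under mechanism parameter r.

    The pot (contributions scaled by a public-goods multiplier) is split:
    a fraction r equally, and (1 - r) in proportion to contributions.
    """
    pot = multiplier * sum(contributions)
    total = sum(contributions) or 1.0
    n = len(contributions)
    return [r * pot / n + (1 - r) * pot * c / total for c in contributions]

def voter_utility(payoff_self, payoffs_all, inequality_aversion):
    """A voter likes their own payoff but dislikes overall inequality."""
    mean = sum(payoffs_all) / len(payoffs_all)
    inequality = sum(abs(p - mean) for p in payoffs_all) / len(payoffs_all)
    return payoff_self - inequality_aversion * inequality

def vote_share(r_candidate, r_incumbent, population, trials=200):
    """Fraction of simulated votes preferring the candidate mechanism."""
    wins = 0
    for _ in range(trials):
        contributions = [random.uniform(0, 10) for _ in population]
        pay_cand = payoffs(contributions, r_candidate)
        pay_inc = payoffs(contributions, r_incumbent)
        for i, aversion in enumerate(population):
            u_cand = voter_utility(pay_cand[i], pay_cand, aversion)
            u_inc = voter_utility(pay_inc[i], pay_inc, aversion)
            wins += u_cand > u_inc
    return wins / (trials * len(population))

# Voters differ in how strongly they dislike inequality (a plurality of views).
population = [random.uniform(0.0, 2.0) for _ in range(20)]

r = 0.5  # incumbent mechanism: half equal split, half proportional
for step in range(50):
    candidate = min(1.0, max(0.0, r + random.gauss(0, 0.1)))
    if vote_share(candidate, r, population) > 0.5:
        r = candidate  # adopt the mechanism the simulated majority prefers

print(f"Mechanism favoured by the simulated majority: r = {r:.2f}")
```

The toy loop only searches a one-dimensional space with synthetic voters; the actual research trained a deep reinforcement learning agent and gathered preferences from real human participants, but the underlying feedback signal – which mechanism wins the vote – plays the same role.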