Last week’s announcement of AlphaCode, DeepMind’s source code–generating deep learning system, created a lot of excitement—some of it unwarranted—surrounding advances in artificial intelligence.
As I mentioned in my deep dive on AlphaCode, DeepMind’s researchers did a great job of bringing together the right technology and practices to create a machine learning model that can find solutions to very complex problems.
However, the media’s sometimes overblown coverage of AlphaCode highlights the endemic problems with framing the growing capabilities of artificial intelligence in the context of competitions meant for humans.
For decades, AI researchers and scientists have been searching for tests that can measure progress toward artificial general intelligence. And having envisioned AI in the image of the human mind, they have turned to benchmarks for human intelligence. Being multidimensional and subjective, human intelligence can be difficult to measure. But in general, there are some tests and competitions that most people agree are indicative of good cognitive abilities.