Last week’s announcement of AlphaCode, DeepMind’s source code–generating deep learning system, created a lot of excitement—some of it unwarranted—surrounding advances in artificial intelligence.
As I’ve mentioned in my deep dive on AlphaCode, DeepMind’s researchers have done a great job in bringing together the right technology and practices to create a machine learning model that can find solutions to very complex problems.
However, the sometimes overblown media coverage of AlphaCode highlights the endemic problem of framing the growing capabilities of artificial intelligence in terms of competitions meant for humans.
For decades, AI researchers and scientists have been searching for tests that can measure progress toward artificial general intelligence. And having envisioned AI in the image of the human mind, they have turned to benchmarks for human intelligence. Being multidimensional and subjective, human intelligence can be difficult to measure. But in general, there are some tests and competitions that most people agree are indicative of good cognitive abilities.