For the past few decades, anxious parents, educators, and politicians have latched onto the idea that teaching kids to code would be a surefire way to prepare them for “the workforce of tomorrow.” But artificial intelligence is now starting to slowly but surely deflate the economic life preserver that coding was supposed to represent.
At first, this may seem counterintuitive. After all, A.I. is just software, and someone still has to write that software, right? Well, the answer is increasingly likely to be no.
Last year, Microsoft teamed up with OpenAI, the San Francisco research lab in which it owns a big stake, to create Copilot, a feature for its GitHub code repository that automatically suggests the next line of code for a program, or the best way to complete a line a human coder has started. That wasn’t going to displace coders any more than autocomplete in Microsoft Word displaces novelists. But it was a harbinger of things to come. Last week, DeepMind unveiled A.I. software it calls AlphaCode that can construct whole programs to complete novel tasks about as well as an average human coder.