As tech companies begin to weave AI into all their products and all of our lives, the architects of this revolutionary technology often can’t predict or explain their systems’ behavior.
Why it matters: This may be the scariest aspect of today's AI boom — and while it's common knowledge among AI's builders, it is little understood outside the field.
What's happening: For decades, we've used computer systems that, given the same input, provide the same output. Generative AI breaks that expectation: it builds randomness into how it chooses each word, so the same prompt can produce different answers.
The element of randomness in generative AI operates on a scale — involving up to trillions of variables — that makes it challenging to dissect how the technology arrives at a particular answer.
Driving the news: Four researchers published a paper Thursday showing that users can defeat "guardrails" meant to bar AI systems from, for instance, explaining how to make a bomb.
Between the lines: Since AI developers can’t easily explain the systems’ behavior, their field today operates as much by oral tradition and shared tricks as by hard science.
Of note: These systems can be tuned to be relatively more or less random — to provide wider or narrower variation in their responses.
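That tuning knob is commonly called "temperature": it reshapes the model's probability distribution before sampling. A minimal sketch of temperature-scaled softmax — the logits below are invented for illustration, not taken from any real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into probabilities.

    Lower temperature sharpens the distribution (narrower, more
    predictable output); higher temperature flattens it (wider variation).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate next words.
logits = [2.0, 1.0, 0.5, 0.1]

narrow = softmax_with_temperature(logits, 0.5)  # low temp: top choice dominates
wide = softmax_with_temperature(logits, 2.0)    # high temp: choices more even
```

At low temperature the model almost always picks its top-scoring word; at high temperature lower-ranked words get a real chance, which is what "wider variation in responses" means in practice.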