AI Supercomputer
Artificial intelligence is on a tear. Machines can speak, write, play games, and generate original images, video, and music. But as AI’s capabilities have grown, so too have its algorithms.
A decade ago, machine learning algorithms relied on tens of millions of internal connections, or parameters. Today’s algorithms regularly reach into the hundreds of billions and even trillions of parameters. Researchers say scaling up still yields performance gains, and models with tens of trillions of parameters may arrive in short order.
To train models that big, you need powerful computers. Whereas AI in the early 2010s ran on a handful of graphics processing units—computer chips that excel at the parallel processing crucial to AI—computing needs have grown exponentially, and top models now require hundreds or thousands of GPUs. OpenAI, Microsoft, Meta, and others are building dedicated supercomputers to handle the task, and they say these AI machines rank among the fastest on the planet.
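To make the scale concrete, here is a rough back-of-envelope sketch, not from the article itself: it uses the commonly cited estimate of roughly 6 floating-point operations per parameter per training token, GPT-3's published size (175 billion parameters) and token budget (about 300 billion tokens), and an A100's peak throughput of about 312 teraFLOPS in BF16. The 40 percent sustained-utilization figure is an assumption.

```python
# Back-of-envelope: why training a frontier model takes thousands of GPUs.
# Assumptions (not from the article): ~6 FLOPs per parameter per training
# token, GPT-3-scale model and token budget, A100 peak of ~312 TFLOPS
# (BF16 tensor cores), and ~40% sustained utilization of that peak.

params = 175e9           # model parameters (GPT-3 scale)
tokens = 300e9           # training tokens (roughly GPT-3's reported budget)
total_flops = 6 * params * tokens

a100_peak = 312e12       # FLOPS, BF16 with tensor cores
utilization = 0.4        # assumed sustained fraction of peak
effective_flops = a100_peak * utilization

gpu_seconds = total_flops / effective_flops
gpu_days = gpu_seconds / 86_400

print(f"Total training compute: {total_flops:.2e} FLOPs")
print(f"Single A100: ~{gpu_days:,.0f} GPU-days")
print(f"1,000 A100s: ~{gpu_days / 1000:.0f} days")
```

Under these assumptions the total comes to about 3 x 10^23 FLOPs: roughly 29,000 GPU-days on a single A100, or about a month on a 1,000-GPU cluster, which is why training runs at this scale only make sense on dedicated supercomputers.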
But even as GPUs have been crucial to AI scaling—Nvidia’s A100, for example, is still one of the fastest, most commonly used chips in AI clusters—weirder alternatives designed specifically for AI have popped up in recent years.
Cerebras offers one such alternative.