The deep learning world of artificial intelligence is obsessed with size.
Deep learning programs, such as OpenAI's GPT-3, consume ever more GPU chips from Nvidia and AMD — or novel kinds of accelerator chips — to build ever-larger software programs. Researchers contend that the programs' accuracy increases with size.
That obsession with size was on full display Wednesday in the latest industry benchmark results reported by MLCommons, which sets the standard for measuring how quickly computer chips can crunch deep learning code.
Google declined to submit to any of the standard benchmark tests of deep learning, which consist of programs that are well-established in the field but also relatively dated. Instead, Google's engineers showed off a version of Google's BERT natural language program, which no other vendor used.
MLPerf, the benchmark suite used to measure performance in the competition, reports results for two segments: the standard "Closed" division, where most vendors compete on well-established networks such as ResNet-50; and the "Open" division, which lets vendors try out non-standard approaches.