Looking back at the 2010s, those years were characterized by the resurgence of Neural Networks and, in particular, Convolutional Neural Networks (ConvNets). Since the introduction of AlexNet, the field has evolved at a very fast pace. ConvNets have been successful thanks to several built-in inductive biases, the most important of which is translation equivariance.
In parallel, the Natural Language Processing (NLP) field took a very different path, with Transformers becoming the dominant architecture.
These two streams converged in 2020 with the introduction of the Vision Transformer (ViT), which outperformed classic ConvNets when trained on large datasets. However, the simple "patchify" layer at the beginning, which splits the full image into patches treated as tokens, made ViT unsuitable for fine-grained applications such as semantic segmentation. Swin Transformers filled this gap with their shifted attention windows, which, amusingly, made Transformers behave more like ConvNets.
At this point, a natural question arises: if researchers are trying to make Transformers behave like ConvNets, why not stick with the latter? The answer is that Transformers have always been considered to have superior scaling behavior, outperforming classic ConvNets on many vision tasks.