Looking back at the 2010s, those years were characterized by the resurgence of Neural Networks and, in particular, Convolutional Neural Networks (ConvNets). Since the introduction of AlexNet, the field has evolved at a very fast pace. ConvNets have been successful thanks to several built-in inductive biases, the most important of which is translation equivariance.
In parallel, the Natural Language Processing (NLP) field took a very different path, with Transformers becoming the dominant architecture.
These two streams converged in 2020 with the introduction of the Vision Transformer (ViT), which outperformed classic ConvNets when trained on large datasets. However, the simple "patchify" layer at the beginning, which splits the full image into patches treated as tokens, made the ViT poorly suited to fine-grained applications such as semantic segmentation. Swin Transformers filled this gap with their shifted attention windows, which, amusingly, made Transformers behave more like ConvNets.
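To make the patchify idea concrete, here is a minimal sketch of that first layer, using plain NumPy rather than any particular framework: the image is cut into non-overlapping patches, and each patch is flattened into one token vector (the learned linear projection that follows in a real ViT is omitted here).

```python
import numpy as np

def patchify(image: np.ndarray, patch_size: int) -> np.ndarray:
    """Split an (H, W, C) image into non-overlapping patches,
    each flattened into a token of length patch_size**2 * C."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    p = patch_size
    # Reshape to (H//p, p, W//p, p, C), then group the two patch-grid
    # axes together before flattening each patch into a vector.
    patches = image.reshape(h // p, p, w // p, p, c).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * c)

tokens = patchify(np.zeros((224, 224, 3)), 16)
print(tokens.shape)  # (196, 768): a 14x14 grid of patches, 16*16*3 values each
```

With the ViT-Base defaults (224x224 input, 16x16 patches), this yields 196 tokens, and the coarse 14x14 grid is exactly why dense-prediction tasks suffer: spatial detail below the patch size is collapsed into a single token.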
At this point, a natural question arises: if researchers are trying to make Transformers behave like ConvNets, why not stick with the latter? The answer is that Transformers have long been considered to have superior scaling behavior, outperforming classical ConvNets on many vision tasks as datasets and models grow.