The Ultimate Guide to Vision Transformers

A comprehensive guide to the Vision Transformer (ViT) that revolutionized computer vision

Hi everyone! For those who do not know me yet, my name is Francois, and I am a Research Scientist at Meta. I have a passion for explaining advanced AI concepts and making them more accessible.

Today, let’s dive into one of the most significant contributions in the field of Computer Vision: the Vision Transformer (ViT).

Converting an image into patches, image by author
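The patching step illustrated above can be sketched in a few lines of NumPy. This is an illustrative sketch (the function name and shapes are my own, not from the paper): it splits an image into non-overlapping 16×16 patches and flattens each one into a vector, which is what lets a Transformer treat an image as a sequence of "words".

```python
import numpy as np

def image_to_patches(image, patch_size=16):
    """Split an (H, W, C) image into non-overlapping patch_size x patch_size
    patches, each flattened into a vector of length patch_size*patch_size*C."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    n_h, n_w = h // patch_size, w // patch_size
    # (n_h, patch, n_w, patch, c) -> (n_h, n_w, patch, patch, c) -> (n_patches, dim)
    patches = image.reshape(n_h, patch_size, n_w, patch_size, c)
    patches = patches.transpose(0, 2, 1, 3, 4)
    return patches.reshape(n_h * n_w, patch_size * patch_size * c)

# A 224x224 RGB image yields 196 patches, each a 768-dimensional vector (16*16*3)
image = np.zeros((224, 224, 3))
print(image_to_patches(image).shape)  # (196, 768)
```

With the standard 224×224 input, this gives a sequence of 196 tokens, which is where the paper's title, "an image is worth 16×16 words", comes from.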

A bit of history first…

The Vision Transformer was introduced by Alexey Dosovitskiy et al. (Google Brain) in 2021 in the paper An Image is Worth 16×16 Words. At the time, Transformers, introduced in the seminal 2017 paper Attention Is All You Need, had proven to be the key to unlocking great performance on NLP tasks.

Between 2017 and 2021, there were several attempts to integrate the attention mechanism into Convolutional Neural Networks (CNNs). However, these were mostly hybrid models (combining CNN layers with attention layers) and lacked scalability. Google addressed this by completely eliminating convolutions and leveraging their computational power to scale the model.

The million-dollar question the paper answered is…