xianshou 3 days ago

The Illustrated Transformer is amazing as a way of understanding the original transformer architecture step by step, but if you want to truly visualize how information flows through a decoder-only architecture - from nanoGPT all the way up to a fully represented GPT-3 - nothing beats this:

https://bbycroft.net/llm

  • cpldcpu 3 days ago

    whoa, that's awesome.

ryan-duve 3 days ago

I gave a talk on using Google BERT for financial services problems at a machine learning conference in early 2019. During my preparation, this was the only resource on transformers I could find that was even remotely understandable to me.

I had a lot of trouble understanding what was going on from just the original publication [0].

[0] https://arxiv.org/abs/1706.03762

crystal_revenge 2 days ago

While I absolutely love this illustration (and frankly everything Jay Alammar does), it is worth recognizing that there is a distinction between visualizing how a transformer (or any model, really) works and what the transformer is doing.

My favorite article on the latter is Cosma Shalizi's excellent post showing that all "attention" is really doing is kernel smoothing [0]. Personally, having this 'click' was a bigger insight for me than walking through this post and implementing "Attention Is All You Need".
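
To make the correspondence concrete, here's a rough numpy sketch of one attention step next to a classic Nadaraya-Watson smoother (the Gaussian kernel, variable names, and toy data are my own choices, not anything from Shalizi's post or the paper):

    import numpy as np

    def kernel_smoother(q, xs, ys, h=1.0):
        # Nadaraya-Watson: estimate y at query point q as a
        # kernel-weighted average of the observed ys.
        w = np.exp(-np.sum((xs - q) ** 2, axis=-1) / (2 * h ** 2))
        return (w / w.sum()) @ ys

    def attention_one_query(q, keys, values):
        # Scaled dot-product attention for a single query: the same
        # weighted average, with softmax(q . k / sqrt(d)) as the kernel.
        scores = keys @ q / np.sqrt(keys.shape[-1])
        w = np.exp(scores - scores.max())
        return (w / w.sum()) @ values

    rng = np.random.default_rng(0)
    keys = rng.normal(size=(8, 4))    # 8 stored points in key space
    values = rng.normal(size=(8, 3))  # their associated values
    q = rng.normal(size=4)            # a query point
    print(kernel_smoother(q, keys, values))
    print(attention_one_query(q, keys, values))

The only real differences are the choice of kernel and the fact that in a transformer the queries, keys, and values are themselves learned projections of the input.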

In a very real sense, transformers are just performing compression and providing soft lookup functionality on top of an unimaginably large dataset (basically the majority of human writing). Seeing LLMs this way helps to better understand their limitations as well as their, imho, untapped usefulness.
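
The "soft lookup" reading falls out of the same sketch if you treat the keys/values as a stored table and sharpen the softmax; the toy memory and the temperature knob below are purely my own illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    keys = rng.normal(size=(5, 8))   # embeddings of 5 stored items
    values = np.eye(5)               # value i is a one-hot tag for item i
    query = keys[2] + 0.05 * rng.normal(size=8)  # noisy copy of key 2

    def soft_lookup(query, keys, values, temperature=1.0):
        # Attention as a differentiable dictionary: softmax over key
        # similarities mixes the stored values; a low temperature
        # approaches an exact nearest-key retrieval.
        scores = keys @ query / temperature
        w = np.exp(scores - scores.max())
        return (w / w.sum()) @ values

    print(soft_lookup(query, keys, values, temperature=5.0))  # blended values
    print(soft_lookup(query, keys, values, temperature=0.1))  # ~one-hot: retrieves item 2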

0. http://bactra.org/notebooks/nn-attention-and-transformers.ht...

  • mbeex a day ago

    Interesting read! As a mathematician I always had difficulties with AI jargon, even though I've been writing on neural networks since the late nineties.

    Around the time transformers emerged, I had only minimal contact with the field. I was merely aware of the appearance of the Vaswani paper, and only now have I returned to the subject in a way that requires more rigour. I stumbled upon "attention" in the same way as the author, and knowing more about the biological model did not help [1].

    Yes, kernels. I keep asking myself what software implementations of the superior colliculus, or of retinal cell complexes like DS (direction-sensitive) or OMS (object-motion-sensitive) cells, could provide.

    [1] for example: https://mitpress.mit.edu/9780262019163/the-new-visual-neuros...

jerpint 3 days ago

I go back religiously to this post whenever I need a quick visual refresher on how transformers work; I can’t overstate how fantastic it is.