  1. Segformers Exploration

    • Enter Segformers
  2. Attention & Transformers

    • Introduction to Segformers
    • What is Attention?
    • Recap: The Attention Timeline
    • Other Types of Attention
    • How to Visualize Attention Maps without Code
    • Transformers
    • Attention & Transformers Q&A
    • Segformer Networks
    • Mini-Workshop: Visually Understanding Query, Key, and Value
  3. Vault: Research Paper Study 🔐

    • Attention Is All You Need (2017)
    • An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2021)
    • SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers (2021)
    • Karpathy on Transformers
  4. Segformers Workshop

    • The Segformers Starter Block Workshop
    • Block 1 — Overlap Patch Embeddings
    • Block 2 — Efficient Self-Attention
    • Block 3 — Mix FFNs
    • Assemble: The Transformer Block
    • Workshop SegFormer: Part 1
    • Workshop SegFormer: Encoder
    • Workshop SegFormer: Decoder
    • Workshop SegFormer: Train & Run
    • Visualizing the Attention Maps
    • Bonus: Drivable Area Detection Notebook
    • Hooray!
