249€ or 2 x 125€

LEARN VISUAL FUSION: Expert techniques for LiDAR Camera Fusion in Self-Driving Cars

An advanced course for engineers who want to master sensor fusion in 2D and 3D.

The Handbook to navigate between LiDARs and Cameras, between 2D and 3D, and to build a strong understanding of the Sensor Fusion Algorithms implemented by the most advanced Self-Driving Car Companies!

If you'd like to learn how Visual Sensor Fusion works between LiDARs and Cameras, and if you'd like to build a strong understanding of 3D Perception systems in the self-driving car field, then this page will show you how.
Here's the story:

A few years ago, my dream was to become a self-driving car engineer.
A dream?
It wasn’t just a dream.
I was determined, focused, and I sincerely believed in it despite what everybody was telling me — I was seeing all of these engineers learning new skills online and making it their day to day job.

If they could do it, so could I!

I was learning everything related to Perception: from Computer Vision, to Object Detection, to Sensor Fusion, to Prediction, to Kalman Filters, to Deep Learning, to SLAM, ...

It was all very inspiring, but still somewhat shallow:

How can you turn “I know how Deep Learning works” into “I’m a Self-Driving Car Engineer”?

No matter how much I tried, the knowledge I had was very basic and most of all, it wasn't getting me any closer to what companies were sharing online.

You know, things like this:

This is what I wanted to do.
That exact picture!
But how?

How do we move from basic Deep Learning knowledge to the picture above?

Here's something to understand first: 
  • There is no such thing as a self-driving car engineer! 

There are LiDAR Engineers, Sensor Fusion Engineers, Computer Vision Engineers, Motion Planning Engineers, and so on...

And while you may already have some skills in one of these domains... You may not yet understand how to tie all of these topics together!

How to navigate between 2D and 3D? How to visualize in 2D, but do the fusion in 3D? How to translate one sensor to another? How to deal with the geometry, projection matrices, and maths involved? How to visualize a 3D scene using 2D outputs?

There are hundreds of questions once you try solving the above picture and mixing sensors and algorithms together.

These are eliminatory questions; if you can't answer them, it takes a second to get labeled as a rookie.

Because Self-Driving Cars aren't about 2D object detection; they're about fusing several sensors together to drive.

How to learn about sensors

As we just said, every self-driving car engineer has their specialty.
But do you want to know one of the things they share in common?
  • They all know how to work with sensors!
It's even more common than Deep Learning. From plugging in a camera, to collecting RADAR waves, to generating a point cloud... This skill wasn't a "nice to have", it was second nature to every self-driving car engineer I knew!

Better:

This skill is so important because it's the link between every sensor, and it's the final outcome of your perception system.

And yet, hundreds of thousands of AI and Self-Driving Car Engineers don't know how to deal with sensors; they can perfectly detect objects in an image, but they will spend hours just plugging a camera and calibrating it.

So, how do you think they'll do when dealing with several sensors, all overlapping with each other, with different lenses and frame rates, and outputting results in different dimensions?

If you want to start working on self-driving cars, you're gonna need to understand that it's first and foremost about sensors.

And those who will know all about it...

Will simply outperform the competition!

Compare an engineer coming to an interview with a YOLO object detector to another one coming with a Visual Fusion algorithm that can fuse a 3D LiDAR with a 2D Camera.

It's just not the same category!
If the first one is a featherweight rookie, the other one is a world heavyweight champion!

And this is exactly what we'll learn in this course:

Introducing...

LEARN VISUAL FUSION: Expert techniques for LiDAR and Camera Fusion in Self-Driving Cars

Designed for future cutting-edge engineers who have an existing background in image processing, or even point cloud processing, and wish to work on Sensor Fusion technologies.

Let me show you the Program:

1 — Introduction to Sensor Fusion

Learn about LiDARs, Cameras, and about the Sensor Fusion algorithms we can use to fuse data from multiple sensors.
  • 3 overlooked camera fundamentals most engineers don't use that could 10x their potential and transform their Computer Vision Engineer position.

  • An exclusive look into LiDARs (Light Detection And Ranging), including how LiDARs work, the difference between 2D and 3D LiDARs, how to read a Point Cloud, 9 different types of file format to work with, and more...

  • A concise introduction to RADARs (This is optional and has nothing to do with Visual Fusion, but I wanted your understanding of LiDARs, cameras, and RADARs to be complete.)

  • A simple "laundry list" of camera concepts you must learn before starting a Computer Vision Engineer position.

  • Early vs Late Sensor Fusion: How to select a Sensor Fusion algorithm and what are self-driving car companies using.

  • A sure way to calibrate a camera and retrieve accurate intrinsic and extrinsic parameters (if this step goes wrong, don't even think about fusion)

  • The well-known Advantages and Drawbacks of using Sensor Fusion, including how to use RADARs, Cameras, and LiDARs, but also how to mathematically consider their individual advantages in your sensor fusion algorithm.

  • Why Early Fusion is dominating in 2022, and when you should not use it...

  • 9 types of Sensor Fusion algorithms (some of these also include Camera-To-Camera Fusion, Image Panorama, 3D Reconstruction, ...)
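To give you a taste of the point cloud material, here's a minimal sketch of loading a LiDAR scan from disk. The four-floats-per-point binary layout shown here is the one used by the KITTI dataset; it's an illustrative example (with synthetic points standing in for a real scan), not code from the course itself:

```python
import numpy as np

# Many LiDAR datasets ship scans as flat binary files. KITTI, for example,
# stores each return as four float32 values: x, y, z, and intensity.
def read_bin_cloud(path):
    """Load a KITTI-style .bin scan into an (N, 4) NumPy array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# Round-trip demo with two synthetic points instead of a real scan.
points = np.array([[1.0, 2.0, 0.5, 0.8],
                   [3.0, -1.0, 0.2, 0.4]], dtype=np.float32)
points.tofile("demo_cloud.bin")

cloud = read_bin_cloud("demo_cloud.bin")
print(cloud.shape)  # (2, 4)
```

Other formats (.pcd, .ply, .las, ...) carry headers and metadata, which is why libraries like Open3D provide dedicated readers for them.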

2 — Point Pixel Fusion

Learn to project point clouds to 2D images and to fuse the data to build a 3D Object.


  • The 3 magic little steps to implement the Point-Pixel Fusion algorithm 80% of robotics and self-driving car companies use (project, detect, fuse) 

  • How to read a self-driving car sensor architecture diagram (including how to position sensors, where to position them ideally, what model of camera should you use in a self-driving car, ...)

  • The only 3D Visualization Library that works on Google Colab with Open3D (it took me weeks to find it!)

  • How to fuse sensors at different heights, of different types, and using different scales.

  • A "MAGIC FORMULA" to project a 3D Point from any LiDAR to a 2D Image coming from any Camera. (That formula works in most cases, but not all of them, so I also included in the code a REVERSE MAGIC FORMULA that will work in 100% of the cases. 🙃)

  • Exactly how to do the maths needed for 3D-2D PROJECTIONS. (We'll look at every matrix cell by cell and understand it so deeply that you'll become autonomous on LiDAR Camera projections; no matter the sensors.)

  • The exact code to project a LiDAR point cloud to a camera image.

Let's pause here for a second. You can find lots of code available online, but it will all project a bunch of points to an image without you really understanding how it works.

In this course, I want you to have an in-depth understanding, which means we'll project A SINGLE POINT to an image, and we'll also do the maths manually to verify our work!
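To make the single-point idea concrete, here's a minimal sketch of a pinhole projection using homogeneous coordinates. The intrinsic matrix K and the identity extrinsics below are made-up example values, not calibration from any real sensor:

```python
import numpy as np

# Example intrinsics: 700 px focal length, principal point at (640, 360).
# These are illustrative numbers, not a real camera's calibration.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Identity extrinsics: assume the LiDAR and camera frames coincide.
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])

P = K @ Rt  # the 3x4 projection matrix

# A single 3D point in homogeneous coordinates: 2 m right, 1 m down, 10 m ahead.
X = np.array([2.0, 1.0, 10.0, 1.0])

x = P @ X              # homogeneous image coordinates
u, v = x[:2] / x[2]    # divide by depth to get pixel coordinates
print(u, v)  # → 780.0 430.0
```

You can check that result by hand, exactly as the course suggests: u = (700 × 2 + 640 × 10) / 10 = 780 and v = (700 × 1 + 360 × 10) / 10 = 430.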

  • The quickest known way to run a YOLO object detection algorithm (you'll then be able to use these 27 lines of code in any project as a black box).

  • 2 Outlier Detection Techniques I use to filter point clouds. (Skip this and you'll end up with dramatic distance errors, up to 3 meters, in your fusion output.)

  • An in-depth introduction to Homogeneous Coordinates and exactly how and when to use them when doing Projections.

  • Which distance is the "right" distance you should select (we'll go over 5 different selection types and I'll help you pick).

  • (Bonus) In-House Visual Fusion: How to use a real LiDAR and make it work on your computer to build your own in-house Visual Fusion project (I'll show you what type of LiDAR I own and where to purchase it for under $100).

  • ⌨️ Point Pixel Fusion Project: Inside the manufacturing of a self-driving car Early Fusion Algorithm.

This project will use a LiDAR and a camera, and here is what it looks like:

3 — Box To Box Fusion

Learn to fuse outputs from 2D and 3D algorithms to build a robust and efficient Sensor Fusion system.
  • An Introduction to Late Sensor Fusion in 5 steps:
    • Detect Objects in 2D (Camera)
    • Detect Objects in 3D (LiDAR)
    • Project the 3D Box in the Image
    • Fuse the Bounding Boxes
    • Build a 3D Object

  • How to use Late Fusion with 3D Camera and LiDAR Boxes (this will also work with 2D LiDAR boxes and 2D camera boxes).

  • The quick projection mathematics to change the dimensions of a bounding box.

  • An object tracking technique, now used in Sensor Fusion, to associate Bounding Boxes from one sensor to another.

  • How to build a LiDAR/Camera Fusion cost function (and how to optimize a late fusion algorithm using Kalman Filters).

  • ⌨️ BOX TO BOX FUSION Project: Build an algorithm that detects objects in 2D and 3D, and combines the outputs.

  • The only time you should ever use Late Fusion over Early Fusion (and why it can't be done any other way).

  • An introduction to Deep Sensor Fusion and an in-depth look at how Aurora (the Amazon-backed unicorn) implements it in its self-driving cars.

  • Two types of features you should use in your Deep Learning algorithms for fusion.

  • The unknown secrets of Multi-View Fusion and a clever trick you can use to fuse several LiDARs with several cameras.

  • How to physically scale a Sensor Fusion architecture using Satellite Fusion to use more LiDARs and more cameras without increasing the weight and computation power.

  • (BONUS) - The VISUAL FUSION MINDMAP that will help you come back to the course several months from now and pick up exactly where you were 5 minutes after completing it.

and many more...
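To give you a flavor of the box-association step in Late Fusion, here's a minimal IoU-based matching sketch: camera detections on one side, 3D LiDAR boxes already projected into the image on the other. The boxes and the 0.3 threshold are made-up example values, and the greedy loop is a simplification of what a production matcher would do:

```python
def iou(a, b):
    """Intersection-over-Union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0

# Example camera detections vs. projected LiDAR boxes (pixel coordinates).
camera_boxes = [[100, 100, 200, 200], [300, 120, 380, 220]]
lidar_boxes  = [[110, 105, 210, 205], [500, 100, 560, 180]]

# Greedy association: each camera box takes its best-overlapping LiDAR box.
matches, used = [], set()
for i, cam in enumerate(camera_boxes):
    scores = [(iou(cam, lid), j) for j, lid in enumerate(lidar_boxes)
              if j not in used]
    best, j = max(scores) if scores else (0.0, -1)
    if best > 0.3:  # arbitrary example threshold
        matches.append((i, j))
        used.add(j)

print(matches)  # [(0, 0)] — the second camera box finds no LiDAR match
```

In practice the greedy loop is often replaced by optimal assignment (e.g. the Hungarian algorithm via `scipy.optimize.linear_sum_assignment`), which is the same association machinery used in object tracking.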

It's not just about cars

Saying that the Sensor Fusion market is booming would be an understatement. You might not realize it, but most engineers working on Self-Driving Cars, Domotics, Robotics, AI, Computer Vision, or Augmented Reality are using sensors, and thus using sensor fusion.

Whether it's the recent Apple AirTags, the Amazon Go shops, or even medical devices, Sensor Fusion has been able to go way outside of the self-driving car field.

Can you imagine how many other everyday products use Sensor Fusion without you even noticing?

Sensor Fusion is not only one of the top skills aspiring Self-Driving Car Engineers must have, it's also a skill ANY aspiring cutting-edge engineer should master.

Below are a few questions you might have:

FAQs

What are the prerequisites to follow the course?

Here are the prerequisites:
  • Coding in Python
  • Maths - Matrix Multiplications, Linear Algebra, Calculus, ...
  • Deep Learning, CNNs, and 2D Object Detection
  • Understanding how Point Clouds work is a Plus, but not required

This course is an introduction to Sensor Fusion, but it's targeted at engineers who already meet the prerequisites. If you don't, please note that this program wasn't built with you in mind; it's designed specifically to help those who do acquire concrete and valuable skills building on their existing ones.

How much "LiDAR" should I know?

You might already have great Computer Vision skills, but not know a thing about LiDAR. While Point Clouds are a critical part of LiDAR Camera Fusion, all of the aspects needed to do the course are in the course.

You don't need to buy a LiDAR, the 3D data and Self-Driving Car Geometrical setups will already be provided through public datasets. We'll work as if we were just given a car with specific schematics.

How do I even have time for this?

Maybe you're already enrolled in another course or training, or have family commitments, or simply a lot of work. I totally get it - I'd also love to follow 50 courses per year, but I simply don't have time.

The last thing you can say about this course is that it's too theoretical, or too long. In your first hour, you'll be coding 3D Projection algorithms and implementing them in your web browser. If you have 5-10 hours, you can finish the course and be ready to apply your new skills right away.

⚠️ Again - The only reason you can be so skilled in such a short time is because you validate the prerequisites. If you don't have time, and can't find 5-10 hours to learn this skill, then don't join. 

If you don't feel like you can take 5-10 hours to learn one of the most important topics in self-driving cars, then simply don't even try.

Testimonials

During an exceptional collaboration with PyImageSearch, a sample of this course has been shared with their students!

Look at what they said just after completing a few lessons:

Jegathesan Shanmugam, Computer Vision Engineer | ADAS



Rangel Isaias Alvarado Walles, Engineer

Xavier Rigoulet, Data Scientist Professional

Vishwesh Ravi Shrimali

ADRIAN ROSEBROCK | CEO, PYIMAGESEARCH

"The best person to learn autonomous cars from"

"Last Week, Jeremy Cohen launched Visual Fusion for Autonomous Cars 101 inside PyImageSearch University. I cannot even begin to express the number of positive reviews we've gotten from the course.

Jeremy is an incredible teacher and the best person to learn autonomous cars from."

Are you ready to master Visual Fusion? 🔥

Tomorrow, when you wake up, you'll either be enrolled in the course, or not. Whatever you decide, you are in charge, and you make things happen.

The skills needed to learn Visual Fusion have been gathered in this course. What took me months to learn can take you a few hours, and maximize your chances of getting relevant and cutting-edge skills you'll feel EXCITED about and PROUD of.

249€ or 2 x 125€

LEARN VISUAL FUSION: Expert techniques for LiDAR Camera Fusion in Self-Driving Cars

Lifetime access to:

  • Introduction to Sensor Fusion

  • Point Pixel Fusion Approaches

  • Box To Box Fusion

Plus

  • The Visual Fusion MindMap that you can look at several months from now to remember everything as if you'd just learned it