299€ or 2 x 149€
Get off the Jupyter Notebooks and take your algorithms to the real world!
⭐️ Edgeneer's Land Price: 245€
If you want to work on self-driving cars, you're gonna have to realize that...
Dear Engineer, if you want to learn the one skill that will help you connect with real self-driving car and autonomous tech companies, then this page will show you how...
Here's a quick story to kick things off:
A few years ago, I was living my last week at a company. As is often the case, I had mixed feelings: anticipation for the future, and nostalgia at closing an existing chapter. I wanted to look forward, and my future was promising: I was going to become a self-driving car engineer. Yet, something bothered me about what I was leaving behind...
"What will happen to my code?" I asked.
"Is anyone going to maintain it? Do you need me to write documentation? Is there anything I can do to facilitate the transition? I could write someth—"
— "Oh, you can leave your computer to the IT. They'll reset it."
And that phrase was all it took for me to understand: all those hours of work, those months of meetings, projects, debugging, and presentations wouldn't be continued... they would simply delete the whole thing!
All of it, simply erased from existence.
But as this chapter closed, a new one opened: self-driving cars. I was now deep into learning Computer Vision, Sensor Fusion, and AI algorithms for self-driving cars. I was just a few months in, but I already felt like my work would have an impact, and that my skills would be useful not just to one company at one point in time, but to hundreds, thousands, and maybe tens of thousands of people for generations.
Yet, despite my excitement, and after many interviews, I couldn't find a job. I was just not ready yet. Each interview made me feel more like a "rookie". In some of them, I was literally taking notes, learning from their managers how self-driving cars worked. I couldn't help but notice a massive gap between what I was trying to learn and the reality of the field.
For example, unlike what I thought, I learned that Object Detection was not done in 2D, but in 3D. Sensor Fusion was not done late, but early. Localization was not done via Particle Filters, but via Extended Kalman Filters. Full-frame Segmentation was not really used.
And even bigger? The one thing that connects them all is that...
This is possibly the biggest takeaway I got from all these interviews. They all had this in common: ROS. It was the common requirement for joining the field. If I wanted to have this kind of impact, I had to learn the rules of the game, and ROS was part of it. Looking back, the period I got my job as a self-driving car engineer was essentially right after I started to build ROS projects — not just show a detection algorithm, but a complete system.
This diagram on the right is from an open source self-driving car foundation called Autoware. I had the pleasure to meet them in late 2023, and I was amazed by their system, elegantly built with ROS2.
Notice all these folders? They each correspond to a brick of a self-driving car software. We have the Perception, Localization, Mapping, Planning, Control, Sensing, Simulation, and more...
But they aren't just pieces of code — they are pieces of code integrated with the system.
And this is, I think, one of the biggest skills you can build: integration of code in a system. Not only is it a mandatory skill for a self-driving car engineer, but it also helps you see the bigger picture, more like a CTO — rather than an engineer working on one program.
How do you connect your cameras to your detector? How do you make your algorithm communicate with other nodes? How do you synchronize sensors? How do you get that LiDAR to be fused with a GPS? Is ROS the only skill you should build? Is it ROS 1 or ROS 2? What part of ROS are self-driving cars really using?
You see, you could learn about ROS, but there is ROS, and there is ROS for self-driving cars. Because a self-driving car also has real-time issues. Security concerns. A quality of service it must honor. And more than this, some companies work with middlewares other than ROS. Some engineers use lower-level systems. Which systems? How?
It took me years to get a good idea of how to answer these questions — and frankly, I'm still constantly learning. Back in 2020, I created a course on ROS that helped many engineers "connect" with the field thanks to its relevance — it wasn't just about ROS, but about ROS for self-driving cars.
Today, years later, it's time for an incredible revamp — a v2 that will not only teach you ROS, but also the more in-depth knowledge I learned from the field about architectures, systems, and how real self-driving cars are built...
Introducing...
For aspiring self-driving car engineers who want to build the mandatory skill of ROS, while learning how real self-driving cars are built...
Let me show you the 3 core modules you'll learn in this course:
MODULE 1
In this first module, we're going to learn about the Robot Operating System (ROS) — and you'll see how to export your algorithms to a ROS project. This means being able to make your code communicate with real sensors and real self-driving car data.
What you'll learn in Module 1:
Exactly how much you need to know about ROS to become a self-driving car engineer — and a checklist of things you should know before applying for a job
The main operations you need to run on a "ROS BAG" — a recording of self-driving car events — and how to visualize every single bit of data a self-driving car is processing in real-time
How to visualize point clouds in a 3D ROS visualizer (named Rviz) — and the critical configurations you need when visualizing data from a real LiDAR (although every LiDAR emits a point cloud, each manufacturer has its own custom messages; we'll see how to adapt them to your robot)
An "on the fly" analysis of 5 Robotics Software Engineer Job Offers — and the 3 hard skills every robotic engineer needs to have (I have been able to verify this information while giving a seminar in India and confirming that ROS was at least one of the skills everyone had)
Why it's so hard to understand an open source self-driving car code, and the truth about the use of Docker in self-driving cars and autonomous systems
An introduction to Docker for Robotics, and the minimal knowledge you need in order to "pass" (we'll also do a full analysis of a Dockerfile, and learn exactly how to build a Docker image with ROS installed)
How to connect to a ROS container in 2 minutes with almost no setup — and everything you need already preinstalled (the use of Docker in self-driving cars has been highly overlooked by engineers; when you look at open source code, there's Docker everywhere. We'll therefore spend some time understanding Docker, building images, creating networks, and more...)
An introduction to ROS and the core things you should know about it (in this part, we'll see a self-driving car version of what tutorials do when they get you to create packages, nodes, topics, launchers, etc... — but with a more concrete example)
An intro to communication patterns, and when to use Publish/Subscribe versus Client/Server (hint: some algorithms, like object detection, may require constant communication with sensors, while others, like mapping, might only request a computation once)
How to modify your algorithms (Object detection, LiDAR Perception, HydraNet, 3D Tracking, ...) and make them work on almost ANY ROS system (in this part, we'll talk more about the ROS Client Library, Subscription to Camera and LiDAR sensors, synchronization, messaging, and more)
Why OpenCV doesn't work out of the box with ROS, and a rookie mistake beginners make when using the CvBridge library
The differences between ARM and AMD64 (x86) computing architectures, and why the world is slowly transitioning to ARM (we'll also see this in the context of Docker, and a command to create a Docker container for ARM environments)
How to build and run a ROS Node, and a TODO list of things to do (command lines, files to modify, ...) when creating a new package.
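The Publish/Subscribe versus Client/Server distinction from the list above can be sketched in a few lines of plain Python — no ROS required. The `Broker` class, topic names, and service function below are illustrative stand-ins, not ROS APIs:

```python
# A minimal, ROS-free sketch of the two communication patterns.
# Names (Broker, map_update_service) are illustrative, not ROS APIs.

class Broker:
    """Publish/Subscribe: publishers push to a topic; all subscribers react."""
    def __init__(self):
        self.subscribers = {}          # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.subscribers.get(topic, []):
            cb(msg)                    # every subscriber gets every message

# Pub/Sub fits object detection: the camera streams frames continuously.
broker = Broker()
detections = []
broker.subscribe("/camera/image", lambda frame: detections.append(f"boxes for {frame}"))
broker.publish("/camera/image", "frame_001")
broker.publish("/camera/image", "frame_002")

# Client/Server fits one-off requests, e.g. "recompute the map now".
def map_update_service(request):
    return f"map rebuilt around {request}"

response = map_update_service("pose=(12.0, 4.5)")
```

The design trade-off mirrors ROS: topics decouple producers from consumers (the camera doesn't know who listens), while a service call blocks until its single response comes back.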
PROJECT: 🚁 Implement YOLOv8 on a Google Colab Notebook, and export it to a Robotics environment
This project closes our first module. This entire module is the equivalent of my ROS 1 course — but it runs on ROS 2, and includes important information the original course overlooked. In the next module, we'll take it even further and see how to be a robotics "integrator" — taking code from various places and making it work with your algorithms.
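On the CvBridge point from the module above: one classic pitfall when mixing OpenCV and ROS images is channel order, since OpenCV stores pixels as BGR while many pipelines expect RGB. Whether or not that is the exact mistake the lesson covers, here is a minimal, dependency-free illustration (the `bgr_to_rgb` helper is hypothetical):

```python
# Illustration of the BGR-vs-RGB pitfall (pure Python, no OpenCV/ROS needed).
# OpenCV stores pixels as BGR, while many ROS image pipelines and neural nets
# expect RGB; forgetting the conversion silently swaps red and blue.

def bgr_to_rgb(image):
    """Reverse the channel order of every pixel: [B, G, R] -> [R, G, B]."""
    return [[pixel[::-1] for pixel in row] for row in image]

# One red pixel as OpenCV would store it: blue=0, green=0, red=255.
bgr_image = [[[0, 0, 255]]]
rgb_image = bgr_to_rgb(bgr_image)   # -> [[[255, 0, 0]]]
```

The nasty part is that nothing crashes when you get this wrong — the image just looks subtly off, and a detector trained on RGB quietly underperforms.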
MODULE 2
In this second module, we'll see how to assemble self-driving car software by combining multiple projects... no matter whether you wrote them yourself or found them online, whether they're in Python or C++, or whether they use Docker.
What you'll learn in Module 2:
Project #1: Implementing YOLOv8's Segmentation & Detection HydraNet on ROS — we'll see how to clone a Github repository that is NOT built for ROS, and make it ROS compatible (we'll actually see two ways to do it: a dirty but working one, and a more professional and robust one)
Project #2: Engineer a LiDAR to Bird's Eye View perception visualizer — we'll see how to process LiDAR point clouds and turn them into Bird's Eye View images (incidentally, we'll also see that these can then be processed by 2D CNNs for 3D object detection)
How to "stitch" several point clouds into a panoramic image (this has nothing to do with ROS, but we'll use this project to work on LiDARs and it's therefore an interesting opportunity to take a look at it)
An exclusive look into GPS and localization for self-driving cars, and how to "GPS map" an environment to identify possible blind spots (we'll also see how to tell apart a Russian GPS from a European GPS, and talk more about signal quality)
What is Earth's geomagnetic field, and how it can impact the development of a self-driving car localization module (hint: Earth's magnetic north pole moves every year, which affects compass measurements — meaning a localization module built 10 years ago could fail today; we'll see how to anticipate this)
An introduction to Inertial Measurement Units for self-driving car localization, and the difference between an IMU and an Odometer
How to replicate the Odometry of a self-driving car from GPS information, and how you could implement localization fusion if you had an odometer, barometer, or a Visual Odometry algorithm
How to visualize a live GPS trace on Google Maps using Docker and ROS (this is very cool — more on this below)
Project #3: Implement a 3D Sensor Fusion system that uses an Extended Kalman Filter to fuse an IMU and GPS and accurately locate on a map
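To give a feel for Project #2 above: conceptually, a Bird's Eye View image is just the point cloud binned into a top-down grid. A minimal sketch, where the grid ranges and cell resolution are illustrative values, not the course's actual parameters:

```python
# A minimal sketch of projecting LiDAR points into a Bird's Eye View grid.
# Ranges and cell size are illustrative, not the course's actual values.

def lidar_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Bin (x, y, z) points into a top-down occupancy grid (1 = occupied)."""
    rows = int((x_range[1] - x_range[0]) / cell)   # forward axis
    cols = int((y_range[1] - y_range[0]) / cell)   # lateral axis
    grid = [[0] * cols for _ in range(rows)]
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            r = int((x - x_range[0]) / cell)
            c = int((y - y_range[0]) / cell)
            grid[r][c] = 1                          # mark the cell occupied
    return grid

# Two nearby returns (e.g. off the same car) land in the same 0.5 m cell:
bev = lidar_to_bev([(10.0, 0.0, 1.2), (10.2, 0.1, 0.4)])
```

Real BEV encodings usually store height, intensity, and point density per cell instead of a binary flag, which is what makes them digestible by ordinary 2D CNNs.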
Let's take a short break here.
You see this last project? It's actually the coolest (in my opinion) of the entire module. In this project, we'll fuse an IMU with a GPS to localize your self-driving car with centimeter-level accuracy. We'll also use Map Visualization modules to get a live view of our work; here's an example, following the GPS trace in real-time:
Visualizing your GPS signatures in real-time
Extended Kalman Filter for Localization
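To make the fusion idea concrete, here is a drastically simplified, 1D stand-in for that project: the IMU drives the predict step (dead reckoning), and the GPS drives the update step. All noise values are illustrative, and a real EKF tracks a full state vector (position, velocity, heading) rather than a single scalar:

```python
# A heavily simplified, 1D stand-in for an IMU+GPS Kalman filter.
# Noise values q (process) and r (GPS) are illustrative, not tuned.

def kf_step(x, P, accel, dt, gps=None, q=0.5, r=4.0):
    """One predict(+update) cycle for position x with variance P."""
    # Predict: integrate the IMU acceleration (velocity folded in for brevity).
    x = x + accel * dt * dt
    P = P + q                      # uncertainty grows while dead reckoning
    if gps is not None:            # Update: blend in the GPS fix
        K = P / (P + r)            # Kalman gain: trust GPS more when P >> r
        x = x + K * (gps - x)
        P = (1 - K) * P            # uncertainty shrinks after a measurement
    return x, P

x, P = 0.0, 1.0
x, P = kf_step(x, P, accel=2.0, dt=1.0)           # IMU-only: drift accumulates
x, P = kf_step(x, P, accel=2.0, dt=1.0, gps=4.2)  # GPS fix pulls estimate back
```

The key behavior to notice: variance `P` grows on every IMU-only step and shrinks whenever a GPS fix arrives — that alternation is exactly what lets the fused estimate beat either sensor alone.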
MODULE 3
In this final module, we'll learn how to design an architecture for an autonomous system. This requires a deep understanding of what's under the hood of ROS, how to make tasks real-time and more secure, and how to read the architectures of existing software.
What you'll learn in module 3:
Why ROS 1 is for rookies but ROS 2 isn't, and the key differences between ROS 1 and ROS 2 (other than API differences and function call changes, we'll see what's under the hood of ROS 1, and why it is great for research, but not for industrial purposes)
What is "hard real-time" in self-driving cars, and how to technically "pass" the regulator's requirement of computations, real-time, memory, security, and more...
What is a deterministic system, and how to make stochastic systems deterministic — and thus comply with regulation (this may lead us to black-box systems like End-to-End Learning, and why they can be problematic in a real-world environment)
How to make ROS faster by removing ROS (hint: ROS is a high-level API, we can get rid of some useless components, and only keep the lower level communication protocols)
What is an RTOS (Real-Time OS), and why self-driving car companies are crazy about using it in their architectures (warning: an RTOS is a different kind of OS that has the power to make your software achieve "hard real-time" — handle with care)
An introduction to DDS (Data Distribution Service), the true middleware behind ROS 2, and a detailed analysis of all its components
How to change the middleware of ROS, and use low-level command lines to ensure a higher quality of service, security, and prevent failure, crashes, and random bugs
What to do if you join a company that uses a different ROS version than yours, and the untold story of a Robotics Engineer who joined a Robotics company and discovered that both ROS 1 and ROS 2 were used (while it may look like ROS is now ROS 2, 80% of companies are still on ROS 1, and a good 99% of them use both ROS 1 and 2)
The 6 files used by DDS to cyber secure a self-driving car's communication systems, and what they mean
Exactly what Hardware manufacturers (LiDAR, RADARs, ...) use to process the raw data and provide you with filtered information (we'll also see how they turn this data into detections)
Why it can be a terrible idea to implement a centralized architecture with every sensor pointing to the same computer, and 2 alternative sensor designs you can use instead
Architecture Breakdown #1: The story of how my team used Apollo Auto and Cyber RT on an autonomous shuttle, and every step we had to go through to deploy Apollo on our own autonomous shuttle (Cyber RT is an equivalent of ROS fitted for self-driving cars — and because we'll understand middlewares better by then, we'll be able to study it much faster)
Architecture Breakdown #2: A deep dive into Autoware's Node Diagram, and every algorithm they use to implement a real self-driving car
And many more...
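As a taste of the "hard real-time" discussion above: the defining property is that a single missed deadline counts as a system failure, not just degraded performance. A toy monitor illustrating the concept (the function and the 10 ms budget are made-up examples — real guarantees come from an RTOS scheduler, not from after-the-fact checks):

```python
# A toy illustration of the "hard real-time" idea: flag any control cycle
# that exceeds its time budget. Function name and budget are illustrative.

def check_deadlines(cycle_times_ms, budget_ms=10.0):
    """Return the indices of control cycles that missed their deadline."""
    return [i for i, t in enumerate(cycle_times_ms) if t > budget_ms]

# In a soft real-time system, occasional misses are tolerated; in a hard
# real-time system, even one miss (cycle 2 below) is a failure.
misses = check_deadlines([4.2, 8.9, 12.5, 6.1])   # -> [2]
```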
Daniel P, Robotic Architect
"I wasn't sure if the course was worth the price, or if I should buy some other course from your offer and learn ROS for free from the internet. But the focus on integrating algorithms into full self-driving car systems using ROS was exactly what I needed. The hands-on interesting projects, like implementing YOLOv8 on ROS and visualizing birds eye view LiDAR data, provided practical experience that bridged the gap between theory and real-world application.
What I liked the most was the teaching of how to convert algorithms from Jupyter Notebooks to a ROS environment, and covering advanced topics like point clouds and real-time localization; it bridges the gap between theoretical knowledge and industry requirements, making it highly relevant for me.
I found out what the message types are, how to run the ros bag and that you can run ROS on the docker but I think it would be useful to show how to be able to use the graphics card in ROS on docker.
I would recommend it. This course emphasises systems integration, real-world projects and advanced technologies. In addition, it focuses on understanding and building complex architectures, such as Autoware and Apollo, which is key to developing a career in autonomous systems."
Ishita P, Robotic Architect
"I thought this was an excellent course. Just enough to get me started with my own ROS projects.
I liked how ROS docker installation was a part of the course. Because I have not seen any ROS courses that I have come across till now explaining about containers, networks and installing ROS as a container.
Also, the course helped me to get more acquainted with ros bags and doing projects with different ros packages and combining it with a ros bag radar lidar. Also I was very much excited to be introduced to other visualisers and I was able to visualise directly from lidar sensor which was an extremely beneficial learning for me.
I would definitely recommend this product because of its duration. It doesn't require me to sit for hours and months to learn a basic thing. At the same time its full of important topics just enough to get one started with live projects."
Here's what you should know before joining the course...
This course is open to every engineer who has practical knowledge of self-driving cars but not of ROS. It will help you learn the fundamental concepts, tailored to the self-driving car field.
A few prerequisites:
Coding in Python, and ideally C++ (some of our projects will be in C++, it shouldn't be a blocking point though)
A good understanding of self-driving cars (what is Perception, Localization, ...) and its algorithms (object detection, point clouds processing, ...) is a plus
You'll go much faster if you have practical knowledge of self-driving car algorithms — it's not a requirement though
You can handle Linux, and have experience with basic command lines
Now, a note for existing ROS users:
If you already know ROS, keep in mind that this is a ROS course for people with no knowledge of ROS. While it contains many insightful stories and modules that could help you build skills in self-driving cars, realize that you will probably encounter some familiar concepts, especially in Module 1 — you would need to decide whether it's worth the investment or not.
The goal of this course is to help you understand how to build a self-driving car system. Not just one algorithm, but several of them working together. Therefore, we'll see many different algorithms, and rather than diving deep into each of them, we'll look through an architect's lens and see how they can work together in a system.
The list of ROS commands? Google it! Again, this isn't a course on ROS, and I don't think there is a lot of value in teaching you exactly how to build a robotic arm with URDF and these things — because they're already well covered in free tutorials. I want this course to be a self-driving car approach to ROS, and teach you things you can't really know unless you work in the field.
Regarding duration:
My goal is to have you operational (you can export any project to ROS, add more projects to the system, and be independent to keep learning on your own) as quickly as possible. I don't want you to spend 50 minutes on a single ROS command, then 40 on another, etc... If this is a bloating contest, I'm definitely not winning it. I want you to be ready to work with ROS companies in the next 5 to 10 hours.
Here are a few "red flags" where you could consider that you would NOT be a good fit for the course:
🔻 You already know ROS perfectly, ROS 1, ROS 2, you use it every day at work. This case is discussed above in the requirements.
🔻 You don't have the patience to debug and figure things out on your own, and you expect everything to work perfectly on the first try. This course is highly practical and involves a lot of layers (Docker, ROS, GPS Map Visualizers, etc...) — while we've tested our code 100 times on every type of computer, expect things to break.
🔻 You have zero knowledge of self-driving cars, have no clue what Perception is, and are a complete beginner in everything AI and self-driving car related. In this case... I can't believe you read this far!
🔻 You are looking for a complete ROS bible containing every single ROS command ever invented, with associated 50-minute video lessons showing someone writing command lines. Sorry, I don't do 300+ hour courses (and neither should you); if anything, this course tries to get you to the objective in as little time as possible.
🔻 You don't plan to work on self-driving cars or on real physical robots. In this case, I'd go to my other courses that are more about the algorithms.
On the other hand, you ARE an excellent fit if:
✅ You have already worked on self-driving car algorithms (Perception, AI, LiDAR, Localization, ...) and even a bit on ROS (1 or 2), but haven't really been able to grasp ROS-related skills or get a job in the field.
✅ You want to take a CTO approach to self-driving cars and focus on architectures, middlewares, OSI layers, and self-driving car software, equally or more than about ROS or a particular tool
✅ You have failed robotics job interviews because of a lack of practical experience, but didn't get specific feedback (hint: oftentimes, when you don't get specific feedback, it's simply because you lack "disqualifying" skills, such as ROS, coding, etc...)
✅ You're learning about self-driving cars in an isolated environment, and have never tried to work with real data, or to make programs communicate with each other
✅ You want to have the technical ability to work with self-driving car companies and teams without ROS being an obstacle in your journey
This course is unique because it is laser-focused on teaching ROS for self-driving car architectures. While other courses teach ROS in a more general way, we focus on self-driving car architectures using ROS. This focus is important because we don't learn ROS for the sake of learning ROS — we learn ROS for the sake of building a deep understanding of self-driving car software. This allows you to think more like a CTO than a programmer, which elevates your understanding of self-driving cars.
📍 Think of it like a GPS rather than a map. Other tutorials might show you every single command line, every single possibility, without telling you which is important to you and which isn't. Without a goal, everything is equally important. Therefore, you spend 50 minutes on each module. This is a map approach.
Our GPS approach means we filter out 80% of the irrelevant information and explain how to use the core ROS skills to build self-driving cars.
If you build a good understanding of ROS, you should be able to figure out how to learn more about it on your own after the course. Again, we want you to understand ROS enough to be operational — while still working on advanced projects...
An example?
In my ROS v1 course, I adopted a similar approach. Even though the course was less detailed and contained fewer cutting-edge projects, it was still a "GPS" rather than a map.
Edgeneer Mario Bergeron took the ROS v1 course, grasped the core information about ROS in just a few hours, and, when needed, was able to learn ROS-related topics on his own.
Mario Bergeron, Machine Learning Specialist @Avnet
"Jeremy's ROS Course was exactly what I needed to get started with ROS. Using the very valuable hands-on labs with an embedded platform, I quickly acquired the basics of ROS. I feel that I now know enough to continue learning myself."
— Two years later, he sent me this in reply to an email where I talked about ROS 2, and how, if you took my ROS 1 course, you could figure out ROS 2 without having to re-do 50 hours on it:
"Here is some preliminary/exploratory work I was able to do, following your ROS1 course:
As you said, I was able to “figure out” the move to ROS2."
You can find more of Mario's blog posts and ROS seminars online.
But for now...
ROBOTIC ARCHITECT costs 265€, and its price will keep going up in the future. In the months ahead we'll keep updating it with new content and new DLCs, and the price and value will only increase over time. As is the case for all of my launches, the best price you can get is during launch week.
This course is made for you if you want to become a self-driving car or autonomous system engineer, and if you currently work mainly on local and isolated systems. It's for you if you want to learn more about architectures, and build the high-level view of how a self-driving car works.
299€ or 2 x 199€
Get off the Jupyter Notebooks and take your algorithms to the real world!
⭐️ Edgeneer's Land Price: 245€