Hi everyone!
I’m making this post to share more updates and let you look ‘under the hood’ of our Bevy car simulation.
We’re currently running a single-camera setup for testing self-driving algorithms, both to keep complexity down and to avoid putting out half-baked features.
Our simulated car communicates with our server code over a WebSocket! We chose WebSockets because they offer bidirectional, persistent communication.
You might be wondering: why bidirectional? Well, the Python server doesn’t just get a video stream from the simulation; it also sends decisions back to it, so a bidirectional, persistent connection fits our use case perfectly.
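To make the two directions of traffic concrete, here is a minimal sketch of what could travel over such a connection. The message shapes, field names (‘frame’, ‘decision’, ‘steer’), and base64-over-JSON encoding are our illustration, not the project’s actual wire format:

```python
import base64
import json

# Hypothetical message helpers -- the field names ("frame", "decision",
# "steer") are illustrative, not the project's actual protocol.

def encode_frame(jpeg_bytes: bytes) -> str:
    """Simulation -> server: wrap a camera frame for the WebSocket."""
    return json.dumps({
        "type": "frame",
        "data": base64.b64encode(jpeg_bytes).decode("ascii"),
    })

def decode_frame(message: str) -> bytes:
    """Server side: unwrap the frame the simulation sent."""
    payload = json.loads(message)
    assert payload["type"] == "frame"
    return base64.b64decode(payload["data"])

def encode_decision(steer: str) -> str:
    """Server -> simulation: send a driving decision back the same socket."""
    return json.dumps({"type": "decision", "steer": steer})

# Round-trip check: one persistent connection carries both directions.
frame_msg = encode_frame(b"\xff\xd8fake-jpeg")
print(decode_frame(frame_msg) == b"\xff\xd8fake-jpeg")  # True
print(encode_decision("left"))
```

Because the same socket stays open, the server can push a decision the moment it finishes processing a frame, with no polling.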
We are using OpenCV for computer vision. Currently, we are running a color-based recognition system to demo a ‘line-following car’ algorithm.
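As a rough illustration of what a color-based step does, here is a tiny pure-Python version. In the real pipeline OpenCV would do this vectorized on full frames (e.g., with `cv2.inRange`); the toy image, colors, and thresholds below are entirely our assumptions:

```python
# Toy 4x6 "image" of (R, G, B) pixels: a yellow line on gray ground.
# The colors and thresholds here are made up for the demo; OpenCV's
# cv2.inRange performs this same per-channel bounds check on real frames.
GRAY, YELLOW = (90, 90, 90), (250, 240, 40)
image = [
    [GRAY, GRAY, YELLOW, YELLOW, GRAY, GRAY],
    [GRAY, GRAY, YELLOW, YELLOW, GRAY, GRAY],
    [GRAY, YELLOW, YELLOW, GRAY, GRAY, GRAY],
    [GRAY, YELLOW, YELLOW, GRAY, GRAY, GRAY],
]

def in_range(pixel, lo, hi):
    """True when every channel of the pixel falls inside [lo, hi]."""
    return all(l <= c <= h for c, l, h in zip(pixel, lo, hi))

# Keep only "line colored" pixels, then take the mean column of the
# surviving pixels as the line's horizontal position in the frame.
lo, hi = (200, 200, 0), (255, 255, 80)
cols = [x for row in image for x, px in enumerate(row) if in_range(px, lo, hi)]
line_x = sum(cols) / len(cols)
print(line_x)  # 2.0 -- the line sits around column 2
```

The server can then compare that position against the frame’s center to decide which way to steer.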
We opted against building a full object recognition system for the following reasons:
- Object recognition requires time to implement and test.
- We started from zero on this project, so the learning curve would also need to be considered.
- We considered creating a top-down spline transpose system to generate a set of decisions in a ‘deque’, but this would also need object recognition since the lines would not be on a static part of the screen.
- Even after implementing such a system, integration with the main codebase would need even more time.
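For the curious, the ‘deque’ of decisions mentioned above would simply have been a queue of upcoming driving commands: a planner pushes decisions on one end, and the car consumes them from the other. A hypothetical sketch (the command names are made up, since we shelved this design):

```python
from collections import deque

# Hypothetical decision queue from the shelved top-down spline idea:
# the planner appends upcoming steering commands, and the car pops
# them one per tick. Command names here are purely illustrative.
decisions = deque(["straight", "straight", "left", "straight"])

decisions.append("right")         # planner adds a new decision at the back
next_move = decisions.popleft()   # car consumes the oldest decision
print(next_move)                  # straight
print(len(decisions))             # 4 commands still queued
```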
As a result, we stuck with the ‘line following’ algorithm but adapted it to ‘lane following’, similar to how real cars use lane detection.
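In spirit, the lane-following decision boils down to comparing the lane’s center against the frame’s center. A hedged sketch of that idea (the tolerance value and command labels are our choices, not the actual server logic):

```python
def steer(left_x: float, right_x: float, frame_width: int,
          tol: float = 10.0) -> str:
    """Pick a steering command from the two detected lane-line positions.

    left_x / right_x are the horizontal pixel positions of the lane
    lines in the camera frame; the tolerance and labels are illustrative.
    """
    lane_center = (left_x + right_x) / 2
    offset = lane_center - frame_width / 2
    if offset < -tol:
        return "left"    # lane center is left of the camera: steer left
    if offset > tol:
        return "right"   # lane center is right of the camera: steer right
    return "straight"    # close enough to centered: hold course

print(steer(100, 220, 320))  # lane center at 160 == frame center -> straight
print(steer(40, 160, 320))   # lane center at 100, offset -60 -> left
```

Whereas line following chases a single stripe, lane following keeps the car centered between two of them, which is why the same color-detection machinery carries over almost unchanged.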
We’re currently working on developing differently colored lanes that will help the server make decisions based on a ‘guided system’ (predrawn lanes, in this case). That said, we have designed the system so that each component is modular; if you wanted, you could even swap out our simulation for your own!
Another thing we are working on is static lanes for demos. Our current lanes are drawn using a ‘lane-drawing’ feature that my brilliant teammates ideated and implemented, rather than being baked into the scene.
This system doesn’t actually draw lanes! Instead, it creates a 3D object of two parallel line segments and attempts to overlay them on the terrain.
As such, even this system can be improved down the line to create obstructions or even pedestrians! This is because Bevy (our 3D engine) uses an Entity Component System (ECS) rather than traditional Object-Oriented Programming (OOP), which makes swapping out component systems much easier than reworking object class variables and methods.
Our current goal is to have a perfectly working demo, and we are almost done!
Buckle up; the simulation may be virtual, but the progress is very real, and there is a long road ahead with the end in sight. See you at the next milestone!