AI Remote Controlled Car
One of my long-term goals is to own a self-driving car. However, it doesn't seem like that will happen anytime soon, so I turned to the next best thing: a self-driving remote-controlled car. I was inspired by the Donkey Car project, a community dedicated to DIY self-driving RC cars. I took apart my old RC car, built a frame from cardboard, and mounted a Jetson Nano, a PCA9685 servo driver, and a Pi camera on top. Voilà! An RC car, but with extra steps. After calibrating the steering and throttle, I could control the car from a locally hosted web server. Next up were the fun steps.
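Calibration here boils down to mapping a normalized steering command to the PCA9685's 12-bit pulse counts. As a minimal sketch (the pulse values below are placeholders I made up for illustration; a real car needs its own measured endpoints):

```python
# Hypothetical calibration endpoints, in PCA9685 ticks (out of 4096 per
# cycle at ~60 Hz). These are NOT from my car; measure your own.
STEER_LEFT_PWM = 290
STEER_CENTER_PWM = 370
STEER_RIGHT_PWM = 450

def steering_to_pwm(steering, left=STEER_LEFT_PWM,
                    center=STEER_CENTER_PWM, right=STEER_RIGHT_PWM):
    """Map a steering command in [-1, 1] to a PCA9685 tick count.

    -1 is full left, 0 is centered, +1 is full right. Interpolates
    linearly on each side of center, since the two sides may have
    different throws after calibration.
    """
    steering = max(-1.0, min(1.0, steering))  # clamp out-of-range input
    if steering < 0:
        return round(center + steering * (center - left))
    return round(center + steering * (right - center))
```

The resulting tick count would then be written to the servo's channel with whatever PCA9685 driver library you use; keeping the mapping as a pure function makes it easy to test without hardware.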
Donkey Car offers a framework for training a self-driving agent through supervised learning. You launch a simulation and drive the car yourself for a while, and the agent then trains on the collected image and input data. This is a simple, fast way to train your first self-driving car. I achieved decent results, but I didn't like this approach: it relied on me to generate the data. To develop a robust agent, I would have had to drive for hours, which I wasn't willing to do. Moreover, driving in the simulation was surprisingly challenging, making it hard to gather quality data. I felt discouraged and almost abandoned the project, but then I discovered the RLDonkeycar repository. What a brilliant idea! Why teach the car when it can teach itself? I'm currently using reinforcement learning to teach an agent to drive. I only managed to dip my toes in before getting swamped with school, research, and my summer internship, so the project was put on the back burner. Once I have the time, though, I want to use the Actor-Critic method to create my dream self-driving RC car.
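To make the Actor-Critic idea concrete, here is a minimal sketch of a one-step update with linear actor and critic models in NumPy. Everything here is illustrative and not from the RLDonkeycar code: the feature size, the three discrete steering actions, and the learning rates are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 4   # assumed: coarse features extracted from the camera image
N_ACTIONS = 3    # assumed: steer left, go straight, steer right

# Linear actor (policy) and critic (state-value) parameters.
actor_w = np.zeros((N_FEATURES, N_ACTIONS))
critic_w = np.zeros(N_FEATURES)

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def act(state):
    """Sample an action from the current policy; also return the probs."""
    probs = softmax(state @ actor_w)
    return rng.choice(N_ACTIONS, p=probs), probs

def update(state, action, reward, next_state, done,
           gamma=0.99, lr_actor=0.05, lr_critic=0.1):
    """One actor-critic step: the TD error drives both updates."""
    global actor_w, critic_w
    v = state @ critic_w
    v_next = 0.0 if done else next_state @ critic_w
    td_error = reward + gamma * v_next - v     # advantage estimate
    # Critic: move the value estimate toward the TD target.
    critic_w += lr_critic * td_error * state
    # Actor: scale the gradient of log pi(action|state) by the TD error.
    probs = softmax(state @ actor_w)
    grad_log = -np.outer(state, probs)
    grad_log[:, action] += state
    actor_w += lr_actor * td_error * grad_log
```

On the real car, the reward would come from something like staying on the track, and the state features from the Pi camera frames; the appeal is that the loop above replaces hours of manual driving with self-play.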