The contents of this project can be found in the following GitHub repository:
https://github.com/dannyluo12/Autonomous_robot_data_visualization_and_interface

Project Goals:

The goal of this project is to create visualizations and an interactive interface that enable users to succeed in autonomous navigation applications. By doing so, we hope to enhance users’ racing performance and debugging capabilities when working with autonomous vehicles.

Context:

Autonomous navigation requires a wide range of engineering expertise and a well-developed technological architecture in order to operate. The focus of this project is to illustrate the significance of data visualizations and an interactive interface for autonomous navigation in a racing environment. To achieve the best results in an autonomous navigation race, users must be able to understand the behavior of the vehicle both while training navigation models and during the live race. To address these concerns, teams working on autonomous navigation must be able to visualize and interact with the robot. As detailed in the formal report, RRT* (Rapidly-exploring Random Tree star) was evaluated to be the best-performing navigation algorithm, and it is therefore implemented for path planning and obstacle avoidance. Visualizations of the RRT* algorithm and an interactive user interface help to enhance model testing, debug unexpected behavior, and improve upon existing autonomous navigation models. Simulations using this navigation algorithm are also constructed to demonstrate the functionality of the interactive interface.

Visualizations:

The RRT* algorithm was selected as the best-performing navigation algorithm among those tested; please refer to the report for further details. The implementation of these algorithms operates on masked (grayscale) images. The two gif files below show how the RRT* algorithm is computed on two different maps.
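The repository contains the full implementation; purely as an illustration of the idea described above, the following is a minimal RRT* sketch (assuming NumPy, with hypothetical threshold, step, and radius values) that plans between two pixel coordinates on a grayscale mask where bright pixels are treated as drivable:

```python
import numpy as np

FREE_THRESHOLD = 200     # pixels at or above this intensity are treated as drivable
STEP = 15.0              # maximum extension distance per iteration (pixels)
NEIGHBOR_RADIUS = 30.0   # radius used when choosing a parent and rewiring (pixels)

def is_free(mask, p):
    """Return True if point p = (x, y) is inside the image and on a drivable pixel."""
    x, y = int(round(p[0])), int(round(p[1]))
    h, w = mask.shape
    return 0 <= x < w and 0 <= y < h and mask[y, x] >= FREE_THRESHOLD

def collision_free(mask, a, b):
    """Sample points along the segment a-b and verify that all of them are free."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = max(2, int(np.linalg.norm(b - a)))
    return all(is_free(mask, a + t * (b - a)) for t in np.linspace(0.0, 1.0, n))

def rrt_star(mask, start, goal, iterations=2000, seed=0):
    """Grow an RRT* tree from start toward goal; return a list of waypoints or None."""
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, float)
    nodes = [np.asarray(start, float)]
    parent, cost = {0: None}, {0: 0.0}
    h, w = mask.shape
    for _ in range(iterations):
        # Sample a random point, occasionally biasing the search toward the goal.
        sample = goal if rng.random() < 0.05 else rng.uniform((0.0, 0.0), (w, h))
        nearest = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[nearest]
        dist = np.linalg.norm(direction)
        if dist < 1e-9:
            continue
        new = nodes[nearest] + direction / dist * min(STEP, dist)
        if not collision_free(mask, nodes[nearest], new):
            continue
        # Choose the lowest-cost parent among nearby nodes (the RRT* refinement over RRT).
        near = [i for i in range(len(nodes)) if np.linalg.norm(nodes[i] - new) < NEIGHBOR_RADIUS]
        best, best_cost = nearest, cost[nearest] + np.linalg.norm(nodes[nearest] - new)
        for i in near:
            c = cost[i] + np.linalg.norm(nodes[i] - new)
            if c < best_cost and collision_free(mask, nodes[i], new):
                best, best_cost = i, c
        new_idx = len(nodes)
        nodes.append(new)
        parent[new_idx], cost[new_idx] = best, best_cost
        # Rewire nearby nodes through the new node whenever that shortens their path.
        for i in near:
            c = best_cost + np.linalg.norm(new - nodes[i])
            if c < cost[i] and collision_free(mask, new, nodes[i]):
                parent[i], cost[i] = new_idx, c
        # If the new node can see the goal, trace the path back to the start.
        if np.linalg.norm(new - goal) < STEP and collision_free(mask, new, goal):
            path, i = [goal], new_idx
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```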

Below, the first gif is provided to visualize how the RRT* algorithm will navigate from one point to another on the test_track:


Below, the second gif is provided to visualize how the RRT* algorithm will navigate from one point to another on the Thunderhill track:
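The animations are built from frames that overlay the tree and final path on the masked track image. As a rough, hypothetical illustration of rendering one such static frame (file names and coordinates are placeholders, and `rrt_star` refers to the sketch above, not the repository’s plotting code):

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Load a masked track image as grayscale; the file name is a placeholder.
mask = np.array(Image.open('test_track_mask.png').convert('L'))

# Plan with the illustrative rrt_star sketch above; start/goal pixels are made up.
path = rrt_star(mask, start=(50, 50), goal=(400, 300))

plt.imshow(mask, cmap='gray')
if path is not None:
    xs, ys = zip(*[(p[0], p[1]) for p in path])
    plt.plot(xs, ys, linewidth=2)                   # planned route
    plt.scatter([xs[0], xs[-1]], [ys[0], ys[-1]])   # start and goal markers
plt.axis('off')
plt.savefig('rrt_star_on_test_track.png', dpi=150)
```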

Simulations & Interface:

Below, a demonstration video of the interface is displayed. The top-left box displays live navigation sensor readings such as the vehicle’s speed, battery status, IMU data, and orientation. The top-right boxes display the visualization of the RRT* algorithm, which shows the most efficient navigation path for the vehicle, alongside a real-time image from the vehicle’s depth camera. In the “Navigate Robot” section, if the user inputs destination coordinates, the interface autonomously navigates the vehicle using the RRT* algorithm. The “RosBridge Subscribe” section displays the ROS topics the interface is currently subscribed to. The “Robot Position” section displays real-time text data of the vehicle’s current x, y, z position on the simulated racetrack.
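As a sketch of how such panels could communicate with the vehicle through rosbridge, the snippet below uses roslibpy to read the vehicle’s position and publish a destination; the host, port, topic names, and coordinates are assumptions rather than the project’s actual configuration:

```python
import time
import roslibpy

# Connect to the rosbridge websocket server; host and port are assumptions.
ros = roslibpy.Ros(host='localhost', port=9090)
ros.run()

# "Robot Position"-style panel: subscribe to odometry and read the x, y, z position.
odom = roslibpy.Topic(ros, '/odom', 'nav_msgs/Odometry')

def on_odometry(message):
    position = message['pose']['pose']['position']
    print('x={x:.2f} y={y:.2f} z={z:.2f}'.format(**position))

odom.subscribe(on_odometry)

# "Navigate Robot"-style panel: publish the user-entered destination as a goal pose.
goal = roslibpy.Topic(ros, '/move_base_simple/goal', 'geometry_msgs/PoseStamped')
goal.publish(roslibpy.Message({
    'header': {'frame_id': 'map'},
    'pose': {
        'position': {'x': 3.0, 'y': 1.5, 'z': 0.0},
        'orientation': {'x': 0.0, 'y': 0.0, 'z': 0.0, 'w': 1.0},
    },
}))

try:
    # Keep the client alive so the subscription callback keeps firing.
    while ros.is_connected:
        time.sleep(1)
except KeyboardInterrupt:
    odom.unsubscribe()
    ros.terminate()
```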

Results & Impact:

In this project, our group successfully integrated a racing track simulation in the Gazebo simulator that mirrors a real-life track. Using the depth camera and lidar navigation sensors, we implemented two path planning and obstacle avoidance algorithms, A* and RRT*, inside the simulated track. The performance of these algorithms was tested against several metrics, and their behavior was also visualized on images of real-life racing tracks. We also implemented an interactive interface that allows the user to control the vehicle, view significant sensory information obtained from the vehicle during autonomous navigation, and move the vehicle from its initial position to a final destination with the press of a button. This was achieved by displaying different real-time sensory data and creating a platform for the user to monitor the path planning algorithm. The significant impact of the visualizations and interactive interface is that they serve as tools to help users improve model testing, debug unexpected behavior, and build upon existing autonomous navigation models.

Conclusion:

Our goal for this project was to create visualizations and an interactive interface that serve users in enhancing model testing, debugging unexpected behavior, and building upon existing navigation models. The results are highlighted above; for more details, please visit this repository to learn more and run the project.

Future improvements to the interactive interface include making the visualizations clearer, subscribing to more ROS topics to receive additional input data, and testing potential latency with larger datasets or streams of data. Currently, several plots and tools on the interactive interface contain data that is self-generated on a small scale, often referred to as ‘dummy data’. Future ambitions include driving the interface with more advanced datasets and streams of live input data.

References: