Augmented Reality for Robotic Data Visualization

Robotics and Visualization
Client: Tufts AIR Lab
Time Frame: June 2018 - June 2019
Roles: Research & Data Visualization
Tools: Unity Game Engine
Links: Demo Video
Project Overview
As an assistant researcher at the Tufts Autonomous Intelligent Robotics (AIR) Lab, I assisted in the development of robotics-related projects. I used funding from the Laidlaw Scholar research program to collaborate with a peer on creating an augmented reality framework for robotic sensor data visualization. We developed a HoloLens and iOS app that used a camera to detect a robot and overlay visuals of the robot's sensor data onto the real world in real time. For instance, if a robot was pointed at a wall, you could look through the app's camera and see red circles on the wall depicting the robot's laser sensor data. The sensor data we parsed and rendered included the robot's intended path, laser scan readings, costmap, localization particles, and people detection. The goal of the project was to improve human-robot interaction and make robots easier to debug by giving humans a way to easily conceptualize what the robot was "thinking".
My Contributions
My main role in the project was parsing the incoming robotic sensor data within Unity, a game engine platform, using C# scripts and transforming the numeric data into visuals. The main challenge was developing visuals that were both representative of the robot's sensor data and easily understandable by human users. We also had to ensure that all visuals could be rendered at the same time without clashing with one another.
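To give a sense of the general pattern, here is a minimal sketch of how a batch of sensor points might be turned into marker objects in the Unity scene; the class and field names (SensorMarkerSpawner, markerPrefab, robotFrame) are illustrative placeholders, not the project's actual code:

```csharp
using UnityEngine;

// Hypothetical sketch: turn an array of sensor points (in the robot's frame)
// into marker objects in the Unity scene.
public class SensorMarkerSpawner : MonoBehaviour
{
    public GameObject markerPrefab;   // e.g. a small flat circle mesh
    public Transform robotFrame;      // anchor located at the detected robot

    // Called whenever a new batch of points is parsed from the sensor stream.
    public void RenderPoints(Vector3[] pointsInRobotFrame)
    {
        // Clear the markers from the previous batch of data.
        foreach (Transform child in transform)
            Destroy(child.gameObject);

        foreach (Vector3 p in pointsInRobotFrame)
        {
            // Convert from the robot's coordinate frame into Unity world space.
            Vector3 worldPos = robotFrame.TransformPoint(p);
            Instantiate(markerPrefab, worldPos, Quaternion.identity, transform);
        }
    }
}
```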
I also worked on creating the landing screen for the Unity app. After the app was developed, I assisted in user research studies.
A person using a HoloLens and looking at a robot.
The landing page for the augmented reality app.
Robotic Sensor Data Transformed into Augmented Reality Visuals:
Laser Particles
Laser scanning is a non-contact technology that digitally captures the shape of physical objects using a line of laser light. In our project, the robot's laser data was received as an array of 3D points in space where an object was detected. First, we parsed the data to remove duplicates and noise. Next, we decided on a proper way to render the laser points as visuals. We opted to use a red circle to represent each laser point, which communicates to the user that the laser data is composed of discrete points. In addition, the color red is often associated with lasers, as we see in the media and in the popular game laser tag.
A robot in a room with an overlay of red circles representing the laser scan data.
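The de-duplication step could look something like the sketch below: a point is kept only if it is at least some minimum distance from every point already kept. The class name and the threshold value are illustrative assumptions, not the project's actual parameters.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of the laser point de-duplication step.
public static class LaserScanFilter
{
    public static List<Vector3> RemoveDuplicates(IEnumerable<Vector3> rawPoints,
                                                 float minSeparation = 0.05f)
    {
        var kept = new List<Vector3>();
        foreach (Vector3 p in rawPoints)
        {
            bool isDuplicate = false;
            foreach (Vector3 q in kept)
            {
                // Discard points that sit nearly on top of an existing one.
                if (Vector3.Distance(p, q) < minSeparation)
                {
                    isDuplicate = true;
                    break;
                }
            }
            if (!isDuplicate)
                kept.Add(p);
        }
        return kept;
    }
}
```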
Robot's Path
The robot's local path is received as an array of 3D points in space marking the waypoints the robot needs to pass through to reach its final destination. We depicted these points as medium-sized green circles. We chose a larger size since there are fewer points in the path, and the circular shape mimics platform games where a character hops from one platform to another. We added an effect where, every second, one circle would double in size; in the next second, that circle would return to its normal size and the next circle along the path would double in size. This created a loading-style visual effect that helped indicate the robot's direction of travel and communicated to the user that the robot was actively pursuing the path.
A robot with a line of green circles emerging in front of it, depicting its intended path.
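The pulsing effect can be sketched as a simple Unity behaviour that enlarges one waypoint circle at a time and steps along the path once per second. The names here (PathPulse, waypointCircles) are illustrative, not the project's actual identifiers.

```csharp
using UnityEngine;

// Hypothetical sketch of the "loading" pulse along the robot's path.
public class PathPulse : MonoBehaviour
{
    public Transform[] waypointCircles;  // green circles, ordered along the path
    private int current = 0;
    private float timer = 0f;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer >= 1f && waypointCircles.Length > 0)
        {
            timer = 0f;
            // Shrink the previously enlarged circle back to its normal size...
            waypointCircles[current].localScale = Vector3.one;
            // ...then enlarge the next one, wrapping back to the start of the path.
            current = (current + 1) % waypointCircles.Length;
            waypointCircles[current].localScale = Vector3.one * 2f;
        }
    }
}
```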
Costmap
A costmap is a grid map where each cell is assigned a value, with higher values indicating a smaller distance between the robot and an obstacle. The costmap is an important factor in the robot's path-planning algorithm: the robot wants to find the shortest path that avoids obstacles with high confidence. To communicate this data to our users, we overlaid a grid of individually colored circles, spawning at the center of the robot and extending into its environment. We colored each circle by mapping its numerical value to an HSV color. We opted for HSV instead of RGB because it is less sensitive to variations in lighting and is a more user-oriented model of color: it better aligns with how people experience color and thus creates a better mapping between color and the robot's distance to an obstacle. Green showed that the robot was far from any obstacles, yellows and oranges communicated that an obstacle was in the vicinity but not too close, and red meant that an obstacle was very close and would be difficult for the robot to navigate around. We chose these colors because they align with people's everyday associations, especially traffic lights: green means go or safe, yellow/orange means caution, and red means stop or danger.
A robot with an overlay of a grid of colored circles.
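A minimal sketch of this cost-to-color mapping, assuming the costmap values are normalized to the range 0 to 1, could interpolate the hue from green down to red using Unity's HSV helper; the exact hue range and normalization are assumptions for illustration.

```csharp
using UnityEngine;

// Hypothetical sketch of the costmap coloring: a normalized cost of 0 maps to
// green (far from obstacles) and 1 maps to red (obstacle very close), passing
// through yellow and orange along the way.
public static class CostmapColors
{
    public static Color CostToColor(float normalizedCost)
    {
        // In Unity's HSV convention, hue 0.33 is green and hue 0.0 is red,
        // so interpolate from green down to red as the cost increases.
        float hue = Mathf.Lerp(0.33f, 0f, Mathf.Clamp01(normalizedCost));
        return Color.HSVToRGB(hue, 1f, 1f);
    }
}
```

Each grid circle's material color would then be set from its cell's normalized cost, for example renderer.material.color = CostmapColors.CostToColor(cost / maxCost).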
Localization Particles
Localization particles are a collection of 3D points where the robot thinks it might be located in space. The same point can appear multiple times, which indicates higher confidence that it is the robot's true position. We rendered these as magenta crosses, since many people are familiar with the saying "X marks the spot"; using crosses, we communicated to the user that we were indicating a location in space. We used magenta because it stands out and rarely appears in everyday environments, so there would be enough contrast against the objects around the robot.
A robot with magenta crosses around it marking where the robot thinks it is in 3D space.
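The idea that repeated particles signal confidence can be illustrated with a small tally of identical (or nearly identical) points; the class name and the snapping resolution below are illustrative assumptions, not the project's actual code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: count repeated localization particles, so the count
// serves as a rough confidence score for each candidate position.
public static class ParticleTally
{
    public static Dictionary<Vector3, int> CountParticles(IEnumerable<Vector3> particles,
                                                          float resolution = 0.01f)
    {
        var counts = new Dictionary<Vector3, int>();
        foreach (Vector3 p in particles)
        {
            // Snap to a coarse grid so nearly identical points count as repeats.
            var key = new Vector3(
                Mathf.Round(p.x / resolution) * resolution,
                Mathf.Round(p.y / resolution) * resolution,
                Mathf.Round(p.z / resolution) * resolution);

            counts.TryGetValue(key, out int n);
            counts[key] = n + 1;
        }
        return counts;
    }
}
```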
People Detection
The robot also had the ability to detect people. This data was received as 3D points in space, each paired with a confidence level. We filtered the data to accept only points above a certain confidence threshold. Visually, this data was represented with a human icon.
A robot across from a human detects the human and projects a yellow human icon to convey this.
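The confidence filter amounts to keeping only detections above a cutoff before spawning an icon for each one. The struct fields and the 0.7 threshold below are illustrative assumptions, not the values used in the project.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Hypothetical sketch of a person detection and its confidence filter.
public struct PersonDetection
{
    public Vector3 position;
    public float confidence;
}

public static class PeopleFilter
{
    public static List<PersonDetection> AboveThreshold(IEnumerable<PersonDetection> detections,
                                                       float threshold = 0.7f)
    {
        // Keep only detections the robot is sufficiently sure about;
        // a human icon is then rendered at each remaining position.
        return detections.Where(d => d.confidence >= threshold).ToList();
    }
}
```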