Laser scanning is a non-contact technology that digitally captures the shape of physical objects using a line of laser light. In our project, the robot's laser data was received as an array of 3D points in space where an object was detected. First, we filtered the data to remove duplicate readings and noise. Next, we decided how to render the laser points visually. We opted to represent each laser point as a red circle, which communicates to the user that the laser data is composed of points. In addition, red is the color most often associated with lasers in the media and in the popular game laser tag.
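The duplicate-and-noise reduction step can be sketched as quantizing each 3D point to a small grid cell, so near-identical detections collapse into one point. This is an illustrative sketch, not our exact implementation; the `cell_size` parameter is an assumed value.

```python
def dedupe_points(points, cell_size=0.05):
    """Return one representative point per occupied grid cell.

    Points closer together than cell_size (in meters, an assumed unit)
    fall into the same cell and are treated as duplicates.
    """
    seen = set()
    unique = []
    for x, y, z in points:
        # Quantize coordinates to grid-cell indices.
        key = (round(x / cell_size), round(y / cell_size), round(z / cell_size))
        if key not in seen:
            seen.add(key)
            unique.append((x, y, z))
    return unique
```

For example, two laser returns 1 cm apart would map to the same cell and render as a single red circle.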
The robot's local path is received as an array of 3D points marking the waypoints the robot must pass through to reach its final destination. We depicted these points as medium-sized green circles. We chose a larger size because there are fewer points in the path, and the circular shape mimics platform games where a character hops from one platform to the next. We also added a pulsing effect: every second, one circle doubles in size, then returns to normal as the next circle in the sequence doubles. This created a loading-style animation that indicated the direction of the robot's travel and communicated to the user that the robot was actively pursuing the path.
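The pulsing effect described above amounts to picking one "active" circle per second and scaling it up. A minimal sketch of that logic, assuming elapsed time in seconds and a hypothetical `base_scale` parameter:

```python
def active_circle_index(elapsed_seconds, num_circles):
    """Index of the waypoint circle that is currently enlarged.

    Advances by one each second and wraps around, producing the
    'loading' sweep along the path in travel order.
    """
    return int(elapsed_seconds) % num_circles

def circle_scale(index, elapsed_seconds, num_circles, base_scale=1.0):
    """Double the active circle's size; all others keep the base size."""
    if index == active_circle_index(elapsed_seconds, num_circles):
        return base_scale * 2
    return base_scale
```

A renderer would call `circle_scale` for each waypoint every frame, so exactly one green circle appears doubled at any moment.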
A costmap is a grid map where each cell is assigned a specific value, with a higher value indicating a smaller distance between the robot and an obstacle. The costmap is an important factor in the robot's path-planning algorithm: the robot wants to create the shortest path that avoids obstacles with high confidence. To communicate this sensor data to our users, we overlaid a grid of individually colored circles, spawning at the center of the robot and extending into its environment. We colored the circles by mapping their numerical values to HSV colors. We opted for HSV instead of RGB because it is less sensitive to variations in lighting and is a more user-oriented model of color. It better aligns with how people experience color and thus creates a better mapping between color and the robot's distance to an obstacle. Greens showed that the robot was very far from any obstacles, yellows and oranges communicated that an obstacle was in the vicinity but not too close, and red meant that an obstacle was very close and would be problematic for the robot to traverse. We used these colors because they align with people's everyday associations, especially with traffic lights: green means go or safe, yellow/orange means caution, and red means stop or danger.
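The cost-to-color mapping can be sketched by interpolating the HSV hue from green (120°) down to red (0°) as cost rises, then converting to RGB for rendering. This is an assumed formulation; `max_cost` is a hypothetical upper bound on the costmap values.

```python
import colorsys

def cost_to_rgb(cost, max_cost=255):
    """Map a costmap cell value to an RGB color via HSV hue.

    Low cost -> green (hue 120 deg, far from obstacles),
    mid cost -> yellow/orange (obstacle in the vicinity),
    high cost -> red (hue 0 deg, obstacle very close).
    """
    # Normalize the cost into [0, 1], clamping out-of-range values.
    t = min(max(cost / max_cost, 0.0), 1.0)
    # Interpolate hue linearly from 120 deg (green) down to 0 deg (red).
    hue = (1.0 - t) * (120.0 / 360.0)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Varying only the hue while keeping saturation and value fixed is what makes the gradient read cleanly, which is harder to achieve by interpolating RGB channels directly.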
Localization particles are a collection of 3D points representing where the robot believes it is located in space. The same point can repeat, which indicates higher confidence in the robot's true position. We rendered these as magenta crosses, since many people are familiar with the saying "X marks the spot"; the crosses communicate to the user that we are marking a location in space. We chose magenta because it stands out and rarely appears in the environment, so there would be enough contrast against surrounding objects.
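The idea that repeated particles imply higher confidence can be sketched by counting occurrences of each point; the most repeated point is the most likely position. This is an illustrative sketch, not the localization algorithm itself.

```python
from collections import Counter

def particle_confidence(particles):
    """Return the most repeated particle and its share of all particles.

    Each particle is a 3D point tuple; the more often a point repeats,
    the more confident the robot is that it is the true position.
    """
    counts = Counter(particles)
    best, hits = counts.most_common(1)[0]
    return best, hits / len(particles)
```

A visualization layer could use the returned share to scale or brighten the magenta cross at the most likely position.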
The robot could also detect people. This data was received as 3D points in space, each with a confidence level. We filtered the data to keep only points above a certain confidence threshold, and represented each accepted detection visually with a human icon.
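The threshold filter above can be sketched as a single pass over the detections; the `threshold` value here is an assumed cutoff, not the project's actual setting.

```python
def filter_detections(detections, threshold=0.7):
    """Keep only person detections whose confidence clears the threshold.

    Each detection is a (x, y, z, confidence) tuple; the returned list
    contains just the 3D positions to render as human icons.
    """
    return [(x, y, z) for x, y, z, conf in detections if conf >= threshold]
```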