These were the goals of the software over the two years of its development. Understanding these goals is key to understanding the decisions behind the architecture and the features that were implemented.
This was accomplished through the use of ROS and the separation of features into nodes. Each element of the software is a separate node and can be replaced by any node that interacts with the rest of the system in the same way (that is, one with the same published and subscribed topics).
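The interchangeability idea can be sketched without ROS at all. The following is a hypothetical illustration, not the project's actual code: two "nodes" that subscribe and publish on the same topic names, so either can be swapped in without changing the rest of the system. The topic names, classes, and message contents are all assumptions made for the sketch.

```python
class TopicBus:
    """Minimal stand-in for the ROS topic layer."""
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)


class SimpleLineDetector:
    """Subscribes to /camera/image, publishes to /line_mask (names illustrative)."""
    def __init__(self, bus):
        self.bus = bus
        bus.subscribe("/camera/image", self.on_image)

    def on_image(self, image):
        self.bus.publish("/line_mask", {"source": "simple", "lines": len(image)})


class LearnedLineDetector:
    """Same subscribed and published topics, so it is a drop-in replacement."""
    def __init__(self, bus):
        self.bus = bus
        bus.subscribe("/camera/image", self.on_image)

    def on_image(self, image):
        self.bus.publish("/line_mask", {"source": "learned", "lines": len(image) // 2})
```

A downstream node that subscribes only to `/line_mask` is unaffected by which detector is running, which is exactly the property the node separation provides.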
At the top of many nodes are configuration parameters that allow the code to be adapted; for example, many of the vision parameters are tunable.
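As a sketch of this convention, the fragment below groups tunable values at the top of a hypothetical vision node; the parameter names and default values are illustrative, not taken from the project.

```python
# --- configuration parameters (tune per course / lighting conditions) ---
HSV_WHITE_LOWER = (0, 0, 200)     # lower HSV bound for white line pixels
HSV_WHITE_UPPER = (180, 40, 255)  # upper HSV bound
MIN_BLOB_AREA = 50                # ignore detections smaller than this (px)
# -----------------------------------------------------------------------


def threshold_pixel(h, s, v):
    """Return True if an HSV pixel falls inside the configured white range."""
    return (HSV_WHITE_LOWER[0] <= h <= HSV_WHITE_UPPER[0]
            and HSV_WHITE_LOWER[1] <= s <= HSV_WHITE_UPPER[1]
            and HSV_WHITE_LOWER[2] <= v <= HSV_WHITE_UPPER[2])
```

Keeping the tunables in one visible block means the detection logic below never needs to change when the robot moves to a new environment.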
Below is a screenshot from rqt_graph showing the nodes and the topics between them in the final code iteration used at competition. One exception is the igvc_display node, which was left out: it subscribes to many topics and cluttered the image while not being useful for understanding the architecture.
The software as a whole can be viewed as solving for a function f(x) -> y that maps the state of the robot x to the commands y it should follow. Throughout the rest of this document, we will explore what x and y are and how f is implemented in the code.
The robot employs the following sensors, which produce the following information:
Together, these sensors give us many different views of the environment around the robot. Unfortunately, these views are not consistent with one another, so processing must be done to create a common understanding of the robot's state. To accomplish this, the robot feeds all the data into a single node, which combines it into our state estimate. Many algorithms can accomplish this, but we chose an Extended Kalman Filter (EKF), which is widely used for fusing noisy sensor data into a single state estimate.
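The EKF's core predict/update cycle can be sketched in one dimension, where the linearization is exact and the EKF coincides with the ordinary Kalman filter. The noise values and measurements below are illustrative, not the robot's actual tuning.

```python
def predict(x, p, u, q):
    """Motion update: apply a control/odometry input u and inflate uncertainty."""
    return x + u, p + q


def update(x, p, z, r):
    """Measurement update: blend the prediction with a sensor reading z."""
    k = p / (p + r)                      # Kalman gain: measurement vs. prediction
    return x + k * (z - x), (1.0 - k) * p


# Fuse odometry ("moved ~1.0 m") with a GPS-like position fix at 1.3 m.
x, p = 0.0, 1.0                          # initial estimate and variance
x, p = predict(x, p, u=1.0, q=0.2)       # odometry step adds motion noise q
x, p = update(x, p, z=1.3, r=0.5)        # sensor fix pulls the estimate toward z
```

Each sensor feeding the filter contributes its own measurement update like the one above, and the variance p shrinking after each update is what makes the fused estimate better than any single view.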
TODO: Describe more about the EKF and our state here