Now that we have a reliable foundation for computing the robot's position, we can set the robot in motion and observe how it self-localizes using detected AprilTags. However, there are stretches of the trajectory in which no tag is visible in the image, and in those situations the robot must fall back on odometry.
To control the robot's movement, I created a loop that alternates linear motion and rotation: the robot first moves forward for a fixed amount of time, then rotates for another interval. Repeating this pattern results in a circular trajectory.
Additionally, to avoid tracing a perfect circle, I implemented logic to invert the rotation direction every few cycles. It’s not the most advanced motion pattern, but it is sufficient to demonstrate how the localization system functions.
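A minimal sketch of this alternating loop is shown below. The timing and speed constants are placeholders, and set_v / set_w stand in for whatever velocity interface the robot exposes (for example, HAL.setV and HAL.setW in a Robotics-Academy-style template); none of these values are taken from the original code.

```python
import time

# Placeholder constants (assumptions, not the original values).
FORWARD_TIME = 4.0      # seconds of straight-line motion per cycle
TURN_TIME = 2.0         # seconds of rotation per cycle
LINEAR_SPEED = 0.3      # m/s
ANGULAR_SPEED = 0.5     # rad/s
CYCLES_PER_FLIP = 3     # invert the turn direction every few cycles

def motion_loop(set_v, set_w):
    """Alternate straight segments and in-place turns, flipping the
    turn direction every CYCLES_PER_FLIP cycles so the robot does not
    trace a perfect circle."""
    turn_sign = 1.0
    cycle = 0
    while True:
        # Linear phase: move forward for a fixed time.
        set_v(LINEAR_SPEED)
        set_w(0.0)
        time.sleep(FORWARD_TIME)

        # Rotation phase: turn in place for a fixed time.
        set_v(0.0)
        set_w(turn_sign * ANGULAR_SPEED)
        time.sleep(TURN_TIME)

        cycle += 1
        if cycle % CYCLES_PER_FLIP == 0:
            turn_sign = -turn_sign  # invert the rotation direction
```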
The following video fragment shows the results:
The final implementation accurately and stably estimates the robot’s position on the map using AprilTags, relying on odometry only when no tags are detected. The operational workflow is as follows:
1. Using the pyapriltags library, the system detects tags in the captured image and extracts their corners and center, overlaying them on the image for visualization.
2. cv2.solvePnP is then used to estimate the position and orientation of the robot relative to the tag's coordinate system. This output includes rotation (as a Rodrigues vector) and translation in camera space.
3. Using the known global pose of each tag (defined in apriltags_poses.yaml), this relative pose is transformed into the map frame, and the resulting estimate is passed to GUI.showEstimatedPose, allowing real-time visualization of the robot's estimated path.
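To make steps 1-3 concrete, here is a condensed sketch of the tag-detection and pose-estimation pipeline. The camera intrinsics, tag size, corner ordering, and the layout of apriltags_poses.yaml are assumptions made for illustration, and load_tag_world_poses / estimate_camera_world_pose are hypothetical helper names rather than the original implementation.

```python
import cv2
import numpy as np
import yaml
from pyapriltags import Detector

TAG_SIZE = 0.30  # tag edge length in metres (assumed)
K = np.array([[500.0,   0.0, 320.0],   # assumed camera intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
DIST = np.zeros(5)                     # assume no lens distortion

# Tag corners expressed in the tag's own frame. Their order must match
# the order in which pyapriltags reports detection.corners.
s = TAG_SIZE / 2.0
TAG_CORNERS_3D = np.array([[-s, -s, 0.0],
                           [ s, -s, 0.0],
                           [ s,  s, 0.0],
                           [-s,  s, 0.0]])

detector = Detector(families="tag36h11")

def load_tag_world_poses(path="apriltags_poses.yaml"):
    """Return {tag_id: 4x4 world-from-tag transform} from the YAML map
    (the file layout below is an assumption)."""
    with open(path) as f:
        data = yaml.safe_load(f)
    poses = {}
    for tag_id, entry in data.items():
        T = np.eye(4)
        T[:3, :3] = np.array(entry["rotation"])
        T[:3, 3] = np.array(entry["translation"])
        poses[int(tag_id)] = T
    return poses

TAG_WORLD_POSES = load_tag_world_poses()

def estimate_camera_world_pose(frame):
    """Return the 4x4 world-from-camera transform, or None if no known
    tag is visible in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = detector.detect(gray)   # step 1: detect tags
    for det in detections:
        if det.tag_id not in TAG_WORLD_POSES:
            continue
        # Step 2: PnP gives the tag pose expressed in the camera frame.
        ok, rvec, tvec = cv2.solvePnP(TAG_CORNERS_3D,
                                      det.corners.astype(np.float64),
                                      K, DIST)
        if not ok:
            continue
        R, _ = cv2.Rodrigues(rvec)
        T_cam_tag = np.eye(4)
        T_cam_tag[:3, :3] = R
        T_cam_tag[:3, 3] = tvec.ravel()
        # Step 3: invert to get camera-in-tag, then chain with the tag's
        # known world pose to obtain the camera pose on the map.
        T_tag_cam = np.linalg.inv(T_cam_tag)
        return TAG_WORLD_POSES[det.tag_id] @ T_tag_cam
    return None  # caller falls back on odometry in this case
```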
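The per-frame update then prefers the tag-based estimate and uses odometry only when estimate_camera_world_pose returns None. The sketch below assumes it runs inside the exercise environment where HAL and GUI are already provided; HAL.getImage, HAL.getPose3d, and the argument format of GUI.showEstimatedPose are assumptions, not details taken from the post.

```python
# Per-frame update sketch: tag-based estimate when available, odometry otherwise.
while True:
    frame = HAL.getImage()                     # assumed camera accessor
    T_world_cam = estimate_camera_world_pose(frame)
    if T_world_cam is not None:
        x, y = T_world_cam[0, 3], T_world_cam[1, 3]
        yaw = np.arctan2(T_world_cam[1, 0], T_world_cam[0, 0])
    else:
        odom = HAL.getPose3d()                 # assumed odometry accessor
        x, y, yaw = odom.x, odom.y, odom.yaw
    GUI.showEstimatedPose((x, y, yaw))         # argument format assumed
```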
This design successfully combines the accuracy of computer vision with the stability of odometry, enabling robust robot localization even when tags are temporarily occluded.