In this class, I gained hands-on experience in image-guided control of robotic manipulators, working extensively with UR5 robots and RGB-D cameras. Using ROS, Linux, and OpenCV, I applied mathematical modeling and open-source software libraries to analyze and control complex 3D motions. I developed and implemented closed-loop vision control strategies, integrating real-time target detection and position estimation through 2D and 3D computer vision techniques. I also designed and programmed control systems for multi-degree-of-freedom robots, leveraging open-source tools to achieve precise motion control. Throughout the course, I strengthened my skills in dynamic system analysis, real-world robotic implementation, and the presentation of findings in professional scientific reports and presentations.
The course was split into three phases, described below; a report for each phase can be found in its corresponding GitHub repository.
In the first phase, I was tasked with programming the robot to navigate through four specified points in the Gazebo simulator. This involved designing and implementing control algorithms, using ROS for communication and coordination, and verifying correct motion within the simulated environment.
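As a rough illustration, a waypoint sequence like this can be driven through MoveIt's Python interface. This is a minimal sketch, not the course code: the group name "manipulator" is the usual default for a UR5, and the four coordinates are placeholders rather than the assignment's actual points.

```python
#!/usr/bin/env python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

def make_pose(x, y, z):
    """Build an end-effector pose; the orientation here is identity for simplicity."""
    pose = Pose()
    pose.position.x, pose.position.y, pose.position.z = x, y, z
    pose.orientation.w = 1.0  # adjust for the real tool frame
    return pose

def main():
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("waypoint_demo")
    group = moveit_commander.MoveGroupCommander("manipulator")

    # Placeholder waypoints; the assignment specified its own four points.
    waypoints = [(0.4, 0.2, 0.3), (0.4, -0.2, 0.3),
                 (0.3, -0.2, 0.5), (0.3, 0.2, 0.5)]
    for x, y, z in waypoints:
        group.set_pose_target(make_pose(x, y, z))
        success = group.go(wait=True)   # plan and execute in one call
        group.stop()                    # ensure no residual motion
        group.clear_pose_targets()
        rospy.loginfo("Reached (%.2f, %.2f, %.2f): %s", x, y, z, success)

if __name__ == "__main__":
    main()
```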
In the second phase, I was tasked with detecting a tennis ball and determining its location relative to the robot. This involved developing algorithms to process camera data, accurately estimate the ball's position, and command the robot to move to it. The robot was then programmed to simulate picking up the ball and moving it, demonstrating object detection and manipulation working together.
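A minimal sketch of the detection step, assuming an HSV color threshold in OpenCV: isolate the ball's yellow-green, take the centroid of the largest blob, then back-project that pixel with its depth value through the pinhole camera model. The HSV bounds and intrinsics are illustrative, not the values used in the project.

```python
import cv2
import numpy as np

# Illustrative HSV bounds for a tennis ball's yellow-green; tune per lighting.
LOWER = np.array([25, 80, 80])
UPPER = np.array([45, 255, 255])

def find_ball_pixel(bgr):
    """Return the (u, v) centroid of the largest ball-colored blob, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection into the camera frame:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    return (u - cx) * depth_m / fx, (v - cy) * depth_m / fy, depth_m
```

From there, the camera-frame point would be transformed into the robot's base frame (e.g., via tf) before commanding a motion to it.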
In the final phase, I was tasked with adding error-handling mechanisms and programming the gripper so that our earlier code could run reliably on the real robot. This involved hardening the system against potential failure cases and integrating the gripper's functionality so that pick-and-place tasks executed successfully on the physical hardware.
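One way to sketch that error handling, assuming the gripper driver exposes the standard control_msgs GripperCommand action (the action name below is a placeholder, not necessarily what our driver used): wrap each motion and gripper step in a small retry loop so a transient failure does not abort the whole task.

```python
import rospy
import actionlib
from control_msgs.msg import GripperCommandAction, GripperCommandGoal

def grip(client, position, max_effort=40.0, timeout=5.0):
    """Send a gripper command and raise if it does not finish in time."""
    goal = GripperCommandGoal()
    goal.command.position = position      # jaw opening in meters
    goal.command.max_effort = max_effort  # clamp force in newtons
    client.send_goal(goal)
    if not client.wait_for_result(rospy.Duration(timeout)):
        raise RuntimeError("gripper action timed out")

def with_retries(step, attempts=3):
    """Retry a motion or gripper step a few times before giving up."""
    for i in range(attempts):
        try:
            return step()
        except Exception as exc:
            rospy.logwarn("attempt %d failed: %s", i + 1, exc)
    raise RuntimeError("step failed after %d attempts" % attempts)

rospy.init_node("pickup")
# Placeholder action name; the real driver may expose a different interface.
client = actionlib.SimpleActionClient("/gripper_controller/gripper_cmd",
                                      GripperCommandAction)
client.wait_for_server()
with_retries(lambda: grip(client, position=0.0))  # close on the ball
```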