Marker Assembling Robot


Brief overview


The project assembles markers and caps through sequences of pick, place, press, and sort operations, inspired by the use of robots in manufacturing and industry. A RealSense camera detects the markers’ colors, and MoveIt manipulation commands actuate the robot; Franka-specific actions grip the caps and markers during movement. The overall flow is controlled by a state machine built with the ROS package SMACH. The state machine sorts markers by hue using camera data from the RealSense perception subsystem, then invokes the manipulation services to pick, place, and press caps and markers in the assembly tray.
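The overall flow can be sketched as a minimal state machine in plain Python. This is a simplified stand-in for the actual SMACH implementation; the hue thresholds, state names, and transitions below are illustrative assumptions, not the project's real graph:

```python
import colorsys

# Simplified sketch of the sorting + assembly flow.
# Hue thresholds and state names are illustrative assumptions.

def detect_hue(rgb):
    """Classify a marker color by hue from an (R, G, B) tuple in 0-255."""
    h, _, _ = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    deg = h * 360
    if deg < 30 or deg >= 330:
        return "red"
    if deg < 90:
        return "yellow"
    if deg < 180:
        return "green"
    return "blue"

def run_assembly(markers):
    """Walk each detected marker through the PICK -> PLACE -> PRESS states."""
    log = []
    for rgb in markers:
        color = detect_hue(rgb)
        for state in ("PICK", "PLACE", "PRESS"):
            log.append((state, color))
    return log
```

In the real project each state would call a manipulation service rather than append to a log, and SMACH handles the transitions and outcomes.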

Video demo


Collaborations


Manipulation


The manipulation package relies on several nodes to function:

  1. manipulation_cap provides low-level position and orientation sensing services, along with error recovery, movements, and gripper grasping
  2. manipulation_macro_a provides position movement services for image captures using the RealSense
  3. manipulation_press provides a pressing service to cap the markers
  4. manipulation_local provides manipulation services for moving in between trays
  5. manipulation_pnp provides pick and place services between the feed and assembly trays
  6. debug_manipulation logs the external forces experienced by the robot
  7. plan_scene provides a planning scene for simulation-based motion planning in MoveIt
  8. limit_set provides services used with the franka_control launch file, which is run before MoveIt is launched. It lets the user reconfigure the robot's collision limits.
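The division of labor above can be sketched as a thin orchestration layer. The function names below are stubs standing in for the ROS service calls the nodes actually expose; the real service names and message types live in the manipulation package:

```python
# Illustrative orchestration of the manipulation services.
# Each stub stands in for a ROS service call; the real names and
# types are defined by the nodes listed above, not here.

def capture_pose():
    """manipulation_macro_a: move the arm to a RealSense capture pose."""
    return "at_capture_pose"

def pick_and_place(slot):
    """manipulation_pnp: move an item from the feed tray to the assembly tray."""
    return f"placed_{slot}"

def press_cap(slot):
    """manipulation_press: press a cap onto the marker in the given slot."""
    return f"pressed_{slot}"

def assemble(slots):
    """Run the capture, then pick/place/press cycle for each tray slot."""
    results = [capture_pose()]
    for slot in slots:
        results.append(pick_and_place(slot))
        results.append(press_cap(slot))
    return results
```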

Manipulation also relies on a Python manipulation package with translation, array-position, and verification utilities. A scene.yaml file specifies parameters for the plan_scene node and for the main manipulation movement scene used elsewhere in the project.
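A scene.yaml file of this kind might look roughly like the fragment below; the keys and values shown are hypothetical and only illustrate how tray and obstacle geometry could be parameterized for the planning scene:

```yaml
# Hypothetical scene.yaml fragment -- the actual keys are defined
# by the plan_scene node in this repository.
table:
  size: [1.0, 1.0, 0.05]        # box dimensions in meters
  position: [0.5, 0.0, -0.025]  # relative to the robot base frame
feed_tray:
  position: [0.4, 0.3, 0.0]
assembly_tray:
  position: [0.4, -0.3, 0.0]
```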

Perception


GitHub