Marker Assembling Robot
Brief overview
The project assembles markers and caps through sequences of pick, place, press, and sort operations. The work was largely inspired by the use of robots in manufacturing and industry. The project uses a RealSense camera to detect the markers’ colors and MoveIt manipulation commands to actuate the robot. Franka-specific actions were also used to grip caps and markers during movement. The project is coordinated by a state machine built with the ROS SMACH package. The state machine sorts colors by hue based on camera data from the RealSense perception subsystem, then leverages the manipulation services to pick, place, and press caps and markers in the assembly tray.
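As a rough illustration of how SMACH coordinates these steps, here is a minimal sketch of a sort/pick-place/press loop. The state names, outcomes, and transitions are hypothetical, not the project's actual state machine:

```python
import rospy
import smach

class Sort(smach.State):
    """Hypothetical state: query perception for hue values and pick a target."""
    def __init__(self):
        smach.State.__init__(self, outcomes=['sorted', 'empty'])

    def execute(self, userdata):
        # Call the perception capture service here.
        return 'sorted'

class PickPlace(smach.State):
    """Hypothetical state: move a marker/cap between the feed and assembly trays."""
    def __init__(self):
        smach.State.__init__(self, outcomes=['placed'])

    def execute(self, userdata):
        # Call the manipulation_pnp pick-and-place services here.
        return 'placed'

class Press(smach.State):
    """Hypothetical state: press the cap onto the marker."""
    def __init__(self):
        smach.State.__init__(self, outcomes=['pressed'])

    def execute(self, userdata):
        # Call the manipulation_press service here.
        return 'pressed'

if __name__ == '__main__':
    rospy.init_node('assembly_state_machine')
    sm = smach.StateMachine(outcomes=['done'])
    with sm:
        smach.StateMachine.add('SORT', Sort(),
                               transitions={'sorted': 'PICK_PLACE', 'empty': 'done'})
        smach.StateMachine.add('PICK_PLACE', PickPlace(),
                               transitions={'placed': 'PRESS'})
        smach.StateMachine.add('PRESS', Press(),
                               transitions={'pressed': 'SORT'})
    sm.execute()
```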
Video demo
Collaborators
- Jiasen Zheng (Perception & 3D Modeling)
- Kojo Welbeck
- Ian Kennedy
- Bhagyesh Agresar
- Keaton Griffith
Manipulation
The manipulation package relies on several different nodes in order to function:
- manipulation_cap provides low-level position and orientation sensing services, along with error recovery, movements, and gripper grasping
- manipulation_macro_a provides position movement services for image captures using the RealSense
- manipulation_press provides a pressing service to cap the markers
- manipulation_local provides manipulation services for moving in between trays
- manipulation_pnp provides pick and place services between the feed and assembly trays
- debug_manipulation logs the external forces experienced by the robot
- plan_scene provides a planning scene for simulation-based motion planning in MoveIt
- limit_set provides services used with the franka_control launch file, which is launched before MoveIt. It allows the user to reconfigure the collision limits on the robot.
Manipulation also relies on a Python manipulation package with translational, array-position, and verification utilities. A scene.yaml file specifies parameters for the plan_scene node and for the main manipulation movement scene elsewhere in the project.
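Since these services ultimately drive the arm through MoveIt, a single motion request from Python looks roughly like the following. This is a sketch, not project code; the group name panda_arm and the named target ready are assumptions based on the standard Franka/Panda MoveIt configuration:

```python
import sys
import rospy
import moveit_commander

# Minimal MoveIt motion sketch for the Franka arm.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('manipulation_sketch')
group = moveit_commander.MoveGroupCommander('panda_arm')  # assumed group name
group.set_named_target('ready')  # named pose from the robot's SRDF (assumed)
group.go(wait=True)              # plan and execute
group.stop()                     # guard against residual motion
moveit_commander.roscpp_shutdown()
```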
Perception
- All the computer vision algorithms are embedded in the vision Python package. Functions in the package can be called in a node by
import <package name>.<script name>
such as import vision.vision1
- sample_capture.py: A helper Python script to capture images using the RealSense D435i RGB-D camera; a rough sketch follows these steps
- Connect the RealSense camera to your laptop
- Run the Python script in a terminal:
python3 sample_capture.py
- Press ‘a’ to capture and save an image and use ‘q’ to quit the image window
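A rough sketch of such a capture loop, assuming the pyrealsense2 and OpenCV Python bindings (this is not the project's actual script):

```python
import cv2
import numpy as np
import pyrealsense2 as rs

# Stream 640x480 color frames from the RealSense camera.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

count = 0
try:
    while True:
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue
        image = np.asanyarray(color_frame.get_data())
        cv2.imshow('RealSense', image)
        key = cv2.waitKey(1) & 0xFF
        if key == ord('a'):    # capture and save the current frame
            cv2.imwrite('sample_%d.png' % count, image)
            count += 1
        elif key == ord('q'):  # quit the image window
            break
finally:
    pipeline.stop()
    cv2.destroyAllWindows()
```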
- hsv_slider.py: A helper Python script to find the appropriate HSV range for color detection; a minimal sketch follows these steps
- Add the path of the image to
frame = cv.imread()
to read the image
- Run the Python script in a terminal:
python3 hsv_slider.py
- A window of the original image and a window of HSV image with slide bars will show up
- Test with HSV slide bars to find an appropriate range
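A minimal version of such a slider tool could look like the sketch below; the window and trackbar names are placeholders, and OpenCV's hue channel spans 0-179 while saturation and value span 0-255:

```python
import cv2 as cv

frame = cv.imread('sample_0.png')  # placeholder path to the image under test
hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)

cv.namedWindow('mask')
# One trackbar per HSV bound.
for name, maximum in [('H low', 179), ('H high', 179),
                      ('S low', 255), ('S high', 255),
                      ('V low', 255), ('V high', 255)]:
    cv.createTrackbar(name, 'mask', 0, maximum, lambda v: None)

cv.imshow('original', frame)
while True:
    lower = (cv.getTrackbarPos('H low', 'mask'),
             cv.getTrackbarPos('S low', 'mask'),
             cv.getTrackbarPos('V low', 'mask'))
    upper = (cv.getTrackbarPos('H high', 'mask'),
             cv.getTrackbarPos('S high', 'mask'),
             cv.getTrackbarPos('V high', 'mask'))
    mask = cv.inRange(hsv, lower, upper)
    cv.imshow('mask', mask)
    if cv.waitKey(50) & 0xFF == ord('q'):
        break
cv.destroyAllWindows()
```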
- vision1.py: A Python script to detect contours and return a list of hue values; a simplified sketch follows these steps
- For testing purposes, an image can be loaded by setting the path to
image = cv.imread()
- Run the Python script in a terminal:
python3 vision1.py
- A processed image with contours and a list of hue values will be returned
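A simplified version of that contour-and-hue pipeline, sketched under the assumption that markers are segmented by HSV thresholding (the thresholds and path below are placeholders):

```python
import cv2 as cv
import numpy as np

def detect_hues(image, lower=(0, 100, 100), upper=(179, 255, 255), min_area=500):
    """Return the annotated image and the mean hue inside each detected contour."""
    hsv = cv.cvtColor(image, cv.COLOR_BGR2HSV)
    mask = cv.inRange(hsv, lower, upper)
    contours, _ = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    hues = []
    for contour in contours:
        if cv.contourArea(contour) < min_area:
            continue  # skip small noise blobs
        blob = np.zeros(mask.shape, dtype=np.uint8)
        cv.drawContours(blob, [contour], -1, 255, cv.FILLED)
        mean_hsv = cv.mean(hsv, mask=blob)  # (H, S, V, 0) averaged over the blob
        hues.append(mean_hsv[0])
    cv.drawContours(image, contours, -1, (0, 255, 0), 2)
    return image, hues

if __name__ == '__main__':
    image = cv.imread('sample_0.png')  # placeholder path
    annotated, hues = detect_hues(image)
    print(hues)
```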
The node that uses this library is called vision_bridge.
- vision_bridge node: Node that publishes a stream of ROS Images and implements a capture service that returns a list of H values of the detected markers and caps from an image.
- Run
rosservice call /capture
and specify the tray_location to run the service:
- tray_location 1: Represents the Assembly Location
- tray_location 2: Represents the Markers Location
- tray_location 3: Represents the Caps Location
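The service can also be called from Python. This sketch assumes a hypothetical srv type with an integer tray_location request field and a list of hue values in the response; the real package and srv names may differ:

```python
import rospy
from vision.srv import Capture  # hypothetical package and srv names

rospy.init_node('capture_client')
rospy.wait_for_service('capture')
capture = rospy.ServiceProxy('capture', Capture)
response = capture(tray_location=2)  # 2: Markers Location
rospy.loginfo('detected hues: %s', response.hues)
```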