This page provides a detailed tutorial on how to use the vision action nodes.
- Training the Snapshot
- Picking an Object – Static
- Picking an Object – Moving Conveyor
- Vision Inspect
- TCP - The tool center point (TCP) is the point relative to which all robot positioning is defined.
- Landmark - A landmark is a fiducial marker placed in the field of view of an imaging system to be used as a point of reference. Rethink Robotics currently supplies landmarks on anodized aluminum with an adhesive backing that can be placed on the surface of modules in the work area.
Training the Snapshot
The first step in using the vision functionality is to train a snapshot. This means defining the surface on which the part will be located and moved, and defining the visual pattern that the robot will look for. The robot compares this visual pattern against camera images in real time to determine whether the object is present, as well as its location and orientation.
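Conceptually, this pattern search is a form of template matching: the trained pattern is scored for similarity at each position in the live image, and the best-scoring position gives the object's location. Intera handles this internally; the 1-D sketch below (plain Python, illustrative only, not Intera's actual algorithm) shows the idea using normalized cross-correlation.

```python
import math

def normalized_cross_correlation(signal, template):
    """Score the template against every window of the signal (1-D sketch)."""
    n = len(template)
    t_mean = sum(template) / n
    t_dev = [t - t_mean for t in template]
    t_norm = math.sqrt(sum(d * d for d in t_dev))
    scores = []
    for i in range(len(signal) - n + 1):
        window = signal[i:i + n]
        w_mean = sum(window) / n
        w_dev = [w - w_mean for w in window]
        w_norm = math.sqrt(sum(d * d for d in w_dev))
        if t_norm == 0 or w_norm == 0:
            scores.append(0.0)  # flat region: no pattern information
            continue
        scores.append(sum(a * b for a, b in zip(t_dev, w_dev)) / (t_norm * w_norm))
    return scores

# Find where the trained pattern appears in the "live image" row.
live = [0, 0, 1, 3, 1, 0, 0, 0]
pattern = [1, 3, 1]
scores = normalized_cross_correlation(live, pattern)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # prints 2: the pattern starts at index 2
```

A perfect match scores 1.0, which is also why reflective parts cause trouble: moving bright spots change the live values and drag the score down even when the part is there.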
Refer to the Vision page for the basic steps on how to train a snapshot. Some things to consider when training the snapshot:
- The capture area can be different depending on your objective. If there are similar parts in the field of view and both types of parts should be recognized, then select an area that both parts have in common. If only one of the parts should be recognized, then select an area unique to that part.
- Reflective parts can cause problems because bright spots reflecting light may move depending on the location and orientation. It may be necessary to use back lighting and train the silhouette of the part.
- Make sure to train the surface (typically using a landmark) at the same height as the surface of the object seen by the camera. Place the landmark on the top of the object if necessary. Make sure the landmark is perfectly parallel to the surface.
Once the snapshot has been trained, a Vision Inspect or Vision Locate node in the behavior tree is required to use the vision features.
Picking an Object – Static
1. Add a Move To node.
2. Open the Node Inspector for the Move To node you just created and select the drop-down menu next to the +Arm Pose button. Select the pose named "camera pose <Snapshot Name>"; in this example, it is called "camera pose Locator 1". This commands the arm to move to the position where the object was trained, which is important because if the camera is at a different height or angle, the object will appear different than it did during training.
Note: With the release of Intera 5.2, the robot will automatically move the arm to the vision pose before taking the image. The Move To node is not required when using Intera 5.2 or later.
3. Add a Vision Locate node to the behavior tree.
4. Open up the Node Editor for the Vision Locate node. Adjust the settings according to the needs of the specific application. For details on all of the parameters of the vision node, refer to the Vision Locate page.
5. Once all of the parameters have been set, press the TEST button, click GO TO, and select RELOCATE to update the part location if it has moved since the object was trained. This is important because the poses trained after this point will rely on the object frame.
6. Create the sequence that you would like for the robot to follow once an object has been detected. Below is a typical example of a vision pick.
7. As the Move To nodes are added, make sure the part itself does not move. If the part moves, follow the instructions in Step 5 to reset the object frame.
8. When you create Move To nodes as children of a configured Vision Locate node, they will have the parent frame <Vision Locate node name> <Snapshot Name>. This is the frame associated with the vision object in the location where it was last captured. Open the Move To nodes to make sure they are using this frame.
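The reason the parent frame matters in step 8 can be seen with a little geometry: a pick pose taught relative to the object frame is carried along when the object moves, because the stored offset is re-applied to the object's latest pose. A minimal 2-D sketch of that transform (plain Python; the function and parameter names are illustrative, not Intera API):

```python
import math

def object_to_world(object_pose, offset):
    """Transform a point taught in the object frame into world coordinates.

    object_pose: (x, y, theta) of the object frame in the world (theta in radians)
    offset:      (dx, dy) of the taught pick point, expressed in the object frame
    """
    x, y, theta = object_pose
    dx, dy = offset
    # Rotate the taught offset by the object's orientation, then translate.
    wx = x + dx * math.cos(theta) - dy * math.sin(theta)
    wy = y + dx * math.sin(theta) + dy * math.cos(theta)
    return (wx, wy)

# Pick point taught 0.05 m in front of the object's origin.
taught_offset = (0.05, 0.0)

# Object as seen at training time, and again after moving and rotating 90 degrees.
print(object_to_world((0.40, 0.20, 0.0), taught_offset))          # roughly (0.45, 0.20)
print(object_to_world((0.60, 0.10, math.pi / 2), taught_offset))  # roughly (0.60, 0.15)
```

The same offset lands on the part in both cases; a Move To node parented to the base frame instead would keep aiming at the training-time location.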
Picking an Object – Moving Conveyor
The steps for creating a moving pick are very similar to a static pick. Train the snapshot in the same way, and create the same logic in the behavior tree. The differences are in the parameters of the Vision Locate node and the Move To nodes.
- In the Vision Locate node, select "Moving" as the "locator type".
- The options unique to a moving vision pick are under "Predict Location" and "Adjust time for grasp". See the details on these options on the Vision Locate page.
- In the Move To node for the pick move, open the node editor, enable the "timed move" option, and for the "input variable" choose the "action_time" associated with the name of the Vision Locate node.
- If the timing of the move is off, try adjusting the "arrive within" and "actuation time" parameters in the vision node. Increase the values if the pick is too late, and decrease them if it is too early.
Vision Inspect
Another use of the vision capabilities is inspection: detecting the presence or absence of an object. Start the same way as for a vision pick, by training the snapshot. In this example, a dipstick on an engine is inspected.
1. Insert a Vision Inspect node into the task and select the snapshot from the drop down in the node editor.
2. Create a variable to be used as a flag indicating whether the part is present. Make the default value false, since the logic will set it to true when the part is found.
3. In the behavior tree, put a Set To node as a child of the vision node. The Set To node will set this variable to true, indicating the part is present. The child branch of the vision node is only executed if the part is found; if it is not found, the branch is skipped and the variable stays false.
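The branch logic in steps 2 and 3 boils down to: the flag defaults to false and is only flipped to true by the child branch, which runs only on a successful match. A plain-Python sketch of that control flow (illustrative; `vision_inspect` here is a stand-in for the node, not an Intera API):

```python
def run_inspection(vision_inspect):
    """Mimic the behavior-tree pattern: Set To runs only if the part is found.

    vision_inspect: callable returning True when the snapshot pattern matches.
    """
    part_present = False          # step 2: flag variable defaults to false
    if vision_inspect():          # vision node succeeds only on a match
        part_present = True       # step 3: Set To node in the child branch
    return part_present

print(run_inspection(lambda: True))   # prints True  -> dipstick detected
print(run_inspection(lambda: False))  # prints False -> branch skipped, flag stays false
```

Downstream logic can then branch on the flag, for example to route a failed part or raise an error.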
Inspect and Send Email (new in Intera 5.5)
After inspecting, we can go a step further by sending an email when the Vision Inspect node does not detect the object:
1. Delete the Throw Error node from the behavior tree created in the previous section.
2. Workspace Snapshot: toggle "send email".
3. Workspace Snapshot: fill in your email address.
4. Open the Settings menu and fill in the respective fields.
5. Run the task.
Note: If you use a Gmail account as was done in this example, you need to configure it to allow less secure apps.
The behavior tree should be like this:
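Intera 5.5 sends the notification itself from the Workspace Snapshot settings. If you ever need the same behavior outside the robot, or just want to see what is happening under the hood, the equivalent in plain Python (standard-library `smtplib`) looks like the sketch below. The server address, accounts, and part name are placeholders, and the actual send is left commented out.

```python
import smtplib
from email.message import EmailMessage

def build_failure_email(sender, recipient, part_name):
    """Compose a notification for when the Vision Inspect node finds nothing."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = f"Inspection failed: {part_name} not detected"
    msg.set_content(f"The Vision Inspect node did not detect '{part_name}'.")
    return msg

def send_failure_email(msg, host, port=587, password=None):
    """Deliver the message over an authenticated STARTTLS session."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login(msg["From"], password)
        smtp.send_message(msg)

# Placeholder addresses for illustration only.
msg = build_failure_email("robot@example.com", "operator@example.com", "dipstick")
print(msg["Subject"])  # prints: Inspection failed: dipstick not detected

# Uncomment with a real SMTP host and credentials:
# send_failure_email(msg, host="smtp.example.com", password="app-password")
```

With Gmail, the host would be Gmail's SMTP server and, as noted above, the account must be configured to allow less secure apps (or use an app password).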