Creating a Vision Action

Created by Rico Stodt, Modified on Tue, 1 Feb, 2022 at 2:32 PM by Rico Stodt

This page provides a detailed tutorial on using the vision action nodes.



Definitions


  • TCP - The tool center point (TCP) is the point in relation to which all robot positioning may be defined.
  • Landmark - A landmark is a fiducial marker placed in the field of view of an imaging system to be used as a point of reference. Rethink Robotics currently supplies landmarks on anodized aluminum with an adhesive backing that can be placed on the surface of modules in the work area.

Training the Snapshot


The first step in using the vision functionality is to train a snapshot. This means defining the surface on which the part will be located and moved, and defining the visual pattern the robot will look for. The robot compares live camera images against this pattern in real time to determine whether the object is present, as well as its location and orientation.

Refer to the Vision page for the basic steps on how to train a snapshot. Some things to consider when training the snapshot:

  • The capture area can be different depending on your objective. If there are similar parts in the field of view and both types of parts should be recognized, then select an area that both parts have in common. If only one of the parts should be recognized, then select an area unique to that part.
  • Reflective parts can cause problems because bright spots reflecting light may move depending on the location and orientation. It may be necessary to use back lighting and train the silhouette of the part.
  • Make sure to train the surface (typically using a landmark) at the same height as the surface of the object seen by the camera. Place the landmark on the top of the object if necessary. Make sure the landmark is perfectly parallel to the surface.

Once the snapshot has been trained, a Vision Inspect or Vision Locate node in the behavior tree is required to use the vision features.
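Intera's matching algorithm is internal and not documented publicly, but the idea of comparing a trained pattern against live images can be sketched with a simple normalized cross-correlation in Python. Everything below is illustrative only, not Intera's API:

```python
# Illustrative sketch of pattern matching: slide a trained template over a
# live image and score each window with normalized cross-correlation.
import numpy as np

def match_template(image, template):
    """Return ((row, col), score) of the best template match in the image."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw]
            w = window - window.mean()
            denom = np.linalg.norm(w) * t_norm
            if denom == 0:
                continue  # featureless window, skip
            score = float((w * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

image = np.zeros((8, 10))
image[4:7, 5:8] = np.eye(3)   # embed the trained pattern at row 4, col 5
template = np.eye(3)
print(match_template(image, template))   # best match at (4, 5), score ~1.0
```

This also shows why the bullets above matter: a capture area with distinctive structure scores unambiguously, while a featureless or reflective region produces weak or shifting matches.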


Picking an Object – Static


1. Add a Move To node


PickanObject Static 1.png


2. Open up the Node Inspector for the Move To node you just created and select the drop down menu next to the +Arm Pose button. Select the pose that is named "camera pose <Snapshot Name>". In this example, it is called "camera pose Locator 1". This will command the arm to move to the position where the object was trained. This is important because if the camera is at a different height or angle, the object will appear differently than when it was trained. 

 

Note: With the release of Intera 5.2, the robot will automatically move the arm to the vision pose before taking the image. The Move To node is not required when using Intera 5.2 or later.

 
3. Next, insert a Vision Locate node after the Move To node and select the snapshot from the drop down list.


PickanObject Static 3.png


4. Open up the Node Editor for the Vision Locate node. Adjust the settings according to the needs of the specific application. For details on all of the parameters of the vision node, refer to the Vision Locate page.


Locator 1 Vision Locate Node.PNG


5. Once all of the parameters have been set, press the TEST button, click GO TO and select RELOCATE to update the part location if it has moved since training the object. This is important because the poses trained after this point will rely on the object frame.


6. Create the sequence that you would like for the robot to follow once an object has been detected. Below is a typical example of a vision pick.


PickanObject Static 5.png


7. As the Move To nodes are added, make sure the part itself does not move. If the part moves, follow the instructions in Step 5 to reset the object frame.


8. When Move To nodes are created as children of a configured Vision Locate node, their parent frame will be the <Vision Locate node name> <Snapshot Name> frame. This is the frame associated with the vision object in the location where it was last captured. Open the Move To nodes to make sure they are using this frame.


PickanObject Static 6.png
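The role of the object frame in steps 5-8 can be sketched in plain Python. This is a simplified 2D illustration with made-up names, not Intera's API: a grasp offset taught relative to the object is re-expressed in the robot's base frame wherever the object was last located.

```python
# Hypothetical sketch: a pick pose taught in the object frame follows the
# object. When Vision Locate updates the object's pose, the same taught
# offset lands on the relocated part.
import math

def object_to_base(obj_x, obj_y, obj_theta, offset_x, offset_y):
    """Transform a point taught in the object frame into the base frame."""
    cos_t, sin_t = math.cos(obj_theta), math.sin(obj_theta)
    base_x = obj_x + cos_t * offset_x - sin_t * offset_y
    base_y = obj_y + sin_t * offset_x + cos_t * offset_y
    return base_x, base_y

# Grasp point taught 0.05 m from the object origin along the object's x axis:
grasp_offset = (0.05, 0.0)
# Object relocated at (0.40, 0.20) m in the base frame, rotated 90 degrees:
print(object_to_base(0.40, 0.20, math.pi / 2, *grasp_offset))
```

This is also why step 7 matters: if the part moves after the last RELOCATE, the stored object pose is stale and the computed base-frame pick point no longer lands on the part.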


Picking an Object – Moving Conveyor


The steps for creating a moving pick are very similar to a static pick. Train the snapshot in the same way, and create the same logic in the behavior tree. The differences are in the parameters of the vision locate node, and the Move To nodes.

  • In the Vision Locate node, select "Moving" as the "locator type".
  • The options unique to a moving vision pick are under "Predict Location" and "Adjust time for grasp". See the details on these options on the Vision Locate page.


Vision Locate Moving Settings.PNG


  • In the Move To node for the pick move, open the node editor, enable the "timed move" option, and for the "input variable", choose the "action_time" associated with the name of the Vision Locate node.


Timed Move for Vision.PNG


  • If the timing of the move is off, try adjusting the "arrive within" and "actuation time" parameters in the vision node. Increase the values if the pick is too late, and decrease if it is too soon.
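Conceptually, the moving pick aims at where the part will be, not where it was seen, because the part keeps traveling while the arm moves and the gripper closes. The sketch below illustrates that idea in plain Python; the function and parameter names are made up for illustration and are not Intera parameters:

```python
# Illustrative sketch of location prediction for a moving pick: offset the
# detected position by the distance the belt moves before the grasp closes.
def predicted_pick_position(detected_pos, belt_speed, arrive_within, actuation_time):
    """Position (m along the belt) of the part at the moment of grasp."""
    travel_time = arrive_within + actuation_time   # seconds until the grasp
    return detected_pos + belt_speed * travel_time

# Part seen at 0.30 m, belt at 0.10 m/s, arm arrives in 1.2 s, gripper takes 0.3 s:
print(predicted_pick_position(0.30, 0.10, 1.2, 0.3))   # about 0.45 m downstream
```

This also explains the tuning rule in the last bullet: increasing "arrive within" or "actuation time" shifts the predicted grasp point further downstream, compensating for a pick that lands too late.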

Vision Inspect


Another use of the vision capabilities is for inspection by detecting the presence or absence of an object. Start the same way as creating a vision pick by training the snapshot. In this example, a dipstick on an engine is inspected.


Dipstick training.png


1. Insert a Vision Inspect node into the task and select the snapshot from the drop down in the node editor.


Vision Inspect Select Snapshot.png


2. Create a variable to be used as a flag indicating whether the part is present or not. Make the default value false since the logic will set it to true when it is found.


3. In the behavior tree, put a Set To node as a child of the vision node. The Set To node will set this variable to true indicating the part is present. The child branch of the vision node is only executed if the part is found. If it is not found, this will be skipped, and the variable will stay false.


Variable vision inspection.png


4. Below the vision node, add a Do If node that checks the variable and, if it is false, throws an error.


5. Be sure to put a Set To node before the vision node so that the variable is reset to false on subsequent loops of the logic.

Vision inspection tree.png
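The inspection logic in steps 2-5 can be sketched in plain Python. Here `vision_inspect` is a hypothetical stand-in for the Vision Inspect node; in Intera this lives in the behavior tree, not in a script:

```python
# Plain-Python sketch of the inspection pattern: reset a flag, set it true
# only if the part is found, and raise an error when it stays false.
class PartMissingError(Exception):
    """Raised when the inspection does not find the part."""

def inspect_cycle(vision_inspect):
    part_present = False          # Set To node before the vision node (step 5)
    if vision_inspect():          # child branch runs only when the part is found
        part_present = True       # Set To node as child of the vision node (step 3)
    if not part_present:          # Do If below the vision node (step 4)
        raise PartMissingError("dipstick not detected")
    return part_present

print(inspect_cycle(lambda: True))   # part found, cycle completes
```

The explicit reset at the top is the point of step 5: without it, a part found on the first loop would leave the flag true forever and mask later failures.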

 

Inspect and Send Email (new in Intera 5.5)


After inspecting, we can go a step further by sending an email when the Vision Inspect node does not detect the object:


6. Delete the throw error node from the behavior tree created in the previous section.


7. Add a Workspace Snapshot node as a child of the Do If node.


8. In the Workspace Snapshot node, toggle "send email".


9. In the Workspace Snapshot node, fill in your email address.


Workspace step 9.jpg


10. Open the Settings menu and fill in the respective fields.


Workspace step 11.jpg


11. Run the task.

 

Note: If you use a Gmail account as was done in this example, you need to configure it to allow less secure apps.

 

The behavior tree should look like this:


 
Workspace step 12.jpg
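Conceptually, the "send email" option assembles and sends a message much like the following sketch using Python's standard library. The address, subject, and server details below are placeholders for illustration, not values Intera uses:

```python
# Illustrative sketch of a failure-notification email with a snapshot attached.
from email.message import EmailMessage

def build_failure_email(recipient, image_bytes):
    """Build an email reporting a failed inspection, with the snapshot attached."""
    msg = EmailMessage()
    msg["To"] = recipient
    msg["Subject"] = "Vision Inspect: part not detected"
    msg.set_content("The Vision Inspect node did not find the part.")
    # Attach the workspace snapshot image:
    msg.add_attachment(image_bytes, maintype="image", subtype="png",
                       filename="workspace_snapshot.png")
    return msg

msg = build_failure_email("operator@example.com", b"fake png bytes")
# Sending it would then use SMTP credentials like those in the Settings menu, e.g.:
#   with smtplib.SMTP("smtp.gmail.com", 587) as s:
#       s.starttls(); s.login(user, password); s.send_message(msg)
```

The commented SMTP lines correspond to the server, port, and login fields filled in during step 10, and to the Gmail note above: authentication is between the robot and the mail server, which is why account security settings can block the send.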

 

