External Vision

Created by Rico Stodt, Modified on Tue, 1 Feb, 2022 at 8:11 AM by Rico Stodt

This page provides a detailed overview of the external vision feature and how to implement it.


Using an External Vision System


An external camera can be configured to send the spatial coordinates of an object directly to Sawyer. This can greatly reduce cycle time compared to using the embedded camera in the robot arm, and can help create a consistent vision detection system that is less susceptible to external factors.


This page covers the instructions for setting up an external vision system with Intera and Sawyer. It does not provide detailed instructions on how to configure an external vision system or its settings, such as calibration, network communication, detector settings, lighting, etc. It is the end user's or integrator's responsibility to set up and program the external vision system intended for use.


The system and task performance are highly dependent on the quality of the external camera calibration (refer to your vision system's documentation for instructions on how to calibrate). Please verify its performance before use with Sawyer and Intera. The user is responsible for calibrating the external vision system properly before integrating the device with Sawyer. A calibration target is typically required, but each system may vary. Please contact the external vision system manufacturer's customer support to learn more about calibration.


External Vision System Requirements


Data Transmit Requirements


When using an external camera snapshot, the following requirements must be satisfied for the snapshot to pass with a valid detection (an illustrative sketch of these checks follows the list):

  • The Vision Locate node must receive coordinates with a true Pass/Fail signal at least once per second
  • The X and Y coordinates cannot change more than 1 mm per second
  • The Rz coordinate cannot change more than 5 degrees per second
  • The Vision Locate node must receive enough positions with a true Pass/Fail signal
  • The number of positions required is set with the Confidence setting in the Vision Locate node. By default, the confidence is set to Balanced, which requires 5 positions to pass the vision node. A custom number of positions can also be set by changing the confidence to Advanced and setting the number manually
  • These requirements are the same for snapshots using base frame or cross-register frame
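
For illustration only, the following Python sketch mirrors the checks described above on a stream of received samples. The thresholds come from the list; the function itself is hypothetical and is not Intera code.

    # Illustrative only: mirrors the drift and confidence rules above; not Intera code.
    def detection_passes(samples, required_positions=5):
        """samples: list of (x_mm, y_mm, rz_deg, pass_flag, t_sec), received ~once per second."""
        valid = [s for s in samples if s[3]]               # keep samples whose Pass/Fail signal is true
        if len(valid) < required_positions:                # Balanced confidence needs 5 positions
            return False
        for prev, curr in zip(valid, valid[1:]):
            dt = max(curr[4] - prev[4], 1e-6)
            if abs(curr[0] - prev[0]) / dt > 1.0:          # X may not change more than 1 mm per second
                return False
            if abs(curr[1] - prev[1]) / dt > 1.0:          # Y may not change more than 1 mm per second
                return False
            if abs(curr[2] - prev[2]) / dt > 5.0:          # Rz may not change more than 5 degrees per second
                return False
        return True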

 

Work Frame: Cross-registration


The external vision system must meet the following requirements (a sketch of a conforming data stream follows the list):

  • 2D vision system - able to detect objects in a plane
  • Be calibrated against a calibration target and output world coordinates (x, y and rZ with respect to the calibration target), instead of pixel coordinates, as floating-point numbers
  • Send a Pass/Fail signal based on detection while outputting coordinates
  • The Pass/Fail signal must have a numerical value that is 0 when object detection is false, and non-zero when detection is true
  • Output data via TCP/IP
  • Use or allow users to specify delimiters as one of the following:
  • Semicolon
  • Comma
  • Space
  • Double space
  • Tab
  • Carriage Return, “CR” (\r)
  • New Line / Line Feed, “LF” (\n)
  • New line & carriage return, “CRLF” (\r\n)
  • Output data continuously at least once per second
  • If no object is detected in an image, the system must still send the Fail signal
  • Detect Landmarks accurately using available object detection functions
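
As a rough sketch of what the vision-system side of this interface can look like, the following Python server streams a semicolon-delimited, CRLF-terminated message once per second over TCP/IP. The port number, field values, and detect_object() function are placeholders, not part of any specific vision system's API; consult your vision system's documentation for how to configure its actual output.

    # Illustrative sketch of an external vision system streaming delimited world
    # coordinates over TCP/IP. The port and detect_object() are placeholders.
    import socket, time

    HOST, PORT = "", 4000                      # assumed port; match your camera's settings

    def detect_object():
        """Placeholder: return (x, y, rz, detected) from the vision system."""
        return 12.345, 67.890, 15.0, True

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()                 # the robot-side TCP/IP device is configured with this address/port
        with conn:
            while True:
                x, y, rz, detected = detect_object()
                passed = 1 if detected else 0  # non-zero = detected, 0 = not detected
                msg = f"{x:.3f};{y:.3f};{rz:.3f};{passed}\r\n"   # semicolon-delimited, CRLF-terminated
                conn.sendall(msg.encode("utf-8"))
                time.sleep(1.0)                # output at least once per second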

 

Work Frame: Base Frame (may be used for 3D vision systems)


The system must be able to:

  • Send a Pass/Fail signal based on detection while outputting coordinates
  • The Pass/Fail signal must have a numerical value that is 0 when object detection is false, and non-zero when detection is true
  • Output data via TCP/IP (UTF-8 encoding scheme)
  • Output data continuously at least once per second
  • If no object is detected in an image, the system must still send the Fail signal
  • Have a way to send world coordinates as floating-point numbers in the robot base frame

Configure an External Camera


Before configuring Sawyer, the following steps need to be completed on the external vision system:

  • Calibrate the external vision system with a calibration target supplied by its manufacturer.
  • When using the cross-register work frame, train the external vision system to recognize a Landmark, which will be used for cross registration, and output its coordinates.
  • Configure it to send coordinates as a delimited message over TCP/IP (encoding scheme UTF-8).
  • The TCP/IP message must also include a Pass/Fail signal that indicates whether the part is detected.


Please double check:

  • The external vision system outputs world coordinates instead of pixel coordinates.
  • The external vision system outputs data continuously at least once per second.
  • Only one delimiter should follow each data value. For example: 1.234, 5.678, 9.012, true;
  • If multiple delimiters are sent together without any data in between, the message is not acceptable. For example, 1.234, 5.678, 9.012,, true; is invalid because there are two commas after the third data value (see the parsing sketch after this list).
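
As a simple illustration of why empty fields are rejected, splitting the message on its delimiter yields an empty token. This parsing sketch is illustrative only and is not Intera's internal logic.

    # Illustrative: an empty token between delimiters makes the message invalid.
    good = "1.234,5.678,9.012,1"
    bad  = "1.234,5.678,9.012,,1"             # two commas with no data in between

    def fields_ok(message, delimiter=","):
        tokens = message.split(delimiter)
        return all(token.strip() != "" for token in tokens)

    print(fields_ok(good))   # True
    print(fields_ok(bad))    # False - the empty token between the two commas fails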

Choose the Appropriate Work Frame


The external vision system must satisfy the following requirements depending on which work frame will be selected in the Snapshot Editor. There are two options: cross-register and base frame. Navigate to the External Cam tab of the Snapshot Editor and select a work frame.


External Vision Work Frame.PNG

 

  • If Cross-register is selected as the work frame, the user must go through the cross-registration workflow, explained in the ‘Training an Object’ section below, to convert coordinates received from the external vision system into Sawyer’s base frame.
  • Please note that only 2D vision systems are supported for cross-registration in Intera 5.2. Therefore, three coordinates are required: x, y and rZ (rotation about Z).
  • When a 2D vision system is used, we highly recommend selecting the cross-register work frame if the system satisfies the requirements specified above.
  • If base frame is selected, the user will need to send data from the external vision system that describes the position of the tracked object relative to Sawyer's base frame. This is the recommended path for tracking with 3D cameras in Intera 5.2.
  • The user is responsible for converting the coordinates from the external vision system into the robot's base frame; a minimal conversion sketch follows this list.
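
When the base frame is used, the conversion from camera coordinates to robot base coordinates happens outside Intera. Below is a minimal planar (2D) sketch of such a conversion; the offsets and rotation are placeholder values that would come from your own calibration of the camera relative to the robot base, not from Intera.

    # Illustrative planar conversion from camera coordinates to the robot base frame.
    # The transform values below are assumed and must come from your own
    # camera-to-robot calibration.
    import math

    CAM_TO_BASE_X_MM   = 450.0      # camera origin expressed in the robot base frame (assumed)
    CAM_TO_BASE_Y_MM   = -120.0
    CAM_TO_BASE_RZ_DEG = 90.0       # rotation of the camera frame about the base Z axis (assumed)

    def camera_to_base(x_cam, y_cam, rz_cam_deg):
        a = math.radians(CAM_TO_BASE_RZ_DEG)
        x_base = CAM_TO_BASE_X_MM + x_cam * math.cos(a) - y_cam * math.sin(a)
        y_base = CAM_TO_BASE_Y_MM + x_cam * math.sin(a) + y_cam * math.cos(a)
        rz_base_deg = rz_cam_deg + CAM_TO_BASE_RZ_DEG
        return x_base, y_base, rz_base_deg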

Adding an External Camera as a TCP/IP Device


External Cam Tab.png


  • Connect the external vision system to the robot via an Ethernet cable. Note that a network switch may be needed in order to connect the camera to the controller while also connecting a laptop for Intera Studio.
  • Click +TCP/IP DEVICE and follow the instructions at Add a Device under TCP/IP to properly connect Sawyer to the vision system. Each axis of movement (including rotation) will require a unique data field. In the TCP device configuration, be sure to select units for each signal that correctly match the output from the external camera. Intera also needs a Pass/Fail signal which is true when there is an object being detected, and false when there is not. See the example device setup and instructions below.

 

Example Device Configuration

  • Set the Name and IP settings
  • Note, the IP address and port are configured based on the settings of the external vision system.
  • Settings used in the example below:
  • IP Address: 192.168.52.201
  • Port: 4000


External Vision Device Setup Name and IP.PNG

 

  • Configure the input messages (Unpack Data-In)
  • 4 fields: x, y, Rz and Pass/Fail
  • Start delimiter: None
  • End delimiter: return + new line
  • Internal delimiter: semicolon


Note: Other delimiters may be used based on the settings of the external vision system.
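
With the settings shown above (four fields, semicolon as the internal delimiter, carriage return + new line as the end delimiter), a single message from the camera would look like the line below. The numeric values are examples only, and \r\n stands for the CRLF terminator:

    23.118;-4.507;12.250;1\r\n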



External Vision Device Setup Unpack Data-In.PNG

 

  • Configure the names, types, units and default values of the input data
  • The coordinates should be configured as Float.
  • Make sure the units are consistent with those in the external vision system.
  • The default values can all be 0.


External Vision Device Setup Define Data-In.PNG

 

  • Click save in the window.
  • Once an external camera has been added, go back to the Snapshot Editor and select the device from the Pick Device drop-down menu.
  • If an object of interest is in the field of view of the external vision system, and the corresponding detector is running, the external vision system should be continuously outputting signals to Intera. Please double check in Shared Data that the signals are received and configured correctly. A quick way to verify the raw stream from a laptop is sketched below.


External Vision Device Setup Shared Data Input Confirmation.PNG
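As an additional sanity check outside Intera, the raw stream can be read from a laptop on the same network with a small client like the one below. The IP address and port are the example values from the device configuration above; replace them with your own.

    # Illustrative: verify the camera is streaming by connecting from a laptop.
    # IP and port are the example values used earlier in this article.
    import socket

    with socket.create_connection(("192.168.52.201", 4000), timeout=5) as s:
        data = s.recv(1024)
        print(data.decode("utf-8"))   # expect something like "23.118;-4.507;12.250;1\r\n"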


Training an Object


Configure Inputs and Coordinates


Inputs and Coordinates.png


This panel is used to define how the information from the camera will be used by Intera.

  • Click on the drop-down menu beneath each axis label and select the appropriate data variable from the available data fields coming from the camera.
  • If the data fields are not available, verify that the camera is configured correctly and is actively sending data.
  • Ensure the pass/fail data field is also selected for the pass/fail signal drop-down.
  • Notes:
  • The X and Y fields will only accept signals configured with units of distance (such as mm) and the rZ field will only accept signals configured with units of rotation (such as degrees).
  • Double check that the units are selected correctly. Intera does not select units automatically because it only receives values from the external vision system. If the external vision system outputs the rotation angle rZ in degrees, but the user selects radians in Intera, the final result will be incorrect. Intera will not throw a warning in this case because it interprets the data based on the user input, as the short example below illustrates.
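
For example, if the camera reports an angle of 90 intended as degrees, selecting radians in Intera makes the value be read as 90 radians, which is roughly 5157 degrees, so the resulting orientation will be wrong with no warning:

    # Illustrative: the same number read in two different units gives very different angles.
    import math
    reported = 90.0                       # camera sends 90, intended as degrees
    correct_deg = reported                # degrees selected in Intera: 90 deg
    wrong_deg = math.degrees(reported)    # radians selected by mistake: ~5156.6 deg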

 

Train the Surface


Note: This step is not required when using the base frame as the work frame. This process is identical to setting the surface for the arm camera locator.


External Vision Set Surface.PNG


Steps:

  • Place a Landmark on the work surface; this should be the same surface against which the external camera was calibrated.
  • Adjust the robot’s pose so that the Landmark is within the arm camera’s view.
  • Once the Landmark is in view, adjust the camera settings (gain, exposure and flash) until the software is no longer stating that the image is too dark or bright.
  • When using the flash, set the exposure below 1 (0.4 is recommended for most scenarios) to avoid over-saturation. If the image is over-saturated, the robot screen will display a message that states "too bright".
  • Press ‘SET’ to confirm.

 

Cross-register


Cross-registration is the process by which Intera reconciles the external vision system’s work frame with the robot’s base frame.


Note: This step is not required when using the base frame as the work frame. The user must set up a detector in the external camera in advance so that it is able to locate the Landmark used in this step accurately.


Steps:

  • Place the same Landmark that was used in the previous step on the work surface.
  • Adjust the arm pose so that the arm camera can see the entire field of view of the external camera. The arm camera should not obstruct the view of the external camera. To avoid interference with the external camera, it is acceptable if the arm camera sees the work surface from an angle.


External Vision Cross Register 1.PNG


  • Click the Info icon in the accordion bar to show the instructions.
  • Move the Landmark to the middle of the view shared by the arm camera and the external vision system, rotated 45 degrees.
  • Once both the arm camera and the external camera detect the Landmark, press SAMPLE.


External Vision Cross Register 2.PNG


  • Sample all four corners of the camera view, rotating the Landmark 90 degrees each time in the opposite direction of the sample order. See the animation below.


Cross Registration Animation.gif


  • The arm camera can be moved closer to the Landmark for each sample, as long as it does not obstruct the external vision system.
  • The settings of the arm camera can be adjusted for each sample as well.
  • Continue to take samples until the progress bar is full and green.


External Vision Cross Register 3.PNG


  • Confirm that the marker (seen in the image below) appears and follows the Landmark when it is moved.


External Vision Cross Register 4.PNG
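
The sampling procedure above gives Intera paired observations of the same Landmark placements in the external camera's frame and in the robot's frame. Conceptually, a planar rigid transform (rotation plus translation) between the two frames can be estimated from such pairs. The Python sketch below shows one common way to do this in 2D; it only illustrates the underlying math and is not Intera's implementation.

    # Illustrative 2D rigid-transform fit from paired samples (external-camera point,
    # robot-frame point). This sketches the math behind cross registration only;
    # Intera's own implementation may differ.
    import math

    def fit_rigid_2d(cam_pts, robot_pts):
        """cam_pts, robot_pts: lists of (x, y) observed for the same Landmark placements."""
        n = len(cam_pts)
        cx = sum(p[0] for p in cam_pts) / n          # centroid of camera-frame points
        cy = sum(p[1] for p in cam_pts) / n
        rx = sum(p[0] for p in robot_pts) / n        # centroid of robot-frame points
        ry = sum(p[1] for p in robot_pts) / n
        dot = cross = 0.0
        for (ax, ay), (bx, by) in zip(cam_pts, robot_pts):
            ax, ay = ax - cx, ay - cy                # center both point sets
            bx, by = bx - rx, by - ry
            dot += ax * bx + ay * by
            cross += ax * by - ay * bx
        theta = math.atan2(cross, dot)               # best-fit rotation angle
        tx = rx - (cx * math.cos(theta) - cy * math.sin(theta))
        ty = ry - (cx * math.sin(theta) + cy * math.cos(theta))
        return theta, tx, ty                         # robot point = R(theta) * camera point + t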

 

Train Snapshot (External)

  • It is highly recommended that the user switch the detector to the object of interest using the external vision system software.
  • Up to this point, only the Landmark object has been tested; it should be confirmed that the object of interest can be detected and that the robot receives its coordinates.
  • This change is made in the software of the external vision system, not within Intera.
  • A new detector needs to be set up using the external vision system (if one has not yet been created) or an existing one should be selected.
  • Adjust the pose of the robot arm so it can see the entire field of view of the external camera.


External Vision Train Snapshot External.PNG


  • Move the object around in the field of view of the external camera.
  • A marker (red, green, and blue circles around a center point) representing the coordinates received from the external camera will appear in the arm camera stream of the Snapshot Editor.
  • The marker should accurately follow the object as it is moved in the field of view.
  • This feature is for testing only. The arm camera does not need to be facing the object for the external camera function to operate while running a task.
  • If the marker does not appear or it is not accurately following the object:
  • Ensure the external vision system is live.
  • Ensure the positional data is being sent to and received by the robot via the configured TCP/IP device (use shared data to verify).
  • If you are still having issues, contact support.

 

 

 
