viam.services.vision.client

Module Contents

Classes

VisionClient

Connect to the Vision service, which allows you to access various computer vision algorithms

class viam.services.vision.client.VisionClient(name: str, channel: grpclib.client.Channel)[source]

Bases: viam.services.vision.vision.Vision, viam.resource.rpc_client_base.ReconfigurableResourceRPCClientBase

Connect to the Vision service, which allows you to access various computer vision algorithms (like detection, segmentation, tracking, etc) that usually only require a camera or image input.
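As a minimal usage sketch (assuming a connected RobotClient named robot, a vision service configured on the machine as "my_vision_service", and a camera named "cam1"; the service and camera names are placeholders):

# Minimal usage sketch: the service and camera names are placeholders
my_vision = VisionClient.from_robot(robot, "my_vision_service")

# The same client exposes detections, classifications, and point cloud segmentation
detections = await my_vision.get_detections_from_camera("cam1")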

client: viam.proto.service.vision.VisionServiceStub
async get_detections_from_camera(camera_name: str, *, extra: Mapping[str, Any] | None = None, timeout: float | None = None) List[viam.proto.service.vision.Detection][source]

Get a list of detections in the next image given a camera and a detector

camera_name = "cam1"

# Grab the detector you configured on your machine
my_detector = VisionClient.from_robot(robot, "my_detector")

# Get detections from the next image from the camera
detections = await my_detector.get_detections_from_camera(camera_name)
Parameters:

camera_name (str) – The name of the camera to use for detection

Returns:

A list of 2D bounding boxes around the objects found in the next 2D image from the given camera, along with their labels and the confidence scores of those labels, with the given detector applied to it.

Return type:

List[viam.proto.service.vision.Detection]
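Each returned Detection is a protobuf message carrying a bounding box, a label, and a confidence score. As a brief sketch of reading those fields, continuing the example above:

# Inspect the returned detections (continuing the example above)
for d in detections:
    print(d.class_name, d.confidence)
    print("bounding box:", d.x_min, d.y_min, d.x_max, d.y_max)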

async get_detections(image: viam.media.viam_rgba_plugin.Image.Image | viam.media.video.RawImage, *, extra: Mapping[str, Any] | None = None, timeout: float | None = None) List[viam.proto.service.vision.Detection][source]

Get a list of detections in the given image using the specified detector

# Grab camera from the machine
cam1 = Camera.from_robot(robot, "cam1")

# Get the detector you configured on your machine
my_detector = VisionClient.from_robot(robot, "my_detector")

# Get an image from the camera
img = await cam1.get_image()

# Get detections from that image
detections = await my_detector.get_detections(img)
Parameters:

image (Image) – The image to get detections from

Returns:

A list of 2D bounding boxes around the objects found in the given image, along with their labels and the confidence scores of those labels, with the given detector applied to it.

Return type:

List[viam.proto.service.vision.Detection]

async get_classifications_from_camera(camera_name: str, count: int, *, extra: Mapping[str, Any] | None = None, timeout: float | None = None) List[viam.proto.service.vision.Classification][source]

Get a list of classifications in the next image given a camera and a classifier

camera_name = "cam1"

# Grab the classifier you configured on your machine
my_classifier = VisionClient.from_robot(robot, "my_classifier")

# Get the 2 classifications with the highest confidence scores from the next image from the camera
classifications = await my_classifier.get_classifications_from_camera(
    camera_name, 2)
Parameters:
  • camera_name (str) – The name of the camera to use for classification

  • count (int) – The number of classifications desired

Returns:

The list of Classifications

Return type:

List[viam.proto.service.vision.Classification]
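Each returned Classification carries a label and a confidence score. As a brief sketch continuing the example above:

# Inspect the returned classifications (continuing the example above)
for c in classifications:
    print(c.class_name, c.confidence)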

async get_classifications(image: viam.media.viam_rgba_plugin.Image.Image | viam.media.video.RawImage, count: int, *, extra: Mapping[str, Any] | None = None, timeout: float | None = None) List[viam.proto.service.vision.Classification][source]

Get a list of classifications in the given image using the specified classifier

# Grab camera from the machine
cam1 = Camera.from_robot(robot, "cam1")

# Get the classifier you configured on your machine
my_classifier = VisionClient.from_robot(robot, "my_classifier")

# Get an image from the camera
img = await cam1.get_image()

# Get the 2 classifications with the highest confidence scores
classifications = await my_classifier.get_classifications(img, 2)
Parameters:
  • image (Image) – The image to get classifications from

  • count (int) – The number of classifications desired

Returns:

The list of Classifications

Return type:

List[viam.proto.service.vision.Classification]

async get_object_point_clouds(camera_name: str, *, extra: Mapping[str, Any] | None = None, timeout: float | None = None) List[viam.proto.common.PointCloudObject][source]

Returns a list of the 3D point cloud objects and associated metadata in the latest picture obtained from the specified 3D camera (using the specified segmenter).

To deserialize the returned information into a numpy array, use the Open3D library.

import numpy as np
import open3d as o3d

# Grab the 3D camera from the machine
cam1 = Camera.from_robot(robot, "cam1")
# Grab the object segmenter you configured on your machine
my_segmenter = VisionClient.from_robot(robot, "my_segmenter")
# Get the objects from the camera output, passing the camera by name
objects = await my_segmenter.get_object_point_clouds(cam1.name)
# write the first object point cloud into a temporary file
with open("/tmp/pointcloud_data.pcd", "wb") as f:
    f.write(objects[0].point_cloud)
pcd = o3d.io.read_point_cloud("/tmp/pointcloud_data.pcd")
points = np.asarray(pcd.points)
Parameters:

camera_name (str) – The name of the camera

Returns:

The pointcloud objects with metadata

Return type:

List[viam.proto.common.PointCloudObject]
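As a brief follow-up sketch to the example above, each returned PointCloudObject carries the raw PCD bytes that the deserialization step reads:

# Check how many objects the segmenter found and how much point cloud data each carries
print(f"Found {len(objects)} objects")
for obj in objects:
    print(len(obj.point_cloud), "bytes of point cloud data")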

async do_command(command: Mapping[str, viam.utils.ValueTypes], *, timeout: float | None = None, **__) Mapping[str, viam.utils.ValueTypes][source]

Send/receive arbitrary commands.

motion = MotionClient.from_robot(robot, "builtin")

my_command = {
  "cmnd": "dosomething",
  "someparameter": 52
}

# Can be used with any resource, using the motion service as an example
await motion.do_command(command=my_command)
Parameters:

command (Dict[str, ValueTypes]) – The command to execute

Returns:

Result of the executed command

Return type:

Dict[str, ValueTypes]
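The returned mapping mirrors whatever the resource's do_command implementation sends back; a minimal sketch of capturing it, continuing the example above:

# Capture and inspect the command result (continuing the example above)
result = await motion.do_command(my_command)
print(result)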

classmethod from_robot(robot: viam.robot.client.RobotClient, name: str) typing_extensions.Self

Get the service with the given name from the provided robot.

async def connect() -> RobotClient:
    # Replace "<API-KEY>" (including brackets) with your API key and "<API-KEY-ID>" with your API key ID
    options = RobotClient.Options.with_api_key("<API-KEY>", "<API-KEY-ID>")
    # Replace "<MACHINE-ADDRESS>" with your machine's address
    return await RobotClient.at_address("<MACHINE-ADDRESS>", options)

async def main():
    robot = await connect()

    # Can be used with any resource, using the motion service as an example
    motion = MotionClient.from_robot(robot=robot, name="builtin")

    await robot.close()
Parameters:
  • robot (RobotClient) – The robot

  • name (str) – The name of the service

Returns:

The service, if it exists on the robot

Return type:

Self

classmethod get_resource_name(name: str) viam.proto.common.ResourceName

Get the ResourceName for this Resource with the given name

# Can be used with any resource, using an arm as an example
my_arm_name = Arm.get_resource_name("my_arm")
Parameters:

name (str) – The name of the Resource

get_operation(kwargs: Mapping[str, Any]) viam.operations.Operation

Get the Operation associated with the currently running function.

When writing custom resources, you should get the Operation by calling this function and check to see if it's cancelled. If the Operation is cancelled, then you can perform any necessary cleanup (terminating long-running tasks, cleaning up connections, etc.).
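A minimal sketch of this pattern inside a custom resource method (get_operation is the method documented here and is_cancelled is the Operation's cancellation check; do_long_running_work and perform_one_step are illustrative names, not part of the SDK):

# Sketch of checking for cancellation inside a custom resource method;
# do_long_running_work and perform_one_step are illustrative names, not SDK API
async def do_long_running_work(self, **kwargs):
    operation = self.get_operation(kwargs)
    for _ in range(10):
        if await operation.is_cancelled():
            # Perform any necessary cleanup, then stop early
            break
        await self.perform_one_step()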

Parameters:

kwargs (Mapping[str, Any]) – The kwargs object containing the operation

Returns:

The operation associated with this function

Return type:

viam.operations.Operation

async close()

Safely shut down the resource and prevent further use.

Close must be idempotent. Later configuration may allow a resource to be “open” again. If a resource does not want or need a close function, it is assumed that the resource does not need to return errors when future non-Close methods are called.

await component.close()