Part 1: Vision Pipeline

Goal: Get a computer vision pipeline working.

Skills: Connect a machine to Viam, configure components in the Viam UI, use fragments to add preconfigured services.

Time: ~10 min

Prerequisites

Before starting this tutorial, you need the can inspection simulation running. Follow the Gazebo Simulation Setup Guide to:

  1. Build the Docker image with Gazebo Harmonic
  2. Create a machine in Viam and get credentials
  3. Start the container with your Viam credentials

Once you see “Can Inspection Simulation Running!” in the container logs and your machine shows Live in the Viam app, return here to continue.

1.1 Verify Your Machine is Online

If you followed the setup guide, your machine should already be online.

  1. Open app.viam.com (the “Viam app”)
  2. Navigate to your machine (for example, inspection-station-1)
  3. Verify the status indicator shows Live
  4. Click the CONFIGURE tab if not already selected
Machine page showing the green Live status indicator next to the machine name.

Ordinarily, after creating a machine in Viam, you would download and install viam-server together with the cloud credentials for your machine. For this tutorial, we’ve already installed viam-server and launched it in the simulation Docker container.

1.2 Locate Your Machine Part

Your machine is online but empty. To configure it, you will add components and services to your machine part in the Viam app. A machine part is the computer that runs viam-server for your robot: a PC, Mac, Raspberry Pi, or another computer.

In this tutorial, your machine part is the Linux environment running inside the Docker container.

Find inspection-station-1-main in the CONFIGURE tab.

1.3 Configure the Camera

You’ll now add the camera as a component.

What's a component?

In Viam, a component is any piece of hardware: cameras, motors, arms, sensors, grippers. You configure components by declaring what they are, and Viam handles the drivers and communication.

The power of Viam’s component model: All cameras expose the same API—USB webcams, Raspberry Pi camera modules, IP cameras, simulated cameras. Your application code uses the same GetImages() method regardless of the underlying hardware. Swap hardware by changing configuration, not code.

Add a camera component

To add the camera component to your machine part:

  1. Click the + button and select Component or service
  2. Click Camera
  3. Search for gz-camera
  4. Select gz-camera:rgb-camera
  5. Click Add module
  6. Enter inspection-cam for the name
Why were two items added to my machine part?
After adding the camera component, you will see two items appear under your machine part. One is the actual camera hardware (inspection-cam) that you will use through the Viam camera API. The other is the software module (gz-camera) that implements this API for the specific model of camera you are using. All components that are supported through modules available in the Viam registry will appear this way in the CONFIGURE tab. For built-in components, such as webcams, you will not also see a module appear in the configuration.

Configure the camera

To configure your camera component to work with the camera in the simulation, you need to specify the correct camera ID. Most components require a few configuration parameters.

  1. In the ATTRIBUTES section, add:

    {
      "id": "/inspection_camera"
    }
    
  2. Click Save in the top right
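After saving, you can switch the CONFIGURE tab to JSON mode to see how the component is stored in your machine's raw configuration. The sketch below is approximate: the exact `model` and `api` strings come from the gz-camera module, so treat the namespaced values here as assumptions.

```json
{
  "components": [
    {
      "name": "inspection-cam",
      "api": "rdk:component:camera",
      "model": "viam:gz-camera:rgb-camera",
      "attributes": {
        "id": "/inspection_camera"
      }
    }
  ]
}
```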

1.4 Test the Camera

Verify the camera is working. Every component in Viam has a built-in test card right in the configuration view.

Open the test panel

  1. You should still be on the CONFIGURE tab with your inspection-cam selected
  2. Look for the TEST section at the bottom of the camera’s configuration panel
  3. Click TEST to expand the camera’s test card

The camera component test card uses the camera API to add an image feed to the Viam app, enabling you to determine whether your camera is working. You should see a live video feed from the simulated camera. This is an overhead view of the conveyor/staging area.

1.5 Add a vision pipeline with a fragment

Now you’ll add machine learning to run inference on your camera feed. You need two services:

  1. ML model service that loads a trained model for the inference task
  2. Vision service that connects the camera to the model and returns detections
Components versus services
  • Components are hardware: cameras, motors, arms
  • Services are capabilities: vision (ML inference), motion (arm kinematics), custom control logic

Services often use components. A vision service takes images from a camera, runs them through an ML model, and returns structured results: detections with bounding boxes and labels, or classifications with confidence scores.

The ML model service loads a trained model (TensorFlow, ONNX, or PyTorch) and exposes an Infer() method. The vision service handles the rest: converting camera images to tensors, calling the model, and interpreting outputs into usable detections.

Instead of adding each service manually, you’ll use a fragment. A fragment is a reusable block of configuration that can include components, services, modules, and ML models. Fragments let you share tested configurations across machines and teams.

The try-vision-pipeline fragment includes an ML model service loaded with a can defect detection model and a vision service wired to that model. The fragment accepts a camera_name variable so it works with any camera.

Add the fragment

  1. Click + next to your machine part
  2. Select Insert fragment
  3. Search for try-vision-pipeline
  4. Select it and click Insert fragment

Set the camera variable

The fragment needs to know which camera to use for inference.

  1. In the fragment’s configuration panel, find the Variables section

  2. Set the camera_name variable to inspection-cam

    {
      "camera_name": "inspection-cam"
    }
    
  3. Click Save in the upper right corner
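In the machine's raw JSON configuration, the fragment and its variables appear roughly as below. This is a sketch: in raw JSON, fragments are referenced by ID rather than by name, and `<fragment-uuid>` is a placeholder for the real identifier of try-vision-pipeline.

```json
{
  "fragments": [
    {
      "id": "<fragment-uuid>",
      "variables": {
        "camera_name": "inspection-cam"
      }
    }
  ]
}
```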

Test the vision service

  1. Find the TEST section at the bottom of the vision-service configuration panel
  2. Expand the TEST card
  3. If not already selected, select inspection-cam as the camera source
  4. Set Detections/Classifications to Live
  5. Check that detection and labeling are working
Vision service test panel showing a can detected with a bounding box and FAIL label.

Continue to Part 2: Data Capture →