Viam is a platform that allows you to compose a smart machine or robotics project from any number of components or services through configuration.
Components typically represent physical hardware, while services represent higher-level functionality, often integrating with physical hardware or with other technologies like machine learning, artificial intelligence, or external APIs.
Viam components and services that are configured as part of a machine can be used securely with APIs in popular programming languages.
This playground allows you to interact with a number of built-in and modular Viam resources running on a viam-server instance in Google Cloud Platform (GCP) through Viam's TypeScript SDK.
Note that some demos use video and audio capture, so your browser will ask for permission.
Click on the menu to choose a specific demo, or go to the first demo: System monitoring
Here, we are getting real-time system data from the server running Viam using a module from the Viam registry that provides a Telegraf sensor component.
You can use data from this or other modules to build real-time monitoring or management dashboards, or capture it with Viam's Data Management solution for later use.
This in-browser demo uses the Viam TypeScript SDK.
We are using Viam's Sensor API to retrieve system stats from the Telegraf module:
import { createRobotClient, SensorClient } from '@viamrobotics/sdk';
// connect to the machine (connection options elided)
const client = await createRobotClient({...});
// the Telegraf module exposes system stats as a sensor named 'telegraf'
const systemMonitor = new SensorClient(client, 'telegraf');
const stats = await systemMonitor.getReadings();
// now do something with the stats, like display them in a table
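To drive a live dashboard from these readings, you could poll the sensor on an interval. Here is a minimal sketch, assuming the client setup above; the updateTable helper is hypothetical:
// poll the sensor every few seconds and refresh the dashboard
setInterval(async () => {
  const stats = await systemMonitor.getReadings();
  updateTable(stats); // hypothetical helper that renders the readings
}, 5000);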
Viam has a number of other programming language SDKs that you can use with your projects to interact with components and services.
For example, code with the same functionality would look like this in Python:
from viam.robot.client import RobotClient
from viam.components.sensor import Sensor
# connect to the machine (address and credentials elided)
client = await RobotClient.at_address(...)
system_monitor = Sensor.from_robot(client, 'telegraf')
stats = await system_monitor.get_readings()
# now do something with the stats
For this demo, you must enable microphone and camera access. Please click here and grant permission in your browser.
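The permission prompt is triggered by the browser's standard media-capture API. A minimal sketch; videoElement stands in for a <video> element on the page:
// asking for webcam and microphone access triggers the permission prompt
const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
videoElement.srcObject = stream; // preview the webcam feed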
Each Viam component and service implements an API, providing an interface that is consistent across all models of that resource.
One type of built-in service that Viam provides is a Vision Service, and all models of the Vision Service implement the rdk:service:vision API.
This API provides methods such as GetDetections() and GetDetectionsFromCamera().
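For example, if the machine also has a camera component configured, detections can run on the machine against that camera's frames. A minimal sketch, assuming a connected client as in the other examples; 'my-detector' and 'cam' are placeholder resource names:
import { VisionClient } from '@viamrobotics/sdk';
const detector = new VisionClient(client, 'my-detector');
// run the detector on the machine against the configured camera's latest frame
const detections = await detector.getDetectionsFromCamera('cam');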
Select one of the detectors below to use it with images from your device's webcam.
Each time a new class is detected, we'll use the TTS (text-to-speech) capability of the speech module from Viam's registry to say what is seen out loud.
The efficientdet detector is a popular machine learning object detector trained on the open-source COCO dataset. This detector is in TFLite format and can be found in the Viam registry.
The red detector is a heuristics-based color detector, a model that is built into the Viam platform.
Viam's Vision API is being used to get detections.
Regardless of the model, the API method GetDetections() is called.
import { createRobotClient, VisionClient } from '@viamrobotics/sdk';
import { SpeechClient } from 'speech-service-api';
// connect to the machine (connection options elided)
const client = await createRobotClient({...});
const speech = new SpeechClient(client, 'speechio');
// use the detector you selected above
const detector = new VisionClient(client, 'coco-detector'); // or 'red-detector' or 'face-detector'
// image is a JPEG frame captured from the webcam
let detections = await detector.getDetections(image, 300, 280, 'image/jpeg');
if (detections[0] && detections[0].confidence > 0.6) {
  // speak the detected class out loud via the speech module's TTS
  let sp = await speech.toSpeech('I see a ' + detections[0].className);
  const audioBuffer = await decoders.mp3(sp); // decode the returned MP3 audio
  play(audioBuffer); // decoders/play come from an audio helper library
}
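The image passed to getDetections() above is a JPEG frame grabbed from the webcam. Here is a minimal browser-side sketch for producing it, assuming a <video> element already streaming from getUserMedia:
// grab the current video frame as JPEG bytes
async function captureFrame(video: HTMLVideoElement): Promise<Uint8Array> {
  const canvas = document.createElement('canvas');
  canvas.width = 300;
  canvas.height = 280;
  canvas.getContext('2d')!.drawImage(video, 0, 0, canvas.width, canvas.height);
  const blob: Blob = await new Promise((resolve) => canvas.toBlob((b) => resolve(b!), 'image/jpeg'));
  return new Uint8Array(await blob.arrayBuffer());
}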
For this demo, you must enable microphone and camera access. Please click here and grant permission in your browser.
Smart machines you build on the Viam platform can be extended with resources from elsewhere.
In this demo, we are using a gesture detection model
from HuggingFace managed by the YOLOv8 vision service from the Viam registry to detect letters in ASL (American Sign Language).
Try making some of the signs, ending by signing the letter V (which looks like the "peace" sign) two times in a row.
Once this has been detected, we will use the Rocket 3b LLM (large language model)
deployed to our machine with the local-llm module from the Viam registry to create a response based on the letters you signed.
import { createRobotClient, VisionClient } from '@viamrobotics/sdk';
import { ChatClient } from 'chat-service-api';
const client = await createRobotClient({...});
const aslDetector = new VisionClient(client, 'asl_detector');
const llm = new ChatClient(client, 'llm');
let completed = false;
let lastSeen = '';
let letters = '';
const chatPrefix = 'Create an acronym from the letters ';
while (!completed) {
  // image: the latest JPEG webcam frame (as in the capture sketch above)
  let detections = await aslDetector.getDetections(image, 300, 280, 'image/jpeg');
  if (detections[0] && detections[0].confidence > 0.8) {
    if (detections[0].className == 'V' && lastSeen == 'V') {
      // two Vs in a row: send the collected letters to the LLM
      let completion = await llm.chat(chatPrefix + letters);
      // display the completion as the response
      completed = true;
    } else {
      lastSeen = detections[0].className;
      letters = letters + detections[0].className;
    }
  }
}
For this demo, you must enable microphone and camera access. Please click here and grant permission in your browser.
Vision language models (VLMs) extend traditional LLMs by incorporating the ability to interpret images.
We can run small VLMs like Moondream (or larger VLMs!)
with Viam, in this case managed by the moondream-vision service from the Viam registry.
Choose one of the images, or capture one from your webcam.
Then, hold down the "Ask Question" button and ask a question about the photo like "Where is the dog?" aloud.
We will use the speech module from the Viam registry to convert your spoken question to text (STT),
then send that to the Moondream VLM for a response.
import { createRobotClient, VisionClient } from '@viamrobotics/sdk';
import { SpeechClient } from 'speech-service-api';
const client = await createRobotClient({...});
const vlmClassifier = new VisionClient(client, 'moondream-classifier');
const speech = new SpeechClient(client, 'speechio');
// capture audio, in this case via browser functionality
const capturedAudioArray = captureAudio();
// convert the spoken question to text with the speech module
const speechText = await speech.toText(capturedAudioArray, 'wav');
// pass the question to the VLM alongside the image
let classifications = await vlmClassifier.getClassifications(
  theImage, 300, 280, 'image/jpeg', 1, { question: speechText });
let vlmResponse = classifications[0].className;
// now we can print out the response from the VLM
The Viam platform can be integrated with PLCs (Programmable Logic Controllers) to open new opportunities for control, data collection, and monitoring in industrial settings.
In this demo, we are leveraging a Modbus PLC integration from the Viam registry.
This integration represents the PLC as a board component, mapping its digital and analog outputs as GPIO.
Try changing the state of the digital and analog outputs with the widgets below.
Because this PLC is being represented as a board component, you can securely control it with any of the Viam SDKs for control scripts, custom dashboards, and more.
Below is sample code that shows how this PLC is being controlled.
import { BoardClient } from '@viamrobotics/sdk';
// client is a connected machine client, created as in the earlier examples
const plcBoard = new BoardClient(client, 'PLCBoard');
// write a value to an analog output
await plcBoard.writeAnalog('AO_01', 1000);
// read the current value of a digital output
let val = await plcBoard.getGPIO('DO_01');
// set the digital output to high
await plcBoard.setGPIO('DO_01', true);
from viam.components.board import Board

plc_board = Board.from_robot(robot=robot, name="PLCBoard")
# write a value to an analog output
await plc_board.write_analog(pin="AO_01", value=1000)
digital_pin = await plc_board.gpio_pin_by_name(name="DO_01")
# read the current value of the digital output
val = await digital_pin.get()
# set the digital output to high
await digital_pin.set(high=True)
plcBoard, err := board.FromRobot(robot, "PLCBoard")
// write a value to an analog output
err = plcBoard.WriteAnalog(context.Background(), "AO_01", 1000, nil)
// get a handle to the digital output pin
digitalPin, err := plcBoard.GPIOPinByName("DO_01")
// read the current value of the digital output
val, err := digitalPin.Get(context.Background(), nil)
// set the digital output to high
err = digitalPin.Set(context.Background(), true, nil)
namespace vs = viam::sdk;
std::shared_ptr<vs::Board> plc_board = robot->resource_by_name<vs::Board>("PLCBoard");
// write a value to an analog output
plc_board->write_analog("AO_01", 1000);
// read the current value of a digital output
bool val = plc_board->get_gpio("DO_01");
// set the digital output to high
plc_board->set_gpio("DO_01", true);
Now that you've gotten hands-on with some Viam platform configurations and capabilities, we are excited to see how you might use Viam to accelerate your projects.
Viam is, at its core, an open-source platform, and you can get started with our secure cloud-based data and fleet management solutions for free (and continue with transparent, consumption-based pricing).
How will you use Viam to revolutionize your hardware projects?