As we currently only publicly support Vision solutions in the AI manager, the input to the AI model is always an image (or image stream) coming from a camera. Here we describe which cameras we support, and how to troubleshoot your camera setup before configuring the AI manager.
In general we support any IP camera that supports one of the following protocols:
- A static image format such as JPG, PNG, TGA, BMP, or GIF. Cameras that support these formats often allow you to navigate directly to the camera's location (its IP address) to preview, in a web browser, the latest image coming from the camera. Within the AI manager you can simply provide the static image address in the "Input URL" field.
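The same preview check can be done from the command line. Note that the IP address and snapshot path below are placeholders; substitute your camera's actual static image address:

```shell
# Hypothetical camera IP and snapshot path; substitute your camera's values.
CAM_IP="192.168.1.64"
INPUT_URL="http://${CAM_IP}/snapshot.jpg"
echo "Input URL: ${INPUT_URL}"

# Uncomment on the edge device to verify the endpoint actually serves an image:
# curl -sSI "${INPUT_URL}" | grep -i '^content-type'
```

If the `Content-Type` header reports an image type such as `image/jpeg`, the same URL should work in the "Input URL" field.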
Next to the above common IP camera formats, we also support video4linux (V4L2) input, which you should be able to use for any USB cam connected to your device, and GStreamer shared memory (SHM) input, which can be used for most industrial cameras. These are a bit more challenging to set up, and hence each has its own separate page in these docs.
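For the V4L2 case, a quick sanity check is to confirm that GStreamer can read from the device before involving the AI manager. The device path below is a common default but is an assumption; your system may enumerate cameras differently:

```shell
# Hypothetical device node; `v4l2-ctl --list-devices` shows the ones present.
DEVICE="/dev/video0"

# A minimal GStreamer preview pipeline for a USB cam.
PIPELINE="v4l2src device=${DEVICE} ! videoconvert ! autovideosink"
echo "gst-launch-1.0 ${PIPELINE}"

# Uncomment to run the pipeline for real on the edge device:
# gst-launch-1.0 v4l2src device="${DEVICE}" ! videoconvert ! autovideosink
```

If the preview pipeline shows live video, the same device should be usable as V4L2 input.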
Note that for many AI models the camera stream is ingested by the model frame-by-frame. Thus, if the IP camera that you are using supports grabbing individual frames (i.e., by directly pointing to the latest JPG, PNG, TGA, BMP, or GIF), this often results in the fastest solution: the edge device can save itself the trouble of having to decode the image stream into individual images. This is especially important for highly resource-constrained devices.
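Frame-by-frame ingestion amounts to a simple polling loop rather than continuous stream decoding; a sketch, with a placeholder snapshot URL:

```shell
# Hypothetical snapshot URL; use your camera's static-image address.
SNAPSHOT_URL="http://192.168.1.64/snapshot.jpg"

# Grab three individual frames, roughly one per second; no stream decoding needed.
for i in 1 2 3; do
  echo "fetching frame ${i} from ${SNAPSHOT_URL}"
  # curl -s -o "frame_${i}.jpg" "${SNAPSHOT_URL}"   # uncomment on the edge device
  # sleep 1
done
```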
Here are a number of tips for configuring your camera:
- For any IP cam you should be able to ping your camera from the edge device; if that doesn't work, the edge device cannot reach the camera.
- On the edge device, if possible, check whether you can connect using the GStreamer or FFmpeg utilities (on Linux).
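The two checks above can look as follows. The IP address and stream path are placeholders, and whether your camera exposes an RTSP stream at that path is an assumption to verify against its manual:

```shell
# Hypothetical camera address and stream path; substitute your own values.
CAM_IP="192.168.1.64"
STREAM_URL="rtsp://${CAM_IP}:554/stream1"

# 1. Reachability: can the edge device see the camera at all?
# ping -c 3 "${CAM_IP}"

# 2. Probe the stream with FFmpeg (prints codec/resolution on success):
# ffprobe -rtsp_transport tcp -i "${STREAM_URL}"

# 3. Or with GStreamer (reads the stream and discards the frames):
# gst-launch-1.0 rtspsrc location="${STREAM_URL}" ! fakesink

echo "checks prepared for ${STREAM_URL}"
```

If the ping succeeds but the probe fails, the problem is usually the stream URL, the credentials, or the protocol, rather than the network.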