
Input configuration

At this point in the AI manager documentation we assume your AI manager is set up and that you have access to at least one camera that you would like to configure. For troubleshooting your camera, please see our preliminaries. Do make sure you can access your camera stream/images before proceeding.
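If you want to double-check that the stream is reachable outside of the AI manager, a small script such as the sketch below can try to read a single frame. This uses OpenCV and a hypothetical RTSP URL; substitute the URL of your own camera.

```python
# Minimal connectivity check, independent of the AI manager, using OpenCV.
# The URL below is a hypothetical example; replace it with your camera's URL.
import cv2

STREAM_URL = "rtsp://192.168.1.10:554/stream1"  # hypothetical camera address

cap = cv2.VideoCapture(STREAM_URL)
ok, frame = cap.read()
cap.release()

if ok:
    print(f"Stream reachable; read a frame of {frame.shape[1]}x{frame.shape[0]} pixels")
else:
    print("Could not read a frame; check the URL, network, and camera credentials")
```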

Basic camera setup

The AI manager allows you to configure between 1 and 8 cameras (depending on the target device), which serve as input to the AI model. Please see our list of supported cameras to understand which cameras to use in which context.
Once you have a camera set up and connected (either physically to the edge device or available within the network), it is time to set up the camera configuration in the AI manager.
The core is simple:
  • Select the input URL: if you are using an IP camera as input, enter the URL of the camera in this field. For local cameras, point to the local source.
  • Select the input type: the type of input (single frames, an RTSP or MJPEG stream, or otherwise) needs to match the camera source. For more information regarding these sources, please see our preliminaries.
If you have an IP camera, check the connection parameters in its documentation. For many commonly available cameras the connection details can be found in the iSpy database.
Once you have provided the right URL and the input source type, you should see a preview from the camera (a single image taken from the camera). If you don't see a preview, your camera is not properly configured and you may want to check the cameras page for troubleshooting.
If you want to add more cameras simply press the "Add camera" button and repeat the process.
Note that in the inference process cameras are visited round robin, i.e., we take a frame from each camera in turn and feed it to the AI model. In the output you will be able to see which camera source produced the result.
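Conceptually, this round-robin behaviour corresponds to the sketch below. It is only an illustration of the visiting order (the AI manager implements this internally); the camera names and URLs are made up.

```python
# Conceptual sketch of round-robin frame grabbing over multiple cameras.
# The AI manager does this internally; the camera URLs below are hypothetical.
import itertools
import cv2

camera_urls = {
    "entrance": "rtsp://192.168.1.10:554/stream1",
    "parking":  "rtsp://192.168.1.11:554/stream1",
}
captures = {name: cv2.VideoCapture(url) for name, url in camera_urls.items()}

# Visit the cameras in turn; here we stop after 10 frames in total.
for name in itertools.islice(itertools.cycle(captures), 10):
    ok, frame = captures[name].read()     # one frame from the current camera
    if not ok:
        continue                          # skip cameras that are unavailable
    # result = model(frame)               # the frame would be fed to the AI model
    print(f"frame from '{name}' would be sent to the model")

for cap in captures.values():
    cap.release()
```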

Additional options

Although in many use cases the above is all you need to get started, we detail a number of additional options below.

Naming your camera

You can provide a name for each camera that you add. This name will be included in the data that is logged.

Accessing secure cameras

You can access secured cameras by providing a username and password for camera access. This works for most standard IP cameras that are secured with a username/password, and we recommend setting this up.
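The AI manager takes care of authentication once the username and password are filled in. If you ever want to test access to a secured stream manually, credentials are typically embedded in the URL as sketched below (hypothetical host and path); note that special characters in the password must be percent-encoded.

```python
# Building an authenticated RTSP URL for a manual test (hypothetical camera).
# The AI manager does this for you when you fill in the username and password.
from urllib.parse import quote

username = "viewer"
password = "p@ss/word!"                  # special characters need percent-encoding
host = "192.168.1.10:554"
path = "/stream1"

url = f"rtsp://{quote(username, safe='')}:{quote(password, safe='')}@{host}{path}"
print(url)  # rtsp://viewer:p%40ss%2Fword%21@192.168.1.10:554/stream1
```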

Providing an input mask (OPTIONAL)

Some input configurations depend on the selected model. For library models that support masking, a contextual option will automatically show up in the input settings:
Clicking the edit mask button shows a preview of your image, including the current field of view as it is passed into the model. You can then draw a region on the image and, when satisfied, save the area for use.
If the area of interest setting is set to "include mask", the model output will only concern the area within the region and all other parts of the input will be ignored. When set to "exclude mask", this is inverted and the model output concerns only the area outside of the mask.
Input masks are only available for specific AI models. In the model library this is indicated by the "within a region" keyword in the model name. You can find an overview of our contextual input and output options here.
How does masking work? An object is included if its bottom-middle point (the X marks in the image below) lies within the drawn region of interest.
In the example below (the mask is the red area), object A is kept but object B is ignored.
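For intuition, this boils down to a point-in-polygon test on the bottom-middle point of each detected bounding box, roughly as in the sketch below. This is a simplified illustration, not the AI manager's actual implementation, and the coordinates are made up; the mode argument mirrors the "include mask" / "exclude mask" setting described above.

```python
# Simplified illustration of how an input mask filters detections: an object is
# kept (or dropped) based on whether the bottom-middle point of its bounding box
# lies inside the drawn polygon. Not the AI manager's actual code.

def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def keep_detection(bbox, mask_polygon, mode="include"):
    """bbox = (x_min, y_min, x_max, y_max); mode is 'include' or 'exclude'."""
    bottom_middle = ((bbox[0] + bbox[2]) / 2, bbox[3])
    inside = point_in_polygon(*bottom_middle, mask_polygon)
    return inside if mode == "include" else not inside

mask = [(100, 100), (500, 100), (500, 400), (100, 400)]  # example region of interest
print(keep_detection((150, 80, 250, 300), mask))   # True: bottom-middle point inside
print(keep_detection((600, 80, 700, 300), mask))   # False: bottom-middle point outside
```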

Providing a line crossing (OPTIONAL)

Similar to input masks, models that support line crossing counts will have an option to define the crossing lines and sensitivity values for line crossing.
For line crossing to work, some extra parameters need to be set. These settings relate to the sensitivity, the threshold, and the number of steps and the distance over which objects need to be tracked.
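The core idea behind a line crossing count is sketched below: an object is counted when its tracked point moves from one side of the line to the other between consecutive tracked positions. This is a simplified illustration, not the AI manager's implementation; the sensitivity, threshold, and tracking parameters mentioned above control how reliably such a track is established in practice, and the coordinates are made up.

```python
# Simplified illustration of a line-crossing check: an object is counted when
# its tracked point switches sides of the line between two consecutive steps.
# Not the AI manager's actual implementation.

def side_of_line(point, line_start, line_end):
    """Sign of the cross product: positive on one side of the line, negative on the other."""
    (px, py), (ax, ay), (bx, by) = point, line_start, line_end
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def count_crossings(track, line_start, line_end):
    """Count how often a tracked point (a list of (x, y) positions) crosses the line."""
    crossings = 0
    for prev, curr in zip(track, track[1:]):
        s1 = side_of_line(prev, line_start, line_end)
        s2 = side_of_line(curr, line_start, line_end)
        if s1 * s2 < 0:          # strict sign change: the point switched sides
            crossings += 1
    return crossings

line = ((0, 200), (640, 200))                     # horizontal counting line
track = [(320, 150), (322, 180), (325, 230)]      # object moving downwards
print(count_crossings(track, *line))              # 1
```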

Advanced options

It is possible to configure the AI manager to crop individual images before feeding them to the model. This option is currently available only through the advanced configuration settings.
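Conceptually, such a crop simply cuts a region out of each frame before it reaches the model, as in the minimal sketch below (hypothetical coordinates; in practice the crop region is set in the advanced configuration settings, not in code).

```python
# Conceptual illustration of cropping a frame before inference.
# The coordinates below are made up; the actual crop region is configured
# through the advanced configuration settings.
import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for a camera frame

x_min, y_min, x_max, y_max = 400, 200, 1200, 800   # hypothetical crop region
cropped = frame[y_min:y_max, x_min:x_max]          # rows are y, columns are x

print(frame.shape, "->", cropped.shape)            # (1080, 1920, 3) -> (600, 800, 3)
```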
Once you have configured your cameras, you can continue with configuring the inference and output options.