Consecutive video frames are usually similar to some extent: pixel values change only slightly from one frame to the next. A redundant inference on a near-identical frame can therefore be avoided by first comparing the current frame with the previous one, and only running another inference when there is a significant change.
Resource-wise, this on-change trigger offers several advantages:
- No additional device memory consumption
- A very fast execution (for example on the ICR-32xx series, it takes less than 30 ms)
- Less energy consumption
- A higher throughput (number of processed frames per second)
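The exact change-detection method used on the device is not documented here; a minimal sketch of such a check, using the mean absolute pixel difference between frames (the function name and threshold value are illustrative assumptions), could look like:

```python
import numpy as np

def frame_changed(prev: np.ndarray, curr: np.ndarray, threshold: float = 10.0) -> bool:
    """Return True when the mean absolute pixel difference exceeds the threshold.

    Cast to a signed type first so the subtraction cannot wrap around.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(diff.mean()) > threshold

# Only run a new inference when the scene changed significantly.
prev = np.zeros((4, 4), dtype=np.uint8)             # previous (static) frame
curr_same = prev.copy()                             # nearly identical frame
curr_moved = np.full((4, 4), 200, dtype=np.uint8)   # large scene change

print(frame_changed(prev, curr_same))   # False: skip inference
print(frame_changed(prev, curr_moved))  # True: run inference
```

Because the check is a single array subtraction and mean, it is far cheaper than a full model inference, which is why it adds essentially no memory or latency overhead.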
You can control both the output frequency (how often outputs are sent to the application platform) and the inference frequency.
- The output frequency can be set from "as quickly as possible" (essentially every time an image is fed to the AI model) to an aggregate once a minute. When aggregated sending is selected, the data is grouped per inference.
- The inference frequency can be set to as fast as possible, or to once every x milliseconds. Note that it is also possible to trigger an inference upon an external signal; this is currently only available through the advanced settings.
A number of models provide their own specific output options. For example, models that support an alarm add the fields below to the AI manager:
Here you can set the AI manager to send output only when a specific number of objects is detected or when an alarm is raised. If desired, the alarm can be accompanied by the image that triggered it. It is also possible to receive an email notification whenever an alarm is raised.
For model development it is often useful to collect training images within the actual context of use. The AI manager can be used to flexibly grab images whenever a model's output (class membership) is uncertain. This option appears contextually for supported models and is easy to toggle on or off.
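One common way to operationalise "uncertain" is margin sampling: flag a frame when the model's top two class scores are close together. The AI manager's actual criterion is not specified here; the function below is an illustrative sketch of that idea (the 0.2 margin is an assumed default):

```python
def is_uncertain(class_scores: dict[str, float], margin: float = 0.2) -> bool:
    """Flag a frame for training capture when the top two class scores
    lie within `margin` of each other (the model cannot separate them)."""
    top, second = sorted(class_scores.values(), reverse=True)[:2]
    return top - second < margin

print(is_uncertain({"cat": 0.55, "dog": 0.45}))  # True:  scores are close, save the image
print(is_uncertain({"cat": 0.95, "dog": 0.05}))  # False: confident prediction, skip
```

Frames flagged this way are exactly the ones most valuable to label and feed back into the next training round.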