On-device demos
The Scailable Edge AI Manager includes a basic visualisation engine that lets you view real-time results of the AI detection. By default it contains a universal counter, which works with many different Scailable models, and an emotion visualisation, which works with the Scailable emotion recognition model.
The universal counter displays bounding boxes and counts objects of certain classes.
It works with models that locate people, vehicles and/or faces and return bounding boxes for them, including the following models provided by Scailable:
- Car location model
- Face locator
- People and vehicle alarm
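The counting step above can be sketched in a few lines. This is a minimal illustration, not the actual Scailable implementation: it assumes detections arrive as a list of dicts with a `"class"` label and a `"bbox"` list, which is a hypothetical shape chosen for the example.

```python
from collections import Counter

def count_objects(detections):
    """Count detected objects per class.

    `detections` is assumed to be a list of dicts with a "class"
    label and a "bbox" [x1, y1, x2, y2]; the real Scailable output
    format may differ.
    """
    # Skip all-zero boxes, which indicate no detection for that entry.
    return Counter(d["class"] for d in detections if any(d["bbox"]))

example = [
    {"class": "car", "bbox": [10, 20, 110, 90]},
    {"class": "car", "bbox": [150, 30, 260, 95]},
    {"class": "person", "bbox": [0, 0, 0, 0]},  # all zeros: no detection
]
print(count_objects(example))  # Counter({'car': 2})
```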
The universal counter works with any standard input and with multiple cameras.
For a demo, select a suitable image; the traffic, faces, or crowd sample images will work for many models. The best input depends on the chosen model.
The output frequency must be set to "Post each inference separately".
The inference frequency matters less, but the demo works best when inferences run faster than once every 50 ms; as fast as possible works well.
Start the inference engine as usual, by clicking the "Run" button on the "Run" tab.
To access the visualisation in a web browser, click the "View live visualisation" button in the Edge AI Manager interface.
Alternatively, the demo is available at the same location as the Edge AI Manager, but with /demo/ appended to the URL. So if your Edge AI Manager is accessible at http://localhost:8081/, the default visualisation will be available at http://localhost:8081/demo/.
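The URL rule above (base address plus /demo/) can be expressed with Python's standard `urljoin`; the helper name `demo_url` is just for this sketch.

```python
from urllib.parse import urljoin

def demo_url(base_url):
    """Derive the visualisation URL by appending demo/ to the
    Edge AI Manager base URL, as described in the documentation."""
    return urljoin(base_url, "demo/")

print(demo_url("http://localhost:8081/"))  # http://localhost:8081/demo/
```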
The visualisation shows the latest image and, depending on the model used, a counter and overlaid bounding boxes. It updates a few times per second, depending on the model.
Another included visualisation demo is the emotion visualisation.
At the moment, the emotion visualisation works with one model provided by Scailable: the "Emotion detection" model.
The model detects certain emotions on a single face, so the input should ideally show a centered face.
The emotion visualisation works with any standard input and with multiple cameras.
Select a suitable image; the sample images labeled "Emotions" will work. You can also use a webcam source pointed at a person's face.
The output frequency can be set the same as for the universal counter.
Start the inference engine as usual, by clicking the "Run" button on the "Run" tab.
The example for the emotion detection is available in your browser at the path /universal-counter/emotions.html. So if your Edge AI Manager is accessible at https://localhost:8443/, the emotion visualisation will be available at https://localhost:8443/universal-counter/emotions.html.
The visualisation needs a short time to start, usually about a second, depending on your computer.
If the visualisation shows no results, the model might be incompatible or the input might be unusable for the model.
If you test the model in the AI Manager, the output should show a bboxes part in the JSON output. If the bounding boxes are all zeroes, the input is not recognised as one of the classes. If there is no bounding boxes output at all, you might try another model from your library.
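The checks above can be sketched as a small helper. Only the "bboxes" key comes from this documentation; the exact structure of the surrounding JSON, and the helper name `has_detections`, are assumptions for illustration.

```python
import json

def has_detections(inference_json):
    """Classify a test inference result.

    Assumes the output is a JSON object that may contain a "bboxes"
    list of [x1, y1, x2, y2] boxes; the real format may differ.
    """
    output = json.loads(inference_json)
    bboxes = output.get("bboxes")
    if bboxes is None:
        return "no-bboxes"   # model does not return bounding boxes
    if not any(any(box) for box in bboxes):
        return "all-zero"    # input not recognised as one of the classes
    return "ok"

print(has_detections('{"bboxes": [[12, 34, 56, 78]]}'))  # ok
print(has_detections('{"bboxes": [[0, 0, 0, 0]]}'))      # all-zero
print(has_detections('{"scores": [0.9]}'))               # no-bboxes
```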
Check whether the configuration is saved correctly. If it is not, you need to stop the inference engine and update the settings.