7. Other Network Optix Plugin Settings

Plugin Settings

The Network Optix plugin contains a number of settings that control its behaviour. These settings work in tandem with the Edge AI Manager settings to support a wide range of use cases.
Navigate to the camera settings and enable the Scailable AI plugin. A number of settings should appear.

Model Routing

It is possible to have multiple assigned models in the Edge AI Manager, see AssignedModels. In this case, it might be desirable to choose which models should be active on which camera. For example, an internal surveillance camera might only detect people, while an external camera detects cars, and a third camera detects both. All of this can be managed from the Network Optix plugin.
Example of model routing
By default, only the first model is selected. This means that only the first assigned model will be used to run inference on each frame received from Network Optix. Any combination of models can be selected for inference. The order of models in this list corresponds to the order of the assigned models in the AssignedModels array setting.
Currently, up to four models are supported through the Network Optix plugin. Please contact us if more models are desired.
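The routing can be pictured as a per-camera selection over the AssignedModels array, as in the Python sketch below. The names and data structures are illustrative assumptions, not the plugin's actual internals.

    # Hypothetical sketch of model routing; names and structures are
    # illustrative assumptions, not the plugin's actual internals.
    assigned_models = ["person-detector", "car-detector", "smoke-classifier"]

    # Per-camera selection; indices correspond to the AssignedModels order.
    camera_routing = {
        "internal-cam": [0],     # people only
        "external-cam": [1],     # cars only
        "parking-cam": [0, 1],   # both
    }

    def models_for_frame(camera_id):
        """Return the assigned models selected for a camera's frames.
        Defaults to the first model, matching the plugin's default."""
        return [assigned_models[i] for i in camera_routing.get(camera_id, [0])]

    print(models_for_frame("parking-cam"))  # ['person-detector', 'car-detector']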

General Settings

In this section, some general settings are listed which can be used in a wide array of cases.
Extract object counts from bounding boxes: When this is enabled, an Objects Counted event will be generated from the received object bounding boxes. This enables object counting in cases where the model itself has no such output, as illustrated below.
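Conceptually, the count can be derived by grouping the received bounding boxes by type, roughly as in this illustrative Python sketch (the box structure is an assumption):

    from collections import Counter

    # Hypothetical bounding boxes as received for a single frame.
    boxes = [
        {"type": "person", "x": 0.1, "y": 0.2, "w": 0.1, "h": 0.3},
        {"type": "person", "x": 0.5, "y": 0.2, "w": 0.1, "h": 0.3},
        {"type": "car", "x": 0.7, "y": 0.6, "w": 0.2, "h": 0.2},
    ]

    # Group boxes by type to produce the per-type counts for the event.
    counts = Counter(box["type"] for box in boxes)
    print(dict(counts))  # {'person': 2, 'car': 1}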

Feature Extraction

In some use cases, it is desirable to run additional inference on certain objects. For example, suppose you trained a model which can detect whether a person is smoking. This can then be used in tandem with a person detection model. The person detection model would be used to detect people from, say, a surveillance camera, and generate bounding boxes for them. If feature extraction is enabled, these bounding boxes will be used to extract parts of the larger image, which will then be sent back to the Edge AI Manager, so that the smoking detection model can be run on each of the detected people.
In this example, make sure that the person detection model is selected in the Model Routing section.
Feature Extraction Settings
Enable Feature Extraction: Whether feature extraction is enabled. When enabled, bounding boxes will be hidden and instead used to extract parts of the image, which are sent back to the Edge AI Manager for inference.
Feature Extraction Model: Which model to use for the extracted features. This model will be used for inference on each extracted bounding box. In the example, this would be set to the index of the smoking detection model.
Feature Extraction Type: The type of object to extract from the full image. This is useful when a detection model produces multiple types of bounding boxes. This setting is matched against bounding box types, and boxes whose type matches will be sent to the Edge AI Manager for further inference; see the sketch below.
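To make the flow concrete, the following Python sketch shows the general idea: bounding boxes whose type matches the Feature Extraction Type are cropped from the full frame and passed to a second model. The data structures and the run_inference helper are assumptions for illustration, not the plugin's actual implementation.

    import numpy as np

    # Assumed frame and bounding boxes; coordinates are relative (0 to 1).
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)
    boxes = [{"type": "person", "x": 0.40, "y": 0.30, "w": 0.10, "h": 0.40}]

    EXTRACTION_TYPE = "person"  # the Feature Extraction Type setting

    def run_inference(crop):
        """Stand-in for the Edge AI Manager call on the model selected
        by the Feature Extraction Model index."""
        return {"smoking": False}  # placeholder result

    height, width = frame.shape[:2]
    for box in boxes:
        if box["type"] != EXTRACTION_TYPE:
            continue  # only matching types are sent for further inference
        x0, y0 = int(box["x"] * width), int(box["y"] * height)
        x1, y1 = x0 + int(box["w"] * width), y0 + int(box["h"] * height)
        crop = frame[y0:y1, x0:x1]  # extract the region of interest
        print(box["type"], run_inference(crop))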

Loitering Detection

The Network Optix plugin has the functionality to detect loitering. This is a tool to detect whether a certain person or object remains in view for longer than a desired amount of time.
To enable this, a number of settings need to be in place. Feature Extraction needs to be enabled and routed to a ReID-type model, or a model with similar output. Such a model allows the plugin to keep track of the same person or object, even if they leave the frame and return.
For example, to detect whether a person is loitering in a camera's view, assign and route to a person detection model in the Edge AI Manager and plugin settings. This will generate bounding boxes for persons detected in the frame. Enable feature extraction for the 'person' type, or whichever type the person detection model generates. Set the Feature Extraction Model to the index at which a ReID-type model, or a model with similar output, is assigned in the AssignedModels array. The plugin expects an output named either "Identity" or "embedding", and it expects this output to be a vector of any length which uniquely identifies the subject.
When all the settings are in place, the plugin can uniquely track a subject in the frame, even if they exit and return to view. Once the plugin detects that a subject is loitering, a bounding box of type Loiterer will be generated to clearly show where this is happening.
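The re-identification step can be pictured as comparing each new embedding against those of known subjects, for example with cosine similarity. The sketch below is a general illustration of such matching, with an assumed threshold; it is not the plugin's actual algorithm.

    import numpy as np

    SIMILARITY_THRESHOLD = 0.8  # assumed cut-off for "same subject"

    known_subjects = {}  # subject id -> last seen embedding (normalised)

    def match_subject(embedding):
        """Match an "Identity"/"embedding" output vector to a known
        subject, or register it as a new one."""
        embedding = np.asarray(embedding, dtype=float)
        embedding = embedding / np.linalg.norm(embedding)
        for subject_id, known in known_subjects.items():
            if float(np.dot(embedding, known)) >= SIMILARITY_THRESHOLD:
                known_subjects[subject_id] = embedding  # refresh the reference
                return subject_id
        new_id = len(known_subjects)
        known_subjects[new_id] = embedding
        return new_id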
Loitering Detection Settings
Loiter Detection Time Threshold: A subject detected for at least this amount of time, in seconds, will be considered loitering.
Loiter Detection Forget Threshold: When a subject has not been detected for at least this amount of time, in seconds, it will be forgotten. If the same object returns after this amount of time, it will be considered a new detection; the sketch below illustrates how the two thresholds interact.
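This Python sketch is illustrative only; the timing structures are assumptions, not the plugin's internals.

    import time

    LOITER_TIME_THRESHOLD = 60.0  # seconds before a subject counts as loitering
    FORGET_THRESHOLD = 30.0       # seconds of absence before a subject is forgotten

    tracks = {}  # subject id -> {"first_seen": t, "last_seen": t}

    def update_track(subject_id, now=None):
        """Update a subject's timestamps and report whether it is loitering."""
        now = time.monotonic() if now is None else now
        track = tracks.get(subject_id)
        if track is None or now - track["last_seen"] > FORGET_THRESHOLD:
            # Unknown, or absent for too long: treat as a new detection.
            track = {"first_seen": now, "last_seen": now}
            tracks[subject_id] = track
        track["last_seen"] = now
        return now - track["first_seen"] >= LOITER_TIME_THRESHOLD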

Illegal Dumping Detection

When an illegal dumping model is assigned to the Edge AI Manager, illegal dumping detection becomes available.
Illegal dumping detection works by employing an AI model which can detect objects in a scene regardless of lighting conditions. The detected objects are then compared to a reference of detected objects. Detected objects which are not in the reference are tracked and timed. If these objects persist for longer than the threshold time, they are flagged as anomalies.
Anomalies are presented as standard bounding boxes with the 'anomaly' type. It is therefore advised to create an event in NX to raise an alarm when an 'anomaly' type object is detected.
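In outline, the anomaly logic could be summarised as in the Python sketch below. The object_key heuristic, the threshold value, and the data structures are assumptions for illustration only.

    import time

    ANOMALY_TIME_THRESHOLD = 120.0  # assumed seconds before an object is flagged

    reference_objects = set()  # object keys detected during the reference run
    candidates = {}            # object key -> first time seen outside the reference

    def object_key(box, grid=20):
        """Coarse key combining type and grid position, so a static object
        maps to the same key across frames (an assumed heuristic)."""
        return (box["type"], int(box["x"] * grid), int(box["y"] * grid))

    def check_anomalies(boxes, now=None):
        now = time.monotonic() if now is None else now
        anomalies = []
        for box in boxes:
            key = object_key(box)
            if key in reference_objects:
                continue  # part of the known background
            first_seen = candidates.setdefault(key, now)
            if now - first_seen >= ANOMALY_TIME_THRESHOLD:
                anomalies.append({**box, "type": "anomaly"})
        return anomalies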
By default, a reference image will be created from the first frame the Scailable Edge AI Manager receives when starting up for the first time. This reference image will then be saved and reused in future runs. It is also possible to manually create a reference image.
When the frame is empty of temporary objects and a good representation of the background is displayed, use the "Trigger Reference Run" button to set a new reference. Once the button is pressed, the following frame(s) will be used as the new reference.