Edge AI Manager API
This page describes the Edge AI Manager API endpoints that can be used to control the Edge AI Manager platform.
The recommended way of using the Edge AI Manager is through the UI, which can be accessed through a browser window on the device itself, or from any device with network access.
However, if the Edge AI Manager needs to be integrated into custom software, or if finer-grained control is desired, the Edge AI Manager also exposes an API. This page documents all available endpoints, which can be used to get status information, alter settings, or control execution of the Edge AI Manager.
All endpoints below are relative to the device IP address and the port controlled by the WebPort setting. For example, on the device itself, the URL http://localhost:8081/status can be called.
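As a minimal sketch, the endpoint URLs used throughout this page can be formed like this. The port 8081 matches the example above; on a real device it is whatever the WebPort setting specifies.

```python
def endpoint_url(host: str, port: int, path: str) -> str:
    """Build a full URL for an Edge AI Manager endpoint.

    The port should match the device's WebPort setting.
    """
    return f"http://{host}:{port}{path}"

# The status URL from the example above:
# endpoint_url("localhost", 8081, "/status") -> "http://localhost:8081/status"
```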
Calling this endpoint serves the Web UI. Through this UI, the Edge AI Manager can be configured and controlled.
The settings endpoint can be used in two ways.
If the GET method is used, the current settings will be returned as a JSON object. The response will be encoded as application/json.
If the POST method is used, the body of the request is expected to contain a settings object. The settings should be encoded as 'Content-Type': 'application/x-www-form-urlencoded'. Using this method will overwrite the local settings on the device.
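The two request shapes could be sketched in Python as below. The /settings path and the "WebPort" key in the example payload are assumptions based on this page, not a verified API surface; the encoding of the POST body follows the Content-Type stated above.

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8081"  # device address; port follows the WebPort setting

def get_settings() -> dict:
    """GET: fetch the current settings as a JSON object."""
    with urllib.request.urlopen(f"{BASE_URL}/settings") as resp:
        return json.load(resp)

def encode_settings(settings: dict) -> bytes:
    """Encode a settings object as application/x-www-form-urlencoded."""
    return urllib.parse.urlencode(settings).encode()

def post_settings(settings: dict) -> None:
    """POST: overwrite the local settings on the device."""
    req = urllib.request.Request(
        f"{BASE_URL}/settings",  # endpoint path assumed from this section
        data=encode_settings(settings),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
    urllib.request.urlopen(req)
```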
Calling this endpoint will cause the Edge AI Manager to start running inference. The Edge AI Manager will continue running inference on inputs until the stop endpoint is called.
If there is a problem with the current settings, such as no input being defined, inference will stop immediately.
Calling this endpoint when the Edge AI Manager is already running inference will result in the process being stopped and restarted.
Calling this endpoint will cause the Edge AI Manager to stop running inference.
Calling this endpoint when the Edge AI Manager is not currently running inference will have no effect.
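Starting and stopping can be sketched with a small helper like the one below. The paths /start and /stop are assumptions, since this page does not spell out the endpoint paths; the injectable opener lets the helper be exercised without a device.

```python
import urllib.request

BASE_URL = "http://localhost:8081"  # assumed device address and WebPort

def call_endpoint(path: str, opener=urllib.request.urlopen) -> int:
    """Call a control endpoint and return the HTTP status code.

    `opener` defaults to urllib but can be swapped out for testing.
    """
    with opener(f"{BASE_URL}{path}") as resp:
        return resp.status

# Against a real device:
#   call_endpoint("/start")  # start inference; restarts it if already running
#   call_endpoint("/stop")   # stop inference; no effect if not running
```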
This endpoint can be used to select a new model. Note that this endpoint does not automatically download the model; use the Download model endpoint /downloadmodel for this.
Setting the model requires a number of input parameters, which should be given as part of the query string. Each input parameter sets a corresponding setting; see Edge AI Manager settings.
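Since the parameters travel in the query string, building the request can be sketched as follows. The /setmodel path and the example parameter name are assumptions; the real parameter names correspond to the settings listed on the Edge AI Manager settings page.

```python
import urllib.parse

def setmodel_url(base_url: str, params: dict) -> str:
    """Append the model parameters to the set-model endpoint as a query string."""
    return f"{base_url}/setmodel?{urllib.parse.urlencode(params)}"

# setmodel_url("http://localhost:8081", {"AssignedModelCFID": "abc123"})
#   -> "http://localhost:8081/setmodel?AssignedModelCFID=abc123"
```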
This endpoint can be used to trigger downloading of the model.
This endpoint takes no input parameters; instead, it reads the current settings and determines whether the model should be downloaded.
If this causes an error, verify that the AssignedModelCFID and CdnLocation settings are valid, and that the device has cloud access.
This endpoint can be used to run a quick test with current settings.
The Edge AI Manager will fetch a single input and run inference with a single model. The inference results will be returned as an application/json response.
If the test does not succeed, verify that settings are valid.
This endpoint can be used to quickly check if the Edge AI Manager is currently running inference.
This endpoint returns a JSON object encoded as application/json, indicating whether inference is currently being performed.
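Parsing the response can be sketched as below. The exact shape of the JSON object is not shown on this page, so the "running" field name is an assumption.

```python
import json

def parse_running(body: bytes) -> bool:
    """Interpret the JSON body returned by the running-state endpoint.

    The "running" field name is assumed; adjust to the actual response shape.
    """
    return bool(json.loads(body).get("running", False))

# parse_running(b'{"running": true}')   -> True
# parse_running(b'{"running": false}')  -> False
```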
This endpoint returns a broader overview of the current status of the Edge AI Manager.
This endpoint can be used to fetch the last 50 lines of the log file.
The result is returned as a JSON array.
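A sketch of working with the returned array is shown below. It assumes each array element is one log line as a string, which this page does not confirm.

```python
import json

def error_lines(body: bytes) -> list:
    """Filter the returned log lines down to those that mention an error.

    Assumes the endpoint returns a JSON array of strings, one per log line.
    """
    return [line for line in json.loads(body) if "error" in line.lower()]

# error_lines(b'["model loaded", "ERROR: no input defined"]')
#   -> ["ERROR: no input defined"]
```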
This endpoint can be used to trigger inference remotely.
When the Edge AI Manager is performing inference and the ExternalTrigger setting is enabled, the platform will wait for an external signal before performing inference. Calling this endpoint will result in a single inference being performed, after which the Edge AI Manager will wait for the next signal.
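The signalling loop described above can be sketched as follows. The /trigger path is an assumption, and the opener and sleep parameters are injectable so the loop can be exercised without a device.

```python
import time
import urllib.request

def trigger_periodically(base_url: str, interval_s: float, count: int,
                         opener=urllib.request.urlopen, sleep=time.sleep):
    """Send `count` trigger signals, pausing `interval_s` seconds between them.

    Each signal causes a single inference, after which the Edge AI Manager
    waits for the next signal.
    """
    for i in range(count):
        if i:
            sleep(interval_s)
        opener(f"{base_url}/trigger").close()

# Against a real device, e.g. one inference per second for a minute:
#   trigger_periodically("http://localhost:8081", 1.0, 60)
```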
This endpoint can be used to update the Edge AI Manager.
Calling this endpoint will cause the Edge AI Manager to download the latest version of itself from the cloud and install it.
This endpoint does not check whether the Edge AI Manager is already up to date; this can be checked with the Extended status endpoint /statext.
Updating the Edge AI Manager will preserve all settings.
This endpoint can be used to remove all locally downloaded models.
This is useful when a corrupted model has been detected, or for freeing up disk space.
Note that the currently selected model will also be deleted, making inference unavailable until it is downloaded again. See the Download model endpoint /downloadmodel for downloading the currently selected model.
This endpoint can be used to set the Edge AI Manager configuration to default.
This is useful for troubleshooting when an unknown error is occurring, or when you simply want to start over.
This will not overwrite the licence, device ID, etc. Only the settings in the "module" section will be reset to defaults. See Edge AI Manager settings.