Edge AI Manager API

This page describes the Edge AI Manager endpoints that can be used to control the AI Manager platform.
The recommended way of using the Edge AI Manager is through the UI, which can be accessed through a browser window on the device, or from any device with network access.
However, to integrate the Edge AI Manager into custom software, or to gain more fine-grained control, the Edge AI Manager provides an HTTP API. This page documents all the available endpoints, which can be used to get status information, alter settings, or control execution of the Edge AI Manager.
All endpoints below are relative to the device IP address and the port controlled by the WebPort setting. For example, on the device itself, the URL http://localhost:8081/status can be called.
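
For example, a minimal Python sketch using the third-party requests library (this assumes the default WebPort value of 8081 shown above):

import requests

# The base URL is the device IP address plus the port from the WebPort setting.
# "localhost" works when calling from the device itself; use the device IP
# when calling over the network.
BASE_URL = "http://localhost:8081"

response = requests.get(f"{BASE_URL}/status")
print(response.json())  # e.g. {"status": 0}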

Root endpoint /

Calling this endpoint serves the Web UI. Through this UI, the Edge AI Manager can be configured and controlled.

Settings endpoint /settings

The settings endpoint can be used in two ways.
If the GET method is used, the current settings are returned as a JSON object. The response is encoded with Content-Type: application/json.
If the POST method is used, the body of the request is expected to contain a settings object, encoded with Content-Type: application/x-www-form-urlencoded. Using this method overwrites the local settings on the device.
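
A hedged Python sketch of both usages. The available settings keys depend on the device configuration, so the key modified below is only an illustration; the sketch also assumes that top-level keys of the settings object map directly to form fields:

import requests

BASE_URL = "http://localhost:8081"  # assumes the default WebPort

# GET: fetch the current settings as a JSON object.
settings = requests.get(f"{BASE_URL}/settings").json()

# Change a setting locally (key and value shown are illustrative).
settings["ExternalTrigger"] = 1

# POST: send the settings back form-urlencoded, overwriting the device settings.
requests.post(f"{BASE_URL}/settings", data=settings)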

Start endpoint /start

Calling this endpoint causes the Edge AI Manager to start running inference. The Edge AI Manager will continue running inference on inputs until the /stop endpoint is called.
If there is a problem with the current settings, such as no input being defined, inference will stop immediately.
Calling this endpoint while the Edge AI Manager is already running inference will stop and restart the process.

Stop endpoint /stop

Calling this endpoint will cause the Edge AI Manager to stop running inference.
Calling this endpoint when the Edge AI Manager is not currently running inference will have no effect.
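
A Python sketch of the full start/stop lifecycle; the /status check after starting confirms that inference did not immediately stop due to a settings problem (this assumes these endpoints accept GET requests):

import time
import requests

BASE_URL = "http://localhost:8081"

requests.get(f"{BASE_URL}/start")  # starts (or restarts) inference
time.sleep(1)  # give the platform a moment to come up

if requests.get(f"{BASE_URL}/status").json().get("status") == 3:
    print("Inference is running.")
else:
    print("Inference stopped immediately; verify the current settings.")

requests.get(f"{BASE_URL}/stop")  # safe to call even when not running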

Test endpoint /test

This endpoint can be used to run a quick test with current settings.
The Edge AI Manager will fetch a single input and run inference with a single model. The inference results will be returned as an application/json encoded body.
If the test does not succeed, verify that settings are valid.
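
For example, a small Python sketch (assuming a GET request) that runs the test and prints the returned inference results:

import requests

BASE_URL = "http://localhost:8081"

response = requests.get(f"{BASE_URL}/test")
if response.ok:
    print(response.json())  # inference results for a single input
else:
    print("Test failed; verify that the current settings are valid.")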

Status endpoint /status

This endpoint can be used to quickly check if the Edge AI Manager is currently running inference.
This endpoint returns a JSON object encoded as application/json. If inference is currently being performed:
{"status":3}
If inference is not being performed:
{"status":0}

Log endpoint /log

This endpoint can be used to fetch the last 50 lines of the log file.
The result is returned as a JSON array in an application/json encoded message.
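
A short Python sketch that fetches and prints the log (this assumes each array element is one log line as a string):

import requests

BASE_URL = "http://localhost:8081"

for line in requests.get(f"{BASE_URL}/log").json():
    print(line)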

Trigger Inference endpoint /triggerInference

This endpoint can be used to trigger inference remotely.
When the Edge AI Manager is performing inference, and the ExternalTrigger setting is set, the platform will wait for an external signal before performing inference. Calling this endpoint will result in a single inference being performed, after which the Edge AI Manager will wait for the next signal.
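
For example, a Python sketch that triggers one inference per second while the ExternalTrigger setting is active (assuming a GET request):

import time
import requests

BASE_URL = "http://localhost:8081"

# Each call results in a single inference being performed, after which
# the Edge AI Manager waits for the next signal.
for _ in range(10):
    requests.get(f"{BASE_URL}/triggerInference")
    time.sleep(1)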

Update Manager endpoint /update

This endpoint can be used to update the Edge AI Manager.
Calling this endpoint will result in the Edge AI Manager downloading the latest version of the Edge AI Manager from the cloud and installing it.
This will not check whether the Edge AI Manager is already up to date; that check should be done beforehand with the extended status endpoint /statext.
Updating the Edge AI Manager will preserve all settings.
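
A hedged Python sketch that checks for updates before installing. The shape of the /statext response is not documented on this page, so the field name used below is a placeholder assumption:

import requests

BASE_URL = "http://localhost:8081"

# "updateAvailable" is a hypothetical field used for illustration only;
# consult the extended status endpoint documentation for the real response.
statext = requests.get(f"{BASE_URL}/statext").json()
if statext.get("updateAvailable"):
    requests.get(f"{BASE_URL}/update")  # downloads and installs the latest version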

Clear Model Cache endpoint /clearModelCache

This endpoint can be used to remove all locally downloaded models.
This is useful when a corrupted model has been detected, or for freeing up disk space.
This will result in the currently selected model being deleted, which makes performing inference unavailable. See the download model endpoint /downloadModel for downloading the currently selected model again.
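
A small Python sketch that clears the cache and immediately re-downloads the currently selected model so inference stays available (assuming GET requests):

import requests

BASE_URL = "http://localhost:8081"

requests.get(f"{BASE_URL}/clearModelCache")  # removes all locally downloaded models
requests.get(f"{BASE_URL}/downloadModel")    # re-download the currently selected model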

Reset Settings endpoint /resetSettings

This endpoint can be used to set the Edge AI Manager configuration to default.
This is useful for troubleshooting when an unknown error occurs, or when it is simply desired to start over.
This will not overwrite the licence, device ID, etc. Only the settings in the "module" section will be reset to defaults. See Edge AI Manager settings.
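
A minimal Python sketch (assuming a GET request):

import requests

BASE_URL = "http://localhost:8081"

# Resets only the settings in the "module" section to their defaults;
# the licence, device ID, etc. are preserved.
requests.get(f"{BASE_URL}/resetSettings")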

Tensor Input endpoint /jsonTensorInput

The tensor input endpoint allows generic input tensors to be sent to the runtime from outside sources.
This endpoint expects a POST request with the JSON tensor data in the body. The JSON tensor data is passed through to the Scailable Runtime via the socket interface, so it should follow the same format. See Generic Json Input Tensor.
Currently, this method only supports sending a single tensor in Base64 encoding. The Scailable Runtime will not process this data, but simply passes it through to the Inference Engine.
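
A hedged Python sketch of a POST to this endpoint. The exact body layout is defined by the Generic Json Input Tensor format; the field names used below are illustrative assumptions, not the confirmed schema:

import base64
import requests

BASE_URL = "http://localhost:8081"

tensor_bytes = bytes(640 * 480 * 3)  # placeholder raw tensor data

# The field names here ("input", "type", "content") are assumptions for
# illustration; follow the Generic Json Input Tensor format for the
# actual schema.
payload = {
    "input": {
        "type": "b64",
        "content": base64.b64encode(tensor_bytes).decode("ascii"),
    }
}

requests.post(f"{BASE_URL}/jsonTensorInput", json=payload)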