Edge AI Manager structure

The Scailable Edge AI Manager is distributed as a .tgz archive that can be downloaded and extracted automatically through our one-line install, or downloaded from our repository and installed as an Advantech ICR router module.
Either way, the Scailable Edge AI Manager module is extracted into the /opt directory with the following structure (the overview includes only the most relevant files and directories):
/opt/sclbl/bin -> stores all binaries and executables
/opt/sclbl/bin/sclbld -> runs models
/opt/sclbl/bin/sclblmod -> pre- and postprocessing, io, and config management
/opt/sclbl/bin/uimanager -> web server that provides a user interface
/opt/sclbl/bin/outputdistributor -> caches, saves, forwards, and distributes inference results
/opt/sclbl/etc -> stores configuration files and support scripts
/opt/sclbl/etc/defaults -> copied to the "settings" file on the first install
/opt/sclbl/etc/settings -> stores and preserves settings
/opt/sclbl/etc/init -> script that starts/stops AI Manager processes
/opt/sclbl/www -> the web root of the uimanager binary
/opt/sclbl/www/index.html -> the main index file of the user interface
/opt/sclbl/www/img -> uimanager images directory (includes example input)
/opt/sclbl/www/js -> uimanager javascript directory (js ui application)
/opt/sclbl/www/css -> uimanager stylesheets (js ui application style)
/opt/sclbl/cache -> caches model files locally


Persistent AI Manager settings are stored locally in the /opt/sclbl/etc/settings file. Further explanation of each individual setting can be found here.


Initialization script that starts and stops AI Manager processes in a preconfigured manner.
> ./init {start|stop|startui|stopui|startod|stopod|startrun|stoprun|test|restart|status|cam|defaults}
startui -> Starts uimanager and outputdistributor
stopui -> Stops uimanager and outputdistributor
start -> Alias for "startui"
stop -> Alias for "stopui"
startod -> Starts outputdistributor
stopod -> Stops outputdistributor
startrun -> Starts an inference with configured model(s)
stoprun -> Stops an inference with configured model(s)
test -> Runs one inference, outputs the result
restart -> Stops, then starts
status -> Shows the status of the AI Manager processes
cam -> Tests the cameras (generates one image per cam)
defaults -> Reset the settings file to default settings
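The alias and restart behaviour described above can be sketched as a small command dispatcher. This is illustrative only; the real /opt/sclbl/etc/init script manages actual processes, and the echo lines here merely stand in for that:

```shell
#!/bin/sh
# Hypothetical sketch of init-style command dispatch.
init_cmd() {
  case "$1" in
    start)   init_cmd startui ;;   # "start" is an alias for "startui"
    stop)    init_cmd stopui ;;    # "stop" is an alias for "stopui"
    startui) echo "starting uimanager and outputdistributor" ;;
    stopui)  echo "stopping uimanager and outputdistributor" ;;
    restart) init_cmd stop; init_cmd start ;;   # stops, then starts
    *)       echo "unknown command: $1" ;;
  esac
}
init_cmd restart
```

Running the sketch with `restart` first prints the stop message, then the start message, mirroring the documented "Stops, then starts" behaviour.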


The sclbld binary is a daemon that is responsible for running the model currently specified in the AI Manager's settings file. It is domain socket based and waits for JSON formatted input from the sclblmod binary. When its socket receives correctly specified input, it will run the configured model and send the JSON formatted inference results back to the sclblmod binary.

Log levels

By default, sclbld's log level is set to 3. Set the SCLBL_LOG_LEVEL environment variable to 4 to see additional logging when starting sclbld from the shell:
# log levels: [0] NOTHING [1] ERRORS [2] 1+WARNINGS [3] 1+2+INFORMATIONAL [4] 1+2+3+NOTICE
export SCLBL_LOG_LEVEL=4
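The numeric levels are cumulative: level n prints everything at levels 1 through n. A minimal sketch of that gating (illustrative only; the real filtering happens inside sclbld):

```shell
# A message is printed only when its level is <= the configured log level.
SCLBL_LOG_LEVEL=3
log() {  # usage: log <level> <message>
  if [ "$1" -le "$SCLBL_LOG_LEVEL" ]; then echo "$2"; fi
}
log 1 "error: model file not found"   # printed: 1 <= 3
log 3 "info: inference started"       # printed: 3 <= 3
log 4 "notice: frame received"        # suppressed at level 3
```

At level 0 nothing is printed at all, and at level 4 the NOTICE messages appear as well.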


The sclblmod binary pre- and post-processes data. It transforms input data (for instance raw, JSON, CSV, or protobuf formatted data, or image, audio, and video streams) into tensor(s) with dimensions that the currently configured model can handle. It then transforms the resulting output tensors back into a user-specified output format (for instance JSON, Modbus, Protobuf, or raw output).

Log levels

The Scailable AI Manager has several log levels. By default, sclblmod's log level is set to 3. Set the SCLBL_LOG_LEVEL environment variable to 4 to see additional logging when starting sclblmod from the shell:
export SCLBL_LOG_LEVEL=4
For debugging the gstreamer integration, set GST_DEBUG to level 2 (or higher):
export GST_DEBUG="*:2"
To save all images and JSON i/o for debugging purposes, set SCLBL_SAVE_ALL to 1 (set it back to 0 to turn this off). For example:
export SCLBL_SAVE_ALL=1
To set a custom gstreamer string, first set InputCamera1Format in the settings to debug. Next, set SCLBL_GST_STRING, for example:
export SCLBL_GST_STRING="rtspsrc location=rtsp://localhost | etc | etc ..."

Testing from the shell

To help debug the sclbld and sclblmod binaries, the sclblmod binary includes a test mode. Follow these steps to get started with it:
  • Ensure the AI Manager is configured correctly (license, model, and input all need to be set, and the inference engine should not be running).
  • cd /opt/sclbl/bin
  • sudo killall sclblmod
  • sudo killall sclbld
  • export SCLBL_LOG_LEVEL=4
  • sudo -E sclbld
  • sudo -E sclblmod test
  • The last command runs one inference with the currently configured model, with level 4 logging to standard output. At the end of the run, sclblmod displays the inference result in JSON format.


The uimanager binary is a small web server that allows the end user to get and set the AI Manager's configuration. It can also test, start, and stop inferences.


The outputdistributor binary receives inference results after they have (optionally) been post-processed by the sclblmod binary. It caches, saves, forwards, and distributes these results.


output -> a target and a data point
target -> a triplet describing a running model on a device: (device, model, source)
data point type -> the format of the output (see below)
data point -> a single data point
type = Log
  -> the data point is a text string without newlines
  -> it will be stored in a log file, prefixed with a timestamp and closed with a newline
  -> the writer does NOT check for newlines inside the data point, but does check for a closing newline
type = Raw
  -> the data point can be appended to an existing file
  -> no separators are added between data points
type = Blob (later)
  -> the data points require a way to be separated from other data points
  -> the most likely candidate is a TAR file with timestamps for names
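The difference between the Log and Raw write semantics can be sketched with two hypothetical helper functions (the real writer lives inside the output distributor):

```shell
# Log type: each data point becomes one timestamped line, closed with a newline.
log_write() {
  printf '%s %s\n' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$1"
}
# Raw type: data points are appended as-is, with no separator between them.
raw_write() {
  printf '%s' "$1"
}
raw_write "abc"
raw_write "def"   # the two raw data points run together: abcdef
echo
log_write "person detected"
```

Note how two consecutive Raw writes are indistinguishable from one larger write, which is exactly why the planned Blob type needs an explicit separation mechanism.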
Run with --help to see the available options:
-p, --database <connect-string>   Insert data points into a PostgreSQL database
-f, --forward <destination>       Forward data points to another sclbl-output-distributor at <destination>;
                                  the destination can be address:port, hostname:port, or a url (http and https)
-l, --listen <addr:port>          Listen address:port [default:]
-d, --log <directory>             Log data points to a directory
    --maximum-log-size <bytes>    Sets the maximum log size in bytes before rotating [default: 67108864]


Receiving data
POST /output
X-Output-Deviceid: <device-id>
X-Output-Modelid: <model-id>
X-Output-Sourceid: <source-id>
X-Output-Date: <date>
X-Output-Type: <log|raw>
POST /output/<device-id>/<model-id>/<source-id>
X-Output-Date: <date>
X-Output-Type: <log|raw>
  • *-id must be alphanumeric or one of [-, _]
  • Date is in RFC-3339 format. Missing date stores the data with the current date/time on the collector device.
  • Type: log means output data is prefixed with the date of the result (like a log file)
  • Type: raw means output is appended to the output file
  • Type is optional, defaults to raw
  • Returns status 200 and body "ok" when successful
  • The current log writer ASSUMES the received data contains NO newlines (except at the end)
  • The maximum allowed blob size is currently 64 KB
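Putting the above together, a posted data point could look like this on the wire (host, ids, and body are all hypothetical):

```
POST /output/edge-01/model-42/cam1 HTTP/1.1
Host: collector.example.com:8000
X-Output-Date: 2023-01-31T12:00:00Z
X-Output-Type: log
Content-Length: 24

{"class":"person","n":2}
```

On success the distributor answers with status 200 and the body "ok"; with Type log, the body is written to the target's log file prefixed with the given date.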
Peeking data
GET /peek
X-Output-Deviceid: <device-id>
X-Output-Modelid: <model-id>
X-Output-Sourceid: <source-id>
GET /peek/<device-id>/<model-id>/<source-id>
  • Returns status 200 if peek data is available, or another status with an error message
  • Returns data as application/octet-stream
Viewing status
GET /status.json
  • Returns the status of all targets and destinations.

File storage

  • Filenames are generated as follows: <device-id>/<model-id>/<source-id>.<type>.<number>
  • The number is formatted as 4 digits, left-padded with zeros
  • The number is increased and the file rotated when the file size limit has been reached or exceeded
  • Files are limited to 10 MB by default
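The naming scheme above can be illustrated with a short shell function (the example ids are hypothetical):

```shell
# Build an output filename: <device-id>/<model-id>/<source-id>.<type>.<number>,
# with the rotation number left-padded to 4 digits.
output_filename() {  # usage: output_filename <device> <model> <source> <type> <number>
  printf '%s/%s/%s.%s.%04d\n' "$1" "$2" "$3" "$4" "$5"
}
output_filename edge-01 model-42 cam1 log 3   # → edge-01/model-42/cam1.log.0003
```

When the current file exceeds the size limit, the distributor moves on to the next number, e.g. from cam1.log.0003 to cam1.log.0004.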