From Edge Impulse
In this section of our documentation we describe how to use the Edge Impulse model training platform to train advanced machine learning models for vision tasks and deploy them seamlessly using the Scailable platform. To follow the documentation at this point we assume that you have access to the following:
- An edge device with the Scailable AI manager installed. If you do not have access to an edge device with the Scailable AI manager installed, please see how to purchase a device. Do make sure you can log in to the edge device and navigate to the AI manager installed on the device.
Once you have all of the above set up, you should be able to proceed to train your own model using Edge Impulse and deploy it using Scailable.
We will demonstrate how to train and deploy your own model step by step, and we will show you how to re-train your model once it has been deployed in the field. We will cover the following steps:
- 1. Model training using the Edge Impulse platform. Note that we will not provide an elaborate walk-through of the amazing capabilities of the Edge Impulse platform; these can be found in the Edge Impulse docs: https://docs.edgeimpulse.com/docs/.
- 2. Importing the trained model into your Scailable model library.
- 3. Deploying the model to your selected edge device using the on-device AI manager.
- 4. Retraining your model. This step is optional, but cool. Once you have a model set up you can collect new training examples in the field and use these to retrain the model. Once done, you can iterate (go back to step 1) and get better!
We start the development of a novel edge AI solution by creating a new project on the Edge Impulse platform:
The Edge Impulse platform is very intuitive, and allows you to upload and annotate training examples and to train object detection models. We will focus on Edge Impulse's FOMO model; a quick getting-started guide can be found here: https://docs.edgeimpulse.com/docs/tutorials/detect-objects-using-fomo.
The important bit for this tutorial is to train an object detection model and to select the correct FOMO model. Work through the data acquisition and impulse creation steps in the Edge Impulse platform to get to the object detection model:
Do make sure to select the FOMO MobileNetV2 (either 0.1 or 0.35) or YOLOv5 option. Next, after you have clicked "Start training" and the model training has finished, you are done (for now) on the Edge Impulse platform.
At this point we only support imports of FOMO MobileNetV2 and YOLOv5 models from Edge Impulse. We will be adding support for more Edge Impulse models shortly.
After training your model, you can leave the Edge Impulse platform (but do leave it open in a tab) and move to https://admin.sclbl.net. After logging in at the Scailable platform you will arrive at your dashboard showing your current models and devices (which might both be 0 when you are just getting started):
Click the model tab at the top, and next click the green "Add a model" button:
At this point you can use your Edge Impulse API key and project ID to import your trained model directly from Edge Impulse. Your API key can be found on your dashboard, and the project ID can be found in the URL:
After filling out the API key and project ID you can click the "Link model" button, and your Edge Impulse model will be imported into your Scailable library:
You can obviously change the model name and documentation (as usual), but effectively, after the import, the model is directly available for deployment. Once you click "Return to models" you will see the model at the top of your model list:
You are now ready to deploy your model to your selected edge device.
There are multiple ways in which you can use the Scailable platform to deploy your model to your selected edge device. However, if you still need to configure the edge device, the easiest way of setting things up is to navigate to the AI manager that is running on the device. It can usually be found at port 8081 or through the device setup menu. You should get here:
At this point you are configuring the setup of the Scailable Edge AI manager on this specific device. For a more elaborate setup please see our AI manager documentation. However, the steps are simple enough:
- 1. First, go to the model tab, click the "Select model" button, and select your newly coupled Edge Impulse model from the list. At this point, if you have not yet done so, you might be asked to register your device.
This all worked. Great.
However, at this point it is good to also understand the generated output; here we present the top of the generated JSON that will be sent to your specified output location (by default, the Scailable data logger):
// Example JSON output (top):
The JSON object starts with some metadata describing the device, model, and camera name. Next, you see the output dimensions. In this case the dimensions are 12 x 12 x 3, which is the standard Edge Impulse FOMO output when a model contains 3 output classes: effectively the model output is a 12x12 grid on top of the 96x96 pixel input image (the image is automatically rescaled by the AI manager) detailing, for each of the 12x12=144 blocks of the image, which class is detected. What follows is a list (called StatefulPartitionedCall:0) of effectively triplets containing the probability for each output class. I.e., in the above output, the first three blocks of the image are identified as class 1 with probabilities
You can use the output any way you want by sending it to the Scailable data logger or to your own application platform.
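As a quick sketch of how you might consume this output in your own application, the snippet below parses the flat list of per-block class probabilities into a per-block prediction. The metadata field names and the probability values here are illustrative assumptions, not the exact Scailable schema; only the `StatefulPartitionedCall:0` list name and the 12x12x3 layout come from the output described above.

```python
import json

# Hypothetical top of the output JSON; only the first three of the
# 144 blocks are shown, and the metadata keys are illustrative.
output_json = """
{
  "deviceName": "my-edge-device",
  "modelName": "fomo-demo",
  "outputDims": [12, 12, 3],
  "StatefulPartitionedCall:0": [0.90, 0.05, 0.05,
                                0.80, 0.10, 0.10,
                                0.20, 0.70, 0.10]
}
"""

data = json.loads(output_json)
rows, cols, n_classes = data["outputDims"]
probs = data["StatefulPartitionedCall:0"]

# Walk the flat list in steps of n_classes: one triplet per grid block.
blocks = []
for i in range(0, len(probs), n_classes):
    triplet = probs[i:i + n_classes]
    best = max(range(n_classes), key=triplet.__getitem__)
    blocks.append((best, triplet[best]))

print(blocks)  # list of (class_index, probability), one entry per block
```

For a full 12x12 output the list would contain 144 triplets; the same loop applies unchanged.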
That's it really; you have just trained and edge-deployed a pretty nifty AI model.
Please note that depending on which option you chose for Resize mode when configuring the Impulse during training, you can configure the AI Manager to adopt the same mode by changing the value of InputCameraXAspectRatio in the settings file.
Although steps 1 to 3 basically got you started, there are a few nice tricks you can use to improve your solution over time. In particular, you can set the on-device AI manager to capture new training images when needed. On the "Output" tab in the AI manager, you will see the "Upload images with low certainty" box:
The image capture feature, which is specific to Edge Impulse models, allows you to set a threshold controlling whether or not an input image will be stored to become input for model re-training. If you set the "Probability threshold" to .8, for example, any image that contains one or more block(s) (out of the 144 blocks) for which the highest class probability (i.e., the probability of the recognized class) is lower than .8 will be sent together with the model's output. By default this image is added as a base64 encoded string to the output JSON:
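The threshold logic and the base64 image can be sketched as follows. The field names (`probabilityThreshold`, `blockMaxProbabilities`, `inputImage`) are assumptions for illustration, not the documented Scailable output schema; what is grounded in the text above is the rule (upload when any block's top probability falls below the threshold) and the base64 encoding of the image.

```python
import base64

# Illustrative low-certainty record; field names are assumptions.
record = {
    "probabilityThreshold": 0.8,
    "blockMaxProbabilities": [0.95, 0.74, 0.91],  # top class prob per block
    "inputImage": base64.b64encode(b"\x89PNG\r\n...").decode("ascii"),
}

# The image is uploaded when at least one block's highest class
# probability falls below the configured threshold.
low_certainty = any(p < record["probabilityThreshold"]
                    for p in record["blockMaxProbabilities"])

if low_certainty:
    image_bytes = base64.b64decode(record["inputImage"])
    # image_bytes now holds the raw image, ready to store or re-label.
```

Here block two scores 0.74, below the 0.8 threshold, so the image would be included with the output.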
After running the model for some time, and assuming you are using default logging of the output data to the Scailable data logger, you will be able to view your device in the Scailable platform, and view the resulting data:
Once you have collected a batch of data it is possible to directly upload the data to the Edge Impulse project you started with by clicking the "Upload to Edge Impulse" button:
The above covers the basics of "training-using-Edge-Impulse-deploying-using-Scailable". Very cool stuff, and in this article we really only scratched the surface of the potential applications. If you want to learn more, feel free to reach out anytime.