Deploy State-of-the-Art Deep Learning on Edge Devices in Minutes

Deploying advanced deep-learning algorithms on edge devices – especially for computer-vision applications like autonomous vehicles and IoT – requires special capabilities. At Brodmann17, our mission is to create practical, neural-network-based algorithms that bring deep-learning vision applications to the mainstream. Our patented, robust vision technology is the world’s most lightweight deep-learning software for embedded CPUs: by reducing the number of calculations required at inference time, it saves up to 95% of the compute power – making it ideal for deployment on edge devices.

For the last three years, we’ve been fine-tuning our model – targeting high frame rates, low power consumption and a small working-memory footprint, while keeping accuracy high – and we’re now keen to share our work with Arm’s developer community. By providing open access to our face detector code, we’re hoping to stimulate innovation and let developers experience the benefits of our deep-learning vision technology for themselves.

Below, you’ll find details of the first version of our face detection algorithm, created using new proprietary neural-network design patterns. It’s highly efficient without compromising on accuracy, and – of course! – we believe it to be a superior alternative to other open-source face detectors out there, especially if your target application is an edge device.

The model is intended to run on Arm CPUs, and we’ve developed our own in-house inference engine, so you can run our library pretty much out of the box, with no additional installations required.

Setup

Please refer to our GitHub repository for detailed setup instructions. Once you’re done, you’ll be able to run our library from C++ or Python on Armv8-A (AArch64).

A speed benchmark on Arm Cortex-A72 is provided below:

Input Image Size    Process Time (ms)    FPS (1/s)
640x480             67.72                14.77
320x240             22.26                44.93
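As a quick sanity check on the table above, the FPS column is simply the reciprocal of the per-frame process time (1000 ms divided by the process time in milliseconds):

import cv2  # not required here; shown only because the benchmark was run through OpenCV I/O

# Process times from the benchmark table, in milliseconds
benchmarks = {
    "640x480": 67.72,
    "320x240": 22.26,
}

for size, ms in benchmarks.items():
    fps = 1000.0 / ms  # frames per second = 1 / (seconds per frame)
    print(f"{size}: {fps:.2f} FPS")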

Below you’ll find two code snippets (Python & C++) that will help you get started. The following is covered:

  1. Reading an image
  2. Processing the image to detect faces
  3. Displaying the results (face bounding boxes)

Getting Started with Python

 

import cv2
from matplotlib import pyplot as plt
from brodmann17_face_detector import Detector

# Step 1: Read the image
im = cv2.imread("../example/example3.jpg")

# Step 2: Run face detection on the grayscale image
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
with Detector() as det:
    detections = det.detect(gray)

# Step 3: Display the results (each detection starts with [x, y, w, h])
im2show = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)

for d in detections:
    x, y, w, h = (int(v) for v in d[:4])
    im2show = cv2.rectangle(im2show, (x, y), (x + w, y + h), (255, 0, 0), 8)

plt.imshow(im2show)
plt.show()
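Beyond drawing rectangles, the same [x, y, w, h] boxes can be used to crop each detected face out of the image – handy if you want to feed the faces into a downstream pipeline. A minimal sketch, using a synthetic detections array and a dummy image so it runs stand-alone (the box layout is the same one the snippet above assumes):

import numpy as np

# Hypothetical detections: one row per face, [x, y, w, h] in pixels
detections = np.array([[120, 80, 64, 64],
                       [300, 150, 48, 48]], dtype=np.float32)

# Dummy image stand-in (height x width x channels), as returned by cv2.imread
im = np.zeros((480, 640, 3), dtype=np.uint8)

# Crop each detected face out of the image via NumPy slicing
faces = []
for x, y, w, h in detections.astype(int):
    faces.append(im[y:y + h, x:x + w])

for i, face in enumerate(faces):
    print(f"face {i}: {face.shape[1]}x{face.shape[0]} pixels")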

Getting Started with C++

#include "libbrodmann17.h"

#include <string>
#include <unistd.h>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

static const int MAX_DETECTIONS = 20;

using namespace bd17;
using namespace std;
using namespace cv;

int imageExample(string image_filename) {

	// Load the image first, so there is nothing to clean up if it fails
	Mat im = cv::imread(image_filename, cv::IMREAD_COLOR);

	if (!im.size().area()) return EXIT_FAILURE;

	// Prepare detector
	Init();
	void* detector = CreateDetector();

	unsigned int detections_num = 0;
	float* detections = new float [PARAMS_PER_DETECTION * MAX_DETECTIONS];


	// Run detection
	if (!Detect(detector, detections, &detections_num, MAX_DETECTIONS,
			(void*)im.data, im.cols, im.rows, bd17_image_format_t::bd17_bgr_interleaved_byte,
			NULL, NULL))
	{
		fprintf(stderr, "Error: Detection error\n");
		DestroyDetector(detector);
		delete [] detections;
		return EXIT_FAILURE;
	}

	// Draw results
	for (unsigned int i = 0; i < detections_num; i++)
	{
		const float* d = &detections[i * PARAMS_PER_DETECTION];  // [x, y, w, h, ...]

		// Upper-left corner
		Point pt1(d[0], d[1]);

		// Bottom-right corner (inclusive)
		Point pt2(d[0] + d[2] - 1, d[1] + d[3] - 1);

		// Draw rectangle
		rectangle(im, pt1, pt2, Scalar(0, 0, 255), 2);
	}
	imshow("Output", im);
	int key = waitKey(0);

	// Clean Up
	DestroyDetector(detector);
	delete [] detections;

	return EXIT_SUCCESS;
}

static inline bool is_file_exists (const std::string& name) {
    return ( access( name.c_str(), F_OK ) != -1 );
}

int main(int argc, char ** argv) {
	std::string image = "./example.jpg";
	if (argc != 2) {
		fprintf(stderr, "No input file given, using example.jpg\n");
	} else {
		image = std::string(argv[1]);
	}
	if (!is_file_exists(image)) {
		fprintf(stderr, "File %s does not exist\n", image.c_str());
		return -1;
	}
	return imageExample(image);
}
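The C++ API returns detections as a flat float buffer, PARAMS_PER_DETECTION values per face. The indexing logic above can be sketched in Python for clarity; PARAMS_PER_DETECTION = 5 (x, y, w, h, score) is an illustrative assumption here – the real value and layout come from libbrodmann17.h:

# Illustrative assumption: 5 values per detection (x, y, w, h, score);
# the actual constant is defined in libbrodmann17.h
PARAMS_PER_DETECTION = 5

# Two hypothetical detections, flattened back to back as the C++ API returns them
buffer = [120.0, 80.0, 64.0, 64.0, 0.98,
          300.0, 150.0, 48.0, 48.0, 0.87]
detections_num = 2

boxes = []
for i in range(detections_num):
    x, y, w, h, score = buffer[i * PARAMS_PER_DETECTION:(i + 1) * PARAMS_PER_DETECTION]
    # Inclusive bottom-right corner, mirroring the C++ pt2 computation
    boxes.append(((int(x), int(y)), (int(x + w - 1), int(y + h - 1)), score))

for pt1, pt2, score in boxes:
    print(pt1, pt2, f"score={score:.2f}")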
 

To get a copy of our library, and for more information, please visit our GitHub repository – and stay tuned for additional releases of models and code!

Get a copy of the Brodmann17 library

Brodmann17’s deep learning vision technology exceeds state-of-the-art accuracy while running at the edge on standard, low-power Arm CPUs. Amir Alush, co-founder and CTO of Brodmann17, demonstrates the company’s cloud-free deep learning solution.
