Deep Learning Functions as a Service

By Daniela Plascencia, Milan Mauck on Apr 23, 2020

Intel understands the importance of cloud computing and its many challenges. One of the biggest is the cost and maintenance burden of keeping a set of servers operating 24/7/365 to run workloads only once a day, or even once a month. The cloud community has a solution for this: Function as a Service (FaaS), a cloud service that lets you define and run functions (pieces of code) without needing to manage the system on which they run. FaaS projects such as OpenFaaS[1] and Fn[2] remove the complexity of building and maintaining the infrastructure behind a software application by dynamically deploying and managing event-driven, independent functions based on available computing resources. Combined with a software stack optimized for deep learning applications, such as the Deep Learning Reference Stack (DLRS)[3], this lets developers focus on creating complex models and deep learning applications instead of spending time and resources assembling all the needed bits and pieces.

Let’s start by talking about the Deep Learning Reference Stack, an integrated, highly performant open source stack optimized for Intel® Xeon® Scalable platforms. We created the DLRS to ensure AI developers have easy access to all of the features and functionality of Intel platforms when using machine learning frameworks. With this stack, we enable developers to prototype quickly by reducing the complexity of integrating multiple software components, while still giving them the flexibility to customize their solutions. Given that flexibility, we integrated the DLRS into the Fn and OpenFaaS projects to showcase the advantages of managing event-driven, independent functions on a stack optimized for deep learning applications.

Fn is a lightweight, open source FaaS compute platform specializing in running Docker* containers as functions. It’s written in Go and is fast and simple to run. The documentation is clear and concise, making it easy for developers to ramp up: Fn offers tutorials for setting up a basic Hello World function, building your own custom Docker container functions, and writing a function in a common language like Python*. The Fn community provides great support on their Slack* channel and GitHub* issues, and contributing is as simple as opening a PR against their GitHub repository. The only prerequisite for installing Fn is a working Docker installation; the project provides a shell script, hosted on its GitHub repository, that handles the installation for you.
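To give a feel for what an Fn function body looks like in Python, here is a minimal sketch of the handler logic, stripped of the fdk wrapper so it stands alone (in a deployed Fn function, this logic would live inside an fdk callback that also receives a context object):

```python
import json


def handler(payload: bytes) -> str:
    """Core of a hypothetical Fn-style function: parse a JSON request
    body and return a JSON greeting as the response body."""
    try:
        body = json.loads(payload or b"{}")
    except ValueError:
        body = {}
    name = body.get("name", "world")
    return json.dumps({"message": f"Hello, {name}!"})
```

Invoking the deployed function with `fn invoke` and a JSON body such as `{"name": "DLRS"}` would route that body into the handler and return the JSON greeting.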

OpenFaaS, on the other hand, relies on the orchestration capabilities of Kubernetes* to build a scalable, fault-tolerant, event-driven serverless platform. OpenFaaS is also written in Go and offers a CLI that is easy to set up and use. What makes OpenFaaS special is its function watchdog, a tiny HTTP server that converts any Docker image into a serverless function, and its API Gateway, the component that scales functions up on demand. For deploying functions, OpenFaaS relies on the concept of templates: ready-to-use sets of configuration files, scripts, and Dockerfiles to which users only add their function-specific files. These templates live in the OpenFaaS store, an index of templates and functions, to which the System Stacks team contributed a template[4] for deep learning applications.

Now that we have covered how these projects work on their own, let’s see how they can be integrated into a use case. We used the Pix2Pix example to showcase the best of both worlds. At a high level, Pix2Pix is a deep learning model that can create realistic outputs given an abstract input. For example, if you input a doodle of a cat, Pix2Pix tries to create a realistic cat. For more information about Pix2Pix, please refer to the Pix2Pix Utilizing the Deep Learning Reference Stack article[5].

Image 1. Pix2Pix processes an abstract image and tries to build a realistic one.

To integrate Pix2Pix with Fn and OpenFaaS, we defined a specific template for each project, with the Pix2Pix inference function as the FaaS workload to be invoked and managed and the Deep Learning Reference Stack as the base image for running deep learning tasks.
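To illustrate the shape of such a template, a Dockerfile along these lines could use the DLRS as the parent image. The image tag, package names, and file layout below are assumptions for the sake of the sketch, not the exact contents of the published templates:

```dockerfile
# Hypothetical Fn-flavored template; the DLRS tag and file layout
# are illustrative, not the exact template contents.
FROM clearlinux/stacks-dlrs-mkl

# Fn's Function Developer Kit for Python, plus runtime dependencies
RUN pip install fdk

WORKDIR /function
COPY func.py /function/

# The handler script receives requests and runs Pix2Pix inference
ENTRYPOINT ["python", "func.py"]
```

The OpenFaaS variant would follow the same pattern, installing the watchdog instead of the fdk and setting it as the entrypoint.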

For both Fn[6] and OpenFaaS[7], the first step is to deploy the project-specific services and then build the container images used to run and invoke the inference function. To deploy the Pix2Pix function as a service, we use Fn and OpenFaaS Dockerfile templates whose parent image is the DLRS. We then install the Function Developer Kit for Python (fdk) in the case of Fn, or the OpenFaaS watchdog in the case of OpenFaaS, along with some other runtime dependencies. In both cases, the entrypoint of the Dockerfile runs a handler script that receives, via a POST request, an image that has been transformed into a byte array. That array is translated back into an image, on which Pix2Pix runs inference. After inference is done, the container returns the resulting image encoded as base64. The last step of the process is to translate the encoded image back into an actual picture using some minor image processing.
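The request/response plumbing around the inference step can be sketched as follows; `run_inference` is a stand-in for the actual Pix2Pix model call, which is not reproduced here:

```python
import base64


def run_inference(image_bytes: bytes) -> bytes:
    """Stand-in for the Pix2Pix model: the real function decodes the
    bytes into an image, runs the generator, and re-encodes the result.
    Here it is an identity placeholder."""
    return image_bytes


def handle_request(payload: bytes) -> str:
    """Server side: receive raw image bytes from the POST body, run
    inference, and return the result encoded as base64."""
    result = run_inference(payload)
    return base64.b64encode(result).decode("ascii")


def decode_response(encoded: str) -> bytes:
    """Client side: turn the base64 response back into image bytes,
    ready to be written out as an actual picture."""
    return base64.b64decode(encoded)
```

Base64 encoding is what lets the binary image travel safely in a text-based HTTP response; the client simply reverses it before saving the file.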

FaaS is becoming popular in the cloud computing community because of its many advantages: it shapes software development into a more modular process, abstracts away the many complexities of maintaining cloud services, and reduces costs, since you are billed on consumption and executions rather than for servers that sit available 24/7/365. Intel is aware of the importance of FaaS projects and is getting involved through direct contributions and by creating use cases that showcase how Intel solutions, like the System Stacks, can work across different cloud environments and cloud projects.