Intel AI Academy – Deep Learning Inference With Intel® FPGAs

An FPGA provides an extremely low-latency, flexible architecture that delivers deep learning acceleration in a power-efficient solution. Learn how to deploy a computer vision application on a CPU, and then accelerate the deep learning inference on the FPGA. Next, learn how to use Docker* containers and Kubernetes* to scale that application across multiple nodes in a cluster.

By the end of this course, students will have practical knowledge of:

  • What convolutional neural networks are and how they are built
  • How to build a deep learning computer vision application
  • What an FPGA is from a software developer’s perspective, and why FPGAs are so well suited for accelerating real-time machine learning applications
  • The components of the Intel® FPGA Deep Learning Acceleration Suite
  • What constitutes a computer vision application that uses deep learning to extract patterns from data
  • How to use the Intel® Distribution of OpenVINO™ toolkit to target CNN-based inferencing on Intel® CPUs and FPGAs (see the sketch after this list)
  • How the Acceleration Stack for Intel® Xeon® CPU with FPGAs enables higher-level cloud and data center software applications to leverage the FPGA seamlessly
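
As a taste of what the exercises cover, the following is a minimal sketch of retargeting deep learning inference between a CPU and an FPGA with the legacy OpenVINO Inference Engine Python API (IECore). The model and image file names are hypothetical, and the exact API surface and device plugin names depend on the OpenVINO release used in the course.

    # Minimal sketch, assuming the legacy Inference Engine Python API and
    # hypothetical file names (model.xml, model.bin, input.jpg).
    import cv2
    from openvino.inference_engine import IECore

    ie = IECore()

    # Read a network that has been converted to OpenVINO IR (.xml/.bin) format.
    net = ie.read_network(model="model.xml", weights="model.bin")
    input_name = next(iter(net.input_info))

    # Changing device_name is all it takes to retarget inference:
    #   "CPU"             - run everything on the CPU
    #   "HETERO:FPGA,CPU" - offload supported layers to the FPGA, fall back to CPU
    exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

    # Preprocess one frame to the network's expected NCHW layout and run inference.
    n, c, h, w = net.input_info[input_name].input_data.shape
    frame = cv2.imread("input.jpg")
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))
    result = exec_net.infer(inputs={input_name: blob})

Because only the device name changes, the same application code can be benchmarked on the CPU first and then accelerated on the FPGA, which mirrors the workflow the course walks through.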

The course is structured around five weeks of lectures and exercises. Each week requires three hours to complete. The exercises are implemented in Python*.

Prior Knowledge