Artificial Intelligence (AI) represents one of the biggest technical opportunities today, promising to transform everything from our individual devices to cloud technologies, and reshape infrastructure, even entire industries. Intel is committed to advancing Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development.

With that goal in mind, we created the Deep Learning Reference Stack to help AI developers deliver the best experience on Intel® Architecture (IA) and launched v1.0 at Intel Architecture Day in December. We’ve heard great feedback on how this integrated stack has helped customers innovate across use cases, along with some ideas for further enhancements, some of which we’ve included in our v2.0 release.

Today, we are releasing the Deep Learning Reference Stack v3.0, addressing more feedback and adding support for new tools, use cases, and workloads. As with previous releases, this version is highly tuned and designed for cloud native environments.

With this update, Intel further enables developers to quickly prototype and deploy DL workloads to production, reducing the complexity common to deep learning software components while maintaining the flexibility for developers to customize solutions. Among the features added in this release:

  • TensorFlow* (v1.13.1) + Commit hash, an open-source software library for dataflow and differentiable programming across a range of tasks, used for machine learning applications such as neural networks.
  • AVX-512 Vector Neural Network Instructions (VNNI), an x86 extension that’s part of the Advanced Vector Extensions 512 (AVX-512) platform, designed to accelerate deep neural network-based algorithms.
  • Jupyter Lab*, an open source environment for interactive computing, based on Jupyter Notebooks, that allows creation and sharing of documents containing live code, equations, visualizations, and narrative text. This empowers developers to use notebooks for data cleaning and transformation, numerical simulation, statistical modeling, and data visualization among other tasks.
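One quick way to see whether a given host can take advantage of the VNNI path is to look for the `avx512_vnni` flag that the Linux kernel exposes. The sketch below is illustrative only and assumes a Linux system; the stack's TensorFlow build detects supported instructions automatically at runtime, so a check like this is purely informational.

```python
def cpu_has_vnni(cpuinfo_path="/proc/cpuinfo"):
    """Return True if any CPU flags line lists avx512_vnni (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                # Each logical CPU has a "flags" line listing supported ISA extensions.
                if line.startswith("flags") and "avx512_vnni" in line:
                    return True
    except OSError:
        pass  # not Linux, or /proc is unavailable
    return False

print("AVX-512 VNNI available:", cpu_has_vnni())
```

On a Second Generation Intel Xeon Scalable system this prints `True`; on older or non-Intel hardware the optimized code paths simply aren't selected.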

The Deep Learning Reference Stack can be used in either a single or multi-node architecture (with Horovod), providing choice for development and deployment of DL workloads.
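As a rough sketch of what single- and multi-node use might look like from the command line (the image name and tag, host names, and `train.py` script are all assumptions for illustration, not values from the release; check the Clear Linux OS Stacks page for the published image names):

```shell
# Single node: pull a Deep Learning Reference Stack image and start Jupyter Lab.
# (clearlinux/stacks-dlrs-mkl is an assumed image name -- verify before use.)
docker pull clearlinux/stacks-dlrs-mkl
docker run -it -p 8888:8888 clearlinux/stacks-dlrs-mkl \
    jupyter lab --ip=0.0.0.0 --allow-root

# Multi-node: from inside the container, Horovod can fan a TensorFlow training
# script (train.py is a placeholder) out across hosts -- here, 2 processes on
# each of two machines.
horovodrun -np 4 -H host1:2,host2:2 python train.py
```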

This release also incorporates the latest stable versions of developer tools and frameworks, namely:

  • Operating System: Clear Linux* OS, customized to individual development needs and optimized for Intel platforms, including specific use cases like Deep Learning.
  • Orchestration: Kubernetes to manage and orchestrate containerized applications for multi-node clusters with Intel platform awareness.
  • Containers: Docker Containers and Kata Containers with Intel® VT Technology for enhanced protection.
  • Libraries: Intel® Math Kernel Library for Deep Neural Networks (MKL-DNN), a highly optimized library of performance primitives for deep learning applications.
  • Runtimes: Python application and service execution support.
  • Deployment: Kubeflow, an open source, industry-driven toolkit for deploying machine learning workloads at scale, with enhanced performance, efficiency, and ease of use.
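Putting a few of those layers together, a minimal Kubernetes manifest for running the stack might look like the following config sketch. The image name, placeholder `train.py` command, runtime class, and resource figures are all assumptions for illustration, not values from the release:

```yaml
# Illustrative pod spec: run the Deep Learning Reference Stack under Kubernetes.
apiVersion: v1
kind: Pod
metadata:
  name: dlrs-training
spec:
  runtimeClassName: kata              # optional: Kata Containers for VM-based isolation
  containers:
  - name: dlrs
    image: clearlinux/stacks-dlrs-mkl # assumed image name -- verify before use
    command: ["python", "train.py"]   # train.py is a placeholder training script
    resources:
      limits:
        cpu: "28"
        memory: 64Gi
```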

Each layer of the Deep Learning Reference Stack has been performance-tuned for Intel Architecture, allowing the stack to deliver impressive performance compared to non-optimized stacks.

Performance gains for the Deep Learning Reference Stack on ResNet50 are as follows:

[Chart: Deep Learning Reference Stack performance gains, ResNet50 benchmark]

Performance gains for the Deep Learning Reference Stack on Inception v4 are as follows:

[Chart: Deep Learning Reference Stack performance gains, Inception v4 benchmark]

Intel will continue working to help ensure popular frameworks and topologies run best on Intel Architecture, giving customers a choice in the right solution for their needs. We are using this stack to innovate on our current Intel® Xeon Scalable processors and plan to continue performance optimizations for coming generations.

We invite developers to contribute feedback and ideas for future versions of the Deep Learning Reference Stack. For more information, please visit our Clear Linux OS Stacks page. To connect with the Clear Linux community, join our developer mailing list, join the #intel-verticals IRC channel, or visit our GitHub repository.

 


Notices and Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information, visit www.intel.com/benchmarks.

Performance results are based on testing as of 4/29/2019 and may not reflect all publicly available security updates. No product or component can be absolutely secure.

System configuration
Second Generation Intel® Xeon Scalable Platform -- 2x Intel® Xeon® Platinum 8280 (2.7GHz, 28-core), 384 GB memory (12 x 32 GB DDR4 @ 2933 MHz), 3.7TB NVMe SSD SSDPE2KX040T7, Clear Linux* 28680, BIOS SE5C620.86B.0D.01.0271.120720180605, ucode (0x4000013), Kernel 5.0.6-726.native.

Intel® Xeon Scalable Platform -- 2x Intel® Xeon® Platinum 8180 (2.5GHz, 28-core), 384 GB memory (12 x 32 GB DDR4 @ 2666 MHz), 1.5TB NVMe SSD SSDPE2KE016T8, Clear Linux* 28680, BIOS SE5C620.86B.02.01.0008.031920191559, ucode (0x200005a), Kernel 5.0.6-726.native.

Optimization Notice: Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice Revision #20110804
Intel, the Intel logo, and Intel Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation