Artificial Intelligence (AI) is one of the biggest technical advancements since the Internet, promising to enhance everything from individual devices to cloud technologies, reshape infrastructure, and even transform whole industries. Intel is committed to advancing Deep Learning (DL) workloads by accelerating enterprise customer and ecosystem development.
With that in mind, we created the Deep Learning Reference Stack to deliver the best DL experience on Intel® Architecture (IA) and launched the first version at the Intel Architecture Day in December. Already we’ve heard great feedback on how the stack has helped customers innovate across usages, along with some ideas for further enhancements.
Today, we are releasing the Deep Learning Reference Stack v2.0, which addresses feedback and adds support for new tools, use cases, and workloads. As with the initial release, this version is highly tuned and built for cloud native environments.
With this update, we are further enabling developers to quickly prototype and deploy deep learning workloads to production, reducing complexity common with deep learning software components. We’ve introduced a developer environment along with an additional deep learning framework while maintaining the flexibility for developers to customize their solutions.
Here are three of the biggest feature enhancements included in this release:
PyTorch* (v1.0), an open source machine learning library for Python based on Torch, which customers can use across deep learning applications such as natural language processing. Developers also use this scientific computing package as a replacement for NumPy.
Horovod*, an open source training framework that makes distributed deep learning easier. Horovod's core principles are based on the Message Passing Interface (MPI), and the framework has demonstrated high scaling efficiency for certain convolutional neural network (CNN) based workloads.
Jupyter Notebooks*, an open source web application for creating and sharing documents that contain live code, equations, visualizations, and narrative text. Notebooks give developers a single environment for data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.
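To illustrate the first of these, here is a minimal sketch of the NumPy-style, autograd-enabled tensor API that PyTorch brings to the stack (the values and shapes are purely illustrative):

```python
import torch

# Create a 2x2 tensor of ones; requires_grad=True enables automatic
# differentiation, which plain NumPy arrays do not offer.
x = torch.ones(2, 2, requires_grad=True)

# Build a tiny computation graph: scale every element by 3 and sum.
y = (x * 3).sum()   # y = 3 * 4 elements = 12.0

# Backpropagate to get dy/dx; each element's gradient is 3.0.
y.backward()
print(y.item())     # 12.0
print(x.grad)       # tensor of 3.0s
```

The same array-style operations (elementwise arithmetic, reductions, slicing) carry over from NumPy, which is why developers can treat PyTorch as a drop-in scientific computing package when they also need gradients.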
As with the initial release, this Deep Learning Reference Stack can be used in either a single or multi-node architecture, providing choice for development and deployment of deep learning workloads.
In addition to new features, this release incorporates the latest stable versions of developer tools and frameworks, namely:
- Operating System: Clear Linux* OS, customized to individual development needs and optimized for Intel platforms, including specific use cases like Deep Learning.
- Orchestration: Kubernetes to manage and orchestrate containerized applications for multi-node clusters with Intel platform awareness.
- Containers: Docker* containers and Kata Containers, which use Intel® Virtualization Technology (Intel® VT) to help secure containers.
- Libraries: Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), a library of highly optimized primitives for deep learning performance.
- Runtimes: Python application and service execution support that is optimized for IA.
- Frameworks: The PyTorch machine learning library for Python.
- Deployment: Kubeflow, an open source industry-driven deployment tool with enhanced performance, efficiency and ease of deployment at scale.
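Because the stack ships as container images, getting started can be as simple as pulling an image and running a framework inside it. A hedged sketch follows; the image name and tag below are illustrative only, so check the Clear Linux OS Stacks page for the exact published repositories:

```shell
# Pull a Deep Learning Reference Stack image (name/tag illustrative --
# see the Clear Linux OS Stacks page for the published images).
docker pull clearlinux/stacks-pytorch-mkl

# Verify the bundled framework from inside the container.
docker run --rm clearlinux/stacks-pytorch-mkl \
    python -c "import torch; print(torch.__version__)"
```

From there the same image can be referenced in a Kubernetes pod spec and deployed at scale with Kubeflow, as described above.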
Each layer of the Deep Learning Reference Stack has been tested and tuned for performance on Intel Architecture. The impact is clear when you look at the performance gains realized when using our stack, versus non-optimized stacks, for deep learning workloads.
Detailed server configuration:
2 x Intel® Xeon® Gold 8168 (2.7 GHz, 24 cores), 192 GB memory (12 x 16 GB DDR4 @ 2666 MHz), 1.0 TB NVMe SSD (SSDPE2KX010T701), Clear Linux* OS 27910
Intel works across the industry to help ensure popular frameworks and topologies run well on Intel Architecture, giving customers a choice in the best solution for their needs. We are using this stack to innovate on our current Intel Xeon Scalable processors and plan to continue performance optimizations for coming generations.
We invite developers to contribute feedback and ideas for future versions of the Deep Learning Reference Stack. For more information, please visit our Clear Linux OS Stacks page. To get involved with the Clear Linux community, subscribe to our developer mailing list, join the #clearlinux IRC channel, or visit our GitHub repository.
Performance results are based on testing as of March 2019 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/benchmarks. See configuration details above.
Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others. The nominative use of third party logos serves only the purposes of description and identification.