Artificial Intelligence (AI) is the biggest technical advancement since the Internet, and it promises to enhance everything we do, from individual IoT devices to cloud technologies, and even how we interact with technology. Intel is committed to advancing deep learning, which is critical to AI, by innovating across applications such as image, video, audio, natural language, and autonomous driving, as well as in emerging use cases, and by providing the best experience on Intel® Architecture (IA).
With all of the advancements in artificial intelligence, and with companies moving quickly to provide better access to features, developers have been forced to integrate solutions themselves, a difficult task. The integrated solutions that do exist today are prescriptive and do not give developers the flexibility to choose the tools that fit their needs.
Intel has been advancing neural networks and machine learning technology for years. We are applying this expertise to simplify the deep learning development process, making it easier for developers to prototype and to access optimized, integrated end-to-end stacks that accelerate development.
The Deep Learning Reference Stack from Intel has been highly tuned and tested for Intel® Xeon® Scalable platforms. This stack opens the door to future delivery of Deep Learning as a Service (DLaaS) by reducing the complexity of integrating multiple open source software components. It is optimized out of the box for Intel Xeon Scalable platforms while giving developers and customers the flexibility to individualize their own solutions.
This cloud native stack provides developers with tools and frameworks, including:
- Operating System: Clear Linux* OS, customized to individual development needs and optimized for Intel platforms, including specific use cases like deep learning.
- Orchestration: Kubernetes manages and orchestrates containerized applications for multi-node clusters with Intel platform awareness.
- Containers: Docker* containers and Kata Containers, which utilize Intel® Virtualization Technology (Intel® VT) to secure containers.
- Libraries: Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), a library of highly optimized math primitives for deep learning performance.
- Runtimes: Python provides application and service execution support that is highly tuned and optimized for IA.
- Frameworks: TensorFlow, a leading deep learning and machine learning framework.
- Deployment: Kubeflow, an open source, industry-driven deployment tool offering performance, efficiency, and ease of deployment at scale.
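To illustrate how the orchestration and container layers fit together, a containerized workload from the stack could be scheduled on a Kubernetes cluster roughly as sketched below. The image name, labels, and resource values are placeholders for illustration, not an official specification from the stack:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dlrs-tensorflow            # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dlrs-tensorflow
  template:
    metadata:
      labels:
        app: dlrs-tensorflow
    spec:
      containers:
      - name: tensorflow
        image: clearlinux/stacks-dlrs-mkl   # placeholder image reference
        resources:
          limits:
            cpu: "16"              # illustrative sizing only
            memory: 32Gi
```

Applying a manifest like this with `kubectl apply -f` would let Kubernetes place and manage the container across the cluster's nodes.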
Each layer in this stack has been tested and highly tuned for performance, utilizing Intel® Advanced Vector Extensions 512 (Intel® AVX-512). It also supports new platform performance features such as Intel® Deep Learning Boost in our next-generation Xeon Scalable processors. Because we are in the early stages of deep learning and market needs are still changing, Intel is working to provide support for other deep learning frameworks, such as PyTorch*. Intel is working across the industry to ensure popular frameworks and topologies run well on Intel Architecture so customers can choose the best solution for their needs. We welcome and invite contributions to these future stacks; please visit our Clear Linux OS Stacks page. To join the Clear Linux community, join our developer mailing list, the #clearlinux IRC channel, or our GitHub repository.
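The AVX-512 tuning described above only pays off on hardware that actually exposes those instructions. As a minimal, Linux-only sketch (the helper name is ours, and parsing /proc/cpuinfo is an assumption about the deployment OS), a developer might confirm the feature flags before benchmarking:

```python
def cpu_flag_present(flag: str, cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if the given CPU feature flag (e.g. 'avx512f')
    appears in the kernel's reported flag list. Linux-only sketch."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # Format: "flags\t\t: fpu vme de pse ... avx512f ..."
                    return flag in line.split(":", 1)[1].split()
    except OSError:
        pass  # /proc not available (non-Linux); treat as not detected
    return False

if __name__ == "__main__":
    for flag in ("avx2", "avx512f"):
        print(f"{flag}: {cpu_flag_present(flag)}")
```

On an AVX-512-capable Xeon Scalable system, `avx512f` (the foundation subset) should be reported; frameworks built with MKL-DNN then dispatch to those wider vector kernels at runtime.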
Single-Node Performance (float32)
Detailed server configuration:
2 x Intel® Xeon® Gold 6139 (2.3 GHz, 18 cores), 192 GB memory (12 x 16 GB DDR4 @ 2666 MHz), 512 GB SSD M.2 SATA 3.0 6 Gb/s Intel Liberty Harbor SSDSCKKI512G801 DC S3110, Clear Linux* 26260.
We began our development on current-generation Intel Xeon Scalable processors, and we continue performance optimizations on next-generation Intel® Xeon® Scalable platforms.
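Single-node float32 results like those above are typically gathered with simple timing harnesses around compute-bound kernels. The sketch below is our own illustration, not Intel's benchmarking methodology: it times a float32 matrix multiply with NumPy, which dispatches to the optimized BLAS backend (e.g. MKL) when one is present:

```python
import time

import numpy as np


def gemm_gflops(n: int = 1024, iters: int = 5) -> float:
    """Measure GFLOP/s for an n x n float32 matrix multiply,
    averaged over `iters` timed runs after one warm-up run."""
    rng = np.random.default_rng(0)
    a = rng.random((n, n), dtype=np.float32)
    b = rng.random((n, n), dtype=np.float32)
    a @ b  # warm-up: let the BLAS backend initialize its thread pool
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    elapsed = time.perf_counter() - start
    flops = 2.0 * n**3 * iters  # multiply-add count for dense GEMM
    return flops / elapsed / 1e9


if __name__ == "__main__":
    print(f"{gemm_gflops():.1f} GFLOP/s")
```

A harness like this makes relative comparisons (e.g. stock build vs. AVX-512-tuned build on the same machine) straightforward, though real framework benchmarks measure end-to-end training or inference throughput instead.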
Performance results are based on testing as of November and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/benchmarks. See configuration details above.
Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.