
Optimizing solutions with Clear Linux Project

BY 01 Staff ON Aug 18, 2015

Arjan Van De Ven, Senior Principal Engineer, Linux Kernel, Intel’s Open Source Technology Center

It has been a very exciting time since Intel announced Intel® Clear Containers, a feature of the Clear Linux* Project for Intel® Architecture, back in May. There has been a great deal of interest and enthusiasm from ecosystem vendors as well as the Linux developer community. My team and I have added a second day job: visiting with partners to discuss optimization of their solutions for Intel architecture by using various features of the Clear Linux Project!

Recently we have been working closely with CoreOS to integrate Intel Clear Containers technology with the rkt* container runtime. Due to the modular nature of rkt, which is implemented as pluggable “stages” of execution (0, 1, and 2), we can focus our efforts on stage 1, which runs as root and sets up the containers, traditionally using cgroups. By modifying only this stage to integrate Intel Clear Containers, CoreOS can launch containers with security rooted in hardware, while continuing to deliver the deployment benefits of containerized apps.

We conceived of the Clear Linux Project as a vehicle to deliver a highly optimized Linux distribution that offers developers a reference for how to get the most out of the latest Intel silicon for cloud deployments. We are driving innovative approaches to existing issues, while also thinking of solutions to challenges that we anticipate in the data centers and clouds of the near future. Intel Clear Containers offers a solution to one of the biggest challenges inhibiting the broader adoption of containers, especially in a multi-tenant environment like that of a public cloud service provider, or even in an enterprise: security. Traditional Linux containers don’t provide enough security to give application developers the confidence to run their applications alongside one from an unknown source. By optimizing the heck out of the Linux boot process, we have shown that Linux can boot with the security normally associated with virtual machines, almost as quickly as a traditional container. Thus we combine security rooted in hardware, via Intel® Virtualization Technology (VT-x), with the development and deployment benefits that have drawn application developers to containers. Problem solved.

What’s next for the Clear Linux Project? We have recently turned our focus to performance:

  • AVX 2.0 deployment: We are making it easier to develop applications that take advantage of the new floating point and integer operations in Haswell-based servers. AVX 1.0 allowed programmers and compilers to do floating point math highly in parallel (SIMD), which helps performance in compute-heavy workloads such as High-Performance Computing and machine learning. AVX 2.0 brought more operations and added support for integer math in addition to floating point. However, because applications compiled for AVX 2.0 generally will not run on processors prior to Haswell, developers often favor the older AVX 1.0 for compatibility reasons. In Clear Linux we developed a key enhancement to the dynamic loader of glibc, making it possible to ship two versions of a library: one compiled for AVX 2.0 and one compiled for non-AVX 2.0 systems. Thus application performance is tailored to the underlying platform.
  • zlib compression improvements: zlib is a data compression library that is very widely used in the industry to reduce storage and networking requirements. We previously optimized zlib for Intel processors to be 1.8x faster, work we published in 2014, and we continue to improve the library’s performance. We are using Clear Linux to show off current and future improvements to the library.
  • AutoFDO: One of the compiler optimizations available in Linux is Feedback-Directed Optimization (FDO), which provides performance gains but is not widely used due to the high runtime overhead of profile collection, a tedious dual-compile usage model, and difficulties in generating representative data to train the tool. AutoFDO overcomes these challenges by collecting profile data on production systems, without requiring a second, instrumented build of the target binary. By integrating AutoFDO into Clear Linux, we see runtime performance gains resulting from software alone.

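The AutoFDO flow in the last bullet can be sketched as a command sequence. This is a hedged illustration, not our exact build recipe: `app` and `app.c` are placeholder names, and the tool and flag names come from the upstream autofdo and GCC projects, so exact invocations vary by version.

```sh
# 1. Build normally -- no second, instrumented build is needed.
gcc -O2 -o app app.c

# 2. Collect a branch profile on a production system with Linux perf
#    (-b records branch stacks via the CPU's LBR facility).
perf record -b -- ./app

# 3. Convert the perf profile into GCC's AutoFDO format.
create_gcov --binary=./app --profile=perf.data --gcov=app.gcov

# 4. Recompile, feeding the profile back to the optimizer.
gcc -O2 -fauto-profile=app.gcov -o app app.c
```

The key contrast with classic FDO is steps 1–2: profiles come from the ordinary production binary under real load, rather than from a slow instrumented build run on synthetic training data.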
It’s a great time to be marrying cutting-edge open source software like containers with the features available in the underlying silicon to solve real-world challenges. Stay tuned to see what features we release next.

This week we will be at LinuxCon/CloudOpen/ContainerCon in Seattle, speaking about container security at two sessions. We will also be showcasing integration of Open Container Initiative/Docker application containers with Intel Clear Container technology, and speaking at the local Docker meetup. Right now, it’s all about containers!

See related blogs at CoreOS and in the Intel Data Stack.