"Cloud is an undisputed reality."
by Nicole Huesman, Community & Developer Advocate
We are seeing exponential growth in applications delivered via the cloud as businesses increasingly adopt hybrid- and multi-cloud strategies that deliver enhanced efficiency and agility at lower cost. With this growth comes a shift to cloud-ready and cloud-native applications, and in turn, changes in the way software is architected and applications are developed.
Monica Ene-Pietrosanu, Director of Software Engineering in Intel’s Open Source Technology Center, leads a team of highly skilled performance engineers and compiler experts in optimizing runtimes for the most popular scripting languages used in the cloud. We talked to her about the trends she is observing, and how her team is helping ensure that applications in the cloud benefit from Intel hardware capabilities.
Q: Can you tell us a little bit about what you and your team do at Intel?
I’m leading a great team of software engineers, performance experts and compiler experts who are focused on accelerating cloud workloads based on scripting languages, and evolving the way performance is measured in the new world of the cloud. Our team is working on exposing the latest innovations in the Intel server platform to ensure users can derive maximum benefit from Intel technologies. We work with major cloud service providers to understand how they are developing and deploying applications in the cloud, and the challenges they face, so we can provide them with optimizations based on what we learn from them. We collaborate with relevant open source communities, who provide us with their insights. And we work with CPU architects at Intel, sharing this external, continuously evolving worldview with them so that, together, we can further enhance the next generation of Intel CPUs and platforms. This way, applications can extract maximum performance from the hardware platforms.
Q: What trends have you observed in cloud computing relative to your work on programming languages?
More than 85% of applications will be delivered via the cloud by 2020. So, cloud is an undisputed reality that provides efficiency, agility and cost savings to businesses. That’s why we are focusing on cloud usages and accelerating applications for the cloud. Most of the cloud is powered by open source software. Open source is sparking so much innovation these days that it’s amazing to see how everything is changing as we look through this new lens. One thing we’ve been seeing over the last decade is a shift in development from traditional C/C++ implementations to dynamic scripting-language runtimes. Today, the number of applications that use scripting runtimes and Java has nearly doubled. For example, more than 70% of OpenStack is written in Python. While we’re sustaining a focus on C/C++ applications, we’re also investing in optimizing scripting-language runtimes to make sure that this new category of applications benefits from Intel hardware capabilities.
Q: Why do you think we’re seeing this move to scripting languages and dynamic runtimes?
Q: What is Intel’s role in accelerating innovation in this space?
Q: What most inspires you about the work you’re doing?
This new world of the cloud requires deep expertise in compilers, hardware capabilities, and DevOps. There are multiple angles that we need to tackle. I have a fantastic team of experts in all of these areas who are driving magic into the cloud every day. So that’s very exciting for me — I’m humbled and honored to work with this team.
Another thing that’s very exciting is that we are learning every day. Right now, we are seeing a huge evolution in the way performance is being measured in the cloud. In the past, we were looking at one server, bare-metal workloads running independently on a machine that we kept under our desk, and looking at how to make our applications run faster there. In the cloud, we have different challenges. We have orchestration where our workloads are now placed on different nodes. We have micro-services that have totally changed the way applications are being architected. We have containers. So, there is a lot of new code that affects the cloud paradigm and has required us to evolve the way we measure performance. This brings incredible technical challenges, and also forces us to stay on the leading edge of the technology.
Q: Containers have been such a hot area of innovation. Can you explain some of the work you’re doing to optimize high-level languages for containers?
Containerization is definitely revolutionizing the way applications are developed, packaged and deployed. Containers enable you to get increased efficiency out of your infrastructure, and greater acceleration of your workloads. They are forcing us to rethink software architecture. This means there is a lot of work we need to do — not only in how we measure performance, but in how we develop applications and how we ensure that applications running inside containers take advantage of new hardware capabilities. Historically, containers have been dogged by security concerns. We love them because they give us speed, but for mission-critical applications, we are always concerned about security. That’s where teams at Intel have developed great innovations as part of Kata Containers, which brings the speed of containers together with the security of virtual machines (VMs). At the same time, we are enabling hardware capabilities like FPGAs, new CPU instructions, faster memory, and faster networking so that applications running inside containers can take advantage of these advancements.
Q: How are you optimizing high-level languages for some of the latest hardware advancements?
Intel brings cutting-edge innovation to market with every server platform, which enables those of us in the software world to fuel new use cases. One example is persistent memory and in-memory databases. With persistent memory coming to life, we see faster access to data. In these high-level languages, you can define objects that are persistent, that don’t need to be written to disk, which gives you acceleration, faster insight and increased business velocity. Consider the time needed to restart a database and reload data from disk — this is far longer than when you already have that data in memory. Persistent memory is actually like unlimited DRAM — think very large memory, close to the CPU. Intel is enabling interfaces — bindings — in Python, Java and other high-level languages so that developers can take advantage of this type of memory. This advancement in memory technology brings new use cases that we never imagined before for things like artificial intelligence and big data analytics.
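To make the persistence pattern concrete, here is a minimal, hypothetical sketch. The real Intel bindings come from the Persistent Memory Development Kit (PMDK, e.g. libpmemobj and pmemkv); to stay self-contained and runnable anywhere, this example stands in a memory-mapped file for the persistent-memory region, so the function names and layout are illustrative, not the actual PMDK API:

```python
import mmap
import os
import tempfile

# Illustrative sketch only: real persistent-memory programming goes
# through PMDK bindings. Here a memory-mapped file plays the role of
# the persistent region, so the example runs on ordinary hardware.

REGION_SIZE = 4096  # size of our pretend persistent-memory region

def create_region(path):
    """Create a fixed-size file to act as the 'persistent' region."""
    with open(path, "wb") as f:
        f.write(b"\x00" * REGION_SIZE)

def store(path, payload):
    """Write payload at the start of the region via a memory mapping."""
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), REGION_SIZE) as mm:
            mm[0:len(payload)] = payload
            mm.flush()  # analogous to flushing CPU caches to pmem

def load(path, length):
    """Map the region again and read the data back -- no reload step."""
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), REGION_SIZE) as mm:
            return bytes(mm[0:length])

path = os.path.join(tempfile.mkdtemp(), "pmem_region")
create_region(path)
store(path, b"hello, persistent world")
print(load(path, 23))  # the object survives without a database reload
```

The point of the sketch is the shape of the workflow: the data is addressed directly in (mapped) memory, so "restarting" means remapping, not replaying a load from disk.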
Q: You mentioned Intel’s collaboration with high-level language communities. Can you give us more specifics?
I’m proud of the active role that we play in these projects and communities — through both our code and non-code contributions — to ensure that they grow healthy and strong. We participate in some of the work groups, along with committees that define interfaces into these groups. We are engaged in the Node.js benchmarking work group, together with other large cloud service providers and software companies. We are also part of the Communication Committee, which targets Node.js ecosystem development. And because diversity is so important to the health of communities, we’ve been involved in some of the diversity-related initiatives, supporting Outreachy fellowships for runtimes and the Node.js diversity café, as well as participating in PyLadies. One of our team members is involved in the technical steering committee for Node.js and just gave a great presentation at last week’s Node + JS Interactive event.
Q: In looking forward, what are you most excited about?
As the way we measure performance in the cloud evolves, we’ve been investing in contributing benchmarks to the respective open source projects, including Python and Node.js. These new benchmarks simulate applications written in these languages, and offer performance data very close to what you would see in a real-world environment. We’re collaborating with these communities to standardize benchmarks. In turn, we’re drawing a lot of insight from the collaboration with these communities, who tell us how they are using the languages, and for which applications, so that we can simulate those scenarios in the way we measure performance. That’s an area where we will continue to invest — evolving the way performance is measured in the cloud.
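The flavor of such application-simulating benchmarks can be sketched with nothing but the standard library. The workload and names below are purely illustrative; they are not taken from the actual Python or Node.js community benchmark suites:

```python
import timeit

# Toy sketch of a workload-style microbenchmark. Real community suites
# model whole applications (web serving, templating, JSON handling),
# not a single function; this just shows the measurement pattern.

def render_template(user, items):
    """A small string-formatting workload, standing in for a web view."""
    rows = "".join(f"<li>{item}</li>" for item in items)
    return f"<html><body><h1>Hi {user}</h1><ul>{rows}</ul></body></html>"

def workload():
    return render_template("ada", [f"item-{i}" for i in range(50)])

# Repeat the workload and keep the best time, as the timeit docs
# recommend, to reduce noise from the rest of the system.
best = min(timeit.repeat(workload, number=1000, repeat=3))
print(f"best of 3 runs: {best:.4f}s for 1000 iterations")
```

Benchmarks built this way measure a scenario users actually run rather than a synthetic inner loop, which is exactly the shift described above.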
I’m also excited about opportunities to meet with our ecosystem partners — major cloud service providers as well as enterprise users that have business-critical deployments based on these runtimes — and learn from them about how they are evolving the language so that we can drive the language forward together.
Q: Before we close, is there anything you’d like to add?
In the cloud environment, the world is changing for developers. You have the migration to high-level programming languages and the transformation in application architecture and deployment models. You have micro-services — breaking down monolithic applications into functional components and implementing these components separately, in different languages. And you have serverless computing — uploading your code that is then executed, such that you pay only for the time it takes to execute the code, rather than paying for a full machine. All of these have the potential to bring far more efficiencies, with increased velocity at reduced cost. We’re encouraging developers to keep these factors in mind so they can get the full benefit of the innovation that is happening in the cloud.
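As a rough illustration of the serverless model mentioned above, a function-as-a-service entry point can look like the sketch below. The handler signature and event shape are assumptions modeled loosely on common platforms, not any specific provider's API:

```python
import json

# Minimal sketch of a serverless-style function: the platform invokes
# handler(event, context) on demand and bills only for execution time.
# The event keys and response shape here are illustrative assumptions.

def handler(event, context=None):
    """Receive an event, do a small unit of work, return a response."""
    name = event.get("name", "world")
    body = {"message": f"hello, {name}"}
    return {"statusCode": 200, "body": json.dumps(body)}

# The platform would normally invoke the function for each request;
# calling it directly here just shows the contract.
response = handler({"name": "cloud"})
print(response["body"])
```

There is no server process for the developer to manage: the function exists only for the duration of each invocation, which is where the pay-per-execution economics come from.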