Szymon Konefał

Cloud Software Engineer, Intel

Szymon has been working with cloud orchestration software since 2014. During this time, he has deployed and developed proprietary and Open Source extensions for Mesos, Kubernetes, and OpenStack. His work has mostly revolved around minimizing the noisy neighbor problem and enabling safe oversubscription in shared computing clusters. The noisy neighbor problem appears when multiple workloads interfere negatively with each other, degrading their performance.

While pursuing this goal, he worked on enabling oversubscription in Mesos and was part of the development team that created Serenity, an oversubscription plugin for Mesos. Later, he co-developed Swan, a tool for automatic execution of performance isolation studies for cluster schedulers. These studies required thorough system engineering: Szymon had to acquire knowledge spanning from low-level hardware behavior, through the OS and the container orchestration and execution layers, up to the cluster scheduler itself, to understand where additional isolation layers should be added to achieve his goals.

Recently, he took his orchestration knowledge to a new field: the integration of the disaggregated resources of Intel Rack Scale Design with cloud orchestration systems. In this work, Szymon designs how RSD hardware should be integrated with OpenStack and Kubernetes for a seamless operator experience.


OpenStack and the Rack Scale Design – Reconfigure your server’s hardware on the fly!

Intel Rack Scale Design (RSD) is an industry blueprint for the concept known as _composable disaggregated infrastructure_. Unlike in traditional servers, RSD hardware resources such as accelerators or network interfaces are not tightly tied to a single server node. Instead, they are pooled together in _resource pools_ and dynamically attached to compute nodes when needed.

This presentation will introduce the audience to Rack Scale Design and demonstrate the dynamic attachment of disaggregated resources to a node. Depending on the workload's requirements, a PCIe Network Interface Card or an NVMe-over-Fabrics volume will be attached to the compute node automatically, with no human intervention. Besides the demo, the presenter will show the implementation details and share the plans for upstreaming this integration.
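To give a flavor of the kind of automation the demo involves, the sketch below builds a Redfish-style action request for attaching a pooled resource (such as an NVMe volume) to a composed node. The endpoint path, action name, and payload fields here are hypothetical placeholders for illustration only; they are not the actual RSD PodManager API, whose details the presentation covers.

```python
import json


def build_attach_request(node_id, resource_uri):
    """Build a hypothetical Redfish-style 'attach resource' request.

    node_id: identifier of the composed node (placeholder)
    resource_uri: Redfish URI of the pooled device, e.g. an NVMe volume
    Returns the action URL and the JSON request body.
    """
    # Hypothetical action endpoint; the real RSD paths may differ.
    url = f"/redfish/v1/Nodes/{node_id}/Actions/ComposedNode.AttachResource"
    payload = {"Resource": {"@odata.id": resource_uri}}
    return url, json.dumps(payload)


# Example: request attachment of a pooled volume to composed node "1".
url, body = build_attach_request("1", "/redfish/v1/StorageServices/1/Volumes/2")
print(url)
print(body)
```

In a real deployment, an orchestrator plugin would issue such a request to the management endpoint automatically when a workload declares that it needs the resource, which is exactly the "no human intervention" flow the demo shows.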