I live my life in the field of information technology. Most of my work centers on making databases faster and more highly available. The topics I personally and professionally enjoy are not seen as the most glamorous or flashy of areas, but they are some of the most foundationally important aspects of a data center, be it in the cloud or on-premises. These are areas that every data professional must deal with every day. Most DBAs assume that any performance problem must be in the underlying hardware, even though we know the problem is sometimes in the code itself. When the hardware is at fault, it is most often storage performance. Part of my professional responsibility is to make sure that the storage subsystem underneath these massive databases runs as fast as humanly possible, and sometimes that requires new and more innovative storage solutions.
I am absolutely thrilled to have been a delegate for this past Storage Field Day 23 event, held by Gestalt IT during the first week of March this year. The event focuses on storage technologies, both bleeding edge and evolutionary. In my world, revolutionary change is fun, but evolutionary change makes the world go around. My role was to understand what each vendor does and is presenting, and to ask challenging questions that help fully surface the value of the vendor’s offering.
For me, as a serious geek, errr I mean technologist, these events are thrilling in their own way. It is absolutely amazing not only to see the future of the platforms that you work on, but also to see how innovation can still apply to existing technology, driving it forward to be more useful in modern data centers.
Fungible is aiming to create a truly composable data center. A server in today’s data center consists of compute and possibly storage; sometimes, the compute connects remotely to a shared storage device. The challenge with this architecture is that slicing and dicing resources is limited to what can be contained inside that physical unit of compute. Composable architectures are very different, and in my opinion, the future of data centers. A composable architecture contains a chassis of CPUs. A high-speed interconnect then attaches memory located in a different chassis. Storage, GPUs, or anything else you might need live in independently defined but shared units. Software joins these resources over the high-speed interconnect into what you would think of today as one server.
In my world, the database world, composable infrastructure solves a number of pretty serious challenges. Some of my databases have very high CPU requirements, such as decision support and business intelligence systems. Some are designed for analytics and have very high memory requirements, but only for part of the day. Others contain a tremendous number of skeletons in the closet, otherwise known as third-party software, and simply demand obscene amounts of compute, memory, and storage speed.
Composable infrastructure allows administrators to programmatically slice and dice these resources and assign them where they are needed, on the fly. That flexibility helps an organization make better use of finite data center resources, and it has traditionally been missing on-premises, or at least nowhere near as advanced as what the public cloud provides. A solid composable infrastructure for on-premises workloads would make the on-premises data center nearly as flexible as today’s public cloud offerings.
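To make the "slice and dice" idea concrete, here is a minimal sketch of how a composable-infrastructure control plane might carve a logical server out of shared resource pools and return the capacity afterwards. Every class, method, and pool name here is my own illustrative assumption for the sake of the example, not Fungible’s actual API.

```python
# Hypothetical model: disaggregated resource trays as shared pools, and a
# "composed" server that claims capacity from them over the interconnect.
# All names are illustrative assumptions, not a real vendor API.

class ResourcePool:
    """Tracks one shared tray of a resource (CPU cores, GiB of RAM, GPUs)."""
    def __init__(self, kind, capacity):
        self.kind = kind
        self.capacity = capacity
        self.allocated = 0

    def claim(self, amount):
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"{self.kind} pool exhausted")
        self.allocated += amount
        return amount

    def release(self, amount):
        self.allocated -= amount


class ComposedServer:
    """A logical server stitched together from independent resource pools."""
    def __init__(self, name, pools, **requests):
        self.name = name
        self.pools = pools
        # Claim each requested resource from its shared pool.
        self.resources = {kind: pools[kind].claim(amount)
                          for kind, amount in requests.items()}

    def decompose(self):
        """Return all claimed capacity to the shared pools."""
        for kind, amount in self.resources.items():
            self.pools[kind].release(amount)
        self.resources = {}


pools = {
    "cpu_cores": ResourcePool("cpu_cores", 128),
    "memory_gib": ResourcePool("memory_gib", 2048),
    "gpus": ResourcePool("gpus", 8),
}

# Carve out a CPU-heavy BI server for the day...
bi_server = ComposedServer("bi-olap-01", pools, cpu_cores=64, memory_gib=512)
print(pools["cpu_cores"].allocated)   # 64

# ...then hand the capacity back when the batch window closes.
bi_server.decompose()
print(pools["cpu_cores"].allocated)   # 0
```

The point of the toy model is the lifecycle: resources live in shared trays, a server is a temporary binding of slices from those trays, and tearing it down frees capacity for the next workload.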
Fungible knows that the interconnects between these units of compute resources become the performance bottleneck for shared resources such as CPU or memory. Handled traditionally, managing all of this traffic places an exceptional burden on the compute resources themselves.
Fungible’s major innovation is what they call the Fungible DPU. The DPU is a purpose-built PCIe card that provides the highest-speed interconnect and a management platform, offloading this burden from the host onto the accelerator card. Fungible claims it can handle this data 20 times faster than traditional GPUs and CPUs.
My world is usually spent balancing the demands of an unlimited data platform against the limits of finite compute resources. If I can move resources around my workloads, I have more flexibility to design a data platform that better serves the business. Composable resources let me do this in a much more elegant way than building massive virtualization hosts whose capacity may or may not be fully utilized.
In my opinion, composable infrastructure will eventually become the future of data centers, both in the cloud and on-premises. If you run out of CPU, simply add a tray of CPUs and rebalance. The same goes for memory, storage, GPUs, or anything else you can think of. In my world, as the underlying components become faster and faster, the interconnects become a larger bottleneck, and Fungible’s solution lets people like me make sure that the bottlenecks to your data stay within the database code. I’m loving the future!