What is the problem... and what is the solution to that problem?
Very rarely is a solution to a problem thought out so in-depth and in detail that it fixes the problem forever, the very first time.
Now, given that problems are almost always solved incrementally, how does one know whether a point-in-time solution is the best one, and whether it points in the direction that will solve the problem permanently?
Take, for example, the case of server virtualization. The problem that had to be solved was the scale at which server hardware in data centers was growing. With server proliferation came the overhead of provisioning, managing, and maintaining all this hardware, along with many disparate layers of software.
Enter VMware, a company that is a household, or should I say "data-center-hold", name today. Its visionary solution, consolidating individual physical servers into virtual machines managed through a single pane of glass, addressed the hardware sprawl and management problem most effectively.
Virtualization solved the compute/server hardware problem that was plaguing data centers, but it created an unforeseen problem that was realized only much later. This problem had to do with the way VMware hypervisors required shared storage for technologies like vMotion to function. SANs (storage area networks) were used to meet this requirement, without a full understanding of the limitations of existing SAN architectures and how unsuited they would be for virtualized environments.
VMware had to be satisfied when storage companies came forward and presented the solution in the form of SANs, but neither side fully grasped the limitations of the SAN architectural design in large-scale virtualized environments.
With SANs, VMware got its shared storage, and companies like EMC and NetApp sold a non-scalable, non-consolidated storage architecture based on a monolithic design.
See the video below, which shows the limitations of SAN environments once servers became virtualized.
It was ironic that the forward-looking virtualization solution was joined at the hip to a technology built on exactly the opposite principles.
Now, in hindsight, we know that running a virtualized compute/server environment on non-virtualized, non-consolidated SAN storage, with minimal scale-out built into the architecture and connected through a complex storage switching infrastructure, is a bad idea.
Had VMware, back in 1998, worked in parallel on compute/server virtualization and storage virtualization at the same time, it would probably have arrived at a solution like the one Nutanix provides with its file system, NDFS.
Nutanix not only learned from what the brilliant minds at Google were doing to provision, manage, and maintain data centers on commodity hardware; it went one step further and created NDFS (Nutanix Distributed File System) for a hypervisor-agnostic marketplace. NDFS is built on the core principle that compute and storage virtualization belong together, hand in hand. By combining the compute and storage layers, NDFS created the first scale-out solution, built on its SOCS (Scale Out Converged Storage) architecture, and delivered an uncompromisingly simple answer to the requirement that VMware and other hypervisors had baked into their architectures: shared storage.
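To make the architectural contrast concrete, here is a small illustrative model (my own sketch, not Nutanix code; all class names and the capacity/IOPS figures are hypothetical). It captures the core difference: a monolithic array's controller throughput is fixed and gets divided among attached hosts, while a converged cluster adds both compute and storage with every node, so capacity grows linearly and per-host throughput stays roughly flat.

```python
# Illustrative sketch: monolithic SAN vs. scale-out converged storage.
# All numbers are made up for demonstration purposes.
from dataclasses import dataclass


@dataclass
class MonolithicArray:
    capacity_tb: float      # fixed at purchase time
    throughput_iops: int    # one controller, shared by all attached hosts

    def per_host_iops(self, hosts: int) -> float:
        # Adding hosts only divides the same controller throughput.
        return self.throughput_iops / hosts


@dataclass
class ConvergedNode:
    capacity_tb: float = 10.0
    throughput_iops: int = 50_000


class ScaleOutCluster:
    def __init__(self) -> None:
        self.nodes: list[ConvergedNode] = []

    def add_node(self) -> None:
        # Each new node brings both compute and storage resources.
        self.nodes.append(ConvergedNode())

    @property
    def capacity_tb(self) -> float:
        # Aggregate capacity grows linearly with node count.
        return sum(n.capacity_tb for n in self.nodes)

    @property
    def per_host_iops(self) -> float:
        # VMs are served largely from local storage on their own node,
        # so per-host throughput stays roughly constant as the cluster grows.
        return self.nodes[0].throughput_iops if self.nodes else 0.0


san = MonolithicArray(capacity_tb=100.0, throughput_iops=200_000)
cluster = ScaleOutCluster()
for _ in range(16):
    cluster.add_node()

print(san.per_host_iops(16))   # shared throughput shrinks as hosts attach
print(cluster.capacity_tb)     # capacity grew with every node added
print(cluster.per_host_iops)   # per-host throughput unchanged
```

The model deliberately ignores replication traffic and network hops; the point is only the scaling shape of the two designs.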
See the video below for the Nutanix SOCS (Scale Out Converged Storage) architecture in action.
Having seen Nutanix's wide adoption, and that the shared-storage problem is most effectively resolved with an NDFS-like architecture, VMware announced its own take on storage virtualization, VSAN, in early 2014, thereby validating that compute/server virtualization is indeed best run on virtualized storage without the need for external SANs.
So, holistically: if the compute problems have been solved by hypervisors like VMware, Hyper-V, and KVM, and storage has been converged with the compute tier in a scale-out architecture designed to deliver near-unlimited scale and performance... what problem is left to solve in today's data centers?
It seems obvious to me from where I stand: the next problem to be solved and simplified is the network.
In the next blog we will discuss the companies focused on making networking simple, and see whether they are solving the problem once and for all or just fixing it incrementally.