All-flash is becoming more popular and dropping in price; people are considering all-flash data centers, for example. On the other end, we're seeing hybrid arrays benefiting customers who can take advantage of multiple tiers of storage, so there's a natural tension there. But generally, things are moving toward a solid-state world, and there are a lot of questions you have to ask about what solid state can really do for you. Is it really prime time to replace everything in your data center? What's happening in the all-flash market, and is it becoming commoditized? What should you be looking out for if you're in the market for all-flash storage solutions? We have on Bill Miller, the CEO of XIO, and he's going to talk us through their perspective on the all-flash market.
So tell us, just in a nutshell, a little bit about the G4 that you guys have just come out with. What does that really bring to market in the all-flash space?
XIO has, of course, been in the data storage business for a long time, and G4 is the fourth generation of our architecture, especially of the code that runs these arrays. The earliest generations were really focused on making disk drives work a lot better. The company has roots inside Seagate, so it really cared about how drives perform in arrays.
For years that was what XIO was known for, but there was a lot of really interesting code and IP in these arrays that applies to flash. We've done a complete rework of it to make sure we kept the parts that worked really well and applied them to flash, and we got rid of some of the parts that were overhead and maybe didn't work so well in the new world.
So generation four was really focused on flash. There are some real benefits to the way we do data layout onto the media that have always been there. In the disk world that layout was mostly about performance, but it works really well with flash, too, providing overprovisioning and wear leveling at the array level that makes sure you get good reliability out of the flash over its lifetime, and even greater performance out of the flash itself. Flash arrays also needed deduplication and data reduction, so we've added data reduction into the code. We've added features that are really table stakes in this market, like snapshots and asynchronous replication. And we completely redid our management and UI capability to simplify it and make it a modern, web-services-based interface.
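As a rough illustration of what wear leveling and overprovisioning at the array level can mean (a generic sketch under my own assumptions, not XIO's actual data layout or algorithm): the array tracks wear per flash device, steers new writes toward the least-worn device, and holds back a slice of raw capacity.

```python
# Generic sketch of array-level wear leveling: new writes are steered toward
# the least-worn flash device so the pool ages evenly. Concept illustration
# only; this is not XIO's data layout.
from dataclasses import dataclass

@dataclass
class FlashDevice:
    name: str
    bytes_written: int = 0  # cumulative writes, a simple proxy for wear

class Array:
    def __init__(self, devices, overprovision=0.2):
        self.devices = devices
        # Fraction of raw capacity held back (overprovisioning) to give the
        # flash room for garbage collection and wear management.
        self.overprovision = overprovision

    def place_write(self, size):
        """Place a write on the least-worn device in the pool."""
        target = min(self.devices, key=lambda d: d.bytes_written)
        target.bytes_written += size
        return target

pool = Array([FlashDevice(f"ssd{i}") for i in range(4)])
for _ in range(1000):
    pool.place_write(4096)
print([(d.name, d.bytes_written) for d in pool.devices])
```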
So you guys are all in on converting to all-flash. What's happened with flash in the last couple of years? It used to be pretty expensive and high-end, and then companies like Pure and others came around and said, "We can convince you to do all-flash." You guys have now made the switch to all-flash as well. What's happening with the market there?
Well, I think the main thing is that flash arrays have gotten to a place where, certainly when you apply data reduction, if you get reasonable data reduction ratios out of the data you're storing, they end up being cheaper than disk drives. And then there's the simplicity of it. When I talk to some of our customers about flash and their experience with it after years of managing disks, what they tell us is that it's just a lot easier to manage.
You don't really have to think about data placement on your arrays, you get roughly equal performance everywhere, and the reliability is greater. Flash arrays tend not to fail as much, so you don't have to babysit them. So on a total cost of ownership basis, flash is simply cheaper now.
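To make that cost argument concrete, here is a small back-of-the-envelope sketch; the per-gigabyte prices and reduction ratios are illustrative assumptions, not figures from the interview.

```python
# Illustrative comparison of raw vs. effective $/GB once data reduction is
# applied. All prices and ratios are hypothetical assumptions for the sketch.

def effective_cost_per_gb(raw_cost_per_gb, data_reduction_ratio):
    """Effective cost per logical GB after dedup/compression."""
    return raw_cost_per_gb / data_reduction_ratio

disk_raw = 0.03    # assumed $/GB for nearline disk (no data reduction applied)
flash_raw = 0.15   # assumed $/GB for enterprise flash

for ratio in (2, 3, 4, 5):
    flash_effective = effective_cost_per_gb(flash_raw, ratio)
    print(f"{ratio}:1 reduction -> flash at ${flash_effective:.3f}/GB "
          f"vs. disk at ${disk_raw:.3f}/GB")
```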
I guess that depends on who you're buying it from and which solution you're getting. But your argument is that you can bring a lot of IP that you've had for a long time, tailor it for flash, and make a very cost-efficient flash array that, as you said, matches the table stakes of anybody else in the market, right?
Yeah, that's right. The way we looked at the marketplace as we were bringing our ISE G4 900 series to market was that there are a bunch of vendors competing in the all-flash array space, and they're all relatively the same. They're substitutable. Customers were telling us that they're going to shop between vendors, and they'll probably buy arrays from more than one vendor, because there's really no stickiness, switching cost, or complexity issue with these things. They're easy to manage, so you can easily have two or three vendors in your shop, no problem. It doesn't create any additional cost for you, and it does drive your price down.
So we really focused on price, and I mentioned data reduction. When we were looking at data reduction, our engineering team studied how best to do it. We had a bit of an advantage in coming a little late to this game, because we were able to look at how other people had done it, go back to some fundamentals, and come up with an invention. That invention around data reduction allows us to get the same results, the same data reduction ratios, and the same performance out of a data-reduced volume that others do, but at a fraction of the cost. We get it at a fraction of the cost because we're able to do it with much less in the way of CPU and memory resources.
Data reduction is very CPU- and memory-intensive the way others implement it. If you can come up with, as we did, a patent-pending invention in data reduction that uses roughly 25% of those resources, and then amortize that over the cost of the flash in the array, you end up with a bill of materials cost that's only 60% to 70% of what it costs others to build that same array. In a market that is very competitive and really becoming commoditized, where there isn't a lot of stickiness or switching cost, price matters. So we use our cost advantage, pass it along to our customers, and give them a better price in the flash array market.
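As a rough way to see how that amortization works out, here is a toy calculation; the dollar figures are invented for illustration, while the 25% resource factor and the resulting 60-70% range echo the claims above.

```python
# Toy bill-of-materials sketch: cheaper data-reduction hardware amortized
# against the flash in the array. Dollar values are assumptions; the 0.25
# factor mirrors the "25% of those resources" claim above.
flash_cost = 25_000        # assumed cost of the flash media in one array
typical_dr_hw = 20_000     # assumed CPU + memory for a typical data-reduction design
lean_dr_hw = typical_dr_hw * 0.25   # same work with ~25% of those resources

typical_bom = flash_cost + typical_dr_hw
lean_bom = flash_cost + lean_dr_hw

print(f"typical BOM: ${typical_bom:,.0f}")
print(f"lean BOM:    ${lean_bom:,.0f} ({lean_bom / typical_bom:.0%} of typical)")
```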
So, would you say that we can consolidate many different kinds of workloads onto a G4 kind of solution?
Yeah, absolutely. Again, we came a little bit late to this game. Most of the people who preceded us in bringing all-flash arrays with data reduction to market took the approach of data reduction all the time: all workloads, all data, all the time, across the whole array. One of the things we recognized is that in the very early days of data-reduced flash arrays, people liked to talk about VDI workloads that were getting very large data reduction numbers like 15:1, and a lot of that was because early VDI software would take all of the code on every desktop and put it out in the VDI environment as is. So they had a lot of replicated code: an entire copy of the operating system, the application environment, and everything else for every desktop.
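A minimal sketch of why those early VDI deployments deduplicated so well: block-level deduplication keeps one copy per unique block, so a hundred desktops carrying identical operating system blocks collapse to a single stored instance. The chunk size and hashing here are generic assumptions, not any vendor's implementation.

```python
import hashlib

# Toy block-level deduplication: identical blocks (such as the same OS image
# replicated across many virtual desktops) are stored only once.
BLOCK_SIZE = 4096

def dedupe(volumes):
    """Return a store mapping block hash -> block for a set of logical volumes."""
    store = {}
    for volume in volumes:
        for offset in range(0, len(volume), BLOCK_SIZE):
            block = volume[offset:offset + BLOCK_SIZE]
            store[hashlib.sha256(block).hexdigest()] = block
    return store

# A hypothetical 1 MiB "OS image" made of 256 distinct blocks, copied as-is
# onto 100 virtual desktops.
os_image = b"".join(bytes([i]) * BLOCK_SIZE for i in range(256))
desktops = [os_image] * 100

unique = dedupe(desktops)
logical_blocks = sum(len(v) // BLOCK_SIZE for v in desktops)
print(f"logical blocks: {logical_blocks}, stored blocks: {len(unique)}")
print(f"reduction ratio: {logical_blocks // len(unique)}:1")
# Real VDI ratios like 15:1 are lower because desktops also hold unique data.
```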
Where's XIO going next? NVMe or convergence?
So at XIO, we've been in the high-reliability, high-performance external data storage array business for a long time, long before I got here. We've been a great vendor in that marketplace, and certainly our ISE G4 900 series arrays are a great next step there. Beyond that, we're going to stay in that marketplace.
We have a roadmap there that will bring an NVMe array to market. I'm not willing to talk about the time frame quite yet, but what I will say is that there's another path we've been going down here at XIO for the last couple of years, which is around edge computing: edge micro data centers, edge micro clouds. We see an emerging market opportunity for converged compute, compute offload, and storage in single systems. We're using our expertise in systems design to build what we call Fabric Express, a switched PCIe fabric that allows you to put a lot of compute horsepower, a lot of compute offload horsepower, and a lot of NVMe storage on one fabric in a very small enclosure.
And those are being used for really interesting applications around real-time streaming data analytics and big data: taking hyper-scale concepts and big data analytics and collapsing them down into one node to make them more deployable, or just less expensive and easier to manage. The way I think about it, hyper-scale data center architecture and cloud architecture are great for applications that interact with people. But in a world of autonomous machines, where sub-second response times are suddenly not good enough and you have to think about sub-microsecond response times for ingesting data, running analytics against that data, and generating responses those machines can use, you have to make that happen closer to where those machines are. You can't make it happen in some faraway data center.