HP 3PAR: The AO Caveat

Earlier this year, we posted about a new SAN bidding process and the eventual winner, the HP 3PAR V400. Now that we’ve been live on it for about six weeks, it’s time for a small update on a particular feature that might weigh in on your own decision, if you’re in the market.

Our new V400 was our first foray into the tiered storage market, and we liked what we heard about getting SSD speed on hot blocks without paying SSD prices to hold merely average data. EMC claimed advanced metrics, granular policies, and the ability to optimize as frequently as every 10 minutes. This sounded REALLY good. 3PAR cited some of the same capabilities, sans the frequency, and we assumed the two were about even, granting that results might be slightly delayed on the V400 (vs. the VMAXe). What we’ve discovered isn’t so symmetric.

HP 3PAR leverages a feature it calls “Adaptive Optimization” (AO), which moves 128MB regions of data between storage tiers (0: SSD, 1: FC, 2: NL). Management of this feature was/is incorporated into the 3PAR System Reporter product, which accumulates array performance data on an ongoing basis. While that repository of information is definitely the right foundation to build AO upon, the implementation on top of it is very elementary.
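
To put that 128MB granularity in perspective, here’s a back-of-the-envelope sketch (plain Python, nothing 3PAR-specific) of how many movable regions a volume breaks into:

```python
# Back-of-the-envelope illustration of AO's granularity (not 3PAR code).
REGION_MIB = 128                      # AO moves data in 128 MB regions
TIERS = {0: "SSD", 1: "FC", 2: "NL"}  # tier 0 is fastest, tier 2 is slowest

def regions_in_volume(vv_size_gib: float) -> int:
    """Number of 128 MiB regions a Virtual Volume of this size breaks into."""
    return int(vv_size_gib * 1024 // REGION_MIB)

# A 2 TiB VV is tracked (and moved) as 16,384 independent regions, each of
# which AO can place in any of the three tiers.
print(regions_in_volume(2048))  # -> 16384
```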

AO configuration is based on policies which apply to Common Provisioning Groups (CPGs), which are the containers/metadata holders of Virtual Volumes (VVs), otherwise known as LUNs in competitor storage products.

To briefly explain the configuration of an AO policy: the tiers are CPGs (a CPG is a single disk type and RAID configuration; e.g. SSD RAID 5), and the tier sizes are the maximum space the policy is allowed to use in a given CPG. For scheduling, the date/weekday/hour determine when the optimization runs, and any movements are based on the amount of data (in hours) specified in Measurement Hours, which ranges from 3 to 48 (e.g. run at 17:00 based on the past 9 hours of data). Mode determines how aggressively regions are moved up or down (Performance, Balanced, or Cost), and the last setting is simply whether the policy is enabled. A rough sketch of those knobs follows.
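
In data-structure terms, a policy boils down to something like this. The Python below is my own paraphrase, not 3PAR’s interface; the field and CPG names are invented, and the sizes come from our “Gold” policy described further down:

```python
from dataclasses import dataclass

@dataclass
class AOPolicy:
    """Paraphrase of an AO policy's knobs; field names are illustrative, not 3PAR's."""
    name: str
    tier_cpgs: dict         # tier number -> CPG (one disk type + RAID level, e.g. SSD RAID 5)
    tier_max_gib: dict      # tier number -> maximum space the policy may consume in that CPG
    schedule: str           # date / weekday / hour the optimization run kicks off
    measurement_hours: int  # trailing hours of performance data to consider (3-48)
    mode: str               # "Performance", "Balanced", or "Cost"
    enabled: bool = True

gold = AOPolicy(
    name="Gold",
    tier_cpgs={0: "SSD_r5", 1: "FC_r5", 2: "NL_r6"},  # hypothetical CPG names
    tier_max_gib={0: 1200, 1: 10000, 2: 500},
    schedule="daily 17:00",
    measurement_hours=9,                              # i.e. the past 9 hours of stats
    mode="Performance",
)
```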

What we’ve found is that these options fall short of our tiering hopes and tend to de-optimize our storage: things end up running on the slower side because AO decides to move regions down to NL (it seems heavily biased toward NL, even in a “Performance” mode configuration).

Before I go further, I should say that we have no hands-on experience with EMC storage to prove that these limitations don’t exist there as well; my understanding from our technical review was simply that more intelligence is built into the VMAXe and its kin.

Our main complaint is the reactive nature of AO. In our environment, the cycles of data activity follow the day of the week more than any particular hour of the day. In other words, Mondays look like Mondays, Tuesdays like Tuesdays, and so on. With AO, we can only base the “optimization” on up to 48 hours of immediately preceding data, so even if we focus on weekday business hours, the nightly movements will prepare Tuesday for Monday’s behavior, and so on.
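
Here is a toy sketch of why that hurts a weekly-cyclical workload (the dates and times are made up, and the framing is mine, not 3PAR’s): with only a trailing window capped at 48 hours, the run that lays out Monday’s storage is always looking at the weekend, never at previous Mondays.

```python
from datetime import datetime, timedelta

def ao_window(run_time: datetime, measurement_hours: int) -> tuple:
    """AO-style lookback: the trailing N hours (N capped at 48) before the run."""
    hours = min(measurement_hours, 48)
    return (run_time - timedelta(hours=hours), run_time)

def weekday_history(run_time: datetime, weeks_back: int = 4) -> list:
    """What we'd prefer to learn from: the same weekday over the previous few weeks."""
    return [(run_time - timedelta(weeks=w)).date() for w in range(1, weeks_back + 1)]

run = datetime(2012, 10, 28, 22, 0)              # a Sunday-night optimization run
print(ao_window(run, 48))                        # Friday 22:00 -> Sunday 22:00: weekend data
print(weekday_history(run + timedelta(days=1)))  # the previous four Mondays we'd rather use
```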

From what EMC told us, their tiering software lets you decide what percentage of each type of storage a given policy uses. So you might have a policy that targets 20% SSD, 70% FC, and 10% NL, and the array will move hot/warm/cold data around to hit those shares. In 3PAR AO, the tier size settings are just “allowable” space; there is no way to encourage AO to actually use the SSD, for example. It may simply decide the data is cold and move it down to NL, or to wherever the coldest allowance lives.
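
The difference is easier to see side by side. The sketch below is my own framing, in plain Python, and is not either vendor’s configuration syntax:

```python
# Illustrative only -- neither vendor's actual configuration.

# Target-style policy (the behavior EMC described to us): the array tries to
# keep roughly these shares of the policy's data on each tier.
emc_style = {"SSD": 0.20, "FC": 0.70, "NL": 0.10}

# Cap-style policy (3PAR AO): ceilings only. Nothing pushes data *up* toward
# SSD; AO just can't exceed these amounts in each CPG.
ao_style_gib = {"SSD": 1200, "FC": 10000, "NL": 500}

def target_style_plan(total_gib: float) -> dict:
    """Hot/warm/cold data is sized to hit the configured percentages."""
    return {tier: round(total_gib * share) for tier, share in emc_style.items()}

print(target_style_plan(10000))  # {'SSD': 2000, 'FC': 7000, 'NL': 1000}
# Under AO, the same 10 TB could legitimately end up almost entirely on FC/NL
# as long as no per-tier cap is exceeded -- which is roughly what we see.
```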

3PAR’s answer is to shrink the tier size setting so the policy can’t use more than ### GiB, but this becomes tedious, depending on how many VVs you have in each CPG. We went with a three-policy configuration of “Gold”, “Silver”, and “Bronze”, with greater or lesser amounts of SSD, FC, and NL as you move across the spectrum (Gold has 1200 GB of SSD, 10000 GB of FC, and 500 GB of NL; Bronze has no SSD, 10000 GB of FC, and 10000 GB of NL; Silver is a balance of the two). We find that although we would like Gold to be aggressive and use all of its SSD, it often leaves hundreds of GBs unused.
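
For illustration, here are those caps as data, along with the kind of headroom check we end up doing by hand (the 700 GB figure below is hypothetical, but representative of what we see):

```python
# Our per-tier caps in GB, as described above; Silver sits between these two.
policies = {
    "Gold":   {"SSD": 1200, "FC": 10000, "NL": 500},
    "Bronze": {"SSD": 0,    "FC": 10000, "NL": 10000},
}

def ssd_left_on_the_table(policy: str, ssd_used_gb: float) -> float:
    """How much of a policy's SSD allowance AO is simply not using."""
    return policies[policy]["SSD"] - ssd_used_gb

# If AO has only promoted ~700 GB into Gold's SSD CPG (a hypothetical reading),
# roughly 500 GB of flash sits idle despite a "Performance" mode policy.
print(ssd_left_on_the_table("Gold", 700))  # -> 500
```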

All that said, we are meeting with the HP 3PAR folks tomorrow to see about tweaking the policies (and probably creating new ones) to improve the behavior, but some of these issues will remain unsolved (e.g. the scheduling and the reactive nature of the whole thing).

For all this negativity, 3PAR shines with large pools of homogeneous storage (e.g. hundreds of FC disks), and as it stands, I’m not sure we didn’t make a mistake by insisting on a tiered solution rather than a single 300 x 400GB FC drive configuration. I believed in the power of SSD; I’m just not yet seeing it in 3PAR’s setup, and I’m not sure 3PAR knows how to use those drives properly. So…consider that when shopping. They really do make a good argument for good ol’ reliable FC disks in large quantities.