[lug] Promise Vtrak performance
User Dan Ferris
dan at usrsbin.com
Sun Nov 25 13:21:58 MST 2007
Ahhh, Promise Technology, one of my most FAVORITE topics when it comes to
disk arrays.
We had the misfortune to purchase 4 of their worthless arrays for (get
this) a new Oracle database cluster, as opposed to, say, EMC or HDS. My
boss loved the idea of having 6 TB of storage per array at the expense of
every other feature (dual controllers, replication, snapshots, a reliable
OS, etc.).
We had nothing but trouble with the arrays on our Solaris 10 boxes and I
still have nothing but trouble with the two remaining in production.
Here is a litany of the problems we experienced.
1. Single controllers. The rocket scientist I work for wanted to
software-mirror the arrays instead of buying arrays with redundant
controllers. Buh-bye to decent performance there.
2. Unable to access the same LUN through different Fibre Channel ports
unless using Round Robin multipathing. We wanted to use LRU load
balancing with Veritas Volume Manager. As my SAN guy put it, "These don't
even conform to the Fibre Channel specs."
3. Dismal performance. They advertise up to 400 MB/sec for the M500f,
but we never came close to that number despite extensive testing and
tweaking. On Linux I've never gotten one array to transfer over 90 MB/sec
even with caching turned on. On Solaris I never got it over 100 MB/sec on
one fibre port, and we never got it over 200 MB/sec with multipathing (a
simple way to sanity-check these numbers is sketched after this list). On
one support call, my boss accused them of false advertising, which is what
it took to get the support case escalated. We eventually got up to the VP
of support and marketing, a serious dickhead named Sam Sirisena.
4. The Linux version of Veritas Volume Manager 4.1 would crash the
controller when vxdmp started. When I called about it, their tech asked
me what Veritas Volume Manager was.
5. A dead disk would crash the controller. They sort of fixed this in the
latest firmware.
6. The final straw, and the reason we are replacing them completely, was
the arrays running out of memory every two weeks and crashing. It starts
with the web UI dying and ends with the array vanishing from the fabric.
Rather than trying to fix it, their tech support said, "Just upgrade the
memory." Upgrading the memory did not fix the problem. The best part is
that all of their arrays suffer from this issue; they just take varying
lengths of time to finally crash.
7. Oh yes, the tech support. We can't NOT mention the wonderfulness that
is Promise's tech support. I've never in my life worked with more useless
tech support people. Having worked with Cisco, F5, and Red Hat (all
fairly decent in my book), dealing with Promise was a rude shock. Their
tech support strategy is to try a few book solutions and then RMA whatever
part they think is bad.
8. Next time you call their tech support, ask them if they are Sun
hardware certified. When they say no, ask them how they lost their Sun
hardware certification. If they won't tell you, email me; I would love to
share the story. Suffice it to say, they probably have a hit contract out
on our SAN consultant, who incidentally runs the Sun hardware test lab.
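If you want to sanity-check your own sequential numbers, a couple of big
dd runs straight against the array are enough to show the problem;
something along these lines (the mount point, file name, and size are
just placeholders for whatever your setup looks like):

    # sequential write of a file much larger than RAM, timed to get MB/sec
    time dd if=/dev/zero of=/mnt/vtrak/ddtest bs=1M count=16384
    # sequential read of the same file back
    time dd if=/mnt/vtrak/ddtest of=/dev/null bs=1M

Keep the file a good deal bigger than both system RAM and the controller
cache, otherwise you are mostly measuring the caches instead of the array.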
My recommended fix to your problem is to dump that gear if you can.
Promise is a serious case of getting exactly what you pay for. As I put
it: don't buy storage arrays based only on price and capacity.
Dan
On Sun, 25 Nov 2007, Rob Nagler wrote:
> We have two Promise Vtrak 12110 enclosures with 12 x 500 GB SATA drives
> configured with RAID5 as a single array and 3 logical drives (2 x 2TB,
> 1 x 1TB). Performance is abysmal now that we've been using this setup
> for a while. We are only using one of the 2TB logical drives; the
> other two drives are mostly empty.
>
> We are rsyncing nightly for backups from a variety of machines. At
> first this was working just fine. As the file system grew to
> hundreds of inodes, rsync got slower and slower. I don't think it is
> rsync as the data mix on the machines we are backing up is not that
> different. The rsync is hung in a device wait on the backup
> machine pretty much any time I look.
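> To be concrete, by "device wait" I mean the rsync process is sitting in
> D state whenever I look; roughly what I check, just as an illustration:
>
>   ps aux | grep rsync   # STAT column shows "D" (uninterruptible I/O wait)
>   iostat -x 5           # await/%util on the Vtrak device while rsync is stuck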
>
> The one complication is that we are linking with cp --archive --link
> to keep historical copies. This means static inodes are being
> referenced a hundred or so times. Static inodes make up most of the data
> mix, so I suspect there's some behavior relating to how ext3
> distributes inodes and how the Vtrak distributes its data.
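> For context, the nightly cycle is basically the standard hardlink
> rotation, something like this (the host name and paths here are just
> illustrative):
>
>   cp --archive --link /backup/host1/current /backup/host1/$(date +%F)
>   rsync -a --delete host1:/data/ /backup/host1/current/
>
> so a file that never changes keeps the same inode and just picks up
> another hard link every night.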
>
> Even simple operations, like rm -rf on old backups or du, take hours
> and sometimes days to run.
>
> I have seen references on the net to slow performance on the Promise
> enclosures (which are Linux boxes, btw). Promise has asked for lots
> of information, but they haven't provided any answers.
>
> The performance is abysmal on both RH7.2 and RHEL4.5 (CentOS). It
> doesn't matter if the machine is busy or idle. It can be doing
> nothing, and a du will not complete in an hour, when I think it should
> (and does in similar mixes on much slower machines).
>
> Rob