[lug] My last hope....and nerve

Dan Ferris dan at usrsbin.com
Mon Oct 29 19:24:14 MDT 2007


Hey Steve,

I don't know if I suggested it on AIM the other day, but did you try a 
different SCSI controller?  We still have that old crappy Adaptec one 
around.  It never worked very well, but it might be something else 
to try.
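
Before swapping hardware, it might also be worth confirming exactly 
which controller and driver the kernel is binding to.  A rough sketch 
(assuming the LSI card uses the Fusion MPT driver -- your module names 
may differ):

    # Identify the SCSI controllers the kernel can see
    lspci | grep -i scsi

    # Check which SCSI driver modules are loaded (mptbase/mptscsih on
    # many LSI cards; adjust if yours uses something else)
    lsmod | grep -i mpt

    # Watch the bus reset messages in the kernel log
    dmesg | grep -i -E 'scsi|reset' | tail -20

At least then you'd know exactly which driver to point the finger at 
if you do swap in the Adaptec.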

Dan

Nate Duehr wrote:
> 
> On Oct 29, 2007, at 6:28 PM, D. Stimits wrote:
> 
>> Steve A Hart wrote:
>>> I'm still dealing with two Promise UltraTrak RM8000 RAID arrays, and 
>>> I'm getting desperate to find an answer to my problem.  Here's the 
>>> rundown, and hopefully someone out there can help.
>>>
>>> Let's keep this simple.  I have a single Promise UltraTrak RM8000 
>>> connected to an LSI Logic SCSI card.  The OS is Fedora Core 6, and 
>>> when the OS starts up, all I see is a SCSI bus reset repeating over 
>>> and over.
>>>
>>> I can say with 100% certainty that the problem is NOT the following:
>>> - the SCSI host ID
>>> - the SCSI cable
>>> - the terminator (terminated correctly)
>>> - the LSI card
>>> - the motherboard of the host system
>>>
>>> That only leaves the OS and the Promise RAID itself.  I know the 
>>> RM8000 did run on FC4 with the 2.6.16 kernel, but ever since the 
>>> 2.6.18 kernels came out it has not worked.  Now I have it connected 
>>> to an FC6 system and still no luck.
> 
> You mentioned you're dealing with two arrays having the same problem, 
> and they're both plugged into the same type of LSI controller, right?  
> That's interesting from a raw "logical troubleshooting" standpoint -- 
> did they both fail at the same time?  Was it during the OS upgrade?  
> Since you've been involved, have you ever seen them both up and 
> running?
> 
> If it were the hardware, I wouldn't expect them both to be down... 
> separate controllers, separate cables, separate arrays, if I'm reading 
> your description correctly.  That doesn't make much logical sense, so 
> the likelihood that it's the OS just shot sky-high... in my mind 
> anyway, unless I missed something.
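> 
> One way to nail that down: if the old FC4 install with its 2.6.16 
> kernel is still bootable anywhere, plug one array back into it and see 
> whether the resets stop.  On the FC6 box, a quick sketch of what I'd 
> check first (nothing exotic, just confirming what's actually running):
> 
>     # kernel currently booted
>     uname -r
> 
>     # kernels still installed, in case an older one is in the grub menu
>     rpm -q kernel
> 
>     # capture the reset spew for a bug report or a Promise ticket
>     dmesg | grep -i reset
> 
> If the resets follow the newer kernel around, that's solid evidence 
> for your 2.6.18 theory.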
> 
> ------
> 
> Other comments:
> 
> A "crazy" question, perhaps -- are you in touch with Promise regarding 
> the problem?  Are they responding?  Feel free to make their response or 
> lack thereof public, it may help you with leverage to get an Appeasement 
> Engineer on-site.  :-)
> 
> Further future-looking questions:  Is this a critical business system?  
> Is it on a service contract?  Should it be?
> 
> :-) ;-)
> 
> I've got a few Sun A1000 arrays at work that could use a long drop off a 
> tall building too... they're in a lab so they can't cause anyone any 
> further headaches/harm.
> 
> And our customers that still use them are all strongly encouraged to 
> carry Sun service contracts on theirs.  Most have Sun's "Platinum" 
> support level anyway -- so it doesn't take much effort to get an 
> Appeasement Engineer (heh heh... I love that term from BOFH) on-site 
> with drives in hand.
> 
> The hard part is keeping the Sun RMA folks from ordering the 
> wrong-sized drives, since most of those came with 9GB or 18GB drives, 
> and the techs regularly show up with 36s -- thinking that's the 
> "smallest" size they have available.  (GRIN)
> 
> I know for a fact that a friend in Sun's storage group (formerly 
> StorageTek) has mentioned that he's sent folks on-site with SCSI 
> "sniffers" (not a cheap tool by any means!) when customers on service 
> contracts call looking for help with serious storage problems...
> 
> If Promise can't bring that kind of support to bear, perhaps they're not 
> the correct solution for a business platform.  (And if it's not a 
> business platform, disregard those comments, of course...)
> 
> Someone, somewhere knows how to troubleshoot that thing down to 
> wire-level.  I would sincerely hope that the Promise folks have that 
> person or persons on-staff... ready to assist... for the "right" price.
> 
> 
> -- 
> Nate Duehr
> nate at natetech.com
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: lug.boulder.co.us port=6667 channel=#colug
> 
> 

-- 
What are we going to do tonight, Brain? - Pinky
The same thing we do every night.  Try to take over the world. - Brain


