[lug] Setting up failover in Linux?

Will will.sterling at gmail.com
Tue Apr 24 16:35:46 MDT 2012


Clustering software is written so that two machines can share storage
and IP addresses without going haywire.  I think you are expecting a
little too much of these packages with your application upgrade plans.
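
To make the split-brain risk concrete, here is a rough Python sketch of the
kind of logic a resource manager handles for you.  The peer address, VIP, and
interface are all made up, and a real stack adds quorum and fencing on top,
which is exactly why you reach for the packaged software instead of a script
like this:

    #!/usr/bin/env python
    # Naive floating-IP failover loop -- illustration only.
    import subprocess
    import time

    PEER = "192.168.1.2"      # other node's fixed address (hypothetical)
    VIP = "192.168.1.100/24"  # shared service address (hypothetical)
    IFACE = "eth0"

    def peer_alive():
        # A single ICMP probe; real clusters use redundant heartbeat links.
        return subprocess.call(["ping", "-c", "1", "-W", "1", PEER]) == 0

    while True:
        if not peer_alive():
            # Claim the service IP.  Without fencing, a network partition
            # (rather than a dead peer) leaves BOTH nodes holding the VIP --
            # the "going haywire" the clustering packages exist to prevent.
            subprocess.call(["ip", "addr", "add", VIP, "dev", IFACE])
        time.sleep(5)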

On Tue, Apr 24, 2012 at 4:32 PM, Will <will.sterling at gmail.com> wrote:

> I have never seen a long-distance cluster running an application on
> replicated storage such as DRBD.  Even clusters in the same room, on
> shared FC storage, are usually taken offline while their applications are
> updated.
>
> I've seen data being pushed in real time, but there is always some sort of
> fencing mechanism to prevent corruption.  Usually it's a database with a
> transaction log, but I've also seen SANs that replicate into a swap area
> and only merge the changes in once all of the blocks have been transmitted.
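>
> As a toy illustration of that swap-area idea, something like the following
> staged apply (the paths and the batch manifest are hypothetical) keeps the
> live copy consistent no matter when the transfer dies:
>
>     # Staged replication apply -- sketch only, not how any real SAN works.
>     import os
>     import shutil
>
>     STAGE = "/var/replica/stage"  # blocks land here as they arrive
>     LIVE = "/var/replica/live"    # the copy the application reads
>
>     def batch_complete(expected):
>         # 'expected' is the block list from the batch manifest (assumed).
>         return all(os.path.exists(os.path.join(STAGE, b)) for b in expected)
>
>     def apply_batch(expected):
>         # Merge only once every block has been transmitted, so a
>         # mid-transfer failure never leaves LIVE half old, half new.
>         if not batch_complete(expected):
>             return False
>         for b in expected:
>             shutil.move(os.path.join(STAGE, b), os.path.join(LIVE, b))
>         return True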
>
>
> On Tue, Apr 24, 2012 at 4:24 PM, Rob Nagler <nagler at bivio.biz> wrote:
>
>> Hi Will,
>>
>> >   Clusters are run in maintenance mode during these situations so
>> > failover will not occur until both systems are back to a similar state.
>>
>> I can see how this would help prevent an accidental, automatic
>> failover from wreaking havoc.  However, I'm interested in the problem
>> that the code and the data are not part of the same transaction.
>>
>> Perhaps I can explain it this way: assume the primary and secondary
>> are separated by a large enough distance that the disk blocks
>> associated with the data and code upgrade take "a while" to copy.
>> During that period, the primary fails.  What state is the secondary
>> in?  Can it take over?
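>>
>> A sketch of how one might guard against that on the secondary, assuming
>> DRBD and the classic /proc/drbd status format of the 8.x series (the
>> exact format varies by version):
>>
>>     # Refuse promotion unless the secondary's disk is fully up to date.
>>     # /proc/drbd lines look roughly like (DRBD 8.x, assumed):
>>     #   ... cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate ...
>>     def secondary_is_consistent(path="/proc/drbd"):
>>         with open(path) as f:
>>             return "ds:UpToDate/UpToDate" in f.read()
>>
>>     if secondary_is_consistent():
>>         print("Safe to promote (drbdadm primary <resource>).")
>>     else:
>>         # Mid-copy failure: the secondary holds a mix of old and new
>>         # blocks and would serve torn code/data if it took over.
>>         print("Inconsistent -- do not promote.")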
>>
>> Rob
>> _______________________________________________
>> Web Page:  http://lug.boulder.co.us
>> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
>> Join us on IRC: irc.hackingsociety.org port=6667 channel=#hackingsociety
>>
>
>