[lug] MySQL work queue problem

Jason Vallery jason at vallery.net
Mon Oct 9 12:26:47 MDT 2006


MySQL actually has a nifty feature to get around the problem described
below.  If you don't need the update to take effect immediately (your
example just increments some sort of counter), you can use the
"LOW_PRIORITY" keyword on the UPDATE statement.  This causes MySQL to
delay the update until no other clients are reading the table:

http://dev.mysql.com/doc/refman/5.1/en/update.html
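
For example, a counter bump might look something like this (the table and
column names here are made up, just for illustration):

    UPDATE LOW_PRIORITY page_hits
        SET hits = hits + 1
        WHERE page_id = 42;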

Similarly, when doing an INSERT you can use the syntax "INSERT DELAYED" to
have MySQL wait until the table is not being read before processing the
INSERT.  The client doesn't need to wait around for a DELAYED insert; the
row goes into the MySQL server's queue and is written once the table is free.
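
A sketch of what that might look like (again, the table and column names
are just for illustration):

    INSERT DELAYED INTO hit_log (page_id, hit_time)
        VALUES (42, NOW());

Note that DELAYED only works with certain storage engines (MyISAM, MEMORY,
and ARCHIVE), so it fits the MyISAM case being discussed here.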

-J


On 10/9/06, Sean Reifschneider <jafo at tummy.com> wrote:
>
> On Fri, Oct 06, 2006 at 03:49:47PM -0600, Jason Vallery wrote:
> >choice.  Performance is definitely what I am after in this case.
>
> Is it faster though?  MyISAM tables are quite fast at read-centric
> work-loads, but when you're updating as much as you're reading the
> performance drops WAY off.
>
> We had a client that was pulling their web pages out of MyISAM and those
> page records also included a hit count.  So, on every page request they
> were doing an update of that table.  When heavy load came through, you
> could watch as most SELECT queries just hung waiting for the table-level
> lock that the UPDATE had.  As multiple UPDATEs were queued up, the SELECTs
> could end up waiting many seconds to get access to the table.
>
> Something that had row-level locks would have been much better here.  For
> them, we recommended just creating a new table that had the page hits, and
> referred back to the original table.  So, the table locks would happen on
> this other table, and selects for new page views could still happen
> against the main table.
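>
> Roughly something like this (the table and column names are just for
> illustration):
>
>    CREATE TABLE page_hits (
>        page_id INT NOT NULL PRIMARY KEY,
>        hits    INT NOT NULL DEFAULT 0
>    );
>
>    -- Each page view updates only page_hits, so the table-level lock
>    -- never blocks SELECTs against the main pages table:
>    UPDATE page_hits SET hits = hits + 1 WHERE page_id = 42;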
>
> For the problem here, I believe the issue is that MyISAM is doing:
>
>    SELECT from thread one.
>    SELECT from thread two.
>    table lock from one of these threads.
>    UPDATE from that thread.
>    unlock
>    table lock from the other thread
>    UPDATE from that thread.
>    unlock
>
> So, the second SELECT is happening before the lock.  You probably want to
> explicitly lock the table before doing the SELECT:
>
>    Lock table
>    SELECT from thread one.
>       Another SELECT that happens here will be deferred...
>    UPDATE
>    Unlock table
>    The other SELECT will happen here...
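>
> In MySQL syntax that would be roughly the following (the table and
> column names are just for illustration):
>
>    LOCK TABLES tasks WRITE;
>    SELECT id FROM tasks WHERE status = 'new' LIMIT 1;
>    -- ...then mark the row that the SELECT returned...
>    UPDATE tasks SET status = 'working' WHERE id = 42;
>    UNLOCK TABLES;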
>
> In Postgres, in the past when I've done similar things, I've created
> a separate table to store a thread ID and task ID, with a UNIQUE
> constraint on the task ID.  Instead of updating a status in the queue
> table, I would add a "working" record to this table and rely on the
> database's integrity checks to alert me when two jobs picked up the
> same item: if they both tried to insert the same task ID, the unique
> constraint would reject the second one.
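>
> A rough sketch of that scheme (names are illustrative):
>
>    CREATE TABLE task_claims (
>        task_id   INTEGER NOT NULL UNIQUE,
>        thread_id INTEGER NOT NULL
>    );
>
>    -- The first worker to claim task 17 succeeds; a second worker's
>    -- INSERT fails with a unique-constraint violation, so it knows to
>    -- move on to another task.
>    INSERT INTO task_claims (task_id, thread_id) VALUES (17, 3);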
>
> Sean
> --
> "Action...  Romance...  Welding...  That movie has everything."
>                  -- Evelyn Mitchell speaking about _Subway_, 1998
> Sean Reifschneider, Member of Technical Staff <jafo at tummy.com>
> tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability
>
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: lug.boulder.co.us port=6667 channel=#colug
>