[lug] Xvnc cpu usage

Matthew Beldyk matt at beldyk.org
Wed May 28 17:18:46 MDT 2008


I'm going to take a stab in the dark and guess you have the 2850 with two
dual-core processors (I just googled the machine's specs).  I'll also
qualify my response by saying I've never used Xvnc, but my guess is that
it is constantly polling and soaking up all of the spare CPU cycles on
one of the cores.  If both of those guesses are right, 25% usage (one
core out of four) would be reasonable.

We used to have similar situations where I work: a number of scripts
that were essentially `while true; do ls /dir/; done` and each pinned an
entire processor.  We've since fixed them by adding a sleep to the loop
body.
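As a rough sketch of the before and after (the /dir/ path stands in for
whatever the scripts were listing, and the one-second interval is an
arbitrary choice, not necessarily what we used):

    # Before: busy loop with no pause, pins one core at ~100%
    while true; do
        ls /dir/ > /dev/null
    done

    # After: a sleep between iterations drops CPU usage to near zero
    while true; do
        ls /dir/ > /dev/null
        sleep 1
    done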

A test I'm curious about: if you saturate the machine with other
processes, does Xvnc still use a large amount of the CPU, or does it
back off and let the other processes have some cycles?  If it backs
off, it's only soaking up otherwise idle cycles, and I'd say 25% is
reasonable.  I couldn't tell for sure, though, without seeing top output
in both the unloaded and loaded situations.
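If you want to try it, something along these lines should do (an
untested sketch: `yes` is just a convenient CPU burner, and the count of
four assumes two dual-core CPUs):

    # Xvnc's CPU share on the otherwise idle machine
    top -b -n 1 | grep Xvnc

    # Saturate all four cores with throwaway work
    for i in 1 2 3 4; do yes > /dev/null & done

    # Xvnc's CPU share again, now under load
    top -b -n 1 | grep Xvnc

    # Clean up the burner processes afterwards
    killall yes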

-Matt

On Wed, May 28, 2008 at 4:54 PM, <gordongoldin at aim.com> wrote:

> I have a Dell PowerEdge Server 2850.
>
> The only thing running on it is somebody's Xvnc session.  No other tasks
> above 0% CPU.
>
> Xvnc runs at 25% of the CPU.  Does that make sense?
>
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: lug.boulder.co.us port=6667 channel=#colug
>



-- 
Calvin: Know what I pray for?
Hobbes: What?
Calvin: The strength to change what I can, the inability to accept what I
can't, and the incapacity to tell the difference.

