[lug] stupid swap question

duboulder blug-mail at duboulder.com
Wed Dec 4 17:54:34 MST 2019


Hmm,

I guess you need to use about:config to set the cache's maximum size in 68.2.0esr (the browser.cache.* prefs). I don't see the old cache controls in Preferences any more.
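If it helps, the same limits can be pinned from a user.js file in the profile directory; these browser.cache.* pref names are current ones, but the values below are just examples, not recommendations:

```
// user.js in the Firefox profile directory -- example values only
user_pref("browser.cache.disk.capacity", 256000);    // disk cache cap, in KB
user_pref("browser.cache.memory.capacity", 65536);   // memory cache cap, in KB
user_pref("browser.cache.disk.enable", true);        // false disables the disk cache
```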

Your browser usage must be pretty different from mine, as I don't see Firefox eating lots of memory (maybe 2 GB across 6 JS-heavy tabs). One thing I have noticed is that third-party scripts can really slow a site down; when NoScript is set to allow only the few scripts related to site function, things get much faster.

The file system might make a difference for the cache. We use XFS, which seems to handle lots of small-file reads and directory operations at a good speed. The OpenStreetMap tiles dir is an example:
   find /tiles -type f | wc -l
takes about 6 seconds for 11184811 files in 82300 dirs, or roughly 0.54 microseconds per file. That's on an i5-6500 CPU @ 3.20GHz with the fs on a RAID5 of 3 SSDs. git commands on a large repo (e.g. the Linux kernel) can also show how well a file system is performing.
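For anyone who wants the same per-file number on their own tree, a small sketch (assumes GNU date with %N nanosecond support, as on Linux; path and script name are just examples):

```shell
#!/bin/sh
# Time a file-count traversal and report the per-file cost.
# Usage: sh fstime.sh /tiles   (any directory works; default is cwd)
dir=${1:-.}
start=$(date +%s%N)
count=$(find "$dir" -type f | wc -l)
end=$(date +%s%N)
elapsed_ns=$((end - start))
echo "$count files in $((elapsed_ns / 1000000)) ms"
if [ "$count" -gt 0 ]; then
    echo "~$((elapsed_ns / count)) ns/file"
fi
```

Run it a second time to see the effect of a warm dentry/inode cache; cold-cache numbers are the ones that matter for a browser cache on disk.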



------- Original Message -------
On Sunday, December 1, 2019 6:52 PM, Steve Litt <slitt at troubleshooters.com> wrote:

> On Sun, 1 Dec 2019 13:29:54 -0700
> Bear Giles bgiles at coyotesong.com wrote:
>
> > I have a stupid swap question....
>
> I'm going to answer a different question, just in case my answer helps.
>
> > My system has 64 GB of memory, or nearly enough for 3 pages on Chrome.
> > Obviously I want to remain memory-resident as much as possible but I
> > would also like a soft landing when Chrome does what Chrome does so
> > well - I would rather have some background stuff move to swap than to
> > have to do a hard reset because the system locked up.
>
> Me too. Chromium is such a pig I went back to firefox, which of course
> is a slightly smaller pig.
>
> I made tremendous strides on my system by hugely reducing the number of
> files Firefox could keep in cache. Firefox was keeping tens of
> thousands of cache files in a single directory. At least with Ext4,
> Linux is horrible at finding one file in a directory of 20,000. Like
> seconds per file. A better design would have been to have 26
> directories, a to z, each with subdirectories, a to z, maybe to four or
> five levels, and put the actual files in the leaf directories. Linux is
> lightning quick finding one out of 26. But that's crying over spilled
> milk.
>
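[The fan-out scheme described above is essentially what git does with .git/objects (256 two-hex-digit subdirectories). A minimal sketch of the idea, with a hypothetical layout, not Firefox's actual one:]

```shell
#!/bin/sh
# Shard files into two levels of subdirectories keyed by the first two
# characters of the filename, keeping every leaf directory small.
shard() {
    name=$(basename "$1")
    c1=$(printf '%s' "$name" | cut -c1)
    c2=$(printf '%s' "$name" | cut -c2)
    dest="cache/$c1/$c2"
    mkdir -p "$dest"
    mv -- "$1" "$dest/"
}
```

With two levels of fan-out, 20,000 files land in directories of a few dozen entries each instead of one directory of 20,000.
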
> So after seeing tens of thousands of cache files in one directory, and
> knowing that would make what should have been a millisecond retrieval
> into three seconds, I decided that if I hadn't accessed a resource
> within the past hour, it would be better to just download it again. I
> don't remember how I did it, it's very obscure, but I limited the
> maximum number of cache files to a pretty low 3 or 4 digit number. The
> effect on Firefox and all the programs running at the same time was
> stunning.
>
> > I've created an explicit swap file, added it via swapon, and it shows
> > up in the system but doesn't seem to be used. It's really odd since
> > this has worked in the past.
>
> Somebody else pointed out that swap memory is a circuit breaker: It
> doesn't happen until needed. To really test it you'd need to open
> Chromium with a bunch of gratuitously scripted websites until it choked.
>
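[To see whether the swap file is actually being touched, the kernel's own counters are enough; this is the standard /proc interface on Linux and needs no root:]

```shell
# Report total and in-use swap from /proc/meminfo (values are in kB).
awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2}
     END {printf "swap: %d kB total, %d kB in use\n", t, t-f}' /proc/meminfo
```

If "in use" stays at 0 under memory pressure, check that the swap file shows up in /proc/swaps and look at the vm.swappiness sysctl.
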
> Keep in mind it might not be choking on RAM at all. It could be disk or
> CPU. One thing I'd try is to have a crm.sh that runs chromium with a
> nice level of 19.
>
> nice -n 39 chromium
>
> I used 39 because -n is an adjustment, not an absolute number, and if
> the previous niceness had been -20, it would take 39 to get it up to
> the max of 19.
>
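[The relative-adjustment behavior is easy to verify: the kernel clamps the result to the maximum niceness of 19, so from a normal shell (niceness 0) both -n 19 and -n 39 land in the same place:]

```shell
#!/bin/sh
# nice -n is a relative adjustment, clamped to the max niceness of 19.
nice -n 39 sleep 2 &
pid=$!
ps -o ni= -p "$pid"    # shows the niceness the kernel actually assigned
wait "$pid"
```
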
> If the problem is CPU, nicing up chromium should enable you to
> manipulate other programs long enough to send a HUP to chromium.
>
> SteveT
>
> Steve Litt
> November 2019 featured book: Manager's Guide to Technical
> Troubleshooting Second edition
> http://www.troubleshooters.com/mgr
>
> Web Page: http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: irc.hackingsociety.org port=6667 channel=#hackingsociety



