[lug] File size limit issue
Anders Knudsen
andersk at uswest.net
Fri Nov 17 23:04:15 MST 2000
OK. Quick background. I'm working to replace my main file server, which is NT,
with Linux. It's a Dell PowerEdge, RAID, and all that fancy shoot. No problems
getting Linux up on it. So, as a "proof of concept" for
"management" <cringe>, I had to create a "mirror" server, get it up
and running, get folks using it, and show that from a user's NT
workstation it worked as well (better), etc., etc.
Now, all is well. NFS, NIS, Samba: all working, no problems. However, the
file server is used for running design simulations, and it happens that
sometimes...probably more often than it should...folks create 2GB-plus
log/dump files. You see my dilemma now?
On the NT server, you can go hog wild and create huge files if you
want. These are, of course, temporary files.
The Linux server, however, stops writing the file at 2GB.
I did a quick test.
$ dd if=/dev/zero of=testfile bs=16k count=134217728
dd: writing `testfile': File too large
131072+0 records in
131071+0 records out
$ ls -l
-rw-rw-r-- 1 aknudsen aknudsen 2147483647 Nov 17 16:04 testfile
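For what it's worth, 2147483647 is 2^31 - 1, the largest value a signed
32-bit off_t can hold, which is exactly where the kernel gives up. Here's a
minimal C sketch of the same probe that doesn't need to write 2GB of real
data; "testfile" is just an example path, and I'm assuming a stock 2.2-era
setup where off_t is 32 bits:

/* probe the 2GB boundary without writing 2GB of data:
 * seek to offset 2^31 - 1 and try to write one byte there.
 * On a 2.2 kernel this write should fail with EFBIG. */
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    off_t limit = 2147483647;  /* 2^31 - 1, the 32-bit off_t maximum */
    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (lseek(fd, limit, SEEK_SET) == (off_t) -1) {
        perror("lseek");
        close(fd);
        return 1;
    }
    /* writing one byte here would push the file past 2^31 - 1 bytes */
    if (write(fd, "x", 1) < 0)
        fprintf(stderr, "write at the 2GB boundary: %s\n", strerror(errno));
    else
        printf("wrote past 2GB; this system has large file support\n");
    close(fd);
    return 0;
}

On a kernel and libc with large file support, the one-byte write succeeds
and leaves a sparse file just over 2GB.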
Bah! This sucks. I'm running RH7.0, all patched up, with a 2.2.16 kernel.
I found some old mail in the archives asking the same question, but no
answers :) So I'm querying the list here. Any ideas?
There's no way I'll be able to replace the NT file server with Linux unless
I can break this 2GB barrier, and by a long shot.
I found a possible solution, a kernel patch called ReiserFS;
see www.namesys.com.
Anyone have any experience with this? There have got to be others using
Linux who need to create files larger than 2GB, no?
So, again, I'm really bummed I ran into this situation, since it looks
like a kernel patch is the only way to fix it. Will the 2.4 kernel fix
this?
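From what I understand, 2.4 together with glibc 2.2 is supposed to bring the
Large File Summit (LFS) interface, where compiling with
-D_FILE_OFFSET_BITS=64 gives you a 64-bit off_t (there are also explicit
open64()/lseek64() calls). Here's a sketch of what that should look like,
assuming an LFS-aware kernel, glibc, and filesystem; again, "testfile" is
just an example path:

/* build with: gcc -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 big.c
 * (the defines below do the same thing from within the source) */
#define _LARGEFILE_SOURCE
#define _FILE_OFFSET_BITS 64
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* needs a 64-bit off_t; overflows a 32-bit one */
    off_t three_gb = (off_t) 3 * 1024 * 1024 * 1024;
    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* make a sparse file past the old 2^31 - 1 barrier */
    if (lseek(fd, three_gb, SEEK_SET) == (off_t) -1 ||
        write(fd, "x", 1) != 1) {
        perror("3GB write");
        close(fd);
        return 1;
    }
    printf("wrote a byte at the 3GB mark; the 2GB barrier is gone\n");
    close(fd);
    return 0;
}

If that's accurate, the fix is as much a libc/application recompile issue
as a kernel one, which is what I'm hoping someone here can confirm.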
...steps off podium as the crowd ponders...
TIA, <Anders>