[lug] Too many files error
Daniel Webb
lists at danielwebb.us
Sat Apr 22 16:03:35 MDT 2006
On Sat, Apr 22, 2006 at 12:04:43PM -0600, Ken Kinder wrote:
> I'm working on an application that creates lots of files. To reduce
> the number of files in one dir, I have broken the files up into
> subdirs. The files are basically md5 hex digests, and I'm taking the
> first five chars to determine a directory:
>
> xxxxx/(file)
>
> However, I'm running into a "Too many links" error upon creating
> the directories. This happens right after the 32,000th directory,
> which seems like an awfully even number. I thought ext3 didn't have
> this problem.
>
> Thoughts? Anyone have creative solutions for this? Am I barking up
> the right tree?
There is a hard link limit of 32,000 in ext2 (and every subdirectory holds a
hard link to its parent through its ".." entry, so subdirectories count
against the parent's limit). I'm not sure why they picked that instead of
2^15; maybe they like round numbers. I'm also not sure why they didn't pick
2^16; maybe they needed the sign bit for something? This sort of answers
that question:
http://ext2.sourceforge.net/2005-ols/paper-html/node33.html
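To see the mechanism concretely: a fresh directory has a link count of 2
("." plus its name in the parent), and every subdirectory's ".." entry adds
one more link to the parent. A quick Python sketch using a throwaway temp
directory (my illustration, not Ken's code):

    import os
    import tempfile

    parent = tempfile.mkdtemp()
    print(os.stat(parent).st_nlink)   # typically 2: "." plus its entry in the parent

    for i in range(5):
        # each new subdirectory's ".." entry is a hard link back to parent
        os.mkdir(os.path.join(parent, "sub%d" % i))

    print(os.stat(parent).st_nlink)   # now 7 = 2 + 5 subdirectories

On ext2 that count is capped at 32,000, so mkdir() starts failing with
EMLINK ("Too many links") at 31,998 subdirectories, which is why the error
showed up right around the 32,000th directory.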
Also, from the Dirvish backup tool FAQ:
----
http://www.dirvish.org/FAQ.html
Could linking between images be limited by a maximum link count?
Yes. But you are unlikely to ever come close to the limits.
Linux Filesystem link limits
----------------------------
xenix        126
sysv         126
minix        250
coherent     10,000
ufs          32,000
ext2         32,000
reiserfs     64,535
minix2       65,530
jfs          65,535
xfs          65,535 or 2,147,483,647
----
You're unlikely to ever come close to the limits... but I did a few days
ago, which is why I knew the answer to this.
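If you want to check whether your own trees are getting close, link counts
are cheap to scan. A rough Python sketch (the 1,000-link slack is an
arbitrary number of mine):

    import os

    LIMIT = 32000  # ext2/ext3 hard-link cap

    def scan(root, slack=1000):
        # Print anything whose link count is within `slack` of the cap.
        for dirpath, dirnames, filenames in os.walk(root):
            paths = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
            for p in paths:
                n = os.lstat(p).st_nlink
                if n >= LIMIT - slack:
                    print(n, p)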
Hint: if you use the finddup/nodup tools on a Subversion repository (which
has a lot of identical property files), then make hardlink snapshots of
those directories, then make hardlink snapshot backups of the entire
system, you'll hit 32,000 links very quickly... :)
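As for creative solutions to the original problem: five hex characters
allow 16^5 = 1,048,576 possible subdirectories under one parent, far past
the cap, while two levels of two characters each keep every directory at
256 entries or fewer. A sketch of that layout (the function and path names
are my own suggestion, not Ken's code):

    import hashlib
    import os

    def path_for(data, root="store"):
        # Maps content (bytes) to store/ab/cd/abcdef... -- at most 256
        # subdirectories per level, comfortably below the ~32,000 link limit.
        digest = hashlib.md5(data).hexdigest()
        d = os.path.join(root, digest[:2], digest[2:4])
        if not os.path.isdir(d):
            os.makedirs(d)
        return os.path.join(d, digest)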