[lug] Cluster File Systems

Rob Nagler nagler at bivio.biz
Wed Aug 8 07:47:44 MDT 2018


> I naively thought that after 35+ years that simple cluster-file-system

I think naively all the time. Gets me in a lot of trouble. :)

Sorry you are having so much trouble.

FWIW, RMACC Symposium <http://rmacc.org/hpcsymposium/schedule> is going on
this week. I just happened to sit in on an open discussion among experienced
admins about file systems.

I wasn't taking notes so this is from memory.

Some of the big centers like IBM Spectrum Scale a lot.

A few have been experimenting with BeeGFS, and one guy said it is rock
solid, and performs faster than Panasas. He seemed pretty excited about it,
and others were listening closely/asking questions.

Somebody played with LizardFS and said it was not good. XtreemFS and OrangeFS
were mentioned, but nobody was using them.

Separating metadata from data is important for some of their workloads. It
seems that people like typing "ls" a lot.

Nobody could answer my question about RAM vs SSD, which I thought was
interesting. They didn't seem to care about RAM, which I would think would
be relevant especially for the ls-workload, but I'm not an expert on this.
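
If you want a feel for how metadata-bound an ls-style workload is, one crude
way is to time stat() against actual reads on the mount. A rough Python
sketch, not something from the discussion, just for illustration:

#!/usr/bin/env python3
# Crude illustration: "ls -lR" is essentially a tree walk plus a stat() per
# entry, while data access is open()/read(). Timing the two separately on an
# NFS (or other cluster FS) mount shows how metadata-heavy ls really is.
import os
import sys
import time

root = sys.argv[1] if len(sys.argv) > 1 else "."

# Metadata pass: stat every file, no data read.
start = time.time()
nfiles = 0
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        os.stat(os.path.join(dirpath, name))
        nfiles += 1
meta_secs = time.time() - start

# Data pass: read the first 4 KiB of every file.
start = time.time()
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        with open(os.path.join(dirpath, name), "rb") as f:
            f.read(4096)
data_secs = time.time() - start

print(f"{nfiles} files: metadata pass {meta_secs:.2f}s, data pass {data_secs:.2f}s")

Run it twice in a row and much of the second-run speedup is cache (the client
attribute cache plus whatever the server keeps in RAM), which is why I would
think server memory matters for this.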

Nobody mentioned NFS (I use it, but we're small).

Some people are using GPFS and Lustre just fine.

People mentioned Ceph in the context of diskless nodes. I didn't quite get
it, and maybe this is a red herring. Ceph is well regarded, and one guy
said "thanks to Red Hat" (who was in the room) it was easy to install, but
they built their own hand-rolled config.

We use NFSv4 with RAID1 (not 10) and mirroring to another machine (nightly
at the moment). Our data doesn't matter that much, because it is mostly
simulation output, which is likely reproducible and relatively short-lived.
We loaded our primary server with 192GB of RAM, which seems to be in line
with what the big centers have for cache on their boxes, but again, they
didn't seem concerned about cache. Obviously it's pretty load-dependent, but
I would think caching all the metadata would be valuable, and my guess is
that 192GB would be enough. I haven't heard complaints, so I assume it is.
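
For what it's worth, a nightly mirror like ours doesn't need anything fancy;
a cron-driven rsync is enough. A rough sketch (the hostname and paths below
are made up for the example, not our actual config):

#!/usr/bin/env python3
# Sketch of a nightly mirror job run from cron on the primary NFS server.
# SRC and DEST are placeholders, not our real layout.
import subprocess
import sys

SRC = "/srv/nfs/data/"               # local export (trailing slash matters to rsync)
DEST = "mirror-host:/srv/nfs/data/"  # hypothetical standby machine

cmd = [
    "rsync",
    "-aH",            # archive mode, preserve hard links
    "--delete",       # make the mirror an exact copy (deletes removed files)
    "--numeric-ids",  # don't remap uid/gid by name
    SRC,
    DEST,
]

sys.exit(subprocess.run(cmd).returncode)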

Rob