[lug] large file management

Quentin Hartman qhartman at gmail.com
Wed Apr 30 12:02:43 MDT 2014


That is definitely a consideration. If the server isn't going to live on
your local network, it may be worth pre-seeding it with your data before
setting it up in whatever remote location it will ultimately live in.
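
For the pre-seed itself, a plain rsync over the local network is usually
enough. A minimal sketch; the source path and the "newbox" hostname are
placeholders, not anything from this thread:

    $ rsync -aP /home/me/archive/ newbox:/srv/data/
    # -a preserves ownership, permissions, and timestamps;
    # -P shows progress and lets interrupted transfers resume

Once the bulk of the data is on the machine, only the deltas have to
cross the slow link after it moves.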


On Wed, Apr 30, 2014 at 11:31 AM, Lee Woodworth <blug-mail at duboulder.com> wrote:

> Something else to consider is how much data you regularly move at one
> time, and your network bandwidth. It takes me about 12 minutes to move
> 8GB of video files using all of the bandwidth of a 100Mbit hardwired
> network.
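>
> (Back of the envelope, that figure checks out. 8 GB is roughly 64,000
> megabits, so a line-rate transfer over 100 Mbit/s takes:
>
>     $ echo $(( 8 * 1000 * 8 / 100 ))   # GB -> MB -> Mbit, over 100 Mbit/s
>     640
>
> i.e. about 10.7 minutes; the observed ~12 minutes is line rate plus
> protocol overhead.)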
>
> On 04/30/2014 10:28 AM, Quentin Hartman wrote:
> > For my purposes the amount of space was much more important than
> > speed, and I had failure cases covered by RAID. I've been running this
> > setup quite happily on a 10-year-old dual-Xeon (P4-based, 2x2 cores)
> > box with 8GB of RAM. The storage is software RAID1 on 750GB HDDs. It's
> > also doing a bunch of miscellaneous other stuff.
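> >
> > (For anyone recreating that storage layer, a minimal software-RAID1
> > sketch with mdadm; /dev/sda1 and /dev/sdb1 are assumed device names,
> > not Quentin's actual layout.)
> >
> >     $ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
> >     $ cat /proc/mdstat   # watch the initial mirror build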
> >
> >
> >
> > On Wed, Apr 30, 2014 at 10:18 AM, Bethany Hobson <blhobson at gmail.com> wrote:
> >
> >> Thanks for the heads-up on the gotchas, Quentin.  Care to elaborate
> >> on your hardware for the remote server?  Wondering whether we should
> >> invest in an SSD in a new Mac/PC, an SSD in a Linux box built from an
> >> older machine, or all-new hardware?  Or am I off the mark/too naive
> >> from the start?
> >>
> >> Thanks again,
> >>
> >> Bethany
> >>
> >> On Apr 29, 2014, at 11:24 AM, Quentin Hartman <qhartman at gmail.com> wrote:
> >>
> >> Sure thing. I'm using ownCloud 6 (http://owncloud.org/six/) on a
> >> remote server, and I usually sync via one of the clients
> >> (http://owncloud.org/install/), though the web interface is quite
> >> useful as well. There wasn't much to setting it up. I pretty much
> >> just installed the packages and set up an Apache vhost as directed.
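> >>
> >> (Roughly, that vhost amounts to something like the following; the
> >> ServerName and paths are illustrative, not from this thread.)
> >>
> >>     <VirtualHost *:80>
> >>         ServerName cloud.example.com
> >>         DocumentRoot /var/www/owncloud
> >>         <Directory /var/www/owncloud>
> >>             AllowOverride All   # ownCloud ships its own .htaccess rules
> >>         </Directory>
> >>     </VirtualHost>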
> >>
> >> The two gotchas I ran into were:
> >> - Make sure the location it's storing data in has plenty of space. By
> >> default it just dumps files into a sub-directory of your webroot,
> >> which probably isn't what you want (see the config sketch after this
> >> list).
> >> - When upgrading from 4.x to 5.x, the upgrade process trashed pretty
> >> much everything. I had it backed up, so it wasn't a big deal to
> >> install 5.x fresh and restore the data, but it was a bit of a hassle.
> >> Upgrading from 5.x to 6.x did not do that. I imagine this won't
> >> happen if you don't install via packages, but it is something to be
> >> aware of. The packages are community-maintained and aren't always as
> >> rigorously tested as one might hope.
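> >>
> >> (The config sketch promised above: the data location is the
> >> 'datadirectory' entry in ownCloud's config/config.php; the path here
> >> is an example, not a recommendation.)
> >>
> >>     'datadirectory' => '/srv/owncloud-data',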
> >>
> >> QH
> >>
> >>
> >> On Tue, Apr 29, 2014 at 11:11 AM, Bethany Hobson <blhobson at gmail.com> wrote:
> >>
> >>> I followed this thread with great interest.  I'm an amateur
> >>> genealogist with photo, video, audio, and document files in various
> >>> formats in various places - websites, online storage, personal
> >>> computers, etc.  I would like to engage with my teenagers (still
> >>> computer-ish, but not always gaming) in a project this summer to
> >>> pull all this data together and organize it.  Right now, it's pretty
> >>> much useless to anyone but me - not good.
> >>>
> >>> ownCloud looks intriguing and maybe the most accessible for my (our)
> >>> skill level(s).  The idea of being "in control" of my data locally
> >>> until I figure out what to put out there sounds nice.  Being able to
> >>> tie in and organize our family music scene would be sweet, too.  The
> >>> teens may appreciate this project more.
> >>>
> >>> Please chime in!  Ideas and suggestions are much appreciated.
> >>> Quentin Hartman, if you have the time and inclination to elaborate
> >>> on your private ownCloud installation, I thank you.
> >>>
> >>> --
> >>> Bethany Hobson
> >>>
> >>> On Mar 30, 2014, at 9:22 PM, Davide Del Vento <davide.del.vento at gmail.com> wrote:
> >>>
> >>>>>> Shortlist of stuff which I will look/consider more:
> >>>>>> - Unison
> >>>>>> - boar
> >>>>>> - OpenKM
> >>>>>> - subversion
> >>>>>> - git annex
> >>>>>
> >>>>> Definitely look at git annex.  Your use case is a lot like one
> >>>>> Joey considered when he started writing it.  There's also a web
> >>>>> front end (git annex assistant?) to make it easy for "regular"
> >>>>> people to use it.
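> >>>>>
> >>>>> (For the record, the web front end is launched with the command
> >>>>> below, which also starts the assistant daemon if it isn't already
> >>>>> running; behavior may differ across git-annex versions.)
> >>>>>
> >>>>>     $ git annex webapp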
> >>>>
> >>>> Yes, I'm definitely looking at it. I'm finding it much more complex
> >>>> than I thought at first (e.g. having to unlock files before
> >>>> changing them, and having to manually add modified files to
> >>>> git-annex rather than to git itself before committing) on top of
> >>>> the already complicated git workflow (it's no mystery that I like
> >>>> hg better -- by the way, I checked mercurial-largefiles and that is
> >>>> not what I want). But I am not ruling it out, at least not yet.
> >>>>
> >>>> There is a fundamental thing that I'd like to do with git-annex
> >>>> and I can't figure out how: while having only some repos available
> >>>> (possibly only one), decide that some files aren't worth keeping
> >>>> anywhere and ask for them to be deleted from all the repos the next
> >>>> time I sync. With rsync, I just write a "note to self" in the repo
> >>>> I'm editing, which says something like "use --delete the next time
> >>>> you rsync". This is unnerving, though, because when I do this I
> >>>> have to be careful that I'm not also wiping out files I removed
> >>>> locally just to free space, which is something I also like to do
> >>>> (and which git-annex does with its drop command). Basically I'm
> >>>> looking for a suboption of drop that would say --on-all-remotes,
> >>>> and I can't find it. You may say "but this is not how version
> >>>> control is supposed to work, you don't want to completely delete
> >>>> something everywhere", and you'd be right. But this is not "normal
> >>>> version control": these are pictures which I initially dumped from
> >>>> the camera, synced among repos, and then realized were crap and not
> >>>> worth keeping. That's my workflow; I seldom have time to look at
> >>>> the pictures before syncing, and therefore I would like an easy
> >>>> (but clear and unmistakable) way to delete them from everywhere
> >>>> (even when "everywhere" isn't available -- of course the actual
> >>>> deletion will happen at the next sync, not "immediately", but I can
> >>>> consider them gone "mentally").
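> >>>>
> >>>> (A partial workaround, sketched on the assumption that the other
> >>>> repos are ordinary git remotes: drop the content locally and then
> >>>> from each reachable remote. It does not cover the offline-remote
> >>>> case above, which is exactly the missing piece, and git-annex will
> >>>> refuse drops that would violate its numcopies setting unless
> >>>> forced.)
> >>>>
> >>>>     $ git annex drop bad.jpg   # drop the local copy
> >>>>     $ for r in $(git remote); do git annex drop --from "$r" bad.jpg; done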
> >>>>
> >>>> Not sure if I was clear enough... I'm already sleeping :-)
> >>>>
> >>>> Thanks for any suggestions,
> >>>> Dav
>
> _______________________________________________
> Web Page:  http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: irc.hackingsociety.org port=6667 channel=#hackingsociety
>