<snip>
> > >
> > > I suspect much of the delay for small files is due to checking,
> > > creating, and traversing directories!
>
> > The depth was chosen so it would work on poor-quality file systems that
> > only allow a handful of entries in a directory, such as FileCore :)
>
> It is a shame that there is no simple way to discover whether 'big
> directories' are being used with no such limitations, as they have been
> here for many, many years.
>
A bit technical for this list, but
http://git.netsurf-browser.org/netsurf.git/tree/content/fs_backing_store.c#n326
explains all the constraints from all the different systems the cache
must deal with; the result is the lowest common denominator. Believe me
when I say that working out that limit set from a many-dimensional
dataset like that was not easy.
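
Purely as an illustration (a hypothetical sketch, not the actual
fs_backing_store.c code): one way to keep the fan-out of each
directory small enough for the most restrictive filesystems is to
slice an entry's address into a few small path components, along
these lines:

/* Illustrative sketch only, not the NetSurf implementation: map a
 * cache entry's 32-bit address onto a short directory path whose
 * fan-out at every level stays small enough for restrictive
 * filesystems (FileCore-style limits on entries per directory).
 * The depth and bits-per-level values here are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define LEVEL_BITS 5u  /* 2^5 = 32 entries per directory level */
#define LEVELS     3   /* fixed depth of the directory tree    */

static void entry_path(uint32_t addr, char path[32])
{
    int used = 0;
    for (int lvl = 0; lvl < LEVELS; lvl++) {
        unsigned component =
            (addr >> (lvl * LEVEL_BITS)) & ((1u << LEVEL_BITS) - 1u);
        used += sprintf(path + used, "%02x/", component);
    }
    sprintf(path + used, "%08x", (unsigned)addr);  /* leaf file name */
}

int main(void)
{
    char path[32];
    entry_path(0x12345678u, path);
    printf("%s\n", path);  /* prints 18/13/15/12345678 */
    return 0;
}

With five bits per level no directory in the tree ever needs more
than 32 entries, however big the cache grows.
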
With the changes I have just made, however, more than 90% of metadata
and 70% of actual data ends up in the large block files, which causes
far fewer directories to be created, substantially reducing overheads.
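
Again purely as a hypothetical sketch (the size threshold and file
names below are made up, not what NetSurf actually uses): the point of
the block files is that anything under a modest size limit is appended
to a shared file, so only the occasional large object ever needs a
file, and a directory, of its own:

/* Hypothetical sketch, not NetSurf's fs_backing_store API: objects
 * below an assumed size threshold are appended to one shared block
 * file, so most cache entries never create a file or directory of
 * their own; only large objects get an individual file. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SMALL_OBJECT_LIMIT (8 * 1024)  /* assumed threshold, in bytes */

static int write_all(FILE *f, const uint8_t *data, size_t len)
{
    size_t wrote = fwrite(data, 1, len, f);
    return (fclose(f) == 0 && wrote == len) ? 0 : -1;
}

static int store_object(const uint8_t *data, size_t len, uint32_t addr)
{
    if (len <= SMALL_OBJECT_LIMIT) {
        /* small object: append to the shared block file */
        FILE *bf = fopen("cache_blocks.bin", "ab");
        return (bf != NULL) ? write_all(bf, data, len) : -1;
    }
    /* large object: one file of its own, named after its address */
    char name[32];
    snprintf(name, sizeof(name), "obj_%08x.dat", (unsigned)addr);
    FILE *f = fopen(name, "wb");
    return (f != NULL) ? write_all(f, data, len) : -1;
}

int main(void)
{
    const char *payload = "small cached object";
    return store_object((const uint8_t *)payload,
                        strlen(payload), 0x12345678u) == 0 ? 0 : 1;
}

Since storing a small object is just an append to the shared file, the
common case involves no directory lookups or creations at all.
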
--
Regards Vincent
http://www.kyllikki.org/