David Sklar
2007-07-18 15:53:24 UTC
I am seeing filesystem process memory size grow as my filesystem deals
with an increasing number of distinct files. I am able to reproduce
this memory growth by mounting fusexmp_fh on /example-mount and then
doing, e.g. ls -lR /example-mount/usr. The first time I issue such a
command, process memory grows by about 28M (ls -lR /usr | wc -l reports
161861), but if I then issue the same command again, process memory
stays constant. I am running fusexmp_fh as root with the command
"fusexmp_fh -o allow_other -o attr_timeout=0 -o entry_timeout=0
/example-mount". I see essentially the same results with fuse 2.6.5
and fuse 2.7.0.
Browsing through the code, my guess at the likely cause is the
userspace name_table and/or id_table filling up with info about all of
the entries I am asking for. It seems those tables only shrink when
FORGET requests are issued.
What causes FORGET requests to be issued? Can I force them? My
filesystem will potentially handle tens of millions of files (though
not all at once :), but I'd like to be able to put an upper bound on
the amount of memory it consumes while running.
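For what it's worth, one thing I imagine might force FORGETs (assuming
they are driven by the kernel evicting dentries/inodes, which I haven't
verified) is dropping the kernel caches from the shell:

```shell
# Assumption: running as root on Linux, with the FUSE filesystem mounted.
# Writing 2 to drop_caches asks the kernel to free reclaimable dentries
# and inodes; for a FUSE mount, evicting cached nodes should cause the
# kernel to send FORGET requests to the userspace daemon.
sync                                              # flush dirty data first
echo 2 > /proc/sys/vm/drop_caches 2>/dev/null \
  || echo "need root to write drop_caches"
```

I haven't confirmed this actually shrinks the name_table/id_table, but
it seems like a cheap experiment.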
Alternatively, if I'm totally off on what the memory consumption is
due to, any pointers in the correct direction would be appreciated.
Thanks,
David