performance testing/questions from Odum
I'm repeating my rsync of our DVN data with varying settings in davfs2.conf, and I believe it's the local cache that's going to make the difference. I tried nudging the upload delay settings first (how long davfs2 waits before uploading a closed, written file from cache to the server), thinking that would have the greatest effect, then tried bumping table_size (it defaults to 1024 files, but we have 168,295, so I tried 262,144, since it must be a power of 2). Again, the performance looked like this:
For my next run I increased the size of /var/cache/davfs2 to something larger than our DVN data, which performed better, but it's interesting that on the server side (Tomcat/Milton/iRODS) it's now Postgres consuming the CPU. I'm going to throw some more resources at it next.
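For reference, the settings discussed above live in /etc/davfs2/davfs2.conf. A sketch, with illustrative values rather than recommendations (per the mount.davfs man page the cache size is in MiByte and defaults to 50, and table_size defaults to 1024; the delay option is spelled delay_upload there):

```
# /etc/davfs2/davfs2.conf -- values are illustrative
cache_size    204800    # MiByte; sized larger than the full DVN data set
table_size    262144    # hash table entries for cached files; must be a power of 2
delay_upload  10        # seconds to wait before uploading a closed file
```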
Mike, Jon said that the iPlant people had been working on/with this, and that you might have a contact with some insight? I thought the huge local cache finally did the trick, but the CPU has leveled off again and I suspect Postgres is hitting a system-resource limit.
I think it's davfs2 cache expiration. Even with the cache size set to 150 GB, the cache only grew to 2,766,204 kB and then stopped. davfs2 is supposed to scan the cache periodically, so I'll turn on debugging and see what changes we can effect there. Files are still transferring, but performance has tanked.
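Debugging can be enabled in davfs2.conf as well; a sketch, assuming "cache" and "most" are valid arguments of the debug option as listed in the mount.davfs man page (output goes to the system log):

```
# /etc/davfs2/davfs2.conf -- turn on cache-related debug messages;
# "debug most" is the broader setting if this isn't enough
debug cache
```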
I did heavy-handedly remove ~1.5 GB of stuff I had dropped in there during earlier testing (dataverse-4.1.war files; it kept three copies), but it has yet to notice they're gone.
Postgres' CPU spiked about the time the cache maxed out, and it seems strange that 9.3 would hit the same resource threshold as 8.4.
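To see what Postgres is actually doing when the CPU spikes, pg_stat_activity is available in both versions, though the column names changed in 9.2. A sketch, to be run via psql against the live database:

```sql
-- 8.4: the query text is in current_query, the backend pid in procpid
SELECT procpid, current_query FROM pg_stat_activity;

-- 9.3: renamed to pid and query, with a separate state column
SELECT pid, state, query FROM pg_stat_activity;
```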
Retest with Akio when the RC is done.