Using Ruby 1.8.7, Rails 3.1.11.
We have an XML feed that is performing badly. The main table is not very long (~13,000 rows), but it is very wide, and we are eager-loading 9 associated tables to get all the data needed in one query (or two: Rails seems to run one query to get the ids, then another with a WHERE IN clause?).
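For reference, a minimal sketch of the kind of eager loading described above (the model and association names here are hypothetical, not from the actual app). In Rails 3.1, includes with multiple associations typically preloads each association in its own query with a WHERE id IN (...) clause rather than one giant JOIN:

```ruby
# Hypothetical models/associations, for illustration only.
# Rails first selects the parent rows, then runs one
#   SELECT ... WHERE parent_id IN (...)
# query per included association.
@entries = Entry.includes(:category, :images, :prices).limit(380)
```

This fragment requires a Rails environment and real models, so it is a shape sketch rather than runnable code.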
There are quite a few nodes for each entry in the response (~40), and there is some logic involved (mostly display-related: url_for, one call to encode64).
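Assuming the encode64 call refers to Ruby's standard Base64 module (an assumption; the post doesn't say), a single call per record is cheap, though note encode64 inserts a newline after every 60 output characters, which can matter inside XML:

```ruby
require 'base64'

# Base64.encode64 appends a newline every 60 characters
# (and one at the end), so the output is not a bare token.
encoded = Base64.encode64("hello")   # => "aGVsbG8=\n"
decoded = Base64.decode64(encoded)   # round-trips to "hello"
```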
When I hit the feed with a request that produces 380 results, I'm getting:
Completed 200 OK in 11187ms (Views: 9897.7ms | ActiveRecord: 1252.1ms)
which seems a bit excessive in terms of both View-related and AR-related time.
When I remove all the nodes in the XML output except for the main object ID, I get:
Completed 200 OK in 2543ms (Views: 1945.3ms | ActiveRecord: 507.9ms)
That cuts the time down quite a bit, which I suppose argues for digging into those nodes to try to track down the time killers?
But isn't almost 2 seconds just to iterate over 380 records an indicator that another avenue of investigation is needed?
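One way to track down per-node time killers is to wrap suspect sections of the template in the stdlib Benchmark module (available in 1.8.7) and log the elapsed time. A minimal sketch with a placeholder block standing in for a node-building section:

```ruby
require 'benchmark'

# Time a suspect section; in the view you would wrap the
# actual node-building code and send this to Rails.logger.
elapsed = Benchmark.realtime do
  1000.times { "x" * 100 }   # placeholder for real node logic
end
puts "node section took #{(elapsed * 1000).round} ms"
```

Running this per node group across the 380 records should show whether a few nodes dominate the ~2s of view time or whether the cost is spread evenly.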