Opened 12 years ago

Closed 12 years ago

#396 closed defect (wontfix)

API 0.3 traces display has slow query

Reported by: writetodan@… Owned by: steve@…
Priority: major Milestone:
Component: api Version:
Keywords: db slow query optimise locking api 0.3 Cc:


found at source:/

seems hard to imagine that cross-reffing 20000 trace files and 5000 users could become a major drain on the db, but that is apparently the situation. whatever the exact cause, the sql in here looks pretty grim. anyway, in the course of putting the traces stuff into the 0.4 API i needed to check this code, so i thought i'd record the results:

without the test data/environment to verify performance (we'd need to be contending with the usual profile of reads/writes against the db), i'm loath to check anything in. however, as a starting point the basic sql can be simplified somewhat ( + the extra options at the start of the method to be included as WHERE conditions) to something like:

SELECT DISTINCT files.*, users.display_name, GROUP_CONCAT(tag SEPARATOR ' ')
  FROM gpx_files AS files INNER JOIN users ON files.user_id = users.id
    LEFT JOIN gpx_file_tags AS tags ON files.id = tags.gpx_id
  WHERE private = 0 AND users.display_name != '' AND visible = 1
  GROUP BY files.id
  ORDER BY timestamp DESC

from mysql EXPLAIN it looks like a couple of new indexes would help - though i'm unsure of the amount of impact:

  CREATE INDEX users_display_name_idx ON users(display_name);
  CREATE INDEX gpx_files_visible_public_idx ON gpx_files(visible, public);

and this one from API 0.4:

  CREATE INDEX gpx_file_tags_gpxid_idx ON gpx_file_tags(gpx_id);
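To sanity-check the shape of the simplified query against a toy schema, here is a small sketch using SQLite via Python rather than MySQL (so `GROUP_CONCAT(tag SEPARATOR ' ')` becomes `GROUP_CONCAT(tag, ' ')`); the table and column names follow the query above, and the schema itself is a guess at the relevant columns only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, display_name TEXT);
CREATE TABLE gpx_files (id INTEGER PRIMARY KEY, user_id INTEGER,
                        private INTEGER, visible INTEGER, timestamp TEXT);
CREATE TABLE gpx_file_tags (gpx_id INTEGER, tag TEXT);

-- the ticket's proposed indexes (the ticket names the flag `public`;
-- this sketch's schema calls it `private`)
CREATE INDEX users_display_name_idx ON users(display_name);
CREATE INDEX gpx_files_visible_public_idx ON gpx_files(visible, private);
CREATE INDEX gpx_file_tags_gpxid_idx ON gpx_file_tags(gpx_id);
""")

c.executemany("INSERT INTO users VALUES (?, ?)", [(1, "dan"), (2, "")])
c.executemany("INSERT INTO gpx_files VALUES (?, ?, ?, ?, ?)",
              [(10, 1, 0, 1, "2007-01-02"),   # public, visible, named user
               (11, 2, 0, 1, "2007-01-01")])  # owner has empty display_name
c.executemany("INSERT INTO gpx_file_tags VALUES (?, ?)",
              [(10, "cycle"), (10, "commute")])

# SQLite spelling of the simplified query from the ticket
rows = c.execute("""
    SELECT files.id, users.display_name, GROUP_CONCAT(tag, ' ')
      FROM gpx_files AS files
      INNER JOIN users ON files.user_id = users.id
      LEFT JOIN gpx_file_tags AS tags ON files.id = tags.gpx_id
     WHERE private = 0 AND users.display_name != '' AND visible = 1
     GROUP BY files.id
     ORDER BY timestamp DESC
""").fetchall()
print(rows)  # only trace 10 survives the filters, with its tags concatenated
```

This is only a semantic check of the joins and filters; it says nothing about MySQL's plan, which is why the EXPLAIN-driven indexes above still need verifying against real data.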

Attachments (2)

ticket396.patch (8.0 KB) - added by writetodan@… 12 years ago.
Patch to cut public traces query
ticket396.2.patch (8.0 KB) - added by writetodan@… 12 years ago.
Patch to cut public traces query


Change History (5)

comment:1 Changed 12 years ago by writetodan@…

A possible explanation for why the hit on the system is so big is that, even though we're paging through traces, a SELECT without LIMIT is made for calls to gpx_files() from the .rhtml templates (e.g. source:/ - the limit flag is not passed, so it defaults to not imposing a limit).

Perhaps this was done to provide a count of the total number of traces available.

Looking more carefully at the row rendering code, it actually loops through all rows, i.e. all traces, even those not visible on this page: source:/

In summary, it looks like each visit to the public traces page currently results in a sub-selected query using 3 temp tables, 2 filesorts and some unindexed joining, producing a widish select of ~15,000 rows, each pulled through the html templating row by row.
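The usual fix for this pattern is two cheap queries instead of one unbounded one: a COUNT(*) for the pager total, and a LIMIT/OFFSET query that touches only the rows on the current page. A minimal sketch of that idea (hypothetical, again in SQLite via Python; the attached patch may do this differently):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE gpx_files "
          "(id INTEGER PRIMARY KEY, private INTEGER, visible INTEGER)")
# 100 public, visible traces
c.executemany("INSERT INTO gpx_files VALUES (?, 0, 1)",
              [(i,) for i in range(1, 101)])

PER_PAGE = 20

def public_traces_page(page):
    """Return (total count, one page of trace ids, newest first).

    The count query can be satisfied from a (visible, private) index,
    and the page query materialises only PER_PAGE rows for the template,
    instead of all traces as the current code does.
    """
    total = c.execute(
        "SELECT COUNT(*) FROM gpx_files WHERE private = 0 AND visible = 1"
    ).fetchone()[0]
    rows = c.execute(
        "SELECT id FROM gpx_files WHERE private = 0 AND visible = 1 "
        "ORDER BY id DESC LIMIT ? OFFSET ?",
        (PER_PAGE, (page - 1) * PER_PAGE),
    ).fetchall()
    return total, [r[0] for r in rows]

total, ids = public_traces_page(1)
print(total, ids[:3])  # 100 traces in total, newest ids first
```

The names here (`public_traces_page`, `PER_PAGE`) are illustrative only; the point is that the template receives a bounded page plus a scalar count rather than every row.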

Changed 12 years ago by writetodan@…

Attachment: ticket396.patch added

Patch to cut public traces query

Changed 12 years ago by writetodan@…

Attachment: ticket396.2.patch added

Patch to cut public traces query

comment:2 Changed 12 years ago by writetodan@…

ok, ignore the second patch added - for some reason i can't delete files that i've attached.

anyway, the first patch is a rewrite of the ruby/sql code that queries for traces, which in the usual, bad case is approx 300 times quicker than the previous version.

however, not properly tested as there is no working dev/test environment for server - see #400

comment:3 Changed 12 years ago by steve@…

Resolution: wontfix
Status: new → closed

moved to 0.4.....

Note: See TracTickets for help on using tickets.