This repository has been archived by the owner on Jul 24, 2021. It is now read-only.
Reporter: writetodan[at]yahoo.com [Submitted to the original trac issue database at 10.14pm, Thursday, 22nd March 2007]
Found at source:/www.openstreetmap.org/ruby/api/osm/dao.rb@R1589#L431
It seems hard to imagine that cross-referencing 20,000 trace files and 5,000 users could become a major drain on the db, but that is apparently the situation. Whatever the exact cause, the SQL in here looks pretty grim. Anyway, in the course of putting the traces stuff into the 0.4 API I needed to check this code, so I thought I'd record the results:
Without the test data/environment needed to verify performance (any fix has to contend with the usual profile of reads/writes against the db), I'm loath to check anything in. However, as a starting point, the basic SQL (with the extra options at the start of the method included as WHERE conditions) can be simplified to something like:
SELECT DISTINCT files.*, users.display_name, GROUP_CONCAT(tag SEPARATOR ' ')
FROM gpx_files AS files INNER JOIN users ON files.user_id = users.id
LEFT JOIN gpx_file_tags AS tags ON files.id = tags.gpx_id
WHERE private = 0 AND users.display_name != '' AND visible = 1
GROUP BY files.id
ORDER BY timestamp DESC
LIMIT 20;
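To sanity-check that the simplified query returns what the traces page needs (public, visible traces from named users, newest first, tags concatenated), here is a minimal sketch against an in-memory SQLite database. The schema and sample data are assumed for illustration, and SQLite's `group_concat(tag, ' ')` stands in for MySQL's `GROUP_CONCAT(tag SEPARATOR ' ')`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, display_name TEXT);
CREATE TABLE gpx_files (id INTEGER PRIMARY KEY, user_id INTEGER,
                        private INTEGER, visible INTEGER, timestamp TEXT);
CREATE TABLE gpx_file_tags (gpx_id INTEGER, tag TEXT);
INSERT INTO users VALUES (1, 'alice'), (2, '');
INSERT INTO gpx_files VALUES
  (10, 1, 0, 1, '2007-03-01'),
  (11, 1, 0, 1, '2007-03-02'),
  (12, 2, 0, 1, '2007-03-03'),  -- blank display_name: excluded
  (13, 1, 1, 1, '2007-03-04');  -- private: excluded
INSERT INTO gpx_file_tags VALUES
  (10, 'cycling'), (10, 'commute'), (11, 'hiking');
""")

# Same shape as the proposed query; group_concat(tag, ' ') is the
# SQLite spelling of GROUP_CONCAT(tag SEPARATOR ' ').
rows = c.execute("""
SELECT files.id, users.display_name, group_concat(tag, ' ')
FROM gpx_files AS files INNER JOIN users ON files.user_id = users.id
LEFT JOIN gpx_file_tags AS tags ON files.id = tags.gpx_id
WHERE private = 0 AND users.display_name != '' AND visible = 1
GROUP BY files.id
ORDER BY timestamp DESC
LIMIT 20
""").fetchall()
print(rows)
```

The private trace and the blank-named user's trace drop out, and each remaining trace appears once with its tags collapsed into one column.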
From MySQL's EXPLAIN output, a couple of new indexes would help, though I'm unsure how much impact they would have:
CREATE INDEX users_display_name_idx ON users(display_name);
CREATE INDEX gpx_files_visible_public_idx ON gpx_files(visible, public);
and this one from API 0.4:
CREATE INDEX gpx_file_tags_gpxid_idx ON gpx_file_tags(gpx_id);
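The effect of the composite index can be illustrated with SQLite's EXPLAIN QUERY PLAN (the planner differs from MySQL's, so this only demonstrates the general point that both equality conditions can be served by the `(visible, public)` index instead of a full table scan; the minimal schema is assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("""CREATE TABLE gpx_files
             (id INTEGER PRIMARY KEY, visible INTEGER, public INTEGER)""")

query = "SELECT * FROM gpx_files WHERE visible = 1 AND public = 1"

# Without the index the planner has to scan the whole table.
before = c.execute("EXPLAIN QUERY PLAN " + query).fetchall()

c.execute("CREATE INDEX gpx_files_visible_public_idx "
          "ON gpx_files(visible, public)")

# With the composite index both equality conditions become an index search.
after = c.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(before[-1][-1])
print(after[-1][-1])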
The text was updated successfully, but these errors were encountered:
Author: writetodan[at]yahoo.com [Added to the original trac issue at 8.57pm, Friday, 23rd March 2007]
A possible explanation for why hit is so big to system, is that even though paging through traces, a SELECT without LIMIT is made for calls to gpx_files() from the .rhtml templates (e.g. source:/www.openstreetmap.org/eruby/public_traces.rhtml@R926#L66 - the limit flag is not passed so defaults to not impose limit).
Perhaps this was done to provide a count of total number of traces available.
Looking more carefully at the row rendering code, it actually loops through all rows, i.e. all traces, even if not visible on this page: source:/www.openstreetmap.org/eruby/render_gpx_list.rhtml@R1236#L54
In summary, it looks like currently each visit to the public traces page results in a sub-selected query using 3 temp tables and 2 filesorts and some unindexed joining resulting in a widish select of 15,000 rows each pulled through the html templating row by row.
Reporter: writetodan[at]yahoo.com
[Submitted to the original trac issue database at 10.14pm, Thursday, 22nd March 2007]
found at source:/www.openstreetmap.org/ruby/api/osm/dao.rb@R1589#L431
seems hard to imagine that cross-reffing 20000 trace files and 5000 users could become a major drain on db, but that is apparently the situation. whatever the exact situation, the sql in here looks pretty grim. anyways, in the course of putting the traces stuff into 0.4 API i needed to check this code, so i thought i'd record results:
without the test data/environment to verify performance (need to be contending with usual profile of reads/writes against db), i'm loathe to check anything in, however, as a starting point the basic sql can be simplified somewhat ( + extra options at start of method to be included as WHERE conditions) to something like:
from mysql EXPLAIN it would help to have a couple of new indexes - though unsure of amount of impact:
and this one from API 0.4:
The text was updated successfully, but these errors were encountered: