Date: 2010-09-28 15:26:04 +0200
From: @grobian
To: SQL devs <>
Version: 2.40.1 (Oct2010) [obsolete]
CC: @njnes, @drstmane
Last updated: 2011-03-28 17:31:33 +0200
Comment 14947
Date: 2010-09-28 15:26:04 +0200
From: @grobian
sql>select max(emit_time) from sensor_readings;
+----------------------------+
| L14 |
+============================+
| 2010-09-28 13:16:25.000000 |
+----------------------------+
1 tuple (78.778ms)
sql>select now() - interval '30' second;
+----------------------------------+
| current_timestamp |
+==================================+
| 2010-09-28 13:16:09.000000+00:00 |
+----------------------------------+
1 tuple (0.236ms)
sql>select count() from sensor_readings where emit_time >= '2010-09-28 13:16:09.000000+00:00';
+------+
| L15 |
+======+
| 323 |
+------+
1 tuple (54.039ms)
sql>select count() from sensor_readings where emit_time >= now() - interval '30' second;
+------+
| L12 |
+======+
| 0 |
+------+
1 tuple (49.3s)
(notice the ~50 seconds; clearly a full scan is done)
I cannot reproduce this on an almost empty database, so it must be size related:
sql>select count() from sensor_readings;
+----------+
| L4 |
+==========+
| 22514029 |
+----------+
1 tuple (2.067ms)
sql>plan select count() from sensor_readings where emit_time >= now() - interval '30' second;
+------------------------------------------------------------------------------+
| rel |
+==============================================================================+
| project ( |
| | group by ( |
| | | select ( |
| | | | table(sys.sensor_readings) [ sensor_readings.src_ip, sensor_readings.e |
: mit_time, sensor_readings.%TID% NOT NULL ] :
| | | ) [ convert(sensor_readings.emit_time) >= sql_sub(current_timestamp, 30) |
: ] :
| | ) [ ] [ count NOT NULL as L5 ] |
| ) [ L5 NOT NULL ] |
+------------------------------------------------------------------------------+
7 tuples (0.488ms)
sql>
Comment 14948
Date: 2010-09-28 15:32:57 +0200
From: @grobian
After some thinking: the query does a full scan, which takes 50 seconds, so the 30-second limit never matches anything.
Comment 14949
Date: 2010-09-28 15:35:50 +0200
From: @grobian
workaround :)
sql>declare ts timestamp;
sql>set ts = now() - interval '30' second; select count() from sensor_readings where emit_time >= ts;
+------+
| L4 |
+======+
| 229 |
+------+
1 tuple (53.620ms)
sql>set ts = now() - interval '30' second; select count() from sensor_readings where emit_time >= ts;
+------+
| L5 |
+======+
| 234 |
+------+
1 tuple (51.480ms)
Comment 15014
Date: 2010-10-08 23:45:28 +0200
From: @drstmane
Did/could you profile (TRACE) the query with now() in the predicate to analyze where the time is spent?
Comment 15141
Date: 2010-10-28 13:49:49 +0200
From: @grobian
Tracing the query actually never ends (or takes more time than my patience allows).
Comment 15242
Date: 2010-12-01 23:23:45 +0100
From: @njnes
The table definition is missing. Debugging this query (using explain/plan etc.) requires all DDL statements.
Comment 15248
Date: 2010-12-02 18:29:39 +0100
From: @grobian
here you go:
CREATE TABLE "sensor_readings" (
"src_ip" VARCHAR(15),
"recv_time" TIMESTAMP,
"emit_time" TIMESTAMP,
"location" VARCHAR(30),
"type" VARCHAR(30),
"value" VARCHAR(30)
);
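As an aside, the DDL shows that emit_time is a plain TIMESTAMP, while now() yields a TIMESTAMP WITH TIME ZONE, which is why the plan above wraps the column in convert(...). A hedged sketch of a type-matched predicate, assuming localtimestamp (standard SQL; returns a zone-less TIMESTAMP) behaves as expected here:

```sql
-- localtimestamp is a plain TIMESTAMP, matching emit_time's declared type,
-- so no per-row conversion of the column should be needed:
SELECT count(*) FROM sensor_readings
WHERE emit_time >= localtimestamp - INTERVAL '30' SECOND;
```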
Comment 15256
Date: 2010-12-03 21:12:26 +0100
From: @njnes
Time is spent in a BAT-iterator loop that converts the timestamp column to a timestamp with time zone. This seems like a waste of time.
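To illustrate the diagnosis above, here is a toy Python sketch (not MonetDB code) of the two evaluation strategies: converting every column value to the constant's type per row, versus converting the constant once and comparing like types. Names and sizes are made up for illustration.

```python
import time
from datetime import datetime, timedelta, timezone

# Toy stand-in for the emit_time column: plain (naive) timestamps.
rows = [datetime(2010, 9, 28, 13, 0, 0) + timedelta(seconds=i)
        for i in range(200_000)]

# The bound, as a TIMESTAMP WITH TIME ZONE (aware) value.
bound = datetime(2010, 9, 28, 13, 16, 9, tzinfo=timezone.utc)

# Slow plan: convert every column value to the bound's type, then compare.
t0 = time.perf_counter()
slow = sum(1 for r in rows if r.replace(tzinfo=timezone.utc) >= bound)
t_slow = time.perf_counter() - t0

# Fast plan: convert the constant once, then compare like types directly.
naive_bound = bound.replace(tzinfo=None)
t0 = time.perf_counter()
fast = sum(1 for r in rows if r >= naive_bound)
t_fast = time.perf_counter() - t0

assert slow == fast  # same answer either way; only the work per row differs
```

Both plans return the same count; the per-row conversion only adds cost, which is why skipping it is safe.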
Comment 15257
Date: 2010-12-03 21:27:41 +0100
From: @njnes
The conversion can be skipped, which fixes the slowdown.
Comment 15260
Date: 2010-12-03 21:33:12 +0100
From: @njnes
Changeset 455912904d45, made by Niels Nes <niels@cwi.nl> in the MonetDB repo, refers to this bug.
For complete details, see http://dev.monetdb.org/hg/MonetDB?cmd=changeset;node=455912904d45
Changeset description:
avoid useless conversions (fixes Bug #2679).
Comment 15270
Date: 2010-12-04 09:54:00 +0100
From: @grobian
This is a bug in Oct2010, but it was fixed on default/current.
Could it be backported?
Comment 15274
Date: 2010-12-04 11:59:58 +0100
From: @njnes
Changeset 38195aa0606a, made by Niels Nes <niels@cwi.nl> in the MonetDB repo, refers to this bug.
For complete details, see http://dev.monetdb.org/hg/MonetDB?cmd=changeset;node=38195aa0606a
Changeset description:
avoid useless conversions (fixes Bug #2679). (transplanted from 455912904d45399b6514a0f9f9521054bbefcea7)
Comment 15275
Date: 2010-12-04 12:00:22 +0100
From: @grobian
manually backported this fix
Comment 15632
Date: 2011-03-28 17:31:33 +0200
From: @sjoerdmullender
The Mar2011 version has been released.