Weak duplicate elimination in string heaps > 64KB #6138

Closed
monetdb-team opened this issue Nov 30, 2020 · 0 comments
Labels: bug, GDK Kernel, normal

Comments


Date: 2016-12-02 12:21:24 +0100
From: Richard Hughes <<richard.monetdb>>
To: GDK devs <>
Version: 11.23.13 (Jun2016-SP2)
CC: arkr17997, rosecorthan23

Last updated: 2019-09-13 08:42:15 +0200

Comment 24748

Date: 2016-12-02 12:21:24 +0100
From: Richard Hughes <<richard.monetdb>>

Build is default branch 8086d2d529f2

varchar (and other variable-length columns) store their actual values in a separate .theap file, with a hash structure used to eliminate duplicate values. This works well for heaps smaller than 64KB; heaps larger than that, however, switch to an alternate set of heuristics whose implementation details make duplicate elimination much less effective than it should be. This causes unnecessarily high disk usage, with a corresponding performance decrease because more data needs to be accessed.
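For illustration, here is a minimal sketch of this kind of duplicate elimination (invented names, not GDK code): a small fixed-size table maps the hash of a string to the offset of an earlier copy, so inserting the same value again can reuse the existing bytes instead of appending a new copy.

/* Minimal sketch of hash-based duplicate elimination in a string heap.
 * Not GDK code: all names (strheap_sketch, heap_insert, ...) are invented. */
#include <stddef.h>
#include <string.h>

#define NBUCKETS 1024

typedef struct {
    char *base;              /* string storage, assumed large enough */
    size_t free;             /* next free offset; kept > 0 so 0 can mean "empty bucket" */
    size_t bucket[NBUCKETS]; /* hash -> offset of an earlier copy of that string */
} strheap_sketch;

static size_t hash_string(const char *s)
{
    size_t h = 5381;
    while (*s)
        h = 33 * h + (unsigned char) *s++;
    return h % NBUCKETS;
}

/* Insert a string; return the offset of an existing copy when the bucket
 * still points at an identical value, otherwise append. */
static size_t heap_insert(strheap_sketch *hp, const char *s)
{
    size_t b = hash_string(s);
    size_t off = hp->bucket[b];

    if (off != 0 && strcmp(hp->base + off, s) == 0)
        return off;                    /* duplicate eliminated */
    off = hp->free;
    strcpy(hp->base + off, s);
    hp->free += strlen(s) + 1;
    hp->bucket[b] = off;               /* remember for the next duplicate */
    return off;
}

Zeroing such a table loses no data, but the next insert of an already-stored value will miss the bucket and append a fresh copy, which is the growth described below.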

To reproduce:

sql>create table foo as select cast(value as varchar(64)) as v from generate_series(cast(0 as int),2500) with data;

sql>select count(*) from foo;insert into foo values ('dupe');select heapsize from sys.storage() where "table"='foo';

The second line should be run repeatedly. This is the easiest possible case of duplicate elimination (inserting the exact same value as last time), yet the heapsize still increases over time. The de-duplication appears to get lost during bl_restart(), so running that line once every 30 seconds shows the heapsize increasing each time.

The duplicate elimination breaks because BATload_intern() calls strCleanHash(), which memsets the entire hash table to zero: any time the BAT is unloaded and reloaded, all de-duplication is lost. The change in 4faed73ce142 made unload/reload happen significantly more frequently than it used to. The explanation for the hash zeroing appears in a comment in e6d90b529745: "heap may have been mmaped-ed, appended-by-force, and then corrupted by crash".

If that is the only reason for the wipe then I reckon this might work instead:

diff -r 8086d2d529f2 gdk/gdk_atoms.c
--- a/gdk/gdk_atoms.c Thu Dec 01 21:29:14 2016 +0100
+++ b/gdk/gdk_atoms.c Fri Dec 02 11:02:02 2016 +0000
@@ -1131,8 +1131,11 @@
 {
 	(void) rebuild;
 	if (!GDK_ELIMDOUBLES(h)) {
-		/* flush hash table for security */
-		memset(h->base, 0, GDK_STRHASHSIZE);
+		stridx_t *bucket = (stridx_t*)h->base;
+		int i;
+		for (i = 0; i < GDK_STRHASHTABLE; ++i)
+			if (bucket[i] >= h->free)
+				bucket[i] = 0;
 	} else {
 		/* rebuild hash table for double elimination
 		 *

If there are other reasons for the wipe (e.g. dealing with files from older versions which may be differently formatted) then that won't work. It will work for the case of a change to the hash function, although it won't be particularly effective until new values are inserted with the new hash function.

The only alternative I can currently think of is to rebuild the hash table based on the first 64KB of data and hope that the value distribution in the column hasn't changed too much over time. It may be worth, however, thinking about ways to avoid the cost of the 'else' branch too - rebuilding the whole hash table every time the BAT is loaded is not an insignificant amount of CPU usage.
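A rough sketch of that alternative, reusing the names visible in the patch above (Heap with base/free, stridx_t, GDK_STRHASHSIZE, GDK_STRHASHTABLE from gdk.h) and assuming, purely for illustration, that the data area starts right after the hash table and holds plain NUL-terminated strings, with bucket values taken as heap-relative offsets as the patch assumes (a later comment notes the real entries are page-relative); hash_string() stands in for whatever hash function the heap really uses:

#include "gdk.h"        /* Heap, stridx_t, GDK_STRHASHSIZE, GDK_STRHASHTABLE */
#include <string.h>

/* Illustrative stand-in for the heap's real string hash function. */
static size_t hash_string(const char *s)
{
    size_t h = 5381;
    while (*s)
        h = 33 * h + (unsigned char) *s++;
    return h % GDK_STRHASHTABLE;
}

/* Sketch: rebuild the hash from the first 64KB of string data instead of
 * wiping it, hoping later values have a similar distribution. */
static void rebuild_hash_from_prefix(Heap *h)
{
    stridx_t *bucket = (stridx_t *) h->base;
    size_t pos = GDK_STRHASHSIZE;            /* assumed start of the data area */
    size_t end = GDK_STRHASHSIZE + 64 * 1024;

    if (end > h->free)
        end = h->free;
    memset(bucket, 0, GDK_STRHASHSIZE);      /* start from a clean table */
    while (pos < end) {
        const char *s = h->base + pos;
        bucket[hash_string(s)] = (stridx_t) pos;
        pos += strlen(s) + 1;                /* assumed NUL-terminated layout */
    }
}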

Comment 24959

Date: 2017-02-03 10:06:10 +0100
From: MonetDB Mercurial Repository <>

Changeset 2ff5b4701e0e, made by Sjoerd Mullender <sjoerd@acm.org> in the MonetDB repo, refers to this bug.

For complete details, see http://dev.monetdb.org/hg/MonetDB?cmd=changeset;node=2ff5b4701e0e

Changeset description:

Only clean out the hash table at start of string heap when starting server.
The idea behind cleaning the hash at all is that we can't trust the
contents of the hash table after a server restart since we may have
been half way through changing the string heap when the server shut
down (new entries were added, but not committed, so on restart the
free pointer was reset and the hash table points into limbo).
We now only clean out the hash table at server start, not when
reloading the heap during normal operations.
This should fix bug #6138.
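A minimal sketch of that approach (simplified and not the actual changeset code: the flag here is a single static for illustration, whereas the real state would have to be tracked per heap):

#include "gdk.h"        /* Heap, GDK_STRHASHSIZE */
#include <stdbool.h>
#include <string.h>

/* Illustrative only: real code would track this per heap/BAT, not globally. */
static bool cleaned_since_server_start = false;

static void clean_string_hash_once(Heap *h)
{
    if (cleaned_since_server_start)
        return;        /* normal unload/reload: keep the hash, keep de-duplication */
    /* first load after a restart: the table may point at uncommitted data */
    memset(h->base, 0, GDK_STRHASHSIZE);
    cleaned_since_server_start = true;
}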

Comment 25080

Date: 2017-03-02 15:42:22 +0100
From: @sjoerdmullender

I implemented a different fix: only clean the hash table the first time a string heap is loaded after server restart.
Looking at your proposed fix I see that it can't work. The contents of the hash table are offsets within the last 64k "page" of the string heap, not offsets from the start of the string heap. This means that if a string heap grew across a page border before a server crash, the offsets are completely off.
I don't remember what exact circumstances caused me to conclude that we needed to clean out the hash table, but assuming that we really needed to do this, my current solution seems to me to be the best we can do.
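To make that concrete, here is a small worked example (64KB pages, invented numbers) of how a page-relative bucket value changes meaning once the heap grows across a page border, and why the range check in the proposed patch cannot catch it:

/* Worked example (illustrative numbers, 64KB pages) of why page-relative
 * hash entries cannot simply be range-checked against h->free. */
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    const size_t PAGE = 64 * 1024;
    size_t bucket = 1000;                         /* stored value: offset within the last page */

    size_t free_then = 200000;                    /* heap size when the entry was written */
    size_t page_then = free_then & ~(PAGE - 1);   /* 196608 */

    size_t free_now = 300000;                     /* heap later grew across a page border */
    size_t page_now = free_now & ~(PAGE - 1);     /* 262144 */

    printf("entry meant offset %zu, would now be read as %zu\n",
           page_then + bucket, page_now + bucket);
    /* The stored value (1000) is always far below h->free on a large heap,
     * so the patch's "bucket[i] >= h->free" test never notices it has gone
     * stale. */
    return 0;
}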


