segfault in drvgettable()
kcgthb opened this issue · 6 comments
Hi,
I'm getting a segmentation fault in drvgettable() at sqlite3odbc.c:1642 (commit 9049782) when running large SQL queries.
The setup is Stata 17 -> unixODBC 2.3.9 -> sqliteodbc 0.9998 -> SQLite 3.18.0, and the relevant part of the backtrace is as follows:
(gdb) bt
#0 0x00007f9fecb3f6bf in drvgettable (p=0x0, nparam=0, errp=<synthetic pointer>, ncolp=<synthetic pointer>, nrowp=0x62856f0, resp=0x6285700, sql=<optimized out>, s=0x6285638) at sqlite3odbc.c:1642
#1 drvexecute (stmt=0x6285638, initial=1) at sqlite3odbc.c:18739
#2 0x00007f9fecf8cbf5 in SQLExecDirectW (statement_handle=0x6284fa0, statement_text=<optimized out>, text_length=<optimized out>) at SQLExecDirectW.c:359
[...]
The particular SQL query that triggers this is a SELECT that returns about 8 GB of data. The same query works fine when executed directly in the sqlite3 shell, and limiting the number of records returned with LIMIT also works.
Any idea what may be causing this segfault?
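If I read the driver right, drvgettable() materializes the whole result set in memory before SQLExecDirect returns, similar in spirit to SQLite's own sqlite3_get_table(). A rough sketch of that kind of accumulation loop (illustration only, not the actual sqlite3odbc code) would look like this:

/* Illustration only -- NOT the actual sqlite3odbc code.  A get_table-style
 * fetch keeps every cell of the result set in memory before the caller
 * sees a single row, so an 8 GB SELECT has to fit in the process heap
 * all at once (plus a pointer and malloc overhead per cell). */
#include <sqlite3.h>
#include <stdlib.h>
#include <string.h>

char **fetch_all(sqlite3 *db, const char *sql, int *nrow, int *ncol)
{
    sqlite3_stmt *stmt;
    char **cells = NULL;
    int used = 0, alloced = 0;

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return NULL;
    *ncol = sqlite3_column_count(stmt);
    *nrow = 0;
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        for (int i = 0; i < *ncol; i++) {
            if (used >= alloced) {
                /* int-typed doubling like this is the kind of place where
                 * a very large result set can wrap or over-allocate */
                alloced = alloced ? alloced * 2 : 256;
                char **tmp = realloc(cells, (size_t)alloced * sizeof(char *));
                if (!tmp) { sqlite3_finalize(stmt); return cells; }
                cells = tmp;
            }
            const unsigned char *v = sqlite3_column_text(stmt, i);
            cells[used++] = v ? strdup((const char *)v) : NULL;
        }
        (*nrow)++;
    }
    sqlite3_finalize(stmt);
    return cells;
}

That would also fit the observation that LIMIT keeps it alive: it simply shrinks the buffer that has to be built up front.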
I can actually reproduce it with the unixODBC command-line tool, isql:
SQL> SELECT COUNT(*) FROM summary;
+-----------+
| COUNT(*) |
+-----------+
| 77344975 |
+-----------+
SQLRowCount returns 0
1 rows fetched
SQL> SELECT * FROM summary;
Segmentation fault
And to further narrow it down:
SELECT * FROM summary LIMIT 10650000;
works, but
SELECT * FROM summary LIMIT 10700000;
generates a segfault.
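In case it helps to reproduce this without isql or Stata, here is a bare ODBC program that goes through the same path. The DSN name summarydb is just a placeholder for whatever DSN points at this database; build with gcc repro.c -o repro -lodbc. (isql goes through the wide-character entry point SQLExecDirectW, but the narrow call should end up in the same drvexecute()/drvgettable() path.)

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;
    SQLRETURN rc;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* "summarydb" is a placeholder DSN -- replace with your own */
    rc = SQLConnect(dbc, (SQLCHAR *)"summarydb", SQL_NTS, NULL, 0, NULL, 0);
    if (!SQL_SUCCEEDED(rc)) {
        fprintf(stderr, "SQLConnect failed\n");
        return 1;
    }

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    /* The crash happens inside SQLExecDirect (drvexecute -> drvgettable),
     * before any SQLFetch is issued. */
    rc = SQLExecDirect(stmt, (SQLCHAR *)"SELECT * FROM summary", SQL_NTS);
    printf("SQLExecDirect returned %d\n", (int)rc);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}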
Is it something to do with a particular row in the data itself? For example, if you ran SELECT * FROM summary LIMIT 50000 OFFSET 10650000, does it error?
Thanks for the suggestion, @EionRobb, that's a very good point!
SELECT * FROM summary LIMIT 50000 OFFSET 10650000;
does work, so it doesn't seem to be related to the data itself.
And to test this further:
SELECT * FROM summary LIMIT 10650000 OFFSET 20000000;
works, while
SELECT * FROM summary LIMIT 10700000 OFFSET 20000000;
segfaults.
So it does seem to be related to the amount of data returned.
Moreover, it looks like the LIMIT threshold varies depending on the columns requested. Requesting just one column with short values (years) allows the query to complete without any LIMIT: SELECT pubyear FROM summary works, whereas SELECT * FROM summary segfaults.
So the issue seems to be with the amount of data returned, not the number of records.
Also, are you using 32-bit or 64-bit? If 32-bit, you might be hitting a 2 GB memory barrier?
This is on Linux, 64-bit.
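That said, a quick back-of-the-envelope from the numbers above: roughly 8 GB across 77,344,975 rows is about 100-110 bytes per row, so the ~10.7 million row threshold corresponds to about 1.1-1.2 GB of raw cell data. Add a pointer per cell and per-allocation overhead for an in-memory result buffer, and the heap footprint at that point could plausibly reach the 2 GB range, so an int-typed size or counter inside the driver could still wrap even in a 64-bit build. That's just speculation on my part, though.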