database/sql: permit large uint64 as parameters
xaprb opened this issue · 5 comments
The following program, which assumes a local MySQL server and a specific table with one column, will fail.
package main

import (
	"database/sql"

	_ "github.com/go-sql-driver/mysql"

	"log"
)

func main() {
	db, err := sql.Open("mysql", "root:@tcp(:3306)/")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var i uint64 = 1 << 63
	_, err = db.Exec("INSERT INTO test.hello VALUES(?)", i)
	if err != nil {
		log.Fatal(err)
	}
}
The error message is
sql: converting Exec argument #0's type: uint64 values with high bit set are not supported
I am not sure why this is enforced, although #6113 gives some hints and says if it's a problem, then it should be discussed. Let's discuss.
I'll start. I believe it is a problem, and quite a serious one. A lot of applications will start out small and eventually need 64-bit counters. People who have been bitten by this a few times will anticipate it and just start out with 64-bit counters, and what the heck, why not use 64-bit unsigned while we're at it. Or, people will use uint64 to store 64-bit checksums or other 64-bit pieces of data. In the application I work on, we have them all over the place, and a lot of them set the most significant bit. In any case, the problem is that things start out working just fine, and later they explode without warning.
We work around this by wrapping those parameters in fmt.Sprint(), but:
- This is ugly and it smells
- We still occasionally miss an instance of such a call and we find out after it gets into production

And this will probably never be a solved problem as long as the behavior still exists.
I don't know what to suggest to fix this, since I do not understand what defaultValueConverter is used for -- I gather it has a lot of functionality in different contexts.
It may break compatibility of database/sql. Handling uint64 means RowsAffected could become greater than 1<<63. (I know it is not realistic.)
You mean LastInsertId(), not RowsAffected, yes? Because I don't see how this would change RowsAffected in any way.
If you updated more rows than fit in int64's positive range, RowsAffected couldn't store the row count. Right?
But what's preventing you from updating that many rows today?
There are databases (sqlite for example) that do not support such large values.
That is why database/sql does not allow this by default.
We cannot change that decision now, because it will break all existing drivers.
If MySQL does support such large values, then the MySQL driver can make its implementation of driver.Stmt implement driver.ColumnConverter and allow uint64s in a custom ValueConverter.
That is, a MySQL driver that wants to allow large uint64s can do so today.
There's no code change needed in database/sql.