Constraint error: (201) Duplicate keys are not allowed
fezzzza opened this issue · 8 comments
I have read #155 and #172 in some detail and I believe I have a problem similar to #155.
Constraint error: (201) Duplicate keys are not allowed, index: monos.pkMonos, key: "session"
Unfortunately I am not able to reproduce in a simple example I have tried to draw up for you.
I note this comment in #172:
> The fact that you are using `insertOrReplace()` means that if you insert a row with the same PK, it should replace an existing row instead of complaining. So, yes, that would be a bug if it is not happening.
...so it does appear that this is a bug. My schema looks a bit like this:
```javascript
schemaBuilder = lf.schema.create('myDatabase', 1);

schemaBuilder.createTable('fw_mime_types')
  .addColumn('id', lf.Type.INTEGER)
  .addColumn('mime_type', lf.Type.STRING)
  .addColumn('file_extension', lf.Type.STRING)
  .addColumn('popular', lf.Type.INTEGER)
  .addColumn('created_timestamp', lf.Type.INTEGER)
  .addColumn('modified_timestamp', lf.Type.STRING)
  .addColumn('modified_session_link', lf.Type.INTEGER)
  .addNullable(['mime_type', 'file_extension', 'popular', 'created_timestamp', 'modified_timestamp', 'modified_session_link'])
  .addPrimaryKey(['id']);

schemaBuilder.createTable('mono')
  .addColumn('key', lf.Type.STRING)
  .addColumn('value', lf.Type.NUMBER)
  .addPrimaryKey(['key']);

schemaBuilder.createTable('monos')
  .addColumn('key', lf.Type.STRING)
  .addColumn('value', lf.Type.STRING)
  .addColumn('created_timestamp', lf.Type.INTEGER)
  .addColumn('modified_timestamp', lf.Type.INTEGER)
  .addColumn('modified_session_link', lf.Type.INTEGER)
  .addNullable(['value', 'created_timestamp', 'modified_timestamp', 'modified_session_link'])
  .addPrimaryKey(['key']);

schemaBuilder
  .connect()
  .then(function(newDb) {
    db = newDb;
    db_mono = db.getSchema().table('mono');
    db_monos = db.getSchema().table('monos');
  });
```
...though my real schema has about 10 tables in total, with greater complexity.
The main similarity to #155 is that I am using a STRING as a key, and not an INTEGER as most people would.
The error occurs when I try to insertOrReplace a second time using the same key, which in this case is "session".
There are also a few other reads and writes to and from the same table (and various other tables) in between the insertOrReplace calls on that key.
If, after the database is first created, I change the hard-coded key name to another string, the error no longer occurs, no matter how many times I reload the app in the browser; that is, the new key is written while the read routine continues to read the row with the original key.
If I delete the database from the console and reload, the error occurs on the second reload or on any subsequent insertOrReplace on that key.
If I use a numerical string "0" as the key, the error does not occur.
If I replace the hard-coded occurrences of "session" with a variable containing "session" in the line `newData[0]["key"]="session";`, the error also does not occur.
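For what it's worth, a string literal and a variable holding the same string are indistinguishable once evaluated in JavaScript, so the literal-vs-variable swap by itself shouldn't change the key the index sees; a small sanity check:

```javascript
// A literal and a variable holding the same string produce identical values,
// so swapping one for the other cannot change the key handed to the index.
var sessionKey = "session";
console.log(sessionKey === "session");            // true
console.log(JSON.stringify({ key: sessionKey }) ===
            JSON.stringify({ key: "session" }));  // true
```

This suggests the difference observed above more likely came from some other part of the code path changing at the same time.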
```javascript
function dbStoreArray(tableName, index, array) {
  var newData = this[tableName];
  var row = [];
  var newRows = [];
  if (tableName == "session") {
    tableName = "monos";
    newData = {};
    newData[0] = {};
    newData[0]["key"] = "session";
    var myData = this["session"]["000"];
    newData[0]["value"] = JSON.stringify(myData);
  }
  var table = db.getSchema().table(tableName);
  var keys = Object.keys(newData);
  var l = keys.length;
  for (var n = 0; n < l; n++) {
    var currentRow = newData[keys[n]];
    row = table.createRow(currentRow);
    newRows.push(row);
  }
  db
    .insertOrReplace()
    .into(table)
    .values(newRows)
    .exec()
    .then(function() {
      console.log("insert success");
    })
    .catch(function(error) {
      console.log("fail", error);
    });
  newRows = [];
}
```
This is the function that is called to store the data. It iterates over a JavaScript object of 8 sets of data (objects) passed from the server, using jQuery.ajax like this:
```javascript
$.ajax({
  method: "GET",
  url: "dbSync.php",
  data: { "l": latest }
})
.done(function(response) {
  var result = JSON.parse(response);
  Object.keys(result).forEach(dbStoreArray, result);
});
```
The first 7 of the 8 sets are stored in the database just fine; the last one, which contains the session data, is not. The session data is added on the server using a slightly different method from the other 7, but I have verified that the data/JSON format passed from the server and into the database is consistent at every stage.
I am testing on Chrome for Linux version 71.0.3578.98 (Official Build) (64-bit) on Linux Mint 19.1
I hope that gives you enough to go on.
Does this still happen even after fixing your data to provide numbers where numbers are expected?
This is unrelated to #247. This concerns the table "monos", which has a string PK "key" and a string value "value". I have since implemented a workaround now that I understand the risk of error in certain situations, but there is an element of fragility here: while filing this issue I was testing various scenarios to give you more detail, and when I went back to the original issue the bug had disappeared. There are several places in the code where I had to replace that hard-coded "session" on the read and write stages, though, and maybe I didn't get them all at the time. However, I'm quite sure error 201 should not be thrown if I am submitting a duplicate PK in a row sent to .insertOrReplace().
To answer your question completely: at the time, strings WERE being fed into an INTEGER key on an unrelated table, and this was prior to my implementation of .orderBy() as in #247.
Can you provide a more minimal repro? Otherwise it is very hard to debug the issue you are having without a complete example. Specifically, the following would really help:
- Schema you are using (can you reduce this to a single table, if the problem only occurs with the "monos" table?)
- Code you are using to populate and query the DB
- The data you are using
I spent a couple of hours putting exactly that together in an attempt to demonstrate it, but was unable to reproduce it in separate code. I have a feeling this could be because it's related to the volume of reads and writes to the table (not that there is a great volume, just a couple a minute). I put the following together with a crazy endless loop that will eventually blow the stack, just to apply some pressure to Lovefield, but in my environment I can't get it to crack...
Just FYI, I just got this error again in a separate part of the code where I was using plain old .insert() to insert a row containing a unique string into a PK of type lf.Type.INTEGER:
Constraint error: (201) Duplicate keys are not allowed, index: reasons_fields_links.pkReasons_fields_links, key: "Rfaf8e73efaaed"
The problem is solved by selecting a numeric value as the PK instead. But, given that there is no actual type checking and that, from what you say, it SHOULD be OK to insert a string into a key of type lf.Type.INTEGER, it just goes to show that this bug is more prone to being tickled by the insertion of strings as opposed to integers.
If it helps, the entire content of this table had previously been read only once and never written to since .connect(), maybe 5-10 seconds before the attempted write.
> given that there is no actual type checking and from what you say it SHOULD be ok to insert a string into a key of type lf.Type.INTEGER
That is definitely not true. The fact that Lovefield skips type-checking for performance reasons does not mean that it is OK to enter STRING values in an INTEGER or NUMBER column. All it means is that you are entering unspecified behavior, meaning behavior that is not specified anywhere and should not be expected to work correctly. Even more so if you are mixing numbers and strings in the same column.
What actually happens in that case is that the index holds both types of values, and querying the index yields invalid results. The assumption is that you are responsible for entering the correct type of data into your table. Beyond the type, Lovefield performs other checks according to the schema to ensure your DB is consistent (nullability checks for non-nullable columns, uniqueness checks for unique columns).
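As a plain-JavaScript sketch (not Lovefield code) of why an index holding mixed key types misbehaves: comparing a non-numeric string against a number coerces the string to NaN, so both orderings come out false and such keys cannot be consistently ordered.

```javascript
// A non-numeric string compared with a number coerces to NaN,
// so both directions are false: the two keys are unorderable.
console.log("session" < 5);   // false
console.log("session" > 5);   // false

// Numeric strings coerce cleanly and stay consistently ordered,
// which matches the observation that "0" as a key caused no trouble.
console.log("0" < 5);         // true
console.log("10" > 5);        // true

// A comparator over mixed keys returns 0 for unorderable pairs, so a
// sort (or, roughly, a comparison-based index) cannot place them reliably.
var mixed = [5, "session", "0", 10];
mixed.sort(function(a, b) { return a < b ? -1 : a > b ? 1 : 0; });
console.log(mixed);
```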
OK, I've been able to get a reproducible handle on this, and it appears that this only occurs when mixing strings and integers in a PK column:
- If you add a string to a STRING column full of numbers, it breaks
- If you add a string in a STRING column full of numeric strings, that's fine
- If you add a string to an INTEGER column of integers, it breaks
- If you add an integer to a STRING column full of numeric strings, it's fine
- If you add an integer to a STRING column full of strings, it breaks
- If you add an integer to an INTEGER column full of strings, it breaks
- If you add an integer to a INTEGER column full of numeric strings, it's fine
I think I can see a pattern here! It's safe to say that one ought to be doing type checking/correction/handling oneself before insertion.
So yes, these errors do seem to be born of my poor assumptions about column types, and so are largely my own fault, but it would have been helpful if the error message didn't mislead.
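A minimal sketch of the kind of pre-insert type handling suggested above; `coerceKey` and the string type names are hypothetical helpers for illustration, not part of Lovefield's API:

```javascript
// Hypothetical helper: coerce a value to the declared column type before
// handing it to table.createRow(), so the PK index only ever sees one type.
function coerceKey(value, type) {
  if (type === "INTEGER" || type === "NUMBER") {
    var n = Number(value);
    if (Number.isNaN(n)) {
      throw new TypeError("Cannot store '" + value + "' in a " + type + " column");
    }
    return n;
  }
  if (type === "STRING") {
    return String(value);
  }
  return value;
}

console.log(coerceKey("5", "INTEGER"));     // 5 (a number, not "5")
console.log(coerceKey(0, "STRING"));        // "0"
try {
  coerceKey("Rfaf8e73efaaed", "INTEGER");   // a key like the one in the report
} catch (e) {
  console.log(e.message);                   // fails fast instead of corrupting the index
}
```

Failing fast here would surface the real problem (a string headed for an INTEGER PK) instead of the misleading duplicate-key error 201.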