db error (1062): Duplicate entry '64027' for key 'PRIMARY'
tconstans opened this issue · 9 comments
Dear Pantelis, next time think twice before allowing me to abuse the issue tracker ;)
On another echofish loghost server I get the following error in /var/log/messages:
"Jun 18 07:40:01 LOGOHST rsyslogd: db error (1062): Duplicate entry 'XXXXXXXX' for key 'PRIMARY'"
I think I get it because I am manually and regularly purging my archive table:
show create event evt_truncate_archive_table \G
*************************** 1. row ***************************
Event: evt_truncate_archive_table
sql_mode:
< .....>
Create Event: CREATE EVENT evt_truncate_archive_table
ON SCHEDULE EVERY 1 WEEK STARTS '2014-05-26 09:21:08' ON COMPLETION NOT PRESERVE ENABLE DO truncate table archive
<....>
Any pointers to solve this issue?
Best regards
Which table do you get the error for?
The archive id is also kept on archive_unparse and is in sync with the syslog id (so syslog and archive IDs match).
Either delete the archive entries you're not interested in, or you'll also need to truncate archive_unparse and syslog (although that is not suggested).
Archive rotation is something we've also been working on for the next version. The most "elegant" solution we could find was based on partitioned tables and partition rotation, but we need a bit more research for that one.
Nevertheless we'll keep you posted :)
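For reference, a rough sketch of the partition idea. This is hypothetical: it assumes archive can be range-partitioned on received_ts, and MySQL would also require received_ts to become part of the primary key for this to be accepted, which is exactly the part that still needs research:
-- Hypothetical sketch only: range-partition `archive` by day of received_ts.
-- NOTE: the current PRIMARY KEY (id) would have to be extended to (id, received_ts) first.
ALTER TABLE `archive`
  PARTITION BY RANGE (TO_DAYS(received_ts)) (
    PARTITION p20140623 VALUES LESS THAN (TO_DAYS('2014-06-24')),
    PARTITION p20140624 VALUES LESS THAN (TO_DAYS('2014-06-25')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
  );
-- Rotation then becomes dropping the oldest partition instead of a bulk DELETE:
ALTER TABLE `archive` DROP PARTITION p20140623;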
As a fix, I did the following:
delete from syslog where id not in (select id from archive) ;
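If archive_unparse ended up with stale rows too, I would guess a similar statement applies, assuming archive_unparse keys its rows on the same id as archive (the corrected event further down joins them that way); untested:
delete from archive_unparse where id not in (select id from archive) ;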
I was about to start testing the following if you're interested :)
DROP EVENT IF EXISTS e_rotate_archive//
CREATE EVENT e_rotate_archive
ON SCHEDULE EVERY 1 DAY COMMENT 'ROTATE OLD ARCHIVE ENTRIES' DO
BEGIN
CREATE TEMPORARY TABLE archive_ids (id BIGINT UNSIGNED NOT NULL PRIMARY KEY) /* ENGINE=MEMORY */;
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
INSERT INTO archive_ids SELECT id FROM `archive` WHERE received_ts < NOW() - INTERVAL 7 DAY /* LIMIT 100000 */ ; -- Uncomment the LIMIT in case of memory limitations
-- Ignore IDs from entries that exist on archive_unparse
DELETE t1.* FROM archive_ids as t1 LEFT JOIN archive_unparse AS t2 ON t1.id=t2.archive_id WHERE t2.archive_id IS NOT NULL;
-- Ignore IDs from entries that exist on syslog
DELETE t1.* FROM archive_ids as t1 LEFT JOIN syslog AS t2 ON t1.id=t2.id WHERE t2.id IS NOT NULL;
-- Ignore IDs from entries that exist on abuser_evidence
DELETE t1.* FROM archive_ids as t1 LEFT JOIN abuser_evidence AS t2 ON t1.id=t2.archive_id WHERE t2.archive_id IS NOT NULL;
DELETE t1.* FROM `archive` AS t1 LEFT JOIN archive_ids AS t2 ON t1.id=t2.id WHERE t2.id IS NOT NULL;
COMMIT;
END
//
Here is an updated version with correct field names.
DROP EVENT IF EXISTS e_rotate_archive//
CREATE EVENT e_rotate_archive
ON SCHEDULE EVERY 1 DAY COMMENT 'ROTATE OLD ARCHIVE ENTRIES' DO
BEGIN
CREATE TEMPORARY TABLE archive_ids (id BIGINT UNSIGNED NOT NULL PRIMARY KEY) /* ENGINE=MEMORY */;
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
INSERT INTO archive_ids SELECT id FROM `archive` WHERE received_ts < NOW() - INTERVAL 7 DAY /* LIMIT 100000 */ ; -- Uncomment the LIMIT in case of memory limitations
-- Ignore IDs from entries that exist on archive_unparse
DELETE t1.* FROM archive_ids as t1 LEFT JOIN archive_unparse AS t2 ON t1.id=t2.id WHERE t2.id IS NOT NULL;
-- Ignore IDs from entries that exist on syslog
DELETE t1.* FROM archive_ids as t1 LEFT JOIN syslog AS t2 ON t1.id=t2.id WHERE t2.id IS NOT NULL;
-- Ignore IDs from entries that exist on abuser_evidence
DELETE t1.* FROM archive_ids as t1 LEFT JOIN abuser_evidence AS t2 ON t1.id=t2.archive_id WHERE t2.archive_id IS NOT NULL;
DELETE t1.* FROM `archive` AS t1 LEFT JOIN archive_ids AS t2 ON t1.id=t2.id WHERE t2.id IS NOT NULL;
COMMIT;
END
//
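One prerequisite worth mentioning, assuming a stock MySQL configuration: the event scheduler must be enabled, otherwise the event will never fire:
-- Check whether the scheduler is running and enable it if necessary (needs SUPER)
SHOW VARIABLES LIKE 'event_scheduler';
SET GLOBAL event_scheduler = ON;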
Done
On 19/06/2014 15:34, Pantelis wrote:
Here is an updated version with correct field names.
It would be better with a DELIMITER // at the beginning.
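Something along these lines, with the event body from above pasted in place of the placeholder comment, so the semicolons inside BEGIN ... END are not treated as statement terminators:
DELIMITER //
-- paste the DROP EVENT / CREATE EVENT ... END // block from above here, unchanged
DELIMITER ;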
Trying it out now on my 2 echofish servers.
Best regards
On 19/06/2014 15:34, Pantelis wrote:
[quoted the updated e_rotate_archive event above]
Hi Thomas,
Once you have week-old archived logs, the event above will run daily to trim anything in the archive that is older than 7 days. If you can confirm it works, please let us know!
Thanks,
George
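A quick way to confirm the event is registered and actually being executed (using the echofish schema name shown in your MySQL prompt):
SHOW EVENTS FROM echofish;
-- Or query information_schema for the status and last execution time
SELECT event_name, status, last_executed
  FROM information_schema.events
 WHERE event_schema = 'echofish';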
Looks good:
MySQL [echofish]> select now() ;
+---------------------+
| now()               |
+---------------------+
| 2014-06-30 23:11:50 |
+---------------------+
1 row in set (0.00 sec)
MySQL [echofish]> SELECT id FROM archive
WHERE received_ts < NOW() - INTERVAL 7 DAY ;
Empty set (0.05 sec)
MySQL [echofish]> select id, received_ts from archive order by 2 limit 1 ;
+----+---------------------+
| id | received_ts         |
+----+---------------------+
|  5 | 2014-06-25 03:08:29 |
+----+---------------------+
1 row in set (0.10 sec)
Thank you very much! Closing this issue as resolved.