Dump: bigint type issue
lightsuner opened this issue · 12 comments
Versions
MySQL: 8.4
databack/mysql-backup: 738cc8f
Issue
I've just unpacked one of my recent backups and spotted an issue: for some reason, during a backup, the tool cannot properly convert the bigint type. Please check this excerpt from the backup file:
--
-- Table structure for table `languages`
--
DROP TABLE IF EXISTS `languages`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE `languages` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`code` varchar(2) NOT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `languages_code_unique` (`code`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
/*!40101 SET character_set_client = @saved_cs_client */;
--
-- Dumping data for table `languages`
--
LOCK TABLES `languages` WRITE;
/*!40000 ALTER TABLE `languages` DISABLE KEYS */;
INSERT INTO `languages` (`id`, `code`, `created_at`, `updated_at`) VALUES ('%!s(*uint64=0xc000598008)','en','2024-05-19 22:20:46','2024-05-19 22:20:46');
/*!40000 ALTER TABLE `languages` ENABLE KEYS */;
UNLOCK TABLES;
So, as you can see, instead of (I guess) 1, it dumped %!s(*uint64=0xc000598008). It seems like a memory reference.
Nice catch. Can you write up a simple way to recreate it? Give a simple SQL file to load into a fresh DB that causes this when dumped?
@deitch will try
Hello @deitch, here is how to reproduce the bug.
You need to create two files:
- compose.yml
- init_db.sql
compose.yml
version: '3.8'
networks:
  default:
    driver: bridge
    name: bug
services:
  mysql:
    image: mysql:8.4
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'root_password'
    ports:
      - '3306:3306'
    volumes:
      - ./init_db.sql:/docker-entrypoint-initdb.d/init_db.sql
    networks:
      - default
init_db.sql
CREATE DATABASE IF NOT EXISTS main CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
USE `main`;
CREATE TABLE `languages`
(
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`code` varchar(2) NOT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `languages_code_unique` (`code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
INSERT INTO `languages` (`code`, `created_at`, `updated_at`) VALUES ('en', NOW(), NOW());
INSERT INTO `languages` (`code`, `created_at`, `updated_at`) VALUES ('de', NOW(), NOW());
Run docker compose:
docker compose up -d
And then run the command:
docker run --network=bug \
-v ${PWD}/backups:/backups \
-e DB_SERVER=mysql \
-e DB_USER='root' \
-e DB_PORT='3306' \
-e DB_NAMES='main' \
-e DB_PASS='root_password' \
-e DB_DUMP_TARGET='/backups' \
databack/mysql-backup:738cc8f5f77947dc83ae8eb69228d27674fdd493 dump --once
Afterwards, check the ./backups folder.
Thanks in advance!
I just ran it, and had no issues 🤷♂️
Here is what I did:
$ docker run -d -p 3306:3306 --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e MYSQL_DATABASE=tester -e MYSQL_USER=testadmin -e MYSQL_PASSWORD=examplePass123 mysql:8.4
$ docker exec -t mysql bash
# mysql -hlocalhost
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 18
Server version: 8.4.0 MySQL Community Server - GPL
Copyright (c) 2000, 2024, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use tester;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> CREATE TABLE `languages`
-> (
-> `id` bigint unsigned NOT NULL AUTO_INCREMENT,
-> `code` varchar(2) NOT NULL,
-> `created_at` timestamp NULL DEFAULT NULL,
-> `updated_at` timestamp NULL DEFAULT NULL,
-> PRIMARY KEY (`id`),
-> UNIQUE KEY `languages_code_unique` (`code`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT INTO `languages` (`code`, `created_at`, `updated_at`) VALUES ('en', NOW(), NOW());
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO `languages` (`code`, `created_at`, `updated_at`) VALUES ('de', NOW(), NOW());
Query OK, 1 row affected (0.00 sec)
mysql> exit
# exit
$ mkdir dumper
$ go run . dump --server localhost --target $(pwd)/dumper --pass examplePass123 --user testadmin --once
$ ls -l dumper/
total 4
-rw-rw-r-- 1 ubuntu ubuntu 996 Jun 3 13:03 db_backup_2024-06-03T13:03:03Z.tgz
$ tar ztvf dumper/dumper/db_backup_2024-06-03T13\:03\:03Z.tgz
-rw-rw-r-- ubuntu/ubuntu 2442 2024-06-03 13:03 tester_2024-06-03T13:03:03Z.sql
That all looks pretty sane. I also retried it after checking out your suggested commit 738cc8f5f77947dc83ae8eb69228d27674fdd493. Same thing.
Hello @deitch,
Git commit
commit 738cc8f5f77947dc83ae8eb69228d27674fdd493 (HEAD, origin/master, origin/HEAD, master)
Merge: fda8c89 39e684d
Author: Avi Deitcher <deitch@users.noreply.github.com>
Date: Fri May 10 11:30:24 2024 +0300
Merge pull request #312 from databacker/rename-config-fields
configuration fields to camelCase
My Actions
I've tried to run the app directly, without Docker, and got the same error:
go run . dump --server localhost --target $(pwd)/../backups --pass root_password --user root --include main --once
Result
--
-- Dumping data for table `languages`
--
LOCK TABLES `languages` WRITE;
/*!40000 ALTER TABLE `languages` DISABLE KEYS */;
INSERT INTO `languages` (`id`, `code`, `created_at`, `updated_at`) VALUES ('%!s(*uint64=0xc0002902c8)','en','2024-06-08 19:00:56','2024-06-08 19:00:56'),('%!s(*uint64=0xc0002902c8)','de','2024-06-08 19:00:56','2024-06-08 19:00:56');
/*!40000 ALTER TABLE `languages` ENABLE KEYS */;
UNLOCK TABLES;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
Investigation
Also, I stepped through the functions with a debugger. In the RowBuffer() function in pkg/database/mysql/table.go, it shows that the value has type uint64 instead of sql.NullInt64, so it was processed by the default branch:
default:
fmt.Fprintf(&b, "'%s'", value)
In func reflectColumnType(tp *sql.ColumnType) reflect.Type from pkg/database/mysql/table.go, the debugger shows:
- It skipped the switch tp.ScanType().Kind() switch
- It skipped the switch tp.DatabaseTypeName() switch
- It ended up at return tp.ScanType()
@deitch Could the problem be in the function reflectColumnType, which cannot process unsigned int64?
Oh, I definitely got confused about your issue. I see it now. I am glad we caught this pre-1.0.0 GA.
Aha. You had the right part highlighted; the issue is in table.go.
The problem, however, isn't BIGINT, which, according to the official MySQL docs, has a maximum value of int64 (signed) or uint64 (unsigned). In other words, 64 bits are fine for this, and it is handled correctly as sql.NullInt64 here.
The issue is UNSIGNED. There is no handling for it.
We can add handling code for it in that section, but I am not sure what type it should return. A signed BIGINT is sql.NullInt64, which makes sense. What about UNSIGNED? That should become a uint64, but it isn't one of the available null types from that package.
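For reference, a dumper could define its own nullable wrapper by implementing sql.Scanner. NullUint64 here is a hypothetical type mirroring sql.NullInt64, not something database/sql provides:

```go
package main

import (
	"database/sql"
	"fmt"
)

// NullUint64 is a hypothetical nullable uint64, mirroring sql.NullInt64.
type NullUint64 struct {
	Uint64 uint64
	Valid  bool
}

// Scan implements sql.Scanner so the type can be used as a scan target.
func (n *NullUint64) Scan(value any) error {
	if value == nil {
		n.Uint64, n.Valid = 0, false
		return nil
	}
	switch v := value.(type) {
	case uint64:
		n.Uint64, n.Valid = v, true
	case int64:
		n.Uint64, n.Valid = uint64(v), true
	default:
		return fmt.Errorf("cannot scan %T into NullUint64", value)
	}
	return nil
}

var _ sql.Scanner = (*NullUint64)(nil) // compile-time interface check

func main() {
	var n NullUint64
	_ = n.Scan(uint64(18446744073709551615)) // max uint64, beyond int64 range
	fmt.Println(n.Uint64, n.Valid)
}
```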
Ah, I might have missed something. The Scan() function actually handles it just fine, converting it into a *uint64, which makes sense. Mind, that is not nullable like a NullInt64, but I think that is OK for now. The issue actually arises when printing it out here. It handles all of the nullable types, but none of the native ones.
So the answer appears to be:
- Add support for UNSIGNED to the nullable types in reflectColumnType(), maybe. This doesn't block us here.
- Add support for the base types in RowBuffer(). That would get this to do the right thing.
Done