pingcap/tidb

Incompatible query results with UNHEX function

sayJason opened this issue · 7 comments

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

DROP TABLE IF EXISTS t1;
CREATE TABLE t1(c1 INT);
INSERT INTO t1 VALUES (1);
SELECT c1 FROM t1 WHERE UNHEX(2038330881); -- {}, expected: {1}
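For context, here is a minimal Python sketch (not TiDB code) of why the predicate should be true. It assumes the MySQL semantics that `UNHEX` coerces its argument to a string and decodes hex pairs, and that a string in a boolean context is cast to `DOUBLE` using its longest leading numeric prefix. The helper names are hypothetical.

```python
import re

def mysql_unhex(arg):
    """Mimic MySQL UNHEX: coerce the argument to a string, decode hex pairs."""
    s = str(arg)
    if len(s) % 2 or any(c not in "0123456789abcdefABCDEF" for c in s):
        return None  # MySQL returns NULL for invalid hex input
    return bytes.fromhex(s)

def mysql_truthy(b):
    """Mimic a string in a boolean context: cast to DOUBLE using the
    longest leading numeric prefix (leading whitespace skipped), then != 0."""
    m = re.match(rb"\s*[-+]?\d*\.?\d*", b)
    prefix = m.group().strip() if m else b""
    try:
        return float(prefix.decode("ascii")) != 0.0
    except ValueError:
        return False  # no numeric prefix -> 0 -> false

decoded = mysql_unhex(2038330881)
print(decoded)                # b' 83\x08\x81' (5 bytes)
print(mysql_truthy(decoded))  # True -> the row {1} should be returned
```

The digits `2038330881` decode to the bytes `0x20 0x38 0x33 0x08 0x81`; the leading-prefix cast yields 83, which is non-zero, so the `WHERE` clause should match.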

2. What did you expect to see? (Required)

SELECT returns {1}

3. What did you see instead (Required)

SELECT returns empty

4. What is your TiDB version? (Required)

Release Version: v7.2.0
Edition: Community
Git Commit Hash: 9fd5f4a
Git Branch: heads/refs/tags/v7.2.0
UTC Build Time: 2023-06-27 15:04:42
GoVersion: go1.20.5
Race Enabled: false
Check Table Before Drop: false
Store: tikv

It's a regression; it cannot be reproduced on v7.1.0, v6.5.3, or v6.1.6.

mysql> SELECT c1 FROM t1 WHERE UNHEX(2038330881); -- {}, expected: {1}
Field   1:  `c1`
Catalog:    `def`
Database:   `test`
Table:      `t1`
Org_table:  `t1`
Type:       LONG
Collation:  binary (63)
Length:     11
Max_length: 0
Decimals:   0
Flags:      NUM

mysql> show warnings;
+---------+------+----------------------------------------+
| Level   | Code | Message                                |
+---------+------+----------------------------------------+
| Warning | 1406 | Data Too Long, field len 0, data len 5 |
| Warning | 1292 | Truncated incorrect DOUBLE value: ''   |
+---------+------+----------------------------------------+
2 rows in set (0.00 sec)
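The two warnings hint at the mechanism: the reused plan apparently carries a stale result-field length of 0 for the `UNHEX` column, so its 5-byte value is truncated to an empty string, which then casts to `DOUBLE` 0 and makes the predicate false. A hypothetical Python sketch (not TiDB internals) of that chain:

```python
FIELD_LEN = 0  # stale field length apparently recorded in the cached plan

def truncate(value, field_len):
    """Mimic the 'Data Too Long, field len 0, data len 5' truncation."""
    return value[:field_len]

def to_double(b):
    """Mimic 'Truncated incorrect DOUBLE value': an empty string casts to 0."""
    try:
        return float(b.decode("ascii"))
    except ValueError:
        return 0.0

unhex_result = bytes.fromhex("2038330881")     # 5 bytes, matches "data len 5"
truncated = truncate(unhex_result, FIELD_LEN)  # b'' -> "data too long" warning
print(to_double(truncated) != 0.0)             # False -> the row is filtered out
```

This is only a reading of the warnings; the actual fix belongs in the plan cache / planner code.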

mysql> select  UNHEX(2038330881);
Field   1:  `UNHEX(2038330881)`
Catalog:    `def`
Database:   ``
Table:      ``
Org_table:  ``
Type:       VAR_STRING
Collation:  binary (63)
Length:     5
Max_length: 5
Decimals:   31
Flags:      NOT_NULL BINARY

+--------------------------------------+
| UNHEX(2038330881)                    |
+--------------------------------------+
| 0x2038330881                         |
+--------------------------------------+
1 row in set (0.00 sec)

Same root cause as #45253; changing the label to sig/planner.

qw4990 commented

@sayJason Hi Jason, thanks for your feedback.
It seems like your testing tool is very powerful and has already found several optimizer bugs for TiDB.
Can we know how you conduct these tests so that we can port them into our internal test cases to improve our quality? Please reply if it's convenient for you.

qw4990 commented

@sayJason All of these issues are caused by a non-GA feature named non-prepared plan cache. It's not GA, but we enable it on recent non-LTS versions for testing purposes. If you don't want to use this feature and want to make your testing more stable, you can disable it with SET GLOBAL tidb_enable_non_prepared_plan_cache = OFF;

> @sayJason All of these issues are caused by a non-GA feature named non-prepared plan cache. It's not GA, but we enable it on recent non-LTS versions for testing purposes. If you don't want to use this feature and want to make your testing more stable, you can disable it with SET GLOBAL tidb_enable_non_prepared_plan_cache = OFF;

Thanks for your advice. I'll follow it.

> @sayJason Hi Jason, thanks for your feedback. It seems like your testing tool is very powerful and has already found several optimizer bugs for TiDB. Can we know how you conduct these tests so that we can port them into our internal test cases to improve our quality? Please reply if it's convenient for you.

Absolutely yes. However, our work is still under research and unpublished. Once it is released, I'll be very pleased to share all the details. Your comments are so encouraging. Thanks again.

qw4990 commented

> @sayJason Hi Jason, thanks for your feedback. It seems like your testing tool is very powerful and has already found several optimizer bugs for TiDB. Can we know how you conduct these tests so that we can port them into our internal test cases to improve our quality? Please reply if it's convenient for you.

> Absolutely yes. However, our work is still under research and unpublished. Once it is released, I'll be very pleased to share all details. Your comments are so encouraging. Thanks, again.

Thanks, look forward to it.