cockroachdb/cockroach

sql: consider removing nanoseconds from intervals

Closed this issue · 8 comments

Did some research. The binary encoding for intervals sends the number of microseconds as an int64. The text encoding uses six digits after the decimal point. For the text encoding we currently do weird things like "1s2ms3ns", which isn't at all what postgres does, and I'd honestly be surprised if that format is understandable by anything except lib/pq, where it happens to work because of Go's duration parsing.
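For concreteness, here's a minimal Go snippet showing why the old text format happens to round-trip: Go's standard duration parser accepts it directly. (Whether lib/pq actually routes interval text through time.ParseDuration is an assumption here; the point is just that the format is a Go-ism, not a postgres-ism.)

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// CockroachDB's old text encoding for a sub-second interval looks like
	// "1s2ms3ns". Postgres and most drivers don't understand it, but Go's
	// duration parser does, which is presumably why lib/pq gets away with it.
	d, err := time.ParseDuration("1s2ms3ns")
	if err != nil {
		panic(err)
	}
	fmt.Println(d) // 1.002000003s
}
```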

We found exactly the same problems while exploring timestamps, which is why we removed their nanoseconds and truncated to micros: nanos just aren't in the spec, and things that support them only do so by accident (usually in text decoding); the binary spec doesn't support them at all.

So today, if you ask for an interval that has nanoseconds, you will get a different result depending on whether you use the text or binary encoding. This feels quite wrong to me and most definitely must be fixed. However, the fix is not obvious and all of the options are bad.

I think we should bite the bullet (as we should have done years ago, just like with timestamps):

  1. Change the text encoding to match postgres (this means removing its nanosecond output).
  2. Truncate all intervals to microsecond precision (just like timestamps).
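To make both proposals concrete, here's a minimal Go sketch of a postgres-style text rendering at microsecond precision. The function name is made up, it deliberately ignores sign, days, and months, and unlike postgres it doesn't trim trailing fractional zeros.

```go
package main

import "fmt"

// formatSecondsPG illustrates proposal (1): render the sub-day part of an
// interval the way postgres does (HH:MM:SS with up to six fractional digits)
// rather than CockroachDB's "1s2ms3ns" style.
func formatSecondsPG(nanos int64) string {
	micros := nanos / 1000 // proposal (2): truncate to microsecond precision
	secs := micros / 1000000
	frac := micros % 1000000
	h, m, s := secs/3600, (secs%3600)/60, secs%60
	if frac == 0 {
		return fmt.Sprintf("%02d:%02d:%02d", h, m, s)
	}
	return fmt.Sprintf("%02d:%02d:%02d.%06d", h, m, s, frac)
}

func main() {
	fmt.Println(formatSecondsPG(1002000003)) // "00:00:01.002000" (the 3ns is dropped)
}
```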

The second proposal is very hard because, if we do it, any existing on-disk intervals with nanoseconds suddenly become unreachable by equality queries in SQL: there would no longer be a way for a user to specify an interval with nanoseconds, even though that's what's on disk.

One option is to teach extract_duration about nanoseconds so that users can get them out and do whatever they need to, including re-saving at microsecond precision.
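A rough sketch of what that escape hatch could look like, in Go. The type and the supported element names are illustrative only, not the actual extract_duration implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// interval loosely mirrors CockroachDB's duration representation: months and
// days are kept separately from the sub-day part, which is stored in nanoseconds.
type interval struct {
	Months, Days, Nanos int64
}

// extractDuration sketches the proposed escape hatch: an explicit "nanosecond"
// element would expose sub-microsecond data still sitting on disk so users can
// inspect it and rewrite it at micro precision.
func extractDuration(element string, d interval) (int64, error) {
	switch element {
	case "millisecond", "milliseconds":
		return d.Nanos / 1000000, nil
	case "microsecond", "microseconds":
		return d.Nanos / 1000, nil
	case "nanosecond", "nanoseconds": // the proposed addition
		return d.Nanos, nil
	default:
		return 0, errors.New("unsupported element: " + element)
	}
}

func main() {
	ns, _ := extractDuration("nanosecond", interval{Nanos: 1002000003})
	fmt.Println(ns) // 1002000003
}
```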

I'm against a session setting that would increase the precision to 9 or whatever because then we have to keep it around forever, and it just doesn't work due to the binary encoding limitations.

knz commented

We can truncate all the existing intervals by ensuring that the rowfetcher truncates the nanosecond part when loading interval values.
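A minimal sketch of that truncation, assuming the row fetcher holds the sub-day part of an interval as an int64 nanosecond count (the helper name is made up):

```go
package main

import "fmt"

// truncateToMicros drops any sub-microsecond component when an interval value
// is loaded from KV, so old on-disk nanosecond values become indistinguishable
// from newly written micro-precision values. Negative values truncate toward
// zero here; whether rounding would be preferable is a separate question.
func truncateToMicros(nanos int64) int64 {
	return nanos - nanos%1000
}

func main() {
	fmt.Println(truncateToMicros(1002000003)) // 1002000000
}
```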

To ensure a seamless transition we need to create a new KV encoding for intervals that only stores microseconds. The 2.2 executable would be able to read the old encoding (and truncate the nanosecond part) but would only write using the new encoding. We'd backport the decoder for the new encoding into 2.1 at the same time.

knz commented

I just realized postgres also supports configuring the precision on intervals, see #32564.

Based on a conversation with @mjibson:

We think it's unlikely that nanosecond-precision interval data exists on-disk in the real world, given that most (all?) PG client drivers would assume micro-precision intervals. In order to insert the data, one would have to have used a text-mode client, performed a string cast or parameter binding to interval, or performed some mathematical operation.

As such, we propose the following course of action:

  • Disk and in-memory encoding remains as 64-bit, nanosecond values. This doesn't reclaim any currently-wasted bits, but this isn't precluded from being changed in the future.
  • Always truncate intervals to microsecond precision when encoding intervals to datums.
  • Add and backport telemetry to detect when encoding a datum truncates nanos to micros (i.e. when x % 1000 != 0 for the nanosecond count x); see the sketch after this list.
  • Add support for nanoseconds to the EXTRACT syntax as an escape hatch to allow any extant NS values to be retrieved.
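Roughly, the encode-side behavior plus the telemetry check would look like this (the counter is a stand-in for whatever telemetry mechanism is actually used):

```go
package main

import "fmt"

// nanosTruncatedCount stands in for a real telemetry counter.
var nanosTruncatedCount int64

// encodeIntervalNanos truncates an interval's nanosecond count to microsecond
// precision on the way into a datum/KV encoding, recording whenever the
// truncation actually loses information (nanos % 1000 != 0).
func encodeIntervalNanos(nanos int64) int64 {
	if nanos%1000 != 0 {
		nanosTruncatedCount++
	}
	return nanos - nanos%1000
}

func main() {
	fmt.Println(encodeIntervalNanos(1002000003)) // 1002000000
	fmt.Println(nanosTruncatedCount)             // 1
}
```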

Known risks to highlight in release notes:

  • Point lookups of nano-precision interval values would not find data if intervals are being used as a key. This seems unlikely.
  • A UNIQUE constraint on existing nano-precision data may appear to have duplicates when that data is rendered with microsecond precision.
  • Range lookups might suffer from fencepost issues.
  • Jitter in computations between new, micro-precision nodes and old, nano-precision nodes.
  • Mathematical operations involving intervals are still performed with nanosecond resolution.

knz commented

Good discussion!

A couple of comments:

Always truncate intervals to microsecond precision when encoding intervals to datums.

Can you clarify this? Do you mean when converting a string to a DInterval? I thought the in-memory representation did not change?
Did you perhaps intend to mean "When encoding intervals to the KV representation"?

I would perhaps recommend also truncating upon decoding; what do you think?

Add support for nanoseconds to the EXTRACT syntax as an escape hatch to allow any extant NS values to be retrieved.

Note: we don't currently support EXTRACT over intervals. I tried to do so in #27502 but that PR was not completed.

Range lookups might suffer from fencepost issues.

What is a fencepost issue?

Jitter in computations between new, micro-precision nodes and old, nano-precision nodes.

This sounds more like a real problem. Maybe this change could be gated by a cluster version bump to prevent this.

Good discussion!

Thanks.

Always truncate intervals to microsecond precision when encoding intervals to datums.

Can you clarify this? Do you mean when converting a string to a dinterval? I thought the in-memory repr did not change?
Did you perhaps intend to mean "When encoding intervals to the KV representation"?

That's a better way to express it.

I would perhaps recommend to also truncate upon decoding, what do you think?

If we truncate on decoding, wouldn't that cause breakage when trying to update indexes? Actually, truncating when we encode the datum would also be problematic, right? If you had an index on an interval column with nanosecond precision, would we even be able to delete existing, nano-precision entries?

It seems like the behavior that we're after is that we want to truncate only those interval values received from a SQL client or new values that are getting sent to KV. It seems like we do need to be able to round-trip nano-precision intervals in and out of KV.

Maybe we do need a TInterval { micros: true } type. Ugh.

Range lookups might suffer from fencepost issues.

What is a fencepost issue?

It's shorthand for "a boundary-condition problem". The canonical example is to ask: if you're building a fence 10 meters long using 1-meter horizontal rails, how many vertical posts do you need? (The tempting answer is 10; the correct answer is 11.)

Jitter in computations between new, micro-precision nodes and old, nano-precision nodes.

This sounds more like a real problem. Maybe this change could be gated by a cluster version bump to prevent this.

We thought about this, but it seems like the code would have to check that cluster version any time an interval datum is processed; do you think that would be unnecessarily expensive?

knz commented

I thought about this a little more. The solution is to create a new encoding tag for interval values, i.e. a plain new encoding format.

Any value using the "old" (<=v2.1) format would be decoded with truncation; all new values (including all new index encodes/recodes) would use the new, truncated format. The new format would only be available upon bumping the cluster version to not confuse old nodes in mixed-version clusters.
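A sketch of the shape of this idea, with made-up encoding tags (the real value-encoding tags and decode entry points in CockroachDB differ):

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical value-encoding tags for intervals.
const (
	tagIntervalNanos  byte = 0x10 // <=2.1 format: raw nanoseconds
	tagIntervalMicros byte = 0x11 // new format: microseconds only
)

// decodeInterval accepts both formats. Legacy values are truncated on the way
// in; new values would always be written with tagIntervalMicros, and only once
// the cluster version has been bumped, so old nodes never see the new tag.
func decodeInterval(tag byte, v int64) (nanos int64, err error) {
	switch tag {
	case tagIntervalNanos:
		return v - v%1000, nil // old format: drop the sub-microsecond part
	case tagIntervalMicros:
		return v * 1000, nil // stored as micros; widen back to nanos in memory
	default:
		return 0, errors.New("unknown interval encoding tag")
	}
}

func main() {
	n, _ := decodeInterval(tagIntervalNanos, 1002000003)
	fmt.Println(n) // 1002000000
}
```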

Did some searching in our archives, and the results are really interesting. #6604 was the PR that removed nanos from timestamps. #8864 was the PR that removed the nanosecond workaround functions. (Sadly, I also noted there that intervals should be handled, but never followed up. That would have saved a lot of future angst.) For example:

We changed cockroach to round to micros by default and added 3 functions that could create or display those nanoseconds

This choice ended up causing problems. Indeed extract USED to have a nanosecond option but it had problems so we chose to

remove the nanosecond extract functions because they now won't do what users expect

Maybe not a good idea to add it back. Here's what was in the release notes for that version:

TIMESTAMP values are now stored with microsecond precision instead of nanoseconds. All nanosecond-related functions have been removed. An existing table t with nanoseconds in timestamps of column s can round them to microseconds with UPDATE t SET s = s + '0s'. Note that this could potentially cause uniqueness problems if the timestamp is a primary key

So! We didn't change any encoding or decoding stuff. We just made it so that any time a new timestamp was created it went through a wrapper function that truncated the nanos. Any already-on-disk timestamps were left unchanged. We told users how to fix them if they wanted. I think we might be able to do the same thing here. That is:

  1. Limit any new intervals to micros (#32564)
  2. Tell users how to remove nanos from their old intervals.
  3. Change the pgwire text encoding to truncate nanos to micros (recall that this is what the original timestamp nano -> micro PR did). This will open up the possibility for bugs like #8804 to recur. However this time we will learn from our past and instead of working around it with extra SQL functionality, we will just tell users to fix it with an UPDATE.

If users need to get their original nanos out, they can convert them to a string and do whatever they need in their application, but we won't have anything that can maintain nanos.
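For step 2, the cleanup would presumably mirror the timestamp recipe from the release notes quoted above. A hedged sketch of what we'd tell users, driven from Go: the table and column names are made up, and whether adding a zero interval ('0s') is the exact recommended recipe for intervals would need to be confirmed before it goes into release notes.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // postgres-wire driver commonly used with CockroachDB
)

func main() {
	// Connection string is illustrative only.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Analogue of the old timestamp fix (UPDATE t SET s = s + '0s'): rewriting
	// each value pushes it through the new micro-precision path, dropping nanos.
	if _, err := db.Exec(`UPDATE t SET ivl = ivl + '0s'`); err != nil {
		log.Fatal(err)
	}
}
```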

knz commented

ok sounds good