iota_view{~0u-2u, 2u} is not random access. Should it be?
ericniebler opened this issue · 13 comments
Strictly speaking, `2u` is "reachable" (by some definition) from `~0u-2u` in 5 increments via unsigned integer wrap-around. However, these values do not satisfy the `iota_view`'s `Advanceable` concept because these required expressions fail:

- `(b - a)` is equal to `n`.
- `(a - b)` is equal to `-n`.

... when `b` is `2u`, `a` is `~0u-2u`, and `n` is `5`. This even depends on what we mean by "equal to" here. What do we mean when we say an unsigned integer is equal to the value of a signed integer type that cannot represent the unsigned value? Is this appealing to some notion of Platonic integral equality, or should we be considering C-style integer promotion rules for deciding whether two values of mixed sign are "equal"?
This absolutely needs to be clarified before C++20, or else we should yank `iota_view`.
Proposed Resolution
```c++
constexpr iota_view(type_identity_t<W> value, type_identity_t<Bound> bound);
```

7 Expects: `Bound` denotes `unreachable_sentinel_t` or `bound` is reachable from `value`.
+ When `StrictTotallyOrderedWith<W, Bound>` is satisfied, then `value <= bound` is true.
Thinking...
If the user really expects wrap-around behavior for `auto r = iota_view{~0u-2u, 2u};`, then they would expect `distance(r)` to be `5`, since that is how many increments it takes to get to `2u` from `~0u-2u`. That also means that `distance(r.begin(), r.end())` should be `5`, and `distance(r.end(), r.begin())` should be `-5`.

However, consider the range `auto r2 = iota_view{2u, ~0u-2u};`. Clearly `distance(r2)` should not be `5`. It should be (signed) `~0u-2u-2u`, which is a very big number. That also implies that `distance(r2.begin(), r2.end())` should be `~0u-2u-2u`, and `distance(r2.end(), r2.begin())` should be `-(~0u-2u-2u)`.
Given the above two ranges `r` and `r2`, `r.begin() == r2.end()` and `r.end() == r2.begin()`. Let `I` be the type of `iota_view<unsigned, unsigned>`'s iterator. Then the value of `distance(I(2u), I(~0u-2u))` cannot be both `~0u-2u-2u` and `-5` simultaneously.
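Here is a minimal, self-contained sketch of that contradiction (not from the issue; it uses plain unsigned arithmetic, and the printed values assume 32-bit `unsigned`):

```c++
#include <cstdio>

int main() {
    unsigned a = ~0u - 2u;  // 4294967293 with 32-bit unsigned
    unsigned b = 2u;

    // Counting increments with wrap-around, b is 5 steps past a...
    unsigned steps = 0;
    for (unsigned i = a; i != b; ++i) ++steps;
    std::printf("wrap-around steps from a to b: %u\n", steps);  // 5

    // ...but the distance from b up to a is enormous, and the same two
    // iterator values cannot report both 5 and this as their difference.
    std::printf("distance from b to a: %u\n", a - b);           // 4294967291
}
```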
In other words, the unsigned integral wrap-around case doesn't make sense for `iota_view`.

I suggest we amend the Expects: clause of [range.iota.view]/p2 to clarify what we mean by "reachable".
> However, these values do not satisfy the `iota_view`'s `Advanceable` concept

All unsigned integral types `T` whose integer conversion rank is greater than or equal to `int` fail to meet the syntactic requirement `{ j - j } -> Same<iter_difference_t<T>>;`. `iter_difference_t` for such a type is `make_signed_t<T>`, whereas `-` yields `T`.
> All unsigned integral types `T` whose integer conversion rank is greater than or equal to `int` fail to meet the syntactic requirement `{ j - j } -> Same<iter_difference_t<T>>;`. `iter_difference_t` for such a type is `make_signed_t<T>`, whereas `-` yields `T`.
Suggest we change the constraint to:

```c++
{ j - j } -> Integral;
```

I can confirm that this is enough to make an unsigned integral type satisfy `Advanceable`: https://godbolt.org/z/lRe7oC
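A compile-time sketch of both points, written with the standardized snake_case names rather than the issue's PascalCase ones (the two concept names below are made up for illustration):

```c++
#include <concepts>
#include <iterator>
#include <type_traits>

// The constraint as currently written: subtraction must yield exactly the
// difference type.
template <class T>
concept SubtractsToDifferenceType = requires(const T& j) {
    { j - j } -> std::same_as<std::iter_difference_t<T>>;
};

// The suggested relaxation: subtraction need only yield some integral type.
template <class T>
concept SubtractsToIntegral = requires(const T& j) {
    { j - j } -> std::integral;
};

// unsigned - unsigned yields unsigned, but its difference type is int,
// so the first constraint fails while the relaxed one succeeds.
static_assert(std::is_same_v<std::iter_difference_t<unsigned>, int>);
static_assert(!SubtractsToDifferenceType<unsigned>);
static_assert(SubtractsToIntegral<unsigned>);
```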
I have a thought about how to handle the issue of a difference_type of integer types. Bear with me, because it also involves infinite ranges.

Basically, we drop the requirement for a difference_type from `WeaklyIncrementable` and instead add it to a new `Countable` concept that subsumes `[Weakly]Incrementable` [*]. `RandomAccessIterator` must then additionally subsume `Countable`. Those algorithms that need a difference type must either (a) require `Countable`, or (b) give callers a way to specify what type to use as the difference type for that algorithm invocation. I favor (b) since (a) would be too restrictive. It would be as simple as:

```c++
template<Iterator I, Sentinel<I> S, Integer D = iter_difference_t<I>>
constexpr D distance(I first, S last, D d = D(0));
```

... which would count starting from `d`.
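As a rough sketch of how such an overload could behave (`my_distance` and its constraints are illustrative stand-ins, not proposed wording):

```c++
#include <cstdint>
#include <iterator>
#include <vector>

// Toy version of the sketched overload: count starting from a caller-supplied
// value of a caller-chosen (integral) difference type.
template <std::input_or_output_iterator I, std::sentinel_for<I> S,
          std::integral D = std::iter_difference_t<I>>
constexpr D my_distance(I first, S last, D d = D(0)) {
    for (; first != last; ++first) ++d;
    return d;
}

int main() {
    std::vector<int> v{1, 2, 3};
    // Count into a wider type than iter_difference_t<I> to sidestep overflow.
    auto n = my_distance(v.begin(), v.end(), std::int64_t{0});  // n == 3
    (void)n;
}
```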
Views' iterators and the iterator adaptors would work as they do currently, except they would only satisfy `Countable` if it makes sense for them to do so.
As it relates to the issues facing `iota_view` and the `incrementable_traits` of `Integer`s, I suggest the following: `incrementable_traits` provides a difference_type that is large enough to accurately represent the full range if possible (by promoting integers). But `std::[u]intmax_t` would not have a difference_type, making it `Incrementable` but not `Countable`.
The benefit of this approach is that it finally gives infinite ranges a place in the scheme. (This doesn't address the problem of cyclic iterators, however. A `repeat_view::iterator` would not need to be `Countable` per se, but it would need to maintain a count to avoid the sorts of bugs we saw with cyclic iterators. Actually, I think the bugs we saw with cyclic ranges generally related to the difference between two iterators (via subtraction) disagreeing with the count between the same two iterators (via increment). So maybe making `repeat_view`'s iterator non-`Countable` solves that problem after all.)
Of course this requires research and a paper and an implementation.
[*] We need to investigate whether we need both `WeaklyCountable` and `Countable`, or if one concept subsuming `Incrementable` is enough.
UPDATE: We could take the additional steps of promoting `iota_view`'s exposition-only `Decrementable` and `Advanceable` concepts to real concepts (`Advanceable` subsuming `Countable`), and define `BidirectionalIterator` and `RandomAccessIterator` in terms of them. Then we can weaken the constraints on `advance`, `distance`, `next`, and `prev` to permit them to work on types that are merely `Incrementable` (not necessarily iterators).
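The basic idea already works mechanically; here is a minimal sketch (using today's `std::weakly_incrementable` as a stand-in for the weaker constraint, and a hypothetical `my_next`) of a `next` that happily operates on a plain integer:

```c++
#include <concepts>
#include <iterator>

// A next() constrained only on incrementability, not on being an iterator.
template <std::weakly_incrementable I>
constexpr I my_next(I i, std::iter_difference_t<I> n = 1) {
    while (n-- > 0) ++i;
    return i;
}

static_assert(my_next(40, 2) == 42);  // works on int, which is no iterator
```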
Also: I'm curious, do we have any models of types that satisfy `Decrementable` but not `BidirectionalIterator`?
UPDATE 2: Making `iota_view<[u]intptr_t>::iterator` non-`Countable` would require it to be non-RandomAccess. That's a bit odd, but perhaps acceptable.
This seems sane to me: it's a step in the direction of my notion of a class of iterators with O(1) `advance` for which differences aren't computable. The core issue with cyclic iterators was that differences don't make sense because the difference of two iterator values cannot simultaneously be both a positive and a negative integer. I could live with this design, and I agree that Bidi and RandomAccess should refine `Decrementable` and `Advanceable`; it was always a question of factoring out the differences in the two families into distinct concept(s).
> Also: I'm curious, do we have any models of types that satisfy `Decrementable` but not `BidirectionalIterator`?

Um, `int`?
Sorry:

> Also: I'm curious, do we have any models of types that satisfy `Decrementable` but not `BidirectionalIterator` or `Advanceable`?

My point is that the `Decrementable` concept is by itself an odd thing in that we don't seem to have any models that don't also satisfy some stronger concept.
That's true of most concepts, no? If a model has any concrete property at all that isn't required by the concept, there exists a stronger concept which the model satisfies that refines the original concept and requires that property. But yes, I get your point that we don't have an exemplar of a model "in the wild" that satisfies this particular concept without also satisfying some other concept in our taxonomy.
> it's a step in the direction of my notion of a class of iterators with O(1) `advance` for which differences aren't computable

I've also thought about this, but it seems to be slicing the concepts up in ways that make me a little uncomfortable.
> Also: I'm curious, do we have any models of types that satisfy `Decrementable` but not `BidirectionalIterator` or `Advanceable`?
>
> My point is that the `Decrementable` concept is by itself an odd thing in that we don't seem to have any models that don't also satisfy some stronger concept.
And now to answer my own question: if we follow my suggestion, then `std::[u]intmax_t` will be `Decrementable` but not `Advanceable` or `BidirectionalIterator`, since it is not `Countable` or `Readable`.
I'm less enamored of my `Countable` suggestion after having played with the code a bit (I never fully implemented it) and after mulling it over. It is simply too invasive to bifurcate the iterator hierarchy this way.
Instead, I'm leaning toward permitting user-defined types to be usable as difference types. We will have to take care to avoid forcing implementors to implement `bignum`, or to lock ourselves in with concept definitions that we cannot change later.
I suggest something like:

```c++
// users can specialize this:
template <class T>
struct is_integral_like
  : is_integral<T>
{};
```
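For example, a user-defined wide integer (the `int128_like` type is hypothetical) would opt in via a specialization of that trait:

```c++
#include <type_traits>

// (Assumes the is_integral_like primary template shown above.)

// Hypothetical user-defined wide integer; arithmetic, comparisons,
// constexpr-ness, etc. elided.
struct int128_like { /* ... */ };

// Opt in to being treated as integral-like:
template <>
struct is_integral_like<int128_like> : std::true_type {};
```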
We may also want to permit users to specialize `make_(un)signed` on user-defined types. L(E)WG is not going to love that, but it is arguably not insane. We should also consider respecifying `is_signed` and `is_unsigned` to permit integral-like types. We don't have to, though. We can simply document that these traits don't work with difference types that are not built-ins. Difference types are required to be signed, so you don't have to ask.
We will need to place some requirements on a type in order for it to be considered `is_integral_like` [*], but those should be expressed in prose rather than ossified in a concept definition.

I would like to leave the `Integral` concept alone. `is_integral_like` would only be used in the definition of `WeaklyIncrementable`.
It doesn't quite solve the problem of infinite ranges without something like `bignum`, but we don't yet have any truly infinite ranges, so it's not a problem. A `bignum` should be usable in a `constexpr` context provided we get the `constexpr` allocation stuff.
We then require `incrementable_traits` for built-in integral types to promote to the next higher precision integral (`int` has `long` for a difference type), and `[u]intmax_t` has an unspecified signed integral-like type for its difference type.
The current specification of `incrementable_traits`, after we promote the built-in integral types, would permit users to query the difference type of integral-like types. The answer would simply be the type of `T{} - T{}`. If the author of an integral-like type wanted to promote its difference type, it would need to do so with a specialization of `incrementable_traits`.
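Sketched in code (the `big_int`/`bigger_int` types are invented for illustration; the specialization mirrors how `incrementable_traits` is intended to be customized):

```c++
#include <iterator>

// Hypothetical integral-like type whose subtraction yields itself, so its
// default difference type would simply be decltype(big_int{} - big_int{}).
struct big_int {
    big_int operator-(big_int) const;
    // ... other integral-like operations elided ...
};

// A (hypothetical) wider type to promote to.
struct bigger_int { /* ... */ };

// The author opts into a promoted difference type explicitly:
template <>
struct std::incrementable_traits<big_int> {
    using difference_type = bigger_int;
};
```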
We call out that for the unspecified difference type of `[u]intmax_t`, it is similarly unspecified what its own difference type is.
We say it is unspecified whether `iota_view` can be instantiated on integral-like types that are not built-ins.
What I like about this approach is that it solves the immediate problem (integer overflow in common scenarios) without a lot of machinery, and while leaving lots of room to fill in the details later.
@CaseyCarter thoughts?
[*] Off the top of my head, an integral-like type should be:

- a `Regular` literal type that supports all arithmetic and special operations in O(1),
- that is convertible from and (implicitly) constructible from integrals,
- that is usable in a constexpr context (all operations),
- that value-initializes to "zero",
- that is explicitly convertible to an integral type (with possible loss of precision),
- that is `EqualityComparableWith` and `StrictTotallyOrderedWith` integrals,
- that has a user-defined `std::numeric_limits` specialization for which `::is_integer` is true,
- that has all `constexpr` `noexcept` member functions.
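For the `numeric_limits` bullet in particular, a minimal sketch of what such a user-provided specialization might look like (the `wide_int` type and the chosen member values are purely illustrative):

```c++
#include <limits>

// Hypothetical integral-like type; operations elided.
struct wide_int { /* ... */ };

// User-defined specialization with is_integer == true, as required above.
template <>
struct std::numeric_limits<wide_int> {
    static constexpr bool is_specialized = true;
    static constexpr bool is_integer     = true;
    static constexpr bool is_signed      = true;
    static constexpr bool is_exact       = true;
    // ... remaining members defined as appropriate for the type ...
};
```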
> @CaseyCarter thoughts?
I love the idea of user-defined integral types, but I'm not enthused about this change. It feels like a half-measure that we'd be trying to rush in under the wire - or more so "dig a tunnel under the wire to stuff through" - for C++20. I'd rather leave the difference types of integral types "broken" for now, invest the little time we have available into fixing other issues - we have plenty - and come back to fix this correctly in 23.
My concerns are: we're shipping iota_view broken, and once we ship the concepts they're baked and there's nothing we can do.
Fixed by P1522, accepted for C++20 in Cologne.