Categories for all rings
jbandlow opened this issue · 165 comments
Introspection is failing on polynomial rings:
sage: R.<x> = QQ[]
sage: R.su<tab>
R.sum R.summation
R.summation_from_element_class_add
sage: R.sum?
Object `R.sum` not found.
sage: R.sum()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
This is because polynomial rings do not yet set their category properly:
sage: QQ[x]._test_category()
------------------------------------------------------------
Traceback (most recent call last):
...
AssertionError: category of self improperly initialized
See http://groups.google.com/group/sage-devel/browse_thread/thread/4780192a11a8b591 for more discussion.
Many other rings are not properly initialised as well. The aim of this ticket is to change that.
Apply attachment: 9138_flat.patch (rebased on top of #9958, but will still apply with fuzz 2 otherwise).
See #11900 for a follow-up fixing some speed regressions.
Depends on #11900
Dependencies: To be merged with #11900
CC: @sagetrac-sage-combinat @robertwb
Component: categories
Keywords: introspection, categories for rings
Author: Simon King
Reviewer: Volker Braun
Merged: sage-5.0.beta0
Issue created by migration from https://trac.sagemath.org/ticket/9138
Description changed:
---
+++
@@ -1,4 +1,4 @@
-Example:
+Introspection is failing on polynomial rings:
```
sage: R.<x> = QQ[]
@@ -12,4 +12,14 @@
AttributeError Traceback (most recent call last)
```
+This is because polynomial rings do not yet set their category properly:
+
+```
+sage: QQ[x]._test_category()
+------------------------------------------------------------
+Traceback (most recent call last):
+...
+AssertionError: category of self improperly initialized
+```
+
See http://groups.google.com/group/sage-devel/browse_thread/thread/4780192a11a8b591 for more discussion.
This ticket seems to be a duplicate of #8613.
Replying to @mezzarobba:
This ticket seems to be a duplicate of #8613.
Indeed. This should have rung a bell for me!
Since I have already recycled this ticket to "Categories for
polynomial ring", I leave the two tickets as they are. Once this ticket
is closed, it should be possible to close #8613 as well.
This ticket is just about a single kind of parent class. Rather than going through a long list of parent classes one by one and inserting the missing pieces: wouldn't it be a more thorough approach to provide a default implementation for the attributes needed in the category framework, in cases where that makes sense?
Here is an example:
sage: R.<x,y> = QQ[]
sage: 'element_class' in dir(R)
True
sage: hasattr(R,'element_class')
False
If I am not mistaken, "element_class" should be implemented by providing the attribute "Element".
But is there a reason why element_class is a dynamic meta-class and not a regular method? Since any parent class has an "an_element" method, it seems to me that the following default implementation makes sense (and it solves the problem in my example above):
def element_class(self):
    try:
        return self.Element
    except AttributeError:
        return self.an_element().__class__
It seems to me that providing reasonable default implementations would, in the long run, be easier than going through every single parent class. But certainly other people know more about the "how-to" of categories.
Replying to @simon-king-jena:
But is there a reason why element_class is a dynamic meta-class and not a regular method?
Sorry, I just noticed that "element_class" is not a method at all: I assumed that it should be used like R.element_class(), but sadly it is R.element_class without calling the attribute. So, one attribute (element_class) is implemented by providing another attribute (Element).
Anyway, if there is to be a default implementation for element_class, then unfortunately it must live in __getattr__.
Replying to @simon-king-jena:
Replying to @simon-king-jena:
But is there a reason why element_class is a dynamic meta-class and not a regular method?
Sorry, I just noticed that "element_class" is not a method at all...
Again wrong. I found in sage.structure.parent that there is indeed a method element_class -- with a lazy_attribute decorator. I am still confused by that programming style, so, better I shut up.
Anyway, changing the element_class method so that an_element is used (rather than raising an AttributeError) did not help. Can you explain why this does not work?
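For what it's worth, a lazy attribute behaves roughly like the following toy descriptor. This is a simplified sketch for illustration only, not the actual sage.misc.lazy_attribute implementation, and the ToyParent class is made up:

```
class LazyAttributeSketch:
    """Compute an attribute on first access, then cache it on the instance."""
    def __init__(self, f):
        self.f = f
        self.__name__ = f.__name__

    def __get__(self, obj, cls):
        if obj is None:                      # accessed on the class itself
            return self
        value = self.f(obj)                  # compute once ...
        setattr(obj, self.__name__, value)   # ... then shadow the descriptor
        return value


class ToyParent:
    def an_element(self):
        return 1                             # placeholder element

    @LazyAttributeSketch
    def element_class(self):
        # fallback in the spirit of the suggestion above
        return getattr(self, 'Element', type(self.an_element()))
```

So ToyParent().element_class is accessed like an attribute, not called like a method, which matches the R.element_class behaviour observed above.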
Question: Is it OK to broaden the scope of this ticket, namely to use the category framework for everything that comes from sage/rings/ring.pyx? Or shall it be restricted to polynomial rings?
Author: Simon King
Since #9944 almost has a positive review but does not entirely fix the bug reported here, I'm posting my patch here, building on top of that.
Examples:
With the patches from #9944:
sage: QQ['x'].category() # why not additionally an algebra?
Category of euclidean domains
sage: QQ['x','y'].category() # why not an algebra?
Category of commutative rings
sage: SteenrodAlgebra(2)['x'].category() # this is wrong
Category of commutative rings
sage: QQ['x','y'].sum([1,1])
Traceback (most recent call last):
...
AttributeError: 'sage.rings.polynomial.multi_polynomial_libsingular.MPolynomialRing_libsingular' object has no attribute 'sum'
Adding my (not yet submitted) patch, I get:
sage: QQ['x'].category()
Join of Category of euclidean domains and Category of commutative algebras over Rational Field
sage: QQ['x','y'].category()
Category of commutative algebras over Rational Field
sage: SteenrodAlgebra(2)['x'].category()
Category of algebras over mod 2 Steenrod algebra
sage: QQ['x','y'].sum([1,1])
2
So, I think the bug reported here is fixed. For me, all doctests pass.
Depends on #9944
Replying to @simon-king-jena:
Adding my (not yet submitted) patch, ...
Oops, I meant: That's with the patch that I have submitted...
Hi Simon,
I haven't been following all the details of this, but thanks for the patch! One question I have is whether there is a performance penalty for this, and if so, to what degree that is acceptable. On my machine, I noticed about a 10% slow-down for
sage: R.<x> = QQ['x']
sage: timeit('f = x^2 + 1')
after applying the patches at #9944 and here. I did not do any rigorous testing so this may be spurious, but I'm not sure this should be given a positive review unless this issue has at least been considered.
If this has already happened and I missed the discussion, then I apologize. Just point me to it and I'll shut up and go away. :)
Hi Jason,
Replying to @jbandlow:
I did not do any rigorous testing so this may be spurious, but I'm not sure this should be given a positive review unless this issue has at least been considered.
You are right, things like this should be considered. However, I wonder where a slow-down might come from.
If this has already happened and I missed the discussion, then I apologize. Just point me to it and I'll shut up and go away. :)
No, I wasn't testing the performance. Apparently I work in cycles: Recently I had several patches that considerably increased performance, and perhaps I am now in a slow mood again...
Anyway, I'll do some tests now.
Here are the test results:
Without all patches:
sage: R.<x> = ZZ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 22.5 µs per loop
sage: R.<x> = QQ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 24.5 µs per loop
sage: R.<x> = GF(3)[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 87.7 µs per loop
sage: R.<x> = QQ['t'][]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 114 µs per loop
sage: R.<x,y> = ZZ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 21.9 µs per loop
sage: R.<x,y> = QQ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 40 µs per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 26.3 µs per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 239 µs per loop
With the patches from #9944:
sage: R.<x> = ZZ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 25.6 µs per loop
sage: R.<x> = QQ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 26.7 µs per loop
sage: R.<x> = GF(3)[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 109 µs per loop
sage: R.<x> = QQ['t'][]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 121 µs per loop
sage: R.<x,y> = ZZ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 31.4 µs per loop
sage: R.<x,y> = QQ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 40 µs per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 26.8 µs per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 250 µs per loop
With the patches from #9944 and the patch from here:
sage: R.<x> = ZZ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 25.7 µs per loop
sage: R.<x> = QQ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 28.3 µs per loop
sage: R.<x> = GF(3)[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 115 µs per loop
sage: R.<x> = QQ['t'][]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 125 µs per loop
sage: R.<x,y> = ZZ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 31 µs per loop
sage: R.<x,y> = QQ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 17.5 µs per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 25.1 µs per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 256 µs per loop
Note, however, that the numbers aren't very stable:
sage: R.<x,y> = QQ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 20.9 µs per loop
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 20.8 µs per loop
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 22.6 µs per loop
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 37.9 µs per loop
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 15.4 µs per loop
But there is a tendency: Things tend to be slower, both with #9944 and with my patch.
So, it should be worthwhile to analyse the arithmetic with prun.
By the way, I think we should add doctests of the type
sage: TestSuite(QQ['x']).run()
sage: TestSuite(QQ['x','y']).run()
sage: TestSuite(QQ['x','y']['t']).run()
sage: TestSuite(GF(3)['t']).run()
sage: TestSuite(ZZ['t']).run()
If I understand the ticket description, this used to fail.
Idea: Could it be that the length of the method resolution order is responsible for the slow-down?
With all patches:
sage: len(type(QQ['x']).mro())
47
sage: len(type(QQ['x','y']).mro())
11
sage: len(type(GF(3)['x','y']).mro())
11
sage: len(type(GF(3)['x']).mro())
49
sage: len(type(ZZ['x']).mro())
41
sage: len(type(ZZ['x']['t']).mro())
41
sage: len(type(QQ['x'].gen()).mro())
9
sage: len(type(QQ['x','y'].gen()).mro())
8
sage: len(type(GF(3)['x','y'].gen()).mro())
8
sage: len(type(GF(3)['x'].gen()).mro())
10
sage: len(type(ZZ['x'].gen()).mro())
9
sage: len(type(ZZ['x']['t'].gen()).mro())
9
With only the patches from #9944:
sage: len(type(QQ['x']).mro())
39
sage: len(type(QQ['x','y']).mro())
11
sage: len(type(GF(3)['x','y']).mro())
11
sage: len(type(GF(3)['x']).mro())
41
sage: len(type(ZZ['x']).mro())
34
sage: len(type(ZZ['x']['t']).mro())
34
sage: len(type(QQ['x'].gen()).mro())
9
sage: len(type(QQ['x','y'].gen()).mro())
8
sage: len(type(GF(3)['x','y'].gen()).mro())
8
sage: len(type(GF(3)['x'].gen()).mro())
10
sage: len(type(ZZ['x'].gen()).mro())
9
sage: len(type(ZZ['x']['t'].gen()).mro())
9
Without these patches:
sage: len(type(QQ['x']).mro())
18
sage: len(type(QQ['x','y']).mro())
11
sage: len(type(GF(3)['x','y']).mro())
11
sage: len(type(GF(3)['x']).mro())
20
sage: len(type(ZZ['x']).mro())
15
sage: len(type(ZZ['x']['t']).mro())
15
sage: len(type(QQ['x'].gen()).mro())
9
sage: len(type(QQ['x','y'].gen()).mro())
8
sage: len(type(GF(3)['x','y'].gen()).mro())
8
sage: len(type(GF(3)['x'].gen()).mro())
10
sage: len(type(ZZ['x'].gen()).mro())
9
sage: len(type(ZZ['x']['t'].gen()).mro())
9
So, the mro of the rings becomes much longer. Could it be that, as a consequence, it takes longer to find common and frequently used methods such as R.parent() and R.base_ring()?
Here are times for basic methods (i.e., methods that require walking up much of the mro):
Without patches
sage: R.<x> = ZZ[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 253 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 447 ns per loop
sage: R.<x> = QQ[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 249 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 508 ns per loop
sage: R.<x> = GF(3)[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 262 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 555 ns per loop
sage: R.<x> = QQ['t'][]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 249 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 446 ns per loop
sage: R.<x,y> = ZZ[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 240 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 286 ns per loop
sage: R.<x,y> = QQ[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 240 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 282 ns per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 245 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 284 ns per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 266 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 413 ns per loop
With all patches
sage: R.<x> = ZZ[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 539 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 476 ns per loop
sage: R.<x> = QQ[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 247 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 652 ns per loop
sage: R.<x> = GF(3)[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 670 ns per loop
sage: timeit('R.base_ring()',number=10^5)
sage: R.<x> = QQ['t'][]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 254 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 496 ns per loop
sage: R.<x,y> = ZZ[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 583 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 297 ns per loop
sage: R.<x,y> = QQ[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 237 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 307 ns per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 237 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 294 ns per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('x.parent()',number=10^5)
100000 loops, best of 3: 277 ns per loop
sage: timeit('R.base_ring()',number=10^5)
100000 loops, best of 3: 477 ns per loop
So, there seems to be some slow-down in accessing basic methods.
But apparently one can work around the mro:
sage: timeit('R.base_ring()',number=10^6)
1000000 loops, best of 3: 470 ns per loop
sage: timeit('Parent.base_ring(R)',number=10^6)
1000000 loops, best of 3: 352 ns per loop
So, if speed matters, it might be worthwhile to use the idiom above to speed things up. I'll ask on sage-devel whether there is a more elegant/pythonic way to cope with a long mro.
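For illustration, here is a toy version of that idiom (all class names are made up; in Sage the relevant base class is sage.structure.parent.Parent):

```
class Base:
    def base_ring(self):
        return self._base

# imagine dozens of category-generated classes between Base and Concrete
class Layer1(Base): pass
class Layer2(Layer1): pass

class Concrete(Layer2):
    def __init__(self, base):
        self._base = base

R = Concrete("QQ")
# ordinary attribute lookup walks type(R).mro(); the second form fetches
# the method directly from the known base class
assert R.base_ring() == Base.base_ring(R)
```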
With the latest patches, I obtain the following timings:
sage: R.<x> = ZZ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 25.8 µs per loop
sage: R.<x> = QQ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 27.8 µs per loop
sage: R.<x> = GF(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 112 µs per loop
sage: R.<x> = QQ['t'][]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 124 µs per loop
sage: R.<x,y> = ZZ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 12.7 µs per loop
sage: R.<x,y> = QQ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 15.7 µs per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 10.3 µs per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 148 µs per loop
Without these patches:
sage: R.<x> = ZZ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 22.7 µs per loop
sage: R.<x> = QQ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 24.2 µs per loop
sage: R.<x> = GF(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 87 µs per loop
sage: R.<x> = QQ['t'][]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 113 µs per loop
sage: R.<x,y> = ZZ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 13 µs per loop
sage: R.<x,y> = QQ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 16.3 µs per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 10.5 µs per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 237 µs per loop
In other words, there is a mild deceleration in the univariate case and a mild (and in one case considerable) acceleration in the multivariate case.
I don't understand why. But perhaps a reviewer has an idea, and can also state his or her opinion on how bad the deceleration is compared with the acceleration?
The patch is rebased again.
Note that #9944 meanwhile gives a speed-up rather than a slow-down! The patch from here, unfortunately, makes things slightly slower again. But compared with unpatched Sage, it is never significantly slower, and it is still much faster in some cases (in two cases even faster than with #9944 alone).
Here are the latest timings.
Unpatched:
sage: R.<x> = ZZ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 23.4 µs per loop
sage: R.<x> = QQ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 24.6 µs per loop
sage: R.<x> = GF(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 87.9 µs per loop
sage: R.<x> = QQ['t'][]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 113 µs per loop
sage: R.<x,y> = ZZ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 13 µs per loop
sage: R.<x,y> = QQ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 16.6 µs per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 10.8 µs per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 238 µs per loop
sage: R.<x,y> = Qp(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 511 µs per loop
sage: R.<x> = Qp(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 1.06 ms per loop
With the patches:
sage: R.<x> = ZZ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 8.95 µs per loop
sage: R.<x> = QQ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 8.33 µs per loop
sage: R.<x> = GF(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 76.7 µs per loop
sage: R.<x> = QQ['t'][]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 82.7 µs per loop
sage: R.<x,y> = ZZ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 13.2 µs per loop
sage: R.<x,y> = QQ[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 16.4 µs per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 11 µs per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 106 µs per loop
sage: R.<x,y> = Qp(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 421 µs per loop
sage: R.<x> = Qp(3)[]
sage: timeit('(2*x-1)^2+5', number=10^4)
10000 loops, best of 3: 1.1 ms per loop
So, I hope it can be reviewed.
For the patch bot:
Depends on #9944
Not good. Many doctests fail. First analysis: When I implemented the improved base ring conversion for polynomial rings, I noticed that _lmul_ for p-adics returns None (quite a bug). I thought that I had worked around that bug, as the doc tests pass with the patches from #9944.
But apparently it hit me here...
Work Issues: Fix conversion maps that return None
Changed keywords from introspection to introspection, categories for rings
Description changed:
---
+++
@@ -23,3 +23,6 @@
```
See http://groups.google.com/group/sage-devel/browse_thread/thread/4780192a11a8b591 for more discussion.
+
+Depends on #9944
+
Changed work issues from Fix conversion maps that return None to none
I think now we are ready for review.
Long tests passed for me.
By the way, the long tests made me fix another bug. Namely, in the __setstate__ method of CategoryObject, the existing category was overridden by the value found in the pickle. If you do so for the rational field, then afterwards its category is not the category of quotient fields but of rings. Result: The pickle jar broke.
Solution: If the category in the pickle is None, then self._category will be preserved. Otherwise, it will be the join of self._category (if not None) with the category found in the pickle.
I don't know whether you agree with the second part of that solution. But I think the first part should be clear.
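A minimal sketch of the category-restoring logic described above (the function name is hypothetical; the actual change sits inside CategoryObject.__setstate__, and Category.join is assumed to behave as in sage.categories.category):

```
from sage.categories.category import Category

def restored_category(current, pickled):
    """Combine an already initialised category with the one from a pickle."""
    if pickled is None:
        return current                        # keep whatever is already set
    if current is None:
        return pickled                        # nothing set yet: take the pickle
    return Category.join([current, pickled])  # otherwise take the join
```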
Also, I added some TestSuite tests for various flavours of polynomial rings.
For the patch bot:
Depends on #9944
Replying to @simon-king-jena:
Not good. Many doctests fail. First analysis: When I implemented the improved base ring conversion for polynomial rings, I noticed that
_lmul_ for p-adics returns None (quite a bug).
For the record: this might be intentional. The coercion protocol
does specify that an arithmetic operation may return None
under certain circumstances to state that, after all, it can't
handle that operation, and some other approach should be tried.
Cheers,
Nicolas
Replying to @nthiery:
Replying to @simon-king-jena:
Not good. Many doctests fail. First analysis: When I implemented the improved base ring conversion for polynomial rings, I noticed that
_lmul_ for p-adics returns None (quite a bug).
For the record: this might be intentional.
OK, but really just for the record -- because I did not change it. Instead, I test what _lmul_ does, and if it returns None then it will not be used in the base ring conversion.
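A rough sketch of that guard (the helper and its arguments are hypothetical; the actual check lives in the base ring conversion code touched by the patch):

```
def convert_from_base_ring(one, r, generic_convert):
    """Convert the base ring element r, preferring the fast _lmul_ path."""
    result = one._lmul_(r)    # fast path: multiply the unit of the ring by r
    if result is not None:
        return result
    # _lmul_ declined (returned None), so fall back to a generic conversion
    return generic_convert(r)
```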
Work Issues: Doctest errors
Sorry, apparently I had not applied the patch when I was running the long doc tests. There were a couple of errors. So, needs work...
FWIW, long tests pass for me.
But there is one point that I don't like:
With the patches from #9944, I get
sage -t "devel/sage-main/sage/schemes/elliptic_curves/sha_tate.py"
[26.9 s]
which is about the same as without these patches. So, there is no speed loss.
But when I also apply the patch from here, I get
sage -t "devel/sage-main/sage/schemes/elliptic_curves/sha_tate.py"
[30.3 s]
A similar slow-down is visible for the tests from heegner.py.
What could be the reason? What arithmetic operations are dominant?
My original impression was that it is operations in large integer quotient rings. But that isn't the problem according to these tests:
sage: R = Integers(3814697265625)
sage: a = R(1021573325796)
sage: b = R(2884990864521)
sage: timeit('a+b',number=10^6)
1000000 loops, best of 3: 297 ns per loop
sage: timeit('a*b',number=10^6)
1000000 loops, best of 3: 397 ns per loop
sage: timeit('a+2',number=10^6)
1000000 loops, best of 3: 1.24 µs per loop
sage: timeit('a*2',number=10^6)
1000000 loops, best of 3: 1.4 µs per loop
sage: timeit('a.sqrt()')
625 loops, best of 3: 146 µs per loop
I get more or less the same timings with or without the patch.
Any idea?
I see that there is a lot of slow-down in the Monsky-Washnitzer code, namely arithmetic in SpecialCubicQuotientRing:
sage: B.<t> = PolynomialRing(Integers(125))
sage: R = monsky_washnitzer.SpecialCubicQuotientRing(t^3 - t + B(1/4))
sage: x, T = R.gens()
sage: x
(0) + (1)*x + (0)*x^2
sage: T
(T) + (0)*x + (0)*x^2
In vanilla sage-4.6.2, I get
sage: timeit('x*T')
625 loops, best of 3: 491 µs per loop
With the patches, I get
sage: timeit('x*T')
625 loops, best of 3: 612 µs per loop
So, there is your problem!
Changed work issues from Doctest errors to Improve Monsky-Washnitzer
Closing in...
With patches
sage: %prun L=[x*T for _ in xrange(1000)]
392002 function calls in 0.766 CPU seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
384000 0.371 0.000 0.371 0.000 polynomial_ring.py:1836(modulus)
1000 0.367 0.000 0.724 0.001 monsky_washnitzer.py:553(_mul_)
1 0.017 0.017 0.766 0.766 <string>:1(<module>)
2000 0.006 0.000 0.006 0.000 integer_mod_ring.py:726(_repr_)
1000 0.003 0.000 0.004 0.000 monsky_washnitzer.py:325(__init__)
2000 0.001 0.000 0.001 0.000 {isinstance}
2000 0.000 0.000 0.000 0.000 {method 'parent' of 'sage.structure.element.Element' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Without patches:
sage: %prun L=[x*T for _ in xrange(1000)]
404602 function calls in 0.684 CPU seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1000 0.366 0.000 0.651 0.001 monsky_washnitzer.py:553(_mul_)
384600 0.234 0.000 0.234 0.000 polynomial_ring.py:1797(modulus)
2000 0.047 0.000 0.061 0.000 polynomial_ring.py:212(_element_constructor_)
1 0.018 0.018 0.684 0.684 <string>:1(<module>)
2000 0.007 0.000 0.007 0.000 integer_mod_ring.py:726(_repr_)
2000 0.005 0.000 0.006 0.000 integer_mod_ring.py:911(__cmp__)
1000 0.003 0.000 0.004 0.000 monsky_washnitzer.py:325(__init__)
4000 0.003 0.000 0.003 0.000 {isinstance}
4000 0.001 0.000 0.001 0.000 {method 'parent' of 'sage.structure.element.Element' objects}
2000 0.001 0.000 0.001 0.000 {cmp}
2000 0.000 0.000 0.000 0.000 {method 'base_ring' of 'sage.structure.category_object.CategoryObject' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
In other words: The biggest loss is the call to modulus(). That should be possible to fix.
Possible reason:
sage: P = R.poly_ring()
sage: len(P.__class__.mro())
14 # without patch
35 # with patch
Apparently that makes attribute access slower.
I just found that the problem with modulus is an excellent use case for the improved cached methods provided by #11115!
Namely, instead of caching the modulus in a Python attribute __modulus and having self.modulus() return self.__modulus, I define self.modulus() as a cached method -- and that is a lot faster:
With the patches:
sage: B.<t> = PolynomialRing(Integers(125))
sage: R = monsky_washnitzer.SpecialCubicQuotientRing(t^3 - t + B(1/4))
sage: P = R.poly_ring()
sage: timeit('P.modulus()',number=10^6)
1000000 loops, best of 3: 226 ns per loop
sage: x, T = R.gens()
sage: timeit('x*T')
625 loops, best of 3: 234 µs per loop
Without the patches (and without the quick cached methods):
sage: B.<t> = PolynomialRing(Integers(125))
sage: R = monsky_washnitzer.SpecialCubicQuotientRing(t^3 - t + B(1/4))
sage: P = R.poly_ring()
sage: timeit('P.modulus()',number=10^6)
1000000 loops, best of 3: 647 ns per loop
sage: x, T = R.gens()
sage: timeit('x*T')
625 loops, best of 3: 495 µs per loop
So, the slow-down turned into a speed-up. I don't know if the doc tests will all pass, but I put it as "needs review".
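A minimal sketch of that change, assuming the sage.misc.cachefunc.cached_method decorator (the class name and attribute here are illustrative, not the actual code in polynomial_ring.py):

```
from sage.misc.cachefunc import cached_method

class PolynomialRingSketch:
    def __init__(self, modulus):
        self.__modulus = modulus

    @cached_method
    def modulus(self):
        # cached_method stores the result after the first call, so repeated
        # calls return the cached value instead of re-running this body
        return self.__modulus
```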
Description changed:
---
+++
@@ -24,5 +24,7 @@
See http://groups.google.com/group/sage-devel/browse_thread/thread/4780192a11a8b591 for more discussion.
-Depends on #9944
+Many other rings are not properly initialised as well. The aim of this ticket is to change that.
+Depends on #9944 #11115 #9976
+
Changed work issues from Improve Monsky-Washnitzer to none
Note, however, that sage -t "devel/sage-main/sage/schemes/elliptic_curves/sha_tate.py" is not up to speed yet, but it is not as slow as with the old patch.
It could actually be that there is no regression in Monsky-Washnitzer at all.
I just took the tests from sage/schemes/elliptic_curves/monsky_washnitzer.py and timed each test individually (using timeit). In all cases, the tests went faster with my patches - even though "sage -t" took 50% longer than without the patches.
My next guess was that there is a problem with the startup time. Indeed, starting sage without the patches feels "snappier".
Using "sage -startuptime", I found with the patches:
== Slowest (including children) ==
1.623 sage.all (None)
0.356 sage.schemes.all (sage.all)
0.225 elliptic_curves.all (sage.schemes.all)
0.222 ell_rational_field (elliptic_curves.all)
0.216 sage.rings.all (sage.all)
0.214 sage.misc.all (sage.all)
0.173 sage.algebras.all (sage.all)
0.153 ell_number_field (ell_rational_field)
Without patch:
== Slowest (including children) ==
1.196 sage.all (None)
0.312 sage.schemes.all (sage.all)
0.190 twisted.persisted.styles (sage.all)
0.176 elliptic_curves.all (sage.schemes.all)
0.172 ell_rational_field (elliptic_curves.all)
0.172 sage.misc.all (sage.all)
0.151 ell_number_field (ell_rational_field)
0.150 ell_field (ell_number_field)
...
0.087 sage.rings.all (sage.all)
...
0.035 sage.algebras.all (sage.all)
Work Issues: startup time
It turns out that measuring the startup time is far too flaky. Without the patches, the startup time varies from "1.008 sage.all (None)" to "1.518 sage.all (None)". There seems to be a regression, though.
Replying to @simon-king-jena:
It turns out that measuring the startup time is far too flaky. Without the patches, the startup time varies from "1.008 sage.all (None)" to "1.518 sage.all (None)". There seems to be a regression, though.
I have to correct myself.
It turned out that both the slow-down in the startup time and the slow-down in the Monsky-Washnitzer code are caused by #11115. So, I suggest that I prepare a new patch that is not based on #11115.
Changed work issues from startup time to none
Description changed:
---
+++
@@ -26,5 +26,5 @@
Many other rings are not properly initialised as well. The aim of this ticket is to change that.
-Depends on #9944 #11115 #9976
+Depends on #9944
For the record: Neither sage -startuptime nor the Monsky-Washnitzer example reports a significant slow-down. It is not a speed-up either, but I think that speeding things up should eventually be the job of #11115.
To be on the safe side, I repeated timings with the latest patch from here, applied only on top of #9944:
$ ./sage -startuptime
...
== Slowest (including children) ==
1.396 sage.all (None)
0.398 sage.misc.all (sage.all)
0.285 functional (sage.misc.all)
0.282 sage.schemes.all (sage.all)
0.271 sage.rings.complex_double (functional)
...
and
sage: B.<t> = PolynomialRing(Integers(125))
sage: R = monsky_washnitzer.SpecialCubicQuotientRing(t^3 - t + B(1/4))
sage: P = R.poly_ring()
sage: x, T = R.gens()
sage: timeit('x*T')
625 loops, best of 3: 869 µs per loop
Without all these patches, I get
$ ./sage -startuptime
...
== Slowest (including children) ==
1.282 sage.all (None)
0.284 sage.schemes.all (sage.all)
0.241 sage.misc.all (sage.all)
0.184 twisted.persisted.styles (sage.all)
0.165 elliptic_curves.all (sage.schemes.all)
...
and
sage: B.<t> = PolynomialRing(Integers(125))
sage: R = monsky_washnitzer.SpecialCubicQuotientRing(t^3 - t + B(1/4))
sage: P = R.poly_ring()
sage: x, T = R.gens()
sage: timeit('x*T')
625 loops, best of 3: 501 µs per loop
I think the timing of the startup time is within the error margin. But there is a slow-down of the Monsky-Washnitzer code. I have shown how to solve the latter problem, based on #11115.
I leave the decision to the reviewer whether or not #11115, together with a patch that puts a cached_method decorator in front of the modulus method, should be a dependency for this ticket.
Replying to @simon-king-jena:
I leave the decision to the reviewer whether or not #11115, together with a patch that puts a cached_method decorator in front of the modulus method, should be a dependency for this ticket.
Apparently there is no reviewer :(
Anyway. I modified the patch so that now modulus() becomes a cached method. This provides a speed-up even without #11115. And once #11115 is merged as well, the speed-up will increase.
Without #11115, but with the patches from #9944 and from here:
sage: B.<t> = PolynomialRing(Integers(125))
sage: R = monsky_washnitzer.SpecialCubicQuotientRing(t^3 - t + B(1/4))
sage: P = R.poly_ring()
sage: x, T = R.gens()
# Without patches, the following was 495 µs per loop
sage: timeit('x*T')
625 loops, best of 3: 472 µs per loop
sage: %prun L=[x*T for _ in xrange(1000)]
392002 function calls in 0.590 CPU seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1000 0.385 0.000 0.560 0.001 monsky_washnitzer.py:553(_mul_)
384000 0.177 0.000 0.177 0.000 cachefunc.py:505(__call__)
1 0.019 0.019 0.590 0.590 <string>:1(<module>)
2000 0.006 0.000 0.006 0.000 integer_mod_ring.py:726(_repr_)
1000 0.003 0.000 0.004 0.000 monsky_washnitzer.py:325(__init__)
2000 0.001 0.000 0.001 0.000 {isinstance}
2000 0.000 0.000 0.000 0.000 {method 'parent' of 'sage.structure.element.Element' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
That's better than with an unpatched Sage and thus good enough.
While working on #11115, I found that there are still some rings that neither use the new coercion framework nor properly initialize their category. That includes quotient rings and free algebras. The problem is that, even though they inherit from sage.rings.ring.Ring, they do not call the appropriate __init__.
I was wondering whether that should be done here or on #11115, but after all I think it is better done here.
Work Issues: Categories for more rings...
Replying to @simon-king-jena:
While working on #11115, I found that there are still some rings that neither use the new coercion framework nor properly initialize their category. That includes quotient rings and free algebras. The problem is that, even though they inherit from sage.rings.ring.Ring, they do not call the appropriate __init__.
I was wondering whether that should be done here or on #11115, but after all I think it is better done here.
Both routes look fine. Please pick whichever is more practical for you.
It is rather frustrating. I try to fit BooleanPolynomialRing into the new coercion model, defining _element_constructor_ and _coerce_map_from_ -- and now, suddenly Sage is complaining that it is still using the old coercion model! I found that the reason for the complaint is the existence of _element_constructor. But that should be the new model, shouldn't it??
Time to call it a day...
It turned out that the notion of "coercion" was not used in the proper way in sage.rings.polynomial.pbori. Namely, _coerce_ was used to convert a boolean set into a boolean polynomial. That cannot be called a coercion, since boolean sets have no parent (at least no parent that would allow for a coercion map). The old coercion model did not mind, but the new coercion model complains about those things.
Moreover, there was a custom call method that first tried to call self._coerce_(x). When one renames that call method to _element_constructor_ (for the new coercion model), one obtains an infinite recursion, because the element constructor is needed to construct the _coerce_ method.
In other words, one needs to replace P._coerce_(...) and P.coerce(...) by either P(...) or by a direct call to P._coerce_c_impl(...). This is what I'm trying now.
Good news: I managed to implement pickling for boolean monomial monoids and boolean monomials, to fit everything into the new coercion model and to make TestSuite(...).run() work on boolean polynomial rings and boolean monomial monoids.
I am not updating my patch yet, though, since there are further issues to fix.
I just found a problem with the category of quotients:
sage: EuclideanDomains().Quotients()
Join of Category of euclidean domains and Category of subquotients of monoids and Category of quotients of semigroups
That's plain wrong. Think of the ring of integers mod 16, which is certainly not a euclidean domain (not even an integral domain), but should belong to the category of quotients of euclidean domains.
I'll try to analyse the problem. I noticed it when I tried to provide Integers(n) with an appropriate category.
Wow!! I found that the wrong result is actually doctested in sage.categories.quotients!
That has to change.
From the doc:
Given a concrete category ``As()`` (i.e. a subcategory of
``Sets()``), ``As().Quotients()`` returns the category of objects
of ``As()`` endowed with a distinguished description as
quotient of some other object of ``As()``.
IntMod16 is not a quotient of ZZ by a morphism of euclidean domains. So it is not in EuclideanDomains().Quotients().
Replying to @nthiery:
From the doc:
Given a concrete category ``As()`` (i.e. a subcategory of ``Sets()``), ``As().Quotients()`` returns the category of objects of ``As()`` endowed with a distinguished description as quotient of some other object of ``As()``.
IntMod16 is not a quotient of ZZ by a morphism of euclidean domains. So it is not in EuclideanDomains().Quotients().
I see. So, I cannot provide QuotientRing with the category of quotients of the category of the ambient ring.
Replying to @simon-king-jena:
I see. So, I cannot provide QuotientRing with the category of quotients of the category of the ambient ring.
Yup. E.g., the quotient of an algebra A by a vector space I is just a vector space. But if I is an ideal, more can be said. So the logic to determine this piece of mathematical information is nontrivial, and often one is better off just specifying it explicitly.
The first patch did not cover all rings - it turned out that many classes derived from sage.rings.ring.Ring do in fact not call the __init__ method of rings. Hence, in these cases, the category stuff was not present.
The second patch takes care of some of these cases - I think I shouldn't vouch for completeness, though. Moreover, I implemented the new coercion model for some more classes of rings, such as free algebras, quotient rings, and boolean polynomial rings.
Concerning quotient rings: I hope that the category of this quotient ring is correctly chosen:
sage: P.<x,y> = QQ[]
sage: Q = P.quo(P*[x^2+y^2])
sage: Q.category()
Join of Category of commutative rings and Category of subquotients of monoids and Category of quotients of semigroups
What do you think: Should it perhaps better be "join of Category of commutative algebras over Rational Field and Category of subquotients ..."? After all, P belongs to the category of commutative algebras over the rational field.
But apart from that, it seems ready for review now.
Changed work issues from Categories for more rings... to none
Replying to @simon-king-jena:
The first patch did not cover all rings - it turned out that many classes derived from sage.rings.ring.Ring do in fact not call the __init__ method of rings. Hence, in these cases, the category stuff was not present.
The second patch takes care of some of these cases - I think I shouldn't vouch for completeness, though. Moreover, I implemented the new coercion model for some more classes of rings, such as free algebras, quotient rings, and boolean polynomial rings.
Cool!
Concerning quotient rings: I hope that the category of this quotient ring is correctly chosen:
sage: P.<x,y> = QQ[]
sage: Q = P.quo(P*[x^2+y^2])
sage: Q.category()
Join of Category of commutative rings and Category of subquotients of monoids and Category of quotients of semigroups
What do you think: Should it perhaps better be "join of Category of commutative algebras over Rational Field and Category of subquotients ..."?
After all, P belongs to the category of commutative algebras over the rational field.
If multiplication by elements of QQ is implemented (and I assume it
is), then yes, I would definitely go for commutative algebras.
But apart from that, it seems ready for review now.
Nice! I'll work on that in the coming weeks, but can't promise when,
with the upcoming Sage days. Please, anyone, beat me to it!
Cheers,
Nicolas
I just noticed:
The patch not only depends on #9944 but is based on sage-4.7.alpha5. I don't know which other patches are responsible for that, but the part of the patch that concerns the polybori code fails to apply with sage-4.6.
Replying to @nthiery:
What do you think: Should it perhaps better be "join of Category of commutative algebras over Rational Field and Category of subquotients ..."?
After all, P belongs to the category of commutative algebras over the rational field.
If multiplication by elements of QQ is implemented (and I assume it is), then yes, I would definitely go for commutative algebras.
I looked at the code, and it seems that at least an action of the base ring exists. But I think one should also implement _rmul_ and _lmul_ for quotient ring elements - they are missing so far.
Best regards,
Simon
The patchbot keeps complaining. I don't know why. Anyway, applied on top of sage-4.7.alpha5, it works for me.
Dependencies: #9944
Apparently the patches need to be rebased: I just tried to apply it on top of my patch queue (including some other relevant tickets), and 8 hunks did not apply for the first of the two patches. Needs work.
Description changed:
---
+++
@@ -26,5 +26,5 @@
Many other rings are not properly initialised as well. The aim of this ticket is to change that.
-Depends on #9944
+Depends on #11268, #11139, #9976, #9944, #11269
FWIW, all (short) tests pass for me.
The dependencies of this patch are stated in the corresponding form field. For the following timings, I additionally have #11298, #11267 and, in particular, #11115.
In some cases, there is quite an improvement, compared with the timings that are stated in previous comments! Note that the times per loop change, depending on whether one lets timeit run 10^4 loops (as in #9944) or one simply does timeit without any specification.
First series of timings: I guess most of the speedup is due to #9944
sage: R.<x> = ZZ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 20.9 µs per loop
# unpatched 625 loops, best of 3: 22.5 µs per loop
sage: R.<x> = QQ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 21.3 µs per loop
# unpatched 625 loops, best of 3: 24.5 µs per loop
sage: R.<x> = GF(3)[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 19.2 µs per loop
# unpatched 625 loops, best of 3: 87.7 µs per loop
sage: R.<x> = QQ['t'][]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 75.4 µs per loop
# unpatched 625 loops, best of 3: 114 µs per loop
sage: R.<x,y> = ZZ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 19.7 µs per loop
# unpatched 625 loops, best of 3: 21.9 µs per loop
sage: R.<x,y> = QQ[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 18.2 µs per loop
# unpatched 625 loops, best of 3: 40 µs per loop
sage: R.<x,y> = GF(3)[]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 15.8 µs per loop
# unpatched 625 loops, best of 3: 26.3 µs per loop
sage: R.<x,y> = QQ['t'][]
sage: timeit('(2*x-1)^2+5')
625 loops, best of 3: 162 µs per loop
# unpatched 625 loops, best of 3: 239 µs per loop
Timings for some "schemes" tests.
sage -t "devel/sage-main/sage/schemes/elliptic_curves/sha_tate.py"
[29.5 s]
I have not the faintest idea where that slow-down might come from. Namely, the underlying arithmetic has drastically improved:
sage: B.<t> = PolynomialRing(Integers(125))
sage: R = monsky_washnitzer.SpecialCubicQuotientRing(t^3 - t + B(1/4))
sage: x, T = R.gens()
sage: timeit('x*T')
625 loops, best of 3: 179 µs per loop
# unpatched: 625 loops, best of 3: 612 µs per loop
and
sage: B.<t> = PolynomialRing(Integers(125))
sage: R = monsky_washnitzer.SpecialCubicQuotientRing(t^3 - t + B(1/4))
sage: P = R.poly_ring()
sage: timeit('P.modulus()',number=10^6)
1000000 loops, best of 3: 161 ns per loop
# unpatched: 1000000 loops, best of 3: 647 ns per loop
sage: x, T = R.gens()
sage: timeit('x*T')
625 loops, best of 3: 177 µs per loop
# unpatched: 625 loops, best of 3: 495 µs per loop
And the startup time (which is also relevant for doctests):
1.326 sage.all (None)
0.324 sage.schemes.all (sage.all)
0.184 sage.misc.all (sage.all)
0.160 hyperelliptic_curves.all (sage.schemes.all)
I'd appreciate it if someone could explain why the doctest time in sage.schemes increased, while the underlying arithmetic became faster.
Overall, I think that the performance is quite good, and of course the main point of this ticket (namely to implement the category framework for rings) was successfully addressed.
Review, anyone?
rebased Simon's patch to 4.7.1.alpha1
Description changed:
---
+++
@@ -28,3 +28,4 @@
Depends on #11268, #11139, #9976, #9944, #11269
+Apply trac9138-categories_for_rings.patch, trac9138_categories_for_more_rings.rebase4.7.1.a1.patch
Attachment: trac9138_categories_for_more_rings.rebase4.7.1.a1.patch.gz
Thank you, Burcin!
For the patchbot:
Apply trac9138-categories_for_rings.patch, trac9138_categories_for_more_rings.rebase4.7.1.a1.patch
Work Issues: Steenrod algebras
It seems that there is a problem with doctests for the Steenrod algebra. When I apply the patch on top of sage-4.7.1.rc1, I obtain
sage: SteenrodAlgebra(2)['x'].category()
Exception raised:
Traceback (most recent call last):
File "/mnt/local/king/SAGE/sage-4.7.1.rc1/local/bin/ncadoctest.py", line 1231, in run_one_test
self.run_one_example(test, example, filename, compileflags)
File "/mnt/local/king/SAGE/sage-4.7.1.rc1/local/bin/sagedoctest.py", line 38, in run_one_example
OrigDocTestRunner.run_one_example(self, test, example, filename, compileflags)
File "/mnt/local/king/SAGE/sage-4.7.1.rc1/local/bin/ncadoctest.py", line 1172, in run_one_example
compileflags, 1) in test.globs
File "<doctest __main__.example_1[12]>", line 1, in <module>
SteenrodAlgebra(Integer(2))['x'].category()###line 114:
sage: SteenrodAlgebra(2)['x'].category()
File "/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python/site-packages/sage/algebras/steenrod/steenrod_algebra.py", line 1037, in homogeneous_component
basis = self._basis_fcn(n)
File "cachefunc.pyx", line 531, in sage.misc.cachefunc.CachedFunction.__call__ (sage/misc/cachefunc.c:2227)
File "/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python/site-packages/sage/algebras/steenrod/steenrod_algebra_bases.py", line 368, in steenrod_algebra_basis_
if n < 0 or int(n) != n:
ValueError: invalid literal for int() with base 10: 'x'
So, back to the drawing board...
I wonder if the Steenrod algebra problem is due to #10052, which was merged in 4.7.1.alpha3.
Replying to @jhpalmieri:
I wonder if the Steenrod algebra problem is due to #10052, which was merged in 4.7.1.alpha3.
I wouldn't say "due to", since the critical tests (perhaps introduced by #10052) pass with sage-4.7.1.rc1. In fact, I already found out that attachment: trac9138-categories_for_rings.patch is to blame. But I don't know yet what went wrong.
Replying to @simon-king-jena:
I wouldn't say "due to"
I guess I meant due to the interactions between the patch here and the changes in #10052.
In my first patch, I had introduced the test
sage: SteenrodAlgebra(2)['x'].category()
Category of algebras over mod 2 Steenrod algebra
It was supposed to be a test for a polynomial ring over a non-commutative ring. But by your patch from #10052, the __getitem__ method of a Steenrod algebra got a different meaning.
I think the easiest solution (and probably the best solution as well) would be to replace that test by a polynomial ring over a different non-commutative algebra, perhaps a matrix algebra.
Replying to @simon-king-jena:
I think the easiest solution (and probably the best solution as well) would be to replace that test by a polynomial ring over a different non-commutative algebra, perhaps a matrix algebra.
Alas.
Matrix spaces have their own custom __getitem__ as well. But it would be possible to construct the polynomial ring by using the polynomial ring constructor:
sage: PolynomialRing(MatrixSpace(QQ,2),'x').category()
Category of algebras over Full MatrixSpace of 2 by 2 dense matrices over Rational Field
sage: PolynomialRing(SteenrodAlgebra(2),'x').category()
Category of algebras over mod 2 Steenrod algebra, milnor basis
The other problem is in sage/algebras/steenrod/steenrod_algebra.py: With the patch from here,
sage: A1 = SteenrodAlgebra(profile=[2,1])
sage: A1(3) # map integer into A1
returns 3 and not 1!
That behaviour boils down to
sage: A1._from_dict({():3})
3
Here, one should have
sage: A1._from_dict({():3},coerce=True)
1
I don't know, though, how my patch has changed the question of whether there is a conversion or not: The word coerce occurs precisely twice in my first patch, but that is certainly unrelated to "coerce=True" versus "coerce=False". Strange.
It seems that the problem is deeper. With the first patch, I obtain
sage: A1 = SteenrodAlgebra(profile=[2,1])
sage: A1.coerce_map_from(ZZ)
Conversion map:
From: Integer Ring
To: sub-Hopf algebra of mod 2 Steenrod algebra, milnor basis, profile function [2, 1]
Without the patch, I obtain
sage: A1 = SteenrodAlgebra(profile=[2,1])
sage: A1.coerce_map_from(ZZ)
Composite map:
From: Integer Ring
To: sub-Hopf algebra of mod 2 Steenrod algebra, milnor basis, profile function [2, 1]
Defn: Natural morphism:
From: Integer Ring
To: Finite Field of size 2
then
Generic morphism:
From: Finite Field of size 2
To: sub-Hopf algebra of mod 2 Steenrod algebra, milnor basis, profile function [2, 1]
As a consequence, when doing A1(3) with my patch, a direct conversion is attempted from ZZ to A1, but the auxiliary methods involved in the conversion assume that the argument 3 has already been converted into the base ring, GF(2).
Perhaps a "register_coercion" during initialisation could help.
Replying to @simon-king-jena:
Perhaps a "register_coercion" during initialisation could help.
Actually, that registration should take place in the __init_extra__ method that is defined in sage.categories.algebras. Without my patch, the rather generic coercion from the base ring is registered no matter what, which means that it could result in a very slow coerce map from the base ring.
Therefore, my patch modifies __init_extra__ so that the generic coercion is only registered if no "better" coercion has been registered before. It could be that that is a problem for Steenrod algebras.
By inserting print statements into my new __init_extra__ method, I found out that when it is called, the "one" of the Steenrod algebra is not yet available. Therefore, the generic method ("multiply the given element of the base ring by the multiplicative unit of the algebra") is not available at that time.
Without my patch, a different method is used for coercion, namely
SetMorphism(function = self.from_base_ring, parent = Hom(self.base_ring(), self, Rings()))
The reason for changing it was the fact that normally self.one()._lmul_(r) is a pretty fast way to convert a base ring element r into self. But I guess that the old from_base_ring should be used if the unit is not available during initialisation.
That said: The generic from_base_ring does exactly the same as my new approach -- but it constructs the unit again and again and again. Therefore, if the unit is available during initialisation, then my approach is faster.
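A rough sketch of the registration logic described above (the standalone function is a hypothetical simplification of what the patched __init_extra__ does; SetMorphism, Hom and Rings are used as in the snippet quoted above):

```
from sage.categories.homset import Hom
from sage.categories.morphism import SetMorphism
from sage.categories.rings import Rings

def register_base_ring_coercion(A):
    """Register a coercion from A.base_ring() into A, unless one already exists."""
    B = A.base_ring()
    if A.coerce_map_from(B) is not None:
        return                        # keep the better map registered earlier
    try:
        one = A.one()                 # may not be available during __init__
    except Exception:
        one = None
    if one is not None:
        # fast path: r |-> one._lmul_(r), reusing the precomputed unit
        phi = SetMorphism(function=lambda r: one._lmul_(r),
                          parent=Hom(B, A, Rings()))
    else:
        # generic fallback, which reconstructs the unit on every call
        phi = SetMorphism(function=A.from_base_ring,
                          parent=Hom(B, A, Rings()))
    A.register_coercion(phi)
```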
I am now fighting against some doctest failures that are apparently due to the fact that many tests in sage.rings.polynomial.multi_polynomial_ring do not use the cache of polynomial rings, in order to demonstrate features of ring classes that would otherwise hardly be used.
Problem: If there is a ring in the cache, together with a coerce map from its base ring, then the coerce map is cached as well. Later, if one constructs an isomorphic ring that is not in the cache of rings, then the new ring will compare equal to the previously constructed ring, and thus looking up the coerce map yields a map with the wrong codomain.
Difficult.
The problem was that some doc tests in sage/rings violate the unique parent assumption on purpose. But homsets will try to be unique even if domain and codomain are not unique. That's bad.
Therefore, I made the following change for my first patch: If Hom(X,Y,category) is able to find a hom set H for the given data in cache, then it is first tested that H.domain() is X and H.codomain() is Y. If it isn't, then a new hom set is constructed, and put into the cache.
Hence, we have (as a new doctest):
By trac ticket #9138, we abandon the uniqueness of hom sets, if the domain or
codomain break uniqueness::
sage: from sage.rings.polynomial.multi_polynomial_ring import MPolynomialRing_polydict_domain
sage: P.<x,y,z>=MPolynomialRing_polydict_domain(QQ, 3, order='degrevlex')
sage: Q.<x,y,z>=MPolynomialRing_polydict_domain(QQ, 3, order='degrevlex')
sage: P == Q
True
sage: P is Q
False
Hence, P and Q are not unique parents. As a consequence, the following homsets
aren't either::
sage: H1 = Hom(QQ,P)
sage: H2 = Hom(QQ,Q)
sage: H1 == H2
True
sage: H1 is H2
False
It is always the most recently constructed hom set that remains in the cache::
sage: H2 is Hom(QQ,Q)
True
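A self-contained toy of that cache check (everything here is made up for illustration; the real logic is in sage.categories.homset.Hom):

```
class ToyHomset:
    def __init__(self, X, Y):
        self._domain, self._codomain = X, Y
    def domain(self):
        return self._domain
    def codomain(self):
        return self._codomain

_toy_cache = {}

def toy_Hom(X, Y):
    key = (X, Y)                   # keys compare by ==, so equal-but-distinct
    H = _toy_cache.get(key)        # parents can hit the same cache entry
    if H is not None and H.domain() is X and H.codomain() is Y:
        return H                   # the cached hom set really belongs to X and Y
    H = ToyHomset(X, Y)            # otherwise build a fresh one ...
    _toy_cache[key] = H            # ... which replaces the stale entry
    return H
```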
The second patch still applies on top of the first. I did the doc tests in sage/rings and sage/categories with the first patch, but full doc tests should be run with both patches, of course.
Apply trac9138-categories_for_rings.patch trac9138_categories_for_more_rings.rebase4.7.1.a1.patch
Changed work issues from Steenrod algebras to none
Description changed:
---
+++
@@ -28,4 +28,7 @@
Depends on #11268, #11139, #9976, #9944, #11269
-Apply trac9138-categories_for_rings.patch, trac9138_categories_for_more_rings.rebase4.7.1.a1.patch
+Apply
+
+- [attachment: trac9138-categories_for_rings.patch](https://github.com/sagemath/sage-prod/files/10649440/trac9138-categories_for_rings.patch.gz)
+- [attachment: trac9138_categories_for_more_rings.rebase4.7.1.a1.patch](https://github.com/sagemath/sage-prod/files/10649439/trac9138_categories_for_more_rings.rebase4.7.1.a1.patch.gz)
Too bad. There are numerous errors for the tests in sage/doc.
Work Issues: doc test errors in sage/doc
Replying to @simon-king-jena:
Too bad. There are numerous errors for the tests in sage/doc.
To be precise: The errors seem to come from elliptic curves. Not for the first time...
For example, I get
sage: L = EllipticCurve('11a').padic_lseries(5)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/home/king/<ipython console> in <module>()
/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python2.6/site-packages/sage/schemes/elliptic_curves/padics.pyc in padic_lseries(self, p, normalize, use_eclib)
157 if self.ap(p) % p != 0:
158 Lp = plseries.pAdicLseriesOrdinary(self, p,
--> 159 normalize = normalize, use_eclib=use_eclib)
160 else:
161 Lp = plseries.pAdicLseriesSupersingular(self, p,
/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python2.6/site-packages/sage/schemes/elliptic_curves/padic_lseries.pyc in __init__(self, E, p, use_eclib, normalize)
197 print "Warning : Curve outside Cremona's table. Computations of modular symbol space might take very long !"
198
--> 199 self._modular_symbol = E.modular_symbol(sign=+1, use_eclib = use_eclib, normalize=normalize)
200
201 def __add_negative_space(self):
/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python2.6/site-packages/sage/schemes/elliptic_curves/ell_rational_field.pyc in modular_symbol(self, sign, use_eclib, normalize)
1241 M = ell_modular_symbols.ModularSymbolECLIB(self, sign, normalize=normalize)
1242 else :
-> 1243 M = ell_modular_symbols.ModularSymbolSage(self, sign, normalize=normalize)
1244 self.__modular_symbol[typ] = M
1245 return M
/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python2.6/site-packages/sage/schemes/elliptic_curves/ell_modular_symbols.pyc in __init__(self, E, sign, normalize)
621 self._use_eclib = False
622 self._normalize = normalize
--> 623 self._modsym = E.modular_symbol_space(sign=self._sign)
624 self._base_ring = self._modsym.base_ring()
625 self._ambient_modsym = self._modsym.ambient_module()
/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python2.6/site-packages/sage/schemes/elliptic_curves/ell_rational_field.pyc in modular_symbol_space(self, sign, base_ring, bound)
1111 except KeyError:
1112 pass
-> 1113 M = ell_modular_symbols.modular_symbol_space(self, sign, base_ring, bound=bound)
1114 self.__modular_symbol_space[typ] = M
1115 return M
/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python2.6/site-packages/sage/schemes/elliptic_curves/ell_modular_symbols.pyc in modular_symbol_space(E, sign, base_ring, bound)
110 t = V.T(p)
111 ap = E.ap(p)
--> 112 V = (t - ap).kernel()
113 p = next_prime(p)
114
/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python2.6/site-packages/sage/structure/element.so in sage.structure.element.RingElement.__sub__ (sage/structure/element.c:12234)()
/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python2.6/site-packages/sage/structure/coerce.so in sage.structure.coerce.CoercionModel_cache_maps.bin_op (sage/structure/coerce.c:6463)()
/mnt/local/king/SAGE/sage-4.7.1.rc1/local/lib/python2.6/site-packages/sage/structure/coerce.so in sage.structure.coerce.CoercionModel_cache_maps.canonical_coercion (sage/structure/coerce.c:7862)()
RuntimeError: BUG in map, returned None -2 <type 'sage.categories.map.FormalCompositeMap'> Composite map:
From: Integer Ring
To: Full Hecke algebra acting on Modular Symbols space of dimension 2 for Gamma_0(11) of weight 2 with sign 1 over Rational Field
Defn: Natural morphism:
From: Integer Ring
To: Rational Field
then
Generic morphism:
From: Rational Field
To: Full Hecke algebra acting on Modular Symbols space of dimension 2 for Gamma_0(11) of weight 2 with sign 1 over Rational Field
with the latest patches.