## Release management: Merging patches

This is a revision of my post to the sage-release mailing list, outlining some guidelines for merging positively reviewed tickets. I want to post it here so that it won’t be lost in the archive of the sage-release mailing list.

There are as yet few guidelines for Sage release management. The release management wiki page has been around for months, but it’s not especially helpful for anyone who wants hands-on experience. Here, I’ll try to list some guidelines for merging tickets/spkg’s and managing positively reviewed tickets. The primary focus is on merging tickets with patches.

Before starting on merging tickets for a new milestone, I would compile the source tarball of the current stable release of Sage under a fast-ish directory such as /scratch/. In my case, it’s under my /scratch/ home directory /scratch/mvngu/release/ on the machine sage.math. The idea is to avoid working under a directory that resides on an NFS mounted disk. Towards that end, I also set my default DOT_SAGE directory to a fast-ish directory, as implemented via the following export line in my .bashrc:

```
# The Sage working directory .sage/
export DOT_SAGE=/dev/shm/mvngu/dot_sage
```

It also helps to know whether or not there’s any doctest failure on sage.math with the newly compiled source tarball.

After compiling and (parallel) doctesting the latest version of Sage, it’s time to merge tickets that have received positive review. I often use trac report 11 to find out which tickets have been positively reviewed. Using report 11, I would then pick a ticket and read through it for any special instructions. In many cases, the ticket has instructions for which patch to merge and, when there is more than one patch on the ticket, in which order. In some cases, it can be difficult to work out the order in which to merge patches on a ticket. In a situation like this, I would ask the patch author(s) to explain which patches on the ticket should be merged and in which order.

Sage has a merge script in the scripts repository, designed for merging tickets. The merge script is SAGE_LOCAL/bin/sage-apply-ticket, which can be used from the command line via

```
./sage -merge <ticket-number>
```

However, to effectively use that script, one first needs a working knowledge of Mercurial and its queues extension. This is especially important because the merge script uses Mercurial queues (mq). There would be times when the merge script cannot help me with debugging ticket conflicts, so I would need to use mq together with the bisect command to search through the commit history. I don’t often use the merge script, so there is little for me to say about how to effectively use it.
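The bisection idea can be sketched in plain Python (my own toy stand-in for what `hg bisect` automates over an mq series; `is_broken` is a hypothetical callback that applies a prefix of the series and reports whether the build or tests fail):

```python
# Toy bisection over an ordered patch series: find the first patch whose
# application breaks the tree. `is_broken(prefix)` is a hypothetical hook
# standing in for "qpush up to this patch, rebuild, and doctest".
def first_bad_patch(patches, is_broken):
    lo, hi = 0, len(patches)              # first bad index lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(patches[:mid + 1]):  # prefix up to and including mid
            hi = mid                      # breakage is at mid or earlier
        else:
            lo = mid + 1                  # prefix is clean; look later
    return patches[lo] if lo < len(patches) else None

series = ["trac_100.patch", "trac_101.patch", "trac_102.patch", "trac_103.patch"]
print(first_bad_patch(series, lambda prefix: "trac_102.patch" in prefix))
# → trac_102.patch
```

Each probe costs a rebuild and a test run, so halving the search space at every step matters when a dozen patches are on the queue.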

Here, I’ll explain one way to merge a positively reviewed ticket using mq. Any patch to be merged must carry the full name and email address of the patch author, and the commit message must contain the ticket number. Use mq to qimport and qpush any relevant patches to be merged. Sometimes a patch author forgets to put their full name and email address on the patch, so I need to manually download the patch and fill in those details. Another way is to set the username field in my .hgrc to the patch author’s full name and email address, then qimport and qpush; afterwards, I would restore my own full name and email address in my .hgrc. If a commit message doesn’t contain the ticket number, or the message is something nonsensical like

```
[mq]: real-numbers-work.patch
```

I would use qrefresh -e to replace that with a sensible commit message that also has the ticket number. Now the patches must pass the following criteria:

- No hunk failures resulting from qpush.
- The Sage library must rebuild without errors.
- The HTML and PDF versions of the reference manual must build successfully.
- No doctest failures when (parallel) doctesting everything under SAGE_ROOT/devel/sage-main/.
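The author and ticket-number checks described earlier can be automated. Here is a minimal sketch (my own helper, not a Sage script; the header lines follow Mercurial’s exported-patch format, and the sample patches are made up):

```python
import re

def check_patch_header(patch_text, ticket_number):
    """Return a list of problems with a Mercurial patch header:
    a '# User' line without 'Name <email>', or a commit message
    that lacks the ticket number."""
    problems = []
    user = re.search(r"^# User (.+)$", patch_text, re.MULTILINE)
    if not user or not re.search(r".+<\S+@\S+>", user.group(1)):
        problems.append("missing full name and email address")
    # The commit message is the first non-'#' line in an hg patch.
    message = next((ln for ln in patch_text.splitlines()
                    if ln and not ln.startswith("#")), "")
    if ("#%d" % ticket_number) not in message:
        problems.append("commit message lacks the ticket number")
    return problems

good = ("# HG changeset patch\n"
        "# User Jane Doe <jane@example.org>\n"
        "trac #1234: fix real numbers\n")
bad = ("# HG changeset patch\n"
       "# User jane\n"
       "[mq]: real-numbers-work.patch\n")
print(check_patch_header(good, 1234))  # → []
print(check_patch_header(bad, 1234))
```

Running such a check before qpush saves a round of qrefresh -e later.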

If any of the above steps results in an error, then it’s very likely that the ticket will receive an automatic “needs work” status. Be sure to explain on the ticket why the patch(es) need work. Is it a rebase? If so, say which version of Sage to rebase against. Does the ticket conflict with another ticket? And so on.

If all of the above steps pass for the ticket, then move on to another ticket for merging. Be careful not to close the ticket even though the above criteria are met. What I would do is merge a dozen or so tickets as per the above instructions, without closing the corresponding tickets after qimport-ing and qpush-ing their patches. Once I have a qseries with patches from a dozen tickets, all of which should still be open by now, I would copy everything under SAGE_ROOT to another directory, say, under /dev/shm/. (I would also note down the order in which those dozen tickets were merged.) From that newly copied SAGE_ROOT, I would create a source tarball to test that everything still builds after merging the dozen tickets. To do so, I would:

- cd to the SAGE_ROOT under /dev/shm/.
- Run ./sage.
- Run ./sage -b main. This step takes a while, so I would find something else to do for at least 15 minutes during the rebuild.
- Create a tentative source tarball with ./sage -sdist x.y.z-<descriptive-name>

The name <descriptive-name> is something to remind myself that this source tarball is only to test that the resulting tarball still builds and all doctests still pass.

- The result is a source tarball under dist/.
- Untar that source tarball, compile Sage, and parallel doctest. This step usually takes more than 1 hour, so I would make sure to have something to occupy me for at least an hour.

To be safe, I often copy the whole SAGE_ROOT of my merge tree to another directory and run through the above steps. This is because a ticket can have an spkg for merging into the standard spkg repository. You could push a sage-main repository to another copy of Sage somewhere, but presently you can’t do the same thing for the standard spkg repository.

Once the tentative source tarball successfully compiles and all doctests under SAGE_ROOT/devel/sage-main/ pass, then I would close the relevant dozen or so tickets.

The above is a general overview of the process I follow for merging/closing tickets. It’s a slow process, I admit. But I would rather try to catch build and doctest problems early than have them bite me later on down the track. As noted at the beginning of this post, I didn’t cover anything about merging updated spkg’s in the standard packages repository. That could be a topic for another post.

## Parallel testing the Sage library

After compiling a source version of Sage, you can choose to run the test suite on the whole Sage library, on all modules under a given directory, or on a specified module only. For the purposes of this post, I have compiled Sage 4.1.1 from source and the top level Sage directory is

```
[mvngu@mod sage-4.1.1]$ pwd
/scratch/mvngu/build/sage-4.1.1
```

**Testing a module**

Say you want to run all tests in the sudoku module sage/games/sudoku.py. In a terminal window, first cd to the top level Sage directory of your local Sage installation. Now start doctesting as demonstrated in the following terminal session:

```
[mvngu@mod sage-4.1.1]$ ./sage -t devel/sage-main/sage/games/sudoku.py
sage -t "devel/sage-main/sage/games/sudoku.py"      [6.0 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 6.0 seconds
```

As you can see, the numbers output by the test tell you both how long testing the sudoku module took (about six seconds) and the total time for all the modules you specified. Since you tested only one module, it’s no surprise that the total testing time is approximately the same as the time required to test that one module. Notice that the syntax is

```
[mvngu@mod sage-4.1.1]$ ./sage -t devel/sage-main/sage/games/sudoku.py
sage -t "devel/sage-main/sage/games/sudoku.py"      [5.7 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 5.7 seconds
[mvngu@mod sage-4.1.1]$ ./sage -t "devel/sage-main/sage/games/sudoku.py"
sage -t "devel/sage-main/sage/games/sudoku.py"      [5.4 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 5.4 seconds
```

and not

```
[mvngu@mod sage-4.1.1]$ ./sage -t sage/games/sudoku.py
ERROR: File ./sage/games/sudoku.py is missing
exit code: 1
----------------------------------------------------------------------
The following tests failed:
	./sage/games/sudoku.py
Total time for all tests: 0.0 seconds
[mvngu@mod sage-4.1.1]$ ./sage -t "sage/games/sudoku.py"
ERROR: File ./sage/games/sudoku.py is missing
exit code: 1
----------------------------------------------------------------------
The following tests failed:
	./sage/games/sudoku.py
Total time for all tests: 0.0 seconds
```

In all of the above terminal sessions, you use a local installation of Sage to test its own modules. Even if you have a system-wide Sage installation, using that version to doctest the modules of a local installation is a recipe for confusion. For example, in the following session, there is a system-wide Sage. Doctesting using that version doesn’t output anything useful.

```
[mvngu@mod sage-4.1.1]$ which sage
/usr/local/bin/sage
[mvngu@mod sage-4.1.1]$ sage devel/sage-main/sage/games/sudoku.py
```

**Troubleshooting**

If you want to doctest modules of a Sage installation, it is recommended that you first cd to the top level directory of that Sage installation, otherwise known as the SAGE_ROOT of that installation. When you run tests, use that particular Sage installation via the syntax ./sage; notice the “dot-forward-slash” at the front of sage. This is a precaution against confusion that can arise when your system has multiple Sage installations. For example, the following syntax is OK because you explicitly specify the Sage installation in the current SAGE_ROOT:

```
[mvngu@sage sage-4.1.1]$ ./sage -t devel/sage-main/sage/games/sudoku.py
sage -t "devel/sage-main/sage/games/sudoku.py"      [5.1 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 5.1 seconds
[mvngu@sage sage-4.1.1]$ ./sage -t "devel/sage-main/sage/games/sudoku.py"
sage -t "devel/sage-main/sage/games/sudoku.py"      [5.0 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 5.0 seconds
```
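The dot-slash distinction can be demonstrated with a tiny stand-in script (the stub and its temporary directory are hypothetical, not a real Sage install):

```shell
# `sage` resolves through $PATH, while `./sage` always runs the copy in
# the current directory, whatever else is installed system-wide.
demo=$(mktemp -d)
cd "$demo"
printf '#!/bin/sh\necho local sage\n' > sage
chmod +x sage
./sage      # runs this directory's copy, regardless of $PATH
```

Whatever a system-wide `sage` would do, `./sage` here prints `local sage` because the leading `./` pins the lookup to the current directory.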

If you login as a regular user, the following syntax is not recommended as you are using a system-wide Sage installation (if it exists):

```
[mvngu@sage sage-4.1.1]$ sage -t devel/sage-main/sage/games/sudoku.py
Traceback (most recent call last):
  File "/usr/local/sage/local/bin/sage-test", line 49, in <module>
    os.makedirs(TMP)
  File "/usr/local/sage/local/lib/python/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/sage/tmp/tmp'
[mvngu@sage sage-4.1.1]$ sage -t "devel/sage-main/sage/games/sudoku.py"
Traceback (most recent call last):
  File "/usr/local/sage/local/bin/sage-test", line 49, in <module>
    os.makedirs(TMP)
  File "/usr/local/sage/local/lib/python/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/sage/tmp/tmp'
```

In this case, you received a permission error because the system-wide Sage installation attempts to write some data to a system-wide directory using your login privileges. The system-wide directory is a temporary directory under the system-wide SAGE_ROOT. Most likely a system-wide Sage installation was performed by a system administrator using an account with more privileges than a regular user. As a regular user, you cannot write to directories where you don’t have write permission.

The following syntax is also discouraged when you login as a regular user:

```
[mvngu@sage sage-4.1.1]$ cd
[mvngu@sage ~]$ sage -t devel/sage-main/sage/games/sudoku.py
Traceback (most recent call last):
  File "/usr/local/sage/local/bin/sage-test", line 49, in <module>
    os.makedirs(TMP)
  File "/usr/local/sage/local/lib/python/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/sage/tmp/tmp'
[mvngu@sage ~]$ sage -t "devel/sage-main/sage/games/sudoku.py"
Traceback (most recent call last):
  File "/usr/local/sage/local/bin/sage-test", line 49, in <module>
    os.makedirs(TMP)
  File "/usr/local/sage/local/lib/python/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/sage/tmp/tmp'
```

This is exactly the same as the previous syntax because in both cases you attempted to doctest modules in a system-wide Sage installation using privileges of a regular user. If you don’t have permission to read or write somewhere on your system, you can’t read or write there. As a regular user, you don’t usually have privileges to read the directory /root nor do you have privileges to write to the root directory:

```
[mvngu@sage sage-4.1.1]$ ls /root/
ls: cannot open directory /root/: Permission denied
[mvngu@sage sage-4.1.1]$ cd /
[mvngu@sage /]$ touch demo.txt
touch: cannot touch `demo.txt': Permission denied
```

**Parallel testing many modules**

So far you have used a single thread to doctest a module in the Sage library. There are hundreds, even thousands, of modules in the Sage library, and testing them all using one thread would take a few hours; depending on your hardware, it could take six hours or more. On a multi-core system, parallel doctesting can significantly reduce the testing time. Unless you also want to use your computer while doctesting in parallel, you can devote all the cores of your system to parallel testing.
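The scheduling idea can be modelled in a few lines of Python (a toy illustration using threads; `sage -tp N` actually dispatches per-file test processes, and the module names and timings here are made up):

```python
# Toy model of parallel doctesting: run per-module "test jobs" in a
# pool of 2 workers, the way `sage -tp 2` runs 2 jobs in parallel.
import time
from concurrent.futures import ThreadPoolExecutor

def doctest_module(job):
    name, seconds = job
    time.sleep(seconds)      # stand-in for actually doctesting the module
    return name

jobs = [("sudoku.py", 0.2), ("sr.py", 0.2), ("lfsr.py", 0.2), ("sbox.py", 0.2)]
start = time.time()
with ThreadPoolExecutor(max_workers=2) as pool:
    done = list(pool.map(doctest_module, jobs))
elapsed = time.time() - start
# With 2 workers, wall time is roughly half of the 0.8 s serial total.
print("doctested %d modules in %.1f s" % (len(done), elapsed))
```

The total wall time approaches (serial time) / (number of workers), which is why the `-tp` runs later in this post finish so much faster than the single-threaded ones.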

Let’s doctest all modules in a directory, first using a single thread and then using two threads. For this example, suppose you want to test all the modules under sage/crypto/. You can use a syntax similar to that shown above to achieve this:

```
[mvngu@mod sage-4.1.1]$ ./sage -t devel/sage-main/sage/crypto/
sage -t "devel/sage-main/sage/crypto/lfsr.py"      [2.5 s]
sage -t "devel/sage-main/sage/crypto/cryptosystem.py"      [1.9 s]
sage -t "devel/sage-main/sage/crypto/block_cipher/miniaes.py"      [2.5 s]
sage -t "devel/sage-main/sage/crypto/block_cipher/all.py"      [0.1 s]
sage -t "devel/sage-main/sage/crypto/block_cipher/__init__.py"      [0.1 s]
sage -t "devel/sage-main/sage/crypto/classical.py"      [2.7 s]
sage -t "devel/sage-main/sage/crypto/mq/mpolynomialsystem.py"      [8.7 s]
sage -t "devel/sage-main/sage/crypto/mq/mpolynomialsystemgenerator.py"      [1.9 s]
sage -t "devel/sage-main/sage/crypto/mq/__init__.py"      [0.1 s]
sage -t "devel/sage-main/sage/crypto/mq/sbox.py"      [2.8 s]
sage -t "devel/sage-main/sage/crypto/mq/sr.py"      [4.9 s]
sage -t "devel/sage-main/sage/crypto/stream_cipher.py"      [1.9 s]
sage -t "devel/sage-main/sage/crypto/all.py"      [0.1 s]
sage -t "devel/sage-main/sage/crypto/stream.py"      [1.9 s]
sage -t "devel/sage-main/sage/crypto/__init__.py"      [0.1 s]
sage -t "devel/sage-main/sage/crypto/classical_cipher.py"      [1.9 s]
sage -t "devel/sage-main/sage/crypto/cipher.py"      [1.9 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 35.7 seconds
```

Now let’s do the same thing, but this time let’s also use the optional argument -long:

```
[mvngu@mod sage-4.1.1]$ ./sage -t -long devel/sage-main/sage/crypto/
sage -t -long "devel/sage-main/sage/crypto/lfsr.py"      [1.9 s]
sage -t -long "devel/sage-main/sage/crypto/cryptosystem.py"      [2.0 s]
sage -t -long "devel/sage-main/sage/crypto/block_cipher/miniaes.py"      [2.6 s]
sage -t -long "devel/sage-main/sage/crypto/block_cipher/all.py"      [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/block_cipher/__init__.py"      [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/classical.py"      [2.7 s]
sage -t -long "devel/sage-main/sage/crypto/mq/mpolynomialsystem.py"      [8.7 s]
sage -t -long "devel/sage-main/sage/crypto/mq/mpolynomialsystemgenerator.py"      [2.2 s]
sage -t -long "devel/sage-main/sage/crypto/mq/__init__.py"      [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/mq/sbox.py"      [2.9 s]
sage -t -long "devel/sage-main/sage/crypto/mq/sr.py"      [56.6 s]
sage -t -long "devel/sage-main/sage/crypto/stream_cipher.py"      [2.5 s]
sage -t -long "devel/sage-main/sage/crypto/all.py"      [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/stream.py"      [1.9 s]
sage -t -long "devel/sage-main/sage/crypto/__init__.py"      [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/classical_cipher.py"      [1.9 s]
sage -t -long "devel/sage-main/sage/crypto/cipher.py"      [1.9 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 88.0 seconds
```

Notice the time difference between the first set of tests and the second set, which uses the optional argument -long. Many tests in the Sage library are flagged with “# long time” because these are known to take a long time to run through. Without using the optional -long argument, the module sage/crypto/mq/sr.py took about five seconds. With this optional argument, it required 57 seconds to run through all tests in that module. Here is a snippet of a function in the module sage/crypto/mq/sr.py with a doctest that has been flagged as taking a long time:

```
def test_consistency(max_n=2, **kwargs):
    r"""
    Test all combinations of ``r``, ``c``, ``e`` and ``n`` in ``(1, 2)``
    for consistency of random encryptions and their polynomial systems.
    `\GF{2}` and `\GF{2^e}` systems are tested. This test takes a while.

    INPUT:

    - ``max_n`` - maximal number of rounds to consider (default: 2)
    - ``kwargs`` - are passed to the SR constructor

    TESTS::

        sage: from sage.crypto.mq.sr import test_consistency
        sage: test_consistency(1)  # long time -- calling w/ max_n = 2 requires a LOT of RAM (>> 2GB, evidently).  Calling w/ max_n = 1 is far more manageable.
        True

    The above doctest used to fail on a machine with "only" 2GB RAM.
    Using ``max_n = 1`` appears to be a more reasonable memory usage.
    """
```

Now doctest the same directory in parallel using two threads:

```
[mvngu@mod sage-4.1.1]$ ./sage -tp 2 devel/sage-main/sage/crypto/
Global iterations: 1
File iterations: 1
Using cached timings to run longest doctests first.
Doctesting 17 files doing 2 jobs in parallel
sage -t devel/sage-main/sage/crypto/lfsr.py      [2.7 s]
sage -t devel/sage-main/sage/crypto/cryptosystem.py      [2.0 s]
sage -t devel/sage-main/sage/crypto/mq/mpolynomialsystem.py      [9.4 s]
sage -t devel/sage-main/sage/crypto/mq/sr.py      [5.2 s]
sage -t devel/sage-main/sage/crypto/classical.py      [2.8 s]
sage -t devel/sage-main/sage/crypto/mq/sbox.py      [3.2 s]
sage -t devel/sage-main/sage/crypto/block_cipher/miniaes.py      [2.6 s]
sage -t devel/sage-main/sage/crypto/stream_cipher.py      [2.0 s]
sage -t devel/sage-main/sage/crypto/mq/mpolynomialsystemgenerator.py      [2.0 s]
sage -t devel/sage-main/sage/crypto/classical_cipher.py      [2.1 s]
sage -t devel/sage-main/sage/crypto/cipher.py      [2.1 s]
sage -t devel/sage-main/sage/crypto/__init__.py      [0.1 s]
sage -t devel/sage-main/sage/crypto/block_cipher/__init__.py      [0.1 s]
sage -t devel/sage-main/sage/crypto/mq/__init__.py      [0.1 s]
sage -t devel/sage-main/sage/crypto/block_cipher/all.py      [0.1 s]
sage -t devel/sage-main/sage/crypto/stream.py      [2.0 s]
sage -t devel/sage-main/sage/crypto/all.py      [0.1 s]
----------------------------------------------------------------------
All tests passed!
Timings have been updated.
Total time for all tests: 19.3 seconds
[mvngu@mod sage-4.1.1]$ ./sage -tp 2 -long devel/sage-main/sage/crypto/
Global iterations: 1
File iterations: 1
No long cached timings exist; will create upon successful finish.
Doctesting 17 files doing 2 jobs in parallel
sage -t -long devel/sage-main/sage/crypto/cryptosystem.py      [2.7 s]
sage -t -long devel/sage-main/sage/crypto/lfsr.py      [2.7 s]
sage -t -long devel/sage-main/sage/crypto/stream_cipher.py      [2.2 s]
sage -t -long devel/sage-main/sage/crypto/all.py      [0.1 s]
sage -t -long devel/sage-main/sage/crypto/classical.py      [3.0 s]
sage -t -long devel/sage-main/sage/crypto/__init__.py      [0.1 s]
sage -t -long devel/sage-main/sage/crypto/stream.py      [2.1 s]
sage -t -long devel/sage-main/sage/crypto/classical_cipher.py      [2.1 s]
sage -t -long devel/sage-main/sage/crypto/cipher.py      [2.1 s]
sage -t -long devel/sage-main/sage/crypto/block_cipher/all.py      [0.1 s]
sage -t -long devel/sage-main/sage/crypto/block_cipher/__init__.py      [0.1 s]
sage -t -long devel/sage-main/sage/crypto/block_cipher/miniaes.py      [2.8 s]
sage -t -long devel/sage-main/sage/crypto/mq/mpolynomialsystemgenerator.py      [2.0 s]
sage -t -long devel/sage-main/sage/crypto/mq/__init__.py      [0.1 s]
sage -t -long devel/sage-main/sage/crypto/mq/sbox.py      [3.1 s]
sage -t -long devel/sage-main/sage/crypto/mq/mpolynomialsystem.py      [9.1 s]
sage -t -long devel/sage-main/sage/crypto/mq/sr.py      [56.0 s]
----------------------------------------------------------------------
All tests passed!
Timings have been updated.
Total time for all tests: 71.8 seconds
```

As the number of threads increases, the total testing time decreases. To minimize confusion, it’s also a good idea to explicitly specify the path name of the directory you want to doctest and not a symbolic link to that directory. In the above examples, the symbolic link devel/sage points to the directory devel/sage-main, but the actual path to the directory has been specified instead of its symbolic link.
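Resolving a symbolic link to its actual path can be done with os.path.realpath; the sketch below mimics the devel/sage → devel/sage-main layout with temporary directories (the paths are hypothetical):

```python
# Resolve a symbolic link to the directory it points at, as recommended
# above before doctesting. Mimics devel/sage -> devel/sage-main.
import os
import tempfile

root = tempfile.mkdtemp()
main = os.path.join(root, "sage-main")
os.mkdir(main)
link = os.path.join(root, "sage")
os.symlink(main, link)          # devel/sage -> devel/sage-main
# realpath() follows the link (and any links in the prefix), so both
# names resolve to the same actual directory.
print(os.path.realpath(link) == os.path.realpath(main))  # → True
```

Passing the resolved path to the doctester keeps the test logs unambiguous about which tree was actually tested.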

**Parallel testing the whole Sage library**

The main Sage library resides in the directory devel/sage-main/. You can use the syntax described above to doctest the main library using multiple threads. When doing release management or patching the main Sage library, I would parallel test the library using ten threads:

```
[mvngu@mod sage-4.1.1]$ ./sage -tp 10 -long devel/sage-main/
```

Another way is to edit the file makefile in the top level Sage directory so that the variable NUM_THREADS is set to 10:

```
# How many threads should be used when doing parallel testing (and
# sometime in the future, parallel building)?
NUM_THREADS=10
```

After saving all changes to makefile, you can parallel test with the -long option using ten threads:

```
[mvngu@mod sage-4.1.1]$ make ptestlong
```

Any of the following commands would also doctest the Sage library or one of its clones:

```
make test
make check
make testlong
make ptest
make ptestlong
```

In each case, testing is performed on the directory that is pointed to by the symbolic link devel/sage.

**Updates** 2010-01-08 — This post is now in the Sage Developers’ Guide.

## Sage 4.1.1 released

Sage 4.1.1 was released on August 14th, 2009. For the official, comprehensive release note, please refer to sage-4.1.1.txt. The following points are some of the foci of this release:

- Improved data conversion between NumPy and Sage
- Solaris support, Solaris specific bug fixes for NTL, zn_poly, Pari/GP, FLINT, MPFR, PolyBoRI, ATLAS
- Upgrade/updates for about 8 standard packages
- Three new optional packages: openopt, GLPK, p_group_cohomology

In this release, 13 people made their first contribution:

- Adam Webb
- Anders Claesson
- Andrew Mathas
- Dag Sverre Seljebotn
- Evan Fosmark
- Jens Rasch
- Nathann Cohen
- Peter McNamara
- Simon Morris
- Steven Hartke
- Taylor Sutton
- Tim Dumol
- Vincent Delecroix

We closed 165 tickets, details of which are available on the Sage trac server. Among these tickets is the long-standing #111, which was closed thanks to the work of Alex Ghitza and David Loeffler. Thus, ticket #111 is our lowest ticket winner for this release. With the merging of ticket #877, there is a change in the way docstring coverage is counted. Previously, the docstring coverage script also counted functions that are local to other functions. Counted that way, the docstring coverage for Sage 4.1 is:

- Overall weighted coverage score: 77.8%
- Total number of functions: 22398

With ticket #877, nested functions or functions local to other functions are no longer counted towards the docstring coverage. This results in a reduced number of functions, hence we have the following statistics for Sage 4.1 as a result of #877:

- Overall weighted coverage score: 78.3%
- Total number of functions: 22210

Using the docstring coverage technique from ticket #877, in Sage 4.1.1 we increased coverage by 0.3%, while adding 120 functions:

- Overall weighted coverage score: 78.6%
- Total number of functions: 22330
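The change in what gets counted can be sketched with Python’s ast module (my own toy illustration of the #877 rule; the actual Sage coverage script works differently):

```python
# Count docstring coverage while skipping nested functions, as the
# coverage script does after #877. The sample source is made up.
import ast

SRC = '''
def documented():
    "I have a docstring."

def undocumented():
    def inner():
        "nested: no longer counted after #877"
'''

def functions(node, nested=False, out=None):
    """Collect FunctionDef nodes, skipping ones nested in functions."""
    if out is None:
        out = []
    for child in ast.iter_child_nodes(node):
        if isinstance(child, ast.FunctionDef):
            if not nested:              # the #877 rule
                out.append(child)
            functions(child, nested=True, out=out)
        else:
            functions(child, nested, out)
    return out

funcs = functions(ast.parse(SRC))
covered = sum(1 for f in funcs if ast.get_docstring(f))
print("%d of %d functions documented" % (covered, len(funcs)))  # → 1 of 2
```

Dropping nested functions shrinks the denominator, which is why the total function count fell from 22398 to 22210 while the percentage rose.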

**Known Issues**

The standard package cliquer doesn’t build under some 64-bit platforms. There are reports of cliquer failing to compile under 64-bit Fedora 10 (see ticket #6746) and 64-bit Intel Mac running OS X 10.5 (see ticket #6681). We provide the 64-bit binary

- sage-4.1.1-OSX10.5-intel-64bit-i386-Darwin.dmg

on the Mac OS X binary download page. However, users should note that the README.txt file in that binary contains a warning about cliquer. In the above binary, the cliquer import statement in sage/graphs/all.py has been commented out like so

```
#from sage.graphs.cliquer import *
```

as a work-around to allow Sage to start up. In the meantime, users of 64-bit OS X won’t have access to the functionality of the cliquer spkg until ticket #6681 has been resolved.

Here is a summary of main features in this release, categorized under various headings:

**Algebra**

- Adds method __nonzero__() to abelian groups (Taylor Sutton) #6510 — New method __nonzero__() for the class AbelianGroup_class in sage/groups/abelian_gps/abelian_group.py. This method returns True if the abelian group in question is non-trivial:
```
sage: E = EllipticCurve([0, 82])
sage: T = E.torsion_subgroup()
sage: bool(T)
False
sage: T.__nonzero__()
False
```
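The idea, restated in plain Python (a toy class of my own, not Sage’s AbelianGroup_class; the invariant-factor representation is an assumption for the sketch):

```python
# A group is "truthy" iff it is non-trivial: the #6510 idea, spelled
# __nonzero__ in Python 2 and __bool__ in Python 3.
class ToyAbelianGroup(object):
    def __init__(self, invariants):
        self.invariants = invariants     # e.g. [2, 4]; [] = trivial group

    def order(self):
        n = 1
        for k in self.invariants:
            n *= k
        return n

    def __bool__(self):
        return self.order() > 1

    __nonzero__ = __bool__               # Python 2 spelling used in the ticket

print(bool(ToyAbelianGroup([])))      # → False (trivial, like T above)
print(bool(ToyAbelianGroup([2, 4])))  # → True
```

This is why `bool(T)` above returns False: the torsion subgroup in that example is trivial.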

**Basic Arithmetic**

- Implement real_part() and imag_part() for CDF and CC (Alex Ghitza) #6159 — The name real_part is now an alias to the method real(); similarly, imag_part is now an alias to the method imag().
```
sage: a = CDF(3, -2)
sage: a.real()
3.0
sage: a.real_part()
3.0
sage: a.imag()
-2.0
sage: a.imag_part()
-2.0
sage: i = ComplexField(100).0
sage: z = 2 + 3*i
sage: z.real()
2.0000000000000000000000000000
sage: z.real_part()
2.0000000000000000000000000000
sage: z.imag()
3.0000000000000000000000000000
sage: z.imag_part()
3.0000000000000000000000000000
```

- Efficient summing using balanced sum (Jason Grout, Mike Hansen) #2737 — New function balanced_sum() in the module sage/misc/misc_c.pyx for summing the elements in a list. In some cases, balanced_sum() is more efficient than the built-in Python sum() function, with speedups ranging from 26x up to 1410x. The following timing statistics were obtained using the machine sage.math:
```
sage: R.<x,y> = QQ["x,y"]
sage: L = [x^i for i in xrange(1000)]
sage: %time sum(L);
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
sage: %time balanced_sum(L);
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
sage: %timeit sum(L);
100 loops, best of 3: 8.66 ms per loop
sage: %timeit balanced_sum(L);
1000 loops, best of 3: 324 µs per loop
sage: L = [[i] for i in xrange(10e4)]
sage: %time sum(L, []);
CPU times: user 84.61 s, sys: 0.00 s, total: 84.61 s
Wall time: 84.61 s
sage: %time balanced_sum(L, []);
CPU times: user 0.06 s, sys: 0.00 s, total: 0.06 s
Wall time: 0.06 s
```
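The divide-and-conquer idea behind balanced_sum() can be sketched in a few lines of plain Python (a toy version, not Sage’s Cython implementation):

```python
# Balanced (divide-and-conquer) summing: pair up partial sums instead
# of folding left-to-right. This avoids the quadratic blow-up seen with
# sum(L, []) above, where each addition copies an ever-growing operand.
def balanced_sum(seq, zero):
    if not seq:
        return zero
    if len(seq) == 1:
        return seq[0]
    mid = len(seq) // 2
    return balanced_sum(seq[:mid], zero) + balanced_sum(seq[mid:], zero)

print(balanced_sum([[1], [2], [3], [4]], []))  # → [1, 2, 3, 4]
print(balanced_sum(list(range(10)), 0))        # → 45
```

With list concatenation, the left fold copies O(n²) elements in total, while the balanced tree copies only O(n log n), which is where the large speedups come from.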

**Calculus**

- Wigner 3j, 6j, 9j, Clebsch-Gordan, Racah and Gaunt coefficients (Jens Rasch) #5996 — A collection of functions for exactly calculating Wigner 3j, 6j, 9j, Clebsch-Gordan, Racah as well as Gaunt coefficients. All these functions evaluate to a rational number times the square root of a rational number. These new functions are defined in the module sage/functions/wigner.py. Here are some examples on calculating the Wigner 3j, 6j, 9j symbols:
```
sage: wigner_3j(2, 6, 4, 0, 0, 0)
sqrt(5/143)
sage: wigner_3j(0.5, 0.5, 1, 0.5, -0.5, 0)
sqrt(1/6)
sage: wigner_6j(3,3,3,3,3,3)
-1/14
sage: wigner_6j(8,8,8,8,8,8)
-12219/965770
sage: wigner_9j(1,1,1, 1,1,1, 1,1,0, prec=64)  # ==1/18
0.0555555555555555555
sage: wigner_9j(15,15,15, 15,3,15, 15,18,10, prec=1000)*1.0
-0.0000778324615309539
```

The Clebsch-Gordan, Racah and Gaunt coefficients can be computed as follows:

```
sage: simplify(clebsch_gordan(3/2,1/2,2, 3/2,1/2,2))
1
sage: clebsch_gordan(1.5,0.5,1, 1.5,-0.5,1)
1/2*sqrt(3)
sage: racah(3,3,3,3,3,3)
-1/14
sage: gaunt(1,0,1,1,0,-1)
-1/2/sqrt(pi)
sage: gaunt(12,15,5,2,3,-5)
91/124062*sqrt(36890)/sqrt(pi)
sage: gaunt(1000,1000,1200,9,3,-12).n(64)
0.00689500421922113448
```

**Combinatorics**

- Optimize the words library code (Vincent Delecroix, Sébastien Labbé, Franco Saliola) #6519 — An enhancement of the words library code in sage/combinat/words to improve its efficiency and reorganize the code. The efficiency gain for creating small words can be up to 6x:
```
# BEFORE
sage: %timeit Word()
10000 loops, best of 3: 46.6 µs per loop
sage: %timeit Word("abbabaab")
10000 loops, best of 3: 62 µs per loop
sage: %timeit Word([0,1,1,0,1,0,0,1])
10000 loops, best of 3: 59.4 µs per loop

# AFTER
sage: %timeit Word()
100000 loops, best of 3: 6.85 µs per loop
sage: %timeit Word("abbabaab")
100000 loops, best of 3: 11.8 µs per loop
sage: %timeit Word([0,1,1,0,1,0,0,1])
100000 loops, best of 3: 10.6 µs per loop
```

For the creation of large words, the improvement can be from between 8000x up to 39000x:

```
# BEFORE
sage: t = words.ThueMorseWord()
sage: w = list(t[:1000000])
sage: %timeit Word(w)
10 loops, best of 3: 792 ms per loop
sage: u = "".join(map(str, list(t[:1000000])))
sage: %timeit Word(u)
10 loops, best of 3: 777 ms per loop
sage: %timeit Words("01")(u)
10 loops, best of 3: 748 ms per loop

# AFTER
sage: t = words.ThueMorseWord()
sage: w = list(t[:1000000])
sage: %timeit Word(w)
10000 loops, best of 3: 20.3 µs per loop
sage: u = "".join(map(str, list(t[:1000000])))
sage: %timeit Word(u)
10000 loops, best of 3: 21.9 µs per loop
sage: %timeit Words("01")(u)
10000 loops, best of 3: 84.3 µs per loop
```

All of the above timing statistics were obtained using the machine sage.math. Further timing comparisons can be found at the Sage wiki page.

- Improve the speed of Permutation.inverse() (Anders Claesson) #6621 — In some cases, the speed gain is up to 11x. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: p = Permutation([6, 7, 8, 9, 4, 2, 3, 1, 5])
sage: %timeit p.inverse()
10000 loops, best of 3: 67.1 µs per loop
sage: p = Permutation([19, 5, 13, 8, 7, 15, 9, 10, 16, 3, 12, 6, 2, 20, 18, 11, 14, 4, 17, 1])
sage: %timeit p.inverse()
1000 loops, best of 3: 240 µs per loop
sage: p = Permutation([14, 17, 1, 24, 16, 34, 19, 9, 20, 18, 36, 5, 22, 2, 27, 40, 37, 15, 3, 35, 10, 25, 21, 8, 13, 26, 12, 32, 23, 38, 11, 4, 6, 39, 31, 28, 29, 7, 30, 33])
sage: %timeit p.inverse()
1000 loops, best of 3: 857 µs per loop

# AFTER
sage: p = Permutation([6, 7, 8, 9, 4, 2, 3, 1, 5])
sage: %timeit p.inverse()
10000 loops, best of 3: 24.6 µs per loop
sage: p = Permutation([19, 5, 13, 8, 7, 15, 9, 10, 16, 3, 12, 6, 2, 20, 18, 11, 14, 4, 17, 1])
sage: %timeit p.inverse()
10000 loops, best of 3: 41.4 µs per loop
sage: p = Permutation([14, 17, 1, 24, 16, 34, 19, 9, 20, 18, 36, 5, 22, 2, 27, 40, 37, 15, 3, 35, 10, 25, 21, 8, 13, 26, 12, 32, 23, 38, 11, 4, 6, 39, 31, 28, 29, 7, 30, 33])
sage: %timeit p.inverse()
10000 loops, best of 3: 72.4 µs per loop
```
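Inverting a permutation is a linear-time operation, which is the kind of direct placement such a speedup relies on (a plain-Python sketch; I can’t confirm from the ticket text alone that this is exactly the algorithm merged in #6621):

```python
# Invert a permutation in one pass: if p sends position i to value v,
# the inverse sends position v back to i. Sage permutations are 1-based.
def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v - 1] = i + 1
    return inv

p = [6, 7, 8, 9, 4, 2, 3, 1, 5]
print(inverse(p))  # → [8, 6, 7, 5, 9, 1, 2, 3, 4]
```

Applying `inverse` twice returns the original permutation, a quick sanity check on the construction.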

- Updating some quirks in sage/combinat/partition.py (Andrew Mathas) #5790 — The functions r_core(), r_quotient(), k_core(), and partition_sign() are now deprecated. These are replaced with core(), quotient(), and sign() respectively. The rewrite of the function Partition() deprecated the argument core_and_quotient. The core and the quotient can be passed as keywords of Partition().
```
sage: Partition(core_and_quotient=([2,1], [[2,1],[3],[1,1,1]]))
/home/mvngu/.sage/temp/sage.math.washington.edu/9221/_home_mvngu__sage_init_sage_0.py:1: DeprecationWarning: "core_and_quotient=(*)" is deprecated. Use "core=[*], quotient=[*]" instead.
  # -*- coding: utf-8 -*-
[11, 5, 5, 3, 2, 2, 2]
sage: Partition(core=[2,1], quotient=[[2,1],[3],[1,1,1]])
[11, 5, 5, 3, 2, 2, 2]
sage: Partition([6,3,2,2]).r_quotient(3)
/home/mvngu/.sage/temp/sage.math.washington.edu/9221/_home_mvngu__sage_init_sage_0.py:1: DeprecationWarning: r_quotient is deprecated. Please use quotient instead.
  # -*- coding: utf-8 -*-
[[], [], [2, 1]]
sage: Partition([6,3,2,2]).quotient(3)
[[], [], [2, 1]]
sage: partition_sign([5,1,1,1,1,1])
/home/mvngu/.sage/temp/sage.math.washington.edu/9221/_home_mvngu__sage_init_sage_0.py:1: DeprecationWarning: partition_sign deprecated. Use Partition(pi).sign() instead
  # -*- coding: utf-8 -*-
1
sage: Partition([5,1,1,1,1,1]).sign()
1
```

**Cryptography**

- Improve S-box linear and differences matrices computation (Yann Laigle-Chapuy) #6454 — Speed up the functions difference_distribution_matrix() and linear_approximation_matrix() of the class SBox in the module sage/crypto/mq/sbox.py. The function linear_approximation_matrix() now uses the Walsh transform. The speed-up for difference_distribution_matrix() can be up to 277x, while that for linear_approximation_matrix() can be up to 132x. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: S = mq.SR(1,4,4,8).sbox()
sage: %time S.difference_distribution_matrix();
CPU times: user 77.73 s, sys: 0.00 s, total: 77.73 s
Wall time: 77.73 s
sage: %time S.linear_approximation_matrix();
CPU times: user 132.96 s, sys: 0.00 s, total: 132.96 s
Wall time: 132.96 s

# AFTER
sage: S = mq.SR(1,4,4,8).sbox()
sage: %time S.difference_distribution_matrix();
CPU times: user 0.28 s, sys: 0.01 s, total: 0.29 s
Wall time: 0.28 s
sage: %time S.linear_approximation_matrix();
CPU times: user 1.01 s, sys: 0.00 s, total: 1.01 s
Wall time: 1.01 s
```
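The Walsh-transform trick can be sketched in a few lines of plain Python (a simplified stand-alone illustration, not Sage's actual implementation): for each output mask b, the corresponding column of the linear approximation table is the Walsh–Hadamard transform of the sign sequence (-1)^(b·S(x)), computed with butterflies instead of a quadratic loop over all mask pairs.

```python
def walsh_hadamard(f):
    """Fast Walsh-Hadamard transform of a +/-1 sequence of length 2^n."""
    f = list(f)
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

def linear_approximation_matrix(sbox):
    """lat[a][b] = #{x : a.x == b.S(x)} - 2^(n-1), via the Walsh transform."""
    size = len(sbox)
    lat = [[0] * size for _ in range(size)]
    for b in range(size):
        # sign sequence (-1)^(parity of b & S(x))
        signs = [(-1) ** bin(b & sbox[x]).count("1") for x in range(size)]
        col = walsh_hadamard(signs)   # col[a] = sum_x (-1)^(a.x XOR b.S(x))
        for a in range(size):
            lat[a][b] = col[a] // 2
    return lat

# A toy 3-bit S-box: the identity map, so only the a == b approximations bias
lat = linear_approximation_matrix(list(range(8)))
```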

**Elliptic Curves**

- Allow the method integral_points() to handle elliptic curves with large ranks (John Cremona) #6381 — A rewrite of the method integral_x_coords_in_interval() in the class EllipticCurve_rational_field belonging to the module sage/schemes/elliptic_curves/ell_rational_field.py. The rewrite allows the method integral_points() to compute the integral points of elliptic curves with large ranks. For example, previously the following code would result in an OverflowError:
```
sage: D = 6611719866
sage: E = EllipticCurve([0, 0, 0, -D^2, 0])
sage: E.integral_points();
```

- Multiplication-by-n method on elliptic curve formal groups uses the double-and-add algorithm (Hamish Ivey-Law, Tom Boothby) #6407 — Previously, the method EllipticCurveFormalGroup.mult_by_n() was implemented by applying the group law to itself n times, although a faster algorithm was already used over fields of characteristic zero. The linear algorithm is now replaced with the logarithmic double-and-add algorithm, i.e. the additive version of the standard square-and-multiply algorithm. In some cases, the efficiency gain ranges from 3% up to 29%. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: F = EllipticCurve(GF(101), [1, 1]).formal_group()
sage: %time F.mult_by_n(100, 20);
CPU times: user 0.98 s, sys: 0.00 s, total: 0.98 s
Wall time: 0.98 s
sage: F = EllipticCurve("37a").formal_group()
sage: %time F.mult_by_n(1000000, 20);
CPU times: user 0.38 s, sys: 0.00 s, total: 0.38 s
Wall time: 0.38 s
sage: %time F.mult_by_n(100000000, 20);
CPU times: user 0.55 s, sys: 0.03 s, total: 0.58 s
Wall time: 0.58 s

# AFTER
sage: F = EllipticCurve(GF(101), [1, 1]).formal_group()
sage: %time F.mult_by_n(100, 20);
CPU times: user 0.96 s, sys: 0.00 s, total: 0.96 s
Wall time: 0.95 s
sage: F = EllipticCurve("37a").formal_group()
sage: %time F.mult_by_n(1000000, 20);
CPU times: user 0.44 s, sys: 0.01 s, total: 0.45 s
Wall time: 0.45 s
sage: %time F.mult_by_n(100000000, 20);
CPU times: user 0.40 s, sys: 0.01 s, total: 0.41 s
Wall time: 0.41 s
```
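Double-and-add is a generic technique: it works in any additive group given only "add" and "double" operations. A minimal sketch in plain Python (purely illustrative; the actual ticket operates on formal group laws, not integers):

```python
def mult_by_n(n, x, add, double, zero):
    """Compute n*x with O(log n) group operations.

    Additive analogue of square-and-multiply: scan the bits of n,
    doubling x at each step and adding it in when the bit is set.
    """
    result = zero
    while n > 0:
        if n & 1:
            result = add(result, x)
        x = double(x)
        n >>= 1
    return result

# Demonstrate in the additive group of integers mod 1000
add = lambda a, b: (a + b) % 1000
double = lambda a: (2 * a) % 1000
print(mult_by_n(123456789, 7, add, double, 0))   # same as (123456789 * 7) % 1000
```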

**Graphics**

- Plotting 3-D Bezier paths (Emily Kirkman) #6098 — New function bezier3d() for plotting a 3-dimensional Bezier path. Here are some examples for working with this function:
```
sage: bezier3d([[(0,0,0), (1,0,0), (0,1,0), (0,1,1)]]).show(zoom=1.2)
```

```
sage: path = [[(0,0,0),(.5,.1,.2),(.75,3,-1),(1,1,0)],[(.5,1,.2),(1,.5,0)],[(.7,.2,.5)]]
sage: bezier3d(path, color='green').show(zoom=1.2)
```

- Passing extra options to show() (Bill Cauchois) #5651 — Extra optional arguments to plotting functions can now be passed on to the function show(). This passing of optional arguments is implemented for the following plotting modules:
- sage/plot/arrow.py
- sage/plot/bar_chart.py
- sage/plot/bezier_path.py
- sage/plot/circle.py
- sage/plot/complex_plot.pyx
- sage/plot/contour_plot.py
- sage/plot/density_plot.py
- sage/plot/disk.py
- sage/plot/line.py
- sage/plot/matrix_plot.py
- sage/plot/plot.py
- sage/plot/plot_field.py
- sage/plot/point.py
- sage/plot/polygon.py
- sage/plot/scatter_plot.py
- sage/plot/text.py

Each of the following examples demonstrates equivalent code to obtain a plot:

```
sage: arrow((-2, 2), (7,1), frame=True)
sage: arrow((-2, 2), (7,1)).show(frame=True)
```

```
sage: bar_chart([-2,8,-7,3], rgbcolor=(1,0,0), axes=False)
sage: bar_chart([-2,8,-7,3], rgbcolor=(1,0,0)).show(axes=False)
```

```
sage: bezier_path([[(0,1),(.5,0),(1,1)]], fontsize=20)
sage: bezier_path([[(0,1),(.5,0),(1,1)]]).show(fontsize=20)
```

```
sage: complex_plot(lambda z: z, (-3, 3), (-3, 3), figsize=[5,5])
sage: complex_plot(lambda z: z, (-3, 3), (-3, 3)).show(figsize=[5,5])
```

**Graph Theory**

- Cliquer as a standard package (Nathann Cohen) #6355 — Cliquer is a set of C routines for finding cliques in an arbitrary weighted graph. It uses an exact branch-and-bound algorithm developed by Patric Östergård, with code mainly written by Sampo Niskanen, and is published under the GPL license. Here are some examples of working with the new cliquer spkg:
```
sage: max_clique(graphs.PetersenGraph())
[7, 9]
sage: all_max_clique(graphs.PetersenGraph())
[[2, 7], [7, 9], [6, 8], [6, 9], [0, 4], [4, 9], [5, 7], [0, 5], [5, 8], [3, 4], [2, 3], [3, 8], [1, 6], [0, 1], [1, 2]]
sage: clique_number(Graph("DJ{"))
4
sage: clique_number(Graph({0:[1,2,3], 1:[2], 3:[0,1]}))
3
sage: list_composition([1,3,'a'], {'a':'b', 1:2, 2:3, 3:4})
[2, 4, 'b']
```

- Faster algorithm to compute maximum cliques (Nathann Cohen) #5793 — With the inclusion of cliquer as a standard spkg, the following functions can now use the cliquer algorithm:
- Graph.max_clique() — Returns the vertex set of a maximum complete subgraph.
- Graph.cliques_maximum() — Returns a list of all maximum cliques, with each clique represented by a list of vertices. A clique is an induced complete subgraph and a maximal clique is one of maximal order.
- Graph.clique_number() — Returns the size of the largest clique of the graph.
- Graph.cliques_vertex_clique_number() — Returns a list of sizes of the largest maximal cliques containing each vertex. This returns a single value if there is only one input vertex.
- Graph.independent_set() — Returns a maximal independent set, which is a set of vertices which induces an empty subgraph.

These functions already existed in Sage. Cliquer does not bring any new features to Sage, but rather a huge efficiency improvement in computing clique numbers. The NetworkX 0.36 algorithm is very slow at these computations, although it remains faster than cliquer for Graph.cliques_vertex_clique_number(). The algorithms in the cliquer spkg scale very well as the number of vertices in a graph increases. Here is a comparison between the NetworkX 0.36 implementation and cliquer on computing the clique number of a graph. Timing statistics were obtained using the machine sage.math:

```
sage: g = graphs.RandomGNP(100, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 0.64 s, sys: 0.01 s, total: 0.65 s
Wall time: 0.65 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.02 s
sage: g = graphs.RandomGNP(200, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 9.68 s, sys: 0.01 s, total: 9.69 s
Wall time: 9.68 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 0.09 s, sys: 0.00 s, total: 0.09 s
Wall time: 0.09 s
sage: g = graphs.RandomGNP(300, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 69.98 s, sys: 0.10 s, total: 70.08 s
Wall time: 70.09 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 0.23 s, sys: 0.00 s, total: 0.23 s
Wall time: 0.23 s
sage: g = graphs.RandomGNP(400, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 299.32 s, sys: 0.29 s, total: 299.61 s
Wall time: 299.64 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 0.54 s, sys: 0.00 s, total: 0.54 s
Wall time: 0.53 s
sage: g = graphs.RandomGNP(500, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 1178.85 s, sys: 1.30 s, total: 1180.15 s
Wall time: 1180.16 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 1.09 s, sys: 0.00 s, total: 1.09 s
Wall time: 1.09 s
```
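For contrast with both implementations above, a naive brute-force clique-number computation looks like this (exponential in the number of vertices, so only feasible for tiny graphs; purely illustrative, not how either NetworkX or cliquer works):

```python
from itertools import combinations

def clique_number(vertices, edges):
    """Size of the largest clique, by checking all vertex subsets (brute force)."""
    edges = {frozenset(e) for e in edges}
    # Try subset sizes from largest to smallest; the first hit is the answer.
    for k in range(len(vertices), 0, -1):
        for subset in combinations(vertices, k):
            if all(frozenset(pair) in edges for pair in combinations(subset, 2)):
                return k
    return 0

# The same small graph as in the cliquer examples: Graph({0:[1,2,3], 1:[2], 3:[0,1]})
print(clique_number([0, 1, 2, 3], [(0,1), (0,2), (0,3), (1,2), (1,3)]))  # 3
```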

- Support the syntax g.add_edge((u,v), label=l) for C graphs (Robert Miller) #6540 — The following syntax is now supported. Note that the label keyword must be used; a bare second argument is interpreted as a vertex instead:
```
sage: G = Graph()
sage: G.add_edge((1,2), label="my label")
sage: G.edges()
[(1, 2, 'my label')]
sage: G = Graph()
sage: G.add_edge((1,2), "label")
sage: G.edges()
[((1, 2), 'label', None)]
```

- Fast subgraphs by building the graph instead of deleting things (Jason Grout) #6578 — Subgraphs can now be constructed by building a new graph from a number of vertices and edges. This is in contrast to the previous default algorithm where subgraphs were constructed by deleting edges and vertices. In some cases, the efficiency gain of the new subgraph construction implementation can be up to 17x. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: g = graphs.PathGraph(Integer(10e4))
sage: %time g.subgraph(range(20));
CPU times: user 1.89 s, sys: 0.03 s, total: 1.92 s
Wall time: 1.92 s
sage: g = graphs.PathGraph(Integer(10e4) * 5)
sage: %time g.subgraph(range(20));
CPU times: user 14.92 s, sys: 0.05 s, total: 14.97 s
Wall time: 14.97 s
sage: g = graphs.PathGraph(Integer(10e5))
sage: %time g.subgraph(range(20));
CPU times: user 47.77 s, sys: 0.29 s, total: 48.06 s
Wall time: 48.06 s

# AFTER
sage: g = graphs.PathGraph(Integer(10e4))
sage: %time g.subgraph(range(20));
CPU times: user 0.27 s, sys: 0.01 s, total: 0.28 s
Wall time: 0.28 s
sage: g = graphs.PathGraph(Integer(10e4) * 5)
sage: %time g.subgraph(range(20));
CPU times: user 1.34 s, sys: 0.03 s, total: 1.37 s
Wall time: 1.37 s
sage: g = graphs.PathGraph(Integer(10e5))
sage: %time g.subgraph(range(20));
CPU times: user 2.66 s, sys: 0.04 s, total: 2.70 s
Wall time: 2.70 s
```
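The idea is easy to sketch: rather than copying the whole graph and deleting everything unwanted, build the induced subgraph directly from the kept vertices and the edges among them, so the work is proportional to the subgraph rather than the original graph. A stand-alone illustration on adjacency dicts (a hypothetical helper, assuming simple undirected graphs):

```python
def induced_subgraph(adj, keep):
    """Build the subgraph induced on `keep` from an adjacency dict,
    touching only the kept vertices instead of deleting the rest."""
    keep = set(keep)
    return {v: [w for w in adj[v] if w in keep] for v in keep}

# Path graph 0-1-2-3-4; keep only the first three vertices
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(induced_subgraph(path, [0, 1, 2]))
```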

**Interfaces**

- Magma interface: make magma_colon_equals() mode work in both command line and notebook (William Stein) #6395 — Exposes the magma_colon_equals() option in the notebook. For example, one can now do the following in the notebook:
```
sage: magma._preparse_colon_equals = False
sage: magma._preparse('a = 5')
'a = 5'
sage: magma._preparse_colon_equals = True
sage: magma._preparse('a = 5')
'a := 5'
sage: magma._preparse('a = 5; b := 7; c =a+b;')
'a := 5; b := 7; c :=a+b;'
```

- Viewing a Sage object with view(object, viewer='pdf') (Nicolas M. Thiéry) #6591 — Typical uses include:
- you prefer your PDF browser to your DVI browser
- you want to view snippets which are not displayed well in DVI viewers or jsMath, e.g. images produced by tikzpicture.

**Linear Algebra**

- Make NumPy play nice with Sage types (Robert Bradshaw) #5081 — This improves data conversion between NumPy and Sage. For example, one can now do this:
```
sage: from scipy import stats
sage: stats.uniform(0,15).ppf([0.5,0.7])
array([ 7.5, 10.5])
```

And this:

```
sage: from scipy import *
sage: from pylab import *
sage: sample_rate = 1000.0
sage: t = r_[0:0.6:1/sample_rate]
sage: N = len(t)
sage: s = [sin(2*pi*50*elem) + sin(2*pi*70*elem + (pi/4)) for elem in t]
sage: S = fft(s)
sage: f = sample_rate*r_[0:(N/2)] / N
sage: n = len(f)
sage: line(zip(f, abs(S[0:n]) / N))
```

- Fast slicing of sparse matrices (Jason Grout) #6553 — The efficiency gain for slicing sparse matrices can range from 10x up to 147x. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: A = random_matrix(ZZ, 100000, density=0.00005, sparse=True)
sage: %time A[50000:,:50000];
CPU times: user 298.84 s, sys: 0.05 s, total: 298.89 s
Wall time: 298.95 s
sage: A = random_matrix(ZZ, 10000, density=0.00005, sparse=True)
sage: %time A[5000:,:5000];
CPU times: user 2.50 s, sys: 0.00 s, total: 2.50 s
Wall time: 2.50 s

# AFTER
sage: A = random_matrix(ZZ, 100000, density=0.00005, sparse=True)
sage: %time A[50000:,:50000];
CPU times: user 1.91 s, sys: 0.09 s, total: 2.00 s
Wall time: 2.02 s
sage: A = random_matrix(ZZ, 10000, density=0.00005, sparse=True)
sage: %time A[5000:,:5000];
CPU times: user 0.23 s, sys: 0.00 s, total: 0.23 s
Wall time: 0.24 s
```
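The speed-up comes from iterating over the stored nonzero entries rather than over every position of the slice. A minimal dictionary-of-keys sketch in plain Python (an illustration of the principle, not Sage's internal data structure):

```python
def slice_sparse(entries, row_range, col_range):
    """Slice a sparse matrix stored as {(i, j): value}.

    Cost is O(number of stored entries), independent of the
    (possibly huge) number of zero positions in the slice.
    """
    r0, r1 = row_range
    c0, c1 = col_range
    return {(i - r0, j - c0): v
            for (i, j), v in entries.items()
            if r0 <= i < r1 and c0 <= j < c1}

# A 100000 x 100000 matrix with just three stored entries
A = {(0, 0): 1, (70000, 2): 5, (99999, 49999): -2}
print(slice_sparse(A, (50000, 100000), (0, 50000)))
```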

- Plotting sparse matrices efficiently (Jason Grout) #6554 — Previously, plotting a sparse matrix first converted it to a dense matrix, so the whole process took ever longer as the dimensions of the matrix grew. When a matrix is sparse, the matrix plotting function now uses SciPy's sparse matrix functionality, which can handle large matrices. In some cases, the performance improvement ranges from 380x up to 98000x. The following timing statistics were obtained using the machine mod.math:
```
# BEFORE
sage: A = random_matrix(ZZ, 5000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 60.25 s, sys: 0.69 s, total: 60.94 s
Wall time: 60.94 s
sage: A = random_matrix(ZZ, 10000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 241.31 s, sys: 3.03 s, total: 244.34 s
Wall time: 244.35 s
sage: A = random_matrix(ZZ, 15000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 544.02 s, sys: 6.85 s, total: 550.87 s
Wall time: 550.86 s
sage: A = random_matrix(ZZ, 20000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 972.85 s, sys: 13.36 s, total: 986.21 s
Wall time: 986.21 s

# AFTER
sage: A = random_matrix(ZZ, 5000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.05 s, sys: 0.04 s, total: 0.09 s
Wall time: 0.16 s
sage: A = random_matrix(ZZ, 10000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
sage: A = random_matrix(ZZ, 15000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
sage: A = random_matrix(ZZ, 20000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
```

In Sage 4.1, the following would quickly consume gigabytes of RAM on a system and may result in a MemoryError:

```
sage: A = random_matrix(ZZ, 100000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.63 s, sys: 0.01 s, total: 0.64 s
Wall time: 0.63 s
sage: A = random_matrix(ZZ, 1000000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 1933.41 s, sys: 2.97 s, total: 1936.38 s
Wall time: 1937.31 s
```

- Elementwise (Hadamard) product of matrices (Rob Beezer) #6301 — Given matrices A and B of the same size, C = A.elementwise_product(B) creates the new matrix C, of the same size, with entries given by C[i,j] = A[i,j] * B[i,j]. The multiplication occurs in a ring defined by Sage’s coercion model, as appropriate for the base rings of A and B (or an error is raised if there is no sensible common ring). In particular, if A and B are defined over a noncommutative ring, the operation is also noncommutative. The implementation is different for dense matrices versus sparse matrices, but there are probably further optimizations available for specific rings. This operation is often called the Hadamard product. Here is an example on using elementwise matrix product:
```
sage: G = matrix(GF(3), 2, [0,1,2,2])
sage: H = matrix(ZZ, 2, [1,2,3,4])
sage: J = G.elementwise_product(H); J
[0 2]
[0 2]
sage: J.parent()
Full MatrixSpace of 2 by 2 dense matrices over Finite Field of size 3
```
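Stripped of Sage's coercion machinery, the elementwise product itself is a one-liner; a minimal plain-Python sketch over lists of lists (a hypothetical helper, not Sage's implementation):

```python
def elementwise_product(A, B):
    """Hadamard product: C[i][j] = A[i][j] * B[i][j] for A, B of equal size."""
    assert len(A) == len(B) and all(len(ra) == len(rb) for ra, rb in zip(A, B))
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# Mirroring the example above, but with plain integers (no GF(3) coercion);
# reducing the result mod 3 recovers the GF(3) answer [0 2; 0 2].
print(elementwise_product([[0, 1], [2, 2]], [[1, 2], [3, 4]]))  # [[0, 2], [6, 8]]
```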

**Modular Forms**

- Efficient implementation of index for Gamma(N) (Simon Morris) #6606 — The new implementation provides a massive speed-up for computing the index of the congruence subgroup Gamma(N). The following statistics were obtained using the machine mod.math:
```
# BEFORE
sage: %time [Gamma(n).index() for n in [1..19]];
CPU times: user 14369.53 s, sys: 75.18 s, total: 14444.71 s
Wall time: 14445.22 s
sage: %time Gamma(32041).index();  # hangs for hours

# AFTER
sage: %time [Gamma(n).index() for n in [1..19]];
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
sage: timeit("[Gamma(n).index() for n in [1..19]]")
125 loops, best of 3: 2.27 ms per loop
sage: %time Gamma(32041).index();
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
sage: %timeit Gamma(32041).index()
10000 loops, best of 3: 110 µs per loop
```
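One classical closed form that makes timings like these plausible is the index formula [SL(2,Z) : Gamma(N)] = N^3 * prod over primes p dividing N of (1 - 1/p^2), which is immediate to evaluate. A sketch in plain Python (assumption: that this standard formula is what a fast implementation evaluates; the ticket itself documents the details):

```python
from fractions import Fraction

def gamma_index(N):
    """Index of the principal congruence subgroup Gamma(N) in SL(2, Z):
    N^3 * product over prime divisors p of N of (1 - 1/p^2)."""
    if N == 1:
        return 1
    # Collect the distinct prime divisors of N by trial division.
    primes, m, p = set(), N, 2
    while m > 1:
        if m % p == 0:
            primes.add(p)
            while m % p == 0:
                m //= p
        p += 1
    index = Fraction(N ** 3)
    for q in primes:
        index *= Fraction(q * q - 1, q * q)
    return int(index)   # always exact: each q^2 divides N^2

print([gamma_index(n) for n in range(1, 8)])   # [1, 6, 24, 48, 120, 144, 336]
```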

- Weight 1 Eisenstein series (David Loeffler) #6071 — Add support for computing weight 1 Eisenstein series. Here’s an example:
```
sage: M = ModularForms(DirichletGroup(13).0, 1)
sage: M.eisenstein_series()
[
-1/13*zeta12^3 + 6/13*zeta12^2 + 4/13*zeta12 + 2/13 + q + (zeta12 + 1)*q^2 + zeta12^2*q^3 + (zeta12^2 + zeta12 + 1)*q^4 + (-zeta12^3 + 1)*q^5 + O(q^6)
]
```

- Efficient calculation of Jacobi sums (David Loeffler) #6534 — For small primes, calculating Jacobi sums directly from the definition can be up to 624x faster. The following timing statistics were obtained using the machine mod.math:
```
# BEFORE
sage: chi = DirichletGroup(67).0
sage: psi = chi**3
sage: time chi.jacobi_sum(psi);
CPU times: user 6.24 s, sys: 0.00 s, total: 6.24 s
Wall time: 6.24 s

# AFTER
sage: chi = DirichletGroup(67).0
sage: psi = chi**3
sage: time chi.jacobi_sum(psi);
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
```

There is also support for computing a Jacobi sum with values in a finite field:

```
sage: g = DirichletGroup(17, GF(9,'a')).0
sage: g.jacobi_sum(g**2)
2*a
```
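For a feel for what "the definition directly" means: J(chi, psi) = sum over a mod p of chi(a) * psi(1 - a), with the convention chi(0) = 0. A minimal sketch using the quadratic (Legendre) character, which is real-valued so no cyclotomic arithmetic is needed (purely illustrative, not Sage's implementation):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def jacobi_sum(p):
    """J(chi, chi) for chi the quadratic character mod p, straight from the definition."""
    return sum(legendre(a, p) * legendre(1 - a, p) for a in range(p))

print(jacobi_sum(5))   # -1, consistent with the identity J(chi, chi^-1) = -chi(-1)
```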

**Notebook**

- Display docstrings in the notebook using HTML and jsMath (Tom Boothby, Evan Fosmark, John Palmieri, Mitesh Patel) #5653 — Docstrings viewed in the notebook are now rendered using HTML and jsMath. For example, if one enters any of the following
```
identity_matrix([TAB]
identity_matrix?[SHIFT-RETURN]
identity_matrix?[TAB]
```

then the docstring for the function identity_matrix() would be presented as in this figure:

**Number Theory**

- Intersection of ideals in a number field (David Loeffler) #6457 — New method intersection() of the class NumberFieldIdeal in the module sage/rings/number_field/number_field_ideal.py for computing the intersection of ideals in a number field. Here are some examples of using intersection():
```
sage: K.<a> = QuadraticField(-11)
sage: p = K.ideal((a + 1)/2); q = K.ideal((a + 3)/2)
sage: p.intersection(q) == q.intersection(p) == K.ideal(a-2)
True

sage: # An example with non-principal ideals
sage: L.<a> = NumberField(x^3 - 7)
sage: p = L.ideal(a^2 + a + 1, 2)
sage: q = L.ideal(a+1)
sage: p.intersection(q) == L.ideal(8, 2*a + 2)
True

sage: # A relative example
sage: L.<a,b> = NumberField([x^2 + 11, x^2 - 5])
sage: A = L.ideal([15, (-3/2*b + 7/2)*a - 8])
sage: B = L.ideal([6, (-1/2*b + 1)*a - b - 5/2])
sage: A.intersection(B) == L.ideal(-1/2*a - 3/2*b - 1)
True
```

- Computing Heegner points (Robert Bradshaw) #6045 — Adds a Heegner point method to elliptic curves over the rationals. The new method is heegner_point(), which can be found in the class EllipticCurve_rational_field of the module sage/schemes/elliptic_curves/ell_rational_field.py. Here are some examples of using heegner_point():
```
sage: E = EllipticCurve('37a')
sage: E.heegner_discriminants_list(10)
[-7, -11, -40, -47, -67, -71, -83, -84, -95, -104]
sage: E.heegner_point(-7)
(0 : 0 : 1)
sage: P = E.heegner_point(-40); P
(a : -a + 1 : 1)
sage: P = E.heegner_point(-47); P
(a : -a^4 - a : 1)
sage: P[0].parent()
Number Field in a with defining polynomial x^5 - x^4 + x^3 + x^2 - 2*x + 1

sage: # Working out the details manually
sage: P = E.heegner_point(-47, prec=200)
sage: f = algdep(P[0], 5); f
x^5 - x^4 + x^3 + x^2 - 2*x + 1
sage: f.discriminant().factor()
47^2
```

- Support primes_of_degree_one() for relative extensions (David Loeffler) #6396 — For example, one can now do this:
```
sage: N.<a,b> = NumberField([x^2 + 1, x^2 - 5])
sage: ids = N.primes_of_degree_one_list(10); ids
[Fractional ideal (2*a + 1/2*b - 1/2), Fractional ideal ((-1/2*b - 1/2)*a - b), Fractional ideal (b*a + 1/2*b + 3/2), Fractional ideal ((-1/2*b - 3/2)*a + b - 1), Fractional ideal ((-b + 1)*a + b), Fractional ideal (3*a + 1/2*b - 1/2), Fractional ideal ((1/2*b - 1/2)*a + 3/2*b - 1/2), Fractional ideal ((-1/2*b - 5/2)*a - b + 1), Fractional ideal (2*a - 3/2*b - 1/2), Fractional ideal (3*a + 1/2*b + 5/2)]
sage: [x.absolute_norm() for x in ids]
[29, 41, 61, 89, 101, 109, 149, 181, 229, 241]
sage: ids[9] == N.ideal(3*a + 1/2*b + 5/2)
True
```

- Inverse modulo an ideal in a relative number field (David Loeffler) #6458 — Support for computing the inverse modulo an ideal in the ring of integers of a relative field. Here’s an example:
```
sage: K.<a,b> = NumberField([x^2 + 1, x^2 - 3])
sage: I = K.ideal(17*b - 3*a)
sage: x = I.integral_basis(); x
[438, -b*a + 309, 219*a - 219*b, 156*a - 154*b]
sage: V, _, phi = K.absolute_vector_space()
sage: V.span([phi(u) for u in x], ZZ) == I.free_module()
True
```

**Numerical**

- Further NumPy type conversions (Robert Bradshaw, Jason Grout) #6506 — Improved handling of integer, rational, and complex values between NumPy and Sage. Here are some examples:
```
sage: import numpy
sage: numpy.array([1.0, 2.5j])
array([ 1.+0.j ,  0.+2.5j])
sage: numpy.array([1.0, 2.5j]).dtype
dtype('complex128')
sage: numpy.array([1.000000000000000000000000000000000000j]).dtype
dtype('object')
sage: numpy.array([1, 2, 3])
array([1, 2, 3])
sage: numpy.array([1, 2, 3]).dtype
dtype('int64')
sage: numpy.array(2**40).dtype
dtype('int64')
sage: numpy.array(2**400).dtype
dtype('object')
sage: numpy.array([1,2,3,0.1]).dtype
dtype('float64')
sage: numpy.array(QQ(2**40)).dtype
dtype('int64')
sage: numpy.array(QQ(2**400)).dtype
dtype('object')
sage: numpy.array([1, 1/2, 3/4])
array([ 1.  ,  0.5 ,  0.75])
```

**Packages**

- Update the ATLAS spkg to version 3.8.3.p7 (David Kirkby, Minh Van Nguyen) #6558 #6738 — This adds support for building ATLAS under Solaris on a Sun SPARC processor.
- Update the NTL spkg to version 5.4.2.p9 (David Kirkby) #6380 — This adds support for building NTL under Solaris on a Sun SPARC processor.
- Update the zn_poly spkg to version 0.9.p1 (David Kirkby) #6443 — This adds support for building zn_poly under Solaris on a Sun SPARC processor.
- Update the Pari/GP spkg to version 2.3.3.p1 (David Kirkby) #6445 — This adds support for building Pari under Solaris on a Sun SPARC processor.
- Update the FLINT spkg to version 1.3.0.p2 (David Kirkby) #6451 — This fixes a Solaris specific bug in FLINT.
- Update the MPFR spkg to version 2.4.1.p0 (Paul Zimmermann, David Kirkby) #6453 — This fixes a number of test failures under Solaris on a Sun SPARC processor.
- Update the PolyBoRi spkg to version 0.5rc.p9 (David Kirkby) #6528 — This fixes a Solaris specific bug in compiling PolyBoRi with the Sun compiler.
- Upgrade tinyMCE to version 3.2.4.1 upstream release (Jason Grout) #6143 — This version of tinyMCE has many fixes for Safari and a greatly improved paste-from-word functionality.
- Upgrade Cython to the latest upstream release, version 0.11.2.1 (Robert Bradshaw) #6438.
- Update the NumPy spkg to version 1.3.0.p1 (William Stein) #6493 — This fixes a bug in compiling NumPy under 64-bit OS X.
- Upgrade Singular to version singular-3-1-0-2-20090620.p0 (David Kirkby) #6563 — This fixes a bug when compiling Singular under Solaris on a Sun SPARC processor.
- New optional spkg GLPK version 4.38 (Nathann Cohen) #6602 — GLPK is a linear program solver that can also solve mixed integer programs.
- New optional spkg OpenOpt version 0.24 (William Stein) #6302 — OpenOpt is a numerical optimization package with various solvers.
- New optional package p_group_cohomology version 1.0.2 (Simon A. King, David J. Green) #6491 — The package p_group_cohomology can compute the cohomology ring of a group with coefficients in a finite field of order p. Its features include:
- Compute the cohomology ring with coefficients in GF(p) for any finite p-group, in terms of a minimal generating set and a minimal set of algebraic relations. Benson's criterion is used to prove the completeness of the ring structure.
- Compute the depth, dimension, Poincaré series, and a-invariants of the cohomology rings.
- Compute the nil radical.
- Construct induced homomorphisms.
- The package includes a list of cohomology rings for all groups of order 64.
- With the package, the cohomology for all groups of order 128 and for the Sylow 2-subgroup of the third Conway group (order 1024) was computed for the first time. The result of these and many other computations (e.g., all but 6 groups of order 243) is accessible in a repository on sage.math.

Here are some examples. First, data that are included with the package:

```
sage: from pGroupCohomology import CohomologyRing
sage: H = CohomologyRing(64,132)  # this is included in the package, hence, the ring structure is already there
sage: print H
Cohomology ring of Small Group number 132 of order 64 with coefficients in GF(2)
Computation complete
Minimal list of generators:
[a_2_4, a 2-Cochain in H^*(SmallGroup(64,132); GF(2)),
 c_2_5, a 2-Cochain in H^*(SmallGroup(64,132); GF(2)),
 c_4_12, a 4-Cochain in H^*(SmallGroup(64,132); GF(2)),
 a_1_0, a 1-Cochain in H^*(SmallGroup(64,132); GF(2)),
 a_1_1, a 1-Cochain in H^*(SmallGroup(64,132); GF(2)),
 b_1_2, a 1-Cochain in H^*(SmallGroup(64,132); GF(2))]
Minimal list of algebraic relations:
[a_1_0*a_1_1,
 a_1_0*b_1_2,
 a_1_1^3+a_1_0^3,
 a_2_4*a_1_0,
 a_1_1^2*b_1_2^2+a_2_4*a_1_1*b_1_2+a_2_4^2+c_2_5*a_1_1^2]
sage: H.depth()
2
sage: H.a_invariants()
[-Infinity, -Infinity, -3, -3]
sage: H.poincare_series()
(-t^2 - t - 1)/(t^6 - 2*t^5 + t^4 - t^2 + 2*t - 1)
sage: H.nil_radical()
a_1_0, a_1_1, a_2_4
```

Data from the repository on sage.math:

```
sage: H = CohomologyRing(128,562)  # if there is internet connection, the ring data are downloaded behind the scenes
sage: len(H.gens())
35
sage: len(H.rels())
486
sage: H.depth()
1
sage: H.a_invariants()
[-Infinity, -4, -3, -3]
sage: H.poincare_series()
(t^14 - 2*t^13 + 2*t^12 - t^11 - t^10 + t^9 - 2*t^8 + 2*t^7 - 2*t^6 + 2*t^5 - 2*t^4 + t^3 - t^2 - 1)/(t^17 - 3*t^16 + 4*t^15 - 4*t^14 + 4*t^13 - 4*t^12 + 4*t^11 - 4*t^10 + 4*t^9 - 4*t^8 + 4*t^7 - 4*t^6 + 4*t^5 - 4*t^4 + 4*t^3 - 4*t^2 + 3*t - 1)
```

Some computations from scratch, involving different ring presentations and induced maps:

```
sage: tmp_root = tmp_filename()
sage: CohomologyRing.set_user_db(tmp_root)
sage: H0 = CohomologyRing.user_db(8,3,websource=False)
sage: print H0
Cohomology ring of Dihedral group of order 8 with coefficients in GF(2)
Computed up to degree 0
Minimal list of generators:
[]
Minimal list of algebraic relations:
[]
sage: H0.make()
sage: print H0
Cohomology ring of Dihedral group of order 8 with coefficients in GF(2)
Computation complete
Minimal list of generators:
[c_2_2, a 2-Cochain in H^*(D8; GF(2)),
 b_1_0, a 1-Cochain in H^*(D8; GF(2)),
 b_1_1, a 1-Cochain in H^*(D8; GF(2))]
Minimal list of algebraic relations:
[b_1_0*b_1_1]
sage: G = gap('DihedralGroup(8)')
sage: H1 = CohomologyRing.user_db(G,GroupName='GapD8',websource=False)
sage: H1.make()
sage: print H1  # the ring presentation is different ...
Cohomology ring of GapD8 with coefficients in GF(2)
Computation complete
Minimal list of generators:
[c_2_2, a 2-Cochain in H^*(GapD8; GF(2)),
 b_1_0, a 1-Cochain in H^*(GapD8; GF(2)),
 b_1_1, a 1-Cochain in H^*(GapD8; GF(2))]
Minimal list of algebraic relations:
[b_1_1^2+b_1_0*b_1_1]
sage: phi = G.IsomorphismGroups(H0.group())
sage: phi_star = H0.hom(phi,H1)
sage: phi_star_inv = phi_star^(-1)  # ... but the rings are isomorphic
sage: [X==phi_star_inv(phi_star(X)) for X in H0.gens()]
[True, True, True, True]
sage: [X==phi_star(phi_star_inv(X)) for X in H1.gens()]
[True, True, True, True]
```

An example with an odd prime:

```
sage: H = CohomologyRing(81,8)  # this needs to be computed from scratch
sage: H.make()
sage: H.gens()
[1,
 a_2_1, a 2-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_2_2, a 2-Cochain in H^*(SmallGroup(81,8); GF(3)),
 b_2_0, a 2-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_4_1, a 4-Cochain in H^*(SmallGroup(81,8); GF(3)),
 b_4_2, a 4-Cochain in H^*(SmallGroup(81,8); GF(3)),
 b_6_3, a 6-Cochain in H^*(SmallGroup(81,8); GF(3)),
 c_6_4, a 6-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_1_0, a 1-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_1_1, a 1-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_3_2, a 3-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_5_2, a 5-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_5_3, a 5-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_7_5, a 7-Cochain in H^*(SmallGroup(81,8); GF(3))]
sage: len(H.rels())
59
sage: H.depth()
1
sage: H.a_invariants()
[-Infinity, -3, -2]
sage: H.poincare_series()
(t^4 - t^3 + t^2 + 1)/(t^6 - 2*t^5 + 2*t^4 - 2*t^3 + 2*t^2 - 2*t + 1)
sage: H.nil_radical()
a_1_0, a_1_1, a_2_1, a_2_2, a_3_2, a_4_1, a_5_2, a_5_3, b_2_0*b_4_2, a_7_5, b_2_0*b_6_3, b_6_3^2+b_4_2^3
```

Rob Beezer, Nathann Cohen, John Cremona, Simon King, Sébastien Labbé, and Jens Rasch contributed to writing this release tour. A big thank you to all the Sage bug report/patch authors who made my life as a release tour author easier through your comprehensive and concise documentation. There are too many to list here; you know who you are. A release tour can also be found on the Sage wiki.

## Sage 4.1 released

Sage 4.1 was released on July 09, 2009. For the official, comprehensive release note, please refer to sage-4.1.txt. The following points are some of the foci of this release:

- Upgrade to the Python 2.6.x series
- Support for building Singular with GCC 4.4
- FreeBSD support for the following packages: FreeType, gd, libgcrypt, libgpg-error, Linbox, NTL, Readline, Tachyon
- Combinatorics: irreducible matrix representations of symmetric groups; and Yang-Baxter Graphs
- Cryptography: Mini Advanced Encryption Standard for educational purposes
- Graph theory: a backend for graph theory using Cython (c_graph); and improve accuracy of graph eigenvalues
- Linear algebra: a general package for finitely generated, not-necessarily free R-modules; and multiplicative order for matrices over finite fields
- Miscellaneous: optimized Sudoku solver; a decorator for declaring abstract methods; support Unicode in LaTeX cells (notebook); and optimized integer division
- Number theory: improved random element generation for number field orders and ideals; support Michael Stoll’s ratpoints package; and elliptic exponential
- Numerical: computing numerical values of constants using mpmath
- Update/upgrade 19 packages to latest upstream releases

The following people made their first contribution in Sage 4.1:

- Golam Mortuza Hossain
- Peter Jeremy
- Peter Mora
- Stanislav Bulygin

In this release, we closed 91 tickets and added two new components to the list of standard packages. These new spkg’s are mpmath for multiprecision floating-point arithmetic, and Ratpoints for computing rational points on hyperelliptic curves. This brings the total number of standard packages to 95. We increased doctest coverage by 0.3%, bringing the overall weighted doctest coverage score to 77.8%. A total of 216 functions were added; the total number of functions is now at 22398. For detailed information on what changed in this release, refer to trac.

Here is a summary of main features in this release, categorized under various headings:

**Algebraic Geometry**

- Construct an elliptic curve from a plane curve of genus one (Lloyd Kilford, John Cremona) — New function EllipticCurve_from_plane_curve() in the module sage/schemes/elliptic_curves/constructor.py to allow the construction of an elliptic curve from a smooth plane cubic with a rational point. Currently, this function uses Magma, so it will not work on machines that do not have Magma installed. Assuming Magma is installed, one can use EllipticCurve_from_plane_curve() to first check that the Fermat cubic is isomorphic to the curve with Cremona label "27a1":
```
sage: x, y, z = PolynomialRing(QQ, 3, 'xyz').gens()  # optional - magma
sage: C = Curve(x^3 + y^3 + z^3)  # optional - magma
sage: P = C(1, -1, 0)  # optional - magma
sage: E = EllipticCurve_from_plane_curve(C, P)  # optional - magma
sage: E  # optional - magma
Elliptic Curve defined by y^2 + y = x^3 - 7 over Rational Field
sage: E.label()  # optional - magma
'27a1'
```

Here is a quartic example:

    sage: u, v, w = PolynomialRing(QQ, 3, 'uvw').gens()  # optional - magma
    sage: C = Curve(u^4 + u^2*v^2 - w^4)  # optional - magma
    sage: P = C(1, 0, 1)  # optional - magma
    sage: E = EllipticCurve_from_plane_curve(C, P)  # optional - magma
    sage: E  # optional - magma
    Elliptic Curve defined by y^2 = x^3 + 4*x over Rational Field
    sage: E.label()  # optional - magma
    '32a1'

**Basic Arithmetic**

- Speed-up integer division (Robert Bradshaw) — In some cases, integer division is now up to 31% faster than previously. The following timing statistics were obtained using the machine sage.math:
    # BEFORE
    sage: a = next_prime(2**31)
    sage: b = Integers(a)(100)
    sage: %timeit a % b;
    1000000 loops, best of 3: 1.12 µs per loop
    sage: %timeit 101 // int(5);
    1000000 loops, best of 3: 215 ns per loop
    sage: %timeit 100 // int(-3)
    1000000 loops, best of 3: 214 ns per loop
    sage: a = ZZ.random_element(10**50)
    sage: b = ZZ.random_element(10**15)
    sage: %timeit a.quo_rem(b)
    1000000 loops, best of 3: 454 ns per loop
    # AFTER
    sage: a = next_prime(2**31)
    sage: b = Integers(a)(100)
    sage: %timeit a % b;
    1000000 loops, best of 3: 1.02 µs per loop
    sage: %timeit 101 // int(5);
    1000000 loops, best of 3: 201 ns per loop
    sage: %timeit 100 // int(-3)
    1000000 loops, best of 3: 194 ns per loop
    sage: a = ZZ.random_element(10**50)
    sage: b = ZZ.random_element(10**15)
    sage: %timeit a.quo_rem(b)
    1000000 loops, best of 3: 313 ns per loop

**Combinatorics**

- Irreducible matrix representations of symmetric groups (Franco Saliola) — Support for constructing irreducible representations of the symmetric group. This is based on Alain Lascoux’s article Young representations of the symmetric group. The following types of representations are supported:
- Specht representations — The matrices have integer entries:
    sage: chi = SymmetricGroupRepresentation([3, 2]); chi
    Specht representation of the symmetric group corresponding to [3, 2]
    sage: chi([5, 4, 3, 2, 1])
    [ 1 -1  0  1  0]
    [ 0  0 -1  0  1]
    [ 0  0  0 -1  1]
    [ 0  1 -1 -1  1]
    [ 0  1  0 -1  1]

- Young’s seminormal representation — The matrices have rational entries:
    sage: snorm = SymmetricGroupRepresentation([2, 1], "seminormal"); snorm
    Seminormal representation of the symmetric group corresponding to [2, 1]
    sage: snorm([1, 3, 2])
    [-1/2  3/2]
    [ 1/2  1/2]

- Young’s orthogonal representation (the matrices are orthogonal) — These matrices are defined over Sage’s SymbolicRing:
    sage: ortho = SymmetricGroupRepresentation([3, 2], "orthogonal"); ortho
    Orthogonal representation of the symmetric group corresponding to [3, 2]
    sage: ortho([1, 3, 2, 4, 5])
    [           1            0            0            0            0]
    [           0         -1/2 1/2*sqrt(3)            0            0]
    [           0  1/2*sqrt(3)          1/2            0            0]
    [           0            0            0         -1/2 1/2*sqrt(3)]
    [           0            0            0  1/2*sqrt(3)          1/2]

You can also create the CombinatorialClass of all irreducible matrix representations of a given symmetric group. Then particular representations can be created by providing partitions. For example:

    sage: chi = SymmetricGroupRepresentations(5); chi
    Specht representations of the symmetric group of order 5! over Integer Ring
    sage: chi([5])  # the trivial representation
    Specht representation of the symmetric group corresponding to [5]
    sage: chi([5])([2, 1, 3, 4, 5])
    [1]
    sage: chi([1, 1, 1, 1, 1])  # the sign representation
    Specht representation of the symmetric group corresponding to [1, 1, 1, 1, 1]
    sage: chi([1, 1, 1, 1, 1])([2, 1, 3, 4, 5])
    [-1]
    sage: chi([3, 2])
    Specht representation of the symmetric group corresponding to [3, 2]
    sage: chi([3, 2])([5, 4, 3, 2, 1])
    [ 1 -1  0  1  0]
    [ 0  0 -1  0  1]
    [ 0  0  0 -1  1]
    [ 0  1 -1 -1  1]
    [ 0  1  0 -1  1]

See the documentation of SymmetricGroupRepresentation and SymmetricGroupRepresentations for more information and examples.

- Yang-Baxter graphs (Franco Saliola) — Besides being used for constructing the irreducible matrix representations of the symmetric group, Yang-Baxter graphs can also be used to construct the Cayley graph of a finite group. For example:
    sage: def left_multiplication_by(g):
    ....:     return lambda h: h*g
    ....:
    sage: G = AlternatingGroup(4)
    sage: operators = [left_multiplication_by(gen) for gen in G.gens()]
    sage: Y = YangBaxterGraph(root=G.identity(), operators=operators); Y
    Yang-Baxter graph with root vertex ()
    sage: Y.plot(edge_labels=False)

Yang-Baxter graphs can also be used to construct the permutahedron:

    sage: from sage.combinat.yang_baxter_graph import SwapIncreasingOperator
    sage: operators = [SwapIncreasingOperator(i) for i in range(3)]
    sage: Y = YangBaxterGraph(root=(1,2,3,4), operators=operators); Y
    Yang-Baxter graph with root vertex (1, 2, 3, 4)
    sage: Y.plot()

See the documentation of YangBaxterGraph for more information and examples.

**Cryptography**

- Mini Advanced Encryption Standard for educational purposes (Minh Van Nguyen) — New module sage/crypto/block_cipher/miniaes.py to support the Mini Advanced Encryption Standard (Mini-AES), allowing students to explore the inner working of a block cipher. This is a simplified variant of the Advanced Encryption Standard (AES) to be used for cryptography education. Mini-AES is described in the paper:
- A. C.-W. Phan. Mini advanced encryption standard (mini-AES): a testbed for cryptanalysis students. Cryptologia, 26(4):283–306, 2002.

We can encrypt a plaintext using Mini-AES as follows:

    sage: from sage.crypto.block_cipher.miniaes import MiniAES
    sage: maes = MiniAES()
    sage: K = FiniteField(16, "x")
    sage: MS = MatrixSpace(K, 2, 2)
    sage: P = MS([K("x^3 + x"), K("x^2 + 1"), K("x^2 + x"), K("x^3 + x^2")]); P
    [  x^3 + x   x^2 + 1]
    [  x^2 + x x^3 + x^2]
    sage: key = MS([K("x^3 + x^2"), K("x^3 + x"), K("x^3 + x^2 + x"), K("x^2 + x + 1")]); key
    [    x^3 + x^2       x^3 + x]
    [x^3 + x^2 + x   x^2 + x + 1]
    sage: C = maes.encrypt(P, key); C
    [            x       x^2 + x]
    [x^3 + x^2 + x       x^3 + x]

Here is the decryption process:

    sage: plaintxt = maes.decrypt(C, key)
    sage: plaintxt == P
    True

We can also work directly with binary strings:

    sage: from sage.crypto.block_cipher.miniaes import MiniAES
    sage: maes = MiniAES()
    sage: bin = BinaryStrings()
    sage: key = bin.encoding("KE"); key
    0100101101000101
    sage: P = bin.encoding("Encrypt this secret message!")
    sage: C = maes(P, key, algorithm="encrypt")
    sage: plaintxt = maes(C, key, algorithm="decrypt")
    sage: plaintxt == P
    True

Or work with integers n such that 0 <= n <= 15:

    sage: from sage.crypto.block_cipher.miniaes import MiniAES
    sage: maes = MiniAES()
    sage: P = [n for n in xrange(16)]; P
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
    sage: key = [2, 3, 11, 0]; key
    [2, 3, 11, 0]
    sage: P = maes.integer_to_binary(P)
    sage: key = maes.integer_to_binary(key)
    sage: C = maes(P, key, algorithm="encrypt")
    sage: plaintxt = maes(C, key, algorithm="decrypt")
    sage: plaintxt == P
    True

**Graph Theory**

- Fast compiled graphs c_graph (Robert Miller) — The Python package NetworkX version 0.36 is currently the default graph implementation in Sage. The goal of fast compiled graphs, or c_graph, is to become the default graph implementation in Sage. The c_graph implementation is developed in Cython, which allows graph theoretic computations to run at the speed of C. The c_graph backend is implemented in the module sage/graphs/base/c_graph.pyx, which is called by higher-level frontends in sage/graphs/. Where support is provided for using c_graph, graph theoretic computations are usually more efficient than with NetworkX. For example, the following timing statistics were obtained using the machine sage.math:
    # NetworkX 0.36
    sage: time G = Graph(1000000, implementation="networkx")
    CPU times: user 8.74 s, sys: 0.27 s, total: 9.01 s
    Wall time: 9.08 s
    # c_graph
    sage: time G = Graph(1000000, implementation="c_graph")
    CPU times: user 0.01 s, sys: 0.14 s, total: 0.15 s
    Wall time: 0.19 s

Here, we see an efficiency gain of up to 47x using c_graph.

- Improve accuracy of graph eigenvalues (Rob Beezer) — New routines to compute eigenvalues and eigenvectors of integer matrices more precisely than before. Rather than converting adjacency matrices of graphs to computations over the real or complex fields, adjacency matrices are retained as matrices over the integers, yielding more accurate and informative results for eigenvalues, eigenvectors, and eigenspaces. Here is a comparison involving the computation of graph spectrum:
    # BEFORE
    sage: g = graphs.CycleGraph(8); g
    Cycle graph: Graph on 8 vertices
    sage: g.spectrum()
    [-2.0, -1.41421356237, -1.41421356237, 4.02475820828e-18,
     6.70487495185e-17, 1.41421356237, 1.41421356237, 2.0]
    # AFTER
    sage: g = graphs.CycleGraph(8); g
    Cycle graph: Graph on 8 vertices
    sage: g.spectrum()
    [2, 1.414213562373095?, 1.414213562373095?, 0, 0,
     -1.414213562373095?, -1.414213562373095?, -2]

Integer eigenvalues are now exact, and irrational eigenvalues are more precise than previously, making multiplicities easier to determine. Similar comments apply to eigenvectors:

    sage: g.eigenvectors()
    [(2, [(1, 1, 1, 1, 1, 1, 1, 1)], 1),
     (-2, [(1, -1, 1, -1, 1, -1, 1, -1)], 1),
     (0, [(1, 0, -1, 0, 1, 0, -1, 0),
          (0, 1, 0, -1, 0, 1, 0, -1)], 2),
     (-1.414213562373095?,
      [(1, 0, -1, 1.414213562373095?, -1, 0, 1, -1.414213562373095?),
       (0, 1, -1.414213562373095?, 1, 0, -1, 1.414213562373095?, -1)], 2),
     (1.414213562373095?,
      [(1, 0, -1, -1.414213562373095?, -1, 0, 1, 1.414213562373095?),
       (0, 1, 1.414213562373095?, 1, 0, -1, -1.414213562373095?, -1)], 2)]

Eigenspaces are exact, in that they can be expressed as vector spaces over number fields. When the defining polynomial has several roots, the eigenspaces are not repeated. Previously, eigenspaces were “fractured” owing to slight computational differences in identical eigenvalues. In concert with eigenvectors(), this command illuminates the structure of a graph’s eigenspaces more than purely numerical results.

    sage: g.eigenspaces()
    [
    (2, Vector space of degree 8 and dimension 1 over Rational Field
    User basis matrix:
    [1 1 1 1 1 1 1 1]),
    (-2, Vector space of degree 8 and dimension 1 over Rational Field
    User basis matrix:
    [ 1 -1  1 -1  1 -1  1 -1]),
    (0, Vector space of degree 8 and dimension 2 over Rational Field
    User basis matrix:
    [ 1  0 -1  0  1  0 -1  0]
    [ 0  1  0 -1  0  1  0 -1]),
    (a3, Vector space of degree 8 and dimension 2 over Number Field in a3
    with defining polynomial x^2 - 2
    User basis matrix:
    [  1   0  -1 -a3  -1   0   1  a3]
    [  0   1  a3   1   0  -1 -a3  -1])
    ]

Complex eigenvalues (of digraphs) previously were missing their imaginary parts. This issue has been fixed as part of the improvement in calculating graph eigenvalues.
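To see why digraphs genuinely need the imaginary parts, consider the directed 3-cycle: its adjacency matrix has characteristic polynomial λ³ − 1, so two of its three eigenvalues are complex. A quick plain-Python check of this (a sketch independent of Sage's implementation, using only the standard library):

```python
import cmath

# Adjacency matrix of the directed 3-cycle 0 -> 1 -> 2 -> 0.
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

def mat_vec(M, v):
    """Multiply a square matrix M by a vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# The eigenvalues are the cube roots of unity, with eigenvector
# (1, w, w^2) for eigenvalue w, since A shifts the coordinates.
for k in range(3):
    w = cmath.exp(2j * cmath.pi * k / 3)
    v = [1, w, w**2]
    Av = mat_vec(A, v)
    assert all(abs(Av[i] - w * v[i]) < 1e-12 for i in range(3))
    if k > 0:
        # The two non-real eigenvalues have nonzero imaginary part.
        assert abs(w.imag) > 0.5
```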

**Graphics**

- Plot histogram improvement (David Joyner) — Some improvements to the plot_histogram() function of the class IndexedSequence in sage/gsl/dft.py. The default colour of the histogram is blue:
    sage: J = range(3)
    sage: A = [ZZ(i^2)+1 for i in J]
    sage: s = IndexedSequence(A, J)
    sage: s.plot_histogram()

You can now change the colour of the histogram with the argument clr:

    sage: s.plot_histogram(clr=(1,0,0))

and even use the argument eps to change the width of the spacing between the bars:

    sage: s.plot_histogram(clr=(1,0,1), eps=0.3)

**Linear Algebra**

- Multiplicative order for matrices over finite fields (Yann Laigle-Chapuy) — New method multiplicative_order() in the class Matrix of sage/matrix/matrix0.pyx for computing the multiplicative order of a matrix. Here are some examples on using the new method multiplicative_order():
    sage: A = matrix(GF(59), 3, [10,56,39,53,56,33,58,24,55])
    sage: A.multiplicative_order()
    580
    sage: (A^580).is_one()
    True
    sage: B = matrix(GF(10007^3, 'b'), 0)
    sage: B.multiplicative_order()
    1
    sage: E = MatrixSpace(GF(11^2, 'e'), 5).random_element()
    sage: (E^E.multiplicative_order()).is_one()
    True
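The quantity being computed can be illustrated by brute force over a small prime field. The following plain-Python sketch is only illustrative (Sage's method is far cleverer, exploiting the known order of the general linear group rather than repeated multiplication):

```python
# Brute-force multiplicative order of an invertible matrix over GF(p):
# repeatedly multiply by the matrix until the identity appears.

def mat_mul(A, B, p):
    """Multiply two square matrices modulo p."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def multiplicative_order(A, p):
    """Smallest k >= 1 with A^k equal to the identity modulo p."""
    n = len(A)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    M, order = A, 1
    while M != I:
        M = mat_mul(M, A, p)
        order += 1
    return order

# The shear [[1, 1], [0, 1]] over GF(5) has powers [[1, k], [0, 1]],
# so its multiplicative order is 5.
assert multiplicative_order([[1, 1], [0, 1]], 5) == 5
```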

- A general package for finitely generated not-necessarily free R-modules (William Stein, David Loeffler) — This consists of the following new Sage modules:
- sage/modules/fg_pid/fgp_element.py — Elements of finitely generated modules over a principal ideal domain. Here are some examples:
    sage: V = span([[1/2,1,1], [3/2,2,1], [0,0,1]], ZZ)
    sage: W = V.span([2*V.0+4*V.1, 9*V.0+12*V.1, 4*V.2])
    sage: Q = V/W
    sage: x = Q(V.0-V.1); x
    (0, 3)
    sage: type(x)
    <class 'sage.modules.fg_pid.fgp_element.FGP_Element'>
    sage: x is Q(x)
    True
    sage: x.parent() is Q
    True
    sage: Q
    Finitely generated module V/W over Integer Ring with invariants (4, 12)
    sage: Q.0.additive_order()
    4
    sage: Q.1.additive_order()
    12
    sage: (Q.0+Q.1).additive_order()
    12

- sage/modules/fg_pid/fgp_module.py — Finitely generated modules over a principal ideal domain. Currently, only the principal ideal domain of integers is supported. Here are some examples:
    sage: V = span([[1/2,1,1], [3/2,2,1], [0,0,1]], ZZ)
    sage: W = V.span([2*V.0+4*V.1, 9*V.0+12*V.1, 4*V.2])
    sage: import sage.modules.fg_pid.fgp_module
    sage: Q = sage.modules.fg_pid.fgp_module.FGP_Module(V, W)
    sage: type(Q)
    <class 'sage.modules.fg_pid.fgp_module.FGP_Module_class'>
    sage: Q is sage.modules.fg_pid.fgp_module.FGP_Module(V, W, check=False)
    True
    sage: X = ZZ**2 / span([[3,0],[0,2]], ZZ)
    sage: X.linear_combination_of_smith_form_gens([1])
    (1)
    sage: Q
    Finitely generated module V/W over Integer Ring with invariants (4, 12)
    sage: Q.gens()
    ((1, 0), (0, 1))
    sage: Q.coordinate_vector(-Q.0)
    (-1, 0)
    sage: Q.coordinate_vector(-Q.0, reduce=True)
    (3, 0)
    sage: Q.cardinality()
    48

- sage/modules/fg_pid/fgp_morphism.py — Morphisms between finitely generated modules over a principal ideal domain. Here are some examples:
    sage: V = span([[1/2,1,1],[3/2,2,1],[0,0,1]],ZZ)
    sage: W = V.span([2*V.0+4*V.1, 9*V.0+12*V.1, 4*V.2])
    sage: Q = V/W; Q
    Finitely generated module V/W over Integer Ring with invariants (4, 12)
    sage: phi = Q.hom([Q.0+3*Q.1, -Q.1]); phi
    Morphism from module over Integer Ring with invariants (4, 12) to
    module with invariants (4, 12) that sends the generators to
    [(1, 3), (0, 11)]
    sage: phi(Q.0) == Q.0 + 3*Q.1
    True
    sage: phi(Q.1) == -Q.1
    True
    sage: Q.hom([0, Q.1]).kernel()
    Finitely generated module V/W over Integer Ring with invariants (4)
    sage: A = Q.hom([Q.0, 0]).kernel(); A
    Finitely generated module V/W over Integer Ring with invariants (12)
    sage: Q.1 in A
    True
    sage: phi = Q.hom([Q.0-3*Q.1, Q.0+Q.1])
    sage: A = phi.kernel(); A
    Finitely generated module V/W over Integer Ring with invariants (4)
    sage: phi(A)
    Finitely generated module V/W over Integer Ring with invariants ()


**Miscellaneous**

- An optimized Sudoku solver (Rob Beezer, Tom Boothby) — Support two algorithms for efficiently solving a Sudoku puzzle: a backtrack algorithm and the DLX algorithm. Generally, the DLX algorithm is very fast and very consistent. The backtrack algorithm is very variable in its performance, on some occasions markedly faster than DLX but usually slower by a similar factor, with the potential to be orders of magnitude slower. The following code compares the performance between the Sudoku solver in Sage 4.0.2 and that in this release. We also compare the performance between the backtrack algorithm and the DLX algorithm. All timing statistics were obtained using the machine sage.math:
    # BEFORE
    sage: A = matrix(ZZ,9,[5,0,0, 0,8,0, 0,4,9, 0,0,0, 5,0,0, 0,3,0, 0,6,7, \
    ....:     3,0,0, 0,0,1, 1,5,0, 0,0,0, 0,0,0, 0,0,0, 2,0,8, 0,0,0, \
    ....:     0,0,0, 0,0,0, 0,1,8, \
    ....:     7,0,0, 0,0,4, 1,5,0, 0,3,0, 0,0,2, 0,0,0, 4,9,0, 0,5,0, 0,0,3])
    sage: %timeit sudoku(A);
    10 loops, best of 3: 43.5 ms per loop
    sage: from sage.games.sudoku import solve_recursive
    sage: B = matrix(ZZ, 9, 9, [ [0,0,0,0,1,0,9,0,0], [8,0,0,4,0,0,0,0,0], \
    ....:     [2,0,0,0,0,0,0,0,0], [0,7,0,0,3,0,0,0,0], [0,0,0,0,0,0,2,0,4], \
    ....:     [0,0,0,0,0,0,0,5,8], [0,6,0,0,0,0,1,3,0], [7,0,0,2,0,0,0,0,0], \
    ....:     [0,0,0,8,0,0,0,0,0] ])
    sage: %timeit solve_recursive(B, 8, 5);
    1000 loops, best of 3: 325 µs per loop
    # AFTER
    sage: h = Sudoku('8..6..9.5.............2.31...7318.6.24.....73...........279.1..5...8..36..3......')
    sage: %timeit h.solve(algorithm='backtrack').next();
    1000 loops, best of 3: 1.12 ms per loop
    sage: %timeit h.solve(algorithm='dlx').next();
    1000 loops, best of 3: 1.58 ms per loop
    sage: # These are the first 10 puzzles in a list of "Top 95" puzzles.
    sage: top = ['4.....8.5.3..........7......2.....6.....8.4......1.......6.3.7.5..2.....1.4......', \
    ....:        '52...6.........7.13...........4..8..6......5...........418.........3..2...87.....', \
    ....:        '6.....8.3.4.7.................5.4.7.3..2.....1.6.......2.....5.....8.6......1....', \
    ....:        '48.3............71.2.......7.5....6....2..8.............1.76...3.....4......5....', \
    ....:        '....14....3....2...7..........9...3.6.1.............8.2.....1.4....5.6.....7.8...', \
    ....:        '......52..8.4......3...9...5.1...6..2..7........3.....6...1..........7.4.......3.', \
    ....:        '6.2.5.........3.4..........43...8....1....2........7..5..27...........81...6.....', \
    ....:        '.524.........7.1..............8.2...3.....6...9.5.....1.6.3...........897........', \
    ....:        '6.2.5.........4.3..........43...8....1....2........7..5..27...........81...6.....', \
    ....:        '.923.........8.1...........1.7.4...........658.........6.5.2...4.....7.....9.....']
    sage: p = [Sudoku(top[i]) for i in xrange(10)]
    sage: for i in xrange(10):
    ....:     %timeit p[i].solve(algorithm='dlx').next();
    ....:     %timeit p[i].solve(algorithm='backtrack').next();
    ....:
    100 loops, best of 3: 2.26 ms per loop
    10 loops, best of 3: 223 ms per loop
    100 loops, best of 3: 2.6 ms per loop
    10 loops, best of 3: 21.3 ms per loop
    100 loops, best of 3: 2.38 ms per loop
    10 loops, best of 3: 83.5 ms per loop
    1000 loops, best of 3: 1.76 ms per loop
    10 loops, best of 3: 43.5 ms per loop
    1000 loops, best of 3: 1.86 ms per loop
    10 loops, best of 3: 316 ms per loop
    1000 loops, best of 3: 1.65 ms per loop
    10 loops, best of 3: 145 ms per loop
    100 loops, best of 3: 1.84 ms per loop
    10 loops, best of 3: 547 ms per loop
    1000 loops, best of 3: 1.77 ms per loop
    10 loops, best of 3: 255 ms per loop
    100 loops, best of 3: 2.08 ms per loop
    10 loops, best of 3: 445 ms per loop
    1000 loops, best of 3: 1.67 ms per loop
    10 loops, best of 3: 266 ms per loop
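The backtrack strategy can be sketched in ordinary Python. The following toy solver is not Sage's implementation; it simply tries each legal digit in the first empty cell and undoes dead ends, which is the essence of backtracking:

```python
# A toy backtracking Sudoku solver: fill the first empty cell (0) with
# each legal digit in turn, recurse, and undo on failure.

def candidates(grid, r, c):
    """Digits not yet used in row r, column c, or the enclosing 3x3 box."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    """Fill grid in place; return True if a completion exists."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in candidates(grid, r, c):
                    grid[r][c] = d
                    if solve(grid):
                        return True
                    grid[r][c] = 0   # dead end: backtrack
                return False
    return True  # no empty cells left: solved

puzzle = [[5,3,0,0,7,0,0,0,0],
          [6,0,0,1,9,5,0,0,0],
          [0,9,8,0,0,0,0,6,0],
          [8,0,0,0,6,0,0,0,3],
          [4,0,0,8,0,3,0,0,1],
          [7,0,0,0,2,0,0,0,6],
          [0,6,0,0,0,0,2,8,0],
          [0,0,0,4,1,9,0,0,5],
          [0,0,0,0,8,0,0,7,9]]
assert solve(puzzle)
assert all(sorted(row) == list(range(1, 10)) for row in puzzle)
```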

- A decorator for declaring abstract methods (Nicolas Thiéry) — Support a decorator that can be used to declare a method that should be implemented by derived classes. This declaration should typically include documentation for the specification for this method. The purpose of the decorator is to enforce a consistent and visual syntax for such declarations. The decorator is also used by the Sage categories framework for automated tests. As an example, here we create a class with an abstract method:
    sage: class A(object):
    ....:     @abstract_method
    ....:     def my_method(self):
    ....:         """
    ....:         The method :meth:`my_method` computes my_method
    ....:         """
    ....:         pass
    ....:
    sage: A.my_method

The current policy is that a NotImplementedError is raised when accessing the method through an instance, even before the method is called:

    sage: x = A()
    sage: x.my_method
    Traceback (most recent call last):
    ...
    NotImplementedError: <abstract method my_method at 0x7f53414a7410>

It is also possible to mark abstract methods as optional:

    sage: class A(object):
    ....:     @abstract_method(optional=True)
    ....:     def my_method(self):
    ....:         """
    ....:         The method :meth:`my_method` computes my_method
    ....:         """
    ....:         pass
    ....:
    sage: A.my_method
    <optional abstract method my_method at 0x3b551b8>
    sage: x = A()
    sage: x.my_method
    NotImplemented
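The raise-on-access behaviour can be imitated with a small Python descriptor. This is a hypothetical sketch of the idea only, not Sage's actual abstract_method code:

```python
# A minimal descriptor-based imitation of an abstract-method decorator:
# merely *accessing* the attribute on an instance raises, before any call.

class abstract_method(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        if instance is None:
            return self            # class-level access returns the marker
        raise NotImplementedError(
            "<abstract method %s>" % self.func.__name__)

class A(object):
    @abstract_method
    def my_method(self):
        """Derived classes must implement my_method."""

try:
    A().my_method                  # attribute access alone raises
except NotImplementedError as e:
    assert "my_method" in str(e)
```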

**Notebook**

- Unicode in %latex cells (Peter Mora) — One can now enter Unicode characters directly in Notebook cells.
- Allow \[ and \] to delimit math in %html blocks (John Palmieri) — One can now enter
    %html test \[ x^2 \]

and the expression is typeset in math mode.

**Number Theory**

- Improved random_element() method for number field orders and ideals (John Cremona) — The new method random_element() of the class NumberFieldIdeal in sage/rings/number_field/number_field_ideal.py returns a random element of a fractional ideal, computed as a random integer linear combination of the basis. A similar method has also been implemented for the class Order in sage/rings/number_field/order.py. Here are some examples on using this new method:
    sage: K.<a> = NumberField(x^3 + 2)
    sage: I = K.ideal(1 - a)
    sage: I.random_element()
    2*a^2 + a + 3
    sage: I.random_element(distribution="uniform")
    -a^2 + 2*a + 2
    sage: I.random_element(-30, 30)
    -30*a^2 + 17*a - 11
    sage: I.random_element(-30,30).parent() is K
    True
    sage: K.<a> = NumberField(x^3 + 2)
    sage: OK = K.ring_of_integers()
    sage: OK.random_element()
    2*a^2 + 7*a + 2
    sage: OK.random_element(distribution="uniform")
    -2*a^2 + a - 1
    sage: K.order(a).random_element()
    -2*a^2 - a - 5

- Support for Michael Stoll’s ratpoints package (Robert Miller, Michael Stoll) — Stoll’s ratpoints package is a program for finding points of bounded height on curves of the form y^2 = f(x). The library code is contained in the Cython module sage/libs/ratpoints.pyx. Here are some examples for working with ratpoints:
    sage: from sage.libs.ratpoints import ratpoints
    sage: for x,y,z in ratpoints([1..6], 200):
    ....:     print -1*y^2 + 1*z^6 + 2*x*z^5 + 3*x^2*z^4 + 4*x^3*z^3 + 5*x^4*z^2 + 6*x^5*z
    ....:
    0
    0
    0
    0
    0
    0
    0
    sage: for x,y,z in ratpoints([1..5], 200):
    ....:     print -1*y^2 + 1*z^4 + 2*x*z^3 + 3*x^2*z^2 + 4*x^3*z + 5*x^4
    ....:
    0
    0
    0
    0
    0
    0
    0
    0
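The search that ratpoints performs can be imitated naively: try rational x = p/q of bounded height and keep those where f(x) is a square of a rational. The following pure-Python sketch (the helper names are hypothetical, and the real library is vastly faster) illustrates the idea:

```python
from fractions import Fraction
from math import gcd, isqrt

def is_square(fr):
    """True if the rational fr is the square of a rational."""
    if fr < 0:
        return False
    n, d = fr.numerator, fr.denominator
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

def small_points(coeffs, height):
    """x-coordinates x = p/q with |p|, q <= height and
    y^2 = sum(coeffs[i] * x^i) solvable in rationals."""
    found = []
    for q in range(1, height + 1):
        for p in range(-height, height + 1):
            if gcd(p, q) != 1:
                continue
            x = Fraction(p, q)
            fx = sum(c * x ** i for i, c in enumerate(coeffs))
            if is_square(fx):
                found.append(x)
    return found

# On y^2 = x^2 + 1 we find e.g. x = 0 (y = 1) and x = 3/4 (y = 5/4).
pts = small_points([1, 0, 1], 4)
assert Fraction(0) in pts and Fraction(3, 4) in pts
```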

- Elliptic exponential (John Cremona) — New method elliptic_exponential() in the class EllipticCurve_rational_field of sage/schemes/elliptic_curves/ell_rational_field.py for computing the elliptic exponential of a complex number with respect to an elliptic curve. A similar method is also defined for the class PeriodLattice_ell in sage/schemes/elliptic_curves/period_lattice.py. Here are some examples:
    sage: E = EllipticCurve([1,1,1,-8,6])
    sage: P = E([0,2])
    sage: z = P.elliptic_logarithm()
    sage: E.elliptic_exponential(z)
    (-1.6171648557030742010940435588e-29 : 2.0000000000000000000000000000 : 1.0000000000000000000000000000)
    sage: z = E([0,2]).elliptic_logarithm(precision=200)
    sage: E.elliptic_exponential(z)
    (-1.6490990486332025523931769742517329237564168247111092902718e-59 : 2.0000000000000000000000000000000000000000000000000000000000 : 1.0000000000000000000000000000000000000000000000000000000000)

And here are some torsion examples:

    sage: E = EllipticCurve('389a')
    sage: w1,w2 = E.period_lattice().basis()
    sage: E.two_division_polynomial().roots(CC,multiplicities=False)
    [-2.04030220028546, 0.135409240221753, 0.904892960063711]
    sage: [E.elliptic_exponential((a*w1+b*w2)/2)[0] for a,b in [(0,1),(1,1),(1,0)]]
    [-2.04030220028546, 0.135409240221753, 0.904892960063711]
    sage: E.division_polynomial(3).roots(CC,multiplicities=False)
    [-2.88288879135334, 1.39292799513138,
     0.0783137314443164 - 0.492840991709879*I,
     0.0783137314443164 + 0.492840991709879*I]
    sage: [E.elliptic_exponential((a*w1+b*w2)/3)[0] for a,b in [(0,1),(1,0),(1,1),(2,1)]]
    [-2.88288879135335, 1.39292799513138,
     0.0783137314443165 - 0.492840991709879*I,
     0.0783137314443168 + 0.492840991709879*I]

**Numerical**

- Use mpmath to compute constants (Fredrik Johansson, Mike Hansen) — Previously the functions khinchin(), mertens() and twinprime() in sage/symbolic/constants.py were LimitedPrecisionConstant. Using mpmath, these functions now support arbitrary precision for the corresponding constants. There is now also support for the Glaisher-Kinkelin constant using mpmath. Here are some examples on using these functions with the mpmath backend. The Khinchin constant:
    sage: float(khinchin)
    2.6854520010653062
    sage: khinchin.n(digits=60)
    2.68545200106530644530971483548179569382038229399446295305115
    sage: khinchin._mpfr_(RealField(100))
    2.6854520010653064453097148355
    sage: RealField(100)(khinchin)
    2.6854520010653064453097148355

The Twin Primes constant:

    sage: float(twinprime)
    0.66016181584686962
    sage: twinprime.n(digits=60)
    0.660161815846869573927812110014555778432623360284733413319448
    sage: twinprime._mpfr_(RealField(100))
    0.66016181584686957392781211001
    sage: RealField(100)(twinprime)
    0.66016181584686957392781211001

The Mertens constant:

    sage: float(mertens)
    0.26149721284764277
    sage: mertens.n(digits=60)
    0.261497212847642783755426838608695859051566648261199206192064
    sage: mertens._mpfr_(RealField(100))
    0.26149721284764278375542683861
    sage: RealField(100)(mertens)
    0.26149721284764278375542683861

The Glaisher-Kinkelin constant:

    sage: float(glaisher)
    1.2824271291006226
    sage: glaisher.n(digits=60)
    1.28242712910062263687534256886979172776768892732500119206374
    sage: a = glaisher + 2
    sage: parent(a)
    Symbolic Ring
    sage: glaisher._mpfr_(RealField(100))
    1.2824271291006226368753425689
    sage: RealField(100)(glaisher)
    1.2824271291006226368753425689

**Packages**

- New package mpmath version 0.12 for multiprecision floating-point arithmetic (Fredrik Johansson, Mike Hansen) — The Python package mpmath is now a standard package of Sage. Functions in mpmath can be called from Sage using the library under sage/libs/mpmath, with automatic data conversion between Sage and mpmath.
- New package Ratpoints version 2.1.2 for computing rational points on hyperelliptic curves (Robert Miller, Michael Stoll) — The C package Ratpoints is now a standard spkg. The corresponding library file is sage/libs/ratpoints.pyx.
- Upgrade Singular to version singular-3-1-0-2-20090620 with support for compiling with GCC 4.4 (Andrzej Giniewicz, Martin Albrecht, Craig Citro).
- Upgrade Sage’s Python spkg to the 2.6.x series (Mike Hansen).
- Upgrade Twisted to version 8.2.0, the latest upstream release (Mike Hansen).
- Upgrade SCons to version 1.2.0, the latest upstream release (Mike Hansen).
- Update the Pynac spkg to version pynac-0.1.8.p1.spkg (Mike Hansen).
- Update the IPython spkg to version ipython-0.9.1.p0.spkg (Mike Hansen).
- Update the ATLAS spkg to version atlas-3.8.3.p5.spkg (David Kirkby).
- Update the CVXOPT spkg to version cvxopt-0.9.p8.spkg (Gonzalo Tornaria).
- Update the FreeType spkg to version freetype-2.3.5.p1.spkg (Peter Jeremy).
- Update the GD spkg to version gd-2.0.35.p2.spkg (Peter Jeremy).
- Update the libgcrypt spkg to version libgcrypt-1.4.3.p1.spkg (Peter Jeremy).
- Update the libgpg_error spkg to version libgpg_error-1.6.p1.spkg (Peter Jeremy).
- Update the linbox spkg to version linbox-1.1.6.p0.spkg (Peter Jeremy).
- Update the NTL spkg to version ntl-5.4.2.p8.spkg (Peter Jeremy).
- Update the Readline spkg to version readline-5.2.p7.spkg (Peter Jeremy).
- Update the Tachyon spkg to version tachyon-0.98beta (Peter Jeremy).
- Update the rubiks spkg to version rubiks-20070912.p9.spkg (William Stein) — This adds support for compiling rubiks in parallel.
- Update the python-gnutls spkg to version python_gnutls-1.1.4.p5.spkg (William Stein).

**Symbolics**

- Symbolic arctan2() function (Karl-Dieter Crisman) — New symbolic trigonometric function arctan2() in sage/functions/trig.py. This symbolic function returns the arctangent (measured in radians) of y/x. Unlike arctan(y/x), the signs of both x and y are considered. For example, note the difference between arctan2() and arctan():
    sage: arctan2(1,-1)
    3/4*pi
    sage: arctan(1/-1)
    -1/4*pi

The new symbolic function arctan2() is also consistent with the implementations in Python and Maxima:

    sage: arctan2(1,-1)  # the symbolic arctan2
    3/4*pi
    sage: maxima.atan2(1,-1)  # Maxima implementation
    3*%pi/4
    sage: math.atan2(1,-1)  # Python implementation
    2.3561944901923448

We can also compute an approximation:

    sage: arctan2(-.5,1).n(100)
    -0.46364760900080611621425623146
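The quadrant-awareness is easy to verify against Python's own math.atan2, which the comparison above already references (note that Python takes the arguments in the order atan2(y, x)):

```python
import math

# atan(y/x) loses the quadrant: for (x, y) = (-1, 1) the ratio y/x = -1
# is the same as for (x, y) = (1, -1), but the two angles differ by pi.
assert math.isclose(math.atan2(1, -1), 3 * math.pi / 4)  # second quadrant
assert math.isclose(math.atan(1 / -1), -math.pi / 4)     # quadrant lost
assert math.isclose(math.atan2(-1, 1), -math.pi / 4)     # fourth quadrant
```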

Rob Beezer and Franco Saliola contributed to writing this release tour. A big thank you to all the Sage bug report/patch authors who made my life as a release tour author easier through your comprehensive and concise documentation. There are too many to list here; you know who you are. A release tour can also be found on the Sage wiki.

## Sage 4.0.2 released

Sage 4.0.2 was released on June 18th, 2009. For the official, comprehensive release note, please refer to sage-4.0.2.txt. The following points are some of the foci of this release:

- Upgrade NumPy, SciPy, Singular, and FLINT to latest upstream releases.
- A script to automate the testing and merging of tickets.
- LaTeX output for combinatorial graphs.
- New features for linear algebra include Hermite normal form over principal ideal domains.
- New features for number theory include elliptic curve isogeny, and local and global heights for number fields.

The following people made their first contribution in this release:

- Fidel Barrera Cruz
- Jerome Lefebvre

Here is a summary of main features in this release, categorized under various headings:

**Algebra**

- Correct precision bound in hilbert_class_polynomial() and miscellaneous new functions (John Cremona) — The two new functions are elliptic_j() in sage/functions/special.py, and is_primitive() in the class BinaryQF of sage/quadratic_forms/binary_qf.py. The function elliptic_j(z) returns the elliptic modular j-function evaluated at z. The function is_primitive() determines whether the binary quadratic form a*x^2 + b*x*y + c*y^2 satisfies gcd(a, b, c) = 1, i.e. that it is primitive. Here are some examples on using these new functions:
    sage: elliptic_j(CC(i))
    1728.00000000000
    sage: elliptic_j(sqrt(-2.0))
    8000.00000000000
    sage: Q = BinaryQF([6,3,9])
    sage: Q.is_primitive()
    False
    sage: Q = BinaryQF([1,1,1])
    sage: Q.is_primitive()
    True
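The primitivity test reduces to a single gcd computation on the three coefficients. As a plain-Python sketch (the helper name is hypothetical, not Sage's code):

```python
from math import gcd

def is_primitive(a, b, c):
    """A binary quadratic form a*x^2 + b*x*y + c*y^2 is primitive
    exactly when gcd(a, b, c) == 1."""
    return gcd(gcd(a, b), c) == 1

# Matches the session above: the coefficients [6, 3, 9] share the factor 3.
assert not is_primitive(6, 3, 9)
assert is_primitive(1, 1, 1)
```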

- Efficient Lagrange interpolation polynomial (Yann Laigle-Chapuy) — Calculating the Lagrange interpolation polynomial of a set of points is now up to 48% faster than previously. The following timing statistics were obtained using the machine sage.math:
    # BEFORE
    sage: R = PolynomialRing(QQ, 'x')
    sage: %timeit R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
    1000 loops, best of 3: 824 µs per loop
    sage: R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
    -23/84*x^3 - 11/84*x^2 + 13/7*x + 1
    sage: R = PolynomialRing(GF(2**3,'a'), 'x')
    sage: a = R.base_ring().gen()
    sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
    625 loops, best of 3: 111 µs per loop
    sage: R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])
    a^2*x^2 + a^2*x + a^2
    # AFTER
    sage: R = PolynomialRing(QQ, 'x')
    sage: %timeit R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
    1000 loops, best of 3: 425 µs per loop
    sage: R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
    -23/84*x^3 - 11/84*x^2 + 13/7*x + 1
    sage: R = PolynomialRing(GF(2**3,'a'), 'x')
    sage: a = R.base_ring().gen()
    sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
    625 loops, best of 3: 86.4 µs per loop
    sage: R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])
    a^2*x^2 + a^2*x + a^2
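The underlying formula is the classical Lagrange basis expansion. A direct stdlib-Python sketch with exact Fractions (illustrative only, not Sage's optimized routine) reproduces the rational example above:

```python
from fractions import Fraction

def lagrange_polynomial(points):
    """Coefficients (lowest degree first) of the polynomial interpolating
    the given (x, y) points, built from the Lagrange basis polynomials."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        # Build prod_{j != i} (x - xj) and the denominator prod (xi - xj).
        basis = [Fraction(1)]
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            denom *= xi - xj
            new = [Fraction(0)] * (len(basis) + 1)
            for k, b in enumerate(basis):   # multiply basis by (x - xj)
                new[k] -= xj * b
                new[k + 1] += b
            basis = new
        for k in range(n):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

pts = [(0, 1), (2, 2), (3, -2), (-4, 9)]
c = lagrange_polynomial([(Fraction(x), Fraction(y)) for x, y in pts])
# Matches Sage's result -23/84*x^3 - 11/84*x^2 + 13/7*x + 1:
assert c == [Fraction(1), Fraction(13, 7), Fraction(-11, 84), Fraction(-23, 84)]
assert all(sum(ci * x ** k for k, ci in enumerate(c)) == y for x, y in pts)
```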

- Deprecate the method __len__() for a matrix group (Nicolas Thiéry) — The method __len__() of the class MatrixGroup_gap in sage/groups/matrix_gps/matrix_group.py is now deprecated and will be removed in a future release. To get the number of elements in a matrix group, users are advised to use the method cardinality() instead. The method order() is essentially the same as cardinality(), so order() will be deprecated in a future release.

**Algebraic Geometry**

- Optimize hyperelliptic curve arithmetic (Nick Alexander) — Arithmetic with hyperelliptic curves can be up to 6x faster than previously. The following timing statistics were obtained using the machine sage.math:
    # BEFORE
    sage: F = GF(next_prime(10^30))
    sage: x = F['x'].gen()
    sage: f = x^7 + x^2 + 1
    sage: H = HyperellipticCurve(f, 2*x)
    sage: J = H.jacobian()(F)
    verbose 0 (902: multi_polynomial_ideal.py, dimension) Warning: falling back to very slow toy implementation.
    sage: Q = J(H.lift_x(F(1)))
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 0.65 s, sys: 0.02 s, total: 0.67 s
    Wall time: 0.68 s
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 1.08 s, sys: 0.00 s, total: 1.08 s
    Wall time: 1.08 s
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 0.72 s, sys: 0.02 s, total: 0.74 s
    Wall time: 0.74 s
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 0.67 s, sys: 0.00 s, total: 0.67 s
    Wall time: 0.67 s
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 0.66 s, sys: 0.00 s, total: 0.66 s
    Wall time: 0.66 s
    # AFTER
    sage: F = GF(next_prime(10^30))
    sage: x = F['x'].gen()
    sage: f = x^7 + x^2 + 1
    sage: H = HyperellipticCurve(f, 2*x)
    sage: J = H.jacobian()(F)
    verbose 0 (919: multi_polynomial_ideal.py, dimension) Warning: falling back to very slow toy implementation.
    sage: Q = J(H.lift_x(F(1)))
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 0.14 s, sys: 0.01 s, total: 0.15 s
    Wall time: 0.15 s
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 0.10 s, sys: 0.00 s, total: 0.10 s
    Wall time: 0.10 s
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 0.09 s, sys: 0.00 s, total: 0.09 s
    Wall time: 0.10 s
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 0.09 s, sys: 0.01 s, total: 0.10 s
    Wall time: 0.10 s
    sage: %time ZZ.random_element(10**10) * Q;
    CPU times: user 0.10 s, sys: 0.00 s, total: 0.10 s
    Wall time: 0.11 s

**Coding Theory**

- Hexads in S(5,6,12) and mathematical blackjack (David Joyner) — Implements kittens, hexads and mathematical blackjack as described in the following papers:
- R. Curtis. The Steiner system S(5,6,12), the Mathieu group M12, and the kitten. In M. Atkinson (ed.) Computational Group Theory, Academic Press, 1984.
- J. Conway. Hexacode and tetracode — MINIMOG and MOG. In M. Atkinson (ed.) Computational Group Theory, Academic Press, 1984.
- J. Conway and N. Sloane. Lexicographic codes: error-correcting codes from game theory. IEEE Transactions on Information Theory, 32:337-348, 1986.
- J. Kahane and A. Ryba. The hexad game. Electronic Journal of Combinatorics, 8, 2001.

**Commutative Algebra**

- Enable Singular’s coefficient rings which are not fields (Martin Albrecht) — Singular 3-1-0 supports coefficient rings which are not fields; in particular, it supports the integers and the residue rings Z/nZ. These are now natively supported in Sage.

**Cryptography**

- S-box to CNF Conversion (Martin Albrecht) — New method cnf() in the class SBox of sage/crypto/mq/sbox.py for converting an S-box to conjunctive normal form. Here are some examples on S-box to CNF conversion:
```
sage: S = mq.SBox(1,2,0,3); S
(1, 2, 0, 3)
sage: S.cnf()
[(1, 2, -3), (1, 2, 4), (1, -2, 3), (1, -2, -4),
 (-1, 2, -3), (-1, 2, -4), (-1, -2, 3), (-1, -2, 4)]
sage: # convert this representation to the DIMACS format
sage: print S.cnf(format='dimacs')
p cnf 4 8
1 2 -3 0
1 2 4 0
1 -2 3 0
1 -2 -4 0
-1 2 -3 0
-1 2 -4 0
-1 -2 3 0
-1 -2 4 0
sage: # as a truth table
sage: log = SymbolicLogic()
sage: s = log.statement(S.cnf(format='symbolic'))
sage: log.truthtable(s)[1:]
[['False', 'False', 'False', 'False', 'False'],
 ['False', 'False', 'False', 'True', 'False'],
 ['False', 'False', 'True', 'False', 'False'],
 ['False', 'False', 'True', 'True', 'True'],
 ['False', 'True', 'False', 'False', 'True'],
 ['False', 'True', 'False', 'True', 'True'],
 ['False', 'True', 'True', 'False', 'True'],
 ['False', 'True', 'True', 'True', 'True'],
 ['True', 'False', 'False', 'False', 'True'],
 ['True', 'False', 'False', 'True', 'True'],
 ['True', 'False', 'True', 'False', 'True'],
 ['True', 'False', 'True', 'True', 'True'],
 ['True', 'True', 'False', 'False', 'True'],
 ['True', 'True', 'False', 'True', 'True'],
 ['True', 'True', 'True', 'False', 'True'],
 ['True', 'True', 'True', 'True', 'True']]
```
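The clause construction behind cnf() can be sketched in a few lines of plain Python. This is an illustration, not Sage's implementation: the helper sbox_cnf (a hypothetical name) numbers the m input bits 1..m and the n output bits m+1..m+n, and for each input value emits one clause per output bit saying "if the inputs equal this value, the output bit takes its correct value".

```python
def sbox_cnf(sbox):
    """Convert an S-box (a list of output values) to CNF clauses.

    Input bits are variables 1..m (most significant bit first); output
    bits are variables m+1..m+n.  The output width n is inferred from
    the largest output value, which is an assumption of this sketch.
    """
    m = (len(sbox) - 1).bit_length()       # number of input bits
    n = max(max(sbox), 1).bit_length()     # number of output bits
    clauses = []
    for x, y in enumerate(sbox):
        # literals that are all false exactly when the input equals x
        guard = [-(i + 1) if (x >> (m - 1 - i)) & 1 else (i + 1)
                 for i in range(m)]
        for j in range(n):
            bit = (y >> (n - 1 - j)) & 1
            out = (m + j + 1) if bit else -(m + j + 1)
            clauses.append(tuple(guard + [out]))
    return clauses
```

For the 2-bit S-box (1, 2, 0, 3) this reproduces the eight clauses shown above.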

**Graph Theory**

- LaTeX output for (combinatorial) graphs (Robert Beezer, Fidel Barrera Cruz) — Implement the option tkz_style to output graphs in LaTeX format so that they can be processed by pgf/tkz. Here’s an example of the Petersen graph visualized using tkz:
```
sage: g = graphs.PetersenGraph()
sage: g.set_latex_options(tkz_style='Art')
sage: view(g, pdflatex=True)
```

**Group Theory**

- Python interface to partition backtrack functions (Robert Miller) — New module in sage/groups/perm_gps/partn_ref/refinement_python.pyx provides Python frontends to the Cython-based partition backtrack functions. This allows one to write the three input functions (all_children_are_equivalent, refine_and_return_invariant, and compare_structures) in pure Python, and still use the Cython algorithms. Experimentation with specific partition backtrack implementations no longer requires compilation, as the input functions can be dynamically changed at runtime. Note that this is not intended for production quality implementations of partition refinement, but instead for experimentation, learning, and use of the Python debugger.

**Linear Algebra**

- Hermite normal form over principal ideal domains (David Loeffler) — This adds echelon form (or Hermite normal form) computation over principal ideal domains. Here is an example:
```
sage: L.<w> = NumberField(x^2 - x + 2)
sage: OL = L.ring_of_integers()
sage: m = matrix(OL, 2, 2, [1,2,3,4+w])
sage: m.echelon_form()
[    1    -2]
[    0 w - 2]
sage: m.echelon_form(transformation=True)
([    1    -2]
 [    0 w - 2],
 [-3*w - 2    w + 1]
 [      -3        1])
```

**Miscellaneous**

- Bypassing jsMath with view command (John Palmieri) — This provides a way to not always use jsMath when rendering LaTeX for the view command in the notebook. It works by looking for certain strings in the LaTeX code for the object, and if it finds them, it creates and displays a PNG file, bypassing jsMath altogether. The “certain strings” are stored in a list which is initially empty, but can be populated by using
latex.jsmath_avoid_list(...)

or

latex.add_to_jsmath_avoid_list(...)

- A “decorator” to allow pickling nested classes (Carl Witty, Nicolas Thiery) — The nested_pickle decorator modifies nested classes to be pickleable. (In Python 2.6 it should be usable as a decorator, although that hasn’t been tested; Python 2.5 doesn’t support class decorators, so you can’t use that syntax in Sage until Sage upgrades to Python 2.6.)

**Notebook**

- Add link to IRC in notebook help page (Harald Schilly).

**Number Theory**

- Elliptic curve isogeny object (Dan Shumow).
- Various number field improvements (Francis Clarke) — Among other things, one can now do
```
sage: K.<a> = NumberField(x^2 + 5)
sage: L.<b> = K.extension(x^2 + 1)
sage: L.ideal(K.ideal(2, a + 1))
Fractional ideal (b + 1)
```

For a number field K, one can obtain the prime ideals above a given number using K.prime_factors():

```
sage: CyclotomicField(3).prime_factors(7)
[Fractional ideal (-2*zeta3 + 1), Fractional ideal (2*zeta3 + 3)]
```

- Enhanced reduction modulo ideals of number fields (Maite Aranes) — The method residues() is modified so that it returns a canonical set of coset representatives. The new method reduce() returns the canonical reduction of an integral element of a number field modulo the given ideal. The method inverse_mod() now works for integral elements of a number field without having to coerce into the ring of integers.
- Local and global heights for number field elements (John Cremona) — New method local_height() and global_height() in the class NumberFieldElement of sage/rings/number_field/number_field_element.pyx. The method local_height() returns the local height of a number field element at a given prime ideal. The method global_height() returns the absolute logarithmic height of a number field element. Here are some examples for working with these new methods:
```
sage: R.<x> = QQ["x"]
sage: K.<a> = NumberField(x^4 + 3*x^2 - 17)
sage: P = K.ideal(61).factor()[0][0]
sage: b = 1/(a^2 + 30)
sage: b.local_height(P)
4.11087386417331
sage: b.local_height(P, weighted=True)
8.22174772834662
sage: b.local_height(P, 200)
4.1108738641733112487513891034256147463156817430812610629374
sage: (b^2).local_height(P)
8.22174772834662
sage: (b^-1).local_height(P)
0.000000000000000
sage:
sage: R.<x> = QQ["x"]
sage: K.<a> = NumberField(x^4 + 3*x^2 - 17)
sage: b = a/2
sage: b.global_height()
2.86922224068797
sage: b.global_height(prec=200)
2.8692222406879748488543678846959454765968722137813736080066
```

**Packages**

- Upgrade NumPy to version 1.3.0, the latest upstream release (Jason Grout).
- Upgrade SciPy to version 0.7, the latest upstream release (Jason Grout).
- Upgrade Singular to version 3-1-0, the latest upstream release (Martin Albrecht).
- Upgrade FLINT to version 1.3.0, the latest upstream release (Nick Alexander).
- Update the MPIR spkg to version mpir-1.2.p3.spkg (Nick Alexander).
- Update the M4RI spkg to version libm4ri-20090617 (Martin Albrecht).
- Remove Guava as a standard Sage package (David Joyner).
- New experimental spkg libcocoa-0.9930.spkg (William Stein).

A big thank you to all the Sage bug report/patch authors who made my life as a release tour author easier through your comprehensive and concise documentation. There are too many to list here; you know who you are. A release tour can also be found on the Sage wiki.

## Sage 4.0.1 released

Sage 4.0.1 was released on June 06, 2009. For the official, comprehensive release note, please refer to sage-4.0.1.txt. The Sage trac server contains all tickets merged in this release. The following points are some of the foci of version 4.0.1:

- Nested lists as nicely formatted HTML tables.
- Update FLINT and MPIR to latest upstream releases.
- Speed optimization for algebra, basic arithmetics, and the GAP interface.
- A tool for understanding pickling.

The following 3 people made their first contribution in this release:

- Dan Christensen
- David Perkinson
- Joanna Gaski

In version 4.0.1, doctest coverage increased by 0.3% and 177 new functions were added. The overall weighted coverage score is now at 77.2% and the total number of functions is at 21,988.

The upcoming version 4.0.2 is a bug fix release for ironing out any outstanding bugs in 4.0.1 and cleaning up mergeable patches. This version is scheduled to be released before or during the MEGA 2009 conference, Barcelona, June 15–19, 2009.

After releasing version 4.0.rc0, Michael Abshoff took a break from release management. This is a well-deserved break since he has been managing over 33 high-quality releases over a period of about two years. Well done, Michael! A standing ovation to you. Mike Hansen and William Stein managed the release of Sage versions 4.0 and 4.0.1. The release managers for the upcoming 4.0.2 version are Nick Alexander and Craig Citro. You can read Michael’s announcement at this sage-devel thread, and the announcement of the 4.0.2 release managers at this sage-release thread.

Many of you know by now that Sage is on Facebook. You can become a fan of Sage by joining the Sage Math Facebook group. So far, the group boasts over 500 fans. Spurred by the success of this group, I’ve created the corresponding Sage Math LinkedIn group. After a week since its creation, the group now has about 6 members. Clearly we need more people to join this new group (wink, wink, nudge, nudge ;-).

The Annual Spies Sage Development Prize first started in 2008. The 2008 recipient was Michael Abshoff. It’s that time of the year again when the Sage community recognizes outstanding contributions from an individual to the development of Sage. Although not officially announced yet, this year’s winner is Mike Hansen. Expect to get an official announcement soon on sage-devel. Congratulations, Mike!

The following are some of the major features in this release, categorized under various headings:

**Algebra**

- Factoring rational functions (Soroosh Yazdani) — New method factor() in the class FractionFieldElement of sage/rings/fraction_field_element.pyx to return the factorization of self over the base ring. Here’s an example for working with this new method:
```
sage: K.<x> = QQ["x"]
sage: f = (x^3 + x) / (x-3)
sage: f.factor()
(x - 3)^-1 * x * (x^2 + 1)
```

- Faster basis_matrix() for ambient modules (John Cremona) — The speed-up can be up to 376x faster than previously. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: K = FreeModule(ZZ, 2000)
sage: %time I = K.basis_matrix()
CPU times: user 292.74 s, sys: 20.11 s, total: 312.85 s
Wall time: 312.90 s

# AFTER
sage: K = FreeModule(ZZ, 2000)
sage: %time I = K.basis_matrix()
CPU times: user 0.41 s, sys: 0.43 s, total: 0.84 s
Wall time: 0.83 s
```

- Optimize the construction of Lagrange interpolation polynomials (Minh Van Nguyen) — Rewrite the method lagrange_polynomial() in the class PolynomialRing_field of sage/rings/polynomial/polynomial_ring.py for generating the n-th Lagrange interpolation polynomial. The method now provides two new options:
- algorithm — (default: "divided_difference") If algorithm="divided_difference", use the method of divided differences. If algorithm="neville", use a variant of Neville’s method to recursively generate the n-th Lagrange interpolation polynomial. This adaptation of Neville’s method is more memory efficient than the original, since it doesn’t generate the full Neville table resulting from Neville’s recursive procedure; instead it only keeps track of the current and previous rows of that table.
- previous_row — (default: None) Only relevant together with algorithm="neville". Here "previous row" refers to the last row of the Neville table obtained from a previous computation of an n-th Lagrange interpolation polynomial using Neville’s method. If that last row is provided, a memory-efficient variant of Neville’s method recursively generates a better interpolation polynomial from the results of the previous computation.

There’s also the new method divided_difference() to compute the Newton divided-difference coefficients of the n-th Lagrange interpolation polynomial. The following are some timing statistics obtained using sage.math. When the results of previous computations are fed to lagrange_polynomial() to produce better interpolation polynomials, the running time can drop by up to 42%.

```
# BEFORE
# using the definition of Lagrange interpolation polynomial
sage: R = PolynomialRing(QQ, 'x')
sage: %timeit R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
1000 loops, best of 3: 1.71 ms per loop
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
625 loops, best of 3: 233 µs per loop

# without using precomputed values to generate successively better
# interpolation polynomials
sage: R = PolynomialRing(QQ, 'x')
sage: timeit("R.lagrange_polynomial([(0,1),(2,2)])");
625 loops, best of 3: 571 µs per loop
sage: # add two more points
sage: timeit("R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])");
125 loops, best of 3: 2.29 ms per loop
sage:
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1)])")
625 loops, best of 3: 76.1 µs per loop
sage: # add another point
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
625 loops, best of 3: 229 µs per loop
sage:
sage: R = PolynomialRing(QQ, 'x')
sage: points = [(random(), random()) for i in xrange(100)]
sage: time R.lagrange_polynomial(points);
CPU times: user 1.21 s, sys: 0.00 s, total: 1.21 s
Wall time: 1.21 s
sage: # add three more points
sage: for i in xrange(3): points.append((random(), random()))
....:
sage: time R.lagrange_polynomial(points);
CPU times: user 1.28 s, sys: 0.01 s, total: 1.29 s
Wall time: 1.29 s
sage: # add another 100 points
sage: for i in xrange(100): points.append((random(), random()))
....:
sage: time R.lagrange_polynomial(points);
CPU times: user 5.87 s, sys: 0.02 s, total: 5.89 s
Wall time: 5.89 s

# AFTER
# using the method of divided-difference
sage: R = PolynomialRing(QQ, 'x')
sage: %timeit R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])
1000 loops, best of 3: 827 µs per loop
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)])")
625 loops, best of 3: 111 µs per loop

# using precomputed values to generate successively better
# interpolation polynomials
sage: R = PolynomialRing(QQ, 'x')
sage: timeit("R.lagrange_polynomial([(0,1),(2,2)], neville=True)");
625 loops, best of 3: 332 µs per loop
sage: p = R.lagrange_polynomial([(0,1),(2,2)], neville=True);
sage: # add two more points
sage: timeit("R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)], neville=True, previous_row=p)");
625 loops, best of 3: 1.41 ms per loop
sage:
sage: R = PolynomialRing(GF(2**3,'a'), 'x')
sage: a = R.base_ring().gen()
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1)], neville=True)");
625 loops, best of 3: 36.4 µs per loop
sage: p = R.lagrange_polynomial([(a^2+a,a),(a,1)], neville=True);
sage: # add another point
sage: timeit("R.lagrange_polynomial([(a^2+a,a),(a,1),(a^2,a^2+a+1)], neville=True, previous_row=p)");
625 loops, best of 3: 131 µs per loop
sage:
sage: R = PolynomialRing(QQ, 'x')
sage: points = [(random(), random()) for i in xrange(100)]
sage: time R.lagrange_polynomial(points, neville=True);
CPU times: user 1.26 s, sys: 0.00 s, total: 1.26 s
Wall time: 1.26 s
sage: p = R.lagrange_polynomial(points, neville=True);
sage: # add three more points
sage: for i in xrange(3): points.append((random(), random()))
....:
sage: time R.lagrange_polynomial(points, neville=True, previous_row=p);
CPU times: user 0.09 s, sys: 0.00 s, total: 0.09 s
Wall time: 0.08 s
sage: p = R.lagrange_polynomial(points, neville=True, previous_row=p)
sage: # add another 100 points
sage: for i in xrange(100): points.append((random(), random()))
....:
sage: time R.lagrange_polynomial(points, neville=True, previous_row=p);
CPU times: user 4.62 s, sys: 0.00 s, total: 4.62 s
Wall time: 4.62 s
```
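To make the memory-saving idea concrete, here is a minimal plain-Python sketch (not the Sage code) of Neville's recursion that keeps only a single row of the Neville table while evaluating the interpolation polynomial at a point:

```python
from fractions import Fraction

def neville_eval(points, t):
    """Evaluate the Lagrange interpolation polynomial through `points`
    at `t` using Neville's recursion.

    Only the current row of the Neville table is kept and overwritten
    in place -- the same memory-saving idea described above, applied
    here to point evaluation rather than to building the polynomial.
    """
    xs = [Fraction(x) for x, _ in points]
    row = [Fraction(y) for _, y in points]   # row 0 of the Neville table
    for k in range(1, len(points)):
        for i in range(len(points) - k):
            # row[i] becomes P_{i,i+k}(t); row[i+1] is still level k-1
            row[i] = ((t - xs[i + k]) * row[i] +
                      (xs[i] - t) * row[i + 1]) / (xs[i] - xs[i + k])
    return row[0]
```

Evaluating at an interpolation node returns the node's y-value, which makes the sketch easy to sanity-check.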

**Basic Arithmetic**

- Speed overhaul for digits, exact_log and ndigits (Joel B. Mohler) — Speed-up for the cases where the method exact_log can conveniently be computed by log 2 estimation. In some cases, time efficiency can be up to 927x faster than previously. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: n = 5^1000
sage: m = 2975982357823879528793587928793592
sage: %timeit n.exact_log(m)
1000 loops, best of 3: 205 µs per loop
sage: n = 5^50
sage: m = 33
sage: %timeit n.exact_log(m)
10000 loops, best of 3: 29.6 µs per loop
sage: def zlog(m, n, k):
....:     for i in xrange(0, m/1000):
....:         a = ZZ.random_element(n) + 2
....:         b = ZZ.random_element(k)
....:         c = a^b
....:         for i in xrange(0, 1000):
....:             c.exact_log(a)
....:
sage: time zlog(100000, 2^100, 100)
CPU times: user 22.59 s, sys: 0.12 s, total: 22.71 s
Wall time: 22.70 s
sage: time zlog(100000, 100, 100)
CPU times: user 3.45 s, sys: 0.02 s, total: 3.47 s
Wall time: 3.47 s

# AFTER
sage: n = 5^1000
sage: m = 2975982357823879528793587928793592
sage: %timeit n.exact_log(m)
1000000 loops, best of 3: 221 ns per loop
sage: n = 5^50
sage: m = 33
sage: %timeit n.exact_log(m)
1000000 loops, best of 3: 526 ns per loop
sage: def zlog(m, n, k):
....:     for i in xrange(0, m/1000):
....:         a = ZZ.random_element(n) + 2
....:         b = ZZ.random_element(k)
....:         c = a^b
....:         for i in xrange(0, 1000):
....:             c.exact_log(a)
....:
sage: time zlog(100000, 2^100, 100)
CPU times: user 1.96 s, sys: 0.02 s, total: 1.98 s
Wall time: 1.99 s
sage: time zlog(100000, 100, 100)
CPU times: user 0.05 s, sys: 0.01 s, total: 0.06 s
Wall time: 0.05 s
```
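The "log 2 estimation" trick is easy to sketch in plain Python: use bit lengths to get a cheap first guess at floor(log_m(n)), then correct it with a few exact integer comparisons. This is an illustrative sketch under those assumptions, not the Sage implementation:

```python
def exact_log(n, m):
    """Floor of log base m of n, for integers n >= 1, m >= 2.

    bit_length() gives floor(log2) for free, so the initial guess is
    a ratio of bit lengths; the while loops then nudge it to the
    exact answer using only integer arithmetic.
    """
    if n < 1 or m < 2:
        raise ValueError("need n >= 1 and m >= 2")
    guess = (n.bit_length() - 1) // m.bit_length()  # deliberate underestimate
    while m ** (guess + 1) <= n:
        guess += 1
    while m ** guess > n:     # safety net; rarely triggered
        guess -= 1
    return guess
```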

**Calculus**

- Deprecate the function numerical_sqrt() (Robert Bradshaw, John H. Palmieri) — The function numerical_sqrt() in sage/misc/functional.py is now deprecated. Users are advised to instead use sqrt().

**Combinatorics**

- Sets enumerated by exploring a search space with a (lazy) tree or graph structure (Nicolas Thiery) — Extend the sage.combinat.backtrack library with other generic tools for constructing large sets whose elements can be enumerated by exploring a search space with a (lazy) tree or graph structure. The following generic utilities have been added:
- SearchForest: Depth-first search through a tree described by a “child” function.
- GenericBacktracker: Depth-first search through a tree described by a “child” function, with branch pruning, etc.
- TransitiveIdeal: Depth-first search through a graph described by a “neighbours” relation.
- TransitiveIdealGraded: Breadth-first search through a graph described by a “neighbours” relation.
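As an illustration of the TransitiveIdeal idea, here is a minimal plain-Python generator (not the Sage code) that lazily enumerates everything reachable from a set of seeds via a "neighbours" function:

```python
def transitive_ideal(seeds, neighbours):
    """Depth-first enumeration of all elements reachable from `seeds`
    by repeatedly applying the `neighbours` function.

    Elements are yielded lazily as they are discovered, so the search
    space never needs to be materialized up front.
    """
    seen = set()
    stack = list(seeds)
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        yield node
        stack.extend(neighbours(node))
```

For example, with `neighbours = lambda n: [m for m in (2*n, 3*n) if m <= 20]`, the ideal generated by 1 is the set of 3-smooth numbers up to 20.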

- The Sloane sequence A000008 (Joanna Gaski) — The Sloane sequence A000008 counts the number of ways of making change for n cents using only coins of denominations 1, 2, 5, and 10 cents. This is contained in the new class A000008 in sage/combinat/sloane_functions.py. Here are some examples on using this class:
```
sage: a = sloane.A000008; a
Number of ways of making change for n cents using coins of 1, 2, 5, 10 cents.
sage: a(0)
1
sage: a(1)
1
sage: a(13)
16
sage: a.list(14)
[1, 1, 2, 2, 3, 4, 5, 6, 7, 8, 11, 12, 15, 16]
```
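The sequence is the standard coin-change recurrence; a minimal Python version (a sketch, not the Sage class) reproduces the values above:

```python
def a000008(n):
    """Number of ways to make change for n cents using coins of
    denominations 1, 2, 5 and 10 -- the classic coin-change dynamic
    program, one pass per denomination."""
    ways = [1] + [0] * n          # ways[k] = combinations summing to k
    for coin in (1, 2, 5, 10):
        for amount in range(coin, n + 1):
            ways[amount] += ways[amount - coin]
    return ways[n]
```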

- Read ext_rep format of combinatorial designs (Carlo Hamalainen) — The new module sage/combinat/designs/ext_rep.py is an API to the abstract tree represented by an XML document containing the External Representation of a list of block designs. The relevant combinatorial designs are read from the online design theory database. This module also provides the related I/O operations for reading and writing ext-rep files or data. The parsing is based on expat.
- Dynkin diagram ASCII art for reducible Cartan types (Dan Bump) — Here are some examples on such ASCII art:
```
sage: CartanType("F4xA2").dynkin_diagram()
O---O=>=O---O
1   2   3   4
O---O
5   6
F4xA2
sage: t = CartanType("A2xB2xF4")
sage: dd = DynkinDiagram(t); dd
O---O
1   2
O=>=O
3   4
O---O=>=O---O
5   6   7   8
A2xB2xF4
```

- Speed up computation in symmetric group algebras (Dan Christensen) — The previous code essentially reimplemented the multiplication in the group algebra. The new code accumulates the symmetrizers and antisymmetrizers separately, and then does one multiplication at the end. This probably results in the same number of operations, but it avoids creating many intermediate objects. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: from sage.combinat.symmetric_group_algebra import e
sage: time dummy = e([[1,2,3,4], [5,6,7]])
CPU times: user 1.91 s, sys: 0.31 s, total: 2.22 s
Wall time: 3.15 s
sage: time e([[1,2,3,4,5],[6,7,8],[9,10],[11]]);  # hangs for over 6 hours

# AFTER
sage: from sage.combinat.symmetric_group_algebra import e
sage: time dummy = e([[1,2,3,4], [5,6,7]])
CPU times: user 0.12 s, sys: 0.05 s, total: 0.17 s
Wall time: 0.32 s
sage: time e([[1,2,3,4,5],[6,7,8],[9,10],[11]]);
CPU times: user 31.20 s, sys: 0.53 s, total: 31.73 s
Wall time: 31.73 s
```

- Improve speed of combinatorial algebra multiplication (Dan Christensen) — The speed-up concerns the method multiply() of the class CombinatorialAlgebra in sage/combinat/combinatorial_algebra.py. In some cases, the speed-up can be up to 3x. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: from sage.combinat.symmetric_group_algebra import e
sage: Y = [[1,2,3,4], [5,6]]
sage: time eY = e(Y)
CPU times: user 0.46 s, sys: 0.08 s, total: 0.54 s
Wall time: 0.74 s
sage: time eY2 = eY*eY
CPU times: user 1.47 s, sys: 0.00 s, total: 1.47 s
Wall time: 1.47 s

# AFTER
sage: from sage.combinat.symmetric_group_algebra import e
sage: Y = [[1,2,3,4], [5,6]]
sage: time eY = e(Y)
CPU times: user 0.05 s, sys: 0.02 s, total: 0.07 s
Wall time: 0.22 s
sage: time eY2 = eY*eY
CPU times: user 1.24 s, sys: 0.00 s, total: 1.24 s
Wall time: 1.24 s
```

**Graphics**

- Mesh lines for 3-D plots (Bill Cauchois) — One can produce 3-D plots with mesh lines as follows:
```
sage: plot3d(lambda x,y: exp(x+y*I).real(), (-2, 2.4), (-3, 3), mesh=True, zoom=1.3)
```

- Centering of contour and density plots (Jason Grout) — The following example shows a “spotlight” on the origin:
```
sage: x, y = var('x,y')
sage: density_plot(1/(x^10+y^10), (x, -10, 10), (y, -10, 10))
```

This plots concentric circles centered at the origin:

```
sage: x, y = var('x,y')
sage: contour_plot(x^2 + y^2 - 2, (x,-1,1), (y,-1,1)).show(aspect_ratio=1)
```

The following example plots a disk centered at the origin:

```
sage: x, y = var('x,y')
sage: region_plot(x^2 + y^2 < 1, (x,-1,1), (y,-1,1)).show(aspect_ratio=1)
```

**Interfaces**

- Improving the GAP interface by pre-compiling certain regular expressions (Simon King) — The GAP interface is now up to 37% faster than previously. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: G = SymmetricGroup(7)
sage: g = G._gap_()
sage: l = g.Elements()
sage: time L = [gap.eval(l.name() + '[%d]^2' % (i)) for i in xrange(1, 7.factorial() + 1)]
CPU times: user 1.90 s, sys: 0.16 s, total: 2.06 s
Wall time: 2.08 s

# AFTER
sage: G = SymmetricGroup(7)
sage: g = G._gap_()
sage: l = g.Elements()
sage: time L = [gap.eval(l.name() + '[%d]^2' % (i)) for i in xrange(1, 7.factorial() + 1)]
CPU times: user 1.07 s, sys: 0.22 s, total: 1.29 s
Wall time: 1.31 s
```

**Miscellaneous**

- Wrapping Sage or Python objects as Sage elements (Nicolas Thiery) — New class ElementWrapper in sage/structure/element_wrapper.py for wrapping Sage or Python objects as Sage elements, with reasonable default implementations of repr, cmp, hash, etc. The typical use case is for trivially constructing new element classes from pre-existing Sage or Python classes, with a containment relation. Here’s an example on using ElementWrapper:
```
sage: o = ElementWrapper("bla", parent=ZZ); o
'bla'
sage: isinstance(o, sage.structure.element.Element)
True
sage: o.parent()
Integer Ring
sage: o.value
'bla'
```

- A tool for understanding Python pickles (Carl Witty) — The new module sage/misc/explain_pickle.py has a function called explain_pickle that takes a pickle and produces Sage code that will evaluate to the contents of the pickle. The combination of explain_pickle to produce Sage code and sage_eval to evaluate that code should be a 100% compatible implementation of cPickle’s unpickler: feeding the result of explain_pickle to sage_eval should be totally equivalent to loading the pickle with cPickle. Here are some examples on using explain_pickle:
```
sage: explain_pickle(dumps({('a', 'b'): [1r, 2r]}))
{('a', 'b'):[1r, 2r]}
sage: explain_pickle(dumps(RR(pi)), in_current_sage=True)
from sage.rings.real_mpfr import __create__RealNumber_version0
from sage.rings.real_mpfr import __create__RealField_version0
__create__RealNumber_version0(__create__RealField_version0(53r, False, 'RNDN'), '3.4gvml245kc0@0', 32r)
sage: s = 'hi'
sage: explain_pickle(dumps((s, s)))
('hi', 'hi')
sage: explain_pickle(dumps((s, s)), pedantic=True)
si = 'hi'
(si, si)
sage: explain_pickle(dumps(5r))
5r
sage: explain_pickle(dumps(22/7))
pg_make_rational = unpickle_global('sage.rings.rational', 'make_rational')
pg_make_rational('m/7')
sage: explain_pickle(dumps(22/7), in_current_sage=True)
from sage.rings.rational import make_rational
make_rational('m/7')
sage: explain_pickle(dumps(22/7), default_assumptions=True)
from sage.rings.rational import make_rational
make_rational('m/7')
```
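Python's standard library offers a related, lower-level view of the same problem: pickletools.dis prints an opcode-by-opcode listing of a pickle, which is the raw material a tool like explain_pickle turns back into readable source code. A small stdlib-only illustration (not part of Sage):

```python
import io
import pickle
import pickletools

# Disassemble a pickle into its opcode stream.  Every well-formed
# pickle ends with a STOP opcode, and a dict literal starts from
# an EMPTY_DICT opcode in modern protocols.
buf = io.StringIO()
pickletools.dis(pickle.dumps({'a': [1, 2]}), out=buf)
listing = buf.getvalue()
```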

- S-box calling when the numbers of input and output bits differ (Martin Albrecht) — An m × n S-box takes m input bits and transforms them into n output bits. Invoking an S-box with m ≠ n is now supported:
```
sage: S = mq.SBox(3, 0, 1, 3, 1, 0, 2, 2)
sage: S(0)
3
sage: S([0,0,0])
[1, 1]
```

**Modular Forms**

- Membership testing for modular forms subspaces (David Loeffler) — One can test such membership as follows:
```
sage: M = ModularForms(17, 4)
sage: S = M.cuspidal_submodule()
sage: M.0 == S.0
True
sage: M.0 in S
True
```

**Notebook**

- Show nested lists as HTML tables (Wilfried Huss) — One can produce such HTML tables as follows:
```
sage: functions = [sin(x), cos(x), tan(x), acos(x)]
sage: t = [[f, taylor(f, x, 0, 10)] for f in functions]
sage: html.table([["Function", "Series"]] + t, header = True)
```

One can also place graphic objects into the table:

```
sage: f = 1/x*sin(x)
sage: t = [["Function", "Plot"], [f, plot(f, x, -4*pi, 4*pi)]]
sage: html.table(t, header = True)
```

- Limit the number of worksheet snapshots (Rob Beezer) — Reduce the unlimited growth of snapshots of worksheets when using the notebook.

**Number Theory**

- Galois action (David Loeffler) — For example, one can now perform computations similar to the following:
```
sage: F.<z> = CyclotomicField(7)
sage: G = F.galois_group()
sage: phi = G.random_element()
sage: phi(z)
z^4
```

- Period lattices for elliptic curves over number fields (John Cremona) — For elliptic curves over number fields, the period lattice with respect to any complex embedding is supported, using the complex AGM (Gauss’ arithmetic-geometric mean) method to compute a basis. Here’s an example:
```
sage: K.<a> = NumberField(x^3 - 2)
sage: E = EllipticCurve([0, 1, 0, a, a])
sage: emb = K.embeddings(ComplexField())[0]
sage: E.period_lattice(emb)
Period lattice associated to Elliptic Curve defined by
y^2 = x^3 + x^2 + a*x + a over Number Field in a with defining
polynomial x^3 - 2 with respect to the embedding Ring morphism:
  From: Number Field in a with defining polynomial x^3 - 2
  To:   Algebraic Field
  Defn: a |--> -0.6299605249474365? - 1.091123635971722?*I
```

- Move the algebraic_closure method from RLF to LazyField (Nick Alexander).

**Numerical**

- Solving the subset sum problem for super-increasing sequences (Minh Van Nguyen) — New module sage/numerical/knapsack.py for solving knapsack problems. The class Superincreasing in that module can be used to solve the subset sum problem for super-increasing sequences. Here are some examples:
```
sage: from sage.numerical.knapsack import Superincreasing
sage: L = [1, 2, 5, 21, 69, 189, 376, 919]
sage: seq = Superincreasing(L)
sage: seq
Super-increasing sequence of length 8
sage: seq.is_superincreasing()
True
sage: Superincreasing().is_superincreasing([1,3,5,7])
False
```
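The reason super-increasing sequences are tractable is that a greedy scan from the largest element downward solves subset sum exactly: each element is larger than the sum of everything below it, so it must be taken whenever it fits. A minimal Python sketch of that solver (hypothetical helper name, not Sage's API):

```python
def subset_sum_superincreasing(seq, target):
    """Solve subset sum for a super-increasing sequence `seq`
    (ascending order assumed).  Scan from the largest element down,
    taking an element whenever it still fits.  Returns the chosen
    subset in ascending order, or None if no solution exists."""
    chosen = []
    remaining = target
    for item in reversed(seq):
        if item <= remaining:
            chosen.append(item)
            remaining -= item
    return sorted(chosen) if remaining == 0 else None
```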

**Packages**

- Update the ATLAS spkg to version atlas-3.8.3.p3.spkg (William Stein).
- Update FLINT to version 1.2.5, the latest upstream release (Michael Abshoff, Mike Hansen).
- Update the GAP spkg to version gap-4.4.10.p12.spkg (William Stein).
- Update MPIR to version 1.2, the latest upstream release (William Stein).
- Update the optional SageTeX spkg to version 2.1.1 (Dan Drake).

**Symbolics**

- Simplify when multiplying by exponential expressions (Burcin Erocal, Mike Hansen) — Here are some examples:
```
sage: x, y = var("x,y")
sage: exp(x) * exp(y)
e^(x + y)
sage: x^y * exp(x+y) * exp(-y)
x^y*e^x
sage: x^y * exp(x+y) * (x+y) * (2*x + 2*y) * exp(-y)
2*(x + y)^2*x^y*e^x
sage: A = exp(I*pi/5)
sage: t = A*A*A*A; t
e^(4/5*I*pi)
sage: t*A
-1
```

**Topology**

- Change facets from an attribute to a method in simplicial complexes (John Palmieri).

A big thank you to all the Sage bug report/patch authors who made my life as a release tour author easier through your comprehensive and concise documentation. There are too many to list here; you know who you are. A release tour can also be found on the Sage wiki.

## Sage 4.0 released

Sage 4.0 was released on May 29, 2009. For all the changes in this version, please refer to the official, comprehensive release note or the trac server. The following points are some of the foci of this release:

- New symbolics based on Pynac
- Bring doctest coverage up to at least 75%
- Solaris 10 support (at least for gcc 4.3.x and gmake)
- Switch from Clisp to ECL
- OS X 64-bit support

With this release, 140 tickets were closed, bringing the doctest coverage up to 76.9%. The number of functions is now at 21,811 which is a decrease of 537 functions since Sage 3.4.2. Forty people contributed to the release of Sage 4.0, of which eight made their first contribution:

- Anthony David
- arattan AT gmail.com
- Fredrik Johansson
- Lloyd Kilford
- Luiz Berlioz
- Radoslav Kirov
- Ron Evans
- Ryan Dingman

Along with source and binaries for various platforms, there’s also a live CD for you to use, courtesy of Lucio Lastra. Here are some of the main features of this release, categorized under various headings:

**Algebra**

- Deprecate the order() method on elements of rings (John Palmieri) — The method order() of the class sage.structure.element.RingElement is now deprecated and will be removed in a future release. For additive or multiplicative order, use the additive_order() or multiplicative_order() method respectively.
- Partial fraction decomposition for irreducible denominators (Gonzalo Tornaria) — For example, with polynomials over the integers you can do
```
sage: R.<x> = ZZ["x"]
sage: q = x^2 / (x - 1)
sage: q.partial_fraction_decomposition()
(x + 1, [1/(x - 1)])
sage: q = x^10 / (x - 1)^5
sage: whole, parts = q.partial_fraction_decomposition()
sage: whole + sum(parts)
x^10/(x^5 - 5*x^4 + 10*x^3 - 10*x^2 + 5*x - 1)
sage: whole + sum(parts) == q
True
```

and over the finite field GF(2):

```
sage: R.<x> = GF(2)["x"]
sage: q = (x + 1) / (x^3 + x + 1)
sage: q.partial_fraction_decomposition()
(0, [(x + 1)/(x^3 + x + 1)])
```

**Algebraic Geometry**

- Various invariants for genus 2 hyperelliptic curves (Nick Alexander) — The following invariants for genus 2 hyperelliptic curves are implemented in the module sage/schemes/hyperelliptic_curves/hyperelliptic_g2_generic.py:
- the Clebsch invariants
- the Igusa-Clebsch invariants
- the absolute Igusa invariants

**Basic Arithmetic**

- Utility methods for integer arithmetic (Fredrik Johansson) — New methods trailing_zero_bits() and sqrtrem() for the class Integer in sage/rings/integer.pyx:
- trailing_zero_bits(self) — Returns the number of trailing zero bits in self, i.e. the exponent of the largest power of 2 dividing self.
- sqrtrem(self) — Returns a pair (s, r) where s is the integer square root of self and r is the remainder such that self = s^2 + r.

Here are some examples for working with these new methods:

```
sage: 13.trailing_zero_bits()
0
sage: (-13).trailing_zero_bits()
0
sage: (-13 >> 2).trailing_zero_bits()
2
sage: (-13 >> 3).trailing_zero_bits()
1
sage: (-13 << 3).trailing_zero_bits()
3
sage: (-13 << 2).trailing_zero_bits()
2
sage: 29.sqrtrem()
(5, 4)
sage: 25.sqrtrem()
(5, 0)
```
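For readers without Sage at hand, the two definitions above can be mimicked in plain Python (a sketch only, not Sage's implementation, which works directly on GMP integers):

```python
import math

def trailing_zero_bits(n):
    # exponent of the largest power of 2 dividing n (n != 0);
    # n & -n isolates the lowest set bit, also for negative n
    return (n & -n).bit_length() - 1

def sqrtrem(n):
    # pair (s, r) with s the integer square root of n and n = s^2 + r
    s = math.isqrt(n)
    return s, n - s * s
```

This reproduces the sample values above, e.g. trailing_zero_bits(-13 << 3) is 3 and sqrtrem(29) is (5, 4).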

- Casting from float to rationals (Robert Bradshaw) — One can now create a rational out of a float. Here’s an example:
```
sage: a = float(1.0)
sage: QQ(a)
1
sage: type(a); type(QQ(a))
<type 'float'>
<type 'sage.rings.rational.Rational'>
```

- Speedup to Integer creation (Robert Bradshaw) — Memory for recycled integers is only reclaimed if more than 10 limbs are used, giving a significant speedup for small integers. (Previously all integers were reallocated to a single limb, which were often then reallocated to two limbs for arithmetic operations even when the result fit into a single limb.)

**Combinatorics**

- ASCII art output for Dynkin diagrams (Dan Bump) — Support for ASCII art representation of Dynkin diagrams of a finite Cartan type. Here are some examples:
```
sage: DynkinDiagram("E6")
        O 2
        |
        |
O---O---O---O---O
1   3   4   5   6
E6
sage: DynkinDiagram(['E',6,1])
        O 0
        |
        |
        O 2
        |
        |
O---O---O---O---O
1   3   4   5   6
E6~
```

- Crystal of letters for type E (Brant Jones, Anne Schilling) — Support for the crystal of letters for type E corresponding to the highest weight crystal and its dual (using the Sage labeling convention of the Dynkin nodes). Here are some examples:
```
sage: C = CrystalOfLetters(['E',6])
sage: C.list()
[[1], [-1, 3], [-3, 4], [-4, 2, 5], [-2, 5], [-5, 2, 6], [-2, -5, 4, 6], [-4, 3, 6], [-3, 1, 6], [-1, 6], [-6, 2], [-2, -6, 4], [-4, -6, 3, 5], [-3, -6, 1, 5], [-1, -6, 5], [-5, 3], [-3, -5, 1, 4], [-1, -5, 4], [-4, 1, 2], [-1, -4, 2, 3], [-3, 2], [-2, -3, 4], [-4, 5], [-5, 6], [-6], [-2, 1], [-1, -2, 3]]
sage: C = CrystalOfLetters(['E',6], element_print_style="compact")
sage: C.list()
[+, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z]
```

**Commutative Algebra**

- Improved performance for SR (Martin Albrecht) — The speed-up for SR is up to 6x. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: sr = mq.SR(4, 4, 4, 8, gf2=True, polybori=True, allow_zero_inversions=True)
sage: %time F,s = sr.polynomial_system()
CPU times: user 21.65 s, sys: 0.03 s, total: 21.68 s
Wall time: 21.83 s

# AFTER
sage: sr = mq.SR(4, 4, 4, 8, gf2=True, polybori=True, allow_zero_inversions=True)
sage: %time F,s = sr.polynomial_system()
CPU times: user 3.61 s, sys: 0.06 s, total: 3.67 s
Wall time: 3.67 s
```

- Symmetric Groebner bases and infinitely generated polynomial rings (Simon King, Mike Hansen) — The new modules sage/rings/polynomial/infinite_polynomial_element.py and sage/rings/polynomial/infinite_polynomial_ring.py support computation in polynomial rings with a countably infinite number of variables. Here are some examples for working with these new modules:
```
sage: from sage.rings.polynomial.infinite_polynomial_element import InfinitePolynomial
sage: X.<x> = InfinitePolynomialRing(QQ)
sage: a = InfinitePolynomial(X, "(x1 + x2)^2"); a
x2^2 + 2*x2*x1 + x1^2
sage: p = a.polynomial()
sage: b = InfinitePolynomial(X, a.polynomial())
sage: a == b
True
sage: InfinitePolynomial(X, int(1))
1
sage: InfinitePolynomial(X, 1)
1
sage: Y.<x,y> = InfinitePolynomialRing(GF(2), implementation="sparse")
sage: InfinitePolynomial(Y, a)
x2^2 + x1^2
sage: X.<x,y> = InfinitePolynomialRing(QQ, implementation="sparse")
sage: A.<a,b> = InfinitePolynomialRing(QQ, order="deglex")
sage: f = x[5] + 2; f
x5 + 2
sage: g = 3*y[1]; g
3*y1
sage: g._p.parent()
Univariate Polynomial Ring in y1 over Rational Field
sage: f2 = a[5] + 2; f2
a5 + 2
sage: g2 = 3*b[1]; g2
3*b1
sage: A.polynomial_ring()
Multivariate Polynomial Ring in b5, b4, b3, b2, b1, b0, a5, a4, a3, a2, a1, a0 over Rational Field
sage: f + g
3*y1 + x5 + 2
sage: p = x[10]^2 * (f + g); p
3*y1*x10^2 + x10^2*x5 + 2*x10^2
```

Furthermore, the new module sage/rings/polynomial/symmetric_ideal.py supports ideals of polynomial rings in a countably infinite number of variables that are invariant under variable permutation. Symmetric reduction of infinite polynomials is provided by the new module sage/rings/polynomial/symmetric_reduction.pyx.

**Geometry**

- Simplicial complex method for polytopes (Marshall Hampton) — New method simplicial_complex() in the class Polyhedron of sage/geometry/polyhedra.py for computing the simplicial complex from a triangulation of the polytope. Here’s an example:
```
sage: p = polytopes.cuboctahedron()
sage: p.simplicial_complex()
Simplicial complex with 13 vertices and 20 facets
```

- Face lattices and f-vectors for polytopes (Marshall Hampton) — New methods face_lattice() and f_vector() in the class Polyhedron of sage/geometry/polyhedra.py:
- face_lattice() — Returns the face-lattice poset. Elements are tuples of (vertices, facets), which keep track of both the vertices in each face and all the facets containing them. This method implements the results of the following paper:
- V. Kaibel and M.E. Pfetsch. Computing the face lattice of a polytope from its vertex-facet incidences. Computational Geometry, 23(3):281–290, 2002.

- f_vector() — Returns the f-vector of a polytope as a list.

Here are some examples:

```
sage: c5_10 = Polyhedron(vertices = [[i,i^2,i^3,i^4,i^5] for i in xrange(1,11)])
sage: c5_10_fl = c5_10.face_lattice()
sage: [len(x) for x in c5_10_fl.level_sets()]
[1, 10, 45, 100, 105, 42, 1]
sage: p = Polyhedron(vertices = [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1], [0, 0, 0]])
sage: p.f_vector()
[1, 7, 12, 7, 1]
```

**Graph Theory**

- Graph colouring (Robert Miller) — New method coloring() of the class sage.graphs.graph.Graph for obtaining the first (optimal) coloring found on a graph. Here are some examples of using this new method:
```
sage: G = Graph("Fooba")
sage: P = G.coloring()
sage: G.plot(partition=P)
sage: H = G.coloring(hex_colors=True)
sage: G.plot(vertex_colors=H)
```
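As a point of comparison, here is a minimal greedy colouring in plain Python (a hypothetical sketch of my own; unlike Sage's coloring(), greedy colouring is not guaranteed to be optimal):

```python
def greedy_coloring(adj):
    # adj maps each vertex to its list of neighbours; assign each vertex
    # the smallest colour not already used by a coloured neighbour
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# a triangle needs three colours, and greedy happens to find exactly three
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```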

- Optimize the construction of large Sage graphs (Radoslav Kirov) — The construction of large Sage graphs is now up to 19x faster than previously. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: D = {}
sage: for i in xrange(10^3):
....:     D[i] = [i+1, i-1]
....:
sage: timeit("g = Graph(D)")
5 loops, best of 3: 1.02 s per loop

# AFTER
sage: D = {}
sage: for i in xrange(10^3):
....:     D[i] = [i+1, i-1]
....:
sage: timeit("g = Graph(D)")
5 loops, best of 3: 51.2 ms per loop
```

- Generate size-n trees in linear time (Ryan Dingman) -- The speed-up can be up to 3400x, and the efficiency gain grows as n becomes larger. The following timing statistics were produced using the machine sage.math:
```
# BEFORE
sage: %time L = list(graphs.trees(2))
CPU times: user 0.13 s, sys: 0.02 s, total: 0.15 s
Wall time: 0.18 s
sage: %time L = list(graphs.trees(4))
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.02 s
sage: %time L = list(graphs.trees(6))
CPU times: user 0.08 s, sys: 0.00 s, total: 0.08 s
Wall time: 0.07 s
sage: %time L = list(graphs.trees(8))
CPU times: user 0.59 s, sys: 0.00 s, total: 0.59 s
Wall time: 0.60 s
sage: %time L = list(graphs.trees(10))
CPU times: user 34.48 s, sys: 0.02 s, total: 34.50 s
Wall time: 34.51 s

# AFTER
sage: %time L = list(graphs.trees(2))
CPU times: user 0.11 s, sys: 0.02 s, total: 0.13 s
Wall time: 0.15 s
sage: %time L = list(graphs.trees(4))
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.00 s
sage: %time L = list(graphs.trees(6))
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
sage: %time L = list(graphs.trees(8))
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
sage: %time L = list(graphs.trees(10))
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
sage: %time L = list(graphs.trees(12))
CPU times: user 0.06 s, sys: 0.00 s, total: 0.06 s
Wall time: 0.05 s
sage: %time L = list(graphs.trees(14))
CPU times: user 0.51 s, sys: 0.01 s, total: 0.52 s
Wall time: 0.52 s
```

**Graphics**

- Implicit Surfaces (Bill Cauchois, Carl Witty) -- New function implicit_plot3d for plotting level sets of 3-D functions. The documentation contains many examples. Here's a sphere contained inside a tube-like sphere:
```
sage: x, y, z = var("x, y, z")
sage: T = 1.61803398875
sage: p = 2 - (cos(x + T*y) + cos(x - T*y) + cos(y + T*z) + cos(y - T*z) + cos(z - T*x) + cos(z + T*x))
sage: r = 4.77
sage: implicit_plot3d(p, (-r, r), (-r, r), (-r, r), plot_points=40, zoom=1.2).show()
```

Here's a Klein bottle:

```
sage: x, y, z = var("x, y, z")
sage: implicit_plot3d((x^2+y^2+z^2+2*y-1)*((x^2+y^2+z^2-2*y-1)^2-8*z^2)+16*x*z*(x^2+y^2+z^2-2*y-1), (x, -3, 3), (y, -3.1, 3.1), (z, -4, 4), zoom=1.2)
```

This example shows something resembling a water droplet:

```
sage: x, y, z = var("x, y, z")
sage: implicit_plot3d(x^2 + y^2 - (1-z)*z^2, (x, -1.5, 1.5), (y, -1.5, 1.5), (z, -1, 1), zoom=1.2)
```

- Fixed bug in rendering 2-D polytopes embedded in 3-D (Arnauld Bergeron, Bill Cauchois, Marshall Hampton).

**Group Theory**

- Improved efficiency of is_subgroup (Simon King) -- Testing whether a group is a subgroup of another group is now up to 2x faster than previously. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: G = SymmetricGroup(7)
sage: H = SymmetricGroup(6)
sage: %time H.is_subgroup(G)
CPU times: user 4.12 s, sys: 0.53 s, total: 4.65 s
Wall time: 5.51 s
True
sage: %timeit H.is_subgroup(G)
10000 loops, best of 3: 118 µs per loop

# AFTER
sage: G = SymmetricGroup(7)
sage: H = SymmetricGroup(6)
sage: %time H.is_subgroup(G)
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
True
sage: %timeit H.is_subgroup(G)
10000 loops, best of 3: 56.3 µs per loop
```

**Interfaces**

- Viewing Sage objects with a PDF viewer (Nicolas Thiery) -- Implements the option viewer="pdf" for the command view() so that one can invoke this command in the form view(object, viewer="pdf") in order to view object using a PDF viewer. Typical uses of this new optional argument include:
- You prefer to use a PDF viewer rather than a DVI viewer.
- You want to view snippets which are not displayed well in DVI viewers (e.g. graphics produced using tikzpicture).

- Change name of Pari's sum function when imported (Craig Citro) -- When Pari's sum function is imported, it is renamed to pari_sum in order to avoid a conflict with Python's built-in sum function.
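The same shadowing problem exists in plain Python, and the standard fix is likewise a rename at import time (a generic sketch using math.fsum as a stand-in, not the actual interface code):

```python
# Renaming on import keeps Python's builtin sum() usable alongside the
# imported function, just as Pari's sum becomes pari_sum.
from math import fsum as float_sum

assert sum([1, 2, 3]) == 6            # the builtin is untouched
assert float_sum([1.0, 2.0]) == 3.0   # the renamed import works too
```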

**Linear Algebra**

- Improved performance for the generic linear_combination_of_rows and linear_combination_of_columns functions for matrices (William Stein) -- The speed-up is up to 4x. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: A = random_matrix(QQ, 50)
sage: v = [1..50]
sage: %timeit A.linear_combination_of_rows(v);
1000 loops, best of 3: 1.99 ms per loop
sage: %timeit A.linear_combination_of_columns(v);
1000 loops, best of 3: 1.97 ms per loop

# AFTER
sage: A = random_matrix(QQ, 50)
sage: v = [1..50]
sage: %timeit A.linear_combination_of_rows(v);
1000 loops, best of 3: 436 µs per loop
sage: %timeit A.linear_combination_of_columns(v);
1000 loops, best of 3: 457 µs per loop
```

- Massively improved performance for 4 x 4 determinants (Tom Boothby) -- The efficiency of computing the determinants of 4 x 4 matrices can range from 16x up to 58,083x faster than previously, depending on the base ring. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: S = MatrixSpace(ZZ, 4)
sage: M = S.random_element(1, 10^8)
sage: timeit("M.det(); M._clear_cache()")
625 loops, best of 3: 53 µs per loop
sage: M = S.random_element(1, 10^10)
sage: timeit("M.det(); M._clear_cache()")
625 loops, best of 3: 54.1 µs per loop
sage: M = S.random_element(1, 10^200)
sage: timeit("M.det(); M._clear_cache()")
5 loops, best of 3: 121 ms per loop
sage: M = S.random_element(1, 10^300)
sage: timeit("M.det(); M._clear_cache()")
5 loops, best of 3: 338 ms per loop
sage: M = S.random_element(1, 10^1000)
sage: timeit("M.det(); M._clear_cache()")
5 loops, best of 3: 9.7 s per loop

# AFTER
sage: S = MatrixSpace(ZZ, 4)
sage: M = S.random_element(1, 10^8)
sage: timeit("M.det(); M._clear_cache()")
625 loops, best of 3: 3.17 µs per loop
sage: M = S.random_element(1, 10^10)
sage: timeit("M.det(); M._clear_cache()")
625 loops, best of 3: 3.44 µs per loop
sage: M = S.random_element(1, 10^200)
sage: timeit("M.det(); M._clear_cache()")
625 loops, best of 3: 15.3 µs per loop
sage: M = S.random_element(1, 10^300)
sage: timeit("M.det(); M._clear_cache()")
625 loops, best of 3: 27 µs per loop
sage: M = S.random_element(1, 10^1000)
sage: timeit("M.det(); M._clear_cache()")
625 loops, best of 3: 167 µs per loop
```

- Refactor matrix kernels (Rob Beezer) -- The core kernel computation for each (specialized) class is now moved into the method right_kernel(). These mostly replace kernel() methods that computed left kernels. A call to kernel() or left_kernel() now arrives at the top of the hierarchy, where it takes a transpose and calls the (specialized) right_kernel(). There is thus no change in behavior for routines currently calling kernel() or left_kernel(), and Sage's preference for the left is retained by having the vanilla kernel() return a left kernel. By eliminating paired transposes, computing left kernels is up to 5% faster and computing right kernels up to 31% faster. The following timing statistics were obtained using sage.math:
```
# BEFORE
sage: n = 2000
sage: entries = [[1/(i+j+1) for i in srange(n)] for j in srange(n)]
sage: mat = matrix(QQ, entries)
sage: %time mat.left_kernel();
CPU times: user 21.92 s, sys: 3.22 s, total: 25.14 s
Wall time: 25.26 s
sage: %time mat.right_kernel();
CPU times: user 23.62 s, sys: 3.32 s, total: 26.94 s
Wall time: 26.94 s

# AFTER
sage: n = 2000
sage: entries = [[1/(i+j+1) for i in srange(n)] for j in srange(n)]
sage: mat = matrix(QQ, entries)
sage: %time mat.left_kernel();
CPU times: user 20.87 s, sys: 2.94 s, total: 23.81 s
Wall time: 23.89 s
sage: %time mat.right_kernel();
CPU times: user 18.43 s, sys: 0.00 s, total: 18.43 s
Wall time: 18.43 s
```
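The identity underlying the refactoring -- a vector lies in the left kernel of A exactly when it lies in the right kernel of the transpose of A -- can be checked with a tiny plain-Python sketch (my own illustration, not Sage code):

```python
def mat_vec(A, v):
    # multiply a matrix (list of rows) by a column vector
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# The second row of A is twice the first, so w = (2, -1, 0) is a left
# kernel vector of A: w * A = 0, equivalently A^T * w = 0.
A = [[1, 2], [2, 4], [3, 5]]
w = [2, -1, 0]
```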

- Cholesky decomposition for matrices other than RDF (Nick Alexander) -- The method cholesky() of the class Matrix_double_dense in sage/matrix/matrix_double_dense.pyx is now deprecated and will be removed in a future release. Users are advised to use cholesky_decomposition() instead. The new method cholesky_decomposition() in the class Matrix of sage/matrix/matrix2.pyx can be used to compute the Cholesky decomposition of matrices with entries over arbitrary precision real and complex fields. Here's an example over the real double field:
```
sage: r = matrix(RDF, 5, 5, [ 0,0,0,0,1, 1,1,1,1,1, 16,8,4,2,1, 81,27,9,3,1, 256,64,16,4,1 ])
sage: m = r * r.transpose(); m
[    1.0     1.0     1.0     1.0     1.0]
[    1.0     5.0    31.0   121.0   341.0]
[    1.0    31.0   341.0  1555.0  4681.0]
[    1.0   121.0  1555.0  7381.0 22621.0]
[    1.0   341.0  4681.0 22621.0 69905.0]
sage: L = m.cholesky_decomposition(); L
[          1.0           0.0           0.0           0.0           0.0]
[          1.0           2.0           0.0           0.0           0.0]
[          1.0          15.0 10.7238052948           0.0           0.0]
[          1.0          60.0 60.9858144589 7.79297342371           0.0]
[          1.0         170.0 198.623524155 39.3665667796 1.72309958068]
sage: L.parent()
Full MatrixSpace of 5 by 5 dense matrices over Real Double Field
sage: L*L.transpose()
[    1.0     1.0     1.0     1.0     1.0]
[    1.0     5.0    31.0   121.0   341.0]
[    1.0    31.0   341.0  1555.0  4681.0]
[    1.0   121.0  1555.0  7381.0 22621.0]
[    1.0   341.0  4681.0 22621.0 69905.0]
sage: ( L*L.transpose() - m ).norm(1) < 2^-30
True
```

Here's an example over a higher precision real field:

```
sage: r = matrix(RealField(100), 5, 5, [ 0,0,0,0,1, 1,1,1,1,1, 16,8,4,2,1, 81,27,9,3,1, 256,64,16,4,1 ])
sage: m = r * r.transpose()
sage: L = m.cholesky_decomposition()
sage: L.parent()
Full MatrixSpace of 5 by 5 dense matrices over Real Field with 100 bits of precision
sage: ( L*L.transpose() - m ).norm(1) < 2^-50
True
```

Here's a Hermitian example:

```
sage: r = matrix(CDF, 2, 2, [ 1, -2*I, 2*I, 6 ]); r
[   1.0 -2.0*I]
[ 2.0*I    6.0]
sage: r.eigenvalues()
[0.298437881284, 6.70156211872]
sage: ( r - r.conjugate().transpose() ).norm(1) < 1e-30
True
sage: L = r.cholesky_decomposition(); L
[          1.0             0]
[        2.0*I 1.41421356237]
sage: ( r - L*L.conjugate().transpose() ).norm(1) < 1e-30
True
sage: L.parent()
Full MatrixSpace of 2 by 2 dense matrices over Complex Double Field
```

Note that the implementation uses a standard recursion that is not known to be numerically stable. Furthermore, it is potentially expensive to check that the input is positive definite. Therefore this is not checked, and it is possible that the output matrix is not a valid Cholesky decomposition of the input matrix.
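The standard recursion mentioned above can be sketched in a few lines of plain Python (a sketch of the classical algorithm, assuming, as the note warns, that the input really is symmetric positive definite -- nothing is checked):

```python
import math

def cholesky(m):
    # Return lower-triangular L with m = L * L^T, via the classical
    # recursion; no positive-definiteness or symmetry check is done.
    n = len(m)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # diagonal entry: sqrt of what remains of m[i][i]
                L[i][j] = math.sqrt(m[i][i] - s)
            else:
                # off-diagonal entry: solve against the pivot L[j][j]
                L[i][j] = (m[i][j] - s) / L[j][j]
    return L
```

For [[4, 2], [2, 3]] this yields L = [[2, 0], [1, sqrt(2)]], and L * L^T reproduces the input.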

- Make symbolic matrices use pynac symbolics (Mike Hansen, Jason Grout) -- Using Pynac symbolics, calculating the determinant of a symbolic matrix can be up to 2500x faster than previously. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: x00, x01, x10, x11 = var("x00, x01, x10, x11")
sage: a = matrix(2, [[x00,x01], [x10,x11]])
sage: %timeit a.det()
100 loops, best of 3: 8.29 ms per loop

# AFTER
sage: x00, x01, x10, x11 = var("x00, x01, x10, x11")
sage: a = matrix(2, [[x00,x01], [x10,x11]])
sage: %timeit a.det()
100000 loops, best of 3: 3.2 µs per loop
```

**Miscellaneous**

- Allow use of pdflatex instead of latex (John Palmieri) -- One can now use pdflatex instead of latex in two different ways:
- Use a %pdflatex cell in a notebook; or
- Call latex.pdflatex(True), after which any use of latex (in a %latex cell or via the view command) will use pdflatex.

One visually appealing consequence is that if you have the most recent version of pgf installed, along with the tkz-graph package, you can produce nicely typeset images of graphs.

**Modular Forms**

- Action of Hecke operators on Gamma_1(N) modular forms (David Loeffler) -- Here's an example:
```
sage: ModularForms(Gamma1(11), 2).hecke_matrix(2)
[      -2        0        0        0        0        0        0        0        0        0]
[       0     -381        0     -360        0      120    -4680    -6528    -1584     7752]
[       0     -190        0     -180        0       60    -2333    -3262     -789     3887]
[       0  -634/11        1  -576/11        0   170/11 -7642/11 -10766/11    -231 12555/11]
[       0    98/11        0    78/11        0   -26/11  1157/11  1707/11       30 -1959/11]
[       0   290/11        0   271/11        0   -50/11  3490/11  5019/11       99 -5694/11]
[       0   230/11        0   210/11        0   -70/11  2807/11  3940/11       84 -4632/11]
[       0   122/11        0   120/11        1   -40/11  1505/11  2088/11       48 -2463/11]
[       0    42/11        0    46/11        0   -30/11   554/11   708/11       21  -970/11]
[       0    10/11        0    12/11        0     7/11   123/11   145/11        7  -177/11]
```

- Slopes of an operator acting on a space of overconvergent modular forms (Lloyd Kilford) -- New method slopes() of the class OverconvergentModularFormsSpace in sage/modular/overconvergent/genus0.py for computing the slopes of an operator acting on a space of overconvergent modular forms. Here are some examples of using this new method:
```
sage: OverconvergentModularForms(5,2,1/3,base_ring=Qp(5),prec=100).slopes(5)
[0, 2, 5, 6, 9]
sage: OverconvergentModularForms(2,1,1/3,char=DirichletGroup(4,QQ).0).slopes(5)
[0, 2, 4, 6, 8]
```

**Number Theory**

- Function multiplicative_generator() for the unit group of Z/nZ (David Loeffler) -- This adds support for the case where n is twice a power of an odd prime. Also, the new method subgroups() is added to the class AbelianGroup_class in sage/groups/abelian_gps/abelian_group.py. The method computes all the subgroups of a finite abelian group. Here's an example of working with the new method subgroups():
```
sage: AbelianGroup([2,3]).subgroups()
[Multiplicative Abelian Group isomorphic to C2 x C3, which is the subgroup of Multiplicative Abelian Group isomorphic to C2 x C3 generated by [f0*f1^2], Multiplicative Abelian Group isomorphic to C2, which is the subgroup of Multiplicative Abelian Group isomorphic to C2 x C3 generated by [f0], Multiplicative Abelian Group isomorphic to C3, which is the subgroup of Multiplicative Abelian Group isomorphic to C2 x C3 generated by [f1], Trivial Abelian Group, which is the subgroup of Multiplicative Abelian Group isomorphic to C2 x C3 generated by []]
sage: len(AbelianGroup([2,3,8]).subgroups())
22
sage: len(AbelianGroup([2,4,8]).subgroups())
81
```

- Speed-up relativization of number fields (Nick Alexander) -- The efficiency gain of relativizing a number field can be up to 1700x. Furthermore, the rewrite of the method relativize() allows for relativization over large number fields. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: x = ZZ['x'].0
sage: f1 = x^6 - x^5 + 3*x^4 - x^3 + 2*x + 1
sage: f2 = x^6 - 3*x^4 - 3*x^3 + x^2 - 5*x + 128
sage: Cs = NumberField(f1, 'a').composite_fields(NumberField(f2, 'b'), 'c')
sage: %time Cs[0].relativize(Cs[0].subfields(6)[0][1], 'z');
CPU times: user 4899.67 s, sys: 0.17 s, total: 4899.84 s
Wall time: 4900.01 s

# AFTER
sage: x = ZZ['x'].0
sage: f1 = x^6 - x^5 + 3*x^4 - x^3 + 2*x + 1
sage: f2 = x^6 - 3*x^4 - 3*x^3 + x^2 - 5*x + 128
sage: Cs = NumberField(f1, 'a').composite_fields(NumberField(f2, 'b'), 'c')
sage: %time Cs[0].relativize(Cs[0].subfields(6)[0][1], 'z');
CPU times: user 2.69 s, sys: 0.04 s, total: 2.73 s
Wall time: 2.88 s
```

- Improved efficiency of elliptic curve torsion computation (John Cremona) -- The speed-up of computing elliptic curve torsion can be up to 12%. The following timing statistics were obtained using the machine sage.math:
```
# BEFORE
sage: F.<z> = CyclotomicField(21)
sage: E = EllipticCurve([2,-z^7,-z^7,0,0])
sage: time E._p_primary_torsion_basis(7);
CPU times: user 9.87 s, sys: 0.07 s, total: 9.94 s
Wall time: 9.95 s

# AFTER
sage: F.<z> = CyclotomicField(21)
sage: E = EllipticCurve([2,-z^7,-z^7,0,0])
sage: time E._p_primary_torsion_basis(7,2);
CPU times: user 8.56 s, sys: 0.11 s, total: 8.67 s
Wall time: 8.67 s
```

- New method odd_degree_model() for hyperelliptic curves (Nick Alexander) -- The new method odd_degree_model() in the class HyperellipticCurve_generic of sage/schemes/hyperelliptic_curves/hyperelliptic_generic.py computes an odd degree model of a hyperelliptic curve. Here are some examples:
```
sage: x = QQ['x'].gen()
sage: H = HyperellipticCurve((x^2 + 2)*(x^2 + 3)*(x^2 + 5))
sage: K2 = QuadraticField(-2, 'a')
sage: H.change_ring(K2).odd_degree_model()
Hyperelliptic Curve over Number Field in a with defining polynomial x^2 + 2 defined by y^2 = 6*a*x^5 - 29*x^4 - 20*x^2 + 6*a*x + 1
sage: K3 = QuadraticField(-3, 'b')
sage: H.change_ring(QuadraticField(-3, 'b')).odd_degree_model()
Hyperelliptic Curve over Number Field in b with defining polynomial x^2 + 3 defined by y^2 = -4*b*x^5 - 14*x^4 - 20*b*x^3 - 35*x^2 + 6*b*x + 1
```

- Rational arguments in kronecker_symbol() and legendre_symbol() (Gonzalo Tornaria) -- The functions kronecker_symbol() and legendre_symbol() in sage/rings/arith.py now support rational arguments. Here are some examples for working with rational arguments to these functions:
```
sage: kronecker(2/3, 5)
1
sage: legendre_symbol(2/3, 7)
-1
```
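How a rational argument reduces to the integer case can be sketched in plain Python (my own illustration via Euler's criterion, not the code in sage/rings/arith.py): for an odd prime p not dividing n, (m/n | p) = (m | p)(n | p), because (n | p) is +-1 and hence its own inverse.

```python
from fractions import Fraction

def legendre_symbol(a, p):
    # Legendre symbol for an odd prime p, extended to rational a = m/n
    # via (m/n | p) = (m | p) * (n | p).
    a = Fraction(a)
    def leg(x):
        # Euler's criterion: x^((p-1)/2) mod p, mapped into {-1, 0, 1}
        r = pow(x % p, (p - 1) // 2, p)
        return -1 if r == p - 1 else r
    return leg(a.numerator) * leg(a.denominator)
```

This agrees with the sample output above: legendre_symbol(Fraction(2, 3), 7) is -1, and for p = 5 the value of 2/3 is 1.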

**Packages**

- Upgrade fpLLL to latest upstream release version 3.0.12 (Michael Abshoff).
- Update the NTL spkg to version ntl-5.4.2.p7 (Michael Abshoff).
- Downgrade the NetworkX spkg to version 0.36 (William Stein) -- The previous networkx-0.99.p0.spkg contained both NetworkX 0.36, which Sage was using, and NetworkX 0.99. When installing networkx-0.99.p0.spkg, only version 0.36 would be installed. This wasted disk space and confused users. The NetworkX package now shipped with Sage contains only version 0.36.
- Upgrade Symmetrica to latest upstream release version 2.0 (Michael Abshoff).
- Split off the Boost library from polybori.spkg (Michael Abshoff) -- Boost version 1.34.1 is now contained within its own spkg.
- Switch from Clisp version 2.47 to ECL version 9.4.1 (Michael Abshoff).
- Upgrade MPIR to latest upstream release version 1.1.2 (William Stein).
- Upgrade Python to latest 2.5.x upstream release version 2.5.4 (Michael Abshoff, Mike Hansen).

**P-adics**

- Norm function in the p-adic ring (David Roe) -- New function abs() to calculate the p-adic absolute value. This is normalized so that the absolute value of p is 1/p. This should be distinguished from the function norm(), which computes the norm of a p-adic element over a ground ring. Here are some examples of using the new function abs():
```
sage: a = Qp(5)(15); a.abs()
1/5
sage: a.abs(53)
0.200000000000000
sage: R = Zp(5, 5)
sage: S.<x> = ZZ[]
sage: f = x^5 + 75*x^3 - 15*x^2 + 125*x - 5
sage: W.<w> = R.ext(f)
sage: w.abs()
0.724779663677696
```
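For intuition, the normalization can be reproduced for ordinary rationals in plain Python (a sketch of the definition |x|_p = p^(-v), with v the p-adic valuation of x; Sage's abs() of course operates on genuine p-adic elements):

```python
from fractions import Fraction

def padic_abs(x, p):
    # |x|_p = p^(-v) where p^v exactly divides x; in particular |p|_p = 1/p
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:   # count powers of p in the numerator
        num //= p
        v += 1
    while den % p == 0:   # powers of p in the denominator count negatively
        den //= p
        v -= 1
    return Fraction(1, p) ** v
```

For example, padic_abs(15, 5) gives 1/5, matching Qp(5)(15).abs() above.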

**Quadratic Forms**

- Major upgrade to the QuadraticForm local density routines (Jon Hanke) -- A complete rewrite of local densities routines, following a consistent interface (and algorithms) as described in this paper.

**Symbolics**

- Update Pynac to version 0.1.7 (Burcin Erocal).
- Switch from Maxima to Pynac for core symbolic manipulation (Mike Hansen, William Stein, Carl Witty, Robert Bradshaw).

**Topology**

- Random simplicial complexes (John Palmieri) -- New method RandomComplex() in the module sage/homology/examples.py for producing a random d-dimensional simplicial complex on n vertices. Here's an example:
```
sage: simplicial_complexes.RandomComplex(6,12)
Simplicial complex with vertex set (0, 1, 2, 3, 4, 5, 6, 7) and facets {(0, 1, 2, 3, 4, 5, 6, 7)}
```
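The basic construction behind such examples can be sketched in plain Python (a hypothetical illustration of one common model: keep each candidate facet independently with probability p; Sage's RandomComplex has its own conventions and defaults):

```python
import itertools
import random

def random_complex(n, d, p=0.5, seed=0):
    # Vertices 0..n-1; each (d+1)-subset is kept as a facet with
    # probability p, giving a random d-dimensional complex.
    rng = random.Random(seed)
    return [f for f in itertools.combinations(range(n), d + 1)
            if rng.random() < p]
```

Every facet returned has d+1 vertices drawn from {0, ..., n-1}.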

A big thank you to all the Sage bug report/patch authors who made my life as a release tour author easier through your comprehensive and concise documentation. The following people contributed to this release tour: Robert Bradshaw, John Cremona, Marshall Hampton, and Anne Schilling. A release tour can also be found on the Sage wiki.