## Clone, configure, build and test M4RI

I’m new to the autotools world, so I was rather helpless when I first looked at M4RI. Say I have patched a file in M4RI. The problem now is figuring out how to configure, build, and test the patched M4RI tree. The solution below was kindly communicated to me by the lead M4RI maintainer Martin Albrecht.

```
$ hg clone http://bitbucket.org/malb/m4ri
$ cd m4ri/
$ autoreconf --install
$ ./configure
$ make
$ make check
```

## Tools for C development

This post collects free and open source software tools that are useful when you do software development in C. For a survey of software development tools for scientific computation, see A Survey of C and C++ Software Tools for Computational Science by D.J. Worth, C. Greenough and L.S. Chin. See The Art, Science, and Engineering of Software Development by Steve McConnell for some best practices.

**Lint tools**

- Cppcheck: A tool for static C/C++ code analysis.
- Splint: Annotation-assisted lightweight static checking.

**Documentation generators**

- Natural Docs: A documentation generator for multiple programming languages.
- ROBODoc
- Sphinx: A tool that makes it easy to create intelligent and beautiful documentation.

**Testing tools**

See A Survey of Software Testing Tools for Computational Science by L.S. Chin, D.J. Worth and C. Greenough.

- CUnit: Unit testing framework for C.
- gcov: A test coverage program. See ggcov for a GUI replacement of gcov.

**Profiling & debugging tools**

- DDD: Graphical front-end for command-line debuggers such as GDB, etc.
- GDB: The GNU Project debugger.
- google-perftools: Fast, multi-threaded malloc() and nifty performance analysis tools.
- gprof: The GNU profiler. For a tutorial on this tool, see Profiling Tutorial: A simple program by DJ Worth, LS Chin, and C Greenough.
- OProfile: Profiler for Linux systems, capable of profiling all running code at low overhead.
- RATS: Rough Auditing Tool for Security.
- Solaris Studio: Formerly Sun Studio, now called Oracle Solaris Studio.
- Valgrind: Instrumentation framework for building dynamic analysis tools.

## Sage 4.4.2 release schedule

Sage 4.4.2 is intended to be a little release on the way to Sage 5.0. I’m devoting a week or so to managing the release of Sage 4.4.2. Here’s a proposed release schedule I posted to sage-release:

- Sage 4.4.2.alpha0 — release 10th May 2010
- Sage 4.4.2.rc0 — feature freeze; release 15th May 2010
- Anything critical to get Sage 4.4.2.final to stabilize
- Sage 4.4.2.final — release 18th May 2010

I’m probably being too optimistic with the above schedule, but we’ll see how things go.

## Release management: Merging patches

This is a revision of my post to the sage-release mailing list, outlining some guidelines for merging positively reviewed tickets. I want to post it here so that it won’t be lost in the archive of the sage-release mailing list.

There are as yet few guidelines for Sage release management. The release management wiki page has been around for months, but it’s not especially helpful for anyone who wants hands-on experience. Here, I’ll try to list some guidelines for merging tickets/spkg’s and managing positively reviewed tickets. The primary focus is on merging tickets with patches.

Before starting on merging tickets for a new milestone, I would compile the source tarball of the current stable release of Sage under a fast-ish directory such as /scratch/. In my case, it’s under my /scratch/ home directory /scratch/mvngu/release/ on the machine sage.math. The idea is to avoid working under a directory that resides on an NFS mounted disk. Towards that end, I also set my default DOT_SAGE directory to a fast-ish directory, as implemented via the following export line in my .bashrc:

```
# The Sage working directory .sage/
export DOT_SAGE=/dev/shm/mvngu/dot_sage
```

It also helps to know whether or not there’s any doctest failure on sage.math with the newly compiled source tarball.

After compiling and (parallel) doctesting the latest version of Sage, it’s time to merge tickets that have received positive review. I often use trac report 11 to find out which tickets have been positively reviewed. Using report 11, I pick a ticket and read through it for any special instructions. In many cases, the ticket has instructions on which patches to merge and, when there is more than one patch, in which order. In some cases, it can be difficult to work out the order in which to merge patches on a ticket. In a situation like this, I would ask the patch author(s) to explain which patches on the ticket to merge and in which order.

Sage has a merge script in the scripts repository, designed for merging tickets. The script is SAGE_LOCAL/bin/sage-apply-ticket, and it can be used from the command line via

```
./sage -merge <ticket-number>
```

However, to effectively use that script, one first needs a working knowledge of Mercurial and its queues extension. This is especially important because the merge script uses Mercurial queues (mq). There would be times when the merge script cannot help me with debugging ticket conflicts, so I would need to use mq together with the bisect command to search through the commit history. I don’t often use the merge script, so there is little for me to say about how to effectively use it.

Here, I’ll explain one way to merge a positively reviewed ticket using mq. Ensure that any patch to be merged has the full name and email address of the patch author, and that its commit message contains the ticket number. Use mq to qimport and qpush any relevant patches to be merged. Sometimes a patch author forgets to put their full name and email address on the patch, so I need to manually download the patch and fill in those details. Another way is to set the username field in my .hgrc to the patch author’s full name and email address, then qimport and qpush; after that, I restore my own full name and email address in my .hgrc. If a commit message doesn’t contain the ticket number, or the message is something nonsensical like

```
[mq]: real-numbers-work.patch
```

I would use qrefresh -e to replace it with a sensible commit message that also contains the ticket number. The patches must then meet the following criteria:

- No hunk failures resulting from qpush.
- The Sage library must rebuild without errors.
- The HTML and PDF versions of the reference manual must build successfully.
- No doctest failures when (parallel) doctesting everything under SAGE_ROOT/devel/sage-main/.
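For concreteness, importing and applying one ticket’s patch with mq might look like the following session. This is only a sketch: the ticket number, patch name, and attachment URL are made up, while qimport, qpush, qrefresh, and qseries are the standard Mercurial queues commands.

```
$ cd devel/sage-main/
$ wget http://trac.sagemath.org/sage_trac/raw-attachment/ticket/1234/trac_1234_fix.patch
$ hg qimport trac_1234_fix.patch
$ hg qpush            # apply the patch; watch for hunk failures
$ hg qrefresh -e      # fix the commit message if it lacks the ticket number
$ hg qseries          # review the queue of patches applied so far
```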

If any of the above steps results in an error, then it’s very likely that the ticket would receive an automatic “needs work” status. Be sure to explain on the ticket why the patch(es) need work. Is it a rebase? If so, state which version of Sage to rebase against. Does the ticket conflict with another ticket? And so on.

If all of the above steps pass for the ticket, then move on to another ticket for merging. Be careful not to close the ticket even though the above criteria are met. What I do is merge a dozen or so tickets as per the above instructions, without closing the corresponding tickets after qimporting and qpushing their patches. Once I have a qseries with patches from a dozen tickets, all of which should still be open, I copy everything under SAGE_ROOT to another directory, say, under /dev/shm/. (I also note down the order in which those dozen tickets were merged.) From that newly copied SAGE_ROOT, I create a source tarball to test that everything still builds after merging the dozen tickets. To do so, I would do the following:

- cd to the SAGE_ROOT under /dev/shm/.
- Run ./sage.
- Run ./sage -b main. This step would take a while, so I would do something during the rebuild process that would occupy me for at least 15 minutes.
- Create a tentative source tarball with `./sage -sdist x.y.z-<descriptive-name>`.

The name <descriptive-name> is something to remind myself that this source tarball is only to test that the resulting tarball still builds and all doctests still pass.

- The result is a source tarball under dist/.
- Untar that source tarball, compile Sage, and parallel doctest. This step usually takes more than 1 hour, so I would make sure to have something to occupy me for at least an hour.

To be safe, I often copy the whole SAGE_ROOT of my merge tree to another directory and run through the above steps. This is because a ticket can have an spkg for merging into the standard spkg repository. You could push a sage-main repository to another copy of Sage somewhere, but presently you can’t do the same thing for the standard spkg repository.

Once the tentative source tarball successfully compiles and all doctests under SAGE_ROOT/devel/sage-main/ pass, then I would close the relevant dozen or so tickets.

The above is a general overview of the process I follow for merging/closing tickets. It’s a slow process, I admit. But I would rather try to catch build and doctest problems early than have them bite me later on down the track. As noted at the beginning of this post, I didn’t cover anything about merging updated spkg’s in the standard packages repository. That could be a topic for another post.

## Parallel testing the Sage library

After compiling a source version of Sage, you can choose to run the test suite on the whole Sage library, on all modules under a given directory, or on a specified module only. For the purposes of this post, I have compiled Sage 4.1.1 from source and the top level Sage directory is

```
[mvngu@mod sage-4.1.1]$ pwd
/scratch/mvngu/build/sage-4.1.1
```

**Testing a module**

Say you want to run all tests in the sudoku module sage/games/sudoku.py. In a terminal window, first cd to the top level Sage directory of your local Sage installation. Now start doctesting as demonstrated in the following terminal session:

```
[mvngu@mod sage-4.1.1]$ ./sage -t devel/sage-main/sage/games/sudoku.py
sage -t "devel/sage-main/sage/games/sudoku.py"     [6.0 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 6.0 seconds
```

As you can see from the output, testing the sudoku module took about six seconds, while testing all the specified modules together took the same amount of time. In this case, you only tested one module, so it’s not surprising that the total testing time is approximately the same as the time required to test that one module. Notice that the syntax is

```
[mvngu@mod sage-4.1.1]$ ./sage -t devel/sage-main/sage/games/sudoku.py
sage -t "devel/sage-main/sage/games/sudoku.py"     [5.7 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 5.7 seconds
[mvngu@mod sage-4.1.1]$ ./sage -t "devel/sage-main/sage/games/sudoku.py"
sage -t "devel/sage-main/sage/games/sudoku.py"     [5.4 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 5.4 seconds
```

and not

```
[mvngu@mod sage-4.1.1]$ ./sage -t sage/games/sudoku.py
ERROR: File ./sage/games/sudoku.py is missing
exit code: 1
----------------------------------------------------------------------
The following tests failed:

	./sage/games/sudoku.py
Total time for all tests: 0.0 seconds
[mvngu@mod sage-4.1.1]$ ./sage -t "sage/games/sudoku.py"
ERROR: File ./sage/games/sudoku.py is missing
exit code: 1
----------------------------------------------------------------------
The following tests failed:

	./sage/games/sudoku.py
Total time for all tests: 0.0 seconds
```

In all of the above terminal sessions, you use a local installation of Sage to test its own modules. Even if you have a system-wide Sage installation, using that version to doctest the modules of a local installation is a recipe for confusion. For example, in the following session, there is a system-wide Sage. Doctesting using that version doesn’t output anything useful.

```
[mvngu@mod sage-4.1.1]$ which sage
/usr/local/bin/sage
[mvngu@mod sage-4.1.1]$ sage devel/sage-main/sage/games/sudoku.py
```

**Troubleshooting**

If you want to doctest modules of a Sage installation, it is recommended that you first cd to the top level directory of that Sage installation, otherwise known as the SAGE_ROOT of that installation. When you run tests, use that particular Sage installation via the syntax ./sage; notice the “dot-forward-slash” at the front of sage. This is a precaution against confusion that can arise when your system has multiple Sage installations. For example, the following syntax is OK because you explicitly specify the Sage installation in the current SAGE_ROOT:

```
[mvngu@sage sage-4.1.1]$ ./sage -t devel/sage-main/sage/games/sudoku.py
sage -t "devel/sage-main/sage/games/sudoku.py"     [5.1 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 5.1 seconds
[mvngu@sage sage-4.1.1]$ ./sage -t "devel/sage-main/sage/games/sudoku.py"
sage -t "devel/sage-main/sage/games/sudoku.py"     [5.0 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 5.0 seconds
```

If you login as a regular user, the following syntax is not recommended as you are using a system-wide Sage installation (if it exists):

```
[mvngu@sage sage-4.1.1]$ sage -t devel/sage-main/sage/games/sudoku.py
Traceback (most recent call last):
  File "/usr/local/sage/local/bin/sage-test", line 49, in <module>
    os.makedirs(TMP)
  File "/usr/local/sage/local/lib/python/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/sage/tmp/tmp'
[mvngu@sage sage-4.1.1]$ sage -t "devel/sage-main/sage/games/sudoku.py"
Traceback (most recent call last):
  File "/usr/local/sage/local/bin/sage-test", line 49, in <module>
    os.makedirs(TMP)
  File "/usr/local/sage/local/lib/python/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/sage/tmp/tmp'
```

In this case, you received a permission error because the system-wide Sage installation attempts to write some data to a system-wide directory using your login privileges. The system-wide directory is a temporary directory under the system-wide SAGE_ROOT. Most likely a system-wide Sage installation was performed by a system administrator using an account with more privileges than a regular user. As a regular user, you cannot write to directories where you don’t have write permission.

The following syntax is also discouraged when you login as a regular user:

```
[mvngu@sage sage-4.1.1]$ cd
[mvngu@sage ~]$ sage -t devel/sage-main/sage/games/sudoku.py
Traceback (most recent call last):
  File "/usr/local/sage/local/bin/sage-test", line 49, in <module>
    os.makedirs(TMP)
  File "/usr/local/sage/local/lib/python/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/sage/tmp/tmp'
[mvngu@sage ~]$ sage -t "devel/sage-main/sage/games/sudoku.py"
Traceback (most recent call last):
  File "/usr/local/sage/local/bin/sage-test", line 49, in <module>
    os.makedirs(TMP)
  File "/usr/local/sage/local/lib/python/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/usr/local/sage/tmp/tmp'
```

This is exactly the same as the previous syntax because in both cases you attempted to doctest modules in a system-wide Sage installation using privileges of a regular user. If you don’t have permission to read or write somewhere on your system, you can’t read or write there. As a regular user, you don’t usually have privileges to read the directory /root nor do you have privileges to write to the root directory:

```
[mvngu@sage sage-4.1.1]$ ls /root/
ls: cannot open directory /root/: Permission denied
[mvngu@sage sage-4.1.1]$ cd /
[mvngu@sage /]$ touch demo.txt
touch: cannot touch `demo.txt': Permission denied
```

**Parallel testing many modules**

So far you have used a single thread to doctest a module in the Sage library. There are hundreds, even thousands, of modules in the Sage library, and testing them all using one thread would take a few hours. Depending on your hardware, it could take up to six hours or more. On a multi-core system, parallel doctesting can significantly reduce the testing time. Unless you also want to use your computer for other work while doctesting, you can devote all the cores of your system to parallel testing.

Let’s doctest all modules in a directory, first using a single thread and then using two threads. For this example, suppose you want to test all the modules under sage/crypto/. You can use a syntax similar to that shown above to achieve this:

```
[mvngu@mod sage-4.1.1]$ ./sage -t devel/sage-main/sage/crypto/
sage -t "devel/sage-main/sage/crypto/lfsr.py"     [2.5 s]
sage -t "devel/sage-main/sage/crypto/cryptosystem.py"     [1.9 s]
sage -t "devel/sage-main/sage/crypto/block_cipher/miniaes.py"     [2.5 s]
sage -t "devel/sage-main/sage/crypto/block_cipher/all.py"     [0.1 s]
sage -t "devel/sage-main/sage/crypto/block_cipher/__init__.py"     [0.1 s]
sage -t "devel/sage-main/sage/crypto/classical.py"     [2.7 s]
sage -t "devel/sage-main/sage/crypto/mq/mpolynomialsystem.py"     [8.7 s]
sage -t "devel/sage-main/sage/crypto/mq/mpolynomialsystemgenerator.py"     [1.9 s]
sage -t "devel/sage-main/sage/crypto/mq/__init__.py"     [0.1 s]
sage -t "devel/sage-main/sage/crypto/mq/sbox.py"     [2.8 s]
sage -t "devel/sage-main/sage/crypto/mq/sr.py"     [4.9 s]
sage -t "devel/sage-main/sage/crypto/stream_cipher.py"     [1.9 s]
sage -t "devel/sage-main/sage/crypto/all.py"     [0.1 s]
sage -t "devel/sage-main/sage/crypto/stream.py"     [1.9 s]
sage -t "devel/sage-main/sage/crypto/__init__.py"     [0.1 s]
sage -t "devel/sage-main/sage/crypto/classical_cipher.py"     [1.9 s]
sage -t "devel/sage-main/sage/crypto/cipher.py"     [1.9 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 35.7 seconds
```

Now let’s do the same thing, but this time let’s also use the optional argument -long:

```
[mvngu@mod sage-4.1.1]$ ./sage -t -long devel/sage-main/sage/crypto/
sage -t -long "devel/sage-main/sage/crypto/lfsr.py"     [1.9 s]
sage -t -long "devel/sage-main/sage/crypto/cryptosystem.py"     [2.0 s]
sage -t -long "devel/sage-main/sage/crypto/block_cipher/miniaes.py"     [2.6 s]
sage -t -long "devel/sage-main/sage/crypto/block_cipher/all.py"     [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/block_cipher/__init__.py"     [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/classical.py"     [2.7 s]
sage -t -long "devel/sage-main/sage/crypto/mq/mpolynomialsystem.py"     [8.7 s]
sage -t -long "devel/sage-main/sage/crypto/mq/mpolynomialsystemgenerator.py"     [2.2 s]
sage -t -long "devel/sage-main/sage/crypto/mq/__init__.py"     [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/mq/sbox.py"     [2.9 s]
sage -t -long "devel/sage-main/sage/crypto/mq/sr.py"     [56.6 s]
sage -t -long "devel/sage-main/sage/crypto/stream_cipher.py"     [2.5 s]
sage -t -long "devel/sage-main/sage/crypto/all.py"     [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/stream.py"     [1.9 s]
sage -t -long "devel/sage-main/sage/crypto/__init__.py"     [0.1 s]
sage -t -long "devel/sage-main/sage/crypto/classical_cipher.py"     [1.9 s]
sage -t -long "devel/sage-main/sage/crypto/cipher.py"     [1.9 s]
----------------------------------------------------------------------
All tests passed!
Total time for all tests: 88.0 seconds
```

Notice the time difference between the first set of tests and the second set, which uses the optional argument -long. Many tests in the Sage library are flagged with “# long time” because these are known to take a long time to run through. Without using the optional -long argument, the module sage/crypto/mq/sr.py took about five seconds. With this optional argument, it required 57 seconds to run through all tests in that module. Here is a snippet of a function in the module sage/crypto/mq/sr.py with a doctest that has been flagged as taking a long time:

```python
def test_consistency(max_n=2, **kwargs):
    r"""
    Test all combinations of ``r``, ``c``, ``e`` and ``n`` in ``(1, 2)``
    for consistency of random encryptions and their polynomial systems.
    `\GF{2}` and `\GF{2^e}` systems are tested. This test takes a while.

    INPUT:

    - ``max_n`` - maximal number of rounds to consider (default: 2)
    - ``kwargs`` - are passed to the SR constructor

    TESTS::

        sage: from sage.crypto.mq.sr import test_consistency
        sage: test_consistency(1)  # long time -- calling w/ max_n = 2 requires a LOT of RAM (>> 2GB, evidently). Calling w/ max_n = 1 is far more manageable.
        True

    The above doctest used to fail on a machine with "only" 2GB RAM.
    Using ``max_n = 1`` appears to be a more reasonable memory usage.
    """
```

Now doctest the same directory in parallel using two threads:

```
[mvngu@mod sage-4.1.1]$ ./sage -tp 2 devel/sage-main/sage/crypto/
Global iterations: 1
File iterations: 1
Using cached timings to run longest doctests first.
Doctesting 17 files doing 2 jobs in parallel
sage -t devel/sage-main/sage/crypto/lfsr.py     [2.7 s]
sage -t devel/sage-main/sage/crypto/cryptosystem.py     [2.0 s]
sage -t devel/sage-main/sage/crypto/mq/mpolynomialsystem.py     [9.4 s]
sage -t devel/sage-main/sage/crypto/mq/sr.py     [5.2 s]
sage -t devel/sage-main/sage/crypto/classical.py     [2.8 s]
sage -t devel/sage-main/sage/crypto/mq/sbox.py     [3.2 s]
sage -t devel/sage-main/sage/crypto/block_cipher/miniaes.py     [2.6 s]
sage -t devel/sage-main/sage/crypto/stream_cipher.py     [2.0 s]
sage -t devel/sage-main/sage/crypto/mq/mpolynomialsystemgenerator.py     [2.0 s]
sage -t devel/sage-main/sage/crypto/classical_cipher.py     [2.1 s]
sage -t devel/sage-main/sage/crypto/cipher.py     [2.1 s]
sage -t devel/sage-main/sage/crypto/__init__.py     [0.1 s]
sage -t devel/sage-main/sage/crypto/block_cipher/__init__.py     [0.1 s]
sage -t devel/sage-main/sage/crypto/mq/__init__.py     [0.1 s]
sage -t devel/sage-main/sage/crypto/block_cipher/all.py     [0.1 s]
sage -t devel/sage-main/sage/crypto/stream.py     [2.0 s]
sage -t devel/sage-main/sage/crypto/all.py     [0.1 s]
----------------------------------------------------------------------
All tests passed!
Timings have been updated.
Total time for all tests: 19.3 seconds
[mvngu@mod sage-4.1.1]$ ./sage -tp 2 -long devel/sage-main/sage/crypto/
Global iterations: 1
File iterations: 1
No long cached timings exist; will create upon successful finish.
Doctesting 17 files doing 2 jobs in parallel
sage -t -long devel/sage-main/sage/crypto/cryptosystem.py     [2.7 s]
sage -t -long devel/sage-main/sage/crypto/lfsr.py     [2.7 s]
sage -t -long devel/sage-main/sage/crypto/stream_cipher.py     [2.2 s]
sage -t -long devel/sage-main/sage/crypto/all.py     [0.1 s]
sage -t -long devel/sage-main/sage/crypto/classical.py     [3.0 s]
sage -t -long devel/sage-main/sage/crypto/__init__.py     [0.1 s]
sage -t -long devel/sage-main/sage/crypto/stream.py     [2.1 s]
sage -t -long devel/sage-main/sage/crypto/classical_cipher.py     [2.1 s]
sage -t -long devel/sage-main/sage/crypto/cipher.py     [2.1 s]
sage -t -long devel/sage-main/sage/crypto/block_cipher/all.py     [0.1 s]
sage -t -long devel/sage-main/sage/crypto/block_cipher/__init__.py     [0.1 s]
sage -t -long devel/sage-main/sage/crypto/block_cipher/miniaes.py     [2.8 s]
sage -t -long devel/sage-main/sage/crypto/mq/mpolynomialsystemgenerator.py     [2.0 s]
sage -t -long devel/sage-main/sage/crypto/mq/__init__.py     [0.1 s]
sage -t -long devel/sage-main/sage/crypto/mq/sbox.py     [3.1 s]
sage -t -long devel/sage-main/sage/crypto/mq/mpolynomialsystem.py     [9.1 s]
sage -t -long devel/sage-main/sage/crypto/mq/sr.py     [56.0 s]
----------------------------------------------------------------------
All tests passed!
Timings have been updated.
Total time for all tests: 71.8 seconds
```

As the number of threads increases, the total testing time decreases. To minimize confusion, it’s also a good idea to explicitly specify the path name of the directory you want to doctest and not a symbolic link to that directory. In the above examples, the symbolic link devel/sage points to the directory devel/sage-main, but the actual path to the directory has been specified instead of its symbolic link.
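Conceptually, parallel doctesting just farms independent test files out to a pool of worker processes and collects the results as they finish, which is why the output order above differs from the single-threaded runs. Here is a minimal plain-Python sketch of that scheduling idea; the file names are taken from the examples above, and `run_tests` is a made-up stand-in for invoking `sage -t` on one file:

```python
from multiprocessing import Pool

def run_tests(path):
    # Stand-in for running "sage -t <path>": a real worker would spawn
    # the doctest process and collect its exit status and timing.
    return (path, "All tests passed!")

if __name__ == "__main__":
    files = [
        "sage/games/sudoku.py",
        "sage/crypto/lfsr.py",
        "sage/crypto/stream.py",
    ]
    # Two workers, analogous to "./sage -tp 2 <dir>"; results arrive
    # in completion order, not submission order.
    with Pool(2) as pool:
        for path, verdict in pool.imap_unordered(run_tests, files):
            print(path, "--", verdict)
```

The out-of-order output of `imap_unordered` mirrors what you see in the `-tp` transcripts above: whichever worker finishes a file first reports first.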

**Parallel testing the whole Sage library**

The main Sage library resides in the directory devel/sage-main/. You can use the syntax described above to doctest the main library using multiple threads. When doing release management or patching the main Sage library, I would parallel test the library using ten threads:

```
[mvngu@mod sage-4.1.1]$ ./sage -tp 10 -long devel/sage-main/
```

Another way is to edit the file makefile in the top level Sage directory so that the variable NUM_THREADS is set to 10:

```
# How many threads should be used when doing parallel testing (and
# sometime in the future, parallel building)?
NUM_THREADS=10
```

After saving all changes to makefile, you can parallel test with the -long option using ten threads:

```
[mvngu@mod sage-4.1.1]$ make ptestlong
```

Any of the following commands would also doctest the Sage library or one of its clones:

```
make test
make check
make testlong
make ptest
make ptestlong
```

In each case, testing is performed on the directory that is pointed to by the symbolic link devel/sage.

**Updates** 2010-01-08 — This post is now in the Sage Developers’ Guide.

## Compile Sage 4.1 in 64-bit mode on OS X 10.5.8

Compiling Sage 4.1 on Mac OS X 10.5.8 is fairly easy, at least if you want to build Sage in 32-bit mode: you download a source version of Sage, uncompress the tarball, navigate to the top level directory of the source tree, and issue make. But if you want to build in 64-bit mode, you need to replace the Fortran spkg that is shipped with Sage with a custom-built 64-bit Fortran spkg. For example, say you want to compile Sage 4.1 in 64-bit mode. Here is what you should do:

- Download the source tarball of Sage 4.1 and a custom-built Fortran spkg. The Sage source tarball can be found at the Sage download page. The custom-built Fortran spkg can be found at my development home directory on sage.math. It was built by Michael Abshoff (thank you very much, Michael).
- Uncompress the source tarball, delete the Fortran spkg that’s shipped with Sage, and move the custom-built Fortran package to the standard packages repository:
- Now compile in 64-bit mode:

```
$ tar -xf sage-4.1.tar
$ rm sage-4.1/spkg/standard/fortran-20071120.p5.spkg
$ mv fortran-OSX64-20090120.spkg sage-4.1/spkg/standard/
```

```
$ export SAGE64=yes
$ cd sage-4.1/
$ make
```

Wait a while for Sage to build. You can use these steps to build Sage 4.1.1 in 64-bit mode under OS X 10.5.8. However, Sage 4.1.1 is known to fail to compile properly as documented in the release tour. This is due to a 32- versus 64-bit issue in cliquer. One way to get around this is to wait for make to finish running. Open the file sage/graphs/all.py and comment out the line that imports cliquer functionalities, like so:

```
#from sage.graphs.cliquer import *
```

From the top level Sage directory, issue make again. This would resume the build process where it left off when failing to compile cliquer.
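If you prefer not to edit the file by hand, the comment-out step can also be scripted with sed. The following sketch operates on a scratch copy of all.py so that it is self-contained; in practice the target file would be sage/graphs/all.py under your Sage tree, and the scratch path here is made up:

```shell
# Make a scratch copy of all.py to demonstrate the edit on.
mkdir -p /tmp/sage_demo/sage/graphs
printf 'from sage.graphs.cliquer import *\n' > /tmp/sage_demo/sage/graphs/all.py

# Prepend "#" to the cliquer import line; "&" in the replacement
# stands for the whole matched line. A .bak backup is kept.
sed -i.bak 's|^from sage\.graphs\.cliquer import \*|#&|' /tmp/sage_demo/sage/graphs/all.py
cat /tmp/sage_demo/sage/graphs/all.py
```

The `-i.bak` form of in-place editing works with both GNU and BSD sed, which matters if you are following these steps on OS X.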

## Sage 4.1.1 released

Sage 4.1.1 was released on August 14th, 2009. For the official, comprehensive release note, please refer to sage-4.1.1.txt. The following points are some of the foci of this release:

- Improved data conversion between NumPy and Sage
- Solaris support, Solaris specific bug fixes for NTL, zn_poly, Pari/GP, FLINT, MPFR, PolyBoRI, ATLAS
- Upgrade/updates for about 8 standard packages
- Three new optional packages: openopt, GLPK, p_group_cohomology

In this release, 13 people made their first contribution:

- Adam Webb
- Anders Claesson
- Andrew Mathas
- Dag Sverre Seljebotn
- Evan Fosmark
- Jens Rasch
- Nathann Cohen
- Peter McNamara
- Simon Morris
- Steven Hartke
- Taylor Sutton
- Tim Dumol
- Vincent Delecroix

We closed 165 tickets, details of which are available on the Sage trac server. Among these tickets is the long-standing #111, which was closed due to the work of Alex Ghitza and David Loeffler. Thus, ticket #111 is our lowest ticket winner for this release. With the merging of ticket #877, there is a change in the way docstring coverage is counted. Previously, the docstring coverage script also counted functions that are local to other functions. In this manner, the docstring coverage for Sage 4.1 is:

- Overall weighted coverage score: 77.8%
- Total number of functions: 22398

With ticket #877, nested functions or functions local to other functions are no longer counted towards the docstring coverage. This results in a reduced number of functions, hence we have the following statistics for Sage 4.1 as a result of #877:

- Overall weighted coverage score: 78.3%
- Total number of functions: 22210

Using the docstring coverage technique from ticket #877, in Sage 4.1.1 we increased coverage by 0.3%, while adding 120 functions:

- Overall weighted coverage score: 78.6%
- Total number of functions: 22330

**Known Issues**

The standard package cliquer doesn’t build under some 64-bit platforms. There are reports of cliquer failing to compile under 64-bit Fedora 10 (see ticket #6746) and 64-bit Intel Mac running OS X 10.5 (see ticket #6681). We provide the 64-bit binary

- sage-4.1.1-OSX10.5-intel-64bit-i386-Darwin.dmg

on the Mac OS X binary download page. However, users should note that the README.txt file in that binary contains a warning about cliquer. In the above binary, the cliquer import statement in sage/graphs/all.py has been commented out like so

```
#from sage.graphs.cliquer import *
```

as a work-around to allow Sage to start up. In the meantime, users of 64-bit OS X won’t have access to functionalities of the cliquer spkg until ticket #6681 has been resolved.

Here is a summary of main features in this release, categorized under various headings:

**Algebra**

- Adds method __nonzero__() to abelian groups (Taylor Sutton) #6510 — New method __nonzero__() for the class AbelianGroup_class in sage/groups/abelian_gps/abelian_group.py. This method returns True if the abelian group in question is non-trivial:
```
sage: E = EllipticCurve([0, 82])
sage: T = E.torsion_subgroup()
sage: bool(T)
False
sage: T.__nonzero__()
False
```

**Basic Arithmetic**

- Implement real_part() and imag_part() for CDF and CC (Alex Ghitza) #6159 — The name real_part is now an alias to the method real(); similarly, imag_part is now an alias to the method imag().
```
sage: a = CDF(3, -2)
sage: a.real()
3.0
sage: a.real_part()
3.0
sage: a.imag()
-2.0
sage: a.imag_part()
-2.0
sage: i = ComplexField(100).0
sage: z = 2 + 3*i
sage: z.real()
2.0000000000000000000000000000
sage: z.real_part()
2.0000000000000000000000000000
sage: z.imag()
3.0000000000000000000000000000
sage: z.imag_part()
3.0000000000000000000000000000
```

- Efficient summing using balanced sum (Jason Grout, Mike Hansen) #2737 — New function balanced_sum() in the module sage/misc/misc_c.pyx for summing the elements in a list. In some cases, balanced_sum() is more efficient than the built-in Python sum() function, where the efficiency can range from 26x up to 1410x faster than sum(). The following timing statistics were obtained using the machine sage.math:
```
sage: R.<x,y> = QQ["x,y"]
sage: L = [x^i for i in xrange(1000)]
sage: %time sum(L);
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
sage: %time balanced_sum(L);
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
sage: %timeit sum(L);
100 loops, best of 3: 8.66 ms per loop
sage: %timeit balanced_sum(L);
1000 loops, best of 3: 324 µs per loop
sage: L = [[i] for i in xrange(10e4)]
sage: %time sum(L, []);
CPU times: user 84.61 s, sys: 0.00 s, total: 84.61 s
Wall time: 84.61 s
sage: %time balanced_sum(L, []);
CPU times: user 0.06 s, sys: 0.00 s, total: 0.06 s
Wall time: 0.06 s
```
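The dramatic speedup on lists comes from summing in a balanced, divide-and-conquer order rather than left to right, so each element is copied O(log n) times instead of O(n) times. Here is a plain-Python sketch of that idea; it illustrates the technique only and is not Sage's actual implementation:

```python
def balanced_sum(seq, zero=0):
    """Sum the items of seq in a balanced (divide-and-conquer) order.

    For operands whose "+" copies both sides (e.g. lists), this does
    O(n log n) total copying instead of the O(n^2) of left-to-right sum().
    """
    if not seq:
        return zero
    if len(seq) == 1:
        return zero + seq[0]
    mid = len(seq) // 2
    # Sum each half independently, then combine once at the top.
    return balanced_sum(seq[:mid], zero) + balanced_sum(seq[mid:], zero)

print(balanced_sum(range(10)))            # 45, same as sum(range(10))
print(balanced_sum([[1], [2], [3]], []))  # [1, 2, 3]
```

For numbers the result matches the built-in sum(); the payoff appears only for types such as lists, where repeated left-to-right concatenation is quadratic.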

**Calculus**

- Wigner 3j, 6j, 9j, Clebsch-Gordan, Racah and Gaunt coefficients (Jens Rasch) #5996 — A collection of functions for exactly calculating Wigner 3j, 6j, 9j, Clebsch-Gordan, Racah as well as Gaunt coefficients. All these functions evaluate to a rational number times the square root of a rational number. These new functions are defined in the module sage/functions/wigner.py. Here are some examples on calculating the Wigner 3j, 6j, 9j symbols:
```
sage: wigner_3j(2, 6, 4, 0, 0, 0)
sqrt(5/143)
sage: wigner_3j(0.5, 0.5, 1, 0.5, -0.5, 0)
sqrt(1/6)
sage: wigner_6j(3,3,3,3,3,3)
-1/14
sage: wigner_6j(8,8,8,8,8,8)
-12219/965770
sage: wigner_9j(1,1,1, 1,1,1, 1,1,0, prec=64) # ==1/18
0.0555555555555555555
sage: wigner_9j(15,15,15, 15,3,15, 15,18,10, prec=1000)*1.0
-0.0000778324615309539
```

The Clebsch-Gordan, Racah and Gaunt coefficients can be computed as follows:

sage: simplify(clebsch_gordan(3/2,1/2,2, 3/2,1/2,2))
1
sage: clebsch_gordan(1.5,0.5,1, 1.5,-0.5,1)
1/2*sqrt(3)
sage: racah(3,3,3,3,3,3)
-1/14
sage: gaunt(1,0,1,1,0,-1)
-1/2/sqrt(pi)
sage: gaunt(12,15,5,2,3,-5)
91/124062*sqrt(36890)/sqrt(pi)
sage: gaunt(1000,1000,1200,9,3,-12).n(64)
0.00689500421922113448

**Combinatorics**

- Optimize the words library code (Vincent Delecroix, Sébastien Labbé, Franco Saliola) #6519 — An enhancement of the words library code in sage/combinat/words to improve its efficiency and reorganize the code. The efficiency gain for creating small words can be up to 6x:
# BEFORE
sage: %timeit Word()
10000 loops, best of 3: 46.6 µs per loop
sage: %timeit Word("abbabaab")
10000 loops, best of 3: 62 µs per loop
sage: %timeit Word([0,1,1,0,1,0,0,1])
10000 loops, best of 3: 59.4 µs per loop
# AFTER
sage: %timeit Word()
100000 loops, best of 3: 6.85 µs per loop
sage: %timeit Word("abbabaab")
100000 loops, best of 3: 11.8 µs per loop
sage: %timeit Word([0,1,1,0,1,0,0,1])
100000 loops, best of 3: 10.6 µs per loop

For the creation of large words, the improvement can range from 8000x up to 39000x:

# BEFORE
sage: t = words.ThueMorseWord()
sage: w = list(t[:1000000])
sage: %timeit Word(w)
10 loops, best of 3: 792 ms per loop
sage: u = "".join(map(str, list(t[:1000000])))
sage: %timeit Word(u)
10 loops, best of 3: 777 ms per loop
sage: %timeit Words("01")(u)
10 loops, best of 3: 748 ms per loop
# AFTER
sage: t = words.ThueMorseWord()
sage: w = list(t[:1000000])
sage: %timeit Word(w)
10000 loops, best of 3: 20.3 µs per loop
sage: u = "".join(map(str, list(t[:1000000])))
sage: %timeit Word(u)
10000 loops, best of 3: 21.9 µs per loop
sage: %timeit Words("01")(u)
10000 loops, best of 3: 84.3 µs per loop

All of the above timing statistics were obtained using the machine sage.math. Further timing comparisons can be found at the Sage wiki page.

- Improve the speed of Permutation.inverse() (Anders Claesson) #6621 — In some cases, the speed gain is up to 11x. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: p = Permutation([6, 7, 8, 9, 4, 2, 3, 1, 5])
sage: %timeit p.inverse()
10000 loops, best of 3: 67.1 µs per loop
sage: p = Permutation([19, 5, 13, 8, 7, 15, 9, 10, 16, 3, 12, 6, 2, 20, 18, 11, 14, 4, 17, 1])
sage: %timeit p.inverse()
1000 loops, best of 3: 240 µs per loop
sage: p = Permutation([14, 17, 1, 24, 16, 34, 19, 9, 20, 18, 36, 5, 22, 2, 27, 40, 37, 15, 3, 35, 10, 25, 21, 8, 13, 26, 12, 32, 23, 38, 11, 4, 6, 39, 31, 28, 29, 7, 30, 33])
sage: %timeit p.inverse()
1000 loops, best of 3: 857 µs per loop
# AFTER
sage: p = Permutation([6, 7, 8, 9, 4, 2, 3, 1, 5])
sage: %timeit p.inverse()
10000 loops, best of 3: 24.6 µs per loop
sage: p = Permutation([19, 5, 13, 8, 7, 15, 9, 10, 16, 3, 12, 6, 2, 20, 18, 11, 14, 4, 17, 1])
sage: %timeit p.inverse()
10000 loops, best of 3: 41.4 µs per loop
sage: p = Permutation([14, 17, 1, 24, 16, 34, 19, 9, 20, 18, 36, 5, 22, 2, 27, 40, 37, 15, 3, 35, 10, 25, 21, 8, 13, 26, 12, 32, 23, 38, 11, 4, 6, 39, 31, 28, 29, 7, 30, 33])
sage: %timeit p.inverse()
10000 loops, best of 3: 72.4 µs per loop
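
The speed-up reflects a standard trick: for a permutation in one-line notation, position i of p says exactly where the inverse must send p[i], so a single O(n) pass filling an array suffices. A hypothetical pure-Python sketch, using 1-based one-line notation as in Sage's Permutation:

```python
def inverse_permutation(p):
    """Invert a permutation given in 1-based one-line notation, in O(n)."""
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v - 1] = i + 1  # p sends position i+1 to v, so the inverse sends v to i+1
    return inv
```

Applying the function twice returns the original permutation, which makes for an easy sanity check.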

- Updating some quirks in sage/combinat/partition.py (Andrew Mathas) #5790 — The functions r_core(), r_quotient(), k_core(), and partition_sign() are now deprecated. They are replaced by core(), quotient(), and sign(). The rewrite of the function Partition() deprecates the argument core_and_quotient; the core and the quotient can instead be passed as keywords of Partition().
sage: Partition(core_and_quotient=([2,1], [[2,1],[3],[1,1,1]]))
/home/mvngu/.sage/temp/sage.math.washington.edu/9221/_home_mvngu__sage_init_sage_0.py:1: DeprecationWarning: "core_and_quotient=(*)" is deprecated. Use "core=[*], quotient=[*]" instead.
  # -*- coding: utf-8 -*-
[11, 5, 5, 3, 2, 2, 2]
sage: Partition(core=[2,1], quotient=[[2,1],[3],[1,1,1]])
[11, 5, 5, 3, 2, 2, 2]
sage: Partition([6,3,2,2]).r_quotient(3)
/home/mvngu/.sage/temp/sage.math.washington.edu/9221/_home_mvngu__sage_init_sage_0.py:1: DeprecationWarning: r_quotient is deprecated. Please use quotient instead.
  # -*- coding: utf-8 -*-
[[], [], [2, 1]]
sage: Partition([6,3,2,2]).quotient(3)
[[], [], [2, 1]]
sage: partition_sign([5,1,1,1,1,1])
/home/mvngu/.sage/temp/sage.math.washington.edu/9221/_home_mvngu__sage_init_sage_0.py:1: DeprecationWarning: "partition_sign deprecated. Use Partition(pi).sign() instead
  # -*- coding: utf-8 -*-
1
sage: Partition([5,1,1,1,1,1]).sign()
1

**Cryptography**

- Improve S-box linear and differences matrices computation (Yann Laigle-Chapuy) #6454 — Speed up the functions difference_distribution_matrix() and linear_approximation_matrix() of the class SBox in the module sage/crypto/mq/sbox.py. The function linear_approximation_matrix() now uses the Walsh transform. The speed-up for difference_distribution_matrix() can be up to 277x, while that for linear_approximation_matrix() can be up to 132x. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: S = mq.SR(1,4,4,8).sbox()
sage: %time S.difference_distribution_matrix();
CPU times: user 77.73 s, sys: 0.00 s, total: 77.73 s
Wall time: 77.73 s
sage: %time S.linear_approximation_matrix();
CPU times: user 132.96 s, sys: 0.00 s, total: 132.96 s
Wall time: 132.96 s
# AFTER
sage: S = mq.SR(1,4,4,8).sbox()
sage: %time S.difference_distribution_matrix();
CPU times: user 0.28 s, sys: 0.01 s, total: 0.29 s
Wall time: 0.28 s
sage: %time S.linear_approximation_matrix();
CPU times: user 1.01 s, sys: 0.00 s, total: 1.01 s
Wall time: 1.01 s
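
The Walsh transform trick works as follows: for a fixed output mask b, the correlations for all 2^m input masks a are obtained at once by a fast Walsh–Hadamard transform of the sign vector x ↦ (−1)^⟨b,S(x)⟩, costing O(m·2^m) per mask instead of O(4^m) overall. A self-contained sketch (function names and the bias convention are illustrative, not Sage's internals):

```python
def walsh_hadamard(f):
    """Fast Walsh-Hadamard transform: F[a] = sum_x (-1)^popcount(a & x) * f[x]."""
    f = list(f)
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

def linear_approximation_sketch(S, m, n):
    """Biases #{x : <a,x> = <b,S(x)>} - 2^(m-1) for an S-box S from m to n bits,
    indexed as L[a][b]."""
    parity = lambda v: bin(v).count("1") & 1
    L = [[0] * (2 ** n) for _ in range(2 ** m)]
    for b in range(2 ** n):
        signs = [1 - 2 * parity(b & S[x]) for x in range(2 ** m)]
        wht = walsh_hadamard(signs)  # wht[a] = 2 * #matches - 2^m
        for a in range(2 ** m):
            L[a][b] = wht[a] // 2
    return L
```

The inner transform replaces the naive double loop over (a, x), which is where the two-orders-of-magnitude speed-up in the timings above comes from.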

**Elliptic Curves**

- Allow the method integral_points() to handle elliptic curves with large ranks (John Cremona) #6381 — A rewrite of the method integral_x_coords_in_interval() in the class EllipticCurve_rational_field belonging to the module sage/schemes/elliptic_curves/ell_rational_field.py. The rewrite allows the method integral_points() to compute the integral points of elliptic curves with large ranks. For example, previously the following code would result in an OverflowError:
sage: D = 6611719866
sage: E = EllipticCurve([0, 0, 0, -D^2, 0])
sage: E.integral_points();

- Multiplication-by-n method on elliptic curve formal groups uses the double-and-add algorithm (Hamish Ivey-Law, Tom Boothby) #6407 — Previously, the method EllipticCurveFormalGroup.mult_by_n() computed n*P by applying the group law n times, except over a field of characteristic zero, where a faster algorithm was used. The linear algorithm is now replaced with the logarithmic double-and-add algorithm, i.e. the additive version of the standard square-and-multiply algorithm. In some cases, the efficiency gain can range from 3% up to 29%. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: F = EllipticCurve(GF(101), [1, 1]).formal_group()
sage: %time F.mult_by_n(100, 20);
CPU times: user 0.98 s, sys: 0.00 s, total: 0.98 s
Wall time: 0.98 s
sage: F = EllipticCurve("37a").formal_group()
sage: %time F.mult_by_n(1000000, 20);
CPU times: user 0.38 s, sys: 0.00 s, total: 0.38 s
Wall time: 0.38 s
sage: %time F.mult_by_n(100000000, 20);
CPU times: user 0.55 s, sys: 0.03 s, total: 0.58 s
Wall time: 0.58 s
# AFTER
sage: F = EllipticCurve(GF(101), [1, 1]).formal_group()
sage: %time F.mult_by_n(100, 20);
CPU times: user 0.96 s, sys: 0.00 s, total: 0.96 s
Wall time: 0.95 s
sage: F = EllipticCurve("37a").formal_group()
sage: %time F.mult_by_n(1000000, 20);
CPU times: user 0.44 s, sys: 0.01 s, total: 0.45 s
Wall time: 0.45 s
sage: %time F.mult_by_n(100000000, 20);
CPU times: user 0.40 s, sys: 0.01 s, total: 0.41 s
Wall time: 0.41 s
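
Double-and-add itself is generic: it reduces n − 1 applications of the group law to O(log n) applications by scanning the bits of n. A sketch over an arbitrary abelian group, with add and double passed in as stand-ins for the formal group law (all names here are illustrative):

```python
def mult_by_n_sketch(P, n, add, double, zero):
    """Compute n*P with O(log n) group operations (right-to-left binary method)."""
    R = zero
    while n > 0:
        if n & 1:        # this bit contributes the current power-of-two multiple of P
            R = add(R, P)
        P = double(P)    # advance to the next power of two
        n >>= 1
    return R
```

With integer addition standing in for the group law, mult_by_n_sketch(7, 13, lambda a, b: a + b, lambda a: 2 * a, 0) returns 7 * 13; for the formal group, add would be the formal group law F(x, y) and double the map x ↦ F(x, x).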

**Graphics**

- Plotting 3-D Bezier paths (Emily Kirkman) #6098 — New function bezier3d() for plotting a 3-dimensional Bezier path. Here are some examples for working with this function:
sage: bezier3d([[(0,0,0), (1,0,0), (0,1,0), (0,1,1)]]).show(zoom=1.2)

sage: path = [[(0,0,0),(.5,.1,.2),(.75,3,-1),(1,1,0)],[(.5,1,.2),(1,.5,0)],[(.7,.2,.5)]]
sage: bezier3d(path, color='green').show(zoom=1.2)

- Passing extra options to show() (Bill Cauchois) #5651 — Extra optional arguments to plotting functions can now be passed on to the function show(). This passing of optional arguments is implemented for the following plotting modules:
- sage/plot/arrow.py
- sage/plot/bar_chart.py
- sage/plot/bezier_path.py
- sage/plot/circle.py
- sage/plot/complex_plot.pyx
- sage/plot/contour_plot.py
- sage/plot/density_plot.py
- sage/plot/disk.py
- sage/plot/line.py
- sage/plot/matrix_plot.py
- sage/plot/plot.py
- sage/plot/plot_field.py
- sage/plot/point.py
- sage/plot/polygon.py
- sage/plot/scatter_plot.py
- sage/plot/text.py

Each of the following pairs of commands produces the same plot:

sage: arrow((-2, 2), (7,1), frame=True)
sage: arrow((-2, 2), (7,1)).show(frame=True)

sage: bar_chart([-2,8,-7,3], rgbcolor=(1,0,0), axes=False)
sage: bar_chart([-2,8,-7,3], rgbcolor=(1,0,0)).show(axes=False)

sage: bezier_path([[(0,1),(.5,0),(1,1)]], fontsize=20)
sage: bezier_path([[(0,1),(.5,0),(1,1)]]).show(fontsize=20)

sage: complex_plot(lambda z: z, (-3, 3), (-3, 3), figsize=[5,5])
sage: complex_plot(lambda z: z, (-3, 3), (-3, 3)).show(figsize=[5,5])

**Graph Theory**

- Cliquer as a standard package (Nathann Cohen) #6355 — Cliquer is a set of C routines for finding cliques in an arbitrary weighted graph. It uses an exact branch-and-bound algorithm recently developed by Patric Ostergard and mainly written by Sampo Niskanen. It is published under the GPL license. Here are some examples for working with the new cliquer spkg:
sage: max_clique(graphs.PetersenGraph())
[7, 9]
sage: all_max_clique(graphs.PetersenGraph())
[[2, 7], [7, 9], [6, 8], [6, 9], [0, 4], [4, 9], [5, 7], [0, 5], [5, 8], [3, 4], [2, 3], [3, 8], [1, 6], [0, 1], [1, 2]]
sage: clique_number(Graph("DJ{"))
4
sage: clique_number(Graph({0:[1,2,3], 1:[2], 3:[0,1]}))
3
sage: list_composition([1,3,'a'], {'a':'b', 1:2, 2:3, 3:4})
[2, 4, 'b']

- Faster algorithm to compute maximum cliques (Nathann Cohen) #5793 — With the inclusion of cliquer as a standard spkg, the following functions can now use the cliquer algorithm:
- Graph.max_clique() — Returns the vertex set of a maximum complete subgraph.
- Graph.cliques_maximum() — Returns a list of all maximum cliques, with each clique represented by a list of vertices. A clique is an induced complete subgraph and a maximal clique is one of maximal order.
- Graph.clique_number() — Returns the size of the largest clique of the graph.
- Graph.cliques_vertex_clique_number() — Returns a list of sizes of the largest maximal cliques containing each vertex. This returns a single value if there is only one input vertex.
- Graph.independent_set() — Returns a maximal independent set, which is a set of vertices which induces an empty subgraph.

These functions already existed in Sage; cliquer brings no new features, but a huge efficiency improvement in computing clique numbers. The NetworkX 0.36 implementation of these functions is very slow, although it remains faster than cliquer for Graph.cliques_vertex_clique_number(). The algorithms in the cliquer spkg scale very well as the number of vertices in a graph increases. Here is a comparison between the NetworkX 0.36 implementation and cliquer on computing the clique number of a graph. Timing statistics were obtained using the machine sage.math:

sage: g = graphs.RandomGNP(100, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 0.64 s, sys: 0.01 s, total: 0.65 s
Wall time: 0.65 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.02 s
sage: g = graphs.RandomGNP(200, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 9.68 s, sys: 0.01 s, total: 9.69 s
Wall time: 9.68 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 0.09 s, sys: 0.00 s, total: 0.09 s
Wall time: 0.09 s
sage: g = graphs.RandomGNP(300, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 69.98 s, sys: 0.10 s, total: 70.08 s
Wall time: 70.09 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 0.23 s, sys: 0.00 s, total: 0.23 s
Wall time: 0.23 s
sage: g = graphs.RandomGNP(400, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 299.32 s, sys: 0.29 s, total: 299.61 s
Wall time: 299.64 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 0.54 s, sys: 0.00 s, total: 0.54 s
Wall time: 0.53 s
sage: g = graphs.RandomGNP(500, 0.4)
sage: %time g.clique_number(algorithm="networkx");
CPU times: user 1178.85 s, sys: 1.30 s, total: 1180.15 s
Wall time: 1180.16 s
sage: %time g.clique_number(algorithm="cliquer");
CPU times: user 1.09 s, sys: 0.00 s, total: 1.09 s
Wall time: 1.09 s

- Support the syntax g.add_edge((u,v), label=l) for C graphs (Robert Miller) #6540 — The following syntax is now supported. Note that the label keyword must be used; a second positional argument is interpreted as a vertex, not as a label:
sage: G = Graph()
sage: G.add_edge((1,2), label="my label")
sage: G.edges()
[(1, 2, 'my label')]
sage: G = Graph()
sage: G.add_edge((1,2), "label")
sage: G.edges()
[((1, 2), 'label', None)]

- Fast subgraphs by building the graph instead of deleting things (Jason Grout) #6578 — Subgraphs can now be constructed by building a new graph from a number of vertices and edges. This is in contrast to the previous default algorithm where subgraphs were constructed by deleting edges and vertices. In some cases, the efficiency gain of the new subgraph construction implementation can be up to 17x. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: g = graphs.PathGraph(Integer(10e4))
sage: %time g.subgraph(range(20));
CPU times: user 1.89 s, sys: 0.03 s, total: 1.92 s
Wall time: 1.92 s
sage: g = graphs.PathGraph(Integer(10e4) * 5)
sage: %time g.subgraph(range(20));
CPU times: user 14.92 s, sys: 0.05 s, total: 14.97 s
Wall time: 14.97 s
sage: g = graphs.PathGraph(Integer(10e5))
sage: %time g.subgraph(range(20));
CPU times: user 47.77 s, sys: 0.29 s, total: 48.06 s
Wall time: 48.06 s
# AFTER
sage: g = graphs.PathGraph(Integer(10e4))
sage: %time g.subgraph(range(20));
CPU times: user 0.27 s, sys: 0.01 s, total: 0.28 s
Wall time: 0.28 s
sage: g = graphs.PathGraph(Integer(10e4) * 5)
sage: %time g.subgraph(range(20));
CPU times: user 1.34 s, sys: 0.03 s, total: 1.37 s
Wall time: 1.37 s
sage: g = graphs.PathGraph(Integer(10e5))
sage: %time g.subgraph(range(20));
CPU times: user 2.66 s, sys: 0.04 s, total: 2.70 s
Wall time: 2.70 s
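
The idea behind the patch: when the requested subgraph is small, it is much cheaper to iterate over the chosen vertices and copy only the edges among them than to copy the whole graph and delete almost everything. A sketch on adjacency dicts (a hypothetical helper, not Sage's Graph API):

```python
def induced_subgraph(adj, vertices):
    """Build the subgraph of adj (a dict mapping vertex -> set of neighbours)
    induced on the given vertices, touching only the kept vertices."""
    vset = set(vertices)
    # keep only edges whose both endpoints lie in the chosen vertex set
    return {v: adj[v] & vset for v in vset}
```

The cost is proportional to the number of kept vertices and their degrees, independent of the total graph size, which matches the timings above: the old timings grow with the graph, the new ones barely do.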

**Interfaces**

- Magma interface: make magma_colon_equals() mode work in both command line and notebook (William Stein) #6395 — Exposes the magma_colon_equals() option in the notebook. For example, one can now do the following in the notebook:
sage: magma._preparse_colon_equals = False
sage: magma._preparse('a = 5')
'a = 5'
sage: magma._preparse_colon_equals = True
sage: magma._preparse('a = 5')
'a := 5'
sage: magma._preparse('a = 5; b := 7; c =a+b;')
'a := 5; b := 7; c :=a+b;'

- Viewing a Sage object with view(object, viewer='pdf') (Nicolas M. Thiéry) #6591 — Typical uses include:
- you prefer your PDF browser to your DVI browser
- you want to view snippets which are not displayed well in DVI viewers or jsMath, e.g. images produced by tikzpicture.

**Linear Algebra**

- Make NumPy play nice with Sage types (Robert Bradshaw) #5081 — This improves data conversion between NumPy and Sage. For example, one can now do this:
sage: from scipy import stats
sage: stats.uniform(0,15).ppf([0.5,0.7])
array([  7.5,  10.5])

And this:

sage: from scipy import *
sage: from pylab import *
sage: sample_rate = 1000.0
sage: t = r_[0:0.6:1/sample_rate]
sage: N = len(t)
sage: s = [sin(2*pi*50*elem) + sin(2*pi*70*elem + (pi/4)) for elem in t]
sage: S = fft(s)
sage: f = sample_rate*r_[0:(N/2)] / N
sage: n = len(f)
sage: line(zip(f, abs(S[0:n]) / N))

- Fast slicing of sparse matrices (Jason Grout) #6553 — The efficiency gain for slicing sparse matrices can range from 10x up to 147x. The following timing statistics were obtained using the machine sage.math:
# BEFORE
sage: A = random_matrix(ZZ, 100000, density=0.00005, sparse=True)
sage: %time A[50000:,:50000];
CPU times: user 298.84 s, sys: 0.05 s, total: 298.89 s
Wall time: 298.95 s
sage: A = random_matrix(ZZ, 10000, density=0.00005, sparse=True)
sage: %time A[5000:,:5000];
CPU times: user 2.50 s, sys: 0.00 s, total: 2.50 s
Wall time: 2.50 s
# AFTER
sage: A = random_matrix(ZZ, 100000, density=0.00005, sparse=True)
sage: %time A[50000:,:50000];
CPU times: user 1.91 s, sys: 0.09 s, total: 2.00 s
Wall time: 2.02 s
sage: A = random_matrix(ZZ, 10000, density=0.00005, sparse=True)
sage: %time A[5000:,:5000];
CPU times: user 0.23 s, sys: 0.00 s, total: 0.23 s
Wall time: 0.24 s

- Plotting sparse matrices efficiently (Jason Grout) #6554 — Previously, plotting a sparse matrix would convert it to a dense matrix, so the whole process took longer and longer as the dimensions of the matrix increased. When the matrix is sparse, the matrix plotting function now uses SciPy's sparse matrix functionality, which can handle large matrices. In some cases, the performance improvement can range from 380x up to 98000x. The following timing statistics were obtained using the machine mod.math:
# BEFORE
sage: A = random_matrix(ZZ, 5000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 60.25 s, sys: 0.69 s, total: 60.94 s
Wall time: 60.94 s
sage: A = random_matrix(ZZ, 10000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 241.31 s, sys: 3.03 s, total: 244.34 s
Wall time: 244.35 s
sage: A = random_matrix(ZZ, 15000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 544.02 s, sys: 6.85 s, total: 550.87 s
Wall time: 550.86 s
sage: A = random_matrix(ZZ, 20000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 972.85 s, sys: 13.36 s, total: 986.21 s
Wall time: 986.21 s
# AFTER
sage: A = random_matrix(ZZ, 5000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.05 s, sys: 0.04 s, total: 0.09 s
Wall time: 0.16 s
sage: A = random_matrix(ZZ, 10000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
sage: A = random_matrix(ZZ, 15000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
sage: A = random_matrix(ZZ, 20000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s

In Sage 4.1, the following would quickly consume gigabytes of RAM on a system and may result in a MemoryError:

sage: A = random_matrix(ZZ, 100000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 0.63 s, sys: 0.01 s, total: 0.64 s
Wall time: 0.63 s
sage: A = random_matrix(ZZ, 1000000, density=0.00001, sparse=True)
sage: %time matrix_plot(A, marker=',');
CPU times: user 1933.41 s, sys: 2.97 s, total: 1936.38 s
Wall time: 1937.31 s

- Elementwise (Hadamard) product of matrices (Rob Beezer) #6301 — Given matrices A and B of the same size, C = A.elementwise_product(B) creates the new matrix C, of the same size, with entries given by C[i,j] = A[i,j] * B[i,j]. The multiplication occurs in a ring defined by Sage’s coercion model, as appropriate for the base rings of A and B (or an error is raised if there is no sensible common ring). In particular, if A and B are defined over a noncommutative ring, the operation is also noncommutative. The implementation is different for dense matrices versus sparse matrices, but there are probably further optimizations available for specific rings. This operation is often called the Hadamard product. Here is an example on using elementwise matrix product:
sage: G = matrix(GF(3), 2, [0,1,2,2])
sage: H = matrix(ZZ, 2, [1,2,3,4])
sage: J = G.elementwise_product(H); J
[0 2]
[0 2]
sage: J.parent()
Full MatrixSpace of 2 by 2 dense matrices over Finite Field of size 3
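
As a quick model of what elementwise_product() computes, here is the operation on plain lists of rows (a sketch with an illustrative name; the Sage method additionally coerces the entries into a common base ring, which is how the GF(3) result above arises):

```python
def hadamard_product(A, B):
    """Entrywise product of two equally sized matrices given as lists of rows."""
    assert len(A) == len(B) and all(len(ra) == len(rb) for ra, rb in zip(A, B))
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]
```

Over the integers, hadamard_product([[0, 1], [2, 2]], [[1, 2], [3, 4]]) gives [[0, 2], [6, 8]]; reducing the entries mod 3 reproduces the matrix J in the Sage session above.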

**Modular Forms**

- Efficient implementation of index for Gamma(N) (Simon Morris) #6606 — The new implementation provides a massive speed-up for computing the index of the congruence subgroup Gamma(N). The following statistics were obtained using the machine mod.math:
# BEFORE
sage: %time [Gamma(n).index() for n in [1..19]];
CPU times: user 14369.53 s, sys: 75.18 s, total: 14444.71 s
Wall time: 14445.22 s
sage: %time Gamma(32041).index();  # hangs for hours
# AFTER
sage: %time [Gamma(n).index() for n in [1..19]];
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
sage: timeit("[Gamma(n).index() for n in [1..19]]")
125 loops, best of 3: 2.27 ms per loop
sage: %time Gamma(32041).index();
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
sage: %timeit Gamma(32041).index()
10000 loops, best of 3: 110 µs per loop

- Weight 1 Eisenstein series (David Loeffler) #6071 — Add support for computing weight 1 Eisenstein series. Here’s an example:
sage: M = ModularForms(DirichletGroup(13).0, 1)
sage: M.eisenstein_series()
[
-1/13*zeta12^3 + 6/13*zeta12^2 + 4/13*zeta12 + 2/13 + q + (zeta12 + 1)*q^2 + zeta12^2*q^3 + (zeta12^2 + zeta12 + 1)*q^4 + (-zeta12^3 + 1)*q^5 + O(q^6)
]

- Efficient calculation of Jacobi sums (David Loeffler) #6534 — For small primes, Jacobi sums are now calculated directly from the definition, which can be up to 624x faster. The following timing statistics were obtained using the machine mod.math:
# BEFORE
sage: chi = DirichletGroup(67).0
sage: psi = chi**3
sage: time chi.jacobi_sum(psi);
CPU times: user 6.24 s, sys: 0.00 s, total: 6.24 s
Wall time: 6.24 s
# AFTER
sage: chi = DirichletGroup(67).0
sage: psi = chi**3
sage: time chi.jacobi_sum(psi);
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01 s
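
The definition in question is J(χ, ψ) = Σ_{a mod p} χ(a)·ψ(1 − a), a single pass over the residues. A sketch for the quadratic (Legendre) character, whose values are just −1, 0, 1 (illustrative names; Sage's jacobi_sum() works with arbitrary Dirichlet characters and cyclotomic values):

```python
def legendre(a, p):
    """Quadratic character mod an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def jacobi_sum_sketch(p):
    """J(chi, chi) for the quadratic character chi mod p, straight from the definition."""
    return sum(legendre(a, p) * legendre(1 - a, p) for a in range(p))
```

For a nontrivial character χ one has J(χ, χ⁻¹) = −χ(−1), so this sketch returns −1 when p ≡ 1 (mod 4) and 1 when p ≡ 3 (mod 4), a handy cross-check on the direct computation.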

There is also support for computing a Jacobi sum with values in a finite field:

sage: g = DirichletGroup(17, GF(9,'a')).0
sage: g.jacobi_sum(g**2)
2*a

**Notebook**

- Display docstrings in the notebook using HTML and jsMath (Tom Boothby, Evan Fosmark, John Palmieri, Mitesh Patel) #5653 — When viewing docstrings in the notebook, they are now rendered using HTML and jsMath. For example, if one does any of the following
identity_matrix([TAB]
identity_matrix?[SHIFT-RETURN]
identity_matrix?[TAB]

then the docstring for the function identity_matrix() is rendered using HTML and jsMath.

**Number Theory**

- Intersection of ideals in a number field (David Loeffler) #6457 — New function intersection() of the class NumberFieldIdeal in the module sage/rings/number_field/number_field_ideal.py for computing the intersection of ideals in a number field. Here are some examples on using intersection():
sage: K.<a> = QuadraticField(-11)
sage: p = K.ideal((a + 1)/2); q = K.ideal((a + 3)/2)
sage: p.intersection(q) == q.intersection(p) == K.ideal(a-2)
True
sage: # An example with non-principal ideals
sage: L.<a> = NumberField(x^3 - 7)
sage: p = L.ideal(a^2 + a + 1, 2)
sage: q = L.ideal(a+1)
sage: p.intersection(q) == L.ideal(8, 2*a + 2)
True
sage: # A relative example
sage: L.<a,b> = NumberField([x^2 + 11, x^2 - 5])
sage: A = L.ideal([15, (-3/2*b + 7/2)*a - 8])
sage: B = L.ideal([6, (-1/2*b + 1)*a - b - 5/2])
sage: A.intersection(B) == L.ideal(-1/2*a - 3/2*b - 1)
True

- Computing Heegner points (Robert Bradshaw) #6045 — Adds a Heegner point method to elliptic curves over the rationals. The new method is heegner_point(), which can be found in the class EllipticCurve_rational_field of the module sage/schemes/elliptic_curves/ell_rational_field.py. Here are some examples on using heegner_point():
sage: E = EllipticCurve('37a')
sage: E.heegner_discriminants_list(10)
[-7, -11, -40, -47, -67, -71, -83, -84, -95, -104]
sage: E.heegner_point(-7)
(0 : 0 : 1)
sage: P = E.heegner_point(-40); P
(a : -a + 1 : 1)
sage: P = E.heegner_point(-47); P
(a : -a^4 - a : 1)
sage: P[0].parent()
Number Field in a with defining polynomial x^5 - x^4 + x^3 + x^2 - 2*x + 1
sage: # Working out the details manually
sage: P = E.heegner_point(-47, prec=200)
sage: f = algdep(P[0], 5); f
x^5 - x^4 + x^3 + x^2 - 2*x + 1
sage: f.discriminant().factor()
47^2

- Support primes_of_degree_one() for relative extensions (David Loeffler) #6396 — For example, one can now do this:
sage: N.<a,b> = NumberField([x^2 + 1, x^2 - 5])
sage: ids = N.primes_of_degree_one_list(10); ids
[Fractional ideal (2*a + 1/2*b - 1/2), Fractional ideal ((-1/2*b - 1/2)*a - b), Fractional ideal (b*a + 1/2*b + 3/2), Fractional ideal ((-1/2*b - 3/2)*a + b - 1), Fractional ideal ((-b + 1)*a + b), Fractional ideal (3*a + 1/2*b - 1/2), Fractional ideal ((1/2*b - 1/2)*a + 3/2*b - 1/2), Fractional ideal ((-1/2*b - 5/2)*a - b + 1), Fractional ideal (2*a - 3/2*b - 1/2), Fractional ideal (3*a + 1/2*b + 5/2)]
sage: [x.absolute_norm() for x in ids]
[29, 41, 61, 89, 101, 109, 149, 181, 229, 241]
sage: ids[9] == N.ideal(3*a + 1/2*b + 5/2)
True

- Inverse modulo an ideal in a relative number field (David Loeffler) #6458 — Support for computing the inverse modulo an ideal in the ring of integers of a relative field. Here’s an example:
sage: K.<a,b> = NumberField([x^2 + 1, x^2 - 3])
sage: I = K.ideal(17*b - 3*a)
sage: x = I.integral_basis(); x
[438, -b*a + 309, 219*a - 219*b, 156*a - 154*b]
sage: V, _, phi = K.absolute_vector_space()
sage: V.span([phi(u) for u in x], ZZ) == I.free_module()
True

**Numerical**

- Further NumPy type conversions (Robert Bradshaw, Jason Grout) #6506 — Improved conversion of complex, integer, and rational types between NumPy and Sage. Here are some examples:
sage: import numpy
sage: numpy.array([1.0, 2.5j])
array([ 1.+0.j ,  0.+2.5j])
sage: numpy.array([1.0, 2.5j]).dtype
dtype('complex128')
sage: numpy.array([1.000000000000000000000000000000000000j]).dtype
dtype('object')
sage: numpy.array([1, 2, 3])
array([1, 2, 3])
sage: numpy.array([1, 2, 3]).dtype
dtype('int64')
sage: numpy.array(2**40).dtype
dtype('int64')
sage: numpy.array(2**400).dtype
dtype('object')
sage: numpy.array([1,2,3,0.1]).dtype
dtype('float64')
sage: numpy.array(QQ(2**40)).dtype
dtype('int64')
sage: numpy.array(QQ(2**400)).dtype
dtype('object')
sage: numpy.array([1, 1/2, 3/4])
array([ 1.  ,  0.5 ,  0.75])

**Packages**

- Update the ATLAS spkg to version 3.8.3.p7 (David Kirkby, Minh Van Nguyen) #6558 #6738 — This adds support for building ATLAS under Solaris on a Sun SPARC processor.
- Update the NTL spkg to version 5.4.2.p9 (David Kirkby) #6380 — This adds support for building NTL under Solaris on a Sun SPARC processor.
- Update the zn_poly spkg to version 0.9.p1 (David Kirkby) #6443 — This adds support for building zn_poly under Solaris on a Sun SPARC processor.
- Update the Pari/GP spkg to version 2.3.3.p1 (David Kirkby) #6445 — This adds support for building Pari under Solaris on a Sun SPARC processor.
- Update the FLINT spkg to version 1.3.0.p2 (David Kirkby) #6451 — This fixes a Solaris specific bug in FLINT.
- Update the MPFR spkg to version 2.4.1.p0 (Paul Zimmermann, David Kirkby) #6453 — This fixes a number of test failures under Solaris on a Sun SPARC processor.
- Update the PolyBoRi spkg to version 0.5rc.p9 (David Kirkby) #6528 — This fixes a Solaris specific bug in compiling PolyBoRi with the Sun compiler.
- Upgrade tinyMCE to version 3.2.4.1 upstream release (Jason Grout) #6143 — This version of tinyMCE has many fixes for Safari and a greatly improved paste-from-word functionality.
- Upgrade Cython to version 0.11.2.1, the latest upstream release (Robert Bradshaw) #6438.
- Update the NumPy spkg to version 1.3.0.p1 (William Stein) #6493 — This fixes a bug in compiling NumPy under 64-bit OS X.
- Upgrade Singular to version singular-3-1-0-2-20090620.p0 (David Kirkby) #6563 — This fixes a Solaris specific bug when compiling Singular under Solaris on a Sun SPARC processor.
- New optional spkg GLPK version 4.38 (Nathann Cohen) #6602 — GLPK is a linear program solver that can also solve mixed integer programs.
- New optional spkg OpenOpt version 0.24 (William Stein) #6302 — OpenOpt is a numerical optimization package with various solvers.
- New optional package p_group_cohomology version 1.0.2 (Simon A. King, David J. Green) #6491 — The package p_group_cohomology can compute the cohomology ring of a group with coefficients in a finite field of order p. Its features include:
- Compute the cohomology ring with coefficients in GF(p) for any finite p-group, in terms of a minimal generating set and a minimal set of algebraic relations. We use Benson's criterion to prove the completeness of the ring structure.
- Compute depth, dimension, Poincare series and a-invariants of the cohomology rings.
- Compute the nil radical.
- Construct induced homomorphisms.
- The package includes a list of cohomology rings for all groups of order 64.
- With the package, the cohomology for all groups of order 128 and for the Sylow 2-subgroup of the third Conway group (order 1024) was computed for the first time. The result of these and many other computations (e.g., all but 6 groups of order 243) is accessible in a repository on sage.math.

Here are some examples. First, data that are included with the package:

sage: from pGroupCohomology import CohomologyRing
sage: H = CohomologyRing(64,132) # this is included in the package, hence, the ring structure is already there
sage: print H
Cohomology ring of Small Group number 132 of order 64 with coefficients in GF(2)
Computation complete
Minimal list of generators:
[a_2_4, a 2-Cochain in H^*(SmallGroup(64,132); GF(2)),
 c_2_5, a 2-Cochain in H^*(SmallGroup(64,132); GF(2)),
 c_4_12, a 4-Cochain in H^*(SmallGroup(64,132); GF(2)),
 a_1_0, a 1-Cochain in H^*(SmallGroup(64,132); GF(2)),
 a_1_1, a 1-Cochain in H^*(SmallGroup(64,132); GF(2)),
 b_1_2, a 1-Cochain in H^*(SmallGroup(64,132); GF(2))]
Minimal list of algebraic relations:
[a_1_0*a_1_1,
 a_1_0*b_1_2,
 a_1_1^3+a_1_0^3,
 a_2_4*a_1_0,
 a_1_1^2*b_1_2^2+a_2_4*a_1_1*b_1_2+a_2_4^2+c_2_5*a_1_1^2]
sage: H.depth()
2
sage: H.a_invariants()
[-Infinity, -Infinity, -3, -3]
sage: H.poincare_series()
(-t^2 - t - 1)/(t^6 - 2*t^5 + t^4 - t^2 + 2*t - 1)
sage: H.nil_radical()
a_1_0, a_1_1, a_2_4

Data from the repository on sage.math:

sage: H = CohomologyRing(128,562) # if there is an internet connection, the ring data are downloaded behind the scenes
sage: len(H.gens())
35
sage: len(H.rels())
486
sage: H.depth()
1
sage: H.a_invariants()
[-Infinity, -4, -3, -3]
sage: H.poincare_series()
(t^14 - 2*t^13 + 2*t^12 - t^11 - t^10 + t^9 - 2*t^8 + 2*t^7 - 2*t^6 + 2*t^5 - 2*t^4 + t^3 - t^2 - 1)/(t^17 - 3*t^16 + 4*t^15 - 4*t^14 + 4*t^13 - 4*t^12 + 4*t^11 - 4*t^10 + 4*t^9 - 4*t^8 + 4*t^7 - 4*t^6 + 4*t^5 - 4*t^4 + 4*t^3 - 4*t^2 + 3*t - 1)

Some computation from scratch, involving different ring presentations and induced maps:

sage: tmp_root = tmp_filename()
sage: CohomologyRing.set_user_db(tmp_root)
sage: H0 = CohomologyRing.user_db(8,3,websource=False)
sage: print H0
Cohomology ring of Dihedral group of order 8 with coefficients in GF(2)
Computed up to degree 0
Minimal list of generators:
[]
Minimal list of algebraic relations:
[]
sage: H0.make()
sage: print H0
Cohomology ring of Dihedral group of order 8 with coefficients in GF(2)
Computation complete
Minimal list of generators:
[c_2_2, a 2-Cochain in H^*(D8; GF(2)),
 b_1_0, a 1-Cochain in H^*(D8; GF(2)),
 b_1_1, a 1-Cochain in H^*(D8; GF(2))]
Minimal list of algebraic relations:
[b_1_0*b_1_1]
sage: G = gap('DihedralGroup(8)')
sage: H1 = CohomologyRing.user_db(G, GroupName='GapD8', websource=False)
sage: H1.make()
sage: print H1   # the ring presentation is different ...
Cohomology ring of GapD8 with coefficients in GF(2)
Computation complete
Minimal list of generators:
[c_2_2, a 2-Cochain in H^*(GapD8; GF(2)),
 b_1_0, a 1-Cochain in H^*(GapD8; GF(2)),
 b_1_1, a 1-Cochain in H^*(GapD8; GF(2))]
Minimal list of algebraic relations:
[b_1_1^2+b_1_0*b_1_1]
sage: phi = G.IsomorphismGroups(H0.group())
sage: phi_star = H0.hom(phi, H1)
sage: phi_star_inv = phi_star^(-1)   # ... but the rings are isomorphic
sage: [X == phi_star_inv(phi_star(X)) for X in H0.gens()]
[True, True, True, True]
sage: [X == phi_star(phi_star_inv(X)) for X in H1.gens()]
[True, True, True, True]

An example with an odd prime:

sage: H = CohomologyRing(81,8) # this needs to be computed from scratch
sage: H.make()
sage: H.gens()
[1,
 a_2_1, a 2-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_2_2, a 2-Cochain in H^*(SmallGroup(81,8); GF(3)),
 b_2_0, a 2-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_4_1, a 4-Cochain in H^*(SmallGroup(81,8); GF(3)),
 b_4_2, a 4-Cochain in H^*(SmallGroup(81,8); GF(3)),
 b_6_3, a 6-Cochain in H^*(SmallGroup(81,8); GF(3)),
 c_6_4, a 6-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_1_0, a 1-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_1_1, a 1-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_3_2, a 3-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_5_2, a 5-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_5_3, a 5-Cochain in H^*(SmallGroup(81,8); GF(3)),
 a_7_5, a 7-Cochain in H^*(SmallGroup(81,8); GF(3))]
sage: len(H.rels())
59
sage: H.depth()
1
sage: H.a_invariants()
[-Infinity, -3, -2]
sage: H.poincare_series()
(t^4 - t^3 + t^2 + 1)/(t^6 - 2*t^5 + 2*t^4 - 2*t^3 + 2*t^2 - 2*t + 1)
sage: H.nil_radical()
a_1_0, a_1_1, a_2_1, a_2_2, a_3_2, a_4_1, a_5_2, a_5_3, b_2_0*b_4_2, a_7_5, b_2_0*b_6_3, b_6_3^2+b_4_2^3

Rob Beezer, Nathann Cohen, John Cremona, Simon King, Sébastien Labbé, and Jens Rasch contributed to writing this release tour. A big thank you to all the Sage bug report/patch authors who made my life as a release tour author easier through your comprehensive and concise documentation. There are too many to list here; you know who you are. A release tour can also be found on the Sage wiki.