Archive for the ‘algorithm’ Category

Statistical analysis of the Fisher-Yates shuffle

8 May 2011

The Fisher-Yates shuffle, also known as the Knuth shuffle, is a procedure for producing a random permutation of a sequence. Here I provide a statistical analysis of an implementation of the Fisher-Yates shuffle. The central idea is that every permutation of a sequence should be equally likely as an output of the shuffle. That is, over a large enough number of shuffles of a fixed sequence, the observed probability of each permutation produced by a Fisher-Yates implementation should cluster around the theoretical probability for that permutation, and converge to it as the number of shuffles increases. Otherwise there is something wrong with the implementation. I used my implementation of the Fisher-Yates shuffle to produce random permutations of various simple sequences of digits, and the resulting output of the shuffles was used to perform frequency analyses of the behaviour of the implementation. Following are the particular sequences and the number of iterations for each sequence. An iteration here is one shuffle of the given sequence; an experiment on a sequence is then the totality of all shuffles performed on it.
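
For reference, here is a minimal sketch of the algorithm in Python. This is illustrative only: the experiments below use a C implementation built on igraph, and the function name here is my own.

import random

def fisher_yates_shuffle(seq):
    # Walk from the last position down to position 1. At each step,
    # swap the current element with a uniformly chosen element at or
    # before it. Given a uniform random source, every permutation of
    # seq is equally likely, with probability 1/len(seq)!.
    for i in range(len(seq) - 1, 0, -1):
        j = random.randint(0, i)  # random index j with 0 <= j <= i
        seq[i], seq[j] = seq[j], seq[i]
    return seq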

  • Sequence: 123. Iterations: 1,000,000
  • Sequence: 1234. Iterations: 1,000,000
  • Sequence: 12345. Iterations: 1,000,000
  • Sequence: 123456. Iterations: 10,000,000
  • Sequence: 1234567. Iterations: 100,000,000
  • Sequence: 12345678. Iterations: 100,000,000
  • Sequence: 123456789. Iterations: 100,000,000

Each sequence was initialized as follows. Let V be a vector where V[i] holds the digit i + 1, so V[0] holds the digit 1, V[1] holds 2, and so on. Two versions of the experiment were performed on each sequence. In version A, at the start of each iteration I initialized V as described above and then randomly permuted it. In version B, I initialized V as above once and then repeatedly permuted it: if V_i is the permutation obtained from iteration i, then during iteration i + 1 I applied the Fisher-Yates shuffle to V_i to obtain V_{i+1}. The two versions of each experiment were performed to see whether they would produce qualitatively identical results, and the experimental results suggest that they do.
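
To make the difference between the two versions concrete, here is a sketch of the two experimental loops in Python, reusing the fisher_yates_shuffle sketch above. The names and the return format are illustrative; the actual experiments are the C programs linked below.

def experiment_a(digits, iterations):
    # Version A: re-initialize V to the identity sequence before
    # every shuffle.
    results = []
    for _ in range(iterations):
        V = list(digits)
        fisher_yates_shuffle(V)
        results.append("".join(V))
    return results

def experiment_b(digits, iterations):
    # Version B: initialize V once, then keep shuffling the same
    # vector, so iteration i + 1 permutes the output V_i of
    # iteration i.
    V = list(digits)
    results = []
    for _ in range(iterations):
        fisher_yates_shuffle(V)
        results.append("".join(V))
    return results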

Source code of the experiments is provided here. Note that in order to compile the C files, you need to check out igraph trunk from Launchpad, apply the patch on this ticket, and then compile and install the resulting patched igraph version on your system. The C files containing the code for the experiments output the result of each shuffle to a file. For small sequences of, say, 3 to 4 digits, the resulting output files are a few MB in size, but for longer sequences of 5 or more digits the output files can range from tens to hundreds of MB. The experimental data are easily generated from the above C files, so I do not provide the data. The data for each experiment were analyzed using the Python script fisherstat.py; if you intend to replicate the experiments, you need to adjust this script for each data file of each experiment. Given a data file for an experiment, the Python file is loaded from within the Sage command line interface; everything from then on is automated, from reading the experimental data to computing the frequency distribution. All experiments were run on the Sage cluster, in particular the sage.math compute node, whose purchase is supported by US National Science Foundation Grant No. DMS-0821725. Data analysis was performed using the Sage mathematics software system version 4.6.2.

Note also that the Bitbucket.org project page links to PDF files. These files plot the normalized frequency distributions of the experimental data. The horizontal axis of each plot is for the permutation IDs. Each permutation of a fixed sequence is assigned a unique ID starting from 0. For example, for the sequence “123” here are all the possible permutations together with their corresponding IDs (a sketch of one way to compute such IDs follows the listing):

123 -> 0
132 -> 1
213 -> 2
231 -> 3
312 -> 4
321 -> 5
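
The listing above follows lexicographic order. One way to compute such an ID, namely the lexicographic rank of a permutation, uses the factorial number system. The sketch below is illustrative and need not match how fisherstat.py assigns IDs internally.

from math import factorial

def permutation_id(perm):
    # Lexicographic rank of the string perm among all permutations of
    # its (distinct) characters, counting from 0. For each position,
    # count the unused characters smaller than the current one and
    # weight that count by the factorial of the remaining positions.
    rank = 0
    remaining = sorted(perm)
    for c in perm:
        k = remaining.index(c)
        rank += k * factorial(len(remaining) - 1)
        remaining.pop(k)
    return rank

For example, permutation_id("312") returns 4, in agreement with the listing above.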

The vertical axis contains the corresponding normalized frequency of each permutation. Each frequency count was normalized by the number of iterations for the corresponding experiment. See the script fisherstat.py for further details. The normalized frequency for a permutation can be thought of as the empirical probability of that permutation showing up as a result of a Fisher-Yates shuffle.
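
In outline, the normalization amounts to the following standalone sketch, which assumes a data file with one permutation per line; the exact file format the C programs emit may differ.

from collections import Counter

def normalized_frequencies(filename):
    # Count how often each permutation occurs, then divide by the
    # total number of shuffles to obtain empirical probabilities.
    counts = Counter()
    total = 0
    with open(filename) as f:
        for line in f:
            counts[line.strip()] += 1
            total += 1
    return {perm: n / total for perm, n in counts.items()}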

And now comes the fun bit: plots of the experimental data. As I said above, both versions of each experiment produced qualitatively similar results. For this reason, below I only show some plots for version A of each experiment. To see all plots including plots for version B, refer to the Bitbucket.org project page. For kicks, each PDF file containing a plot was typeset using LaTeX and pgfplots.

As is evident from the above plots, for each sequence considered the empirical probabilities resulting from the experiments cluster around the theoretical probabilities. For a sequence of 3 or 4 digits, the empirical probabilities converge to the theoretical probability after a million or so iterations. For example, the sequence “123” has six possible permutations, so each permutation has a theoretical probability of 1/6 of occurring as a result of the Fisher-Yates shuffle, and in the plot for “123” the empirical probabilities have converged to 1/6 after one million iterations. But as the number of digits in a sequence increases, the number of iterations must increase as well in order to observe convergence of the empirical probabilities to the theoretical probability for that sequence. For example, after 10^8 iterations for the sequence “123456789”, the empirical probabilities still only cluster around the theoretical probability of 1/362880, that is, 1/9!. As the number of iterations increases, the spread of the empirical probabilities should narrow towards the theoretical probability.
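
The clustering can also be summarized numerically. As a minimal sketch, and assuming the normalized_frequencies output above, one could report the largest deviation of any empirical probability from the theoretical value 1/n!; this figure should shrink as the number of iterations grows. This check is an extra illustration, not part of the experiments, which rely on the plots.

from math import factorial

def max_deviation(probs, n):
    # Largest absolute difference between any empirical probability and
    # the theoretical probability 1/n! for a sequence of n distinct
    # digits. Assumes every permutation occurs at least once in probs;
    # otherwise missing permutations should count as probability 0.
    theoretical = 1.0 / factorial(n)
    return max(abs(p - theoretical) for p in probs.values())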

Version 0.7 of book “Algorithmic Graph Theory” released

24 February 2011

Here is version 0.7 of the book Algorithmic Graph Theory. The relevant download options are:

Version 0.7 fleshes out the chapter “Random Graphs”. Here is the content of the chapter in brief:

  1. Network statistics
  2. Binomial random graph model
  3. Erdős–Rényi model
  4. Small-world networks
  5. Scale-free networks

Version 0.6 of book “Algorithmic Graph Theory” released

6 January 2011

Happy new year, folks! As a new year’s gift to you, here is version 0.6 of the book Algorithmic Graph Theory. The relevant download options are:

Version 0.6 adds the new chapter “Tree Data Structures” that discusses priority queues and various efficient implementations of priority queues, including binary heaps and binomial heaps. Here is the content of the new chapter in brief:

  1. Priority queues
  2. Binary heaps
  3. Binomial heaps
  4. Binary search trees

Version 0.5 of the book “Algorithmic Graph Theory”

30 November 2010

I’m happy as a clam to announce version 0.5 of the book Algorithmic Graph Theory for your reading pleasure.

The main focus of this release is to flesh out the chapter on trees and forests. Along the way, numerous problems/exercises are added to the introductory chapter “Introduction to Graph Theory” and the chapter “Graph Algorithms”. Needless to say, there is also the usual multitude of typo fixes throughout the book. We, the authors of the book, gratefully acknowledge contributions from the following people while preparing this release:

  • Caroline Melles
  • Pravin Paratey

See the section “Acknowledgments” in the book for full details on their contributions. Here is an outline of topics covered in the newly fleshed out chapter “Trees and Forests”:

  • Definitions and examples relating to trees and forests.
  • Various basic characterizations of trees.
  • Techniques for constructing minimum spanning trees: a randomized spanning tree construction algorithm and the usual suspects including Kruskal’s algorithm, Prim’s algorithm, and Borůvka’s algorithm.
  • Binary trees and an algorithm to construct a random binary tree. Application topics include coding theory, Gray code, and Huffman code.
  • The usual suspects of tree traversal algorithms: level-order, pre-order, post-order, and in-order.