Below is some freely available software concerned with a task-oriented abstraction for parallel programming. There are also some selected papers on parallelism, and some papers on large computations executed using this software. An abstract briefly describes the TOP-C abstraction.

I have a different series of papers on algorithms for computational algebra for which I will add pointers as I become more ambitious. For pointers to me, personally, see my home page.
(last modified, Oct. 8, 1997)


Abstract (extracted from HPDC paper)

The goal of this work is to simplify parallel applications development, and thus ease the learning barriers faced by non-experts. It is especially useful where there is little data-parallelism to be recognized by a compiler. The applications programmer need learn the intricacies of only one primary subroutine in order to get the full benefits of the parallel interface. The applications programmer defines a high level concept, the task, that depends only on his application, and not on any particular parallel library. The task is defined by its three phases: (a) the task input, (b) sequential code to execute the task, and (c) any modifications of global variables that occur as a result of the task. In particular, side effects (which change global variable values) must not occur in phase (b). Forcing the user to re-organize his computation in these terms allows us to present the applications programmer with a single global environment visible to all processors in the context of a master-slave architecture. A graphical sketch (in postscript, 82K) of the architecture can be found here.




TOP-C provides three separate C libraries, allowing identical end-user code to run on a sequential machine, a NOW (network of workstations), or an SMP (symmetric multiprocessing, i.e. shared-memory) machine. The NOW version runs on top of MPI, and the SMP version runs on top of threads (evolving toward POSIX threads).


This software applies a task-oriented parallel abstraction to GCL (GNU Common LISP). It is built on top of MPI, and also makes MPI available to be called directly from LISP. The software has been tested on SunOS 4.x and DEC Alpha OSF/UNIX. It probably also runs on most of the other architectures supported by GCL, but I have not been able to test that.


GAP (Groups, Algorithms and Programming) is a general-purpose language for mathematical group theory and related abstract algebra. An ftp-able version for GAP version 4 should be available in a few months; in the meantime, I am happy to provide a beta version. Please write to me:



G. Cooperman, ``TOP-C: A Task-Oriented Parallel C Interface'', Proc. of the 5th International Symposium on High Performance Distributed Computing (HPDC-5), IEEE Press, 1996, pp. 141--150. (N.B.: The paper on Gaussian elimination provides a good illustration of the principles of TOP-C.)

G. Cooperman, ``GAP/MPI: Facilitating Parallelism'', Proc. of DIMACS Workshop on Groups and Computation II 28, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, L. Finkelstein and W.M. Kantor (eds.), AMS, Providence, RI, 1997, pp. 69--84.

G. Cooperman, ``STAR/MPI: Binding a Parallel Library to Interactive Symbolic Algebra Systems'', Proc. of International Symposium on Symbolic and Algebraic Computation (ISSAC '95), ACM Press, pp. 126--132.

G. Cooperman and V. Grinberg, ``TOP-WEB: Task-Oriented Metacomputing on the Web'', International Journal of Parallel and Distributed Systems and Networks 1, 1998, pp. 184--192.

G. Cooperman and M. Tselman, ``Using Tadpoles to Reduce Memory and Communication Requirements for Exhaustive, Breadth-First Search Using Distributed Computers'', Proc. of ACM Symposium on Parallel Architectures and Algorithms (SPAA-97), ACM Press, 1997, pp. 231--238; also N.U. Tech. Rept. NU-CCS-97-02, 1997.


Large computations

The task-oriented model used is described in the abstract above. Papers are available on each of these results (currently by request, eventually by ftp). These accomplishments are currently all in the area of computational algebra. I am eager to explore using this system for large task-oriented computations in other domains. References to the papers are given below, followed by a discussion of some of the computations.

G. Cooperman, G. Hiß, K. Lux, and J. Müller, ``The Brauer tree of the principal 19-block of the sporadic simple Thompson group'', J. of Experimental Mathematics 6(4), 1997, pp. 293--300.

G. Cooperman, ``Practical Task-Oriented Parallelism for Gaussian Elimination in Distributed Memory'', Linear Algebra and its Applications 275--276, 1998, pp. 107--120.

G. Cooperman and G. Havas, ``Practical Parallel Coset Enumeration'', Proc. of Workshop on High Performance Computation and Gigabit Local Area Networks, G. Cooperman, G. Michler and H. Vinck (eds.), Lecture Notes in Control and Information Sciences 226, Springer Verlag, pp. 15--27.

G. Cooperman, L. Finkelstein, M. Tselman and B. York, ``Constructing Permutation Representations for Matrix Groups'', J. of Symbolic Computation 24, 1997, pp. 1--18.

To date, there have been a number of notable successes in computational algebra, which provided the initial testbeds. These successes include: finding a permutation representation of Lyons's group of degree 9,606,125 (GCL/MPI, 8 SPARC-2's, 12 hours, joint with Finkelstein and Tselman, Northeastern U.); condensation of a permutation matrix representation of J_4 from dimension 173,067,389 to a matrix representation for an algebra of dimension 5,693 (GCL/MPI, 14 heavily loaded Alpha's, 2-1/2 days, joint with Tselman, Northeastern U.); and enumeration of 8,835,156 cosets in order to determine a permutation representation of Lyons's group from a presentation (TOP-C, 4-processor Convex, 6 hours, joint with Havas, U. of Queensland, Australia).

Each of these algebraic computations was the largest of its kind at the time, and in each case nearly linear speedup with the number of processors was observed. Each example began with a sequential algorithm and used the parallel methodology to parallelize it in a natural manner. For example, the coset enumeration used sequential code written 6-1/2 years earlier by Schönert with no thought of parallelization: approximately 30 lines of a 1400-line C program were modified, and the program was then linked with the TOP-C library to achieve the parallelization. Each of these tests on larger problems has further stressed the general parallelization methodology and led to new ideas. Ongoing work in this spirit includes condensation of a permutation matrix representation of Th from dimension 976,841,775 to a matrix representation for an algebra of dimension 1,403 (GCL/MPI, 7 Alpha 3000's, 3 weeks expected time, joint with Lux, RWTH Aachen, Germany, and Mueller, Heidelberg, Germany) and construction of a permutation representation of dimension 173,067,389 for J_4 over GL(1333,11) (TOP-C, about 200 nodes of an SP-2, under 2 days expected, joint with Michler's group at Essen, Germany).