
I was thinking about a problem in analyzing a social network (counting friends-in-common between all pairs of members) that requires squaring its adjacency matrix, and I started reading up on algorithms for multiplying sparse matrices.
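For concreteness, here is a tiny sketch of the computation I have in mind (scipy.sparse is used only for illustration): squaring the adjacency matrix gives, in each off-diagonal entry, the number of friends the two members have in common.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Tiny undirected friendship graph: edges 0-1, 0-2, 1-2, 2-3.
rows = [0, 1, 0, 2, 1, 2, 2, 3]
cols = [1, 0, 2, 0, 2, 1, 3, 2]
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

# (A^2)[i, j] counts length-2 paths i -> k -> j, i.e. friends i and j have in common.
C = A.dot(A)
print(C.toarray())
# Off-diagonal entries are friends-in-common; the diagonal holds each member's degree.
```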

However, all I have found so far are different ways of distributing the more or less naive "outer product" algorithm across processors: the same total number of multiplications and additions, just with different amounts of communication and additional algorithmic overhead (which, admittedly, is important).
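By the "outer product" formulation I mean the arrangement sketched below (illustrative plain Python, not any particular library): the product is accumulated one summation index $k$ at a time, and the parallel schemes I have seen mostly differ in how these per-$k$ tasks are assigned to processors and how the partial results are merged.

```python
# "Outer product" formulation: C is the sum over k of (column k of A) x (row k of B).
# Matrices are stored as dictionaries of non-zeros; purely illustrative, not tuned.
from collections import defaultdict

def spgemm_outer_product(A_cols, B_rows, num_k):
    """A_cols[k] = {i: a_ik}, B_rows[k] = {j: b_kj}; returns {(i, j): c_ij}."""
    C = defaultdict(float)
    for k in range(num_k):                             # each k is an independent task
        for i, a_ik in A_cols.get(k, {}).items():
            for j, b_kj in B_rows.get(k, {}).items():
                C[(i, j)] += a_ik * b_kj               # partial products get summed
    return dict(C)

# A = [[1, 0], [2, 3]], B = [[4, 0], [5, 6]], stored by columns of A and rows of B.
A_cols = {0: {0: 1.0, 1: 2.0}, 1: {1: 3.0}}
B_rows = {0: {0: 4.0}, 1: {0: 5.0, 1: 6.0}}
print(spgemm_outer_product(A_cols, B_rows, num_k=2))
# {(0, 0): 4.0, (1, 0): 23.0, (1, 1): 18.0}
```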

The most non-trivial algorithm I found is the Yuster-Zwick algorithm described in "Fast sparse matrix multiplication", which is essentially a combination of the same old naive algorithm with a fast dense method applied to the dense part of the problem.
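My rough understanding of the idea, boiled down to a toy sketch: split the summation indices by how much work they contribute, send the heavy ones through a dense method and the light ones through the naive sparse path, then add the two partial products. The threshold below is an arbitrary choice for illustration, and an ordinary dense multiply stands in for a genuinely fast dense algorithm; the real algorithm chooses the split to optimize the asymptotic exponent.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def split_multiply(A, B, threshold):
    """A, B: scipy sparse matrices; `threshold` is an ad hoc work cutoff per index k."""
    A_csc = A.tocsc()
    B_csr = B.tocsr()
    col_nnz = np.diff(A_csc.indptr)            # non-zeros in each column of A
    row_nnz = np.diff(B_csr.indptr)            # non-zeros in each row of B
    work = col_nnz * row_nnz                   # multiplications contributed by index k
    heavy = np.where(work >= threshold)[0]     # indices handled by the dense method
    light = np.where(work < threshold)[0]      # indices handled naively
    # Heavy part: ordinary dense multiply standing in for a fast dense algorithm.
    C_heavy = A_csc[:, heavy].toarray() @ B_csr[heavy, :].toarray()
    # Light part: ordinary sparse-sparse multiply (the "naive" path).
    C_light = A_csc[:, light] @ B_csr[light, :]
    return C_heavy + C_light.toarray()

A = sparse_random(50, 50, density=0.05, format="csr", random_state=0)
B = sparse_random(50, 50, density=0.05, format="csr", random_state=1)
assert np.allclose(split_multiply(A, B, threshold=9), (A @ B).toarray())
```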

I looked at how sparse matrix multiplication is implemented in MLlib, and it, too, appears to use a simple block-based algorithm.
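For reference, this is the kind of usage I was looking at: PySpark's `BlockMatrix.multiply`, driven from a `CoordinateMatrix` of non-zero entries. The block sizes and the running `SparkContext` named `sc` below are my own illustrative choices, not anything MLlib mandates; as far as I can tell, the product is formed by multiplying compatible local blocks and summing the partial results per output block.

```python
# Usage sketch of Spark MLlib's BlockMatrix multiplication; assumes an existing
# SparkContext named `sc`. Block sizes are arbitrary illustrative choices.
from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry

entries_a = sc.parallelize([MatrixEntry(0, 1, 1.0), MatrixEntry(2, 0, 2.0)])
entries_b = sc.parallelize([MatrixEntry(0, 0, 4.0), MatrixEntry(1, 2, 3.0)])

# Convert the (i, j, value) entries into block-partitioned distributed matrices.
A = CoordinateMatrix(entries_a).toBlockMatrix(rowsPerBlock=1024, colsPerBlock=1024)
B = CoordinateMatrix(entries_b).toBlockMatrix(rowsPerBlock=1024, colsPerBlock=1024)

# Block-level naive multiplication: pair up compatible blocks, multiply them
# locally, and sum the partial products for each output block.
C = A.multiply(B)
print(C.toCoordinateMatrix().entries.collect())
```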

Are there any parallel algorithms for multiplying sparse matrices that are substantially different from the naive one - as different as the Strassen and Coppersmith-Winograd algorithms are from the naive algorithm for multiplying dense matrices?

For concreteness, let us assume that the matrices are sparse enough that the numbers of non-zeros in the arguments and in the result are both $O(N)$.

jkff

1 Answer


This recent paper proposes a different approach, based on hypergraph partitioning.

Grey Ballard, Alex Druinsky, Nicholas Knight, and Oded Schwartz. 2015. Brief Announcement: Hypergraph Partitioning for Parallel Sparse Matrix-Matrix Multiplication. In Proceedings of the 27th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA '15). ACM, New York, NY, USA, 86-88. DOI: 10.1145/2755573.2755613, http://doi.acm.org/10.1145/2755573.2755613
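As I read the model (this is my paraphrase, not code from the paper), each elementary product $a_{ik} b_{kj}$ becomes a vertex and each non-zero of $A$, $B$, and $C$ becomes a net connecting the multiplications that read or write it; partitioning the vertices across processors balances the flops, while the cut nets measure the communication volume. A toy construction of that hypergraph:

```python
# Fine-grained hypergraph for sparse matrix-matrix multiplication (my paraphrase):
# vertices are elementary products a_ik * b_kj, nets are the non-zeros they touch.
from collections import defaultdict

def build_spgemm_hypergraph(A_nonzeros, B_nonzeros):
    """A_nonzeros, B_nonzeros: iterables of (i, k) and (k, j) index pairs."""
    B_by_k = defaultdict(list)
    for k, j in B_nonzeros:
        B_by_k[k].append(j)

    vertices = []              # one vertex per elementary multiplication (i, k, j)
    nets = defaultdict(set)    # net -> indices of the vertices it connects
    for i, k in A_nonzeros:
        for j in B_by_k[k]:
            v = len(vertices)
            vertices.append((i, k, j))
            nets[("A", i, k)].add(v)   # net for the input non-zero a_ik
            nets[("B", k, j)].add(v)   # net for the input non-zero b_kj
            nets[("C", i, j)].add(v)   # net for the output non-zero c_ij
    return vertices, nets

verts, nets = build_spgemm_hypergraph(A_nonzeros=[(0, 0), (1, 0), (1, 1)],
                                      B_nonzeros=[(0, 0), (1, 0), (1, 1)])
print(len(verts), "multiplication tasks,", len(nets), "nets")
# The partitioning itself would be handed to a tool such as PaToH or hMETIS.
```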

Massimo Cafaro