Consider the following dense matrix:
1 2 3 4 5 6 7 8 9
If I store it in a contiguous block:
1 2 3 4 5 6 7 8 9
I can directly access any element of the matrix with some basic arithmetic, given the number of rows and columns.
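As a sketch of that arithmetic (names here are illustrative, not from any particular library), row-major indexing into the flat block looks like this:

```python
# Hypothetical sketch: row-major indexing into a dense matrix stored flat.
dense = [1, 2, 3, 4, 5, 6, 7, 8, 9]  # the 3x3 matrix above, row by row
n_cols = 3

def get(flat, row, col, n_cols):
    # element (row, col) lives at flat offset row * n_cols + col
    return flat[row * n_cols + col]

print(get(dense, 1, 2, n_cols))  # element at row 1, col 2 -> 6
```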
Now consider this sparse matrix:
1 0 0 0 0 2 0 3 0
To store this matrix effectively, I discard the zero elements, keeping only the non-zero values, so it becomes
1 2 3
But this, obviously, is not enough information to perform operations such as matrix-vector multiplication! Therefore, we need to store additional information to recover where each element sits in the matrix.
And regardless of the sparse storage method used, we must:
- do extra work to access elements, and
- store additional information to maintain the matrix structure.
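To make the extra bookkeeping concrete, here is a minimal sketch of the Yale (CSR) layout for the 3x3 sparse matrix above, including the extra work a matrix-vector product has to do; the variable names are my own, not from any library:

```python
# Hypothetical CSR (Yale) sketch for the 3x3 sparse matrix:
#   1 0 0
#   0 0 2
#   0 3 0
values  = [1, 2, 3]    # non-zero values, row by row
col_idx = [0, 2, 1]    # column of each stored value
row_ptr = [0, 1, 2, 3] # row i's values live in values[row_ptr[i]:row_ptr[i+1]]

def csr_matvec(values, col_idx, row_ptr, x):
    # y[i] = sum over the non-zeros of row i of value * x[column]
    y = [0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

print(csr_matvec(values, col_idx, row_ptr, [1, 1, 1]))  # -> [1, 2, 3]
```

Note the two auxiliary arrays `col_idx` and `row_ptr`: they are exactly the "additional information" the matrix structure requires.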
So the advantage of sparse storage arises only if there are enough zeros in the matrix to compensate for the additional information we store to preserve its structure. For example, in the Yale format, we only save memory when the number of non-zero values (NNZ) is less than (m(n - 1) - 1) / 2, where m is the number of rows and n is the number of columns.
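That threshold follows from counting entries: Yale/CSR stores NNZ values, NNZ column indices, and m + 1 row pointers, so it beats the m * n dense entries precisely when 2 * NNZ + m + 1 < m * n, i.e. NNZ < (m(n - 1) - 1) / 2. A quick check, assuming every stored entry (value, index, or pointer) costs one unit:

```python
# Sketch: when does Yale/CSR use less memory than dense storage?
# Assumes values, column indices, and row pointers each cost one unit.
def yale_saves_memory(m, n, nnz):
    dense_entries = m * n
    yale_entries = 2 * nnz + m + 1  # values + column indices + row pointers
    return yale_entries < dense_entries

# For the 3x3 example the threshold is (3*(3-1) - 1) / 2 = 2.5:
print(yale_saves_memory(3, 3, 3))  # 10 entries vs 9 -> False
print(yale_saves_memory(3, 3, 2))  # 8 entries vs 9 -> True
```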