It depends not only on the size of the matrices, but also on how sparse they are and what sparsity structure they have. Obviously, you can solve a tridiagonal system much faster than a system with the same number of nonzero entries distributed randomly through the matrix. As High Performance Mark noted, CG works for dense matrices as well as sparse, so the question you want to ask is more along the lines of "how large and how sparse does a matrix need to be before solvers can benefit from treating it as a sparse matrix instead of a dense matrix that happens to have a lot of zeros?"
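One rough way to probe that crossover yourself is to time a matrix-vector product in both storage formats at several densities. The sketch below is my own illustration (not part of the answer above), assuming SciPy; absolute numbers depend on your machine and BLAS.

```python
# Compare dense vs. CSR matrix-vector products at increasing density.
# At low density the CSR product touches only stored entries; at high
# density the dense BLAS product usually wins.
import time
import numpy as np
from scipy.sparse import random as sprandom

n = 2000
x = np.random.rand(n)
for density in (0.001, 0.01, 0.1, 0.5):
    A_sparse = sprandom(n, n, density=density, format="csr")
    A_dense = A_sparse.toarray()
    t0 = time.perf_counter(); A_sparse @ x; t_sp = time.perf_counter() - t0
    t0 = time.perf_counter(); A_dense @ x; t_dn = time.perf_counter() - t0
    print(f"density={density:5.3f}  sparse={t_sp:.2e}s  dense={t_dn:.2e}s")
```

Both products compute the same result; only the storage and the work per entry differ.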

How to efficiently add sparse matrices in Python
By : user2585660
Date : March 29 2020, 07:55 AM
I want to know how to efficiently add sparse matrices in Python. Have you tried timing the simplest method? code :
matrix_result = matrix_a + matrix_b
matrix_result = (matrix_a.tocsr() + matrix_b.tocsr()).tolil()
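To see which of the two lines above wins, you can benchmark them on representative data. This is a sketch of my own, assuming SciPy and LIL-format inputs; the random matrices stand in for your `matrix_a` and `matrix_b`.

```python
# Time LIL addition directly vs. converting to CSR, adding, and
# converting back. CSR arithmetic is implemented in compiled code and
# is usually much faster than operating on LIL matrices.
import timeit
from scipy.sparse import random as sprandom

matrix_a = sprandom(1000, 1000, density=0.01, format="lil")
matrix_b = sprandom(1000, 1000, density=0.01, format="lil")

t_lil = timeit.timeit(lambda: matrix_a + matrix_b, number=5)
t_csr = timeit.timeit(
    lambda: (matrix_a.tocsr() + matrix_b.tocsr()).tolil(), number=5
)
print(f"LIL addition: {t_lil:.3f}s  CSR round-trip: {t_csr:.3f}s")
```

If you do not need the result back in LIL format, dropping the final `.tolil()` saves another conversion.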

Efficiently accumulating a collection of sparse scipy matrices
By : Caroline Law
Date : March 29 2020, 07:55 AM
I think I've found a way to speed it up by a factor of ~10 if your matrices are very sparse. code :
In [1]: import numpy as np

In [2]: from scipy.sparse import csr_matrix

In [3]: def sum_sparse(m):
   ...:     x = np.zeros(m[0].shape)
   ...:     for a in m:
   ...:         # expand indptr into one explicit row index per stored value
   ...:         ri = np.repeat(np.arange(a.shape[0]), np.diff(a.indptr))
   ...:         x[ri, a.indices] += a.data
   ...:     return x
   ...:

In [4]: m = [np.zeros((100,100)) for i in range(1000)]

In [5]: for x in m:
   ...:     x.ravel()[np.random.randint(0, x.size, 10)] = 1.0
   ...:

In [6]: m = [csr_matrix(x) for x in m]

In [7]: (sum(m[1:], m[0]).todense() == sum_sparse(m)).all()
Out[7]: True

In [8]: %timeit sum(m[1:], m[0]).todense()
10 loops, best of 3: 145 ms per loop

In [9]: %timeit sum_sparse(m)
100 loops, best of 3: 18.5 ms per loop
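The trick `sum_sparse` relies on is that a CSR matrix stores its values in three flat arrays: `data` (the nonzero values), `indices` (their column indices), and `indptr` (where each row starts in the other two arrays), so `np.diff(indptr)` gives the nonzero count per row. A tiny illustration of my own:

```python
# Show how np.repeat + np.diff(indptr) recovers an explicit row index
# for every stored value of a CSR matrix.
import numpy as np
from scipy.sparse import csr_matrix

a = csr_matrix(np.array([[0, 1, 0],
                         [2, 0, 3],
                         [0, 0, 0]]))
row_idx = np.repeat(np.arange(a.shape[0]), np.diff(a.indptr))
print(row_idx)    # [0 1 1]  -- rows of the three stored values
print(a.indices)  # [1 0 2]  -- their columns
print(a.data)     # [1 2 3]  -- the values themselves
```

With `(row_idx, indices, data)` in hand, scattering each matrix into one dense accumulator is a single fancy-indexed `+=`, which avoids building intermediate sparse sums.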

How to Efficiently Combine Sparse Matrices Vertically
By : ChingChing
Date : March 29 2020, 07:55 AM
According to the MATLAB help, you can "disassemble" a sparse matrix with code :
[i, j, s] = find(S);  % row indices, column indices, nonzero values
[is, js, ss] = find(S);
[it, jt, st] = find(T);
ST = sparse([is; it + size(S,1)], [js; jt], [ss; st]);  % stack T below S
m = 1000; n = 2000; density = 0.01;
N = 100;
Q = cell(1, N);
is = Q;
js = Q;
ss = Q;
numrows = 0; % keep track of dimensions so far
for ii = 1:N
    Q{ii} = sprandn(m+ii, n, density); % so each matrix has a different row count
    [a, b, c] = find(Q{ii});
    sz = size(Q{ii});
    is{ii} = a' + numrows; js{ii} = b'; ss{ii} = c'; % append "on the corner"
    numrows = numrows + sz(1); % keep track of the size
end
tic
ST = sparse([is{:}], [js{:}], [ss{:}]);
fprintf(1, 'using find takes %.2f sec\n', toc);
using find takes 0.63 sec
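For readers working in Python rather than MATLAB, SciPy performs the same row-offsetting internally in `scipy.sparse.vstack`. This is my own analogy, not part of the answer above:

```python
# Stack many sparse matrices vertically. Each block may have a different
# row count, but all must share the same column count, as in the MATLAB
# example.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
blocks = [sparse.random(100 + i, 200, density=0.01, random_state=rng)
          for i in range(10)]
stacked = sparse.vstack(blocks, format="csr")
print(stacked.shape)  # (1045, 200): rows add up, columns unchanged
```

Requesting `format="csr"` avoids a second conversion if you need CSR arithmetic afterwards.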

Filling Sparse Matrices Efficiently in MATLAB
By : rasmus
Date : March 29 2020, 07:55 AM
I'm not sure if there is a way to avoid the loop, but I do get a factor of 2 to 20 speed increase (I ranged a from 3 to 5,000 with b fixed at 10,000) by building three large vectors (two for row and column indices and one for values) and building the sparse matrix after the loop: code :
strides = cellfun(@numel, Ind);
n = sum(strides);
I(n,1) = 0;  % preallocate the three vectors
J(n,1) = 0;
S(n,1) = 0;
bot = 1;
for k = 1:a
    top = bot + strides(k) - 1;
    mask = bot:top;
    %
    I(mask) = k;
    J(mask) = Ind{k};
    S(mask) = c(Ind{k});
    %
    bot = top + 1;
end
U = sparse(I, J, S, a, b);
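The same gather-triplets-then-construct pattern carries over to SciPy via COO format. The sketch below is mine, with made-up data; `Ind` and `c` mirror the MATLAB variables above.

```python
# Collect all (row, col, value) triplets first, then build the sparse
# matrix in one call instead of inserting entries one at a time.
import numpy as np
from scipy.sparse import coo_matrix

a, b = 5, 10
rng = np.random.default_rng(1)
c = rng.random(b)                        # values indexed by column
Ind = [rng.choice(b, size=3, replace=False) for _ in range(a)]

strides = np.array([len(ix) for ix in Ind])
I = np.repeat(np.arange(a), strides)     # row index for every entry
J = np.concatenate(Ind)                  # column indices
S = c[J]                                 # values, like c(Ind{k})
U = coo_matrix((S, (I, J)), shape=(a, b)).tocsr()
print(U.nnz)  # 15: three entries in each of the five rows
```

As in MATLAB, constructing the matrix once from complete index/value arrays avoids the quadratic cost of repeated insertion into an existing sparse structure.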

Can Armadillo efficiently multiply sparse-by-sparse and sparse-by-dense matrices into a dense result?
By : ramesh dahal
Date : March 29 2020, 07:55 AM
You can use the arma::mat constructor, which takes a sparse matrix and converts its data to a dense one: code :
arma::mat out1(S_a * S_b);  // sparse * sparse, densified on construction
arma::mat out2(S_b * D);    // sparse * dense

