Federated Learning with Flexible Control
Shiqiang Wang, Jake Perazzone, et al.
INFOCOM 2023
We propose a distributed-memory parallel algorithm for computing some of the algebraically smallest eigenvalues (and corresponding eigenvectors) of a large, sparse, real symmetric positive definite matrix pencil that lie within a target interval. The algorithm is based on Chebyshev interpolation of the eigenvalues of the Schur complement, over the interface variables, of a domain-decomposition reordering of the pencil. It accordingly exposes two dimensions of parallelism: one derived from the reordering and one from the independence of the interpolation nodes. The new method demonstrates excellent parallel scalability, comparing favorably with PARPACK, and does not require factorization of the mass matrix, which significantly reduces memory consumption, especially for 3D problems. Our implementation is publicly available on GitHub.
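The core idea in the abstract can be illustrated on a toy scale: sample the smallest eigenvalue of the shifted Schur complement at Chebyshev nodes in the target interval (each node is an independent computation, one of the two parallelism dimensions mentioned), interpolate the resulting curve, and locate its zero crossings, which approximate eigenvalues of the pencil. This is a minimal sketch, not the paper's implementation: the 4x4 pencil, the interface partition size, and the target interval below are all illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy stand-ins for a large sparse pencil (K, M): a small Laplacian-like
# stiffness matrix with an identity mass matrix (assumed for illustration).
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
M = np.eye(4)

def smallest_schur_eig(sigma, n_interface=2):
    """Smallest eigenvalue of the Schur complement of K - sigma*M over the
    last n_interface (interface) variables; the partition is a toy choice."""
    A = K - sigma * M
    ni = A.shape[0] - n_interface
    Aii, Aig = A[:ni, :ni], A[:ni, ni:]
    Agi, Agg = A[ni:, :ni], A[ni:, ni:]
    S = Agg - Agi @ np.linalg.solve(Aii, Aig)
    return np.linalg.eigvalsh(S)[0]

# Chebyshev nodes on a target interval [a, b]. Each node is an independent
# Schur-complement eigenvalue problem and could be evaluated in parallel.
a, b = 1.2, 1.6
deg = 7
k = np.arange(deg + 1)
nodes = 0.5*(a + b) + 0.5*(b - a)*np.cos((2*k + 1)*np.pi/(2*(deg + 1)))

# Interpolate the eigenvalue curve and find where it crosses zero: those
# crossings approximate eigenvalues of the pencil inside [a, b].
p = C.Chebyshev.fit(nodes, [smallest_schur_eig(s) for s in nodes], deg)
roots = [r.real for r in p.roots()
         if abs(r.imag) < 1e-9 and a <= r.real <= b]
print(roots)  # the pencil eigenvalue 2 - 2*cos(2*pi/5) in [1.2, 1.6]
```

For this toy pencil the exact eigenvalues are 2 - 2*cos(k*pi/5), and the degree-7 interpolant recovers the one inside the interval to high accuracy; the real algorithm of course works with sparse factorizations of the interior blocks rather than dense solves.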
Herbert Woisetschläger, Alexander Erben, et al.
IJCAI 2024
Simone Magnani, Stefano Braghin, et al.
Big Data 2023
Elton Figueiredo de Souza Soares, Emilio Ashton Vital Brazil, et al.
Future Generation Computer Systems