\( \newcommand{\blu}[1]{{\color{blue}#1}} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\grn}[1]{{\color{green!50!black}#1}} \newcommand{\local}{_i} \newcommand{\inv}{^{-1}} % Index for interface and interior \newcommand{\G}{\Gamma} \newcommand{\Gi}{\Gamma_i} \newcommand{\I}{{\cal I}} \newcommand{\Ii}{\I_i} % Matrix A \newcommand{\A}{{\cal A}} \newcommand{\Ai}{\A\local} \newcommand{\Aj}{\A_j} \newcommand{\Aib}{\bar{\A}\local} \newcommand{\AII}{\A_{\I\I}} \newcommand{\AIG}{\A_{\I\G}} \newcommand{\AGI}{\A_{\G\I}} \newcommand{\AGG}{\A_{\G\G}} \newcommand{\AIiIi}{\A_{\Ii\Ii}} \newcommand{\AIiGi}{\A_{\Ii\Gi}} \newcommand{\AGiIi}{\A_{\Gi\Ii}} \newcommand{\AGiGi}{\A_{\Gi\Gi}} \newcommand{\AGiGiw} {{\ensuremath{\A_{\Gi\Gi}^w}}} \newcommand{\AIIi}{\AII\local} \newcommand{\AIGi}{\AIG\local} \newcommand{\AGIi}{\AGI\local} \newcommand{\AGGi}{\AGG\local} \newcommand{\Ab}{\bar{\A}} \newcommand{\Ah}{{\widehat{\A}}} \newcommand{\Aih}{{\Ah\local}} \newcommand{\At}{{\widetilde{\A}}} \newcommand{\Ait}{{\At\local}} \newcommand{\Ao}{\A_0} \newcommand{\Aot}{\At_0} \newcommand{\Aob}{\Ab_0} \newcommand{\AiNN}{\Ai^{(NN)}{}} \newcommand{\AitNN}{\Ait^{(NN)}{}} \newcommand{\AiAS}{\Ai^{(AS)}{}} \newcommand{\AitAS}{\Ait^{(AS)}{}} % Matrix S \renewcommand{\S}{{\cal S}} \newcommand{\Si}{\S\local} \newcommand{\Sb}{\bar{\S}} \newcommand{\Sib}{\Sb\local} \newcommand{\Sh}{{\widehat{\S}}} \newcommand{\Sih}{{\Sh\local}} \newcommand{\St}{{\widetilde{\S}}} \newcommand{\Sit}{{\St\local}} \newcommand{\So}{\S_0} \newcommand{\Soi}{\S_0^i} \newcommand{\Sot}{\St_0} \newcommand{\Sob}{\Sb_0} \newcommand{\SiNN}{\Si^{(NN)}{}} \newcommand{\SitNN}{\Sit^{(NN)}{}} \newcommand{\SiAS}{\Si^{(AS)}{}} \newcommand{\SitAS}{\Sit^{(AS)}{}} % Matrix K \newcommand{\K}{{\cal K}} \newcommand{\Ki}{\K\local} \newcommand{\Kb}{\bar{\K}} \newcommand{\Kib}{\Kb\local} \newcommand{\Kh}{{\widehat{\K}}} \newcommand{\Kih}{{\Kh\local}} \newcommand{\Kt}{{\widetilde{\K}}} \newcommand{\Kit}{{\Kt\local}} \newcommand{\Ko}{\K_0} 
\newcommand{\Kot}{\Kt_0} \newcommand{\Kob}{\Kb_0} \newcommand{\KiNN}{\Ki^{(NN)}{}} \newcommand{\KitNN}{\Kit^{(NN)}{}} \newcommand{\KiAS}{\Ki^{(AS)}{}} \newcommand{\KitAS}{\Kit^{(AS)}{}} \newcommand{\KII}{\K_{\I\I}} \newcommand{\KIG}{\K_{\I\G}} \newcommand{\KGI}{\K_{\G\I}} \newcommand{\KGG}{\K_{\G\G}} \newcommand{\KIiIi}{\K_{\Ii\Ii}} \newcommand{\KIiGi}{\K_{\Ii\Gi}} \newcommand{\KGiIi}{\K_{\Gi\Ii}} \newcommand{\KGiGi}{\K_{\Gi\Gi}} \newcommand{\KIIi}{\KII\local} \newcommand{\KIGi}{\KIG\local} \newcommand{\KGIi}{\KGI\local} \newcommand{\KGGi}{\KGG\local} \newcommand{\KGiGiw} {{\ensuremath{\K_{\Gi\Gi}^w}}} % Matrix B \newcommand{\B}{{\cal B}} \newcommand{\Bi}{\B\local} \newcommand{\Bib}{\widehat{\B}\local} \newcommand{\Bob}{\widehat{\B}_0} % Matrix C \newcommand{\C}{{\cal C}} % Matrix T \newcommand{\T}{{\cal T}} \newcommand{\Ti}{{\T\local}} % Vectors \newcommand{\uI}{u_\I} \newcommand{\uG}{u_\G} \newcommand{\xI}{x_\I} \newcommand{\xG}{x_\G} \newcommand{\bI}{b_\I} \newcommand{\bG}{b_\G} \newcommand{\fI}{f_\I} \newcommand{\fG}{f_\G} \newcommand{\ftG}{\widetilde f_\G} \newcommand{\ftGi}{\widetilde f_\Gi^{(i)}} \newcommand{\ftGj}{\widetilde f_\Gj^{(j)}} \)

Composyx

Table of Contents

1. General overview


See the pdf version.

Composyx (previously Compose, or Maphys++) is a linear algebra C++ library focused on composability. Its purpose is to let the user express a large panel of algorithms through a high-level interface, ranging from laptop prototypes to parallel computations on many-node supercomputers.

See tutorial for more information on how to use the library.

2. The composyx library

2.1. Introduction

composyx provides its own basic linear algebra datatypes for matrices, vectors, etc. It is also possible to use external datatypes with composyx tools.

2.2. Namespace

The C++ namespace used by Composyx is composyx.

using namespace composyx;

2.3. Scalar types

In the rest of the documentation, we refer to Scalar for the following C++ types:

  • float: single precision (32 bits) real;
  • double: double precision (64 bits) real;
  • std::complex<float>: single precision complex (two 32-bit floats, 64 bits in total);
  • std::complex<double>: double precision complex (two 64-bit floats, 128 bits in total).

Those are the types supported by composyx datatypes, in other words the types of data that can be stored in a vector or a matrix.

We may also refer to Real as the real type associated with the Scalar type. It is, for example, the type you should use for a norm or for the tolerance criterion of an iterative method. For real types (float and double), Scalar and Real are the same type, but this is not the case for complex types: for instance, the Real type associated with std::complex<float> is float.

In composyx, one can access the Real type associated to a Scalar as follows:

using Real = typename composyx::arithmetic_real<Scalar>::type;

See Arithmetic for more details.

2.4. Linear algebra conventions

2.4.1. Functions on vectors

Given two vectors \(u\) and \(v\):

  • composyx::dot(u, v) is the scalar product of \(u\) and \(v\).
  • composyx::size(u) is the size of the vector \(u\).

2.4.2. Functions on matrices

For a matrix A, when relevant, the following functions are defined in the namespace composyx:

  • n_rows(A) and n_cols(A): respectively the number of rows and columns of the matrix.
  • transpose(A): the transpose of A. If A has complex coefficients, the imaginary part is left unchanged. For composyx types, one can also write A.t().
  • adjoint(A): the conjugate transpose of A. If A has real coefficients, it is equivalent to transpose(A). For composyx types, one can also write A.h().

2.4.3. Arithmetic operators

The usual operators (\(+, -, *\)) are defined for vectors and matrices. If the dimensions are not consistent, an exception is thrown. The operator \(/\) can also be used for division by a scalar value.

2.4.4. Special operators for pseudo-inverses

On composyx objects, operators ~ and % have special semantics.

2.4.4.1. Operator~

The unary operator ~ is used as an equivalent of the pseudo-inverse \(A^+\) of a matrix \(A\). When it is possible and relevant, writing

auto A_pi = ~A;

will create in A_pi an instance of an operator representing the pseudo-inverse of A. For example, if A is a square invertible matrix, A_pi is a linear solver that can be applied to a vector b to return x such that \(A x = b\); in other words, you can see A_pi as the operator \(A^{-1}\).

2.4.4.2. Operator%

The binary operator % is inspired by MATLAB's \ operator:

auto x = A % b;

is equivalent to computing \(x = A^+ b\). It is also equivalent to writing:

auto x = ~A * b;

It is recommended to use ~A rather than this operator, especially if you want to use the solver instance ~A several times. For example, to solve a system with successive right-hand sides, keeping an instance of the solver ~A is better: it may store a factorization of \(A\), or some data about \(A\), to accelerate the solve phase. With the % operator, this information is recomputed at every solve.

2.4.4.3. Choosing dense kernels and solvers

For the selection of the dense kernels and solvers used by default with these operators, see Dense kernel selection.

3. Coding tools

3.1. Arithmetics

3.2. Error management

3.3. Algorithms for 1D arrays

3.4. Cross product iterator

3.5. Macros

3.6. SZ compressor wrapper

4. Matrix interface

4.1. Basic concepts

4.2. Linear algebra concepts

4.3. Common traits and definitions

4.4. Dense kernels static and dynamic selection

4.5. Eigen wrapper

4.6. Eigen dense solver

4.7. Eigen sparse solver

4.8. Armadillo wrapper

5. Datatypes

5.1. Matrix properties

5.2. Dense vectors and matrices

5.2.1. Dense data

5.2.2. Dense matrix

5.2.3. Expression templates for dense matrices

5.3. Diagonal matrices

5.4. Sparse matrices

5.4.1. Base sparse matrix class

5.4.2. Sparse matrix COO

5.4.3. Sparse matrix CSC

5.4.4. Sparse matrix LIL

5.5. Partitioned vectors and matrices

5.5.1. Partitioned base class

5.5.2. Partitioned operator

5.5.3. Partitioned vector

5.5.4. Partitioned matrix (domain decomposition style)

5.5.5. Partitioned dense matrix

Experimental feature. Interface is expected to change.

PartDenseMatrix

6. MPI wrapper

6.1. MPI

MPI

7. Parallel domain decomposition

7.1. Process

7.2. Subdomain

8. Solvers

8.1. Linear operator interface

8.2. Iterative solvers

8.2.1. Generic interface

8.2.2. Generic tests

8.2.3. Jacobi

8.2.4. Conjugate gradient

8.2.5. GCR

GCR

8.2.6. BiCGSTAB

8.2.7. GMRES

8.2.8. Fabulous

8.2.9. Iterative refinement solver

8.3. Direct solvers

8.3.1. BLAS / LAPACK

8.3.2. <T>Lapack solver

8.3.3. Chameleon solver

8.3.4. Pastix

8.3.5. Mumps

8.3.6. Qr-Mumps

8.4. Hybrid solvers

8.4.1. Schur complement solver

8.4.2. Partitioned Schur complement solver

8.4.3. Implicit Schur operator

8.5. Centralized solver

8.6. Coarse solver

9. Preconditioners

9.1. Diagonal preconditioner

9.2. Abstract Schwarz

9.3. Two level abstract Schwarz

9.4. Eigen-deflated preconditioner

10. I/O

10.1. Matrix market loader

10.2. Domain decomposition

11. Linear algebra tools

11.1. Induced matrix norm

11.2. Arpack eigen solver

12. Kernels

12.1. Blas/lapack kernels

12.2. <T>lapack kernels

12.3. Chameleon kernels

12.4. Sparse kernels

13. Graph and partitioning

13.1. Paddle

14. Unit test matrices

14.1. Test matrices

15. Benchmark results

Date: 27/09/2024 at 12:15:53

Author: Concace

Created: 2024-09-27 Fri 12:15
