\( \newcommand{\blu}[1]{{\color{blue}#1}} \newcommand{\red}[1]{{\color{red}#1}} \newcommand{\grn}[1]{{\color{green!50!black}#1}} \newcommand{\local}{_i} \newcommand{\inv}{^{-1}} % Index for interface and interior \newcommand{\G}{\Gamma} \newcommand{\Gi}{\Gamma_i} \newcommand{\I}{{\cal I}} \newcommand{\Ii}{\I_i} % Matrix A \newcommand{\A}{{\cal A}} \newcommand{\Ai}{\A\local} \newcommand{\Aj}{\A_j} \newcommand{\Aib}{\bar{\A}\local} \newcommand{\AII}{\A_{\I\I}} \newcommand{\AIG}{\A_{\I\G}} \newcommand{\AGI}{\A_{\G\I}} \newcommand{\AGG}{\A_{\G\G}} \newcommand{\AIiIi}{\A_{\Ii\Ii}} \newcommand{\AIiGi}{\A_{\Ii\Gi}} \newcommand{\AGiIi}{\A_{\Gi\Ii}} \newcommand{\AGiGi}{\A_{\Gi\Gi}} \newcommand{\AGiGiw} {{\ensuremath{\A_{\Gi\Gi}^w}}} \newcommand{\AIIi}{\AII\local} \newcommand{\AIGi}{\AIG\local} \newcommand{\AGIi}{\AGI\local} \newcommand{\AGGi}{\AGG\local} \newcommand{\Ab}{\bar{\A}} \newcommand{\Ah}{{\widehat{\A}}} \newcommand{\Aih}{{\Ah\local}} \newcommand{\At}{{\widetilde{\A}}} \newcommand{\Ait}{{\At\local}} \newcommand{\Ao}{\A_0} \newcommand{\Aot}{\At_0} \newcommand{\Aob}{\Ab_0} \newcommand{\AiNN}{\Ai^{(NN)}{}} \newcommand{\AitNN}{\Ait^{(NN)}{}} \newcommand{\AiAS}{\Ai^{(AS)}{}} \newcommand{\AitAS}{\Ait^{(AS)}{}} % Matrix S \renewcommand{\S}{{\cal S}} \newcommand{\Si}{\S\local} \newcommand{\Sb}{\bar{\S}} \newcommand{\Sib}{\Sb\local} \newcommand{\Sh}{{\widehat{\S}}} \newcommand{\Sih}{{\Sh\local}} \newcommand{\St}{{\widetilde{\S}}} \newcommand{\Sit}{{\St\local}} \newcommand{\So}{\S_0} \newcommand{\Soi}{\S_0^i} \newcommand{\Sot}{\St_0} \newcommand{\Sob}{\Sb_0} \newcommand{\SiNN}{\Si^{(NN)}{}} \newcommand{\SitNN}{\Sit^{(NN)}{}} \newcommand{\SiAS}{\Si^{(AS)}{}} \newcommand{\SitAS}{\Sit^{(AS)}{}} % Matrix K \newcommand{\K}{{\cal K}} \newcommand{\Ki}{\K\local} \newcommand{\Kb}{\bar{\K}} \newcommand{\Kib}{\Kb\local} \newcommand{\Kh}{{\widehat{\K}}} \newcommand{\Kih}{{\Kh\local}} \newcommand{\Kt}{{\widetilde{\K}}} \newcommand{\Kit}{{\Kt\local}} \newcommand{\Ko}{\K_0} 
\newcommand{\Kot}{\Kt_0} \newcommand{\Kob}{\Kb_0} \newcommand{\KiNN}{\Ki^{(NN)}{}} \newcommand{\KitNN}{\Kit^{(NN)}{}} \newcommand{\KiAS}{\Ki^{(AS)}{}} \newcommand{\KitAS}{\Kit^{(AS)}{}} \newcommand{\KII}{\K_{\I\I}} \newcommand{\KIG}{\K_{\I\G}} \newcommand{\KGI}{\K_{\G\I}} \newcommand{\KGG}{\K_{\G\G}} \newcommand{\KIiIi}{\K_{\Ii\Ii}} \newcommand{\KIiGi}{\K_{\Ii\Gi}} \newcommand{\KGiIi}{\K_{\Gi\Ii}} \newcommand{\KGiGi}{\K_{\Gi\Gi}} \newcommand{\KIIi}{\KII\local} \newcommand{\KIGi}{\KIG\local} \newcommand{\KGIi}{\KGI\local} \newcommand{\KGGi}{\KGG\local} \newcommand{\KGiGiw} {{\ensuremath{\K_{\Gi\Gi}^w}}} % Matrix B \newcommand{\B}{{\cal B}} \newcommand{\Bi}{\B\local} \newcommand{\Bib}{\widehat{\B}\local} \newcommand{\Bob}{\widehat{\B}_0} % Matrix C \newcommand{\C}{{\cal C}} % Matrix T \newcommand{\T}{{\cal T}} \newcommand{\Ti}{{\T\local}} % Vectors \newcommand{\uI}{u_\I} \newcommand{\uG}{u_\G} \newcommand{\xI}{x_\I} \newcommand{\xG}{x_\G} \newcommand{\bI}{b_\I} \newcommand{\bG}{b_\G} \newcommand{\fI}{f_\I} \newcommand{\fG}{f_\G} \newcommand{\ftG}{\widetilde f_\G} \newcommand{\ftGi}{\widetilde f_\Gi^{(i)}} \newcommand{\ftGj}{\widetilde f_\Gj^{(j)}} \)

Composyx

Table of Contents

1. General overview

See the PDF version.

Composyx (previously Compose, or Maphys++) is a linear algebra C++ library focused on composability. Its purpose is to let the user express a wide range of algorithms through a high-level interface, scaling from laptop prototypes to parallel computations on many-node supercomputers.

2. Documentation and tutorials

2.1. Quick start tutorial

A quick presentation of composyx, and how to get started with the library.

Tutorial

2.2. GMRES parameters

A description of parameters for iterative solvers (with a focus on GMRES).

Reference card for GMRES

2.3. Operators

How to define an operator in composyx.

Operator definition

2.4. Fortran and C drivers

Usage of C and Fortran drivers.

C and Fortran drivers

3. The composyx library

3.1. Introduction

composyx provides its own basic linear algebra datatypes for matrices, vectors, … It is also possible to use external datatypes with composyx tools.

3.2. Namespace

The C++ namespace used by Composyx is composyx.

using namespace composyx;

3.3. Scalar types

In the rest of the documentation, Scalar refers to one of the following C++ types:

  • float: single precision (32-bit) real;
  • double: double precision (64-bit) real;
  • std::complex<float>: single precision complex (two 32-bit reals);
  • std::complex<double>: double precision complex (two 64-bit reals).

Those are the types supported by composyx datatypes, in other words the types of data that can be stored in a vector or a matrix.

We may also refer to Real as the real type associated with the Scalar type. It is, for example, the type you should use for a norm or a tolerance criterion in an iterative method. For real types (float and double), Scalar and Real are the same, but not for complex types: for instance, the Real type associated with std::complex<float> is float.

In composyx, one can access the Real type associated to a Scalar as follows:

using Real = typename composyx::arithmetic_real<Scalar>::type;

See Arithmetic for more details.

3.4. Linear algebra conventions

3.4.1. Functions on vectors

Given two vectors \(u\) and \(v\):

  • composyx::dot(u, v) is the scalar product of \(u\) and \(v\).
  • composyx::size(u) is the size of the vector \(u\).

3.4.2. Functions on matrices

For a matrix A, when it's relevant to do so, we define the following functions in the namespace composyx:

  • n_rows(A) and n_cols(A): respectively, the number of rows and columns of the matrix.
  • transpose(A): the transpose of A. If A has complex coefficients, the imaginary part is left unchanged (no conjugation). For types defined in composyx, one can also write A.t().
  • adjoint(A): the conjugate transpose of A. If A has real coefficients, it is equivalent to transpose(A). For types defined in composyx, one can also write A.h().

3.4.3. Arithmetic operators

The usual operators (\(+, -, *\)) should be defined for vectors and matrices. If dimensions are not consistent an exception will be thrown. The operator \(/\) can also be used for division by a scalar value.

3.4.4. Special operators for pseudo-inverses

On composyx objects, operators ~ and % have special semantics.

3.4.4.1. Operator~

The unary operator ~ is used as an equivalent of the pseudo-inverse \(A^+\) of a matrix \(A\). When it's possible and relevant, writing

auto A_pi = ~A;

will create in A_pi an operator instance representing the pseudo-inverse of A. For example, if A is a square matrix, A_pi will be a linear solver that can be applied to a vector b to return x such that \(A x = b\); in other words, you can see A_pi as the operator \(A^{-1}\).

3.4.4.2. Operator%

The binary operator % is inspired by MATLAB's \ operator:

auto x = A % b;

is equivalent to computing \(x = A^+ b\). It is also equivalent to writing:

auto x = ~A * b;

It is recommended to use ~A rather than this operator, especially if you want to use the solver instance ~A several times. For example, if you want to solve systems with successive right-hand sides, keeping an instance of the solver via ~A is better: it may store a factorization of \(A\), or some data about \(A\), to accelerate the solve phase. If you use the % operator, this information is recomputed at every solve.

3.4.4.3. Choosing implicit kernels and solver to use

For the selection of the dense kernels and solvers used by default with these operators, see dense kernel selection.

For the sparse solver selection, see sparse solver selection.

For sparse kernel selection, see sparse kernels.

4. Context

5. Coding tools

5.1. Arithmetics

5.2. Error management

5.3. Algorithms for 1D arrays

5.4. Cross product iterator

5.5. Macros

5.6. SZ compressor wrapper

6. Matrix interface

6.1. Basic concepts

6.2. Linear algebra concepts

6.3. Common traits and definitions

6.4. Implicit dense kernels and solver

6.5. Implicit sparse solver

6.6. Eigen wrapper

6.7. Eigen dense solver

6.8. Eigen sparse solver

6.9. Armadillo wrapper

7. Datatypes

7.1. Matrix properties

7.2. Dense vectors and matrices

7.2.1. Dense data

7.2.2. Dense matrix

7.2.3. Expression templates for dense matrices

7.3. Diagonal matrices

7.4. Sparse matrices

7.4.1. Base sparse matrix class

7.4.2. Sparse matrix COO

7.4.3. Sparse matrix CSC

7.4.4. Sparse matrix CSR

7.4.5. Sparse matrix LIL

7.5. Partitioned vectors and matrices

7.5.1. Partitioned base class

7.5.2. Partitioned operator

7.5.3. Partitioned vector

7.5.4. Partitioned matrix (domain decomposition style)

7.5.5. Partitioned dense matrix

Experimental feature. Interface is expected to change.

PartDenseMatrix

8. MPI wrapper

8.1. MPI

MPI

9. Parallel domain decomposition

9.1. Process

9.2. Subdomain

10. Solvers

10.1. Linear operator interface

10.2. Iterative solvers

10.2.1. Generic interface

10.2.2. Generic tests

10.2.3. Jacobi

10.2.4. Conjugate gradient

10.2.5. GCR

GCR

10.2.6. BiCGSTAB

10.2.7. GMRES

10.2.8. Fabulous

10.2.9. Iterative refinement solver

10.3. Direct solvers

10.3.1. BLAS / LAPACK

10.3.2. <T>Lapack solver

10.3.3. Chameleon solver

10.3.4. Pastix

10.3.5. Mumps

10.3.6. Qr-Mumps

10.4. Hybrid solvers

10.4.1. Schur complement solver

10.4.2. Partitioned Schur complement solver

10.4.3. Implicit Schur operator

10.5. Centralized solver

10.6. Coarse solver

11. Preconditioners

11.1. Diagonal preconditioner

11.2. Abstract Schwarz

11.3. Two level abstract Schwarz

11.4. Eigen-deflated preconditioner

12. Operator interfaces

12.1. Lite operator

12.2. Custom operator

13. I/O

13.1. Matrix market loader

13.2. Domain decomposition

14. Linear algebra tools

14.1. Induced matrix norm

14.2. Arpack eigen solver

15. Kernels

15.1. Blas/lapack kernels

15.2. <T>lapack kernels

15.3. Chameleon kernels

15.4. Sparse kernels

15.4.1. Sparse kernel interface

15.4.2. Composyx sparse kernels

15.4.3. RSB sparse kernels

15.4.4. MKL sparse kernels

16. Graph and partitioning

16.1. Paddle

17. Unit test matrices

17.1. Test matrices

18. Benchmark results

Date: 04/02/2025 at 13:06:59

Author: Concace

Created: 2025-02-04 Tue 13:07
