Matrices API Reference

Distributed matrix operations in SafePETSc.

Type

SafePETSc.Mat - Type
Mat{T,Prefix}

A distributed PETSc matrix with element type T and prefix type Prefix, managed by SafePETSc's reference counting system.

Mat{T,Prefix} is a type alias for DRef{_Mat{T,Prefix}}: matrices are automatically tracked across MPI ranks and destroyed collectively once all ranks have released their references.

Construction

Use Mat_uniform or Mat_sum to create distributed matrices:

# Create from uniform data (same on all ranks)
A = Mat_uniform([1.0 2.0; 3.0 4.0])

# Create from sparse contributions (summed across ranks)
using SparseArrays
A = Mat_sum(sparse([1, 2], [1, 2], [1.0, 4.0], 2, 2))

Operations

Matrices support standard linear algebra operations:

# Matrix-vector multiplication
y = A * x

# Matrix-matrix multiplication
C = A * B

# Matrix transpose
B = A'
B = Mat(A')  # Materialize transpose

# Linear solve
x = A \ b

# Concatenation
C = vcat(A, B)  # or cat(A, B; dims=1)
D = hcat(A, B)  # or cat(A, B; dims=2)
E = blockdiag(A, B)

# Diagonal matrix from vectors
using SparseArrays
A = spdiagm(0 => diag_vec, 1 => upper_diag)

See also: Mat_uniform, Mat_sum, Vec, KSP

Prefix Types

The Prefix type parameter determines matrix storage format and PETSc configuration:

SafePETSc.MPIAIJ - Type
MPIAIJ

Prefix type for sparse matrices and general vectors (default).

String Prefix

The string prefix is "MPIAIJ_", which is prepended to PETSc option names.

Default PETSc Types

  • Matrices: mpiaij (MPI sparse matrix, compressed row storage)
  • Vectors: mpi (standard MPI vector)

Usage

Use MPIAIJ for:

  • Sparse matrices with few nonzeros per row
  • Memory-efficient storage of large sparse systems
  • Iterative solvers and sparse linear algebra
  • General-purpose vector operations

This is the default prefix type when not specified.

Example

# Create sparse matrix (MPIAIJ is the default)
A = Mat_uniform(sparse([1.0 0.0; 0.0 2.0]))

# Explicitly specify MPIAIJ prefix
B = Mat_uniform(data; Prefix=MPIAIJ)

# Configure iterative solver for sparse matrices
petsc_options_insert_string("-MPIAIJ_ksp_type gmres")

See also: MPIDENSE, Mat, Vec

SafePETSc.MPIDENSE - Type
MPIDENSE

Prefix type for dense matrix operations and associated vectors.

String Prefix

The string prefix is "MPIDENSE_", which is prepended to PETSc option names.

Default PETSc Types

  • Matrices: mpidense (MPI dense matrix, distributed by row blocks)
  • Vectors: mpi (standard MPI vector)

Usage

Use MPIDENSE for:

  • Dense matrices where all elements are stored
  • Operations requiring dense storage (e.g., eachrow)
  • Direct solvers and dense linear algebra

Example

# Create dense matrix
A = Mat_uniform([1.0 2.0; 3.0 4.0]; Prefix=MPIDENSE)

# Set the matrix type for MPIDENSE-prefixed matrices
petsc_options_insert_string("-MPIDENSE_mat_type mpidense")

See also: MPIAIJ, Mat, Vec

SafePETSc.prefix - Function
prefix(::Type{<:Prefix}) -> String

Return the string prefix for a given prefix type.

The string prefix is prepended to PETSc option names. For example, with prefix type MPIDENSE, the option -mat_type mpidense becomes -MPIDENSE_mat_type mpidense.

Examples

prefix(MPIDENSE)  # Returns "MPIDENSE_"
prefix(MPIAIJ)    # Returns "MPIAIJ_"

Constructors

SafePETSc.Mat_uniform - Function
Mat_uniform(A::Matrix{T}; row_partition=default_row_partition(size(A, 1), MPI.Comm_size(MPI.COMM_WORLD)), col_partition=default_row_partition(size(A, 2), MPI.Comm_size(MPI.COMM_WORLD)), Prefix::Type=MPIAIJ) -> DRef{Mat{T,Prefix}}

MPI Collective

Create a distributed PETSc matrix from a Julia matrix, asserting uniform distribution across ranks (on MPI.COMM_WORLD).

  • A::Matrix{T} must be identical on all ranks (mpi_uniform).
  • row_partition is a Vector{Int} of length nranks+1 where partition[i] is the start row (1-indexed) for rank i-1.
  • col_partition is a Vector{Int} of length nranks+1 where partition[i] is the start column (1-indexed) for rank i-1.
  • Prefix is a type parameter for MatSetOptionsPrefix() to set matrix-specific command-line options (default: MPIAIJ).
  • Returns a DRef that will destroy the PETSc Mat collectively when all ranks release their reference.
Mat_uniform(A::SparseMatrixCSC{T}; row_partition=default_row_partition(size(A, 1), MPI.Comm_size(MPI.COMM_WORLD)), col_partition=default_row_partition(size(A, 2), MPI.Comm_size(MPI.COMM_WORLD)), Prefix::Type=MPIAIJ) -> DRef{Mat{T,Prefix}}

MPI Collective

Create a distributed PETSc matrix from a sparse Julia matrix, asserting uniform distribution across ranks (on MPI.COMM_WORLD).

  • A::SparseMatrixCSC{T} must be identical on all ranks (mpi_uniform).
  • row_partition is a Vector{Int} of length nranks+1 where partition[i] is the start row (1-indexed) for rank i-1.
  • col_partition is a Vector{Int} of length nranks+1 where partition[i] is the start column (1-indexed) for rank i-1.
  • Prefix is a type parameter for MatSetOptionsPrefix() to set matrix-specific command-line options (default: MPIAIJ).
  • Returns a DRef that will destroy the PETSc Mat collectively when all ranks release their reference.

Each rank inserts only the values from its assigned row partition using INSERT_VALUES mode.
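A minimal usage sketch, assuming the package and MPI are initialized and the program runs on exactly 2 ranks (the explicit row_partition below is only valid for that rank count):

using SparseArrays

# Dense input, identical on every rank; default partitions and MPIAIJ prefix
A = Mat_uniform([1.0 2.0; 3.0 4.0])

# Sparse input with an explicit row partition for 2 ranks:
# rank 0 owns row 1, rank 1 owns rows 2-3 (the last entry is nrows + 1)
S = sparse([1.0 0.0 0.0; 0.0 2.0 0.0; 0.0 0.0 3.0])
B = Mat_uniform(S; row_partition=[1, 2, 4])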

SafePETSc.Mat_sum - Function
Mat_sum(A::SparseMatrixCSC{T}; row_partition=default_row_partition(size(A, 1), MPI.Comm_size(MPI.COMM_WORLD)), col_partition=default_row_partition(size(A, 2), MPI.Comm_size(MPI.COMM_WORLD)), Prefix::Type=MPIAIJ, own_rank_only=false) -> DRef{Mat{T,Prefix}}

MPI Collective

Create a distributed PETSc matrix by summing sparse matrices across ranks (on MPI.COMM_WORLD).

  • A::SparseMatrixCSC{T} can differ across ranks; nonzero entries are summed across all ranks.
  • row_partition is a Vector{Int} of length nranks+1 where partition[i] is the start row (1-indexed) for rank i-1.
  • col_partition is a Vector{Int} of length nranks+1 where partition[i] is the start column (1-indexed) for rank i-1.
  • Prefix is a type parameter for MatSetOptionsPrefix() to set matrix-specific command-line options (default: MPIAIJ).
  • own_rank_only::Bool (default=false): if true, asserts that all nonzero entries fall within this rank's row partition.
  • Returns a DRef that will destroy the PETSc Mat collectively when all ranks release their reference.

Uses MatSetValues with ADD_VALUES mode to sum contributions from all ranks.
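A usage sketch, assuming exactly 2 MPI ranks; each rank contributes its own sparse piece, and overlapping entries are added together:

using MPI, SparseArrays

rank = MPI.Comm_rank(MPI.COMM_WORLD)

# Rank 0 contributes the (1,1) entry, rank 1 the (2,2) entry
local_part = rank == 0 ? sparse([1], [1], [1.0], 2, 2) :
                         sparse([2], [2], [4.0], 2, 2)
A = Mat_sum(local_part)                      # global result: [1.0 0.0; 0.0 4.0]

# With the default partition each rank owns the row it writes to,
# so the stricter assertion also holds here
B = Mat_sum(local_part; own_rank_only=true)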

Concatenation

Base.cat - Function
Base.cat(As::Union{Vec{T,Prefix},Mat{T,Prefix}}...; dims) -> Union{Vec{T,Prefix}, Mat{T,Prefix}}

MPI Collective

Concatenate distributed PETSc vectors and/or matrices along dimension dims.

Arguments

  • As::Union{Vec{T,Prefix},Mat{T,Prefix}}...: One or more vectors or matrices with the same element type T
  • dims: Concatenation dimension (1 for vertical/vcat, 2 for horizontal/hcat)

Return Type

  • Returns Vec{T,Prefix} when dims=1 and result has a single column (vertical stacking of vectors)
  • Returns Mat{T,Prefix} otherwise (horizontal concatenation or matrix inputs)

Requirements

All inputs must:

  • Have the same element type T
  • Have compatible sizes and partitions for the concatenation dimension
    • For dims=1 (vcat): same number of columns and column partition
    • For dims=2 (hcat): same number of rows and row partition

Automatic Prefix Selection

The output Prefix type is automatically determined to ensure correctness:

  • MPIDENSE if any input has Prefix=MPIDENSE (dense format required)
  • MPIDENSE if concatenating vectors horizontally with width > 1 (e.g., hcat(x, y))
  • Otherwise, preserves the first input's Prefix

This ensures that operations like hcat(vec1, vec2) produce dense matrices as expected, since vectors are inherently dense and horizontal concatenation creates a dense result.

Implementation

The concatenation is performed by the following steps (a conceptual sketch follows the list):

  1. Each rank extracts its owned rows from each input as a Julia sparse matrix
  2. Standard Julia cat is applied locally on each rank
  3. The results are combined across ranks using Vec_sum (for single-column results) or Mat_sum (otherwise)
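A conceptual per-rank sketch of these steps for dims=1 with two sparse inputs. It is an illustration of the idea rather than the package's actual code, and it reuses own_row, range indexing, and Mat_sum from this page:

using SparseArrays

function vcat_sketch(A, B)
    m1, n = size(A)
    m2, _ = size(B)
    # 1. Extract the locally owned rows of each input as Julia sparse blocks
    Al = A[own_row(A), 1:n]
    Bl = B[own_row(B), 1:n]
    # 2. Place the local blocks at their global row offsets in the result
    contrib = spzeros(eltype(A), m1 + m2, n)
    contrib[own_row(A), :] = Al
    contrib[m1 .+ own_row(B), :] = Bl
    # 3. Sum the per-rank contributions into one distributed matrix
    return Mat_sum(contrib)
end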

Examples

# Vertical concatenation (stacking) - returns Vec
x = Vec_uniform([1.0, 2.0, 3.0])
y = Vec_uniform([4.0, 5.0, 6.0])
v = vcat(x, y)  # Returns Vec{Float64,MPIAIJ} with 6 elements

# Horizontal concatenation - returns Mat
M = hcat(x, y)  # Returns Mat{Float64,MPIDENSE} of size 3×2

# Matrix concatenation - returns Mat
A = Mat_uniform(sparse([1.0 2.0; 3.0 4.0]))
B = Mat_uniform(sparse([5.0 6.0; 7.0 8.0]))
C = vcat(A, B)  # Returns Mat{Float64,MPIAIJ} of size 4×2

See also: vcat, hcat, Vec_sum, Mat_sum

Base.vcat - Function
Base.vcat(As::Union{Vec{T,Prefix},Mat{T,Prefix}}...) -> Union{Vec{T,Prefix}, Mat{T,Prefix}}

MPI Collective

Vertically concatenate (stack) distributed PETSc vectors and/or matrices.

Equivalent to cat(As...; dims=1). Stacks inputs vertically, increasing the number of rows while keeping the number of columns constant.

Return Type

  • Returns Vec{T,Prefix} when concatenating only vectors (single-column result)
  • Returns Mat{T,Prefix} when concatenating matrices or when result has multiple columns

Requirements

All inputs must have the same number of columns and the same column partition.

Prefix Selection

  • Typically preserves the input Prefix (e.g., MPIAIJ for vectors)
  • Upgrades to MPIDENSE if any input is MPIDENSE

Examples

# Concatenating vectors returns a Vec
x = Vec_uniform([1.0, 2.0])
y = Vec_uniform([3.0, 4.0])
v = vcat(x, y)  # Vec{Float64,MPIAIJ} with 4 elements

# Concatenating matrices returns a Mat
A = Mat_uniform(sparse([1.0 2.0; 3.0 4.0]))
B = Mat_uniform(sparse([5.0 6.0; 7.0 8.0]))
C = vcat(A, B)  # Mat{Float64,MPIAIJ} of size 4×2

See also: cat, hcat

Base.hcat - Function
Base.hcat(As::Union{Vec{T,Prefix},Mat{T,Prefix}}...) -> Mat{T,Prefix}

MPI Collective

Horizontally concatenate (place side-by-side) distributed PETSc vectors and/or matrices.

Equivalent to cat(As...; dims=2). Concatenates inputs horizontally, increasing the number of columns while keeping the number of rows constant.

Requirements

All inputs must have the same number of rows and the same row partition.

Prefix Selection

  • Automatically upgrades to MPIDENSE when concatenating vectors (width > 1)
  • Upgrades to MPIDENSE if any input is MPIDENSE
  • Otherwise preserves the input Prefix

The automatic upgrade for vectors is important because vectors are inherently dense, and horizontal concatenation of vectors produces a dense matrix.

Examples

x = Vec_uniform([1.0, 2.0, 3.0])
y = Vec_uniform([4.0, 5.0, 6.0])
M = hcat(x, y)  # 3×2 Mat{Float64,MPIDENSE} - auto-upgraded!

A = Mat_uniform(sparse(reshape([1.0, 2.0, 3.0], 3, 1)))   # 3×1 sparse column
B = Mat_uniform(sparse(reshape([4.0, 5.0, 6.0], 3, 1)))
C = hcat(A, B)  # 3×2 Mat (result prefix follows the rules above)

See also: cat, vcat

SparseArrays.blockdiag - Function
blockdiag(As::Mat{T,Prefix}...) -> Mat{T,Prefix}

MPI Collective

Create a block diagonal matrix from distributed PETSc matrices.

The result is a matrix with the input matrices along the diagonal and zeros elsewhere. All matrices must have the same prefix and element type.

Example

# If A is m×n and B is p×q, then blockdiag(A, B) is (m+p)×(n+q)
C = blockdiag(A, B)
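A small runnable sketch with arbitrary values:

using SparseArrays

A = Mat_uniform(sparse([1.0 2.0; 3.0 4.0]))   # 2×2
B = Mat_uniform(sparse([5.0 6.0; 7.0 8.0]))   # 2×2
C = blockdiag(A, B)                           # 4×4, with A and B on the diagonal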

Sparse Diagonal Matrices

SparseArrays.spdiagm - Function
spdiagm(kv::Pair{<:Integer, <:Vec{T,Prefix}}...) -> Mat{T,Prefix}
spdiagm(m::Integer, n::Integer, kv::Pair{<:Integer, <:Vec{T,Prefix}}...) -> Mat{T,Prefix}

MPI Collective

Create a sparse diagonal matrix from distributed PETSc vectors.

Each pair k => v places the vector v on the k-th diagonal:

  • k = 0: main diagonal
  • k > 0: superdiagonal
  • k < 0: subdiagonal

All vectors must have the same element type T. The matrix dimensions are inferred from the diagonal positions and vector lengths, or can be specified explicitly.

Optional Keyword Arguments

  • prefix: Matrix prefix type to use for the result. Defaults to the input vector's prefix. Use this to create a matrix with a different prefix than the input vectors (e.g., create MPIAIJ from MPIDENSE vectors, or vice versa).
  • row_partition: Override the default equal-row partitioning (length nranks+1, start at 1, end at m+1, non-decreasing). Defaults to default_row_partition(m, nranks).
  • col_partition: Override the default equal-column partitioning (length nranks+1, start at 1, end at n+1, non-decreasing). Defaults to default_row_partition(n, nranks).

Examples

# Create a tridiagonal matrix
A = spdiagm(-1 => lower, 0 => diag, 1 => upper)

# Create a 100×100 matrix with specified vectors on diagonals
B = spdiagm(100, 100, 0 => v1, 1 => v2)

# Create MPIAIJ (sparse) matrix from MPIDENSE vector
v_dense = Vec_uniform(data; Prefix=MPIDENSE)
A_sparse = spdiagm(0 => v_dense; prefix=MPIAIJ)

Conversion and Display

Convert distributed matrices to Julia arrays for inspection and display:

Base.Matrix - Method
Matrix(x::Mat{T,Prefix}) -> Matrix{T}

MPI Collective

Convert a distributed PETSc Mat to a Julia Matrix by gathering all data to all ranks. This is a collective operation - all ranks must call it and will receive the complete matrix.

For dense matrices, this uses efficient MatDenseGetArrayRead. For other matrix types, it uses MatGetRow to extract each row.

This is primarily used for display purposes or small matrices. For large matrices, this operation can be expensive as it gathers all data to all ranks.
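A minimal illustration (every rank receives the full Julia matrix):

A = Mat_uniform([1.0 2.0; 3.0 4.0])
M = Matrix(A)   # 2×2 Matrix{Float64}, identical on all ranks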

Base.Matrix(At::LinearAlgebra.Adjoint{T, <:Mat{T}}) -> Matrix{T}

MPI Collective

Convert an adjoint of a distributed PETSc Mat to an adjoint Julia Matrix. Equivalent to Matrix(parent(At))'.

This is a collective operation - all ranks must call it and will receive the complete matrix transpose.
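For example:

A = Mat_uniform([1.0 2.0; 3.0 4.0])
Mt = Matrix(A')   # equivalent to Matrix(A)'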

SparseArrays.sparse - Method
SparseArrays.sparse(x::Mat{T,Prefix}) -> SparseMatrixCSC{T, Int}

MPI Collective

Convert a distributed PETSc Mat to a Julia SparseMatrixCSC by gathering all data to all ranks. This is a collective operation - all ranks must call it and will receive the complete sparse matrix.

Uses MatGetRow to extract the sparse structure efficiently, preserving sparsity.

This is primarily used for display purposes or small matrices. For large matrices, this operation can be expensive as it gathers all data to all ranks.
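A minimal illustration:

using SparseArrays

A = Mat_uniform(sparse([1.0 0.0; 0.0 2.0]))
S = sparse(A)   # 2×2 SparseMatrixCSC{Float64, Int}, identical on all ranks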

SparseArrays.sparse(At::LinearAlgebra.Adjoint{T, <:Mat{T}}) -> SparseMatrixCSC{T, Int}

MPI Collective

Convert an adjoint of a distributed PETSc Mat to an adjoint Julia SparseMatrixCSC. Equivalent to sparse(parent(At))'.

This is a collective operation - all ranks must call it and will receive the complete sparse matrix transpose.
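For example:

using SparseArrays

A = Mat_uniform(sparse([1.0 0.0; 0.0 2.0]))
St = sparse(A')   # equivalent to sparse(A)'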

SafePETSc.is_dense - Function
is_dense(x::Mat{T,Prefix}) -> Bool

MPI Non-Collective

Check if a PETSc matrix is a dense matrix type.

This checks the PETSc matrix type string and returns true if it contains "dense" (case-insensitive). This handles various dense types like "seqdense", "mpidense", and vendor-specific dense matrix types.
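For example, based on the default matrix types listed for each prefix (mpiaij vs. mpidense):

using SparseArrays

A = Mat_uniform(sparse([1.0 0.0; 0.0 2.0]))              # mpiaij
B = Mat_uniform([1.0 2.0; 3.0 4.0]; Prefix=MPIDENSE)     # mpidense
is_dense(A)   # false
is_dense(B)   # true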

Display methods (automatically used by println, display, etc.):

  • show(io::IO, A::Mat) - Display matrix contents (uses dense or sparse format based on type)
  • show(io::IO, mime::MIME, A::Mat) - Display with MIME type support
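A small illustration:

A = Mat_uniform([1.0 2.0; 3.0 4.0])
println(A)    # uses show(io, A)
display(A)    # uses show(io, mime, A)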

Utilities

SafePETSc.own_row - Method
own_row(v::Vec{T,Prefix}) -> UnitRange{Int}

MPI Non-Collective

Return the range of indices owned by the current rank for vector v.

Example

v = Vec_uniform([1.0, 2.0, 3.0, 4.0])
range = own_row(v)  # e.g., 1:2 on rank 0
own_row(A::Mat{T,Prefix}) -> UnitRange{Int}

MPI Non-Collective

Return the range of row indices owned by the current rank for matrix A.

Example

A = Mat_uniform([1.0 2.0; 3.0 4.0; 5.0 6.0; 7.0 8.0])
range = own_row(A)  # e.g., 1:2 on rank 0

Row-wise Operations

See map_rows in the Vectors API - works with both vectors and matrices.

Indexing

Non-collective element and range access:

Base.getindex - Method
Base.getindex(A::Mat{T,Prefix}, ::Colon, k::Int) -> Vec{T,Prefix}

MPI Non-Collective

Extract column k from matrix A, returning a distributed vector.

Each rank extracts its owned rows from column k. The resulting vector has the same row partition as matrix A.

Uses efficient bulk operations: MatDenseGetArrayRead for dense matrices, MatGetRow for sparse matrices.

Example

A = Mat_uniform([1.0 2.0 3.0; 4.0 5.0 6.0])
v = A[:, 2]  # Extract second column: [2.0, 5.0]

Base.getindex - Method
Base.getindex(A::Mat{T}, i::Int, j::Int) -> T

MPI Non-Collective

Get the value at position (i, j) from a distributed matrix.

The row index i must lie within the current rank's row ownership range. If not, the function will abort with an error message and stack trace.

This is a non-collective operation.

Example

A = Mat_uniform([1.0 2.0; 3.0 4.0])
# On rank that owns row 1:
val = A[1, 2]  # Returns 2.0

Base.getindex - Method
Base.getindex(A::Mat{T}, range_i::UnitRange{Int}, j::Int) -> Vector{T}

MPI Non-Collective

Extract a contiguous range of rows from column j of a distributed matrix.

The row range must be wholly contained in the current rank's row ownership range. If not, the function will abort with an error message and stack trace.

This is a non-collective operation.

Example

A = Mat_uniform([1.0 2.0; 3.0 4.0; 5.0 6.0])
# On rank that owns rows 1:2:
vals = A[1:2, 2]  # Returns [2.0, 4.0]

Base.getindex - Method
Base.getindex(A::Mat{T}, i::Int, range_j::UnitRange{Int}) -> Vector{T}

MPI Non-Collective

Extract a contiguous range of columns from row i of a distributed matrix.

The row index i must lie within the current rank's row ownership range. If not, the function will abort with an error message and stack trace.

This is a non-collective operation.

Example

A = Mat_uniform([1.0 2.0 3.0; 4.0 5.0 6.0])
# On rank that owns row 1:
vals = A[1, 2:3]  # Returns [2.0, 3.0]

Base.getindex - Method
Base.getindex(A::Mat{T}, range_i::UnitRange{Int}, range_j::UnitRange{Int}) -> Union{Matrix{T}, SparseMatrixCSC{T}}

MPI Non-Collective

Extract a submatrix from a distributed matrix.

The row range must be wholly contained in the current rank's row ownership range. If not, the function will abort with an error message and stack trace.

Returns a dense Matrix if A is dense, otherwise returns a SparseMatrixCSC.

This is a non-collective operation.

Example

A = Mat_uniform([1.0 2.0 3.0; 4.0 5.0 6.0; 7.0 8.0 9.0])
# On rank that owns rows 1:2:
submat = A[1:2, 2:3]  # Returns [2.0 3.0; 5.0 6.0]

Operations

Linear Algebra

# Matrix-vector multiplication
y = A * x
LinearAlgebra.mul!(y, A, x)

# Matrix-matrix multiplication
C = A * B
LinearAlgebra.mul!(C, A, B)

# Transpose
B = A'
B = Mat(A')                            # Materialize transpose
LinearAlgebra.transpose!(B, A)         # In-place transpose

# Adjoint-vector multiplication
w = v' * A
LinearAlgebra.mul!(w, v', A)

Properties

T = eltype(A)                          # Element type
m, n = size(A)                         # Dimensions
m = size(A, 1)                         # Rows
n = size(A, 2)                         # Columns

Iteration

# Iterate over rows (works for both dense and sparse matrices)
for row in eachrow(A)
    # For dense (MPIDENSE): row is a view of the matrix row
    # For sparse (MPIAIJ): row is a SparseVector efficiently preserving sparsity
    process(row)
end

Block Matrix Products

SafePETSc.BlockProduct - Type
BlockProduct{T,Prefix}

Represents a product of block matrices with pre-allocated storage for efficient recomputation.

A block matrix is a Julia Matrix where each element is a Mat, Mat', Vec, Vec', scalar, or nothing.

Fields

  • prod::Vector{Matrix{BlockElement{T,Prefix}}}: The sequence of block matrices to multiply
  • result::Union{Matrix{BlockElement{T,Prefix}}, Nothing}: Pre-allocated result (allocated on first calculate! call)
  • intermediates::Vector{Matrix{BlockElement{T,Prefix}}}: Pre-allocated intermediate results for chained products

Type Parameters

  • T: Element type (e.g., Float64)
  • Prefix: PETSc prefix type (e.g., MPIAIJ, MPIDENSE) - must match all contained objects

Constructor

BlockProduct(prod::Vector{Matrix}; Prefix::Type=MPIAIJ)

Validates dimensions and creates a BlockProduct. Actual allocation of result and intermediates happens lazily on the first call to calculate!.

Example

# Create block matrices
A = [M1 M2; M3 M4]  # 2×2 block of Mat{Float64,MPIAIJ}
B = [N1 N2; N3 N4]

# Create product (no allocation yet)
bp = BlockProduct([A, B])

# Compute A * B (allocates result on first call)
C = calculate!(bp)

# Subsequent calls reuse allocations
C2 = calculate!(bp)

SafePETSc.calculate! - Function
calculate!(bp::BlockProduct)

MPI Collective

Recompute the product after modifying input matrices/vectors, reusing cached PETSc objects.

After the user modifies entries in bp.prod[k][i,j] matrices or vectors, calling this function updates bp.result using in-place operations on the cached intermediate results.

This avoids allocating new PETSc Mat/Vec objects.

Returns the updated result as a block matrix (Julia Matrix of BlockElements).

Example

# Create and compute initial product
bp = BlockProduct([A, B])
result1 = bp.result

# Modify input matrix entries
# (modify A[1,1] entries here)

# Recompute with cached objects
calculate!(bp)
result2 = bp.result

# result2 has updated values but same PETSc object identity