Matrices API Reference
Distributed matrix operations in SafePETSc.
Type
SafePETSc.Mat — Type

Mat{T}

A distributed PETSc matrix with element type T, managed by SafePETSc's reference counting system.
Mat{T} is actually a type alias for DRef{_Mat{T}}, meaning matrices are automatically tracked across MPI ranks and destroyed collectively when all ranks release their references.
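For example (a minimal sketch; the sharing behavior shown is implied by the reference counting described above):

A = Mat_uniform([1.0 2.0; 3.0 4.0]) # A isa Mat{Float64}, i.e. DRef{_Mat{Float64}}
B = A       # both names share the same distributed PETSc matrix
A = nothing # the PETSc Mat is destroyed collectively only once B is released too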
Construction
Use Mat_uniform or Mat_sum to create distributed matrices:
# Create from uniform data (same on all ranks)
A = Mat_uniform([1.0 2.0; 3.0 4.0])
# Create from sparse contributions (summed across ranks)
using SparseArrays
A = Mat_sum(sparse([1, 2], [1, 2], [1.0, 4.0], 2, 2))

Operations
Matrices support standard linear algebra operations:
# Matrix-vector multiplication
y = A * x
# Matrix-matrix multiplication
C = A * B
# Matrix transpose
B = A'
B = Mat(A') # Materialize transpose
# Linear solve
x = A \ b
# Concatenation
C = vcat(A, B) # or cat(A, B; dims=1)
D = hcat(A, B) # or cat(A, B; dims=2)
E = blockdiag(A, B)
# Diagonal matrix from vectors
using SparseArrays
A = spdiagm(0 => diag_vec, 1 => upper_diag)

See also: Mat_uniform, Mat_sum, Vec, Solver
Constructors
SafePETSc.Mat_uniform — Function

Mat_uniform(A::Matrix{T};
            row_partition=default_row_partition(size(A, 1), MPI.Comm_size(MPI.COMM_WORLD)),
            col_partition=default_row_partition(size(A, 2), MPI.Comm_size(MPI.COMM_WORLD)),
            prefix="") -> DRef{Mat{T}}

Create a distributed PETSc matrix from a Julia matrix, asserting uniform distribution across ranks (on MPI.COMM_WORLD).
- A::Matrix{T} must be identical on all ranks (mpi_uniform).
- row_partition is a Vector{Int} of length nranks+1 where partition[i] is the start row (1-indexed) for rank i-1.
- col_partition is a Vector{Int} of length nranks+1 where partition[i] is the start column (1-indexed) for rank i-1.
- prefix is an optional string prefix for MatSetOptionsPrefix() to set matrix-specific command-line options.
- Returns a DRef that will destroy the PETSc Mat collectively when all ranks release their reference.
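A minimal sketch of an explicit partition (assumes a two-rank MPI run; the 4×4 matrix is illustrative):

# partition [1, 3, 5]: rank 0 owns rows 1–2, rank 1 owns rows 3–4
M = [1.0 2.0 3.0 4.0; 5.0 6.0 7.0 8.0; 9.0 10.0 11.0 12.0; 13.0 14.0 15.0 16.0]
A = Mat_uniform(M; row_partition=[1, 3, 5])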
SafePETSc.Mat_sum — Function

Mat_sum(A::SparseMatrixCSC{T};
        row_partition=default_row_partition(size(A, 1), MPI.Comm_size(MPI.COMM_WORLD)),
        col_partition=default_row_partition(size(A, 2), MPI.Comm_size(MPI.COMM_WORLD)),
        prefix="", own_rank_only=false) -> DRef{Mat{T}}

Create a distributed PETSc matrix by summing sparse matrices across ranks (on MPI.COMM_WORLD).
- A::SparseMatrixCSC{T} can differ across ranks; nonzero entries are summed across all ranks.
- row_partition is a Vector{Int} of length nranks+1 where partition[i] is the start row (1-indexed) for rank i-1.
- col_partition is a Vector{Int} of length nranks+1 where partition[i] is the start column (1-indexed) for rank i-1.
- prefix is an optional string prefix for MatSetOptionsPrefix() to set matrix-specific command-line options.
- own_rank_only::Bool (default=false): if true, asserts that all nonzero entries fall within this rank's row partition.
- Returns a DRef that will destroy the PETSc Mat collectively when all ranks release their reference.
Uses MatSetValues with ADD_VALUES mode to sum contributions from all ranks.
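A minimal sketch of per-rank contributions (assumes MPI is initialized; the values are illustrative):

using MPI, SparseArrays
rank = MPI.Comm_rank(MPI.COMM_WORLD)
# each rank contributes (rank + 1) at entry (1, 1); ADD_VALUES sums the
# contributions, so on 2 ranks the assembled A[1, 1] is 1.0 + 2.0 = 3.0
A = Mat_sum(sparse([1], [1], [Float64(rank + 1)], 2, 2))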
Concatenation
The following methods are provided for distributed matrices:

- Base.cat(::SafePETSc.Mat, ::Vararg{SafePETSc.Mat})
- Base.vcat(::SafePETSc.Mat...)
- Base.hcat(::SafePETSc.Mat...)
- SparseArrays.blockdiag(::SafePETSc.Mat...)
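A minimal sketch of these methods (matrices as in the Construction section; result shapes follow standard concatenation semantics):

using SparseArrays
A = Mat_uniform([1.0 2.0; 3.0 4.0])
B = Mat_uniform([5.0 6.0; 7.0 8.0])
V = vcat(A, B)       # 4×2
H = hcat(A, B)       # 2×4
D = blockdiag(A, B)  # 4×4, with A and B on the diagonal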
Sparse Diagonal Matrices
The following methods are provided:

- SparseArrays.spdiagm(::Pair{<:Integer, <:SafePETSc.Vec}...)
- SparseArrays.spdiagm(::Integer, ::Integer, ::Pair{<:Integer, <:SafePETSc.Vec}...)
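A minimal sketch (diag_vec and upper_diag are assumed to be SafePETSc.Vec values constructed elsewhere; see the Vec documentation):

using SparseArrays
# diag_vec of length 4, upper_diag of length 3
A = spdiagm(0 => diag_vec, 1 => upper_diag)       # 4×4, size inferred from the diagonals
B = spdiagm(5, 4, 0 => diag_vec, 1 => upper_diag) # explicit 5×4 dimensions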
Operations
Linear Algebra
# Matrix-vector multiplication
y = A * x
LinearAlgebra.mul!(y, A, x)
# Matrix-matrix multiplication
C = A * B
LinearAlgebra.mul!(C, A, B)
# Transpose
B = A'
B = Mat(A') # Materialize transpose
LinearAlgebra.transpose!(B, A) # In-place transpose
# Adjoint-vector multiplication
w = v' * A
LinearAlgebra.mul!(w, v', A)

Properties
T = eltype(A) # Element type
m, n = size(A) # Dimensions
m = size(A, 1) # Rows
n = size(A, 2) # Columns

Iteration
# Iterate over rows (dense matrices only)
for row in eachrow(A)
    # row is a view of the matrix row
    process(row)
end