Vectors API Reference
Distributed vector operations in SafePETSc.
Type
SafePETSc.Vec — Type
Vec{T,Prefix}
A distributed PETSc vector with element type T and prefix type Prefix, managed by SafePETSc's reference-counting system.
Vec{T,Prefix} is a type alias for DRef{_Vec{T,Prefix}} and is released collectively when all ranks release their references. By default, released PETSc vectors are returned to an internal pool for reuse rather than destroyed immediately. To force destruction instead of pooling, set ENABLE_VEC_POOL[] = false, or call clear_vec_pool!() to free pooled vectors.
Construction
Use Vec_uniform or Vec_sum to create distributed vectors:
# Create from uniform data (same on all ranks)
v = Vec_uniform([1.0, 2.0, 3.0, 4.0])
# Create from sparse contributions (summed across ranks)
using SparseArrays
v = Vec_sum(sparsevec([1, 3], [1.0, 3.0], 4))
Operations
Vectors support standard arithmetic operations via broadcasting:
y = x .+ 1.0 # Element-wise addition
y .= 2.0 .* x # In-place scaling
z = x .+ y # Vector addition
Matrix-vector multiplication:
y = A * x # Matrix-vector product
LinearAlgebra.mul!(y, A, x) # In-place version
See also: Vec_uniform, Vec_sum, Mat, zeros_like, ENABLE_VEC_POOL, clear_vec_pool!
Prefix Types
The Prefix type parameter controls PETSc configuration for vectors. See the Matrices API Reference for details on MPIAIJ, MPIDENSE, and prefix.
Constructors
SafePETSc.Vec_uniform — Function
Vec_uniform(v::Vector{T}; row_partition=default_row_partition(length(v), MPI.Comm_size(MPI.COMM_WORLD)), Prefix::Type=MPIAIJ) -> Vec{T,Prefix}
MPI Collective
Create a distributed PETSc vector from a Julia vector, asserting uniform distribution across ranks (on MPI.COMM_WORLD).
- v::Vector{T} must be identical on all ranks (mpi_uniform).
- row_partition is a Vector{Int} of length nranks+1 with 1-based inclusive starts.
- Prefix is a type parameter for VecSetOptionsPrefix for PETSc options (default: MPIAIJ).
- Returns a Vec{T,Prefix} (aka DRef{_Vec{T,Prefix}}) managed collectively; by default vectors are returned to a reuse pool when released, not immediately destroyed. Use ENABLE_VEC_POOL[] = false or clear_vec_pool!() to force destruction.
SafePETSc.Vec_sum — Function
Vec_sum(v::SparseVector{T}; row_partition=default_row_partition(length(v), MPI.Comm_size(MPI.COMM_WORLD)), Prefix::Type=MPIAIJ, own_rank_only=false) -> Vec{T,Prefix}
MPI Collective
Create a distributed PETSc vector by summing sparse vectors across ranks (on MPI.COMM_WORLD).
- v::SparseVector{T} can differ across ranks; nonzeros are summed across all ranks.
- row_partition is a Vector{Int} of length nranks+1 with 1-based inclusive starts.
- Prefix is a type parameter for VecSetOptionsPrefix for PETSc options (default: MPIAIJ).
- own_rank_only::Bool (default: false): if true, asserts that all nonzero indices fall within this rank's row partition.
- Returns a Vec{T,Prefix} managed collectively; by default vectors are returned to a reuse pool when released, not immediately destroyed. Use ENABLE_VEC_POOL[] = false or clear_vec_pool!() to force destruction.
Uses VecSetValues with ADD_VALUES to sum contributions across ranks.
Helper Constructors
SafePETSc.zeros_like — Function
zeros_like(x::Vec{T,Prefix}; T2::Type{S}=T, Prefix2::Type=Prefix) -> Vec{S,Prefix2}
MPI Collective
Create a new distributed vector with the same size and partition as x, filled with zeros.
Arguments
- x: Template vector to match size and partition
- T2: Element type of the result (defaults to the element type of x)
- Prefix2: Prefix type (defaults to the prefix of x)
See also: ones_like, fill_like, Vec_uniform
SafePETSc.ones_like — Function
ones_like(x::Vec{T,Prefix}; T2::Type{S}=T, Prefix2::Type=Prefix) -> Vec{S,Prefix2}
MPI Collective
Create a new distributed vector with the same size and partition as x, filled with ones.
Arguments
- x: Template vector to match size and partition
- T2: Element type of the result (defaults to the element type of x)
- Prefix2: Prefix type (defaults to the prefix of x)
See also: zeros_like, fill_like, Vec_uniform
SafePETSc.fill_like — Function
fill_like(x::Vec{T,Prefix}, val; T2::Type{S}=typeof(val), Prefix2::Type=Prefix) -> Vec{S,Prefix2}
MPI Collective
Create a new distributed vector with the same size and partition as x, filled with val.
Arguments
- x: Template vector to match size and partition
- val: Value to fill the vector with
- T2: Element type of the result (defaults to the type of val)
- Prefix2: Prefix type (defaults to the prefix of x)
Example
y = fill_like(x, 3.14) # Create a vector like x, filled with 3.14
See also: zeros_like, ones_like, Vec_uniform
Concatenation
Vectors can be concatenated using the same functions as matrices. See the Matrices API Reference for vcat and hcat.
Note: Concatenating vectors returns Mat{T,Prefix} objects.
Partitioning
SafePETSc.default_row_partition — Function
default_row_partition(n::Int, nranks::Int) -> Vector{Int}
MPI Non-Collective
Create a default row partition that divides n rows equally among nranks.
Returns a Vector{Int} of length nranks+1 where partition[i] is the start row (1-indexed) for rank i-1.
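The equal-split logic can be sketched in plain Julia. This is a hypothetical reimplementation for illustration, not SafePETSc's actual code; the real function may distribute the remainder differently:

```julia
# Sketch of an equal-split row partition (hypothetical stand-in for
# default_row_partition; tie-breaking details may differ from SafePETSc).
function default_row_partition_sketch(n::Int, nranks::Int)
    base, rem = divrem(n, nranks)
    partition = Vector{Int}(undef, nranks + 1)
    partition[1] = 1
    for r in 1:nranks
        # The first `rem` ranks each get one extra row.
        partition[r + 1] = partition[r] + base + (r <= rem ? 1 : 0)
    end
    return partition
end

default_row_partition_sketch(10, 4)  # → [1, 4, 7, 9, 11]
```

Here rank i-1 owns rows partition[i]:partition[i+1]-1, so 10 rows over 4 ranks split as 3, 3, 2, 2.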
Vector Pooling
SafePETSc.ENABLE_VEC_POOL — Constant
ENABLE_VEC_POOL
Global flag that enables or disables vector pooling. Set to false to disable pooling.
SafePETSc.clear_vec_pool! — Function
clear_vec_pool!()
MPI Non-Collective
Clear all vectors from the pool, destroying them immediately. Useful for testing or explicit memory management.
SafePETSc.get_vec_pool_stats — Function
get_vec_pool_stats() -> Dict
MPI Non-Collective
Return statistics about the current vector pool state. Returns a dictionary with keys (nglobal, prefix, type) => count.
Conversion and Display
Convert distributed vectors to Julia arrays for inspection and display:
Base.Vector — Method
Vector(x::Vec{T,Prefix}) -> Vector{T}
MPI Collective
Convert a distributed PETSc Vec to a Julia Vector by gathering all data to all ranks. This is a collective operation: all ranks must call it, and each receives the complete vector.
This is primarily intended for display or for small vectors. For large vectors it can be expensive, since all data is gathered to every rank.
Vector(vt::LinearAlgebra.Adjoint{T, <:Vec{T}}) -> LinearAlgebra.Adjoint{T, Vector{T}}
MPI Collective
Convert an adjoint of a distributed PETSc Vec to an adjoint Julia Vector. Equivalent to Vector(parent(vt))'.
This is a collective operation: all ranks must call it and will receive the complete adjoint vector.
Display methods (automatically used by println, display, etc.):
- show(io::IO, v::Vec): Display vector contents
- show(io::IO, mime::MIME, v::Vec): Display with MIME-type support
Utilities
SafePETSc.io0 — Function
io0(io=stdout; r::Set{Int}=Set{Int}([0]), dn=devnull)
MPI Non-Collective
Return io if the current rank is in r, otherwise return dn.
This is useful for printing output only on specific ranks to avoid duplicate output.
Parameters
- io: The IO stream to use (default: stdout)
- r: Set of ranks that should produce output (default: Set{Int}([0]))
- dn: The IO stream returned on non-selected ranks (default: devnull)
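The selection logic reduces to a membership test on the rank. A minimal MPI-free sketch, with the rank passed explicitly as a hypothetical extra argument (the real io0 reads it from MPI):

```julia
# Hypothetical stand-in for io0's core logic: the rank is an explicit
# argument here instead of being queried from the MPI communicator.
io0_sketch(rank::Int, io=stdout; r::Set{Int}=Set{Int}([0]), dn=devnull) =
    rank in r ? io : dn

io0_sketch(0) === stdout              # rank 0 is selected by default
io0_sketch(1) === devnull             # other ranks write to devnull
io0_sketch(2; r=Set([2])) === stdout  # custom rank selection
```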
Examples
# Print only on rank 0 (default)
println(io0(), "This prints only on rank 0")
# Print only on rank 2
println(io0(r=Set([2])), "This prints only on rank 2")
# Print on ranks 0 and 3
println(io0(r=Set([0, 3])), "This prints on ranks 0 and 3")
# Write to file only on rank 1
open("output.txt", "w") do f
println(io0(f; r=Set([1])), "This writes only on rank 1")
end
SafePETSc.own_row — Method
own_row(v::Vec{T,Prefix}) -> UnitRange{Int}
MPI Non-Collective
Return the range of indices owned by the current rank for vector v.
Example
v = Vec_uniform([1.0, 2.0, 3.0, 4.0])
range = own_row(v) # e.g., 1:2 on rank 0
own_row(A::Mat{T,Prefix}) -> UnitRange{Int}
MPI Non-Collective
Return the range of row indices owned by the current rank for matrix A.
Example
A = Mat_uniform([1.0 2.0; 3.0 4.0; 5.0 6.0; 7.0 8.0])
range = own_row(A) # e.g., 1:2 on rank 0
Row-wise Operations
SafePETSc.map_rows — Function
map_rows(f::Function, A::Union{Vec{T,Prefix},Mat{T,Prefix}}...; col_partition=nothing) -> Union{Vec{T,MPIDENSE},Mat{T,MPIDENSE}}
MPI Collective
Apply a function f to corresponding rows across distributed PETSc vectors and matrices.
Similar to the native Julia pattern vcat((f.((eachrow.(A))...))...), but works with distributed PETSc objects. The function f is applied row-wise to each input, and the results are concatenated into a new distributed vector or matrix.
Arguments
- f::Function: Function to apply to each row. It should accept as many arguments as there are inputs.
- A...::Union{Vec{T,Prefix},Mat{T,Prefix}}: One or more distributed vectors or matrices. All inputs must have the same number of rows and compatible row partitions.
- col_partition::Union{Vector{Int},Nothing}: Column partition for the result matrix (default: use default_row_partition). Only used when f returns an adjoint vector (creating a matrix).
Return value
Always returns Vec{T,MPIDENSE} or Mat{T,MPIDENSE} (dense format). The return type depends on what f returns:
- If f returns a scalar or a Julia Vector → returns a Vec{T,MPIDENSE}
- If f returns an adjoint Julia Vector (a row vector) → returns a Mat{T,MPIDENSE}
Size behavior
If inputs have m rows and f returns:
- A scalar or adjoint vector → result has m rows
- An n-dimensional vector → result has m*n rows
Examples
# Example 1: Sum rows of a matrix
B = Mat_uniform(randn(5, 3))
sums = map_rows(sum, B) # Returns Vec{Float64,MPIDENSE} with 5 elements
# Example 2: Compute [sum, product] for each row (returns matrix)
stats = map_rows(x -> [sum(x), prod(x)]', B) # Returns 5×2 Mat{Float64,MPIDENSE}
# Example 3: Combine matrix and vector row-wise
C = Vec_uniform(randn(5))
combined = map_rows((x, y) -> [sum(x), prod(x), y[1]]', B, C) # Returns 5×3 Mat{Float64,MPIDENSE}
Implementation notes
- This is a collective operation; all ranks must call it with compatible arguments
- The function f is assumed to be homogeneous (it always returns the same type of output)
- For vectors, f receives a scalar value per row
- For matrices, f receives a view of the row (as with eachrow)
- The result always uses the MPIDENSE prefix regardless of the input prefix
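The semantics above can be checked against the native Julia pattern on ordinary arrays. This sketch uses plain Julia (no SafePETSc, no MPI) to mirror what map_rows computes:

```julia
# Plain-Julia analogue of map_rows on an ordinary matrix.
# Scalar results concatenate into a vector (like a Vec result);
# adjoint (row-vector) results stack into a matrix (like a Mat result).
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]

# Scalar per row -> length-3 vector
sums = map(sum, eachrow(A))                              # → [3.0, 7.0, 11.0]

# Adjoint vector per row -> 3×2 matrix, as in map_rows Example 2
stats = vcat(map(r -> [sum(r), prod(r)]', eachrow(A))...)
# → [3.0 2.0; 7.0 12.0; 11.0 30.0]
```

Because each row's adjoint result is 1×2, vcat stacks the m per-row results into an m×2 matrix, matching the documented size behavior.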
Indexing
Non-collective element and range access:
Base.getindex — Method
Base.getindex(v::Vec{T}, i::Int) -> T
MPI Non-Collective
Get the value at index i from a distributed vector.
The index i must be wholly contained in the current rank's ownership range. If not, the function will abort with an error message and stack trace.
Example
v = Vec_uniform([1.0, 2.0, 3.0, 4.0])
# On rank that owns index 2:
val = v[2] # Returns 2.0
Base.getindex — Method
Base.getindex(v::Vec{T}, range::UnitRange{Int}) -> Vector{T}
MPI Non-Collective
Extract a contiguous range of values from a distributed vector.
The range must be wholly contained in the current rank's ownership range. If not, the function will abort with an error message and stack trace.
Example
v = Vec_uniform([1.0, 2.0, 3.0, 4.0])
# On rank that owns indices 2:3:
vals = v[2:3] # Returns [2.0, 3.0]
Operations
Arithmetic
Vectors support standard Julia arithmetic operations via broadcasting:
y = x .+ 1.0 # Element-wise addition
y = 2.0 .* x # Scaling
z = x .+ y # Vector addition
y .= x .+ 1.0 # In-place operation
Standard operators are also overloaded:
z = x + y # Addition
z = x - y # Subtraction
z = -x # Negation
Linear Algebra
y = A * x # Matrix-vector multiplication
LinearAlgebra.mul!(y, A, x) # In-place multiplication
w = v' * A # Adjoint-vector times matrix
LinearAlgebra.mul!(w, v', A) # In-place
Properties
T = eltype(v) # Element type
n = length(v) # Vector length
n = size(v, 1) # Size in dimension 1