API Reference

This page provides detailed documentation for all exported functions in MultiGridBarrierPETSc.jl.

All Functions Are Collective

All functions documented here are MPI collective operations. Every MPI rank must call these functions together with the same parameters. Failure to do so will result in deadlock.
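
For example, a driver script launched under MPI must make the same sequence of calls on every rank. A minimal sketch (the mesh size and barrier parameter here are illustrative):

# Run as: mpiexec -n 4 julia script.jl
using MultiGridBarrierPETSc
MultiGridBarrierPETSc.Init()                       # every rank initializes together
sol = fem2d_petsc_solve(Float64; maxh=0.3, p=2.0)  # every rank, same parameters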

Initialization

This function must be called before using any other MultiGridBarrierPETSc functionality.

MultiGridBarrierPETSc.Init (Function)
Init(; options="-MPIAIJ_ksp_type preonly -MPIAIJ_pc_type lu -MPIAIJ_pc_factor_mat_solver_type mumps")

Collective

Initialize MultiGridBarrierPETSc by setting up MPI, PETSc, and solver options.

This function should be called once before using any MultiGridBarrierPETSc functionality. It will:

  1. Initialize MPI and PETSc if not already initialized
  2. Configure PETSc solver options (default: use MUMPS direct solver for sparse matrices)

The default options configure MPIAIJ (sparse) matrices to use:

  • -ksp_type preonly: Don't run an iterative solver; just apply the preconditioner
  • -pc_type lu: Use LU factorization as the preconditioner
  • -pc_factor_mat_solver_type mumps: Use MUMPS sparse direct solver for the factorization

Note: MPIDENSE matrices are only used for geometry data (coordinates and weights), not for linear solves.

Arguments

  • options::String: PETSc options string to set (default: MUMPS direct solver for sparse matrices)

Example

using MultiGridBarrierPETSc
MultiGridBarrierPETSc.Init()  # Use default MUMPS solver for sparse matrices

# Or with custom options:
MultiGridBarrierPETSc.Init(options="-MPIAIJ_ksp_type cg -MPIAIJ_pc_type jacobi")

High-Level API

These functions provide the simplest interface for solving problems with PETSc types.

1D Problems

MultiGridBarrierPETSc.fem1d_petsc (Function)
fem1d_petsc(::Type{T}=Float64; kwargs...) where {T}

Collective

Create a PETSc-based Geometry from fem1d parameters.

This function forwards its keyword arguments to fem1d to create a native 1D geometry, then converts it to PETSc distributed types (Mat and Vec) for distributed computing.

Note: Call MultiGridBarrierPETSc.Init() before using this function.

Arguments

  • T::Type: Element type for the geometry (default: Float64)
  • kwargs...: Additional keyword arguments passed to fem1d():
    • L::Int: Number of multigrid levels (default: 4), creating 2^L elements

Returns

A Geometry object with PETSc distributed types.

Example

MultiGridBarrierPETSc.Init()
g = fem1d_petsc(Float64; L=4)

MultiGridBarrierPETSc.fem1d_petsc_solve (Function)
fem1d_petsc_solve(::Type{T}=Float64; kwargs...) where {T}

Collective

Solve a fem1d problem using amgb with PETSc distributed types.

This is a convenience function that combines fem1d_petsc and amgb into a single call. It creates a PETSc-based 1D geometry and solves the barrier problem.

Arguments

  • T::Type: Element type for the geometry (default: Float64)
  • kwargs...: Keyword arguments passed to both fem1d_petsc and amgb
    • L::Int: Number of multigrid levels (passed to fem1d)
    • p: Power parameter for the barrier (passed to amgb)
    • verbose: Verbosity flag (passed to amgb)
    • Other arguments specific to fem1d or amgb

Returns

The solution object from amgb.

Example

using LinearAlgebra
MultiGridBarrierPETSc.Init()
sol = fem1d_petsc_solve(Float64; L=4, p=1.0, verbose=true)
println("Solution norm: ", norm(sol.z))

2D Problems

MultiGridBarrierPETSc.fem2d_petsc (Function)
fem2d_petsc(::Type{T}=Float64; kwargs...) where {T}

Collective

Create a PETSc-based Geometry from fem2d parameters.

This function forwards its keyword arguments to fem2d to create a native 2D geometry, then converts it to PETSc distributed types (Mat and Vec) for distributed computing.

Note: Call MultiGridBarrierPETSc.Init() before using this function.

Arguments

  • T::Type: Element type for the geometry (default: Float64)
  • kwargs...: Additional keyword arguments passed to fem2d()

Returns

A Geometry object with PETSc distributed types.

Example

MultiGridBarrierPETSc.Init()
g = fem2d_petsc(Float64; maxh=0.1)

MultiGridBarrierPETSc.fem2d_petsc_solve (Function)
fem2d_petsc_solve(::Type{T}=Float64; kwargs...) where {T}

Collective

Solve a fem2d problem using amgb with PETSc distributed types.

This is a convenience function that combines fem2d_petsc and amgb into a single call. It creates a PETSc-based geometry and solves the barrier problem.

Arguments

  • T::Type: Element type for the geometry (default: Float64)
  • kwargs...: Keyword arguments passed to both fem2d_petsc and amgb
    • maxh: Maximum mesh size (passed to fem2d)
    • p: Power parameter for the barrier (passed to amgb)
    • verbose: Verbosity flag (passed to amgb)
    • Other arguments specific to fem2d or amgb

Returns

The solution object from amgb.

Example

using LinearAlgebra
MultiGridBarrierPETSc.Init()
sol = fem2d_petsc_solve(Float64; maxh=0.1, p=2.0, verbose=true)
println("Solution norm: ", norm(sol.z))

3D Problems

MultiGridBarrierPETSc.fem3d_petsc (Function)
fem3d_petsc(::Type{T}=Float64; kwargs...) where {T}

Collective

Create a PETSc-based Geometry from fem3d parameters.

This function forwards its keyword arguments to fem3d to create a native 3D geometry, then converts it to PETSc distributed types (Mat and Vec) for distributed computing.

Note: Call MultiGridBarrierPETSc.Init() before using this function.

Arguments

  • T::Type: Element type for the geometry (default: Float64)
  • kwargs...: Additional keyword arguments passed to fem3d():
    • L::Int: Number of multigrid levels (default: 2)
    • k::Int: Polynomial order of elements (default: 3)
    • K: Coarse Q1 mesh as an N×3 matrix (optional, defaults to unit cube)

Returns

A Geometry object with PETSc distributed types.

Example

MultiGridBarrierPETSc.Init()
g = fem3d_petsc(Float64; L=2, k=3)

MultiGridBarrierPETSc.fem3d_petsc_solve (Function)
fem3d_petsc_solve(::Type{T}=Float64; kwargs...) where {T}

Collective

Solve a fem3d problem using amgb with PETSc distributed types.

This is a convenience function that combines fem3d_petsc and amgb into a single call. It creates a PETSc-based 3D geometry and solves the barrier problem.

Arguments

  • T::Type: Element type for the geometry (default: Float64)
  • kwargs...: Keyword arguments passed to both fem3d_petsc and amgb
    • L::Int: Number of multigrid levels (passed to fem3d)
    • k::Int: Polynomial order of elements (passed to fem3d)
    • p: Power parameter for the barrier (passed to amgb)
    • verbose: Verbosity flag (passed to amgb)
    • D: Operator structure matrix (passed to amgb, defaults to 3D operators)
    • f: Source term function (passed to amgb, defaults to 3D source)
    • g: Boundary condition function (passed to amgb, defaults to 3D BCs)
    • Other arguments specific to fem3d or amgb

Returns

The solution object from amgb.

Example

using LinearAlgebra
MultiGridBarrierPETSc.Init()
sol = fem3d_petsc_solve(Float64; L=2, k=3, p=1.0, verbose=true)
println("Solution norm: ", norm(sol.z))

Type Conversion API

These functions convert between native Julia types and PETSc distributed types. The petsc_to_native function dispatches on the argument type, handling Geometry, AMGBSOL, and ParabolicSOL objects.

MultiGridBarrierPETSc.native_to_petsc (Function)
native_to_petsc(g_native::Geometry{T, Matrix{T}, Vector{T}, SparseMatrixCSC{T,Int}, Discretization}) where {T, Discretization}

Collective

Convert a native Geometry object (with Julia arrays) to use PETSc distributed types.

This is a collective operation. Each rank first constructs the same native geometry (for example, via fem2d()); this function then converts:

  • x::Matrix{T} -> x::Mat{T, MPIDENSE}
  • w::Vector{T} -> w::Vec{T}
  • operators[key]::SparseMatrixCSC{T,Int} -> operators[key]::Mat{T, MPIAIJ}
  • subspaces[key][i]::SparseMatrixCSC{T,Int} -> subspaces[key][i]::Mat{T, MPIAIJ}

The MPIDENSE prefix indicates dense storage (for geometry data and weights), while MPIAIJ indicates sparse storage (for operators and subspace matrices).
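
Example

A minimal sketch, building the native geometry with fem2d as in the round-trip example later on this page:

using MultiGridBarrier
MultiGridBarrierPETSc.Init()
g_native = fem2d(; maxh=0.3)          # every rank builds the same native geometry
g_petsc = native_to_petsc(g_native)   # collective conversion to PETSc distributed types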

MultiGridBarrierPETSc.petsc_to_native (Function)
petsc_to_native(g_petsc::Geometry{T, Mat{T,XPrefix}, Vec{T}, Mat{T,MPrefix}, Discretization}) where {T, XPrefix, MPrefix, Discretization}

Collective

Convert a PETSc Geometry object (with distributed PETSc types) back to native Julia arrays.

This is a collective operation. This function converts:

  • x::Mat{T, MPIDENSE} -> x::Matrix{T}
  • w::Vec{T} -> w::Vector{T}
  • operators[key]::Mat{T, MPIAIJ} -> operators[key]::SparseMatrixCSC{T,Int}
  • subspaces[key][i]::Mat{T, MPIAIJ} -> subspaces[key][i]::SparseMatrixCSC{T,Int}

Uses SafePETSc.J() which automatically handles dense vs sparse conversion based on the Mat's storage type.
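
Example

A minimal sketch, reversing a conversion such as the one produced by fem2d_petsc (mesh size is illustrative):

g_petsc = fem2d_petsc(Float64; maxh=0.3)
g_native = petsc_to_native(g_petsc)   # back to Matrix, Vector, and SparseMatrixCSC storage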

petsc_to_native(sol_petsc::AMGBSOL{T, XType, WType, MType, Discretization}) where {T, XType, WType, MType, Discretization}

Collective

Convert an AMGBSOL solution object from PETSc types back to native Julia types.

This is a collective operation that performs a deep conversion of the solution structure:

  • z: Mat{T,Prefix} -> Matrix{T} or Vec{T} -> Vector{T} (depending on the type)
  • SOL_feasibility: NamedTuple with PETSc types -> NamedTuple with native types
  • SOL_main: NamedTuple with PETSc types -> NamedTuple with native types
  • geometry: Geometry with PETSc types -> Geometry with native types

Uses SafePETSc.J() which automatically handles dense vs sparse conversion based on the Mat's storage type.
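
Example

A minimal sketch (solver parameters are illustrative):

sol_petsc = fem2d_petsc_solve(Float64; maxh=0.3, p=2.0)
sol_native = petsc_to_native(sol_petsc)   # deep conversion, including the embedded geometry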

petsc_to_native(sol_petsc::ParabolicSOL{T, XType, WType, MType, Discretization}) where {T, XType, WType, MType, Discretization}

Collective

Convert a ParabolicSOL solution object from PETSc types back to native Julia types.

This is a collective operation that performs a deep conversion of the parabolic solution:

  • geometry: Geometry with PETSc types -> Geometry with native types
  • ts: Vector{T} (unchanged, already native)
  • u: Vector{Mat{T,Prefix}} -> Vector{Matrix{T}} (each time snapshot converted)

Example

g = fem2d_petsc(Float64; L=2)
sol_petsc = parabolic_solve(g; h=0.5, p=1.0)
sol_native = petsc_to_native(sol_petsc)

Type Mappings Reference

Native to PETSc Conversions

When converting from native Julia types to PETSc distributed types:

Native Type              PETSc Type          PETSc Prefix   Usage
Matrix{T}                Mat{T, MPIDENSE}    MPIDENSE       Geometry coordinates, dense operators
Vector{T}                Vec{T}              -              Weights, dense vectors
SparseMatrixCSC{T,Int}   Mat{T, MPIAIJ}      MPIAIJ         Sparse operators, subspace matrices

PETSc to Native Conversions

When converting from PETSc distributed types back to native Julia types:

PETSc Type          Native Type              Conversion Method
Mat{T, MPIDENSE}    Matrix{T}                SafePETSc.J()
Mat{T, MPIAIJ}      SparseMatrixCSC{T,Int}   SafePETSc.J()
Vec{T}              Vector{T}                SafePETSc.J()
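
These mappings can be checked directly on a converted geometry. A small sketch (assumes Init() has been called; mesh size is illustrative):

using MultiGridBarrier, MultiGridBarrierPETSc
MultiGridBarrierPETSc.Init()
g_petsc = native_to_petsc(fem2d(; maxh=0.3))
typeof(g_petsc.x)                       # expected: Mat{Float64, MPIDENSE}
typeof(g_petsc.w)                       # expected: Vec{Float64}
typeof(g_petsc.operators[:laplacian])   # expected: Mat{Float64, MPIAIJ}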

Geometry Structure

The Geometry type from MultiGridBarrier is parameterized by its storage types:

Native Geometry:

Geometry{T, Matrix{T}, Vector{T}, SparseMatrixCSC{T,Int}, Discretization}

PETSc Geometry:

Geometry{T, Mat{T,MPIDENSE}, Vec{T}, Mat{T,MPIAIJ}, Discretization}

Fields

  • discretization: Discretization information (domain, mesh, etc.)
  • x: Geometry coordinates (Matrix or Mat)
  • w: Quadrature weights (Vector or Vec)
  • operators: Dictionary of operators (id, laplacian, mass, etc.)
  • subspaces: Dictionary of subspace projection matrices
  • refine: Vector of refinement matrices (coarse → fine)
  • coarsen: Vector of coarsening matrices (fine → coarse)
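
Fields are accessed the same way for either storage flavor. A small sketch with a native 2D geometry (mesh size is illustrative):

using MultiGridBarrier
g = fem2d(; maxh=0.3)
size(g.x)                       # geometry coordinates
length(g.w)                     # quadrature weights
lap = g.operators[:laplacian]   # one of the operator matrices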

Solution Structure

The AMGBSOL type from MultiGridBarrier contains the complete solution:

Fields

  • z: Solution matrix/vector
  • SOL_feasibility: NamedTuple with feasibility phase information
  • SOL_main: NamedTuple with main solve information
    • t_elapsed: Elapsed solve time in seconds
    • ts: Barrier parameter values
    • its: Iterations per level
    • c_dot_Dz: Convergence measure values
  • log: Vector of iteration logs
  • geometry: The geometry used for solving
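
A sketch of inspecting these fields, converting to native types first so the values are plain Julia objects (solver parameters are illustrative):

using LinearAlgebra
sol_petsc = fem2d_petsc_solve(Float64; maxh=0.3, p=2.0)
sol = petsc_to_native(sol_petsc)
sol.SOL_main.t_elapsed   # elapsed solve time in seconds
sol.SOL_main.its         # iterations per level
norm(sol.z)              # norm of the solution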

MPI and IO Utilities

SafePETSc.io0()

Returns an IO stream that only writes on rank 0:

using SafePETSc: io0

println(io0(), "This prints once from rank 0")
println(io0(), my_petsc_vec)  # collective show() of a Vec

MPI Rank Information

using MPI

rank = MPI.Comm_rank(MPI.COMM_WORLD)  # Current rank (0 to nranks-1)
nranks = MPI.Comm_size(MPI.COMM_WORLD)  # Total number of ranks

PETSc Configuration

MUMPS Solver

The Init() function automatically configures PETSc to use MUMPS for sparse matrices:

# Equivalent PETSc options set automatically:
# -MPIAIJ_ksp_type preonly          # No iterative solver, just direct solve
# -MPIAIJ_pc_type lu                # LU factorization
# -MPIAIJ_pc_factor_mat_solver_type mumps  # Use MUMPS for factorization

Matrix Type Configuration:

  • Sparse matrices (MPIAIJ): Use MUMPS direct solver for exact solves
  • Dense matrices (MPIDENSE): Used only for geometry data (coordinates and weights), not for linear solves

This ensures exact direct solves for the sparse linear systems in the barrier method's Newton iterations.
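
The defaults can be overridden at initialization, as in the custom-options example shown under Init(); for instance, to experiment with an iterative Krylov solver instead of MUMPS:

MultiGridBarrierPETSc.Init(options="-MPIAIJ_ksp_type cg -MPIAIJ_pc_type jacobi")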

Examples

Type Conversion Round-Trip

using MultiGridBarrierPETSc
using MultiGridBarrier
using LinearAlgebra
MultiGridBarrierPETSc.Init()

# Create native geometry
g_native = fem2d(; maxh=0.3)

# Convert to PETSc
g_petsc = native_to_petsc(g_native)

# Solve with PETSc types
sol_petsc = amgb(g_petsc; p=2.0)

# Convert back to native
sol_native = petsc_to_native(sol_petsc)
g_back = petsc_to_native(g_petsc)

# Verify round-trip accuracy
@assert norm(g_native.x - g_back.x) < 1e-10
@assert norm(g_native.w - g_back.w) < 1e-10

Accessing Operator Matrices

# Native geometry (assumes MultiGridBarrierPETSc.Init() has been called, as above)
using SafePETSc  # for SafePETSc.J
g_native = fem2d(; maxh=0.2)
lap_native = g_native.operators[:laplacian]  # SparseMatrixCSC

# PETSc geometry
g_petsc = native_to_petsc(g_native)
lap_petsc = g_petsc.operators[:laplacian]  # Mat{Float64, MPIAIJ}

# Convert back if needed
lap_back = SafePETSc.J(lap_petsc)  # SparseMatrixCSC

Integration with MultiGridBarrier

All MultiGridBarrier functions work seamlessly with PETSc types:

using MultiGridBarrier: amgb, amgb_solve

# Create PETSc geometry
g = fem2d_petsc(Float64; L=3)

# Use MultiGridBarrier functions directly
sol = amgb(g; p=1.0, verbose=true)
sol = amgb_solve(g; p=1.5, maxit=50, tol=1e-10)

The package extends MultiGridBarrier's internal API (amgb_zeros, amgb_hcat, amgb_diag, etc.) to work with PETSc types automatically.