API Reference
This page provides detailed documentation for all exported functions in MultiGridBarrierMPI.jl.
All functions documented here are MPI collective operations. Every MPI rank must call these functions together with the same parameters. Failure to do so will result in deadlock.
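For illustration, a minimal driver script might look like the sketch below; the file name and launch command are assumptions, and the point is that every rank runs the same code path so each collective call is made with identical arguments:
# contents of solve.jl (hypothetical file name); run with: mpiexec -n 4 julia --project solve.jl
using MPI
MPI.Init()
using MultiGridBarrierMPI
using LinearAlgebraMPI
# Every rank reaches this collective call with the same parameters.
sol = fem2d_mpi_solve(Float64; L=2, p=1.0, verbose=false)
println(io0(), "solve finished")   # io0() prints only on rank 0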
High-Level API
These functions provide the simplest interface for solving problems with MPI types.
1D Problems
MultiGridBarrierMPI.fem1d_mpi — Function
fem1d_mpi(::Type{T}=Float64; kwargs...) where {T}
Collective
Create an MPI-based Geometry from fem1d parameters.
This function calls fem1d(kwargs...) to create a native 1D geometry, then converts it to use MPI distributed types for distributed computing.
Arguments
- T::Type: Element type for the geometry (default: Float64)
- kwargs...: Additional keyword arguments passed to fem1d():
  - L::Int: Number of multigrid levels (default: 4), creating 2^L elements
Returns
A Geometry object with MPI distributed types.
Example
using MPI; MPI.Init()
using MultiGridBarrierMPI
g = fem1d_mpi(Float64; L=4)
MultiGridBarrierMPI.fem1d_mpi_solve — Function
fem1d_mpi_solve(::Type{T}=Float64; kwargs...) where {T}
Collective
Solve a fem1d problem using amgb with MPI distributed types.
This is a convenience function that combines fem1d_mpi and amgb into a single call. It creates an MPI-based 1D geometry and solves the barrier problem.
Arguments
- T::Type: Element type for the geometry (default: Float64)
- kwargs...: Keyword arguments passed to both fem1d_mpi and amgb:
  - L::Int: Number of multigrid levels (passed to fem1d)
  - p: Power parameter for the barrier (passed to amgb)
  - verbose: Verbosity flag (passed to amgb)
  - Other arguments specific to fem1d or amgb
Returns
The solution object from amgb.
Example
sol = fem1d_mpi_solve(Float64; L=4, p=1.0, verbose=true)
println("Solution norm: ", norm(sol.z))2D Problems
MultiGridBarrierMPI.fem2d_mpi — Function
fem2d_mpi(::Type{T}=Float64; kwargs...) where {T}
Collective
Create an MPI-based Geometry from fem2d parameters.
This function calls fem2d(kwargs...) to create a native geometry, then converts it to use MPI distributed types for distributed computing.
Arguments
- T::Type: Element type for the geometry (default: Float64)
- kwargs...: Additional keyword arguments passed to fem2d()
Returns
A Geometry object with MPI distributed types.
Example
using MPI; MPI.Init()
using MultiGridBarrierMPI
g = fem2d_mpi(Float64; L=3)
MultiGridBarrierMPI.fem2d_mpi_solve — Function
fem2d_mpi_solve(::Type{T}=Float64; kwargs...) where {T}
Collective
Solve a fem2d problem using amgb with MPI distributed types.
This is a convenience function that combines fem2d_mpi and amgb into a single call. It creates an MPI-based geometry and solves the barrier problem.
Arguments
- T::Type: Element type for the geometry (default: Float64)
- kwargs...: Keyword arguments passed to both fem2d_mpi and amgb:
  - L: Number of multigrid levels (passed to fem2d)
  - p: Power parameter for the barrier (passed to amgb)
  - verbose: Verbosity flag (passed to amgb)
  - Other arguments specific to fem2d or amgb
Returns
The solution object from amgb.
Example
sol = fem2d_mpi_solve(Float64; L=3, p=2.0, verbose=true)
println("Solution norm: ", norm(sol.z))3D Problems
MultiGridBarrierMPI.fem3d_mpi — Function
fem3d_mpi(::Type{T}=Float64; kwargs...) where {T}
Collective
Create an MPI-based Geometry from fem3d parameters.
This function calls fem3d(kwargs...) to create a native 3D geometry, then converts it to use MPI distributed types for distributed computing.
Arguments
- T::Type: Element type for the geometry (default: Float64)
- kwargs...: Additional keyword arguments passed to fem3d():
  - L::Int: Number of multigrid levels (default: 2)
  - k::Int: Polynomial order of elements (default: 3)
  - K: Coarse Q1 mesh as an N×3 matrix (optional, defaults to unit cube)
Returns
A Geometry object with MPI distributed types.
Example
using MPI; MPI.Init()
using MultiGridBarrierMPI
g = fem3d_mpi(Float64; L=2, k=3)
MultiGridBarrierMPI.fem3d_mpi_solve — Function
fem3d_mpi_solve(::Type{T}=Float64; kwargs...) where {T}
Collective
Solve a fem3d problem using amgb with MPI distributed types.
This is a convenience function that combines fem3d_mpi and amgb into a single call. It creates an MPI-based 3D geometry and solves the barrier problem.
Arguments
- T::Type: Element type for the geometry (default: Float64)
- kwargs...: Keyword arguments passed to both fem3d_mpi and amgb:
  - L::Int: Number of multigrid levels (passed to fem3d)
  - k::Int: Polynomial order of elements (passed to fem3d)
  - p: Power parameter for the barrier (passed to amgb)
  - verbose: Verbosity flag (passed to amgb)
  - D: Operator structure matrix (passed to amgb, defaults to 3D operators)
  - f: Source term function (passed to amgb, defaults to 3D source)
  - g: Boundary condition function (passed to amgb, defaults to 3D BCs)
  - Other arguments specific to fem3d or amgb
Returns
The solution object from amgb.
Example
sol = fem3d_mpi_solve(Float64; L=2, k=3, p=1.0, verbose=true)
println("Solution norm: ", norm(sol.z))Type Conversion API
These functions convert between native Julia types and MPI distributed types. The mpi_to_native function dispatches on type, handling Geometry, AMGBSOL, and ParabolicSOL objects.
MultiGridBarrierMPI.native_to_mpi — Function
native_to_mpi(g_native::Geometry{T, Matrix{T}, Vector{T}, SparseMatrixCSC{T,Int}, Discretization}) where {T, Discretization}
Collective
Convert a native Geometry object (with Julia arrays) to use MPI distributed types.
This is a collective operation. Each rank constructs the same native geometry (for example via fem2d()); this function then converts:
- x::Matrix{T} -> x::MatrixMPI{T}
- w::Vector{T} -> w::VectorMPI{T}
- operators[key]::SparseMatrixCSC{T,Int} -> operators[key]::SparseMatrixMPI{T}
- subspaces[key][i]::SparseMatrixCSC{T,Int} -> subspaces[key][i]::SparseMatrixMPI{T}
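A minimal usage sketch (assuming the native geometry comes from MultiGridBarrier's fem2d, as in the round-trip example further down this page):
using MPI; MPI.Init()
using MultiGridBarrier, MultiGridBarrierMPI
g_native = fem2d(; L=2)          # identical native geometry on every rank
g_mpi = native_to_mpi(g_native)  # collective: distributes x, w, operators, subspaces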
MultiGridBarrierMPI.mpi_to_native — Function
mpi_to_native(g_mpi::Geometry{T, MatrixMPI{T}, VectorMPI{T}, <:SparseMatrixMPI{T}, Discretization}) where {T, Discretization}
Collective
Convert an MPI Geometry object (with distributed MPI types) back to native Julia arrays.
This is a collective operation that converts:
- x::MatrixMPI{T} -> x::Matrix{T}
- w::VectorMPI{T} -> w::Vector{T}
- operators[key]::SparseMatrixMPI{T} -> operators[key]::SparseMatrixCSC{T,Int}
- subspaces[key][i]::SparseMatrixMPI{T} -> subspaces[key][i]::SparseMatrixCSC{T,Int}
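A short sketch of the reverse direction, continuing from the native_to_mpi example above:
g_back = mpi_to_native(g_mpi)   # collective: back to Matrix/Vector/SparseMatrixCSC storage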
mpi_to_native(sol_mpi::AMGBSOL{T, XType, WType, MType, Discretization}) where {T, XType, WType, MType, Discretization}
Collective
Convert an AMGBSOL solution object from MPI types back to native Julia types.
This is a collective operation that performs a deep conversion of the solution structure:
- z: MatrixMPI{T} -> Matrix{T} or VectorMPI{T} -> Vector{T}
- SOL_feasibility: NamedTuple with MPI types -> NamedTuple with native types
- SOL_main: NamedTuple with MPI types -> NamedTuple with native types
- geometry: Geometry with MPI types -> Geometry with native types
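For example (a sketch reusing the high-level constructors documented above):
using MultiGridBarrier: amgb
g = fem2d_mpi(Float64; L=2)
sol_mpi = amgb(g; p=1.0)             # solution fields use MPI-distributed types
sol_native = mpi_to_native(sol_mpi)  # z, SOL_feasibility, SOL_main, geometry now native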
mpi_to_native(sol_mpi::ParabolicSOL{T, XType, WType, MType, Discretization}) where {T, XType, WType, MType, Discretization}
Collective
Convert a ParabolicSOL solution object from MPI types back to native Julia types.
This is a collective operation that performs a deep conversion of the parabolic solution:
- geometry: Geometry with MPI types -> Geometry with native types
- ts: Vector{T} (unchanged, already native)
- u: Vector{MatrixMPI{T}} -> Vector{Matrix{T}} (each time snapshot converted)
Example
g = fem2d_mpi(Float64; L=2)
sol_mpi = parabolic_solve(g; h=0.5, p=1.0)
sol_native = mpi_to_native(sol_mpi)
Type Mappings Reference
Native to MPI Conversions
When converting from native Julia types to MPI distributed types:
| Native Type | MPI Type | Usage |
|---|---|---|
| Matrix{T} | MatrixMPI{T} | Geometry coordinates, dense data |
| Vector{T} | VectorMPI{T} | Weights, dense vectors |
| SparseMatrixCSC{T,Int} | SparseMatrixMPI{T} | Sparse operators, subspace matrices |
MPI to Native Conversions
When converting from MPI distributed types back to native Julia types:
| MPI Type | Native Type |
|---|---|
| MatrixMPI{T} | Matrix{T} |
| VectorMPI{T} | Vector{T} |
| SparseMatrixMPI{T} | SparseMatrixCSC{T,Int} |
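As a quick check of these mappings (a sketch; it assumes MatrixMPI, VectorMPI, and SparseMatrixMPI are exported by LinearAlgebraMPI and reuses g_mpi from the conversion examples above):
using LinearAlgebraMPI
@assert g_mpi.x isa MatrixMPI                      # was Matrix{Float64}
@assert g_mpi.w isa VectorMPI                      # was Vector{Float64}
@assert g_mpi.operators[:id] isa SparseMatrixMPI   # was SparseMatrixCSC{Float64,Int}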
Geometry Structure
The Geometry type from MultiGridBarrier is parameterized by its storage types:
Native Geometry:
Geometry{T, Matrix{T}, Vector{T}, SparseMatrixCSC{T,Int}, Discretization}
MPI Geometry:
Geometry{T, MatrixMPI{T}, VectorMPI{T}, SparseMatrixMPI{T}, Discretization}
Fields
- discretization: Discretization information (domain, mesh, etc.)
- x: Geometry coordinates (Matrix or MatrixMPI)
- w: Quadrature weights (Vector or VectorMPI)
- operators: Dictionary of operators (id, dx, dy, etc.)
- subspaces: Dictionary of subspace projection matrices
- refine: Vector of refinement matrices (coarse -> fine)
- coarsen: Vector of coarsening matrices (fine -> coarse)
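A short field-access sketch (the :id operator key is used elsewhere on this page; other keys such as :dx depend on the discretization):
g = fem2d_mpi(Float64; L=2)
g.x                  # coordinates (MatrixMPI)
g.w                  # quadrature weights (VectorMPI)
g.operators[:id]     # identity operator (SparseMatrixMPI)
length(g.refine)     # number of refinement matrices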
Solution Structure
The AMGBSOL type from MultiGridBarrier contains the complete solution:
Fields
- z: Solution matrix/vector
- SOL_feasibility: NamedTuple with feasibility phase information
- SOL_main: NamedTuple with main solve information
- t_elapsed: Elapsed solve time in seconds
- ts: Barrier parameter values
- its: Iterations per level
- c_dot_Dz: Convergence measure values
- log: Vector of iteration logs
- geometry: The geometry used for solving
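For example, after a high-level solve (a sketch; field meanings as listed above):
sol = fem2d_mpi_solve(Float64; L=2, p=1.0)
sol.z            # solution values (MPI-distributed)
sol.t_elapsed    # elapsed solve time in seconds
sol.its          # iterations per level
sol.geometry     # the Geometry used for the solve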
MPI and IO Utilities
LinearAlgebraMPI.io0()
Returns an IO stream that only writes on rank 0:
using LinearAlgebraMPI
println(io0(), "This prints once from rank 0")MPI Rank Information
using MPI
rank = MPI.Comm_rank(MPI.COMM_WORLD) # Current rank (0 to nranks-1)
nranks = MPI.Comm_size(MPI.COMM_WORLD) # Total number of ranks
Examples
Type Conversion Round-Trip
using MPI
MPI.Init()
using MultiGridBarrierMPI
using LinearAlgebraMPI
using MultiGridBarrier
using LinearAlgebra
# Create native geometry
g_native = fem2d(; L=2)
# Convert to MPI
g_mpi = native_to_mpi(g_native)
# Solve with MPI types
sol_mpi = amgb(g_mpi; p=2.0)
# Convert back to native
sol_native = mpi_to_native(sol_mpi)
g_back = mpi_to_native(g_mpi)
# Verify round-trip accuracy
@assert norm(g_native.x - g_back.x) < 1e-10
@assert norm(g_native.w - g_back.w) < 1e-10
Accessing Operator Matrices
# Native geometry
g_native = fem2d(; L=2)
id_native = g_native.operators[:id] # SparseMatrixCSC
# MPI geometry
g_mpi = native_to_mpi(g_native)
id_mpi = g_mpi.operators[:id] # SparseMatrixMPI
# Convert back if needed
id_back = SparseMatrixCSC(id_mpi) # SparseMatrixCSC
Integration with MultiGridBarrier
All MultiGridBarrier functions work seamlessly with MPI types:
using MultiGridBarrier: amgb
# Create MPI geometry
g = fem2d_mpi(Float64; L=3)
# Use MultiGridBarrier functions directly
sol = amgb(g; p=1.0, verbose=true)
The package extends MultiGridBarrier's internal API (amgb_zeros, amgb_diag, amgb_blockdiag, map_rows, etc.) to work with MPI types automatically.