TriBIE (Triangular Boundary Integral Equation) is a high-performance parallel Fortran90 code for simulating earthquake cycles, slow slip events, and aseismic transients on complex 3D fault geometries. The code employs hybrid MPI+OpenMP parallelization with computational optimizations, including SIMD vectorization and dynamic load balancing, to efficiently model rate-and-state friction on triangular fault meshes embedded in an elastic half-space.
- 🌍 Complex Fault Geometries: Supports arbitrary curved faults with triangular mesh discretization
- ⚡ High-Performance Computing: Hybrid MPI+OpenMP parallelization with SIMD optimization
- 🔧 Advanced Physics: Rate-and-state friction laws for realistic earthquake cycle modeling
- 📊 Modern I/O: HDF5/XDMF output for direct visualization in Paraview
- ⚖️ Dynamic Load Balancing: Automatic work distribution across irregular mesh geometries
- 🔄 Accumulative Simulations: Long-term earthquake cycle studies with restart capability
- ✅ Scientifically Validated: Verified in the SCEC Sequences of Earthquakes and Aseismic Slip (SEAS) Project
Applications: Earthquake cycle modeling, slow slip event analysis, tsunami hazard assessment, and fault system dynamics research.
Verification: TriBIE has been extensively validated through the SCEC SEAS Project benchmark comparisons.
Figure: Map view of the on-fault distribution of key fault parameters in the BP5 example.
Figure: Cumulative slip along strike (left) and down-dip (right) during the first 800 simulated years. Coseismic slip is shown in red, interseismic slip in blue.
TriBIE provides a complete workflow for earthquake cycle simulation in three main phases:
- Stiffness Matrix Computation using `calc_trigreen.f90`
- Cycling Simulation using `3dtri_BP5.f90`
- Result Analysis using output visualization tools

```
Mesh Generation → Stiffness Computation → Cycling Simulation → Analysis
        ↓                   ↓                     ↓               ↓
triangular_mesh.gts  calc_trigreen.f90      3dtri_BP5.f90     Results
```
Prerequisites:
- Input Mesh: `triangular_mesh.gts` file containing triangular fault elements
- MPI Environment: Multi-process execution environment
- Compilation: Use `TriGreen/runcompile.sh`

```bash
cd TriGreen/
./runcompile.sh
mpirun -np <n_processes> ./calc_trigreen
```

Outputs:
- `trigreen_<process_id>.bin`: Stiffness matrix for each process
- `position.bin`: Element centroid positions
- Console Output: Distribution information and performance metrics
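As a quick sanity check, confirm that one stiffness file was written per MPI rank along with the centroid file. A minimal sketch, assuming it is run from the repository root and that `NP` is set to the process count you used above:

```bash
#!/bin/bash
# NP is a placeholder: the number of MPI processes used for calc_trigreen.
NP=8
nfiles=$(ls TriGreen/trigreen_*.bin 2>/dev/null | wc -l)
if [ "$nfiles" -ne "$NP" ]; then
  echo "Expected $NP trigreen_*.bin files, found $nfiles" >&2
  exit 1
fi
ls -lh TriGreen/position.bin   # element centroid positions should also exist
```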
- Dynamic Load Balancing: Automatically balances uneven workloads across processes
- SIMD Optimization: Vectorized computation for improved performance
- Memory Management: Optimized allocation/deallocation strategies
Prerequisites:
- Stiffness Files: `trigreen_<process_id>.bin` files from step 1
- Parameter File: `input/parameter1.txt` with simulation parameters
- Compilation: Use `src/compile.sh`

```bash
cd src/
./compile.sh
cd ../input
mpirun -np <n_processes> ../src/3dtri_BP5
```

- MPI_Scatterv: Proper handling of uneven distributions
- Rate-and-State Friction: Physics-based fault behavior modeling
- SIMD Optimization: Vectorized physics calculations
- Dynamic Load Balancing: Compatible with calc_trigreen distribution
```
<jobname>                                      ! Simulation job identifier
<foldername>                                   ! Output directory path
<stiffname>                                    ! Stiffness matrix file prefix
<restartname>                                  ! Restart file name (if applicable)
<Nt_all> <nprocs> <n_obv> <num_of_receivers_along_strike> <num_of_receivers_along_downdip>  ! Array dimensions
<Idin> <Idout> <Iprofile> <Iperb> <Isnapshot>  ! Control flags
<Vpl>                                          ! Plate velocity (m/s)
<tmax>                                         ! Maximum simulation time (years)
<tslip_ave> <tslipend> <tslip_aveint>          ! Slip averaging parameters
<tint_out> <tmin_out> <tint_cos> <tint_sse>    ! Output intervals
<vcos> <vsse1> <vsse2>                         ! Velocity thresholds
<nmv> <nas> <ncos> <nnul> <nsse> <n_nul_int>   ! Output counters
<s1(1)> <s1(2)> ... <s1(10)>                   ! Additional parameters
```
Array Dimensions:
- `Nt_all`: Total number of fault elements
- `nprocs`: Number of MPI processes
- `n_obv`: Number of observation points

Physical Parameters:
- `Vpl`: Plate velocity (typically 1e-10 to 1e-9 m/s)
- `tmax`: Maximum simulation time in years

Output Control:
- `tint_cos`: Coseismic output interval
- `tint_sse`: Slow slip event output interval
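For reference on the `Vpl` range above: the BP5 example later in this README uses a plate velocity of 31.5 mm/yr, which works out to 0.0315 m / 3.156e7 s ≈ 1.0e-9 m/s, at the upper end of the typical range.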
```bash
# Ensure triangular_mesh.gts exists in TriGreen/ directory
ls TriGreen/triangular_mesh.gts
```

```bash
cd TriGreen/
./runcompile.sh
mpirun -np 8 ./calc_trigreen
# Generates: trigreen_0.bin, trigreen_1.bin, ..., trigreen_7.bin
```

```bash
cd input/
# Edit parameter1.txt with your simulation parameters
# Ensure nprocs matches the number of stiffness files
```

```bash
cd src/
./compile.sh
cd ../input/
mpirun -np 8 ../src/3dtri_BP5
```

```bash
# Check output files for results
ls -la area* fltst* prof*
```

TriBIE now supports modern HDF5/XDMF output for visualization:
- HDF5 Time Series: Efficient storage of large temporal datasets
- XDMF Visualization: Direct compatibility with Paraview
- Accumulative Writing: Multiple simulation runs append to existing data
- Mesh Integration: Automatic mesh export for visualization
- `timeseries_data_<jobname>.h5`: Main time series data
- `timeseries_data_<jobname>.xdmf`: Paraview visualization file
- `sse_timeseries_data_<jobname>.h5`: SSE-specific data
- `sse_timeseries_data_<jobname>.xdmf`: SSE visualization file
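To inspect these files before loading them into Paraview, the standard HDF5 command-line tools can be used (assuming they are installed; the internal group/dataset layout depends on the TriBIE version, so treat the listing as exploratory):

```bash
# List the full object tree of the time series file
h5ls -r timeseries_data_<jobname>.h5
# Dump only the structure (no data values)
h5dump -H timeseries_data_<jobname>.h5
```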
MPI Communication Errors:
- Symptom: `MPI_ERR_TRUNCATE` or communication failures
- Solution: Ensure `nprocs` in parameter1.txt matches the number of stiffness files
Memory Issues:
- Symptom: `free(): invalid size` or allocation failures
- Solution: Fixed by the conditional-deallocation memory management in the current code
HDF5 Errors:
- Symptom: "name already exists" or dataset creation failures
- Solution: Fixed with existence checks for groups and datasets
- SIMD: Use `*_simd.f90` versions for better performance
- OpenMP: Hybrid MPI+OpenMP parallelization available
- Process Count: Match MPI processes to available cores
- Automatically handles uneven element distributions
- Optimizes work distribution across processes
- Compatible with irregular fault geometries
- Automatic vectorization of computational loops
- Cache-aware memory access patterns
- Performance improvements on modern processors
- HDF5-based parallel file output
- XDMF integration for visualization
- Accumulative time series storage
- Use SIMD-optimized versions for production runs
- Match MPI processes to available hardware cores
- Monitor memory usage and adjust problem size accordingly
- Set appropriate OpenMP thread counts for hybrid parallelization
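Combining these tips, a hybrid launch might look like the following sketch; the rank and thread counts are placeholders to adapt to your hardware:

```bash
# Example: 4 MPI ranks x 8 OpenMP threads (illustrative; keep total <= available cores)
export OMP_NUM_THREADS=8
export OMP_PROC_BIND=close
mpirun -np 4 ../src/3dtri_BP5 < parameter1.txt
```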
- Always verify input file formats and parameters
- Use restart capability for long simulations
- Check output files for expected results
- Test with smaller problems before large-scale runs
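For the last point, a short smoke test is a cheap way to validate a configuration before a long run. A hedged sketch (paths and the manual `tmax` edit depend on your setup, and `nprocs` must still match your stiffness files):

```bash
# Smoke test on a copy of the input directory
cp -r input/ input_smoke/
cd input_smoke/
# Manually reduce <tmax> in parameter1.txt to a few model years first.
mpirun -np 8 ../src/3dtri_BP5 < parameter1.txt   # 8 is a placeholder
ls -la area* fltst* prof*                        # confirm expected outputs appear
```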
For detailed parameter descriptions and advanced usage, see src/USER_GUIDE.md.
The example1/ directory contains a complete working example based on the SCEC SEAS BP5 benchmark problem. This example demonstrates earthquake cycle simulation on a planar fault with rate-and-state friction.
Problem: SCEC SEAS Benchmark Problem 5 (BP5) - Long-term earthquake cycles on a vertical strike-slip fault
- Fault geometry: 160 km × 60 km planar fault
- Depth: Surface to 60 km depth
- Elements: 9,214 triangular elements, ~1 km mesh size
- Physics: Rate-and-state friction with aging law
- Duration: 500 years of simulated time
```bash
# Create a working copy of example1
cp -r example1/ my_simulation/
cd my_simulation/
```

```bash
# Compile TriGreen for stiffness calculation
cd ../TriGreen/
./runcompile.sh

# Compile main simulation code
cd ../src/
./compile.sh
cd ../my_simulation/
```

```bash
# Copy mesh file to TriGreen directory
cp triangular_mesh.gts ../TriGreen/

# Run stiffness calculation (single process for this example)
cd ../TriGreen/
mpirun -np 1 ./calc_trigreen

# Copy stiffness files back to example directory
cp trigreen_0.bin ../my_simulation/
cd ../my_simulation/
```

```bash
# Run the earthquake cycle simulation
mpirun -np 1 ../src/3dtri_BP5 < parameter1.txt
```
```
example1/
├── parameter1.txt          # Main simulation parameters
├── triangular_mesh.gts     # Fault mesh geometry
├── var-BP5_h1000.dat       # On-fault friction parameters
├── area-BP5_h1000.dat      # Element area data
├── profdp-BP5_h1000.dat    # Dip profile coordinates
├── profstrk-BP5_h1000.dat  # Strike profile coordinates
├── sub_stiff.sh            # SLURM script for stiffness calculation
└── sub_3dtri.sh            # SLURM script for main simulation
```
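The shipped SLURM scripts are cluster-specific and not reproduced here; as a rough sketch of what a minimal `sub_3dtri.sh` could contain (a hypothetical example, not the provided script; all resource values are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=tribie_bp5
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --time=04:00:00

# Hybrid MPI+OpenMP settings, as in the interactive example below
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=close

mpirun -np $SLURM_NTASKS ../src/3dtri_BP5 < parameter1.txt
```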
From parameter1.txt:
```
-BP5_h1000.dat            # File suffix for input files
result6/                  # Output directory
5 9214 111 1 1 9 74 45    # Nab=5, Nt_all=9214, nprocs=1
31.50                     # Plate velocity: 31.5 mm/yr
500.0                     # Simulation time: 500 years
1.0 183.0 305.0           # Velocity thresholds (mm/s, mm/yr)
```

```bash
# Request compute node
salloc --nodes=1 --ntasks-per-node=1 --cpus-per-task=16 --time=2:00:00
# Set OpenMP threads
export OMP_NUM_THREADS=16
export OMP_PROC_BIND=close
# Run stiffness calculation
cd TriGreen/
mpirun -np 1 ./calc_trigreen
# Run simulation
cd ../my_simulation/
mpirun -np 1 ../src/3dtri_BP5 < parameter1.txt
```

```bash
# Submit stiffness calculation
sbatch sub_stiff.sh
# Wait for completion, then submit main simulation
sbatch sub_3dtri.sh
```

After successful completion, you should see:
```
result6/                      # Output directory
├── area-BP5_h1000.dat        # Updated area information
├── rupture-BP5_h1000.dat     # Rupture data
├── summary-BP5_h1000.dat     # Simulation summary
├── fltst_strk-*.dat          # Strike profiles
├── fltst_dip-*.dat           # Dip profiles
├── timeseries_data_*.h5      # HDF5 time series (if enabled)
├── timeseries_data_*.xdmf    # Paraview visualization files
└── [various monitoring files]
```

Single Process (as configured):
- Stiffness calculation: ~30-60 minutes
- 500-year simulation: ~2-4 hours
- Memory usage: ~8-12 GB
- Disk space: ~1-2 GB for outputs
Scaling Options:
```bash
# For faster execution, modify parameter1.txt:
# Change: 5 9214 111 1 1 9 74 45
# To:     5 9214 111 1 4 9 74 45   (4 processes)

# Then run with:
mpirun -np 4 ./calc_trigreen     # Stiffness
mpirun -np 4 ../src/3dtri_BP5    # Simulation
```

```bash
# Open XDMF files directly in Paraview
paraview result6/timeseries_data_-BP5_h1000.dat.xdmf
```

```bash
# Use provided MATLAB scripts in Mesh/ directory
cd ../Mesh/
matlab -r "ReadInpFile; PlotVariable"
```

"TriGreen file not found":

```bash
# Ensure stiffness files exist
ls trigreen_*.bin
# If missing, re-run stiffness calculation
```

"Parameter file errors":

```bash
# Check parameter1.txt format
# Ensure no extra spaces or missing lines
# Verify nprocs matches number of stiffness files
```

Memory errors:

```bash
# Reduce problem size or request more memory
# For SLURM: --mem-per-cpu=20G
```

Slow performance:

```bash
# Enable OpenMP
export OMP_NUM_THREADS=16
# Use multiple MPI processes
mpirun -np 4 ./3dtri_BP5
```

This example reproduces the SCEC SEAS BP5 benchmark, which models:
- Long-term earthquake cycles (~100-200 year recurrence)
- Interseismic loading at plate velocity
- Coseismic ruptures with dynamic weakening
- Postseismic slip and stress relaxation
The results can be compared with other codes participating in the SCEC SEAS project for validation.
After successfully running example1:
- Modify parameters to explore different scenarios
- Scale up to multiple processes for larger problems
- Analyze results using Paraview or MATLAB
- Create custom problems using your own fault geometries