Dolfinx + BEMpp in parallel

Hi all,
I would like to use DOLFINx + Bempp to solve a magnetostatic problem whose boundary condition has the form phi1_boundary = BEM * phi0_boundary, i.e. one boundary trace is obtained by applying a boundary operator to another. I create a distributed mesh with DOLFINx and would like to use Bempp to handle the BEM side of the coupled FEM/BEM problem.
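In operator notation, with K the Laplace double-layer operator that the MWE below assembles, the condition I want to impose is (my notation):

$$\phi_1\big|_{\Gamma} = K\,\phi_0\big|_{\Gamma}$$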
I’m running into an issue that can be reproduced with this MWE:

from mpi4py import MPI

import bempp_cl.api
from bempp_cl.api.external import fenicsx
from dolfinx import mesh

L = 6.0
mesh_size = 0.4
N = int(round(L / mesh_size))

comm = MPI.COMM_WORLD

# Create a DOLFINx mesh
domain = mesh.create_box(
    comm=comm,
    points=[[0.0, 0.0, 0.0], [L, L, L]],
    n=[N, N, N],
    cell_type=mesh.CellType.tetrahedron,
)

tdim = domain.topology.dim
fdim = tdim - 1
domain.topology.create_connectivity(fdim, tdim)
domain.topology.create_connectivity(tdim, 0)
domain.topology.create_connectivity(fdim, 0)

# Print number of boundary facets and number of boundary nodes per rank
boundary_facets = mesh.exterior_facet_indices(domain.topology)
# print(f"Rank {domain.comm.rank}: Number of boundary facets = {len(boundary_facets)}", flush=True)

local_has_zero = int(len(boundary_facets) == 0)  # 1 if this rank owns no boundary facets
all_has_zero = comm.allgather(local_has_zero)  # every rank receives the 0/1 flags of all ranks

if comm.rank == 0:
    ranks_with_zero_facets = [rank for rank, flag in enumerate(all_has_zero) if flag == 1]
    print(f"Ranks with 0 boundary facets = {ranks_with_zero_facets}", flush=True)

# Create the boundary mesh grid for Bempp from the DOLFINx mesh
bempp_grid, bempp_boundary_dofs = fenicsx.boundary_grid_from_fenics_mesh(domain)
# print("Rank {}: Bempp number of elements on boundary {}".format(comm.rank, bempp_grid.number_of_elements), flush=True)
# print("Rank {}: Bempp number of vertices on boundary {}".format(comm.rank, bempp_grid.number_of_vertices), flush=True)

bempp_space = bempp_cl.api.function_space(bempp_grid, "P", 1)
bempp_operator = bempp_cl.api.operators.boundary.laplace.double_layer(
    bempp_space, bempp_space, bempp_space
)
print(f"Rank {comm.rank}: Created Bempp grid with {bempp_grid.number_of_vertices} vertices and {bempp_grid.number_of_elements} elements.", flush=True)

The code runs fine in parallel until one or more ranks end up with no nodes on the boundary. When that happens (for this particular mesh, e.g. when requesting 50 ranks), it fails with:

File "XX/miniconda3/envs/fenicsx/lib/python3.12/site-packages/bempp_cl/api/grid/grid.py", line 57, in __init__
    self._enumerate_edges()
File "XX/miniconda3/envs/fenicsx/lib/python3.12/site-packages/bempp_cl/api/grid/grid.py", line 508, in _enumerate_edges
    self._edges, self._element_edges = _numba_enumerate_edges(self._elements, edge_tuple_to_index)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IndexError: getitem out of range
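The crash seems to come from _numba_enumerate_edges being handed an empty element array on the ranks that own no boundary facets. A naive guard like the sketch below (reusing the MWE's variables) would avoid calling into Bempp on those ranks, but I doubt it is the intended usage — this is a guess on my part, not a fix:

# Sketch of a naive guard: only build the Bempp grid/space on ranks that
# actually own boundary facets. I do not know whether this is compatible
# with Bempp's distributed assembly -- treat it as an assumption.
if len(boundary_facets) > 0:
    bempp_grid, bempp_boundary_dofs = fenicsx.boundary_grid_from_fenics_mesh(domain)
    bempp_space = bempp_cl.api.function_space(bempp_grid, "P", 1)
else:
    bempp_grid, bempp_space = None, None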

A few related questions:

  • From what I understood from the documentation, Bempp handles the distributed matrix-vector multiplication in its backend.
  • If so, it is unclear to me how to construct the vector phi0_boundary: is it a distributed vector where each rank holds its own part, or do I need to gather everything into a global one? (See the sketch below for my current guess.)
  • I am also not sure how to handle that vector on ranks that have no nodes on the boundary.
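To make the second question concrete, this is how I would naively build phi0_boundary as a continuation of the MWE. The names V and phi0 are placeholders, and I am only assuming (from its name) that bempp_boundary_dofs maps Bempp grid vertices to the corresponding local DOLFINx dofs:

from dolfinx import fem

# Placeholder volume potential on the DOLFINx mesh (P1, matching the Bempp space)
V = fem.functionspace(domain, ("Lagrange", 1))
phi0 = fem.Function(V)
phi0.interpolate(lambda x: x[0])  # dummy data for illustration

# My guess: restrict the local dof vector to the boundary via the map
# returned by boundary_grid_from_fenics_mesh, wrap it as a GridFunction,
# and apply the double-layer operator. Unclear to me: ghost dofs, and what
# this should look like on ranks with an empty boundary.
phi0_boundary = bempp_cl.api.GridFunction(
    bempp_space, coefficients=phi0.x.array[bempp_boundary_dofs]
)
phi1_boundary = bempp_operator * phi0_boundary  # phi1_boundary = BEM * phi0_boundary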

Many thanks, and thank you for your work!