I am simulating sound radiation from a vibrating structure, discretised with around 120,000 elements, using the Numba-only Docker image. To accelerate the computation and avoid memory issues, I switched to the “FMM” assembler. However, the GMRES solver converges very slowly: after a full day of computation it had reached around 1,600 iterations, at which point I stopped the process.
My machine has 32 GB of RAM and an Intel i9-13900H CPU (2.6 GHz base clock). Is there anything else I should be doing when using the FMM assembler to improve GMRES convergence? Could the issue be related to preconditioning?
The FMM assembler uses the fast multipole method to evaluate the matrix-vector products inside each GMRES iteration, which requires far less memory than the dense assembler. However, the choice of assembler mainly affects the cost of each iteration; it should not significantly influence how many iterations GMRES needs to converge.
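For reference, this is roughly how the assembler is selected in bempp-cl; the grid, space, and wavenumber below are placeholders, and the `assembler` keyword is based on the bempp-cl operator interface (check the version you are running):

```python
import bempp.api

# Placeholder grid and space; substitute your imported mesh and spaces.
grid = bempp.api.shapes.regular_sphere(4)
space = bempp.api.function_space(grid, "P", 1)
k = 5.0  # placeholder wavenumber

# FMM assembly never stores the full matrix; the resulting discrete
# operator only provides matrix-vector products, which is exactly what
# GMRES needs in each iteration.
slp = bempp.api.operators.boundary.helmholtz.single_layer(
    space, space, space, k, assembler="fmm"
)
discrete_slp = slp.weak_form()  # matrix-free operator for iterative solvers
```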
To improve the convergence of iterative solvers such as GMRES you generally need a preconditioner. The right preconditioning strategy depends on the BEM formulation you are using. Several of the tutorials on the Bempp website use preconditioning (for example mass-matrix, OSRC, or Calderón preconditioning) and may give you ideas.
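As a concrete starting point, here is a minimal sketch of mass-matrix (strong form) preconditioning via `bempp.api.linalg.gmres`. It assumes a simple Helmholtz single-layer operator on a placeholder sphere mesh with a placeholder right-hand side, so it will differ from your actual radiation formulation; treat it as an illustration of the `use_strong_form` flag, not a recipe for your problem:

```python
import numpy as np
import bempp.api

grid = bempp.api.shapes.regular_sphere(4)        # placeholder mesh
space = bempp.api.function_space(grid, "P", 1)
k = 5.0                                          # placeholder wavenumber

slp = bempp.api.operators.boundary.helmholtz.single_layer(
    space, space, space, k, assembler="fmm"
)

# Placeholder boundary data; replace with your actual right-hand side.
@bempp.api.complex_callable
def rhs_data(x, n, domain_index, result):
    result[0] = np.exp(1j * k * x[0])

rhs = bempp.api.GridFunction(space, fun=rhs_data)

# use_strong_form=True makes GMRES iterate on the mass-matrix
# preconditioned (strong form) system instead of the raw weak form.
sol, info, it_count = bempp.api.linalg.gmres(
    slp, rhs,
    use_strong_form=True,
    tol=1e-5,
    return_iteration_count=True,
)
print("GMRES info:", info, "- iterations:", it_count)
```

For exterior Helmholtz problems, the tutorials also demonstrate OSRC and Calderón preconditioning, which can be considerably more effective than mass-matrix preconditioning alone, depending on the formulation.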
Another way of improving GMRES convergence is to set the restart parameter manually. A higher value keeps more Krylov vectors between restarts, which typically improves convergence at the expense of higher memory consumption.
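For example, reusing `slp` and `rhs` from the sketch above, and assuming your Bempp version forwards the `restart` and `maxiter` keywords to SciPy's GMRES (check the signature of `bempp.api.linalg.gmres`):

```python
# A larger restart keeps more Krylov vectors between restarts (SciPy's
# default restart length is only 20), at the cost of extra memory.
sol, info, it_count = bempp.api.linalg.gmres(
    slp, rhs,
    use_strong_form=True,
    restart=500,      # hypothetical value; tune for your memory budget
    maxiter=3000,
    tol=1e-5,
    return_iteration_count=True,
)
```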