Hi everyone. Thanks a lot for all the help so far.

I’m (still) doing Helmholtz scattering on a rigid object. I’m using somewhat large grids, and my simulations take a long time to compute.

```
import time

import bempp.api
import numpy as np

grid = bempp.api.import_grid('some_grid.stl')
k = 100  # wave number

# Piecewise linear elements needed due to the hypersingular operator
space = bempp.api.function_space(grid, "P", 1)

I = bempp.api.operators.boundary.sparse.identity(space, space, space)
# double layer operator
M = bempp.api.operators.boundary.helmholtz.double_layer(
    space, space, space, k, assembler="fmm")
E = bempp.api.operators.boundary.helmholtz.hypersingular(
    space, space, space, k, assembler="fmm")
lhs = 0.5 * I - M - 1j / k * E

# point source far away from the boundary
r = 2
theta = 45 * np.pi / 180
phi = 20 * np.pi / 180
r_s = r * np.array([np.cos(theta) * np.cos(phi),
                    np.sin(theta) * np.cos(phi),
                    np.sin(phi)])

@bempp.api.complex_callable
def f(x, n, domain_index, result):
    dist = np.sqrt((r_s[0] - x[0])**2 + (r_s[1] - x[1])**2 + (r_s[2] - x[2])**2)
    result[0] = np.exp(1j * k * dist) / dist

rhs = bempp.api.GridFunction(space, fun=f)

t1 = time.time()
print('Solving equation system')
sol, info = bempp.api.linalg.gmres(lhs, rhs, tol=1E-4)
print('solve time')
print(time.time() - t1)
print('Solver returned with')
print(info)
print('\n\n')
```

I’m currently running the above on a cloud machine with 4 vCPUs and 32 GB of RAM.

For a grid with ~20,000 elements the system logged a solve time of

```
solve time
6649.893636465073
Solver returned with
0
```

So a little under two hours.
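One thing I have been trying in order to narrow this down is separating the cost per matrix-vector product from the number of GMRES iterations, since "each FMM matvec is slow" and "GMRES needs many iterations" look identical in a single total solve time. Below is a generic sketch of a matvec-counting wrapper on a toy NumPy matrix; the class name and the toy loop are my own illustration, not Bempp API, but I believe the same wrapping idea could be applied to the discrete operator that GMRES actually multiplies with (e.g. the object returned by `lhs.weak_form()`, assuming it exposes a matvec-like call):

```python
# Sketch: wrap a matvec callable to count calls and accumulate wall time.
# This separates "each matvec is slow" from "the solver iterates a lot".
# The operator here is a plain NumPy matrix purely for illustration.
import time
import numpy as np

class CountingOperator:
    """Wraps any matvec-like callable and records call count and time."""
    def __init__(self, matvec):
        self._matvec = matvec
        self.calls = 0
        self.seconds = 0.0

    def matvec(self, x):
        t0 = time.perf_counter()
        y = self._matvec(x)
        self.seconds += time.perf_counter() - t0
        self.calls += 1
        return y

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
op = CountingOperator(lambda x: A @ x)

# Stand-in for an iterative solver driving the operator
x = rng.standard_normal(200)
for _ in range(10):
    x = op.matvec(x)
    x /= np.linalg.norm(x)

print(op.calls)               # number of matvecs performed
print(op.seconds / op.calls)  # average seconds per matvec
```

If the average time per matvec under the FMM assembler is much larger than under dense assembly at this grid size, that would point at FMM overhead rather than at the solver.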

Now if I instead don’t specify the assembler, like

```
M = bempp.api.operators.boundary.helmholtz.double_layer(space, space, space, k)
E = bempp.api.operators.boundary.helmholtz.hypersingular(space, space, space, k)
```

then the program finishes in about half the time:

```
solve time
3013.474852323532
Solver returned with
0
```

Which leads me to believe that either I’m using the FMM incorrectly, or that its overhead makes it worthwhile only for much larger grids, i.e. the break-even point with dense assembly lies well above 20,000 elements (the first option being much more likely).

In theory the FMM should scale as O(N log N) instead of O(N²), right?
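My rough understanding is that asymptotic scaling alone does not say where FMM starts to win: it beats dense assembly only once c_fmm · N · log N drops below c_dense · N², and the FMM constant is typically much larger. A tiny back-of-envelope sketch, with constants that are entirely made up for illustration (not measured Bempp values):

```python
# Back-of-envelope crossover: O(N log N) with a large constant only beats
# O(N^2) beyond some problem size. The constants below are invented purely
# to illustrate the shape of the comparison.
import math

c_dense = 1.0    # assumed cost per unit of dense work
c_fmm = 2000.0   # assumed (much larger) FMM constant

def dense_cost(n):
    return c_dense * n**2

def fmm_cost(n):
    return c_fmm * n * math.log(n)

# Scan powers of two for the first N where FMM is cheaper
n = 2
while fmm_cost(n) >= dense_cost(n):
    n *= 2
print(n)  # with these invented constants, crossover is in the tens of thousands
```

So even a correct FMM setup could plausibly lose to dense assembly at ~20,000 elements if its constant is large enough; the question is whether my constant really is that large or whether I have misconfigured something.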

I don’t know the significance of this, but if I interrupt the computation, the last function call is almost always

```
File "/home/rasmus/anaconda3/lib/python3.8/site-packages/Bempp_cl-0.2.2-py3.8.egg/bempp/api/fmm/exafmm.py", line 160, in evaluate
result = self._module.evaluate(self._tree, self._fmm)
```

Am I doing something stupid in my code? Even if there is no solution, I would still like to know when the FMM is useful and when it is not, so I can write better simulations in the future.

Best regards

Rasmus