For postprocessing one generally loads data sequentially on smaller machines, and in such scenarios big simulations would not fit in memory. It would be nice to have the Operator class support out-of-core FFT, using dask arrays for KX, KY, etc. and dask.array.fft, either directly or via pyfftw's interface.
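A rough sketch of the idea (the sizes, chunking, and the choice of da.fft.fft here are only illustrative):

```python
# Rough sketch: lazy wavenumber grids and an out-of-core FFT with dask
import numpy as np
import dask.array as da

nx, ny = 8192, 8192
kx = da.from_array(2 * np.pi * np.fft.fftfreq(nx), chunks=nx // 8)
ky = da.from_array(2 * np.pi * np.fft.fftfreq(ny), chunks=ny // 8)
KX, KY = da.meshgrid(kx, ky, indexing="ij")  # chunked instead of plain numpy

field = da.random.random((nx, ny), chunks=(nx // 8, ny))  # chunked along axis 0
field_hat = da.fft.fft(field, axis=1)   # the transformed axis must be a single chunk
dx_field_hat = 1j * KX * field_hat      # e.g. a lazy spectral derivative
print(abs(dx_field_hat).max().compute())  # evaluated chunk by chunk
```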
Good idea. Not simple to implement though... First, how does the user tell that she/he wants to use dask arrays? Then, how do we make the Operator classes compatible with dask arrays?
Note that I think this is a fundamental issue in Python computing; see for example https://www.numpy.org/neps/
First question: in my opinion the user should simply specify OperatorsPseudoSpectral2D(..., type_fft="with_daskfft") (or "with_cufft") and then it should be smooth sailing.
Interesting NEP, and I have to read it more carefully. If I understood right, this is some form of multiple dispatch mechanism, yes? Starting from numpy>=1.16 there seems to be some support for the __array_function__ mechanism with NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1. As described in the NEP, we can first see whether we can substitute numpy arrays with cupy's device arrays, since we already support cuFFT.
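If I understand the mechanism correctly, it would let code like the following work unchanged on dask arrays (a small illustrative check, not fluidfft code):

```python
# numpy >= 1.16 with NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1 (default from 1.17):
# numpy functions dispatch on the argument type via __array_function__
import numpy as np
import dask.array as da

x = da.ones((1000, 1000), chunks=(100, 100))
y = np.mean(x)                    # dispatched to dask, stays lazy
print(type(y), float(y.compute()))
```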
Thinking about it, it should be possible to have Operator classes compatible with dask and cupy arrays.
Of course, we need to have FFT classes for these arrays (with_daskfft and with_cufft).
Since Pythran is not compatible with these arrays, we would need to tell Transonic to use the non-compiled versions of the boosted and jitted functions. It is now possible with the TRANSONIC_NO_REPLACE environment variable (which could be adapted for the use case motivating this issue), but we would need a better mechanism to be able to disable the replacement only for some functions.
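Today this is all-or-nothing, roughly:

```python
# current workaround: disable replacement by compiled extensions globally,
# before importing any module that uses transonic
import os
os.environ["TRANSONIC_NO_REPLACE"] = "1"

from fluidfft.fft2d.operators import OperatorsPseudoSpectral2D  # pure-Python paths
```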
The same applies when Transonic becomes able to use Numba; Numba could be used with, for example, Cupy arrays.
Then, we also use the array creation API (np.meshgrid, np.ascontiguousarray, np.arange, np.random.random, etc.) in the Operator classes. I'm wondering whether it is possible to have code compatible with the different array types (numpy, dask, cupy, etc.). We also have to think about which arrays have to be Cupy or Dask arrays and which arrays should always be simple Numpy arrays.
We also have to deal with empty_aligned, which creates a Numpy array.
To distill the ideas, there are three tasks to achieve this:

1. Support multiple dispatch for transonic @boost and @jit functions/methods: pure Python for dask arrays, Numba for cupy arrays, Pythran for numpy arrays.
2. Dealing with Cython modules.
3. A thin layer for array creation, using a thin module like this: flexible but cumbersome; it could be written in a smarter way using decorators and/or templates. A possibility would be:
```python
# module: fluidfft.array

def arange(start, stop, step, dtype):
    # check for an env variable, say FLUIDFFT_ARRAY_API, and pass the
    # arguments to the appropriate function
    ...

# populate the module with unimplemented functions and modules from numpy
# as it is, by inspecting locals()
```
Then we can substitute import numpy as np with import fluidfft.array as np as a drop-in replacement in many modules of other packages.
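A slightly more concrete sketch of such a wrapper (the FLUIDFFT_ARRAY_API variable and the module-level __getattr__ fallback are just one possible design, nothing that exists today):

```python
# hypothetical fluidfft/array.py
import os
import numpy

_backends = {"numpy": numpy}
try:
    import dask.array as _dask_array
    _backends["dask"] = _dask_array
except ImportError:
    pass

# select the backend once, from an environment variable
_xp = _backends[os.environ.get("FLUIDFFT_ARRAY_API", "numpy")]

def arange(*args, **kwargs):
    # explicit wrapper for one creation function
    return _xp.arange(*args, **kwargs)

def __getattr__(name):
    # Python >= 3.7: fall back to the selected backend for everything else
    return getattr(_xp, name)
```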
Note that it would be nice to be able to have, in the same process, instances of the Operator classes for different array types.
I think this issue is really dealing with long term numerical Python issues.
How can we write elegant numerical Python and be able to support (1) different accelerators and (2) different array types?
Transonic could be an element of the solution. I think our Operator classes are good examples. I tend to be against quick and dirty solutions for that; it is too general a problem to be handled just at our level.
What about the idea of first writing a blog post on the subject using the example of fluidfft.fft3d.operators, where we explicitly describe the problems and ideas about how they could be solved? Then we can get feedback and good advice from people who know these problems well. numpy-discussion@python.org could be a good place for such questions.
Note that I don't think we need to deal with Cython for this problem. The with_cupy and with_dask modules should be written in Python (by the way, like another module that needs to be added, with_mpi4pyfft).
It is a bleeding-edge enhancement, but I don't think it has to remain a "long term" issue. NEP-18 is already released on a provisional basis, and it is available for us to try. Sure, we can ask for opinions, but I don't see how we can invoke array creation without writing a wrapper for it: multiple dispatch is not possible since the inputs are just some "integers" representing the shape of the array.
One option is to have interoperability like Bohrium, which claims zero-copy interoperability between numpy and pycuda / pyopencl.
Another alternative is to use Dask arrays alone. There is an open issue to see how Dask interacts with Cupy. Dask arrays can be created from numpy arrays and vice versa. I can't help but wonder, though: if we are going to start from a numpy array, it would take away all the advantage of out-of-core computation with dask.
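For reference, the round trip itself is trivial; a minimal example:

```python
import numpy as np
import dask.array as da

a_np = np.arange(12).reshape(3, 4)
a_da = da.from_array(a_np, chunks=(3, 2))  # wrap an existing in-memory numpy array
a_back = a_da.compute()                    # materialize as a numpy array again
assert np.array_equal(a_np, a_back)
```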
I agree Cython would only be used with numpy arrays, so it should be fine. with_mpi4pyfft is a separate issue, and should be straightforward to implement - using just Python I suppose.
> Sure, we can ask for opinions, but I don't see how we can invoke array creation without writing a wrapper for it. Multiple dispatch is not possible since the inputs are just some "integers" representing the shape of the array.
I agree that we'll have to have a wrapper for the array creation functions. But what I mean is that many other projects will face the same issue, so writing our own special wrapper fluidfft.array from scratch seems questionable. I don't see what is specific to fluidfft here, so I'm wondering whether such a wrapper should be in fluidfft at all. So where should it be?
Then there is the question of Pythran (and other accelerators), which understands Numpy array creation functions and nothing else, so with the current version of Transonic, one won't be able to write:
```python
import fluidfft.array as np
from transonic import jit

@jit
def func(n):
    return sum(np.arange(n) ** 2)
```
Another issue: we need to be able to use different types of arrays in the same code, and it seems to me that imports like import mysuperarray as np are not expressive enough.
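A more expressive alternative could be to pass the array namespace around explicitly, in the style of cupy's "xp" idiom (make_wavenumbers is just a made-up helper):

```python
import numpy
import dask.array

def make_wavenumbers(nx, xp=numpy):
    # xp is whichever array namespace the caller wants: numpy, dask.array, cupy, ...
    return xp.arange(nx)

k_np = make_wavenumbers(8)              # plain numpy array
k_da = make_wavenumbers(8, dask.array)  # lazy dask array, in the same process
```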
For fluidfft, it would be nice to be able to do this (in the same process):
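Something like this (a hypothetical sketch reusing the type_fft keyword suggested above; the sizes are made up and with_daskfft does not exist yet):

```python
# hypothetical: two operator instances with different array backends,
# living side by side in the same process
from fluidfft.fft2d.operators import OperatorsPseudoSpectral2D

oper_numpy = OperatorsPseudoSpectral2D(512, 512, 2.0, 2.0, type_fft="with_fftw2d")
oper_dask = OperatorsPseudoSpectral2D(512, 512, 2.0, 2.0, type_fft="with_daskfft")
```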
And to keep a clean, maintainable code defining OperatorsPseudoSpectral2D. For me it is a really interesting but very complicated problem, and I doubt that there are good general solutions for it today.
An update: using dask.array will soon be easier. Recently, dask array mutation (i.e. __setitem__) became supported, and the original code just works. See github.com/dask/dask/issues/2000 for some history, and https://github.com/dask/dask/issues/7392#issuecomment-801832399 for the implementation. This means we can remove hacks like this:
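The hack itself is not reproduced here; as a minimal check that item assignment on dask arrays now just works (in recent dask versions):

```python
import numpy as np
import dask.array as da

a = da.zeros((8, 8), chunks=(4, 4))
a[0, :] = np.arange(8)   # in-place assignment, supported since dask 2021.3.0 or so
print(a.compute()[0])
```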