mpi4py_fft.io package

Submodules

mpi4py_fft.io.generate_xdmf module

mpi4py_fft.io.generate_xdmf.generate_xdmf(h5filename, periodic=True, order='paraview')[source]

Generate XDMF-files

Parameters
  • h5filename (str) – Name of hdf5-file that we want to decorate with xdmf

  • periodic (bool or dim-sequence of bools, optional) – If True for axis i, the data are assumed periodic along that axis. This only affects the computation of the domain size, and only if the domain is given as a 2-tuple of origin+dx.

  • order (str, optional) – Either paraview or visit. For some reason ParaView and VisIt require the mesh to be stored in opposite order in the XDMF-file for 2D slices; make the choice of order here.
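
A minimal usage sketch; 'h5filename.h5' refers to the file written in the HDF5File example further down, and the second call merely illustrates the non-default options:

>>> from mpi4py_fft.io.generate_xdmf import generate_xdmf
>>> generate_xdmf('h5filename.h5')
>>> generate_xdmf('h5filename.h5', periodic=False, order='visit')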

mpi4py_fft.io.h5py_file module

class mpi4py_fft.io.h5py_file.HDF5File(h5name, domain=None, mode='a', **kw)[source]

Bases: FileBase

Class for reading/writing data to HDF5 format

Parameters
  • h5name (str) – Name of hdf5 file to be created.

  • domain (sequence, optional) – An optional spatial mesh or domain to go with the data (see the sketch following this parameter list). Sequence of either

    • 2-tuples, where each 2-tuple contains the (origin, length) of each dimension, e.g., (0, 2*pi).

    • Arrays of coordinates, e.g., np.linspace(0, 2*pi, N). One array per dimension.

  • mode (str, optional) – r, w or a for read, write or append. Default is a.

  • kw (dict, optional) – Additional keyword arguments used when creating the file that stores the data.
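
For the two accepted forms of domain, a sketch for a mesh of shape (15, 16, 17) might look as follows (the filenames are illustrative, and the 2*pi extent matches the example in the parameter description above):

>>> import numpy as np
>>> from mpi4py_fft import HDF5File
>>> N = (15, 16, 17)
>>> f1 = HDF5File('mesh1.h5', domain=((0, 2*np.pi),)*3, mode='w')
>>> f2 = HDF5File('mesh2.h5', domain=[np.linspace(0, 2*np.pi, n) for n in N], mode='w')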

static backend()[source]

Return which backend is used to store data

open(mode='r+')[source]

Open the self.filename file for reading or writing

Parameters

mode (str) – Open file in this mode. Default is ‘r+’.

read(u, name, **kw)[source]

Read field name into distributed array u

Parameters
  • u (array) – The DistArray to read into.

  • name (str) – Name of field to be read.

  • step (int, optional) – Index of field to be read. Default is 0.
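
A read-back sketch, reusing f, T and u from the write() example below; the step index and the final check are illustrative:

>>> import numpy as np
>>> u0 = newDistArray(T, forward_output=False)
>>> f.read(u0, 'u', step=1)
>>> assert np.allclose(u0, u)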

write(step, fields, **kw)[source]

Write snapshot step of fields to HDF5 file

Parameters
  • step (int) – Index of snapshot.

  • fields (dict) – The fields to be dumped to file. Keys are group names, and each value is a list of arrays and/or 2-tuples (see the Example below). A plain array is stored in full, whereas a 2-tuple contains an array with associated global slices, and only that part of the array is stored.

  • as_scalar (boolean, optional) – Whether to store rank > 0 arrays as scalars. Default is False.

Example

>>> from mpi4py import MPI
>>> from mpi4py_fft import PFFT, HDF5File, newDistArray
>>> comm = MPI.COMM_WORLD
>>> T = PFFT(comm, (15, 16, 17))
>>> u = newDistArray(T, forward_output=False, val=1)
>>> v = newDistArray(T, forward_output=False, val=2)
>>> f = HDF5File('h5filename.h5', mode='w')
>>> f.write(0, {'u': [u, (u, [slice(None), 4, slice(None)])],
...             'v': [v, (v, [slice(None), 5, 5])]})
>>> f.write(1, {'u': [u, (u, [slice(None), 4, slice(None)])],
...             'v': [v, (v, [slice(None), 5, 5])]})

This stores data within the two main groups u and v. In the end the HDF5 file will contain the groups:

/u/3D/{0, 1}
/u/2D/slice_4_slice/{0, 1}
/v/3D/{0, 1}
/v/1D/slice_5_5/{0, 1}

Note

The lists of slices used to store only parts of the arrays are views of the global arrays, i.e., they are specified in global index space and not relative to the local (distributed) parts.
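
The resulting layout can be inspected directly with h5py once the file is closed (a sketch, using serial h5py):

>>> import h5py
>>> with h5py.File('h5filename.h5', 'r') as h5f:
...     steps = sorted(h5f['u/2D/slice_4_slice'].keys())  # ['0', '1']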

mpi4py_fft.io.nc_file module

class mpi4py_fft.io.nc_file.NCFile(ncname, domain=None, mode='a', clobber=True, **kw)[source]

Bases: FileBase

Class for reading/writing data to NetCDF4 format

Parameters
  • ncname (str) – Name of netcdf file to be created

  • domain (Sequence, optional) – An optional spatial mesh or domain to go with the data. Sequence of either

    • 2-tuples, where each 2-tuple contains the (origin, length) of each dimension, e.g., (0, 2*pi).

    • Arrays of coordinates, e.g., np.linspace(0, 2*pi, N). One array per dimension.

  • mode (str, optional) – r, w or a for read, write or append. Default is a.

  • clobber (bool, optional) – If True (default), opening a file with mode=’w’ will clobber an existing file with the same name. If False, an exception will be raised if a file with the same name already exists.

  • kw (dict, optional) – Additional keyword arguments used when creating the file that stores the data.

Note

Each class instance creates one unique NetCDF4-file, with one step-counter. It is possible to store multiple fields in each file, but all snapshots of the fields must be taken at the same time. If you want one field stored every 10th timestep and another stored every 20th timestep, then use two different class instances with two different filenames (ncname).
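
For instance, storing one field every 10th timestep and another every 20th could be organized along these lines (a sketch; u and v are assumed to be DistArrays as in the example below, and the filenames are illustrative):

>>> from mpi4py_fft import NCFile
>>> f10 = NCFile('every10.nc', mode='w')
>>> f20 = NCFile('every20.nc', mode='w')
>>> for tstep in range(40):
...     if tstep % 10 == 0:
...         f10.write(tstep//10, {'u': [u]})
...     if tstep % 20 == 0:
...         f20.write(tstep//20, {'v': [v]})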

static backend()[source]

Return which backend is used to store data

open(mode='r+')[source]

Open the self.filename file for reading or writing

Parameters

mode (str) – Open file in this mode. Default is ‘r+’.

read(u, name, **kw)[source]

Read field name into distributed array u

Parameters
  • u (array) – The DistArray to read into.

  • name (str) – Name of field to be read.

  • step (int, optional) – Index of field to be read. Default is 0.
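
Reading back works as for the HDF5 backend; a short sketch, reusing f, T and v from the write() example below:

>>> v0 = newDistArray(T, forward_output=False)
>>> f.read(v0, 'v', step=0)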

write(step, fields, **kw)[source]

Write snapshot step of fields to NetCDF4 file

Parameters
  • step (int) – Index of snapshot.

  • fields (dict) – The fields to be dumped to file. Keys are group names, and each value is a list of arrays and/or 2-tuples (see the Example below). A plain array is stored in full, whereas a 2-tuple contains an array with associated global slices, and only that part of the array is stored.

  • as_scalar (boolean, optional) – Whether to store rank > 0 arrays as scalars. Default is False.

Example

>>> from mpi4py import MPI
>>> from mpi4py_fft import PFFT, NCFile, newDistArray
>>> comm = MPI.COMM_WORLD
>>> T = PFFT(comm, (15, 16, 17))
>>> u = newDistArray(T, forward_output=False, val=1)
>>> v = newDistArray(T, forward_output=False, val=2)
>>> f = NCFile('ncfilename.nc', mode='w')
>>> f.write(0, {'u': [u, (u, [slice(None), 4, slice(None)])],
...             'v': [v, (v, [slice(None), 5, 5])]})
>>> f.write(1, {'u': [u, (u, [slice(None), 4, slice(None)])],
...             'v': [v, (v, [slice(None), 5, 5])]})

This stores the following datasets to the file ncfilename.nc. Running ‘ncdump -h ncfilename.nc’ in a terminal gives:

netcdf ncfilename {
dimensions:
        time = UNLIMITED ; // (2 currently)
        x = 15 ;
        y = 16 ;
        z = 17 ;
variables:
        double time(time) ;
        double x(x) ;
        double y(y) ;
        double z(z) ;
        double u(time, x, y, z) ;
        double u_slice_4_slice(time, x, z) ;
        double v(time, x, y, z) ;
        double v_slice_5_5(time, x) ;
}
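
The same check can be done programmatically with the netCDF4 package (a sketch):

>>> from netCDF4 import Dataset
>>> with Dataset('ncfilename.nc', 'r') as nc:
...     names = list(nc.variables)  # ['time', 'x', 'y', 'z', 'u', 'u_slice_4_slice', 'v', 'v_slice_5_5']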

mpi4py_fft.io.file_base module

class mpi4py_fft.io.file_base.FileBase(filename=None, domain=None)[source]

Bases: object

Base class for reading/writing distributed arrays

Parameters
  • filename (str, optional) – Name of backend file used to store data

  • domain (sequence, optional) – An optional spatial mesh or domain to go with the data. Sequence of either

    • 2-tuples, where each 2-tuple contains the (origin, length) of each dimension, e.g., (0, 2*pi).

    • Arrays of coordinates, e.g., np.linspace(0, 2*pi, N). One array per dimension.

static backend()[source]

Return which backend is used to store data

close()[source]

Close the self.filename file

open(mode='r+')[source]

Open the self.filename file for reading or writing

Parameters

mode (str) – Open file in this mode. Default is ‘r+’.

read(u, name, **kw)[source]

Read field name into distributed array u

Parameters
  • u (array) – The DistArray to read into.

  • name (str) – Name of field to be read.

  • step (int, optional) – Index of field to be read. Default is 0.

write(step, fields, **kw)[source]

Write snapshot step of fields to file

Parameters
  • step (int) – Index of snapshot.

  • fields (dict) – The fields to be dumped to file. Keys are group names, and each value is a list of arrays and/or 2-tuples. A plain array is stored in full, whereas a 2-tuple contains an array with associated global slices, and only that part of the array is stored.

  • as_scalar (boolean, optional) – Whether to store rank > 0 arrays as scalars. Default is False.
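
As an illustration of as_scalar, a vector-valued array could be written component-wise; a sketch that reuses T and the HDF5File instance f from the example further up, and assumes newDistArray accepts a rank argument for creating rank > 0 arrays:

>>> w = newDistArray(T, forward_output=False, val=3, rank=1)  # rank-1 (vector) array; rank kwarg assumed
>>> f.write(2, {'w': [w]}, as_scalar=True)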

Module contents