add readme description
Delcior committed Aug 29, 2023
1 parent 832f96b commit da043b4
Showing 2 changed files with 21 additions and 7 deletions.
26 changes: 21 additions & 5 deletions README.md
@@ -24,15 +24,23 @@
of transport equations with the MPDATA numerical scheme in a
hybrid parallelisation model with both multi-threading and MPI distributed memory communication.
PyMPDATA-MPI adapts the API of PyMPDATA, offering domain decomposition logic.

## Hello world examples

In a minimal setup, PyMPDATA-MPI can be used to solve the following transport equation:
$$\partial_t (G \psi) + \nabla \cdot (G u \psi) = 0$$
in an environment with multiple nodes.
Every node (process) is responsible for computing its part of the decomposed domain.
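
Below is a minimal single-node sketch of such a setup written against PyMPDATA's
public API, which PyMPDATA-MPI mirrors; the grid size, initial condition and
Courant numbers are arbitrary illustrative choices, and in a distributed run the
`Periodic` boundary conditions would be replaced with the MPI-aware classes
linked in the scenario descriptions below:

```python
import numpy as np
from PyMPDATA import Options, ScalarField, Solver, Stepper, VectorField
from PyMPDATA.boundary_conditions import Periodic

options = Options(n_iters=1)  # n_iters=1: plain first-order upwind
nx, ny = 32, 32
halo = options.n_halo
bcs = (Periodic(), Periodic())  # MPIPeriodic/MPIPolar in the MPI variant

# Gaussian blob as the initial advectee (psi)
xi, yi = np.indices((nx, ny), dtype=float)
advectee = ScalarField(
    data=np.exp(-((xi - nx / 2) ** 2 + (yi - ny / 2) ** 2) / (2 * (nx / 10) ** 2)),
    halo=halo,
    boundary_conditions=bcs,
)
# constant-in-time advector given as Courant numbers on a staggered grid
advector = VectorField(
    data=(np.full((nx + 1, ny), 0.5), np.full((nx, ny + 1), 0.25)),
    halo=halo,
    boundary_conditions=bcs,
)
solver = Solver(
    stepper=Stepper(options=options, grid=(nx, ny)),
    advectee=advectee,
    advector=advector,
)
solver.advance(n_steps=100)
```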

### Spherical scenario (2D)

In spherical geometry, the $$G$$ factor represents the Jacobian of the coordinate transformation.
In this example (based on a test case from [Williamson & Rasch 1989](https://doi.org/10.1175/1520-0493(1989)117<0102:TDSLTW>2.0.CO;2)),
domain decomposition is done by cutting the sphere along meridians.
The inner dimension uses the [`MPIPolar`](https://open-atmos.github.io/PyMPDATA-MPI/mpi_polar.html)
boundary condition class, while the outer dimension uses
[`MPIPeriodic`](https://open-atmos.github.io/PyMPDATA-MPI/mpi_periodic.html).
Note that the spherical animations below depict simulations without MPDATA corrective iterations,
i.e., only the plain first-order upwind scheme is used (FIXME).
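
For illustration, in a longitude-latitude discretisation the Jacobian is
proportional to the cosine of latitude; a hypothetical sketch of such a
$$G$$ field (the grid layout and normalisation here are assumptions, not the
scenario's actual code) could read:

```python
import numpy as np

# hypothetical longitude-latitude grid: nlon along the outer (MPI-decomposed)
# dimension, nlat along the inner (multi-threaded) one
nlon, nlat = 64, 32
dlam, dphi = 2 * np.pi / nlon, np.pi / nlat
phi = -np.pi / 2 + dphi * (np.arange(nlat) + 0.5)  # cell-centre latitudes

# Jacobian of the spherical coordinate transformation on a unit sphere:
# cell areas proportional to cos(latitude)
g_factor = np.broadcast_to(np.cos(phi) * dlam * dphi, (nlon, nlat))
```

A `ScalarField` wrapping such data would then be passed to the solver through
its `g_factor` argument.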

### 1 worker
<p align="middle">
@@ -45,6 +53,14 @@ Every worker is responsible for computing its part of the decomposed domain as depicted below:
<img src="https://github.com/open-atmos/PyMPDATA-MPI/releases/download/latest-generated-plots/n_iters.1_rank_1_size_2_c_field_.0.5.0.25.-SphericalScenario-anim.gif" width="49%" />
</p>

### Cartesian scenario (2D)

In the Cartesian example below (based on a test case from [Arabas et al. 2014](https://doi.org/10.3233/SPR-140379)),
a constant advector field $$u$$ is used (and $$G=1$$).
MPI (Message Passing Interface) is used
for handling data transfers and synchronisation in the outer dimension,
while multi-threading (using, e.g., OpenMP via Numba) is used in the inner dimension.
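
The slab decomposition along the outer dimension might be sketched on top of
mpi4py as follows (the helper below is illustrative and not PyMPDATA-MPI's
actual implementation):

```python
from mpi4py import MPI

def outer_slice(nx: int, rank: int, size: int) -> slice:
    """Slice of the outer dimension owned by the given MPI rank."""
    return slice(rank * nx // size, (rank + 1) * nx // size)

comm = MPI.COMM_WORLD
local = outer_slice(nx=64, rank=comm.Get_rank(), size=comm.Get_size())
# each rank advects only its local slab; data at slab edges is exchanged
# through the MPI boundary conditions at every time step
```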

### 1 worker
<p align="middle">
<img src="https://github.com/open-atmos/PyMPDATA-MPI/releases/download/latest-generated-plots/n_iters.3_rank_0_size_1_c_field_.0.5.0.25.-CartesianScenario-anim.gif" width="49%" />
2 changes: 0 additions & 2 deletions scenarios/spherical.py
@@ -131,13 +131,11 @@ def __init__(
# advector components are averaged from the staggered grid to cell centres
# and scaled by 1/G; their maxima must satisfy the Courant condition |C| < 1
Cx_max = np.amax(
    np.abs((advector_x[1:, :] + advector_x[:-1, :]) / 2 / g_factor_z)
)
assert Cx_max < 1

Cy_max = np.amax(
    np.abs((advector_y[:, 1:] + advector_y[:, :-1]) / 2 / g_factor_z)
)
assert Cy_max < 1
# TODO #81: </move out>
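
For context, the assertions above enforce the dimensionally split Courant
number stability condition, with the staggered advector components averaged
to cell centres and divided by the $$G$$ factor:

$$\max\left|C_x\right| < 1, \qquad \max\left|C_y\right| < 1$$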

