diff --git a/README.md b/README.md
index 30d12ca..3c05631 100644
--- a/README.md
+++ b/README.md
@@ -24,15 +24,23 @@ of transport equations with the MPDATA numerical scheme in a hybrid
 parallelisation model with both multi-threading and MPI distributed memory communication.
 PyMPDATA-MPI adapts to API of PyMPDATA offering domain decomposition logic.
 
-## Hello world example
+## Hello world examples
 
 In a minimal setup, PyMPDATA-MPI can be used to solve the following transport equation:
 $$\partial_t (G \psi) + \nabla \cdot (Gu \psi)= 0$$
 in an environment with multiple nodes.
-In two dimensions (x,y), MPI (Message Passing Interface) is used
-  for handling data transfers and synchronisation in the outer dimension,
-  while multi-threading (using, e.g., OpenMP via Numba) is used in the inner dimension.
-Every worker is responsible for computing its part of the decomposed domain as depicted below:
+Every node (process) is responsible for computing its part of the decomposed domain.
+
+### Spherical scenario (2D)
+
+In spherical geometry, the $$G$$ factor represents the Jacobian of the coordinate transformation.
+In this example (based on a test case from [Williamson & Rasch 1989](https://doi.org/10.1175/1520-0493(1989)117<0102:TDSLTW>2.0.CO;2)),
+  domain decomposition is done by cutting the sphere along meridians.
+The inner dimension uses the [`MPIPolar`](https://open-atmos.github.io/PyMPDATA-MPI/mpi_polar.html)
+  boundary condition class, while the outer dimension uses
+  [`MPIPeriodic`](https://open-atmos.github.io/PyMPDATA-MPI/mpi_periodic.html).
+Note that the spherical animations below depict simulations without MPDATA corrective iterations,
+  i.e., only the plain first-order upwind scheme is used (FIXME).
 
 ### 1 worker
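For reference, the setup described in the README hunk above can be exercised on a single node with the plain PyMPDATA API that PyMPDATA-MPI builds upon. The sketch below is illustrative only: grid size, Courant numbers, initial condition and step count are arbitrary, and the `Periodic` boundary conditions stand in for the `MPIPeriodic`/`MPIPolar` classes linked in the README text, whose constructor arguments are not shown in this diff.

```python
# Minimal single-node sketch of the advection setup from the README hunk above,
# using PyMPDATA's public API; the MPI-aware boundary conditions of PyMPDATA-MPI
# are not used here (G=1, constant advector, arbitrary illustrative values).
import numpy as np
from PyMPDATA import Options, ScalarField, Solver, Stepper, VectorField
from PyMPDATA.boundary_conditions import Periodic

options = Options(n_iters=2)  # n_iters=1 would reduce MPDATA to plain upwind
grid = (24, 24)               # (outer, inner) grid size: illustrative values
halo = options.n_halo
bcs = (Periodic(), Periodic())  # MPIPeriodic/MPIPolar would replace these under MPI

x, y = np.indices(grid)
initial = np.exp(-((x - 12) ** 2 + (y - 12) ** 2) / 20)  # Gaussian blob as psi(t=0)

solver = Solver(
    stepper=Stepper(options=options, grid=grid),
    advectee=ScalarField(
        data=initial, halo=halo, boundary_conditions=bcs
    ),
    advector=VectorField(  # constant Courant numbers, well below the |C| < 1 limit
        data=(
            np.full((grid[0] + 1, grid[1]), 0.5),
            np.full((grid[0], grid[1] + 1), 0.25),
        ),
        halo=halo,
        boundary_conditions=bcs,
    ),
)
solver.advance(n_steps=10)
state = solver.advectee.get()  # advected field (without halo)
```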
@@ -45,6 +53,14 @@ Every worker is responsible for computing its part of the decomposed domain as d
+### Cartesian scenario (2D)
+
+In the Cartesian example below (based on a test case from [Arabas et al. 2014](https://doi.org/10.3233/SPR-140379)),
+  a constant advector field $$u$$ is used (and $$G=1$$).
+MPI (Message Passing Interface) is used
+  for handling data transfers and synchronisation in the outer dimension,
+  while multi-threading (using, e.g., OpenMP via Numba) is used in the inner dimension.
+
 ### 1 worker
diff --git a/scenarios/spherical.py b/scenarios/spherical.py
index b60ce88..8b697ab 100644
--- a/scenarios/spherical.py
+++ b/scenarios/spherical.py
@@ -131,13 +131,11 @@ def __init__(
         Cx_max = np.amax(
             np.abs((advector_x[1:, :] + advector_x[:-1, :]) / 2 / g_factor_z)
         )
-        print(Cx_max)
         assert Cx_max < 1
         Cy_max = np.amax(
             np.abs((advector_y[:, 1:] + advector_y[:, :-1]) / 2 / g_factor_z)
         )
-        print(Cy_max)
         assert Cy_max < 1
 
         # TODO #81:
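The hybrid parallelisation described in the Cartesian scenario above (MPI ranks across the outer dimension, threads within the inner one) can be pictured with the following hypothetical sketch. It is not PyMPDATA-MPI's actual domain-decomposition code; the use of `mpi4py`, the grid size and the thread count are illustrative assumptions.

```python
# Hypothetical illustration of slab decomposition along the outer dimension across
# MPI ranks, leaving the inner dimension whole for multi-threaded processing.
# mpi4py is used here purely for illustration; all numbers are arbitrary.
import numba
from mpi4py import MPI

comm = MPI.COMM_WORLD
nx, ny = 96, 64  # global grid: (outer, inner)

# each rank owns a contiguous range of the outer (x) dimension
chunk = nx // comm.size
x0 = comm.rank * chunk
x1 = nx if comm.rank == comm.size - 1 else x0 + chunk
local_grid = (x1 - x0, ny)  # the inner dimension is not decomposed

# within a rank, threads (e.g., Numba's threading layer) cover the inner dimension
numba.set_num_threads(2)

print(f"rank {comm.rank}/{comm.size}: outer cells [{x0}, {x1}) of {nx}, "
      f"local grid {local_grid}, {numba.get_num_threads()} threads")
```

Launching such a script with, e.g., `mpiexec -n 3 python sketch.py` (filename hypothetical) prints one line per rank; in PyMPDATA-MPI itself the analogous decomposition logic is provided by the library together with the boundary-condition classes mentioned above.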