diff --git a/docs/source_docs/user_guide/inputs/domain.rst b/docs/source_docs/user_guide/inputs/domain.rst
index fc4da1f37e9599992c89201f8bdab571db558d90..7d0e796d0918460a26ddd76b70c087ff503e5f76 100644
--- a/docs/source_docs/user_guide/inputs/domain.rst
+++ b/docs/source_docs/user_guide/inputs/domain.rst
@@ -119,6 +119,20 @@ The following inputs are defined using the prefix ``amr``:
 |                      | specified per-level.                                                  |             |           |
 +----------------------+-----------------------------------------------------------------------+-------------+-----------+
 
+Note that the default for ``max_grid_size`` is 64 for GPU runs.
+
+The domain is decomposed into grids by dividing the number of cells by the max grid size
+for each direction (e.g., ``n_cells[0]/max_grid_size_x``). The blocking factor ensures that
+the grids will be sufficiently coarsenable for good multigrid performance; therefore,
+``max_grid_size`` must be divisible by the corresponding ``blocking_factor``.
+
+.. note::
+
+   The `AMReX documentation `_ contains a significant
+   amount of information on grid creation and load balancing. Users are strongly encouraged to
+   read the relevant sections.
+
+
 Periodic domains
 ----------------
 
diff --git a/docs/source_docs/user_guide/inputs/gridding.rst b/docs/source_docs/user_guide/inputs/gridding.rst
index 5edb4edf1bcfe0b6893998aa6402ce8960e2deaf..d3b1c8310be2a6ffc6814ede7713eeb19e3f35f2 100644
--- a/docs/source_docs/user_guide/inputs/gridding.rst
+++ b/docs/source_docs/user_guide/inputs/gridding.rst
@@ -4,31 +4,26 @@
 Grids, tiles, and tagging
 =========================
 
-Grid layout
------------
-
-+----------------------+-----------------------------------------------------------------------+-------------+-----------+
-| refine_grid_layout_x | If set, AMReX will attempt to chop new grids into smaller chunks along| Bool        | true      |
-|                      | the x axis, ensuring at least one grid per MPI process, provided this |             |           |
-|                      | does not violate the blocking factor constraint.                      |             |           |
-+----------------------+-----------------------------------------------------------------------+-------------+-----------+
-| refine_grid_layout_y | If set, AMReX will attempt to chop new grids into smaller chunks along| Bool        | true      |
-|                      | the y axis, ensuring at least one grid per MPI process, provided this |             |           |
-|                      | does not violate the blocking factor constraint.                      |             |           |
-+----------------------+-----------------------------------------------------------------------+-------------+-----------+
-| refine_grid_layout_z | If set AMReX will attempt to chop new grids into smaller chunks along | Bool        | true      |
-|                      | the z axis, ensuring at least one grid per MPI process, provided this |             |           |
-|                      | does not violate the blocking factor constraint.                      |             |           |
-+----------------------+-----------------------------------------------------------------------+-------------+-----------+
-
-
-
-Note, the default for ``max_grid_size`` is 64 for GPU runs.
-
-The domain is decomposed into grids by dividing the number of cells by the max grid size
-for each direction (e.g., ``n_cells[0]/max_grid_size_x``). The blocking factor ensures that
-the grids will be sufficiently coarsenable for good multigrid performance; therefore, the
-``max_grid_size`` must be divisible by the corresponding ``blocking_factor``.
+Grid refinement
+---------------
+
+The following inputs are defined using the prefix ``amr``:
+
++----------------------+-----------------------------------------------------------------------+-------------+----------+
+|                      | Description                                                           | Type        | Default  |
++======================+=======================================================================+=============+==========+
+| refine_grid_layout_x | If set, AMReX will attempt to chop new grids into smaller chunks along| Bool        | true     |
+|                      | the x axis, ensuring at least one grid per MPI process, provided this |             |          |
+|                      | does not violate the blocking factor constraint.                      |             |          |
++----------------------+-----------------------------------------------------------------------+-------------+----------+
+| refine_grid_layout_y | If set, AMReX will attempt to chop new grids into smaller chunks along| Bool        | true     |
+|                      | the y axis, ensuring at least one grid per MPI process, provided this |             |          |
+|                      | does not violate the blocking factor constraint.                      |             |          |
++----------------------+-----------------------------------------------------------------------+-------------+----------+
+| refine_grid_layout_z | If set, AMReX will attempt to chop new grids into smaller chunks along| Bool        | true     |
+|                      | the z axis, ensuring at least one grid per MPI process, provided this |             |          |
+|                      | does not violate the blocking factor constraint.                      |             |          |
++----------------------+-----------------------------------------------------------------------+-------------+----------+
 
 .. note::
 
@@ -53,12 +48,12 @@ The following inputs are defined using the prefix ``fabarray``:
 
 The following inputs are defined using the prefix ``particles``:
 
-+----------------------+-----------------------------------------------------------------------+-------------+--------------+
-|                      | Description                                                           | Type        | Default      |
-+======================+=======================================================================+=============+==============+
-| tile_size            | Maximum number of cells in each direction for (logical) tiles         | Ints<3>     | 1024000 8 8  |
-|                      | in the ParticleBoxArray if ``load_balance`` is ``DualGrid``           |             |              |
-+----------------------+-----------------------------------------------------------------------+-------------+--------------+
++----------------------+-------------------------------------------------------------------+-------------+--------------+
+|                      | Description                                                       | Type        | Default      |
++======================+===================================================================+=============+==============+
+| tile_size            | Maximum number of cells in each direction for (logical) tiles in  | Ints<3>     | 1024000 8 8  |
+|                      | the ParticleBoxArray if ``load_balance`` is ``DualGrid``          |             |              |
++----------------------+-------------------------------------------------------------------+-------------+--------------+
 
 When running on shared memory machines using an OpenMP enabled executable, *grids* are
 subdivided into *tiles* and iterated over to improve data locality by cache blocking.
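
The decomposition rule moved into domain.rst by this patch (grids per direction = cells divided by max grid size, with ``max_grid_size`` divisible by ``blocking_factor``) can be sketched in a few lines. This is an illustrative stand-alone check, not AMReX code; the function name and the sample inputs are hypothetical:

```python
# Sketch of the decomposition rule described in domain.rst (not AMReX code;
# names here are illustrative). Each direction's cell count is divided by
# max_grid_size to get the number of grids, and max_grid_size must be
# divisible by the corresponding blocking_factor.
import math

def decompose(n_cells, max_grid_size, blocking_factor):
    """Return grids per direction, enforcing the divisibility constraint."""
    grids = []
    for n, mgs, bf in zip(n_cells, max_grid_size, blocking_factor):
        if mgs % bf != 0:
            raise ValueError(
                f"max_grid_size {mgs} is not divisible by blocking_factor {bf}"
            )
        # Partial grids at the domain edge still count as grids, hence ceil.
        grids.append(math.ceil(n / mgs))
    return grids

print(decompose([128, 64, 32], [64, 64, 64], [8, 8, 8]))  # -> [2, 1, 1]
```

With the GPU default of ``max_grid_size = 64`` and a blocking factor of 8, a 128 x 64 x 32 domain yields two grids along x and one along each of y and z.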