
Wheels: 0.17.0 #1833

Merged
ax3l merged 31 commits into openPMD:wheels from ax3l:wheels-0.17.0
Jan 27, 2026

Conversation

@ax3l (Member) commented Jan 13, 2026

Update ADIOS2 to v2.11.0 and enable campaign management.
@ax3l ax3l added this to the 0.17.0 milestone Jan 13, 2026
@ax3l ax3l requested a review from franzpoeschel January 13, 2026 18:36
@ax3l ax3l force-pushed the wheels-0.17.0 branch 4 times, most recently from ada0bda to 7d6bc2e on January 18, 2026 05:26
GH runners EOL and removed
- [x] Linux/macOS
- [x] Windows
@ax3l ax3l force-pushed the wheels-0.17.0 branch 4 times, most recently from 5cd8a37 to 89dc8a0 on January 27, 2026 05:28
ax3l added 2 commits January 26, 2026 21:45
libcrypto.3.dylib and libssl.3.dylib pull this up
Comment on lines 94 to 96:
```console
git clone https://github.com/ornladios/ADIOS2 ADIOS2-2.11.0
cd ADIOS2-2.11.0
git checkout 7a21e4ef2f5def6659e67084b5210a66582d4b1a
```
@ax3l (Member, Author):

@franzpoeschel do we need this commit for something specific or can we go back to vanilla 2.11.0 + the patch below for DILL?


I've modified this patch to include only the ADIOS thirdparty/ffs change. I suspect that the EVPath change isn't necessary, and if it can be excluded then the other can go into ADIOS master and release_211.

@ax3l (Member, Author):

Ok, let's see if the latest version of ornladios/ADIOS2#4820, modifying only thirdparty/ffs/CMakeLists.txt to add

if(TARGET dill::dill)
  set(FFS_USE_DILL ON CACHE INTERNAL "")
else()
  set(FFS_USE_DILL OFF CACHE INTERNAL "")
endif()

works on 2.11.0 for macOS

@ax3l (Member, Author):

Sadly, it looks like it needs the EVPath changes, too.

@franzpoeschel (Contributor) commented Jan 27, 2026 via email

@ax3l (Member, Author) commented Jan 27, 2026

All green - thanks a lot for the help, @eisenhauer! Will now clean up a bit.

@ax3l ax3l force-pushed the wheels-0.17.0 branch 5 times, most recently from 63f1acd to 4a77b85 on January 27, 2026 19:15
@ax3l ax3l merged commit 9b806ce into openPMD:wheels Jan 27, 2026
8 of 9 checks passed
@ax3l ax3l deleted the wheels-0.17.0 branch January 27, 2026 21:57
RemiLehe added a commit to BLAST-WarpX/warpx that referenced this pull request Jan 28, 2026
Test the new openPMD-api release before it is tagged.

- [x] Python `setup.py`/`requirements.txt` update needs
openPMD/openPMD-api#1833

---------

Co-authored-by: Remi Lehe <[email protected]>
ncook882 added a commit to radiasoft/WarpX that referenced this pull request Jan 28, 2026

* Release: WarpX 25.12 (BLAST-WarpX#6432)

Automated via .github/workflows/monthly_release.yml.

---------

Co-authored-by: Axel Huebl <[email protected]>
Co-authored-by: Edoardo Zoni <[email protected]>

* Docs: inputs section cleanup - QED (BLAST-WarpX#6385)

QED input parameters are moved to a dedicated subsection. Split from BLAST-WarpX#6355.

Co-authored-by: Edoardo Zoni <[email protected]>

* [pre-commit.ci] pre-commit autoupdate (BLAST-WarpX#6440)

<!--pre-commit.ci start-->
updates:
- [github.com/pre-commit/mirrors-clang-format: v21.1.6 →
v21.1.7](pre-commit/mirrors-clang-format@v21.1.6...v21.1.7)
- [github.com/astral-sh/ruff-pre-commit: v0.14.7 →
v0.14.8](astral-sh/ruff-pre-commit@v0.14.7...v0.14.8)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* PETSC CI: Simplify (BLAST-WarpX#6437)

- install properly in the system default location `/usr`
- remove all the extra hints for custom install paths
- waste no time with debug builds (this test only builds and runs
nothing)

* allow dt update based on particle CFL for theta implicit solver. (BLAST-WarpX#6428)

This PR adds the ability to dynamically limit the time step based on the
particle CFL when using the theta implicit EM solver, which treats light
waves implicitly. This would be a one-line PR if it weren't for one
subtlety described below.

The call to `UpdateDtFromParticleSpeeds()` in the `WarpX::Evolve()`
routine occurs prior to the call to `OneStep()`. For many of the
time-advance methods, including the implicit ones, `OneStep()` first
performs collisions and then the PIC advance. However, to be meaningful,
the call to `UpdateDtFromParticleSpeeds()` should occur **just before**
the PIC advance. This PR swaps the order of calls inside `OneStep()` for
the implicit methods to do the PIC advance before collisions. This
ordering is actually more common in my experience, and I wonder if the
ordering should be swapped for other time-advance methods as well.
@roelof-groenewald @ax3l @RemiLehe

* For PEC_insulator, fix the source on the boundary (BLAST-WarpX#6436)

For PEC_insulator, the source (rho and J-parallel) needs to be zeroed out
on the boundary in the conductor region. Without this, the linear
solvers for implicit can fail to converge.

This PR implements two versions of this to compare them.

* Dependencies: weekly update (BLAST-WarpX#6439)

Automated via .github/workflows/weekly_update.yml.

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <[email protected]>

* Fix formatting in parameters.rst to avoid warnings (BLAST-WarpX#6444)

Several small fixes in the section on the `do_qed_virtual_photons` input
parameter needed to avoid warnings during the generation of the html.

---------

Co-authored-by: Luca Fedeli <[email protected]>
Co-authored-by: Arianna Formenti <[email protected]>

* Free disk space in nvhpc CI (BLAST-WarpX#6445)

The script is borrowed from AMReX. For the nvhpc CI using ubuntu-latest,
the free space was 18 GB before and 23 GB after the cleanup.

* CMake: No Warn for No-MPI Tests (BLAST-WarpX#6450)

Our strategy for CMake is to enable tests that work in the current
configuration, and silently not add other tests.

When building w/o MPI, we currently fill the terminal with warnings.
This removes those.

* ExternalField.H: use static_cast<int> to convert double/float to int (BLAST-WarpX#6446)

I've noticed this warning at the end of
https://github.com/BLAST-WarpX/warpx/actions/runs/20027848504/job/57429402421?pr=6427
:

```
./Source/Initialization/ExternalField.H:84:26: warning: conversion from ‘double’ to ‘int’ may change value [-Wfloat-conversion]
   84 |         AMREX_D_TERM(int const i0 = std::floor( (pos[0]-offset[0])/dx[0] );,
      |                ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./Source/Initialization/ExternalField.H:84:87: warning: conversion from ‘double’ to ‘int’ may change value [-Wfloat-conversion]
   84 |         AMREX_D_TERM(int const i0 = std::floor( (pos[0]-offset[0])/dx[0] );,
      |                                                                                       ^                                 
./Source/Initialization/ExternalField.H:84:148: warning: conversion from ‘double’ to ‘int’ may change value [-Wfloat-conversion]
   84 |         AMREX_D_TERM(int const i0 = std::floor( (pos[0]-offset[0])/dx[0] );,
      |                                                                                   
```

This PR uses `static_cast` to silence the warning. This should fix the
issue.

Note that `auto` is used here to avoid writing `int` twice.

* Cropping of particles at boundaries for deposition for charge conservation (BLAST-WarpX#5649)

* Doc: WarpX no-MPI Perlmutter Container (BLAST-WarpX#6422)

A container for WarpX on GPU on Perlmutter without MPI support.

Used as base image for SYNAPSE for now
BLAST-AI-ML/synapse#279

## To Do

- [x] Build all WarpX dims
- [x] Add HDF5
- [x] Add more [Python packages for
analysis/db](BLAST-AI-ML/synapse#279)
- [x] Builds
- [x] Ensure it runs on Perlmutter (1 GPU only):
```console
cd ~/src/warpx/Examples/Physics_applications/laser_acceleration

# executable
podman-hpc run --rm --gpu -v $PWD:/opt/pwd -it registry.nersc.gov/m558/superfacility/warpx-perlmutter-nompi:25.11 warpx.rz /opt/pwd/inputs_base_rz

# Python
podman-hpc run --rm --gpu -v $PWD:/opt/pwd -it registry.nersc.gov/m558/superfacility/warpx-perlmutter-nompi:25.11 /opt/pwd/inputs_test_rz_laser_acceleration_picmi.py
```
- [x] Publish as
`registry.nersc.gov/m558/superfacility/warpx-perlmutter-nompi:25.11`

* Add cupy support for picmi CIs with callback functions (BLAST-WarpX#6354)

To fix (separate PR?): during the shutdown, several CI tests still
crash: `amrex::Abort::1::CUDA error 709 in file
/global/homes/o/oshapova/src/warpx/build/_deps/fetchedamrex-src/Src/Base/AMReX_GpuDevice.cpp
line 691: context is destroyed !!!`
Crashing tests: 

-
`/Examples/Tests/ohm_solver_ion_Landau_damping/inputs_test_2d_ohm_solver_landau_damping_picmi.py`
-
`/Examples/Physics_applications/capacitive_discharge/inputs_test_2d_background_mcc_picmi.py`
-
`/Examples/Physics_applications/capacitive_discharge/inputs_base_1d_picmi.py`
-
`/Examples/Physics_applications/spacecraft_charging/inputs_test_rz_spacecraft_charging_picmi.py`
-
`/Examples/Tests/ohm_solver_cylinder_compression/inputs_test_3d_ohm_solver_cylinder_compression_picmi.py`
-
`/Examples/Tests/projection_div_cleaner/inputs_test_3d_projection_div_cleaner_callback_picmi.py`

---------

Co-authored-by: Axel Huebl <[email protected]>

* Custom weights for initial DistributionMapping (BLAST-WarpX#6452)

By default, amrex uses box volumes as weights when distributing boxes.
But this would somewhat defeat the purpose of splitting high-density
boxes until the next load balance is performed. Thus, when box splitting
is on, we assume that the boxes have equal weights.

For a test based on BLAST-WarpX#6158, this improves the initial load balance
efficiency from 0.26 to 0.63 when running on 8 processes.

The figures below show the distribution mapping without and with this
PR.

<img width="844" height="636" alt="Screenshot from 2025-12-11 16-07-56"
src="https://github.com/user-attachments/assets/eb6b4c85-fc55-4e4e-ac68-b6a02ab9bf4d"
/>
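For reference, the load-balance efficiency quoted above is commonly defined as the ratio of mean to maximum per-process load; a minimal sketch under that assumed definition (not WarpX code, names hypothetical):

```python
def balance_efficiency(weights_per_box, owner):
    """Mean/max per-process load for a box-to-process assignment.

    weights_per_box: cost estimate per box (box volume by default,
    or 1.0 per box for the equal-weight choice in this PR).
    owner[i]: rank of the process owning box i.
    """
    loads = {}
    for w, p in zip(weights_per_box, owner):
        loads[p] = loads.get(p, 0.0) + w
    vals = list(loads.values())
    return (sum(vals) / len(vals)) / max(vals)
```

A perfectly even assignment yields 1.0; a single overloaded rank drags the ratio down, which is what the 0.26 → 0.63 improvement measures.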

* Dependencies: weekly update (BLAST-WarpX#6457)

Automated via .github/workflows/weekly_update.yml.

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* [pre-commit.ci] pre-commit autoupdate (BLAST-WarpX#6459)

<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.14.8 →
v0.14.9](astral-sh/ruff-pre-commit@v0.14.8...v0.14.9)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Load Density: Distributed Approach (BLAST-WarpX#6221)

Previously, the data were duplicated on all processes. For a very large
file, this can be problematic. A distributed approach is developed in
this PR.

For the initial particle creation, we load the data we need in a
distributed way. Because we also know the particle BoxArray and
DistributionMapping, we can then redistribute the data to where they are
needed.

For continuous particle injection during the moving window, the data
are duplicated on every process, but they are still distributed in time
in the sense that we only load the data needed for the new region
created by the moving window, plus some extra caching so that we don't
have to do disk I/O too frequently.

A caveat is that it might be difficult to determine the range of the
data we need if the boosted frame is used and the boosted velocity is
not constant. In that case, one might encounter an out-of-bounds error
at run time. One can also disable the distributed approach with a
runtime parameter, `<species_name>.read_density_distributed`.

This also adds support for Fortran-ordered data and for axes ordered
either x, y, z or z, y, x.

---------

Co-authored-by: Remi Lehe <[email protected]>
Co-authored-by: Axel Huebl <[email protected]>

* Add option to initialize a Gaussian beam using total number of particles instead of total charge  (BLAST-WarpX#6451)

A Gaussian beam can now be initialized using the input parameter
`<species>.npart_real` instead of `<species>.q_tot`.
Only one of the two should be specified by the user:
* if `npart_real` is specified, the weights are calculated as
`npart_real/npart` (note that, in the user interface, we use `npart` to
mean the number of macroparticles, which I think is a bit meh)
* if `q_tot` is specified, the weights are calculated as `q_tot / (npart
* charge)`, as it was before this PR.

This is useful for neutral species.
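The two weight conventions above can be sketched as follows (the helper name is hypothetical; WarpX computes this internally from the input parameters):

```python
def gaussian_beam_weight(npart, q_tot=None, npart_real=None, charge=None):
    """Per-macroparticle weight for a Gaussian beam (hypothetical helper).

    npart: number of macroparticles; exactly one of q_tot (total charge)
    or npart_real (total number of physical particles) must be given.
    """
    if (q_tot is None) == (npart_real is None):
        raise ValueError("specify exactly one of q_tot or npart_real")
    if npart_real is not None:
        # new option: well-defined even for neutral species (charge == 0)
        return npart_real / npart
    # previous behavior: total charge spread over the macroparticles
    return q_tot / (npart * charge)
```

Note that the `q_tot` path divides by the species charge, which is why it cannot describe neutral species.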

* Fix undefined variables in PICMI test scripts (BLAST-WarpX#6456)

* External Particle Fields: enable reading of multiple fields from file (BLAST-WarpX#6269)

This PR adds the functionality to read in multiple external particle
fields from an openPMD file. Inspired from the HybridSolver, a new
helper class `ExternalParticleFields` is introduced, which holds the
arrays of the meta data for the external E and B fields.

Then, the fields are read and written into `m_fields` using as
many components as there are external E and B fields, respectively. This
allows reading fields from different files with different time
dependencies.

The previous syntax of reading in a single field is preserved and
functional. Now, additionally, multiple fields can be read in as
```
            particles.B_ext_particle_init_style = read_from_file
            particles.B_ext_particle_fields = b1 b2
            particles.b1.read_fields_from_path = diags/Bfield_map1
            particles.b1.read_fields_B_dependency(t) = cos(omega*t + phase)
            particles.b2.read_fields_from_path = diags/Bfield_map2
            particles.b2.read_fields_B_dependency(t) = cos(2*omega*t + phase)
```

For the picmi interface, a dictionary has to be created per field,
similarly to the HybridSolver:
```python
    applied_field = picmi.LoadAppliedField(
        load_E=False,
        load_B=True,
        B_external_fields={
            "b1": {
                "read_fields_from_path": "diags/Bfield_map1",
                "read_fields_B_dependency(t)": f"cos({omega}*t + {phase})",
            },
            "b2": {
                "read_fields_from_path": "diags/Bfield_map2",
                "read_fields_B_dependency(t)": f"cos(2*{omega}*t + {phase})",
            },
        },
    )
```


Note that the global `particles.read_fields_from_path` is ignored as
soon as multiple fields are specified and a warning is issued.
Furthermore, for each field, the path must be specified individually.

Additional CI tests are added with different time dependencies, similar
to the previously existing one that reads only a single field.

This plot shows the beating from two field maps at different
frequencies:

![frequency_beating](https://github.com/user-attachments/assets/d7fb8353-76bd-4789-bcd0-85210adaac9d)

Any feedback or suggestions are more than welcome!

---------

Co-authored-by: Edoardo Zoni <[email protected]>

* collisions should be after call to HandleParticlesAtBoundaries() (BLAST-WarpX#6458)

This PR moves the collisions for the implicit solver to the proper place
in the Evolve loop. This fixes a bug introduced in PR BLAST-WarpX#6428, where the
collisions were moved from just before to just after the PIC advance
(see BLAST-WarpX#6449). However, with
this change I noticed odd behavior in simulation outputs at processor
boundaries late in time for highly collisional simulations. The fix is
that the collisions should be placed after the particle communication
and boundary handling is performed, which occurs later in the Evolve
loop.

* PETSc: Better Support in Build & CI (BLAST-WarpX#6441)

* Poisson Solver: Synchronize nodal data before solve (BLAST-WarpX#6438)

The nodal density in `LabFrameExplicitES::ComputeSpaceChargeField` was
not synchronized on shared nodes. Thus we need to call OverrideSync
before passing it to `MLMG::solve`.

Close BLAST-WarpX#6425

* Add documentation for implicit attributes (BLAST-WarpX#6443)

This adds to the documentation the particle attributes that are created
when the implicit solver is used.

Note that no specific default value is needed for the attributes since
they will always be set before being used.

* Dependencies: weekly update (BLAST-WarpX#6463)

Automated via .github/workflows/weekly_update.yml.

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <[email protected]>

* check for out of bounds particles with cropping in suborbits. (BLAST-WarpX#6467)

Particle suborbits were added in PR BLAST-WarpX#5969. Suborbits are needed for
improved accuracy of particle orbits when using large time steps.
Particle orbit cropping at physical boundaries was added in PR BLAST-WarpX#5649.
This is used to achieve rigorous charge conservation for outflow
particles at physical boundaries (conductor and insulator).

A bug occurs when the initial position of the particle is out of bounds
(past the physical boundary) where cropping is being done. In this
scenario, the particle will deposit current when it should not be doing
so, breaking energy and charge conservation. This PR fixes this bug by
checking for particles with the initial position out of bounds in the
suborbit routine and neglecting deposition for these suborbits.

* add missing PetscFunctionBeginUser (BLAST-WarpX#6462)

* [pre-commit.ci] pre-commit autoupdate (BLAST-WarpX#6464)

<!--pre-commit.ci start-->
updates:
- [github.com/pre-commit/mirrors-clang-format: v21.1.7 →
v21.1.8](pre-commit/mirrors-clang-format@v21.1.7...v21.1.8)
- [github.com/astral-sh/ruff-pre-commit: v0.14.9 →
v0.14.10](astral-sh/ruff-pre-commit@v0.14.9...v0.14.10)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Doc: AMD GPU Debugging (BLAST-WarpX#6469)

https://rocm.docs.amd.com/projects/HIP/en/docs-6.0.0/how_to_guides/debugging.html#summary-of-environment-variables-in-hip

---------

Co-authored-by: Weiqun Zhang <[email protected]>

* Documentation for Adastra (CINES, France): change installation directory to WORKDIR & clean scripts (BLAST-WarpX#6423)

This PR concerns the documentation for the Adastra supercomputer.

The main point of this PR is to change the installation directory from
`$SHAREDHOMEDIR` to `$WORKDIR` .
On Adastra the quota for the `$HOME` directory is shared among all the
group members of a given project and it is quite stringent, so having
individual installations in the `$HOME` folder may be challenging.
Therefore, the proposed solution was to have a unique installation in
`$SHAREDHOMEDIR` for all the project members. However, this introduces a
significant constraint for the users. For this reason, this PR changes
the default installation directory to `$WORKDIR`.

In addition, the installation of additional modules in the Python
environment is proposed (notably `jupyter` and `lasy`), and some
cleaning of the installation scripts is performed.

* Release: WarpX 26.01 (BLAST-WarpX#6468)

Automated via .github/workflows/monthly_release.yml.

- [x] depends on AMReX-Codes/pyamrex#522

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Axel Huebl <[email protected]>

* Doc: X-Ref Debugging (BLAST-WarpX#6470)

While we have debugging workflows traditionally in the user how-to
section (to write better bug reports), it is equally important to be
found (linked) from the developer section.

* Fix Zenodo JSON Schema (BLAST-WarpX#6473)

Fixed via validation with https://www.npmjs.com/package/zenodraft.

Broken since BLAST-WarpX#6102.

Not too easy to validate automatically:
zenodo/zenodo-rdm#1205

* Dependencies: weekly update (BLAST-WarpX#6466)

Automated via .github/workflows/weekly_update.yml.

- [x] wait / rebase after release BLAST-WarpX#6468

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <[email protected]>

* Fix the periodic 1D Poisson solver (BLAST-WarpX#6418)

There were problems with the implementation of the tridiagonal solver
for phi with periodic boundaries. This PR replaces it with a more
standard solver as given in the Wikipedia page
https://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm#Variants.
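For reference, that standard variant is the Sherman-Morrison rank-1 correction on top of the Thomas algorithm; a minimal sketch under the assumption of a nonsingular system (not the actual WarpX implementation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a non-periodic tridiagonal system.
    a: sub-, b: main, c: super-diagonal (a[0] and c[-1] are ignored)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def periodic_thomas(a, b, c, d):
    """Periodic variant: the corner entries A[0, -1] = a[0] and
    A[-1, 0] = c[-1] are removed via a rank-1 (Sherman-Morrison)
    correction, then two non-periodic solves are combined."""
    n = len(d)
    gamma = -b[0]                       # any convenient nonzero value
    bb = b.copy()
    bb[0] -= gamma                      # modified diagonal, corners folded in
    bb[-1] -= c[-1] * a[0] / gamma
    y = thomas(a, bb, c, d)             # solve B y = d
    u = np.zeros(n)
    u[0], u[-1] = gamma, c[-1]
    q = thomas(a, bb, c, u)             # solve B q = u
    fac = (y[0] + a[0] / gamma * y[-1]) / (1.0 + q[0] + a[0] / gamma * q[-1])
    return y - fac * q
```

For the purely periodic 1D Poisson problem the matrix additionally has a constant null space, so the potential must be pinned separately; the sketch above assumes that has been handled.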

* CI: Zenodo Validator (BLAST-WarpX#6474)

* Containers: More Shared Libs (BLAST-WarpX#6480)

Build WarpX as shared lib, so Python and executable application can
share the same lib (instead of copying the static lib twice). Remove all
auxiliary static libs after build.

Size before: 4.99 GiB (11.9 GiB total)
Size now: 2.92 GiB (6.33 GiB total)
Legend: size shown on the harbor registry, i.e., compressed (size on
`podman images`, i.e., uncompressed)

- [x] runtime tested

* Add reduced Compton wavelength to constants (BLAST-WarpX#6476)

Added the reduced Compton wavelength in the list of constants in ablastr.

---------

Co-authored-by: Peter Kicsiny <[email protected]>
Co-authored-by: Arianna Formenti <[email protected]>

* [pre-commit.ci] pre-commit autoupdate (BLAST-WarpX#6484)

<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.14.10 →
v0.14.11](astral-sh/ruff-pre-commit@v0.14.10...v0.14.11)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Doc: New Paper in NJP on LPA Opt (BLAST-WarpX#6488)

:loudspeaker: New community paper:
https://doi.org/10.1088/1367-2630/ae23f1 🎉

* Fix Perlmutter Container: CuPy Headers (BLAST-WarpX#6485)

Seen in Synapse with LASY scripts on GPU:

Installing `nvidia-cuda-runtime-cu12==12.4.*` provides otherwise
unavailable headers that are needed for NVRTC runtime compilation inside
CuPy.

https://docs.cupy.dev/en/stable/install.html#cupy-always-raises-nvrtc-error-compilation-6

CuPy uses NVRTC for nearly everything, e.g., to calculate `cupy.sqrt(x)`
of an array `x`. Lasy uses CuPy by default, too.

- also install lasy (small, ok to ship)
- also disable HDF5 file locking (generally not needed, often a problem
and seen to be an issue for writing the lasy file in the container)
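File locking is controlled through an environment variable documented by HDF5; it must be set before the HDF5 library is first loaded in the process (e.g., before importing h5py):

```python
import os

# Disable POSIX file locks in HDF5; must happen before the HDF5
# library is initialized (i.e., before the first `import h5py`).
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"
```

In a container image this is typically baked in via `ENV HDF5_USE_FILE_LOCKING=FALSE` instead of per-script Python code.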

* Small bugfix in particle container wrapper (BLAST-WarpX#6481)

`get_particle_container_from_name()` no longer exists, instead it is now
just called `get`.
This functionality is deprecated anyway but still worth fixing until we
remove `particle_containers.py` completely.

---------

Signed-off-by: roelof-groenewald <[email protected]>

* refactor direct deposit for Mass Matrices. (BLAST-WarpX#6381)

This PR makes a few modifications to the direct mass matrix deposition
routine.

1) The switching between doing a full mass matrix deposition and just
depositing the diagonal elements of the diagonal mass matrices for the
preconditioner (PC) is now controlled with a template parameter just as
it is in the villasenor deposition. This removes an if check inside hot
loops.

2) The full mass matrices are deposited using the product of shape
factors for current and field interpolation (`shape_J x shape_E`). In
other words, for each particle location where current J is deposited,
the code loops over all field interpolation points E (the same locations
used for J) and deposits the corresponding mass matrix contribution with
`shape_J x shape_E`. In the current development branch, when only the
diagonal elements of the diagonal mass matrices are used for the PC, the
deposition still uses `shape_J x shape_E`, which is non-conservative.
With this PR, when operating in this way, the deposition instead uses
only the current deposition shape factor (`shape_J`). This makes the
deposition conservative, consistent with the J deposition scheme.
Mathematically, this is equivalent to depositing to all E-locations for
each J-location (as is done for the full mass matrices) and summing the
contributions. @RemiLehe
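Schematically, per particle the two deposition modes differ as follows (a toy sketch with co-located, normalized shape factors, not the WarpX kernels):

```python
import numpy as np

def mass_matrix_contrib(shape_J, shape_E, diagonal_only):
    """Toy per-particle mass-matrix contribution.

    shape_J, shape_E: shape factors at the (co-located) J and E stencil
    points, each summing to 1. Full mode deposits the outer product
    shape_J x shape_E; diagonal-only mode deposits just shape_J on the
    diagonal, which equals the row sums of the full matrix (the summing
    equivalence stated above), making it conservative.
    """
    if diagonal_only:
        return np.diag(shape_J)
    return np.outer(shape_J, shape_E)
```

Since `sum(shape_E) == 1`, each diagonal entry `shape_J[i]` equals the corresponding row sum of the full matrix, so no charge is lost in the diagonal-only preconditioner deposition.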

* fix bug in MatrixPC for 1D. (BLAST-WarpX#6498)

This bug, which prevents using the matrix PC in 1D, was accidentally
introduced in PR BLAST-WarpX#6415.

* Dependencies: weekly update (BLAST-WarpX#6483)

Automated via .github/workflows/weekly_update.yml.

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <[email protected]>

* Fix Cursor issue when running Python through CTest (BLAST-WarpX#6490)

When asking Cursor to run `CTest` tests, it seems that it is unable to
find the correct Python (in my case the `conda` one) and uses the system
Python instead, thus not finding `numpy` and `yt`:

<img width="852" height="722" alt="Screenshot 2026-01-13 at 2 26 28 PM"
src="https://github.com/user-attachments/assets/fc4ef6f8-b02c-4954-b7cb-0aec968c750c"
/>

Strangely, this error only happens on macOS. (I don't get this error
when running Cursor on Linux.)
Additionally, when I run the same command **myself directly in the
Terminal**, I don't get the error (even on macOS).

The changes made to the CMakeLists.txt fix the issue, but were suggested
by a coding assistant and I do not fully understand them. They should be
simplified/updated before merging this PR.

- [ ] follow-up PR: also add same change in ImpactX

---------

Co-authored-by: Axel Huebl <[email protected]>

* [pre-commit.ci] pre-commit autoupdate (BLAST-WarpX#6508)

<!--pre-commit.ci start-->
updates:
- [github.com/Lucas-C/pre-commit-hooks: v1.5.5 →
v1.5.6](Lucas-C/pre-commit-hooks@v1.5.5...v1.5.6)
- [github.com/astral-sh/ruff-pre-commit: v0.14.11 →
v0.14.13](astral-sh/ruff-pre-commit@v0.14.11...v0.14.13)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Docs: inputs section cleanup - parser and constants (BLAST-WarpX#6510)

Move and refine paragraph about math parser and constants. Split from
BLAST-WarpX#6355.

* Fix conditionless `ComputeSpaceChargeField`  (BLAST-WarpX#6455)

Adds a conditional check to only call `ComputeSpaceChargeField` when an
electrostatic solver is enabled or when using hybrid-PIC, matching the
logic in `WarpXEvolve.cpp`.

With the help of the robots, I have to check if it's correct.

---------

Co-authored-by: Roelof Groenewald <[email protected]>

* Add masks for boundary conditions in curl-curl matrix (BLAST-WarpX#6383)

Add masks to implement the boundary conditions in the curl-curl matrix used for the PETSc preconditioner.

* Add density floor and time filtering to effective potential solver (BLAST-WarpX#6143)

~Currently, the `sigma` MF used to dress the Poisson equation in the
semi-implicit "effective potential" electrostatic solver uses the charge
density **deposited** at the nodes, which requires an interpolation step
to get the charge density at the cell-centers. This PR corrects the
calculation of sigma by directly using the cell average number density
via the `GetNumberDensity` function.~

The "effective potential" solver is updated to allow use of a density
floor in the calculation of sigma and to use time filtering of sigma -
this substantially reduces noise in the electrostatic potential.
The solver performance is also improved by only depositing the charge
density once per step instead of twice as in the current `development`
version.

The `GetNumberDensity` function is also exposed to Python since I used
it to test a prototype solver (1D) with the above-mentioned change.

---------

Signed-off-by: roelof-groenewald <[email protected]>

* Dependencies: weekly update (BLAST-WarpX#6507)

Automated via .github/workflows/weekly_update.yml.

To resolve the timeout issue, re-update the PR after:
- [x] AMReX-Codes/amrex#4900
- [x] AMReX-Codes/pyamrex#528 
- [x] AMReX-Codes/amrex#4908

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <[email protected]>
Co-authored-by: Axel Huebl <[email protected]>

* Dependencies: weekly update (BLAST-WarpX#6522)

Automated via .github/workflows/weekly_update.yml.

---------

Co-authored-by: Axel Huebl <[email protected]>

* [pre-commit.ci] pre-commit autoupdate (BLAST-WarpX#6525)

<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.14.13 →
v0.14.14](astral-sh/ruff-pre-commit@v0.14.13...v0.14.14)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Expose reduced Planck constant (BLAST-WarpX#6517)

Expose the reduced Planck constant $\hbar$ to the user. 
- [x] Expose the constant.
- [x] Update the documentation.
- [x] Use the constant in at least one CI test.

* Add new paper using WarpX for magnetic reconnection (BLAST-WarpX#6521)

* Docs: inputs section cleanup - BLAST-WarpX#6510 follow-up (BLAST-WarpX#6511)

Follow up on BLAST-WarpX#6510 and fix a few issues:
- Annotate the units of all dimensional constants (not just some of
them).
- Fix broken references to the renamed `Time intervals` subsection.
- Write "pi" as $\pi$ in the table.

* Improve documentation for callback function (third step): data access (BLAST-WarpX#6368)

This reorganizes the section on data access into separate pages (the
current page was getting too long), and updates the documentation to
describe the new interface that was implemented during the November 2025
hackathon.

The new structure is visible here in particular.
<img width="825" height="627" alt="Screenshot 2026-01-25 at 7 41 24 AM"
src="https://github.com/user-attachments/assets/51fb9c0b-8d48-4ee2-970b-842af98e8095"
/>

Before reviewing the code changes, I would recommend comparing the
existing Sphinx documentation
[here](https://warpx.readthedocs.io/en/latest/usage/workflows/python_extend.html)
with the new proposed documentation
[here](https://warpx--6368.org.readthedocs.build/en/6368/).

Next steps:
- Should we remove the previous Python interfaces? (e.g.
`MultiFabWrapper`, `ParticleContainerWrapper`)

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Axel Huebl <[email protected]>

* bug fix for MM deposition in 1D. (BLAST-WarpX#6526)

This PR fixes a bug in the 1D Villasenor deposition routine for mass
matrices. The number of indices for the mass matrices currently has only
three values specified, with the intended component number the third
value. This causes a crash with `amrex.fpe_trap_invalid = 1` because the
code interprets the component number as the third spatial dimension
index and sets the component number to 0. Using a debug build, I get
the following error message:
`amrex::Abort::0:: (-1,0,3,0) is out of bound (-2:10,0:0,0:0,0:6) !!!`

This error prevented me from running the
implicit/inputs_test_1d_theta_implicit_pinch test locally. I have no
idea why the crash didn't occur when running the CI tests on Azure.

With or without this fix, if I set `amrex.fpe_trap_invalid = 0`, the CI
test runs fine and the results look as expected.

* Mass matrices pc width part 1 (BLAST-WarpX#6520)

Add off-diagonal terms of the diagonal mass matrices to the
preconditioner.

* Docs: Remove Summit & Lassen (ppc64le / CORAL acquisition) (BLAST-WarpX#6528)

Hi,

I've noticed that our documentation contains installation instructions
for machines that have been decommissioned, in some cases more than one
year ago.

This PR removes in particular the installation instructions and the
machine files for the Summit supercomputer, since the machine exhaled
its last FLOP in November 2024.

A valid alternative would be to keep these files in a `legacy` folder. 

What I suggest is basically to avoid cluttering the documentation by
mixing old and new machines. What do you think?

---------

Co-authored-by: Axel Huebl <[email protected]>

* openPMD-api: 0.17.0 (BLAST-WarpX#6461)

Test the new openPMD-api release before it is tagged.

- [x] Python `setup.py`/`requirements.txt` update needs
openPMD/openPMD-api#1833

---------

Co-authored-by: Remi Lehe <[email protected]>

---------

Signed-off-by: roelof-groenewald <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Axel Huebl <[email protected]>
Co-authored-by: Edoardo Zoni <[email protected]>
Co-authored-by: Arianna Formenti <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Justin Ray Angus <[email protected]>
Co-authored-by: David Grote <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <[email protected]>
Co-authored-by: Luca Fedeli <[email protected]>
Co-authored-by: Weiqun Zhang <[email protected]>
Co-authored-by: Luca Fedeli <[email protected]>
Co-authored-by: Olga Shapoval <[email protected]>
Co-authored-by: Remi Lehe <[email protected]>
Co-authored-by: Severin Diederichs <[email protected]>
Co-authored-by: Peter Kicsiny <[email protected]>
Co-authored-by: Peter Kicsiny <[email protected]>
Co-authored-by: Roelof Groenewald <[email protected]>