HyperNOs is a Python project for fully automatic, distributed, and parallel hyperparameter optimization of neural operators. The project provides a framework for training neural operator models using PyTorch, with Ray Tune for hyperparameter tuning. The library is designed to be highly flexible, making it easy to use with any kind of model and dataset. In the context of neural operators, where architecture design is still an active area of research, performing extensive hyperparameter optimization is crucial to obtaining state-of-the-art results.
For a more detailed explanation of the library and its capabilities, please refer to our article: HyperNOs: Automated and Parallel Library for Neural Operators Research.
HyperNOs allows users to easily integrate and use models from popular neural-operator libraries or custom models. It is also very flexible and can be used with many different datasets.
I have already implemented some usage examples with the following popular libraries:
- NeuralOperator: implements neural operator architectures such as FNO, SFNO, TFNO, UNO, UQNO, GINO, FNOGNO, LocalNO, RNO, CODANO, and OTNO.
- DeepXDE: implements operator-learning models such as DeepONet, MIONet, POD-DeepONet, and POD-MIONet.
You can find examples of how to use these models in the `neural_operators/examples` directory, which has two dedicated subdirectories: `deepxde_lib` and `neuralop_lib`. Examples are provided both for training a given architecture and for hyperparameter optimization routines.
The project also includes a visualization website (https://hypernos.streamlit.app) that allows users to visualize the results obtained with the HyperNOs library.
To set up the HyperNOs project, follow these steps:
- Clone the repository:

  ```shell
  git clone --depth=1 https://github.com/MaxGhi8/HyperNOs.git
  cd HyperNOs
  ```
- Install the required dependencies. It is recommended to create a virtual environment before installing them; I personally use `pyenv` (but others, like `uv`, are fine) with Python version 3.12.7:

  ```shell
  pyenv install 3.12.7
  pyenv virtualenv 3.12.7 hypernos
  pyenv activate hypernos
  ```

  Then, install the dependencies using `pip`:

  ```shell
  pip install -r requirements.txt
  ```
  If you want to use the `neuraloperator` library: since we rely on cutting-edge features of NeuralOperator, please install the library directly from GitHub and not with pip; you can find the instructions here.

  > [!WARNING]
  > PyTorch may need extra attention during installation. We describe the default installation; however, we highly recommend following the official documentation to install the correct version for your system. You can check your CUDA driver version by running `nvidia-smi` in your terminal to ensure compatibility.
- Download the dataset using the `download_data.sh` script:

  ```shell
  ./download_data.sh
  ```
  > [!WARNING]
  > On Windows, I recommend installing WSL. Then open the WSL terminal and navigate to where you have installed the HyperNOs library:
  >
  > ```shell
  > cd /mnt/c/Users/<your_user>/<your_path_to_HyperNOs>
  > ```
  >
  > and then try to run `./download_data.sh`. If you get an error like `/bin/bash^M: bad interpreter: No such file or directory`, this can be due to CR/LF line endings on Windows. In this case, run the following and then rerun the script:
  >
  > ```shell
  > sed -i -e 's/\r$//' download_data.sh
  > ./download_data.sh
  > ```
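  If `sed` is not available, the same fix can be done with a few lines of Python. This is a generic sketch, not part of HyperNOs; the demo file below is only a stand-in for `download_data.sh`:

  ```python
  from pathlib import Path
  import tempfile

  def strip_crlf(path: Path) -> None:
      r"""Replace Windows CRLF line endings with LF, in place
      (equivalent to `sed -i -e 's/\r$//' <file>`)."""
      data = path.read_bytes()
      path.write_bytes(data.replace(b"\r\n", b"\n"))

  # Demo on a throwaway file standing in for download_data.sh:
  demo = Path(tempfile.mkdtemp()) / "download_data.sh"
  demo.write_bytes(b"#!/bin/bash\r\necho hello\r\n")
  strip_crlf(demo)
  assert b"\r" not in demo.read_bytes()
  ```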
- If you want to download our trained models, this has to be done in two steps. First of all, clone the following GitHub repository:

  ```shell
  git clone --depth=1 https://github.com/MaxGhi8/tests
  ```

  This repository contains the TensorBoard logs for every model, with information about the training and the chosen architecture hyperparameters. Then you can download a model by running the following script and selecting the model that you want:

  ```shell
  ./download_trained_model.sh
  ```
  > [!WARNING]
  > As before, on Windows/WSL, if you get the error `/bin/bash^M: bad interpreter: No such file or directory`, try running `sed -i -e 's/\r$//' download_trained_model.sh` and then rerun the script with `./download_trained_model.sh`.
After installation, you can run the provided examples in the `neural_operators/examples` directory.
We provide interactive Jupyter notebooks in the `notebook/` directory to help you get started:
- Training Tutorial: Learn how to train a Neural Operator.
- Ray Tune Tutorial: Learn how to tune hyperparameters of a Neural Operator.
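Before diving into the notebooks, it helps to recall what operator learning boils down to: fitting a map between discretized input and output functions from sampled pairs. Below is a minimal, framework-free sketch of that idea in plain NumPy, fitting a linear map in place of a real neural operator; none of the names here are part of the HyperNOs API:

```python
import numpy as np

# Toy "operator learning": recover an unknown linear operator G from
# sampled (input, output) function pairs by gradient descent on the MSE.
# Real neural operators (FNO, DeepONet, ...) replace the matrix W with a
# deep, resolution-invariant architecture, but the training loop is the same.
rng = np.random.default_rng(0)
n_points, n_samples = 16, 200

G_true = rng.normal(size=(n_points, n_points)) / np.sqrt(n_points)  # hidden operator
X = rng.normal(size=(n_samples, n_points))   # discretized input functions
Y = X @ G_true.T                             # corresponding output functions

W = np.zeros((n_points, n_points))           # learnable parameters
lr = 0.1
for _ in range(500):                         # plain gradient descent on the MSE
    grad = (X @ W.T - Y).T @ X / n_samples
    W -= lr * grad

print(np.mean((X @ W.T - Y) ** 2))           # training error, close to zero
```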
To train a model (e.g., FNO) on a single machine, simply run the corresponding Python script:

```shell
cd neural_operators/examples/
python train_fno.py
```

You can use Ray Tune to optimize hyperparameters.
To run Ray Tune on your local machine, first start a Ray head node:

```shell
ray start --head
```

Then run the Ray script:

```shell
cd neural_operators/examples/
python ray_fno.py
```

For running on a cluster using Slurm, we provide a template script. Please refer to SLURM_USAGE.md for detailed instructions on how to configure and submit jobs using `template.slurm`.
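Conceptually, a tuning run is just: sample a configuration from a search space, train, record the validation score, keep the best. The plain-Python sketch below illustrates only that loop; the toy objective and its "best" settings are made up for the demo, and Ray Tune adds scheduling, early stopping, and parallel/distributed execution on top of this idea:

```python
import math
import random

# Toy objective standing in for "validation loss after training a model";
# it is minimized around lr = 1e-3 and width = 128 (arbitrary choices).
def validation_loss(config):
    return (math.log10(config["lr"]) + 3) ** 2 + (config["width"] - 128) ** 2 / 1e4

# Search space: log-uniform learning rate, categorical network width.
search_space = {
    "lr": lambda: 10 ** random.uniform(-5, -1),
    "width": lambda: random.choice([32, 64, 128, 256]),
}

random.seed(0)
best_config, best_loss = None, float("inf")
for _ in range(50):                        # 50 random trials
    config = {k: sample() for k, sample in search_space.items()}
    loss = validation_loss(config)
    if loss < best_loss:
        best_config, best_loss = config, loss

print(best_config)                         # best configuration found
```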
If you use our library, please consider citing our paper:
```bibtex
@misc{ghiotto2025hypernosautomatedparallellibrary,
  title={HyperNOs: Automated and Parallel Library for Neural Operators Research},
  author={Massimiliano Ghiotto},
  year={2025},
  eprint={2503.18087},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2503.18087},
}
```