# PyPSA-China: An Open-Source Optimisation Model of the Chinese Energy System
PyPSA-China (PIK) is an open-source model of the Chinese energy system covering electricity and heat. It co-optimizes dispatch and investments under user-set constraints, such as limits on environmental impacts, to minimize costs. The model works at provincial resolution and can simulate a full year at hourly resolution.
## PIK version
This is the PIK implementation of the PyPSA-China power model, first published by Hailiang Liu et al. for their study of [hydro-power in China](https://doi.org/10.1016/j.apenergy.2019.02.009) and extended by Xiaowei Zhou et al. for their ["Multi-energy system horizon planning: Early decarbonisation in China avoids stranded assets"](https://doi.org/10.1049/ein2.12011) paper. It is adapted from the Zhou version by the [PIK RD3-ETL team](https://www.pik-potsdam.de/en/institute/labs/energy-transition/energy-transition-lab), with the aim of coupling it to the [REMIND](https://www.pik-potsdam.de/en/institute/departments/transformation-pathways/models/remind) integrated assessment model. A reference guide is available as part of the [documentation](https://pik-piam.github.io/PyPSA-China-PIK/).
## Overview
PyPSA-China should be understood as a modelling workflow, with snakemake as the workflow manager, built around the [PyPSA python power system analysis](https://pypsa.org/) package. The workflow collects data, builds the power system network, and plots the results. It is akin to its more mature sister project, [PyPSA-EUR](https://github.com/PyPSA/pypsa-eur), from which it is derived.
Unlike PyPSA-EUR, which simplifies high-resolution data into a user-defined network size, the PyPSA-China network is currently fixed to one node per province. This is in large part due to data availability issues.
PyPSA can perform a number of different study types (investment decision, op…
PyPSA-China-PIK is currently under development. Please contact us if you intend to use it for publications.
# Documentation
The documentation can be found at https://pik-piam.github.io/PyPSA-China-PIK/
# Getting started
An installation guide is provided at https://pik-piam.github.io/PyPSA-China-PIK/
> [!NOTE]
> **Set-up on the PIK cluster**
> Gurobi license activation from the compute nodes requires internet access. The workaround is an ssh tunnel to the login nodes, which can be set up from the compute nodes.
> You will then need to append the contents of the public key `~/.ssh/id_rsa.cluster_internal_exchange.pub` to your `~/.ssh/authorized_keys`, e.g. with `cat <key_name> >> authorized_keys`.
> **Troubleshooting**
> If the solver tunnel fails with a permission-denied error, one of these two steps should solve it:
> Option 1: name the exchange key `id_rsa`.
> Option 2: append the key to `authorized_keys` from a compute node (from the ssh dir: `srun --qos=priority --pty bash; cat <key_name> >> authorized_keys; exit`).
> In addition, you should have your `.profile` & `.bashrc` set up as per https://gitlab.pik-potsdam.de/rse/rsewiki/-/wikis/Cluster-Access
> and add `module load anaconda/2024.10` (or latest) to it.
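If the permission-denied issue from the troubleshooting options above persists, pointing ssh explicitly at the exchange key via `~/.ssh/config` is another possibility. The sketch below is purely illustrative: the host alias and login-node name are placeholder assumptions, not actual PIK hostnames.

```
# ~/.ssh/config: illustrative entry; replace Host/HostName with your
# cluster's real login-node names (the values below are placeholders)
Host cluster-login
    HostName login01.example.org
    IdentityFile ~/.ssh/id_rsa.cluster_internal_exchange
    IdentitiesOnly yes
```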
## Getting the data
You will need to enable data retrieval in the config:
```yaml
enable:
  build_cutout: false   # if you want to build your own (requires ERA5 API access)
  retrieve_cutout: true # if you want to download the pre-computed one from zenodo
  retrieve_raster: true # get raster data
```
Some of the files are very large - expect a slow process!
- You can also download the data manually and copy it over to the correct folder. The source and target destinations are the input/output of the `fetch_` rules in `workflow/rules/fetch_data.smk`
- **PIK HPC users only:** you can also copy the data from other users
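As a quick sanity check before launching the workflow, the `enable` block can be parsed and inspected. The sketch below uses PyYAML; the keys mirror the config snippet above, and the values are illustrative.

```python
import yaml

# the retrieval flags from the config snippet above (illustrative values)
snippet = """
enable:
  build_cutout: false   # if you want to build your own (requires ERA5 API access)
  retrieve_cutout: true # if you want to download the pre-computed one from zenodo
  retrieve_raster: true # get raster data
"""

config = yaml.safe_load(snippet)
# list which data-retrieval steps are switched on
enabled = [flag for flag, on in config["enable"].items() if on]
print(enabled)  # → ['retrieve_cutout', 'retrieve_raster']
```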
## Usage
Detailed instructions can be found in the documentation.
### Local execution
- Local execution can be started (once the environment is activated) with `snakemake`
- To customize the options, create `my_config.yaml` and launch `snakemake --configfile my_config.yaml`. Configuration options are summarised in the documentation.
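Such an override file only needs the keys you want to change. A minimal illustrative `my_config.yaml`, assuming the workflow merges it with the defaults, could look like:

```yaml
# my_config.yaml: partial override; keys not listed here keep their defaults
enable:
  retrieve_cutout: true
  retrieve_raster: true
```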
### Remote execution
This is relevant for slurm HPCs and other remotes with a job submission command.
- The workflow can be launched with `snakemake --profile config/compute_profile`
- **PIK HPC users only:** use `snakemake --profile config/pik_hpc_profile`
- If you are not running on the PIK HPC, you will need to make a new profile for your machine under `config/<compute_profile>/config.yaml`
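For a slurm cluster, such a profile might look like the sketch below. This assumes snakemake ≥ 8 with the slurm executor plugin installed; every value is a placeholder to adapt to your machine.

```yaml
# config/<compute_profile>/config.yaml: illustrative slurm profile
executor: slurm
jobs: 20                          # max concurrent cluster jobs
software-deployment-method: conda # build rule environments via conda
default-resources:
  slurm_partition: standard       # placeholder partition name
  mem_mb: 8000
  runtime: 120                    # minutes
```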