
Commit 01e398d

Merge branch 'pik-piam:main' into main
2 parents be42ad4 + 1286b98 commit 01e398d


50 files changed: +2069 −3224 lines

README.md

Lines changed: 45 additions & 37 deletions
@@ -1,7 +1,11 @@
 # PyPSA-China: An Open-Source Optimisation Model of the Chinese Energy System
 
+PyPSA-China (PIK) is an open-source model of the Chinese energy system covering electricity and heat. It co-optimizes dispatch and investments under user-set constraints, such as limits to environmental impacts, to minimize costs. The model works at provincial resolution and can simulate a full year at hourly resolution.
+
+## PIK version
 This is the PIK implementation of the PyPSA-China power model, first published by Hailiang Liu et al. for their study of [hydro-power in China](https://doi.org/10.1016/j.apenergy.2019.02.009) and extended by Xiaowei Zhou et al. for their ["Multi-energy system horizon planning: Early decarbonisation in China avoids stranded assets"](https://doi.org/10.1049/ein2.12011) paper. It is adapted from the Zhou version by the [PIK RD3-ETL team](https://www.pik-potsdam.de/en/institute/labs/energy-transition/energy-transition-lab), with the aim of coupling it to the [REMIND](https://www.pik-potsdam.de/en/institute/departments/transformation-pathways/models/remind) integrated assessment model. A reference guide is available as part of the [documentation](https://pik-piam.github.io/PyPSA-China-PIK/).
 
+## Overview
 PyPSA-China should be understood as a modelling workflow, with Snakemake as the workflow manager, built around the [PyPSA python power system analysis](https://pypsa.org/) package. The workflow collects data, builds the power system network and plots the results. It is akin to its more mature sister project, [PyPSA-EUR](https://github.com/PyPSA/pypsa-eur), from which it is derived.
 
 Unlike PyPSA-EUR, which simplifies high-resolution data into a user-defined network size, the PyPSA-China network is currently fixed to one node per province. This is in large part due to data availability issues.
@@ -10,57 +14,61 @@ The PyPSA can perform a number of different study types (investment decision, op
 
 The PyPSA-China-PIK is currently under development. Please contact us if you intend to use it for publications.
 
+# Documentation
+The documentation can be found at https://pik-piam.github.io/PyPSA-China-PIK/
 
-# Installation
+# Getting started
 
-## Set-up on the PIK cluster
-Gurobi license activation from the compute nodes requires internet access. The workaround is an ssh tunnel to the login nodes, which can be set up on the compute nodes with
-```
-# interactive session on the compute nodes
-srun --qos=priority --pty bash
-# key pair gen (here ed25518 but can be rsa)
-ssh-keygen -t ed25519 -f ~/.ssh/id_rsa.cluster_internal_exchange -C "$USER@cluster_internal_exchange"
-# leave the compute nodes
-exit
+## Installation
+
+An installation guide is provided at https://pik-piam.github.io/PyPSA-China-PIK/
+
+> [!NOTE]
+> **Set-up on the PIK cluster**
+> Gurobi license activation from the compute nodes requires internet access. The workaround is an ssh tunnel to the login nodes, which can be set up on the compute nodes with
+```bash
+# interactive session on the compute nodes
+srun --qos=priority --pty bash
+# key pair generation (here ed25519, but rsa also works)
+ssh-keygen -t ed25519 -f ~/.ssh/id_rsa.cluster_internal_exchange -C "$USER@cluster_internal_exchange"
+# leave the compute nodes
+exit
 ```
+
+> You will then need to add the contents of the public key `~/.ssh/id_rsa.cluster_internal_exchange.pub` to your `~/.ssh/authorized_keys`, e.g. with `cat <key_name> >> authorized_keys`
 
 > TROUBLESHOOTING
 > you may have some issues with the solver tunnel failing (permission denied). One of these two steps should solve it
 > option 1: name the exchange key `id_rsa`.
 > option 2: copy the contents to authorized_keys from the compute nodes (from the ssh dir: `srun --qos=priority --pty bash; cat <key_name> >> authorized_keys; exit`)
 
-In addition you should have your .profile & .bashrc set up as per https://gitlab.pik-potsdam.de/rse/rsewiki/-/wikis/Cluster-Access
+> In addition you should have your .profile & .bashrc set up as per https://gitlab.pik-potsdam.de/rse/rsewiki/-/wikis/Cluster-Access
 and add `module load anaconda/2024.10` (or latest) to it
 
-## General installation
-- Create the conda environment in workflow/envs/ (maybe snakemake does it automatically for you provided the profile has use-conda) `conda env create --file path_to_env` (name is opt.). You can use either the pinned (exact) or the loose env (will install later package versions too).
-- If you experience issues switch to the pinned environment #TODO: generate
-- NB! you may need to modify atlite for things to work. Instructions to follow.
+## Getting the data
+You will need to enable data retrieval in the config:
+```yaml
+enable:
+  build_cutout: false # if you want to build your own (requires ERA5 api access)
+  retrieve_cutout: true # if you want to download the pre-computed one from zenodo
+  retrieve_raster: true # get raster data
+```
+Some of the files are very large - expect a slow process!
 
+- You can also download the data manually and copy it over to the correct folder. The source and target destinations are the input/output of the `fetch_` rules in `workflow/rules/fetch_data.smk`
+- **PIK HPC users only**: you can also copy the data from other users
 
-## Getting the data
-- some of the data is downloaded by the snakemake workflow (e.g. cutouts). Just make sure the relevant config options are set to true if it is your first run
-- the shapefiles can be generated with the build_province_shapes script
-- the [zenodo bundle](https://zenodo.org/records/13987282) from the pypsa-China v3 comes with the data but in the old format, you will have to manually restructure it (or we can write a script)
-- [PIK HPC users only] you can also copy the data from the tmp folder
+## Usage
 
-# Usage
-- If you are not running on the PIK hpc, you will need to make a new profile for your machine under `config/<myprofile>/config.yaml`
+Detailed instructions are provided in the documentation.
+### Local execution
+- local execution can be started (once the environment is activated) with `snakemake`
+- to customize the options, create `my_config.yaml` and launch `snakemake --configfile my_config.yaml`. Configuration options are summarised in the documentation.
+### Remote execution
+This is relevant for Slurm HPCs and other remotes with a submit-job command
 - The workflow can be launched with `snakemake --profile config/compute_profile`
 - [PIK HPC users only] use `snakemake --profile config/pik_hpc_profile`
+- If you are not running on the PIK HPC, you will need to make a new profile for your machine under `config/<compute_profile>/config.yaml`
 
-# Changelog
-- add path managers and merge duplicate functionalities
-- moved hardcoded data paths to config
-- restructure project to match snakemake8 guidelines & update to snakemake8
-- move hardcoded values to centralised store constants.py (file paths still partially hardcoded)
-- start adding typing
-- add scripts to pull data
-- add derived_data/ folder and change target of data cleaning/prep steps for clarity
-
-# TODOs:
-- see issues
-- add pinned env
-- make a PR on atlite to add the squeeze array, which may be needed
-- integrate various into settings
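The `--configfile` mechanism above merges the user file over the default config key by key, so only the options you want to change need to appear in `my_config.yaml`. A minimal sketch of that merge behaviour (illustrative only: `merge_configs` is a stand-in, not Snakemake's actual implementation):

```python
# Sketch of Snakemake-style config layering: user values win, untouched
# defaults survive. Values below mirror the `enable` block from the README.
def merge_configs(default: dict, override: dict) -> dict:
    """Recursively merge `override` into `default` (later values win)."""
    merged = dict(default)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_configs(merged[key], value)
        else:
            merged[key] = value
    return merged

default = {"enable": {"build_cutout": False, "retrieve_cutout": True, "retrieve_raster": True}}
my_config = {"enable": {"retrieve_cutout": False}}  # e.g. cutout already on disk

config = merge_configs(default, my_config)
print(config["enable"])
```

Only `retrieve_cutout` is overridden; the other two flags keep their default values.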

config/default_config.yaml

Lines changed: 20 additions & 11 deletions
@@ -4,7 +4,7 @@
 # DO NOT FORGET TO LOOK AT THE TECHNOLOGY CONFIGURATION FILES
 
 run:
-  name:
+  name: "unamed_default_run"
   foresight: "overnight"
 
 paths:
@@ -178,7 +178,7 @@ Techs:
   coal_ccs_retrofit: true # currently myopic pathway only. CC = co2 cap
 
   ## add components (overwrites vre tech choice)
-  heat_coupling: false
+  heat_coupling: true
   add_biomass: True
   add_hydro: True
   add_H2: True
@@ -286,14 +286,6 @@ solving:
 
   mem: 80000 # memory in MB; 20 GB enough for 50+B+I+H2; 100 GB for 181+B+I+H2
 
-# brownfield options
-existing_capacities:
-  add: True # whether to add brownfield capacities
-  grouping_years: [1980, 1985, 1990, 1995, 2000, 2005, 2010, 2015, 2019, 2020, 2025, 2030, 2035, 2040, 2045, 2050, 2055, 2060]
-  collapse_years: False # Treat as a single unit when preparing & solving network
-  threshold_capacity: 1 # TODO UNIT
-  techs: ['coal', 'CHP coal', 'CHP gas', 'OCGT', 'CCGT', 'solar', 'onwind', 'offwind', 'nuclear']
-
 # transmission options
 lines:
   line_length_factor: 1.25
@@ -303,8 +295,25 @@ lines:
 
 security:
   line_margin: 70 # max percent of line capacity
+
+# brownfield options
+existing_capacities:
+  add: True # whether to add brownfield capacities
+  grouping_years: [1985, 1990, 1995, 2000, 2005, 2010, 2015, 2020, 2025, 2030, 2035, 2040, 2045, 2050, 2055, 2060]
+  collapse_years: False # Treat as a single unit when preparing & solving network
+  threshold_capacity: 1 # TODO UNIT
+  techs: ['coal', 'CHP coal', 'CHP gas', 'OCGT', 'CCGT', 'solar', 'onwind', 'offwind', 'nuclear', "PHS"]
+  node_assignment: simple # simple | gps
+
+# brownfield options
+existing_capacities:
+  add: True # whether to add brownfield capacities
+  grouping_years: [1985, 1990, 1995, 2000, 2005, 2010, 2015, 2020, 2025, 2030, 2035, 2040, 2045, 2050, 2055, 2060]
+  collapse_years: False # Treat as a single unit when preparing & solving network
+  threshold_capacity: 1 # TODO UNIT
+  techs: ['coal', 'CHP coal', 'CHP gas', 'OCGT', 'CCGT', 'solar', 'onwind', 'offwind', 'nuclear', "PHS"]
+  node_assignment: simple # simple | gps
 
-# contingencies (slower!) -> epsilon is fraction of vre gen or load for which reserves should exist
 operational_reserve:
   activate: false
   epsilon_load: 0.02
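The `grouping_years` list in the diff above bins brownfield units by build year. One plausible reading of that binning (a sketch under stated assumptions, not the model's actual code: `assign_grouping_year` is a hypothetical helper) assigns each unit to the first grouping year at or after its start year:

```python
import bisect

# Grouping years as added to default_config.yaml in this commit
GROUPING_YEARS = [1985, 1990, 1995, 2000, 2005, 2010, 2015, 2020,
                  2025, 2030, 2035, 2040, 2045, 2050, 2055, 2060]

def assign_grouping_year(start_year: int, grouping_years=GROUPING_YEARS) -> int:
    """Bin a unit's start year into the first grouping year >= start_year.

    Units older than the first grouping year fall into the first bin.
    """
    idx = bisect.bisect_left(grouping_years, start_year)
    return grouping_years[min(idx, len(grouping_years) - 1)]

print(assign_grouping_year(1987))  # -> 1990
print(assign_grouping_year(1979))  # -> 1985
```

With `collapse_years: True` the bins would instead be merged into a single unit per technology when preparing and solving the network.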

config/global_energy_monitor.yaml

Lines changed: 52 additions & 0 deletions
@@ -0,0 +1,52 @@
+global_energy_monitor_plants:
+
+  # split GEM CHP plants into separate CHP technology
+  CHP:
+    split: true
+    aliases: ["CHP", "cogeneration", "cogen", "heat", "heating"]
+
+  tech_map:
+    # map GEM Type based on technology
+    # Type: technology: rename_to
+    # Set rename_to: null to drop
+    # default is a * wildcard (except pre-defined).
+    gas:
+      combined cycle: CCGT
+      default: OCGT
+    wind:
+      Onshore: onwind
+      Offshore hard mount: offwind
+      Offshore floating: offwind
+    solar:
+      Solar Thermal: solar thermal
+      PV: solar
+      default: null
+    # tricky because some conventional is also PHS
+    hydropower:
+      pumped storage: PHS
+      # run-of-river: ror
+      default: hydro
+
+  # level of brownfield
+  base_year: 2020 # units retired before this date will be dropped
+  status:
+    - operating
+    - retired
+    # - construction
+    # for more brownfield add `construction`, `pre-construction` and/or `announced`
+
+  relevant_columns:
+    - "Type"
+    - "Plant name"
+    - "Technology"
+    - "Fuel"
+    - "CHP"
+    - "Capacity (MW)"
+    - "Start year"
+    - "Retired year"
+    - "Subregion"
+    - "Region"
+    - "Local area (taluk, county)"
+    - "Major area (prefecture, district)"
+    - "Subnational unit (state, province)"
+    - "Status"
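The `tech_map` comments describe a two-level lookup with a `default` wildcard and `null` meaning drop. A sketch of how such a mapping could be applied to a GEM (Type, Technology) pair, assuming exactly the semantics stated in those comments (`map_tech` is a hypothetical helper, not a function from the workflow):

```python
# Mirror of the tech_map added in config/global_energy_monitor.yaml
TECH_MAP = {
    "gas": {"combined cycle": "CCGT", "default": "OCGT"},
    "wind": {"Onshore": "onwind", "Offshore hard mount": "offwind",
             "Offshore floating": "offwind"},
    "solar": {"Solar Thermal": "solar thermal", "PV": "solar", "default": None},
    "hydropower": {"pumped storage": "PHS", "default": "hydro"},
}

def map_tech(gem_type: str, technology: str):
    """Rename a (Type, Technology) pair; returning None drops the record."""
    mapping = TECH_MAP.get(gem_type, {})
    if technology in mapping:
        return mapping[technology]
    # '*' wildcard: anything not pre-defined falls through to `default`
    return mapping.get("default")

print(map_tech("gas", "combined cycle"))  # -> CCGT
print(map_tech("solar", "CSP tower"))     # -> None (record dropped)
```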

config/pik_hpc_profile/config.yaml

Lines changed: 25 additions & 2 deletions
@@ -91,7 +91,7 @@ set-resources:
     threads: 1
     partition: io # login
     qos: io
-  retrieve_build_up_raster:
+  retrieve_Build_up_raster:
     time: 300
     threads: 1
     partition: io # login
@@ -106,11 +106,27 @@ set-resources:
     threads: 1
     partition: io
     qos: io
+  retrieve_powerplants:
+    time: 300
+    threads: 2
+    partition: io
+    qos: io
+  retrieve_bathymetry_raster:
+    time: 300
+    threads: 1
+    partition: io
+    qos: io
   retrieve_Shrubland_raster:
     time: 300
     threads: 1
     partition: io
     qos: io
+  retrieve_cutout:
+    time: 300
+    threads: 1
+    partition: io
+    qos: io
 
 # GROUPS
 
@@ -123,10 +139,17 @@ groups:
   prepare_networks: prep_network
   add_existing_baseyear: prep_network2
   make_summary: summary
+  retrieve_cutout: retrieve
+  retrieve_region_shapes: retrieve
+  retrieve_rasters: retrieve
+  retrieve_Build_up_raster: retrieve
+  retrieve_Grass_raster: retrieve
+  retrieve_Bare_raster: retrieve
 group-components:
   plot: 3
   build_load: 3
   prep_network: 3
   prep_network2: 3
+  retrieve: 1 # io queue
   summary: 3
-  prep_capacities: 3
+  prep_capacities: 3
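The pattern in this profile is consistent: each new `retrieve_*` download rule gets `io` partition resources under `set-resources` and is attached to the `retrieve` group, whose `group-components: 1` setting serialises the downloads on the io queue. A sketch for wiring up a future rule (`retrieve_new_raster` is a made-up name, shown only to illustrate the convention):

```yaml
set-resources:
  retrieve_new_raster:   # hypothetical rule name
    time: 300
    threads: 1
    partition: io
    qos: io

groups:
  retrieve_new_raster: retrieve  # batch with the other io-queue downloads
```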

config/plot_config.yaml

Lines changed: 5 additions & 0 deletions
@@ -6,6 +6,11 @@ plotting:
   transparent: false
 statistics:
   clip_numerical_zeroes: true
+  add_labels:
+    - "capacity_factor"
+    - "installed_capacity"
+    - "lcoe"
+    - "mv_minus_lcoe"
 cost_panel:
   y_axis: "annualized system cost bEUR/a" # only true if investment period is one year
 cost_map:

Deleted files:

- resources/data/existing_infrastructure/CCGT capacity.csv (28 deletions)
- resources/data/existing_infrastructure/CHP coal capacity.csv (29 deletions)
- resources/data/existing_infrastructure/CHP gas capacity.csv (16 deletions)
- resources/data/existing_infrastructure/China_current_capacity.cpg (1 deletion)
