Unzip `data.zip` with the password `vogelbeobachtung131719`, using a command such as `unzip -P PASSWORD data.zip`, to extract all the prompting strings.
Then run `python3 scripts/get_generations.py config.json` with the appropriate dataset and model configuration to get model generations.
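The schema of `config.json` is not documented here; purely as an illustration, a configuration selecting a dataset and model might look like the following (all field names are assumptions, not the repository's actual schema):

```json
{
  "dataset": "WinoPron",
  "model": "deepseek-r1"
}
```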
The raw generations from models contain reasoning chains delimited by `<think>` tags, followed by final answers in natural language.
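Separating the reasoning chain from the final answer can be sketched as follows (the function name and exact parsing logic are illustrative, not the repository's implementation):

```python
import re

def split_generation(raw: str) -> tuple[str, str]:
    """Split a raw generation into (reasoning chain, final answer).

    Assumes the reasoning is wrapped in a single <think>...</think>
    block, with the final answer following it in natural language.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        # No reasoning chain found: treat the whole output as the answer.
        return "", raw.strip()
    reasoning = match.group(1).strip()
    answer = raw[match.end():].strip()
    return reasoning, answer

raw = "<think>The pronoun refers to the doctor.</think>\nThe answer is the doctor."
reasoning, answer = split_generation(raw)
```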
Run `python3 scripts/evaluate.py` to perform a heuristic-based automatic evaluation; a sample of the results is also validated manually.
The results of this evaluation are stored in a folder called `postprocessed`, which contains the reasoning chains and final answers separately, as well as a column called `deepseek_correct` holding the boolean result of the automatic evaluation.
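Downstream analysis of the `deepseek_correct` column might look like the sketch below (the CSV layout and the string encoding of the booleans are assumptions about the postprocessed files, shown here with inline toy data):

```python
import csv
import io

# Toy stand-in for one postprocessed file; the real files live in the
# `postprocessed` folder and may use a different column order.
sample = io.StringIO(
    "reasoning,answer,deepseek_correct\n"
    "chain one,the doctor,True\n"
    "chain two,the nurse,False\n"
    "chain three,the doctor,True\n"
)
rows = list(csv.DictReader(sample))

# Accuracy according to the automatic evaluation.
accuracy = sum(r["deepseek_correct"] == "True" for r in rows) / len(rows)
print(f"{accuracy:.2%}")  # prints 66.67%
```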
Run `pytest` to run our tests (which currently cover the evaluation code).
All prompts are released in this repository; to reproduce our transformations of the original data, do the following.
First, place the following original data files from the corresponding repositories into a folder called `raw_data`:
- RUFF: `13_eo_task.tsv`, `13_eo_ep_task.tsv`
- WinoPron: `double.tsv`
- GAP: `gap-test.tsv`
- MISGENDERED: `templates/*.csv`, `names/*.txt`, `pronouns.csv`
Since MISGENDERED does not come with prepared data, we first create the full dataset of 3.3 million instances and then downsample it to 3,300 instances by running `python3 scripts/create_MISGENDERED.py`.
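The downsampling step can be sketched as uniform sampling without replacement (this is an illustration with a small stand-in dataset, not the repository's script, which may use a different sampling strategy or seed):

```python
import random

random.seed(0)  # fix the seed so the subsample is reproducible

# Stand-in for the 3.3M generated MISGENDERED instances.
full_dataset = [f"instance_{i}" for i in range(10_000)]

# Draw 3,300 distinct instances uniformly at random.
subsample = random.sample(full_dataset, k=3_300)
```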
Similarly, since KnowRef-60k is distributed via HuggingFace rather than as a CSV file, run `python3 scripts/create_KnowRef.py` to create the raw data.
Then, run `python3 scripts/convert_datasets.py` to create all the prompts in a folder called `data/`.