Question about the 60k / 10k FFHQ split mentioned in README #11

@Laura-musiol


Hi, thanks for releasing the code and benchmarks.

While reading the README, I noticed the statement that for FFHQ the first 60k images are used for training and the remaining 10k for testing.
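For concreteness, my reading of that statement is a plain index-based split, something like the sketch below (the directory path and `00000.png`-style index naming are my assumptions, just to illustrate):

```python
from pathlib import Path

# Sketch of my reading of the README's 60k / 10k split.
# Assumes the 70k FFHQ images sort in index order (e.g. 00000.png ... 69999.png).
ffhq_dir = Path("ffhq/images1024x1024")
all_images = sorted(ffhq_dir.glob("*.png"))  # 70,000 files

train_images = all_images[:60_000]  # first 60k for training
test_images = all_images[60_000:]   # remaining 10k for testing
```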

In the paper, you also mention that the experiments use samples from
https://github.com/layer6ai-labs/dgm-eval

However, looking through that repository and the linked generative model repositories, it seems that the models there were trained on the full 70k FFHQ dataset, rather than using a fixed 60k / 10k split.

Because of this, I'm unsure how the data was actually prepared for the experiments.

Could you clarify:

  • Were the generative models retrained using the fixed 60k / 10k FFHQ split mentioned in the README?
  • Or were pretrained models from the dgm-eval repository used, which appear to be trained on the full 70k dataset, and the 60k / 10k split was applied only during evaluation?

Thanks in advance for the clarification!
