This repository contains reference architectures and test cases for distributed model training with Amazon SageMaker HyperPod, AWS ParallelCluster, AWS Batch, and Amazon EKS. The test cases cover different types and sizes of models as well as different frameworks and parallel optimizations (PyTorch DDP/FSDP, Megatron-LM, NeMo...).
The major components of this directory are:
├── 1.architectures/ # CloudFormation templates for reference architectures
├── 2.ami_and_containers/ # Scripts to create AMIs and container images
├── 3.test_cases/ # Reference test cases and/or benchmark scripts
├── 4.validation_and_observability/ # Tools to measure performance or troubleshoot
└── micro-benchmarks/ # Micro-benchmarks (NCCL, NCCOM, NVSHMEM, etc.)
NOTE: The architectures are designed to work with the S3 bucket and VPC created from the reference templates in 1.architectures/0.common/ and 1.architectures/1.vpc_network/. We strongly recommend deploying these two templates before any of the reference architectures.
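As a minimal sketch, both templates can be deployed with the AWS CLI. The stack names, parameter names, and template file names below are illustrative placeholders; check the actual file names under 1.architectures/ before running.

```bash
# Deploy the common resources (S3 bucket, event notifications).
# Stack and template names are placeholders; adjust to 1.architectures/0.common/.
aws cloudformation deploy \
  --stack-name ml-common-resources \
  --template-file 1.architectures/0.common/template.yaml \
  --capabilities CAPABILITY_IAM

# Deploy the VPC and subnets used by the reference architectures.
# The parameter override below is a hypothetical example.
aws cloudformation deploy \
  --stack-name ml-vpc \
  --template-file 1.architectures/1.vpc_network/template.yaml \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides AvailabilityZone=us-east-1a
```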
You can follow the workshops below to train models on AWS. Each contains examples for several test cases as well as nuggets of information on operating a cluster for LLM training.
| Name | Comments |
|---|---|
| AI on SageMaker HyperPod | Workshop for SageMaker HyperPod, shows how to deploy and monitor it |
| AWS ParallelCluster | Similar workshop to the HyperPod one, but for AWS ParallelCluster |
Architectures are located in 1.architectures and consist of utilities and service-related templates.
| Name | Category | Usage |
|---|---|---|
| 0.common | Storage | Common resources (S3 bucket, event notifications) |
| 1.vpc_network | Network | Create a VPC with subnets and required resources |
| 2.aws-parallelcluster | Compute | Cluster templates for GPU & custom silicon training |
| 3.aws-batch | Compute | AWS Batch template for distributed training |
| 4.amazon-eks | Compute | Manifest files to train with Amazon EKS |
| 5.sagemaker-hyperpod | Compute | SageMaker HyperPod template for distributed training |
| 6.ldap_server | Identity | LDAP server for multi-user cluster access |
| 7.sagemaker-hyperpod-eks | Compute | SageMaker HyperPod with EKS orchestration |
| 8.accounting-database | Tooling | Accounting database for job tracking |
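For example, the ParallelCluster architecture is typically created from a cluster configuration file with the pcluster CLI. The cluster name and configuration file below are placeholders, not files shipped in this repository.

```bash
# Create a cluster from a ParallelCluster configuration file
# ("my-training-cluster" and "cluster-config.yaml" are placeholder names).
pcluster create-cluster \
  --cluster-name my-training-cluster \
  --cluster-configuration cluster-config.yaml

# Poll provisioning status until the cluster reaches CREATE_COMPLETE.
pcluster describe-cluster --cluster-name my-training-cluster
```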
You will also find documentation for EFA and the recommended environment variables.
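As a rough sketch, EFA-related settings are usually exported in the job environment. The exact values depend on your instance type and library versions, so treat the variables below as illustrative rather than the repository's recommended set.

```bash
# Illustrative EFA/NCCL settings; consult the EFA documentation in
# 1.architectures/ for the values recommended for your instance type.
export FI_PROVIDER=efa              # use the EFA Libfabric provider
export FI_EFA_USE_DEVICE_RDMA=1     # enable GPUDirect RDMA on supported instances (e.g. p4d/p5)
export NCCL_DEBUG=INFO              # print NCCL initialization details for troubleshooting
```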
Custom machine images can be built using Packer for AWS ParallelCluster, Amazon EKS and plain EC2. These images are based on Ansible roles and playbooks.
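A typical Packer workflow looks like the sketch below; the template file name and variable are placeholders, since the actual templates, variables, and Ansible playbooks live under 2.ami_and_containers/.

```bash
# Initialize Packer plugins, then build an AMI from a template
# (the file name and "aws_region" variable are hypothetical examples).
packer init .
packer build -var "aws_region=us-east-1" packer-ami.pkr.hcl
```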
Test cases are organized under 3.test_cases/ by framework (e.g. pytorch/, megatron/, jax/). Within each framework, directories are named after the training library or method (e.g. picotron/, FSDP/, megatron-lm/).
Each test case follows this general structure:
3.test_cases/
└── <framework>/ # e.g. pytorch, megatron, jax
└── <library>/ # e.g. picotron, FSDP, megatron-lm
└── <model>/ # e.g. SmolLM-1.7B (may be omitted for single-model cases)
├── Dockerfile # Container / environment setup
├── README.md
├── slurm/ # Slurm-specific launch scripts
├── kubernetes/ # Kubernetes manifests
└── hyperpod-eks/ # HyperPod EKS instructions
The top-level directory for each test case contains a general introduction and the environment setup (Dockerfiles, training scripts, configurations), while the subdirectories provide service-specific launch instructions.
Browse 3.test_cases/ to see the full list of available frameworks and test cases.
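A typical workflow, sketched below with placeholder paths and script names, is to build the test case container and then launch it with the orchestrator-specific scripts.

```bash
# Placeholder paths: substitute the framework, library, and model directories
# of the test case you want to run.
cd 3.test_cases/pytorch/FSDP

# Build the container image described by the test case Dockerfile.
docker build -t fsdp-test-case -f Dockerfile .

# On a Slurm cluster, submit the launch script provided under slurm/
# (script names vary per test case; check its README.md).
sbatch slurm/train.sbatch
```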
Utility scripts and tools for validating your environment and monitoring training jobs are under 4.validation_and_observability/.
| Name | Comments |
|---|---|
| 1.pytorch-env-validation | Validates your PyTorch environment |
| 2.gpu-cluster-healthcheck | GPU cluster health checks |
| 3.efa-node-exporter | Node exporter with Amazon EFA monitoring modules |
| 4.prometheus-grafana | Monitoring for SageMaker HyperPod and EKS GPU clusters |
| 5.nsight | Shows how to run NVIDIA Nsight Systems to profile your workload |
Micro-benchmarks for evaluating network and communication performance are under micro-benchmarks/.
| Name | Comments |
|---|---|
| nccl-tests | NCCL collective communication benchmarks |
| nccom-tests | NCCOM communication benchmarks |
| nvshmem | NVSHMEM benchmarks |
| expert-parallelism | Expert parallelism (MoE) benchmarks |
Thanks to all the contributors for building, reviewing and testing.