Hello Rust (Axum) on Cloud Run, revisited 2026/02/23
```sh
# build
docker build -t hello-axum -f ./Dockerfile .
# run
docker run --rm --name hello-axum -p 8081:8080 -e "TARGET=foo" hello-axum
# view (quote the URL so the shell does not interpret `?`)
open http://localhost:8081
open http://localhost:8081/api/v1/bar
open "http://localhost:8081/api/v1/search?id=bar"
```

The Docker image is optimized for size using cargo-chef and Rust release profile optimizations:
- Docker Image Size: 9.49MB
- Binary Size: 591K
- `opt-level = "z"`: optimize for size
- `strip = true`: automatically strip symbols from the binary
- `lto = true`: enable Link Time Optimization (LTO)
- `codegen-units = 1`: reduce parallel code generation units to increase optimization
- `panic = "abort"`: reduce binary size by removing unwinding code
- Alpine Linux base image for minimal OS footprint
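In `Cargo.toml`, the settings above live under the release profile; a sketch (package metadata and dependencies omitted):

```toml
# Release-profile settings matching the list above.
[profile.release]
opt-level = "z"     # optimize for size
strip = true        # strip symbols from the binary
lto = true          # link-time optimization
codegen-units = 1   # fewer codegen units, more optimization opportunities
panic = "abort"     # drop unwinding machinery
```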
```
SIZE    CREATED BY
0B      ENTRYPOINT ["/usr/local/bin/hello-axum"]
791kB   COPY /app/target/release/hello-axum /usr/local/bin/
0B      USER appuser
3.06kB  RUN /bin/sh -c addgroup -S appgroup && adduser -S appuser -G appgroup
0B      CMD ["/bin/sh"]
8.7MB   ADD alpine-minirootfs-3.23.2-aarch64.tar.gz /
```
- Application Binary: 791kB (Rust release build with optimizations)
- Alpine Base: 8.7MB (minimal OS footprint)
- User Creation: 3.06kB (appgroup and appuser setup)
- Configuration: 0B (ENTRYPOINT, USER, CMD are metadata-only layers)
Using dive for layer analysis:
- Efficiency: 99.98%
- Wasted Space: 3.0 kB
- User Wasted Percent: 0.38%
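dive can also enforce these numbers in CI mode via a `.dive-ci` file (run with `CI=true dive hello-axum`). The rule names below come from dive's documentation; the thresholds themselves are illustrative, not from this build:

```yaml
rules:
  # Fail if layer efficiency drops below 95%
  lowestEfficiency: 0.95
  # Fail if wasted bytes or wasted percentage grow too large
  highestWastedBytes: 1MB
  highestUserWastedPercent: 0.10
```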
The build process is highly optimized with minimal waste, achieved through:
- Multi-stage Docker build with cargo-chef for dependency caching
- Alpine Linux base image
- Release profile optimizations in Cargo.toml
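The multi-stage build can be sketched as follows. Stage names, the cargo-chef image tag, and exact paths are assumptions based on the layer listing above, not the actual Dockerfile:

```dockerfile
# Planner/builder stages use cargo-chef so dependency compilation is
# cached independently of application source changes.
FROM lukemathwalker/cargo-chef:latest-rust-alpine AS chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies only; this layer stays cached until Cargo.toml changes.
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release --bin hello-axum

# Minimal runtime stage, matching the layers shown by the breakdown above.
FROM alpine:3.23
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder /app/target/release/hello-axum /usr/local/bin/
USER appuser
ENTRYPOINT ["/usr/local/bin/hello-axum"]
```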
```sh
# Ensure we are all set
gcloud auth login
# See the projects list
gcloud projects list
# Your config for Cloud Run
export PROJECT_ID=YOUR_PROJECT_ID_SEE_ABOVE
export PROJECT_NUMBER=YOUR_PROJECT_NUMBER_SEE_ABOVE
export SERVICE_NAME=hello-axum
gcloud config set project $PROJECT_ID
# Enable the Kaniko build cache: https://github.com/GoogleContainerTools/kaniko
gcloud config set run/platform managed
gcloud config set builds/use_kaniko True
gcloud config set builds/kaniko_cache_ttl 24
# Give the Cloud Build service agent build permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:service-$PROJECT_NUMBER@gcp-sa-cloudbuild.iam.gserviceaccount.com" \
  --role="roles/cloudbuild.serviceAgent"
# Submit build
gcloud builds submit --tag gcr.io/$PROJECT_ID/$SERVICE_NAME --timeout=30m
# Deploy with environment variables
gcloud run deploy --image gcr.io/$PROJECT_ID/$SERVICE_NAME --set-env-vars TARGET=foo
# Update environment variables (optional for testing)
gcloud run services update $SERVICE_NAME --update-env-vars TARGET=bar
```

If `gcloud builds submit` fails with an error like:

```
ERROR: (gcloud.builds.submit) FAILED_PRECONDITION: invalid bucket "123.cloudbuild-logs.googleusercontent.com"; service account [email protected] does not have access to the bucket
```
Grant the Cloud Build service agent role and retry:

```sh
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:service-$PROJECT_NUMBER@gcp-sa-cloudbuild.iam.gserviceaccount.com" \
  --role="roles/cloudbuild.serviceAgent"
```

Further reading:

- Connect to a GitHub repository with Cloud Build: https://cloud.google.com/build/docs/automating-builds/github/connect-repo-github
- Creating a CI/CD environment for serverless containers on Cloud Run with GitHub Actions: https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/cicd-cloud-run-github-actions/index.md
- GitHub Actions best practices for Rust projects: https://www.infinyon.com/blog/2021/04/github-actions-best-practices/#optimizing-rusts-build-speed-with-sccache