feat: Handle memory errors in batch_process_dataset#1602

Open
jcpitre wants to merge 16 commits into main from 1538-memory-limit-exceeded-in-batch-process-dataset-prod-function

Conversation


@jcpitre jcpitre commented Feb 16, 2026

Summary:

Closes #1538.
Added memory-management utilities that limit the memory available to the process. As a result, any out-of-memory error happens earlier, while some memory is still free (a security margin, currently 200 MB by default), so the exception can be handled properly and the HTTP call to the function can return a 200 status. This (in theory) prevents automatic retries.
Log messages can also still be printed, since memory remains available, which lets us log the stable_id that caused the error.
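The mechanism described above can be sketched roughly as follows. This is an illustrative outline, not the PR's actual `gcp_memory_utils` code: the function and constant names here are assumptions, and the real module derives the available bytes from cgroup/tmpfs accounting.

```python
import resource

SAFETY_MARGIN_BYTES = 200 * 1024 * 1024  # 200 MB default security margin

def compute_limit(available_bytes: int, margin: int = SAFETY_MARGIN_BYTES) -> int:
    """Address-space cap: leave `margin` bytes free for error handling."""
    return max(available_bytes - margin, 0)

def limit_process_memory(available_bytes: int) -> None:
    # Lowering the soft RLIMIT_AS makes large allocations raise MemoryError
    # inside Python instead of letting the platform's OOM killer fire.
    _soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (compute_limit(available_bytes), hard))

def process_dataset(stable_id: str) -> int:
    try:
        # ... actual batch processing would run here ...
        return 200
    except MemoryError:
        # The margin leaves enough headroom to log the failing dataset
        # and still return HTTP 200, which avoids automatic retries.
        print(f"Out of memory while processing {stable_id}")
        return 200
```

Because the soft limit is hit before the cgroup limit, the exception surfaces as a catchable `MemoryError` rather than a hard kill of the function instance.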

Also modified how GTFS datasets are unzipped. Each .txt file within the dataset is unzipped separately on the in-memory file system, uploaded to GCP storage, and then immediately deleted locally. This reduces the number of out-of-disk-space errors and (apparently) does not make the process significantly slower.
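The one-file-at-a-time flow can be sketched like this. `upload` stands in for the GCP storage upload call; the real PR uses the project's storage client and blob naming, which are not shown here.

```python
import os
import tempfile
import zipfile

def extract_and_upload_files_from_zip(zip_path, upload, workdir=None):
    """Extract each .txt member, upload it, then delete it immediately
    so at most one extracted file occupies the in-memory filesystem."""
    uploaded = []
    workdir = workdir or tempfile.mkdtemp()
    with zipfile.ZipFile(zip_path) as archive:
        for name in archive.namelist():
            if not name.endswith(".txt"):
                continue
            local_path = archive.extract(name, path=workdir)
            upload(local_path, name)  # e.g. blob upload to GCP storage
            os.remove(local_path)     # free disk space right away
            uploaded.append(name)
    return uploaded
```

Compared with `ZipFile.extractall`, peak disk usage drops from the size of the whole extracted dataset to the size of its largest single member.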

From Copilot:

This pull request introduces significant improvements to memory management and disk usage in the dataset batch processing function. The main change is the introduction of a memory limiting utility and a new approach for extracting and uploading files from ZIP archives, which minimizes local disk usage and helps prevent out-of-memory errors. Several method names and flows have been updated to reflect these improvements. Additionally, error handling and environment variable parsing have been enhanced for robustness.

Memory Management Enhancements:

  • Added shared/common/gcp_memory_utils.py with functions to calculate available process memory and set memory limits using cgroups and tmpfs information, and integrated limit_gcp_memory() at startup to restrict process memory usage.
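A hedged sketch of how available memory might be derived from cgroup v2 accounting files. The exact file names, paths, and tmpfs adjustment in the PR's `gcp_memory_utils.py` may differ; this shows the general technique only.

```python
def read_cgroup_available_bytes(
    max_path="/sys/fs/cgroup/memory.max",
    current_path="/sys/fs/cgroup/memory.current",
):
    """Bytes left before the cgroup memory limit, or None if unlimited."""
    with open(max_path) as f:
        raw = f.read().strip()
    if raw == "max":  # cgroup v2 reports the literal string "max" when unlimited
        return None
    with open(current_path) as f:
        current = int(f.read().strip())
    # In Cloud Run-style environments, tmpfs (in-memory filesystem) pages
    # are charged to memory.current, so they are already accounted for here.
    return max(int(raw) - current, 0)
```

The result would then feed into the `setrlimit`-style cap, after subtracting the security margin.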

Efficient ZIP Extraction and Upload:

  • Replaced bulk extraction of ZIP files with extract_and_upload_files_from_zip, which extracts and uploads files one at a time, immediately deleting temporary files to minimize disk usage. This change is reflected in both dataset upload and bucket processing flows, and the old unzip_files method was removed.

Robustness and Error Handling:

  • Improved error handling in dataset processing, including logging exception types and messages, and more robust parsing of environment variables (e.g., MAXIMUM_EXECUTIONS).
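Defensive environment-variable parsing in the spirit of the MAXIMUM_EXECUTIONS change might look like the following; the helper name and default value here are assumptions, not the PR's verbatim code.

```python
import logging
import os

def get_int_env(name: str, default: int) -> int:
    """Read an integer env var, falling back to `default` on missing
    or malformed values instead of crashing at startup."""
    raw = os.getenv(name)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        logging.warning("Invalid %s=%r, falling back to %d", name, raw, default)
        return default
```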

Database Integration Updates:

  • Updated database imports and configuration to include Gtfsdataset and its relationship with gtfsfiles. [1] [2]

These changes collectively make the batch processing function more reliable, efficient, and scalable, especially in environments with constrained memory and disk resources.
Expected behavior:

Testing tips:

For the memory limitation change, I increased the in-memory disk space to 7 GB (out of 8 GB for the whole process). This left 1 GB of memory for running the code, of which 200 MB were kept as a security margin. Testing with mdb-2014, we now get these errors:

[screenshot] Here we see that there was a memory error, but it was caught and logged with the stable_id.

For the separate zip upload improvement, I used mdb-2014. With the original code and 6 GB of in-memory disk space, it would hit an out-of-disk-space exception. With the changes, the files were extracted properly.

Please make sure these boxes are checked before submitting your pull request - thanks!

  • Run the unit tests with ./scripts/api-tests.sh to make sure you didn't break anything
  • Add or update any needed documentation to the repo
  • Format the title like "feat: [new feature short description]". The title must follow the Conventional Commits specification (https://www.conventionalcommits.org/en/v1.0.0/).
  • Linked all relevant issues
  • Include screenshot(s) showing how this pull request works and fixes the issue(s)

@jcpitre jcpitre linked an issue Feb 16, 2026 that may be closed by this pull request
@davidgamez davidgamez requested a review from cka-y February 17, 2026 15:02
@jcpitre jcpitre requested a review from davidgamez March 19, 2026 03:53
@davidgamez davidgamez left a comment

LGTM!



Development

Successfully merging this pull request may close these issues.

Memory limit exceeded in batch-process-dataset-prod function