This system implements a layer-by-layer processing workflow for converting dbt semantic models to Sigma data models based on dependency relationships defined in a DAG (Directed Acyclic Graph). The system supports both initial processing of all models and incremental updates for changed files.
- Initial Mode: Processes all semantic models in the source directory
- Update Mode: Processes only changed files and their dependents, based on the dependency graph
- Command Line Arguments: Update mode accepts changed file paths as command line arguments
- Sequential Processing: Models are processed layer by layer, ensuring dependencies are resolved before dependent models are processed
- DAG-Based Ordering: Uses topological sorting to determine the correct processing order
- Dependency Resolution: Each layer waits for all dependencies from previous layers to be completed
- Incremental DAG Building: In update mode, builds DAG only for changed files and their dependents
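The layer ordering described above can be sketched as Kahn's algorithm grouped into layers. This is a minimal illustration, assuming the DAG is represented as a `{ model: [dependencies...] }` object; the model names and graph shape here are examples, not the tool's actual format.

```javascript
// Group models into layers: a model is ready once all of its
// dependencies appear in an earlier layer (Kahn's algorithm).
function buildLayers(dag) {
  const remaining = new Map(Object.entries(dag).map(([m, deps]) => [m, new Set(deps)]));
  const layers = [];
  const done = new Set();
  while (remaining.size > 0) {
    // Models whose dependencies have all been processed already.
    const layer = [...remaining.keys()].filter(
      (m) => [...remaining.get(m)].every((d) => done.has(d))
    );
    if (layer.length === 0) throw new Error('Cycle detected: not a DAG');
    layer.forEach((m) => { done.add(m); remaining.delete(m); });
    layers.push(layer);
  }
  return layers;
}

// Example: wd_opportunity depends on two layer-1 models.
const layers = buildLayers({
  wd_account: [],
  wd_user: [],
  wd_opportunity: ['wd_account', 'wd_user'],
});
console.log(layers); // [['wd_account', 'wd_user'], ['wd_opportunity']]
```

Models with no dependencies form layer 1; each later layer only starts once everything it depends on has been processed.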
- Create Data Model: Creates new Sigma data models via API (`/v3alpha/dataModels/spec`)
- Update Data Model: Updates existing data models via API (`/v3alpha/dataModels/{id}/spec`)
- Get Data Model: Retrieves data model specifications from Sigma API
- Test Mode: Supports test mode with placeholder functions when `TEST_FLAG=true`
- ID Management: Automatically retrieves existing data model IDs from sigma_model files in update mode
- Output Generation: Creates processed YAML files in the output directory
- Sigma Model Storage: Saves data model specs from Sigma API to sigma_model folder
- Foreign Entity References: Uses IDs from sigma_model files for foreign entity relationships
- Git Integration: Optionally commits changes to the git repository when `FROM_CI_CD=true`
- Foreign Entity Lookup: Automatically resolves foreign entity references using sigma_model files
- Relationship Configuration: Creates proper relationship configurations between primary and foreign entities
- ID Propagation: Ensures foreign entities reference the correct data model and table IDs
```
src/
├── main.js                      # Main entry point and execution script
├── routes/
│   ├── converter/
│   │   ├── layer_processor.js   # Main processing class
│   │   ├── convert_semantics.js # Semantic model conversion
│   │   └── build_dag.js         # DAG construction
│   ├── sigma_api/
│   │   ├── create_data_model.js # Create data model API
│   │   ├── update_data_model.js # Update data model API
│   │   └── get_data_model.js    # Get data model API
│   └── ...
│
sigma_model/                     # Generated Sigma model files (from API)
├── wd_account.yml
├── wd_opportunity.yml
└── ...
```
Initial mode:

```
node src/main.js
```

Update mode:

```
node src/main.js file1.yml file2.yml ...
```

The system will:
- Build DAG for changed files and their dependents
- Process only affected models in the correct layer order
- Update existing data models or create new ones as needed
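The "changed files and their dependents" selection can be sketched by walking the dependency graph in reverse from each changed model and collecting everything downstream. The `{ model: [dependencies...] }` DAG shape is an assumption for illustration, not necessarily the repo's internal representation.

```javascript
// Collect the changed models plus every model that (transitively)
// depends on them, by inverting the DAG and doing a breadth-first walk.
function affectedModels(dag, changed) {
  // Invert the graph: dependency -> models that depend on it.
  const dependents = {};
  for (const [model, deps] of Object.entries(dag)) {
    for (const dep of deps) (dependents[dep] ??= []).push(model);
  }
  const affected = new Set(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    for (const m of dependents[queue.shift()] ?? []) {
      if (!affected.has(m)) { affected.add(m); queue.push(m); }
    }
  }
  return affected;
}
```

Only this affected set needs re-processing, which is why update mode can build a minimal DAG instead of analyzing every model.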
The converter requires the following Action secrets:
- `API_CLIENT_ID`: Client ID for the Sigma API key
- `API_SECRET`: Secret for the Sigma API key
- `CONNECTION_ID`: Sigma connection ID
- `DB`: Database name containing the dbt-generated views/tables used by the semantic models
- `SCHEMA`: Schema name containing the dbt-generated views/tables used by the semantic models
- `GIT_USER`: Git user name to be used for commits. The converter retrieves the data models it creates in Sigma and checks them into the repository.
- `GIT_EMAIL`: Git user email to be used for commits.
- `SIGMA_DOMAIN`: The name of your Sigma org.
- `SIGMA_FOLDER_ID`: Sigma folder ID where the converter creates data models.
The converter requires the following Action variables:
- `API_URL`: Sigma API base URL, e.g. https://api.sigmacomputing.com
- `MODE`: Must be set to `initial` when using the converter for the first time and `update` afterwards.
The following variables need to be configured in the action.yml file.
- `paths`: Path to the location of the dbt semantic model yml files
- `DAG_FILE`: Path to the DAG JSON file
- `TIME_SPINE_FILE`: Path to the time spine models file
- `SOURCE_DIR`: Directory containing the source semantic models
- `SIGMA_MODEL_DIR`: Directory for Sigma model files
- `USER_FRIENDLY_COLUMN_NAMES`: Set to `true` to convert dimension names to a user-friendly format. This needs to match the Sigma connection configuration.
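A startup check for the secrets and variables listed above might look like the following sketch. Failing fast with a clear message beats an opaque API error mid-run; the converter's actual validation may differ, and the list of names is taken directly from this README.

```javascript
// Required configuration names, per the README's secrets/variables lists.
const REQUIRED = [
  'API_CLIENT_ID', 'API_SECRET', 'CONNECTION_ID', 'DB', 'SCHEMA',
  'SIGMA_DOMAIN', 'SIGMA_FOLDER_ID', 'API_URL', 'MODE',
];

// Throw a descriptive error if anything is missing or MODE is invalid.
function validateConfig(env = process.env) {
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required configuration: ${missing.join(', ')}`);
  }
  if (!['initial', 'update'].includes(env.MODE)) {
    throw new Error(`MODE must be "initial" or "update", got "${env.MODE}"`);
  }
}
```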
- Build DAG: Analyze all semantic models and build dependency graph
- Export DAG: Save DAG to JSON file
- Process Layer 1: Handle all models with no dependencies
- Convert semantic models to Sigma format
- Create new data models via Sigma API (or placeholder if test mode)
- Retrieve data model spec from Sigma API
- Save to sigma_model folder
- Process Layer 2+: Handle models with dependencies
- Use sigma_model IDs for foreign entities
- Create proper relationship configurations
- Create new data models via Sigma API
- Retrieve and save data model specs
- Complete: Generate processing summary and results
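The initial-mode steps above can be condensed into a runnable skeleton. The helper bodies here are placeholders (much like the tool's own `TEST_FLAG` mode); the real code converts the full semantic model, calls the Sigma API, and saves specs to disk, and its function signatures may differ.

```javascript
// Placeholder helpers standing in for the real conversion and API calls.
const convertSemantics = (model) => ({ name: model, entity: 'datamodel' });
const createDataModelInSigma = async (spec) => ({ ...spec, id: `dummy-${spec.name}` });

// Process layers strictly in order: each layer starts only after the
// previous one finishes, so foreign-entity IDs are always available.
async function processAllLayers(layers) {
  const results = [];
  for (const layer of layers) {
    for (const model of layer) {
      const spec = convertSemantics(model);
      results.push(await createDataModelInSigma(spec));
    }
  }
  return results;
}
```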
- Build DAG for Changed Files: Analyze changed files and their dependents
- Only processes files that changed and models that depend on them
- Builds minimal DAG for affected models
- Export DAG: Save DAG to JSON file
- Process Affected Layers: Process models in dependency order
- For existing models: Retrieve existing dataModelId from sigma_model files
- Update existing data models via Sigma API (or placeholder if test mode)
- For new models: Create new data models
- Retrieve and save updated data model specs
- Optionally commit to git if
FROM_CI_CD=true
- Complete: Generate processing summary and results
- `main()`: Main execution function
- Handles command line arguments for update mode
- Validates mode and configuration
- Orchestrates DAG building and processing
- `processAllLayers()`: Main processing method that processes all layers sequentially
- `processLayer()`: Process all models in a single layer
- `processDbtModel()`: Process individual models
- `getExistingDataModelId()`: Retrieve existing data model ID from sigma_model files
- `saveDataModelSpecToSigmaFolder()`: Save data model spec as YAML
- `saveDataModelSpecInRepo()`: Save and commit to the git repository
- `buildDAG()`: Build dependency graph from semantic models
- Supports analyzing all files (initial mode)
- Supports analyzing specific files and their dependents (update mode)
- `exportDAG()`: Export DAG to JSON file
- `createDataModelInSigma()`: Create new data model via Sigma API
- `updateDataModelInSigma()`: Update existing data model via Sigma API
- `getDataModelFromSigma()`: Retrieve data model specification from Sigma API
- Test mode support with placeholder functions
- `convertSemantics()`: Convert dbt semantic model to Sigma data model format
- Handles dimensions, measures, metrics, and entities
- Processes foreign entity relationships
- Adds time spine relationships
- Automatic foreign entity lookup from sigma_model files
- Proper ID propagation for relationship configurations
- Support for multiple foreign entities per model
- DAG-based dependency tracking
```
=== PROCESSING COMPLETE ===
Total Models: 33
Processed: 33
Successful: 33
Failed: 0
Layers: 4
Layer 1: ✓ 20 models processed
Layer 2: ✓ 2 models processed
Layer 3: ✓ 7 models processed
Layer 4: ✓ 4 models processed
```
Each sigma_model file contains:
- `id`: Unique GUID for the data model
- `name`: Model name
- `entity`: `"datamodel"`
- `pages`: Model structure with elements, columns, metrics, joins
- Foreign entity references with proper IDs
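A hypothetical sigma_model file with just the fields listed above might look like this sketch; real files saved from the Sigma API contain the full spec and actual GUIDs, and the values shown here are invented placeholders.

```yaml
# Illustrative only -- not a real Sigma spec.
id: 00000000-0000-0000-0000-000000000000
name: wd_account
entity: "datamodel"
pages:
  # elements, columns, metrics, joins ...
```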
- Comprehensive error handling for file operations
- Graceful failure handling with detailed error messages
- Processing continues even if individual models fail
- Detailed logging of success/failure status
- Validation of mode and required parameters
- Error handling for Sigma API calls with detailed error messages
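The "processing continues even if individual models fail" behavior can be sketched as a per-model try/catch that records each outcome for the final summary. The function name and result shape here are illustrative, not the repo's actual API.

```javascript
// Process each model independently: a failure is logged with detail
// but does not stop the remaining models in the layer.
async function processLayerSafely(models, processOne) {
  const results = [];
  for (const model of models) {
    try {
      await processOne(model);
      results.push({ model, ok: true });
    } catch (err) {
      console.error(`✗ ${model} failed: ${err.message}`);
      results.push({ model, ok: false, error: err.message });
    }
  }
  return results;
}
```

The collected results then feed the success/failure counts shown in the processing summary.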
- Processes all semantic models in source directory
- Creates new data models in Sigma
- Generates new GUIDs for all models
- Full dependency graph analysis
- Processes only changed files and their dependents
- Retrieves existing data model IDs from sigma_model files
- Updates existing data models instead of creating new ones
- Minimal dependency graph analysis (only affected models)
- Requires changed file paths as command line arguments
When `TEST_FLAG=true`:
- Uses placeholder functions instead of real Sigma API calls
- Generates dummy GUIDs for data models
- Copies processed files directly to sigma_model folder
- Useful for testing conversion logic without API calls
When `FROM_CI_CD=true`:
- Automatically commits changes to git repository
- Configures git user if not already set
- Commits data model specs to sigma_model folder
- Uses environment variables for git user configuration