Merged
4 changes: 4 additions & 0 deletions acceptance/pipelines/run/run-pipeline/databricks.yml
@@ -8,3 +8,7 @@ resources:
libraries:
- file:
path: pipeline_file.py
# job is runnable, but doesn't impact auto-selection of pipeline
jobs:
my_job:
name: "My Job"
1 change: 1 addition & 0 deletions acceptance/pipelines/run/run-pipeline/output.txt
@@ -4,6 +4,7 @@ Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/test-pipeline-run/
Deploying resources...
Updating deployment state...
Deployment complete!
View your job my_job here: [DATABRICKS_URL]/jobs/[NUMID]?o=[NUMID]
Contributor:
Non-blocking: I don't think we should show all URLs for many-resource DABs projects. We still don't have a good "single" URL for deployments. The alternative we discussed a while back was to show a hint to use the summary command whenever there is more than one resource. But 1 pipeline + 1 job could also be treated as a special case.

Collaborator (Author):

Makes sense. Do I understand correctly that this suggestion applies to DABs generically, and isn't specific to pipelines?
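The hint idea discussed above could be sketched roughly as follows. This is a self-contained illustration, not the actual CLI code: the function name `resourceURLHint` and its map-of-URLs input are hypothetical stand-ins, and the exact summary-command wording is an assumption.

```go
package main

import "fmt"

// resourceURLHint sketches the proposed behavior: print a direct URL only
// when the deployment has a single resource, and otherwise point the user
// at the summary command instead of listing every URL.
func resourceURLHint(urls map[string]string) string {
	if len(urls) == 1 {
		for name, url := range urls {
			return fmt.Sprintf("View your %s here: %s", name, url)
		}
	}
	// Hypothetical hint text; the real wording would come from the CLI.
	return "Run `databricks bundle summary` to see URLs for all deployed resources."
}

func main() {
	one := map[string]string{
		"pipeline my_pipeline": "https://example.com/pipelines/123",
	}
	many := map[string]string{
		"pipeline my_pipeline": "https://example.com/pipelines/123",
		"job my_job":           "https://example.com/jobs/456",
	}
	fmt.Println(resourceURLHint(one))
	fmt.Println(resourceURLHint(many))
}
```

The special case of 1 pipeline + 1 job mentioned above would be an extra branch on the composition of the map, not just its length.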

View your pipeline my_pipeline here: [DATABRICKS_URL]/pipelines/[UUID]?o=[NUMID]

=== Run pipeline
2 changes: 1 addition & 1 deletion cmd/pipelines/run.go
@@ -351,7 +351,7 @@ Refreshes all tables in the pipeline unless otherwise specified.`,
}

if len(args) == 0 {
completions := bundleresources.Completions(b, run.IsRunnable)
completions := bundleresources.Completions(b, isPipeline)
return maps.Keys(completions), cobra.ShellCompDirectiveNoFileComp
} else {
// If we know the resource to run, we can complete additional positional arguments.
17 changes: 12 additions & 5 deletions cmd/pipelines/utils.go
@@ -42,12 +42,10 @@ func promptResource(ctx context.Context, b *bundle.Bundle, filters ...resources.
// autoSelectSinglePipeline checks if there's exactly one pipeline resource in the bundle and returns its key.
// Returns empty string if there's not exactly one pipeline.
func autoSelectSinglePipeline(b *bundle.Bundle) string {
completions := resources.Completions(b, run.IsRunnable)
completions := resources.Completions(b, isPipeline)
Collaborator:
Just a drive-by comment since I got tagged: this was an intentional decision at the time, from a one-off discussion. I think the product decision then was to expose anything that can still be run. Just curious whether we've changed this decision.

Collaborator (Author):

Our template includes a job and a pipeline, so the "run" command doesn't work because there are two runnable resources. Given that we expose the same "run" command in "databricks pipelines run", it should behave differently from "databricks bundle run", similar to how "databricks apps run" and "databricks jobs run" should filter by their resource type.

if len(completions) == 1 {
for key, ref := range completions {
if _, ok := ref.Resource.(*configresources.Pipeline); ok {
return key
}
for key := range completions {
return key
}
}
return ""
@@ -334,3 +332,12 @@ func fetchPipelineUpdates(ctx context.Context, w *databricks.WorkspaceClient, st

return updates, nil
}

func isPipeline(ref resources.Reference) bool {
switch ref.Resource.(type) {
case *configresources.Pipeline:
return true
default:
return false
}
}
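The pattern the diff above introduces — filtering completions with a type predicate and auto-selecting only when exactly one resource matches — can be sketched with stand-in types. The `Resource`, `Pipeline`, and `Job` types and the `autoSelectSingle` helper below are illustrative only; the real code filters bundle resource references via `resources.Completions(b, isPipeline)`.

```go
package main

import "fmt"

// Stand-in resource types; the real code uses the bundle's config resources.
type Resource interface{ Kind() string }

type Pipeline struct{ Name string }
type Job struct{ Name string }

func (p Pipeline) Kind() string { return "pipeline" }
func (j Job) Kind() string      { return "job" }

// isPipeline mirrors the type-switch predicate added in cmd/pipelines/utils.go.
func isPipeline(r Resource) bool {
	switch r.(type) {
	case Pipeline:
		return true
	default:
		return false
	}
}

// autoSelectSingle returns the key of the sole resource matching pred,
// or "" when zero or several resources match — the same shape as
// autoSelectSinglePipeline after this change.
func autoSelectSingle(all map[string]Resource, pred func(Resource) bool) string {
	var matches []string
	for key, r := range all {
		if pred(r) {
			matches = append(matches, key)
		}
	}
	if len(matches) == 1 {
		return matches[0]
	}
	return ""
}

func main() {
	bundle := map[string]Resource{
		"my_pipeline": Pipeline{Name: "My Pipeline"},
		"my_job":      Job{Name: "My Job"},
	}
	// The job no longer blocks auto-selection: filtering happens before
	// the single-candidate check, matching the new behavior.
	fmt.Println(autoSelectSingle(bundle, isPipeline))
}
```

The key difference from the old code is that the predicate narrows the candidate set before the length-1 check, so unrelated runnable resources (like the job in the test fixture) no longer defeat auto-selection.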