
Enhancement: Add API to reload a single pipeline#18832

Open
mhunsber wants to merge 4 commits into elastic:main from mhunsber:feat/api-pipeline-restart

Conversation


@mhunsber mhunsber commented Mar 6, 2026

Type of change:

  • enhancement

Release notes

  • Added API endpoint to reload a pipeline: POST _node/pipelines/:id/_reload.

What does this PR do?

This PR exposes a new endpoint in the Logstash node API, `_node/pipelines/[pipeline_id]/_reload`, which uses the Logstash agent to issue a new LogStash::PipelineAction::Reload action.
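A rough sketch of the shape such a handler could take. This is a hypothetical illustration with a stubbed agent, not the actual PR code; only `LogStash::PipelineAction::Reload` is a real name from the PR, everything else (`StubAgent`, `reload_pipeline`, `converge_state_with_reload`) is a stand-in. The response shape mirrors the JSON shown under Logs:

```ruby
# Stub standing in for the Logstash agent's converge behaviour (hypothetical).
class StubAgent
  Result = Struct.new(:successful_actions, :failed_actions)

  def initialize(pipeline_ids)
    @pipeline_ids = pipeline_ids
  end

  def get_pipeline(id)
    @pipeline_ids.include?(id) ? id : nil
  end

  # Pretend to converge on a LogStash::PipelineAction::Reload for the pipeline.
  def converge_state_with_reload(pipeline_id)
    Result.new([{ "id" => pipeline_id,
                  "action_type" => "LogStash::PipelineAction::Reload" }], [])
  end
end

# Handler shape: 404 for unknown pipeline ids, otherwise report the converge
# outcome (success flag plus the lists of failed/successful actions).
def reload_pipeline(agent, pipeline_id)
  unless agent.get_pipeline(pipeline_id)
    return { "status" => 404, "error" => "pipeline not found" }
  end

  result = agent.converge_state_with_reload(pipeline_id)
  {
    "success" => result.failed_actions.empty?,
    "failed_actions" => result.failed_actions,
    "successful_actions" => result.successful_actions
  }
end

agent = StubAgent.new(["main"])
p reload_pipeline(agent, "main")
```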

Why is it important/What is the impact to the user?

Addresses a feature request in #11837. There are times when it is necessary to forcefully restart a pipeline without changing its configuration. The SIGHUP signal reloads all pipelines, but cannot be used to reload only one. It is also possible to modify a pipeline's config file to trigger a reload, but that only works with file-based pipeline configuration.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • (skipped) I have made corresponding change to the default configuration files (and/or docker env variables)
  • I have added tests that prove my fix is effective or that my feature works

How to test this PR locally

  • Run Logstash locally, e.g. with bin/logstash -e 'input { file { path => "/dev/null" }} output { file { path => "/dev/null" } }'
  • In a separate console, run curl -X POST http://localhost:9600/_node/pipelines/main/_reload
  • Observe the pipeline restart

Related issues

Use cases

Given a pipeline uses a ruby script file, when a user modifies the ruby script file, then the API can be called so that the pipeline reloads.

Given a pipeline has failed due to a downstream dependency, then the pipeline can be reloaded without restarting all of Logstash. - from Cameron Kerr: #11837 (comment)

I have been using this change in deployments of Logstash that I managed for over 5 years without issues. I had several ruby scripts that needed to be updated, but I was using the built-in Pipeline Management through the Kibana UI, so I could not modify the pipeline configuration to trigger a reload.
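For the ruby-script use case above, a deploy step could call the endpoint over HTTP after updating the script. A minimal sketch using Ruby's stdlib; the helper name is mine, and the host/port are the node API's default bind address:

```ruby
require "net/http"
require "uri"

# Build a POST request to the per-pipeline reload endpoint
# (hypothetical helper; adjust host/port to your node API settings).
def reload_request(pipeline_id, host: "localhost", port: 9600)
  uri = URI("http://#{host}:#{port}/_node/pipelines/#{pipeline_id}/_reload")
  [uri, Net::HTTP::Post.new(uri)]
end

uri, req = reload_request("main")
p req.method  # => "POST"
p req.path    # => "/_node/pipelines/main/_reload"

# To actually send it (requires a running Logstash):
# res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
```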

Logs

bin/logstash -e 'input { file { path => "/dev/null" }} output { file { path => "/dev/null" } }'
curl -sSf -X POST http://localhost:9600/_node/pipelines/main/_reload | jq .
{
  "host": "**************",
  "version": "9.4.0",
  "http_address": "127.0.0.1:9600",
  "id": "7984d429-62e3-4e36-aaa9-9773b8acc5ee",
  "name": "solardrip",
  "ephemeral_id": "dc9173ea-0ce9-465b-a0d4-ab68d1980714",
  "snapshot": null,
  "status": "green",
  "pipeline": {
    "workers": 12,
    "batch_size": 125,
    "batch_delay": 50
  },
  "success": true,
  "failed_actions": [],
  "successful_actions": [
    {
      "id": "main",
      "action_type": "LogStash::PipelineAction::Reload"
    }
  ]
}
[2026-03-06T14:17:54,927][INFO ][logstash.javapipeline    ][main] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2026-03-06T14:17:54,948][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.batch.output_chunking.growth_threshold_factor"=>1000, "pipeline.max_inflight"=>1500, "batch_metric_sampling"=>"minimal", "pipeline.sources"=>["config string"], :thread=>"#<Thread:0xf3363d6 /home/mhunsber/code/logstash/logstash-core/lib/logstash/java_pipeline.rb:147 run>"}
[2026-03-06T14:17:55,240][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.29}
[2026-03-06T14:17:55,243][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/home/mhunsber/code/logstash/data/plugins/inputs/file/.sincedb_ee35e7c782791419f29316f183d5d6d3", :path=>["/dev/null"]}
[2026-03-06T14:17:55,245][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2026-03-06T14:17:55,248][INFO ][filewatch.observingtail  ][main][88b8bfaeedcb5d65af1fd3384fdf30fd2a402bd0593fae89f7f446ecaaf3e170] START, creating Discoverer, Watch with file and sincedb collections
[2026-03-06T14:17:55,250][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2026-03-06T14:18:02,173][WARN ][deprecation.logstash.codecs.jsonlines] The default value for `decode_size_limit_bytes`, currently at 512Mb, will be lowered in a future version to prevent Out of Memory errors from abnormally large messages or missing delimiters. Please set a value that reflects the largest expected message size (e.g. 20971520 for 20Mb)
[2026-03-06T14:18:02,175][INFO ][logstash.codecs.jsonlines] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2026-03-06T14:18:02,188][INFO ][logstash.pipelineaction.reload] Reloading pipeline {"pipeline.id"=>:main}
[2026-03-06T14:18:02,189][INFO ][filewatch.observingtail  ] QUIT - closing all files and shutting down.
[2026-03-06T14:18:02,946][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2026-03-06T14:18:03,269][WARN ][deprecation.logstash.codecs.jsonlines] The default value for `decode_size_limit_bytes`, currently at 512Mb, will be lowered in a future version to prevent Out of Memory errors from abnormally large messages or missing delimiters. Please set a value that reflects the largest expected message size (e.g. 20971520 for 20Mb)
[2026-03-06T14:18:03,270][INFO ][logstash.codecs.jsonlines] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2026-03-06T14:18:03,281][INFO ][logstash.javapipeline    ][main] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2026-03-06T14:18:03,291][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.batch.output_chunking.growth_threshold_factor"=>1000, "pipeline.max_inflight"=>1500, "batch_metric_sampling"=>"minimal", "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x58e413e8 /home/mhunsber/code/logstash/logstash-core/lib/logstash/java_pipeline.rb:147 run>"}
[2026-03-06T14:18:03,319][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.03}
[2026-03-06T14:18:03,325][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/home/mhunsber/code/logstash/data/plugins/inputs/file/.sincedb_ee35e7c782791419f29316f183d5d6d3", :path=>["/dev/null"]}
[2026-03-06T14:18:03,327][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2026-03-06T14:18:03,329][INFO ][filewatch.observingtail  ][main][88b8bfaeedcb5d65af1fd3384fdf30fd2a402bd0593fae89f7f446ecaaf3e170] START, creating Discoverer, Watch with file and sincedb collections
[2026-03-06T14:18:03,333][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

@github-actions
Contributor

github-actions bot commented Mar 6, 2026

🤖 GitHub comments

Just comment with:

  • run docs-build : Re-trigger the docs validation. (use unformatted text in the comment!)
  • run exhaustive tests : Run the exhaustive tests Buildkite pipeline.

@mergify
Contributor

mergify bot commented Mar 6, 2026

This pull request does not have a backport label. Could you fix it @mhunsber? 🙏
To fixup this pull request, you need to add the backport labels for the needed
branches, such as:

  • backport-8./d is the label to automatically backport to the 8./d branch. /d is the digit.
  • If no backport is necessary, please add the backport-skip label

@mhunsber
Author

mhunsber commented Mar 6, 2026

> This pull request does not have a backport label. Could you fix it @mhunsber? 🙏 To fixup this pull request, you need to add the backport labels for the needed branches, such as:
>
> * `backport-8./d` is the label to automatically backport to the `8./d` branch. `/d` is the digit.
>
> * If no backport is necessary, please add the `backport-skip` label

I do not have permission to add a label to the PR. However, I had been running this change successfully in Logstash version 7.x, so it should be compatible with any of the previous versions of Logstash.

Development

Successfully merging this pull request may close these issues.

add ability to restart a specific pipeline
