Conversation


@dependabot dependabot bot commented on behalf of github May 7, 2025

Bumps huggingface-hub from 0.17.3 to 0.31.1.

Release notes

Sourced from huggingface-hub's releases.

[v0.31.0] LoRAs with Inference Providers, auto mode for provider selection, embeddings models and more

🧑‍🎨 Introducing LoRAs with fal.ai and Replicate providers

We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning fast speed ⚡

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")  # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. There is a bicycle parked in front of them and a plant with flowers to the right side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)

⚙️ auto mode for provider selection

You can now automatically select a provider for a model using auto mode — it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.

from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order.
client = InferenceClient(provider="auto")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)
print(completion.choices[0].message)

⚠️ Note: auto is now the default value for the provider argument. Previously, the default was hf-inference, so this may be a breaking change if you don't specify a provider explicitly when initializing InferenceClient or AsyncInferenceClient.

🧠 Embeddings support with Sambanova (feature-extraction)

We added support for feature-extraction (embeddings) inference with the sambanova provider.

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Summary by Sourcery

Chores:

  • Bump huggingface-hub package version from 0.17.3 to 0.31.1

Bumps [huggingface-hub](https://github.com/huggingface/huggingface_hub) from 0.17.3 to 0.31.1.
- [Release notes](https://github.com/huggingface/huggingface_hub/releases)
- [Commits](huggingface/huggingface_hub@v0.17.3...v0.31.1)

---
updated-dependencies:
- dependency-name: huggingface-hub
  dependency-version: 0.31.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added dependencies Pull requests that update a dependency file python Pull requests that update Python code labels May 7, 2025

sourcery-ai bot commented May 7, 2025

Reviewer's Guide

This pull request upgrades the huggingface-hub dependency from version 0.17.3 to 0.31.1. This was achieved by updating the version constraint in pyproject.toml and regenerating the poetry.lock file to incorporate the new version and its features, including a potential breaking change in InferenceClient's default behavior.

Sequence Diagram: InferenceClient with 'auto' Provider Selection

sequenceDiagram
    actor Developer
    participant IC as InferenceClient
    participant HFSettings as Hugging Face Settings
    participant SelectedProvider as Auto-Selected Provider

    Developer->>IC: Initialize InferenceClient(provider="auto" or default)
    Developer->>IC: Make inference request (e.g., chat.completions.create(...))
    IC->>HFSettings: Fetch user's preferred provider order (for the model)
    HFSettings-->>IC: Provider order list
    IC->>IC: Select first available/compatible provider from list
    IC->>SelectedProvider: Forward inference request
    SelectedProvider-->>IC: Inference result
    IC-->>Developer: Return inference result

Sequence Diagram: LoRA Inference via InferenceClient with fal.ai/Replicate

sequenceDiagram
    actor Developer
    participant IC as InferenceClient
    participant LoRAProvider as "fal.ai / Replicate"

    Developer->>IC: Initialize InferenceClient(provider="fal-ai" or "replicate")
    Developer->>IC: Call client.text_to_image("A cute cat", model="lora_model_id")
    IC->>LoRAProvider: Request text-to-image with LoRA model
    LoRAProvider-->>IC: Image data
    IC-->>Developer: Return PIL.Image object

Class Diagram: Updates to InferenceClient and AsyncInferenceClient in huggingface-hub

classDiagram
    class InferenceClient {
        +provider: string
        +__init__(self, provider: string = "auto", ...)
        +text_to_image(self, prompt: string, model: string, ...) : PIL.Image
        +feature_extraction(self, ...) : Embeddings
    }
    note for InferenceClient "Default for 'provider' in __init__ changed to 'auto' (was 'hf-inference').\nNew capabilities: LoRA inference (via text_to_image with fal.ai/Replicate) and embeddings (via feature_extraction with Sambanova)."

    class AsyncInferenceClient {
        +provider: string
        +__init__(self, provider: string = "auto", ...)
        %% Methods analogous to InferenceClient, supporting new provider features
    }
    note for AsyncInferenceClient "Default for 'provider' in __init__ changed to 'auto' (was 'hf-inference')."

File-Level Changes

Change Details Files
Updated huggingface-hub dependency to version 0.31.1.
  • Modified huggingface-hub version constraint in pyproject.toml to >=0.11.0,<0.32.0.
  • Regenerated poetry.lock to reflect the new huggingface-hub version and its transitive dependencies.
pyproject.toml
poetry.lock
Incorporated new features and a breaking change from huggingface-hub v0.31.1.
  • Introduced LoRA model inference support with fal.ai and Replicate providers.
  • Changed InferenceClient's default provider to auto, which may be a breaking change if the previous default (hf-inference) was relied upon implicitly.
  • Added support for embeddings inference using the Sambanova provider.

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help
