Releases: invoke-ai/InvokeAI
v6.11.1
InvokeAI 6.11.1
This is a bugfix release that corrects several image generation and user interface glitches:
- Fix FLUX.2 Klein image generation quality (@Pfannkuchensack)
- At higher step values and larger images, the FLUX.2 Klein models were generating image artifacts characterized by diagonals, cross-hatching and dust. This bug is now corrected.
- Restore denoising strength for outpaint mode (@Pfannkuchensack)
- Previously, when outpainting, the denoising strength was pinned at 1.0 rather than observing the value set by the user.
- Only show FLUX.1 VAEs when a FLUX.1 main model is selected (@Pfannkuchensack)
- This fix prevents the user from inadvertently selecting a FLUX.2 VAE when generating with FLUX.1.
- Reset ZiT seed variance toggle when recalling images without that metadata (@Pfannkuchensack)
- When remixing an image generated by Z-Image Turbo, the setting of the seed variance toggle (which increases image diversity) is now correctly restored.
- Improve DyPE area calculation (@JPPhoto)
- DyPE increases the quality of FLUX.1 models at higher resolutions. This fix improves how the algorithm's parameters are automatically adjusted for image size.
- Remove duplicate DyPE preset dropdown in generation settings (@Pfannkuchensack)
- The DyPE preset dropdown no longer appears twice in the generation settings UI.
In addition to these bug fixes, new Russian translations were added by @DustyShoe.
Check out the roadmap
To see what the development team has planned for forthcoming releases, check out the InvokeAI roadmap. Feature releases will be issued roughly monthly.
Take the user survey
And don't forget to tell us who you are, what features you use, and what features you most want to see included in future releases. Take the InvokeAI 2026 User Engagement Survey and share your thoughts!
Credits
In addition to the authors of these bug fixes, many thanks to @blessedcoolant, @skunkworxdark, and @mickr777 for their time and patience testing and reviewing the code.
Full Changelog: v6.11.0...v6.11.1
v6.11.0
InvokeAI v6.11.0
This is a feature release of InvokeAI which provides support for the new FLUX.2 Klein image generation and edit models as well as a few small improvements and bug fixes. Before we get to the details, consider taking our 2026 User Engagement Survey. We want to know who you are, how you use InvokeAI, and what new features we can add to make the software even better.
Support for FLUX.2 Klein models
The FLUX.2 Klein family of models (F2K) comprises fast, high-quality image generation and editing models. Invoke provides support for multiple versions, including both the fast-but-less-precise 4-billion-parameter (4B) and the slower-but-more-accurate 9-billion-parameter (9B) models, as well as quantized versions of these models suited to systems with limited VRAM. These models are small and fast; the fastest can render images in seconds with just four steps.
In addition to the usual features (txt2img, img2img, inpainting, outpainting), F2K offers a unique image editing feature that allows you to make targeted modifications to an image or set of images using prompts like "Change the goblet in the king's right hand from silver to gold," or "Transfer the style from image 1 to image 2".
Suggested hardware requirements are:
FLUX.2 Klein 4B - 1024×1024
- GPU: Nvidia 30xx series or later, 12GB+ VRAM (e.g. RTX 3090, RTX 4070). FP8 version works with 8GB+ VRAM.
- Memory: At least 16GB RAM.
- Disk: 10GB for base installation plus 20GB for models (Diffusers format with encoder).
FLUX.2 Klein 9B - 1024×1024
- GPU: Nvidia 40xx series, 24GB+ VRAM (e.g. RTX 4090). FP8 version works with 12GB+ VRAM.
- Memory: At least 32GB RAM.
- Disk: 10GB for base installation plus 40GB for models (Diffusers format with encoder).
Getting Started with F2K
After updating InvokeAI, you will find a new FLUX.2 Klein starter pack in the Starter Models section of the Model Manager. This will download three files: the Q4 quantized version of F2K 4B, which is suitable to run on low-end hardware, and two supporting files: the FLUX.2 VAE, and a quantized version of the FLUX.2 Qwen3 text encoder.
After installing the bundle, select the "FLUX.2 Klein 4B (GGUF Q4)" model in the Generation section of Invoke's left panel. Also go to the Advanced section at the bottom of the panel and select the F2K VAE and text encoder models that were installed with the starter bundle. (If you don't select these, you will get a warning message on the first generation telling you to do so.) Recommended generation settings are:
- Steps: 4-6
- CFG: 1-2
Modestly increasing the number of steps may improve accuracy. If you work with the Base versions of F2K (available from HuggingFace), increase the steps to >20 and the CFG to 3.5-5.0.
Text2img, img2img, inpainting and outpainting will all work as usual. InvokeAI does not currently support F2K LoRAs or ControlNets (there have not been many published so far). In addition, only the Euler sampler is currently available. Support for LoRAs and additional schedulers will be added in a future release.
Prompting with FLUX.2
Like ZiT, F2K's text encoder works best when you provide it with long prose prompts that follow the framework Subject + Setting + Details + Lighting + Atmosphere. For example: "An elderly king is standing on a low dais in front of a crowded and chaotic banquet hall bursting with courtiers and noblemen. He is shown in profile, facing his noblemen, holding high a jeweled chalice of wine to toast the unification of his fiefdoms. This is a cinematic shot that conveys historical grandeur and a medieval vibe."
F2K does not perform any form of prompt enhancement, so what you write is what the model sees. See the FLUX.2 Prompting Guide for more guidance.
Image Editing
F2K provides an image editing mode that works like a souped-up version of Image Prompt (IP) Adapters. Drag-and-drop or upload an image to the Reference Image section of the Prompt panel. Then instruct the model on modifications you wish to make using active verbs. You may issue multiple instructions in the same prompt.
- Change the king's chalice from silver to gold. Give him a crown, and grow him a salt-and-pepper beard.
- Change the image style to a scifi/fantasy vibe.
- Use an anime style and give the noblemen and courtiers brightly-colored robes.
F2K editing supports multiple reference images, letting you transfer visual elements (subjects, style and background) from one to another. When prompting over multiple images, refer to them in order as "image 1," "image 2," and so forth.
- Give the king in image 1 the crown that appears in image 2.
- Transfer the style of image 1 to image 2.
Dealing with multiple reference images is tricky. There is no way to adjust the weightings of each image, and so you will have to be explicit in the prompt about which visual elements you are combining. If you cannot get the effect you are looking for by modifying the prompt, you may find success by changing the order of images.
Also be aware that each image significantly increases the model's VRAM usage. If you run into memory errors, use a smaller (quantized) model, or reduce the number and size of the reference images.
Other Versions of F2K Available in the Model Manager
To find additional supported versions of F2K, type "FLUX.2" into the Starter Models search box. This will show you the following types of files:
- FLUX.2 Klein 4B/9B (Diffusers): These are the full-size, all-in-one Diffusers versions of F2K, which come bundled with the VAE and text encoder.
- FLUX.2 Klein 4B/9B: These are standalone versions of the full-size F2K which require installation of separate VAE and text encoders. Note that the 4B and 9B models require different text encoders, "FLUX.2 Klein Qwen3 4B Encoder" and "FLUX.2 Klein Qwen3 8B Encoder" respectively. (Not a misprint: use the 9B F2K model with the 8B text encoder!)
- FLUX.2 Klein 4B/9B (FP8): These are the standalone versions quantized to 8 bits. The 4B model will run comfortably on machines with 8GB VRAM, while the 9B model will run on machines with 12GB or more. As with all quantized versions, there is a minor loss of generation accuracy.
- FLUX.2 Klein 4B/9B (Q4): These are standalone versions that have been quantized to 4 bits, resulting in very small and fast models that can run on cards with 6-8 GB VRAM.
There is only one F2K VAE, and it happens to be the same as the one used by FLUX.1 and Z-Image Turbo. However, there are several text encoder options:
- FLUX.2 Klein Qwen3 4B Encoder: Use this encoder with the F2K 4B versions. It also works with Z-Image Turbo.
- Z-Image Qwen3 Text Encoder (quantized): This is a Q6-quantized version of the text encoder that works with both F2K and ZiT. You may use it on smaller-memory systems to reduce swapping of models in and out of VRAM.
- FLUX.2 Klein Qwen3 8B Encoder: Use this encoder with the F2K 9B versions. It is not compatible with ZiT.
You will find additional F2K models on HuggingFace and other model repositories, including the base models intended for fine-tuning and LoRA training. We have not exhaustively tested InvokeAI compatibility with all the available variants. Please report any incompatible models to InvokeAI Issues.
Much credit to @Pfannkuchensack for contributing F2K support.
Other Features in this Release
The other features in this release include:
Z-Image Turbo Variance Enhancer
ZiT tends to produce very similar images for a given prompt. To increase image diversity, @Pfannkuchensack contributed a Seed Variance Enhancer node which adds calibrated amounts of noise to the prompt conditioning prior to generation. You will find this feature in the Generation panel under Advanced Options. When activated, you will see two sliders, one for Variance Strength and the other for Randomize Percent. The first slider controls how much noise will be added to the conditioned prompt, and the second controls what proportion of the conditioning's weights will be altered. Using the default randomization of 50% of the values, a variance strength of 0.1 will produce subtle variations, while a strength of 0.5 will produce very marked deviation from the prompt. Increasing the percentage of weights modified will also increase the level of variation.
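The underlying idea can be sketched in a few lines of PyTorch. This is a minimal illustration only, not the node's actual implementation; the function name and the exact noise calibration are assumptions:

```python
import torch

def vary_conditioning(
    cond: torch.Tensor,          # prompt conditioning tensor
    strength: float = 0.1,       # "Variance Strength" slider
    randomize_pct: float = 0.5,  # "Randomize Percent" slider
    seed: int = 0,
) -> torch.Tensor:
    """Perturb a random subset of the conditioning values with scaled noise."""
    gen = torch.Generator().manual_seed(seed)
    # Choose which fraction of the values to alter (Randomize Percent).
    mask = torch.rand(cond.shape, generator=gen) < randomize_pct
    # Calibrated noise, scaled by the Variance Strength setting.
    noise = torch.randn(cond.shape, generator=gen) * strength
    return torch.where(mask, cond + noise, cond)
```

Raising either the strength or the randomize percentage pushes the result further from the original conditioning, which matches the slider behavior described above.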
Improved Support for High-Resolution FLUX.1 Images
A new denoising tuning algorithm, introduced by @Pfannkuchensack, increases the accuracy of FLUX.1 generations at high resolutions. When a FLUX.1 model is selected, a new DyPE option will appear in the Generation panel. Its settings are Off (the default) to disable the algorithm, Auto to automatically activate DyPE when rendering images greater than 1536 pixels in either dimension, and 4K Optimized to activate the algorithm with parameters that are tuned for 4K images. Note that if you do not have sufficient VRAM to generate 4K images, this feature will not help you generate them. Instead, generate a smaller image and use Invoke's Upscaling feature.
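In Auto mode, the activation rule amounts to a simple dimension check, roughly as follows. This is an illustrative sketch of the rule described above, not InvokeAI's actual code, and the function name is hypothetical:

```python
def dype_auto_active(width: int, height: int, threshold: int = 1536) -> bool:
    """DyPE "Auto" mode: enable the algorithm when either dimension exceeds the threshold."""
    return max(width, height) > threshold
```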
Canvas Transform Smoothing
Another improvement contributed by @DustyShoe: The Canvas raster layer transform operation now supports multiple types of smoothing, thereby reducing the number of artifacts when an area is upscaled.
Text Search and Highlighting in the Image Metadata Tab
The Image Viewer's info (🛈) tab now has a search field that allows you to rapidly search and highlight text in image metadata, details, workflow and generation graph. In addition, the left margin of the metadata display has been widened to make the display more readable.
Thanks to @DustyShoe for...
v6.10.0
InvokeAI v6.10.0
This is the first InvokeAI Community Edition release since the closure of the commercial venture, and we think you will be pleased with the new features and capabilities. This release introduces backend support for the state-of-the-art Z-Image Turbo image generation models, and multiple frontend improvements that make working with InvokeAI an even smoother and more pleasurable experience.
The Z-Image Turbo Model Family
Z-Image Turbo (ZiT) is a bilingual image generation model that manages to combine high performance with a small footprint and excellent image generation quality. It excels in photorealistic image generation, renders both English and Chinese text accurately, and is easy to steer. The full model will run easily on consumer hardware with 16 GB VRAM, while quantized versions will run on significantly smaller cards with some loss of precision.
With this release InvokeAI runs almost all released versions of ZiT, including diffusers, safetensors, GGUF, FP8 and other quantized versions. However, be aware that the FP8 scaled weights models are not yet fully supported and will produce image artifacts. In addition, InvokeAI supports text2image, image2image, ZiT LoRA models, controlnet models, canvas functions and regional guidance. Image Prompts (IP) are not supported by ZiT, but similar functionality is expected when Z-Image Edit is publicly released.
To get started using ZiT, go to the Models tab and from the Launchpad select the Z-Image Turbo bundle to install all the available ZiT related models and dependencies (roughly 35 GB in total). Alternatively, you can select individual models from the Starter Models tab, and search for "Z-Image." The full and Q8 models will run on a 16 GB card. For cards with 6-8 GB of VRAM, choose the smaller quantized model, Z-Image Turbo GGUF Q4_K. Note that when using one of the quantized models, you will also need to install the standalone Qwen3 encoder and one of the Flux VAE models. This will be handled for you when you install a ZiT starter model.
When generating with these models it is recommended to use 8-9 steps and a CFG of 1. Be aware that, due to ZiT's strong prompt adherence, it does not generate as much image diversity as other models you may be used to. One way to increase image diversity is to create a custom workflow that adds noise to the Z-Image Text Encoder using @Pfannkuchensack's Image Seed Variance Enhancer Node.
In addition to the default Euler scheduler for ZiT we offer the more accurate but slower Heun scheduler, and a faster but less accurate LCM scheduler. Note that the LCM and Heun schedulers are still experimental, and may not produce optimal results in some workflows.
A big shout out to @Pfannkuchensack for his critical contributions to this effort.
New Workflow Features
We have two new improvements to the Workflow Editor:
- Workflow Tags: It is now possible to add multiple arbitrary text tags to your workflows. To set a tag on the current workflow, go to Details and scroll down to Tags. Enter a comma-delimited list of tags that describe your workflow, such as "image, bounding box", and save. The next time you browse your workflows, you will see a series of checkboxes for all the unique tags in your workflow collection. Select the tag checkboxes individually or in combination to filter the workflows that are displayed. This feature was contributed by @Pfannkuchensack.
- Prompt Template Node: Another @Pfannkuchensack workflow contribution is a new Prompt Template node, which allows you to apply any of the built-in or custom prompt style templates to a prompt before passing it onward to generation.
Prompt Weighting Hotkeys
@joshistoast has added a neat feature for adjusting the weighting of words and phrases in the prompt. Simply select a word or phrase in the prompt textbox and press Ctrl-Up Arrow to increase the weight of the selection (by adding "+" marks) or Ctrl-Down Arrow to decrease the weighting.
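For example, assuming compel-style attention syntax (illustrative; the exact markup InvokeAI inserts may differ slightly):

```
a castle on a hill          # original prompt
a (castle)+ on a hill       # "castle" selected, Ctrl-Up pressed once
a (castle)++ on a hill      # Ctrl-Up pressed a second time
```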
Limitations: The prompt weighting does not work properly with numeric weights, nor with prompts that contain the .add() or .blend() functions. This will be fixed in the next point release.
Hotkey Editor
Speaking of hotkeys, @Pfannkuchensack and @joshistoast contributed a new user interface for editing hotkeys. Any of the major UI functions, such as kicking off a generation, opening or closing panels, selecting tools in the canvas, gallery navigation, and so forth, can now be assigned a key shortcut combination. You can also assign multiple hotkeys to the same function.
To access the hotkey editor, go to the Settings (gear) menu in the bottom left, and select Hotkeys.
Bulk Operations in the Model Manager
You can now select multiple models in the Model Manager tab and apply bulk operations to them. Currently the only supported operation is to Delete unwanted models, but this feature will be expanded in the future to allow for model exporting, archiving, and other functionality.
This feature was contributed by @joshistoast, based on earlier work by @Pfannkuchensack.
Masked Area Extraction in the Canvas
It is now possible to extract an arbitrary portion of all visible raster layers that are covered by the Inpaint Mask. The extracted region is composited and added as a new raster layer. This allows for greater flexibility in the generation and manipulation of raster layers.
Thanks to @DustyShoe for this work.
PBR Maps
@blessedcoolant added support for PBR maps, a set of three texture images that can be used in 3D graphics applications to define a material's physical properties, such as glossiness. To generate the PBR maps, simply right click on any image in the viewer or gallery, and select "Filters -> PBR Maps". This will generate PBR Normal, Displacement, and Roughness map images suitable for use with a separate 3D rendering package.
New FLUX Model Schedulers
We've also added new schedulers for FLUX models (both dev and schnell). In addition to the default Euler scheduler, you can select the more accurate but slow Heun scheduler, and the faster but less accurate LCM scheduler. Look for the selection under "Advanced Options" in the Text2Image settings panel, or in the FLUX Denoise node in the workflow editor. Note that the LCM and Heun schedulers are still experimental, and may not produce optimal results in some workflows.
Thanks to @Pfannkuchensack for this contribution.
SDXL Color Compensation
When performing SDXL image2image operations, the color palette changes subtly and the discrepancy becomes increasingly obvious after several such operations. @dunkeroni has contributed a new advanced option to compensate for this color drift when generating with SDXL models.
Option to Release VRAM When Idle
InvokeAI tends to grab as much GPU VRAM as it needs and then hold on to it until the model cache is manually cleared or the server is restarted. This can be an annoyance for people who need the VRAM for other tasks. @lstein added a new feature that will automatically clear the InvokeAI model cache and release its VRAM after a set period of idleness. To activate this feature, add the configuration option model_cache_keep_alive_min to the invokeai.yaml configuration file. It takes a floating point number corresponding to the number of minutes of idleness before VRAM is released. For example, to release after 5 minutes of idleness, enter:
```yaml
model_cache_keep_alive_min: 5.0
```
Setting this value to 0 disables the feature. This is also the default if the configuration option is absent.
Bugfixes
Multiple bugs were caught and fixed in this release and are listed in the detailed changelog below.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
New Contributors
- @kyhavlov made their first contribution in #8613
- @aleyan made their first contribution in #8722
- @Copilot made their first contribution in #8693
Translation Credits
Many thanks to Riccardo Giovanetti (Italian) and RyoKoba (Japanese) who contributed their time and effort to providing translations of InvokeAI's text.
What's Changed
- Fix(nodes): color correct invocation by @dunkeroni in #8605
- chore: v6.8.1 by @psychedelicious in #8610
- refactor: model manager v3 by @psychedelicious in #8607
- tidy: docs and some tidying by @psychedelicious in #8614
- chore: prep for v6.9.0rc1 by @psychedelicious in #8615
- feat: reidentify model by @psychedelicious in #8618
- fix(ui): generator nodes by @psychedelicious in #8619...
v6.10.0rc2
InvokeAI v6.10.0rc2
This is the first InvokeAI Community Edition release since the closure of the commercial venture, and we think you will be happy with the progress. This release introduces backend support for the state-of-the-art Z-Image-Turbo image generation models, and multiple frontend improvements that make working with InvokeAI an even smoother and more pleasurable experience.
The Z-Image Turbo Model Family
Z-Image-Turbo (ZIT) is a bilingual image generation model that manages to combine high performance with a small footprint and excellent image generation quality. It excels in photorealistic image generation, renders both English and Chinese text accurately, and is easy to steer. The full model will run easily on consumer hardware with 16 GB VRAM, while quantized versions will run on significantly smaller cards with some loss of precision.
With this release InvokeAI runs almost all released versions of ZIT, including diffusers, safetensors, GGUF, FP8 and other quantized versions. However, be aware that the FP8 scaled weights models are not yet fully supported and will produce image artifacts. In addition, InvokeAI supports text2image, image2image, ZIT LoRA models, controlnet models, canvas functions and regional guidance. Image Prompts (IP) are not supported by ZIT, but similar functionality is expected when Z-Image Edit is publicly released.
To get started using ZIT, go to the Models tab and from the Launchpad select the Z-Image Turbo bundle to install all the available ZIT related models and dependencies (roughly 35 GB in total). Alternatively, you can select individual models from the Starter Models tab, and search for "Z-Image." The full and Q8 models will run on a 16 GB card. For cards with 6-8 GB of VRAM, choose the smaller quantized model, Z-Image Turbo GGUF Q4_K. Note that when using one of the quantized models, you will also need to install the standalone Qwen3 encoder and one of the Flux VAE models. This will be handled for you when you install a ZIT starter model.
When generating with these models it is recommended to use 8-9 steps and a CFG of 1. In addition to the default Euler scheduler for ZIT, we offer the more accurate but slower Heun scheduler, and a faster but less accurate LCM scheduler. Note that the LCM and Heun schedulers are still experimental, and may not produce optimal results in some workflows.
A big shout out to @Pfannkuchensack for his critical contributions to this effort.
New Workflow Features
We have two new improvements to the Workflow Editor:
- Workflow Tags: It is now possible to add multiple arbitrary text tags to your workflows. To set a tag on the current workflow, go to Details and scroll down to Tags. Enter a comma-delimited list of tags that describe your workflow, such as "image, bounding box", and save. The next time you browse your workflows, you will see a series of checkboxes for all the unique tags in your workflow collection. Select the tag checkboxes individually or in combination to filter the workflows that are displayed. This feature was contributed by @Pfannkuchensack.
- Prompt Template Node: Another @Pfannkuchensack workflow contribution is a new Prompt Template node, which allows you to apply any of the built-in or custom prompt style templates to a prompt before passing it onward to generation.
Hotkey Editor
@Pfannkuchensack and @joshistoast contributed a new user interface for editing hotkeys. Any of the major UI functions, such as kicking off a generation, opening or closing panels, selecting tools in the canvas, gallery navigation, and so forth, can now be assigned a key shortcut combination. You can also assign multiple hotkeys to the same function.
To access the hotkey editor, go to the Settings (gear) menu in the bottom left, and select Hotkeys.
Bulk Operations in the Model Manager
You can now select multiple models in the Model Manager tab and apply bulk operations to them. Currently the only supported operation is to Delete unwanted models, but this feature will be expanded in the future to allow for model exporting, archiving, and other functionality.
This feature was contributed by @joshistoast, based on earlier work by @Pfannkuchensack.
Masked Area Extraction in the Canvas
It is now possible to extract an arbitrary portion of all visible raster layers that are covered by the Inpaint Mask. The extracted region is composited and added as a new raster layer. This allows for greater flexibility in the generation and manipulation of raster layers.
Thanks to @DustyShoe for this work.
PBR Maps
@blessedcoolant added support for PBR maps, a set of three texture images that can be used in 3D graphics applications to define a material's physical properties, such as glossiness. To generate the PBR maps, simply right click on any image in the viewer or gallery, and select "Filters -> PBR Maps". This will generate PBR Normal, Displacement, and Roughness map images suitable for use with a separate 3D rendering package.
New FLUX Model Schedulers
We've also added new schedulers for FLUX models (both dev and schnell). In addition to the default Euler scheduler, you can select the more accurate but slow Heun scheduler, and the faster but less accurate LCM scheduler. Look for the selection under "Advanced Options" in the Text2Image settings panel, or in the FLUX Denoise node in the workflow editor. Note that the LCM and Heun schedulers are still experimental, and may not produce optimal results in some workflows.
Thanks to @Pfannkuchensack for this contribution.
SDXL Color Compensation
When performing SDXL image2image operations, the color palette changes subtly and the discrepancy becomes increasingly obvious after several such operations. @dunkeroni has contributed a new advanced option to compensate for this color drift when generating with SDXL models.
Bugfixes
Multiple bugs were caught and fixed in this release and are listed in the detailed changelog below.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- feat: reidentify model by @psychedelicious in #8618
- fix(ui): generator nodes by @psychedelicious in #8619
- chore(ui): point ui lib dep at gh repo by @psychedelicious in #8620
- chore: prep for v6.9.0 by @psychedelicious in #8623
- fix(mm): directory path leakage on scan folder error by @lstein in #8641
- feat: remove the ModelFooter in the ModelView and add the Delete Mode… by @Pfannkuchensack in #8635
- chore(codeowners): remove commercial dev codeowners by @lstein in #8650
- Fix to enable loading fp16 repo variant ControlNets by @DustyShoe in #8643
- ui: translations update from weblate by @weblate in #8599
- (chore) Update requirements to python 3.11-12 by @lstein in #8657
- Rework graph.py by @JPPhoto in #8642
- Fix memory issues when installing models on Windows by @gogurtenjoyer in #8652
- Feat: SDXL Color Compensation by @dunkeroni in #8637
- feat(ui): improve hotkey customization UX with interactive controls and validation by @Pfannkuchensack in #8649
- feat(ui): Color Picker V2 by @hipsterusername in #8585
- Feature(UI): bulk remove models loras by @Pfannkuchensack in #8659
- feat(prompts): hotkey controlled prompt weighting by @joshistoast in #8647
- Feature: Add Z-Image-Turbo model support by @Pfannkuchensack in #8671
- fix(ui): 🐛 HotkeysModal and SettingsModal initial focus by @joshistoast in #8687
- Feature: Add Tag System for user made Workflows by @Pfannkuchensack in #8673
- Feature(UI): add extract masked area from raster layers by @DustyShoe in #8667
- Feature: z-image Turbo Control Net by @Pfannkuchensack in #8679
- fix(z-image): Fix padding token shape mismatch for GGUF models by @Pfannkuchensack in #8690
- feat(starter-models): add Z-Image Turbo starter models by @Pfannkuchensack in #8689
- fix: CFG Scale min value reset to zero by @blessedcoolant in #8691
- feat(model manager): 💄 refactor model manager bulk actions UI by @jo...
v6.10.0rc1
InvokeAI v6.10.0rc1
This is the first InvokeAI Community Edition release since the closure of the commercial venture, and we think you will be happy with the progress. This release introduces backend support for the state-of-the-art Z-Image-Turbo image generation models, and multiple frontend improvements that make working with InvokeAI an even smoother and more pleasurable experience.
The Z-Image-Turbo Model Family
Z-Image-Turbo is a bilingual image generation model that manages to combine high performance with a small footprint and excellent image generation quality. It excels in photorealistic image generation, renders both English and Chinese text accurately, and is easy to steer. The full model will run easily on consumer hardware with 16 GB VRAM, while quantized versions will run on significantly smaller cards with some loss of precision.
With this release InvokeAI runs almost all released versions of Z-Image-Turbo, including diffusers, safetensors, GGUF, FP8 and other quantized versions. However, be aware that the FP8 scaled weights models are not yet fully supported and will produce image artifacts. In addition, InvokeAI supports text2image, image2image, Z-Image-Turbo LoRA models, controlnet models, canvas functions and regional guidance.
To get started using Z-Image-Turbo, go to the Models tab, select Starter Models, and search for "Z-Image." The full and Q8 models will run on a 16 GB card. For less VRAM, choose one of the smaller quantized models. When generating with these models it is recommended to use 8 steps and a CFG of 1.
A big shout out to @Pfannkuchensack for his critical contributions to this effort.
New Workflow Features
We have two new improvements to the Workflow Editor:
- Workflow Tags: It is now possible to add multiple arbitrary text tags to your workflows. To set a tag on the current workflow, go to Details and scroll down to Tags. Enter a comma-delimited list of tags that describe your workflow, such as "image, bounding box", and save. The next time you browse your workflows, you will see a series of checkboxes for all the unique tags in your workflow collection. Select the tag checkboxes individually or in combination to filter the workflows that are displayed. This feature was contributed by @Pfannkuchensack.
- Prompt Template Node: Another @Pfannkuchensack workflow contribution is a new Prompt Template node, which allows you to apply any of the built-in or custom prompt style templates to a prompt before passing it onward to generation.
Hotkey Editor
@Pfannkuchensack and @joshistoast contributed a new user interface for editing hotkeys. Any of the major UI functions, such as kicking off a generation, opening or closing panels, selecting tools in the canvas, gallery navigation, and so forth, can now be assigned a key shortcut combination. You can also assign multiple hotkeys to the same function.
To access the hotkey editor, go to the Settings (gear) menu in the bottom left, and select Hotkeys.
Bulk Operations in the Model Manager
You can now select multiple models in the Model Manager tab and apply bulk operations to them. Currently the only supported operation is to Delete unwanted models, but this feature will be expanded in the future to allow for model exporting, archiving, and other functionality.
This feature was contributed by @joshistoast, based on earlier work by @Pfannkuchensack.
Masked Area Extraction in the Canvas
It is now possible to extract an arbitrary portion of all visible raster layers that are covered by the Inpaint Mask. The extracted region is composited and added as a new raster layer. This allows for greater flexibility in the generation and manipulation of raster layers.
Thanks to @DustyShoe for this work.
SDXL Color Compensation
When performing SDXL image2image operations, the color palette changes subtly and the discrepancy becomes increasingly obvious after several such operations. @dunkeroni has contributed a new advanced option to compensate for this color drift when generating with SDXL models.
Bugfixes
Multiple bugs were caught and fixed in this release and are listed in the detailed changelog below.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- feat: reidentify model by @psychedelicious in #8618
- fix(ui): generator nodes by @psychedelicious in #8619
- chore(ui): point ui lib dep at gh repo by @psychedelicious in #8620
- chore: prep for v6.9.0 by @psychedelicious in #8623
- fix(mm): directory path leakage on scan folder error by @lstein in #8641
- feat: remove the ModelFooter in the ModelView and add the Delete Mode… by @Pfannkuchensack in #8635
- chore(codeowners): remove commercial dev codeowners by @lstein in #8650
- Fix to enable loading fp16 repo variant ControlNets by @DustyShoe in #8643
- ui: translations update from weblate by @weblate in #8599
- (chore) Update requirements to python 3.11-12 by @lstein in #8657
- Rework graph.py by @JPPhoto in #8642
- Fix memory issues when installing models on Windows by @gogurtenjoyer in #8652
- Feat: SDXL Color Compensation by @dunkeroni in #8637
- feat(ui): improve hotkey customization UX with interactive controls and validation by @Pfannkuchensack in #8649
- feat(ui): Color Picker V2 by @hipsterusername in #8585
- Feature(UI): bulk remove models loras by @Pfannkuchensack in #8659
- feat(prompts): hotkey controlled prompt weighting by @joshistoast in #8647
- Feature: Add Z-Image-Turbo model support by @Pfannkuchensack in #8671
- fix(ui): 🐛 HotkeysModal and SettingsModal initial focus by @joshistoast in #8687
- Feature: Add Tag System for user made Workflows by @Pfannkuchensack in #8673
- Feature(UI): add extract masked area from raster layers by @DustyShoe in #8667
- Feature: z-image Turbo Control Net by @Pfannkuchensack in #8679
- fix(z-image): Fix padding token shape mismatch for GGUF models by @Pfannkuchensack in #8690
- feat(starter-models): add Z-Image Turbo starter models by @Pfannkuchensack in #8689
- fix: CFG Scale min value reset to zero by @blessedcoolant in #8691
- feat(model manager): 💄 refactor model manager bulk actions UI by @joshistoast in #8684
- feat(hotkeys): ✨ Overhaul hotkeys modal UI by @joshistoast in #8682
- Feature (UI): add model path update for external models by @Pfannkuchensack in #8675
- fix support multi-subfolder downloads for Z-Image Qwen3 encoder by @Pfannkuchensack in #8692
- Feature: add prompt template node by @Pfannkuchensack in #8680
- feat(hotkeys modal): ⚡ loading state + performance improvements by @joshistoast in #8694
- Feature/user workflow tags by @Pfannkuchensack in #8698
- feat(backend): add support for xlabs Flux LoRA format by @Pfannkuchensack in #8686
- fix(prompts): 🐛 prompt attention behaviors, add tests by @joshistoast in #8683
- Workaround for Windows being unable to remove tmp directories when installing GGUF files by @lstein in #8699
- chore: bump version to v6.10.0rc1 by @lstein in #8695
- Feature: Add Z-Image-Turbo regional guidance by @Pfannkuchensack in #8672
Full Changelog: v6.9.0...v6.10.0rc1
v6.9.0
This release focuses on improvements to Invoke's Model Manager. The changes are mostly internal, with one significant user-facing change and a data migration.
On first run after installing this release, Invoke will do some data migrations:
- Run-of-the-mill database updates.
- Update some model records to work with internal Model Manager changes, described below.
- Restructure the Invoke-managed models directory into a flat directory structure, where each model gets its own folder named by the model's UUID. Models outside the Invoke-managed models directory are not moved.
If you see any errors or run into any problems, please create a GH issue or ask for help in the #new-release-discussion channel of the Invoke discord.
Model Installation Improvements
Invoke analyzes models during install to attempt to identify them, recording their attributes in the database. This includes the type of model, its base architecture, its file format, and so on. This release includes a number of improvements to that process, both user-facing and internal.
Unknown Models
Previously, when this identification failed, we gave up on that model. If you had downloaded the model via Invoke, we would delete the downloaded file.
As of this release, if we cannot identify a model, we will install it as an Unknown model. If you know what kind of model it is, you can try editing the model via the Model Manager UI to set its type, base, format, and so on. Invoke may be able to run the model after this.
If the model still doesn't work, please create a GH issue linking to the model so we can improve model support. The internal changes in this release are focused on making it easier for contributors to support new models.
Invoke-managed Models Directory
Previously, as a relic of times long past, Invoke's internal model storage was organized in nested folders: <models_dir>/<type>/<base>/model.safetensors. Many moons ago, we didn't have a database, and models were identified by putting them into the right folder. This has not been the case for a long time.
As of this release, Invoke's internal model storage has a normalized, flat directory structure. Each model gets its own folder, named by its unique key: <models_dir>/<model_key_uuid>/model.safetensors.
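The resulting layout looks something like this (the UUIDs are shortened, illustrative placeholders):

```
<models_dir>/
├── 3f7a9c2e-…/
│   └── model.safetensors
└── b81d4e60-…/
    └── model.safetensors
```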
On first startup of this release, Invoke will move model files into the new flat structure. Your non-Invoke-managed models (i.e. models outside the Invoke-managed models directory) won't be touched.
We understand this change may seem user-unfriendly at first, but there are good reasons for it:
- This structure eliminates the possibility of model name conflicts, which have caused numerous hard-to-fix bugs and errors.
- It reinforces that the internal models directory is Invoke-managed:
- Adding models to this directory manually does not add them to Invoke. With the previous structure, users often dropped models into a folder and expected them to work.
- Deleting models from this directory or moving them in the directory causes the database to lose track of the models.
- It obviates the need to move models around when changing their type and base.
Refactored Model Identification System
Several months ago, we started working on a new API to improve model identification (aka "probing" or "classification"). This process involves analyzing a model's files to determine what kind of model it is.
As of this release, the new API is complete and all legacy model identification logic has been ported to it. Along with the changes in #8577, the process of adding new models to Invoke is much simpler.
Model Identification Test Suite
Besides the business logic improvements, model identification is now fully testable!
When we find a model that is not identified correctly, we can add that model to the test suite, which currently has test cases for 70 models.
Models can be many GB in size and are thus not particularly well-suited to be stored in a git repo. We can work around this by creating lightweight representations of models. Model identification typically relies on analyzing model config files or state dict keys and shapes, but we do not need the tensors themselves for this process.
This allows us to strip out the weights from model files, leaving only the model's "skeleton" as a test case. The 70-model test suite is currently about 115MB but represents hundreds of GB of models.
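The approach can be sketched like so. This is a minimal illustration of the idea, not the project's actual tooling; the JSON output format here is an assumption:

```python
import json
from safetensors.torch import load_file

def make_skeleton(path: str) -> dict:
    """Record each tensor's key, shape and dtype; discard the weights themselves."""
    state_dict = load_file(path)
    return {
        key: {"shape": list(tensor.shape), "dtype": str(tensor.dtype)}
        for key, tensor in state_dict.items()
    }

# Example: produce a lightweight test fixture for model identification.
# with open("model.skeleton.json", "w") as f:
#     json.dump(make_skeleton("model.safetensors"), f, indent=2)
```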
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- refactor: model manager v3 by @psychedelicious in #8607
- tidy: docs and some tidying by @psychedelicious in #8614
- chore: prep for v6.9.0rc1 by @psychedelicious in #8615
- feat: reidentify model by @psychedelicious in #8618
- fix(ui): generator nodes by @psychedelicious in #8619
- chore(ui): point ui lib dep at gh repo by @psychedelicious in #8620
Full Changelog: v6.8.1...v6.9.0
v6.9.0rc3
This release focuses on improvements to Invoke's Model Manager. The changes are mostly internal, with one significant user-facing change and a data migration.
On first run after installing this release, Invoke will do some data migrations:
- Run-of-the-mill database updates.
- Update some model records to work with internal Model Manager changes, described below.
- Restructure the Invoke-managed models directory into a flat directory structure, where each model gets its own folder named by the model's UUID. Models outside the Invoke-managed models directory are not moved.
If you see any errors or run into any problems, please create a GH issue or ask for help in the #new-release-discussion channel of the Invoke discord.
Model Installation Improvements
Invoke analyzes models during install to attempt to identify them, recording their attributes in the database. This includes the type of model, its base architecture, its file format, and so on. This release includes a number of improvements to that process, both user-facing and internal.
Unknown Models
Previously, when this identification failed, we gave up on that model. If you had downloaded the model via Invoke, we would delete the downloaded file.
As of this release, if we cannot identify a model, we will install it as an Unknown model. If you know what kind of model it is, you can try editing the model via the Model Manager UI to set its type, base, format, and so on. Invoke may be able to run the model after this.
If the model still doesn't work, please create a GH issue linking to the model so we can improve model support. The internal changes in this release are focused on making it easier for contributors to support new models.
Invoke-managed Models Directory
Previously, as a relic of times long past, Invoke's internal model storage was organized in nested folders: <models_dir>/<type>/<base>/model.safetensors. Many moons ago, we didn't have a database, and models were identified by putting them into the right folder. This has not been the case for a long time.
As of this release, Invoke's internal model storage has a normalized, flat directory structure. Each model gets its own folder, named by its unique key: <models_dir>/<model_key_uuid>/model.safetensors.
On first startup of this release, Invoke will move model files into the new flat structure. Your non-Invoke-managed models (i.e. models outside the Invoke-managed models directory) won't be touched.
We understand this change may seem user-unfriendly at first, but there are good reasons for it:
- This structure eliminates the possibility of model name conflicts, which have caused numerous hard-to-fix bugs and errors.
- It reinforces that the internal models directory is Invoke-managed:
- Adding models to this directory manually does not add them to Invoke. With the previous structure, users often dropped models into a folder and expected them to work.
- Deleting models from this directory or moving them in the directory causes the database to lose track of the models.
- It obviates the need to move models around when changing their type and base.
Refactored Model Identification System
Several months ago, we started working on a new API to improve model identification (aka "probing" or "classification"). This process involves analyzing a model's files to determine what kind of model it is.
As of this release, the new API is complete and all legacy model identification logic has been ported to it. Along with the changes in #8577, the process of adding new models to Invoke is much simpler.
Model Identification Test Suite
Besides the business logic improvements, model identification is now fully testable!
When we find a model that is not identified correctly, we can add that model to the test suite, which currently has test cases for 70 models.
Models can be many GB in size and are thus not particularly well-suited to be stored in a git repo. We can work around this by creating lightweight representations of models. Model identification typically relies on analyzing model config files or state dict keys and shapes, but we do not need the tensors themselves for this process.
This allows us to strip out the weights from model files, leaving only the model's "skeleton" as a test case. The 70-model test suite is currently about 115MB but represents hundreds of GB of models.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- chore: v6.8.1 by @psychedelicious in #8610
- refactor: model manager v3 by @psychedelicious in #8607
- tidy: docs and some tidying by @psychedelicious in #8614
- chore: prep for v6.9.0rc1 by @psychedelicious in #8615
Full Changelog: v6.8.1...v6.9.0rc3
v6.9.0rc2
This release focuses on improvements to Invoke's Model Manager. The changes are mostly internal, with one significant user-facing change and a data migration.
On first run after installing this release, Invoke will do some data migrations:
- Run-of-the-mill database updates.
- Update some model records to work with internal Model Manager changes, described below.
- Restructure the Invoke-managed models directory into a flat directory structure, where each model gets its own folder named by the model's UUID. Models outside the Invoke-managed models directory are not moved.
If you see any errors or run into any problems, please create a GH issue or ask for help in the #new-release-discussion channel of the Invoke discord.
Model Installation Improvements
Invoke analyzes models during install to attempt to identify them, recording their attributes in the database. This includes the type of model, its base architecture, its file format, and so on. This release includes a number of improvements to that process, both user-facing and internal.
Unknown Models
Previously, when this identification failed, we gave up on that model. If you had downloaded the model via Invoke, we would delete the downloaded file.
As of this release, if we cannot identify a model, we will install it as an Unknown model. If you know what kind of model it is, you can try editing the model via the Model Manager UI to set its type, base, format, and so on. Invoke may be able to run the model after this.
If the model still doesn't work, please create a GH issue linking to the model so we can improve model support. The internal changes in this release are focused on making it easier for contributors to support new models.
Invoke-managed Models Directory
Previously, as a relic of times long past, Invoke's internal model storage was organized in nested folders: <models_dir>/<type>/<base>/model.safetensors. Many moons ago, we didn't have a database, and models were identified by putting them into the right folder. This has not been the case for a long time.
As of this release, Invoke's internal model storage has a normalized, flat directory structure. Each model gets its own folder, named by its unique key: <models_dir>/<model_key_uuid>/model.safetensors.
On first startup of this release, Invoke will move model files into the new flat structure. Your non-Invoke-managed models (i.e. models outside the Invoke-managed models directory) won't be touched.
We understand this change may seem user-unfriendly at first, but there are good reasons for it:
- This structure eliminates the possibility of model name conflicts, which have caused numerous hard-to-fix bugs and errors.
- It reinforces that the internal models directory is Invoke-managed:
- Adding models to this directory manually does not add them to Invoke. With the previous structure, users often dropped models into a folder and expected them to work.
- Deleting models from this directory or moving them in the directory causes the database to lose track of the models.
- It obviates the need to move models around when changing their type and base.
Refactored Model Identification System
Several months ago, we started working on a new API to improve model identification (aka "probing" or "classification"). This process involves analyzing a model's files to determine what kind of model it is.
As of this release, the new API is complete and all legacy model identification logic has been ported to it. Along with the changes in #8577, the process of adding new models to Invoke is much simpler.
Model Identification Test Suite
Besides the business logic improvements, model identification is now fully testable!
When we find a model that is not identified correctly, we can add that model to the test suite, which currently has test cases for 70 models.
Models can be many GB in size and are thus not particularly well-suited to be stored in a git repo. We can work around this by creating lightweight representations of models. Model identification typically relies on analyzing model config files or state dict keys and shapes, but we do not need the tensors themselves for this process.
This allows us to strip out the weights from model files, leaving only the model's "skeleton" as a test case. The 70-model test suite is currently about 115MB but represents hundreds of GB of models.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- chore: v6.8.1 by @psychedelicious in #8610
- refactor: model manager v3 by @psychedelicious in #8607
- tidy: docs and some tidying by @psychedelicious in #8614
Full Changelog: v6.8.1...v6.9.0rc2
v6.9.0rc1
This release focuses on improvements to Invoke's Model Manager. The changes are mostly internal, with one significant user-facing change and a data migration.
On first run after installing this release, Invoke will do some data migrations:
- Run-of-the-mill database updates.
- Update some model records to work with internal Model Manager changes, described below.
- Restructure the Invoke-managed models directory into a flat directory structure, where each model gets its own folder named by the model's UUID. Models outside the Invoke-managed models directory are not moved.
If you see any errors or run into any problems, please create a GH issue or ask for help in the #new-release-discussion channel of the Invoke discord.
Model Installation Improvements
Invoke analyzes models during install to attempt to identify them, recording their attributes in the database. This includes the type of model, its base architecture, its file format, and so on. This release includes a number of improvements to that process, both user-facing and internal.
Unknown Models
Previously, when this identification failed, we gave up on that model. If you had downloaded the model via Invoke, we would delete the downloaded file.
As of this release, if we cannot identify a model, we will install it as an Unknown model. If you know what kind of model it is, you can try editing the model via the Model Manager UI to set its type, base, format, and so on. Invoke may be able to run the model after this.
If the model still doesn't work, please create a GH issue linking to the model so we can improve model support. The internal changes in this release are focused on making it easier for contributors to support new models.
Invoke-managed Models Directory
Previously, as a relic of times long past, Invoke's internal model storage was organized in nested folders: <models_dir>/<type>/<base>/model.safetensors. Many moons ago, we didn't have a database, and models were identified by putting them into the right folder. This has not been the case for a long time.
As of this release, Invoke's internal model storage has a normalized, flat directory structure. Each model gets its own folder, named by its unique key: <models_dir>/<model_key_uuid>/model.safetensors.
On first startup of this release, Invoke will move model files into the new flat structure. Your non-Invoke-managed models (i.e. models outside the Invoke-managed models directory) won't be touched.
We understand this change may seem user-unfriendly at first, but there are good reasons for it:
- This structure eliminates the possibility of model name conflicts, which have caused numerous hard-to-fix bugs and errors.
- It reinforces that the internal models directory is Invoke-managed:
- Adding models to this directory manually does not add them to Invoke. With the previous structure, users often dropped models into a folder and expected them to work.
- Deleting models from this directory or moving them in the directory causes the database to lose track of the models.
- It obviates the need to move models around when changing their type and base.
Refactored Model Identification System
Several months ago, we started working on a new API to improve model identification (aka "probing" or "classification"). This process involves analyzing a model's files to determine what kind of model it is.
As of this release, the new API is complete and all legacy model identification logic has been ported to it. Along with the changes in #8577, the process of adding new models to Invoke is much simpler.
Model Identification Test Suite
Besides the business logic improvements, model identification is now fully testable!
When we find a model that is not identified correctly, we can add that model to the test suite, which currently has test cases for 70 models.
Models can be many GB in size and are thus not particularly well-suited to be stored in a git repo. We can work around this by creating lightweight representations of models. Model identification typically relies on analyzing model config files or state dict keys and shapes, but we do not need the tensors themselves for this process.
This allows us to strip out the weights from model files, leaving only the model's "skeleton" as a test case. The 70-model test suite is currently about 115MB but represents hundreds of GB of models.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- chore: v6.8.1 by @psychedelicious in #8610
- refactor: model manager v3 by @psychedelicious in #8607
- tidy: docs and some tidying by @psychedelicious in #8614
Full Changelog: v6.8.1...v6.9.0rc1
v6.8.1
This patch release fixes the "Exception in ASGI application" startup error that prevented Invoke from starting.
The error was introduced by an upstream dependency (fastapi). We've pinned the fastapi dependency to the last known working version.
Installing and Updating
The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- Fix(nodes): color correct invocation by @dunkeroni in #8605
Full Changelog: v6.8.0...v6.8.1