Releases: bytedance/Protenix
v2.0.0: Protenix-v2 Model Released
What's Changed
🚀 Key Highlights
• Protenix-v2 Model Released: Introduced protenix-v2, an enhanced-capacity model (464M parameters). It delivers significant accuracy improvements on challenging antibody-antigen complex structures and improves ligand-related plausibility.
• Training-Free Guidance (TFG) Module: Introduced a new guidance module that enforces geometric and physical constraints (Steric, Torsion, Bond, etc.) during diffusion sampling, with no retraining required.
✨ New Features & Enhancements
• Inference Efficiency Breakthrough: protenix-v2 delivers large efficiency gains: with only 5 sampling seeds, it outperforms protenix-v1 at 1000 seeds.
• Configurable TFG Capabilities: Exposed via the --use_tfg_guidance CLI flag. Supported potentials include VinaStericPotential, ExperimentalTorsionPotential, and PairwiseDistancePotential.
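To illustrate the general idea behind training-free guidance (a minimal, hypothetical sketch, not Protenix's implementation): at each sampling step, the coordinates are nudged down the gradient of a constraint potential, here a simple quadratic pairwise-distance restraint. All names and values below are illustrative.

```python
import numpy as np

def pairwise_distance_potential(x, i, j, target):
    """Quadratic restraint on the distance between atoms i and j."""
    d = np.linalg.norm(x[i] - x[j])
    return (d - target) ** 2

def potential_grad(x, i, j, target):
    """Analytic gradient of the restraint w.r.t. all coordinates."""
    diff = x[i] - x[j]
    d = np.linalg.norm(diff)
    unit = diff / d
    g = np.zeros_like(x)
    g[i] = 2.0 * (d - target) * unit
    g[j] = -2.0 * (d - target) * unit
    return g

def guided_step(x, denoised, step=0.1, guidance=0.2, i=0, j=1, target=3.8):
    """One sampling update: move toward the denoiser output, then take a
    gradient step on the constraint potential (the training-free part)."""
    x = x + step * (denoised - x)                        # diffusion-style update
    x = x - guidance * potential_grad(x, i, j, target)   # constraint guidance
    return x

# Toy 2-atom system: guidance pulls the pair toward the 3.8 Å target.
x = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
for _ in range(50):
    x = guided_step(x, denoised=x)  # identity denoiser isolates the guidance term
final_distance = float(np.linalg.norm(x[0] - x[1]))
```

The key property, mirrored in the sketch: the constraint is applied purely at sampling time through a differentiable potential, so no model weights change.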
📖 Documentation & Assets
• Bumped protenix version to 2.0.0.
• Published the new Protenix-v2 Technical Report (docs/PX2.pdf).
• Updated README.md and docs/supported_models.md with the latest Protenix-v2 benchmarks, showing a 9 to 13 percentage-point absolute gain in success rate over Protenix-v1 at the DockQ > 0.23 threshold.
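For context, a success rate at this threshold is simply the fraction of targets whose DockQ score clears 0.23, and the reported gain is the difference in that fraction. A minimal sketch (the scores below are made up for illustration, not Protenix benchmark data):

```python
def success_rate(dockq_scores, threshold=0.23):
    """Percentage of targets whose DockQ score clears the threshold."""
    hits = sum(1 for s in dockq_scores if s > threshold)
    return 100.0 * hits / len(dockq_scores)

# Illustrative per-target scores for two hypothetical model versions.
v1_scores = [0.10, 0.25, 0.05, 0.40, 0.18]
v2_scores = [0.30, 0.26, 0.05, 0.55, 0.24]

# Absolute gain in percentage points (the unit used in the benchmarks above).
gain_pp = success_rate(v2_scores) - success_rate(v1_scores)
```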
v1.1.0: Support fallback to torch cache dir for layer_norm
What's Changed
- Check if current directory is writable before compiling layer_norm cuda extension
- Fallback to torch cache directory if not writable
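The fallback amounts to a writability probe before picking a build directory; a hedged stdlib sketch (function and directory names are illustrative; the actual extension build goes through torch.utils.cpp_extension, which honors TORCH_EXTENSIONS_DIR):

```python
import os
import tempfile

def pick_build_dir(preferred):
    """Return `preferred` if files can actually be created in it,
    otherwise fall back to the torch extensions cache directory."""
    try:
        # Probe with a real file: os.access can be unreliable on some filesystems.
        with tempfile.NamedTemporaryFile(dir=preferred):
            return preferred
    except OSError:
        # Same default location torch uses when TORCH_EXTENSIONS_DIR is unset.
        return os.environ.get(
            "TORCH_EXTENSIONS_DIR",
            os.path.expanduser("~/.cache/torch_extensions"),
        )

writable_dir = tempfile.mkdtemp()
chosen = pick_build_dir(writable_dir)  # writable, so it is used as-is
```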
v1.0.9: Add an include_discont_poly_poly_bonds switch for the 'protenix json' command
What's Changed
- Add an include_discont_poly_poly_bonds switch for the protenix json command
v1.0.8: Disable triton fused ops by default
What's Changed
- Triton fused ops for dropout residual are now disabled by default as they cause a slight drop in foldbench performance. They can be enabled via the FUSED_DROPOUT_RESIDUAL environment variable.
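The opt-in pattern looks roughly like the sketch below (the exact values the environment variable accepts are an assumption; check the Protenix source for the real parsing):

```python
import os

def fused_dropout_residual_enabled():
    """Triton fused dropout+residual stays off unless explicitly opted in."""
    # Assumed truthy spellings; only the variable name comes from the release note.
    value = os.environ.get("FUSED_DROPOUT_RESIDUAL", "0")
    return value.lower() in ("1", "true", "yes")

assert not fused_dropout_residual_enabled()   # disabled by default
os.environ["FUSED_DROPOUT_RESIDUAL"] = "1"    # opt in for this process
assert fused_dropout_residual_enabled()
```

Defaulting to off is the conservative choice here, since the fused kernels trade a small foldbench accuracy drop for speed.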
v1.0.7: Refactor: consolidate version management and bump to 1.0.7
What's Changed
- Add protenix/version.py to store version string
- Update setup.py to read version dynamically
- Update runner/batch_inference.py to use shared version
- Expose version in protenix/__init__.py
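The single-source pattern described in these bullets can be sketched as follows (file names follow the bullets; the regex-based read in setup.py is an assumption about the implementation):

```python
import pathlib
import re
import tempfile

# protenix/version.py would hold a single canonical line like this:
VERSION_FILE_CONTENT = '__version__ = "1.0.7"\n'

def read_version(path):
    """What setup.py can do: parse the version string from version.py
    without importing the package (which may pull in heavy dependencies)."""
    text = pathlib.Path(path).read_text()
    match = re.search(r'__version__\s*=\s*"([^"]+)"', text)
    if match is None:
        raise RuntimeError(f"no __version__ found in {path}")
    return match.group(1)

# Demonstrate on a throwaway copy of the version file.
with tempfile.TemporaryDirectory() as d:
    version_file = pathlib.Path(d) / "version.py"
    version_file.write_text(VERSION_FILE_CONTENT)
    parsed = read_version(version_file)
```

With this in place, setup.py, the CLI, and protenix/__init__.py all report the same string from one source of truth.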
v1.0.6: Support custom id field in input json
What's Changed
- Auto-detect CUDA architectures for layer norm kernel compilation by @longleo17 in #252
- Featurizer: zero-copy tensor creation + numpy vectorization by @longleo17 in #246
- MSA encoding: vectorized sequence parsing by @longleo17 in #247
- Dataset: vectorize pandas/numpy operations by @longleo17 in #248
- Attention precision fix + LDDT fused thresholds + loss caching by @longleo17 in #249
- Fused Triton dropout+residual-add kernel (v2, cueq fix) by @longleo17 in #256
- Fix/blackwell gpu compatibility by @giulioisac in #257
- chain reindexing by @giulioisac in #260
- fix return identity matrix in eye_mask when opposite is False by @mrzzmrzz in #261
- Remove invalid AdamW arg and handle None param_names by @mrzzmrzz in #262
- Support custom id field in input json
Full Changelog: v1.0.5...v1.0.6
v1.0.5: Add need_atom_confidence option in protenix cli
What's Changed
- inference: fix MSA assertion and harden input/model-name validation by @ullahsamee in #235
- On-demand remote fetching of mmCIF template files by @giulioisac in #243
- Add need_atom_confidence option in protenix cli
New Contributors
- @giulioisac made their first contribution in #243
Full Changelog: v1.0.4...v1.0.5
v1.0.4: Raise an error if templates are used and kalign is not found
What's Changed
- fix: Add proper file handle management in colab_request_utils.py by @hobostay in #228
- triangular: Fix PyTorch 2.9 tensor indexing deprecation warnings by @ullahsamee in #229
- Raise an error if templates are used and kalign is not found
New Contributors
- @hobostay made their first contribution in #228
- @ullahsamee made their first contribution in #229
Full Changelog: v1.0.3...v1.0.4
v1.0.3: Fix edge case inference for mini_esm models
v1.0.2: Release RNA MSA data and change back chain ID format
What's Changed
- Release RNA MSA data and corresponding configuration documentation.
- Minor change to the chain ID format in the final output CIF.