
Use the canonical Llama model ID#468

Merged
AnthonyRonning merged 1 commit into master from canonical-llama3-3-70b
Apr 9, 2026

Conversation

Contributor

@AnthonyRonning AnthonyRonning commented Apr 9, 2026

Summary

  • use llama3-3-70b as the canonical Llama model identifier
  • normalize fetched and persisted model IDs so legacy saved chats map to the supported ID
  • remove stale legacy aliases and update the remaining examples and reference text

Testing

  • just format
  • just lint
  • just build
  • cd frontend && bun test

Keep model selection, fetched models, and proxy examples aligned on the supported ID while dropping stale legacy aliases.

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>

Correct the canonical Llama model ID

Use llama3-3-70b as the supported identifier and migrate dotted legacy values to it.

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>

coderabbitai bot commented Apr 9, 2026

📝 Walkthrough

A centralized model ID aliasing system is introduced via a new LLAMA_MODEL_ID constant and MODEL_NAME_ALIASES lookup table. The hardcoded conditional aliasing logic is replaced with this table-driven approach, and duplicate Llama model configurations are consolidated across the codebase.

Changes

  • Aliasing Infrastructure (frontend/src/utils/utils.ts): Replaced the hardcoded conditional aliasing with a centralized MODEL_NAME_ALIASES lookup table, added a LLAMA_MODEL_ID constant, and updated aliasModelName to use a table-based lookup with a fallback.
  • Model Configuration & Deduplication (frontend/src/components/ModelSelector.tsx): Updated MODEL_CONFIG to use the computed LLAMA_MODEL_ID key instead of the hardcoded AWQ identifier, refactored getModelTokenLimit to resolve names via aliasModelName, normalized model IDs in the deduplication logic, and removed the special-case name-preference filter.
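A sketch of the getModelTokenLimit refactor described above; the token-limit values and the fallback constant are placeholders, not the real values from ModelSelector.tsx:

```typescript
const LLAMA_MODEL_ID = "llama3-3-70b";

const MODEL_NAME_ALIASES: Record<string, string> = {
  "llama-3.3-70b": LLAMA_MODEL_ID,
};

const aliasModelName = (model: string): string =>
  MODEL_NAME_ALIASES[model] ?? model;

// Keyed by the computed canonical ID rather than a hardcoded AWQ path.
const MODEL_CONFIG: Record<string, { tokenLimit: number }> = {
  [LLAMA_MODEL_ID]: { tokenLimit: 8192 }, // placeholder limit
};

const DEFAULT_TOKEN_LIMIT = 4096; // placeholder fallback

// Resolve aliases first so legacy names hit the same config entry.
function getModelTokenLimit(model: string): number {
  const canonical = aliasModelName(model);
  return MODEL_CONFIG[canonical]?.tokenLimit ?? DEFAULT_TOKEN_LIMIT;
}
```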
  • Model State Normalization (frontend/src/state/LocalStateContext.tsx): Introduced a normalizeAvailableModels function to deduplicate and canonicalize model entries, applied at state initialization and on model list updates.
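A plausible shape for normalizeAvailableModels, assuming a model entry carries at least an id; the Model interface and first-wins dedup policy are assumptions, not taken from the diff:

```typescript
interface Model {
  id: string;
  name?: string;
}

const MODEL_NAME_ALIASES: Record<string, string> = {
  "llama-3.3-70b": "llama3-3-70b",
};

const aliasModelName = (model: string): string =>
  MODEL_NAME_ALIASES[model] ?? model;

// Canonicalize each fetched ID, then keep only the first occurrence of
// each canonical ID so duplicate Llama entries collapse to one.
function normalizeAvailableModels(models: Model[]): Model[] {
  const seen = new Set<string>();
  const out: Model[] = [];
  for (const model of models) {
    const id = aliasModelName(model.id);
    if (!seen.has(id)) {
      seen.add(id);
      out.push({ ...model, id });
    }
  }
  return out;
}
```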
  • Configuration & Documentation (docs/conversations-api-implementation.md, frontend/public/llms-full.txt, frontend/src/components/apikeys/ProxyConfigSection.tsx): Updated model identifier references from llama-3.3-70b and the long AWQ path to the standardized llama3-3-70b format; replaced hardcoded strings with LLAMA_MODEL_ID interpolation in example configurations.
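The interpolation change might look like the following sketch; the endpoint URL and request shape are illustrative, not the actual content of ProxyConfigSection.tsx:

```typescript
const LLAMA_MODEL_ID = "llama3-3-70b";

// Rendering the example from the constant keeps the docs in sync with the
// canonical ID instead of repeating a hardcoded string. Hypothetical URL.
const curlExample = `curl https://proxy.example.com/v1/chat/completions \\
  -H "Authorization: Bearer $API_KEY" \\
  -d '{"model": "${LLAMA_MODEL_ID}", "messages": [{"role": "user", "content": "Hi"}]}'`;
```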

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Poem

🐰 The models are many, the IDs grow long,
So we built a lookup table to set things right and strong!
With LLAMA_MODEL_ID and aliases neat,
No more hardcoded chains—one source of truth, complete! ✨

🚥 Pre-merge checks | ✅ 2 passed
  • Title check — ✅ Passed: The title "Use the canonical Llama model ID" directly summarizes the main objective of the pull request: standardizing on a single canonical Llama model identifier across the codebase.
  • Description check — ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 potential issue.

View 4 additional findings in Devin Review.


Comment on lines +79 to +85
const MODEL_NAME_ALIASES: Record<string, string> = {
  "llama-3.3-70b": LLAMA_MODEL_ID,
  "gemma-3-27b": "gemma4-31b",
  "deepseek-r1-0528": "kimi-k2-5",
  "kimi-k2": "kimi-k2-5",
  "kimi-k2-thinking": "kimi-k2-5"
};

🔴 Dropped backward-compatibility aliases for three old model names

The refactoring from if-else to a lookup table removed three previously-existing aliases:

  • "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4" (was → "llama-3.3-70b", which chains to "llama3-3-70b")
  • "qwen3-coder-480b" (was → "kimi-k2-5")
  • "leon-se/gemma-3-27b-it-fp8-dynamic" (was → "gemma4-31b")

These aliases exist for backward compatibility with persisted user data. When getChatById loads a chat at LocalStateContext.tsx:241, it calls aliasModelName(parsedChat.model). If a user has an old chat stored with one of these model names, it will now pass through un-aliased. The unrecognized model name then won't match any MODEL_CONFIG entry in ModelSelector.tsx, causing incorrect token limit fallbacks (via getModelTokenLimit at frontend/src/components/ModelSelector.tsx:78) and broken display names. Similarly, persistChat at frontend/src/state/LocalStateContext.tsx:96 will re-persist the unrecognized name rather than migrating it.

Suggested change

Before:
  const MODEL_NAME_ALIASES: Record<string, string> = {
    "llama-3.3-70b": LLAMA_MODEL_ID,
    "gemma-3-27b": "gemma4-31b",
    "deepseek-r1-0528": "kimi-k2-5",
    "kimi-k2": "kimi-k2-5",
    "kimi-k2-thinking": "kimi-k2-5"
  };

After:
  const MODEL_NAME_ALIASES: Record<string, string> = {
    "ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4": LLAMA_MODEL_ID,
    "llama-3.3-70b": LLAMA_MODEL_ID,
    "leon-se/gemma-3-27b-it-fp8-dynamic": "gemma4-31b",
    "gemma-3-27b": "gemma4-31b",
    "qwen3-coder-480b": "kimi-k2-5",
    "deepseek-r1-0528": "kimi-k2-5",
    "kimi-k2": "kimi-k2-5",
    "kimi-k2-thinking": "kimi-k2-5"
  };


@AnthonyRonning AnthonyRonning merged commit 9154fad into master Apr 9, 2026
14 checks passed
@AnthonyRonning AnthonyRonning deleted the canonical-llama3-3-70b branch April 9, 2026 23:34