A Retrieval-Augmented Generation (RAG) system specifically designed for React Native UI component documentation. This system uses LanceDB for vector storage and Ollama for local LLM inference to provide intelligent assistance for React Native development using the rn-base-component library.
- Vector-based Document Retrieval: Uses embeddings to find relevant component documentation
- Local LLM Integration: Powered by Ollama for privacy-focused AI assistance
- React Native Component Library: Comprehensive documentation for the rn-base-component UI library
- Intelligent Code Generation: Generates runnable React Native TypeScript screens
- Semantic Search: Advanced semantic search capabilities for component documentation
Before setting up the project, ensure you have the following installed:
- Node.js (v16 or higher)
- npm or yarn
- Ollama (for local LLM inference)
- TypeScript (for development)
- Install Ollama from https://ollama.ai
- Pull the required model:

  ```bash
  ollama pull llama3
  ```

- Ensure Ollama is running at http://localhost:11434
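To confirm the server and model are reachable before indexing, you can query Ollama's `/api/tags` endpoint, which lists locally pulled models. A minimal sketch (the endpoint and its response shape are Ollama's real API; the helper functions themselves are illustrative and not part of this project):

```typescript
// Sketch: check that a model is available via Ollama's /api/tags endpoint.
// /api/tags returns a JSON body shaped like { models: [{ name: "llama3:latest", ... }] }.
// hasModel and ollamaHasModel are illustrative helpers, not project code.
interface TagsResponse { models: { name: string }[] }

function hasModel(tags: TagsResponse, model: string): boolean {
  // Ollama reports model names with a tag suffix, e.g. "llama3:latest"
  return tags.models.some(m => m.name === model || m.name.startsWith(model + ':'));
}

async function ollamaHasModel(model: string, base = 'http://localhost:11434'): Promise<boolean> {
  const res = await fetch(`${base}/api/tags`); // lists locally pulled models
  return hasModel(await res.json() as TagsResponse, model);
}
```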
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd rn-ui-rag
  ```

- Install dependencies:

  ```bash
  npm install
  # or
  yarn install
  ```
Before using the RAG system, you need to index the component documentation:
```bash
npm run rag:index
# or
yarn rag:index
```

This command will:
- Read all Markdown files from src/docs/
- Split them into chunks using LangChain's text splitter
- Generate embeddings using the Xenova/all-MiniLM-L6-v2 model
- Store the vectors in LanceDB for fast retrieval
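The splitting step can be sketched as a simplified fixed-size splitter with overlap. This is a minimal stand-in to show the idea, not LangChain's actual RecursiveCharacterTextSplitter implementation:

```typescript
// Minimal sketch of chunking with overlap (illustrative, not the LangChain code).
// chunkOverlap must be smaller than chunkSize, or the loop would not advance.
function splitText(text: string, chunkSize = 600, chunkOverlap = 100): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + chunkSize, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - chunkOverlap; // step back so adjacent chunks share context
  }
  return chunks;
}
```

The overlap means each chunk repeats the tail of the previous one, so a sentence falling on a chunk boundary is still retrievable as a whole.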
You can inspect the created database:
```bash
npx tsx src/inspectDB.ts
```

This will create a debug.json file with sample database contents.
Use the query command to ask questions about React Native components:
```bash
npm run rag:query "Create a login screen with email and password"
# or
yarn rag:query "How to create a button with icons?"
```

You can also pass questions as command line arguments:

```bash
npx tsx src/query.ts "Create a settings screen with checkboxes and sliders"
```

Here are some example queries you can try:
- "Create a login screen with email, password and login button"
- "How to use TextInput with validation?"
- "Create a form with multiple input types"
- "Build a settings screen with toggles and sliders"
- "How to implement a card-based layout?"
```
rn-ui-rag/
├── src/
│   ├── docs/                    # Component documentation (Markdown files)
│   │   ├── ComponentOverview.md # Overview of all components
│   │   ├── Button.md            # Button component documentation
│   │   ├── TextInput.md         # TextInput component documentation
│   │   ├── Card.md              # Card component documentation
│   │   └── ...                  # Other component docs
│   ├── indexDocs.ts             # Document indexing script
│   ├── loadAndChunk.ts          # Document loading and chunking utilities
│   ├── query.ts                 # RAG query interface
│   └── inspectDB.ts             # Database inspection utility
├── lance_db/                    # LanceDB vector database
├── package.json
└── README.md
```
Create a .env file in the root directory if you need to customize settings:
```bash
# Ollama API endpoint (default: http://localhost:11434)
OLLAMA_API_URL=http://localhost:11434

# Model name (default: llama3)
OLLAMA_MODEL=llama3

# Embedding model (default: Xenova/all-MiniLM-L6-v2)
EMBEDDING_MODEL=Xenova/all-MiniLM-L6-v2
```

You can modify the text chunking parameters in src/loadAndChunk.ts:
```typescript
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 600,    // Maximum chunk size
  chunkOverlap: 100, // Overlap between chunks
});
```

Adjust the number of retrieved chunks in src/query.ts:

```typescript
const retrieved = await retrieveTopK(question, 5); // Retrieve top 5 chunks
```

The system includes documentation for the following React Native components from rn-base-component:
| Component | Description | Key Features |
|---|---|---|
| Button | Customizable button with multiple variants | Primary, Secondary, Outline, Transparent variants |
| TextInput | Advanced input component | Multiple variants, validation, icon support |
| Card | Flexible container for content | Touch interactions, consistent styling |
| Checkbox | Animated checkbox with label support | Custom animations, form validation ready |
| RadioButton | Single selection radio button | Smooth animations, custom styling |
| Slider | Interactive value selector | Single/Range variants, gesture handling |
| Progress | Animated progress indicator | Determinate & indeterminate modes |
| Icon | Versatile icon component | Touch interactions, image source flexibility |
| Typography | Typography system | Predefined variants, consistent scaling |
| Accordion | Collapsible content sections | Custom animations, multiple expansion |
| CodeInput | OTP/PIN input component | Customizable length, secure input |
| CountDown | Timer and countdown component | Target date countdown, timer control |
- Document Processing: Markdown documentation is loaded and split into semantic chunks
- Embedding Generation: Each chunk is converted to a vector using a transformer model
- Vector Storage: Embeddings are stored in LanceDB for fast similarity search
- Query Processing: User questions are embedded and matched against stored chunks
- Context Retrieval: Most relevant chunks are retrieved based on semantic similarity
- LLM Generation: Retrieved context is sent to Ollama LLM for code generation
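The embedding, matching, and retrieval steps can be illustrated with a toy brute-force top-k search by cosine similarity. The real pipeline embeds text with Xenova/all-MiniLM-L6-v2 and uses LanceDB's index; this pure-TypeScript version only shows the ranking logic:

```typescript
// Toy in-memory top-k retrieval by cosine similarity (illustrative only;
// the project uses transformer embeddings stored in LanceDB instead).
interface Chunk { content: string; vector: number[] }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieveTopKToy(query: number[], chunks: Chunk[], k: number): Chunk[] {
  // Sort a copy by descending similarity and keep the k best matches
  return [...chunks]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}
```

Chunks whose vectors point in nearly the same direction as the query vector rank highest, which is why semantically related text is retrieved even without keyword overlap.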
You can modify the system prompt in src/query.ts to customize the AI behavior:
```typescript
const system = `You are a React Native (Expo) Engineer expert and must only use components from rn-base-component library. Use the provided docs strictly. If info not in docs, say "Not in docs".`;
```

To add new component documentation:
- Create a new Markdown file in src/docs/
- Follow the existing documentation format with proper chunk tags
- Re-run the indexing process:

  ```bash
  npm run rag:index
  ```
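Under the hood, query.ts combines the system prompt with the retrieved chunks and posts them to Ollama's /api/chat endpoint. Assembling that request can be sketched as follows; the `{ model, messages, stream }` shape matches Ollama's chat API, while the helper name, parameters, and context formatting are assumptions for this sketch:

```typescript
// Illustrative assembly of an Ollama /api/chat request body.
// The request shape is Ollama's chat API; buildChatRequest itself is hypothetical.
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string }

function buildChatRequest(system: string, contextChunks: string[], question: string, model = 'llama3') {
  const messages: ChatMessage[] = [
    { role: 'system', content: system },
    // Retrieved doc chunks are joined into a single grounding context
    { role: 'user', content: `Docs:\n${contextChunks.join('\n---\n')}\n\nQuestion: ${question}` },
  ];
  return { model, messages, stream: false }; // stream: false returns one complete response
}
```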
To reset the database:
```bash
# The indexing script automatically removes the old database
npm run rag:index
```

Common commands:

```bash
# Index documents
npx tsx src/indexDocs.ts

# Query the system
npx tsx src/query.ts "Your question here"

# Inspect database
npx tsx src/inspectDB.ts
```

You can experiment with different Ollama models:
```bash
# Pull a different model
ollama pull codellama
```

Then update the model name in src/query.ts:

```typescript
model: 'codellama'
```

- Embedding Generation: First-time embedding generation may take a few minutes
- Model Loading: Ollama model loading happens on first query
- Vector Search: LanceDB provides fast similarity search even with large document sets
- Memory Usage: The system loads embeddings into memory for optimal performance
**Ollama Connection Error**

```
Error: Ollama API error: 500 Internal Server Error
```

- Ensure Ollama is running: `ollama serve`
- Check if the model is pulled: `ollama list`
- Verify the API endpoint in configuration
**Embedding Model Loading Issues**

```
Error loading embedding model
```
- Check internet connection for model download
- Ensure sufficient disk space
- Try clearing npm/yarn cache
**Database Issues**

```
Error opening LanceDB table
```

- Delete the lance_db directory and re-index
- Check file permissions
- Ensure sufficient disk space
**No Results Found**

```
No relevant chunks found for the query
```
- Try rephrasing your question
- Check if documents are properly indexed
- Verify the database contains data
Enable debug logging by modifying src/query.ts:
```typescript
console.log('Retrieved chunks:', retrieved.map(r => r.content));
console.log('LLM response:', JSON.stringify(resp, null, 2));
```

To contribute:

- Fork the repository
- Create a feature branch
- Add new component documentation in src/docs/
- Test the indexing and querying process
- Submit a pull request
This project is licensed under the ISC License - see the LICENSE file for details.
- LanceDB for vector database capabilities
- Ollama for local LLM inference
- Hugging Face Transformers for embedding models
- LangChain for text processing utilities
- rn-base-component library for React Native components
Built with ❤️ for the React Native development community