React Native UI RAG System

A Retrieval-Augmented Generation (RAG) system specifically designed for React Native UI component documentation. This system uses LanceDB for vector storage and Ollama for local LLM inference to provide intelligent assistance for React Native development using the rn-base-component library.

πŸš€ Features

  • Vector-based Document Retrieval: Uses embeddings to find relevant component documentation
  • Local LLM Integration: Powered by Ollama for privacy-focused AI assistance
  • React Native Component Library: Comprehensive documentation for rn-base-component UI library
  • Intelligent Code Generation: Generates runnable React Native TypeScript screens
  • Semantic Search: Matches queries to component documentation by meaning rather than exact keywords

πŸ“‹ Prerequisites

Before setting up the project, ensure you have the following installed:

  • Node.js (v16 or higher)
  • npm or yarn
  • Ollama (for local LLM inference)
  • TypeScript (for development)

Ollama Setup

  1. Install Ollama from https://ollama.ai
  2. Pull the required model:
    ollama pull llama3
  3. Ensure Ollama is running on http://localhost:11434
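Once Ollama is running, the system talks to it over its local REST API. The sketch below shows a minimal non-streaming call to Ollama's /api/generate endpoint; `buildGenerateRequest` and `askOllama` are illustrative helpers, not functions from this repository.

```typescript
// Minimal sketch of a non-streaming Ollama /api/generate call.
// buildGenerateRequest and askOllama are hypothetical helpers for illustration.
interface GenerateRequest {
  model: string;
  prompt: string;
  stream: boolean;
}

function buildGenerateRequest(model: string, prompt: string): GenerateRequest {
  // stream: false asks Ollama to return the full response in one JSON object
  return { model, prompt, stream: false };
}

async function askOllama(
  prompt: string,
  baseUrl = "http://localhost:11434"
): Promise<string> {
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGenerateRequest("llama3", prompt)),
  });
  if (!res.ok) throw new Error(`Ollama API error: ${res.status}`);
  const data = await res.json();
  return data.response; // non-streaming responses carry the full text here
}
```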

πŸ› οΈ Installation

  1. Clone the repository:

    git clone <repository-url>
    cd rn-ui-rag
  2. Install dependencies:

    npm install
    # or
    yarn install

πŸ“š Setup

1. Index the Documentation

Before using the RAG system, you need to index the component documentation:

npm run rag:index
# or
yarn rag:index

This command will:

  • Read all Markdown files from src/docs/
  • Split them into chunks using LangChain's text splitter
  • Generate embeddings using the Xenova/all-MiniLM-L6-v2 model
  • Store the vectors in LanceDB for fast retrieval
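The chunking step above can be illustrated with a simplified fixed-size splitter. The actual script uses LangChain's RecursiveCharacterTextSplitter, which additionally respects paragraph and sentence boundaries; this sketch only shows how chunkSize and chunkOverlap interact.

```typescript
// Simplified illustration of overlapping chunking. The real indexing script
// uses LangChain's RecursiveCharacterTextSplitter, which also tries to split
// on paragraph and sentence boundaries rather than at fixed offsets.
function chunkText(text: string, chunkSize = 600, chunkOverlap = 100): string[] {
  const chunks: string[] = [];
  // Each new chunk starts chunkSize - chunkOverlap characters after the
  // previous one, so consecutive chunks share chunkOverlap characters.
  const step = chunkSize - chunkOverlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

The overlap preserves context across chunk boundaries, so a sentence cut at the end of one chunk still appears intact at the start of the next.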

2. Verify the Database

You can inspect the created database:

npx tsx src/inspectDB.ts

This will create a debug.json file with sample database contents.

🎯 Usage

Query the RAG System

Use the query command to ask questions about React Native components:

npm run rag:query "Create a login screen with email and password"
# or
yarn rag:query "How to create a button with icons?"

You can also invoke the query script directly with tsx:

npx tsx src/query.ts "Create a settings screen with checkboxes and sliders"

Example Queries

Here are some example queries you can try:

  • "Create a login screen with email, password and login button"
  • "How to use TextInput with validation?"
  • "Create a form with multiple input types"
  • "Build a settings screen with toggles and sliders"
  • "How to implement a card-based layout?"

πŸ“ Project Structure

rn-ui-rag/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ docs/                    # Component documentation (Markdown files)
β”‚   β”‚   β”œβ”€β”€ ComponentOverview.md # Overview of all components
β”‚   β”‚   β”œβ”€β”€ Button.md            # Button component documentation
β”‚   β”‚   β”œβ”€β”€ TextInput.md         # TextInput component documentation
β”‚   β”‚   β”œβ”€β”€ Card.md              # Card component documentation
β”‚   β”‚   └── ...                  # Other component docs
β”‚   β”œβ”€β”€ indexDocs.ts            # Document indexing script
β”‚   β”œβ”€β”€ loadAndChunk.ts         # Document loading and chunking utilities
β”‚   β”œβ”€β”€ query.ts                # RAG query interface
β”‚   └── inspectDB.ts            # Database inspection utility
β”œβ”€β”€ lance_db/                   # LanceDB vector database
β”œβ”€β”€ package.json
└── README.md

πŸ”§ Configuration

Environment Variables

Create a .env file in the root directory if you need to customize settings:

# Ollama API endpoint (default: http://localhost:11434)
OLLAMA_API_URL=http://localhost:11434

# Model name (default: llama3)
OLLAMA_MODEL=llama3

# Embedding model (default: Xenova/all-MiniLM-L6-v2)
EMBEDDING_MODEL=Xenova/all-MiniLM-L6-v2

Chunking Configuration

You can modify the text chunking parameters in src/loadAndChunk.ts:

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 600,        // Maximum chunk size
  chunkOverlap: 100,     // Overlap between chunks
});

Retrieval Configuration

Adjust the number of retrieved chunks in src/query.ts:

const retrieved = await retrieveTopK(question, 5); // Retrieve top 5 chunks
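Under the hood, top-k retrieval ranks stored chunks by vector similarity to the query embedding. LanceDB performs this search natively; the sketch below is only an illustration of the ranking logic, with `Chunk`, `cosine`, and `topK` as hypothetical names.

```typescript
// Illustrative top-k retrieval by cosine similarity. In the real system,
// LanceDB performs this nearest-neighbor search natively.
interface Chunk {
  content: string;
  vector: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  // Sort a copy by descending similarity and keep the first k entries.
  return [...chunks]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}
```

Increasing k gives the LLM more context at the cost of a longer prompt and more potential noise.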

πŸ“– Available Components

The system includes documentation for the following React Native components from rn-base-component:

| Component   | Description                              | Key Features                                      |
| ----------- | ---------------------------------------- | ------------------------------------------------- |
| Button      | Customizable button with multiple variants | Primary, Secondary, Outline, Transparent variants |
| TextInput   | Advanced input component                 | Multiple variants, validation, icon support       |
| Card        | Flexible container for content           | Touch interactions, consistent styling            |
| Checkbox    | Animated checkbox with label support     | Custom animations, form validation ready          |
| RadioButton | Single selection radio button            | Smooth animations, custom styling                 |
| Slider      | Interactive value selector               | Single/Range variants, gesture handling           |
| Progress    | Animated progress indicator              | Determinate & indeterminate modes                 |
| Icon        | Versatile icon component                 | Touch interactions, image source flexibility      |
| Typography  | Typography system                        | Predefined variants, consistent scaling           |
| Accordion   | Collapsible content sections             | Custom animations, multiple expansion             |
| CodeInput   | OTP/PIN input component                  | Customizable length, secure input                 |
| CountDown   | Timer and countdown component            | Target date countdown, timer control              |

πŸ€– How It Works

  1. Document Processing: Markdown documentation is loaded and split into semantic chunks
  2. Embedding Generation: Each chunk is converted to a vector using a transformer model
  3. Vector Storage: Embeddings are stored in LanceDB for fast similarity search
  4. Query Processing: User questions are embedded and matched against stored chunks
  5. Context Retrieval: Most relevant chunks are retrieved based on semantic similarity
  6. LLM Generation: Retrieved context is sent to Ollama LLM for code generation
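Steps 5 and 6 above come together when the retrieved chunks are assembled into the final prompt sent to the LLM. The sketch below shows one plausible assembly; the exact format used in src/query.ts may differ, and `buildPrompt` is a hypothetical name.

```typescript
// Hedged sketch of assembling the final LLM prompt from retrieved chunks.
// The exact prompt format in src/query.ts may differ.
function buildPrompt(
  system: string,
  contextChunks: string[],
  question: string
): string {
  // Label each chunk so the model can distinguish separate documents.
  const context = contextChunks
    .map((chunk, i) => `[Doc ${i + 1}]\n${chunk}`)
    .join("\n\n");
  return `${system}\n\nContext:\n${context}\n\nQuestion: ${question}\nAnswer:`;
}
```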

πŸ” Advanced Usage

Custom Queries

You can modify the system prompt in src/query.ts to customize the AI behavior:

const system = `You are a React Native (Expo) Engineer expert and must only use components from rn-base-component library. Use the provided docs strictly. If info not in docs, say "Not in docs".`;

Adding New Documentation

To add new component documentation:

  1. Create a new Markdown file in src/docs/
  2. Follow the existing documentation format with proper chunk tags
  3. Re-run the indexing process:
    npm run rag:index

Database Management

To reset the database:

# The indexing script removes the old database before re-indexing
npm run rag:index

πŸ§ͺ Development

Running in Development Mode

# Index documents
npx tsx src/indexDocs.ts

# Query the system
npx tsx src/query.ts "Your question here"

# Inspect database
npx tsx src/inspectDB.ts

Testing Different Models

You can experiment with different Ollama models:

# Pull a different model
ollama pull codellama

# Update the model name in src/query.ts
model: 'codellama'

πŸ“Š Performance Considerations

  • Embedding Generation: First-time embedding generation may take a few minutes
  • Model Loading: Ollama model loading happens on first query
  • Vector Search: LanceDB provides fast similarity search even with large document sets
  • Memory Usage: The system loads embeddings into memory for optimal performance

πŸ› Troubleshooting

Common Issues

Ollama Connection Error

Error: Ollama API error: 500 Internal Server Error
  • Ensure Ollama is running: ollama serve
  • Check if the model is pulled: ollama list
  • Verify the API endpoint in configuration

Embedding Model Loading Issues

Error loading embedding model
  • Check internet connection for model download
  • Ensure sufficient disk space
  • Try clearing npm/yarn cache

Database Issues

Error opening LanceDB table
  • Delete the lance_db directory and re-index
  • Check file permissions
  • Ensure sufficient disk space

No Results Found

No relevant chunks found for the query
  • Try rephrasing your question
  • Check if documents are properly indexed
  • Verify the database contains data

Debug Mode

Enable debug logging by modifying src/query.ts:

console.log('Retrieved chunks:', retrieved.map(r => r.content));
console.log('LLM response:', JSON.stringify(resp, null, 2));

🀝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Add new component documentation in src/docs/
  4. Test the indexing and querying process
  5. Submit a pull request

πŸ“„ License

This project is licensed under the ISC License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • LanceDB for vector database capabilities
  • Ollama for local LLM inference
  • Hugging Face Transformers for embedding models
  • LangChain for text processing utilities
  • rn-base-component library for React Native components

Built with ❀️ for the React Native development community
