An AI-powered study assistant desktop application that runs locally with privacy-first design. Built with Electron, React, and TypeScript.
StudyNest offers several advantages over the standard Ollama desktop application:
- Study-Focused Interface - Purpose-built UI optimized for learning and research, not just generic chat
- Modern UX Design - Beautiful, responsive interface with seamless light/dark theme switching
- Smart Prompt Suggestions - Quick-start templates tailored for studying, research, and learning tasks
- Enhanced Conversation Features - Rename conversations, better organization, and local database storage
- Optimized Performance - React-based UI with smooth animations and real-time updates
- Customizable & Extensible - Open architecture for adding study-specific features
Want more features? Create an issue and I'll work on implementing your requests!
Features:

- ChatGPT-like conversational interface
- Support for local Small Language Models (SLM)
- Offline-first architecture for complete privacy
- Real-time message streaming
- Conversation history management
- Light and dark theme support with seamless switching
- Material Design Icons integration
- Responsive and accessible UI components
- Custom design tokens for consistent styling
- All data stays on your device
- No external API calls required
- Local LLM integration ready
- Secure document management
- Multiple conversation threads
- Message history with timestamps
- Typing indicators
- Prompt suggestions for quick start
- Auto-resizing text input (see the sketch below)
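For example, the auto-resizing input could be implemented along these lines. This is a hedged sketch of the technique, not StudyNest's actual `ChatInput` component:

```typescript
// Hypothetical sketch of an auto-resizing chat textarea.
// Collapse the height first, then grow it to fit the content.
import React, { useLayoutEffect, useRef, useState } from 'react';

export function AutoResizeInput({ onSend }: { onSend: (text: string) => void }) {
  const ref = useRef<HTMLTextAreaElement>(null);
  const [value, setValue] = useState('');

  useLayoutEffect(() => {
    const el = ref.current;
    if (!el) return;
    el.style.height = 'auto';                 // shrink to re-measure
    el.style.height = `${el.scrollHeight}px`; // grow to fit content
  }, [value]);

  return (
    <textarea
      ref={ref}
      rows={1}
      value={value}
      onChange={(e) => setValue(e.target.value)}
      onKeyDown={(e) => {
        // Enter sends; Shift+Enter inserts a newline.
        if (e.key === 'Enter' && !e.shiftKey) {
          e.preventDefault();
          onSend(value);
          setValue('');
        }
      }}
    />
  );
}
```

Measuring `scrollHeight` after collapsing the height keeps the box tightly fitted as the user types or deletes lines.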
Prerequisites:

- Node.js (v18 or higher)
- pnpm (recommended) or npm
- Rust toolchain (for the Crane service)
- Ollama (if you want to use Ollama models instead of Crane)
```bash
# Install Node.js dependencies
pnpm install
```

```bash
# Install Rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Verify installation
rustc --version
cargo --version
```

StudyNest uses local AI models. You need to download at least one model:
```bash
# Install huggingface-cli
pip install huggingface-hub

# Download Qwen 2.5 (0.5B) - Fast, lightweight
huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct \
  --local-dir crane-studynest/checkpoints/Qwen2.5-0.5B-Instruct

# Or download Qwen 2.5 (1.5B) - Better quality
huggingface-cli download Qwen/Qwen2.5-1.5B-Instruct \
  --local-dir crane-studynest/checkpoints/Qwen2.5-1.5B-Instruct
```

Model Requirements:
- Qwen2.5-0.5B: ~1GB download, ~2GB RAM
- Qwen2.5-1.5B: ~3GB download, ~4GB RAM
- Qwen2.5-3B: ~6GB download, ~8GB RAM
Alternatively, to use Ollama models:

```bash
# Install Ollama from https://ollama.ai
# Then pull a model
ollama pull phi4-mini
ollama pull tinyllama
```

Start the app in development mode:

```bash
pnpm run dev
```

What happens:
- Rust service is automatically compiled (first time takes ~2-3 minutes)
- React dev server starts on port 3000
- Electron app launches
- Models are detected from `crane-studynest/checkpoints/` (see the detection sketch below)
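Model detection could work along these lines. A minimal sketch, assuming it simply scans `crane-studynest/checkpoints/` for directories that contain a `config.json`; the function name and heuristic are illustrative, not StudyNest's actual code:

```typescript
// Hypothetical sketch: detect downloaded Crane models by scanning the
// checkpoints directory for subdirectories that contain a config.json.
import * as fs from 'fs';
import * as path from 'path';

function detectCraneModels(checkpointsDir = 'crane-studynest/checkpoints'): string[] {
  if (!fs.existsSync(checkpointsDir)) return [];
  return fs
    .readdirSync(checkpointsDir, { withFileTypes: true })
    .filter((entry) => entry.isDirectory())
    .filter((entry) => fs.existsSync(path.join(checkpointsDir, entry.name, 'config.json')))
    .map((entry) => entry.name);
}

console.log(detectCraneModels()); // e.g. ["Qwen2.5-0.5B-Instruct"]
```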
On first run, you'll see:
- Rust compilation logs (one-time setup)
- Model detection in console
- Available models in the dropdown
Select a model:
- Crane models: Qwen 2.5 (0.5B) Instruct (988 MB)
- Ollama models: Phi4 Mini Latest, Tinyllama 1.1B
Build for production:

```bash
pnpm run build
```

Build distributable packages for your platform:

```bash
# For macOS
pnpm run package:mac

# For Windows
pnpm run package:win

# For Linux
pnpm run package:linux

# For all platforms
pnpm run package
```

Project structure:
```
studynest/
├── electron/              # Electron main process
│   ├── main.ts            # Main process entry point
│   └── preload.ts         # Preload script for IPC
├── src/
│   ├── components/        # Reusable UI components
│   │   ├── Button/        # Button component with variants
│   │   ├── Card/          # Card container component
│   │   ├── ChatInput/     # Message input with actions
│   │   ├── ChatMessage/   # Message display component
│   │   ├── Input/         # Form input component
│   │   ├── Sidebar/       # Navigation sidebar
│   │   ├── ThemeToggle/   # Light/dark theme switcher
│   │   └── Typography/    # Text components
│   ├── hooks/             # Custom React hooks
│   │   ├── useChat.ts     # Chat state management
│   │   └── useTheme.ts    # Theme management
│   ├── screens/           # Main application screens
│   │   ├── Chat/          # Chat interface screen
│   │   └── Home/          # Home screen
│   ├── styles/            # Global styles and tokens
│   │   ├── tokens.css     # Design system tokens
│   │   └── README.md      # Design system documentation
│   ├── types/             # TypeScript type definitions
│   │   ├── chat.ts        # Chat-related types
│   │   └── task.ts        # Task-related types
│   ├── utils/             # Utility functions
│   │   └── llm.ts         # LLM client for local models
│   ├── App.tsx            # Root component
│   ├── index.tsx          # React entry point
│   └── index.css          # Global styles
├── public/                # Static assets
│   └── index.html         # HTML template
├── dist/                  # Build output
└── release/               # Packaged applications
```
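The `electron/preload.ts` entry above is the IPC bridge between the renderer and the main process. A minimal sketch of what such a bridge could look like; the channel names (`chat:generate`, `chat:token`) are assumptions, not StudyNest's actual API:

```typescript
// electron/preload.ts — hypothetical sketch of an IPC bridge.
// Channel names are illustrative; StudyNest's real API may differ.
import { contextBridge, ipcRenderer } from 'electron';

contextBridge.exposeInMainWorld('studynest', {
  // Ask the main process (and the local model behind it) for a completion.
  generate: (prompt: string): Promise<string> =>
    ipcRenderer.invoke('chat:generate', prompt),

  // Subscribe to streamed tokens; returns an unsubscribe function.
  onToken: (handler: (token: string) => void) => {
    const listener = (_event: Electron.IpcRendererEvent, token: string) =>
      handler(token);
    ipcRenderer.on('chat:token', listener);
    return () => ipcRenderer.removeListener('chat:token', listener);
  },
});
```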
Tech Stack:

- Electron (v28.3.3) - Desktop application framework
- React (v18.3.1) - UI library
- TypeScript (v5.9.3) - Type-safe JavaScript
- Material Design Icons (@mdi/react, @mdi/js) - Icon library
- CSS Variables - Design tokens system
- Webpack (v5.104.1) - Module bundler
- electron-builder - Application packager
- pnpm - Fast, disk space efficient package manager
StudyNest supports two methods for running local LLMs:
The first is a built-in Rust-based service using the Crane framework, which runs LLM models locally without external dependencies.
- Rust toolchain installed:

  ```bash
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  ```

- Download a Qwen model from HuggingFace:

  ```bash
  # Install huggingface-cli
  pip install huggingface-hub

  # Download a model (choose one)
  huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct --local-dir crane-studynest/checkpoints/Qwen2.5-0.5B-Instruct
  huggingface-cli download Qwen/Qwen2.5-1.5B-Instruct --local-dir crane-studynest/checkpoints/Qwen2.5-1.5B-Instruct
  ```

The Crane service is automatically built and integrated when you run:
```bash
# Development mode
pnpm run dev
# Automatically builds Rust service → starts app

# Production build
pnpm run package
# Builds Rust service → packages everything together
```

Build Process:

- The `build:rust:dev` script compiles the Rust service in release mode
- The binary is copied to `dist/bin/chat-service`
- If the build fails, the app falls back to `cargo run` (slower, but works; see the sketch below)
- The service starts automatically when the app launches
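The fallback described above could be implemented roughly as follows. A hedged sketch, not the project's actual launcher code (`startChatService` is an illustrative name):

```typescript
// Hypothetical sketch: start the Crane service from Electron's main process,
// preferring the prebuilt binary and falling back to `cargo run`.
import { spawn, ChildProcess } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';

function startChatService(projectRoot: string): ChildProcess {
  const binary = path.join(projectRoot, 'dist', 'bin', 'chat-service');
  if (fs.existsSync(binary)) {
    return spawn(binary, [], { stdio: 'inherit' });
  }
  // Slower fallback: compile-and-run via cargo inside crane-studynest/.
  return spawn('cargo', ['run', '--release'], {
    cwd: path.join(projectRoot, 'crane-studynest'),
    stdio: 'inherit',
  });
}
```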
Manual Build:
```bash
# Build Rust service separately
pnpm run build:rust:dev

# Or build for production
pnpm run build:rust
```

Supported Models:
- Qwen2.5-0.5B-Instruct (fast, 1-2GB RAM)
- Qwen2.5-1.5B-Instruct (balanced, 3-4GB RAM)
- Qwen2.5-3B-Instruct (high quality, 6-8GB RAM)
- Qwen3-0.6B / Qwen3-1.7B (latest models)
Performance:
- macOS (Apple Silicon): Metal GPU acceleration (3-5x faster)
- NVIDIA GPUs: CUDA support (build with `--features cuda`)
- CPU: Works everywhere, slower but functional
Advantages:
- ✅ No external server needed (Ollama not required)
- ✅ Faster startup and response times
- ✅ Better integration with Electron
- ✅ Smaller footprint
- ✅ Full control over model lifecycle
You can also connect to external LLM servers:
- Start your local LLM server (e.g., Ollama, LM Studio, llama.cpp)
- Update the LLM endpoint in `src/hooks/useChat.ts`
- Configure the model settings in `src/utils/llm.ts`
Example endpoints:
- Ollama: `http://localhost:11434/api/generate`
- LM Studio: `http://localhost:1234/v1/chat/completions`
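For illustration, a minimal streaming client against Ollama's `/api/generate` endpoint might look like the sketch below. This is not StudyNest's actual `src/utils/llm.ts`; `streamOllama` is an illustrative name. Ollama streams newline-delimited JSON chunks with `response` and `done` fields:

```typescript
// Hypothetical sketch of a streaming client for Ollama's /api/generate.
async function streamOllama(
  prompt: string,
  onToken: (token: string) => void,
  model = 'phi4-mini'
): Promise<void> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: true }),
  });
  if (!res.ok || !res.body) throw new Error(`Ollama request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // Each complete line is a JSON object: {"response": "...", "done": false}
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // keep any partial line for the next read
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.response) onToken(chunk.response);
      if (chunk.done) return;
    }
  }
}
```

LM Studio exposes an OpenAI-style `/v1/chat/completions` API instead, so a client for it would send a `messages` array and parse SSE chunks rather than this NDJSON format.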
"crane-studynest directory not found"
# Make sure you're in the project root
cd /path/to/StudyNest
ls crane-studynest # Should show the directory"cargo: command not found"
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env"Model not found" or "Failed to initialize model"
# Verify model files exist
ls crane-studynest/checkpoints/Qwen2.5-0.5B-Instruct/
# Should show: config.json, model.safetensors, tokenizer files
# Re-download if needed
huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct \
--local-dir crane-studynest/checkpoints/Qwen2.5-0.5B-InstructSlow inference or timeout
- First inference takes longer (model loading)
- Subsequent responses are faster
- On Apple Silicon, Metal GPU acceleration is automatic
- Check Activity Monitor for memory usage
"Request timeout"
- Model initialization can take 1-2 minutes first time
- Check console logs for errors
- Ensure you have enough RAM (2GB+ free)
"Failed to fetch ollama models"
```bash
# Check if Ollama is running
ollama list

# Start Ollama service
ollama serve
```

Performance Notes:

- Apple Silicon (M1/M2/M3): Automatic Metal GPU acceleration (3-5x faster)
- Intel Mac: Uses CPU, slower but works
- First run: Model loading takes 30-60 seconds
- Subsequent runs: Responses in 1-5 seconds
- Keep Activity Monitor open to monitor RAM
- Close other apps if running large models
- Use smaller models (0.5B) on machines with <8GB RAM
StudyNest includes a comprehensive design system with:
- Color scales (white/black with opacity variants)
- Primary colors (blue shades)
- Spacing scale (4px to 64px)
- Typography system
- Reusable components (Button, Input, Card, Typography)
See `src/styles/README.md` for detailed documentation.
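As an example of how components might consume these tokens, here is a sketch of a button styled via CSS variables. The token names (`--color-primary`, `--spacing-md`, `--radius-md`) are assumptions; check `src/styles/tokens.css` for the real ones:

```typescript
// Hypothetical sketch: consuming design tokens from tokens.css.
// Token names are assumptions; see src/styles/tokens.css for the actual ones.
import React from 'react';

export function TokenButton({ children }: { children: React.ReactNode }) {
  return (
    <button
      style={{
        background: 'var(--color-primary)',
        padding: 'var(--spacing-md)',
        borderRadius: 'var(--radius-md)',
        color: 'white',
        border: 'none',
      }}
    >
      {children}
    </button>
  );
}
```

Because the tokens are plain CSS variables, light/dark theme switching only needs to redefine them on the root element.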
Contributions are welcome! Please feel free to submit a Pull Request.
Acknowledgments:

- Crane - Original inspiration for local LLM inference integration using Rust and Candle
License: MIT

