StudyNest

An AI-powered study assistant desktop application that runs locally with a privacy-first design. Built with Electron, React, and TypeScript.

Demo

Screenshots: StudyNest in the light and dark themes.

Why StudyNest over the Ollama Desktop App?

StudyNest offers several advantages over the standard Ollama desktop application:

  • Study-Focused Interface - Purpose-built UI optimized for learning and research, not just generic chat
  • Modern UX Design - Beautiful, responsive interface with seamless light/dark theme switching
  • Smart Prompt Suggestions - Quick-start templates tailored for studying, research, and learning tasks
  • Enhanced Conversation Features - Rename conversations, better organization, and local database storage
  • Optimized Performance - React-based UI with smooth animations and real-time updates
  • Customizable & Extensible - Open architecture for adding study-specific features

Want more features? Create an issue and I'll work on implementing your requests!

Features

1. AI Chat Interface

  • ChatGPT-like conversational interface
  • Support for local Small Language Models (SLMs)
  • Offline-first architecture for complete privacy
  • Real-time message streaming (see the sketch below)
  • Conversation history management

2. Modern Design System

  • Light and dark theme support with seamless switching (see the sketch below)
  • Material Design Icons integration
  • Responsive and accessible UI components
  • Custom design tokens for consistent styling
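
Theme switching can be as simple as flipping an attribute that the CSS tokens key off. A minimal sketch, assuming a data-theme attribute and a localStorage key (the shipped hook lives in src/hooks/useTheme.ts and may differ):

import { useEffect, useState } from 'react';

type Theme = 'light' | 'dark';

// Illustrative theme hook; the attribute and storage key are assumptions.
export function useTheme(): [Theme, () => void] {
  const [theme, setTheme] = useState<Theme>(
    () => (localStorage.getItem('theme') as Theme | null) ?? 'light'
  );

  useEffect(() => {
    document.documentElement.setAttribute('data-theme', theme); // tokens.css switches on this
    localStorage.setItem('theme', theme); // persist across restarts
  }, [theme]);

  return [theme, () => setTheme(t => (t === 'light' ? 'dark' : 'light'))];
}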

3. Privacy-Focused

  • All data stays on your device
  • No external API calls required
  • Local LLM integration ready
  • Secure document management

4. Chat Features

  • Multiple conversation threads
  • Message history with timestamps
  • Typing indicators
  • Prompt suggestions for quick start
  • Auto-resizing text input (a minimal sketch follows below)
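
The auto-resizing input typically works by resetting the textarea height and then growing it to its scrollHeight on every input event. A sketch of the technique (the shipped ChatInput component may differ):

import { useRef } from 'react';

// Illustrative auto-resize behavior; not the app's actual ChatInput code.
export function AutoResizeInput() {
  const ref = useRef<HTMLTextAreaElement>(null);

  const resize = () => {
    const el = ref.current;
    if (!el) return;
    el.style.height = 'auto';                 // reset so shrinking works too
    el.style.height = `${el.scrollHeight}px`; // grow to fit the content
  };

  return <textarea ref={ref} rows={1} onInput={resize} placeholder="Ask anything..." />;
}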

Prerequisites

Required

  • Node.js (v18 or higher)
  • pnpm (recommended) or npm
  • Rust toolchain (for Crane service)

Optional

  • Ollama (if you want to use Ollama models instead of Crane)

Installation

1. Install Dependencies

# Install Node.js dependencies
pnpm install

2. Install Rust (for Crane Service)

# Install Rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Verify installation
rustc --version
cargo --version

3. Download AI Models

StudyNest uses local AI models. You need to download at least one model:

Option A: Download Crane Models (Recommended)

# Install huggingface-cli
pip install huggingface-hub

# Download Qwen 2.5 (0.5B) - Fast, lightweight
huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct \
  --local-dir crane-studynest/checkpoints/Qwen2.5-0.5B-Instruct

# Or download Qwen 2.5 (1.5B) - Better quality
huggingface-cli download Qwen/Qwen2.5-1.5B-Instruct \
  --local-dir crane-studynest/checkpoints/Qwen2.5-1.5B-Instruct

Model Requirements:

  • Qwen2.5-0.5B: ~1GB download, ~2GB RAM
  • Qwen2.5-1.5B: ~3GB download, ~4GB RAM
  • Qwen2.5-3B: ~6GB download, ~8GB RAM

Option B: Use Ollama Models

# Install Ollama from https://ollama.ai
# Then pull a model
ollama pull phi4-mini
ollama pull tinyllama

Development

Running in Development Mode

pnpm run dev

What happens:

  1. Rust service is automatically compiled (first time takes ~2-3 minutes)
  2. React dev server starts on port 3000
  3. Electron app launches
  4. Models are detected from crane-studynest/checkpoints/ (a detection sketch follows)
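
Detection could look something like the sketch below: scan crane-studynest/checkpoints/ for subdirectories that contain a config.json. Names and helpers here are illustrative assumptions, not the app's actual code.

import * as fs from 'fs';
import * as path from 'path';

// Hypothetical model detection: list checkpoint folders that hold a config.json.
export function detectCraneModels(projectRoot: string): string[] {
  const checkpoints = path.join(projectRoot, 'crane-studynest', 'checkpoints');
  if (!fs.existsSync(checkpoints)) return [];
  return fs
    .readdirSync(checkpoints, { withFileTypes: true })
    .filter(entry => entry.isDirectory())
    .filter(entry => fs.existsSync(path.join(checkpoints, entry.name, 'config.json')))
    .map(entry => entry.name);
}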

First Run

On first run, you'll see:

  • Rust compilation logs (one-time setup)
  • Model detection in console
  • Available models in the dropdown

Select a model:

  • Crane models: Qwen 2.5 (0.5B) Instruct (988 MB)
  • Ollama models: Phi4 Mini Latest, Tinyllama 1.1B

Building for Production

pnpm run build

Packaging

Build distributable packages for your platform:

# For macOS
pnpm run package:mac

# For Windows
pnpm run package:win

# For Linux
pnpm run package:linux

# For all platforms
pnpm run package

Project Structure

studynest/
β”œβ”€β”€ electron/              # Electron main process
β”‚   β”œβ”€β”€ main.ts           # Main process entry point
β”‚   └── preload.ts        # Preload script for IPC
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ components/       # Reusable UI components
β”‚   β”‚   β”œβ”€β”€ Button/       # Button component with variants
β”‚   β”‚   β”œβ”€β”€ Card/         # Card container component
β”‚   β”‚   β”œβ”€β”€ ChatInput/    # Message input with actions
β”‚   β”‚   β”œβ”€β”€ ChatMessage/  # Message display component
β”‚   β”‚   β”œβ”€β”€ Input/        # Form input component
β”‚   β”‚   β”œβ”€β”€ Sidebar/      # Navigation sidebar
β”‚   β”‚   β”œβ”€β”€ ThemeToggle/  # Light/dark theme switcher
β”‚   β”‚   └── Typography/   # Text components
β”‚   β”œβ”€β”€ hooks/            # Custom React hooks
β”‚   β”‚   β”œβ”€β”€ useChat.ts    # Chat state management
β”‚   β”‚   └── useTheme.ts   # Theme management
β”‚   β”œβ”€β”€ screens/          # Main application screens
β”‚   β”‚   β”œβ”€β”€ Chat/         # Chat interface screen
β”‚   β”‚   └── Home/         # Home screen
β”‚   β”œβ”€β”€ styles/           # Global styles and tokens
β”‚   β”‚   β”œβ”€β”€ tokens.css    # Design system tokens
β”‚   β”‚   └── README.md     # Design system documentation
β”‚   β”œβ”€β”€ types/            # TypeScript type definitions
β”‚   β”‚   β”œβ”€β”€ chat.ts       # Chat-related types
β”‚   β”‚   └── task.ts       # Task-related types
β”‚   β”œβ”€β”€ utils/            # Utility functions
β”‚   β”‚   └── llm.ts        # LLM client for local models
β”‚   β”œβ”€β”€ App.tsx           # Root component
β”‚   β”œβ”€β”€ index.tsx         # React entry point
β”‚   └── index.css         # Global styles
β”œβ”€β”€ public/               # Static assets
β”‚   └── index.html        # HTML template
β”œβ”€β”€ dist/                 # Build output
└── release/              # Packaged applications

Technologies

Core

  • Electron (v28.3.3) - Desktop application framework
  • React (v18.3.1) - UI library
  • TypeScript (v5.9.3) - Type-safe JavaScript

UI & Styling

  • Material Design Icons (@mdi/react, @mdi/js) - Icon library
  • CSS Variables - Design tokens system

Build Tools

  • Webpack (v5.104.1) - Module bundler
  • electron-builder - Application packager
  • pnpm - Fast, disk space efficient package manager

Local LLM Integration

StudyNest supports two methods for running local LLMs:

Option 1: Crane Service (Recommended - Built-in)

StudyNest includes a built-in Rust-based service that uses the Crane framework to run LLMs locally, without external dependencies.

Prerequisites

  • Rust toolchain installed: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  • Download a Qwen model from HuggingFace:
# Install huggingface-cli
pip install huggingface-hub

# Download a model (choose one)
huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct --local-dir crane-studynest/checkpoints/Qwen2.5-0.5B-Instruct
huggingface-cli download Qwen/Qwen2.5-1.5B-Instruct --local-dir crane-studynest/checkpoints/Qwen2.5-1.5B-Instruct

How It Works

The Crane service is automatically built and integrated when you run:

# Development mode
pnpm run dev
# Automatically builds Rust service β†’ starts app

# Production build
pnpm run package
# Builds Rust service β†’ packages everything together

Build Process:

  1. build:rust:dev script compiles the Rust service in release mode
  2. Binary is copied to dist/bin/chat-service
  3. If the build fails, the app falls back to cargo run (slower, but it works; see the sketch below)
  4. Service starts automatically when app launches
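
A sketch of that binary-or-cargo fallback as it might look in the Electron main process (exact paths and arguments are assumptions for illustration):

import { spawn, ChildProcess } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';

// Prefer the prebuilt binary; fall back to compile-and-run via cargo.
export function startChatService(appRoot: string): ChildProcess {
  const binary = path.join(appRoot, 'dist', 'bin', 'chat-service');
  if (fs.existsSync(binary)) {
    return spawn(binary, [], { stdio: 'inherit' }); // fast path: packaged binary
  }
  // Fallback: slower first start, but works without a prior build.
  return spawn('cargo', ['run', '--release'], {
    cwd: path.join(appRoot, 'crane-studynest'),
    stdio: 'inherit',
  });
}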

Manual Build:

# Build Rust service separately
pnpm run build:rust:dev

# Or build for production
pnpm run build:rust

Supported Models:

  • Qwen2.5-0.5B-Instruct (fast, 1-2GB RAM)
  • Qwen2.5-1.5B-Instruct (balanced, 3-4GB RAM)
  • Qwen2.5-3B-Instruct (high quality, 6-8GB RAM)
  • Qwen3-0.6B / Qwen3-1.7B (latest models)

Performance:

  • macOS (Apple Silicon): Metal GPU acceleration (3-5x faster)
  • NVIDIA GPUs: CUDA support (build with --features cuda)
  • CPU: Works everywhere, slower but functional

Advantages:

  • βœ… No external server needed (Ollama not required)
  • βœ… Faster startup and response times
  • βœ… Better integration with Electron
  • βœ… Smaller footprint
  • βœ… Full control over model lifecycle

Option 2: External LLM Server (Ollama, LM Studio)

You can also connect to external LLM servers:

  1. Start your local LLM server (e.g., Ollama, LM Studio, llama.cpp)
  2. Update the LLM endpoint in src/hooks/useChat.ts
  3. Configure the model settings in src/utils/llm.ts

Example endpoints:

  • Ollama: http://localhost:11434/api/generate
  • LM Studio: http://localhost:1234/v1/chat/completions
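
For example, a minimal non-streaming request to Ollama's generate endpoint (the model name must match one you have pulled, e.g. with ollama pull tinyllama):

// Ask a local Ollama server for a completion and return the text.
async function askOllama(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'tinyllama', prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.response; // Ollama puts the completion in the `response` field
}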

Troubleshooting

Rust Service Issues

"crane-studynest directory not found"

# Make sure you're in the project root
cd /path/to/StudyNest
ls crane-studynest  # Should show the directory

"cargo: command not found"

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env

"Model not found" or "Failed to initialize model"

# Verify model files exist
ls crane-studynest/checkpoints/Qwen2.5-0.5B-Instruct/
# Should show: config.json, model.safetensors, tokenizer files

# Re-download if needed
huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct \
  --local-dir crane-studynest/checkpoints/Qwen2.5-0.5B-Instruct

Slow inference or timeout

  • First inference takes longer (model loading)
  • Subsequent responses are faster
  • On Apple Silicon, Metal GPU acceleration is automatic
  • Check Activity Monitor for memory usage

"Request timeout"

  • Model initialization can take 1-2 minutes the first time
  • Check console logs for errors
  • Ensure you have enough RAM (2GB+ free)

Ollama Issues

"Failed to fetch ollama models"

# Check if Ollama is running
ollama list

# Start Ollama service
ollama serve

Performance Tips

For Crane Models

  • Apple Silicon (M1/M2/M3): Automatic Metal GPU acceleration (3-5x faster)
  • Intel Mac: Uses CPU, slower but works
  • First run: Model loading takes 30-60 seconds
  • Subsequent runs: Responses in 1-5 seconds

Memory Usage

  • Keep Activity Monitor open to monitor RAM
  • Close other apps if running large models
  • Use smaller models (0.5B) on machines with <8GB RAM

Design System

StudyNest includes a comprehensive design system with:

  • Color scales (white/black with opacity variants)
  • Primary colors (blue shades)
  • Spacing scale (4px to 64px)
  • Typography system
  • Reusable components (Button, Input, Card, Typography)

See src/styles/README.md for detailed documentation.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Acknowledgments

  • Crane - Original inspiration for local LLM inference integration using Rust and Candle

License

MIT