
Demo video: NEXOFF.mp4

NEX_OFF 🤖

AI-Powered Offline-First Chatbot with Document Intelligence


⚡ Quick Start

git clone <repository-url>
cd NEX_OFF
python -m venv .venv
.venv\Scripts\activate   # Windows; use: source .venv/bin/activate on macOS/Linux
pip install -r offline/requirements.txt
npm install
python offline/app.py    # terminal 1: Flask backend
npm start                # terminal 2: Electron desktop app

Open the desktop app and start chatting offline 🚀

(⬆ Back to top)

💡 About the Project

NEX_OFF is an AI-powered chatbot application that combines local processing with cloud-based AI capabilities. Built with a modern tech stack, it offers seamless document processing, intelligent question answering, and natural language understanding, both online and offline.

🎥 Demo & Screenshots

🔐 Authentication Flow (Login)

Screen.Recording.2025-12-31.160055.mp4

Secure login experience with smooth UI interactions


🌐 Online Mode – Real-Time AI Chat (Light Mode)

Online AI assistant in Light Mode

🌐 Online Mode – Real-Time AI Chat (Dark Mode)

Online AI assistant in Dark Mode


📄 Chat with PDF – Upload & Question Answering (Light Mode)

PDF upload and contextual question answering in Light Mode

📄 Chat with PDF – Upload & Question Answering (Dark Mode)

PDF-based question answering in Dark Mode


📴 Offline Mode – Fully Local AI Assistant (Light Mode)

Offline AI assistant running fully on local system (Light Mode)

📴 Offline Mode – Fully Local AI Assistant (Dark Mode)

Offline-first AI assistant with Dark Mode enabled


(⬆ Back to top)

✨ Features

  • Document Intelligence: Upload and process PDFs with advanced text extraction
  • Hybrid AI: Combine local RAG (Retrieval-Augmented Generation) with cloud-based LLMs
  • Offline-First: Full functionality without internet connectivity
  • Speech Recognition: Built-in speech-to-text using Vosk
  • Cross-Platform: Desktop application built with Electron.js
  • Secure: Local data storage with SQLite
  • Responsive UI: Clean, modern interface built with Bootstrap

📴 Offline vs Online Capabilities

| Feature              | Offline | Online |
| -------------------- | ------- | ------ |
| Chat Interface       | ✅      | ✅     |
| PDF Upload & Parsing | ✅      | ✅     |
| RAG-based QA         | ✅      | ✅     |
| Cloud LLM (Groq)     | ❌      | ✅     |
| Local Storage        | ✅      | ✅     |
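
One hedged way to picture the offline/online split in the table: probe connectivity and fall back to local generation when the cloud LLM is unreachable. The probe and handler names are assumptions, not the repo's code:

```python
import socket

# Hedged sketch of offline/online routing; is_online() and the handler
# signatures are illustrative assumptions.
def is_online(host="8.8.8.8", port=53, timeout=1.0):
    """Best-effort connectivity probe (TCP to a public DNS server)."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def answer(question, context, cloud_llm=None):
    """Use the cloud LLM when configured and reachable, else stay local."""
    if cloud_llm is not None and is_online():
        return cloud_llm(question, context)
    # Placeholder for local RAG-based generation.
    return f"[local] {context[:200]}"
```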

(⬆ Back to top)

🖥️ Frontend

  • Framework: Electron.js
  • UI: HTML5, CSS3, JavaScript (ES6+)
  • Libraries:
    • Bootstrap 5.x - Responsive design
    • PDF.js - PDF rendering
    • Socket.io - Real-time communication

🧠 System Architecture

Architecture Diagram

Workflow Overview

  1. User interacts via Electron desktop UI
  2. Frontend communicates with Flask backend
  3. PDFs are processed and chunked locally
  4. RAG engine retrieves relevant context
  5. Local or cloud LLM generates responses
  6. SQLite stores conversation & metadata
  7. Vosk enables offline speech input
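
Steps 3 and 4 above can be sketched as follows; the real RAG engine likely uses embeddings, so this word-overlap retriever is only an illustrative stand-in:

```python
# Hedged sketch of local chunking (step 3) and retrieval (step 4).
def chunk_text(text, size=200, overlap=50):
    """Split extracted PDF text into overlapping word chunks."""
    words = text.split()
    chunks = []
    step = size - overlap
    for start in range(0, max(len(words) - overlap, 1), step):
        chunks.append(" ".join(words[start:start + size]))
    return chunks

def retrieve(question, chunks, k=2):
    """Rank chunks by word overlap with the question (toy scorer)."""
    q = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: -len(q & set(c.lower().split())))
    return scored[:k]
```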

(⬆ Back to top)

βš™οΈ Backend

  • Language: Python 3.8+
  • Framework: Flask
  • AI/ML:
    • RAG (Retrieval-Augmented Generation)
    • Groq LLM integration
    • Vosk for speech recognition
  • Database: SQLite
  • Dependencies:
    • Flask-CORS
    • PyPDF2
    • SQLAlchemy
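
A minimal sketch of the kind of endpoint offline/app.py might expose is shown below; the /chat route and JSON payload shape are assumptions, and the real app would also enable Flask-CORS so the Electron renderer can reach the API:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    payload = request.get_json(force=True)
    question = payload.get("question", "")
    # Real code would run RAG retrieval and LLM generation here;
    # this stub just echoes the question back.
    return jsonify({"answer": f"echo: {question}"})

if __name__ == "__main__":
    app.run(port=5000)
```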

Configure cloud LLM access by setting GROQ_API_KEY in your environment or editing offline/gro.py. Choose a supported model (e.g., llama-3.1-8b-instant).
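
A hedged sketch of how that configuration might be resolved at startup; the helper name and fallback behaviour are assumptions, not the contents of offline/gro.py:

```python
import os

# Model name taken from the README; the config shape is an assumption.
GROQ_MODEL = "llama-3.1-8b-instant"

def load_groq_config():
    """Read GROQ_API_KEY from the environment; no key means offline-only."""
    key = os.environ.get("GROQ_API_KEY")
    return {"api_key": key, "model": GROQ_MODEL, "online": key is not None}
```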

🚀 Getting Started

Prerequisites

  • Python 3.8 or higher
  • Node.js 14+ and npm
  • Git
  • 8GB RAM (16GB recommended for large models)
  • 2GB+ free disk space

Installation

  1. Clone the repository:

    git clone <repository-url>
    cd NEX_OFF
  2. Set up Python environment:

    python -m venv .venv
    .venv\Scripts\activate  # On Windows; use "source .venv/bin/activate" on macOS/Linux
    pip install -r offline/requirements.txt
  3. Install Node.js dependencies:

    npm install

Running the Application

  1. Start the backend server:

    python offline/app.py
  2. In a new terminal, start the Electron app:

    npm start

📊 Results & Use Cases

  • Ask questions from large PDFs without internet
  • Fast local responses using RAG
  • Accurate speech-to-text in offline mode
  • Seamless switching between offline and online AI
  • Ideal for research, academics, and private data analysis

(⬆ Back to top)

πŸ› οΈ Run Backend (Docker)

Build the Docker image:

docker build -t nexoff-backend .

Run the container:

docker run -p 5000:5000 nexoff-backend
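
If the repository does not already include a Dockerfile, a minimal one consistent with the commands above might look like this (base image, paths, and port are assumptions):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY offline/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "offline/app.py"]
```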

📦 Deployment

For production deployment, consider the following:

  • Use Gunicorn or uWSGI as the production WSGI server
  • Set up Nginx as a reverse proxy
  • Configure environment variables for sensitive data
  • Enable HTTPS with Let's Encrypt

🔒 Security & Privacy

  • All documents are processed locally
  • No PDF data is uploaded without user consent
  • SQLite database stored on local machine
  • Offline mode ensures complete data privacy
  • No third-party tracking or analytics

(⬆ Back to top)

🚀 Future Enhancements

  • 🔌 Fully local LLM integration (no cloud dependency)
  • 🌍 Multi-language voice support
  • 📊 Conversation analytics dashboard
  • 🧠 Model fine-tuning using local knowledge base
  • 🐧 Linux & macOS installer packages

(⬆ Back to top)

🤝 Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ™ Acknowledgements

  • Vosk for offline speech recognition
  • PDF.js for PDF rendering
  • Bootstrap for the UI components
  • Flask for the backend framework
  • Electron for cross-platform desktop app

📜 License

Distributed under the MIT License. See LICENSE for more information.


This project is maintained by Gudiwada Sruthi. For support, please open an issue in the repository.
