# NEX_OFF: AI-Powered Offline-First Chatbot with Document Intelligence
- About the Project
- Quick Start
- Features
- Frontend
- Backend
- Getting Started
- Demo & Screenshots
- System Architecture
- Offline vs Online Capabilities
- Results & Use Cases
- Security & Privacy
- Future Enhancements
- Run Backend (Docker)
- Deployment
- Contributing
- Acknowledgements
- License
```shell
git clone <repository-url>
cd NEX_OFF
python -m venv .venv
.venv\Scripts\activate
pip install -r offline/requirements.txt
python offline/app.py
```

In a second terminal:

```shell
npm install
npm start
```

Open the desktop app and start chatting offline.
NEX_OFF is an advanced AI-powered chatbot application that combines the power of local processing with cloud-based AI capabilities. Built with a modern tech stack, it offers seamless document processing, intelligent question answering, and natural language understanding both online and offline.
Secure login experience with smooth UI interactions
Online AI assistant in Light Mode
Online AI assistant in Dark Mode
PDF upload and contextual question answering in Light Mode
PDF-based question answering in Dark Mode
Offline AI assistant running fully on local system (Light Mode)
Offline-first AI assistant with Dark Mode enabled
- Document Intelligence: Upload and process PDFs with advanced text extraction
- Hybrid AI: Combine local RAG (Retrieval-Augmented Generation) with cloud-based LLMs
- Offline-First: Full functionality without internet connectivity
- Speech Recognition: Built-in speech-to-text using Vosk
- Cross-Platform: Desktop application built with Electron.js
- Secure: Local data storage with SQLite
- Responsive UI: Clean, modern interface built with Bootstrap
| Feature | Offline | Online |
|---|---|---|
| Chat Interface | ✅ | ✅ |
| PDF Upload & Parsing | ✅ | ✅ |
| RAG-based QA | ✅ | ✅ |
| Cloud LLM (Groq) | ❌ | ✅ |
| Local Storage | ✅ | ✅ |
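The fallback implied by this table can be sketched in a few lines. The function name and the returned labels are illustrative, not the project's actual API:

```python
import os

def choose_backend(online: bool) -> str:
    """Pick the answering backend for a request.

    The cloud LLM (Groq) is the only capability that needs connectivity
    and an API key; everything else runs locally in both modes.
    """
    if online and os.environ.get("GROQ_API_KEY"):
        return "groq"       # cloud LLM path
    return "local-rag"      # offline RAG path
```

Offline requests, or online requests without a configured key, always fall back to the local RAG path, which is what makes the app offline-first.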
- Framework: Electron.js
- UI: HTML5, CSS3, JavaScript (ES6+)
- Libraries:
- Bootstrap 5.x - Responsive design
- PDF.js - PDF rendering
- Socket.io - Real-time communication
- User interacts via Electron desktop UI
- Frontend communicates with Flask backend
- PDFs are processed and chunked locally
- RAG engine retrieves relevant context
- Local or cloud LLM generates responses
- SQLite stores conversation & metadata
- Vosk enables offline speech input
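The chunking and retrieval steps in this flow can be illustrated with a minimal sketch. The real RAG engine presumably ranks chunks with embeddings; plain word overlap stands in for that here, and both function names are hypothetical:

```python
from typing import List

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> List[str]:
    """Split extracted PDF text into overlapping chunks for retrieval."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

def retrieve(question: str, chunks: List[str], top_k: int = 3) -> List[str]:
    """Rank chunks by naive word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q_words & set(c.lower().split())),
                  reverse=True)[:top_k]
```

The top-ranked chunks are then prepended to the prompt so the local or cloud LLM answers from the document's own text rather than from memory.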
- Language: Python 3.8+
- Framework: Flask
- AI/ML:
- RAG (Retrieval-Augmented Generation)
- Groq LLM integration
- Vosk for speech recognition
- Database: SQLite
- Dependencies:
- Flask-CORS
- PyPDF2
- SQLAlchemy
Configure cloud LLM access by setting `GROQ_API_KEY` in your environment or editing `offline/gro.py`. Choose a supported model (e.g., `llama-3.1-8b-instant`).
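A minimal sketch of reading that configuration, assuming environment variables are preferred over editing the file (the `GROQ_MODEL` variable and its default below are illustrative, not necessarily what `offline/gro.py` uses):

```python
import os

# Illustrative only: offline/gro.py may name or default these differently.
GROQ_API_KEY = os.environ.get("GROQ_API_KEY", "")
GROQ_MODEL = os.environ.get("GROQ_MODEL", "llama-3.1-8b-instant")

if not GROQ_API_KEY:
    # Without a key, cloud answers are unavailable; offline mode still works.
    print("GROQ_API_KEY not set; cloud LLM features disabled.")
```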
- Python 3.8 or higher
- Node.js 14+ and npm
- Git
- 8GB RAM (16GB recommended for large models)
- 2GB+ free disk space
1. Clone the repository:

   ```shell
   git clone <repository-url>
   cd NEX_OFF
   ```

2. Set up the Python environment:

   ```shell
   python -m venv .venv
   .venv\Scripts\activate  # On Windows
   pip install -r offline/requirements.txt
   ```

3. Install Node.js dependencies:

   ```shell
   npm install
   ```

4. Start the backend server:

   ```shell
   python offline/app.py
   ```

5. In a new terminal, start the Electron app:

   ```shell
   npm start
   ```
- Ask questions from large PDFs without internet
- Fast local responses using RAG
- Accurate speech-to-text in offline mode
- Seamless switching between offline and online AI
- Ideal for research, academics, and private data analysis
```shell
docker build -t nexoff-backend .
docker run -p 5000:5000 nexoff-backend
```

For production deployment, consider the following:
- Use Gunicorn or uWSGI as the production WSGI server
- Set up Nginx as a reverse proxy
- Configure environment variables for sensitive data
- Enable HTTPS with Let's Encrypt
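Putting the first three recommendations together, a Gunicorn setup might use a config file like the following sketch (values are illustrative and not shipped with the project; it assumes the Flask app object in `offline/app.py` is named `app`):

```python
# gunicorn.conf.py -- illustrative starting point; tune for your host
bind = "127.0.0.1:5000"   # keep Gunicorn private; Nginx proxies public HTTPS here
workers = 3               # a common rule of thumb is (2 * CPU cores) + 1
timeout = 120             # LLM and PDF-processing requests can run long
accesslog = "-"           # write access logs to stdout
```

Launch with `gunicorn -c gunicorn.conf.py app:app` from the `offline/` directory, and keep secrets such as `GROQ_API_KEY` in environment variables rather than in code.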
- All documents are processed locally
- No PDF data is uploaded without user consent
- SQLite database stored on local machine
- Offline mode ensures complete data privacy
- No third-party tracking or analytics
- Fully local LLM integration (no cloud dependency)
- Multi-language voice support
- Conversation analytics dashboard
- Model fine-tuning using the local knowledge base
- Linux & macOS installer packages
Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
- Vosk for offline speech recognition
- PDF.js for PDF rendering
- Bootstrap for the UI components
- Flask for the backend framework
- Electron for cross-platform desktop app
Distributed under the MIT License. See LICENSE for more information.
This project is maintained by Gudiwada Sruthi. For support, please open an issue in the repository.






