
SignLangRecWeb1

A Next.js application for Sign Language Recognition using MediaPipe and TensorFlow.js, hosted on Firebase.

Setup

  1. Install Dependencies:

    npm install
  2. Training the Model: The model is trained on a custom dataset (dataset/signLanguage-Dataset-15-WithStillImages.npy). The training process:

    • Extracts images from .npy to training/images/.
    • Trains a Convolutional Neural Network (CNN) using Node.js (training/train.js).
    • Saves the model to training/model/.

    To run training manually:

    cd training
    npm install
    # First export the images (requires the Python env in training/venv with numpy and cv2)
    venv/Scripts/python export_images.py
    # Then train
    node train.js

    After training, move the model files to the public directory:

    mkdir -p public/model
    cp training/model/* public/model/
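The exported skeleton images feed the CNN in training/train.js. A minimal sketch of the kind of preprocessing involved (the function names, a 0-255 grayscale input range, and one-hot labels are assumptions for illustration, not the actual training code):

```javascript
// Illustrative preprocessing for CNN training (names and shapes are
// assumptions, not the actual training/train.js code).

// Scale 0-255 grayscale pixels into [0, 1] for the network input.
function normalizePixels(pixels) {
  return pixels.map((p) => p / 255);
}

// One-hot encode an integer class label for categorical cross-entropy.
function oneHot(label, numClasses) {
  const v = new Array(numClasses).fill(0);
  v[label] = 1;
  return v;
}

const x = normalizePixels([0, 255]); // [0, 1]
const y = oneHot(2, 5); // [0, 0, 1, 0, 0]
```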
  3. Run Development Server:

    npm run dev
  4. Deploy to Firebase:

    npm run build
    firebase deploy

    (If the project has not been initialized, run firebase init first, or target an existing project ID.)
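firebase deploy reads a firebase.json at the repository root. A minimal hosting sketch, assuming a statically exported Next.js build in out/ (the actual project may be configured differently, e.g. via framework-aware hosting):

```json
{
  "hosting": {
    "public": "out",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  }
}
```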

Architecture

  • Frontend: Next.js 15 + React 19 + Tailwind CSS v4.
  • ML Pipeline:
    • Webcam input via react-webcam.
    • Hand Detection via MediaPipe HandLandmarker.
    • Landmarks are drawn onto a standardized 300x300 black canvas (white skeleton).
    • CNN (TFJS) predicts the sign from this skeleton image.
    • This matches the training data format (which consists of pre-drawn skeletons).
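The skeleton-drawing step above can be sketched as a projection of MediaPipe's normalized landmark coordinates onto the 300x300 canvas. CANVAS_SIZE and toCanvasPoint are illustrative names, not the actual code in components/SignLanguageDetector.tsx:

```javascript
// MediaPipe HandLandmarker returns landmarks with x/y normalized to
// [0, 1] relative to the input frame. To draw the white skeleton on
// the standardized black canvas, each point is scaled to pixel space.
const CANVAS_SIZE = 300; // matches the pipeline description above

function toCanvasPoint(landmark) {
  return {
    x: Math.round(landmark.x * CANVAS_SIZE),
    y: Math.round(landmark.y * CANVAS_SIZE),
  };
}

// Example: a landmark at the centre of the frame maps to (150, 150).
const p = toCanvasPoint({ x: 0.5, y: 0.5 });
```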

Configuration

  • app/globals.css: Tailwind v4 configuration using Ganesan's design system style.
  • components/SignLanguageDetector.tsx: Main logic.

Notes

  • The training script falls back to the pure-JavaScript CPU backend if tfjs-node fails to install (common on Windows); this is slower but works for small datasets.
  • The .npy dataset contains 860 samples across 25 classes (likely A-Y).
