Binary file added .DS_Store
Binary file not shown.
56 changes: 56 additions & 0 deletions .github/workflows/deploy.yml
```yaml
name: Docker Build & Deploy to Vercel

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Install Vercel CLI
        run: npm install --global vercel@latest

      # Step A: Pull Vercel Config
      # We do this OUTSIDE Docker first so we have the valid .vercel folder
      # to copy INTO the Docker container.
      - name: Pull Vercel Environment Information
        run: vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}
        env:
          VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
          VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}

      # Step B: Build the Docker Image
      # Automates: docker build --build-arg VERCEL_TOKEN=... -t overlap-chatgpt .
      - name: Build Docker Image
        run: |
          docker build \
            --build-arg VERCEL_TOKEN=${{ secrets.VERCEL_TOKEN }} \
            -t overlap-chatgpt .

      # Step C: Extract Artifacts
      # Automates: docker create, docker cp, docker rm
      - name: Extract Prebuilt Artifacts
        run: |
          # Create a temporary container (don't run it, just create it)
          docker create --name temp_container overlap-chatgpt

          # Copy the .vercel output folder from the container to the runner
          # Note: We overwrite the local .vercel folder with the build output
          docker cp temp_container:/.vercel .

          # Cleanup
          docker rm temp_container

      # Step D: Deploy to Vercel
      # Automates: vercel deploy --prebuilt --prod
      - name: Deploy to Vercel
        run: vercel deploy --prebuilt --prod --token=${{ secrets.VERCEL_TOKEN }}
        env:
          VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
          VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
```
3 changes: 2 additions & 1 deletion .gitignore
```diff
@@ -157,4 +157,5 @@ cython_debug/
 # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
 # and can be added to the global gitignore or merged into this file. For a more nuclear
 # option (not recommended) you can uncomment the following to ignore the entire idea folder.
-#.idea/
+#.idea/
+.vercel
```
150 changes: 150 additions & 0 deletions ClientServer.md
# Overlap-chatgptClone

This is the backend server for the Overlap ChatGPT Clone, a Flask application designed to serve a chat API. It is configured for deployment on Vercel using a custom Docker build environment.

## 🚀 How to Run Locally

Follow these steps to run the server on your local machine for development.

### 1. Prerequisites

- Python 3.12
- A Python virtual environment (recommended)
- A `config.json` file (see below)

### 2. Local Installation

Clone the repository:

```bash
git clone https://github.com/KennyAngelikas/Overlap-chatgptClone
cd Overlap-chatgptClone
```

Create and activate a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate
```

Install the required Python packages:

```bash
pip install -r requirements.txt
```

Create your configuration file. The app reads its settings from `config.json`; create this file in the root directory:

```json
{
    "site_config": {
        "host": "0.0.0.0",
        "port": 1338,
        "debug": true
    },
    "database": {
        "url": "postgresql://user:password@localhost:5432/mydb"
    },
    "api_keys": {
        "gemini": "YOUR_GEMINI_API_KEY_HERE"
    }
}
```

Run the application:

```bash
python run.py
```
Your server should now be running on http://localhost:1338.

## 📦 How to Deploy to Vercel

This project is deployed using prebuilt output from a custom Docker container. This process is required to build `psycopg2` correctly for Vercel's Amazon Linux runtime.

### 1. Prerequisites

- Docker Desktop must be installed and running.
- Vercel CLI must be installed: `npm install -g vercel`
- A Vercel account.

### 2. Required Project Files

You must have these four files in your project's root directory.

#### `Dockerfile`

This file builds your project inside an environment identical to Vercel's (Amazon Linux 2023):

```dockerfile
# Stage 1: The "builder"
# USE THE OFFICIAL AWS LAMBDA PYTHON 3.12 IMAGE (Amazon Linux 2023)
FROM public.ecr.aws/lambda/python:3.12 AS builder

WORKDIR /app

# 1. Install build tools, Node.js, and npm using DNF
RUN dnf update -y && dnf groupinstall -y "Development Tools" && dnf install -y nodejs npm

# 2. Install Python dependencies
COPY requirements.txt requirements.txt
RUN pip3 install --user --no-cache-dir -r requirements.txt
# Add Python's user bin to the PATH
ENV PATH=/root/.local/bin:$PATH

# 3. Install Vercel CLI
RUN npm install --global vercel@latest

# 4. Copy all your project files
COPY . .

# 5. Copy your Vercel project link
COPY .vercel .vercel

# 6. Build the project using Vercel CLI
ARG VERCEL_TOKEN
RUN VERCEL_TOKEN=$VERCEL_TOKEN vercel build --prod

# ---
# Stage 2: The "final output"
FROM alpine:latest

# Copy the entire .vercel folder
COPY --from=builder /app/.vercel /.vercel
```
#### `vercel.json`

This file tells Vercel how to build and route your Python app:

```json
{
    "builds": [
        {
            "src": "run.py",
            "use": "@vercel/python",
            "config": { "pythonVersion": "3.12" }
        }
    ],
    "routes": [
        {
            "src": "/(.*)",
            "dest": "run.py"
        }
    ]
}
```
#### `requirements.txt`

Make sure this file uses `psycopg2-binary`:

```text
flask
python-dotenv
requests
beautifulsoup4
psycopg2-binary
# ... any other libraries
```

#### `.dockerignore`

This speeds up your Docker build by ignoring unnecessary files:

```text
# Venv
venv/

# Docker build output
.vercel

# Python cache
__pycache__/
*.pyc
```
### 3. ⚠️ Important: Fix `config.json` for Vercel

Your `run.py` script (which reads `config.json`) will fail on Vercel, because Vercel supplies secrets through environment variables, not JSON files. You must modify `run.py` to read from `os.environ`.

Original `run.py` (local only):

```python
# ...
from json import load

if __name__ == '__main__':
    config = load(open('config.json', 'r'))
    site_config = config['site_config']
    # ...
```

Modified `run.py` (works locally AND on Vercel):

```python
from server.app import app
from server.website import Website
from server.controller.conversation_controller import ConversationController
from json import load
import os

# --- VERCEL FIX ---
# Check if running on Vercel (or any system with env vars set)
db_url = os.environ.get('DATABASE_URL')
site_port = os.environ.get('PORT', 1338)  # Vercel provides a PORT

if db_url:
    # We are on Vercel or similar
    site_config = {
        "host": "0.0.0.0",
        "port": int(site_port),
        "debug": False
    }
    # You would also load other configs (like GEMINI_API_KEY) here:
    # os.environ.get('GEMINI_API_KEY')
else:
    # We are local, load from config.json
    config = load(open('config.json', 'r'))
    site_config = config['site_config']
    # You would also load the DB URL from config here:
    # db_url = config['database']['url']
# --- END FIX ---


# This logic is now outside the __name__ block
site = Website(app)
for route in site.routes:
    app.add_url_rule(
        route,
        view_func=site.routes[route]['function'],
        methods=site.routes[route]['methods'],
    )

ConversationController(app)

# Fallback route so the root URL returns something
@app.route('/', methods=['GET'])
def handle_root():
    return "Flask server is running!"

# This block is for local development only
if __name__ == '__main__':
    print(f"Running on port {site_config['port']}")
    app.run(**site_config)
    print(f"Closing port {site_config['port']}")
```
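The env-var fallback in the modified `run.py` can be factored into a small, testable helper. A minimal sketch, assuming injectable `environ` and `local_config` parameters (the `select_site_config` name is hypothetical, not part of the repo):

```python
import os

def select_site_config(environ=None, local_config=None):
    """Mirror the run.py branch: env vars on Vercel, config.json locally.

    Both arguments are injectable so the logic can be unit-tested without
    touching real environment variables or files.
    """
    environ = os.environ if environ is None else environ
    if environ.get('DATABASE_URL'):
        # Deployed: Vercel supplies DATABASE_URL and PORT
        return {
            "host": "0.0.0.0",
            "port": int(environ.get('PORT', 1338)),
            "debug": False,
        }
    # Local: fall back to the site_config block of config.json
    return local_config['site_config']
```

Calling it with a fake environment such as `{'DATABASE_URL': 'postgresql://u:p@h/db', 'PORT': '8000'}` returns the deployed-style settings, while an empty environment plus a parsed `config.json` dict returns the local `site_config` block unchanged.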
### 4. Deployment Steps

#### Step 1: One-Time Vercel Setup

Log in to Vercel CLI:

```bash
vercel login
```

Link your project:

```bash
vercel link
```

Pull project settings:

```bash
vercel pull --yes
```

Add Vercel environment variables:

1. Go to your project's dashboard on Vercel.
2. Go to Settings > Environment Variables.
3. Add all your secrets (e.g., `DATABASE_URL`, `GEMINI_API_KEY`). These must match the `os.environ.get()` keys in your `run.py`.

#### Step 2: The 6-Step Deploy Process

Run these commands from your project's root directory every time you want to deploy a change.

1. Build the Docker image (this will take a few minutes). Get your token from Vercel Dashboard > Settings > Tokens:

   ```bash
   docker build --build-arg VERCEL_TOKEN="YOUR_VERCEL_TOKEN_HERE" -t overlap-chatgpt .
   ```

2. Remove the old container (in case it exists):

   ```bash
   docker rm temp_container
   ```

3. Create a new container from the image:

   ```bash
   docker create --name temp_container overlap-chatgpt
   ```

4. Copy the build output from the container to your computer:

   ```bash
   docker cp temp_container:/.vercel .
   ```

5. Clean up the container:

   ```bash
   docker rm temp_container
   ```

6. Deploy the prebuilt output:

   ```bash
   vercel deploy --prebuilt --prod
   ```
## 🔌 Architecture: Client-Server Interaction

This repository is a JSON API backend; it is only the "server" part of your application.

### Client (the "browser")

1. A user visits your Vercel URL (e.g., https://overlap-chatgpt-clone.vercel.app).
2. Vercel serves your static frontend (e.g., React, HTML/JS) from the `Website` routes.
3. The user types a message in the chat.

### Server (this Flask app)

1. Your frontend's JavaScript makes an HTTP request (e.g., a POST request to `/api/chat`) with the user's message.
2. Vercel routes this request to your `run.py` serverless function.
3. The `ConversationController` receives the request.
4. It calls services like `gemini_service` (to talk to an AI) and `teams_service` (to get data).
5. The `teams_service` uses `db_model` to query your PostgreSQL database (using `psycopg2`).
6. The services return data to the controller.

### Response

1. The `ConversationController` formats a JSON response.
2. Flask sends this JSON back to the client.
3. Your frontend's JavaScript receives the JSON and displays the chat message to the user.
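From the client's side, the first server step above is just a JSON POST. A minimal sketch using only the Python standard library; the `/api/chat` path and the `{user_id, team_id, prompt}` payload shape are assumptions taken from this description, not a confirmed API:

```python
import json
from urllib import request

API_URL = "https://overlap-chatgpt-clone.vercel.app/api/chat"  # hypothetical endpoint

def build_chat_request(user_id, team_id, prompt):
    """Build the JSON POST the frontend's JavaScript would send."""
    payload = json.dumps(
        {"user_id": user_id, "team_id": team_id, "prompt": prompt}
    ).encode("utf-8")
    return request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("u1", "t1", "How do we track onboarding metrics?")
# request.urlopen(req) would return the ConversationController's JSON response
```

In a real frontend the same request would be made with `fetch` from JavaScript; the Python version is only to make the request shape concrete.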
36 changes: 24 additions & 12 deletions Dockerfile
```diff
@@ -1,22 +1,34 @@
-# Build stage
-FROM python:3.8-alpine AS build
+# Stage 1: The "builder"
+# USE THE OFFICIAL AWS LAMBDA PYTHON 3.12 IMAGE (Amazon Linux 2023)
+FROM public.ecr.aws/lambda/python:3.12 AS builder
 
 WORKDIR /app
 
+# CHANGED: Removed "Development Tools". We only need nodejs and npm.
+RUN dnf update -y && dnf install -y nodejs npm
+
+# 2. Install Python dependencies
 COPY requirements.txt requirements.txt
-RUN apk add --no-cache build-base && \
-    pip3 install --user --no-cache-dir -r requirements.txt
+RUN pip3 install --user --no-cache-dir -r requirements.txt
+# Add Python's user bin to the PATH
 ENV PATH=/root/.local/bin:$PATH
 
+# 3. Install Vercel CLI
+RUN npm install --global vercel@latest
+
+# 4. Copy all your project files
 COPY . .
 
-# Production stage
-FROM python:3.8-alpine AS production
+# 5. Copy your Vercel project link
+COPY .vercel .vercel
 
-WORKDIR /app
+# 6. Build the project using Vercel CLI
+ARG VERCEL_TOKEN
+RUN VERCEL_TOKEN=$VERCEL_TOKEN vercel build --prod
 
-COPY --from=build /root/.local /root/.local
-COPY . .
-
-ENV PATH=/root/.local/bin:$PATH
+# ---
+# Stage 2: The "final output"
+FROM alpine:latest
 
-CMD ["python3", "./run.py"]
+# Copy the entire .vercel folder
+COPY --from=builder /app/.vercel /.vercel
```
29 changes: 26 additions & 3 deletions README.md
```diff
@@ -1,7 +1,30 @@
-Development of this repository is currently in a halt, due to lack of time. Updates are comming end of June.
+# Overlap
+- Overlap is a team-first chat assistant that nudges teammates toward each other.
+- When someone asks a question, the system checks a shared, opt-in skills index and, if relevant, suggests which teammate(s) might help — replacing solitary AI answers with socially aware guidance.
+
+## MVP (minimal viable product)
+- Simple web chat UI (single-page) that sends {user_id, team_id, prompt} to the server.
+- Team and user records with a short skills survey (stored in SQLite for the MVP).
+- Backend augmentation: before calling the model, do a fast skill match (keyword or simple normalization) and inject one short hint into the prompt if a teammate matches.
+- Stream model responses back to the client unchanged except for the injected hint.
+- Basic seed data, Docker support, and an environment variable for the model API key.
+- No production auth in the MVP (trusted user_id); plan to add auth before public use.
+
+## User journey (MVP)
+1. Join or create a team and complete a quick skills survey.
+2. Open chat and ask a question.
+3. Server checks team skills and finds possible matches.
+4. If a match exists, the reply includes a short suggestion like “Alice knows React — want to connect?”
+5. Conversation is logged; skill usage counters may be updated for future recommendations.
+
+## User personas
+Four user personas:
+
+Sam — Product Manager
+Goal: Get a quick, team-aware answer or a referral to the right teammate.
+Scenario: Asks “How do we track onboarding metrics?” and is suggested to talk to Maya, who owns analytics.
+
 
-working again ; )
-I am very busy at the moment so I would be very thankful for contributions and PR's
 
 ## To do
 - [x] Double confirm when deleting conversation
```
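The "fast skill match" step in the MVP list above could be sketched as follows; the `skills_index` shape and both function names are illustrative, not the repo's actual code:

```python
def match_teammates(prompt, skills_index):
    """Keyword skill match: return (name, matched_skills) pairs for
    teammates whose listed skills appear as words in the prompt."""
    words = set(prompt.lower().split())
    hits = []
    for name, skills in skills_index.items():
        matched = [s for s in skills if s.lower() in words]
        if matched:
            hits.append((name, matched))
    return hits

def inject_hint(prompt, hits):
    """Prepend one short hint if anyone matched, else leave the prompt unchanged."""
    if not hits:
        return prompt
    name, matched = hits[0]
    return f"[Hint: teammate {name} knows {matched[0]}.] {prompt}"
```

For example, `match_teammates("Who can help with react hooks?", {"Alice": ["React"], "Bob": ["SQL"]})` matches Alice, and `inject_hint` then prepends the one-line suggestion before the prompt is sent to the model.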