A tool for automatically scanning for IDOR (Insecure Direct Object References) and BAC (Broken Access Control) vulnerabilities. This project consists of two main parts: a flexible CLI scanner and a Flask-based web interface (GUI) to simplify scan execution and report analysis.
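At its core, an IDOR check of the kind described above can be thought of as a differential test: request the same object once as the victim and once as the attacker, then compare the responses. The sketch below is a hypothetical illustration of that heuristic; the function name and similarity threshold are illustrative, not the scanner's actual API.

```python
# Hypothetical IDOR heuristic: names and threshold are illustrative only,
# not the scanner's actual implementation.
from difflib import SequenceMatcher

def looks_like_idor(victim_status, victim_body, attacker_status, attacker_body,
                    similarity_threshold=0.9):
    """Flag a potential IDOR when the attacker's request for a victim-owned
    object succeeds and returns essentially the content the victim sees."""
    if attacker_status != 200:
        return False  # access denied or redirected: no finding
    similarity = SequenceMatcher(None, victim_body, attacker_body).ratio()
    return victim_status == 200 and similarity >= similarity_threshold

# Attacker sees the victim's invoice verbatim -> flagged
print(looks_like_idor(200, "Invoice #42 for victim", 200, "Invoice #42 for victim"))  # True
# Attacker gets an error page -> not flagged
print(looks_like_idor(200, "Invoice #42 for victim", 403, "Forbidden"))  # False
```

Real scanners refine this with redirect handling, response-length baselines, and per-endpoint tuning, but the attacker-versus-victim comparison is the essential idea.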
A searchable and sortable dashboard displaying all scan reports.
A terminal-like view showing live output from the scanner as it runs.
- Interactive Web Dashboard: View scan history in a searchable and sortable table.
- Execute Scans from the Web: Easily run new scans via a form in the web interface.
- Real-time Scan Logs: Monitor the scanner's output process directly in the browser.
- Flexible CLI Scanner: Run scans directly from the terminal with various dynamic arguments.
- Intelligent Crawling: Uses Selenium to crawl modern, JavaScript-heavy websites.
- Report Management: View the details of each finding and easily clear the report history.
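The "Real-time Scan Logs" feature can be approximated by spawning the CLI scanner as a subprocess and forwarding its output line by line. The sketch below is a simplified illustration, not the GUI's actual code; the demo command stands in for a real `crawlrice` invocation.

```python
# Simplified sketch of real-time log streaming: spawn a scanner process and
# yield its stdout line by line so a web handler can forward each line as it
# arrives. The demo command is a stand-in for the actual crawlrice invocation.
import subprocess
import sys

def stream_scanner(cmd):
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    for line in proc.stdout:  # blocks until the next line is available
        yield line.rstrip("\n")
    proc.wait()

demo_cmd = [sys.executable, "-c",
            "print('crawling /profile?id=1'); print('scan complete')"]
for line in stream_scanner(demo_cmd):
    print(line)
```

In a Flask app, the same generator can be handed to a streaming `Response` so the browser receives lines as the scan progresses.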
- Backend: Python, Flask
- Frontend: HTML, Bootstrap 5, JavaScript, DataTables.js
- Scanner: Selenium, BeautifulSoup4, Requests
- Deployment: Docker, Gunicorn
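As a rough illustration of how the scanner stack fits together: Selenium renders JavaScript-heavy pages, and BeautifulSoup can then pull candidate links (e.g. URLs carrying numeric object IDs) out of the rendered source. This sketch parses a static HTML snippet and is not the scanner's actual crawling code; the regex is an assumed heuristic.

```python
# Illustrative only: extract links whose query string carries a numeric-looking
# parameter value, the kind of object reference an IDOR scan would target.
import re
from bs4 import BeautifulSoup

HTML = """
<a href="/invoice?id=42">My invoice</a>
<a href="/about">About us</a>
<a href="/profile?user=7">Profile</a>
"""

def candidate_links(html):
    soup = BeautifulSoup(html, "html.parser")
    links = [a["href"] for a in soup.find_all("a", href=True)]
    # Keep only links with a numeric parameter value (a likely object reference)
    return [href for href in links if re.search(r"[?&]\w+=\d+", href)]

print(candidate_links(HTML))  # ['/invoice?id=42', '/profile?user=7']
```

In the real pipeline the HTML would come from `driver.page_source` after Selenium finishes rendering, rather than from a static string.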
```
Crawlrice/
├── Main/                    # Main folder
│   ├── Cli_Crawlrice/
│   │   ├── __init__.py
│   │   └── crawlrice.py     # Main scanner script (CLI)
│   └── Gui_Crawlrice/
│       ├── app.py           # Flask web application (GUI)
│       ├── reports/         # Report output folder (ignored by Git)
│       ├── static/          # CSS and JavaScript files
│       └── templates/       # HTML files
├── Dockerfile               # Instructions to build the Docker image
├── docker-compose.yml       # Easy one-command Docker startup
├── setup.py                 # Setup script for installing the CLI
├── setup.sh                 # Setup script for Linux/macOS
├── setup.bat                # Setup script for Windows (currently not available)
├── requirements.txt         # List of required Python libraries
└── README.md                # This documentation
```
Follow this guide to install the project manually from your command line.
Prerequisites:
- Git
- Python (version 3.7+ is recommended)
- Google Chrome (or Chromium)
- ChromeDriver (matching your Chrome version)
```
git clone https://github.com/NasiGoRank/Crawlrice.git
cd Crawlrice
```

Since the scanner relies on Selenium, you need a working browser and driver.
Install Google Chrome:

```
sudo apt update
sudo apt install wget unzip -y
# Download the latest stable build of Google Chrome
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt install ./google-chrome-stable_current_amd64.deb -y
# Verify installation
google-chrome --version
```

Install ChromeDriver (must match your Chrome version):
```
# Remove old versions
sudo rm -f /usr/local/bin/chromedriver
# Example: Chrome version 139.0.7258.154 (must match your installed Chrome version)
wget https://storage.googleapis.com/chrome-for-testing-public/139.0.7258.154/linux64/chromedriver-linux64.zip
unzip chromedriver-linux64.zip
sudo mv chromedriver-linux64/chromedriver /usr/local/bin/
sudo chmod +x /usr/local/bin/chromedriver
# Verify installation
chromedriver --version
```

Run the setup script for your operating system. This makes the `crawlrice` command available from any directory in your terminal.
For Windows (currently not available):

```
.\setup.bat
```

(You may need to run as Administrator. Open a new terminal after setup completes.)

For Linux/macOS:

```
chmod +x setup.sh
sudo ./setup.sh
```

(Open a new terminal after setup completes.)
This project can be run in several ways depending on your needs.
This is the easiest and most reliable way to get the web application running.
Prerequisites:
- Docker & Docker Compose
Instructions:

1. Clone this repository.
2. Navigate to the project's root directory (`Crawlrice/`) in your terminal.
3. Run the application using Docker Compose:

   ```
   docker-compose up -d --build
   ```

4. The web application is now running at `http://127.0.0.1:5050`.
5. To stop the application, run `docker-compose down`.
For smoother integration between CLI and Docker (so reports sync correctly):
1. Give your user ownership of the Crawlrice project folder (replace `your_username` with your actual username):

   ```
   sudo chown -R your_username:your_username Crawlrice
   ```

2. Add the Crawlrice project root as an environment variable:

   ```
   nano ~/.bashrc
   ```

   Add this line at the bottom (using your actual path to Crawlrice):

   ```
   export CRAWLRICE_PROJECT_ROOT="/path/to/Crawlrice"
   ```

3. Reload your shell configuration:

   ```
   source ~/.bashrc
   ```
Now the project root is globally accessible via $CRAWLRICE_PROJECT_ROOT, which makes volume mounts and CLI report syncing easier.
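Inside Python, that variable can then be used to locate the shared reports folder regardless of the current working directory. A minimal sketch, assuming the reports path follows the project layout shown earlier; the exact lookup the project uses may differ:

```python
# Illustrative: resolve the shared reports directory from the environment
# variable set above, falling back to the current directory if it is unset.
import os
from pathlib import Path

def reports_dir():
    root = os.environ.get("CRAWLRICE_PROJECT_ROOT", ".")
    return Path(root) / "Main" / "Gui_Crawlrice" / "reports"

os.environ["CRAWLRICE_PROJECT_ROOT"] = "/opt/Crawlrice"  # demo value
print(reports_dir())  # on Linux: /opt/Crawlrice/Main/Gui_Crawlrice/reports
```

Mounting the same directory as a Docker volume keeps CLI-generated reports visible to the GUI container.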
You can now log in to the website with these credentials:
| Username | Password |
|---|---|
| admin | password123 |
You can run the crawlrice command from any directory. For reports to sync with the Docker GUI, it's best to run the command from the project's root folder (Crawlrice/).
Usage Examples:

```
# Scan with attacker and victim credentials
crawlrice -u http://example.com -au attacker -ap password -vu victim -vp password

# Display the help menu
crawlrice --help
```

Optional, but recommended before first use: verify that Selenium and ChromeDriver work.
Create a file `test_selenium.py`:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")

driver = webdriver.Chrome(options=options)
driver.get("https://www.google.com")
print("Page Title:", driver.title)
driver.quit()
```

Run it:

```
python3 test_selenium.py
```

Expected output:

```
Page Title: Google
```
This confirms that Chrome + ChromeDriver are working correctly.