
# ForeheadTracker Pro · Tactical Face Tracking System


ForeheadTracker Pro is a high-performance, modular facial landmark tracking system designed for real-time targeting and tactical HUD overlays. Leveraging the modern MediaPipe Tasks API and OpenCV, it provides surgical precision and cinematic visual feedback.


## 📺 Demonstrations

**System in Action (GIF):** ForeheadTracker Pro Live Demo

**Tactical Interface (Screenshot):** ForeheadTracker Pro UI Screenshot


## ⚡ Key Features

- **Surgical Precision:** Tracks 478 landmarks with sub-pixel accuracy.
- **Tactical HUD:** Minimalist, AR-headset-inspired UI with cyan and white aesthetics.
- **Intelligent Centering:** Targets the true face center (eyes-nose-chin average) for stable tracking.
- **Advanced Smoothing:** Integrated exponential moving average (EMA) filtering ($\alpha = 0.15$) for fluid, cinematic motion.
- **Multi-Target Lock:** Concurrently tracks and locks onto up to 5 faces.
- **Visual Feedback:**
  - **Pulsing Reticle:** Animated cyan reticle with rotating outer rings.
  - **Corner Brackets:** Dynamic L-shaped brackets for facial containment.
  - **Telemetry:** Real-time FPS monitoring and tracking-status HUD.
- **Multi-Source Input:** Supports live webcam, local video files, and static images.
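The intelligent-centering feature can be sketched in a few lines. This is an illustrative sketch, not the project's code: the landmark indices (33/263 for the outer eye corners, 1 for the nose tip, 152 for the chin) follow common MediaPipe Face Mesh conventions and may differ from what `tracker.py` actually uses.

```python
# Assumed MediaPipe Face Mesh indices -- verify against the model in use.
LEFT_EYE, RIGHT_EYE, NOSE_TIP, CHIN = 33, 263, 1, 152

def face_center(landmarks):
    """Average the eye, nose, and chin landmarks into one stable target point.

    `landmarks` is a sequence of (x, y) pixel coordinates, one per mesh point.
    Averaging several rigid facial features is steadier than tracking any
    single landmark, which is why the README calls it the "true face center".
    """
    pts = [landmarks[i] for i in (LEFT_EYE, RIGHT_EYE, NOSE_TIP, CHIN)]
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return (cx, cy)
```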

## 🛠️ Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/ForeheadTracker-Pro.git
   cd ForeheadTracker-Pro
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Ensure the model is present: the system looks for `assets/face_landmarker.task` and downloads it automatically if it is missing.


## 🚀 Usage

**Live Webcam (default):**

```bash
python src/main.py
```

**Video File:**

```bash
python src/main.py --input path/to/video.mp4
```

**Static Image:**

```bash
python src/main.py --input path/to/photo.jpg
```
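Internally, `src/main.py` presumably dispatches on the `--input` flag shown above. This is a hedged sketch of that logic, not the repository's actual code; only the `--input` flag itself comes from the README.

```python
import argparse

def parse_args(argv=None):
    """Parse the CLI shown in the usage examples (--input is optional)."""
    parser = argparse.ArgumentParser(description="ForeheadTracker Pro")
    parser.add_argument(
        "--input",
        default=None,
        help="path to a video file or image; omit to use the live webcam",
    )
    return parser.parse_args(argv)

def resolve_source(input_path):
    """Map the CLI argument onto an OpenCV capture source.

    cv2.VideoCapture accepts either a device index (0 = default webcam)
    or a file path, so the dispatch reduces to one branch.
    """
    if input_path is None:
        return 0  # default webcam
    return input_path
```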

## 📁 Project Structure

```
ForeheadTracker-Pro/
├── assets/             # Model files and demo assets
├── src/
│   ├── main.py         # Entry point & CLI handling
│   ├── detector.py     # MediaPipe Tasks API integration
│   ├── tracker.py      # Smoothing & targeting logic
│   └── utils.py        # Tactical HUD drawing engine
├── requirements.txt    # Project dependencies
└── LICENSE             # MIT License
```

## ⚙️ How it Works

1. **Detection:** Uses `mediapipe.tasks.vision` to extract high-density face meshes.
2. **Targeting:** Calculates the geometric center of the face using a weighted average of the nose bridge and facial perimeter.
3. **Stabilization:** Applies an EMA filter to the coordinates to eliminate camera jitter and micro-movements.
4. **Rendering:** Draws the HUD elements using OpenCV's core drawing functions, with alpha blending for the "glow" effect.
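The stabilization step is a standard exponential moving average. A minimal sketch, using the $\alpha = 0.15$ stated in the feature list (the class name and interface are illustrative, not the project's `tracker.py` API):

```python
class EmaFilter:
    """Exponential moving average over 2-D points.

    A small alpha (0.15) weights the accumulated state heavily, so frame-to-
    frame jitter is suppressed at the cost of a little lag -- the "fluid,
    cinematic motion" described above.
    """

    def __init__(self, alpha=0.15):
        self.alpha = alpha
        self.state = None

    def update(self, point):
        if self.state is None:
            self.state = point  # seed with the first observation
        else:
            x = self.alpha * point[0] + (1 - self.alpha) * self.state[0]
            y = self.alpha * point[1] + (1 - self.alpha) * self.state[1]
            self.state = (x, y)
        return self.state
```

Feeding the smoothed point (rather than the raw detection) to the reticle renderer is what keeps the lock-on steady between frames.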

## 📄 License

This project is licensed under the MIT License; see the LICENSE file for details.


## 🤝 Contributing

Contributions are welcome! Feel free to open an issue or submit a pull request.

Developed with ❤️ by jalalakbar47


Dedicated to my 💖 JS. 💖
