# LexiBot – LLM-Powered Interactive Robot

This project focuses on developing a mobile robot platform powered by a Large Language Model (LLM), integrating Arduino and Raspberry Pi technologies to enable natural-language human-robot interaction. Equipped with an onboard camera for real-time visual feedback, the robot will allow users to issue commands through voice or text, creating a seamless interactive experience.
By combining LLM-based natural language processing, real-time computer vision, and interactive robotics, the system will enable users to engage conversationally, directing movements and actions with ease. This project explores the potential for voice-controlled robotics in home automation, assistive technology, and AI-driven remote operation.
```
.
├── web/                # HTML/CSS/JS front-end
│   └── src/
├── pi/                 # Raspberry Pi Python back-end
│   ├── movement1.py
│   ├── server.py
│   └── camerastream.py
└── README.md
```
- Node.js install: https://nodejs.org/
- Python install: `sudo apt install python3`
- Raspberry Pi OS (64-bit) install: https://raspberrypi.com
Main board: Raspberry Pi 4B (8 GB)
Key components
- Raspberry Pi 4B + Fan & Heat‑sink
- RPi Camera v3, ultrasonic & line‑tracking sensors
- Dual H‑bridge motor driver, chassis, speaker, 8×AA battery pack
Live MJPEG video + REST / Socket.IO robot control

| Requirement | Version tested | Install command |
|-------------|----------------|-----------------|
| Raspberry Pi OS | Bookworm 64-bit | `sudo apt update && sudo apt full-upgrade` |
| Python | 3.11 | pre-installed |
| System packages | libcamera, v4l2, pigpio | `sudo apt install libcamera-apps` |
| Python libs | picamera2, simplejpeg, gpiozero, evdev, Flask, Flask-SocketIO, Flask-CORS | `pip install picamera2 simplejpeg gpiozero evdev flask flask_socketio flask_cors` |
Enable camera:
`sudo raspi-config` → Interface Options → Camera → Yes (reboot).
| Component | Pins | Notes |
|---|---|---|
| Camera | CSI ribbon | libcamera |
| Ultrasonic #1 | Echo 19, Trig 13 | Front-left obstacle |
| Ultrasonic #2 | Echo 20, Trig 16 | Front-right obstacle |
| Right motor | FWD 17, BWD 22 | via dual H-bridge |
| Left motor | FWD 24, BWD 23 | via dual H-bridge |
| Power | 5 V / 3 A | separate supply recommended |
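The motor pins in the table above drive the dual H-bridge: one forward and one backward input per motor. As a hedged sketch of how movement commands could map to those pin levels — the BCM pin numbers come from the table, but the command names and this helper are illustrative assumptions (on the robot, a library such as gpiozero's `Motor` would typically wrap each pin pair):

```python
# (forward_pin, backward_pin) per motor, BCM numbering, from the wiring table
RIGHT_MOTOR = (17, 22)
LEFT_MOTOR = (24, 23)

def pin_states(command):
    """Return {bcm_pin: level} for a movement command (illustrative)."""
    r_fwd, r_bwd = RIGHT_MOTOR
    l_fwd, l_bwd = LEFT_MOTOR
    moves = {
        "forward":  {r_fwd: 1, r_bwd: 0, l_fwd: 1, l_bwd: 0},
        "backward": {r_fwd: 0, r_bwd: 1, l_fwd: 0, l_bwd: 1},
        # Spin turns: the two wheels are driven in opposite directions.
        "left":     {r_fwd: 1, r_bwd: 0, l_fwd: 0, l_bwd: 1},
        "right":    {r_fwd: 0, r_bwd: 1, l_fwd: 1, l_bwd: 0},
        "stop":     {r_fwd: 0, r_bwd: 0, l_fwd: 0, l_bwd: 0},
    }
    return moves[command]
```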
The front-end provides a control panel for interacting with the LexiBot robot. It includes real-time video streaming, voice commands, and manual control options.
- **HTML** (`src/index.html`): the main entry point for the web interface.
- **CSS** (`src/css/styles.css`): defines the layout and styling of the control panel.
- **JavaScript**:
  - `src/js/keyboard.js`: handles keyboard-based robot control.
  - `src/js/send.js`: sends commands to the robot server via Socket.IO.
  - `src/js/sensor.js`: processes sensor data from the robot.
  - `src/js/voice_input.js`: enables voice recognition for issuing commands.
- **Live Video Stream**: displays a real-time MJPEG video feed from the robot's camera.
  - Source:

    ```html
    <img id="video-stream" src="http://<robot-ip>:8000/stream.mjpg" alt="Live video">
    ```
- **Voice Control**: allows users to issue commands via voice input.
  - Start/Stop buttons are implemented in `src/index.html`.
  - Logic is handled in `src/js/voice_input.js`.
- **Manual Controls**:
  - Directional buttons for forward, backward, left, and right movement.
  - Touch and mouse events are supported.
  - Implemented in `src/index.html` and `src/js/keyboard.js`.
- **Sensor Data**: displays real-time sensor readings from the robot.
  - Logic is implemented in `src/js/sensor.js`.
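For `src/js/sensor.js` to render readings, the back-end has to emit them in some structured form. As a hedged sketch — the field names and JSON shape below are assumptions, not taken from the project; the two distances correspond to the front-left and front-right ultrasonic sensors in the wiring table:

```python
import json

def sensor_payload(left_cm, right_cm):
    """Serialise the two ultrasonic distance readings as a JSON string
    suitable for pushing to the front-end over Socket.IO (illustrative)."""
    return json.dumps({
        "ultrasonic": {
            "front_left_cm": round(left_cm, 1),   # front-left sensor
            "front_right_cm": round(right_cm, 1), # front-right sensor
        }
    })
```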
1. Open `src/index.html` in a browser.
2. Ensure the robot server is running and accessible at the configured IP address.
3. Use the control panel to interact with the robot:
   - View the live video feed.
   - Issue commands via voice or manual controls.
   - Monitor sensor data in real time.
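On the server side, each command arriving over Socket.IO has to be routed to a movement handler. The sketch below shows one plain-Python way to structure that dispatch — the event and command names are assumptions based on the front-end description, and in `server.py` the handlers would actually be registered with Flask-SocketIO's `@socketio.on(...)` decorator and drive the GPIO pins:

```python
class CommandDispatcher:
    """Maps incoming command names to handler functions (illustrative)."""

    def __init__(self):
        self.handlers = {}
        self.log = []  # record of dispatched commands, useful for debugging

    def on(self, command):
        """Register a handler for a named command (decorator)."""
        def register(fn):
            self.handlers[command] = fn
            return fn
        return register

    def dispatch(self, command):
        """Invoke the handler for a command; record unknown commands."""
        handler = self.handlers.get(command)
        if handler is None:
            self.log.append(("unknown", command))
            return False
        handler()
        self.log.append(("ok", command))
        return True

dispatcher = CommandDispatcher()

@dispatcher.on("forward")
def go_forward():
    pass  # on the robot, this would drive both motors forward via GPIO
```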
- Socket.IO: used for real-time communication between the front-end and the robot server.
- Included via CDN:

  ```html
  <script src="https://cdn.socket.io/4.7.2/socket.io.min.js"></script>
  ```