# Homebox AI Frontend
Mobile-first web app for adding items to Homebox with AI-powered image analysis via Ollama.
## Prerequisites
- Python 3.11+
- Nginx
- Ollama with a vision model
- Running Homebox instance
## Quick Start
### 1. Pull an Ollama vision model

For development (16GB RAM, iGPU):

```bash
ollama pull moondream
```

For production (6GB+ VRAM):

```bash
ollama pull llava:13b-v1.6-q4_0
```
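To confirm the pull succeeded, you can list installed models via Ollama's `/api/tags` endpoint. Below is a minimal check; the script name, the `requests` dependency, and the default URL are illustrative and not part of this repo:

```python
# verify_model.py - quick check that the pulled vision model is visible to Ollama.
# Assumes Ollama's default API port (11434); adjust OLLAMA_URL if yours differs.
import requests

OLLAMA_URL = "http://127.0.0.1:11434"
EXPECTED_MODEL = "moondream"  # or "llava:13b-v1.6-q4_0" for production

resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()
names = [m["name"] for m in resp.json().get("models", [])]

if any(n.startswith(EXPECTED_MODEL) for n in names):
    print(f"OK: {EXPECTED_MODEL} is available")
else:
    print(f"Missing: {EXPECTED_MODEL} not found in {names}")
```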
### 2. Generate SSL certificate

```bash
cd nginx
chmod +x generate-cert.sh
./generate-cert.sh
```
### 3. Set up Python environment

```bash
cd backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
### 4. Configure environment

Create `backend/.env`:

```
OLLAMA_URL=http://127.0.0.1:11434
OLLAMA_MODEL=moondream
HOMEBOX_URL=http://127.0.0.1:3100

# Optional: pre-configured token (skip login)
# HOMEBOX_TOKEN=your_token_here
```
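The backend reads these values at startup via `backend/config.py`. As a rough illustration of the expected variables, a config module could look like the sketch below; the real `config.py` is the source of truth, and the `python-dotenv` dependency and default values shown here are assumptions:

```python
# Illustrative config loader - mirrors the variables in backend/.env.
# The actual backend/config.py may read the environment differently.
import os

from dotenv import load_dotenv  # python-dotenv; assumed dependency for this sketch

load_dotenv()  # read backend/.env into the process environment

OLLAMA_URL = os.getenv("OLLAMA_URL", "http://127.0.0.1:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "moondream")
HOMEBOX_URL = os.getenv("HOMEBOX_URL", "http://127.0.0.1:3100")
HOMEBOX_TOKEN = os.getenv("HOMEBOX_TOKEN")  # optional: None means users log in manually
```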
### 5. Configure and start Nginx

Copy or symlink the config:

```bash
# Option A: Use the config directly (adjust paths as needed)
sudo cp nginx/nginx.conf /etc/nginx/nginx.conf

# Edit paths in nginx.conf:
#   - ssl_certificate: point to your certs
#   - root: point to your frontend folder

sudo nginx -t                  # Test config
sudo systemctl restart nginx
```

Or, for development, run Nginx directly with the custom config:

```bash
sudo nginx -c $(pwd)/nginx/nginx.conf
```
### 6. Start the backend

```bash
cd backend
source venv/bin/activate
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```
### 7. Access the app

Open https://localhost in your browser and accept the self-signed certificate warning.

For mobile access on the same network, use your computer's IP:

```bash
# Find your IP
ip addr show | grep inet
# or
hostname -I
```

Then access `https://YOUR_IP` on your phone.
## Project Structure

```
homebox_ai_frontend/
├── frontend/
│   ├── index.html        # Single-page app
│   ├── styles.css        # Mobile-first styles
│   └── app.js            # Camera, form, API calls
├── backend/
│   ├── main.py           # FastAPI app
│   ├── config.py         # Environment config
│   ├── requirements.txt
│   └── routers/
│       ├── analyze.py    # Ollama integration
│       └── homebox.py    # Homebox API proxy
├── nginx/
│   ├── nginx.conf        # HTTPS proxy config
│   ├── generate-cert.sh
│   └── certs/            # SSL certificates
└── README.md
```
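`routers/analyze.py` is where the uploaded photo meets the Ollama vision model. The actual implementation is not reproduced here; the sketch below shows one plausible shape, assuming Ollama's `/api/generate` endpoint with base64-encoded images and a non-streaming response (the prompt, response field handling, and hard-coded settings are illustrative):

```python
# Illustrative sketch of an analyze router (not the repo's actual routers/analyze.py).
import base64

import httpx
from fastapi import APIRouter, UploadFile

OLLAMA_URL = "http://127.0.0.1:11434"   # would come from config in the real app
OLLAMA_MODEL = "moondream"

router = APIRouter()

@router.post("/api/analyze")
async def analyze(image: UploadFile):
    encoded = base64.b64encode(await image.read()).decode()
    payload = {
        "model": OLLAMA_MODEL,
        "prompt": "Describe this item with a short name and a one-sentence description.",
        "images": [encoded],
        "stream": False,
    }
    async with httpx.AsyncClient(timeout=120) as client:
        resp = await client.post(f"{OLLAMA_URL}/api/generate", json=payload)
        resp.raise_for_status()
    # With stream=False, Ollama returns the generated text in the "response" field
    return {"suggestion": resp.json()["response"]}
```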
## API Endpoints

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/health` | Health check |
| POST | `/api/login` | Login to Homebox |
| POST | `/api/analyze` | Analyze image with AI |
| GET | `/api/locations` | Get Homebox locations |
| GET | `/api/labels` | Get Homebox labels/tags |
| POST | `/api/items` | Create item in Homebox |
| POST | `/api/items/{id}/attachments` | Upload photo |
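For reference, a typical request flow (login, analyze, create, attach) against these endpoints might look like the following. Field names, payload shapes, and auth handling are assumptions; check the backend routers for the actual contract:

```python
# Illustrative end-to-end call against the backend API.
import requests

BASE = "https://localhost/api"

s = requests.Session()
s.verify = False  # development only: self-signed certificate

# 1. Log in with Homebox credentials (the backend may instead return a token
#    to send in an Authorization header on later requests).
s.post(f"{BASE}/login", json={"username": "user@example.com", "password": "secret"}).raise_for_status()

# 2. Send a photo for AI analysis
with open("item.jpg", "rb") as f:
    suggestion = s.post(f"{BASE}/analyze", files={"image": f}).json()

# 3. Create the item from the suggestions, then attach the photo
item = s.post(f"{BASE}/items", json={
    "name": suggestion.get("name", "New item"),
    "description": suggestion.get("description", ""),
}).json()
with open("item.jpg", "rb") as f:
    s.post(f"{BASE}/items/{item['id']}/attachments", files={"file": f})
```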
## Usage

- Log in with your Homebox credentials
- Point the camera at an item and tap Capture
- Wait for AI analysis (suggestions populate automatically)
- Edit name, description, location, and tags as needed
- Tap Save to Homebox
## Troubleshooting

### Camera not working

- Ensure HTTPS is being used (required for the camera API)
- Check browser permissions for camera access
- On iOS Safari, camera access requires user interaction

### AI analysis slow or failing

- Check that Ollama is running: `curl http://localhost:11434/api/tags`
- Try a smaller model such as `moondream`
- Increase the timeout in `nginx.conf` if needed

### Homebox connection issues

- Verify the Homebox URL in `.env`
- Check that Homebox is accessible: `curl http://localhost:3100/api/v1/status`
- Ensure the credentials are correct
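The two `curl` checks above can be combined into one quick script. The URLs come straight from this README's defaults; the script itself is illustrative and not part of the repo:

```python
# Quick connectivity check for the two upstream services this app depends on.
import requests

CHECKS = {
    "Ollama": "http://localhost:11434/api/tags",
    "Homebox": "http://localhost:3100/api/v1/status",
}

for name, url in CHECKS.items():
    try:
        resp = requests.get(url, timeout=5)
        print(f"{name}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{name}: unreachable ({exc})")
```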
## Development

Run the backend with auto-reload:

```bash
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```

Frontend changes take effect immediately; just refresh the browser.
## License

MIT