# Profile Reporting to Backend
The quality-agent profiler can automatically send profiling data to a backend endpoint for centralized collection and analysis.
## Quick Start
### Using CLI Arguments
```bash
# Send profiles to backend every 5 minutes (default)
./multi_lang_profiler python server.py \
--report-endpoint https://quality-web-app.pages.dev/api/v0/profiles
# Custom interval (60 seconds) and API key
./multi_lang_profiler python server.py \
--report-endpoint https://quality-web-app.pages.dev/api/v0/profiles \
--report-interval 60 \
--api-key your-api-key
```
### Using Environment Variables
```bash
# Set environment variables
export PROFILER_ENDPOINT=https://quality-web-app.pages.dev/api/v0/profiles
export PROFILER_INTERVAL=300
export PROFILER_API_KEY=your-api-key
# Run profiler (reporting is automatically enabled)
./multi_lang_profiler python server.py
```
## Configuration
### CLI Options
| Option | Description | Default |
|--------|-------------|---------|
| `--report-endpoint <url>` | Backend API endpoint URL | `https://quality-web-app.pages.dev/api/v0/profiles` |
| `--report-interval <secs>` | Time between reports in seconds | `300` (5 minutes) |
| `--report-max-buffer <num>` | Max profiles in buffer (discards oldest) | `100` |
| `--report-format <format>` | Data format: `json` or `protobuf` | `protobuf` **(70% smaller)** |
| `--api-key <key>` | API key for authentication | None |
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `PROFILER_ENDPOINT` | Backend API endpoint URL | `https://quality-web-app.pages.dev/api/v0/profiles` |
| `PROFILER_INTERVAL` | Report interval in seconds | `300` |
| `PROFILER_MAX_BUFFER_SIZE` | Maximum buffer size | `100` |
| `PROFILER_FORMAT` | Data format: `json` or `protobuf` | `protobuf` |
| `PROFILER_API_KEY` | API key for authentication | None |
**Note:** CLI arguments override environment variables.
## How It Works
### 1. Profile Collection
When profiling is enabled, the agent collects performance data including:
- Hot functions and CPU usage
- Execution counts
- Call stack information
- Timing data
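Conceptually, the hot-function stats fall out of simple sample counting. The sketch below is illustrative only (the agent's internal sampler is not shown in this document, and `hot_functions` is a hypothetical helper): it aggregates per-tick function samples into the ranked stats of the kind that appear in the payload example later in this document.

```python
from collections import Counter

def hot_functions(samples):
    """Aggregate sampled function names into ranked hot-function stats.

    `samples` is a list of function names, one per sampling tick
    (a simplification of real call-stack samples).
    """
    counts = Counter(samples)
    total = len(samples)
    ranked = []
    for rank, (name, n) in enumerate(counts.most_common(), start=1):
        ranked.append({
            "rank": rank,
            "name": name,
            "samples": n,
            "percentage": round(100.0 * n / total, 1),
        })
    return ranked

# 10 ticks, dominated by one function:
samples = ["process_data"] * 6 + ["parse"] * 3 + ["main"]
print(hot_functions(samples)[0])
# → {'rank': 1, 'name': 'process_data', 'samples': 6, 'percentage': 60.0}
```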
### 2. Periodic Reporting
```
┌──────────────┐      ┌──────────────┐      ┌──────────────┐
│   Profiler   │─────▶│   Profile    │─────▶│   Backend    │
│    Agent     │ 5min │   Reporter   │ HTTP │     API      │
│              │      │   (Buffer)   │ POST │              │
└──────────────┘      └──────────────┘      └──────────────┘
```
- Profiles are buffered in memory (max 100 by default)
- Every N seconds (default 300), profiles are sent via HTTP POST
- **Automatic retry with backoff** (up to 3 attempts, with 1s and 2s delays between retries)
- If all retries fail, profiles are kept in buffer for next interval
- **Buffer overflow protection** - discards oldest profiles if buffer is full
- **Agent continues running** even if reporting fails
- Final profiles are sent when the agent exits
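The buffering and retry behaviour described above can be sketched in Python. This is illustrative only: `ProfileReporter`, `add`, and `flush` are hypothetical names, the real agent is not necessarily written in Python, and the injected `send` stands in for the actual HTTP POST.

```python
import collections

class ProfileReporter:
    """Minimal sketch of the buffer-and-flush behaviour described above."""

    def __init__(self, send, max_buffer=100):
        self.send = send
        # FIFO buffer: when full, the oldest profile is discarded automatically
        self.buffer = collections.deque(maxlen=max_buffer)

    def add(self, profile):
        self.buffer.append(profile)

    def flush(self, attempts=3):
        """Try to send the whole buffer; keep it intact if every attempt fails."""
        if not self.buffer:
            return False
        batch = list(self.buffer)
        for _ in range(attempts):
            if self.send(batch):
                self.buffer.clear()   # success: drop sent profiles
                return True
        return False                  # failure: keep profiles for next interval

# Fake backend: down for the first call, up afterwards
calls = {"n": 0}
def flaky_send(batch):
    calls["n"] += 1
    return calls["n"] > 1

r = ProfileReporter(flaky_send, max_buffer=2)
r.add({"id": 1}); r.add({"id": 2}); r.add({"id": 3})  # overflow: {"id": 1} discarded
print(len(r.buffer))        # → 2
print(r.flush(attempts=1))  # → False (profiles kept in buffer)
print(r.flush(attempts=1))  # → True
```

The key design point mirrored here: a failed flush never raises and never drops data, so the profiled application is unaffected.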
### 3. Payload Format
**Default: Protocol Buffers (Protobuf)** - Binary format, 70% smaller than JSON
Profiles are sent in one of two formats:
#### Protobuf (Default, Recommended)
- **Content-Type:** `application/x-protobuf`
- **Size:** ~1 KB per profile (70% smaller than JSON)
- **Performance:** 2-3x faster serialization
- **Best for:** Production, high-frequency profiling, bandwidth optimization
See [PROTOBUF_FORMAT.md](PROTOBUF_FORMAT.md) for full details.
#### JSON (Alternative)
- **Content-Type:** `application/json`
- **Size:** ~3.5 KB per profile
- **Human-readable:** Yes
- **Best for:** Debugging, development, legacy systems
Example JSON payload:
```json
{
  "profiles": [
    {
      "language": "Python",
      "source_file": "server.py",
      "timestamp": "2025-11-08T23:40:00Z",
      "static_analysis": {
        "file_size_bytes": 1024,
        "line_count": 50,
        "function_count": 5,
        "class_count": 2,
        "import_count": 3,
        "complexity_score": 10
      },
      "runtime_analysis": {
        "total_samples": 1000,
        "execution_duration_secs": 10,
        "functions_executed": 8,
        "function_stats": {
          "process_data": {
            "name": "process_data",
            "execution_count": 523,
            "percentage": 52.3,
            "line_number": 42,
            "file_path": "server.py"
          }
        },
        "hot_functions": [
          {
            "rank": 1,
            "name": "process_data",
            "samples": 523,
            "percentage": 52.3
          }
        ]
      }
    }
  ],
  "reported_at": "2025-11-08T23:45:00Z",
  "agent_version": "0.1.0"
}
```
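A backend receiving this payload can sanity-check its shape before storing it. The validator below is an illustrative sketch, not the backend's actual schema check; it only verifies the top-level keys and a few per-profile fields from the example above.

```python
import json

REQUIRED_TOP = {"profiles", "reported_at", "agent_version"}

def validate_payload(raw):
    """Lightweight structural check for the JSON payload shown above."""
    data = json.loads(raw)
    missing = REQUIRED_TOP - data.keys()
    if missing:
        raise ValueError(f"missing top-level keys: {sorted(missing)}")
    for p in data["profiles"]:
        for key in ("language", "source_file", "timestamp"):
            if key not in p:
                raise ValueError(f"profile missing {key!r}")
    return data

payload = (
    '{"profiles": [{"language": "Python", "source_file": "server.py", '
    '"timestamp": "2025-11-08T23:40:00Z"}], '
    '"reported_at": "2025-11-08T23:45:00Z", "agent_version": "0.1.0"}'
)
print(validate_payload(payload)["agent_version"])  # → 0.1.0
```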
### 4. HTTP Headers
```http
POST /api/v0/profiles HTTP/1.1
Host: quality-web-app.pages.dev
Content-Type: application/json
User-Agent: quality-agent/0.1.0
Authorization: Bearer your-api-key
```
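The same request can be built from any HTTP client. As a sketch (using only the Python standard library; `build_report_request` is a hypothetical helper, not part of the agent), here is the JSON-format POST with the headers above:

```python
import json
import urllib.request

def build_report_request(endpoint, payload, api_key=None):
    """Construct the POST request shown above (JSON format).

    Building only — no network I/O; pass the result to
    urllib.request.urlopen() to actually send it.
    """
    headers = {
        "Content-Type": "application/json",
        "User-Agent": "quality-agent/0.1.0",
    }
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )

req = build_report_request(
    "https://quality-web-app.pages.dev/api/v0/profiles",
    {"profiles": []},
    api_key="your-api-key",
)
print(req.get_method())                 # → POST
print(req.get_header("Authorization"))  # → Bearer your-api-key
```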
## Examples
### Python Application
```bash
# Profile Python app, send reports every 2 minutes
./multi_lang_profiler python app.py \
--report-endpoint https://quality-web-app.pages.dev/api/v0/profiles \
--report-interval 120 \
--api-key prod-key-123
```
### Node.js Service
```bash
# Profile Node.js service with environment variables
export PROFILER_ENDPOINT=https://quality-web-app.pages.dev/api/v0/profiles
export PROFILER_API_KEY=staging-key-456
export PROFILER_INTERVAL=600
./multi_lang_profiler typescript server.js
```
### Java Application
```bash
# Profile Java app with custom endpoint
./multi_lang_profiler java MyApp.java \
--report-endpoint https://custom-backend.example.com/profiles \
--api-key java-app-key
```
### Go Service
```bash
# Profile Go service, default settings
./multi_lang_profiler go main.go \
--report-endpoint https://quality-web-app.pages.dev/api/v0/profiles
```
## Use Cases
### 1. Production Monitoring
Deploy agents on production servers to continuously monitor application performance:
```bash
# systemd service
[Unit]
Description=Quality Agent Profiler
After=network.target
[Service]
Type=simple
Environment=PROFILER_ENDPOINT=https://quality-web-app.pages.dev/api/v0/profiles
Environment=PROFILER_API_KEY=prod-secret-key
Environment=PROFILER_INTERVAL=300
ExecStart=/usr/local/bin/multi_lang_profiler python /app/server.py
Restart=always
[Install]
WantedBy=multi-user.target
```
### 2. Performance Testing
Monitor performance during load tests:
```bash
# Start profiler
./multi_lang_profiler python api.py \
--report-endpoint https://quality-web-app.pages.dev/api/v0/profiles \
--report-interval 30 \
--api-key loadtest-key &
# Run load test
ab -n 10000 -c 100 http://localhost:5000/
# Profiles automatically sent every 30 seconds
```
### 3. CI/CD Integration
Profile tests in CI/CD pipelines:
```yaml
# .github/workflows/profile.yml
name: Profile Tests
on: [push]
jobs:
  profile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Profile application
        env:
          PROFILER_ENDPOINT: ${{ secrets.PROFILER_ENDPOINT }}
          PROFILER_API_KEY: ${{ secrets.PROFILER_API_KEY }}
          PROFILER_INTERVAL: 60
        run: |
          ./multi_lang_profiler python tests/benchmark.py
```
### 4. Multi-Service Monitoring
Monitor multiple microservices:
```bash
# Service 1: Python API
./multi_lang_profiler python api.py \
--report-endpoint https://quality-web-app.pages.dev/api/v0/profiles \
--api-key service1-key &
# Service 2: Node.js Gateway
./multi_lang_profiler typescript gateway.js \
--report-endpoint https://quality-web-app.pages.dev/api/v0/profiles \
--api-key service2-key &
# Service 3: Java Worker
./multi_lang_profiler java Worker.java \
--report-endpoint https://quality-web-app.pages.dev/api/v0/profiles \
--api-key service3-key &
```
## Disabling Reporting
By default, reporting is **disabled**. It's only enabled when:
1. `--report-endpoint` is provided as CLI argument, OR
2. `PROFILER_ENDPOINT` environment variable is set
### Running Without Reporting
```bash
# Just profile, don't send anywhere
./multi_lang_profiler python server.py
# Save to local JSON file only
./multi_lang_profiler python server.py --json profile.json
```
## Security Considerations
### API Keys
- Store API keys in environment variables, not in code
- Use different keys for dev/staging/prod
- Rotate keys regularly
- Never commit keys to version control
```bash
# Good: Use environment variable
export PROFILER_API_KEY=$(cat /secure/api-key.txt)
./multi_lang_profiler python app.py --report-endpoint https://...
# Bad: Don't hardcode keys
./multi_lang_profiler python app.py --api-key hardcoded-key-123
```
### HTTPS
- Always use HTTPS endpoints in production
- The default endpoint uses HTTPS: `https://quality-web-app.pages.dev/api/v0/profiles`
### Network Isolation
For security-sensitive environments:
```bash
# Use internal endpoint
export PROFILER_ENDPOINT=https://internal-metrics.corp.example.com/profiles
export PROFILER_API_KEY=$(vault read -field=value secret/profiler-key)
```
## Troubleshooting
### Profiles Not Being Sent
**Check 1: Is reporting enabled?**
```bash
# This will show reporting status
./multi_lang_profiler python app.py --report-endpoint https://...
# Look for: "📡 Starting periodic reporting to: ..."
```
**Check 2: Network connectivity**
```bash
# Test endpoint manually
curl -X POST https://quality-web-app.pages.dev/api/v0/profiles \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-api-key" \
-d '{"test": true}'
```
**Check 3: API key**
```bash
# Verify API key is set
echo $PROFILER_API_KEY
# Test with correct key
./multi_lang_profiler python app.py \
--report-endpoint https://quality-web-app.pages.dev/api/v0/profiles \
--api-key correct-key
```
### Error Messages
**"❌ Failed to report profiles"**
- Check network connectivity
- Verify endpoint URL is correct
- Check API key is valid
- Look at server logs
**"📭 No profiles to report"**
- Application may not be running long enough
- Check if profiling is working (look for profile output)
- Reduce `--report-interval` for testing
**"⚠️ Could not send final profiles"**
- Final send failed but data was collected
- Check logs for specific error
- Profiles will retry on next interval
### Debug Mode
Enable verbose output to see what's happening:
```bash
# The agent shows these indicators:
# 📡 Starting periodic reporting to: <url>
# ⏱️ Report interval: <seconds> seconds
# 📊 Profile added to buffer (total: N)
# 📤 Sending N profiles to backend...
# ⚠️ Attempt 1/3 failed: <error>
# 🔄 Retrying in 1 seconds...
# ✅ Succeeded on attempt 2/3
# ✅ Successfully reported N profiles
# ⚠️ Failed to report profiles after retries: <error>
#    Profiles will be retried in next batch
```
**Retry Behavior:**
- Each send is attempted up to 3 times
- Delays between retries: 1s, then 2s (doubling backoff)
- Failed profiles are kept in buffer
- Next interval will try again
- **Agent never crashes due to reporting failures**
## Reliability & Fault Tolerance
The reporter is designed to be **fail-safe** and never interrupt your application:
### ✅ Guaranteed Behaviors
1. **Agent never crashes** due to reporting failures
2. **Profiling continues** even if backend is down
3. **Profiles are buffered** and retried automatically
4. **No data loss** - failed profiles are kept for next attempt
5. **Graceful degradation** - works offline, reports when reconnected
### 🔄 Retry Strategy
```
Attempt 1: Immediate send
  ❌ fails
Wait 1 second
Attempt 2: Retry
  ❌ fails
Wait 2 seconds
Attempt 3: Final retry
  ❌ fails
Keep in buffer โ Try again at next interval (5 min)
```
### 📉 Example Failure Scenario
```bash
# Backend is down
📤 Sending 1 profiles to backend...
⚠️ Attempt 1/3 failed: HTTP request failed: connection refused
🔄 Retrying in 1 seconds...
⚠️ Attempt 2/3 failed: HTTP request failed: connection refused
🔄 Retrying in 2 seconds...
⚠️ Attempt 3/3 failed: HTTP request failed: connection refused
⚠️ Failed to report profiles after retries: All 3 attempts failed
   Profiles will be retried in next batch
# Your application continues running normally
# Next interval (5 min later), will try again
```
## Buffer Management
### 🗑️ Memory Protection
The agent protects against memory issues during extended outages:
- **Default buffer size:** 100 profiles
- **Configurable:** Use `--report-max-buffer` or `PROFILER_MAX_BUFFER_SIZE`
- **Overflow behavior:** Discards oldest profiles first (FIFO)
- **Memory usage:** ~10-50KB per profile (max ~5MB at default limit)
### 📊 Buffer Overflow Example
```bash
# Configure smaller buffer for testing
./multi_lang_profiler python app.py \
--report-endpoint https://backend.down/api \
--report-max-buffer 5
# After 5 profiles collected:
📊 Profile added to buffer (total: 5)
# 6th profile triggers overflow:
⚠️ Buffer full (5 profiles), discarding oldest profile
📊 Profile added to buffer (total: 5)
# Oldest profile (1) discarded, buffer now has profiles 2-6
```
### ⚖️ Sizing Guidelines
Choose buffer size based on your reporting interval and profile rate:
```
Buffer Size ≈ (Profiles per Hour) × (Outage Hours)
            = (3600 ÷ Report Interval in Seconds) × (Outage Hours)
```
**Examples:**
| Scenario | Profiles/Hour (Interval) | Outage Coverage | Buffer Size |
|----------|--------------------------|-----------------|-------------|
| **Low frequency** | 12 (5 min) | 1 hour | 12-20 |
| **Default** | 12 (5 min) | 8 hours | 100 (default) |
| **High frequency** | 60 (1 min) | 2 hours | 120-150 |
| **Continuous** | 360 (10 sec) | 30 min | 180-200 |
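The table rows follow directly from the formula, assuming one profile per reporting interval. A quick arithmetic check (illustrative helper, not part of the agent):

```python
def buffer_size(interval_secs, outage_hours):
    """Profiles generated during an outage, at one profile per interval."""
    profiles_per_hour = 3600 / interval_secs
    return round(profiles_per_hour * outage_hours)

print(buffer_size(300, 1))   # → 12   (low frequency)
print(buffer_size(300, 8))   # → 96   (≈ the default of 100)
print(buffer_size(60, 2))    # → 120  (high frequency)
print(buffer_size(10, 0.5))  # → 180  (continuous)
```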
**Configuration:**
```bash
# Low frequency (save memory)
--report-max-buffer 20
# Default (balanced)
--report-max-buffer 100 # or omit for default
# High frequency (more resilience)
--report-max-buffer 200
# Very high frequency
--report-max-buffer 500
```
### 🎯 Best Practices
1. **Production systems:** 100-200 profiles (handles ~8-16 hour outage)
2. **Development:** 20-50 profiles (sufficient for testing)
3. **High-throughput:** 200-500 profiles (continuous profiling)
4. **Memory-constrained:** 20-50 profiles (minimize overhead)
### Example Configurations
**Production (default):**
```bash
export PROFILER_MAX_BUFFER_SIZE=100
./multi_lang_profiler python app.py --report-endpoint https://...
```
**High availability:**
```bash
# Tolerate longer outages
./multi_lang_profiler python app.py \
--report-endpoint https://... \
--report-max-buffer 200
```
**Memory-constrained environment:**
```bash
# Minimize memory usage
./multi_lang_profiler python app.py \
--report-endpoint https://... \
--report-max-buffer 20
```
## Performance Impact
- **Memory:** Profiles buffered in memory (~10-50KB per profile)
- Default (100 profiles): ~1-5MB
- Max reasonable (500 profiles): ~5-25MB
- **CPU:** Minimal (<0.1% overhead for reporting thread)
- **Network:** One HTTP POST per interval (1-100KB payload)
- **Latency:** Reporting happens in background thread, no impact on profiled application
- **Failures:** Zero impact - retries happen asynchronously
## Rate Limiting
To avoid overwhelming the backend:
- Default interval: 5 minutes (300 seconds)
- Minimum recommended: 30 seconds
- Maximum buffer size: 1000 profiles; buffered profiles are retried automatically if a send fails
```bash
# High-frequency reporting (use cautiously)
./multi_lang_profiler python app.py --report-interval 30
# Low-frequency reporting (production recommended)
./multi_lang_profiler python app.py --report-interval 600
```
## Custom Backend Implementation
If you're building your own backend to receive profiles, here's what to implement:
### API Endpoint
```
POST /api/v0/profiles
Content-Type: application/json
Authorization: Bearer <api-key>
```
### Request Body
See "Payload Format" section above for the complete JSON schema.
### Response
```json
{
  "status": "success",
  "profiles_received": 1,
  "message": "Profiles accepted"
}
```
### Example Backend (Node.js/Express)
```javascript
const express = require('express');
const app = express();

app.use(express.json());

app.post('/api/v0/profiles', (req, res) => {
  // Verify API key
  const apiKey = req.headers.authorization?.replace('Bearer ', '');
  if (apiKey !== process.env.EXPECTED_API_KEY) {
    return res.status(401).json({ error: 'Invalid API key' });
  }

  // Process profiles
  const { profiles, reported_at, agent_version } = req.body;
  console.log(`Received ${profiles.length} profiles from agent ${agent_version}`);

  // Store in database, send to metrics system, etc.
  // (storeProfile is a placeholder for your own persistence logic)
  profiles.forEach(profile => {
    storeProfile(profile);
  });

  res.json({
    status: 'success',
    profiles_received: profiles.length
  });
});

app.listen(3000);
```
## Changelog
### v0.1.0
- Initial reporting functionality
- Periodic profile sending
- CLI and environment variable configuration
- API key authentication support
- Background reporting thread
- Automatic retry on failure