
# AI Model Comparison

## Current Implementation: Hugging Face (FREE)

### ✅ Advantages

- **100% Free**: no API costs, ever
- **Offline**: works without internet after the initial download
- **Privacy**: all data stays on your server
- **No Rate Limits**: analyze unlimited submissions
- **Open Source**: full transparency

### ⚠️ Considerations

- **First run**: downloads a ~1.5 GB model (one-time)
- **Speed**: ~1-2 seconds per submission on CPU
- **Memory**: needs ~2-4 GB RAM
- **Accuracy**: ~85-90% for clear submissions

### Model Details

- **Model**: `facebook/bart-large-mnli`
- **Type**: zero-shot classification
- **Size**: ~1.5 GB
- **Speed**: fast on CPU, very fast on GPU
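The zero-shot setup described above can be reproduced in a few lines with the `transformers` pipeline. This is a minimal sketch, not the app's actual code: the `top_category` and `classify` helper names are our own.

```python
# Categories used throughout this document.
CATEGORIES = ["Vision", "Problem", "Objectives", "Directives", "Values", "Actions"]

def top_category(result: dict) -> str:
    # The zero-shot pipeline returns labels sorted by score, best first.
    return result["labels"][0]

def classify(message: str) -> str:
    from transformers import pipeline  # heavy import kept local on purpose
    clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    return top_category(clf(message, candidate_labels=CATEGORIES))
```

The first `classify` call triggers the one-time ~1.5 GB model download mentioned above; subsequent calls run entirely locally.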

## Alternative: Anthropic Claude (PAID)

If you want to use Claude instead:

### 1. Install the Anthropic SDK

```bash
pip install anthropic
```

### 2. Update `.env`

```
ANTHROPIC_API_KEY=your_api_key_here
```

### 3. Replace in `app/routes/admin.py`

Remove this import:

```python
from app.analyzer import get_analyzer
```

Add these imports (`os` is needed for the `os.getenv` call below):

```python
import json
import os

from anthropic import Anthropic
```

Replace the analyze function (around line 234):

```python
@bp.route('/api/analyze', methods=['POST'])
@admin_required
def analyze_submissions():
    data = request.json
    analyze_all = data.get('analyze_all', False)

    api_key = os.getenv('ANTHROPIC_API_KEY')
    if not api_key:
        return jsonify({'success': False, 'error': 'ANTHROPIC_API_KEY not configured'}), 500

    client = Anthropic(api_key=api_key)

    if analyze_all:
        to_analyze = Submission.query.all()
    else:
        to_analyze = Submission.query.filter_by(category=None).all()

    if not to_analyze:
        return jsonify({'success': False, 'error': 'No submissions to analyze'}), 400

    CATEGORIES = {'Vision', 'Problem', 'Objectives', 'Directives', 'Values', 'Actions'}
    success_count = 0
    error_count = 0

    for submission in to_analyze:
        try:
            prompt = f"""Classify this participatory planning message into ONE category:

Categories: Vision, Problem, Objectives, Directives, Values, Actions

Message: "{submission.message}"

Respond with JSON: {{"category": "the category"}}"""

            message = client.messages.create(
                model="claude-sonnet-4-20250514",
                max_tokens=100,
                messages=[{"role": "user", "content": prompt}]
            )

            result = json.loads(message.content[0].text.strip())
            category = result.get('category')
            # Only accept a category the app actually knows about
            if category not in CATEGORIES:
                raise ValueError(f'Unexpected category: {category!r}')
            submission.category = category
            success_count += 1

        except Exception:
            error_count += 1
            continue

    db.session.commit()

    return jsonify({
        'success': True,
        'analyzed': success_count,
        'errors': error_count
    })
```
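The `json.loads` call above assumes the model returns bare JSON. In practice, replies sometimes arrive wrapped in prose or markdown code fences, so a more defensive parser saves retries. This is a sketch; `extract_category` is our own helper, not part of the app:

```python
import json
import re

def extract_category(raw, allowed):
    """Pull {"category": ...} out of a model reply, tolerating extra prose
    or markdown code fences around the JSON object."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        return None
    try:
        category = json.loads(match.group(0)).get("category")
    except json.JSONDecodeError:
        return None
    # Reject anything outside the known category set.
    return category if category in allowed else None
```

Inside the loop, `extract_category(message.content[0].text, CATEGORIES)` would then replace the raw `json.loads` call.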

### Claude Pros/Cons

**Advantages:**

- Slightly higher accuracy (~95%)
- Faster per submission (API response time)
- No local compute or memory needed

**Disadvantages:**

- Costs money (~$0.003 per submission)
- Requires an internet connection
- Subject to API rate limits
- Privacy considerations (data is sent to Anthropic)
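At the ~$0.003-per-submission figure quoted above, the cost scales linearly and stays small for typical workloads. A back-of-envelope calculator (the rate is the document's approximation, not a published price):

```python
# Approximate per-submission cost from the list above (USD).
COST_PER_SUBMISSION_USD = 0.003

def estimated_cost(n_submissions: int) -> float:
    """Rough API spend for classifying n submissions, rounded to cents."""
    return round(n_submissions * COST_PER_SUBMISSION_USD, 2)

print(estimated_cost(1000))  # a thousand submissions cost about $3
```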

## Other Free Alternatives

### 1. Groq (Free Tier)

- OpenAI-compatible API
- Free tier: 30 requests/min
- Very fast inference
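Swapping Groq in would mean reusing the same prompt with its chat-completions client. A hedged sketch: `build_prompt` and `classify_with_groq` are our own names, and the model id is an assumption (pick any model currently offered by Groq):

```python
def build_prompt(message: str) -> str:
    # Same prompt shape as the Claude snippet earlier in this document.
    return (
        "Classify this participatory planning message into ONE category:\n\n"
        "Categories: Vision, Problem, Objectives, Directives, Values, Actions\n\n"
        f'Message: "{message}"\n\n'
        'Respond with JSON: {"category": "the category"}'
    )

def classify_with_groq(message: str) -> str:
    from groq import Groq  # pip install groq
    client = Groq()  # reads GROQ_API_KEY from the environment
    reply = client.chat.completions.create(
        model="llama-3.1-8b-instant",  # assumption: substitute a current Groq model
        max_tokens=100,
        messages=[{"role": "user", "content": build_prompt(message)}],
    )
    return reply.choices[0].message.content
```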

### 2. Together AI (Free Credits)

- $25 free credits monthly
- Various open-source models

### 3. Local Llama Models

- Use Ollama or llama.cpp
- Slower, but powerful
- Needs more RAM (8 GB+)
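With Ollama running locally, classification is one POST to its generate endpoint on the default port 11434. A sketch using only the standard library; the helper names are ours, and `llama3` stands in for whichever model you have pulled:

```python
import json
from urllib import request

def build_payload(message: str, model: str = "llama3") -> dict:
    # model is whatever you've pulled locally, e.g. `ollama pull llama3`.
    return {
        "model": model,
        "prompt": (
            "Classify into ONE of: Vision, Problem, Objectives, Directives, "
            f'Values, Actions.\n\nMessage: "{message}"\n\n'
            'Respond with JSON: {"category": "the category"}'
        ),
        "stream": False,  # ask for a single JSON reply instead of a token stream
    }

def classify_with_ollama(message: str, model: str = "llama3") -> str:
    req = request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=json.dumps(build_payload(message, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```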

## Recommendation

**For most users**: stick with Hugging Face (current implementation)

- Free forever
- Good accuracy
- Privacy-focused
- No API complexity

**For mission-critical use**: Anthropic Claude

- Higher accuracy
- Professional support
- Worth the cost for important decisions

**For developers**: try the Groq free tier

- Fast
- Free (with limits)
- Easy to integrate