# ALMA Architecture

## The IN-1 Triangle

```
CONVERSATIONALIST (Haiku 4.5)
         ↓
  ANALYZER (Sonnet 3.5)
         ↓
    POET (Sonnet 3.5)
```

## Models and Parameters
| Component | Model | Temperature | Max Tokens | Purpose |
|---|---|---|---|---|
| Conversationalist | claude-haiku-4.5 | 0.4 | 200 | Conversational presence |
| ALMA Analyzer | claude-3-5-sonnet | 0.3 | 600 | Deep emotional analysis |
| Poet | claude-3-5-sonnet | 0.7 | 150 | Poetry generation |
### Why these models?

- Haiku 4.5 for conversation: fast, cheap, empathetic
- Sonnet 3.5 for analysis: emotional depth, shadow/light recognition
- Sonnet 3.5 for poetry: creativity, higher temperature for improvisation
Additional parameters (all calls):

- `top_p`: 0.9
- API: OpenRouter (https://openrouter.ai/api/v1/chat/completions)
- Headers: `HTTP-Referer`, `X-Title`, `Authorization`
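Since all three calls share the endpoint, headers, and `top_p`, they can be funneled through one helper. A minimal sketch, assuming a hypothetical `buildPayload`/`callOpenRouter` pair (the project's actual code calls `fetch` directly in each module; the referer value here is a placeholder):

```javascript
// Illustrative shared helper for the three OpenRouter calls.
// Model/temperature/max_tokens values come from the table above; the
// helper names and the referer value are assumptions, not project code.
const OPENROUTER_URL = 'https://openrouter.ai/api/v1/chat/completions';

function buildPayload(model, systemPrompt, messages, { temperature, maxTokens }) {
  return {
    model,
    messages: [{ role: 'system', content: systemPrompt }, ...messages],
    temperature,
    max_tokens: maxTokens,
    top_p: 0.9, // shared across all calls
  };
}

async function callOpenRouter(apiKey, payload) {
  const res = await fetch(OPENROUTER_URL, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'HTTP-Referer': 'https://example.com', // placeholder referer
      'X-Title': 'ALMA',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```

For example, the conversation call would pass `('anthropic/claude-haiku-4.5', CONVERSATION_PROMPT, history, { temperature: 0.4, maxTokens: 200 })`.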
## Complete Flow

### Step 1: Safety Check

```js
// emotionAnalyzer.js - checkSafety()
const SAFETY_KEYWORDS = [
  'suicid', 'uccid', 'ammazzar', 'farla finita', 'autolesion',
  'tagliar', 'morti', 'abuse', 'violen', 'picchia'
];
```

If a critical keyword is found → support message + helpline.
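The gate itself can be as simple as a substring scan over the lowercased message. A sketch consistent with the keyword list above (the actual `checkSafety()` in `emotionAnalyzer.js` may differ in detail):

```javascript
// Sketch of a keyword-based safety gate; illustrative, not the exact
// checkSafety() body from emotionAnalyzer.js.
const SAFETY_KEYWORDS = [
  'suicid', 'uccid', 'ammazzar', 'farla finita', 'autolesion',
  'tagliar', 'morti', 'abuse', 'violen', 'picchia',
];

function checkSafety(userMessage) {
  const text = userMessage.toLowerCase();
  // Returns the matched keyword (truthy → caller sends the helpline
  // response instead of continuing), or undefined when the message is safe.
  return SAFETY_KEYWORDS.find((kw) => text.includes(kw));
}
```

Because the keywords are stems (`'suicid'`, `'violen'`), a single entry covers several inflected Italian forms.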
### Step 2: Conversation Loop (max 5 turns)

```js
// in1.js - Line 197-212
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  body: JSON.stringify({
    model: 'anthropic/claude-haiku-4.5',
    messages: [
      { role: 'system', content: CONVERSATION_PROMPT },
      ...conversationHistory
    ],
    temperature: 0.4,
    max_tokens: 200,
    top_p: 0.9
  })
});
```

During each turn:

- Haiku 4.5 responds with presence (no judgment, no advice)
- `detectEmotion(userMessage)` analyzes every user message (V1 pattern matching)
- The emotion buffer accumulates detected emotions

Exit triggers:

- Haiku responds with `[READY_FOR_DONO]` (it senses connection)
- Or `MAX_TURNS: 5` is reached
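The two exit triggers reduce to a small predicate over the model's reply and the turn counter. A sketch with a hypothetical `shouldEndConversation` helper (`in1.js` may structure this differently):

```javascript
// Illustrative exit-trigger check for the conversation loop. The
// [READY_FOR_DONO] marker and MAX_TURNS value come from the docs above;
// the helper functions are assumptions, not actual in1.js code.
const MAX_TURNS = 5;
const READY_MARKER = '[READY_FOR_DONO]';

function shouldEndConversation(assistantReply, turnCount) {
  if (assistantReply.includes(READY_MARKER)) return true; // model senses connection
  return turnCount >= MAX_TURNS; // hard cap on turns
}

// The marker is internal and should be stripped before display.
function stripReadyMarker(assistantReply) {
  return assistantReply.replace(READY_MARKER, '').trim();
}
```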
### Step 3: ALMA V2 Analyzer

```js
// in1.js - Line 256-293
const emotionData = await analyzeEmotionWithRetry(messages, apiKey, 2);

async function analyzeEmotion(messages, apiKey) {
  const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    body: JSON.stringify({
      model: 'anthropic/claude-3-5-sonnet',
      messages: [
        { role: 'system', content: EMOTION_ANALYZER_PROMPT },
        ...messages
      ],
      temperature: 0.3,
      max_tokens: 600,
      top_p: 0.9
    })
  });

  return JSON.parse(response.content); // Returns emotion analysis
}
```

ALMA V2 output:

```json
{
  "emotion": "tristezza",
  "intensity": 8,
  "shadow": "l'assenza che pesa",
  "light": "la capacità di ricordare",
  "reframe": "Anche il vuoto è una forma",
  "nature_raw": ["nebbia fitta", "ramo nudo", "acqua ferma"],
  "nature_transformed": ["nebbia che si solleva", "ramo in attesa", "acqua che riflette"]
}
```

Retry logic: 2 attempts with a 500ms wait between them.

Fallback: if ALMA V2 fails → use the V1 emotion buffer + `translateToNature()`
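The retry behavior (2 attempts, 500ms apart, throwing to the fallback path on final failure) can be sketched as a generic wrapper. Illustrative only; the actual `analyzeEmotionWithRetry()` may be shaped differently:

```javascript
// Generic retry wrapper matching the documented behavior: up to `attempts`
// tries with a fixed delay between them. An assumption about structure,
// not a copy of emotionAnalyzer.js.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(fn, attempts = 2, delayMs = 500) {
  let lastError;
  for (let i = 0; i < attempts; i += 1) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) await sleep(delayMs); // wait before retrying
    }
  }
  throw lastError; // caller falls back to the V1 emotion buffer
}
```

Usage would look like `const emotionData = await withRetry(() => analyzeEmotion(messages, apiKey), 2, 500);`.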
### Step 4: Poet (MODEL B)

```js
// almaRhama.js - generateBashoPoetry()
const response = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  body: JSON.stringify({
    model: 'anthropic/claude-3-5-sonnet',
    messages: [
      { role: 'system', content: BASHO_POET_PROMPT_IT },
      { role: 'user', content: `Natura grezza: ${natureRaw}
Natura trasformata: ${natureTransformed}
Reframe: "${emotionData.reframe}"
Luce nascosta: ${emotionData.light}

Componi 3 righe:` }
    ],
    temperature: 0.7,
    max_tokens: 150,
    top_p: 0.9
  })
});
```

Output: 3 lines of modern Italian/English poetry.
### Step 5: Cleanup

```js
// Remove dashes, markdown, extra whitespace
rhama = rhama.replace(/^[—\-]\s*/gm, '');
rhama = rhama.replace(/\*\*/g, '');
rhama = rhama.trim();
```

### Step 6: Save to Database

```js
// in1.js - Line 286-290
const insertResult = await pool.query(`
  INSERT INTO in1.rhamas (
    rhama_text, original_language, translations, created_at
  ) VALUES ($1, $2, $3::jsonb, NOW())
  RETURNING id
`, [rhama, userLang, JSON.stringify(translations)]);
```

### Step 7: Display to User

The frontend receives the Rhama plus a morphing loader animation.
## Fallback Chain

### Scenario 1: ALMA V2 Analyzer Fails

```
ALMA V2 FAILS (timeout/error)
  ↓
Use emotion buffer (detectEmotion results)
  ↓
Find dominant emotion (highest intensity)
  ↓
translateToNature(emotion, intensity)
  ↓
Call Poet with V1 format (nature array only)
  ↓
Generate Rhama
```

### Scenario 2: No Emotion Detected

```
No clear emotion in buffer
  ↓
Use NATURE_MAP['neutro']
  ↓
Generate Rhama with neutral elements
```

### Scenario 3: Gentle End (No Rhama)

```
After 5 turns, user not engaged
  ↓
Skip ALMA analyzer
  ↓
Return gentle closing message
  ↓
No Rhama generated (respects the user's space)
```
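Scenario 1's "find dominant emotion" step amounts to a reduction over the V1 buffer followed by a map lookup. A sketch under stated assumptions: the buffer shape (`{ emotion, intensity }` entries) and the `NATURE_MAP` entries shown here are illustrative stand-ins for the real structures in `in1.js` and `almaRhama.js`:

```javascript
// Illustrative V1 fallback: pick the highest-intensity emotion from the
// buffer, then map it to nature imagery. Buffer shape and map entries
// are assumptions, not the project's actual data.
function dominantEmotion(buffer) {
  if (buffer.length === 0) return { emotion: 'neutro', intensity: 0 };
  return buffer.reduce((best, cur) => (cur.intensity > best.intensity ? cur : best));
}

// Stand-in for the real NATURE_MAP (almaRhama.js, lines 17-46).
const NATURE_MAP = {
  tristezza: ['nebbia fitta', 'ramo nudo', 'acqua ferma'],
  neutro: ['cielo aperto', 'sentiero', 'vento leggero'],
};

function translateToNature(emotion) {
  // Unknown emotions fall through to the neutral elements (Scenario 2).
  return NATURE_MAP[emotion] ?? NATURE_MAP['neutro'];
}
```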
## File Locations

### Backend Core

`E:\BLACKTRAILS-PLATFORM\src\public\routes\in1.js`

- Main API endpoint: `POST /api/rhama`
- Lines 131-430: Dual-LLM orchestration
- Lines 197-212: MODEL A call
- Lines 256-293: ALMA V2 analyzer call
- Lines 266-355: Fallback logic
- Lines 286-290: Database save

`E:\BLACKTRAILS-PLATFORM\src\public\utils\almaRhama.js`

- Lines 10-11: MODEL_A, MODEL_B constants
- Lines 17-46: NATURE_MAP
- Lines 51-104: CONVERSATION_PROMPT
- Lines 106-194: BASHO_POET_PROMPT_IT
- Lines 196-284: BASHO_POET_PROMPT_EN
- Lines 292-305: detectLanguage()
- Lines 315-364: detectEmotion()
- Lines 373-382: translateToNature()
- Lines 390-393: extractReply()
- Lines 415-486: generateBashoPoetry()

`E:\BLACKTRAILS-PLATFORM\src\public\utils\emotionAnalyzer.js`

- Lines 10-22: SAFETY_KEYWORDS
- Lines 24-33: SAFETY_RESPONSE
- Lines 35-46: checkSafety()
- Lines 52-196: EMOTION_ANALYZER_PROMPT
- Lines 204-256: analyzeEmotion()
- Lines 265-287: analyzeEmotionWithRetry()

### Frontend

`E:\BLACKTRAILS-PLATFORM\src\public\public\js\in1\rhama.js`

- Lines 80-110: callRhamaAPI()
- Lines 116-178: startExperience()
- Lines 180-253: handleChatSubmit()
- Lines 42-74: Session recovery (F5 protection)

`E:\BLACKTRAILS-PLATFORM\src\public\public\js\in1\chat-ui.js`

- Lines 42-108: Loader morphing system
- Lines 141-187: typeWriterAI()
- Lines 233-284: showRhamaFinale()

## Session State Management
### Session Storage (F5 Recovery)

```js
sessionStorage.setItem('in1-session', JSON.stringify({
  conversationHistory,
  userEmotions,
  turnCount,
  hasRhama,
  timestamp: Date.now()
}));
```

Recovery: if the user presses F5, the session is restored from `sessionStorage`.
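The restore path is the mirror of the save above: parse the stored JSON and discard it if it is stale or corrupted. A sketch assuming the 24h expiry mentioned under Data Privacy; the hypothetical `restoreSession` helper is not the actual `rhama.js` code:

```javascript
// Illustrative F5-recovery restore. The 24h cutoff mirrors the
// "cleared after 24h inactivity" privacy rule; helper name and shape
// are assumptions about the real session-recovery code.
const SESSION_MAX_AGE_MS = 24 * 60 * 60 * 1000;

function restoreSession(raw, now = Date.now(), maxAgeMs = SESSION_MAX_AGE_MS) {
  if (!raw) return null; // nothing saved: start fresh
  try {
    const state = JSON.parse(raw);
    if (now - state.timestamp > maxAgeMs) return null; // stale: start fresh
    return state;
  } catch {
    return null; // corrupted payload: start fresh
  }
}

// In the browser:
//   const state = restoreSession(sessionStorage.getItem('in1-session'));
```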
## Performance Considerations

### LLM Calls per Session
Section titled “LLM Calls per Session”- Normal flow: 3 calls (Conversation → Analyzer → Poet)
- Fallback flow: 2 calls (Conversation → Poet with V1 emotions)
- Gentle end: 1 call (Conversation only)
### Cost Optimization

- Haiku 4.5 for conversation (cheap, fast)
- Sonnet 3.5 only for deep analysis and poetry (expensive, slow)
- Max tokens capped (200/600/150) to reduce costs
### Timeout Handling

- Analyzer: 2 retry attempts with a 500ms wait
- Frontend: 30s timeout per API call
- Graceful fallback to V1 emotions if the analyzer fails
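The 30s frontend budget can be enforced by racing the request against a timer. A sketch with a generic `withTimeout` helper; this is an assumption about approach, since `rhama.js` may instead use `AbortController` or another mechanism:

```javascript
// Generic timeout race matching the 30s frontend budget described above.
// Illustrative, not the actual callRhamaAPI() implementation.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  // Clear the timer either way so nothing is left pending.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage sketch:
//   const res = await withTimeout(fetch('/api/rhama', { method: 'POST', body }), 30000);
```

Note that a plain race does not cancel the underlying request; an `AbortController` would, which is one reason a real implementation might prefer it.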
## Security

### Safety Layer

- Pre-conversation check: crisis keyword detection
- No prompt injection: system prompts isolated from user input
- User isolation: database queries scoped to `user_id`
- Rate limiting: (TODO) 5 sessions per hour per IP
### Data Privacy

- Conversations are not stored in the database (only the final Rhama)
- Session storage cleared after 24h of inactivity
- No PII collected beyond conversation context
This architecture represents 6+ months of iterations, failures, and discoveries. ALMA is not a chatbot: it is a presence that transforms dialogue into embodied nature. 🌲