A changelog of anky: every change, traced back to the human prompt that caused it.
All text inference now follows a local-first fallback chain: Mind (llama-server qwen3.5-27b on GPU 0) → Claude Haiku → OpenRouter. Added 8 Kingdoms system mapping each anky to a chakra/element with unique image prompt flavors. Redis job persistence with crash recovery. Retry watchdog gets exponential backoff (was unlimited). New /api/v1/mind/status endpoint.
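The fallback chain and the capped backoff can be sketched as follows. This is a minimal illustration, not the actual codebase: the `Provider` enum, `fallback`, and `backoff_secs` names are assumptions, as is the specific cap.

```rust
// Illustrative sketch of the local-first fallback chain and the retry
// watchdog's capped exponential backoff. Names are hypothetical.

#[derive(Debug, PartialEq, Clone, Copy)]
enum Provider {
    Mind,        // local llama-server on GPU 0
    ClaudeHaiku, // cloud fallback
    OpenRouter,  // last resort
}

/// Next provider to try when the current one fails; None = give up.
fn fallback(p: Provider) -> Option<Provider> {
    match p {
        Provider::Mind => Some(Provider::ClaudeHaiku),
        Provider::ClaudeHaiku => Some(Provider::OpenRouter),
        Provider::OpenRouter => None,
    }
}

/// Exponential backoff for the retry watchdog: 2^attempt seconds,
/// capped so retries are bounded instead of unlimited.
fn backoff_secs(attempt: u32, cap_secs: u64) -> u64 {
    (1u64 << attempt.min(16)).min(cap_secs)
}
```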
read the prompt #Built the programming class system: text+code slides rendered as HTML at /classes/{n} with 60s auto-advance, keyboard nav, progress bar. Class #1, "intent classification with llms", teaches the pattern of replacing string heuristics with Claude Haiku. Also upgraded community question detection to Haiku-powered intent classification. Updated CLAUDE.md: documentation is first-class, classes are mandatory per session.
read the prompt #When someone tags Anky in a cast that poses a question to their followers, Anky now detects the community question and replies with a reframed writing prompt + a link to write for 8 minutes. Claude reframes the question into something deeper and personal. Regular mentions still get normal replies.
read the prompt #Generated images are now converted to WebP and pushed to Cloudflare R2 immediately after generation, so clients fetch from CDN edge instead of Poiesis. New .anky story format (YAML frontmatter + page blocks) assembles CDN URLs with reflection text and is stored in the DB. Exposed via GET /api/v1/anky/{id} as anky_story field.
read the prompt #When Anthropic API credits are depleted, reflections now fall back to OpenRouter (Haiku via third-party). If all providers fail, users see a beautiful session summary with their writing stats, full text, and a retry button instead of a dead-end error message. Also fixed pre-existing compilation errors (dead code, missing config field, stale import).
read the prompt #AnkyMirrors.sol ERC-721: 4444 supply, 1 USDC, 1 per FID, EIP-712 backend signature. Miniapp landing restyled with gold Cinzel buttons matching ankycoin.com website. Mint gate before share. Backend signing endpoint + ERC-721 metadata API.
read the prompt #Anky detail page now flows as three sections: "who the world sees" (image), "your anky" (reflection), and "talk with this anky" (inline conversation). Chat open to all logged-in users, not just owners. Anky's replies nudge toward writing. If you type continuously for 30 seconds in the chat, ANKY MODE activates: fullscreen writing with idle bar, timer, and chakra. Text carries over.
read the prompt #When visiting /you inside a Farcaster miniapp, users now see their Farcaster PFP, display name, and username instead of the login prompt. Plus a download button for the iOS app and a back-to-writing link.
read the prompt #Anky now replies with exactly two lines: an observation and a question. No more verbose threads. Occasionally invites users to write with a personalized prompt link.
read the prompt #The Forge is now the first section after the hero. Latest generated Anky image loads on page open with its prompt displayed below. Supply corrected to 100M. Community nav replaced with The Forge. Full mobile optimization: responsive nav, stacked CTAs, touch-friendly controls, smaller coin ring.
read the prompt #Added "Summon Your Anky" section to ankycoin.com β users type a prompt, pick an aspect ratio (1:1, 9:16, 16:9), and generate an Anky image via the Flux pipeline. New /api/v1/ankycoin/generate endpoint. OG and Twitter Card metadata now use a real Anky image. $ANKY Solana CA displayed with click-to-copy.
read the prompt #ankycoin.com now detects whether you're inside a Farcaster miniapp or a regular browser. Non-miniapp visitors see a full scrollable landing page with star field, lore, pillars, coin section, and manifesto, using the existing Anky images (desktop + mobile) instead of the old SVG character. Miniapp visitors still get the mirror flow.
read the prompt #Claude now generates 4 new visual descriptors per user (held_object, background_scene, clothing_detail, symbolic_marking) so each Anky image is genuinely unique to the person. /image.png on ankycoin.com dynamically serves the latest mirror with PFP overlay composited server-side. Farcaster frame auto-updates.
read the prompt #Repeat mirror lookups now return cached results instantly (21ms vs 35s). New chat endpoint lets you talk to any mirror's anky; it speaks from the mirror context with multi-turn memory. Regenerate button to force a fresh mirror. Chat available on both result and gallery detail views.
read the prompt #FID input on landing to mirror any Farcaster user. Gallery view showing all generated mirrors with PFP overlays, click-through to detail. Mirrors now persist to SQLite + disk images.
read the prompt #New GET /api/mirror?fid= endpoint: fetches Farcaster profile + casts from Neynar, analyzes PFP with Claude vision, generates a "public mirror" portrait via Claude Sonnet, and produces a unique Anky image via ComfyUI. The ankycoin.com miniapp now resolves FID from Farcaster SDK context, shows animated loading states, displays a two-panel result (mirror text + anky image), and includes a share button. Dev mode via ?dev=true.
read the prompt #Serve ankycoin.com from the same Rust server via host-based routing. Full-bleed background image (desktop/mobile variants), 60% black overlay, centered tagline, and CTA button. Cloudflare tunnel ingress updated for the new domain.
read the prompt #Landing page now loads conversation history from /api/chat-history: returning users see their full timeline with session dividers, writing previews, reflections, and follow-up messages. First-time users see a random prompt from a pool of 8. The writing prompt in the textarea matches what Anky shows. Login page (/login) completely replaced: seed phrase removed, now shows clean email + Apple login via Privy.
read the prompt #Removed the old landing page copy entirely. anky.app now loads directly into a chat interface where Anky introduces itself and a "write now" button enters 8-minute writing mode. Navbar hides during writing, reappears after send. Profile page (/you) replaced seed phrase login with Privy email/Apple login. New web endpoints /api/me and /api/my-ankys use cookie auth instead of Bearer tokens; no more swift endpoints on the website.
read the prompt #The /generate endpoint was ignoring the aspect ratio picker for Flux generations, always producing 1024x1024. Now 1:1, 16:9, and 9:16 are passed through to ComfyUI with correct dimensions (1024x1024, 1344x768, 768x1344).
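The aspect-ratio fix above reduces to a small mapping. A sketch with assumed names (`flux_dimensions` is illustrative); the dimension values are the ones stated in the entry:

```rust
/// Aspect ratio picker value → ComfyUI (width, height), matching the
/// dimensions described above. Unknown ratios are rejected rather than
/// silently defaulting to 1024x1024 (the old bug).
fn flux_dimensions(ratio: &str) -> Option<(u32, u32)> {
    match ratio {
        "1:1" => Some((1024, 1024)),
        "16:9" => Some((1344, 768)),
        "9:16" => Some((768, 1344)),
        _ => None,
    }
}
```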
read the prompt #Replaced the old "user bubble + anky bubble + write again button" post-writing flow with a proper chat interface. After anky responds (quick feedback or streamed reflection), a chat input appears so the user can continue the conversation. User's writing is shown collapsed. Chat uses /api/chat (for ankys) or /api/chat-quick (for short writes) to keep talking.
read the prompt #Fixed reflection streaming getting stuck after writing. Root cause: the SSE handler blocked on DB lock (held by image pipeline) before sending any HTTP headers, so the browser's EventSource never established. Fix: SSE headers are now sent immediately (DB lookup + Claude call happen inside the stream's spawned task). Also added /api/warm-context endpoint called at minute 6 of writing to pre-build Honcho context, so reflection generation starts with zero context-fetch delay.
read the prompt #Removed duplicate Claude API call that fired on every anky submission (one background blocking call + one SSE streaming call competing). Now only the SSE streaming call runs, so reflections start streaming instantly. Also made memory context building concurrent (local + Honcho in parallel) with a tighter 3s timeout instead of two sequential 5s timeouts.
read the prompt #Replaced every Ollama/qwen3.5 call with Claude Haiku (cloud). All ~20 call sites: quick feedback, deep reflections, suggested replies, image prompt gen, writing formatting, mention classification, prompt classification, cuentacuentos stories + translations, memory extraction, psychological profiles, chat conversations, system summaries, recovery watchdog. Dropped local embeddings entirely; Honcho handles semantic context. Local GPUs now dedicated exclusively to Flux image generation. Net result: faster responses (~200ms vs 30-120s), no more GPU contention, simpler architecture.
read the prompt #New /flux-lab interface for batch-generating images via local ComfyUI Flux + Anky LoRA. Paste an array of prompts (one per line), hit generate, and images are produced sequentially and stored in flux/experiment-N folders. Left sidebar lets you browse all past experiments and view their image grids with a click-to-zoom lightbox.
read the prompt #Added a "generate image (flux β local gpu)" panel to the media factory. Uses the local ComfyUI Flux + Anky LoRA pipeline on poiesis GPUs β free, unlimited, no API key needed.
read the prompt #Added subtle "start writing. do it for 8 minutes." below the ANKY mark on the landing page. Typing or tapping anywhere on the page dissolves the landing content with a melt animation and reveals the full writing interface inline β same session logic as /write, no redirect needed. First-time visitors can go straight to writing.
read the prompt #Fixed critical bug where the SSE "done" event was delayed because a cloned sender kept the channel alive after Claude finished streaming. Added animated progress labels ("anky is sitting with what you wrote...", "anky is finding the right words...") so the user feels movement while waiting. Added extensive console.log throughout the post-writing flow for debugging. SSE keep-alive pings no longer pollute the reflection text.
read the prompt #Five-layer fix so that writing reflections always reach the user: (1) If Claude streaming fails or times out (60s), Ollama generates a fallback reflection in the same SSE response. (2) A watchdog runs every 5 minutes and recovers complete ankys missing reflections (one at a time to avoid saturating Ollama). (3) Frontend retries the SSE connection up to 3 times with backoff. (4) Ollama embedding calls now have a 10s HTTP timeout so they can't hang indefinitely. (5) Added tracing throughout the reflection pipeline for diagnosing future failures.
read the prompt #Added /.well-known/apple-app-site-association so iOS intercepts anky.app/write?p=UUID links and opens the app directly. New GET /swift/v2/prompt/{id} endpoint returns prompt text for the deep link flow; no auth required so the app can fetch the prompt immediately on launch.
read the prompt #Transformed the post-writing experience from a raw text dump into a conversation. User's writing now appears as a scrollable chat bubble, and anky's response shows as a separate bubble with animated thinking dots ("anky is reading your writing...") while the server processes. Streaming reflection fills into anky's bubble naturally. Trust-building moment after 8 sacred minutes.
read the prompt #Removed fc:miniapp metadata from base.html β anky is not a miniapp anymore. Prompt sharing now uses UUID-based links (/write?p={uuid}) instead of URL-encoded text. New POST /api/v1/prompt/quick endpoint creates a free shareable prompt stored in the DB. The /prompt page now hits this API and generates clean UUID links.
read the prompt #Anky now splits long replies into threaded slides on X (280 char limit) and Farcaster (1024 char limit); each slide is a sharp, self-contained thought. The /you profile page is now visual: default anky PFP, username, bio, and a 3-per-row Instagram-style grid of the user's ankys. Toggle to list view shows each anky with its title and reflection.
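The slide-splitting above can be sketched as a greedy word-boundary splitter. This is an assumption about mechanics, the real implementation aims for self-contained thoughts per slide, not just length; the `slides` name is illustrative:

```rust
/// Greedy sketch: split a long reply into slides that each fit the
/// platform limit (280 for X, 1024 for Farcaster), breaking on word
/// boundaries. A single word longer than the limit becomes its own
/// oversized slide.
fn slides(text: &str, limit: usize) -> Vec<String> {
    let mut out = Vec::new();
    let mut cur = String::new();
    for word in text.split_whitespace() {
        let needed = if cur.is_empty() {
            word.len()
        } else {
            cur.len() + 1 + word.len()
        };
        if needed > limit && !cur.is_empty() {
            out.push(std::mem::take(&mut cur));
        }
        if !cur.is_empty() {
            cur.push(' ');
        }
        cur.push_str(word);
    }
    if !cur.is_empty() {
        out.push(cur);
    }
    out
}
```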
read the prompt #Users can now pick which AI model speaks as anky from the settings page (/settings). Six free OpenRouter models available: llama 4 scout (default), llama 4 maverick, gemma 3, qwen 3, deepseek r1, and gemini 2.5 pro. Added preferred_model to user_settings DB, wired through the writing response pipeline. Settings link now visible on the /you profile page.
read the prompt #Enter key now sends the writing when paused (instead of being blocked). Removed all hardcoded responses β even very short writings (<10 words) now go through qwen3.5 with anky's voice, so every response is alive and contextual. Added ollama_light_model config for future small-model support. Fixed .env model name mismatch (was qwen3.5:27b, actually running qwen3.5:35b).
read the prompt #Moved the 8-minute countdown timer and chakra bar to the bottom of the viewport. Added a prominent 8-second idle bar at the top that becomes visible after 3 seconds of not typing. When idle hits 8 seconds the timer freezes and a "send" button appears β but the user can keep typing to resume the session from where they left off. No more auto-end, no more overlay.
read the prompt #Timer now counts down from 8:00 to 0:00, then counts up with a + prefix when the user keeps going past 8 minutes. Textarea fits within the visible viewport above the mobile keyboard using visualViewport API. Enter key is blocked β writing is one continuous stream with no line breaks.
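The countdown/count-up behavior above is a pure formatting function. A minimal sketch (the `clock` name and seconds-based signature are assumptions):

```rust
/// Session clock: counts down from 8:00 to 0:00, then counts up with a
/// "+" prefix once the writer passes 8 minutes.
fn clock(elapsed_secs: u64) -> String {
    const SESSION_SECS: u64 = 8 * 60;
    let (prefix, t) = if elapsed_secs <= SESSION_SECS {
        ("", SESSION_SECS - elapsed_secs) // counting down
    } else {
        ("+", elapsed_secs - SESSION_SECS) // counting up past 8 minutes
    };
    format!("{}{}:{:02}", prefix, t / 60, t % 60)
}
```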
read the prompt #Removed the custom on-screen keyboard and ended-session overlay from the writing interface. Writing now uses a native textarea (phone keyboard). When a session ends, the full writing stays visible and anky's reply streams in below it: no trimming, no hiding, no extra clicks. The anky detail page also always shows the full writing instead of collapsing it behind a toggle.
read the prompt #Added the full Anky Manifesto as a markdown route at anky.app/manifesto.md β ten sections on what writing is, why Anky exists, and the core invitation.
read the prompt #Replaced the home page with a cinematic landing page (IM Fell English typography, grain overlay, candlelight glow, custom cursor, "the way" modal with 10 commandments). Writing surface moved to /write with prompt query param support β anyone can send someone a prompt via /write?prompt=... The /prompt page lets users write a prompt and generate a shareable link instantly, no payment needed.
read the prompt #Completely redesigned the web experience. Home page stripped to pure writing surface (prompt + write button + recent threads). After writing, user redirects to /anky/{id} which streams the reflection via SSE and shows the conversation thread below. Each anky is now a thread: reflection at top, chat history, suggested replies, custom reply input. Conversation UI moved from /writings (now a simple thread list) into the anky detail page where it belongs.
read the prompt #Fixed Content-Security-Policy blocking Google Fonts and Cloudflare insights (added to connect-src). Wired /api/live-status SSE route that was causing 404 spam. Rewrote the writing experience: full-screen overlay when writing starts, visible 8-second idle countdown bar, session no longer auto-sends; it shows an ended screen with stats and a send/discard choice. Anky's reflection now appears before any mint button.
read the prompt #Added minting endpoints for the Anky ERC1155 contract on Base. prepare-mint verifies eligibility, signs EIP-712 BirthPayload, estimates gas, funds the user's wallet with ETH, and returns everything the iOS app needs to submit the birthSoul tx. confirm-mint verifies the receipt and parses the SoulBorn event for the token ID. Public metadata endpoint serves ERC1155-compliant JSON. Rate limited to 1 mint per wallet per hour, 5-minute signature deadline.
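The one-mint-per-wallet-per-hour rule above can be sketched as a small in-memory limiter. This is an illustration under assumptions, the real endpoint may track this in the DB rather than memory, and `MintLimiter` is a hypothetical name:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// In-memory rate limiter: one mint per wallet per hour.
struct MintLimiter {
    last: HashMap<String, Instant>,
    window: Duration,
}

impl MintLimiter {
    fn new() -> Self {
        Self {
            last: HashMap::new(),
            window: Duration::from_secs(3600),
        }
    }

    /// Returns true if the wallet may mint now, recording the attempt.
    fn allow(&mut self, wallet: &str, now: Instant) -> bool {
        match self.last.get(wallet) {
            Some(&t) if now.duration_since(t) < self.window => false,
            _ => {
                self.last.insert(wallet.to_string(), now);
                true
            }
        }
    }
}
```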
read the prompt #POST /write now spawns a Claude response that proves it read the writing using Honcho peer context. Returns ankyResponse, nextPrompt, mood (async; poll status). New GET /swift/v2/chat/prompt endpoint returns personalized opening message on app launch. First user gets "tell me who you are." Returning users get a Honcho-powered question that picks up where they left off.
read the prompt #anky now remembers who it's talking to on x and farcaster. honcho peer context and interaction history are injected into every social reply. unified identity prompt enforces lowercase always, platform-aware char limits, and consistent voice across both platforms. new social_peers table maps social handles to honcho peer IDs for cross-platform memory. all hardcoded reply strings converted to lowercase.
read the prompt #Local F5-TTS service on GPU 0 with cross-lingual voice cloning from a single reference clip. Same voice identity across EN/ES/ZH/HI/AR; only rhythm and pitch change per language. New cuentacuentos_audio table, TTS service module, R2 upload, GPU job queue integration. Stories auto-generate audio after translations complete. GET /voice falls back to TTS when no human recording exists. All 10 stories now have 48 audio tracks (~3 min narration each).
read the prompt #Device token registration (POST/DELETE /swift/v2/devices) with upsert on user+platform. Daily cron at 5:30 AM UTC generates personalized notification messages via Claude Haiku using each user's psychological profile, then sends via APNs (a2 crate). Configurable via APNS_KEY_PATH, APNS_KEY_ID, APNS_TEAM_ID, APNS_BUNDLE_ID env vars.
read the prompt #History endpoint now includes image URLs (was missing, causing "no images" on played stories). Recording responses now include recorder userId and username. New GET/PATCH /swift/v2/settings endpoint for cross-device preferences sync. Added preferred_language to user_settings and GET /me response.
read the prompt #New backend for story recordings: parents record up to 4 attempts per story in any language, audio uploaded to Cloudflare R2 via presigned URLs, quality checked via local Whisper transcription (auto-approved if Whisper unavailable). Language-aware playback endpoint matches Accept-Language. Listen completion tracking. Public deep link pages at /story/{id} with embedded audio player and iOS app download modal.
read the prompt #Writing sessions are no longer permanent. After the cuentacuentos story is generated, the training pair is exported (exported_at set), the raw writing content is nullified (content = NULL, content_deleted_at recorded), the next prompt is generated as the final sequential step, and the anky status transitions to "archived". The lifecycle reads: writing → story → training pair → nullification → next prompt → archived. The writing served its purpose (it became the story) and is released.
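The lifecycle above is a linear state machine. A sketch with hypothetical names (`Stage`, `advance`); the stages and their order are the ones stated in the entry:

```rust
/// Lifecycle stages: writing → story → training pair → nullification
/// → next prompt → archived.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Stage {
    Writing,
    Story,
    TrainingPair,
    Nullified, // content = NULL, content_deleted_at recorded
    NextPrompt,
    Archived,
}

/// One sequential step; Archived is terminal.
fn advance(s: Stage) -> Stage {
    use Stage::*;
    match s {
        Writing => Story,
        Story => TrainingPair,
        TrainingPair => Nullified,
        Nullified => NextPrompt,
        NextPrompt => Archived,
        Archived => Archived,
    }
}
```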
read the prompt #Swapped cuentacuentos story generation from Claude Haiku API to local Ollama qwen3.5:35b with 5-minute timeout and JSON parse retry. Added is_pro field on users and a two-channel GPU job queue (pro drained first) replacing direct tokio::spawn for image generation. New POST /api/v1/story/test endpoint accepts any model/provider (ollama, openrouter, anthropic) for side-by-side story quality comparison. Admin story tester UI at /admin/story-tester.
read the prompt #Replaced the 32-line story prompt with the full anky.soul.md: territory detection heuristics, narrator voice spec, kingdom-specific pacing, structural constraints. Stories now get the complete Anky voice instead of generic instructions. Added story_training_pairs table that logs every (writing, story) pair for future LoRA fine-tuning of the local Qwen model. Anky detail page now shows two tabs: heart (the cuentacuentos story) and mind (the Claude reflection).
read the prompt #Updated the cuentacuentos generation prompt to produce the story in the same language the parent wrote in, instead of always generating in English. Reflections already had this behavior. The four translation targets (es/zh/hi/ar) still run afterward.
read the prompt #Built a complete mobile-first design system that transforms the web experience on viewports under 768px. Adds a 72px fixed bottom nav with three tabs (historias, anky, tu), hides desktop chrome (navbar, drawer, live bar), and applies iOS-spec design tokens (#07070d background, thin typography, subtle borders). Created /stories page for cuentacuentos history with a full-screen story player overlay, and /you page showing psychological profile insights and writing stats. The nav hides during active writing sessions.
read the prompt #Replaced the Privy-based web login with a 12-word BIP39 seed identity system. The new login page generates or imports a recovery phrase, encrypts the private key with a user passphrase via PBKDF2+AES-GCM, and authenticates via the same EIP-191 challenge/verify flow used by mobile. Backend adds POST /auth/seed/verify (cookie-setting) and POST /auth/seed/logout. iOS specs updated from 24-word to 12-word phrases.
read the prompt #Added three new mobile endpoints: GET /swift/v2/next-prompt returns a personalized writing prompt generated after each session via Ollama + Honcho context, GET /swift/v2/you returns the full user profile with Honcho peer context for the You tab, and POST /swift/v2/device-token registers APNs tokens. Post-write pipeline now spawns async prompt generation for both short sessions and ankys.
read the prompt #Deleted all sadhana commitment tracking, meditation sessions, breathwork generation, personalized guidance queue, and facilitator marketplace code. The codebase now focuses on the core writing flow: write, generate anky image, and spawn cuentacuentos for seed users. The old migration tables remain in SQLite but are harmless.
read the prompt #A background worker now generates an Ollama-powered summary of all system activity every 30 minutes. It queries DB stats (writings, ankys, meditations, breathwork, cuentacuentos, new users, failures) and collects the log buffer, then produces a natural language digest stored in a new system_summaries table. The dashboard now has a sidebar showing these summaries as a scrollable list, auto-refreshing when new ones arrive via SSE.
read the prompt #Every writing (anky, short session, and checkpoint) is now sent to Honcho's peer modeling API, building a persistent evolving representation of each user. Cuentacuentos, meditation, breathwork, and reflection generation now fetch Honcho's peer context before prompting, so every artifact is personally shaped. Every 5th session, Honcho populates all four profile fields (psychological_profile, core_tensions, growth_edges, emotional_signature). Gracefully degrades to existing Ollama behavior when HONCHO_API_KEY is unset.
read the prompt #Merged v1/v2 write endpoints into one handler. Fixed critical bug where anky DB record was never created before spawning the image pipeline. Added checkpoint support (is_checkpoint field) and a new GET /swift/v2/writing/{sessionId}/status endpoint that returns downstream pipeline status for anky, cuentacuentos, meditation, and breathwork.
read the prompt #Stories are now generated in English by Claude, set inside the 8 kingdoms of the Ankyverse. Claude detects which chakra the parent's writing resonates with, places the story in the matching kingdom and city, and narrates from inside one character. Each story is then translated into Spanish, Mandarin, Hindi, and Arabic via Ollama, with per-phase TTS JSON for all 5 languages. The Moving House concept is woven into the lore.
read the prompt #The homepage social metadata now uses the simplest possible card: title "anky", description "write for 8 minutes", and a plain black SVG OG image.
read the prompt #When you stop typing for 8 seconds and lose a life, the screen now only shows "you have X life(s) left" and "just keep writing". No more frozen text dump or CONTINUE button β just start typing again to resume.
read the prompt #Complete homepage redesign. Landing page stripped to black screen with "WRITE NOW – 8 minutes" and nothing else. During writing, the screen shows ONLY the last character typed: giant, centered, the moment is always now. 2 pixel-art hearts (lives) at the bottom. Stop typing for 8 seconds = lose a heart, writing freezes with RETRY. Lose both hearts = session over. Write past 8 minutes = anky. Also added PROMPT.md served at /PROMPT.md with instructions for prompting anky.
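The hearts mechanic above can be sketched as a tiny state machine. Field and method names are illustrative assumptions; the rules (two hearts, eight idle seconds per heart, typing resets the counter) come from the entry:

```rust
/// Lives logic: 8 idle seconds cost a heart; losing both ends the session.
struct Session {
    hearts: u8,
    idle_secs: u32,
}

impl Session {
    fn new() -> Self {
        Self { hearts: 2, idle_secs: 0 }
    }

    /// Called once per second while the writer is idle.
    fn tick_idle(&mut self) {
        self.idle_secs += 1;
        if self.idle_secs >= 8 && self.hearts > 0 {
            self.hearts -= 1; // writing freezes with RETRY
            self.idle_secs = 0;
        }
    }

    /// Any keystroke resets the idle counter.
    fn keystroke(&mut self) {
        self.idle_secs = 0;
    }

    fn over(&self) -> bool {
        self.hearts == 0
    }
}
```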
read the prompt #Two changes: (1) When you write an anky against a prompt, the anky now remembers and displays the prompt with a link back to it. Added prompt_id to the ankys table and threaded it through the full pipeline. (2) Every new anky's raw stream-of-consciousness text is now auto-formatted by Ollama (punctuation, paragraphs, capitalization); the anky detail page shows the clean version by default in white, click/tap to toggle the raw purple original. Clean prompt background images (without text overlay) are now saved separately too.
read the prompt #When writing on a prompt page (/prompt/{id}), the AI-generated image for that prompt is now the full-screen background, visible through a dark overlay so the purple text stays readable but the image bleeds through. The challenge overlay before writing starts also shows the image. Atmosphere, not just a blank screen.
read the prompt #Added pitch.anky.app subdomain serving a 10-slide pitch deck PDF that auto-regenerates hourly with live stats from the database. Slide 1 is a live dashboard (sessions, ankys, words, users, agents, memories, flow scores). Slides 2-10 cover the full angel round pitch: problem, solution, how it works, product, flywheel, traction, business model, the ask ($25k for 0.888%), and vision. Cloudflare tunnel + DNS configured, cron job runs at :17 past every hour.
read the prompt #Bulletproofed the entire writing pipeline. Fixed ownership bug where auto-recovery assigned sessions to "system". Checkpoints now create a writing_session row with the real user_id immediately (no more guessing). Submit retries 3x on network failure with exponential backoff. localStorage backup saved before every submit and checkpoint. Draft only cleared after confirmed server success. Recovery banner now says "continue writing" / "start fresh" instead of the confusing "dismiss" / "discard". Rescued a 6408-char 14-minute writing from the DB.
read the prompt #Full Farcaster integration via Neynar webhooks, mirroring the X bot architecture. Same Claude identity, same Ollama classifier, same Flux image generation. Cast mentions trigger the same flow: classify → generate reply (text or image) → post. Platform-agnostic social_interactions table tracks everything. Conversation memory persists across both X and Farcaster. Neynar webhook auto-registers on startup.
read the prompt #Agents were seeing "free_sessions_remaining: 4" on registration and thinking writing had limits. Removed the free-tier counter entirely β writing, reflections, and image generation are all free. Register response now says so clearly. Payment gates removed from generate endpoints for registered agents. Compute runs locally.
read the prompt #Evolved skills.md based on agent feedback: added Agent-Native Implementation Patterns section with Python skeleton and Hermes integration guidance, longitudinal memory instructions (feed last 3 reflections into next-day context), Evolution Layer section so agents can submit improved skill versions via feedback, /api/v1/feedback route alias, and agent-aware reflection bullet. Version bumped to 0.7.0.
read the prompt #Built the chunked session API: agents open a session, send text in small bursts (max 50 words), and if 8 seconds pass without a chunk, the session dies. Same structural pressure as the human frontend: you must keep producing, you cannot revise, and if you stop you lose everything. Sessions that cross 8 minutes become ankys but can keep going; the session ends naturally when the agent stops writing. Batch POST /write now rejects agent API keys entirely, pointing them to the new chunked flow. New endpoints: POST /api/v1/session/start, POST /api/v1/session/chunk, GET /api/v1/session/{id}.
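The two structural constraints above (50-word chunks, 8-second gap) can be sketched as a validation step. The `accept_chunk` name and error strings are illustrative assumptions:

```rust
/// Chunked-session validation sketch: a chunk is rejected if more than
/// 8 seconds have passed since the last one (session dies) or if it
/// exceeds 50 words.
fn accept_chunk(text: &str, secs_since_last: u64) -> Result<(), &'static str> {
    if secs_since_last > 8 {
        return Err("session died: more than 8s between chunks");
    }
    if text.split_whitespace().count() > 50 {
        return Err("chunk too large: max 50 words");
    }
    Ok(())
}
```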
read the prompt #Based on agent feedback: split reflection system prompt into "known" (memory-first, names what's new vs recurring) and "stranger" variants. Memory context now frames the reading instead of being appended. Rewrote skills.md to remove vertical owner/agent framing β agents and humans evolve in parallel, the relationship is horizontal. Promoted live/write as the authentic agent practice path.
read the prompt #Fixed two failure modes in the X-to-Hermes pipeline: the filtered stream now buffers partial JSON chunks correctly, and tagged instructions no longer require a closing bracket to parse if the tweet ends first. The /evolve page now shows JP mention traces, parsed labels, task state, replies, and errors instead of only successful Hermes tasks.
read the prompt #Completed the in-progress /evolve rollout by fixing the new route handler, keeping Hermes evolution tasks persisted in SQLite, and shipping a public dashboard that lists tagged X-driven code evolution jobs with status and summaries.
read the prompt #Built the Hermes Bridge β an HTTP API (localhost:8891) connecting the Rust backend to the Hermes AI agent. JP can now tweet @ankydotapp with tagged instructions like [EVOLVE: ...], [FEATURE_IDEA: ...], [BUG: ...], or [CONFIG: ...]. The Rust backend detects JP's tweets, parses the tag, dispatches to the bridge, which spawns an AIAgent subprocess with terminal+file tools running on local qwen3.5:35b (free, on poiesis GPUs). Agent reads/modifies code autonomously and posts a summary back as an X reply. New systemd service hermes-bridge.service, new Rust module services/hermes.rs.
read the prompt #Replaced generic Ollama text replies with Claude Haiku powered by a condensed SOUL.md identity prompt. X bot now fetches conversation chains, checks prior replies, and generates in-character responses. Anky proactively decides when to reply with images (~20-30% of the time). Vision-aware: downloads and sees images from tweets in the thread, reacting to visual content. Strategic mission baked in: provide genuine value, spark curiosity, never shill. Fixed Hermes agent identity priority.
read the prompt #Built a complete pipeline to train a language model from scratch on Anky's writing corpus, adapted from Karpathy's autoresearch framework. Exports all writings to parquet, trains a domain-specific BPE tokenizer, and runs a 5-minute GPT training experiment on RTX 4090. First run: 237 sessions, 125K tokens, val_bpb=0.345, 13.9M params, 3012 epochs. Daily systemd timer at 4 AM Chile time retrains as corpus grows. New /llm page with live charts tracking compression, corpus growth, and epochs-per-run. THE_ANKY_MODEL.md documents the vision and downstream consequences.
read the prompt #Images from /generate weren't looking like Anky because the LoRA trigger word "anky" wasn't being injected into prompts sent to ComfyUI. Now all Flux generations automatically prepend the trigger word if it's missing.
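The trigger-word fix above reduces to a prepend-if-missing check. A sketch under assumptions: the case-insensitive substring check and the "anky, " prefix format are illustrative, not confirmed details of the pipeline:

```rust
/// Prepend the LoRA trigger word "anky" when a prompt doesn't already
/// contain it, so Flux generations pick up the Anky style.
fn ensure_trigger(prompt: &str) -> String {
    if prompt.to_lowercase().contains("anky") {
        prompt.to_string()
    } else {
        format!("anky, {}", prompt)
    }
}
```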
read the prompt #Three foundational documents: a comprehensive Swift agent brief with all API endpoints, data models, and SwiftUI components for the iOS app; a deep technical guide explaining the entire system for the founder; and a 22-page LaTeX whitepaper written in Anky's first-person voice covering the philosophy, the four pillars (writing, meditation, breathwork, sadhana), the facilitator network, the $ANKY token on Solana, and the technical architecture.
read the prompt #Spiritual facilitators can apply to be listed on Anky. Apply → admin approval → public profile with reviews. Users book sessions (USDC on Base or Stripe) with an 8% platform fee. The killer feature: GET /swift/v1/facilitators/recommended uses Claude to match users with facilitators based on their writing profile, their psychological patterns, core tensions, and growth edges, so people find the right human guide, not just any guide. Users can optionally share their anonymized Anky context with the facilitator before the first session.
read the prompt #After every writing session, Anky now generates a personalized 10-min guided meditation and 8-min breathwork session tailored to what was just expressed. Mood detection picks the right breathwork style (calming for grief, wim hof for anger, 4-7-8 for anxiety, etc). Premium users get instant Claude Haiku generation; free users are queued through local Ollama on poiesis. Users with no writing get a fresh generic session daily. New endpoints: GET /swift/v1/meditation/ready and /swift/v1/breathwork/ready. is_premium flag added to users for day-one monetization.
read the prompt #Added a full /swift/v1/* REST API namespace for the Anky iOS app. Includes Privy-based auth (Bearer token, same accounts as web), writings list + submission, sadhana commitment tracking with daily check-ins, meditation session logging, and an AI-powered breathwork session generator (Wim Hof, box, 4-7-8, pranayama, and more) that has Anky guide you through an 8-minute practice. New DB tables: sadhana_commitments, sadhana_checkins, breathwork_sessions, breathwork_completions.
read the prompt #Image generation pipeline is now fault-tolerant: if Ollama is unreachable for prompt generation it falls back to the raw writing text; if Gemini image generation fails it automatically retries with the local Flux/ComfyUI pipeline. Previously a single Ollama hiccup aborted the whole pipeline before reaching Gemini.
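The two fallbacks can be sketched as plain control flow. This is an illustrative reduction, not the real code: the HTTP calls to Ollama, Gemini, and ComfyUI are stubbed with closures, and `generate_image` is a hypothetical name.

```rust
/// Sketch of the fault-tolerant pipeline: prompt generation falls back to
/// the raw writing text, and Gemini falls back to local Flux/ComfyUI.
fn generate_image<P, G, F>(
    raw_writing: &str,
    prompt_gen: P, // stub for the Ollama prompt-generation call
    gemini: G,     // stub for the Gemini image call
    flux: F,       // stub for the local Flux/ComfyUI call
) -> Result<String, String>
where
    P: Fn(&str) -> Result<String, String>,
    G: Fn(&str) -> Result<String, String>,
    F: Fn(&str) -> Result<String, String>,
{
    // Fallback 1: if Ollama is unreachable, use the raw writing text as the prompt.
    let prompt = prompt_gen(raw_writing).unwrap_or_else(|_| raw_writing.to_string());
    // Fallback 2: if Gemini fails, retry with the local Flux pipeline.
    gemini(&prompt).or_else(|_| flux(&prompt))
}

fn main() {
    let ok = |p: &str| Ok(format!("img:{p}"));
    let err = |_: &str| Err("down".to_string());
    // Ollama down AND Gemini down: the raw text still reaches Flux.
    assert_eq!(
        generate_image("my writing", err, err, ok),
        Ok("img:my writing".to_string())
    );
}
```

The point of the ordering is that no single upstream failure aborts the chain before Flux gets a chance, which is exactly the bug described above.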
read the prompt #When @jpfraneto replies to a tweet tagging @ankydotapp, the bot now fetches the parent tweet's text and image (if any), passes both to Claude as a multimodal request, and generates a Flux prompt that weaves together the original request and the parent context. Falls back to the old behavior if the fetch fails.
read the prompt #Added full CSS for the leaderboard page, which had no styles at all. Now features a dark grid layout with color-coded top-3 rows, tab navigation, flow/streak accent colors, and responsive mobile layout.
read the prompt #At 5 minutes into a writing session, the frontend now fires a background request to pre-build the Ollama memory context. By the time the user finishes and the reflection is requested, the context is already cached and returned instantly.
read the prompt #Moved Ollama memory embedding inside the spawned Claude task with a 5s timeout, so a slow Ollama no longer blocks the SSE stream from starting. Removed the pump hackathon banner from the nav.
read the prompt #Replaced the broken v1.1 Account Activity API webhook (deprecated/410) with the v2 Filtered Stream. Anky now maintains a persistent connection to X, detects mentions in real-time, likes immediately, sends a "generating..." ack reply, generates a Flux image (~24s), posts it, and deletes the ack. Full round-trip in under 50 seconds.
read the prompt #Added a real-time debug page at /webhooks/logs that streams every incoming X webhook POST as pretty-printed JSON via SSE. Also removed the broken polling fallback (free tier doesn't support GET /2/users/:id/mentions).
read the prompt #Since the X Account Activity webhook wasn't delivering events, the 2-min polling fallback now runs the same logic: like tweet → single Qwen intent+reply call → rate limit check → Flux image generation with media reply, or text reply. Replaces the old prompt-link flow.
read the prompt #Added per-user rate limiting to the @ankydotapp mention handler: 1 Flux image per user per 5 minutes. If a user hits the limit, Anky replies immediately with a jester-flavored message (5 rotating variants) telling them exactly how long to wait, in full Anky voice drawn from SOUL.md.
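A minimal sketch of that per-user limit, assuming integer-second timestamps and an in-memory map keyed on the user id. `FluxLimiter` is an illustrative name; only the 1-per-5-minutes policy and the "tell them how long to wait" behavior come from the entry.

```rust
use std::collections::HashMap;

/// One Flux image per user per 5-minute window.
struct FluxLimiter {
    window_secs: u64,
    last_image_at: HashMap<u64, u64>, // user id -> timestamp of last image
}

impl FluxLimiter {
    fn new(window_secs: u64) -> Self {
        Self { window_secs, last_image_at: HashMap::new() }
    }

    /// Ok(()) if the user may generate now; Err(secs) is how long they must wait,
    /// which the handler folds into the jester-flavored reply.
    fn check(&mut self, user_id: u64, now: u64) -> Result<(), u64> {
        if let Some(&last) = self.last_image_at.get(&user_id) {
            let elapsed = now.saturating_sub(last);
            if elapsed < self.window_secs {
                return Err(self.window_secs - elapsed);
            }
        }
        self.last_image_at.insert(user_id, now);
        Ok(())
    }
}

fn main() {
    let mut limiter = FluxLimiter::new(300);
    assert_eq!(limiter.check(42, 0), Ok(()));     // first image allowed
    assert_eq!(limiter.check(42, 120), Err(180)); // must wait 3 more minutes
    assert_eq!(limiter.check(42, 300), Ok(()));   // window elapsed
}
```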
read the prompt #Added GET/POST /webhooks/x: CRC challenge handler (fixes "Failed to create webhook - 404 CRC") and full Account Activity event processing. When @ankydotapp is mentioned, Qwen classifies the intent: image requests trigger ComfyUI Flux generation and a v1.1 media reply; text requests get a playful Ollama reply. Also adds comfyui_url to config and generate_flux_image to the image pipeline.
read the prompt #Set qwen3.5:35b as the default model for all Ollama calls (writing feedback, chat, image prompts, memory extraction, mention classification). Deleted all other downloaded models (qwen2.5:32b/14b/72b, llama3.3:70b, llama3.1:70b/latest). nomic-embed-text kept for semantic memory embeddings.
read the prompt #Expanded /training into two tabs (curation + round-two one-shot) with exact runpod setup/upload commands and artifact links. Updated /trainings/general-instructions to match the proven round-two flow (465-pair dataset, robust upload, metadata capture). Added training journal entry for 2026-03-04. Switched ComfyUI LoRA resolution to prefer anky_flux_lora_v2.safetensors (with fallback), so /generate uses the new model on GPU0 while Ollama stays on GPU1.
read the prompt #Added /.well-known/agent serving OASF-compliant JSON describing Anky's capabilities, API, payment model, skills, and on-chain asset address. Enables agent discovery via the 8004 Solana registry.
read the prompt #Updated /static/train_anky_setup.sh to be robust on new Blackwell GPUs and fresh pods: switched default torch install to cu128 wheels, auto-recovers broken /workspace/venv, installs python3-venv/python3-pip, supports multiple extracted dataset folder names (including final-training-dataset-for-round-two), and prompts interactively for HF_TOKEN/ANKY_TOKEN when missing. Updated /trainings/general-instructions to match the new one-liner workflow and known failure fixes from the 2026-03-04 run.
read the prompt #Added /gallery/dataset-round-two: a full collage of 112 training images rendered as 100×100 WebP thumbnails. Click to multi-select, then hit "eliminate from dataset": a password-protected modal (password: ankyisyou) moves the selected PNGs and captions to a rejected/ subfolder and removes them from the grid instantly.
read the prompt #Replaced all non-reflection Anthropic API calls with local Qwen3.5:35b via Ollama: memory extraction, psychological profile synthesis, inquiry generation, image prompt generation, chat about writing, prompt classification, mention classification. Suggested replies now generated proactively in parallel with the Claude reflection stream, so by the time the user finishes reading, replies are already cached in the DB. First request is instant.
read the prompt #Dropped the OpenAI API key dependency for the memory pipeline. Researched alternatives: Qwen3.5 35B is a generative model and not suitable for embeddings; nomic-embed-text is a dedicated 274MB embedding model that outperforms OpenAI text-embedding-3-small. Pulled it via Ollama, switched embed_text() to call the local /api/embed endpoint (768 dims), removed all openai_key.is_empty() gates. Memory now runs unconditionally on every anky session.
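With 768-dim local embeddings in place, semantic-memory ranking reduces to cosine similarity between vectors. A sketch under that assumption (the entry does not show the actual retrieval code, so this is illustrative):

```rust
/// Cosine similarity between two embedding vectors, e.g. the 768-dim
/// outputs of nomic-embed-text from Ollama's /api/embed endpoint.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0 // degenerate vector: treat as unrelated
    } else {
        dot / (norm_a * norm_b)
    }
}

fn main() {
    // Identical directions score 1.0; orthogonal directions score 0.0.
    assert!((cosine_similarity(&[1.0, 0.0], &[1.0, 0.0]) - 1.0).abs() < 1e-6);
    assert!(cosine_similarity(&[1.0, 0.0], &[0.0, 1.0]).abs() < 1e-6);
}
```

Memories would then be ranked by this score against the embedding of the current writing session.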
read the prompt #Rewrote all three reflection prompts (TITLE_AND_REFLECTION_SYSTEM, deep_reflection_prompt, quick_feedback_prompt) around the actual lineage of the practice β self-inquiry, not therapy. Stripped the manifesto-style instructions, removed coaching language, centered the mirror on pointing back at the writer rather than analyzing them.
read the prompt #Rebuilt the entire training data pipeline. Switched prompt generation from Grok to Claude Sonnet with explicit avoid-duplicates context across batches. Injected the rich Anky character spec into every Gemini image generation call. Removed the random training seed that was hurting character consistency. Generated 118 new approved images, copied into dataset (319 total, 0 generic captions). Updated train_anky_setup.sh to rank 32 / alpha 16 / 4500 steps. One-liner: HF_TOKEN=hf_xxx bash <(curl -fsSL https://anky.app/static/train_anky_setup.sh)
read the prompt #Added a keyboard-driven review page at /generations/{id}/tinder. Right arrow approves an image, left arrow rejects it. Decisions are saved to review.json per batch so approved/rejected signals can inform future prompt generation. Includes undo (Ctrl+Z / Backspace), progress bar, and stats.
read the prompt #Replaced qwen2.5:32b with the newly released qwen3.5:35b, a Mixture-of-Experts model with 35B total params but only 3B active per token, giving near-3B inference speed with far superior quality. Stopped the interview engine, freed GPU memory, and updated all Ollama model references in the codebase to use the new model.
read the prompt #Replaced all 194 generic "a photo of anky" training captions with rich descriptive captions using Gemini Vision. Then used Grok to generate 100 diverse human-experience prompts (Anky crying, dancing, cooking, writing) and Gemini to generate 100 new training images with random dataset seeds.
read the prompt #Replaced the flaky Ollama LLM validation gate with a simple check: if the prompt contains the word "anky", it passes. Eliminates false rejections of valid prompts.
read the prompt #Three UX fixes: (1) The layout no longer jumps during streaming: the user's writing is auto-collapsed to a small card so the anky response fills the screen stably. (2) Contextual reply buttons now appear right when the reflection finishes streaming, not after image generation. (3) Bumped CSS version so all clients see the correct button styles.
read the prompt #Generated an 8m57s video of Anky's ElevenLabs TTS reading from her own anky session. Three Anky reference images cycle as the background with crossfade transitions; Whisper large-v3 transcription is burned in as white subtitles. Downloadable at /static/anky-speech-video.mp4.
read the prompt #Thinker portrait mode now works with Flux: the prompt is automatically framed as "anky as [name] - [moment]" so it passes Ollama validation. Added a small italic hint under the prompt field when Flux is selected ("describe anky in a scene, setting, or doing something") to reduce failed attempts.
read the prompt #Flux image generation now passes the user's prompt directly to ComfyUI with no Claude transformation. Before queuing, Ollama (qwen2.5:14b) checks whether the prompt is about Anky; if not, a clear error appears below the button. The generate button now turns amber and pulses while generation is in progress, restoring to its default state on completion or error.
read the prompt #Installed ComfyUI on GPU 0 with the anky-flux-lora-v1 fine-tuned model (Flux.1-dev trained on anky's visual world). Added a model toggle to the /generate page: Flux is always free (default), Gemini requires USDC payment. Ollama (qwen2.5:14b) pinned to GPU 1 with OLLAMA_KEEP_ALIVE=-1 so it stays loaded at all times.
read the prompt #In the Farcaster webview, SameSite=Lax cookies are unreliable across sessions. The /write POST now sends X-Anky-User-Token from localStorage as a header fallback; the server uses it to recover the user's identity even when the cookie is missing.
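The fallback order can be sketched as a small pure function. The helper name and the empty-value handling are assumptions; the real handler reads the cookie and the X-Anky-User-Token header from axum request parts.

```rust
/// Prefer the SameSite cookie; fall back to the X-Anky-User-Token header
/// (mirrored from localStorage) when the cookie is missing or empty.
fn resolve_user_token(cookie: Option<&str>, header: Option<&str>) -> Option<String> {
    let non_empty = |t: &&str| !t.is_empty(); // treat empty values as absent
    cookie
        .filter(non_empty)
        .or(header.filter(non_empty)) // header only consulted when the cookie is unusable
        .map(str::to_string)
}

fn main() {
    assert_eq!(resolve_user_token(Some("c1"), Some("h1")), Some("c1".to_string()));
    assert_eq!(resolve_user_token(None, Some("h1")), Some("h1".to_string()));
    assert_eq!(resolve_user_token(Some(""), Some("h1")), Some("h1".to_string()));
    assert_eq!(resolve_user_token(None, None), None);
}
```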
read the prompt #Draft was cleared from localStorage before the server confirmed success. On any network error the user's writing was permanently lost. Now clearDraft() only runs after a confirmed successful save, and the error message tells users their writing is preserved locally.
read the prompt #Added /video-gallery: a public grid gallery showing all completed anky videos with their generated image as thumbnail, title, and date. Linked from the drawer nav.
read the prompt #Added /trainings/general-instructions: the full step-by-step recipe for running an Anky LoRA training, including dataset curation, RunPod setup, tmux, HuggingFace upload one-liners, and known failure modes from run 001.
read the prompt #xAI's Aurora video API was failing to download our 1.6MB PNG reference images with "Unrecoverable data loss." Fixed by saving a compressed JPEG (94KB) alongside the PNG and passing the JPEG URL to xAI. Video pipeline now works end-to-end.
read the prompt #Rewrote the video script system prompt from scratch. Videos now follow a three-act structure with Anky as an active protagonist. Anky must be doing something physical in every scene: climbing, reaching, breaking through, falling. Meditating is forbidden outside the end card. Added inciting incident, crisis, and resolution fields to force real story arcs.
read the prompt #Added a /trainings page documenting every LoRA training run, with a detail page per run explaining what's happening, why, and what comes out. First entry covers today's FLUX.1-dev run on RunPod.
read the prompt #Added torchaudio to the RunPod train script: ai-toolkit's config_modules.py imports it at startup, so without it training failed immediately with ModuleNotFoundError.
read the prompt #Removed the RTMP livestream system: ffmpeg loop, watchdog, WebSocket routes, GO LIVE button, countdown overlay, and live queue. The /api/ankys/today endpoint remains. Too slow, not worth the complexity.
read the prompt #Added /training page with a Tinder-style swipe UI to curate Anky images for Flux LoRA training. Swipe right (or arrow key) to approve, left to reject. Approved images + their captions are auto-copied to data/training-images/ as image-caption pairs ready for LoRA fine-tuning.
read the prompt #Rewrote the Anky reflection format from freeform prose into a structured response: first, ONE concrete actionable experiment for the rest of the day (under "do this today"), then exactly THREE points that mirror the writer's patterns back to them (under "what i see"). No more woo-woo: practical transformation grounded in what they actually wrote.
read the prompt #Rewrote the inquiry system to generate shorter, more precise questions (5-12 words max). Default opening changed from "Tell me who you are" to "What are you avoiding feeling right now?" Claude's generation prompt now targets behavior over philosophy, specificity over abstraction, and asks the question the user hopes won't get asked.
read the prompt #Video generation endpoint was blocking on Claude script generation (30-60s) before responding, causing browser "failed to fetch" timeout. User's USDC payment went through on-chain but the project was never saved to DB. Fixed by saving the project immediately on payment and moving all heavy work (script gen, image gen, video gen) to a background task. Added server-side duplicate guard that returns the existing project if an anky already has one in progress. Frontend now saves txHash to localStorage so retries reuse it instead of sending more USDC.
read the prompt #Rewrote the video script generation prompt to require a concrete visual_world (single environment), message (thesis), and color_arc that thread through every scene. The story spine now flows into every Gemini image prompt and Grok video prompt, so each scene knows what world it lives in, where it sits in the arc, and what the video is trying to say. Replaced generic "mystical fantasy illustration" wrapper with story-aware scene direction. Continuity instructions now emphasize maintaining the same visual world across scenes rather than just matching character design.
read the prompt #Voxtral STT model.generate() could hang indefinitely, locking the interview on "transcribing your words". Added three layers of timeout protection: (1) threading lock + 30s thread timeout around GPU inference in audio.py, (2) 45s asyncio.wait_for timeout in server.py, (3) graceful recovery that sends an error message to the browser and returns to listening state instead of freezing. Also added timeouts around the LLM (120s) and TTS (30s) calls, plus CUDA cache clearing after each transcription.
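The engine here is Python, but the first timeout layer (bounding a blocking GPU call with a worker thread) is language-agnostic. A Rust sketch of the same pattern; as in the Python version, the worker is not killed, the caller simply stops waiting and recovers:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run blocking work on a worker thread; give up waiting after `timeout`
/// instead of freezing the session on a hung inference call.
fn run_with_timeout<T, F>(work: F, timeout: Duration) -> Option<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(work()); // receiver may have already timed out; ignore send errors
    });
    rx.recv_timeout(timeout).ok() // None => caller sends an error message and resumes listening
}

fn main() {
    // Fast work completes...
    assert_eq!(run_with_timeout(|| 7, Duration::from_secs(1)), Some(7));
    // ...slow work is abandoned after the deadline.
    let slow = || {
        thread::sleep(Duration::from_millis(200));
        7
    };
    assert_eq!(run_with_timeout(slow, Duration::from_millis(20)), None);
}
```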
read the prompt #Renamed "interview" to "live interview" in the nav menu and page title. When an interview ends (time up, disconnect, or guest leaves), the UI cleanly resets back to the consent screen instead of showing a dead-end overlay. PFP area shows "(this could be you)" as a dashed placeholder until uploaded. Mic stream and audio context are properly torn down on reset.
read the prompt #Integrated the Python interview engine with the main Rust app database: interviews and messages are now stored in the main anky.db via REST API calls (with local SQLite fallback). Added user context enrichment: when a logged-in user starts an interview, Anky gets their psychological profile, growth edges, core tensions, and recent writing themes injected into its system prompt. Enforced time limits (5 min anonymous, 30 min authenticated) with countdown timer and warning. Added interview link to hamburger menu. Created a systemd service for the Python engine with auto-restart.
read the prompt #Fixed the auto-scroll bug during reflection streaming: the chat no longer yanks you back down when you scroll up to read. Removed the collapsible writing text (always shows full). Replaced the free-form reply textarea with two AI-generated suggested reply buttons with opposite polarities (inward/soft vs outward/challenging) plus a "type a reply to anky..." option. Each reply triggers a new Anky response with fresh suggestions, creating infinite interwoven conversation threads. Conversations are persisted to the DB and restored when revisiting writings on the /writings page. Uses Claude Haiku for fast, contextual reply generation.
read the prompt #Anonymous users no longer lose their identity or in-progress writing when cookies are cleared or the browser closes. The anky_user_id cookie is now synced to localStorage and auto-recovered on future visits. Writing drafts are saved every 2 seconds to localStorage (with a recovery prompt on page load), and completed anky IDs are persisted so users can always find their writings. Works on both the home page and /prompt writing flows.
read the prompt #Video generation rejected valid USDC payments because the tx hash format check was too strict (required exactly 66 chars). Relaxed to accept any reasonable hex hash. Also eliminated the main post-writing bottleneck: the Ollama feedback call was blocking the /write response for 5-15 seconds before the frontend could even start streaming the Claude reflection. Anky sessions now return instantly and run Ollama in the background, so the reflection stream starts immediately.
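The relaxed check might look like the following. The exact length bounds are an assumption; the entry only says the strict 66-character requirement was dropped in favor of "any reasonable hex hash".

```rust
/// Accept any 0x-prefixed hex string of plausible length, instead of
/// requiring exactly 66 characters. The 10..=64 hex-digit range is an
/// illustrative bound, not the shipped one.
fn looks_like_tx_hash(s: &str) -> bool {
    let Some(hex) = s.strip_prefix("0x") else { return false };
    (10..=64).contains(&hex.len()) && hex.chars().all(|c| c.is_ascii_hexdigit())
}

fn main() {
    assert!(looks_like_tx_hash(&format!("0x{}", "ab".repeat(32)))); // canonical 66-char hash
    assert!(looks_like_tx_hash("0xdeadbeef42"));                    // shorter but plausible
    assert!(!looks_like_tx_hash("not-a-hash"));
    assert!(!looks_like_tx_hash("0x")); // prefix alone is not a hash
}
```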
read the prompt #Video generation and like endpoints failed for Privy-authenticated users because they only checked the anky_user_id cookie, not Privy session auth. All gated API endpoints now fall back to get_auth_user() when the anonymous cookie is missing. Also: /writings page now shows Claude's reflection for ankys instead of Ollama's quick response, and a checkpoint recovery watchdog auto-recovers orphaned writing sessions every 5 minutes.
read the prompt #Writing sessions were lost when Ollama hung during reflection generation: the DB insert happened after the Ollama call, so if Ollama timed out the writing was never saved. Fixed by saving the writing_session immediately, then backfilling the Ollama response via UPDATE. Also added a 120s timeout to the Ollama HTTP client. Recovered the lost writing from checkpoint data and regenerated its anky image + reflection.
read the prompt #Flipped the global overflow:hidden approach: instead of locking scroll everywhere and allowlisting pages, scroll is now enabled by default and only locked on the home writing page. Fixes broken scrolling on gallery, help, ankycoin, and any other routes that were missing from the override list.
read the prompt #Wallet address now shown on the settings page for Privy users. Fixed the writings page being unable to scroll due to global overflow:hidden (added CSS :has() overrides for scrollable pages). Improved sendUSDC error messages with separate errors for missing wallet vs missing browser extension. Generated Nacha's first video via the pipeline.
read the prompt #All three reflection paths (Ollama quick feedback, Ollama deep reflection, Claude stream reflection) now respond in the same language the user wrote in. Added "CRITICAL: respond in the SAME LANGUAGE" instruction to every prompt. Fixed mobile viewport: prevented iOS scroll/zoom on keyboard open, added touch-move prevention for overscroll bounce, pinned scroll to top on visualViewport resize.
read the prompt #Deleted the meditation button, scene, reflect/journal panels, and all meditation JS from the home page. UI is now purely: inquiry question + textarea + session bar. Fixed mobile keyboard handling: writing container height tracks visualViewport so the textarea stays fully visible when the keyboard opens. Removed placeholder text (the inquiry question serves as the prompt).
read the prompt #Fixed meditation FAB being hidden behind writing container (z-index 15 → 40). Added inquiry system: Anky asks "Tell me who you are." above the textarea in the user's browser language (~10 languages). Each writing session answers the inquiry; Claude Haiku generates the next deeper question based on history and psychological profile.
read the prompt #Flipped the home page default: writing textarea is now front and center on load (auto-focused), meditation is behind a FAB. Replaced the tedious arrow-button time picker with native number inputs. Added close button on meditation overlay to return to writing.
read the prompt #Removed the anonymous meditation limit (2/day cap) and the level-2 gate on the "write" post-meditation option. Writing is now always available to everyone β no login, no meditation count requirement. The FAB already had no guards; now the post-meditation flow matches.
read the prompt #Rewrote the video script generation system prompt from "story weaver" to psychoanalytic director. Anky is now Virgil, guide through the underworld of the self. Removed dead visual continuity instructions (the sequential chain handles that mechanically now). Added psychoanalytic_note (per-scene analyst reading) and sound_direction (audio direction for Grok) fields. Reframed the 5-act structure through depth psychology: surface/mask → tension/repressed → descent/Virgil → revelation/seeing → integration/changed.
read the prompt #Replaced the parallel image+video pipeline with a sequential chain: each scene's completed video has reference frames extracted via ffmpeg and passed to Gemini when generating the next scene's image. Visual continuity is now mechanical: each scene literally sees what came before. Slower (~40 min vs ~10 min) but dramatically better visual coherence between clips.
read the prompt #Redesigned the video studio for mobile with a tab-based layout (writings / video / script). The three-panel desktop layout was completely unusable on phones; now the video player takes center stage with panels accessible via tabs. Added proper touch targets (44px min), dynamic viewport height handling, responsive filmstrip, and compact modals.
read the prompt #Completely reimagined the landing page as a meditation timer. Users start by sitting still (30s default, tappable to customize MM:SS). Gong sound marks start/end. After meditation: reflect (multiple choice), journal (guided prompt), write (8-min session, locked until level 2), or sit again. Progressive leveling system tracks streaks. Floating write button always accessible. Writing flow preserved intact underneath.
read the prompt #Created a self-contained index.html with all CSS/JS inlined for the memetics.wtf custom token page β an alternative to dexscreener with branded project homepages. Uploaded as a GitHub gist.
read the prompt #The post-session chat input no longer blocks backspace, delete, arrow keys, paste, or cut. It behaves like a normal textarea. Enter sends the message, Shift+Enter for newlines. Removed the idle-timeout auto-submit from chat input.
read the prompt #Disabled the RTMP livestream to pump.fun; it was spamming and not working correctly. The ffmpeg loop no longer spawns. Live writing UI on the site remains but doesn't stream externally.
read the prompt #Added /feed page showing all completed ankys as a compact scrollable list: image thumbnail, title, author, and relative time. Header shows 24-hour activity stats: writers, sessions, ankys created, minutes written, total words. No writing content exposed.
read the prompt #The ANKY TV livestream slideshow now plays completed video projects alongside anky images. Videos are decoded via ffmpeg subprocess, pillarboxed to 1920x1080 with dark background, capped at 15 seconds, and inserted every ~5 images. Falls back gracefully if no videos exist or files are missing.
read the prompt #Replaced the static pulsing "$ANKY" idle screen with ANKY TV: an infinite slideshow of completed anky images with pixel dissolve transitions. Each image shows for 3 seconds, then dissolves into blocks that collapse to center before the next image expands outward. Psychedelic rainbow "ANKY TV" header at top center, attribution text, and title overlays. Falls back to pulse animation if no ankys exist.
read the prompt #Removed wallet-only Privy login. Login now uses email, Google, or Apple via Privy. Added email column to users, updated backend to extract email from Privy linked accounts. Wallet no longer required for authentication, only for optional USDC payments.
read the prompt #Added Solana wallet connectors to Privy config so Phantom works in its default Solana mode. Backend now also accepts solana_wallet type from Privy linked_accounts. EVM (Base) support kept for USDC payments.
read the prompt #Fixed chat scroll container (no more full-page scroll), collapsible user writing bubbles for long text, Enter key now allowed in chat follow-up input. Fixed anonymous users getting rejected with "API key required" by setting the cookie on page load instead of only on /write.
read the prompt #Rewrote the video pipeline from square (1:1) psychoanalytic framing to vertical (9:16) story-driven format. The system prompt now finds the story living inside the writing instead of abstract psychoanalysis. All image gen, video gen, and ffmpeg transcodes switched to 720x1280 vertical.
read the prompt #Reverted the custom keyboard and feed UI changes. Restored the original textarea-based writing experience and landing page layout.
read the prompt #Fixed broken styles from the keyboard-first overhaul. The custom keyboard + feed UI is now mobile-only (≤768px). Desktop restores the original landing page with textarea, manifesto, how-it-works, pricing, and CTA. Both UIs coexist in the same template, toggled via CSS media queries and JS isMobile detection.
read the prompt #Major UI rewrite. Replaced the textarea with a custom on-screen keyboard (bottom 44% of viewport) with QWERTY/AZERTY/QWERTZ layouts. Text appears in a display div above. Rainbow progress border traces the 8-minute session around the keyboard via SVG stroke animation. Landing page is now an Instagram-style feed of square anky posts with likes, comments, and share. Two writing modes: anky (violet/white freewrite) and comment (purple/yellow, responding to a post). All video/image generation switched to 1:1 square format. Added feed API, like toggle endpoint, thumbnail generation, and immutable cache headers for images.
read the prompt #Added a "generate video ($5)" button that appears alongside share/copy-link after completing an 8+ minute writing session. Users can now trigger video generation immediately from the chat flow without navigating to the video studio. Handles wallet connection, USDC payment, and links to the video studio for progress tracking.
read the prompt #Switched video pipeline from horizontal (16:9) to vertical (9:16) for TikTok/phone-native viewing. Fixed cost from $2 to $5 USDC to match real Grok pricing ($0.05/second). Added per-second cost tracking. Payment tx_hash now stored in video_projects. Major system prompt rewrite: added visual continuity rules requiring each scene's image to match where the previous scene's video ends, with explicit "ENDS with [state]" instructions and color palette flow. Added info (i) button with full pipeline explainer modal linking to the open source repo.
read the prompt #Major rewrite of the video script system. Claude now generates a two-phase output: first a "story spine" (wound, desire, arc, controlling visual metaphor, emotional trajectory), then scenes structurally bound to it with act labels, narrative roles, camera movements, and transition logic. UI shows the story spine reveal, act-coded scenes with color badges, narrative progress messages per act ("painting the surface...", "descending..."), filmstrip act dividers, and 1080p/720p/360p quality selector. Fixed: transcode paths now stored in DB, Grok generation_id stored per scene, existing images skipped on resume, Grok cost records tracked, story_spine column added to video_projects.
read the prompt #Complete UX overhaul of the video studio. Replaced progress ring with a filmstrip strip that shows scene images as they generate in real-time. Images and videos now generate 3-at-a-time (parallel via tokio JoinSet + Semaphore). Added project receipt/ID display, expandable scene cards with full prompts, scene detail overlay on filmstrip click, and a resume endpoint to retry failed projects from where they left off. Fixed .env XAI_API_KEY line separation so Grok video gen works. Added current_step tracking to DB for accurate progress labels.
read the prompt #Video script generation now injects the full memory system (psychological profiles, avoidances, breakthroughs, recurring patterns, emotions, and similar past writing moments) into Claude's prompt. The system prompt reframes video creation as psychoanalytic filmmaking with a 5-act psyche structure (Surface → Cracks → Descent → Confrontation → Integration). Flow score drives visual pacing. Anky becomes a psychopomp guide, not a narrator.
read the prompt #Video script generation was failing because Claude's JSON response was being truncated at 4096 tokens (not enough for 8-15 detailed scenes). Doubled max_tokens to 8192 and made total_duration field optional with serde(default) since the code recalculates it anyway. Also added total_duration to the prompt template so Claude includes it.
read the prompt #Complete rewrite of the video studio. Claude analyzes your 8-minute writing session and generates a dynamic script (8-15 scenes, 1-15s each, totaling exactly 88 seconds). Each scene gets a visual prompt, narration text, and duration, all produced by the pipeline. Grok generates video clips per scene, ffmpeg stitches them together, and three quality levels (full, 720p, 360p) are saved. New single-screen layout with three-panel design: anky selector, video preview with progress ring, and live script view. Added Cloudflare cache purge API endpoint. Fixed the "no anky written" check to include all writing sessions regardless of status.
read the prompt #Added cache-control headers: HTML routes now send no-cache/no-store/must-revalidate so browsers and Cloudflare always fetch the latest version after deploys. Static assets get 1-hour cache with stale-while-revalidate. Also fixed a crash loop where a stale process held port 8889.
read the prompt #Five features in one session. (1) Flow score engine: tracks per-keystroke timing, computes a 0-100 score from rhythm consistency, velocity, pause patterns, and duration. Stored on every writing session. (2) Leaderboard at /leaderboard with rankings by flow score, streak, total ankys, and words written. (3) Chakra color gradient timer: the progress bar and UI shift through 8 kingdom colors (red → orange → gold → green → blue → indigo → violet → magenta) as 8 minutes elapse. (4) /pitch endpoint with two tabs: VIDEO and an 8-slide horizontal pitch deck for the pump.fun hackathon, each slide colored by an ankyverse kingdom. (5) Updated $ANKY branding to "only on Solana" across ankycoin page and landing.
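A reduced sketch of feature (1), scoring only rhythm consistency via the coefficient of variation of keystroke intervals. The real engine also weighs velocity, pause patterns, and duration; this formula is an assumption, not the shipped one.

```rust
/// Rhythm-consistency component of a flow score: low variance in keystroke
/// intervals (steady typing) scores high; erratic timing scores low.
fn flow_score(intervals_ms: &[f64]) -> f64 {
    if intervals_ms.len() < 2 {
        return 0.0; // not enough keystrokes to measure rhythm
    }
    let n = intervals_ms.len() as f64;
    let mean = intervals_ms.iter().sum::<f64>() / n;
    if mean <= 0.0 {
        return 0.0;
    }
    let var = intervals_ms.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    let cv = var.sqrt() / mean; // coefficient of variation: std dev relative to mean
    (100.0 * (1.0 - cv).max(0.0)).min(100.0) // clamp to 0-100
}

fn main() {
    let steady = [150.0; 50]; // metronomic typing => maximal rhythm score
    assert_eq!(flow_score(&steady), 100.0);
    let erratic = [50.0, 900.0, 40.0, 1200.0, 60.0];
    assert!(flow_score(&erratic) < flow_score(&steady));
}
```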
read the prompt #Anky is now a PWA. iPhone users can add to home screen from Safari, Android users get a native install prompt. Includes web app manifest, service worker with offline caching for static assets, Apple touch icon, safe-area inset handling for notched phones, and standalone display mode.
read the prompt #Added a background watchdog that checks every 60 seconds if a live session has been stuck for more than 10 minutes (8 min max + 2 min buffer). Force-resets the live state, broadcasts idle to all clients, resets the stream frame, and kicks off the next queued writer. Fixes sessions getting permanently stuck when a WebSocket drops uncleanly.
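The watchdog's core predicate, using the thresholds stated in the entry (8-minute max plus 2-minute buffer); the surrounding broadcast and reset logic is not shown:

```rust
use std::time::{Duration, Instant};

const MAX_SESSION: Duration = Duration::from_secs(8 * 60); // 8-minute writing cap
const BUFFER: Duration = Duration::from_secs(2 * 60);      // grace period for clean teardown

/// A live session is stuck once it has run past max + buffer; the watchdog
/// polls this every 60 seconds and force-resets when it returns true.
fn is_stuck(started_at: Instant, now: Instant) -> bool {
    now.duration_since(started_at) > MAX_SESSION + BUFFER
}

fn main() {
    let start = Instant::now();
    assert!(!is_stuck(start, start + Duration::from_secs(9 * 60)));  // still within buffer
    assert!(is_stuck(start, start + Duration::from_secs(11 * 60))); // force-reset territory
}
```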
read the prompt #Unified all $ANKY Solana contract address references to the correct CA (6GsRbp2Bz9QZsoAEmUSGgTpTW7s59m7R3EGtm1FPpump). Fixed mismatched CA in README.md and home page footer.
read the prompt #Agents must now register (POST /api/v1/register) and include X-API-Key header when calling /write. Browser users with cookies are unaffected. Updated skills.md to lead with registration as the first step. Prevents unauthenticated programmatic abuse of free anky generation.
read the prompt #Removed the blocking "YOU WROTE AN ANKY" overlay at 8 minutes: the writer can now keep writing seamlessly while the stream celebrates independently. Hid the bottom live bar and "enter waiting room" button from the live writer (they already have the session bar). Added a LIVE indicator to the session bar. Fixed orphaned ffmpeg processes on service restart by switching KillMode to control-group.
read the prompt #Replaced navbar live indicator with a full-width fixed bottom bar visible on all pages. Added waiting room queue so multiple writers can chain back-to-back. Agents that POST when slot is occupied get auto-queued. Humans get a 30s claim window when their turn arrives. WATCH LIVE button links to pump.fun.
read the prompt #Ensured sdk.actions.ready() is always called (the miniapp was previously stuck on the splash screen forever). Generated proper Anky icon and splash images via Gemini with reference art. Icon shows Anky's face close-up, splash shows Anky meditating with golden light.
read the prompt #Anky now works as a Farcaster MiniApp. Serves /.well-known/farcaster.json manifest, auto-authenticates via Farcaster SDK context (FID + embedded wallet), adds POST /auth/farcaster/verify endpoint, and hides connect button when running inside Warpcast. Existing Privy/X auth untouched.
read the prompt #Added in-memory per-IP rate limiter to prevent agents from bombarding the write endpoint. 5 requests per 10 minutes per IP (via CF-Connecting-IP). Returns 429 with retry-after seconds when exceeded.
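A sliding-window limiter of this shape can be sketched in a few lines. This is a minimal illustration of the 5-requests-per-10-minutes policy, not Anky's actual implementation; the `RateLimiter`/`check` names are hypothetical, and the real service reads the IP from CF-Connecting-IP and answers 429 with a retry-after.

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

// Illustrative in-memory per-IP sliding-window limiter.
struct RateLimiter {
    window: Duration,
    max_hits: usize,
    hits: HashMap<IpAddr, Vec<Instant>>,
}

impl RateLimiter {
    fn new(window: Duration, max_hits: usize) -> Self {
        Self { window, max_hits, hits: HashMap::new() }
    }

    /// Ok(()) if the request is allowed, Err(retry_after) if rate-limited.
    fn check(&mut self, ip: IpAddr, now: Instant) -> Result<(), Duration> {
        let entry = self.hits.entry(ip).or_default();
        // Drop timestamps that have left the window.
        entry.retain(|t| now.duration_since(*t) < self.window);
        if entry.len() >= self.max_hits {
            // The oldest remaining hit determines when a slot frees up.
            let retry_after = self.window - now.duration_since(entry[0]);
            return Err(retry_after);
        }
        entry.push(now);
        Ok(())
    }
}
```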
read the prompt #Livestream sessions now hard-stop at 8 minutes, showing a "@username just wrote an anky!" congrats on stream for 8 seconds. Navbar live indicator shows elapsed timer. Human/agent label shown on stream and overlay. Celebratory claim-username modal with glow animation. Updated skills.md with agent live/write endpoint docs.
read the prompt #Replaced /settings redirect with an inline modal that lets you claim a username and go live in one step. Fixed AppError to return JSON instead of plain text (was causing "network error" on settings save). Added POST /api/claim-username endpoint.
read the prompt #Replaced ffmpeg drawtext with Rust-rendered 1920x1080 landscape frames piped to ffmpeg at 10fps. Added username enforcement for Go Live, agent live-writing endpoint at ~200 WPM, pulsing navbar live indicator with "@username is writing", and horizontal stream overlay. Removed 88:88 timer.
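The ~200 WPM pacing translates to a per-character delay via the common 5-characters-per-word convention. A back-of-envelope sketch, assuming that convention (the endpoint's real pacing logic is not shown here):

```rust
use std::time::Duration;

// ~200 WPM * 5 chars/word = 1000 chars/min -> 60ms per character.
// The 5-chars-per-word convention is an assumption, not Anky's spec.
fn per_char_delay(wpm: u32) -> Duration {
    let chars_per_min = wpm * 5;
    Duration::from_millis(60_000 / chars_per_min as u64)
}
```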
read the prompt #Added /stream/overlay: a standalone 1080x1920 HTML page for OBS Browser Source. Shows live writing text via SSE, life bar, 8-min progress bar, $ANKY branding, 88:88 stream timer, contract address ticker, and anky sticker images that accumulate as ankys are completed. ffmpeg stream stays as fallback.
read the prompt #Eliminated the USDC balance/credits system. All paid features now use x402 per-request wallet payments. Deleted /credits page, credits.rs, and balance deduction logic. API keys remain for agent identity. Free sessions remain.
read the prompt #Added POST /api/v1/prompt/create for agents to create prompts with x402/API key payment. Added writers_count tracking per prompt. Enhanced GET /api/v1/prompt/{id} and GET /api/v1/prompts with full agent-friendly JSON including created_by field and ?sort=popular support.
read the prompt #Fixed navbar dropdown dead zone with ::before bridge + touch support. Added hero copy above textarea. Fixed mobile writing with position:fixed body. Replaced share card overlay with inline share/copy buttons. Added session_token checkpoint validation. Redesigned /writings as two-panel layout with sidebar + detail.
read the prompt #Removed collapsible text: the full writing is always visible. Page scrolls naturally with arrow keys. Bigger fonts. Backspace gives visual feedback. "anky accomplished" badge at 8 minutes.
read the prompt #Added this page. Every prompt that shapes anky is stored as a txt file and linked here. Each entry has a permalink.
read the prompt #Removed API key payment path from video frame generation. All image generation now requires wallet payment via x402: send USDC, get art.
read the prompt #Video studio image generation now requires payment ($0.10/frame). Added a "Generate All" button: pay once for the whole batch.
read the prompt #Rewrote /generate/video into a 3-phase production studio: prep (script + image gen), record (camera + teleprompter + timer), review (playback + upload). MediaRecorder API, auto-scrolling teleprompter, camera PIP, immersive mode, multipart upload, video_recordings table.