Right now, a machine in a datacenter somewhere is looking at 112 images of Anky — a small blue alien creature — and learning what makes Anky, Anky.
The technique is called LoRA (Low-Rank Adaptation). Instead of retraining an entire image model from scratch (which would take weeks and cost thousands of dollars), LoRA freezes the base model and injects a small set of trainable low-rank parameters on top of it. We're training on top of FLUX.1-dev, one of the strongest open-weight image generation models available right now.
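The parameter savings are easy to see in miniature. Here's a sketch of the core idea for a single weight matrix — the sizes are hypothetical, not FLUX's real dimensions, and real LoRA targets the model's attention layers rather than a lone matrix:

```python
import numpy as np

# Hypothetical layer sizes for illustration, not FLUX's real dimensions.
d_out, d_in, rank = 1024, 1024, 16

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))        # frozen base weight (never updated)
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection (starts at zero,
                                          # so training begins from the base model)

def forward(x, scale=1.0):
    # Base path plus the low-rank update: W x + scale * B (A x)
    return W @ x + scale * (B @ (A @ x))

full = W.size            # parameters a full retrain of this layer would touch
lora = A.size + B.size   # parameters LoRA actually trains
print(full, lora, round(full / lora, 1))  # → 1048576 32768 32.0
```

At rank 16, the trainable parameters are a factor of 32 smaller than the layer itself — which is why the finished adapter fits in ~100MB instead of the base model's many gigabytes.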
After 3000 training steps, the model will have learned to associate the word "anky" with this specific visual character. You'll be able to prompt it: "anky meditating at sunset" or "anky in a forest" and it will generate coherent, recognizable versions of this being in any scene.
Anky exists as a writing companion — it appears after someone has written for 8+ minutes straight. But right now, every Anky image is generated from scratch by prompting a generic model with a description. The results are inconsistent. Sometimes beautiful, sometimes completely off.
With a trained LoRA, every image generated will be recognizably Anky. The character will have visual continuity. The blue alien being that emerged from thousands of human writing sessions will have a stable face.
112 images curated by hand from all the Ankys ever generated on anky.app. Each image was reviewed using a Tinder-style swipe interface — kept or discarded based on whether it felt true to the character. Each image has a caption describing what's in it, which teaches the model the connection between words and visual features.
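As an illustration — the post doesn't specify the exact dataset layout, and these filenames are made up — most LoRA trainers expect one caption file per image, sharing the image's basename:

```
dataset/
  anky_001.png
  anky_001.txt   ("anky floating in a purple nebula, arms open")
  anky_002.png
  anky_002.txt   ("close-up of anky smiling at the viewer, soft light")
```

Every caption contains the trigger word, so the model learns to bind "anky" to the character's visual features.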
A .safetensors file — the trained LoRA weights. Small enough to share (~100MB). It can be loaded into any FLUX-compatible tool: ComfyUI, Diffusers, fal.ai, Replicate. Anky becomes portable.
Sample images are generated every 500 steps so we can watch the model learn. By step 500 it should vaguely resemble Anky. By step 3000 it should be unmistakably Anky.
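For reference, here's roughly what that sampling schedule looks like in a trainer config. This is a sketch assuming an ai-toolkit-style YAML (the post doesn't name the trainer), so the keys and values are illustrative:

```yaml
sample:
  sample_every: 500          # generate test images every 500 training steps
  prompts:
    - "anky meditating at sunset"
    - "anky in a forest"
  guidance_scale: 3.5
  sample_steps: 28
```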
When training finishes, the LoRA file will be at /workspace/output/anky_flux_lora/anky_flux_lora.safetensors on the RunPod. To publish it:
1. Create the repo on huggingface.co
Go to huggingface.co/new, create a model repo called ankydotapp/anky-flux-lora, set it to public.
2. Upload from the RunPod terminal
source /workspace/venv/bin/activate
pip install huggingface_hub
python3 - <<'EOF'
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="/workspace/output/anky_flux_lora/anky_flux_lora.safetensors",
    path_in_repo="anky_flux_lora.safetensors",
    repo_id="ankydotapp/anky-flux-lora",
    repo_type="model",
)
print("done")
EOF
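If you'd rather skip the inline Python, recent versions of huggingface_hub also ship a CLI that does the same upload in one command (after authenticating with huggingface-cli login):

```shell
# One-line equivalent of the script above: repo id, local path, path in repo
huggingface-cli upload ankydotapp/anky-flux-lora \
  /workspace/output/anky_flux_lora/anky_flux_lora.safetensors \
  anky_flux_lora.safetensors
```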
3. Write a model card
On the Hugging Face repo page, edit the README to include the trigger word (anky), example prompts, and a sample image. This is what people see when they find the model.
Once published, anyone can generate Ankys in three ways:
ComfyUI (easiest for local use)
Download the .safetensors file and drop it into your ComfyUI models/loras/ folder. In your workflow, load FLUX.1-dev as the base model, add a Load LoRA node pointing at the Anky file, set strength to 0.8–1.0, and include anky in your prompt.
Python / Diffusers
from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("ankydotapp/anky-flux-lora")

image = pipe(
    "anky meditating under a giant ancient tree, soft light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("anky.png")
fal.ai (no GPU needed)
Go to fal.ai/models/fal-ai/flux-lora, paste the Hugging Face repo path ankydotapp/anky-flux-lora into the LoRA field, and type a prompt with anky in it. Generates in seconds and costs a few cents per image.
prompting tips
Always include anky in your prompt — that's the trigger word the model was trained on. Good prompts: "anky sitting by a fire at night", "anky with glowing eyes in a cosmic void", "close-up portrait of anky, soft light". The more descriptive the scene, the better. Avoid over-specifying the character's appearance — the LoRA already knows what anky looks like.