initial setup: ComfyUI + kohya_ss scripts, LoRA config, workflows

Johannes
2026-03-13 22:12:04 +01:00
commit 4c2972e7a2
9 changed files with 419 additions and 0 deletions

README.md

@@ -0,0 +1,72 @@
# animepics
Anime image generation + LoRA training setup.
## Stack
- **ComfyUI** — image generation UI
- **NoobAI-XL** — base anime model (SDXL-based, SFW+NSFW)
- **kohya_ss** — LoRA training
## Requirements
- Python 3.10+ installed and in PATH
- Git installed
- NVIDIA GPU with CUDA support (RTX 4070 or comparable recommended)
- ~20GB free disk space for models
## Setup
Run once to install everything:
```powershell
.\setup.ps1
```
This will:
1. Clone ComfyUI and install dependencies
2. Clone kohya_ss and install dependencies
3. Install ComfyUI custom nodes (AnimateDiff, ControlNet, etc.)
4. Download the NoobAI-XL base model + VAE
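The steps above amount to a few clones and pip installs. A minimal sketch of what the script does (illustrative only — the real `setup.ps1` handles venvs, custom nodes, and error checking; `$modelUrl` is a placeholder, not a real variable in the script):

```powershell
# Illustrative sketch, not the actual setup.ps1
git clone https://github.com/comfyanonymous/ComfyUI comfyui
git clone https://github.com/bmaltais/kohya_ss kohya_ss

# Per-repo virtual environments (gitignored)
python -m venv comfyui\venv
comfyui\venv\Scripts\pip install -r comfyui\requirements.txt

# Fetch the base model into the shared models directory
# ($modelUrl is a hypothetical placeholder for the download link)
Invoke-WebRequest -Uri $modelUrl -OutFile models\checkpoints\noobai-xl.safetensors
```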
## Launch
```powershell
# Start image generation UI (opens browser at localhost:8188)
.\launch_comfyui.ps1
# Start LoRA training UI (opens browser at localhost:7860)
.\launch_kohya.ps1
```
## LoRA Training
1. Put your training images in `training_data/<your_lora_name>/img/10_<trigger_word>/` — the leading `10` is kohya's per-image repeat count; raise it for small datasets
2. Copy `training/example_lora_config.toml` and adjust the paths, trigger word, and hyperparameters for your dataset
3. Launch kohya and use the GUI, or run `.\train_lora.ps1 <config_file>`
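For orientation, a config might look roughly like this (a sketch using common kohya_ss/sd-scripts option names — check `training/example_lora_config.toml` in this repo for the actual fields and values used here):

```toml
# Illustrative LoRA training config (values are examples, not recommendations)
pretrained_model_name_or_path = "models/checkpoints/noobai-xl.safetensors"
train_data_dir = "training_data/my_lora/img"
output_dir = "models/loras"
output_name = "my_lora"

network_module = "networks.lora"
network_dim = 32        # LoRA rank
network_alpha = 16

resolution = "1024,1024"
train_batch_size = 2
max_train_epochs = 10
learning_rate = 1e-4
mixed_precision = "bf16"
save_model_as = "safetensors"
```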
## Directory Structure
```
animepics/
├── comfyui/ # ComfyUI install (gitignored venv)
├── kohya_ss/ # kohya_ss install (gitignored venv)
├── models/ # shared model storage (gitignored)
│ ├── checkpoints/ # base models (.safetensors)
│ ├── loras/ # trained LoRAs
│ ├── vae/ # VAE models
│ ├── embeddings/ # textual inversions
│ └── controlnet/ # ControlNet models
├── training_data/ # LoRA training images (gitignored)
├── output/ # generated images (gitignored)
├── training/ # LoRA training configs
└── workflows/ # ComfyUI workflow JSON files
```
## Model Downloads
Base model (NoobAI-XL Vpred):
- https://civitai.com/models/833294
VAE:
- Already baked into NoobAI-XL; the standard `sdxl_vae.safetensors` from stabilityai works as an alternative