n8n AI Cost Calculator
Estimate your AI workflow costs before you build
Quick Presets
Content Repurposing
Blog to 4 platforms, 2 AI nodes, 20 runs/day
Email Classifier
Inbox triage, 1 AI node, 100 emails/day
Support Bot
Ticket response + escalation, 3 AI nodes, 50 runs/day
Web Scraping + AI
Extract + analyze, 2 AI nodes, 200 runs/day
Lead Scoring
Score + cold email, 2 AI nodes, 30 runs/day
RAG Chatbot
Retrieval + generation, 2 AI nodes, 500 runs/day
AI Model
GPT-4o ($2.50 / $10.00 per 1M tokens)
GPT-4o-mini ($0.15 / $0.60 per 1M tokens)
GPT-3.5-turbo ($0.50 / $1.50 per 1M tokens)
Claude 3.5 Sonnet ($3.00 / $15.00 per 1M tokens)
Claude 3 Haiku ($0.25 / $1.25 per 1M tokens)
Gemini 1.5 Flash ($0.075 / $0.30 per 1M tokens)
Gemini 1.5 Pro ($1.25 / $5.00 per 1M tokens)
Llama 3.1 70B via Groq (free tier)
Mistral 7B local / Ollama (FREE)
DeepSeek V2 ($0.14 / $0.28 per 1M tokens)
Average Input Tokens per Run
500
Average Output Tokens per Run
300
Number of AI Nodes in Workflow
2
Runs per Day
10
Cost per Run
$0.0000
Daily Cost
$0.00
Monthly Cost
$0.00
Yearly Cost
$0.00
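The arithmetic behind these four figures is straightforward. A minimal sketch, assuming each AI node consumes the average token counts entered above, and using GPT-4o's $2.50 / $10.00 per-1M-token rates from the model list with the default inputs (500 in, 300 out, 2 nodes, 10 runs/day):

```python
# Per-1M-token rates for the selected model (GPT-4o, from the list above).
INPUT_PRICE = 2.50    # $ per 1M input tokens
OUTPUT_PRICE = 10.00  # $ per 1M output tokens

def workflow_cost(input_tokens, output_tokens, ai_nodes, runs_per_day):
    """Return (per_run, daily, monthly, yearly) cost in dollars.

    Assumption: every AI node sends/receives the same average token counts,
    so per-run cost scales linearly with the node count.
    """
    per_node = (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000
    per_run = per_node * ai_nodes
    daily = per_run * runs_per_day
    return per_run, daily, daily * 30, daily * 365

per_run, daily, monthly, yearly = workflow_cost(500, 300, 2, 10)
# With the defaults: $0.0085/run, $0.085/day, $2.55/month, $31.03/year
```

Swapping in another model from the list only changes the two rate constants; the rest of the formula is identical.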
Model Comparison
Model
Cost / Run
Monthly
Yearly
Savings
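The comparison table applies that same formula across the whole price list. A sketch of how it could be generated (rates copied from the model list above; treating GPT-4o as the savings baseline is an assumption about what the Savings column compares against):

```python
# (input $/1M, output $/1M) — rates from the model list above; free models are 0.
MODELS = {
    "GPT-4o":            (2.50, 10.00),
    "GPT-4o-mini":       (0.15, 0.60),
    "Claude 3.5 Sonnet": (3.00, 15.00),
    "Gemini 1.5 Flash":  (0.075, 0.30),
    "Llama 3.1 70B":     (0.0, 0.0),
}

def monthly_cost(rates, input_tokens=500, output_tokens=300, nodes=2, runs_per_day=10):
    inp, out = rates
    per_run = nodes * (input_tokens * inp + output_tokens * out) / 1_000_000
    return per_run * runs_per_day * 30

baseline = monthly_cost(MODELS["GPT-4o"])
for name, rates in MODELS.items():
    cost = monthly_cost(rates)
    print(f"{name:<18} ${cost:>6.2f}/mo   saves ${baseline - cost:.2f}/mo vs GPT-4o")
```

At the defaults, GPT-4o-mini comes out to $0.153/month against GPT-4o's $2.55, which is where the roughly 94% savings figure in the tips below comes from.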
Cost Optimization Tips
Use gpt-4o-mini for most tasks
— roughly 94% cheaper than gpt-4o ($0.15 / $0.60 vs $2.50 / $10.00 per 1M tokens) with comparable quality for classification, extraction, and simple generation
Batch similar requests
— reduce API call overhead by processing multiple items in a single prompt
Cache repeated queries
— identical inputs produce identical outputs; cache them to avoid redundant API calls
Use free models for development
— Groq (free tier) and local Ollama models cost nothing; switch to paid models only in production
Keep system prompts concise
— every token in your system prompt is sent with every request; trim unnecessary instructions
Use structured output (JSON mode)
— reduces output tokens by eliminating verbose natural language wrappers
Set max_tokens wisely
— cap output length to prevent runaway costs on edge cases
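Of these tips, caching is the easiest to wire into a Code node or wrapper script. A minimal in-memory sketch; `call_model` is a hypothetical stand-in for whatever API call your workflow actually makes:

```python
import hashlib

_cache = {}  # in-memory; swap for Redis or workflow static data if you need persistence

def cached_completion(prompt, call_model):
    """Return the model's response, hitting the API only on a cache miss.

    `call_model` is a hypothetical stand-in for your real API call.
    Identical prompts return the cached result and cost nothing.
    """
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]
```

This is safe for deterministic tasks like classification and extraction, where identical inputs should map to identical outputs; for creative generation you may actually want fresh responses, so cache selectively.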