DevBolt

OpenAI vs Anthropic: API Prompt Format Comparison

OpenAI and Anthropic use different API structures for prompts. This guide compares them side by side so you can write prompts that work across both platforms or convert between them.


AI Prompt Template Builder

Build structured prompts with reusable templates, variables, and multi-format output for OpenAI, Anthropic, and Gemini APIs. All processing happens in your browser.

Output Preview

~177 tokens, 708 chars
[SYSTEM]
You are an expert code reviewer. Analyze the provided code for bugs, performance issues, security vulnerabilities, and adherence to best practices. Be specific and actionable in your feedback.

[USER]
## Role
You are an expert TypeScript code reviewer with 10+ years of experience.

## Context
Review the following TypeScript code from a web application project.

## Task
Analyze this code for:
1. Bugs and logical errors
2. Performance issues
3. Security vulnerabilities
4. Best practice violations
5. Readability improvements

## Output Format
For each issue found, provide:
- **Severity**: Critical / Warning / Info
- **Line**: The affected code
- **Issue**: What's wrong
- **Fix**: How to fix it


Message structure differences

OpenAI places system instructions inside the messages array as a message with role 'system'. Anthropic uses a top-level 'system' parameter separate from the messages array. Both support alternating user/assistant messages for multi-turn conversations. This structural difference means you cannot simply copy-paste API payloads between providers — the system prompt must be moved to/from the messages array.
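The move described above can be sketched as a small conversion function. This is an illustrative helper (the name `toAnthropic` and the default `max_tokens` value are invented for the example), not part of either SDK:

```typescript
// Convert an OpenAI-style payload to Anthropic's shape by lifting the
// system message(s) out of the messages array into the top-level
// `system` parameter.
type Msg = { role: string; content: string };

function toAnthropic(openai: { model: string; messages: Msg[] }) {
  const system = openai.messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  return {
    model: openai.model,
    system, // top-level parameter, not a message
    max_tokens: 1024, // Anthropic requires this to be set explicitly
    messages: openai.messages.filter((m) => m.role !== "system"),
  };
}

const payload = toAnthropic({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are an expert code reviewer." },
    { role: "user", content: "Review this function." },
  ],
});
// payload.system now holds the system prompt; payload.messages holds
// only the user turn.
```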

Model parameters compared

Both APIs support temperature (0-2 for OpenAI, 0-1 for Anthropic), a maximum output length (max_tokens in both), and top_p. OpenAI additionally supports frequency_penalty and presence_penalty for repetition control. Anthropic supports stop_sequences for custom stop tokens, where OpenAI uses the stop parameter. OpenAI treats max_tokens as optional (defaulting to the model's limit), while Anthropic requires max_tokens to be specified explicitly.
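One way to manage these differences is a provider-neutral config mapped onto each API's parameter names. The `Neutral` type and helper names below are invented for illustration; the parameter keys follow each provider's public docs:

```typescript
// A neutral sampling config and two mappers onto provider-specific keys.
type Neutral = { temperature: number; maxTokens: number; stop?: string[] };

function toOpenAIParams(c: Neutral) {
  // OpenAI accepts temperature up to 2 and uses `stop`.
  return { temperature: c.temperature, max_tokens: c.maxTokens, stop: c.stop };
}

function toAnthropicParams(c: Neutral) {
  // Anthropic caps temperature at 1, requires max_tokens, and uses
  // `stop_sequences` instead of `stop`.
  return {
    temperature: Math.min(c.temperature, 1),
    max_tokens: c.maxTokens,
    stop_sequences: c.stop,
  };
}

const cfg: Neutral = { temperature: 1.2, maxTokens: 500, stop: ["END"] };
const oa = toOpenAIParams(cfg); // temperature stays 1.2
const an = toAnthropicParams(cfg); // temperature clamped to 1
```

The clamp is the important detail: a temperature of 1.2 is valid for OpenAI but out of range for Anthropic.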

When to use which format

Use OpenAI format when targeting GPT-4o, GPT-4 Turbo, or o1/o3 models. Use Anthropic format for Claude Opus, Sonnet, and Haiku models. If building for multiple providers, use a template builder like DevBolt's to generate both formats from a single prompt. The core prompt content is the same — only the JSON wrapper changes.
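A minimal sketch of that single-source approach is below. The function name and model ids are illustrative placeholders, not DevBolt's actual output:

```typescript
// One prompt source, two JSON wrappers: the prompt text is shared,
// only the structure around it changes.
function buildPayloads(system: string, user: string) {
  const openai = {
    model: "gpt-4o",
    messages: [
      { role: "system", content: system }, // system lives in the array
      { role: "user", content: user },
    ],
  };
  const anthropic = {
    model: "claude-sonnet", // placeholder model id
    max_tokens: 1024, // required by Anthropic
    system, // system is a top-level parameter
    messages: [{ role: "user", content: user }],
  };
  return { openai, anthropic };
}

const { openai, anthropic } = buildPayloads(
  "You are an expert code reviewer.",
  "Review this function."
);
```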

Frequently Asked Questions

Can I use the same prompt for OpenAI and Anthropic?

The prompt content (system instructions, user messages) can be identical, but the JSON structure differs. OpenAI puts system messages in the messages array while Anthropic uses a top-level system parameter. A prompt template builder can generate both formats from one source.

Which API format does Gemini use?

Google Gemini uses a 'contents' array with 'parts' objects. Earlier API versions had no native system message field, so system instructions were sent as the first user message followed by a model acknowledgment; newer versions accept a dedicated systemInstruction field. The generationConfig object controls temperature and max output tokens (maxOutputTokens).
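The same example prompt in Gemini's request shape, using the first-user-turn workaround described above, might look like this (field names follow Google's REST docs; the acknowledgment text is an assumption):

```typescript
// Gemini request body: a contents array of role + parts turns, with the
// system instruction smuggled in as a leading user/model exchange.
const geminiBody = {
  contents: [
    { role: "user", parts: [{ text: "You are an expert code reviewer." }] },
    { role: "model", parts: [{ text: "Understood." }] }, // acknowledgment turn
    { role: "user", parts: [{ text: "Review this function." }] },
  ],
  generationConfig: { temperature: 0.7, maxOutputTokens: 1024 },
};
```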

Do OpenAI and Anthropic handle multi-turn differently?

Both require alternating user/assistant messages. OpenAI allows multiple consecutive system messages. Anthropic requires the first message to be from the user. Both support injecting previous conversation history for context.
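Those rules can be checked before sending a request. The validator below is an illustrative sketch of the constraints just described (first message from the user, roles strictly alternating), not an official SDK check:

```typescript
// Validate a conversation history against Anthropic-style multi-turn rules.
type Turn = { role: "user" | "assistant"; content: string };

function isValidAnthropicHistory(msgs: Turn[]): boolean {
  if (msgs.length === 0 || msgs[0].role !== "user") return false; // must start with user
  for (let i = 1; i < msgs.length; i++) {
    if (msgs[i].role === msgs[i - 1].role) return false; // no consecutive same-role turns
  }
  return true;
}

const ok = isValidAnthropicHistory([
  { role: "user", content: "Hi" },
  { role: "assistant", content: "Hello!" },
  { role: "user", content: "Review this code." },
]);
const startsWithAssistant = isValidAnthropicHistory([
  { role: "assistant", content: "Hello!" },
]);
// ok is true; startsWithAssistant is false
```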
