
TinyMCE AI: Features, Pricing, and Integration Guide

Written by

Coco Poley

Category

World of WYSIWYG

TinyMCE AI is a fully managed AI writing plugin that integrates directly into the TinyMCE rich text editor (RTE). It gives your application three built-in capabilities that offer a variety of workflow assistance tools:

  • AI Chat (conversational chat)
  • AI Quick Actions (text transformations)
  • AI Review (automated quality checks)

No custom AI backend, prompt engineering infrastructure, or model management is required from your development team. This FAQ has the answers you need to decide whether and when TinyMCE AI is right for your application.

| Question | Answer |
| --- | --- |
| What is TinyMCE AI? | A managed plugin with chat, quick actions, and review built in. |
| How is it different from AI Assistant? | Fully managed: no BYO API key for cloud, custom AI models for on-prem, and three modes instead of one. |
| Which AI models does it support? | Cloud: OpenAI, Anthropic (Claude), Gemini. On-prem: any OpenAI-compatible model. |
| Does it need a custom backend? | No. TinyMCE manages the infrastructure. |
| How much development effort does integration take? | Low to medium (JWT endpoint + plugin config). |
| How is it priced? | Credit-based add-on to paid TinyMCE plans. |
| Is content used to train AI models? | No. |
| Is a free trial available? | Yes, for 14 days. |

What Is TinyMCE AI?

TinyMCE AI is a complete AI writing environment built directly into the RTE: a native plugin that gives your users conversational AI chat, instant text transformations, translations, research, and automated document review inside your application, without leaving the editor. TinyMCE manages the AI infrastructure, and you configure what your users see and how they use it.

How is TinyMCE AI different from AI Assistant (legacy)?

The AI Assistant was a starting point: a single-action, prompt-based plugin that required you to bring your own API key and wire up your own AI backend. TinyMCE AI is a fundamentally different architecture. It's fully managed (no API key required), has full document awareness, and ships with advanced features out of the box. 

How does TinyMCE AI work?

What is AI Chat?

AI Chat is TinyMCE AI's conversational interface: a persistent chat panel that understands the full document, not just a selected passage. Users can have multi-turn conversations, access previous chat history, upload files, paste links for additional context, and choose which AI model they want to work with. The output inserts directly into the document, so nothing needs to leave the editor.

What are AI Quick Actions?

Quick Actions are context-aware commands that apply instant transformations to selected text: rewriting, expanding, shortening, tone adjustment, grammar correction, translation, and any custom actions your team defines. A user highlights a paragraph, triggers an action from the toolbar, and gets a result in place. The available actions are configurable, so you can shape them to fit your application's workflow.

What is AI Review?

AI Review runs automated quality checks across an entire document and surfaces inline suggestions for the user to preview and accept or reject. It covers grammar, style, clarity, tone, and readability, and offers five review types out of the box that you can configure. The preview-before-accept workflow keeps users in control.

Can users choose which AI model they interact with?

Yes. TinyMCE AI supports OpenAI (ChatGPT), Anthropic (Claude), and Google (Gemini). Model selection is configurable at two levels: developers can set which models are available in the first place, and users can choose between those models directly from the chat panel. You can lock users to a specific model or give them options.

Can TinyMCE AI be used outside the editor?

Yes. TinyMCE AI includes a REST API that lets you trigger AI capabilities outside the editor UI, useful for building custom workflows, background processing, or integrating AI outputs into parts of your application that don't use the editor interface directly. It's the same managed AI service accessed programmatically.

How do I integrate TinyMCE AI into my application?

What does TinyMCE AI integration actually require?

Three things: one plugin added to your TinyMCE configuration, one JWT endpoint on your backend to handle authentication, and a configuration block in your init script to define which features and models you're enabling. There's no AI infrastructure to build, no model to host, and no prompt engineering to set up from scratch. TinyMCE handles the backend and you handle the configuration.
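To make those three pieces concrete, here is a minimal configuration sketch. The option names used for the AI settings (`ai_auth_url`, `ai_models`) and the toolbar button names are illustrative assumptions, not the documented API; check the TinyMCE AI documentation for the exact keys your version expects.

```javascript
// A minimal integration sketch. The AI-specific option names below are
// hypothetical placeholders -- consult the TinyMCE AI docs for the real keys.
const editorConfig = {
  selector: '#editor',
  // 1. One plugin added to your TinyMCE configuration
  plugins: 'ai lists link table',
  toolbar: 'aichat aiquickactions aireview | bold italic | bullist numlist',
  // 2. One JWT endpoint on your backend for authentication (hypothetical key)
  ai_auth_url: '/api/tinymce-ai/token',
  // 3. A configuration block defining which features and models are enabled
  ai_models: ['gpt-5-mini', 'claude-haiku-4.5', 'gemini-2.5-flash']
};

// In the browser, you would hand this object to tinymce.init(editorConfig).
```

The point is the shape of the work: everything AI-related lives in the init object and one backend route, rather than in a service your team hosts.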

How does access control work?

Access is controlled via JWT tokens. Your backend issues tokens that determine which users have access to TinyMCE AI features, and TinyMCE validates those tokens against the managed service. You configure the JWT endpoint and afterward TinyMCE handles everything downstream. This means you can gate AI access by user role, subscription tier, or any other logic your application already manages.

Can I customize the AI prompts, actions, and review types?

Yes. The API gives you control over all three. You can define custom quick actions tailored to your users' workflows (e.g., a "localize for market" action for a CMS, a "simplify for reading level" action for an LMS), set custom system prompts to enforce brand voice, and configure which of the five review types run during AI Review. The defaults are functional out of the box; the customization layer is what lets you make them specific to your product.
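As a rough sketch of what a custom quick action definition might look like, here is a plausible shape. The field names (`id`, `label`, `prompt`) are hypothetical, not the documented schema; treat this as a picture of the idea, not the real API.

```javascript
// Hypothetical custom quick action definitions -- field names are
// illustrative assumptions, not the official TinyMCE AI schema.
const customActions = [
  {
    id: 'localize-for-market',
    label: 'Localize for market',
    // The instruction applied to the user's selected text:
    prompt: 'Adapt the selected text for the en-AU market: spelling, idioms, and units.'
  },
  {
    id: 'simplify-reading-level',
    label: 'Simplify for reading level',
    prompt: 'Rewrite the selected text at roughly an 8th-grade reading level.'
  }
];
```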

Why not just build AI into my editor myself using an LLM API?

You can, and you would get full control: over AI provider connections, the UX/UI, and every capability, because your team writes all of it. But you'll spend engineering cycles recreating infrastructure that TinyMCE AI already handles, and your team will maintain that integration over time. An RTE that understands its own document model can apply AI to selected text, preserve formatting, and maintain structure in ways a raw LLM API response cannot. And with a pre-built component like TinyMCE AI, maintenance, security, and bug fixes are owned by another team, saving time and money. The build-vs-buy math shifts quickly once you itemize the full scope.

How does TinyMCE AI pricing work?

What is the difference between credits and tokens in TinyMCE AI?

Credits are the unit you purchase. Tokens are what the underlying LLM consumes per request. Each AI interaction (a chat message, a quick action, a review run) draws against your credit balance based on the model used and the length of the input and output. Credits give you a predictable budget to plan around, regardless of which model your users choose or how they're using the AI.

How many credits does each AI model consume?

Credit consumption varies by model and by whether you're sending input tokens, receiving output tokens, or using cached context. Output tokens cost significantly more than input tokens across all providers. That's worth keeping in mind if your users tend toward longer generated responses.

Here's the full breakdown:

Anthropic

| Model | Input (per 1M tokens) | Cache write (per 1M) | Cache read (per 1M) | Output (per 1M) |
| --- | --- | --- | --- | --- |
| Claude Sonnet 4.6 | 7.5M credits | 9.375M credits | 0.75M credits | 37.5M credits |
| Claude Sonnet 4.5 | 7.5M credits | 9.375M credits | 0.75M credits | 37.5M credits |
| Claude Haiku 4.5 | 2.5M credits | 3.125M credits | 0.25M credits | 12.5M credits |

Google

| Model | Input (per 1M tokens) | Output (per 1M tokens) | Cached input (per 1M) |
| --- | --- | --- | --- |
| Gemini 3 Flash | 1.25M credits | 7.5M credits | 0.125M credits |
| Gemini 2.5 Flash | 0.75M credits | 6.25M credits | 0.075M credits |

OpenAI

| Model | Input (per 1M tokens) | Cached input (per 1M) | Output (per 1M) |
| --- | --- | --- | --- |
| GPT-5.2 | 4.375M credits | 0.4375M credits | 35M credits |
| GPT-5.1 | 3.125M credits | 0.3125M credits | 25M credits |
| GPT-5 | 3.125M credits | 0.3125M credits | 25M credits |
| GPT-5 Mini | 0.625M credits | 0.0625M credits | 5M credits |
| GPT-4.1 | 5M credits | 1.25M credits | 20M credits |
| GPT-4.1 Mini | 1M credits | 0.25M credits | 4M credits |

If you're estimating usage, lighter models like Claude Haiku 4.5, Gemini 2.5 Flash, or GPT-5 Mini draw fewer credits per interaction, which matters for high-volume applications where the task doesn't require the most powerful model available.
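The rates above make back-of-the-envelope estimates straightforward. The sketch below multiplies token counts by the per-1M rates from the tables; it ignores caching and any billing details beyond those rates, so treat the results as rough planning numbers.

```javascript
// Rough credit estimate from the "credits per 1M tokens" rates above.
// Ignores cached input and any provider-side billing nuances.
const RATES = {
  'claude-haiku-4.5': { input: 2.5e6, output: 12.5e6 },
  'gpt-5-mini':       { input: 0.625e6, output: 5e6 },
  'gemini-2.5-flash': { input: 0.75e6, output: 6.25e6 }
};

function estimateCredits(model, inputTokens, outputTokens) {
  const r = RATES[model];
  return (inputTokens * r.input) / 1e6 + (outputTokens * r.output) / 1e6;
}

// A quick action sending ~2,000 tokens of context and getting ~500 back
// on Claude Haiku 4.5: 5,000 + 6,250 = 11,250 credits.
estimateCredits('claude-haiku-4.5', 2000, 500);
```

Note how the output side dominates even at a 4:1 input-to-output ratio, which is why response length matters more than prompt length for budgeting.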

What are the TinyMCE AI pricing tiers?

TinyMCE AI is available as a monthly add-on for online, cloud-hosted customers at three credit volume levels:

| Tier | Monthly price | Included credits | Overage rate |
| --- | --- | --- | --- |
| Starter | $160/month | 100 million credits | $7.50 per 5 million |
| Growth | $360/month | 400 million credits | $6.00 per 5 million |
| Scale | $560/month | 800 million credits | $5.00 per 5 million |

Enterprise pricing is available for custom volumes. Contact sales to talk about your specific needs.
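To see how overage plays out, here is a quick Starter-tier calculation using the figures above. It assumes overage is billed in whole 5-million-credit blocks, rounded up; confirm the actual rounding behavior with the TinyMCE team.

```javascript
// Estimating a monthly bill on the Starter tier (figures from the table above).
// Assumes overage rounds up to whole 5M-credit blocks -- an assumption to verify.
function monthlyBill(creditsUsed) {
  const base = 160;        // $160/month
  const included = 100e6;  // 100 million credits included
  const overage = Math.max(0, creditsUsed - included);
  return base + Math.ceil(overage / 5e6) * 7.5;
}

monthlyBill(90e6);   // 160 -- usage stays within the included credits
monthlyBill(120e6);  // 160 + 4 * 7.50 = 190
```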

Annual pricing is also available for TinyMCE AI. Included credits and overage rates stay the same, but the effective monthly payment is discounted at the annual rate.

| Tier | Annual price | Included credits | Overage rate |
| --- | --- | --- | --- |
| Starter | $133/month | 100 million credits | $7.50 per 5 million |
| Growth | $300/month | 400 million credits | $6.00 per 5 million |
| Scale | $467/month | 800 million credits | $5.00 per 5 million |

Is TinyMCE AI included in my existing TinyMCE plan?

No. TinyMCE AI is a separate add-on, not bundled into any plan tier by default. It's available as an add-on on any paid TinyMCE plan (Essential, Professional, or Enterprise). Your editor plan and your AI credit tier are two independent costs.

Is self-hosting available?

On-premise support is currently in early access and is expected to be available for general release soon. Interested in evaluating on-prem TinyMCE AI today? Contact the TinyMCE team to get added to the early access group.

Is there a free trial?

Yes, you can try TinyMCE AI free for 14 days. Start your trial today.

How does TinyMCE AI compare to Tiptap and Froala?

All three editors offer some form of AI capability, but they take fundamentally different approaches. 

TinyMCE AI vs. Tiptap AI and Froala: what's the implementation difference?

The clearest way to see the difference is at the implementation level:

 

| Feature | TinyMCE AI | Tiptap AI | Froala |
| --- | --- | --- | --- |
| AI chat | Built in; multi-turn with persistent history and document context | Available via AI Toolkit (beta); requires custom implementation | Available via Ask Anything popup; requires your own AI backend to power it |
| Quick actions | Built in; grammar, tone, length, translation, and custom actions included | Pre-configured commands via AI Generation; custom commands require additional build work | Available via Edit Smarter dropdown; tone and translation options configurable, but the backend is yours to build |
| Document review | Built in; 5 review types configurable out of the box | Available via AI Toolkit (beta); not pre-built | Not available |
| Model support | OpenAI, Anthropic (Claude), Gemini; all selectable at integrator or user level | OpenAI by default; custom LLMs via resolver functions | LLM-agnostic; any model your backend supports |
| Managed backend | Yes; TinyMCE handles infrastructure and model updates | No; developers wire their own LLM and API | No; you build, host, and maintain the AI backend the plugin points at |
| Custom UI required | No; toolbar buttons and sidebar panel included by default | Yes; developers build and maintain all UI components | Partial; toolbar buttons included, but custom tone options, translation targets, and prompt templates require additional build work |
| Implementation effort | Low–Medium (JWT setup + plugin config) | Medium–Extra Large (scales with extension choice; full LLM wiring and UI build required) | Extra Large (plugin handles editor UI only; AI backend, hosting, and maintenance are fully on your team) |
| On-premises support | Cloud only today, with on-prem in early access | Self-hostable; developer manages own infrastructure | Self-hostable; developer manages own infrastructure |

TinyMCE AI vs. Froala AI Assist: what's the difference?

Froala's AI Assist provides the editor-side wiring: the UI hooks where AI outputs can be inserted. But your team owns everything behind it. It's a lower-level building block that enables AI capabilities. If your team has strong AI infrastructure already and wants editor-level control over every detail, that tradeoff may be worthwhile. If you're trying to ship AI features without building an AI backend, it may not be the best fit.

When would Tiptap be a better choice than TinyMCE AI?

If you're building an AI-native application where the document structure is complex, deeply custom, or schema-driven, and you need granular programmatic control over how AI interacts with specific node types, Tiptap's extension architecture is better suited to that level of customization. The tradeoff is real implementation overhead: you'll build and maintain the UI, wire your own LLM, and manage the backend yourself. 

📖 Read more: Check out AI Tools in TinyMCE, Froala, and Tiptap Compared or Comparing Implementation for AI: TinyMCE vs Tiptap to learn more.

What can developers build with TinyMCE AI?

TinyMCE AI for CMS platforms

In a CMS, the editor is where content lives, and where most of the friction in the publishing workflow happens. TinyMCE AI lets users generate drafts, adjust tone for different audiences, ask for improvement suggestions, run translations, and run a quality review before publishing, all without leaving the editor interface.

TinyMCE AI for LMS platforms

In a learning management system, content quality directly affects learner outcomes. TinyMCE AI gives instructors and content designers tools to summarize source material, research topics by uploading reference files or pasting URLs directly into the chat, and run readability checks to match content to the appropriate reading level. Document context awareness means the AI understands the whole course or module being authored, not just the paragraph in front of it.

TinyMCE AI for document management and enterprise workflows

For longer-form documents like policies, legal drafts, and compliance materials, AI Review is where TinyMCE AI shines. Running automated quality checks across a full document and surfacing inline suggestions before a document goes to review reduces editing cycles and catches consistency issues that manual review tends to miss. Custom quick actions can help enforce regulatory language standards or flag terminology that doesn't meet compliance requirements.

Beyond legal and compliance standards, there are more uses for TinyMCE AI in enterprise workflows. For organizations where branding is key, AI Review can help users maintain brand consistency and make sure their work meets the right criteria when they’re editing content. 

Does TinyMCE AI use my content to train AI Models?

No. Your content is never used to train AI models. TinyMCE AI routes requests through a secure, managed infrastructure that does not retain or learn from your data. Your users' content is processed to generate a response, not stored to improve the model. This applies regardless of which underlying model (ChatGPT, Claude, or Gemini) handles a given request.

Try TinyMCE AI for free

If you're evaluating AI for your rich text editor, the fastest way to assess fit is to see it running in your application. TinyMCE AI's 14-day free trial gives you full access to AI Chat, AI Quick Actions, and AI Review. Getting there takes one plugin, one JWT endpoint, and a configuration block in your init script. Start your free trial today.


Coco Poley is a creative content marketer and writer with over 10 years of experience in technology and storytelling. Currently a Technical Content Marketer at TinyMCE, she crafts engaging content strategies, blogs, tutorials, and resources to help developers use TinyMCE effectively. Coco excels at transforming complex technical ideas into accessible narratives that drive audience growth and brand visibility.

