AI proxy server reference guide

This plugin is only available as a paid add-on to a TinyMCE subscription.

What is an AI proxy server?

A proxy is a server that sits between the browser running the TinyMCE editor and the AI Large Language Model (LLM) service; for example, the OpenAI server.

With a proxy server in place, TinyMCE does not communicate directly with the AI LLM. Instead, requests go through the proxy.

The LLM’s responses likewise pass through the proxy on their way back to TinyMCE.

The proxy adds a layer of flexibility, allowing extra processing both before the request is sent to the LLM and before the response is returned to the editing session in TinyMCE.
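
As a minimal sketch, the proxy can be a small web service that accepts the editor’s request, attaches the secret API key on the server side, and relays the result. The Express framework, the /api/ai route, and the port below are illustrative choices, not part of TinyMCE itself.

    import express from "express";

    const app = express();
    app.use(express.json());

    // The OpenAI API key stays on the server; the browser never sees it.
    const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

    // Illustrative route: the TinyMCE AI Assistant posts its requests here.
    app.post("/api/ai", async (req, res) => {
      // Forward the editor's request to the OpenAI Chat Completions API,
      // adding the secret key as the request leaves the proxy.
      const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${OPENAI_API_KEY}`,
        },
        body: JSON.stringify(req.body),
      });

      // Relay the LLM's response back to the TinyMCE editing session.
      res.status(upstream.status).json(await upstream.json());
    });

    app.listen(3000);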

Why use a proxy service?

  • To hide the OpenAI API key used for queries.

  • To allow multiple queries from different TinyMCE sessions to share a single OpenAI key.

  • To optionally validate logged-in users and reject unauthorized use of the OpenAI service.

    • Recommended for production systems.

  • To optionally apply extra processing before the request is sent to OpenAI.

    • For example, filtering for and rejecting abusive content before the OpenAI LLM processes it.

  • To optionally apply extra processing when the server responds to TinyMCE.

    • For example, modifying or re-formatting the response.

What do I need to set up a proxy service with TinyMCE?

Sign up for the TinyMCE AI Assistant

The TinyMCE AI Assistant provides the end-user UI components and workflows. These enable end users to make AI requests, modify and fine-tune the results, and insert the enhanced content back into the editor. The plugin also provides the server request component that sends user requests to the AI LLM service.

Sign-up details are available here.
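
As a rough sketch of the editor-side wiring, the configuration below points the AI Assistant’s ai_request option at the illustrative /api/ai proxy route from the sketch above; consult the AI Assistant documentation for the exact request and respondWith signatures.

    tinymce.init({
      selector: "textarea",
      plugins: "ai",
      // Send each AI request to the proxy rather than to OpenAI directly.
      ai_request: (request, respondWith) => {
        respondWith.string((signal) =>
          fetch("/api/ai", {
            method: "POST",
            signal,
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({
              // The model name is illustrative; the proxy could also
              // inject a server-side default instead.
              model: "gpt-4o-mini",
              messages: [{ role: "user", content: request.prompt }],
            }),
          })
            .then((response) => response.json())
            // Extract the generated text from the relayed payload.
            .then((data) => data.choices[0].message.content)
        );
      },
    });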

Select a proxy server of your choice

Choose a proxy server that works for your implementation.

For demonstration purposes, we use Envoy as a reference.

The proxy server can work with other services to pre-process requests before sending the final request with the OpenAI API key to the OpenAI server.

The proxy server can also provide an extra layer of processing when receiving a response from the OpenAI server, before delivering it to the TinyMCE editing session.
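
As a sketch of what that extra processing might look like, the route below (continuing the Express example) rejects oversized prompts before they are sent upstream and trims the response down to just the generated text on the way back. The size limit and the { text: ... } response shape are assumptions for illustration.

    // Illustrative pre- and post-processing inside the proxy route.
    app.post("/api/ai", async (req, res) => {
      const content = req.body?.messages?.at(-1)?.content ?? "";

      // Pre-process: reject oversized prompts before they reach OpenAI
      // (the 4000-character limit is an arbitrary example).
      if (content.length > 4000) {
        res.status(413).json({ error: "Prompt too long" });
        return;
      }

      const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify(req.body),
      });
      const data = await upstream.json();

      // Post-process: return only the trimmed generated text instead of
      // the full Chat Completions payload.
      res.json({ text: data.choices?.[0]?.message?.content?.trim() ?? "" });
    });

With a response shaped like this, the editor-side ai_request callback would read data.text rather than the raw Chat Completions payload.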

OpenAI Chat Completions API

This is the service that returns the content generated by your API query.

You need an account with OpenAI and the OpenAI API key that comes with the account.

The OpenAI API key is required to make requests to the OpenAI service.
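
For reference, a minimal standalone call to the Chat Completions endpoint looks like the sketch below. The endpoint, headers, and response shape follow OpenAI’s published API; the model name and prompt are illustrative.

    // Minimal Chat Completions request. The model name is illustrative;
    // use a model your OpenAI account has access to.
    const response = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini",
        messages: [{ role: "user", content: "Improve this sentence: ..." }],
      }),
    });

    const data = await response.json();
    // The generated content is returned under choices[0].message.content.
    console.log(data.choices[0].message.content);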

An authentication endpoint

This is recommended in all circumstances, and for production systems in particular.

Before sending any requests to the OpenAI server, check if the user is allowed to make these requests. If the request cannot be authenticated, reject it.

An authentication service is needed to provide this mechanism.
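
One common shape for this check, continuing the Express sketch, is middleware that runs before the proxy route. verifySession and handleAiRequest below are hypothetical placeholders for your own authentication service and the proxy handler.

    import type { Request, Response, NextFunction } from "express";

    // Hypothetical client for your authentication service.
    declare function verifySession(token: string): Promise<{ id: string } | null>;

    // Illustrative gate: every AI request must carry a valid session
    // token, or it is rejected before anything is sent to OpenAI.
    async function requireAuth(req: Request, res: Response, next: NextFunction) {
      const token = req.headers.authorization?.replace("Bearer ", "");
      const user = token ? await verifySession(token) : null;
      if (!user) {
        res.status(401).json({ error: "Unauthorized" });
        return;
      }
      next();
    }

    // Attach the gate in front of the proxy route; handleAiRequest is
    // the proxy handler from the earlier sketches.
    app.post("/api/ai", requireAuth, handleAiRequest);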

OpenAI Moderation API

OpenAI has an abuse policy. Frequent violation of this policy can lead to account suspension.

To prevent this, a moderation service can pre-filter abusive content and reject the request.
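
Sketched below is one way to build that pre-filter on the OpenAI Moderation API. The endpoint and the results[0].flagged response field follow OpenAI’s published API; the surrounding wiring is illustrative.

    // Ask the OpenAI Moderation API whether the input violates policy.
    async function isFlagged(input: string): Promise<boolean> {
      const response = await fetch("https://api.openai.com/v1/moderations", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({ input }),
      });
      const data = await response.json();
      // Each result carries a boolean "flagged" verdict.
      return data.results[0].flagged;
    }

    // In the proxy route, before forwarding to Chat Completions:
    //   if (await isFlagged(content)) {
    //     res.status(400).json({ error: "Request rejected by moderation" });
    //     return;
    //   }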

Below is a flow diagram that illustrates how the above components work together to provide the AI-enhanced experience.

OpenAI Proxy Call Flows

Example Reference Application

An example reference application has been created to demonstrate the TinyMCE AI Assistant.

The reference application provides a technical walkthrough of the source code, highlighting the key integration points between the required components described in the Proxy Call Flows diagram above.

The technical walkthrough can be found in the project documentation.

The reference application is packaged in a Docker container, so you will need Docker to run it.