AI Assistant plugin
This plugin is only available as a paid add-on to a TinyMCE subscription.
This feature is only available for TinyMCE 6.6 and later.
The AI Assistant plugin allows a user to interact with registered AI APIs by sending queries and viewing responses within a TinyMCE editor dialog.
Once a response is generated and displayed within the dialog, the user can choose to either:
- insert it into the editor at the current selection;
- create another query to further refine the response generated by the AI; or
- close the dialog and discard the returned response.
Users can retrieve a history of their conversations with the AI, including any discarded responses, using the `getThreadLog` API.
On the absence of an AI Assistant demo
This initial release of the AI Assistant developer documentation does not include an in-page working demo.
Basic setup
To add the AI Assistant plugin to the editor, add `ai` to the `plugins` option and the `ai_request` function to the editor configuration.
For example, interfacing with the OpenAI Completions API:
// This example stores the API key in the client side integration. This is not recommended for any purpose.
// Instead, an alternate method for retrieving the API key should be used.
const api_key = '<INSERT_API_KEY_HERE>';

tinymce.init({
  selector: 'textarea', // Change this value according to your HTML
  plugins: 'ai',
  toolbar: 'aidialog aishortcuts',
  ai_request: (request, respondWith) => {
    const openAiOptions = {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${api_key}`
      },
      body: JSON.stringify({
        model: 'gpt-3.5-turbo',
        temperature: 0.7,
        max_tokens: 800,
        messages: [{ role: 'user', content: request.prompt }],
      })
    };
    respondWith.string((signal) => window.fetch('https://api.openai.com/v1/chat/completions', { signal, ...openAiOptions })
      .then((response) => response.ok ? response.json() : response.text())
      .then((data) => {
        if (typeof data === 'string') {
          return Promise.reject(`Failed to communicate with the ChatGPT API. ${data}`);
        } else if (data.error) {
          return Promise.reject(`Failed to communicate with the ChatGPT API because of ${data.error.type} error: ${data.error.message}`);
        } else {
          // Extract the response content from the data returned by the API
          return data?.choices[0]?.message?.content?.trim();
        }
      })
    );
  }
});
Using a proxy server with AI Assistant
As per OpenAI’s best practices for API key safety, deploying an API key in a client-side environment is specifically not recommended.
Using a proxy server avoids exposing the key in the client, reducing financial and service-uptime risks.
A proxy server can also provide flexibility by allowing extra processing before the request is sent to an LLM AI endpoint and before returning the response to the user.
See the AI Proxy Server reference guide for information on how to set up a proxy server for use with the AI Assistant.
The AI Proxy Server reference guide is, as its name notes, a reference. There is no single proxy server setup that is right or correct for all circumstances, and other setups may be better for your use-case.
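As an illustration only, a minimal server-side handler might look like the sketch below. All names here (`handleAiProxy`, the `OPENAI_KEY` environment variable, and the injectable `fetchImpl` parameter) are assumptions for the sketch, not part of the plugin or the reference guide:

```javascript
// Minimal sketch of a server-side proxy handler. The API key stays in the
// server environment and is never shipped to the browser. `fetchImpl` is
// injectable so the handler can be exercised without a network call.
async function handleAiProxy(requestBody, fetchImpl = fetch) {
  const response = await fetchImpl('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OPENAI_KEY}`
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: requestBody.prompt }]
    })
  });
  if (!response.ok) {
    throw new Error(`Upstream error: ${response.status}`);
  }
  const data = await response.json();
  return data?.choices?.[0]?.message?.content ?? '';
}
```

The client-side `ai_request` function would then post the prompt to this handler's route, with no `Authorization` header or key in the browser at all.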
ai_request
The AI Assistant uses the `ai_request` function to send prompts to an AI endpoint and display the responses.
The `ai_request` function is called each time a user submits a prompt. Prompts are only submitted while the AI Assistant dialog is open, whether by typing in the dialog input field or by using an AI Assistant shortcut.
Once a response is provided, the content returned by the `ai_request` function is displayed within the dialog.
This option is required to use the AI Assistant plugin.
Type: Function
Example: using `ai_request` and the `stream` callback to interface with the OpenAI Completions API
const fetchApi = import("https://unpkg.com/@microsoft/fetch-event-source@2.0.1/lib/esm/index.js").then(module => module.fetchEventSource);

// This example stores the API key in the client side integration. This is not recommended for any purpose.
// Instead, an alternate method for retrieving the API key should be used.
const api_key = '<INSERT_API_KEY_HERE>';

tinymce.init({
  selector: 'textarea', // Change this value according to your HTML
  plugins: 'ai',
  toolbar: 'aidialog aishortcuts',
  ai_request: (request, respondWith) => {
    respondWith.stream((signal, streamMessage) => {
      // Adds each previous query and response as individual messages
      const conversation = request.thread.flatMap((event) => {
        if (event.response) {
          return [
            { role: 'user', content: event.request.query },
            { role: 'assistant', content: event.response.data }
          ];
        } else {
          return [];
        }
      });

      // Forms the new query sent to the API
      const content = request.context.length === 0 || conversation.length > 0
        ? request.query
        : `Question: ${request.query} Context: """${request.context}"""`;

      const messages = [
        ...conversation,
        { role: 'system', content: request.system.join('\n') },
        { role: 'user', content }
      ];

      const requestBody = {
        model: 'gpt-3.5-turbo',
        temperature: 0.7,
        max_tokens: 800,
        messages,
        stream: true
      };

      const openAiOptions = {
        signal,
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${api_key}`
        },
        body: JSON.stringify(requestBody)
      };

      // This function passes each new message into the plugin via the `streamMessage` callback.
      const onmessage = (ev) => {
        const data = ev.data;
        if (data !== '[DONE]') {
          const parsedData = JSON.parse(data);
          const firstChoice = parsedData?.choices?.[0];
          const message = firstChoice?.delta?.content;
          if (message) {
            streamMessage(message);
          }
        }
      };

      const onerror = (error) => {
        // Stop the operation and do not allow fetch-event-source to retry
        throw error;
      };

      // Use Microsoft's fetch-event-source library to work around the 2000 character
      // limit of the browser `EventSource` API, which requires data to be sent in query strings
      return fetchApi
        .then(fetchEventSource =>
          fetchEventSource('https://api.openai.com/v1/chat/completions', {
            ...openAiOptions,
            openWhenHidden: true,
            onmessage,
            onerror
          })
        );
    });
  }
});
The request object
The `ai_request` function is given a request object as the first parameter, which has these fields:
query
- The user-submitted prompt as a string, without any context. This is either the text as written by the user in the AI Assistant dialog, or the prompt as written in the shortcut object, when selected by the user from the shortcuts menu.

context
- The current selection as a string, if any, or the current response displayed in the dialog. This can be combined with the `query` in a custom manner by the integrator to form a request. The current selection will be provided in HTML format, as will any displayed HTML response, and will increase token use.

thread
- An array containing the history of requests and responses within the dialog, provided as an array of objects. This thread array is the same as is recorded in the `getThreadLog` API, for the current instance of the AI Assistant dialog.

system
- An array of messages which provide instructions for handling the user prompts. The default `system` array:

[ 'Answer the question based on the context below.',
  'The response should be in HTML format.',
  'The response should preserve any HTML formatting, links, and styles in the context.' ]

prompt
- The submitted prompt as a string, combined with any current selection (when first opening the dialog) or the previous response. The AI Assistant plugin provides a customised format which combines these strings, though integrators are free to build their own with any of the other provided fields in the request object.
The default prompt and token use
The AI Assistant automatically prepends the `system` instructions and any context to the user's query to form the default `prompt` string.
This string is intended to improve the UX, increase the response accuracy, and simplify the initial integration of the AI Assistant plugin. However, this combined string uses more tokens than the `query` alone.
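The way `query` and `context` are combined is ultimately up to the integrator. As a sketch, here is a hypothetical helper (not part of the plugin) mirroring the logic used in the streaming example above, where only the first message of a conversation embeds the selection:

```javascript
// Hypothetical helper: embed the context only when one exists and the
// conversation has no prior responses, matching the streaming example.
function buildContent(request) {
  const isFirstMessage = request.thread.every((event) => !event.response);
  return request.context.length > 0 && isFirstMessage
    ? `Question: ${request.query} Context: """${request.context}"""`
    : request.query;
}
```

Follow-up queries then send the bare `query`, since the earlier context is already present in the conversation history.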
The respondWith object
The `ai_request` function provides an object containing two separate callbacks as the second parameter. These callbacks allow the integrator to choose how the response from the API will be displayed in the AI Assistant dialog.
Both of these callbacks expect a Promise which indicates that the response is either finished (when resolved) or interrupted (when rejected). The return type of the promise differs between callbacks.
Both callbacks provide a signal parameter.
signal
- If the user closes the dialog, or aborts a streaming response, the signal parameter can be used to abort the request.
The respondWith.string callback
The `respondWith.string` callback provides functionality for displaying the entire response from the AI at once.
The final response should be returned as a string using Promise.resolve(). This string will be displayed within the AI Assistant dialog.
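As a minimal sketch of this contract (the canned echo response is an assumption for illustration, useful for smoke-testing an integration before a real endpoint is wired up):

```javascript
// Minimal ai_request using respondWith.string: resolves a canned string so
// the dialog flow can be exercised without a live AI endpoint.
const ai_request = (request, respondWith) =>
  respondWith.string((signal) =>
    signal.aborted
      ? Promise.reject(new Error('Request aborted'))
      : Promise.resolve(`Echo: ${request.prompt}`)
  );
```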
The respondWith.stream callback
The `respondWith.stream` callback provides functionality for displaying streamed responses from the AI.
This callback expects a Promise which resolves once the AI has finished streaming the response.
This callback provides a `streamMessage` callback as the second parameter, which should be called on each new partial message so the message can be displayed in the AI Assistant dialog immediately.

streamMessage
- Takes a string and appends it to the content displayed in the AI Assistant dialog.
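To illustrate the shape of this contract, here is a sketch in which the "stream" is simulated by iterating over fixed chunks; a real integration would forward chunks from a server-sent-events stream, as in the fetch-event-source example earlier:

```javascript
// Sketch of respondWith.stream: each chunk is handed to streamMessage as it
// "arrives", and the returned promise resolves when the stream is finished.
const ai_request = (request, respondWith) =>
  respondWith.stream(async (signal, streamMessage) => {
    const simulatedChunks = ['This ', 'arrives ', 'piece ', 'by ', 'piece.'];
    for (const chunk of simulatedChunks) {
      if (signal.aborted) {
        throw new Error('Stream aborted');
      }
      streamMessage(chunk);
    }
  });
```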
ai_shortcuts
The `ai_shortcuts` option controls the list of AI Assistant shortcuts available in the AI Shortcuts toolbar button and menu item.
This option can be configured with an array to present a customised set of AI Assistant shortcuts. It can also be set to a Boolean value to control the use of the default list of AI Assistant shortcuts.
When not specified, or set to `true`, the AI Assistant shortcuts toolbar button and menu item are present and display the default set of shortcuts included with the AI Assistant.
When set to `[]` (an empty array) or `false`, the AI Assistant shortcuts toolbar button and menu item are not present in the TinyMCE instance.
When configured with an instance-specific object array, the AI Assistant shortcuts toolbar button and menu item are present, and display the configured shortcuts when activated.
Type: Array of Objects, or Boolean
Default value:
[
  { title: 'Summarize content', prompt: 'Provide the key points and concepts in this content in a succinct summary.' },
  { title: 'Improve writing', prompt: 'Rewrite this content with no spelling mistakes, proper grammar, and with more descriptive language, using best writing practices without losing the original meaning.' },
  { title: 'Simplify language', prompt: 'Rewrite this content with simplified language and reduce the complexity of the writing, so that the content is easier to understand.' },
  { title: 'Expand upon', prompt: 'Expand upon this content with descriptive language and more detailed explanations, to make the writing easier to understand and increase the length of the content.' },
  { title: 'Trim content', prompt: 'Remove any repetitive, redundant, or non-essential writing in this content without changing the meaning or losing any key information.' },
  { title: 'Change tone', subprompts: [
    { title: 'Professional', prompt: 'Rewrite this content using polished, formal, and respectful language to convey professional expertise and competence.' },
    { title: 'Casual', prompt: 'Rewrite this content with casual, informal language to convey a casual conversation with a real person.' },
    { title: 'Direct', prompt: 'Rewrite this content with direct language using only the essential information.' },
    { title: 'Confident', prompt: 'Rewrite this content using compelling, optimistic language to convey confidence in the writing.' },
    { title: 'Friendly', prompt: 'Rewrite this content using friendly, comforting language, to convey understanding and empathy.' },
  ] },
  { title: 'Change style', subprompts: [
    { title: 'Business', prompt: 'Rewrite this content as a business professional with formal language.' },
    { title: 'Legal', prompt: 'Rewrite this content as a legal professional using valid legal terminology.' },
    { title: 'Journalism', prompt: 'Rewrite this content as a journalist using engaging language to convey the importance of the information.' },
    { title: 'Medical', prompt: 'Rewrite this content as a medical professional using valid medical terminology.' },
    { title: 'Poetic', prompt: 'Rewrite this content as a poem using poetic techniques without losing the original meaning.' },
  ] },
]
Translations and changes
The default AI Assistant shortcuts are only available in English. They have not been translated into any other languages, and switching TinyMCE to a language other than English does not change the default AI Assistant shortcuts. Also, the default AI Assistant shortcuts are subject to change. If you prefer to keep these shortcuts, include them within your integration.
Example: using `ai_shortcuts` to present a customised set of AI Assistant shortcuts
tinymce.init({
  selector: 'textarea', // Change this value according to your HTML
  plugins: 'ai',
  toolbar: 'aidialog aishortcuts',
  ai_request: (request, respondWith) => respondWith.string(() => Promise.reject("See docs to implement AI Assistant")),
  ai_shortcuts: [
    { title: 'Screenplay', prompt: 'Convert this to screenplay format.' },
    { title: 'Stage play', prompt: 'Convert this to stage play format.' },
    { title: 'Classical', subprompts: [
      { title: 'Dialogue', prompt: 'Convert this to a Socratic dialogue.' },
      { title: 'Homeric', prompt: 'Convert this to a Classical Epic.' }
    ] },
    { title: 'Celtic', subprompts: [
      { title: 'Bardic', prompt: 'Convert this to Bardic verse.' },
      { title: 'Filí', prompt: 'Convert this to Filí-an verse.' }
    ] },
  ]
});
Example: disabling `ai_shortcuts`
To disable the AI Assistant shortcuts menu and toolbar options, set `ai_shortcuts` to `false` (or to `[]`, an empty array).

tinymce.init({
  selector: 'textarea', // Change this value according to your HTML
  ai_shortcuts: false
});

tinymce.init({
  selector: 'textarea', // Change this value according to your HTML
  ai_shortcuts: []
});
Valid Shortcuts
Valid shortcut objects contain the following fields.

title
- A string which is displayed in the `aishortcuts` toolbar button and menu item. This indicates which shortcut is used, or which category of shortcuts is in this menu.

And either:

prompt
- A string containing the query which is given to the `ai_request` function when the shortcut is used.

or:

subprompts
- An array containing more valid shortcut objects. This allows shortcuts to be grouped into categories within the AI Assistant shortcuts toolbar button and menu item.
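The two mutually exclusive shapes above can be checked with a small recursive validator. This is a hypothetical helper for illustration, not part of the plugin API:

```javascript
// Hypothetical validator: a shortcut needs a title plus exactly one of
// `prompt` (a string) or `subprompts` (a non-empty array of valid shortcuts).
function isValidShortcut(shortcut) {
  if (typeof shortcut.title !== 'string') {
    return false;
  }
  const hasPrompt = typeof shortcut.prompt === 'string';
  const hasSubprompts = Array.isArray(shortcut.subprompts)
    && shortcut.subprompts.length > 0
    && shortcut.subprompts.every(isValidShortcut);
  // Exactly one of the two fields must be present.
  return hasPrompt !== hasSubprompts;
}
```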
Toolbar buttons
The AI Assistant plugin provides the following toolbar buttons:
Toolbar button identifier | Description |
---|---|
aidialog | Opens the AI Assistant dialog. |
aishortcuts | Opens the AI Shortcuts menu, displaying the available shortcut prompts for querying the AI API. |

These toolbar buttons can be added to the editor using:

- The `toolbar` configuration option.
- The `quickbars_insert_toolbar` configuration option.
Menu items
The AI Assistant plugin provides the following menu items:
Menu item identifier | Default Menu Location | Description |
---|---|---|
aidialog | Tools | Opens the AI Assistant dialog. |
aishortcuts | Tools | Opens the AI Assistant shortcuts sub-menu, displaying the available shortcut prompts for querying the AI API. |

These menu items can be added to the editor using:

- The `menu` configuration option.
- The `contextmenu` configuration option.
Commands
The AI Assistant plugin provides the following TinyMCE commands.
Command | Description |
---|---|
mceAiDialog | Opens the AI Assistant dialog. For details, see Using mceAiDialog below. |
mceAiDialogClose | Closes the AI Assistant dialog. |
tinymce.activeEditor.execCommand('mceAiDialog');
tinymce.activeEditor.execCommand('mceAiDialog', true|false, { prompt: '<value1>', generate: true, display: false });
tinymce.activeEditor.execCommand('mceAiDialogClose');
Using mceAiDialog
`mceAiDialog` accepts an object with any of the following key-value pairs:

Name | Value | Requirement | Description |
---|---|---|---|
prompt | String | Not required | The prompt to pre-fill the input field with when the dialog is first opened. |
generate | Boolean | Not required | Whether a request should be sent when the dialog is first opened. |
display | Boolean | Not required | Whether to display the input field and generate button in the dialog when the dialog is first opened. |
Events
The AI Assistant plugin provides the following events.

Name | Data | Description |
---|---|---|
AIRequest | | Fired when a request is sent to the `ai_request` function. |
AIResponse | | Fired when a response is returned. |
AIError | { error: ... } | Fired when a request results in an error. |
AIDialogOpen | N/A | Fired when the AI Assistant dialog is opened. |
AIDialogClose | N/A | Fired when the AI Assistant dialog is closed. |
APIs
The AI Assistant plugin provides the following APIs.
Name | Arguments | Description |
---|---|---|
getThreadLog | N/A | Retrieves the history of each conversation thread generated while using the plugin. |
// Retrieves the history of each conversation thread generated while using the plugin in the active editor.
tinymce.activeEditor.plugins.ai.getThreadLog();
The getThreadLog API
A user or integrator can retrieve the history of each conversation thread by calling `editor.plugins.ai.getThreadLog()` on an editor instance with the AI Assistant plugin enabled.
A new thread is recorded into the thread log with a unique ID each time the AI dialog is opened. When a request returns either a response or an error, an event is recorded in the current thread containing the following fields:
eventUid
- Unique identifier for the event.

timestamp
- The date and time at which the event was recorded in the thread, in ISO-8601 format.

request
- The `request` object as it was provided to the `ai_request` function, excluding the current thread.

and either:

response
- The `response` object provided by the integration, with a `type` field denoting the `ai_request` callback used (either `string` or `stream`) and a `data` field containing the entire response data; or

error
- A string with any error returned by the integration.
The thread log can contain any number of threads, with any number of events in each thread. The following example only shows a single thread containing a single event. The returned object is provided in the following format:
{
  "mce-aithread_123456": [
    {
      "eventUid": "mce-aithreadevent_654321",
      "timestamp": "2023-03-15T09:00:00.000Z",
      "request": {
        "prompt": "Answer the question based on the context below.\nThe response should be in HTML format.\nThe response should preserve any HTML formatting, links, and styles in the context.\n\nContext: \"\"\"Some selection\"\"\"\n\nQuestion: \"\"\"A user query\"\"\"\n\nAnswer:",
        "query": "A user query",
        "context": "Some selection",
        "system": [
          "Answer the question based on the context below.",
          "The response should be in HTML format.",
          "The response should preserve any HTML formatting, links, and styles in the context."
        ]
      },
      "response": {
        "type": "string",
        "data": "Sorry, there is not enough information to provide an answer to your query."
      }
    }
  ]
}
Once a TinyMCE editor instance is closed, any temporarily stored results are lost, so use the `getThreadLog()` API to retrieve and store any responses which should not be lost.
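For example, a hypothetical helper (not part of the plugin API) could flatten a thread log of the shape shown above into a list of query/response pairs for storage:

```javascript
// Hypothetical helper: flatten a getThreadLog() result into an array of
// { query, data } pairs for every event that received a response.
function collectResponses(threadLog) {
  return Object.values(threadLog).flatMap((events) =>
    events
      .filter((event) => event.response)
      .map((event) => ({
        query: event.request.query,
        data: event.response.data
      }))
  );
}

// e.g. collectResponses(tinymce.activeEditor.plugins.ai.getThreadLog());
```

Events that ended in an error carry no `response` field and are skipped here; adapt the shape to whatever your storage layer needs.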