Documentation for UE plugin AIChatPlus

Public repository

UE.AIChatPlus.Public

Get the Plugin

AIChatPlus

Introduction to the plugin

This plugin supports UE5.2+.

UE.AIChatPlus is an Unreal Engine plugin that enables communication with various GPT AI chat services. It currently supports OpenAI (ChatGPT, DALL-E), Azure OpenAI (ChatGPT, DALL-E), Claude, Google Gemini, Ollama, and local offline llama.cpp, with more providers to come. The implementation is based on asynchronous REST requests, ensuring high performance and making it easy for UE developers to integrate these AI chat services.

UE.AIChatPlus also includes an editor tool that lets you use these AI chat services directly in the editor to generate text and images, analyze images, and more.

Instructions for use

Editor chat tool

Open the chat tool editor provided by the plugin from the menu bar: Tools -> AIChatPlus -> AIChat.

The tool supports text generation, text chat, image generation, and image analysis.

The interface of the tool is roughly as follows:

text chat

image chat

Main Functions

Offline Large Models: Integrates the llama.cpp library, supporting local offline execution of large models.

Text Chat: Click the New Chat button in the bottom left corner to start a new text chat session.

Image Generation: Click the New Image Chat button in the bottom left corner to start a new image generation session.

Image Analysis: Some chat services available under New Chat, such as Claude and Google Gemini, support sending images. Click the 🖼️ or 🎨 button above the input box to load the image you want to send.

Blueprint Support: Blueprints can create API requests and perform text chat, image generation, and other functions.

Set the current chat role: The dropdown menu at the top of the chat box sets the role used when sending text, letting you adjust the AI chat by simulating different roles.

Clear conversation: The ❌ button at the top of the chat box clears the current conversation's history.

Dialogue Templates: Hundreds of built-in dialogue templates make it easy to handle common tasks.

Global Settings: Click the Setting button in the bottom left corner to open the global settings window. You can set the default text chat and image generation API services and specify parameters for each API service. Settings are automatically saved to $(ProjectFolder)/Saved/AIChatPlusEditor.

Conversation Settings: Click the settings button at the top of the chat box to open the settings window for the current conversation. You can rename the conversation, change the API service it uses, and set API parameters independently for each conversation. Conversation settings are automatically saved to $(ProjectFolder)/Saved/AIChatPlusEditor/Sessions.

Edit chat content: Hovering over a chat message reveals a settings button for that message, with options to regenerate, edit, copy, or delete the content, and to regenerate everything below it (for user-authored content).

Image browsing: For image generation, clicking an image opens the Image Viewer, which supports saving images as PNG or UE Texture. Textures can be viewed directly in the Content Browser, making it convenient to use images inside the editor. Deleting images, regenerating images, and generating more images are also supported. On Windows, images can additionally be copied straight to the clipboard. Images generated during a session are automatically saved under each session's folder, typically $(ProjectFolder)/Saved/AIChatPlusEditor/Sessions/${GUID}/images.

Blueprint:

blueprint

Global Settings:

global settings

Conversation settings:

session settings

Edit chat content:

chat edit

Image Viewer:

image viewer

Use offline large models

offline model

Dialogue Template

system template

Introduction to the core code

The plugin is currently divided into the following modules:

AIChatPlusCommon: Runtime module, responsible for handling various AI API interface requests and parsing response content.

AIChatPlusEditor: Editor module, responsible for implementing the AI chat tool editor.

AIChatPlusCllama: Runtime module, responsible for encapsulating the interface and parameters of llama.cpp, enabling offline execution of large models.

Thirdparty/LLAMACpp: Runtime third-party module, integrating the llama.cpp dynamic library and header files.

The UClass responsible for sending requests is UAIChatPlus_xxxChatRequest; each API service has its own independent Request UClass. Replies are obtained through two UClasses, UAIChatPlus_ChatHandlerBase and UAIChatPlus_ImageHandlerBase; you only need to register the corresponding callback delegates.

Before sending a request, set the API parameters and the message to send via FAIChatPlus_xxxChatRequestBody. The reply content is parsed into FAIChatPlus_xxxChatResponseBody, which you can retrieve through a dedicated interface when the callback fires.
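A condensed sketch of this flow, using the Cllama types from the complete example in the User Guide below (other providers follow the same shape with their own Request and Options types):

TWeakObjectPtr<UAIChatPlus_ChatHandlerBase> Handler = UAIChatPlus_ChatHandlerBase::New();

// Per-service parameters are set on the RequestBody/Options struct
FAIChatPlus_CllamaChatRequestOptions Options;
Options.ModelPath.FilePath = FPaths::ProjectContentDir() / "LLAMA" / "model.gguf"; // placeholder path

// Each API service has its own Request UClass
auto RequestPtr = UAIChatPlus_CllamaChatRequest::CreateWithOptionsAndMessages(
    Options,
    {
        {"who are you", EAIChatPlus_ChatRole::User}
    });

// Replies are delivered through delegates registered on the handler
Handler->BindChatRequest(RequestPtr);
Handler->OnMessage.AddLambda([](const FString& Message)
{
    // consume the (possibly streamed) reply text here
});

RequestPtr->SendRequest();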

More source code details are available on the UE Marketplace: AIChatPlus

User Guide

Using the offline model llama.cpp in the editor tool

The following explains how to use the offline model llama.cpp in the AIChatPlus editor tool.

First, download the offline model from the HuggingFace website: Qwen1.5-1.8B-Chat-Q8_0.gguf

Place the model in a specific folder, for example the Content/LLAMA directory of the game project:

E:/UE/projects/FP_Test1/Content/LLAMA
> ls
qwen1.5-1_8b-chat-q8_0.gguf*

Open the AIChatPlus editor tool: Tools -> AIChatPlus -> AIChat, create a new chat session, and open the session settings page.

guide editor

Set the API to Cllama, enable Custom API Settings, add the model search path, and select the model.

guide editor

Start chatting!

guide editor

Using the offline model llama.cpp in code

The following explains how to use the offline model llama.cpp in code.

First, you also need to download the model files to Content/LLAMA.

Add a console command in code and send a message to the offline model from inside the command:

#include "Common/AIChatPlus_Log.h"
#include "Common_Cllama/AIChatPlus_CllamaChatRequest.h"

void AddTestCommand()
{
    IConsoleManager::Get().RegisterConsoleCommand(
        TEXT("AIChatPlus.TestChat"),
        TEXT("Test Chat."),
        FConsoleCommandDelegate::CreateLambda([]()
        {
            if (!FModuleManager::GetModulePtr<FAIChatPlusCommon>(TEXT("AIChatPlusCommon"))) return;

            TWeakObjectPtr<UAIChatPlus_ChatHandlerBase> HandlerObject = UAIChatPlus_ChatHandlerBase::New();
            // Cllama
            FAIChatPlus_CllamaChatRequestOptions Options;
            Options.ModelPath.FilePath = FPaths::ProjectContentDir() / "LLAMA" / "qwen1.5-1_8b-chat-q8_0.gguf";
            Options.NumPredict = 400;
            Options.bStream = true;
            // Options.StopSequences.Emplace(TEXT("json"));
            auto RequestPtr = UAIChatPlus_CllamaChatRequest::CreateWithOptionsAndMessages(
                Options,
                {
                    {"You are a chat bot", EAIChatPlus_ChatRole::System},
                    {"who are you", EAIChatPlus_ChatRole::User}
                });

            HandlerObject->BindChatRequest(RequestPtr);
            const FName ApiName = TEnumTraits<EAIChatPlus_ChatApiProvider>::ToName(RequestPtr->GetApiProvider());

            HandlerObject->OnMessage.AddLambda([ApiName](const FString& Message)
            {
                UE_LOG(AIChatPlus_Internal, Display, TEXT("TestChat[%s] Message: [%s]"), *ApiName.ToString(), *Message);
            });
            HandlerObject->OnStarted.AddLambda([ApiName]()
            {
                UE_LOG(AIChatPlus_Internal, Display, TEXT("TestChat[%s] RequestStarted"), *ApiName.ToString());
            });
            HandlerObject->OnFailed.AddLambda([ApiName](const FAIChatPlus_ResponseErrorBase& InError)
            {
                UE_LOG(AIChatPlus_Internal, Error, TEXT("TestChat[%s] RequestFailed: %s "), *ApiName.ToString(), *InError.GetDescription());
            });
            HandlerObject->OnUpdated.AddLambda([ApiName](const FAIChatPlus_ResponseBodyBase& ResponseBody)
            {
                UE_LOG(AIChatPlus_Internal, Display, TEXT("TestChat[%s] RequestUpdated"), *ApiName.ToString());
            });
            HandlerObject->OnFinished.AddLambda([ApiName](const FAIChatPlus_ResponseBodyBase& ResponseBody)
            {
                UE_LOG(AIChatPlus_Internal, Display, TEXT("TestChat[%s] RequestFinished"), *ApiName.ToString());
            });

            RequestPtr->SendRequest();
        }),
        ECVF_Default
    );
}
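Note that AddTestCommand() still has to be called once at startup so the console command gets registered; a minimal sketch, assuming a game module class named FFP_Test1Module (a hypothetical name, substitute your own module):

// Hypothetical game module class; register the test command when the module loads
void FFP_Test1Module::StartupModule()
{
    AddTestCommand();
}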

After recompiling, run AIChatPlus.TestChat in the editor console (Cmd); the large model's output appears in the Output Log.

guide code

Using the offline model llama.cpp in Blueprints

The following explains how to use the offline model llama.cpp in Blueprints.

Right-click in the Blueprint graph to create a Send Cllama Chat Request node.

guide blueprint

Create an Options node and set Stream=true, ModelPath="E:\UE\projects\FP_Test1\Content\LLAMA\qwen1.5-1_8b-chat-q8_0.gguf".

guide blueprint

guide blueprint

Create Messages, adding a System Message and a User Message.

guide blueprint

Create a Delegate that receives the model's output and prints it to the screen.

guide blueprint

guide blueprint

The complete blueprint looks like this; run it, and you will see the large model's reply printed on the game screen.

guide blueprint

guide blueprint

Update Log

v1.3.2 - 2024.10.10

Bugfix

Fix a crash when manually stopping a request in Cllama.

Fix ggml.dll and llama.dll not being found after packaging the Windows version downloaded from the Marketplace.

Check that requests are created on the GameThread.

v1.3.1 - 2024.9.30

Add a SystemTemplateViewer, which allows you to browse and use hundreds of system setting templates.

Bugfix

Fix the plugin downloaded from the Marketplace failing to find the llama.cpp link libraries.

Fix the LLAMACpp path being too long.

Fix the llama.dll error on Windows after packaging.

Fix the iOS/Android file path reading issue.

Fix the Cllama settings name error.

v1.3.0 - 2024.9.23

Major Update

Integrated llama.cpp, supporting local offline execution of large models

v1.2.0 - 2024.08.20

Support OpenAI Image Edit/Image Variation.

Support the Ollama API, including automatically fetching the list of models Ollama supports.

v1.1.0 - 2024.08.07

Support Blueprint

v1.0.0 - 2024.08.05

Complete basic functionality

Support OpenAI, Azure, Claude, Gemini

Built-in feature-rich editor chat tool

Original: https://wiki.disenone.site/en

This post is licensed under the CC BY-NC-SA 4.0 agreement; please attribute when reproducing.

This post was translated using ChatGPT; please send feedback about any omissions.