C++ Guide - Getting Started
Core Code Introduction
Currently, the plugin is divided into the following modules:
AIChatPlusCommon: Runtime module, responsible for handling various AI API interface requests and parsing response content.
AIChatPlusEditor: Editor module, responsible for implementing the AI chat tool within the editor.
AIChatPlusCllama: Runtime module responsible for encapsulating the interface and parameters of llama.cpp, enabling offline execution of large models.
Thirdparty/LLAMACpp: Runtime third-party module integrating the dynamic library and header files of llama.cpp.
The UClass responsible for sending requests is UAIChatPlus_xxxChatRequest; each API service has its own Request UClass. Replies are received through two UClasses, UAIChatPlus_ChatHandlerBase and UAIChatPlus_ImageHandlerBase, on which you only need to register the corresponding callback delegates.
Before sending a request, set the API parameters and the messages to send via FAIChatPlus_xxxChatRequestBody. The response content is parsed into FAIChatPlus_xxxChatResponseBody, which you can retrieve through the handler's interface when the callback fires.
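In outline, the flow looks like the condensed sketch below. It only uses the Cllama request and handler types that appear in the full example later in this article; other API providers follow the same pattern with their own Request and Options types.

    // Condensed sketch of the request/handler flow (see the full example below).
    TWeakObjectPtr<UAIChatPlus_ChatHandlerBase> Handler = UAIChatPlus_ChatHandlerBase::New();

    FAIChatPlus_CllamaChatRequestOptions Options;    // per-API options / request body
    auto Request = UAIChatPlus_CllamaChatRequest::CreateWithOptionsAndMessages(
        Options, { {"hello", EAIChatPlus_ChatRole::User} });

    Handler->BindChatRequest(Request);               // attach the handler to the request
    Handler->OnMessage.AddLambda([](const FString& Message) { /* streamed reply text */ });
    Handler->OnFinished.AddLambda([](const FAIChatPlus_ResponseBodyBase& Body) { /* parsed response body */ });

    Request->SendRequest();                          // results arrive through the delegates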
Using the offline model Cllama (llama.cpp) in code
The following shows how to use the offline llama.cpp model from your own code.
First, download the model file and place it under Content/LLAMA.
Then modify the code to add a new console command that sends a message to the offline model:
#include "Common/AIChatPlus_Log.h"
#include "Common_Cllama/AIChatPlus_CllamaChatRequest.h"
void AddTestCommand()
{
IConsoleManager::Get().RegisterConsoleCommand(
TEXT("AIChatPlus.TestChat"),
TEXT("Test Chat."),
FConsoleCommandDelegate::CreateLambda([]()
{
if (!FModuleManager::GetModulePtr<FAIChatPlusCommon>(TEXT("AIChatPlusCommon"))) return;
TWeakObjectPtr<UAIChatPlus_ChatHandlerBase> HandlerObject = UAIChatPlus_ChatHandlerBase::New();
// Cllama
FAIChatPlus_CllamaChatRequestOptions Options;
Options.ModelPath.FilePath = FPaths::ProjectContentDir() / "LLAMA" / "qwen1.5-1_8b-chat-q8_0.gguf";
Options.NumPredict = 400;
Options.bStream = true;
// Options.StopSequences.Emplace(TEXT("json"));
auto RequestPtr = UAIChatPlus_CllamaChatRequest::CreateWithOptionsAndMessages(
Options,
{
{"You are a chat bot", EAIChatPlus_ChatRole::System},
{"who are you", EAIChatPlus_ChatRole::User}
});
HandlerObject->BindChatRequest(RequestPtr);
const FName ApiName = TEnumTraits<EAIChatPlus_ChatApiProvider>::ToName(RequestPtr->GetApiProvider());
HandlerObject->OnMessage.AddLambda([ApiName](const FString& Message)
{
UE_LOG(AIChatPlus_Internal, Display, TEXT("TestChat[%s] Message: [%s]"), *ApiName.ToString(), *Message);
});
HandlerObject->OnStarted.AddLambda([ApiName]()
{
UE_LOG(AIChatPlus_Internal, Display, TEXT("TestChat[%s] RequestStarted"), *ApiName.ToString());
});
HandlerObject->OnFailed.AddLambda([ApiName](const FAIChatPlus_ResponseErrorBase& InError)
{
UE_LOG(AIChatPlus_Internal, Error, TEXT("TestChat[%s] RequestFailed: %s "), *ApiName.ToString(), *InError.GetDescription());
});
HandlerObject->OnUpdated.AddLambda([ApiName](const FAIChatPlus_ResponseBodyBase& ResponseBody)
{
UE_LOG(AIChatPlus_Internal, Display, TEXT("TestChat[%s] RequestUpdated"), *ApiName.ToString());
});
HandlerObject->OnFinished.AddLambda([ApiName](const FAIChatPlus_ResponseBodyBase& ResponseBody)
{
UE_LOG(AIChatPlus_Internal, Display, TEXT("TestChat[%s] RequestFinished"), *ApiName.ToString());
});
RequestPtr->SendRequest();
}),
ECVF_Default
);
}
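AddTestCommand() still needs to be called somewhere during startup, for example from your project module's StartupModule(). The minimal sketch below assumes a game module class named FMyGameModule, which is a placeholder for your own module class.

    // Minimal sketch: register the test command when the game module starts up.
    // FMyGameModule is a placeholder name; use your own project module class.
    void FMyGameModule::StartupModule()
    {
        AddTestCommand();
    }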
After recompiling, run AIChatPlus.TestChat from the editor console (Cmd) and you will see the output of the large model in the OutputLog.
Original: https://wiki.disenone.site/en
This post is licensed under CC BY-NC-SA 4.0; please attribute the source when reproducing it.
This post was translated using ChatGPT. Please provide feedback and point out any omissions.