Make the first API call
This page explains the order of LLM API calls and shows examples of scenarios that you can implement with the API.
Order of API calls
- Initialize SDK
- Register App
- Set Model Parameter
- Generate Content
- Start Chat
- Send Message
- Release SDK
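The call order above can be made concrete with a small stand-in class that only records which call happened in which order. The `FakeLlm` class below is illustrative only; the real `LLM` class and its actual signatures are shown in the sections that follow.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in that records the lifecycle calls in order; the real SDK's
// LLM class (shown in the sections below) does the actual work.
public class LlmLifecycle {
    static class FakeLlm {
        final List<String> calls = new ArrayList<>();
        void initialize()        { calls.add("initialize"); }
        void registerApp()       { calls.add("registerApp"); }
        void setModelParameter() { calls.add("setModelParameter"); }
        void generateContent()   { calls.add("generateContent"); }
        void release()           { calls.add("release"); }
    }

    // Runs the documented call order and returns the recorded sequence.
    static List<String> run() {
        FakeLlm llm = new FakeLlm();
        llm.initialize();         // 1. Initialize SDK
        llm.registerApp();        // 2. Register App
        llm.setModelParameter();  // 3. Set Model Parameter
        llm.generateContent();    // 4. Generate Content (or Start Chat / Send Message)
        llm.release();            // 5. Release SDK
        return llm.calls;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```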
Initialize SDK
Before using any other API, call the initialize API to initialize the LLM SDK.
- Kotlin
- Java
val llm: LLM = LLM(context)
llm.initialize()
LLM llm = new LLM(context);
llm.initialize();
Register App
Call the registerApp API to register your app with the LLM server.
- Kotlin
- Java
val clientID = "your_client_id"
val clientSecret = "your_client_secret"
llm.registerApp(clientID, clientSecret, object : OnServerConnectionListener {
    override fun onConnected() {
        // Once the app is successfully registered with the LLM server, it can receive events.
    }

    override fun onFailed(msg: String) {
        // If the registration with the LLM server fails, an event with an error message is delivered to the app.
    }
})
String clientID = "your_client_id";
String clientSecret = "your_client_secret";
llm.registerApp(clientID, clientSecret, new OnServerConnectionListener() {
    @Override
    public void onConnected() {
        // Once the app is successfully registered with the LLM server, it can receive events.
    }

    @Override
    public void onFailed(String msg) {
        // If the registration with the LLM server fails, an event with an error message is delivered to the app.
    }
});
Set Model Parameter
Call the setModelParameter API to change the LLM model type and configuration values.
- Kotlin
- Java
// If not set by the app, the default model configuration is used.
// defaults - temperature: 0.7, max tokens: 512, model: gpt-4o
// temperature range: 0.0 to 2.0
// max tokens range: 1 to 1024
// available models: chatbaker-16k, gpt-4o-mini, gpt-4o
val temperature = "0.8"
val maxTokens = "1024"
val newModelName = "gpt-4o-mini"
llm.setModelParameter(ModelParameter.TEMPERATURE, temperature)
llm.setModelParameter(ModelParameter.MAX_TOKENS, maxTokens)
llm.setModelParameter(ModelParameter.MODEL_NAME, newModelName)
// If not set by the app, the default model configuration is used.
// defaults - temperature: 0.7, max tokens: 512, model: gpt-4o
// temperature range: 0.0 to 2.0
// max tokens range: 1 to 1024
// available models: chatbaker-16k, gpt-4o-mini, gpt-4o
String temperature = "0.8";
String maxTokens = "1024";
String newModelName = "gpt-4o-mini";
llm.setModelParameter(ModelParameter.TEMPERATURE, temperature);
llm.setModelParameter(ModelParameter.MAX_TOKENS, maxTokens);
llm.setModelParameter(ModelParameter.MODEL_NAME, newModelName);
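An app can check the documented ranges before calling setModelParameter. The validation below is app-side code, not part of the SDK; the bounds are taken from the comments above.

```java
import java.util.Set;

// Client-side validation of the documented model-parameter ranges.
// The bounds come from the SDK comments above; this check is app
// code, not part of the SDK itself.
public class ModelParamCheck {
    static final Set<String> MODELS = Set.of("chatbaker-16k", "gpt-4o-mini", "gpt-4o");

    static boolean isValidTemperature(double t) { return t >= 0.0 && t <= 2.0; }
    static boolean isValidMaxTokens(int n)      { return n >= 1 && n <= 1024; }
    static boolean isValidModel(String name)    { return MODELS.contains(name); }

    public static void main(String[] args) {
        // The values from the snippet above all pass the documented ranges.
        System.out.println(isValidTemperature(0.8));     // true
        System.out.println(isValidMaxTokens(1024));      // true
        System.out.println(isValidModel("gpt-4o-mini")); // true
    }
}
```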
Generate Content
Call the generateContent API to send input prompt data from the LLM app to the LLM service. When you enter data in the app, the partial responses generated by the LLM and the final full response are delivered through the listener.
- Kotlin
- Java
val prompt = "Tell me the weather in Seoul"
llm.generateContent(prompt = prompt, object : ResultListener {
    override fun onResponse(response: String, completed: Boolean) {
        println("onResponse response: $response, complete: $completed")
    }

    override fun onError(reason: String) {
        println("onError reason: $reason")
    }
})
String prompt = "Tell me the weather in Seoul";
llm.generateContent(prompt, new ResultListener() {
    @Override
    public void onResponse(String response, boolean completed) {
        System.out.println("onResponse response: " + response + ", complete: " + completed);
    }

    @Override
    public void onError(String reason) {
        System.out.println("onError reason: " + reason);
    }
});
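The completed flag lets the app assemble the streamed partial responses into one full answer. A minimal sketch, assuming each onResponse call delivers the next chunk of text; the `ResultListener` interface here is a simplified stand-in with the shape shown in the snippets above.

```java
// Collects streamed partial responses into one full answer, assuming
// each onResponse call carries the next chunk of text.
public class StreamCollector {
    // Simplified stand-in for the SDK's ResultListener interface.
    interface ResultListener {
        void onResponse(String response, boolean completed);
        void onError(String reason);
    }

    private final StringBuilder buffer = new StringBuilder();
    private String fullResponse = null;

    final ResultListener listener = new ResultListener() {
        @Override
        public void onResponse(String response, boolean completed) {
            buffer.append(response);  // accumulate each partial chunk
            if (completed) fullResponse = buffer.toString();
        }

        @Override
        public void onError(String reason) {
            System.out.println("generateContent failed: " + reason);
        }
    };

    String getFullResponse() { return fullResponse; }

    public static void main(String[] args) {
        StreamCollector c = new StreamCollector();
        // Simulate the partial callbacks the SDK would make.
        c.listener.onResponse("Hel", false);
        c.listener.onResponse("lo", true);
        System.out.println(c.getFullResponse()); // Hello
    }
}
```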
Start Chat
You can start a conversation by calling the startChat API. By providing the previous chat history, you can enable continuous conversations.
- Kotlin
- Java
val initialHistory = listOf(
    LLMContent(role = "user", parts = mutableListOf(TextPart("Hello"))),
    LLMContent("assistant", mutableListOf(TextPart("Hello. I am the LLM.")))
)
llm.startChat(history = initialHistory, { /* onSuccess */ }, { /* onFailed */ println(it) })
List<Part> userParts = new ArrayList<>();
userParts.add(new TextPart("Hello"));
List<Part> assistantParts = new ArrayList<>();
assistantParts.add(new TextPart("Hello. I am the LLM."));
List<LLMContent> initialHistory = new ArrayList<>();
initialHistory.add(new LLMContent("user", userParts));
initialHistory.add(new LLMContent("assistant", assistantParts));
llm.startChat(initialHistory, () -> {}, reason -> System.out.println(reason));
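A seed history usually alternates user and assistant turns, starting with the user, as in the snippets above. That convention is an assumption here, not a documented SDK requirement, but an app can check it before calling startChat:

```java
import java.util.List;

// Checks that a chat history alternates user/assistant turns starting
// with "user" -- a common convention for seed histories, assumed here;
// the SDK itself may be more permissive.
public class HistoryCheck {
    static boolean alternatesFromUser(List<String> roles) {
        for (int i = 0; i < roles.size(); i++) {
            String expected = (i % 2 == 0) ? "user" : "assistant";
            if (!roles.get(i).equals(expected)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Roles from the seed history above: user -> assistant.
        System.out.println(alternatesFromUser(List.of("user", "assistant"))); // true
    }
}
```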
Send Message
Call the sendMessage API to send a message stream from the LLM app to the LLM service. This API is used for conversations and chats.
- Kotlin
- Java
val contents = LLMContent("user", listOf(TextPart(text = "Tell me the weather in Seoul")))
llm.sendMessage(contents, object : ResultListener {
    override fun onResponse(response: String, completed: Boolean) {
        println("onResponse response: $response, complete: $completed")
    }

    override fun onError(reason: String) {
        println("onError reason: $reason")
    }
})
List<TextPart> parts = new ArrayList<>();
parts.add(new TextPart("Tell me the weather in Seoul"));
LLMContent contents = new LLMContent("user", parts);
llm.sendMessage(contents, new ResultListener() {
    @Override
    public void onResponse(String response, boolean completed) {
        System.out.println("onResponse response: " + response + ", complete: " + completed);
    }

    @Override
    public void onError(String reason) {
        System.out.println("onError reason: " + reason);
    }
});
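To keep a multi-turn conversation going, the app can append each user message and each completed assistant reply to its local transcript between sendMessage calls. A sketch with plain role/text strings standing in for the SDK's LLMContent objects:

```java
import java.util.ArrayList;
import java.util.List;

// Keeps a local transcript across turns; plain "role: text" strings
// stand in for the SDK's LLMContent objects.
public class ChatTranscript {
    private final List<String> turns = new ArrayList<>();

    void addUserMessage(String text)    { turns.add("user: " + text); }
    // Call once per turn, with the full response (completed == true).
    void addAssistantReply(String text) { turns.add("assistant: " + text); }

    List<String> turns() { return turns; }

    public static void main(String[] args) {
        ChatTranscript t = new ChatTranscript();
        t.addUserMessage("Tell me the weather in Seoul");
        t.addAssistantReply("It is sunny in Seoul today.");
        System.out.println(t.turns());
    }
}
```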
Release SDK
Release the LLM SDK resources by calling the release API.
- Kotlin
- Java
llm.release()
llm.release();