Make the first API call

This page explains the order in which the LLM API calls should be made and provides examples of scenarios that can be implemented with the API.

Order of API calls

Initialize SDK

Before calling any other API, call the initialize API to initialize the LLM SDK.

val llm: LLM = LLM(context)
llm.initialize()

Register App

Call the registerApp API to register your app with the LLM server.

val clientID = "your_client_id"
val clientSecret = "your_client_secret"
llm.registerApp(clientID, clientSecret, object : OnServerConnectionListener {
    override fun onConnected() {
        // Once the app is successfully registered with the LLM server, it can receive events.
    }

    override fun onFailed(msg: String) {
        // If registration with the LLM server fails, an event with an error message is delivered to the app.
    }
})

Set Model Parameter

Call the setModelParameter API to change the LLM model type and configuration values.

// If not set by the app, the default model configuration is used:
// temperature: 0.7, max tokens: 512, model: gpt-4o
// temperature range: 0.0 to 2.0
// max tokens range: 1 to 1024
// supported models: chatbaker-16k, gpt-4o-mini, gpt-4o

val temperature = "0.8"
val maxTokens = "1024"
val newModelName = "gpt-4o-mini"

llm.setModelParameter(ModelParameter.TEMPERATURE, temperature)
llm.setModelParameter(ModelParameter.MAX_TOKENS, maxTokens)
llm.setModelParameter(ModelParameter.MODEL_NAME, newModelName)
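Because setModelParameter takes string values, it can help to check them against the documented ranges before calling the API. The validation helpers below are a minimal sketch and are not part of the SDK; they only encode the ranges listed in the comments above.

```kotlin
// Hypothetical validation helpers (not part of the SDK); they encode the
// documented ranges: temperature 0.0 to 2.0, max tokens 1 to 1024,
// and the listed model names.
val supportedModels = setOf("chatbaker-16k", "gpt-4o-mini", "gpt-4o")

fun isValidTemperature(value: String): Boolean =
    value.toDoubleOrNull()?.let { it in 0.0..2.0 } ?: false

fun isValidMaxTokens(value: String): Boolean =
    value.toIntOrNull()?.let { it in 1..1024 } ?: false

fun isValidModel(name: String): Boolean = name in supportedModels

fun main() {
    println(isValidTemperature("0.8"))   // true
    println(isValidMaxTokens("2048"))    // false: above the 1..1024 range
    println(isValidModel("gpt-4o-mini")) // true
}
```

Checking values up front lets the app surface a configuration error immediately instead of waiting for the server to reject the request.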

Generate Content

Call the generateContent API to send input prompt data from the LLM app to the LLM service. When you enter data in the app, the partial responses generated by the LLM, followed by the full response, are delivered through the listener.

llm.generateContent(prompt = prompt, object : ResultListener {
    override fun onResponse(response: String, completed: Boolean) {
        println("onResponse response: $response, completed: $completed")
    }

    override fun onError(reason: String) {
        println("onError reason: $reason")
    }
})
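Since onResponse is invoked repeatedly until completed is true, an app that needs the whole answer typically accumulates the chunks. The accumulator below is an illustrative sketch (not part of the SDK), written under the assumption that each callback delivers an incremental chunk of text.

```kotlin
// Illustrative accumulator for streamed partial responses; not part of the SDK.
// Assumes each onResponse call delivers an incremental chunk of the answer.
class ChunkAccumulator {
    private val buffer = StringBuilder()

    // Append a partial chunk; returns the full text once the stream completes,
    // or null while the response is still streaming.
    fun onChunk(chunk: String, completed: Boolean): String? {
        buffer.append(chunk)
        return if (completed) buffer.toString() else null
    }
}

fun main() {
    val acc = ChunkAccumulator()
    println(acc.onChunk("Hello, ", completed = false)) // null: still streaming
    println(acc.onChunk("world!", completed = true))   // Hello, world!
}
```

Inside a ResultListener, the app would call onChunk from onResponse and act only when a non-null full text is returned.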

Start Chat

You can start a conversation by calling the startChat API. By providing the previous chat history, you can enable continuous conversations.

val initialHistory = listOf(
    LLMContent(role = "user", parts = mutableListOf(TextPart("Hello"))),
    LLMContent("assistant", mutableListOf(TextPart("Hello, I am an LLM.")))
)

llm.startChat(history = initialHistory, { /* onSuccess */ }, { /* onFailed */ println(it) })

Send Message

Call the sendMessage API to send a message from the LLM app to the LLM service within a conversation started with startChat. The responses are streamed back through the listener.

val contents = LLMContent("user", listOf(TextPart(text = "Tell me the weather in Seoul")))
llm.sendMessage(contents, object : ResultListener {
    override fun onResponse(response: String, completed: Boolean) {
        println("onResponse response: $response, completed: $completed")
    }

    override fun onError(reason: String) {
        println("onError reason: $reason")
    }
})

Release SDK

When the app no longer needs the SDK, release its resources by calling the release API.

llm.release()
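Putting the steps above together, the expected call order is initialize, registerApp, setModelParameter, then generateContent or startChat/sendMessage, and finally release. The condensed sketch below combines the snippets from this page in that order; error handling and listener bodies are abbreviated.

```kotlin
// Condensed lifecycle sketch combining the calls shown above
// (abbreviated listeners; not runnable without the LLM SDK).
val llm = LLM(context)
llm.initialize()

llm.registerApp(clientID, clientSecret, object : OnServerConnectionListener {
    override fun onConnected() { /* registered; safe to call the APIs below */ }
    override fun onFailed(msg: String) { /* handle registration failure */ }
})

llm.setModelParameter(ModelParameter.MODEL_NAME, "gpt-4o-mini")

llm.generateContent(prompt = "Hello", object : ResultListener {
    override fun onResponse(response: String, completed: Boolean) { /* consume output */ }
    override fun onError(reason: String) { /* handle error */ }
})

llm.release()
```

Note that registerApp completes asynchronously via its listener, so in a real app the calls after it should be made only after onConnected fires.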