OpenAI API
The OpenAI library provides access to OpenAI's cloud-based LLM services.
It enables on-device applications to construct LLM prompts and submit them to powerful cloud-hosted models.
Function Index
Json * | openaiChatCompletion(Json *props) | Submit a request to the OpenAI Chat Completion API.
int | openaiInit(cchar *endpoint, cchar *key, Json *config, int flags) | Initialize the OpenAI API client library.
Json * | openaiListModels(void) | List available OpenAI models.
Url * | openaiRealTimeConnect(Json *props) | Connect to the OpenAI Real-Time API via WebSocket.
Json * | openaiResponses(Json *props, OpenAIAgent agent, void *arg) | Submit a request to the OpenAI Responses API with agent callbacks.
Url * | openaiStream(Json *props, UrlSseProc callback, void *arg) | Submit a request to the OpenAI Responses API with streaming response handling.
void | openaiTerm(void) | Terminate the OpenAI API client library.
Typedef Index
OpenAI | OpenAI client configuration structure. |
OpenAIAgent | OpenAI Agent callback function for processing responses. |
Typedefs
typedef char *(*OpenAIAgent)(cchar *name, Json *request, Json *response, void *arg)
OpenAI Agent callback function for processing responses.
- Description:
- This callback is invoked during streaming responses to allow custom processing of agent responses. The callback can modify or augment the response data.
- Parameters:
- name: Agent name identifier.
- request: Original JSON request object sent to OpenAI.
- response: JSON response object received from OpenAI.
- arg: User-defined argument passed through from the calling function.
- Returns:
- Allocated string to be added to the response. Caller must free using rFree. Return NULL if no additional content.
- API Stability:
- Evolving.
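For illustration, a minimal agent callback might look like the sketch below. The "get_time" tool name is hypothetical, and sclone() (the runtime string duplicator) is assumed to allocate memory that rFree can release.

```c
#include <string.h>
#include "openai.h"     // assumed header name for this library

// Hypothetical agent for a tool named "get_time". The returned string must
// be allocated so the caller can release it with rFree; sclone() (the runtime
// string duplicator) is assumed to satisfy that requirement.
static char *timeAgent(cchar *name, Json *request, Json *response, void *arg)
{
    if (strcmp(name, "get_time") == 0) {
        return sclone("The current time is 12:00 UTC");
    }
    // Return NULL when this agent has no additional content to contribute.
    return NULL;
}
```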
OpenAI
OpenAI client configuration structure.
- Description:
- Contains the configuration settings for connecting to OpenAI services. This structure is managed internally by the library.
- Fields:
- char *endpoint: OpenAI API endpoint URL (default: "https://api.openai.com/v1").
- int flags: Configuration flags controlling tracing and behavior.
- char *headers: HTTP headers including the authorization bearer token.
- char *realTimeEndpoint: Real-time WebSocket endpoint URL for streaming connections.
- API Stability:
- Evolving.
Functions
Json *openaiChatCompletion(Json *props)
Submit a request to the OpenAI Chat Completion API.
- Description:
- Submit a synchronous request to the OpenAI Chat Completion API for text generation. The default model is set to 'gpt-4o-mini' if not specified in props. This is a blocking call that returns the complete response.
- Parameters:
- props: JSON object containing Chat Completion API parameters. Common fields include: 'messages' (required array), 'model', 'max_tokens', 'temperature', 'top_p', 'stream'.
- Returns:
- JSON object containing the complete response from OpenAI Chat Completion API. Response includes 'choices' array with generated content. Returns NULL on failure. Caller must free using jsonFree.
- API Stability:
- Evolving.
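A minimal usage sketch follows. The jsonParse() and jsonGet() helpers from the companion Json library are assumptions; check the Json API for the exact signatures and supported query syntax.

```c
#include <stdio.h>
#include "openai.h"     // assumed header name for this library

static void askChat(void)
{
    // Build the request. jsonParse() and its relaxed JSON syntax are assumptions.
    Json *props = jsonParse("{messages: [{role: 'user', content: 'Say hello'}]}", 0);
    Json *response = openaiChatCompletion(props);

    if (response) {
        // Generated text is at choices[0].message.content in the reply.
        // The dotted/indexed query passed to jsonGet() is an assumption.
        printf("%s\n", jsonGet(response, 0, "choices[0].message.content", ""));
        jsonFree(response);
    }
    // Release the request unless the library documents that it takes ownership.
    jsonFree(props);
}
```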
int openaiInit(cchar *endpoint, cchar *key, Json *config, int flags)
Initialize the OpenAI API client library.
- Description:
- Initialize the OpenAI client with endpoint, authentication, and configuration. This must be called before using any other OpenAI API functions. The library will validate the API key and establish the connection parameters.
- Parameters:
- endpoint: OpenAI API base endpoint URL. Use NULL for the default "https://api.openai.com/v1".
- key: OpenAI API key for authentication. Must be a valid API key starting with "sk-".
- config: JSON object containing additional configuration parameters. May be NULL for defaults. Optional fields include timeout settings and custom headers.
- flags: Bitmask of AI_SHOW_* flags controlling debug output and tracing behavior.
- Returns:
- Returns 0 on successful initialization. Returns negative error code on failure.
- API Stability:
- Internal.
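A typical lifecycle sketch is shown below: initialize once before issuing requests and terminate at shutdown. Reading the key from the OPENAI_API_KEY environment variable is an illustrative assumption.

```c
#include <stdio.h>
#include <stdlib.h>
#include "openai.h"     // assumed header name for this library

int startAi(void)
{
    // NULL endpoint selects the default "https://api.openai.com/v1".
    // Pass 0 for flags to disable AI_SHOW_* tracing.
    if (openaiInit(NULL, getenv("OPENAI_API_KEY"), NULL, 0) < 0) {
        fprintf(stderr, "Cannot initialize OpenAI client\n");
        return -1;
    }
    // ... issue OpenAI API requests ...
    openaiTerm();
    return 0;
}
```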
Json *openaiListModels(void)
List available OpenAI models.
- Description:
- Retrieve a list of all available models from the OpenAI API. This includes both OpenAI's base models and any fine-tuned models available to your account.
- Returns:
- JSON object containing an array of model objects. Each model object includes: 'id' (model identifier), 'object' (type), 'created' (timestamp), 'owned_by' (owner). Returns NULL on failure. Caller must free using jsonFree.
- API Stability:
- Evolving.
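For example, a short sketch that assumes jsonGet() supports dotted/indexed queries and that the models arrive under a 'data' array, as in the raw OpenAI response:

```c
#include <stdio.h>
#include "openai.h"     // assumed header name for this library

static void showModels(void)
{
    Json *models = openaiListModels();
    if (models) {
        // Whether the array is at the top level or under "data" depends on
        // whether the library returns the raw OpenAI response (assumption).
        printf("First model: %s\n", jsonGet(models, 0, "data[0].id", "unknown"));
        jsonFree(models);
    }
}
```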
Url *openaiRealTimeConnect(Json *props)
Connect to the OpenAI Real-Time API via WebSocket.
- Description:
- Establish a WebSocket connection to the OpenAI Real-Time API for bidirectional real-time communication. This enables voice and streaming text interactions with low latency. The connection supports full-duplex communication for interactive applications.
- Parameters:
- props: JSON object containing Real-Time API connection parameters. May include 'model', 'voice', 'input_audio_format', 'output_audio_format'.
- Returns:
- Url object representing the active WebSocket connection on success. Returns NULL on connection failure. Use urlClose to terminate the connection.
- API Stability:
- Evolving.
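A minimal connection sketch follows. The model identifier is illustrative, jsonParse() is an assumed helper, and the event exchange over the WebSocket is elided.

```c
#include "openai.h"     // assumed header name for this library

static void connectRealTime(void)
{
    // The model identifier is illustrative; jsonParse() is an assumed helper.
    Json *props = jsonParse("{model: 'gpt-4o-realtime-preview'}", 0);
    Url *ws = openaiRealTimeConnect(props);
    if (ws) {
        // ... exchange Real-Time API events over the WebSocket ...
        urlClose(ws);
    }
    jsonFree(props);
}
```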
Json *openaiResponses(Json *props, OpenAIAgent agent, void *arg)
Submit a request to the OpenAI Responses API with agent callbacks.
- Description:
- Submit a request to the OpenAI Responses API which supports agent-based interactions. The API automatically sets default values: model='gpt-4o-mini', truncation='auto'. Response text is aggregated into "output_text" field for convenient access. Agent callbacks are invoked for processing structured responses.
- Parameters:
- props: JSON object containing Responses API parameters. Required fields depend on the specific API endpoint. Common fields include 'messages', 'model', 'max_tokens', 'temperature'.
- agent: Callback function invoked for each response chunk. May be NULL if no agent processing is required.
- arg: User-defined argument passed to the agent callback function. May be NULL.
- Returns:
- JSON object containing the complete response from OpenAI. Contains aggregated "output_text" field. Returns NULL on failure. Caller must free using jsonFree.
- API Stability:
- Evolving.
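A hedged sketch follows. The 'input' request field mirrors the public Responses API and may differ in your deployment (the parameter list above also mentions 'messages'); jsonParse() and jsonGet() are assumed helpers, and the trivial agent simply declines to add content (see the OpenAIAgent sketch above for one that does).

```c
#include <stdio.h>
#include "openai.h"     // assumed header name for this library

// Trivial agent that contributes no extra content. See the OpenAIAgent
// sketch above for a callback that returns allocated text.
static char *nullAgent(cchar *name, Json *request, Json *response, void *arg)
{
    return NULL;
}

static void askResponses(void)
{
    // The 'input' field follows the public Responses API (assumption);
    // jsonParse() and jsonGet() are assumed helpers.
    Json *props = jsonParse("{input: 'What time is it?'}", 0);
    Json *response = openaiResponses(props, nullAgent, NULL);
    if (response) {
        // The library aggregates generated text into "output_text".
        printf("%s\n", jsonGet(response, 0, "output_text", ""));
        jsonFree(response);
    }
    jsonFree(props);
}
```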
Url *openaiStream(Json *props, UrlSseProc callback, void *arg)
Submit a request to the OpenAI Responses API with streaming response handling.
- Description:
- Submit a request and receive responses via Server-Sent Events (SSE) streaming. Default values are automatically set: model='gpt-4o-mini', truncation='auto'. The response text is aggregated into "output_text" for convenient access. Streaming allows real-time processing of responses as they arrive.
- Parameters:
- props: JSON object containing Responses API parameters. Must include the required fields for the target endpoint.
- callback: SSE callback function invoked for each streaming chunk. Cannot be NULL.
- arg: User-defined argument passed to the callback function. May be NULL.
- Returns:
- Url object representing the active streaming connection on success. Returns NULL on failure. Use urlClose to terminate the stream when finished.
- API Stability:
- Evolving.
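A streaming sketch follows. The UrlSseProc prototype shown is an assumption made for illustration; consult the Url library header for the exact callback signature. jsonParse() is also an assumed helper.

```c
#include <stdio.h>
#include "openai.h"     // assumed header name for this library

// Hypothetical SSE callback: this UrlSseProc prototype is an assumption;
// check the Url library header for the exact signature.
static void onEvent(Url *url, ssize id, cchar *event, cchar *data, void *arg)
{
    // Print each streamed chunk as it arrives.
    printf("%s\n", data);
}

static void streamStory(void)
{
    Json *props = jsonParse("{input: 'Tell me a short story'}", 0);   // jsonParse assumed
    Url *stream = openaiStream(props, onEvent, NULL);
    if (stream) {
        // ... wait for the stream to complete, then ...
        urlClose(stream);
    }
    jsonFree(props);
}
```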