[Enhancement]: Update documentation with example of DockerModelRunner usage #10449

@scottsteen

Description

Module

Core

Proposal

I'm investigating DockerModelRunner to enable integration testing of an application that uses an LLM. The documentation explains how to pull the image, but does not go on to explain the API or how to use the container once it is started.

I'm also unclear on the difference between getBaseEndpoint() and getOpenAIEndpoint(). There are no Javadocs for either.

For example, if I want to invoke OpenAI's Responses API, how can I do that with DockerModelRunner? Is this the right line of thinking? If not, what is? Does the underlying model matter when using the OpenAI-compatible APIs?
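To make the line of thinking concrete, here is a minimal sketch of what I imagine the usage looking like. The getOpenAIEndpoint() method name is taken from the container class; the "/v1/chat/completions" path, the model name, and the DockerModelRunnerContainer constructor/withModel usage are assumptions on my part about the OpenAI-compatible API shape, which is exactly what I'd like the documentation to confirm:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ModelRunnerSketch {

    // Joins the container's OpenAI endpoint with the chat-completions path.
    // The "/v1/chat/completions" suffix is an assumption based on the usual
    // OpenAI-compatible API shape, not something the docs currently state.
    static String chatCompletionsUrl(String openAiEndpoint) {
        return openAiEndpoint.replaceAll("/+$", "") + "/v1/chat/completions";
    }

    public static void main(String[] args) {
        // Hypothetical container usage (commented out; names are guesses):
        // try (DockerModelRunnerContainer modelRunner =
        //         new DockerModelRunnerContainer("docker/model-runner:latest")
        //                 .withModel("ai/smollm2")) {   // model name is illustrative
        //     modelRunner.start();
        //     String url = chatCompletionsUrl(modelRunner.getOpenAIEndpoint());
        //     ... send the request below against that url ...
        // }

        // An OpenAI-style chat request body; "ai/smollm2" is a placeholder model.
        String body = """
                {"model": "ai/smollm2",
                 "messages": [{"role": "user", "content": "Say hello"}]}""";

        // Build (but do not send) the request, using a placeholder endpoint
        // standing in for whatever getOpenAIEndpoint() would return.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(chatCompletionsUrl("http://localhost:12434")))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        System.out.println(request.uri());
    }
}
```

If something like this is roughly right, a worked example in the docs showing the endpoint methods together with one real request/response would answer most of my questions.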

I'm new to developing LLM-based applications, so any help is welcomed.

To be clear, this request isn't about how to write the assertions/evals themselves.
