I'm investigating using DockerModelRunner to enable integration testing of an application that uses an LLM. The documentation explains how to pull the image, but it doesn't go on to explain the API or how to use the container once it has started.
I'm also unclear on the difference between getBaseEndpoint() and getOpenAIEndpoint(); neither has Javadocs.
For example, if I want to invoke OpenAI's Responses API, how would I do that with DockerModelRunner, if it's possible at all? Is this the right line of thinking, and if not, what is? Does the underlying model matter when using the OpenAI APIs?
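For context, here is a sketch of the kind of test I'm imagining. To be clear about my assumptions: the container class and getOpenAIEndpoint() are how I understand the module from this issue's context, the /engines/v1/chat/completions path and the ai/smollm2 model tag are my guesses from the Docker Model Runner docs, and the chat completions call is a stand-in since I don't know whether the Responses API is exposed at all.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ModelRunnerIT {

    // Builds a minimal OpenAI-style chat completions request body by hand,
    // to keep the sketch free of any JSON-library dependency.
    static String chatRequestBody(String model, String userMessage) {
        return "{\"model\":\"" + model + "\","
             + "\"messages\":[{\"role\":\"user\",\"content\":\"" + userMessage + "\"}]}";
    }

    // POSTs a chat completion request to an OpenAI-compatible endpoint and
    // returns the raw response body. The /engines/v1 path is an assumption.
    static String askModel(String openAiEndpoint, String model, String prompt) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(openAiEndpoint + "/engines/v1/chat/completions"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(chatRequestBody(model, prompt)))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    // In a JUnit test I'd expect to wire it up roughly like this
    // (class/method names are my assumption of the module's API):
    //
    //   try (DockerModelRunnerContainer runner = new DockerModelRunnerContainer()
    //           .withModel("ai/smollm2")) {      // model tag is a guess
    //       runner.start();
    //       String body = askModel(runner.getOpenAIEndpoint(), "ai/smollm2", "Say hello");
    //       // ... assertions/evals on body ...
    //   }
}
```

If getOpenAIEndpoint() already points at the /engines/v1 prefix, the path concatenation above would be wrong, which is exactly the kind of thing Javadocs on the two endpoint methods would clear up.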
I'm new to developing LLM-based applications, so any help is welcomed.
To be clear, this request isn't about how to write the assertions/evals themselves.