# OpenAI
## Prerequisites

- OpenAI `api_key` (for non-cached examples)
## Setting up the OpenAI Client

The constructor initializes the base class `LM` to support prompting requests to OpenAI models. It takes the following parameters:

- `model` (str, optional): OpenAI-supported model identifier. Defaults to `"gpt-3.5-turbo-instruct"`.
- `api_key` (Optional[str], optional): OpenAI API provider authentication token. Defaults to `None`.
- `api_provider` (Literal["openai", "azure"], optional): OpenAI API provider to use. Defaults to `"openai"`.
- `api_base` (Optional[str], optional): Base URL for the OpenAI API endpoint. Defaults to `None`.
- `model_type` (Literal["chat", "text"], optional): Specified model type to use. Defaults to `None`.
- `**kwargs`: Additional language model arguments to pass to the OpenAI request. These are initialized with default values for relevant text generation parameters needed for communicating with the GPT API, such as `temperature`, `max_tokens`, `top_p`, `frequency_penalty`, `presence_penalty`, and `n`.
Example of the OpenAI constructor:

```python
from typing import Literal, Optional

class GPT3(LM):  # This is a wrapper for the OpenAI class - dspy.OpenAI = dsp.GPT3
    def __init__(
        self,
        model: str = "gpt-3.5-turbo-instruct",
        api_key: Optional[str] = None,
        api_provider: Literal["openai", "azure"] = "openai",
        api_base: Optional[str] = None,
        model_type: Literal["chat", "text"] = None,
        **kwargs,
    ):
```
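As a sketch of how these parameters fit together, the following instantiation passes a few of the generation kwargs listed above. The values are illustrative rather than library defaults, and reading the key from the environment is an assumption about your setup:

```python
import os

import dspy

# Illustrative instantiation; temperature/max_tokens/n values are arbitrary.
gpt_client = dspy.OpenAI(
    model="gpt-3.5-turbo-instruct",
    api_key=os.getenv("OPENAI_API_KEY"),  # assumption: key stored in the environment
    model_type="text",
    temperature=0.7,
    max_tokens=150,
    n=1,
)
```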
## Under the Hood

### `__call__(self, prompt: str, only_completed: bool = True, return_sorted: bool = False, **kwargs) -> List[Dict[str, Any]]`

**Parameters:**

- `prompt` (str): Prompt to send to OpenAI.
- `only_completed` (bool, optional): Flag to return only completed responses and ignore completions truncated due to length. Defaults to True.
- `return_sorted` (bool, optional): Flag to sort the completion choices using the returned averaged log-probabilities. Defaults to False.
- `**kwargs`: Additional keyword arguments for the completion request.

**Returns:**

- `List[Dict[str, Any]]`: List of completion choices.
Internally, the method handles the specifics of preparing the request prompt and the corresponding payload to obtain the response.

After generation, the completions are post-processed based on the `model_type` parameter. If the parameter is set to `"chat"`, the generated content is found under `choice["message"]["content"]`; otherwise, the generated text is under `choice["text"]`.
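A minimal sketch of that post-processing step, assuming the response choices follow the OpenAI API's chat and text completion schemas (the helper name `_get_choice_text` is illustrative, not the library's internal function):

```python
from typing import Any, Dict

def _get_choice_text(choice: Dict[str, Any], model_type: str) -> str:
    """Extract generated text from a completion choice (illustrative helper)."""
    if model_type == "chat":
        # Chat-style responses nest the text under message.content
        return choice["message"]["content"]
    # Text-completion responses expose the text directly
    return choice["text"]
```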
## Using the OpenAI client

```python
import dspy

turbo = dspy.OpenAI(model='gpt-3.5-turbo')
```
Sending Requests via OpenAI Client
- Recommended Configure default LM using
dspy.configure
.
This allows you to define programs in DSPy and simply call modules on your input fields, having DSPy internally call the prompt on the configured LM.
dspy.configure(lm=turbo)
#Example DSPy CoT QA program
qa = dspy.ChainOfThought('question -> answer')
response = qa(question="What is the capital of Paris?") #Prompted to turbo
print(response.answer)
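After running the program, you can review the exact prompt and completion DSPy exchanged with the configured LM; this sketch assumes the `inspect_history` helper available on DSPy LM clients:

```python
# Print the most recent prompt/response pair sent through `turbo`.
turbo.inspect_history(n=1)
```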
- Generate responses using the client directly.

```python
response = turbo(prompt='What is the capital of France?')
print(response)
```
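Per-request generation arguments can also be passed through the call itself, as described in the `__call__` section above; a hedged example using the `n` and `temperature` parameters listed earlier (values are illustrative):

```python
# Sample three completions at a higher temperature; returns a list of choices.
responses = turbo(prompt='What is the capital of France?', n=3, temperature=0.9)
for completion in responses:
    print(completion)
```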