1. Install the openai Python client with pip install openai.
2. Create an API key.
3. Set your API key with openai.api_key = '<YOUR_AUTOMORPHIC_API_KEY_HERE>'.
4. Get the ID of the model you want to inference. You can do this by going to a model's page and copying the ID from the URL. For example, the ID of the model at https://automorphic.ai/dashboard/conduit/models/clmq8qfq60000q6kzv3nof4fk is clmq8qfq60000q6kzv3nof4fk. Alternatively, you can copy the model's ID from its settings page.

You can start inferencing a fine-tuned model simply by changing the api_base to https://api.automorphic.ai/conduit/v1.

For single-turn conversations (input and output), you can inference a model with:

import openai

openai.api_key = '<YOUR_AUTOMORPHIC_API_KEY_HERE>'
openai.api_base = 'https://api.automorphic.ai/conduit/v1'

completion = openai.Completion.create(model='<YOUR_MODEL_ID_HERE>', prompt="<YOUR_PROMPT_HERE>")

print(completion['choices'][0]['text'])
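The Completion response is dict-like, with the generated text nested under choices. If you find yourself pulling text out of responses in several places, a small helper keeps that indexing in one spot. This is just an illustrative sketch: extract_completion_text and the mocked dict below are not part of the API, they only mirror the response shape used above.

```python
def extract_completion_text(response, choice=0):
    # Completion-style responses carry generated text under choices[i]['text'].
    return response['choices'][choice]['text']

# Works on the live response object (dict-like) or a plain dict of the same shape:
mock_response = {'choices': [{'text': 'Hello from Conduit!'}]}
print(extract_completion_text(mock_response))
```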

For multi-turn conversations (chat history), you can inference a model with:

import openai

openai.api_key = '<YOUR_AUTOMORPHIC_API_KEY_HERE>'
openai.api_base = 'https://api.automorphic.ai/conduit/v1'

messages = [
    {'role': 'user', 'content': "<SOME_USER_MESSAGE>"},
    {'role': 'assistant', 'content': "<SOME_ASSISTANT_MESSAGE>"},
]

response = openai.ChatCompletion.create(
    model='<YOUR_MODEL_ID_HERE>',
    messages=messages
)

print(response['choices'][0]['message']['content'])
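To continue a conversation across turns, append the assistant's reply to messages before the next request, so the model always sees the full history. A minimal sketch, where the append_turn helper and the placeholder reply are illustrative (in practice the reply is response['choices'][0]['message']['content']):

```python
def append_turn(messages, role, content):
    # Chat history is just an ordered list of {'role', 'content'} dicts.
    messages.append({'role': role, 'content': content})
    return messages

history = []
append_turn(history, 'user', 'Summarize our product in one line.')
# ... call openai.ChatCompletion.create(model=..., messages=history) here ...
reply = '<ASSISTANT_REPLY>'  # placeholder for the model's actual answer
append_turn(history, 'assistant', reply)
print(len(history))  # two turns recorded so far
```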