You can install the required dependencies by running:

```
pip install .[openai]
```

or from the `requirements.txt` file.

This handler's engine (`openai_engine`) should be used as a value for the `engine` parameter in the `USING` clause of the `CREATE MODEL` statement.
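As a sketch of the setup, the engine can be registered with the `CREATE ML_ENGINE` statement (the `openai_api_key` parameter name follows MindsDB conventions; verify it against your MindsDB version):

```sql
-- Sketch: register the OpenAI engine under the name openai_engine.
-- The openai_api_key parameter name is an assumption; check your MindsDB version.
CREATE ML_ENGINE openai_engine
FROM openai
USING
    openai_api_key = 'your-openai-api-key';
```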
The `CREATE MODEL` statement is used to create, train, and deploy models within MindsDB.
MindsDB uses the `gpt-3.5-turbo` model by default, but you can use the `gpt-4` model as well by passing it to the `model_name` parameter in the `USING` clause of the `CREATE MODEL` statement.

### Supported Models

To see all supported models, including chat models, embedding models, and more, click here.

The `USING` clause takes more parameters depending on the operation mode. Follow the next section to learn about the available operation modes.
#### Answering Questions without Context

| Expression | Description |
| --- | --- |
| `openai_model` | The model name is `openai_model`, and it resides inside the `mindsdb` project by default. Learn more about MindsDB projects here. |
| `answer` | The value to be predicted. |
| `engine` | The `openai` engine is used. |
| `question_column` | The column that stores input data. |
| `model_name` | Optional. By default, the `text-davinci-002` model is used. If you prefer a cheaper model or a model fine-tuned outside of MindsDB, use this parameter. |
| `api_key` | Your OpenAI API key. |
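Based on the parameters above, a model for answering questions without context might be created as follows (a sketch; the `question` column name and the API key are placeholders):

```sql
-- Sketch: answer questions stored in the question column
CREATE MODEL mindsdb.openai_model
PREDICT answer
USING
    engine = 'openai',
    question_column = 'question',
    api_key = 'your-openai-api-key';  -- placeholder
```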
#### Answering Questions with Context

| Expression | Description |
| --- | --- |
| `openai_model` | The model name is `openai_model`, and it resides inside the `mindsdb` project by default. |
| `answer` | The value to be predicted. |
| `engine` | The `openai` engine is used. |
| `question_column` | The column that stores the question. |
| `context_column` | The column that stores the context. |
| `api_key` | Your OpenAI API key. |
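With context, the statement gains a `context_column` parameter (again a sketch with placeholder column names):

```sql
-- Sketch: answer questions using both a question and a context column
CREATE MODEL mindsdb.openai_model
PREDICT answer
USING
    engine = 'openai',
    question_column = 'question',
    context_column = 'context',
    api_key = 'your-openai-api-key';  -- placeholder
```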
#### Prompt Completion

This mode answers a query defined in the `prompt_template` parameter.

| Expression | Description |
| --- | --- |
| `openai_model` | The model name is `openai_model`, and it resides inside the `mindsdb` project by default. |
| `answer` | The value to be predicted. |
| `engine` | The `openai` engine is used. |
| `prompt_template` | The query to be answered. Please note that this parameter can be overridden at prediction time. |
| `max_tokens` | The maximum token cost of the prediction. Please note that this parameter can be overridden at prediction time. |
| `temperature` | How risky the answers are. A value of 0 marks a well-defined answer, and a value of 0.9 marks a more creative answer. Please note that this parameter can be overridden at prediction time. |
| `api_key` | Your OpenAI API key. |
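In prompt completion mode, the statement might look like this (a sketch; the prompt text, token limit, and temperature are illustrative values):

```sql
-- Sketch: prompt completion with overridable generation parameters
CREATE MODEL mindsdb.openai_model
PREDICT answer
USING
    engine = 'openai',
    prompt_template = 'Describe the sentiment of this review: {{review}}',  -- illustrative prompt
    max_tokens = 100,     -- illustrative token cap
    temperature = 0.3,    -- illustrative; 0 = well-defined, 0.9 = creative
    api_key = 'your-openai-api-key';  -- placeholder
```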
#### Operation Modes

You can define the operation mode with the `mode` parameter in the `USING` clause. The available modes are `default`, `conversational`, `conversational-full`, `image`, and `embedding`.
- The `default` mode is used if no mode is specified. The model replies to the `prompt_template` message.
- The `conversational` mode enables the model to read and reply to multiple messages.
- The `conversational-full` mode enables the model to read and reply to multiple messages, one reply per message.
- The `image` mode is used to create an image instead of a text reply.
- The `embedding` mode enables the model to return output in the form of embeddings.

The example below uses the following values:

| Expressions | Values |
| --- | --- |
| `project_name` | `mindsdb` |
| `predictor_name` | `sentiment_classifier` |
| `target_column` | `sentiment` |
| `engine` | `openai` |
| `prompt_template` | `predict the sentiment of the text:{{review}} exactly as either positive or negative or neutral` |
In the `prompt_template` parameter, we use a placeholder for a text value that comes from the `review` column, that is, `text:{{review}}`. These values define the `sentiment_classifier` model.
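Putting the values from the table together, the model could be created like this (a sketch assembled from the values above; adjust the API key handling to your setup):

```sql
-- Sketch: sentiment classifier built from the values listed above
CREATE MODEL mindsdb.sentiment_classifier
PREDICT sentiment
USING
    engine = 'openai',
    prompt_template = 'predict the sentiment of the text:{{review}} exactly as either positive or negative or neutral';
```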
Once the model status is `complete`, we can query for predictions.
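For example, predictions can be made for a single value or in batch by joining the model with a data table (the `example_db.demo_data.amazon_reviews` table is a hypothetical example):

```sql
-- Single prediction
SELECT review, sentiment
FROM mindsdb.sentiment_classifier
WHERE review = 'It is ok.';

-- Batch predictions by joining the model with a data table
-- (example_db.demo_data.amazon_reviews is a hypothetical table name)
SELECT input.review, output.sentiment
FROM example_db.demo_data.amazon_reviews AS input
JOIN mindsdb.sentiment_classifier AS output;
```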
Please note that you need to create the `example_db` database before using one of its tables, like in the query above.

The `FINETUNE` command creates a new version of the `openai_davinci` model. You can query all available versions as below:
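A sketch of such a query (the `models_versions` table name follows MindsDB conventions; verify it against your MindsDB version):

```sql
-- List all versions of the openai_davinci model
SELECT *
FROM models_versions
WHERE name = 'openai_davinci';
```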