Getting data required for Azure OpenAI model configuration
To configure a model for Autodocumentation in Dataedo, you need the endpoint, API key, and deployment name from your Azure OpenAI resource.
Go to the Azure AI Foundry portal
- First, open the Azure AI Foundry portal for your Azure OpenAI resource.
Get Endpoint and API Key information
- Go to the Home page
- Here you can see the API key used for connection authorization
- Here you can see the Endpoint of the Azure OpenAI Service to which the connection will be established
Get Deployment Name information
- Open the Deployments page
- Choose the deployment of the model you want to use for Autodocumentation. Its Name will be needed for the configuration
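If you want to verify the three values before entering them in Dataedo, you can assemble a standard Azure OpenAI chat completions request yourself. The sketch below only builds the request URL and headers following the public Azure OpenAI REST API conventions; the endpoint, API key, deployment name, and api-version shown are placeholder assumptions, and actually sending the request is left out.

```python
# Sketch: assemble an Azure OpenAI chat completions request from the three
# values collected above. All values below are placeholders -- use your own.
ENDPOINT = "https://my-resource.openai.azure.com"   # from the Home page
API_KEY = "<your-api-key>"                          # from the Home page
DEPLOYMENT = "my-gpt-deployment"                    # from the Deployments page
API_VERSION = "2024-02-01"                          # any supported api-version

def build_request(endpoint: str, deployment: str, api_key: str, api_version: str):
    """Return the URL and headers for an Azure OpenAI chat completions call."""
    url = (f"{endpoint.rstrip('/')}/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    return url, headers

url, headers = build_request(ENDPOINT, DEPLOYMENT, API_KEY, API_VERSION)
print(url)
```

A `200` response to a POST against this URL (with a minimal `messages` body) confirms that the endpoint, key, and deployment name are all valid together.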
Configuring the AI Autodocumentation in Dataedo Portal
Prerequisites
- Azure OpenAI Service Endpoint
- Azure OpenAI Service API key
- Azure OpenAI model deployment Name
- Admin permissions to the repository
Opening the settings
- Click the Settings icon at the bottom of the menu in Dataedo Portal.
- Click System Settings in the menu.
- Click the LLM (AI) engines tab.
- Click the [Add] button to add a new engine.
Choosing AI Platform
- Expand the Platform list
- Choose AzureOpenAI
AI engine settings form
Fill out all fields in the form and click the [Add] button. You can accept the default values optimized for Dataedo Portal or enter custom values:
- Engine - The AI engine to use (one of the supported engines)
- Engine name - Name of AI engine (editable)
- API key - Click the eye icon and paste the API key from Azure OpenAI
- Max tokens - Maximum number of tokens per response
- Temperature - Sampling temperature from 0.0 to 2.0. Higher values make the output more random
- Frequency penalty - Penalty value from -2.0 to 2.0. Helps avoid repeating the same words or phrases too frequently. Higher values produce more varied text
- Presence penalty - Penalty value from -2.0 to 2.0. Encourages the model to use words that have not yet appeared in the generated text. Higher values increase the likelihood of generating completely new concepts and ideas
- Additional Context - Additional context to enhance the model's output, for example a preferred language for descriptions or details about the business scope. This context is sent with every model request.
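To illustrate how these form fields relate to an actual model call, the sketch below builds the JSON body of an Azure OpenAI chat completions request. The parameter names follow the public Azure OpenAI REST API; the specific values, and the assumption that Additional Context travels as a system message, are illustrative, not a description of Dataedo's internal implementation.

```python
# Sketch: how the form fields above map onto an Azure OpenAI chat
# completions payload. Values are illustrative, not Dataedo defaults.
def build_payload(max_tokens: int, temperature: float,
                  frequency_penalty: float, presence_penalty: float,
                  additional_context: str, prompt: str) -> dict:
    return {
        "messages": [
            # Assumption: Additional Context is sent as a system message.
            {"role": "system", "content": additional_context},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,                # Max tokens field
        "temperature": temperature,              # 0.0 to 2.0
        "frequency_penalty": frequency_penalty,  # -2.0 to 2.0
        "presence_penalty": presence_penalty,    # -2.0 to 2.0
    }

payload = build_payload(
    max_tokens=800,
    temperature=0.7,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    additional_context="Write descriptions in English; data concerns retail sales.",
    prompt="Describe the table dbo.Orders.",
)
```

Raising temperature makes phrasing more varied between runs, while the two penalties discourage repetition within a single generated description.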