Configuring AI Autodocumentation for Azure OpenAI

Michal Adamczyk - Dataedo Team, 13th March, 2025

Getting data required for Azure OpenAI model configuration

To configure a model for AI Autodocumentation in Dataedo, you need the endpoint, API key, and deployment name from Azure OpenAI.

Go to the Azure AI Foundry portal

  1. First, open the Azure AI Foundry portal from your Azure OpenAI resource.

Go to the Azure AI Foundry portal

Get Endpoint and API Key information

  1. Go to the Home page
  2. Here you can see the API key used to authorize the connection
  3. Here you can see the Endpoint of the Azure OpenAI Service to which the connection will be established

Get Azure OpenAI endpoint and API key

Get Deployment Name information

  1. Open the Deployments page
  2. Choose the deployment of the model you want to use for AI Autodocumentation. Its Name will be needed for the configuration

Get Azure OpenAI deployment name
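
Before entering these values in Dataedo, you may want to confirm that they work together. The snippet below is a minimal connectivity check, assuming the official openai Python package and an illustrative api_version; the endpoint, API key, and deployment name placeholders should be replaced with the values collected above. It is not part of Dataedo itself.

```python
# Minimal connectivity check for the collected values (illustrative sketch, not part of Dataedo).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # Endpoint from the Home page
    api_key="<your-api-key>",                                    # API key from the Home page
    api_version="2024-06-01",                                    # example API version; use one supported by your deployment
)

# The deployment Name from the Deployments page is passed as the "model" argument.
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
    max_tokens=10,
)
print(response.choices[0].message.content)
```

If the call returns a response, the endpoint, API key, and deployment name are ready to be used in the configuration below.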

Configuring the AI Autodocumentation in Dataedo Portal

Prerequisites

  • Azure OpenAI Service Endpoint
  • Azure OpenAI Service API key
  • Azure OpenAI model deployment Name
  • Admin permissions to the repository

Opening the settings

  1. Click the Settings icon at the bottom of the menu in Dataedo Portal.
  2. Click System Settings in the menu.
  3. Click the LLM (AI) engines tab.
  4. Press the [Add] button to add a new engine.

Opening AI Autodocumentation settings

Choosing AI Platform

  1. Expand the Platform list
  2. Choose AzureOpenAI

Selecting Azure OpenAI platform

AI engine settings form

Fill out all fields in the provided form and click the [Add] button. You can accept the default values, which are optimized for Dataedo Portal, or provide custom values (see the sketch after the form description for how these settings typically map to an Azure OpenAI request):

  • Engine - One of the supported AI engines
  • Engine name - Name of AI engine (editable)
  • API key - Click the eye icon and paste the API key from Azure OpenAI
  • Max tokens - Maximum number of tokens per response
  • Temperature - Sampling temperature from 0.0 to 2.0. Higher values make the output more random
  • Frequency penalty - Penalty value from -2.0 to 2.0. It helps the model avoid repeating the same words or phrases too frequently. Higher values produce more varied text
  • Presence penalty - Penalty value from -2.0 to 2.0. It encourages the model to use words that have not yet appeared in the generated text. Higher values increase the likelihood of introducing completely new concepts and ideas
  • Additional Context - Additional context to enhance the model’s output, for example a preferred language for descriptions or details about the business scope. This context is sent with the model requests.

AI engine configuration
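
For orientation, the form fields above correspond closely to the standard Azure OpenAI chat completion parameters. The sketch below shows how such settings could map onto a request; it is an assumption for illustration only, not Dataedo’s internal implementation, and all values, names, and the api_version are placeholders.

```python
# Illustrative mapping of the form fields to Azure OpenAI chat completion parameters
# (a sketch for orientation only, not Dataedo's actual implementation).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",      # "API key" field
    api_version="2024-06-01",      # example API version
)

# "Additional Context" field, sent along with the request (shown here as a system message).
additional_context = "Write descriptions in English for a retail data warehouse."

response = client.chat.completions.create(
    model="<your-deployment-name>",  # deployment behind the configured engine
    messages=[
        {"role": "system", "content": additional_context},
        {"role": "user", "content": "Describe the purpose of the table dbo.Customers."},
    ],
    max_tokens=512,         # "Max tokens"        - maximum number of tokens per response
    temperature=0.7,        # "Temperature"       - 0.0 to 2.0, higher values are more random
    frequency_penalty=0.0,  # "Frequency penalty" - -2.0 to 2.0, higher values discourage repetition
    presence_penalty=0.0,   # "Presence penalty"  - -2.0 to 2.0, higher values encourage new topics
)
print(response.choices[0].message.content)
```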