Manual evaluation in Azure AI Studio lets you continuously iterate on and evaluate your prompt against your test data in a single interface. You can import new data or choose one of your project's existing datasets.
In this demo, we'll show how to configure a prompt and run a manual evaluation.
Disclosure: This demo contains an AI-generated voice.
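If you'd rather script the same loop instead of using the Studio UI shown in the video, here is a minimal sketch using the azure-ai-evaluation Python SDK to score a JSONL test set with a built-in relevance evaluator. The deployment name, environment variables, and file name are placeholder assumptions, not values from the demo.

import os
from azure.ai.evaluation import evaluate, RelevanceEvaluator

# Model used to judge responses (placeholder deployment name).
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": "gpt-4o",
}

# Each line of test_data.jsonl is assumed to hold a {"query": ..., "response": ...} record.
results = evaluate(
    data="test_data.jsonl",
    evaluators={"relevance": RelevanceEvaluator(model_config=model_config)},
)
print(results["metrics"])  # aggregate scores across the test set

Re-running the same script after fine-tuning, as the demo does in the UI, gives a before-and-after comparison on identical test data.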
Chapters:
00:00 - Introduction
00:26 - Configure the prompt
00:59 - Upload test data
01:24 - Run the evaluation
02:40 - Fine-tune the model
02:45 - Re-run the evaluation
Resources:
Azure AI Studio - [ Link ]
Responsible AI Developer Resources - [ Link ]
Learn more about Harm Categories and Severity Levels - [ Link ]