
Trace Your AI Applications and Collect User Feedback

Learn to implement OpenTelemetry tracing for AI applications using Azure AI Foundry, including automatic instrumentation and user feedback collection.

Skills You'll Learn

Python
Azure AI Foundry
OpenTelemetry
Tracing
Modules: 4
Duration: 1 hour

Lab Modules

4 steps
Log in to Your Azure Account Using the Azure Portal
Enable Tracing in Your Project
Instrument the OpenAI SDK
Add Custom Spans and User Feedback

Lab Overview

OpenTelemetry is an observability framework that captures telemetry data from AI applications by automatically recording API calls, timing, and custom events. Azure AI Foundry integrates with OpenTelemetry to monitor AI model calls and application behavior, helping organizations track performance and debug issues in production environments.
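
To make the idea concrete, here is a minimal, self-contained sketch of OpenTelemetry tracing in Python. It uses only the core opentelemetry-sdk package and prints finished spans to the console; the span and attribute names are illustrative placeholders, not taken from the lab:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Register a tracer provider that prints finished spans to stdout.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("lab-demo")

    # Each operation you want to observe runs inside a span.
    with tracer.start_as_current_span("call-ai-model") as span:
        span.set_attribute("app.prompt_length", 42)  # illustrative attribute

In the lab, the console exporter is swapped for one that ships the same spans to Application Insights, so nothing about the span-creation code has to change.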

In this lab, you will implement OpenTelemetry tracing for AI applications using Azure AI Foundry and collect user feedback data. You'll learn how to set up automatic instrumentation for AI model calls, create custom spans and attributes for application context, and implement user feedback collection to track user satisfaction alongside technical metrics.
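
The automatic instrumentation step can be as small as the sketch below. It assumes the opentelemetry-instrumentation-openai-v2 package and its OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT environment variable; the lab itself may use a different instrumentation library, so treat this as one possible shape rather than the lab's exact code:

    import os
    from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

    # Opt in to recording prompt and completion text on spans. This is off
    # by default because conversation content may be sensitive.
    os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"

    # After this call, chat completions made through the OpenAI client are
    # wrapped in spans automatically, with no per-call code changes.
    OpenAIInstrumentor().instrument()

Once instrumented, an ordinary client.chat.completions.create(...) call produces a span carrying details such as the model name, latency, and token usage, plus message content when capture is enabled.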

Objectives

Upon completion of this intermediate-level lab, you will be able to:

  • Configure an Azure AI Foundry environment with Application Insights for telemetry collection (see the sketch after this list)
  • Deploy AI models and set up authentication for development environments
  • Implement automatic OpenTelemetry instrumentation for the OpenAI SDK to capture API calls and conversation content
  • Create custom spans and attributes to add application context to traces
  • View trace data in the Azure AI Foundry portal, including spans, attributes, and events
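
As a rough preview of the first and fourth objectives, the sketch below routes OpenTelemetry output to the Application Insights resource attached to a Foundry project, then records a custom span with a user-feedback event. It assumes the azure-ai-projects, azure-identity, and azure-monitor-opentelemetry packages; the endpoint value and the session and feedback names are placeholders, and the exact client surface (such as telemetry.get_connection_string) may vary by SDK version:

    from azure.ai.projects import AIProjectClient
    from azure.identity import DefaultAzureCredential
    from azure.monitor.opentelemetry import configure_azure_monitor
    from opentelemetry import trace

    # Fetch the Application Insights connection string attached to the
    # Foundry project, then route all OpenTelemetry data to it.
    project = AIProjectClient(
        endpoint="https://<your-project-endpoint>",  # placeholder endpoint
        credential=DefaultAzureCredential(),
    )
    connection_string = project.telemetry.get_connection_string()
    configure_azure_monitor(connection_string=connection_string)

    tracer = trace.get_tracer(__name__)

    # A custom span carrying application context plus a user-feedback event,
    # so satisfaction data lands next to the technical metrics.
    with tracer.start_as_current_span("chat-turn") as span:
        span.set_attribute("app.session_id", "session-123")       # illustrative
        span.add_event("user_feedback", {"rating": "thumbs_up"})  # illustrative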

Who is this lab for?

This lab is designed for:

  • Software developers building AI-powered applications who need to implement production-ready observability
  • DevOps engineers responsible for monitoring and maintaining AI applications in cloud environments
  • AI/ML engineers who want to understand how their models perform in real-world user interactions
  • Site reliability engineers focused on ensuring AI application performance and user experience
  • Technical architects designing observability strategies for enterprise AI solutions
  • Product managers who need to understand user satisfaction and AI application performance metrics