Developing AI-Powered Apps with Python and Azure AI Studio
This course equips participants to design, develop, and deploy AI solutions using Azure AI Studio. You’ll learn to collaborate on projects, manage resources, and apply advanced AI techniques such as prompt engineering, retrieval augmented generation, and Python AI orchestration frameworks like Prompt Flow and LangChain. The course also covers fine-tuning models for accuracy, ensuring responsible AI practices, and monitoring applications in production.
Who should attend this course?
This course is designed for developers, data scientists, and AI operators who want to leverage the full AI app development toolset provided by Azure AI Foundry.
Prerequisites
A basic understanding of Python is recommended. If needed, attend our Python for data engineering training first.
Introduction to Azure AI Foundry
Azure AI Foundry is a comprehensive platform that streamlines AI
solution development and deployment. In this chapter, discover how to use hubs for building
and testing AI solutions, projects for grouping and deploying AI apps, and tools
for managing resources, all while ensuring responsible AI practices are followed.
- What is Azure AI Foundry?
- Collaborative AI Development
- Developing and Testing AI Solutions in Hubs
- Deploying AI Applications with Projects
- Managing Resources and Projects
Ready-to-Use AI Models with Azure AI Services
Azure AI services provide a comprehensive suite of out-of-the-box and customizable AI tools, APIs, and pre-trained
models that detect sentiment, recognize speakers, analyze images, and more.
Azure AI Foundry brings together these services into a single, unified development environment.
- Azure AI Services Overview
- Azure AI Language
- Azure AI Vision
- Azure AI Speech
- Azure AI Document Intelligence
Azure OpenAI and Large Language Model Fundamentals
This module introduces Azure OpenAI and the GPT family of Large Language Models (LLMs). You’ll
learn about available LLM models, how to configure and use them in the Azure Portal, and the Transformer
architecture behind models like GPT-4. The latest GPT models offer Function Calling, enabling connections
to external tools, services, or code, allowing the creation of AI-powered Copilots. Additionally, you’ll discover
how Azure OpenAI provides a secure way to use LLMs without exposing your company’s private data.
- Introducing OpenAI and Large Language Models
- The Transformer Model
- What is Azure OpenAI?
- Configuring Deployments
- Understanding Tokens
- LLM Pricing
- Azure OpenAI Chat Completions API
- Role Management: System, User and Assistant
- Azure OpenAI SDK
- Extending LLM capabilities with Function Calling
- LAB: Deploying and Using Azure OpenAI
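The Chat Completions message roles covered above can be sketched as follows. The helper builds the system/user/assistant message list; the guarded section shows a call through the Azure OpenAI SDK, where the endpoint, key, API version, and deployment name (`gpt-4o` here) are placeholders you would replace with your own deployment's values.

```python
import os

def build_messages(system_prompt, user_question, history=None):
    """Assemble a Chat Completions message list using the
    system / user / assistant roles."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_turn, assistant_turn in (history or []):
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_question})
    return messages

# The actual call requires the `openai` package and real credentials;
# it only runs when the (placeholder) environment variables are set.
if os.environ.get("AZURE_OPENAI_ENDPOINT") and os.environ.get("AZURE_OPENAI_API_KEY"):
    from openai import AzureOpenAI
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",  # placeholder; pick a current version
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # the *deployment* name you chose, not the model family
        messages=build_messages("You are a concise assistant.", "What is RAG?"),
    )
    print(response.choices[0].message.content)
```

Note that the system message steers the model's overall behavior, while prior user/assistant turns give it conversational memory: the API itself is stateless, so the full history is resent on every call (which is also why token usage grows with conversation length).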
Model Context Protocol (MCP): Standardizing LLM Integrations
This chapter explores the Model Context Protocol (MCP), an
open standard revolutionizing how applications provide context
to LLMs. MCP acts as a ‘USB-C for AI,’ standardizing connections
between LLMs and various data sources or tools. Crucially, MCP
empowers companies to define, once and for all, precisely how
their proprietary data and tools are utilized by AI systems.
- What is the Model Context Protocol (MCP)?
- Understanding MCP’s Core Components: Hosts, Clients and Servers
- Defining MCP Prompts, Tools and Resources
- Building MCP Servers with Python
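A real MCP server would be built with the official `mcp` Python SDK; the SDK-free sketch below only illustrates the shape of the protocol, where a server registers tools and answers a client's `tools/list` and `tools/call` requests. The tool name and the order database are invented for illustration.

```python
# Toy illustration of the MCP server side: tools are registered once,
# then discovered and invoked by any MCP client. A production server
# would use the official `mcp` SDK rather than this hand-rolled registry.

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool (like the SDK's decorator)."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_order_status(order_id: str) -> str:
    """Look up an order in a (hypothetical) company database."""
    fake_db = {"A-1001": "shipped", "A-1002": "processing"}
    return fake_db.get(order_id, "unknown order")

def list_tools():
    """Answer a client's tools/list request with names and descriptions."""
    return [{"name": name, "description": (fn.__doc__ or "").strip()}
            for name, fn in TOOLS.items()]

def call_tool(name, arguments):
    """Answer a client's tools/call request by dispatching to the tool."""
    return TOOLS[name](**arguments)
```

The point of the standard is that this registration happens once, on the server: any MCP-capable host (an IDE, a chat app, your own agent) can then discover and call `get_order_status` without bespoke integration code.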
Deploying AI Models
The cost and quality of your AI-powered app depend largely on
your choice of AI model and how you deploy it. Learn about the
available model catalog, featuring state-of-the-art Azure OpenAI
models and open-source models from Hugging Face, Meta, Google,
Microsoft, Mistral, and many more.
- Model Catalog Overview
- Model Benchmarks
- Selecting the Best Deployment Mode
Retrieving Semantically Related Data with Vector Search
Vector search is a powerful technique that allows you to retrieve semantically
related data from large datasets such as company documents or databases.
This chapter will teach you how vector search works and how it enables you to
find relevant information without depending on exact keyword based search terms
or language of the information in the dataset.
- Capture Semantic Meaning with Embeddings
- Vector Search
- Vector Search Design Considerations
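The chapter's core idea can be sketched in a few lines: documents and queries are embedded as vectors, and search ranks documents by cosine similarity to the query instead of by keyword overlap. The 3-dimensional embeddings below are toy values; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Direction-based similarity between two embeddings (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def vector_search(query_embedding, documents, top_k=2):
    """Rank documents by semantic closeness to the query embedding."""
    scored = [(cosine_similarity(query_embedding, emb), text)
              for text, emb in documents]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Invented documents with toy embeddings for illustration.
docs = [
    ("Reset your password in the account portal", [0.9, 0.1, 0.0]),
    ("Quarterly revenue grew by 12%",             [0.0, 0.2, 0.9]),
    ("How to change your login credentials",      [0.7, 0.4, 0.2]),
]
# A query about forgotten logins lands near both password documents,
# even though it shares no exact keywords with either.
results = vector_search([0.85, 0.2, 0.05], docs)
```

Because similarity is computed on meaning-capturing vectors, a query phrased in different words, or even a different language, can still retrieve the right document.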
Retrieval Augmented Generation with Azure AI Search
Azure AI Search enables the Retrieval Augmented Generation (RAG)
design pattern, enhancing an LLM's knowledge with your
own company-specific data. This chapter explores the RAG design pattern by incorporating Azure AI Search
into your Semantic Kernel Python applications.
- What is Azure AI Search?
- Retrieval Augmented Generation with Semantic Kernel
- Enhancing AI Models with your Own Data: Blob Storage, Azure SQL, OneLake…
- Hybrid Search with Semantic Reranking
- Use AI Enrichment to extract insights
- Fine-tuning vs RAG
- LAB: Chat with Azure OpenAI models using your own data
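In the RAG pattern above, retrieval is handled by Azure AI Search; what follows is a sketch of just the "augmentation" step, where retrieved chunks are folded into the prompt so the model answers from your data rather than its training set. The policy snippets are invented examples.

```python
def build_rag_prompt(question, retrieved_chunks):
    """Ground the model in retrieved company data: the 'A' in RAG."""
    sources = "\n\n".join(
        f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks)
    )
    system = (
        "Answer using ONLY the sources below. Cite sources as [n]. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"Sources:\n{sources}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Chunks as they might come back from an Azure AI Search query
# (invented content for illustration).
chunks = [
    "Employees accrue 25 vacation days per year.",
    "Unused vacation days expire on March 31 of the following year.",
]
messages = build_rag_prompt("How many vacation days do I get?", chunks)
```

The instruction to cite sources and to admit ignorance is what makes RAG answers auditable; combined with hybrid search and semantic reranking, it keeps responses grounded without retraining the model.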
Orchestrating AI Models using Semantic Kernel
Semantic Kernel is an open-source SDK backed by Microsoft that seamlessly integrates Large Language Models
such as OpenAI and Azure OpenAI with programming languages like Python.
It lets Large Language Models respond to natural language input by invoking and interacting
with your custom code.
- An Introduction to Semantic Kernel
- Integrating LLMs in your applications
- Keeping track of Token Usage
- Enable AI Models to execute code using Plugins
- Control AI Models with Filters
- Best practices for dependency injection in managing AI services
- Observable AI Apps with OpenTelemetry
- LAB: Create a Natural Language to SQL Translation Copilot
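Under the hood, a Semantic Kernel plugin works by describing your Python functions to the model as function-calling tool definitions. The SDK-free sketch below uses `inspect` to derive such a definition from a plain function, roughly what the kernel generates from an annotated plugin method; the SQL function and its behavior are hypothetical.

```python
import inspect

# Map Python annotations to the JSON Schema types used by the
# Chat Completions function-calling API.
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_tool_schema(fn):
    """Derive a function-calling tool definition from a plain Python
    function, similar to what Semantic Kernel builds for a plugin."""
    params = inspect.signature(fn).parameters
    properties = {
        name: {"type": TYPE_MAP.get(p.annotation, "string")}
        for name, p in params.items()
    }
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(params),
            },
        },
    }

def run_sql_query(table: str, limit: int) -> str:
    """Run a read-only SQL query against the sales database."""
    # Placeholder: a real plugin would execute this against a database.
    return f"SELECT * FROM {table} LIMIT {limit}"

schema = to_tool_schema(run_sql_query)
```

When the model decides the user's natural-language request needs data, it responds with this function's name and arguments instead of text, and the kernel invokes your code, which is exactly the mechanism behind the natural-language-to-SQL copilot in the lab.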
Working with Open-Source Language Models
This chapter empowers you to bring powerful AI capabilities to end-user
environments like mobile devices, personal computers, and browsers, improving scalability,
cost, and performance. Additionally, you will learn how to deploy and host your own open-source
language models behind an API that you fully control.
- The Phi-3 Family of Small Language Models
- Deploying AI Models on Mobile and Edge Devices with ONNX Runtime
- Hosting and Deploying Language Models on-prem and in the cloud with Ollama
Prompt Engineering and Design Patterns
In this chapter, you’ll explore advanced techniques that allow you to control the
model’s output, transforming generic responses into precise, valuable results. Additionally,
the chapter covers emerging design patterns in the field of Gen AI app development that help
you increase the quality of model responses and reduce costs.
- What is Prompt Engineering?
- Few-Shot Prompting
- Structured Query Generation
- Verifying Model responses with Hallucination Detection
- Saving costs with Semantic Caching
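The semantic caching pattern from the list above can be sketched as follows: cache each answered prompt by its embedding, and serve the cached answer when a new prompt is semantically close enough, skipping a paid model call. The embeddings and threshold here are toy values chosen for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

class SemanticCache:
    """Serve a cached LLM answer when a new prompt is semantically close
    to one already answered, avoiding a paid model call."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def lookup(self, embedding):
        best = max(self.entries,
                   key=lambda e: cosine(embedding, e[0]),
                   default=None)
        if best and cosine(embedding, best[0]) >= self.threshold:
            return best[1]
        return None  # cache miss: call the model, then store() the answer

    def store(self, embedding, response):
        self.entries.append((embedding, response))

cache = SemanticCache(threshold=0.9)
cache.store([1.0, 0.0, 0.1], "Our refund policy allows returns within 30 days.")
hit = cache.lookup([0.98, 0.05, 0.12])   # near-duplicate question
miss = cache.lookup([0.0, 1.0, 0.0])     # unrelated question
```

The threshold is the key design choice: too low and users get stale or wrong answers to genuinely different questions; too high and the cache rarely fires. Production caches also evict old entries and scope the cache per user or tenant.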
Building Agentic AI Systems
This chapter introduces building agentic AI systems. Learn what
agents are, suitable use cases, essential design
foundations including models, tools, and instructions, different
orchestration patterns, and the importance of
implementing robust guardrails and human oversight mechanisms.
- Introduction to Agentic AI Systems
- Identifying Ideal Use Cases for Agents
- Core Principles of Agent Design (Models, Tools, Instructions)
- Orchestrating Agent Workflows: Single and Multi-Agent Approaches
- Implementing Guardrails for Safe and Reliable Agents
- Integrating Human Oversight and Intervention Strategies
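The guardrail and human-oversight ideas above can be sketched as one agent step: the model proposes a tool call, a guardrail classifies it, and risky actions are escalated to a human instead of auto-executed. The tool names and approval policy are invented for illustration.

```python
# Toy guardrail layer for an agent. Safe tools run automatically,
# sensitive ones require human approval, everything else is blocked.
SAFE_TOOLS = {"lookup_order", "send_status_email"}
NEEDS_HUMAN = {"issue_refund"}

def guardrail(action):
    """Classify a proposed action: allow, escalate, or block."""
    if action["tool"] in SAFE_TOOLS:
        return "allow"
    if action["tool"] in NEEDS_HUMAN:
        return "escalate"
    return "block"

def run_agent_step(proposed_action, execute, ask_human):
    """Vet one model-proposed tool call before it touches the world."""
    decision = guardrail(proposed_action)
    if decision == "allow":
        return execute(proposed_action)
    if decision == "escalate" and ask_human(proposed_action):
        return execute(proposed_action)
    return f"blocked: {proposed_action['tool']}"

# Stand-ins: `execute` would call the real tool, `ask_human` would
# surface an approval request in a review UI.
result = run_agent_step(
    {"tool": "issue_refund", "args": {"order": "A-1001", "amount": 40}},
    execute=lambda a: f"executed: {a['tool']}",
    ask_human=lambda a: a["args"]["amount"] < 100,
)
```

Keeping the guardrail outside the model is the point: the agent's autonomy is bounded by deterministic code and human judgment, not by the prompt alone.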
Testing and Moderating AI Models
How can you ensure an LLM provides relevant and coherent
answers to users’ questions based on the correct information? How do you prevent an LLM from responding
inappropriately? Discover the answers to these questions and more by exploring
evaluation metrics in Azure AI Foundry and the Azure AI Content Safety
service.
- Ensuring Coherent and Relevant LLM Responses
- Utilizing Correct Information in AI Answers
- Preventing Inappropriate LLM Responses
- Exploring Custom Evaluation Metrics in Azure AI Foundry
- Leveraging Azure AI Content Safety Service
- Enhancing AI Performance and Safety
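In production, the severity scores come from the Azure AI Content Safety service; the stand-in below fakes them with a keyword heuristic so the gating pattern itself, score each response per category and block above a threshold, can be shown. The categories and terms are illustrative only.

```python
# Stand-in for a content-safety gate. A real deployment would call the
# Azure AI Content Safety service for graded per-category severity
# scores; this keyword heuristic merely fakes such scores.
BLOCKLIST = {
    "violence": ["attack", "weapon"],
    "self_harm": ["hurt myself"],
}

def fake_severity_scores(text):
    """Pretend severity scoring (0 = clean, 4 = high severity)."""
    lowered = text.lower()
    return {
        category: 4 if any(term in lowered for term in terms) else 0
        for category, terms in BLOCKLIST.items()
    }

def moderate(response_text, threshold=2):
    """Block a model response whose severity meets the threshold
    in any category, before it ever reaches the user."""
    scores = fake_severity_scores(response_text)
    flagged = [c for c, s in scores.items() if s >= threshold]
    return {"allowed": not flagged, "categories": flagged}
```

The same gate can sit on both sides of the model, screening user input before it reaches the LLM and screening the LLM's output before it reaches the user.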
Making your AI Apps Traceable
Ensuring your AI app behaves as expected doesn’t end at deployment. It’s
crucial to monitor its interactions with users while it’s running in
production. Learn how Azure AI Foundry integrates with industry standards
like OpenTelemetry to give you a clear and transparent view of your
app’s behavior.
- Monitoring AI App Interactions in Production
- Integrating OpenTelemetry with Azure AI Foundry
- Tracing and Debugging
- Capturing Model Calls and Latency Issues
- Setting Up Local Testing Environments
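Real tracing wires the OpenTelemetry SDK into Azure AI Foundry's monitoring; the minimal recorder below only illustrates what a span captures, a named, timed unit of work with attributes such as the deployment and token counts, using a fake model call in place of a network request.

```python
import time
from contextlib import contextmanager

SPANS = []  # a real app exports spans via the OpenTelemetry SDK instead

@contextmanager
def span(name, **attributes):
    """Record a named, timed unit of work, like an OpenTelemetry span."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append({
            "name": name,
            "duration_ms": (time.perf_counter() - start) * 1000,
            "attributes": attributes,
        })

def fake_model_call(prompt):
    time.sleep(0.01)  # stand-in for network latency to the model
    return f"answer to: {prompt}"

# Attribute names here are illustrative, not an official convention.
with span("chat.completion", deployment="gpt-4o", prompt_tokens=12):
    answer = fake_model_call("What is RAG?")
```

Because each span records its duration and context, slow or failing model calls show up immediately in the trace, which is what makes latency issues in a multi-step AI pipeline debuggable at all.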
Fine-tuning AI Models with Azure AI Foundry
This chapter explores the advantages of fine-tuning pre-trained
LLMs for higher accuracy and customized behavior compared to Retrieval Augmented
Generation (RAG). While RAG offers dynamic updates and cost-effectiveness,
fine-tuning provides superior precision for specialized tasks, making it ideal
for achieving domain-specific results.
- Introduction to Fine-Tuning LLMs
- How to decide between Fine-Tuning and RAG?
- Using Task-Specific Data for Enhanced Performance
- Reducing Hallucinations with Fine-Tuning
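Fine-tuning a chat model starts from task-specific training data in JSONL form, one conversation per line, in the same system/user/assistant message format used by the Chat Completions API. The sketch below builds such a file; the insurance-domain examples are invented.

```python
import json

# Each line of the training file is one example conversation showing
# the tone and domain knowledge the fine-tuned model should learn.
# The domain content below is invented for illustration.
training_examples = [
    {"messages": [
        {"role": "system", "content": "You are an insurance claims assistant."},
        {"role": "user", "content": "Is hail damage to my roof covered?"},
        {"role": "assistant", "content": "Hail damage is covered under the standard homeowner policy."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are an insurance claims assistant."},
        {"role": "user", "content": "How do I file a claim?"},
        {"role": "assistant", "content": "Submit the claim form within 30 days via the customer portal."},
    ]},
]

# One JSON object per line; write this out (e.g. as train.jsonl) and
# upload it through the fine-tuning workflow in Azure AI Foundry.
jsonl = "\n".join(json.dumps(example) for example in training_examples)
```

Hundreds of such examples teach the model domain behavior it then produces without retrieval at inference time, which is where fine-tuning earns its precision advantage over RAG for narrow, stable tasks.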

Book your training
Enter your details to confirm your booking.