Developing AI-Powered Apps with C# and Azure AI
In this course, you will learn to seamlessly integrate pre-built AI services and Large Language Models such as GPT-4 and Phi into your .NET development projects. You will learn how to ground Large Language Models in your own data using Azure AI Search, and you will gain hands-on experience with AI libraries such as Semantic Kernel. This course equips you with the skills to integrate advanced AI capabilities into your software solutions without needing to be a data scientist.
Who should attend this course?
This course targets professional C# developers who want to get started with the Microsoft AI platform. It is not a course for data scientists who want to build their own AI models or understand the inner workings of existing models.
Prerequisites
Participants need a solid understanding of C# and preferably some experience with Microsoft Azure.
What is Artificial Intelligence?
In this chapter you will get a short overview of what AI is and what you can do with it.
- Definitions of Artificial Intelligence
- Domains of Artificial Intelligence
- History, Current State and Future
Introduction to Azure AI Studio and Azure AI Services
Your intelligent app needs to understand its environment and make decisions. For that, you can use pre-trained Azure AI Services that detect sentiment, recognize speakers, understand pictures, and more. Azure AI Studio is a web portal that brings these services together into a single, unified development environment, offering a comprehensive suite of out-of-the-box and customizable AI tools, APIs, and models to help you modernize your business processes faster.
- What is Azure AI Studio?
- Creating Azure AI Services in Azure AI Studio
- Accessing Azure AI Services
- Collaborating efficiently and effectively on AI projects
Ready-to-Use AI Models with Azure AI Services
Azure AI Services provides a comprehensive suite of out-of-the-box and customizable AI tools, APIs, and pre-trained models that detect sentiment, recognize speakers, understand pictures, and much more.
- Azure AI Services Overview
- Azure AI Language
- Azure AI Vision
- Azure AI Speech
- Azure AI Document Intelligence
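To give a feel for how little code a pre-trained service needs, the sketch below calls Azure AI Language for sentiment analysis. It assumes the Azure.AI.TextAnalytics NuGet package; the endpoint and key are placeholders you would replace with the values of your own Azure AI Language resource.

```csharp
using Azure;
using Azure.AI.TextAnalytics;

// Placeholder credentials — use the endpoint and key of your own
// Azure AI Language resource.
var client = new TextAnalyticsClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

// Ready-to-use sentiment detection: no model training required.
DocumentSentiment sentiment = client.AnalyzeSentiment("The course was excellent!");
Console.WriteLine(sentiment.Sentiment);                  // overall sentiment label
Console.WriteLine(sentiment.ConfidenceScores.Positive);  // confidence per class
```

The same client pattern (endpoint + key credential + typed client) recurs across the Vision, Speech, and Document Intelligence SDKs covered in this module.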
Azure OpenAI and Large Language Model Fundamentals
This module introduces Azure OpenAI and the GPT family of Large Language Models (LLMs). You’ll learn which models are available, how to configure and use them in the Azure Portal, and how the Transformer architecture behind models like GPT-4 works. The latest GPT models offer Function Calling, which connects them to external tools, services, or code, allowing you to create AI-powered Copilots. Additionally, you’ll discover how Azure OpenAI lets you use LLMs in a secure way without exposing your company’s private data.
- Introducing OpenAI and Large Language Models
- The Transformer Model
- What is Azure OpenAI?
- Configuring Deployments
- Understanding Tokens
- LLM Pricing
- Azure OpenAI Chat Completions API
- Role Management: System, User and Assistant
- Azure OpenAI SDK
- Extending LLM capabilities with Function Calling
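A minimal sketch of the Chat Completions API through the Azure OpenAI SDK, showing the system and user roles and the token usage returned with each call. It assumes the Azure.AI.OpenAI NuGet package (1.0.0-beta API surface); the endpoint, key, and deployment name are placeholders for your own resource.

```csharp
using Azure;
using Azure.AI.OpenAI;

// Placeholder endpoint and key — replace with your own Azure OpenAI resource.
var client = new OpenAIClient(
    new Uri("https://<your-resource>.openai.azure.com/"),
    new AzureKeyCredential("<your-key>"));

var options = new ChatCompletionsOptions
{
    DeploymentName = "gpt-4",  // the name you gave your deployment in the portal
    Messages =
    {
        // Two of the roles covered in this module; the model's reply
        // comes back with the assistant role.
        new ChatRequestSystemMessage("You are a concise assistant for .NET developers."),
        new ChatRequestUserMessage("What is a token in the context of LLMs?")
    }
};

ChatCompletions completions = (await client.GetChatCompletionsAsync(options)).Value;
Console.WriteLine(completions.Choices[0].Message.Content);  // the assistant's answer
Console.WriteLine(completions.Usage.TotalTokens);           // tokens billed for this call
```

The `Usage` object is what the LLM Pricing topic builds on: billing is per token, split into prompt and completion tokens.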
Orchestrating AI Models using Semantic Kernel
Semantic Kernel is an open-source SDK backed by Microsoft that seamlessly integrates Large Language Model providers such as OpenAI and Azure OpenAI with programming languages like C#. It lets Large Language Models invoke and interact with your custom code based on natural language input.
- An Introduction to Semantic Kernel
- Integrating LLMs in your applications
- Keeping track of Token Usage
- Enable AI Models to execute code using Plugins
- Solve complex problems through the use of Planners
- LAB: Create a Natural Language to SQL Translation Copilot
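The plugin idea can be sketched as follows: a plain C# class becomes callable by the model. This assumes the Microsoft.SemanticKernel 1.x NuGet package; the deployment name, endpoint, key, and the `TimePlugin` class itself are illustrative.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Placeholder deployment name, endpoint and key — use your own Azure OpenAI values.
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    "gpt-4", "https://<your-resource>.openai.azure.com/", "<your-key>");
builder.Plugins.AddFromType<TimePlugin>();
Kernel kernel = builder.Build();

// Let the model decide when to invoke a plugin function automatically.
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};
var result = await kernel.InvokePromptAsync(
    "What day of the week is it today?", new KernelArguments(settings));
Console.WriteLine(result);

// A plugin is just a class whose [KernelFunction] methods the model can call;
// the Description attributes tell the model what each function does.
public class TimePlugin
{
    [KernelFunction, Description("Returns the current UTC date and time.")]
    public string GetUtcNow() => DateTime.UtcNow.ToString("R");
}
```

The model cannot know the current date on its own, so with automatic tool calling it chooses to invoke `GetUtcNow` before answering — the mechanism the lab's Natural Language to SQL Copilot builds on.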
Deploying AI Models on Azure AI Studio
The cost and quality of your AI-powered app depend largely on your choice of AI model and how you deploy it. Learn about the available model catalog, featuring state-of-the-art Azure OpenAI models and open-source models from Hugging Face, Meta, Google, Microsoft, Mistral, and many more.
- Model Catalog Overview
- Model Benchmarks
- Selecting the Best Deployment Mode
Working with Open-Source Language Models
This chapter empowers you to bring powerful AI capabilities to end-user environments such as mobile devices, personal computers, and browsers, improving scalability and performance while reducing costs. Additionally, you will learn how to deploy and host your own open-source Language Models behind an API that you fully control.
- The Phi-3 Family of Small Language Models
- Deploying AI Models on Mobile and Edge Devices with ONNX Runtime
- Hosting and Deploying Language Models on-prem and in the cloud with Ollama
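Once Ollama is hosting a model, any .NET app can talk to it over plain HTTP. A sketch assuming Ollama runs locally on its default port and the phi3 model has been pulled with `ollama pull phi3`:

```csharp
using System.Net.Http.Json;
using System.Text.Json;

// Assumes a local Ollama instance on its default port 11434.
using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

var response = await http.PostAsJsonAsync("/api/generate", new
{
    model = "phi3",
    prompt = "Explain ONNX Runtime in one sentence.",
    stream = false   // return one JSON object instead of a token stream
});

using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
Console.WriteLine(doc.RootElement.GetProperty("response").GetString());
```

Because the API is just HTTP on a machine you control, no data ever leaves your own infrastructure — the main draw of self-hosting covered in this chapter.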
Using your own data in an LLM with Azure AI Search
Azure AI Search facilitates the adoption of the Retrieval Augmented Generation (RAG) design pattern. This methodology involves retrieving pertinent information from a data source and leveraging it to enhance the output of generative AI models. This symbiosis between retrieval and generation sets a new standard for AI-driven search solutions.
- What is Azure AI Search?
- Retrieval Augmented Generation
- Creating an Index on your Own Data
- Introduction to Embeddings and Vector Search
- AI Enrichment with your own Data
- Using the Azure OpenAI SDK
- Privacy Concerns
- Fine-tuning vs RAG
- LAB: Chat with Azure OpenAI models using your own data
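Vector search rests on embeddings: numeric vectors that capture the meaning of a piece of text, so that similar texts end up close together. A sketch of generating one with the Azure OpenAI SDK (Azure.AI.OpenAI NuGet package, 1.0.0-beta API surface); the endpoint, key, and embedding deployment name are placeholders for your own resource.

```csharp
using Azure;
using Azure.AI.OpenAI;

// Placeholder endpoint and key — replace with your own Azure OpenAI resource.
var client = new OpenAIClient(
    new Uri("https://<your-resource>.openai.azure.com/"),
    new AzureKeyCredential("<your-key>"));

// "text-embedding-ada-002" stands in for your own embedding deployment name.
var options = new EmbeddingsOptions("text-embedding-ada-002",
    new[] { "Azure AI Search supports vector queries." });

Embeddings embeddings = (await client.GetEmbeddingsAsync(options)).Value;
float[] vector = embeddings.Data[0].Embedding.ToArray();

// ada-002 produces 1536-dimensional vectors; these are what you store
// in an Azure AI Search vector field and query by similarity.
Console.WriteLine(vector.Length);
```

In the RAG pattern, both your documents (at indexing time) and the user's question (at query time) are embedded this way, and the closest documents are fed to the LLM as grounding context.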
Integrating Semantic Kernel and Azure AI Services in .NET Apps
This chapter covers the integration of Semantic Kernel and Azure AI Services into .NET applications such as Blazor and ASP.NET Core Web API. You will apply dependency injection to manage AI components efficiently and implement robust logging and telemetry to monitor your AI integrations.
- Configuring Semantic Kernel and Azure AI in .NET environments
- Best practices for dependency injection in managing AI services
- Configuring logging and telemetry
- LAB: Implementing an AI Chat App with Blazor
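In an ASP.NET Core app, Semantic Kernel plugs into the built-in dependency injection container. A sketch assuming the Microsoft.SemanticKernel 1.x package in a minimal-API project; the configuration keys are illustrative names, not a fixed convention.

```csharp
using Microsoft.SemanticKernel;

var builder = WebApplication.CreateBuilder(args);

// Register the Azure OpenAI chat service and a transient Kernel in the
// DI container. Deployment, endpoint and key come from configuration
// (the "AzureOpenAI:*" keys are illustrative).
builder.Services.AddAzureOpenAIChatCompletion(
    deploymentName: builder.Configuration["AzureOpenAI:Deployment"]!,
    endpoint: builder.Configuration["AzureOpenAI:Endpoint"]!,
    apiKey: builder.Configuration["AzureOpenAI:ApiKey"]!);
builder.Services.AddKernel();

var app = builder.Build();

// The Kernel can now be injected anywhere — here into a minimal API endpoint.
app.MapGet("/ask", async (Kernel kernel, string question) =>
    (await kernel.InvokePromptAsync(question)).ToString());

app.Run();
```

Registering the services once in the container also gives you a single place to attach the logging and telemetry discussed above, since Semantic Kernel picks up the app's `ILoggerFactory` from DI.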
Building on the Microsoft Copilot Ecosystem
While building a complete AI-powered application from scratch can be worthwhile, it is not always the most efficient approach. In this chapter, you will learn the basics of extending the capabilities and knowledge of Copilot for Microsoft 365, allowing you to enhance its functionality while leveraging its robust, secure infrastructure and UI.
- Overview of Copilot for Microsoft 365 Extensibility Options
- Overview of Copilot Studio
- Extending Copilot’s knowledge with Graph Connectors
- Allowing Microsoft Copilot to call REST APIs
- Expanding Copilot Capabilities via Teams Message Extensions