A Practical Guide to Azure AI Foundry and Prompt Flow

Many people want to build AI applications but get confused by the maze of tools, coding steps, and deployment methods. They struggle to set up the right environment, connect different AI models, and make them work together without spending too much time or money.
Here is the solution: Microsoft’s Azure AI Foundry with Prompt Flow, which makes this much easier. It gives you a simple way to set up resources, create reusable AI tools, connect them with your application, and deploy them in a cost-efficient way. By following a clear step-by-step process, even beginners can design, test, and run AI workflows without feeling overwhelmed.
Setting Up the Infrastructure in Azure AI Foundry
Before we can do anything smart with AI, we need to set up the playground where all our tools will be stored.
In our case, that playground is Azure AI Foundry. It is like your AI workshop: it's where you store your tools, run experiments, and connect everything.
What is Azure AI Foundry?
Azure AI Foundry is Microsoft’s platform that lets you build, test, and deploy AI solutions without having to stitch together a hundred different services yourself.
It gives you a clean interface to work with AI models, manage data, run workflows, and connect everything into your applications.
In short:
- It’s where you create your AI “workspace”
- It keeps your resources organized
- It helps you focus on building AI features instead of worrying about server setups
Step 1: Sign in and Create a Project
Sign in to Microsoft Azure. Once inside, you’ll see the option to create a new project. This is like opening a fresh notebook for your AI ideas.
Step 2: Create the Required Resources
When you create a project, Azure will ask you to set up some resources in the background.
These resources are the services and infrastructure your AI will use.
When an Azure AI Hub is created, the following resources are automatically provisioned:
- Azure AI Service: This is where your AI models will run
- Storage Account: A place to store data, logs, and any files your AI needs
- Azure AI Foundry: The engine that runs your AI workflows
Use a Resource Group to keep all related items together.

Step 3: Choose the Right Region
Azure AI Hub will ask you to select a region (such as "East US" or "West Europe") when you create the resource.
This matters because:
- The closer the region is to your users, the faster the AI will respond
- Some AI services are only available in certain regions
Step 4: Configure Smartly
- Start with the compute size you need
- Use serverless options where possible in storage accounts
- If your scenario requires publicly readable blobs, enable anonymous blob access in the storage account's Configuration blade under Settings
Understanding Prompt Flow in Azure AI Foundry
If Azure AI Foundry is our workshop, then Prompt Flow is the assembly line: it connects all the pieces, runs them in order, and makes sure everything works together.
What is Prompt Flow?
Prompt Flow is a tool that helps you design, test, and run AI workflows. Instead of writing complex scripts, you can drag, drop, and connect steps in a visual diagram.
Comparison:
Without Prompt Flow, building an AI-powered feature usually means:
- Complex coding
- Manual testing
- Time wasted debugging
With Prompt Flow:
- Visual workflow view
- Step-by-step testing
- Easy integration with Azure services
How Prompt Flow Works
Prompt Flow uses nodes (steps) connected together:
- Input Node: Takes a question from the user.
- Retrieval Node: Looks up relevant documents from a database.
- Augmentation Node: Combines the question with retrieved content.
- LLM Node: Sends the prompt to GPT and gets the answer.
- Output Node: Returns the final answer to the user.
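The five-node chain above can be sketched in plain Python. This is only an illustration of the data flow: the document store is a hard-coded dictionary and the LLM call is a stub, not a real Azure OpenAI request.

```python
# Minimal sketch of the five-node RAG chain. In Prompt Flow each
# function below would be a separate node in the visual graph.

DOCS = {
    "returns": "Items can be returned within 30 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def input_node(question: str) -> str:
    return question.strip()

def retrieval_node(question: str) -> str:
    # Toy keyword lookup; a real flow would query a vector index.
    for keyword, doc in DOCS.items():
        if keyword in question.lower():
            return doc
    return ""

def augmentation_node(question: str, content: str) -> str:
    return f"Use this content to answer.\nContent: {content}\nQuery: {question}"

def llm_node(prompt: str) -> str:
    # Stand-in for the GPT call: just echo the retrieved content.
    return prompt.split("Content: ")[1].split("\nQuery:")[0]

def output_node(answer: str) -> str:
    return answer

def run_flow(question: str) -> str:
    q = input_node(question)
    content = retrieval_node(q)
    prompt = augmentation_node(q, content)
    return output_node(llm_node(prompt))

print(run_flow("What is your returns policy?"))
# → Items can be returned within 30 days of purchase.
```

The point is the shape, not the logic: each node takes the previous node's output as input, which is exactly how you wire them together in the Prompt Flow canvas.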
Serverless vs. VM Compute
When you run a flow, it needs computing power. Prompt Flow gives you two main choices:
- Serverless Compute: Pay only for what you use, no need to manage servers. Great for quick experiments or low-traffic apps.
- VM Compute: A dedicated virtual machine that runs continuously. Better for high-traffic, always-on scenarios.
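The trade-off between the two is a break-even calculation. The prices in this sketch are made-up placeholders, not Azure's actual rates, but the shape of the comparison holds: serverless wins at low volume, a dedicated VM wins once traffic is high enough.

```python
# Rough break-even sketch for serverless vs. a dedicated VM.
# Both prices below are illustrative placeholders, NOT Azure's rates.

VM_COST_PER_MONTH = 150.00           # assumed always-on VM price
SERVERLESS_COST_PER_REQUEST = 0.002  # assumed per-request price

def cheaper_option(requests_per_month: int) -> str:
    serverless_cost = requests_per_month * SERVERLESS_COST_PER_REQUEST
    return "serverless" if serverless_cost < VM_COST_PER_MONTH else "vm"

print(cheaper_option(10_000))   # low-traffic experiment
print(cheaper_option(200_000))  # high-traffic, always-on app
```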
Debugging and Reusability
- Run steps individually to debug
- Inspect inputs/outputs
- Reuse flows across projects by exporting or sharing
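Running a step individually is just calling one node with a fixed input and checking its output before wiring it into the full flow. A sketch, using a hypothetical augmentation node:

```python
# Debugging one node in isolation: feed it a known input, then
# inspect the output. The node below is a hypothetical example.

def augmentation_node(user_query: str, content: str) -> str:
    return f"Query: {user_query}\nContent: {content}"

result = augmentation_node("opening hours?", "Open 9am-5pm, Mon-Fri.")
assert "opening hours?" in result
assert "Open 9am-5pm" in result
print(result)
```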
Example Use Cases
Prompt Flow isn’t just for Q&A bots. Here are some quick examples:
- Web URL Classification: Feed in a link, the flow checks the content, and tags it by category
- Named Entity Recognition (NER): Highlight important names, places, or dates in a document
- Customer Support AI: Pull relevant answers from a database before replying to a customer
Building the RAG Prompt Flow in Azure AI Foundry
Now that the infrastructure is ready, let’s create a Retrieval Augmented Generation (RAG) Prompt Flow.
1. Navigate to Your Project
- Sign in to Azure AI Foundry.
- Go to your Azure AI Hub.
- Select the project where you want to build the RAG Prompt Flow.
2. Create Required Connections
Before starting the Prompt Flow, you must connect your Azure services:
- Azure OpenAI Service: Your deployed GPT-4 model and Text Embedding model in Azure AI Foundry.

- Azure Storage Account: The container holding your uploaded documents.

3. Create a New Prompt Flow
- Go to Prompt Flow → Create new flow
- Choose Standard Flow → name it and create

4. Remove Unnecessary Components
Delete:
- Python Component (echo)
- Template Language Component (joke)
5. Add the Input
- Name: user_query
- Type: string

6. Add Index Lookup Component
The Index Lookup will retrieve relevant documents from Azure AI Search.
- Start Compute Session (needed to add new components).
- Add Index Lookup:
  - ML index content: Azure AI Search index
  - Connection: Your Azure Search service connection
  - Index Name: The name of your created index
  - Content Field: chunk
  - Embedding Field: text_vector
  - Embedding Type: Azure OpenAI
  - Embedding Deployment: Your text embedding model
  - Query Type: Hybrid Search (Vector + Keyword)
  - Top K: 1 (fetch the most relevant document)
  - Query: user_query (input)
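The Hybrid Search query type above blends vector similarity with keyword matching. A toy illustration of the idea, with hand-made three-dimensional vectors and an equal-weight blend (Azure AI Search's real scoring and vector dimensions are different):

```python
import math

# Toy hybrid search: blend cosine similarity on tiny hand-made vectors
# with a keyword-overlap score, then keep the top_k chunks.

DOCS = [
    {"chunk": "Mayfair Boutique reviews praise its central London location.",
     "text_vector": [0.9, 0.1, 0.0]},
    {"chunk": "Our Paris hotel offers riverside dining.",
     "text_vector": [0.1, 0.8, 0.2]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def keyword_score(query, text):
    q_words = set(query.lower().split())
    t_words = set(text.lower().split())
    return len(q_words & t_words) / max(len(q_words), 1)

def hybrid_lookup(query, query_vector, docs, top_k=1):
    scored = []
    for doc in docs:
        score = (0.5 * cosine(query_vector, doc["text_vector"])
                 + 0.5 * keyword_score(query, doc["chunk"]))
        scored.append((score, doc["chunk"]))
    scored.sort(reverse=True)
    return [chunk for _, chunk in scored[:top_k]]

top = hybrid_lookup("reviews of Mayfair Boutique", [0.85, 0.15, 0.05], DOCS)
print(top[0])
```

With Top K set to 1, only the single best-scoring chunk is passed on to the augmentation step, which is what the configuration above does.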


7. Add Augmentation Component
This step prepares the prompt for the GPT engine.
- Prompt Text:
System: You will be provided with the content of the document fetched as part of the RAG architecture. Use this content to answer the user query.
User:
Query: {{user_query}}
Content: {{content}}
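Prompt Flow fills the `{{user_query}}` and `{{content}}` placeholders with Jinja2 templating. A dependency-free sketch of that substitution is enough to show what the augmentation step produces:

```python
# Minimal sketch of how the augmentation template is filled in.
# Prompt Flow uses Jinja2; plain string replacement shows the idea.

TEMPLATE = (
    "System: You will be provided with the content of the document "
    "fetched as part of the RAG architecture. Use this content to "
    "answer the user query.\n"
    "User:\n"
    "Query: {{user_query}}\n"
    "Content: {{content}}"
)

def render(template: str, **values: str) -> str:
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = render(TEMPLATE,
                user_query="What are the reviews of Mayfair Boutique?",
                content="Guests praise the boutique's central location.")
print(prompt)
```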
8. Configure Generation Output
- Output Name: chat_response
- Value: augmentation.output
9. Connect to GPT Engine
In the Augmentation component:
- Connection: Azure OpenAI
- API: Chat API
- Deployment Name: Your GPT-4 deployment
- Max Tokens: 500
- Response Format: Text
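These settings map onto a Chat API request. The sketch below only builds the request payload (no network call is made), and the deployment name `gpt-4-rag` is a placeholder for whatever you named your own GPT-4 deployment:

```python
# Sketch of the Chat API request the LLM step sends to Azure OpenAI.
# "gpt-4-rag" is a hypothetical deployment name; substitute your own.

def build_chat_request(system_prompt: str, user_prompt: str,
                       deployment: str = "gpt-4-rag",
                       max_tokens: int = 500) -> dict:
    return {
        "deployment": deployment,
        "max_tokens": max_tokens,
        "response_format": {"type": "text"},
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

req = build_chat_request(
    "Answer using only the provided document content.",
    "Query: what are the reviews of Mayfair Boutique in London?",
)
print(req["max_tokens"], len(req["messages"]))
```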

10. Run the Prompt Flow
Example query: "What are the reviews of Mayfair Boutique in London?"
- The Index Lookup retrieves relevant brochure content from the vector store.
- The Augmentation component sends the content + query to GPT-4.
- GPT-4 responds with accurate answers sourced from your private documents.

Conclusion
Azure AI Foundry with Prompt Flow provides a clear, beginner-friendly way to build AI applications. Instead of struggling with complex integrations and manual coding, you get:
- A single platform to manage models, data, and workflows
- A visual, testable process for building flows
- The ability to reuse and share AI workflows easily
Even as a beginner, you can follow simple steps to design powerful apps like RAG systems. This approach saves time, reduces costs, and lowers complexity while delivering strong AI features that work in real-world scenarios.