Designing Safe and Actionable Agents in the New Azure Foundry Portal

In this blog we will walk through the new Azure AI Foundry portal and its capabilities. The portal brings model experimentation, agent building, data grounding, and safety controls into a single, coherent workspace. It's designed so builders can move from idea to prototype to hardened agent without context switching.

The first thing to do after logging in is to switch on the New Foundry toggle, which replaces the classic experience with an entirely new interface and lands us on the dashboard.

This dashboard is the Foundry project home for a developer or team building AI agents. It surfaces the project endpoint and API key for integration, shows the project region, and highlights recent model and tooling updates so teams can stay current. The page also lists recent projects and provides quick links to documentation and community resources, making it a practical launchpad for both prototyping and production work.
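The project endpoint and API key surfaced on the dashboard are what client code uses to reach the project. As a minimal, purely illustrative sketch of how those two values plug into an integration (the URL, request path, and header name below are placeholders I've chosen for the example, not values from the portal):

```python
# Illustrative only: assembles an authenticated request to a Foundry
# project endpoint. The path "/responses" and the "api-key" header
# name are placeholders for whichever API your project exposes.
import json
from urllib.request import Request

def build_project_request(endpoint: str, api_key: str, payload: dict) -> Request:
    """Builds a POST request carrying the project's API key."""
    body = json.dumps(payload).encode("utf-8")
    return Request(
        url=endpoint.rstrip("/") + "/responses",  # hypothetical path
        data=body,
        headers={
            "api-key": api_key,                   # key from the dashboard
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_project_request(
    "https://my-project.services.ai.azure.com", "MY_KEY", {"input": "Hello"}
)
print(req.full_url)  # https://my-project.services.ai.azure.com/responses
```

In practice you would use the Azure SDK for your language rather than hand-rolling requests, but the dashboard gives you exactly the two inputs (endpoint and key) any such client needs.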

The coding quick start gives us the option to open the project in VS Code for the Web.


Build Trusted AI with Guardrails and Controls in Azure Foundry

As AI systems move from proof of concepts to production, organizations must ensure their applications are safe, secure, and compliant without slowing teams down. Microsoft Azure Foundry brings these capabilities together under Guardrails & Controls, giving builders a central place to filter harmful content, govern agent behavior, block sensitive terms, and receive security insights.

In this walkthrough, we'll learn how to use the Guardrails & Controls workspace in Azure Foundry with a focus on four areas:

  1. Try it out: experiment with safety checks (text, images, prompts, groundedness)
  2. Content filters: create and assign policies to deployments
  3. Blocklists: ban specific words/phrases from inputs and outputs
  4. Security recommendations: get posture guidance via Defender for Cloud
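To make the blocklist idea (item 3) concrete, here is a small, purely illustrative sketch of the kind of exact-match term screening a blocklist applies to inputs and outputs. Foundry evaluates blocklists server-side against your deployments; this local function only mirrors the concept:

```python
# Illustrative sketch of blocklist-style term screening.
# Azure Foundry applies blocklists in the service, to both user
# inputs and model outputs; this local check only mirrors the
# exact-match behavior for explanation purposes.
def screen_text(text: str, blocklist: set[str]) -> tuple[bool, list[str]]:
    """Returns (allowed, matched_terms) for a piece of input or output text."""
    lowered = text.lower()
    matches = sorted(term for term in blocklist if term.lower() in lowered)
    return (len(matches) == 0, matches)

blocked = {"internal-codename", "secret-project"}
allowed, hits = screen_text("Status of Secret-Project rollout?", blocked)
print(allowed, hits)  # False ['secret-project']
```

The same list governs both directions: a banned phrase is rejected whether the user types it or the model generates it.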

Why Guardrails Matter

Production AI faces unpredictable inputs, sensitive data, and regulatory requirements. Without guardrails, systems can hallucinate, leak private information, or produce unsafe content. Azure Foundry’s Guardrails & Controls reduce those risks by combining content moderation, agent behavior governance, blocked terms, and security posture insights in one place.

Navigate to Guardrails & Controls.

From your Foundry project:

Foundry → (Your Project) → Guardrails & controls

Guardrails & Controls Overview

The Guardrails & Controls landing page in Azure Foundry with tabs for Try it out, Content filters, Blocklists, and Security recommendations.

What you’re seeing:
The overview introduces the guardrails surface with quick entry points for Safety & security guardrails (content filters, blocklists, alerts) and Agent controls (behavior and tool use governance). Use this page as your starting point to design and test safety policies.
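Content filters work by classifying text into harm categories at graded severity levels and blocking anything above what the assigned policy allows. A minimal local sketch of that thresholding logic (the category names follow Azure's documented harm categories, but the severity scale and policy shape here are simplified for illustration):

```python
# Illustrative thresholding logic for a content-filter policy.
# Category names follow Azure's documented harm categories; the
# numeric severity scale and policy shape are simplified here.
POLICY = {"Hate": 2, "Sexual": 2, "Violence": 4, "SelfHarm": 0}

def apply_filter(severities: dict[str, int], policy: dict[str, int]) -> list[str]:
    """Returns the categories whose detected severity exceeds the policy threshold."""
    return [cat for cat, sev in severities.items() if sev > policy.get(cat, 0)]

detected = {"Hate": 0, "Sexual": 0, "Violence": 6, "SelfHarm": 0}
print(apply_filter(detected, POLICY))  # ['Violence']
```

The Content filters tab lets you tune exactly these per-category thresholds and assign the resulting policy to one or more deployments.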
