
A lot of multi-agent demos still cheat on the interoperability part. Two agents live in the same process, share the same model client, and call each other through in-memory interfaces. That is useful for teaching roles, but it is not what interoperability looks like in a deployed system.
That in-process shape breaks down once you want agents that can be discovered, validated, and delegated to across service boundaries. A gateway should not assume how a specialist is implemented. It should fetch the specialist's published identity, validate it deterministically, delegate over HTTP, and keep control of the caller-facing response contract.
In this issue, we build that system in C# on Azure. One ASP.NET Core image runs in two modes. In Specialist mode it exposes a Foundry-backed expert over A2A. In Gateway mode it discovers the specialist's card, validates host and skill policy, delegates the user's question, and runs a second Foundry call to turn the raw specialist answer into structured JSON. Both apps deploy to Azure Container Apps from a single image, with Microsoft Foundry instead of local models, Azure-based image builds instead of local Docker, and runtime-injected credentials instead of hardcoded secrets.
What A2A Means Here
A2A, short for agent-to-agent, is the protocol boundary between the gateway and the specialist in this issue. The specialist publishes an agent card that describes its identity and skill, the gateway discovers and validates that card, and the actual work is delegated as HTTP message requests rather than in-process method calls.
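Concretely, the card the specialist publishes serializes to JSON along these lines (values are taken from this issue's sample configuration; exact property casing depends on the serializer settings):

```json
{
  "name": "Foundry Architecture Specialist",
  "description": "Azure-hosted AI engineering specialist for A2A interoperability, secure delegation, and production deployment patterns.",
  "url": "https://specialist.contoso.com/a2a/specialist/v1/card",
  "version": "1.0.0",
  "skills": [
    {
      "id": "azure_ai_architecture_review",
      "name": "Azure AI architecture review",
      "inputModes": ["text"],
      "outputModes": ["text"]
    }
  ]
}
```

Everything the gateway needs for validation — host, scheme, and advertised skill — is in this one document.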
What You Are Building
You are building a production-shaped A2A gateway-and-specialist workflow with explicit interoperability boundaries:
- Load runtime config from `appsettings.json` and `A2AINT_` environment overrides
- Run the same ASP.NET Core image as either `Gateway` or `Specialist`
- In Specialist mode, expose an A2A-compatible agent card and delegated message endpoint backed by Microsoft Foundry
- In Gateway mode, fetch the remote card, validate its host, scheme, and skill ID, then delegate the user's question over A2A
- Run a second Foundry call on the gateway to synthesize the specialist's prose answer into structured JSON
- Optionally protect specialist A2A routes with an API key checked using constant-time comparison
- Thread an optional `conversationId` through the A2A `contextId` and back to the caller
- Build one image in Azure Container Registry and deploy it twice to Azure Container Apps with environment-driven mode switching
This is not two agents pretending to interoperate inside one process. The gateway and specialist are separate HTTP services with authentication, validation, and a stable contract between them.
System Structure
The architecture is intentionally small. The gateway receives a user question, discovers the specialist's card, validates it, delegates the request, then synthesizes the answer into a compact response. The specialist receives that delegated request, answers from a fixed Azure architecture knowledge pack, and returns a standard A2A message.
The high-level control flow: caller → gateway `/api/query` → card discovery and validation → A2A delegation → specialist answer from the knowledge pack → gateway synthesis → structured JSON response.
Runtime Configuration First
The app loads and validates the runtime profile before any model call, route, or HTTP client is used:
builder.Configuration
.SetBasePath(AppContext.BaseDirectory)
.AddJsonFile("appsettings.json", optional: false, reloadOnChange: false)
.AddEnvironmentVariables(prefix: "A2AINT_");
var config = AppConfig.Load(builder.Configuration);
config.Validate();

The default configuration in this repo:
{
"Runtime": {
"Mode": "Gateway",
"RequestTimeoutSeconds": 45,
"SpecialistBaseUrl": "https://specialist.contoso.com/a2a/specialist",
"SpecialistApiKey": "replace-me",
"A2AApiKeyHeaderName": "x-a2a-api-key",
"RequireHttpsSpecialist": true,
"AllowedAgentHosts": [
"specialist.contoso.com"
]
},
"Foundry": {
"BaseUrl": "https://YOUR-RESOURCE.services.ai.azure.com/api/projects/YOUR-PROJECT",
"ApiKey": "replace-me",
"ModelId": "gpt-4.1-mini"
},
"SpecialistAgent": {
"Name": "Foundry Architecture Specialist",
"Description": "Azure-hosted AI engineering specialist for A2A interoperability, secure delegation, and production deployment patterns.",
"Version": "1.0.0",
"SkillId": "azure_ai_architecture_review",
"PublicBaseUrl": "https://specialist.contoso.com"
}
}

This matters because the deployment boundary is operational. Mode, timeout, host allowlist, specialist URL, and Foundry endpoint are all visible controls rather than hidden environment assumptions.
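The `A2AINT_` prefix plugs into ASP.NET Core's standard environment-variable configuration provider, where a double underscore stands in for the `:` section separator. A quick bash sketch of the mapping (the variable name is from this repo; the transformation is the provider's documented convention):

```shell
# A2AINT_Runtime__Mode overrides the Runtime:Mode key from appsettings.json.
key="A2AINT_Runtime__Mode"

# Strip the configured prefix, then swap "__" for ":" to recover the config key.
config_key="${key#A2AINT_}"
config_key="${config_key//__/:}"

echo "$config_key"
```

That is why the deploy script later in this issue can flip a container app between roles with nothing but `A2AINT_Runtime__Mode=Gateway` or `A2AINT_Runtime__Mode=Specialist`.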
One Image, Two Modes
After config validation, the runtime registers services based entirely on the chosen mode:
if (config.IsGatewayMode)
{
builder.Services.AddSingleton<AgentCardValidationPolicy>();
builder.Services.AddSingleton<GatewaySynthesisService>();
builder.Services.AddSingleton<GatewayQueryService>();
}
else
{
builder.Services.AddSingleton<SpecialistAnswerService>();
}

Route mapping follows the same split:
app.MapGet("/", () => Results.Ok(new
{
app = "Azure Foundry A2A Interoperability",
mode = config.Runtime.Mode,
deployment = "Azure Container Apps ready",
endpoints = config.IsGatewayMode
? new[] { "/api/query", "/healthz" }
: new[] { "/a2a/specialist/v1/card", "/a2a/specialist/v1/message:stream", "/healthz" }
}));
app.MapHealthChecks("/healthz");

That is a good fit for this kind of sample. There is one codebase, one Dockerfile, and one image, but the role boundary remains explicit because the mode decides which routes and services are active.
The Specialist Stays Narrow
When a specialist API key is configured, all /a2a/* routes pass through a middleware branch before any handler runs:
app.UseWhen(
context => context.Request.Path.StartsWithSegments("/a2a", StringComparison.OrdinalIgnoreCase),
specialistBranch =>
{
specialistBranch.Use(async (context, next) =>
{
if (string.IsNullOrWhiteSpace(config.Runtime.SpecialistApiKey))
{
await next();
return;
}
if (!context.Request.Headers.TryGetValue(config.Runtime.A2AApiKeyHeaderName, out var providedHeader) ||
!ConstantTimeEquals(providedHeader.ToString(), config.Runtime.SpecialistApiKey))
{
context.Response.StatusCode = StatusCodes.Status401Unauthorized;
await context.Response.WriteAsJsonAsync(new { error = "Missing or invalid A2A API key." });
return;
}
await next();
});
});

The specialist then publishes its identity through an A2A card:
app.MapGet("/a2a/specialist/v1/card", () => Results.Ok(new AgentCard
{
Name = config.SpecialistAgent.Name,
Description = config.SpecialistAgent.Description,
Url = $"{config.SpecialistAgent.PublicBaseUrl.TrimEnd('/')}/a2a/specialist/v1/card",
Version = config.SpecialistAgent.Version,
Skills =
[
new AgentSkill
{
Id = config.SpecialistAgent.SkillId,
Name = "Azure AI architecture review",
InputModes = ["text"],
OutputModes = ["text"]
}
]
}));

Its answer behavior is also narrow on purpose. The specialist runs from a fixed knowledge pack rather than an open-ended prompt:
You are Foundry Architecture Specialist, a remote A2A-accessible agent focused on Azure-hosted AI engineering systems.
You answer only within this knowledge pack:
1. Microsoft Foundry should host the model-facing layer...
2. A2A is for agent-to-agent delegation...
3. Production Azure hosting should use containers, health probes, explicit environment variables, and stateless services...
...
Operating rules:
1. Stay inside the knowledge pack only.
2. If the question asks for something outside the knowledge pack, say that the current specialist scope is limited.
3. Prefer concise implementation guidance over generic theory.
4. Answer in plain text, not JSON.

That combination matters. The specialist publishes a stable identity, enforces an optional auth boundary, and answers only within a fixed domain instead of pretending to be a general-purpose agent server.
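The `ConstantTimeEquals` helper referenced in the API-key middleware is not shown above. A minimal implementation, assuming .NET's built-in `CryptographicOperations.FixedTimeEquals` (which returns false immediately for unequal lengths and otherwise compares without early exit), could look like this:

```csharp
using System.Security.Cryptography;
using System.Text;

static class ApiKeyGuard
{
    // Compares the provided header value against the configured key so that
    // response timing does not reveal how many leading characters matched.
    public static bool ConstantTimeEquals(string provided, string expected)
    {
        var providedBytes = Encoding.UTF8.GetBytes(provided);
        var expectedBytes = Encoding.UTF8.GetBytes(expected);
        return CryptographicOperations.FixedTimeEquals(providedBytes, expectedBytes);
    }
}
```

A naive `provided == expected` comparison returns early at the first mismatching character, which is exactly the signal a timing attack measures.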
The Gateway Validates Before It Delegates
The gateway does not trust discovery on its own. Every agent card goes through deterministic validation before delegation:
public void Validate(AgentCard agentCard)
{
if (string.IsNullOrWhiteSpace(agentCard.Name))
throw new InvalidOperationException("Remote agent card did not include a name.");
if (string.IsNullOrWhiteSpace(agentCard.Description))
throw new InvalidOperationException("Remote agent card did not include a description.");
if (string.IsNullOrWhiteSpace(agentCard.Url) || !Uri.TryCreate(agentCard.Url, UriKind.Absolute, out var cardUri))
throw new InvalidOperationException("Remote agent card did not include a valid URL.");
if (config.Runtime.RequireHttpsSpecialist &&
!string.Equals(cardUri.Scheme, Uri.UriSchemeHttps, StringComparison.OrdinalIgnoreCase))
throw new InvalidOperationException("Remote agent card URL must use HTTPS.");
if (config.Runtime.AllowedAgentHosts.Length > 0 &&
!config.Runtime.AllowedAgentHosts.Contains(cardUri.Host, StringComparer.OrdinalIgnoreCase))
throw new InvalidOperationException($"Remote agent host '{cardUri.Host}' is not in the configured allowlist.");
if (!agentCard.Skills.Any(skill => string.Equals(skill.Id, config.SpecialistAgent.SkillId, StringComparison.OrdinalIgnoreCase)))
throw new InvalidOperationException($"Remote agent card did not advertise expected skill '{config.SpecialistAgent.SkillId}'.");
}

Only after that validation passes does the gateway send a structured A2A message:
var requestBody = new A2AMessageRequest
{
Message = new A2AMessage
{
MessageId = Guid.NewGuid().ToString("N"),
ContextId = string.IsNullOrWhiteSpace(conversationId) ? null : conversationId,
Parts =
[
new A2APart
{
Text = question
}
]
}
};

This is the actual interoperability boundary in the sample. Discovery is not enough. The gateway validates who it found, what host it belongs to, and whether it advertises the required capability before any remote answer is trusted.
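Assuming camelCase JSON serialization, the delegated request body that goes over the wire looks roughly like this (the `messageId` and `contextId` values here are illustrative):

```json
{
  "message": {
    "messageId": "2f6c1f0c7c7a4b0f9a6f3a1d5e8b2c4d",
    "contextId": "7da0bfdfaea7433694eb03b5fc20d4e0",
    "parts": [
      { "text": "How should I host a Foundry-backed A2A system on Azure?" }
    ]
  }
}
```

When the caller supplies no `conversationId`, the gateway omits `contextId` entirely rather than sending an empty string.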
Synthesis Is a Second Model Call
The gateway does not return the specialist's raw prose directly. It turns that prose into a bounded JSON contract through a second Foundry call:
private static string BuildInstructions() =>
"""
You are GatewaySynthesisAgent in a production A2A workflow.
Your job is to convert a remote specialist answer into a compact structured reply for an API client.
Non-negotiable rules:
1. Stay grounded in the supplied remote answer and agent card only.
2. Do not invent Azure features, policies, or deployment steps that were not stated.
3. Keep delegatedFindings and nextActions concise and implementation-oriented.
4. Return JSON only.
""";

The response is then normalized deterministically:
private static void Normalize(GatewaySynthesis synthesis)
{
synthesis.Summary = synthesis.Summary.Trim();
synthesis.DelegatedFindings = synthesis.DelegatedFindings
.Select(item => item.Trim())
.Where(item => !string.IsNullOrWhiteSpace(item))
.Distinct(StringComparer.Ordinal)
.ToList();
synthesis.NextActions = synthesis.NextActions
.Select(item => item.Trim())
.Where(item => !string.IsNullOrWhiteSpace(item))
.Distinct(StringComparer.Ordinal)
.ToList();
synthesis.Confidence = synthesis.Confidence.Trim().ToLowerInvariant() switch
{
"high" or "medium" or "low" => synthesis.Confidence.Trim().ToLowerInvariant(),
_ => "medium"
};
}

This is the same pattern that keeps showing up across these projects. The model can generate useful structure, but extraction, shaping, and trust boundaries stay in code.
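Put together, a synthesized response that survives normalization looks like this (values are drawn from the live run later in this issue; property casing assumes the default camelCase web serializer):

```json
{
  "summary": "Host the Foundry-backed A2A system on Azure Container Apps using stateless containers with external durable state, managed identities, and strict gateway validation.",
  "delegatedFindings": [
    "Use Azure Container Apps for ingress, autoscaling, and managed identities.",
    "Keep agent services stateless and push durable state into managed data services."
  ],
  "nextActions": [
    "Deploy stateless agent containers with health probes.",
    "Externalize durable state."
  ],
  "confidence": "high"
}
```

Whatever the model emits, `Normalize` guarantees trimmed, deduplicated lists and a `confidence` value that is always one of `high`, `medium`, or `low`.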
Foundry Endpoint Resolution Is Explicit
The configuration layer accepts either a Foundry project URL or an Azure OpenAI-style URL and resolves it once up front:
public Uri GetChatEndpoint()
{
if (!Uri.TryCreate(BaseUrl, UriKind.Absolute, out var inputUri))
throw new InvalidOperationException("Foundry:BaseUrl must be a valid absolute URI.");
var host = inputUri.Host;
if (host.EndsWith(".openai.azure.com", StringComparison.OrdinalIgnoreCase))
return EnsureOpenAiPath(inputUri);
if (host.EndsWith(".services.ai.azure.com", StringComparison.OrdinalIgnoreCase))
{
var resourceName = host[..host.IndexOf(".services.ai.azure.com", StringComparison.OrdinalIgnoreCase)];
return new Uri($"https://{resourceName}.openai.azure.com/openai/v1/");
}
return EnsureOpenAiPath(inputUri);
}

That keeps the rest of the application simple. Both modes share the same client factory, and the runtime resolves the endpoint shape once instead of scattering Foundry-specific assumptions across the service layer.
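As a concrete example of that mapping, a project-style URL on `*.services.ai.azure.com` collapses to the resource's OpenAI-compatible endpoint. This sketch mirrors the host handling in `GetChatEndpoint` (resource and project names are placeholders):

```csharp
using System;

// A Foundry project URL: the resource name is lifted from the host and the
// path is replaced with the OpenAI-compatible /openai/v1/ surface.
var input = new Uri("https://contoso-ai.services.ai.azure.com/api/projects/demo-project");

const string suffix = ".services.ai.azure.com";
var resourceName = input.Host[..input.Host.IndexOf(suffix, StringComparison.OrdinalIgnoreCase)];
var chatEndpoint = new Uri($"https://{resourceName}.openai.azure.com/openai/v1/");

Console.WriteLine(chatEndpoint); // https://contoso-ai.openai.azure.com/openai/v1/
```

An Azure OpenAI-style URL on `*.openai.azure.com` skips this rewrite and only has its path normalized by `EnsureOpenAiPath`.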
Cloud-First Build and Deployment
The image is built inside Azure Container Registry rather than on a developer machine:
az acr build `
--resource-group $ResourceGroupName `
--registry $AcrName `
--image $imageName `
$projectRoot

The deploy script then promotes that same image into both container apps and wires the gateway to the specialist:
if ([string]::IsNullOrWhiteSpace($SpecialistApiKey)) {
$SpecialistApiKey = [Guid]::NewGuid().ToString("N")
}
--env-vars `
A2AINT_Runtime__Mode=Specialist `
A2AINT_Runtime__SpecialistApiKey=$SpecialistApiKey `
...
--env-vars `
A2AINT_Runtime__Mode=Gateway `
A2AINT_Runtime__SpecialistBaseUrl=$specialistBaseUrl `
A2AINT_Runtime__SpecialistApiKey=$SpecialistApiKey `
A2AINT_Runtime__AllowedAgentHosts__0=$specialistUrl `
...

That is the right cloud-first pattern for a sample like this. One image is built in Azure, then configuration decides which role each container app plays.
Walking Through a Live Run
With both apps deployed, the gateway returns a structured answer rather than raw remote prose:
Invoke-RestMethod `
-Method Post `
-Uri "https://a2a-gateway.lemonhill-a32d43e5.northeurope.azurecontainerapps.io/api/query" `
-ContentType "application/json" `
-Body '{"question":"How should I host a Foundry-backed A2A system on Azure?"}'

specialistAgentName : Foundry Architecture Specialist
specialistCardUrl : https://a2a-specialist.lemonhill-a32d43e5.northeurope.azurecontainerapps.io/a2a/specialist/v1/card
summary : Host the Foundry-backed A2A system on Azure Container Apps using stateless containers
with external durable state, managed identities, and strict gateway validation.
delegatedFindings : {Use Azure Container Apps for ingress, autoscaling, and managed identities.,
Keep agent services stateless and push durable state into managed data services.,
Publish A2A card metadata so the gateway can discover and validate the specialist.}
nextActions : {Deploy stateless agent containers with health probes.,
Externalize durable state.,
Configure managed identities and publish validated A2A card metadata.}
confidence : high
conversationId : 7da0bfdfaea7433694eb03b5fc20d4e0

The current automated test suite is small but targeted:
dotnet test AzureFoundryA2AInteroperability.slnx

Test summary: total: 4, failed: 0, succeeded: 4, skipped: 0

That result is enough to confirm the two key deterministic boundaries already under test: startup configuration validation and remote agent-card policy validation.
Why This Architecture Works
The system works because the gateway and specialist are decoupled on purpose, and the coupling that remains is explicit:
- The specialist publishes a stable identity and answers from a fixed knowledge pack
- The gateway validates host, scheme, and skill policy before delegation
- The caller-facing contract belongs to the gateway, not to the remote specialist's raw prose
- Both modes validate configuration at startup so bad deployments fail early
- The cloud-first build flow keeps the deployment surface small: one Dockerfile, one image, two runtime roles
Potential Enhancements
To extend this project further, you can consider:
- Replace registry admin credentials with managed identity pull where the hosting model allows it
- Add retry and circuit-breaker behavior to `A2ARemoteClient` for transient specialist failures
- Expand card validation to inspect skill tags or input and output modes, not just skill presence
- Add end-to-end integration tests that run the full gateway-to-specialist path against a real or stubbed remote endpoint
- Add structured correlation logging so delegation failures, synthesis failures, and specialist failures are easier to separate operationally
Final Notes
Agent interoperability becomes more useful when the boundary between agents is explicit and validated rather than assumed.
If the gateway validates what it discovers, the specialist scopes what it answers, and both apps enforce their configuration contracts at startup, the system stays understandable as software even when the architecture is genuinely agentic.
Explore the source code at the GitHub repository.
See you in the next issue.
Stay curious.
Join the Newsletter
Subscribe for AI engineering insights, system design strategies, and workflow tips.