AIGQLUnify doesn’t guess against a blank schema. It reads your OpenAPI files, builds a graph, then lets AI propose joins, SDL patches, and GraphQL queries — all visible and overridable in the UI. Every AI call is downstream of PDP, so prompts and outputs never bypass policy.
AI helpers in AIGQLUnify are just another consumer of the governed graph. Before an AI resolver runs,
the data plane calls your PDP with the same subject, resource, and context used for normal queries.
If the PDP obligations say features.ai = false, the AI resolver simply does not execute.
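As a sketch of that gate, here is how a data plane might read the decision. The JSON shape below (`allow`, `obligations.features.ai`) is an illustrative assumption, not the exact PDP response:

```shell
# Hypothetical PDP decision payload -- field names are assumptions
decision='{"allow":true,"obligations":{"features":{"ai":false}},"allowFields":["orders.id","orders.total"]}'

# Gate: run the AI resolver only when the request is allowed AND features.ai is true
ai_enabled=$(echo "$decision" | jq -r '.allow and (.obligations.features.ai == true)')

if [ "$ai_enabled" = "true" ]; then
  echo "run AI resolver"
else
  echo "skip AI resolver: features.ai is false"   # -> this branch for the payload above
fi
```

Note that the query itself can still be allowed while AI is off: `allow` and `features.ai` are independent knobs.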
If they allow it, the AI resolver runs with features.ai = true against a masked selection set.

```shell
# PDP evaluates whether AI is allowed for this query
curl -sS -X POST https://<cp-host>/pdp/decision.v2 \
  -H "content-type: application/json" \
  -d '{
    "tenant":"t_demo",
    "workspace":"ws_primary",
    "action":"read",
    "resource":{"type":"GraphQuery","name":"orders"},
    "context":{
      "role":"analyst",
      "selection":["orders.id","orders.total","orders.userEmail"],
      "client":"console",
      "useAI":true
    }
  }' | jq
```
AIGQLUnify uses allowFields, mask, and obligations.features.ai
to decide which fields exist in the AI plan and whether AI runs at all.
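A minimal sketch of that field filtering: intersect the requested selection with the decision's allowFields so masked fields never reach the AI plan. The decision JSON here is an assumed shape for illustration:

```shell
# Requested selection and a hypothetical decision carrying allowFields + mask
selection='["orders.id","orders.total","orders.userEmail"]'
decision='{"allowFields":["orders.id","orders.total"],"mask":["orders.userEmail"]}'

# Keep only fields the PDP allows; everything else is dropped from the AI plan
allowed=$(jq -cn --argjson sel "$selection" --argjson dec "$decision" \
  '$sel | map(select(. as $f | $dec.allowFields | index($f)))')

echo "$allowed"   # -> ["orders.id","orders.total"]
```

In this sketch orders.userEmail is simply absent from the AI plan, so the summarizer cannot see it even in its prompt context.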
```shell
# 1) User types: "Show risky orders from last week."
# 2) Modeling helper proposes a GraphQL query plan.
# 3) Data plane calls PDP. Only if features.ai = true:
#    - execute the GraphQL selection
#    - run the AI summarizer on the masked result
curl -sS https://<dp-host>/graphql \
  -H "content-type: application/json" \
  -H "x-tenant-id: t_demo" \
  -H "x-workspace-id: ws_primary" \
  -H "authorization: Bearer <jwt>" \
  -d '{
    "query": "query AskAIOverOrders($prompt: String!) { askAI { riskyOrdersSummary(prompt: $prompt) { text traceId usedFields } } }",
    "variables": {
      "prompt": "Summarize the riskiest orders from last week."
    }
  }' | jq
```
If PDP returns features.ai = false, askAI short-circuits with a
policy error and emits a span so you can see who asked and why it was denied.
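On the wire, the denial surfaces as an ordinary GraphQL error. A sketch of pulling the reason and the span's traceId out of such a response; the extensions fields (`code`, `traceId`) are assumptions about the error shape, not a documented contract:

```shell
# Hypothetical denied response -- the extensions fields are assumptions
response='{"data":{"askAI":null},"errors":[{"message":"policy denied: features.ai is false","extensions":{"code":"POLICY_DENIED","traceId":"4f2b9c1d7a3e4f12"}}]}'

# Extract the denial reason and the traceId of the emitted span
reason=$(echo "$response" | jq -r \
  '.errors[0] | "\(.extensions.code) trace=\(.extensions.traceId): \(.message)"')

echo "$reason"
```

The traceId in the error is the same one the span carries, so a denied call is as searchable as an allowed one.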
Modeling helpers do three things: propose joins between your APIs, draft SDL patches, and generate GraphQL queries.
In every case, they operate on the same shape and the same decisions as your regular GraphQL clients. No secret data paths, no “AI shadow API.”
AI outputs carry a traceId that ties them back to the PDP decision, the executed GraphQL selection, and the fields the summarizer actually used.
That means “what data fed this AI answer?” goes from hand-wavy to one search in your tracing backend.
```shell
# Find the AI span by traceId
# (traceId is returned in askAI.riskyOrdersSummary.traceId)
# Example Jaeger / OpenTelemetry search
traceId="4f2b9c1d7a3e4f12"
open "https://<jaeger-host>/search?traceID=$traceId"
```

Because AI is in-path, not a sidecar, DSAR and policy logs already include the context for what each AI call saw and why it was allowed.