If you’re evaluating compliance controls against your Kosli trail data today, there’s a good chance you’ve written some glue code to make it work. A script that pulls trail data from the API. Another that downloads attestations one by one. Something that stitches the JSON into a shape your chosen compliance engine can evaluate. And then that engine itself, whether it’s OPA, a custom Python script, or something else, installed and configured in your pipeline.
We know, because we were doing exactly the same thing. Our own CS team had an 80-line bash script just to assemble trail data for Rego evaluation. It worked. But it was fragile, hard to maintain, and impossible to hand to a customer and say “here, do this.”
With Kosli CLI v2.12.0, that script is one command.
What it does
kosli evaluate lets you define a Rego policy and run it against your trail data in a single CLI call. You don’t need to install OPA in your pipeline, query the API and download attestations individually, or manage Rego policies across environments. It replaces all of that, including the 80-line bash scripts.
kosli evaluate trail "$TRAIL_NAME" \
  --policy my-policy.rego \
  --flow my-flow \
  --org my-org
The CLI has a built-in Rego evaluator, so OPA doesn’t need to be installed. It fetches the trail data, enriches it with attestation details, and evaluates your policy against it. You get a clear allowed-or-denied result.
What a policy looks like
Policies are standard Rego with a simple contract: declare package policy, define an allow rule, and optionally define a violations rule that explains what failed.
Here’s a policy that checks whether every pull request on a trail has at least one approver:
package policy

import rego.v1

default allow = false

violations contains msg if {
    some trail in input.trails
    some pr in trail.compliance_status.attestations_statuses["pull-request"].pull_requests
    count(pr.approvers) == 0
    msg := sprintf("trail '%v': pull-request %v has no approvers", [trail.name, pr.url])
}

allow if {
    count(violations) == 0
}
The power here is that you’re writing policy against real evidence: the actual trail data that Kosli has collected from your pipeline, not a mocked-up JSON blob or a manually assembled payload.
Why Rego
Rego is the policy language from the Open Policy Agent project, and it’s a deliberate choice, not just something we reached for because it was nearby.
It’s declarative: you describe what should be true, not how to check it. “Every pull request must have an approver” reads as a statement of policy, which is exactly what it is. It’s JSON-native, so trail data maps directly into Rego queries without parsing, deserialization, or type coercion. It’s deterministic by default: policies evaluate against the data they’re given, which makes them easy to test, reproduce, and audit. And it’s widely adopted for Kubernetes admission control, cloud infrastructure policy, and authorization across the industry, so many teams already know it.
Rego was purpose-built for exactly the kind of decision kosli evaluate needs to make: look at structured data, apply a rule, return a verdict.
Evaluate one trail or many
For a single trail, use kosli evaluate trail. The policy receives the trail data as input.trail:
kosli evaluate trail "$TRAIL_NAME" \
  --policy snyk-check.rego \
  --flow my-flow \
  --org my-org
For multiple trails in one evaluation, use kosli evaluate trails. The policy receives an array as input.trails:
kosli evaluate trails trail-1 trail-2 trail-3 \
  --policy pr-approved.rego \
  --flow my-flow \
  --org my-org
This distinction matters when you’re writing the Rego. Single-trail policies reference input.trail. Multi-trail policies iterate over input.trails. The tutorial covers both patterns in detail.
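For contrast with the multi-trail approvers policy above, here’s a sketch of the single-trail version. The field paths under input.trail are an assumption for illustration, mirroring the structure the multi-trail example reads from each element of input.trails:

```rego
package policy

import rego.v1

default allow = false

# Same check as the multi-trail example, but reading from input.trail
# directly instead of iterating over input.trails.
# (Field paths assumed to mirror the multi-trail structure.)
violations contains msg if {
    some pr in input.trail.compliance_status.attestations_statuses["pull-request"].pull_requests
    count(pr.approvers) == 0
    msg := sprintf("pull-request %v has no approvers", [pr.url])
}

allow if {
    count(violations) == 0
}
```

Because there’s no outer trail loop, the violation messages don’t need to name the trail: you already know which one you evaluated.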
Built for your pipeline
The exit code behaviour makes kosli evaluate a natural gate in CI/CD:
if kosli evaluate trail "$GIT_COMMIT" \
    --policy policies/pr-approved.rego \
    --flow "$FLOW_NAME" \
    --org "$KOSLI_ORG"; then
  echo "Deploying..."
else
  echo "Blocked by policy"
  exit 1
fi
You can also capture the evaluation result as an attestation, creating an auditable record that the check ran and what it found:
kosli evaluate trail "$TRAIL_NAME" \
  --policy my-policy.rego \
  --flow "$FLOW_NAME" \
  --org "$KOSLI_ORG" \
  --show-input \
  --output json > eval-report.json 2>/dev/null || true

kosli attest generic \
  --name opa-evaluation \
  --flow "$FLOW_NAME" \
  --trail "$TRAIL_NAME" \
  --org "$KOSLI_ORG" \
  --compliant="$(jq -r '.allow' eval-report.json)" \
  --attachments my-policy.rego,eval-report.json
The --show-input flag is worth knowing about. It includes the full input data in the JSON output, which is invaluable when you’re writing and debugging policies. You can see exactly what Kosli sends to the Rego evaluator.
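As a quick sketch of working with that report: the attestation example above only relies on the top-level allow field, so everything else in this sample JSON (the violations array and the input object) is an assumed shape for illustration, not the CLI’s documented output format:

```shell
# Hypothetical eval-report.json standing in for real CLI output.
# Only '.allow' is used by the attestation step; 'violations' and
# 'input' are assumed field names for illustration.
cat > eval-report.json <<'EOF'
{
  "allow": false,
  "violations": ["trail 'main': pull-request https://example.com/pr/1 has no approvers"],
  "input": {"trail": {"name": "main"}}
}
EOF

jq -r '.allow' eval-report.json         # the verdict your pipeline gate keys on
jq -r '.violations[]' eval-report.json  # human-readable failure reasons
jq '.input.trail' eval-report.json      # inspect exactly what the policy saw
```

Poking at the report like this is usually the fastest way to find the right field paths for a new policy.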
Why we built this
Multiple customers, independently, arrived at the same architecture: collect evidence with Kosli, evaluate it with Rego. But every one of them had to build the plumbing themselves: install OPA, query our API, download attestations, assemble the JSON, run the evaluation, handle the results.
We were doing the same thing internally. The pattern was clear; the tooling wasn’t.
kosli evaluate is the first step in making policy evaluation a first-class part of Kosli. It wraps the entire fetch-assemble-evaluate cycle into a single command that works anywhere the CLI runs, with no extra dependencies or infrastructure.
Where this goes next
Today, Rego policies live in your repo and evaluation runs in your pipeline. That’s useful, but it’s just the starting point.
We’re working towards Rego policies stored and versioned inside Kosli, with server-side evaluation. Upload a policy, point it at a trail, and get a verdict without the CLI in the loop at all. That also opens up retroactive evaluation: run a new policy version against existing trail data to see what would change before you enforce it.
We also want to streamline the input side: making it simpler to shape and assemble the data that flows into evaluation, so you get a complete, end-to-end view of your controls without having to think about the plumbing underneath.
The longer-term picture is connecting evaluation verdicts to named controls, bridging the gap between “this Rego policy passed” and “control SDLC-053 is satisfied.” But that’s a story for another post.
Try it
The Evaluate Trails with OPA tutorial walks through everything: writing policies, exploring trail data with --show-input, single and multi-trail evaluation, and recording results as attestations.
We’d love to hear how you use it, especially which controls you’re trying to automate and what patterns emerge. That feedback directly shapes where we take evaluations next.