We built NORNR because once agents start buying API usage, vendor services, or paid infrastructure, the hard part is no longer the payment rail. It is deciding whether that spend should happen at all.
NORNR sits between intent and settlement. Policy decides whether something is approved, queued, or rejected. Human approvals kick in when thresholds or counterparties require it. Receipts, manifests, and evidence stay attached to the decision trail.
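The intent-to-decision flow described above can be sketched roughly like this. This is a minimal illustration only: the class names, threshold, and counterparty allowlist are invented for the example and are not NORNR's actual SDK API.

```python
from dataclasses import dataclass

@dataclass
class SpendIntent:
    """A hypothetical spend request an agent might raise."""
    counterparty: str
    amount_usd: float
    purpose: str

# Example mandate: who the agent may pay, and the auto-approve ceiling.
APPROVED_COUNTERPARTIES = {"api-vendor.example", "infra.example"}
AUTO_APPROVE_LIMIT_USD = 50.0

def evaluate(intent: SpendIntent) -> str:
    """Return 'approved', 'queued', or 'rejected' for a spend intent."""
    if intent.counterparty not in APPROVED_COUNTERPARTIES:
        return "rejected"   # no mandate for this counterparty
    if intent.amount_usd > AUTO_APPROVE_LIMIT_USD:
        return "queued"     # over threshold: waits for human approval
    return "approved"       # within mandate and threshold

decision = evaluate(SpendIntent("api-vendor.example", 12.00, "API usage top-up"))
```

In a real deployment the decision record would also carry the receipts, manifests, and evidence mentioned above, so that every approval or rejection is auditable after the fact.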
What is live today:
Quickstart: https://nornr.com/quickstart
Control room: https://nornr.com/app
Python and TypeScript SDKs
Design partner application flow
Live on-chain settlement proof on the launch page (Base Sepolia)
Repo: https://github.com/NORNR
Launch page: https://nornr.com
Giving an AI agent the ability to spend real money sounds like complete madness. Are you introducing this project because people are actually doing this insane thing already, and you want to make the process safer, or because you want to encourage people to begin doing so, by introducing a limited sandbox?
It is the first one.
We are not trying to persuade teams that agents should suddenly be given money for the first time. We are reacting to the fact that many agent workflows are already close to real economic actions: buying API usage, triggering vendor services, provisioning paid infrastructure, or moving toward delegated purchasing flows.
Our view is that if those workflows are happening anyway, the dangerous setup is not “agent with constraints”. The dangerous setup is “agent with no mandate, no approval thresholds, and no evidence trail”.
So the goal of NORNR is not to encourage reckless autonomy. It is to put policy, approvals, and auditability between intent and settlement.
In other words: if an agent is going to touch spend, we think the default should be more governance, not less.