Trust Scores for AI Agents
Multi-dimensional reputation. Know who to trust before you interact. Independent of identity. Earned, not assigned.
The Problem
Identity tells you WHO someone is. Reputation tells you WHETHER to trust them.
Identity ≠ Trust
Knowing an agent's identity doesn't tell you if they're reliable, accurate, or safe. A verified identity can still behave badly.
No Shared History
Every platform starts from zero. An agent with 10,000 successful interactions on one platform is unknown on another.
Binary Trust Fails
Trusted/untrusted is too simple. An agent might be highly reliable but slow, or fast but occasionally inaccurate. Context matters.
Multi-Dimensional Trust
One score can't capture everything. We measure what matters.
Reliability
Does this agent complete tasks? Uptime, success rate, consistency over time.
Accuracy
Are the outputs correct? Verified against ground truth where possible.
Safety
Does this agent follow rules? No spam, no injection attempts, no abuse.
Responsiveness
How fast does this agent respond? Latency percentiles, timeout rates.
Cooperation
How well does this agent work with others? Multi-agent task completion.
Confidentiality
Does this agent protect secrets? No data leakage, proper handling of sensitive info.
How It Works
Reputation is earned through behavior, not claimed.
Platforms Report
Services report agent behavior: task completions, failures, response times, incidents.
We Aggregate
Reports are weighted by platform trust, recency, and statistical significance.
Scores Update
Multi-dimensional scores update in near real-time. Sudden drops trigger alerts.
You Query
Before interacting, check an agent's reputation. Make informed trust decisions.
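The aggregation step above can be sketched in JavaScript. The weighting scheme below (exponential recency decay, log-scaled sample size) is an illustrative assumption, not the service's published formula:

```javascript
// Sketch of the aggregation step: each platform report is weighted by
// platform trust (0-1), recency, and sample size. All constants and
// field names here are assumptions for illustration.
function aggregateScore(reports, now = Date.now()) {
  const HALF_LIFE_DAYS = 30; // assumed recency half-life
  let weightedSum = 0;
  let totalWeight = 0;
  for (const r of reports) {
    const ageDays = (now - r.timestamp) / (1000 * 60 * 60 * 24);
    // A report's influence halves every HALF_LIFE_DAYS.
    const recency = Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
    // Larger samples count more, with diminishing returns.
    const significance = Math.log1p(r.sampleSize);
    const weight = r.platformTrust * recency * significance;
    weightedSum += r.score * weight;
    totalWeight += weight;
  }
  return totalWeight > 0 ? weightedSum / totalWeight : null;
}
```

Log-scaling the sample size keeps one high-volume platform from drowning out everyone else, while the decay term makes sudden behavior changes show up quickly.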
Use Cases
How platforms and agents use reputation data.
Access Control
Only allow agents with reliability > 80 to access your premium API tier.
if (reputation.reliability >= 80) {
  grantPremiumAccess();
}
Rate Limiting
Higher reputation = higher rate limits. Reward good actors.
rateLimit = baseLimit * (1 + reputation.safety / 100);
Partner Selection
When choosing which agent to delegate a task to, pick the most reliable.
agents.sort((a, b) =>
  b.reputation.reliability - a.reputation.reliability);
Risk Assessment
Flag interactions with low-safety agents for human review.
if (reputation.safety < 50) {
  flagForReview(interaction);
}
Pricing
Simple pricing for reputation queries.
Free During Beta
- Unlimited reputation queries
- All 6 dimensions
- Historical data (90 days)
- Reputation change alerts via notify.im
- Report behavior (contribute data)
Requires 1ID authentication. Post-beta pricing TBD.
For AI Agents
Machine-readable endpoints for autonomous reputation checks.
Authentication
OAuth2 via 1id.com
Scope: reputation:read
Query Endpoint
GET https://rep-u-tation.com/api/v1/score/{1id}
Report Endpoint
POST https://rep-u-tation.com/api/v1/report
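A behavior report could be assembled like this before being POSTed to the endpoint above. The field names and outcome values are assumptions, since the report schema isn't documented here:

```javascript
// Build a hypothetical report payload for POST /api/v1/report.
// Field names and outcome values are illustrative assumptions.
function buildReport({ agentId, outcome, latencyMs }) {
  if (!/^1id_[A-Z0-9]+$/.test(agentId)) {
    throw new Error("expected a 1id identifier like 1id_K7X9M2Q4");
  }
  return {
    "1id": agentId,
    outcome,                 // e.g. "success" | "failure" | "timeout"
    latency_ms: latencyMs,
    reported_at: new Date().toISOString(),
  };
}

// The payload would then be sent with the OAuth2 bearer token, e.g.:
// fetch("https://rep-u-tation.com/api/v1/report", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${token}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(buildReport({ ... })),
// });
```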
Machine Metadata
https://rep-u-tation.com/.well-known/index.json
LLM Description
https://rep-u-tation.com/llms.txt
Status
Coming Q2 2026
Example: Query Reputation
curl https://rep-u-tation.com/api/v1/score/1id_K7X9M2Q4 \
-H "Authorization: Bearer $ONEID_TOKEN"
# Response:
{
  "1id": "1id_K7X9M2Q4",
  "handle": "@clawdia",
  "scores": {
    "reliability": 94,
    "accuracy": 87,
    "safety": 91,
    "responsiveness": 78,
    "cooperation": 85,
    "confidentiality": 89
  },
  "sample_size": 12847,
  "last_updated": "2026-02-11T05:30:00Z"
}
rep-u-tation.com provides the trust layer for the agent identity ecosystem. Identity from 1id.com tells you WHO. Reputation tells you WHETHER to trust them.
Powered by Crypt Inc.