Overview

ACN’s discovery engine lets agents find services using natural language. Instead of browsing a catalog or reading API docs, an agent simply describes what it needs — and ACN returns the best matches, ranked by relevance, quality, and price.
POST /v1/discover
{
  "query": "translate text from English to Spanish"
}
Discovery is free and unauthenticated by design. Any agent on the internet can discover what services are available on ACN without signing up or paying.
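Because the endpoint is unauthenticated, calling it is a single POST with a JSON body. A minimal sketch in Python, assuming a hypothetical base URL (`ACN_BASE_URL` is a placeholder, not the real host):

```python
import json

# Hypothetical base URL -- substitute the actual ACN API host.
ACN_BASE_URL = "https://api.acn.example"

def build_discover_request(query: str) -> dict:
    """Build the unauthenticated POST /v1/discover request.

    Returns a plain dict describing the HTTP call; pass it to
    whatever HTTP client your agent uses.
    """
    return {
        "url": f"{ACN_BASE_URL}/v1/discover",
        "method": "POST",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"query": query}),
    }

req = build_discover_request("translate text from English to Spanish")
```

No API key or signup is needed; the request body is just the natural-language query.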

How Discovery Works

The discovery pipeline has four stages:

1. Intent Parsing

An LLM analyzes the natural language query to extract:
  • Primary intent — What the agent is trying to accomplish
  • Category signals — Which service categories are relevant
  • Constraint signals — Implied requirements (speed, accuracy, price sensitivity)

2. Semantic Matching

The parsed intent is converted into a vector embedding and matched against all registered endpoints using k-nearest-neighbor (k-NN) vector search. This finds services that are semantically similar to the request, even if they don’t contain the exact keywords. For example, “send a notification to a user’s phone” would match SMS services, push notification services, and messaging APIs, even when a provider’s description never uses the word “notification” (an SMS provider might describe itself only as “text message delivery”).
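The k-NN step can be sketched with cosine similarity over toy embeddings. The three-dimensional vectors below are stand-ins for real model output, chosen only to illustrate that a “notify a user’s phone” query lands near the messaging cluster:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings standing in for real endpoint description vectors.
endpoints = {
    "sms-send":  [0.9, 0.1, 0.0],
    "push-send": [0.8, 0.2, 0.1],
    "translate": [0.0, 0.1, 0.9],
}

def knn(query_vec, k=2):
    """Return the k endpoint slugs most similar to the query vector."""
    scored = sorted(endpoints.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# A "notify a user's phone" query embeds near the messaging services,
# so the two messaging endpoints rank first and "translate" is excluded.
matches = knn([0.85, 0.15, 0.05])
```

Real embeddings have hundreds of dimensions, but the ranking mechanics are the same.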

3. Structured Filtering

After semantic matching, results are filtered by any explicit constraints:
Filter               Description             Example
category             Service category        data_intelligence, ai_ml, communication, blockchain, utility
tags                 Specific tags           ["email", "transactional"]
max_price_per_call   Price ceiling in USDC   0.01
min_uptime_pct       Minimum uptime (%)      99.5
method               HTTP method             POST
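A filtered request combines the query with these fields. As a sketch, the payload below nests them under a "filters" key; note that key placement is an assumption here, as this page only names the individual filter fields:

```python
# Illustrative discovery payload with structured filters.
# The "filters" wrapper key is an assumption; the filter field
# names and example values come from the table above.
payload = {
    "query": "send a transactional email",
    "filters": {
        "category": "communication",
        "tags": ["email", "transactional"],
        "max_price_per_call": 0.01,
        "min_uptime_pct": 99.5,
        "method": "POST",
    },
}
```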

4. Ranking

Results are scored using a weighted combination of:
  • Relevance score — How well the service matches the query (from semantic search)
  • Quality metrics — Uptime percentage, average latency, error rate
  • Usage signals — Total calls served (a proxy for trust and reliability)
  • Price — Lower-priced services rank higher when relevance is equal
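The weighted combination can be sketched as below. The weights and normalizations are illustrative only; ACN does not publish its actual weighting here:

```python
# Illustrative weights -- not ACN's actual ranking formula.
WEIGHTS = {"relevance": 0.5, "quality": 0.25, "usage": 0.15, "price": 0.10}

def rank_score(relevance, uptime_pct, total_calls, price_usdc):
    """Combine the four signals into a single score in [0, 1]."""
    quality = uptime_pct / 100.0
    usage = min(total_calls / 100_000, 1.0)    # saturate the trust proxy
    price = 1.0 - min(price_usdc / 0.01, 1.0)  # cheaper => higher score
    return (WEIGHTS["relevance"] * relevance
            + WEIGHTS["quality"] * quality
            + WEIGHTS["usage"] * usage
            + WEIGHTS["price"] * price)
```

At equal relevance and quality, the price term is the tiebreaker, so a $0.002/call service outranks a $0.008/call one.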

Discovery Response

Each result includes everything an agent needs to decide and execute:
{
  "results": [
    {
      "provider_id": "resend",
      "provider_name": "Resend",
      "endpoint_slug": "send-email",
      "description": "Send transactional emails with high deliverability",
      "category": "communication",
      "tags": ["email", "transactional", "smtp"],
      "pricing": {
        "model": "per_call",
        "price_per_call_usdc": "0.002"
      },
      "quality": {
        "uptime_pct": 99.9,
        "avg_latency_ms": 230,
        "error_rate_pct": 0.1,
        "total_calls": 125000
      },
      "endpoint": {
        "method": "POST",
        "parameters": { ... },
        "request_schema": { ... },
        "response_schema": { ... }
      },
      "relevance_score": 0.95
    }
  ]
}
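On the agent side, choosing from these results is straightforward. A sketch, using an abridged copy of the response above and a hypothetical `pick_best` helper (not part of the ACN API):

```python
# Abridged version of the sample discovery response above.
response = {
    "results": [
        {
            "provider_id": "resend",
            "endpoint_slug": "send-email",
            "pricing": {"model": "per_call", "price_per_call_usdc": "0.002"},
            "quality": {"uptime_pct": 99.9},
            "endpoint": {"method": "POST"},
            "relevance_score": 0.95,
        }
    ]
}

def pick_best(results, max_price=0.01):
    """Hypothetical client-side helper: highest-relevance result
    under a price ceiling, or None if nothing qualifies."""
    affordable = [r for r in results
                  if float(r["pricing"]["price_per_call_usdc"]) <= max_price]
    return max(affordable, key=lambda r: r["relevance_score"], default=None)

best = pick_best(response["results"])
```

Note that `price_per_call_usdc` is a string in the response, so it must be parsed before numeric comparison.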

Caching

Discovery results are cached for performance:
  • Semantic results: 5-minute TTL
  • Quality metrics: 1-minute TTL (fresher data for uptime/latency)
Authenticated requests can bypass cache for real-time results when needed.
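The two-tier TTL scheme can be sketched with a toy cache. The TTL values mirror the ones documented above; everything else is illustrative:

```python
import time

# TTLs from the documentation: 5 minutes for semantic results,
# 1 minute for quality metrics.
TTLS = {"semantic": 300, "quality": 60}
_cache = {}

def cache_put(kind, key, value, now=None):
    """Store a value with the timestamp it was cached at."""
    now = time.time() if now is None else now
    _cache[(kind, key)] = (now, value)

def cache_get(kind, key, now=None):
    """Return the cached value, or None once its TTL has elapsed."""
    now = time.time() if now is None else now
    entry = _cache.get((kind, key))
    if entry and now - entry[0] < TTLS[kind]:
        return entry[1]
    return None
```

Keeping quality metrics on a shorter TTL means uptime and latency figures go stale within a minute, while the more expensive semantic search is reused for five.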

Rate Limits

Authentication                 Rate Limit
Unauthenticated (by IP)        20 requests/minute
Authenticated (by developer)   100 requests/minute
See Rate Limits for full details.
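To stay under these limits, a client can pace itself with a sliding-window counter. A minimal sketch (client-side only; the server enforces the real limits):

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter. Defaults match the unauthenticated
    tier (20 requests/minute); pass limit=100 when authenticated."""

    def __init__(self, limit=20, window=60.0):
        self.limit = limit
        self.window = window
        self.sent = deque()  # timestamps of requests in the window

    def allow(self, now=None):
        """Record and permit a request, or refuse if the window is full."""
        now = time.time() if now is None else now
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False
```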

Best Practices

  • Be specific. “Send a transactional email with HTML support” returns better results than just “email.” The more context you provide, the better the semantic matching.
  • Use structured filters for hard requirements. If you need a service under $0.01/call or with 99.9% uptime, use structured filters rather than mentioning them in the query text. Filters are applied as hard cutoffs.
  • Don’t hard-code provider IDs. Let your agent discover and choose the best provider at runtime. This makes your agent resilient to provider outages and price changes.
  • Cache on your side. For repeated tasks, cache discovery results locally to reduce latency. The provider landscape doesn’t change every second.
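Client-side caching can be a thin wrapper around whatever function performs discovery. A sketch, where `discover_fn` stands in for your actual API call:

```python
import time

def make_cached_discover(discover_fn, ttl_seconds=300):
    """Wrap a discovery function with a per-query TTL cache.

    discover_fn is a placeholder for whatever function actually
    calls POST /v1/discover; the TTL default mirrors ACN's own
    5-minute semantic cache.
    """
    cache = {}

    def cached(query, now=None):
        now = time.time() if now is None else now
        hit = cache.get(query)
        if hit and now - hit[0] < ttl_seconds:
            return hit[1]          # fresh cached results
        result = discover_fn(query)
        cache[query] = (now, result)
        return result

    return cached
```

Repeated queries within the TTL never touch the network, which also helps stay under the discovery rate limits.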