API Reference

Public handoff and response reference for Persona SDK integrations.

Updated 19 March 2026

🌐 Base URL

Production: https://api2.onairos.uk

For Persona integrations, follow the returned apiUrl instead of hardcoding paths in your app.

🔐 Authentication

All API requests require Bearer token authentication in the header:

Authorization: Bearer YOUR_SECRET_TOKEN
Content-Type: application/json
Follow the returned contract

Build against the returned apiUrl, token, request shape, and response fields.

How Persona API access works

  1. The SDK finishes the consent flow.
  2. Your app receives a short-lived token and a returned apiUrl.
  3. If SDK fetch mode is enabled, the SDK fetches that URL for you and places the result in apiResponse.
  4. If SDK fetch mode is disabled, your app sends a POST request to the returned apiUrl with the bearer token.
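The two paths above can be sketched as one helper. This is a minimal sketch, not SDK code: `result` stands for the handoff object described in Section 1, and `sdkFetchMode` mirrors however your app tracks that setting; both names are illustrative.

```javascript
// Sketch of both fetch modes, assuming `result` is the SDK handoff object.
async function getPersonaData(result, sdkFetchMode) {
  if (sdkFetchMode) {
    // SDK fetch mode: the SDK already called the returned apiUrl for you
    // and placed the result in apiResponse.
    return result.apiResponse;
  }
  // Manual mode: POST to the returned apiUrl with the short-lived token.
  const res = await fetch(result.apiUrl, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${result.token}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ includeLlmData: false })
  });
  return res.json();
}
```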

1. Handoff Response

The SDK handoff gives your app the API URL to call next plus a token scoped to the approved request.

{
  "apiUrl": "https://api2.onairos.uk/...",
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6...",
  "authorizedData": {
    "basic": true,
    "personality": true,
    "preferences": false,
    "rawMemories": true,
    "fastTraits": false
  },
  "training": {
    "ready": true,
    "statusUrl": "https://api2.onairos.uk/...",
    "pollIntervalMs": 5000
  },
  "traitsMode": "standard",
  "usage": {
    "includeRawMemories": true,
    "note": "Include includeRawMemories: true in request body to access raw LLM conversation data"
  },
  "warning": "Optional route-specific warning"
}
| Field | Meaning |
| --- | --- |
| apiUrl | The returned API URL for this approved request. |
| token | Short-lived bearer token for that returned API URL. |
| authorizedData | The approved data categories the user consented to. |
| training | Background training status hints, including whether traits are already ready. |
| traitsMode | Usually standard; fast when a fast-traits path was requested. |
| usage | Optional hints for special request modes such as raw memories access. |
| warning | Optional runtime warning when Onairos falls back to a safer flow. |

Some mobile handoff payloads may include a usage hint mentioning includeRawMemories. For manual calls to the returned Persona API URL, use includeLlmData in the request body.
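When training.ready is false, the handoff's statusUrl and pollIntervalMs can drive a simple wait loop. This is a hedged sketch: the shape of the status payload (a ready boolean) is an assumption here, not a documented contract, so adapt it to what your route actually returns.

```javascript
// Sketch: wait for background training using the handoff's training hints.
// ASSUMPTION: the status endpoint returns JSON with a `ready` boolean.
async function waitForTraining(handoff, maxAttempts = 12) {
  if (handoff.training?.ready) return true; // traits already available
  for (let i = 0; i < maxAttempts; i++) {
    await new Promise(r => setTimeout(r, handoff.training.pollIntervalMs));
    const res = await fetch(handoff.training.statusUrl, {
      headers: { 'Authorization': `Bearer ${handoff.token}` }
    });
    const status = await res.json();
    if (status.ready) return true;
  }
  return false; // give up after maxAttempts polls
}
```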

Stable integration rule

Use the returned apiUrl. The downstream response changes based on the user's approvals and whether your app is asking for traits, inference, raw memories, or a combined result.

2. Manual Request Body

If you disable SDK fetch mode, send a POST request to the returned apiUrl.

const response = await fetch(result.apiUrl, {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + result.token,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    Input: {
      item1: { text: 'Product description', category: 'fashion' },
      item2: { text: 'Another item', category: 'lifestyle', img_url: 'https://...' }
    },
    includeLlmData: false,
    topicFilter: 'fashion'
  })
});
| Field | Required | Meaning |
| --- | --- | --- |
| Input | Only for inference requests | The items you want Onairos to score with the user's preference model. |
| includeLlmData | No | Ask for optional LLM interaction summary data when the user approved that access. |
| topicFilter or domain | No | Filter returned traits or inference-related outputs to a topic/domain. |

Do not send Input for traits-only or cached profile-style responses.

3. Response Patterns

Persona returns a small number of stable response families. Your app should look for the fields it needs rather than assuming one fixed envelope.
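One way to follow that advice is a small classifier that keys off the presence of fields rather than a fixed envelope. A minimal sketch, using only field names documented on this page:

```javascript
// Sketch: classify which response family a payload belongs to by the
// fields it carries, instead of assuming one fixed envelope.
function classifyResponse(body) {
  const hasInference = Boolean(body.InferenceResult || body.inferenceResults);
  const hasTraits = Boolean(body.traits || body.userProfile || body.trainingResults);
  if (hasInference && hasTraits) return 'combined';
  if (hasInference) return 'inference';
  if (hasTraits) return 'traits';
  return 'unknown';
}
```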

Traits / Profile Response

This is the standard traits-only response used when the user approved profile/personality outputs.

{
  "success": true,
  "traits": {
    "positive_traits": {
      "Creative Problem Solving": 92,
      "Community Engagement": 85
    },
    "traits_to_improve": {
      "Routine Consistency": 32
    },
    "user_summary": "You are curious and idea-driven...",
    "top_traits_explanation": "Your strongest signals came from...",
    "archetype": "Creative Explorer",
    "nudges": [
      { "text": "Turn one of your ideas into a small weekly habit." }
    ]
  },
  "userProfile": {
    "user_summary": "You are curious and idea-driven...",
    "top_traits_explanation": "Your strongest signals came from...",
    "archetype": "Creative Explorer",
    "nudges": [
      { "text": "Turn one of your ideas into a small weekly habit." }
    ]
  },
  "userTraits": {},
  "user": {
    "userId": "user_123"
  },
  "connectedPlatforms": ["reddit", "youtube"],
  "metadata": {
    "format": "standard",
    "traitsMode": "standard",
    "retrievedAt": "2026-03-19T12:00:00.000Z",
    "platformCount": 2,
    "topicFilter": null,
    "topicFilterApplied": false
  }
}

Optional extras on this response family include domainFilter and llmData.
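Since positive_traits is a name-to-score map, a common first step is ranking it for display. A minimal sketch against the traits shape shown above:

```javascript
// Sketch: turn the positive_traits map into a ranked list for display.
function rankTraits(traits) {
  return Object.entries(traits.positive_traits || {})
    .map(([name, score]) => ({ name, score }))
    .sort((a, b) => b.score - a.score); // highest score first
}
```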

Inference Response

This is the response family used when you send Input for user preference scoring. The scores come from the MIND1 model.

{
  "InferenceResult": {
    "output": [
      [[0.9998]],
      [[0.0013]]
    ]
  },
  "domainFilter": {
    "domain": "fashion",
    "relevantCount": 2,
    "method": "llm",
    "applied": true
  }
}

InferenceResult.output is returned in input order. Higher scores mean stronger predicted preference or engagement.
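Because each score arrives wrapped in nested arrays, most apps flatten the output into one number per input item. A minimal sketch assuming the [[score]] nesting shown above:

```javascript
// Sketch: flatten InferenceResult.output into one score per input item,
// preserving input order.
function toScores(inferenceResult) {
  return inferenceResult.output.map(entry => entry[0][0]);
}
```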

Combined Traits + Inference Response

This is the common combined response when the user approved both profile-style outputs and inference scoring.

{
  "InferenceResult": {
    "output": [
      [[0.9998]]
    ]
  },
  "traits": {
    "positive_traits": {
      "Creative Problem Solving": 92
    },
    "traits_to_improve": {
      "Routine Consistency": 32
    },
    "user_summary": "You are curious and builder-minded...",
    "top_traits_explanation": "You repeatedly engage with...",
    "archetype": "Creative Explorer",
    "nudges": [
      { "text": "Use your strongest interests as the starting point for your next project." }
    ]
  },
  "userProfile": {
    "user_summary": "You are curious and builder-minded...",
    "top_traits_explanation": "You repeatedly engage with...",
    "archetype": "Creative Explorer",
    "nudges": [
      { "text": "Use your strongest interests as the starting point for your next project." }
    ]
  },
  "connectedPlatforms": ["youtube", "linkedin"]
}

Combined Training / Cached Results Response

When the returned API URL is serving training-plus-results style data, the response can include cached inference history and richer training metadata.

{
  "success": true,
  "userProfile": {
    "user_summary": "You are curious and builder-minded...",
    "top_traits_explanation": "You repeatedly engage with...",
    "archetype": "Creative Explorer",
    "nudges": []
  },
  "trainingResults": {
    "traits": {
      "positive_traits": {
        "Creative Problem Solving": 92
      }
    },
    "trainingCompleted": true,
    "lastTrainingDate": "2026-03-18T09:30:00.000Z"
  },
  "inferenceResults": {
    "hasInferenceResults": true,
    "latestResults": {
      "output": [
        [[0.9981]]
      ]
    },
    "allResults": [],
    "totalInferences": 3
  },
  "connectedPlatforms": ["youtube", "chatgpt"],
  "metadata": {
    "platformCount": 2,
    "combinedDataAvailable": {
      "traits": true,
      "inference": true,
      "model": true,
      "llmData": true
    }
  }
}
Route-specific fields can vary

Some responses also include user, trainingResults, inferenceResults, domainFilter, or llmData. Build against the fields your product actually uses instead of assuming every response has every field.

4. Optional llmData Shape

When the user approved raw-memory access and your request includes includeLlmData, the response may include an llmData object.

{
  "llmData": {
    "hasLlmData": true,
    "totalInteractions": 24,
    "lastInteraction": "2026-03-18T18:42:00.000Z",
    "platforms": {
      "chatgpt": 18,
      "claude": 6
    },
    "recentInteractions": [
      {
        "platform": "chatgpt",
        "conversationId": "conv_123",
        "dataType": "conversation",
        "summary": "Conversation data",
        "messageCount": 12,
        "createdAt": "2026-03-18T18:42:00.000Z",
        "isEncrypted": true
      }
    ],
    "accessLevel": "metadata_only"
  }
}

Depending on authorization and route, llmData may be a summary-only object or a fuller payload with decrypted conversation content.
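A defensive accessor can make that distinction explicit. This sketch treats metadata_only as "no readable content"; whether other accessLevel values exist is an assumption on your route's behavior.

```javascript
// Sketch: only treat llmData as readable content when it is present and
// the access level allows more than metadata.
function usableInteractions(body) {
  const llm = body.llmData;
  if (!llm || !llm.hasLlmData) return [];
  if (llm.accessLevel === 'metadata_only') return []; // summaries only
  return llm.recentInteractions || [];
}
```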

5. Common Errors

| Status | What it usually means |
| --- | --- |
| 200 | Success. Response shape depends on the returned apiUrl. |
| 400 | Invalid request shape, an oversized inference request, or a missing PIN for an encrypted-model path. |
| 401 | Invalid token, or the token could not be associated with a user. |
| 403 | Domain registration or authorization failed during handoff. |
| 404 | User or approved data could not be found. |
| 405 | Wrong HTTP method for a handoff endpoint that only accepts POST. |
| 413 | Your Input payload exceeded the authorized inference size. |
| 500 | Model download, decryption, inference, or response assembly failed. |
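The table above can be collapsed into a coarse recovery policy. The action labels below are illustrative, and treating only 500 as retryable is a judgment call, not a documented guarantee:

```javascript
// Sketch: map common statuses to a coarse recovery action.
function errorAction(status) {
  switch (status) {
    case 400:
    case 413:
      return 'fix-request';   // reshape the body or shrink Input
    case 401:
    case 403:
    case 404:
      return 'reauthorize';   // token/approval problem: redo the handoff
    case 405:
      return 'fix-request';   // use POST against the returned apiUrl
    case 500:
      return 'retry';         // transient server-side failure
    default:
      return 'fail';
  }
}
```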

Integration Rules

  • Treat apiUrl as opaque and use it directly.
  • Use token as a short-lived bearer token for that specific response flow.
  • Send Input only when you are requesting inference scoring.
  • Expect traits/profile responses, inference responses, or combined responses depending on approvals.
  • Look for stable fields such as traits, userProfile, InferenceResult.output, connectedPlatforms, and metadata.
Recommended reading

Use Web Integration, React Native Integration, and Inference API together. The platform guide tells you how to obtain the handoff response; this page tells you how to reason about the payloads that come back.