API Reference
Public handoff and response reference for Persona SDK integrations.
🌐 Base URL
https://api2.onairos.uk
For Persona integrations, follow the returned apiUrl instead of hardcoding paths in your app.
🔐 Authentication
All API requests require Bearer token authentication in the header:
Authorization: Bearer YOUR_SECRET_TOKEN
Content-Type: application/json
Build against the returned apiUrl, token, request shape, and response fields.
How Persona API access works
- The SDK finishes the consent flow.
- Your app receives a short-lived `token` and a returned `apiUrl`.
- If SDK fetch mode is enabled, the SDK fetches that URL for you and places the result in `apiResponse`.
- If SDK fetch mode is disabled, your app sends a `POST` request to the returned `apiUrl` with the bearer token.
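The two modes above can be sketched as a small helper. This is an illustrative sketch only: the `sdkFetchEnabled` flag and the `resolvePersonaResult` name are assumptions for the example, not part of the SDK surface.

```javascript
// Hypothetical helper: decide how to obtain the Persona result from a
// handoff object. `sdkFetchEnabled` is an assumed app-side setting.
function resolvePersonaResult(handoff, sdkFetchEnabled) {
  if (sdkFetchEnabled) {
    // SDK fetch mode: the SDK already called the returned apiUrl
    // and placed the result in apiResponse.
    return { mode: 'sdk', result: handoff.apiResponse };
  }
  // Manual mode: your app must POST to handoff.apiUrl with the bearer token.
  return { mode: 'manual', url: handoff.apiUrl, token: handoff.token };
}
```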
1. Handoff Response
The SDK handoff gives your app the API URL to call next plus a token scoped to the approved request.
{
"apiUrl": "https://api2.onairos.uk/...",
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6...",
"authorizedData": {
"basic": true,
"personality": true,
"preferences": false,
"rawMemories": true,
"fastTraits": false
},
"training": {
"ready": true,
"statusUrl": "https://api2.onairos.uk/...",
"pollIntervalMs": 5000
},
"traitsMode": "standard",
"usage": {
"includeRawMemories": true,
"note": "Include includeRawMemories: true in request body to access raw LLM conversation data"
},
"warning": "Optional route-specific warning"
}
| Field | Meaning |
|---|---|
| `apiUrl` | The returned API URL for this approved request. |
| `token` | Short-lived bearer token for that returned API URL. |
| `authorizedData` | The approved data categories the user consented to. |
| `training` | Background training status hints, including whether traits are already ready. |
| `traitsMode` | Usually `standard`; `fast` when a fast-traits path was requested. |
| `usage` | Optional hints for special request modes such as raw memories access. |
| `warning` | Optional runtime warning when Onairos falls back to a safer flow. |
Some mobile handoff payloads may include a usage hint mentioning includeRawMemories. For manual calls to the returned Persona API URL, use includeLlmData in the request body.
Use the returned apiUrl. The downstream response changes based on the user's approvals and whether your app is asking for traits, inference, raw memories, or a combined result.
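The `training` hints in the handoff can drive a simple polling loop before you call the returned apiUrl. A minimal sketch follows; the shape of the status payload (a JSON body with a `ready` boolean) is an assumption for illustration and is not documented here.

```javascript
// Poll the training statusUrl until traits are ready, honoring the
// handoff's pollIntervalMs hint. `fetchJson` is any function that GETs
// a URL and returns parsed JSON (injected here so the sketch is testable).
async function waitForTraining(training, fetchJson, maxAttempts = 12) {
  if (training.ready) return true; // traits already available, no polling needed
  for (let i = 0; i < maxAttempts; i++) {
    const status = await fetchJson(training.statusUrl);
    if (status.ready) return true; // assumed status shape: { ready: boolean }
    await new Promise((r) => setTimeout(r, training.pollIntervalMs || 5000));
  }
  return false; // give up after maxAttempts; surface a retry UI instead of hanging
}
```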
2. Manual Request Body
If you disable SDK fetch mode, send a POST request to the returned apiUrl.
const response = await fetch(result.apiUrl, {
method: 'POST',
headers: {
'Authorization': 'Bearer ' + result.token,
'Content-Type': 'application/json'
},
body: JSON.stringify({
Input: {
item1: { text: 'Product description', category: 'fashion' },
item2: { text: 'Another item', category: 'lifestyle', img_url: 'https://...' }
},
includeLlmData: false,
topicFilter: 'fashion'
})
});
| Field | Required | Meaning |
|---|---|---|
| `Input` | Only for inference requests | The items you want Onairos to score with the user's preference model. |
| `includeLlmData` | No | Ask for optional LLM interaction summary data when the user approved that access. |
| `topicFilter` or `domain` | No | Filter returned traits or inference-related outputs to a topic/domain. |
Do not send Input for traits-only or cached profile-style responses.
3. Response Patterns
Persona returns a small number of stable response families. Your app should look for the fields it needs rather than assuming one fixed envelope.
Traits / Profile Response
This is the standard traits-only response used when the user approved profile/personality outputs.
{
"success": true,
"traits": {
"positive_traits": {
"Creative Problem Solving": 92,
"Community Engagement": 85
},
"traits_to_improve": {
"Routine Consistency": 32
},
"user_summary": "You are curious and idea-driven...",
"top_traits_explanation": "Your strongest signals came from...",
"archetype": "Creative Explorer",
"nudges": [
{ "text": "Turn one of your ideas into a small weekly habit." }
]
},
"userProfile": {
"user_summary": "You are curious and idea-driven...",
"top_traits_explanation": "Your strongest signals came from...",
"archetype": "Creative Explorer",
"nudges": [
{ "text": "Turn one of your ideas into a small weekly habit." }
]
},
"userTraits": {},
"user": {
"userId": "user_123"
},
"connectedPlatforms": ["reddit", "youtube"],
"metadata": {
"format": "standard",
"traitsMode": "standard",
"retrievedAt": "2026-03-19T12:00:00.000Z",
"platformCount": 2,
"topicFilter": null,
"topicFilterApplied": false
}
}
Optional extras on this response family include domainFilter and llmData.
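A defensive way to consume this family is to read only the fields you need and tolerate absences. The sketch below uses only fields shown in the example response; the `summarizeTraits` helper name is illustrative.

```javascript
// Reduce a traits/profile response to the bits a UI typically needs.
// Every field access is guarded because not all responses carry all fields.
function summarizeTraits(res) {
  const traits = res.traits || {};
  // Sort positive traits by score, highest first.
  const positives = Object.entries(traits.positive_traits || {})
    .sort((a, b) => b[1] - a[1])
    .map(([name, score]) => ({ name, score }));
  return {
    archetype: traits.archetype || null,
    topTrait: positives.length ? positives[0].name : null,
    platformCount: (res.connectedPlatforms || []).length,
  };
}
```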
Inference Response
This is the response family used when you send Input for user preference scoring. The scores come from the MIND1 model.
{
"InferenceResult": {
"output": [
[[0.9998]],
[[0.0013]]
]
},
"domainFilter": {
"domain": "fashion",
"relevantCount": 2,
"method": "llm",
"applied": true
}
}
InferenceResult.output is returned in input order. Higher scores mean stronger predicted preference or engagement.
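Because the output preserves input order, you can zip scores back onto the items you sent. A minimal sketch, assuming the nested `[[score]]` shape shown in the example (the `pairScores` name is illustrative):

```javascript
// Pair each InferenceResult.output entry with the Input item it scores.
// Input keys enumerate in insertion order, matching the output order.
function pairScores(inputItems, inferenceResult) {
  const keys = Object.keys(inputItems);
  return inferenceResult.output.map((entry, i) => ({
    key: keys[i],
    text: inputItems[keys[i]].text,
    score: entry[0][0], // unwrap the nested [[score]] shape
  }));
}
```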
Combined Traits + Inference Response
This is the common combined response when the user approved both profile-style outputs and inference scoring.
{
"InferenceResult": {
"output": [
[[0.9998]]
]
},
"traits": {
"positive_traits": {
"Creative Problem Solving": 92
},
"traits_to_improve": {
"Routine Consistency": 32
},
"user_summary": "You are curious and builder-minded...",
"top_traits_explanation": "You repeatedly engage with...",
"archetype": "Creative Explorer",
"nudges": [
{ "text": "Use your strongest interests as the starting point for your next project." }
]
},
"userProfile": {
"user_summary": "You are curious and builder-minded...",
"top_traits_explanation": "You repeatedly engage with...",
"archetype": "Creative Explorer",
"nudges": [
{ "text": "Use your strongest interests as the starting point for your next project." }
]
},
"connectedPlatforms": ["youtube", "linkedin"]
}
Combined Training / Cached Results Response
When the returned API URL is serving training-plus-results style data, the response can include cached inference history and richer training metadata.
{
"success": true,
"userProfile": {
"user_summary": "You are curious and builder-minded...",
"top_traits_explanation": "You repeatedly engage with...",
"archetype": "Creative Explorer",
"nudges": []
},
"trainingResults": {
"traits": {
"positive_traits": {
"Creative Problem Solving": 92
}
},
"trainingCompleted": true,
"lastTrainingDate": "2026-03-18T09:30:00.000Z"
},
"inferenceResults": {
"hasInferenceResults": true,
"latestResults": {
"output": [
[[0.9981]]
]
},
"allResults": [],
"totalInferences": 3
},
"connectedPlatforms": ["youtube", "chatgpt"],
"metadata": {
"platformCount": 2,
"combinedDataAvailable": {
"traits": true,
"inference": true,
"model": true,
"llmData": true
}
}
}
Some responses also include user, trainingResults, inferenceResults, domainFilter, or llmData. Build against the fields your product actually uses instead of assuming every response has every field.
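One practical pattern is to classify the response family by probing for the stable fields above before routing it to your handlers. A sketch under the shapes documented on this page; the family labels are my own, not API values.

```javascript
// Classify a Persona response by which stable fields are present,
// rather than assuming one fixed envelope.
function classifyResponse(res) {
  const hasTraits = Boolean(res.traits || res.userProfile);
  const hasInference = Boolean(res.InferenceResult && res.InferenceResult.output);
  if (hasTraits && hasInference) return 'combined';
  if (hasInference) return 'inference';
  if (res.trainingResults || res.inferenceResults) return 'training-cached';
  if (hasTraits) return 'traits';
  return 'unknown';
}
```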
4. Optional llmData Shape
When the user approved raw-memory access and your request includes includeLlmData, the response may include an llmData object.
{
"llmData": {
"hasLlmData": true,
"totalInteractions": 24,
"lastInteraction": "2026-03-18T18:42:00.000Z",
"platforms": {
"chatgpt": 18,
"claude": 6
},
"recentInteractions": [
{
"platform": "chatgpt",
"conversationId": "conv_123",
"dataType": "conversation",
"summary": "Conversation data",
"messageCount": 12,
"createdAt": "2026-03-18T18:42:00.000Z",
"isEncrypted": true
}
],
"accessLevel": "metadata_only"
}
}
Depending on authorization and route, llmData may be a summary-only object or a fuller payload with decrypted conversation content.
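Since the payload can be summary-only or fuller, read it defensively. The sketch below uses only fields from the example above; treat any `accessLevel` value other than `metadata_only` as an assumption until you observe it.

```javascript
// Extract a safe summary from an optional llmData object.
// Returns null when raw-memory data was not approved or not present.
function llmDataSummary(res) {
  const d = res.llmData;
  if (!d || !d.hasLlmData) return null;
  return {
    total: d.totalInteractions,
    metadataOnly: d.accessLevel === 'metadata_only',
    platforms: Object.keys(d.platforms || {}),
  };
}
```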
5. Common Errors
| Status | What it usually means |
|---|---|
| 200 | Success. Response shape depends on the returned apiUrl. |
| 400 | Invalid request shape, unauthorized inference size, or missing PIN for an encrypted-model path. |
| 401 | Invalid token or the token could not be associated with a user. |
| 403 | Domain registration or authorization failed during handoff. |
| 404 | User or approved data could not be found. |
| 405 | Wrong HTTP method for a handoff endpoint that only accepts POST. |
| 413 | Your Input payload exceeded the authorized inference size. |
| 500 | Model download, decryption, inference, or response assembly failed. |
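The table above suggests a coarse client-side routing of statuses. The action names below are illustrative app-side labels, not API values:

```javascript
// Map common HTTP statuses from the returned apiUrl to a next action.
function handleStatus(status) {
  if (status === 200) return 'ok';             // parse the response family
  if (status === 401) return 'reauthorize';    // token invalid or not linked to a user
  if (status === 400 || status === 405 || status === 413) return 'fix-request'; // bad shape, method, or size
  if (status === 403 || status === 404) return 'check-approvals'; // consent or user data missing
  return status >= 500 ? 'retry-later' : 'unhandled';
}
```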
Integration Rules
- Treat `apiUrl` as opaque and use it directly.
- Use `token` as a short-lived bearer token for that specific response flow.
- Send `Input` only when you are requesting inference scoring.
- Expect traits/profile responses, inference responses, or combined responses depending on approvals.
- Look for stable fields such as `traits`, `userProfile`, `InferenceResult.output`, `connectedPlatforms`, and `metadata`.
Use Web Integration, React Native Integration, and Inference API together. The platform guide tells you how to obtain the handoff response; this page tells you how to reason about the payloads that come back.