Inference API
How to call the returned API URL yourself and understand the response you get back.
You only need this page if you want to make the API request yourself. If you leave `autoFetch` enabled, the SDK makes the request for you and places the result in `result.apiResponse`.
Manual request flow
- The SDK finishes the consent flow.
- Your `onComplete` callback receives `result.token` and `result.apiUrl`.
- Your app sends a `POST` request to that `apiUrl` with the bearer token.
```javascript
const handleComplete = async (result) => {
  const response = await fetch(result.apiUrl, {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + result.token,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      Input: {
        item1: {
          text: 'Product description',
          category: 'fashion'
        }
      }
    })
  });

  const data = await response.json();
  console.log(data);
};
```
Access tokens are short-lived. They currently expire after 1 hour and only work for the approved request.
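Because tokens expire after an hour, it can help to track when a token was issued and re-run the consent flow before firing a request with a stale one. A minimal sketch (the helper name and the one-minute safety margin are ours, not part of the SDK):

```javascript
// Tokens expire after 1 hour; treat them as stale slightly early
// so a request never races the expiry deadline.
const TOKEN_LIFETIME_MS = 60 * 60 * 1000;

function isTokenFresh(issuedAtMs, nowMs = Date.now()) {
  const safetyMarginMs = 60 * 1000; // illustrative margin, tune as needed
  return nowMs - issuedAtMs < TOKEN_LIFETIME_MS - safetyMarginMs;
}
```

If `isTokenFresh` returns `false`, restart the consent flow to obtain a new token rather than retrying with the old one.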
Request body
If the returned API URL supports inference, send your items inside Input.
| Field | Type | Required | Description |
|---|---|---|---|
| `Input` | Object | Only for inference requests | The items you want the model to score |
| `includeLlmData` | Boolean | No | Include LLM summary data when the user approved that access |
| `topicFilter` or `domain` | String | No | Filter returned traits to a topic or domain |
```javascript
const response = await fetch(apiUrl, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ' + token
  },
  body: JSON.stringify({
    Input: {
      item1: { text: 'Product description', category: 'fashion' },
      item2: { text: 'Another item', category: 'lifestyle', img_url: 'https://...' }
    }
  })
});

const data = await response.json();
```
What response should you expect?
Most integrations only need to handle three response patterns. Your app should look for the fields it needs instead of depending on one fixed response shape.
| Response pattern | When you will see it | Main fields to use |
|---|---|---|
| Traits profile | Your app asks for profile and personalization signals | traits, userTraits, connectedPlatforms, metadata |
| Inference scores | Your app sends inferenceData for content scoring | InferenceResult.output |
| Combined response | Your app needs both profile data and scoring results | InferenceResult, traits, connectedPlatforms, sometimes user and metadata |
Write your app against the fields you use, such as traits or InferenceResult.output, instead of assuming one fixed response path.
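As a minimal sketch of that defensive pattern, a small helper can pull out just the fields your app uses and tolerate whichever response shape arrives (the `extractSignals` helper is ours, not part of the SDK):

```javascript
// Read only the fields this app needs; any of them may be absent
// depending on which response pattern the API returns.
function extractSignals(data) {
  return {
    traits: data.traits ?? data.userTraits ?? null,
    scores: data.InferenceResult?.output ?? null,
    platforms: data.connectedPlatforms ?? []
  };
}
```

With this shape, code downstream checks `scores !== null` or `traits !== null` instead of branching on which of the three response patterns it received.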
Response examples
Traits only
```json
{
  "success": true,
  "traits": {
    "positive_traits": {
      "Creative Problem Solving": 92,
      "Community Engagement": 85
    },
    "traits_to_improve": {
      "Routine Consistency": 32
    },
    "user_summary": "You are curious and idea-driven...",
    "top_traits_explanation": "Your strongest signals came from...",
    "archetype": "Creative Explorer",
    "nudges": [
      { "text": "Turn one of your ideas into a small weekly habit." }
    ]
  },
  "connectedPlatforms": ["reddit", "youtube"],
  "metadata": {
    "format": "standard"
  }
}
```
Inference only
```json
{
  "InferenceResult": {
    "output": [
      [[0.9998]],
      [[0.0013]]
    ]
  }
}
```
| Score | Meaning |
|---|---|
| 0.9+ | User will highly engage with / like this content |
| 0.5-0.9 | Moderate positive response |
| <0.5 | User likely won't engage |
These scores come from the MIND1 model, which is the per-user preference model trained on the user's positive and negative interaction data.
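If you want to turn raw scores into coarse labels, the thresholds from the table above can be applied directly. A hedged sketch (the function and label names are illustrative, not part of the API):

```javascript
// Map a MIND1 preference score to a coarse engagement label,
// using the thresholds from the score table above.
function engagementLabel(score) {
  if (score >= 0.9) return 'high';
  if (score >= 0.5) return 'moderate';
  return 'low';
}
```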
Combined traits and inference
```json
{
  "InferenceResult": {
    "output": [
      [[0.9998]]
    ]
  },
  "traits": {
    "positive_traits": {
      "Creative Problem Solving": 92
    },
    "traits_to_improve": {
      "Routine Consistency": 32
    },
    "user_summary": "You are curious and builder-minded...",
    "top_traits_explanation": "You repeatedly engage with...",
    "archetype": "Creative Explorer",
    "nudges": [
      { "text": "Use your strongest interests as the starting point for your next project." }
    ]
  }
}
```
The trait generator internally produces a `DataAnalysis` structure, but the public response payload reshapes that output. When you build against the API, follow the response you actually receive.
See Example Usage for practical ways to use traits, archetypes, nudges, and inference scores in your product.