Authentic Reaction Intelligence: the science behind the data. Reactr measures real emotional response to video content using audio waveform analysis, facial sentiment detection, and geographic tagging.
Reactr captures authentic, unmanaged facial and audio reactions at the moment of first exposure to content. The participant is not in a lab. Not told they are being measured. Not prompted to react.
This produces genuine emotional response data that declared-preference research (focus groups, surveys, post-viewing questionnaires) simply cannot replicate. When someone knows they're being watched, they perform. Reactr captures the truth before the performance begins.
The result is authentic reaction data at scale: thousands of genuine first-impression responses, captured in the wild, tagged by geography, and analyzed across 7 sentiment categories.
Participants opt in by opening the Reactr link. In exchange, standout reactions may be featured on the brand's or studio's social channels, giving fans a genuine incentive to participate and creating a content pipeline for the brand at zero additional cost.
Reactr analyzes the audio component of every reaction using acoustic measurement techniques that go far beyond simple volume detection. The system performs audience sentiment analysis on each vocal response to extract meaningful emotional signal from raw audio.
This audio-first approach to emotional response measurement captures involuntary vocal reactions that participants cannot consciously suppress, producing data that is fundamentally more honest than any survey.
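Reactr's actual acoustic pipeline is proprietary, so as a rough illustration only, here is a minimal sketch of one building block the text implies: detecting short, involuntary vocal bursts (gasps, laughs, groans) by comparing each audio window's energy against an adaptive baseline. The window size, threshold, and smoothing factor are assumptions, not Reactr parameters.

```python
# Illustrative sketch only: not Reactr's pipeline. Flags short high-energy
# vocal bursts in a mono waveform by comparing per-window RMS energy
# against an exponentially smoothed running baseline.

def rms(window):
    """Root-mean-square energy of one window of samples."""
    return (sum(s * s for s in window) / len(window)) ** 0.5

def detect_bursts(samples, sample_rate=16000, window_ms=50, threshold=3.0):
    """Return start times (in seconds) of windows whose energy exceeds
    `threshold` times the running baseline energy."""
    win = max(1, sample_rate * window_ms // 1000)
    baseline, bursts = None, []
    for i in range(0, len(samples) - win + 1, win):
        e = rms(samples[i:i + win])
        if baseline is None:
            baseline = e          # first window seeds the baseline
        elif baseline > 0 and e > threshold * baseline:
            bursts.append(i / sample_rate)
        # Exponential moving average keeps the baseline adaptive.
        baseline = 0.9 * baseline + 0.1 * e
    return bursts
```

A production system would of course go well beyond energy thresholds (pitch, spectral features, classification into sentiment categories); this only shows why a sudden laugh is detectable against a quiet baseline.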
The front camera captures micro-expressions during content playback. Reactr's system maps facial movement to emotional states across the same 7 sentiment categories, cross-referenced with the audio signal for validation.
Facial sentiment detection adds a second independent signal layer to the analysis. When someone laughs (audio signal) and simultaneously displays a genuine Duchenne smile (facial signal), the system has high-confidence confirmation of a joy response. When signals diverge (say, a nervous laugh paired with a fear micro-expression), the system captures emotional complexity that single-signal analysis would miss entirely.
This dual-signal approach to emotional response measurement produces richer, more reliable data than either audio or facial analysis alone.
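The agreement/divergence logic described above can be sketched as follows. This is an assumed fusion rule, not Reactr's published method, and the category labels ("joy", "fear") stand in for whatever Reactr's seven sentiment categories actually are, which this document does not name.

```python
# Illustrative fusion sketch: category labels and weighting are assumptions.

def fuse(audio_scores, facial_scores, agree_bonus=0.2):
    """Combine two per-category confidence dicts (values in [0, 1]).

    Returns (dominant_category, confidence, diverged). `diverged` is True
    when audio and face point at different dominant emotions.
    """
    audio_top = max(audio_scores, key=audio_scores.get)
    facial_top = max(facial_scores, key=facial_scores.get)
    if audio_top == facial_top:
        # Agreement across independent signals -> boosted confidence.
        conf = min(1.0, (audio_scores[audio_top] + facial_scores[facial_top]) / 2 + agree_bonus)
        return audio_top, conf, False
    # Divergence (e.g. nervous laugh + fear face) is preserved, not discarded:
    # report the stronger signal at reduced confidence and flag the conflict.
    stronger = audio_top if audio_scores[audio_top] >= facial_scores[facial_top] else facial_top
    conf = max(audio_scores[audio_top], facial_scores[facial_top]) / 2
    return stronger, conf, True
```

The design point the text makes is the last branch: a single-signal system would collapse the nervous-laugh case into "joy", while a dual-signal system keeps the conflict as data.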
Every reaction is tagged by geography at the time of capture, enabling market-specific analysis that transforms raw emotional data into actionable geographic intelligence.
Which markets reacted strongest? Where did emotion peak earliest? Where did the content fall flat? Reactr's trailer reaction data answers these questions with geographic precision, enabling studios, brands, and distributors to optimize P&A spend by market.
For global campaigns, geographic emotional intelligence reveals which markets are ready to convert and which need a different creative approach. This is not social listening after the fact. It's authentic first-impression reaction data tagged to real locations at the moment of exposure.
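As a concrete (and entirely hypothetical) illustration of the per-market questions above, suppose each captured reaction carries a market tag, the time of its emotional peak, and an intensity score. A rollup answering "which markets reacted strongest, and where did emotion peak earliest" might look like:

```python
# Hypothetical schema: each reaction is (market, peak_time_s, intensity).
# Field names and scoring are illustrative, not Reactr's data model.
from collections import defaultdict

def market_report(reactions):
    """Aggregate geo-tagged reactions into per-market intelligence:
    reaction count, mean intensity, and earliest emotional peak."""
    by_market = defaultdict(list)
    for market, peak_time, intensity in reactions:
        by_market[market].append((peak_time, intensity))
    report = {}
    for market, rows in by_market.items():
        report[market] = {
            "reactions": len(rows),
            "mean_intensity": sum(i for _, i in rows) / len(rows),
            "earliest_peak_s": min(t for t, _ in rows),
        }
    # Rank markets strongest-first, the ordering a media planner would want.
    return dict(sorted(report.items(), key=lambda kv: -kv[1]["mean_intensity"]))
```

A flat mean intensity in one market and an early, sharp peak in another is exactly the kind of contrast the text argues should drive market-by-market P&A decisions.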
The entertainment and advertising industries have relied on two kinds of data for decades: declared behavior (focus groups, surveys, test screenings) and post-hoc response (social metrics, box office correlation, view counts).
Both are fundamentally flawed. Focus groups capture what people say they feel, not what they actually feel. Social metrics capture what people choose to broadcast publicly, not their genuine reaction. Neither captures the authentic first-impression response.
Reactr captures the moment that determines whether someone tells a friend, buys a ticket, streams the album, or forgets what they saw. That authentic, unperformed, involuntary response to first exposure is the most valuable signal in entertainment and advertising. And until now, no one was measuring it at scale.
This is the future of audience sentiment analysis: authentic reaction data, captured in the wild, analyzed across 7 sentiment categories, tagged by geography, and delivered via API.
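The document does not publish Reactr's API schema, so the payload below is an illustrative guess at what a delivered reaction record could contain: the dual signals, the geo tag, and a capture timestamp. Every field name here is an assumption.

```python
import json

# Hypothetical API payload: field names and sentiment labels are
# illustrative assumptions, not Reactr's published schema.
payload = json.loads("""{
  "reaction_id": "r_001",
  "captured_at": "2024-01-01T12:00:00Z",
  "geo": {"country": "US", "market": "Los Angeles"},
  "audio": {"dominant": "joy", "confidence": 0.91},
  "facial": {"dominant": "joy", "confidence": 0.84}
}""")

def dominant_emotion(record):
    """Return the dominant emotion only when both signals agree,
    mirroring the dual-signal validation described earlier."""
    a, f = record["audio"], record["facial"]
    return a["dominant"] if a["dominant"] == f["dominant"] else None

print(dominant_emotion(payload))  # prints "joy" when audio and face agree
```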
Traditional emotional intelligence platforms have built impressive dashboards around emotional response data. They offer second-by-second sentiment curves, facial coding reports, and brand association scores. The market clearly wants this intelligence.
But every one of these platforms shares the same fundamental constraint: the participant knows they are being measured.
They recruit an online panel. The participant opts in, opens a webcam survey, and watches an ad while their face is being recorded. They know they are evaluating content, and they respond accordingly. What these platforms capture is a managed, self-regulated response: the face a person makes when they think someone is watching.
Physicists have known for nearly a century that measuring a system disturbs it, the observer effect. The same principle applies to human emotion. The moment someone knows their reaction is being measured, they stop having a reaction and start performing one.
Reactr captures the moment before any of that kicks in. A person receives content through a normal social message. They tap to open it. In that instant, before the content registers, before they can perform, their authentic response is captured.
| Method | Participant Knows? | Data Type | Scale |
|---|---|---|---|
| Focus groups | Yes | Declared | 8–20 people |
| Panel-recruited webcam testing | Yes | Managed webcam | 150–300 per test |
| Social metrics | Yes | Post-hoc broadcast | Millions (noisy) |
| Reactr | No | Authentic first reaction | Unlimited, viral |
Reactr campaigns go out through existing social channels, collect authentic reactions from real audiences in real environments, and return geographic and demographic intelligence that panel-recruited studies cannot produce.
The question is not whether emotional response data matters. The market has already answered that. The question is whether the data is real. That is the only question Reactr answers differently.
Explore the Reactr platform or talk to our team about API integration.