Behavioral signals from your own meetings — voice, gaze, facial dynamics, speech timing — captured locally on your Mac and grounded in 47 peer-reviewed studies.
Universal binary — Apple Silicon and Intel · macOS 26+. Extension works with Chrome, Brave, Arc, Edge, Opera.
What Cadence sees
Pitch contour, jitter, shimmer, speaking rate, pause distribution.
Fixation duration, saccade frequency, time looking toward the camera.
FACS-12 action-unit intensities, expression entropy, head-movement RMS.
Turn-taking, latency to respond, balance of who is talking when.
Vocabulary diversity, repetition, sentence structure — never the words themselves.
The same 31 indicators extracted week after week, so trends are comparable.
Where it captures
Native macOS apps · no extension needed
Browser meetings · Cadence Bridge
LMS-embedded (iframe) · Cadence Bridge
Cadence Bridge is a small Chrome extension that solves one problem: meetings hosted inside browser iframes are invisible to macOS's window-level detection. The native app sees the parent Canvas URL — not the BigBlueButton call running inside it.
The Bridge hooks getUserMedia and RTCPeerConnection in the page world via an MV3 main-world content script, then forwards lifecycle events (candidate / connected / closed) to the Cadence app over a local Native Messaging channel. It never captures media. It never sends anything to a server. The Cadence desktop app does the actual capture, locally, with the same consent flow as native meetings.
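A minimal sketch of what such a main-world hook could look like. All names here (`installHooks`, `report`, the event shapes) are illustrative assumptions, not Cadence's actual source; in the real extension the callback would relay events to the desktop app over Native Messaging rather than collect them in-page.

```javascript
// Hypothetical sketch of a Bridge-style hook. It wraps getUserMedia and
// RTCPeerConnection so lifecycle events -- never media -- reach a
// reporting callback.
function installHooks(pageWindow, report) {
  const md = pageWindow.navigator.mediaDevices;
  const origGUM = md.getUserMedia.bind(md);
  md.getUserMedia = async (constraints) => {
    report({ type: "candidate" }); // the page asked for mic/camera
    return origGUM(constraints);   // media flows through untouched
  };

  const OrigPC = pageWindow.RTCPeerConnection;
  pageWindow.RTCPeerConnection = function (...args) {
    const pc = new OrigPC(...args);
    pc.addEventListener("connectionstatechange", () => {
      if (pc.connectionState === "connected") report({ type: "connected" });
      if (pc.connectionState === "closed") report({ type: "closed" });
    });
    return pc;
  };
  pageWindow.RTCPeerConnection.prototype = OrigPC.prototype;
}
```

Because the wrapper only observes call lifecycle, the page's media streams are returned unmodified, which is consistent with the "never captures media" guarantee above.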
How it reasons
Cadence fans the feature stream out to a panel of agents, each focused on a single signal. Their outputs are reconciled by a final Synthesizer pass, with explicit calibration anchors and citation traces attached to every indicator that surfaces in the dashboard.
AcousticAgent · F0, jitter, shimmer, pause structure.
GazeAgent · Fixation, saccade, camera-attention.
LinguisticAgent · Lexical diversity, disfluency, MLU.
TemporalAgent · Turn-taking, response latency.
BaselineAgent · Compares to your own calibration.
CitationAgent · Pulls supporting studies from the library.
ClinicalFramingAgent · Keeps language correlational, never diagnostic.
Synthesizer · Reconciles the panel into the indicator set the dashboard renders, with citations + repeatability chips attached.
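The fan-out-and-reconcile shape described above can be sketched as a pure merge over per-agent outputs. Every field and function name here is an assumption for illustration, not Cadence's actual schema; the real Synthesizer also attaches calibration anchors and citation traces.

```javascript
// Illustrative only: merge per-agent indicator outputs with citation
// traces and repeatability scores into dashboard-ready records.
// Assumed shapes: agentOutputs = [{ indicator, value, baselineDelta }],
// citations = { indicator: [doi, ...] }, repeatability = { indicator: icc }.
function synthesize(agentOutputs, citations, repeatability) {
  return agentOutputs.map(({ indicator, value, baselineDelta }) => ({
    indicator,
    value,
    baselineDelta,
    citations: citations[indicator] ?? [],
    // chip an indicator when its repeatability falls below 0.75
    lowRepeatability: (repeatability[indicator] ?? 1) < 0.75,
  }));
}
```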
Grounded in literature
47 · peer-reviewed studies seed the on-device citation library on first launch
31 · indicators in docs/METRICS.md, each tied to its strongest support
∞ · live retrieval via OpenAlex — DOIs resolve to abstracts the CitationAgent quotes from
When the CitationAgent surfaces an indicator, it doesn't paraphrase — it pulls the actual abstract from the local library (seeded with 47 studies, extensible via OpenAlex), quotes the relevant claim, and links the DOI in the dashboard. Admins can upload their own PDFs to the library for closed-access work.
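One concrete detail worth knowing about the OpenAlex path: OpenAlex serves abstracts as an inverted index (word → positions) rather than plain text, so any client that quotes from an abstract must first rebuild it. The reconstruction below is a standard transform of OpenAlex's documented `abstract_inverted_index` field; how Cadence's CitationAgent wraps it is an assumption.

```javascript
// Rebuild an abstract from an OpenAlex abstract_inverted_index, e.g. from
// GET https://api.openalex.org/works/doi:<DOI> -> abstract_inverted_index.
function abstractFromInvertedIndex(inverted) {
  const words = [];
  for (const [word, positions] of Object.entries(inverted)) {
    for (const pos of positions) words[pos] = word; // place each word
  }
  return words.join(" ");
}
```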
Repeatability is gated separately. Indicators whose worst-case driver falls below the Fleiss 0.75 clinical-decision threshold (per Stegmann 2020 ICC + WSCV priors) get a "low repeatability" chip in the dashboard — they're not suppressed, but they're marked so reviewers see them in context. The repeatability harness ships with the app for local recompute.
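The gating rule as stated reduces to a worst-case check over an indicator's driving features. A minimal sketch, assuming a hypothetical `drivers` shape (the ICC values per driver would come from the repeatability harness):

```javascript
// Sketch of the chip rule: flag an indicator when the worst ICC among
// its drivers falls below the 0.75 clinical-decision threshold.
// drivers = [{ name, icc }]; field names are assumptions.
function repeatabilityChip(drivers) {
  const worst = Math.min(...drivers.map((d) => d.icc));
  return worst < 0.75 ? "low repeatability" : null;
}
```

Note the indicator is chipped, not suppressed, matching the behavior described above.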
Where it lives
Cadence captures and processes meetings locally, with macOS permission prompts you can revoke at any time. The only thing that ever ships to a server — and only if you opt in — is a small numeric fingerprint of the features above. No recordings. No transcripts. No third-party SDKs or analytics.
Cadence is not a diagnostic device. Indicators are correlational with published research findings, not predictions about any individual. Built for opt-in deployment with diagnosed families, consenting adults, and supervised clinical / research use.