How we detect vital signs from video
Vital Intelligence’s technology extracts data from video feeds to read and interpret subtle biometric signals and measure diagnostic information. Our core algorithm – developed by a world-renowned scientist, a machine learning researcher, and an electrical engineer with experience in biomedical instrumentation – analyzes that diagnostic data to deliver non-intrusive vital-sign information in real time.
How It Works
Vital Intelligence is a sophisticated set of algorithms and technologies, but the process begins with a simple RGB camera, which captures live video of individuals anonymously and without interference.
Most common cameras collect video in three layers: red, green, and blue.
Our technology deconstructs RGB video back into its individual layers for deeper analysis.
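As an illustrative sketch of that deconstruction step (assuming frames arrive as standard height × width × 3 NumPy arrays, the layout most Python video libraries produce; this is not Vital Intelligence's actual pipeline):

```python
import numpy as np

# Hypothetical stand-in for one captured RGB video frame (values 0-255).
frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Deconstruct the frame into its individual red, green, and blue layers,
# each a 2-D array ready for separate analysis.
red, green, blue = frame[..., 0], frame[..., 1], frame[..., 2]
```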
Vital Intelligence runs raw red, green, and blue video data through various algorithms to measure subtleties imperceptible to the human eye. All of this happens in real time.
These insights are presented in a digestible dashboard format, allowing you to learn and make better decisions while protecting the privacy of the individuals scanned.
The Technology Behind Our Accurate and Remote Vitals Measurements
Our algorithm measures heart rate by analyzing subtle color pulsations (blushing) in the face to determine how frequently blood is pumped.
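One common way to turn those color pulsations into a pulse frequency is a frequency-domain analysis of the green channel's average brightness over time. The sketch below is a minimal illustration of that general approach (the function name, the band limits, and the synthetic signal are assumptions for demonstration, not Vital Intelligence's algorithm):

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate pulse frequency (beats per minute) from a time series of
    per-frame mean green-channel intensity: take the dominant FFT peak
    inside a plausible human heart-rate band (0.7-4 Hz)."""
    signal = green_means - np.mean(green_means)      # remove the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)           # ~42-240 bpm
    peak = freqs[band][np.argmax(power[band])]
    return peak * 60.0                               # Hz -> beats per minute

# Synthetic 10-second recording at 30 fps with a 1.2 Hz (72 bpm) pulse.
fps = 30
t = np.arange(0, 10, 1 / fps)
green = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(round(estimate_heart_rate(green, fps)))  # → 72
```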
Video analysis measures movement in the shoulders and upper thorax to determine breathing rate.
Waveform analysis of the heart-rate signal identifies systolic and diastolic peaks and patterns to estimate blood pressure.
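Locating those two peaks in a pulse waveform is, at its simplest, a local-maxima search. The sketch below shows that idea on a synthetic single-cycle waveform modeled as two Gaussian bumps (a larger systolic peak and a smaller diastolic one); the shape and the detector are illustrative assumptions, and mapping such features to actual blood-pressure values requires calibration far beyond this example:

```python
import numpy as np

def local_maxima(signal):
    """Indices of simple local maxima (strictly greater than both neighbors)."""
    s = np.asarray(signal, dtype=float)
    return np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1

# Synthetic single pulse-wave cycle: a large systolic peak near t=0.25 s
# followed by a smaller diastolic (dicrotic) peak near t=0.6 s.
t = np.linspace(0, 1, 200)
wave = np.exp(-((t - 0.25) / 0.05) ** 2) + 0.4 * np.exp(-((t - 0.6) / 0.07) ** 2)

peaks = local_maxima(wave)
systolic, diastolic = peaks[0], peaks[1]
```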
Cameras observe the red-to-green spectrum signature in skin blushing to measure oxygen levels.
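Pulse-oximetry-style estimates commonly compare the pulsatile (AC) and baseline (DC) components of two color channels, a quantity often called the "ratio of ratios." The sketch below computes that quantity from synthetic red and green signals; the function, the signals, and the channel pairing are illustrative assumptions, and converting the ratio into an oxygen-saturation percentage requires empirical calibration not shown here:

```python
import numpy as np

def ratio_of_ratios(red, green):
    """Illustrative 'ratio of ratios': the pulsatile-to-baseline (AC/DC)
    ratio of the red channel divided by that of the green channel."""
    ac_dc_red = (np.max(red) - np.min(red)) / np.mean(red)
    ac_dc_green = (np.max(green) - np.min(green)) / np.mean(green)
    return ac_dc_red / ac_dc_green

# Synthetic per-frame mean channel intensities over 10 seconds.
t = np.arange(0, 10, 0.01)
red = 100 + np.sin(2 * np.pi * t)        # DC 100, peak-to-peak AC 2
green = 120 + 2 * np.sin(2 * np.pi * t)  # DC 120, peak-to-peak AC 4
print(round(ratio_of_ratios(red, green), 3))  # → 0.6
```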
Cameras locate blood vessels near the tear duct – a region supplied by a branch of the carotid artery – to obtain a reading that is much more reliable than a traditional measurement.