HYBRID VISION
Hybrid Vision Sensing (HVS) Technology
EVS vs. APS

EVS — Event-based Vision Sensors (EVS)
Imaging Principle: Using an in-pixel differential architecture and EvMo technology based on a change-driven perception principle, each pixel operates independently, generating an event signal only when the perceived brightness change exceeds a threshold. Each event carries x/y coordinates, polarity, and a timestamp.
Response Speed: Microsecond-level response, 120 dB or greater dynamic range, and extremely high temporal resolution.
Scenarios: Rapid motion analysis, instantaneous recognition, and target detection under complex lighting conditions.
Features: EVS inherently provides low latency, high dynamic range, high temporal resolution, and low power consumption.

APS — Traditional Image Sensors (APS)
Imaging Principle: Driven by a fixed frame rate and limited by exposure time, the entire image is sampled and output at fixed intervals.
Response Speed: Lower frame rates, typically tens of frames per second.
Scenarios: Static imaging.
Features: While suitable for static imaging, this architecture often leads to motion blur, latency, and data redundancy in high-speed motion or complex lighting scenarios, creating system response bottlenecks.
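The change-driven principle above can be illustrated with a minimal sketch: each pixel keeps a log-brightness reference and fires an event (with x/y coordinates, polarity, and a timestamp) only when the change exceeds a contrast threshold. This is a generic model of event-based sensing, not AlpsenTek's actual circuit or firmware; the `threshold` value and function names are illustrative assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 = brightness increase, -1 = decrease
    timestamp: int  # microseconds

def detect_events(prev_log, frame, t_us, threshold=0.2):
    """Emit an event for each pixel whose log-brightness change exceeds
    the contrast threshold; the firing pixel's reference is then reset.
    prev_log is mutated in place, mimicking per-pixel state in the sensor."""
    events = []
    for y, row in enumerate(frame):
        for x, lum in enumerate(row):
            log_lum = math.log(lum + 1e-6)
            diff = log_lum - prev_log[y][x]
            if abs(diff) >= threshold:
                events.append(Event(x, y, 1 if diff > 0 else -1, t_us))
                prev_log[y][x] = log_lum  # reset reference after firing
    return events
```

Because only changed pixels produce output, a mostly static scene yields almost no data, which is the source of the low-latency, low-bandwidth behavior described above.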
Core Technologies
Our HVS (Hybrid Vision Sensor) technology employs a pixel-level event-image hybrid design that integrates both image sampling and event detection circuits within the same pixel array. Leveraging in-pixel sensing-storage-computation integration, architectures such as iampCOMB, GESP, and IN-PULSE DiADC, and a time-interleaved readout mechanism (PixMU), the sensor supports optional output in image mode, event mode, or hybrid vision mode, balancing full-image integrity with dynamic response capability. Using BSI (back-side illumination) stacking and advanced inter-pixel isolation technology, it achieves both pixel miniaturization and high photoelectric performance.
Data Types and Compatibility
Image Output Support
RAW8/RAW10/RAW12 formats
Event Output Support
Full-frame: RAW8/RAW10/RAW12, customizable.
Sparse matrix format (x, y, p)
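The sparse (x, y, p) format above can be handled as a simple packed binary stream. The byte layout below (little-endian uint16 x, uint16 y, int8 polarity) is a hypothetical example for illustration; the actual AlpsenTek wire format is not specified here.

```python
import struct

# Assumed layout, NOT the documented AlpsenTek format:
# little-endian uint16 x, uint16 y, signed int8 polarity = 5 bytes/event
EVENT_FMT = "<HHb"
EVENT_SIZE = struct.calcsize(EVENT_FMT)

def decode_events(payload: bytes):
    """Unpack a sparse (x, y, p) event payload into a list of tuples."""
    return [struct.unpack_from(EVENT_FMT, payload, i * EVENT_SIZE)
            for i in range(len(payload) // EVENT_SIZE)]

def encode_events(events):
    """Pack (x, y, p) tuples back into the assumed binary layout."""
    return b"".join(struct.pack(EVENT_FMT, x, y, p) for x, y, p in events)
```

A round trip (`decode_events(encode_events(evts))`) returns the original tuples, which makes the assumed layout easy to validate against captured data.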
Complete terminal platform compatibility
The HVS data path is MIPI-compatible and supports algorithm stacks based on ISP/NPU/DVP/QSPI/LVDS interfaces.
AI-ISP enhancement algorithms including video frame interpolation (EVS-SlowMotion), deblurring, rolling shutter correction, motion/flicker-aware compensation, and super-resolution (Event Super-Resolution).
Lightweight models for human shape detection, doorway behavior classification, and privacy- and power-friendly monitoring using event background suppression and Always-On (AON) sensing.
Event-based optical flow (E-Flow), event-based SLAM, and temporal key point tracking for rapid localization and mapping in drones and robotics.
Core algorithmic models such as event-based trajectory detection (TrajectoryNet), fast occlusion recovery, and low-bandwidth remote monitoring.
Dedicated algorithm models including HVS object detection, dynamic ambient light adaptation (Event HDR Pipeline), and forward collision warning (FCW), among others.
Compatibility and Platform Adaptation
All APX-series HVS chips support standard image RAW data and EVS data output through separate virtual channels. They can interface directly with mainstream ISP pipelines or serve as event-only peripherals for NPU processing.
The chips are compatible with ARM Cortex-A/NPU platforms, X86 Linux, and Windows systems. A full suite of development tools is provided, including SDKs, event data decoding libraries, MIPI/USB drivers, and interfaces such as DVP or QSPI. They also support mainstream computer vision frameworks and machine learning inference platforms (e.g., ONNX, TensorRT).
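Feeding sparse events into the frame-based CV and ML frameworks mentioned above typically requires converting them into a dense representation first. A common approach, sketched here with illustrative (non-SDK) function names, is to accumulate events into per-polarity count channels:

```python
def events_to_frame(events, width, height):
    """Accumulate sparse (x, y, p) events into two per-pixel count
    channels (positive / negative polarity), a common dense
    representation for CNN-style inference on event data."""
    pos = [[0] * width for _ in range(height)]
    neg = [[0] * width for _ in range(height)]
    for x, y, p in events:
        (pos if p > 0 else neg)[y][x] += 1
    return pos, neg
```

The resulting two-channel frame can be stacked into a tensor and passed to any of the supported inference platforms (e.g., an ONNX model), keeping the event-specific handling confined to this one conversion step.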
System-Level Products and Services
AlpsenTek offers end-to-end products and services, from chips and sensor modules to evaluation kits (APX-EVK), synchronized optical systems, and algorithm demo reference designs.
Our products support customized output interfaces, low-power configuration profiles, and scenario-specific optimizations to accelerate customer development from proof-of-concept to mass production.
Our Products
Building a complete product chain to enable the efficient realization of intelligent sensing.
APX003
Hybrid vision sensor for mobile devices
APX002
Cost-efficient EVS sensor
APX004
Low-power hybrid vision sensor for AIoT and security