NEMA®|xNN – Low-power “Vision-Inference-Accelerator”

NEMA®|xNN is a low-power “Vision-Inference-Accelerator” targeting edge devices, processing artificial intelligence (AI) convolutional neural network (CNN) workloads.

The architecture scales from single-core to multi-core and leverages real-time compression to move data efficiently between on-chip and off-chip memory. It provides 8-bit MAC operations, approximate computation, and data-reuse optimizations, and includes memory-latency hiding capabilities.