High-efficiency floating-point neural network inference operators for mobile, server, and Web
Topics: cpu, neural-network, inference, multithreading, simd, matrix-multiplication, convolutional-neural-networks, inference-optimization, mobile-inference
Updated Nov 7, 2025 · C