Smart City

Video Analytics

Smart Homes

Security, Assistants

Drones

Sense, Avoid and Navigate

Medical Sensing Systems

Save Medical Manpower

Industrial

Predictive Maintenance, Inspection

Wearables

High Performance at Minimum Power

Cost Effective

Reduces the memory footprint of edge AI systems across TPU/FPGA/DSP/CPU platforms, so that complex AI algorithms (more than 70 million parameters) can run on these platforms.
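
As a rough illustration of the footprint claim above (the 70-million-parameter figure comes from the text; the bit widths shown are assumptions), weight storage scales linearly with bit width, so low-bit models leave far more room on a small device:

    # Rough weight-storage estimate for a 70M-parameter model at several bit widths.
    # Ignores activations, code size, and any compression beyond quantization.
    PARAMS = 70_000_000

    for bits in (32, 8, 4):
        megabytes = PARAMS * bits / 8 / 1e6
        print(f"{bits:>2}-bit weights: {megabytes:7.1f} MB")  # 280.0, 70.0, 35.0 MB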

High Accuracy

Enables low-bit (8-bit/4-bit) computation on edge AI systems to deliver the same quality of results as 32-bit floating-point computation on cloud servers.
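
As a minimal sketch of the idea behind low-bit computation, the snippet below applies simple symmetric post-training quantization to a weight tensor and measures the reconstruction error; DeepMentor's actual low-bit approach is proprietary and not shown, and all names here are illustrative:

    import numpy as np

    def quantize_symmetric(x: np.ndarray, bits: int = 8):
        """Map float values to signed low-bit integers with one per-tensor scale."""
        qmax = 2 ** (bits - 1) - 1                  # 127 for 8-bit, 7 for 4-bit
        scale = np.abs(x).max() / qmax
        q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)  # int8 holds bits <= 8
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    weights = np.random.randn(1000).astype(np.float32)
    q, scale = quantize_symmetric(weights, bits=8)
    recovered = dequantize(q, scale)
    rel_error = np.linalg.norm(weights - recovered) / np.linalg.norm(weights)
    print(f"relative reconstruction error: {rel_error:.4f}")  # small for well-behaved weights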

On-premises Services

Edge AI units work alongside cloud servers: only a small amount of data needs to be uploaded to the cloud for advanced analysis, which greatly reduces transmission risk and cost while data is processed in real time on site.
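
A sketch of this edge/cloud split, assuming a hypothetical setup in which the edge unit classifies every frame locally and forwards only low-confidence cases for deeper analysis (the endpoint, threshold, and helper names are illustrative, not part of DeepMentor's product):

    import json
    from urllib import request

    CLOUD_ENDPOINT = "https://example.com/advanced-analysis"  # placeholder URL
    UPLOAD_THRESHOLD = 0.6  # edge results below this confidence go to the cloud

    def run_edge_inference(frame) -> dict:
        """Stand-in for the on-device miniaturized model."""
        return {"label": "person", "confidence": 0.92}

    def process_frame(frame) -> dict:
        result = run_edge_inference(frame)          # real-time, on premises
        if result["confidence"] < UPLOAD_THRESHOLD:
            # Only ambiguous cases leave the premises, keeping bandwidth and risk low.
            body = json.dumps({"edge_result": result}).encode()
            req = request.Request(CLOUD_ENDPOINT, data=body,
                                  headers={"Content-Type": "application/json"})
            request.urlopen(req)
        return result

    print(process_frame(frame=None))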

Multi-AI Algorithm

Provides a flexible, reconfigurable hardware architecture on which multiple AI algorithms can run in parallel.
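
At the software level, running several algorithms side by side can be pictured as below; the two "models" are placeholder functions, and the reconfigurable hardware itself is not modeled:

    from concurrent.futures import ThreadPoolExecutor

    def detect_objects(frame):
        """Placeholder for one miniaturized model, e.g. object detection."""
        return {"task": "detection", "objects": []}

    def estimate_pose(frame):
        """Placeholder for a second model sharing the same platform."""
        return {"task": "pose", "keypoints": []}

    def analyze(frame):
        # Dispatch both algorithms concurrently; on the real platform each would
        # occupy its own region of the reconfigurable accelerator.
        with ThreadPoolExecutor(max_workers=2) as pool:
            futures = [pool.submit(fn, frame) for fn in (detect_objects, estimate_pose)]
            return [f.result() for f in futures]

    print(analyze(frame=None))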

Miniaturization Aware Tools (MAT)

  • Supports the TensorFlow and PyTorch frameworks and can miniaturize different AI algorithms.
  • Because an AI algorithm, much like a human brain, is very large and complicated, DeepMentor uses proprietary, patented low-bit computation approaches to miniaturize the whole algorithm, cutting data movement and computation by 90%, and then generates fast and accurate software intellectual property (C/C++/Verilog code) for customers (a rough sketch of such a flow follows this list).
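
MAT itself is proprietary, so the sketch below only approximates the kind of flow described above, using PyTorch's stock dynamic quantization as a stand-in for the low-bit miniaturization step; the model, layer choice, and export path are all assumptions:

    import torch
    import torch.nn as nn

    # Small stand-in network; a real workload would be a full detection or
    # sensing model with tens of millions of parameters.
    model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 10)).eval()

    # Stand-in for the miniaturization step: convert Linear weights to int8.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    # Check that accuracy is largely preserved on a sample input.
    x = torch.randn(1, 256)
    with torch.no_grad():
        ref, low_bit = model(x), quantized(x)
    print("max abs difference:", (ref - low_bit).abs().max().item())

    # A toolflow like MAT would go further and emit C/C++ or Verilog for the
    # quantized graph; TorchScript export is shown here only as a generic stand-in.
    torch.jit.save(torch.jit.script(quantized), "miniaturized_model.pt")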

Automatic Hardware Customization

  • Proprietary automatic AI system design tool that shortens the development process to 3 weeks.
  • 8-bit miniaturized models retain 99% of the quality of results of 32-bit floating-point computation.
  • Generated C/C++ source code for miniaturized AI models targets TPU/DSP/CPU platforms (a sketch of the arithmetic such a kernel performs follows this list).
  • Generated Verilog HDL (hardware description language) for miniaturized AI models targets FPGAs.
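
The generated C/C++ and Verilog are customer-specific and not reproduced here; as a sketch of the arithmetic such an 8-bit kernel performs, the NumPy version below runs a dense layer with int8 operands, 32-bit accumulation, and a final rescale, then compares it against the 32-bit floating-point reference (all names are illustrative):

    import numpy as np

    def quantize(t, qmax=127):
        """Symmetric per-tensor quantization to int8."""
        scale = np.abs(t).max() / qmax
        return np.clip(np.round(t / scale), -qmax, qmax).astype(np.int8), scale

    def int8_dense(x_q, w_q, x_scale, w_scale):
        """8-bit matrix multiply with 32-bit accumulation, then rescale to floats."""
        acc = x_q.astype(np.int32) @ w_q.astype(np.int32)   # integer MAC loop
        return acc.astype(np.float32) * (x_scale * w_scale)

    rng = np.random.default_rng(0)
    x = rng.standard_normal((1, 64)).astype(np.float32)
    w = rng.standard_normal((64, 32)).astype(np.float32)

    x_q, x_s = quantize(x)
    w_q, w_s = quantize(w)
    ref = x @ w                                              # 32-bit float reference
    approx = int8_dense(x_q, w_q, x_s, w_s)
    print("relative error:", np.linalg.norm(ref - approx) / np.linalg.norm(ref))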