Redefining AI for Edge Computing


“Our fundamental thesis is that the future of AI lies in smaller, domain-specific, on-device LLMs that operate at the Edge with zero inference costs for OEMs while keeping data private and secure.”


The north-star vision of EdgeLens AI is to build small, highly customized AI models for Edge devices with limited compute and power. By keeping processing fully on-device, these models ensure low latency and eliminate inference costs for device manufacturers.


Product offering

Ultra-efficient Small Language Models tailored for Edge devices, delivering the performance and accuracy needed for AI processing at the Edge.

Custom-trained models and an AI inference engine for consumer electronics, smart appliances, and automotive.

On-device search for information retrieval and NLP tasks.