Machine Learning

Tutorials

Using MediaPipe LLM Inference API in an Android App

What Is MediaPipe LLM Inference? If you’re building an Android app that needs to run large language models (LLMs) on-device, MediaPipe LLM Inference API is one of the most accessible ways to get there. It handles model loading, quantisation, and hardware acceleration so you can focus on the experience — no cloud dependency, no server […]
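For a sense of how small the surface area is, here is a minimal Kotlin sketch against the MediaPipe Tasks GenAI API, assuming the com.google.mediapipe:tasks-genai dependency is on the classpath; the model path and token limit are placeholder assumptions, and the .task model file must already be present on the device.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: runs a single prompt through an on-device LLM.
// The path and maxTokens value below are assumptions for illustration.
fun runLocalLlm(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.task") // hypothetical location of the model file
        .setMaxTokens(512)                              // combined budget for input + output tokens
        .build()

    // createFromOptions loads the model; generateResponse blocks until the
    // full completion is ready, so call this off the main thread.
    val llm = LlmInference.createFromOptions(context, options)
    val response = llm.generateResponse(prompt)
    llm.close() // release the model's native resources
    return response
}
```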

Tutorials

Running Gemini Nano On-Device: Your First Android AI Feature Without a Server

Why On-Device AI Matters for Android Developers Every AI feature you ship today probably depends on a network call. The user types something, your app hits an API, waits for a response, and displays the result. It works, but it comes with latency, server costs, and the uncomfortable fact that your app is useless without […]
