
Liquid AI Models via Hanzo AI

5 models available

Access all 5 Liquid AI models through Hanzo's OpenAI-compatible API. Single API key, unified billing, no rate limit juggling.

LiquidAI: LFM2-24B-A2B
Context: 33K tokens

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.

LiquidAI: LFM2.5-1.2B-Thinking (free)
Context: 33K tokens

LFM2.5-1.2B-Thinking is a lightweight reasoning-focused model optimized for agentic tasks, data extraction, and RAG, while still running comfortably on edge devices. It supports long context (up to 32K tokens) and is designed to provide higher-quality “thinking” responses in a small 1.2B model.

LiquidAI: LFM2.5-1.2B-Instruct (free)
Context: 33K tokens

LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B parameter footprint, with efficient edge inference and broad runtime support.

LiquidAI: LFM2-8B-A1B
Context: 33K tokens

LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low, making it ideal for phones, tablets, and laptops.

LiquidAI: LFM2-2.6B
Context: 33K tokens

LFM2 is a generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment, with an emphasis on quality, speed, and memory efficiency. LFM2-2.6B is the 2.6B-parameter member of the family.


Use Liquid AI models via Hanzo

One API key. Unified billing. OpenAI-compatible, so it works with existing OpenAI-compatible SDKs and tooling.
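Because the API is OpenAI-compatible, a chat completion request is just a standard `POST` to a `/chat/completions` endpoint with a bearer token. The sketch below builds such a request with only the Python standard library; the base URL and the model identifier are illustrative assumptions, not confirmed values, so check Hanzo's documentation for the real endpoint and model IDs.

```python
# Minimal sketch of an OpenAI-style chat completion request to Hanzo.
# ASSUMPTIONS: the base URL and model ID below are hypothetical examples,
# not verified values from Hanzo's documentation.
import json
import urllib.request

HANZO_BASE_URL = "https://api.hanzo.ai/v1"   # assumed endpoint
API_KEY = "YOUR_HANZO_API_KEY"               # placeholder key

# Standard OpenAI-compatible chat completion payload.
payload = {
    "model": "liquid/lfm2-24b-a2b",          # hypothetical model identifier
    "messages": [
        {"role": "user",
         "content": "Summarize the LFM2 family in one sentence."}
    ],
}

request = urllib.request.Request(
    f"{HANZO_BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# With a real key, send it and read the usual OpenAI-shaped response:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

The same request works through any OpenAI-compatible SDK by pointing its base URL at Hanzo instead of constructing the HTTP call by hand.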