
Liquid AI Announces Multi‑Year Partnership with Shopify to Bring Sub‑20ms Foundation Models to Core Commerce Experiences

CAMBRIDGE, Mass., Nov. 13, 2025 (GLOBE NEWSWIRE) -- Liquid AI today announced a multi‑faceted partnership with Shopify to license and deploy Liquid AI’s flagship Liquid Foundation Models (LFMs) across quality‑sensitive workflows on Shopify’s platform, including search and other multimodal use cases where quality and latency matter. The first production deployment is a sub‑20ms text model that enhances search. The agreement follows Shopify’s participation in Liquid AI’s $250 million Series A round in December 2024, and formalizes deep co‑development already underway between the companies.

As part of the partnership, Shopify and Liquid AI have co‑developed a generative recommender system built on a novel HSTU (Hierarchical Sequential Transduction Unit) architecture. In controlled testing, the model outperformed the prior recommendation stack, resulting in higher conversion rates from recommendations.
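For readers unfamiliar with generative recommendation, the minimal sketch below illustrates the general idea: a shopper's interaction history is read as a sequence and candidate items are scored by next‑item likelihood. It is a simplified stand‑in written in plain PyTorch, not Liquid AI's or Shopify's HSTU model; every name, size, and architectural choice here is an assumption for illustration only.

```python
# Illustrative sketch of generative (sequential) recommendation.
# NOT the Liquid AI / Shopify production model; sizes and structure are assumptions.

import torch
import torch.nn as nn

class TinySequentialRecommender(nn.Module):
    def __init__(self, num_items: int, dim: int = 64, heads: int = 4,
                 layers: int = 2, max_len: int = 50):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.pos_emb = nn.Embedding(max_len, dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len) item ids from a shopper's recent session
        positions = torch.arange(history.size(1), device=history.device)
        x = self.item_emb(history) + self.pos_emb(positions)
        # causal mask so each position only attends to earlier interactions
        mask = nn.Transformer.generate_square_subsequent_mask(history.size(1)).to(history.device)
        h = self.encoder(x, mask=mask)
        # use the final position as the "next item" query and score every catalog item
        return h[:, -1, :] @ self.item_emb.weight.T  # (batch, num_items) logits

# Example: recommend top-5 items for one session of 6 interactions
model = TinySequentialRecommender(num_items=10_000)
session = torch.randint(0, 10_000, (1, 6))
print(model(session).topk(5).indices)
```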

Ramin Hasani, Liquid AI CEO:
“Recommendation is the backbone of decision‑making in finance, healthcare, and e‑commerce. To be useful in the real world, models must be reliable, efficient, and fast. Shopify has been an ideal partner to validate that at scale. We’re excited to bring Liquid Foundation Models to millions of shoppers and merchants and to show how efficient ML translates into measurable value in everyday experiences.”

Liquid’s LFMs are designed for sub‑20 millisecond, multimodal, quality‑preserving inference. On specific production‑like tasks, LFMs with ~50% fewer parameters have outperformed popular open‑source models such as Qwen3, Gemma 3, and Llama 3, while delivering 2–10× faster inference, enabling real‑time shopping experiences at platform scale.
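As an illustration of what a sub‑20 ms budget means in practice, the sketch below measures median and tail latency for an arbitrary model callable. The `run_inference` function is a placeholder, not a Liquid AI or Shopify API, and the 20 ms threshold simply mirrors the figure quoted above.

```python
# Illustrative latency-budget check for a model served behind a Python callable.
# run_inference is a placeholder; replace it with a real model or service call.

import time
import statistics

def run_inference(query: str) -> str:
    # Stand-in for a real model call (e.g., an HTTP or gRPC request).
    time.sleep(0.005)  # simulate ~5 ms of work
    return f"results for {query!r}"

def measure_latency_ms(queries, warmup: int = 10):
    # Warm up so the measurement reflects steady-state serving, not cold starts.
    for q in queries[:warmup]:
        run_inference(q)
    samples = []
    for q in queries:
        start = time.perf_counter()
        run_inference(q)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    p50 = statistics.median(samples)
    p99 = samples[int(0.99 * (len(samples) - 1))]
    return p50, p99

p50, p99 = measure_latency_ms([f"query {i}" for i in range(200)])
print(f"p50={p50:.1f} ms  p99={p99:.1f} ms  within 20 ms budget: {p99 < 20.0}")
```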

Mikhail Parakhin, Shopify CTO:
“I’ve seen a lot of models. No one else is delivering sub‑20ms inference on real workloads like this. Liquid’s architecture is efficient without sacrificing quality; in some use cases, a model with ~50% fewer parameters beats Alibaba’s Qwen and Google’s Gemma, and still runs 2–10× faster. That’s what it takes to power interactive commerce at scale.”

Mathias Lechner, Liquid AI CTO:
“We design Liquid Foundation Models with an intertwined objective function that maximizes quality while making the system the fastest on the market on the hardware of choice. This makes them a natural fit for e‑commerce applications such as personalized ranking, retrieval‑augmented generation, and session‑aware recommendations, all under the tight latency and cost budgets needed to deliver the best user experience. In Shopify’s environment, we’ve focused on production robustness, from low‑variance tail latency to safety and drift monitoring.”

The partnership includes a multi‑purpose license for LFMs across low‑latency, quality‑sensitive Shopify workloads, ongoing R&D collaboration, and a shared roadmap. While today’s deployment is a sub‑20ms text model for search, the companies are evaluating multimodal models for additional products and use cases, including customer profiles, agents, and product classification. Financial terms are not disclosed.

About Liquid AI
Liquid AI builds Liquid Foundation Models (LFMs): multimodal, efficient models engineered for real‑time, reliability‑critical applications. Founded by researchers behind liquid neural networks, Liquid AI focuses on low‑latency inference, resource efficiency, and production‑grade safety.

Press Contact
Rachel Gordon
rg@liquid.ai

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/1a6bcd9c-934a-4db5-9dcd-f64d86f82fe6

