Intel and Weizmann Institute Speed AI with Speculative Decoding Advance

A new speculative decoding method delivers up to 2.8 times faster LLM inference and works across models from different vendors, enabling vendor-agnostic AI. It is available in the Hugging Face Transformers library.

What’s New: At the International Conference on Machine Learning (ICML) in Vancouver, Canada, researchers from Intel Labs and the Weizmann Institute of Science introduced a major advance in speculative decoding. The new technique enables any small “draft” model to accelerate any large language model (LLM), regardless of differences between their vocabularies.

"We have solved a core inefficiency in generative AI. Our research shows how to turn speculative acceleration into a universal tool. This isn't just a theoretical improvement; these are practical tools that are already helping developers build faster and smarter applications today."
– Oren Pereg, senior researcher, Natural Language Processing Group, Intel Labs

About Speculative Decoding: Speculative decoding is an inference optimization technique designed to make LLMs faster and more efficient without compromising accuracy. It works by pairing a small, fast model with a larger, more accurate one, creating a “team effort” between models.

How Speculative Decoding Works: Consider the prompt for an AI model: “What is the capital of France…”

A traditional LLM generates each word step by step. It fully computes “Paris”, then “a”, then “famous”, then “city” and so on, consuming significant resources at each step. With speculative decoding, the small assistant model quickly drafts the full phrase “Paris, a famous city…” The large model then verifies the entire draft, accepting every token that matches what it would have produced itself and correcting only where the draft diverges. This dramatically reduces the compute cycles per output token.
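For readers who want to see the mechanics, the following is a minimal, self-contained Python sketch of the classic draft-and-verify loop (greedy case). It is illustrative only and is not the new algorithm from this research: the two “models” are toy functions that emit fixed word sequences, and every name in it is a placeholder.

    # Toy sketch of greedy speculative decoding. The "models" are stand-ins that
    # emit fixed continuations; a real implementation calls a small and a large LLM.

    TARGET_CONTINUATION = ["Paris", ",", "a", "famous", "city", "in", "Europe", "."]
    DRAFT_CONTINUATION = ["Paris", ",", "a", "famous", "town", "in", "Europe", "."]

    def target_next_token(tokens):
        """Large model stand-in: the authoritative next token (expensive per step)."""
        return TARGET_CONTINUATION[len(tokens)] if len(tokens) < len(TARGET_CONTINUATION) else None

    def draft_next_token(tokens):
        """Small model stand-in: a cheap guess at the next token."""
        return DRAFT_CONTINUATION[len(tokens)] if len(tokens) < len(DRAFT_CONTINUATION) else None

    def speculative_decode(k=3):
        generated = []
        while True:
            # 1) The small model drafts up to k tokens cheaply.
            draft = []
            for _ in range(k):
                tok = draft_next_token(generated + draft)
                if tok is None:
                    break
                draft.append(tok)
            if not draft:
                break
            # 2) The large model verifies the draft. In a real system all drafted
            #    positions are checked in one batched forward pass, which is where
            #    the savings come from; here they are checked one by one for clarity.
            accepted = []
            for tok in draft:
                expected = target_next_token(generated + accepted)
                if expected == tok:
                    accepted.append(tok)            # draft token accepted "for free"
                else:
                    if expected is not None:
                        accepted.append(expected)   # correct the first mismatch
                    break
            else:
                # Every drafted token was accepted, so the verification step also
                # yields one extra token from the large model at no additional cost.
                bonus = target_next_token(generated + accepted)
                if bonus is not None:
                    accepted.append(bonus)
            if not accepted:
                break
            generated.extend(accepted)
            if generated[-1] == ".":
                break
        return generated

    print(" ".join(speculative_decode()))  # Paris , a famous city in Europe .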

Why It Matters: This universal method from Intel and the Weizmann Institute removes the requirement for shared vocabularies or co-trained model families, making speculative decoding practical across heterogeneous models. It delivers up to 2.8x faster inference with no loss of output quality.1 Because it works across models from different developers and ecosystems, the method is vendor-agnostic, and it is available as open source through its integration with the Hugging Face Transformers library.

In a fragmented AI landscape, this speculative decoding breakthrough promotes openness, interoperability and cost-effective deployment from cloud to edge. Developers, enterprises and researchers can now mix and match models to suit their performance needs and hardware constraints.

“This work removes a major technical barrier to making generative AI faster and cheaper,” said Nadav Timor, Ph.D. student in the research group of Prof. David Harel at the Weizmann Institute. “Our algorithms unlock state-of-the-art speedups that were previously available only to organizations that train their own small draft models.”

The Technical Details: The research paper introduces three new algorithms that decouple speculative decoding from vocabulary alignment. This opens the door to flexible LLM deployment, letting developers pair any small draft model with any large model to optimize inference speed and cost across platforms.
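The paper’s three algorithms are not reproduced here, but one intuition behind removing the vocabulary constraint can be sketched: when the draft and target models use different tokenizers, the draft can be carried through plain text, so the target model verifies candidate tokens expressed in its own vocabulary. The snippet below is a hedged illustration of that idea only; the tokenizer arguments are assumed to be Hugging Face-style tokenizer objects.

    # Illustrative helper only, not the paper's algorithms: bridge two mismatched
    # vocabularies by routing the draft through text. `draft_tokenizer` and
    # `target_tokenizer` are assumed to be Hugging Face-style tokenizer objects.

    def retokenize_draft(draft_token_ids, draft_tokenizer, target_tokenizer):
        """Re-express a draft model's token IDs in the target model's vocabulary."""
        draft_text = draft_tokenizer.decode(draft_token_ids, skip_special_tokens=True)
        # The target model can now verify the candidate tokens exactly as in
        # standard speculative decoding, because they live in its own vocabulary.
        return target_tokenizer.encode(draft_text, add_special_tokens=False)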

Where Theory Meets Deployment: The research isn’t just theoretical. The algorithms are already integrated into the Hugging Face Transformers open source library used by millions of developers. With this integration, advanced LLM acceleration is available out of the box, with no need for custom code.
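As a usage sketch, assisted generation in Transformers is driven through the generate() call by passing an assistant model; when the two models use different tokenizers, the tokenizer and assistant_tokenizer arguments are supplied as well. The checkpoint names below are examples only (substitute any pair of models you have access to), and the exact keyword arguments should be confirmed against the Transformers documentation for your installed version.

    # Sketch of cross-vocabulary assisted generation with Hugging Face Transformers.
    # Checkpoint names are examples; verify the keyword arguments against the
    # Transformers documentation for your installed version.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    target_ckpt = "meta-llama/Llama-3.1-8B-Instruct"   # large target model (example)
    draft_ckpt = "Qwen/Qwen2.5-0.5B-Instruct"          # small draft model (example)

    tokenizer = AutoTokenizer.from_pretrained(target_ckpt)
    assistant_tokenizer = AutoTokenizer.from_pretrained(draft_ckpt)
    model = AutoModelForCausalLM.from_pretrained(target_ckpt)
    assistant_model = AutoModelForCausalLM.from_pretrained(draft_ckpt)

    inputs = tokenizer("What is the capital of France?", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        assistant_model=assistant_model,          # enables speculative (assisted) decoding
        tokenizer=tokenizer,                      # needed when the vocabularies differ
        assistant_tokenizer=assistant_tokenizer,
        max_new_tokens=50,
    )
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])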

More Context: Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies (Intel Labs and the Weizmann Institute of Science Research Paper)

The Small Print:

1 Timor, N., Mamou, J., Korat, D., Berchansky, M., Pereg, O., Wasserblat, M., Jain, G., and Harel, D. Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies. In International Conference on Machine Learning, 2025.