
Speculative Streaming: Fast LLM Inference Without Auxiliary Models
Authors: Nikhil Bhendawade, Irina Belousova, Qichen Fu, Henry Mason, Mohammad Rastegari, Mahyar Najibi
This paper was accepted at the Efficient Natural Language and Speech Processing (ENLSP) Workshop at NeurIPS 2024.
Speculative decoding is a prominent technique for speeding up inference of a large target language model using predictions from an auxiliary draft model. While effective, in application-specific settings it often requires fine-tuning both the draft and target models to achieve high acceptance rates. As the number of downstream tasks grows, these draft models add significant complexity to inference systems. We propose Speculative Streaming, a single-model speculative decoding method that fuses drafting into the target model by changing the fine-tuning objective from next-token prediction to future n-gram prediction. Speculative Streaming speeds up decoding by 1.8x to 3.1x across a diverse set of tasks, such as Summarization, Structured Queries, and Meaning Representation, without sacrificing generation quality. It is also parameter-efficient: it achieves speed-ups on par with or higher than Medusa-style architectures while using roughly 10,000x fewer extra parameters, making it well suited for resource-constrained devices.
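
To make the single-model drafting idea concrete, here is a minimal PyTorch sketch. It attaches a few extra "stream" heads to a toy language model so that one forward pass proposes the next token plus a short draft of future tokens, which the model's own next-token head then verifies. The names (MultiStreamLM, spec_heads, speculative_decode), the GRU backbone, and the greedy acceptance rule are all illustrative assumptions made for brevity, not the paper's implementation.

```python
# Toy sketch of single-model speculative decoding in the spirit of
# Speculative Streaming. Everything here is an illustrative assumption;
# the paper uses stream embeddings inside the target model and fuses
# drafting with verification in a single pass.
import torch
import torch.nn as nn

class MultiStreamLM(nn.Module):
    """Tiny causal LM with extra heads that predict tokens at
    offsets +2 .. +(1 + n_streams) from the final hidden state."""
    def __init__(self, vocab_size=100, d_model=64, n_streams=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.next_head = nn.Linear(d_model, vocab_size)   # standard next-token head
        self.spec_heads = nn.ModuleList(                  # future n-gram heads
            [nn.Linear(d_model, vocab_size) for _ in range(n_streams)]
        )

    def forward(self, ids):
        h, _ = self.rnn(self.embed(ids))                  # (B, T, d_model)
        last = h[:, -1]                                   # hidden state at last position
        draft = [self.next_head(last)] + [head(last) for head in self.spec_heads]
        # Per-position next-token logits, plus logits for the drafted block.
        return self.next_head(h), torch.stack(draft, dim=1)

@torch.no_grad()
def speculative_decode(model, prompt_ids, max_new=20):
    ids = prompt_ids
    while ids.size(1) - prompt_ids.size(1) < max_new:
        # Draft: one forward pass proposes 1 + n_streams tokens.
        _, draft_logits = model(ids)
        draft = draft_logits.argmax(-1)                   # (1, 1 + n_streams)

        # Verify: score the drafted block with the model's own next-token
        # head and keep the longest agreeing prefix (greedy acceptance).
        cand = torch.cat([ids, draft], dim=1)
        per_pos_logits, _ = model(cand[:, :-1])
        preds = per_pos_logits.argmax(-1)[:, ids.size(1) - 1:]
        n_accept = 1                                      # next_head token is always consistent
        while n_accept < draft.size(1) and draft[0, n_accept] == preds[0, n_accept]:
            n_accept += 1
        ids = torch.cat([ids, draft[:, :n_accept]], dim=1)
    return ids[:, : prompt_ids.size(1) + max_new]

model = MultiStreamLM().eval()
out = speculative_decode(model, torch.tensor([[1, 2, 3]]))
print(out.shape)  # torch.Size([1, 23]): 3 prompt tokens + 20 generated
```

With an untrained model the stream heads rarely agree with the verifier, so typically only one token is committed per step; after fine-tuning on the future n-gram objective, agreement rises and several tokens are committed at once. This two-pass, cache-free sketch also omits the KV-cache reuse and fused draft/verify pass that the paper relies on for its speed-ups.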
November 18, 2024 | Research areas: Methods and Algorithms; Speech and Natural Language Processing

