Tech Trends Watcher

LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference

By Tech Trends Watcher
2 August 2024
in Machine Learning

This paper was accepted at the Efficient Systems for Foundation Models Workshop at ICML 2024

The inference of transformer-based large language models consists of two sequential stages: 1) a prefilling stage to compute the KV cache of the prompt and generate the first token, and 2) a decoding stage to generate subsequent tokens. For long prompts, the KV cache must be computed for all tokens during the prefilling stage, which can significantly increase the time needed to generate the first token. Consequently, the prefilling stage may become a bottleneck in the generation process. An open question is whether all prompt tokens are essential for generating the first token. To answer this, we introduce a novel method, LazyLLM, that selectively computes the KV only for tokens important for the next token prediction, in both the prefilling and decoding stages. In contrast to static pruning approaches that prune the prompt all at once, LazyLLM allows language models to dynamically select different subsets of tokens from the context at different generation steps, even if they were pruned in previous steps. Extensive experiments on standard datasets across various tasks demonstrate that LazyLLM is a generic method that can be seamlessly integrated with existing language models to significantly accelerate generation without fine-tuning. For instance, in the multi-document question-answering task, LazyLLM accelerates the prefilling stage of the Llama 2 7B model by 2.34x while maintaining accuracy.
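The abstract does not spell out how token importance is measured, so the sketch below is only a rough illustration of the dynamic-pruning idea, not LazyLLM's actual implementation: at each generation step a per-token importance score (assumed here to come from attention weights) selects which prompt tokens get their KV entries computed, and a token pruned at one step remains eligible at later steps. The helper names (select_tokens, compute_kv, fake_kv) are hypothetical placeholders.

    import numpy as np

    def select_tokens(importance, keep_ratio):
        # Keep the top fraction of prompt tokens by importance for this step.
        k = max(1, int(len(importance) * keep_ratio))
        top = np.argpartition(importance, -k)[-k:]
        return np.sort(top)

    def generation_step(hidden_states, importance, keep_ratio, compute_kv):
        # Dynamic pruning: the kept subset is re-chosen at every step, so a
        # token dropped earlier can re-enter the context later. KV entries
        # are computed lazily, only for the tokens selected at this step.
        keep = select_tokens(importance, keep_ratio)
        kv = compute_kv(hidden_states[keep])
        return keep, kv

    # Toy usage with random data and a stand-in KV function.
    rng = np.random.default_rng(0)
    seq_len, d_model = 16, 8
    hidden = rng.normal(size=(seq_len, d_model))
    fake_kv = lambda h: (h.copy(), h.copy())  # stand-in: pretend K = V = hidden states

    for step in range(3):
        importance = rng.random(seq_len)  # stand-in for attention-based scores
        keep, _ = generation_step(hidden, importance, keep_ratio=0.5, compute_kv=fake_kv)
        print(f"step {step}: kept prompt tokens {keep.tolist()}")

The key difference from static prompt compression is that the kept subset is recomputed every step, so tokens that temporarily looked unimportant can have their KV entries computed later when they become relevant.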
