Tech Trends Watcher
  • Home
  • Artificial Intelligence
  • Chatbots
  • Digital Marketing
  • Energy & Resources
  • Software & High-Tech
  • Financial Services
  • Machine Learning

DataComp-LM: In Search of the Next Generation of Training Sets for Language Models

by Tech Trends Watcher
2 August 2024
in Machine Learning

We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models. As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations. Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at model scales ranging from 412M to 7B parameters. As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set. The resulting dataset, DCLM-Baseline, enables training a 7B-parameter language model from scratch to 64% 5-shot accuracy on MMLU with 2.6T training tokens. Compared to MAP-Neo, the previous state of the art in open-data language models, DCLM-Baseline represents a 6.6-percentage-point improvement on MMLU while being trained with 40% less compute. Our baseline model is also comparable to Mistral-7B-v0.3 and Llama 3 8B on MMLU (63% and 66%, respectively), and performs similarly on an average of 53 natural language understanding tasks while being trained with 6.6x less compute than Llama 3 8B. Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation.
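The abstract's central finding is that model-based filtering is key to assembling a high-quality training set. The idea can be illustrated with a minimal sketch: score every candidate document with a quality model, then keep only the top-scoring fraction. The heuristic `quality_score` below is a stand-in for the learned classifier (DCLM-style pipelines train a real model on known-good reference text); the function names and thresholds are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of model-based quality filtering for pretraining data.
# A learned classifier would normally score each document; here a toy
# heuristic stands in for the trained model. All names are illustrative.

def quality_score(doc: str) -> float:
    """Stand-in for a learned quality classifier: word-dense documents
    with natural average word lengths score higher."""
    words = doc.split()
    if not words:
        return 0.0
    avg_len = sum(len(w) for w in words) / len(words)
    # Favor longer documents whose average word length looks like prose.
    length_term = min(len(words) / 100.0, 1.0)
    prose_term = 1.0 if 3.0 <= avg_len <= 8.0 else 0.3
    return length_term * prose_term

def filter_corpus(docs: list[str], keep_fraction: float = 0.5) -> list[str]:
    """Keep the top-scoring fraction of documents, as in threshold-based
    model filtering of a web crawl."""
    scored = sorted(docs, key=quality_score, reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return scored[:k]

corpus = [
    "The model was trained on a curated subset of web text with careful deduplication.",
    "click here click here click here",
    "a b c d e f g",
    "Language model pretraining benefits substantially from high quality filtered data.",
]
kept = filter_corpus(corpus, keep_fraction=0.5)
print(len(kept))  # half the corpus survives the filter
```

In a real pipeline the scorer would be a trained model (e.g. a fastText-style classifier) and the keep threshold would be tuned on downstream evaluations, but the filter-by-score structure is the same.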

Previous Post

Model-Driven Heart Rate Estimation and Heart Murmur Detection Based on Phonocardiogram

Next Post

LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference


    © 2024 Tech Trends Watcher
