In their research paper, AI21 Labs demonstrates that frozen LLMs have untapped potential, matching or exceeding fine-tuning approaches without the downsides.
LLMs have serious limitations that constrain their usefulness in real-world applications. To overcome them, AI researchers have proposed a new system architecture called Modular Reasoning, Knowledge and Language (MRKL), which augments a frozen LLM with external expert modules.
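The core idea of MRKL is routing: a query goes to the most appropriate expert module (a calculator, a database, the LLM itself) rather than being answered by the LLM alone. A minimal sketch of such a router, with hypothetical `calculator_module` and `llm_module` stand-ins, assuming a simple pattern-based dispatch rather than the paper's actual routing mechanism:

```python
import re

def calculator_module(query):
    # Hypothetical expert module: evaluates simple arithmetic expressions.
    # Strips everything except digits, operators, dots, and spaces first.
    expr = re.sub(r"[^0-9+\-*/. ]", "", query)
    return str(eval(expr))

def llm_module(query):
    # Placeholder for a call to a frozen LLM (an API request in practice).
    return f"[LLM response to: {query}]"

def mrkl_router(query):
    # Route arithmetic questions to the calculator; everything else to the LLM.
    if re.search(r"\d+\s*[+\-*/]\s*\d+", query):
        return calculator_module(query)
    return llm_module(query)
```

For example, `mrkl_router("What is 12 * 7?")` dispatches to the calculator, while a general question falls through to the LLM. The point of the design is that the frozen LLM never needs retraining: new capabilities arrive by adding modules and routing rules.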
Researchers have developed a benchmark called the LAMBADA dataset to rigorously test how well AI models can leverage broad discourse context when predicting the final word of a passage.
Researchers developed a tool called Storywrangler that leveraged Twitter data to create an "instrument for understanding our world through the lens of social media."
Data analysis often involves exploring data to unearth insights, then crafting stories to communicate those findings. But converting discoveries into coherent narratives poses challenges. Researchers have developed an AI assistant called Notable that streamlines data storytelling.
Back in 2018, researchers from Facebook AI developed a new method to improve story generation through hierarchical modeling. Their approach mimics how people plan out narratives.
New research from the University of California, Berkeley sheds light on one slice of large language models' knowledge: which books they have "read" and memorized. The study uncovers systematic biases in what texts AI systems know most about.
Prosecraft, a data-driven project designed to analyze word usage and provide statistics on various writing-style markers, recently came under fire from authors and has since shut down.