Blog

Automating Common Sense for AI With Ensemble Models
"Symbolic knowledge distillation" that automates common sense acquisition for AI.
Ines Almeida
16.08.23 11:38 AM - Comment(s)
Examining Claims and Hype: Large Language Models
AI experts Alexandra Luccioni and Anna Rogers take a critical look at LLMs, analyzing common claims and assumptions while identifying issues and proposing ways forward.
Ines Almeida
16.08.23 09:04 AM - Comment(s)
Towards Responsible AI: Model Cards for Transparent Machine Learning
In 2019, a research paper proposed "model cards" as a way to increase transparency into AI systems and mitigate their potential harms.
Ines Almeida
13.08.23 10:39 PM - Comment(s)
A thought-provoking paper from computer scientists raises important concerns about the AI community's pursuit of ever-larger language models.
Ines Almeida
13.08.23 09:46 PM - Comment(s)
Ensuring Ethical AI Through Internal Audits
A research paper proposes formal internal audits as a mechanism for technology companies to ensure their artificial intelligence (AI) aligns with ethical priorities before deployment.
Ines Almeida
13.08.23 08:31 PM - Comment(s)
The Emerging Task of Measuring AI Training Data
A new perspective paper argues for "measuring data" as a critical task to advance responsible AI development. Just as physical objects can be measured, data used to train AI systems should also be quantitatively analyzed to understand its composition.
Ines Almeida
13.08.23 08:14 PM - Comment(s)
A study from AI researchers at OpenAI demonstrates how large language models like chatbots can be adapted to reflect specific societal values through a simple "fine-tuning" process.
Ines Almeida
13.08.23 08:03 PM - Comment(s)
A new study has shown that transformers can be expressed in a simple logic formalism. This finding challenges the perception that transformers are inscrutable black boxes and suggests avenues for interpreting how they work.
Ines Almeida
13.08.23 07:50 PM - Comment(s)
Making Data Work More Visible Through Documentation
A new study provides insights into the complex processes and people behind ML data work.
Ines Almeida
13.08.23 12:31 PM - Comment(s)
Examining How AI Training Datasets Are Built: A Framework for More Responsible Practices
In a recent paper, researchers Mehtab Khan and Alex Hanna highlight the need for greater scrutiny, transparency, and accountability in how massive datasets for machine learning models are created.
Ines Almeida
13.08.23 11:51 AM - Comment(s)
Machine learning models rely heavily on their training datasets, inheriting their biases and limitations. This research proposes "datasheets for datasets" to increase transparency and mitigate risks.
Ines Almeida
13.08.23 11:21 AM - Comment(s)
Study Uncovers Bias in AI Text Detectors Against Non-Native Writers
A new study reveals troubling bias in AI detectors of machine-generated text against non-native English speakers. The findings raise important questions around AI fairness and underscore the need for more inclusive technologies.
Ines Almeida
13.08.23 05:47 AM - Comment(s)
Protecting LLMs from Theft with Watermarks
Protecting the copyright of large language models using watermarks.
Ines Almeida
12.08.23 10:41 AM - Comment(s)
Enhancing AI's Compositional Language Skills
Enhancing AI's Compositional Language Skills
Ines Almeida
12.08.23 10:10 AM - Comment(s)
DisentQA: Catching Knowledge Gaps and Avoiding Misleading Users
Building QA systems that catch knowledge gaps and avoid misleading users.
Ines Almeida
12.08.23 09:22 AM - Comment(s)
Training Smarter AI Systems to Understand Natural Language
Researchers are exploring new techniques to improve AI's ability to grasp diverse sentence structures and indirect meaning.
Ines Almeida
12.08.23 08:46 AM - Comment(s)
Making Conversational AI More Natural: Helping Systems Understand Indirect References
Making Conversational AI More Natural: Helping Systems Understand Indirect References
Ines Almeida
12.08.23 08:22 AM - Comment(s)
The Promise of Frozen Language Models
In their research paper, AI21 Labs demonstrates that frozen LLMs have untapped potential and can match or exceed fine-tuning approaches, without the downsides.
Ines Almeida
11.08.23 10:04 AM - Comment(s)
Evaluating AI Language Models: Formal vs Functional Linguistic Competence
A new perspective paper argues that we must evaluate LLMs along two dimensions: formal linguistic competence and functional linguistic competence.
Ines Almeida
10.08.23 09:53 PM - Comment(s)