Responsible AI

Blog posts categorized as Responsible AI

Measuring the Truthfulness of Large Language Models: Benchmarks, Challenges, and Implications for Business Leaders
LLMs currently face significant challenges with truthfulness. Understanding these limitations is essential for any business considering deploying LLMs.
Ines Almeida
29.04.24 01:00 PM
The Responsible AI Imperative: Key Insights for Business Leaders
We explore the current state of responsible AI, examining the lack of standardized evaluations for LLMs, the discovery of complex vulnerabilities in these models, the growing concern among businesses about AI risks, and the challenges posed by LLMs outputting copyrighted material.
Ines Almeida
29.04.24 11:19 AM
Manipulation in AI-Powered Product Recommendations: What Business Leaders Need to Know
A new study from Harvard University reveals how LLMs can be manipulated to boost a product's visibility and ranking in recommendations.
Ines Almeida
15.04.24 02:00 PM
Unlocking the Power of Interpretable AI with InterpretML: A Guide for Business Leaders
InterpretML is a valuable tool for unlocking the power of interpretable AI in traditional machine learning models. While it may have limitations for directly interpreting LLMs, the principles of interpretability and transparency remain crucial in the age of generative AI.
Ines Almeida
04.04.24 12:45 PM
Critique of the AI Transparency Index
A recent critique calls into question a prominent AI transparency benchmark, illustrating the challenges in evaluating something as complex as transparency.
Ines Almeida
01.11.23 12:07 PM
Examining Claims and Hype: Large Language Models
AI experts Alexandra Luccioni and Anna Rogers take a critical look at LLMs, analyzing common claims and assumptions while identifying issues and proposing ways forward.
Ines Almeida
16.08.23 09:04 AM
Towards Responsible AI: Model Cards for Transparent Machine Learning
In 2019, a research paper proposed "model cards" as a way to increase the transparency of AI systems and mitigate their potential harms.
Ines Almeida
13.08.23 10:39 PM
A thought-provoking paper from computer scientists raises important concerns about the AI community's pursuit of ever-larger language models.
Ines Almeida
13.08.23 09:46 PM
Ensuring Ethical AI Through Internal Audits
A research paper proposes formal internal audits as a mechanism for technology companies to ensure their AI systems align with ethical priorities before deployment.
Ines Almeida
13.08.23 08:31 PM
A study from AI researchers at OpenAI demonstrates how large language models, such as those powering chatbots, can be adapted to reflect specific societal values through a simple fine-tuning process.
Ines Almeida
13.08.23 08:03 PM
Making Data Work More Visible Through Documentation
A new study provides insights into the complex processes and people behind ML data work.
Ines Almeida
13.08.23 12:31 PM
Examining How AI Training Datasets Are Built: A Framework for More Responsible Practices
In a recent paper, researchers Mehtab Khan and Alex Hanna highlight the need for greater scrutiny, transparency, and accountability in how massive datasets for machine learning models are created.
Ines Almeida
13.08.23 11:51 AM
Machine learning models rely heavily on their training datasets, inheriting their biases and limitations. This research proposes "datasheets for datasets" to increase transparency and mitigate risks.
Ines Almeida
13.08.23 11:21 AM
The Future of AI Language Models: Making Them More Interpretable and Controllable
Backpack models have an internal structure that is more interpretable and controllable than that of existing models like BERT and GPT-3.
Ines Almeida
10.08.23 07:59 AM
Reading Between the Lines: Subtle Stereotypes in AI Text Generation
Recent advances in AI language models like GPT-4 and Claude 2 have enabled impressively fluent text generation. However, new research reveals these models may perpetuate harmful stereotypes and assumptions through the narratives they construct.
Ines Almeida
10.08.23 07:59 AM
Emily Bender, Alex Hanna, and Hannah Zeavin discuss misguided uses of AI in mental health.
Ines Almeida
10.08.23 07:55 AM
Training AI to Behave Ethically Through a "Constitution"
Researchers at Anthropic recently published a paper demonstrating a constitutional AI technique. Their goal was to make AI assistants helpful while avoiding harmful, dangerous, or unethical content.
Ines Almeida
09.08.23 06:22 PM
Understanding How AI Generates Images from Text
In a paper titled "What the DAAM: Interpreting Stable Diffusion Using Cross Attention", researchers propose a method called DAAM (Diffusion Attentive Attribution Maps) to analyze how words in a prompt influence different parts of the generated image.
Ines Almeida
08.08.23 04:59 PM