Better AI Systems

Delivering the best member and customer experiences with a focus on trust is core to everything we do at LinkedIn. As we continue to build on the Responsible AI program we outlined three months ago, a key part of our work is designing products that provide the right protections, mitigate unintended consequences, and ultimately better serve our members, customers, and society.

MORE IDEAS FROM Our approach to building transparent and explainable AI systems

From a holistic “responsible design” perspective, there are many non-AI initiatives that help increase the transparency of our products and experiences.

  • On the back end, researchers have previously highlighted the importance of dataset documentation as a key enabler of transparency.
  • On the front end, we launched a transparency initiative designed to earn and preserve member trust through improvements to our reporting and policy enforcement experiences.

Complex predictive machine learning models often lack transparency, which leads to low trust from the teams that consume their predictions even when predictive performance is high. While many model interpretation approaches such as SHAP and LIME return the top important features to help interpret a model's predictions, those top features may not be well organized or intuitive to those teams.

To deal with this challenge, we developed Intellige, a customer-facing model explainer that creates digestible interpretations and insights reflecting the rationale behind model predictions.
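For context on what those raw explanations look like, here is a minimal sketch (our illustration, not Intellige or any LinkedIn code) that uses SHAP to list the top feature attributions behind a single prediction; the dataset and model are placeholders chosen only to keep the snippet self-contained.

```python
# Minimal sketch (not Intellige): listing the top SHAP feature attributions
# behind one model prediction. Dataset and model are placeholders.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# TreeExplainer computes per-feature contributions for tree-based models.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(data.data[:1])[0]  # one prediction

# Rank features by absolute contribution; this flat list is the kind of "raw"
# explanation that still needs to be organized into something intuitive.
for i in np.argsort(-np.abs(contributions))[:5]:
    print(f"{data.feature_names[i]}: {contributions[i]:+.3f}")
```

A tool like Intellige, as described above, turns feature lists like this into digestible narratives rather than leaving consumers to interpret raw attribution scores.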

Transparency means that an AI system's behaviour and its related components are understandable, explainable, and interpretable. The goal is that end users of AI, such as LinkedIn employees, customers, and members, can use these insights to understand these systems, suggest improvements, and identify potential problems should they arise.

Developing large-scale AI-based systems that are fair and equitable, or that protect users, may not be possible if our systems are opaque.

Transparency allows modellers, developers, and technical auditors to understand how an AI system works, including how a model is trained and evaluated, what its decision boundaries are, what inputs go into the model, and finally, why it made a specific prediction. This is often also described as “interpretability” in existing research.

Explainable AI (XAI) goes a step further by explaining to members and customers how a system works or why a particular recommendation was made.

A few key ways we've improved transparency in AI at LinkedIn:

  • Explainable AI for model consumers to build trust and augment decision-making.
  • Explainable AI for modellers to perform model debugging and improvement.
  • Transparency beyond AI systems.

Machine learning engineers at LinkedIn need to understand how their models make decisions in order to identify blind spots and, with them, opportunities for improvement. For this, we have explainability tools that allow model developers to derive insights and characteristics about their models at a finer granularity.
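The article does not detail these internal tools, but one common way to get that finer granularity is to slice evaluation metrics by segment and look for slices where the model underperforms. A minimal sketch under that assumption (segments, data, and model are synthetic placeholders, not LinkedIn's):

```python
# Minimal sketch (not LinkedIn's tooling): surfacing model blind spots by
# comparing error rates across data slices. All data here is synthetic.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic dataset with a categorical "segment" column to slice on.
df = pd.DataFrame({
    "segment": ["new_member", "active_member"] * 500,
    "feature_a": range(1000),
    "label": [i % 3 == 0 for i in range(1000)],
})

features = pd.get_dummies(df[["segment", "feature_a"]])
train_X, test_X, train_df, test_df = train_test_split(
    features, df, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(train_X, train_df["label"])

# Per-slice error rate: a slice with a markedly higher error rate is a
# candidate blind spot (missing features, under-represented data, etc.).
errors = model.predict(test_X) != test_df["label"]
print(errors.groupby(test_df["segment"]).mean())
```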

RELATED IDEAS

Building ethical AI

Companies are leveraging data and artificial intelligence to create scalable solutions — but they’re also scaling their reputational, regulatory, and legal risks. For instance, Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app. Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients. Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger credit limits to men than women on their Apple cards. Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.

Just a few years ago discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world — Microsoft, Facebook, Twitter, Google, and more — are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.

AI and Equality
  • Designing systems that are fair for all.

Accept that no one has all the answers

Organizations today are often faced with a challenge: How do we move forward even if we don’t have all of the answers yet? A data engineer can have an impact across the application, from application performance to the semantics and meaning of the data flowing across the system.

AI team members must be curious and humble enough to acknowledge that they don’t have all the answers and identify who can reach across different boundaries within a system to track down an answer.

Although AI researchers could train systems to win at Space Invaders, those systems couldn't play games like Montezuma's Revenge, where rewards could only be collected after completing a series of actions (for example, climb down a ladder, get down a rope, get down another ladder, jump over a skull, and climb up a third ladder).

For these types of games, the algorithms can't learn, because success requires an understanding of the concepts of ladders, ropes, and keys: something we humans have built into our cognitive model of the world and that can't be learnt by DeepMind's reinforcement learning approach.
