Responsible AI practices – Google AI

Curated from: ai.google


Responsible AI practices

The development of AI is creating new opportunities to improve the lives of people around the world, from business to healthcare to education.

It is also raising new questions about the best way to build fairness, interpretability, privacy, and security into these systems.


General recommended practices for AI

Reliable, effective user-centered AI systems should be designed following general best practices for software systems, together with practices that address considerations unique to machine learning.


Use a human-centred design approach

The way actual users experience your system is essential to assessing the true impact of its predictions, recommendations, and decisions.

  • Design features with appropriate disclosures built-in: clarity and control are crucial to a good user experience.
  • Consider augmentation and assistance.
  • Model potential adverse feedback early in the design process, followed by specific live testing and iteration for a small fraction of traffic before full deployment.
  • Engage with a diverse set of users and use-case scenarios, and incorporate feedback before and throughout project development.


Identify multiple metrics to assess training and monitoring

Using several metrics rather than a single one will help you understand the tradeoffs between different kinds of errors and experiences.

Consider metrics including feedback from user surveys, quantities that track overall system performance and short- and long-term product health (e.g., click-through rate and customer lifetime value, respectively), and false positive and false negative rates sliced across different subgroups.

Ensure that your metrics are appropriate for the context and goals of your system.
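As an illustration of slicing false positive and false negative rates across subgroups, a small helper might look like the sketch below. The function name and data layout are assumptions for this example, not part of the original guidance:

```python
from collections import defaultdict

def sliced_error_rates(y_true, y_pred, groups):
    """Compute false positive and false negative rates per subgroup.

    y_true, y_pred: binary labels (0/1); groups: subgroup name per example.
    """
    stats = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for truth, pred, group in zip(y_true, y_pred, groups):
        s = stats[group]
        if truth == 1:
            s["pos"] += 1
            if pred == 0:
                s["fn"] += 1  # missed positive
        else:
            s["neg"] += 1
            if pred == 1:
                s["fp"] += 1  # false alarm
    return {
        g: {
            "fpr": s["fp"] / s["neg"] if s["neg"] else 0.0,
            "fnr": s["fn"] / s["pos"] if s["pos"] else 0.0,
        }
        for g, s in stats.items()
    }
```

Comparing the per-group rates side by side, rather than one aggregate number, is what exposes subgroups where the system errs disproportionately.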


Directly examine your raw data

ML models will reflect the data they are trained on, so analyze your raw data carefully to ensure you understand it.

  • Does your data contain any mistakes (e.g., missing values, incorrect labels)?
  • Is your data sampled in a way that represents your users and real-world settings?
  • Are any features in your model redundant or unnecessary? 
  • If you are using a data label X as a proxy to predict a label Y, in which cases is the gap between X and Y problematic?
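The first two checks in the list above can be automated with a simple audit pass over the raw data. The row schema, field names, and issue format here are illustrative assumptions:

```python
def audit_rows(rows, required_fields, valid_labels):
    """Flag common raw-data problems: missing values and unexpected labels.

    rows: list of dicts, one per example (hypothetical schema).
    Returns a list of (row_index, description) issue tuples.
    """
    issues = []
    for i, row in enumerate(rows):
        # Check for missing or empty required fields.
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues.append((i, f"missing value: {field}"))
        # Check for labels outside the expected vocabulary (e.g., typos).
        if row.get("label") not in valid_labels:
            issues.append((i, f"unexpected label: {row.get('label')!r}"))
    return issues
```

Running a pass like this before training makes data mistakes visible early, instead of letting the model silently absorb them.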


Training-serving skew

Training-serving skew, the difference between a model's performance during training and its performance during serving, is a persistent challenge.

During training, try to identify potential skews and work to address them, including by adjusting your training data or objective function. During evaluation, continue to seek evaluation data that is as representative as possible of the deployed setting.
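One rough way to surface skew is to compare a feature's serving-time statistics against those recorded at training time. This z-style score is a minimal sketch under simple assumptions (numeric feature, mean shift as the signal), not a prescribed method:

```python
import math

def feature_shift(train_values, serving_values):
    """Measure how far a feature's serving mean has drifted from its
    training mean, in units of the training standard deviation."""
    n = len(train_values)
    mean_train = sum(train_values) / n
    var_train = sum((v - mean_train) ** 2 for v in train_values) / n
    std_train = math.sqrt(var_train) or 1.0  # guard constant features
    mean_serving = sum(serving_values) / len(serving_values)
    return abs(mean_serving - mean_train) / std_train
```

A large score for a feature suggests the deployed inputs no longer resemble the training distribution, which is exactly where training-serving skew tends to originate.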


Understand the limitations of your dataset and model

  • A model trained to detect correlations should not be used to make causal inferences, nor be presented as capable of them. 
  • Machine learning models today are largely a reflection of the patterns of their training data.
  • Communicate limitations to users where possible.


Test, test, test

  • Conduct rigorous unit tests to test each component of the system in isolation.
  • Conduct integration tests to understand how individual ML components interact with other parts of the overall system.
  • Proactively detect input drift by testing the statistics of the inputs to the AI system to make sure they are not changing in unexpected ways.
  • Use a gold standard dataset to test the system and ensure that it continues to behave as expected.
  • Conduct iterative user testing to incorporate a diverse set of users’ needs in the development cycles.
  • Build quality checks into a system.
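The gold-standard check above could be sketched as a regression test like the one below; the model interface (a callable mapping input to prediction) and the accuracy threshold are assumptions for illustration:

```python
def regression_test(model, gold_examples, min_accuracy=0.95):
    """Gold-standard check: the model must keep matching curated expected outputs.

    model: callable mapping an input to a prediction (hypothetical interface).
    gold_examples: list of (input, expected_output) pairs curated by hand.
    Returns (passed, accuracy).
    """
    correct = sum(1 for x, expected in gold_examples if model(x) == expected)
    accuracy = correct / len(gold_examples)
    return accuracy >= min_accuracy, accuracy
```

Run as part of continuous integration, a check like this catches silent behavior changes whenever the model or its surrounding pipeline is modified.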


Monitor and update the system after deployment

  • Issues will occur: any model of the world is imperfect almost by definition. Build time into your product roadmap to allow you to address issues.
  • Consider both short- and long-term solutions to issues. A simple fix (e.g., blocklisting) may help to solve a problem quickly, but may not be the optimal solution in the long run. Balance short-term simple fixes with longer-term learned solutions.
  • Before updating a deployed model, analyze how the candidate and deployed models differ, and how the update will affect the overall system quality and user experience.
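The pre-update comparison in the last point could look like this sketch. The metric names, the higher-is-better convention, and the regression threshold are all illustrative assumptions:

```python
def should_promote(deployed_metrics, candidate_metrics, max_regression=0.01):
    """Compare a candidate model against the deployed one metric-by-metric.

    Both arguments map metric name -> score, where higher is assumed better.
    Promote only if no tracked metric regresses by more than max_regression.
    Returns (promote, regressions) where regressions maps metric -> drop size.
    """
    regressions = {}
    for name, deployed_score in deployed_metrics.items():
        drop = deployed_score - candidate_metrics.get(name, 0.0)
        if drop > max_regression:
            regressions[name] = drop  # candidate is worse on this metric
    return len(regressions) == 0, regressions
```

Checking every tracked metric, rather than a single headline number, prevents an update that improves overall accuracy from quietly degrading, say, recall for an important subgroup.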


IDEAS CURATED BY

Anika Dhar (anikad)

Part of the journey: Machine Learning With Google