Responsible AI needs support at the top

To gain traction throughout an organization, support for responsible AI needs to come from its leadership. Unfortunately, many board members and executive teams lack an understanding of AI.

The World Economic Forum created a toolkit for boards to learn about the different oversight responsibilities in companies involved with AI. They can use it to understand how responsible AI can be adopted across different areas of the business — including branding, competitive and customer strategies, cybersecurity, governance, operations, human resources, and corporate social responsibility — and prevent ethical issues from taking hold.



All companies will need to become “AI companies”

... so they can leverage the considerable benefits of deeper knowledge of their customers, explore new markets, and counter new AI-driven companies that might seek to take their market share.

To extract the benefits of AI while mitigating the risks, companies must ensure they are agile enough to adopt best practices and create responsible transformation with AI.


  • It’s more important than ever that companies develop a strategy around responsible AI and communicate it well to internal and external stakeholders in order to maintain accountability. 
  • Companies should also keep in mind that a one-size-fits-all approach does not always work with emerging technology; instead, they need to match the right AI solution to the right customers and create business offerings that align with customer needs.
  • Startups with responsible AI strategies will be more valuable. The acquisition of a responsible AI startup may depend on the startup’s approval of the acquirer’s approach. Investors may refuse to buy stock in companies that lack responsible AI; indeed, there may be an increase in activist investors in this space.


Organizations must recognize the drawbacks that some algorithms bring into the screening and hiring process as a result of the way they are trained, which can have a direct impact on outcomes such as diversity and inclusion. 
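One common heuristic for auditing such screening algorithms is the "four-fifths rule": flag any group whose selection rate falls below 80% of the highest group's rate. The sketch below is a hypothetical illustration of that check, not a method described in the article; the group names and decision data are invented for demonstration.

```python
# Hypothetical sketch: auditing a screening model's pass rates by group
# using the "four-fifths rule" heuristic (a common disparate-impact check).
# Group labels and decision data are illustrative, not real.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 screening decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def four_fifths_check(rates):
    """Return True for groups whose rate is at least 80% of the top rate."""
    top = max(rates.values())
    return {g: r / top >= 0.8 for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 6/8 = 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # selection rate 3/8 = 0.375
}
rates = selection_rates(decisions)
print(four_fifths_check(rates))  # group_b falls below the 80% threshold
```

A real audit would go further (statistical significance, intersectional groups, examining the training data itself), but even a simple rate comparison like this can surface the kind of outcome disparity the paragraph above warns about.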

  • When it comes to reskilling and retaining employees, AI can be helpful to companies when deployed for screening and training employees for new positions. 
  • Many employees have skills that can be built upon to cross into a new position, but companies often don’t realize the full extent of their employees’ capabilities.
  • For companies seeking to attract and retain employees with AI skills, it helps to develop responsible AI policies, because many of the most talented AI designers and developers value their company’s positions on ethics and transparency in their work. 


  1. The whole organization must be engaged with the AI strategy, which involves a total organizational review and potential changes.
  2. All employees need education and training to understand how AI is used in the company so that diverse teams can be created to manage AI design, development, and use. Additionally, employees should understand how the use of AI will impact their work and potentially help them do their jobs.
  3. Responsibility for AI products does not end at the point of sale: Companies must engage in proactive responsible AI audits for all ideas and products before development and deployment.





Building ethical AI

Companies are leveraging data and artificial intelligence to create scalable solutions — but they’re also scaling their reputational, regulatory, and legal risks. For instance, Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app. Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients. Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger credit limits to men than women on their Apple cards. Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.

Just a few years ago discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world — Microsoft, Facebook, Twitter, Google, and more — are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.

AI and Equality
  • Designing systems that are fair for all.



Accept that no one has all the answers

Organizations today are often faced with a challenge: How do we move forward even if we don’t have all of the answers yet? For example, a data engineer can have an impact across an application, from its performance to the semantics and meaning of the data flowing through the system.

AI team members must be curious and humble enough to acknowledge that they don’t have all the answers and identify who can reach across different boundaries within a system to track down an answer.


  • Today, AI (artificial intelligence) can diagnose disease, translate languages, provide useful customer service, and drive a car for us.
  • Many companies use AI for automation, but using it to displace human employees can backfire in the long run.

Extensive research involving 1,500 organizations found that the companies achieving the most significant improvements are those that have humans and machines working in sync.