To gain traction throughout an organization, support for responsible AI needs to come from its leadership. Unfortunately, many board members and executive teams lack an understanding of AI.
The World Economic Forum created a toolkit for boards to learn about the different oversight responsibilities in companies involved with AI. They can use it to understand how responsible AI can be adopted across different areas of the business — including branding, competitive and customer strategies, cybersecurity, governance, operations, human resources, and corporate social responsibility — and prevent ethical issues from taking hold.
Companies that embrace AI can leverage the considerable benefits of knowing their customers better, explore new markets, and counter new, AI-driven competitors that might seek their market share.
To extract the benefits from AI while mitigating the risks, companies must ensure that they are sufficiently agile so they can adopt best practices to create responsible transformation with AI.
Organizations must recognize the drawbacks that some algorithms bring into the screening and hiring process as a result of the way they are trained, which can have a direct impact on outcomes such as diversity and inclusion.
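A minimal sketch of that mechanism, using entirely invented data: a screening model trained on past hiring decisions learns per-keyword scores, and a token that merely correlates with a demographic group (here the hypothetical `womens_chess_club`) picks up a negative weight, so two candidates with identical skills score differently.

```python
# Hypothetical illustration of how a screener trained on historically
# biased hiring decisions reproduces that bias. All data is invented.
from collections import Counter

# Past decisions: resumes (as keyword sets) labeled hired (1) / not hired (0).
# Historically, resumes mentioning the proxy token were rarely hired.
history = [
    ({"python", "sql"}, 1),
    ({"python", "ml"}, 1),
    ({"python", "sql", "womens_chess_club"}, 0),
    ({"sql", "womens_chess_club"}, 0),
]

def train(examples):
    """Learn a naive per-token score: P(hired | token) - P(hired)."""
    base = sum(label for _, label in examples) / len(examples)
    hired, seen = Counter(), Counter()
    for tokens, label in examples:
        for t in tokens:
            seen[t] += 1
            hired[t] += label
    return {t: hired[t] / seen[t] - base for t in seen}

def score(weights, tokens):
    return sum(weights.get(t, 0.0) for t in tokens)

weights = train(history)
# Two candidates with identical skills; one lists the proxy token.
a = score(weights, {"python", "sql"})
b = score(weights, {"python", "sql", "womens_chess_club"})
print(a > b)  # True: the proxy token alone lowers the score
```

Nothing in the model refers to a protected attribute directly; the bias enters purely through correlations in the historical labels, which is why audits of training data and outcomes matter.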
Companies are leveraging data and artificial intelligence to create scalable solutions — but they’re also scaling their reputational, regulatory, and legal risks. For instance, Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app. Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients. Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger credit limits to men than women on their Apple cards. Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.
Just a few years ago discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world — Microsoft, Facebook, Twitter, Google, and more — are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.
Organizations today often face a challenge: how do we move forward even when we don't yet have all the answers? A data engineer, for example, can affect everything from application performance to the semantics and meaning of the data flowing through the system.

AI team members must be curious and humble enough to acknowledge that they don't have all the answers, and to identify who can reach across boundaries within a system to track one down.
Extensive research involving 1,500 organizations found that the companies achieving the most significant performance improvements are those that have humans and machines working in sync.