Technology can push the boundaries of what the human mind considers acceptable reasoning. A "system" has no inherent morals beyond those its human engineers synthesize and program into it. Artificial intelligence is no different, and it is arguably the most crucial case when discussing moral competency in machines. When the core purpose of an emerging technology is to emulate human reasoning, the people building it should not take lightly the responsibility of deciding which points of view it will represent.
Human nature (and nurture) dictates that we grow up forming opinions, along with opinion's more dangerous cousin: bias. Most humans, however, have also evolved to maintain a personal set of ethics that tempers those opinions and biases. Machines have achieved no such feat. The hypothetical scenario in which they do is termed Artificial General Intelligence, also known as AGI or Strong AI, the concept behind the most ominous AI cliché you often see in sci-fi movies like The Terminator, The Matrix, and Ex Machina.
What are the Concerns?
More serious issues emerge when the teams developing artificial intelligence solutions are homogeneous and insular. Humanity encompasses a great many perspectives and backgrounds from all over the planet, with varying experiences, upbringings, and knowledge bases. It is vital that teams working on AI solutions reflect that reality, both among themselves and within the technology itself. Artificial intelligence must be as diverse and inclusive as the humans innovating it claim to be, so that it represents all of us as it evolves and not just the biases of the select few with access to an elite education. Diverse collaboration among developers of AI (and all technology) is imperative.
Companies are leveraging data and artificial intelligence to create scalable solutions — but they're also scaling their reputational, regulatory, and legal risks. For instance, Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app. Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients. Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting them smaller credit limits than men on the Apple Card. And Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.
Just a few years ago, discussions of "data ethics" and "AI ethics" were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world — Microsoft, Facebook, Twitter, Google, and more — are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.
AI (Artificial Intelligence) has gotten a bad rap, mostly from movies (The Matrix, for instance) and news articles demonizing its reach and scope, stoking fears that range from privacy invasion to space wars.
AI, and technology in general, is a double-edged sword: depending on how we use it, it can overwhelm and overpower us, or it can serve as a defence against looming future threats.