Ethical Questions In The Last Decades: Examples

  • There used to be no need for brain death criteria, because we did not have the technological power to even ask whether someone whose brain had lost function was dead. With the development of artificial means of maintaining circulation and respiration, this became a serious question.
  • With communications technologies like social media, we are still figuring out how to behave when we have access to so many people and so much information, and the recent problems with fake news show how quickly things can go wrong on social media when bad actors have access to the public.
  • With nuclear weapons, we never needed to ask how we should avoid a civilization-destroying nuclear war, because such a war simply wasn't possible. Once those weapons were invented, we had to ask that question, and answer it, because we were, and still are, at risk of global disaster.

MORE IDEAS FROM Technology Ethics

Technology ethics is the application of ethical thinking to the practical concerns of technology.

The reason technology ethics is growing in prominence is that new technologies give us more power to act, which means that we have to make choices we didn't have to make before. While in the past our actions were involuntarily constrained by our weakness, now, with so much technological power, we have to learn how to be voluntarily constrained by our judgment: our ethics.

AI has a fundamentally ethical aspect, and we must not mistake efficiency for morality: just because something is more efficient does not mean it is morally better, even though efficiency often brings dramatic benefits to humanity. For example, people can build more efficient weapons, but that does not mean those weapons are good or will be used for good.

Lots of organizations are exploring AI with a goal in mind that is not necessarily the best goal for everyone. They are looking for something good, whether it is making sense of large datasets or improving advertising. But is that ultimately the best use for the technology? Could we perhaps apply it instead to social issues such as the best way to structure an economy or the best way to promote human flourishing? There are lots of good uses of AI, but are we really aiming towards those good uses, or are we aiming towards lower goods?

Building ethical AI

Companies are leveraging data and artificial intelligence to create scalable solutions — but they're also scaling their reputational, regulatory, and legal risks.

  • Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app.
  • Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients.
  • Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger credit limits to men than to women on their Apple cards.
  • Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.

Just a few years ago discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world — Microsoft, Facebook, Twitter, Google, and more — are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.

AI and Equality
  • Designing systems that are fair for all.

Ethics is not subjective

Thinking that ethics is subjective leads to bad arguments.

People use the words subjective and objective in ways that create confusion, misguided conclusions, and failed attempts to convince others that ethics matters. Thinking carefully about distinctions, words, and concepts lets us see the world more clearly, and so draw better conclusions and make better decisions.
