Problem Solving

Companies are leveraging data and artificial intelligence to create scalable solutions — but they’re also scaling their reputational, regulatory, and legal risks. 

  • Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app.
  • Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients.
  • Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger credit limits to men than to women on the Apple Card.
  • Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.

Just a few years ago discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.

These companies realized one simple truth: failing to operationalize data and AI ethics is a threat to the bottom line. Missing the mark can expose companies to reputational, regulatory, and legal risks, but that’s not the half of it. Failing to operationalize data and AI ethics leads to wasted resources, inefficiencies in product development and deployment, and even an inability to use data to train AI models at all.

Steps for operationalizing data and AI ethics:

  • Identify existing infrastructure that a data and AI ethics program can leverage.
  • Create a data and AI ethical risk framework that is tailored to your industry.
  • Change how you think about ethics by taking cues from the successes in health care. Leaders should take inspiration from health care, an industry that has been systematically focused on ethical risk mitigation since at least the 1970s.
  • Optimize guidance and tools for product managers. 
  • Build organizational awareness.
  • Formally and informally incentivize employees to play a role in identifying AI ethical risks.
  • Monitor impacts and engage stakeholders.
Three common approaches fall short of that goal:

  • The academic approach: this means spotting ethical problems, their sources, and how to think through them. Unfortunately, academics tend to ask different questions than businesses do, so academic treatments rarely speak to the highly particular, concrete uses of data and AI.
  • The "on-the-ground" approach: practitioners know to ask the business-relevant, risk-related questions precisely because they are the ones making the products, but they lack the skills, knowledge, experience, and institutional support to answer ethical questions systematically, exhaustively, and efficiently.
  • The high-level AI ethics principles: Google and Microsoft, for instance, trumpeted their principles years ago. The difficulty comes in operationalizing them. What, exactly, does it mean to be for "fairness"? Which metric is the right one in any given case, and who makes that judgment? (A toy sketch of that ambiguity follows this list.)
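
To make that ambiguity concrete, here is a minimal toy sketch (not from the article; the data and the two metric choices are invented for illustration) showing how two common fairness metrics can disagree on the very same predictions:

```python
import numpy as np

# Invented toy predictions for two groups (1 = approved, 0 = denied).
y_true_a = np.array([1, 1, 0, 0]); y_pred_a = np.array([1, 0, 1, 0])
y_true_b = np.array([1, 0, 0, 0]); y_pred_b = np.array([1, 1, 0, 0])

def positive_rate(y_pred):
    # "Demographic parity": each group should be approved at the same rate.
    return y_pred.mean()

def true_positive_rate(y_true, y_pred):
    # "Equal opportunity": qualified members of each group should be
    # approved at the same rate.
    return y_pred[y_true == 1].mean()

print(positive_rate(y_pred_a), positive_rate(y_pred_b))    # 0.5 0.5 -> parity holds
print(true_positive_rate(y_true_a, y_pred_a),
      true_positive_rate(y_true_b, y_pred_b))               # 0.5 1.0 -> opportunity fails
```

Which of those two numbers a company reports as "fairness" is a judgment call, and that is exactly the operationalization gap that high-level principles leave open.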

Creativity needs stimulus, but also an optimum load of boredom to thrive. Too much boredom makes one lethargic, as in the case of the imposed isolation of lockdowns across the world. One has to find their unique boredom sweet spot.

Creativity is not a linear process, and one needs bursts of isolation and stimuli to weave the creative content.

Creativity In Isolation

For many people in creative professions, a lack of stimulus and isolation hampers work. Yet for some, a lockdown and being completely alone with oneself is a boon for real creativity.

Such a contradiction makes understanding creativity hard, but the two main factors for creative thinking are openness to new experiences and being comfortable with one’s own thoughts and inner voice.

One can try to redefine what kind of stimuli would be useful in the creative process, as it is easy to get stuck in the old methods of finding creative sparks.

  • If outdoor stimuli are not possible, one can try inner experiences like journaling, mindfulness, and meditation.
  • Journaling your emotions, for example, has been shown to help creativity.
  • One can reconnect with old friends, or look up old pictures on their PC and phone to recall moments that can spark creativity.
  • A good walk in the park also helps declutter the mind and process thoughts that are stuck inside.
  • It is important to not be stressed about being creative.

Taking risks is a way towards potential growth, even if it is not according to the original plan.

When entrepreneurs fail at their startup, many of them still benefit through:

  • New connections and friendships
  • New job opportunities
  • A pivot towards another company.

When starting a project that requires time, energy and resources, the cost-benefit ratio is a good rule for deciding if the plunge is worth it.

One has to think about the risk in terms of probability, while also keeping in mind what is at stake: will the effort, time, money, and resources spent go down the drain, or will they still provide value if the primary goal is not reached?

One needs to take into consideration that any risky decision carries a probability of failure, while keeping in mind the potential benefits that remain even if the goal is not reached.

The ideal decision model (sketched in code after this list) should:

  1. Calculate or research the baseline likelihood of success.
  2. Honestly assess one's own ability.
  3. Account for the scenario where the results fall short of what is required.
  4. Take the opportunity cost into consideration.
  5. Ask oneself whether the activity provides joy even without any outcome.
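
As a rough illustration of steps 1 through 4, the trade-off can be framed as an expected-value calculation. All of the numbers below are hypothetical placeholders, not figures from the text:

```python
# Hypothetical placeholder numbers, purely for illustration.
p_success = 0.3              # steps 1-2: baseline likelihood, adjusted for ability
value_if_success = 100_000   # payoff if the primary goal is reached
value_if_failure = 20_000    # step 3: residual value (connections, skills, pivots)
opportunity_cost = 40_000    # step 4: value of the best alternative forgone

expected_value = p_success * value_if_success + (1 - p_success) * value_if_failure
print(expected_value)                      # 44000.0
print(expected_value > opportunity_cost)   # True: worth the plunge, before step 5
```

Step 5 resists quantification: if the activity is joyful regardless of outcome, even a decision that fails this numeric test may still be worth taking.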

In science, an effect is "statistically significant" if it can be detected with a particular statistical tool called a p-value. The cut-off is arbitrary and can vary between scientific fields, but the common convention for statistical significance is a p-value below 0.05.

One problem is that if you run a study multiple times or do a whole bunch of different statistical analyses on the same data, your results could look meaningful purely by chance.
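
A small simulation (my own illustration, using numpy and scipy, not from the original text) shows this multiple-comparisons effect: testing pure noise many times still produces "significant" results at the conventional cut-off:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
false_positives = 0

# 100 t-tests comparing two groups drawn from the SAME distribution,
# so every "significant" result is a false positive.
for _ in range(100):
    a = rng.normal(0.0, 1.0, size=30)
    b = rng.normal(0.0, 1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:   # the conventional significance cut-off
        false_positives += 1

print(f"{false_positives} spurious 'findings' out of 100")  # expect ~5
```

Roughly 1 in 20 analyses of pure noise will clear p < 0.05, which is why running many unplanned analyses on the same data can manufacture "findings."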

In a peer review system, independent, anonymous experts read over a paper submitted to a journal. They can recommend revisions to the text, new experiments that should be added, or even that the journal shouldn't publish the paper.

But reviewers aren't asked to ensure that the results are absolutely correct; that would be too time-consuming and impractical. Peer review, then, is beneficial but not perfect.

Problems with how we view scientific studies

The world is full of evidence and studies, some good and some poor.

  • One major problem is that scientific lingo often means something different from everyday language. Words like theory, significant, and control have entirely different meanings in the realm of science.
  • Another problem is that experiments can suffer from flaws in how they're designed, how they're analyzed, and how scientific journals review them.

Studies can suffer from selection bias when people are recruited from a specific group that is not representative of the whole. Scientists select a smaller group to study, but if the chosen set isn't random enough, it will be biased in favor of a specific outcome of the study.

Selection bias can also occur when certain types of people are more likely to want to be involved or are more committed to staying in a longer experiment.
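
A quick simulation (invented numbers, just for illustration) shows how self-selection alone can skew an estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# True population: satisfaction scores uniform between 0 and 10 (mean 5).
population = rng.uniform(0, 10, size=100_000)

# Hypothetical bias: the happier someone is, the likelier they volunteer.
volunteers = population[rng.random(100_000) < population / 10]

print(round(population.mean(), 2))  # ~5.0, the truth
print(round(volunteers.mean(), 2))  # ~6.67, the biased sample
```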

  • A hypothesis is often the first step of the scientific method. It is a proposed and still-unproven explanation that can be tested through experiments and observations.

  • In science, a theory is a widely accepted idea backed by data, observations, and experiments. Of course, established scientific theories can later be changed or rejected if enough contradicting data emerges.

  • The Impact Factor is the most commonly used metric for assessing a scientific journal's influence. It counts the number of times the journal's papers have been cited in other papers, relative to the journal's own output (a worked example follows this list).
  • You can find a journal's Impact Factor in the Journal Citation Reports analysis that appears annually, or by searching for the journal's name plus "impact factor." The most prestigious journals have Impact Factors in the high-20s to mid-30s.
  • The Impact Factor is controversial. Some fields of science generate more citations than others, but that doesn't make their papers better, and for-profit journals will often publish almost anything, even without peer review.
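
For concreteness, here is the standard two-year Impact Factor calculation, with made-up numbers for a hypothetical journal:

```python
# Hypothetical journal, invented numbers.
citations_2023_to_2021_2022_papers = 1200  # citations this year to the last two years' papers
papers_published_2021_2022 = 400           # the journal's own output in those two years

impact_factor = citations_2023_to_2021_2022_papers / papers_published_2021_2022
print(impact_factor)  # 3.0
```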

One type of conflict of interest is financial. It could be someone who received funding from a company with a vested interest in the study's outcome. Or that person has a relationship with the company that could lead to benefits in the future.

A recent analysis found that from 7 to 32 percent of randomized trials in top medical journals were funded by medical industry sources.

  • Correlation: Scientists may find that two variables are correlated. They may be related, but that doesn't mean one is causing the other. It could be a coincidence, or a third variable could be causing both (a simulation of this follows the list).
  • Causation: Lots of correlative evidence can build a stronger case that something is causing something else, especially when combined with systematically ruling out other possible causes. However, the best way to show causation is to perform a controlled experiment.
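
A short simulation (my own toy example) shows how a hidden third variable can produce a strong correlation with no causal link between the two measured variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hidden third variable (say, summer heat) drives both observed variables.
heat = rng.normal(size=1000)
ice_cream_sales = 2 * heat + rng.normal(size=1000)
sunburns = 3 * heat + rng.normal(size=1000)

# Strongly correlated (~0.85), yet neither causes the other.
print(np.corrcoef(ice_cream_sales, sunburns)[0, 1])
```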

If you are looking at a clinical trial, a psychology study, or an animal study, the gold standard is a randomized, placebo-controlled, double-blind design.

  • Randomized: The participants in the study are randomly placed into the experimental group and the comparison group (a minimal sketch follows this list).
  • Placebo-controlled: In medical studies, one comparison group gets a placebo, such as a sugar pill. This is to see the effects of the actual drug. (However, placebo effects can be so strong that they often relieve pain, among other health problems.)
  • Double-blind: A study is "blind" if the participants don't know whether they are in the experimental group or the control group. A study is "double-blind" if the researchers in personal contact with participants also don't know which treatment they are administering.
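
As a minimal sketch (my own illustration) of the "randomized" part, assignment can be as simple as a shuffle, which lets pre-existing differences average out across the two groups:

```python
import random

random.seed(7)  # fixed seed only so the example is reproducible

participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)                      # the randomization step
treatment, control = participants[:10], participants[10:]

print(treatment)
print(control)
```

Blinding is handled separately, for example by having a third party hold the key that maps coded labels to treatment or placebo, so neither participants nor staff know who receives which.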

Bad ideas are not your enemy. They are essential steps to better ideas.

Focus on making something worth sharing. How small can you make it and still do something you're proud of? Could you play just one note on the clarinet that's worth listening to?

There is a whole community of critics, tweakers, and tinkerers who are ready to criticize the logo your agency put together. What is scarce are the people willing to make the logo themselves.

Here, then, is a clue about what to do next: go first. After you have done the scary bits, you can easily get help from the people who are good at smoothing out the rough places with their critiques.

We all seek out "good" or even "great". But we need to define what good is before we begin. Twelve publishers rejected Harry Potter, but then it became a worldwide phenomenon and was suddenly good enough.

Judge your work by asking what it is for and who it is for. If it achieves its mission, then it is good.

Creating good ideas

It’s tempting to think that all the good ideas must be taken by now and that there is no possible way to make any new positive contribution.

However, the story of every good idea, every new project, every novel starts with: There was a bad idea. And then there was a better one.

This is not your one and only chance. You won't run out of ideas.

There is no perfect idea, just the next thing you haven't discovered yet. No one is keeping you from posting a video or blogging every day, or hanging up your artwork. You have to do the steps.
