Is Google’s AI research about to implode? - Deepstash

Although AI researchers can train systems to win at Space Invaders, those systems couldn't play games like Montezuma's Revenge, where rewards can only be collected after completing a series of actions (for example: climb down a ladder, descend a rope, climb down another ladder, jump over a skull and climb up a third ladder).

For these types of games, the algorithms can't learn, because success requires an understanding of the concepts of ladders, ropes and keys. That understanding is something we humans have built into our cognitive model of the world, and it can't be learned by DeepMind's reinforcement learning approach.
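The problem described above is usually called sparse reward. A minimal toy sketch (my own illustration, not DeepMind's actual environment or agent) shows why: if reward arrives only after an exact multi-step sequence, a randomly exploring agent almost never receives any learning signal at all.

```python
import random

# Toy sparse-reward environment (illustrative, not the real game):
# reward is granted only after the exact five-step sequence.
REQUIRED = ["down_ladder", "down_rope", "down_ladder_2", "jump_skull", "up_ladder"]
ACTIONS = REQUIRED + ["left", "right", "jump", "noop"]

def episode(policy, max_steps=5):
    """Run one episode; return 1.0 only if the full sequence is completed."""
    progress = 0
    for _ in range(max_steps):
        action = policy()
        if action == REQUIRED[progress]:
            progress += 1
            if progress == len(REQUIRED):
                return 1.0  # reward only at the very end
        else:
            progress = 0  # any wrong move resets the sequence
    return 0.0  # no intermediate signal to learn from

random.seed(0)
# Random exploration over 100,000 episodes almost never finds the reward:
hits = sum(episode(lambda: random.choice(ACTIONS)) for _ in range(100_000))
print(hits)
```

With nine actions, the chance of guessing all five steps in order is about (1/9)^5, so nearly every episode returns zero reward and gradient-based learning has nothing to work with. Humans, by contrast, recognise the ladder and rope at a glance and never face this search problem.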



Mathematical modelling consists of 3 components:

  1. Assumptions: These are taken from our experience and intuition to be the basis of our thinking about a problem.
  2. Model: This is the representation of our assumptions in a form we can reason with (e.g. an equation or a simulation).
  3. Data: This is what we measure and understand about the real world.
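The three components above can be made concrete with a deliberately simple sketch (my own illustration, not from the article): assume linear growth, encode that assumption as the model y = a·t + b, and confront it with measured data via least squares.

```python
# 1. Assumption: the quantity grows linearly with time (our choice, made explicit).
# 2. Model: y = a*t + b, fitted by ordinary least squares.
# 3. Data: measured (t, y) pairs (made-up numbers for illustration).

data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]

n = len(data)
st = sum(t for t, _ in data)       # sum of t
sy = sum(y for _, y in data)       # sum of y
stt = sum(t * t for t, _ in data)  # sum of t^2
sty = sum(t * y for t, y in data)  # sum of t*y

a = (n * sty - st * sy) / (n * stt - st * st)  # slope
b = (sy - a * st) / n                          # intercept
print(round(a, 2), round(b, 2))
```

The point of the example is that the fitted numbers are only as meaningful as the linearity assumption: swap in a different assumption (exponential growth, say) and the same data yields a different model and different conclusions.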

Current AI is strong on the model (step 2): the neural network model of pictures and words. But this is just one model among many, possibly infinitely many, alternatives. It is one way of looking at the world.

In emphasising the model, researchers make a strong implicit assumption: that their model doesn't need assumptions. But all models do.



Neutrality in AI

True neutrality in language and image data is impossible.

If our text and image libraries are formed by and document sexism, systemic racism and violence, how can we expect to find neutrality in this data? We can’t.

If we use models that learn from Reddit without making our assumptions explicit, then our assumptions will come from Reddit.


