An algorithm is a set of instructions

An algorithm is a set of instructions that tells a computer how to transform a set of facts into useful information.

The facts are data. The useful information is knowledge for people, instructions for machines, or input for another algorithm. Typical examples are sorting sets of numbers and finding routes through maps.
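
A minimal Python sketch of that idea, using insertion sort as the illustrative algorithm (any sorting method would do): the facts are an unsorted list of numbers, and the useful information is the same numbers in order.

```python
# Facts in, information out: an algorithm as a transformation.

def insertion_sort(numbers):
    """Sort a list of numbers in ascending order."""
    result = list(numbers)  # copy, so the input facts stay untouched
    for i in range(1, len(result)):
        value = result[i]
        j = i - 1
        # Shift larger elements one slot to the right.
        while j >= 0 and result[j] > value:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = value
    return result

print(insertion_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```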

Source: theconversation.com

MORE IDEAS FROM THE ARTICLE

To a computer, input is the information needed to make decisions.

For example, to get dressed you need input such as which clothes are available to you; you might then weigh the temperature, the season, and your personal preferences.

The last step of an algorithm is output - expressing the answer.

Output to a computer is usually more data. It allows computers to string algorithms together in complex ways to produce more algorithms. Output can also present information, such as putting words on a screen.
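
A short sketch of that chaining, using Python's built-in sorted function and the standard-library bisect module: the output of one algorithm (sorting) becomes the input of a second one (binary search).

```python
import bisect

# Algorithm 1: sort the raw data.
temperatures = [71, 55, 63, 48, 90]
sorted_temps = sorted(temperatures)

# Algorithm 2: its input is the first algorithm's output.
position = bisect.bisect_left(sorted_temps, 63)

print(f"63 found at index {position} of {sorted_temps}")
# 63 found at index 2 of [48, 55, 63, 71, 90]
```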

A decision-making process can sometimes be too complicated to spell out step by step. In those cases, machine learning tries to "learn" from a set of past decision-making examples.

Machine learning is used for things like recommendations, predictions, and looking up information.
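
As a hedged sketch of learning from past decisions, here is a tiny classifier built with scikit-learn. The library, the features, and the numbers are all illustrative assumptions, not from the article; the point is only that the rule is inferred from examples rather than spelled out.

```python
# Assumed example: learn "wear a jacket?" from past decisions.
from sklearn.tree import DecisionTreeClassifier

# Each past example: (temperature_F, is_raining) -> choice made.
past_conditions = [[30, 1], [45, 0], [70, 0], [85, 0], [40, 1]]
past_choices = [1, 1, 0, 0, 1]  # 1 = wore a jacket, 0 = did not

model = DecisionTreeClassifier()
model.fit(past_conditions, past_choices)

# Predict a decision for conditions never seen before.
print(model.predict([[50, 1]]))  # e.g. [1]: wear a jacket
```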

Computation is the heart of an algorithm and involves arithmetic, decision-making, and repetition.

To apply this to getting dressed, you make decisions by doing some arithmetic on the input quantities. Wearing a jacket might depend on the temperature. To a computer, part of the getting-dressed algorithm would be: "if it is below 50 degrees and raining, then pick the rain jacket and a long-sleeved shirt."
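
That rule translates almost directly into code. In this sketch only the rain-jacket branch comes from the article; the other branches are assumed fallbacks added just to make the function complete.

```python
def pick_outfit(temperature_f, raining):
    """Decide what to wear from two weather inputs."""
    if temperature_f < 50 and raining:
        # The article's rule, verbatim.
        return ["rain jacket", "long-sleeved shirt"]
    if temperature_f < 50:
        return ["jacket", "long-sleeved shirt"]  # assumed fallback
    return ["t-shirt"]                           # assumed fallback

print(pick_outfit(45, raining=True))
# ['rain jacket', 'long-sleeved shirt']
```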

RELATED IDEAS

Neuroevolution

Neuroevolution is a form of artificial intelligence. It is a meta-algorithm: an algorithm for designing algorithms. It adopts the principles of biological evolution to design smarter algorithms, and over many generations the evolved algorithms become very good at their job.
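
A toy sketch of the evolutionary loop neuroevolution builds on. Real neuroevolution evolves neural-network weights and architectures; here the candidate "algorithm" is reduced to a single number so the select-and-mutate cycle stays visible.

```python
import random

def fitness(x):
    """Score a candidate; the best possible value is x == 3.0."""
    return -(x - 3.0) ** 2

# Start with a random population of candidate solutions.
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(50):
    # Selection: keep the better half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mutation: each survivor spawns a slightly perturbed child.
    children = [s + random.gauss(0, 0.5) for s in survivors]
    population = survivors + children

print(round(max(population, key=fitness), 2))  # close to 3.0
```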

Big data: Hadoop
  • Ability to store and process huge amounts of any kind of data, quickly. With data volumes and varieties constantly increasing, especially from social media and the Internet of Things (IoT), that's a key consideration.
  • Computing power. Hadoop's distributed computing model processes big data fast. The more computing nodes you use, the more processing power you have.
  • Fault tolerance. Data and application processing are protected against hardware failure. If a node goes down, jobs are automatically redirected to other nodes to make sure the distributed computing does not fail. Multiple copies of all data are stored automatically.
  • Flexibility. Unlike traditional relational databases, you don’t have to preprocess data before storing it. You can store as much data as you want and decide how to use it later. That includes unstructured data like text, images and videos.
  • Low cost. The open-source framework is free and uses commodity hardware to store large quantities of data.
  • Scalability. You can easily grow your system to handle more data simply by adding nodes. Little administration is required.

MapReduce programming is not a good match for all problems. It’s good for simple information requests and problems that can be divided into independent units, but it's not efficient for iterative and interactive analytic tasks. MapReduce is file-intensive. Because the nodes don’t intercommunicate except through sorts and shuffles, iterative algorithms require multiple map-shuffle/sort-reduce phases to complete. This creates multiple files between MapReduce phases and is inefficient for advanced analytic computing.
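
A minimal single-process sketch of the map, shuffle/sort, and reduce phases described above, using the classic word-count example. An iterative algorithm would have to rerun this entire pipeline once per iteration, with files written between phases on a real cluster, which is exactly the inefficiency described.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit an intermediate (word, 1) pair per occurrence."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Shuffle/sort: group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a final result."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big ideas", "big data tools"]
print(reduce_phase(shuffle_phase(map_phase(docs))))
# {'big': 3, 'data': 2, 'ideas': 1, 'tools': 1}
```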

There’s a widely acknowledged talent gap. It can be difficult to find entry-level programmers who have sufficient Java skills to be productive with MapReduce. That's one reason distribution providers are racing to put relational (SQL) technology on top of Hadoop: it is much easier to find programmers with SQL skills than MapReduce skills. And Hadoop administration seems to be part art and part science, requiring low-level knowledge of operating systems, hardware, and Hadoop kernel settings.

Although AI researchers can train systems to win at Space Invaders, those same systems couldn't play games like Montezuma's Revenge, where rewards can only be collected after completing a series of actions (for example: climb down a ladder, get down a rope, get down another ladder, jump over a skull, and climb up a third ladder).

For these types of games, the algorithms can't learn, because playing them requires an understanding of concepts such as ladders, ropes, and keys. That understanding is something we humans have built into our cognitive model of the world, and something that can't be learned by the reinforcement-learning approach of DeepMind.
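
A toy sketch of why such sparse rewards defeat random exploration (the action names are illustrative, not taken from the actual game): the agent only ever sees a reward after the exact multi-step sequence, so almost every episode gives it no learning signal at all.

```python
import random

# Reward appears only after this exact 4-step sequence.
REQUIRED = ["down_ladder", "down_rope", "down_ladder", "jump_skull"]
ACTIONS = ["down_ladder", "down_rope", "jump_skull", "up_ladder"]

successes = 0
for episode in range(10_000):
    trajectory = [random.choice(ACTIONS) for _ in REQUIRED]
    if trajectory == REQUIRED:
        successes += 1  # reward of 1 only for the full sequence

# Chance per episode is (1/4)**4 = 1/256, so roughly 39 hits.
print(f"{successes} rewarded episodes out of 10,000")
```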
