7 STASHED IDEAS
The modern project of creating human-like artificial intelligence (AI) started after World War II, when it was discovered that electronic computers are not just number-crunching machines.
This paper focuses on the distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI).
Technology can push the boundaries of what reasoning the human mind considers acceptable. A “system” has no inherent morals beyond those its human engineers can synthesize and program. Artificial intelligence technologies are no different, and arguably matter more when the discussion turns to moral competency in the minds of machines. When the main purpose of a burgeoning technology is to emulate human reasoning, those innovating such solutions shouldn’t take lightly the responsibility of deciding which points of view will be represented.
Human nature (and nurture) dictates that we grow up forming opinions, and the more dangerous cousin of opinion: bias. However, most humans have also evolved to maintain their own personal set of ethics alongside those opinions and biases. Machines have not achieved such a feat. A machine that could is what the field calls Artificial General Intelligence, also known as AGI or Strong AI, the concept behind the most detrimental AI cliché you often see in sci-fi movies like The Terminator, The Matrix, and Ex Machina.
What are the Concerns?
More serious issues emerge when the teams developing artificial intelligence solutions are homogeneous and insular in nature. Humanity encompasses a great many perspectives and backgrounds from all over the planet, with varying experiences, upbringings, and knowledge bases. It’s vital for teams working on AI solutions to represent that reality, both amongst themselves and within the technology itself. Artificial intelligence must be as diverse and inclusive as the humans innovating it claim to be, so that it represents all of us as it evolves and not just the biases of a select few with exclusive access to an elite education. Diverse collaboration amongst developers of AI (and all technology) is imperative.
In basic terms, the goal of AI is to make computers think as humans do. This may seem like something new, but the field was born in the 1950s.
A common machine learning task is supervised learning, in which you have a dataset with inputs and known outputs. The task is to use this dataset to train a model that predicts the correct outputs based on the inputs: you fit the model to the labeled training examples, then check its predictions against the known outputs.
The goal of supervised learning tasks is to make predictions for new, unseen data. To do that, you assume that this unseen data follows a probability distribution similar to that of the training dataset. If this distribution changes in the future (often called distribution shift), then you need to train your model again using a new training dataset.
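The supervised learning loop described above can be sketched in a few lines of plain Python. This is a minimal illustration, not a prescribed implementation: the dataset, the linear relationship (y = 2x + 1 plus noise), and all variable names are made up for the example, and the "model" is just a line fit by ordinary least squares.

```python
import random

random.seed(0)

# Toy training dataset: inputs with known outputs, y = 2x + 1 plus noise.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]

# "Train" the model: fit slope and intercept by ordinary least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x):
    """Predict the output for a new, unseen input.

    This is only trustworthy if the new input comes from a
    distribution similar to the training data's.
    """
    return slope * x + intercept
```

If the process generating the data later changes (say, the true slope drifts away from 2), `predict` keeps extrapolating the old fit, which is exactly why the text says the model must be retrained on data from the new distribution.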