At its most fundamental, the stuff of computing is symbol structures: systems of symbols, that is, entities that ‘stand for’, represent, or denote other entities such as data, information, or knowledge.
Computing is symbol processing.
Any automaton capable of processing symbol structures is a computer.
The ‘phenomena’ associated with computers, as Perlis, Newell, and Simon suggested, are all ultimately reducible to symbol structures and their processing.
We may choose to call such symbol structures information, data, or knowledge depending on the particular ‘culture’ within computer science to which we belong.
Computer science is, ultimately, the science of automatic symbol processing, an insight which Allen Newell and Herbert Simon have emphasized.
The modern computer is a hierarchically organized system of computational artefacts.
Hierarchical organization is a means of managing the complexity of an entity.
Computational artefacts are made things; they process symbol structures signifying information, data, or knowledge (depending on one’s point of view and context).
There are three classes of computational artefacts.
One class is material. These artefacts, like all material objects encountered through history, obey the physical laws of nature. Computer hardware of every kind belongs to this class.
Some computational artefacts, however, are entirely abstract. They not only process symbol structures; they themselves are symbol structures, intrinsically devoid of any physicality. Physico-chemical laws do not apply to them: they neither occupy physical space nor consume physical time. Algorithms, methodologies, languages, and Turing machines are examples.
The third class of computational artefacts is the one that most lends strangeness to computer science. These artefacts are both abstract and material. More precisely, they are themselves symbol structures, and in this sense they are abstract; yet their operations cause changes in the material world, and their actions depend on an underlying material agent to execute them. Because of this dual nature, this class is called liminal (from a term meaning a state of ambiguity, of being betwixt and between). Computer programs, or software, constitute one vast class of liminal computational artefacts.
Computer science is the science of computational artefacts.
Herbert Simon called all the sciences concerned with artefacts (abstract, liminal, or material) the sciences of the artificial. They stand apart from the natural sciences because they must take into account goals and purposes. A natural object has no purpose.
The sciences of the artificial entail the study of the relationship between means and ends: the goals or needs for which an artefact is intended, and the artefact made to satisfy those needs. The ‘science’ in computer science is, thus, a science of means and ends. It asks: given a human need, goal, or purpose, how can a computational artefact demonstrably achieve that purpose? That is, how can one demonstrate, by reason, observation, or experiment, that the computational artefact satisfies that purpose?
In order for a procedure to qualify as an algorithm as computer scientists understand this concept, it must possess the following attributes (as first enunciated by Donald Knuth):
Finiteness. An algorithm always terminates (that is, comes to a halt) after a finite number of steps.
Definiteness. Every step of an algorithm must be precisely and unambiguously specified.
Effectiveness. Each operation performed as part of an algorithm must be primitive enough for a human being to perform it exactly (using, say, pencil and paper).
Input and output. An algorithm has zero or more inputs and one or more outputs.
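Knuth’s own canonical illustration of these attributes is Euclid’s procedure for the greatest common divisor. A minimal sketch in Python, with each of the four attributes noted where it applies:

```python
def gcd(m: int, n: int) -> int:
    """Euclid's algorithm: greatest common divisor of two non-negative integers."""
    while n != 0:            # Finiteness: n strictly decreases, so the loop halts.
        m, n = n, m % n      # Definiteness: each step is precisely specified.
    return m                 # Input and output: two inputs, exactly one output.

# Effectiveness: every operation (comparison, remainder) is primitive enough
# to carry out by hand with pencil and paper.
print(gcd(48, 36))  # → 12
```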
Donald Knuth (who perhaps more than any other person made algorithms part of the computer scientist’s consciousness) once described computer science as the study of algorithms. Not all computer scientists would agree with this ‘totalizing’ sentiment, but none could conceive of a computer science without algorithms at its centre. Much like the Darwinian theory of evolution in biology, all roads in computing seem to lead to algorithms. If to think biologically is to think evolutionarily, to think computationally is to form the habit of algorithmic thinking.
Programming languages, in contrast to natural ones, are invented or designed. They are, thus, artefacts. They entail the use of notation as symbols. As we will see, a programming language is actually a set of symbol structures and, being independent of physical computers, is abstract in exactly the same sense that algorithms are abstract. We thus have the curious situation that while programs written in such languages are liminal, the languages of programming themselves are abstract.
The physical computer is the fundamental material computational artefact of interest to computer scientists.
The view of the physical computer as an abstract, symbol processing computational artefact constitutes the computer’s architecture.
A given architecture can be implemented using different technologies. Architectures are not independent of technologies, in that developments in the latter influence architecture design; but the designer of computer architectures enjoys a certain autonomy, or ‘degrees of freedom’. Conversely, the design of an architecture might shape the kind of technology deployed.
Computer architectures are thus liminal artefacts. The computer architect must navigate delicately between the Scylla of the computer’s functional and performance requirements and the Charybdis of technological feasibility.
Many problems are not conducive to algorithmic solutions.
Even if one understands the problem well enough, and possesses knowledge about the problem domain, and can construct an algorithm to solve the problem, the amount of computational resources (time or space) needed to execute the algorithm may be simply infeasible.
From an evolutionary point of view, algorithms are not all there is to our ways of thinking. And so the question arises: what other computational means are at our disposal to perform such tasks? The answer is to resort to a mode of computing that deploys heuristics.
Heuristics are rules, precepts, principles, hypotheses based on common sense, experience, judgement, analogies, informed guesses, etc., that offer promise but are not guaranteed to solve problems.
Heuristic computing embodies a spirit of adventure!
There is an element of uncertainty and the unknown in heuristic computing.
A problem-solving agent (a human being or a computer) looking for a heuristic solution to a problem is, in effect, in a kind of terra incognita. And just as someone in an unknown physical territory goes into exploration or search mode, so does the heuristic agent: he, she, or it searches for a solution to the problem in what computer scientists call a problem space, never quite sure that a solution will be found.
Thus one kind of heuristic computing is also called heuristic search.
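What heuristic search looks like in code can be sketched with a simple hill-climbing procedure (the toy problem and scoring function below are illustrative assumptions, not drawn from the text): the agent moves through a problem space guided by a heuristic evaluation, with no guarantee that the state it stops at is the best one.

```python
def hill_climb(start, neighbours, score, max_steps=1000):
    """Greedy heuristic search: repeatedly move to the best-scoring
    neighbouring state in the problem space; stop at a (possibly local) optimum."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=score, default=None)
        if best is None or score(best) <= score(current):
            return current  # no neighbour improves on the current state
        current = best
    return current

# Toy problem space: the integers 0..20, seeking the peak of a scoring function.
score = lambda x: -(x - 13) ** 2
neighbours = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 20]
print(hill_climb(0, neighbours, score))  # → 13
```

With a bumpier scoring function, the same procedure can halt at a local peak and miss the global one; that uncertainty is exactly the ‘spirit of adventure’ described above.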
Many strategies that deploy heuristics, however, have all the characteristics of an algorithm, with one notable difference: they give only ‘nearly right’ answers to a problem, or they may give correct answers only to some instances of the problem.
Computer scientists, thus, refer to some kinds of heuristic problem solving techniques as heuristic or approximate algorithms.
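A standard illustration of such an approximate algorithm (chosen here as an example; it is not mentioned in the text) is the first-fit rule for bin packing. It terminates, is definite, and is effective, so it satisfies Knuth’s attributes, yet its greedy rule may use more bins than the optimum:

```python
def first_fit(items, capacity):
    """Approximate bin packing: place each item in the first bin with room,
    opening a new bin if none fits. Fast and definite, but not guaranteed
    to use the fewest possible bins."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

# The optimum here is 3 bins (pair each 4 with a 6); first-fit uses 4.
print(len(first_fit([4, 4, 4, 6, 6, 6], capacity=10)))  # → 4
```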
The term ‘heuristic computing’ encompasses both heuristic search and heuristic algorithms.
Most sciences in the modern era—say, after the Second World War—are so technical, indeed esoteric, that their deeper comprehension remains largely limited to the specialists, the community of those sciences’ practitioners. Think, for example, of the modern physics of fundamental particles. At best, when relevant, their implications are revealed to the larger public by way of technological consequences.
Yet there are some sciences that touch the imagination of those outside the specialists by way of the compelling nature of their central ideas. The theory of evolution is one such instance from the realm of the natural sciences. Its tentacles of influence have extended into the reaches of sociology, psychology, economics, and even computer science, fields of thought having nothing to do with genes or natural selection.
Among the sciences of the artificial, computer science manifests a similar characteristic. I am not referring to the ubiquitous and ‘in your face’ technological tools which have colonized the social world. I am referring, rather, to the emergence of a certain mentality.
This mentality, or at least its promise, was articulated passionately and eloquently by one of the pioneers of artificial intelligence, Seymour Papert, in his book Mindstorms (1980).
Papert’s vision was the inculcation of a mentality that would guide, shape, and influence the ways in which a person thinks about, perceives, and responds to aspects of the world (one’s inner world and the world outside) which prima facie have no apparent connection to computing, perhaps by way of analogy, metaphor, and imagination.
Over a quarter of a century after Papert’s manifesto, computer scientist Jeannette Wing gave this mentality a name: computational thinking.
But Wing’s vision is perhaps more prosaic than Papert’s was. Computational thinking, she wrote in 2008, entails approaches to such activities as problem solving, designing, and making sense of intelligent behaviour that draw on fundamental concepts of computing. Yet computational thinking cannot be an island of its own. In the realm of problem solving it would be akin to mathematical thinking; in the domain of design it would share features with the engineering mentality; and in understanding intelligent systems (including, of course, the mind) it might find common ground with scientific thinking.
Like Papert, Wing disassociated the mentality of computational thinking from the physical computer itself: one can think computationally without the presence of a computer.