
**Prologue**
My thoughts on computing are given here in the form of a set of lectures. This is how I, consciously or unconsciously, think and operate when I am working. But it is not only philosophy; it is an actual course on advanced scientific computing. As a matter of fact, some years ago this course was presented by me (in a slightly expanded version) before an audience at NRCPS Demokritos in Aghia Paraskevi, Athens, Greece.

Advanced Computing Course - Lecture #1

Copyright © Constantin S. Chassapis - amorphous.art

Version 1: 16/11/94. Version 2: 11 Apr 2000. V2.1 May 14, 2000

When we solve a problem, any problem, not necessarily a mathematical one, we not only discover a solution to the problem but we also invent a language to formulate it, to speak about it. Under that prism, we may see computing as the science and art of inventing, given a problem, the proper symbols and rules to solve it in an optimal automatic way.

Philosophically speaking, we act on the reality/problem by the formation of a language (set of rules) and an alphabet (the symbols) in order to speak about it and thus understand-explain (and resolve) it. Niels Bohr, speaking about quantum physics once said that ...

It is wrong to think that the task of physics is to find out how nature is. Physics concerns only what we can say about nature.

Theories (i.e. structured sets of symbols and rules), do not tell how the world (reality) is, they tell what we can predict reliably about the behaviour of the world.

“As far as the propositions of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality”

(A. Einstein).

When I am in trouble understanding the world around me (or inside me), I put the following scheme to use in my mind.

What that means (returning to computing!), is, that there is no sole solution to a given problem, but many acceptable ones. The science is to find them. The art is to find the best one, quickly!

*A computer is a machine that manipulates symbols according to a well defined set of rules. What symbols and what rules, are up to us.*

This definition is so compact that it requires further explanation. There seem to be two avenues to pursue: the symbols being manipulated, and whatever is doing the actual manipulation.

The symbols we are referring to may be the elementary electrical signals in the circuitry of the computer, with the set of rules being the circuitry itself; or the symbols may even be complete programs, with the set of rules being the operating system itself. Somewhere in the middle reside numbers (the symbols) and arithmetic (the set of rules), or alphanumeric characters and computer languages. Imagine next programmable programs, or compiler compilers, and you start to see that the relativity emerging does not reside just at the level of naming. There is a generalised relativity in the way one chooses to see symbols (there is a tendency nowadays in the field of computer science to place more emphasis on symbols than on the rules manipulating them; this shift is called the object-oriented programming paradigm (OOPP), in contrast to the procedure-oriented programming paradigm (POPP) of the past, see ref. 8 for details) and in the way symbols are manipulated in an automatic way by a computer. Relativity is to be understood in the sense of different possible levels of **symbolic reality**.

The essential point to be understood is the following. *Symbols* are the way the human spirit generalises and condenses a certain degree of regularity it sees when trying to resolve a question, be it the question of God, of mind, or of a system of linear (or nonlinear) equations. The *set of rules*, on the other hand, is the condensed regularity in the behaviour of the objects someone has identified as his symbols at that particular instant. One clearly sees the relativity of observation. What will be the set of symbols and what will be the set of rules is not only not a well-defined question, but also a question residing very much in the sphere of Art, even in the case where the symbols are the integer numbers.

One may object here that a computer is a machine with a very well-defined set of rules, as we remarked at the beginning, and therefore a machine whose behaviour is well defined and deterministic; Art has nothing to do with computers. Well, the person who understands most about a machine is its designer, and no designer of a complex machine would claim that everything about it was perfectly understood. That is, *we do not know everything about machines* [3].

Crudely speaking, a computer is just a device ... like a laser. But to leave it at that is grossly misleading. As a device, the laser is provocative indeed, but in the final analysis it is only a light source. As such, the laser is a means to the experimentalist’s ends, and is limited by the fertility of the experimentalist’s imagination. By contrast, a computer can take our ideas, expand on them, and play one idea against another, with the result that this “device” is an idea-enhancer [9].

On a computer, at the level where the symbols are the signals and the rules the circuitry, matters are well understood. But when the symbols are the character set of some language, and the rules that language encompasses dance under the guidance of a good programmer, then no one can answer “simple” questions such as “will this program ever halt?” (which is a simple question indeed, compared to trying, for example, to understand the proper epistemological role of physics simulation; see ref. 10 for details). There is no automatic way of determining whether a particular program will ever stop or not. The answer resides somewhere where Art also resides. This is a reflection of the incompleteness theorem (1931) of Kurt Goedel.

David Hilbert said around 1928 that if we are going to have any fundamental system for all of mathematics it must satisfy three basic requirements: **consistency, completeness and decidability**. Consistency means that you won’t ever get a contradiction in your own system. Completeness means that if any statement is true, there must be some way of proving it by using the rules of your system. And decidability means that there must exist some method, some definite procedure or test, which can be applied to any given mathematical assertion and which will decide whether that assertion is provable.

**Hilbert thought this was a very reasonable set of requirements to impose, but within a few years Goedel showed that no system for mathematics could be both consistent and complete.**

In computational terms, Goedel’s original result is essentially the assertion that one cannot always prove that a program will fail to halt; he had, however, formulated his proof by means of arithmetic.

Alan Turing worked on the decision problem some years later (around 1937) and showed that there can be no single method that will work for all questions.

What will distinguish our approach to computing from many other approaches is that we will discuss computer programming as an aesthetic activity. The computer is not a musical instrument, but at a slightly more abstract level one could truthfully call it the greatest keyboard instrument of the twentieth century. Just as the piano allows us to create beauty in structured patterns of sound, the computer *is* a tool for finding beauty in information structures. Most information structures represent, or should represent, something: “representation is the essence of programming”. But this is a statement which we can make about Art too.

Our approach will also be far less abstract than a numerical methods course. We will emphasise rethinking our assumptions about things and rethinking our way of looking at things. Course credit will be given not only for formal correctness of programs but also for clarity, originality and exploration. We will try to fill the (huge) gap between a first course on computing (say, a Fortran course) and the way a researcher defines and resolves a real-world problem in a scientific paper. We will always keep in mind that

The purpose of computing is insight, not numbers! (R. W. Hamming)

Let us now elaborate a little on the lowest possible level at which symbols are well defined and manipulated on a computer. We will use logic all the time, but we are not going to prove, say, De Morgan’s laws. We will discuss in detail why 2 = 10, what the AND and OR operators are, why binary arithmetic is so fundamental, and what a bit and a byte are.

One may represent a number in more than one way. 47, for example, is 47 in the decimal system, the system with base 10: 47 = 4×10^{1} + 7×10^{0}. But 47 is also 101111 in the binary system, the system with base 2.

Proof: 47 = 1×2^{5} + 0×2^{4} + 1×2^{3} + 1×2^{2} + 1×2^{1} + 1×2^{0} = 32 + 8 + 4 + 2 + 1.
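
This expansion can be checked mechanically; a minimal Python sketch (the language is, of course, an anachronism here):

```python
# Check the expansion of 101111 (base 2) digit by digit.
digits = [1, 0, 1, 1, 1, 1]          # most significant digit first
value = sum(d * 2 ** p for d, p in zip(digits, range(5, -1, -1)))
print(value)             # 47
print(int("101111", 2))  # 47: Python's built-in base conversion agrees
```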

When we write an integer number, we imply a relation of the type

(1) d_{n} ... d_{1}d_{0} = (d_{n} ... d_{1}d_{0})_{b} = d_{n}b^{n} + ... + d_{1}b^{1} + d_{0}b^{0}

In such a relation, b is called the base and the d_{i} are the digits of the number and may be any one of {0,1,2,...,b-1}. If now x is the computed value of a real number x_{T}, then

(2) e_{x} = (x - x_{T}) / x

is the relative or inherent error in x and if a computer uses base b for conversion and rounds symmetrically to a t-digit mantissa, then

(3) |e_{x}| ≤ (b/2) b^{-t}

Usually b=2, and relation (3) shows us that real numbers cannot be exactly represented by the arithmetic system of a computer. That does not mean that we cannot perform, say, 200-digit-precision arithmetic on a computer. It just means that the precision of the electronics which perform arithmetic (the ALU) is finite. We are free to program our arithmetic (at a sufficiently higher level of symbolic reality) with whatever precision we like.
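
For concreteness: Python's floats are IEEE-754 doubles, for which b = 2 and t = 53, so the bound of relation (3) is 2^{-53}. A minimal sketch, assuming such a machine:

```python
import sys

# Relation (3) with b = 2 and t = 53 (IEEE-754 double precision):
bound = (2 / 2) * 2 ** (-53)
# Python reports the machine epsilon 2**(-52); the symmetric-rounding
# bound of relation (3) is half of it.
print(bound == sys.float_info.epsilon / 2)  # True

# A real number such as 0.1 has no exact base-2 representation:
print(f"{0.1:.20f}")  # the stored value only approximates 0.1
```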

Now, why binary system?

Because we can do addition and all the other arithmetic and logical operations easily and rapidly in the binary system, and easy-to-build electronic circuits exist for exactly that purpose.

To fully convince you of the complete equivalence of the decimal and binary systems, let us add two numbers in the two systems:

  5        101
+ 2      + 010
____     _____
  7        111

After a quick glance one realises that the two operations are equivalent and give the same result.

Now, the way to transform any decimal number into its binary equivalent is to repeatedly divide the number by two until the quotient becomes zero, and then read the remainders in reverse order.

For transforming a binary number into its decimal equivalent, we just multiply each of its digits by the corresponding power of two and sum up the results, as shown above.
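
The two procedures can be sketched in Python (the helper names are my own, chosen for illustration):

```python
def to_binary(n):
    """Repeatedly divide by two; the remainders, read in reverse, are the bits."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient and remainder of division by two
        bits.append(str(r))
    return "".join(reversed(bits))

def to_decimal(bits):
    """Multiply each digit by the corresponding power of two and sum."""
    return sum(int(d) * 2 ** p for p, d in enumerate(reversed(bits)))

print(to_binary(47))         # '101111'
print(to_decimal("101111"))  # 47
```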

The gates which perform these very elementary operations, and which are at the heart of every microprocessor and of any computer, are shown below in positive logic, i.e. with 1 represented by the high positive potential.

[Gate diagrams not reproduced]

Commercially we mainly find NAND (that is, NOT-AND) and NOR (NOT-OR) gates, but, thanks to De Morgan, we have no problem, since

(4) NOT(A AND B) = (NOT A) OR (NOT B), and

(5) NOT (A OR B) = (NOT A) AND (NOT B)
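
Relations (4) and (5) can be verified exhaustively, since each variable takes only two values; a minimal Python sketch:

```python
# Check De Morgan's relations (4) and (5) over all four input combinations.
for A in (False, True):
    for B in (False, True):
        assert (not (A and B)) == ((not A) or (not B))   # relation (4)
        assert (not (A or B)) == ((not A) and (not B))   # relation (5)
print("relations (4) and (5) hold for all inputs")
```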

Now, what can we do with these gates? We can add and we can remember. Once we realise that these are precisely the elementary but fundamental operations necessary for the construction of a computer, we have got the point!

**Binary adder:**

If you go through the circuit you will verify that it really performs binary arithmetic addition of one-digit-long binary numbers and provides, apart from their Sum, the Carry of the addition.
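
In lieu of the circuit diagram, the same behaviour can be sketched in Python, assuming the standard half-adder construction (the Sum is the XOR of the inputs, the Carry their AND):

```python
def half_adder(a, b):
    """Add two one-digit binary numbers; return (Carry, Sum)."""
    return a & b, a ^ b

for a in (0, 1):
    for b in (0, 1):
        carry, s = half_adder(a, b)
        print(f"{a} + {b} = {carry}{s}")  # e.g. 1 + 1 = 10 in binary
```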

**Flip-Flop:**

This *is* a memory constructed with logical gates. If S=0 and R=0 it remembers its previous state.
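
The remembering behaviour can be simulated; a sketch assuming the usual construction from two cross-coupled NOR gates (the diagram is not reproduced here):

```python
def sr_latch(S, R, Q):
    """Cross-coupled NOR gates; iterate the feedback loop until it settles."""
    Qbar = 1 - Q
    for _ in range(4):              # a few passes suffice for the loop to settle
        Q = int(not (R or Qbar))    # first NOR gate
        Qbar = int(not (S or Q))    # second NOR gate
    return Q

Q = sr_latch(1, 0, 0)  # Set: Q becomes 1
Q = sr_latch(0, 0, Q)  # S = 0 and R = 0: the latch remembers its state
print(Q)               # still 1
Q = sr_latch(0, 1, Q)  # Reset: Q becomes 0
print(Q)               # 0
```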

This is all we need to know in order to dispel any myths around hardware.

Computers, as we have previously pointed out, are machines that manipulate symbols according to a specified set of rules, and that's all! The digital circuitry, some elements of which we just saw, is the actual manipulator, and binary digits (bits) are the symbols (a byte is just an eight-bit unit). All these strange things we saw constitute the lowest level at which one can talk about and describe computers. At a higher (logical) level we find the ALU (arithmetic logic unit), the RAM, the ROM, and the VLSI chips charged with the I/O; at an even higher level we find the operating system (O.S.) routines; and at the highest level, on our Macs for example, we find this superb GUI (graphical user interface).

The O.S. is everything that lies between the hardware and our commands. That is, when we tell the computer to load something from the floppy disk, many circuits are automatically activated and a sequence of totally different movements takes place. The O.S. commands are a kind of macro-commands, since when we type them we invoke full procedures, burned into ROM or residing on the storage medium itself, which orchestrate the hardware actions.

Hierarchically we may therefore visualise the whole set of rules constituting the computer-complex as below:

APPLICATIONS, INCLUDING THE ONES WE BUILD *(high logical level)*

OPERATING SYSTEM *(low logical level)*

HARDWARE *(Physical level)*

A flowchart is just a way of organising, describing and possibly optimising an algorithm.

Never forget, programs are descriptions.

The word "algorithm" comes from the name of the ninth-century Persian mathematician Abu Ja’far Mohammed ibn Musa *al-Khowarizmi*, who wrote a mathematical textbook in about 825 AD entitled "Kitab al jabr w’al-muqabala". The way the word "algorithm" has now come to be spelt, rather than the earlier, more accurate "algorism", seems to have been due to an association with the word "arithmetic". (Note that the word "algebra" also comes from the title of that book.)

An algorithm is a systematic procedure for applying the rules one has set (for oneself or for a machine) to the symbols associated with those rules, so as to obtain, in a finite number of steps, a solution to a problem. It is perhaps remarkable that, despite the ancient historic origins of specific examples of algorithms (remember Euclid’s algorithm), the precise formulation of the concept of a general algorithm dates only from this century.
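
Euclid’s algorithm itself makes a compact example of the definition; a Python sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: replace (a, b) by (b, a mod b) until b vanishes."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```

The rules (one substitution step, one stopping test) and the symbols (a pair of integers) are fixed in advance, and the procedure terminates in a finite number of steps.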

What Hilbert had asked for, when he spoke of decidability, was no less than a general algorithmic procedure for resolving mathematical questions. **Turing had to formulate the notion of an algorithm in a very precise way, and arrived in this way at his result that Hilbert’s decidability question had no universal solution.**

ENIAC was the first operational electronic computer ever constructed; the University of Pennsylvania was its constructor. ENIAC’s operation started in the summer of 1947. It was a base-ten computer in which an addition took 200 µs and a storing operation 6000 µs, and it had just 20 words of memory. Its personnel was 11 engineers and 13 mathematicians! It covered a surface of 168 m² and consumed 200 kW of power.

Closing this lecture we should discuss a final issue. There is no real need to think in terms of keyboards, circuits, monitors, chips or menus, windows and events.

From now on, we will use the term *processor* to represent everything in the computer complex except the main subject we explicitly deal with at a certain moment. The processor may be revealed to us at many levels, depending on the magnification we wish to apply, and this has to do with what we are discussing at the moment. For example, when we are programming in Fortran, everything except the program we are building is the processor: the compiler, the O.S., the low-level routines, the hardware.

Note a fundamental point: we can extend our notion of the processor any time we like, in the following way: we may build a program and incorporate it *in* the processor. By doing so we can clearly understand how, for example, we may program a program, which is what happened to that old magician Merlin, who became programmable. But this is another story, for another lecture.
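
As a toy illustration of this idea (everything in it is my own invention, not part of the lecture): a Python program can write a second program as text and then hand it to the processor for execution, so that the new rules become part of the processor itself.

```python
# Build a new rule as program text; the function "double" does not exist yet.
source = "def double(x):\n    return 2 * x\n"

namespace = {}
exec(source, namespace)         # the generated program now lives in the processor
print(namespace["double"](21))  # 42
```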

1. “The emperor’s new mind”, R. Penrose, Oxford University Press, 1989, chap 2.

2. “Communicating with alien intelligence”, M. Minsky, Byte, April 1985.

3. “The mechanical Mind”, H. Barlow, Annu. Rev. Neurosci. 1990.13:15-24.

4. “Mathematics of programming”, C.A.R. Hoare, Byte, August 1986.

5. “Advanced programming and problem solving with Pascal”, G.M. Schneider & S.C. Bruell, John Wiley & Sons, 1981.

6. “Electronic fundamentals and applications for engineers and scientists”, J. Millman, C.C. Halkias, McGraw-Hill, 1978, chapter 6.

7. “How the laws of physics lie”, N. Cartwright, Oxford University Press, 1983.

8. “The object-oriented programming paradigm (OOPP) and fortran programs”, K. Dean Wampler, Computers in Physics, 385-394, Jul/Aug 1990.

9. “Physics: Ideas and Devices? Or, Devices and Ideas?”, J. S. Rigden, Computers in Physics, 360, May/Jun 1991.

10. “The last Positivist”, B. Bacon, Computers in Physics, 112, Sep/Oct 1989.



Last updated: June 7, 2008