Brief introduction to how computers work
Have you ever wondered why computers work the way they do? This is a common question today, when computers are ubiquitous and the number of casual users far exceeds the circle of qualified computer scientists. It is said that a child can use them, but no one fully understands them. Taking that saying with a pinch of salt (after all, we did create computers), let's look into the basic principles behind the computer.
This article is intended for anyone who is interested in the fundamentals of computers. No prior knowledge is assumed.
What is a computer?
This question is more important than it might at first seem. A computer is any machine that can perform an arbitrary program. A program is a list of instructions, much like a kitchen recipe or a cycle on a washing machine. However, what sets a computer apart from a washing machine is that a computer can execute any program, while a washing machine can only run the ten or so cycles it was designed for.
Formally, a computer is a machine that can compute any partial recursive function. Such a machine can be built in many ways, but one efficient implementation, shared by all modern computers, is the von Neumann architecture.
[These dimmer paragraphs will give more technical detail throughout the article, but it is not necessary to understand this detail to follow the article.]
How electronics work
Inputs, outputs, and decision-making
In order to have a computer, we must have some physical hardware (today implemented with electronics). The actions a computer performs are like measuring out flour in a kitchen recipe: the idea is to pour until the scales read the desired value. A computer could control an automatic valve with flour behind it, so the action (output) it needs to perform is really just an electrical signal.
An electrical signal is a particular potential difference (voltage) between two wires. A common convention is 5V for 'on' and 0V for 'off' (modern chips often use lower voltages, but the principle is the same).
We also need to know the current amount of flour. Again we can have a sensor tell us this, so the input is also an electrical signal.
Finally, we must be able to decide when to send which electrical signal (to open or close the valve), so we need a means of decision-making. This is performed by an electronic component called a transistor, which can be thought of as an automatic switch (much like a relay): when we feed a particular voltage into its control input, it turns the output circuit on or off. In our example, if the sensor says there is enough flour, then we close the valve; otherwise we keep it open.
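Written as Python rather than wiring, the decision is tiny; the names below are invented stand-ins for the electrical signals described above:

    # A minimal sketch of the decision, assuming 'enough_flour' is the
    # on/off signal from the sensor; the returned value is the on/off
    # signal sent to the valve.
    def valve_signal(enough_flour):
        return not enough_flour   # keep the valve open until there is enough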
To take this further, consider using two transistors. If we put them in series (like lights on a Christmas tree), both will need to be given the 'on' signal in order to turn the output 'on'. Alternatively, if we put them in parallel, the output will be 'on' if either of the inputs is on. There are altogether 16 ways to turn two on/off inputs into one on/off output, and they can all be made from small networks of transistors wired up in different ways. These transistor configurations are called logic gates and are the basis for decision-making in a computer.
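A simplified Python model of this, treating each transistor as a switch and ignoring all electrical detail:

    # A transistor modelled as a switch: it conducts only while its
    # control input is 'on' (True).
    def series(a, b):
        return a and b    # two switches in series: both must be on (AND)

    def parallel(a, b):
        return a or b     # two switches in parallel: either suffices (OR)

    # Print the full truth tables of both wirings:
    for a in (False, True):
        for b in (False, True):
            print(a, b, '->', series(a, b), parallel(a, b))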
Further discussion of binary gates is beyond the scope of this article, but a couple of points are worth mentioning. There are numerous implementations of each gate, depending on the electrical properties required (e.g. the speed with which they adjust the output to new inputs); some designs use more than two transistors as well as other electronic components. Gates with more than two inputs also exist, but these can be built out of gates with two inputs, so they are not of interest here. Finally, electronics with more than two states per wire (e.g. three states such as 5V, 0V, -5V) have also been tried experimentally, but their merits are outweighed by design difficulties.
From various combinations of 2-input binary gates (which collectively have an arbitrary number of inputs and outputs), one can create a 'machine' that makes an arbitrarily complex decision. The decision machine is designed so that for a particular combination of inputs, the output(s) will always be the same. If you imagine that the inputs represent two numbers, then the output could for example be the sum of those numbers - or something completely different, depending on what the designer intends. You may now be able to see how calculators work. However, calculators are not computers according to my definition, because they always do the same thing.
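As a sketch of such a decision machine, here is a one-bit adder in Python built only from two-input gates; chaining copies of it adds whole numbers, which is essentially the adding circuit of a calculator:

    # Three of the 16 two-input gates, acting on bits (0 or 1):
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def XOR(a, b): return a ^ b

    # A 'full adder': adds two bits plus a carry from the previous
    # position, producing a sum bit and a new carry, built entirely
    # from two-input gates.
    def full_adder(a, b, carry_in):
        partial = XOR(a, b)
        sum_bit = XOR(partial, carry_in)
        carry_out = OR(AND(a, b), AND(partial, carry_in))
        return sum_bit, carry_out

    # Chain full adders to add whole numbers, least significant bit first:
    def add_bits(xs, ys):
        carry, result = 0, []
        for a, b in zip(xs, ys):
            s, carry = full_adder(a, b, carry)
            result.append(s)
        return result + [carry]

    print(add_bits([1, 1, 0, 0], [1, 0, 0, 0]))   # 3 + 1 -> [0, 0, 1, 0, 0] (= 4)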
There is one additional component to computers, and this is memory. An element of memory is a piece of hardware that remembers an input that was once given to it and can repeat it later an arbitrary number of times. The implementation is not important to this discussion, but it has historically been a difficult problem to solve. One element of memory generally holds one 'bit' of information, which is defined as a value that can be either 'on' or 'off'. The memory in a computer is therefore composed of many such elements, and each element has an address by which it can be accessed.
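For the curious, one classic way to build such an element from gates is to feed two gates' outputs back into each other. Here is a simplified Python model of this 'latch' (a sketch only; real latches are electrical circuits that settle into a stable state):

    def NOR(a, b):
        return not (a or b)

    # Two cross-coupled NOR gates: each gate's output feeds the other's
    # input, so the pair settles into a state that persists after the
    # inputs are removed.
    class SRLatch:
        def __init__(self):
            self.q, self.q_bar = False, True
        def update(self, set_bit, reset_bit):
            for _ in range(4):                  # let the feedback loop settle
                self.q = NOR(reset_bit, self.q_bar)
                self.q_bar = NOR(set_bit, self.q)
            return self.q

    bit = SRLatch()
    bit.update(True, False)    # store 'on'
    bit.update(False, False)   # the input is gone...
    print(bit.q)               # ...but the 'on' is remembered -> True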
Memory is now implemented using transistors wired in feedback loops, but a number of wacky solutions have been tried along the way, including tiny coils with magnetic fields (magnetic-core memory) and long mercury-filled tubes that carried sound waves (delay-line memory).
What makes a machine a computer
Imagine that the inputs are not wired to sensors but to an arbitrary source (e.g. some memory). Then the decision machine could be used to solve different instances of the same problem. If its outputs were routed so that they change this source, and the change feeds through the same decision machine again, then the machine could perpetually compute a particular function.
Finally, recall that there are only 16 basic functions that take 2 on/off inputs. Say that we have implemented all of them with gates, and any one can be selected to compute and save into memory a new output based on two bits that are currently in memory. This means that we can simulate a number of gates connected to each other simply by computing them one-by-one and saving intermediate results into memory. Put another way, we do not need a complex combination of gates to perform the task of a 'simple calculator' described above - instead we can do the same task with a list of 'instructions' which specify the order in which to use the gates and, for each step, the memory addresses of the two input bits and one output bit that should be used. This is the basis of the computer processor (CPU).
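The following Python sketch is a minimal model of that idea; the instruction format (a truth table plus three memory addresses) is invented for illustration and is far simpler than any real CPU:

    # Each of the 16 two-input functions is fully described by its truth
    # table: the outputs for inputs (0,0), (0,1), (1,0), (1,1).
    def apply(table, a, b):
        return table[2 * a + b]

    AND = (0, 0, 0, 1)   # two of the 16 possible tables
    OR  = (0, 1, 1, 1)

    def run(program, memory):
        # program: list of (truth_table, addr_a, addr_b, addr_out)
        for table, ia, ib, iout in program:
            memory[iout] = apply(table, memory[ia], memory[ib])
        return memory

    # Compute (m[0] AND m[1]) OR m[2] into m[3], using m[4] as scratch space:
    mem = [1, 0, 1, 0, 0]
    run([(AND, 0, 1, 4), (OR, 4, 2, 3)], mem)
    print(mem[3])   # -> 1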
What I have described is a computer that executes a fixed program read from memory. The program uses memory to store intermediate state during the computation (after each execution of one of the 16 gates). However, there is nothing to say that the memory the program is in must be separate from the memory the intermediate state is in. We can take this further by asserting that the two memories are actually the same. This introduces two new possibilities:

- A program can change its own instructions while it runs, since they sit in the same memory it writes to.
- Programs can be treated as data: they can be read in, stored, copied, and even produced by other programs.
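Continuing the toy encoding from the previous sketch (still invented purely for illustration), the following Python lines put instructions and data in the same list; the first instruction overwrites part of the second, so the program changes its own behaviour while running:

    # Each instruction occupies four cells: an operation code
    # (1 = AND, 2 = OR, 0 = halt) followed by three addresses.
    def run(memory):
        pc = 0                                # address of the next instruction
        while memory[pc] != 0:                # opcode 0 means 'halt'
            op, ia, ib, iout = memory[pc:pc + 4]
            a, b = memory[ia], memory[ib]
            memory[iout] = (a and b) if op == 1 else (a or b)
            pc += 4

    mem = [1, 12, 13, 4,     # AND cells 12,13 -> cell 4 (the NEXT opcode!)
           1, 12, 13, 14,    # AND cells 12,13 -> cell 14
           0, 0, 0, 0,       # halt (plus padding)
           1, 0, 0]          # data in cells 12 and 13; result cell 14

    # AND(1, 0) = 0 is written into cell 4, turning the second instruction
    # into a halt: the program has erased part of itself.
    run(mem)
    print(mem[14])   # -> 0: the second instruction never ran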
How computers work in practice
In the previous sections, I described how transistors can be put together to create a machine capable of executing an arbitrary list of instructions, including making decisions and changing itself. To use it, we need to initialize the memory to some program that we've designed, start the machine, wait until the program has finished, and then read off the result from memory.
Reading raw memory (essentially a list of 'on' or 'off' values) can be done in various ways. For example, the reader could be a device that requires the data, such as the flour valve, which opens and closes depending on the signal stored in memory. If the data is meant for a human, it may be used to light up black and white dots on a screen in a pattern readable as letters and numbers.
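As a small illustration, eight bits can be read as a number between 0 and 255, and a standard table (ASCII) maps such numbers to letters; a Python sketch:

    bits = [0, 1, 0, 0, 0, 0, 0, 1]                  # eight on/off values
    number = int(''.join(str(b) for b in bits), 2)   # 01000001 -> 65
    print(number, chr(number))                       # -> 65 A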
The input mechanism for altering the program (or data) in memory could involve the flour sensor, a hard disk, a keyboard, etc.
Role of the operating system
Much of the task of communicating with input and output devices is repetitive and possibly complicated, and it can also vary depending on the type or model of the device. This is where operating systems come in. An operating system is a program whose job is to provide an 'interface' to these devices, so that other programs don't have to deal with them directly. This means it can be asked to get some data (e.g. from the hard disk) and put it in a particular memory location for some other program to use; that program only needs to know where to look and does not need to contain instructions for communicating with the device. Examples of operating systems are Windows, Linux, and Mac OS.
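In day-to-day programming this shows up as simple requests to the operating system. In Python, for example, a program can ask for a file's contents without containing any disk-handling instructions (the filename here is hypothetical):

    # The program names the data it wants; the operating system does the
    # actual talking to the disk and places the bytes in memory for us.
    with open('recipe.txt', 'rb') as f:
        data = f.read()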
It may also be desirable to run multiple programs on one computer. However, the designers of these programs cannot know about this fact at design time, so the programs may accidentally be made to use the same part of memory. This is another function of operating systems: to allocate different parts of memory to simultaneously running programs, and also to decide which of them gets to use the shared CPU at any moment. Various schemes have been invented for the operating system to do these tasks.
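One of the simplest CPU-sharing schemes is 'round robin': each program gets a short turn in a fixed rotation. A toy Python sketch, with invented program names (real schedulers are far more sophisticated):

    from collections import deque

    # Each 'program' is just a name and an amount of work left to do.
    def round_robin(programs, time_slice=1):
        queue = deque(programs)                    # programs waiting for the CPU
        while queue:
            name, work_left = queue.popleft()
            work_left -= time_slice                # run for one short turn
            if work_left > 0:
                queue.append((name, work_left))    # back to the end of the line
            else:
                print(name, 'finished')

    round_robin([('editor', 2), ('music player', 3)])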
Making a computer program boils down to writing a list of instructions that the CPU can execute. Although the principles I described for the function of the CPU are correct, they are a big simplification of what actually happens. The CPU doesn't manipulate individual on/off states (bits) in memory; it manipulates larger chunks called words. (Typically a word is 32 or 64 bits.) A CPU typically supports tens of instructions, which include:

- copying a word from one memory location to another
- performing arithmetic (addition, subtraction, etc.) on words
- comparing words with each other
- jumping to a different place in the instruction list, possibly depending on the result of a comparison
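As a hedged illustration of what such word-level instructions look like, here is a toy program in Python; the names LOAD, ADD, and STORE and the single 'register' are simplified inventions, not any real CPU's instruction set:

    memory = [7, 5, 0]                 # two input words and a result slot

    program = [
        ('LOAD',  0),                  # copy memory[0] into a CPU register
        ('ADD',   1),                  # add memory[1] to the register
        ('STORE', 2),                  # copy the register back into memory[2]
    ]

    register = 0
    for op, addr in program:
        if op == 'LOAD':    register = memory[addr]
        elif op == 'ADD':   register += memory[addr]
        elif op == 'STORE': memory[addr] = register

    print(memory[2])   # -> 12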
Instruction sets further divide into complex and simple (see CISC and RISC).
To address the problems of simple instruction sets and CPU-dependence, programming languages have been developed. They provide ways of writing programs that are more human-readable and use higher-level concepts, such as 'read a line of text from a file'. By contrast, in machine code we would need to ask the operating system to fetch letters one by one from the hard disk, keep track of them, and for each one decide whether it marks the end of the line. However, to run this high-level description, it needs to be translated into machine code that the CPU understands. Fortunately, this is done by dedicated programs called 'compilers'.
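To make the contrast concrete, here is the high-level version in Python next to a sketch of the letter-by-letter work it hides; the filename is hypothetical:

    with open('story.txt') as f:
        line = f.readline()               # 'read a line of text from a file'

    # Roughly what that one call hides: fetching characters one by one
    # and checking each for the end-of-line marker.
    def read_line_by_hand(f):
        letters = []
        while True:
            ch = f.read(1)                # ask for one character at a time
            if ch == '' or ch == '\n':    # end of file or end of line
                return ''.join(letters)
            letters.append(ch)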
Today, there is a multitude of programming languages for different purposes; examples are C, C++, Java, and Python. They differ in the concepts they abstract and are therefore suited to different tasks. Modern operating systems are almost without exception written in C or C++ and compiled to machine code for the different CPUs they run on (e.g. Intel Pentium 4, Intel Core, etc.).