A computer is a device that can be programmed to carry out sequences of arithmetic or logical operations automatically. Modern computers can follow generalized sets of operations called programs, which enable them to perform a wide range of tasks.
Computers are used as control systems for a wide variety of consumer and industrial devices. These include simple special-purpose devices such as microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design systems, and general-purpose devices such as personal computers and mobile devices such as smartphones.
Early computers were intended only for calculation. Since ancient times, simple manual devices such as the abacus have aided people in doing arithmetic. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines performed specialized analog calculations in the early twentieth century. The first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers have improved dramatically ever since.
Typically, a modern computer consists of at least one processing element, usually a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripherals allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.
Etymology of the word computer
According to the Oxford English Dictionary, the first known use of the word “computer” was in 1613, in a book called The Yong Mans Gleanings by the English writer Richard Braithwait: “I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number.” This use of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the twentieth century. Initially, women were often hired as “human computers” because they could be paid less than their male counterparts. By 1943, most human computers were women. From the end of the nineteenth century onward, the word gradually took on its now-familiar meaning: a machine that carries out computations.
The Online Etymology Dictionary gives the first attested use of “computer” in the 1640s, meaning “one who calculates”; this is an “agent noun from compute (v.)”. It dates the use of the term to mean “calculating machine” (of any type) to 1897, and notes that the “modern use” of the term, meaning “programmable digital electronic computer”, dates from 1945 under this name, and theoretically from 1937, as “Turing machine”.
Before the twentieth century
For thousands of years, devices have been used to aid computation, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay shapes such as spheres and cones) which represented counts of items, probably livestock or grain, sealed in hollow unbaked clay containers. The use of counting rods is one example of this kind. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
According to Derek J. de Solla Price, the Antikythera mechanism is believed to be the earliest mechanical analog “computer”. It was designed to calculate astronomical positions and was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete; it has been dated to about 100 BC. Devices of comparable complexity to the Antikythera mechanism did not reappear until a thousand years later.
Many mechanical instruments were developed for astronomical and navigational purposes. The planisphere, a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century, was one such device. The astrolabe was invented in the Hellenistic world in either the first or second century BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan in 1235. Around 1000 AD, Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels.
The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying, and navigation.
The planimeter was a manual instrument for calculating the area of a closed figure by tracing over it with a mechanical linkage.
The slide rule was invented around 1620–1630, shortly after the introduction of the concept of the logarithm, as a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry, and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.
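The principle behind the slide rule is that multiplication becomes addition in logarithmic space: log(a) + log(b) = log(ab). A brief Python sketch (the function names here are illustrative, not standard terminology) shows the idea:

```python
import math

def slide_rule_multiply(a, b):
    """Multiply two positive numbers the way a slide rule does:
    add their logarithms, then take the antilog."""
    return math.exp(math.log(a) + math.log(b))

def slide_rule_divide(a, b):
    """Division becomes subtraction of logarithms."""
    return math.exp(math.log(a) - math.log(b))

print(round(slide_rule_multiply(3, 7), 6))  # 21.0
print(round(slide_rule_divide(21, 3), 6))   # 7.0
```

On a physical slide rule, the addition of logarithms is done by sliding two logarithmically ruled scales past each other, which is why the result is only as precise as the scale can be read.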
In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically “programmed” to read instructions. Along with two other complex machines, the doll is housed in the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.
The tide-predicting machine, invented by Sir William Thomson in 1872, was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. As early as 1876, Lord Kelvin had discussed the possible construction of such calculators, but he was stymied by the limited output torque of the ball-and-disc integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.
The first computing device
Charles Babbage, an English polymath and mechanical engineer, originated the concept of the programmable computer. Considered the “father of the computer”, he conceptualized and invented the first mechanical computer in the early nineteenth century. After working on his revolutionary difference engine, he realized in 1833 that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter, and a bell. It would also be able to punch numbers onto cards to be read in later. The machine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All of its parts had to be made by hand, a major problem for a device with thousands of parts. Eventually, the project was dissolved when the British government decided to cease funding. Babbage's failure to complete the Analytical Engine can be attributed not only to political and financial difficulties, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, in 1888 his son, Henry Babbage, completed a simplified version of the Analytical Engine's computing unit (the mill). He gave a successful demonstration of its use in computing tables in 1906.
During the first half of the twentieth century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these computers were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, brother of the more famous Lord Kelvin.
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This machine built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. Many of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rules) and aircraft (control systems).
By 1938, the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries as well.
Early digital computers were electromechanical; electric switches drove mechanical relays to perform calculations. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.
In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2,000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film, while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was Turing-complete.
Vacuum tubes and digital electronic circuits
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first “automatic electronic digital computer”. This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.
During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electromechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Tommy Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February.
Colossus was the world's first electronic digital programmable computer and used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mark 2 Colossi were built (the Mark 1 was converted to a Mark 2, making ten machines in total). Colossus Mark 1 contained 1,500 thermionic valves (tubes), but Mark 2, with 2,400 valves, was both five times faster and simpler to operate than Mark 1, greatly speeding the decoding process.
The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the United States. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and Turing-complete. Like the Colossus, a “program” on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the “ENIAC girls”.
The machine combined the high speed of electronics with the ability to be programmed for many complex problems. The ENIAC could add or subtract 5,000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and take square roots. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. It was a large machine, weighing 30 tons, using 200 kilowatts of electric power and containing over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.
The concept of modern computers
The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the “universal computing machine”, now known as the universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored (held) in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
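The idea of a machine driven entirely by a stored table of rules can be illustrated with a short Python sketch of a single-tape Turing machine (the function and rule names here are illustrative, not from Turing's paper):

```python
def run_turing_machine(tape, rules, state="start", halt="halt"):
    """Simulate a single-tape Turing machine.
    `rules` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right); blank cells read as "_"."""
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A three-rule machine that flips every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}
print(run_turing_machine("1011", flip))  # 0100
```

Changing the `rules` table changes what the machine computes without altering the machine itself, which is exactly the sense in which a stored program makes a device programmable.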
Early computing machines had fixed programs. Changing their function required rewiring and restructuring the machine. This changed with the proposal of the stored-program computer. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report, Proposed Electronic Calculator, was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.
The Manchester Baby was the world's first stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn, and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although considered “small and primitive” by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a more usable computer, the Manchester Mark 1. Grace Hopper was the first person to develop a compiler for a programming language.
The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of the British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951, making it the world's first routine office computer.
The bipolar transistor was invented in 1947. From 1955 onward, transistors replaced vacuum tubes in computer designs, giving rise to the “second generation” of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, and therefore give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had longer, effectively indefinite, service lives. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell.
The next great advance in computing power came with the advent of the integrated circuit. The idea of the integrated circuit was first conceived by Geoffrey W. A. Dummer, a radar scientist working for the Royal Radar Establishment of the Ministry of Defence. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952.
The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated example on 12 September 1958. In a patent application dated 6 February 1959, Kilby described his new device as “a semiconductor body in which all the components of an electrical circuit are fully integrated”. Noyce came up with his own idea of an integrated circuit half a year later than Kilby. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.
This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the microprocessor. While the question of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term “microprocessor”, it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.
The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries, and with the continued miniaturization of computing resources, portable computers grew in popularity in the 2000s. These developments allowed manufacturers to integrate computing resources into cellular mobile phones.
These smartphones and tablets run on a variety of operating systems and quickly became the dominant computing device on the market, with manufacturers reporting having shipped an estimated 237 million devices in the second quarter of 2013.
Types of computers
Computers are typically classified by how they are used:
Based on use
- Analog computer
- Digital computer
- Hybrid computer
Based on size
- Minicomputer
- Mainframe computer
- Supercomputer
The term hardware includes all the tangible physical parts of a computer. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, monitors, power supplies, cables, keyboards, printers, and “mouse” input devices are all hardware.
History of computing hardware
Other hardware issues
A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a “1”, and when off it represents a “0” (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
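The way logic gates combine bits can be sketched in a few lines of Python, here building a half adder, the circuit that adds two single bits (a textbook construction, not tied to any particular machine described above):

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # XOR composed only from the three primitive gates above
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two bits: returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Chaining half adders (with an extra OR for the carries) yields a full adder, and chaining full adders yields the multi-bit adders found inside a real ALU.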
When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to the output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are:
- Computer keyboard
- Digital camera
- Digital Video
- Graphic tablet
- Image scanner
- Joystick
- Overlay keyboard
- Real-time clock
- Touch screen
The devices through which the computer gives output are called output devices. Some examples of output devices are:
- Computer monitor
- PC speaker
- Sound card
- Video card
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer. Control systems in advanced computers may change the order of execution of some instructions to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.
The control system's function is as follows (note that this is a simplified description, and some of these steps may be performed concurrently or in a different order, depending on the type of CPU):
- Read the code for the next instruction from the cell indicated by the program counter.
- Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
- Increment the program counter so it points to the next instruction.
- Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
- Provide the necessary data for an ALU or register.
- If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
- Write the result from the ALU back to a memory location, to a register, or perhaps to an output device.
- Jump back to step 1.
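The fetch-decode-execute cycle above can be sketched as a toy CPU in Python. The opcodes, the single accumulator register, and the mixed program/data layout are all simplifying assumptions for illustration, not a real instruction set:

```python
def run(memory):
    """A toy CPU following the fetch-decode-execute cycle.
    Instructions are (opcode, operand) pairs stored in `memory`;
    a single accumulator plays the role of the registers."""
    pc = 0   # program counter
    acc = 0  # accumulator register
    while True:
        opcode, operand = memory[pc]  # fetch and decode
        pc += 1                       # increment the program counter
        if opcode == "LOAD":          # read data from a memory cell
            acc = memory[operand]
        elif opcode == "ADD":         # let the "ALU" do the work
            acc += memory[operand]
        elif opcode == "STORE":       # write the result back
            memory[operand] = acc
        elif opcode == "JUMP":        # a jump rewrites the counter
            pc = operand
        elif opcode == "HALT":
            return acc

# Program and data share one memory: cells 0-3 hold instructions,
# cells 5 and 6 hold the data values 40 and 2.
program = [
    ("LOAD", 5),   # acc = memory[5]
    ("ADD", 6),    # acc += memory[6]
    ("STORE", 6),  # memory[6] = acc
    ("HALT", 0),
    0,             # unused cell
    40, 2,         # data
]
print(run(program))  # 42
```

Note how the JUMP opcode works simply by overwriting the program counter, which is exactly the mechanism the next paragraph describes.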
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a location 100 places further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.
Central processing unit or CPU
The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components, but since the mid-1970s, CPUs have typically been constructed on a single integrated circuit called a microprocessor.
Arithmetic logic unit or ALU
The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers), while others represent real numbers using floating point, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than, or less than the other (“Is 64 greater than 65?”). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic.
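A minimal sketch of this idea in Python, dispatching on an opcode the way an ALU selects an operation (the opcode names here are illustrative, not any real instruction set):

```python
def alu(op, a, b=0):
    """A toy ALU: one opcode selects an arithmetic, logic,
    or comparison operation; comparisons return Boolean flags."""
    ops = {
        "ADD": lambda: a + b,
        "SUB": lambda: a - b,
        "AND": lambda: a & b,   # bitwise logic operations
        "OR":  lambda: a | b,
        "XOR": lambda: a ^ b,
        "NOT": lambda: ~a,
        "EQ":  lambda: a == b,  # comparisons yield true/false
        "GT":  lambda: a > b,
    }
    return ops[op]()

print(alu("GT", 64, 65))  # False: 64 is not greater than 65
print(alu("ADD", 2, 3))   # 5
```

Comparison results like these are typically stored as flag bits, which conditional-jump instructions then consult to implement control flow.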
Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357” or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595”. The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
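The cell-and-address model maps directly onto an indexed array. A short Python sketch of the two example instructions from the text (the value 300 in cell 2468 is an arbitrary choice for illustration):

```python
memory = [0] * 4096  # a small memory: 4096 numbered cells

# "Put the number 123 into the cell numbered 1357."
memory[1357] = 123

# "Add the number in cell 1357 to the number in cell 2468
#  and put the answer into cell 1595."
memory[2468] = 300
memory[1595] = memory[1357] + memory[2468]

print(memory[1595])  # 423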
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically two, four, or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
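Python's built-in byte-conversion methods make these conventions easy to inspect; the sketch below shows one signed byte in two's complement and a larger number spanning four consecutive bytes:

```python
# One byte holds 2**8 = 256 values: 0..255 unsigned, or -128..127 signed.
n = -42

signed = n.to_bytes(1, "big", signed=True)        # two's complement encoding
print(signed.hex())                                # d6 (i.e. 256 - 42 = 214)
print(int.from_bytes(signed, "big", signed=True))  # -42, decoded back

# Larger numbers span several consecutive bytes (here, four):
big = (123456789).to_bytes(4, "big")
print(len(big), int.from_bytes(big, "big"))        # 4 123456789
```

The same bit pattern 0xd6 reads as 214 when interpreted as unsigned and as −42 when interpreted as signed, which again illustrates that meaning lives in the software's interpretation, not in the memory itself.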
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers, depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
There are two main types of computer memory:
- Random access memory (RAM)
- Read-only memory (ROM)
The CPU can read and write RAM whenever it is commanded to, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable.
More complex computers may have one or more RAM caches, which are slower than registers but faster than main memory. Computers with this type of cache are usually designed to automatically transfer the required data to the cache, often without the need for programmer intervention.
Input/output (I/O)
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices such as the keyboard and mouse, and output devices such as the monitor and printer. Hard disk drives, floppy disk drives, and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time”, then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing at any given instant. This method of multitasking is sometimes termed “time-sharing”, since each program is allocated a “slice” of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss.
Multitasking apparently slowed down a computer that switched between multiple programs, which was directly proportional to the number of programs it ran, but most programs spent most of their time waiting for input / output devices to complete the task. They did it themselves. If an application is waiting for the user to click on a mouse or press a key on the keyboard, then it does not get any “timeout” until something is waiting for it to happen. This frees up time to run other programs so that multiple programs can run simultaneously without significant slowdowns. Before the era of cheap computers, the main use of multitasking was to make it possible for multiple people to use one computer. Multitasking apparently slowed down a computer that switched between multiple programs, which was directly related to the number of programs it ran, but most programs spent most of their time waiting for input / output devices to complete the task. They did it themselves. If an application is waiting for the user to click on a mouse or press a key on the keyboard, then it does not get any “timeout” until something is waiting for it to happen. This frees up time to run other programs so that multiple programs can run simultaneously without significant slowdowns. Multitasking apparently slowed down a computer that switched between multiple programs, which was directly proportional to the number of programs it ran, but most programs spent most of their time waiting for input / output devices to complete the task. They did it themselves. If an application is waiting for the user to click on a mouse or press a key on the keyboard, then it does not get any “timeout” until something is waiting for it to happen. This frees up time to run other programs so that multiple programs can run simultaneously without significant slowdowns. 
Multitasking apparently slowed down a computer that switched between multiple programs, which was directly related to the number of programs it ran, but most programs spent most of their time waiting for input / output devices to complete the task. They did it themselves. If an application is waiting for the user to click on a mouse or press a key on the keyboard, then it does not get any “timeout” until something is waiting for it to happen. This frees up time to run other programs so that multiple programs can run simultaneously without significant slowdowns.
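The time-sharing idea above, in which each program runs for one slice and is then re-queued, can be sketched in a few lines of Python. This is only an illustration of the scheduling pattern, not an operating system: Python generators stand in for programs, and each `yield` plays the role of an interrupt point.

```python
from collections import deque

def task(name, steps):
    """A toy 'program': it yields after each unit of work (an interrupt point)."""
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(tasks):
    """Give each runnable task one time slice in turn until all have finished."""
    queue = deque(tasks)
    trace = []
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))  # run one "time slice"
            queue.append(current)        # re-queue: its turn comes around again
        except StopIteration:
            pass                         # task finished; drop it from the queue
    return trace

trace = round_robin([task("A", 2), task("B", 3)])
print(trace)  # the slices of A and B are interleaved
```

Although only one task ever runs at a given instant, the interleaved trace shows why, at human time scales, the tasks appear to run simultaneously.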
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframes, and servers. Multiprocessor and multi-core laptops and personal computers (with multiple CPUs on a single integrated circuit) are now widespread and, as a result, are increasingly used in lower-end markets.
Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture of general-purpose computers. They typically feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks, due to the large scale of program organization required to exploit most of the available resources at once. Supercomputers are commonly used in large-scale simulation, graphics rendering, and cryptography applications, as well as other so-called “embarrassingly parallel” tasks.
Software refers to parts of the computer that have no material form, such as programs, data, protocols, and so on. Software is the part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries, and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other, and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as the BIOS ROM in an IBM PC compatible computer, it is sometimes called “firmware”.
There are thousands of different programming languages - some all-purpose, and some only suitable for highly specialized applications.
The defining feature of modern computers that distinguishes them from all other machines is that they can be programmed: some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.
Stored program architecture
This section applies to most common computers based on the random-access machine (RAM) model.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, and so on. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions that tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made conditional, so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from, and another instruction to return to the instruction following that jump.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program, and it is what allows the computer to perform tasks repeatedly without human intervention.
By comparison, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:
begin:
addi $8, $0, 0        # initialize sum to 0
addi $9, $0, 1        # set first number to add = 1
loop:
slti $10, $9, 1000    # check if the number is less than 1000
beq $10, $0, finish   # if not (the number is 1000 or more), exit the loop
add $8, $8, $9        # update sum
addi $9, $9, 1        # get next number
j loop                # repeat the summing process
finish:
add $2, $8, $0        # put sum in output register
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second.
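For readers unfamiliar with assembly, the same computation can be written in a few lines of a high-level language. This Python sketch mirrors the assembly program above, summing the integers from 1 up to (but not including) 1000:

```python
# Mirror of the assembly loop above: sum the integers 1 through 999.
total = 0                # plays the role of register $8 (the running sum)
number = 1               # plays the role of register $9 (the next number)
while number < 1000:     # the slti/beq test: loop while number < 1000
    total += number      # add  $8, $8, $9  (update sum)
    number += 1          # addi $9, $9, 1   (get next number)
print(total)             # the value left in the "output register"
```

Each line of the loop corresponds directly to one instruction in the assembly version; the compiler or interpreter handles the jump and branch instructions for us.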
On most computers, each instruction is stored as machine code, with each instruction given a unique number (its operation code, or opcode for short). The instruction to add two numbers together has one opcode; the instruction to multiply them has a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; more sophisticated computers have several hundred to choose from, each with a unique numeric code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just like numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored-program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
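The fact that a program is just a list of numbers in memory can be demonstrated with a toy stored-program machine. The opcodes below are invented purely for illustration (real machines use different encodings); the point is that the same memory holds both the numeric instruction codes and the data, and that a conditional jump instruction steers the flow of control.

```python
# A toy stored-program machine. Opcodes are invented for illustration:
#   1 = LOAD reg, value   2 = ADD dst, src
#   3 = JLT reg, limit, addr (jump to addr if reg < limit)   0 = HALT
def run(memory):
    regs = [0] * 4
    pc = 0                                  # program counter
    while True:
        op = memory[pc]
        if op == 1:                         # LOAD reg, value
            regs[memory[pc + 1]] = memory[pc + 2]
            pc += 3
        elif op == 2:                       # ADD dst, src
            regs[memory[pc + 1]] += regs[memory[pc + 2]]
            pc += 3
        elif op == 3:                       # JLT reg, limit, addr
            if regs[memory[pc + 1]] < memory[pc + 2]:
                pc = memory[pc + 3]         # take the jump
            else:
                pc += 4                     # fall through
        elif op == 0:                       # HALT
            return regs

# The program itself is nothing but a list of numbers:
program = [
    1, 0, 0,      # LOAD r0, 0   (running sum)
    1, 1, 1,      # LOAD r1, 1   (counter)
    2, 0, 1,      # ADD  r0, r1  (sum += counter); loop body is address 6
    1, 2, 1,      # LOAD r2, 1
    2, 1, 2,      # ADD  r1, r2  (counter += 1)
    3, 1, 10, 6,  # JLT  r1, 10, 6  (loop back while counter < 10)
    0,            # HALT
]
regs = run(program)
print(regs[0])    # sum of 1..9
```

Because the program is just data, it could itself be generated, inspected, or modified by another program, which is exactly what assemblers and compilers do.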
While it is possible to write computer programs as long lists of numbers (machine language), and while this technique was used with many early computers, it is extremely tedious and, in practice, error-prone, especially for complicated programs. Instead, each basic instruction can be given a short, easily remembered name that indicates its function: a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
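At its core, an assembler is a translator from mnemonics to numeric opcodes. The sketch below shows that core idea only (a real assembler also handles labels, addressing modes, and instruction encoding); the mnemonics and opcode numbers here are invented for the example.

```python
# A sketch of what an assembler does: translate mnemonic instructions
# into numeric machine code. Mnemonics and opcode numbers are invented.
OPCODES = {"LOAD": 1, "ADD": 2, "JLT": 3, "HALT": 0}

def assemble(lines):
    machine_code = []
    for line in lines:
        # Split "ADD 0, 1" into the mnemonic and its numeric operands.
        mnemonic, *operands = line.replace(",", " ").split()
        machine_code.append(OPCODES[mnemonic])        # look up the opcode
        machine_code.extend(int(x) for x in operands) # append the operands
    return machine_code

code = assemble(["LOAD 0, 7", "LOAD 1, 35", "ADD 0, 1", "HALT"])
print(code)  # the program as a plain list of numbers
```

The output is exactly the kind of numeric list a stored-program machine executes, which is why assembly language is described as a human-readable representation of machine code.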
Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method combining the two techniques.
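Python itself illustrates both stages of this pipeline: source text is first compiled into an intermediate bytecode, which the interpreter then executes. The standard `dis` module can display those intermediate instructions, making the normally invisible translation step visible.

```python
import dis

source = "x * 2 + 1"
bytecode = compile(source, "<example>", "eval")  # translation step
result = eval(bytecode, {"x": 20})               # execution step
print(result)
dis.dis(bytecode)  # show the intermediate, machine-like instructions
```

The disassembly lists low-level operations such as loading a value and performing a binary operation, which is conceptually the same kind of instruction stream a CPU executes, just one level higher.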
Low-level programming languages
Machine languages and their representative assembly languages (collectively called low-level programming languages) tend to be unique to a particular type of computer. For example, a computer with an ARM architecture (such as a smartphone or handheld game console) cannot understand the machine language of an x86 CPU that may be on a PC.
High-level languages / third generation language
Although considerably easier than in machine language, writing long programs in assembly language is often difficult and also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer.
Fourth-generation programming languages
4GLs are less procedural than third-generation languages. The benefit of 4GLs is that they provide ways to obtain information without requiring the direct help of a programmer.
Program design
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices, and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.
Errors in computer programs are called “bugs”. They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program or the entire system to “hang”, becoming unresponsive to input such as mouse clicks or keystrokes, to fail completely, or to crash. Even otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.
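A classic example of such a programming error is the off-by-one bug, where a loop runs one iteration too few or too many. The sketch below shows a buggy and a corrected version of a summing function; the function names are invented for the example.

```python
def sum_up_to(n):
    """Intended to return 1 + 2 + ... + n, but contains a bug."""
    total = 0
    for i in range(1, n):       # BUG: range(1, n) stops at n - 1
        total += i
    return total

def sum_up_to_fixed(n):
    """Corrected version: the upper bound must be n + 1 to include n."""
    total = 0
    for i in range(1, n + 1):   # fixed: includes n itself
        total += i
    return total

print(sum_up_to(10), sum_up_to_fixed(10))
```

The computer faithfully executes both versions; only the programmer's intent tells us the first one is wrong, which is exactly why bugs are a human artifact rather than a machine fault.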
Firmware is technology that combines hardware and software, such as the BIOS chip inside a computer. This chip (the hardware) is located on the motherboard and stores the BIOS program (the software).
Network and Internet
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, and it led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. The technologies that made the ARPANET possible spread and evolved.
In time, the network spread beyond military and academic institutions and became known as the Internet. The emergence of networking redefined the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices and stored information, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications such as e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies such as Ethernet and ADSL, made computer networking almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information.
A computer does not need to be electronic, nor does it need a processor, RAM, or even a hard disk. While popular usage of the word “computer” is synonymous with a personal electronic computer, the modern definition of a computer is literally: “A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.” Any device that processes information qualifies as a computer, especially if the processing is purposeful.
Historically, computers evolved from mechanical computers and eventually from vacuum tubes to transistors. However, conceptually, computational systems as flexible as a personal computer can be built out of almost anything. A famous example is the billiard-ball computer. More realistically, modern computers are made out of transistors fabricated by photolithography of semiconductors.
There is active research to build computers from many promising new technologies, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal and can compute any computable function, limited only by their memory capacity and operating speed. However, different computer designs can give very different performance for particular problems; for example, quantum computers can potentially break some modern encryption algorithms very quickly (by quantum factoring).
Computer architecture paradigms
There are several computer architectures:
- Quantum computer vs. chemical computer
- Scalar processor vs. vector processor
- Non-uniform memory access (NUMA) computers
- Registry machine vs. stack machine
- Harvard architecture vs. von Neumann architecture
- Cellular architecture
Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (notebook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.
The computer solves problems exactly as they were programmed, regardless of performance, other answers, possible shortcuts, or possible code errors. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. AI-based products are generally divided into two broad categories: rule-based systems and pattern recognition systems. Rule-based systems try to represent the rules used by human experts, and their development is usually costly. Pattern-based systems use problem-related data to generate results. Examples of pattern-based systems include voice recognition, font recognition, translation, and the emerging field of online marketing.
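The rule-based approach mentioned above can be sketched very compactly: the "knowledge" is a list of condition/conclusion pairs of the kind a human expert might state. The rules and the medical threshold below are invented purely for illustration and are not a real expert system.

```python
# A minimal rule-based system: each rule is a (condition, conclusion) pair,
# mimicking rules elicited from a human expert. The rules are invented.
rules = [
    (lambda facts: facts["temperature"] > 38.0, "fever"),
    (lambda facts: facts["temperature"] <= 38.0, "no fever"),
]

def diagnose(facts):
    """Return the conclusion of the first rule whose condition matches."""
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion

print(diagnose({"temperature": 39.2}))
```

Pattern-recognition systems differ in that, instead of hand-written rules like these, they derive their decision criteria from example data.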
Professions and organizations
With the spread of computer use throughout society, more and more computer-related job opportunities are being created.
The need for computers to work properly with each other and the ability to exchange information, in turn, has created the need for numerous organizations, associations, and standardization bodies, both formal and informal in nature.