To understand computer systems fully, it is important to consider both the hardware and the software design of the various computer components. In other words, every function of the computer has to be studied in order to improve its performance.

Computer organization and architecture focuses on the various parts of the computer, with the aim of reducing program execution time and improving the performance of each part. The terms computer organization and computer architecture are often used interchangeably, but there is a slight difference.


  • Computer Architecture is the study of the system from the programmer's or user's point of view: it describes the attributes visible to the programmer, such as the instruction set, and gives an overall description of the system and its working principles without going into implementation detail.
  • Computer Organization is the study of the system from the hardware point of view: it emphasizes how the architecture is implemented. Basically, it reflects the designer's point of view.


  1. A chef prepares a recipe and serves it to the customers. The chef knows how to prepare the dish, whereas the customer cares only about its quality and taste. In the same way, the "chef" corresponds to computer organization (how it is done) and the "customer" to computer architecture (what is provided).
  2. A system provides a set of instructions. For the programmer or user it is enough to know what instructions are available; this is the architectural view. The system designer, on the other hand, worries about how that set of instructions is implemented; the implementation and its algorithms are the concern of computer organization.



A computer consists of several functional blocks: input, output, memory, the arithmetic and logic unit, and the control unit.

INPUT UNIT: Input devices such as the keyboard provide input to the computer. Whenever a key is pressed, the corresponding letter or symbol is automatically translated into binary code and transmitted to the memory or the processor. The information is stored in memory for further use.


MEMORY UNIT: The main function of the memory unit is to store data and programs. A program must be stored in memory while it is being executed; memory therefore plays a vital role in the execution of instructions. Memory can be further classified into:


PRIMARY MEMORY: Data and instructions are stored in primary storage before processing and are transferred to the ALU, where further processing is done. Primary memory is expensive and is also known as main memory.

SECONDARY MEMORY: Data and instructions are stored permanently, so the user can access them whenever required in the future. Secondary memory is cheaper than primary memory.

ARITHMETIC LOGIC UNIT (ALU): Arithmetic and logical operations such as addition and multiplication are carried out by the ALU. For instance, to multiply two numbers located in memory, they are first transferred to the processor, where the ALU performs the required operation. The product then remains in the processor if it is needed immediately; otherwise it is stored back in memory.

CONTROL UNIT (CU): All the other functional units (the ALU, the I/O devices, and the memory) have to be coordinated. Data transfers between the processor and the memory are controlled by this unit through timing signals, which determine which action takes place and when.

OUTPUT UNIT: It delivers the processed results of the operations performed. Devices such as printers and monitors provide the desired output.


  • The input device provides information, in the form of a program, to the computer and stores it in memory.
  • The information is then fetched from memory into the processor.

  • Inside the processor, it is processed by the ALU.

  • The processed output is then passed on to the output devices.

  • All these activities are coordinated by the control unit.
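The flow through the units above can be sketched as a toy machine; the three-instruction ISA (LOAD, ADD, OUT) and the program below are made up purely for illustration:

```python
# A minimal sketch of the input -> memory -> processor -> output flow,
# using a hypothetical three-instruction machine (LOAD, ADD, OUT).
def run(program, data):
    memory = dict(data)               # the input unit has placed data in memory
    acc = 0                           # accumulator inside the processor
    output = []
    for opcode, operand in program:   # the control unit fetches instructions
        if opcode == "LOAD":          # memory -> processor transfer
            acc = memory[operand]
        elif opcode == "ADD":         # the ALU performs the arithmetic
            acc = acc + memory[operand]
        elif opcode == "OUT":         # processor -> output unit transfer
            output.append(acc)
    return output

print(run([("LOAD", 0), ("ADD", 1), ("OUT", None)], {0: 2, 1: 3}))  # [5]
```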

How efficient a computer is depends on how quickly it executes tasks. Performance depends mainly on a few factors:

  • Programs are written in a high-level language, and a compiler translates them into machine language; the quality of this translation strongly affects performance.
  • The speed of the computer depends on the hardware design and on the machine instruction set.

Therefore, for optimum results, the compiler, the hardware, and the machine instruction set should be designed in a coordinated way.

The hardware comprises a processor and memory, usually connected by a bus. Program execution time depends on the computer system as a whole, while processor time depends on the hardware. Cache memory is part of the processor.


The flow of program instructions and data between the processor and the memory:

  • The program and data are taken from the input device and stored in main memory.
  • Instructions are fetched one by one over the bus from memory into the processor, and a copy is placed in the cache memory for future use.

  • The processor and a small cache memory are fabricated on a single integrated-circuit chip, which makes processing very fast.

  • If the movement of instructions between main memory and the processor is minimized, the program executes faster; this is what the cache memory achieves.
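A minimal sketch of how a cache cuts down processor-memory traffic; the class and the access pattern below are hypothetical, and real caches work on multi-word blocks rather than single words:

```python
# Sketch: a word is fetched from (slow) main memory once, then served
# from the (fast) cache on later references. Illustrative only.
class Cache:
    def __init__(self, memory):
        self.memory = memory
        self.lines = {}                # address -> cached word
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.lines:      # cache hit: no main-memory access
            self.hits += 1
        else:                          # cache miss: copy the word into the cache
            self.misses += 1
            self.lines[address] = self.memory[address]
        return self.lines[address]

memory = {0: 10, 1: 20}
cache = Cache(memory)
for addr in [0, 1, 0, 0, 1]:           # a loop re-reads the same addresses
    cache.read(addr)
print(cache.hits, cache.misses)        # 3 2
```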


Computers are built from logic circuits that operate on two-valued electric signals, represented as 0 and 1. A bit is the amount of information carried by one such signal.

  • Representing a number in the computer by a string of bits gives a binary number.

  • Representing a text character by a string of bits gives a character code. Characters can be letters, decimal digits, punctuation marks and so on, and are represented by codes that are 8 bits long.

We need to represent all kinds of positive and negative numbers. There are three ways to do this, and in all of them the leftmost bit is 0 for positive numbers and 1 for negative numbers:

  1. Sign and magnitude
  2. 1’s complement
  3. 2’s complement

Positive values have identical representations in all three systems, whereas negative values are represented differently.

  1. Sign and magnitude representation:

Negative values are represented by changing the most significant bit from 0 to 1. It is the most natural scheme and is easy to compute by hand.

Example: +5 = 0101 whereas -5 = 1101 (most significant bit changed from 0 to 1)

  2. 1’s complement representation:

The negative number is obtained by complementing each bit of the corresponding positive number.

Example: +3 = 0011 whereas -3 = 1100

  3. 2’s complement representation:

The 2’s complement is obtained by adding 1 to the 1’s complement of the number.

Example: +3 = 0011 whereas -3 = 1101


2’s complement has only one representation of 0, whereas sign and magnitude and 1’s complement have distinct representations for +0 and -0. 2’s complement is also the most efficient representation for carrying out addition and subtraction.
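The three encodings can be checked with a short sketch; the helper functions below are illustrative, working on 4-bit words:

```python
# The three 4-bit encodings of signed numbers, as described above.
def sign_magnitude(n, bits=4):
    # sign bit followed by the magnitude of n
    if n < 0:
        return format((1 << (bits - 1)) | -n, f"0{bits}b")
    return format(n, f"0{bits}b")

def ones_complement(n, bits=4):
    # complement every bit of the corresponding positive value
    mask = (1 << bits) - 1
    if n < 0:
        return format(~(-n) & mask, f"0{bits}b")
    return format(n, f"0{bits}b")

def twos_complement(n, bits=4):
    # 1's complement plus one; Python's masking computes this directly
    mask = (1 << bits) - 1
    return format(n & mask, f"0{bits}b")

print(sign_magnitude(-5), ones_complement(-3), twos_complement(-3))
# 1101 1100 1101
```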






Difference between control memory and control register

The control memory address register specifies the address of the microinstruction, and the control data register holds the microinstruction read from memory. The microinstruction contains a control word that specifies one or more micro-operations for the data processor. Once these operations are executed, the control unit must determine the next address. The next microinstruction may be the next one in sequence, or it may be located somewhere else in control memory. For this reason some bits of the present microinstruction are used to control the generation of the address of the next microinstruction. The next address may also be a function of external input conditions.
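The next-address logic described above can be sketched as follows; the microinstruction format (control word, branch flag, branch address) is a made-up simplification of a real control unit:

```python
# Sketch of next-microinstruction address selection, using a hypothetical
# microinstruction laid out as (control_word, branch_flag, branch_address).
def next_address(current, microinstruction, external_condition):
    control_word, branch_flag, branch_address = microinstruction
    if branch_flag and external_condition:   # branch within control memory
        return branch_address
    return current + 1                       # otherwise next in sequence

print(next_address(5, ("cw", False, 0), False))  # 6
print(next_address(5, ("cw", True, 20), True))   # 20
```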


Write through and Write back

Write through: In a write-through cache, a write updates both the cache and main memory simultaneously, so the write time is the memory access time for a word. On a write miss, only memory is updated.
During a read miss, a cache block is retrieved from memory, so the read time is the time to bring a block from memory plus the cache access time, assuming hierarchical access.

Write back: Here a write updates only the cache, and the block's dirty bit is set. When a dirty block is replaced, it is written back to memory. The write-back policy typically uses the write-allocate technique.
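The difference between the two policies can be sketched as follows; the classes below are a simplified model (single words instead of blocks, and no read path):

```python
# Sketch contrasting the two write policies on a single cached word.
class WriteThroughCache:
    def __init__(self, memory):
        self.memory, self.line = memory, {}
    def write(self, addr, value):
        self.line[addr] = value
        self.memory[addr] = value          # memory is updated on every write

class WriteBackCache:
    def __init__(self, memory):
        self.memory, self.line, self.dirty = memory, {}, set()
    def write(self, addr, value):
        self.line[addr] = value            # only the cache is updated
        self.dirty.add(addr)
    def evict(self, addr):                 # on replacement, flush if dirty
        if addr in self.dirty:
            self.memory[addr] = self.line[addr]
            self.dirty.discard(addr)
        self.line.pop(addr, None)

mem1, mem2 = {0: 0}, {0: 0}
WriteThroughCache(mem1).write(0, 42)       # memory sees 42 immediately
wb = WriteBackCache(mem2)
wb.write(0, 42)                            # memory still holds the old 0
print(mem1[0], mem2[0])                    # 42 0
wb.evict(0)
print(mem2[0])                             # 42
```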



Different types of Addressing Modes

An addressing mode, in simple words, is a way to access an operand (data) from memory or from CPU registers.
The different types of addressing modes are:

  1. Direct addressing: Here, the address field contains the address of the operand.
    A single memory access is required.
    Limited address space.
    No extra calculation is required to manipulate the effective address.
    Example: ADD B
  2. Indirect addressing: A pointer is used, which holds the address of the operand.
    Larger address space.
    Example: ADD (B) // adds the contents of the cell pointed to by the contents of B to the accumulator.
    It may be nested, i.e. multilevel.
    Example: ((B))
    Slower way of accessing the operand than direct addressing, as multiple memory accesses are required.
    Example: ADD @210 // adds the contents of the cell whose address is stored in location 210 to the accumulator.
  3. Immediate addressing mode: The operand is part of the instruction.
    Example: ADD #12 // adds 12 to the contents of the accumulator.
    Fastest mode.
    No memory access is required.
  4. Register direct addressing mode: The operand is present in the register mentioned in the address field.
    No memory access required.
    Fastest way of execution.
    Very limited address space.
    Example: ADD R1 // adds the contents of register R1 to the contents of the accumulator.
  5. Register indirect addressing mode: Here, the operand's address is present in a register.
    The operand is in the memory cell pointed to by the contents of the register.
    Larger address space.
    One fewer memory access than indirect addressing mode.
    Example: ADD @R1
    This can be written as ACC <- [ACC] + [[R1]]
  6. Index addressing mode: The effective address is obtained by adding an index value to the address given in the instruction.
    Used for accessing arrays.
    EA = A + I // here, I is the index and A is the address.
  7. Relative addressing mode: The EA is obtained by adding the contents of the program counter to the constant value specified in the instruction.
    It is a form of displacement addressing mode.
  8. Base register addressing mode: EA = A + BR // BR is the base register and A is the address.
    Mostly used in segmentation.
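A few of these modes can be sketched as a toy effective-address calculator; the memory contents, register names, and mode names below are made up for illustration:

```python
# Sketch: fetching the operand under a few of the addressing modes above,
# with a toy memory and register file (addresses and values are made up).
memory = {100: 7, 200: 100, 300: 55}
registers = {"R1": 300}

def operand(mode, field):
    if mode == "immediate":          # operand is in the instruction itself
        return field
    if mode == "direct":             # field is the operand's address
        return memory[field]
    if mode == "indirect":           # field points to a pointer: two accesses
        return memory[memory[field]]
    if mode == "register_indirect":  # register holds the operand's address
        return memory[registers[field]]
    if mode == "indexed":            # EA = base address + index
        base, index = field
        return memory[base + index]

print(operand("immediate", 12))           # 12
print(operand("direct", 100))             # 7
print(operand("indirect", 200))           # 7
print(operand("register_indirect", "R1")) # 55
print(operand("indexed", (100, 0)))       # 7
```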


Memory Organisation

Memory Hierarchy

  • The memory unit is used for storing programs and data; it fulfills the need for information storage.
  • Additional storage beyond the main-memory capacity enhances the performance of general-purpose computers and makes them efficient.
  • Only the programs and data currently needed by the processor reside in main memory; information is transferred from auxiliary memory to main memory when needed.


Types of Instructions

Types of Instructions:
Data Transfer Instructions: 
Data transfer instructions move data from one location to another without changing the information content.
Common transfers are between memory and processor registers, and between processor registers and input/output.
Data Manipulation Instructions: Data manipulation instructions perform operations on data and provide the computational capabilities for the computer. There are three types of data manipulation instructions: Arithmetic instructions, Logical and bit manipulation instructions, and Shift instructions.
Program Control Instructions
Program control instructions specify conditions for altering the content of the program counter, whereas data transfer and manipulation instructions specify operations on data. A change in the value of the program counter as a result of executing a program control instruction causes a break in the sequence of instruction execution.
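The break in sequential execution caused by a program control instruction can be sketched as follows; the two-instruction toy ISA (NOP, JUMP) is made up for illustration:

```python
# Sketch: how a program control instruction alters the program counter,
# while other instructions simply advance it to the next location.
def trace(program):
    pc, visited = 0, []
    while pc < len(program):
        visited.append(pc)
        op, target = program[pc]
        if op == "JUMP":            # program control: PC gets a new value
            pc = target
        else:                       # data transfer/manipulation: PC += 1
            pc = pc + 1
    return visited

print(trace([("NOP", None), ("JUMP", 3), ("NOP", None), ("NOP", None)]))
# [0, 1, 3]  -- location 2 is skipped by the jump
```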




Pipelining is an implementation technique where multiple instructions are overlapped in execution. The computer pipeline is divided in stages. Each stage completes a part of an instruction in parallel. The stages are connected one to the next to form a pipe - instructions enter at one end, progress through the stages, and exit at the other end.
Pipelining does not decrease the time for individual instruction execution. Instead, it increases instruction throughput. The throughput of the instruction pipeline is determined by how often an instruction exits the pipeline.
Because the pipe stages are hooked together, all the stages must be ready to proceed at the same time. We call the time required to move an instruction one step further in the pipeline a machine cycle. The length of the machine cycle is determined by the time required for the slowest pipe stage.
Speedup = time taken without pipelining / time taken with pipelining
Speedup = m × efficiency, where m is the number of stages.
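Under the usual assumption that an m-stage pipeline needs m cycles to fill and then completes one instruction per cycle, the speedup for n instructions works out as follows:

```python
# Sketch: speedup of an m-stage pipeline executing n instructions,
# assuming one instruction completes per cycle once the pipe is full.
def speedup(m, n):
    without = m * n          # cycles without pipelining
    with_pipe = m + (n - 1)  # fill the pipe, then one result per cycle
    return without / with_pipe

def efficiency(m, n):
    return speedup(m, n) / m  # so that speedup = m * efficiency

print(round(speedup(5, 100), 2))     # 4.81
print(round(efficiency(5, 100), 3))  # 0.962
```

For large n the speedup approaches m, the number of stages, which is why the efficiency tends toward 1.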



Memory Mapping Types


There are a few different kinds of mappings that can be specified in the map attribute. All use the format described in the previous section.


Device Mapping
The most common kind of mapping. It is used for devices, RAM and ROM objects. The target field is not set.
Translator Mapping
Sometimes the address has to be modified between memory-spaces, or the destination memory-space depends on the address or some other aspect of the access such as the initiating processor. In these cases a translator can be used. A translator mapping is specified with the translator in the object field, and the default target as target. The translator has to implement the TRANSLATE interface. When an access reaches a translator mapping, the translate function in the TRANSLATE interface is called. The translator can then modify the address if necessary, and specify what destination memory-space to use. If it doesn't specify any new memory-space, the default one from the configuration is used. The following fields can be changed by the translator: physical_address, ignore, block_STC, inverse_endian and user_ptr.
Translate to RAM/ROM Mapping
Used to map RAM and ROM objects with a translator first. The object field is set to the translator, and target is set to the RAM/ROM object.
Space-to-space Mapping
Map one memory-space in another. Both object and target should be set to the destination memory-space object.
Bridge Mapping
A bridge mapping is typically used for mappings that are set up by some kind of bridge device. The purpose of a bridge mapping is to handle accesses where nothing is mapped, in a way that corresponds to the bus architecture. For a bridge mapping, the object field is set to the bridge device, implementing the BRIDGE interface. The target field is set to the destination memory-space. If both a translator and a bridge are needed, they must be implemented by the same object. If an access is made where nothing is mapped, the memory-space by default returns the Sim_PE_IO_Not_Taken pseudo exception. But if the access was made through a bridge mapping, the bridge device will be called to notify it about the unmapped access. It can then update any internal status registers, specify a new return exception, and set the data that should be returned in the case of a read access. Since the bridge is associated with the mapping and not with the memory-space itself, several bridges can exist for one space, and devices making accesses directly to the memory-space in question will not affect the bridge for non-mapped addresses. In the latter case, the device itself has to interpret the Sim_PE_IO_Not_Taken exception. The Sim_PE_IO_Error exception, indicating that a device returned an error, is also sent to the bridge. Finally, bridges are called for accesses that generate Sim_PE_Inquiry_Outside_Memory, i.e. an inquiry access where nothing is mapped. In this case the bridge may have to set a default return value, such as −1.
