Input Output: Computer Architecture & Organization Class Notes

Updated: Aug 18

Mobiprep has created last-minute notes for all topics of CAO to help you with the revision of concepts for your university examinations. So let’s get started with the lecture notes on CAO.

  1. Design of control unit

  2. Fundamentals of computer systems

  3. Information Representation

  4. Input output

  5. Machine Instruction Set

  6. Memory Organization

Our team has curated a list of the most important questions asked in universities such as DU, DTU, VIT, SRM, IP, Pune University, Manipal University, and many more. The questions are created from the previous year's question papers of colleges and universities.


  1. List all disk performance parameters.

  2. Explain the usage of SSDs.

  3. Explain standard RAID levels.

  4. Explain nested RAID.

  5. What do you understand by peripheral devices?

  6. What is the role of the input-output interface?

  7. What are the three ways to communicate with both memory and input-output devices?

  8. What do you understand by asynchronous data transfer?

  9. What is strobe control?

  10. Explain the working of source-initiated and destination-initiated data transfer.

  11. Explain the two-wire handshaking mechanism for data transfer.

  12. Explain serial and parallel transmission.

  13. What are the differences between synchronous and asynchronous serial transmission?

  14. What is baud rate?

  15. What are priority interrupts?

  16. What is DMA?

  17. Explain burst transfer and cycle stealing in DMA.

  18. What are the various registers present in the DMA controller?

  19. What is the sequence of a DMA transfer?

Input Output


Question 1) List all disk performance parameters.

Answer) Seek time: on a movable-head disk, the time it takes to position the head over the desired track.

Rotational delay (latency): the time it takes for the beginning of the sector to reach the head.

Access time: seek time + rotational delay; the time it takes to get into position to read or write.

Transfer time: the time required to transfer the data, T = b/(rN), where b is the number of bytes to be transferred, r is the rotation speed (revolutions per second) and N is the number of bytes on a track.

Average access time: Ta = Ts + 1/(2r) + b/(rN), where Ts is the average seek time.
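These formulas can be checked with a short calculation; below is a minimal Python sketch using made-up drive parameters (4 ms average seek, 7500 rpm, 512,000 bytes per track, transferring one full track):

```python
# Checking the formulas above with invented drive parameters.

def transfer_time(b, r, N):
    """Transfer time T = b / (r * N), with r in revolutions per second."""
    return b / (r * N)

def average_access_time(Ts, r, b, N):
    """Average access time Ta = Ts + 1/(2r) + b/(rN)."""
    return Ts + 1 / (2 * r) + transfer_time(b, r, N)

Ts = 0.004            # 4 ms average seek time
r = 7500 / 60         # 7500 rpm -> 125 revolutions per second
N = 512_000           # bytes per track
b = 512_000           # bytes to transfer (one full track)

print(transfer_time(b, r, N))            # ~0.008 s (one full revolution)
print(average_access_time(Ts, r, b, N))  # ~0.016 s total (4 + 4 + 8 ms)
```

Note how transferring a full track costs exactly one revolution (1/r), and latency averages half a revolution (1/2r).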



 

Question 2) Explain the usage of SSDs.

Answer) The solid-state drive (SSD) is now a primary storage choice in many settings because of its speed, low power consumption, and durability:

Servers: Enterprise servers need SSDs to get fast reads and writes in order to properly serve their client PCs.


Business: An SSD uses less power than a standard HDD, which means a lower energy bill over time and, for laptops, longer battery life. An SSD has access times of 35 to 100 microseconds, nearly 100 times faster than a typical HDD. This faster access means programs run more quickly, which matters most for software that frequently accesses large amounts of data, such as the operating system.


Gaming: Gaming computers have always pushed the limits of computing technology, justifying relatively expensive equipment for the sake of gaming performance. An SSD's faster data access translates directly into better performance.


Mobility: SSDs are more resistant to physical shock, run silently, and have lower access times and latency.

With no moving parts, an SSD generates no noise. Because there are no moving parts, and due to the nature of flash memory, it also generates less heat, which helps increase its lifespan and reliability.


 

Question 3) Explain standard RAID levels.

Answer) RAID stands for redundant array of independent disks. RAID stores data across multiple disks, optionally holding the same data redundantly (in multiple places), in order to provide fault tolerance and improve overall performance.


STANDARD RAID LEVELS

• Level 0: Striped disk array without fault tolerance

• Level 1: mirrored

• Level 2: Error-correcting coding

• Level 3: bit interleaved parity

• Level 4: block level parity

• Level 5: block distributed parity

• Level 6: dual redundancy


RAID 0

RAID 0 is not considered a true member of the RAID family because it includes no redundancy; it stripes data purely to improve performance. In RAID 0 the user and system data are distributed across all disks in the array in strips.


RAID 1

In RAID 1, redundancy is achieved by duplicating all the data. Each logical strip (which contains data) is mapped to two separate physical disks. A read request can be serviced by either of the two disks, whereas a write request requires that both strips be updated, though this can be done in parallel. When a drive fails, data can still be accessed from the mirror disk.

The disadvantage of RAID 1 is that using two disks doubles the cost.


RAID 2

Raid 2 utilizes parallel access techniques. All disks participate in the execution of every I/O request. Spindles of individual drives are synchronized so that each disk head is in the same position on each disk at any given time. The strips are small; a single bit.

An error-correcting code is calculated across corresponding bits on each data disk, and bits of the code are stored in the corresponding bit positions on multiple parity disks. Typically Hamming code is used to be able to correct single-bit errors and detect double-bit errors.

Disadvantages: costly, and only worthwhile when disk error rates are very high; since modern disks are quite reliable, RAID 2 is not used in practice.


RAID 3

In RAID 3 the strips are small (a single byte or word) and a single redundant disk is used. A simple parity bit is computed for the set of individual bits in the same position on all of the data disks.

A parity bit is a check bit added to a block of data for error-detection purposes; it is used to validate the integrity of the data.
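The parity scheme above can be illustrated in a few lines of Python (the disk byte values are invented); XORing the surviving disks with the parity regenerates a failed disk's contents:

```python
from functools import reduce

# Bytes in the same position on three data disks (values invented):
data_disks = [0b1011_0010, 0b0110_1100, 0b1110_0001]

# The parity disk stores the XOR of those bytes.
parity = reduce(lambda a, b: a ^ b, data_disks)

# If disk 1 fails, XORing the survivors with the parity rebuilds its byte.
survivors = [data_disks[0], data_disks[2]]
rebuilt = reduce(lambda a, b: a ^ b, survivors, parity)
assert rebuilt == data_disks[1]   # the lost byte is fully recovered
```

This works because XOR is its own inverse: parity ^ d0 ^ d2 leaves exactly d1.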


RAID 4

Each disk operates independently, so separate I/O requests can be satisfied in parallel. RAID 4 is suitable for applications with high I/O request rates and is not well suited to those requiring high data transfer rates.

To calculate the new parity for a small write, the old data strip and the old parity strip must first be read; the controller then writes the new data strip and the newly calculated parity strip. Thus each strip write involves two reads and two writes.
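The read-modify-write above can be sketched as follows (strip values are invented); the key identity is new parity = old parity XOR old data XOR new data:

```python
# Parity is a simple XOR across the data strips in the stripe.
old_data   = 0b1010_1010   # old contents of the strip being written
other_data = 0b0011_0011   # the other data strip (unchanged by this write)
old_parity = old_data ^ other_data

new_data = 0b1111_0000

# Read-modify-write shortcut: new parity = old parity ^ old data ^ new data.
# Only two strips need to be read, regardless of how many disks exist.
new_parity = old_parity ^ old_data ^ new_data

# Sanity check: the shortcut matches recomputing parity over the whole stripe.
assert new_parity == other_data ^ new_data
```

The shortcut is what makes small writes cost two reads and two writes rather than reading every disk in the stripe.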


RAID 5

The mechanism of RAID 5 is the same as RAID 4, except that the parity strips are distributed across all disks. The allocation of data follows a round-robin scheme: for an n-disk array, the parity strip is on a different disk for each of the first n stripes, and the pattern then repeats.
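One possible round-robin parity placement can be sketched in Python (actual RAID 5 layouts vary between implementations; this is only an illustration):

```python
def parity_disk(stripe, n):
    """Disk holding the parity strip for the given stripe in an n-disk array."""
    return (n - 1) - (stripe % n)

# In a 4-disk array the parity rotates through the disks, repeating every 4 stripes:
print([parity_disk(s, 4) for s in range(8)])   # [3, 2, 1, 0, 3, 2, 1, 0]
```

Because parity rotates, no single disk becomes a write bottleneck, which is the main improvement over RAID 4's dedicated parity disk.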


RAID 6

RAID 6 is similar to RAID 5, except that it includes a second parity element to allow survival in the event of two disk failures. Two different parity calculations are carried out and stored in separate blocks on different disks.

Example: using XOR parity plus an independent second data-check algorithm makes it possible to regenerate data even if two disks containing user data fail. The number of disks required is N + 2, where N is the number of disks required for the data.

This mechanism provides high data availability but incurs a substantial write penalty, since each write updates two parity blocks.


 

Question 4) Explain Nested RAID.

Answer) To improve performance while keeping fault tolerance, nested RAID combines the striping of RAID 0 with the redundancy benefit of RAID 1.


RAID 0+1: This scheme is also known as a mirrored stripe. First the data is striped across HDDs, then the entire stripe is mirrored. If one drive fails, the whole stripe is faulted. A rebuild requires data to be copied from every disk in the healthy stripe, increasing the load on the surviving disks.


RAID 1+0: This scheme is also known as a striped mirror. First the data is mirrored, and then the mirrored pairs are striped across multiple HDDs. When a drive fails, data is still accessible from its mirror, and a rebuild only requires copying data from the surviving disk of that pair into the replacement disk.


 

Question 5) What do you understand by peripheral devices?

Answer) A peripheral device is an internal or external device that connects directly to a computer or other digital device but does not contribute to the computer's primary function, such as computing. It helps end users access and use the functionalities of a computer.

Since it’s not a core device for the system, the computer can still function without the peripheral, which simply provides extra functions. However, some peripherals such as a mouse, keyboard, or monitor tend to be pretty much fundamental to the interaction between the user and the computer itself.

A peripheral device is also called a peripheral, computer peripheral, input-output device, or I/O device.


 

Question 6) What is the role of the input-output interface?

Answer) In computer systems there are special hardware components between the CPU and the peripherals that control and manage input-output transfers. These components are called input-output interface units because they provide communication links between the processor bus and the peripherals; the interface lies between the processor bus and the peripheral device (or its controller).

The I/O interface provides the method for transferring information between internal storage and external peripherals (I/O devices). It also resolves the differences between the CPU and the peripherals, such as:

  • Signal value conversion (peripherals are electromagnetic and electromechanical devices, while the CPU and memory are electronic).

  • Data transfer rate matching (the CPU is fast, I/O devices are slow).

  • Data code format conversion (the CPU and peripherals use different code formats).

  • Control of operating modes (so that one peripheral does not disturb the others).


 

Question 7) What are the three ways to communicate with both memory and input output devices?

Answer) Data transfer between the computer and external device may be handled by one of the three possible modes:

  • Programmed I/O.

  • Interrupt-initiated I/O

  • Direct Memory Access (DMA)

Programmed I/O

Each data transfer is initiated by instructions in a program (a transfer between a CPU register and the I/O device). I/O instructions are executed according to the operations written in the program: the I/O instruction transfers data to and from CPU registers, a memory load instruction moves it to memory, and another instruction verifies the data and counts the number of words transferred. Constant I/O monitoring is required: the CPU stays in a program loop until the I/O unit indicates that data is ready. This is time-consuming and wastes CPU time.


Programmed I/O (diagram)

The I/O device and the interface use handshaking for the transfer. Once data is available in the data register, the interface sets a flag bit (F) indicating data availability, and it does not reset the data-accepted line until the CPU has read the data and cleared the flag.

The CPU needs three instructions for each byte transferred:

  • Read the status register

  • Check the flag bit

  • Read data register when data available

Transfer can also be done in blocks for efficiency.
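The three-instruction polling loop above can be sketched against a mock interface (the class and method names here are invented for illustration; a real interface would be accessed through I/O instructions or memory-mapped registers):

```python
class MockInterface:
    """Pretend I/O interface: status bit 0 is the data-ready flag F."""
    def __init__(self, data):
        self._pending = list(data)

    def read_status(self):
        return 0b1 if self._pending else 0b0

    def read_data(self):
        # Reading the data register also clears the flag for this byte.
        return self._pending.pop(0)

def programmed_io_read(iface, count):
    received = []
    while len(received) < count:
        status = iface.read_status()            # 1. read the status register
        if status & 0b1:                        # 2. check the flag bit
            received.append(iface.read_data())  # 3. read the data register
        # otherwise the CPU just spins here -- busy-waiting wastes CPU time
    return received

iface = MockInterface([0x41, 0x42, 0x43])
print(programmed_io_read(iface, 3))   # [65, 66, 67]
```

The busy-wait loop is exactly why programmed I/O is considered wasteful: the CPU does nothing useful between ready flags.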


Interrupt-initiated I/O

In this method an interrupt facility is used: the interface is instructed to issue an interrupt request when the device is ready for data transfer. In the meantime, the CPU executes another program. When the interface determines that the device is ready, it generates an interrupt request and sends it to the CPU.

When the CPU receives such a signal, it temporarily stops executing the current program and takes care of the data transfer: it saves the return address from the program counter onto the stack, and control then branches to a service routine that processes the I/O transfer.

After completing the I/O transfer, control returns to the previous program.

There are two ways of choosing the branch address:

  • Vectored Interrupt

  • Non-vectored Interrupt

Vectored interrupt: the source that generated the interrupt supplies the branch information to the CPU; this information is called the interrupt vector.

Non-vectored interrupt: the branch address is assigned to a fixed location in memory.



Direct Memory Access (DMA)

The CPU limits the data transfer speed for fast I/O devices. DMA removes the CPU from the transfer path and allows the peripheral to manage the memory bus, transferring data directly between the I/O device and memory. During the transfer the CPU does not have control over the bus; the handover is done through the control signals Bus Request (BR) and Bus Grant (BG):

The DMA controller activates BR; the CPU then finishes its current operation and puts its address and data buses into the high-impedance state. The CPU sets the BG line (to inform the DMA controller that the buses are in high impedance). The DMA controller transfers the data, resets BR so that the CPU can use the memory bus again, and signals completion with an interrupt.

DMA data transfer can happen either as a burst transfer or by cycle stealing.


DMA controller (diagram)

 

Question 8) What do you understand by Asynchronous data transfer?

Answer) Internal operations in a computer are synchronized with an internal clock pulse generator, but the CPU and the I/O interfaces are independent units, each running on its own clock.

When the internal timing of each unit is independent of the other (each uses its own private clock for its internal registers), the two units are said to be asynchronous to each other, and any data transfer between them is called asynchronous data transfer.


 

Question 9) What is Strobe control?

Answer) Transfer of data between the CPU and the interface unit can be timed by a control signal called a strobe. In asynchronous data transfer, the strobe indicates the time at which data is transmitted: a strobe pulse is applied by one unit to tell the other when the transfer is to occur.

In strobe control the CPU is the source unit when producing output and the destination unit when receiving input. A single control line times each transfer; the strobe may be activated by either the source or the destination and indicates when valid data is present on the data bus.

Strobes are generally activated by clock signals, and the CPU is always in control of the transfer (the strobe always comes from the CPU). This method is mainly used in memory read/write operations; most I/O operations use handshaking instead.


 

Question 10) Explain the working of source initiated and destination-initiated data transfer.

Answer) In source-initiated data transfer, the source first places data on the bus and waits briefly for the data to settle. The source then activates the strobe pulse, and the destination reads the data into an internal register (often on the falling edge of the strobe). Once the data has been read, the source removes it from the bus after a brief delay.


Source-initiated transfer: block diagram and timing diagram


In destination-initiated data transfer, the transfer is initiated by the destination. The destination activates the strobe, and the source then places data on the bus, keeping it there until it is accepted. The destination reads the data into a register (generally on the falling edge of the strobe) and, once the data has been read, disables the strobe; the source then removes the data after a predetermined time.


Destination-initiated transfer: block diagram and timing diagram


 

Question 11) Explain the two-wire handshaking mechanism for data transfer.

Answer) Data transfer between the interface unit and the I/O device is commonly controlled by a set of handshaking lines.

There are two main disadvantages of strobe control:

  • In source initiation, the source does not know whether the destination received the data.

  • In destination initiation, the destination does not know whether the source has placed the data on the bus.

To solve this problem, two-wire handshaking is introduced:

  • The first control line runs in the same direction as the data flow. It is used by the source to indicate whether it has valid data on the bus.

  • The second control line runs from the destination to the source. It is used by the destination to indicate whether it can accept data. The sequence of control depends on which unit initiates the transfer, and if a fault occurs at either end, a timeout is used to detect the error.
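The source-initiated version of this handshake can be sketched as an event sequence (the signal names DATA_VALID and DATA_ACCEPTED are illustrative labels for the two control lines described above):

```python
def source_initiated_handshake():
    """Return the four-phase event sequence of a two-wire handshake."""
    events = []
    events.append("source: place data on bus, raise DATA_VALID")      # line 1
    events.append("destination: latch data, raise DATA_ACCEPTED")     # line 2
    events.append("source: remove data, drop DATA_VALID")             # ack seen
    events.append("destination: drop DATA_ACCEPTED (ready for next)")
    return events

for event in source_initiated_handshake():
    print(event)
```

Each side only advances after seeing the other side's signal change, which is why neither unit is left wondering whether the transfer happened, unlike plain strobe control.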


 

Question 12) Explain serial and parallel transmission.

Answer) When data is sent or received using serial data transmission, the data bits are organized in a specific order, since they can only be sent one after another. The order of the data bits is important as it dictates how the transmission is organized when it is received. It is viewed as a reliable data transmission method because a data bit is only sent if the previous data bit has already been received.

When data is sent using parallel data transmission, multiple data bits are transmitted over multiple channels at the same time. This means that data can be sent much faster than using serial transmission methods.


 

Question 13) What are the differences between synchronous and asynchronous serial transmission?

Answer)

| Basis for comparison | Synchronous transmission | Asynchronous transmission |
| --- | --- | --- |
| Clock pulse | Transmitter and receiver share a common clock pulse. | A common clock pulse is not shared by transmitter and receiver. |
| Speed of transmission | Fast | Comparatively slow |
| Form of data transmission | Data is sent in frames or blocks. | Data is transmitted byte by byte (character by character). |
| Time interval | Constant | Variable |
| Cost | Expensive | Comparatively less expensive |
| Efficiency | More efficient | Less efficient |
| Need for external clock | Exists | Does not exist |
| Need for start and stop bits | Not needed | Needed |
| Circuit | Complex | Comparatively less complex |


 

Question 14) What is Baud rate?

Answer) Baud rate is the rate at which signal units are transmitted over a communication channel. When each signal element encodes one bit, the baud rate is equal to the data transfer rate in bits per second.
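For asynchronous serial links, the baud rate also determines the useful character rate once framing overhead is counted; a small sketch assuming a common "8N1" frame (1 start bit, 8 data bits, 1 stop bit):

```python
def chars_per_second(baud, data_bits=8, start_bits=1, stop_bits=1):
    """Characters per second on an asynchronous serial line."""
    frame_bits = start_bits + data_bits + stop_bits
    return baud / frame_bits

# A 9600-baud 8N1 line spends 10 signal elements per character:
print(chars_per_second(9600))   # 960.0
```

This is why asynchronous transmission is listed as less efficient in the table above: the start and stop bits consume bandwidth on every character.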


 

Question 15) What are priority interrupts?

Answer) Generally, I/O data transfer is initiated by the CPU, but before the transfer can start, the device must be ready. Device readiness is signaled by an interrupt. When the CPU receives an interrupt, it responds to the request by pushing the return address onto the memory stack and branching to the interrupt service routine.

Priority interrupt system: when a number of I/O devices are attached and two or more interrupt requests arrive simultaneously, the priorities of the I/O devices are compared. The priority system determines which request to serve first (critical situations and fast I/O devices get higher priority) and under what conditions an interrupt service routine may itself be interrupted.


 

Question 16) What is DMA?

Answer) DMA stands for direct memory access. DMA is used to transfer data between fast devices (such as disks) and memory. During a DMA transfer the CPU is idle and has no control over the memory buses; the DMA controller takes control and manages the transfer between memory and the I/O device directly. The handover of the buses is done through the control signals Bus Request (BR) and Bus Grant (BG):

  1. The DMA controller activates BR.

  2. The CPU finishes its current operation and puts its address and data buses into the high-impedance state.

  3. The CPU sets the BG line (to inform the DMA controller that the buses are in high impedance).

  4. The DMA controller transfers the data, resets BR so that the CPU can use the memory bus again, and signals completion with an interrupt.

DMA data transfer can happen either as a burst transfer or by cycle stealing.


 

Question 17) Explain burst transfer and cycle stealing in DMA.

Answer) Burst transfer: in a DMA burst transfer, a block of memory words is transferred in a continuous sequence. This is needed for fast devices such as disks, where the transfer cannot be slowed or stopped until the entire block has been moved.

Cycle stealing: in DMA cycle stealing, only one word is transferred at a time, after which control of the buses is returned to the CPU. The CPU merely delays its operation for one memory cycle, allowing the DMA controller to "steal" one memory cycle.


 

Question 18) What are the various registers present in the DMA controller?

Answer) The DMA controller has three registers:

  1. Address register: holds the address of the memory location being accessed over the address bus; it is incremented after every word transferred.

  2. Word count register: holds the number of words still to be transferred; it is decremented after each word, down to zero.

  3. Control register: specifies the mode of transfer (read or write).
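The bookkeeping performed by these three registers can be sketched with a toy Python class (no real bus signals are modeled; the class is purely illustrative):

```python
class DMAController:
    """Toy model of the three DMA controller registers."""
    def __init__(self, start_address, word_count, mode):
        self.address = start_address    # address register
        self.count = word_count         # word count register
        self.control = mode             # control register: "read" or "write"

    def transfer_word(self):
        """One word moves: address incremented, word count decremented."""
        if self.count == 0:
            raise RuntimeError("transfer already complete")
        self.address += 1
        self.count -= 1
        return self.count == 0          # True when the block is finished

dma = DMAController(start_address=0x1000, word_count=3, mode="write")
while not dma.transfer_word():
    pass                                # keep moving words until count hits zero
print(hex(dma.address), dma.count)      # 0x1003 0
```

When the word count reaches zero the controller would disable BR and raise an interrupt, matching the termination sequence described in Question 19.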


 

Question 19) What is the sequence of DMA transfer?

Answer) A DMA transfer of data from an I/O device proceeds in the following sequence:

  1. The peripheral device sends a DMA request.

  2. The DMA controller activates BR (bus request).

  3. The CPU finishes the current bus cycle and grants the bus by activating BG.

  4. The DMA controller puts the current address on the address bus and activates RD or WR accordingly (the RD and WR lines of the DMA controller are bidirectional).

  5. The DMA controller acknowledges the peripheral.

  6. The peripheral puts data onto (or reads data from) the bus.

  7. Thus the peripheral reads or writes memory directly.

  8. For each word transferred, the DMA controller increments the address register and decrements the word count register.

If the word count is not zero, the DMA controller checks the request line from the peripheral: if it is active (fast devices), the next transfer is initiated immediately; otherwise BR is disabled.

If the word count is zero, the DMA controller stops the transfer, disables BR, and informs the CPU that the data transfer has terminated.

A zero value in the word count register indicates a successful transfer. A DMA controller can have more than one channel. DMA is commonly used with devices such as magnetic disks and screen displays.




