Threads | Operating System Notes | B.Tech

Threads: Operating System Class Notes

Updated: Oct 16, 2022

Mobiprep has created last-minute notes for all topics of operating system to help you with the revision of concepts for your university examinations. So let’s get started with the lecture notes on Operating System (OS).

Our team has curated a list of the most important questions asked in universities such as DU, DTU, VIT, SRM, IP, Pune University, Manipal University, and many more. The questions are created from the previous year's question papers of colleges and universities.


Threads


Question- 1) List benefits of threads.

Answer: The major benefits of writing multi-threaded programs are:

  1. Parallel programming techniques are easier to implement.

  2. Multi-threaded programs provide better performance.


Threads do have some limitations, however, and cannot be used for certain special purposes that still require multi-process programs.


Parallel Programming Concepts

There are two main advantages to using parallel programming instead of serial programming techniques:

  1. Parallel programming can improve the performance of a program.

  2. Some common software models are well suited to parallel programming techniques.


Traditionally, multiple single-threaded processes have been used to achieve parallelism, but some programs can benefit from a finer level of parallelism. Multi-threaded processes offer parallelism within a process and share many of the concepts involved in programming multiple single-threaded processes.

Modularity


Programs are often modeled as a number of distinct parts interacting with each other to produce a desired result or service. A program can be implemented as a single, complex entity that performs multiple functions among the different parts of the program. A simpler solution is to implement several entities, each performing a part of the program and sharing resources with the other entities.


By using multiple entities, a program can be separated according to its distinct activities, each having an associated entity. These entities do not have to know anything about the other parts of the program except when they exchange information. In these cases, they must synchronize with each other to ensure data integrity.


Threads are well-suited entities for modular programming. Threads provide simple data sharing (all threads within a process share the same address space) and powerful synchronization facilities (such as mutexes and condition variables).

Software Models


The following common software models can easily be implemented with threads.

  1. Master/slave model

  2. Divide-and-conquer model

  3. Producer/consumer model.


All these models lead to modular programs. Models may also be combined to efficiently solve complex tasks.


These models can apply either to traditional multi-process solutions or to single-process, multi-thread solutions on multi-threaded systems such as AIX. In the following descriptions, the word entity refers either to a single-threaded process or to a single thread in a multi-threaded process.


Master/Slave Model

In the master/slave (sometimes called boss/worker) model, a master entity receives one or more requests, then creates slave entities to execute them. Typically, the master controls how many slaves there are and what each slave does. A slave runs independently of other slaves.


An example of this model is a print job spooler controlling a set of printers. The spooler's role is to ensure that the print requests received are handled in a timely fashion. When the spooler receives a request, the master entity chooses a printer and causes a slave to print the job on that printer. Each slave prints one job at a time, handling flow control and other printing details. The spooler may support job cancellation or other features that require the master to cancel slave entities or reassign jobs.
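A minimal sketch of this model with POSIX threads is shown below; the job structure and the body of the slave are hypothetical placeholders for real spooler logic. The master creates one slave thread per request, and each slave handles its job independently.

  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical print request handed from the master to a slave. */
  struct job { int id; };

  /* Slave: handles one print job, then exits. */
  static void *slave(void *arg)
  {
      struct job *j = arg;
      printf("printing job %d\n", j->id);   /* stands in for real printer I/O */
      free(j);
      return NULL;
  }

  int main(void)
  {
      pthread_t workers[3];

      /* Master: for each incoming request, create a slave thread. */
      for (int i = 0; i < 3; i++) {
          struct job *j = malloc(sizeof *j);
          j->id = i;
          pthread_create(&workers[i], NULL, slave, j);
      }
      for (int i = 0; i < 3; i++)
          pthread_join(workers[i], NULL);
      return 0;
  }

Compile with cc -pthread so that the threads library is linked in; the same applies to the other sketches in these notes.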


Divide-and-Conquer Models

In the divide-and-conquer (sometimes called simultaneous computation or work crew) model, one or more entities perform the same tasks in parallel. There is no master entity; all entities run in parallel independently.


An example of a divide-and-conquer model is a parallelized grep command implementation, which could be done as follows. The grep command first establishes a pool of files to be scanned. It then creates a number of entities. Each entity takes a different file from the pool and searches for the pattern, sending the results to a common output device. When an entity completes its file search, it obtains another file from the pool or stops if the pool is empty.
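As an illustration (not the actual grep source), the sketch below uses a mutex-protected index as the shared pool; each worker thread repeatedly takes the next file name and "scans" it until the pool is empty. The file names and the scanning step are placeholders.

  #include <pthread.h>
  #include <stdio.h>

  /* Shared pool of files to scan (file names are placeholders). */
  static const char *pool[] = { "a.txt", "b.txt", "c.txt", "d.txt" };
  static int next_file = 0;                  /* index of the next unscanned file */
  static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Each entity takes a file from the pool, searches it, and repeats. */
  static void *scan_files(void *arg)
  {
      (void)arg;
      for (;;) {
          pthread_mutex_lock(&pool_lock);
          if (next_file >= 4) {              /* pool is empty: stop */
              pthread_mutex_unlock(&pool_lock);
              return NULL;
          }
          const char *file = pool[next_file++];
          pthread_mutex_unlock(&pool_lock);

          printf("searching pattern in %s\n", file);   /* real code would scan the file here */
      }
  }

  int main(void)
  {
      pthread_t t[2];
      for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, scan_files, NULL);
      for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
      return 0;
  }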

Producer/Consumer Models


The producer/consumer (sometimes called pipelining) model is typified by a production line. An item proceeds from raw components to a finished item through a series of stages. Usually a single worker at each stage modifies the item and passes it on to the next stage. In software terms, an AIX command pipe, such as one using the cpio command, is a good example of this model.


In a typical producer/consumer arrangement, a reader entity reads raw data from standard input and passes it to a processor entity, which processes the data and passes it to a writer entity, which writes it to standard output. Parallel programming allows the activities to be performed concurrently: the writer entity may output some processed data while the reader entity gets more raw data.
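The sketch below shows a two-stage version of this model with POSIX threads, assuming a one-slot buffer guarded by a mutex and a condition variable; the "items" are just integers standing in for real data.

  #include <pthread.h>
  #include <stdio.h>

  /* One-slot buffer shared by the producer and the consumer stages. */
  static int buffer, full = 0, done = 0;
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

  static void *producer(void *arg)          /* "reader" stage */
  {
      (void)arg;
      for (int i = 1; i <= 5; i++) {
          pthread_mutex_lock(&lock);
          while (full) pthread_cond_wait(&cond, &lock);  /* wait for an empty slot */
          buffer = i; full = 1;
          pthread_cond_signal(&cond);
          pthread_mutex_unlock(&lock);
      }
      pthread_mutex_lock(&lock);
      done = 1;                             /* tell the consumer there is no more data */
      pthread_cond_signal(&cond);
      pthread_mutex_unlock(&lock);
      return NULL;
  }

  static void *consumer(void *arg)          /* "writer" stage */
  {
      (void)arg;
      for (;;) {
          pthread_mutex_lock(&lock);
          while (!full && !done) pthread_cond_wait(&cond, &lock);
          if (!full && done) { pthread_mutex_unlock(&lock); return NULL; }
          printf("consumed %d\n", buffer); full = 0;
          pthread_cond_signal(&cond);
          pthread_mutex_unlock(&lock);
      }
  }

  int main(void)
  {
      pthread_t p, c;
      pthread_create(&p, NULL, producer, NULL);
      pthread_create(&c, NULL, consumer, NULL);
      pthread_join(p, NULL);
      pthread_join(c, NULL);
      return 0;
  }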


Performance Consideration

Multi-threaded programs can improve performance in many ways compared to traditional parallel programs using multiple processes. Furthermore, higher performance can be obtained on multiprocessor systems using threads.

Managing Threads


Managing threads, that is creating threads and controlling their execution, requires fewer system resources than managing processes. Creating a thread, for example, only requires the allocation of the thread's private data area, usually 64KB, and two system calls. Creating a process is far more expensive, because the entire parent process addressing space is duplicated.


The threads library API is also easier to use than the facilities for managing processes. Compare, for example, the six different ways of calling the exec subroutine with thread creation, which requires just one call: the pthread_create subroutine.
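For reference, a minimal pthread_create call looks like this (a sketch assuming a POSIX threads environment):

  #include <pthread.h>
  #include <stdio.h>

  static void *hello(void *arg)
  {
      printf("hello from thread %s\n", (const char *)arg);
      return NULL;
  }

  int main(void)
  {
      pthread_t tid;

      /* A single call creates the thread and starts it running hello(). */
      pthread_create(&tid, NULL, hello, "A");
      pthread_join(tid, NULL);   /* wait for the thread to finish */
      return 0;
  }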


Inter-Thread Communications

Inter-thread communication is far more efficient and easier to use than inter-process communication. Because all threads within a process share the same address space, they need not use shared memory. Shared data should just be protected from concurrent access using mutexes or other synchronization tools.


Synchronization facilities provided by the threads library allow easy implementation of flexible and powerful synchronization tools. These tools can easily replace traditional inter-process communication facilities, such as message queues. Note that pipes can be used as an inter-thread communication path.
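As a simple illustration, the sketch below protects a shared counter with a mutex; because both threads run in the same address space, no shared-memory segment or message queue is needed. The counter and thread function are made up for the example.

  #include <pthread.h>
  #include <stdio.h>

  static long counter = 0;                               /* shared by all threads */
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *bump(void *arg)
  {
      (void)arg;
      for (int i = 0; i < 100000; i++) {
          pthread_mutex_lock(&lock);                     /* protect the shared data */
          counter++;
          pthread_mutex_unlock(&lock);
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t a, b;
      pthread_create(&a, NULL, bump, NULL);
      pthread_create(&b, NULL, bump, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      printf("counter = %ld\n", counter);                /* always 200000 with the lock held */
      return 0;
  }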

Multiprocessor Systems


On a multiprocessor system, multiple threads can concurrently run on multiple CPUs. Therefore, multi-threaded programs can run much faster than on a uniprocessor system. They will also be faster than a program using multiple processes, because threads require fewer resources and generate less overhead. For example, switching threads in the same process can be faster, especially in the M:N library model where context switches can often be avoided. Finally, a major advantage of using threads is that a single multi-threaded program will work on a uniprocessor system, but can naturally take advantage of a multiprocessor system, without recompiling.


 

Question- 2) Discuss different thread modeling.

Answer: A thread is a lightweight process; every process can have one or more threads. Each thread has its own stack and a Thread Control Block (TCB). There are four basic thread models:


1. User Level Single Thread Model :

  • Each process contains a single thread.

  • The process itself acts as the single thread.

  • The process table contains an entry for every process, maintaining its PCB.

2. User Level Multi Thread Model :

  • Each process contains multiple threads.

  • All threads of the process are scheduled by a thread library at user level.

  • Thread switching can be done faster than process switching.

  • Thread switching is independent of the operating system and is done within the process.

  • If one thread blocks, the entire process blocks.

  • A thread table maintains the Thread Control Block of each thread of the process.

  • Thread scheduling happens within the process and is not known to the kernel.

3. Kernel Level Single Thread Model :

  • Each process contains a single thread.

  • The thread used here is a kernel-level thread.

  • The process table works as the thread table.

4. Kernel Level Multi Thread Model :

  • Thread scheduling is done at kernel level.

  • Fine grain scheduling is done on a thread basis.

  • If a thread blocks, another thread can be scheduled without blocking the whole process.

  • Thread scheduling at the kernel level is slower than user-level thread scheduling.

  • Thread switching involves a mode switch to the kernel.

 

Question- 3) What is Thread?

Answer: A thread is the smallest sequence of instructions in a program that can be managed independently by the scheduler, the part of the operating system that decides what runs when.

 

Question- 4) What are different types of threads?

Answer: Threads are mainly of two types:

  1. User level

  2. Kernel level.

 

Question- 5) What is shared among the threads of a process?

Answer: The threads of a process share the process's code section, data section, and operating-system resources such as open files, while each thread has its own registers, program counter, and stack. This is illustrated in the figure below.

[Figure: single-threaded process vs. multi-threaded process in an operating system]

 

Question- 6) Differentiate Linux threads and Java threads.

Answer:

Linux threads                                              | Java threads
-----------------------------------------------------------|----------------------------------------------------
Not suitable for parallel activities.                      | Suitable for parallel activities.
Independent, so they cannot share a common address space. | Dependent, so they can share a common memory space.
Loosely coupled.                                           | Tightly coupled.
Need more resources to execute.                            | Need fewer resources to execute.
Take more time to create.                                  | Take less time to create.



 





