October 29, 2016
Categorised in: Operating System Design
A thread is the smallest unit of processing that can be performed in an OS.
In most modern operating systems, a thread exists within a process – that is, a single process may contain multiple threads.
A thread has its own:
- Program counter
- System registers
A thread shares the following information with its peer threads:
- Code segment
- Data segment
- Open files
A thread is a basic unit of CPU utilization.
It consists of a thread ID, a program counter, a register set, and a stack.
It is a single, sequential flow of control within a program.
If a process has multiple threads of control, it can perform more than one task at a time.
Threads are a way for a program to split itself into two or more simultaneously running tasks.
Traditional processes have two characteristics:
- Resource ownership: a process includes a virtual address space to hold the process image, and the OS provides protection to prevent unwanted interference between processes with respect to resources
- Scheduling/execution: a process follows an execution path that may be interleaved with other processes
– A process has an execution state (Running, Ready, etc.) and a dispatching priority, and is scheduled and dispatched by the OS
– Traditional processes are sequential; i.e. they have only one execution path
Multithreading separates these two characteristics:
- The unit of resource ownership is referred to as a process or task
- The unit of dispatching is referred to as a thread or lightweight process
Difference between Process and Thread
Single-threaded and Multithreaded Processes
Foreground and background work:
- A word processor may have separate threads for:
– Displaying graphics
– Responding to keystrokes from the user
– Performing spelling and grammar checking in the background
In a spreadsheet program:
- One thread could display menus and read user input, while another thread executes user commands and updates the spreadsheet.
- This arrangement often increases the perceived speed of the application by allowing the program to prompt for the next command before the previous command is complete.
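The spreadsheet arrangement can be sketched in Python, with `threading.Thread` and a `Queue` standing in for the front-end and command-execution threads (the command names here are hypothetical):

```python
import threading
import queue

commands = queue.Queue()
results = []

def executor():
    # Background thread: executes user commands and "updates the spreadsheet"
    # while the front-end thread stays free to read the next command.
    while True:
        cmd = commands.get()
        if cmd is None:              # sentinel: no more commands
            break
        results.append(cmd.upper())  # stand-in for executing the command

worker = threading.Thread(target=executor)
worker.start()
for cmd in ["sum", "sort"]:          # front-end thread keeps accepting input
    commands.put(cmd)
commands.put(None)
worker.join()
```

Because the front-end thread only enqueues commands, it can prompt for the next command before the previous one completes, which is exactly the perceived-speed benefit described above.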
Imagine a C program that calls two long-running procedures one after the other. The behaviour is strictly sequential: the second procedure cannot start until the first has finished.
In a version of the program with threads, a call such as CreateThread starts an independent thread running a given procedure. Now the two procedures run concurrently: the program behaves as if there were two separate CPUs.
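A minimal sketch of the same idea in Python, where `threading.Thread` plays the role of CreateThread (the procedure and its names are hypothetical stand-ins for the original listing):

```python
import threading
import time

finished = []

def procedure(name):
    # Stand-in for a long-running procedure from the example program.
    time.sleep(0.01)
    finished.append(name)

# Run sequentially, B could not start until A finished. With threads,
# both procedures run concurrently, as if on two separate CPUs.
a = threading.Thread(target=procedure, args=("A",))
b = threading.Thread(target=procedure, args=("B",))
a.start()                 # plays the role of CreateThread
b.start()
a.join()                  # wait for both independent threads to finish
b.join()
```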
Thread Example: Multithreaded Server Architecture
In a multithreaded server, a dispatcher thread accepts each incoming client request and hands it off to a worker thread, so the server can service many clients concurrently.
Benefits/Advantages of Threads
Takes less time to create a new thread than a process
Less time to terminate a thread than a process
Switching between two threads takes less time than switching between processes
Threads enhance efficiency in communication: threads within the same process share memory and files, so they can communicate with each other without invoking the kernel.
Support for threads may be provided either at the user level, for user threads (supported above the kernel and managed without kernel support), or by the kernel, for kernel threads (supported and managed directly by the OS).
Kernel threads are supported by virtually all modern operating systems. Examples:
- Windows
- Linux
- Solaris
- Mac OS X
There are three common ways of establishing a relationship between user and kernel threads: the many-to-one, one-to-one, and many-to-many models.
Many to One User Level Threads
Many user-level threads are mapped to a single kernel thread, and thread management is done by a user-level thread library.
Basically, the kernel is not aware of the existence of the threads.
Thread switching does not require kernel mode privileges and scheduling is application specific
Thread management is done by the thread library in user space, so it is efficient
Just as a uniprocessor provides the illusion of parallelism by multiplexing multiple processes on a single CPU, user-level thread packages provide the illusion of parallelism by multiplexing multiple user threads on a single kernel thread
Since there is only one kernel thread, if a user thread executes a blocking system call, the entire process blocks, since no other user thread can execute until the kernel thread (which is blocked in the system call) becomes available
Multithreaded programs will run no faster on multiprocessors than they run on uniprocessors
The single kernel thread acts as a bottleneck, preventing optimal use of the multiprocessor
– Easy to implement, with few system dependencies
– Cannot take advantage of parallelism
– May have to block for synchronous I/O, though there is a clever technique, jacketing (converting a blocking system call into a non-blocking one), for avoiding it
One to One User Level Threads
Each user-level thread maps to a kernel thread.
–This model provides more concurrency than the many-to-one model.
–It also allows another thread to run when a thread makes a blocking system call.
– It allows multiple threads to execute in parallel on multiprocessors.
– The only drawback is that creating a user thread requires creating the corresponding kernel thread.
– The overhead of creating kernel threads can burden the performance of an application.
Examples: OS/2, Windows NT, and Windows 2000
Many to Many User Level Threads
In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.
Allows many user-level threads to be mapped to many kernel threads
Idea is to combine the best of both approaches
Difference between User Level & Kernel Level Thread
Threading issues include:
- Semantics of fork() and exec() system calls
- Thread cancellation of a target thread
– Asynchronous or deferred
- Signal handling
Semantics of fork() and exec()
The fork() system call is used to create (duplicate) a process.
If the creating process has multiple threads, are the threads duplicated as well?
Some systems provide two versions of fork(): one duplicates all the threads of the process, while the other duplicates only the thread that invoked fork().
exec() is used to load another program, replacing the current one.
If a thread invokes exec(), the program specified in the parameter to exec() replaces the entire process, including all its threads.
Which version of fork() to use depends on the programmer and on the needs of the application: if exec() is called immediately after forking, duplicating all the threads is unnecessary.
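The duplicate-only-the-calling-thread behaviour can be observed from Python on a POSIX system: `os.fork()` duplicates only the thread that calls it, so a child forked from a multithreaded parent sees a single thread (a sketch; assumes Linux/CPython):

```python
import os
import threading

def forked_thread_count():
    """Fork while a background thread is running and report how many
    threads the parent and the child each see."""
    stop = threading.Event()
    helper = threading.Thread(target=stop.wait, daemon=True)
    helper.start()                    # parent now has at least two threads
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: only the thread that called fork() was duplicated.
        os.write(write_fd, str(threading.active_count()).encode())
        os._exit(0)
    os.close(write_fd)
    child_count = int(os.read(read_fd, 16))
    os.waitpid(pid, 0)
    os.close(read_fd)
    parent_count = threading.active_count()
    stop.set()
    helper.join()
    return parent_count, child_count
```

The parent reports two or more threads; the child reports exactly one.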
Signals are used in UNIX systems to notify a process that a particular event has occurred.
A signal handler is used to process signals:
1. A signal is generated by a particular event
2. The signal is delivered to a process
3. The signal is handled
In a multithreaded program, where should a signal be delivered? The options are:
– Deliver the signal to the thread to which the signal applies
– Deliver the signal to every thread in the process
– Deliver the signal to certain threads in the process
– Assign a specific thread to receive all signals for the process
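CPython illustrates the last option concretely: it assigns the main thread to run all Python-level signal handlers, no matter which thread caused the signal to be generated (POSIX-only sketch using SIGUSR1):

```python
import os
import signal
import threading
import time

handled_in = []

def handler(signum, frame):
    # CPython runs Python-level signal handlers in the main thread.
    handled_in.append(threading.current_thread().name)

signal.signal(signal.SIGUSR1, handler)

def worker():
    # The signal is sent to the process as a whole, not to this thread.
    os.kill(os.getpid(), signal.SIGUSR1)

t = threading.Thread(target=worker)
t.start()
t.join()
while not handled_in:        # main thread must run for delivery to happen
    time.sleep(0.01)
```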
Thread cancellation means stopping a thread before it has completed.
E.g. when multiple threads concurrently search a database and one thread finds the result, the remaining threads can be cancelled.
The thread to be cancelled is called the target thread.
Cancellation of a thread can be asynchronous or deferred:
In asynchronous cancellation, one thread immediately terminates the target thread.
In deferred cancellation, the target thread periodically checks whether it should terminate, allowing it to terminate itself in an orderly fashion.
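Deferred cancellation can be sketched with a flag that the target thread checks at safe points; here a `threading.Event` serves as the cancellation flag:

```python
import threading
import time

def worker(cancel, results):
    for i in range(1000):
        if cancel.is_set():      # cancellation point: check whether to stop
            return               # terminate cleanly at a safe point
        results.append(i)
        time.sleep(0.001)

cancel = threading.Event()
results = []
target = threading.Thread(target=worker, args=(cancel, results))
target.start()
time.sleep(0.02)
cancel.set()                     # request cancellation of the target thread
target.join()                    # target notices the flag and terminates
```

Because the target decides when to honour the request, it never dies in the middle of updating shared data, which is the hazard of asynchronous cancellation.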
Thread Specific Data
Allows each thread to have its own copy of data
Threads within the same process share many resources: the address space, code, data, and open files.
In some cases, however, a thread needs its own copy of certain data; the data a thread keeps for its own execution is called thread-specific data.
Thread-specific data is useful when you do not have control over the thread creation process (i.e., when using a thread pool).
Transaction-processing systems, for example, may service each transaction in a separate thread and tag it with a unique identifier kept as thread-specific data.
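Python exposes thread-specific data as `threading.local()`: each thread that assigns to it sees only its own copy (a sketch with a hypothetical per-transaction identifier):

```python
import threading

tsd = threading.local()          # thread-specific data: one copy per thread
observed = {}

def handle_transaction(txn_id):
    tsd.txn_id = txn_id          # this thread's private copy
    # ... work on the transaction; other threads cannot see tsd.txn_id ...
    observed[txn_id] = tsd.txn_id

threads = [threading.Thread(target=handle_transaction, args=(i,))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four threads assign to the same name `tsd.txn_id`, yet none of them overwrites another's value.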
Thread Pools
– Create a number of threads in a pool, where they await work
– It is usually slightly faster to service a request with an existing thread than to create a new thread
– Allows the number of threads in the application(s) to be bound to the size of the pool
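Python's `concurrent.futures.ThreadPoolExecutor` is one realization of this idea: a bounded pool whose worker threads are created once and reused for each request (the request function here is a hypothetical stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def service_request(n):
    # Stand-in for servicing one client request.
    return n * n

# At most 3 worker threads exist, no matter how many requests arrive,
# so the number of threads is bound to the size of the pool.
with ThreadPoolExecutor(max_workers=3) as pool:
    responses = list(pool.map(service_request, range(6)))
```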
Linux Threads
Linux refers to them as tasks rather than threads
Thread creation is done through clone() system call
clone() allows a child task to share the address space of the parent task (process)
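`clone()` is not directly callable from portable Python, but the sharing it controls can be contrasted with `fork()`: threads share the parent's address space, while a forked child gets its own copy (POSIX-only sketch):

```python
import os
import threading

shared = {"n": 0}

def bump():
    shared["n"] += 1             # a thread writes the shared address space

t = threading.Thread(target=bump)
t.start()
t.join()                         # the thread's write is visible to the parent

pid = os.fork()
if pid == 0:
    shared["n"] += 100           # the child writes its own private copy
    os._exit(0)
os.waitpid(pid, 0)               # the child's write is NOT visible here
```

A `clone()` call with `CLONE_VM` behaves like the thread (one shared address space); without it, like the fork (a separate copy).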
Each thread can be in one of 5 states:
0) Start – allocate resources
1) Running – has the CPU
2) Blocked – waiting for I/O or synchronization with another thread
3) Ready to run – on the ready list, waiting for CPU
4) Done – deallocate resources
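The five states and their legal transitions can be written down as a small table; this is a sketch of the generic model above, not any particular OS's scheduler:

```python
from enum import Enum

class ThreadState(Enum):
    START = 0        # allocate resources
    RUNNING = 1      # has the CPU
    BLOCKED = 2      # waiting for I/O or synchronization
    READY = 3        # on the ready list, waiting for the CPU
    DONE = 4         # deallocate resources

# Legal transitions in the five-state model: a blocked thread must
# return to the ready list before it can be dispatched again.
TRANSITIONS = {
    ThreadState.START:   {ThreadState.READY},
    ThreadState.READY:   {ThreadState.RUNNING},
    ThreadState.RUNNING: {ThreadState.READY, ThreadState.BLOCKED,
                          ThreadState.DONE},
    ThreadState.BLOCKED: {ThreadState.READY},
    ThreadState.DONE:    set(),
}

def can_transition(src, dst):
    return dst in TRANSITIONS[src]
```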