Do you know what magical behind-the-scenes mechanism allows your computer to juggle multiple tasks effortlessly? If not, buckle up: we are going on a journey through the inner workings of process scheduling in operating systems, why it matters, and how it ensures smooth multitasking. So, let's get started!
What is Process Scheduling?
Process scheduling is the fundamental mechanism employed by operating systems to manage and allocate system resources to multiple processes or tasks running concurrently.
It determines the order in which processes are executed, ensuring that every task gets its fair share of the CPU’s attention.
You can imagine a chef juggling various dishes simultaneously; that’s what process scheduling does for your computer!
If you are not yet familiar with what a process is in an operating system, learn about processes first and then move forward.
Why Is Process Scheduling Needed?
You might be wondering: why bother with process scheduling? Well, imagine a world without it. Your computer would freeze up, leaving you stuck with one task at a time.
Process scheduling optimizes resource utilization, improves system efficiency, and enhances user experience. It’s the superhero of multitasking!
Operating System Process Scheduling Algorithms
Now, we’ll explore various operating system process scheduling algorithms that dictate how the CPU selects and assigns tasks.
From the straightforward FCFS to the dynamic Multilevel Queue Scheduling, each algorithm has its quirks and charm.
First-Come, First-Serve (FCFS) Scheduling
Imagine you’re waiting in line at a bustling coffee shop, eagerly anticipating your turn to place an order. Just like in real life, First-Come, First-Serve (FCFS) Scheduling works on the principle of serving tasks based on their arrival time.
How Does FCFS Scheduling Work?
FCFS scheduling is one of the simplest scheduling algorithms. It operates on the basis of a queue, where processes are placed in the order they arrive.
The CPU executes the processes in the same order, following a “first in, first out” approach. Once a process starts executing, it continues until it completes or is interrupted.
The FCFS algorithm can be explained using a real-life analogy. Imagine you’re a teacher grading assignments. You stack the assignments in the order you received them, and you grade them one by one, starting from the top of the stack. The first assignment you received is the first one you grade, and so on.
Similarly, FCFS scheduling processes tasks in the order they arrive, without considering their execution time or priority.
FCFS Scheduling Algorithm in C Language
Let’s take a look at a simple implementation of the FCFS scheduling algorithm in the C programming language.
#include <stdio.h>

#define MAX_PROCESSES 10

typedef struct {
    int process_id;
    int arrival_time;
    int burst_time;
} Process;

// Assumes processes[] is supplied in order of arrival.
void fcfs_scheduling(Process processes[], int n) {
    int completion_time[MAX_PROCESSES];
    int waiting_time[MAX_PROCESSES];
    int turnaround_time[MAX_PROCESSES];
    int total_waiting_time = 0;
    int total_turnaround_time = 0;

    // The first process starts as soon as it arrives
    completion_time[0] = processes[0].arrival_time + processes[0].burst_time;
    waiting_time[0] = 0;
    turnaround_time[0] = completion_time[0] - processes[0].arrival_time;
    total_turnaround_time += turnaround_time[0];

    for (int i = 1; i < n; i++) {
        // A process starts when the CPU frees up or when it arrives, whichever is later
        int start_time = (completion_time[i - 1] > processes[i].arrival_time)
                             ? completion_time[i - 1]
                             : processes[i].arrival_time;
        completion_time[i] = start_time + processes[i].burst_time;
        waiting_time[i] = start_time - processes[i].arrival_time;
        turnaround_time[i] = completion_time[i] - processes[i].arrival_time;
        total_waiting_time += waiting_time[i];
        total_turnaround_time += turnaround_time[i];
    }

    printf("Process\tArrival Time\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time\n");
    for (int i = 0; i < n; i++) {
        printf("%d\t%d\t\t%d\t\t%d\t\t%d\t\t%d\n", processes[i].process_id, processes[i].arrival_time,
               processes[i].burst_time, completion_time[i], waiting_time[i], turnaround_time[i]);
    }

    float average_waiting_time = (float)total_waiting_time / n;
    float average_turnaround_time = (float)total_turnaround_time / n;
    printf("\nAverage Waiting Time: %.2f\n", average_waiting_time);
    printf("Average Turnaround Time: %.2f\n", average_turnaround_time);
}

int main() {
    Process processes[MAX_PROCESSES];
    int n;

    printf("Enter the number of processes: ");
    scanf("%d", &n);

    printf("Enter the arrival time and burst time for each process:\n");
    for (int i = 0; i < n; i++) {
        printf("Process %d:\n", i + 1);
        processes[i].process_id = i + 1;
        printf("Arrival Time: ");
        scanf("%d", &processes[i].arrival_time);
        printf("Burst Time: ");
        scanf("%d", &processes[i].burst_time);
    }

    fcfs_scheduling(processes, n);
    return 0;
}
Shortest Job First (SJF) Scheduling Algorithm
In the realm of process scheduling algorithms, Shortest Job First (SJF) stands out as a method that prioritizes efficiency and minimizing waiting time. Let’s delve into the inner workings of SJF scheduling, understand its criteria, and explore an implementation in the C programming language.
How Does SJF Scheduling Work?
SJF scheduling, also known as Shortest Job Next (SJN) or Shortest Process Next (SPN) scheduling, operates on a simple principle: the shortest job goes first. It prioritizes processes based on their burst time, executing the one with the shortest burst time first. This approach minimizes average waiting time and improves overall system performance.
Criteria of SJF Scheduling Algorithm
The SJF scheduling algorithm relies on the following criteria to determine the order of process execution:
- Burst Time: Burst time refers to the time required for a process to complete its execution. The algorithm selects the process with the shortest burst time, assuming that shorter tasks will finish quickly and minimize the waiting time for other processes.
- Preemptive and Non-Preemptive SJF:
a. Preemptive SJF: In the preemptive version of SJF, if a new process with a shorter burst time arrives while a process is already running, the running process is interrupted and replaced by the shorter job. This allows for dynamic adaptation to changing conditions.
b. Non-Preemptive SJF: In the non-preemptive version, once a process starts executing, it continues until completion, even if a shorter job arrives during its execution. This approach guarantees fairness but may result in longer waiting times if shorter jobs arrive later.
Implementing SJF Scheduling in C language
Now, let’s explore an implementation of the SJF scheduling algorithm in the C programming language.
The code example below demonstrates the non-preemptive version of SJF:
#include <stdio.h>

// Non-preemptive SJF: assumes all processes arrive at time 0.
void sjf_scheduling(int processes[], int burst_time[], int n) {
    int waiting_time[n], turnaround_time[n], completion_time[n];
    int total_waiting_time = 0, total_turnaround_time = 0;

    // Sort processes by burst time so the shortest job runs first
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (burst_time[j] > burst_time[j + 1]) {
                int tmp = burst_time[j];
                burst_time[j] = burst_time[j + 1];
                burst_time[j + 1] = tmp;
                tmp = processes[j];
                processes[j] = processes[j + 1];
                processes[j + 1] = tmp;
            }
        }
    }

    // Calculate completion time for each process
    completion_time[0] = burst_time[0];
    for (int i = 1; i < n; i++) {
        completion_time[i] = completion_time[i - 1] + burst_time[i];
    }

    // Calculate turnaround time and waiting time for each process
    for (int i = 0; i < n; i++) {
        turnaround_time[i] = completion_time[i]; // arrival time is 0
        waiting_time[i] = turnaround_time[i] - burst_time[i];
        total_waiting_time += waiting_time[i];
        total_turnaround_time += turnaround_time[i];
    }

    // Display the scheduling results
    printf("Process\tBurst Time\tCompletion Time\tWaiting Time\tTurnaround Time\n");
    for (int i = 0; i < n; i++) {
        printf("%d\t%d\t\t%d\t\t%d\t\t%d\n", processes[i], burst_time[i], completion_time[i], waiting_time[i], turnaround_time[i]);
    }

    // Calculate and display average waiting time and turnaround time
    float avg_waiting_time = (float)total_waiting_time / n;
    float avg_turnaround_time = (float)total_turnaround_time / n;
    printf("\nAverage Waiting Time: %.2f\n", avg_waiting_time);
    printf("Average Turnaround Time: %.2f\n", avg_turnaround_time);
}

int main() {
    int processes[] = {1, 2, 3, 4, 5};  // Array to store process IDs
    int burst_time[] = {6, 8, 2, 4, 3}; // Array to store burst times
    int n = sizeof(processes) / sizeof(processes[0]);

    // Call the SJF scheduling function
    sjf_scheduling(processes, burst_time, n);
    return 0;
}
Round Robin (RR) Scheduling
Round Robin (RR) scheduling is like a fair and democratic approach to process scheduling. Just like in a round-robin tournament, each participant gets an equal chance to compete. In the world of operating systems, RR scheduling ensures that every process gets an equal share of the CPU’s attention.
How does Round Robin (RR) Scheduling Work?
In RR scheduling, each process is assigned a fixed time slice called a “quantum.” The CPU allocates time slices to processes in a cyclic manner, where each process receives the CPU for a predefined quantum, and then it moves on to the next process in line.
Criteria of the Round Robin Scheduling
- Quantum Size: The quantum size determines how long each process gets to execute before being preempted and giving way to the next process. It is typically a small value to achieve fair CPU time distribution.
- Process Queue: Processes are organized in a queue, and the CPU executes them in a circular order, moving to the next process once a quantum expires.
- Preemption: RR scheduling employs preemptive behavior, meaning that even if a process hasn’t completed, it is paused after its quantum and moved to the back of the queue to let the next process execute.
- Time Sharing: Since each process receives a fixed quantum, even long-running processes cannot monopolize the CPU, allowing other processes to get their turn.
Round Robin Scheduling Code in C Language
#include <stdio.h>

#define MAX_PROCESS 10
#define TIME_SLICE 2

struct Process {
    int process_id;
    int burst_time;
    int remaining_time;
};

void round_robin_scheduling(struct Process processes[], int n) {
    int i, total_time = 0;

    // Calculate the total burst time of all processes
    for (i = 0; i < n; i++) {
        total_time += processes[i].burst_time;
        processes[i].remaining_time = processes[i].burst_time;
    }

    // Perform round-robin scheduling until all processes finish
    while (total_time > 0) {
        for (i = 0; i < n; i++) {
            if (processes[i].remaining_time > 0) {
                if (processes[i].remaining_time <= TIME_SLICE) {
                    // Process finishes within this quantum
                    total_time -= processes[i].remaining_time;
                    processes[i].remaining_time = 0;
                } else {
                    // Process uses the full quantum and rejoins the cycle
                    total_time -= TIME_SLICE;
                    processes[i].remaining_time -= TIME_SLICE;
                }
                printf("Process %d executed for a quantum. Remaining time: %d\n", processes[i].process_id, processes[i].remaining_time);
            }
        }
    }
}

int main() {
    struct Process processes[MAX_PROCESS] = {
        {1, 8},
        {2, 4},
        {3, 6},
        {4, 2}
    };
    int num_processes = 4;

    round_robin_scheduling(processes, num_processes);
    return 0;
}
Priority Scheduling
In process scheduling, some tasks demand immediate attention and preferential treatment. That’s where Priority Scheduling comes into play. This algorithm assigns a priority value to each process, indicating its significance or urgency.
The CPU selects and executes processes based on their priority, allowing high-priority tasks to take the spotlight. Think of it as a red carpet event where the most important guests are given VIP treatment.
Criteria for the Priority Scheduling Algorithm
Priority scheduling can follow either a preemptive or non-preemptive approach. Let’s explore the criteria used in this algorithm:
- Priority Assignment: Each process is assigned a priority value, which can be an integer or a floating-point number. Whether a larger number means a higher or a lower priority is a matter of convention; in the C example below, a lower value indicates a higher priority. The priority can be determined based on factors such as the importance of the task, its deadline, or its expected execution time.
- Preemptive or Non-Preemptive:
In preemptive priority scheduling, a running process can be interrupted and replaced by a higher-priority process if it arrives. Conversely, in non-preemptive priority scheduling, a running process continues until it completes or voluntarily gives up the CPU.
Preemptive scheduling ensures that urgent tasks are promptly addressed, while non-preemptive scheduling guarantees that once a process starts, it can complete its execution without interruptions.
- Priority Aging: To prevent starvation of lower-priority processes, a technique called priority aging can be employed. It gradually increases the priority of waiting processes over time. This ensures that lower-priority tasks eventually get their chance to execute, maintaining fairness in the system.
Priority Scheduling Code in C Language
Now, let’s dive into an example of implementing Priority Scheduling in the C programming language. The following code demonstrates a non-preemptive approach where the process with the highest priority completes its execution before the CPU moves on to the next process:
#include <stdio.h>

struct Process {
    int process_id;
    int burst_time;
    int priority;
};

void priority_scheduling(struct Process processes[], int n) {
    int i, j;
    struct Process temp;

    // Sort ascending by priority value: the lowest value (highest priority) runs first
    for (i = 0; i < n - 1; i++) {
        for (j = 0; j < n - i - 1; j++) {
            if (processes[j].priority > processes[j + 1].priority) {
                temp = processes[j];
                processes[j] = processes[j + 1];
                processes[j + 1] = temp;
            }
        }
    }

    printf("Process Execution Order: ");
    for (i = 0; i < n; i++) {
        printf("%d ", processes[i].process_id);
    }
    printf("\n");
}

int main() {
    int i, n;

    printf("Enter the number of processes: ");
    scanf("%d", &n);

    struct Process processes[n];
    printf("Enter the burst time and priority for each process:\n");
    for (i = 0; i < n; i++) {
        printf("Process %d: ", i + 1);
        scanf("%d %d", &processes[i].burst_time, &processes[i].priority);
        processes[i].process_id = i + 1;
    }

    priority_scheduling(processes, n);
    return 0;
}
Multilevel Queue Scheduling
In the world of process scheduling, one size doesn’t fit all. Different processes have varying priorities and characteristics. That’s where Multilevel Queue Scheduling comes into play. Let’s explore how this dynamic algorithm juggles multiple queues to balance priorities and ensure efficient resource allocation.
How Does Multilevel Queue Scheduling Work?
Multilevel Queue Scheduling categorizes processes into multiple queues based on predetermined criteria. Each queue has its own scheduling algorithm, allowing processes with similar characteristics and priorities to be treated differently.
It’s like managing different lanes on a highway, where each lane has its own speed limit and traffic rules.
Criteria of Multilevel Queue Scheduling Algorithm
The criteria used to categorize processes into different queues can vary based on the system’s requirements. Here are some common criteria used for queue classification:
- Priority: Processes with higher priority, such as system-critical tasks, are assigned to high-priority queues, while lower-priority tasks are placed in separate queues. It’s like giving VIPs express access while others wait patiently.
- Process Type: Different types of processes, such as interactive tasks, batch jobs, or background processes, may have distinct queues based on their specific requirements. Interactive tasks, for example, may be prioritized to ensure a responsive user interface.
- Resource Requirements: Processes that require specific resources, such as I/O-bound tasks or CPU-bound tasks, can be assigned to separate queues. This ensures that resource-intensive tasks do not hamper the overall system performance.
- Aging: Processes that wait too long in a lower-priority queue can be promoted to a higher-priority queue. This prevents starvation and ensures fairness in resource allocation.
Multilevel Queue Scheduling Algorithm in C Language
Let’s now explore an example of implementing the Multilevel Queue scheduling algorithm using C language.
#include <stdio.h>

struct Process {
    int process_id;
    int burst_time;
    int priority; // 0 = high priority, 1 = low priority
};

void multilevel_queue_scheduling(struct Process processes[], int n) {
    // Define separate queues based on priority levels
    struct Process high_priority_queue[n];
    struct Process low_priority_queue[n];
    int high_priority_count = 0;
    int low_priority_count = 0;

    // Categorize processes into respective queues based on priority
    for (int i = 0; i < n; i++) {
        if (processes[i].priority == 0) {
            high_priority_queue[high_priority_count] = processes[i];
            high_priority_count++;
        } else {
            low_priority_queue[low_priority_count] = processes[i];
            low_priority_count++;
        }
    }

    // Perform scheduling for each queue separately
    printf("High Priority Queue Scheduling:\n");
    printf("Process\tBurst Time\n");
    for (int i = 0; i < high_priority_count; i++) {
        printf("%d\t%d\n", high_priority_queue[i].process_id, high_priority_queue[i].burst_time);
        // Implement scheduling algorithm for high priority queue
        // ...
    }

    printf("\nLow Priority Queue Scheduling:\n");
    printf("Process\tBurst Time\n");
    for (int i = 0; i < low_priority_count; i++) {
        printf("%d\t%d\n", low_priority_queue[i].process_id, low_priority_queue[i].burst_time);
        // Implement scheduling algorithm for low priority queue
        // ...
    }
}

int main() {
    struct Process processes[] = {
        {1, 8, 0},
        {2, 6, 1},
        {3, 10, 1},
        {4, 4, 0},
        {5, 7, 1}
    };
    int n = sizeof(processes) / sizeof(processes[0]);

    multilevel_queue_scheduling(processes, n);
    return 0;
}
Process Synchronization
Process synchronization refers to the coordination and orderly execution of concurrent processes to avoid race conditions, deadlocks, and other undesired outcomes.
It involves implementing mechanisms that allow processes to share resources, communicate, and coordinate their activities in a mutually exclusive and orderly manner.
Think of it as orchestrating a complex dance performance where each dancer follows specific steps at the right time to create a seamless and synchronized routine.
Types of Process Synchronization
Let’s explore some commonly used process synchronization mechanisms:
Mutex (Mutual Exclusion)
Mutex is a synchronization object used to ensure that only one process can access a shared resource at a time.
It acts as a lock, allowing a process to acquire exclusive access to a resource, perform its critical section of code, and then release the lock for other processes to use.
Semaphore
A semaphore is a synchronization object that controls access to a shared resource based on a counter.
It can allow multiple processes to access a resource simultaneously up to a certain limit. Semaphores use two operations, namely “wait” and “signal”.
When a process wants to access a resource, it must acquire the semaphore using the “wait” operation. If the resource is available, the process proceeds; otherwise, it waits until it becomes available.
Once a process finishes using the resource, it releases the semaphore using the “signal” operation, allowing other waiting processes to proceed.
Condition Variables
Condition variables provide a way for processes to synchronize their execution based on certain conditions. Processes can wait on a condition variable until a specific condition is met, at which point they can proceed.
These are often used in conjunction with mutexes to ensure safe and synchronized access to shared resources.
Imagine a group of people waiting in a queue until the count reaches a certain number before they can enter a venue. They wait patiently until the condition is met, ensuring a coordinated and orderly entrance.
Scheduling Algorithms Performance Metrics
Now that we have explored various process scheduling algorithms, it’s important to evaluate their performance using specific metrics.
Performance metrics provide quantitative measurements that help us assess the effectiveness and efficiency of different scheduling algorithms. Some commonly used metrics are:
Throughput
Throughput measures the number of processes completed per unit of time. It indicates the system’s overall processing capacity and efficiency. The formula for throughput is:
Throughput = Total number of completed processes / Total execution time
A higher throughput value indicates that the system is capable of processing a larger number of tasks within a given time frame, resulting in better performance.
Turnaround Time (TAT)
Turnaround time is the total time taken for a process to complete, from its arrival in the system to its completion. It includes both the waiting time in the ready queue and the execution time. The formula for turnaround time is:
Turnaround Time = Completion Time - Arrival Time
Turnaround time provides an insight into how long a process has to wait and how efficiently the system handles its execution.
A lower turnaround time indicates faster task completion and better performance.
Waiting Time
Waiting time measures the total time a process spends waiting in the ready queue before its execution begins. It reflects the efficiency of the scheduling algorithm in managing process priorities and resource allocation. The formula for waiting time is:
Waiting Time = Turnaround Time - Burst Time
A shorter waiting time implies quicker access to resources and faster execution, resulting in improved performance.
Response Time
Response time refers to the time it takes for a process to start responding after a request is made. It measures the system's responsiveness and how quickly it begins servicing incoming tasks. A common formulation is: Response Time = Time of First CPU Allocation - Arrival Time. For non-preemptive algorithms this equals the waiting time, since a process runs to completion once it first gets the CPU.
Process Scheduling Challenges
In the fast-paced world of technology, process scheduling faces new challenges brought about by advancements in hardware and the ever-increasing demands of modern applications.
Let’s dive into the real-world challenges that affect process scheduling in modern operating systems, ensuring optimal performance and efficient resource utilization.
Multicore Processors and Thread Scheduling
Gone are the days of single-core processors dominating the computing landscape. With the advent of multicore processors, process scheduling has taken on a new level of complexity.
Now, operating systems must not only allocate processes to cores but also manage the scheduling of threads within each core.
Multicore processors enable simultaneous execution of multiple threads, resulting in improved performance and multitasking capabilities.
Thread scheduling becomes crucial in efficiently utilizing the available cores. Operating systems employ algorithms like the multilevel queue scheduler or the popular Completely Fair Scheduler (CFS) to distribute threads across cores, ensuring fairness and load balancing.
Real-Time Scheduling
In certain applications, such as embedded systems, industrial control systems, or critical infrastructure, real-time scheduling is of utmost importance.
Real-time tasks have stringent timing requirements, with deadlines that must be met to avoid system failures or compromising safety.
Real-time scheduling distinguishes between hard real-time tasks, which have strict and non-negotiable deadlines, and soft real-time tasks, which have deadlines that can be missed occasionally without catastrophic consequences.
Operating systems employ scheduling algorithms like Rate Monotonic Scheduling (RMS) or Earliest Deadline First (EDF) to prioritize real-time tasks and ensure timely execution.
Interactivity and User Experience
In today’s interactive world, user experience plays a vital role in the success of an operating system. Users expect their devices to be responsive, with smooth and seamless interactions.
Operating systems employ scheduling techniques that prioritize interactive tasks to provide a snappy and enjoyable user experience.
Schedulers incorporate mechanisms to differentiate between interactive tasks, such as user input or graphical rendering, and non-interactive background tasks like file backups or system maintenance.
By giving priority to interactive tasks, the operating system ensures that user actions are handled promptly, making the system feel responsive and engaging.
Power Management and Energy Efficiency
Energy efficiency has become a significant concern in modern computing systems. As devices become more mobile and battery-powered, optimizing power consumption is crucial. Process scheduling plays a role in managing power resources effectively.
Operating systems employ techniques like CPU frequency scaling and task migration to lower the power consumption of idle or less demanding processes.
For example, the Linux kernel implements the “ondemand” or “powersave” CPU governors, dynamically adjusting CPU frequencies based on workload to conserve power.
Virtualization and Containerization
Virtualization and containerization technologies have revolutionized the way software is deployed and managed. With virtual machines (VMs) and containerized environments, multiple operating system instances or isolated environments can run concurrently on the same physical hardware.
Process scheduling in virtualized environments involves managing the allocation of resources, such as CPU time and memory, among VMs or containers.
Techniques like virtual CPU scheduling and resource quota enforcement ensure fair and efficient resource utilization, maintaining isolation and performance across multiple virtualized instances.
Wrapping Up
In the intricate world of process scheduling in operating systems, we have explored the fundamental concepts, various scheduling algorithms, synchronization mechanisms, and real-world considerations. We have seen how process scheduling plays a vital role in enabling multitasking, optimizing resource utilization, and enhancing overall system performance.
As we ventured into the real world, we discovered the challenges and considerations faced by modern operating systems. By embracing these challenges and considerations, operating systems can provide seamless multitasking, responsiveness, energy efficiency, and scalability, catering to the diverse needs of applications and users alike.
Process scheduling is the backbone of efficient multitasking and a hidden force that allows us to switch effortlessly between applications. So, the next time you enjoy a smooth and responsive computing experience, take a moment to appreciate the magic happening behind the scenes.
Now, it’s your turn! Have you ever encountered a situation where process scheduling played a significant role in your computing experience? Share your thoughts, experiences, or questions in the comments below. Let’s continue the conversation and unravel more mysteries of the fascinating world of process scheduling.
Happy scheduling!