Operating System: Question Set – 06
How does multilevel feedback queue scheduling work?
Processes can move between queues based on behavior:
- Processes that use too much CPU time are moved to lower-priority queues.
- I/O-bound or interactive processes may be promoted to higher-priority queues.
This flexibility helps balance responsiveness and throughput.
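The queue movement described above can be sketched in a few lines. This is a minimal illustrative simulation, not a real OS scheduler: the two-level structure, the quanta (2 and 4), and the process names are assumptions made for the example. A process that exhausts its quantum is demoted to the lower-priority queue.

```python
from collections import deque

def mlfq(processes, quanta=(2, 4)):
    """processes: dict name -> burst time. Returns the execution trace."""
    queues = [deque(processes), deque()]   # queue 0 = high priority, queue 1 = low
    remaining = dict(processes)
    trace = []
    while any(queues):
        level = 0 if queues[0] else 1      # always serve the higher queue first
        name = queues[level].popleft()
        run = min(quanta[level], remaining[name])
        remaining[name] -= run
        trace.append((name, level, run))
        if remaining[name] > 0:
            # Used its full quantum without finishing: demote it
            # (or leave it in the lowest queue if already there).
            queues[min(level + 1, 1)].append(name)
    return trace

print(mlfq({"A": 5, "B": 3}))
```

Both processes start in the high-priority queue, burn their quantum of 2, get demoted, and finish in the low-priority queue; an I/O-bound process that blocked before its quantum expired could instead be kept at (or promoted to) the higher level.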
What is throughput maximization in process scheduling?
Throughput maximization aims to complete as many processes as possible per unit time. Algorithms such as Shortest Job Next (SJN) are effective for maximizing throughput.
What is the convoy effect in process scheduling?
The convoy effect occurs when a long CPU-bound process holds the CPU, forcing shorter I/O-bound processes to wait behind it, which lowers overall system efficiency. First-Come-First-Served (FCFS) scheduling frequently causes it.
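The effect is easy to quantify. The sketch below (burst times are illustrative; all jobs are assumed to arrive at t=0) computes FCFS waiting times with a long job at the head of the queue versus the same jobs with the short ones first:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each job under FCFS when all arrive at t=0."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # each job waits for everything ahead of it
        elapsed += b
    return waits

convoy = fcfs_waiting_times([24, 3, 3])     # long CPU-bound job first
reordered = fcfs_waiting_times([3, 3, 24])  # short jobs first
print(sum(convoy) / 3, sum(reordered) / 3)  # 17.0 vs 3.0
```

The same three jobs yield an average wait of 17 time units when the long job leads the queue, but only 3 when the short jobs run first.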
Why is Shortest Job Next (SJN) scheduling considered optimal?
SJN minimizes the average waiting time by running the processes with the shortest burst times first. However, it requires advance knowledge of each burst time, which is difficult to obtain in practice.
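A short sketch of the shortest-first idea, assuming all processes arrive at t=0 and burst times are known (the burst values are illustrative):

```python
def sjn_average_wait(bursts):
    """Average waiting time when jobs run in order of increasing burst time."""
    total_wait, elapsed = 0, 0
    for b in sorted(bursts):    # shortest job next
        total_wait += elapsed   # this job waited for all shorter jobs before it
        elapsed += b
    return total_wait / len(bursts)

print(sjn_average_wait([6, 8, 7, 3]))  # runs 3 -> 6 -> 7 -> 8, average wait 7.0
```

Any other ordering of these four jobs gives a higher average wait, which is why SJN is provably optimal for this metric when burst times are known.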
What is a race condition in process scheduling?
A race condition occurs when several processes access shared resources concurrently and the outcome depends on the order in which they execute. Synchronization mechanisms such as locks or semaphores prevent race conditions.
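A minimal sketch using threads and a lock: several threads increment a shared counter, and the lock serializes the read-modify-write so no updates are lost. Without the lock, interleaved increments could silently drop counts.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # remove this lock and updates may be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock held
```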
What factors influence the choice of a scheduling algorithm?
- Type of system (real-time, batch, or interactive).
- Number of processes.
- Process priority and burst time distribution.
- Desired goals (e.g., low waiting time, high throughput, fairness).
What is the difference between hard and soft real-time scheduling?
- Hard real-time scheduling: Guarantees task completion within strict deadlines (e.g., flight control systems).
- Soft real-time scheduling: Prioritizes deadline adherence but doesn’t guarantee it (e.g., video streaming).
What is load balancing in process scheduling?
Load balancing distributes the workload evenly across all CPUs or processors in a system, improving performance and preventing bottlenecks.
How do modern operating systems handle scheduling on multicore processors?
Modern OSes implement:
- Symmetric multiprocessing (SMP): All processors are treated equally, and any process can run on any processor.
- Processor affinity: Binds a process to a specific CPU to improve cache performance.
- Load sharing: Dynamically distributes tasks across available cores.
How does priority inversion occur, and how is it resolved?
Priority inversion happens when a higher-priority process waits for a lower-priority process holding a needed resource.
Resolution techniques:
- Priority inheritance (low-priority process temporarily gets higher priority).
- Avoidance via robust design.
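The priority-inheritance rule can be captured in one line: while a task holds a resource that higher-priority tasks are blocked on, its effective priority is the maximum of its own base priority and the priorities of its waiters. The sketch below is illustrative; the priority values are assumptions.

```python
def effective_priority(base, waiter_priorities):
    """Holder temporarily inherits the highest priority among its waiters."""
    return max([base] + waiter_priorities)

low, high = 1, 10
print(effective_priority(low, []))       # 1: no one blocked, base priority
print(effective_priority(low, [high]))   # 10: inherits the waiter's priority
```

With inheritance in effect, the low-priority holder can no longer be preempted by medium-priority tasks, so it finishes its critical section and releases the resource to the high-priority waiter promptly.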
Why is FCFS not suitable for interactive systems?
In FCFS, short or interactive tasks can be stuck behind long CPU-bound jobs, hurting responsiveness and user experience.
How does aging solve the starvation problem?
Aging gradually raises the priority of a waiting process over time, so even when higher-priority processes keep taking precedence, the waiting process eventually receives CPU time.
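The mechanism reduces to a simple loop: each tick spent waiting boosts the process's priority, so it is guaranteed to overtake any fixed higher priority after finitely many ticks. The priority values and boost step below are illustrative assumptions.

```python
def ticks_until_scheduled(base_priority, competing_priority, boost=1):
    """Ticks until an aging process overtakes a fixed higher-priority rival."""
    ticks = 0
    while base_priority <= competing_priority:
        base_priority += boost   # age: raise priority each tick spent waiting
        ticks += 1
    return ticks

print(ticks_until_scheduled(base_priority=2, competing_priority=10))  # 9
```

Starvation is impossible because the wait is bounded: a process starting at priority 2 against a rival at priority 10 is scheduled after 9 ticks at most.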
What is the difference between deterministic and probabilistic scheduling?
- Deterministic scheduling: Assumes complete knowledge of process arrival times and burst times (ideal for analysis).
- Probabilistic scheduling: Considers uncertainties like variable arrival times and burst durations (real-world scenarios).
What metrics are used to evaluate scheduling algorithms?
- CPU Utilization: Percentage of time the CPU is active.
- Throughput: Number of processes completed per unit time.
- Turnaround Time: Total time from submission to completion.
- Waiting Time: Time spent in the ready queue.
- Response Time: Time from submission to the first execution.
- Fairness: Ensures all processes get a fair share of CPU time.
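The first five metrics above can be computed directly for a concrete schedule. The sketch below assumes a non-preemptive FCFS run where all processes arrive at t=0 (so waiting time equals response time); the burst times are illustrative.

```python
def fcfs_metrics(bursts):
    """Compute scheduling metrics for FCFS with all arrivals at t=0."""
    elapsed, turnaround, waiting, response = 0, [], [], []
    for b in bursts:
        response.append(elapsed)     # submission (t=0) to first execution
        waiting.append(elapsed)      # time spent in the ready queue
        elapsed += b
        turnaround.append(elapsed)   # submission (t=0) to completion
    n = len(bursts)
    return {
        "throughput": n / elapsed,            # processes per unit time
        "avg_turnaround": sum(turnaround) / n,
        "avg_waiting": sum(waiting) / n,
        "avg_response": sum(response) / n,
    }

print(fcfs_metrics([5, 3, 8]))
```

For bursts of 5, 3, and 8 this gives a throughput of 3/16 processes per unit time, an average waiting time of 13/3, and an average turnaround of 29/3; under preemptive policies, response time would diverge from waiting time.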