CENG334 Introduction to Operating Systems
1
CENG334 Introduction to Operating Systems
  • Synchronization
  • Topics
  • Synchronization problem
  • Race conditions and Critical Sections
  • Mutual exclusion
  • Locks
  • Spinlocks
  • Mutexes
  • Erol Sahin
  • Dept of Computer Eng.
  • Middle East Technical University
  • Ankara, TURKEY

Some of the following slides are adapted from
Matt Welsh, Harvard Univ.
2
Single and Multithreaded Processes
3
Synchronization
  • Threads cooperate in multithreaded programs in
    several ways
  • Access to shared state
  • e.g., multiple threads accessing a memory cache
    in a Web server
  • To coordinate their execution
  • e.g., Pressing stop button on browser cancels
    download of current page
  • stop button thread has to signal the download
    thread
  • For correctness, we have to control this
    cooperation
  • Must assume threads interleave executions
    arbitrarily and at different rates
  • scheduling is not under the application's control
  • We control cooperation using synchronization
  • enables us to restrict the interleaving of
    executions

4
Shared Resources
  • We'll focus on coordinating access to shared
    resources
  • Basic problem
  • Two concurrent threads are accessing a shared
    variable
  • If the variable is read/modified/written by both
    threads, then access to the variable must be
    controlled
  • Otherwise, unexpected results may occur
  • We'll look at
  • Mechanisms to control access to shared resources
  • Low-level mechanisms: locks
  • Higher-level mechanisms: mutexes, semaphores,
    monitors, and condition variables
  • Patterns for coordinating access to shared
    resources
  • bounded buffer, producer-consumer, ...
  • This stuff is complicated and rife with pitfalls
  • Details are important for completing assignments
  • Expect questions on the midterm/final!

5
Shared Variable Example
  • Suppose we implement a function to withdraw money
    from a bank account

    int withdraw(account, amount) {
        balance = get_balance(account);
        balance = balance - amount;
        put_balance(account, balance);
        return balance;
    }
  • Now suppose that you and your friend share a bank
    account with a balance of 1000.00TL
  • What happens if you both go to separate ATM
    machines, and simultaneously withdraw 100.00TL
    from the account?

6
Example continued
  • We represent the situation by creating a separate
    thread for each ATM user doing a withdrawal
  • Both threads run on the same bank server system
  • Thread 1
    Thread 2
  • What's the problem with this?
  • What are the possible balance values after each
    thread runs?

7
Interleaved Execution
  • The execution of the two threads can be
    interleaved
  • Assume preemptive scheduling
  • Each thread can context switch after each
    instruction
  • We need to worry about the worst-case scenario!
  • What's the account balance after this sequence?
  • And who's happier, the bank or you???

Execution sequence as seen by the CPU:

    Thread 1: balance = get_balance(account)
    Thread 1: balance = balance - amount
                                        ← context switch
    Thread 2: balance = get_balance(account)
    Thread 2: balance = balance - amount
    Thread 2: put_balance(account, balance)
                                        ← context switch
    Thread 1: put_balance(account, balance)
8
Interleaved Execution
  • The execution of the two threads can be
    interleaved
  • Assume preemptive scheduling
  • Each thread can context switch after each
    instruction
  • We need to worry about the worst-case scenario!
  • What's the account balance after this sequence?
  • And who's happier, the bank or you???

Execution sequence as seen by the CPU (Balance = 1000TL initially):

    Thread 1: balance = get_balance(account)    // reads 1000TL
    Thread 1: balance = balance - amount        // local = 900TL
                                        ← context switch
    Thread 2: balance = get_balance(account)    // reads 1000TL
    Thread 2: balance = balance - amount        // local = 900TL
    Thread 2: put_balance(account, balance)     // Balance = 900TL
                                        ← context switch
    Thread 1: put_balance(account, balance)     // Balance = 900TL!
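The lost update above can be reproduced deterministically by spelling out the worst-case interleaving as straight-line Python (a sketch; the local-variable names are illustrative, not from the slides):

```python
# Simulate the worst-case interleaving of two withdraw(account, 100) calls:
# both threads read the balance before either one writes it back.
balance = 1000  # shared account balance, in TL

t1_local = balance - 100   # Thread 1: get_balance, then subtract -> 900
# -- context switch --
t2_local = balance - 100   # Thread 2: also reads 1000, computes 900
balance = t2_local         # Thread 2: put_balance(900)
# -- context switch --
balance = t1_local         # Thread 1: put_balance(900), overwriting

print(balance)  # 900, not the expected 800: one withdrawal is lost
```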
9
Race Conditions
  • A race occurs when correctness of the program
    depends on one thread reaching point x before
    another thread reaches point y
  • The problem is that two concurrent threads access
    a shared resource without any synchronization
  • This is called a race condition
  • The result of the concurrent access is
    non-deterministic
  • Result depends on
  • Timing
  • When context switches occurred
  • Which thread ran at context switch
  • What the threads were doing
  • We need mechanisms for controlling access to
    shared resources in the face of concurrency
  • This allows us to reason about the operation of
    programs
  • Essentially, we want to re-introduce determinism
    into the thread's execution
  • Synchronization is necessary for any shared data
    structure
  • buffers, queues, lists, hash tables,

10
Which resources are shared?
  • Local variables in a function are not shared
  • They exist on the stack, and each thread has its
    own stack
  • You can't safely pass a pointer from a local
    variable to another thread
  • Why?
  • Global variables are shared
  • Stored in static data portion of the address
    space
  • Accessible by any thread
  • Dynamically-allocated data is shared
  • Stored in the heap, accessible by any thread

(Reserved for OS)
Unshared
Stack for thread 0
Stack for thread 1
Stack for thread 2
Shared
Heap
Uninitialized vars (BSS segment)
Initialized vars (data segment)
Code (text segment)
11
Mutual Exclusion
  • We want to use mutual exclusion to synchronize
    access to shared resources
  • Meaning: only one thread can access a shared
    resource at a time.
  • Code that uses mutual exclusion to synchronize
    its execution is called a critical section
  • Only one thread at a time can execute code in the
    critical section
  • All other threads are forced to wait on entry
  • When one thread leaves the critical section,
    another can enter

Critical Section
(modify account balance)
12
Mutual Exclusion
  • We want to use mutual exclusion to synchronize
    access to shared resources
  • Meaning: only one thread can access a shared
    resource at a time.
  • Code that uses mutual exclusion to synchronize
    its execution is called a critical section
  • Only one thread at a time can execute code in the
    critical section
  • All other threads are forced to wait on entry
  • When one thread leaves the critical section,
    another can enter

Critical Section
(modify account balance)
2nd thread must wait for critical section to clear
13
Mutual Exclusion
  • We want to use mutual exclusion to synchronize
    access to shared resources
  • Meaning: only one thread can access a shared
    resource at a time.
  • Code that uses mutual exclusion to synchronize
    its execution is called a critical section
  • Only one thread at a time can execute code in the
    critical section
  • All other threads are forced to wait on entry
  • When one thread leaves the critical section,
    another can enter

Critical Section
(modify account balance)
1st thread leaves critical section
2nd thread free to enter
14
Critical Section Requirements
  • Mutual exclusion
  • At most one thread is currently executing in the
    critical section
  • Progress
  • If thread T1 is outside the critical section,
    then T1 cannot prevent T2 from entering the
    critical section
  • Bounded waiting (no starvation)
  • If thread T1 is waiting on the critical section,
    then T1 will eventually enter the critical
    section
  • Assumes threads eventually leave critical
    sections
  • Performance
  • The overhead of entering and exiting the critical
    section is small with respect to the work being
    done within it

15
Locks
  • A lock is an object (in memory) that provides the
    following two operations:
  • acquire( ): a thread calls this before entering a
    critical section
  • May require waiting to enter the critical section
  • release( ): a thread calls this after leaving a
    critical section
  • Allows another thread to enter the critical
    section
  • A call to acquire( ) must have a corresponding
    call to release( )
  • Between acquire( ) and release( ), the thread
    holds the lock
  • acquire( ) does not return until the caller holds
    the lock
  • At most one thread can hold a lock at a time
    (usually!)
  • We'll talk about the exceptions later...
  • What can happen if acquire( ) and release( )
    calls are not paired?

16
Using Locks
17
Execution with Locks
Thread 1 runs
Thread 2 waits on lock
Thread 1 completes
Thread 2 resumes
  • What happens when the blue thread tries to
    acquire the lock?

18
Spinlocks
  • Very simple way to implement a lock
  • Why doesn't this work?
  • Where is the race condition?

struct lock {
    int held = 0;
};

void acquire(lock) {
    while (lock->held)
        ;               // spin
    lock->held = 1;
}

void release(lock) {
    lock->held = 0;
}
19
Implementing Spinlocks
  • Problem is that the internals of the lock
    acquire/release have critical sections too!
  • The acquire( ) and release( ) actions must be
    atomic
  • Atomic means that the code cannot be interrupted
    during execution
  • All or nothing execution

20
Implementing Spinlocks
  • Problem is that the internals of the lock
    acquire/release have critical sections too!
  • The acquire( ) and release( ) actions must be
    atomic
  • Atomic means that the code cannot be interrupted
    during execution
  • All or nothing execution

This sequence needs to be atomic
21
Implementing Spinlocks
  • Problem is that the internals of the lock
    acquire/release have critical sections too!
  • The acquire( ) and release( ) actions must be
    atomic
  • Atomic means that the code cannot be interrupted
    during execution
  • All or nothing execution
  • Doing this requires help from hardware!
  • Disabling interrupts
  • Why does this prevent a context switch from
    occurring?
  • Atomic instructions CPU guarantees entire
    action will execute atomically
  • Test-and-set
  • Compare-and-swap

22
Spinlocks using test-and-set
  • CPU provides the following as one atomic
    instruction
  • So to fix our broken spinlocks, we do this

bool test_and_set(bool *flag) {
    // Hardware-dependent implementation
}

struct lock {
    int held = 0;
};

void acquire(lock) {
    while (test_and_set(&lock->held))
        ;               // spin
}

void release(lock) {
    lock->held = 0;
}
23
What's wrong with spinlocks?
  • OK, so spinlocks work (if you implement them
    correctly), and they are simple. So what's the
    catch?

24
Problems with spinlocks
  • Horribly wasteful!
  • Threads waiting to acquire locks spin on the CPU
  • Eats up lots of cycles, slows down progress of
    other threads
  • Note that other threads can still run ... how?
  • What happens if you have a lot of threads trying
    to acquire the lock?
  • Only want spinlocks as primitives to build
    higher-level synchronization constructs

25
Disabling Interrupts
  • An alternative to spinlocks
  • Can two threads disable/reenable interrupts at
    the same time?
  • What's wrong with this approach?

struct lock {
    // Note -- no state!
};

void acquire(lock) {
    cli();  // disable interrupts
}

void release(lock) {
    sti();  // reenable interrupts
}
26
Disabling Interrupts
  • An alternative to spinlocks
  • Can two threads disable/reenable interrupts at
    the same time?
  • What's wrong with this approach?
  • Can only be implemented at kernel level (why?)
  • Inefficient on a multiprocessor system (why?)
  • All locks in the system are mutually exclusive
  • No separation between different locks for
    different bank accounts

struct lock {
    // Note -- no state!
};

void acquire(lock) {
    cli();  // disable interrupts
}

void release(lock) {
    sti();  // reenable interrupts
}
27
Peterson's Algorithm
int flag[2] = {0, 0};
int turn;

P0:
    flag[0] = 1;
    turn = 1;
    while (flag[1] == 1 && turn == 1)
        ;  // busy wait
    // critical section
    ...
    // end of critical section
    flag[0] = 0;

P1:
    flag[1] = 1;
    turn = 0;
    while (flag[0] == 1 && turn == 0)
        ;  // busy wait
    // critical section
    ...
    // end of critical section
    flag[1] = 0;
The algorithm uses two variables, an array flag and
a variable turn. A flag value of 1 indicates that
the process wants to enter the critical section.
The variable turn holds the ID of the process whose
turn it is. Entrance to the critical section is
granted for process P0 if P1 does not want to enter
its critical section, or if P1 has given priority
to P0 by setting turn to 0.
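Peterson's algorithm can be exercised in Python, with a caveat: this sketch relies on CPython executing loads and stores in program order (a consequence of its global interpreter lock); on real hardware, compilers and CPUs reorder memory accesses, so explicit memory barriers would be needed. The worker/counter names are illustrative:

```python
import threading
import time

# Peterson's algorithm for two threads, generalized: process `me`
# sets turn to `other` and waits while the other wants in and it is
# the other's turn. Works under CPython's sequentially consistent
# execution; real hardware would need memory barriers.

flag = [False, False]
turn = 0
counter = 0   # shared state protected by the algorithm
N = 5000      # iterations per thread

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(N):
        flag[me] = True
        turn = other
        while flag[other] and turn == other:
            time.sleep(0)             # busy wait (yield so the other runs)
        counter += 1                  # critical section
        flag[me] = False              # end of critical section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 10000: mutual exclusion means no increment is lost
```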
28
Mutexes: Blocking Locks
  • Really want a thread waiting to enter a critical
    section to block
  • Put the thread to sleep until it can enter the
    critical section
  • Frees up the CPU for other threads to run
  • Straightforward to implement using our TCB
    queues!

1) Check lock state
Lock wait queue
29
Mutexes: Blocking Locks
  • Really want a thread waiting to enter a critical
    section to block
  • Put the thread to sleep until it can enter the
    critical section
  • Frees up the CPU for other threads to run
  • Straightforward to implement using our TCB
    queues!

1) Check lock state
2) Set state to locked
3) Enter critical section
Lock wait queue
30
Mutexes: Blocking Locks
  • Really want a thread waiting to enter a critical
    section to block
  • Put the thread to sleep until it can enter the
    critical section
  • Frees up the CPU for other threads to run
  • Straightforward to implement using our TCB
    queues!

1) Check lock state
Lock wait queue
31
Mutexes: Blocking Locks
  • Really want a thread waiting to enter a critical
    section to block
  • Put the thread to sleep until it can enter the
    critical section
  • Frees up the CPU for other threads to run
  • Straightforward to implement using our TCB
    queues!

1) Check lock state
2) Add self to wait queue (sleep)
Lock wait queue
32
Mutexes: Blocking Locks
  • Really want a thread waiting to enter a critical
    section to block
  • Put the thread to sleep until it can enter the
    critical section
  • Frees up the CPU for other threads to run
  • Straightforward to implement using our TCB
    queues!

1) Check lock state
2) Add self to wait queue (sleep)
Lock wait queue
33
Mutexes: Blocking Locks
  • Really want a thread waiting to enter a critical
    section to block
  • Put the thread to sleep until it can enter the
    critical section
  • Frees up the CPU for other threads to run
  • Straightforward to implement using our TCB
    queues!

1) Thread 1 finishes critical section
Lock wait queue
34
Mutexes: Blocking Locks
  • Really want a thread waiting to enter a critical
    section to block
  • Put the thread to sleep until it can enter the
    critical section
  • Frees up the CPU for other threads to run
  • Straightforward to implement using our TCB
    queues!

1) Thread 1 finishes critical section
2) Reset lock state to unlocked
3) Wake one thread from wait queue
Lock wait queue
35
Mutexes: Blocking Locks
  • Really want a thread waiting to enter a critical
    section to block
  • Put the thread to sleep until it can enter the
    critical section
  • Frees up the CPU for other threads to run
  • Straightforward to implement using our TCB
    queues!

Thread 3 can now grab lock and enter critical
section
Lock wait queue
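In practice, Python's threading.Lock behaves like the blocking mutex sketched above: a thread that finds the lock held is put to sleep on a wait queue and woken on release. A small illustrative sketch (the withdraw/restore pattern is invented here to make the result checkable):

```python
import threading

# A blocking lock in practice: threading.Lock sleeps waiters instead
# of spinning. Each thread does a read-modify-write pair atomically,
# so the balance always returns to its starting value.

lock = threading.Lock()
balance = 1000

def withdraw_and_restore(amount):
    global balance
    for _ in range(1000):
        with lock:                      # acquire: may block (sleep)
            balance = balance - amount  # critical section
            balance = balance + amount  # restore before releasing
        # leaving the with-block releases the lock, waking one waiter

threads = [threading.Thread(target=withdraw_and_restore, args=(100,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 1000: the read-modify-write pairs never interleave
```

Without the lock, a context switch between the subtract and the restore could let another thread observe (or overwrite) an intermediate balance.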
36
Limitations of locks
  • Locks are great, and simple. What can they not
    easily accomplish?
  • What if you have a data structure where it's OK
    for many threads to read the data, but only one
    thread to write the data?
  • Bank account example.
  • Locks only let one thread access the data
    structure at a time.

37
Limitations of locks
  • Locks are great, and simple. What can they not
    easily accomplish?
  • What if you have a data structure where it's OK
    for many threads to read the data, but only one
    thread to write the data?
  • Bank account example.
  • Locks only let one thread access the data
    structure at a time.
  • What if you want to protect access to two (or
    more) data structures at a time?
  • e.g., Transferring money from one bank account to
    another.
  • Simple approach Use a separate lock for each.
  • What happens if you have transfer from account A
    -> account B, at the same time as transfer from
    account B -> account A?
  • Hmmmmm ... tricky.
  • We will get into this next time.

38
Now..
  • Higher-level synchronization primitives: how to
    do fancier stuff than just locks
  • Semaphores, monitors, and condition variables
  • Implemented using basic locks as a primitive
  • Allow applications to perform more complicated
    coordination schemes

39
CENG334 Introduction to Operating Systems
  • Semaphores
  • Topics
  • Need for higher-level synchronization primitives
  • Semaphores and their implementation
  • The Producer/Consumer problem and its solution
    with semaphores
  • The Reader/Writer problem and its solution with
    semaphores
  • Erol Sahin
  • Dept of Computer Eng.
  • Middle East Technical University
  • Ankara, TURKEY

Some of the following slides are adapted from
Matt Welsh, Harvard Univ.
40
Higher-level synchronization primitives
  • We have looked at one synchronization primitive:
    locks
  • Locks are useful for many things, but sometimes
    programs have different requirements.
  • Examples?
  • Say we had a shared variable where we wanted any
    number of threads to read the variable, but only
    one thread to write it.
  • How would you do this with locks?
  • What's wrong with this code?

Reader() {
    lock.acquire();
    mycopy = shared_var;
    lock.release();
    return mycopy;
}

Writer() {
    lock.acquire();
    shared_var = NEW_VALUE;
    lock.release();
}
41
Semaphores
  • Higher-level synchronization construct
  • Designed by Edsger Dijkstra in the 1960s, as part
    of the THE operating system (classic stuff!)
  • Semaphore is a shared counter
  • Two operations on semaphores
  • P() or wait() or down()
  • From Dutch "proberen", meaning to test
  • Atomic action
  • Wait for semaphore value to become > 0, then
    decrement it
  • V() or signal() or up()
  • From Dutch "verhogen", meaning to increase
  • Atomic action
  • Increments semaphore value by 1.

42
Semaphore Example
  • Semaphores can be used to implement locks
  • A semaphore where the counter value is only 0 or
    1 is called a binary semaphore.

43
Simple Semaphore Implementation
struct semaphore {
    int val;
    threadlist L;    // List of threads waiting for semaphore
};

down(semaphore S) {  // Wait until > 0, then decrement
    if (S.val <= 0) {
        add this thread to S.L;
        block(this thread);
    }
    S.val = S.val - 1;
    return;
}

up(semaphore S) {    // Increment value and wake up next thread
    S.val = S.val + 1;
    if (S.L is nonempty) {
        remove a thread T from S.L;
        wakeup(T);
    }
}
  • What's wrong with this picture???

44
Simple Semaphore Implementation
struct semaphore {
    int val;
    threadlist L;    // List of threads waiting for semaphore
};

down(semaphore S) {  // Wait until > 0, then decrement
    while (S.val <= 0) {
        add this thread to S.L;
        block(this thread);
    }
    S.val = S.val - 1;
    return;
}

up(semaphore S) {    // Increment value and wake up next thread
    S.val = S.val + 1;
    if (S.L is nonempty) {
        remove a thread T from S.L;
        wakeup(T);
    }
}

down() and up() must be atomic actions!
45
Semaphore Implementation
  • How do we ensure that the semaphore
    implementation is atomic?

46
Semaphore Implementation
  • How do we ensure that the semaphore
    implementation is atomic?
  • One approach Make them system calls, and ensure
    only one down() or up() operation can be executed
    by any process at a time.
  • This effectively puts a lock around the down()
    and up() operations themselves!
  • Easy to do by disabling interrupts in the down()
    and up() calls.
  • Another approach Use hardware support
  • Say your CPU had atomic down and up instructions

47
OK, but why are semaphores useful?
  • A binary semaphore (counter is always 0 or 1) is
    basically a lock.
  • The real value of semaphores becomes apparent
    when the counter can be initialized to a value
    other than 0 or 1.
  • Say we initialize a semaphore's counter to 50.
  • What does this mean about down() and up()
    operations?
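A sketch of a counting semaphore in action, using Python's threading.Semaphore (initialized to 3 rather than 50 to keep the example small): at most three threads can be inside the guarded region at once. The active/max_active bookkeeping is invented here to make the bound observable:

```python
import threading

# A counting semaphore initialized to 3: at most 3 threads may be
# inside the guarded region at once. down() blocks once the counter
# hits 0; up() increments it and wakes a waiter.

sem = threading.Semaphore(3)
count_lock = threading.Lock()    # protects the bookkeeping below
active = 0
max_active = 0

def use_resource():
    global active, max_active
    sem.acquire()                 # down(): wait until counter > 0
    with count_lock:
        active += 1
        max_active = max(max_active, active)
    with count_lock:
        active -= 1
    sem.release()                 # up(): increment counter, wake a waiter

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(max_active <= 3)  # True: the semaphore bounds concurrency
```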

48
The Producer/Consumer Problem
  • Also called the Bounded Buffer problem.
  • Producer pushes items into the buffer.
  • Consumer pulls items from the buffer.
  • Producer needs to wait when buffer is full.
  • Consumer needs to wait when the buffer is empty.

Producer
Consumer
49
The Producer/Consumer Problem
  • Also called the Bounded Buffer problem.
  • Producer pushes items into the buffer.
  • Consumer pulls items from the buffer.
  • Producer needs to wait when buffer is full.
  • Consumer needs to wait when the buffer is empty.

zzzzz....
Producer
Consumer
50
One implementation...
Producer
Consumer
int count = 0;

Producer() {
    int item;
    while (TRUE) {
        item = bake();
        if (count == N)
            sleep();
        insert_item(item);
        count = count + 1;
        if (count == 1)
            wakeup(consumer);
    }
}

Consumer() {
    int item;
    while (TRUE) {
        if (count == 0)
            sleep();
        item = remove_item();
        count = count - 1;
        if (count == N-1)
            wakeup(producer);
        eat(item);
    }
}
What's wrong with this code?
51
A fix using semaphores
Producer
Consumer
Semaphore mutex = 1;
Semaphore empty = N;
Semaphore full = 0;

Producer() {
    int item;
    while (TRUE) {
        item = bake();
        down(empty);
        down(mutex);
        insert_item(item);
        up(mutex);
        up(full);
    }
}

Consumer() {
    int item;
    while (TRUE) {
        down(full);
        down(mutex);
        item = remove_item();
        up(mutex);
        up(empty);
        eat(item);
    }
}
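The semaphore solution can be run directly with Python's threading.Semaphore. In this sketch, bake()/eat() are replaced by plain integers and N is the buffer capacity (illustrative choices, not from the slides):

```python
import threading
from collections import deque

# Producer/consumer with semaphores: `empty` counts free slots,
# `full` counts filled slots, `mutex` guards the buffer itself.

N = 4          # buffer capacity
ITEMS = 20     # total items to move through the buffer
buffer = deque()
mutex = threading.Semaphore(1)
empty = threading.Semaphore(N)   # free slots
full = threading.Semaphore(0)    # filled slots
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()          # down(empty): wait for a free slot
        mutex.acquire()          # down(mutex)
        buffer.append(item)      # insert_item(item)
        mutex.release()          # up(mutex)
        full.release()           # up(full): one more filled slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()           # down(full): wait for an item
        mutex.acquire()
        consumed.append(buffer.popleft())   # remove_item()
        mutex.release()
        empty.release()          # up(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(ITEMS)))  # True: all items arrive, in order
```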
52
Reader/Writers
  • Let's go back to the problem at the beginning of
    lecture.
  • Single shared object
  • Want to allow any number of threads to read
    simultaneously
  • But, only one thread should be able to write to
    the object at a time
  • (And, not interfere with any readers...)

Semaphore mutex = 1;
Semaphore wrt = 1;
int readcount = 0;

Writer() {
    down(wrt);
    do_write();
    up(wrt);
}

Reader() {
    down(mutex);
    readcount++;
    if (readcount == 1)
        down(wrt);
    up(mutex);
    do_read();
    down(mutex);
    readcount--;
    if (readcount == 0)
        up(wrt);
    up(mutex);
}
  • A Reader should only wait for a Writer to
    complete its do_write().
  • A Reader should not wait for other Readers to
    complete their do_read().
  • The Writer should wait for the other Writers to
    complete their do_write().
  • The Writer should wait for all the Readers to
    complete their do_read().
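The same readers/writers scheme, made runnable in Python (the shared object and the recorded observations are invented here so the outcome can be checked):

```python
import threading

# Readers/writers with semaphores: the first reader in acquires wrt
# on behalf of all readers, and the last reader out releases it.

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # held by the writer, or by readers as a group
readcount = 0
shared = {"value": 0}
observed = []                    # values seen by readers

def writer(new_value):
    wrt.acquire()                # down(wrt)
    shared["value"] = new_value  # do_write()
    wrt.release()                # up(wrt)

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    observed.append(shared["value"])   # do_read()
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

threads = [threading.Thread(target=reader) for _ in range(5)]
threads.append(threading.Thread(target=writer, args=(42,)))
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])  # 42; readers only ever saw 0 or 42, never a torn value
```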

53
Issues with Semaphores
  • Much of the power of semaphores derives from
    calls to down() and up() that are unmatched
  • See previous example!
  • Unlike locks, acquire() and release() are not
    always paired.
  • This means it is a lot easier to get into trouble
    with semaphores.
  • More rope
  • Would be nice if we had some clean, well-defined
    language support for synchronization...
  • Java does!

54
CENG334 Introduction to Operating Systems
  • Synchronization patterns
  • Topics
  • Signalling
  • Rendezvous
  • Barrier
  • Erol Sahin
  • Dept of Computer Eng.
  • Middle East Technical University
  • Ankara, TURKEY

55
Signalling
  • Possibly the simplest use for a semaphore is
    signaling, which means that one thread sends a
    signal to another thread to indicate that
    something has happened.
  • Signaling makes it possible to guarantee that a
    section of code in one thread will run before a
    section of code in another thread; in other
    words, it solves the serialization problem.

Adapted from The Little Book of Semaphores.
56
Signalling
  • Imagine that a1 reads a line from a file, and b1
    displays the line on the screen. The semaphore in
    this program guarantees that Thread A has
    completed a1 before Thread B begins b1.
  • Here's how it works: if Thread B gets to the wait
    statement first, it will find the initial value,
    zero, and it will block. Then when Thread A
    signals, Thread B proceeds.
  • Similarly, if Thread A gets to the signal first
    then the value of the semaphore will be
    incremented, and when Thread B gets to the wait,
    it will proceed immediately.
  • Either way, the order of a1 and b1 is guaranteed.

semaphore sem = 0;

Thread A:
    statement a1
    sem.up()

Thread B:
    sem.down()
    statement b1
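The signalling pattern in runnable form (Thread B is deliberately started first to show that it really waits; the events list is invented here to record the order):

```python
import threading

# Signalling: the semaphore, initialized to 0, guarantees that a1
# runs before b1 regardless of which thread is scheduled first.

sem = threading.Semaphore(0)
events = []

def thread_a():
    events.append("a1")   # statement a1
    sem.release()         # sem.up(): signal that a1 is done

def thread_b():
    sem.acquire()         # sem.down(): wait for the signal
    events.append("b1")   # statement b1

b = threading.Thread(target=thread_b)
a = threading.Thread(target=thread_a)
b.start()                 # start B first: it blocks on the semaphore
a.start()
a.join(); b.join()
print(events)  # ['a1', 'b1'] -- always in this order
```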
57
Rendezvous
  • Generalize the signal pattern so that it works
    both ways. Thread A has to wait for Thread B and
    vice versa. In other words, given this code we
    want to guarantee that a1 happens before b2 and
    b1 happens before a2.
  • Your solution should not enforce too many
    constraints. For example, we don't care about the
    order of a1 and b1. In your solution, either
    order should be possible.
  • Two threads rendezvous at a point of execution,
    and neither is allowed to proceed until both have
    arrived.

Thread A:
    statement a1
    statement a2

Thread B:
    statement b1
    statement b2
58
Rendezvous - Hint
  • Generalize the signal pattern so that it works
    both ways. Thread A has to wait for Thread B and
    vice versa. In other words, given this code we
    want to guarantee that a1 happens before b2 and
    b1 happens before a2.
  • Your solution should not enforce too many
    constraints. For example, we don't care about the
    order of a1 and b1. In your solution, either
    order should be possible.
  • Two threads rendezvous at a point of execution,
    and neither is allowed to proceed until both have
    arrived.
  • Hint Create two semaphores, named aArrived and
    bArrived, and initialize them both to zero.
    aArrived indicates whether Thread A has arrived
    at the rendezvous, and bArrived likewise.

semaphore aArrived = 0;
semaphore bArrived = 0;

Thread A:
    statement a1
    statement a2

Thread B:
    statement b1
    statement b2
59
Rendezvous - Solution
  • Generalize the signal pattern so that it works
    both ways. Thread A has to wait for Thread B and
    vice versa. In other words, given this code we
    want to guarantee that a1 happens before b2 and
    b1 happens before a2.
  • Your solution should not enforce too many
    constraints. For example, we don't care about the
    order of a1 and b1. In your solution, either
    order should be possible.
  • Two threads rendezvous at a point of execution,
    and neither is allowed to proceed until both have
    arrived.
  • Hint Create two semaphores, named aArrived and
    bArrived, and initialize them both to zero.
    aArrived indicates whether Thread A has arrived
    at the rendezvous, and bArrived likewise.

semaphore aArrived = 0;
semaphore bArrived = 0;

Thread A:
    statement a1
    aArrived.up()
    bArrived.down()
    statement a2

Thread B:
    statement b1
    bArrived.up()
    aArrived.down()
    statement b2
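The rendezvous solution in runnable form, recording the order of events to check both guarantees (the events log and record helper are invented for the check):

```python
import threading

# Rendezvous: a1 happens before b2, and b1 happens before a2,
# while a1 and b1 may run in either order.

aArrived = threading.Semaphore(0)
bArrived = threading.Semaphore(0)
events = []
log = threading.Lock()           # makes appends to the log safe

def record(e):
    with log:
        events.append(e)

def thread_a():
    record("a1")          # statement a1
    aArrived.release()    # aArrived.up()
    bArrived.acquire()    # bArrived.down()
    record("a2")          # statement a2

def thread_b():
    record("b1")          # statement b1
    bArrived.release()    # bArrived.up()
    aArrived.acquire()    # aArrived.down()
    record("b2")          # statement b2

a = threading.Thread(target=thread_a)
b = threading.Thread(target=thread_b)
a.start(); b.start(); a.join(); b.join()
# Both orderings hold, whatever the interleaving:
print(events.index("a1") < events.index("b2"))  # True
print(events.index("b1") < events.index("a2"))  # True
```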
60
Rendezvous A less efficient solution
  • This solution also works, although it is probably
    less efficient, since it might have to switch
    between A and B one time more than necessary.
  • If A arrives first, it waits for B. When B
    arrives, it wakes A and might proceed immediately
    to its wait, in which case it blocks, allowing A
    to reach its signal, after which both threads can
    proceed.

semaphore aArrived = 0;
semaphore bArrived = 0;

Thread A:
    statement a1
    bArrived.down()
    aArrived.up()
    statement a2

Thread B:
    statement b1
    bArrived.up()
    aArrived.down()
    statement b2
61
Rendezvous How about?
semaphore aArrived = 0;
semaphore bArrived = 0;

Thread A:
    statement a1
    bArrived.down()
    aArrived.up()
    statement a2

Thread B:
    statement b1
    aArrived.down()
    bArrived.up()
    statement b2
62
Barrier
  • Rendezvous solution does not work with more than
    two threads.
  • Puzzle Generalize the rendezvous solution. Every
    thread should run the following code

rendezvous()
criticalpoint()
The synchronization requirement is that no thread
executes critical point until after all threads
have executed rendezvous. You can assume that
there are n threads and that this value is stored
in a variable, n, that is accessible from all
threads. When the first n - 1 threads arrive they
should block until the nth thread arrives, at
which point all the threads may proceed.
63
Barrier - Hint
n = the number of threads
count = 0
Semaphore mutex = 1, barrier = 0
  • count keeps track of how many threads have
    arrived. mutex provides exclusive access to count
    so that threads can increment it safely.
  • barrier is locked (zero or negative) until all
    threads arrive; then it should be unlocked (1 or
    more).

64
Barrier Solution?
n = the number of threads
count = 0
Semaphore mutex = 1, barrier = 0

rendezvous:
    mutex.down()
    count = count + 1
    mutex.up()
    if (count == n)
        barrier.up()
    else
        barrier.down()

criticalpoint()
  • Since count is protected by a mutex, it counts
    the number of threads that pass. The first n-1
    threads wait when they get to the barrier, which
    is initially locked. When the nth thread arrives,
    it unlocks the barrier.
  • What is wrong with this solution?

65
Barrier Solution?
n = the number of threads
count = 0
Semaphore mutex = 1, barrier = 0

rendezvous:
    mutex.down()
    count = count + 1
    mutex.up()
    if (count == n)
        barrier.up()
    else
        barrier.down()

criticalpoint()
  • Imagine that n 5 and that 4 threads are waiting
    at the barrier. The value of the semaphore is the
    number of threads in queue, negated, which is -4.
  • When the 5th thread signals the barrier, one of
    the waiting threads is allowed to proceed, and
    the semaphore is incremented to -3. But then no
    one signals the semaphore again and none of the
    other threads can pass the barrier.

66
Barrier Solution
n = the number of threads
count = 0
Semaphore mutex = 1, barrier = 0

rendezvous:
    mutex.down()
    count = count + 1
    mutex.up()
    if (count == n)
        barrier.up()
    else {
        barrier.down()
        barrier.up()
    }

criticalpoint()
  • The only change is another signal after waiting
    at the barrier. Now as each thread passes, it
    signals the semaphore so that the next thread can
    pass.
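The corrected barrier in runnable Python: the extra up() after each down() forms a chain in which each waking thread admits the next. The events log is invented here to verify that every rendezvous happens before any critical point:

```python
import threading

# Barrier: no thread executes its critical point until all n threads
# have passed rendezvous. The nth arrival opens the barrier; each
# waking thread then signals the next waiter in turn.

n = 5
count = 0
mutex = threading.Semaphore(1)
barrier = threading.Semaphore(0)
events = []
log = threading.Lock()           # makes appends to the log safe

def worker():
    global count
    with log:
        events.append("rendezvous")
    mutex.acquire()
    count += 1
    arrived = count              # read under the mutex
    mutex.release()
    if arrived == n:
        barrier.release()        # nth thread unlocks the barrier
    else:
        barrier.acquire()        # wait at the barrier...
        barrier.release()        # ...then wake the next waiter
    with log:
        events.append("critical")

threads = [threading.Thread(target=worker) for _ in range(n)]
for t in threads: t.start()
for t in threads: t.join()
# All rendezvous events precede every critical event:
print(events[:n].count("rendezvous") == n)  # True
```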

67
Barrier Bad Solution
n = the number of threads
count = 0
Semaphore mutex = 1, barrier = 0

rendezvous:
    mutex.down()
    count = count + 1
    if (count == n)
        barrier.up()
    barrier.down()
    barrier.up()
    mutex.up()

criticalpoint()
  • Imagine that the first thread enters the mutex
    and then blocks. Since the mutex is locked, no
    other threads can enter, so the condition
    count == n will never be true and no one will
    ever unlock the barrier.
