[argobots-discuss] condition variable and rwlock
Iwasaki, Shintaro
siwasaki at anl.gov
Fri Jul 23 10:53:25 CDT 2021
Hi Matthieu,
By default, ABT_rwlock and ABT_mutex wait in a busy-yield loop (a waiter keeps yielding until a flag is set). If Argobots is configured with --disable-simple-mutex, waiters will suspend instead of yielding.
This behavior is the default for historical reasons (https://github.com/pmodels/argobots/pull/102). There is a performance trade-off between a simple mutex (the current default) and a non-simple mutex, so we hesitate to change it silently.
If this default behavior is not what users expect, I will create a PR to change it (if so, I'd appreciate it if you could open a GitHub issue that briefly describes the problem).
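For illustration, here is a rough sketch (not the actual Argobots internals) of the default waiting strategy, assuming a hypothetical atomic flag that stands in for the lock's internal state:

    #include <stdatomic.h>
    #include <abt.h>

    /* Hypothetical flag standing in for the mutex's internal state. */
    static atomic_int locked = 0;

    /* Default ("simple mutex"): busy-yield until the flag is acquired.
     * The waiting ULT stays in its pool and keeps getting rescheduled. */
    void lock_busy_yield(void) {
        while (atomic_exchange(&locked, 1) == 1)
            ABT_thread_yield();
    }

    /* With --disable-simple-mutex, the waiter instead suspends itself
     * (an ABT_self_suspend()-like mechanism) and is resumed by the
     * unlocker, so it consumes no scheduler cycles while blocked. */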
Thanks,
Shintaro
________________________________
From: Dorier, Matthieu <mdorier at anl.gov>
Sent: Friday, July 23, 2021 10:34 AM
To: discuss at lists.argobots.org <discuss at lists.argobots.org>; Iwasaki, Shintaro <siwasaki at anl.gov>
Subject: Re: condition variable and rwlock
Thanks. I'm curious about your comment about --disable-simple-mutex. What you describe is what I would expect rwlock to do by default... What does rwlock do by default, then?
Thanks
Matthieu
________________________________
From: Iwasaki, Shintaro <siwasaki at anl.gov>
Sent: Friday, July 23, 2021 3:51:21 PM
To: discuss at lists.argobots.org <discuss at lists.argobots.org>; Dorier, Matthieu <mdorier at anl.gov>
Subject: Re: condition variable and rwlock
Hello Matthieu,
Thanks for your question!
> Is there a way of using an Argobots condition variable with an rwlock instead of a mutex?
No. The user cannot combine ABT_rwlock with ABT_cond.
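For reference, the wait call's signature is tied to ABT_mutex, which is why an rwlock cannot be substituted:

    int ABT_cond_wait(ABT_cond cond, ABT_mutex mutex);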
> my use-case is a structure that receives many read requests and a few writes, clearly justifying using a rwlock instead of a mutex, but I may want some of the reads to block until a write has happened, which means I need a condition variable
First, if readers encounter an rwlock held by a writer, they will suspend until the writer releases it (provided --disable-simple-mutex was set at configure time, which is not the default).
For this specific purpose (especially if only "some", not "all", of the readers should block), the first approach that comes to mind is the following. It looks fine except for a somewhat complex structure that uses multiple synchronization objects.
void reader() {
    while (1) {
        if (work_queue.is_empty() && I_AM_SOME_OF_READERS()) {
            // 1. Internally, an ABT_self_suspend()-like mechanism needs
            //    to take a lock (even if it is in the readers' lock)
            //    since multiple readers might access the same data
            //    structure (for example, a user-maintained
            //    suspended-ULT list).
            // 2. In any case, this path is not performance sensitive.
            ABT_mutex_lock(mutex);
            ABT_cond_wait(cond, mutex);
            ABT_mutex_unlock(mutex);
            // Now someone woke me up after pushing work.
        }
        ABT_rwlock_rdlock(rwlock);
        if (!work_queue.is_empty())
            ; // Do real work.
        ABT_rwlock_unlock(rwlock);
    }
}
void writer() {
    ABT_rwlock_wrlock(rwlock);
    work_queue.push_work(work);
    // You do not need to take a mutex to call ABT_cond_broadcast.
    // It is fine even if there is no waiter.
    ABT_cond_broadcast(cond);
    ABT_rwlock_unlock(rwlock);
}
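For completeness, a minimal setup sketch (assumed boilerplate, not part of the suggestion above) for the three synchronization objects used in the sketch; the helper names are hypothetical and error checks are omitted:

    ABT_mutex mutex;
    ABT_cond cond;
    ABT_rwlock rwlock;

    /* Hypothetical helpers: create the objects once before spawning
     * readers/writers, and free them at shutdown. */
    void init_sync_objects(void) {
        ABT_mutex_create(&mutex);
        ABT_cond_create(&cond);
        ABT_rwlock_create(&rwlock);
    }

    void finalize_sync_objects(void) {
        ABT_rwlock_free(&rwlock);
        ABT_cond_free(&cond);
        ABT_mutex_free(&mutex);
    }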
I would welcome any suggestions! (For example, does ABT_rwlock_trywrlock() help?)
Best,
Shintaro
________________________________
From: Dorier, Matthieu via discuss <discuss at lists.argobots.org>
Sent: Friday, July 23, 2021 5:03 AM
To: discuss at argobots.org <discuss at argobots.org>
Cc: Dorier, Matthieu <mdorier at anl.gov>
Subject: [argobots-discuss] condition variable and rwlock
Hi,
I suspect the answer is no, but is there a way of using an Argobots condition variable with an rwlock instead of a mutex?
(my use-case is a structure that receives many read requests and a few writes, clearly justifying using a rwlock instead of a mutex, but I may want some of the reads to block until a write has happened, which means I need a condition variable).
Thanks,
Matthieu