streamline mutex unlock to remove a useless branch, use a_store to unlock
author Rich Felker <dalias@aerifal.cx>
Wed, 30 Mar 2011 13:06:00 +0000 (09:06 -0400)
committer Rich Felker <dalias@aerifal.cx>
Wed, 30 Mar 2011 13:06:00 +0000 (09:06 -0400)
this roughly halves the cost of pthread_mutex_unlock, at least for
non-robust, normal-type mutexes.

the a_store change is in preparation for future support of archs which
require a memory barrier or special atomic store operation, and should
also prevent the possibility of the compiler misordering writes.
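
as an illustration only, not musl's actual code (the real a_store
definitions are per-arch, under arch/ in the tree), an a_store with the
properties described above might be sketched roughly as:

	/* hypothetical sketch: atomic store with compiler barriers.
	 * a weakly-ordered arch would also need a real hardware
	 * barrier instruction (e.g. dmb on arm) around the store. */
	static inline void a_store(volatile int *p, int x)
	{
		/* keep critical-section stores from sinking below
		 * the unlocking store */
		__asm__ __volatile__ ( "" : : : "memory" );
		*p = x;
		/* keep later accesses from hoisting above it */
		__asm__ __volatile__ ( "" : : : "memory" );
	}

the "memory" clobber is what stops the compiler from reordering writes
across the unlock; on strongly-ordered archs like x86 the store itself
can remain a plain mov.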

src/thread/pthread_mutex_unlock.c

index 67aa7ba..5855db0 100644
@@ -14,11 +14,15 @@ int pthread_mutex_unlock(pthread_mutex_t *m)
                        self->robust_list.pending = &m->_m_next;
                        *(void **)m->_m_prev = m->_m_next;
                        if (m->_m_next) ((void **)m->_m_next)[-1] = m->_m_prev;
+                       a_store(&m->_m_lock, 0);
+                       self->robust_list.pending = 0;
+               } else {
+                       a_store(&m->_m_lock, 0);
                }
+       } else {
+               a_store(&m->_m_lock, 0);
        }
 
-       m->_m_lock = 0;
        if (m->_m_waiters) __wake(&m->_m_lock, 1, 0);
-       if (m->_m_type >= 4) self->robust_list.pending = 0;
        return 0;
 }
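
for completeness, a minimal caller-side sketch of the case the commit
message's first paragraph refers to, a normal-type (non-robust) mutex,
where unlock is now just the a_store plus the waiter check:

	#include <pthread.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static int counter;

	void bump(void)
	{
		pthread_mutex_lock(&lock);
		counter++;
		/* after this patch: one atomic store, then a waiter check */
		pthread_mutex_unlock(&lock);
	}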