Today I gave a presentation on a multithreading topic: "Fine-grained locking". The approach considered there is based on a fairly simple idea: incorporate mutex lock/unlock operations into object access by overloading operator->:
#include <memory>
#include <mutex>

// RAII accessor: the lock itself is inherited from std::unique_lock,
// and the guarded object is exposed through operator->.
template<typename T, typename T_mutex>
struct Access : std::unique_lock<T_mutex>
{
    Access(T* t_, T_mutex& m)
        : std::unique_lock<T_mutex>(m), t(t_) {}

    // Deferred locking, e.g. with std::defer_lock.
    template<typename T_lockType>
    Access(T* t_, T_mutex& m, T_lockType type)
        : std::unique_lock<T_mutex>(m, type), t(t_) {}

    // Lock lazily on first dereference if not already locked.
    T* operator->() { init(); return t; }

private:
    void init() { if (!this->owns_lock()) this->lock(); }

    T* t;
};
// The post does not show the definition of Mutex; a plain exclusive
// std::mutex is assumed here.
using Mutex = std::mutex;

template<typename T>
struct SmartMutex
{
    typedef Access<T, Mutex> WAccess;        // exclusive (write) access
    typedef Access<const T, Mutex> RAccess;  // const (read) access

    RAccess operator->() const { return read(); }
    WAccess operator->() { return write(); }

    RAccess read() const { return {get(), mutex()}; }
    RAccess readLazy() const { return {get(), mutex(), std::defer_lock}; }

    WAccess write() { return {get(), mutex()}; }
    WAccess writeLazy() { return {get(), mutex(), std::defer_lock}; }

private:
    T* get() const { return data.get(); }
    Mutex& mutex() const { return *mutexData.get(); }

    std::shared_ptr<T> data = std::make_shared<T>();
    std::shared_ptr<Mutex> mutexData = std::make_shared<Mutex>();
};
This avoids race conditions and provides atomicity at the level of operations on the object's data: the temporary Access object returned by operator-> keeps the mutex locked until the end of the full expression, so every call made through -> is atomic.
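A minimal usage sketch (Counter is a hypothetical example type, not from the presentation):

struct Counter { int value = 0; void inc() { ++value; } };

SmartMutex<Counter> c;
c->inc();                 // locked only for the duration of this expression
int v = c.read()->value;  // explicit read (const) access
auto w = c.write();       // keep the lock across several operations
w->inc();
w->inc();                 // still under the same lock; released when w dies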
The lazy methods help to avoid another notorious issue of multithreaded applications: deadlock. Here is an example of how to use them correctly together with std::lock:
SmartMutex<X> x, y;
auto rx = x.readLazy();  // no locking yet (std::defer_lock)
auto ry = y.readLazy();  // no locking yet
std::lock(rx, ry);       // locks both with a deadlock-avoidance algorithm
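For contrast, here is a sketch of the classic deadlock the lazy variant prevents: with eager read() calls in opposite order, two threads may each grab one mutex and then wait forever for the other (X is assumed to be default-constructible, as above):

#include <thread>

SmartMutex<X> x, y;
std::thread t1([&] {
    auto ax = x.read();  // locks x's mutex
    auto ay = y.read();  // waits for y's mutex
});
std::thread t2([&] {
    auto ay = y.read();  // locks y's mutex
    auto ax = x.read();  // waits for x's mutex: possible deadlock
});
t1.join();
t2.join();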
The same approach was used to implement SmartSharedMutex, which allows sharing read access between threads. But instead of the usual overloaded ->, a new operator was introduced: ---> (the "long arrow"). How was it implemented? See the related article: Useful Multithreaded Idioms of C++ (in Russian).
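Meanwhile, here is a minimal sketch of one possible way to build such a long arrow; this is an illustration under my assumptions, not necessarily the article's exact code. The compiler parses x--->f() as (x--)->f(), so overloading the postfix operator-- to return a proxy that takes a shared lock gives ---> read-lock semantics. C++17's std::shared_mutex is used:

#include <memory>
#include <shared_mutex>

template<typename T>
struct SmartSharedMutex
{
    struct ReadAccess : std::shared_lock<std::shared_mutex>
    {
        ReadAccess(const T* t_, std::shared_mutex& m)
            : std::shared_lock<std::shared_mutex>(m), t(t_) {}
        const T* operator->() const { return t; }
    private:
        const T* t;
    };

    struct WriteAccess : std::unique_lock<std::shared_mutex>
    {
        WriteAccess(T* t_, std::shared_mutex& m)
            : std::unique_lock<std::shared_mutex>(m), t(t_) {}
        T* operator->() { return t; }
    private:
        T* t;
    };

    // x->f(): exclusive (write) access, as in SmartMutex.
    WriteAccess operator->() { return {data.get(), *mutexData}; }

    // x--->f() is parsed as (x--)->f(): postfix -- returns a proxy
    // holding a shared (read) lock, and its -> exposes the object.
    ReadAccess operator--(int) const { return {data.get(), *mutexData}; }

private:
    std::shared_ptr<T> data = std::make_shared<T>();
    std::shared_ptr<std::shared_mutex> mutexData =
        std::make_shared<std::shared_mutex>();
};

With this, x--->f() acquires a shared lock, so several readers can proceed in parallel, while the plain x->g() still acquires an exclusive one.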