Friday, June 27, 2014

Object-Oriented Programming: the Good, the Bad and the Ugly

Let me clarify: I love OOP. I have developed a lot of functionality using OOP and found it to be a very productive and highly extensible approach. So what's the point?

The Good

What is the primary goal of OOP? What are the benefits? Why is it so popular? Where is the magic?

These are simple questions, and I would like to have simple answers. I guess you do too. Here is mine: OOP allows you to significantly improve code reuse. How? Because you can rely on abstractions instead of concrete classes, and that lets you reuse the functionality built on top of those abstractions. Why does it matter? Because code reuse is the most effective way to speed up your development.
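Here is a minimal sketch of that idea (the Logger/ConsoleLogger names are made up for illustration): the reusable code depends only on the abstraction, so any concrete implementation can be plugged in without touching it.

#include <iostream>
#include <string>

// The abstraction: client code depends on this interface only.
struct Logger
{
    virtual ~Logger() {}
    virtual void write(const std::string& msg) = 0;
};

// One concrete implementation; others (file, network, ...) can be added later.
struct ConsoleLogger : Logger
{
    void write(const std::string& msg) override { std::cout << msg << std::endl; }
};

// Reusable functionality: works with any Logger, existing or future.
void processOrder(Logger& log)
{
    log.write("order processed");
}

int main()
{
    ConsoleLogger logger;
    processOrder(logger);   // reused without modifying processOrder
}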

The Bad

So what's wrong with OOP? Is it the holy grail and the silver bullet in one? Unfortunately, the answer is no. On the one hand, we get a development speed-up. On the other hand, we pay a price for using abstractions. Let's discuss it in detail.

Thursday, June 26, 2014

Future/Promise Discussion

Introduction

C++11 has introduced std::async and the future/promise pattern. It allows performing operations concurrently and waiting for the result. It looks very promising (in the future). Yep.

I don't want to discuss the current standard implementation here. It looks like an initial step and contains several flaws (destructor behavior, no thread pools, only blocking semantics for value retrieval, etc.). I would like to discuss particular usage scenarios and overhead questions.

There are several typical usages:

  1. Start the task asynchronously and wait for the result.
  2. Start the task asynchronously and don't wait for the result.
  3. Start several (or many) tasks asynchronously and wait for the results.

Let's discuss them in detail.

Start the task asynchronously and wait for the result

The idea is simple: sometimes I need to start a task asynchronously while continuing other processing at the same time. When the result of the task is needed, I invoke the get() method on the future to obtain it. It looks pretty simple. The only catch is that this is essentially the only case where the future/promise technique is really well suited.
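A minimal sketch of this case, assuming std::launch::async and a placeholder computation:

#include <future>
#include <iostream>

int heavyComputation()      // placeholder for a real long-running task
{
    return 42;
}

int main()
{
    // Start the task asynchronously...
    std::future<int> result = std::async(std::launch::async, heavyComputation);

    // ...do other processing here while the task runs...

    // ...and block only at the point where the value is actually needed.
    std::cout << result.get() << std::endl;
}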

Let's consider another typical usage.

Saturday, June 21, 2014

C++ User Group in St. Petersburg, 21 June 2014

Today I've given a presentation on a multithreading topic: "Fine-grained locking". The approach considered relies on a fairly simple idea: incorporate mutex lock/unlock operations into object access through an overloaded operator->:

#include <memory>
#include <mutex>

// Assumption: the talk defines Mutex elsewhere; a plain std::mutex is used here.
typedef std::mutex Mutex;

// RAII accessor: holds the lock and exposes the protected object via operator->.
template<typename T, typename T_mutex>
struct Access : std::unique_lock<T_mutex>
{
    Access(T* t_, T_mutex& m)
        : std::unique_lock<T_mutex>(m), t(t_) {}

    template<typename T_lockType>
    Access(T* t_, T_mutex& m, T_lockType type)
        : std::unique_lock<T_mutex>(m, type), t(t_) {}

    // Make sure the lock is held before handing out the pointer (covers the lazy case).
    T* operator->()     { init(); return t; }

private:
    void init()         { if (!this->owns_lock()) this->lock(); }
    T* t;
};

// Couples an object with its mutex; all access goes through Access guards.
template<typename T>
struct SmartMutex
{
    typedef Access<T, Mutex> WAccess;        // write (mutable) access
    typedef Access<const T, Mutex> RAccess;  // read (const) access

    RAccess operator->() const { return read(); }
    WAccess operator->()       { return write(); }
    RAccess read() const       { return {get(), mutex()}; }
    RAccess readLazy() const   { return {get(), mutex(), std::defer_lock}; }
    WAccess write()            { return {get(), mutex()}; }
    WAccess writeLazy()        { return {get(), mutex(), std::defer_lock}; }

private:
    T* get() const             { return data.get(); }
    Mutex& mutex() const       { return *mutexData.get(); }

    std::shared_ptr<T> data = std::make_shared<T>();
    std::shared_ptr<Mutex> mutexData = std::make_shared<Mutex>();
};

This helps avoid race conditions and provides atomicity at the level of the object's data.
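A usage sketch (the std::vector<int> payload here is just for illustration; each access through -> holds the mutex for the duration of the full expression):

SmartMutex<std::vector<int>> v;
v->push_back(1);                // write access: locked, modified, unlocked
auto size = v.read()->size();   // explicit read (const) access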

The lazy methods help avoid another issue of multithreaded applications: deadlock. Here is an example of how to use them correctly together with std::lock:

SmartMutex<X> x, y;
auto rx = x.readLazy();
auto ry = y.readLazy();
std::lock(rx, ry);   // locks both mutexes atomically, avoiding deadlock
// now rx and ry can be used

The same approach was used to implement SmartSharedMutex, which allows sharing read access. But instead of the usual overloaded ->, a new operator was introduced: ---> (the long arrow). How was it implemented? See the related article: Useful Multithreaded Idioms of C++ (in Russian).
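For the curious, here is a minimal sketch of how a "long arrow" can work in principle; it is only an illustration, not necessarily the implementation from the linked article. The expression x--->member is parsed as (x--)->member, so an overloaded post-decrement can return a proxy whose operator-> would take the shared (read) lock:

#include <iostream>

struct Data { int value = 42; };

// Proxy returned by the "long arrow"; a real version would hold a shared lock.
struct ReadProxy
{
    const Data* p;
    const Data* operator->() const { return p; }
};

struct SharedBox
{
    ReadProxy operator--(int) { return ReadProxy{&d}; }  // post-decrement yields the proxy
private:
    Data d;
};

int main()
{
    SharedBox b;
    std::cout << b--->value << std::endl;   // parsed as (b--)->value, prints 42
}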