

Testing and understanding concurrency

By Antti Kervinen on Oct 04, 2013

Concurrency causes complexity. Consider any multiuser system, for instance. A long usecase involving only one user is often easy to understand and test. But take another user who goes through another usecase simultaneously: this introduces many different interleavings of the actions that the users can take. Add a third user, and the number of different interleavings multiplies again. Any idea how you could test them? I'll show one way to do it in this blog post.

Concurrent systems are not only difficult to test, they are difficult to understand and design, too. Tools capable of visualising concurrent behaviour can help in understanding it.

Quite recently I created models with fMBT only to understand something, not to test anything. It started when my colleague explained what memory barriers do in the Linux kernel. They basically promise something about the order in which certain instructions, like stores and loads, must take effect. When considering a single CPU core, the idea is simple. However, the promise given by write-read barrier pairs on symmetric multiprocessor systems was much more difficult to understand. When I created a model of it, it became quite obvious. I'll return to this example at the end of this blog post. But first, let's look at the basics of modelling this kind of thing with fMBT.

Serial and parallel

fMBT v0.11.2 introduced two new constructs in AAL (Adapter Action Language): serial and parallel blocks. I'll demonstrate these blocks with AAL/Python models that have two actions of input type: "step1" and "step2". The state machine shown with each example is a screenshot from fmbt-editor, drawn automatically from the AAL/Python code.

aal "example0" {
    language "python" {}
    initial_state {}
    input "step1" {}
    input "step2" {}
}
      
[Figure: fmbt-editor visualisation of "example0"]
"example0" does not have serial or parallel blocks implying restrictions on execution of "step1" and "step2". fmbt-editor draws only one state with two self-loops: "step1" and "step2". (Instead of drawing arrows, fmbt-editor just writes looping actions inside states.)
aal "example1" {
    language "python" {}
    initial_state {}
    serial {
        input "step1" {}
        input "step2" {}
    }
}
      
[Figure: fmbt-editor visualisation of "example1"]
The serial block in "example1" forces "step1" to be executed before "step2". Once every step in a serial block has been executed, execution of the block starts over. Therefore, "step1" can be executed again after "step2".
aal "example2" {
    language "python" {}
    initial_state {}
    serial(single) {
        input "step1" {}
        input "step2" {}
    }
}
[Figure: fmbt-editor visualisation of "example2"]
The single parameter for the serial block of "example2" means that actions in the block can be executed at most once. Therefore, execution simply stops after "step2" without starting over.
aal "example3" {
    language "python" {}
    initial_state {}
    parallel {
        input "step1" {}
        input "step2" {}
    }
}
[Figure: fmbt-editor visualisation of "example3"]
The parallel block in "example3" allows executing "step1" and "step2" in either order. As in a serial block, both actions must be executed before execution of the block can start over. The difference to "example0" is that neither step can be executed twice in a row.
aal "example4" {
    language "python" {}
    initial_state {}
    parallel(single) {
        input "step1" {}
        input "step2" {}
    }
}
[Figure: fmbt-editor visualisation of "example4"]
Because of the single parameter, execution of the parallel block cannot start over.

Testing interleaved usecases

Modelling interleavings of several usecases is straightforward with serial and parallel blocks. As an example, let Alice and Bob be admins of a system where they can check, install or skip updates. The following model contains all interleavings of their two parallel usecases. Once both usecases have finished, new interleavings can be tested.

# preview-depth: 8
aal "admins" {
    language "python" {}
    variables {}
    initial_state {}
    parallel {
        serial {
            input "Alice:login" {}
            input "Alice:check updates" {}
            input "Alice:install updates" {}
            input "Alice:logout" {}
        }
        serial {
            input "Bob:login" {}
            input "Bob:check updates" {}
            input "Bob:skip updates" {}
            input "Bob:logout" {}
        }
    }
}

If you remove the parallel block, the serial blocks do not have to reach their ends before starting over. In that case Alice can log in and log out many times while Bob is logged in, and vice versa; see the sketch below.
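For illustration, this is what the modified model would look like with the outer parallel block dropped (the model name "admins_unsynced" is mine, not from the original post); each serial block now restarts on its own:

# preview-depth: 8
aal "admins_unsynced" {
    language "python" {}
    variables {}
    initial_state {}
    serial {
        # Alice's usecase repeats independently of Bob's.
        input "Alice:login" {}
        input "Alice:check updates" {}
        input "Alice:install updates" {}
        input "Alice:logout" {}
    }
    serial {
        # Bob's usecase repeats independently of Alice's.
        input "Bob:login" {}
        input "Bob:check updates" {}
        input "Bob:skip updates" {}
        input "Bob:logout" {}
    }
}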

Actions (inputs and outputs) inside serial and parallel blocks can still have guard(), body() and adapter() blocks, as usual in AAL/Python. Only the guards of potentially enabled actions are evaluated.
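As a minimal sketch of this (the installing variable and the rule that Bob must not check updates during Alice's installation are my own illustration, not part of the original post), guards and bodies can make actions inside serial and parallel blocks depend on the model state:

aal "admins_guarded" {
    language "python" {}
    variables { installing }
    initial_state { installing = False }
    parallel {
        serial {
            input "Alice:login" {}
            input "Alice:install updates" { body() { installing = True } }
            input "Alice:logout"          { body() { installing = False } }
        }
        serial {
            input "Bob:login" {}
            # Enabled only while Alice is not installing updates.
            input "Bob:check updates" { guard() { return not installing } }
            input "Bob:logout" {}
        }
    }
}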

Perhaps the easiest way to generate a test is to copy-paste the AAL/Python code into fmbt-editor and press F6 to see the generated test. To adjust what kind of test is generated, you can modify and save the test configuration. To really test the system instead of just generating a test, you would write Python code in the adapter() block of each action, and possibly define adapter_init and adapter_exit blocks that are executed at the beginning and at the end of the test run.
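A hedged sketch of the adapter side is below: the sut-cli command and its arguments are hypothetical placeholders for whatever interface the real system under test offers, but the adapter() blocks themselves are plain Python, here calling the system through subprocess:

aal "admins_adapters" {
    language "python" {
        import subprocess
        # Hypothetical command line client of the system under test.
        SUT = "sut-cli"
    }
    variables {}
    initial_state {}
    serial {
        input "Alice:login" {
            adapter() { subprocess.check_call([SUT, "login", "alice"]) }
        }
        input "Alice:check updates" {
            adapter() { subprocess.check_call([SUT, "check-updates", "--user", "alice"]) }
        }
        input "Alice:install updates" {
            adapter() { subprocess.check_call([SUT, "install-updates", "--user", "alice"]) }
        }
        input "Alice:logout" {
            adapter() { subprocess.check_call([SUT, "logout", "alice"]) }
        }
    }
}

If a command fails, check_call raises an exception, which should show up as an error in the test log.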

Memory barriers

The listing below contains a model of the first example in the Linux kernel's memory barriers documentation. Initially, memory slots A and B contain values 1 and 2. CPU1 stores A=3 and B=4, and CPU2 loads x=A and y=B concurrently. As no barriers are used, the CPUs are allowed to execute these instructions in any order.

aal "no_barriers" {
    language "python" {}
    # preview-show-vars: x, y
    variables { A, B, x, y }
    initial_state {
        A = 1
        B = 2
        x = None
        y = None
    }

    parallel(single) {
        parallel {
            # CPU1
            input "STORE A=3" {body(){ A=3 }}
            input "STORE B=4" {body(){ B=4 }}
        }
        parallel {
            # CPU2
            input "x=LOAD A" {body(){ x=A }}
            input "y=LOAD B" {body(){ y=B }}
        }
    }
}

In the visualisation you can see all possible interleavings of the instructions, from the initial state at the top down to four states at the bottom. Inside the bottom states you will find all possible combinations of the values of x and y: x is either 1 or 3, and y is either 2 or 4.

Adding barriers without correctly ordering instructions

Let's add a write barrier between the STOREs and a read barrier between the LOADs. As the barriers force the CPUs to execute the STOREs and the LOADs in the given order, I only replace the inner parallel blocks with serial blocks.

aal "pair_incorrect" {
    language "python" {}
    # preview-show-vars: x, y
    variables { A, B, x, y }
    initial_state {
        A = 1
        B = 2
        x = None
        y = None
    }

    parallel(single) {
        serial {
            # CPU1
            input "STORE A=3" {body(){ A=3 }}
            input "STORE B=4" {body(){ B=4 }}
        }
        serial {
            # CPU2
            input "x=LOAD A" {body(){ x=A }}
            input "y=LOAD B" {body(){ y=B }}
        }
    }
}

Two things are obvious in the visualisation:

  • There are only six possible orders in which the CPUs can execute these instructions.
  • CPU2 can still load the same value combinations into x and y as in the first case.

The reason why this memory barrier pair has no effect on the possible value combinations lies in how the LOADs are ordered with respect to the STOREs: the value stored before the write barrier (A) is mistakenly loaded before the read barrier. Therefore, nothing is guaranteed about what is read from slot A, even if an updated value of B is read.


But let's see how the situation changes when LOAD A and LOAD B are reordered on CPU2. Now the memory barrier pair starts making sense.

# SMP barrier pair with correct ordering of LOADs
aal "pair_correct" {
    language "python" {}
    # preview-show-vars: x, y
    variables { A, B, x, y }
    initial_state {
        A = 1
        B = 2
        x = None
        y = None
    }

    parallel(single) {
        serial {
            # CPU1
            input "STORE A=3" {body(){ A=3 }}
            input "STORE B=4" {body(){ B=4 }}
        }
        serial {
            # CPU2
            input "y=LOAD B" {body(){ y=B }}
            input "x=LOAD A" {body(){ x=A }}
        }
    }
}

The first observation from the visualisation is that the number of possible x and y value combinations drops from four to three: the combination x=1, y=4 has disappeared. If you look at the transitions from the initial state to the state "x=1, y=4" in the previous visualisation, you can see that it was possible because of the execution order

  1. x=LOAD A
  2. STORE A=3
  3. STORE B=4
  4. y=LOAD B


If you search the last visualisation for executions where LOAD A takes place before STORE A, you can see (the rightmost path) that if LOAD A happens before STORE A, then LOAD B also takes place before STORE B.

Correspondingly, when looking for executions where LOAD B takes place after STORE B, there is only one. From it you can see (the leftmost path) that if LOAD B happens after STORE B, then LOAD A also happens after STORE B, and therefore after STORE A.

In conclusion, the memory barrier pair with correct ordering seems to promise that:

  • If y has the new value, then x also has the new value.
  • If x has the old value, then y also has the old value.


I hope this example demonstrates that writing models and playing with them is not difficult. It's actually so handy that you can create a model just to get an idea, and once you have got it, you can throw the model away. It's like sketching with pen and paper, except that this time the paper can do some of the calculations for you.