Testing Simulink with C++

I have been working with a client who, by corporate mandate, uses Simulink and automated code generation for all of their embedded software. Part of my mandate is to speed the path to project completion.

For this client, the single biggest speed blocker is validation of the code against software requirements. After each feature is completed, an exhaustive review of software requirements is undertaken and the work validated against it. There are also Simulink tests which must be evaluated to ensure that they are correct and capture the requirements. A casual estimate put the percentage of time spent performing these reviews at 75%. Obviously, shrinking this percentage would be a major throughput boost.

As part of our process we have a CI/CD pipeline. After Simulink code is pushed to our source code repository, code generation is performed and the generated code is pushed to its own repository. Because the team has extensive experience doing test-driven development with C and C++, I chose this point to inject my tests. Because we’ve used it elsewhere, I’ve implemented my tests with Google Test.

Because this is generated code, there are a lot of generated functions that you can’t count on staying around if your control logic changes. It also wasn’t really meant for humans to read, and I assure you that if a human wrote this code you’d reject it on code review. But there is a stable interface there if you know what to look for.

The other problem is that Simulink is most often used for signal processing code, and expects to operate on a stream of data. Conversations around test driving and automated testing typically assume code that you call once; it manipulates state or returns a value, and you test that value. But like most languages, C++ allows looping, and we can certainly call a function in a loop.

Stable Functions

Simulink will generate a lot of functions that you should ignore. But every Simulink unit will have two or three generated functions that remain stable. The stable functions all start with the name of the Simulink unit. So if your unit is AmbientTempMain, all of the functions will be in an AmbientTempMain.c file, and they will start with AmbientTempMain.

The stable functions that we can test against are the unit’s initialize function (AmbientTempMain_initialize) and its step function (AmbientTempMain_step). Depending on your code generation settings you may also get a terminate function; the step function is the one your tests will exercise.

Inputs and Outputs

The first thing to do is sort out inputs, outputs, and state information. Simulink generated code follows a simple formula: input signals become step-function parameters prefixed with rtu_, output signals become parameters prefixed with rty_, and internal state lives in a “DW” (data work) structure that gets passed along with them.

If you’re looking at your own generated code, you will notice that all of these parameters are pointers. That means that every time I call one of the functions I’m going to need to deal with pointer dereferencing. This is tedious and ripe for errors, so there are two things I do to make this easier.

The first is to create a structure which contains all of the signals and the state block, but not in their pointer form. It might look like:

struct params_t {
    float32 rtu_ambientTemp;
    float32 rty_ambientTemp;
    float32 rty_ambientTempFiltered;
    float32 rty_ambientTempFault;
    struct localDW_t localDW;
};

The other thing I do is declare a params member in my test fixture, along with a Step function that calls the step function of my Simulink block, passing the addresses of the appropriate params members.

class AmbientTempMainTests : public testing::Test {
protected:
    params_t params;

    void Step() {
        // Argument order follows the generated step signature.
        AmbientTempMain_step(&params.localDW,
                             &params.rtu_ambientTemp,
                             &params.rty_ambientTemp,
                             &params.rty_ambientTempFiltered,
                             &params.rty_ambientTempFault);
    }
};
Now a single call to this method doesn’t involve a lot of opportunities to mess up pointers, just a quick function call and my tests can check expected values on output parameters.

Signal Processing

The biggest difference between testing Simulink generated code and testing business logic code is that we are less interested in the effect of a single function call, and more interested in the output produced by input over a period of time.

The solution to that is to create a function which takes as its input a duration and signal generating function. In our application the filter should sample the signal once every millisecond. Because it’s a low pass filter, I am also interested in the differential of the filtered value between one call and the next, so my signal processing function looks like this:

float32 ProcessSignal(double duration, std::function<double(double)> generator) {
    float32 last_value = -10.0;
    float32 differential = 0.0;

    for (double t = 0; t <= duration; t += 0.001) {
        params.rtu_ambientTemp = generator(t);
        Step();  // advance the filter by one one-millisecond sample
        differential = fabsf(params.rty_ambientTempFiltered - last_value);
        last_value = params.rty_ambientTempFiltered;
    }

    return differential;
}

Tying to Requirements

Now that I have this useful framework, I can easily write tests which enforce my software requirements based on various input signals.

Let’s say, for example, that Requirement AT-100 requires that within two seconds of power up, the filter should rise from its default state (0 degrees C) to the actual ambient temperature. I would write the following test.

/// Requirement AT-100
TEST_F(AmbientTempMainTests, FromDefaultState_FilteredTempShouldBeAmbientTemp_AfterTwoSeconds) {
    double ambient_temp = 40.0;
    ProcessSignal(2.0, [ambient_temp](double t) {
        return ambient_temp;
    });

    ASSERT_NEAR(ambient_temp, params.rty_ambientTempFiltered, 0.1);
}

Note first the doxygen-style comment at the top, which lists the requirement this test addresses. If you work in a regulated industry, traceability between your tests and your requirements is mandatory. Even if your industry isn’t regulated, if you have to deal with formal software requirements, this kind of traceability makes your code review easier.

The second item worthy of note is that we can use lambdas to express our signal. This was enabled by making the second parameter of ProcessSignal a std::function, which can accept a regular function pointer, a formally declared functor, or a lambda. The lambda here is simple, but it can be more complex.

If the signal fluctuates too rapidly for up to half a second, our reported ambient temperature should be whatever the last recorded ambient temperature was before the noise started. A test for that might look like:

/// Requirement AT-101
TEST_F(AmbientTempMainTests, FromDefaultState_FilteredTempShouldBeAmbientTemp_AfterBriefNoise) {
    double ambient_temp = 27.3;
    double duration = 5.0;

    ProcessSignal(duration, [duration, ambient_temp](double t) {
        if (t < duration - 0.4) {
            return ambient_temp;
        }
        // Last 0.4 seconds are a 60 Hz wave
        return 2.0 * sin(2 * M_PI * 60 * t) + ambient_temp;
    });

    ASSERT_NEAR(ambient_temp, params.rty_ambientTemp, 0.1);
}


You’ll want to adapt this to your own situation, but here’s the core of what you’ll need to make this work for your workflow.

  1. After you have your basic interface defined in Simulink, you need to generate C code.

  2. Write tests around the stable interfaces which ensure that your requirements are met.

  3. Build and run the tests against the generated code, using the host compiler (i.e. the compiler you use to build code that runs on your workstation or CI machine).

  4. Any time you push your Simulink code to your code repository, your CI system should generate C code from the current model and repeat the previous step.

With this automated test suite, you can use the tests as a gate for whether or not code can be merged into your main branch, or be deployed to test hardware as part of a continuous delivery system.

If you’re using something similar in your workflow I’d love to hear about it in the comments below.

Further Reference

If, like me, you really don’t know anything about C++ lambdas and the syntax makes your head hurt, I strongly recommend Lambdas: From C++11 to C++20. It is a nice, concise walk-through with good examples and explanations of why you would want to use them.
