I was selected for the STM32H7B3I-DK Discovery Kit road test.

I'm working on a touch screen GUI for my electronic load.

In this post I'm trying a mock interface, to see how I can switch between screens.

Thank you, fellow road tester jomoenginer, for sending me a very good starting project.

 

customer action photo

image source: the mockup running in simulator mode on my laptop

 

Goal: Build an app with two screens, display data and info

 

I'm setting some light goals here. I've already covered other TouchGFX topics: button click event handling and showing external data.

Here, I want to see if I can:

  • react to an internal event (a timer tick)
  • show a different form on the screen, from code
  • check whether my basic user interaction scenario would work

 

flow overview

image: proposed GUI flow

 

This is a mock interface. I do not really query my electronic load. Values and messages are made up.

Check previous posts (linked below) to see how a real interaction over a serial interface works.

 

Initial Script: Use a Timer to Show a Progress Bar and Status While Starting Up

 

The process I'm mocking here is: check whether the test instrument is connected, then check what mode it is in.

While doing that, display a progress bar and intermediate status messages.

When that's done, move to another screen based on the instrument's mode.

 

I used ST's Progress Bar tutorial to learn about using background processes and keeping the interface active at the same time.

 

 

There are different locations to react to a TouchGFX tick. I want the tick to animate the screen (the progress bar), so I used the home screen View's tick handler.
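As a sketch of that choice: the View subclass simply overrides the virtual tick handler of its Designer-generated base class. Here is a plain C++ stand-in, with no TouchGFX dependencies; MainViewBase and the ticks counter are stubs of my own, not the real generated code:

```cpp
#include <cassert>

// Stub standing in for the Designer-generated base class. In a real
// project, the framework calls handleTickEvent() once per display frame.
struct MainViewBase {
    virtual ~MainViewBase() {}
    virtual void handleTickEvent() {}
};

class MainView : public MainViewBase {
public:
    // Overriding this virtual is all the View needs to react to ticks.
    void handleTickEvent() override { ++ticks; }
    int ticks = 0;  // illustrative only
};
```

The framework only knows the base class; virtual dispatch routes each tick to the active View's override.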

I mentioned before that this is a mock. Instead of really checking whether the electronic load is ready, I picked a few fixed moments to simulate that the instrument came up and reported its mode:

 

void MainView::handleTickEvent() {
    static int timesAnimated = 0;

    int currentValue = progrBar.getValue();
    int16_t min;
    int16_t max;
    progrBar.getRange(min, max);

    // Bounce the progress bar: flip direction at each end of the range.
    // 'increase' is a bool member of MainView.
    if (currentValue == min) {
        increase = true;
        timesAnimated++;
    } else if (currentValue == max) {
        increase = false;
        timesAnimated++;
    }

    int nextValue = increase ? currentValue + 1 : currentValue - 1;
    progrBar.setValue(nextValue);

    // Mocked milestones: after each sweep, pretend the instrument replied.
    if (timesAnimated == 2) {
        Unicode::strncpy(txtStatusBuffer, "init THEBREADBOARD,ELECTRONICLOAD,1,1.0", TXTSTATUS_SIZE);
        txtStatus.invalidate();
    }

    if (timesAnimated == 3) {
        Unicode::strncpy(txtStatusBuffer, "mode detected: constant current", TXTSTATUS_SIZE);
        txtStatus.invalidate();
    }

    if (timesAnimated == 4) {
        // Instrument "ready": switch to the Constant Current screen.
        static_cast<FrontendApplication*>(Application::getInstance())->gotoDisplayCCScreenNoTransition();
    }
}
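The bounce-and-count logic in that handler can be exercised off-target. Here is a minimal plain C++ sketch of the same state machine; BounceAnimator and its 0..100 range are my own names and values, not TouchGFX API:

```cpp
#include <cassert>

// Plain C++ model of the progress-bar animation above: the value walks
// between min and max, direction flips at each endpoint, and
// timesAnimated counts how often an endpoint was reached.
struct BounceAnimator {
    int min = 0;
    int max = 100;           // hypothetical range
    int value = 0;
    bool increase = true;
    int timesAnimated = 0;

    void tick() {
        if (value == min) {
            increase = true;
            ++timesAnimated;
        } else if (value == max) {
            increase = false;
            ++timesAnimated;
        }
        value += increase ? 1 : -1;
    }
};
```

Running 101 ticks walks the value from 0 up to 100 and one step back, reaching two endpoints, which is exactly when the mocked handler prints its first status.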

 

Most of this code is indeed simulation, but there is one new thing I wanted to test: opening a new screen when the instrument reports ready:

 

static_cast<FrontendApplication*>(Application::getInstance())->gotoDisplayCCScreenNoTransition();

 

This is the mechanism that makes the LCD switch to another screen.

The gotoDisplayCCScreenNoTransition() method is generated by the TouchGFX Designer.

When I created the second screen, called DisplayCC, I defined the handler for its display by generating an interaction.

 

No matter in which screen you define that interaction, the generated functions end up in FrontendApplicationBase.cpp:

 

// DisplayCC

void FrontendApplicationBase::gotoDisplayCCScreenNoTransition() {
    transitionCallback = touchgfx::Callback<FrontendApplicationBase>(this, &FrontendApplication::gotoDisplayCCScreenNoTransitionImpl);
    pendingScreenTransitionCallback = &transitionCallback;
}

void FrontendApplicationBase::gotoDisplayCCScreenNoTransitionImpl() {
    touchgfx::makeTransition<DisplayCCView, DisplayCCPresenter, touchgfx::NoTransition, Model >(&currentScreen, &currentPresenter, frontendHeap, &currentTransition, &model);
}

 

Calling the first function from the TouchGFX Application API is all that's needed to switch screens.
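What's notable in the generated code is that the goto call does not switch screens immediately: it only stores a callback in pendingScreenTransitionCallback, and the framework executes it later, at a safe point between ticks, so the current screen isn't torn down while its own code is still on the stack. A plain C++ sketch of that deferred pattern, using std::function in place of touchgfx::Callback (MiniApplication and all names here are mine, not the framework's):

```cpp
#include <cassert>
#include <functional>

// Deferred screen transition: the goto call only records what should
// happen; the framework performs it at a safe point in its main loop.
struct MiniApplication {
    std::function<void()> pendingTransition;  // like pendingScreenTransitionCallback
    int currentScreen = 1;

    void gotoScreen2NoTransition() {
        // Record the transition; do NOT execute it yet.
        pendingTransition = [this] { currentScreen = 2; };
    }

    void endOfTick() {
        // Safe point: the caller's stack frames are gone, so the old
        // screen can be replaced without use-after-free risk.
        if (pendingTransition) {
            pendingTransition();
            pendingTransition = nullptr;
        }
    }
};
```

Immediately after gotoScreen2NoTransition() the old screen is still current; only endOfTick() performs the switch.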

The second screen is very simple too. Again a mock:

 

 

Then, just to check the look and feel, I animate the two mock measured values:

 

void DisplayCCView::handleTickEvent() {
  static float iCurr = 0.998f;
  static float iVolt = 3.898f;
  static uint16_t count = 0;

  // Wiggle the last digit at two points in every 50-tick cycle,
  // to mimic live readings.
  if ((count % 50) == 15) {
    iCurr = 0.997f;
    iVolt = 3.898f;
  }

  if ((count % 50) == 40) {
    iCurr = 0.998f;
    iVolt = 3.899f;
  }

  count++;

  Unicode::snprintfFloat(txtCurrentValBuffer, TXTCURRENTVAL_SIZE, "%3.3f A", iCurr);
  Unicode::snprintfFloat(txtVoltageValBuffer, TXTVOLTAGEVAL_SIZE, "%3.3f V", iVolt);
  txtCurrentVal.invalidate();
  txtVoltageVal.invalidate();
}

 

customer action photo 2

image source: the mockup running on the road test kit

 

What did I learn? That it's not hard to switch screens, and that the clock ticks keep the application responsive while the system is busy starting up or doing other work in parallel.

With the investigations done before (serial communication, backend data integration, sending commands), I'm set for the next step: real integration.

 

Related Posts
First Experience with CubeIDE
Create a TouchGFX Project with support for the Designer, CubeIDE and Debugger - Pt 1: Screen Works
Create a TouchGFX Project with support for the Designer, CubeIDE and Debugger - Pt 2: Touch Works
TouchGFX Simple Example: react on a button click
USB, FreeRTOS, Task Notifications and Interrupts
the Development Kit STMod+ Connector and Using UART2
TouchGFX Application Framework: Model, View, Presentation, Message Queue
TouchGFX Application Framework: A Mock GUI - Show Statuses, Switch Screens
TouchGFX Application Framework: MVP and the ModelListener
Write a CubeIDE MX application: Hardware Cryptography with DMA