Synchronization between tasks
In Part 2, we explained "tasks," the unit of processing on T-Kernel.
Although each task operates independently, you may sometimes want multiple tasks to carry out processing in cooperation with one another. For example, you may want to execute one task's processing after another task's processing has completed, to execute task C after both task A and task B have completed, or conversely, to execute tasks B and C after task A has completed. Combinations of these patterns are also possible.
In this context, "synchronization" means making a task wait for the completion of processing in another task so that multiple tasks can carry out such coordinated processing.
Since T-Kernel provides various types of synchronization functions, you can develop efficient programs by using the most appropriate synchronization function as needed.
Event flag
As explained earlier, an event flag is a function you can use when you want to execute a task's processing after another task's processing has completed. With an event flag, the meaning of an event, such as the completion of a task's processing, is expressed as a bit pattern and used to synchronize tasks.
Listing 1 is an example of programming to “execute the process of task A after the process of task B is completed” (*1) using an event flag.
(*1) doWorkA and doWorkB represent the processing of each task and may enter a waiting state if necessary. In practice, each task would perform various other processing in addition to doWorkA and doWorkB, but this has been omitted from the listing for simplicity.
【Listing 1: Task A’s doWorkA is executed after Task B’s doWorkB is completed.】
#include <basic.h>
#include <tk/tkernel.h>
#include <tm/tmonitor.h>

IMPORT void doWorkA( void );
IMPORT void doWorkB( void );

#define FPTN 0x00000001U

ID flgid;

void taskA( INT stacd, VP exinf )
{
    UINT flgptn;
    while(1){
        tk_wai_flg(flgid, FPTN, TWF_ORW|TWF_CLR, &flgptn, TMO_FEVR);  /* Wait for doWorkB() to complete */
        doWorkA();                    /* What to do after doWorkB() */
    }
    tk_ext_tsk();
}

void taskB( INT stacd, VP exinf )
{
    while(1){
        doWorkB();                    /* What to do before doWorkA() */
        tk_set_flg(flgid, FPTN);      /* Notify the completion of doWorkB() */
    }
    tk_ext_tsk();
}

EXPORT INT usermain( void )           /* Function called by the initial task */
{
    T_CFLG cflg  = { NULL, TA_TFIFO|TA_WMUL, 0 };
    T_CTSK ctskA = { NULL, TA_HLNG|TA_RNG0, taskA, 1, 4*1024 };
    T_CTSK ctskB = { NULL, TA_HLNG|TA_RNG0, taskB, 1, 4*1024 };
    ID tskIdA;                        /* Task A identifier */
    ID tskIdB;                        /* Task B identifier */

    flgid  = tk_cre_flg( &cflg );     /* Create the event flag */
    tskIdA = tk_cre_tsk( &ctskA );    /* Create Task A */
    tk_sta_tsk( tskIdA, 0 );          /* Start execution of Task A */
    tskIdB = tk_cre_tsk( &ctskB );    /* Create Task B */
    tk_sta_tsk( tskIdB, 0 );          /* Start execution of Task B */
    tk_slp_tsk(TMO_FEVR);             /* Put the initial task to sleep (wait to be woken up) */

    return 0;
}
tk_wai_flg waits for an event flag to be set, and tk_set_flg sets an event flag.
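For reference, here is the tk_wai_flg call from Listing 1 again, with the role of each argument annotated (a sketch; the interpretation of the wait mode follows the T-Kernel specification, and the variable names are those used in Listing 1):

    /* Wait on the event flag identified by flgid.                              */
    /*   FPTN            : the bit pattern to wait for                          */
    /*   TWF_ORW|TWF_CLR : release the wait when any of the specified bits is   */
    /*                     set (OR wait), and clear the flag on release         */
    /*   &flgptn         : receives the flag's bit pattern at release           */
    /*   TMO_FEVR        : wait forever (no timeout)                            */
    tk_wai_flg(flgid, FPTN, TWF_ORW|TWF_CLR, &flgptn, TMO_FEVR);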
When you run the program, Task A starts executing first, but it stops at the tk_wai_flg call in taskA. Then, when Task B executes tk_set_flg after its doWorkB processing is completed, Task A, which had been stopped at tk_wai_flg (the "waiting state" in T-Kernel terms), is released and can execute doWorkA.
In addition, when you use event flags and similar objects on T-Kernel, they must be prepared beforehand. In Listing 1 this is done by tk_cre_flg in usermain, which creates (generates) the event flag.
In the same way as in Listing 1, event flags can also be used to execute task C after both task A and task B are completed, to execute tasks B and C after task A is completed, and for various other synchronization patterns; a sketch of the first pattern follows.
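As an illustration, here is a minimal sketch (not one of the article's own listings) in which task C waits until both task A and task B have set their own bit of a shared event flag. The bit assignments FPTN_A and FPTN_B are hypothetical, and doWorkA/doWorkB/doWorkC stand for each task's processing as in the listings; TWF_ANDW requests an AND wait, which is released only when all of the specified bits are set. Creating the flag and the tasks is assumed to be done as in Listing 1.

#define FPTN_A 0x00000001U            /* Bit set by task A on completion (hypothetical) */
#define FPTN_B 0x00000002U            /* Bit set by task B on completion (hypothetical) */

ID flgid;                             /* Event flag created with tk_cre_flg() as in Listing 1 */

void taskA( INT stacd, VP exinf )
{
    doWorkA();                        /* Processing for Task A */
    tk_set_flg(flgid, FPTN_A);        /* Notify completion of task A's work */
    tk_ext_tsk();
}

void taskB( INT stacd, VP exinf )
{
    doWorkB();                        /* Processing for Task B */
    tk_set_flg(flgid, FPTN_B);        /* Notify completion of task B's work */
    tk_ext_tsk();
}

void taskC( INT stacd, VP exinf )
{
    UINT flgptn;
    /* AND wait: released only when both FPTN_A and FPTN_B are set */
    tk_wai_flg(flgid, FPTN_A|FPTN_B, TWF_ANDW|TWF_CLR, &flgptn, TMO_FEVR);
    doWorkC();                        /* Executed only after both A and B have finished */
    tk_ext_tsk();
}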
Exclusion Control
In a real-time OS such as T-Kernel, “exclusion control” is a function that is often used together with synchronization.
Because tasks run in parallel (at least apparently so), if multiple tasks access the same resource (a shared variable, etc.) and attempt some kind of processing on it at the same time, the processing of the tasks can conflict and produce incorrect results. The mechanism that prevents this is "exclusion control," and T-Kernel provides semaphores and mutexes for it (see Part 5).
For example, let’s say a system has three tasks each performing a different process, and the system as a whole has a single counter (variable) that counts the number of times a task has completed a process.
In this case, each task can be programmed as shown in Listing 2, for example.
【Listing 2: Sample Program for Task A】
void taskA( INT stacd, VP exinf )
{
    while(1){
        doWorkA();                    /* Processing for Task A */
        counter++;                    /* count up */
    }
    tk_ext_tsk();                     /* End of task */
}
If you program the other tasks (Task B and C) in the same way, it seems to work correctly at first glance. However, it does not work well in the following cases.
For example, if Task A enters the execution state while Task B is in the middle of executing counter++, and Task A then executes counter++ as an interruption to Task B, the counting goes wrong.
Here's how it works. counter++ can be written as a single line in C. However, when the counter variable is placed in memory, the processing procedure from the CPU's point of view is as follows.
Processing steps of counter++
(1) Read the current value from the counter variable.
(2) Add 1 to the value that was read.
(3) Write the new value back into the counter variable.
By the way, if we write this process in Arm’s assembly language, it looks like Listing 3, for example.
【Listing 3: Example of counter++ written in Arm’s assembly language】
※ counter is assumed to have a separate label defined.
ldr   r2, counter       // (1)-1
ldr   r3, [r2, #0]      // (1)-2
add   r3, r3, #1        // (2)
str   r3, [r2, #0]      // (3)
Suppose Task B has executed up to (2) when the higher-priority Task A enters the execution state and executes counter++. The processing of Task B is suspended, and Task A runs (1) through (3) first, updating the value of counter. Task B then resumes from where it was interrupted and executes (3), writing back the value it had computed before being preempted. In other words, the new counter value written by Task A is overwritten by Task B at (3), and Task A's update is lost (see the trace below).
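To make the lost update concrete, here is a hypothetical trace of this sequence, assuming counter starts at 10 and using the step numbers from Listing 3:

    counter = 10
    Task B: (1) reads counter                      -> Task B's register holds 10
    Task B: (2) adds 1                             -> Task B's register holds 11
        (Task A preempts Task B here)
    Task A: (1)(2)(3) reads 10, adds 1, writes 11  -> counter = 11
        (Task B resumes)
    Task B: (3) writes its register value          -> counter = 11 (Task A's increment is lost; 12 was expected)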
To prevent this from happening, the program must execute steps (1) through (3) as a continuous sequence, without being interrupted by other tasks, that is, exclusively. This is what the exclusion control function accomplishes.
Semaphore
There are various ways to achieve exclusion control, and one of the most common ways is to use semaphores.
Listing 4 shows an example of adding exclusion control using semaphores.
【Listing 4: Sample Program with Exclusion Control】
#include <basic.h>
#include <tk/tkernel.h>
#include <tm/tmonitor.h>

IMPORT void doWorkA( void );
IMPORT void doWorkB( void );
IMPORT void doWorkC( void );

volatile INT counter = 0;
ID semid;

void taskA( INT stacd, VP exinf )
{
    while(1){
        doWorkA();                    /* Processing for Task A */
        tk_wai_sem(semid, 1, TMO_FEVR);
        counter++;                    /* count up */
        tk_sig_sem(semid, 1);
    }
    tk_ext_tsk();
}

void taskB( INT stacd, VP exinf )
{
    while(1){
        doWorkB();                    /* Processing for Task B */
        tk_wai_sem(semid, 1, TMO_FEVR);
        counter++;                    /* count up */
        tk_sig_sem(semid, 1);
    }
    tk_ext_tsk();
}

void taskC( INT stacd, VP exinf )
{
    while(1){
        doWorkC();                    /* Processing for Task C */
        tk_wai_sem(semid, 1, TMO_FEVR);
        counter++;                    /* count up */
        tk_sig_sem(semid, 1);
    }
    tk_ext_tsk();
}

EXPORT INT usermain( void )           /* Function called by the initial task */
{
    T_CSEM csem  = { NULL, TA_TFIFO|TA_FIRST, 1, 1 };
    T_CTSK ctskA = { NULL, TA_HLNG|TA_RNG0, taskA, 1, 4*1024 };
    T_CTSK ctskB = { NULL, TA_HLNG|TA_RNG0, taskB, 2, 4*1024 };
    T_CTSK ctskC = { NULL, TA_HLNG|TA_RNG0, taskC, 2, 4*1024 };
    ID tskIdA;                        /* Task A identifier */
    ID tskIdB;                        /* Task B identifier */
    ID tskIdC;                        /* Task C identifier */

    tk_chg_pri(TSK_SELF, 1);          /* Change the priority of the initial task to 1 */
    semid = tk_cre_sem( &csem );      /* Create the semaphore */
    tskIdB = tk_cre_tsk( &ctskB );    /* Create Task B */
    tk_sta_tsk( tskIdB, 0 );          /* Start execution of Task B */
    tskIdC = tk_cre_tsk( &ctskC );    /* Create Task C */
    tk_sta_tsk( tskIdC, 0 );          /* Start execution of Task C */
    tskIdA = tk_cre_tsk( &ctskA );    /* Create Task A */
    tk_sta_tsk( tskIdA, 0 );          /* Start execution of Task A */
    tk_slp_tsk(TMO_FEVR);             /* Put the initial task to sleep (wait to be woken up) */

    return 0;
}
As with counter++ in Listing 4, when multiple tasks operate on a single variable (resource), you enclose that part between tk_wai_sem and tk_sig_sem. A semaphore is a function for exclusion control and synchronization of resources that represents the presence or absence of unused resources, and their number, numerically.
tk_wai_sem attempts to acquire a semaphore resource. If the resource can be acquired, it decrements the semaphore's resource count and the task continues processing.
tk_sig_sem returns a semaphore resource. If a task is waiting for the resource at that time, the resource is allocated to that task and its waiting state is released.
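For reference, here is the semaphore usage from Listing 4 again, with the creation packet and the calls annotated (a sketch; the field interpretation follows the standard T-Kernel T_CSEM definition, and with an initial and maximum count of 1 the semaphore behaves as a simple lock):

    /* Creation packet from Listing 4: T_CSEM csem = { NULL, TA_TFIFO|TA_FIRST, 1, 1 }; */
    /*   exinf   = NULL              : extended information (unused here)               */
    /*   sematr  = TA_TFIFO|TA_FIRST : waiting tasks are queued in FIFO order and       */
    /*                                 served from the head of the queue                */
    /*   isemcnt = 1                 : initial resource count                           */
    /*   maxsem  = 1                 : maximum resource count                           */

    tk_wai_sem(semid, 1, TMO_FEVR);   /* Acquire 1 resource; wait forever if none is free */
    counter++;                        /* Critical section                                 */
    tk_sig_sem(semid, 1);             /* Return 1 resource                                */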
From each task's point of view, being in the waiting state means that the call to tk_wai_sem does not return, so execution is effectively held up just before counter++.
When the task that acquired the semaphore resource finishes executing counter++ and issues tk_sig_sem, the resource is returned to the semaphore. If another task is waiting for the resource in tk_wai_sem, it then acquires the resource, its waiting state is released, and it can execute the processing that follows tk_wai_sem.
As a result, steps (1) to (3) of the counter++ procedure shown above are executed in sequence for each task without interleaving, and the count is now updated correctly. By using semaphores in this way, you can perform exclusion control over a commonly used resource (the counter variable in the case of Listing 4).
With this program, the section from tk_wai_sem to tk_sig_sem is executed exclusively, without the other tasks entering their corresponding sections at the same time, so you can place not only a simple operation such as counter++ there but also more complex processing. As with event flags, semaphores must be prepared in advance: this is the tk_cre_sem call in usermain in Listing 4. This operation is called "creating (generating) a semaphore" in T-Kernel, and multiple semaphores can be created and used for different purposes.
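For reference, the same protection of counter could also be written with the mutex mentioned in the section on exclusion control (covered in detail in Part 5). The following is a minimal sketch of that approach, not taken from the article's listings: tk_cre_mtx creates a mutex, and tk_loc_mtx / tk_unl_mtx lock and unlock it around the critical section.

IMPORT void doWorkA( void );
volatile INT counter = 0;
ID mtxid;                             /* Mutex identifier */

void taskA( INT stacd, VP exinf )
{
    while(1){
        doWorkA();                    /* Processing for Task A */
        tk_loc_mtx(mtxid, TMO_FEVR);  /* Lock the mutex (wait if already locked) */
        counter++;                    /* count up (critical section) */
        tk_unl_mtx(mtxid);            /* Unlock the mutex */
    }
    tk_ext_tsk();
}

/* In usermain(), before starting the tasks:                                     */
/*   T_CMTX cmtx = { NULL, TA_INHERIT, 0 };   (TA_INHERIT: priority inheritance) */
/*   mtxid = tk_cre_mtx( &cmtx );                                                */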