Forum Posts
david.truan
Sep 23, 2025
In General Discussions
Hello, in this post, we'll discover more about the MCUBoot bootloader and its integration into Zephyr/NCS.
What is MCUBoot?
MCUBoot is a lightweight bootloader capable of multiple tasks:
• Boot an image stored in flash
• Verify an image in flash
• Swap images in slots
• Launch the TF-M environment
• Wait for Serial recovery
• ...
It is board/OS agnostic and can be ported to any board with simple adaptations. In this post, we'll focus on its integration in the nRF Connect SDK (NCS).
Why use MCUBoot?
Having a bootloader is a requirement for most projects. It enables upgrade management, image integrity checks, and many more features, as listed above.
Basic image management is needed to offer an upgrade system to the user. Since MCUBoot can be trimmed down to ~24 kB, it is well suited for low-memory systems.
MCUBoot itself can also be upgraded if a first-stage bootloader is present before it. In that case, two MCUBoot slots are required to perform verification and image swapping if needed.
Upgrade modes
MCUBoot allows multiple upgrade types:
• Single slot: Only one application slot is available, and it is always overwritten in case of an upgrade. Upgrading is only possible from MCUBoot itself.
• Dual slot: Two slots are reserved for the application. When upgrading, slot 2 is written, then MCUBoot checks that the image is valid and swaps it into slot 1, as only slot 1 can be executed.
• Serial recovery: Lets MCUBoot upgrade/overwrite an image by starting an SMP server that waits for UART commands containing the new image to write.
The most commonly used mode is dual slot, as it keeps a backup of the old application and lets you define rules to validate the new image. Its main disadvantage is that the application can only take up to half of the remaining memory (after MCUBoot and the other storage partitions are set).
How to use MCUBoot in NCS
In NCS, MCUBoot can be enabled using the sysbuild configuration by enabling:
SB_CONFIG_BOOTLOADER_MCUBOOT=y
This will compile MCUBoot for your project and automatically create the corresponding partitions if dynamic partitioning is used. More information about how to configure MCUBoot can be found on the official nRF documentation.
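For instance, a minimal sysbuild.conf could look like this (the signing key line is optional and the path is a placeholder; without it, MCUBoot falls back to its default development key):
SB_CONFIG_BOOTLOADER_MCUBOOT=y
SB_CONFIG_BOOT_SIGNATURE_KEY_FILE="keys/priv.pem"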
Image upgrade from an application
The classic and most common way of upgrading an application is through the DFU Target API available in an nRF application. It is only possible in a dual-slot setup, as the application cannot erase itself when using a single slot.
Here is a pseudo-code using the DFU Target API:
/* (Pseudo-code: buffers, sizes and the callback are application-defined;
 * headers: <dfu/dfu_target.h>, <dfu/dfu_target_mcuboot.h>, <zephyr/sys/reboot.h>) */
/* Set the update buffer used by the DFU API */
dfu_target_mcuboot_set_buf(mcuboot_buf, MCUBOOT_BUF_SZ);
/* Initialize the DFU Target API with the image info (type, number, size) */
dfu_target_init(DFU_TARGET_IMAGE_TYPE_MCUBOOT, 0, img_size, dfu_target_callback_handler);
/* Send the image chunks */
while (image_bytes_left) {
    bytes_to_write = read_image_from_sd(dec_buf);
    dfu_target_write(dec_buf, bytes_to_write);
}
/* Notify that the flash is done and all chunks were sent */
dfu_target_done(true);
/* Ask MCUBoot to check for an update on all slots (-1) */
dfu_target_schedule_update(-1);
/* Reboot to let MCUBoot upgrade the system */
sys_reboot(SYS_REBOOT_COLD);
Once MCUBoot boots again, it checks the image validity (higher version, correct slot, ...) and integrity (correct encryption key, correct signature, ...) and swaps it in if valid. The application can then run some more checks (comparing values stored in NVS, for example) to validate and confirm the upgrade using the following function:
boot_write_img_confirmed();
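To illustrate, here is a minimal sketch of such a confirmation hook at application startup; app_self_test() is a hypothetical check of your own, not part of the Zephyr API:
#include <zephyr/dfu/mcuboot.h>
#include <zephyr/sys/reboot.h>

/* Hypothetical application-specific health check (e.g. NVS value comparison) */
extern int app_self_test(void);

void confirm_image_if_healthy(void)
{
	if (boot_is_img_confirmed()) {
		return; /* Nothing to do: not a freshly swapped image */
	}
	if (app_self_test() == 0) {
		/* Mark the running image as permanent */
		boot_write_img_confirmed();
	} else {
		/* Reboot without confirming: MCUBoot reverts to the previous image */
		sys_reboot(SYS_REBOOT_COLD);
	}
}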
Conclusion
We can now set up and use MCUBoot to upgrade our system! This is the first step to consider when designing a system requiring upgrades, as it will determine:
• What to enable
• How to configure the upgrade
• What sizes are available for my different system components (bootloader, application, TF-M, NVS, ...)
• Whether and how the upgrade can be recovered if it fails
I hope you are now ready to dig deeper into MCUBoot and see what else it has to offer! Its code is fully open source and is a great way to learn about the boot process and its capabilities. I also strongly encourage you to read the available MCUBoot samples from nRF!
Links:
• https://docs.nordicsemi.com/bundle/ncs-latest/page/mcuboot/readme-ncs.html
• https://docs.mcuboot.com/
• https://docs.zephyrproject.org/latest/services/device_mgmt/dfu.html
david.truan
Aug 24, 2025
In General Discussions
Hello everyone!
Today we'll cover how to enter Serial Recovery boot mode in MCUBoot from a running application, using Zephyr. This can be handy if no user input (e.g. a button) is available, which is traditionally how the user initiates Serial Recovery.
In this article, we'll see:
• How to configure MCUBoot to enable Serial Recovery
• How to add a retention memory to communicate between our application and MCUBoot
• How to request Serial Recovery mode from the application
• What is needed once entering the Serial Recovery mode
Configuring MCUBoot for Serial Recovery
The first step is enabling serial recovery support in MCUBoot. This is done through the MCUBoot Kconfig options.
You’ll need to make sure your build of MCUBoot is compiled with:
CONFIG_MCUBOOT_SERIAL=y
Depending on your board, you may also need to select the transport used by MCUBoot for recovery, for example UART (as opposed to USB CDC ACM):
CONFIG_BOOT_SERIAL_UART=y
This ensures that MCUBoot includes the serial recovery subsystem and listens on the correct UART port for incoming SMP commands.
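In the Zephyr port, the UART instance used for recovery is typically taken from the zephyr,uart-mcumgr chosen node. A minimal overlay, assuming uart0 is the port wired to your host, could look like this:
/ {
	chosen {
		zephyr,uart-mcumgr = &uart0;
	};
};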
Adding Retention Memory
To request Serial Recovery from your application, Zephyr offers the retention memory API. It lets you define an area in the DTS that is shared between your application and MCUBoot. Here is how to add it in a DTS overlay:
/ {
/* Define a non-init RAM region as retained memory */
sram@2003FC00 {
compatible = "zephyr,memory-region", "mmio-sram";
reg = <0x2003FC00 DT_SIZE_K(1)>;
zephyr,memory-region = "RetainedMem";
status = "okay";
/* Define a small partition dedicated to boot mode (1 byte) */
retainedmem {
compatible = "zephyr,retained-ram";
status = "okay";
#address-cells = <1>;
#size-cells = <1>;
retention0: retention@0 {
compatible = "zephyr,retention";
status = "okay";
reg = <0x0 0x1>;
};
};
};
/* Tell Zephyr this retention region is used for boot-mode */
chosen {
zephyr,boot-mode = &retention0;
};
};
/* Shrink sram0 by 1 kB to account for the non-init retained area */
&sram0 {
reg = <0x20000000 0x3FC00>;
};
And in the configuration (the retention options belong in the application's prj.conf; the serial recovery options belong in MCUBoot's own configuration, e.g. a sysbuild MCUBoot overlay):
CONFIG_RETAINED_MEM=y
CONFIG_RETENTION=y
CONFIG_RETENTION_BOOT_MODE=y
# Enable the MCUBoot serial recovery feature (MCUBoot-side configuration)
CONFIG_MCUBOOT_SERIAL=y
CONFIG_BOOT_SERIAL_BOOT_MODE=y
Requesting Serial Recovery from the Application
Once the retention mechanism is in place, the application can request Serial Recovery mode by:
1. Writing the predefined flag value into the retention memory.
2. Triggering a system reset (sys_reboot(SYS_REBOOT_COLD) in Zephyr).
#include <zephyr/retention/bootmode.h>
#include <zephyr/sys/reboot.h>
#include <zephyr/logging/log.h>
LOG_MODULE_REGISTER(app, LOG_LEVEL_INF);
void request_serial_recovery(void)
{
int ret;
ret = bootmode_set(BOOT_MODE_TYPE_BOOTLOADER);
if (ret) {
LOG_ERR("Failed to set boot mode (err %d)", ret);
return;
}
sys_reboot(SYS_REBOOT_COLD);
/* Should not reach here */
}
On the next startup, MCUBoot detects the flag and drops directly into Serial Recovery, waiting for a flashing system to send new firmware.
What Happens in Serial Recovery Mode
When MCUBoot is running in Serial Recovery, it uses the Simple Management Protocol (SMP) over UART. This means it expects SMP packets containing image management commands (upload, erase, test, confirm, etc.). The command implementations can be found in bootloader/mcuboot/boot/boot_serial/src/boot_serial.c
To make use of this, you need an SMP-capable host machine or another Zephyr device configured as an SMP client.
Some common options:
• From a host machine: use mcumgr, the reference SMP tool (see the connection setup sketch after this list). For example:
mcumgr -t 30 -c serial1 image upload app_update.bin
• From another Zephyr board: you can build Zephyr with the mcumgr subsystem enabled, turning the board into an SMP client. In this case, you’ll need to configure the board with:
CONFIG_MCUMGR=y
CONFIG_MCUMGR_CMD_IMG_MGMT=y
CONFIG_MCUMGR_CMD_OS_MGMT=y
CONFIG_MCUMGR_SMP_UART=y
Then your second board can act as the update host and push firmware into the target board running MCUBoot in Serial Recovery, sourcing the image from an SD card, WiFi, ...
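Back to the host-machine option: the serial1 connection profile used above has to exist first. It can be created like this (the device path and baud rate are assumptions for your setup):
mcumgr conn add serial1 type="serial" connstring="dev=/dev/ttyACM0,baud=115200"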
Conclusion
With this setup, your application can programmatically request Serial Recovery, and MCUBoot will listen for new firmware uploads via SMP, whether from a PC host tool like mcumgr or from another SMP-capable device.
david.truan
Jul 14, 2025
In Embedded User Interface
Welcome to this second part of the great quest to add a custom format to LVGL 9.x!
Last time, we saw how to define our custom pixel format and how to add a way to blend it using LVGL. At this point, we are able to draw basic objects with our custom pixel format.
This time, we'll see how to patch LVGL to be able to transform our objects. This includes:
• Scaling
• Rotation
• Skewing
So let's dig in and see how to achieve that!
How does LVGL transform objects
LVGL's method of transforming an object is the following:
• Take a snapshot of the object's rendering and save it in a draw layer (equivalent to an image, either in ARGB8888 or in your custom pixel format), using blending
• Transform the draw buffer using integer math and a pixel-format-specific transform function
• Blend the draw buffer back into the destination in the custom pixel format
Here is a picture summarizing it:
We see that this pipeline heavily uses the blending mechanisms we covered last time. More precisely, it uses the image blending capabilities, as the transformation happens in a draw buffer instead of a classical object.
As said in the first point, the object is snapshotted and saved either as ARGB8888 or in the custom pixel format. In this post, we'll cover the case where it uses our custom pixel format.
Using ARGB8888 has the advantage of already offering transformation functions, but it requires image blending from/into the custom pixel format. More on all that in a future post about the draw buffers!
Adding our own
So we are already equipped with all the blending functions we need, as we set them up in the last post.
Now it is time to write our transformation functions! Luckily, our example pixel format is really simple, so we can model it on the ARGB8888 transformation.
The transformation happens in the refr_obj function:
lv_layer_t * new_layer = lv_draw_layer_create(layer,
area_need_alpha ? LV_COLOR_FORMAT_ARGB8888 : LV_COLOR_FORMAT_NATIVE, &layer_area_act);
lv_obj_redraw(new_layer, obj);
lv_point_t pivot = {
.x = lv_obj_get_style_transform_pivot_x(obj, 0),
.y = lv_obj_get_style_transform_pivot_y(obj, 0)
};
if(LV_COORD_IS_PCT(pivot.x)) {
pivot.x = (LV_COORD_GET_PCT(pivot.x) * lv_area_get_width(&obj->coords)) / 100;
}
if(LV_COORD_IS_PCT(pivot.y)) {
pivot.y = (LV_COORD_GET_PCT(pivot.y) * lv_area_get_height(&obj->coords)) / 100;
}
lv_draw_image_dsc_t layer_draw_dsc;
lv_draw_image_dsc_init(&layer_draw_dsc);
layer_draw_dsc.pivot.x = obj->coords.x1 + pivot.x - new_layer->buf_area.x1;
layer_draw_dsc.pivot.y = obj->coords.y1 + pivot.y - new_layer->buf_area.y1;
layer_draw_dsc.opa = opa_layered;
layer_draw_dsc.rotation = lv_obj_get_style_transform_rotation(obj, 0);
while(layer_draw_dsc.rotation > 3600) layer_draw_dsc.rotation -= 3600;
while(layer_draw_dsc.rotation < 0) layer_draw_dsc.rotation += 3600;
layer_draw_dsc.scale_x = lv_obj_get_style_transform_scale_x(obj, 0);
layer_draw_dsc.scale_y = lv_obj_get_style_transform_scale_y(obj, 0);
layer_draw_dsc.skew_x = lv_obj_get_style_transform_skew_x(obj, 0);
layer_draw_dsc.skew_y = lv_obj_get_style_transform_skew_y(obj, 0);
layer_draw_dsc.blend_mode = lv_obj_get_style_blend_mode(obj, 0);
layer_draw_dsc.antialias = disp_refr->antialiasing;
layer_draw_dsc.bitmap_mask_src = bitmap_mask_src;
layer_draw_dsc.image_area = obj_draw_size;
layer_draw_dsc.src = new_layer;
lv_draw_layer(layer, &layer_draw_dsc, &layer_area_act);
We see the draw layer creation, then the gathering of the transformation data, and finally the lv_draw_layer call, which will request a draw unit to draw our object when available. Once a draw unit is ready, the lv_draw_sw_transform function is called. This function handles the transformation data preparation and calls the format-specific transform function for each line.
In this post, we won't go into the details of how this transformation preparation is handled. To summarize, it calculates, for each point in the transformed layer, the corresponding position in the source layer, expressed in fixed-point steps. This allows, for example, each pixel of the source buffer to be used twice in the case of a 2x scaling.
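To make the step mechanism concrete, here is a tiny standalone sketch (not LVGL code; the 256 == 1.0 fixed-point convention is an assumption mirroring the shifts in the function below) showing how integer steps revisit source pixels during a 2x horizontal upscale:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t src_w   = 4;
    int32_t dest_w  = 8;                       /* 2x horizontal upscale */
    int32_t xs_step = (256 * src_w) / dest_w;  /* 128 == 0.5 source pixel per dest pixel */
    for(int32_t x = 0; x < dest_w; x++) {
        int32_t xs_int = (xs_step * x) >> 8;   /* integer source column */
        printf("dest %d -> src %d\n", x, xs_int);
    }
    /* Each source column is visited twice: 0 0 1 1 2 2 3 3 */
    return 0;
}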
We end up with the following format-specific function for ABGR8888. It is exactly the same as the ARGB8888 one, but it is shown here as an example:
static void transform_abgr8888(const uint8_t * src, int32_t src_w, int32_t src_h, int32_t src_stride,
int32_t xs_ups, int32_t ys_ups, int32_t xs_step, int32_t ys_step,
int32_t x_end, uint8_t * dest_buf, bool aa)
{
int32_t xs_ups_start = xs_ups;
int32_t ys_ups_start = ys_ups;
lv_color32_t * dest_c32 = (lv_color32_t *) dest_buf;
int32_t x;
for(x = 0; x < x_end; x++) {
xs_ups = xs_ups_start + ((xs_step * x) >> 8);
ys_ups = ys_ups_start + ((ys_step * x) >> 8);
int32_t xs_int = xs_ups >> 8;
int32_t ys_int = ys_ups >> 8;
/*Fully out of the image*/
if(xs_int < 0 || xs_int >= src_w || ys_int < 0 || ys_int >= src_h) {
((uint32_t *)dest_buf)[x] = 0x00000000;
continue;
}
/*Get the direction the hor and ver neighbor
*`fract` will be in range of 0x00..0xFF and `next` (+/-1) indicates the direction*/
int32_t xs_fract = xs_ups & 0xFF;
int32_t ys_fract = ys_ups & 0xFF;
int32_t x_next;
int32_t y_next;
if(xs_fract < 0x80) {
x_next = -1;
xs_fract = 0x7F - xs_fract;
}
else {
x_next = 1;
xs_fract = xs_fract - 0x80;
}
if(ys_fract < 0x80) {
y_next = -1;
ys_fract = 0x7F - ys_fract;
}
else {
y_next = 1;
ys_fract = ys_fract - 0x80;
}
const lv_color32_t * src_c32 = (const lv_color32_t *)(src + ys_int * src_stride + xs_int * 4);
dest_c32[x] = src_c32[0];
if(aa &&
xs_int + x_next >= 0 &&
xs_int + x_next <= src_w - 1 &&
ys_int + y_next >= 0 &&
ys_int + y_next <= src_h - 1) {
lv_color32_t px_hor = src_c32[x_next];
lv_color32_t px_ver = *(const lv_color32_t *)((uint8_t *)src_c32 + y_next * src_stride);
if(px_ver.alpha == 0) {
dest_c32[x].alpha = (dest_c32[x].alpha * (0xFF - ys_fract)) >> 8;
}
else if(!lv_color32_eq(dest_c32[x], px_ver)) {
if(dest_c32[x].alpha) dest_c32[x].alpha = ((px_ver.alpha * ys_fract) + (dest_c32[x].alpha * (0xFF - ys_fract))) >> 8;
px_ver.alpha = ys_fract;
dest_c32[x] = lv_color_mix32(px_ver, dest_c32[x]);
}
if(px_hor.alpha == 0) {
dest_c32[x].alpha = (dest_c32[x].alpha * (0xFF - xs_fract)) >> 8;
}
else if(!lv_color32_eq(dest_c32[x], px_hor)) {
if(dest_c32[x].alpha) dest_c32[x].alpha = ((px_hor.alpha * xs_fract) + (dest_c32[x].alpha * (0xFF - xs_fract))) >> 8;
px_hor.alpha = xs_fract;
dest_c32[x] = lv_color_mix32(px_hor, dest_c32[x]);
}
}
/*Partially out of the image*/
else {
if((xs_int == 0 && x_next < 0) || (xs_int == src_w - 1 && x_next > 0)) {
dest_c32[x].alpha = (dest_c32[x].alpha * (0x7F - xs_fract)) >> 7;
}
else if((ys_int == 0 && y_next < 0) || (ys_int == src_h - 1 && y_next > 0)) {
dest_c32[x].alpha = (dest_c32[x].alpha * (0x7F - ys_fract)) >> 7;
}
}
}
}
We'll skip the fixed-point math and the integer calculation magic for this time; the key part is using the computed steps to fetch the pixel from the source buffer here:
const lv_color32_t * src_c32 = (const lv_color32_t *)(src + ys_int * src_stride + xs_int * 4);
dest_c32[x] = src_c32[0];
You can see it finally uses the integer steps we talked about previously to get the pixel from the source and place it in the destination. In our case, we multiply xs_int by 4, as our pixel format uses 4 bytes per pixel.
Now it is time to add this function call in the lv_draw_sw_transform function:
case LV_COLOR_FORMAT_ABGR8888:
transform_abgr8888(src_buf, src_w, src_h, src_stride, xs_ups, ys_ups, xs_step_256, ys_step_256, dest_w, dest_buf, aa);
break;
Our custom pixel format transformation pipeline is now ready!
Conclusion
We are now equipped to transform our objects in our custom pixel format!
As we saw, this pipeline is not straightforward and we skipped a lot of details in this post to focus on the global mechanism and the basic use case.
The custom pixel format is used throughout the whole pipeline. In our case it does not matter much, as it uses the same amount of memory as ARGB8888, but it can be extremely useful when dealing with smaller-bpp formats!
Next time, we'll see how to decode an image encoded in our custom pixel format to be able to use it as an image or in an animation!
david.truan
Jun 15, 2025
In Embedded User Interface
Since 9.x, it is possible to patch LVGL to natively support new pixel formats. It can be a simple color-swap mode not originally planned, or a very specific display pixel format.
In this post series, we will cover the necessary steps to add such a custom pixel format and end up with a patch to apply to the LVGL sources.
In the following examples, we'll use a custom 32bpp ABGR pixel format (8-bit A, B, G, R). This is a made-up format used only as an example. We'll name this pixel format ABGR8888.
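As a reference for the rest of the series, here is a small sketch (an assumption of this post's layout, not LVGL code) of how a pixel in this format packs into a 32-bit word:
#include <stdint.h>

/* Pack one ABGR8888 pixel: alpha in the top byte, then blue, green, red */
#define ABGR8888(a, b, g, r)                                  \
    (((uint32_t)(a) << 24) | ((uint32_t)(b) << 16) |          \
     ((uint32_t)(g) << 8)  |  (uint32_t)(r))

static const uint32_t opaque_red = ABGR8888(0xFF, 0x00, 0x00, 0xFF);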
Adding the color format to lv_color.h
The first step is to let LVGL know about our new pixel format. To do this, we'll add it in src/misc/lv_color.h in the lv_color_format_t enum. Here we need to add the color format and make it the native format when LVGL is configured in 32bpp:
...
LV_COLOR_FORMAT_ABGR8888 = 0x36,
...
#elif LV_COLOR_DEPTH == 32
LV_COLOR_FORMAT_NATIVE = LV_COLOR_FORMAT_ABGR8888,
#else
Then, complete both lv_color_format_get_bpp and lv_color_format_has_alpha:
uint8_t lv_color_format_get_bpp(lv_color_format_t format) {
...
case LV_COLOR_FORMAT_ARGB8888:
case LV_COLOR_FORMAT_XRGB8888:
case LV_COLOR_FORMAT_ABGR8888: /* Added this line on the return 32 case */
return 32;
...
}
bool lv_color_format_has_alpha(lv_color_format_t format) {
...
case LV_COLOR_FORMAT_ARGB4444:
case LV_COLOR_FORMAT_ABGR8888: /* Added here as our pixel format has an alpha channel */
return true;
...
}
We are now ready to implement the blending functions!
Adding the blending functions
LVGL needs a way to know how to blend a pixel format into a draw buffer, which can be in the same or another color format. In our case, we are only interested in blending our custom pixel format with itself, but the procedure is the same for blending any other format.
Blending in LVGL comes in two different types:
• Color blending: Simple blending of a single color in a given area. It consists of filling the area with the color, with or without masking.
• Image blending: Blending of an image descriptor coming with its source buffer. Can be with or without a mask.
In the examples, we only cover the color/image blend with no mask or with a mask, but not the AA variants, to keep it simple.
So first, create your custom blending files (which you can copy-paste from existing ones) in src/draw/sw/blend/:
• lv_draw_sw_blend_to_abgr8888.c: Function definitions
• lv_draw_sw_blend_to_abgr8888.h: Function declarations
Color blending function
The color blending function has this prototype (it can also be customized):
blend_color_to_format(lv_draw_sw_blend_fill_dsc_t * dsc)
Where the dsc parameter contains all the information about the color, the destination buffer, the coordinates, ... You should end up with something like this:
static inline void * drawbuf_next_row(const void * buf, uint32_t stride)
{
    return (void *)((uint8_t *)buf + stride);
}
/* Defined further down, in the image blending section */
static uint32_t blend_abgr(uint32_t color1, uint32_t color2, uint8_t mask);
void lv_draw_sw_blend_color_to_abgr8888(lv_draw_sw_blend_fill_dsc_t * dsc)
{
    int32_t w = dsc->dest_w;
    int32_t h = dsc->dest_h;
    lv_opa_t opa = dsc->opa;
    const lv_opa_t * mask = dsc->mask_buf;
    int32_t mask_stride = dsc->mask_stride;
    uint32_t * dest_buf_abgr8888 = dsc->dest_buf;
    int32_t dest_stride = dsc->dest_stride;
    int32_t x;
    int32_t y;
    /* Reorder the channels into our ABGR word layout here, as we did not patch lv_color_make() */
    uint32_t color_abgr8888 = ((uint32_t)0xFF << 24) |
                              ((uint32_t)dsc->color.blue << 16) |
                              ((uint32_t)dsc->color.green << 8) |
                              (uint32_t)dsc->color.red;
    /* Simple fill, no mask, full opacity */
    if(mask == NULL && opa >= LV_OPA_MAX) {
        for(y = 0; y < h; y++) {
            for(x = 0; x < w; x++) {
                dest_buf_abgr8888[x] = color_abgr8888;
            }
            /* Get next row */
            dest_buf_abgr8888 = drawbuf_next_row(dest_buf_abgr8888, dest_stride);
        }
    }
    /* Fill with mask */
    else if(mask && opa >= LV_OPA_MAX) {
        for(y = 0; y < h; y++) {
            for(x = 0; x < w; x++) {
                /* The mask holds one 8-bit opacity value (0-255) per pixel */
                uint8_t mask8 = mask[x];
                /* Mix the fill color into every pixel whose mask is not 0 */
                if(mask8 != 0x00) {
                    dest_buf_abgr8888[x] = blend_abgr(dest_buf_abgr8888[x], color_abgr8888, mask8);
                }
            }
            /* Get next row */
            dest_buf_abgr8888 = drawbuf_next_row(dest_buf_abgr8888, dest_stride);
            mask += mask_stride;
        }
    }
}
We can see the two different branches, one without masking, one with a mask. The mask is a uint8_t array containing one opacity value (0-255) per pixel, defining how to blend each pixel. The masking branch is taken, for example, when drawing letters, which are just a rectangular area with a mask so that only the letter outlines are drawn, with AA.
Now add the function prototype in the previously created header.
Image blending function
Next comes the image blending function, which acts exactly like the color blending, but with a source buffer instead of a single color:
/* Forward declaration, as the dispatcher below uses it */
static void abgr8888_image_blend(lv_draw_sw_blend_image_dsc_t * dsc);
void lv_draw_sw_blend_image_to_abgr8888(lv_draw_sw_blend_image_dsc_t * dsc)
{
    switch(dsc->src_color_format) {
        case LV_COLOR_FORMAT_ABGR8888:
            abgr8888_image_blend(dsc);
            break;
        default:
            LV_LOG_WARN("Not supported source color format");
            break;
    }
}
static uint8_t blend_channel(uint8_t c1, uint8_t c2, uint8_t mask)
{
    return (uint8_t)(((c1 * (255 - mask)) + (c2 * mask)) / 255);
}
static uint32_t blend_abgr(uint32_t color1, uint32_t color2, uint8_t mask)
{
    uint8_t a1 = (color1 >> 24) & 0xFF;
    uint8_t b1 = (color1 >> 16) & 0xFF;
    uint8_t g1 = (color1 >> 8) & 0xFF;
    uint8_t r1 = color1 & 0xFF;
    uint8_t a2 = (color2 >> 24) & 0xFF;
    uint8_t b2 = (color2 >> 16) & 0xFF;
    uint8_t g2 = (color2 >> 8) & 0xFF;
    uint8_t r2 = color2 & 0xFF;
    uint8_t a = blend_channel(a1, a2, mask);
    uint8_t b = blend_channel(b1, b2, mask);
    uint8_t g = blend_channel(g1, g2, mask);
    uint8_t r = blend_channel(r1, r2, mask);
    return ((uint32_t)a << 24) | ((uint32_t)b << 16) | ((uint32_t)g << 8) | r;
}
static void abgr8888_image_blend(lv_draw_sw_blend_image_dsc_t * dsc)
{
    int32_t w = dsc->dest_w;
    int32_t h = dsc->dest_h;
    lv_opa_t opa = dsc->opa;
    uint32_t * dest_buf = dsc->dest_buf;
    int32_t dest_stride = dsc->dest_stride;
    const uint32_t * src_buf = dsc->src_buf;
    int32_t src_stride = dsc->src_stride;
    const lv_opa_t * mask_buf = dsc->mask_buf;
    int32_t mask_stride = dsc->mask_stride;
    int32_t x;
    int32_t y;
    if(dsc->blend_mode == LV_BLEND_MODE_NORMAL) {
        /* No mask, full opacity: plain row-by-row copy */
        if(mask_buf == NULL && opa >= LV_OPA_MAX) {
            for(y = 0; y < h; y++) {
                for(x = 0; x < w; x++) {
                    dest_buf[x] = src_buf[x];
                }
                dest_buf = drawbuf_next_row(dest_buf, dest_stride);
                src_buf = drawbuf_next_row(src_buf, src_stride);
            }
        }
        /* With mask, full opacity: mix source and destination per pixel */
        else if(mask_buf != NULL && opa >= LV_OPA_MAX) {
            for(y = 0; y < h; y++) {
                for(x = 0; x < w; x++) {
                    uint8_t mask = mask_buf[x];
                    if(mask != 0x00) {
                        dest_buf[x] = blend_abgr(dest_buf[x], src_buf[x], mask);
                    }
                }
                dest_buf = drawbuf_next_row(dest_buf, dest_stride);
                src_buf = drawbuf_next_row(src_buf, src_stride);
                mask_buf += mask_stride;
            }
        }
    }
}
We see that, like before, we handle both the masked and unmasked cases. In the masking case, the destination buffer is also used in the mix.
Now add the function prototype in the previously created header.
Adding our functions to LVGL SW draw
Both of our new functions now need to be added to the common blend dispatch found in the lv_draw_sw_blend.c file. So first include the header:
#include "lv_draw_sw_blend_to_abgr8888.h"
In lv_sw_blend, we can now add our function calls. This function has two main branches:
/*Color fill, as no image source is specified*/
if(blend_dsc->src_buf == NULL) {
...
switch(layer->color_format) {
...
case LV_COLOR_FORMAT_ABGR8888:
lv_draw_sw_blend_color_to_abgr8888(&fill_dsc);
break;
}
} else {
switch(layer->color_format) {
...
case LV_COLOR_FORMAT_ABGR8888:
lv_draw_sw_blend_image_to_abgr8888(&image_dsc);
break;
}
}
We are now ready to let LVGL use our custom pixel format for basic color and image blending, which allows us to draw simple LVGL objects, letters, and already-decoded images (more on that in a future post!).
Conclusion
This first part covered the very basics of adding a custom pixel format to LVGL 9.x. It is kept pretty simple for the sake of example, but I strongly encourage you to check the already existing code in LVGL and base your addition on it.
In the next part we'll see how to add transformations to our pixel format, to be able to rotate and scale our objects!
david.truan
May 11, 2025
In General Discussions
Welcome to the second part of the testing post series!
As promised, this post covers the integration of our previous testing setup into a GitLab CI pipeline. We'll see how to:
• Prepare a specific Runner to ease the run and keep our CI file to a minimum.
• Prepare the CI file to build our project, run the analysis and prepare the result.
• Access the results on GitLab
Preparing the runner
To keep the work in the .gitlab-ci.yml to a minimum, we prepare a runner equipped with the necessary tools. Here is its Dockerfile:
FROM gcc:latest
ARG SONAR_SCANNER_VERSION="7.0.2.4839"
ARG SONAR_SERVER_URL="https://next.sonarqube.com/sonarqube"
ARG UNITY_RELEASE="2.6.1"
ARG UNITY_RELEASE_URL="https://github.com/ThrowTheSwitch/Unity/archive/refs/tags/v${UNITY_RELEASE}.zip"
RUN apt update && apt install -y curl git gcovr cmake unzip
RUN curl -sSLo sonar-scanner.zip "https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-${SONAR_SCANNER_VERSION}-linux-x64.zip" \
&& unzip -o sonar-scanner.zip \
&& mv sonar-scanner-${SONAR_SCANNER_VERSION}-linux-x64 sonar-scanner \
&& rm sonar-scanner.zip
RUN curl -sSLo build-wrapper.zip "${SONAR_SERVER_URL}/static/cpp/build-wrapper-linux-x86.zip" \
&& unzip -o build-wrapper.zip \
&& mv build-wrapper-linux-x86 build-wrapper \
&& rm build-wrapper.zip
RUN curl -sSLo Unity.zip "${UNITY_RELEASE_URL}" \
&& unzip -o Unity.zip \
&& cd Unity-${UNITY_RELEASE} \
&& cmake -B build \
&& cmake --build build \
&& cmake --install build
ENV PATH="/build-wrapper:/sonar-scanner/bin:${PATH}"
It fetches the sonar-scanner and build-wrapper binaries, adds them to the PATH environment variable (letting us access them easily in the next step), and builds and installs the Unity framework.
Registering the runner can be done by following the official documentation, as it is not the focus of this post. Just remember to tag the runner with a tag that will be referenced afterwards (e.g. testing, sonar, ...); the next step expects a runner with the sonar-testing tag.
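For reference, registering such a runner with its tag could look like this (URL, token, and image name are placeholders):
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com \
  --registration-token <TOKEN> \
  --executor docker \
  --docker-image sonar-runner:latest \
  --tag-list sonar-testing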
Preparing the CI file
The CI file will do the following:
• Build our code and store the output as artifact.
• Run the analysis using sonar-scanner.
Here is the CI file:
variables:
SONAR_SERVER_URL: https://next.sonarqube.com/sonarqube # Replace with your SonarQube server URL
SONAR_SCANNER_VERSION: 7.0.2.4839 # Find the latest version in the "Linux" link on this page:
# https://docs.sonarqube.org/latest/analysis/scan/sonarscanner/
BUILD_WRAPPER_OUT_DIR: bw-output # Directory where build-wrapper output will be placed
SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar" # Defines the location of the analysis task cache
GIT_DEPTH: "0" # Tells git to fetch all the branches of the project, required by the analysis task
# note that SONAR_TOKEN is transmitted to the environment through Gitlab CI
UNITY_RELEASE_URL: https://github.com/ThrowTheSwitch/Unity/archive/refs/tags/v2.6.1.zip
build:
stage: deploy
tags:
- sonar-testing
script:
- echo "Building the application"
- cd app
- cmake -B build
- build-wrapper-linux-x86-64 --out-dir "${BUILD_WRAPPER_OUT_DIR}" cmake --build build/ --config Release
artifacts:
paths:
- app/build/app
untracked: false
when: on_success
access: all
cache:
policy: push
key: "${CI_COMMIT_SHORT_SHA}"
paths:
- app/${BUILD_WRAPPER_OUT_DIR}
- app/build/
unity-tests:
stage: .post
tags:
- sonar-testing
script:
- echo "Running Unity unit tests"
- cd app
- ctest --test-dir build --output-junit junit.xml
- gcovr -e main.c --sonarqube > coverage.xml
artifacts:
untracked: false
when: always
paths:
- app/build/junit.xml
- app/coverage.xml
reports:
junit: app/build/junit.xml
cache:
policy: pull-push
key: "${CI_COMMIT_SHORT_SHA}"
paths:
- app/build/
- app/coverage.xml
- app/${BUILD_WRAPPER_OUT_DIR}
sonar-qube:
stage: .post
tags:
- sonar-testing
needs:
- job: unity-tests
artifacts: true
script:
- cd app
- sonar-scanner
cache:
policy: pull
key: "${CI_COMMIT_SHORT_SHA}"
paths:
- app/${BUILD_WRAPPER_OUT_DIR}
- app/coverage.xml
It is split into three jobs:
• Building the project and capturing the build-wrapper output
• Running the Unity tests
• Running the sonar-scanner analysis and sending it to the SonarQube Cloud server
Each of these jobs requests a runner with the sonar-testing tag, i.e. the runner we configured previously.
Accessing the results
Once you push to main, the CI will run and execute all the jobs. After a successful run, the results can be accessed on GitLab.
Here is the successful pipeline:
All jobs passed!
And here are the test results, available on GitLab:
We see that all the tests ran and no errors were reported.
Conclusion
This mini-series showed the integration of only a subset of the available frameworks. The chosen components are easy to set up and use, already validated and widely adopted in the testing world. It shows the first steps toward a robust, automated test environment that also offers rich and useful insight into your code, thanks to the sonar-scanner analysis.
I hope this will encourage you to try this kind of setup in your projects and that you'll build nice and bug-free code a bit more easily next time!
david.truan
Apr 13, 2025
In General Discussions
Testing your code is crucial, and having it tested automatically and reported in a concise, understandable way is a must. There are multiple aspects of testing:
- Unit tests + code coverage
- Integration tests
- Static analysis
- Dynamic analysis
In this post, we'll show you how to set up testing and analysis of your code with SonarQube Cloud and Unity. It will cover:
• Unit tests setup with Unity
• Code coverage reporting with gcovr
• SonarQube Cloud report
• CMake integration
Part 2 will cover the integration into a GitLab CI pipeline to run the analysis and the tests on each push.
Project architecture
The project files are available as a zip. They are organized as follows:
.
├── CMakeLists.txt
├── main.c
├── src
│ ├── app.c
│ ├── app.h
│ ├── CMakeLists.txt
│ └── my_math
│ ├── CMakeLists.txt
│ ├── my_math.c
│ └── my_math.h
├── test
│ ├── CMakeLists.txt
│ ├── my_math_test.c
│ └── test_main.c
• The sources are located in src/ folder
• The tests are in the test/ folder
• The project is configured and built using CMake. The top CMakeLists.txt is the main list file.
Building the project
cmake -B build # without tests
cmake -B build -DBUILD_TYPE=testing # with tests
cmake --build build
This builds the sources and our tests. The -DBUILD_TYPE=testing flag adds the coverage options to GCC and adds the test/ folder to the configuration/build.
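For the CMake integration part, here is a sketch of how the test executable might be registered with CTest in test/CMakeLists.txt (target and library names are assumptions, not the actual zip contents):
# (enable_testing() is assumed in the top-level CMakeLists.txt)
# Build the test runner against Unity and the code under test
add_executable(test_math my_math_test.c)
target_link_libraries(test_math PRIVATE unity my_math)
# Emit gcov instrumentation so gcovr can report coverage
target_compile_options(test_math PRIVATE --coverage)
target_link_options(test_math PRIVATE --coverage)
# Register the runner with CTest
add_test(NAME test_math COMMAND test_math)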
Unit tests using Unity
As stated before, we use the Unity framework to write our tests. Its advantages are:
- Easy C/C++ integration
- Simple .c and .h library
- Tests are easy to write
- Concise reports
- Can be integrated with Ceedling
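To give an idea of what a test file looks like, here is a minimal Unity test for a hypothetical my_math_add() function (the names are illustrative, not the actual zip contents):
#include "unity.h"
#include "my_math.h"

void setUp(void) {}    /* Runs before each test */
void tearDown(void) {} /* Runs after each test */

static void test_add_two_positives(void)
{
	TEST_ASSERT_EQUAL_INT(5, my_math_add(2, 3));
}

int main(void)
{
	UNITY_BEGIN();
	RUN_TEST(test_add_two_positives);
	return UNITY_END();
}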
Running the tests
Once the sources and tests are compiled, one can run the tests using ctest:
ctest --test-dir build
Internal ctest changing into directory: /home/dtruan/posts/app/build
Test project /home/dtruan/posts/app/build
Start 1: test_math
1/2 Test #1: test_math ........................ Passed 0.00 sec
Start 2: test_main
2/2 Test #2: test_main ........................ Passed 0.00 sec
100% tests passed, 0 tests failed out of 2
Total Test time (real) = 0.01 sec
The tests run and you get a report on stdout. Later, we'll use a specific output format compatible with JUnit reports, which GitLab CI consumes.
Running the code coverage
The code coverage will be reported using gcovr:
gcovr -e main.c -e build
------------------------------------------------------------------------------
GCC Code Coverage Report
Directory: .
------------------------------------------------------------------------------
File Lines Exec Cover Missing
------------------------------------------------------------------------------
src/app.c 6 6 100%
src/my_math/my_math.c 4 4 100%
test/test_main.c 11 11 100%
test/test_my_math.c 20 20 100%
------------------------------------------------------------------------------
TOTAL 41 41 100%
------------------------------------------------------------------------------
This runs the code coverage to check whether our tests cover enough lines of code. We can tell it to generate the report in SonarQube format and save it to coverage.xml:
gcovr -e main.c -e build --sonarqube > coverage.xml
SonarQube Cloud
SonarQube is a powerful static analysis and code reporting tool. It covers a variety of languages and offers multiple pricing plans, whether you just want to test the tool or support a full-scale enterprise setup.
In this article, we are using the free plan and the public Cloud server. It is assumed that you already have a SonarQube account and an organization created. You must also have your SONAR_TOKEN saved, as it will be used when running the analysis.
How SonarQube works
SonarQube works in two steps:
- First, it collects the build environment using a build wrapper. This generates output files which are used by the next step.
- Then it uses the sonar-scanner executable to analyse the code. It uses a configuration file to define the sources to analyse, the account/server information, ... It generates a SonarQube-compatible report after a successful run.
The configuration file (sonar-project.properties) for this project looks like this:
sonar.projectKey=<your_project_key>
sonar.organization=<your_project_org>
sonar.sources=src
sonar.cfamily.compile-commands=bw-output/compile_commands.json
sonar.host.url=https://sonarcloud.io
sonar.sourceEncoding=UTF-8
sonar.coverageReportPaths=coverage.xml
It specifies:
• The project key (from SonarQube Cloud)
• The organization this project is in
• Where our sources are located
• Where to look for the build commands, i.e. the build-wrapper output
• The sonar server URL, here we use the public server
• The sources encoding
• The coverage report file path (optional)
Run an analysis
Here are the steps, from a fresh state, without any prior build:
cmake -B build -DBUILD_TYPE=testing
build-wrapper-linux-x86-64 --out-dir bw-output cmake --build build
ctest --test-dir build
gcovr -e main.c --sonarqube > coverage.xml
SONAR_TOKEN=<YOUR_TOKEN> sonar-scanner
It will build the project, run the tests and code coverage, feed them to the analysis and send the report to the SonarQube Cloud server, which can then be consulted using the Web UI:
We can see the report on our main branch. Here everything passes:
• No Security issue was found
• Our tests cover enough of the code
Next time, we'll see how to integrate all of this into the GitLab CI to automatically run the analysis!
david.truan
Mar 14, 2025
In Embedded User Interface
So you want to include images in your LVGL project, running on an nRF5340. You prepare your images, convert them using the LVGL online tool, integrate them into your application, and... end up with 167% of your flash used!
Optimizing your images is crucial on low-resource systems such as MCUs. As a quick example, if you want a full-size background for an application on a 480x320 display, using ARGB8888 you end up with:
480 * 320 * 4 = 614400 B = 614.4 kB
which on some systems corresponds to more than half of the flash size.
So what can we do to optimize it? The following sections will show solutions to reduce flash usage.
Using RLE compression
RLE is a simple compression scheme which groups runs of identical symbols together.
Here is a simple example:
AAABBCCCCD (10B) -> RLE compression -> 3A2B4C1D (8B)
It is very useful when your image has long runs of same-color pixels. It becomes less interesting with images using a lot of colors and blurring, as there won't be many contiguous identical pixels.
To convert an image to RLE, LVGL offers an utility script in scripts/LVGLImage.py:
python3 lvgl/scripts/LVGLImage.py --ofmt C --cf RGB565 --compress RLE -o my_image.c my_image.png
This will convert my_image.png to a .c file in RGB565 format, compressed using RLE. To be able to use this compressed image in LVGL, you must activate these configs:
• LV_USE_RLE 1: Enable RLE processing.
• LV_BIN_DECODER_RAM_LOAD 1: Enable decoding the binary images in RAM.
Note that RLE decompression can use quite a lot of RAM, as it will need decompression buffers while loading the image.
Using indexed color format
The indexed color format represents each pixel as an index pointing to a color in a palette, which is encoded next to the pixel data. This means that if your image has <= 256 colors, each palette index can be represented in 1 B (index 0 to 255). The palette entries are written as ARGB8888, meaning that a 256-color palette = 256 * 4 = 1024 B = 1 kB.
To convert an image to indexed color, you can also use LVGL scripts/LVGLImage.py:
python3 lvgl/scripts/LVGLImage.py --ofmt C --cf I8 -o my_image.c my_image.png
The script internally converts your base PNG to an indexed PNG if needed, and then converts it to a .c file.
One can use these color formats (parameter --cf):
• I1: 2-color palette
• I2: 4-color palette
• I4: 16-color palette
• I8: 256-color palette
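To put numbers on the saving, take the 480x320 ARGB8888 background from the introduction and assume it fits into a 256-color palette (I8): 480 * 320 * 1 + 1024 = 154624 B ≈ 154.6 kB, roughly a 4x reduction compared to the 614.4 kB computed earlier.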
Using A* format (for grayscale images)
The last formats we will talk about are the A* formats, where * can be 1, 2, 4, or 8. These formats encode only the transparency. You end up with a grayscale image, which can then be recolored using the style recolor property.
To convert an image to A8, you can also use LVGL scripts/LVGLImage.py:
python3 lvgl/scripts/LVGLImage.py --ofmt C --cf A8 -o my_image.c my_image.png
Conclusion
We saw that multiple options are available when you need to optimize your images' size. Depending on your base image, one method may work better than another, so my advice is to play a bit with these color formats and see what works best for your images.
If you need any assistance in optimizing your images for your project, don't hesitate to contact us!
Links:
• RLE compression: https://en.wikipedia.org/wiki/Run-length_encoding
• Indexed color format: https://en.wikipedia.org/wiki/Indexed_color#:~:text=In%20computing%2C%20indexed%20color%20is,2%2Dbit%20indexed%20color%20image
• LVGL Image doc: https://docs.lvgl.io/9.2/overview/image.html#overview-image
david.truan
Feb 13, 2025
In General Discussions
So you just received a Nordic board for your next project. The first thing you do is open the documentation to see how to set up a project for your board (at least, it should be the first thing you do!). You then quickly realize that in the Nordic world, there are two main ways of managing your environment and development:
• The nRF Connect for Desktop
• The VSCode nRF Connect Extension Pack
So why would you choose one over the other? First, it may be a matter of habit. Maybe you absolutely despise VSCode and never want to touch it; maybe downloading yet another GUI app to manage your development is the last straw for you. The good news is that you can use either one for almost all your tasks:
• Manage your SDKs & toolchains
• Flash/erase/recover the board
• Monitor the board serial output on a console
• Configure the DTS
But not everything can be done with nRF Connect for Desktop: it has no way to let you write and build your applications. The build can instead be done using the CLI tools offered by Nordic (nrfutil, west, ...).
On the other hand, the VSCode extension allows you to build your project as well as manage the build configurations of your applications. It also hides the build commands behind Tasks linked to your build configuration, to help you concentrate on the coding part.
My take on this is that the VSCode extension is becoming more and more robust and feature-complete. This shows in Nordic pushing the VSCode extension in their user guides and documentation.
Having used both in my projects, I think the VSCode extension is overall the best choice, as it really is an all-in-one solution, actively maintained and developed by Nordic.
Here are my pros and cons for each one.
VSCode Extension Pack:
✅ All-in-one development kit
✅ Integrates in the VSCode ecosystem
✅ Heavily maintained
❌ Hides the build details and how the underlying build system works
❌ Can sometimes be buggy
nRF Connect for Desktop:
✅ Very easy to use
✅ Clear roles of the different applications
✅ Integrated updater
❌ You must build your code separately
❌ Standalone GUI application
In the end, it is mainly a matter of preference and usage. For example, I really like the build configuration management of the VSCode Extension Pack, as well as its powerful build tasks, but I always use the Serial Monitor of nRF Connect for Desktop to watch my board's output in a nice separate window. As with everything, the correct usage of a tool is the one you are most comfortable with!
david.truan
Jan 16, 2025
In Orchestration and Services
In a world of IoT devices being deployed everywhere, it sometimes becomes difficult to keep track of your fleet's logs and status. Fluent-Bit is a lightweight, high-performance telemetry agent for logs, device metrics, and traces.
It allows:
• Definition of Inputs, which correspond to the data you want to collect
• Usage of Parsers to transform unstructured log entries and give them a structure that makes processing and further filtering easier
• Filter passes to select only a specific subset of your logged data to be forwarded
• Flexible Output definitions which let you easily route the processed data to any location
Here is a simple example to show how to collect RAM usage of a specific application called my-amazing-app:
[INPUT]
Name proc
Proc_Name my-amazing-app
Interval_Sec 1
Interval_NSec 0
Fd true
Mem true
[OUTPUT]
Name stdout
Match *
[OUTPUT]
Name http
Match *
Host 192.168.2.3
Port 80
URI /something
This monitors the memory usage of the application my-amazing-app every second. The two OUTPUT sections print the records on stdout and send them to a specific URI at the IP 192.168.2.3.
Fluent-Bit defines a lot of input types to gather system metrics. It is also possible to use custom commands to log more specific data. For example, let's say we want to get the disk usage of the main disk. The df command gives this output:
$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3246320 3500 3242820 1% /run
efivarfs 438 309 125 72% /sys/firmware/efi/efivars
/dev/nvme0n1p3 975262680 806211136 119437160 88% /
tmpfs 16231600 414416 15817184 3% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 16231600 0 16231600 0% /run/qemu
/dev/nvme0n1p1 243852 6272 237580 3% /boot/efi
tmpfs 3246320 5936 3240384 1% /run/user/1001
Knowing that the disk is /dev/nvme0n1p3, we can then use a combination of grep and jq to gather the information and output it in JSON format:
$ df -k | grep nvme0n1p3 | jq -R -c -s 'gsub(" +"; " ") | split(" ") | { "disk_total": .[1], "disk_used": .[2], "disk_avail": .[3]}'
{"disk_total":"975262680","disk_used":"806195688","disk_avail":"119452608"}
Next, we fill the Fluent-Bit configuration file with the new Input, using the Exec plugin:
[INPUT]
Name exec
Tag disksize
Command df -k | grep nvme0n1p3 | jq -R -c -s 'gsub(" +"; " ") | split(" ") | { "disk_total": .[1], "disk_used": .[2], "disk_avail": .[3]}'
Parser json
Interval_Sec 3600
[FILTER]
Name nest
Match disksize
Operation nest
Wildcard *
Nest_under custom
[OUTPUT]
Name http
Match *
Host 192.168.2.3
Port 80
URI /something
This gathers the disk usage with our custom command every hour. The Nest FILTER nests all the gathered JSON data under a custom key:
[0] disksize: [[1737017868.008708965, {}], {"custom"=>{"disk_total"=>"975262680", "disk_used"=>"806364640", "disk_avail"=>"119283656"}}]
Useful links:
• Fluent-Bit documentation: https://docs.fluentbit.io/manual
• Torizon OS example: https://developer.toradex.com/torizon/torizon-platform/device-monitoring-in-torizoncore/

