LVGL: Adding a custom pixel format (Part 2: Transformation)
Welcome to this second part of the great quest to add a custom format to LVGL 9.x!
Last time, we saw how to define our custom pixel format and how to add a way to blend it using LVGL. At this point, we are able to draw basic objects with our custom pixel format.
This time, we'll see how to patch LVGL to be able to transform our objects. This includes:
Scaling
Rotation
Skewing
So let's dig in and see how to achieve that!
How does LVGL transform objects?
LVGL's method of transforming an object is the following:
Take a snapshot of the object and save it in a draw layer (equivalent to an image, either in ARGB8888 or in your custom pixel format), using blending
Transform that draw buffer using integer math and the pixel-format-specific transform function
Draw the buffer back into the destination layer in the custom pixel format, using blending
Here is a picture to summarize it:

We see that this pipeline heavily relies on the blending mechanisms we covered last time. More precisely, it uses the image blending capabilities, since the transformation happens in a draw buffer rather than directly on a classical object.
As stated in the first step, the object is snapshotted and saved either as ARGB8888 or in the custom pixel format. In this post, we'll cover the case where it uses our custom pixel format.
Using ARGB8888 has the advantage that transformation functions already exist for it, but it then requires image blending from/into the custom pixel format. More on all that in a future post about the draw buffers!
Adding our own
So we are already equipped with all the blending functions we need, as we set them up in the last post.
Now it is time to write our transformation functions! Luckily, our example pixel format is really simple, so we can model it on the ARGB8888 transformation.
The transformation happens in the refr_obj function:
lv_layer_t * new_layer = lv_draw_layer_create(layer,
                                              area_need_alpha ? LV_COLOR_FORMAT_ARGB8888 : LV_COLOR_FORMAT_NATIVE,
                                              &layer_area_act);
lv_obj_redraw(new_layer, obj);

lv_point_t pivot = {
    .x = lv_obj_get_style_transform_pivot_x(obj, 0),
    .y = lv_obj_get_style_transform_pivot_y(obj, 0)
};

if(LV_COORD_IS_PCT(pivot.x)) {
    pivot.x = (LV_COORD_GET_PCT(pivot.x) * lv_area_get_width(&obj->coords)) / 100;
}
if(LV_COORD_IS_PCT(pivot.y)) {
    pivot.y = (LV_COORD_GET_PCT(pivot.y) * lv_area_get_height(&obj->coords)) / 100;
}

lv_draw_image_dsc_t layer_draw_dsc;
lv_draw_image_dsc_init(&layer_draw_dsc);
layer_draw_dsc.pivot.x = obj->coords.x1 + pivot.x - new_layer->buf_area.x1;
layer_draw_dsc.pivot.y = obj->coords.y1 + pivot.y - new_layer->buf_area.y1;

layer_draw_dsc.opa = opa_layered;
layer_draw_dsc.rotation = lv_obj_get_style_transform_rotation(obj, 0);
while(layer_draw_dsc.rotation > 3600) layer_draw_dsc.rotation -= 3600;
while(layer_draw_dsc.rotation < 0) layer_draw_dsc.rotation += 3600;
layer_draw_dsc.scale_x = lv_obj_get_style_transform_scale_x(obj, 0);
layer_draw_dsc.scale_y = lv_obj_get_style_transform_scale_y(obj, 0);
layer_draw_dsc.skew_x = lv_obj_get_style_transform_skew_x(obj, 0);
layer_draw_dsc.skew_y = lv_obj_get_style_transform_skew_y(obj, 0);
layer_draw_dsc.blend_mode = lv_obj_get_style_blend_mode(obj, 0);
layer_draw_dsc.antialias = disp_refr->antialiasing;
layer_draw_dsc.bitmap_mask_src = bitmap_mask_src;
layer_draw_dsc.image_area = obj_draw_size;
layer_draw_dsc.src = new_layer;
lv_draw_layer(layer, &layer_draw_dsc, &layer_area_act);

We see the draw layer creation, then the gathering of the transformation data, and finally the lv_draw_layer call, which requests a draw unit to draw our object when one becomes available. Once a draw unit is ready, the lv_draw_sw_transform function is called. This function handles the transformation data preparation and calls the pixel-format-specific transform function for each line.
In this post, we won't go into the details of how this transformation preparation is handled. To summarize, for each point of the transformed layer it calculates the corresponding position in the source layer, in terms of fixed-point steps. This allows, for example, each pixel of the source buffer to be used twice in the case of a 2x scaling.
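To get a feel for these steps without diving into lv_draw_sw_transform itself, here is a minimal standalone sketch. The step value is an assumption chosen to illustrate a 2x horizontal scale, not a value taken from LVGL internals; the *_ups values carry source coordinates with 8 fractional bits, exactly like in the transform function we'll write below:

#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch only: mimics how the transform loop walks the source
 * buffer in 8.8 fixed point. With a 2x horizontal scale, every destination
 * pixel advances by half a source pixel, so each source pixel is used twice. */
int main(void)
{
    int32_t xs_ups_start = 0;   /* start position, 8.8 fixed point (256 = 1 source pixel) */
    int32_t xs_step = 32768;    /* assumed step for a 2x scale: ((xs_step * x) >> 8) is 128 "ups" per destination pixel */
    int32_t x;
    for(x = 0; x < 8; x++) {
        int32_t xs_ups = xs_ups_start + ((xs_step * x) >> 8);
        int32_t xs_int = xs_ups >> 8;   /* integer source column that will be sampled */
        printf("dest x=%d -> source x=%d\n", (int)x, (int)xs_int);
    }
    return 0;   /* prints source columns 0,0,1,1,2,2,3,3 */
}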
We end up with the following ABGR8888-specific function. It is exactly the same as the ARGB8888 one, but it is shown here as an example:
static void transform_abgr8888(const uint8_t * src, int32_t src_w, int32_t src_h, int32_t src_stride,
                               int32_t xs_ups, int32_t ys_ups, int32_t xs_step, int32_t ys_step,
                               int32_t x_end, uint8_t * dest_buf, bool aa)
{
    int32_t xs_ups_start = xs_ups;
    int32_t ys_ups_start = ys_ups;
    lv_color32_t * dest_c32 = (lv_color32_t *) dest_buf;

    int32_t x;
    for(x = 0; x < x_end; x++) {
        xs_ups = xs_ups_start + ((xs_step * x) >> 8);
        ys_ups = ys_ups_start + ((ys_step * x) >> 8);

        int32_t xs_int = xs_ups >> 8;
        int32_t ys_int = ys_ups >> 8;

        /*Fully out of the image*/
        if(xs_int < 0 || xs_int >= src_w || ys_int < 0 || ys_int >= src_h) {
            ((uint32_t *)dest_buf)[x] = 0x00000000;
            continue;
        }

        /*Get the direction the hor and ver neighbor
         *`fract` will be in range of 0x00..0xFF and `next` (+/-1) indicates the direction*/
        int32_t xs_fract = xs_ups & 0xFF;
        int32_t ys_fract = ys_ups & 0xFF;

        int32_t x_next;
        int32_t y_next;

        if(xs_fract < 0x80) {
            x_next = -1;
            xs_fract = 0x7F - xs_fract;
        }
        else {
            x_next = 1;
            xs_fract = xs_fract - 0x80;
        }
        if(ys_fract < 0x80) {
            y_next = -1;
            ys_fract = 0x7F - ys_fract;
        }
        else {
            y_next = 1;
            ys_fract = ys_fract - 0x80;
        }

        const lv_color32_t * src_c32 = (const lv_color32_t *)(src + ys_int * src_stride + xs_int * 4);
        dest_c32[x] = src_c32[0];

        if(aa &&
           xs_int + x_next >= 0 &&
           xs_int + x_next <= src_w - 1 &&
           ys_int + y_next >= 0 &&
           ys_int + y_next <= src_h - 1) {

            lv_color32_t px_hor = src_c32[x_next];
            lv_color32_t px_ver = *(const lv_color32_t *)((uint8_t *)src_c32 + y_next * src_stride);

            if(px_ver.alpha == 0) {
                dest_c32[x].alpha = (dest_c32[x].alpha * (0xFF - ys_fract)) >> 8;
            }
            else if(!lv_color32_eq(dest_c32[x], px_ver)) {
                if(dest_c32[x].alpha) dest_c32[x].alpha = ((px_ver.alpha * ys_fract) + (dest_c32[x].alpha * (0xFF - ys_fract))) >> 8;
                px_ver.alpha = ys_fract;
                dest_c32[x] = lv_color_mix32(px_ver, dest_c32[x]);
            }

            if(px_hor.alpha == 0) {
                dest_c32[x].alpha = (dest_c32[x].alpha * (0xFF - xs_fract)) >> 8;
            }
            else if(!lv_color32_eq(dest_c32[x], px_hor)) {
                if(dest_c32[x].alpha) dest_c32[x].alpha = ((px_hor.alpha * xs_fract) + (dest_c32[x].alpha * (0xFF - xs_fract))) >> 8;
                px_hor.alpha = xs_fract;
                dest_c32[x] = lv_color_mix32(px_hor, dest_c32[x]);
            }
        }
        /*Partially out of the image*/
        else {
            if((xs_int == 0 && x_next < 0) || (xs_int == src_w - 1 && x_next > 0)) {
                dest_c32[x].alpha = (dest_c32[x].alpha * (0x7F - xs_fract)) >> 7;
            }
            else if((ys_int == 0 && y_next < 0) || (ys_int == src_h - 1 && y_next > 0)) {
                dest_c32[x].alpha = (dest_c32[x].alpha * (0x7F - ys_fract)) >> 7;
            }
        }
    }
}

We'll skip the fixed-point math and the integer calculation magic for this time, but the key part is where the computed steps are used to fetch the pixel from the source buffer:
const lv_color32_t * src_c32 = (const lv_color32_t *)(src + ys_int * src_stride + xs_int * 4);
dest_c32[x] = src_c32[0];

You can see that it finally uses the integer steps we talked about previously to get the pixel from the source buffer and place it in the destination. In our case, we multiply xs_int by 4 because our pixel format uses 4 bytes per pixel.
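For a format with a different pixel size, only this addressing changes. As a purely hypothetical illustration (this helper is not part of the LVGL code, just a sketch of the idea), a 2-byte-per-pixel format such as RGB565 would be sampled like this:

/* Hypothetical sketch: sampling a 2-byte-per-pixel source (e.g. RGB565).
 * Only the per-pixel byte offset changes compared to the 4-byte case above. */
static inline uint16_t sample_rgb565(const uint8_t * src, int32_t src_stride,
                                     int32_t xs_int, int32_t ys_int)
{
    const uint16_t * src_c16 = (const uint16_t *)(src + ys_int * src_stride + xs_int * 2);
    return src_c16[0];
}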
Now it is time to add a call to our transform_abgr8888 function in lv_draw_sw_transform:
case LV_COLOR_FORMAT_ABGR8888:
    transform_abgr8888(src_buf, src_w, src_h, src_stride, xs_ups, ys_ups, xs_step_256, ys_step_256, dest_w, dest_buf, aa);
    break;

Our custom pixel format transformation pipeline is now ready!
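As a quick way to exercise the new path, applying transform styles to any widget will route its rendering through this pipeline. Here is a minimal sketch, assuming a display already configured to use our custom pixel format (as set up in the previous post); the widget and the values are arbitrary:

/* Rotate and scale a button: LVGL snapshots it into a draw layer, calls
 * transform_abgr8888() for each destination line, then blends the result back. */
lv_obj_t * btn = lv_button_create(lv_screen_active());
lv_obj_center(btn);
lv_obj_set_style_transform_pivot_x(btn, lv_pct(50), 0);
lv_obj_set_style_transform_pivot_y(btn, lv_pct(50), 0);
lv_obj_set_style_transform_rotation(btn, 300, 0);   /* 30 degrees, in 0.1 degree units */
lv_obj_set_style_transform_scale_x(btn, 512, 0);    /* 2x, where 256 means 100% */
lv_obj_set_style_transform_scale_y(btn, 512, 0);
lv_obj_set_style_transform_skew_x(btn, 100, 0);     /* a slight horizontal skew */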
Conclusion
We are now equipped to transform our objects in our custom pixel format!
As we saw, this pipeline is not straightforward and we skipped a lot of details in this post to focus on the global mechanism and the basic use case.
The custom pixel format is used throughout the whole pipeline. In our case it does not matter that much, as it uses the same amount of memory as ARGB8888, but it can be extremely useful when dealing with lower-bpp formats (an 8-bit format, for instance, would need only a quarter of the intermediate layer memory)!
Next time, we'll see how to decode an image encoded in our custom pixel format to be able to use it as an image or in an animation!


