This is a summary of what I've been working on.
There isn't much content in this post if you are looking for information useful to your own project; I post more useful technical material under the Nerd category.
Check my first Outreachy post to see how these posts are organized.
This week I worked on a first pipeline skeleton, as originally proposed here.
This is the topology output from my code:
For now, the sensors generate a simple grey image from a kthread, which is propagated to the Raw capture nodes. The other nodes (debayer, input and scaler) don't have any intelligence yet.
The generated (grey) image can be viewed with tools such as qv4l2.
The sensor and capture device code still needs improvement.
This week I sent this patch series to my mentor. After my mentor's feedback and the appropriate corrections to the patches, the next step is implementing basic logic for the debayer, scaler and input nodes.
You can follow the development in my GitHub Linux kernel fork, branch vmc/devel/video-pipe.