
Sunday, June 7, 2015

[Outreachy Status Log] First two weeks

This is a summary of what I've been working on during these two weeks.
There is not much content in this post if you are looking for useful information for your project. I am posting more useful information under the Nerd category; check my first Outreachy post to see how the posts are organized.

I read the docs:
* Understanding the userspace V4L2 API: http://linuxtv.org/downloads/v4l-dvb-apis/

* Understanding the internal Linux kernel API to write a V4L2 driver, in the kernel tree:
Core framework: Documentation/video4linux/v4l2-framework.txt
How to implement the V4L2 control API: Documentation/video4linux/v4l2-controls.txt
How to implement input/output: Documentation/video4linux/videobuf
The new I/O interface, videobuf2: https://lwn.net/Articles/447435/

* Kthreads: https://lwn.net/Articles/65178/


I played with the code from v4l-utils: I checked media-ctl and the other tools from the project.
Now I understand much better how the kernel-to-userspace API works.

I set up ccache, icecc and a VM to compile and debug faster. I decided to set up a VM because my computer's GUI was freezing on a kernel Oops and I needed to reboot my PC for each test. These setups took me some time, but now I am saving a bunch of it.

I started reading vivid's code and writing a capture node based on Documentation/video4linux/v4l2-pci-skeleton.c.
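The overall shape of such a capture node follows the skeleton driver: fill a struct video_device with file operations and ioctl operations, then register it. Below is a minimal sketch of that wiring; the `mycap` names are hypothetical placeholders, not the actual code from my branch, and error handling is omitted for brevity.

```c
/* Hypothetical sketch of registering a capture video device,
 * following the pattern of v4l2-pci-skeleton.c. Not my real code. */
static int mycap_querycap(struct file *file, void *priv,
			  struct v4l2_capability *cap)
{
	strlcpy(cap->driver, "mycap", sizeof(cap->driver));
	strlcpy(cap->card, "Virtual Capture", sizeof(cap->card));
	snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:mycap");
	cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
	cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
	return 0;
}

static int mycap_enum_fmt(struct file *file, void *priv,
			  struct v4l2_fmtdesc *f)
{
	/* A single hardcoded format for now */
	if (f->index > 0)
		return -EINVAL;
	f->pixelformat = V4L2_PIX_FMT_RGB24;
	return 0;
}

static const struct v4l2_ioctl_ops mycap_ioctl_ops = {
	.vidioc_querycap	 = mycap_querycap,
	.vidioc_enum_fmt_vid_cap = mycap_enum_fmt,
	/* g_fmt/s_fmt/try_fmt and the streaming ioctls go here too */
};

static const struct v4l2_file_operations mycap_fops = {
	.owner		= THIS_MODULE,
	.open		= v4l2_fh_open,
	.unlocked_ioctl	= video_ioctl2,
};

/* In the probe/init path, assuming dev->v4l2_dev is already registered: */
static int mycap_register(struct mycap_device *dev)
{
	struct video_device *vdev = &dev->vdev;

	strlcpy(vdev->name, "mycap", sizeof(vdev->name));
	vdev->fops	= &mycap_fops;
	vdev->ioctl_ops	= &mycap_ioctl_ops;
	vdev->release	= video_device_release_empty;
	vdev->v4l2_dev	= &dev->v4l2_dev;
	return video_register_device(vdev, VFL_TYPE_GRABBER, -1);
}
```

With just querycap and enum_fmt wired up, v4l2-ctl can already talk to the node, which matches the state described below.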

My first version of the code was not usable. I checked the mandatory functions against the V4L2 API and now I am able to query the capabilities and enumerate the pixel formats.
I am testing with v4l2-ctl from the v4l-utils project and with the yavta tool.

The code must be modular, so I separated the capture entity from the core into another source file. Later we should be able to instantiate as many capture device nodes in the topology as we want.

The image will be generated by an internal kthread, so I wrote the kthread skeleton.

Now I am integrating the videobuf2 framework to populate the buffers with a hardcoded image inside the kthread.
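The videobuf2 integration boils down to implementing a struct vb2_ops: queue_setup tells the framework how many planes and bytes each buffer needs, and buf_queue hands a buffer to the driver so the kthread can fill it and mark it done. A rough sketch of the two central callbacks follows; note that the vb2 callback signatures have changed across kernel versions, so this matches the ~3.x/4.0 API, and all the `mycap_` names and the frame size are placeholders:

```c
/* Hypothetical vb2 callbacks (3.x/4.0-era signatures), not my real code. */
static int mycap_queue_setup(struct vb2_queue *vq,
			     const struct v4l2_format *fmt,
			     unsigned int *nbuffers, unsigned int *nplanes,
			     unsigned int sizes[], void *alloc_ctxs[])
{
	*nplanes = 1;
	sizes[0] = 640 * 480 * 3; /* one RGB24 frame, hardcoded for now */
	return 0;
}

static void mycap_buf_queue(struct vb2_buffer *vb)
{
	/* Put the buffer on a driver-owned list; the kthread later
	 * fills it with the hardcoded image and completes it with
	 * vb2_buffer_done(vb, VB2_BUF_STATE_DONE). */
}

static const struct vb2_ops mycap_qops = {
	.queue_setup = mycap_queue_setup,
	.buf_queue   = mycap_buf_queue,
	/* buf_prepare, start_streaming and stop_streaming also needed */
};
```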

After that I'll add a subdev to simulate the sensor.

You can check my progress in my github branch here. It's not a clean branch with clean patches; I am going to clean them up later.
