Pages

Saturday, June 20, 2015

Linux Kernel: How to start contributing

It is really not that difficult to start contributing a small patch to the Linux kernel. My first patch fixed a missing space between binary operands, (i==j) -> (i == j), which I found by running a script available in the kernel source. I ran something like:
perl scripts/checkpatch.pl -f path_to_some_driver_folder/ | less
That is how I found the coding style problem. But first you need to download and compile the kernel.
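As a toy illustration of the kind of pattern checkpatch.pl flags (this grep is my own sketch, not the real script), you can spot a missing space around == like this:

```shell
# Toy sketch (NOT the real checkpatch.pl): flag a binary "==" with no
# surrounding spaces, the same kind of style problem described above.
src=$(mktemp --suffix=.c)
printf 'int f(int i, int j)\n{\n\treturn (i==j);\n}\n' > "$src"
# Print the offending line with its number, like a style checker would:
grep -n '[a-z)]==[a-z(]' "$src" && echo "style problem found"
```

The real script checks hundreds of rules; running it with -f on a staging driver folder, as shown above, is the usual way to find an easy first fix.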

This is going to be a short post with really useful links.

Things you will need to know to start:
* GIT
* C programming

1) Download, compile and install a kernel compiled by yourself: follow these instructions
Note1: It takes a really long time: check my posts about CCache and Icecc to speed up compilation.
Note2: Consider setting up a virtual machine if you plan to develop something that could break your system.

2) Setup more tools, find a coding style problem and prepare a patch and submit it: follow these instructions
Note1: the above link tells you not to send the patch to the Linux list. You need to check who the maintainers of the file you are modifying are by running the script:
perl scripts/get_maintainer.pl <path_to_your_patch_generated_with_git_format.patch>
Or:
perl scripts/get_maintainer.pl -f <file_that_you_modified>
Or:
git show HEAD | perl scripts/get_maintainer.pl
This will give you a list of names and emails, so you can send your patch to all of them in CC.
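To make the flow concrete, here is a hedged sketch of preparing a patch in a throwaway repository (the file name and commit subject are made up; in the real kernel tree you would commit your fix and feed the result to get_maintainer.pl as shown above):

```shell
# Sketch in a throwaway repo; fix.c and the subject are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "Your Name"
git config user.email "you@example.com"
printf 'int f(int i, int j) { return i == j; }\n' > fix.c
git add fix.c
git commit -q -s -m "staging: fix coding style issue"   # -s adds Signed-off-by
git format-patch -1 HEAD   # writes 0001-staging-fix-coding-style-issue.patch
# In the kernel tree you would now run:
#   perl scripts/get_maintainer.pl 0001-*.patch
# and send the patch with git send-email to the listed addresses.
```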

Note2: Do interleaved/bottom-posting when replying to emails. Don't put your reply before the quoted original message, as Gmail does by default.

3) Join the #kernelnewbies IRC channel on freenode; it's a chat where you can ask newbie questions.
If you don't know how to do it, download the HexChat IRC client (sudo apt-get install hexchat), go to menu->Network List, look for freenode and click on connect. Then wait for a window to show up asking which channels you would like to join and type #kernelnewbies, or type /join #kernelnewbies in the freenode chat window.

How to start working on something more interesting than fixing coding style:

4) Start reading the Linux Device Drivers (LDD3) book; it's free and you can read it online.
You will learn the basics of writing a driver. The book comes with some code to test the drivers it implements, but most of it is not compatible with the latest kernel versions. If you are stuck and cannot compile a driver provided by LDD3, leave me a comment and I can help.

5) Be aware of the Documentation folder inside the Linux kernel tree; it has a lot of docs about the internal API and software architecture. Sometimes it is hard to find which file you should be reading (just ask in #kernelnewbies).

6) Where to go now: Check the http://kernelnewbies.org/ site, it is meant for newbies :)

Check this illustrated guide about Linux Kernel development from another Outreachy intern

I am working on a video4linux driver. If you are interested in learning how to write a driver for your camera, follow my posts :)

Wednesday, June 17, 2015

[How to] Speed up compilation time with Icecc

With Icecc (Icecream) you can use other machines in your local network to compile for you.

If you have a single machine (say a quad-core one), you would usually run something like:

make -j4

This command generates four compilation jobs and distributes them across your CPU cores, compiling the jobs in parallel.

But if you have another machine in your local network, Icecc lets you use the cores of that machine too. If the other machine is dual core, you could run:

make -j6
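A common rule of thumb (my own sketch, not from the post) is one job per core, adding the remote cores by hand when Icecc is in play:

```shell
# Pick the job count from the local core count plus the (assumed)
# number of cores available on remote Icecc machines.
local_jobs=$(nproc)   # cores on this machine
remote_jobs=2         # cores on the other machine (example value)
cmd="make -j$((local_jobs + remote_jobs))"
echo "$cmd"
```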

How does it work?


When you call make -jN, instead of calling the classic GNU GCC, we "trick" make so it calls another "gcc" binary provided by Icecc (by changing the PATH).

The make command generates the jobs and calls the Icecc gcc, which sends the source files to the scheduler; the scheduler forwards the jobs to the remote machines (or to itself, or to the machine that started the compilation).



How to set up the network?


Easy on Ubuntu:

* Run the following commands on every computer in the network:

$ sudo apt-get install icecc

$ export PATH=/usr/lib/icecc/bin:$PATH

Check that the gcc in /usr/lib/icecc is being used:

$ which gcc
/usr/lib/icecc/bin/gcc

Let's say the IP address of the machine you chose to be the scheduler is 192.168.0.34. Edit the file /etc/icecc/icecc.conf and change the following variables (again, on all the machines in the network):

ICECC_NETNAME="icecc_net"
ICECC_ALLOW_REMOTE="yes"
ICECC_SCHEDULER_HOST="192.168.0.34"
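If you want to script that edit on every machine, a sed one-liner does it. The sketch below works on a scratch copy so it is safe to run anywhere; on a real machine you would point it at /etc/icecc/icecc.conf with sudo:

```shell
# Apply the three settings to a scratch copy of icecc.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
ICECC_NETNAME=""
ICECC_ALLOW_REMOTE="no"
ICECC_SCHEDULER_HOST=""
EOF
sed -i \
  -e 's/^ICECC_NETNAME=.*/ICECC_NETNAME="icecc_net"/' \
  -e 's/^ICECC_ALLOW_REMOTE=.*/ICECC_ALLOW_REMOTE="yes"/' \
  -e 's/^ICECC_SCHEDULER_HOST=.*/ICECC_SCHEDULER_HOST="192.168.0.34"/' \
  "$conf"
cat "$conf"
```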

Restart the Icecc daemon:

sudo service iceccd restart

* Run the following command on the scheduler machine (192.168.0.34):

sudo service icecc-scheduler start

How can I know if it works?


Install and run the monitor:

$ sudo apt-get install icecc icecc-monitor

$ icemon -n icecc_net

You should see all machines and an indicator saying that the network is online:


In this case I have 3 machines: the first two have four cores and the last one has just one core.

When I compile something with make -j9 I see the Jobs number growing and the slots being filled.

Done!!!

CCache with Icecc (edited):

To speed up your compilation time even more, you can set up CCache (explained in the last post).

The general idea is: first check in a local cache (using CCache) whether the source files have already been compiled; if not, hand the job to Icecc.

When using CCache, you don't need to add Icecc to the PATH; we use CCACHE_PREFIX instead:

$ export CCACHE_PREFIX=icecc

$ echo $PATH
/usr/lib/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

$ which gcc
/usr/lib/ccache/gcc

Tuesday, June 16, 2015

[How to] Speed up compilation time with CCache

CCache stores the compiled version of your source files in a cache. If you try to compile the same source files again, you will get a cache hit and retrieve the compiled objects from the cache instead of compiling them again.

How does it work?


Instead of calling the GNU GCC compiler directly, you point your PATH to another gcc binary provided by CCache, which checks the cache first and then calls the real GCC if necessary:
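The idea can be sketched with a toy cache keyed by a hash of the source file (an illustration of the mechanism only, not how ccache is implemented internally):

```shell
# Toy content-addressed cache: miss on the first "build", hit on the
# second build of identical source.
cache=$(mktemp -d)
src=$(mktemp --suffix=.c)
echo 'int main(void) { return 0; }' > "$src"
key=$(sha256sum "$src" | cut -d' ' -f1)
for build in 1 2; do
    if [ -f "$cache/$key" ]; then
        echo "build $build: cache hit"
    else
        echo "build $build: cache miss"
        cp "$src" "$cache/$key"   # stand-in for storing the compiled .o
    fi
done
```

Any change to the source changes the hash, so edited files miss the cache and get rebuilt, while untouched files are served from it.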



How to install it?


Easy on Ubuntu:

$ sudo apt-get install ccache

Then change your PATH to point to the CCache gcc version (not the plain gcc):

$ export PATH="/usr/lib/ccache:$PATH"

Check with:

$ which gcc
/usr/lib/ccache/gcc
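The mechanism is plain PATH resolution. Here is a self-contained demonstration using a fake gcc in a throwaway directory (used so the sketch runs even on a machine where /usr/lib/ccache does not exist):

```shell
# Prepend a directory containing a fake "gcc" and watch it win the
# PATH lookup, exactly as the ccache wrapper directory does.
dir=$(mktemp -d)
printf '#!/bin/sh\necho fake-gcc\n' > "$dir/gcc"
chmod +x "$dir/gcc"
export PATH="$dir:$PATH"
which gcc    # now resolves to the wrapper in $dir
```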

Done!!!

How can I know if it works? 


You can re-compile something and check whether CCache is working with the ccache -s command; you should see some cache hits:

You can increase/decrease your cache size with:

$ ccache --max-size=5G


Troubleshooting: Re-compiling the Linux kernel never hits the cache?

Check if the flag CONFIG_LOCALVERSION_AUTO is set in your menuconfig; disable it and try again.
This flag appends the git revision to the kernel version string, which changes a core header file and forces CCache to recompile almost everything.


CCache with Icecc (edited):

If you want to use CCache with Icecc (which I'll explain in another post) to speed up your compilation time even more, use CCACHE_PREFIX=icecc (thanks to Joel Rosdahl, who commented about this):

$ export CCACHE_PREFIX=icecc

NOTE: You don't need to add the Icecc directory /usr/lib/icecc/bin to your PATH

Sunday, June 14, 2015

[Outreachy Status Log] Third week - Basic video pipe with a capture and sensor node

This is a summary of what I've been working on.
There is not much content in this post if you are looking for information useful to your project. I post the more useful information under the Nerd category.
Check my first Outreachy post to see how the posts are organized.


This week I worked on a first pipeline skeleton, as first proposed here.

This is the output of the topology from my code:


For now the sensors generate a simple grey image from a kthread, and it is propagated to the raw capture nodes. The other nodes (debayer, input and scaler) don't have any intelligence yet.
It is possible to view the generated (grey) image with tools such as qv4l2.

The sensor and capture device code still needs improvement.

This week I sent this patch series to my mentor. After my mentor's feedback and the appropriate patch corrections, the next step is implementing a basic debayer, scaler and input intelligence.

You can check the development of this in my GitHub Linux kernel fork, branch vmc/devel/video-pipe

Monday, June 8, 2015

Compiling a single Linux Kernel module inside the Kernel tree

I recently found out a faster way to compile a single module in the Linux Kernel tree, thanks to this post:

What I was doing before took much more time:

Even a plain make -j5, when I had already compiled my kernel once and had just modified one module, was taking me ~5 minutes. So each tiny test I made (like changing the contents of a string) took that long; now it takes a few seconds using the make modules SUBDIRS=path/to/module command \o/

To update the module I just copy the .ko to the right place:

This usually works if you made a simple change in the code. Or you can use SUBDIRS there too:

Sunday, June 7, 2015

[Outreachy Status Log] First two weeks

This is a summary of what I've been working on these two weeks.
There is not much content in this post if you are looking for information useful to your project. I post the more useful information under the Nerd category; check my first Outreachy post to see how the posts are organized.

I read the docs:
*Understanding userspace V4l2 API: http://linuxtv.org/downloads/v4l-dvb-apis/

*Understanding the internal Linux Kernel API to write a v4l2 driver:
In the Linux Kernel Tree:
Core framework: Documentation/video4linux/v4l2-framework.txt
How to implement the control v4l2 API: Documentation/video4linux/v4l2-controls.txt
How to implement Input Output: Documentation/video4linux/videobuf
The new interface to IO: Videobuf2: https://lwn.net/Articles/447435/

*Kthreads: https://lwn.net/Articles/65178/


I played with the code from
I checked the code of media-ctl and other tools from v4l-utils
Now I understand much better how the API between the kernel and userspace works.

I set up CCache, Icecc and a VM to compile and debug faster. I decided to set up a VM because my computer's GUI was freezing on a kernel Oops and I needed to reboot my PC for each test. Those setups took me some time, but now I am saving a bunch of time.

I started reading Vivid's code and writing a capture node based on Documentation/video4linux/v4l2-pci-skeleton.c.

The first code I wrote was not usable. I checked the mandatory functions against the v4l2 API, and now I am able to query the capabilities and enumerate the pixel formats.
I am testing with v4l2-ctl from the v4l-utils project and with the yavta tool.

The code must be modular, so I separated the capture entity from the core into another source file. Later we should be able to instantiate as many capture device nodes in the topology as we want.

The image will be generated from an internal kthread, so I wrote the kthread skeleton.

Now I am integrating the videobuf2 framework to populate the buffers with a hardcoded image inside the kthread.

After that I'll add a subdev to simulate the sensor.

You can check my progress in my GitHub branch here; it's not a clean branch with clean patches, I am going to clean them up later.

Saturday, June 6, 2015

[Outreachy] The Virtual Media Controller in Linux Kernel

If you don't know what the Outreachy program is, you can check my last post.

About my project

My Outreachy project is about a Virtual Media Controller in the Linux Kernel. Thus, I'll be in contact with the Video4Linux (V4L2) API and the Media API.

My mentor is Laurent Pinchart; you can find more information about the project here.

I have two main goals:

1) To provide a virtual media driver with a given topology* that simulates, for example, a real camera, similar to the Vivid driver, generating a fake image internally in the kernel. It can then be used by people who need to implement and test a program that interacts with a media device without possessing the real hardware.

*A topology is an abstraction of how the hardware is organized. Depending on the hardware, it can have several internal functions, for example: a) one or more sensors that capture the image, b) a filter, c) the zoom.
So the kernel can model this as a video pipe:

Sensor->Sepia Filter->2x Zoom->Data Bus

It is not necessarily a linear pipe; it could be like:

                            -> Filter 1 ----\
Real life image -> Sensor                    -> Compose -> Zoom 2X -> Data Bus 1 -> Digital image 1
                            -> Filter 2 ----/
                                         \-----> Data Bus 2 -> Digital image 2


In this example we would be able to retrieve the image after the Zoom and the image directly from Filter 2 at the same time.

2) To provide a dynamically configurable topology API.

In some devices the topology can be highly configurable (we could link the Zoom directly to Filter 1, for example, if we wanted to). But the current V4L2 API only allows a pre-configured link to be enabled or disabled. So the goal is to define this API and make the topology of the driver from 1) configurable from user space.


You can follow how the development is going in my github kernel tree.

If you have any questions, leave a comment or ping me on IRC (my nick is koike) on the freenode, gnome or oftc servers.

Cheers!

Friday, June 5, 2015

The Outreachy Program - May 2015

This is the first post of a series of posts about my Linux Kernel development journey in the context of the Outreachy Program.

What is the Outreachy Program?


Formerly known as OPW (Outreach Program for Women), the Outreachy Program is similar to Google Summer of Code: it is a paid 3-month internship meant to encourage women and other underrepresented groups to participate in the development of free and open source projects.

It was organized by Gnome but is being moved to the Software Freedom Conservancy organization.

Usually there are two editions of the program per year, and the interns earn $5500 total. Each intern is mentored by a veteran developer.

To participate, you don't need to be a student as in GSoC; interns just need to have enough time to work 40 hours per week on their projects, otherwise it is likely that they won't be accepted.

How can you apply?


Usually, there is a list of open source projects on the Outreachy page, for example: Linux Kernel, Mozilla, Gnome, Debian, etc.
You need to choose one of those projects and go to its specific site. This is the link to the Linux Kernel Outreachy Program.

The projects are responsible to select the interns, so it is highly recommended to start sending patches before the application period ends.

In the case of the Linux Kernel, there is a really good tutorial on how to send your first patch. It basically consists of running a script that catches style problems, like a missing empty line between the declaration of variables and the code. You can easily fix an issue like that and send them a patch.

In the Linux Kernel, there is also a task list that you can start with.

What does it have to do with me?


I am one of the interns of the Linux Kernel project!!!! \o/
I am going to explain more about my project in another post.
I was accepted and the internship goes from May 25th to August 25th. (Yes, it has already started.) Thus, I am going to post at least once every two weeks.

How will the posts be organized?


There will be two kinds of posts:

1) Log posts: short posts, made more often, explaining my current work, daily problems and solutions.
These kinds of posts will have the Outreachy label and the Log label.

2) Article posts: more complete posts about a subject, a tutorial or an explanation of how something works, with useful links.
These kinds of posts will have the Outreachy label and the Nerd label.

If you want to follow both, you can follow the Outreachy feed, or you can follow just the one most interesting to you (the Log or Nerd feeds).


If you have any questions, leave a comment or ping me on IRC (my nick is koike) on the freenode, gnome or oftc servers.

Cheers!