Improve gcore and support dumping ELF headers
https://blog.sergiodj.net/posts/improve-gcore-elf-headers/ (2019-05-06)

Back in 2016, when life was simpler, a Fedora GDB user reported a bug (or a feature request, depending on how you interpret it) saying that GDB’s gcore command did not respect the COREFILTER_ELF_HEADERS flag, which instructs it to dump memory pages containing ELF headers. As you may or may not remember, I have already written about the broader topic of revamping GDB’s internal corefile dump algorithm; it’s an interesting read and I recommend it if you don’t know how Linux (or GDB) decides which mappings to dump to a corefile.

Anyway, even though the bug was interesting and had to do with work I’d done before, I couldn’t really work on it at the time, so I decided to put it in the TODO list. Of course, the “TODO list” is actually a crack through which most things fall, never to be seen again, so I was blissfully ignoring this request because I had other major priorities to deal with. That is, until a seemingly unrelated problem forced me to face it once and for all!

What? A regression? Since when?

As the Fedora GDB maintainer, I’m routinely preparing new releases for the Fedora Rawhide distribution, and sometimes for the stable versions of the distro as well. And I try to be very careful when dealing with new releases, because a regression introduced now can come back and bite us (i.e., the Red Hat GDB team) many years in the future, when it’s sometimes too late or too difficult to fix things. So, a mandatory part of every release preparation is to actually run a regression test against the previous release, and make sure that everything is working correctly.

One of these days, some weeks ago, I had finished running the regression check for the release I was preparing when I noticed something strange: a specific, Fedora-only corefile test was FAILing. That’s a no-no, so I started investigating and found that the underlying reason was that, when the corefile was being generated, the build-id note from the executable was not being copied over. Fedora GDB has a local patch whose job is to, given a corefile with a build-id note, locate the corresponding binary that generated it. Without the build-id note, no binary was being located.

Coincidentally or not, at around the same time I started noticing some users reporting very similar build-id issues on freenode’s #gdb channel, and I thought that this bug had the potential to become a big headache for us if nothing was done to fix it right away.

I asked for some help from the team, and we managed to discover that the problem was also happening with upstream gcore, and that it was probably something that binutils was doing, and not GDB. Hmm…

Ah, so it’s ld’s fault. Or is it?

So there I went, trying to confirm that it was binutils’s fault, and not GDB’s. Of course, if I could confirm this, then I could also tell the binutils guys to fix it, which meant less work for us :-).

With a lot of help from Keith Seitz, I was able to bisect the problem and found that it started with the following commit:

commit f6aec96dce1ddbd8961a3aa8a2925db2021719bb
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Tue Feb 27 11:34:20 2018 -0800

	ld: Add --enable-separate-code

This is a commit that touches the linker, which is part of binutils. So that means this is not GDB’s problem, right?!? Hmm. No, unfortunately not.

What the commit above does is simply enable the use of --enable-separate-code (or -z separate-code) by default when linking an ELF program on x86_64 (more on that later). At first glance, this change should not impact corefile generation, and indeed, if you tell the Linux kernel to generate a corefile (for example, by doing sleep 60 & and then hitting C-\), you will notice that the build-id note is included in it! So GDB was still a suspect here. The investigation needed to continue.

What’s with -z separate-code?

The -z separate-code option makes the linker put the ELF file’s code in a segment completely separate from the data segment. This was done to increase the security of generated binaries. Before this option, everything (code and data) was put together in the same memory region. What this means in practice is that, before, you would see something like this when you examined /proc/PID/smaps:

00400000-00401000 r-xp 00000000 fc:01 798593                             /file
Size:                  4 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
VmFlags: rd ex mr mw me dw sd

And now, you will see two memory regions instead, like this:

00400000-00401000 r--p 00000000 fc:01 799548                             /file
Size:                  4 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         4 kB
Private_Dirty:         0 kB
Referenced:            4 kB
Anonymous:             0 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
VmFlags: rd mr mw me dw sd
00401000-00402000 r-xp 00001000 fc:01 799548                             /file
Size:                  4 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:    0
VmFlags: rd ex mr mw me dw sd

A few minor things have changed, but the most important one is the fact that, before, the whole memory region had anonymous data in it, which means that it was considered an anonymous private mapping (anonymous because of the non-zero Anonymous amount of data; private because of the p in the r-xp permission bits). After -z separate-code was made the default, the first memory mapping no longer has Anonymous contents, which means that it is now considered to be a file-backed private mapping instead.

GDB, corefile, and coredump_filter

It is important to mention that, unlike the Linux kernel, GDB doesn’t have all of the necessary information readily available to decide the exact type of a memory mapping, so when I revamped this code back in 2015 I had to create some heuristics to try and determine this information. If you’re curious, take a look at the linux-tdep.c file on GDB’s source tree, specifically at the functions dump_mapping_p and linux_find_memory_regions_full.

When GDB is deciding which memory regions should be dumped into the corefile, it respects the value found in the /proc/PID/coredump_filter file. The default value for this file is 0x33, which, according to core(5), means:

Dump memory pages that are either anonymous private, anonymous
shared, ELF headers or HugeTLB.
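
To make the bit layout concrete, here is a sketch of those filter bits as C constants (the mnemonic names are illustrative, patterned after the COREFILTER_ELF_HEADERS flag mentioned at the beginning; the bit positions come from core(5)):

#define COREFILTER_ANON_PRIVATE    (1 << 0) /* Anonymous private mappings.  */
#define COREFILTER_ANON_SHARED     (1 << 1) /* Anonymous shared mappings.  */
#define COREFILTER_MAPPED_PRIVATE  (1 << 2) /* File-backed private mappings.  */
#define COREFILTER_MAPPED_SHARED   (1 << 3) /* File-backed shared mappings.  */
#define COREFILTER_ELF_HEADERS     (1 << 4) /* ELF headers.  */
#define COREFILTER_HUGETLB_PRIVATE (1 << 5) /* Private huge pages.  */
#define COREFILTER_HUGETLB_SHARED  (1 << 6) /* Shared huge pages.  */

/* The default 0x33 is therefore ANON_PRIVATE | ANON_SHARED
   | ELF_HEADERS | HUGETLB_PRIVATE, matching the quote above.  */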

GDB had already implemented support for dumping almost all of these pages, except for the ELF headers variety. And, as you can probably infer, this means that, before the -z separate-code change, the very first memory mapping of the executable was being dumped, because it was marked as anonymous private. However, after the change, the first mapping (which contains only data, no code) wasn’t being dumped anymore, because it was now considered by GDB to be a file-backed private mapping!

Finally, that is the reason for the difference between corefiles generated by GDB and Linux, and also the reason why the build-id note was not being included in the corefile anymore! You see, the first memory mapping contains not only the program’s data, but also its ELF headers, which in turn contain the build-id information.

gcore, meet ELF headers

The solution was “simple”: I needed to improve the current heuristics and teach GDB how to determine if a mapping contains an ELF header or not. For that, I chose to follow the Linux kernel’s algorithm, which basically checks the first 4 bytes of the mapping and compares them against \177ELF, which is ELF’s magic number. If the comparison succeeds, then we just assume we’re dealing with a mapping that contains an ELF header and dump it.

In all fairness, Linux just dumps the first page (4K) of the mapping, in order to save space. It would be possible to make GDB do the same, but I chose the simpler route and just dumped the whole mapping, which, in most scenarios, shouldn’t be a big problem.

It’s also interesting to mention that GDB will only perform this check if:

  • The heuristic has decided not to dump the mapping so far;
  • The mapping is private;
  • The mapping’s offset is zero; and
  • There is a request to dump mappings with ELF headers (i.e., the ELF-headers bit is set in coredump_filter).

Linux also makes these checks, by the way.
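
To make this concrete, here is a minimal sketch of the check in C (hypothetical function and parameter names; this is not a copy of GDB’s actual linux-tdep.c code):

#include <elf.h>      /* ELFMAG ("\177ELF") and SELFMAG (4).  */
#include <stdbool.h>
#include <string.h>

static bool
should_dump_elf_header (bool dump_p, bool private_p,
                        unsigned long offset, bool filter_elf_headers,
                        const unsigned char first_bytes[SELFMAG])
{
  /* Only bother when the mapping would otherwise be skipped, is
     private, starts at offset zero, and coredump_filter asked for
     ELF headers.  */
  if (dump_p || !private_p || offset != 0 || !filter_elf_headers)
    return dump_p;

  /* Compare the first four bytes against the ELF magic number.  */
  return memcmp (first_bytes, ELFMAG, SELFMAG) == 0;
}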

The patch, finally

I submitted the patch to the mailing list, and it was approved fairly quickly (with a few minor nits).

The reason I’m writing this blog post is that I’m very happy with, and proud of, the whole process. It wasn’t an easy task to investigate the underlying reason for the build-id failures, and it was interesting to come up with a solution that extended the work I did a few years ago. I was also able to close a few bug reports upstream, as well as the one reported against Fedora GDB.

The patch has been pushed, and is also present at the latest version of Fedora GDB for Rawhide. It wasn’t possible to write a self-contained testcase for this problem, so I had to resort to using an external tool (eu-unstrip) in order to guarantee that the build-id note is correctly present in the corefile. But that’s a small detail, of course.

Anyway, I hope this was an interesting (albeit large) read!

Memory mappings, core dumps, GDB and Linux
https://blog.sergiodj.net/posts/linux-memory-mapping/ (2015-04-05)

After spending the last few weeks struggling with this, I decided to write a blog post. First, what is “this” that you are talking about? The answer is: the Linux kernel’s concept of memory mapping. I found it utterly confusing, beyond my expectations, and so I believe that a blog post is the right way to (a) preserve and (b) share this knowledge. So, let’s do it!

First things first

First, I cannot begin this post without a few acknowledgements and “thank you”s. The first goes to Oleg Nesterov (sorry, I could not find his website), a Linux kernel guru who really helped me a lot through the whole task. Another “thank you” goes to Jan Kratochvil, who also provided valuable feedback by commenting on my GDB patch. Now, back to the point.

The task

The task was requested here: GDB needed to respect the /proc/<PID>/coredump_filter file when generating a coredump (i.e., when you use the gcore command).

Currently, GDB has its own coredump mechanism implemented which, despite its limitations and bugs, has been around for quite some time. However (and maybe you don’t know this), the Linux kernel has its own algorithm for generating the corefile of a process. And unfortunately, GDB and Linux were not really following the same standards here…

So, in the end, the task was about synchronizing GDB and Linux. To do that, I first had to decipher the contents of the /proc/<PID>/smaps file.

The /proc/<PID>/smaps file

This special file, generated by the Linux kernel when you read it, contains detailed information about each memory mapping of a certain process. Some of the fields on this file are documented in the proc(5) manpage, but others are missing there (asking for a patch!). Here is an explanation of everything I needed:

  • The first line of each memory mapping has the following format:

    address perms offset dev inode pathname

    The fields here are:

    a) address is the address range, in the process’ address space, that the mapping occupies. This part was already handled by GDB, so I did not have to worry about it.

    b) perms is a set of permissions ((r)ead, (w)rite, e(x)ecute, (s)hared, (p)rivate [COW, copy-on-write]) applied to the memory mapping. GDB was already dealing with the rwx permissions, but I needed to include the p flag as well. I also made GDB ignore the mappings that did not have the r flag active, because it does not make sense to dump something that you cannot read.

    c) offset is the offset into the file, if the mapping is file-backed (see below). GDB already handled this correctly.

    d) dev is the device (major:minor) related to the file, if there is one. GDB already handled this correctly, though I ended up using this field for more things (keep reading).

    e) inode is the inode on the device above. A value of zero means that no inode is associated with the memory mapping. Nothing to do here.

    f) pathname is the file associated with this mapping, if there is one. This is one of the most important fields I had to use, and one of the most complicated to understand completely. GDB now uses this to heuristically identify whether the mapping is anonymous or not.

  • GDB is now also interested in the Anonymous: and AnonHugePages: fields from the smaps file. Those fields represent the amount of anonymous data in the mapping; if GDB finds that this amount is greater than zero, the mapping is anonymous.

  • The last, but perhaps most important, field is the VmFlags: field. It contains a series of two-letter flags that provide very useful information about the mapping: a) sh: the mapping is shared (VM_SHARED); b) dd: this mapping should not be dumped in a corefile (VM_DONTDUMP); c) ht: this is a HugeTLB mapping.

With that in hand, the next task was to determine whether a memory mapping is anonymous or file-backed, private or shared.

Types of memory mappings

There can be four types of memory mappings:

  1. Anonymous private mapping
  2. Anonymous shared mapping
  3. File-backed private mapping
  4. File-backed shared mapping

It should be possible to uniquely identify each mapping based on the information provided by the smaps file; however, you will see that this is not always the case. Below, I will explain how to determine each of the four characteristics that define a mapping.

Anonymous

A mapping is anonymous if one of these conditions apply:

  1. The pathname associated with it is either /dev/zero (deleted), /SYSV%08x (deleted), or <filename> (deleted) (see below).
  2. There is content in the Anonymous: or in the AnonHugePages: fields of the mapping in the smaps file.

A special explanation is needed for the <filename> (deleted) case. It is not always guaranteed that it identifies an anonymous mapping; in fact, it is possible to have the (deleted) part for file-backed mappings as well (say, when you are running a program that uses shared libraries, and those shared libraries have been removed because of an update, for example). However, we are trying to mimic the behavior of the Linux kernel here, which checks to see if a file has no hard links associated with it (and therefore is truly deleted).

Although it may be possible for userspace to do an extensive check (by stat-ing the file, for example), the Linux kernel certainly could give more information about this.
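
For illustration, such a userspace approximation might look like the sketch below (the function name is hypothetical, and it assumes you can reach the mapped file through a path such as a /proc/<PID>/map_files/ entry, which requires root):

#include <stdbool.h>
#include <sys/stat.h>

static bool
file_truly_deleted (const char *path)
{
  struct stat st;

  if (stat (path, &st) != 0)
    return true;  /* Cannot stat it at all; assume it is gone.  */

  /* A file with no remaining hard links is truly deleted.  */
  return st.st_nlink == 0;
}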

File-backed

A mapping is file-backed (i.e., not anonymous) if:

  1. The pathname associated with it contains a <filename>, without the (deleted) part.

As has been explained above, a mapping whose pathname contains the (deleted) string could still be file-backed, but we decide to consider it anonymous.

It is also worth mentioning that a mapping can be simultaneously anonymous and file-backed: this happens when the mapping contains a valid pathname (without the (deleted) part), but also contains Anonymous: or AnonHugePages: contents.

Private

A mapping is considered to be private (i.e., not shared) if:

  1. In the absence of the VmFlags field (in the smaps file), its permission field has the flag p.
  2. If the VmFlags field is present, then the mapping is private if we do not find the sh flag there.

Shared

A mapping is shared (i.e., not private) if:

  1. In the absence of VmFlags in the smaps file, the permission field of the mapping does not have the p flag. Not having this flag actually means VM_MAYSHARE and not necessarily VM_SHARED (which is what we want), but it is the best approximation we have.
  2. If the VmFlags field is present, then the mapping is shared if we find the sh flag there.
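
Putting the four rules together, here is a condensed sketch in C (the struct and names are hypothetical, assuming the relevant smaps fields have already been parsed; this is not GDB’s actual code):

#include <stdbool.h>

struct smaps_entry
{
  bool perms_private;      /* 'p' present in the permission bits.  */
  bool have_vmflags;       /* A VmFlags: line was present.  */
  bool vmflags_shared;     /* "sh" present in VmFlags:.  */
  unsigned long anonymous; /* Anonymous: field, in kB.  */
  unsigned long anon_huge; /* AnonHugePages: field, in kB.  */
  bool anon_pathname;      /* pathname looks anonymous: "/dev/zero
                              (deleted)", "/SYSV%08x (deleted)", or
                              "<filename> (deleted)".  */
};

static bool
mapping_is_anonymous (const struct smaps_entry *e)
{
  return e->anon_pathname || e->anonymous > 0 || e->anon_huge > 0;
}

static bool
mapping_is_shared (const struct smaps_entry *e)
{
  /* Prefer VmFlags: when the kernel provides it; otherwise fall back
     to the 'p' permission bit (a VM_MAYSHARE approximation).  */
  if (e->have_vmflags)
    return e->vmflags_shared;
  return !e->perms_private;
}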

The patch

With all that in mind, I hacked GDB to improve the coredump mechanism for GNU/Linux operating systems. The main function which decides the memory mappings that will or will not be dumped on GNU/Linux is linux_find_memory_regions_full; the Linux kernel obviously uses its own function, vma_dump_size, to do the same thing.

Linux has one advantage: it is a kernel, and therefore has much more knowledge about processes’ internals than a userspace program. For example, inside Linux it is trivial to check whether a file marked as “(deleted)” in the output of the smaps file still has hard links associated with it (and therefore is not really deleted); the same operation in userspace, however, would require root access to inspect the contents of the /proc/<PID>/map_files/ directory.

The case described above, if you remember, is something that impacts the ability to tell whether a mapping is anonymous or not. I am talking to the Linux kernel guys to see if it is possible to export this information directly via the smaps file, instead of having to do the current heuristic.

While doing this work, some strange behaviors were found in the Linux kernel. Oleg is working on them, along with other Linux hackers. From our side, there is still room for improvement on this code. The first thing I can think of is to improve the heuristics for finding anonymous mappings. Another relatively easy thing to do would be to let the user specify a value for coredump_filter on the command line, without editing the /proc file. And of course, keep this code always updated with its counterpart in the Linux kernel.

Upstream discussions and commit

If you are interested, you can see the discussions that happened upstream by going to this link. This is the fourth (and final) submission of the patch; you should be able to find the other submissions in the archive.

The final commit can be found in the official repository.

Respectful Software
https://blog.sergiodj.net/posts/respectful-software/ (2014-10-15)

To what extent should Free Software respect its users?

The question, strange as it may sound, is not only valid but also becoming more and more important these days. If you think that the four freedoms are enough to guarantee that Free Software will respect the user, you are probably oversimplifying. The four freedoms are essential, but they are not sufficient. You need more. I need more. And this is why I think the Free Software movement should have been called the Respectful Software movement.

I know I will probably hear that I am too radical. And I know I will hear it even from those who defend Free Software the way I do. But I need to express this feeling I have, even though I may be wrong about it.

It all began as an innocent comment. I make lots of presentations and talks about Free Software, and, knowing that the word “Free” is ambiguous in English, I started joking that Richard Stallman should have named the movement “Respectful Software”, instead of “Free Software”. If you think about it just a little, you will see that “respect” is a word that brings different interpretations to different people, just as “free” does. It is a subjective word. However, at least it does not have the problem of referring to completely unrelated things such as “price” and “freedom”. Respect is respect, and everybody knows it. What can change (and often does) is what a person considers respectful or not.

(I am obviously not considering the possible ambiguity that may exist in another language with the word “respect”.)

So, back to the software world. I want you to imagine a Free Software. For example, let’s consider one that is used to connect to so-called “social networks” like GNU Social or pump.io. I do not want to use a specific example here; I am more interested in the consequences of a certain decision. Which decision? Keep reading :-).

Now, let’s imagine that this Free Software is just beginning its life, probably in some code repository under the control of its developer(s), but most likely using some proprietary service like GitHub (which is an issue by itself). And probably the developer is thinking: “Which social network should my software support first?”. This is an extremely valid and important question, but sometimes the developer comes up with an answer that may not be satisfactory to its users. This is where the “respect” comes into play.

In our case, this bad answer would be “Facebook”, “Twitter”, “Linkedin”, or any other unethical social network. However, those are exactly the easiest answers for many, many Free Software developers, either because those “vampiric” services are popular among users, or because the developers themselves use them!! By now, you should be able to see what I am getting at. My point, in a simple question, is: “How far should we, Free Software developers, allow users to go and harm themselves and the community?”. Yes, this is not just a matter of self-inflicted restrictions, as when the user chooses to use non-free software to edit a text file, for example. It is, in most cases, a matter of harming the community too. (I have written a post related to this issue a while ago, called “Privacy as a Collective Good”.)

It should be easy to see that it does not matter whether you are using Facebook through a shiny Free Software application on your computer or cellphone. What really matters is that, when doing so, you are basically supporting the use of those unethical social networks, to the point that perhaps some of your friends are also using them because of you. What does it matter if they are using Free Software to access them or not? Is the benefit offered by the Free Software application big enough to eliminate (or even soften) the problems that exist when the user uses an unethical service like Linkedin?

I wonder, though, what is the limit that we should obey. Where should we draw the line and say “I will not pass beyond this point”? Should we just “abandon” the users of those unethical services and social networks, while we lock ourselves in our not-very-safe world? After all, we need to communicate with them in order to bring them to our cause, but it is hard doing so without getting our hands dirty. But that is a discussion to another post, I believe.

Meanwhile, I could give plenty of examples of existing Free Software projects that are doing a disservice to the community by allowing (and even promoting) unethical services or solutions for their users. They are disrespecting their users, sometimes exploiting the fact that many users are not fully aware of the privacy issues that come as a “gift” when you use those services, without spending any kind of effort to teach the users. However, I do not want this post to become a flamewar, so I will not mention any software explicitly. I think it should be quite easy for the reader to find examples out there.

Perhaps this post does not have a conclusion. I myself have not completely made up my mind about the subject, though I am obviously leaning towards what most people would call the “radical” solution. But it is definitely not an easy topic to discuss, or to argue about. Nonetheless, we are closing our eyes to it, and we should not do so. The future of Free Software also depends on what kinds of services we promote, and what kinds of services we actually warn users against. This is my definition of respect, and this is why I think we should develop Free and Respectful Software.

Fedora on an Acer C720P Chromebook
https://blog.sergiodj.net/posts/fedora-on-acer-c720p/ (2014-09-26)

Yes, you are reading correctly: I decided to buy a freaking Chromebook. I really needed a lightweight notebook to carry with me for my daily hacking while waiting at the subway station, and this one seemed to be the best option available when comparing models and prices. To be fair, and before you throw rocks at me, I visited the LibreBoot X60’s website for some time, because I was strongly considering buying one (even considering its weight); however, they did not have it in stock, and I did not want to wait anymore, so…

Anyway, as one might expect, configuring GNU/Linux on notebooks is becoming harder as time goes by, either because of the infamous Secure Boot (anti-)feature, or because they come with more and more devices that demand proprietary crap to be loaded. But fortunately, it is still possible to overcome most of those problems and still get a GNU/Linux distro running.

References

For main reference, I used the following websites:

I also used other references for small problems that I had during the configuration, and I will list them when needed.

Backing up ChromeOS

The first thing you will probably want to do is to make a recovery image of the ChromeOS that comes pre-installed on the machine, in case things go wrong. Unfortunately, to do that you need to have a Google account, otherwise the system will fail to record the image. So, if you want to let Google know that you bought a Chromebook, log in to the system, open Chrome, and go to the special URL chrome://imageburner. You will need a 4 GiB pendrive/sdcard. It should be pretty straightforward to do the recording from there.

Screw the screw

Now comes the hard part. This notebook comes with a write-protect screw. You might be thinking: what is the purpose of this screw?

Well, the thing is: Chromebooks come with their own boot scheme, which unfortunately doesn’t work to boot Linux. However, newer models also offer a “legacy boot” option (SeaBIOS), and this can boot Linux. So far, so good, but…

When you switch to SeaBIOS (details below), the system will complain that it cannot find ChromeOS, and will ask if you want to reinstall the system. This will happen every time you boot the machine, because the system is still entering the default BIOS. In order to activate SeaBIOS, you have to press CTRL-L (Control + L) every time you boot! And this is where the screw comes into play.

If you remove the write-protect screw, you will be able to make the system use SeaBIOS by default, and therefore will not need to worry about pressing CTRL-L every time. Sounds good? Maybe not so much…

The first thing to consider is that you will lose your warranty the moment you open the notebook case. As I was not very concerned about that, I decided to try to remove the screw, and guess what happened? I stripped the screw! I am still not sure why that happened, because I was using the correct screwdriver for the job, but when I tried to remove the screw, it felt like butter and started to “decompose”!

Anyway, after spending many hours trying to figure out a way to remove the screw, I gave up. My intention is to always suspend the system, so I rarely need to press CTRL-L anyway…

Well, that’s all I have to say about this screwed screw. If you decide to try removing it, keep in mind that I cannot help you in any way, and that you are entirely responsible for what happens.

Now, let’s install the system :-).

Enable Developer Mode

You need to enable the Developer Mode in order to be able to enable SeaBIOS. To do that, follow these steps from the Arch[GNU/]Linux wiki page.

I don’t remember if this step works if you haven’t activated ChromeOS (i.e., if you don’t have a Google account associated with the device). In my case, I just created a fake account to be able to proceed.

Accessing the superuser shell inside ChromeOS

Now, you will need to access the superuser (root) shell inside ChromeOS, to enable SeaBIOS. Follow the steps described in the Arch[GNU/]Linux wiki page. For this specific step, you don’t need to login, which is good.

Enabling SeaBIOS

We’re almost there! The last step before you boot your Fedora LiveUSB is to actually enable SeaBIOS. Just go inside your superuser shell (from the previous step) and type:

> crossystem dev_boot_usb=1 dev_boot_legacy=1

And that’s it!

If you managed to successfully remove the write-protect screw, you may also want to enable booting SeaBIOS by default. To do that, there is a guide, again on the Arch[GNU/]Linux wiki. DO NOT DO THAT IF YOU DID NOT REMOVE THE WRITE-PROTECT SCREW!!!!

Booting Fedora

Now, we should finally be able to boot Fedora! Remember, you will have to press CTRL-L after you reboot (if you have not removed the write-protect screw), otherwise the system will just complain and not boot into SeaBIOS. So, press CTRL-L, choose the boot order (you will probably want to boot from USB first, if your Fedora is on a USB stick), choose to boot the live Fedora image, and… bum!! You will probably see a message complaining that there was not enough memory to boot (the message is “Not enough memory to load specified image”).

You can solve that by passing the mem parameter to Linux. So, when GRUB complains that it was unable to load the specified image, it will give you a command prompt (boot:), and you just need to type:

boot: linux mem=1980M

And that’s it, things should work.

Installing the system

I won’t guide you through the installation process; I just want to remind you that you have a 32 GiB SSD drive, so think carefully about how you want to set up the partitions. What I did was to reserve 1 GB for my swap, and take all the rest for the root partition (i.e., I did not create a separate /home partition).

You will also notice that the touchpad does not work (neither does the touchscreen). So you will have to do the installation using a USB mouse for now.

Getting the touchpad to work

I strongly recommend reading this Fedora bug, which is mostly about touchpad/touchscreen support, but covers other interesting topics as well.

Anyway, the bug is still being constantly updated, because the proposed patches to make the touchpad/touchscreen work have not been fully integrated into Linux yet. So, depending on the version of Linux that you are running, you will probably need to run a different version of the scripts being kindly provided in the bug.

As of this writing, I am running Linux 3.16.2-201.fc20, and the script that does the job for me is this one. If you are like me, you will never run a script without looking at what it does, so go there and do it, I will wait :-).

OK, now that you are confident, run the script (as root, of course), and confirm that it actually installs the necessary drivers to make the devices work. In my case, I only got the touchpad working, even though the touchscreen is also covered by this script. However, since I don’t want the touchscreen, I did not investigate this further.

After the installation, reboot your system and at least your touchpad should be working :-). Or kind of…

What happened to me was that I was getting strange behaviors with the touchpad. Sometimes (randomly), its sensitivity became weird, and it was very hard to move the pointer or to click on things. Fortunately, I found the solution in the same bug, in this comment by Yannick Defais. After creating this X11 configuration file, everything worked fine.

Getting suspend to work

Now comes the hard part. My next challenge was to get suspend to work, because (as I said above) I don’t want to poweroff/poweron every time.

My first obvious attempt was to try to suspend using the current configuration that came with Fedora. The notebook actually suspended, but then it resumed 1 second later, and the system froze (i.e., I had to force the shutdown by holding the power button for a few seconds). Hmm, it smelled like this would take some effort, and my nose was right.

After a lot of search (and asking in the bug), I found out about a few Linux flags that I could provide in boot time. To save you time, this is what I have now in my /etc/default/grub file:

GRUB_CMDLINE_LINUX="tpm_tis.force=1 tpm_tis.interrupts=0 ..."

The final ... means that you should keep whatever was there before you included those parameters, of course. Also, after you edit this file, you need to regenerate the GRUB configuration file on /boot. Run the following command as root:

> grub2-mkconfig -o /boot/grub2/grub.cfg

Then, after I rebooted the system, I found that adding those flags alone was still not enough. I saw a bunch of errors on dmesg, which showed me that there was some problem with EHCI and xHCI. After some more research, I found this comment on an Arch[GNU/]Linux forum. Just follow the steps there (i.e., create the necessary files, especially /usr/lib/systemd/system-sleep/cros-sound-suspend.sh), and things should start to get better. But not yet…

Now, you will see that suspend/resume work OK, but when you suspend, the system will still resume after 1 second or so. Basically, this happens because the system is using the touchpad and the touchscreen to determine whether it should resume from suspend or not. So basically what you have to do is to disable those sources of events:

echo TPAD > /proc/acpi/wakeup
echo TSCR > /proc/acpi/wakeup

And voilà! Now everything should work as expected :-). You might want to issue those commands every time you boot the system, in order to get suspend to work every time, of course. To do that, you can create an /etc/rc.d/rc.local file, which gets executed when the system starts:

> cat /etc/rc.d/rc.local
#!/bin/bash
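# Disable the touchpad (TPAD) and touchscreen (TSCR) as ACPI wakeup
# sources, so that the machine does not resume right after suspending.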

suspend_tricks()
{
  echo TPAD > /proc/acpi/wakeup
  echo TSCR > /proc/acpi/wakeup
}

suspend_tricks

exit 0

Don’t forget to make this file executable:

> chmod +x /etc/rc.d/rc.local

Conclusion

Overall, I am happy with the machine. I still haven’t tried installing Linux-libre on it, so I am not sure if it can work without binary blobs and proprietary crap.

I found the keyboard comfortable, and the touchpad OK. The only extra issue I had was with the Canadian/French/whatever keyboard that comes with it, because it lacks some keys that are useful to me, like Page Up/Down, Insert, and a few others. So far, I am working around this issue by using xbindkeys and xvkbd.

I do not recommend this machine if you are not tech-savvy enough to follow the steps listed in this post. If that is the case, then consider buying a machine that easily runs GNU/Linux, because you will feel much more comfortable configuring it!

Reflections of an Activist -- Part 02
https://blog.sergiodj.net/posts/reflexoes-de-um-ativista-parte-2/ (2013-11-14)

I still do not know if I am prepared to face the second part of this “series”, but there is no point in running away either… What I do know is that these reflections may not match reality (or your reality), and that maybe I am exaggerating (or understating) in my observations; in any case, I hope that you, dear reader, can draw some parallels with your own way of seeing the world and, who knows, change something in your region.

Laziness

This point is mutually related to the other two points (which are also mutually related to each other). Of course, everything is connected in this world, even (and especially!) the reasons that lead someone to disconnect from certain moral and ethical values.

I see lazy people all the time. Sometimes I am one of them (however much I try to distance myself from that behavior). But I believe there is a difference between someone who is inherently lazy and someone who gives in to the temptation of laziness because of some other factor. My complaint here is about the first kind of people.

The “test” to find out whether you fit in this group is: when you face a problem that is hard to solve, what is your modus operandi? Look for solutions, or give up? Try it yourself, or ask someone else? Learn from your mistakes, or repeat them ad eternum? If you did not even want to think about this test, then I guess the answer is obvious…

But what does this have to do with activism? Everything. Being an activist means, by definition, having to face difficult and discouraging situations, apathetic and distrustful audiences, skeptical and alienated people. And all of this is absurdly frustrating, especially when you believe in what you are saying and know that the people listening need to understand it too! After all, as I said in another post, privacy (but not only privacy!) is a collective good. Preserving it depends on the understanding of the community being spied upon.

In other words, the companies, organizations, and governments that are fighting for you to have fewer and fewer rights do not sleep on the job. It will not be very nice if we do…

But this point does not apply only to the activists themselves. Obviously, we find (many!) lazy people on the other side, in the audience. It is always good (and necessary) to assume that the people you are talking to are ignorant about the subject, and therefore need at least minimal instruction so that they can make mature and intelligent decisions. However, even after being warned about various facts and about the consequences of their actions, people still prefer to remain in ignorance!! There are several names for this “stubbornness”, but I tend to think that one of the contributing factors is laziness.

Laziness to get up from the chair and look for solutions that respect you and your community. Laziness to keep thinking (that is, to stay “always alert”) about which risks you are actually taking when you use that “social network”. Laziness to change habits. Laziness to fight for your digital rights. In short, laziness.

Prejudice

This is one of the most problematic points. Prejudice is ingrained in people, without exception. And the prejudice against activists, of any kind, is evident.

Being an activist is not just believing in something. Being an activist is, above all, knowing something and wanting to bring that knowledge to people. Obviously, there are several kinds of activism, but when I look at what I do, I see myself more as someone who feels it is his duty to teach people about something that is unknown to most. Although I really do hope that people will believe in the values I try to convey (and who doesn’t?), I believe my main goal is to “enable” society to make conscious decisions about the subjects I try to “teach”.

Some people are afraid or ashamed to tell me that they use Facebook, Twitter, or some non-free software. But I notice that, in most cases, their fear comes from knowing that I do not “like” any of those things, and not from knowing why I do not like them. And in that case, I do not feel anger at or disappointment in the person I am talking to, but rather a need to actually explain why I do not agree with the use of those programs! I know that if I explain, I will in fact be giving the person the tools to decide, on their own, whether they want to keep using them. That is my task, at the end of the day: to allow the technology user to choose, consciously and ethically, what they want and what they do not want. But then prejudice comes in…

When I start talking, it is inevitable to use expressions like “freedom”, “respect”, “ethics”, “community”, “privacy”, “social issues”, and so on. They are the cement with which I build my arguments, and I do not believe that words or expressions by themselves can tell a liberal from a conservative, for example. However, what I see most are people who mistake Free Software activists for communists or socialists. And since conservatism is in fashion these days, sometimes people ignore everything we say because of this idiotic prejudice.

My goal is not to discuss whether being a socialist/communist is good or bad (although I am definitely not a “conservative”, and I find this prejudice absurd). But what must be clear is that Free Software, despite being a political movement, is not a partisan movement. We defend well-defined values, which may or may not have something to do with communist/socialist ideas, but which do not advocate for that political movement. It is also important to mention that, since it is a social movement, it is natural that many ideas and principles defended by Free Software activists are sympathetic to the socialist/communist cause. But that obviously does not make Stallman the new Stalin (despite the similarity of the surnames).

Finally, my request to the community at large is: listen to the message, regardless of the messenger, and think about it, regardless of your party-political orientation. What we fight for does not depend on party, religion, soccer team, or nationality. It depends simply on human beings, on a community that has no borders and no single culture, but that deserves more respect. Unfortunately, though, we are going to have to demand it.

About coherence, Twitter, and the Free Software Foundation
https://blog.sergiodj.net/posts/fsf-twitter-coherence/ (2013-10-16)

The Free Software Foundation has a Twitter account. Surprised? So am I, in a negative way, of course. And I will explain why in this post.

You may not agree with me on everything I write here, and I am honestly expecting some opposition, but I would like to make it crystal clear that my purpose is to raise awareness of the most important “feature” an organization should have: coherence.

The shock

I first learned about the Twitter account on IRC. I was hanging around in the #fsf channel on Freenode, when someone mentioned that “… something has just been posted on FSF’s Twitter!” (yes, it was a happy announcement, not a complaint). I thought it was a joke, but before laughing I decided to confirm. And to my deepest sorrow, I was wrong. The Free Software Foundation has a Twitter account. The implications of this are mostly bad not only for the Foundation itself, but also for us, Free Software users and advocates.

Twitter uses Free Software to run its services. So does Facebook, and I would even bet that Microsoft runs some GNU/Linux machines serving intranet pages… But the thing is not about what a web service uses. It is about endorsement. And I will explain.

Free ads, anyone?

I remember having this crazy thought some years ago, when I saw some small company in Brazil putting the Facebook logo in their product’s box. What surprised me was that the Facebook logo was actually bigger than the company’s logo! What the heck?!?! This is “Marketing 101”: you are drawing attention to Facebook, not to your company who actually made the product. And from that moment on, every time I see Coca Cola putting a “Find us on http://facebook.com/cocacola” (don’t know if the URL is valid, it’s just an example) I have this strange feeling of how an internet company can twist the rules of marketing and get free ads everywhere…

My point is simple: when a company uses a web service, it is endorsing the use of this same web service, even if in an indirect way. And the same applies to organizations, or foundations, for that matter. So the question I had in my mind when I saw FSF’s Twitter account was: do we really want to endorse Twitter? So I sent them an e-mail…

Talking to the FSF - First message

I have exchanged some interesting messages with Kyra, FSF’s Campaign Organizer, and with John Sullivan, FSF’s Executive Director. I will not post the messages here because I don’t have their permission to do so, but I will try to summarize what we discussed, and the outcomes.

My first message basically requested some clarifications. I had read this interesting page about the presence of the FSF on Twitter, and expressed my disagreement with the arguments used there.

They explicitly say that Twitter uses nonfree JavaScript, and suggest that the reader use a free client to access it. Yet, they still close their eyes to the fact that a big part of the Twitter community uses it through the browser, or through some proprietary application.

They also acknowledge that Twitter accounts have privacy issues. This is obvious for anyone interested in privacy, and the FSF even provides a link to an interesting story about subpoenas during the Occupy Wall Street movement.

Nevertheless, the FSF still thinks it’s OK to have a Twitter account, because it uses Twitter via a bridge which connects FSF’s StatusNet instance to Twitter. Therefore, in their vision, they are not really using Twitter (at least, they are not using the proprietary JavaScript), and well, let the bridge do its job…

This is nonsense. Again: when a foundation uses a web service, it is endorsing it, even if indirectly! And that was the main argument I have used when I wrote to them. Let’s see how they replied…

FSF answers

The answer I got to my first message was not very good (very weak arguments), so I won’t even bother talking about it here. I had to send another message to make it clear that I was interested in real answers.

After the second reply, it became clear to me that the main goal of the FSF is to reach as many people as they can and pass along the message of software user freedom. I have the impression that the means they use for that don’t really matter, as long as it is not Facebook (more on that later). So if it takes using a web service that disrespects privacy and uses nonfree JavaScript, so be it.

It also seems to me that the FSF believes in an illusion created by themselves. In some messages, they said that they would try harder to let people know that using Twitter is not the solution, but part of the problem (the irony being that they would do that using Twitter). However, sometimes I look at FSF’s Twitter account, and so far nothing has been posted about this topic. Regular people just don’t know that there are alternatives to Twitter.

I will take the liberty to tell a little story now. I told the same story to them, to no avail. Let’s imagine the following scenario: John has just heard about Free Software and is beginning to study about it. He does not have a Twitter account, but one of the first things he finds when he looks for Free Software on the web is FSF’s Twitter. So, he thinks: “Hey, I would like to receive news about Free Software, and it’s just a Twitter account away! Neat!”. Then, he creates a Twitter account and starts following FSF there.

Can you imagine this happening in the real world? I definitely can.

The FSF is also mistaken when they think that they should go to Twitter in order to reach people. I wrote them, and I will say it again here, that I think we should create ways to reach those users “indirectly” (which, as it turns out, would be more direct!), trying to promote events, conferences, talks, face-to-face gatherings, etc. The LibrePlanet project, for example, is a great way of doing this job through local communities, and the FSF should pay a lot more attention to it in my opinion! These are “offline” alternatives, and I confess I think we should discuss the “online” ones with extra care, because we are in such a sad situation regarding the Internet now that I don’t even know where to start…

And last, but definitely not least, the FSF is being incoherent. When it says that “it is OK to use Twitter through a bridge in a StatusNet instance”, then it should also be coherent and do the same thing for Facebook. One can use Facebook through bridges connecting privacy-friendly services such as Diaspora and Friendica (the fact that Diaspora itself has a Facebook account for the project is a topic I won’t even start to discuss). And through those bridges, the FSF will be able to reach much more people than through Twitter.

I am not, in any way, comparing Twitter and Facebook. I am very much aware that Facebook has its own set of problems, which are bigger and worse than Twitter’s (for the most part). But last time I checked, we were not trying to find the better of the two. They are both bad in their own ways, and the FSF should not be using either of them!

Conclusion

My conversation with the FSF ended after a few more messages. It was clear to me that they would not change anything (despite their promises to raise awareness of alternatives to Twitter, as I said above), and I don’t believe in endless discussions about a topic, so I decided to step back. Now, this post is the only thing I can do to try to get people to know and think about this subject. It may seem a small problem to solve, and I know that the Free Software community must stand together in order to promote the ideas we share and appreciate, but that is precisely why I am writing this.

The Free Software movement was founded on top of ideas and coherence. In order to be successful, we must remain coherent to what we believe. This is not an option, there is no alternative. If we don’t defend our own beliefs, no one will.

Reflections of an Activist -- Part 01
https://blog.sergiodj.net/posts/reflexoes-de-um-ativista-parte-1/ (2013-09-23)

This past weekend, on September 20 and 21 (Friday and Saturday, respectively), two Free Software events took place at UNICAMP. One of them, Upstream, was a “test event” that I helped organize together with Cascardo and Leonardo Garcia, both from LTC/IBM. The other, Software Freedom Day (SFD), I organized on behalf of LibrePlanet São Paulo. During both events (and especially during the SFD) I kept thinking and reflecting a lot about various subjects related (or not) to Free Software. So I decided to take the opportunity and write a bit about those opinions.

First, a brief report on the two events. I was only partially happy with the results we got from Upstream. I think the quality of the speakers was great, and the discussions were at a very good level. However, the workshops left something to be desired. From the little I have thought about it, I came to the conclusion that we lacked the organization to define the subjects to be covered and, above all, the best way to cover them. I accept my share of the blame for this; after all, I tried to help organize the toolchain workshop, and it did not turn out the way we expected. Problems with the venue’s infrastructure also hurt the final result. But, in general, and considering that this was the first edition of the event, I think we did reasonably well. We certainly already have plenty of things to think about and improve for the next edition!

As for the SFD, although several very good people took part in the event, my initial (and strong) impression was that getting society to take an interest in (or at least listen to, though the two concepts are intrinsically linked) subjects that are of utmost importance for maintaining (or, in this case, restoring) a State that respects it is harder than I thought. And that is also the first reflection of this post.

Indignation vs. Ignorance

There is a very big conflict going on inside people. It is probably not new, but in any case it exists and needs to be resolved. The conflict, as I see it, can be summarized as follows: “up to what point do I want to feel indignation about a subject, so that I do not necessarily have to take any action about it?”. In other words, people voluntarily choose to remain partially ignorant, so that they do not feel obliged to take a position on a given problem that affects them.

Take the example of Facebook. Someone who has an account there (i.e., “almost everybody”) prefers to remain ignorant about the site’s terms of service and privacy policy. I am not even getting into clandestine spying operations; I am talking about the texts available on Facebook’s own site that explain (perhaps not very clearly, but that is another problem) what the site does and does not do with your data. It is a choice. It is easier to just use the site, share funny pictures with your thousand “friends”, and not look at an issue that should be much more important than any “like” that could be given.

I am not a sociologist, and I am far from being able to give academic opinions on this subject, but I have the impression that what is happening is a kind of “social retardation” in most citizens of this planet. It is something of a paradox that this behavior is exacerbated through a “social network”, which disguises itself as a facilitator of communication between individuals in order to carry out the ultimate function of a company: making money. It is important to stress that I am not against “making money”, but I am against many of the means used to achieve that goal.

In the end, the product is us, or our privacy. And when I say “us” instead of “them”, it is because I have made another reflection…

Privacy is a collective “good”

It may sound paradoxical at first, but stop and think about it for a moment. Privacy is indeed an individual right, but when you choose not to have it, you are making that choice on behalf of everyone who communicates with you. After all, if you do not care whether someone is reading your messages, then any kind of communication that reaches you can and will be read. And if that communication comes from someone who values their own privacy, it will not make any difference: the message will be read anyway, because you chose that.

I am used to hearing people say that they are not important enough to draw the interest of some government that would want to spy on them. “Therefore”, people say, “I do not need to worry”. Well, I think this argument in no way invalidates the fact that protecting one’s own privacy is important. It does not matter how public someone is; if they do not value their privacy, they are giving up something that directly or indirectly affects many people.

My point here is simple. Do your part and protect your privacy. Nobody will do it for you, but everybody needs to (and can) do their respective part. It is a joint effort, one that depends on everyone’s cooperation. If someone close to you does not care, you will probably be harmed.

Report: FAD SP 2013
https://blog.sergiodj.net/posts/fad-sp-2013/ (2013-06-10)

I have owed this post to my friend Leonardo Vaz for a week now! Sorry, Leo :-).

I will try to give a (brief?) report on the Fedora Activity Day (or simply FAD), which took place in São Paulo on June 1, 2013, better known as the Saturday before last :-). If you want to see the event’s organization page (in English), click this link here.

Arriving in Sampa

Well, since I am a rookie, inexperienced former Fedora ambassador who does nothing with his life (unlike several former colleagues who have participated for years as ambassadors, contributing solidly to the common good and never dropping the ball), I decided to bring the Fedora DVDs I still had so that Leo and Itamar (and whoever else was around!) could take charge of redistributing them before they passed their “expiration date”. I left Campinas early and, with São Paulo free of traffic and other problems, managed to arrive at the Red Hat office a little past 9 a.m.

I met (and recognized!) some people there, among them co-workers from the company, Fedora ambassadors/contributors, and enthusiasts who were there to learn more and see what the event was about. It was certainly a productive afternoon/evening in terms of personal contacts!

Talks

After a delayed start to the event, Leo began by giving a talk about the Fedora project (and its sub-projects, such as the ambassadors, for example). Even though a good part (if not all!) of those present were already part of the project in some way, the talk was still a nice moment for some discussions and reflections to happen. I consider that most of the “cream” of the community was in that room (with obvious exceptions such as Fábio Olivé, Amador Pahim, and other people whose names I will not keep listing because I am too lazy to think of them all!). So I think Leo’s plan (which is to revitalize the Fedora community in Brazil, especially the ambassadors) got off on the right foot (twice over, if that is even possible!).

The initial idea was for each talk to last one hour, but of course, with so much to cover, Leo’s talk lasted much longer than that! In the end, when the talk was over, it was already lunchtime :-). As one would expect, the chat continued in the kitchen, and it was there that I got to know the people present a little better. It was really nice :-).

Well, with batteries recharged, it was time for the second round of talks! Leo asked me to present a bit of my experience with GDB, both in dealing with the upstream community and in focusing on feature development for Fedora (or for Red Hat Enterprise (GNU/)Linux). I had not prepared any slides, so I went in armed with nothing but nerve and courage to try to have a chat with the crowd ;-). Here is a photo taken during the talk (note the speaker’s pose, poise, and elegance):

GDB presentation

I think I managed to give an idea of what my day-to-day is like, working on GDB and navigating between the upstream and corporate seas. Some people asked questions (Maurício Teixeira even asked technical ones!), and fortunately my talk lasted much less than Leo’s! I certainly did not have as much material to cover :-P.

The last activity of the day was a hands-on session that Itamar gave on RPM packaging. It was nice, and I think it gave people a sense that packaging for Fedora is not rocket science. By the way, if you are interested in learning more, I suggest you take a look at the wiki page that teaches the basics, and do not feel embarrassed to send your questions to the Fedora development mailing lists!

After this live how-to, and considering the late hour (past 7 p.m.) and everyone’s tiredness, we decided to wrap up the event. Actually, we still spent quite a while discussing several important aspects of the community, the problems experienced (yes, there are problems, unless you live in an enchanted world or are not involved enough to notice them; in that case, just ask someone to translate what is going on and maybe you will understand), and the possible solutions. I ended up leaving Sampa at almost 8:30 p.m., but I thought it was well worth having gone!

Conclusions

My personal conclusion is that I really needed to go to events and meet new people! I find this really nice; it is fuel for doing more things and having more ideas.

On the community side, the conclusion is that Leo is gradually managing to change the mentality of Fedora Brazil. I do not regret taking a break from the ambassadors sub-project, and I am really enjoying watching Leo & co.’s actions to change things. They have my full support!

Acknowledgements

This event certainly would not have happened without the tireless Leonardo Vaz. He deserves all the thanks and all the admiration of the (inter)national Fedora community for it, without a doubt. If you are reading this post, have any connection with Fedora, and are going to FISL this year, buy him a beer (or juice!), because he deserves it.

Também queria agradecer ao pessoal que foi ao evento. É sempre bom ver gente que se preocupa de verdade em melhorar algo, que não fecha os olhos para os problemas que estão acontecendo, e principalmente que se dispõe a aprender algo novo. Foi gratificante ter conhecido pessoas como o Germán, um astrofísico argentino que mantém dois pacotes em Python no Fedora sem querer nada em troca! Ou tipo o Hugo Cisneiros, envolvido no mundo GNU/Linux há tanto tempo quanto aquele cabelo dele levou pra crescer :-P.

E vida longa ao Software Livre!

]]>
<![CDATA[GDB and SystemTap integration improving linker-debugger interface]]> https://blog.sergiodj.net/posts/gdb-stap-linker-debugger/ 2013-05-28T00:00:00-05:00 2013-05-28T00:00:00-05:00 It is really nice to see something you did in a project influence future features and developments. I always feel happy and proud when I notice such scenarios happening, and this time was no different. Gary Benson, a colleague at Red Hat who also works on the GDB team, has implemented a way of improving the interface between the linker and the debugger, and one of the things he used to achieve this is the GDB <-> SystemTap integration that I implemented with Tom Tromey 2 years ago. Neat!

The problem

You can read a detailed description of the problem in the message Gary sent to the gdb-patches mailing list, but to summarize: GDB needs to interface with the linker in order to identify which shared libraries were loaded during the inferior's (i.e., the program being debugged) life.

Nowadays, what GDB does is to put a breakpoint in _dl_debug_state, which is an empty function called by the linker every time a shared library is loaded (the linker calls it twice, once before modifying the list of loaded shlibs, and once after). But GDB has no way to know what has changed in the list of loaded shlibs, and therefore it needs to load the entire list every time something happens. You can imagine how bad this is for performance…
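You can actually watch this old mechanism in action through GDB's stop-on-solib-events setting. The session below is only a sketch (./some_program stands for any dynamically linked binary you have around, and the exact message wording varies between GDB versions):

$ gdb -q ./some_program
(gdb) set stop-on-solib-events 1
(gdb) run
Stopped due to shared library event
(gdb) continue
Continuing.
Stopped due to shared library event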

The solution

What Gary did was to place SDT probes at strategic points in the linker, so that GDB can use them when checking for changes in the list of loaded shlibs. It improves performance a lot, because now GDB doesn't need to stop twice every time a shlib is loaded (it only needs to do that when stop-on-solib-events is set); it just stops at the right probe, which reports the address of the link-map entry of the first newly added library. This means GDB also doesn't need to walk through the whole list of shlibs to identify what has changed: you get that for free by examining the probe's argument.
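By the way, if you are curious whether the glibc on your system already ships these probes, you can ask GDB itself via the info probes command. This is a sketch from memory; the exact probe names, addresses, and the set of probes shown depend on your glibc build:

$ gdb -q /bin/true
(gdb) start
(gdb) info probes stap rtld
Provider Name            Where              Semaphore Object
rtld     init_complete   0x...                        /lib64/ld-linux-x86-64.so.2
rtld     map_complete    0x...                        /lib64/ld-linux-x86-64.so.2
rtld     reloc_complete  0x...                        /lib64/ld-linux-x86-64.so.2
rtld     unmap_complete  0x...                        /lib64/ld-linux-x86-64.so.2
...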

Gary also mentions a discrepancy that happened on Solaris libc, which has also been solved by his patch.

And now, the most impressive part: the numbers! Take a look at this table, which shows the huge performance improvement when using lots of shlibs (times are in seconds):

Number of shlibs   128   256   512   1024   2048   4096
Old interface        0     1     4     12     47    185
New interface        0     0     2      4     10     36

Impressive, isn’t it?

Conclusion

This is one of the things I like most about Free Software projects: the possibility of extending and improving things by building on what others did before. When I hacked GDB to implement the integration between it and SystemTap, I had absolutely no idea that this could be used to improve the interface between the linker and the debugger (though I am almost sure Tom was already thinking ahead!). And I can say it is a pleasure, and I feel proud, when I see such things happening. It just makes me more and more certain that Free Software is the way to go :-).

]]>
<![CDATA[So long, Ambassadors...]]> https://blog.sergiodj.net/posts/so-long-ambassadors/ 2013-05-16T00:00:00-05:00 2013-05-16T00:00:00-05:00 No, I am not leaving the Fedora Project, I am just leaving (or taking a break from, depending on how you look at it) its Ambassadors program. I am still the co-maintainer of the GDB package, and will keep contributing to the development of the distribution, since it is also my job. However, after a few months trying to become more involved with the Fedora community (specifically with the Brazilian/LATAM community), I became so disappointed that the only logical action for me now is to step back.

My brief history

I joined the Ambassadors program in October 2012. After having used the system heavily for almost 3 years, I decided that it was about time to pay something back to the community too. Since I live in Brazil, I joined the Brazilian team of Ambassadors (which meant that I was also part of the Latin America team). Thanks to my friend Leonardo Vaz (from Red Hat), I talked to Daniel Bruno, who then became responsible for “mentoring” me.

The Brazilian community was (and still is) very inactive (compared to others, and to itself a few years ago), but I was very excited and decided to try to revive it. The first task I assigned myself was to regain control of the Brazilian and LATAM domains.

The domains

Alejandro Perez, a very nice guy from Panamá responsible for LATAM's money, asked me to talk to Rodrigo Padula, an inactive Fedora Ambassador from Brazil, about the domains. Padula had been a very active member of the Brazilian community since 2006, if I'm not mistaken, but for reasons beyond my knowledge he has been inactive in the Fedora community for quite some time now (he's still very active in the Mozilla community, however). And he owns both domains.

Alejandro was worried because the LATAM domain had suffered some sort of outage for a few days, which is obviously bad for the project. He was also concerned (and I totally agreed with him on this) that those domains shouldn't be owned by one person (rather, they should be registered on behalf of the Fedora Project or, ultimately, Red Hat), especially if this person is now inactive.

To make a long story short, I spent more than a month acting as the go-between and talking to both guys about this issue. Padula initially said he could transfer the domains without problem, but then changed his mind and said he wouldn't do it. On the other side, Alejandro was getting upset because Padula did not want to make the transfer, and the LATAM community was pressuring him. In the end, I gave up entirely, and the LATAM guys registered yet another domain, but right now they are still using the old one. Yes, a mess.

Working with LATAM

Anyway, after this episode, and after witnessing how active the LATAM community was in contrast with the Brazilian community, I decided to work directly with them. I wanted to do something, and I was eager to start working as a real ambassador, spreading the word about Fedora everywhere. And my friends from Panamá, Argentina, México, Venezuela, etc., seemed like the right people to work with.

So I started attending the weekly meetings on #fedora-latam, on Freenode, every Wednesday night. It is a well-organized meeting (run by Alejandro), whose main goal is to vote on tickets from LATAM ambassadors (including Brazilians). Tickets are basically requests made through a Trac instance, and are used to ask for swag, media, travel sponsorship, etc. The Fedora Project has a budget, and the LATAM region gets a fraction of it for annual expenses, so our job as ambassadors was to vote on those tickets and decide whether they deserved to be approved or not, according to some rules inside the project.

Keep in mind: we are dealing with money here. It's neither yours nor mine, but it's still money that should be used to promote a project that embraces open source initiatives (unfortunately, I cannot say Fedora is Free Software, but that is a topic for another post).

So, after some weeks working with the LATAM guys, I became the default owner of Trac tickets from Brazilian ambassadors. And a few more weeks down the road, Alejandro asked me to produce media (Fedora DVDs) and be responsible for distributing them in Brazil. I spent a lot of time ordering the media (I had to travel to São Paulo in order to make sure everything was OK), and every time an ambassador requests Fedora DVDs I go through a series of steps (link in pt_BR) to guarantee that she gets her media and I get my reimbursement.

I also like giving talks and presentations about the project, and so I've attended some events (or organized them) just to be able to do that. I have posted some reports about them on this blog; you can find them in the archives (if you can read pt_BR).

So, enough self-promotion: why am I leaving the Ambassadors program after all?

Disappointment

A few things started to happen:

  • During the weekly LATAM meetings, it bothered me to see that tickets were being approved without any kind of serious discussion. Everyone (including myself!) was just giving “+1” to everything!
  • FISL, the biggest open source (no, it is not about Free Software!!) event in LATAM, is happening in July. Suddenly, new Brazilian ambassadors were popping out of nowhere, and inactive ambassadors were pretending to do something.
  • As a consequence, we received 9 sponsorship requests in our Trac. Some from active people, some not.

Something that I should have noticed before became crystal clear to me: some people are there just to take advantage of the project for themselves. They are not interested in the project, in the philosophy (yes, you can laugh at my face now…), in promoting its ideals, etc. They just want a free lunch. And they get it…

During the last meeting I attended, two weeks ago, we were going to vote on the FISL tickets. A few days before the meeting, I sent the following message to the LATAM Ambassadors list:

Hi there,

This message is just to let you know that we will be discussing several FISL tickets in our next meeting, May 8th. You can take a look at the meeting agenda by going to:

https://fedorahosted.org/fedora-latam/report/9

I would like to ask everyone to read the requests and make your decision based on merits, please. In my opinion, only active ambassadors should receive the honor of being sponsored by Fedora to go to FISL14. Let’s not spend money unnecessarily, so try to avoid the “+1” wave when voting for the tickets.

Thanks a lot,

–Sergio.

As I said, some tickets were filed by inactive ambassadors, and I wanted us to at least discuss the matter with them, showing that we were not happy with their conduct. It is one thing when you have personal problems and have to step away from the project for a while; it is a different thing entirely when you disappear without saying a word and then come back to request travel sponsorship.

We began the meeting by discussing tickets filed by active members, and approving them without thinking much about it. However, eventually we got to the problematic ones. There is this specific guy, whose name I will not mention here, who had been notably absent since I started in the project, and I felt the need to point that out. I told him I hadn't seen him around in quite a while, and explained that there were many ambassadors actually doing things for Fedora. He's a long-term contributor to the project, as he himself told me in a not-so-friendly tone during the meeting. But that was not the subject of the discussion, and while he kept saying how hard he had worked for the project in the last 5 years, or how much he had done for this or that, I remained silent and began to think: what the hell am I doing? Why am I wasting my Wednesday night trying to convince a group that someone maybe doesn't deserve the credit he's asking for? Well, the only reasonable answer was: because I feel it is the right thing to do. But nobody said a word during this discussion, and I started to feel something else. I felt that people were not interested in evaluating how much this guy (or anybody else, for that matter) really did for the project! And the feeling was corroborated when someone else said: “Hey, let's just approve the ticket now, we can continue the discussion later”. WHAT???? Let me see if I get it: we are here to discuss, reach a consensus, and vote. But you want to approve first, maybe discuss later, and fuck the consensus. Well…

I left before the end of the meeting, but I still got to see this behaviour explained by some people: there was enough money to approve all the tickets, so the meeting was just a formality needed to justify the expenses later. At that point I was fully convinced that I did not belong there.

Not my place

If you are part of a team and you disagree with its members, I believe you have two choices most of the time: you can either (a) discuss with them, try to understand their reasons for being different, try to explain yours, and see what you can do to overcome this, or (b) leave. Sometimes I choose one, sometimes the other. This is the time for (b). I don't want to spend more time and energy on something that doesn't work the way I think it should. I don't feel motivated to fight against the tide, because I am not that strong and the tide keeps getting bigger and bigger. And I also don't want to stop people from doing what they think is right, honestly. At the end of the day, I still want to believe that everyone has a conscience and knows what's right…

But I am not going to cross my arms and sit back. Some friends and I decided to create our own group, called LibrePlanet São Paulo (link in pt_BR), and focus on the really important thing: Free Software. I really hope we can make a difference in our local community, and we have already started on the right foot: we organized the Document Freedom Day in our city this year!

As for Fedora, as I said, I still intend to continue contributing to it. I’m still subscribed to the fedora-devel mailing list, and I still follow the project’s decisions, partly because it is part of my job, partly because I strongly believe you have to give back what you take for free – as in freedom – from the community. I also have some DVDs and I intend to distribute them. But my time as a Fedora Ambassador is coming to an end. It was a good experience, I met good people, had a great time doing talks and presentations, and most of all, did what I felt right at the right time.

So, as Douglas Adams said, “…thanks for all the fish!”.

]]>
<![CDATA[Document Freedom Day 2013 in Campinas -- São Paulo -- Brazil]]> https://blog.sergiodj.net/posts/dfd-2013-campinas/ 2013-04-12T00:00:00-05:00 2013-04-12T00:00:00-05:00 Hi, there! This is the report of the Document Freedom Day event that took place in Campinas, São Paulo state, Brazil. I will talk a little bit about how we (keep reading to find out who “we” are!) organized it, and the conclusions that can be drawn to help with the next edition.

Organization

The DFD (or Document Freedom Day) 2013 in Campinas was organized by the LibrePlanet São Paulo (link in pt_BR) group. If you follow this blog, and if you read Portuguese, then you have probably read the announcement of the group that I made last year. If you haven't: LibrePlanet São Paulo is part of the LibrePlanet project (sponsored by the Free Software Foundation), and "… is a global network of free software activists and teams working together to help further the ideals of software freedom by advocating and contributing to free software.".

DFD 2013 was an important event to us because it was the first serious event that we organized as a group. Despite some mistakes, I believe we did fine and were able to learn some great lessons for the next events we plan to do. By the way, if you want to see the official page we used to promote the event (and organize it too), take a look here. The page is in pt_BR.

Basically, we should have: (a) focused more on defining the venue as soon as possible, because that would have made it possible to (b) start sending announcements about the event earlier. We also should have contacted the Document Freedom organization and asked for swag and banners earlier, because when we did, it was too late for the shipment to arrive in time. And last but not least, we should really have taken pictures!! Unfortunately, I have absolutely no pictures to post here, so you will just have to take my word for it…

But well, nothing is perfect, and hey, the event happened! So let's talk about it :-).

The Event

DFD 2013 occurred on Wednesday, March 27th. After some discussion, we decided to schedule the event from 13h (1 p.m.) to 17h (5 p.m.), with 4 presentations of approximately 50 minutes each. The venue chosen was CCUEC, the Center of Computing at the University of Campinas, UNICAMP. This center has some great people working there who have been involved with Free Software since the beginning of the movement, particularly Rubens Queiroz de Almeida, a very nice guy (very famous in the Brazilian Free Software scene) who helped us a lot with the organization of this event.

We understand that holding the event on a Wednesday afternoon made it very hard for most people to attend, and that is probably the main reason for the low attendance: only 8 people in the audience. I have to say I was a little frustrated at first, but hey, what really matters is that we spread the word about Free Software to the 8 brave souls there, who will hopefully spread the word to more people, and so on :-). So, it was time for the show to begin!

Our schedule was (presentation titles translated):

  1. “What is Free Software?”, by me
  2. “Free Documents or the End of the World”, by Rubens Queiroz de Almeida
  3. “HTML5: all the faces of the new standard”, by Ricardo Panaggio
  4. “EPUB3: The book in the XXI century”, by Raniere Silva

So my presentation was scheduled to be the first one, and I really liked it (surprise!). It was virtually the first time I gave a “philosophical” talk, and a very important one: a general presentation about Free Software, its history, the present, and a little bit of the future. In my opinion, what I liked about my talk is that I focused less on the “freedom” part and more on the “respect” part of the philosophy. I did this because I wanted to use a different argument that had been on my mind for a long time: that the main thing behind Free Software is respect towards others, and only with that can one achieve freedom.

I watched Rubens too, who gave an excellent presentation about why we need free documents and standards. Rubens is very talkative and warm, which makes the audience feel relaxed. People liked his presentation a lot, from what I noticed.

Unfortunately, Ricardo Panaggio had a problem with his computer before his presentation, so we decided to switch: Raniere Silva would take his place as the third presenter, while Ricardo tried to fix the problem. I helped him with his problems, and because of this I was unable to watch Raniere’s talk. In the end, we could not solve Ricardo’s problem and he decided to give his presentation without any slides. In my opinion, he managed to catch everyone’s attention (also because HTML5 is such a hot topic today), so I guess the missing slides were not so important after all!

At 17h, we declared DFD 2013 finished. I still had time to hand out some Free Software stickers (from the FSF) and talk a little with two or three people there, who were satisfied with the presentations! That made my day, of course :-). And just because of that, I now feel motivated to organize another DFD next year!

Acknowledgements

I would like to thank Rubens Queiroz for helping with the promotion, the location, and the presentation during the event. DFD 2013 would have been impossible without his help. Thanks, Rubens!

The LibrePlanet São Paulo team, especially Ricardo Panaggio, was also deeply involved with me in the organization. And I hope we manage to make a bigger event next year!

Finally, I would like to thank everyone who attended the event, even if only to watch a single talk. Your presence there was really, really important to all of us. See you all next year!

]]>
<![CDATA[Report: the Install Fests at UNESP Rio Claro/SP and at UNICAMP/SP]]> https://blog.sergiodj.net/posts/relato-installfest-unesp-unicamp/ 2013-04-01T00:00:00-05:00 2013-04-01T00:00:00-05:00 And… here we are (am I?) with one more report about two activities involving the Fedora Project! It covers, respectively, the Install Fests held at UNESP in Rio Claro/SP and at UNICAMP. These were activities that involved many people and had victories and defeats, joys and sorrows, but above all a feeling of powerlessness (especially at the UNICAMP Install Fest) in the face of the new boot “technologies”, particularly Secure Boot.

Install Fest: the UNESP Rio Claro/SP mission

This was the calmer of the two Install Fests. We started organizing it right after my participation in the Computer Science Week at UNESP Rio Claro, and the initial intention was to hold it on the enrollment day for the university's incoming students. In the end, we decided to postpone it, which turned out to be a good choice.

The Install Fest took place on March 6th, 2013, in an auditorium of the campus library, and it started with a talk by me about the Fedora Project. It was basically the same talk I had given at the Computer Science Week, but more succinct since we had little time. I believe it was well received, because the audience showed interest in contributing to the Fedora Project after I explained the ways to do so :-). Besides, despite the small number of people (about 12 participants), everyone was quite interested in the content, which is extra motivation!

Well, after the talk it was time to start installing systems. I brought several Fedora DVDs, in basically two flavors: LiveDVDs, which let you boot and try a Fedora system without installing anything on the machine, and InstallDVDs, which don't offer the option of “test driving” the system but already carry all the packages needed for a complete installation. I explained to everyone some basic rules of any Install Fest: the hard disk has to be repartitioned if you want to keep Microsoft (R) Windows (R); whoever organizes the Install Fest cannot take responsibility for any installation failure (even though they are rare); and they also cannot take responsibility if the user becomes addicted to GNU/Linux :-). With that said, we got our GNU/hands dirty.

The first (and, until then, only!) challenge of recent Install Fests is imposed by the notebook manufacturers themselves. A hard disk that still uses MBR (most of them) supports only 4 primary partitions. Manufacturers used to create just one partition for Microsoft (R) Windows (R), sometimes a second “recovery” partition, and stop there. Nowadays, it's not rare to find computers shipping with all 4 primary partitions already created. I have even seen notebooks with 1 TB disks carrying a primary partition of little more than 1 MB! It's a completely absurd practice and, in my view, done in bad faith, aimed at making it hard to install other operating systems. On top of that, to make things even worse, some manufacturers (HP comes to mind, but there are others) manage to void the warranty if the partitioning scheme is changed!!!

Fortunately, several computers at the Install Fest had only 3 partitions (or even fewer!), and the ones with 4 partitions either used another partitioning scheme (called GPT) or were already out of warranty and could have their partitioning changed. Microsoft (R) Windows (R) itself, as of version 7 (if I'm not mistaken), offers a specific tool for resizing and repartitioning the disk, so this first step was completed successfully on all machines (if you attended the Install Fest and remember a machine that could not be repartitioned, please contact me so I can correct this post!).

After repartitioning, it was time to start installing. Almost everybody preferred the InstallDVD, because installing over the internet would take too long. After booting, we were faced with the Fedora 18 installer interface. Having read many critiques of it, I could finally confirm that, unfortunately, almost all of them are accurate. I confess I was confused at first, especially by the partitioning and disk-selection screen, which is not intuitive at all. I know the installer was rewritten, and that it was one of the main reasons for the delay in the Fedora 18 release, so I very much hope the improvements for Fedora 19 address, above all, this part of the user interface. After struggling a bit, I got used to it, and the other installations went more smoothly.

As the installations finished, the systems started to be configured. If memory serves, everybody chose to install GNOME 3, which is the default desktop in Fedora 18. I personally don't like it, and I also had some difficulties (mainly when trying to find ways to change more advanced options), but some people liked the look.

In the end, I forgot to count how many machines were installed, but I believe we got close to 11. All the installations were successful, as far as my memory goes :-). And once again I was very pleased with my trip to UNESP Rio Claro!

However, dark clouds were gathering, and my joy would be short-lived…

Install Fest: the UNICAMP mission

A few years ago, news started to appear about a new system that would replace the BIOS, allowing much more flexibility during boot and even adding security layers that would protect the user against viruses and other threats. This system is called UEFI (and one of those “security layers” is called Secure Boot), and last year it gained a lot of notoriety because Microsoft (R) announced that its then-new system, Windows (R) 8, could only be used on machines with UEFI. This caused a rush among computer manufacturers to adapt to the new model (and earn Microsoft (R)'s infamous compatibility sticker), and generated discontent in much of the Free Software and/or open source communities.

In short, the big problem with this new scheme is that a cryptographic key signed by a certificate authority is required for the operating system to boot. That's the security Secure Boot provides, and the only way to get a signed key is… (drum roll)… paying Microsoft (R)!

As far as I know, Microsoft (R) Windows (R) 8 doesn't work with Secure Boot disabled (a perfectly valid way of installing a GNU/Linux distribution that lacks such a key), so a distribution is forced to go along with this scheme if it wants to offer the user a dual-boot option. And currently, the only two distributions that do are Fedora and Ubuntu.

Well, after this brief explanation, here begins my report of what happened at the Install Fest. On March 13th, 2013, a Wednesday, we gathered at UNICAMP's Institute of Computing to install GNU/Linux distributions. Again, I brought several Fedora DVDs to be used by the students entering the Computer Science and Computer Engineering programs. This time there was no introductory talk about the Fedora Project, but I took 10 minutes to explain the “rules” of an Install Fest. I also commented on the bad practice some notebook manufacturers have of shipping a hard disk fully partitioned, with no possibility of adding new primary partitions. With that said, we started installing.

Unfortunately, due to several factors, such as inexperience, little time to organize the event, and a bad estimate of how many people would come, we ended up with too many people wanting to install and too few people able to help. We never did an official count, but I suppose at least 20 people were in the room wanting to install Fedora. And the vast majority of them had brand-new notebooks running Microsoft (R) Windows (R) 8, i.e., with UEFI and Secure Boot enabled.

As we repartitioned the disks and booted the Fedora DVDs, we started noticing something was wrong. After finishing the installation on some machines, we saw that the system wouldn't start. What we had to do, in some cases, was disable Secure Boot (even then, without success in some cases). After that, Fedora would finally boot, but Microsoft (R) Windows (R) 8 wouldn't show up in GRUB's list of operating systems! In other words, it was impossible to make the two systems coexist on the same machine.

We had a few more serious cases, which were resolved in the end. And before you ask me what the solution was, I'll answer: we re-enabled Secure Boot and practically undid the Fedora installation. That is, the overwhelming majority of the students at the Install Fest went home with a machine without Fedora or any other GNU/Linux distro. I personally saw only 2 successful installations, although after the Install Fest I heard of a few more.

I left the event quite upset, thinking it had been our fault and that the students would never again want to install GNU/Linux on their machines. But after a while, I put my thoughts in order and decided to write this post. I'm not absolving anyone of blame; I believe we should have planned the Install Fest a little better, and we certainly learned from the mistakes we made. But I think it is very important to point some fingers and say what really happened.

Conclusions

The main conclusion couldn't be any other. You have to be very careful with these new boot technologies. When buying a new machine, pay close attention to this, because these new technologies are nothing more than traps to take away your freedom to choose what to run on your machine. We must fight against these impositions made by companies (don't be naive and think Microsoft (R) is the only one behind this…), and we must take charge of our own freedom. If you want to show your support against these impositions (and understand more about why they exist), click here and read the Free Software Foundation page on the subject (and sign the petition too!).

Secondary conclusions: an Install Fest (or any event, really) needs to be organized in advance and needs plenty of people willing to help with the installations. That's the only way things flow.

Acknowledgements

I must thank Ricardo Panaggio for helping me by coming along to UNESP Rio Claro! He also helped a lot at the UNICAMP Install Fest.

I would also like to thank Marcel Godoy and the Computer Science Students' Union of UNESP Rio Claro for organizing and promoting the Install Fest there. Thank you very much!

The UNICAMP Install Fest was only possible with the help of UNICAMP's Grupo Pró-Software Livre, especially Gabriel Krisman. Ivan S. Freitas and Raniere Gaia Silva also helped with the logistics.

Finally, I would like to thank the Fedora community for the support with the DVDs. Thanks, everyone!

]]>
<![CDATA[Misunderstanding the Free Software Philosophy]]> https://blog.sergiodj.net/posts/misunderstanding-free-software/ 2012-12-17T00:00:00-05:00 2012-12-17T00:00:00-05:00 This will probably be one of those controversial posts, but I really cannot just be silent about a behaviour that I am constantly seeing around me.

Since my childhood, I have been fascinated by the power of words. I have always liked reading a lot, and despite not knowing the grammar rules (either in pt_BR or en_US, the former being my native language, the latter being the only other language I can consider myself fluent in), I am deeply interested in what words (and their infinite meanings) can do to us. (If you read Portuguese, and if you also like to study or admire this subject, I strongly recommend a novel by José Saramago called “O Homem Duplicado”.) What I am seeing everywhere now is that people are being as careless as ever with words, their meanings, and especially their implications.

The problem I am seeing, and it is a serious problem in my opinion, is the constant use of the term “free software” when “open source” should be used. This is obviously not a recent problem, and I really cannot recall the first time I noticed it happening. But maybe because I am much more involved with (real) free software movements now, I have the strong impression that this “confusion” is starting to grow out of control. So here I am, trying to convince some people to be a little more consistent.

When you create a group to talk about free software, or when you join a group whose goal is to promote free software ideas, you should really do that. First of all, you should understand what free software is about. It is not about open source, for starters. It is also a political movement, not only a technical one.

I was part of a group at my former university which had “Free Software” in its name. For a long time, I believed the group really was about free software, even after receiving e-mails with harsh criticism of my opinions whenever I defended something related to the free software ideology (e.g., when I suggested that we should not have a Facebook page, which had been created for the group by one of its members). Well, when I really could not hide the truth from myself anymore, I packed my things and left the group (this was actually the start of a new free software group that I founded with other friends in Brazil).

I also really like going to events. And not only because of the presentations, but mostly because I really like talking to people. Brazilians are fortunately very warm and talkative, so events here are fertile ground for my social skills :-). However, even when the event has “free software” in its name and description, it is very hard to find someone who really understands the philosophy behind the term. And I'm not just talking about the attendees: the event staff is also usually ignorant (and prefers to remain so)! I feel really depressed when I start to defend (real) free software, and people look at me and say “You're radical.” It's like going to a “Debugger Conference” and feeling ridiculed when you start talking about GDB! I cannot understand this…

But the worst part of all this is that newcomers are learning that “free software” is “Linux”, or something else which is not free software. This is definitely not a good thing, because people should be aware that the world is not just about software development: there are serious issues, including threats to privacy and freedom from Facebook/Google/Apple/etc., which we should fight against. Free software is about that as well. Awareness should be raised, actions should be taken, and people should refuse those impositions.

So, to finish what I want to say: if you do not consider yourself a free software activist, please consider becoming one. And if, after giving it a thought, you decide that you really do not want to be a free software activist, then do not use the name “free software” in your event/group/whatever, unless you really intend to talk about it and not open source. In other words, if you don't want to help, please don't spread confusion.

]]>
<![CDATA[[ANNOUNCEMENT] Creation of the LibrePlanet São Paulo group!]]> https://blog.sergiodj.net/posts/criacao-libreplanet-sao-paulo/ 2012-12-15T00:00:00-05:00 2012-12-15T00:00:00-05:00 Hello, everyone!

I finally found some time in my schedule and decided to write on the blog to announce the creation of the LibrePlanet São Paulo group!

What LibrePlanet is

The LibrePlanet project started in 2006, during the FSF (Free Software Foundation) members meeting. It was created to help organize ways of bringing the Free Software movement to the attention of the general public.

The groups are organized geographically, and each one is responsible for defining goals and strategies to foster Free Software in its region. It is important to make this clear: the goal is to work for Free Software, not open source. To learn more about the definition of Free Software, I recommend reading this article.

How LibrePlanet São Paulo came about

This story is a bit long, but I'll try to summarize it :-).

It all started when Ricardo Panaggio, Ivan S. Freitas, Raniere Gaia Silva and I began exchanging e-mails about subjects like privacy, free software, free solutions and services, etc. Panaggio and I were already feeling very unhappy with the direction a local group, theoretically “pro free software”, was taking (as with almost everything nowadays, the name “free software” is there simply because nobody has realized yet that it should be “open source”…). That dissatisfaction had been making us want to create a new group, faithful to the Free Software ideology, where we could voice our opinions without fear of being crushed by a majority that doesn't care about “that stuff”.

Well, we started talking, and soon Ivan and Raniere signaled they would gladly join the group. So the soil was already fertile for new ideas :-).

One day, I woke up and found in my INBOX a message from Raniere saying he had found something on the Internet about an interesting project, LibrePlanet. That was the missing spark to get things moving! I remembered I had already talked to Matt Lee, also from the FSF, about LibrePlanet, and after a quick search on the project's wiki, I saw there was no Brazilian group yet. So, after some internal discussion, we decided to create a group for the state of São Paulo.

Today, little more than 2 weeks after its creation, we have 10 members registered on the wiki and about 7 active members on our IRC channel. We also have a mailing list, and we're starting to discuss possible projects for 2013.

How can you join the group?

It's simple! Follow these steps:

  1. Go to our wiki and read all the information there before anything else!
  2. After that, create your FSF user account by going to this registration link and filling in the information. Note that you don't need to become an FSF member (members are people who contribute financially to the Foundation), but if you can, that would be really nice :-).
  3. OK, now that you have a user account, log in to the LibrePlanet wiki and create your personal page there. To do so, go to this link, click the Edit link, and add some information about yourself. If you want, use my personal page as an example. It is important that you add, at the very end of the content, the following line: {{user SP}}. It makes you part of the LibrePlanet São Paulo group.
  4. Now, it is also important that you subscribe to our mailing list. Go to this subscription page and fill in the required information! We also strongly recommend sending an introduction message to the list. Nothing formal, just so we can get an idea of the group's size!
  5. Phew, last step! If you use IRC and hang out on the Freenode network, join our channel: #lp-br-sp! That's where most of the discussions happen, so it would be really nice if you could take part in them too!

I think that's it :-). If you still have questions about anything in this post (the group's goals, registration, etc.), or if you want to leave a comment, feel free!

Free greetings!

]]>
<![CDATA[Report: the GDB Talk at SoLiSC 2012]]> https://blog.sergiodj.net/posts/relato-apresentacao-gdb-solisc/ 2012-12-01T00:00:00-05:00 2012-12-01T00:00:00-05:00 Last Friday, November 30th, 2012, I attended the seventh edition of SoLiSC, in Florianópolis, to give an introductory talk about GDB. This is a report on my participation in the event :-).

Impressions of the event

It was my first time at SoLiSC. I had wanted to go in previous years, but unfortunately something always got in the way. This year, happily, everything worked out, and I even had a talk accepted! In other words, a great reason to visit Floripa and see the ocean again :-D.

I took a 6 a.m. flight from Campinas and landed there at 7:10 a.m. I was quite tired, since I hadn't slept from Thursday to Friday, but the excitement was keeping me awake :-).

The event took place at Estácio de Sá University, in São José. I arrived there at 8 a.m. and was warmly received by the event staff. I tried to mingle right away, and met some people who were also speaking at the event. Since my talk was scheduled to start at 2 p.m., I spent the time chatting and keeping an eye on the talk schedule.

By coincidence (or not!), I ended up in the room where the first LibreOffice Hack Day in Brazil would happen. I actually stayed in that room the whole day, helping people work around some annoying problems with the university's firewall, and later with git. It was a very cool experience; I had never attended a Hack Day before, and it was an honor to witness and help at the first event of its kind that the LibreOffice folks held in Brazil :-). It was also very interesting to learn a bit more about a project as big and complex as LibreOffice, and I even put in a plug for GDB with them :-).

In the end, I also met some people very interested in contributing to free software projects, which is always good! It gives me extra motivation to keep doing this advocacy work. You can read a more detailed description of the LibreOffice Hack Day (with photos) here.

The “GDB Crash Course” talk

I was already expecting a small audience, not least because talking about GDB is getting harder and harder… People in general don't know about (or care about) the software, so it's normal to be sidelined a bit at these events :-). Who knows, maybe one day I'll write a post about that?

Well, even with a small audience, I think the talk went well. This time my friend Edjunior didn't come, so I gave the talk alone :-). There are advantages and disadvantages to that, but overall I think the talk went a bit faster.

I added some extra slides to talk about Red Hat and about what we are doing for the free software communities out there (not only GDB's, but many others as well). That part of the presentation was really nice, because the pride of working at this company is big!

After I finished my talk and went back to the LibreOffice Hack Day room, some developers there asked me how it had gone and said they regretted not attending… You know how it is, they preferred to keep writing patches, so I understand :-P. Well, so that nobody was left out, I ended up doing a second round of the talk inside the Hack Day, and it was great fun too :-).

Several people asked me for the slides, so here they are:

Conclusion

I would especially like to thank Eliane Domingos, David Jourdain and Olivier Hallot, all TDF members and LibreOffice contributors, for the pleasant moments and fun conversations we had throughout the event!

I would also like to thank the SoLiSC organization for the opportunity to take part in such a nice event! Klaibson Ribeiro was the person I exchanged some e-mails with before the event, so a big “thank you” to him too :-).

See you at the next SoLiSC!

]]>
<![CDATA[GDB and SystemTap Probes -- part 3]]> https://blog.sergiodj.net/posts/gdb-and-systemtap-probes-part-3/ 2012-11-02T00:00:00-05:00 2012-11-02T00:00:00-05:00 Hi everybody :-).

I finally got some time to finish this series of posts, and I hope you like the overall result. For those of you who are reading this blog for the first time, you can access the first post here, and the second here.

My goal with this third post is to talk a little bit about how you can use the SDT probes with tracepoints inside GDB. Maybe this particular feature will not be so helpful to you, but I recommend reading the post either way. I will also give a brief explanation about how the SDT probes are laid out inside the binary. So, let’s start!

Complementary information

In my last post, I forgot to mention that the SDT probe support present in older versions of Fedora GDB is not exactly as I described here. This is because Fedora GDB adopted this feature much earlier than upstream GDB itself, so while this has a great positive aspect in terms of the distro's philosophy (i.e., Fedora contains leading-edge features, so if you want to know how the FLOSS community will look in a few months, use it!), it also has the downside of delivering older/different versions of features in older Fedoras. But of course, this SDT feature will be fully available in Fedora 18, to be announced soon.

My suggestion is that if you use a not-so-recent Fedora (like Fedora 16, 15, etc.), please upgrade it to the latest version, or compile your own version of GDB yourself (it's not that hard; I will make a post about it in the coming days/weeks!).

With that said, let’s move on to our main topic here.

SDT Probes and Tracepoints

Before anything else, let me explain what a tracepoint is. Think of it as a breakpoint which doesn’t stop the program’s execution when it hits. In fact, it’s a bit more than that: you can define actions associated with a tracepoint, and those actions will be performed when the tracepoint is hit. Neat, huh? :-)

There is a nice description of what a tracepoint is in the GDB documentation; I recommend you give it a read to understand the concept.

Ok, so now we have to learn how to put tracepoints in our code, and how to define actions for them. But before that, let’s remember our example program:

#include <sys/sdt.h>

int
main (int argc, char *argv[])
{
  int a = 10;

  STAP_PROBE1 (test_program, my_probe, a);

  return 0;
}

Very simple, isn’t it? Ok, to the tracepoints now, my friends.

Using tracepoints inside GDB

In order to properly use tracepoints inside GDB, you will need to use gdbserver, a tiny version of GDB suitable for debugging programs remotely, over the net or serial line. In short, this is because GDB cannot put tracepoints on a program running directly under it, so we have to run it inside gdbserver and then connect GDB to it.

Running our program inside gdbserver

In our case, we will just start gdbserver in our machine, order it to listen to some high port, and connect to it through localhost, so there will be no need to have access to another computer or device.

First of all, make sure you have gdbserver installed. If you use Fedora, the package name you will have to install is gdb-gdbserver. If you have it installed, you can do:

$ gdbserver :3001 ./test_program
Process ./test_program created; pid = 17793
Listening on port 3001

The first argument passed to gdbserver (:3001) instructs it to listen on port 3001 of your loopback interface, a.k.a. localhost.

You will notice that gdbserver will stay there indefinitely, waiting for new connections to arrive. Don’t worry, we will connect to it soon!

Connecting an instance of GDB to gdbserver

Now, go to another terminal and start GDB with our program:

$ gdb ./test_program
...
(gdb) target remote :3001
Remote debugging using :3001
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
0x0000003d60401530 in _start () from /lib64/ld-linux-x86-64.so.2

The command you have to use inside GDB is target remote. It takes as an argument the host and the port to which you want to connect. In our case, we just want it to connect to localhost, port 3001. If you saw output like the above, great, things are working for you (don't pay attention to the messages about missing debug information). If you didn't, please check that you're connecting to the right port and that no other service is using it.

Ok, so now it is time to start our trace experiment!

Creating the tracepoints

Every command below should be issued in GDB, not in gdbserver!

In your GDB prompt, put a tracepoint in the probe named my_probe:

(gdb) trace -probe-stap my_probe
Tracepoint 1 at 0x4005a9

As you can see, the trace command takes exactly the same arguments as the break command. Thus, you need to use the -probe-stap modifier in order to instruct GDB to put the tracepoint on the probe.

And now, let’s define the actions associated with this tracepoint. To do that, we use the actions command, which is an interactive command inside GDB. It takes some specific keywords, and if you want to learn more about it, please take a look at this link. For this example, we will use only the collect keyword, which tells GDB to… hm… collect something :-). In our case, it will collect the probe’s first argument, or $_probe_arg0, as you may remember.

(gdb) actions 
Enter actions for tracepoint 1, one per line.
End with a line saying just "end".
>collect $_probe_arg0
>end
(gdb)

Simple as that. Finally, we have to define a breakpoint in the last instruction of our program, because it is necessary to keep it running on gdbserver in order to examine the tracepoints later. If we didn’t put this breakpoint, our program would finish and gdbserver would not be able to provide information about what happened with our tracepoints. In our case, we will simply put a breakpoint on line 10, i.e., on the return 0;:
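(gdb) break test_program.c:10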

Running the trace experiment

Ok, time to run our trace experiment. First, we must issue a tstart to tell GDB to start monitoring the tracepoints. And then, we can continue our program normally.

(gdb) tstart 
(gdb) continue
Continuing.

Breakpoint 1, main (argc=1, argv=0x7fffffffde88) at /tmp/test_program.c:10
10        return 0;
(gdb) tstop
(gdb)

Remember, GDB is not going to stop your program, because tracepoints are designed not to interfere with its execution. Notice that we also stopped the trace experiment after the breakpoint hit, by using the tstop command.

Now we can examine what the tracepoint has collected. First, we use the tfind command to make sure the tracepoint has hit, and then we can inspect what we ordered it to collect:

(gdb) tfind start
Found trace frame 0, tracepoint 1
8         STAP_PROBE1 (test_program, my_probe, a);
(gdb) p $_probe_arg0
$1 = 10

And it works! Notice that we are printing the probe argument using the same notation as with breakpoints, even though we are not exactly executing the STAP_PROBE1 instruction. What does that mean? Well, with the tfind start command we tell GDB to actually use the trace frame collected during the program's execution, which, in this case, contains the probe argument. If you know GDB, think of it as if we were using the frame command to jump back to a specific frame, where we would have access to its state.
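As a side note, once you are positioned at a trace frame like this, the tdump command dumps everything that was collected in it at once. A quick sketch (the exact output format may differ between GDB versions):

(gdb) tdump
Data collected at tracepoint 1, trace frame 0:
$_probe_arg0 = 10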

This is a very simple example of how to use the SDT probe support in GDB with tracepoints. There is much more you can do, but I hope I could explain the basics so that you can start playing with this feature.

How the SDT probe is laid out in the binary

You might be interested in learning how the probes are created inside the binary. Other than reading the source code of /usr/include/sys/sdt.h, which is the heart of the whole feature, I also recommend this page, which explains in detail what’s going on under the hood. I also recommend that you study a little about how the ELF format works, specifically about notes in the ELF file.
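Just to give you a taste: each probe becomes a note in the .note.stapsdt ELF section, which you can inspect with readelf. The output below is a sketch for our example program; the data size, base address, and argument string all depend on your architecture, compiler, and optimization level:

$ readelf -n ./test_program
Displaying notes found in: .note.stapsdt
  Owner                 Data size       Description
  stapsdt              0x00000033       NT_STAPSDT (SystemTap probe descriptors)
    Provider: test_program
    Name: my_probe
    Location: 0x00000000004004ae, Base: 0x..., Semaphore: 0x0000000000000000
    Arguments: -4@-4(%rbp)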

Conclusion

After this series of blog posts, I expect that you will now be able to use the not-so-new feature of SDT probe support on GDB. Of course, if you find some bug while using this, please feel free to report it using our bugzilla. And if you have some question, use the comment system below and I will answer ASAP :-).

See ya, and thanks for reading!

]]>
<![CDATA[GDB and SystemTap Probes -- part 2]]> https://blog.sergiodj.net/posts/gdb-and-systemtap-probes-part-2/ 2012-10-27T00:00:00-05:00 2012-10-27T00:00:00-05:00 I tell you this: it is depressing when you realize that you spent more time struggling with blog engines than writing posts on your blog!

It’s been a long time since I wrote the first post about this subject, and since then the patches have been accepted upstream, and GDB 7.5 now has official support for userspace SystemTap probes :-). Yay!

Well, but enough of cheap talk, let’s get to the business!

Errata for my last post

Frank Ch. Eigler, one of SystemTap’s maintainers, kindly mentioned something that I should say about SystemTap userspace probes.

Basically, it should be clear that SDT probes are not the only kind of userspace probing one can do with SystemTap. There is yet another kind of probe (maybe even more powerful, depending on your goals): DWARF-based function/statement probes. SystemTap has supported this kind of probing mechanism for quite a while now.

It is not the goal of this post to explain it in detail, but you might want to give it a try by compiling your binary with debuginfo support (use the -g flag with GCC) and doing something like:

$ stap -e 'probe process("/bin/foo").function("name") { log($$parms) }' -c /bin/foo
$ stap -e 'probe process("/bin/foo").statement("*@file.c:443") { log($$vars) }' -c /bin/foo

And that’s it. You can read SystemTap’s documentation, or this guide to learn how to add userspace probes.

Using GDB with SystemTap SDT Probes

Well, now let’s get to the interesting part. It is time to make GDB work with the SDT probe that we have put in our example code. Let’s remember it:

#include <sys/sdt.h>

int
main (int argc, char *argv[])
{
  int a = 10;

  STAP_PROBE1 (test_program, my_probe, a);

  return 0;
}

It is a very simple example, and we will have to extend it later in order to show more features. But for now, it will do.

The first thing to do is to open GDB (with SystemTap support, of course!), and check whether it can actually see the probe inserted in our example.

$ gdb ./test_program
GNU gdb (GDB) 7.5.50.20121014-cvs
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
...
(gdb) info probes
Provider     Name     Where              Semaphore Object
test_program my_probe 0x00000000004004ae           /home/sergio/work/src/git/build/gdb/test_program

Wow, it actually works! :-)

If you have seen something like the above, it means your GDB is correctly recognizing SDT probes. If you see an error, or if your GDB doesn't have the info probes command, then you'd better make sure you have a recent version of GDB; otherwise you won't be able to use the SDT support.

Putting breakpoints in the code

Anyway, now it is time to start using this support. The first thing I want to show you is how to put a breakpoint in a probe.

(gdb) break -probe-stap my_probe
Breakpoint 1 at 0x4004ae

That’s all! We have chosen to extend the break command in order to support the new -probe-stap parameter. If you’re wondering … why the -probe prefix?, it is because I was asked to implement a complete abstraction layer inside GDB in order to allow more types of probes to be added in the future. So, for example, if someone implements support for an hypothetical type of probe called xyz, you would have break -probe-xyz. It took me a little more time to implement this layer, but it is worth the effort.

Anyway, as you have seen above, GDB recognizes the probe's name and correctly puts a breakpoint on it. You can also confirm that it has done the right thing by matching the address reported by info probes with the one reported by break: they should be the same.

Ok, so now, with our breakpoint in place, let’s run the program and see what happens.

(gdb) run
Starting program: /home/sergio/work/src/git/build/gdb/test_program

Breakpoint 1, main (argc=1, argv=0x7fffffffdf68) at /tmp/example-stap.c:8
8  STAP_PROBE1 (test_program, my_probe, a);

As you can see, GDB stopped at the exact location of the probe. Therefore, you are now able to put marks (i.e., probes) in your source code which are location-independent. It means that it doesn't really matter where in the source code your probe is: you can change the code around it, changing the line numbers, or even move it to another file, and GDB will always find your probe and stop at the right location. Neat!

Examining probes’ arguments

But wait, there’s more! Remember when I told you that you could also inspect the probe’s arguments? Yes, let’s do it now!

Just remember that, in SDT’s parlance, the current probe’s argument is a. So let’s print its value.

(gdb) p $_probe_arg0
$1 = 10
(gdb) p a
$2 = 10

“Hey, captain, it seems the boat really floats!”

Check the source code above, and convince yourself that a's value is 10 :-). As you might have seen, I used a fairly strange way of printing it. That is because the probe's arguments are available inside GDB by means of convenience variables. You can see a list of them here.

Since SDT probes can have up to 12 arguments (i.e., you can use STAP_PROBE1 up to STAP_PROBE12), we have created 12 convenience variables inside GDB, named $_probe_arg0 through $_probe_arg11. I know, these are not easy names to remember, and even the relation between the SDT naming and the GDB naming is not direct (i.e., you have to subtract 1 from the SDT probe number). If you are not satisfied with this, please open a bug in our bugzilla and I promise we will discuss other options.
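
For example, if our program had used a two-argument probe (a hypothetical variation of the code above):

  int b = 20;

  STAP_PROBE2 (test_program, my_probe, a, b);

then, inside GDB, stopped at the probe:

(gdb) p $_probe_arg0
$1 = 10
(gdb) p $_probe_arg1
$2 = 20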

I would like to emphasize something here: just as you don't need debuginfo support for dealing with probes inside GDB, you don't need debuginfo support for dealing with their arguments either. It means that you can compile your code without debuginfo support, but still have access to some important variables/expressions when debugging it. Depending on how GCC optimizes your code, you may experience some difficulties with argument printing, but so far I haven't heard of anything like that.
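
Here is a quick hypothetical session illustrating the point. The binary is compiled without -g, so GDB cannot see the local variable, but the probe argument is still reachable:

$ gcc test_program.c -o test_program
$ gdb -q ./test_program
(gdb) break -probe-stap my_probe
Breakpoint 1 at 0x4004ae
(gdb) run
...
(gdb) p a
No symbol "a" in current context.
(gdb) p $_probe_arg0
$1 = 10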

More to come

Ok, now we have covered more things about the SDT probe support inside GDB, and I hope you understood all the concepts. It is not hard to get things going with this, especially because you don't need extra libraries to make it work.

In the next post, I intend to finish this series by explaining how to use tracepoints with SDT probes. Also, as I said in the previous post of this series, maybe I will talk a little bit about how the SDT probes are organized within the binary.

See you soon!

]]>
<![CDATA[Report on the presentations at UNESP Rio Claro/SP]]> https://blog.sergiodj.net/posts/relato-apresentacao-fedora-unesp-rio-claro/ 2012-10-24T00:00:00-05:00 2012-10-24T00:00:00-05:00 As I mentioned in the previous post, here is my report on the presentations I gave at the Computer Science Week at UNESP Rio Claro.

TL;DR: I enjoyed having the opportunity to give the presentations, and especially having delivered my first talk as a Fedora Project Ambassador in Brazil. As for the GDB talk, I also liked the way it went. I noticed a few flaws that need to be fixed, but overall it was a very good experience.

The "Fedora Project" presentation

It was the first presentation of the evening, according to the schedule. It started half an hour late, because the organizers asked us to wait for more people to arrive (it was raining heavily at the time, which made getting around difficult).

I started the talk by speaking a little about the Fedora Project. I ended up rushing through the project's origins, a flaw I intend to fix on future occasions. I put a lot of emphasis on the definition of community and on what that means when we deal with free software. I confess I made a few comparisons with Ubuntu, which may not have been a good idea (according to the Fedora Project guidelines for Ambassadors). In any case, the message got through, and I noticed that some people became interested in learning more about the project and its philosophy.

Positive points: I believe I managed to inform people about the project, with the help of Paul W. Frields's excellent slides. It is always gratifying to give talks, even if only one or two people end up genuinely interested in the end. Besides that, it felt good to be promoting a project that respects its users' freedoms (or at least tries its best to), and one that I actually use and like.

Points to improve: make the talk a little less "personal". That is very hard to achieve, but I have the strong impression that my thoroughly pro-free-software stance sometimes ends up driving people away, people who see the free software enthusiast as a "radical" or a "zealot". I need to think about this a little…

The conclusion is that I was quite satisfied with the outcome of the talk. I noticed that, afterwards, a few people came to tell me that they were already using Fedora, or that they had been thinking about switching distributions and that Fedora was now an option. Mission accomplished :-).

The "GDB Crash Course" presentation

I believe this is already the fourth time I have given this talk, and the third time together with my friend Edjunior. Every time it ends, I (we) get the impression that we still haven't gotten it quite right, and this time was no different.

The talk started right on time, at 9 PM, and we decided to try a slightly different approach. The last time we gave it was at the Semana Integrada at PUC Campinas. On that occasion, we had chosen to start by talking more about GDB's commands, and only then show how things work, hands-on style. This time, we decided to present the practice alongside the theory. It was better, and I think the presentation flowed more smoothly, but we still ran into the old problem of command interdependency: when we were about to talk about breakpoints, we needed to have shown some other command that would only be explained later, which in turn would need yet another command, which would need breakpoints, and so on. In the end we were forced to skip some commands and bring the explanation of others forward, somewhat breaking the flow of the slides.

I noticed that some people were quite interested in GDB, perhaps because they had already been programming for some time. The others apparently still could not see much use for a debugger, but even so they tried to learn something that might serve them well in the future.

It was to be expected, but I am still surprised whenever I see that a technical talk manages to attract much more attention than a "philosophical" one, like the Fedora Project talk. Maybe it is a reflection of the society we live in, or maybe it is just a mistaken impression on my part.

Finally, the conclusion is that the talk seems to have been useful to some people (even if few), and that gives us even more motivation to keep trying to spread the word about this little-known (but very useful) project that is GDB.

Acknowledgments

First of all, I must thank the organizers of SECCOMP at UNESP Rio Claro for the great event. I was surprised by the infrastructure and, above all, by how welcoming everybody was. I really enjoyed the relaxed atmosphere, and I hope I didn't disappoint too many people there with my informal, countryside remarks during the talks :-).

I also thank my friend Edjunior for accompanying me to his alma mater to help me deliver the GDB talk.

Until next time!

]]>
<![CDATA[Presentations at UNESP Rio Claro/SP]]> https://blog.sergiodj.net/posts/apresentacao-fedora-unesp-rio-claro/ 2012-10-23T00:00:00-05:00 2012-10-23T00:00:00-05:00 Today, October 23rd, 2012, I will be at UNESP Rio Claro to give two presentations at the Computer Science Week.

The first talk will be about the Fedora Project. It will be the first time I speak about the project since becoming a Fedora Ambassador in Brazil. I confess I am a little apprehensive, but I have chosen very good slides made by Paul W. Frields, former Project leader and a very competent presenter. I intend to post a report on the talk on Wednesday.

The second presentation will be about GDB. It will be more of a crash course on how to use the tool, and the slides are available at https://github.com/sergiodj/gdb-unicamp2011.

I hope both talks are well received by the audience! I will come back later to tell you how it went :-).

Cheers.

]]>
<![CDATA[GDB and SystemTap probes -- part 1]]> https://blog.sergiodj.net/posts/gdb-and-systemtap-probes-part-1/ 2012-03-29T00:00:00-05:00 2012-03-29T00:00:00-05:00 After a long time, here we are again :-).

With this post I will start to talk about the integration between GDB and SystemTap. This is something that Tom Tromey and I did during the last year. The patch is being reviewed as I write this post, and I expect to see it checked in in the next few days/weeks. But let's get our hands dirty…

SystemTap Userspace Probes

You probably use (or have at least heard of) SystemTap, and maybe you think the tool is only useful for kernel inspection. If that's your case, I have good news: you're wrong! You can actually use SystemTap to inspect userspace applications too, by using what we call SDT probes, or Statically Defined Tracing probes. This is a very cheap and easy way to include probes in your application, and you can even specify arguments to those probes.

In order to use the probes (see an example below), you must include the <sys/sdt.h> header file in your source code. If you are using a Fedora system, you can obtain this header file by installing the package systemtap-sdt-devel, version 1.4 or greater:
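
$ sudo yum install systemtap-sdt-devel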

Here’s a simple example of an application with a one-argument probe:

#include <sys/sdt.h>

int
main (int argc, char *argv[])
{
  int a = 10;

  STAP_PROBE1 (test_program, my_probe, a);

  return 0;
}

As you can see, this is a very simple program with one probe, which contains one argument. You can now compile the program:

$ gcc test_program.c -o test_program

Now you must be thinking: “Wait, wait… Didn’t you just forget to link this program against some SystemTap-specific library or something?” And my answer is no. One of the spectacular things about this <sys/sdt.h> header is that it does not have any dependencies at all! As Tom said in his blog post, this is “a virtuoso display of ELF and GCC asm wizardry”.

If you want to make sure your probe was inserted in the binary, you can use the readelf command:

$ readelf -x .note.stapsdt ./test_program

Hex dump of section '.note.stapsdt':
  0x00000000 08000000 3a000000 03000000 73746170 ....:.......stap
  0x00000010 73647400 86044000 00000000 88054000 sdt...@.......@.
  0x00000020 00000000 00000000 00000000 74657374 ............test
  0x00000030 5f70726f 6772616d 006d795f 70726f62 _program.my_prob
  0x00000040 65002d34 402d3428 25726270 29000000 e.-4@-4(%rbp)...

(I will think about writing an explanation of how the probes are laid out in the binary, but for now you just have to check that you actually see output from this readelf command.)

You can also use SystemTap to perform this verification:

$ stap -L 'process("./test_program").mark("*")'
process("./test_program").mark("my_probe") $arg1:long

So far, so good. If you see an output like the one above, it means your probe is correctly inserted. You could obviously use SystemTap to inspect this probe, but I won’t do this right now because this is not the purpose of this post.
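
If you are curious, though, a one-liner along these lines should print the argument whenever the probe is hit (a hypothetical session; stap may require root privileges or membership in the stapusr/stapdev groups):

$ stap -e 'probe process("./test_program").mark("my_probe") { printf("a = %d\n", $arg1) }' -c ./test_program
a = 10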

For now, we have learned how to:

  1. Include an SDT probe in our source code, and compile it;
  2. Verify if the probe was correctly inserted.

In the next post, I will talk about the GDB support that allows you to inspect SDT probes, print their arguments, and gather other information about them. I hope you like it!

]]>
<![CDATA[My workflow with GDB and git -- part 1]]> https://blog.sergiodj.net/posts/my-workflow-with-gdb-and-git-part-1/ 2011-11-29T00:00:00-05:00 2011-11-29T00:00:00-05:00 This post is actually a “reply” to Gary Benson’s Working on gdb post.

I have been working with GDB for quite some time now, and even though the project officially uses CVS (yes, you read it correctly, it is CVS indeed!) as its version control system, fortunately we also have a git mirror. In the end, what happens is that almost every developer uses the git mirror and only goes to CVS to commit something. But this is another discussion. Aside from this git mirror, we also have the Archer repository (which uses git by default).

My plan here is to show you how I do my daily work with GDB. The workflow is pretty simple, but maybe you will see something here that might help you.

Checking out the code

The first thing to do is to check out the code. I only have one GDB repository here, and I make branches out of it whenever I want to hack. So, to check out (or clone, in git’s parlance) the code, I do (or did) something like the following (these are the sourceware.org mirror URLs as I remember them from that time):
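
$ git clone git://sourceware.org/git/gdb.git
$ cd gdb
$ git remote add archer git://sourceware.org/git/archer.git
$ git fetch archer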

With this, we have just cloned the GDB repository, and also added another remote (i.e., repository). This is useful because we might want to hack on a branch which is on Archer, but use GDB’s master branch as a base.

Create a new branch for your work

So, now it’s time to create a new branch for you. Here I use one of my little “tricks” (taught to me by my friend Dodji), which is the command git-new-workdir. This is a nice command because it creates a new working directory for your project!

Maybe you’re wondering why this is so cool. Well, if you ever worked with git, and more specifically, if you ever used more than one branch at a time, then maybe you will understand my excitement. In this scenario, having to constantly switch between the branches is not something rare. When you have uncommited work in your tree you can always use git stash, but that is not the ideal solution (for me). Sometimes I would forget what was on the stash, and later when I checked it, it was full of crap. Also, I like to have a separate directory for every project I am working on.

It is also important to mention that git-new-workdir lives under the directory /usr/share/doc/git-VERSION/contrib/workdir/, so I created an alias that automagically calls the script for me (substitute your installed git version for VERSION):
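
$ alias git-new-workdir='sh /usr/share/doc/git-VERSION/contrib/workdir/git-new-workdir'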

So, after setting up the script, here is roughly what I do (the directory and branch names below are just the example used in the next section):
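
$ git-new-workdir gdb work/lazy-debuginfo-reading master
$ cd work/lazy-debuginfo-reading
$ git checkout -b lazy-debuginfo-reading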

Build GDB

In order to build the project, I create a build-64 directory inside my project directory (which, in the example above, is work/lazy-debuginfo-reading).

GDB fortunately supports VPATH building (i.e., building the project outside of the source tree), and I strongly recommend using it. My configure invocation looks more or less like this (the separate debug directory below is the one Fedora uses; adjust it for your distro):
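
$ mkdir build-64 && cd build-64
$ ../configure CFLAGS='-g3 -O0' --enable-targets=all \
    --with-separate-debug-dir=/usr/lib/debug
$ make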

As you may have noticed, I use -g3 (include debuginfo) and -O0 (do not optimize the code) in CFLAGS. Also, since some of the features I work on may affect code in other architectures, I use --enable-targets=all. It tells configure to compile everything related to all architectures (not only x86_64, for example). Lastly, I specify a separate debug directory which GDB should use to search for debuginfo files.

Finalizing (for now)

After that, you will have a fresh GDB binary compiled in the build-64 directory. But that is not enough yet, since you will also want to test GDB and make sure you didn’t introduce a bug while hacking on it. In my next post, I will explain my “testflow”. I hope it will be useful for someone :-).

Stay tuned!

]]>