Pagure is a git forge written entirely in Python using pygit2. It was almost entirely developed by one person, Pierre-Yves Chibon. He is (was?) a Red Hat employee and started working on this new git forge almost 10 years ago because the company wanted to develop something in-house for Fedora. The software is amazing and I admire Pierre-Yves quite a lot for what he was able to achieve basically alone. Unfortunately, a few years ago Fedora decided to move to Gitlab and the Pagure development pretty much stalled.
Packaging Pagure for Debian was hard, but it was also very fun. I learned quite a bit about many things (packaging and non-packaging related), interacted with the upstream community, decided to dogfood my own work and run my Pagure instance for a while, and tried to get newcomers to help me with the package (without much success, unfortunately).
I remember that when I had started to package Pagure, Debian was also moving away from Alioth and discussing options. For a brief moment Pagure was a contender, but in the end the community decided to self-host Gitlab, and that’s why we have Salsa now. I feel like I could have tipped the scales in favour of Pagure had I finished packaging it for Debian before the decision was made, but then again, to the best of my knowledge Salsa doesn’t use our Gitlab package anyway…
If you’re interested in maintaining the package, please get in touch with me. I will happily pass the torch to someone else who is still using the software and wants to keep it healthy in Debian. If there is nobody interested, then I will just orphan it.
Deploying Forgejo was easy thanks to mash-playbook, which is a project I’ve been using more and more to deploy my services. I like how organized it is, and the maintainer is pretty responsive. On top of that, learning more about Ansible had been on my TODO list for quite a while.
All of this means that I decided to move away from Sourcehut (I might use it as a mirror for my public repositories, though). I did that because I wanted to self-host my git forge again (I’ve been doing that for more than a decade if you don’t count my migration to Sourcehut last year). Not liking some of Sourcehut’s creator’s opinions (and the way he puts them out there) may or may not have influenced my decision as well.
Something that I immediately missed when I set up Forgejo was a CI. I don’t have that many uses for it, but when I was using Sourcehut I set up its build system to automatically publish this blog whenever a new commit was made to its git repository. Fortunately, mash-playbook also supports deploying Woodpecker CI, so after fiddling for a couple of days with the Forgejo ↔ Woodpecker integration, I managed to make it work just the way I wanted.
Write more :-). Really… It’s almost as if I like deploying things more than writing on my blog! Which is true, but at the same time isn’t. I’ve always liked writing, but somehow I grew so conscious of what to publish on this blog that I’m finding myself avoiding doing it at all. Maybe if I try to change the way I look at the blog I’ll get motivated again. We’ll see.
The feature should work for a lot of packages from the archive, but not all of them. Keep reading to better understand why.
While debugging a package in Ubuntu, one of the first steps you need to take is to install its source code. There are some problems with this:

- `apt-get source` requires `dpkg-dev` to be installed, which ends up pulling in a lot of other dependencies.
- You need to tell GDB where the source code is (e.g. with the `dir` command), but finding the proper path to use is usually not trivial, and you find yourself having to use more “complex” commands like `set substitute-path`, for example.

So yeah, not a trivial/pleasant task after all.
Debuginfod can index source code as well as debug symbols. It is smart enough to keep a relationship between the source package and the corresponding binary’s Build-ID, which is what GDB will use when making a request for a specific source file. This means that, just like what happens for debug symbol files, the user does not need to keep track of the source package version.
While indexing source code, debuginfod will also maintain a record of the relative pathname of each source file. No more fiddling with paths inside the debugger to get things working properly.
Last, but not least, if there’s a need for a library source file and if it’s indexed by debuginfod, then it will get downloaded automatically as well.
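Under the hood, this Build-ID keying is visible in debuginfod’s HTTP API: a client asks for `/buildid/<ID>/debuginfo`, `/buildid/<ID>/executable` or `/buildid/<ID>/source/<path>`. Here is a minimal sketch of how a source request is formed (the server URL, build-id and source path are made-up examples; real clients also escape special characters in the path):

```shell
# Sketch of the request a debuginfod client makes for one source file.
# BUILDID and the source path below are made-up examples.
BUILDID=0123456789abcdef0123456789abcdef01234567
SERVER=https://debuginfod.example.org
# The absolute source path is appended after /source (its leading '/'
# becomes part of the URL path):
URL="${SERVER}/buildid/${BUILDID}/source/usr/src/hello/hello.c"
echo "$URL"
```

GDB never has to know the package version: the Build-ID embedded in the binary is enough to locate both the debug info and the matching sources.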
In order to make debuginfod happy when indexing source files, I had to patch dpkg and make it always use `-fdebug-prefix-map` when compiling stuff. This GCC option is used to remap pathnames inside the DWARF, which is needed because in Debian/Ubuntu we build our packages inside chroots and the build directories end up containing a bunch of random cruft (like `/build/ayusd-ASDSEA/something/here`). So we need to make sure the path prefix (the `/build/ayusd-ASDSEA` part) is uniform across all packages, and that’s where `-fdebug-prefix-map` helps.
This means that the package must honour `dpkg-buildflags` during its build process, otherwise the magic flag won’t be passed and your DWARF will end up with bogus paths. This should not be a big problem, because most of our packages do honour `dpkg-buildflags`, and those that don’t should be fixed anyway.
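For a package with a hand-written, makefile-style debian/rules, honouring the flags can be as simple as including dpkg’s makefile snippet. This is only a sketch (modern dh-based packages already get this behaviour from debhelper):

```make
# debian/rules fragment (sketch): import CFLAGS/CPPFLAGS/LDFLAGS as
# computed by dpkg-buildflags, which is where a flag like
# -fdebug-prefix-map would be injected by a patched dpkg.
include /usr/share/dpkg/buildflags.mk

hello: hello.c
	$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ $<
```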
Ubuntu enables LTO by default, and unfortunately we are affected by an annoying (and complex) bug that results in those bogus pathnames not being properly remapped. The bug doesn’t affect all packages, but if you see GDB having trouble finding a source file whose full path doesn’t start with `/usr/src/...`, that is a good indication that you’re being affected by it. Hopefully we should see some progress in the following weeks.
If you have any comments, or if you found something strange that looks like a bug in the service, please reach out. You can either send an email to my public inbox (see below) or file a bug against the ubuntu-debuginfod project on Launchpad.
Here’s a good summary of what `debuginfod` is:
debuginfod is a new-ish project whose purpose is to serve ELF/DWARF/source-code information over HTTP. It is developed under the elfutils umbrella. You can find more information about it here: https://sourceware.org/elfutils/Debuginfod.html
In a nutshell, by using a debuginfod service you will not need to install debuginfo (a.k.a. dbgsym) files anymore; the symbols will be served to GDB (or any other debuginfo consumer that supports debuginfod) over the network. Ultimately, this makes the debugging experience much smoother (I myself never remember the full URL of our debuginfo repository when I need it).
If you follow the Debian project, you might know that I run their debuginfod service. In fact, the excerpt above was taken from the announcement I made last year, letting the Debian community know that the service was available.
With more and more GNU/Linux distributions offering a debuginfod service to their users, I strongly believe that Ubuntu cannot afford to stay out of this “party” anymore. Fortunately, I have a manager who not only agrees with me but also turned the right knobs in order to make this project one of my priorities for this development cycle.
The deployment of this service will be made in stages. The first one, whose results are due to be announced in the upcoming weeks, encompasses indexing and serving all of the available debug symbols from the official Ubuntu repository. In other words, the service will serve everything from `main`, `universe` and `multiverse`, from every supported Ubuntu release out there.
This initial (a.k.a. “alpha”) stage will also allow us to have an estimate of how much the service is used, so that we can better determine the resources allocated to it.
This is just the beginning. In the following cycles, I will be working on a few interesting projects to expand the scope of the service and make it even more useful for the broader Ubuntu community. To give you an idea, here is what is on my plate:
Working on the problem of indexing and serving source code as well. This is an interesting problem and I already have some ideas, but it’s also challenging and may unfold into more sub-projects. The good news is that a solution for this problem will also be beneficial to Debian.
Working with the snap developers to come up with a way to index and serve debug symbols for snaps as well.
Improve the integration of the service into Ubuntu. In fact, I have already started working on this by making `elfutils` (actually, `libdebuginfod`) install a customized shell snippet to automatically set up access to Ubuntu’s `debuginfod` instance.
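I won’t reproduce the final snippet here, but the general shape of such a `/etc/profile.d/` file would be something along these lines (the guard logic is my assumption; the URL is Ubuntu’s debuginfod instance):

```shell
# Hypothetical /etc/profile.d/debuginfod.sh sketch: point
# debuginfod-aware tools (GDB & friends) at Ubuntu's server, keeping
# any servers the user has already configured.
if [ -z "${DEBUGINFOD_URLS-}" ]; then
    DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"
else
    DEBUGINFOD_URLS="${DEBUGINFOD_URLS} https://debuginfod.ubuntu.com"
fi
export DEBUGINFOD_URLS
```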
As you can see, there’s a lot to do. I am happy to be working on this project, and I hope it will be helpful and useful for the Ubuntu community.
This last Tuesday, February 23, 2021, I made an announcement at debian-devel-announce about a new service that I configured for Debian: a debuginfod server.
This post serves two purposes: to fulfil the promise I made to Jonathan Carter that I would write a blog post about the service, and to go into a bit more detail about it.
From the announcement above:
debuginfod is a new-ish project whose purpose is to serve ELF/DWARF/source-code information over HTTP. It is developed under the elfutils umbrella. You can find more information about it here: https://sourceware.org/elfutils/Debuginfod.html
In a nutshell, by using a debuginfod service you will not need to install debuginfo (a.k.a. dbgsym) files anymore; the symbols will be served to GDB (or any other debuginfo consumer that supports debuginfod) over the network. Ultimately, this makes the debugging experience much smoother (I myself never remember the full URL of our debuginfo repository when I need it).
Perhaps not everybody knows this, but until last year I was a Debugger Engineer (a.k.a. GDB hacker) at Red Hat. I was not involved with the creation of `debuginfod` directly, but I witnessed discussions about “having a way to serve debug symbols over the internet” multiple times during my tenure at the company. So this is not a new idea, and it’s not even the first implementation, but it’s the first time that some engineers actually got their hands dirty enough to have something concrete in hand.
The idea to set up a `debuginfod` server for Debian started to brew after 2019’s GNU Tools Cauldron, but as usual several things happened in $LIFE (including a global pandemic and leaving Red Hat and starting a completely different job at Canonical) which had the effect of shuffling my TODO list “a little”.
Debian unfortunately is lagging behind when it comes to offering its users a good debugging experience. Before the advent of our `debuginfod` server, if you wanted to debug a package in Debian you would need to:
1. Add the `debian-debug` apt repository to your `/etc/apt/sources.list`.
2. Install the `dbgsym` package that contains the debug symbols for the package you are debugging. Note that the version of the `dbgsym` package needs to be exactly the same as the version of the package you want to debug.
3. Figure out which shared libraries your package uses and install the `dbgsym` packages for all of them. Arguably, this step is optional, but it is recommended if you would like to perform a more in-depth debugging.
4. Download the package source, possibly using `apt source` or some equivalent command.
5. Open GDB, and make sure you adjust the source paths properly (more below). This can be non-trivial.
6. Finally, debug the program.
Now, with the new service, you will be able to start from step 4, without having to mess with `sources.list`, `dbgsym` packages and version mismatches.
It is important to mention an existing (but perhaps not well-known) limitation of our debugging experience in Debian: the need to manually download the source packages and adjust GDB to properly find them (see step 4 above). `debuginfod` is able to serve source code as well, but our Debian instance is not doing that at the moment.
Debian does not provide a patched source tree that is ready to be consumed by GDB nor `debuginfod` (for a good example of a distribution that does that, see Fedora’s `debugsource` packages). Let me show you an example of debugging GDB itself (using `debuginfod`) on Debian:
$ HOME=/tmp DEBUGINFOD_URLS=https://debuginfod.debian.net gdb -q gdb
Reading symbols from gdb...
Downloading separate debug info for /tmp/gdb...
Reading symbols from /tmp/.cache/debuginfod_client/02046bac4352940d19d9164bab73b2f5cefc8c73/debuginfo...
(gdb) start
Temporary breakpoint 1 at 0xd18e0: file /build/gdb-Nav6Es/gdb-10.1/gdb/gdb.c, line 28.
Starting program: /usr/bin/gdb
Downloading separate debug info for /lib/x86_64-linux-gnu/libreadline.so.8...
Downloading separate debug info for /lib/x86_64-linux-gnu/libz.so.1...
Downloading separate debug info for /lib/x86_64-linux-gnu/libncursesw.so.6...
Downloading separate debug info for /lib/x86_64-linux-gnu/libtinfo.so.6...
Downloading separate debug info for /tmp/.cache/debuginfod_client/d6920dbdd057f44edaf4c1fbce191b5854dfd9e6/debuginfo...
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Downloading separate debug info for /lib/x86_64-linux-gnu/libexpat.so.1...
Downloading separate debug info for /lib/x86_64-linux-gnu/liblzma.so.5...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libbabeltrace.so.1...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libbabeltrace-ctf.so.1...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libipt.so.2...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libmpfr.so.6...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libsource-highlight.so.4...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libxxhash.so.0...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libdebuginfod.so.1...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libstdc++.so.6...
Downloading separate debug info for /lib/x86_64-linux-gnu/libgcc_s.so.1...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0...
Downloading separate debug info for /tmp/.cache/debuginfod_client/dbfea245d26065975b4084f4e9cd2d83c65973ee/debuginfo...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libdw.so.1...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libelf.so.1...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libuuid.so.1...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libgmp.so.10...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libboost_regex.so.1.74.0...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4...
Downloading separate debug info for /lib/x86_64-linux-gnu/libbz2.so.1.0...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libicui18n.so.67...
Downloading separate debug info for /tmp/.cache/debuginfod_client/acaa831dbbc8aa70bb2131134e0c83206a0701f9/debuginfo...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libicuuc.so.67...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libnghttp2.so.14...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libidn2.so.0...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/librtmp.so.1...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libssh2.so.1...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libpsl.so.5...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libnettle.so.8...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libgnutls.so.30...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libbrotlidec.so.1...
Downloading separate debug info for /tmp/.cache/debuginfod_client/39739740c2f8a033de95c1c0b1eb8be445610b31/debuginfo...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libunistring.so.2...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libhogweed.so.6...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libgcrypt.so.20...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libp11-kit.so.0...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libtasn1.so.6...
Downloading separate debug info for /lib/x86_64-linux-gnu/libcom_err.so.2...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libsasl2.so.2...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libbrotlicommon.so.1...
Downloading separate debug info for /lib/x86_64-linux-gnu/libgpg-error.so.0...
Downloading separate debug info for /usr/lib/x86_64-linux-gnu/libffi.so.7...
Downloading separate debug info for /lib/x86_64-linux-gnu/libkeyutils.so.1...
Temporary breakpoint 1, main (argc=1, argv=0x7fffffffebf8) at /build/gdb-Nav6Es/gdb-10.1/gdb/gdb.c:28
28 /build/gdb-Nav6Es/gdb-10.1/gdb/gdb.c: Directory not empty.
(gdb) list
23 in /build/gdb-Nav6Es/gdb-10.1/gdb/gdb.c
(gdb)
(See all those `Downloading separate debug info for...` lines? Nice!)
As you can see, when we try to `list` the contents of the file we’re in, nothing shows up. This happens because GDB doesn’t know where the file is, so you have to tell it. In this case, it’s relatively easy: you can see that the GDB package’s build directory is `/build/gdb-Nav6Es/gdb-10.1/`. When you `apt source gdb`, you will have a directory called `$PWD/gdb-10.1/` containing the full source of the package. Notice that the last directory’s name in both paths is the same, so in this case we can use GDB’s `set substitute-path` command to do the job for us (in this example `$PWD` is `/tmp/`):
$ HOME=/tmp DEBUGINFOD_URLS=https://debuginfod.debian.net gdb -q gdb
Reading symbols from gdb...
Reading symbols from /tmp/.cache/debuginfod_client/02046bac4352940d19d9164bab73b2f5cefc8c73/debuginfo...
(gdb) set substitute-path /build/gdb-Nav6Es/ /tmp/
(gdb) start
Temporary breakpoint 1 at 0xd18e0: file /build/gdb-Nav6Es/gdb-10.1/gdb/gdb.c, line 28.
Starting program: /usr/bin/gdb
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Temporary breakpoint 1, main (argc=1, argv=0x7fffffffebf8) at /build/gdb-Nav6Es/gdb-10.1/gdb/gdb.c:28
warning: Source file is more recent than executable.
28 memset (&args, 0, sizeof args);
(gdb) list
23 int
24 main (int argc, char **argv)
25 {
26 struct captured_main_args args;
27
28 memset (&args, 0, sizeof args);
29 args.argc = argc;
30 args.argv = argv;
31 args.interpreter_p = INTERP_CONSOLE;
32 return gdb_main (&args);
(gdb)
Much better, huh? The problem is that this process is manual, and will change depending on how the package you’re debugging was built.
What can we do to improve this? What I personally would like to see is something similar to what the Fedora project already does: create a new debug package which will contain the full, patched source package. This would mean changing our building infrastructure and possibly other somewhat complex things.
At the time of this writing, I am working on an `elfutils` merge request whose purpose is to implement a debconf question to ask the user whether she wants to use our service by default.
If you would like to start using the service right now, all you have to do is set the following environment variable in your shell:
export DEBUGINFOD_URLS="https://debuginfod.debian.net"
You can find more information about our `debuginfod` service here. Try to keep an eye on the page, as it’s constantly updated.
If you’d like to get in touch with me, my email is my domain at debian dot org.
I sincerely believe that this service is a step in the right direction, and hope that it can be useful to you :-).
I have my own personal opinions about the current review system we use (mailing list-based, in a nutshell), and I didn’t feel very confident expressing them during the discussion. Anyway, the outcome was that at least 3 global maintainers have used or are currently using the Gerrit Code Review system for other projects, are happy with it, and think we should give it a try. Then, when it was time to decide who wanted to configure and set things up for the community, I volunteered. Hey, I’m already running the Buildbot master for GDB, what’s the problem with managing yet another service? Oh, well.
Before we dive into the details involved in configuring and running gerrit on a machine, let me first say that I don’t totally support the idea of migrating from the mailing list to gerrit. I volunteered to set things up because I felt the community (or at least its most active members) wanted to try it out. I don’t necessarily agree with the choice.
Ah, and I’m writing this post mostly because I want to be able to close the 300+ tabs I had to open on my Firefox during these last weeks, when I was searching how to solve the myriad of problems I faced during the set up!
My very initial plan after I left the session room was to talk to the sourceware.org folks and ask them if it would be possible to host our gerrit there. Surprisingly, they already have a gerrit instance up and running. It’s been set up back in 2016, it’s running an old version of gerrit, and is pretty much abandoned. Actually, saying that it has been configured is an overstatement: it doesn’t support authentication, user registration, barely supports projects, etc. It’s basically what you get from a pristine installation of the gerrit RPM package in RHEL 6.
I won’t go into details here, but after some discussion it was clear to me that the instance on sourceware would not be able to meet our needs (or at least what I had in mind for us), and that it would be really hard to bring it to the quality level I wanted. I decided to go look for other options.
Have I mentioned the OSCI project before? They are absolutely awesome. I really love working with them, because so far they’ve been able to meet every request I made! So, kudos to them! They’re the folks that host our GDB Buildbot master. Their infrastructure is quite reliable (I never had a single problem), and Marc Dequénes (Duck) is very helpful, friendly and quick when replying to my questions :-).
So, it shouldn’t come as a surprise that when I decided to look for another place to host gerrit, they were my first choice. And again, they delivered :-).
Now, it was time to start thinking about the gerrit set up.
Over the course of these past 4 weeks, I had the opportunity to learn a bit more about how gerrit does things. One of the first things that negatively impressed me was the fact that gerrit doesn’t handle user registration by itself. It is possible to have a very rudimentary user registration “system”, but it relies on the site administrator manually registering the users (via `htpasswd`) and managing everything by him/herself.
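For the record, that rudimentary scheme amounts to delegating authentication to the web server and telling gerrit to trust whatever user the server reports; in `etc/gerrit.config` it looks roughly like this (a sketch based on gerrit’s HTTP auth type):

```
[auth]
	type = HTTP
```

On the Apache side, this would be paired with `AuthType Basic` and an `AuthUserFile` maintained by hand with `htpasswd(1)`, which is exactly the manual user management described above.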
It was quite obvious to me that we would need some kind of access control (we’re talking about a GNU project, with a copyright assignment requirement in place, after all), and the best way to implement it is by having registered users. And so my quest for the best user registration system began…
Gerrit supports some user authentication schemes, such as OpenID (not OpenID Connect!), OAuth2 (via plugin) and LDAP. I remembered hearing about FreeIPA a long time ago, and thought it made sense to use it. Unfortunately, the project’s community told me that installing FreeIPA on a Debian system is really hard, and since our VM is running Debian, it quickly became obvious that I should look somewhere else. I felt a bit sad at the beginning, because I thought FreeIPA would really be our silver bullet here, but then I noticed that it doesn’t really offer a self-service user registration.
After exchanging a few emails with Marc, he told me about Keycloak. It’s a full-fledged Identity Management and Access Management software, supports OAuth2, LDAP, and provides a self-service user registration system, which is exactly what we needed! However, upon reading the description of the project, I noticed that it is written in Java (JBOSS, to be more specific), and I was afraid that it was going to be very demanding on our system (after all, gerrit is also a Java program). So I decided to put it on hold and take a look at using LDAP…
Oh, man. Where do I start? Actually, I think it’s enough to say that I just tried installing OpenLDAP, but gave up because it was too cumbersome to configure. Have you ever heard that LDAP is really complicated? I’m afraid this is true. I just didn’t feel like wasting a lot of time trying to understand how it works, only to have to solve the “user registration” problem later (because of course, OpenLDAP is just an LDAP server).
OK, so what now? Back to Keycloak it is. I decided that instead of thinking that it was too big, I should actually install it and check it for real. Best decision, by the way!
It’s pretty easy to set Keycloak up. The official website provides a `.tar.gz` file which contains the whole directory tree for the project, along with helper scripts, `.jar` files, configuration, etc. From there, you just need to follow the documentation, edit the configuration, and voilà.
For our specific setup I chose to use PostgreSQL instead of the built-in database. This is a bit more complicated to configure, because you need to download the JDBC driver, and install it in a strange way (at least for me, who is used to just editing a configuration file). I won’t go into details on how to do this here, because it’s easy to find on the internet. Bear in mind, though, that the official documentation is really incomplete when covering this topic! This is one of the guides I used, along with this other one (which covers MariaDB, but can be adapted to PostgreSQL as well).
Another interesting thing to notice is that Keycloak expects to be running on its own virtual domain, and not under a subdirectory (e.g., `https://example.org` instead of `https://example.org/keycloak`). For that reason, I chose to run our instance on another port. It is supposedly possible to configure Keycloak to run under a subdirectory, but it involves editing a lot of files, and I confess I couldn’t make it fully work.
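For the WildFly-based Keycloak distribution, moving the whole instance to different ports is a single-flag affair when launching the server (a sketch; the offset value here is arbitrary):

```shell
# Shift every socket binding (HTTP, HTTPS, management, ...) by 100,
# so Keycloak's HTTP port becomes 8180 instead of the default 8080.
./bin/standalone.sh -Djboss.socket.binding.port-offset=100
```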
A last thing worth mentioning: the official documentation says that Keycloak needs Java 8 to run, but I’ve been using OpenJDK 11 without problems so far.
The fun begins now!
The gerrit project also offers a `.war` file ready to be deployed. After you download it, you can execute it and initialize a gerrit project (or application, as it’s called). Gerrit will create a directory full of interesting stuff; the most important for us is the `etc/` subdirectory, which contains all of the configuration files for the application.
After initializing everything, you can try starting gerrit to see if it works. This is where I had my first trouble. Gerrit also requires Java 8, but unlike Keycloak, it doesn’t work out of the box with OpenJDK 11. I had to make a small but important addition to the file `etc/gerrit.config`:
[container]
...
javaOptions = "--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED"
...
After that, I was able to start gerrit. And then I started trying to set it up for OAuth2 authentication using Keycloak. This took a very long time, unfortunately. I was having several problems with Gerrit, and I wasn’t sure how to solve them. I tried asking for help on the official mailing list, and was able to make some progress, but in the end I figured out what was missing: I had forgotten to add the `AllowEncodedSlashes On` directive in the Apache configuration file! This was causing a very strange error on Gerrit (a `java.lang.StringIndexOutOfBoundsException`!), which didn’t make sense. In the end, my Apache config file looks like this:
<VirtualHost *:80>
ServerName gnutoolchain-gerrit.osci.io
RedirectPermanent / https://gnutoolchain-gerrit.osci.io/r/
</VirtualHost>
<VirtualHost *:443>
ServerName gnutoolchain-gerrit.osci.io
RedirectPermanent / /r/
SSLEngine On
SSLCertificateFile /path/to/cert.pem
SSLCertificateKeyFile /path/to/privkey.pem
SSLCertificateChainFile /path/to/chain.pem
# Good practices for SSL
# taken from: <https://mozilla.github.io/server-side-tls/ssl-config-generator/>
# intermediate configuration, tweak to your needs
SSLProtocol all -SSLv3
SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
SSLHonorCipherOrder on
SSLCompression off
SSLSessionTickets off
# OCSP Stapling, only in httpd 2.3.3 and later
#SSLUseStapling on
#SSLStaplingResponderTimeout 5
#SSLStaplingReturnResponderErrors off
#SSLStaplingCache shmcb:/var/run/ocsp(128000)
# HSTS (mod_headers is required) (15768000 seconds = 6 months)
Header always set Strict-Transport-Security "max-age=15768000"
ProxyRequests Off
ProxyVia Off
ProxyPreserveHost On
<Proxy *>
Require all granted
</Proxy>
AllowEncodedSlashes On
ProxyPass /r/ http://127.0.0.1:8081/ nocanon
#ProxyPassReverse /r/ http://127.0.0.1:8081/r/
</VirtualHost>
I confess I was almost giving up Keycloak when I finally found the problem…
Anyway, after that things went more smoothly. I was finally able to make the user authentication work, then I made sure Keycloak’s user registration feature also worked OK…
Ah, one interesting thing: the user logout wasn’t really working as expected. The user was able to logout from gerrit, but not from Keycloak, so when the user clicked on “Sign in”, Keycloak would tell gerrit that the user was already logged in, and gerrit would automatically log the user in again! I was able to solve this by redirecting the user to Keycloak’s logout page, like this:
[auth]
...
logoutUrl = https://keycloak-url:port/auth/realms/REALM/protocol/openid-connect/logout?redirect_uri=https://gerrit-url/
...
After that, it was already possible to start worrying about configuring gerrit itself. I don’t know if I’ll write a post about that, but let me know if you want me to.
If you ask me if I’m totally comfortable with the way things are set up now, I can’t say that I am 100%. I mean, the setup seems robust enough that it won’t cause problems in the long run, but what bothers me is the fact that I’m using technologies that are alien to me. I’m used to setting up things written in Python, C, or C++, with very simple yet powerful configuration mechanisms, where it’s easy to discover what’s wrong when something bad happens.
I am reasonably satisfied with the way Keycloak logs things, but Gerrit leaves a lot to be desired in that area. And both projects are written in languages/frameworks that I am absolutely not comfortable with. Like, it’s really tough to debug something when you don’t even know where the code is or how to modify it!
All in all, I’m happy that this whole adventure has come to an end, and now all that’s left is to maintain it. I hope that the GDB community can make good use of this new service, and I hope that we can see a positive impact in the quality of the whole patch review process.
My final take is that this is all worth it as long as Free Software and user freedom are the ones that benefit.
P.S.: Before I forget, our gerrit instance is running at https://gnutoolchain-gerrit.osci.io.
`gcore` command did not respect the `COREFILTER_ELF_HEADERS` flag, which instructs it to dump memory pages containing ELF headers.
As you may or may not remember, I have already written about the broader topic of revamping GDB’s internal corefile dump algorithm; it’s an interesting read and I recommend it if you don’t know how Linux (or GDB) decides which mappings to dump to a corefile.
Anyway, even though the bug was interesting and had to do with work I’d done before, I couldn’t really work on it at the time, so I put it on the TODO list. Of course, the “TODO list” is actually a crack through which most things fall and are usually never seen again, so I was blissfully ignoring this request because I had other major priorities to deal with. That is, until a seemingly unrelated problem forced me to face it once and for all!
As the Fedora GDB maintainer, I routinely prepare new releases for the Fedora Rawhide distribution, and sometimes for the stable versions of the distro as well. I try to be very careful when dealing with new releases, because a regression introduced now can come back and bite us (i.e., the Red Hat GDB team) many years in the future, when it’s sometimes too late or too difficult to fix things. So a mandatory part of every release preparation is to actually run a regression test against the previous release, and make sure that everything is working correctly.
One of these days, some weeks ago, I had finished running the regression check for the release I was preparing when I noticed something strange: a specific, Fedora-only corefile test was FAILing. That’s a no-no, so I started investigating and found that the underlying reason was that, when the corefile was being generated, the build-id note from the executable was not being copied over. Fedora GDB has a local patch whose job is to, given a corefile with a build-id note, locate the corresponding binary that generated it. Without the build-id note, no binary was being located.
Coincidentally or not, at the same time I started noticing some users reporting very similar build-id issues on freenode’s #gdb channel, and I thought this bug had the potential to become a big headache for us if nothing was done to fix it right away.
I asked for some help from the team, and we managed to discover that the problem was also happening with upstream gcore, and that it was probably something that binutils was doing, and not GDB. Hmm…
ld’s fault. Or is it?

So there I went, trying to confirm that it was binutils’s fault, and not GDB’s. Of course, if I could confirm this, then I could also tell the binutils guys to fix it, which meant less work for us :-).
With a lot of help from Keith Seitz, I was able to bisect the problem and found that it started with the following commit:
commit f6aec96dce1ddbd8961a3aa8a2925db2021719bb
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Tue Feb 27 11:34:20 2018 -0800
ld: Add --enable-separate-code
This is a commit that touches the linker, which is part of binutils. So that means this is not GDB’s problem, right?!? Hmm. No, unfortunately not.
What the commit above does is simply enable the use of --enable-separate-code (or -z separate-code) by default when linking an ELF program on x86_64 (more on that later). At first glance, this change should not impact corefile generation, and indeed, if you tell the Linux kernel to generate a corefile (for example, by doing sleep 60 & and then hitting C-\), you will notice that the build-id note is included in it! So GDB was still a suspect here. The investigation needed to continue.
What is -z separate-code?

The -z separate-code option makes the linker put the ELF file’s code in a segment completely separate from the data segment. This was done to increase the security of generated binaries: before it, everything (code and data) was put together in the same memory region. What this means in practice is that, before, you would see something like this when you examined /proc/PID/smaps:
00400000-00401000 r-xp 00000000 fc:01 798593 /file
Size: 4 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Rss: 4 kB
Pss: 4 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 4 kB
Referenced: 4 kB
Anonymous: 4 kB
LazyFree: 0 kB
AnonHugePages: 0 kB
ShmemPmdMapped: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Swap: 0 kB
SwapPss: 0 kB
Locked: 0 kB
THPeligible: 0
VmFlags: rd ex mr mw me dw sd
And now, you will see two memory regions instead, like this:
00400000-00401000 r--p 00000000 fc:01 799548 /file
Size: 4 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Rss: 4 kB
Pss: 4 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 4 kB
Private_Dirty: 0 kB
Referenced: 4 kB
Anonymous: 0 kB
LazyFree: 0 kB
AnonHugePages: 0 kB
ShmemPmdMapped: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Swap: 0 kB
SwapPss: 0 kB
Locked: 0 kB
THPeligible: 0
VmFlags: rd mr mw me dw sd
00401000-00402000 r-xp 00001000 fc:01 799548 /file
Size: 4 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Rss: 4 kB
Pss: 4 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 4 kB
Referenced: 4 kB
Anonymous: 4 kB
LazyFree: 0 kB
AnonHugePages: 0 kB
ShmemPmdMapped: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Swap: 0 kB
SwapPss: 0 kB
Locked: 0 kB
THPeligible: 0
VmFlags: rd ex mr mw me dw sd
A few minor things have changed, but the most important of them is the
fact that, before, the whole memory region had anonymous data in
it, which means that it was considered an anonymous private
mapping (anonymous because of the non-zero Anonymous amount of
data; private because of the p
in the r-xp
permission bits).
After -z separate-code
was made default, the first memory mapping
does not have Anonymous contents anymore, which means that it is
now considered to be a file-backed private mapping instead.
It is important to mention that, unlike the Linux kernel, GDB doesn’t
have all of the necessary information readily available to decide the
exact type of a memory mapping, so when I revamped this code back in
2015 I had to create some heuristics to try and determine this
information. If you’re curious, take a look at the linux-tdep.c
file on GDB’s source tree, specifically at the
functions
dump_mapping_p
and
linux_find_memory_regions_full
.
When GDB is deciding which memory regions should be dumped into the corefile, it respects the value found in the /proc/PID/coredump_filter file. The default value for this file is 0x33, which, according to core(5), means:

Dump memory pages that are either anonymous private, anonymous shared, ELF headers or HugeTLB.
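For reference, here is a minimal sketch (the helper name is mine, not from GDB) that decodes a coredump_filter value using the bit assignments documented in core(5):

```python
# Bit assignments for /proc/PID/coredump_filter, as documented
# in the core(5) manpage.
COREDUMP_FILTER_BITS = {
    0x01: "anonymous private mappings",
    0x02: "anonymous shared mappings",
    0x04: "file-backed private mappings",
    0x08: "file-backed shared mappings",
    0x10: "ELF headers",
    0x20: "private huge pages",
    0x40: "shared huge pages",
}

def decode_coredump_filter(value):
    """Return the list of mapping types selected by a filter value."""
    return [name for bit, name in sorted(COREDUMP_FILTER_BITS.items())
            if value & bit]

# The default value, 0x33, selects exactly the four types quoted
# above from core(5).
print(decode_coredump_filter(0x33))
# → ['anonymous private mappings', 'anonymous shared mappings',
#    'ELF headers', 'private huge pages']
```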
GDB had support implemented for dumping almost all of these pages, except for the ELF headers variety. And, as you can probably infer, this means that, before the -z separate-code change, the very first memory mapping of the executable was being dumped, because it was marked as anonymous private. However, after the change, the first mapping (which contains only data, no code) wasn’t being dumped anymore, because it was now considered by GDB to be a file-backed private mapping!
Finally, that is the reason for the difference between corefiles generated by GDB and Linux, and also the reason why the build-id note was not being included in the corefile anymore! You see, the first memory mapping contains not only the program’s data, but also its ELF headers, which in turn contain the build-id information.
gcore, meet ELF headers

The solution was “simple”: I needed to improve the current heuristics and teach GDB how to determine if a mapping contains an ELF header or not. For that, I chose to follow the Linux kernel’s algorithm, which basically checks the first 4 bytes of the mapping and compares them against \177ELF, which is ELF’s magic number. If the comparison succeeds, then we just assume we’re dealing with a mapping that contains an ELF header and dump it.
In all fairness, Linux just dumps the first page (4K) of the mapping, in order to save space. It would be possible to make GDB do the same, but I chose the faster way and just dumped the whole mapping, which, in most scenarios, shouldn’t be a big problem.
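The check itself is tiny. Here is a sketch of it in Python (the helper function is hypothetical; GDB implements this in C, in linux-tdep.c, reading from the inferior’s memory rather than a buffer):

```python
# "\177ELF" (octal 177 = 0x7f) is the ELF magic number that every
# ELF file starts with.
ELF_MAGIC = b"\x7fELF"

def looks_like_elf_header(first_bytes):
    """Return True if a buffer starts with the ELF magic number."""
    return first_bytes[:4] == ELF_MAGIC

print(looks_like_elf_header(b"\x7fELF\x02\x01\x01"))  # True
print(looks_like_elf_header(b"\x00\x00\x00\x00"))     # False
```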
It’s also interesting to mention that GDB only performs this check if the mapping would not otherwise be dumped, and if the user has requested the dumping of mappings containing ELF headers (via coredump_filter). Linux also makes these checks, by the way.
I submitted the patch to the mailing list, and it was approved fairly quickly (with a few minor nits).
The reason I’m writing this blog post is that I’m very happy with, and proud of, the whole process. It wasn’t an easy task to investigate the underlying reason for the build-id failures, and it was interesting to come up with a solution that extended the work I did a few years ago. I was also able to close a few bug reports upstream, as well as the one reported against Fedora GDB.
The patch has been
pushed,
and is also present at the latest version of Fedora GDB for Rawhide.
It wasn’t possible to write a self-contained testcase for this
problem, so I had to resort to using an external tool (eu-unstrip
)
in order to guarantee that the build-id note is correctly present in
the corefile. But that’s a small detail, of course.
Anyway, I hope this was an interesting (albeit large) read!
This past Saturday, April 27th, 2019, Samuel Vale, Alex Volkov and I organized the Toronto Bug Squashing Party here in the city. I was very happy with the outcome, especially the fact that we had more than 10 people attending, including a bunch of folks who came from Montréal!
It was a cold day in Toronto, and we met at the Mozilla Toronto office at 9 in the morning. Right there at the door I met anarcat, who had just arrived from Montréal. Together with Alex, we waited for Will to arrive and open the door for us. Then, some more folks started showing up, and we waited until 10:30h to start the first presentation of the day.
Anarcat kindly gave us his famous “Packaging 101” presentation, in which he explains the basics of Debian packaging. Here’s a picture of the presentation:
And another one:
The presentation was great, and Alex recorded it! You can watch it here (sorry, youtube link…).
During the day, we also taught a few tricks about the BTS, in order to help people file bugs, add/remove tags, comment on bugs, etc.
Then, we moved on to the actual hacking.
This part took most of the day, as was expected. We started by looking at the RC bugs currently filed against Buster, and deciding which ones would be interesting for us. I won’t go into details here, but I think we made great progress, considering this was the first BSP for many of us there (myself included).
You can look at the bugs we worked on, and you will see that we have actually fixed 6 of them! I even fixed a JavaScript bug, which is something totally out of my area of expertise ;-).
I also noticed something interesting: the way we look at bugs can vary wildly from one DD to another. I mean, this is something I always knew, especially when I was more involved with the debian-mentors effort, but it’s really amazing to experience it in person. I tend to be more picky when it comes to deciding what to do when I start working on a bug; I try really hard to reproduce it (and spend a lot of time doing so), and will dive deep into the code trying to understand why some test is failing. Other developers may be less “pedantic”, and choose to (e.g.) disable a certain failing test. In the end, I think everything is a balance, and I tried to learn from this experience.
Anyway, given that we looked at 12 bugs and solved 6, I think we did great! And this also helped me get my head “back in the Debian game”; I was too involved with GDB these past months (there’s a post coming soon about one of the things I did, stay tuned).
Look at us hacking:
At 19h (or 7 p.m.), we had to wrap up and prepare to go. Because we had a sizeable number of Brazilians in the group (5!), the logical thing to do was to go to a pub and resume the conversation there :-). If I said it was one of the first times I went to a pub to drink with newly made friends in Toronto, you probably wouldn’t believe it, so I won’t say anything…
I know one thing for sure: we want to do this again, and soon! In fact, my idea is to hold another one after Buster is released (and after the summer is gone, of course), so maybe October. We’ll see.
I would like to thank Mozilla Toronto for hosting us; it was awesome to finally visit their office and enjoy their hospitality, personified by Will Hawkins. It is impossible not to thank anarcat, who came all the way from Montréal to give us his Debian Packaging 101 talk. Speaking of the French-Canadian (and Brazilian), it was super awesome meeting Tiago Vaz and Tássia Camões, and it was great seeing Valessio Brito again.
Let me also thank the “locals” who attended the party; it was great seeing everybody there! Hope I can see everybody again when we make the second edition of our BSP :-).
We must tell that ignorant person that he does not know what Free Software is. We must say that Free Software is much bigger than GNU, much bigger than one person or his statements. We must say that the ignorant one has become a troll. We must say that he does not know what he is talking about, and that he should keep quiet. We must let him live out his troubled and at times mediocre adolescence, while taking care that it does not lead other ignorant people into becoming trolls too. This troll must leave Twitter, leave BR-[GNU/]Linux, leave the forums powered by proprietary things; or perhaps he should stay there, distilling his hatred, venom and ignorance among his peers.

We must fight the fake liberalism that serves as a vehicle for hatred. We must fight hatred. We must fight ignorance, again. We must fight reactionary attitudes disguised as “free market”; we must fight the lack of common sense in generalizing a political party from one behavior; we must fight the behavior itself; we must always pursue social progress; we must stop caring so much about those who do not care.

We must fight the ignorant pastor. We must fight ignorance, a third time. We must fight the “trolling” of the pastor, of the faithful and of their sympathizers. We must fight the wave of “conservative radicalism” that afflicts everyone. We must fight the lack of love for one’s neighbor and the excess of arrogance. We must fight the false divine words, the false wishes of an entity, the false public gatherings around an error.

We must fight the idiotic, ignorant and presumptuous TV host. We must fight the hatred being distilled in that country, because not everyone carries an antidote to the venom of a snake raised at home. We must fight ignorance, once more, because it is the easiest path to hatred, and hatred feeds ignorance back in a cycle that is hard to break. We must teach how to learn, and learn how to teach. We must fight laziness, that excuse so used and repeated that fighting it feels like too much work. We must get off the couch, but not to go to Twitter or Facebook; we must get off the couch and be critical enough to know what needs to be done, because it is not I who will say it.
First, I cannot begin this post without a few acknowledgements and “thank you”s. The first goes to Oleg Nesterov (sorry, I could not find his website), a Linux kernel guru who really helped me a lot through the whole task. Another “thank you” goes to Jan Kratochvil, who also provided valuable feedback by commenting on my GDB patch. Now, back to the point.
The task was requested
here: GDB
needed to respect the /proc/<PID>/coredump_filter
file when generating
a coredump (i.e., when you use the gcore
command).
Currently, GDB has its own coredump mechanism implemented which, despite its limitations and bugs, has been around for quite some time. However (and maybe you didn’t know this), the Linux kernel has its own algorithm for generating the corefile of a process. And unfortunately, GDB and Linux were not really following the same standards here…
So, in the end, the task was about synchronizing GDB and Linux. To do
that, I first had to decipher the contents of the /proc/<PID>/smaps
file.
The /proc/<PID>/smaps file

This special file, generated by the Linux kernel when you read it, contains detailed information about each memory mapping of a certain process. Some of the fields in this file are documented in the proc(5) manpage, but others are missing there (asking for a patch!). Here is an explanation of everything I needed:
The first line of each memory mapping has the following format:

address           perms offset  dev   inode      pathname

The fields here are:
a) address is the address range, in the process’ address space, that the mapping occupies. This part was already treated by GDB, so I did not have to worry about it.
b) perms is a set of permissions (r = read, w = write, x = execute, s = shared, p = private [COW – copy-on-write]) applied to the memory mapping. GDB was already dealing with rwx permissions, but I needed to include the p flag as well. I also made GDB ignore the mappings that did not have the r flag active, because it does not make sense to dump something that you cannot read.
c) offset is the offset into the file, if the mapping is file-backed (see below). GDB already handled this correctly.
d) dev is the device (major:minor) related to the file, if there is one. GDB already handled this correctly, though I was using this field for more things (continue reading).
e) inode is the inode on the device above. The value of zero means that no inode is associated with the memory mapping. Nothing to do here.
f) pathname is the file associated with this mapping, if there is one. This is one of the most important fields I had to use, and one of the most complicated to understand completely. GDB now uses this to heuristically identify whether the mapping is anonymous or not.
GDB is now also interested in the Anonymous: and AnonHugePages: fields from the smaps file. Those fields represent the amount of anonymous data in the mapping; if GDB finds that this amount is greater than zero, it means that the mapping is anonymous.
The last, but perhaps most important, field is the VmFlags: field. It contains a series of two-letter flags that provide very useful information about the mapping. A description of the relevant flags:

a) sh: the mapping is shared (VM_SHARED)
b) dd: this mapping should not be dumped in a corefile (VM_DONTDUMP)
c) ht: this is a HugeTLB mapping
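Mechanically, the header line and the Key: value fields discussed above are easy to extract. Here is a minimal, illustrative parser for smaps-formatted text (this is not GDB’s code, which is written in C; the helper names are mine):

```python
import re

# Matches the header line of a mapping:
#   address-range perms offset dev inode [pathname]
HEADER_RE = re.compile(
    r"^(?P<start>[0-9a-f]+)-(?P<end>[0-9a-f]+)\s+(?P<perms>\S+)\s+"
    r"(?P<offset>[0-9a-f]+)\s+(?P<dev>\S+)\s+(?P<inode>\d+)\s*(?P<pathname>.*)$")

def parse_smaps(text):
    """Yield one dict per mapping found in smaps-formatted text."""
    mapping = None
    for line in text.splitlines():
        m = HEADER_RE.match(line)
        if m:
            if mapping:
                yield mapping
            mapping = m.groupdict()
        elif mapping and ":" in line:
            # Detail lines like "Anonymous: 0 kB" or "VmFlags: rd mr ..."
            key, _, value = line.partition(":")
            mapping[key.strip()] = value.strip()
    if mapping:
        yield mapping

sample = """00400000-00401000 r--p 00000000 fc:01 799548 /file
Anonymous:             0 kB
VmFlags: rd mr mw me dw sd
"""
for m in parse_smaps(sample):
    print(m["perms"], m["pathname"], m["Anonymous"], m["VmFlags"])
```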
With that in hand, the next task was to be able to determine whether a memory mapping is anonymous or file-backed, private or shared. There can be four types of memory mappings:

a) anonymous private
b) anonymous shared
c) file-backed private
d) file-backed shared

It should be possible to uniquely identify each mapping based on the information provided by the smaps file; however, you will see that this is not always the case. Below, I will explain how to determine each of the four characteristics that define a mapping.
Anonymous

A mapping is anonymous if one of these conditions apply:

a) the pathname associated with it is either /dev/zero (deleted), /SYSV%08x (deleted), or <filename> (deleted) (see below);
b) there is content in the Anonymous: or in the AnonHugePages: fields of the mapping in the smaps file.

A special explanation is needed for the <filename> (deleted) case. It is not always guaranteed that it identifies an anonymous mapping; in fact, it is possible to have the (deleted) part in file-backed mappings as well (say, when you are running a program that uses shared libraries, and those shared libraries have been removed because of an update, for example). However, we are trying to mimic the behavior of the Linux kernel here, which checks whether a file has no hard links associated with it (and therefore is truly deleted).
Although it may be possible for userspace to do an extensive check (by stat-ing the file, for example), the Linux kernel certainly could give more information about this.
File-backed

A mapping is file-backed (i.e., not anonymous) if:

a) the pathname associated with it contains a <filename>, without the (deleted) part.

As has been explained above, a mapping whose pathname contains the (deleted) string could still be file-backed, but we decide to consider it anonymous.

It is also worth mentioning that a mapping can be simultaneously anonymous and file-backed: this happens when the mapping contains a valid pathname (without the (deleted) part), but also contains Anonymous: or AnonHugePages: contents.
Private

A mapping is considered to be private (i.e., not shared) if:

a) in the absence of the VmFlags field (in the smaps file), its permission field has the flag p;
b) if the VmFlags field is present, then the mapping is private if we do not find the sh flag there.

Shared

A mapping is shared (i.e., not private) if:

a) in the absence of VmFlags in the smaps file, the permission field of the mapping does not have the p flag. Not having this flag actually means VM_MAYSHARE and not necessarily VM_SHARED (which is what we want), but it is the best approximation we have;
b) if the VmFlags field is present, then the mapping is shared if we find the sh flag there.

With all that in mind, I hacked GDB to improve the coredump mechanism for GNU/Linux operating systems. The main function which decides which memory mappings will or will not be dumped on GNU/Linux is linux_find_memory_regions_full; the Linux kernel obviously uses its own function, vma_dump_size, to do the same thing.
Linux has one advantage: it is a kernel, and therefore has much more knowledge about processes’ internals than a userspace program. For example, inside Linux it is trivial to check whether a file marked as “(deleted)” in the output of the smaps file still has hard links associated with it (and therefore is not really deleted); the same operation in userspace, however, would require root access to inspect the contents of the /proc/<PID>/map_files/ directory.
The case described above, if you remember, is something that impacts the ability to tell whether a mapping is anonymous or not. I am talking to the Linux kernel guys to see if it is possible to export this information directly via the smaps file, instead of having to rely on the current heuristic.
While doing this work, some strange behaviors were found in the Linux
kernel. Oleg is working on them, along with other Linux hackers. From
our side, there is still room for improvement on this code. The first
thing I can think of is to improve the heuristics for finding anonymous
mappings. Another relatively easy thing to do would be to let the user
specify a value for coredump_filter
on the command line, without
editing the /proc
file. And of course, keep this code always updated
with its counterpart in the Linux kernel.
If you are interested, you can see the discussions that happened upstream by going to this link. This is the fourth (and final) submission of the patch; you should be able to find the other submissions in the archive.
The final commit can be found in the official repository.
I cannot say why, but sometimes I have a silly habit: I like to go looking for trouble (“looking for an itch to scratch”, as we say in Portuguese). In other words, I keep seeking out things that make me feel bad, even knowing that I will feel bad after seeing them.

I have no explanation for this behavior. It is somewhat self-sabotaging, somewhat masochistic, somewhat… I do not know. Sometimes, when I find myself in this vicious cycle again, I manage to stop. Most of the time, however, I enter a strange state: it is as if I were observing myself, studying what consequences that act brings me. I keep wondering whether I am the only person in the world who does this…

I think a good example of this kind of behavior is what I have been doing lately. Sometimes, for some reason unknown to me, I read awful things written by extremely unreasonable people. And, perhaps for the same mysterious reason, I feel bad about what I read, even knowing that, if you weigh what those people do against what I do, the difference is gigantic. So why on earth do I feel bad when I read the nonsense practically vomited by these people?

Maybe some people (myself included) have a radar for strong feelings. For example, a gesture of altruism is something that can touch the depths of the soul, and deserves to be appreciated like a rare wine. In contrast, an expression of anger, contempt or incomprehension also captures one’s attention in an almost inevitable way. The mystery hidden behind such a gesture, often an incoherent one, is something that leaves me almost obsessed, as if I were reading a book and did not want to stop before reaching the end. Why does a person take on a role that is at times ridiculous, merely on account of an opinion? Why does that person, in their eagerness to criticize a behavior, a way of thinking, or an ideology, so often exhibit exactly the same traits they repudiate? What makes a human being, full of flaws and limitations, climb onto an (often false) pedestal and forget that they were once down below?

Fortunately, the questions above, intriguing as they are, have not held me for long. I think that, in this learning process we call “life”, I am at a point where I clearly perceive the chaos that reigns in those people’s heads, and I try to keep away from it. More importantly, though, I realize that you can choose to be the change you want to see in the world (Gandhi), or keep barking while the caravan moves on… And I definitely do not want to waste my time comparing code to decide who is better.
As I said, I agree with him. But I am going through a lot of situations in my life that are constantly reminding me that, maybe, I am that “radical” after all. I do not know whether this is good or bad, and I can say I have been questioning myself for a while now. This post, by the way, is going to be a lot about self-questioning.
Maybe the problem is that I am expecting too much from those that have the same beliefs that I do. Or maybe the cause is that I do not know what to expect from them in certain situations, and I am disappointed when I see that they do not follow what I think is best sometimes. On the other hand, when I look myself in the mirror, I do not know whether I am totally following what I think is best; and if I am not, then how can I even consider telling others to do that? And even if I am following my own advices, how can I be sure that they are good enough for others?
One good example of this is my opinion about FSF’s use of Twitter. The opinion is public, and has been criticized by many people already, including Free Software supporters. Shortly after I wrote the post, I mentioned it to Richard Stallman, and he told me he was not going to read it because he considered it “too emotional”. I felt deeply sad because of his reaction, especially because it came from someone who often appeals to emotions in order to teach what he has to say. But I also started questioning myself about the topic.
Is it really bad to use Twitter? This is what I ask myself sometimes. I see so many people using it, including those who defend Free Software as I do (like Matt Lee), or those who stand against privacy abuses (like Jacob Appelbaum), or who are worried about social causes, or… Yeah, you got the point. I refuse to believe that they did not think about Twitter’s issues, or about how they would be endorsing its use by using it themselves. Yet, they are there, and a lot of people are following their posts and discussing their opinions and ideas for a better world. As much as I try to understand their motivation for using Twitter (or even Facebook), I cannot convince myself that what they are doing is good for their goals. Am I being too narrow-minded? Am I missing something?
Another example are my thoughts about Free Software programs that support (and sometimes even promote) unethical services. They (the thoughts) are also public. And it seems that this opinion, which is about something I called “Respectful Software”, is too strong (or “radical”?) for the majority of the developers, even considering Free Software developers. I saw very good arguments on why Free Software should support unethical services, and it is hard to disagree with them. I guess the best of those arguments is that when you support unethical services like Facebook, you are offering a Free Software option for those who want or need to use the service. In other words, you are helping them to slowly get rid of the digital handcuffs.
It seems like all those arguments (about Twitter, about implementing support for proprietary systems on Free Software, and others) are ultimately about reaching users that would otherwise remain ignorant of the Free Software philosophy. And how can someone have counter-arguments for this? It is impossible to argue that we do not need to take the Free Software message to everybody, because when someone does not use Free Software, she is doing harm to her community (thus, we want more people using Free Software, of course). When the Free Software Foundation makes use of Twitter to bring more people to the movement, and when I see that despite talking to people all around me I can hardly convince them to try GNU/Linux, who am I to criticize the FSF?
So, I have been thinking to myself whether it is time to change. What I am realizing more and more is that my fight for coherence perhaps is flawed. We are incoherent by nature. And the truth is that, no matter what we do, people change according to their own time, their own will, and their own beliefs (or to the lack of them). I remembered something that I once heard: changing is not binary, changing is a process. So, after all, maybe it is time to stop being a “GNU radical” (in the sense that I am radical even for the GNU project), and become a new type of activist.
The question, strange as it may sound, is not only valid but also becoming more and more important these days. If you think that the four freedoms are enough to guarantee that the Free Software will respect the user, you are probably being oversimplistic. The four freedoms are essential, but they are not sufficient. You need more. I need more. And this is why I think the Free Software movement should have been called the Respectful Software movement.
I know I will probably hear that I am too radical. And I know I will hear it even from those who defend Free Software the way I do. But I need to express this feeling I have, even though I may be wrong about it.
It all began as an innocent comment. I make lots of presentations and talks about Free Software, and, knowing that the word “Free” is ambiguous in English, I started joking that Richard Stallman should have named the movement “Respectful Software”, instead of “Free Software”. If you think about it just a little, you will see that “respect” is a word that brings different interpretations to different people, just as “free” does. It is a subjective word. However, at least it does not have the problem of referring to completely unrelated things such as “price” and “freedom”. Respect is respect, and everybody knows it. What can change (and often does) is what a person considers respectful or not.
(I am obviously not considering the possible ambiguity that may exist in another language with the word “respect”.)
So, back to the software world. I want you to imagine a Free Software. For example, let’s consider one that is used to connect to so-called “social networks” like GNU Social or pump.io. I do not want to use a specific example here; I am more interested in the consequences of a certain decision. Which decision? Keep reading :-).
Now, let’s imagine that this Free Software is just beginning its life, probably in some code repository under the control of its developer(s), but most likely using some proprietary service like GitHub (which is an issue by itself). And probably the developer is thinking: “Which social network should my software support first?”. This is an extremely valid and important question, but sometimes the developer comes up with an answer that may not be satisfactory to its users. This is where the “respect” comes into play.
In our case, this bad answer would be “Facebook”, “Twitter”, “Linkedin”, or any other unethical social network. However, those are exactly the easiest answers for many and many Free Software developers, either because those “vampiric” services are popular among users, or because the developer him/herself uses them!! By now, you should be able to see where I am getting at. My point, in a simple question, is: “How far should we, Free Software developers, allow users to go and harm themselves and the community?”. Yes, this is not just a matter of self-inflicted restrictions, as when the user chooses to use a non-free software to edit a text file, for example. It is, in most cases, a matter of harming the community too. (I have written a post related to this issue a while ago, called “Privacy as a Collective Good”.)
It should be easy to see that it does not matter whether you are using Facebook through a shiny Free Software application on your computer or cellphone. What really matters is that, when doing so, you are basically supporting the use of those unethical social networks, to the point that perhaps some of your friends start using them because of you. What does it matter whether they are using Free Software to access them or not? Is the benefit offered by the Free Software big enough to eliminate (or even soften) the problems that exist when the user uses an unethical service like LinkedIn?
I wonder, though, what is the limit that we should obey. Where should we draw the line and say “I will not pass beyond this point”? Should we just “abandon” the users of those unethical services and social networks, while we lock ourselves in our not-very-safe world? After all, we need to communicate with them in order to bring them to our cause, but it is hard doing so without getting our hands dirty. But that is a discussion to another post, I believe.
Meanwhile, I could give plenty of examples of existing Free Software programs that are doing a disservice to the community by allowing (and even promoting) unethical services or solutions for their users. They are disrespecting their users, sometimes exploiting the fact that many users are not fully aware of the privacy issues that come as a “gift” when using those services, without spending any effort to educate them. However, I do not want this post to become a flamewar, so I will not mention any software explicitly. I think it should be quite easy for the reader to find examples out there.
Perhaps this post does not have a conclusion. I have not completely made up my mind about the subject myself, though I am obviously leaning towards what most people would call the “radical” solution. But it is definitely not an easy topic to discuss, or to argue about. Nonetheless, we are closing our eyes to it, and we should not. The future of Free Software also depends on what kinds of services we promote, and what kinds of services we actually warn users against. This is my definition of respect, and this is why I think we should develop Free and Respectful Software.
]]>Anyway, as one might expect, configuring GNU/Linux on notebooks is becoming harder as time goes by, either because of the infamous Secure Boot (anti-)feature, or because they come with more and more devices that demand proprietary crap to be loaded. But fortunately, it is still possible to overcome most of those problems and get a GNU/Linux distro running.
As main references, I used the following websites:
I also used other references for small problems that I had during the configuration, and I will list them when needed.
The first thing you will probably want to do is to make a recovery image of the ChromeOS that comes pre-installed on the machine, in case things go wrong. Unfortunately, to do that you need to have a Google account, otherwise the system will fail to record the image. So, if you want to let Google know that you bought a Chromebook, log into the system, open Chrome, and go to the special URL chrome://imageburner. You will need a 4 GiB pendrive/sdcard. It should be pretty straightforward to do the recording from there.
Now comes the hard part. This notebook comes with a write-protect screw. You might be thinking: what is the purpose of this screw?
Well, the thing is: Chromebooks come with their own boot scheme, which unfortunately doesn’t work to boot Linux. However, newer models also offer a “legacy boot” option (SeaBIOS), and this can boot Linux. So far, so good, but…
When you switch to SeaBIOS (details below), the system will complain that it cannot find ChromeOS, and will ask if you want to reinstall the system. This will happen every time you boot the machine, because the system is still entering the default BIOS. In order to activate SeaBIOS, you have to press CTRL-L (Control + L) every time you boot! And this is where the screw comes into play.
If you remove the write-protect screw, you will be able to make the system use SeaBIOS by default, and therefore will not need to worry about pressing CTRL-L every time. Sounds good? Maybe not so much…
The first thing to consider is that you will lose your warranty the moment you open the notebook case. As I was not very concerned about that, I decided to try to remove the screw, and guess what happened? I stripped the screw! I am still not sure why, because I was using the correct screwdriver for the job, but when I tried to remove the screw, it behaved like butter and started to “decompose”!
Anyway, after spending many hours trying to figure out a way to remove the screw, I gave up. My intention is to always suspend the system, so I rarely need to press CTRL-L anyway…
Well, that’s all I have to say about this screwed screw. If you decide to try removing it, keep in mind that I cannot help you in any way, and that you are entirely responsible for what happens.
Now, let’s install the system :-).
You need to enable the Developer Mode in order to be able to enable SeaBIOS. To do that, follow these steps from the Arch[GNU/]Linux wiki page.
I don’t remember whether this step works if you have not activated ChromeOS (i.e., if you don’t have a Google account associated with the device). In my case, I just created a fake account to be able to proceed.
Now, you will need to access the superuser (root) shell inside ChromeOS to enable SeaBIOS. Follow the steps described in the Arch[GNU/]Linux wiki page. For this specific step, you don’t need to log in, which is good.
We’re almost there! The last step before you boot your Fedora LiveUSB is to actually enable SeaBIOS. Just go inside your superuser shell (from the previous step) and type:
> crossystem dev_boot_usb=1 dev_boot_legacy=1
And that’s it!
If you managed to successfully remove the write-protect screw, you may also want to make SeaBIOS the default boot option. To do that, there is a guide, again on the Arch[GNU/]Linux wiki. DO NOT DO THIS IF YOU DID NOT REMOVE THE WRITE-PROTECT SCREW!!!!
Now, we should finally be able to boot Fedora! Remember, you will have to press CTRL-L after you reboot (if you have not removed the write-protect screw), otherwise the system will just complain and not boot into SeaBIOS. So, press CTRL-L, choose the boot order (you will probably want to boot from USB first, if your Fedora is on a USB stick), choose to boot the live Fedora image, and… bum!! You will probably see a message complaining that there was not enough memory to boot (the message is “Not enough memory to load specified image”).
You can solve that by passing the mem parameter to Linux. So, when GRUB complains that it was unable to load the specified image, it will give you a command prompt (boot:), and you just need to type:
boot: linux mem=1980M
And that’s it, things should work.
I won’t guide you through the installation process; I just want to remind you that you have a 32 GiB SSD, so think carefully before you decide how to set up the partitions. What I did was to reserve 1 GB for swap and give all the rest to the root partition (i.e., I did not create a separate /home partition).
You will also notice that the touchpad does not work (neither does the touchscreen). So you will have to do the installation using a USB mouse for now.
I strongly recommend you to read this Fedora bug, which is mostly about the touchpad/touchscreen support, but also covers other interesting topics as well.
Anyway, the bug is still being constantly updated, because the proposed patches to make the touchpad/touchscreen work were not fully integrated into Linux yet. So, depending on the version of Linux that you are running, you will probably need to run a different version of the scripts that are being kindly provided in the bug.
As of this writing, I am running Linux 3.16.2-201.fc20, and the script that does the job for me is this one. If you are like me, you will never run a script without looking at what it does, so go there and do it, I will wait :-).
OK, now that you are confident, run the script (as root, of course), and confirm that it actually installs the necessary drivers to make the devices work. In my case, I only got the touchpad working, even though the touchscreen is also covered by this script. However, since I don’t want the touchscreen, I did not investigate further.
After the installation, reboot your system and at least your touchpad should be working :-). Or kind of…
What happened to me was that I was getting strange behaviors with the touchpad. Sometimes (randomly), its sensitivity became weird, and it was very hard to move the pointer or to click on things. Fortunately, I found the solution in the same bug, in this comment by Yannick Defais. After creating this X11 configuration file, everything worked fine.
Now comes the hard part. My next challenge was to get suspend to work, because (as I said above) I don’t want to poweroff/poweron every time.
My first obvious attempt was to try to suspend using the current configuration that came with Fedora. The notebook actually suspended, but then it resumed 1 second later, and the system froze (i.e., I had to force the shutdown by holding the power button for a few seconds). Hmm, it smelled like this would take some effort, and my nose was right.
After a lot of searching (and asking in the bug), I found out about a few Linux flags that I could provide at boot time. To save you time, this is what I have now in my /etc/default/grub file:
GRUB_CMDLINE_LINUX="tpm_tis.force=1 tpm_tis.interrupts=0 ..."
The final ... means that you should keep whatever was there before you included those parameters, of course. Also, after you edit this file, you need to regenerate the GRUB configuration file under /boot. Run the following command as root:
> grub2-mkconfig -o /boot/grub2/grub.cfg
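After rebooting, one quick way to confirm the new flags actually reached the kernel is to look at /proc/cmdline. Here is a small, hypothetical shell sketch of that check; it runs against a sample string so it works anywhere, but on the real machine you would set cmdline from /proc/cmdline as noted in the comment:

```shell
#!/bin/sh
# has_param CMDLINE PARAM -- succeed if PARAM appears as a whole word
# in the kernel command line string CMDLINE.
has_param() {
    case " $1 " in
        *" $2 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

# Sample command line; on a live system use: cmdline=$(cat /proc/cmdline)
cmdline='BOOT_IMAGE=/vmlinuz-3.16.2 root=/dev/sda1 ro tpm_tis.force=1 tpm_tis.interrupts=0'

for p in tpm_tis.force=1 tpm_tis.interrupts=0; do
    if has_param "$cmdline" "$p"; then
        echo "$p: present"
    else
        echo "$p: MISSING"
    fi
done
```

If either flag shows up as MISSING, grub2-mkconfig probably did not pick up your /etc/default/grub edit.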
Then, after I rebooted the system, I found that adding those flags alone was still not enough. I saw a bunch of errors in dmesg, which showed me that there was some problem with EHCI and xHCI. After a little more research, I found this comment on an Arch[GNU/]Linux forum. Just follow the steps there (i.e., create the necessary files, especially /usr/lib/systemd/system-sleep/cros-sound-suspend.sh), and things should start to get better. But not yet…
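I cannot reproduce cros-sound-suspend.sh here (see the linked comment for the real thing), but for reference, scripts placed in /usr/lib/systemd/system-sleep/ follow a simple protocol: systemd runs each of them with “pre” as the first argument just before sleeping, and with “post” just after resuming. A minimal, hypothetical skeleton of such a hook might look like this (the echo messages are placeholders for the real device work):

```shell
#!/bin/sh
# Hypothetical skeleton of a systemd system-sleep hook. The real
# cros-sound-suspend.sh does device-specific work in each branch.
# systemd invokes hooks as: <script> pre suspend  /  <script> post suspend
sleep_hook() {
    case "$1" in
        pre)  echo "about to suspend: unbind problematic devices here" ;;
        post) echo "resumed: rebind the devices here" ;;
    esac
}

sleep_hook "${1:-pre}"
```

Remember that hooks in that directory must be executable, or systemd will silently skip them.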
Now, you will see that suspend/resume works OK, but when you suspend, the system will still resume after 1 second or so. Basically, this happens because the system is using the touchpad and the touchscreen to decide whether it should resume from suspend. So what you have to do is disable those event sources:
echo TPAD > /proc/acpi/wakeup
echo TSCR > /proc/acpi/wakeup
And voilà! Now everything should work as expected :-). You will want to issue those commands every time you boot the system in order to get suspend to work, of course. To do that, you can create an /etc/rc.d/rc.local file, which gets executed when the system starts:
> cat /etc/rc.d/rc.local
#!/bin/bash

suspend_tricks()
{
    echo TPAD > /proc/acpi/wakeup
    echo TSCR > /proc/acpi/wakeup
}

suspend_tricks

exit 0
Don’t forget to make this file executable:
> chmod +x /etc/rc.d/rc.local
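If you are curious about which devices are armed for wakeup before and after those echo commands, /proc/acpi/wakeup is a plain-text table you can inspect directly. Here is a small sketch that filters such a table for enabled entries; it runs against an inlined sample here (the exact column layout can vary between kernels), but on a live system you would feed it /proc/acpi/wakeup itself:

```shell
#!/bin/sh
# Print the names of wakeup-enabled devices from a /proc/acpi/wakeup-style
# table, where the status column reads "*enabled" or "*disabled".
enabled_wakeup_sources() {
    awk '$3 == "*enabled" { print $1 }'
}

# Sample table; on a live system run:
#   enabled_wakeup_sources < /proc/acpi/wakeup
sample='Device  S-state   Status      Sysfs node
TPAD      S3    *enabled
TSCR      S3    *enabled
EHCI      S3    *disabled'

printf '%s\n' "$sample" | enabled_wakeup_sources
```

Running the two echo commands and then filtering the real table again should make TPAD and TSCR disappear from the enabled list.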
Overall, I am happy with the machine. I still haven’t tried installing Linux-libre on it, so I am not sure whether it can work without binary blobs and proprietary crap.
I found the keyboard comfortable, and the touchpad OK. The only extra issue I had was with the Canadian/French/whatever keyboard that comes with it, because it lacks some keys that are useful to me, like Page Up/Down, Insert, and a few others. So far, I am working around this issue using xbindkeys and xvkbd.
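For what it’s worth, this is the kind of ~/.xbindkeysrc entry I mean; the key chords below are hypothetical examples (pick whatever suits you), and \[Next] / \[Prior] are the X keysyms for Page Down / Page Up that xvkbd understands:

```
# ~/.xbindkeysrc -- hypothetical bindings for the missing keys

# Super+j: send Page Down
"xvkbd -xsendevent -text '\[Next]'"
  Mod4 + j

# Super+k: send Page Up
"xvkbd -xsendevent -text '\[Prior]'"
  Mod4 + k
```

After editing the file, restart xbindkeys for the changes to take effect.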
I do not recommend this machine if you are not tech-savvy enough to follow the steps listed in this post. If that is your case, then consider buying a machine that can easily run GNU/Linux; you will feel much more comfortable configuring it!
]]>I have been using Gnus as my e-mail (and news!) reader for quite a while, and I can say it is a very nice piece of software (kudos to Lars and all the devs!). For those who are not aware, Gnus runs inside Emacs, which is a very nice operating system (and text editor also).
Emacs provides EasyPG for those who want to use cryptographic operations inside it, and Gnus uses it to encrypt/decrypt the messages it handles. I am using it for my own messages, and it works like a charm. However, there was something I had not configured properly: the ability to read the encrypted messages that I was sending to my friends.
In a brief explanation, when you send an encrypted message GnuPG looks at the recipients of the message (i.e., the people who will receive it, listed in the “To:”, “Cc:”, and “Bcc:” fields) and encrypts it with each recipient’s public key, which must be present in your local keyring. But when you send a message to someone, you are not (usually) present in the original recipients list, so GnuPG does not encrypt the message using your public key, and therefore you are unable to read the message later. In fact, this example can be used to illustrate how secure this system really is, when not even the sender can read his/her message again!
Anyway, this behavior went mostly unnoticed by me because I rarely look at my “Sent/” IMAP folder. Until today. And it kind of pissed me off, because I wanted to read what I wrote, damn it! So, after looking for a solution, I found a neat GnuPG setting called hidden-encrypt-to. It basically tells GnuPG to add a hidden recipient to every message it encrypts. So, all I had to do was provide my key’s ID and ask GnuPG to always encrypt the message to myself too. You basically have to edit your $HOME/.gnupg/gpg.conf file and put this setting there:
hidden-encrypt-to ID
That’s it. Now, whenever I send an encrypted message, GnuPG encrypts it for me as well, so I just need to go to my “Sent/” folder, and decrypt it to read.
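For reference, this is roughly what the relevant part of gpg.conf looks like; the key ID below is just a placeholder. GnuPG also has a plain encrypt-to option that does the same thing but lists you as a visible recipient instead of a hidden one:

```
# $HOME/.gnupg/gpg.conf
# Always encrypt to my own key as a hidden recipient
# (placeholder ID -- replace with your key's ID).
hidden-encrypt-to 0x0123456789ABCDEF

# Alternative: same effect, but visible in the recipient list.
# encrypt-to 0x0123456789ABCDEF
```

Whether to hide the extra recipient is a matter of taste: hidden-encrypt-to leaks slightly less metadata, since the extra key ID is not recorded in the message.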
I hope this tip helps you the same way it helped me!
]]>Before anything else, I would like to do a little self-promotion. I think I deserve it, given the work I had to put in to make this happen (I will explain)! I am talking about Diego Aranha’s talk, which was one of the highlights of this edition of the event. The talk, entitled Software Livre e Segurança Eleitoral (“Free Software and Electoral Security” — you can watch the video here), is, in my opinion, something every Brazilian citizen should watch and reflect upon. I started getting more involved with the subject of the Brazilian electronic voting machine after I watched this same talk (given by Diego himself) more than a year ago at UNICAMP. I consider it impossible not to feel at least a little outraged by the lack of scruples (and of competence!) of those who are, in theory, safeguarding democracy in the country.
Anyway, after watching this talk at least 3 times (one of them at the Software Freedom Day Campinas edition, which I organized on behalf of LibrePlanet São Paulo), I thought I should try to pull some strings and get it onto the official FISL schedule. Just in case, Diego and I also submitted the same talk through the normal submission system. But in the end, after talking to a few people “on the inside” (special thanks to Paulo Meirelles from UnB here), I managed to get Diego into the event’s highlights track! It was a great achievement, and I am sure that whoever saw the talk left with plenty to think about…
But let’s get to the facts. My participation in FISL this year was more timid than last year’s, but after some reflection I concluded that it was also more fruitful. Despite having submitted practically 8 talk proposals, covering many different levels and subjects, I did not have a single proposal accepted! I was obviously quite upset about that, especially after seeing the level of some of the talks that were approved… I confess I considered not going to the event, since, besides not having any talk approved (which meant I would not receive any sponsorship to go), I also would not be able to see many friends who could not attend this edition (you can blame the World Cup for that).
Once I was done feeling sorry for myself, I decided to go anyway. Alexandre Oliva had invited me to take part in a round table whose goal was to debate the supposed death of the Free Software movement in Brazil. I felt honoured by the invitation, and since I have been part of the cause for quite a while, I had plenty to say. It was an honour to share the table with Oliva himself, Anahuac, Fred, and Panaggio. We had 2 hours to state our opinions on the subject and open the discussion to the audience in the auditorium. Unfortunately, it turned out to be very little time for everything we had to say! I myself ended up saying very little, and decided to stop early to let the audience speak, hoping the microphone would come back to me so I could make my final remarks. How wrong I was! Everybody wanted a little slice of the time, and in the end a lot was left unsaid, on both sides (speakers and audience). By the way, if you want to watch the video of the debate, you can download it here.
It is no exaggeration to say that this debate made explicit a recurring feeling among Free Software activists. For some time we had been sharing this “collective awareness” that things were not going very well for Free Software (unlike Open Source, which is going full steam ahead). I myself had already written some posts about the subject, and about my discomfort when I asked that the name Free Software not be used improperly (post in English), and Anahuac raised that point during the debate as well. I found that quite symptomatic. And after I came back from FISL I started thinking a lot about these (and other) subjects again, which has already resulted in a few posts here.
I also liked most of the points I heard from the audience. Although I had the impression that some people did not quite understand what was being discussed, I consider the counterpoints raised by part of the audience worth thinking about, even if the person who brought them up is not necessarily an activist. Maybe I will prepare another post about what I heard there…
Finally, near the end of the session, I could not help asking Oliva for the microphone and raising a point that I wished had received more attention: we need to hack more! Free Software, as a social and political movement, needs people to discuss and bring up the problems that we, as a society, must solve. However, Free Software is also a technical movement, and as such it needs tools that can stand up to proprietary dominance. Hackers, we need you :-).
But… changing the subject a bit, I also went to the event to promote, once again, our Free Software group, LibrePlanet São Paulo. This year we brought two interesting offers to the event: free accounts on our GNU Social instance and on our Jabber server.
GNU Social, formerly known as StatusNet (which was used by the Identica site, which later migrated to another kind of service), is something like a “distributed Twitter”, implemented as Free Software. The point is that you can run your own server (if you want), and you can talk to people using GNU Social on other servers. If you want to register an account on our GNU Social instance, you can visit the sign-up page.
Jabber (XMPP) is an “anonymous acquaintance” of almost everyone. It is the technology behind Google Talk, Facebook Chat, WhatsApp, and several other proprietary services. We at LibrePlanet São Paulo are offering free accounts on our Jabber server. We do not have a user sign-up page yet, so if you want an account, contact me by e-mail (or leave a comment here). It is important to say that Jabber/XMPP is also a fully distributed protocol, and you will be able to talk to other people using Jabber on other servers! Unfortunately, you will not be able to talk to those who use Facebook Chat and WhatsApp, because those companies forbid that functionality. Google used to allow it for those using the “regular” Google Chat; if the person has already migrated to Hangouts, they will not be able to talk to other Jabber servers either. One more reason to ditch these vampiric “services”, don’t you think? :-)
The final tally was 5 Jabber accounts created, and no GNU Social accounts. Unfortunately, this is absolutely normal at any kind of event; FISL, despite having “SL” in its name, is overwhelmingly composed of people who sometimes do not care that much about the social part.
Finally, I would like to register the excellent work that the LibrePlanet São Paulo and Espírito Santo folks did during the groups’ Community Meeting. Watch the video of the meeting here.
In the end, I was happy with the outcome of the event. The high point, for me, was certainly the debate. I think a shake-up of the status quo is always welcome, and that is what we tried to do. This move ended up generating activity on both sides (Free Software and Open Source), and it also helped us better distinguish who is who in this whole story. Now we wait for the next FISL to see what came out of it and what stayed in place. Until then!
Cheers!
]]>gnu-prog-discuss mailing list. This is a closed list of the GNU Project, and only GNU maintainers and contributors can join, so I cannot link to the original message (by Mike Gerwitz), but this topic is being discussed over and over again in many places, so you will not have trouble finding similar opinions. I am also “responding” to a recent discussion that I had with Luiz Izidoro, who is a “friend” (as he himself likes to say) of the LibrePlanet São Paulo group.
Mike’s point is simple: we, Free Software activists, are the geeks (or nerds) at school, surrounded by the “popular guys” all over again. In case it is not clear, the “popular guys” are the people who do not care about the Free Software ideology; the programmers who license their software under permissive licenses using the excuse of “more freedom”, but give away their work to grow the proprietary world.
It is undeniable that Free Software, as a technical movement, has won. Anywhere you look, you see Free Software being developed and used. It is important to say that by “Free Software” I mean not only copyleft programs, but also permissive ones. However, it is also undeniable that several proprietary programs and solutions are being developed with the help of that permissively licensed Free Software, without giving anything back to the community, as usual.
Numbers speak for themselves, so I am posting here the example that Mike used in his message, about Trello, a “web-based project management application”, according to Wikipedia. It is quite popular among project managers, and I know of two or three companies that use it, though I have never used it myself (luckily). Being web-based, it is full of Javascript code, and I appreciated the work Mike put in to determine which pieces of Free Software Trello uses. The result is:
jQuery, Sizzle, jQuery UI, jQuery Widget, jQuery UI Mouse, jQuery UI Position, jQuery UI Draggable, jQuery UI Droppable, jQuery UI Sortable, jQuery UI Datepicker, Hogan, Backbone, JSON2 (Crockford), Markdown.js, Socket.io, Underscore.js, Bootstrap, Backbone, and Mustache
You can see the license headers of all those projects here:
This is only on the client-side, i.e., the Javascript portion. I will not post the link to the full Javascript code (condensed in one single file) because I do not have permission to do so, but it should not be hard to take a look yourself if you are curious.
On the server side, Mike came up with this list of Free Softwares being used by Trello:
MongoDB, Redis, Node.js, HAProxy, Express, Connect, Cluster, node_redis, Mongoose, node-mongodb-native, async, CoffeeScript, and probably more
Quite a lot of Free Software, right? And Trello advertises itself as being “free”, which might confuse the inexperienced reader, because they are talking about price, not about freedom.
The lesson we learn is obvious but no less painful. Whoever contributes to Free Software using permissive licenses is directly contributing to the dissemination of proprietary software. And the corollary should be obvious as well: you are being exploited. Another nice addition made by Mike is a quote by Larry Ellison, CEO and founder of Oracle Corporation, about Free Software (and Open Source):
“If an open source product gets good enough, we’ll simply take it…. So the great thing about open source is nobody owns it – a company like Oracle is free to take it for nothing, include it in our products and charge for support, and that’s what we’ll do. So it is not disruptive at all – you have to find places to add value. Once open source gets good enough, competing with it would be insane. … We don’t have to fight open source, we have to exploit open source.”
So, do you really think you have more freedom because you can choose BSD/MIT over the GPL? Do you really think it does not matter what other people do with your code, which you released as Free Software? What is your goal with this movement: to contribute to a better Free Software ecosystem (which will lead to a fairer society), or just to get your name in the hall of (f|sh)ame?
Back to the initial point, about not being “popular” among your friends (or being called “radical”, “extremist”, and other adjectives), I believe Mike hit the nail on the head, because that is exactly how I feel currently, and I know other Free Software activists feel exactly the same. To defend a copyleft license is to defend something that is wrong, because, in the “popular kids’” view, copyleft is about anything but freedom! The cool thing now is to be indifferent, or even to think it is nice that proprietary software can coexist with Free Software, so let’s give it a hand and release everything we can under permissive licenses. I could mention lots of very nice Free Software projects that chose to be permissive because their maintainers thought (and still think) the GPL is evil.
I contributed and still contribute to some Free Software projects that are permissively licensed. And despite trying to use only copyleft software, sometimes I replace some of it with permissive alternatives, and do not feel guilty about it. I do that because I believe in Free Software, and I believe we should support it in every way we can. But doing so is also harmful to our cause. We are supporting software that contributes to the proprietary world, even if that is not what its developers want. We are making it very easy for people like Larry Ellison to win and to think they can exploit what other people are doing for free(dom). We are hand-feeding our own enemy. And we should be very careful about that.
This post is a request. I am asking you a favor. Please, consider (re)licensing your project using a copyleft license. If you do value what Free Software is about (or even what Open Source is about!), then help spread it by not helping the proprietary side. I am not asking you to join our ideological cause (or maybe I am?); feel free to stay out of this if you want. But please, at least consider helping the Free Software community by avoiding making your code permissive, which will give too much power to the unethical side.
Thank you!
]]>First of all, if you want to watch the debate, the direct link is here. I also suggest a visit to the LibrePlanet São Paulo group’s wiki page, where you can find suggestions of other interesting talks that happened at the event. You can access it at this link.
But back to the subject. My goal in this post is not to discuss the debate itself; I intend to do that in a future post. The point I want to discuss is the behaviour of what I call the “guardians of consistency”. These are people who exist in any social/political/philosophical movement, and of course they exist in Free Software too. Curiously, though, I see more vehemence in those who do not defend Free Software than in those who do. Let me explain.
Anahuac recently wrote some posts attacking the lack of distinction between the Open Source and Free Software movements, specifically on the part of those who belong to the former but claim to defend the latter. For the purposes of this post, I can classify Anahuac as a guardian of consistency, even though he himself admits some inconsistencies in his own behaviour, such as using Twitter. And although I do not always agree with the tone he uses in his texts, often combative and even dangerously acid, I agree with most of the points he raises in the two articles I mentioned. If you want to read them, the first one is here, and the second is at this link. A long time ago, I published my own opinions (in English) on this same subject, in this post.
Well, since Anahuac has no problem taking flak, he posted both texts on the BR-[GNU/]Linux site, notably a Brazilian Open Source stronghold. I stopped reading the site a long time ago, due to differences of opinion with the published content, and mainly because I always noticed an ironic and biased tone disguised as a supposed “neutrality towards the facts” in the comments the site’s author makes about the news. However, Anahuac himself made a point of bringing the texts’ repercussions to my attention, and asked me to read the comments on the BR-[GNU/]Linux post. It is worth mentioning that the site uses Disqus for its comment system, a service that does not respect its users’ privacy and tracks their activities. Since I do not have an account there, and I use some plug-ins to avoid running unauthorized Javascript code in my browser, it took me a bit of work to manage to read the comments more or less anonymously. But in the end, I did. And what I saw, despite being “more of the same”, left me quite pensive.
I did not expect a different reaction from part of the community. As I said, Anahuac’s texts are made to “touch the wound” in a sometimes abrupt way that displeases many people. Several comments were ad hominem and do not even deserve mention. But what caught my attention was the number of people pointing out inconsistencies (supposed or real) that Anahuac commits, and denying him the right to point out any kind of inconsistency in the very community he belongs to. And then I wonder: are we, Free Software activists, not reaping what we have sown?
I cannot help speaking from my own experience. I have always tried to base my actions and opinions on my own consistency. I know it is hard, and, although I often (pre)judge someone for an inconsistency they committed, I always try to remember that I myself once used Gmail and Twitter to criticize Free Software. Obviously, at the time I did not know as much about the dangers of using those tools, but even so nothing prevented (and, in fact, nothing did prevent!) someone from coming along and accusing me of inconsistency. I once even condemned the use of Facebook to promote a former Free Software group I belonged to, and received as an answer a (not very polite) “piece of advice” saying that, if I wanted to use only Free Software, I should stop using computers, since regardless of the machine I would have to use something proprietary. This came from one of the founders of that group, a guy quite famous for his rudeness, but who, a long time ago, believed in the same ideals I do.
It is very hard to counter an argument of that kind. In fact, it is very hard to counter a finger pointed at your face showing some inconsistency you commit, placed there as a response to your accusation of some other inconsistency. Some people tend to defend themselves by justifying their mistakes through other people’s mistakes, and when they can use the “accuser” himself as an example, even better (for them)! That is what is happening, and it is this tide of guardians of other people’s consistency that worries me a little. After all, it will always be possible to find inconsistencies in anyone.
I do not quite know where I want to go with this text, but I think one thing is becoming a little clearer in my head, or at least I am beginning to see a different side of the whole story. Pointing out inconsistencies, as evident as they may be to our eyes, may not be the best way to explain our ideals. It may seem obvious (and maybe it is), but nobody likes to be pushed against the wall, and very few people have the courage to publicly admit a mistake. Perhaps the way into people’s heads and hearts is a different one. During the debate at FISL, Fred said something that has been on my mind more and more often. It may sound corny, but we need more love for our neighbour, if only so we can understand that we, too, were once on the other side. Free Software, as the social, political and philosophical movement it is, will flourish more and more when each activist looks inward and recognizes, even with difficulty, the person to whom they hope to pass on a bit of their ideal.
It is hard, but it is necessary.
]]>First of all, this article is not a copy of Benjamin Mako’s Google Has Most of My Email Because It Has All of Yours. And I would also like to take this opportunity to recommend this great article; it provides many insights that some people do not even realize.
But back to the point: privacy is a collective good, and we should preserve it. The explanation of why I am calling privacy something "collective" is simple, and if you read Ben's article you probably know it by now: whenever I send an e-mail to someone who uses Gmail, Google will have a copy of it, even if I don't have a Google account. What does that mean? It means that I pay for my own server in order to run my own e-mail infrastructure and not have my privacy disrespected, but at the end of the day most of my effort is useless. Which boils down to something that may be hard to read, but is true: you are not respecting my privacy. Your carelessness with your own privacy is forcing me, who needs to communicate with you, to give up my privacy as well, even if for a small portion of time. But it's not only about e-mail…
Another common example is Facebook. I don't have an account there, and don't plan to have one, despite the occasional pressure from society. However, when you take a picture of me and post it there, or when you mention something about me on your Facebook, you are also disrespecting my privacy. If I don't have Facebook, it is because I do not want to become a product for them and have my personal data sold to advertisement companies, nor have it shared with the NSA. You, on the other hand, do not care about this, and post things about me and other people without their permission. This is wrong, and you are disrespecting my privacy.
I chose to use this argument because oftentimes people are not concerned about their own privacy, and think that "if I have nothing to hide, then I don't need privacy". I won't even begin discussing this absurd idea, because that is not the point of this article. Instead, I have noticed that people sometimes pay more attention if you tell them they are disrespecting someone else's rights. Maybe I am wrong, but I still think it is worth trying to open everyone's eyes to something that seems to have been forgotten by most.
]]>When I installed my personal server, I chose Jabberd2 as my Jabber server. At that time, this choice seemed the most logical to me because of a few reasons:
So, the decision seemed pretty simple for me: Jabberd2 would be my choice! And then the problems started…
The first issue I had to solve was not Jabberd2's fault: I am using Debian Wheezy (stable) on my server, and Jabberd2 is only available for Debian Jessie (testing) or Sid (unstable). Therefore, I had to create my own version of the Jabberd2 Debian package (and of all its dependencies that were not packaged) for Wheezy, which took me about a day. But after that, I managed to install the software on my server. Then, the configuration hell began…
Jabberd2 uses configuration files written in XML. They are well documented, with helpful comments inside. But they are confusing, as confusing as XML can be. Of course, you have to take into account that it was my first time configuring a Jabber server, which added a lot to the complexity of the task. However, I feel compelled to say that the way Jabberd2 organizes its configuration files makes the job much more complex than it should be. Nevertheless, and after many failures, I managed to set the server up properly. Yay!
Now, before I continue complaining, one good thing about Jabberd2: it has never crashed on me. I consider this to be something good because I am a software developer myself and I know that, despite our best efforts, bad things can happen. But Jabberd2 takes the gold medal on this one…
However… My confidence in Jabberd2's security was severely damaged when I found that the SQLite backend could not encrypt the users' passwords! I stumbled on this issue by myself, while naively dumping my SQLite database to check something there… You can imagine how (badly) impressed I was when I saw my password there, in plaintext. I decided to fix this issue ASAP. Hopefully future users will benefit from this fix.
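To illustrate what a backend should store instead of plaintext, here is a minimal Python sketch of salted password hashing. This is a generic illustration of the technique, not Jabberd2's actual code and not the fix I submitted; the PBKDF2 parameters below are arbitrary examples.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, slow hash suitable for storage instead of plaintext."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

The point is that the database ends up holding only the salt and the digest, so casually dumping it never reveals the original password.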
After that, the bell rang in my head and I started to look for alternatives for Jabberd2. Though I still want to contribute to the project eventually (I am even working on a patch to merge all the database backends), I wanted to have a little bit more confidence in the software that I use as my Jabber server.
Prosody came to my attention when I was setting up the server for our local Free Software group in Brazil. You can reach our wiki here (in pt_BR, Portuguese) if you are interested. We wanted to offer a few services to our members/friends, and Jabber was obviously one of them. This happened after I discovered the bug in Jabberd2's SQLite backend, so using Jabberd2 was not a choice anymore. We had heard of ejabberd, which was being used by Jabber-BR (they recently migrated to Prosody as well), but the fact that it is written in Erlang, a language I am not familiar with, contributed to our decision to drop that idea. So, the only choice left was Prosody itself.
Since I am Brazilian, I also feel a little bit proud of Prosody, because it is written in Lua, a programming language designed by Brazilians.
We installed Prosody on our server, and it was amazingly easy to configure! The configuration files are written in Lua as well, which makes them a lot easier to read than XML. They are also well documented, and I felt they were more organized too: you have small configuration files split by category, instead of one big XML file to edit.
The modular structure of Prosody also impressed me. You can load and unload many modules very easily, generally just by (un)commenting lines on the configuration file. Neat.
Prosody also offers a command-line program to manage the server, which is really helpful if you want to automate some tasks and write scripts. One little thing still annoys me: this command-line program does not have a very useful --help command. But I plan to propose a patch to fix that.
And last, but definitely not least, Prosody is also very robust, and has not crashed a single time on us. It runs smoothly on the server, and although I haven't really compared the memory footprints of Jabberd2 and Prosody, I have nothing to complain about there either.
Well, so after all this story, I think it is clear why I decided to migrate to Prosody. However, it was not an easy task.
Before we begin to understand the procedure needed to do the migration, I would like to say a few things. First, I would like to thank the guys at the Prosody chatroom, who were very helpful and provided several resources to make this migration possible. And I would also like to say that these instructions apply if you are running jabberd2_2.2.17-1 and prosody-0.8.2-4+deb7u2!! I have not tested with other versions of those softwares, so do it at your own risk.
The first thing you have to do is to convert Jabberd2’s database to XEP-0227. This XEP is very nice: it defines a standard format to import/export user data to and from XMPP servers. Unfortunately, not every server supports this XEP, and Jabberd2 is one of those… So I started looking for ways to extract the information which was inside Jabberd2’s SQLite database in a XEP-0227 compatible way. Thanks to the guys at the Prosody chatroom, I found a tool called sleekmigrate. It allowed me to generate a XEP-0227 file that could be imported into Prosody. Nice! But… I needed to extract this information from Jabberd2, and sleekmigrate could not do it. Back to the beginning…
It took me quite a while to figure out how to extract this info from Jabberd2. I was initially looking for ways (other than using sleekmigrate) to extract it directly from Jabberd2's SQLite database, but could not find any. Only when I read that sleekmigrate could actually work with jabberd14 data directories directly did I have the idea of converting my SQLite database into a jabberd14 data directory, and then I found this link: it teaches how to migrate from Jabberd2 to ejabberd, and has separate instructions on how to do the Jabberd2 -> Jabberd14 conversion! Sweet!
The first thing you have to do is to download the j2to1 Perl script. I had to patch the script to make it work with SQLite, and also to fix a little bug in an SQL query; you can grab my patched version here. Save the file as j2to1.pl, and run the script (don't forget to edit the source code in order to provide the database name/file):
$> perl j2to1.pl jabberd14-dir/
Converting user@host...
$>
This will convert the database from Jabberd2 to Jabberd14, and put the XML file of each Jabber user on the server into jabberd14-dir/host/.
Now you have a Jabberd14 version of your user data. Let's proceed with the migration.
After following the instructions on the sleekmigrate page on how to set it up, you can run it on your Jabberd14 data directory in order to finally generate a XEP-0227 XML file that will be imported into Prosody.
$> ./sleekmigrate.py -j /path/to/jabberd14-dir/
This should create a file called 227.xml in your current directory, which is the exported version of the Jabberd14 data directory. As a side note, it is always recommended to check the generated files to see if everything is OK.
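One way to do that check is to parse the export and count what it contains. The helper below is hypothetical (it is not part of sleekmigrate); it assumes the XEP-0227 element names server-data, host, and user, and deliberately ignores the XML namespace, since exporters have used different namespace URIs over time (the one in the sample is illustrative).

```python
import xml.etree.ElementTree as ET

def summarize_227(xml_text):
    """Count <host> and <user> entries in a XEP-0227 export, ignoring namespaces."""
    root = ET.fromstring(xml_text)
    def local(tag):
        # '{some-namespace}user' -> 'user'
        return tag.rsplit("}", 1)[-1]
    hosts = sum(1 for el in root.iter() if local(el.tag) == "host")
    users = sum(1 for el in root.iter() if local(el.tag) == "user")
    return hosts, users

# Tiny illustrative export with one host and two users:
sample = """<server-data xmlns='urn:xmpp:pie:0'>
  <host jid='example.org'>
    <user name='alice'/>
    <user name='bob'/>
  </host>
</server-data>"""

print(summarize_227(sample))  # prints (1, 2)
```

If the counts don't match the number of accounts you expect, something went wrong in the conversion chain.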
Right, so now you have 227.xml, which means you can finally import it into Prosody. Fortunately, Prosody has a tool to help you with that: a Lua script called xep227toprosody.lua. However, if you are doing this on Debian with the same versions of the software I was using, you may find it harder than it seems to run this script without errors. Here is what I had to do.
First, grab a copy of version 0.8.2 of Prosody. I had to do that because using the latest version of the script was not working. I also had to build some POSIX module of Prosody in order to make everything work. To do that, unpack the tar.gz file, go to the Prosody source code directory, and run:
$> apt-get build-dep prosody && ./configure --ostype=debian && make
Only after I did that could I finally run the conversion script successfully. The script is located inside the tools/ directory. To run it:
$> cd tools && lua ./xep227toprosody.lua /path/to/227.xml
And yay! I finally had everything imported into Prosody!!!! Then it was just a matter of finishing the server configuration, initializing it, and everything was there: my contacts, my user, etc.
The migration was not very easy, especially because Jabberd2 does not support XEP-0227. I found a bug against Jabberd2 that requested this feature to be implemented, but it was not receiving any attention. Of course, if Jabberd2 implemented XEP-0227 it would make it easier for people to migrate from it, but it would also make it easier to migrate to it, so it is definitely not a bad thing to have.
Despite some difficulties, Prosody made it really easy to import my data, so kudos to it. The Prosody community is also very responsive and helpful, which made me feel very good about the project. I hope I can contribute some patches to it :-)
So, that’s it. I hope this guide will be helpful to anyone who is planning to do this migration. Feel free to contact me about mistakes/comments/suggestions.
Happy migration!
]]>The Free Software movement, seen from a somewhat unorthodox angle, runs on "give and take". You contribute time, dedication, code, bug reports, fixes, art, text, and in the end you expect, even if unconsciously, to receive credit for the effort you put into the project. There is nothing wrong with that, and, if the credit is truly deserved (which is another reflection, sometimes a very difficult one to make!), nothing is fairer than giving it.
On the other hand, it is interesting to analyze what happens when due credit is not given. Without going into why it happened (negligence, forgetfulness, bad faith), the person who should have received that credit, even if they were not consciously expecting it, suffers a blow (sometimes an irreversible one) to their will to keep dedicating time to a given task. It may seem obvious, but this needs to be looked at carefully. The Free Software movement is made up not only of employees of companies with a (financial) interest in the success of a given piece of software, but also (and mainly) of volunteers.
And where do I fit into all this? I have been contributing to Free Software for quite a while, and I have been in both situations: I have been graced with due recognition, and I have been "forgotten" after working hard for something. Fortunately, in the overwhelming majority of cases I was given due credit, and I have nothing to complain about. But recently I went through the opposite case, and I felt first-hand, once again, how bad it is not to be remembered for the work I did, even if it happened through lack of communication and with no malice involved.
I tried, with some effort, to put myself in the position of an observer and set the role of "victim" aside a little. It is a very complicated situation, and from whatever angle I try to look at it, I cannot see a solution different from the one that, selfishly, I chose as the best for me.
I know how hard I worked to set in motion a gear that is not always easy to get running: that of a Free Software advocacy group. Maybe you remember the group's founding announcement, more than a year ago. And now, after having done a lot for the group, I missed being recognized by someone I hold in high regard. I know that, on closer analysis, the culprit was the lack of communication. But sometimes I can't help thinking how good it would have been to get a little taste of "I did my part, and that guy acknowledges it!".
Anyway, such is life. This post was going to be much longer, but I decided to cut more than half of it because I don't want to dwell on the whining. What matters, in the end, is how much you think you are doing the right thing. At the end of the day, you are the one who will sleep soundly, knowing you worked hard and that nothing you did was in vain. The rest, whether it comes or not, is a complement to what you have done.
]]>This point relates mutually to the other two (which also relate mutually to each other). Of course, everything is connected in this world, even (and especially!) the reasons that lead someone to disconnect from certain moral and ethical values.
I see lazy people all the time. Sometimes I am one of them (however much I try to stay away from that behavior). But I believe there is a difference between someone who is inherently lazy and someone who gives in to the temptation of laziness because of some other factor. My complaint here is with the first kind of people.
The "test" to know whether you fit into that group is: when you face a hard problem, what is your modus operandi? Looking for solutions, or giving up? Trying it yourself, or asking someone else? Learning from your mistakes, or repeating them ad eternum? If you didn't even want to think about this test, then I guess the answer is obvious…
But what does this have to do with activism? Everything. Being an activist means, by definition, having to face difficult and discouraging situations, apathetic and distrustful audiences, skeptical and alienated people. And all of this is absurdly frustrating, especially when you believe in what you are saying and know that the people listening need to understand it too! After all, as I said in another post, privacy (but not only privacy!) is a collective good. Maintaining it depends on the understanding of the community being spied upon.
In other words, the companies, entities and governments fighting for you to have fewer and fewer rights are not asleep at the wheel. It won't be very nice if we are…
But this point does not apply only to activists themselves. Obviously, we find (many!) lazy people on the other side too, in the audience. It is always good (and necessary) to assume that the people you are talking to are ignorant about that particular subject, and therefore need to be minimally instructed so they can make mature and intelligent decisions. However, even after being warned about various facts and about the consequences of their actions, people still prefer to remain in ignorance!! There are several names for this "stubbornness", but I tend to think one of the contributing factors is laziness.
Laziness to get up from the chair and look for solutions that respect you and your community. Laziness to keep thinking (that is, to stay "always alert") about which risks you are actually running when you use that "social network". Laziness to change habits. Laziness to fight for your digital rights. In short, laziness.
This is one of the most problematic points. Prejudice is rooted in people, without exception. And the prejudice against activists, of any kind, is evident.
Being an activist is not just believing in something. Being an activist is mainly knowing something, and wanting to bring that knowledge to people. Obviously there are several kinds of activism, but when I look at what I do, I see myself more as someone who feels it is his duty to teach people about something unknown to most. Although I really do hope people will believe in the values I try to pass on (and who doesn't?), I believe my main goal is to "enable" society to make conscious decisions about the subjects I try to "teach".
Some people are afraid or ashamed to tell me they use Facebook, Twitter, or some non-free software. But I notice that, in most cases, their fear comes from knowing that I don't "like" any of those things, not from knowing why I don't like them. And in that case, I don't feel anger or disappointment toward the person I am talking to, but rather a need to actually explain why I disagree with using those programs! I know that if I explain, I will in fact be giving that person the tools to decide, by themselves, whether they want to keep using them. That is my task, at the end of the day: to allow the technology user to choose, consciously and ethically, what they want and what they don't. But then prejudice comes in…
When I start talking, it is inevitable to use expressions like "freedom", "respect", "ethics", "community", "privacy", "social issues", and so on. They are the cement with which I build my arguments, and I don't believe words or expressions by themselves can tell a liberal from a conservative, for example. However, what I see most are people who mistake Free Software activists for communists or socialists. And since conservatism is in fashion these days, people sometimes ignore everything we say because of this idiotic prejudice.
My goal is not to discuss whether being a socialist/communist is good or bad (although I am definitely not a "conservative", and I find this prejudice absurd). But what must be clear is that Free Software, despite being a political movement, is not a partisan movement. We defend well-defined values, which may or may not have something to do with communist/socialist ideas, but which do not advocate for that political movement. It is also important to mention that, being a social movement, it is natural that many ideas and precepts defended by Free Software activists are sympathetic to the socialist/communist cause. But that obviously does not make Stallman the new Stalin (despite the similarity of the surnames).
Finally, my request to the community at large is: listen to the message, regardless of the speaker, and think about it, regardless of your party-political orientation. What we fight for does not depend on party, religion, football team, or nationality. It depends simply on human beings, on a community that has no borders and no single culture, but that deserves more respect. Unfortunately, though, we are going to have to demand it.
]]>You may not agree with me on everything I write here, and I am honestly expecting some opposition, but I would like to make it crystal clear that my purpose is to raise awareness for the most important “feature” an organization should have: coherence.
I first learned about the Twitter account on IRC. I was hanging around in the #fsf channel on Freenode when someone mentioned that "… something has just been posted on FSF's Twitter!" (yes, it was a happy announcement, not a complaint). I thought it was a joke, but before laughing I decided to confirm. And to my deepest sorrow, I was wrong. The Free Software Foundation has a Twitter account. The implications of this are mostly bad, not only for the Foundation itself, but also for us, Free Software users and advocates.
Twitter uses Free Software to run its services. So does Facebook, and I would even bet that Microsoft runs some GNU/Linux machines serving intranet pages… But the thing is not about what a web service uses. It is about endorsement. And I will explain.
I remember having this crazy thought some years ago, when I saw some small company in Brazil putting the Facebook logo in their product’s box. What surprised me was that the Facebook logo was actually bigger than the company’s logo! What the heck?!?! This is “Marketing 101”: you are drawing attention to Facebook, not to your company who actually made the product. And from that moment on, every time I see Coca Cola putting a “Find us on http://facebook.com/cocacola” (don’t know if the URL is valid, it’s just an example) I have this strange feeling of how an internet company can twist the rules of marketing and get free ads everywhere…
My point is simple: when a company uses a web service, it is endorsing the use of this same web service, even if in an indirect way. And the same applies to organizations, or foundations, for that matter. So the question I had in my mind when I saw FSF’s Twitter account was: do we really want to endorse Twitter? So I sent them an e-mail…
I exchanged some interesting messages with Kyra, FSF's Campaign Organizer, and with John Sullivan, FSF's Executive Director. I will not post the messages here because I don't have their permission to do so, but I will try to summarize what we discussed, and the outcomes.
My first message was basically requiring some clarifications. I had read this interesting page about the presence of FSF on Twitter, and expressed my disagreement about the arguments used there.
They explicitly say that Twitter uses nonfree JavaScript, and suggest that the reader use a free client to access it. Yet, they still close their eyes to the fact that a big part of the Twitter community use it through the browser, or through some proprietary application.
They also acknowledge that Twitter accounts have privacy issues. This is obvious for anyone interested in privacy, and the FSF even provides a link to an interesting story about subpoenas during the Occupy Wall Street movement.
Nevertheless, the FSF still thinks it’s OK to have a Twitter account, because it uses Twitter via a bridge which connects FSF’s StatusNet instance to Twitter. Therefore, in their vision, they are not really using Twitter (at least, they are not using the proprietary JavaScript), and well, let the bridge do its job…
This is nonsense. Again: when a foundation uses a web service, it is endorsing it, even if indirectly! And that was the main argument I have used when I wrote to them. Let’s see how they replied…
The answer I’ve got to my first message was not very good (very weak arguments), so I won’t even bother talking about it here. I had to send another message to make it clear that I was interested in real answers.
After the second reply, it became clear to me that the FSF's main goal is to reach as many people as they can and pass along the message of software user freedom. I have the impression that the means they use for that don't really matter, as long as it is not Facebook (more on that later). So if it takes using a web service that disrespects privacy and uses nonfree JavaScript, so be it.
It also seems to me that the FSF believes in an illusion of their own creation. In some messages, they said they would try harder to let people know that using Twitter is not the solution, but part of the problem (the irony being that they would do this using Twitter). However, I look at FSF's Twitter account from time to time, and so far nothing has been posted about this topic. Regular people just don't know that there are alternatives to Twitter.
I will take the liberty to tell a little story now. I told the same story to them, to no avail. Let’s imagine the following scenario: John has just heard about Free Software and is beginning to study about it. He does not have a Twitter account, but one of the first things he finds when he looks for Free Software on the web is FSF’s Twitter. So, he thinks: “Hey, I would like to receive news about Free Software, and it’s just a Twitter account away! Neat!”. Then, he creates a Twitter account and starts following FSF there.
Can you imagine this happening in the real world? I definitely can.
The FSF is also mistaken when they think that they should go to Twitter in order to reach people. I wrote them, and I will say it again here, that I think we should create ways to reach those users “indirectly” (which, as it turns out, would be more direct!), trying to promote events, conferences, talks, face-to-face gatherings, etc. The LibrePlanet project, for example, is a great way of doing this job through local communities, and the FSF should pay a lot more attention to it in my opinion! These are “offline” alternatives, and I confess I think we should discuss the “online” ones with extra care, because we are in such a sad situation regarding the Internet now that I don’t even know where to start…
And last, but definitely not least, the FSF is being incoherent. When it says that “it is OK to use Twitter through a bridge in a StatusNet instance”, then it should also be coherent and do the same thing for Facebook. One can use Facebook through bridges connecting privacy-friendly services such as Diaspora and Friendica (the fact that Diaspora itself has a Facebook account for the project is a topic I won’t even start to discuss). And through those bridges, the FSF will be able to reach much more people than through Twitter.
I am not, in any way, comparing Twitter and Facebook. I am very much aware that Facebook has its own set of problems, which are bigger and worse than Twitter’s (in the most part). But last time I checked, we were not trying to find the best between both. They are both bad in their own ways, and the FSF should not be using either of them!
My conversation with the FSF ended after a few more messages. It was clear to me that they would not change anything (despite their promises to raise awareness to alternatives to Twitter, as I said above), and I don’t believe in infinite discussions about some topic, so I decided to step back. Now, this post is the only thing I can do to try to let people know and think about this subject. It may seem a small problem to solve, and I know that the Free Software community must be together in order to promote the ideas we share and appreciate, but that is precisely why I am writing this.
The Free Software movement was founded on top of ideas and coherence. In order to be successful, we must remain coherent to what we believe. This is not an option, there is no alternative. If we don’t defend our own beliefs, no one will.
]]>First, a brief account of the two events. I partially liked the results we got with Upstream. I think the quality of the speakers was great, and the discussions were at a very good level. However, the workshops left something to be desired. From the little thinking I have done about it, I concluded that we lacked organization in defining the topics to be covered, and especially the best way to cover them. I take my share of the blame: after all, I tried to help organize the toolchain workshop, and it did not turn out the way we expected. Problems with the venue's infrastructure also hurt the final result. But, in general, and considering this was the first edition of the event, I think we did reasonably well. We certainly already have a lot to think about and improve for the next edition!
As for SFD, although several very good people took part in the event, my initial (and strong) impression was that getting society to take an interest in (or at least listen to, although the two concepts are intrinsically linked) subjects that are of utmost importance for maintaining (or, in this case, restoring) a State that respects it is harder than I thought. And that is also the first reflection of this post.
There is a very big conflict going on inside people. It probably isn't new, but in any case it exists and needs to be resolved. The conflict, as I see it, can be summed up as follows: "up to what point do I want to feel indignation about a subject, so that I don't necessarily have to do anything about it?". In other words, the person voluntarily chooses to remain in partial ignorance, so that they don't feel compelled to take a stand on a given problem that affects them.
Take Facebook as an example. Someone with an account there (i.e., "almost everybody") prefers to remain ignorant of the terms of service and privacy policy the site has. I am not even getting into clandestine surveillance operations; I am talking about the texts available on Facebook's own site that explain (maybe not very clearly, but that is another problem) what the site does and does not do with your data. It is a choice. It is easier to just use the site, share funny pictures with your thousand "friends", and not look at a question that should matter far more than any "like" that could be given.
I am not a sociologist and am far from being able to give academic opinions on this subject, but I have the impression that what is happening is a "social retardation" in most citizens of this planet. It is something of a paradox that this behavior is exacerbated through a "social network", which disguises itself as a facilitator of communication between individuals in order to carry out the ultimate function of a company: making money. It is important to stress that I am not against "making money", but I am against many of the means used to reach that goal.
In the end, the product is us, or our privacy. And when I say "us" instead of "them", it is because I made another reflection…
It may seem paradoxical at first, but stop and think for a moment. Privacy is indeed a right of the individual, but when you choose not to have it, you are making that choice on behalf of everyone who communicates with you. After all, if you don't mind someone reading your messages, then any communication that reaches you can and will be read. And if that communication comes from someone who values their own privacy, it won't make any difference: the message will be read anyway, because you chose that.
I am used to hearing people say they are not important enough to arouse the interest of some government that would want to spy on them. "Therefore", people say, "I don't need to worry". Well, I think this argument in no way invalidates the fact that protecting your own privacy is important. It doesn't matter how public someone is; if they don't value their privacy, they are giving up something that directly or indirectly affects many people.
My point here is simple. Do your part and protect your privacy. Nobody will do it for you, but everyone needs to, and can, do their share. It is a joint effort, but one that depends on everyone's cooperation. If someone close to you doesn't care, you will probably be harmed.
]]>You can read a detailed description of the problem in the message Gary sent to the gdb-patches mailing list, but to summarize: GDB needs to interface with the linker in order to identify which shared libraries were loaded during the inferior's (i.e., the program being debugged) life.
Nowadays, what GDB does is to put a breakpoint in `_dl_debug_state`, an empty function called by the linker every time a shared library is loaded (the linker calls it twice: once before modifying the list of loaded shlibs, and once after). But GDB has no way to know what has changed in the list of loaded shlibs, and therefore it needs to load the entire list every time something happens. You can imagine how bad this is for performance…
What Gary did was to put SDT probes strategically in the linker, so that GDB could use them when checking for changes in the list of loaded shlibs. It improves performance a lot: GDB no longer needs to stop twice every time a shlib is loaded (it only needs to do that when `stop-on-solib-events` is set); it just stops at the right probe, which reports the address of the link-map entry of the first newly added library. This also means GDB no longer needs to walk through the list of shlibs to identify what has changed: you get that for free by examining the probe's argument.
Gary also mentions a discrepancy that happened on Solaris libc, which has also been solved by his patch.
And now, the most impressive thing: the numbers! Take a look at this table, which shows the huge performance improvement when using lots of shlibs (times in seconds):
| Number of shlibs | 128 | 256 | 512 | 1024 | 2048 | 4096 |
|---|---|---|---|---|---|---|
| Old interface | 0 | 1 | 4 | 12 | 47 | 185 |
| New interface | 0 | 0 | 2 | 4 | 10 | 36 |
Impressive, isn’t it?
This is one of the things I like most about Free Software projects: the possibility of extending and improving things by building on what others did before. When I hacked GDB to implement the integration between it and SystemTap, I had absolutely no idea that this could be used to improve the interface between the linker and the debugger (though I am almost sure Tom was already thinking ahead!). It is a pleasure, and I feel proud, to see such things happen. It makes me more and more certain that Free Software is the way to go :-).
]]>The DFD (or Document Freedom Day) 2013 in Campinas was organized by the LibrePlanet São Paulo (link in pt_BR) group. If you follow this blog, and if you speak portuguese, then you have probably read the announcement of the group that I made last year. If you haven’t: LibrePlanet São Paulo is part of the LibrePlanet project (sponsored by the Free Software Foundation), and "… is a global network of free software activists and teams working together to help further the ideals of software freedom by advocating and contributing to free software.".
The DFD 2013 was an important event to us because it was the first serious event that we organized as a group. Despite some mistakes, I believe we did fine and learned some great lessons for the next events that we plan to do. By the way, if you want to see the official page which we used to promote the event (and organize it too), take a look here. The page is in pt_BR (Portuguese).
Basically, we should have (a) settled on the venue sooner, which would have made it possible to (b) start sending announcements about the event earlier. We also should have contacted the Document Freedom organization and asked for swag and banners earlier, because when we did, it was too late for the shipment to arrive in time. And last but not least, we should really have taken pictures!! Unfortunately, I have absolutely no pictures to post here, so you will have to take my word for everything I write…
But well, nothing is perfect, and hey, the event happened! So let's talk about it :-).
DFD 2013 took place on Wednesday, March 27th. After some discussion, we decided to schedule the event from 13h (1 p.m.) to 17h (5 p.m.), with 4 presentations of approximately 50 minutes each. The venue chosen was CCUEC, the Center of Computing at the University of Campinas, UNICAMP. This center has some great people working at it who have been involved with Free Software since the beginning of the movement, particularly Rubens Queiroz de Almeida, a very nice guy (very famous in the Brazilian Free Software scene) who helped us a lot with the organization of this event.
We understand that holding the event on a Wednesday afternoon made it very hard for most people to attend, and that is probably the main reason for the low turnout: only 8 people in the audience. I have to say I was a little frustrated at first, but hey, what really matters is that we spread the word about Free Software to the 8 brave souls there, who will hopefully spread the word to more people, and so on :-). So, it was time for the show to begin!
Our schedule was (presentation titles translated):
My presentation was scheduled to be the first one, and I really liked how it went (surprise!). It was virtually the first time I gave a "philosophical" talk, and an important one: a general presentation about Free Software, its history, its present, and a little bit of its future. What I liked most is that I focused less on the "freedom" part and more on the "respect" part of the philosophy. I did this because I wanted to use a different argument that had been on my mind for a long time: that the main thing behind Free Software is respect towards others, and only with that can one achieve freedom.
I watched Rubens too, who gave an excellent presentation about why we need free documents and standards. Rubens is very talkative and warm, which makes the audience feel relaxed. From what I noticed, people liked his presentation a lot.
Unfortunately, Ricardo Panaggio had a problem with his computer before his presentation, so we decided to switch: Raniere Silva would take his place as the third presenter while Ricardo tried to fix the problem. I was helping Ricardo, so I was unable to watch Raniere's talk. In the end, we could not solve the problem, and Ricardo decided to give his presentation without any slides. In my opinion, he managed to catch everyone's attention anyway (also because HTML5 is such a hot topic today), so I guess the missing slides were not so important after all!
At 17h, we declared DFD 2013 finished. I still had time to hand out some Free Software stickers (from the FSF), and to talk a little with two or three people there, who were satisfied with the presentations! It made my day, of course :-). And because of that, I now feel motivated to organize another DFD next year!
I would like to thank Rubens Queiroz for helping with the promotion, the location, and the presentation during the event. DFD 2013 would have been impossible without his help. Thanks, Rubens!
The LibrePlanet São Paulo team, especially Ricardo Panaggio, was also deeply involved with me in the organization. And I hope we manage to make a bigger event next year!
Finally, I would like to thank everyone who attended the event, even those who watched only one talk. Your presence there was really, really important to all of us. See you all next year!
]]>Since my childhood, I have been fascinated by the power of words. I have always liked reading a lot, and despite not knowing the grammar rules (either in pt_BR or en_US, the former being my native language, the latter being the only other language I can consider myself fluent in), I am deeply interested in what words (and their infinite meanings) can do to us. (If you can read Portuguese, and if you also enjoy this subject, I strongly recommend a novel by José Saramago called "O Homem Duplicado".) So now, what I am seeing everywhere is that people are being as careless as ever with words, their meanings, and especially their implications.
The problem I am seeing, and it is a serious problem in my opinion, is the constant use of the term “free software” when “open source” should be used. This is obviously not a recent problem, and I really cannot recall when was the first time I noticed this happening. But maybe because I am much more involved with (real) free software movements now, I have the strong impression that this “confusion” is starting to grow out of control. So here I am, trying to convince some people to be a little more coherent.
When you create a group to talk about free software, or when you join a group whose goal is to promote free software ideas, you should really do that. First of all, you should understand what free software is about. It is not about open source, for starters. It is also a political movement, not only a technical one.
I was part of a group at my former university which had "Free Software" in its name. For a long time, I believed the group really was about free software, even after receiving e-mails with harsh criticism of my opinions whenever I defended something related to the free software ideology (e.g., when I suggested that we should not have a Facebook page, which had been created for the group by one of its members). Well, when I could not hide the truth from myself anymore, I packed my things and left the group (this was actually the start of a new free software group that I founded with other friends in Brazil).
I also really like going to events. And not only because of the presentations, but mostly because I like talking to people. Brazilians are fortunately very warm and talkative, so events here are fertile soil for my social skills :-). However, even when the event has "free software" in its name and description, it is very hard to find someone who really understands the philosophy behind the term. And I'm not just talking about the attendees: the event staff is usually ignorant too (and prefers to remain that way)! I feel really depressed when I start to defend (real) free software and people look at me and say "You're radical.". It's like going to a "Debugger Conference" and being ridiculed for talking about GDB! I cannot understand this…
But the worst part of all this is that newcomers are learning that "free software" is "Linux", or something else which is not free software. This is definitely not a good thing, because people should be aware that the world is not just about software development: there are serious issues, including the threats to privacy and freedom posed by Facebook/Google/Apple/etc., which we should fight against. Free software is about that as well. Awareness should be raised, actions should be taken, and people should refuse those impositions.
So, to finish what I want to say: if you do not consider yourself a free software activist, please consider becoming one. And if, after giving it some thought, you decide that you really do not want to be a free software activist, then do not use the name "free software" in your event/group/whatever, unless you really intend to talk about it and not about open source. In other words, if you don't want to help, please don't spread confusion.
]]>I finally found some time in my schedule, and decided to write here to announce the creation of the LibrePlanet São Paulo group!
The LibrePlanet project started in 2006, during a meeting of FSF (Free Software Foundation) members. It was created to help organize ways of bringing the Free Software movement to the attention of the general public.
The groups are organized geographically, and each one is responsible for defining goals and strategies to foster Free Software in its region. One thing must be clear: the goal is to work for Free Software, not open source. To learn more about the definition of Free Software, I recommend reading this article.
The story is a bit long, but I will try to summarize it :-).
It all started when Ricardo Panaggio, Ivan S. Freitas, Raniere Gaia Silva and I began exchanging e-mails about subjects like privacy, free software, free solutions and services, etc. Panaggio and I were already quite unhappy with the direction a local, theoretically "pro free software" group was taking (as with almost everything these days, the name "free software" is there simply because nobody has realized yet that it should be "open source"…). That dissatisfaction had been making us want to create a new group, faithful to the Free Software ideology, in which we could voice our opinions without fear of being crushed by a majority that does not care about "those things".
Well, we started talking, and soon Ivan and Raniere signalled that they would be glad to take part in the group. The soil was fertile for new ideas :-).
One day I woke up to find a message from Raniere in my INBOX saying that he had found something on the Internet about an interesting project called LibrePlanet. That was the missing spark! I remembered that I had already talked about LibrePlanet with Matt Lee, also from the FSF, and after a quick search through the project's wiki I saw that there was no Brazilian group yet. So, after some internal discussion, we decided to create a group for the State of São Paulo.
Today, a little over 2 weeks after its creation, we have 10 members registered on the wiki and about 7 active members on our IRC channel. We also have a mailing list, and we are already starting to discuss possible projects for 2013.
It's simple! Follow these steps:

- On the LibrePlanet wiki, add the template {% raw %}{{user SP}}{% endraw %} to your user page. It makes you a member of the LibrePlanet São Paulo group.
- Join our IRC channel, #lp-br-sp! That is where most of the discussions happen, so it would be great if you could take part in them too!

I think that's it :-). If you still have any questions about anything covered in this post (the group's goals, how to join, etc.), or if you want to leave a comment, feel free!
Free greetings!
]]>I finally got some time to finish this series of posts, and I hope you like the overall result. For those of you who are reading this blog for the first time, you can access the first post here, and the second here.
My goal with this third post is to talk a little bit about how you can use the SDT probes with tracepoints inside GDB. Maybe this particular feature will not be so helpful to you, but I recommend reading the post either way. I will also give a brief explanation about how the SDT probes are laid out inside the binary. So, let's start!
In my last post, I forgot to mention that the SDT probe support present in older versions of Fedora GDB is not exactly as I described here. This is because Fedora GDB adopted this feature much earlier than upstream GDB itself, so while this has a great positive aspect in terms of the distro's philosophy (i.e., Fedora carries leading-edge features, so if you want to know what the FLOSS community will look like in a few months, use it!), it also has the downside of delivering older/different versions of features in older Fedoras. But of course, this SDT feature will be fully available in Fedora 18, to be announced soon.
My suggestion is that if you use a not-so-recent Fedora (like Fedora 16, 15, etc.), please upgrade it to the latest version, or compile your own GDB (it's not that hard; I will make a post about it in the next days/weeks!).
With that said, let’s move on to our main topic here.
Before anything else, let me explain what a tracepoint is. Think of it as a breakpoint which doesn't stop the program's execution when it hits. In fact, it's a bit more than that: you can define actions associated with a tracepoint, and those actions will be performed when the tracepoint is hit. Neat, huh? :-)
There is a nice description of what a tracepoint is in the GDB documentation; I recommend you give it a read to understand the concept.
Ok, so now we have to learn how to put tracepoints in our code, and how to define actions for them. But before that, let's remember our example program:
#include <sys/sdt.h>
int
main (int argc, char *argv[])
{
int a = 10;
STAP_PROBE1 (test_program, my_probe, a);
return 0;
}
Very simple, isn't it? Ok, on to the tracepoints now, my friends.
Tracepoints inside GDB
In order to properly use tracepoints inside GDB, you will need to use gdbserver, a tiny version of GDB suitable for debugging programs remotely, over the net or a serial line. In short, this is because GDB cannot put tracepoints on a program running directly under it, so we have to run the program inside gdbserver and then connect GDB to it.
gdbserver
In our case, we will just start gdbserver on our machine, tell it to listen on some high port, and connect to it through localhost, so there will be no need for access to another computer or device.
First of all, make sure you have gdbserver installed. On Fedora, the package to install is gdb-gdbserver. Once you have it installed, you can do:
$ gdbserver :3001 ./test_program
Process ./test_program created; pid = 17793
Listening on port 3001
The `:3001` argument instructs gdbserver to listen on port 3001 of your loopback interface, a.k.a. localhost.
You will notice that gdbserver stays there indefinitely, waiting for new connections to arrive. Don't worry, we will connect to it soon!
Connecting GDB to gdbserver
Now, go to another terminal and start GDB with our program:
$ gdb ./test_program
...
(gdb) target remote :3001
Remote debugging using :3001
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
0x0000003d60401530 in _start () from /lib64/ld-linux-x86-64.so.2
The command you have to use inside GDB is `target remote`. It takes as an argument the host and port to which you want to connect. In our case, we just want to connect to localhost, port 3001. If you saw output like the above, great, things are working for you (don't pay attention to the messages about missing debug information). If you didn't, please check that you're connecting to the right port, and that no other service is using it.
Ok, so now it is time to start our trace experiment!
Putting tracepoints
Every command below should be issued in GDB, not in gdbserver!
In your GDB prompt, put a tracepoint in the probe named `my_probe`:
(gdb) trace -probe-stap my_probe
Tracepoint 1 at 0x4005a9
As you can see, the `trace` command takes exactly the same arguments as the `break` command. Thus, you need to use the `-probe-stap` modifier in order to instruct GDB to put the tracepoint in the probe.
And now, let's define the actions associated with this tracepoint. To do that, we use the `actions` command, which is an interactive command inside GDB. It takes some specific keywords, and if you want to learn more about it, please take a look at this link.
For this example, we will use only the `collect` keyword, which tells GDB to… hm… collect something :-). In our case, it will collect the probe's first argument, or `$_probe_arg0`, as you may remember.
(gdb) actions
Enter actions for tracepoint 1, one per line.
End with a line saying just "end".
>collect $_probe_arg0
>end
(gdb)
Simple as that. Finally, we have to define a breakpoint at the last instruction of our program, because we need to keep it running inside gdbserver in order to examine the tracepoints later. If we didn't put this breakpoint, our program would finish and gdbserver would not be able to provide information about what happened with our tracepoints. In our case, we will simply put a breakpoint on line 10, i.e., on the `return 0;`:
Ok, time to run our trace experiment. First, we must issue a `tstart` to tell GDB to start monitoring the tracepoints. Then we can continue our program normally.
(gdb) tstart
(gdb) continue
Continuing.
Breakpoint 1, main (argc=1, argv=0x7fffffffde88) at /tmp/test_program.c:10
10 return 0;
(gdb) tstop
(gdb)
Remember, GDB is not going to stop your program, because tracepoints are designed not to interfere with its execution. Also notice that we stopped the trace experiment after the breakpoint hit, using the `tstop` command.
Now we can examine what the tracepoint has collected. First, we use the `tfind` command to make sure the tracepoint has hit, and then we can inspect what we ordered it to collect:
(gdb) tfind start
Found trace frame 0, tracepoint 1
8 STAP_PROBE1 (test_program, my_probe, a);
(gdb) p $_probe_arg0
$1 = 10
And it works! Notice that we print the probe argument using the same notation as with breakpoints, even though we are not actually executing the `STAP_PROBE1` instruction. What does that mean? Well, with the `tfind start` command we tell GDB to use the trace frame collected during the program's execution, which, in this case, contains the probe argument. If you know GDB, think of it as if we were using the `frame` command to jump back to a specific frame, where we would have access to its state.
This is a very simple example of how to use the SDT probe support in GDB with tracepoints. There is much more you can do, but I hope I have explained the basics so that you can start playing with this feature.
How the SDT probe is laid out in the binary
You might be interested in learning how the probes are created inside the binary. Other than reading the source code of /usr/include/sys/sdt.h, which is the heart of the whole feature, I also recommend this page, which explains in detail what's going on under the hood. I also recommend that you study a little about how the ELF format works, specifically the notes in an ELF file.
After this series of blog posts, I expect that you will now be able to use the not-so-new feature of SDT probe support in GDB. Of course, if you find a bug while using it, please feel free to report it using our bugzilla. And if you have questions, use the comment system below and I will answer ASAP :-).
See ya, and thanks for reading!
]]>It's been a long time since I wrote the first post about this subject, and since then the patches have been accepted upstream, and GDB 7.5 now has official support for userspace SystemTap probes :-). Yay!
Well, but enough of cheap talk, let’s get to the business!
Frank Ch. Eigler, one of SystemTap’s maintainers, kindly mentioned something that I should say about SystemTap userspace probes.
Basically, it should be clear that SDT probes are not the only kind of userspace probing one can do with SystemTap. There is yet another kind of probe (maybe even more powerful, depending on the goals): DWARF-based function/statement probes. SystemTap has supported this kind of probing mechanism for quite a while now.
It is not the goal of this post to explain it in detail, but you might want to give it a try by compiling your binary with debuginfo support (use the -g flag on GCC), and doing something like:
$ stap -e 'probe process("/bin/foo").function("name") { log($$parms) }' -c /bin/foo
$ stap -e 'probe process("/bin/foo").statement("*@file.c:443") { log($$vars) }' -c /bin/foo
And that’s it. You can read SystemTap’s documentation, or this guide to learn how to add userspace probes.
Well, now let's get to the interesting part. It is time to make GDB work with the SDT probe that we have put in our example code. Let's remember it:
#include <sys/sdt.h>
int
main (int argc, char *argv[])
{
int a = 10;
STAP_PROBE1 (test_program, my_probe, a);
return 0;
}
It is a very simple example, and we will have to extend it later in order to show more features. But for now, it will do.
The first thing to do is to open GDB (with SystemTap support, of course!), and check whether it can actually see the probe inserted in our example.
$ gdb ./test_program
GNU gdb (GDB) 7.5.50.20121014-cvs
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
...
(gdb) info probes
Provider Name Where Semaphore Object
test_program my_probe 0x00000000004004ae /home/sergio/work/src/git/build/gdb/test_program
Wow, it actually works! :-)
If you have seen something like the above, it means your GDB is correctly recognizing SDT probes. If you see an error, or if your GDB doesn't have the `info probes` command, then you'd better make sure you have a recent version of GDB, otherwise you won't be able to use the SDT support.
Anyway, now it is time to start using this support. The first thing I want to show you is how to put a breakpoint in a probe.
(gdb) break -probe-stap my_probe
Breakpoint 1 at 0x4004ae
That's all! We have chosen to extend the `break` command in order to support the new `-probe-stap` parameter. If you're wondering "why the -probe prefix?", it is because I was asked to implement a complete abstraction layer inside GDB in order to allow more types of probes to be added in the future. So, for example, if someone implements support for a hypothetical type of probe called `xyz`, you would have `break -probe-xyz`. It took me a little more time to implement this layer, but it is worth the effort.
Anyway, as you have seen above, GDB recognized the probe's name and correctly put a breakpoint on it. You can also confirm that it did the right thing by matching the address reported by `info probes` with the one reported by `break`: they should be the same.
Ok, so now, with our breakpoint in place, let's run the program and see what happens.
(gdb) run
Starting program: /home/sergio/work/src/git/build/gdb/test_program
Breakpoint 1, main (argc=1, argv=0x7fffffffdf68) at /tmp/example-stap.c:8
8 STAP_PROBE1 (test_program, my_probe, a);
As you can see, GDB stopped at the exact location of the probe. Therefore, you are now able to put marks (i.e., probes) in your source code which are location-independent. It doesn't really matter where in the source code your probe is, and it also doesn't matter if you change the code around it, changing the line numbers, or even move it to another file. GDB will always find your probe, and always stop at the right location. Neat!
But wait, there’s more! Remember when I told you that you could also inspect the probe’s arguments? Yes, let’s do it now!
Just remember that, in SDT parlance, the current probe's argument is `a`. So let's print its value.
(gdb) p $_probe_arg0
$1 = 10
(gdb) p a
$2 = 10
“Hey, captain, it seems the boat really floats!”
Check the source code above, and convince yourself that `a`'s value is 10 :-). As you might have noticed, I used a fairly strange way of printing it. That is because the probe's arguments are available inside GDB by means of convenience variables. You can see a list of them here.
Since SDT probes can have up to 12 arguments (i.e., you can use `STAP_PROBE1` … `STAP_PROBE12`), we have created 12 convenience variables inside GDB, named `$_probe_arg0` through `$_probe_arg11`. I know, these are not easy names to remember, and even the relation between the SDT naming and the GDB naming is not direct (i.e., you have to subtract 1 from the SDT probe number). If you are not satisfied with this, please open a bug in our bugzilla and I promise we will discuss other options.
I would like to emphasize something here: just as you don't need debuginfo support for dealing with probes inside GDB, you also don't need it for dealing with their arguments. It means you can compile your code without debuginfo support, but still have access to some important variables/expressions when debugging it. Depending on how GCC optimizes your code, you may experience some difficulties with argument printing, but so far I haven't heard of anything like that.
Ok, now we have covered more things about the SDT probe support inside GDB, and I hope you understood all the concepts. It is not hard to get things going with this, especially because you don't need extra libraries to make it work.
In the next post, I intend to finish this series by explaining how to use tracepoints with SDT probes. Also, as I said in the previous post of this series, maybe I will talk a little bit about how the SDT probes are organized within the binary.
See you soon!
]]>With this post I will start to talk about the integration between GDB and SystemTap. This is something that Tom Tromey and I did during the last year. The patch is being reviewed as I write this post, and I expect to see it checked in in the next few days/weeks. But let's get our hands dirty…
You probably use (or have at least heard of) SystemTap, and maybe you think the tool is only useful for kernel inspection. If that's your case, I have good news: you're wrong! You can actually use SystemTap to inspect userspace applications too, by using what we call SDT probes, or Static Defined Tracing probes. This is a very cheap and easy way to include probes in your application, and you can even specify arguments to those probes.
In order to use the probes (see an example below), you must include the <sys/sdt.h> header file in your source code. If you are using a Fedora system, you can obtain this header by installing the package systemtap-sdt-devel, version 1.4 or greater.
Here’s a simple example of an application with a one-argument probe:
#include <sys/sdt.h>
int
main (int argc, char *argv[])
{
int a = 10;
STAP_PROBE1 (test_program, my_probe, a);
return 0;
}
As you can see, this is a very simple program with one probe, which contains one argument. You can now compile the program:
$ gcc test_program.c -o test_program
Now you must be thinking: "Wait, wait… Didn't you just forget to link this program against some SystemTap-specific library or something?" And my answer is no. One of the spectacular things about this <sys/sdt.h> header is that it has no dependencies at all! As Tom said in his blog post, this is "a virtuoso display of ELF and GCC asm wizardry".
If you want to make sure your probe was inserted in the binary, you can use the readelf command:
$ readelf -x .note.stapsdt ./test_program
Hex dump of section '.note.stapsdt':
0x00000000 08000000 3a000000 03000000 73746170 ....:.......stap
0x00000010 73647400 86044000 00000000 88054000 sdt...@.......@.
0x00000020 00000000 00000000 00000000 74657374 ............test
0x00000030 5f70726f 6772616d 006d795f 70726f62 _program.my_prob
0x00000040 65002d34 402d3428 25726270 29000000 e.-4@-4(%rbp)...
(I will think about writing an explanation of how the probes are laid out in the binary, but for now you just have to check that you actually see output from this readelf command.)
You can also use SystemTap to perform this verification:
$ stap -L 'process("./test_program").mark("*")'
process("./test_program").mark("my_probe") $arg1:long
So far, so good. If you see an output like the one above, it means your probe is correctly inserted. You could obviously use SystemTap to inspect this probe, but I won’t do this right now because this is not the purpose of this post.
For now, we have learned how to insert an SDT probe in our source code, compile the program, and verify that the probe is correctly present using both readelf and SystemTap. In the next post, I will talk about the GDB support that allows you to inspect SDT probes, print their arguments, and gather other information about them. I hope you like it!
I have been working with GDB for quite some time now, and even though the project officially uses CVS (yes, you read it correctly, it is CVS indeed!) as its version control system, fortunately we also have a git mirror. In the end, what happens is that almost every developer uses the git mirror and only goes to CVS to commit something. But that is another discussion. Aside from this git mirror, we also have the Archer repository (which uses git by default).
My plan here is to show you how I do my daily work with GDB. The workflow is pretty simple, but maybe you will see something here that might help you.
The first thing to do is to check out the code. I only have one GDB repository here, and I make branches out of it whenever I want to hack. So, to check out (or clone, in git’s parlance) the code, I do (or did):
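The commands are roughly these (the repository URLs are from memory and may have changed since, so treat them as illustrative):

```shell
# Clone the GDB git mirror hosted on sourceware.org.
git clone git://sourceware.org/git/gdb.git gdb
cd gdb

# Add the Archer repository as a second remote, so that its
# branches can be fetched into the same clone.
git remote add archer git://sourceware.org/git/archer.git
git fetch archer
```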
With this, we have just cloned the GDB repository, and also added another remote (i.e., repository). This is useful because we might want to hack on a branch which is on Archer, but use GDB’s master branch as a base.
So, now it’s time to create a new branch. Here I use one of my little “tricks” (taught to me by my friend Dodji), which is the command git-new-workdir. This is a nice command because it creates a new working directory for your project!
Maybe you’re wondering why this is so cool. Well, if you have ever worked with git, and more specifically, if you have ever used more than one branch at a time, then maybe you will understand my excitement. In this scenario, having to constantly switch between branches is not rare. When you have uncommitted work in your tree you can always use git stash, but that is not the ideal solution (for me). Sometimes I would forget what was on the stash, and later when I checked it, it was full of crap. Also, I like to have a separate directory for every project I am working on.
It is also important to mention that git-new-workdir lives under the directory /usr/share/doc/git-VERSION/contrib/workdir/, so I created an alias that will automagically call the script for me:
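Something like this in ~/.bashrc (the git version in the path is whatever happens to be installed, so adjust it accordingly):

```shell
# Alias calling the contrib script shipped with the git package;
# the version number in the path is just an example.
alias git-new-workdir='sh /usr/share/doc/git-1.7.4/contrib/workdir/git-new-workdir'
```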
So, after setting up the script, here is what I do:
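Roughly this (the branch and directory names are just examples matching the project mentioned below):

```shell
# Create the feature branch in the main clone, based on master...
cd ~/gdb
git branch lazy-debuginfo-reading master

# ...and give it its own working directory under ~/work, so it can
# be hacked on without touching the main tree.
git-new-workdir ~/gdb ~/work/lazy-debuginfo-reading lazy-debuginfo-reading
```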
In order to build the project, I create a build-64 directory inside my project directory (which, in the example above, is work/lazy-debuginfo-reading).
GDB fortunately supports VPATH building (i.e., build the project outside of the source tree). I strongly recommend you to use it.
As you may have noticed, I use -g3 (include debuginfo) and -O0 (do not optimize the code) in CFLAGS. Also, since some of the features I work on may affect code in other architectures, I use --enable-targets=all. It tells configure to compile everything related to all architectures (not only x86_64, for example). At last, I specify a separate debug directory which GDB should use to search for debuginfo files.
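Putting it all together, the configure step looks more or less like this (the separate debug directory is just where I happen to keep debuginfo files; yours may differ):

```shell
# Inside the source tree (work/lazy-debuginfo-reading):
mkdir build-64 && cd build-64

# VPATH build: configure is invoked from the build directory,
# pointing back at the source tree one level up.
CFLAGS='-g3 -O0' ../configure \
  --enable-targets=all \
  --with-separate-debug-dir=/usr/lib/debug
make
```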
After that, you will have a fresh GDB binary compiled in the build-64 directory. But that is not enough yet, since you will also want to test GDB and make sure you didn’t insert a bug while hacking on it. In my next post, I will explain what my “testflow” looks like. I hope it will be useful for someone :-).
Stay tuned!