
How to create a snap for a Python app with networking using snapcraft in Ubuntu 16.04

Update #1, 6 Feb 2017: httpstat needs curl. Originally, this HowTo compiled curl freshly from the github source. Now, it shows how to reuse the existing curl package from the Ubuntu repositories.

In this post we see how to create a snap package (or just snap) of some software in Ubuntu 16.04. A snap is a confined package that can only do as much as we allow it to. Once we create it, we can install the same snap on different Linux distributions!

Each piece of software has different needs. This one 1) requires access to the Internet and 2) requires an additional binary to be added to the snap. And nothing else.

Here is what we will be doing today.

  1. Get to know this cool software that we are going to snap, httpstat.
  2. Setup snapcraft that helps us create the snap.
  3. Start building the snap incrementally, trial and error.
  4. Complete the httpstat snap
  5. Register the name on the Ubuntu Store
  6. Upload and release the snap to the Ubuntu Store so anyone can install it!
  7. Install and run the new snap!

About httpstat

httpstat can be found at https://github.com/reorx/httpstat and it is a network utility that shows how fast the access to a website is.

$ apt search httpstat
Sorting... Done
Full Text Search... Done

It is not available as an apt package yet, all the more reason to create a snap.

Let’s get the source of httpstat and run it manually. This will give us a good idea of what dependencies it may have.

$ git clone https://github.com/reorx/httpstat
Cloning into 'httpstat'...
remote: Counting objects: 251, done.
remote: Total 251 (delta 0), reused 0 (delta 0), pack-reused 251
Receiving objects: 100% (251/251), 330.04 KiB | 336.00 KiB/s, done.
Resolving deltas: 100% (138/138), done.
Checking connectivity... done.
$ cd httpstat/
$ ls
httpstat.py  httpstat_test.sh  LICENSE  Makefile  README.md  screenshot.png  setup.py
$ python httpstat.py 
Usage: httpstat URL [CURL_OPTIONS]
       httpstat -h | --help
       httpstat --version

  URL     url to request, could be with or without `http(s)://` prefix

  CURL_OPTIONS  any curl supported options, except for -w -D -o -S -s,
                which are already used internally.
  -h --help     show this screen.
  --version     show version.

  HTTPSTAT_SHOW_BODY    Set to `true` to show response body in the output,
                        note that body length is limited to 1023 bytes, will be
                        truncated if exceeds. Default is `false`.
  HTTPSTAT_SHOW_IP      By default httpstat shows remote and local IP/port address.
                        Set to `false` to disable this feature. Default is `true`.
  HTTPSTAT_SHOW_SPEED   Set to `true` to show download and upload speed.
                        Default is `false`.
  HTTPSTAT_SAVE_BODY    By default httpstat stores body in a tmp file,
                        set to `false` to disable this feature. Default is `true`
  HTTPSTAT_CURL_BIN     Indicate the curl bin path to use. Default is `curl`
                        from current shell $PATH.
  HTTPSTAT_DEBUG        Set to `true` to see debugging logs. Default is `false`

$ python httpstat.py www.google.com
Connected to from

HTTP/1.1 302 Found
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Location: http://www.google.gr/?gfe_rd=cr&ei=4bWUWJAnqML68Ae3gKqYBQ
Content-Length: 258
Date: Fri, 03 Feb 2017 14:54:57 GMT

Body stored in: /tmp/tmpRGDhKE

  DNS Lookup   TCP Connection   Server Processing   Content Transfer
[    12ms    |      51ms      |       52ms        |        1ms       ]
             |                |                   |                  |
    namelookup:12ms           |                   |                  |
                        connect:63ms              |                  |
                                      starttransfer:115ms            |

We cloned the repository, noticed that there is an httpstat.py file in there, and ran it. The Python script accepts a URL as a parameter, and that’s it. No need for compilation, aye!

We show the httpstat output for google.com. httpstat shows how much time each individual stage of the transfer took; in this case (no SSL/TLS, just the redirected page) the stages are

  1. DNS Lookup
  2. TCP Connection
  3. Server Processing
  4. Content Transfer

A system administrator would want to reduce those times. For example, they could switch to a different server, get a faster server, or optimize the server software so that it delivers the content much faster.
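The numbers under the chart are cumulative curl timings; each stage in the bar is the difference between consecutive values. Here is a quick sketch of the arithmetic for the run above (values in ms, taken from that output):

```shell
# Stage durations from the cumulative timers shown above:
# namelookup=12, connect=63, starttransfer=115; the total is
# starttransfer plus the 1ms Content Transfer shown in the chart.
namelookup=12; connect=63; starttransfer=115; total=116
dns=$namelookup
tcp=$((connect - namelookup))        # 63 - 12 = 51ms TCP Connection
server=$((starttransfer - connect))  # 115 - 63 = 52ms Server Processing
transfer=$((total - starttransfer))  # 116 - 115 = 1ms Content Transfer
echo "DNS=${dns}ms TCP=${tcp}ms Server=${server}ms Transfer=${transfer}ms"
```

This matches the 12 | 51 | 52 | 1 breakdown in the chart.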

Set up Snapcraft

snapcraft is a utility that helps us create snaps. Let’s install snapcraft.

$ sudo apt update
Reading state information... Done
All packages are up to date.
$ sudo apt install snapcraft
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
Preparing to unpack .../snapcraft_2.26_all.deb ...
Unpacking snapcraft (2.26) ...
Setting up snapcraft (2.26) ...

In Ubuntu 16.04, snapcraft was recently updated (early Feb 2017) and has a few differences from the previous version. Make sure you have snapcraft 2.26 or newer.

Let’s create a new directory for the development of the httpstat snap and initialize it with snapcraft so that it creates the necessary initial files.

$ mkdir httpstat
$ cd httpstat/
$ snapcraft init
Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started
$ ls -l
total 4
drwxrwxr-x 2 myusername myusername 4096 Feb   3 17:09 snap
$ ls -l snap/
total 4
-rw-rw-r-- 1 myusername myusername 676 Feb   3 17:09 snapcraft.yaml
$ _

We are in this httpstat/ directory and from here we run snapcraft in order to create the snap. snapcraft will take the instructions from snap/snapcraft.yaml and do its best to create the snap.

These are the contents of snap/snapcraft.yaml:

$ cat snap/snapcraft.yaml 
name: my-snap-name # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap # 79 char long summary
description: |
  This is my-snap's description. You have a paragraph or two to tell the
  most important story about your snap. Keep it under 100 words though,
  we live in tweetspace and your description wants to look good in the snap
  store.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil
$ _

This snap/snapcraft.yaml configuration file is actually usable and can create an (empty) snap. Let’s create this empty snap, install it, uninstall it and then clean up to the initial pristine state.

$ snapcraft 
Preparing to pull my-part 
Pulling my-part 
Preparing to build my-part 
Building my-part 
Staging my-part 
Priming my-part 
Snapping 'my-snap-name' |                                                                 
Snapped my-snap-name_0.1_amd64.snap
$ snap install my-snap-name_0.1_amd64.snap 
error: cannot find signatures with metadata for snap "my-snap-name_0.1_amd64.snap"
$ snap install my-snap-name_0.1_amd64.snap --dangerous
error: cannot perform the following tasks:
- Mount snap "my-snap-name" (unset) (snap "my-snap-name" requires devmode or confinement override)
Exit 1
$ snap install my-snap-name_0.1_amd64.snap --dangerous --devmode
my-snap-name 0.1 installed
$ snap remove my-snap-name
my-snap-name removed
$ snapcraft clean
Cleaning up priming area
Cleaning up staging area
Cleaning up parts directory
$ ls
my-snap-name_0.1_amd64.snap  snap/
$ rm my-snap-name_0.1_amd64.snap 
rm: remove regular file 'my-snap-name_0.1_amd64.snap'? y
removed 'my-snap-name_0.1_amd64.snap'
$ _

While developing the snap, we will be going through this cycle of creating the snap, testing it and then removing it. There are ways to optimize this process a bit, which we will learn soon.

In order to install the snap from a .snap file, we had to use --dangerous because the snap has not been digitally signed. We also had to use --devmode because snapcraft.yaml specifies the developer mode, which is a relaxed (in terms of permissions) development mode.

Creating the httpstat snapcraft.yaml, first part

Here is the first part of the httpstat snapcraft.yaml. The first part is the description, and we are not going to change anything here later. The second part contains the snap creation instructions; that is the interesting part for trial and error.

$ cat snap/snapcraft.yaml 
name: httpstat # you probably want to 'snapcraft register <name>'
version: '1.1.3' # just for humans, typically '1.2+git' or '1.3.2'
summary: Curl statistics made simple # 79 char long summary
description: |
    httpstat is a utility that analyses how fast a website is
    when you are trying to connect to it.
    This utility is particularly useful to Web administrators

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

First, for the name, we use httpstat. We are going to actually register it (snapcraft register) further down in this HowTo, just before publishing the snap. If the software were a well-known major package maintained by someone else, we might need some extra steps, such as contacting the maintainers or making our own httpstat-unofficial snap.

Second, we select a version. Instead of using the latest development version, which might not work or might happen to be broken momentarily, you can pick the stable branch or the latest tag.

The tag “v1.1.3” will do!
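As a quick illustration (in a hypothetical scratch repository, not httpstat itself, and assuming git is installed) of how a tag pins an exact snapshot that snapcraft’s source-tag can later check out:

```shell
# Scratch repo under /tmp: tag the current commit, then resolve it by tag,
# the same way `source-tag: v1.1.3` checks out a tagged snapshot.
rm -rf /tmp/tagdemo && mkdir /tmp/tagdemo && cd /tmp/tagdemo
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "release 1.1.3"
git tag v1.1.3
git describe --tags          # prints the pinned tag: v1.1.3
```

Whatever happens on the master branch afterwards, the tag keeps pointing at this commit, so the snap build stays reproducible.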

Third and fourth, we add a summary and a description from text we got from the httpstat Github page.

Fifth, we select a grade. That would be either devel or stable. This snap is going to be stable.

Sixth, we select the confinement of the snap. The best is strict, which means that almost no access outside the snap is allowed; it is us who must explicitly specify (afterwards) what is actually allowed. If you were to keep it at devmode, there is no confinement at all and everything is allowed, just like with deb packages.

Creating the snapcraft.yaml, second part

Let’s start off with this initial version of the second part. This guide has helped us to figure out the initial stuff.

apps:
  httpstat:
    command: bin/httpstat

parts:
  httpstat:
    plugin: python
    source: https://github.com/reorx/httpstat.git
    source-tag: v1.1.3

First, in the apps section we specify that users will run an executable named httpstat, which can be found at bin/httpstat. Whether the real executable is bin/httpstat or something else is determined by the parts section.

Second, in the parts section we provide instructions on how to process the httpstat target. We specify the python plugin (snapcraft plugin reference), which runs python setup.py build and python setup.py install. Other plugins would run ./configure; make; make install and so on. We also specify the git URL for the source (note that it ends in .git) and the tag (snapcraft source reference).

Let’s join the first part (already in its final form) with this second part (which still needs some more love),

$ cat snap/snapcraft.yaml 
name: httpstat # you probably want to 'snapcraft register <name>'
version: '1.1.3' # just for humans, typically '1.2+git' or '1.3.2'
summary: Curl statistics made simple # 79 char long summary
description: |
    httpstat is a utility that analyses how fast a website is
    when you are trying to connect to it.
    This utility is particularly useful to Web administrators

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  httpstat:
    command: bin/httpstat

parts:
  httpstat:
    plugin: python
    source: https://github.com/reorx/httpstat.git
    source-tag: v1.1.3

Now we can run snapcraft, create the snap, install it and test it on a website. If we get an error, we fix it and try again.

$ snapcraft 
Preparing to pull httpstat 
Pulling httpstat 
Note: checking out '0f0e653309982178302ec1d5023bda2de047a72d'.
Successfully downloaded httpstat
Preparing to build httpstat 
Building httpstat 
Installing collected packages: httpstat
Successfully installed httpstat-1.1.3
Staging httpstat 
Priming httpstat 
Snapping 'httpstat' \                                                                     
Snapped httpstat_1.1.3_amd64.snap
$ snap install httpstat_1.1.3_amd64.snap --dangerous
httpstat 1.1.3 installed
$ httpstat www.google.com
Traceback (most recent call last):
  File "/snap/httpstat/x4/bin/httpstat", line 11, in <module>
  File "/snap/httpstat/x4/lib/python3.5/site-packages/httpstat.py", line 155, in main
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=cmd_env)
  File "/snap/httpstat/x4/usr/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/snap/httpstat/x4/usr/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'curl'
Exit 1

We are almost there. Our confined httpstat snap needs a curl binary. There are two ways to include it. The easy way is to stage the binary from the existing curl package in the Ubuntu repositories. The more involved way is to compile curl freshly from source, which gives us the opportunity to enable compilation flags that cut the binary down to the bare minimum.

If you prefer the more involved way, append these lines to the snapcraft.yaml file (they go in the parts section) and you are set. They add a new curl part, instructing snapcraft to use autotools (./configure; make; make install) to compile curl from the github source.

  curl:
    plugin: autotools
    source: https://github.com/curl/curl.git

In the following, we reuse the existing curl package from the repositories. It is the same .deb file you would get by running sudo apt install curl.

Here are the instructions to reuse an existing APT package from the repositories. We use stage-packages, placed at the same level inside the httpstat part.

    stage-packages:
      - curl

Here is the new version of snapcraft.yaml,

$ cat snap/snapcraft.yaml 
name: httpstat # you probably want to 'snapcraft register <name>'
version: '1.1.3' # just for humans, typically '1.2+git' or '1.3.2'
summary: Curl statistics made simple # 79 char long summary
description: |
    httpstat is a utility that analyses how fast a website is
    when you are trying to connect to it.
    This utility is particularly useful to Web administrators

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  httpstat:
    command: bin/httpstat

parts:
  httpstat:
    plugin: python
    source: https://github.com/reorx/httpstat.git
    source-tag: v1.1.3
    stage-packages:
      - curl

Let’s run snapcraft again and produce an updated snap.

$ snapcraft
Preparing to pull httpstat 
Get:42 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 curl amd64 7.47.0-1ubuntu2.2 [139 kB]
Pulling httpstat 
Successfully downloaded httpstat
Preparing to build httpstat 
Building httpstat 
Collecting httpstat
Installing collected packages: httpstat
Successfully installed httpstat-1.1.3
Staging httpstat 
Priming httpstat 
Snapping 'httpstat' -                                                                     
Snapped httpstat_1.1.3_amd64.snap
$ _

We are ready to install the new snap and test it out.

$ snap install httpstat_1.1.3_amd64.snap --dangerous
httpstat 1.1.3 installed
$ httpstat google.com
> curl -w <output-format> -D <tempfile> -o <tempfile> -s -S google.com
curl error: curl: (6) Couldn't resolve host 'google.com'
Exit 6

We are almost there! The snap does not have access to the Internet because of the strict confinement. We need to allow networking access while keeping everything else blocked (for example, httpstat should not have any access to our home directory).

To allow networking access to a snap, we need to specify that networking is OK for this snap. network is an interface in snaps, one of the many supported interfaces. There is the notion of plugs (consumers of a resource) and slots (providers of a resource). For most cases, like this one, we need a plug for the network interface.

Here is how it is specified (in general, plugs can list multiple items from the interface reference page for snaps),

    plugs:
      - network

Once we add these two lines, we reach the final version of snapcraft.yaml for httpstat.

name: httpstat # you probably want to 'snapcraft register <name>'
version: '1.1.3' # just for humans, typically '1.2+git' or '1.3.2'
summary: Curl statistics made simple # 79 char long summary
description: |
    httpstat is a utility that analyses how fast a website is
    when you are trying to connect to it.
    This utility is particularly useful to Web administrators

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  httpstat:
    command: bin/httpstat
    plugs:
      - network

parts:
  httpstat:
    plugin: python
    source: https://github.com/reorx/httpstat.git
    source-tag: v1.1.3
    stage-packages:
      - curl

Let’s get going, produce the snap, install it and test it!

$ snapcraft clean
Cleaning up priming area
Cleaning up staging area
Cleaning up parts directory
$ snapcraft 
Preparing to pull httpstat 
Get:42 http://gr.archive.ubuntu.com/ubuntu xenial-updates/main amd64 curl amd64 7.47.0-1ubuntu2.2 [139 kB]
Pulling httpstat 
Successfully downloaded httpstat
Preparing to build httpstat 
Building httpstat 
Collecting httpstat
Installing collected packages: httpstat
Successfully installed httpstat-1.1.3
Staging httpstat 
Priming httpstat 
Snapping 'httpstat' /                                                                     
Snapped httpstat_1.1.3_amd64.snap
$ snap install httpstat_1.1.3_amd64.snap  --dangerous
httpstat 1.1.3 installed
$ httpstat https://www.google.com

HTTP/1.1 302 Found
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Location: https://www.google.gr/?gfe_rd=cr&ei=qCWWWPn3BLOT8QfItrHIDA
Content-Length: 259
Date: Sat, 04 Feb 2017 12:04:08 GMT
Alt-Svc: quic=":443"; ma=2592000; v="35,34"

Body stored in: /tmp/tmpwotxl1u0

  DNS Lookup   TCP Connection   SSL Handshake   Server Processing   Content Transfer
[     2ms    |       6ms      |     109ms     |       48ms        |        0ms       ]
             |                |               |                   |                  |
    namelookup:2ms            |               |                   |                  |
                        connect:8ms           |                   |                  |
                                    pretransfer:117ms             |                  |
                                                      starttransfer:165ms            |
$ _

And that’s it! The snap, with strict confinement, works.

When we try the snap further, we might find enhancements and other ways to make it better. If someone wants to help, they need just this small snapcraft.yaml (gist) to try to improve the snap themselves. The first revision of the gist is the version that compiles curl from source.

Register the snap name

Before publishing a snap publicly, you first need to create an account on the Ubuntu Store (use your Launchpad.net account, or Ubuntu Single Sign On (SSO) as it is now called, if you already have one). Then run snapcraft login to log in, and snapcraft register to register the new package name. Here we go,

$ snapcraft login
Enter your Ubuntu One SSO credentials.
Email: me@example.com
Password: **********
We strongly recommend enabling multi-factor authentication: https://help.ubuntu.com/community/SSO/FAQs/2FA
Login successful.
$ snapcraft register httpstat
Registering httpstat.
Congratulations! You're now the publisher for 'httpstat'.

That’s it. Let’s see the snap in the list of our registered snaps,

$ snapcraft list-registered
Name         Since                 Visibility    Price    Notes
httpstat     2017-02-03T17:28:45Z  public        -        -

We are now ready to publish the snap.

Publish the snap

We publish the snap by first running snapcraft push. Note that we push the *.snap file, not the snapcraft.yaml. The whole process, from creating an account to pushing and releasing the snap, is described at Publishing your snap.

$ snapcraft push httpstat_1.1.3_amd64.snap 
Pushing 'httpstat_1.1.3_amd64.snap' to the store.
Uploading httpstat_1.1.3_amd64.snap [                                                              ]   0%
Uploading httpstat_1.1.3_amd64.snap [==============================================================] 100%
Ready to release!|                                                                                       
Revision 1 of 'httpstat' created.

The snap has been uploaded, but not yet released to the public in one of the four channels: stable, candidate, beta or edge. Let’s see the snap online. Log in to https://myapps.developer.ubuntu.com/

When we click on the snap name, we get the detailed page for the snap. Here it is,

The important part is where it says Package status: Ready to publish. When that appears immediately, it means our snap passed all the automated tests and is ready to publish. More complicated snaps require a manual review process.

There are two ways to release a snap to the public, through the Web interface of the Ubuntu Store, and using snapcraft. We examine both options.

Release the snap using the Web interface of the Ubuntu Store

As shown in the screenshot of the snap details Web page, we click where it shows the revision (#1) and the version (1.1.3); it looks like «#1       1.1.3» in the screenshot above. This opens the page with the Technical details for this revision and version.

A snap is released into channels. In the screenshot above, there is a Channels section with a link called Release. Click on the Release link.

Here we select the channels to release the snap into and click on Release. We selected all channels. The most important is stable, which makes the snap public to everyone.

If you performed the release using Ubuntu Store, skip the following subsection about releasing the snap using snapcraft.

Release the snap using snapcraft

Let’s release the snap with snapcraft release. There are four channels: stable, candidate, beta and edge. We release the snap to all four.

$ snapcraft release httpstat 1 edge
The 'edge' channel is now open.
$ snapcraft release httpstat 1 beta
The 'beta' channel is now open.
$ snapcraft release httpstat 1 candidate
The 'candidate' channel is now open.
$ snapcraft release httpstat 1 stable
The 'stable' channel is now open.
$ snapcraft status httpstat
Arch    Channel    Version    Revision
amd64   stable     1.1.3      1
        candidate  1.1.3      1
        beta       1.1.3      1
        edge       1.1.3      1

That’s it!

Install and run the new snap

We are going to uninstall the locally installed httpstat so that we can get it from the Ubuntu Store!

$ snap remove httpstat
httpstat removed
$ snap info httpstat
name:      httpstat
summary:   "Curl statistics made simple"
publisher: simosx
description: |
  httpstat is a utility that analyses how fast a website is
  when you are trying to connect to it.
  This utility is particularly useful to Web administrators
channels:
  stable:    1.1.3 (1) 9MB -
  candidate: 1.1.3 (1) 9MB -
  beta:      1.1.3 (1) 9MB -
  edge:      1.1.3 (1) 9MB -

$ snap install httpstat
httpstat 1.1.3 from 'simosx' installed
$ snap info httpstat
name:      httpstat
summary:   "Curl statistics made simple"
publisher: simosx
description: |
  httpstat is a utility that analyses how fast a website is
  when you are trying to connect to it.
  This utility is particularly useful to Web administrators
commands:
  - httpstat
tracking:    stable
installed:   1.1.3 (1) 9MB -
refreshed:   2017-02-04 20:57:31 +0200 EET
channels:
  stable:    1.1.3 (1) 9MB -
  candidate: 1.1.3 (1) 9MB -
  beta:      1.1.3 (1) 9MB -
  edge:      1.1.3 (1) 9MB -

As an alternative to installing with the snap command, we can also install the snap on the Ubuntu desktop using Ubuntu Software. Here is how it looks!

Everything looks fine! Let’s finish the tutorial by httpstat-ing ubuntu.com 🙂

$ httpstat https://www.ubuntu.com
HTTP/1.1 200 OK
Date: Sat, 04 Feb 2017 20:23:50 GMT
Server: gunicorn/17.5
Strict-Transport-Security: max-age=15768000
Content-Type: text/html; charset=utf-8
Age: 17
Content-Length: 33671
X-Cache: HIT from privet.canonical.com
X-Cache-Lookup: HIT from privet.canonical.com:80
Via: 1.0 privet.canonical.com:80 (squid/2.7.STABLE7)
Vary: Accept-Encoding

Body stored in: /tmp/tmpybh9xabj

  DNS Lookup   TCP Connection   SSL Handshake   Server Processing   Content Transfer
[    10ms    |      61ms      |     135ms     |       63ms        |       123ms      ]
             |                |               |                   |                  |
    namelookup:10ms           |               |                   |                  |
                        connect:71ms          |                   |                  |
                                    pretransfer:206ms             |                  |
                                                      starttransfer:269ms            |

Workaround for bad fonts in Google Earth 5 (Linux)

Update Jan 2010: The following may not work anymore. Use with caution. See relevant discussions at http://forum.ubuntu-gr.org/viewtopic.php?f=5&t=15607 and especially http://kigka.blogspot.com/2010/11/google-6.html

Older post follows:

So you just installed Google Earth 5 and you can’t figure out what’s wrong with the fonts? If your language does not use the Latin script, you cannot see any text?

Here is the workaround. The basic info comes from this google earth forum post and the reply that suggests to mess with the QT libraries.

Google Earth 5 is based on the Qt library, and Google is using their own copies of the Qt libraries. This means that the customisation (including fonts) that you do with qtconfig-qt4 does not affect Google Earth. Here we use Ubuntu 8.10, and we simply installed the Qt libraries in order to use some Qt programs. You probably do not have qtconfig-qt4 installed, so you need to get it.

So, by following the advice in the post above and replacing key Qt libraries of Google Earth with the ones provided by our distro, we solve (read: work around) the problem. Here comes the science:

If you have a 32-bit version of Ubuntu,

cd /opt/google-earth/
sudo mv libQtCore.so.4 libQtCore.so.4.bak
sudo mv libQtGui.so.4 libQtGui.so.4.bak
sudo mv libQtNetwork.so.4 libQtNetwork.so.4.bak
sudo mv libQtWebKit.so.4 libQtWebKit.so.4.bak
sudo ln -s /usr/lib/libQtCore.so.4.4.3  libQtCore.so.4
sudo ln -s /usr/lib/libQtGui.so.4.4.3  libQtGui.so.4
sudo ln -s /usr/lib/libQtNetwork.so.4.4.3  libQtNetwork.so.4
sudo ln -s /usr/lib/libQtWebKit.so.4.4.3  libQtWebKit.so.4

If you have the 64-bit version of Ubuntu, try

cd /opt/google-earth/

sudo getlibs googleearth-bin
sudo mv libQtCore.so.4 libQtCore.so.4.bak
sudo mv libQtGui.so.4 libQtGui.so.4.bak
sudo mv libQtNetwork.so.4 libQtNetwork.so.4.bak
sudo mv libQtWebKit.so.4 libQtWebKit.so.4.bak
sudo ln -s /usr/lib32/libQtCore.so.4.4.3  libQtCore.so.4
sudo ln -s /usr/lib32/libQtGui.so.4.4.3  libQtGui.so.4
sudo ln -s /usr/lib32/libQtNetwork.so.4.4.3  libQtNetwork.so.4
sudo ln -s /usr/lib32/libQtWebKit.so.4.4.3  libQtWebKit.so.4

This requires getlibs to be installed; when prompted, install the 32-bit versions of the packages as instructed.

Now, with qtconfig-qt4 you can configure the font settings.

Playing with Git

Git is a version control system (VCS) software that is used for source code management (SCM). There are several examples of VCS software, such as CVS and SVN. What makes Git different is that it is a distributed VCS, that is, a DVCS.

Being a DVCS, when you use Git you create fully capable local repositories that can be used for offline work. When you get the files of a repository, you actually grab the full history (this makes the initial creation of a local repository from a remote one slower, and the repositories are bigger).

You can install git by installing the git package. You can test it by opening a terminal window, and running

git clone git://github.com/schacon/whygitisbetter.git

The files appear in a directory called whygitisbetter. In a subdirectory called .git/, git stores all the controlling information it requires to manage the local repository. When you enter the repository directory (whygitisbetter in our case), you can issue commands that will figure out what’s going on because of the info in .git/.

With git, we create local copies of repositories by cloning. If you have used CVS or SVN, this is somewhat equivalent to the checkout command. However, by cloning you create a full local repository, whereas a CVS or SVN checkout gives you only the latest snapshot of the source code.
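The difference can be seen even with a purely local sketch (hypothetical repositories under /tmp, assuming git is installed): a clone is itself a complete repository, history included, so no server is involved when you browse the log.

```shell
# Build a scratch "origin" with two commits, clone it, and inspect the
# clone's history entirely offline.
rm -rf /tmp/demo-origin /tmp/demo-clone
mkdir -p /tmp/demo-origin && cd /tmp/demo-origin
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "first commit"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "second commit"
git clone -q /tmp/demo-origin /tmp/demo-clone
cd /tmp/demo-clone
git log --oneline | wc -l    # the full history travelled with the clone: 2
```

With SVN, that last command would have needed a round-trip to the server.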

What you downloaded above is the source code for the http://www.whygitisbetterthanx.com/ website. It describes the relative advantages of git compared to other VCS and DVCS systems.

Among the different sources of documentation for git, I think one of the easiest to read is the Git Community Book. It is concise and easy to follow, and it comes with screencasts (videos that show different tasks, with audio guidance).

You can create local repositories on your system. If you want to have a remote repository, you can create an account at GitHub, an attractive start-up that offers 100MB free space for your git repository. Therefore, you can host your pet project on github quite easily.

GitHub combines source code management with social networking, no matter how strange that combination may look. It comes with tools that allow you to maintain your own copies of repositories (for example, from other GitHub users) and helps with the communication. For example, if I create my own copy of the whygitisbetter repository and add something nice to the book, I can send a pull request (with the click of a button) asking the maintainer to grab my changes!

If you have already used a (non-distributed) SCM tool, it takes some time to get used to git’s way of working. It is a good skill to have, and the effort should pay off quickly. There is an SVN to Git crash course available.

If you have never used an SCM, it is cool to go for git. There is nothing to unlearn, and you will get a new skill.

Git is used for the development of the Linux kernel, the Perl language, Ruby on Rails, and others.

How to install the 64-bit Adobe Flash Player 10 for Linux in Ubuntu Linux?

Update 2 May 2010: There is a repository for the 64-bit Flash player at https://launchpad.net/~sevenmachines/+archive/flash. I have tried this and it works like a charm. Uninstall flashplugin-nonfree, add the new PPA repository with sudo add-apt-repository ppa:sevenmachines/flash, and then install flashplugin64-installer.

Update 10 Nov 2010: The package name changed to ‘flashplugin64-installer’; it used to end in -nonfree. The commands below have been updated; you can simply copy and paste. The latest 64-bit Flash for Linux is 10.2 d1161. See Tools→Addons→Plugins to verify the version.

Here are the commands

sudo apt-get remove flashplugin-nonfree flashplugin-installer

sudo add-apt-repository ppa:sevenmachines/flash

sudo apt-get update

sudo apt-get install flashplugin64-installer

Original post: So you just read the announcement from Adobe for the alpha version of the 64-bit Flash Player 10 for Linux and you want to install in Ubuntu Linux?

Here is how to do it.

  1. First, we understand that the flashplugin-nonfree package that is currently available to those with 64-bit Ubuntu Linux, installs the 32-bit version of Flash and uses the nspluginwrapper tool to make it work.
  2. After some time, I expect that the flashplugin-nonfree will stop using nspluginwrapper and will simply install Adobe Flash Player 10 (64-bit) for Linux. So you need to have a look in your package manager and the package description in case flashplugin-nonfree has already been updated. If flashplugin-nonfree has been updated, stop reading now.
  3. Close Mozilla Firefox.
  4. Uninstall the flashplugin-nonfree package using your package manager, or simply running sudo apt-get remove flashplugin-nonfree
  5. Download the alpha version of the 64-bit Adobe Flash Player 10 for Linux and extract the file from the archive. You will get a libflashplayer.so file, which is about 10MB in size.
  6. If you want all users in your system to have this alpha version of Adobe Flash Player 10 for Linux, copy the libflashplayer.so file to /usr/lib/mozilla/plugins/. The command is sudo cp libflashplayer.so /usr/lib/mozilla/plugins/
  7. If you want just the current user to try out the Flash player, copy the libflashplayer.so file to /home/yourUSERNAME/.mozilla/plugins/. The command is cp libflashplayer.so ~/.mozilla/plugins/
  8. Check that there is no dormant file named npwrapper.libflashplayer.so in ~/.mozilla/plugins/. A common issue for people who migrate their profiles with a plain copy is that npwrapper.libflashplayer.so ends up as an actual file instead of a symbolic link. The result is that these people keep using some old, buggy version of nspluginwrapper, which might be the cause of Firefox crashes! When you back up, use cp -a, so that symbolic links remain symbolic links.
  9. You can now start Mozilla Firefox. Visit about:plugins and verify that the version of Flash is something like Shockwave Flash 10.0 d20. Make sure there is no remnant of any other previous Flash player.
  10. If you want to return to the 32-bit Flash Player with emulation, remove the file we just added and reinstall the flashplugin-nonfree package.
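The check in step 8 can be scripted. A minimal Python sketch — the path and filename follow the post, while the helper name is mine:

```python
import os
from pathlib import Path

def stale_wrapper(plugins_dir):
    """True if npwrapper.libflashplayer.so exists as a real file rather than
    a symbolic link -- the telltale sign of a profile migrated with a plain copy."""
    f = Path(plugins_dir) / "npwrapper.libflashplayer.so"
    return f.exists() and not f.is_symlink()

if __name__ == "__main__":
    if stale_wrapper(Path.home() / ".mozilla" / "plugins"):
        print("Remove the stale npwrapper.libflashplayer.so before restarting Firefox.")
```

If the function reports a stale file, delete it and let nspluginwrapper (or the new plugin) recreate what it needs.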

The instructions for other distributions should be fairly similar.

Installing Ubuntu 8.04

I just finished installing Ubuntu 8.04 (64-bit edition). Below I describe the choices I made and the problems I ran into.
Although I had Ubuntu 7.10, I chose to install from scratch, because I have made so many changes to the system that an upgrade would quite likely fail. The more important reason for reinstalling, though, is to get rid of the cruft that accumulates after many months of use.
So I booted my computer from the Ubuntu 8.04 (64-bit) DVD and ran System/Administration/Partition editor. Here I chose to keep a backup of the / partition on an external disk, in case I need to go back to 7.10. This also helps me look up details of the previous installation, which I will quite likely need. Later I saw that Ubuntu 8.04 detected the 7.10 installation and added it to the boot menu, so I can easily get into 7.10 by booting from the external USB disk.
These days I have two partitions, one for / and one for /home. So during the installation I chose to format the / partition and keep /home as it is. Also, since I want to clean up the files in my home directory, I picked a different username. After the installation I have a clean account, and I can move (mv) configuration directories and files from the old home directory at will.
This way, I migrated my e-mail with the command
mv /home/oldaccount/.mozilla-thunderbird ~
I moved my Firefox profile the same way. I previously had Firefox 2.0, and on the move the profile was updated for the Firefox 3 of Ubuntu 8.04. Of course Firefox flies, but you have heard that from everywhere.
Then I installed VirtualBox (the full edition, with USB support). The basic installation and use can be done entirely from the graphical environment, and that is how I did it. We download the .deb (for now we pick the gutsy version) and double-click it on the desktop to install the package. Once virtualbox.org is updated with the new repository for Ubuntu 8.04, we can skip this procedure by adding virtualbox to the third-party sources in the Ubuntu Linux software sources.
We need to add our system's users to the vboxusers group (done from System/Administration/Users and groups) and then log out and back in.
To enable USB support, we need to go to the terminal and make the necessary modifications to /etc/init.d/mountdevsusbfs.sh, /etc/init.d/mountkernfs.sh and /etc/fstab (the last two are for enabling devices such as external USB disks and many other devices). Then the system needs a reboot.
Installing Skype 2.0 on 64-bit is generally a hassle. What I did was add the medibuntu repository to the third-party software sources and then install the Skype package. No problem at all.
I also installed Google Earth from medibuntu, but nothing else. I looked at the rest of the medibuntu packages and decided to disable that repository for now.
One important thing that did not work right away was sound. Reading here and there, I saw reports about downloading the ALSA source code and recompiling. That was needed for Ubuntu 7.10; however, I was sure it is no longer needed for my card (snd-hda-intel). With a bit of searching I saw that you simply need to add to /etc/modprobe.d/alsa-base the line
options snd-hda-intel model=lenovo
and then reboot. That was it.
Contrary to what other users do, I will not install codec bundles wholesale for the various multimedia formats. Using the audio and video player applications, the system prompts each time to install the required packages, automatically.

Looking at the overall Greek language support in Ubuntu 8.04, I must say I am quite pleased. Significant work has been done at every point, from the initial screen when the computer boots from the Ubuntu CD/DVD to the desktop. On the boot screen almost all messages are in Greek, except for two (e.g. Test memory; I was sure I had checked that one, but it did not make it into the final release!).
For the Ubuntu games, we took the decision to have the names in Greek (so, Τετραβέξ instead of Tetravex). This is good since children and civil servants will be playing these games exclusively. Since db0 does not use the Greek environment, there will be no complaint from his side either. Still, what happened is that two games came out untranslated (Same GNOME, Gnometris), which somewhat spoil the list; they will be fixed in a future release.
In the clock applet you can set your city, and the system will show the time and the weather conditions. Try cities such as Thessaloniki, Kavala, Rhodes, Chania, etc.
One more change is in the abbreviation for the Greek language in the keyboard indicator applet. We now use ΕΛΛ (for Ελλάδα, Greece) instead of the previous Ελλ. This way it matches the American keyboard layout and its ΗΠΑ (USA) indicator. In a future Ubuntu release it will show ΑΓΓ for every English keyboard layout, and this will be done in the Greek language pack.
Update #1: One problem I faced at the start was the graphics card. It was not an issue with basic graphics support; since I have an Intel graphics card, the system enabled everything, even 3D, from the first moment. The issue had to do with using an extended desktop (dual head). To use two monitors I had to edit the configuration file /etc/X11/xorg.conf by hand and set the size of the Virtual screen. I did not manage to find a way to do the same thing without going to a terminal.

FOSDEM ’08, summary and comments

I attended FOSDEM ’08 which took place on the 23rd and 24th of February in Brussels.

Compared to other events, FOSDEM is a big event with over 4000 (?) participants and over 200 lectures (from lightning talks to keynotes). It occupied three buildings at a local university. Many sessions were taking place at the same time and you had to switch from one room to another. What follows is what I remember from the talks. Remember, people recollect <8% of the material they hear in a talk.

The first keynote was by Robin Rowe and Gabrielle Pantera, on using Linux in the motion picture industry. They showed a huge list of movies that were created using Linux farms. The first big item on the list was the movie Titanic (1997). The list stopped at around 2005, and the reason is that since then any significant movie that employs digital editing or 3D animation has been created on Linux systems. They showed trailers from popular movies and explained how the technology advanced to create realistic scenes. Part of looking realistic is that a generated scene may need to be blurred so that it does not look too crisp.

Next, Robert Watson gave a keynote on FreeBSD and its development community. He explained many things about the community that someone outside the project would not know. FreeBSD apparently has a close-knit community, with people having specific roles. To become a developer, you go through a structured mentoring process, which is great. I have not seen such a structured approach described in other open-source projects.

Pieter Hintjens, the former president of the FFII, talked about software patents. Software patents are bad because they describe ideas rather than a concrete invention. This has been the FFII view, which is why its efforts have targeted software patents specifically. However, Pieter thinks that patents in general are bad, and that it would be good to push this idea.

CMake is a build system, similar to what one gets with automake/autoconf/makefiles. I had not seen this project before, and from what I saw, they look quite ambitious. Apparently it is very easy to get your compilation results onto the web when you use CMake. To make their project more visible, they should put effort into migrating existing projects to CMake. I have not yet seen a major open-source package being developed with CMake, apart from CMake itself.

Richard Hughes talked about PackageKit, a layer that removes the complexity of packaging systems. You have GNOME and your distribution is either Debian, Ubuntu, Fedora or something else. PackageKit allows to have a common interface, and simplifies the workflow of managing the installation of packages and the updates.

In the Virtualisation tracks, two talks were really amazing. Xen and VirtualBox. Virtualisation is hot property and both companies were bought recently by Citrix and Sun Microsystems respectively. Xen is a Type 1 (native, bare metal) hypervisor while VirtualBox is a Type 2 (hosted) hypervisor. You would typically use Xen if you want to supply different services on a fast server. VirtualBox is amazingly good when you want to have a desktop running on your computer.

Ian Pratt (Xen) explained well the advantages of using a hypervisor, going into many details. For example, if you have a service that is single-threaded, then it makes sense to use Xen and install it on a dual-core system. Then, you can install some other services on the same system, increasing the utilisation of your investment.

Achim Hasenmueller gave an amazing talk. He started with a joke: “I have recently been demoted. From CEO to head of the virtualisation department (name?) at Sun Microsystems.” He walked the audience through the history of his company. Its first virtualisation product was sold to Connectix, which was in turn sold to Microsoft as VirtualPC. Around 2005 he started a new company, Innotek, and the product VirtualBox. The first customers were government agencies in Germany, and only recently (2007) did they start selling to end-users.

Virtualisation is quite complex, and it becomes more complex if your offering is cross platform. They manage the complexity by making VirtualBox modular.

VirtualBox comes in two versions: an open-source version and a binary edition. The difference is that with the binary edition you get USB support and you can use RDP to access the virtual machine. If you installed VirtualBox from the repository of your distribution, there is no USB support. He did not commit to whether the USB/RDP support will make it into the open-source version, though it might happen now that Sun Microsystems has bought the company. I think that if enough people request it, it might happen.

VirtualBox uses Qt 3.3 as the cross-platform toolkit, and there is a plan to migrate to Qt 4. GTK+ was considered, but it was not chosen because it does not yet look native on Windows. wxWidgets was considered as well, but also rejected. Apparently, moving from Qt 3.3 to Qt 4 is a lot of effort.

Zeeshan Ali demonstrated GUPnP, a library that lets applications use the UPnP (Universal Plug and Play) protocol. This protocol is used when your computer tells your ADSL modem to open a port so that an external computer can communicate directly with you (bypassing the firewall/NAT). UPnP can also be used to access the content of your media station. The gupnp library comes with two interesting tools, gupnp-universal-cp and gupnp-network-light. The first is a browser of UPnP devices; it can show you what devices are available and what functionality they export, and you can control those devices. For example, you can use GUPnP to open a port on your router; when someone connects from the Internet to port 22 on your modem, they are redirected to port 22 on your server.

You can also use the same tool to figure out what port mapping took place already on your modem.

In the network-light demo, you run the browser on one computer and the network light on another, both on the local LAN (this only works on the local LAN). Then you can use the browser to switch the light on and off using the UPnP protocol.
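Under the hood, UPnP discovery starts with an SSDP multicast search. Here is a sketch of that first step — the address, port and message format come from the UPnP specification; this is illustrative, not GUPnP's API:

```python
import socket

# SSDP M-SEARCH: the multicast discovery request defined by UPnP.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"    # well-known SSDP multicast address
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"                         # devices wait up to MX seconds to reply
    "ST: upnp:rootdevice\r\n"           # search target: any root device
    "\r\n"
).encode("ascii")

def discover(timeout=2.0):
    """Broadcast the search on the LAN and collect raw replies."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.sendto(MSEARCH, ("239.255.255.250", 1900))
    replies = []
    try:
        while True:
            data, addr = s.recvfrom(65507)
            replies.append((addr, data))
    except socket.timeout:
        pass
    return replies
```

Each reply is an HTTP-over-UDP response whose LOCATION header points at the device description that tools like gupnp-universal-cp then browse.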

Dimitris Glezos gave a talk on transifex, the translation management framework currently used in Fedora. Translating software is a tedious task, and translators currently spend time on management chores that have little to do with translation. Several people drop out of translation work because of this. Transifex is an evolving platform to make the translator's work easier.

Dimitris talked about a command-line version of transifex coming out soon. Apparently, you can use this tool to grab the Greek translation of package gedit, branch HEAD. Do the translation and upload back the file.

What I would like to see here is a tool that you can instruct to grab all PO files from a collection of projects (such as GNOME 2.22, UI Translations), after which you translate with your scripts/tools/etc. Then you can use transifex to upload all those files using your SVN account.

The workflow would be something like

$ tfx --project=gnome-2.22 --collection=gnome-desktop --action=get
Reading from http://svn.gnome.org/svn/damned-lies/trunk/releases.xml.in... done.
Getting alacarte... done.
Getting bug-buddy... done.
Completed in 4:11s.
$ _

Now we translate any of the files we downloaded, and we push back upstream (of course, only those files that were changed).

$ tfx --project=gnome-2.22 --collection=gnome-desktop --user=simos --action=send
 Reading local files...
Found 6 changed files.
Uploading alacarte... done.
Completed uploading translation files to gnome-2.22.
$ _

Berend Cornelius talked about creating OpenOffice.org wizards. You get such wizards when you click on File/Wizards…, and you can use them to fill in entries in a template document (such as your name and address in a letter); one of the most common uses is to get spellchecker files installed.

A wizard is actually an OpenOffice.org extension; once you write it and install it (Tools/Extensions…), you can have it appear as a button on a toolbar or a menu item among other menus.

You write wizards in C++, and one would normally work on an existing wizard as base for new ones.

When people type in a word-processor, they typically abuse it (that’s my statement, not Berend’s) by omitting the use of styles and formatting. This makes documents difficult to maintain. Having a wizard teach a new user how to write a structured document would be a good idea.

Perry Ismangil talked about pjsip, the portable open-source SIP and media stack. This means that you can have Internet telephony on different devices. Considering that Internet Telephony is a commodity, this is very cool. He demonstrated pjsip running two small devices, a Nintendo DS and an iPhone. Apparently pjsip can go on your OpenWRT router as well, giving you many more exciting opportunities.

Clutter is a library to create fast animations and other effects on the GNOME desktop. It uses hardware acceleration to make up for the speed. You don’t need to learn OpenGL stuff; Clutter is there to provide the glue.

Ubuntu Gutsy has Clutter 0.4.0 in its repositories, while the latest version is 0.6.0. To try it out, you need at least the clutter tarball from the Clutter website. To start programming for your desktop, try some of the bindings packages.

I had the chance to spend time with the DejaVu guys (Hi Denis, Ben!). Also met up with Alexios, Dimitris x2, Serafeim, Markos and others from the Greek mission.

Overall, FOSDEM is a cool event. In two days there is so much material and interesting talks. It’s a recommended technical event.

Create flash videos of your desktop with recordmydesktop

John Varouhakis is the author of recordmydesktop and gtk-recordmydesktop (its front-end), a tool that helps you record a session on your Linux desktop and save it to a video file.

To install, click on System/Administration/Synaptic Package Manager, and search for gtk-recordmydesktop. Install it. Then, the application is available from Applications/Sound&Video/gtkRecordMyDesktop.

Screenshot of gtk-recordmydesktop

Before you are ready to capture your video, you need to select the video area. There are several ways to do this; the most common is to click on Select Window, then click on the window you want to record. A common mistake is to try to select the window from the preview above; if you do that, you will have selected the recorder itself to record, which is not really useful. You need to click on the real window in order to select it; then you can see the selected window in the desktop preview. In the case above, I selected the OpenOffice.org Writer window.

Assuming you do not need any further customisation, you can simply press Record to start recording. Generally, it is good to check the recording settings with the GNOME Sound Recorder beforehand. While recording, you will notice a special icon on the top panel; this is gtk-recordmydesktop. Once you press it, recording stops and the program does the post-processing of the recording. The resulting file goes into your home folder and has the extension .ogv.

Some common pitfalls include

  • I did not manage to get audio recording to work well on my system; I had to disable libasound so that the audio recording would not skip. With ALSA, sound skips, while with OSS emulation it does not. Weird. Does it work for you?
  • The post-processing of the recording takes some time. For a long recording, it may take a while before showing any progress, so you might think it has crashed. Have patience.

I had made one such recording, which can be found at the Greek OLPC mailing list. John told me that the audio part of the video was not loud enough, and one can use extra post-processing to make it sound better. For example, one could extract the audio stream of the video, remove the noise, beautify (how?) and then add back to the video.

It’s good to try out gtk-recordmydesktop, even for a small recording. Do you have some cool tips from your Linux desktop that you want to share? Record your desktop!

ert-archives.gr: “Linux/Unix operating systems are not supported”

ERT (Hellenic Broadcasting Corporation) is the national radio/television organisation of Greece.

ERT recently made available online part of its audio and video archive, at the website http://www.ert-archives.gr/

When browsing the website from Linux, you were blocked with a message that Linux/Unix operating systems are not supported. This message was appearing due to User-Agent filtering. Even if you altered your User-Agent, the page would not show the multimedia.

There has been a heated discussion on this on local mailing lists, with many users sending their personal polite comments to the feedback page at the ERT website. Many individual, personal comments have value and are taken into account.

Since today, http://www.ert-archives.gr/ no longer filters on the User-Agent, and the wording on the support page has changed to say

Σχετικά με υπολογιστές που χρησιμοποιούν λειτουργικό σύστημα Linux σχετικές οδηγίες θα υπάρξουν στο άμεσο μέλλον.

which means that they will be providing instructions for Linux systems in the immediate future.

Going through the HTML code of http://www.ert-archives.gr/ one can see that the whole system would work well under Linux, out of the box, if they could change

<embed id="oMP" name="oMP" width="800" height="430" type="application/x-ms-wmp

to

<embed id="oMP" name="oMP" width="800" height="430" type="video/x-ms-wmp

Firefox, with the mplayerplugin, supports the video/x-ms-wmp streaming format. You can verify if you have it by writing about:plugins in the location bar and pressing Enter. For my system it says

Windows Media Player Plugin

File name: mplayerplug-in-wmp.so
mplayerplug-in 3.40, Video Player Plug-in for QuickTime, RealPlayer and Windows Media Player streams using MPlayer
JavaScript Enabled and Using GTK2 Widgets

MIME Type: video/x-ms-wmp   Description: Windows Media   Suffixes: wmp,*   Enabled: Yes

I am not sure if the mplayerplug-in package is installed by default in Ubuntu, and I do not know what the workflow is from the message saying that a plugin is missing to actually getting it installed. If you use the Totem Media Player, it instructs you to download and install the missing packages. I would appreciate your input on this one.

A workaround is to write a Greasemonkey script to replace the string so that Firefox works out of the box. However, the proper solution is to have ERT fix the code.
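The substitution such a Greasemonkey script would perform is a single string replacement. A Python sketch of the same fix — the MIME strings come from the page source quoted above, while the helper name is mine:

```python
def fix_wmp_type(html):
    """Rewrite the broken MIME type in ERT's embed tag so that browser
    plugins such as mplayerplug-in recognise the stream."""
    return html.replace("application/x-ms-wmp", "video/x-ms-wmp")

broken = '<embed id="oMP" name="oMP" type="application/x-ms-wmp">'
print(fix_wmp_type(broken))  # the type attribute becomes video/x-ms-wmp
```

A Greasemonkey userscript would apply the same replacement to the page's HTML after it loads; still, the proper place for this one-liner is ERT's own templates.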

I must say that I would have preferred to have Totem Movie Player used to view those videos.
ERT Ecology
I just finished watching a documentary from the 80s about ecology and sustainability of the forests on my Linux system. It is amazing to listen again to the voice-over which is sort of a signature voice for such documentaries of the said TV channel. The screenshot shows goats in a forest, and mentioning the devastating effects of said animals on recently-burnt forests.

Update (22Mar08): The problem has not been resolved yet. Dimitris Diamantis offers a work-around at the Ubuntu-gr mailing list.

StixFonts, finally available (beta)!

The STIX Fonts project (website) has spent over 10 years developing a font suitable for use in academic publications. It boasts support from Elsevier, IEEE and other academic publishers and associations.

A few days ago, they published a beta version of the font in an effort to get public feedback. The beta period runs until the 15th December.

STIX Fonts Beta showing Greek (Regular), from STIX Fonts Beta

STIX Fonts Beta currently supports modern Greek. An effort to get support for Greek Polytonic did not work out well a few years back.

STIX Fonts Beta showing Greek (Italic), from STIX Fonts Beta

The main benefit of STIX Fonts is the support for mathematical and other technical symbols. This helps when writing academic publications and other technical documents.

STIX Fonts Beta showing Greek (Bold), from STIX Fonts Beta

STIX Fonts have extensive support of mathematical symbols, symbols that exist in Unicode Plane-1.

STIX Fonts Beta showing Greek (Bold Italic), from STIX Fonts Beta

If there is any modification we would like to have in the STIX fonts, we should ask for it now. Once they are released, they will be widely distributed. Fedora has already packaged the STIX fonts and made them available.

One-line hardware support (USB Wireless Adapter)

I recently got a USB Wireless Adapter, produced by Aztech. It was a good buy for several reasons:

  • It advertised Linux support
  • It was affordable
  • It had good quality casing; you can step on it and it won’t break
  • It had the Penguin on the box and was really really cheap

When I plugged it in on my Linux system, it did not work out of the box. The kernel acknowledged that a USB device was inserted (two lines in /var/log/messages) but no driver claimed the device.

With the package came a CD which had drivers for several operating systems, including Linux. Apparently one would need to install the specific driver. I think the driver was available in both source code and as a binary package (for some kernel version).

The kernel module on the CD was called zd1211, so I checked whether my kernel had such a module installed. To my surprise, there was such a kernel module, called zd1211rw. I hope you have better luck with the URL, because right now the website appears to be down (Error 500).

So what was wrong with my zd1211rw kernel module? Reading the documentation on the project website, I figured out that you have to report the ID (the USB ID) of your adapter so that it is included in the kernel module; then, when you plug in your device, it will be automatically detected.

You can find the USB ID by running the command lsusb. Then, it is a one-line patch for the zd1211rw driver to add support for the device,

--- zd1211rw.linux2.6.20/zd_usb.c      2007-09-25 14:48:06.000000000
+++ zd1211rw/zd_usb.c    2007-09-28 11:35:51.000000000 +0300
@@ -64,6 +64,7 @@
{ USB_DEVICE(0x13b1, 0x0024), .driver_info = DEVICE_ZD1211B },
{ USB_DEVICE(0x0586, 0x340f), .driver_info = DEVICE_ZD1211B },
{ USB_DEVICE(0x0baf, 0x0121), .driver_info = DEVICE_ZD1211B },
+       { USB_DEVICE(0x0cde, 0x001a), .driver_info = DEVICE_ZD1211B },
/* “Driverless” devices that need ejecting */
{ USB_DEVICE(0x0ace, 0x2011), .driver_info = DEVICE_INSTALLER },
{ USB_DEVICE(0x0ace, 0x20ff), .driver_info = DEVICE_INSTALLER },
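Finding the USB ID to put into such a patch is a matter of parsing the lsusb output. A sketch, assuming the usual `Bus … Device …: ID vvvv:pppp` layout; the helper is mine:

```python
import re

def usb_ids(lsusb_output):
    """Extract (vendor, product) ID pairs from `lsusb` output lines."""
    return re.findall(r"ID ([0-9a-f]{4}):([0-9a-f]{4})", lsusb_output)

# The ID of the Aztech adapter from the patch above:
sample = "Bus 001 Device 004: ID 0cde:001a"
print(usb_ids(sample))  # [('0cde', '001a')]
```

Each pair maps straight onto a `USB_DEVICE(0xVVVV, 0xPPPP)` entry in zd_usb.c.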

What Aztech should have done is to submit the USB ID to the developers of the zd1211rw driver. In this way, any Linux distribution that comes out with the updated kernel will have support for the device.

It is very important to get the manufacturers to change mentality. From offering a CD with “drivers”, for free and open-source software they should also work upstream with the device driver developers of the Linux kernel. The effort is small and the customer benefits huge.

Greek OLPC localisation status

The Greek OLPC localisation effort is ongoing and here is a report of the current status.

For discussions, reading discussion archives and commenting, please see the Greek OLPC Discussion Group.

We are localising two components, the UI (User Interface) and applications of the OLPC, and the main website at http://www.laptop.org/

The UI is currently being translated at the OLPC Wiki, at OLPC_Greece/Translation. On this page you can see the currently available packages, what is pending, and where you can also help translate.

At this stage we need people with skills in music terminology to help out with the localisation of TamTam. In addition, there are more translations that need review and comments before they are sent upstream.

Moreover, if you find a typo and a better suggestion for a term in the submitted translations, feel free to tell us at the Greek OLPC Discussion Group.

The other project we are working on is the localisation of the Greek version of www.laptop.org. The pages are not 100% translated yet, so if you want to finish the difficult parts, see the Web translation page of laptop.org.

The translators that helped up to now have done an amazing job.

Important MO file optimisation for en_* locales, and partly others

During GUADEC, Tomas Frydrych gave a talk on exmap-console, a cut-down version of exmap that can work well on mobile devices.

During the presentation, Tomas showed how to use the tool to find the culprits of memory (ab)use on the GNOME desktop. One issue that came up was that MO translation files take up memory even though the desktop showed English. Why would the MO translation files loaded in memory be so big?

gtk20.mo                  : VM  61440 B, M  61440 B, S  61440 B
atk10.mo                  : VM   8192 B, M   8192 B, S   8192 B
libgnome-2.0.mo           : VM  28672 B, M  24576 B, S  24576 B
glib20.mo                 : VM  20480 B, M  16384 B, S  16384 B
gtk20-properties.mo       : VM    128 KB, M   116 KB, S   116 KB
launchpad-integration.mo  : VM   4096 B, M   4096 B, S   4096 B

A translation file looks like

msgid "File"
msgstr ""

When translated to Greek it is

msgid "File"
msgstr "Αρχείο"

In the English UK translation it would be

msgid "File"
msgstr "File"

This is actually not necessary, because if you leave those messages untranslated, the system will use the original messages that are embedded in the executable file.

However, for the purposes of the English UK, English Canadian, etc. teams, it makes sense to copy the same messages into the translated field, because it indicates that the message was examined by the translator. Any new messages would appear as untranslated, and the same process would continue.

Now, the problem is that the gettext tools are not smart enough when they compile such translation files; they replicate without need those messages occupying space in the generated MO file.

Apart from the English variants, this issue is also present in other languages when the message looks like

msgid "GConf"
msgstr "GConf"

Here, it does not make much sense to translate the message into the locale language. However, the generated MO file now contains more than 10 extra bytes (5+5), plus some space for the index.

Therefore, what’s the solution for this issue?

One solution is to add to msgattrib the option to preprocess a PO file and remove those unneeded copies. Here is a patch,

--- src.ORIGINAL/msgattrib.c 2007-07-18 17:17:08.000000000 +0100
+++ src/msgattrib.c 2007-07-23 01:20:35.000000000 +0100
@@ -61,7 +61,8 @@
REMOVE_FUZZY = 1 << 2,
+ REMOVE_COPIED = 1 << 6
static int to_remove;

@@ -90,6 +91,7 @@
{ "help", no_argument, NULL, 'h' },
{ "ignore-file", required_argument, NULL, CHAR_MAX + 15 },
{ "indent", no_argument, NULL, 'i' },
+ { "no-copied", no_argument, NULL, CHAR_MAX + 19 },
{ "no-escape", no_argument, NULL, 'e' },
{ "no-fuzzy", no_argument, NULL, CHAR_MAX + 3 },
{ "no-location", no_argument, &line_comment, 0 },

@@ -314,6 +316,10 @@
to_change |= REMOVE_PREV;

+ case CHAR_MAX + 19: /* --no-copied */
+ to_remove |= REMOVE_COPIED;
+ break;

@@ -436,6 +442,8 @@
--no-obsolete remove obsolete #~ messages\n"));
printf (_("\
--only-obsolete keep obsolete #~ messages\n"));
+ printf (_("\
+ --no-copied remove copied messages\n"));
printf ("\n");
printf (_("\
Attribute manipulation:\n"));

@@ -536,6 +544,21 @@
: to_remove & REMOVE_NONOBSOLETE))
return false;

+ if (to_remove & REMOVE_COPIED)
+ {
+ if (!strcmp(mp->msgid, mp->msgstr) && strlen(mp->msgstr)+1 >= mp->msgstr_len)
+ {
+ return false;
+ }
+ else if ( strlen(mp->msgstr)+1 < mp->msgstr_len )
+ {
+ if ( !strcmp(mp->msgstr + strlen(mp->msgstr)+1, mp->msgid_plural) )
+ {
+ return false;
+ }
+ }
+ }
return true;
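To make the intent of that C check clearer, here is the same logic re-expressed as a small Python sketch. This is my own illustration, not part of gettext; a NUL-separated string stands in for the msgstr/msgstr_len handling of plural forms in the patch:

```python
NUL = "\0"

def keep_message(msgid, msgstr, msgid_plural=None):
    """Return False for messages the patch's REMOVE_COPIED flag would drop."""
    forms = msgstr.split(NUL)
    if msgid_plural is None or len(forms) == 1:
        # Singular entry: drop it when the "translation" merely copies the msgid.
        return msgid != forms[0]
    # Plural entry: drop it when the second form copies msgid_plural,
    # mirroring strcmp(mp->msgstr + strlen(mp->msgstr) + 1, mp->msgid_plural).
    return forms[1] != msgid_plural

print(keep_message("GConf", "GConf"))    # False: a copied message, remove it
print(keep_message("File", "Fichier"))   # True: a real translation, keep it
print(keep_message("%d file", "%d file\0%d files", "%d files"))  # False: copied plural
```

With an option along these lines, stripping a PO file would be a one-liner such as `msgattrib --no-copied input.po -o output.po` (the option name as proposed in the patch above).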
However, if we only change msgattrib, we would need to adapt the build system for all packages.

Alternatively, it would make more sense to change the default behaviour of msgfmt, the program that compiles PO files into MO files.

I e-mailed the gettext development team about the issue; they do not appear to have a Bugzilla to record such reports. If you know of an alternative contact point, please notify me.

Update #1 (23Jul07): As an indication of the file-size savings, the en_GB locale on the Ubuntu installation CD occupies about 424KB, whereas in practice it could have been 48KB.

A full installation of Ubuntu with some basic KDE packages (only the basic libraries plus KBabel; ls k* | wc -l gives 499) occupies about 26MB of space just for the translation files. When the MO files are optimised, the translation files occupy only 7MB. This is quite important because when someone installs, for example, the en_CA locale, all en_?? locales are added.

The reason the reduction is larger here has to do with the message types that KDE uses. For example,

msgid ""
"_: Unknown State\n"
"Unknown"
msgstr "Unknown"

I cannot see a portable way to code the gettext tools so that they understand that the above message can safely be omitted. Of the above reduction to 7MB, KDE applications (k*) account for 3.6MB. The non-KDE applications include GNOME, Xfce and the traditional GNU tools. The biggest culprits in KDE are kstars (386KB) and kgeography (345KB).
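For what it is worth, a non-portable heuristic for the legacy KDE convention could strip the "_: context\n" prefix from the msgid before comparing. This is purely my own sketch of the idea, not something the gettext tools do, and it is non-portable precisely because it hard-codes the old KDE convention:

```python
def is_effectively_copied(msgid, msgstr):
    """Treat a KDE-style message as 'copied' when the msgid, minus its
    legacy '_: <context>' first line, equals the msgstr.  A sketch only:
    it hard-codes the old KDE convention, which makes it non-portable."""
    if msgid.startswith("_: "):
        # Drop the context line, keeping only what follows the newline.
        msgid = msgid.partition("\n")[2]
    return msgid == msgstr

print(is_effectively_copied("_: Unknown State\nUnknown", "Unknown"))  # True
print(is_effectively_copied("Open", "Ouvrir"))                        # False
```

A check like this would catch the "Unknown" example above, but it would silently misfire on any non-KDE message that happens to start with "_: ", which is why it does not belong in a general-purpose tool.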

Update #2 (23Jul07): (Thanks, Deniz, for the comment below on gweather!) The po-locations translations (gnome-applets/gweather) of all languages are combined to generate a big XML file, found at /usr/share/gnome-applets/gweather/Locations.xml (~15MB).

This file is not kept in memory while the gweather applet is running.
However, the file is parsed when the user opens the properties dialog to change the location.
I would say that the main problem here is the file size (15.8MB), which can easily be reduced by stripping copied messages. This file is included in every Linux distribution, whatever the locale.

The po-locations directory currently occupies 107MB; when copied messages are eliminated, it occupies 78MB (a difference of about 30MB). The generated XML file is in any case smaller (15.8MB without optimisation) because it does not repeatedly include the msgid lines for each language.

I regenerated the Locations.xml file with the optimised PO files and the resulting file is 7.6MB. This is a good reduction in file size and also in package size.

Update #3 (25Jul07): Posted a patch for gettext-tools/msgattrib.c. I also sent an e-mail to the kde-i18n-doc mailing list and got a good response, including a valid argument against the proposed change. Specifically, there is a case where one gives custom values to the LANGUAGE variable, such as LANGUAGE="es:fr", which means: show me messages in Spanish, and if something is untranslated, fall back to French. If a message has msgid == msgstr for Spanish but a real translation for French, then with the proposed optimisation it would wrongly show in French.