Friday, December 31, 2010

Restoring files manually from a mondo backup

As mentioned in earlier posts, I use mondo to create system backups of all my systems. By system backups I mean: a backup of the live system, without any personal data. In short, this is a backup of everything except the home folders. The home folders are backed up using rsync and external disks, but that's another story.
Restoring from a mondo backup is easy: simply burn the images, put the first disc in the CD drive and nuke the system (yes, this is an actual restore option in mondo :) ).
If you just want to restore some files from the backup, without overwriting the whole system, there's an easy way to do this. Suppose you created ISOs a while ago and they are stored on some disk. First, you will need to mount an ISO as a loopback device, like so:

mount -o loop MyIso.iso /mnt

The ISO is now mounted in /mnt.
Now locate the file you want to restore (I was going to restore smb.conf):

$ grep smb.conf /mnt/archives/filelist.*
/mnt/archives/filelist.11:/var/lib/ucf/cache/:etc:samba:smb.conf
/mnt/archives/filelist.36:/usr/share/man/man5/smb.conf.5.gz
/mnt/archives/filelist.43:/usr/share/doc/samba-doc/htmldocs/manpages/smb.conf.5.html
/mnt/archives/filelist.44:/usr/share/doc/samba-doc/examples/dce-dfs/smb.conf
/mnt/archives/filelist.44:/usr/share/doc/samba-doc/examples/smb.conf.default.gz
/mnt/archives/filelist.44:/usr/share/doc/samba-doc/examples/tridge/smb.conf
/mnt/archives/filelist.44:/usr/share/doc/samba-doc/examples/tridge/smb.conf.fjall
/mnt/archives/filelist.44:/usr/share/doc/samba-doc/examples/tridge/smb.conf.lapland
/mnt/archives/filelist.44:/usr/share/doc/samba-doc/examples/tridge/smb.conf.vittjokk
/mnt/archives/filelist.44:/usr/share/doc/samba-doc/examples/tridge/smb.conf.WinNT
/mnt/archives/filelist.46:/usr/share/doc/smbldap-tools/examples/smb.conf
/mnt/archives/filelist.48:/usr/share/samba/smb.conf
/mnt/archives/filelist.48:/usr/share/samba/smb.conf.dapper
/mnt/archives/filelist.48:/usr/share/samba/smb.conf.etch
/mnt/archives/filelist.48:/usr/share/samba/smb.conf.gutsy
/mnt/archives/filelist.5:/etc/samba/smb.conf
/mnt/archives/filelist.5:/etc/samba/smb.conf.ucf-dist
$

The file I needed (/etc/samba/smb.conf) is in the archive identified by number 5.
Now create a folder in a temporary location and change into it:

$ mkdir /tmp/restore
$ cd /tmp/restore

Now restore the contents of the archive you want (you might need to download and install afio):

afio -ZP bzip2 -i /mnt/archives/5.afio.bz2

The whole archive is now restored underneath /tmp/restore, so my smb.conf is now located at /tmp/restore/etc/samba/smb.conf. Pretty cool, eh?
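By the way, if you only need that one file, afio should be able to limit extraction to paths matching a shell pattern using its -y option, so you don't have to unpack the whole archive. Something like this (the exact pattern depends on how the paths were stored in the archive):

afio -ZP bzip2 -y '*etc/samba/smb.conf' -i /mnt/archives/5.afio.bz2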

Anyway, I don't deserve any credit for this post; I found a lot of help here.

EDIT: Apparently there is an easier, more graphical way to restore stuff from a mondo archive.

Thursday, December 30, 2010

Failed to issue the StartTLS instruction: Protocol error

I encountered the error in the title of this post after upgrading my samba install on Debian Lenny using the Enterprise Samba binaries. The latest version they distribute at the time of writing is 3.4.9. My samba install talks to an LDAP backend, and the above error was shown when starting the new version. It seems they added (or changed the default of) an option for the LDAP protocol in smb.conf. Adding:

ldap ssl = off
makes the error go away.
Cool.
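For reference, the option goes in the [global] section of smb.conf, and as far as I know it accepts off and start tls in this samba version:

[global]
    ldap ssl = off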

Tuesday, December 28, 2010

Building queries ... the easy way

On a past project, users needed to be able to create a custom query and execute it. To do this, the user selected a field and an operator, and then selected or filled in a value. If, for example, we were searching for people living in Belgium, the user would select person.address.country for the field, like for the operator and fill in "BE" for the value. I'm sure you all know what the resulting SQL would look like.
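Just for completeness, it would be something along these lines (a sketch assuming a simple person/address schema; the table and column names are made up):

select p.*
from person p
join address a on a.person_id = p.id
where a.country like 'BE';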

There are different ways to create a query builder. If you're using Hibernate, the above example could easily be translated to HQL using the criteria API. In some cases, however, the resulting SQL is not exactly what you wanted (e.g. it doesn't perform well), or maybe Hibernate is just not capable of generating correct SQL. Unfortunately, at the time, we discovered we were suffering from both of the aforementioned problems, so the only solution was to create our own query builders using string concatenation. Yes, this can get ugly really fast when you're not careful, but I managed to create something that works and looks nice ... more or less.

On a more recent project, users also needed to be able to create custom queries. Instead of re-using parts of the query builders written in the past, I decided to ask Google for help. Surely there had to be someone or some project out there that had come across the same issues I was having with Hibernate's criteria API. During my search, I came across querydsl, which seemed to do what I wanted to accomplish.

querydsl comes in different "flavours". You can use it to query your database using HQL or SQL. You can even use it to query objects in a plain collection (similar to lambdaj, I guess). To do this, querydsl uses so-called "Q" entities. So, if I'm querying a person and its addresses, I would have a QPerson and a QAddress. These entities can be generated from your JPA annotated domain model, or can be reverse engineered from your database. Of course, you can also write these "Q" entities by hand, since it's a very clean and easy to understand API. Once you have your "Q" entities in place, you can start querying the database using a very fluent API, like this: query.from(person).join(address).on(person.id.eq(address.person)).where(address.country.like("BE")). This API is also type-safe (HQL and the criteria API aren't), so you won't be querying a person's name using its birth date :)
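To give you an idea, the above would look more or less like this in actual code (a sketch based on querydsl's JPA/HQL flavour; exact class names differ between querydsl versions, and I'm assuming a Person entity with an addresses collection):

// QPerson and QAddress are the generated "Q" entities
QPerson person = QPerson.person;
QAddress address = QAddress.address;

JPAQuery query = new JPAQuery(entityManager);
List<Person> belgians = query.from(person)
        .join(person.addresses, address)
        .where(address.country.like("BE"))
        .list(person);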

The only remaining difficulty was translating the criteria sent by the user interface into something understood by querydsl. To do this, I created something that "observes" the creation of the "Q" entities. When a QPerson has a field "firstName", the observer knows this. The observer also knows there is a relation between the QPerson and QAddress entities, expressed by the relation person.address. In the end, the observer holds a complete mapping between the fields available in my "Q" entities and their actual names (e.g. StringPath("FIRST_NAME") maps to person.firstName). Once you understand the internals of querydsl, this isn't very hard to create. Finally, you just need to merge this mapping with the criteria sent by the user interface, and your query is ready to be executed against the database.
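Conceptually, the end result is little more than a lookup table from the field names used by the user interface to querydsl paths. A simplified, hypothetical sketch (the Criterion class and the field names are made up):

// map the field names known by the UI onto querydsl paths
Map<String, StringPath> fields = new HashMap<String, StringPath>();
fields.put("person.firstName", QPerson.person.firstName);
fields.put("person.address.country", QAddress.address.country);

// criterion comes from the user interface
StringPath path = fields.get(criterion.getField());
BooleanExpression predicate = path.like(criterion.getValue());
query.where(predicate);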

I wish I could publish the code written on this project, but the nature of the project doesn't allow me to do so. In this post, I just wanted to tell you something about querydsl and how easy it is to plug it into an existing project. It's also very extensible and has a very active community and core developer. Should you have any questions on this, please leave a comment.

Saturday, December 11, 2010

Updating the location of your photos in f-spot

I've been using f-spot for quite some time now to manage and categorise all my pictures. In older Ubuntu versions (8.04 and earlier), the default location for f-spot to store its photos was /home/username/Photos. For some reason, Ubuntu 10.04 (and probably earlier versions as well) changed this folder to /home/username/Pictures/Photos. I didn't notice this until today, when I was importing photos and checking the contents of /home/username/Photos, only to see that nothing was copied there. That's when I discovered they changed the default location :)
Since f-spot is backed by an SQLite database, this wasn't very hard to solve:
  • create a backup of the sqlite database photos.db (should be underneath /home/username/.config/f-spot)
  • now update the location with sqlite3:

    kenneth@pavane:/data/home/kenneth/.config/f-spot$ sqlite3 photos.db
    SQLite version 3.6.22
    Enter ".help" for instructions
    Enter SQL statements terminated with a ";"
    sqlite> update photos set base_uri=replace(base_uri, 'file:///home/kenneth/Pictures', 'file:///home/kenneth');
    sqlite> update photo_versions set base_uri=replace(base_uri, 'file:///home/kenneth/Pictures', 'file:///home/kenneth');
    sqlite> .exit
  • now move your pictures to their correct location
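By the way, before moving anything you can double-check the rewrite from the same sqlite3 session:

sqlite> select base_uri from photos limit 5;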

Phew, solved.

Tuesday, May 18, 2010

Kubuntu 10.04, the aftermath (2)

Now that I had my system up and running, it was time to add some of the applications I love to use. The previous LTS version shipped with Thunderbird 2, whereas the current LTS comes with Thunderbird 3. The new version recognized my existing profile (underneath $HOME/.thunderbird, which is where all accounts are kept) without any problems. No mails got lost in the transition. The new version comes with smart folders, which give an aggregated view of all folders when using different accounts (e.g. webmail, gmail, your ISP's pop mail, ...). My favorite photo management software, f-spot, is also updated and I was very pleased to see it converted the database from the older version without any issues.
In short, all software I used on 8.04 managed to convert my personal settings and data without any problems, which was a relief.

For the rare occasions I still need Windows (only to sync my GPS software with my old PDA, actually), I've set up a virtual machine inside VMware Server. Unlike VMware's Workstation and Fusion, VMware Server is a free product. You don't get any fancy stuff with it, like hardware acceleration, but I don't really need that. I also prefer VMware over other virtualization products, like VirtualBox, because I'm used to it and it allows me to run virtual machines from work without any modifications. Installing VMware Server on a Linux OS with a fairly recent kernel has always been a nightmare. I had a lot of issues installing it on 8.04 and was facing the same issues on 10.04. Thankfully, Google was my friend and I came across this thread, explaining in detail how to install VMware on a modern Linux. All you need to do is follow these steps inside a terminal:
cd /usr/local/src
wget http://codebin.cotescu.com/vmware/vmware-server-2.0.x-kernel-2.6.3x-install.sh
tar xvzf raducotescu-vmware-server-linux-2.6.3x-kernel-592e882.tar.gz
cd raducotescu-vmware-server-linux-2.6.3x-kernel-592e882/
tar xvzf VMware-server-2.0.2-203138.x86_64.tar.gz #OF COURSE you have to copy the tar.gz to this dir first..
chmod +x vmware-server-2.0.x-kernel-2.6.3x-install.sh
./vmware-server-2.0.x-kernel-2.6.3x-install.sh

After following these steps, you should have vmware server up and running. Unfortunately, you're not out of the woods yet. The console, which is launched from within firefox and connects to the virtual machine's console, is incompatible with firefox 3.6. Luckily, there's a workaround. You need to unzip (yes, using the unzip command) /usr/lib/vmware/webAccess/tomcat/apache-tomcat-6.0.16/webapps/ui/plugin/vmware-vmrc-linux-x86.xpi (or vmware-vmrc-linux-x64.xpi if you're running a 64 bit Linux) to a temporary folder. Once extracted, locate the plugins folder. Inside it, you should find an executable called vmware-vmrc. You should be able to launch it (only from within that folder) and connect to your freshly installed vmware server, as shown below.
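In terminal form, the workaround boils down to something like this (the paths are from my 32 bit install; use the x64 xpi on a 64 bit system):

mkdir /tmp/vmrc
cd /tmp/vmrc
unzip /usr/lib/vmware/webAccess/tomcat/apache-tomcat-6.0.16/webapps/ui/plugin/vmware-vmrc-linux-x86.xpi
cd plugins
./vmware-vmrc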



Next, you can select a virtual machine you just started ...



... and you're done.



Don't use ctrl-alt-del, use ctrl-alt-print screen instead :)

Hopefully, the people from vmware will release a new version of their server product soon and make sure it's compatible with a more recent kernel and browser.

Wednesday, May 12, 2010

Kubuntu 10.04, the aftermath (1)

I've been using Ubuntu since Hoary, which was released 5 years ago. It has always been, and still is, my preferred OS for desktop computing. On April 29th, the latest LTS version, 10.04, was released. This week, I decided to take it for a spin.
Being able to upgrade / dist-upgrade a Debian based Linux system has always been one of the main reasons why I like Ubuntu so much. In Ubuntu's early days, however, breakage was very likely after dist-upgrading your system. dist-upgrading every six months, from Hoary (5.04) all the way to Dapper (6.06), was a true nightmare. After that experience, I decided to stick with the LTS versions (i.e. 6.06, 8.04, 10.04, ...) and reinstall from scratch. Having my home directory on a separate partition eases this process a lot. Just reinstall Ubuntu (after backing everything up, of course), mount the home partition and you're done.
Since I've become a big fan of KDE over the years I've been using Linux, I decided to replace my Kubuntu 8.04 (which was, let's face it, crap) with Kubuntu 10.04. The installation went smoothly and in a matter of minutes I had my new, shiny desktop.
My PC is equipped with an ancient NVidia graphics adapter (GeForce 6 series). Kubuntu 10.04 comes with the nouveau graphics driver, an open source effort to eliminate the need for NVidia's binary driver, which is, of course, closed source. I am a big fan of open source projects and the communities surrounding them, but the nouveau driver is nothing compared to NVidia's binary version.
Since the nouveau driver comes preloaded, I had a hard time installing NVidia's binary driver. First, you have to "apt-get remove" all packages named "something-nvidia-something".
dpkg -l | grep nvidia
shows the list of packages that are installed.
Next, you need to blacklist the nouveau driver, so the kernel doesn't load it while booting. To do this, open /etc/modprobe.d/blacklist.conf and append blacklist nouveau (a recap follows below).
After rebooting in single user mode, you'll see that nouveau isn't loaded, and you can safely execute the NVidia installer script.
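In short, the blacklisting part looks like this (the file location is the one on my system; regenerating the initramfs makes sure nouveau isn't loaded from there either):

echo "blacklist nouveau" | sudo tee -a /etc/modprobe.d/blacklist.conf
sudo update-initramfs -u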
Once installed, another problem arises. There is currently a bug which makes the splash screen (i.e. the logo and stuff that appears while booting) look very ugly after installing NVidia's (or ATI's) binary driver. The people from Ubuntu have provided a temporary fix, enabling a splash screen in 16 color mode. IMHO this still looks like crap, so I disabled the splash screen altogether.
Having to blacklist modules to be able to install NVidia's driver, combined with the crappy plymouth screen, is, IMHO, unacceptable. Ubuntu is supposed to be a user friendly system. Forcing people to use the nouveau driver is unacceptable as well. I am free to choose whatever driver I want for my graphics card, even if that's a closed source driver. Using the closed driver increases graphical performance a lot and the desktop and text look much "sharper". Not so much with the open source driver.
The crappy plymouth screen proves, again, that the people from Ubuntu didn't do their homework and forgot there are a lot of users out there using closed source drivers for their graphics card.
All in all, I am still pleased with my 10.04 install. It looks nice and boots very fast on my 5 year old PC. But somehow, I feel like they failed again to release a Linux distribution that's ready for the desktop and ready to replace Windows there. Maybe we'll have to wait for 12.04 :)

Tuesday, May 04, 2010

When June 1 1900 is not June 1 1900

We're developing a fairly large application in Java. There are 2 front end applications, one written in Flex, the other in plain Spring-MVC and Spring-WebFlow. Both of them use 3 main applications deployed as 3 different wars on the same application server. The front end applications talk to the back end applications using RMI exposed over HTTP. The back end applications also talk to each other using the same protocol.

One of the applications deployed in the back end is responsible for validating data entered in the front end applications. One of the rules in the so-called ValidationService checks whether a person's SSIN is valid.
In Belgium, all people have a unique SSIN (Social Security Identification Number), consisting of 11 digits. The first 6 digits are based on the person's birth date: a person born on April 21, 1978 has an SSIN starting with 780421xxxyy. To check if an SSIN is valid, you need to check the person's birth date against these first 6 digits (a sketch follows below). In fact, you need to do a lot more to verify an SSIN, but the issue I was having was related to the person's birth date.
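The birth date part of that check is basically a string comparison. A minimal, hypothetical sketch (the real validation obviously does a lot more, checksums included):

// does the SSIN start with the birth date formatted as yyMMdd?
boolean birthDateMatches(String ssin, java.util.Date birthDate) {
    String prefix = new java.text.SimpleDateFormat("yyMMdd").format(birthDate);
    return ssin.startsWith(prefix);
}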

For test purposes, one of the front end applications was running on a Windows machine, while the back end applications were running on a Linux machine (some version of Ubuntu). When entering 06/01/1900 and an SSIN starting with 000106, the ValidationService threw an error at me, claiming the SSIN did not match the given birth date. While debugging the ValidationService, I saw that the date entering the system was 05/31/1900 (23h50m) instead of 06/01/1900 (00h00m). How strange.

To understand the above problem, we need to know how object serialization works in Java. Whenever objects are transferred over the wire, they are serialized. Each object knows how to serialize and deserialize itself by implementing the private methods writeObject and readObject. The java.util.Date object simply converts itself to a long value while serializing. This long value represents the number of milliseconds since January 1, 1970, 00:00:00 GMT.
For some reason I don't know, this long value on Linux is different from the long value on Windows. I can demonstrate this with a simple test class:

public class Test {
    public static void main(String[] args) {
        // The deprecated Date(int, int, int) constructor takes a 1900-based
        // year and a 0-based month, so (0, 5, 1) means June 1, 1900.
        System.out.println(new java.util.Date(0, 5, 1));
        System.out.println(new java.util.Date(0, 5, 1).getTime());
    }
}

Now, compile and run it on Windows; this should be the output (Java 6):

Fri Jun 01 00:00:00 CET 1900
-2195942961000

Do the same on Linux, and this should be the output (Java 6):

Fri Jun 01 00:00:00 CET 1900
-2195942400000


Interesting, isn't it? I wonder if there are other platforms affected as well; what's the output of this simple class on Mac OS X, BSD, AIX, Solaris, ...? IMHO they should all render the same long value, and I have a sneaking suspicion which platform had an F in mathematics :) For what it's worth, the difference is exactly 561 seconds, which smells like historical time zone data: for dates this far back, time zone databases fall back to local mean time offsets, and the two platforms apparently disagree about which offset applied in 1900.

The strange thing is, not all dates are affected. I was doing a batch upload of 600,000+ records, all with different birth dates and SSINs, and this was the only one throwing the aforementioned error.

Anyway, the above should be kept in mind when deploying different services that talk to each other over RMI on different platforms. I suspect this is a bug, but who cares. The Date and Calendar stuff in Java is a true nightmare and we should all use joda time instead.
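To illustrate why: joda time's LocalDate carries no time zone (and no millisecond offset) at all, so a birth date stays the same date on every platform:

// a LocalDate is just a date, no time zone involved
org.joda.time.LocalDate birthDate = new org.joda.time.LocalDate(1900, 6, 1);
System.out.println(birthDate); // prints 1900-06-01 on Windows and Linux alike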

Sunday, February 21, 2010

The story of the Cobalt Qube

The other week, a friendly colleague of mine brought me a small present. He gave me one of his Cobalt Qubes (for free!), which was collecting dust at his home. He was certain I could put it to better use.

I think I can still remember the days these things hit the market (somewhere in the late '90s). This was a home / office server appliance way before the people at Microsoft thought up their Windows Home Server. It ran a modified version of Red Hat Linux, featuring a web interface for all administrative tasks. It also had 2 network cards, so you could easily turn it into a gateway / firewall. The Red Hat version running on the Qube was notoriously insecure. It also ran an older 2.0.x kernel, which was outdated even at that time. The Qube 2700 and Qube 2 were both equipped with a MIPS processor; later models had an i386 architecture. Mine was a Qube 2.

Getting the serial connection to work


The Qube does not have a VGA adapter, so if you want to see what is happening inside, you'll need to get the serial connection to work. The LCD on the back also spits out some messages, but these are rather brief. We need to connect a null modem cable to the serial console port of the Qube. You can easily make these null modem cables yourself. All you need is:
  • a few metres of phone cable (4 wires)
  • 2 DB9 female connectors
Solder wire_1 to pin 2, wire_2 to pin 3 and wire_3 to pin 5 on one DB9 connector. On the other connector, solder wire_1 to pin 3, wire_2 to pin 2 and wire_3 to pin 5, and you're done. As you can see, pins 2 and 3 are crossed, and pin 5 is ground.
You can connect to the serial console from any Linux box by typing: screen /dev/ttyS0 115200. The output should be similar to the text shown below:

Cobalt Microserver Diagnostics - 'We serve it, you surf it'
Built Wed Mar 3 21:26:25 PST 1999

1.LCD Test................................PASS
2.Controller Test.........................PASS
5.Bank 0:.................................64M
6.Bank 1:.................................64M
7.Bank 2:.................................64M
8.Bank 3:.................................64M
9.Serial Test.............................PASS
10.PCI Expansion Slot....................**EMPTY**
12.IDE Test................................PASS
13.Ethernet Test...........................PASS
16.RTC Test................................PASS
BOOTLOADER: trying to boot from partition /dev/hda1
Decompressing done
Decompressing \ done.

Opening the box


I had no idea what kind of hard drive was in there, or its size, so I decided to open the box and have a look. Apparently, my box had a 10 GB Western Digital hard drive, which is rather small by today's standards. It also had 16 MB of RAM, which is also rather small.
While surfing the Internet, I found some people putting larger drives and more RAM in the Qube. Apparently, it supports up to 256 MB of RAM, but I had no idea which modern hard drives are supported.

Adding more memory


Contrary to some reports you can find on the Internet, the Qube does not use a proprietary memory format. In fact, I've managed to put in two 16 MB modules that were salvaged from an old Pentium I computer (standard 72 pin EDO RAM). However, 32 MB is not much either, so I decided to buy 2 modules of 128 MB from MemoryTen. It only took 2 days to get the memory to Belgium, and the Qube is now running happily with 256 MB.


Adding a larger disk


Since I'll be using the Qube as a LAN disk, I wanted to upgrade the hard drive to a larger one. I was not sure, however, if the Qube was going to support large drives (i.e. drives larger than, say, 40 GB). After a quick search on the Internet, it seemed that all MIPS based Qubes have an LBA48 IDE controller. This means that the capacity of the drive is limited to 144 petabytes (!!), i.e. 2^48 sectors of 512 bytes. All later models (including the RaQ) based on the i386 architecture have an LBA28 IDE controller, limiting the size to 137 GB (2^28 sectors of 512 bytes; bummer).
My local computer store still had some Western Digital Caviar disks of 320 GB, so I decided to buy one. The drive has an 8 MB buffer and is running at 7200 rpm, which should be a lot faster than the standard 10 GB drive.

Installing Debian Lenny


Martin Michlmayr has written a very good guide to installing Debian Lenny on a Qube. The Qube can easily boot an install image from the network, but the installer requires at least 32 MB of memory. Apparently, even 32 MB did not suffice, since the installer crashed in the middle of the procedure. Having 256 MB helped a lot. If you don't want to upgrade your memory, you can still install Debian by putting the drive in an external USB enclosure and extracting the tarball containing a Debian base install onto the mounted drive. This tarball (and an explanation) is also available on Martin Michlmayr's site.
Since my serial connection was working properly, I used the serial installer to install Debian on the new hard drive. As with all Debian installs, this was a breeze.

Booting the device


Booting from the new drive turned out to be difficult, however. For some reason (I'm not entirely sure why), the Qube refuses to boot from this drive. Debian itself supports the drive, since I was able to install onto it, so I needed a different device to boot from. A possible solution would be to boot from the 10 GB drive and use the 320 GB drive as storage. The Qube does not have a lot of room to spare, however, and powering 2 IDE drives might kill the external power supply. Fortunately, I had a Compact Flash to IDE adapter lying around, so I decided to try and use that as the boot device.

This seemed to work just fine. The Compact Flash adapter requires an FDD power cable, which I salvaged from an old 486DX100. The FDD cable was soldered to a molex to molex cable and looks more or less like the figure below.

The Compact Flash card now contains the 100 MB boot partition. Using a larger boot partition or the whole Compact Flash card (1 GB) did not work. The rest of the file system is mounted from the 320 GB disk, to limit the number of writes to the Compact Flash.

Conclusion


Enhancing an old Qube's capabilities was a very pleasant experience. It was a lot more fun than installing the Wyse and I must say, it looks a lot cooler. Right now, the Qube isn't doing anything yet, but I will be using it as a Samba / NFS server to store backups. Keep reading this blog, as I will post some more on the Qube in the near future.