Today, our first project done in Ruby on Rails went live.

Christoph has done a wonderful job on it. The only thing I had to do was fix some CSS buglets in IE and set up a deployment environment (development was done using the Rails-integrated WEBrick server).

Personally, I think I'd have preferred using LightTPD with FastCGI instead of Apache, but the current setup pretty much prevented me from doing so.

Which is why I installed mod_fastcgi on Apache, which was very, very easy on Gentoo (emerge mod_fastcgi, as usual).
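For reference, the Apache side of such a setup is only a handful of directives. This is a minimal sketch, not our actual config - the module path, IPC directory and document root are illustrative:

```apache
# Load the FastCGI module (path as installed by the ebuild; illustrative)
LoadModule fastcgi_module modules/mod_fastcgi.so

<IfModule mod_fastcgi.c>
    # Treat .fcgi files (like Rails' public/dispatch.fcgi) as FastCGI scripts
    AddHandler fastcgi-script .fcgi
    FastCgiIpcDir /tmp/fcgi_ipc
</IfModule>

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/railsapp/public
    <Directory /var/www/railsapp/public>
        Options +ExecCGI +FollowSymLinks
        AllowOverride all
    </Directory>
</VirtualHost>
```

Rails' stock .htaccess in public/ then rewrites incoming requests to dispatch.fcgi.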

Once I had corrected the interpreter path in dispatch.fcgi (which was set to the location of Christoph's development environment), the thing began working quite nicely.
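For context: in a Rails app of that era, public/dispatch.fcgi is a tiny Ruby script, and the interpreter path is simply its first line, which has to point at the Ruby binary on the machine the app is deployed on. A typical dispatch.fcgi looks roughly like this (the Ruby path shown is just an example):

```ruby
#!/usr/bin/ruby
# The shebang above is the interpreter path that had to be corrected.
# Load the Rails environment relative to this script's location:
require File.dirname(__FILE__) + "/../config/environment"
require 'fcgi_handler'

RailsFCGIHandler.process!
```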

And fast.

Considering the incredible amount of magic Rails does behind the scenes, the 73.15 requests per second I got are very, very impressive (ab -n 100 -c 5). That's also much faster than a comparable PHP application running under mod_php on a slightly faster server (19.36 req/s, same ab call).
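In other words - with all the caveats that follow - the Rails setup serves roughly 3.8 times as many requests per second. A quick sanity check on the two measured numbers:

```shell
# Ratio of the two measured throughputs (requests per second)
awk 'BEGIN { printf "%.2f\n", 73.15 / 19.36 }'
```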

The results have to be taken with a grain of salt, as these are different machines, under different load, running different applications.

But it's similar enough to be comparable for me: the PHP application runs on a framework somewhat similar to Rails, with less optimization but also less complexity. Both benchmarks ran against the unauthenticated start page, which pretty much comes down to including some files and rendering a template. No relevant database queries.

I wonder how much of this higher speed comes from FastCGI (a very convincing technology) instead of running the code inside the Apache server itself, and how much is just Rails being faster.

I will set up a better-defined test environment to allow an accurate performance comparison: a comparable application under mod_php, PHP-FastCGI and Rails-FastCGI. And if I have time, I'm going to run the two FastCGI tests on LightTPD as well.

Benchmarking is fun. Time-consuming, but fun.

For now, I'm content with the knowledge that an application that took very little effort to write (even considering that Christoph had to learn the Rails environment first) is running fast enough for its intended purpose.

As Christoph said: Rails Rules

thanks, guys


Transferring large amounts of data is a problem all IM networks have to overcome.

You see: Usually, you don't transfer files over the central IM server, because that would use the IM provider's bandwidth, which is better spent on sending out those small text messages. That's why files are usually transferred in a P2P fashion.

The problem here is that there's usually a NATing router or even a firewall at one end of the connection - or, most of the time, at both ends. This usually prevents a direct connection, at least without some serious trickery (warning: PDF link) going on.

This is why I never expect a file transfer to work - even when using the native IM client.

And today, a guy sent me some image. Via Jabber. Via PyMSN-t.

Just when I wanted to write back that it was never going to work, I watched the bytes flow in.

I'm still unable to believe it: Wildfire/PyMSN-t/Psi succeeded in making a direct P2P file transfer happen from a friend on MSN to my PC running a Jabber client.

transferworks.png



I have talked about Jabber before on this blog (here and here). And each time, the euphoria was dampened by one malfunctioning element or another. You know: I would never use a third-party service for a fun project if I can be my own provider as well.

And being one's own provider is one of the biggest advantages of using Jabber. Over the years (the experiment began in the winter of 2002/2003), I have been running a Jabber server on and off.

The first time was after reading the Jabber book (a very interesting read), which ended with me installing a Jabber server with transports for AIM (aim-t) and ICQ (JIT? I don't remember). Installing Jabber on Debian was quite hard because there was no (usable) package (too old, as usual), but once I got it to work, it was fun.

The problems began with the advent of iChat and .Mac: aim-t was not able to detect the presence of .Mac users and thus I was unable to talk with them via the jabber server. Unfortunately, Richard is one of those .Mac guys, so I had to find another solution to talk with him.

For a long time, the only solution was the original AIM client itself, but in the fall of 2003, Trillian 2.0 was released with AIM/iChat support. This was the demise of my Jabber solution.

While I've always liked having the whole client configuration, contact list and whatnot stored on the server, the advantage of actually being able to chat with Richard made me switch to Trillian and even pay for an IM solution, regardless of the many free alternatives.

Remember: at the time, Trillian was the only AIM client capable of talking to the .Mac guys - excluding the original AOL client, of course, but who wants to run a ton of IM clients at the same time (most of my buddies were on ICQ, which was not compatible with AIM back then)? And who wants to cope with advertising all over the place?

After that, I kept Trillian, using its Jabber plugin to stay connected to the Jabber server (which was completely pointless, as I was - and am - the only user on that server, and no one I know uses Jabber (any more)).

Then the Debian installation went away and Gentoo came. I've written about the pleasant experience with Jabber on Gentoo. Still: as I was the only user, that Jabber installation didn't live very long either (I never got around to making jabberd start automatically, so after a power outage, the service was gone - and I did not even notice *sigh*).

Only last week did my interest come back, when I saw that iChat provides Jabber support. Don't ask me why. I just wanted to check the progress of the various projects once more.

I immediately noticed ejabberd, which is what currently powers jabber.org.

On their site, I read about PyAIM-t - finally a replacement for that old aim-t without .Mac support. And I checked the readme file: yes, PyAIM-t uses the OSCAR protocol, which is what's needed to get the presence info of those .Mac users.

Installing ejabberd failed miserably though.

For one, the Gentoo ebuilds are outdated(!), and I never managed to install the whole thing in a way that let the command-line administration tool access the (working) server. I admit: I haven't invested nearly enough time to understand that Erlang thing. But why should I? It's a for-fun-only project, after all.

Via the installation instructions for that PyAIM-t transport, I found out about Wildfire. Wildfire is GPL, but backed by a company with strict commercial interests - a bit like the MySQL thing. For me, it did not matter, as I did not want to integrate it into a commercial solution. Heck! I did not even want to use the unmodified thing commercially.

Installing Wildfire was easy - even though it requires Java - especially as Gentoo provides a current ebuild (hard-masked, though, because Wildfire depends on Java 1.5). Getting it to work was a matter of emerge wildfire, /etc/init.d/wildfire start and rc-update add wildfire default, as is the norm with Gentoo.

Then I read the documentation to learn how to add an SSL certificate (signed by our company's CA), which was a bit hairy (note: the web interface does not work - if you use the web interface, you corrupt the certificate store).

Installing the transports (PyAIM-t, PyMSN-t, PyICQ-t) was a matter of untarring the archives, entering the connection settings I had configured in Wildfire and launching the scripts. Easy enough.
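The relevant part of such a transport's config.xml is small. A sketch along these lines - element names vary between transport versions, and the hostname is illustrative, so treat this as an example rather than a copy-paste config - with the port and secret matching the external component settings in Wildfire:

```xml
<pymsnt-config>
  <!-- JID the transport registers as (illustrative hostname) -->
  <jid>msn.example.org</jid>
  <!-- Where the Wildfire server accepts external components -->
  <mainServer>127.0.0.1</mainServer>
  <port>5347</port>
  <!-- Must match the component secret configured in Wildfire -->
  <secret>secret</secret>
</pymsnt-config>
```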

Then I went to select the right client (on Windows, this time around): I already knew JAJC; new to me were Exodus, Psi and Pandion. I could have kept Trillian, but the nicest thing about pure Jabber clients is that they can store their settings on the Jabber server. Trillian can't do that, so when I'm working on a new machine, I have to reconfigure Trillian, whereas a pure Jabber client will just fetch the settings from the server. Also, I wanted an open source solution.

Now, the client question is a very subjective one, as functionality-wise they are all pretty much identical - at least concerning the Jabber feature set (I'm not counting add-ons like RSS readers or whatever).

So here's my subjective review:

JAJC is not open source, provides a ton of settings to tweak (too many for my taste) and does not look that attractive, UI-wise.

Exodus seemingly does not provide a way to make the contacts on the list look different depending on which transport they use, and the chat window is very, very limited in feature set and looks. If you dislike good-looking programs with tons of unimportant settings to tweak, go for Exodus (no disrespect meant - I was one of those users myself).

That leaves Pandion and Psi.

What I like about Pandion is the nice contact list display - you know, with avatars (which work across IM networks with those Python transports!). I also like the nice-looking chat window. What I dislike is the limited number of settings to tweak (hehe... it's hard to get it right for me, isn't it?).

I like the space-economic yet still nice-looking contact list in Psi. I also like the design of the chat window and the number of settings to tweak.

Personally, I can't decide between Psi and Pandion, so I'm currently running both of them. One day, I will sure as hell know which of them I want to use.

So I'm finally up to speed with Jabber again: a nice open source client, a working server and - finally - the .Mac AIM users on my contact list, and I'm even able to chat with them.

So, you may ask: why go through all this? Why not just stick with Trillian?

Easy!

  • A pure open source solution - no strange subscription model.
  • As settings are stored on the server, I get the same configuration wherever I am.
  • Jabber has inherent support for multiple connections with the same account.
  • Jabber works on many mobile phones, so I can IM from my phone without being locked into a specific service.
  • It was fun to set up!

*happy*

putty.png

This is PuTTY, showing the output of top on one of our servers. You can see three running processes which are obviously VMWare related.

What's running there is their new VMWare Server. Here's a screenshot of the web interface, which gives an overview of all running virtual machines and allows attaching a remote console to any of them:

web.png

As you can see, that server (which is not a very top-notch one) has more than enough capacity to do the work of three servers: its own, plus a Gentoo test machine and a Windows 2003 Server machine doing some reporting work.

Even under high load on the host machine or the two virtual machines, the whole system remains stable and responsive. And it takes so much work to even drive the VMs to high load that a configuration like this could be used in production right now.

Well... what's so great about this, you might ask.

Running production servers in virtual machines has some very nice advantages:

  • It's hardware independent. Need more processing power? More RAM? Just copy the virtual machine to a new host. No downtime, no reinstallation.
  • Need to move your servers to a new location? Easy. Just move one or two machines instead of five or more.
  • It's much easier to administer. Kernel update leaving the system unable to boot? Typed "shutdown -h" instead of "shutdown -r" (both have happened to me)? Well... just attach the remote console. No more visits to the housing center.
  • Cost advantage. The host server you see is not one of the largest ever. Still, it handles real-world traffic for three servers, and we have reserve capacity for at least two more virtual machines. Why buy expensive hardware?
  • Set up new machines in no time: just copy over the template VM folder and you're done.
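The template trick in that last point really is just a file copy. Here's a toy demonstration using a scratch directory and a made-up .vmx file, not VMWare's real files (in practice, VMWare will also ask whether you moved or copied the VM, so it can regenerate the machine's UUID):

```shell
# Create a fake "template" VM folder with a minimal .vmx config file
mkdir -p /tmp/vmdemo/templates/gentoo-base
echo 'displayName = "gentoo-base"' > /tmp/vmdemo/templates/gentoo-base/gentoo-base.vmx

# "Deploying" a new machine is a recursive copy...
cp -a /tmp/vmdemo/templates/gentoo-base /tmp/vmdemo/reporting

# ...plus giving the clone its own display name in the .vmx
sed -i 's/displayName = "gentoo-base"/displayName = "reporting"/' \
    /tmp/vmdemo/reporting/gentoo-base.vmx

cat /tmp/vmdemo/reporting/gentoo-base.vmx
```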

And in case you're wondering about the performance: well, the VMs don't feel the slightest bit slower than the host (though I haven't benchmarked anything yet).

We're currently testing whether to put a setup like this into real production use, but what I've seen so far looks very, very promising.

Even though I don't think we're going to need support for this (it's really straightforward and stable), I'm more than willing to pay for a fine product like this one (the basic product will be free, while you pay for the support).

Now, please add a native 64bit edition ;-)
