<p>gnegg, by Philip Hofstetter (mail@pilif.me), http://pilif.github.com, last updated 2017-06-30</p>
<h1><a href="http://pilif.github.com/2017/03/debian-activedirectory-magic">Joining Debian to ActiveDirectory</a> (2017-03-27)</h1>
<p>This blog post is a small list of magic incantations to be issued and animals to be sacrificed in order to join a Unix machine (Debian in this case) to a (samba-powered) ActiveDirectory domain.</p>
<p>All of these things have to be set up correctly or you will suffer eternal damnation in non-related-error-message hell:</p>
<ul>
<li>Make absolutely sure that DNS works correctly
<ul>
<li>the new member server’s hostname must be in the DNS domain of the AD Domain</li>
<li>This absolutely includes reverse lookups.</li>
<li>Same goes for the domain controller. Again: Absolutely make sure that you set up a correct PTR record for your domain controller or you will suffer the curse of <code class="highlighter-rouge">GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server not found in Kerberos database)</code></li>
</ul>
</li>
<li>Disable IPv6 everywhere. I normally advocate against disabling IPv6 to solve a problem and instead solving the actual problem, but <a href="https://lists.samba.org/archive/samba/2014-January/177987.html">bugs exist</a>. Failing to disable IPv6 on either the server or the client will also cause you to suffer in <code class="highlighter-rouge">Server not found in Kerberos database</code> hell.</li>
<li>If you made previous attempts to join your member server, even if you later left the domain again, there’s probably a lingering host name added by a previous DNS update attempt. If it exists, your member server will be in <code class="highlighter-rouge">ERROR_DNS_UPDATE_FAILED</code> hell even if DNS is configured correctly.
<ul>
<li>In order to check, use <code class="highlighter-rouge">samba-tool</code> on the domain controller: <code class="highlighter-rouge">samba-tool dns query your.dc.ip.address your.domain.name memberservername ALL</code></li>
<li>If there’s a hostname, get rid of it using <code class="highlighter-rouge">samba-tool dns delete your.dc.ip.address your.domain.name memberservername A ip.returned.above</code></li>
</ul>
</li>
<li>make sure that the TLS certificate served by your AD server is trusted, either directly or chained to a trusted root. If you’re using a self-signed root (you probably are), add the root as a PEM file (but with a <code class="highlighter-rouge">.crt</code> extension!) to <code class="highlighter-rouge">/usr/local/share/ca-certificates/</code> and run <code class="highlighter-rouge">/usr/sbin/update-ca-certificates</code>. If you fail to do this correctly, you will suffer in <code class="highlighter-rouge">ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)</code> hell (no, nothing will inform you of a certificate error; all you get is <code class="highlighter-rouge">can't connect</code>).</li>
<li>In order to check that everything is set up correctly, before even trying <code class="highlighter-rouge">realmd</code> or <code class="highlighter-rouge">sssd</code>, use ldapsearch: <code class="highlighter-rouge">ldapsearch -H ldap://your.dc.host/ -Y GSSAPI -N -b "dc=your,dc=base,dc=dn" "(objectClass=user)"</code></li>
<li>Aside from all that, you can <a href="http://www.alandmoore.com/blog/2015/05/06/joining-debian-8-to-active-directory/">follow this guide</a>, but also make sure that you manually install the <code class="highlighter-rouge">krb5-user</code> package. The Debian package database is missing a dependency, so the package doesn’t get pulled in even though it is required.</li>
</ul>
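<p>The DNS sanity checks from the list above can be sketched as a small script. This is only an illustration: the host name, addresses and lookup tables are made up, and the injectable resolvers just make the logic exercisable without a live domain.</p>

```python
import socket

def dns_is_consistent(hostname, forward=None, reverse=None):
    """Check that forward (A) and reverse (PTR) DNS agree for hostname.

    forward/reverse default to the system resolver but are injectable,
    so the check can be exercised without a live AD domain.
    """
    forward = forward or socket.gethostbyname
    reverse = reverse or (lambda addr: socket.gethostbyaddr(addr)[0])
    try:
        addr = forward(hostname)   # what the A record says
        back = reverse(addr)       # what the PTR record says
    except (OSError, KeyError):
        return False
    # Kerberos wants the PTR to point straight back at the same FQDN.
    return back.lower().rstrip(".") == hostname.lower().rstrip(".")

# Toy records standing in for a correctly configured AD DNS zone
# (host name and address are made up):
records_a = {"member1.ad.example.com": "10.0.0.5"}
records_ptr = {"10.0.0.5": "member1.ad.example.com"}

ok = dns_is_consistent("member1.ad.example.com",
                       forward=records_a.__getitem__,
                       reverse=records_ptr.__getitem__)
```

<p>Run the same check with the default system resolver (no <code class="highlighter-rouge">forward</code>/<code class="highlighter-rouge">reverse</code> arguments) for both the member server and the domain controller; both directions have to agree before Kerberos will be happy.</p>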
<p>All in all, this was a really bad case of <a href="https://xkcd.com/979/">XKCD 979</a>, and in case you’re asking yourself whether I’m bitter: yes. I am bitter.</p>
<p>I can totally see that there are a ton of moving parts involved in this and I’m willing to nudge some of these parts in order to get the engine up and running. But it would definitely help if the various tools involved would give me meaningful log output. samba on the domain controller doesn’t log, <code class="highlighter-rouge">tcpdump</code> is pointless thanks to SSL everywhere, <code class="highlighter-rouge">realmd</code> fails silently while still saying that everything is ok (also, it’s unconditionally removing the config files it feeds into the various moving parts involved, so good luck trying to debug this), <code class="highlighter-rouge">sssd</code> emits cryptic error messages (see above) and so on.</p>
<p>Anyways. I’m just happy I got this working. Now to reproduce it one more time, but this time recording everything in Ansible.</p>
<h1><a href="http://pilif.github.com/2016/08/apple-watch-useful">Apple Watch starting to be useful</a> (2016-08-05)</h1>
<p>Even after the <a href="http://www.timeforcoffee.ch/">Time for Coffee</a> app has been updated for WatchOS 2.0 support last year and my Apple Watch has become significantly more useful, the fact that the complication didn’t get a chance to update very often and the fact that launching the app took an eternity kind of detracted from the experience.</p>
<p>Which led to me not really using the watch most of the time. I’m not a watch person. Never was. And while the temptation of playing with a new gadget led to me wearing it on and off, I was still waiting for the killer feature to come around.</p>
<p>This summer, this has changed a lot.</p>
<p>I’m in the developer program, so I’m running this summer’s beta versions and Apple has also launched Apple Pay here in Switzerland.</p>
<p>So suddenly, by wearing the watch, I get access to a lot of very nice features that present themselves as huge user experience improvements:</p>
<ul>
<li>While «Time for Coffee»’s complication currently is flaky at best, I can easily attribute this to WatchOS’s current beta state. But that doesn’t matter anyways, because the Watch now keeps apps running, so whenever I need public transport departure information and the complication is flaky, I can just launch the app, which now comes up instantly and loads the information in less than a second.</li>
<li>Speaking of leaving apps running: The watch can now be configured to revert to the clock face only after more than 8 minutes have passed since the last use. This is perfect for the <a href="https://www.getbring.com/#!/app">Bring</a> shopping list app which now suddenly is useful. No more taking the phone out while shopping.</li>
<li>Auto-unlocking the Mac by the presence of an unlocked and worn watch has gone from not working at all, to working rarely, to working most of the time as the beta releases have progressed (and since beta 4 we also got the explanation that WiFi needs to be enabled on the to-be-unlocked Mac, so now it works on all machines). This is very convenient.</li>
<li>While most of the banks here in Switzerland boycott Apple Pay (a topic for another blog entry - both the banks and Apple are in the wrong), I did get a Cornèrcard which does work with Apple Pay. Being able to pay contactless with the watch even for amounts larger than CHF 50 (which is the limit for passive cards) is amazing.</li>
</ul>
<p>Between all these features, I think there’s finally enough justification for me to actually wear the watch. It still happens that I forget to put it on now and then, but overall, this has totally put new life into this gadget, to the point where I’m inclined to say that it’s a totally new and actually very good experience now.</p>
<p>If you were on the fence before, give it a try come next autumn. It’s really great now.</p>
<h1><a href="http://pilif.github.com/2016/07/digipass">another fun project: digipass</a> (2016-07-05)</h1>
<p>As a customer of <a href="https://www.digitec.ch">digitec</a>, I often deal with their collection notices which I get via email and which invite me to go to their store and fetch my order (yes, I could have the goods delivered, but I’m impatient and not willing to pay the credit card surcharge).</p>
<p>Ever since Passbook happened on iOS 6, I wished for these collection notices to be iOS Passes as they have a lot of usability benefits:</p>
<ul>
<li>passes are location-aware and pop up automatically when you get close to the location</li>
<li>Wallet automatically turns the screen brightness all the way up</li>
<li>passes could potentially be updated remotely</li>
<li>once added to the Wallet, passes don’t clutter your mailbox and you’ll never lose them in the noise of your inbox.</li>
</ul>
<p>So my latest fun project is <a href="https://github.com/pilif/digipass">digipass</a>.</p>
<p>Next time you get a digitec collection notice, just forward it to</p>
<p style="text-align: center; font-size: 32px;">digipass@pilif.me</p>
<p>After a few seconds, you will get the same collection notice again, but with the PDF replaced by an iOS Wallet pass that you can add to your Wallet.</p>
<p><img src="/assets/images/digipass.png" width="414" height="736" /></p>
<p>I have slightly altered the logo and the name to make it clear that there’s no affiliation to digitec.</p>
<p>The pass will be geo-coded to the correct store, so it will automatically pop up as you get close to the store.</p>
<p>As I don’t want access to your digitec account and because digitec doesn’t have any kind of API, I unfortunately can’t automatically remove the pass when you fetch your order - that’s something only digitec can do.</p>
<p>The source code for the server is available under the MIT license.</p>
<p>Disclaimer:</p>
<ul>
<li>I’m not affiliated with digitec aside of being a customer of theirs. If they want me to shut this down, I will.</li>
<li>I am not logging the collection notices you’re forwarding me. If you don’t trust me, you can self-host, or redact the notice to contain nothing but the URLs (I need these in order to build the pass).</li>
<li>This is a fun project. If it’s down, it’s down. If it doesn’t work, submit a pull request. Don’t expect any support.</li>
<li>The LMTP daemon powering this is running in my home. I have a <a href="https://pilif.github.io/2014/09/geek-heaven/">very good connection</a>, but I also have not signed an SLA or anything. If it’s down, it’s down (the message will get queued though).</li>
<li>The moment I see this being abused, it will be shut down. Just like my previous <a href="https://pilif.github.io/2010/04/announcing-tempalias-com/">email based fun project</a></li>
</ul>
<h1><a href="http://pilif.github.com/2016/04/av-programs-as-a-security-risk">AV Programs as a Security Risk</a> (2016-04-05)</h1>
<p>Imagine you were logged into your machine as an administrator. Imagine you’re going to double-click every single attachment in every single email you get. Imagine you’re going to launch every single file your browser downloads. Imagine you answer affirmatively on every single prompt to install the latest whatever. Imagine you unpack every single archive sent to you and launch every single file in those archives.</p>
<p>This is the position that AV programs put themselves in on your machine if they want to have any chance at actually detecting malware. Just checking whether a file contains a known byte signature stopped being a reliable method for detecting viruses long ago.</p>
<p>It makes sense. If I’m going to re-distribute some well-known piece of malware, all I have to do is to obfuscate it a little bit or encrypt it with a static key and my piece of malware will naturally not match any signature of any existing malware.</p>
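<p>A toy illustration of that point (this has nothing to do with how real droppers or AV engines work internally; it only shows why static signatures are brittle): a single-byte XOR already changes every byte of the payload, so any hash- or byte-pattern-based signature stops matching, while the original is trivially recovered at run time.</p>

```python
import hashlib

def xor_obfuscate(payload: bytes, key: int) -> bytes:
    """'Encrypt' a payload with a single-byte XOR, as trivial droppers do."""
    return bytes(b ^ key for b in payload)

malware = b"known malicious payload"
signature = hashlib.sha256(malware).hexdigest()

obfuscated = xor_obfuscate(malware, 0x42)
# The byte signature no longer matches...
assert hashlib.sha256(obfuscated).hexdigest() != signature
# ...yet the original is trivially recovered at run time:
assert xor_obfuscate(obfuscated, 0x42) == malware
```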
<p>The loader stub might, but if I’m using any of the existing installer packagers, then I don’t look any different from any other setup utility for any other piece of software. No AV vendor can yet afford to blacklist all installers.</p>
<p>So the only reliable way to know whether a piece of software is malware or not, is to start running it in order to at least get it to extract/decrypt itself.</p>
<p>So here we are, in a position where an anti-malware program either <a href="http://www.attactics.org/2016/03/bypassing-antivirus-with-10-lines-of.html">is useless placebo</a> or has to put itself into the position I started this article with.</p>
<p>Personally, I think it is impossible to safely run a piece of software in a way that it cannot do any harm to the host machine.</p>
<p>AV vendors could certainly try to make it as hard as possible for malware to take over a host machine, but here we are in 2016, where most of the existing AV programs are based on projects started in the 90s, when software quality and correctness were even less of a focus than they are today.</p>
<p>We see AV programs <a href="http://krebsonsecurity.com/2010/08/anti-virus-products-mostly-ignore-windows-security-features/">disabling OS security features</a>, <a href="https://bugs.chromium.org/p/project-zero/issues/detail?id=703">installing and starting VNC servers</a> and providing any malicious web site with <a href="https://bugs.chromium.org/p/project-zero/issues/detail?id=693">full shell access to the local machine</a>. Or allow malware to <a href="http://googleprojectzero.blogspot.ch/2015/06/analysis-and-exploitation-of-eset.html">completely take over a machine</a> if a few bytes are read no matter where from.</p>
<p>This doesn’t cover the <a href="http://www.engadget.com/2015/09/19/avg-privacy-policy-update/">privacy issues</a> yet which are caused by more and more price-pressure the various AV vendors are subject to. If you have to sell the software too cheap to pay for its development (or even give it away for free), then you need to open other revenue streams.</p>
<p>Being placed in such a privileged position like AV tools are, it’s no wonder what kinds of revenue streams are now in process of being tapped…</p>
<p>AV programs by definition put themselves into an extremely dangerous spot on your machine: in order to read every file your OS wants to access, they have to run with administrative rights, and in order to actually protect you, they have to understand many, many more file formats than you have applications for on your machine.</p>
<p>AV software has to support every existing archive format, even long obsolete ones because who knows - you might have some application somewhere that can unpack it; it has to support every possibly existing container format and it has to support all kinds of malformed files.</p>
<p>If you try to open a malformed file with some application, then the application has the freedom to crash. An AV program must keep going and try even harder to see into the file to make sure it’s just corrupt and not somehow malicious.</p>
<p>And as stated above: Once it finally got to some executable payload, it often has no chance but to actually execute it, at least partially.</p>
<p>This must be one of the most difficult things to get right in all of engineering: being placed at a highly privileged spot and then being tasked to deal with content that’s malicious per definitionem is an incredibly difficult task, and when combined with obviously bad security practices (see above), I come to the conclusion that installing AV programs actually <strong>lowers the overall security of your machines</strong>.</p>
<p>Given a fully patched OS, installing an AV tool will greatly widen the attack surface as now you’re putting a piece of software on your machine that will try and make sense of every single byte going in and out of your machine, something your normal OS will not do.</p>
<p>AV tools have the choice of doing nothing against any but the most common threats if they decide to do pure signature matching, or of potentially putting your machine at risk.</p>
<p>AV these days might provide a very small bit of additional security against well-known threats (though against those you’re also protected if you apply the latest OS patches and don’t work as an admin), but they open your installation wide to all kinds of targeted attacks or really nasty 0-day exploits that can bring down your network without any user interaction whatsoever.</p>
<p>If asked what to do these days, I would give the strong recommendation to not install AV tools. Keep all the software you’re running up to date and white-list the applications you want your users to run. Make use of white-listing by code-signatures to, say, allow everything by a specific vendor. Or all OS components.</p>
<p>If your users are more tech-savvy (like developers or sysadmins), don’t whitelist, but also don’t install AV on their machines. They are normally good enough not to accidentally run malware, and the risk of them screwing up is much lower than the risk of somebody exploiting the latest flaw in your runs-as-admin-and-launches-every-binary piece of security software.</p>
<h1><a href="http://pilif.github.com/2015/11/new-apple-tv">The new AppleTV</a> (2015-11-25)</h1>
<p>When the 2nd generation of the AppleTV came out and offered AirPlay support, I bought one more or less for curiosity value, but it worked so well in conjunction with <a href="http://inmethod.com/airvideohd/index.html">AirVideo</a> that it has completely replaced my previous attempts at an in-home media center system.</p>
<p>It was silent, never really required OS or application updates, never crashed and never overheated. And thanks to AirVideo it was able to play everything I could throw at it (at the cost of a server running in the closet, of course).</p>
<p>The only inconvenience was the fact that I needed too many devices. Playing a video involved my TV, the AppleTV and my iOS device, plus remotes for the TV and the AppleTV. Personally, I didn’t really mind much, but while I would have loved to give my parents access to my media library (<a href="/2014/09/geek-heaven/">1 Gbit/s upstream FTW</a>), the requirement to use three devices and to correctly switch on AirPlay made this impossible due to the complexity.</p>
<p>So I patiently awaited the day when the AppleTV would finally be able to run apps itself. There was no technical reason to prevent that - the AppleTV totally was powerful enough for this and it was already running iOS.</p>
<p>You can imagine how happy I was when finally I got what I wanted and the new 4th generation AppleTV was announced. Finally a solution my parents can use. Finally something to help me to ditch the majority of the devices involved.</p>
<p>So of course I bought the new device the moment it became available.</p>
<p>I even had to go through additional trouble due to the lack of the optical digital port (the old AppleTV was connected to a <a href="http://www.sonos.com/en-wo/shop/playbar?r=1">Sonos playbar</a>), but I found an audio extractor that works well enough.</p>
<p>So now, after a few weeks of use, the one thing that actually pushed me to write this post is the fact that the new AppleTV is probably the most unfinished and unpolished product I have ever bought from Apple. Does it work? Yes. But the list of small oversights and missing pieces is bigger than I have ever seen in an Apple product. Ever.</p>
<p>Let me give you a list - quite like <a href="/2003/02/the-13-most-annoying-things-of-the-p800-phone/">what I’ve done 12 years ago</a> for a very different device</p>
<ul>
<li>While the AppleTV provides you with an option to touch it with an iOS device for the configuration of the WiFi and the Apple ID settings, I still had to type in my Apple ID password twice: once for the App Store and once for Game Center. Mind you, my Apple ID password is 30 characters long, containing uppercase, lowercase, digits and symbols. Have fun doing this on the on-screen keyboard.</li>
<li>The UI is laggy. The reason for having to type in the Game Center password was that the UI was still loading the system Apple ID as I was pressing the “Press here to login” button. First nothing happened, then the button turned into a “Press here to sign out” button, and then the device reacted to my button press. <em>Thank you</em></li>
<li>The old AppleTV supported either the Remote app on an iPhone or even a bluetooth keyboard for character entry. The new one doesn’t support any of this, so there’s really no way around the crappy on-screen keyboard.</li>
<li>While the device allows you to turn off automatic app updates, there is no list of apps with pending updates. There’s only “Recently updated”, but that’s a) limited to 20 apps, b) lists <em>all</em> recently updated apps, c) gives no indication which apps have been updated and which haven’t, and finally d) isn’t even sorted by date of the last update. This UI is barely acceptable with automatic updates enabled, but completely unusable if you want them disabled, to the point that I decided to just bite the bullet and enable them.</li>
<li>The sound settings offer “Automatic”, “Stereo” and “Dolby Surround”. Now, “Dolby Surround” is a technology from the mid-90s that encodes one additional back channel in a stereo signal and is definitely not what you want (which would be “Dolby Digital”). Of course I assumed that there was some “helpfulness” at work here, detecting the fact that my TV doesn’t support Dolby Digital (but the playbar does, so it’s perfectly fine to send out an AC-3 signal). Only after quite a bit of debugging did I find out that what Apple calls “Dolby Surround” is actually “Dolby Digital”. WHY??</li>
<li>The remote is <em>way</em> too sensitive. If you so much as lift it up, you’ll start seeking your video (which works way better than anything I’ve seen before, but still…)</li>
<li>Until the first update (provided without changelog or anything the like), the youtube app would start to constantly interrupt playback and reload the stream once you paused a video once.</li>
<li>Of course in Switzerland, Siri doesn’t work, even though I would totally be able to use it in English (or German; it’s available in Germany after all). Not that it matters, because the Swiss store is devoid of media I’d actually be interested in anyways, and there’s no way for third parties to integrate into the consolidated system-wide interface for media browsing.</li>
<li>Home Sharing doesn’t work for me. At. All. Even after typing in my Apple ID password a third time (which, yes, it asked me to).</li>
<li>It still doesn’t wake up on network access, nor appear in the list of Airplay-able devices of my phone when it’s in sleep mode. This only happens in one segment of my network, so it might be an issue with a switch though - wouldn’t be the first time :/</li>
</ul>
<p>I’m sure as time goes on we’ll see updates to fix this mess, but I cannot for the life of me understand why Apple thought that the time was ready to release this.</p>
<p>Again: it works fine, and I will be bringing one to my mother next Friday because I know she’ll be able to use it just fine (especially using the Plex app). But this kind of lack of polish is what we’re used to on Android and Windows. How can Apple produce something like this?</p>
<h1><a href="http://pilif.github.com/2015/10/sni-progressive-enhancement">SNI progressive enhancement</a> (2015-10-20)</h1>
<p>Today marks another big milestone in the availability of ubiquitous SSL encryption: the «Let’s Encrypt» project <a href="https://letsencrypt.org/2015/10/19/lets-encrypt-is-trusted.html">got their cross-signature</a>, so come a few more weeks, they will be ready for the public to use.</p>
<p>However, with an unlimited amount of available free SSL certificates, we get another problem: Because back in the day nobody thought about name based virtual hosting, the initial implementation of SSL didn’t support the client telling the server what host it’s trying to connect to. This means that the server didn’t know what certificate to present when multiple host names were to be used for the same address.</p>
<p>This meant that for every site you wanted to offer over SSL, you needed an IP address, which are harder to get as time moves on and we’re running out of them.</p>
<p>«<a href="https://en.wikipedia.org/wiki/Server_Name_Indication">SNI</a>» is a protocol extension that allows the client to tell the server the host name it’s connecting to, so the server can choose the correct certificate to serve. This fixes the above issue and finally allows virtual hosting based on the host name even over SSL.</p>
<p>Unfortunately, SNI isn’t as widely supported as we’d like: older Android devices and all IEs on Windows XP (which is still a sizeable portion of our users) don’t support SNI.</p>
<p>What’s also tricky is that you don’t know a client doesn’t support SNI until it’s too late: they connect to your port 443, don’t send a host name, and now the server needs to a) answer and b) send a server certificate. So unless the client accidentally hit the correct host name, it will get a certificate mismatch and thus display the usual SSL error message.</p>
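<p>The server’s dilemma can be sketched in a few lines; the function and host names below are made up, but this is effectively the choice any TLS endpoint has to make:</p>

```python
def select_certificate(sni_hostname, certs_by_host, default_host):
    """Pick the certificate to present in a TLS handshake.

    certs_by_host maps host names to certificates; default_host is what
    the server falls back to when the client sent no SNI extension.
    """
    if sni_hostname is None:
        # Pre-SNI client: the server has to guess, so everyone gets the
        # default certificate, matching or not.
        return certs_by_host[default_host]
    return certs_by_host.get(sni_hostname, certs_by_host[default_host])

certs = {"a.example.org": "cert-A", "b.example.org": "cert-B"}
# An SNI-capable client asking for b.example.org gets the right certificate:
assert select_certificate("b.example.org", certs, "a.example.org") == "cert-B"
# A non-SNI client that actually wanted b.example.org gets cert-A,
# hence the certificate mismatch warning:
assert select_certificate(None, certs, "a.example.org") == "cert-A"
```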
<p>This is of course not very good UX as you don’t even get to tell the user what’s wrong before they see the browser-specific error message.</p>
<p>However, I still want to support SSL for all my sites wherever I can. If I could keep non-SNI-supporting clients on an unencrypted site and add encryption only for clients that support SNI, then encryption would become a progressive enhancement. The sites I’m dealing with aren’t that far in the «needs encryption» territory, so offering encryption only for good (read: non-outdated) browsers is a viable option, especially as I want to offer this for free for the sites I’m hosting and I only have so many IP addresses at my disposal right now.</p>
<p>Generally, the advice to do that is to <a href="http://serverfault.com/a/389818">do user agent sniffing</a> but that’s error-prone. I’d much rather feature detect.</p>
<p>So after a bit of thinking, I came up with this (it requires JS though):</p>
<ul>
<li>Over port 80, serve the normal site unencrypted instead of just redirecting to https.</li>
<li>On that regular site do a jsonp request for some beacon file on your site over https.</li>
<li>If that beacon loads properly, then your client is obviously SNI compliant, so redirect to the https version of your site using JS.</li>
<li>If the beacon doesn’t load, then the browser probably doesn’t support SNI, so keep them on the unencrypted page. If you want to, you can set a cookie to prevent further probing on subsequent requests.</li>
<li>On port 443, serve a <a href="https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security">HSTS header</a>, so next time the browser visits, they’ll use HTTPS from the start.</li>
</ul>
<p>IE8 will still show the page correctly but also show a warning that it has blocked content for your own security, so you might want to immediately redirect again (with the cookie set) in order to get rid of the warning.</p>
<p>Contrary to the normal immediate redirect to HTTPS, this means that the first page-view even of compliant browsers will be unencrypted, so absolutely make sure that you serve all your cookies with the <code class="highlighter-rouge">secure</code> flag. This also means that in order to get to the encrypted version of the page, you need JavaScript enabled - at least for the first time.</p>
<p>Maybe you can come up with some crazy hack using frames, but this method seems to be the cleanest.</p>
<h1><a href="http://pilif.github.com/2015/08/ipv6-in-production">IPv6 in production</a> (2015-08-07)</h1>
<p>Yesterday, I talked about <a href="/2015/08/why-ipv6/">why we need IPv6</a>, and to make that actually happen, I decided to do my part and make sure that all of our infrastructure is available over IPv6.</p>
<p>Here’s a story of how that went:</p>
<p>The first step was to request an IPv6 allocation from our hosting provider. Thankfully our contract with them included a /64, but it was never enabled, and when I asked for it, they initially tried to bill us CHF 12/month extra. After pointing them to the contract, they started to make IPv6 happen.</p>
<p>That this still took them multiple days was a pointer that they were not ready at all and that by asking, I was forcing them into readiness. I think I have done a good deed there.</p>
<h2 id="dns">dns</h2>
<p>Before doing anything else, I made sure that our DNS servers are accessible over IPv6 and that IPv6 glue records existed for them.</p>
<p>We’re using PowerDNS, so actually supporting IPv6 connectivity was trivial, though there was a bit of tweaking needed to tell it about what interface to use for outgoing zone transfers.</p>
<p>Creating the glue records for the DNS servers was trivial too - <a href="http://nic.ch">nic.ch</a> has a nice UI to handle the glue records. I’ve already had IPv4 glue records, so all I had to do was to add the V6 addresses.</p>
<h2 id="web-properties">web properties</h2>
<p>Making our web properties available over IPv6 was trivial. All I had to do was to assign an IPv6 address to our frontend load balancer.</p>
<p>I did not change any of the backend network though. That’s still running IPv4 and it will probably for a long time to come as I have already carefully allocated addresses, configured DHCP and I even know IP addresses by heart. No need to change this.</p>
<p>I had to update the web application itself a tiny bit in order to cope with a <code class="highlighter-rouge">REMOTE_ADDR</code> that didn’t quite look the same any more though:</p>
<ul>
<li>there were places where we were putting the remote address into the database. Thankfully, we are using PostgreSQL, whose <a href="http://www.postgresql.org/docs/9.4/static/datatype-net-types.html">native <code class="highlighter-rouge">inet</code> type</a> (it even supports <a href="http://www.postgresql.org/docs/9.4/static/functions-net.html">handy type-specific operators</a>) has supported IPv6 since practically forever. If you’re using another database and you’re storing the address in a <code class="highlighter-rouge">VARCHAR</code>, be prepared to lengthen the column, as IPv6 addresses are much longer.</li>
<li>There were some places where we were using CIDR matching for some privileged API calls we are allowing from the internal network. Of course, because I haven’t changed the internal network, no code change was strictly needed, but I have updated the code (and unit tests) to deal with IPv6 too.</li>
</ul>
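<p>For the curious, a family-agnostic CIDR check ends up looking roughly like this sketch using Python’s stdlib (the networks shown are documentation placeholders, not our actual internal ranges):</p>

```python
import ipaddress

# Placeholder internal ranges; not our real network numbering.
PRIVILEGED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),       # internal IPv4
    ipaddress.ip_network("2001:db8:1::/48"),  # internal IPv6
]

def is_privileged(remote_addr: str) -> bool:
    """True if REMOTE_ADDR falls inside one of the internal networks.

    ipaddress.ip_address parses both families, and "addr in network"
    is simply False across families, so one check covers v4 and v6.
    """
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in PRIVILEGED_NETWORKS)
```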
<p>The last step was to add the AAAA record for our load balancer.</p>
<p>From that moment on, our web properties were available via IPv6 and while there’s not a lot of traffic from Switzerland, over in Germany, about 30% of all requests are happening over IPv6.</p>
<h2 id="email">email</h2>
<p>Of the bunch, dealing with email was the most complicated step. Not so much for enabling IPv6 support in the MTA as that was supported since forever (we’re <a href="/2004/06/all-time-favourite-tools/">using Exim</a> (warning: very old post)).</p>
<p>The difficulty lay in getting everything else to work smoothly though, mostly in regards to SPAM filtering:</p>
<ul>
<li>Many RBLs don’t support IPv6, so I had to make sure we weren’t accidentally treating all mail delivered to us over IPv6 as spam.</li>
<li>If you want to have <em>any</em> chance at your mail being accepted by remote parties, then you must have a valid PTR record for your mail server. This meant getting reverse DNS to work right for IPv6.</li>
<li>Of course you also need to update the SPF record now that you are sending email over IPv6.</li>
</ul>
<h2 id="ptr-record">PTR record</h2>
<p>The PTR record was actually the crux of the matter.</p>
<p>In IPv4, it’s impractical or even impossible to get a reverse delegation for anything smaller than a /24 because of the way reverse lookup works in DNS. There was <a href="https://www.ietf.org/rfc/rfc2317.txt">RFC 2317</a>, but that was just too much hassle for most ISPs to implement.</p>
<p>So the process normally was to let the ISP handle the few PTR records you wanted.</p>
<p>This changes with IPv6 in two ways: as the allocation is mostly fixed to a /64 or larger, and because there are so many IPv6 addresses that networks can be split at byte boundaries without being stingy, it is trivially easy to do proper reverse delegation to customers.</p>
<p>And because there are so many addresses available for a customer (a /64 allocation is enough addresses to cover 2<sup>32</sup> whole internets), reverse delegation is the only way to make good use of all these addresses.</p>
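Both halves of that claim can be checked mechanically. A sketch using Python’s <code class="highlighter-rouge">ipaddress</code> module (the /64 prefix is a documentation-prefix placeholder): it derives the <code class="highlighter-rouge">ip6.arpa</code> zone an ISP would delegate for a customer’s /64, and verifies the “2^32 internets” arithmetic.

```python
import ipaddress

# Placeholder customer prefix (documentation prefix).
net = ipaddress.ip_network("2001:db8:abcd:1234::/64")

# reverse_pointer yields 32 nibble labels; the first 16 are the host
# part, so dropping them gives the zone the ISP delegates:
labels = net.network_address.reverse_pointer.split(".")
zone = ".".join(labels[16:])
print(zone)  # 4.3.2.1.d.c.b.a.8.b.d.0.1.0.0.2.ip6.arpa

# A /64 holds 2**64 addresses; one "whole internet" of IPv4 is 2**32,
# so a single /64 covers 2**32 such internets.
assert net.num_addresses // 2**32 == 2**32
```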
<p>This is where I hit my next roadblock with the ISP though.</p>
<p>They were not at all set up for proper reverse delegation - the support ticket I opened in November 2014 took over six months to finally get closed in May of this year.</p>
<p>As an aside: This was a professional colocation provider for business customers that was, in 2014, not prepared to even just hand out IPv6 addresses and who required 6 months to get reverse delegation to work.</p>
<p>My <a href="https://fiber7.ch">awesome ISP</a> has been handing out IPv6 addresses since the late nineties and they offer reverse delegation <em>for free</em> to anybody who asks. As a matter of fact, it was <em>them</em> who asked <em>me</em> whether I wanted a reverse delegation last year when I signed up with them.</p>
<p>Of course I said yes :-)</p>
<p>This brought me to the paradoxical situation of having a fully working IPv6 setup at home while I had to wait for 6 months for my commercial business ISP to get there.</p>
<h3 id="its-done-now">it’s done now</h3>
<p>So after spending about 2 days learning about IPv6, after spending about 2 days updating our application, after spending one day convincing our ISP to give us the IPv6 allocation they promised in the contract and after waiting <em>6 months</em> for the reverse delegation, I can finally say that all our services are now accessible via IPv6.</p>
<p>Here are the headers of the very first email we transmitted via IPv6:</p>
<p><img src="/assets/images/ipv6_headers.png" alt="screenshot showing the headers of our first email transmitted over IPv6" /></p>
<p>And here’s the achievement badge I waited so patiently (because of the PTR delegation) to finally earn 🎉</p>
<p><a href="https://ipv6.he.net/certification/scoresheet.php?pass_name=pilif" target="_blank"><img src="https://ipv6.he.net/certification/create_badge.php?pass_name=pilif&badge=1" style="border: 0; width: 128px; height: 128px" alt="IPv6 Certification Badge for pilif" /></a></p>
<p>I can’t wait for the accompanying T-Shirt to arrive 😃</p>
<h1 id="why-we-need-ipv6">Why we need IPv6</h1>
<p>2015-08-06 · http://pilif.github.com/2015/08/why-ipv6</p>
<p>As we are running out of IPv4 network addresses (and yes, we are), there are only two possible future scenarios, and one of the two most people are not going to like at all.</p>
<p>As IP addresses get more and more scarce, things will start to suck for both clients and content providers.</p>
<p>As more and more clients connect, carrier grade NAT will become the norm. NAT already sucks, but at least you get to control it and using NAT-PMP or UPnP, applications in your network get some control over being able to accept incoming connections.</p>
<p>Carrier Grade NAT is different. That’s NAT being done on the ISPs end, so you don’t get to open ports at all. This will affect gaming performance, it will affect your ability to use VoIP clients and of course file sharing clients.</p>
<p>For content providers on the other hand, it will become more and more difficult to get the public IP addresses needed for them to be able to actually provide content.</p>
<p>Back in the day, if you wanted to launch a service, you would just do it. No need to <a href="/2011/09/asking-for-permission/">ask anybody for permission</a>. But in the future, as addresses become scarce and controlled by big ISPs which are also acting as content provider, the ISPs become the gatekeepers for new services.</p>
<p>Either you do something they want you to be doing, or you don’t get an address: As there will be way more content providers fighting over addresses than there will be addresses available, it’s easy for them to be picky.</p>
<p>Old companies who still have addresses of course are not affected, but competing against them will become hard or even impossible.</p>
<p>More power to the ISPs and no competition for existing content providing services both are very good things for players already in the game, so that’s certainly a possible future they are looking forward to.</p>
<p>If we want to prevent this possible future from becoming reality, we need a way out. IPv4 is drying up. IPv6 has existed for a long time, but people are reluctant to upgrade their infrastructure.</p>
<p>It’s a vicious cycle: People don’t upgrade their infrastructure to IPv6 because nobody is using IPv6 and nobody is using IPv6 because there’s nothing to be gained from using IPv6.</p>
<p>If we want to keep the internet as an open medium, we need to break the cycle. Everybody needs to work together to provide services over IPv6, to the point of even offering services over IPv6 exclusively.</p>
<p>Only then can we start to build pressure for ISPs to support IPv6 on their end.</p>
<p>If you are a content provider, ask your ISP for IPv6 support and start offering your content over IPv6. If you are an end user, pressure your ISP to offer IPv6 connectivity.</p>
<p>Knowing all this, one year ago, motivated by my <a href="/2014/09/geek-heaven/">awesome ISP</a>, which has offered IPv6 connectivity since forever, I started to get our commercial infrastructure up to speed.</p>
<p><a href="/2015/08/ipv6-in-production/">Read on</a> to learn how that went.</p>
<h1 id="the-future-of-the-jrpg-genre">The Future of the JRPG genre</h1>
<p>2015-05-04 · http://pilif.github.com/2015/05/the-future-of-the-jrpg</p>
<p>After an underwhelming false start with <a href="http://en.wikipedia.org/wiki/Xenoblade_Chronicles">Xenoblade Chronicles</a> back when the game came out, the re-release on the 3DS made me give it another try, and now that I’m nearly through with the game (I just beat the third-to-last main quest boss), I feel compelled to write my first game review after many years of non-gaming content here.</p>
<p>«Review» might not be the entirely correct term though, as this article is about to explain why I personally believe Xenoblade to be one of the best instances of the <a href="http://en.wikipedia.org/wiki/Japanese_role-playing_game">JRPG genre</a> and might actually be very high up there in my list of all-time favorite games.</p>
<p>But first, let’s talk about what’s not so good about the game and why I nearly missed this awesome game: If I had to list the shortcomings of this masterpiece, it would be the UI design of the side-questing system and the very, very slow start of the story.</p>
<p>First the story: After maybe an hour of play time, the player is inclined to think they have been thrown into the usual revenge plot, this time about a fight against machine-based life-forms, but a simple revenge plot nonetheless. Also, to be honest, it’s not even a really interesting revenge plot. It feels predictable and not at all like what we’re usually used to from the genre.</p>
<p>Once you reach the half-time mark of the game, the subtle hints the game has been dropping on you start to become less and less subtle, revealing to the player that they got it all wrong.</p>
<p>The mission of the game changes completely to the point of even completely changing whom you are fighting against and turning around many things you’ve taken for granted for the first half.</p>
<p>This is some of the most impressive story-development I’ve seen so far and also came as a complete surprise to me.</p>
<p>So what felt like the biggest shortcoming of the game (lackluster story) suddenly turned into one of its strongest points.</p>
<p>«Other games of the genre also did this» you might think as you compare this to Final Fantasy XII, but where that game unfortunately never really takes off nor adds any bigger plot-twists, the thing that Xenoblade does after the half-time marker is simply mind-blowing to the point of me refusing to post any spoilers even though the game is quite old by now.</p>
<p>So we have a game that gets amazing after 20-40 hours (depending on how you deal with the side-quests). What’s holding us over until then?</p>
<p>The answer to that question is the reason why I think that Xenoblade is one of the best JRPGs so far: What’s holding us over in the first 40 hours of the game is, you know, gameplay.</p>
<p>The battle system feels like it has been lifted from current MMORPGs (I’m mostly referring to World of Warcraft here, as that’s the one I know best). While the sheer number of skills has been scaled down, the abilities themselves are much better balanced between the characters, which of course is possible in a single-player game.</p>
<p>The game’s affinity system also greatly incentivises the player to switch their party around as they play the game. This works really well when you consider the different play styles offered by the various characters. A tank plays differently from DPS which plays differently from the (unfortunately only one) healer.</p>
<p>But even between members of the same class there are differences in play style leading to a huge variety for players.</p>
<p>This is the first JRPG where I’m actually <em>looking forward</em> to combat - it’s <em>that</em> entertaining.</p>
<p>While the combat sometimes can be a bit difficult, especially because randomness still plays a huge part, it’s refreshing to see that the game doesn’t punish you at all for failing: If you die you just respawn at the last waypoint and usually there’s one of these right in front of the boss.</p>
<p>Even better, normally the fight just starts again, skipping all introductory cutscenes. And even if there are some cut-scenes not skipped automatically: The game always allows cutscenes to be skipped.</p>
<p>This makes a lot of sense, because combat is actually so much fun that there’s considerable replay value to the game, which is greatly helped by skippable cutscenes - though some of them you would never ever in your life want to skip - they are <em>so good</em> (you know which ones I’m referring to).</p>
<p>Combat is only one half of the gameplay, the other is exploration: The world of the game is huge and for the first time ever in a JRPG, the simple rule of «you can see it, you can go there» applies. For the first time ever, the huge world is yours to explore and to enjoy.</p>
<p>Never have I seen such variety in locations, especially, again, in the second half of the game which I really don’t want to spoil here.</p>
<p>Which brings us to the side-quests: Imagine that you have a quest-log like you’re used to from MMORPGs with about the same style of quests: Find this item, kill these normal mobs, kill that elite mob, talk to that other guy - you know the drill.</p>
<p>The non-unique and somewhat random dialog lines between the characters as they accept these side-quests break the immersion a bit.</p>
<p>But the one big thing that’s really annoying about the side-quests is discoverability: As a player you often have no idea where to go due to the vague quest texts and, worse, many (most) quests are hidden and only become available after you trigger some event or you talk to the correct (seemingly unrelated) NPC.</p>
<p>While I can understand the former issue (vague quest descriptions) from a game-play perspective, the latter is inexcusable, especially as the leveling curve of the game and the affinity system both really are designed around you actually doing these side-quests.</p>
<p>It’s unfair and annoying that playing hide-and-seek for hours is basically a fixed requirement for having a chance at beating the game. This feels like a useless prolonging of the game for no reason but to, you know, prolong the game.</p>
<p>Thankfully though, by now, <a href="http://xenoblade.wikia.com/wiki/Xenoblade_Wiki">the Wiki</a> exists, so whether you’re on the Wii or the 3DS, just have an iPad or Laptop close to you as you do the side-questy parts of the game.</p>
<p>Once you’re willing to live with this issue, the absolutely amazing gameplay comes into effect again: Because exploration is so much fun and because the battle system is so much fun, the side-quests suddenly become fun too, once you remove the annoying hide-and-seek aspect.</p>
<p>After all, it’s the perfect excuse to do more of what you enjoy the most: Playing the game.</p>
<p>This is why I strongly believe that this game would have been so much better with a more modern quest-log system: Don’t hide (most of the) quests! Be precise in explaining where to find stuff! You don’t have to artificially prolong the game: Even when you know where to go (I did thanks to the Wiki), there’s still more than 100 hours of entertainment there to be had.</p>
<p>The last thing about quests: Some of the quests require you to find rare items, which you have a random chance of getting by collecting «item orbs» spread all over the map. This is of course another nice way to encourage exploration.</p>
<p>But I see no reason why the drop rate must be random, especially as respawning the item orbs requires you to either wait 10 to 30 minutes or save and reload the game.</p>
<p>If you want to encourage exploration, hide the orbs! There’s so much content in this game that artificially prolonging it with annoying saving-and-reloading escapades is completely unnecessary.</p>
<p>At least, the amount of grinding required isn’t bad, to the point of being absolutely bearable even for me, and I have nearly zero patience for grinding.</p>
<p>Don’t get me wrong though: Yes, these artificial time-sinks were annoying (and frankly 100% unneeded), but because the actual gameplay is so much fun, I didn’t really mind them that much.</p>
<p>Finally, there are some technical issues, which I don’t really mind that much: Faces of characters look flat and blurry, which is very noticeable in the cut-scenes, all of which are rendered by the engine itself (which is a very good thing).</p>
<p>Especially on the 3DS the low resolution of the game is felt badly (the 3DS is much worse than the Wii to the point of objects sometimes being invisible) and there’s some objects popping into view at times. This is mostly a limitation of the hardware which just doesn’t play well with the huge open world, so I can totally live with it. It only minimally affects my immersion into the game.</p>
<p>If you ask me what is the preferred platform to play this on, I would point at the Wii version though, of course, it’ll be very hard to get the game at this point in time (no. you can’t have my copy).</p>
<h2 id="the-good">the good</h2>
<p>So after all of this, here’s a list of the unique features this game has over all other members of its genre:</p>
<ul>
<li>Huge world that can be explored completely. No narrow hallways but just huge open maps.</li>
<li>Absolutely amazing battle system that goes far beyond of the usual «select some action from this text-based menu»</li>
<li>Skippable cutscenes which together with the battle system make for a high replayability</li>
<li>Many different playable characters with different play styles</li>
<li>Great music by the god-like Mr. Mitsuda</li>
<li>A very, very interesting story once you reach the mid-point of the game</li>
<li>Very believable characters and very good character development</li>
<li>Some of the best cutscene direction I have ever seen in my life - again, mostly after the half-time mark (you people who played the game know which particular one I’m talking about - still sends shivers down my spine).</li>
</ul>
<h2 id="my-wishes-for-the-future">My wishes for the future</h2>
<p>The game is nearly perfect in my opinion, but there are two things I think would be great to be fixed in the successor or any other games taking their inspiration from Xenoblade:</p>
<p>First, please fix the quest log and bring it to the current decade of what we’re used to from MMORPGs (where you lifted the quest design off to begin with): Show us where to get the quests, show us where to do them.</p>
<p>Second, and this one is even bigger in my opinion: Please be more considerate in how you represent women in the game. Yes, the most bad-ass characters in the game are women (again, I can’t spoil anything here). Yes, there’s a lot of depth to the characters of women in this game and they are certainly not just there for show but are actually instrumental to the overall story development (again, second part).</p>
<p>But why does most of the equipment for the healer in the game have to be practically underwear? Do you really need to spend CPU resources on (overblown) breast physics when you render everybody’s faces blurry and flat?</p>
<p>Wouldn’t it be much better for the story and the immersion if the faces looked better at the cost of some (overblown) jiggling?</p>
<p>Do you really have to constantly show close-ups of way too big breasts of one party member? This is frankly distracting from what is going on in the game.</p>
<p>I don’t care about cultural differences: You managed to design very believable and bad-ass women into your game. Why do you have to diminish this by turning them into a piece of furniture to look at? They absolutely stand on their own with their abilities and their character progression.</p>
<p>It is the year 2015. We can do better than this (though, of course, the world was different in 2010 when the game initially came out).</p>
<h2 id="conclusion">Conclusion</h2>
<p>All of that aside: Because of the amazing game play, because of the mind-blowing story, because of the mind-blowing cutscene direction and because of the huge world that’s anything but narrow passages, I love this game more than many others.</p>
<p>I think this is the first time the JRPG genre has really moved forward in about a decade and I would definitely like to see more games ripping off the good aspects of Xenoblade (well - basically everything).</p>
<p>As such, I’m very much looking forward to the game’s successor becoming available here in Europe (it has just come out in Japan and my Japanese is still practically non-existent) and I know for a fact that I’m going to play it a lot, especially as I now know to be patient with the side-quests.</p>
<h1 id="geek-heaven">Geek heaven</h1>
<p>2014-09-19 · http://pilif.github.com/2014/09/geek-heaven</p>
<p>If I had to make a list of attributes I would like the ISP of my dreams to
have, then I could write quite the list:</p>
<ul>
<li>I would really like to have native IPv6 support. Yes. IPv4 will be
sufficient for a very long time, but unless people start having access to
IPv6, it’ll never see the wide deployment it needs if we want the internet
to continue to grow. An internet where addresses are only available to
people with a lot of money is not an internet we all want to be subjected to
(see my post <a href="http://pilif.github.io/2011/09/asking-for-permission/">«asking for permission»</a>)</li>
<li>I would want my ISP to accept or even support network neutrality. For this
to be possible, the ISP of my dreams would need to be nothing but an ISP so
their motivations (provide better service) align with mine (getting better
service). ISPs who also sell content have all the motivation to provide
crappy Internet service in order to better sell their (higher-margin)
content.</li>
<li>If I have technical issues, I want to be treated as somebody who obviously
has a certain level of technical knowledge. I’m by no means an expert in
networking technology, but I do know about powering it off and on again. If
I have to say «<a href="http://xkcd.com/806/">shibboleet</a>» to get to a real
technician, so be it, but if that’s not needed, that’s even better.</li>
<li>The networking technology involved in getting me the connectivity I want
should be widely available and thus easily replaceable if something breaks.</li>
<li>The networking technology involved should be as simple as possible: The
more complex the hardware involved, the more stuff can break, especially
when you combine cost-pressure for end-users with the need for high
complexity.</li>
<li>The network equipment I’m installing at my home and which has thus access
to my LAN needs to be equipment I own and I control fully. I do not accept
leased equipment to which I do not have full access to.</li>
<li>And last but not least, I would really like to have as much bandwidth as possible</li>
</ul>
<p>I’m sure I’m not alone with these wishes, even though, for «normal people»
they might seem strange.</p>
<p>But honestly: They just don’t know it, but they too have the same interests.
Nobody wants an internet that works like TV where you pay for access to a
curated small list of “approved” sites (see network neutrality and IPv6
support).</p>
<p>Nobody wants to get up and reboot their modem every now and then because it
crashed. Nobody wants to be charged with downloading illegal content
because their Wifi equipment was suddenly repurposed as an open access point
for other customers of an ISP.</p>
<p>Most of the wishes I list above are the basis needed for these horror
scenarios never coming to pass, however unlikely they might seem now (though
getting up and rebooting the modem/router is something we already have to
deal with today).</p>
<p>So yes. While it’s getting rarer and rarer to get all the points of my list
fulfilled, to the point where I thought this to be impossible to get all of
it, I’m happy to say that here in Switzerland, there is at least one ISP that
does all of this and more.</p>
<p>I’m talking about <a href="http://init7.ch">Init7</a> and especially their
<a href="https://www.fiber7.ch/">awesome FTTH offering Fiber7</a> which very recently
became available in my area.</p>
<p>Let’s deal with the technology aspect first as this really isn’t the
important point of this post: What you get from them is pure 1Gbit/s
Ethernet. Yes, they do sell you a router box if you want one, but you can
just as well just get a simple media converter, or just an SFP module to plug
into any (managed) switch (with SFP port).</p>
<p>If you have your own routing equipment, be it a linux router like my
<a href="/2006/07/computers-under-my-command-issue-1-shion/">shion</a> or be it any
Wifi Router, there’s no need to add any kind of additional complexity to
your setup.</p>
<p>No additional component that can crash, no software running in your home to
which you don’t have your password to and certainly no sneakily opened
public WLANs (I’m looking at you,
<a href="http://www.upc-cablecom.ch/de/support/tools/wi-free/">cablecom</a>).</p>
<p>Of course you get native IPv6 (a /48 which incidentally is room for
281474976710656 whole internets in your apartment) too.</p>
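That number is easy to sanity-check: a /48 leaves 80 host bits, and an entire IPv4 internet is 2<sup>32</sup> addresses.

```python
# A /48 leaves 128 - 48 = 80 host bits; a whole IPv4 internet
# is 2**32 addresses.
addresses_in_48 = 2 ** 80
ipv4_internets = addresses_in_48 // 2 ** 32
print(ipv4_internets)  # 281474976710656, i.e. 2**48
assert ipv4_internets == 2 ** 48 == 281474976710656
```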
<p>But what’s really remarkable about Init7 isn’t the technical aspect (though,
again, it’s bloody amazing), but everything else:</p>
<ul>
<li>Init7 was one of the first ISPs in Switzerland to offer IPv6 to end users.</li>
<li>Init7 doesn’t just <a href="http://www.init7.net/en/about/sozial">support network neutrality</a>.
They actively <a href="http://webapp.sonntagszeitung.ch/read/sz_29_06_2014/gesellschaft/Gegen-das-Goliath-Gehabe-8963">fight for it</a></li>
<li>They <a href="https://fiber7.ch/fiber7-technologie/fiber7-tripleplay/">explicitly state</a>
that they are not selling content and they don’t intend to start doing so. They are just an ISP and as such their motivations totally align with mine.</li>
</ul>
<p>There are a lot of geeky soft factors too:</p>
<ul>
<li>Their press releases are written in Open Office (check the PDF properties
of <a href="https://fiber7.ch/documents/20/20140908_111tage_fiber7-medienmitteilung_final.pdf">this one</a>
for example)</li>
<li>I got an email from a technical person on their end that was written using
f’ing <a href="http://www.claws-mail.org/">Claws Mail</a> on Linux</li>
<li>Judging from the <code class="highlighter-rouge">Received</code> headers of their email, they are using IPv6 in
their internal LAN - down to the desktop workstations. And related to that:</li>
<li>The machines in their LAN respond to ICMPv6 pings which is utterly crazy
cool. Yes. They are firewalled (<em>cough</em> I had to try. Sorry.), but they let
ICMP through. For the not as technical readers here: This is as good an
internet citizen as you will ever see and it’s extremely unexpected these
days.</li>
</ul>
<p>If you are a geek like me and if your ideals align with the ones I listed
above, there is no question: You have to support them. If you can have their
Fiber offering in your area, this is a no-brainer. You can’t get synchronous
1GBit/s for CHF 64ish per month anywhere else and even if you did, it
wouldn’t be plain Ethernet either.</p>
<p>If you can’t have their fiber offering, it’s still worth considering their
other offers. They do have some DSL based plans which of course are
technically inferior to plain ethernet over fiber, but you would still
support one of the few remaining pure ISPs.</p>
<p>It doesn’t have to be Init7 either. For all I know there are many others,
maybe even here in Switzerland. Init7 is what I decided to go with initially
because of the Gbit, but the more I learned about their philosophy, the less
important the bandwidth got.</p>
<p>We need to support companies like these because companies like these are
what ensures that the internet of the future will be as awesome as the
internet is today.</p>
<h1 id="thoughts-on-ipv6">Thoughts on IPv6</h1>
<p>2014-07-14 · http://pilif.github.com/2014/07/ipv6</p>
<p>A few months ago, the awesome provider Init7 released their
<a href="https://www.fiber7.ch/">awesome FTTH offering Fiber7</a> which provides
synchronous 1GBit/s access for a very fair price. Actually, they are by
far the cheapest provider for this kind of bandwidth.</p>
<p>Only cablecom comes close at matching them bandwidth wise with their 250Mbits
package, but that’s 4 times less bandwidth for nearly double the price. Init7
also is one of the only providers who <a href="https://www.fiber7.ch/fiber7-technologie/fiber7-tripleplay/">officially states</a> that
their triple-play strategy is that they don’t do it. <em>Huge-ass</em> kudos for
that.</p>
<p>Also, their technical support is using Claws Mail on GNU/Linux - to give you
some indication of the geek-heaven you get when signing up with them.</p>
<p>But what’s really exciting about Init7 is their support for IPv6. In-fact,
Init7 was one of the first (if not <em>the</em> first) providers to offer IPv6 for
end users. Also, we’re talking about a real, non-tunneled, no strings attached
plain /48.</p>
<p>In case that doesn’t ring a bell, a /48 will allow for 2<sup>16</sup> networks
consisting of 2<sup>64</sup> hosts each. Yes. That’s <em>that many</em> hosts.</p>
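What that /48 looks like in practice can be sketched with Python’s <code class="highlighter-rouge">ipaddress</code> module (the 2001:db8::/48 prefix is a documentation-prefix placeholder, not the one Init7 hands out):

```python
import ipaddress

prefix = ipaddress.ip_network("2001:db8::/48")  # placeholder prefix

# 2**(64 - 48) == 65536 possible /64 subnets...
assert 2 ** (64 - 48) == 65536

# ...each of which holds 2**64 interface addresses.
first = next(prefix.subnets(new_prefix=64))
print(first)  # 2001:db8::/64
assert first.num_addresses == 2 ** 64
```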
<p>In eager anticipation of getting this at home natively (of course I ordered
Fiber7 the moment I could at my place), I decided to play with IPv6 as far as
I could with my current provider, which apparently lives in the stone-age and
still doesn’t provide native v6 support.</p>
<p>After getting abysmal pings using 6to4 about a year ago, this time I decided
to go with <a href="https://tunnelbroker.net">tunnelbroker</a> which these days also
provides a nice dyndns-alike API for updating the public tunnel endpoint.</p>
<p>Let me tell you: Setting this up is trivial.</p>
<p>Tunnelbroker provides you with all the information you need for your tunnel
and with the prefix of the /64 you get from them and setting up for your own
network is trivial using <code class="highlighter-rouge">radvd</code>.</p>
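For reference, a minimal <code class="highlighter-rouge">radvd.conf</code> along those lines might look like the sketch below - the interface name and the prefix are placeholders; the real prefix is the /64 assigned by tunnelbroker:

```conf
interface eth0
{
    AdvSendAdvert on;

    # announce the /64 from the tunnel broker (placeholder value)
    prefix 2001:db8:1234:abcd::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```

With that in place, hosts on the LAN pick their own addresses via stateless autoconfiguration - no per-host configuration needed.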
<p>The only thing that’s different from your old v4 config: All your hosts will
immediately be accessible from the public internet, so you might want to
configure a firewall from the get-go - but see later for some thoughts in that
matter.</p>
<p>But this isn’t any different from the NAT solutions we have currently. Instead
of configuring port forwarding, you just open ports on your router, but the
process is more or less the same.</p>
<p>If you need direct connectivity however, you can now have it. No strings attached.</p>
<p>So far, I’ve used devices running iOS 7 and 8, Mac OS X 10.9 and 10.10,
Windows XP, 7 and 8 and none of them had any trouble reaching the v6 internet.
Also, I would argue that configuring <code class="highlighter-rouge">radvd</code> is easier than configuring DHCP.
There’s less thought involved for assigning addresses because
autoconfiguration will just deal with that.</p>
<p>For me, I had to adjust how I’m thinking about my network for a bit and I’m
posting here in order to explain what change you’ll get with v6 and how some
paradigms change. Once you’ve accepted these changes, using v6 is trivial and
totally something you can get used to.</p>
<ul>
<li>Multi-homing (multiple addresses per interface) was something you rarely
did in v4. Now in v6, you do it all the time. Your OSes go as far as to
grab a new random one every few connections in order to provide a means of
privacy.</li>
<li>The addresses are so long and hex-y - you probably will never remember them.
But that’s ok. In general, there are much fewer cases where you worry about
the address.
<ul>
<li>Because of multi-homing every machine has a guaranteed static address
(built from the MAC address of the interface) by default, so there’s no
need to statically assign addresses in many cases.</li>
<li>If you want to assign static addresses, just pick any in your /64.
Unless you manually hand out the same address to two machines,
autoconfiguration will make sure no two machines pick the same address.
In order to remember them, feel free to use cute names - finally you got
some letters and leetspeak to play with.</li>
<li>To assign a static address, just do it on the host in question. Again,
autoconfig will make sure no other machine gets the same address.</li>
</ul>
</li>
<li>And with Zeroconf (avahi / bonjour), you have fewer and fewer opportunities
to deal with anything that’s not a host-name anyway.</li>
<li>You will need a firewall because suddenly all your machines will be
accessible for the whole internet. You might get away with just the local
personal firewall, but you probably should have one on your gateway.</li>
<li>While that sounds like higher complexity, I would argue that the complexity
is lower because if you were a responsible sysadmin, you were dealing with
<em>both</em> NAT <em>and</em> a firewall whereas with v6, a firewall is all you need.</li>
<li>Tools like nat-pmp or upnp don’t support v6 yet as far as I can see, so
applications in the trusted network can’t yet punch holes in the firewall
(what is the equivalent thing to forwarding ports in the v4 days).</li>
</ul>
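The «guaranteed static address built from the MAC address» mentioned above is the modified EUI-64 scheme: flip the universal/local bit in the first MAC byte and insert <code class="highlighter-rouge">ff:fe</code> in the middle. A sketch (the MAC address is a made-up example):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier from a MAC address."""
    b = [int(octet, 16) for octet in mac.split(":")]
    b[0] ^= 0x02                        # flip the universal/local bit
    iid = b[:3] + [0xFF, 0xFE] + b[3:]  # insert ff:fe between OUI and NIC parts
    return ":".join("%02x%02x" % (iid[i], iid[i + 1]) for i in range(0, 8, 2))

print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # 021a:2bff:fe3c:4d5e
```

Prepend the /64 prefix announced by your router and you have the stable address autoconfiguration assigns to that host.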
<p>Overall, getting v6 running is really simple and once you adjust your mindset
a bit, while stuff is unusual and taking some getting-used-to, I really don’t
see v6 as being more complicated. Quite to the contrary actually.</p>
<p>As I’m thinking about firewalls and opening ports: as hosts get
wiser about v6, you really might get away without a strict firewall,
as hosts could grab a new random v6 address for every connection they
want to use and then just bind their servers to that address.</p>
<p>Services binding to all addresses would never bind to these temporary addresses.</p>
<p>That way none of the services brought up by default (you know - all those
ports open on your machine when it runs) would be reachable from the outside.
What would be reachable is the temporary addresses grabbed by specific
services running on your machine.</p>
<p>Yes. An attacker could port-scan your /64 and try to find the non-temporary
address, but keep in mind that finding that one address out of 2<sup>64</sup>
addresses would mean that you have to port-scan 4 billion traditional v4
internets per attack target (good luck) or randomly guessing with an average
chance of 1:2<sup>63</sup> (also good luck).</p>
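The numbers in that paragraph check out; a quick sketch of the arithmetic:

```python
v4_internet = 2 ** 32   # size of the whole IPv4 address space
slash64 = 2 ** 64       # addresses in a single /64

# Scanning a /64 exhaustively means scanning this many whole
# IPv4-internet-sized ranges ("4 billion"):
print(slash64 // v4_internet)  # 4294967296
assert slash64 // v4_internet == 2 ** 32

# Uniform random guessing finds one fixed address after about
# 2**63 tries on average:
assert slash64 // 2 == 2 ** 63
```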
<p>Even then a personal firewall could block all unsolicited packets from
non-local prefixes to provide even more security.</p>
<p>As such, we really might get away without actually needing a firewall at the
gateway to begin with, which would go a long way toward providing the
ubiquitous configuration-free p2p connectivity that would be ever-so-cool and
which we have lost over the last few decades.</p>
<p>Me personally, I’m really happy to see how simple v6 actually is to get
implemented and I’m really looking forward to my very own native /48 which I’m
probably going to get somewhere in September/October-ish.</p>
<p>Until then, I’ll gladly play with my tunneled /64 (for now still firewalled,
but I’ll investigate how OS X and Windows deal with the temporary
addresses they use, which might allow me to actually turn the firewall off).</p>
pdo_pgsql improvements2014-03-28T00:00:00+00:00http://pilif.github.com/2014/03/pdo_pgsql-improvements<p>Last autumn, I was <a href="/2013/09/pdo-pgsql-needs-love">talking about</a> how I would
like to see pdo_pgsql for PHP to be improved.</p>
<p>Over the last few days I had time to seriously start looking into making sure
I get my wish. Even though my C is very rusty and I have next to no
experience in dealing with the PHP/Zend API, I made quite a bit of progress
over the last few days.</p>
<p>First, JSON support</p>
<p><img src="/assets/images/json.png" alt="screenshot showing off json support" /></p>
<p>If you have the json extension enabled in your PHP install (it’s enabled by
default), then any column of data type <code class="highlighter-rouge">json</code> will be automatically parsed
and returned to you as an array.</p>
<p>No need to constantly repeat yourself with <code class="highlighter-rouge">json_decode()</code>. This works, of
course, with directly selected json columns or with any expression that
returns json (like <code class="highlighter-rouge">array_to_json</code> or the direct typecast shown in the
screenshot).</p>
<p>This is off by default and can be enabled on a per-connection or a
per-statement level so as not to break backwards compatibility (I’ll need it off
until I get a chance to clean up PopScan, for example).</p>
<p>Next, array support:</p>
<p><img src="/assets/images/array.png" alt="screenshot showing off array support" /></p>
<p>Just like with JSON, this will automatically turn any array expression (of the
built-in array types) into an array to use from PHP.</p>
<p>As I’m writing this blog entry here, this only works for <code class="highlighter-rouge">text[]</code> and it’s
always enabled.</p>
<p>Once I have an elegant way to deal with the various types of arrays and
convert them into the correct PHP types, I’ll work on making this
turnoffable (technical term) too.</p>
<p>I’ll probably combine this and the automatic JSON parsing into just one
setting which will include various extended data types both Postgres and PHP
know about.</p>
<p>Once I’ve done that, I’ll look into more points on my wishlist (better error
reporting with 9.3 and later and a way to quote identifiers comes to mind) and
then I’ll probably try to write a proper RFC and propose this for inclusion
into PHP itself (though don’t get your hopes up - they are a conservative
bunch).</p>
<p>If you want to follow along with my work, have a look at my
<a href="https://github.com/pilif/php-src/tree/pdo_pgsql-improvements">pdo_pgsql-improvements</a>
branch on github (tracks to PHP-5.5)</p>
Ansible2014-03-05T00:00:00+00:00http://pilif.github.com/2014/03/ansible<p>In the summer of 2012, I had the great opportunity to clean up our hosting
infrastructure. Instead of running many differently configured VMs, mostly one
per customer, we started building a real redundant infrastructure with two
<em>really</em> beefy physical database machines (yay) and quite a few (22) virtual
machines for caching, web app servers, file servers and so on.</p>
<p>All components are fully redundant, every box can fail without anybody really
needing to do anything (one exception is the database - that’s also redundant,
but we fail over manually due to the huge cost in time to failback).</p>
<p>Of course you don’t manage ~20 machines manually any more: Aside from the fact
that it would be really painful for those that have to be configured in an
identical way (the app servers come to mind), you also want to be able to
quickly bring a new box online which means you don’t have time to manually go
through the hassle of configuring it.</p>
<p>So, in the summer of 2012, when we started working on this, we decided to go
with <a href="http://puppetlabs.com/">puppet</a>. We also considered Chef, but their server
was really complicated to set up and install and there was zero incentive for
them to improve because that would, after all, disincentivize people from
becoming customers of their hosted solutions (the joys of open-core).</p>
<p>Puppet is also commercially backed, but everything they do is available as open
source and their approach for the central server is much more «batteries
included» than what Chef has provided.</p>
<p>And finally, after playing around a bit with both Chef and puppet, we noticed
that puppet was way more bitchy and less tolerant of quick hacks around issues
which felt like a good thing for people dabbling with HA configuration of a
multi machine cluster for the first time.</p>
<p>Fast forward one year: Last autumn I found out about
<a href="https://github.com/ansible/ansible/">ansible</a> (linking to their github page -
their website reads like a competition in buzzword-bingo) and after reading
their documentation, I was immediately convinced:</p>
<ul>
<li>No need to install an agent on managed machines</li>
<li>Trivial to bootstrap machines (due to above point)</li>
<li>Contributors don’t need to sign a CLA (thank you so much, ansibleworks!)</li>
<li>No need to manually define dependencies of tasks: Tasks are run sequentially</li>
<li>Built-in support for <a href="http://en.wikipedia.org/wiki/Cowsay">cowsay</a> by default</li>
<li>Many often-used modules included by default, no hunting for, say, a <code class="highlighter-rouge">sysctl</code>
module on github</li>
<li>Very nice support for rolling updates</li>
<li>Also providing a means to quickly do one-off tasks</li>
<li>Very easy to make configuration entries based on the host inventory (which requires puppetdb and an external database in the case of puppet)</li>
</ul>
<p>Because ansible connects to each machine individually via SSH, running it
against a full cluster of machines is going to take a bit longer than with
puppet, but our cluster is small, so that wasn’t that much of a deterrent.</p>
<p>So last Sunday evening I started working on porting our configuration over from
puppet to Ansible and after getting used to the YAML syntax of the playbooks, I
made very quick progress.</p>
<p><img src="/assets/images/ansible.png" alt="progress" /></p>
<p>Again, I’d like to point out the excellent, built-in, on-by-default support for
cowsay as one of the killer-features that made me seriously consider starting
the porting effort.</p>
<p>Unfortunately though, after a very promising start, I had to come to the
conclusion that we will be sticking with puppet for the time being because
there’s one single feature that Ansible doesn’t have and that I really, really
want a configuration management system to have:</p>
<p><em>It’s not possible in Ansible to tell it to keep a directory clean of files not
managed by Ansible in some way</em></p>
<p>There are, of course, workarounds, but they come at a price too high for me to
be willing to pay.</p>
<ul>
<li>
<p>You could first clean the directory completely using a shell command, but this
will lead to Ansible detecting a change to that folder every time it runs, which
will cause service restarts even when they are not needed.</p>
</li>
<li>
<p>You could do something like <a href="http://stackoverflow.com/questions/16385507/ansible-delete-unmanaged-files-from-directory">this stack overflow question</a>
but this has the disadvantage that it forces you into a configuration file
specific playbook design instead of a role specific one.</p>
</li>
</ul>
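<p>The stack-overflow-style workaround boils down to something like this hypothetical sketch (file and variable names made up; syntax as of the Ansible of that era). Note how the task list needs a complete <code class="highlighter-rouge">managed_sysctl_files</code> list, i.e. it has to know about every role that ever touches the directory:</p>

```yaml
# hypothetical cleanup tasks - this is the workaround, not a built-in feature
- name: list files currently present in /etc/sysctl.d
  shell: ls -1 /etc/sysctl.d
  register: sysctl_files
  changed_when: false          # listing alone is not a change

- name: remove files that are not on the managed list
  file: path=/etc/sysctl.d/{{ item }} state=absent
  with_items: sysctl_files.stdout_lines
  when: item not in managed_sysctl_files
```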
<p>What I mean is that using the second workaround, you can only have one playbook
touching that folder. But imagine for example a case where you want to work with
<code class="highlighter-rouge">/etc/sysctl.d</code>: A generic role would put some stuff there, but then your
firewall role might put more stuff there (to enable ip forwarding) and your
database role might want to add other stuff (like tweaking shmmax and shmall,
though that’s thankfully not needed any more in current Postgres releases).</p>
<p>So suddenly your <code class="highlighter-rouge">/etc/sysctl.d</code> role needs to know about firewalls and
databases which totally violates the really nice separation of concerns between
roles. Instead of having a firewall and a database role both doing something to
<code class="highlighter-rouge">/etc/sysctl.d</code>, you now need a sysctl role which does different things
depending on what other roles a machine has.</p>
<p>Or, of course, you just don’t care that stray files never get removed, but
honestly: Do you really want to live with the fact that your <code class="highlighter-rouge">/etc/sysctl.d</code>, or
worse, <code class="highlighter-rouge">/etc/sudoers.d</code> can contain files not managed by ansible and likely not
intended to be there? Both sysctl.d and sudoers.d are more than capable of doing
immense damage to your boxes - and doing so sneakily, behind the back of your
configuration management system.</p>
<p>For me that’s unacceptable.</p>
<p>So despite all the nice advantages (like cowsay), this one feature is something
that I really need and can’t have right now and which, thus, forces me to stay
away from Ansible for now.</p>
<p>It’s a shame.</p>
<p>Some people tell me that implementing my feature would require puppet’s feature
of building a full state of a machine before doing anything (which is
error-prone and frustrating for users at times), but that’s not really true.</p>
<p>If ansible modules could talk to each other - maybe loosely coupled by firing
some events as they do stuff - you could just name the task that makes sure the
directory exists first and then have that task register some kind of event
handler to be notified as other tasks touch the directory.</p>
<p>Then, at the end, remove everything you didn’t get an event for.</p>
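<p>The event idea sketched in (entirely hypothetical) code: tasks report which files they touched, and a final sweep removes everything that nobody claimed. Python stands in for Ansible’s internals here, which I don’t know:</p>

```python
import os
import tempfile

def reconcile(directory, managed_tasks):
    """Sketch of the proposed 'purge unmanaged files' behaviour:
    every task reports which file names it manages; anything else goes."""
    touched = set()
    for task in managed_tasks:
        touched.update(task(directory))
    for name in os.listdir(directory):
        if name not in touched:
            os.remove(os.path.join(directory, name))

# toy tasks standing in for the firewall / database roles
def firewall_task(d):
    open(os.path.join(d, "30-forwarding.conf"), "w").close()
    return {"30-forwarding.conf"}

def db_task(d):
    open(os.path.join(d, "40-shm.conf"), "w").close()
    return {"40-shm.conf"}

d = tempfile.mkdtemp()
open(os.path.join(d, "99-stray.conf"), "w").close()   # not managed by anything
reconcile(d, [firewall_task, db_task])
print(sorted(os.listdir(d)))
```

<p>The stray file is gone while the roles stay nicely separated - no single role needs to know what the others put into the directory.</p>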
<p>Yes. This would probably (I don’t know how Ansible is implemented internally)
mess with the decoupling of modules a bit, but it would be <em>so far removed</em>
from re-implementing puppet.</p>
<p>Which is why I’m posting this here - maybe, just maybe, somebody reads my plight
and can bring up a discussion and maybe even a solution for this. Trust me: I’d
so much rather use Ansible than puppet, it’s crazy, but I also want to make sure
that no stray file in <code class="highlighter-rouge">/etc/sysctl.d</code> will bring down a machine.</p>
<p>Yeah. This is probably the most words I’ve ever used for a feature request, but
this one is really, really important for me which is why I’m so passionate about
this. Ansible got <em>so f’ing much</em> right. It’s such a shame to still be left
unable to really use it.</p>
<p>Is this a case of <a href="https://xkcd.com/1172/">xkcd1172</a>? Maybe, but to me, my
request seems reasonable. It’s not? Enlighten me! It is? Great! Let’s work on
fixing this.</p>
the new (2013) MacPro2014-02-19T00:00:00+00:00http://pilif.github.com/2014/02/mac-pro<p>Like many others, I couldn’t wait for Apple to finally upgrade their MacPro and like many others, when they could finally be ordered, I queued up to get mine.</p>
<p>Last Monday, after two months of wait, the package finally arrived and I could start playing with it. I have to say: The thing is very impressive.</p>
<p>The hardware itself is very lightweight and compact. Compared to the old aluminium MacPro it was replacing, it felt even smaller than it is. Also, the box is nearly silent - so silent in fact, that now the hum of the dimmed background light in my old 30” Cinema Display is louder than the machine itself.</p>
<p>Speaking of that 30” display: It’s using a dual-link DVI port. That means a <a href="http://store.apple.com/us/product/MB571Z/A/mini-displayport-to-dual-link-dvi-adapter?fnode=51">special adapter</a> is required to connect it to the new Thunderbolt ports - at least if you want to use a higher resolution than 1280x800 (which you definitely <em>do</em>).</p>
<p>The adapter is kinda difficult to get, especially as I totally forgot about it and I really wanted to migrate to the new machine, so I had to look through local retail (only the one from Apple was even remotely available) as opposed to Amazon (three other models available, some cheaper).</p>
<p>The device is huge by the way. I’m sure there’s some electronics in there (especially when you consider that you have to plug it into a USB port for power), probably to convert the full 2560x1600 image sent over Thunderbolt into the two links a dual-link DVI display needs, only to be reassembled in the display, I guess.</p>
<p>The fact that there obviously is processing going on leaves a bit of a bad taste as it’s one more component that could now break and, of course, there might be display lag or quality degradation.</p>
<p>At some point there certainly was, if the adapter’s reviews are to be believed, but so far I wasn’t able to notice bad quality or lag. Still, the fact that now there’s one more active component involved in bringing me a picture makes me just a tad bit nervous.</p>
<p>Anyways - let’s talk about some more pleasant things.</p>
<p>One is the WiFi: With the old MacPro I had a peak transfer rate of about 3 MBytes/s which was just barely good enough for me not to want to go through the trouble of laying cable, even though it really pissed me off at times.</p>
<p>On the new Pro, I reached 18 MBytes/s over the exact same WiFi yesterday which removes any need to ever consider installing a physical cable. Very handy. Remember: It’s not a file server, it doesn’t run a torrent client, it doesn’t serve movies to my home network. The really large bulk transfers it does are mainly caused by Steam which clearly is the bottleneck here (it never manages to saturate my 150MBit/s downstream).</p>
<p>Another thing that really surprises me is the sleeping behavior of the box. Well, actually, the waking-up behavior: When asleep, the thing wakes up instantly (less than a second) - never in my life have I seen a computer wake up from sleep this quickly.</p>
<p>Yes. I’m waiting for the fan to spin down and all audible noise to go away, but still. Hit any key on the keyboard and the machine’s back. We’re talking “waking an iphone from sleep” speeds here.</p>
<p>It might be that the machine has multiple levels of sleep states, but the instant wake-up also happens after sleeping for over 12 hours at which point a deeper sleep would totally make sense if there was any.</p>
<p>What is strange though: I seem to be able to wake the machine by pinging it. Yes, I know about the bonjour proxy, but in this case, I’m pinging it directly by IP and it wakes up (the first ping has a roundtrip time of 500ish ms - yes, it wakes THAT quickly).</p>
<p>This leads me to believe that the machine might not actually be sleeping for real though because waking from a direct ping requires quite a bit more technology than waking from a WOL packet.</p>
<p>Someday, I’ll play with <code class="highlighter-rouge">tcpdump</code> to learn what’s going on here.</p>
<p>Performance-wise, I haven’t done that much testing, but replaying a test Postgres database dump that takes 5ish minutes on a 2012 retina MacBook Pro completes in 1:12 minutes on the pro - pretty impressive.</p>
<p>And one last thing: When you get a machine as powerful as this, there’s of course also the wish of playing a game or two on it. As I had one SSD dedicated to Bootcamp in the old Pro, I was curious whether I might be able to keep this setup: The built-in flash drive dedicated to MacOS and Windows on its own (the old one) dedicated SSD.</p>
<p>Now that we don’t have internal drive bays any more, this might seem tricky, but yesterday, I managed to install Windows 8 nicely on that SSD after connecting it via Thunderbolt using <a href="http://www.amazon.com/Seagate-GoFlex-Thunderbolt-Adapter-STAE121/dp/B006P1QWOQ">this adapter</a> (no affiliate code - I got the link straight from google).</p>
<p>I guess the fact that it’s using Thunderbolt makes Windows think it’s a built-in hard drive which is what makes this work: You’re not allowed to install Windows on a portable drive due to licensing issues.</p>
<p>The adapter is not actually intended for use with arbitrary drives (it’s an accessory to some Seagate portable drives), but it works totally well and is (physically) stable enough. I’ll have to do a bit of benchmarking to see how much performance I lose compared to the old built-in solution, but it certainly doesn’t feel any slower.</p>
<p>Overall, I’m really happy with my new toy. Yes, it’s <s>probably</s> overpowered for my needs, but it’s also cool as hell, it is the first MacPro I own where sleep works reliably (though I’m inclined to say that it works suspiciously well - it might be cheating) and the fact that bootcamp still works with a dedicated external drive makes me really happy too.</p>
sensational ag is hiring2013-10-14T00:00:00+00:00http://pilif.github.com/2013/10/sensational-needs-people<p><a href="http://www.sensational.ch">Sensational AG</a> is the company I founded
together with a colleague back in 2000. Ever since then, we’ve had a very
nice combination of fun, interesting work and a very successful
business.</p>
<p>We’re a very small team - just four programmers, one business guy and a
product designer. Me personally, I would love to keep the team as small
and tightly-knit as possible as that brings huge advantages: No
politics, a lot of freedoms for everybody and mind-blowing
productivity.</p>
<p>I’m still amazed to see what we pull off with our small team time
and time again while still keeping the job fun. It’s not just
the stuff we do outside of immediate work, like UT2004 matches,
<a href="/2009/03/double-blind-cola-test/">Cola Double Blind Tests</a>, sometimes
hosting JSZurich and much more - it’s also the work itself that we try
to make as fun as possible for everybody.</p>
<p>Sure - sometimes, work just has to be done, but we try as much as
possible to distribute the fun parts of the work between everybody.
Nobody has to be a pure code monkey; nobody constantly pulls the
“change this logo there for the customer” card (though, that card
certainly exists to be pulled - we just try to distribute it).</p>
<p>Most of the work we do flows into our big eCommerce project: Whenever
you order food in a restaurant here in Switzerland, if the restaurant
is big enough for the food to be delivered to them, the stuff
you eat will have been ordered using our product.</p>
<p>Whenever you visit a dentist, the things they put in your mouth likely
have been ordered using our product.</p>
<p>The work we do helps countless people daily to get their job done more
quickly allowing them to go home earlier. The work we do is essential
for the operations of many, many companies here in Switzerland, in
Germany and in Austria.</p>
<p>From a technical perspective, the work we do is very interesting too:
While the main part of the application is a web application, there are
many components around it: Barcode Scanners, native Smartphone
applications and our very own highly available cluster (real, physical
hardware) that hosts the application for the majority of our customers.</p>
<p>When you work with us, you will have to deal with any of</p>
<ul>
<li>The huge home-grown PHP (5.5)-application (its history goes back to
2004 and it has changed version control systems three times so
far - from CVS to SVN to git)</li>
<li>Backend Jobs written in PHP, Ruby and JavaScript</li>
<li>Frontend-Code written in JavaScript and Coffee Script (heck, we were
using XmlHttpRequest before using it was called AJAX)</li>
<li>Software for Barcode-Scanners written in C#, C and soon whatever
we’d like to use on Android</li>
<li>Software to read data from USB barcode Scanners written in
Objective-C and Delphi of all things</li>
<li>Puppet to manage our cluster of 5 physical and about 25 virtual
machines (though if only I knew about Ansible when we started this)</li>
<li>git where all our code lives, some on github, some on our own server.</li>
<li>PostgreSQL which is our database of choice (constantly updated to be
able to play with the latest and greatest toys^Wfeatures) and I love it
so much that no day goes by where I don’t try to convert people over
:-)</li>
<li>Ubuntu Linux which is the underlying OS of our whole server
infrastructure.</li>
</ul>
<p>The platform we develop on is everybody’s own choice. Everybody
here uses Macs, but whatever you are most productive with is what you
use.</p>
<p>All of the code that we work with daily is home-grown (minus some
libraries, of course). We control all of it and we get to play with
all the components of the big thing. There’s no component we can’t change,
though we certainly prefer changing some over others :-)</p>
<p>Between the Cola tests and the technical versatility and challenges
described above: If I can interest <em>you</em>, dear reader, to join our
crazy productive team in order to improve one hell of a suite of
applications, then now is your chance to join up: We need more people
to join our team of developers.</p>
<p>You’d have to be able to live with the huge chunk of PHP code though,
as that’s too big to migrate away from no matter how much we’d all
love to, but aside from that, choose your battles in any of the above
list of technologies.</p>
<p>Also, if your particular problem is better solved in $LANGUAGE of your
choice, feel free to just do it. Part of the secret behind our
productivity is that we know our tools and know when to use them. Good
code is readable in any language (though I’d have to brush up my Lisp
if you chose to go that route).</p>
<p>Interested? I would love to hear from you at
<a href="mailto:phofstetter@sensational.ch">phofstetter@sensational.ch</a>.</p>
pdo_pgsql needs some love2013-09-09T00:00:00+00:00http://pilif.github.com/2013/09/pdo-pgsql-needs-love<p>Today, <a href="http://www.postgresql.org/about/news/1481/">PostgreSQL 9.3 was released</a>.
September is always the month of PostgreSQL as every September a new
major release with awesome new features comes out, and every September
I have to fight the urge to run and immediately update the production
systems to the new version of my
<a href="/2009/02/all-time-favourite-tools-update/">favorite</a> <a href="/2004/06/all-time-favourite-tools/">toy</a>.</p>
<p>As every year, I want to thank the awesome guys (and girls, I hope) who
make PostgreSQL one of my favorite pieces of software overall and for
certain my most favorite database system.</p>
<p>That said, there’s another aspect of PostgreSQL that needs some serious
love: While back in the day, PHP was known for its robust database
client libraries, over time other language environments have caught up
and long since surpassed what’s possible in PHP.</p>
<p>To be honest, the PostgreSQL client libraries as they are currently
available in PHP are in serious need of some love.</p>
<p>If you want to connect to a PostgreSQL database, you have two options:
Either you use the thin wrapper over libpq, the <a href="http://www.php.net/pgsql">pgsql extension</a>,
or you go PDO at which point, you’d use <a href="http://www.php.net/pdo_pgsql">pdo_pgsql</a></p>
<p>Both solutions are, unfortunately, quite inadequate solutions that fail
to expose most of the awesomeness that is PostgreSQL to the user:</p>
<h2 id="pgsql">pgsql</h2>
<p>On the positive side, being a small wrapper around libpq, the pgsql
extension knows quite a bit about Postgres’ internals: It has excellent
support for COPY, it knows about a result set’s data types (but doesn’t
use that knowledge, as you’ll see below), it has <code class="highlighter-rouge">pg_quote_identifier</code>
to correctly quote identifiers, it supports asynchronous queries and it
supports NOTIFY.</p>
<p>But, while pgsql knows a lot about Postgres’ specifics, to this day,
the <code class="highlighter-rouge">pg_fetch_*</code> functions convert all columns into strings. Numeric
types? String. Dates? String. Booleans? Yes. String too (‘t’ or ‘f’,
both truthy values to PHP).</p>
<p>To this day, while the extension supports prepared statements, their
use is terribly inconvenient, forcing you to name your statements and
to manually free them.</p>
<p>To this day, the <code class="highlighter-rouge">pg_fetch_*</code> functions load the whole result set into
an internal buffer, making it impossible to stream results out to the
client using an iterator pattern. Well. Of course it’s still possible,
but you waste the memory for that internal buffer, forcing you to
manually play with DECLARE CURSOR and friends.</p>
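<p>For reference, the manual dance that <code class="highlighter-rouge">DECLARE CURSOR</code> forces on you looks roughly like this (plain SQL; table name made up for illustration):</p>

```sql
BEGIN;
DECLARE big_rows CURSOR FOR SELECT * FROM measurements;
-- fetch a window at a time instead of buffering the whole result:
FETCH 1000 FROM big_rows;  -- repeat until an empty result comes back
CLOSE big_rows;
COMMIT;
```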
<p>There is zero support for advanced data types in Postgres and the
library doesn’t help at all with today’s best practices for accessing a
database (prepared statements).</p>
<p>There are other things that make the extension impractical for me, but
they are not the extension’s fault, so I won’t spend any time explaining
them here (like the lack of support by newrelic - but, as I said,
that’s not the extension’s fault).</p>
<h2 id="pdo_pgsql">pdo_pgsql</h2>
<p>pdo_pgsql gets a lot of stuff right that the pgsql extension doesn’t:
It doesn’t read the whole result set into memory, it knows a bit about
data types, preserving numbers and booleans and, being a PDO driver, it
follows the generic PDO paradigms, giving a unified API with other PDO
modules.</p>
<p>It also has good support for prepared statements (not perfect, but
that’s PDO’s fault).</p>
<p>But it also has some warts:</p>
<ul>
<li>There’s no way to safely quote an identifier. Yes. That’s a PDO
shortcoming, but still. It should be there.</li>
<li>While it knows about numbers and booleans, it doesn’t know about any of the other more advanced data types.</li>
<li>Getting metadata about a query result actually makes it query the
database - once per column, even though the information is right there
in libpq, available to use (look at the
<a href="https://github.com/php/php-src/blob/master/ext/pdo_pgsql/pgsql_statement.c#L571">source</a>
of <code class="highlighter-rouge">PDOStatement::getColumnMeta</code>). This makes it impossible to fix above issue in userland.</li>
<li>It has zero support for COPY</li>
</ul>
<h2 id="if-only">If only</h2>
<p>Imagine the joy of having a pdo_pgsql that actually cares about
Postgres. Imagine how selecting a JSON column would give you its data
already decoded, ready to use in userland (or at least an option to).</p>
<p>Imagine how selecting dates would at least give you the option of
getting them as a <code class="highlighter-rouge">DateTime</code> (there’s loss of precision though -
Postgres’ <code class="highlighter-rouge">TIMESTAMP</code> has more precision than <code class="highlighter-rouge">DateTime</code>)</p>
<p>Imagine how selecting an array type in postgres would actually give you
back an array in PHP. The string that you have to deal with now is
notoriously hard to parse. Yes. There now is <code class="highlighter-rouge">array_to_json</code> in
Postgres, but that shouldn’t be needed.</p>
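<p>To make the “notoriously hard to parse” claim concrete, here is a minimal parser sketch (Python for brevity - the PHP userland version has to do exactly the same work) that only handles one-dimensional arrays with quoting and backslash escapes; NULLs and multidimensional arrays are deliberately left out:</p>

```python
def parse_pg_text_array(literal):
    """Parse a one-dimensional Postgres text[] literal like '{a,b,"c,d"}'.

    Deliberately incomplete - just enough to show why doing this in
    userland is painful.
    """
    if not (literal.startswith("{") and literal.endswith("}")):
        raise ValueError("not an array literal")
    body = literal[1:-1]
    if body == "":
        return []
    out, buf, i, in_quotes = [], [], 0, False
    while i < len(body):
        c = body[i]
        if in_quotes:
            if c == "\\":          # backslash escapes the next character
                buf.append(body[i + 1])
                i += 2
                continue
            elif c == '"':
                in_quotes = False
            else:
                buf.append(c)
        elif c == '"':
            in_quotes = True
        elif c == ",":             # unquoted comma separates elements
            out.append("".join(buf))
            buf = []
        else:
            buf.append(c)
        i += 1
    out.append("".join(buf))
    return out

print(parse_pg_text_array('{a,b,"c,d"}'))  # ['a', 'b', 'c,d']
```

<p>And that’s before worrying about element types, NULLs or nested arrays - all of which the driver could handle for us, since libpq already tells it the column’s type OID.</p>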
<p>Imagine how selecting a HSTORE would give you an associative array.</p>
<p>Imagine using COPY with pdo_pgsql for very quickly moving bulk data.</p>
<p>Imagine the new features of <code class="highlighter-rouge">PGResult</code> being exposed to userland.
Giving applications the ability to detect what constraint was just
violated (very handy to detect whether it’s safe to retry).</p>
<p>Wouldn’t that be fun? Wouldn’t that save us from having to type so much
boilerplate all day?</p>
<p>Honestly, what I think should happen is somebody should create a
<code class="highlighter-rouge">pdo_pgsql2</code> that breaks backwards compatibility and adds all these
features.</p>
<p>Have <code class="highlighter-rouge">getColumnMeta</code> just return the OID instead of querying the
database. Have a <code class="highlighter-rouge">quoteIdentifier</code> method (yes. That should be in PDO
itself, but let’s fix it where we can).</p>
<p>Have <code class="highlighter-rouge">fetch()</code> return Arrays or Objects for JSON columns. Have it
return Arrays for arrays and HSTOREs. Have it optionally return
<code class="highlighter-rouge">DateTime</code>s instead of strings.</p>
<p>Wouldn’t that be great?</p>
<p>Unfortunately, while I can write <em>some</em> C, I’m not nearly good enough
to produce something that I could live with other people using, so any
progress I can achieve will be slow.</p>
<p>I’m also unsure of whether this would ever have a chance to land in PHP
itself. Internals are very averse to adding new features to stuff that
already “works” and no matter how good the proposal, you need a very
thick skin if you want to ever get something merged, no matter whether
you can actually offer patches or not.</p>
<p>Would people be using an external pdo_pgsql2? Would it have a chance as
a pecl extension? Do other people see a need for this? Is somebody
willing to help me? I really think something needs to be done and I’m
willing to get my hands dirty - I just have my doubts about the quality
of the result I’m capable of producing. But I can certainly try.</p>
<p>And I will.</p>
when in doubt - SSL2013-09-05T00:00:00+00:00http://pilif.github.com/2013/09/when-in-doubt-ssl<p>Since 2006, as part of our product, we are offering barcode scanners
with GSM support to either send orders directly to the vendor or to
transmit products into the web frontend where you can further edit them.</p>
<p>Even though the devices (Windows Mobile. Crappy. In the process of
being updated) do support WiFi, we really only support GSM because that means we don’t have to share the end user’s infrastructure.</p>
<p>This is a huge plus because it means that no matter how locked-down the
customer’s infrastructure, no matter how crappy the proxy, no matter the IDS in use, we’ll always be able to communicate with our server.</p>
<p>Until, of course, the mobile carrier most used by our customers decides
to add a “transparent” (do note the quotes) proxy to the mix.</p>
<p>We were quite stumped last week when we got reports of an HTTP error 408 being shown by the mobile devices, especially because we were not seeing any 408s in our logs.</p>
<p>Worse, using <code class="highlighter-rouge">tcpdump</code> clearly showed that we were getting an RST
packet from the client, sometimes before sending data, sometimes while
sending data.</p>
<p>Strange: The client is showing a 408, the server is seeing an RST from the client.
Doesn’t make sense.</p>
<p>Tethering my Mac using the iPhone’s personal hotspot feature and a SIM
card of the mobile provider in question made it clear: No longer are we
talking directly to our server. No. What the client receives is a 408
<a href="/assets/stuff/408.txt">HTML-formatted error message</a> from a proxy server.</p>
<p>Do note the “DELETE THIS LINE” and “your organization here” comments.
What a nice touch. Somebody really spent a lot of time getting
this up and running.</p>
<p>Now granted, taking 20 seconds before being able to produce a response
is a bit on the longer side, but unfortunately, some versions of the
scanner software require gzip compression and our compression step
processes the body in one go, so we have to prepare the full response
(40 megs uncompressed) before being able to send anything - that just
takes a while.</p>
<p>But consider long-polling or <a href="http://dev.w3.org/html5/eventsource/">server-sent events</a> - receiving a 408 after
just 20 seconds? That’s annoying, wastes resources and is probably not
something you’re prepared for.</p>
<p>Worse, nobody was notified of this change. For 7 years, the clients
were able to connect directly to our server. Then one day it changes
and now they aren’t. No communication, no time to prepare and
<em>certainly</em> limits too strict not to affect anything (not
just us - see my remark about long polling).</p>
<p>The solution in the end is, like so often, to use SSL. SSL connections
are opaque to any intermediate proxy. A proxy can’t decrypt the data without
the client noticing. An SSL connection can’t be inspected and an SSL
connection can’t be messed with.</p>
<p>Sure enough: The exact same request that fails with that 408 over HTTP
goes through nicely using HTTPS.</p>
<p>This trick works every time somebody is messing with your
connection. Something f’ing up your WebSocket connection? Use SSL!
Something messing with your long-polling? Use SSL. Something
decompressing your response but not stripping off the Content-Encoding
header (yes, that happened to me once)? Use SSL. Something replacing
arbitrary numbers in your response with asterisks (yep, happened too)?
You guessed it: Use SSL.</p>
<p>Of course, there are three things to keep in mind:</p>
<ol>
<li>
<p>Due to the lack of SNI in the world’s most used OS and Browser
combination (any IE under Windows XP), every SSL site you host requires
one dedicated IP address. Which is bad considering that we are running
out of addresses.</p>
</li>
<li>
<p>All of the bigger mobile carriers have their CA in the browsers’
trusted list. Aside from ethics, there is no reason whatsoever for them
not to start doing all the crap I described anyway, re-encrypting the
connection and faking a certificate using their trusted ones.</p>
</li>
<li>
<p>Failing that, they still might just block SSL at some point, but as
more and more sites are going SSL only (partially for the above reasons, no
doubt), outright blocking SSL is becoming more and more unlikely.</p>
</li>
</ol>
<p>So. Yes. When in doubt: Use SSL. Not only does that help your users’
privacy, it also fixes a ton of technical issues caused by
unauthorized third parties messing with your connection.</p>
how to accept SSL client certificates2013-07-12T00:00:00+00:00http://pilif.github.com/2013/07/how-to-accept-ssl-client-certificates<p>Yesterday I was asked on twitter how you would use client certificates
on a web server in order to do user authentication.</p>
<p>Client certificates are very handy in a controlled environment and they
work really well to authenticate API requests. They are, however,
completely <a href="/2008/05/why-is-nobody-using-ssl-client-certificates/">unusable for normal people</a>.</p>
<p>Getting meaningful information from client side certificates is
something that’s happening as part of the SSL connection setup, so it
must be happening on whatever piece of your stack that terminates the
client’s SSL connection.</p>
<p>In this article I’m going to look into doing this with nginx and Apache
(both traditional frontend web servers) and in node.js which you might
be using in a setup where clients talk directly to your application.</p>
<p>In all cases, what you will need is a means for signing certificates in
order to ensure that only client certificates you signed get access to
your server.</p>
<p>In my use cases, I’m usually using openssl, which comes with some
subcommands and helper scripts to run as a certificate authority. On the
Mac, if you prefer a GUI, you can use Keychain Access, which has all you
need in the “Certificate Assistant” submenu of the application menu.</p>
<p>Next, you will need the public key of your users. You can have them
send in a traditional CSR and sign that on the command line (use
<code class="highlighter-rouge">openssl req</code> to create the CSR, use <code class="highlighter-rouge">openssl ca</code> to sign it), or you
can have them submit an HTML form using the <code class="highlighter-rouge">&lt;keygen&gt;</code> tag (yes. that
exists. Read up on it on <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/keygen">MDN</a>
for example).</p>
<p>You absolutely <em>never ever in your lifetime</em> want the private key of
the user. Do not generate a keypair for the user. Have them generate a
key and a CSR, but never ever have them send the key to you. You only
need their CSR (which contains their public key, signed by their
private key) in order to sign their public key.</p>
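<p>As a sketch (hypothetical file names, and using a plain self-signed CA with <code class="highlighter-rouge">openssl x509 -req</code> rather than a full <code class="highlighter-rouge">openssl ca</code> setup), the whole signing dance could look like this:</p>

```shell
# 1. You (once): create the CA key and a self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=Example CA" -days 3650

# 2. The user: generates their own key and a CSR - the key never leaves them
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
  -subj "/CN=alice"

# 3. You: sign the CSR with the CA key, producing the client certificate
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 365

# 4. Anyone: verify that the client certificate chains to your CA
openssl verify -CAfile ca.crt client.crt
```

<p>Only <code class="highlighter-rouge">client.csr</code> ever travels from the user to you, and only <code class="highlighter-rouge">client.crt</code> travels back.</p>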
<p>Ok. So let’s assume you got all of that out of the way. What you have now is
your CA’s certificate (usually self-signed) and a few users who now
own certificates you have signed for them.</p>
<p>Now let’s make use of this (I’m assuming you know reasonably well how
to configure these web servers in general. I’m only going into the
client certificate details).</p>
<h3 id="nginx">nginx</h3>
<p>For nginx, make sure you have enabled SSL using the usual steps. In
addition to these, set <code class="highlighter-rouge">ssl_client_certificate</code>
(<a href="http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate">docs</a>)
to the path of your CA’s certificate. nginx will only accept client
certificates that have been signed by whatever <code class="highlighter-rouge">ssl_client_certificate</code>
you have configured.</p>
<p>Furthermore, set <code class="highlighter-rouge">ssl_verify_client</code>
(<a href="http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_verify_client">docs</a>)
to <code class="highlighter-rouge">on</code>. Now only requests that provide a client certificate signed by
above CA will be allowed to access your server.</p>
<p>When doing so, nginx will set a few additional variables for you to
use, among them <code class="highlighter-rouge">$ssl_client_cert</code> (full certificate),
<code class="highlighter-rouge">$ssl_client_s_dn</code> (the subject name of the client certificate),
<code class="highlighter-rouge">$ssl_client_serial</code> (the serial number your CA has issued for their
certificate) and most importantly <code class="highlighter-rouge">$ssl_client_verify</code> which you should
check for <code class="highlighter-rouge">SUCCESS</code>.</p>
<p>Use <code class="highlighter-rouge">fastcgi_param</code> or <code class="highlighter-rouge">add_header</code> to pass these variables through to
your application (in the case of add_header, make sure it was
really nginx that set the header and not a client faking it).</p>
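<p>Putting the nginx side together, a minimal sketch of the relevant <code class="highlighter-rouge">server</code> block might look like this (paths and header names are made up for illustration):</p>

```nginx
server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/ssl/server.crt;
    ssl_certificate_key     /etc/nginx/ssl/server.key;

    # only accept client certificates signed by this CA
    ssl_client_certificate  /etc/nginx/ssl/ca.crt;
    ssl_verify_client       on;

    location / {
        # pass the verified values on to the backend application
        proxy_set_header X-SSL-Verify  $ssl_client_verify;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Serial  $ssl_client_serial;
        proxy_pass http://backend;
    }
}
```

<p>I’m using <code class="highlighter-rouge">proxy_set_header</code> here for a reverse-proxy setup; with FastCGI you’d use <code class="highlighter-rouge">fastcgi_param</code> as mentioned above.</p>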
<p>I’ll talk about what you do with these variables a bit later on.</p>
<h3 id="apache">Apache</h3>
<p>As with nginx, ensure that SSL is enabled. Then set
<code class="highlighter-rouge">SSLCACertificateFile</code> to the path to your CA’s certificate. Then set
<code class="highlighter-rouge">SSLVerifyClient</code> to <code class="highlighter-rouge">require</code>
(<a href="http://httpd.apache.org/docs/2.4/mod/mod_ssl.html">docs</a>).</p>
<p>Apache will also set many variables for you to use in your application.
Most notably <code class="highlighter-rouge">SSL_CLIENT_S_DN</code> (the subject of the client
certificate) and <code class="highlighter-rouge">SSL_CLIENT_M_SERIAL</code> (the serial number your CA has
issued). The full certificate is in <code class="highlighter-rouge">SSL_CLIENT_CERT</code>.</p>
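<p>The equivalent Apache sketch (again with made-up paths; assuming Apache 2.4 with mod_ssl enabled) could look like this:</p>

```apache
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile      /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile   /etc/apache2/ssl/server.key

    # only accept client certificates signed by this CA
    SSLCACertificateFile    /etc/apache2/ssl/ca.crt
    SSLVerifyClient         require
    SSLVerifyDepth          1

    # export SSL_CLIENT_S_DN, SSL_CLIENT_M_SERIAL etc. to the application
    SSLOptions +StdEnvVars
</VirtualHost>
```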
<h3 id="nodejs">node.js</h3>
<p>If you want to handle the whole SSL stuff on your own, here’s an
example in node.js. When you call <code class="highlighter-rouge">http.createServer</code>
(<a href="http://nodejs.org/api/https.html#https_https_createserver_options_requestlistener">docs</a>),
pass in some options. One is <code class="highlighter-rouge">requestCert</code> which you would set to true.
The other is <code class="highlighter-rouge">ca</code> which you should set to an array of strings in PEM
format containing your CA’s certificate.</p>
<p>Then you can check whether the certificate check was successful by
looking at the <code class="highlighter-rouge">client.authorized</code> property of your <code class="highlighter-rouge">request</code> object.</p>
<p>If you want to get more info about the certificate, use
<code class="highlighter-rouge">request.connection.getPeerCertificate()</code>.</p>
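<p>Putting it together, a minimal sketch (the handler and its 401/200 responses are my invention, not canonical; the server wiring is commented out because the file paths are hypothetical):</p>

```javascript
// The handler decides based on req.client.authorized, which node sets
// once the TLS layer has checked the client certificate against our CA:
function handle(req, res) {
  if (!req.client.authorized) {
    res.writeHead(401);
    return res.end('invalid client certificate');
  }
  const cert = req.connection.getPeerCertificate();
  res.writeHead(200);
  res.end('hello ' + cert.subject.CN);
}

// Wiring it up (paths are hypothetical placeholders):
//
// const https = require('https');
// const fs = require('fs');
// const server = https.createServer({
//   key: fs.readFileSync('server.key'),
//   cert: fs.readFileSync('server.crt'),
//   ca: [fs.readFileSync('ca.crt', 'utf8')], // your CA's certificate, PEM
//   requestCert: true,         // ask the client for a certificate
//   rejectUnauthorized: false  // let the handler decide via req.client.authorized
// }, handle);
// server.listen(8443);
```

<p>With <code class="highlighter-rouge">rejectUnauthorized: false</code>, node still verifies the certificate against <code class="highlighter-rouge">ca</code> but leaves the decision to your code; set it to <code class="highlighter-rouge">true</code> to drop unverified connections during the handshake instead.</p>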
<h3 id="what-now">what now?</h3>
<p>Once you have the information about the client certificate (via
fastcgi, reverse proxy headers or apache variables in your module),
then the question is what you are going to do with that information.</p>
<p>Generally, you’d probably couple the certificate’s subject and its
serial number with some user account and then use the subject and
serial as a key to look up the user data.</p>
<p>As people get new certificates issued (because they might expire), the
subject name will stay the same, but the serial number will change, so
depending on your use-case use one or both.</p>
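<p>In code, such a lookup is trivial. A sketch (the header names and the user store are made up; they depend on how your front end passes the values through):</p>

```javascript
// Map "subject DN + serial" to an account; return null for anything
// that did not pass certificate verification in the front end server.
function lookupUser(headers, users) {
  if (headers['x-ssl-verify'] !== 'SUCCESS') return null;
  const key = headers['x-ssl-subject'] + '/' + headers['x-ssl-serial'];
  return users[key] || null;
}
```

<p>Key on the subject alone instead if a re-issued certificate (new serial, same subject) should keep its account.</p>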
<p>There are a couple of things to keep in mind though:</p>
<ul>
<li>Due to a flaw in the SSL protocol which was <a href="http://www.educatedguesswork.org/2009/11/understanding_the_tls_renegoti.html">discovered in 2009</a>,
you cannot safely have only parts of your site require a certificate.
With most client libraries, this is an all-or-nothing deal. There is
a secure renegotiation, but I don’t think it’s widely supported at
the moment.</li>
<li>There is no notion of signing out. The clients have to present their
certificate, so your clients will always be signed on (which might
be a good thing for your use-case)</li>
<li>The UI in traditional browsers to handle this kind of thing is
<a href="/2008/05/why-is-nobody-using-ssl-client-certificates/">absolutely horrendous</a>.
I would recommend using this only for APIs or with managed devices
where the client certificate can be preinstalled silently.</li>
</ul>
<p>You do however gain a very good method for uniquely identifying
connecting clients without a lot of additional protocol overhead. The
SSL negotiation isn’t much different whether the client is presenting a
certificate or not. There’s no additional application level code
needed. Your web server can do everything that’s needed.</p>
<p>Also, there’s no need for you to store any sensitive information. No
more leaked passwords, no more fear of leaking passwords. You just
store whatever information you need from the certificate and make sure
they are properly signed by your CA.</p>
<p>As long as you don’t lose your CA’s private key, you can absolutely
trust your clients, and no matter how much data attackers get when they
break into your web server, they won’t get passwords, nor the ability
to log in as any user.</p>
<p>Conversely though, make sure that you keep your CA private key
absolutely safe. Once you lose it, you will have to invalidate all
client certificates and your users will have to go through the process
of generating new CSRs, sending them to you and so on. Terribly
inconvenient.</p>
<p>In the same vein: Don’t have your CA certificate expire too soon. If it
does expire, you’ll have the same issue at hand as if you lost your
private key. Very annoying. I learned <em>that</em> the hard way back in
2001ish and that was only for internal use.</p>
<p>If you need to revoke a user’s access, either blacklist their serial
number in your application or, much better, set up a proper CRL for
your certificate authority and have your web server check that.</p>
<p>So. Client certificates can be a useful tool in some situations. It’s
your job to know when, but at least now you have some hints to get you
going.</p>
<p>Me personally, I was using this once around 2009ish for a REST
API, but I have since replaced that with OAuth because that’s what most
of the users knew best (read: “at all”). Depending on the audience,
client certificates might be totally foreign to them.</p>
<p>But if it works for you, perfect.</p>
why I don't touch crypto2013-07-11T00:00:00+00:00http://pilif.github.com/2013/07/why-I-dont-touch-crypto<p>When doing our work as programmers, we screw up. Small bugs, big bugs,
laziness - the possibilities are endless.</p>
<p>Usually, when we screw up, we know that immediately: We get a failing
test, we get an exception logged somewhere, or we hear from our users
that such and such feature doesn’t work.</p>
<p>Also, most of the time, no matter how bad the bug, the issue can be
worked around and the application keeps working overall.</p>
<p>Once you found the bug, you fix it and everybody is happy.</p>
<p>But imagine you had one of these off-by-one errors in your code (those
that constantly happen to all of us) and further imagine that the
function containing the error still apparently produced the same
output as if the error weren’t there.</p>
<p>Imagine that because of that error the apparently correct-looking
output is completely useless and your whole application is now
utterly broken.</p>
<p>That’s crypto for you.</p>
<p>Crypto can’t be a «bit broken». It can’t be «mostly working». Either
it’s 100% correct, or you shouldn’t have bothered doing it at all. The
weakest link breaks the whole chain.</p>
<p>Worse: looking at the data you are working with doesn’t show any sign
of wrongness. You encrypt something, you see random
data. You decrypt it, you see clear text. Seems to work fine. Right!
Right?</p>
<p>Last week’s <a href="http://nakedsecurity.sophos.com/2013/07/09/anatomy-of-a-pseudorandom-number-generator-visualising-cryptocats-buggy-prng/">issue in the random number generator</a> in Cryptocat is a very good example.</p>
<p>The bug was an off-by-one error in their random number generator. The
output of the function was still random numbers, looking at the output
would clearly show random numbers. Given that fact, the natural bias
for seeing code as being correct is only reinforced.</p>
<p>And yet it was wrong. The bug was there and the random numbers weren’t
really random (enough).</p>
<p>The weakest link was broken, the whole effort in security practically
pointless, which is even worse in this case of an application whose
only purpose is, you know, security.</p>
<p>Security wasn’t just an added feature to some other core functionality.
It <em>was</em> the core functionality.</p>
<p>That small off-by-one error completely broke the whole application
and was completely unnoticeable by just looking at the produced output.
Writing a testcase for this would have required thinking and coding so
complicated that the test would be about as likely to contain an error
as the code being tested.</p>
<p>This, my friends, is why I keep my hands off crypto. I’m just plain not
good enough. Crypto is a world where understanding the concepts,
understanding the math and writing tests just isn’t good enough.</p>
<p>The goal you <em>have</em> to reach is perfection. If you fail to reach that,
then you have failed utterly.</p>
<p>Crypto is something I leave to others to deal with. Either they have
reached perfection at which point they have my utmost respect. Or they
fail at which point they have my understanding.</p>
armchair scientists2012-07-06T00:00:00+00:00http://pilif.github.com/2012/07/armchair-scientists<p>The place: London. The time: Around 1890.</p>
<p>Imagine a medium sized room, lined with huge shelves filled with dusty
books. The lights are dim, the air is heavy with cigar smoke. Outside
the last shred of daylight is fading away.</p>
<p>In one corner of the room, you spot two large leather armchairs and a
small table. On top of the table, two half-full glasses of whiskey. In
each of the armchairs, an elderly person.</p>
<p>One of them opens their mouth to speak:</p>
<blockquote>
<p>«If <em>I</em> were in charge down there in South Africa, we’d be so much
better off - running a colony just can’t be so hard as they make it
out to be»</p>
</blockquote>
<p>Could this conceivably have happened? Yeah. Very likely, actually. Crazy and
misguided? Of course - we learned about that in school,
<a href="http://en.wikipedia.org/wiki/Imperialism">imperialism</a>
<a href="http://en.wikipedia.org/wiki/World_War_I">doesn’t</a>
<a href="http://en.wikipedia.org/wiki/World_War_II">work</a>.</p>
<p>Of course that elderly guy in the little story is wrong. The problems
are way too complex for a bystander to even understand, let alone
solve. More than likely he doesn’t even have a fraction of the
background needed to understand the complexities.</p>
<p>And yet he sits there, in his comfortable chair, in the warmth of his
club in cozy London, explaining that he knows so much better
than, you know, the people actually doing the work.</p>
<p>Now think today.</p>
<p>Think about that article you just read that was explaining a problem
the author was solving. Or that other article that was illustrating a
problem the author is having, still in search of a solution.</p>
<p>Didn’t you feel the urge to go to <a href="http://news.ycombinator.com">Hacker News</a>
and reply how much you know better and how crazy the original poster
must be not to see the obvious simple solution?</p>
<p>Having trouble scaling 4chan? <a href="http://news.ycombinator.com/item?id=4206544">How can <em>that</em> be hard</a>?
Having trouble because your programming environment can’t just assign one string to another?
Well. <a href="https://raw.github.com/candlerb/string19/47b0cba0a2047eca0612b4e24a540f011cf2cac3/soapbox.rb">It’s just strings</a>, why is that so hard?</p>
<p>Or those idiots at Amazon who can’t even keep their cloud service
running? <em>Clearly</em> it can’t be that hard!</p>
<p>See a connection? By stating opinions like that, you are not even a
little bit better than the elderly guy at the beginning of this essay.</p>
<p>Until you know all the facts, until you were there, on the ladder
holding a hose trying to extinguish the flames, until then, you don’t
have the right to assume that you’d do better.</p>
<p>The world we live in is incredibly complicated. Even though computer
science might boil down to math, our job is dominated by side-effects
and uncontrollable external factors.</p>
<p>Even if you think that you know the big picture, you probably won’t
know all the details and without knowing the details, it’s
increasingly likely that you don’t understand the big picture either.</p>
<p>Don’t be an armchair scientist.</p>
<p>Be a scientist. Work with people. Encourage them, discuss solutions,
propose ideas, ask what obvious fact you missed or was missing in the
problem description.</p>
<p>This is 2012, not 1890.</p>
background events2012-05-28T00:00:00+00:00http://pilif.github.com/2012/05/background-events<p>Today is the day that one of the coolest things I had the pleasure to
develop so far in my life has gone live to production use.</p>
<p>One installation of <a href="http://www.popscan.com">PopScan</a> is connected to
a SAP system that had at times really bad performance and yet it
needed to be connected even just to query for price information.</p>
<p>This is a problem because of features like our persistent shopping
basket or the users’ templates which cause a lot of products to be
displayed at once.</p>
<p>Up until now, PopScan synchronously queried for the prices and would
not render any products until all the product data has been assembled.</p>
<p>When you combine this with the sometimes bad performance of that SAP
system, you’ll quickly see unhappy users waiting for the pages to
finally load.</p>
<p>We decided to fix this problem for the users.</p>
<p>Aside from the price, all product data is in PopScan’s database anyway, so
while we need to wait for prices, everything else we could display
immediately.</p>
<p>So that’s what we do now: Whenever we load products and we don’t have a price
yet, we’ll launch a background job which asynchronously retrieves the prices.
The frontend will immediately get the rendered products minus the prices.</p>
<p>But of course, we still need to show the user the fully loaded products once
they become available and this is where the cool server based event framework
comes into play:</p>
<p>The JS client in PopScan now gets notified of arbitrary events that happen
on the server (like “product data loaded”, but also “GPRS scanner data
received”). The cool thing about this is that events are pushed
through practically instantly as they happen on the server, giving the user the immediate
response they want and lessening the load on the server as there’s no
(well, only long-) polling going on.</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">$</span><span class="p">(</span><span class="nx">ServerEvents</span><span class="p">).</span><span class="nx">bind</span><span class="p">(</span><span class="s1">'product-data'</span><span class="p">,</span> <span class="kd">function</span><span class="p">(</span><span class="nx">data</span><span class="p">){</span>
<span class="c1">// product data has changed!</span>
<span class="p">});</span></code></pre></figure>
<p>is all that we need on the client. The rest happens automatically.</p>
<p>Also remember though that PopScan is often used in technology-hostile
enterprise environments. Thus, features like web-sockets are out and in
general, we had to support ancient software all over the place.</p>
<p>We still managed to make it work and today this framework went to production
use for that one customer with the badly performing SAP system.</p>
<p>Over the course of the next few weeks, I might write in detail about how this
stuff works given the constraints (ancient client software behind hostile
firewalls) and what software components we used.</p>
<p>Seeing this work go live fills me with joy: I’ve spent many hours designing
this framework in a fool-proof way in order not to lose events and in order to
gracefully continue working as components in the big picture die.</p>
<p>Now it’s finally live and already contributing to lower waiting times for all
users.</p>
sacy 0.4-beta 12012-04-30T00:00:00+00:00http://pilif.github.com/2012/04/sacy-0.4-beta1<p>I’ve just pushed version <a href="https://github.com/downloads/pilif/sacy/sacy-0.4-beta1.tar.bz2">0.4-beta1</a> of <a href="http://pilif.github.com/sacy">sacy</a>
to its <a href="https://github.com/pilif/sacy">github repository</a>. Aside from requiring
PHP 5.3 now, it also has support for transforming the contents of inline tags.</p>
<p>So if you always wanted to write</p>
<figure class="highlight"><pre><code class="language-html" data-lang="html"><span class="nt"><script </span><span class="na">type=</span><span class="s">"text/coffeescript"</span><span class="nt">></span>
<span class="nx">hello</span> <span class="o">=</span> <span class="p">(</span><span class="nx">a</span><span class="p">)</span><span class="o">-></span>
<span class="nx">alert</span> <span class="s2">"Hello #{a}"</span>
<span class="nx">hello</span> <span class="s2">"World"</span>
<span class="nt"></script></span></code></pre></figure>
<p>and have the transformation done on the server-side, then I have good news
for you: Now you can! Just wrap the script with
<code class="highlighter-rouge"><span class="p">{</span><span class="err">asset_compile</span><span class="p">}</span><span class="err">...</span><span class="p">{</span><span class="err">/asset_compile</span><span class="p">}</span></code>.</p>
<p>I’m not saying that having inline scripts (or even stylesheets) is a good idea,
but sometimes we have to pass data between our HTML templates and the JS
code, and now we can do it in CoffeeScript.</p>
<h4 id="development-note">Development note</h4>
<p>When you take a look at the commits leading to the release, you will notice
that I more or less hacked the support for inline tags into the existing
codebase (changing the terminology from files to work units in the process
though).</p>
<p>Believe me, I didn’t like this.</p>
<p>When I sat down to implement this, what I had in mind was a very nice
architecture where various components just register themselves and then
everything falls into place more or less automatically.</p>
<p>Unfortunately, whatever I did (I used <code class="highlighter-rouge">git checkout .</code> about three times) to
start over, I never got a satisfactory solution:</p>
<ul>
<li>
<p>sometimes, I was producing a ton of objects, dynamically looking up what
methods to call and what classes to instantiate.</p>
<p>This would of course be very clean and cool, but also terribly slow. Sacy
is an embeddable component, not an application in its own right.</p>
</li>
<li>
<p>sometimes, I had a simplified object model that kind of worked right until I
thought of some edge-case at which point we would have either ended up back in
hack-land or the edge-cases would have had to remain unfixed</p>
</li>
<li>
<p>sometimes I had something flexible enough to do what I needed, but it still
had code in it that had to know whether it was dealing with instances of Class
A or Class B, which is as unacceptable as the current array-mess.</p>
</li>
</ul>
<p>In the end, it hit me: Sacy is already incomplete in that it simplifies the
problem domain quite a lot. To cleanly get out of this, I would have to
actually parse and manipulate the DOM instead of dealing with regexes and I
would probably even have to go as far as writing a <code class="highlighter-rouge">FactoryFactory</code> in order
to correctly abstract away the issues.</p>
<p>Think of it: We have a really interesting problem domain here:</p>
<ul>
<li>the same type of asset can use different tags (style and link for
stylesheets)</li>
<li>Different attributes are used to refer to external resources (href for
stylesheets, src for scripts)</li>
<li>File-backed assets can (and should) be combined</li>
<li>Content-backed assets should be transformed and immediately inlined</li>
<li>Depending on the backing (content or file), the assets use a different
method to determine cache-freshness (modification-time/size vs. content)</li>
<li>And last but not least, file based asset caching is done on the client side,
content based asset caching is done on the server-side.</li>
</ul>
<p>Building a nice architecture that would work without the <code class="highlighter-rouge">if</code>s I learned to
hate lately would mean huge levels of indirections and abstractions.</p>
<p>No matter what I tried, I always ended up with a severe case of object-itis and
architecture-itis, both of which I deemed completely unacceptable for a
supposedly small and embeddable library.</p>
<p>Which is why I decided to throw away all my attempts and make one big
compromise and rely on <code class="highlighter-rouge">CacheRenderer::renderWorkUnits</code> to be called with
unified workunits (either all file or all content-based).</p>
<p>That made the backend code a lot easier.</p>
<p>And I could keep the lean <code class="highlighter-rouge">array</code> structure for describing a unit of work to do
for the backend.</p>
<p>I would still, at some point, love to have a nice way for handlers to register
themselves, but that’s something I’ll handle another day. For now, I’m happy
that I could accomplish my goal in a very lean fashion, at the cost of a public
backend interface that is really, really inconvenient to use and thus leaves way too much code in the frontend.</p>
<p>At least I got away without an <code class="highlighter-rouge">AssetFactoryFactory</code> though :-)</p>
My worst mistakes in programming2012-01-13T00:00:00+00:00http://pilif.github.com/2012/01/my-worst-mistakes<p>I’m in the middle of refactoring a big infrastructure piece in our
product <a href="http://www.popscan.com">PopScan</a>. It’s very early code, rarely
touched since its inception in 2004, so I’m dealing mainly with my sins
of the past.</p>
<p>This time like no time before, I’m feeling the two biggest mistakes I
have ever made in designing a program, so I thought I’d make this post
here in order to help others not fall into the same trap.</p>
<p>Remember this: Once you are no longer alone working on your project,
the code you have written sets an example. Mistakes you have made are
copied - either verbatim or in spirit. The design you have chosen
lives on in the code that others write (rightfully so - you should
strive to keep code consistent).</p>
<p>This makes it even more important not to screw up.</p>
<p>Back in 2004, I failed badly in two places.</p>
<ul>
<li>
<p>I chose a completely wrong abstraction in class design, mixing two
things that should be separate.</p>
</li>
<li>
<p>I chose - in a foolhardy wish to save on CPU time - to create a ton
of internal state instead of fetching the data when it’s needed (I
could still have cached it then, but I missed that).</p>
</li>
</ul>
<p>So here’s the story.</p>
<h2 id="one-is-the-architectural-issue">One is the architectural issue.</h2>
<p>Let me tell you, dear reader, should you <em>ever</em> be in the position of
having to do anything even remotely related to an ecommerce solution
dealing with products and orders, repeat after me:</p>
<blockquote>
<p>Product lists are not the same thing as orders. Orders are not the same thing as baskets.</p>
</blockquote>
<p>and even more importantly:</p>
<blockquote>
<p>A product and a line item are two completely different things.</p>
</blockquote>
<p>A line item describes how a specific product is placed in a list, so
at best, a product is contained in a line item. A product doesn’t have
a quantity. A product doesn’t have a total price.</p>
<p>A line item does.</p>
<p>And when we are at it: «quantity» is not a number. It is the entity
that describes the amount of times the product is contained within the
line item. As such, a quantity usually consists of an amount and a
unit. If you change the unit, you change the quantity. If you change
the amount, you change the quantity.</p>
<p>Anyways - sitting down and thinking of the entities in the feature
that you are implementing is an essential part of the work that you
do. Even if it seems “kinda right” at the time, even if it works
“right” for years - once you make a mistake in a bad place, you are
stuck with it.</p>
<p>PopScan is about products and ordering them. Me missing the
distinction between a product and a line item back in 2004 worked fine
until now, but as this is a core component of PopScan, it has grown
the most over the years, more and more intertwining product and line
item functionality to the point of where it’s too late to fix this now
or at least it would require countless hours of work.</p>
<p>Work that will have to be done sooner rather than later. Work that
deeply affects a core component of the product. Work that will change
the API greatly and as such can only be tested for correctness in
integration tests. Unit tests become useless as the units that are
being tested won’t exist any more in the future.</p>
<p>Painful work.</p>
<p>If only I had more time and experience those 8 years ago.</p>
<h2 id="the-other-issue-is-about-state">The other issue is about state</h2>
<p>Let’s say you have a class <code class="highlighter-rouge">FooBar</code> with a property <code class="highlighter-rouge">Foo</code> that is
exposed as part of the public API via a <code class="highlighter-rouge">getFoo</code> method.</p>
<p>That <code class="highlighter-rouge">Foo</code> relies of some external data - let’s call it <code class="highlighter-rouge">foodata</code>.</p>
<p>Now you have two options of dealing with that <code class="highlighter-rouge">foodata</code>:</p>
<ol>
<li>
<p>You could read <code class="highlighter-rouge">foodata</code> into an internal <code class="highlighter-rouge">foo</code> field at
construction time. Then, whenever your <code class="highlighter-rouge">getFoo()</code> is called, you
return the value you stored in <code class="highlighter-rouge">foo</code>.</p>
</li>
<li>
<p>Or you could read nothing until <code class="highlighter-rouge">getFoo()</code> is called and then read
<code class="highlighter-rouge">foodata</code> and return that (optionally caching it for the next call to
<code class="highlighter-rouge">getFoo()</code>)</p>
</li>
</ol>
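<p>A sketch of the two designs (names straight from the example above, with a hypothetical <code class="highlighter-rouge">loadFooData()</code> standing in for the external <code class="highlighter-rouge">foodata</code> read):</p>

```javascript
function loadFooData() { return 42; } // stand-in for an expensive external read

// Design 1: eager - foodata is read at construction time and kept as state
class EagerFooBar {
  constructor() { this.foo = loadFooData(); } // paid even if getFoo() is never called
  getFoo() { return this.foo; }               // and this.foo is state that can drift
}

// Design 2: lazy - foodata is read on first use, then cached
class LazyFooBar {
  getFoo() {
    if (this.foo === undefined) this.foo = loadFooData();
    return this.foo;
  }
}
```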
<p>Choosing the first design for most of the models back in 2004 was the
second biggest coding mistake I have ever made in my life.</p>
<p>Aside from the fact that constructing one of these <code class="highlighter-rouge">FooBar</code> objects
becomes more and more expensive the more stuff you preload (likely
never to be used during the lifetime of the object), you have also
contributed to a huge amount of internal state in the object.</p>
<p>The temptation to write a <code class="highlighter-rouge">getBar()</code> method that has the side effect of
also altering the internal <code class="highlighter-rouge">foo</code> field is just too big. And now you end
up with a <code class="highlighter-rouge">getBar()</code> that suddenly also depends on the internal state
of <code class="highlighter-rouge">foo</code>, which by now is disconnected from the initial <code class="highlighter-rouge">foodata</code>.</p>
<p>Worse, suddenly calling code will see different results depending on
whether it calls <code class="highlighter-rouge">getBar()</code> before it’s calling <code class="highlighter-rouge">getFoo()</code>. Which will
of course lead to code depending on that fact, so fixing it becomes
very hard (but at least caught by unit tests).</p>
<p>Having the internal fields also leads to <code class="highlighter-rouge">FooBar</code>’s implementation
preferring these fields over the public methods, which is totally
fine, as long as <code class="highlighter-rouge">FooBar</code> stands alone.</p>
<p>But the moment there’s a <code class="highlighter-rouge">FooFooBar</code> which inherits from <code class="highlighter-rouge">FooBar</code>, you
lose all the advantages of polymorphism. <code class="highlighter-rouge">FooBar</code>’s implementation will
always only use its own private fields. It’s impossible for <code class="highlighter-rouge">FooFooBar</code>
to affect <code class="highlighter-rouge">FooBar</code>’s implementation, causing the need to override many
more methods than would have been needed if <code class="highlighter-rouge">FooBar</code> used its own
public API.</p>
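<p>A minimal sketch of the problem (again in JavaScript, with hypothetical method names): because the base class reads its own field instead of going through its public API, overriding the accessor in a subclass has no effect on the inherited behaviour.</p>

```javascript
// Hypothetical sketch: the base class prefers its internal field over its
// public getFoo() accessor.
class FooBar {
  constructor() {
    this.foo = 'base foo';
  }
  getFoo() {
    return this.foo;
  }
  describe() {
    // Reads the field directly, so a subclass override of getFoo() is bypassed.
    return 'foo is: ' + this.foo;
  }
}

class FooFooBar extends FooBar {
  getFoo() {
    return 'overridden foo';
  }
}

const f = new FooFooBar();
f.getFoo();   // 'overridden foo'
f.describe(); // still 'foo is: base foo'
```

<p>Had <code class="highlighter-rouge">describe()</code> called <code class="highlighter-rouge">this.getFoo()</code> instead, the subclass would get its say.</p>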
<h2 id="conclusion">Conclusion</h2>
<p>These two mistakes cost us hours and hours of working around our
inability to do what we want. It cost us hours of debugging and it
causes new features to come out much more clunky than they need to be.</p>
<p>I have done so many bad things in my professional life. A <code class="highlighter-rouge">shutdown -h</code>
instead of -r on a remote server. A <code class="highlighter-rouge">mem=512</code> boot parameter (yes.
That number is/was interpreted as bytes. And yes. Linux needs more
than 512 bytes of RAM to boot), an <code class="highlighter-rouge">update</code> without a <code class="highlighter-rouge">where</code> clause -
I’ve screwed up so badly in my life.</p>
<p>But all of this is <em>nothing</em> compared to these two mistakes.</p>
<p>These are not just inconveniencing myself. These are inconveniencing
my coworkers and our customers (because we need more time to implement
features).</p>
<p>Shutting down a server by accident means 30 minutes of downtime at
worst (none since we heavily use VMWare). Screwing up a class design
twice is the gift that keeps on giving.</p>
<p>I’m so sorry for you guys having to put up with <code class="highlighter-rouge">OrderSet</code> of doom.</p>
<p>Sorry guys.</p>
Abusing LiveConnect for fun and profit2011-12-22T00:00:00+00:00http://pilif.github.com/2011/12/grave-digging<p>On december 20th I gave a talk at the JSZurich user group meeting in Zürich.
The talk is about a decade-old technology which can be abused to get full,
unrestricted access to a client machine from JavaScript and HTML.</p>
<p>I was showing how you would script a Java Applet (which is completely hidden
from the user) to do the dirty work for you while you are creating a very nice
user interface using JavaScript and HTML.</p>
<iframe class="youtube-player" type="text/html" width="640" height="385" src="http://www.youtube.com/embed/zOhyjaTkjI4" frameborder="0">
</iframe>
<p>The slides are <a href="http://bit.ly/vUmkZH">available in PDF format</a> too.</p>
<p>While it’s a very cool tech demo, it’s IMHO also a very bad security issue
which browser vendors and Oracle need to have a look at. The user sees nothing
but a dialog like this:</p>
<p><img src="/assets/images/java-prompt.png" alt="security prompt" /></p>
<p>and once they click OK, they are completely owned.</p>
<p>Even worse, while this dialog is showing the case of a valid certificate, the
dialog in case of an invalid (self-signed or expired) certificate isn’t much
different, so users can easily be tricked into clicking allow.</p>
<p>The source code of the demo application is on <a href="https://github.com/pilif/gravedigging">github</a>
and I’ve already written about this on this blog <a href="/2009/04/javascript-and-applet-interaction/">here</a>,
but back then I was mainly interested in getting it to work.</p>
<p>By now though, I’m really concerned about putting an end to this, or at least
increasing the hurdle the end-user has to jump through before this goes off -
maybe force them to click a visible Applet. Or just remove the <a href="http://en.wikipedia.org/wiki/LiveConnect">LiveConnect</a> feature
altogether from browsers, thus forcing applets to be visible.</p>
<p>But aside from the security issues, I still think that this is a very
interesting case of long forgotten technology. If you are interested, do have
a look at the talk and travel back in time to when stuff like this was only
half as scary as it is now.</p>
updated sacy - now with external tools2011-11-09T00:00:00+00:00http://pilif.github.com/2011/11/updated-sacy-again<p>I’ve just updated the <a href="https://github.com/pilif/sacy">sacy repository</a> again and tagged a v0.3-beta1 release.</p>
<p>The main feature since yesterday is support for the official compilers and
tools if you can provide them on the target machine.</p>
<p>The drawback is that these things come with hefty dependencies at times (I
don’t think you’d find a shared hoster willing to install node.js or Ruby for
you), but if you can provide the tools, you can get some really nice
advantages over the PHP ports of the various compilers:</p>
<ul>
<li>
<p>the PHP port of sass has <a href="http://code.google.com/p/phamlp/issues/detail?id=116">an issue</a> that prevents
@import from working. sacy’s build script does patch that, but the way they
were parsing the file names doesn’t inspire confidence in the library. You
might get a more robust solution by using the official tool.</p>
</li>
<li>
<p>uglifier-js is a bit faster than JSMin, produces significantly smaller
output and comes with a better license (JSMin isn’t strictly free software
as it has this “do no evil” clause)</p>
</li>
<li>
<p>coffee script is under very heavy development, so I’d much rather use the
upstream source than some experimental fun project. So far I haven’t seen
issues with coffeescript-php, but then I haven’t been using it much yet.</p>
</li>
</ul>
<p>Absent from the list you’ll find less and css minification:</p>
<ul>
<li>
<p>the PHP native <a href="http://code.google.com/p/cssmin/">CSSMin</a> is really good and
there’s no single official external tool out there that is demonstrably better (maybe
the YUI compressor, but I’m not going to support something that requires me
to deal with Java)</p>
</li>
<li>
<p><a href="http://leafo.net/lessphp/">lessphp</a> is very lightweight and yet very full
featured and very actively developed. It also has a nice advantage over the
native solution in that the currently released native compiler does not
support reading its input from STDIN, so if you want to use the official
less, you have to go with the git HEAD.</p>
</li>
</ul>
<p>Feel free to try this out (and/or send me a patch)!</p>
<p>Oh and by the way: If you want to use uglifier or the original coffee script
and you need node but can’t install it, have a look at the
<a href="http://pilif.github.com/2011/11/node-to-go/">static binary</a> I created</p>
updated sacy - now with more coffee2011-11-08T00:00:00+00:00http://pilif.github.com/2011/11/updated-sacy<p>I’ve just updated the <a href="https://github.com/pilif/sacy">sacy repository</a>
to now also provide support for compiling Coffee Script.</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml">{asset_compile}
<span class="nt"><script</span> <span class="na">type=</span><span class="s">"text/coffeescript"</span> <span class="na">src=</span><span class="s">"/file1.coffee"</span><span class="nt">></script></span>
<span class="nt"><script</span> <span class="na">type=</span><span class="s">"text/javascript"</span> <span class="na">src=</span><span class="s">"/file2.js"</span><span class="nt">></script></span>
{/asset_compile}</code></pre></figure>
<p>will now compile file1.coffee into JS before creating and linking one big chunk of minified JavaScript.</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt"><script</span> <span class="na">type=</span><span class="s">"text/javascript"</span> <span class="na">src=</span><span class="s">"/assetcache/file2-deadbeef1234.js"</span><span class="nt">></script></span></code></pre></figure>
<p>As always, the support is seamless - this is all you have to do.</p>
<p>Again, in order to keep deployment simple, I decided to go with a pure PHP solution (<a href="https://github.com/alxlit/coffeescript-php">coffeescript-php</a>).</p>
<p>I do see some advantages in the native solutions though (performance, better output), so I’m actively looking into a solution to detect the availability of native converters that I could shell out to without having to hit the file system on every request.</p>
<p>Also, when adding the coffee support, I noticed that the architecture of sacy isn’t perfect for doing this transformation stuff. Too much code had to be duplicated between CSS and JavaScript, so I will do a bit of refactoring there.</p>
<p>Once both the support for external tools and the refactoring of the transformation is completed, I’m going to release v0.3, but if you want/need coffee support right now, go ahead and clone
<a href="https://github.com/pilif/sacy">the repository</a>.</p>
node to go2011-11-07T00:00:00+00:00http://pilif.github.com/2011/11/node-to-go<p>Having node.js around on your machine can be very useful - not just if you are
<a href="/tags/tempalias/">building your new fun project</a>, but also for
quite real world applications.</p>
<p>For me it was <a href="http://jashkenas.github.com/coffee-script/">coffee script</a>.</p>
<p>After reading some incredibly beautiful coffee code by <a href="https://twitter.com/brainlock">@brainlock</a>
(work related, so I can’t link the code), I decided that I wanted to use
coffee in PopScan and as such I need coffee support in sacy which handles
asset compilation for us.</p>
<p>This means that I need node.js on the server (sacy is allowing us a very cool
checkout-and-forget deployment without any build-scripts, so I’d like to keep
this going).</p>
<p>On servers we manage, this isn’t an issue, but some customers insist on
hosting PopScan within their DMZ and provide a pre-configured Linux machine
running OS versions that weren’t quite current a decade ago.</p>
<p>Have fun compiling node.js for these: There are so many dependencies to meet
(a recent python for example) to build it - if you even manage to get it to
compile on these ancient C compilers available for these ancient systems.</p>
<p>But I really wanted coffee.</p>
<p>So here you go: Here’s a statically linked (this required a bit of trickery)
binary of node.js v0.4.7 compiled for 32bit Linux. This runs even on an
ancient RedHat Enterprise 3 installation, so I’m quite confident that it runs
everywhere running at least Linux 2.2:</p>
<p><a href="http://www.pilif.ch/node-x86-v0.4.7.bz2" checksum="sha256:142085682187a57f312d095499e7d8b2b7677815c783b3a6751a846f102ac7b9">node-x86-v0.4.7.bz2</a>
(SHA256: 142085682187a57f312d095499e7d8b2b7677815c783b3a6751a846f102ac7b9)</p>
<div class="highlighter-rouge"><pre class="highlight"><code>pilif@miscweb ~ % file node-x86-v0.4.7
node-x86: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, statically linked, for GNU/Linux 2.2.5, not stripped
</code></pre>
</div>
<p>The binary can be placed wherever you want and executed from there - node
doesn’t require any external files (which is very cool).</p>
<p>I’ll update the file from time to time and provide an updated post. 0.4.7 is good enough to run coffee script though.</p>
protecting siri2011-10-31T00:00:00+00:00http://pilif.github.com/2011/10/protecting-siri<p>Over the last weekend, 9to5mac.com <a href="http://9to5mac.com/2011/10/29/siri-hacked-to-fully-run-on-the-iphone-4-and-ipod-touch-iphone-4s-vs-iphone-4-siri-showdown-video-interview/">posted about a hack</a> which shows that it’s possible to run Siri on an iPhone 4 and
an iPod Touch 4g and possibly even older devices - considering how much of Siri
is running on Apple’s servers.</p>
<p>We’ve always suspected that the decision to restrict Siri to the 4S is
basically a marketing decision and I don’t really care about this either.
Nobody is forcing you to use Siri and thus nobody is forcing you to update to
anything.</p>
<p>Siri is Apple’s product and so are the various iPhones. It’s their decision
whom they want to sell what to.</p>
<p>What I find more interesting is that it was even possible to have a hacked
Siri on a non-4S phone talk to Apple’s servers. If I were in Apple’s shoes, I
would have made that (practically) impossible.</p>
<p>And here’s how:</p>
<p>Having a device that you put into users’ hands and trusting it is always a very
hard, if not impossible, thing to do as the device can (more or less) easily be
tampered with.</p>
<p>So to solve this problem, we need some component that we know reasonably well
to be safe from the user’s tampering and we need to find a way for that
component to prove to the server that indeed the component is available and
healthy.</p>
<p>I would do that using public key crypto and specialized hardware that works
like a TPM. So that would be a chip that contains a private key embedded in
hardware, likely not updatable. Also, that private key will never leave that
device. There is no API to read it.</p>
<p>The only API the chip provides is either a relatively high-level API to sign
an arbitrary binary blob or, more likely, a lower level one to encrypt some
small input (a SHA1 hash for example) with the private key.</p>
<p>OK. Now we have that device (also, it’s likely that the iPhone already has
something like that for its secured boot process). What’s next?</p>
<p>Next you make sure that the initial handshake with your servers requires that
device. Have the server post a challenge to the phone. Have the phone solve it
and have the response signed by that crypto device.</p>
<p>On your server, you will have the matching public key. If the signature checks
out, you talk to the device. If not, you don’t.</p>
<p>Now, it is possible using very expensive hardware to extract that key from the
hardware (by opening the chip’s casing and using a microscope and a lot of
skills). If you are really concerned about this, give each device a unique
private key. If a key gets compromised, blacklist it.</p>
<p>This greatly complicates the manufacturing process of course, so you might go
ahead with just one private key per hardware type and hope that cracking the
key will take longer than the lifetime of the hardware (which is very likely).</p>
<p>This isn’t at all specific to Siri of course. Whenever you have to trust a
device that you put into consumers’ hands, this is the way to go and I’m sure
we’ll be seeing more of this in the future (imagine the uses for copy
protection - let’s hope we don’t end up there).</p>
<p>I’m not particularly happy that this is possible, but I’d rather talk about it
than to hope that it’s never going to happen - it will and <a href="/2011/09/asking-for-permission/">I’ll be pissed</a>.</p>
<p>For now I’m just wondering why Apple wasn’t doing it to protect Siri.</p>
A new fun project2011-10-12T00:00:00+00:00http://pilif.github.com/2011/10/new-fun-project<p>Like <a href="http://blip.tv/jsconfeu/by-philip-hofstetter-node-js-in-production-use-tempalias-com-4258344">back in 2010</a> I went to JSConf.eu this year around.</p>
<p>One of the many impressive facts about JSConf is the quality of their Wifi
connection. It’s not just free and stable, it’s also fast. Not only that, this
time around, they had a very cool feature: You authenticated via twitter.</p>
<p>As most of the JS community seems to be having twitter accounts anyways, this
was probably the most convenient solution for everyone: You didn’t have to
deal with creating an account or asking someone for a password and on the
other hand, the organizers could make sure that, if abuse should happen,
they’d know whom to notify.</p>
<p>On a related note: This was in stark contrast to the WiFi I had in the hotel
which was unstable, slow and cost a ton of money to use and it didn’t use
Twitter either :-)</p>
<p>In fact, the twitter thing was so cool to see in practice, that I want to use
it for myself too.</p>
<p>Since the days of the WEP-only Nintendo DS, I’ve been running two WiFi networks at home:
One is WPA protected and for my own use, the other is open, but it runs over
a different interface on <a href="/2006/07/computers-under-my-command-issue-1-shion/">shion</a>
which has no access to any other machine in my network. This is even more
important as <a href="/2005/05/lots-of-fun-with-openvpn/">I have a permanent OpenVPN connection</a>
to my office and I definitely don’t want to give the world access to that.</p>
<p>So now the plan would be to change that open network so that it redirects to a
captive portal until the user has authenticated with twitter (I might add
other providers later on - LinkedIn would be <em>awesome</em> for the office for
example).</p>
<p>In order for me to actually get the thing going, I’m doing a tempalias on this
one too and keeping a diary of my work.</p>
<p>So here we go. I really think that every year I should do some fun-project
that’s programming related, can be done on my own and is at least of some use.
<a href="/tags/tempalias/">Last time it was tempalias</a>, this time, it’ll be
<em>Jocotoco</em> (more about the name in the next installment).</p>
<p>But before we take off, let me give, again, huge thanks to the JSConf crew for
the amazing conference they manage to organize year after year. If I could,
I’d already preorder the tickets for next year :p</p>
<p>Attending a JSConf feels like a two-day drug-trip that lasts for at least two
weeks.</p>
E_NOTICE stays off.2011-10-06T00:00:00+00:00http://pilif.github.com/2011/10/e-notice-stays-off<p>I’m sure you’ve used this idiom a lot when writing JavaScript code</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">options</span><span class="p">[</span><span class="s1">'a'</span><span class="p">]</span> <span class="o">=</span> <span class="nx">options</span><span class="p">[</span><span class="s1">'a'</span><span class="p">]</span> <span class="o">||</span> <span class="s1">'foobar'</span><span class="p">;</span></code></pre></figure>
<p>It’s short, it’s concise and it’s clear what it does. In ruby, you can even be more concise:</p>
<figure class="highlight"><pre><code class="language-ruby" data-lang="ruby"><span class="n">params</span><span class="p">[</span><span class="ss">:a</span><span class="p">]</span> <span class="o">||=</span> <span class="s1">'foobar'</span></code></pre></figure>
<p>So you can imagine that I was happy with PHP 5.3’s new ?: operator:</p>
<figure class="highlight"><pre><code class="language-php" data-lang="php"><span class="cp"><?</span> <span class="nv">$options</span><span class="p">[</span><span class="s1">'a'</span><span class="p">]</span> <span class="o">=</span> <span class="nv">$options</span><span class="p">[</span><span class="s1">'a'</span><span class="p">]</span> <span class="o">?:</span> <span class="s1">'foobar'</span><span class="p">;</span> <span class="cp">?></span></code></pre></figure>
<p>In all three cases, the syntax is concise and readable, though arguably, the PHP one could read a bit better, but, ?: still is better than writing the full ternary expression, spelling out <code class="highlighter-rouge">$options['a']</code> three times.</p>
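<p>As an aside - years after this post was written, JavaScript also gained logical assignment operators, which shrink the idiom even further (a small illustrative sketch; none of this existed in 2011):</p>

```javascript
// Logical assignment: assign only when the current value doesn't "count".
const options = {};
options.a ||= 'foobar'; // assigns because options.a is falsy (undefined)
options.b ??= 'foobar'; // assigns only when the value is null or undefined

const other = { a: 0 };
other.a ||= 'foobar'; // 0 is falsy, so ||= replaces it
other.b = 0;
other.b ??= 'foobar'; // 0 is neither null nor undefined, so ??= keeps it
```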
<p><a href="http://www.popscan.com">PopScan</a>, since forever (forever being 2004) runs with E_NOTICE turned off. Back then, I felt it provided just baggage and I just wanted (had to) get things done quickly.</p>
<p>This, of course, led to people not taking enough care of the code and
recently, I had one case too many of a bug caused by accessing a variable that
was undefined in a specific code path.</p>
<p>I decided that I’m willing to spend the effort in cleaning all of this up and
making sure that there are no undeclared fields and variables in all of
PopScan’s codebase.</p>
<p>Which turned out to be quite a bit of work as a lot of code is apparently
happily relying on the default <code class="highlighter-rouge">null</code> that you can read out of undefined
variables. Those instances might be ugly, but they are by no means bugs.</p>
<p>Cases where the <code class="highlighter-rouge">null</code> wouldn’t be expected are the ones I care about, but I
don’t even want to go and discern the two - I’ll just fix all of the instances
(embarrassingly many, most of them, thankfully, not mine).</p>
<p>Of course, if I put hours into a cleanup project like this, I want to be sure
that nobody destroys my work again over time.</p>
<p>Which is why I was looking into running PHP with <code class="highlighter-rouge">E_NOTICE</code> in development
mode at least.</p>
<p>Which brings us back to the introduction.</p>
<figure class="highlight"><pre><code class="language-php" data-lang="php"><span class="cp"><?</span> <span class="nv">$options</span><span class="p">[</span><span class="s1">'a'</span><span class="p">]</span> <span class="o">=</span> <span class="nv">$options</span><span class="p">[</span><span class="s1">'a'</span><span class="p">]</span> <span class="o">?:</span> <span class="s1">'foobar'</span><span class="p">;</span> <span class="cp">?></span></code></pre></figure>
<p>is wrong code. Any accessing of an undefined index of an array always raises a
notice. It’s not like Python where you can choose (accessing a dictionary using
[] will throw a KeyError, but there’s get() which just returns None). No. You
don’t get to choose. You only get to add boilerplate:
<figure class="highlight"><pre><code class="language-php" data-lang="php"><span class="cp"><?</span> <span class="nv">$options</span><span class="p">[</span><span class="s1">'a'</span><span class="p">]</span> <span class="o">=</span> <span class="nb">isset</span><span class="p">(</span><span class="nv">$options</span><span class="p">[</span><span class="s1">'a'</span><span class="p">])</span> <span class="o">?</span> <span class="nv">$options</span><span class="p">[</span><span class="s1">'a'</span><span class="p">]</span> <span class="o">:</span> <span class="s1">'foobar'</span><span class="p">;</span> <span class="cp">?></span></code></pre></figure>
<p>See how I’m now spelling <code class="highlighter-rouge">$options['a']</code> three times again? <code class="highlighter-rouge">?:</code> just got a
whole lot less useful.</p>
<p>But not only that. Let’s say you have code like this:</p>
<figure class="highlight"><pre><code class="language-php" data-lang="php"><span class="cp"><?</span>
<span class="k">list</span><span class="p">(</span><span class="nv">$host</span><span class="p">,</span> <span class="nv">$port</span><span class="p">)</span> <span class="o">=</span> <span class="nb">explode</span><span class="p">(</span><span class="s1">':'</span><span class="p">,</span> <span class="nb">trim</span><span class="p">(</span><span class="nv">$def</span><span class="p">))</span><span class="p">;</span>
<span class="nv">$port</span> <span class="o">=</span> <span class="nv">$port</span> <span class="o">?:</span> <span class="mi">11211</span><span class="p">;</span> <span class="cp">?></span></code></pre></figure>
<p>IMHO very readable and clear what it does: It extracts a host and a port and
sets the port to 11211 if there’s none in the initial string.</p>
<p>This of course won’t work with E_NOTICE enabled. You either lose the very
concise list() syntax, or you do - <em>ugh</em> - this:</p>
<figure class="highlight"><pre><code class="language-php" data-lang="php"><span class="cp"><?</span>
<span class="k">list</span><span class="p">(</span><span class="nv">$host</span><span class="p">,</span> <span class="nv">$port</span><span class="p">)</span> <span class="o">=</span> <span class="nb">explode</span><span class="p">(</span><span class="s1">':'</span><span class="p">,</span> <span class="nb">trim</span><span class="p">(</span><span class="nv">$def</span><span class="p">))</span> <span class="o">+</span> <span class="k">array</span><span class="p">(</span><span class="kc">null</span><span class="p">,</span> <span class="kc">null</span><span class="p">);</span>
<span class="nv">$port</span> <span class="o">=</span> <span class="nv">$port</span> <span class="o">?:</span> <span class="mi">11211</span><span class="p">;</span> <span class="cp">?></span></code></pre></figure>
<p>Which looks ugly as hell. And no, you can’t write a wrapper to explode() which
always returns an array big enough, because you don’t know what’s big enough.
You would have to pass the number of nulls you want into the call too. That
would look nicer than the above hack, but it still doesn’t even come close in
conciseness to the solution which throws a notice.</p>
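<p>For comparison, the same host:port parsing with a default is a one-liner in JavaScript thanks to destructuring defaults - a hypothetical sketch of what the PHP <code class="highlighter-rouge">list()</code> version above is up against:</p>

```javascript
// Hypothetical sketch: the destructuring default fills in the missing port
// without any notice, wrapper, or boilerplate.
function parseHostPort(def) {
  const [host, port = 11211] = def.trim().split(':');
  return { host, port: Number(port) };
}

parseHostPort('cache.example.com:12345'); // { host: 'cache.example.com', port: 12345 }
parseHostPort('cache.example.com');       // { host: 'cache.example.com', port: 11211 }
```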
<p>So. In the end, I’m just complaining about syntax you might think? I thought so too and I wanted to add the syntax I liked, so I did a bit of experimenting.</p>
<p>Here’s a little something I’ve come up with:</p>
<script src="https://gist.github.com/1267568.js?file=e_notice_stays_off.php"><!-- *sigh* thanks, github markdown parser --></script>
<p>The wrapped array solution looks really compelling syntax-wise and I could totally see myself using this and even forcing everybody else to go there. But of course, I didn’t trust PHP’s interpreter and thus benchmarked the thing.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>pilif@tali ~ % php e_notice_stays_off.php
Notices off. Array 100000 iterations took 0.118751s
Notices off. Inline. Array 100000 iterations took 0.044247s
Notices off. Var. Array 100000 iterations took 0.118603s
Wrapped array. 100000 iterations took 0.962119s
Parameter call. 100000 iterations took 0.406003s
Undefined var. 100000 iterations took 0.194525s
</code></pre>
</div>
<p>So. Using nice syntactic sugar costs 7 times the performance. The second best
solution? Still 4 times. Out of the question. Yes. It could be seen as a
micro-optimization, but 100’000 iterations, while a lot, is not <em>that many</em>.
Waiting nearly a second instead of 0.1 second is crazy, especially for a
common operation like this.</p>
<p>Interestingly, the most bloated code (that checks with isset()) is twice as
fast as the most readable (just assign). Likely, the notice gets fired
regardless of error_reporting() and then just ignored later on.</p>
<p>What really pisses me off about this is the fact that everywhere else PHP
doesn’t give a damn. ‘0’ is equal to 0. Heck, even ‘abc’ is equal to 0. It
even fails silently many times.</p>
<p>But in a case like this, where there is even newly added nice and concise
syntax, it has to be anal and bitchy. And there’s no way to get to the needed
solution but to either write too expensive wrappers or ugly boilerplate.</p>
<p>Dynamic languages give us a very useful tool to be dynamic in the APIs we
write. We can create functions that take a dictionary (an array in PHP) of
options. We can extend our objects at runtime by just adding a property. And
with PHP’s (way too) lenient data conversion rules, we can even do math with
user supplied string data.</p>
<p>But can we read data from $_GET without boilerplate? No. Not in PHP. Can we
use a dictionary of optional parameters? Not in PHP. PHP would require
boilerplate.</p>
<p>If a language basically mandates retyping the same expression three times,
then, IMHO, something is broken. And if all the workarounds are either crappy
to read or have very bad runtime properties, then something is terribly
broken.</p>
<p>So, I decided to just fix the problem (undefined variable access) but leave
E_NOTICE where it is (off). There’s always <code class="highlighter-rouge">git blame</code> and I’ll make sure I
will get a beer every time somebody lets another undefined variable slip in.</p>
Asking for permission2011-09-22T00:00:00+00:00http://pilif.github.com/2011/09/asking-for-permission<p>Only just last year, I told <a href="https://twitter.com/brainlock">@brainlock</a>
(in real life, so I can’t link) that the coolest thing about our industry was that
you don’t have to ask for permission to do anything.</p>
<p>Want to start the next big web project? Just start it. Want to write about
your opinions? Just write about them. Want to get famous? It’s still a lot of
work and marketing, but nothing (aside from a lack of talent) is stopping you.</p>
<p>Whenever you have a good idea for a project, you start working on it, you see
how it turns out and you decide whether to continue working on it or whether
to scrap it. Aside from a bit of cash for hosting, you don’t need anything else.</p>
<p>This is very cool because it empowers “normal people”. Heck, I probably
wouldn’t be where I currently am if it wasn’t for this. Back in 1996 I had no
money, I wasn’t known, I had no past experience. What I had though was
enthusiasm.</p>
<p>Which is all that’s needed.</p>
<p>Only a year later though, I’m sad to see that we are at the verge of losing
all of this. Piece by piece.</p>
<p>First was Apple with their iPhone. Even with all the enthusiasm in the world,
you are not going to write an app that other people can run on the phone. No.
First you will have to ask Apple for permission.</p>
<p>Want to access some third-party hardware from that iPhone app? Sure. But now
you have to not only ask Apple, but also the third party vendor for
permission.</p>
<p>The explanation we were given is that a malicious app could easily bring down
the mobile network. Thus they needed to be careful what we could run on our
phones.</p>
<p>But then, we got the iPad with the exact same restrictions even though not all
of them even have mobile network access.</p>
<p>The explanation this time? Security.</p>
<p>As nobody wants their machine to be insecure, everybody just accepts it.</p>
<p>Next came Microsoft: In the Windows Mobile days before the release of 7, you
didn’t have to ask anybody for permission. You bought (or pirated if you
didn’t have money) Visual Studio, you wrote your app, you published it.</p>
<p>All of this is lost now. Now you ask for permission. Now you hope for the
powers that be to allow you to write your software.</p>
<p>Finally, <a href="http://mjg59.dreamwidth.org/5552.html">you can’t even do what you want with your PC</a> - all because of security.</p>
<p>So there’s still the web you think? I wish I could be positive about that, but
as we are running out of IP-addresses and the adoption of IPv6 is slow as
ever, I believe that public IP addresses are becoming a scarce good at which
point, again, you will be asking for permission.</p>
<p>In some countries, even today, it’s not possible to just write a blog post
because the government is afraid of “unrest” (read: losing even more
credibility). That’s not just countries we always perceived as “not free” -
heck, <s>even in Italy you must register with the government if you want to have
a blog</s> (it turns out that law didn’t come to pass - let’s hope no other country
has the same bright idea). In Germany, if you read the law by the letter, you
can’t blog at all without getting every post approved - you could write
something that a minor might see.</p>
<p>«But permission will be granted anyways», you might say. Are you sure though?
What if you are a minor wanting to create an application for your first
client? Back in my days, I could just do it. Are you sure that whatever entity
is going to have to give permission wants to do business with minors? You <em>do</em>
know that you can’t have a Gmail account if you are younger than 13 years,
don’t you? So age barriers exist.</p>
<p>What if your project competes with whatever entity has to give permission?
Remember the <a href="http://www.google.com/search?ie=UTF-8&q=google+voice+iphone+rejection">story about the Google Voice app</a>?
Once we are out of IP addresses, the big providers and media companies who still
have addresses might see your little startup web project as competition in some
way. Are you sure you will still get permission?</p>
<p>Back in 1996 when I started my company in High-School, all you needed to earn
your living was enthusiasm and a PC (yes - I started doing web programming
without having access to the internet).</p>
<p>Now you need signed contracts, signed NDAs, lobbying, developer program
memberships, cash - the barriers to entry are infinitely higher at this point.</p>
<p>I’m afraid though, that this is just the beginning. If we don’t stand up now,
if we continue to let big companies and governments take away our freedom of
expression piece by piece, if we give up more and more of our freedom because
of the false promise of security, then, at one point, all of what we had will
be lost.</p>
<p>We won’t be able to just start our projects. We won’t be able to create - only
to work on other people’s projects. We will lose all that makes our profession
interesting.</p>
<p>Let’s not go there.</p>
<p>Please.</p>
<p><a href="https://news.ycombinator.com/item?id=3025245">Discussion on HackerNews</a></p>
Lion Server authentication issues2011-09-19T00:00:00+00:00http://pilif.github.com/2011/09/lion-password-server<p>Lately I was having an issue with a Lion Server that refused logins of users stored in OpenDirectory. A quick check of <code class="highlighter-rouge">/var/log/opendirectoryd.log</code> revealed an issue with the «Password Server»:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>Module: AppleODClient - unable to send command to Password Server - sendmsg() on socket fd 16 failed: Broken pipe (5205)
</code></pre>
</div>
<p>As this message apparently doesn’t appear on Google yet, there’s my contribution to solving this.</p>
<p>The fix was to kill -9 the kerberos authentication daemon:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>sudo killall kpasswdd
</code></pre>
</div>
<p>which in fact didn’t help (sometimes <a href="http://xkcd.com/149/">even sudo isn’t enough</a>), so I had to be more persuasive to get rid of the apparently badly hanging process:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>sudo killall -9 kpasswdd
</code></pre>
</div>
<p>This time the process was really killed and subsequently instantly restarted by launchd.</p>
<p>After that, the problem went away.</p>
serialize() output is binary data!2011-09-15T00:00:00+00:00http://pilif.github.com/2011/09/serialize-mistake<p>When you call <a href="http://www.php.net/serialize">serialize()</a> in PHP, to serialize a value into something that you store for later use with <a href="http://www.php.net/unserialize">unserialize()</a>, then be very careful what you are doing with that data.</p>
<p>When you look at the output, you’d be tempted to assume that it’s text data:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>php > $a = array('foo' => 'bar');
php > echo serialize($a);
a:1:{s:3:"foo";s:3:"bar";}
php >
</code></pre>
</div>
<p>and as such, you’d be tempted to treat this as text data (i.e. store it in a TEXT column in your database).</p>
<p>But what looks like text on first glance isn’t text data at all. Assume that my terminal is in ISO-8859-1 encoding:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>php > echo serialize(array('foo' => 'bär'));
a:1:{s:3:"foo";s:3:"bär";}
</code></pre>
</div>
<p>and now assume it’s in UTF-8 encoding:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>php > echo serialize(array('foo' => 'bär'));
a:1:{s:3:"foo";s:4:"bär";}
</code></pre>
</div>
<p>You will notice that the format encodes the string’s length together with the string. And because PHP is inherently not unicode capable, it’s not encoding the string’s character length, but its <em>byte-length</em>.</p>
<p>unserialize() checks whether the encoded length matches the actual delimited string’s length. This means that if you treat the serialized output as text and your database’s encoding changes along the way, the retrieved string can’t be unserialized any more.</p>
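<p>This mismatch is easy to demonstrate outside of PHP. The following Python sketch does not use PHP’s serializer itself; it only reimplements its length-prefix rule for a single string to show why a transcoded payload stops unserializing:</p>

```python
# PHP's serialize() prefixes each string with its BYTE length, e.g. s:3:"bär"
# when the source text was ISO-8859-1 (Latin-1). Simulate what happens when
# that output is later transcoded to UTF-8 by a TEXT column.

latin1_payload = 'bär'.encode('latin-1')                    # 3 bytes
serialized = b's:%d:"%s";' % (len(latin1_payload), latin1_payload)
assert serialized == b's:3:"b\xe4r";'

# The database "helpfully" transcodes the whole blob to UTF-8 ...
transcoded = serialized.decode('latin-1').encode('utf-8')

# ... and now the declared length (3) no longer matches the byte length
# of the delimited string (4), which is exactly what unserialize() checks.
declared = int(transcoded.split(b':')[1])
actual = len('bär'.encode('utf-8'))
assert declared == 3 and actual == 4
```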
<p>I just learned that the hard way (even though it’s obvious in hindsight) while migrating <a href="http://www.popscan.ch">PopScan</a> from ISO-8859-1 to UTF-8:</p>
<p>The databases of existing systems now contain a lot of output from serialize() which was run over ISO strings but now that the client-encoding in the database client is set to utf-8, the data will be retrieved as UTF-8 and because the serialize() output was stored in a TEXT column, it happily gets UTF-8 encoded.</p>
<p>If we remove the database from the picture and express the problem in code, this is what’s going on:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>unserialize(utf8encode(serialize('data with 8bit chàracters')));
</code></pre>
</div>
<p>i.e. the data gets altered after serializing, and it gets altered in a way that unserialize() can’t deal with any more.</p>
<p>So, for everybody else not yet in this dead end:</p>
<p>The output of serialize() is <em>binary data</em>. It looks like textual data, but it isn’t. Treat it as binary. If you store it somewhere, make sure that the medium you store it to treats the data as binary. No transformation whatsoever must ever be made on it.</p>
<p>Of course, that leaves you with a problem later on if you switch character sets and you have to unserialize, but at least you get to unserialize then. I have to go great lengths now to salvage the old data.</p>
Another platform change2011-08-03T00:00:00+00:00http://pilif.github.com/2011/08/another-platform-change<p>If you can read this, then it has happened - this blog moved again.</p>
<p>Eons ago in internet time, <a href="/2002/11/welcome/">this project started</a>, and it lasted for 4 years, at which point I got spammed so badly that I had to move away.</p>
<p>So, still ages ago, <a href="/2006/06/new-face-new-engine-new-everything/">I moved to Serendipity</a> which fixed the spam issue for me.</p>
<p>This lasted only two years before <a href="/2008/03/another-new-look/">I moved again</a>, to WordPress this time - the nicer admin tool and the richer theme selection pushed me over the edge.</p>
<p>While I’m still happy with WordPress in general, over time I learned a few things:</p>
<ul>
<li>
<p>while running your own server is fun, having it compromised is not. Using
any well-known blogging engine that relies on server-side generation of
content is ultimately a way to get compromised unless you constantly patch
security issues, taking up a lot of time in the process.</p>
</li>
<li>
<p>While the old name of this blog (gnegg) was a cool pun for people who knew
me, it didn’t at all convey my identity on the internet. Me? I’m
<a href="http://pilif.me">pilif</a>, so this should be conveyed at least by the URL</p>
</li>
<li>
<p>Most of my posts were relying heavily on custom markup, making the WP
WYSIWYG editor more annoying than useful.</p>
</li>
</ul>
<p>So when <a href="https://twitter.com/rmurphey">@rmurphey</a> <a href="http://rmurphey.com/blog/2011/07/25/switching-to-octopress/">blogged about octopress</a> I immediately recognized the huge opportunity <a href="http://octopress.org">Octopress</a> provides:</p>
<p>I can host static files on a server I don’t own and thus don’t have to care about compromising, I can blog using my favorite tools (any text editor and git) and I still get an acceptable layout.</p>
<p>So here we are - at the end of yet another conversion.</p>
<p>While the old URLs already 301 redirect, pictures are still missing and I’ll work on getting them back. The comments I’ll try to port over too, the moment I see how Disqus handles my WordPress export.</p>
<p>The gnegg branding is gone and has been replaced by something that doesn’t look like a name but isn’t while still being a fun pun for people who know me. The tagline, of course, stays the same.</p>
<p>So.</p>
<p>Welcome to my new home and let’s hope this lasts as long as the previous instances!</p>
AJAX, Architecture, Frameworks and Hacks2011-04-13T00:00:00+00:00http://pilif.github.com/2011/04/ajax-architecture-frameworks-and-hacks<p>Today I was talking with <a href="http://twitter.com/brainlock">@brainlock</a> about JavaScript, AJAX and Frameworks and about two paradigms that are in use today:</p>
<p>The first is the “traditional” paradigm where your JS code is just glorified view code. This is how AJAX worked in the early days and how people are still using it. Your JS-code intercepts a click somewhere, sends an AJAX request to the server and gets back either more JS code which just gets evaluated (thus giving the server kind of indirect access to the client DOM) or an HTML fragment which gets inserted at the appropriate spot.</p>
<p>This means that <em>your JS code will be ugly</em> (especially the code coming from the server), but it has the advantage that all your view code is right there where all your controllers and your models are: on the server. You see this pattern in use on the 37signals pages or in the <a href="http://github.com">github</a> file browser for example.</p>
<p>Keep the file browser in mind as I’m going to use that for an example later on.</p>
<p>The other paradigm is to go the other way around and promote JS to a first-class language. Now you build a framework on the client end and transmit only data (XML or JSON, but mostly JSON these days) from the server to the client. The server just provides a REST API for the data plus serves static HTML files. All the view logic lives only on the client side.</p>
<p>The advantages are that you can organize your client side code much better, for example using <a href="http://documentcloud.github.com/backbone/">backbone</a>, that there’s no expensive view rendering on the server side and that you basically get your third party API for free because the API is the only thing the server provides.</p>
<p>This paradigm is used for the new twitter webpage or in my very own <a href="http://tempalias.com">tempalias.com</a>.</p>
<p>Now <a href="http://twitter.com/brainlock">@brainlock</a> is a heavy proponent of the second paradigm. After being enlightened by the great Crockford, we both love JS and we both worked on huge messes of client-side JS code which has grown over the years and lacks structure and feels like copy pasta sometimes. In our defense: Tons of that code was written in the pre-enlightened age (2004).</p>
<p>I on the other hand see some justification for the first pattern as well and I wouldn’t throw it away so quickly.</p>
<p>The main reason: It’s more pragmatic, it’s more DRY once you need graceful degradation and arguably, you can reach your goal a bit faster.</p>
<p>Let me explain by looking at the github file browser:</p>
<p>If you have a browser that supports the HTML5 history API, then a click on a directory will reload the file list via AJAX and at the same time the URL will be updated using push state (so that the current view keeps its absolute URL which is valid even after you open it in a new browser).</p>
<p>If a browser doesn’t support pushState, it will gracefully degrade by just using the traditional link (and reloading the full page).</p>
<p>Let’s map this functionality to the two paradigms.</p>
<p>First the hacky one:</p>
<ol>
<li>You render the full page with the file list using a server-side template</li>
<li>You intercept clicks to the file list. If it's a folder:</li>
<li>you request the new file list</li>
<li>the server now renders the file list partial (in rails terms - basically just the file list part) without the rest of the site</li>
<li>the client gets that HTML code and inserts it in place of the current file list</li>
<li>You patch up the url using push state</li>
</ol>
<p>done. The view code is only on the server. Whether the file list is requested using the AJAX call or the traditional full page load doesn’t matter. The code path is exactly the same. The only difference is that the rest of the page isn’t rendered in case of an AJAX call. You get graceful degradation and no additional work.</p>
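<p>To make the DRY argument concrete, here is a minimal, framework-agnostic sketch of that first paradigm in Python. All names are invented for illustration; a real app would use proper templates and HTML escaping:</p>

```python
# ONE server-side template renders the file list; a full page load and an
# AJAX call share that exact code path, the AJAX call just skips the chrome.

def render_file_list(files):
    items = ''.join('<li>%s</li>' % name for name in files)
    return '<ul class="file-list">%s</ul>' % items

def render_full_page(files):
    # The full page simply embeds the same partial.
    return '<html><body><h1>repo</h1>%s</body></html>' % render_file_list(files)

def handle_request(files, is_ajax):
    # Same view code either way - no second, client-side template to keep in sync.
    return render_file_list(files) if is_ajax else render_full_page(files)

partial = handle_request(['README', 'src'], is_ajax=True)
full = handle_request(['README', 'src'], is_ajax=False)
assert partial in full          # the partial is literally a piece of the full page
assert '<li>README</li>' in partial
```

With the second paradigm, `render_file_list` would exist twice: once on the server for the degraded path and once as a client-side template filled from JSON.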
<p>Now assuming you want to keep graceful degradation possible and you want to go the JS framework route:</p>
<ol>
<li>You render the full page with the file list using a server-side template</li>
<li>You intercept the click to the folder in the file list</li>
<li>You request the JSON representation of the target folder</li>
<li>You use that JSON representation to fill a client-side template which is a copy of the server side partial</li>
<li>You insert that HTML at the place where the file list is</li>
<li>You patch up the URL using push state</li>
</ol>
<p>The amount of steps is the same, but the amount of work isn’t: If you want graceful degradation, then you write the file list template twice: Once as a server-side template, once as a client-side template. Both are quite similar but usually you’ll be forced to use slightly different syntax. If you update one, you have to update the other or the experience will be different whether you click on a link or you open the URL directly.</p>
<p>Also you are duplicating the code which fills that template: On the server side, you use ActiveRecord or whatever other ORM. On the client side, you’d probably use Backbone to do the same thing but now your backend isn’t the database but the JSON response. Now, Backbone is really cool and a huge timesaver, but it’s still more work than not doing it at all.</p>
<p>OK. Then let’s skip graceful degradation and make this a JS only client app (<a href="http://www.google.com/search?ie=UTF-8&q=gawker+redesign">good luck trying to get away with that</a>). Now the view code on the server goes away and you are just left with the model on the server to retrieve the data, with the model on the client (Backbone helps a lot here, but there’s still a substantial amount of code that needs to be written that otherwise wouldn’t) and with the view code on the client.</p>
<p>Now don’t get me wrong.</p>
<p>I <strong>love</strong> the idea of promoting JS to a first class language. I <strong>love</strong> JS frameworks for big JS only applications. I <strong>love</strong> having a “free”, dogfooded-by-design REST API. I <strong>love</strong> building cool architectures.</p>
<p>I’m just thinking that at this point it’s so much work doing it right, that the old ways do have their advantages and that we should not condemn them for being hacky. True. They are. But they are also <em>pragmatic</em>.</p>
DNSSEC to fix the SSL mess?2011-04-07T00:00:00+00:00http://pilif.github.com/2011/04/dnssec-to-clean-the-ssl-mess<p>After <a href="http://codebutler.com/firesheep">Firesheep</a> it has become clear that there’s no way around SSL.</p>
<p>But still many people (and I’m including myself) are unhappy with the fact that to roll out SSL, you basically have to pay a sometimes significant premium for the certificate. And that’s not all: You have to pay the same fee every n years (and while you could say that the CA does some work the first time, every following year, it’s plain sucking money from you) and you have to remember to actually do it unless you want <a href="http://forum.skype.com/index.php?showtopic=784971">embarrassing warnings</a> pop up to your users.</p>
<p>The usual suggestion is to make browsers accept self-signed certificates without complaining, but that doesn’t really work to prevent a Firesheep style attack and is arguably even worse as it would allow not only your session id, but also your password to leak from sites that use the traditional SSL-for-login-HTTP-afterwards mechanism.</p>
<p>See <a href="http://news.ycombinator.com/item?id=2348836">my comment on HackerNews</a> for more details.</p>
<p>To make matters worse, last week news about a CA being compromised and issuing fraudulent (but still trusted) certificates made the rounds, so now even with the current CA based security mechanism, we still can’t completely trust the infrastructure.</p>
<p>Thinking about this, I had an idea.</p>
<p>Let’s assume that one day, one glorious day, DNSSEC will actually be deployed.</p>
<p>If that’s the case, then if I was the owner of gnegg.ch, I could just publish the certificate (or its fingerprint or a link to the certificate over SSL) in the DNS as a TXT record. DNSSEC would ensure that it was the owner of the domain who created the TXT entry and that the domain is the real one and not a faked one.</p>
<p>So if that entry says that gnegg.ch is supposed to serve a certificate with the fingerprint 0xdeadbeef, then a connecting browser would be sure that if the site is serving that certificate (and has the matching private key), then the connection would be secure and not man-in-the-middle’d.</p>
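<p>Assuming a hypothetical TXT record format such as <code class="highlighter-rouge">v=fp1; sha256=hexdigest</code> (the syntax and field names here are my invention, and DNSSEC validation of the answer - the part that makes this trustworthy - is taken as a given), the client-side check would be just a fingerprint comparison. A Python sketch:</p>

```python
import hashlib

# Hypothetical TXT record format: "v=fp1; sha256=<hex of DER cert digest>".
# The record syntax is invented for illustration; the DNS answer itself is
# assumed to have been validated via DNSSEC before we get here.

def fingerprint(der_cert: bytes) -> str:
    return hashlib.sha256(der_cert).hexdigest()

def txt_matches(txt_record: str, der_cert: bytes) -> bool:
    fields = dict(part.strip().split('=', 1)
                  for part in txt_record.split(';'))
    return fields.get('sha256') == fingerprint(der_cert)

cert = b'\x30\x82 pretend this is a DER-encoded certificate'
record = 'v=fp1; sha256=' + fingerprint(cert)
assert txt_matches(record, cert)
assert not txt_matches(record, b'some other certificate')
```

Revoking the key would then indeed be a matter of changing the TXT record: the next lookup makes `txt_matches` fail for the old certificate.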
<p>Even better: If I lose the private key of gnegg.ch, I would just update the TXT record, making the old key useless. No non-working CRL or OCSP. Just one additional DNS query.</p>
<p>And you know what? It would put CAs out of business for signing of site certificates as a self-signed certificate would be as good as an official one (they would still be needed to sign your DNSSEC zone file of course, but that could be done by the TLD owners).</p>
<p>Oh and by the way: I could create my certificate with an incredibly long (if ever) expiration time: If I want the certificate to be invalid, I remove or change the TXT record and I’m done. As simple as that. No more embarrassing warnings. No more fear of missing the deadline.</p>
<p>Now, this feels so incredibly simple that there <strong>must</strong> be something I’m missing. What is it? Is it just that politics is preventing DNSSEC from ever being real? Is there an error in my thinking?</p>
rails, PostgreSQL and the native uuid type2011-03-07T00:00:00+00:00http://pilif.github.com/2011/03/rails-postgresql-and-the-native-uuid-type<p>UUIDs have the very handy property that they are unique, and there are plenty of them for you to use. Also they are difficult to guess: knowing the UUID of one object, it’s very hard to guess a valid UUID of another object.</p>
<p>This makes UUIDs perfect for identifying things in web applications:</p>
<ul>
<li>Even if you shard across multiple machines, each machine can independently generate primary keys without (realistic) fear of overlapping.</li>
<li>You can generate them without using any kind of locks.</li>
<li>Sometimes, you have to expose such keys to the user. If possible, you will of course do authorization checks, but it still makes sense not to let users know about neighboring keys. This gets even more important when you are not able to do authorization checks because the resource you are referring to is public (like a <a href="http://tempalias.com">mail alias</a>) but it should still not be possible to know other items if you know one.</li>
</ul>
<p>Knowing that <a href="http://www.codinghorror.com/blog/2007/03/primary-keys-ids-versus-guids.html">UUIDs are a good thing</a>, you might want to use them in your application (or you just have to in the last case above).</p>
<p>There are multiple recipes out there that show how to do it in a rails application (<a href="http://stackoverflow.com/questions/2487837/uuids-in-rails3">this one for example</a>).</p>
<p>All of these recipes store UUIDs as varchar’s in your database. In general, that’s fine and also the only thing you can do as most databases don’t have a native data type for UUIDs.</p>
<p><a href="http://www.postgresql.org">PostgreSQL</a> on the other hand indeed has a native 128 bit integer type to store UUIDs.</p>
<p>This is more space efficient than storing the UUID in string form (288 bit) and it might be a tad bit faster when doing comparison operations on the database as integer operations (even if they are this big) require a constant amount of operations whereas comparing two string UUIDs is a string comparison which is dependent on the string size and size of the matching parts.</p>
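<p>The size claim is easy to verify with Python’s uuid module (not part of the Rails setup below, just a quick illustration): the textual form takes 36 characters (288 bits), the native form 16 bytes (128 bits):</p>

```python
import uuid

u = uuid.uuid4()

# Textual form: 32 hex digits + 4 dashes = 36 characters = 288 bits ...
assert len(str(u)) == 36

# ... versus the native representation: 16 bytes = 128 bits.
assert len(u.bytes) == 16
assert u.int.bit_length() <= 128

# Both forms identify the same value, so a later migration between
# representations is lossless.
assert uuid.UUID(str(u)) == uuid.UUID(bytes=u.bytes) == u
```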
<p>So maybe for the (minuscule) speed increase or for the purpose of correct semantics or just for interoperability with other applications, you might want to use native PostgreSQL UUIDs from your Rails (or other, but without the abstraction of a “Migration”, just using UUID is trivial) applications.</p>
<p>This already works quite nicely if you generate the columns as strings in your migrations and then manually send an <code>alter table</code> (whenever you restore the schema from scratch).</p>
<p>But if you want to create the column with the correct type directly from the migration and you want the column to be created correctly when using <code>rake db:schema:load</code>, then you need a bit of additional magic, especially if you want to still support other databases.</p>
<p>In my case, I was using PostgreSQL in production (<a href="http://www.gnegg.ch/2004/06/all-time-favourite-tools/">what</a> <a href="http://www.gnegg.ch/2009/02/all-time-favourite-tools-update/">else</a>?), but on my local machine, for the purpose of getting started quickly, I wanted to still be able to use SQLite for development.</p>
<p>In the end, everything boils down to monkey patching the ActiveRecord::ConnectionAdapters::*Adapter classes and PostgreSQLColumn of the same module. So here’s what I’ve added to <code>config/initializers/uuuid_support.rb</code> (Rails 3.0.*):</p>
<figure class="highlight"><pre><code class="language-ruby" data-lang="ruby"><span class="k">module</span> <span class="nn">ActiveRecord</span>
<span class="k">module</span> <span class="nn">ConnectionAdapters</span>
<span class="no">SQLiteAdapter</span><span class="p">.</span><span class="nf">class_eval</span> <span class="k">do</span>
<span class="k">def</span> <span class="nf">native_database_types_with_uuid_support</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">native_database_types_without_uuid_support</span>
<span class="n">a</span><span class="p">[</span><span class="ss">:uuid</span><span class="p">]</span> <span class="o">=</span> <span class="p">{</span><span class="ss">:name</span> <span class="o">=></span> <span class="s1">'varchar'</span><span class="p">,</span> <span class="ss">:limit</span> <span class="o">=></span> <span class="mi">36</span><span class="p">}</span>
<span class="k">return</span> <span class="n">a</span>
<span class="k">end</span>
<span class="n">alias_method_chain</span> <span class="ss">:native_database_types</span><span class="p">,</span> <span class="ss">:uuid_support</span>
<span class="k">end</span> <span class="k">if</span> <span class="no">ActiveRecord</span><span class="o">::</span><span class="no">Base</span><span class="p">.</span><span class="nf">connection</span><span class="p">.</span><span class="nf">adapter_name</span> <span class="o">==</span> <span class="s1">'SQLite'</span>
<span class="k">if</span> <span class="no">ActiveRecord</span><span class="o">::</span><span class="no">Base</span><span class="p">.</span><span class="nf">connection</span><span class="p">.</span><span class="nf">adapter_name</span> <span class="o">==</span> <span class="s1">'PostgreSQL'</span>
<span class="no">PostgreSQLAdapter</span><span class="p">.</span><span class="nf">class_eval</span> <span class="k">do</span>
<span class="k">def</span> <span class="nf">native_database_types_with_uuid_support</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">native_database_types_without_uuid_support</span>
<span class="n">a</span><span class="p">[</span><span class="ss">:uuid</span><span class="p">]</span> <span class="o">=</span> <span class="p">{</span><span class="ss">:name</span> <span class="o">=></span> <span class="s1">'uuid'</span><span class="p">}</span>
<span class="k">return</span> <span class="n">a</span>
<span class="k">end</span>
<span class="n">alias_method_chain</span> <span class="ss">:native_database_types</span><span class="p">,</span> <span class="ss">:uuid_support</span>
<span class="k">end</span>
<span class="no">PostgreSQLColumn</span><span class="p">.</span><span class="nf">class_eval</span> <span class="k">do</span>
<span class="k">def</span> <span class="nf">simplified_type_with_uuid_support</span><span class="p">(</span><span class="n">field_type</span><span class="p">)</span>
<span class="k">if</span> <span class="n">field_type</span> <span class="o">==</span> <span class="s1">'uuid'</span>
<span class="ss">:uuid</span>
<span class="k">else</span>
<span class="n">simplified_type_without_uuid_support</span><span class="p">(</span><span class="n">field_type</span><span class="p">)</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="n">alias_method_chain</span> <span class="ss">:simplified_type</span><span class="p">,</span> <span class="ss">:uuid_support</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">end</span></code></pre></figure>
<p>In your migrations you can then use the :uuid type. In my sample case, this was it:</p>
<figure class="highlight"><pre><code class="language-ruby" data-lang="ruby"><span class="k">class</span> <span class="nc">AddGuuidToSites</span> <span class="o"><</span> <span class="no">ActiveRecord</span><span class="o">::</span><span class="no">Migration</span>
<span class="k">def</span> <span class="nc">self</span><span class="o">.</span><span class="nf">up</span>
<span class="n">add_column</span> <span class="ss">:sites</span><span class="p">,</span> <span class="ss">:guuid</span><span class="p">,</span> <span class="ss">:uuid</span>
<span class="n">add_index</span> <span class="ss">:sites</span><span class="p">,</span> <span class="ss">:guuid</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nc">self</span><span class="o">.</span><span class="nf">down</span>
<span class="n">remove_column</span> <span class="ss">:sites</span><span class="p">,</span> <span class="ss">:guuid</span>
<span class="k">end</span>
<span class="k">end</span></code></pre></figure>
<p>Maybe with a bit better Ruby knowledge than I have, it should be possible to just monkey-patch the parent <code>AbstractAdapter</code> while still calling the method of the current subclass. This would not require a separate patch for all adapters in use.</p>
<p>For my case which was just support for SQLite and PostgreSQL, the above initializer was fine though.</p>
How I back up gmail2011-02-28T00:00:00+00:00http://pilif.github.com/2011/02/how-i-back-up-gmail<p>There was a <a href="http://news.ycombinator.com/item?id=2269346">discussion on HackerNews</a> about Gmail having lost the email in some accounts. One sentiment in the comments was clear:</p>
<p>It’s totally the user’s problem if they don’t back up their cloud based email.</p>
<p>Personally, I think I would have to agree:</p>
<p>Google is a provider like every other ISP or basically any other service too. There’s no reason to believe that your data is safer on Google than it is anywhere else. Now granted, they are not exactly known for losing data, but there are other things that can happen.</p>
<p>Like your account being closed because whatever automated system believed your usage patterns were consistent with those of a spammer.</p>
<p>So the question is: What would happen if your Google account wasn’t reachable at some point in the future?</p>
<p>For my company (using commercial Google Apps accounts), I would start up that IMAP server which serves all mail ever sent to and from Gmail. People would use the already existing webmail client or their traditional IMAP clients. They would lose some productivity, but no single byte of data.</p>
<p>This was my condition for migrating email over to Google. I needed to have a back up copy of that data. Otherwise, I would not have agreed to switch to a cloud based provider.</p>
<p>The process is completely automated too. There’s not even a backup script running somewhere. Heck, <strong>not even the Google Account passwords have to be stored anywhere for this to work</strong>.</p>
<p>So. How does it work then?</p>
<p>Before you read on, here are the drawbacks of the solution:</p>
<ul>
<li>I'm a die-hard <a href="http://exim.org/">Exim</a> fan (long story. It served me very well once - up to saving-my-ass level of well), so the configuration I'm outlining here is for Exim as the mail relay.</li>
<li>Also, this <strong>only works with paid Google accounts</strong>. You can get somewhere using the free ones, but you don't get the full solution (i.e. having a backup of all sent email)</li>
<li>This requires you to have full control over the MX machine(s) of your domain.</li>
</ul>
<p>If you can live with this, here’s how you do it:</p>
<p>First, you set up your Google domain as normal. Add all the users you want and do everything else just as you would do it in a traditional set up.</p>
<p>Next, we’ll have to configure Google Mail for <a href="http://www.gnegg.ch/2010/06/google-apps-provisioning-two-legged-oauth/">two-legged OAuth access</a> to our accounts. I’ve written about this <a href="http://www.gnegg.ch/2010/06/google-apps-provisioning-two-legged-oauth/">before</a>. We are doing this so we don’t need to know our users passwords. Also, we need to enable the provisioning API to get access to the list of users and groups.</p>
<p>Next, our mail relay will have to know about what users (and groups) are listed in our Google account. Here’s what I quickly hacked together in Python (my first Python script ever - be polite while flaming) using the GData library:</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">import</span> <span class="nn">gdata.apps.service</span>
<span class="n">consumer_key</span> <span class="o">=</span> <span class="s">'yourdomain.com'</span>
<span class="n">consumer_secret</span> <span class="o">=</span> <span class="s">'2-legged-consumer-secret'</span> <span class="c">#see above</span>
<span class="n">sig_method</span> <span class="o">=</span> <span class="n">gdata</span><span class="o">.</span><span class="n">auth</span><span class="o">.</span><span class="n">OAuthSignatureMethod</span><span class="o">.</span><span class="n">HMAC_SHA1</span>
<span class="n">service</span> <span class="o">=</span> <span class="n">gdata</span><span class="o">.</span><span class="n">apps</span><span class="o">.</span><span class="n">service</span><span class="o">.</span><span class="n">AppsService</span><span class="p">(</span><span class="n">domain</span><span class="o">=</span><span class="n">consumer_key</span><span class="p">)</span>
<span class="n">service</span><span class="o">.</span><span class="n">SetOAuthInputParameters</span><span class="p">(</span><span class="n">sig_method</span><span class="p">,</span> <span class="n">consumer_key</span><span class="p">,</span>\
<span class="n">consumer_secret</span><span class="o">=</span><span class="n">consumer_secret</span><span class="p">,</span> <span class="n">two_legged_oauth</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="n">res</span> <span class="o">=</span> <span class="n">service</span><span class="o">.</span><span class="n">RetrieveAllUsers</span><span class="p">()</span>
<span class="k">for</span> <span class="n">entry</span> <span class="ow">in</span> <span class="n">res</span><span class="o">.</span><span class="n">entry</span><span class="p">:</span>
<span class="k">print</span> <span class="n">entry</span><span class="o">.</span><span class="n">login</span><span class="o">.</span><span class="n">user_name</span>
<span class="kn">import</span> <span class="nn">gdata.apps.groups.service</span>
<span class="n">service</span> <span class="o">=</span> <span class="n">gdata</span><span class="o">.</span><span class="n">apps</span><span class="o">.</span><span class="n">groups</span><span class="o">.</span><span class="n">service</span><span class="o">.</span><span class="n">GroupsService</span><span class="p">(</span><span class="n">domain</span><span class="o">=</span><span class="n">consumer_key</span><span class="p">)</span>
<span class="n">service</span><span class="o">.</span><span class="n">SetOAuthInputParameters</span><span class="p">(</span><span class="n">sig_method</span><span class="p">,</span> <span class="n">consumer_key</span><span class="p">,</span>\
<span class="n">consumer_secret</span><span class="o">=</span><span class="n">consumer_secret</span><span class="p">,</span> <span class="n">two_legged_oauth</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="n">res</span> <span class="o">=</span> <span class="n">service</span><span class="o">.</span><span class="n">RetrieveAllGroups</span><span class="p">()</span>
<span class="k">for</span> <span class="n">entry</span> <span class="ow">in</span> <span class="n">res</span><span class="p">:</span>
<span class="k">print</span> <span class="n">entry</span><span class="p">[</span><span class="s">'groupName'</span><span class="p">]</span></code></pre></figure>
<p>Place this script somewhere on your mail relay and run it in a cron job. In my case, I redirect its output to <code>/etc/exim4/gmail_accounts</code>. The script emits one user (and group) name per line.</p>
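<p>Since Exim reads that file during delivery, it is worth replacing it atomically so that a half-written list is never visible while the cron job runs. A minimal sketch of that part of the job (the function name is my own; the target path matches the lookups below):</p>

```python
import os
import tempfile

def write_accounts(names, target="/etc/exim4/gmail_accounts"):
    # Write to a temporary file in the same directory, then rename it
    # over the target: rename() is atomic on POSIX filesystems, so Exim
    # always sees either the old or the new complete list.
    directory = os.path.dirname(target) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        for name in names:
            f.write(name + "\n")
    os.rename(tmp_path, target)
```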
<p>Next, we’ll deal with incoming email:</p>
<p>In the Exim configuration of your mail relay, add the following routers:</p>
<figure class="highlight"><pre><code class="language-text" data-lang="text">yourdomain_gmail_users:
driver = accept
domains = yourdomain.com
local_parts = lsearch;/etc/exim4/gmail_accounts
transport_home_directory = /var/mail/yourdomain/${lc:$local_part}
router_home_directory = /var/mail/yourdomain/${lc:$local_part}
transport = gmail_local_delivery
unseen
yourdomain_gmail_remote:
driver = accept
domains = yourdomain.com
local_parts = lsearch;/etc/exim4/gmail_accounts
transport = gmail_t</code></pre></figure>
<p><code>yourdomain_gmail_users</code> is what creates the local copy. It accepts all mail sent to yourdomain.com if the local part (the stuff in front of the @) is listed in the gmail_accounts file. It then sets up some paths for the local transport (see below) and marks the mail as unseen so the next router gets a chance too.</p>
<p>That next router is <code>yourdomain_gmail_remote</code>. It again checks the domain and the local part and, if they match, delegates to the <code>gmail_t</code> remote transport (which then sends the email on to Google).</p>
<p>The transports look like this:</p>
<figure class="highlight"><pre><code class="language-text" data-lang="text">gmail_t:
driver = smtp
hosts = aspmx.l.google.com:alt1.aspmx.l.google.com:\
alt2.aspmx.l.google.com:aspmx5.googlemail.com:\
aspmx2.googlemail.com:aspmx3.googlemail.com:\
aspmx4.googlemail.com
gethostbyname
gmail_local_delivery:
driver = appendfile
check_string =
delivery_date_add
envelope_to_add
group=mail
maildir_format
directory = MAILDIR/yourdomain/${lc:$local_part}
maildir_tag = ,S=$message_size
message_prefix =
message_suffix =
return_path_add
user = Debian-exim
create_file = anywhere
create_directory</code></pre></figure>
<p>The <code>gmail_t</code> transport is simple. For the local one, you may have to adjust the user and group settings as well as the location you want the mail written to.</p>
<p>This is all that’s needed to get a copy of every inbound mail into a local maildir on the mail relay, so now we are ready to reconfigure Google.</p>
<p>Here’s what you do:</p>
<ul>
<li>You change the MX of your domain to point to this relay of yours</li>
</ul>
<p>The next two steps are the reason you need a paid account: These controls are not available for the free accounts:</p>
<ul>
<li>In your Google Administration panel, you visit the Email settings and configure the outbound gateway. Set it to your relay.</li>
<li>Then you configure your inbound gateway and set it to your relay too (and to your backup MX if you have one).</li>
</ul>
<p>This screenshot will help you:</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2011/02/gmail-config.png"><img src="/assets/gmail-config-300x102.png" alt="gmail config" /></a></p>
<p>All email sent to your MX will now be forwarded to Gmail (over the <code>gmail_t</code> transport we configured above) and accepted there.</p>
<p>Gmail will now also send all outgoing email to your relay, which needs to be configured to accept (and relay) email from Google. The details depend on your existing Exim configuration, but here’s what I added (which works with the default ACL):</p>
<figure class="highlight"><pre><code class="language-text" data-lang="text">hostlist google_relays = 216.239.32.0/19:64.233.160.0/19:66.249.80.0/20:\
72.14.192.0/18:209.85.128.0/17:66.102.0.0/20:\
74.125.0.0/16:64.18.0.0/20:207.126.144.0/20
hostlist relay_from_hosts = 127.0.0.1:+google_relays</code></pre></figure>
<p>And lastly, the tricky part: storing a copy of all mail that is sent through Gmail (the mail itself is already being delivered correctly; what we want is an additional copy):</p>
<p>Here is the exim router we need:</p>
<figure class="highlight"><pre><code class="language-text" data-lang="text">gmail_outgoing:
driver = accept
condition = "${if and{\
{ eq{$sender_address_domain}{yourdomain.com} }\
{=={${lookup{$sender_address_local_part}lsearch{/etc/exim4/gmail_accounts}{1}}}{1}}} {1}{0}}"
transport = store_outgoing_copy
unseen</code></pre></figure>
<p>(did I mention that I severely dislike RPN?)</p>
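<p>In plain terms, the condition does this (a Python rendering of the same logic, just for readability; the function name is mine, the file path and domain are the ones used throughout this post):</p>

```python
def should_store_copy(sender, accounts_file="/etc/exim4/gmail_accounts",
                      our_domain="yourdomain.com"):
    # Same logic as the exim condition above: keep a copy only when the
    # sender's domain is ours and the local part is a known account.
    local_part, _, domain = sender.partition("@")
    if domain != our_domain:
        return False
    with open(accounts_file) as f:
        accounts = {line.strip() for line in f if line.strip()}
    return local_part in accounts
```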
<p>and here’s the transport:</p>
<figure class="highlight"><pre><code class="language-text" data-lang="text">store_outgoing_copy:
driver = appendfile
check_string =
delivery_date_add
envelope_to_add
group=mail
maildir_format
directory = MAILDIR/yourdomain/${lc:$sender_address_local_part}/.Sent/
maildir_tag = ,S=$message_size
message_prefix =
message_suffix =
return_path_add
user = Debian-exim
create_file = anywhere
create_directory</code></pre></figure>
<p>The maildir layout I’ve chosen is the correct one if the IMAP server you want to use is Courier IMAPd. Other servers use different conventions.</p>
<p>One little thing: when you CC or BCC other people in your domain, Google will send out multiple copies of the same message. This yields some duplication in the sent directory (one copy per recipient), but as they say: better to back up too much than too little.</p>
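<p>If the duplicates ever bother you, they are easy to clean up after the fact, because all copies of a message share the same Message-ID header. A rough sketch of the idea (an illustration, not a battle-tested tool; run it against a copy first):</p>

```python
import email
import os

def dedupe_maildir(directory):
    # Keep the first file seen for each Message-ID and delete the rest.
    # Files without a Message-ID header are always kept.
    seen = set()
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        with open(path) as f:
            msg = email.message_from_file(f)
        msgid = msg.get("Message-ID")
        if msgid and msgid in seen:
            os.remove(path)
        else:
            seen.add(msgid)
```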
<p>Now if something happens to your google account, just start up an IMAP server and have it serve mail from these maildir directories.</p>
<p>And remember to back them up too, but you can just use rsync or rsnapshot or whatever other technology you might have in use. They are just directories containing one file per email.</p>
sacy 0.2 - now with less, sass and scss2011-02-16T00:00:00+00:00http://pilif.github.com/2011/02/sacy-0-2-now-with-less-sass-and-scss<p>To fresh up your memory (<a href="/2009/09/introducing-sacy-the-smarty-asset-compiler/">it has been a while</a>): <a href="http://github.com/pilif/sacy">sacy</a> is a <a href="http://www.smarty.net">Smarty</a> (both 2 and 3) plugin that turns</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml">{asset_compile}
<span class="nt"><link</span> <span class="na">type=</span><span class="s">"text/css"</span> <span class="na">rel=</span><span class="s">"stylesheet"</span> <span class="na">href=</span><span class="s">"/styles/file1.css"</span> <span class="nt">/></span>
<span class="nt"><link</span> <span class="na">type=</span><span class="s">"text/css"</span> <span class="na">rel=</span><span class="s">"stylesheet"</span> <span class="na">href=</span><span class="s">"/styles/file2.css"</span> <span class="nt">/></span>
<span class="nt"><link</span> <span class="na">type=</span><span class="s">"text/css"</span> <span class="na">rel=</span><span class="s">"stylesheet"</span> <span class="na">href=</span><span class="s">"/styles/file3.css"</span> <span class="nt">/></span>
<span class="nt"><link</span> <span class="na">type=</span><span class="s">"text/css"</span> <span class="na">rel=</span><span class="s">"stylesheet"</span> <span class="na">href=</span><span class="s">"/styles/file4.css"</span> <span class="nt">/></span>
<span class="nt"><script</span> <span class="na">type=</span><span class="s">"text/javascript"</span> <span class="na">src=</span><span class="s">"/jslib/file1.js"</span><span class="nt">></script></span>
<span class="nt"><script</span> <span class="na">type=</span><span class="s">"text/javascript"</span> <span class="na">src=</span><span class="s">"/jslib/file2.js"</span><span class="nt">></script></span>
<span class="nt"><script</span> <span class="na">type=</span><span class="s">"text/javascript"</span> <span class="na">src=</span><span class="s">"/jslib/file3.js"</span><span class="nt">></script></span>
{/asset_compile}</code></pre></figure>
<p>into</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt"><link</span> <span class="na">type=</span><span class="s">"text/css"</span> <span class="na">rel=</span><span class="s">"stylesheet"</span> <span class="na">href=</span><span class="s">"/assets/files-1234abc.css"</span> <span class="nt">/></span>
<span class="nt"><script</span> <span class="na">type=</span><span class="s">"text/javascript"</span> <span class="na">src=</span><span class="s">"/assets/files-abc123.js"</span><span class="nt">></script></span></code></pre></figure>
<p>It does this without you ever having to manually run a compiler, without serving all your assets through some script (thus saving RAM) and without worries about stale copies being served. In fact, you can serve all static files generated with sacy with cache headers telling browsers to never revisit them!</p>
<p>All of this using two lines of code (wrap as much content as you want in {asset_compile}…{/asset_compile}).</p>
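<p>The reason those aggressive cache headers are safe is that the generated filename is derived from the input files’ contents, so any change yields a new URL while the old one can be cached forever. The core idea, reduced to a sketch (this is not sacy’s actual code):</p>

```python
import hashlib

def asset_name(contents, extension):
    # Hash the concatenated contents of all input files; any change in
    # any of them produces a different output filename, so a cached
    # copy of an old URL is never served with stale content.
    digest = hashlib.sha1("".join(contents).encode("utf-8")).hexdigest()
    return "files-%s.%s" % (digest[:10], extension)
```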
<p>Sacy has been around for a bit more than a year now and has been in production use in <a href="http://www.popscan.com">PopScan</a> during that time. Not a single bug has been found so far, so I would say that it’s pretty usable.</p>
<p>Coworkers have bugged me enough about how much better <a href="http://lesscss.org/">less</a> or <a href="http://sass-lang.com/">sass</a> are compared to pure CSS that I finally decided to update <a href="http://github.com/pilif/sacy">sacy</a> to let us use less in PopScan:</p>
<p>Aside from consolidating and minifying CSS and JavaScript, sacy can now also transform less and sass (or scss) files using the exact same method as before, just with a different mime type:</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt"><link</span> <span class="na">type=</span><span class="s">"text/x-less"</span> <span class="na">rel=</span><span class="s">"stylesheet"</span> <span class="na">href=</span><span class="s">"/styles/file1.less"</span> <span class="nt">/></span>
<span class="nt"><link</span> <span class="na">type=</span><span class="s">"text/x-sass"</span> <span class="na">rel=</span><span class="s">"stylesheet"</span> <span class="na">href=</span><span class="s">"/styles/file2.sass"</span> <span class="nt">/></span>
<span class="nt"><link</span> <span class="na">type=</span><span class="s">"text/x-scss"</span> <span class="na">rel=</span><span class="s">"stylesheet"</span> <span class="na">href=</span><span class="s">"/styles/file3.scss"</span> <span class="nt">/></span></code></pre></figure>
<p>Like before, you don’t concern yourself with manual compilation or anything. Just use the links as is and sacy will do the magic for you.</p>
<p>Interested? Read the (by now huge) <a href="https://github.com/pilif/sacy/blob/v0.2/README.markdown">documentation</a> on <a href="http://github.com/pilif">my github page</a>!</p>
Find relation sizes in PostgreSQL2011-02-07T00:00:00+00:00http://pilif.github.com/2011/02/find-relation-sizes-in-postgresql<p>Like so many times before, today I was yet again in the situation where I wanted to know which tables/indexes take the most disk space in a particular PostgreSQL database.</p>
<p>My usual procedure in this case was to <code>\dt+</code> in psql and scan the sizes by eye (this being on my development machine, trying to find out the biggest tables I could clean out to make room).</p>
<p>But once you’ve done that a few times, and considering that <code>\dt+</code> does nothing but query some PostgreSQL internal tables, I wanted this solved in a way that is easier and less error-prone. In the end, I just wanted the output of <code>\dt+</code> sorted by size.</p>
<p>That led to some digging in the source code of psql itself (<code>src/bin/psql</code>), where I quickly found the function that builds the query (<code>listTables</code> in <code>describe.c</code>). So from now on, this is what I’m using when I need an overview over all relation sizes, ordered by size in descending order:</p>
<figure class="highlight"><pre><code class="language-sql" data-lang="sql"><span class="k">select</span>
<span class="n">n</span><span class="p">.</span><span class="n">nspname</span> <span class="k">as</span> <span class="nv">"Schema"</span><span class="p">,</span>
<span class="k">c</span><span class="p">.</span><span class="n">relname</span> <span class="k">as</span> <span class="nv">"Name"</span><span class="p">,</span>
<span class="k">case</span> <span class="k">c</span><span class="p">.</span><span class="n">relkind</span>
<span class="k">when</span> <span class="s1">'r'</span> <span class="k">then</span> <span class="s1">'table'</span>
<span class="k">when</span> <span class="s1">'v'</span> <span class="k">then</span> <span class="s1">'view'</span>
<span class="k">when</span> <span class="s1">'i'</span> <span class="k">then</span> <span class="s1">'index'</span>
<span class="k">when</span> <span class="s1">'S'</span> <span class="k">then</span> <span class="s1">'sequence'</span>
<span class="k">when</span> <span class="s1">'s'</span> <span class="k">then</span> <span class="s1">'special'</span>
<span class="k">end</span> <span class="k">as</span> <span class="nv">"Type"</span><span class="p">,</span>
<span class="n">pg_catalog</span><span class="p">.</span><span class="n">pg_get_userbyid</span><span class="p">(</span><span class="k">c</span><span class="p">.</span><span class="n">relowner</span><span class="p">)</span> <span class="k">as</span> <span class="nv">"Owner"</span><span class="p">,</span>
<span class="n">pg_catalog</span><span class="p">.</span><span class="n">pg_size_pretty</span><span class="p">(</span><span class="n">pg_catalog</span><span class="p">.</span><span class="n">pg_relation_size</span><span class="p">(</span><span class="k">c</span><span class="p">.</span><span class="n">oid</span><span class="p">))</span> <span class="k">as</span> <span class="nv">"Size"</span>
<span class="k">from</span> <span class="n">pg_catalog</span><span class="p">.</span><span class="n">pg_class</span> <span class="k">c</span>
<span class="k">left</span> <span class="k">join</span> <span class="n">pg_catalog</span><span class="p">.</span><span class="n">pg_namespace</span> <span class="n">n</span> <span class="k">on</span> <span class="n">n</span><span class="p">.</span><span class="n">oid</span> <span class="o">=</span> <span class="k">c</span><span class="p">.</span><span class="n">relnamespace</span>
<span class="k">where</span> <span class="k">c</span><span class="p">.</span><span class="n">relkind</span> <span class="k">IN</span> <span class="p">(</span><span class="s1">'r'</span><span class="p">,</span> <span class="s1">'v'</span><span class="p">,</span> <span class="s1">'i'</span><span class="p">)</span>
<span class="k">order</span> <span class="k">by</span> <span class="n">pg_catalog</span><span class="p">.</span><span class="n">pg_relation_size</span><span class="p">(</span><span class="k">c</span><span class="p">.</span><span class="n">oid</span><span class="p">)</span> <span class="k">desc</span><span class="p">;</span></code></pre></figure>
<p>Of course I could have come up with this without digging through the source, but honestly, I didn’t know about <code>relkind</code> values, about <code>pg_size_pretty</code> or <code>pg_relation_size</code> (I would have expected that one to be stored in some system view), so figuring all of this out would have taken much more time than just reading the source code.</p>
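<p>For intuition, <code>pg_size_pretty</code> simply scales the raw byte count to a convenient unit. A rough Python approximation of its behavior (PostgreSQL’s exact rounding rules differ in the details):</p>

```python
def size_pretty(num_bytes):
    # Roughly what pg_size_pretty does: divide by 1024 until the value
    # is small enough to print with the current unit. PostgreSQL keeps
    # plain byte counts up to 10 kB before switching units.
    units = ["bytes", "kB", "MB", "GB", "TB"]
    size = float(num_bytes)
    for unit in units:
        if size < 10240 or unit == units[-1]:
            return "%d %s" % (round(size), unit)
        size /= 1024
```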
<p>Now it’s here so I remember it next time I need it.</p>
overpriced data roaming2010-11-04T00:00:00+00:00http://pilif.github.com/2010/11/overpriced-data-roaming<p>You shouldn’t complain if something gets cheaper. But if something gets 7 times cheaper from one day to the next, that leaves you wondering whether the price offered so far might have been a tad too high.</p>
<p>I’m talking about Swisscom’s data roaming charges.</p>
<p>Up to now, you paid CHF 50 per 5 MB (CHF 10 per MB) when roaming in the EU. Yes. That’s around $10 and EUR 6.60 per <strong>Megabyte</strong>. Yes. Megabyte. Not Gigabyte. And you people complain about getting limited to 5 GB for your $30.</p>
<p>Just now I got a <a href="http://www.swisscom.ch/NR/exeres/FC2C644E-DFB7-49E4-8326-93C03D317BAF,frameless.htm?lang=de">press release</a> from Swisscom announcing that they are changing their roaming charges to CHF 7 per 5 MB. That’s CHF 1.40 per MB, which is <strong>7 times cheaper</strong>.</p>
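<p>The arithmetic is simple enough to check:</p>

```python
# Old price: CHF 50 per 5 MB; new price: CHF 7 per 5 MB.
old_per_mb = 50 / 5.0
new_per_mb = 7 / 5.0
assert old_per_mb == 10.0
assert new_per_mb == 1.4
# The drop is a factor of just over seven.
print(round(old_per_mb / new_per_mb, 2))  # 7.14
```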
<p>If you can make a product of yours 7 times cheaper from one day to the next, the rates you charged before were clearly way too high.</p>
How to kill IE performance2010-10-07T00:00:00+00:00http://pilif.github.com/2010/10/how-to-kill-ie-performance<p>While working on my day job, we are often dealing with huge data tables in HTML augmented with some JavaScript to do calculations with that data.</p>
<p>Think huge shopping cart: You change the quantity of a line item and the line total as well as the order total will change.</p>
<p>This leads to the same data (line items) having three representations:</p>
<ol>
<li>The model on the server</li>
<li>The HTML UI that is shown to the user</li>
<li>The model that's seen by JavaScript to do the calculations on the client side (and then updating the UI)</li>
</ol>
<p>You might think that the JavaScript running in the browser would somehow be able to work with the data from 2) so that the third model wouldn’t be needed, but due to various localization issues (think number formatting) and data that’s not displayed but affects the calculations, that’s not possible.</p>
<p>So the question is: considering we have some HTML templating language to build 2), how do we get to 3)?</p>
<p>Back in 2004, when I initially designed that system (using AJAX before it was widely called AJAX), I hadn’t seen <a href="http://video.yahoo.com/watch/111593/1710507">Crockford’s lectures</a> yet, so I still lived in the “JS sucks” world, where I did something like this:</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="c"><!-- lots of TRs --></span>
<span class="nt"><tr></span>
<span class="nt"><td></span>Column 1 <span class="nt"><script></span>addSet(1234 /*prodid*/, 1 /*quantity*/, 10 /*price*/, /* and, later, more, stuff, so, really, ugly */)<span class="nt"></script></td></span>
<span class="nt"><td></span>Column 2<span class="nt"></td></span>
<span class="nt"><td></span>Column 3<span class="nt"></td></span>
<span class="nt"></tr></span>
<span class="c"><!-- lots of TRs --></span></code></pre></figure>
<p>(Yeah - as I said: 2004. No object literals, global functions. We had a lot to learn back then, but so did you, so don’t be too angry at me - we improved)</p>
<p>Obviously, this doesn’t scale: As the line items got more complicated, that parameter list grew and grew. The HTML code got uglier and uglier and of course, cluttering the window object is a big no-no too. So we went ahead and built a beautiful design:</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="c"><!-- lots of TRs --></span>
<span class="nt"><tr</span> <span class="na">class=</span><span class="s">"lineitem"</span> <span class="na">data-ps-lineitem=</span><span class="s">'{"prodid": 1234, "quantity": 1, "price": 10, "foo": "bar", "blah": "blah"}'</span><span class="nt">></span>
<span class="nt"><td></span>Column 1<span class="nt"></td></span>
<span class="nt"><td></span>Column 2<span class="nt"></td></span>
<span class="nt"><td></span>Column 3<span class="nt"></td></span>
<span class="nt"></tr></span>
<span class="c"><!-- lots of TRs --></span></code></pre></figure>
<p>The first iteration was then parsing that JSON every time we needed to access any of the associated data (and serializing again whenever it changed). Of course this didn’t go that well performance-wise, so we began caching and did something like this (using jQuery):</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">$</span><span class="p">(</span><span class="kd">function</span><span class="p">(){</span>
<span class="nx">$</span><span class="p">(</span><span class="s1">'.lineitem'</span><span class="p">).</span><span class="nx">each</span><span class="p">(</span><span class="kd">function</span><span class="p">(){</span>
<span class="k">this</span><span class="p">.</span><span class="nx">ps_data</span> <span class="o">=</span> <span class="nx">$</span><span class="p">.</span><span class="nx">parseJSON</span><span class="p">(</span><span class="nx">$</span><span class="p">(</span><span class="k">this</span><span class="p">).</span><span class="nx">attr</span><span class="p">(</span><span class="s1">'data-ps-lineitem'</span><span class="p">));</span>
<span class="p">});</span>
<span class="p">});</span></code></pre></figure>
<p>Now each DOM element representing one of these <code>&lt;tr&gt;</code>s had a <code>ps_data</code> member, which allowed for quick access. The JSON had to be parsed only once, and then the data was available. If it changed, writing it back didn’t require re-serialization either: you just changed that property directly.</p>
<p>This design is reasonably clean (still not as DRY as the initial attempt which had the data only in that JSON string) while still providing enough performance.</p>
<p>Until you begin to amass datasets, that is.</p>
<p>Well. Until you do so and expect this to work in IE.</p>
<p>800 rows like this made IE lock up its UI thread for 40 seconds.</p>
<p>So more optimization was in order.</p>
<p>First,</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">$</span><span class="p">(</span><span class="s1">'.lineitem'</span><span class="p">)</span></code></pre></figure>
<p>will kill IE. Remember: IE (still) doesn’t have <code>getElementsByClassName</code>, so jQuery has to iterate over the whole DOM and check whether each element’s class attribute contains “lineitem”. Considering that IE’s DOM isn’t really fast to start with, this is a HUGE no-no.</p>
<p>So.</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">$</span><span class="p">(</span><span class="s1">'tr.lineitem'</span><span class="p">)</span></code></pre></figure>
<p>Nope. Nearly as bad considering there are still at least 800 tr’s to iterate over.</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">$</span><span class="p">(</span><span class="s1">'#whatever tr.lineitem'</span><span class="p">)</span></code></pre></figure>
<p>Would help if it weren’t 800 tr’s that match. Using <a href="http://ajax.dynatrace.com/pages/">dynaTrace AJAX</a> (highly recommended tool, by the way) we found out that just selecting the elements alone (without the iteration) took more than 10 seconds.</p>
<p>So the general take-away is: Selecting lots of elements in IE is painfully slow. Don’t do that.</p>
<p>But back to our little problem here. Deserializing that JSON at DOM-ready time is not feasible in IE, because no matter what we do to that selector, once there are enough elements to handle, it’s just going to be slow.</p>
<p>Now by chunking up the amount of work to do and using setTimeout() to launch various deserialization jobs we could fix the locking up, but the total run time before all data is deserialized will still be the same (or slightly worse).</p>
<p>So what we have done in 2004, even though it was ugly, was way more feasible in IE.</p>
<p>Which is why we went back to the initial design with some improvements:</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="c"><!-- lots of TRs --></span>
<span class="nt"><tr</span> <span class="na">class=</span><span class="s">"lineitem"</span><span class="nt">></span>
<span class="nt"><td></span>Column 1 <span class="nt"><script></span>PopScan.LineItems.add({"prodid": 1234, "quantity": 1, "price": 10, "foo": "bar", "blah": "blah"});<span class="nt"></script></td></span>
<span class="nt"><td></span>Column 2<span class="nt"></td></span>
<span class="nt"><td></span>Column 3<span class="nt"></td></span>
<span class="nt"></tr></span>
<span class="c"><!-- lots of TRs --></span></code></pre></figure>
<p><em>phew</em> crisis averted.</p>
<p>Loading time went back to where it was in the 2004 design. It was still bad though. With those 800 rows, IE was still taking more than 10 seconds for the rendering task. dynaTrace revealed that this time, the time was apparently spent rendering.</p>
<p>The initial feeling was that there’s not much to do at that point.</p>
<p>Until we began suspecting the script tags.</p>
<p>Doing this:</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="c"><!-- lots of TRs --></span>
<span class="nt"><tr</span> <span class="na">class=</span><span class="s">"lineitem"</span><span class="nt">></span>
<span class="nt"><td></span>Column 1<span class="nt"></td></span>
<span class="nt"><td></span>Column 2<span class="nt"></td></span>
<span class="nt"><td></span>Column 3<span class="nt"></td></span>
<span class="nt"></tr></span>
<span class="c"><!-- lots of TRs --></span></code></pre></figure>
<p>The page loaded instantly.</p>
<p>Doing this</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="c"><!-- lots of TRs --></span>
<span class="nt"><tr</span> <span class="na">class=</span><span class="s">"lineitem"</span><span class="nt">></span>
<span class="nt"><td></span>Column 1 <span class="nt"><script></span>1===1;<span class="nt"></script></td></span>
<span class="nt"><td></span>Column 2<span class="nt"></td></span>
<span class="nt"><td></span>Column 3<span class="nt"></td></span>
<span class="nt"></tr></span>
<span class="c"><!-- lots of TRs --></span></code></pre></figure>
<p>it took 10 seconds again.</p>
<p>Considering that IE’s JavaScript engine runs as a COM component, this isn’t actually that surprising: Whenever IE hits a script tag, it stops whatever it’s doing, sends that script over to the COM component (first doing all the marshaling of the data), waits for that to execute, marshals the result back (depending on where the DOM lives and whether the script accesses it, possibly crossing that COM boundary many, many times in between) and then finally resumes page loading.</p>
<p>It has to wait for each script because, potentially, that JavaScript could call document.open() / document.write() at which point the document could completely change.</p>
<p>So the final solution was to loop through the server-side model twice and do something like this:</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="c"><!-- lots of TRs --></span>
<span class="nt"><tr</span> <span class="na">class=</span><span class="s">"lineitem"</span><span class="nt">></span>
<span class="nt"><td></span>Column 1 <span class="nt"></td></span>
<span class="nt"><td></span>Column 2<span class="nt"></td></span>
<span class="nt"><td></span>Column 3<span class="nt"></td></span>
<span class="nt"></tr></span>
<span class="c"><!-- lots of TRs --></span>
<span class="nt"></table></span>
<span class="nt"><script></span>
PopScan.LineItems.add({prodid: 1234, quantity: 1, price: 10, foo: "bar", blah: "blah"});
// 800 more of these
<span class="nt"></script></span></code></pre></figure>
<p>Problem solved. Not too ugly a design. Certainly not a 2004 design any more.</p>
<p>And in closing, let me give you a couple of things you can do if you want to bring IE’s performance down to its knees:</p>
<ul>
<li>Use broad jQuery selectors. <code>$('.someclass')</code> will cause jQuery to loop through <em>all</em> elements on the page.</li>
<li>Even if you try not to be broad, you can still kill performance: <code>$('div.someclass')</code>. The most help jQuery can expect from IE is <code>getElementsByTagName</code>, so while it's better than iterating <em>all</em> elements, it's still going over all divs on your page. Once there are more than 200 of them, performance degrades extremely quickly (probably due to some O(n^2) thing somewhere).</li>
<li>Use a lot of <code>&lt;script&gt;</code> tags. Every one of these will force IE to marshal data to the scripting-engine COM component and to wait for the result.</li>
</ul>
<p>Next time, we’ll have a look at how to use jQuery’s delegate() to handle common cases with huge selectors.</p>
Why node.js excites me2010-09-27T00:00:00+00:00http://pilif.github.com/2010/09/why-node-js-excites-me<p>Today, on <a href="http://news.ycombinator.com">Hacker News</a>, an article named “<a href="http://www.eflorenzano.com/blog/post/why-node-disappoints-me/">Why node.js disappoints me</a>” appeared - right on the day I returned back from jsconf.eu (awesome conference. Best days of my life, I might add) where I was giving <a href="http://bit.ly/b4gsrL">a talk about using node.js for a real web application</a> that provides real use: <a href="http://tempalias.com">tempalias.com</a></p>
<p>Time to write a rebuttal, I guess.</p>
<p>The main gripe Eric has with node is a gripe with the libraries that are available. It’s not about performance. It’s not about ease of deployment, or ease of development. In his opinion, the libraries that are out there at the moment don’t provide anything new compared to what already exists.</p>
<p>On that level, I totally agree. The most obvious candidates for development and templating try to mimic what’s already out there for other platforms. What’s worse: there seems to be no real winner, and node itself doesn’t seem to make a recommendation or even include something with the base distribution.</p>
<p><strong>This is inherently a good thing though</strong>. Node.js isn’t your complete web development stack. Far from it.</p>
<p>Node is an awesome platform to very easily write very well performing servers. Node is an awesome platform to use for your daily shell scripting needs (allowing you to work in your favorite language even for these tasks). Node isn’t about creating awesome websites. It’s about giving you the power to easily build servers. Web, DNS, SMTP - we’ve seen all.</p>
<p>To help you with web servers, and probably to show us users how it’s done, node also provides a very good library to interact with the HTTP protocol. This isn’t about generating web pages. This isn’t about URL routing, or MVC, or whatever. This is about writing a web server. About interacting with HTTP clients. Or HTTP servers. On the lowest level.</p>
<p>So when comparing node with other platforms, you must be careful to compare apples with apples. Don’t compare pure node.js to Rails. Compare it to mod_wsgi, to FastCGI, to a servlet container (if you must), to mod_php (the module that gives a script of yours access to server internals, not the language) or to mod_perl.</p>
<p>In that case, consider this. With node.js you don’t worry about performance, you don’t worry about global locks (you do worry about never blocking though),<em> and you really, truly and most awesomely don’t worry about race conditions</em>.</p>
<p>Assuming</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"> <span class="kd">var</span> <span class="nx">a</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">f</span> <span class="o">=</span> <span class="kd">function</span><span class="p">(){</span>
<span class="kd">var</span> <span class="nx">t</span> <span class="o">=</span> <span class="nx">a</span><span class="p">;</span> <span class="c1">// proving a point here. I know it's not needed</span>
<span class="nx">a</span> <span class="o">=</span> <span class="nx">t</span> <span class="o">+</span> <span class="mi">1</span><span class="p">;</span>
<span class="p">}</span>
<span class="nx">setTimeout</span><span class="p">(</span><span class="nx">f</span><span class="p">,</span> <span class="mi">100</span><span class="p">);</span>
<span class="nx">setTimeout</span><span class="p">(</span><span class="nx">f</span><span class="p">,</span> <span class="mi">100</span><span class="p">);</span></code></pre></figure>
<p>you’d always end up with a === 2 once both timeouts have executed. There is no interruption between the assignment of t and the increment. No worries about threading. No hours wasted trying to find out why a suddenly (and depending on the load on your system) is either 1, 2 or 3.</p>
<p>In the years we got experience in programming, we learned that what f does in my example above is a bad thing. We feel strange when typing code like this - seeking for any method of locking, of specifying a critical section. <em>With node, there’s no need to.</em></p>
<p><em>This</em> is why writing servers (remember: highly concurrent access to potentially the same code) is so much fun in node.</p>
<p>The perfect little helpers that were added to deal with the HTTP protocol are just the icing on the cake, but in so many other frameworks (<em>cough</em> WSGI <em>cough</em>) stuff like chunking, multipart parsing, even just reading the client’s data from an input stream are hard if you do them on your own, or completely beyond your control if you let the libraries do them.</p>
<p>With node you get to the knobs to turn in the easiest way possible.</p>
<p>Now we know that we can easily write well performing servers (of any kind with special support for HTTP) in node, so let’s build a web site.</p>
<p>In traditional frameworks, your first step would be to select a framework (because the HTTP libraries are so <em>effing</em> (technical term) hard to use).</p>
<p>You’d end up with something lightweight like, say, mnml or werkzeug in python or something more heavy like rails for ruby (though rack isn’t nearly as bad as wsgi) or django for python. You’d add some kind of database abstraction or even ORM layer - maybe something that comes with your framework.</p>
<p>Sure. You could do that in node too. There are frameworks around.</p>
<p>But remember: Node is an awesome tool for you to write highly specialized servers.</p>
<p>Do you need to build your whole site in node?</p>
<p>Do you see this as a black or white situation?</p>
<p>Over the last year, I’ve done two things.</p>
<p>One is to layout a way how to augment an existing application (PHP, PostgreSQL) with a WebSocket based service using node to greatly reduce the load on the existing application. I didn’t have time to implement this yet, but it would work wonders.</p>
<p>The other thing was to prove a point and to implement a whole web application in node.</p>
<p>I built <a href="http://tempalias.com">tempalias.com</a></p>
<p>At first I fell into the same trap that anybody coming from the “old world” would fall into. I selected what seemed to be the most-used web framework (Express) and rolled with that, but I soon found out that I had it all backwards.</p>
<p>I didn’t want to write the fiftieth interchangeable web application. I wanted to do something else. Something new.</p>
<p>When you look at the <a href="http://github.com/pilif/tempalias">tempalias source code</a> (yeah - the whole service is open source so all of us can learn from it), you’ll notice that <em>no single byte of HTML is dynamically generated</em>.</p>
<p>I ripped out Express. I built a RESTful API for the main functionality of the site: Creating aliases. I built a server that does just that and nothing more.</p>
<p>I leveraged all the nice features JavaScript as a language provides me with to build a really cool backend. I used all the power that node provides me with to build a really cool (and simple!) server to web-enable that API (posting and reading JSON to and from the server)</p>
<p>The web client itself is just a client to that API. No single byte of that client is dynamically generated. It’s all static files. It’s using <a href="http://code.quirkey.com/sammy/">Sammy</a>, <a href="http://jquery.com/">jQuery</a>, HTML and CSS to do its thing, but it doesn’t do anything the API I built on node doesn’t expose.</p>
<p>Because it’s static HTML, I could serve that directly from nginx I’m running in front of node.</p>
<p>But because I wanted the service to be self-contained, I plugged in <a href="http://github.com/felixge/node-paperboy/">node-paperboy</a> to serve the static files from node too.</p>
<p>Paperboy is very special and very, very cool.</p>
<p>It’s not trying to replace node’s HTTP library. It’s not trying to abstract away all the niceties of node’s excellent HTTP support. It’s not even trying to take over the creation of the actual HTTP server. Paperboy is just a function you call with the request and response object you got as part of node’s HTTP support.</p>
<p>Whether you want to call it or not is your decision.</p>
<p>If you want to handle the request, you handle it.</p>
<p>If you don’t, you pass it on to paperboy.</p>
<p>Or foobar.</p>
<p>Or whatever.</p>
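<p>The hand-off pattern described above boils down to a plain function call: handle the request yourself, or pass the untouched request and response objects on to a fallback. A sketch (the <code>/aliases</code> prefix mirrors the tempalias API; the fallback stands in for paperboy or anything else):</p>

```javascript
// dispatch: handle API requests ourselves, delegate everything else
function dispatch(req, res, fallback) {
  if (req.url.indexOf('/aliases') === 0) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ handled: true }));
    return;
  }
  fallback(req, res); // e.g. paperboy serving static files
}
```

<p>No middleware contract, no framework - just two functions that both accept <code>(req, res)</code>.</p>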
<p>Node is the UNIX of the tools to build servers with: it provides small, dedicated tools that do one task, but truly, utterly excel at doing it.</p>
<p>So the libraries you are looking for are not the huge frameworks that do everything but just the one bit you really need.</p>
<p>You are looking for the excellent small libraries that live the spirit of node. You are looking for libraries that do one thing well. You are looking for libraries like paperboy. And you are relying on the excellent HTTP support to build your own libraries where the need arises.</p>
<p>It’s still very early in node’s lifetime.</p>
<p>You can’t expect everything to be there, ready to use.</p>
<p>For some cases, it already is. Need a DNS server? You can do that. Need an SMTP daemon? Easy. You can do that. Need an HTTP server that understands the HTTP protocol really well and provides excellent support to add your own functionality? Go for it.</p>
<p>But above all: You want to write your server in a kick-ass language? You want to never have to care about race conditions when reading, modifying and writing to a variable? You want to be sure not to waste hours and hours of work debugging code that looks right but isn’t?</p>
<p>Then node is for you.</p>
<p>It’s no turnkey solution yet.</p>
<p>It’s up to you to make the most out of it. To combine it with something more traditional. Or to build something new, maybe rethinking how you approach the problem. Node can help you to provide an awesome foundation to build upon. It alone will never provide you with a blog in 10 minutes. Supporting libraries don’t at this time provide you with that blog.</p>
<p>But they empower you to build it in a way that withstands even the heaviest pounding, that makes the most out of the available resources and above all, they allow you to use your language of choice to do so.</p>
<p>JavaScript.</p>
Skype over 3G - calculations2010-07-28T00:00:00+00:00http://pilif.github.com/2010/07/skype-over-3g-calculations<p>With the availability of an iPhone Skype client with background availability, I wanted to find out, how much it would cost me if I move calls from the regular mobile network over to Skype over 3G.</p>
<p>Doing VoIP over 3G is a non-issue from a political standpoint here in Switzerland as there are no unlimited data plans available and the metered plans are expensive enough for the cellphone companies to actually be able to make money with, so there’s no blocking or anything else going on over here.</p>
<p>To see how much data would accumulate, I held a phone conversation with my girlfriend that lasted exactly two minutes, during which we both talked. Before the call I reset the traffic counter on my phone, and I checked it again immediately afterwards.</p>
<p>After those two minutes, my phone had sent 652 KB of data and received 798 KB.</p>
<p>That works out to roughly 725 KB per minute of conversation - call it 750 KB/minute to be conservative.</p>
<p>My subscription comes with 250 MB of data per month. Once that’s used up, a flat rate of CHF 5 applies on every day I use any data at all, so I really should not go beyond the 250 MB.</p>
<p>As I’m not watching video (or audio) over 3G, my data usage is otherwise quite low - around 50MB.</p>
<p>That leaves 200MB unused.</p>
<p>With 750 KB/minute, this equals <strong>4.4 hours of free Skype conversation</strong> - far more than I would ever use, which means that, at least with Skype-enabled contacts, I can now talk for free.</p>
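<p>The arithmetic behind that number, spelled out (using decimal KB/MB, as the post does):</p>

```javascript
// Skype-over-3G budget calculation from the measured two-minute call
var sentKB = 652, receivedKB = 798;        // measured over a 2-minute call
var perMinute = (sentKB + receivedKB) / 2; // 725 KB/min, rounded up to 750
var budgetKB = 200 * 1000;                 // 200 MB of monthly quota left over
var freeMinutes = budgetKB / 750;          // ≈ 266.7 minutes
var freeHours = freeMinutes / 60;          // ≈ 4.4 hours
```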
<p>Well. Could.</p>
<p>While the voice quality in Skype over 3G is simply astounding, the solution unfortunately still isn’t practical due to two issues:</p>
<ol>
<li>Skype sucks about 20% of battery per hour even when just idling in the background.</li>
<li>Skype IM doesn't know the concept of locations, so every IM is replicated to all clients. This means that whenever I type something to a coworker, my phone will make a sound and vibrate.</li>
</ol>
<p>2) I could work around by quitting Skype when in front of a PC, but 1) really is the killer. Maybe the new iPhone 4 (if I go that route instead of giving Android another try) with its bigger battery will be of help.</p>
stabilizing tempalias2010-07-02T00:00:00+00:00http://pilif.github.com/2010/07/stabilizing-tempalias<p>While the <a href="http://www.gnegg.ch/2010/06/do-spammers-find-pleasure-in-destroying-fun-stuff/">maintenance last weekend</a> brought quite a bit of stabilization to the tempalias service, I quickly noticed that it was still dying sooner or later and while before updating node, it died due to not being able to allocate more memory, this time, it died by just not answering any requests any more.</p>
<p>A look at the error log quickly revealed quite many exceptions complaining about a certain request type not being allowed to have a body and finally one complaining about not being able to open a file due to having run out of file handles.</p>
<p>So I quickly improved error logging and restarted the daemon in order to get a stacktrace leading to these tons of exceptions.</p>
<figure><a href="http://www.flickr.com/photos/lmillsphotography/2659662694/"><img class="size-full wp-image-754" title="Milkweed Bug" src="http://www.gnegg.ch/wp-content/uploads/2010/07/2659662694_9502870853_m.jpg" alt="Milkweed Bug" width="173" height="173" /></a><figcaption>Picture by L.G.Mills</figcaption></figure>
<p>This quickly pointed to paperboy which was sending the file even if the request was a HEAD request. <code>http.js</code> in node checks for this and throws whenever you send a body when you should not. That exception lead then to paperboy never closing the file (have I already complained how incredibly difficult it is to do proper exception handling the moment continuations get involved? I think not and I also think it’s a good topic for another diary entry). With the help of <code>lsof</code> I’ve quickly seen that my suspicions were true: the node process serving tempalias had tons of open handles to <a href="http://github.com/pilif/tempalias/blob/master/public/index.html">public/index.html</a>.</p>
<p>I sent a patch for this behavior to <a href="http://twitter.com/felixge">@felixge</a> which was <a href="http://github.com/felixge/node-paperboy/commit/8c37d6fa32ca10e4198490af8a25595bdb5abf16">quickly applied</a>, so that’s fixed now. I hope it’s of some use for other people too.</p>
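<p>The shape of the fix is easy to sketch (this is a simplification, not paperboy’s actual code): a HEAD response must carry headers only, because node’s <code>http.js</code> throws when a body is written for a request that must not have one.</p>

```javascript
// sketch: never stream a body for HEAD requests
function sendFile(req, res, content) {
  res.writeHead(200, { 'Content-Length': Buffer.byteLength(content) });
  if (req.method === 'HEAD') {
    res.end();      // headers only - and the point where the file handle
    return;         // must be closed instead of being streamed
  }
  res.end(content); // GET and friends get the actual body
}
```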
<p>Now knowing that having a look at <code>lsof</code> here and then might be a good idea, quickly revealed another problem: While the file handles were gone, I’ve noticed tons and tons of SMTP sockets staying open in CLOSE_WAIT state. Not good as that too will lead to handle starvation sooner or later.</p>
<p>On a hunch, I found out that connecting to the SMTP daemon and then disconnecting without sending QUIT - not letting the server close the connection - was what caused the lingering sockets. Clients disconnecting like that is very common when the server responds with a 5xx, which is exactly what the tempalias daemon was designed to do.</p>
<p>So <a href="http://github.com/pilif/node-smtp/commit/a95d80720af58d5495a2cd9f63c2e5c88e73c3f6">I had to fix that</a> in my fork of the node smtp daemon (the original upstream isn’t interested in daemon functionality and the owner I forked the daemon for doesn’t respond to my pull requests. Hence I’m maintaining my own fork for now).</p>
<p>Further looks at <code>lsof</code> proved that resource consumption is now stable: no lingering connections, no unclosed file handles.</p>
<p>But the error log was still filling up. This time something about <code>removeListener</code> needing a function. Thanks to the callstack I now had in my error log, I quickly hunted that one down <a href="http://github.com/pilif/node-smtp/commit/c9e04139483cd61abd4e276fef02965465c31d43">and fixed it</a> - a very stupid mistake that had gone unnoticed only because the mails I usually deliver are small enough that socket draining is rarely required.</p>
<p>Onwards to the next issue filling the error log: «This deferred has already been resolved».</p>
<p>This comes from the <code>Promise.js</code> library if you <code>emit*()</code> multiple times on the same promise. This time, of course, the callstack was useless (<code>… at &lt;anonymous&gt;</code> - why, thank you), but I was very lucky again in that I tested from home and my mail relay didn’t trust my home IP address, denying relaying with a 500, which immediately triggered the exception.</p>
<p>Now, this one is crazy: When you call <code>.addErrback()</code> on a Promise before calling <code>addCallback()</code>, your callback will be executed no matter if the errback was executed first.</p>
<p>Promise.js does some really interesting things to simulate polymorphism in JavaScript and I really didn’t want to fix up that library as lately, node.js itself seems go to a simpler continuation style using a callback parameter, so sooner or later, I’ll have to patch up the smtp server library anyways to remove Promise.js if I want to adhere to current node style.</p>
<p>So I <a href="http://github.com/pilif/node-smtp/commit/c22d333e344325e0f36fa801b5bf91bba7285439">took the workaround route</a> by just using addCallback() before addErrback() even though the other order feels more natural to me. In addition, <a href="http://github.com/kriszyp/node-promise/issues/issue/1">I reported an issue</a> with the author as this is clearly unexpected behavior.</p>
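<p>The “simpler continuation style using a callback parameter” mentioned above is what node eventually settled on: one callback, the error as its first argument, and exactly one of the two outcomes ever fires. A sketch (the relay rejection scenario is invented for illustration):</p>

```javascript
// error-first continuation style: err first, result second,
// and never both paths for the same operation
function deliver(message, relayAccepts, callback) {
  if (!relayAccepts) {
    callback(new Error('550 relaying denied'));
    return; // never also fire the success path afterwards
  }
  callback(null, 'queued');
}
```

<p>With this convention there is no separate <code>addCallback()</code>/<code>addErrback()</code> registration, so the ordering problem described above cannot occur.</p>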
<p>Now the error log is pretty much silent (minus some ECONNRESET exceptions due to clients sending RST packets in mid-transfer, but I <em>think</em> they are harmless as far as resource consumption goes), so I hope the overall stability of the site has improved a great deal - I’d love to go more than a day without restarting the daemon :-)</p>
Nokia N900 and iPhone headsets?2010-06-30T00:00:00+00:00http://pilif.github.com/2010/06/nokia-n900-and-iphone-headsets<p>For a geek like me, the <a href="http://maemo.nokia.com/n900/">Nokia N900</a> is paradise on earth: It’s full Debian Linux in your bag. It has the best IM integration I have ever seen on any mobile device. It has the best VoIP (Skype, SIP) integration I have ever seen on any mobile device and it has one of the coolest multitasking implementations I’ve seen on any mobile device (the card-based task/application switching is <em>fantastic</em>).</p>
<p>Unfortunately, there’s one thing that prevents me from using it (or many other phones) to replace my iPhone: While the whole world agreed on one way to wire a microphone/headphone combination, Apple thought it wise to do it another way, which leads to Apple compatible headsets not working with the N900.</p>
<p>By not working I don’t just mean “no microphone” or even “no sound”. No. I mean “deafening buzzing on both the left and right channel and headset still not being recognized in the software”.</p>
<p>The problem is that I already own iPhone compatible headsets and that it’s way easier to get good iPhone compatible ones around here. I’m <em>constantly</em> listening to audio on my phone (podcasts, audiobooks). Having to grab the phone out of my bag and unplug the headphones whenever it rings is unacceptable to me, so I need to have a microphone with my headphones.</p>
<p>Just now though, I found <a href="http://www.kvconnection.com/product-p/km-354r3m-r2f.htm">a small adapter</a> which promises to solve that problem, proving once again, that there’s nothing that’s not being sold on the internet.</p>
<p>I ordered one (thankfully one of the international shipping options was less than the adapter itself - something I’m not used to with the smaller stores), so we’ll see how that goes. If it means that I can use a N900 as my one and only device, I’ll be a very happy person indeed.</p>
Do spammers find pleasure in destroying fun stuff?2010-06-28T00:00:00+00:00http://pilif.github.com/2010/06/do-spammers-find-pleasure-in-destroying-fun-stuff<p>Recently, while reading through the log file of the mail relay used by <a href="http://tempalias.com">tempalias</a>, I noticed a disturbing trend: Apparently, SPAM was being sent through tempalias.</p>
<p>I’ve seen various behaviours. One was to strangely create an alias per second to the same target and then delivering email there.</p>
<p>While I completely fail to understand this scheme, the other one was even more disturbing: Bots were registering {max-usage: 1, days: null} aliases and then sending one mail to them - probably to get around RBL checks they’d hit when sending SPAM directly.</p>
<p>Aside of the fact that I do not want to be helping spammers, this also posed a technical issue: node.js head which I was running back when I developed the service tended to leak memory at times forcing me to restart the service here and then.</p>
<p>Now the additional huge load created by the bots forced me to do that way more often than I wanted to. Of course, the old code didn’t run on current node any more.</p>
<p>Hence I had to take tempalias down for maintenance.</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2010/06/tempalias-down.png"><img class="aligncenter size-medium wp-image-747" title="tempalias-down" src="http://www.gnegg.ch/wp-content/uploads/2010/06/tempalias-down-300x188.png" alt="" width="300" height="188" /></a></p>
<p>A quick look at <a href="http://github.com/pilif/tempalias/commits/master">my commits on GitHub</a> will show you what I have done:</p>
<ul>
<li>the tempalias SMTP daemon now does RBL checks and immediately disconnects if the connected host is listed.</li>
<li>the tempalias HTTP daemon also does RBL checks on alias creation, but it doesn't check the various DUL lists as the most likely alias creators are most certainly listed in a DUL</li>
<li>Per IP, aliases can only be generated every 30 seconds.</li>
</ul>
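<p>The two mechanisms above can be sketched in a few lines (simplified, in-memory versions - the real service would need persistence and actual DNS lookups):</p>

```javascript
// 1) standard DNSBL query-name construction: reversed octets + blocklist zone;
//    an A-record answer for this name means the IP is listed
function rblName(ip, zone) {
  return ip.split('.').reverse().join('.') + '.' + zone;
}

// 2) per-IP throttle: allow one alias creation per 30-second window
var lastSeen = {};
var WINDOW_MS = 30 * 1000;
function allowCreation(ip, now) {
  if (lastSeen[ip] !== undefined && now - lastSeen[ip] < WINDOW_MS) {
    return false; // too soon since this IP's last alias
  }
  lastSeen[ip] = now;
  return true;
}
```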
<p>This should be some help. In addition, right now, the mail relay is configured to skip sender-checks and <a href="http://marc.merlins.org/linux/exim/sa.html">sa-exim</a> scans (Spam Assassin on SMTP time as to reject spam before even accepting it into the system) for hosts where relaying is allowed. I intend to change that so that sa-exim and sender verify is done regardless if the connecting host is the tempalias proxy.</p>
<p>Looking at the mail log, I’ve seen the spam count drop to near-zero, so I’m happy, but I know that this is just a temporary victory. Spammers will find ways around the current protection and I’ll have to think of something else (I do have some options, but I don’t want to pre-announce them here for obvious reasons).</p>
<p>On a more happy note: During maintenance I also fixed a few issues with the Bookmarklet which should now do a better job at not coloring all text fields green eventually and at using the target site’s jQuery if available.</p>
Windows 2008 / NAT / Direct connections2010-06-23T00:00:00+00:00http://pilif.github.com/2010/06/windows-2008-nat-direct-connections<p>Yesterday I ran into an interesting problem with Windows 2008’s implementation of NAT (don’t ask - this was the best solution - I certainly don’t recommend using Windows for this purpose).</p>
<p>Whenever I enabled the NAT service, I was unable to reliably connect to the machine via remote desktop or even any other service that machine was offering. Packets sent to the machine were dropped as if a firewall was in between, but it wasn’t and the Windows firewall was configured to allow remote desktop connections.</p>
<p>Strangely, <em>sometimes</em> and from <em>some hosts</em> I was able to make a connection, but not consistently.</p>
<p>After some digging, this turned out to be a problem with the interface metrics: the server tried to respond over the interface with the private address, which wasn’t routed.</p>
<p>So if you are in the same boat, configure the interface metrics of both interfaces manually. Set the metric of the private interface to a high value and the metrics of the public (routed) one to a low value.</p>
<p>At least for me, this instantly fixed the problem.</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2010/06/interface-metric.png"><img class="aligncenter size-medium wp-image-739" title="interface-metric" src="http://www.gnegg.ch/wp-content/uploads/2010/06/interface-metric-253x300.png" alt="" width="253" height="300" /></a></p>
Google Apps - Provisioning - Two-Legged OAuth2010-06-10T00:00:00+00:00http://pilif.github.com/2010/06/google-apps-provisioning-two-legged-oauth<p>Our company uses Google Apps premium for Email and shared documents, but in order to have more freedom in email aliases, in order to have more control over email routing and finally, because there are a couple of local parts we use to direct mail to some applications, all our mail, even though it’s created in Google Apps and finally ends up in Google Apps, goes via a central mail relay we are running ourselves (well. I’m running it).</p>
<p>Google Apps premium allows you to do that and it’s a really cool feature.</p>
<p>One additional thing I’m doing on that central relay is keeping a backup of all mail that comes from or goes to Google. The reason: while I trust them not to lose my data, there are stories of people losing their accounts to Google’s anti-spam automation. This is especially bad as there usually is nobody to appeal to.</p>
<p>So I deemed it imperative that we store a backup of every message so we can move away from google if the need to do so arises.</p>
<p>Of course, that means our relay needs to know which local parts are valid for the Google Apps domain - after all, I don’t want to store mail that Google would later bounce. And I’d love to bounce directly without relaying the mail unconditionally, which is another reason to know the list of users.</p>
<p>Google provides their provisioning API to do that and using the GData python packages, you can easily access that data. In theory.</p>
<p>Up until very recently, the big problem was that the provisioning API didn’t support OAuth. That meant that my little script that retrieves the local parts had to contain the password of an administrator - something that really bugged me, as it meant I either store my password in the script or can’t run the script from cron.</p>
<p>With the Google Apps Marketplace, they fixed that somewhat, but it still requires a strange dance:</p>
<p>When you visit the OAuth client configuration (https://www.google.com/a/cpanel/YOURDOMAIN/ManageOauthClients), it lists your domain with the note “This client has access to all APIs.”.</p>
<p>This is totally not true though as Google’s definition of “all” apparently doesn’t include “Provisioning” :-)</p>
<p>To make two-legged OAuth work for the provisioning API, you have to explicitly list the feeds. In my case, this was Users and Groups:</p>
<p>Under “Client Name”, add your domain again (“example.com”) and under “One or More API Scopes”, add the two feeds like this: “https://apps-apis.google.com/a/feeds/group/#readonly,https://apps-apis.google.com/a/feeds/user/#readonly”</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2010/06/oauth-google.png"><img class="aligncenter size-medium wp-image-732" title="oauth-google" src="http://www.gnegg.ch/wp-content/uploads/2010/06/oauth-google-300x101.png" alt="" width="300" height="101" /></a>This will enable two-legged OAuth access to the user and group lists which is what I need in my little script:</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">import</span> <span class="nn">gdata.apps.service</span>
<span class="kn">import</span> <span class="nn">gdata.apps.groups.service</span>
<span class="n">consumer_key</span> <span class="o">=</span> <span class="s">'YOUR.DOMAIN'</span>
<span class="n">consumer_secret</span> <span class="o">=</span> <span class="s">'secret'</span> <span class="c">#check advanced / OAuth in you control panel</span>
<span class="n">sig_method</span> <span class="o">=</span> <span class="n">gdata</span><span class="o">.</span><span class="n">auth</span><span class="o">.</span><span class="n">OAuthSignatureMethod</span><span class="o">.</span><span class="n">HMAC_SHA1</span>
<span class="n">service</span> <span class="o">=</span> <span class="n">gdata</span><span class="o">.</span><span class="n">apps</span><span class="o">.</span><span class="n">service</span><span class="o">.</span><span class="n">AppsService</span><span class="p">(</span><span class="n">domain</span><span class="o">=</span><span class="n">consumer_key</span><span class="p">)</span>
<span class="n">service</span><span class="o">.</span><span class="n">SetOAuthInputParameters</span><span class="p">(</span><span class="n">sig_method</span><span class="p">,</span> <span class="n">consumer_key</span><span class="p">,</span> <span class="n">consumer_secret</span><span class="o">=</span><span class="n">consumer_secret</span><span class="p">,</span> <span class="n">two_legged_oauth</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="n">res</span> <span class="o">=</span> <span class="n">service</span><span class="o">.</span><span class="n">RetrieveAllUsers</span><span class="p">()</span>
<span class="k">for</span> <span class="n">entry</span> <span class="ow">in</span> <span class="n">res</span><span class="o">.</span><span class="n">entry</span><span class="p">:</span>
<span class="k">print</span> <span class="n">entry</span><span class="o">.</span><span class="n">login</span><span class="o">.</span><span class="n">user_name</span>
<span class="n">service</span> <span class="o">=</span> <span class="n">gdata</span><span class="o">.</span><span class="n">apps</span><span class="o">.</span><span class="n">groups</span><span class="o">.</span><span class="n">service</span><span class="o">.</span><span class="n">GroupsService</span><span class="p">(</span><span class="n">domain</span><span class="o">=</span><span class="n">consumer_key</span><span class="p">)</span>
<span class="n">service</span><span class="o">.</span><span class="n">SetOAuthInputParameters</span><span class="p">(</span><span class="n">sig_method</span><span class="p">,</span> <span class="n">consumer_key</span><span class="p">,</span> <span class="n">consumer_secret</span><span class="o">=</span><span class="n">consumer_secret</span><span class="p">,</span> <span class="n">two_legged_oauth</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="n">res</span> <span class="o">=</span> <span class="n">service</span><span class="o">.</span><span class="n">RetrieveAllGroups</span><span class="p">()</span>
<span class="k">for</span> <span class="n">entry</span> <span class="ow">in</span> <span class="n">res</span><span class="p">:</span>
<span class="k">print</span> <span class="n">entry</span><span class="p">[</span><span class="s">'groupName'</span><span class="p">]</span></code></pre></figure>
tempalias - validity limits2010-05-26T00:00:00+00:00http://pilif.github.com/2010/05/tempalias-validity-limits<p>I’ve just pushed a small update to tempalias.com that imposes some (generous) limits to the values you can provide for the validity:</p>
<ul>
<li>the maximum amount of days an alias can be valid is now <strong>60 days</strong>.</li>
<li>the maximum amount of messages that can be sent to an aliases is now set to <strong>100 messages</strong>.</li>
</ul>
<p>I realized that there might be some potential for abusing tempalias.com if the aliases have a practically unlimited duration. Besides, then they wouldn’t be <strong>temp</strong>aliases any more. Right?</p>
<p>Already generated aliases with longer durations stay valid - true to the spirit of not looking into the data my users provided me with, I’m not going to check the existing aliases.</p>
tempalias.com - now with bookmarklet2010-05-24T00:00:00+00:00http://pilif.github.com/2010/05/tempalias-com-now-with-bookmarklet<p>let’s say you want to create one of these temporary aliases, but you don’t actually want to leave the page you are on.</p>
<p>Good news is: Now you can.</p>
<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="505" height="467" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=11995145&server=vimeo.com&show_title=0&show_byline=0&show_portrait=0&color=00ADEF&fullscreen=1" /><embed type="application/x-shockwave-flash" width="505" height="467" src="http://vimeo.com/moogaloop.swf?clip_id=11995145&server=vimeo.com&show_title=0&show_byline=0&show_portrait=0&color=00ADEF&fullscreen=1" allowscriptaccess="always" allowfullscreen="true" /></embed></object>
<ol>
<li>Visit <a href="http://tempalias.com">tempalias.com</a> once.</li>
<li>Create any alias you want the bookmarklet to create for you in the future</li>
<li>In the confirmation screen, you will be offered the bookmarklet to drag to your bookmarks bar.</li>
</ol>
<p>Now, whenever you are on a site you want to create a temporary alias for, just click that bookmarklet, hover over the email field and click it. The alias will be generated and filled into that email field.</p>
<p>If you are interested in how this was made, read the <a href="http://www.gnegg.ch/2010/05/tempalias-com-creating-the-bookmarklet/">next entry of my development diary</a>.</p>
<p>If you like to find out more about tempalias and more projects of mine, you should follow me on twitter <a href="http://twitter.com/pilif">here</a>.</p>
tempalias.com - creating the bookmarklet2010-05-24T00:00:00+00:00http://pilif.github.com/2010/05/tempalias-com-creating-the-bookmarklet<p>Now that the bookmarklet feature is finished, let me take a few minutes to reflect on its creation, in the spirit of continuing the development diary.</p>
<p>The reason for the long silence after the launch is, believe it or not, the weather: Over the time I made the initial tempalias service, I began to really enjoy taking my 17inch MacBook Pro outside on the balcony and write code from there. In fact, I enjoyed it so much that I really wanted to continue that tradition when doing more work on the site.</p>
<p>Unfortunately from May first until May 21st it was raining constantly which made coding on the balcony kind of no-fun to do.</p>
<p>Now the weather was great and I could finish what I began way earlier.</p>
<p>So. How does one create a bookmarklet?</p>
<p>I didn’t know much either, but in the end, the essence of a bookmarklet is JavaScript code that gets executed in the context of the page you are on when you are executing it. So that’s something to work with.</p>
<p>Of course, you don’t want to add <strong>all</strong> the code you need for your magic to work into that link target - that would be unmaintainable and there’s some risk of breakage once the link gets too big - who knows at what size of the script browsers begin cutting off the code.</p>
<p>So you basically do nothing but create a script tag sourcing the real script. This is what I’m doing too - the non-minified version of that code is in <a href="http://github.com/pilif/tempalias/blob/master/util/bookmarklet_launcher_test.js">util/bookmarklet_launcher_test.js</a>.</p>
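<p>Such a launcher stub boils down to a few lines. This is a hypothetical sketch - makeBookmarklet and the config shape are mine for illustration, not the actual util code:</p>

```javascript
// Builds a javascript: URL whose body injects a <script> tag pointing at
// the real, maintainable script. The bookmark itself stays tiny no matter
// how big the sourced script grows.
function makeBookmarklet(scriptUrl, config) {
  var body =
    'var c=' + JSON.stringify(config) + ';' +          // the configuration
    'var s=document.createElement("script");' +
    's.src=' + JSON.stringify(scriptUrl) + ';' +       // source the real code
    'document.getElementsByTagName("head")[0].appendChild(s);';
  return 'javascript:(function(){' + body + '})()';
}

var url = makeBookmarklet('http://tempalias.com/bookmarklet.js',
                          { host: 'tempalias.com' });
```

<p>The resulting string is what goes into the bookmark’s link target.</p>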
<p>Looking at that file, you’ll notice that the bookmarklet itself is configurable using that c variable (keeping the names short to keep the code as short as possible). The configuration is done on the results page that is shown once the alias has been generated (<a href="http://github.com/pilif/tempalias/blob/master/public/templates/result.template#L32">public/templates/result.template</a>).</p>
<p>Why the host name? Because the script that is injected (<a href="http://github.com/pilif/tempalias/blob/master/public/bookmarklet.js">public/bookmarklet.js</a>) doesn’t know it - when it’s sourced, window.location would still point to the site it was sourced on. The script is static code, so the server can’t inject the correct host name either - in fact, all of tempalias is static code aside of that one RESTful endpoint (/aliases).</p>
<p>This is a blessing as it keeps the code clean and a curse as it makes stuff harder than usual at places - this time it’s just the passing around of the host name (which I don’t want to hard-code for easier deployment and development).</p>
<p>The next thing of note is how the heavy lifting script is doing its work: Because the DOM manipulation and event-hooking up needed to make this work is too hard for my patience, I decided that I wanted to use jQuery.</p>
<p>But the script is running in the context of the target site (where the form field should be filled out), so we neither can be sure that jQuery is available nor should we blindly load it.</p>
<p>So the script is really careful:</p>
<ul>
<li>if jQuery is available and of version 1.4.2, that one is used.</li>
<li>If jQuery is available, but not of version 1.4.2, we load our own (well - the official one from Google's CDN) and use that, while restoring the old jQuery to the site.</li>
<li>If jQuery is not available, we load our own, restoring window.$ if it pointed to something beforehand.</li>
</ul>
<p>This procedure would never work if jQuery wasn’t as careful as it is not to pollute the global namespace - juggling two values (window.$ and window.jQuery) is possible - anything more is breakage waiting to happen.</p>
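<p>The save-and-restore dance behind those three cases can be sketched as follows. This is a simplified, synchronous illustration - real script loading is asynchronous, and win stands in for the browser’s window object so the idea is self-contained:</p>

```javascript
// Remember whatever $ and jQuery the page had, "load" our own copy, then
// hand the page its globals back while we keep a private reference.
function withPrivateJQuery(win, loadOurs) {
  var saved$ = win.$;           // whatever the page bound to $
  var savedJQ = win.jQuery;     // and to jQuery
  var ours = loadOurs(win);     // stand-in for sourcing jQuery from the CDN
  win.$ = saved$;               // restore the page's globals
  win.jQuery = savedJQ;
  return ours;                  // our private jQuery reference
}
```

<p>Juggling exactly these two globals is what makes the trick possible at all.</p>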
<p>The last thing we need to take care of, finally, is the fact that the bookmarklet is now running in the context of the target site and, hence, cannot do AJAX requests to tempalias.com any more. This is what JSONp was invented for and I had to slightly modify the node backend to make JSONp work for the bookmarklet script (this would be commit <a href="http://github.com/pilif/tempalias/commit/1a6e8c0faca7826b8a49f2ba99faa3d3702f10bd">1a6e8c</a> - not something I’m proud of - tempalias_http.js needs some modularization now).</p>
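<p>On the server side, JSONp support boils down to wrapping the JSON body in a caller-supplied function name. A minimal sketch of the idea - not the actual tempalias_http.js code:</p>

```javascript
// Returns plain JSON for same-origin AJAX, or a JSONp script body when the
// request carried a callback name (e.g. from ?callback=...).
function renderResponse(data, callbackName) {
  var json = JSON.stringify(data);
  if (!callbackName) return json;                  // normal AJAX response
  if (!/^[\w.$]+$/.test(callbackName)) {           // basic sanity check
    throw new Error('invalid callback name');
  }
  return callbackName + '(' + json + ');';         // JSONp for the bookmarklet
}
```

<p>The client then consumes this by injecting a script tag, which is exempt from the cross-domain restrictions that block XMLHttpRequest.</p>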
<p>All in all, this was an interesting experience between cross domain restrictions and trying to be a good citizen on the target page. Also I’m sure the new knowledge will be of use in the future for similar projects.</p>
<p>Unfortunately, the weather is getting bad again, so the next few features will, again, have to wait. Ideas for the future are:</p>
<ul>
<li>use tempalias.com as MX and CNAME so you can create your own aliases for your own domain</li>
<li>create an iphone / android client app for the REST API (/aliases)</li>
<li>daemonize the main code on its own without the help of some shell magic</li>
<li>maybe find a way to still hook some minimal dynamic content generation into paperboy.</li>
</ul>
tempalias.com - bookmarklet work2010-04-26T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-bookmarklet-work<p>While the user experience on tempalias.com is already really streamlined compared to other services that encode the expiration settings (and sometimes even the target) into the email address (and are thus exploitable and in some cases require you to have an account with them), it loses in one respect: when you have to register on some site, you have to open the tempalias.com website in its own window and then manually create the alias.</p>
<p>Wouldn’t it be nice if this worked without having to visit the site?</p>
<p>This video is showing how I want this to work and how the <a href="http://github.com/pilif/tempalias/tree/bookmarklet">bookmarklet branch</a> on the github project page is already working:</p>
<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="505" height="410" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=11193192&server=vimeo.com&show_title=1&show_byline=0&show_portrait=0&color=00ADEF&fullscreen=1" /><embed type="application/x-shockwave-flash" width="505" height="410" src="http://vimeo.com/moogaloop.swf?clip_id=11193192&server=vimeo.com&show_title=1&show_byline=0&show_portrait=0&color=00ADEF&fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object>
<p>The workflow will be that you create your first (and probably only) alias manually. In the confirmation screen, you will be presented with a bookmarklet that you can drag to your bookmark bar and that will generate more aliases like the one just generated. This works independently of cookies or user accounts, so it would even work across browsers if you are synchronizing bookmarks between machines.</p>
<p>The actual bookmarklet is just a very small stub that will contain all the configuration for alias creation (so the actual bookmarklet will be the minified version of <a href="http://github.com/pilif/tempalias/blob/bookmarklet/util/bookmarklet_launcher_test.js">this file here</a>). The bookmarklet, when executed will add a script tag to the page that actually does the heavy lifting.</p>
<p>The <a href="http://github.com/pilif/tempalias/blob/bookmarklet/public/bookmarklet.js">script</a> that’s running in the video above tries really hard to be a good citizen as it’s run in the context of a third party webpage beyond my control:</p>
<ul>
<li>it adds just one function, window.$__tempalias_com, to the global namespace, so it doesn't reload the whole script if you click the bookmark button multiple times.</li>
<li>while it depends on jQuery (I'm not doing this in pure DOM), it tries really hard to be a good citizen:
<ul>
<li>if jQuery 1.4.2 is already used on the site, it uses that.</li>
<li>if any other jQuery version is installed, it loads 1.4.2 but restores window.jQuery to what it was before.</li>
<li>if no jQuery is installed, it loads 1.4.2</li>
<li>In all cases, it calls jQuery.noConflict if $ is bound to anything.</li>
</ul>
</li>
<li>All DOM manipulation uses really unique class names and event namespaces</li>
</ul>
<p>While implementing, I noticed that you can’t unbind live events with just their namespace, so $().die(‘.ta’) didn’t work and I had to provide all the events I’m live-binding to. I’m using live here because the bubbling-up delegation model works better in a case where there might be many matching elements on any particular page.</p>
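<p>A tiny hypothetical helper (the names are mine, not from the tempalias source) keeps the list of live-bound events in one place so the live() and die() calls can’t drift apart:</p>

```javascript
// Expands a list of event names into a single namespaced event string,
// e.g. ['mouseover', 'click'] + 'ta' -> 'mouseover.ta click.ta'.
function nsEvents(names, ns) {
  return names.map(function (n) { return n + '.' + ns; }).join(' ');
}

// usage with jQuery 1.4-era live/die (illustrative):
//   $('input').live(nsEvents(['mouseover', 'click'], 'ta'), handler);
//   $('input').die(nsEvents(['mouseover', 'click'], 'ta'));
```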
<p>Now the next step will be to add some design to the whole thing and then it can go live.</p>
tempalias.com - Public launch2010-04-25T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-public-launch<p>After announcing <a href="http://tempalias.com">tempalias.com</a> here on my blog and sleeping over it, hoping the live server wouldn’t die overnight, last Friday I first implemented a <a href="http://github.com/pilif/tempalias/blob/master/garbage_collect.js">garbage collection facility</a> to prune expired aliases and then publicly announced tempalias.com on both <a href="http://news.ycombinator.com/item?id=1287874">Hacker News</a> and <a href="http://www.reddit.com/r/programming/comments/bv41i/ask_reddit_please_review_my_nodejs_based_fun/">Reddit</a>.</p>
<p>The echo was overall positive and in the first two hours after the announcement, I fixed a lot of small things based upon suggestions of people posting to my announcement:</p>
<ul>
<li>I now serve a <a href="http://tempalias.com/images/shortcut.png">shortcut icon</a>.</li>
<li>While I'm expiring aliases, <a href="http://github.com/pilif/tempalias/commit/6caa488fef0611f005ccc3fab028862db82eace8">I'm also making sure that once used aliases are never reused</a>.</li>
<li>Node's HTTP parser throws under some circumstances and it's impossible to catch these errors, which is why I had to create a handler for the uncaughtException event to keep the server up and running.</li>
</ul>
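<p>That last handler is essentially a one-liner. A hedged sketch - the logging detail is illustrative, not the actual tempalias code:</p>

```javascript
// Last-resort safety net: without this, an uncatchable throw from node's
// HTTP parser would take the whole server process down.
process.on('uncaughtException', function (err) {
  console.error('uncaught exception, keeping the server alive:',
                err.stack || err);
});
```

<p>It is a blunt instrument - the state of whatever request triggered the error is lost - but it beats an unattended server staying dead.</p>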
<p>During the first day after its announcement, I had 4700 visits and on the second day it was still 1403, which might be some indication of the service being used by some people. As of right now, there are 652 valid aliases in redis.</p>
<p>During peak time, I got around 20 concurrent requests which the server handled easily (load of 0.01).</p>
<p>What was most interesting to me was that the announcement also generated quite a bit of traffic (3000 visits, so 75% conversion from the service to the blog which is nice) on this blog here and what I liked even more was the fact that the various entries in my development diary were read and sometimes commented upon, which in turn led to, drumroll please, 3 more twitter followers.</p>
<p>The project on github now has 22 watchers and on release day has seen 1496 page views according to their stats.</p>
<p>One question I was asked a lot is why I was writing an SMTP proxy instead of just hooking into an existing MTA. In retrospect, I was a bit unclear when I stated in the <a href="http://www.gnegg.ch/2010/04/tempalias-com-development-diary/">first entry of the diary</a>:</p>
<blockquote>Of course this old solution had one big problem: It required a mail server on the receiving end and it required you as a possible user to hook the script into that mailserver (also, I never managed to do just that with exim before losing interest, but by now, I would probably know how to do it).</blockquote>
<p>My reasoning behind writing a proxy was the fact that I wanted <strong>you</strong>, my dear reader to fetch the source code and experiment with it or even host your own clone of tempalias.com. You should be able to do so with minimal effort, hence the solution should be as self-contained as possible without requiring a lot of infrastructure. Relying on a specific mail server would have severely limited the size of the audience, especially as the mail server I would have written the plugin for was to be Exim which isn’t that widely used these days.</p>
<p>Then, there’s another reason: As a long-time mail server administrator, I know that it is <em>imperative</em> to fork as little as possible during mail delivery. Hooking this into an existing mail server would have meant the server forking for each incoming email, only to ultimately reject it in most cases, as tempalias is much more about rejecting email than it is about delivering it.</p>
<p>No. Using the awesome performance of Node.js to reject tons and tons of email, relying on an SMTP server as a smarthost only when needed, felt more robust and easier to access for my readers. Hence I went the SMTP proxy route.</p>
<p>So. Am I happy with the launch?</p>
<p>Yes. I was able to make a service that is useful to some people. I was able to learn node.js from the inside out. I got to know some really bright developers in the process and I was able to contribute to open source projects.</p>
<p>On a personal level though, I would have hoped that spending 44 hours developing a useful (and good looking) web service in a quite unknown but really sexy programming environment, documenting the steps in the process, would have yielded a bit more social interaction with the community than a whole three twitter followers.</p>
<p>Maybe I should have stated my goal more clearly:</p>
<p style="text-align: center;">You should follow me on twitter <a href="http://twitter.com/pilif">here</a>.</p>
<p>(this was a friendly nod to an <a href="http://dustincurtis.com/you_should_follow_me_on_twitter.html">article of the same name</a> by Dustin Curtis, a person obviously way better in marketing than I will ever be)</p>
<p>Next time: Bookmarklet fun!</p>
tempalias.com - debriefing2010-04-23T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-debriefing<p>This is the last part of the development diary I was keeping about the creation of a new web service in node.js. You can <a href="/2010/04/tempalias-com-learning-css/">read the previous installment here</a>.</p>
<p>It’s done.</p>
<p>The layout is finished, the last edges that were too rough to push the thing live are smoothed. <a href="http://tempalias.com">tempalias.com</a> is live. After coming really close to finishing the thing last night (hence the lack of a posting here - I was too tired when I had to quit at 2:30am), today I could complete the results page and add the needed finishing touches (like a really cool way of catching enter to proceed from the first to the last form field - my favorite hidden feature).</p>
<p>I guess it’s time for a little debriefing:</p>
<p>All in all, the project took a time span of 17 days to implement from start to finish. I did this after work, mostly during weekdays and Sundays, so it’s actually 11 days on which work was going on (I was also sick for two days). Each day I worked around 4 hours, so all in all, this took around 44 hours to implement.</p>
<p>A significant part of this time went into modifications of third party libraries, while I tried to contact the initial authors to get my changes merged upstream:</p>
<ul>
<li>The author of node-smtp isn't interested in the SMTP daemon functionality (that wasn't there when I started and is now completed)</li>
<li>The author of redis-node-client didn't like my patch, but we had a really fruitful discussion and redis-node-client got a lot better at handling dropped connections in the process.</li>
<li>The author of node-paperboy has merged my patch for a nasty issue and even <a href="http://twitter.com/felixge/status/12645935137">tweeted about it</a> (THANKS!)</li>
</ul>
<p>Before I continue, I want to say a huge thanks to <a href="http://github.com/fictorial">fictorial</a> on github for the awesome discussion I was allowed to have with him about node-redis-client’s handling of dropped connections. I’ve enjoyed every word I was typing and reading.</p>
<p>But back to the project.</p>
<p>Non-third-party code consists of just 1624 lines of code (using wc -l, so not an accurate measurement). This doesn’t factor in the huge amount of changes I made to <a href="http://github.com/pilif/node-smtp">my fork of node-smtp</a>, the daemon part of which was basically non-existent.</p>
<p>Overall, the learnings I made:</p>
<ul>
<li>git and github are awesome. I knew that beforehand, but this just cemented my opinion</li>
<li>node.js and friends are still in their infancy. While node removes previously published API on a nearly daily basis (it's mostly bug-free though), none of the third-party libraries I am using were sufficiently bug-free to use them without change.</li>
<li>Asynchronous programming can be fun if you have closures at your disposal</li>
<li>Asynchronous programming can be difficult once the nesting gets deep enough</li>
<li>Making any variable not declared with var global is the worst design decision I have ever seen in my life (especially in node where we are adding concurrency to the mix)</li>
<li>While it's possible (and IMHO preferable) to have a website done in just RESTful webservices and a static/javascript frontend, sometimes just a tiny little bit of HTML generation could be useful. Still. Everything works without emitting even a single line of dynamically generated HTML code.</li>
<li>Node is crazy fast.</li>
</ul>
<p>Also, I want to take the opportunity and say huge thanks to:</p>
<ul>
<li>the guys behind <a href="http://nodejs.org">node.js</a>. I would have had to do this in PHP or even rails (which is even less fitting than PHP as it provides so much functionality around generating dynamic HTML and so little around pure JSON based web services) without you guys!</li>
<li>Richard for his awesome layout</li>
<li><a href="http://github.com/fictorial">fictorial</a> for redis-node-client and for the awesome discussion I was having with him.</li>
<li><a href="http://github.com/kennethkalmer">kennethkalmer</a> for his work on node-smtp even though it was still incomplete - you lead me on the right tracks how to write an SMTP daemon. Thank you!</li>
<li><a href="http://twitter.com/felixge">@felixge</a> for node-paperboy - static file serving done right</li>
<li>The guys behind <a href="http://code.quirkey.com/sammy/">sammy</a> - writing fully JS based AJAX apps has never been easier and more fun.</li>
</ul>
<p>Thank you all!</p>
<p>The next step will be marketing: Seeing this is built on node.js and is an actually usable project - way beyond the usual little experiments - I hope to gather some interest in the Hacker community. Seeing it also provides a real-world use, I’ll even go and try to submit news about the project on more general outlets. And of course on the Security Now! feedback page as this is inspired by their episode 242.</p>
Announcing tempalias.com2010-04-23T00:00:00+00:00http://pilif.github.com/2010/04/announcing-tempalias-com<p>Have you ever been in the situation where you had to provide a web service with an email address to get that confirmation email, full well knowing that you will not only get that, but also “important announcements” and “even more important product information”?</p>
<p>Wouldn’t it be nice if they could just send you the confirmation link but nothing more?</p>
<p>That’s possible now!</p>
<p>Head over to</p>
<p style="text-align: center; font-size: 32px;"><a href="http://tempalias.com">tempalias.com</a></p>
<p>type the email address that should receive the confirmation mail, specify how many mails you want to receive and for how many days. Then hit the button and - boom - there’s your unique email address that you can provide to the service. Once the usage or time limit has been met, no more mail to that alias will be accepted.</p>
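<p>Under the hood, the form simply POSTs JSON to the RESTful /aliases endpoint, so you can script alias creation too. A minimal sketch of the payload - field names as used by the service’s API:</p>

```javascript
// Builds the JSON body for POST /aliases
// ("target", "days", "max-usage" are the endpoint's field names).
function makeAliasPayload(target, days, maxUsage) {
  return JSON.stringify({ target: target, days: days, 'max-usage': maxUsage });
}

// Send it e.g. with node's http.request() against
//   { host: 'tempalias.com', method: 'POST', path: '/aliases',
//     headers: { 'Content-Type': 'application/json' } }
```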
<p>tempalias.com is a fun-project of mine and also a learning experience. tempalias is written in <a href="http://nodejs.org">node.js</a>, a framework I had no prior experience with (but a whole lot of curiosity for). This is why I not only created the site, but I also documented my steps along the way. Here are the links to the various postings in chronological order (oldest first - I bolded the ones that contain useful information above just reporting on progress or bugs):</p>
<ol>
<li><a href="/2010/04/tempalias-com-development-diary/">development diary</a></li>
<li><a href="/2010/04/tempalias-com-another-day/"><strong>another day</strong></a></li>
<li><a href="/2010/04/tempalias-com-persistence/">persistence</a></li>
<li><a href="/2010/04/tempalias-com-smtp-and-design/">SMTP and design</a></li>
<li><a href="/2010/04/tempalias-com-config-file-smtp-cleanup-beginnings-of-a-server/"><strong>config file, SMTP cleanup, beginnings of a server</strong></a></li>
<li><a href="/2010/04/tempalias-com-the-cake-is-a-lie/">the cake is a lie</a></li>
<li><a href="/2010/04/tempalias-com-rewrites/">rewrites</a></li>
<li><a href="/2010/04/tempalias-com-sysadmin-work/"><strong>sysadmin work</strong></a></li>
<li><a href="/2010/04/tempalias-com-learning-css/">learning CSS</a></li>
<li><strong><a href="/2010/04/tempalias-com-debriefing/">debriefing</a></strong></li>
</ol>
<p>If you want to get in touch with me to report bugs, or ask questions, to rant or maybe to send a patch to, please send me an email to y8b3@tempalias.com - erm… no. just kidding (you can try sending an email there - it’s good for one day and one email - so good luck). Send it to <a href="mailto:pilif@gnegg.ch">pilif@gnegg.ch</a> or contact me on twitter <a href="http://twitter.com/pilif">@pilif</a>.</p>
tempalias.com - learning CSS2010-04-21T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-learning-css<p>This is one more episode in the development diary outlining the creation of a node.js based web service. You can <a href="/2010/04/tempalias-com-sysadmin-work/">read the previous installment here</a>.</p>
<p>Today I could finally start with creating the HTML and CSS that will become the web frontend of the tempalias.com site. On Sunday, when I initially wanted to start, I was <a href="/2010/04/tempalias-com-rewrites/">hindered</a> by strangeness and overengineering of the express framework and yesterday it was general breakage in the redis client library for node.</p>
<p>But today I had no excuse and I started doing the HTML and CSS work with the intention of converting Richard’s awesome Photoshop designs into real-world HTML.</p>
<p>My main issue with this task: <strong>I plain don’t know CSS</strong>. Of course I know the syntax and how it should work in general, but there’s a huge difference between being able to read the syntax and writing basic code and actually being able to understand all the minor details and tricks that make it possible to achieve what you want in a reasonable time frame.</p>
<p>In contrast to real programming languages where you are usually developing for one target (sure - there might be platform differences, but even nowadays, while learning, you can get away with restricting yourself to one platform), HTML and CSS provide the additional difficulty that you have to develop for multiple moving targets, all of which contain different subtle bugs.</p>
<p>Combine that with the fact that more than basic CSS definitely isn’t part of my daily work and you’ll understand why I was struggling.</p>
<p>In the end I seem to have gotten into the thinking that’s needed to make elements appear in the general vicinity of where you suppose they should end up. I even got used to the IMHO very non-intuitive way of having margin and border be part of the element’s dimensions in addition to their padding, so all the pixel calculations fell into place and the whole thing looks more or less acceptable.</p>
<p>Until you begin changing the text size of course. But there’s so much manual pixel painting involved in the various backgrounds (gradient support isn’t quite there yet - even in browsers) that it’s probably impossible to create a really well-scaling layout anyways, so what I currently have is what I’m content with.</p>
<p>You want to have a peek?</p>
<p>I didn’t upload anything to the public site yet because there’s no functionality and I wouldn’t want to confuse users reaching the site by accident, so a screenshot will have to do. Or you clone <a href="http://github.com/pilif/tempalias">my repository on github</a> and run it yourself.</p>
<p>Here it is:</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2010/04/Screen-shot-2010-04-21-at-00.25.40.png"><img class="aligncenter size-medium wp-image-700" title="tempalias HTML running in Chrome" src="http://www.gnegg.ch/wp-content/uploads/2010/04/Screen-shot-2010-04-21-at-00.25.40-297x300.png" alt="Screenshot of tempalias HTML running in Chrome" width="297" height="300" /></a></p>
<p>The really tricky thing and conversely the thing I’m really the most proud of is the alignment of both the spy and the reflection of the main page content. You witness some really creative margin- and background positioning at work there. Oh. And I just don’t want to know in what glorious ways the non-browser IE butchers this layout.</p>
<p>I. just. plain. don’t. care. This is supposed to be a FUNproject.</p>
<p>Tomorrow: Hooking in Sammy to add links to all the static pages.</p>
<p>It looks now as if we are going live this week :-)</p>
tempalias.com - sysadmin work2010-04-20T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-sysadmin-work<p>This is yet another episode in the development diary behind the creation of a new web service. <a href="/2010/04/tempalias-com-rewrites/">Read the previous installment here</a>.</p>
<p>Now that I <a href="/2010/04/tempalias-com-the-cake-is-a-lie/">made the SMTP proxy do its thing</a> and that I’m <a href="/2010/04/tempalias-com-rewrites/">able to serve out static files</a>, I thought it was time to actually set up the future production environment so that I can give it some more real-world testing and check the general stability of the solution when exposed to the internet.</p>
<p>So I went ahead and set up a new VM using Ubuntu Lucid beta, running the latest (HEAD) redis and node, and finally made it run the tempalias daemons (which I consolidated into one process opening the SMTP and HTTP ports at the same time for easier handling).</p>
<p>I always knew that deployment will be something of a problem to tackle. SMTP needs to run on port 25 (if you intend to be running on the machine listed as MX) and HTTP should run on port 80.</p>
<p>Both being sub 1024 in consequence require root privileges to listen on and I definitely didn’t want to run the first ever node.js code I’ve written to run with root privileges (even though it’s a VM - I don’t like to hand out free root on a machine that’s connected to the internet).</p>
<p>So additional infrastructure was needed and here’s what I came up with:</p>
<p>The tempalias web server listens only on localhost on port 8080. A reverse <a href="http://nginx.org/">nginx</a> proxy listens on public port 80 and forwards the requests (all of them - node is easily fast enough to serve the static content). This solves another issue I had which is HTTP content compression: Providing compression (Content-Encoding: gzip) is imperative these days and yet not something I want to implement myself in my web application server.</p>
<p>Having the reverse proxy is a tremendous help as it can handle the more advanced webserver tasks - like compression.</p>
<p>I quickly noticed though that the stable nginx release provided with Ubuntu Lucid didn’t seem to be willing to actually do the compression despite it being turned on. A bit of experimentation revealed that stable nginx, when comparing content-types for <code>gzip_types</code> checks the full response content-type including the charset header.</p>
<p>As node-paperboy adds the “;charset: UTF-8” to all requests it serves, the default setting didn’t compress. Thankfully though, nginx could live with</p>
<pre>gzip_types "text/javascript; charset: UTF-8" "text/html; charset: UTF-8"</pre>
<p>so that settled the compression issue.</p>
<p><strong>Update:</strong> of course it should be “charset<strong>=</strong>UTF-8” instead of “charset: UTF-8” - with the equal sign, nginx actually compresses correctly. My patch to paperboy has since been accepted by upstream, so you won’t have to deal with this hassle.</p>
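<p>Put together, the reverse-proxy side of the deployment looks roughly like this - a hedged sketch with illustrative paths, not the actual production config:</p>

```nginx
server {
    listen 80;
    server_name tempalias.com;

    # node serves everything on localhost:8080;
    # nginx only adds compression and the public-facing port
    gzip on;
    gzip_types "text/javascript; charset=UTF-8" "text/html; charset=UTF-8";

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```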
<p>Next was SMTP. As we are already an SMTP proxy and there are no further advantages of having incoming connections proxied further (no compression or anything), I wanted clients to somehow directly connect to the node daemon.</p>
<p>I quickly learned that even the most awesome iptables setup won’t make the Linux kernel accept on the <code>lo</code> interface anything that didn’t originate from <code>lo</code>, so no amount of NATing allows you to redirect a packet from a public interface to the local interface.</p>
<p>Hence I reconfigured the SMTP server component of tempalias to listen on all interfaces on port 2525 and then redirected packets arriving on the public interface from port 25 to 2525.</p>
<p>This of course left the port 2525 open on the public interface which I don’t like.</p>
<p>A quickly created iptables rule rejecting (as opposed to dropping - I don’t want a casual port scanner to know that iptables magic is going on) any traffic going to 2525 also dropped the redirected traffic, which of course wasn’t much help.</p>
<p>In comes the MARK extension. Here’s what I’ve done:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># mark packets going to port 25</span>
iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 25 -j MARK --set-mark 99
<span class="c"># redirect packets going to port 25 to 2525</span>
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 25 -j REDIRECT --to-ports 2525
<span class="c"># drop all incoming packets to 2525 which are not marked</span>
iptables -A INPUT -i eth0 -p tcp --dport 2525 -m mark ! --mark 99 -j REJECT</code></pre></figure>
<p>So. Now the host responds on public port 25 (but not on public port 2525).</p>
<p>Next step was to configure DNS and tell Richard to create himself an alias using</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">curl --no-keepalive -H <span class="s2">"Content-Type: application/json"</span> <span class="se">\</span>
--data-binary <span class="s1">'{"target":"t@example.com","days": 3,"max-usage": 5}'</span> <span class="se">\</span>
-qsS http://tempalias.com/aliases</code></pre></figure>
<p>(yes. you too can do that right now - it’s live baby!)</p>
<p>Of course it blew up the moment the redis connection timed out, taking the whole node server with it.</p>
<p>Which was the topic of yesterday’s coding session: The redis-node-client library is very brittle where connection tracking and keeping are concerned. I needed something quick, so I hacked the library to provide an additional, very explicit connection management method.</p>
<p>Then I began discussing the issues I was having with redis-node-client’s author. He’s such a nice guy and we had one hell of a nice discussion which is still ongoing, so I will probably have to rewrite the backend code once more once we figure out how to do this the right way.</p>
<p>Between all that sysadmin and library-fixing time, unfortunately, I didn’t yet have time to do all too much on the public facing website: <a href="http://tempalias.com">http://tempalias.com</a> at this point contains nothing but a gradient. But it’s a really nice gradient. One of the best.</p>
<p>Today: More redis-node-client hacking (provided I get another answer from fictorial) or finally some real HTML/CSS work (which I’m not looking forward to).</p>
<p>This is taking shape.</p>
tempalias.com - rewrites2010-04-19T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-rewrites<p>This is yet another installment in my series of posts about building a web service in node.js. The <a href="/2010/04/tempalias-com-the-cake-is-a-lie/">previous post is here</a>.</p>
<p>Between the last post and the current trunk of tempalias, there lie two substantial rewrites of core components of the service. One thing is that I completely misused Object.create(), which takes an object to be the prototype of the object you are creating. I was of the wrong opinion that it works like Crockford’s object.create(), which creates a clone of the object you pass in.</p>
<p>Also, I learned that only Function objects actually have a prototype.</p>
<p>Not knowing these two things made it impossible to actually deserialize the JSON representation of an alias that was previously stored in redis. This led to the first rewrite - this time of <a href="http://github.com/pilif/tempalias/blob/master/lib/tempalias.js">lib/tempalias.js</a>. Aliases now work more like standard JS objects and must be instantiated using the new operator; on the plus side, they now behave as expected.</p>
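<p>The difference that bit me boils down to a few lines. The following sketch (the <code>Alias</code> type and its fields are illustrative, not the actual tempalias code) contrasts ES5’s Object.create() with the constructor-based style the rewrite switched to:</p>

```javascript
// ES5 Object.create(proto) makes a NEW object whose prototype is `proto` -
// it does not copy/clone the argument's own properties onto the result.
var proto = { greet: function () { return "hi " + this.name; } };
var obj = Object.create(proto);
obj.name = "alias";
console.log(obj.greet());                 // inherited via the prototype chain
console.log(obj.hasOwnProperty("greet")); // false - greet lives on proto

// The constructor-based style: only Function objects carry a .prototype,
// so instances made with `new` can be re-created when deserializing JSON.
function Alias(data) {
  this.target = data.target;
  this.usages = data.usages;
}
Alias.prototype.isUsable = function () { return this.usages > 0; };

// Round-trip through JSON (as when loading from redis) and re-instantiate:
var stored = JSON.stringify(new Alias({ target: "t@example.com", usages: 3 }));
var revived = new Alias(JSON.parse(stored));
console.log(revived.isUsable()); // true
```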
<p>Speaking of serialization: I learned that in V8 (and Safari)</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">isNaN</span><span class="p">(</span><span class="nb">Date</span><span class="p">.</span><span class="nx">parse</span><span class="p">(</span> <span class="p">(</span><span class="k">new</span> <span class="nb">Date</span><span class="p">()).</span><span class="nx">toJSON</span><span class="p">()</span> <span class="p">))</span> <span class="o">===</span> <span class="kc">true</span></code></pre></figure>
<p>which, according to the ES5 spec, is a bug: the spec states that Date.parse() must be able to parse the string produced by Date.prototype.toISOString(), which is what toJSON uses.</p>
<p>This ended up with me doing an ugly hack (string replacement) and reporting a <a href="http://crbug.com/41754">bug in Chrome</a> (where the bug happens too).</p>
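<p>For illustration, such a workaround amounts to parsing the ISO string by hand when Date.parse() chokes on it. This sketch is illustrative rather than the actual patch (modern engines parse the format natively, so the fallback is no longer needed):</p>

```javascript
// Fallback parser for strings like "2010-04-16T00:36:52.000Z" for engines
// where Date.parse() rejects the ISO format emitted by Date#toJSON().
// Illustrative sketch of the string-manipulation approach, not the real fix.
function parseISO(s) {
  var m = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:\.(\d{3}))?Z$/.exec(s);
  if (!m) return NaN;
  return Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6], +(m[7] || 0));
}

var json = new Date(Date.UTC(2010, 3, 16, 0, 36, 52)).toJSON();
var ms = parseISO(json);
console.log(new Date(ms).toJSON() === json); // round-trips cleanly
```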
<p>Anyhow. Friday and Saturday I took off from the project, but today I was on it again. This time, I was looking into serving static content. This is how we are going to serve the web site after all.</p>
<p>Express does provide a Static plugin, but it’s fairly limited in that it doesn’t do any client-side caching which, even though Node.js is crazy fast, seems imperative to me. Also, while it allows you to configure the file system path it serves static content from, it insists on the static content’s URL being /public/whatever, where I would much rather have kept the URL space together.</p>
<p>I tried to add If-Modified-Since support to express’ static plugin, but I hit some strange interaction in how express handles the HTTP request that caused some connections to never close - not what I want.</p>
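<p>For reference, the conditional-GET logic I was trying to add boils down to a small decision: compare the file’s mtime against the If-Modified-Since header at second granularity. A hedged sketch of that decision (not express’ or paperboy’s actual code):</p>

```javascript
// Given a file's mtime and the request's If-Modified-Since header,
// decide between a 304 Not Modified and a full 200 response.
// Illustrative sketch, not the actual tempalias/express code.
function staticStatus(mtime, ifModifiedSince) {
  if (!ifModifiedSince) return 200;          // unconditional request
  var since = Date.parse(ifModifiedSince);
  if (isNaN(since)) return 200;              // unparseable header: send the file
  // HTTP dates have second granularity, so compare on whole seconds
  return Math.floor(mtime.getTime() / 1000) <= Math.floor(since / 1000) ? 304 : 200;
}

var mtime = new Date(Date.UTC(2010, 3, 18, 22, 0, 0));
console.log(staticStatus(mtime, mtime.toUTCString()));             // 304 - fresh
console.log(staticStatus(mtime, "Sat, 17 Apr 2010 00:00:00 GMT")); // 200 - stale
console.log(staticStatus(mtime, undefined));                       // 200
```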
<p>After two hours of investigating, I was looking at a different solution, which leads us to rewrite two:</p>
<p>tempalias trunk now doesn’t depend on express any more. Instead, it serves the web service part of the URL space manually and for all the static requests, it uses <a href="http://github.com/felixge/node-paperboy">node-paperboy</a>. paperboy doesn’t try to convert node into Rails and it provides nothing but a simple static file handler for your web server which also works completely inside node’s standard method for handling web requests.</p>
<p>I much prefer this solution because express was doing too much in some cases and too little in others: Express tries to somewhat imitate Rails or any other web framework in that it not only provides request routing but also template rendering (in HAML and friends). It also abstracts away node’s HTTP server module, and does so badly, as evidenced by this strange connection not-quite-ending problem.</p>
<p>On the other hand, it doesn’t provide any help if you want to write something that doesn’t return text/html.</p>
<p>Personally, if I’m doing a RESTful service anyways, I see no point in doing any server-side HTML generation. I’d much rather write a service that exposes an API at some URL endpoints and then also a static page that uses JavaScript / AJAX to consume said API. This is where express provides next to no help at all.</p>
<p>So if the question is whether to have a huge dependency that fails at some key points and doesn’t provide any help with others, or a smaller dependency that handles the stuff I’m not interested in but otherwise doesn’t interfere, I much prefer the latter.</p>
<p>This is why I went with this second rewrite.</p>
<p>Because I was already using a clean MVC separation (the “view” being the JSON I emit in the API - there’s no view in the traditional sense yet), the rewrite was quite hassle-free and basically nothing but syntax work.</p>
<p>After completing that, I felt like removing the known issues from my blog post <a href="http://www.gnegg.ch/2010/04/tempalias-com-persistence/">where I was writing about persistence</a>: Alias generation is now race-free and alias length is stored in redis too. The architecture can still be improved in that I’m currently doing two requests to Redis per ALIAS I’m creating (SETNX and SET). By moving stuff around a little bit, I can get away with just the SETNX.</p>
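<p>The race-free approach hinges on redis’ SETNX (“SET if Not eXists”): only the client whose SETNX succeeds owns the id. A sketch of that logic against a tiny in-memory stand-in for redis - the real code talks to redis-node-client instead, and <code>FakeRedis</code> here only mimics SETNX’s return convention:</p>

```javascript
// In-memory stand-in for redis SETNX: returns 1 if the key was claimed,
// 0 if it already existed. Illustrative only - not the real redis client.
function FakeRedis() { this.db = {}; }
FakeRedis.prototype.setnx = function (key, value) {
  if (Object.prototype.hasOwnProperty.call(this.db, key)) return 0;
  this.db[key] = value;
  return 1;
};

// Keep generating ids until SETNX claims one - no check-then-set race.
function createAlias(store, makeId, data) {
  var id;
  do {
    id = makeId();
  } while (store.setnx("alias:" + id, JSON.stringify(data)) === 0); // collision: retry
  return id;
}

var store = new FakeRedis();
var n = 0;
var ids = [];
// a deliberately colliding id generator, to show the retry loop in action
var makeId = function () { return "id" + (n++ % 3); };
for (var i = 0; i < 3; i++) ids.push(createAlias(store, makeId, { target: "t@example.com" }));
console.log(ids); // each id claimed exactly once
```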
<p>On the other hand, let me show you this picture here:</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2010/04/Screen-shot-2010-04-19-at-00.36.52.png"><img class="aligncenter size-medium wp-image-689" title="Screenshot of ab running" src="http://www.gnegg.ch/wp-content/uploads/2010/04/Screen-shot-2010-04-19-at-00.36.52-168x300.png" alt="Screenshot of ab running in a terminal" width="168" height="300" /></a>Considering that the current solution is already creating 1546 aliases per second at a concurrency of 100 requests, I can probably get away without changing the alias creation code any more.</p>
<p>And in case you ask: The static content is served with 3000 requests per second - again with a concurrency of 100.</p>
<p>Node is fast.</p>
<p>Really.</p>
<p>Tomorrow: Philip learns CSS - I’m already dreading this final step to enlightenment: Creating the HTML/CSS front-end UI according to the awesome design provided by Richard.</p>
tempalias.com - the cake is a lie2010-04-16T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-the-cake-is-a-lie<p>This is another installment of my development diary for tempalias.com, a web service that will allow you to create self-destructing email aliases. You can <a href="/2010/04/tempalias-com-config-file-smtp-cleanup-beginnings-of-a-server/">read the previous installment here</a>.</p>
<blockquote>This was a triumph.
I'm making a note here: HUGE SUCCESS.
It's hard to overstate my satisfaction.</blockquote>
<p>I didn’t post an update on Wednesday evening because it got very late and I just wanted to sleep. Today, it’s late yet again, but I can gladly report that <strong>the backend service is now feature complete</strong>.</p>
<p>We are still missing the UI, but with a bit of curl on the command line, you can use the restful web service interface to create aliases and you can use the generated aliases to send email via the <strong>now completed SMTP proxy</strong> - including time- and usage-based expiration.</p>
<p>As a reminder: All the code (i.e. the completed backend) is available on my <a href="http://github.com/pilif/tempalias">github repository</a>, though keep in mind that there is no documentation whatsoever. That I will save for later when this really goes public. If you are brave, feel free to clone it.</p>
<p>You will need the trunk versions for both redis and node.</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2010/04/Screen-shot-2010-04-16-at-01.00.41.png"><img class="aligncenter size-medium wp-image-683" title="Consuming an alias" src="http://www.gnegg.ch/wp-content/uploads/2010/04/Screen-shot-2010-04-16-at-01.00.41-225x300.png" alt="Screenshot of a terminal showing three consumptions of an alias and a fourth failing." width="225" height="300" /></a></p>
<p>The screenshot shows me consuming an alias four times in a row: three times, I get the data back; the fourth time, it’s gone.</p>
<p>The website itself is still in the process of being designed and I can promise you, it will be awesome. Richard’s last design was simply mind-blowing. Unfortunately I can’t show it here yet, because he used a non-free picture. Besides, we obviously can’t use non-free artwork for a Free Software project.</p>
<p>So this update concerns itself with two days of work. What was going on?</p>
<p>On Wednesday, I wanted to complete the SMTP server, but before I went ahead and did so, I revised the server’s design. At the end of the last posting here, we had a design where the SMTP proxy would connect to the smarthost the moment a client connects. It would then proceed to proxy through command by command, returning error messages as they are returned by the smarthost.</p>
<p>The issue with this design lies in the fact that tempalias.com is, by definition, not about sending mail, <em>but about rejecting mail</em>. This means that once it’s up and running, the majority of mail deliveries will simply fail at the RCPT state.</p>
<p>From this perspective, it doesn’t make sense to connect to the smarthost when a client connects. Instead, we should do the handshake up to and including the RCPT TO command, at which time we do the alias expansion. If that fails (which is the more likely case), we don’t need to bother to connect to upstream but we can simply deny the recipient.</p>
<p>The consequence, of course, is that our RCPT TO can now return errors that happened during MAIL FROM on the upstream server. But as MAIL FROM usually only fails with a 5xx error, this isn’t terribly wrong anyway - the saved resources far outweigh the not-quite-perfect error messages.</p>
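<p>The deferred-connection flow might look roughly like this; <code>lookupAlias</code> and <code>connectUpstream</code> are hypothetical stand-ins for the redis lookup and the smarthost client, not the real tempalias API:</p>

```javascript
// Sketch of the revised proxy flow: the upstream connection is only opened
// once RCPT TO names an alias that actually resolves.
function ProxySession(lookupAlias, connectUpstream) {
  this.lookupAlias = lookupAlias;
  this.connectUpstream = connectUpstream;
  this.sender = null;
  this.upstream = null;
}
ProxySession.prototype.mailFrom = function (addr) {
  this.sender = addr;                        // just remember it - no upstream traffic yet
  return "250 OK";
};
ProxySession.prototype.rcptTo = function (alias) {
  var target = this.lookupAlias(alias);
  if (!target) return "550 User unknown";    // the common case: reject cheaply
  this.upstream = this.connectUpstream();    // connect only now
  this.upstream.mailFrom(this.sender);       // replay the buffered MAIL FROM
  this.upstream.rcptTo(target);
  return "250 OK";
};

var connects = 0;
var upstream = { mailFrom: function () {}, rcptTo: function () {} };
var session = new ProxySession(
  function (a) { return a === "good" ? "t@example.com" : null; },
  function () { connects++; return upstream; }
);
session.mailFrom("sender@example.org");
console.log(session.rcptTo("bogus")); // 550 User unknown - and no upstream connection
console.log(connects);                // 0
console.log(session.rcptTo("good"));  // 250 OK
console.log(connects);                // 1
```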
<p>Once I completed that design change, the next roadblock I went into was the fact that both the smtp server and the smtp client libraries weren’t quite as asynchronous as I would have wanted: The server was reading the complete mail from the client into memory and the client wanted the complete mail as a parameter to its data method.</p>
<p>That felt impractical to me as in the majority of cases, we won’t get the whole mail at once, but we can certainly already begin to push it through to the smarthost, keeping memory usage of our smtp server as low as possible.</p>
<p>So <a href="http://github.com/pilif/node-smtp">my clone of the node SMTP library</a> now contains support for asynchronous handling for DATA. The server fires <code>data</code>, <code>data_available</code> and <code>data_end</code> and the client provides <code>startData()</code>, <code>sendData()</code> and <code>endData()</code>. Of course the old functionality is still available, but the <a href="http://github.com/pilif/tempalias/blob/master/tempalias_smtp.js">tempalias.com SMTP server</a> is using the new interface.</p>
<p>So, that was Wednesday’s work:</p>
<ul>
<li>only connect to the smarthost when it's no longer inevitable</li>
<li>complete the smtp server node library</li>
<li>made the smtp server and client libraries fully asynchronous</li>
<li>complete the SMTP proxy (but without alias expansion yet)</li>
</ul>
<p>Before I went to bed, the SMTP server was accepting mail and sending it using the smarthost. It didn’t do alias expansion yet but just rewrote the recipient to my private email address.</p>
<p>This is where I picked up Thursday night: The plan was to hook the alias model classes into the SMTP server as to complete the functionality.</p>
<p>While doing that, I had one more architectural thing to clear: How to make sure that I can decrement the usage-counter race-free? Once that was settled, the rest was pure grunt work by just writing the needed code.</p>
<p>As we are getting long and as it’s quite late again, I’m saving the post-mortem of this last task for tomorrow. You’ll get a chance to learn about bugs in node, about redis’ DECR command and finally you will get a chance to laugh at me for totally screwing up the usage of Object.create().</p>
<p>Stay tuned.</p>
tempalias.com - config file, SMTP cleanup, beginnings of a server2010-04-14T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-config-file-smtp-cleanup-beginnings-of-a-server<p>Welcome to the next installment of a series of blog posts about the creation of a new web service in node.js. The posts serve as a diary of how the development of the service proceeds and should give you some insight in how working with <a href="http://nodejs.org">node.js</a> feels right now. You can <a href="/2010/04/tempalias-com-smtp-and-design/">read the previous episode here</a>.</p>
<p>Yesterday, I unfortunately didn’t have a lot of time to commit to the project, so I chose a really small task to complete: create a configuration file, a configuration file parser and use both to actually configure how the application should behave.</p>
<p>The task in general was made a lot easier by the fact that (current) node contains a really simple parser for INI-style configuration files. For the simple type of configuration data I have to handle, the INI format felt perfect and as I got a free parser with node itself, that’s what I went with. So as of Monday it’s possible to configure listening addresses and ports for both HTTP and SMTP daemons and additional settings for the SMTP part of the service.</p>
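<p>Node’s built-in INI parser from that era is long gone, so as a stand-alone illustration, here is a minimal hand-rolled equivalent; the section and key names are made up to match the settings described above, not the actual tempalias config:</p>

```javascript
// Minimal INI parser: sections in [brackets], key = value pairs,
// ; or # comments. Illustrative stand-in for node's former ini module.
function parseIni(text) {
  var config = {}, section = config;
  text.split(/\r?\n/).forEach(function (line) {
    line = line.trim();
    if (!line || line[0] === ";" || line[0] === "#") return; // blank / comment
    var m;
    if ((m = /^\[(.+)\]$/.exec(line))) {
      section = config[m[1]] = {};            // start a new section
    } else if ((m = /^([^=]+)=(.*)$/.exec(line))) {
      section[m[1].trim()] = m[2].trim();     // key = value
    }
  });
  return config;
}

var config = parseIni(
  "[http]\nlisten = 127.0.0.1\nport = 8080\n" +
  "[smtp]\nlisten = 0.0.0.0\nport = 2525\nsmarthost = mail.example.com\n"
);
console.log(config.http.port);      // "8080"
console.log(config.smtp.smarthost); // "mail.example.com"
```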
<p>Today I had more time.</p>
<p>The idea was to seriously look into the SMTP transmission. The general idea is that email sent to the tempalias.com domain will have to end up on the node server where the alias expansion is done and the email is prepared for final delivery.</p>
<p>While I strive to keep the service as self-contained as possible, I opted into forcing a smarthost to be present to do the actual mail delivery.</p>
<p>You see, mail delivery is a complicated task in general: you must deliver the mail and if you can’t, you have to notify the sender. Reasons for a failed delivery can be permanent (that’s easy - you just tell the sending server that there was a problem and you are done) or temporary, and temporary errors leave you responsible for handling them.</p>
<p>Handling temporary errors usually means: keep the original email around in a queue and retry after the initial client has long disconnected. If you don’t succeed after a reasonably large number of delivery attempts or if a permanent problem creeps up, then you have to bounce the message back to the initial sender.</p>
<p>If you want to do the final email delivery, so that your app runs without any other dependencies, then you will end up not only writing an SMTP server but also a queueing system, something that’s <em>way</em> beyond the scope of simple alias resolution.</p>
<p>Even if I wanted to go through that hassle, it still wouldn’t help much as aside of the purely technical hurdles, there are also others on a more meta level:</p>
<p>If you intend to do the final delivery nowadays, you practically need to have a valid PTR record, you need to be in good standing with the various RBL’s, you need to handle SSL - the list goes on and on. Much of this is administrative in nature and might even create additional cost and is completely pointless considering the fact that you do usually have a dedicated smarthost around that takes your mail and does the final delivery. And even if you don’t: Installing a local MTA for the queue handling is easily done and whatever you install, it’ll be way more mature than what I could write in any reasonable amount of time.</p>
<p>So it’s decided: <strong>The tempalias codebase will require a smarthost to be configured</strong>. As mine doesn’t require authentication from a certain IP range, I can even get away without writing any SMTP authentication support.</p>
<p>Once that was clear, the next design decision was clear too: the tempalias smtp daemon should be a really, really thin layer around the smarthost. When a client connects to tempalias, we will connect to the smarthost (500ing (or maybe 400ing) out if we can’t - remember: immediate and permanent errors are easy to handle). When a client sends MAIL FROM, just relay it to the smarthost, returning back to the client whatever we got - you get the idea: the <strong>tempalias mail daemon is an SMTP proxy</strong>.</p>
<p>This keeps the complexity low while still providing all the functionality we need (i.e. rewriting RCPT TO).</p>
<p>Once all of this was clear, I sat down and had a look at the node-smtp servers and clients and it was immediately clear that both need a lot of work to even do the simple thing I had in mind.</p>
<p>This means that most of today’s work went into <a href="http://github.com/pilif/node-smtp">my fork of node-smtp</a>:</p>
<ul>
<li>made the hostname in the banner configurable</li>
<li>made the smtp client library work with node trunk</li>
<li>fire additional events (on close, on mail from, on rcpt to)</li>
<li>fixed various smaller bugs</li>
</ul>
<p>Of course I have notified upstream of my changes - we’ll see what they think of them.</p>
<p>On the other hand, the <a href="http://github.com/pilif/tempalias/blob/master/tempalias_smtp.js">SMTP server part</a> of tempalias (incidentally the first SMTP server I’m writing. ever) also took shape somewhat. It now correctly handles proxying from initial connection up until DATA. It doesn’t do real alias expansion yet, but that’s just a matter of hooking it into the backend model class I already have - for now I’m happy with it rewriting all passed recipients to my own email address for testing.</p>
<p>I already had a look at how node-smtp’s daemon handles the DATA command and I have to say that gobbling up data into memory until the client stops sending data or we run out of heap isn’t quite what I need, so tomorrow I will have to change node-smtp even more in that it fires events for every bit of data that was received. That way a consumer of the API can do some validation on the various chunks (mostly size validation) and I can pass the data directly to the smarthost as it arrives.</p>
<p>This keeps memory usage of the node server small.</p>
<p>So that’s what I’m going to do tomorrow.</p>
<p>On a different note, I had some thought going into actual deployment, which probably will end up with me setting up a reverse proxy after all, but this is a topic for another discussion.</p>
tempalias.com - SMTP and design2010-04-12T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-smtp-and-design<p>After being sick at the end of last week, only today did I find the time and willpower to continue working on this little project of mine.</p>
<p>For people just coming to the series with this article: This is a development diary about the creation of a web service for autodestructing email addresses. <a href="http://www.gnegg.ch/2010/04/tempalias-com-persistence/">Read the previous installment here</a>.</p>
<p>The funny thing about the project is that people all around me seem to like the general idea behind the service. I even got some approval from Ebi (who generally dislikes everything that’s new) and this evening I was having dinner with a former coworker of mine who I know does kick-ass web design.</p>
<p>He too liked the idea of the project and I could con him into creating the screen design of tempalias.com. This is a really good thing as whatever Richard touches comes out beautiful and usable.</p>
<p>For example, he told me that it makes <em>way</em> more sense to expose the valid-until date in the form of “Valid for x days” instead of asking the user to provide an actual date. This is not only much clearer and easier to use, it also fixes a brewing timezone problem I had with my previous design:</p>
<p>Valid for “3 days from now” is 3 days from now wherever on the world you are. But valid until 2010-04-16 is different depending on where you are.</p>
<p>This is a rare case where improving usability also keeps the code simpler.</p>
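<p>The simplification is easy to see in code: a pure offset from the creation instant needs no timezone handling at all. A small sketch (the function and variable names are illustrative, not the actual tempalias code):</p>

```javascript
// "Valid for x days" is a pure offset from the moment of creation -
// server and user agree on "now" even when they disagree on the local date.
function validUntil(now, days) {
  return new Date(now.getTime() + days * 24 * 60 * 60 * 1000);
}

var created = new Date(Date.UTC(2010, 3, 12, 22, 30, 0)); // 2010-04-12 22:30 UTC
var expiry = validUntil(created, 3);
console.log(expiry.toJSON()); // "2010-04-15T22:30:00.000Z"
console.log(expiry > created); // true
```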
<p>So, this is what Richard came up with so far:</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2010/04/tempalias_schwarz_heller.jpg"><img class="aligncenter size-medium wp-image-677" title="tempalias mockup" src="http://www.gnegg.ch/wp-content/uploads/2010/04/tempalias_schwarz_heller-300x233.jpg" alt="Mockup of the tempalias website design" width="300" height="233" /></a>It’s not finalized yet, but in the spirit of publishing here early and often, I’m posting this now. It’s actually the third iteration already and Richard is still working on making it even nicer. But it’s already 2124 times better than what I could ever come up with.</p>
<p>On the code-front, I was looking into the SMTP server, where I found @kennethkalmer’s <a href="http://github.com/kennethkalmer/node-smtp">node-smtp project</a> which provides a very rough implementation of an SMTP daemon.</p>
<p>Unfortunately, it doesn’t run under node trunk (or even 0.1.30), but with the power of github, I was able to create my own fork at</p>
<p><a href="http://github.com/pilif/node-smtp">http://github.com/pilif/node-smtp</a></p>
<p>My fork contains a bit of additional code compared to the source:</p>
<ul>
<li>Runs under node trunk (where trunk is defined as "node as it was last tuesday")</li>
<li>Enforces proper SMTP protocol sequence (first: HELO, then MAIL FROM, then RCPT TO and finally DATA)</li>
<li>Supports multiple recipients (by handling multiple RCPT TO)</li>
<li>Does some email address validation (which is way too strict for being RFC compliant)</li>
</ul>
<p>Tomorrow, I’m going to use this fork to build an SMTP server that we’ll be using for alias processing, where I will have to put some thought into actual mail delivery: Do I deliver the mail myself? Am I offloading it to a mail relay (I really want to do this. But read more tomorrow)? If so, how is this done with the most memory efficiency?</p>
<p>We’ll see.</p>
tempalias.com - persistence2010-04-07T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-persistence<p>(This is the third installment of a development diary about the creation of a self destructing email alias service. <a href="http://www.gnegg.ch/2010/04/tempalias-com-another-day/">Read the previous episode here</a>.)</p>
<p>After the earlier clear idea on how to handle the aliases identity, the next question I needed to tackle was the question of persistence: How do I want to store these aliases? Do I want them to persist a server restart? How would I access them?</p>
<p>On the positive side remains the fact that the data structure for this service is practically non-existent: Each alias has its identity and some data associated with it, mainly a target address and the validity information. And lookup will <em>always</em> happen using that identity (with the exception of garbage collection - something I will tackle later).</p>
<p>So this is a clear candidate for a very simple key/value store. As I hope to gain at least some traction though (wait until I’ve coded the bookmarklet), I would want this to be at least of <em>some</em> robustness, hence writing flat files seemed like a bad idea.</p>
<p>Ironically, if you want a really simple, built-in solution for data persistence in node.js, you have two options: either write your own (which is where I don’t want to go) or use SQLite, which is total overkill for the current solution.</p>
<p>So I had the option of just keeping stuff in memory (as plain JS objects or using memcache) or to use any of the supported key/value storage services.</p>
<p>Aliases going away on server restart felt like a bad thing, so I looked into the various key/value stores.</p>
<p>While looking at the available libraries, I went for the one that was most recently updated, which is <a href="http://github.com/fictorial/redis-node-client">redis-node-client</a>. Of course, this meant that I had to use both redis trunk and node trunk as the library is really tracking the bleeding edge. I don’t mind that much though because both redis and node are very self-contained and compile easily on both linux (deployment) and mac os (development) while requiring next to no configuration.</p>
<p>So with a decision made for both persistence and identity, I went ahead and wrote more code.</p>
<p>On the <a href="http://github.com/pilif/tempalias">project page</a>, you will see few commits completing the full functionality I wanted a POST to /aliases to have - including persistence using redis and identity using the previously described method of brute-forcing the issue.</p>
<p>I still have two issues at the moment that will need tackling</p>
<ol>
<li>The initial length of the pseudo-uuid isn't persisted. This means that once enough aliases have been created to increase the length and I restart the server, I will get needless collisions or even an overcrowded keyspace.</li>
<li>The current method of checking for ID availability and later usage is totally non-race-proof and needs some <strong>serious</strong> looking-into.</li>
</ol>
<p>Stuff I learned:</p>
<ul>
<li>node is extremely work-in-progress. While it runs flawlessly and <strong>never</strong> surprises me with irreproducible or even just seemingly illogical behavior, features appear and disappear at will.</li>
<li>This state of flux in node makes it really hard to work with external dependencies. In this case, multipart.js vanished from node trunk (without a change log entry either), but express still depends upon it. On the other hand, I'm forced to use node trunk as otherwise the redis client won't work.</li>
<li>Date("<timestamp>") in node is dependent on the local timezone and changing process.env.TZ post-startup doesn't have any effect. This means that I'm going to have to set TZ=UTC in my start script.</li>
<li>Working with an asynchronous API seems strange sometimes, but the power of closures usually comes to the rescue. I certainly wouldn't want to have to write software like this if I didn't have closures at my disposal (and, NO, global variables are NOT a viable alternative...)</li>
</ul>
tempalias.com - development diary2010-04-06T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-development-diary<p>After listening to this week’s <a href="http://www.grc.com/securitynow.htm">Security Now!</a> podcast where they were discussing <a href="http://disposeamail.com">disposeamail.com</a>. That reminded me of this little idea I had back in 2002: <a href="http://www.pilif.ch/stuff/adaddr/index.php">Selfdestructing Email Addresses</a>.</p>
<p>Instead of providing a web interface for a catchall alias, my solution was based around the idea of providing a way to encode time-based validity information and even a usage counter into an email address and then check that information on reception of the email to decide whether to alias the source address to a target address or whether to decline delivery with a “User unknown” error.</p>
<p>This would allow you to create temporary email aliases which redirect to your real inbox for a short amount of time or amount of emails, but instead of forcing you to visit some third-party web interface, you would get the email right there where the other messages end up in: In your personal inbox.</p>
<p>Of course this old solution had one big problem: It required a mail server on the receiving end and it required you as a possible user to hook the script into that mailserver (also, I never managed to do just that with exim before losing interest, but by now, I would probably know how to do it).</p>
<p>Now. Here comes the web 2.0 variant of the same thing.</p>
<p><strong>tempalias.com</strong> (yeah. it was still available. so was .net) will provide you with a web service that will allow you to create a temporary mail address that will redirect to your real address. This temporary alias will be valid only for a certain date range and/or a certain amount of email sent to it. You will be able to freely chose the date range and/or invocation count.</p>
<p>In contrast to the other services out there, the alias will direct to your standard inbox. No ad-filled web interface. No security problems caused by typos and no account registration.</p>
<p>Also, the service will be completely <em>open source</em>, so you will be able to <strong>run your own</strong>.</p>
<p>My motivation is to learn something new, which is why I am</p>
<ul>
<li>writing this thing in <a href="http://nodejs.org">Node.js</a> (also, because a simple REST based webapp and a simple SMTP proxy is just what node.js was invented for)</li>
<li>documenting my progress of implementation here (which also hopefully keeps me motivated).</li>
</ul>
<p>My progress in implementing the service will always be visible to the public on the projects GitHub page:</p>
<p><a href="http://github.com/pilif/tempalias">http://github.com/pilif/tempalias</a></p>
<p>As you can see, there’s already stuff there. Here’s what I’ve learned about today and what I’ve done today:</p>
<ul>
<li>I learned <a href="http://book.git-scm.com/5_submodules.html">how to use git submodules</a></li>
<li>I learned a bunch about node.js - how to install it, how it works, how module separation works and how to export stuff from modules.</li>
<li>I learned about the <a href="http://expressjs.com/">Express micro framework</a> (which does <em>exactly</em> what I need here)
<ul>
<li>I learned how request routing works</li>
<li>I learned how to configure the framework for my needs (and how that's done internally)</li>
<li>I learned how to play with HTTP status codes and how to access information about the request</li>
</ul>
</li>
</ul>
<p>What I’ve accomplished code-wise is, considering the huge amount of stuff I had plain no clue about, quite little:</p>
<ul>
<li>I added the web server code that will run the webapp</li>
<li>I created a handler that handles a POST-request to /aliases</li>
<li>Said handler checks the content type of the request</li>
<li>I added a very rudimentary model class for the aliases (and learned how to include and use that)</li>
</ul>
<p>I still don’t know how I will store the alias information. In a sense, it’s a really simple data model mapping an alias ID to its information, so it’s predestined for the cool key/value stores out there. On the other hand, I want the application to be simple and I don’t feel like adding a key/value store as a huge dependency just for keeping track of 3 values per alias.</p>
<p>Before writing more code, I’ll have to find out how to proceed.</p>
<p>So the next update will probably be about that decision.</p>
tempalias.com - another day2010-04-06T00:00:00+00:00http://pilif.github.com/2010/04/tempalias-com-another-day<p>This is the second installment of an article series about creating a web service for self-destructing email aliases. <a href="/2010/04/tempalias-com-development-diary/">Read part 1 here</a>.</p>
<p>Today, I spent a lot of thought and experimentation with two issues:</p>
<ol>
<li>How would I name and identify the temporary aliases?</li>
<li>How would I store the temporary aliases</li>
</ol>
<p>Naming and identifying sounds easy. One is inclined to just use an incrementing integer or something similar. But that won’t work for security reasons. If the address you got is 12@tempalias.net, in all likelihood there will be an 11@ and a 13@.</p>
<p>Using that information, you could easily bring the whole service down (and endlessly annoy its users) by requesting an address to get the current ID and then sending a lot of mail to the neighboring IDs. If those were created without a mail count limitation, then you could spam the recipient for the whole validity period and if they were created with a count limitation, you could use up all allowed mails.</p>
<p>So the aliases need to be random.</p>
<p>Which leads to the question of how to ensure uniqueness.</p>
<p>Unique random numbers you ask? Isn’t this what UUIDs were invented for?</p>
<p>True. But considering the length of a UUID, would you really want to have an alias in the form e8ea98ce-dabc-42f8-8fcd-c50d20b1f2c5@tempalias.net? That address is so long that it might even hit some length limitation of the target site - which remains true even if you apply cheap tricks like removing the dashes.</p>
<p>Of course, using base16 to encode a UUID (basically a 128 bit integer) is hopelessly inefficient. By increasing the size of the character set we use, we can decrease the length of the encoded string.</p>
<p>Keep in mind though, that the string in question is to be the local part of an email address, and those tend to be case insensitive with no strong guarantee that case is preserved over the process of delivering the message.</p>
<p>That, of course, limits the amount of characters we can use to basically 0-9 and A-Z (plus a few special characters like + . - and _).</p>
<p>This is what <a href="http://en.wikipedia.org/wiki/Base32">Base32</a> was invented for, but unfortunately, a base32 encoded UUID would still be around 26 characters in length. While that’s a bit better, I still wouldn’t want the email address scheme to be eda3u3rzcfer3fztdvvd6xnd3i@tempalias.com</p>
<p>So in the end, we need something way smaller (adding + . - and _ to the character space wouldn’t help much - what comes out is still around 25 characters in length).</p>
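<p>The arithmetic behind these length estimates is easy to check. A quick sketch (the helper name is my own invention) computes the minimum number of characters needed to encode 128 bits in a given alphabet size:</p>

```python
import math

def encoded_length(bits, radix):
    """Minimum characters needed to encode `bits` bits with an alphabet of `radix` symbols."""
    return math.ceil(bits / math.log2(radix))

print(encoded_length(128, 16))  # hex UUID without dashes: 32
print(encoded_length(128, 32))  # Base32: 26
print(encoded_length(128, 40))  # 0-9 and a-z plus + . - _: 25
```

<p>So even with the extended alphabet, a full UUID stays around 25 characters - which is why the full 128 bits have to go.</p>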
<p>In the end, I would probably have to create an elaborate scheme doing something like this:</p>
<ul>
<li>pick a UUID. Use the first n bytes.</li>
<li>base32 encode.</li>
<li>Check whether that ID is free. If not, add 1 to n and try again.</li>
<li>Keep n around so that in the future, we can already start with taking bigger chunks.</li>
</ul>
<p>So the moment we reach the first collision, we increase the keyspace 256-fold (one more byte means eight more bits). That feels sufficiently safe from collisions to me, but of course it increases the maintenance burden somewhat.</p>
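<p>A minimal sketch of the grow-on-collision idea, with simplifications that are clearly mine: it grows the alias by one character per collision (instead of one UUID byte), uses a lowercase 36-character alphabet rather than strict Base32, and keeps the taken aliases in a plain in-memory set. <code>AliasStore</code> and its method name are invented for illustration:</p>

```python
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits  # safe for a case-insensitive local part

class AliasStore:
    def __init__(self, initial_length=4):
        self.length = initial_length  # "n" is remembered, so future requests start bigger
        self.taken = set()

    def reserve(self):
        while True:
            candidate = ''.join(secrets.choice(ALPHABET) for _ in range(self.length))
            if candidate not in self.taken:
                self.taken.add(candidate)
                return candidate
            self.length += 1  # collision: permanently widen the keyspace

store = AliasStore()
alias = store.reserve()
print(f"{alias}@tempalias.net")
```

<p>The real service would of course need persistent, concurrency-safe storage rather than a set - which is exactly the question the next section turns to.</p>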
<p>The next question was how to get UUIDs and how to base32 encode them from JavaScript.</p>
<p>I tried different approaches, one of which even included using <a href="http://bitbucket.org/nikhilm/uuidjs/">uuidjs</a> and doing the b32 encoding/decoding in C. The good part about that: I now have a general idea of how to extend nodejs with C++ code (yeah, it has to be C++ and my b32 code was C, so I had to do a bit of trickery there too).</p>
<p>In the end though, considering that I can’t use UUIDs anyway, we can go forward using <a href="http://www.broofa.com/Tools/Math.uuid.js">Math.uuid.js</a>, calling it with both len and radix (with the additional change of only using lowercase to encode the data) and increasing the length as we hit collisions.</p>
<p>So the next issue is storage: How to store the alias data? How to access it?</p>
<p>This will be part of the next posting here.</p>
No. It's not «just» strings2010-03-03T00:00:00+00:00http://pilif.github.com/2010/03/no-its-not-just-strings<p>On <a href="http://news.ycombinator.com/item?id=1162122">Hacker News</a>, I came across <a href="http://github.com/candlerb/string19/raw/47b0cba0a2047eca0612b4e24a540f011cf2cac3/soapbox.rb">this rant about strings</a> in Ruby 1.9, where a developer was complaining about the new string handling in Ruby. Now, I’m not a Ruby developer by any stretch, but I am really interested in strings and string encoding, which is why I posted the following comment, reprinted here as it’s too big to remain just a comment:</p>
<p>Rants about strings and character sets that contain words of the following spirit are usually neither correct nor worthy of further thought:</p>
<blockquote>It's a +String+ for crying out loud! What other language requires you to understand this
level of complexity just to work with strings?!</blockquote>
<p>Clearly the author lives in his ivory tower of English language environments where he is able to use the word “just” right next to “strings” and he probably also can say that he “switched to UTF-8” without actually really having done so because the parts of UTF-8 he uses work exactly the same as the ASCII he used before.</p>
<p>But the rest of the world works differently.</p>
<p>Data can appear in all kinds of encodings and can be required to be in different other kinds of encodings. Some of those can be converted into each other, others can’t.</p>
<p>Some Japanese encodings (Ruby’s creator is Japanese) can’t be converted to a Unicode representation, for example.</p>
<p>Nowadays, as a programming language, you have three options of handling strings:</p>
<p>1) pretend they are bytes.</p>
<p>This is what older languages have done and what Ruby 1.8 does. This of course means that your application has to keep track of encodings: for every string you keep in your application, you also need to keep track of what it is encoded in. When concatenating a string in encoding A to a string you already have in encoding B, you must do the conversion manually.</p>
<p>Additionally, because strings are bytes and the programming language doesn’t care about encoding, you basically can’t use any of the built-in string handling routines because they assume that each byte represents one character.</p>
<p>Of course, if you are one of these lucky English UTF-8 users, getting data in ASCII and English text in UTF-8, you can easily “switch” your application to UTF-8 by still pretending strings are bytes because, well, they are. For all intents and purposes, your UTF-8 is just ASCII called UTF-8.</p>
<p>This is what the author of the linked post wanted.</p>
<p>2) use an internal unicode representation</p>
<p>This is what Python 3 does and what I feel to be a very elegant solution if it works for you: A String is just a collection of Unicode code points. Strings don’t worry about encoding. String operations don’t worry about it. Only I/O worries about encoding. So whenever you get data from the outside, you need to know what encoding it is in and then you decode it to convert it to a string. Conversely, whenever you want to actually output one of these strings, you need to know in what encoding you need the data and then encode that sequence of Unicode code points to any of these encodings.</p>
<p>You will never be able to convert a bunch of bytes into a string or vice versa without going through some explicit encoding/decoding.</p>
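<p>In Python 3 terms, the boundary looks like this (a sketch of standard-library behavior, nothing project-specific; the sample strings are mine):</p>

```python
raw = b'Z\xc3\xbcrich'            # bytes as they arrive from I/O, UTF-8 encoded

text = raw.decode('utf-8')        # explicit decoding at the input boundary
assert text == 'Zürich'
assert len(text) == 6             # six code points, even though raw is seven bytes

out = text.encode('latin-1')      # explicit encoding at the output boundary
assert out == b'Z\xfcrich'

try:
    text + raw                    # mixing str and bytes fails loudly...
except TypeError:
    print('no implicit conversion')   # ...instead of corrupting data silently
```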
<p>This of course has some overhead associated with it, as you always have to do the encoding, and because operations on that internal collection of Unicode code points might be slower than the simple array-of-bytes approach, especially if you are using some kind of variable-length encoding (which you probably are, to save memory).</p>
<p>Interestingly, whenever you receive data in an encoding that cannot be represented with Unicode code points and whenever you need to send out data in that encoding, then, <strong>you are screwed</strong>.</p>
<p>This is a deficiency in the Unicode standard. Unicode was specifically made so that it can be used to represent every encoding, but it turns out that it can’t correctly represent some Japanese encodings.</p>
<p>3) The third option is to store an encoding with each string and expose both the strings contents and the encoding to your users</p>
<p>This is what Ruby 1.9 does. It combines methods 1 and 2: it allows you to choose whatever internal encoding you need, it allows you to convert from one encoding to another, and it removes the need to externally keep track of every string’s encoding because it does that for you. It also makes sure that you don’t intermix encodings, but I’m getting ahead of myself.</p>
<p>You can still use the language’s string library functions because they are aware of the encoding and usually do the right thing (minus, of course, bugs).</p>
<p>As this method is independent of the (broken?) Unicode standard, you would never get into the situation where just reading data in some encoding makes you unable to write the same data back in the same encoding as in this case, you would just create a string using this problematic encoding and do your stuff on that.</p>
<p>Nothing prevents the author of the linked post from using Ruby 1.9’s facilities to do exactly what Python 3 does (again, ignoring the Unicode issue) by internally keeping all strings in, say, UTF-16 (you can’t keep strings in “Unicode” - Unicode is no encoding - but that’s for another post). You would transcode all incoming and outgoing data to and from that encoding and do all string operations on that application-internal representation.</p>
<p>A language throwing an exception when you concatenate a Latin-1 string to a UTF-8 string is <em>a good thing</em>! You see: once that concatenation has happened by accident, it’s really hard to detect and fix.</p>
<p>At least it’s fixable, though, because not every Latin-1 string is also a valid UTF-8 string. But if it so happens that you concatenate, say, Latin-1 and Latin-8 by accident, then you are really screwed: there’s no way to find out where Latin-1 ends and Latin-8 begins, as every valid Latin-1 string is also a valid Latin-8 string. Both are arrays of bytes with values between 0 and 255 (minus some holes).</p>
<p>In today’s small world, you <em>want</em> that exception to be thrown.</p>
<p>In conclusion, what I find really amazing about this complicated problem of character encoding is that nobody feels it’s complicated, because it usually just works - especially method 1 described above, which has been used constantly in years past and is also very convenient to work with.</p>
<p>Also, it still works.</p>
<p>Until your application leaves your country and gets used in countries where people don’t speak ASCII (or Latin-1). Then all these interesting problems arise.</p>
<p>Until then, you are annoyed by every method I described except method 1.</p>
<p>Then, you will understand what great service Python 3 has done for you and you’ll switch to Python 3 which has very clear rules and seems to work for you.</p>
<p>And then you’ll have to deal with the japanese encoding problem and you’ll have to use binary bytes all over the place and have to stop using strings altogether because just reading input data destroys it.</p>
<p>And then you might finally see the light and begin to care for the seemingly complicated method 3.</p>
Sticking to the iPhone2010-02-22T00:00:00+00:00http://pilif.github.com/2010/02/sticking-to-the-iphone<p>Recently, I got a chance to play around with a Nexus One and I was using it as my main phone with the intent of switching to it for good. I’d had enough of the lack of background apps and the closedness of the iPhone, so I thought I should really go through with this.</p>
<p>Unfortunately though, this didn’t work out so well.</p>
<p>People who haven’t tried both devices would probably never understand this, but the Nexus One touch screen is <a href="http://www.appleinsider.com/articles/10/01/11/touchscreen_analysis_shows_iphone_accuracy_lead_over_droid.html">really, really bad</a>. The bit of squigglyness you see on the picture in the linked article seems like no big deal, but after one week of Nexus One and then going back to the iPhone, you can’t imagine how smooth it feels to use the iPhone again.</p>
<p>It’s like being in a very noisy environment and then stepping back into a quiet one.</p>
<p>Why did I try the iPhone again?</p>
<p>While I got podcast listening to work correctly on the Android phone, I noticed that a lot of my commuting time is spent not just listening to podcasts: some games (currently Doodle Jump and Plants vs. Zombies) play a huge role too, and the supply of games on the Android platform is really, really bad.</p>
<p>And don’t get me started on the keyboard: Neither the built-in one nor the one I had switched to even comes close to what the iPhone provides. I’m about 5 times as fast on the iPhone than on the Android. Worse: After switching to the Nexus One, I again began dreading having to write SMSes which usually spells death to any phone for me.</p>
<p>Speaking of keyboards: the built-in one is completely unusable for multilingual people. The text I write on a phone is about 50% English and 50% German. The Android keyboard doesn’t allow switching the language on the fly (while the English and German keyboards are quite alike, the keyboard language also determines the auto-correction language), and it couples the keyboard language to the phone’s UI language.</p>
<p>This is really bad, as over the years I became so accustomed to English UIs that I frankly cannot work with German UIs any more - also because of the usually really bad translations. Eek.</p>
<p>So, let’s tally.</p>
<table id="mobtable" border="0" cellspacing="0" cellpadding="0">
<thead>
<tr>
<td></td>
<td width="50%">iPhone</td>
<td width="50%">Android Device</td>
</tr>
</thead>
<tbody>
<tr>
<th>Advantages</th>
<td>
<ul>
<li>Working touch screen</li>
<li>Smoother graphics and thus more fluent usage.</li>
<li>Never crashes</li>
<li>Apps I learned to depend on are available (Wemlin, Doodle Jump [...])</li>
<li>No background noise in the headphones</li>
</ul>
</td>
<td>
<ul>
<li>Background-Applications (I wanted this for working IM as the notification based solutions on the iPhone never seemed to work)</li>
<li>Built-in applications can be replaced at will</li>
<li>Ability to <a href="http://buzz.google.com">buzz</a> pictures (yeah. I know. Who needs this?)</li>
<li>On-the-fly podcast download.</li>
</ul>
</td>
</tr>
<tr>
<th>Disadvantages</th>
<td>
<ul>
<li>Can't replace internal apps by better ones</li>
<li>Needs iTunes to download podcasts</li>
<li>No background apps</li>
<li>No buzzing of pictures (at least not if you want a location attached to your buzz)</li>
</ul>
</td>
<td>
<ul>
<li>Really bad touch screen (jumpy, inaccurate, sometimes losing calibration until I reboot it)</li>
<li>Very mediocre applications available</li>
<li>UI sometimes slow</li>
<li>Very bad battery life (doesn't make it through one day even when not heavily used)</li>
<li>Crashes about once a day</li>
<li>Did I already write "really bad touch screen" - I guess I did, but: "really bad touch screen"</li>
<li>Sometimes really bad, sometimes just bad background noise in the headphones. According to HTC, this can be fixed by periodically turning off the phone and removing the battery(!).</li>
<li>No audible support (I know I could probably remove the DRM, but why bother at the moment?)</li>
</ul>
</td>
</tr>
</tbody>
</table>
<p>While I thought I could live with the touch screen, the moment I turned on the iPhone again to play a round of “Plants vs. Zombies” (which had just come out for the i-devices), I saw how a touch screen is supposed to work and I could not bring myself to go back. I still wanted the one big iPhone disadvantage - the lack of non-SMS-based messaging - fixed for me, so here’s what I’ve done:</p>
<ul>
<li><a href="http://www.whatsapp.com/">WhatsApp </a>on the iPhone works really well as an SMS replacement (something I was after for a very long time)</li>
<li><a href="http://www.meebo.com/">meebo</a> so far never disconnected me on the iPhone which is something all other iPhone IM clients have done for me - and even on the android, meebo tended to disconnect and not reconnect.</li>
</ul>
<p>For me, that’s it. No more experiments. What ever I tried to get away from Apple’s dictate, it always failed. The N900 is a geeks heaven but doesn’t support my expensive in-ear iPhone headset and doesn’t provide any halfway interesting games. Android has a bad touchscreen, next to no battery life, is slow and crashy.</p>
<p>It’s really hard to admit for me as a geek and strong believer in freedom to use something I bought for whatever purpose I want to use it for, but Apple, even after two years, still rules the phone market in usability and hardware build quality.</p>
<p>Can’t wait to see what the next iteration of the iPhone will be, though they don’t have to change anything as long as their competition still thinks it’s ok to save $2 on each phone by using a crappy touchscreen and a crappy battery.</p>
Sprite avatars in Gravatar2010-02-15T00:00:00+00:00http://pilif.github.com/2010/02/sprite-avatars-in-gravatar<p><img class="size-full wp-image-656 alignright" title="Frog" src="http://www.gnegg.ch/wp-content/uploads/2010/02/Frog-Front-1.gif" alt="" width="32" height="48" /></p>
<p>After the release of Google Buzz, my <a href="http://www.google.com/profiles/phofstetter">Google profile</a> which I had for years finally became somewhat useful. Seeing that I really liked the avatar I’ve added to that profile, I decided, that <a href="http://en.wikipedia.org/wiki/List_of_characters_in_Chrono_Trigger#Frog">Frog</a> should henceforth be my official avatar.</p>
<p>This also meant that I wanted to add <a href="http://en.wikipedia.org/wiki/List_of_characters_in_Chrono_Trigger#Frog">Frog</a> to my Gravatar profile which, unfortunately proved to be… let’s say interesting.</p>
<p>The image resizer Gravatar provides on their site to fit the uploaded image to the site’s needs apparently was not designed for sprites, as it tries to blow sprites way up out of proportion only to resize them back down. At first I thought I could get away with cheating by uploading the above picture with a huge margin added to it, but that only led to a JavaScript error in their uploader.</p>
<p>In the end, this is what I have done:</p>
<ol>
<li>Convert the picture into the TGA format</li>
<li>Scale it using <a href="http://web.archive.org/web/20080208215126/http://www.hiend3d.com/hq3x.html">hq3x</a> (<a href="http://en.wikipedia.org/wiki/Pixel_art_scaling_algorithms">explanation of hq3x</a>)</li>
<li>Convert it back to png and re-add transparency (hq3x had trouble with transparency in the TGA file)</li>
<li>Scale it to 128 pixels in height</li>
<li>Paste it into a pre-prepared 128x128 canvas</li>
<li>Upload that.</li>
</ol>
<p>This is how my gravatar looks now, which feels quite acceptable to me:</p>
<p><img class="aligncenter" title="Gravatar" src="http://www.gravatar.com/avatar/117112d883960c8ed0e13823f88e45f1" alt="My Gravatar" width="80" height="80" /></p>
<p>The one in google’s profile was way easier to create: Paste the original image into a 64 by 64 canvas and let google do the resizing. It’s not as perfect as the hq3x algorithm, but that suffers by the downsizing to make frog fit 128 pixels in height anyways.</p>
<p>The other option would be to scale using hq2x and then paste that into a 128 by 128 canvas, yielding this sharper, but smaller image:</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2010/02/gravatar-sharp.png"><img class="aligncenter size-full wp-image-657" title="Sharper Frog" src="http://www.gnegg.ch/wp-content/uploads/2010/02/gravatar-sharp.png" alt="" width="128" height="128" /></a>But whatever I do, Frog will still be resized by Gravatar (and thus destroyed), so I went with the image that contains more colored pixels at the expense of a bit of sharpness.</p>
Google Buzz, Android and Google Apps Accounts2010-02-11T00:00:00+00:00http://pilif.github.com/2010/02/google-buzz-android-and-google-apps-accounts<p>I was looking at the Google Maps application for Android, which now provides integrated Google Buzz support, showing buzzes directly on the map and allowing you to buzz (around where I live and work, there has been a tremendous uptake of Google Buzz, which makes this really compelling).</p>
<p>However, there’s a little peculiarity about the Android maps application: If your main Google Account you configured (that’s the first one you configure) on the phone is a Google Apps account, Maps will use that for buzz-support (apparently, there’s already some kind of infrastructure for inter-company Buzzing in place). This means that you would only see buzzes from other people in your domain and, because there’s no official support for this out there, only if they are also using an Android phone.</p>
<p>“Mittelpraktisch” as I would say in German.</p>
<p>The obvious workaround is to configure your private gmail account to be your primary account (this is only possible by factory-resetting your device by the way), but this has some disadvantages, mainly the fact that the calendar on the Android phones only supports syncing with the primary account and as it happens, usually it’s the work-calendar (the Apps one) you want synchronized; not the private one (that lingers unused in my case).</p>
<p>To work around this issue, share your work calendar with your private Google account.</p>
<p>Unfortunately, I couldn’t do that as I’m posting this, because the default in the domain configuration is to not allow this. Thankfully, I’m that domain’s administrator, so I could change it (small company. remember.), but it seems to take a while to propagate into the calendar account.</p>
<p>I’ll post more as my investigation turns out more, though it is my gut feeling that this mess will solve itself as Google fixes their Maps application to not use that phantom corporate buzz account.</p>
PHP 5.3 and friends on Karmic2010-02-08T00:00:00+00:00http://pilif.github.com/2010/02/php-5-3-and-friends-on-karmic<p>I have been patient. For months I hoped that Ubuntu would sooner or later get PHP 5.3, a release I’m very much looking forward to, mainly because the addition of anonymous functions spells the death of create_function or even eval.</p>
<p>We didn’t get 5.3 for Karmic and <a href="https://bugs.launchpad.net/ubuntu/+source/php5/+bug/394385">who knows about Lucid</a> even (it’s crazy that nearly one year after the release of 5.3, there is still debate on whether to include it in the next version of Ubuntu that will be the current LTS release for the next four years. This is IMHO quite the disservice against PHP 5.3 adoption).</p>
<p>Anyways: We are in the process of releasing a huge update to PopScan that is heavily focussed on getting rid of cruft, increasing speed all over the place and increasing overall code quality. Especially the last part could benefit from having 5.3 and seeing that at this point PopScan already runs well on 5.3, I really wanted to upgrade.</p>
<p>In comes Al-Ubuntu-be, a coworker of mine, with his awesome Debian packaging skills: while there are already a few PPAs out there that contain a 5.3 package, Albe went the extra step and added not only PHP 5.3 but also quite a few other packages we depend upon that might be useful to my readers - packages like APC, memcache, imagick and xdebug for development.</p>
<p>While we can make no guarantees that these packages will be maintained heavily, they will get some security update treatment (though highly likely by version bumping as opposed to backporting).</p>
<p>So. If you are on Karmic (and later Lucid if it won’t get 5.3) and want to run PHP 5.3 with APC and Memcache, head over to <a href="https://launchpad.net/~alberto-piai/+archive/ppa">Albe’s PPA</a>.</p>
<p>Also, I’d like to take the opportunity to thank Albe for his efforts: Having a PPA with real .deb packages as opposed to just my self-compiled mess I would have done gives us a much nicer way of updating existing installations to 5.3 and even a much nicer path back to the original packages once they come out. Thanks a lot.</p>
Things I can't do with an iPhone/iPad2010-02-01T00:00:00+00:00http://pilif.github.com/2010/02/things-i-cant-do-with-an-iphoneipad<ul>
<li>have a VoIP call going on when a mobile call/SMS arrives</li>
<li>read Kindle ebooks (I can now, but knowing Apple's stance on "competing functionality", with the advent of iBook, how long do you think this will last?)</li>
<li>give it to our customers as another device to use with PopScan (It's not down-lockable and there's no way for centralized app deployment that doesn't go over apple)</li>
<li>plug any peripheral that isn't apple sanctioned</li>
<li>plug a peripheral and use it system-wide</li>
<li>play a SNES ROM (or any other console rom)</li>
<li>install Adblock (which especially hurts on the iPad)</li>
<li>consistently use IM (background notifications don't work consistently)</li>
</ul>
<p>The iPhone provides me with many advantages and thus I can live with its inherent restrictions (which are completely arbitrary - there’s no technical reason for them), but I see no point to buy yet another locked-down device that does half of the stuff I’d want it to do and does it half-assed at that.</p>
<p>Also it’s a shame that Apple obviously doesn’t need any corporate customers (at least for a small company, I see no possibility).</p>
<p>I just hope the open and usable Mac computer remains. I wouldn’t know what to go back to. Windows? Never. Linux? Sure. But on what hardware?</p>
How we use git2010-01-20T00:00:00+00:00http://pilif.github.com/2010/01/how-we-use-git<p>the following article was a <a href="http://news.ycombinator.com/item?id=1063392">comment I made</a> on <a href="http://news.ycombinator.com">Hacker News</a>, but as it’s quite big and as I want to keep my stuff in a central place, I’m hereby reposting it, adding a bit of formatting and shameless self-promotion (i.e. links):</p>
<p><a href="http://www.sensational.ch">My company</a> is working on a - by now - quite <a href="http://www.popscan.com">large web application</a>. Initially (2004), I began with CVS and then moved to SVN and in the second half of last year, to git (after a one-year period of personal use of git-svn).</p>
<p>We deploy the application for our customers - sometimes to our own servers (both self-hosted and in the cloud) and sometimes to their machines.</p>
<p>Until the middle of last year, as a consequence of SVN’s really crappy handling of branches (it can branch, but it fails at merging), we did very incremental development: adding features on customer request and bugfixes as needed, often uploading specific fixes to different sites and committing them to trunk, but rarely ever updating existing applications to trunk, in order to keep them stable.</p>
<p><em>Huge mess.</em></p>
<p>With the switch to git, we also initiated a real release management, doing one feature release every six months and keeping the released versions on strict maintenance (for all intents and purposes - the web application is highly customizable and we do make exceptions in the customized parts as to react to immediate feature-wishes of clients).</p>
<p>What we are doing git-wise is the reverse of what the article shows: Bug-fixes are (usually) done on the release-branches, while all feature development (except of these customizations) is done on the main branch (we just use the git default name “master”).</p>
<p>We branch off of master when another release date nears and then tag a specific revision of that branch as the “official” release.</p>
<p>There is a central gitosis repository which contains what is the “official” repository, but every one of us (4 people working on this - so we’re small compared to other projects I guess) has their own gitorious clone which we heavily use for code-sharing and code review (“hey - look at this feature I’ve done here: Pull branch foobar from my gitorious repo to see…”).</p>
<p>With this strict policy of (for all intents and purposes) “fixes only” and especially “no schema changes”, we can even auto-update customer installations to the head of their respective release-branches which keeps their installations bug-free. This is a huge advantage over the mess we had before.</p>
<p>Now. As master develops and bug-fixes usually happen on the branch(es), how do we integrate them back into the mainline?</p>
<p>This is where the concept of the “<em>Friday merge</em>” comes in.</p>
<p>On Friday, my coworker or I usually merge all changes in the release-branches upwards until they reach master. Because it’s only a week worth of code, conflicts rarely happen and if they do, we remember what the issue was.</p>
<p>If we do a commit on a branch that doesn’t make sense on master because master has sufficiently changed or a better fix for the problem is in master, then we mark these with [DONTMERGE] in the commit message and revert them as part of the merge commit.</p>
<p>On the other hand, in case we come across a bug during development on master and we see how it would affect production systems badly (like a security flaw - not that they happen often), and if we have already devised a simple fix that is safe to apply to the branch(es), we fix it on master and then cherry-pick it onto the branches.</p>
<p>This concept of course heavily depends upon clean patches, which is another feature git excels at: Using features like interactive rebase and interactive add, we can actually create commits that</p>
<ul>
<li>Either do whitespace or functional changes. Never both.</li>
<li>Only touch the lines absolutely necessary for any specific feature or bug</li>
<li>Do one thing and only one.</li>
<li>Contain a very detailed commit message explaining exactly what the change encompasses.</li>
</ul>
<p>This on the other hand, allows me to create extremely clean (and exhaustive) change logs and NEWS file entries.</p>
<p>Now some of these policies about commits were a bit painful to actually make everyone adhere to, but over time, I was able to convince everybody of the huge advantage clean commits provide even though it may take some time to get them into shape (also, you gain that time back once you have to do some blame-ing or other history digging).</p>
<p>Using branches with only bug-fixes and auto-deploying them, we can increase the quality of customer installations and using the concept of a “Friday merge”, we make sure all bug-fixes end up in the development tree without each developer having to spend an awful long time to manually merge or without ending up in merge-hell where branches and master have diverged too much.</p>
<p>The addition of gitorious for easy exchange of half-baked features to make it easier to talk about code before it gets “official” helped to increase the code quality further.</p>
<p>git was a tremendous help with this and I would never in my life want to go back to the dark days.</p>
<p>I hope this additional insight might be helpful for somebody still thinking that SVN is probably enough.</p>
linktrail - a failed startup - introduction2010-01-04T00:00:00+00:00http://pilif.github.com/2010/01/linktrail-a-failed-startup-introduction<p>I guess it’s inevitable. Good ideas may fail. And good ideas may be years ahead of their time. And of course, sometimes, people just don’t listen.</p>
<p>But one never stops learning.</p>
<p>In the year 2000, I took part in a plan of a couple of guys to become the next Yahoo (Google wasn’t quite there yet back then), or, to use the words we used on the site,</p>
<blockquote>For these reasons, we have designed an online environment that offers a truly new way for people to store, manage and share their favourite online resources and enables them to engage in long-lasting relationships of collaboration and trust with other users.</blockquote>
<p>The idea behind the project, called linktrail, was basically what would much later on be picked up by the likes of twitter, facebook (to some extent) and the various community based news sites.</p>
<p>The whole thing went down the drain, but the good thing is that I was able to legally salvage the source code, install it on a personal server of mine and publish it. And now that so many years have passed, it’s probably time to tell the world about this, which is why I have decided to start this little series about the project. What is it? How was it made? And most importantly: why did it fail? And consequently: what could we have done better?</p>
<p>But let’s first start with the basics.</p>
<p>As I said, I was able to legally acquire the database and code (which is mostly written by me anyway) and to install the site on a server of mine, so let’s start with that. The site is available at <a href="http://linktrail.pilif.ch">linktrail.pilif.ch</a>. What you see running there is the result of six months of programming by myself, based on a concept by the guys I worked with to create this.</p>
<p>What is linktrail?</p>
<p>If the tour we made back then is any good, then just <a href="http://linktrail.pilif.ch/Tour/">taking it</a> would probably be enough, but let me phrase in my words: The site is a collection of so called trails which in turn are small units, comparable to blogs, consisting of links, titles and descriptions. These micro-blogs are shown in a popup window (that’s what we had back then) beside the browser window to allow quick navigation between the different links in the trail.</p>
<p>Trails are made by users, either by each user on their own or as a collaborative work between multiple users. The owner of a trail can hand out permissions to everybody or to their friends (using a system quite similar to what we currently see on facebook, for example).</p>
<p>A trail is placed in a directory of trails, modeled on the directory structures that were common back then (by now, we would probably do this very differently). Users can subscribe to trails they are interested in; they will then be notified whenever a subscribed trail is updated, whether by the owner or by anybody else with the rights to update it.</p>
<p>Every user (called expert in the site’s terms) has their profile page (<a href="http://linktrail.pilif.ch/Experts/pilif">here’s mine</a>) that lists the trails they created and the ones they are subscribed to.</p>
<p>The idea was for you as a user to find others with similar interests and form a community around those interests to collaborate on trails. An in-site messaging system helped users communicate with each other: aside from sending plain text messages, it was possible to recommend trails (for easy one-click subscription).</p>
<p>linktrail was my first real programming project, basically 6 months after graduating in what the US would call high school. Combine that fact with the fact that it was created during the high times of the browser wars (year 2000, remember) with web standards basically non-existing, then you can imagine what a mess is running behind the scenes.</p>
<p>Still, the site works fine within those constraints.</p>
<p>In future posts, I will talk about the history of the project, about the technology behind the site, about special features and, of course, about why this all failed and what I would do differently - both in matters of code and organization.</p>
<p>If I’ve piqued your interest, feel free to have a look at the code of the site, which I just now converted from CVS (I started using CVS about 4 months into development, so the first commit is HUGE) to SVN to git and <a href="http://github.com/pilif/linktrail">put up on github</a> for public consumption. It’s licensed under a BSD license, but I doubt you’d find much of use in this mess of PHP3(!) code (though it runs unchanged(!) on PHP5 - topic of another post, I guess), HTML 3.2(!) tag soup and JavaScript hacks.</p>
<p>Oh, and if you can read German, I have also <a href="http://github.com/pilif/ltr-concept">converted the CVS repository</a> that contained the concept papers that were written over time.</p>
<p>In preparation of this series of blog-posts, I have already made some changes to the code base (available at github):</p>
<ul>
<li>login after register now works</li>
<li>warning about unencrypted(!) passwords in the registration form</li>
<li>registering requires you to solve a reCAPTCHA.</li>
</ul>
JSONP. Compromised in 3…2…1…2009-12-01T00:00:00+00:00http://pilif.github.com/2009/12/jsonp-compromised-in-3%e2%80%a62%e2%80%a61%e2%80%a6<p>To embed a vimeo video on some page, I had a look at their different methods for embedding, and the easiest one seemed to be what is basically JSONP - a workaround for the usual restriction that disallows AJAX requests across domain boundaries.</p>
<p>But did you know that JSONP not only works around the cross-domain restriction, it basically is one huge cross-site scripting exploit, and there’s nothing you can do about it?</p>
<p>You might have heard this, and you might have found articles like <a href="http://tav.espians.com/sanitising-jsonp-callback-identifiers-for-security.html">this one</a>, thinking that using such libraries would make you safe. But that’s an incorrect assumption. The solution provided in the article has it backwards: it only helps to protect the originating site against itself, but it does not help at all to protect the calling site from the remote site.</p>
<p>You see, the idea behind JSONP is that you source the remote script using <code>&lt;script src="http://remote-service.example.com/script.js"&gt;</code> and the remote script (after being loaded into your page and thus becoming part of it) is supposed to call some callback of the original site (from the browser’s standpoint, it is part of the original site).</p>
<p>The problem is that you get no control over the loading, let alone the content, of that remote script. Because the cross-domain restrictions prevent you from making an AJAX request to a remote server, you fall back on the native HTML method for cross-domain requests (which arguably should never have been allowed in the first place). At that moment you relinquish all control over your site: the remotely loaded script runs in the context of your page - that is precisely how you get around the cross-domain restrictions.</p>
<p>Because you never see that script until it is loaded, you cannot control what it can do.</p>
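<p>To make the mechanism concrete, here is a minimal sketch of what sourcing a JSONP script amounts to. The endpoint and callback names are made up for illustration, and the <code>eval()</code> calls stand in for the browser executing the remote response as it arrives:</p>

```javascript
// The callback your page defines and hands to the remote service,
// e.g. via ?callback=handleData (hypothetical names).
let received = null;
function handleData(data) {
  received = data;
}

// What you hope the server returns: your callback wrapping plain data.
// Sourcing the remote <script> is equivalent to eval()ing its body.
eval('handleData({"videos": 3})');

// But the server can return *any* script at all, and it runs with
// your page's full privileges:
eval('received = "the remote end now controls this page"');
```

<p>The second <code>eval()</code> is the whole problem: the response is executed, not parsed, so the remote end decides what your page does.</p>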
<p>Using JSONP is basically subjecting yourself to an XSS attack by giving the remote end complete control over your page.</p>
<p>And I’m not just talking about malicious remote sites… what if they themselves are vulnerable to some kind of attack? What if they were the target of a successful attack? You can’t know and once you do know it’s too late.</p>
<p>This is why I recommend never relying on JSONP and finding other solutions for remote scripting: use a local proxy that does sanitization (i.e. strict JSON parsing, which will save you), or rely on the cross-document messaging that was added in later revisions of the upcoming HTML5 standard.</p>
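<p>The "strict JSON parsing" route can be sketched like this: route the request through your own proxy (not shown) and parse the body with <code>JSON.parse</code>, which accepts data only - a payload that is code rather than JSON throws instead of executing:</p>

```javascript
// Plain data parses fine...
const data = JSON.parse('{"videos": 3}');

// ...but a script payload is rejected instead of run.
let rejected = false;
try {
  JSON.parse('window.location = "http://evil.example.com"');
} catch (e) {
  rejected = true; // SyntaxError: not valid JSON
}
```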
Sense of direction vs. field of view2009-10-08T00:00:00+00:00http://pilif.github.com/2009/10/sense-of-direction-vs-field-of-view<p>Last Saturday, I bought the <a href="http://www.amazon.com/Metroid-Prime-Trilogy-Collectors-Nintendo-Wii/dp/B002ATY7JE/ref=sr_1_2?ie=UTF8&s=videogames&qid=1255016237&sr=8-2">Metroid Prime Trilogy</a> for the Wii. I didn’t yet have the Wii Metroid, and it’s impossible for me to use the GameCube to play the old games, as the distance between my couch and the receiver is too large for the GameCube’s wired joypads. It has been a <a href="/2006/11/the-atmosphere-in-good-games/">long while</a> since I last played any of the 3D Metroids, and seeing the box in a store made me want to play them again.</p>
<p>So all in all, this felt like a good deal to me: Getting the third Prime plus the possibility to easily play the older two for the same price that they once asked for the third one alone.</p>
<p>Now I’m in the middle of the first game and I made a really interesting observation: my usually very good sense of direction seems to require a minimum-sized field of view to get going. While playing on the GameCube, I was constantly busy looking at the map and felt unable to recognize even the simplest landmarks.</p>
<p>I spent the game in a constant state of feeling lost, not knowing where to go and forgetting how to get back to places where I had seen then-unreachable powerups.</p>
<p>Now it might just be that I remember the world from my first playthrough, but this time, playing feels completely different to me: I constantly know where I am and where to go. Even in rooms that are very similar to each other, I always know where I am and how to get from point A to point B.</p>
<p>When I want to re-visit a place, I just go there. No looking at the map. No backtracking.</p>
<p>This is how I usually navigate the real world, so after so many years of feeling lost in 3D games, I’m finally able to find my way in them as well.</p>
<p>Of course I’m asking myself what has changed, and in the end it’s either the larger screen size and wide-screen format of the Wii port, or maybe the controls via the Wiimote, which feel much more natural. The next step for me will be to try to find out which it is by connecting the Wii to a smaller (but still wide) screen.</p>
<p>But aside of all that, Metroid just got even better - not that I believed that to be possible.</p>
Programming language names2009-09-30T00:00:00+00:00http://pilif.github.com/2009/09/programming-languages-names<p>Today in the office, a discussion about the merits of Ruby compared to Python and the other way around (isn’t it fun to have people around who are actually willing to discuss such issues?) led to us making fun of different programming languages by interjecting some sore points about them into their names.</p>
<p>The Skype conversation went roughly as follows (I removed some stuff for brevity but all the language names are intact):</p>
<blockquote><strong>thepilif</strong>: <em>ja-long variable names and no function pointers-va really sucks</em>
<strong>thepilif</strong>: though there's always <em>C(*^~**<<)++</em>
<strong>thepilif</strong>: and then there's alyways <em>Del-Access violation at address 02E41C10. Read of address 02E41C10-phi</em>
<strong>thepilif</strong>: or <em>P-false==true-HP</em>
<strong>Coworker</strong>: ok so for the sake of it i should add <em>py thon</em>
<strong>thepilif</strong>: or <em>java-everything is global-script</em>
<strong>thepilif</strong>: too bad it doesn't work for C
<strong>thepilif</strong>: <em>C-sigsegv</em>
<strong>thepilif</strong>: they know why they just chose one letter
<strong>Coworker</strong>: exactly, k&r are smart
<strong>Coworker</strong>: <em>has-how the fuck do i do a print-skell</em>?
<strong>Coworker</strong>: <em>pe/(^$^)/rl</em>
<strong>thepilif</strong>: or <em>pe-module? object? hash? what's the difference-rl</em>
<strong>Coworker</strong>: so we could say <em>pe/$^/rl</em>
<strong>thepilif</strong>: and <em>ru-lets rewrite our syntax on the fly-by</em>
<strong>Coworker</strong>: <em>l(i(s(p)))</em>
<strong>thepilif</strong>: can't you wrap this into another pair of ()?
<strong>thepilif</strong>: <em>(l(i(s(p))))))</em>
<strong>Coworker</strong>: yes even better
<strong>thepilif</strong>: and add the syntax error
<strong>thepilif</strong>: one too many )
<strong>Coworker</strong>: it's impossible to match them just by looking
<strong>thepilif</strong>: totally impossible. yes
<strong>Coworker</strong>: <strong>the human brain is no fucking pushdown automata</strong>
<strong>Coworker</strong>: but maybe the lisp people are
<strong>Coworker</strong>: vb! vb needs one
<strong>thepilif</strong>: <em>visual-on error resume next-basic</em>
<strong>thepilif</strong>: and of course <em>brain-<<<<<******<<<>>>>-fuck</em>
<strong>thepilif</strong>: <em>c-tries to be dynamic, but var just doesn't cut it-#</em>
<strong>thepilif</strong>: <em>c-not quite java nor c(++)?-#</em>
<strong>thepilif</strong>: though the first one feels better
<strong>thepilif</strong>: oh.. and of course <em>HT-unknown error-ML</em>
<strong>thepilif</strong>: as a tribute to IE6
<strong>thepilif</strong>: and of course <em>la-no bugs but still not usable-tex</em>
<strong>thepilif</strong>: sorry, Knuth
<strong>thepilif</strong>: and <em>send-$*$_**^$$$-mail</em></blockquote>
<p>So the question is: Do you have anything to add? Do you feel that we were overly unfair?</p>
Introducing sacy, the Smarty Asset Compiler2009-09-15T00:00:00+00:00http://pilif.github.com/2009/09/introducing-sacy-the-smarty-asset-compiler<p>We all know how beneficial to the performance of a web application it can be to serve assets like CSS files and JavaScript files in larger chunks as opposed to smaller ones.</p>
<p>The main reason behind this is the latency incurred by requesting a resource from the server, plus the additional bandwidth of the request metadata, which can grow quite large once you take cookies into account.</p>
<p>But knowing this, we also want to keep files separate during development to help with the debugging and development process. We also don’t want deployment to become much more difficult, so we naturally dislike solutions that require additional scripts to run at deployment time.</p>
<p>And we certainly don’t want to mess with the client-side caching that HTTP provides.</p>
<p>And maybe we’re using Smarty and PHP.</p>
<p>So this is where <a href="http://github.com/pilif/sacy">sacy</a>, the <a href="http://github.com/pilif/sacy">Smarty Asset Compiler</a> plugin comes in.</p>
<p>The only thing (besides a one-time configuration of the plugin) you have to do during development is to wrap all your <code>&lt;link&gt;</code> tags with <code>{asset_compile}…{/asset_compile}</code> and the plugin will do everything else for you, where everything includes:</p>
<ul>
<li>automatic detection of actually linked files</li>
<li>automatic detection of changed files</li>
<li>automatic minimizing of linked files</li>
<li>compilation of all linked files into one big file</li>
<li>linking that big file for your clients to consume. Because the file is still served by your webserver, there's no need for complicated handling of client-side caching methods (ETag, If-Modified-Since and friends): Your webserver does all that for you.</li>
<li>Because the cached file gets a new URL every time any of the corresponding source files change, you can be sure that requesting clients will retrieve the correct, up-to-date version of your assets.</li>
<li>sacy handles concurrency, without even blocking while one process is writing the compiled file (and of course without corrupting said file).</li>
</ul>
<p>sacy is released under the MIT license and ready to be used (though it currently only handles CSS files and ignores the media-attribute - stuff I’m going to change over the next few days).</p>
<p>Interested? Visit the <a href="http://github.com/pilif/sacy">project’s page on GitHub</a> or even better, fork it and help improving it!</p>
Twisted Tornado2009-09-14T00:00:00+00:00http://pilif.github.com/2009/09/twisted-tornado<p>Lately, the net has been busy talking about the new <a href="http://github.com/facebook/tornado">web server</a> released by <a href="http://www.friendfeed.com">FriendFeed</a> last week and how their server basically does the same thing as the <a href="http://twistedmatrix.com/trac/">Twisted framework</a> that has been around for so much longer. One <a href="http://pwpwp.blogspot.com/2009/09/python-community-in-anguish-pain.html">blog entry</a> ends with</p>
<blockquote>Why Facebook/Friendfeed decided to create a new web server is completely beyond us.</blockquote>
<p>Well. Let me add my two cents. Not from a Python perspective (I’m quite the Python newbie, only having completed one bigger project so far), but from a software development perspective. I feel qualified to add the cents because I’ve <em>been there and done that</em>.</p>
<p>When you start any project, you will be on the lookout for a framework or solution to base your work on. Often times, you already have some kind of idea of how you want to proceed and what the different requirements of your solution will be.</p>
<p>Of course, you’ll be comparing your requirements against the solutions around, but chances are that none of the existing solutions will match your requirements exactly, so you will be faced with changing one to match.</p>
<p>This involves not only the changes themselves but also other considerations:</p>
<ul>
<li>is it even possible to change an existing solution to match your needs?</li>
<li>if the existing solution is an open source project, is there a chance of your changes being accepted upstream (this is <em>not a given</em>, by the way).</li>
<li>if not, are you willing to back- and forward-port your changes as new upstream versions get released? Or are you willing to stick with the version for eternity, manually back-porting security-issues?</li>
</ul>
<p>and most importantly</p>
<ul>
<li>what takes more time: writing a tailor-made solution from scratch, or learning how the closest-matching solution ticks in order to make it do what you want?</li>
</ul>
<p>There is a very strong perception around, that too many features mean bloat and that a simpler solution always trumps the complex one.</p>
<p>Have a look at articles like «<a href="http://briancarper.net/blog/clojure-1-php-0">Clojure 1, PHP 0</a>» which compares a home-grown, tailor-made solution in one language to a complete framework in another and it seems to favor the tailor-made solution because it was more performant and felt much easier to maintain.</p>
<p>The truth is, you can’t have it both ways:</p>
<p>Either you are willing to live with «bloat» and customize an existing solution, adding some features and not using others, or you are unwilling to accept any bloat and you will do a tailor-made solution that may be lacking in features, may reimplement other features of existing solutions, but will contain <em>exactly</em> the features you want. Thus it will not be «bloated».</p>
<p>FriendFeed decided to go the tailor-made route, but unlike the many other projects that take that route every day (take Django’s reimplementations of many existing Python technologies like templating and ORM as another example) and keep the result internal, they actually went public.</p>
<p>Not with the intention to bad-mouth Twisted (though it kinda sounded that way due to bad choice of words), but with the intention of telling us: «Hey - here’s the tailor-made implementation which we used to solve our problem - maybe it is or parts of it are useful to you, so go ahead and have a look».</p>
<p>Instead of complaining that reimplementation and a bit of NIH was going on, the community could embrace the offering and try to pick the interesting parts they see fitting for their implementation(s).</p>
<p>This kind of reinventing the wheel is a standard process that is going on all the time, both in the Free Software world as in the commercial software world. There’s no reason to be concerned or alarmed. Instead we should be thankful for the groups that actually manage to put their code out for us to see - in so many cases, we never get a chance to see it and thus lose a chance at making our solutions better.</p>
Snow Leopard and PHP2009-08-29T00:00:00+00:00http://pilif.github.com/2009/08/snow-leopard-and-php<p>Earlier versions of Mac OS X always had pretty outdated versions of PHP in their default installation, so what you usually did was to go to <a href="http://www.entropy.ch">entropy.ch</a> and fetch the packages provided there.</p>
<p>Now, after updating to Snow Leopard you’ll notice that the entropy configuration has been removed and once you add it back in, you’ll see Apache segfaulting and some missing symbol errors.</p>
<p>Entropy has not updated the packages for Snow Leopard yet, so you could have a look at the PHP that came with stock Snow Leopard: this time it’s even bleeding edge, as Snow Leopard ships with PHP 5.3.0.</p>
<p>Unfortunately though, some vital extensions are missing - most notably for me, the PostgreSQL extension.</p>
<p>This time around though, Snow Leopard comes with a functioning PHP development toolset, so there’s nothing stopping you from building it yourself. Here’s how to get the official PostgreSQL extension working with Snow Leopard’s stock PHP:</p>
<ol>
<li>Make sure that you have installed the current Xcode Tools. You'll need a working compiler for this.</li>
<li>Make sure that you have installed PostgreSQL and know where it is on your machine. In my case, I've used the <a href="http://www.enterprisedb.com/products/pgdownload.do#osx">One-click installer</a> from EnterpriseDB (which survived the update to 10.6).</li>
<li>Now that Snow Leopard uses a fully 64-bit userspace, we'll have to make sure that the PostgreSQL client library is available as a 64-bit binary - or even better, as a universal binary. Unfortunately, that's not the case with the one-click installer, so we'll have to fix that first:
<ol>
<li>Download the sources of the PostgreSQL version you have installed from postgresql.org</li>
<li>Open a terminal and use the following commands:
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="gp">% </span>tar xjf postgresql-[version].tar.bz2
<span class="gp">% </span><span class="nb">cd </span>postgresql-[version]
<span class="gp">% </span><span class="nv">CFLAGS</span><span class="o">=</span><span class="s2">"-arch i386 -arch x86_64"</span> ./configure --prefix<span class="o">=</span>/usr/local/mypostgres
<span class="gp">% </span>make</code></pre></figure>
make will fail sooner or later because the PostgreSQL build scripts can't handle building a universal binary server, but the compile will progress far enough for us to build libpq. Let's do this:
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="gp">% </span>make -C src/interfaces
<span class="gp">% </span>sudo make -C src/interfaces install
<span class="gp">% </span>make -C src/include
<span class="gp">% </span>sudo make -C src/include install
<span class="gp">% </span>make -C src/bin
<span class="gp">% </span>sudo make -C src/bin install</code></pre></figure>
</li>
</ol>
</li>
<li>Download the php 5.3.0 source code from their website. I used the bzipped version.</li>
<li>Open your Terminal and cd to the location of the download. Then use the following commands:
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="gp">% </span>tar -xjf php-5.3.0.tar.bz2
<span class="gp">% </span><span class="nb">cd </span>php-5.3.0/ext/pgsql
<span class="gp">% </span>phpize
<span class="gp">% </span>./configure --with-pgsql<span class="o">=</span>/usr/local/mypostgres
<span class="gp">% </span>make -j8 <span class="c"># in case of one of these nice 8 core macs :p</span>
<span class="gp">% </span>sudo make install
<span class="gp">% </span><span class="nb">cd</span> /etc
<span class="gp">% </span>cp php.ini-default php.ini</code></pre></figure>
</li>
<li>Now edit your new php.ini and add the line <code>extension=pgsql.so</code></li>
</ol>
<p>And that’s it. Restart Apache (using apachectl or the System Preferences) and you’ll have PostgreSQL support.</p>
<p>All in all this is a tedious process, and it’s the price we early adopters constantly have to pay.</p>
<p>If you want an honest recommendation on how to run PHP with PostgreSQL support on Snow Leopard, I’d say: Don’t. Wait for the various 3rd party packages to get updated.</p>
Alt-Space2009-08-27T00:00:00+00:00http://pilif.github.com/2009/08/alt-space<p>Today, I was looking into the new jnlp_href way of launching a Java Applet. Just like applet-launcher, this allows one to create applets that depend on native libraries without the usual hassle of manually downloading the files and installing them.</p>
<p>In contrast to applet-launcher, it’s built into later versions of Java 1.6 and it’s officially supported, so I have higher hopes concerning its robustness.</p>
<p>It’s even possible to keep the applet-launcher calls in there if the user has an older Java Plugin that doesn’t support jnlp_href yet.</p>
<p>So in the end, you just write a .jnlp file describing your applet and add</p>
<pre>&lt;param name="jnlp_href" value="http://www.example.com/path/to/your/file.jnlp"&gt;</pre>
<p>and be done with it.</p>
<p>Unless of course, your JNLP file has a syntax error. Then you’ll get this in your error console (at least in case of this specific syntax error):</p>
<pre>java.lang.NullPointerException
at sun.plugin2.applet.Plugin2Manager.findAppletJDKLevel(Unknown Source)
at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Ausnahme: java.lang.NullPointerException</pre>
<p>How helpful is that?</p>
<p>Thanks, by the way, for insisting on displaying a half-assed German translation on my otherwise English OS: <a href="/2008/09/automatic-language-detection/">never use locale info for determining the UI language</a>, please.</p>
<p>Of course, this error does not give any indication of what the problem could be.</p>
<p>And even worse: The error in question is the topic of this blog post: It’s the dreaded Alt-Space character, 0xa0, or NBSP in ISO 8859-1.</p>
<p>0xa0 looks like a space, feels like a space, is incredibly easy to type instead of a space, but it’s not a space - not in the least. Depending on your compiler/parser, this will blow up in various ways:</p>
<pre>pilif@celes ~ % ls | grep gnegg
zsh: command not found: grep
pilif@celes ~ %
pilif@celes ~ % cat test.php
&lt;?
echo "gnegg";
?&gt;
pilif@celes ~ % php test.php
PHP Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in /Users/pilif/test.php on line 2
Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in /Users/pilif/test.php on line 2
pilif@celes ~ %</pre>
<p>and so on.</p>
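<p>A quick way to see why parsers choke on this - a JavaScript sketch here, but the same idea holds in any language: U+00A0 renders like a space, yet it is not whitespace to a tokenizer.</p>

```javascript
const space = " ";       // U+0020, a real space
const nbsp  = "\u00a0";  // U+00A0, what Alt-Space inserts

// They render identically but compare unequal...
console.log(space === nbsp); // false

// ...so splitting on spaces leaves the NBSP-joined text as one token,
// which is exactly how the shell and PHP examples above blow up.
const tokens = ("ls" + nbsp + "-l").split(" ");
// tokens is a single element: "ls\u00a0-l"
```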
<p>Now you people in the US with US keyboard layouts might think that I’m just one of those whiners - after all, how stupid must one be to press Alt-Space all the time? Probably stupid enough to deserve stuff like this.</p>
<p>Before you think these nasty thoughts, I ask you to consider the Swiss German keyboard layout: nearly all the characters programmers use are accessed by pressing Alt-[some letter] - at least on the Mac. Windows uses AltGr, the right Alt key, but on the Mac, either Alt will do.</p>
<p>So when you look at the shell line above:</p>
<pre>ls | grep gnegg</pre>
<p>you’ll see how easy it is to hit alt-space: First I type ls, then space. Then I press and hold alt-7 for the pipe and then, I am supposed to let go of alt and hit space. But because my left hand is on alt and the right one is pressing space, it’s very easy to hit space before letting go of alt.</p>
<p>Now instead of getting immediate feedback, nothing happens. It looks as if a space had been added, when in fact something else has been inserted - something that is not recognized as a whitespace character and thus is completely different from a space, despite looking exactly the same.</p>
<p>As much fun as reading <code>hexdump -C</code> output is - I need this to stop.</p>
<p>Dear internet! How can I make my Mac (or Linux when using the Mac keyboard layout) stop recognizing Alt-Space?</p>
<p>To take air out of the eventually arriving troll’s sails:</p>
<ul>
<li>I won't use Windows again. Thank you. Neither do I want to use Linux on my desktop.</li>
<li>I cannot use the US key bindings because my brain just can't handle the keyboard layout changing all the time, and as a native German speaker, I do have to type umlauts now and then - often enough, actually, that the ¨+vowel combo isn't acceptable.</li>
<li>While running Mac OS X, I'm stuck with the mac keyboard layout - I can't use the Windows one.</li>
</ul>
<p>The above JNLP error (printed here just in case somebody else has the same issue) caused me to lose nearly 5 hours of my life and will force me to work this weekend - who’d expect an XML parser error caused by a space that isn’t one when seeing the above call stack?</p>
<p><strong>Update: </strong>A <a href="http://www.reddit.com/r/programming/comments/9epdp/the_stupid_altspace_character_0xa0_is_killing_me/c0cha99">commenter on reddit.com</a> has recommended to use <a href="http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&item_id=ukelele">Ukelele</a> which I did and it helped me to create a custom keyboard layout that makes alt-space work like just space. That’s the best solution for my specific taste, so thanks a lot!</p>
OpenStreetMap2009-08-13T00:00:00+00:00http://pilif.github.com/2009/08/openstreetmap<p>The <a href="http://www.twit.tv/floss81">last episode of FLOSS Weekly</a> consisted of an interview with Steve Coast of <a href="http://www.openstreetmap.org">OpenStreetMap</a>. I knew about the project, but I was under the impression that it was in its infancy, both content-wise and from a technical perspective.</p>
<p>During the interview I learned that it’s surprisingly complete (unless, of course, you need a map of Canada it seems) and highly advanced from a technical point of view.</p>
<p>But what’s really interesting is how terribly easy it is to contribute. For smaller edits, you just click the edit link and use the Flash editor to paint a road or give it a name. If you need or want to do more, there’s a really easy to use Java-based editor:</p>
<p>First you drag a rectangle onto a pre-rendered version of the map, which causes the server to send you the vector data for that area, and then you can edit whatever you want.</p>
<p>If you have them, you can import traces from a GPS logger to help you add roads and paths, and when you are finished, you press a button and the changes get uploaded and become visible to the public a few minutes later (though one modification I made took about an hour to arrive on the web).</p>
<p>When the same nodes were updated in the meantime, a really nice conflict resolution assistant helps you resolve the conflicts.</p>
<p>For me personally, this has the potential to become my new after-work time sink as it combines quite many passions of mine:</p>
<ul>
<li>The GPS tracking, importing and painting of maps is pure technology fun.</li>
<li>Actually being outside to generate the traces is healthy and also a lot of fun</li>
<li>Maps also are a passion of mine. I love to look at maps and I love to compare them to my mental image of the places they are showing.</li>
</ul>
<p>And besides all that, Open Street Map is complete enough to be of real use. For biking or hiking it even trumps Google Maps by much.</p>
<p>Still, at least near where I live, there are many small issues that can easily be fixed.</p>
<p>As the different editors are really easy to use, fixing these issues is a lot of fun and I’m totally seeing myself cleaning out all small mistakes I come across or even adding stuff that’s missing. After all, this also provides me with a very good reason to visit the places where I grew up to complete some parts.</p>
<p>The whole concept behind being able to update a map by just a couple of mouse clicks is very compelling too as it finally gives us the potential to have really accurate maps in a very timely fashion. For example: Last October, one of the roads near my house closed and just recently the tracks of the <a href="/2004/03/some-suburban-railways-i/">Forchbahn</a> were moved a bit.</p>
<p>Just today I added these changes to OpenStreetMap, and now OSM is the only publicly available map that correctly shows the traffic situation. And all that with 15 minutes of easy but interesting work.</p>
<p>For those interested, my <a href="http://www.openstreetmap.org/user/pilif">Open Street Map user profile</a> is, of course, pilif.</p>
SMS is dead2009-07-07T00:00:00+00:00http://pilif.github.com/2009/07/sms-is-dead<p>BeejiveIM is the first multiprotocol IM application for the iPhone that supports the new background notification features of firmware 3.0. Yesterday I went ahead and bought that application, curious to see how well it would work.</p>
<p>And just now my phone vibrated and on the display, there was an IM message a coworker sent me via Google Talk. The user experience was exactly the same as it would have been with an SMS - well - nearly the same - the phone made a different sound.</p>
<p>So the dream I had <a href="http://www.gnegg.ch/2003/03/just-like-sms-only-cheaper/">many moons ago</a> (6 years - boy - how time flies) has finally come true, with one difference: Whereas back then the MB cost CHF 7, now it’s practically free, considering that I’m unable to actually use up my traffic quota and even then, it’s only CHF 0.10 now.</p>
<p>So let’s keep that in mind and also consider that SMS pricing hasn’t changed in the last six years.</p>
<p>So while IM was 52 times cheaper than SMS back then, the price advantage now ranges somewhere between 3500 times cheaper and infinitely cheaper.</p>
<p>SMS pricing needs to be looked at. This just <strong>cannot</strong> be.</p>
Of all the hardware that can break...2009-07-06T00:00:00+00:00http://pilif.github.com/2009/07/of-all-the-hardware-that-can-break<p>… it has to be the one that’s most difficult to replace.</p>
<p>Today, my <a href="http://www.gnegg.ch/2006/09/upgrading-the-home-entertainment-system/">Gefen HDMI over Cat5 adapter</a> died. Well. It didn’t die completely, it just lost its ability to produce a stable image. What is transmitted is very intermittent and in the few seconds the image is available, it’s heavily distorted.</p>
<p>Also, it’s not the obvious issue (faulty cabling) as the problems did not go away after using two very short (1m) cat 5 cables to test.</p>
<p>Now this is really bad for a variety of reasons:</p>
<ul>
<li>Only just last Saturday I bought <a href="http://en.wikipedia.org/wiki/Star_Ocean:_The_Last_Hope">Star Ocean</a> and <a href="http://en.wikipedia.org/wiki/Tales_of_Vesperia">Tales of Vesperia</a> for my 360, giving me a total play time of 1.5 hours so far.</li>
<li>Yesterday I noticed that Worms: Armageddon was released for Xbox Live Arcade, and I have already invited Ebi after the huge success that was our earlier Worms evening on the 360.</li>
<li>My setup is totally dependent on the two extenders as I am covering more than 20 meters of distance between receiver and projector. No extender, no Xbox, no Wii, no projector.</li>
<li>Last time I waited around six weeks for the extender to arrive.</li>
</ul>
<p>Of all the hardware I have at home, the HDMI extender is the worst to break. Not only is it very hard to replace (see above), it’s so deeply integrated into my home cinema setup that just <em>debugging</em> what was going on took a ladder, a screwdriver, a hex-wrench and unwinding an ungodly heap of cables.</p>
<p>All of that in an apartment whose temperature is currently at 30°C (86 °F) and with a hell of a headache.</p>
<p>I’d take anything else going down. Anything but that Gefen extender. My Xbox? Sure. Shion? It’d suck, but sure, if it has to be, go ahead. My receiver? That would hurt as it was very expensive, but at least it’s easily replaced.</p>
<p>Why did it have to be that Gefen extender? Why??</p>
PostgreSQL 8.42009-07-01T00:00:00+00:00http://pilif.github.com/2009/07/postgresql-8-4<p>Like a clockwork, about one year after the release of PostgreSQL 8.3, the team behind the best database on this world did it again and <a href="http://www.postgresql.org/about/news.1108">released PostgreSQL 8.4</a>, the latest and greatest in a long series of awesomeness.</p>
<p>Congratulations to everyone involved and might you have the strength to continue to improve your awesome piece of work.</p>
<p>For me, the highlights of this new release are:</p>
<ul>
<li><a href="http://www.postgresql.org/docs/8.4/interactive/app-pgrestore.html">parallel restore</a>: I just tried this out and restoring a dump that usually took around 40 minutes (in standard sql/text format) now takes 5 minutes.</li>
<li>The improvements to psql usability just make it even clearer that psql isn't just a command line database tool, but that it's one of the best interfaces to access the data and administer the server. psql hands-down beats whatever database GUI tool I have seen so far.</li>
<li>truncate table restart identity is very useful during development</li>
<li>no more max_fsm_pages makes maintaining the database even easier and removes one variable to keep track of.</li>
</ul>
<p>Thanks again for yet another awesome release.</p>
iPhone works for me2009-06-26T00:00:00+00:00http://pilif.github.com/2009/06/iphone-works-for-me<p>A year ago, <a href="http://www.gnegg.ch/2008/06/which-phone-for-me/">I was comparing mobile phones</a>: I bought a Touch Diamond and <a href="http://www.gnegg.ch/2008/07/what-sucks-about-the-touch-diamond/">regretted it</a>, then bought an iPhone 3G which I used for a year, and now I have even upgraded to the 3GS. Now that I just got yet another comment on my post about the Touch Diamond, I thought I should recycle that comparison table from a year ago, but this time I’ll compare my assumptions about the iPhone back then with how it actually turned out.</p>
<p>So, here’s the table:</p>
<table id="mobtable" border="0" cellspacing="0" cellpadding="0">
<thead>
<tr>
<td></td>
<th>assumed</th>
<th>actually</th>
</tr>
</thead>
<tbody>
<tr class="devider">
<th colspan="4">Phone usage</th>
</tr>
<tr>
<th>Quick dialing of arbitrary numbers</th>
<td></td>
<td>actually, using the favorites list, and even using the touch keypad with its very large buttons, I never had a problem dialing a number.</td>
</tr>
<tr>
<th>Acceptable battery life (more than two days)</th>
<td>?</td>
<td>meh - two to three days, but as I'm syncing podcasts every day, I get to charge the phone every day as well, so this doesn't matter as much</td>
</tr>
<tr>
<th>usable as modem</th>
<td>probably not</td>
<td>it is now (using a <a href="http://help.benm.at">little help</a> for my Swisscom contract). As I was bound to my old contract with Sunrise until May, I would have been able to use my old phone in an emergency, but that thankfully didn't happen.</td>
</tr>
<tr>
<th>usable while not looking at the device</th>
<td></td>
<td>I got really dependent upon the small button on my headset plus the volume hardware buttons on the side of the device, both allowing me to do 90% of the stuff I was able to do on the old phone without looking at it.</td>
</tr>
<tr>
<th>quick writing of SMS messages</th>
<td></td>
<td>actually, I'm nearly as fast as with T9 - having all keys at my disposal eliminates the need to select the right word in the menus, but not having physical keys makes me wrestle with typos or auto correction, which removes a bit of the advantage. It's not nearly as bad as I had imagined though.</td>
</tr>
<tr>
<th>Sending and receiving of MMS messages</th>
<td></td>
<td>works now. I missed the feature once or twice in the 2.0 days, but usually sending a picture via email worked just as well (and was cheaper).</td>
</tr>
<tr class="devider">
<th colspan="4">PIM usage</th>
</tr>
<tr>
<th>synchronizes with google calendar/contacts</th>
<td>maybe</td>
<td>yes. Since the beginning of the year, this works really well because <a href="http://www.google.com/mobile/apple/sync.html">Google just pretends to be Exchange</a></td>
</tr>
<tr>
<th>synchronizes with Outlook</th>
<td>maybe</td>
<td>yes, directly via ActiveSync - but since February, our company went the Google Apps route, so this has become irrelevant.</td>
</tr>
<tr>
<th>usable calendar</th>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<th>usable todo list</th>
<td></td>
<td></td>
</tr>
<tr class="devider">
<th colspan="4">media player usage</th>
</tr>
<tr>
<th>integrates into current iTunes based podcast workflow</th>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<th>straight forward audio playing interface</th>
<td>yes</td>
<td>yes (see my note about the button on the headset above)</td>
</tr>
<tr>
<th>straight forward video playing interface</th>
<td></td>
<td>actually, the interface is perfectly fine</td>
</tr>
<tr>
<th>acceptable video player</th>
<td>limited</td>
<td>kinda yes. Using my 8 core Mac Pro, it's quick and easy to convert a video, but lately I'm using my home cinema equipment for the real movies/tv series and the iPhone for video podcasts which already come in the native format. Still, it's no generic video player capable of playing video in the most common formats and it doesn't really support playing from any server in my home network.</td>
</tr>
<tr class="devider">
<th colspan="4">hackability</th>
</tr>
<tr>
<th>ssh client</th>
<td>maybe</td>
<td>yes. TouchTerm works very well - much better than any of the mobile PuTTY variants (Symbian and WinMob)</td>
</tr>
<tr>
<th>skype client</th>
<td>maybe</td>
<td>not quite. Actually usable with the speakerphone or headset, but not as useful in general use due to the inability to run in the background</td>
</tr>
<tr>
<th>OperaMini (browser usable on GSM)</th>
<td></td>
<td>not needed any more due to UMTS and near-flat rates.</td>
</tr>
<tr>
<th>WLAN-Browser</th>
<td>yes</td>
<td>yes</td>
</tr>
</tbody></table>
<p>Nearly all my gripes about the iPhone have either become irrelevant or turned out not to be a problem after all.</p>
<p>Combine the very acceptable performance as a phone with the perfect performance as a podcast player, music player, acceptable gaming platform and perfect mobile internet device, and it becomes clear that the iPhone has become the perfect phone for me.</p>
<p>I upgraded to the 3GS mainly because of the larger capacity, but now that I have it, the speed improvement actually matters much more than the capacity increase as 32 GB still is not enough to fit all my audio books, so I’m still limited to all my music, all unlistened podcasts and a selection of audio books.</p>
<p>But the speed improvement from the 3G to the 3GS is so incredible, that I’m still very happy I made the purchase. All the other features are either not quite ready for prime time (voice control) or not really interesting to me (video recording, compass).</p>
<p>Still. After looking for the perfect phone for 8 years now, I finally found the hardware in the iPhone.</p>
802.11n, Powerline and Sonos2009-06-10T00:00:00+00:00http://pilif.github.com/2009/06/80211n-powerline-and-sonos<p>I decided to have a look into the networking setup for my bedroom as lately, I was getting really bad bandwidth.</p>
<p>Earlier, while unable to stream 1080p into my bedroom, I was able to watch 720p, but lately even that has become choppy at best.</p>
<p>In my bedroom, I was using a Sonos Zone Player 100 connected via Ethernet to a Devolo A/V 200MBit power line adapter.</p>
<p>I have been using the switch integrated into the zone player to connect the bedroom MacMini media center and the PS3 to the network. The idea was that powerline would provide better bandwidth than WiFi, which it initially seemed to do, but as I said, lately this system became really painful to use.</p>
<p>Naturally I had enough and wanted to look into other options.</p>
<p>Here’s a quick list of my findings:</p>
<ul>
<li>The Sonos ZonePlayer actually acts as a bridge. If one player is connected via Ethernet, it'll use its mesh network to wirelessly bridge that Ethernet connection to the switch inside the Sonos. I'm actually deeply astonished that I even got working networking with my configuration.</li>
<li>Either my Devolo adapter is defective or something strange is going on in my power line network - a test using FTP never yielded more than 1 MB/s throughput, which explains why 720p didn't work.</li>
<li>While still not a ratified standard, 802.11n, at least as implemented by Apple, works really well and delivers a constant 4 MB/s throughput in my configuration.</li>
<li>Not wanting to risk cross-vendor incompatibilities (802.11n is not ratified after all), I went the Apple AirPort route, even though there probably would have been cheaper solutions.</li>
<li>Knowing that bandwidth rapidly decreases with range, I bought one AirPort Extreme Base Station and three AirPort Expresses which I'm using to do nothing but extend the 5Ghz n network.</li>
<li>All the AirPort products have a nasty constantly lit LED which I had to cover up - this is my bedroom after all, but I still wanted line of sight to optimize bandwidth. There is a configuration option for the LED, but it only provides two options: Constantly on (annoying) and blinking on traffic (very annoying).</li>
<li>While the large AirPort Extreme can create both a 2.4 GHz and a 5 GHz network, the Express ones can only extend either one of them!</li>
</ul>
<p>This involved a lot of trying out, changing around configurations and a bit of research, but going from 0.7 MB/s to 4 MB/s in throughput certainly was worth the time spent.</p>
<p>Also, yes, these numbers are in Mega<strong>bytes</strong> unless I’m writing MBits in which case it’s Mega<strong>bits</strong>.</p>
(Unicode-)String handling done right2009-05-22T00:00:00+00:00http://pilif.github.com/2009/05/unicode-string-handling-done-right<p>Today, I found myself reading the <a href="http://diveintopython3.org/strings.html">chapter about strings</a> on <a href="http://diveintopython3.org">diveintopython3.org</a>.</p>
<p>Now, I’m no Python programmer by any means. Sure. I know my share of Python and I really like many of the concepts behind the language. I have even written some smaller scripts in Python, but it’s not my day-to-day language.</p>
<p>That chapter about string handling really really impressed me though.</p>
<p>In my opinion, handling Unicode strings the way Python 3 does is exactly how it should be done in every development environment: Keep strings and collections of bytes completely separate and provide explicit conversion functions to convert from one to the other.</p>
<p>And hide the actual implementation from the user of the language! A string is a collection of characters. I don’t have to care how these characters are stored in memory and how they are accessed. When I need that information, I will have to convert that string to a collection of bytes, giving an explicit encoding how I want that to be done.</p>
<p>This is exactly how it should work, but implementation details leaking into the language mush this up in every other environment I know of, making it a real pain to deal with multibyte character sets.</p>
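<p>To make the contrast concrete, here is a small sketch of that strings-versus-bytes separation, transposed from the chapter’s Python 3 into JavaScript terms using TextEncoder/TextDecoder (my transposition, not something from the chapter itself):</p>

```javascript
// Strings are sequences of characters; bytes only exist after an
// explicit conversion with a named encoding - the Python 3 model,
// expressed with JavaScript's TextEncoder/TextDecoder.
const bytes = new TextEncoder().encode("Zürich"); // string -> UTF-8 bytes
console.log(bytes instanceof Uint8Array);          // true: a byte buffer, not a string
console.log(bytes.length);                         // 7: "ü" takes two bytes in UTF-8

const text = new TextDecoder("utf-8").decode(bytes); // bytes -> string, encoding named explicitly
console.log(text === "Zürich");                      // true
```

<p>The point is the same as in the chapter: there is no way to get at the bytes without naming an encoding, and no way to treat bytes as characters without decoding them first.</p>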
<p>Features like this are what convince me to look into new stuff. Maybe it IS time to do more Python after all.</p>
Why I love the reddit crowd2009-05-19T00:00:00+00:00http://pilif.github.com/2009/05/why-i-love-the-reddit-crowd<p><a href="http://www.gnegg.ch/wp-content/uploads/2009/05/picture-1.png"><img class="aligncenter size-full wp-image-576" title="picture-1" src="http://www.gnegg.ch/wp-content/uploads/2009/05/picture-1.png" alt="picture-1" width="490" height="127" /></a></p>
<p>that’s why.</p>
No more hard drives for me!2009-05-13T00:00:00+00:00http://pilif.github.com/2009/05/no-more-hard-drives-for-me<p>Last week I noticed that the <a href="http://www.digitec.ch">hardware store of my choice</a> had these fancy new (and fast) Intel SSDs in stock - reason enough for me to go ahead and buy two to try them out in my two MacPro desktop machines. <a href="http://en.wikipedia.org/wiki/KOS-MOS#KOS-MOS">Kos-Mos</a>, my home mac was the first to be converted.</p>
<p>But before that, there was this hardware problem to overcome. See: The SSDs are 2.5 inch drives whereas the MacPro has 3.5 inch slots. While the connectors (SATA) are compatible, the smaller form factor of the Intel drives prevents the usual drive sliders of the MacPro from working.</p>
<p>The solution was to buy <a href="http://www.maxupgrades.com/istore/index.cfm?fuseaction=product.display&product_id=180">one of these adapters</a> for the SSDs. Before doing that, I read about other solutions, some of them involving duct tape, but this felt like it was the cleanest way and it was: The kits fit perfectly, so installing the drive was a real piece of cake.</p>
<p>The next problem was about logistics:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">pilif@kosmos /Volumes/Macintosh HD
<span class="gp"> % </span>df -h | grep Macintosh
/dev/disk2s2 365Gi 319Gi 46Gi 88% /Volumes/Macintosh HD</code></pre></figure>
<p>The largest Intel SSD available to date has just 160GB of capacity (149 "really usable"), so at least <em>some</em> kind of reorganization had to be done.</p>
<p>Seeing that the installation running on the traditional drive was ages old anyways (dating back to the last quarter of 2006), I decided that the sanest way to proceed was to just install another copy of Leopard to the new drive and use that as the boot device, copying over the applications and parts of the user profile I really needed.</p>
<p>Been there, done that.</p>
<p>I didn’t do any real benchmark, but boot-time is now <em>sub 10 seconds</em>. Eclipse starts up in <em>sub 5 seconds</em>. The installation of all the updates since the pristine 10.5.1 that was on the DVDs that came with the machine took <em>less than three minutes - including the reboots </em>(I’ve installed the 10.5.7 update this morning and it took around 10 minutes on the same machine).</p>
<p>And to make things even better: The machine is significantly quieter than before - at least once the old hard drive powers down.</p>
<p>I will never, ever again use non-SSD drives in any machine I work on.</p>
<p>The perceived speedup was as significant as going from 8MB of RAM to 32MB back in the days. The machine basically feels like a new computer.</p>
<p>Of course I ran into one really bad issue:</p>
<p>The idea was to symlink ~/Music to my old drive because my iTunes Library (mostly due to Podcasts and audio books) was too large to conveniently copy to the SSD. I renamed ~/Music to ~/Music.old, created the symlink and started iTunes for the first time, only to get screwed with an empty library.</p>
<p>According to the preferences though, iTunes did correctly follow the symlink and was pointing to the right path (WTF?). I tried to manually re-add the library folder which did kind of work, but screwed over all my podcasts - completely.</p>
<p>This is where I noticed that somehow iTunes still found ~/Music.old and used that one. A quick ps revealed that my best friend, the iTunes Helper, was still running, so I shut that one down and moved ~/Music.old away to /, just to be sure.</p>
<p>Restarted iTunes just to run into the very same problems again (now, this is a serious WTF).</p>
<p>The only way to get this to work was to quit iTunes (that includes killing the helper) and to completely remove all traces of that Music folder.</p>
<p>Now iTunes is finally using the Music folder on my traditional hard drive. This kind of work should not be needed and I seriously wonder what kind of magic was going on behind the scenes there - after killing the helper and renaming the folder, it should not have used it any more.</p>
<p>Still: SSDs are fun. And I would never again want to miss the kind of speed I’m now enjoying.</p>
<p><a href="http://en.wikipedia.org/wiki/Celes#Celes">celes</a> in the office is next :-)</p>
JavaScript and Applet interaction2009-04-29T00:00:00+00:00http://pilif.github.com/2009/04/javascript-and-applet-interaction<p>As I said <a href="/2009/04/this-months-find-jna-and-applet-launcher/">earlier this month</a>: While Java applets are dead for games and animations and whatever else they were used for back in the nineties, they still have their use when you have to access the local machine from your web application in some way.</p>
<p>There are other possibilities of course, but they all are limited:</p>
<ul>
<li>Flash loads quickly and is available in most browsers, but you can only access the hardware Adobe has created an API for: uploading files the user has to manually select, webcams and microphones.</li>
<li>ActiveX doesn't work across browsers, only in IE.</li>
<li>.NET ditto.</li>
<li>Silverlight is neither commonly installed on your users' machines, nor does it provide the native hardware access.</li>
</ul>
<p>So if you need to, say, access a bar code scanner. Or access a specific file on the user’s computer - maybe stored in a place that is inconvenient for the user to get to (%Localappdata% for example is hidden in explorer). In this case, a signed Java applet is the only way to go.</p>
<p>You might tell me that a website has no business accessing that kind of data and generally, I would agree. But what if your requirements are to read data from a bar code scanner without altering the target machine at all and without requiring the user to perform any steps but to plug in the scanner and click a button?</p>
<p>But Java applets have that certain 1996 look to them, so even if you access the data somehow, the applet still feels foreign to your cool Web 2.0 application: It doesn’t quite fit the tight coupling between browser and server that AJAX gets us and even if you use Swing, the GUI will never look as good (and customized) as something you could do in HTML and CSS.</p>
<p>But did you know that Java Applets are fully scriptable?</p>
<p>Per default, any JavaScript function on a page can call any public method of any applet on the site. So let’s say your applet implements</p>
<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="n">String</span> <span class="nf">sayHello</span><span class="o">(</span><span class="n">String</span> <span class="n">name</span><span class="o">){</span>
<span class="k">return</span> <span class="s">"Hello "</span><span class="o">+</span><span class="n">name</span><span class="o">;</span>
<span class="o">}</span></code></pre></figure>
<p>Then you can use JavaScript to call that method (using jQuery here):</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">$</span><span class="p">(</span><span class="s1">'#some-div'</span><span class="p">).</span><span class="nx">html</span><span class="p">(</span>
<span class="nx">$</span><span class="p">(</span><span class="s1">'#id_of_the_applet'</span><span class="p">).</span><span class="nx">get</span><span class="p">(</span><span class="mi">0</span><span class="p">).</span><span class="nx">sayHello</span><span class="p">(</span>
<span class="nx">$</span><span class="p">(</span><span class="s1">'#some-form-field'</span><span class="p">).</span><span class="nx">val</span><span class="p">())</span>
<span class="p">);</span></code></pre></figure>
<p>If you do that, you have to remember though that any applet method called this way will <strong>run inside the sandbox</strong> regardless if the applet is signed or not.</p>
<p>So how do you access the hardware then?</p>
<p>Simple: Tell the JRE that you are sure (you are, aren’t you?) that it’s ok for a script to call a certain method. To do that, you use AccessController.doPrivileged(). Say, for example, you want to check whether some specific file is on the user’s machine. Let’s further assume that you have a singleton RuntimeSettings that provides a method to check the existence of the file and return its name. Then you could do something like this:</p>
<figure class="highlight"><pre><code class="language-java" data-lang="java"> <span class="kd">public</span> <span class="n">String</span> <span class="nf">getInterfaceDirectory</span><span class="o">(){</span>
<span class="k">return</span> <span class="o">(</span><span class="n">String</span><span class="o">)</span> <span class="n">AccessController</span><span class="o">.</span><span class="na">doPrivileged</span><span class="o">(</span>
<span class="k">new</span> <span class="nf">PrivilegedAction</span><span class="o">()</span> <span class="o">{</span>
<span class="kd">public</span> <span class="n">Object</span> <span class="nf">run</span><span class="o">()</span> <span class="o">{</span>
<span class="k">return</span> <span class="n">RuntimeSettings</span><span class="o">.</span><span class="na">getInstance</span><span class="o">().</span><span class="na">getInterfaceDirectory</span><span class="o">();</span>
<span class="o">}</span>
<span class="o">}</span>
<span class="o">);</span>
<span class="o">}</span></code></pre></figure>
<p>Now it’s safe to call this method from JavaScript despite the fact that RuntimeSettings.getInterfaceDirectory() directly accesses the underlying system. Whatever is in PrivilegedAction.run() will have full hardware access (provided the applet in question is signed and the user has given permission).</p>
<p>Just keep one thing in mind: Your applet is fully scriptable and if you are not very careful where that script comes from, your applet may be abused and thus the security of the client browser might be at risk.</p>
<p>Keeping this in mind, try to:</p>
<ul>
<li>Make these elevated methods do one and only one thing.</li>
<li>Keep the interface between the page and the applet as simple as possible.</li>
<li>In elevated methods, do not call into javascript (see below) and certainly do not eval() any code coming from the outside.</li>
<li>Make sure that your pages are sufficiently secured against XSS: Don't allow any user generated content to reach the page unescaped.</li>
</ul>
<p>The explicit and cumbersome declaration of elevated actions was put in place to make sure that the developer keeps the associated security risk in mind. So be a good developer and do so.</p>
<p>Using this technology, you can even pass around Java objects from the Applet to the page.</p>
<p>Also, if you need your applet to call into the page, you can do that too, of course, but you’ll need a bit of additional work.</p>
<ol>
<li>You need to import JSObject from netscape.javascript (yes, that's really its name. It works in all browsers though), so to compile the applet, you'll have to add plugin.jar (or netscape.jar, depending on the version of the JRE) from somewhere below your JRE/JDK installation to the build classpath. On a Mac, you'll find it below /System/Library/Frameworks/JavaVM.framework/Versions/&lt;your version&gt;/Home/lib.</li>
<li>You need to tell the Java plugin that you want the applet to be able to call into the page. Use the <em>mayscript</em> attribute of the java applet for that (interestingly, it's just mayscript - without value, thus making your nice XHTML page invalid the moment you add it - mayscript="true" or the correct mayscript="mayscript" don't work consistently on all browsers).</li>
<li>In your applet, call the static JSObject.getWindow() and pass it a reference to your applet to acquire a reference to the current page's window object.</li>
<li>On that reference you can call eval() or getMember() or just call() to call into the JavaScript on the page.</li>
</ol>
<p>This tool set allows you to add the applet to the page at a size of one pixel, placed far outside the viewport and hidden with visibility: hidden, while writing the actual GUI code in HTML and CSS, using normal JS/AJAX calls to communicate with the server.</p>
<p>If you need access to specific system components, this (together with <a href="/2009/04/this-months-find-jna-and-applet-launcher/">JNA and applet-launcher</a>) is the way to go, IMHO as it solves the anachronism that is Java GUIs in applets.</p>
<p>There is still the long launch time of the JRE, but that’s getting better and better with every JRE release.</p>
<p>I was having so much fun last week discovering all that stuff.</p>
Do not change base library behavior2009-04-28T00:00:00+00:00http://pilif.github.com/2009/04/do-not-change-base-library-behavior<p>Modern languages like JavaScript or Ruby provide the programmer with an option to “reopen” any class to add additional behavior to them. In the case of Ruby and JavaScript, this is not constrained in any way: You are able to reopen any class - even the ones that come with your language itself and there are no restrictions on the functionality of your extension methods.</p>
<p>Ruby at least knows of the concept of private methods and fields which you can’t call from your additional methods, but that’s just Ruby. JS knows of no such thing.</p>
<p>This provides awesome freedom to the users of these languages. Agreed. Miss a method on a class? Easy. Just implement that and call it from wherever you want.</p>
<p>This also helps to free you from things like</p>
<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">BufferedReader</span> <span class="n">br</span> <span class="o">=</span> <span class="k">new</span> <span class="n">BufferedReader</span><span class="o">(</span><span class="k">new</span> <span class="n">InputStreamReader</span><span class="o">(</span><span class="k">new</span> <span class="n">FileInputStream</span><span class="o">(</span><span class="n">of</span><span class="o">)));</span></code></pre></figure>
<p>which is lots of small (but terribly inconveniently named) classes wrapped into each other to provide the needed functionality. In this example, what the author wanted is to read a file line-by-line. Why exactly do I need three objects for this? Separation of concerns is nice, but stuff like this makes learning a language needlessly complicated.</p>
<p>In the world of Ruby or JS, you would just extend FileInputStream with whatever functionality you need and then call that, creating code that is much easier to read.</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">FileInputStream</span><span class="p">.</span><span class="nx">prototype</span><span class="p">.</span><span class="nx">readLine</span> <span class="o">=</span> <span class="kd">function</span><span class="p">(){...}</span>
<span class="c1">//...</span>
<span class="nx">of</span><span class="p">.</span><span class="nx">readLine</span><span class="p">();</span>
<span class="c1">//...</span></code></pre></figure>
<p>And yet, if you are a library (as opposed to consumer code), <strong>this is a terrible, terrible thing to do</strong>!</p>
<p>We have seen <a href="http://github.com/raganwald/homoiconic/blob/master/2009-04-09/my_objection_to_sum.md#readme">previous instances</a> of the kind of problems you will cause: Libraries adding functionality to existing classes create real problems when multiple libraries are doing the same thing and the consuming application is using both libraries.</p>
<p>Let’s say, for example, that your library A added that method sum() to the generic Array class. Let’s also say that your consumer also uses library B, which does the same thing.</p>
<p>What’s the deal about this, you might ask? It’s pretty clear what sum() does, after all.</p>
<p>Is it? It probably is when that array contains something that is summable. But what if there is, say, a string in the array you want to sum up? In your library, the functionality of sum() could be defined as “summing up all the numeric values in the array, assuming 0 for non-numeric values”. In the other library, sum() could be defined as “summing up all the numeric values in the array, throwing an exception if sum() encounters an invalid value”.</p>
<p>If your consumer loads your library A first and later on that other library B, <strong>you will be calling B’s Array#sum()</strong>.</p>
<p>Now due to your definition of sum(), you assume that it’s pretty safe to call sum() with an array that contains mixed values. But because you are now calling B’s sum(), you’ll get an exception you certainly did not expect in the first place!</p>
<p>Loading B after A in the consumer caused A to break because both created the same method conforming to different specs.</p>
<p>Loading A after B would fix the problem in this case, but what if, say, both you and B implement Array#avg, with reversed semantics this time around?</p>
<p>You see, <strong>there is no escape</strong>.</p>
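<p>A minimal sketch of the collision: the two loader functions and both sum() implementations below are hypothetical, merely matching the two specs described above.</p>

```javascript
function loadLibraryA() {
  // Library A's lenient spec: non-numeric values count as 0.
  Array.prototype.sum = function () {
    var total = 0;
    for (var i = 0; i < this.length; i++) {
      total += (typeof this[i] === "number") ? this[i] : 0;
    }
    return total;
  };
}

function loadLibraryB() {
  // Library B's strict spec: invalid values throw.
  Array.prototype.sum = function () {
    var total = 0;
    for (var i = 0; i < this.length; i++) {
      if (typeof this[i] !== "number") throw new Error("invalid value in sum()");
      total += this[i];
    }
    return total;
  };
}

loadLibraryA();
loadLibraryB(); // loaded second - B silently wins the global slot

// Code inside library A, written against A's own spec,
// now runs B's implementation instead:
try {
  console.log([1, "two", 3].sum()); // A expects 4 ...
} catch (e) {
  console.log(e.message); // ... but gets "invalid value in sum()"
}
```

<p>Neither library did anything locally wrong; the breakage only appears when both are loaded, and it depends entirely on load order.</p>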
<p>Altering classes in the global name space breaks any name spacing facility that may have been available in your language. Even if all your “usual” code lives in your own, unique name space, the moment you alter the global space, you break out of your small island and begin to compete with the rest of the world.</p>
<p>If you are a library, you cannot be sure that you are alone in that competition.</p>
<p>And even if you are a top level application you have to be careful not to break implementations of functions provided by libraries you either use directly or, even worse, indirectly.</p>
<p>If you need a real-life example, the following code in an (outdated) version of scriptaculous’ effects.js broke jQuery, despite the latter being very, very careful to check if it can rely on the base functionality provided:</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nb">Array</span><span class="p">.</span><span class="nx">prototype</span><span class="p">.</span><span class="nx">call</span> <span class="o">=</span> <span class="kd">function</span><span class="p">()</span> <span class="p">{</span>
<span class="kd">var</span> <span class="nx">args</span> <span class="o">=</span> <span class="nx">arguments</span><span class="p">;</span>
<span class="k">this</span><span class="p">.</span><span class="nx">each</span><span class="p">(</span><span class="kd">function</span><span class="p">(</span><span class="nx">f</span><span class="p">){</span> <span class="nx">f</span><span class="p">.</span><span class="nx">apply</span><span class="p">(</span><span class="k">this</span><span class="p">,</span> <span class="nx">args</span><span class="p">)</span> <span class="p">});</span>
<span class="p">}</span></code></pre></figure>
<p>Interestingly enough, Array#call wasn’t used in the affected version of the library. This was a code artifact that actually did nothing but break a completely independent library (I did not have time to determine the exact nature of the breakage).</p>
<p>Not convinced? After all, I was using an outdated version of scriptaculous and should have updated (which is not an option if you have even more libraries dependent on bugs in exactly that version - unless you update all other components as well and then fix all the then-broken unit tests).</p>
<p>Firefox 3.0 was the first browser to add document.getElementsByClassName, a method also implemented by Prototype. Of course the functionality in Firefox was slightly different from the implementation in Prototype, which now called the built-in version instead of its own, which caused a <a href="http://ejohn.org/blog/getelementsbyclassname-pre-prototype-16/">lot of breakage all over the place</a>.</p>
<p>So, dear library developers, stay in your own namespace, please. You’ll make our lives as consumers (and your own) so much easier.</p>
This month's find: jna and applet-launcher2009-04-21T00:00:00+00:00http://pilif.github.com/2009/04/this-months-find-jna-and-applet-launcher<p>Way way back, I was talking about <a href="http://www.gnegg.ch/2004/01/java-and-native-libraries/">java applets and native libraries</a> and the things you need to consider when writing applets that need access to native libraries (mostly for hardware access). And let’s be honest - considering how far HTML and JavaScript have come, native hardware access is probably the only thing you still need applets for.</p>
<p>Java is slow and bloated and users generally don’t seem to like it very much, but the moment you need access to specific hardware - or even just to specific files on the user’s filesystem - Java becomes an interesting option as it is the only technology readily available on multiple platforms and browsers.</p>
<p>Flash only works for hardware Adobe has put an API in (cameras and microphones) and doesn’t allow access to arbitrary files. .NET doesn’t work in browsers other than IE (and the solution at hand should work in other browsers too) and ActiveX is generally horrible, doesn’t work in other browsers either and additionally only works under Windows (.NET at least works in theory on Unixes and Macs as well).</p>
<p>Which leaves us with Java.</p>
<p>Because applets are scriptable, you get away with hiding the awful user interface that is Swing (or, god forbid, AWT) and writing a nice integrated GUI using web technologies.</p>
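<p>As a sketch of what such scripting looks like: with LiveConnect-style applet scripting, the applet's public Java methods become callable on its DOM element from page JavaScript. The element id and method name below are invented for illustration:</p>

```javascript
// Hypothetical glue between the HTML UI and an applet doing hardware access.
// In a browser, appletElement would be document.getElementById('hardwareApplet')
// and readSerialNumber() a public method exposed by the applet.
function readDeviceSerial(appletElement) {
  return 'Device: ' + appletElement.readSerialNumber();
}

// Simulated applet element, standing in for the real browser object
var fakeApplet = {
  readSerialNumber: function () { return 'SN-1234'; }
};
console.log(readDeviceSerial(fakeApplet)); // Device: SN-1234
```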
<p>But there’s still the issue with native libraries.</p>
<p>First, your applet needs to be signed - no way around that. Then, you need to manually transfer all the native libraries and extension libraries. Also, you’ll need to put them in certain predefined places - some of which require administrative privileges to write to.</p>
<p>And don’t get me started about JNI. Contrary to .NET, you can’t just call into native libraries. You’ll have to write your own glue layer between the native OS and the JRE. That glue layer is platform specific of course, so you better have your C compiler ready - and access to all the platforms you intend to run on, of course.</p>
<p>So even if Java is the only way, it still sucks.</p>
<p>Complex deployment, administrative privileges and antiquated glue layers. Is this what you would want to work with?</p>
<p>Fortunately, I’ve just discovered two real pearls that completely solve these two problems, leaving me with only the hassle that is Java itself - but it’s always nice to keep some practice in multiple programming languages, as long as it doesn’t involve C <em>shudder</em>.</p>
<p>The first component I’m going to talk about is JNA (<a href="https://jna.dev.java.net/">Java Native Access</a>) which is for Java what P/Invoke is for .NET: A way for directly calling into the native API from your Java code. No JNI and thus no custom glue code and C compiler needed. Translating the native calls and structures into what JNA wants still isn’t as convenient as P/Invoke, but it sure as hell beats JNI.</p>
<p>In my case, I needed to find the directory corresponding to CSIDL_LOCAL_APPDATA when running under Windows. While I could have hacked something together, the only really reliable way of getting the correct path is to query the Windows API, for which JNA proved to be the perfect fit.</p>
<p>JNA of course comes with its own glue layer (available in precompiled form for more platforms than I would ever want to support in the first place), so this leads us directly to the second issue: Native libraries and applets don’t go very well together.</p>
<p>This is where <a href="https://applet-launcher.dev.java.net/">applet-launcher</a> comes into play. Actually, applet-launcher’s functionality is even <a href="https://jdk6.dev.java.net/plugin2/jnlp/">built into the JRE itself</a> - provided you target JRE 1.6 Update 10 and later, which isn’t realistic in most cases (just today I was handling a case where an applet had to work with JRE 1.3, which was superseded in 2002), so for now, applet-launcher, which works with JRE 1.4.2 and later, is probably the way to go.</p>
<p>The idea is that you embed the applet-launcher applet instead of the applet you want to embed in the first place. The launcher will download a JNLP file from the server, download and extract external JNI glue libraries and finally load your applet.</p>
<p>When compared with the native 1.6 method, this has the problem that the library which uses the JNI glue has to have some special hooks in place, but it works like a charm and fixes all the issues I’ve previously had with native libraries in applets.</p>
<p>These two components renewed my interest in Java as a glue layer between the web browser where your application logic resides and the hardware the user is depending upon. While earlier methods kind of worked, they were either hacky or a real pain to implement; this is as clean as it gets and works like a charm.</p>
<p>And next time we’ll learn about scripting Java applets.</p>
digg bar controversy2009-04-16T00:00:00+00:00http://pilif.github.com/2009/04/digg-bar-controversy<p><strong>Update: </strong>I’ve actually written this post yesterday and scheduled it for posting today. In the mean time, digg has <a href="http://blog.digg.com/?p=664">found an even better solution</a> and only shows their bar for logged in users. Still - a solution like the one provided here would allow for the link to go to the right location regardless of the state of the digg bar settings.</p>
<p>Recently, digg.com added a controversial feature, the digg bar, which basically frames every posted link in a little IFRAME.</p>
<p>Webmasters were rightfully concerned about this and quite quickly we had the usual religious war going on between the people finding the bar quite useful and the webmasters hating it for lost page rank, even worse recognition of their site and presumed affiliation with digg.</p>
<p><a href="http://revcanonical.appspot.com/">Ideas crept up</a> over the weekend, but turned out <a href="http://ciaranmcnulty.com/blog/2009/04/rev-canonical-should-be-handled-with-care">not to be so terribly good</a>.</p>
<p>Basically it all boils down to digg.com screwing up on this, IMHO.</p>
<p>I know that they let you turn off that dreaded digg bar, but all the links on their page still point to their own short url. Only then is the decision made whether to show the bar or not.</p>
<p>This means that all links on digg currently just point to digg itself, not awarding any linked page with anything but the traffic which they don’t necessarily want. Digg-traffic isn’t worth much in terms of returning users. You get dugg, you melt your servers, you return back to be unknown.</p>
<p>So you would probably appreciate the higher page rank you get from being linked at by digg as that leads to increased search engine traffic which generally is worth much more.</p>
<p>The solution on digg’s part could be simple: Keep the original site URL in the href of their links, but use some JS magic to still open the digg bar. That way they still get to keep their foot in the user’s path away from the site, but search engines will now do the right thing and follow the links to their actual target, thus giving the webmasters their page rank back.</p>
<p>How to do this?</p>
<p>Here are a few lines of jQuery to automatically make links formatted in the form</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt"><div</span> <span class="na">id=</span><span class="s">"link_container"</span><span class="nt">></span>
<span class="nt"><a</span> <span class="na">id=</span><span class="s">"xxbc34fb"</span> <span class="na">href=</span><span class="s">"http://example.com/articles/cool_article"</span><span class="nt">></span>Cool Article<span class="nt"></a></span>
<span class="nt"><a</span> <span class="na">id=</span><span class="s">"xxbc38fc"</span> <span class="na">href=</span><span class="s">"http://example.com/articles/cool_article2"</span><span class="nt">></span>Cool Article 2<span class="nt"></a></span>
<span class="nt"><a</span> <span class="na">id=</span><span class="s">"xxbc39fk"</span> <span class="na">href=</span><span class="s">"http://example.com/articles/cool_article3"</span><span class="nt">></span>Cool Article 3<span class="nt"></a></div></span></code></pre></figure>
<p>be opened via the digg bar while still working correctly for search engines (assuming that the link’s ID is the digg shorturl):</p>
<figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">$</span><span class="p">(</span><span class="kd">function</span><span class="p">(){</span>
<span class="nx">$</span><span class="p">(</span><span class="s1">'div#link_container a'</span><span class="p">).</span><span class="nx">click</span><span class="p">(</span><span class="kd">function</span><span class="p">(){</span>
<span class="nx">$</span><span class="p">(</span><span class="k">this</span><span class="p">).</span><span class="nx">attr</span><span class="p">(</span><span class="s1">'href'</span><span class="p">,</span> <span class="s1">'http://digg.com/'</span> <span class="o">+</span> <span class="k">this</span><span class="p">.</span><span class="nx">id</span><span class="p">);</span>
<span class="p">});</span>
<span class="p">});</span></code></pre></figure>
<p>Piece of cake.</p>
<p>No further changes needed and all the web masters will be so much happier while digg gets to keep all the advantages (and it may actually help digg to increase their pagerank as I could imagine that a site with a lot of links pointing to different places could rank higher than one without any external links).</p>
<p>Webmasters then still could do their usual parent.location.href trickery to get out of the digg bar if they want to, but they could also retain their page rank.</p>
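<p>For completeness, that frame-escaping trickery boils down to comparing the page's own window with the top one. This is a generic sketch, not digg-specific code - the decision is pulled into a function here only so it can be exercised outside a browser:</p>

```javascript
// Returns the URL the top frame should navigate to, or null if not framed.
function bustFrameTarget(win) {
  return win.top !== win.self ? win.self.location.href : null;
}

// In the framed page itself this would be wired up roughly as:
//   var target = bustFrameTarget(window);
//   if (target) window.top.location.href = target;

// Simulated framed environment for illustration
var inner = { location: { href: 'http://example.com/articles/cool_article' } };
inner.self = inner;
inner.top = { location: { href: 'http://digg.com/xxbc34fb' } };
console.log(bustFrameTarget(inner)); // http://example.com/articles/cool_article
```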
<p>No need to add further complexity to the webs standards because one site decides not to play well.</p>
Playing Worms Armageddon on a Mac2009-04-15T00:00:00+00:00http://pilif.github.com/2009/04/playing-worms-armageddon-on-a-mac<p>Last weekend, I had a real blast with the Xbox 360 Arcade version of worms. Even after so many years, this game still rules them all, especially (if not only) in multiplayer mode.</p>
<p>The only drawback of the 360 version is the lack of weapons.</p>
<p>While the provided set is all well, the game is just not the same without the Super Banana Bomb or the Super Sheep.</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2009/04/worms.png"><img class="aligncenter size-full wp-image-547" title="Worms Screenshot" src="http://www.gnegg.ch/wp-content/uploads/2009/04/worms.png" alt="Worms Screenshot" width="464" height="273" /></a></p>
<p>So this is why I looked for my old Worms Armageddon CD and tried to get it to work on today’s hardware.</p>
<p>Making it work under plain Vista was easy enough (get the latest beta patch for armageddon, by the way):</p>
<p>Right-Click the Icon, select the compatibility tab, choose Windows XP, Disable Themes and Desktop composition and run the game with administrative privileges.</p>
<p>You may get away with not using one option or the other, but this one worked consistently.</p>
<p>To be really useful though, I wanted to make the game run under OS X as this is my main environment and I really dislike going through the lengthy booting process that is bootcamp.</p>
<p>I tried the various virtualization solutions around - something that should work seeing that the game doesn’t really need much in terms of hardware support.</p>
<p>But unfortunately, this was way harder than anticipated:</p>
<ul>
<li>The initial try was done using VMWare Fusion which looked very good at first, but failed miserably later on: While I was able to launch (and actually use) the games frontend, the actual game was a flickery mess with no known workaround.</li>
<li>Parallels failed by displaying a black menu. It was still clickable, but there was nothing on the screen but blackness and a white square border. Googling around a bit led to the idea to set SlowFrontendWorkaround in the registry to 0, which actually made the launcher work, but the game itself crashed consistently without an error message.</li>
</ul>
<p>In the end, I’ve achieved success using <a href="http://virtualbox.org">VirtualBox</a>. The SlowFrontendWorkaround is still needed to make the launcher work and the mouse helper of the VirtualBox guest tools needs to be disabled (on the Machine menu, the game still runs with the helper enabled, but you won’t be able to actually control the mouse pointer consistently), but after that, the game runs flawlessly.</p>
<p>Flickerless and with a decent frame rate. And with sound, of course.</p>
<p>To enable the workaround I talked about, use <a href="http://www.pilif.ch/wormsvboxfix.reg">this .reg file</a>.</p>
<p>Now the slaughter of worms can begin :-)</p>
Double-blind cola test2009-03-23T00:00:00+00:00http://pilif.github.com/2009/03/double-blind-cola-test<p><a href="http://www.gnegg.ch/wp-content/uploads/2009/03/blah.jpg"><img title="Double blind cola test" align="right" src="http://www.gnegg.ch/wp-content/uploads/2009/03/blah-225x300.jpg" alt="The final analysis" width="225" height="300" /></a></p>
<p>Two of my coworkers decided today after lunch that it was time to solve the age-old question: Is it possible to actually detect different kinds of cola just by tasting them.</p>
<p>In the spirit of true science (and a hefty dose of Mythbusters), we decided to do this the right way and to create a double blind test. The idea is that not only the tester but also the person administering the test must not know which sample is which, to make sure that the tester is not influenced in any way.</p>
<p>So here’s what we have done:</p>
<ol>
<li>We bought 5 different types of cola: A can of coke light, a can of standard coke, a PET bottle of standard coke, a can of coke zero and finally, a can of the new Red Bull cola (in danger of spoiling the outcome: eek).</li>
<li>We marked five glasses with numbers from 1 to 5 at the bottom.</li>
<li>We asked a coworker not taking part in the actual test to fill the glasses with the respective drink.</li>
<li>We put the glasses on our table in random order and designated each glasses position with letters from A to E.</li>
<li>One after another, we drank the samples and noted which glass (A-E) we thought to contain which drink (1-5). So as not to influence ourselves during the test, the kitchen area was off-limits for everyone but the test subject and each person's results were to be kept secret until the end of the test.</li>
<li>We compared notes.</li>
<li>We checked the bottom of the glasses to see how we fared.</li>
</ol>
<p>The results are interesting:</p>
<ul>
<li>Of the four people taking part in the test, all but one person guessed all types correctly. The one person who failed wasn't able to correctly distinguish between bottled and canned standard coke.</li>
<li>Everyone instantly recognized the Red Bull Cola (no wonder there, it's much brighter than the other contenders and it smells like cough medicine)</li>
<li>Everyone got the coke light and zero correctly.</li>
<li>Although the tester pool was way too small, it's interesting that 75% of the testers were able to discern the coke in the bottle from the coke in the can - I would not have guessed that, but then, there's only a 50% chance to be wrong on this one - we may all just have been lucky - at least I was, to be honest.</li>
</ul>
<p>Fun in the office doing pointless stuff after lunch, I guess.</p>
New MacMini (early 09) and Linux2009-03-04T00:00:00+00:00http://pilif.github.com/2009/03/new-macmini-early-09-and-linux<p>The new MacMinis that were announced this week come with a Firewire 800 port which was reason enough for me to update shion yet again (keeping the host name of course).</p>
<p>All my media she’s serving to my various systems is stored on a second generation Drobo which is currently connected via USB2, but has a lingering FW800 port.</p>
<p>Of course the upgrade to FW800 will not double the transfer rate to and from the drobo, but it should increase it significantly, so I went ahead and got one of the new Minis.</p>
<p>As usual, I inserted the Ubuntu (Intrepid) CD, held c while turning the device on and completed the installation.</p>
<p>This left the Mini in an unbootable state.</p>
<p>It seems that this newest generation of Mac Hardware isn’t capable of booting from an MBR partitioned harddrive. Earlier Macs complained a bit if the harddrive wasn’t correctly partitioned, but then went ahead and booted the other OS anyways.</p>
<p>Not so much with the new boxes it seems.</p>
<p>To finally achieve what I wanted I had to do the following complicated procedure:</p>
<ol>
<li>Install <a href="http://refit.sourceforge.net/">rEFIt</a> (just download the package and install the .mpkg file)</li>
<li>Use the Bootcamp assistant to repartition the drive.</li>
<li>Reboot with the Ubuntu Desktop CD and run parted (the partitioning could probably be accomplished using the console installer, but I didn't manage to do it correctly).</li>
<li>Resize the FAT32-partition which was created by the Bootcamp partitioner to make room at the end for the swap partition.</li>
<li>Create the swap partition.</li>
<li>Format the FAT32-partition with something useful (ext3)</li>
<li>Restart and enter the rEFIt partitioner tool (it's in the boot menu)</li>
<li>Allow it to resync the MBR</li>
<li>Insert the Ubuntu Server CD, reboot holding the C key</li>
<li>Install Ubuntu normally, but don't change the partition layout - just use the existing partitions.</li>
<li>Reboot and repeat steps 7 and 8</li>
<li>Start Linux.</li>
</ol>
<p>Additionally, you will have to keep using rEFIt as the boot device control panel item does not recognize the Linux partitions any more, so it can’t boot from them.</p>
<p>Now to find out whether that stupid resistor is still needed to make the new mini boot headless.</p>
All-time favourite tools - update2009-02-22T00:00:00+00:00http://pilif.github.com/2009/02/all-time-favourite-tools-update<p>It has been more than four years since I’ve <a href="/2004/06/all-time-favourite-tools/">last talked about</a> my all-time favourite tools. I guess it’s time for an update.</p>
<p>Surprisingly, I still stand behind the tools listed there: My love for <a href="http://www.exim.org">Exim</a> is still unchanged (it just got bigger lately - but that’s for another post). <a href="http://www.postgresql.org">PostgreSQL</a> is cooler than ever and powers <a href="http://www.popscan.net">PopScan</a> day-in, day-out without flaws.</p>
<p>Finally, I’m still using InnoSetup for my Windows Setup programs, though that has lost a bit of importance in my daily work as we’re shifting more and more to the web.</p>
<p>Still. There are two more tools I must add to the list:</p>
<ul>
<li><a href="http://www.jquery.org">jQuery</a> is a JavaScript helper library that allows you to interact with the DOM of any webpage, hiding away browser incompatibilities. There are a couple of libraries out there which do the same thing, but only jQuery is such a pleasure to work with: It works flawlessly, provides one of the most beautiful APIs I've ever seen in any library and there are tons and tons of self-contained plug-ins out there that help you do whatever you would want to on a web page.
jQuery is an integral part of making web applications equivalent to their desktop counterparts in matters of user interface fluidity and interactivity.
All while being such a nice API that I'm actually looking forward to do the UI work - as opposed to the earlier days which can most accurately be described as <em>UI sucks</em>.</li>
<li><a href="http://git-scm.com/">git</a> is my version control system of choice. There are many of them out there in the world and I've tried the majority of them for one thing or another. But only git combines the awesome backwards-compatibility to what I've used before and what's still in use by my coworkers (SVN) with abilities to beautify commits, have feature branches, very high speed of execution and very easy sharing of patches.
No single day passes without me using git and running into a situation where I'm reminded of the incredible beauty that is git.</li>
</ul>
<p>In four years, I’ve not seen a single other tool I’ve used as consistently and with as much joy as git and jQuery, so those two certainly have earned their spot in my heart.</p>
Google Apps: Mail Routing2009-02-12T00:00:00+00:00http://pilif.github.com/2009/02/google-apps-mail-routing<p>Just today while beginning the evaluation of a Google Apps For Your Domain Premium account, I noticed something that may be obvious to all of you Google Apps users out there, but certainly isn’t documented well enough for you to notice before you sign up:</p>
<p>Google Apps Premium has kick-ass mail routing functionality.</p>
<p>Not only can you configure Gmail to only accept mails from defined upstream servers, thus allowing you to keep the MX pointing at some already existing server where you can do alias resolution for example. No. You can also tell Gmail to <strong>send</strong> outgoing mail via an external relay.</p>
<p>This is ever so helpful as it allows you to keep all the control you need over incoming email - for example if you have email-triggered applications running. Or you have email-aliases (basically forwarders where xxx@domain.com is forwarded to yyy@other-domain.com) which Google Apps does not support.</p>
<p>Because you can keep your old MX, your existing applications keep working and your aliases continue to resolve.</p>
<p>Allowing you to send all outgoing mail via your relay, in turn, allows you to get away without updating SPF records and forcing customers to change filters they may have set up for you.</p>
<p>This feature alone can decide between a go or no-go when evaluating Google Apps and I cannot understand why they have not emphasized on this way more than they currently do.</p>
My new friend: git rebase -i2009-02-01T00:00:00+00:00http://pilif.github.com/2009/02/my-new-friend-git-rebase-i<p>Last summer, I was into <a href="/2008/07/beautifying-commits-with-git/">making git commits look nice</a> with the intent of pushing a really nice and consistent set of patches to the remote repository.</p>
<p>The idea is that a clean remote history is a convenience for my fellow developers and for myself. A clean history means very well-defined patches - should a merge of a branch be necessary in the future. It also means much easier hunting for regressions and generally more fun doing some archeology in the code.</p>
<p>My last post was about using <code>git add -i</code> to refine the commits going into the repository. But what if you screw up the commit anyways? What if you forget to add a new file and notice it only some commits later?</p>
<p>This is where <code>git rebase -i</code> comes into play as this allows you to reorder your local commits and to selectively squash multiple commits into one.</p>
<p>Let’s see how we would add a forgotten file to a commit a couple of commits ago.</p>
<ol>
<li>You add the forgotten file and commit it. The commit message doesn't really matter here.</li>
<li>You use <code>git log</code> or <code>gitk</code> to find the commit id just before the one you want to amend this new file to. Let's say the commit to amend is 6bd80e1 and the commit right before it is fc9a0c6.</li>
<li>Pick the first few characters (or the whole ID) of that earlier commit and pass them to git rebase -i.</li>
</ol>
<pre> % git rebase -i fc9a0c6</pre>
<p>git will now open your favorite editor displaying your list of commits since the revision you have given. This could look like this.</p>
<pre>pick 6bd80e1 some commit message. This is where I have forgotten the file
pick 4c1d210 one more commit message
pick 5d2f4ed this is my forgotten file
# Rebase fc9a0c6..5d2f4ed onto fc9a0c6
#
# Commands:
# p, pick = use commit
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#</pre>
<p>The comment in the file says it all - just reorder the first three lines (or however many there are in your case) to look like this:</p>
<pre>pick 6bd80e1 some commit message. This is where I have forgotten the file
squash 5d2f4ed this is my forgotten file
pick 4c1d210 one more commit message</pre>
<p>Save the file. Git will now do some magic and open the text editor again where you can amend the commit message for the commit you squashed your file into. If it’s really just a forgotten file, you’ll probably keep the message the same.</p>
<p>One word of caution though: Do not do this on branches you have already pushed to a remote machine or otherwise shared with somebody else. git gets badly confused if it has to pull altered history.</p>
<p>Isn’t it nice that after months you still find new awesomeness in your tool of choice?</p>
<p>I guess I’ll have to update my <a href="/2004/06/all-time-favourite-tools/">all-time favorite tools</a> list. It’s from 2004, so it’s probably ripe for that update.</p>
<p>Git rules.</p>
The consumer loses once more2009-01-30T00:00:00+00:00http://pilif.github.com/2009/01/the-consumer-loses-once-more<p>DRM strikes again. This time, apparently, the PC version of Gears of War <a href="http://arstechnica.com/gaming/news/2009/01/pc-gears-of-war-drm-causes-title-to-shut-down-starting-today.ars">stopped working</a>, seemingly caused by an expired certificate.</p>
<p>Even though I do not play Gears of War, I take issue in this because of a multitude of problems:</p>
<p>First, it’s another reason where DRM does nothing to stop piracy but punishes the honest user for buying the original - no doubt, the cracked versions of the game will continue to work due to the stripped out certificate check.</p>
<p>Second, using any form of DRM with any type of media is incredibly shortsighted if it requires any external support to work correctly. Be it a central authorization server, be it a correct clock - you name it. Sooner or later you won’t sell any more of your media and thus you will shut your DRM servers down, screwing the most loyal of your customers.</p>
<p>This is especially apparent with the games market. Like no other market, there exists a really vivid and ever growing community of retro gamers. Like no other type of media, games seem to make users to want to go back to them and see them again - even after ever so many years.</p>
<p>Older games are <a href="http://speeddemosarchive.com/">speedrunned</a>, <a href="http://www.metroid2002.com/">discussed</a> and even <a href="http://tasvideos.org/NewMovies.html">utterly destroyed</a>. Even if the count in players declines over the years, it will never reach zero.</p>
<p>Now imagine DRM in all those old games once you turn off the DRM server or a certificate expires: No more speedruns. No more discussion forums. Nothing. The games are devalued and you as a game producer shut out your most loyal customers (those that keep playing your game after many years).</p>
<p>And my last issue is with this Gears of War case in particular: A time limited certificate does not make any sense in this case. It’s identity that must be checked. Let’s say the AES key used to encrypt the game was encrypted with the private key of the publisher (thus the public key will be needed to decrypt it) and the public key is signed by the publishers CA, then, while you check the identity of the publishers certificate, checking the time certainly is not needed. If it was valid once, it’s probably valid in the future as well.</p>
<p>Or better: A cracker with the ability to create certificates that look like they were signed by the publisher will highly likely also be able to make them with any timed validity.</p>
<p>This issue here is that Gears of War probably uses some library function to check for the certificate and this library function also checks the timestamp on the certificate. The person that issued the certificate either thought that “two years is well enough” or just used the default value in their software.</p>
<p>The person using the library function just uses that, not thinking about the timestamp at all.</p>
<p>Maybe, the game just calls some third-party DRM library which in turn calls the X.509 certificate validation routines and due to “security by obscurity” doesn’t document how the DRM works, thus not even giving the developer (or certificate issuer) any chance to see that the game will stop working once the certificate runs out.</p>
<p>This is laziness.</p>
<p>So it’s not just monetary issues that cause DRMed stuff to stop working. It’s also laziness and a false sense of security.</p>
<p>DRM is doomed to fail and the industry finally needs to see that.</p>
Managed switch2009-01-14T00:00:00+00:00http://pilif.github.com/2009/01/managed-switch<p><a href="/2009/01/life-is-good/">Yesterday</a> I’ve talked about configuring a VLAN in my home network.</p>
<p><a href="http://en.wikipedia.org/wiki/VLAN">VLAN</a> is a technology using some bits in Ethernet frames to create virtual network segments on the same physical network, but just go ahead and read the linked Wikipedia article as it’s more detailed than what I would want to go into.</p>
<p>To really make use of VLANs, you are going to need at least one managed switch (two in my case). I knew this and I was looking around for something useful.</p>
<p>In the end, I ended up with two <a href="http://h10010.www1.hp.com/wwpc/uk/en/sm/WF06b/12883-12883-3445275-3445282-3445282-3231819-3231825.html">HP ProCurve 1800-8G</a>’s: I wanted something that has at least 8 ports and was Gigabit capable as I was feeling the bandwidth cap on the previous 100M connection between <a href="/2006/07/computers-under-my-command-issue-1-shion/">shion</a> and my media center when streaming 1080p content.</p>
<p>That’s something I hope to solve with the 1G connection, though the drobo may still be the limiting factor here, but theoretical 480Mbit is better (where are the MacMinis with the Firewire800 interface?) than the 100MBit I was constrained to with the old setup.</p>
<p>The ProCurves are fanless, provide 8 ports and have a really nice web interface which is very easy to use and works on all browsers (as opposed to some linksys things which only work with IE6 (not even IE7 does the trick)). Also, the interface is very responsive and it even comes with an excellent online help.</p>
<p>With only 10 minutes of thought going into the setup and another 5 minutes to configure the two switches I was ready to hook them up and got instant satisfaction: in my server room I plugged a test machine into any of the ports 2-7 and got onto VLAN1 (the internal network). Then I plugged it into port 8 and promptly was on VLAN2 (as evidenced by the public IP I got).</p>
<p>I have only three minor issues with the configuration of the two switches so far:</p>
<ol>
<li>They come with an empty administration password by default and don't force you to change it. Now granted, on a switch you cannot do as much mischief as on a router or worse, a NAS or access point, but it's still not a good thing.</li>
<li>They come preconfigured with the address 192.168.2.10 and DHCP disabled, practically forcing you to configure them locally before plugging them in. I would have hoped for either DHCP enabled or, even better, the possibility of configuring them using <a href="http://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol">RARP</a>. Or they could have provided a serial interface, which they don't.</li>
<li>To reset them, you have to unplug them, connect port 1 with port 2 and restart them. While this prevents you from accidentally resetting them, the procedure is a pain to perform, and by the time I actually have to do it, I'll probably have forgotten how.</li>
</ol>
<p>But these are minor issues. The quick web interface, the excellent online help and the small fanless design make this the optimal switch when you have advanced requirements to fulfill but don’t need more than 8 ports.</p>
<p>There’s a larger 24-port cousin of the 1800-8G, but that one has a fan, so it was not an option in my case - especially not in the sideboard where I’m now at the end of the 8-port capacity.</p>
pointers, sizes2009-01-13T00:00:00+00:00http://pilif.github.com/2009/01/pointers-sizes<p>Just a small reminder for myself:</p>
<p>If</p>
<figure class="highlight"><pre><code class="language-delphi" data-lang="delphi">type
  TMyRecord = record
    pointer1: pointer;
    pointer2: pointer;
    pointer3: pointer;
    pointer4: pointer;
  end;
  PMyRecord = ^TMyRecord;</code></pre></figure>
<p>then</p>
<figure class="highlight"><pre><code class="language-delphi" data-lang="delphi">sizeof(TMyRecord) &lt;&gt; sizeof(PMyRecord)</code></pre></figure>
<p>So</p>
<figure class="highlight"><pre><code class="language-delphi" data-lang="delphi">var rec: PMyRecord;
rec := AllocMem(sizeof(rec));</code></pre></figure>
<p>is probably <strong>not</strong> a sensible thing to do (at least not if you intend to actually put something into the space the pointer points to) - the allocation should read <code>AllocMem(SizeOf(TMyRecord))</code>.</p>
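<p>For the record, the size mismatch is easy to demonstrate in any language; below is a quick Python ctypes analogue of the record above (my sketch, not part of the original Delphi code):</p>

```python
import ctypes

# ctypes analogue of TMyRecord: a record holding four pointers
class TMyRecord(ctypes.Structure):
    _fields_ = [("pointer1", ctypes.c_void_p),
                ("pointer2", ctypes.c_void_p),
                ("pointer3", ctypes.c_void_p),
                ("pointer4", ctypes.c_void_p)]

# analogue of PMyRecord = ^TMyRecord
PMyRecord = ctypes.POINTER(TMyRecord)

# the record is four pointers wide; a pointer to it is one address wide
assert ctypes.sizeof(TMyRecord) == 4 * ctypes.sizeof(ctypes.c_void_p)
assert ctypes.sizeof(PMyRecord) == ctypes.sizeof(ctypes.c_void_p)
assert ctypes.sizeof(TMyRecord) != ctypes.sizeof(PMyRecord)
```

<p>Allocating <code>sizeof(PMyRecord)</code> bytes therefore reserves room for exactly one pointer, not for the whole record.</p>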
<p>At least it started breaking very quickly and consistently once TMyRecord got enough members - too bad that I first looked in the completely wrong place.</p>
<p>Nothing beats the joy of seeing a very non-localized access violation go away after two hours of debugging though.</p>
Life is good2009-01-13T00:00:00+00:00http://pilif.github.com/2009/01/life-is-good<p>Remember <a href="/2009/01/bugs-bugs-and-more-bugs/">last week</a> when I was ranting about nothing working as it should?</p>
<p>Well - this week feels a lot more successful than the last one. It may very well be one of the nicest weeks I’ve had in IT so far.</p>
<ul>
<li>The plugin system I've written for our PopScan Windows Client doesn't just work, it's also some of the shiniest code I've written in my life. Everything is completely transparent and thus easy to debug and extend. Once more, simplicity led to consistency and consistency is what I'm striving for.</li>
<li>Yesterday, we finally managed to kill a long-standing bug in a certain PopScan installation which seemed to manifest itself as intermittently failing synchronization but turned out to be synchronization that never worked at all. Now it works <em>consistently</em>.</li>
<li>Over the weekend, I finally got off my ass and used some knowledge of physics and a spirit level to re-balance my projector on the ceiling mount, making the picture fit the screen perfectly.</li>
<li>Just now, I've configured two managed switches at home to carry cable modem traffic over a separate VLAN, allowing me to abandon my previously wacky setup which wasted a lot of cable and looked really bad. I was forced to do that because a TV connector I had mounted stopped working consistently (here's the word again).
The configuration I thought out worked instantly and internet downtime at home (as if somebody counts) was 20 seconds or so - the TCP connections even stayed all up.</li>
<li>I finally got <a href="http://www.fireflymediaserver.org/">mt-daapd</a> to work consistently with all the umlauts in the file names of my iTunes collection.</li>
</ul>
<p>If this week is an indication of how the rest of the year will be, then I’m really looking forward to this.</p>
<p>As the title says: Life is good.</p>
Tunnel munin nodes over HTTP2009-01-12T00:00:00+00:00http://pilif.github.com/2009/01/tunnel-munin-nodes-over-http<p><a href="/2009/01/monitoring-servers-with-munin/">Last time</a> I talked about Munin, the one system monitoring tool I feel works well enough for me to actually bother working with. Harsh words, I know, but the key to every solution is simplicity. And simple Munin is. Simple, but still powerful enough to do everything I would want it to do.</p>
<p>The one problem I had with it is that the querying of remote nodes works over a custom TCP port (4949) which doesn’t work behind firewalls.</p>
<p>There are some <a href="http://munin.projects.linpro.no/wiki/MuninSSHTunneling">SSH tunneling solutions</a> around, but what do you do if even SSH is not an option because the remote access method provided to you relies on some kind of VPN technology or access token?</p>
<p>Even if you could keep a long-running VPN connection, it’s a very performance-intensive solution as it requires resources on the VPN gateway. But this point is moot anyway because nearly all VPNs terminate long-running connections. If re-establishing the connection requires physical interaction, then you are basically done here.</p>
<p>This is why I have created a neat little solution which tunnels the munin traffic over HTTP. It works with a local proxy server your munin monitoring process will connect to and a little CGI-script on the remote end.</p>
<p>This will cause multiple HTTP connections per query interval (the proxy uses Keep-Alive though, so it’s not new TCP connections we are talking about - it’s just hits in the access.log you’ll have to filter out somehow), because it’s impossible for a CGI script to keep the connection open and send data both ways - at least not if your server side is running plain PHP, which is the case in the setup I was designing this for.</p>
<p>Anyway - the solution works flawlessly and helps me monitor a server behind one hell of a firewall <em>and</em> behind a reverse proxy.</p>
<p>You’ll find the code <a href="http://github.com/pilif/munin-http-tunnel">here</a> (on GitHub as usual) and some explanation on how to use it <a href="http://github.com/pilif/munin-http-tunnel/tree/master/README.markdown">is here</a>.</p>
<p>Licensed under the MIT license as usual.</p>
Bugs, Bugs and more Bugs2009-01-09T00:00:00+00:00http://pilif.github.com/2009/01/bugs-bugs-and-more-bugs<p>I love my job. Always have, always will.</p>
<p>But if you ask me what the most annoying aspect of it is, then I would answer you that it’s stuff always breaking all around me.</p>
<p>Whatever I do, there is no guarantee that anything will work as expected: it may break from one moment to the next, or it may never work at all. There are hardware failures, OS failures, software failures - each and every day I lose at least one or two hours to stuff that doesn’t work or suddenly stops working.</p>
<p>Let me give you an account of what happened since the beginning of 2009:</p>
<ul>
<li>When installing two previously configured servers at a colocation center, one didn't start up at all (opening and reclosing the case fixed that) and the ESX server on the other machine refused to connect to the VMWare license server despite a working TCP/IP connection between them, which turned out to be a missing hosts file entry despite connecting via IP address.</li>
<li>One day later, Outlook on the computer of someone whose PC I look after a bit decided to trash its .PST file and I had to remotely guide that person (on the phone) through restoring it from the backup.</li>
<li>Yesterday, my Firebug suddenly stopped working. At least the console object was no longer available in my scripts and the console itself didn't work. Reinstalling the addon helped (WTF?).</li>
<li>One of my two Vista Media Center PCs suddenly stopped playing any video file, despite me not doing updates on these machines precisely to prevent stuff like this from happening. To this date I have no idea how to fix this.</li>
<li>My Delphi 2007 installation just now decided to stop displaying the online help. Trying to fix that by reinstalling it ended with an error message containing "Error" as both title and content - but not before first completely uninstalling Delphi, with no way of getting it back (you know... "Error" again). This was fixed by removing D2009 and then reinstalling 2007 and 2009 - a process that took 2 hours of installation time and another three to figure out what was going on.</li>
<li>When I was frustrated enough and wanted to vent (i.e. write this post), my WordPress just now decided to do something really strange to the layout of the "Add New Post" page which made it impossible to post anything. Disabling Google Gears and restarting the browser helped.</li>
</ul>
<p>Our everyday technology is becoming more and more complex, thus causing more and more strange problems, requiring more and more knowledge and time to work around them. If we continue on that path, sooner or later it will be impossible to keep up with fixing problems popping up.</p>
<p>That will be the day when I’ll hopefully live on some island way off the net and all this stuff.</p>
Monitoring servers with munin2009-01-07T00:00:00+00:00http://pilif.github.com/2009/01/monitoring-servers-with-munin<p>If you want to monitor runtime parameters of your machines, there are quite many tools available.</p>
<p>But in the past, I’ve never been quite happy with any of them. Some didn’t work, others didn’t work right and some others worked ok but then stopped working all of a sudden.</p>
<p>All of them were a pain to install and configure.</p>
<p>Then, a few days ago, I found <a href="http://munin.projects.linpro.no">Munin</a>. Munin is optimized for simplicity, which makes it work very, very well. And the reports actually look nice and readable, which is a nice additional benefit.</p>
<figure><img class="size-full wp-image-479" title="Screenshot of some Apache parameters" src="http://www.gnegg.ch/wp-content/uploads/2009/01/munin-overview.png" alt="Apache parameters" width="400" height="339" /><figcaption>Apache parameters</figcaption></figure>
<p>Like many other system monitoring solutions, Munin relies on custom plugins to access the various system parameters. Unlike other solutions though, the plugins are very easy to write, understand and debug which encourages you to write your own.</p>
<p>Adding additional servers to be watched is a matter of configuring the node (as in “apt-get install munin-node”) and adding two lines to your master server configuration file.</p>
<p>Adding another plugin for a new parameter to monitor is a matter of creating one symlink and restarting the node.</p>
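<p>Plugins really are easy to write: any executable that prints its graph description when called with <code>config</code> and its values otherwise will do. Below is a rough Python sketch of a plugin graphing the number of running processes (my own illustration, not a plugin shipped with Munin):</p>

```python
import os
import sys


def plugin(arg=""):
    # called with "config", a munin plugin describes its graph;
    # called without arguments, it reports "<field>.value <number>"
    if arg == "config":
        return ("graph_title Number of processes\n"
                "graph_vlabel processes\n"
                "processes.label processes")
    # numeric entries in /proc correspond to running processes (Linux only)
    count = sum(1 for p in os.listdir("/proc") if p.isdigit())
    return "processes.value %d" % count


if __name__ == "__main__":
    print(plugin(sys.argv[1] if len(sys.argv) > 1 else ""))
```

<p>Drop something like this into the plugin directory, symlink it from /etc/munin/plugins, restart the node and the new graph shows up.</p>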
<figure><img class="size-full wp-image-480" title="Manifestation of a misconfigured cronjob" src="http://www.gnegg.ch/wp-content/uploads/2009/01/cpu-week.png" alt="Manifestation of a misconfigured cronjob" width="495" height="336" /><figcaption>Manifestation of a misconfigured cronjob</figcaption></figure>
<p>On the first day after deployment the tool already proved useful in finding a misconfigured cronjob on one server which ran every minute for one hour every second hour instead of once per two hours.</p>
<p>Munin may not have all the features of the full-blown solutions, but it has three real advantages over everything else I’ve seen so far:</p>
<ol>
<li>It's very easy to install and configure. What good is an elaborate solution if you can never get it to work correctly?</li>
<li>It looks nice and clean. If looking at the reports hurts the eyes, you don't look at them or you don't understand what they want to tell you.</li>
<li>Because the architecture is so straight-forward, you can create customized counters in <em>minutes</em>, which in the end provides you with a much better overview of what is going on.</li>
</ol>
<p>The one big drawback is that the master data collector needs to access the monitored servers on port 4949 which is not exactly firewall-friendly.</p>
<p>Next time, we’ll learn how to work around that (and I don’t mean the usual ssh tunnel solution).</p>
Windows Media Encoder: File not found2008-12-12T00:00:00+00:00http://pilif.github.com/2008/12/windows-media-encoder-file-not-found<p>Today I have come across an installation of Windows Media Encoder that refused to actually encode media. Whenever I started the encoding process, the encoder quit with the error 0x80070002 and gave the very helpful information that “the system cannot find the file specified”.</p>
<p>The problem appeared quite suddenly after everything had worked perfectly fine for the last three months. As the system is behind a very air-tight firewall and is the only machine in the network segment (aside from some IP cameras), it hasn’t even been updated via Windows Update. So I have to conclude that the problem appeared out of the blue. One day it worked, the next it stopped working.</p>
<p>I’ve tried everything to fix this (the encoder in question was encoding a live stream for a client of ours): From reinstalling the Axis capture driver to reinstalling Windows Media Encoder - nothing worked - the error message stayed the same.</p>
<p>Even googling proved all but helpful: there are quite a few pages, all apparently mirroring the same MSDN forum, on which someone actually posted the same problem but never got an answer. How annoying is that? You find 10 or more hits, each having your problem right in the title and each on a different page, but in the end it’s all the same posting mirrored by different sites and plastered with advertisements.</p>
<p>On a hunch though, I have deleted “%Localappdata%\Microsoft\Windows Media” and “%Localappdata%\Microsoft\Windows Media Player” seeing that these folders stayed intact after a reinstallation while also being somewhat Windows media related.</p>
<p>Of course that helped!</p>
<p>So if you ever run into the same problem and Media Encoder suddenly stops encoding, it may be caused by a corrupted cache of sorts. In that case, remove the cache and be encoding again. Note though that if you are on a client machine with all your media on it, removing these folders may be unwise as they could contain some meta information about your media.</p>
<p>In my case that didn’t matter though.</p>
Dropbox2008-11-20T00:00:00+00:00http://pilif.github.com/2008/11/dropbox<p><a href="http://www.getdropbox.com">Dropbox</a> is cloud storage on the next level: you install their little application - available for Linux, Mac OS X and Windows - and it creates a folder which is automatically kept synchronized between all the computers you have installed that little application on.</p>
<p>Because it synchronizes in the background and always keeps the local copy around, the access-speed isn’t different from a normal local folder - mainly because it is, after all, a local folder you are accessing. Dropbox is not one of these slow “online hard drives” it’s more like rsync in the background (and rsync it is - the application is intelligent enough to only transmit deltas - even from binary files).</p>
<p>They do provide you with a web interface of course, but the synchronizing aspect is the most interesting.</p>
<p>The synchronized data ends up somewhere in Amazon’s S3 service, which is fine with me.</p>
<p>Unfortunately, while the data is stored in an encrypted fashion on S3, the key is generated by the Dropbox server and thus known to them, which makes Dropbox completely unusable for sensitive unencrypted data. They do state in the FAQ that this may change sometime in the future, but for now it is as it is.</p>
<p>Still, I found some use for Dropbox: ~/Library/Preferences, ~/.zshrc and ~/.ssh are now all stored in ~/Dropbox/System and symlinked back to their original places. This means that a large chunk of my user profile is available on all the computers I’m working on. I would even try the same trick with ~/Library/Application Support, but that seems risky due to the missing encryption and due to the fact that Application Support sometimes contains database files which are sure to get corrupted when moved around while open - like the Firefox profile.</p>
<p>This naturally even works when the internet connection is down - Dropbox synchronizes changes locally, so when the internet (or Dropbox) is down, I just have the most recent copy from when the service was still working - that’s more than good enough.</p>
<p>Another use that comes to mind for Dropbox storage are game save files or addons you’d want to have access to on every computer you are using - just move your stuff to ~/Dropbox and symlink it back to the original place.</p>
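<p>The move-and-symlink dance is the same every time, so it can be scripted; here’s a small Python sketch (the helper name and the example paths are my own, purely illustrative):</p>

```python
import os
import shutil


def move_and_link(path, dropbox_dir):
    """Move a file or folder into the Dropbox folder and
    leave a symlink behind at the original location."""
    os.makedirs(dropbox_dir, exist_ok=True)
    target = os.path.join(dropbox_dir, os.path.basename(path))
    shutil.move(path, target)
    os.symlink(target, path)
    return target


# e.g. move_and_link(os.path.expanduser("~/.zshrc"),
#                    os.path.expanduser("~/Dropbox/System"))
```

<p>Anything reading the original path transparently follows the symlink into the synchronized folder.</p>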
<p>Very convenient.</p>
<p>Now if only they’d provide me with a way to provide my own encryption key. That way I would instantly buy the pro account with 25GB of storage and move lots and lots of data in there.</p>
<p>Dropbox is the answer to the ever increasing amount of computers in my life because now I don’t care about setting up the same stuff over and over again. It’s just there and ready. Very helpful.</p>
Listen to your home music from the office2008-11-03T00:00:00+00:00http://pilif.github.com/2008/11/listen-to-your-home-music-from-the-office<p>My MP3 collection is safely stored on <a href="/2006/07/computers-under-my-command-issue-1-shion/">shion</a>, on a drobo mounted as /nas. Naturally, I want to listen to said music from the office - especially considering my <a href="/2005/05/lots-of-fun-with-openvpn/">fully routed VPN access</a> between the office and my home infrastructure and the upstream which suffices for at least 10 concurrent 128 kbit/s streams (boy - technology has changed in the last few years - I remember the times when you couldn’t reliably stream 128 kbit/s - let alone my 160/320 kbit/s MP3s).</p>
<p>I’ve tried many things so far to make this happen:</p>
<ul>
<li>serve the files with a tool like <a href="http://www.jinzora.org">jinzora</a>. This works, but I don't really like jinzora's web interface and I was never able to get it to work correctly on my Ubuntu box. I was able to trace it down to null bytes read from their tag parser, but the code is very convoluted and practically unreadable without putting quite some effort into that. Considering that I didn't much like the interface in the first place, I didn't want to invest time into that.</li>
<li>Use a SlimServer (now <a href="http://www.slimdevices.com/pi_features.html">Squeezecenter</a>) with a softsqueeze player. I don't use my Squeezebox (an original model with the original slimdevices brand, not the newer Logitech one) any more because the integrated amplifier in the Sonos players works much better for my current setup. This solution worked quite OK, but the audio tends to stutter a bit at the beginning of tracks, indicating some buffering issues.</li>
<li>Use iTunes' integrated library sharing feature. This seemed both undoable and impractical. Impractical because it would force me to keep my main Mac running all the time and undoable because iTunes sharing can't pass subnet boundaries. Aside from that, it's a wonderful solution as audio doesn't stutter, I already know the interface and access is very quick and convenient.</li>
</ul>
<p>But then I found out how to make the iTunes thing both very much doable and practical.</p>
<p>The network boundary problem can be solved using <a href="http://www.chaoticsoftware.com/ProductPages/NetworkBeacon.html">Network Beacon</a>, a ZeroConf proxy. Start the application and create a new beacon. Choose any service name, use «_daap._tcp.» as service type, set the port number to 3689, enable the host proxy, leave the host name empty and enter the IP address of the system running iTunes (or Firefly - see below).</p>
<p>Oh, and the target iTunes refuses to serve out data to machines in different subnets, so to be able to directly access a remote iTunes, you’d also have to set up an SSH tunnel.</p>
<p>Using Network Beacon, ZeroConf quickly begins working across any subnet boundaries.</p>
<p>The next problem was about the fact that I was forced to keep my main workstation running at home. I fixed that with <a href="http://www.fireflymediaserver.org/">Firefly Media Server </a>for which even a pretty recent prebuilt package exists for Ubuntu (apt-get install mt-daapd).</p>
<p>I’ve installed that, configured iptables to drop packets for port 3689 on the external interface and configured Firefly to use the music share (which basically is a current backup of the iTunes library of my main workstation - rsync for the win).</p>
<p>Firefly in this case even detects the existing iTunes playlists (as the music share is just a backup copy of my iTunes library - including the iTunes Library.xml). Smart playlists don’t work, but they can easily be recreated in the Firefly web interface.</p>
<p>This means that I can access my complete home mp3 library from the office, stutter free, using an interface I’m well used to, without being forced to keep my main machine running all the time.</p>
<p>And it isn’t even that much of a hack and thus easy to rebuild should the need arise.</p>
<p>I’d love to not be forced to do the Network Beacon thing, but avahi doesn’t relay ZeroConf information across VPN interfaces.</p>
Sonos news2008-10-28T00:00:00+00:00http://pilif.github.com/2008/10/sonos-news<p>Today, <a href="http://www.sonos.com">Sonos</a> announced their new 2.7 software version for their home appliances with some additional web radio features in which I’m not particularly interested as I’m more or less only listening to <a href="http://ormgas.rainwave.cc/">one web radio station</a>. What they’ve also announced though was much more interesting: An <a href="http://phobos.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=293523031&mt=8">iPhone Version of their Controller application</a> (iTunes Link).</p>
<p>The thing doesn’t just look nice, it also works perfectly well and provides all the functionality you are used to having in your Sonos controller, but without the controller’s bulkiness (the thing is heavy and quite large). I’m constantly carrying my iPhone around anyway and it’s constantly connected to the WiFi network in my home, so it’s the perfect fit as a Sonos controller.</p>
<p>The application starts up quite instantly: It does show a splash screen for around three seconds, but that is still way shorter than a controller booting up from deep sleep, which you have to put it into if you want it to last longer than a day or so.</p>
<p>Functionality-wise the iPhone application provides everything a real controller does - well… nearly everything. I truly miss the alarm functionality, but I’m quite sure that’ll come soon enough.</p>
<p>Aside of that, I’m inclined to say that this little application more or less obsoletes the original controller. And in every case but the 32GB iPod Touch, it’s always cheaper to buy any Apple device and install the application than it is to buy the original Sonos controller (here in Switzerland, you can get an 8 GB touch for half the price of a Sonos controller) - if you can live with setting up alarms in the desktop software. It’s certainly possible (and thankfully much quicker than with the original controller) to cancel a running alarm in the iPhone controller.</p>
<p>Very nice indeed.</p>
<p>On related news: I have updated my <a href="/ogg2mp3/">ogg to mp3 stream converter</a> to stop looking at the URL to decide whether the URL to play is a stream itself or a playlist, and instead to fetch that information from the HTTP response headers themselves, thus making the script continue to work with Rainwave despite them having changed the URL for the tune-in link.</p>
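<p>The idea boils down to inspecting the Content-Type response header instead of the URL; a simplified Python sketch of that decision (the exact type list is an assumption on my part - the real script may check differently):</p>

```python
# MIME types that indicate a playlist to fetch and parse rather
# than a stream to play directly (an illustrative, partial list)
PLAYLIST_TYPES = {"audio/x-scpls", "audio/x-mpegurl",
                  "audio/mpegurl", "application/pls+xml"}


def is_playlist(content_type):
    """Given the Content-Type of the tune-in URL's HTTP response,
    decide whether it points at a playlist or at the stream itself."""
    return content_type.split(";")[0].strip().lower() in PLAYLIST_TYPES
```

<p>Because the decision now follows what the server actually serves, a renamed tune-in URL no longer breaks anything.</p>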
My favourite asterisk feature2008-10-22T00:00:00+00:00http://pilif.github.com/2008/10/my-favourite-asterisk-feature<p>I’ve just included this into the context of the dialplan where the calls from and to our internal phones live.</p>
<pre>[intercom]
exten => _55[6-8][1-9],1,SIPAddHeader("Call-Info: sip:asterisk\;<strong>answer-after=0</strong>")
exten => _55[6-8][1-9],2,Dial(SIP/${EXTEN:2})</pre>
<p>This is as useless as it is fun.</p>
<p>Too bad softphones are getting some real attention in the office lately, as they don’t support the answer-after feature - and even if they did, where is the fun in just making yourself heard on the victim’s headphones as opposed to doing it directly on their speaker - loud enough to be heard in the whole office?</p>
<p>VoIP is fun and it’s about time I do more to our Asterisk config than just watch it work and never fail.</p>
IRC user interface idea2008-10-07T00:00:00+00:00http://pilif.github.com/2008/10/irc-user-interface-idea<p>Don’t you know this problem?</p>
<p>You are connected to some amount of IRC servers and you are watching a certain amount of channels.</p>
<p>Every IRC client I know either uses tabs or windows to separate these channels in their own context, usually providing some visual clue if there was activity in a different channel you are not currently watching.</p>
<p>While this metaphor probably makes a lot of sense in very busy channels, I think that consolidating every channel into one single window probably is a much better way for you to follow what’s going on and to talk back to the channels - especially when you are watching lesser populated channels.</p>
<p>This frees you from the burden of constantly switching channel windows (or tabs) to see what is going on.</p>
<p>Let’s say you are connected to irc1.example.com and irc2.example.com. On irc1, you are connected to #channel1a and #channel1b and on irc2, you are connected to #channel2a</p>
<p>Now, to my knowledge, every current IRC client uses either three windows or three tabs (maybe even 5 windows/tabs because the servers themselves get a window too) to represent this information. In window-based clients, you can arrange all of them alongside each other, but talking to a certain channel still forces you to focus different windows.</p>
<p>Now with my idea, you would consolidate these channels. You would only get one window which contains all the messages from all channels.</p>
<p>So in the simplest incarnation, you’d probably see something like this in your chat window:</p>
<pre>1) irc1/#channel1a [user1aa]> hi there!
2) irc1/#channel1b [user1a]> hi there!
1) irc1/#channel1a [user1ab]> hi user1aa
3) irc2/#channel2a [user2aa]> hi folks!</pre>
<p>though you would probably understand much more easily what’s going on if the server-, channel- and user names were a bit more… well… distinct.</p>
<p>Of course, you could add color. You assign each channel a color, like this:</p>
<pre><span style="color: #99cc00;">1) irc1/#channel1a [user1aa]> hi there!</span>
<span style="color: #ff6600;">2) irc1/#channel1b [user1a]> hi there!</span>
<span style="color: #99cc00;">1) irc1/#channel1a [user1ab]> hi user1aa</span>
<span style="color: #0000ff;">3) irc2/#channel2a [user2aa]> hi folks!</span></pre>
<p>and if you need to, you can still color nicks.</p>
<pre><span style="color: #99cc00;">1) irc1/#channel1a [<span style="color: #ff0000;">user1aa</span>]> hi there!</span>
<span style="color: #ff6600;">2) irc1/#channel1b [<span style="color: #003300;">user1a</span>]> hi there!</span>
<span style="color: #99cc00;">1) irc1/#channel1a [<span style="color: #cc99ff;">user1ab</span>]> hi user1aa</span>
<span style="color: #0000ff;">3) irc2/#channel2a [<span style="color: #ffcc00;">user2aa</span>]> hi folks!</span></pre>
<p>Now… how to talk back?</p>
<p>Easy. Every channel is assigned a number for quick access. Just type /[channel number] to switch to that channel, then type your message and hit enter. The channel you last talked to stays selected until you type /[another channel number].</p>
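<p>A proof-of-concept of that input routing fits in a few lines; here is a rough Python sketch (class name and structure are mine, just to show the idea works):</p>

```python
class ConsolidatedInput:
    """Route typed lines in a single consolidated window:
    "/<n> text" switches the sticky channel to n and sends text there;
    a plain line goes to the currently sticky channel."""

    def __init__(self, channels):
        # channels: number -> (server, channel), e.g. {1: ("irc1", "#channel1a")}
        self.channels = channels
        self.current = min(channels)  # default to the first channel

    def handle(self, line):
        text = line
        if line.startswith("/"):
            head, _, rest = line[1:].partition(" ")
            if head.isdigit() and int(head) in self.channels:
                self.current = int(head)  # channel stays sticky
                text = rest
        if not text:
            return None  # a bare "/<n>" only switches channels
        return (self.channels[self.current], text)
```

<p>Everything else - coloring, numbering the incoming lines - is plain presentation on top of that mapping.</p>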
<p>This feels like a much easier and more intuitive way to handle multiple connections, especially in cases where the channels you have joined are not too active, as this way you can easily watch everything that is going on.</p>
<p>Also, discussions usually happen in bursts in different channels. You will only very rarely see the color concert I’ve shown above; usually, you have a discussion going on in one channel while the others are relatively quiet.</p>
<p>I’ll probably have to go and implement a proof-of-concept sometime in the future, but this feels like such a nice idea when just looking at it. Why is nobody doing it? What am I missing?</p>
Automatic language detection2008-09-23T00:00:00+00:00http://pilif.github.com/2008/09/automatic-language-detection<p>If you write a website, do not use Geolocation to determine the language to display to your user.</p>
<p>If you write a desktop application, do not use the region setting to determine the language to display to your user.</p>
<p>This is incredibly annoying for some of us, especially for me, which is why I’m ranting here.</p>
<p>The moment Google released their (awful) German translation for their RSS reader, I was served the German version just because I have a Swiss IP address.</p>
<p>Here in Switzerland, we actually speak one of three (or four, depending on who you ask) languages, so defaulting to German is probably not of much help for the people in the french speaking part.</p>
<p>Additionally, there are many users fluent in (at least reading) English. We always prefer the original language if at all possible because generally, translations never quite work. Even if you have the best translators at work, translated texts never feel fluid. Especially not when you are used to the original version.</p>
<p>So, Google, what were you thinking to switch me over to the German version of the reader? I have been using the English version for more than a year, so clearly, I understood enough of that language to be able to use it. More than 90% of the RSS feeds I’m subscribed to are, in fact, in English. Can you imagine how pissed I was to see the interface changed?</p>
<p>This is even worse on the iPhone/iPod frontend, because, there, you don’t even provide an option to change the language aside of manually hacking the URL.</p>
<p>Or take desktop applications. I live in the German speaking parts of Switzerland. True. So <em>naturally</em> I have set my locale settings to Swiss German. You know: I want to have the correct number formatting, I want my weeks to start on Mondays. I want the correct currency. I want my 24 hours clock I’m used to.</p>
<p>Actually, I also want the German week and month names, because I will be using these in most of my letters and documents, which are, in fact, German too.</p>
<p>But my OS installation is English. I am used to English. I prefer English. Why do so many programs insist on using the locale setting to determine the display language? Do you developers think it’s funny to have a mish-mash of languages on the screen? Don’t you think that my using an English OS version may be an indication that <em>I do not</em> want to read your crappy German translation alongside the English user interface of my OS?</p>
<p>Don’t you think that it feels really stupid to have a button in a German dialog box open another, English, dialog (the first one is from Chrome, the one that opens once you click “Zertifikate verwalten” (Manage certificates) is from Windows itself)?</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2008/09/langmix.png"><img class="aligncenter size-full wp-image-458" title="langmix" src="http://www.gnegg.ch/wp-content/uploads/2008/09/langmix.png" alt="" width="362" height="363" /></a></p>
<p>In Chrome, I can at least fix the language - once I found the knob to turn. At first, it was easier for me to just delete the German localization file from the chrome installation because, due to being completely unused to German UIs, I was unable to find the right setting.</p>
<p>This is really annoying and I see this particular problem being neglected on an incredibly large scale. I know that I am a minority, but the problem is so terribly easy to fix:</p>
<ul>
<li>All current browsers send an Accept-Language header. Unlike in the early days, it is now correctly preset in all the common browsers. Use that. Don't use my IP address.</li>
<li>Instead of reading my OS locale setting, ask the OS for its UI language and use that to determine which localization to load (this has actually been the recommended approach in Microsoft's guidelines at least since Windows XP, released in 2001).</li>
</ul>
<p>Using these two simple tricks, you help a minority <em>without hindering the majority in any way</em> and without additional development overhead!</p>
<p>Actually, you’ll be getting away a lot cheaper than before. GeoIP is expensive if you want it to be accurate (and you <strong>do</strong> want that. Don’t you?), whereas there are ready-to-use libraries to determine the correct language even from the most complex Accept-Language-Header.</p>
<p>Asking the OS for the UI language isn’t harder than asking it for the locale, so no overhead there either.</p>
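The Accept-Language lookup really is that simple. Here is a rough sketch with standard Unix tools that picks the client's preferred language tag; the header value is a made-up example of what a browser in the German-speaking part of Switzerland might send:

```shell
# Hypothetical header: Swiss German first, then German, then English.
header='de-CH,de;q=0.9,en;q=0.8'

# Split the entries, default q to 1 where it is omitted,
# then pick the tag with the highest q value.
best=$(printf '%s' "$header" | tr -d ' ' | tr ',' '\n' | awk -F';' '{
    q = 1
    if ($2 ~ /q=/) { sub(/.*q=/, "", $2); q = $2 }
    print q, $1
}' | sort -rn | head -n 1 | awk '{ print $2 }')

echo "$best"   # de-CH
```

A real implementation would additionally intersect the result with the translations the site actually offers, walking down the q-ordered list until it finds a match.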
<p>Please, developers, please have mercy! Stop the annoyance! Stop it now!</p>
VMWare Fusion Speed2008-09-19T00:00:00+00:00http://pilif.github.com/2008/09/vmware-fusion-speed<p>This may be totally placebo, but I noticed that using Vista inside a VMWare Fusion VM has just turned from nearly unbearably slow to actually quite fast by updating from 2.0 Beta 2 to 2.0 Final.</p>
<p>It may very well be that the beta versions contained additional logging and/or debug code which was keeping the VM from reaching its fullest potential.</p>
<p>So if you are too lazy to upgrade and still running one of the Beta versions, you should consider updating. For me at least, it really brought a nice speed-up.</p>
Dynamic object creation in Delphi2008-09-17T00:00:00+00:00http://pilif.github.com/2008/09/dynamic-object-creation-in-delphi<p>In a quite well-known pattern, you have a certain amount of classes, all inheriting from a common base and you have a factory that creates instances of these classes. Now let’s go further ahead and assume that the factory will have no knowledge of what classes will be available at run-time.</p>
<p>Each of this classes registers itself at run-time depending on a certain condition and then the factory will create instances depending on that registration.</p>
<p>This post is about how to do this in Delphi. Remember that this sample is very much abstracted and the real-world application is quite a bit more complex, but this sample should be enough to demonstrate the point.</p>
<p>Let’s say, we have these classes:</p>
<figure class="highlight"><pre><code class="language-delphi" data-lang="delphi">type
TJob = class(TObject)
public
constructor Create;
end;
TJobA = class(TJob)
public
constructor Create;
end;
TJobB = class(TJob)
public
constructor Create;
end;
TJobAA = class(TJobA)
public
constructor Create;
end;</code></pre></figure>
<p>Each of these constructors does something to initialize the instance and thus calls its parent using ‘inherited’.</p>
<p>Now, let’s further assume that we have a Job-Repository that stores a list of available jobs:</p>
<figure class="highlight"><pre><code class="language-delphi" data-lang="delphi">type
TJobRepository = class(TObject)
private
FAvailableJobs: TList;
public
procedure registerJob(cls: TClass);
function getJob(Index: Integer): TClass;
end;</code></pre></figure>
<p>Now we can register our jobs</p>
<figure class="highlight"><pre><code class="language-delphi" data-lang="delphi">	rep := TJobRepository.Create;
if condition then
rep.RegisterJob(TJobAA);
if condition2 then
rep.RegisterJob(TJobB);</code></pre></figure>
<p>and so on. Now at runtime, depending on some condition, we will instantiate any of these registered jobs. This is how we’d do that:</p>
<figure class="highlight"><pre><code class="language-delphi" data-lang="delphi">	job := rep.getJob(0).Create;	</code></pre></figure>
<p>Sounds easy. But this doesn’t work.</p>
<p>job in this example will be of type TJobAA (good), but its constructor will not be called (bad). The solution is to</p>
<ol>
<li>Declare the constructor of TJob as being virtual.</li>
<li>Create a meta-class for TJob, because the constructor of TObject is NOT virtual, so when you dynamically instantiate an object from a plain TClass, only the constructor of TObject would be called.</li>
<li>Override the inherited virtual constructor.</li>
</ol>
<p>So in code, it looks like this:</p>
<pre>type
TJobClass = class of TJob;
TJob = class(TObject)
public
constructor Create; <strong>virtual;</strong>
end;
TJobA = class(TJob)
public
constructor Create; <strong>override;</strong>
end;
TJobAA = class(TJobA)
public
constructor Create; <strong>override;</strong>
end;
TJobRepository = class(TObject)
private
FAvailableJobs: TList;
public
procedure registerJob(cls: TClass);
function getJob(Index: Integer): T<strong>Job</strong>Class;
end;
</pre>
<p>This way, Delphi knows that when you call</p>
<figure class="highlight"><pre><code class="language-delphi" data-lang="delphi">	job := rep.getJob(0).Create;	</code></pre></figure>
<p>that you are creating an instance of TJobAA whose constructor overrides the virtual constructor of TJob, by virtue of TJobAA’s class being a class of TJob.</p>
<p>Personally, I would have assumed that this just works, without declaring the meta-class and without the trickery of explicitly marking the constructor as virtual. But seeing that Delphi is a statically compiled language, I’m actually happy that this works at all.</p>
iTunes 8 visualization2008-09-10T00:00:00+00:00http://pilif.github.com/2008/09/itunes-8-visualization<p><img class="aligncenter size-full wp-image-441" title="iTunes Visualizaion" src="http://www.gnegg.ch/wp-content/uploads/2008/09/itunes.jpg" alt="" width="450" height="292" /></p>
<p>Up until now I have not been a very big fan of iTunes’ visualization engine, probably because I’ve been spoiled with <a href="http://www.nullsoft.com/free/milkdrop/">MilkDrop</a> in my Winamp days (which still owns the old iTunes display on so many levels).</p>
<p>But with the release of iTunes 8 and their new visualization, I have to admit that, when you choose the right music (in this case it’s <a href="http://ax.phobos.apple.com.edgesuite.net/WebObjects/MZStore.woa/wa/browserRedirect?url=itms%253A%252F%252Fax.phobos.apple.com.edgesuite.net%252FWebObjects%252FMZStore.woa%252Fwa%252FviewAlbum%253Fi%253D70980317%2526id%253D70980839%2526s%253D143459">Liberi Fatali</a> from Final Fantasy 8), you can really get something out of this.</p>
<p>The still picture really doesn’t do it justice, so I have created <a href="http://www.gnegg.ch/wp-content/uploads/2008/09/itunes8-visu.mov">this video</a> (it may be a bit small, but you’ll see what I’m getting at) to visualize my point. Unfortunately, near the end it gets worse and worse, but the beginning is one of the more impressive shows I have ever seen generated out of this particular piece of music.</p>
<p>This may even beat MilkDrop and I could actually see myself assembling a playlist of some sort and put this thing on full screen.</p>
<p>Nice eye candy!</p>
Food for thought2008-09-10T00:00:00+00:00http://pilif.github.com/2008/09/food-for-thought<p> </p>
<ol>
<li>When you open a restaurant, you know the risk of people going to the supermarket and cooking their own meals instead of paying you as the restaurant owner.</li>
<li>When you publish a book, you know there are going to be libraries where people can share one copy of your work.</li>
<li>When you build a house and sell it, you know the people living there will be going in and out of your house for years without ever paying you anything more.</li>
<li>When you live in a family and clean the parents car for one Euro, you know about the risk of your sister doing it for 50 cents next time around.</li>
</ol>
<p><em>But</em></p>
<ol>
<li>The music industry claims to have a monopoly on their work, managing to get laws created that allow them to control distribution and disallow anybody to create a lookalike without paying them.</li>
<li>The game industry is hard at work making it impossible for honest customers to even use the game they bought on multiple devices. And now they even begin to go after the used games market (think about that SNES pearl you just saw in your small games store. The one you wanted so badly ever since you were young. Wouldn't it be a shame if it were illegal for them to sell it?)</li>
<li>The entertainment industry is hard at work to make you pay for every device you want to play the same content on.</li>
<li>Two words. "SMS pricing".</li>
</ol>
<p>Why do things applying to “small people” not apply to the big shots? Why does the government create laws to turn around well-known facts we have grown up with just so that the wealthy companies (the ones not paying nearly enough taxes) can get even wealthier?</p>
<p>I just don’t get it.</p>
OAuth signature methods2008-08-26T00:00:00+00:00http://pilif.github.com/2008/08/oauth-signature-methods<p>I’m currently looking into web services and different methods of request authentication, especially as what I’m aiming to end up with is something inherently RESTful as this method will provide me with the best flexibility when designing a frontend to the service and generally, the arguments of the REST crowd seem to convince me (works like the human readable web, inherently scalable, enforces clean structure of resources and finally: easy to program against due to “obvious” API).</p>
<p>As different services are going to communicate with each other, sometimes acting as users of their respective platforms, and because I’m not really inclined to pass credentials around (or make the user do one half of the tasks on one site and the other half on another site), I was looking into different methods of authentication and authorization which work in a RESTful environment and work without passing around user credentials.</p>
<p>The first thing I did was to note the requirements and subsequently, I quickly designed something using public key cryptography which would have worked quite nicely (possibly - I’m no expert in this field - yet).</p>
<p>Then I learned about <a href="http://oauth.net">OAuth</a> which was designed precisely to solve my issues.</p>
<p>Eager, I read through <a href="http://oauth.net/core/1.0/">the specification</a>, but I was put off by one single fact: The default method for signing requests, the method that is most widely used, the method that is most widely supported, relies on a <strong>shared secret</strong>.</p>
<p>Even worse: The shared secret must be known in clear on both the client and the server (using the common terminology here; OAuth speaks of consumers and providers, but I’m (still) more used to the traditional naming).</p>
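To make concrete what this shared secret entails, here is a minimal sketch of the HMAC-SHA1 signing step (section 9.2 of the spec) using openssl. All values are made-up examples in the style of the spec's appendix, and the base string is heavily abbreviated; nothing here is a real credential:

```shell
# Made-up example secrets -- NOT real credentials.
consumer_secret='kd94hf93k423kf44'
token_secret='pfkkdhi9sl3r4s00'

# Per section 9.2, the HMAC key is simply both secrets joined by '&',
# which is exactly why both consumer and provider must hold them in clear.
key="${consumer_secret}&${token_secret}"

# A (heavily abbreviated) signature base string:
# METHOD & encoded-URL & encoded-sorted-parameters.
base='GET&http%3A%2F%2Fexample.com%2Frequest&oauth_consumer_key%3Ddpf43f3p2l4k3l03'

signature=$(printf '%s' "$base" | openssl dgst -sha1 -hmac "$key" -binary | openssl base64)
echo "$signature"
```

The crucial line is the key construction: whoever can read that string, on either end, can sign arbitrary requests.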
<p>This is bad on multiple levels:</p>
<ul>
<li>As the secret is stored in two places (client and server), it's twice as likely to leak as if it were stored in only one place (the client).</li>
<li>If the token is compromised, the attacker can act in the name of the client with no way of detection.</li>
<li>Frankly, it's a responsibility I, as a server designer, would not want to take on. If the secret is on the client and the client screws up and lets it leak, <strong>it's their problem</strong>; if the secret is stored on the server and the server screws up, <strong>it's my problem</strong> and I have to take responsibility.
Personally, I'm quite confident that I would not leak secret tokens, but can I be sure? Maybe. Do I even want to think about this? Certainly not if there is another option.</li>
<li>If, god forbid, the whole table containing all the shared secrets is compromised, I'm really, utterly screwed as the attacker can use all services, impersonating any user at will.</li>
<li>As the server <em>needs to know all shared secrets</em>, the risk of losing all of them at once <strong>exists in the first place</strong>. If only the client knows the secret, an attacker has to compromise each client individually. If the server knows the secret, it <em>suffices to compromise the server to get all clients</em>.</li>
<li>As per the point above, the server gets to be a really interesting target for attacks and thus needs to be extra secured and even needs to take measures against all kinds of more-or-less intelligent attacks (usually ending up DoSing the server or worse).</li>
</ul>
<p>In the end, HMAC-SHA1 is just repeating history. At first, we stored passwords in the clear, then we’ve learned to hash them, then we even salted them and now we’re exchanging them for tokens stored in the clear.</p>
<p>No.</p>
<p>What I need is something that keeps the secret on the client.</p>
<p>The secret should never ever need to be transmitted to the server. The server should have no knowledge at all of the secret.</p>
<p>Thankfully, OAuth contains a solution for this problem: RSA-SHA1 as defined in section 9.3 of the specification. Unfortunately, it leaves a lot to be desired though. Whereas the rest of the specification is a pleasure to read and very, well, specific, 9.3 contains the following phrase:</p>
<blockquote>It is assumed that the Consumer has provided its RSA public key in a verified way to the Service Provider, in a manner which is beyond the scope of this specification.</blockquote>
<p>Sure. Just specify the (IMHO) useless way using shared secrets and leave out the interesting and IMHO only functional method.</p>
<p>Sure. Transmitting a Public Key is a piece of cake (it’s public after all), but this puts another burden on the writer of the provider documentation and as it’s unspecified, implementors will be forced to amend the existing libraries with custom code to transmit the key.</p>
<p>Also, I’m unclear on header size limitations. As the server needs to know which public key was used for the signature (oauth_consumer_key), it must be sent with each request. While a manually generated public token can be small, a public key certainly isn’t. Is there a size limit for HTTP headers? I’ll have to check that.</p>
<p>I could just transmit the key ID (the key is known on the server) or the key fingerprint as the consumer key, but is that following the standard? I didn’t see this documented anywhere, and working examples in the wild are scarce.</p>
<p>Well… as usual, the better solution just requires more work and I can live with that, especially since, for now, I’ll be the one writing both server and client, but I feel the upcoming pain, should third-party consumers decide to hook up with that provider.</p>
<p>If you ask me what I would have done in the footsteps of the OAuth guys, I would only have specified RSA-SHA1 (and maybe PLAINTEXT) and not even bothered with HMAC-SHA1. And I would have specified a standard way for public key exchange between consumer and provider.</p>
<p>Now the train has left and everyone interested in creating a really secure (and convenient - at least for the provider) solution will be left with more work and non-standardized methods.</p>
... and back to Thunderbird2008-08-25T00:00:00+00:00http://pilif.github.com/2008/08/and-back-to-thunderbird<p>It has been a while since I’ve last posted about email - still a topic very close to my heart, be it on the server side or on the client side (though the server side generally works very well here, which is why I don’t post about it).</p>
<p>Waybackwhen, I’ve <a href="/2003/03/mail-for-windows-as-i-like-it/">written about Becky!</a> which is also where I’ve declared the points I deemed important in a mail client. A bit later, <a href="/2003/08/another-mail-client/">I’ve talked about The Bat!</a>, but in the end, I’ve settled with Thunderbird, just to <a href="/2006/05/mac-mail-can-software-perform-worse/">switch to Mac Mail</a> when I’ve switched to the Mac.</p>
<p>After that came my <a href="/2007/08/gmail-the-review/">excursion to Gmail</a>, but now I’m back to Thunderbird again.</p>
<p>Why? After all, my Gmail review sounded very nice, didn’t it?</p>
<p>Well…</p>
<ul>
<li>Gmail is blazingly fast once it's loaded, but starting the browser and then gmail (it loads so slowly that "starting (the) gmail (application)" is a valid term to use) is always slower than just keeping a mail client open on the desktop.</li>
<li>Google Calendar Sync sucks and <a href="/2003/10/each-problem-has-a-solution/">we're using Exchange/Outlook</a> here (and are actually quite happy with it - for calendaring and address books - it sucks for mail, but it provides decent IMAP support), so there was no way for the other folks here to have a look at my calendar.</li>
<li>Gmail always posts a "Sender:"-Header when using a custom sender domain which technically is the right thing to do, but Outlook on the receiving end screws up by showing the mail as being "From xxx@gmail.com on behalf of xxx@domain.com" which isn't really what I'd want.</li>
<li>Google's contact management offering is sub par compared even to Exchange.</li>
<li>iPhone works better with Exchange than it does with Google (yes. iPhone, but that's another story).</li>
<li>The cool Gmail MIDP client doesn't work/isn't needed on the iPhone, but originally was one of the main reasons for me to switch to Gmail.</li>
</ul>
<p>The one thing I really loved about Gmail though was the option of having a clean inbox by providing means for archiving messages with just a single keyboard shortcut. Using a desktop mail client without that functionality wouldn’t have been possible for me any more.</p>
<p>This is why I’ve installed <a href="https://addons.mozilla.org/en-US/thunderbird/addon/2487">Nostalgy</a>, a Thunderbird extension allowing me to assign a “Move to Folder xxx” action to a single keystroke (y in my case - just like gmail).</p>
<p>Using Thunderbird over Mac Mail has its reasons in the <a href="/2006/05/mac-mail-can-software-perform-worse/">performance</a> and in the crazy idea of Mac Mail to always download all the messages. Thunderbird is no race horse, but Mail.app isn’t even a slug.</p>
<p>Lately, more and more interesting posts regarding the development of Thunderbird have appeared on Planet Mozilla, so I’m looking forward to see Thunderbird 3 taking shape in its revitalized form.</p>
<p>I’m all but conservative in my choice of applications and gadgets, but Mail - also because of its importance for me - must work <strong>exactly</strong> as I want it. None of the solutions out there are doing that to the full extent, but TB certainly comes closest. Even after years of testing and trying out different solutions, TB is the thing that solves most of my requirements without adding new issues.</p>
<p>Gmail is splendid too, but it presents some shortcomings TB doesn’t come with.</p>
Internet at home2008-08-16T00:00:00+00:00http://pilif.github.com/2008/08/internet-at-home<p>I’m a usually very happy customer of <a href="http://www.cablecom.ch">Cablecom</a>. They provide internet-over-tv-cable and as here in Switzerland, basically everyone has tv cable and because they provide nice pure ip addresses (no PPPoE stuff) and because when you are not trapped in the administrative trap, then it just works. Cablecom internet is never down, very speedy and usually I’m envied for my pings in online matches of whatever game.</p>
<p>All these are very good reasons to become a customer of Cablecom and, despite what you are going to read here shortly, I would probably still recommend them to other users - at least those with some technical background because, quite frankly, of all the ways to get broadband here in Switzerland, this one is the one that works the easiest and most consistently.</p>
<p>But once you fall into the administrative trap, all hell breaks lose.</p>
<p>Here’s what happened to me (also, read my other <a href="/2006/11/living-without-internet-at-home/">post about Cablecom’s service</a>):</p>
<p>Somewhere around the end of May I got a letter telling me that I would get sent a new cable modem. Once I’ve got that, I should give them a call so they can deactivate my old one. Also, if I don’t call, they’d automatically disable the old modem after a couple of weeks.</p>
<p>Unfortunately, I never got that modem. I don’t know who’s to blame and I don’t care. Also, I could not have anticipated the story as it’s now unfolding because the letter clearly said that I’d get the modem at an unknown later date, so I wasn’t worried at the time.</p>
<p>At the beginning of June, I’ve noticed the network going down. Not used to that, especially not as it was down for a whole day, I called the hotline and told them that I suspected them of shutting off my service even though I had never received the modem.</p>
<p>They confirmed that and promised to resend the modem. Re-enabling the old one, they further told me, was not possible.</p>
<p>One week later - not having received the modem - I’ve called again and they told me that the order was delayed due to some CRM software change at their end, but they promised me to send it that week.</p>
<p>Another week passes. No modem. I call again and they tell me that the reprocessing of orders was delayed, but that I will get the modem that week for sure. Knowing that this probably won’t be the case, I’ve told them that I will be on vacation and that they should send it to my office address.</p>
<p>Another week passes and I go to vacation.</p>
<p>Another week passes and I call the office to ask if the modem (that was supposed to arrive two weeks ago the latest) has arrived. Of course it didn’t. What made me actually make the call was the fact that I’ve received a press release from Cablecom announcing more customers than ever - the irony of that bringing my memory back to the non-existing internet at my home.</p>
<p>So I called support again. They did notice that my order was late, but they had no idea why it was taking so long, there was no way of speeding it up and they had no idea when I would get the modem (keep in mind that I’m paying CHF 79/mt for not working internet access).</p>
<p>At this point I’ve had enough and I’ve called someone higher up I know working at Cablecom.</p>
<p>In the end, I was able to get internet access using that route, but it’s not entirely official and I still have not the slightest idea of when/if the problem with my actual account will ever be fixed.</p>
<p>Pathetic.</p>
<p>Still: If everything goes well, then you have nothing to fear. From a technical standpoint, Cablecom owns all other currently widely available methods for broadband internet access, so this is what I will be sticking with. Just be prepared for longer service interruptions once you fall into the administrative trap.</p>
Beautifying commits with git2008-07-16T00:00:00+00:00http://pilif.github.com/2008/07/beautifying-commits-with-git<p>When you look at our Subversion log, you’d often see revisions containing multiple topics, which is something I don’t particularly like. The main problem is merging patches. The moment you cramp up multiple things into a single commit, you are doomed should you ever decide to merge one of the things into another branch.</p>
<p>Because it’s such an easy thing to do, I began committing really, really often in git, but whenever I was writing the changes back to subversion, I used <tt>merge --squash</tt> so as not to clutter the main revision history with abandoned attempts at fixing a problem or implementing a feature.</p>
<p>So in essence, this meant that by using git, I was working against my usual goals: The actual commits to SVN were larger than before which is the exact opposite thing of how I’d want the repository to look.</p>
<p>I’ve lived with that, until I learned about the interactive mode of <tt>git add</tt>.</p>
<p>Beginners with git (at least those coming from subversion and friends) always struggle to really <em>get</em> the concept of the index and usually just <tt>git commit -a</tt> when committing changes.</p>
<p>This does exactly the same thing as a svn commit would do: It takes all changes you made to your working copy and commits them to the repository. This also means that the smallest unit of change you can track is the state of the working copy itself.</p>
<p>To do finer grained commits, you can git add a file and commit that, which is the same as <tt>svn status</tt> followed by some grep and awk magic.</p>
<p>But even a file is too large a unit for a commit if you ask me. When you implement feature X, it’s possible if not very probable, that you fix bugs a and b and extend the interface I to make the feature Y work – a feature on which X depends.</p>
<p>Bugfixes, interface changes, subfeatures. A <tt>git commit -a</tt> will mash them all together. A <tt>git add</tt> per file will mash some of them together. Unless you are really, really careful and cleanly do only one thing at a time - but that’s not how reality works.</p>
<p>It may very well be that you discover bug b after having written a good amount of code for feature Y and that both Y and b are in the same file. Now you have to either back out b again, commit Y and reapply b or you just commit Y and b in one go, making it very hard to later just merge b into a maintenance branch because you’d also get Y which you would not want to.</p>
<p>But backing out already written code to make a commit? This is not a productive workflow. I could never make myself do something like that, let alone my coworkers. Aside of that, it’s yet another cause to create errors.</p>
<p>This is where the git index shines. Git tracks content. The index is a staging area where you store content you wish to later commit to the repository. Content isn’t bound to a file. It’s just content. With the help of the index, you can incrementally collect single changes in different files, assemble them into a complete package and commit that to the repository.</p>
<p>As the index is tracking content and not files, you can add parts of files to it. This solves the problems outlined above.</p>
<p>So once I have completed feature X, and assuming I could do it in one quick go, I run <tt>git add</tt> with the -i argument. Now I see a list of changed files in my working copy. Using the patch command, I can decide, hunk by hunk, whether it should be included in the index or not. Once I’m done, I exit the tool using 7. Then I run git commit<sup><a href="#note-1">1)</a></sup> to commit all the changes I’ve put into the index.</p>
<p>Remember: This is not done per file, but per line in the file. This way I can separate all the changes in my working copy, bug a and b, feature Y and X into single commits and commit them separately.</p>
<p>With a clean history like that, I can consequently merge the feature branch without --squash, thus keeping the history when dcommitting to subversion, finally producing something that can easily be merged around and tracked.</p>
<p>This is yet another feature of git that, after you get used to it, makes this VCS shine even more than everything else I’ve seen so far.</p>
<p>Git is fun indeed.</p>
<p style="font-size: x-small"><a name="note-1"></a>1) and not <tt>git commit -a</tt> which would destroy all the fine-grained plucking of lines you just did - trust me: I know. Now.</p>
Epic SSL fail2008-07-03T00:00:00+00:00http://pilif.github.com/2008/07/epic-ssl-fail<p>Today when I tried to use the fancy SSL VPN access a customer provided me with, I came across this epic fail:</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2008/07/sslfail1.png"><img class="aligncenter size-full wp-image-425" title="SSL certificate failure" src="http://www.gnegg.ch/wp-content/uploads/2008/07/sslfail1.png" alt="" width="399" height="180" /></a></p>
<p>Of <em>all</em> the things that can be wrong in an SSL certificate, this one manages to get all of them wrong. The self-signed(1) certificate was issued for the wrong host name(2) and it has expired(3) quite some time ago.</p>
<p>Granted: In this case the issue of trust is more or less limited to the server knowing who I am (I wasn’t intending to transfer any sensitive data), but still - when you self-sign your certificate, the cost of issuing one for the correct host, or one with a very long validity, becomes a non-issue.</p>
<p>Anyways - I had a laugh. And now you can have one too.</p>
What sucks about the Touch Diamond2008-07-02T00:00:00+00:00http://pilif.github.com/2008/07/what-sucks-about-the-touch-diamond<p>Contrary to all thinking and common-sense I’ve displayed in my «<a href="/2008/06/which-phone-for-me/">Which phone for me?</a>»-post, I went and bought the Touch Diamond. The perspective of having a hackable device with high resolution, GPS and voip capability and flawlessly working Exchange-Synchronization finally pushed me over - oh and of course I just like new gadgets to try out.</p>
<p>In my dream world, the Touch would even replace my iPod Touch as a video player and bathtub browser, so I could go back to my old Nano for podcasts.</p>
<p>Unfortunately, the Touch is not much more than any other Windows Mobile phone with all the suckage and half-working features they usually come with. Here’s the list:</p>
<ul>
<li>VoIP is a no-go. The firmware of the Touch is crippled and does not provide the Windows Mobile 6+ SIP support, Skype doesn't run on Windows Mobile 6.1, but none of that matters anyway because none of the VoIP solutions actually use the speakerphone. You can only get VoIP sound out of the amplified speaker on the back of the phone - or you use a headset, at which point the thing isn't better than any other VoIP solution at my disposal.</li>
<li>GPS is a no go as the Diamond takes *ages* to find a signal and it's really fiddly to get it to work - even just in the integrated Google maps application.</li>
<li>Typing anything is really hard despite HTC really trying. Whichever input method you chose, you lose: The Windows Mobile native solutions only work with the pen and the HTC keypads are too large for the applications to remain really usable. Writing SMSes takes me so much longer than every other smart phone I've tried before.</li>
<li>T9 is a nice idea, but here and then, you need to enter some special chars. Like dots. Too bad that they are hidden behind another menu - especially the dot.</li>
<li>This TouchFLO 3D-thingie sounds nice on the web and in all the demonstrations, but it sucks anyway, mainly because it's slow as hell. The iPhone interface doesn't just look good, it's also responsive, which is where HTC fails. Writing an SMS message takes *minutes* when you combine the embarrassingly slow loading time of the SMS app with the incredibly fiddly text input system.</li>
<li>You only get a German T9 with the German version of the Firmware which has probably been translated using Google Translation or Babelfish.</li>
<li>The worst idea ever from a consumer perspective was that stupid ExtUSB connector. Aside of the fact that you'd practically have to buy an extra cable to sync from home and the office, you also need another extra cable if you want to plug in decent headphones. The ones coming with the device are unusable and it's impossible to plug better ones. Also, the needed adapter cable is currently not available to buy anywhere I looked.</li>
<li>The screen, while having a nice DPI count, is too small to be usable for serious web browsing. Why does Windows Mobile have to paint everything four times as large when there are four times as many pixels available?</li>
<li>Finger gestures just don't work on a touch sensitive display, no matter how much they try. At least they don't work once you are used to the responsiveness and accuracy of an iPhone (or iPod touch).</li>
<li>The built-in Opera browser, while looking nice and providing a much better page zoom feature than the iPod Touch, is also unusable because it's much too slow.</li>
</ul>
<p>So instead of having a possible iPhone killer in my pocket, I have a phone that provides essentially zero additional usable functionality over my previous W880i and yet is much slower, crashier, larger and heavier than the old solution.</p>
<p>Here’s the old feature comparison table, listing the features I thought the Touch would have as opposed to the features it actually has:</p>
<table id="mobtable" border="0" cellspacing="0" cellpadding="0">
<thead>
<tr>
<td></td>
<th>assumed</th>
<th>actually</th>
</tr>
</thead>
<tbody>
<tr class="devider">
<th colspan="4">Phone usage</th>
</tr>
<tr>
<th>Quick dialing of arbitrary numbers</th>
<td></td>
<td>(the phone application takes around 20 seconds to load, the buttons are totally unresponsive)</td>
</tr>
<tr>
<th>Acceptable battery life (more than two days)</th>
<td>?</td>
<td>yes. Actually yes. 4 days is not bad.</td>
</tr>
<tr>
<th>usable as modem</th>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<th>usable while not looking at the device</th>
<td>limited</td>
<td>not at all, mainly because of the lagginess of the interface</td>
</tr>
<tr>
<th>quick writing of SMS messages</th>
<td></td>
<td>it's much, much worse than anticipated.</td>
</tr>
<tr>
<th>Sending and receiving of MMS messages</th>
<td>yes</td>
<td>not really. Sending pictures is annoying as hell and everything is terribly slow.</td>
</tr>
<tr class="devider">
<th colspan="4">PIM usage</th>
</tr>
<tr>
<th>synchronizes with google calendar/contacts</th>
<td></td>
<td></td>
</tr>
<tr>
<th>synchronizes with Outlook</th>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<th>usable calendar</th>
<td>yes</td>
<td>very, very slow</td>
</tr>
<tr>
<th>usable todo list</th>
<td>yes</td>
<td>slow</td>
</tr>
<tr class="devider">
<th colspan="4">media player usage</th>
</tr>
<tr>
<th>integrates into current iTunes based podcast workflow</th>
<td></td>
<td></td>
</tr>
<tr>
<th>straight forward audio playing interface</th>
<td></td>
<td></td>
</tr>
<tr>
<th>straight forward video playing interface</th>
<td></td>
<td></td>
</tr>
<tr>
<th>acceptable video player</th>
<td>yes</td>
<td>no. No sound due to no way to plug my own headphones.</td>
</tr>
<tr class="devider">
<th colspan="4">hackability</th>
</tr>
<tr>
<th>ssh client</th>
<td>yes</td>
<td>not really. PuTTY doesn't quite work right on VGA WinMob 6.1</td>
</tr>
<tr>
<th>skype client</th>
<td>yes</td>
<td>no. a) it doesn't work and b) it would require headset usage, as Skype is unable to use the speakerphone.</td>
</tr>
<tr>
<th>OperaMini (browser usable on GSM)</th>
<td>yes</td>
<td>limited. No softkeys and touch-buttons too small to reliably hit.</td>
</tr>
<tr>
<th>WLAN-Browser</th>
<td>yes</td>
<td>no. Too slow, Screen real estate too limited.</td>
</tr>
</tbody></table>
<p>Now tell me how this could be called progress.</p>
<p>I’m giving this thing until the end of the week. Maybe I get used to its deficiencies in the matter of interface speed. If not, it’s gone. As is the prospect of me buying any other Windows Mobile phone. Ever.</p>
<p>Sorry for the rant, but it had to be.</p>
Mozilla Weave 0.22008-07-01T00:00:00+00:00http://pilif.github.com/2008/07/mozilla-weave-02<p>I have quite a few computers I use regularly, on all of which runs <a href="http://www.mozilla.com">Firefox</a>. Of course I’ve accumulated quite a lot of bookmarks, passwords and “keep me logged in”-cookies.</p>
<p>During my use of FF2, I came across Google Browser Sync, which was incredibly useful, albeit a bit unstable now and then, so last Christmas I was very happy to see the prototype of <a href="http://services.mozilla.com">Mozilla Weave</a> released. It promised the same feature set as Google Browser Sync, but built by the makers of the browser on an open architecture.</p>
<p>I have been a user of Weave ever since, and its availability was even more inconsistent than what Google Browser Sync ever provided. But at least it was only the server that failed; the client was never affected, whereas GBS now and then made me lose parts or all of my bookmarks.</p>
<p>Over time though, Weave got better and better, and with today’s 0.2 release the installation and setup process actually got streamlined enough that I can recommend the tool to anybody using more than one PC.</p>
<p>Especially with the improved bookmarking functionality we got in Firefox 3, synchronizing bookmarks has become really important. I’m very happy to see a solution for this problem and I’m overjoyed that the solution is as open as Weave is.</p>
<p>Congratulations, Mozilla Team!</p>
Converting ogg streams into mp3 ones2008-06-26T00:00:00+00:00http://pilif.github.com/2008/06/converting-ogg-streams-into-mp3-ones<p>This is just an announcement of my newest quick hack, which can be used to convert streams on the fly from web radios using the Ogg/Vorbis format into the MP3 format, which is more widely supported by the various devices out there.</p>
<p>I have created a <a href="/ogg2mp3">dedicated page</a> for the project for those who are interested.</p>
<p>Also, I really got to like <a href="http://github.com">github.com</a>, not as the commercial service they intend to be (I’ve <a href="/2008/04/hosted-code-repository/">already written</a> about the stupidity of hosting your company’s trade secrets at a company in a foreign country with foreign legislation), but as a place to quickly and easily dump some code you want to be publicly available without going through all the hassle otherwise associated with public project hosting.</p>
<p>This is why this little script is hosted there and not here. As I’m using git, even if github goes away, I still have the full repository around to either self-host or let someone else host for me, which is a crucial requirement for me to outsource anything.</p>
Simplest possible RPCs in PHP2008-06-25T00:00:00+00:00http://pilif.github.com/2008/06/simplest-possible-rpcs-in-php<p>After spending hours trying to find out why a particular combination of SoapClient in PHP itself and SOAP::Server from PEAR didn’t consistently work together (sometimes, arrays passed around lost an arbitrary number of elements), I thought about what would be needed to make RPCs work from a PHP client to a PHP server.</p>
<p>I wanted nothing fancy and I certainly wanted as little overhead as humanly possible.</p>
<p>This is what I came up with for the server:</p>
<figure class="highlight"><pre><code class="language-php" data-lang="php"><span class="cp"><?php</span>
<span class="nb">header</span><span class="p">(</span><span class="s1">'Content-Type: text/plain'</span><span class="p">);</span>
<span class="k">require_once</span><span class="p">(</span><span class="s1">'a/file/containing/a/class/you/want/to/expose.php'</span><span class="p">);</span>
<span class="nv">$method</span> <span class="o">=</span> <span class="nb">str_replace</span><span class="p">(</span><span class="s1">'/'</span><span class="p">,</span> <span class="s1">''</span><span class="p">,</span> <span class="nv">$_SERVER</span><span class="p">[</span><span class="s1">'PATH_INFO'</span><span class="p">]);</span>
<span class="k">if</span> <span class="p">(</span><span class="nv">$_SERVER</span><span class="p">[</span><span class="s1">'REQUEST_METHOD'</span><span class="p">]</span> <span class="o">!=</span> <span class="s1">'POST'</span><span class="p">){</span>
<span class="nx">sendResponse</span><span class="p">(</span><span class="k">array</span><span class="p">(</span><span class="s1">'state'</span> <span class="o">=></span> <span class="s1">'error'</span><span class="p">,</span> <span class="s1">'cause'</span> <span class="o">=></span> <span class="s1">'unsupported HTTP method'</span><span class="p">));</span>
<span class="p">}</span>
<span class="nv">$s</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">MyServerObject</span><span class="p">();</span>
<span class="nv">$params</span> <span class="o">=</span> <span class="nb">unserialize</span><span class="p">(</span><span class="nb">file_get_contents</span><span class="p">(</span><span class="s1">'php://input'</span><span class="p">));</span>
<span class="k">if</span> <span class="p">(</span> <span class="p">(</span><span class="nv">$res</span> <span class="o">=</span> <span class="nb">call_user_func_array</span><span class="p">(</span><span class="k">array</span><span class="p">(</span><span class="nv">$s</span><span class="p">,</span> <span class="nv">$method</span><span class="p">),</span> <span class="nv">$params</span><span class="p">))</span> <span class="o">===</span> <span class="kc">false</span><span class="p">)</span>
<span class="nx">sendResponse</span><span class="p">(</span><span class="k">array</span><span class="p">(</span><span class="s1">'state'</span> <span class="o">=></span> <span class="s1">'error'</span><span class="p">,</span> <span class="s1">'cause'</span> <span class="o">=></span> <span class="s1">'RPC failed'</span><span class="p">));</span>
<span class="k">if</span> <span class="p">(</span><span class="nb">is_object</span><span class="p">(</span><span class="nv">$res</span><span class="p">))</span>
<span class="nv">$res</span> <span class="o">=</span> <span class="nb">get_object_vars</span><span class="p">(</span><span class="nv">$res</span><span class="p">);</span>
<span class="nx">sendResponse</span><span class="p">(</span><span class="nv">$res</span><span class="p">);</span>
<span class="k">function</span> <span class="nf">sendResponse</span><span class="p">(</span><span class="nv">$resobj</span><span class="p">){</span>
<span class="k">echo</span> <span class="nb">serialize</span><span class="p">(</span><span class="nv">$resobj</span><span class="p">);</span>
<span class="k">exit</span><span class="p">;</span>
<span class="p">}</span>
<span class="cp">?></span></code></pre></figure>
<p>The client, as shown below, is a bit more complex, mainly because it contains some HTTP protocol logic: logic which could probably be reduced to 2-3 lines of code if I used the cURL library, but the client in this case does not have the luxury of access to such functionality.</p>
<p>Also, I already had the function lying around (/me winks at domi), so that’s what I used (as opposed to file_get_contents with a pre-prepared stream context). This way, we DO have the advantage of learning a bit about how HTTP works, and we are totally self-contained.</p>
<figure class="highlight"><pre><code class="language-php" data-lang="php"><span class="cp"><?php</span>
<span class="k">class</span> <span class="nc">Client</span><span class="p">{</span>
<span class="k">function</span> <span class="nf">__call</span><span class="p">(</span><span class="nv">$name</span><span class="p">,</span> <span class="nv">$args</span><span class="p">){</span>
<span class="nv">$req</span> <span class="o">=</span> <span class="nv">$this</span><span class="o">-></span><span class="nx">openHTTPRequest</span><span class="p">(</span><span class="s1">'http://localhost:5436/restapi.php/'</span><span class="o">.</span><span class="nv">$name</span><span class="p">,</span> <span class="s1">'POST'</span><span class="p">,</span> <span class="k">array</span><span class="p">(</span><span class="s1">'Content-Type'</span> <span class="o">=></span> <span class="s1">'text/plain'</span><span class="p">),</span> <span class="nb">serialize</span><span class="p">(</span><span class="nv">$args</span><span class="p">));</span>
<span class="nv">$data</span> <span class="o">=</span> <span class="nb">unserialize</span><span class="p">(</span><span class="nb">stream_get_contents</span><span class="p">(</span><span class="nv">$req</span><span class="p">[</span><span class="s1">'handle'</span><span class="p">]));</span>
<span class="nb">fclose</span><span class="p">(</span><span class="nv">$req</span><span class="p">[</span><span class="s1">'handle'</span><span class="p">]);</span>
<span class="k">return</span> <span class="nv">$data</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">private</span> <span class="k">function</span> <span class="nf">openHTTPRequest</span><span class="p">(</span><span class="nv">$url</span><span class="p">,</span> <span class="nv">$method</span> <span class="o">=</span> <span class="s1">'GET'</span><span class="p">,</span> <span class="nv">$additional_headers</span> <span class="o">=</span> <span class="kc">null</span><span class="p">,</span> <span class="nv">$data</span> <span class="o">=</span> <span class="kc">null</span><span class="p">){</span>
<span class="nv">$parts</span> <span class="o">=</span> <span class="nb">parse_url</span><span class="p">(</span><span class="nv">$url</span><span class="p">);</span>
<span class="nv">$fp</span> <span class="o">=</span> <span class="nb">fsockopen</span><span class="p">(</span><span class="nv">$parts</span><span class="p">[</span><span class="s1">'host'</span><span class="p">],</span> <span class="nv">$parts</span><span class="p">[</span><span class="s1">'port'</span><span class="p">]</span> <span class="o">?</span> <span class="nv">$parts</span><span class="p">[</span><span class="s1">'port'</span><span class="p">]</span> <span class="o">:</span> <span class="mi">80</span><span class="p">);</span>
<span class="nb">fprintf</span><span class="p">(</span><span class="nv">$fp</span><span class="p">,</span> <span class="s2">"%s %s HTTP/1.1</span><span class="se">\r\n</span><span class="s2">"</span><span class="p">,</span> <span class="nv">$method</span><span class="p">,</span> <span class="nb">implode</span><span class="p">(</span><span class="s1">'?'</span><span class="p">,</span> <span class="k">array</span><span class="p">(</span><span class="nv">$parts</span><span class="p">[</span><span class="s1">'path'</span><span class="p">],</span> <span class="nv">$parts</span><span class="p">[</span><span class="s1">'query'</span><span class="p">])));</span>
<span class="nb">fputs</span><span class="p">(</span><span class="nv">$fp</span><span class="p">,</span> <span class="s2">"Host: "</span><span class="o">.</span><span class="nv">$parts</span><span class="p">[</span><span class="s1">'host'</span><span class="p">]</span><span class="o">.</span><span class="s2">"</span><span class="se">\r\n</span><span class="s2">"</span><span class="p">);</span>
<span class="k">if</span> <span class="p">(</span><span class="nv">$data</span><span class="p">){</span>
<span class="nb">fputs</span><span class="p">(</span><span class="nv">$fp</span><span class="p">,</span> <span class="s1">'Content-Length: '</span><span class="o">.</span><span class="nb">strlen</span><span class="p">(</span><span class="nv">$data</span><span class="p">)</span><span class="o">.</span><span class="s2">"</span><span class="se">\r\n</span><span class="s2">"</span><span class="p">);</span>
<span class="p">}</span>
<span class="k">if</span> <span class="p">(</span><span class="nb">is_array</span><span class="p">(</span><span class="nv">$additional_headers</span><span class="p">)){</span>
<span class="k">foreach</span><span class="p">(</span><span class="nv">$additional_headers</span> <span class="k">as</span> <span class="nv">$name</span> <span class="o">=></span> <span class="nv">$value</span><span class="p">){</span>
<span class="nb">fprintf</span><span class="p">(</span><span class="nv">$fp</span><span class="p">,</span> <span class="s2">"%s: %s</span><span class="se">\r\n</span><span class="s2">"</span><span class="p">,</span> <span class="nv">$name</span><span class="p">,</span> <span class="nv">$value</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="nb">fputs</span><span class="p">(</span><span class="nv">$fp</span><span class="p">,</span> <span class="s2">"Connection: close</span><span class="se">\r\n\r\n</span><span class="s2">"</span><span class="p">);</span>
<span class="k">if</span> <span class="p">(</span><span class="nv">$data</span><span class="p">)</span>
<span class="nb">fputs</span><span class="p">(</span><span class="nv">$fp</span><span class="p">,</span> <span class="s2">"</span><span class="nv">$data</span><span class="se">\r\n</span><span class="s2">"</span><span class="p">);</span>
<span class="c1">// read away header
</span> <span class="nv">$header</span> <span class="o">=</span> <span class="k">array</span><span class="p">();</span>
<span class="nv">$response</span> <span class="o">=</span> <span class="s2">""</span><span class="p">;</span>
<span class="k">while</span><span class="p">(</span><span class="o">!</span><span class="nb">feof</span><span class="p">(</span><span class="nv">$fp</span><span class="p">))</span> <span class="p">{</span>
<span class="nv">$line</span> <span class="o">=</span> <span class="nb">trim</span><span class="p">(</span><span class="nb">fgets</span><span class="p">(</span><span class="nv">$fp</span><span class="p">,</span> <span class="mi">1024</span><span class="p">));</span>
<span class="k">if</span> <span class="p">(</span><span class="k">empty</span><span class="p">(</span><span class="nv">$response</span><span class="p">)){</span>
<span class="nv">$response</span> <span class="o">=</span> <span class="nv">$line</span><span class="p">;</span>
<span class="k">continue</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">if</span> <span class="p">(</span><span class="k">empty</span><span class="p">(</span><span class="nv">$line</span><span class="p">)){</span>
<span class="k">break</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">list</span><span class="p">(</span><span class="nv">$name</span><span class="p">,</span> <span class="nv">$value</span><span class="p">)</span> <span class="o">=</span> <span class="nb">explode</span><span class="p">(</span><span class="s1">':'</span><span class="p">,</span> <span class="nv">$line</span><span class="p">,</span> <span class="mi">2</span><span class="p">);</span>
<span class="nv">$header</span><span class="p">[</span><span class="nb">strtolower</span><span class="p">(</span><span class="nb">trim</span><span class="p">(</span><span class="nv">$name</span><span class="p">))]</span> <span class="o">=</span> <span class="nb">trim</span><span class="p">(</span><span class="nv">$value</span><span class="p">);</span>
<span class="p">}</span>
<span class="k">return</span> <span class="k">array</span><span class="p">(</span><span class="s1">'response'</span> <span class="o">=></span> <span class="nv">$response</span><span class="p">,</span> <span class="s1">'header'</span> <span class="o">=></span> <span class="nv">$header</span><span class="p">,</span> <span class="s1">'handle'</span> <span class="o">=></span> <span class="nv">$fp</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="nv">$client</span> <span class="o">=</span> <span class="k">new</span> <span class="nx">Client</span><span class="p">();</span>
<span class="nv">$result</span> <span class="o">=</span> <span class="nv">$client</span><span class="o">-></span><span class="na">someMethod</span><span class="p">(</span><span class="k">array</span><span class="p">(</span><span class="s1">'data'</span> <span class="o">=></span> <span class="s1">'even arrays work'</span><span class="p">));</span>
<span class="cp">?></span></code></pre></figure>
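<p>For comparison, here is roughly what the cURL-based variant mentioned earlier might look like. This is only a minimal sketch under a few assumptions: the endpoint URL is the same example address used in this post, the function name <code>rpc_call</code> is made up, and all error handling is omitted:</p>

```php
<?php
// Sketch only: speaks the same wire protocol as the hand-rolled client
// (serialize()d arguments POSTed as text/plain), but lets the curl
// extension handle the HTTP plumbing. rpc_call() is a hypothetical name.
function rpc_call($method, array $args) {
    $ch = curl_init('http://localhost:5436/restapi.php/' . $method);
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => serialize($args),
        CURLOPT_HTTPHEADER     => array('Content-Type: text/plain'),
        CURLOPT_RETURNTRANSFER => true, // hand back the body instead of echoing it
    ));
    $body = curl_exec($ch);
    curl_close($ch);
    return unserialize($body);
}
```

<p>The trade-off is the dependency on the curl extension, which, as noted, the client in question doesn’t have.</p>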
<p>What you can’t pass around this way is objects (at least objects which are not of type stdClass), as both client and server would need access to the class definition. Also, this seriously lacks error handling. But it generally works much better than what SOAP ever could accomplish.</p>
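<p>To make that limitation concrete: plain arrays and stdClass instances survive a serialize()/unserialize() round trip intact, which is exactly what this transport relies on, while a typed object comes back as __PHP_Incomplete_Class on a peer that doesn’t have the class definition loaded. A quick illustration (example data made up):</p>

```php
<?php
// Arrays round-trip through serialize()/unserialize() unchanged.
$data = array('name' => 'example', 'values' => array(1, 2, 3));
var_dump(unserialize(serialize($data)) === $data); // bool(true)

// stdClass also survives, since no class definition is required.
$obj = new stdClass();
$obj->answer = 42;
$back = unserialize(serialize($obj));
var_dump($back->answer); // int(42)
```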
<p>Naturally, I give up stuff when compared to SOAP or any «real» RPC solution:</p>
<ul>
<li>This one works only with PHP</li>
<li>It has limitations on what data structures can be passed around, though that's alleviated by PHP's incredibly strong array support.</li>
<li>It relies heavily on PHP's loosely typed nature and thus probably isn't as robust.</li>
</ul>
<p>Still, protocols like SOAP (or even <strong>any </strong>protocol with either «simple» or «lightweight» in its name) tend to be so complicated that it’s incredibly hard, if not impossible, to create different implementations that still work together correctly in all cases.</p>
<p>In my case, I had to separate two pieces of the same application because of unstable third-party libraries which I did not want linked into every PHP instance running on that server. For that, the solution outlined above (plus some error handling code) works better than SOAP on so many levels:</p>
<ul>
<li>it's easily debuggable. No need for wireshark or comparable tools</li>
<li>client and server are written by me, so they are under my full control</li>
<li>it works all the time</li>
<li>it relies on as little functionality of PHP as possible, and the functionality it depends on is widely used and tested, so I can assume that it's reasonably bug-free (aside from my own bugs).</li>
<li>it's a whole lot faster than SOAP, though this does not matter at all in this case.</li>
</ul>
Which phone for me?2008-06-17T00:00:00+00:00http://pilif.github.com/2008/06/which-phone-for-me<p><a href="http://www.gnegg.ch/wp-content/uploads/2008/06/whichphone.png"><img class="aligncenter size-full wp-image-416" title="whichphone" src="http://www.gnegg.ch/wp-content/uploads/2008/06/whichphone.png" alt="" width="300" height="160" /></a></p>
<p>I’m quite a happy user of my Sony Ericsson W880i / iPod Touch combo: The Touch is for listening to podcasts and watching video; the W880i is for SMSing and making a phone call now and then, though it’s mostly for getting called these days. Skype exists and works well.</p>
<p>Now with all the new <span style="text-decoration: line-through;">toys</span>interesting devices coming out all over the place, maybe it’s time to reevaluate the different options. 3G iPhone? Something Windows Mobile based (though the touch diamond seems to be the way to go)? My old phone? Or a combination of any of them?</p>
<p>I tried to make a tabular comparison, listing the phones by use case. And I’m only listing features interesting to me. Your priorities may differ from the ones presented here. This is, after all, the guide I used to pick a solution.</p>
<table id="mobtable" border="0" cellspacing="0" cellpadding="0">
<thead>
<tr>
<td></td>
<td>iPhone</td>
<td>Touch Diamond</td>
<td>W880i</td>
</tr>
</thead>
<tbody>
<tr class="devider">
<th colspan="4">Phone usage</th>
</tr>
<tr>
<th>Quick dialing of arbitrary numbers</th>
<td></td>
<td></td>
<td>yes</td>
</tr>
<tr>
<th>Acceptable battery life (more than two days)</th>
<td>?</td>
<td>?</td>
<td>yes</td>
</tr>
<tr>
<th>usable as modem</th>
<td>probably not</td>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<th>usable while not looking at the device</th>
<td></td>
<td>limited</td>
<td>yes</td>
</tr>
<tr>
<th>quick writing of SMS messages</th>
<td></td>
<td></td>
<td>yes</td>
</tr>
<tr>
<th>Sending and receiving of MMS messages<sup>1</sup></th>
<td></td>
<td>yes</td>
<td>yes</td>
</tr>
<tr class="devider">
<th colspan="4">PIM usage</th>
</tr>
<tr>
<th>synchronizes with google calendar/contacts<sup>2</sup></th>
<td>maybe</td>
<td></td>
<td>yes. Contacts limited</td>
</tr>
<tr>
<th>synchronizes with Outlook</th>
<td>maybe</td>
<td>yes</td>
<td>not reliably</td>
</tr>
<tr>
<th>usable calendar</th>
<td>yes</td>
<td>yes</td>
<td></td>
</tr>
<tr>
<th>usable todo list</th>
<td></td>
<td>yes</td>
<td></td>
</tr>
<tr class="devider">
<th colspan="4">media player usage</th>
</tr>
<tr>
<th>integrates into current iTunes based podcast workflow<sup>3</sup></th>
<td>yes</td>
<td></td>
<td></td>
</tr>
<tr>
<th>straight forward audio playing interface</th>
<td>yes</td>
<td></td>
<td></td>
</tr>
<tr>
<th>straight forward video playing interface<sup>4</sup></th>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<th>acceptable video player<sup>5</sup></th>
<td>limited</td>
<td>yes</td>
<td></td>
</tr>
<tr class="devider">
<th colspan="4">hackability</th>
</tr>
<tr>
<th>ssh client</th>
<td>maybe</td>
<td>yes</td>
<td></td>
</tr>
<tr>
<th>skype client<sup>6</sup></th>
<td>maybe</td>
<td>yes</td>
<td></td>
</tr>
<tr>
<th>OperaMini (browser usable on GSM)</th>
<td></td>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<th>WLAN-Browser</th>
<td>yes</td>
<td>yes</td>
<td></td>
</tr>
</tbody></table>
<p><strong>Notes:</strong></p>
<ol>
<li>While I'm not using it often, here and then I come across something funny which I want to share with my parents or my girlfriend. MMS is the optimal medium for that. I send about one MMS per two months and I receive around 2 MMS per month, so this is probably not as important.</li>
<li>Using services like <a href="http://www.goosync.com">GooSync</a> it is possible to synchronize the W880i with the Google services, though Google's Contact API currently isn't in a state where it would be useful for actually synchronizing contacts with the phone - mainly because it provides no option to synchronize only certain matching contacts.</li>
<li>iTunes not only downloads Podcasts but also keeps track of playback position and the new/not new state across devices and computers. I'm subscribed to more than 20 podcasts, so such features are essential for me.</li>
<li>Neither the iPhone nor the WinMob devices provide a user experience for playing video that even comes close to matching the one the iPhone provides for audio files.</li>
<li> The video player on the iPhone is limited to MP4-packaged H.264 files, whereas there are media players for WinMob that can handle whatever you throw at them.</li>
<li>Skype is available as a JavaME application, but in addition to the (horrendous) GPRS charges, Skype also charges you, whether you make or receive calls. This is why I listed Skype support as missing on the W880i.</li>
</ol>
<p>What’s missing in the comparison table is one of the upcoming large Windows Mobile devices with built-in keyboards, like the Sony Ericsson XPERIA or the Touch Diamond Pro. This class of devices does provide more convenient typing, but their usability still doesn’t even come close to matching a pure phone’s. You’d still have to browse through menus, search for special keys (like umlauts) and so on. It’s just that typing has become a bit easier.</p>
<p>These little usability benefits do not even come close to offsetting the weight and especially the thickness of these devices, which is why I’m not listing them in the table above.</p>
<p>But let’s discuss the tables content for now:</p>
<p>First the obvious: The best phone in the list is… well… the phone. Neither of the two smart phones is capable of bringing a pure phone user experience that comes even close to what a real phone with a real keyboard can provide.</p>
<p>In case you wonder: I’m a heavy user of T9. Typing with a 10-key keypad assisted by T9 feels completely natural to me and the W880i provides really nice T9 functionality with quick access to suggestions and other shortcuts, so I’m actually inclined to say that I’m quicker to type on that phone than I would even be with one of the larger keyboard-based smart phones, mainly due to shorter distances to travel with the finger(s). With my ~100 SMS per month, I consider myself to be a heavy user of SMS, so quick and easy SMS writing and reception is a key feature for me.</p>
<p>Aside from that, the phone is more or less just that: a phone. It doesn’t really shine in any other aspect. Music kind of works, but is unusable for podcasts because the media application doesn’t save the playback position between launches, let alone synchronize it across devices.</p>
<p>Video, applications and even just browsing beyond the means of what OperaMini can provide are out of the question.</p>
<p>As such, the W880i basically is like grep. Or sort. Or uniq. Or like any other of these little UNIX utilities: It does one thing and it does it well.</p>
<p>The WinMob phones don’t provide much better media support (they do play video, but for podcasts they are still not as good as iTunes), but they shine in the realm of hackability and, of course, PIM synchronization, though there they more or less only work with Exchange. Also, the larger screen provides a lot more possibilities UI-wise.</p>
<p>So while the W880i is the better phone, the WinMob devices are the better PIM solution and the better platform to hack on, which appeals to the geek in me quite a bit more - obviously.</p>
<p>The iPhone is limited in its capabilities as a phone, provides next to no hackability and will probably come with some enforced phone contract here in Switzerland. It does shine in the media department though, but that part is also perfectly well handled by my current iPod Touch to which I can easily (at the cost of $10) add the limited hackability the iPhone is going to get - should I need it.</p>
<p>Looking at this, the iPhone certainly looks like an uninteresting solution: everything it would provide I currently have in the Touch, aside from the phone part, for which I currently have a better solution anyway.</p>
<p>Replacing the W880i/touch combo with either an iPhone or a WinMob solution seems like a stupid thing to do as I’d lose the good usability of the phone and/or the nice Media capabilities of the touch.</p>
<p>So in the end, I have only a couple of options which would work for me:</p>
<ul>
<li>Replace my W880i/touch combo with a W880i/iPhone combo and use the iPhone as an always-connected surf station with limited hackability. This, frankly, is just too expensive to be of any value, as it would mean getting a second mobile contract just for surfing now and then, while <strong>still</strong> forcing me to keep the data option for my W880i, because the iPhone is not usable as a modem in case I need to emergency-repair a server or something.</li>
<li>Replace the W880i in my combo with the Touch Diamond: With every earlier model of WinMob device, this would have been completely undoable due to the thickness of the devices. The Diamond is not much thicker than the W880i, so the Diamond and the iPod Touch would still fit the same pocket in my trousers. I would lose the kick-ass usability of the W880i, but I would gain a real in-bed media player (without transcoding), an emergency SSH client and completely working PIM synchronization.</li>
<li>Keep my solution as it currently is, while keeping in mind that ever since I got the Touch, it has provided all the features I would ever need: a kick-ass phone, an acceptable video player, a kick-ass music player and two browsers - one for each type of usage: OperaMini when I'm forced to use slow GSM and Safari on the Touch when I have WLAN (you would not want to use Safari over GSM - I tried).</li>
</ul>
<p>It’s funny: I’m so much in love with technology and gadgets. I’m always on the lookout for new stuff, always trying out new, so-called revolutionary technology. I’ve tried so many phone solutions in my life (just look at this blog), but I finally think that I have found a solution I’m willing to stick with.</p>
<p>The current W880i/Touch combo works so well that I don’t see any other solution that would only provide me with advantages. Each and every other new device comes with inherent drawbacks.</p>
<p>I guess, for once, I pass. I’ll stick around with my outdated solution and I’ll wait for the next revolution. What I currently have just works too well.</p>
They just don't want my money2008-06-11T00:00:00+00:00http://pilif.github.com/2008/06/they-just-dont-want-my-money<p><a href="http://en.wikipedia.org/wiki/Mass_Effect">Mass Effect</a> is a wonderful game. Its story is one of the most interesting I’ve ever witnessed in a game. The atmosphere it brings over is very deep and impressive. It’s science fiction. It contains aliens and explosions, so it’s perfect for my taste.</p>
<p>Also, I like the role playing elements which contain just enough stats to make the leveling process interesting while not being overly complicated.</p>
<p>I bought Mass Effect for the XBox 360 back in December and played through it once, while being annoyed that I had to buy it in the (albeit very good) German version (it’s practically impossible to get English originals here in Switzerland) and annoyed about the awful, awful equipment and inventory handling that made it impossible to really know how you should equip your characters (in fact, I went through half of the game in the starting equipment because I didn’t understand how to actually put the items on).</p>
<p>So despite the immense replayability value of the game, I left it at that one runthrough. But I bought the Mass Effect book telling the story leading up to the events of the game.</p>
<p>And now, the game was re-released for the PC. Considering the fact that I actually bought the Mac Pro I’m currently using with PC-gaming in mind, I pondered the idea of buying the game again for the PC. In English and with the fixed inventory screen (they actually fixed that in the PC version. Yes. So it wasn’t just my stupidity).</p>
<p>This may sound crazy, but as I said, the game provides incredible value to replay it: Different decisions, different choice of squad members, even choosing different classes to begin with (though I would never even have tried to play a caster in the 360 version - the interface was just too painful for that) - everything has influence on elements of the story. Playing through Mass Effect only once is clearly a waste of a very good game.</p>
<p>With a 25 MBit connection to the internet, I thought that buying the game online would be a reasonable thing to do. So here’s what I’ve tried:</p>
<ol>
<li>Buy the game via Direct2Drive. All seemed to go well and it even asked me for my credit card info. But then, on the final step, it told me that my cart was empty. And a little footnote informed me that Mass Effect has been removed from the cart due to country restrictions. Thanks for telling me in advance!</li>
<li>On the web page of the publisher, there's a link to the EA store to buy the game online. Whatever I tried, I could only get the shop to actually provide me with the US version of the game which it refused to "ship" (hello? This is a digital download) to Switzerland - despite me trying on June 8th, two days after the official launch in Europe.</li>
<li>I tried to trick the EA store into selling me the game nonetheless by using PayPal to pay for it, giving a fake US "shipping" address. No dice though, as PayPal refused to bill my account due to the "shipping" address being different from the address I had entered in PayPal.</li>
<li>Sure that electronic download would not work, I went to the local game store I usually get my games from. Unfortunately, they didn't have the English version of the game and won't be getting it.</li>
</ol>
<p>In a world where digital goods can hop from one corner to the next in milliseconds, and where everyone is complaining about rampant piracy, it is impressive how hard it actually is to get digital goods in a timely and legal manner.</p>
<p>Here’s what I did in the end: First I began downloading the pirated version of the game and while that download was running, I went and bought the German version of the game. When I got back from the store, the download of the English version was finished. I’ve installed it, provided it with the serial number of my German original and then played it, using the German DVD as proof of purchase.</p>
<p>Why does it have to be so hard to actually buy a game these days?</p>
A look at my IRC configuration2008-06-10T00:00:00+00:00http://pilif.github.com/2008/06/a-look-at-my-irc-configuration<p>It has been a while since I’ve <a href="/2005/01/irc-clients/">last talked about my irc setup</a>. Last time was when I initially started to get my feet wet in IRC. In the three years since (three years already? How time passes!), I have been active in IRC to various degrees (with some large wow-related holes, but that’s <a href="/2006/09/correlation-between-gneggch-and-wow/">a different story</a>).</p>
<p>Currently, my rate of IRC usage is back to quite high levels, mostly due to #git, which is a place where I actually feel competent enough to help out now and then. Also, #nesvideos is as fun to be in as ever.</p>
<p>What has changed since 2005 is my setup:</p>
<ul>
<li>Whereas back then, I was mainly working on Windows, I have by now <a href="/2006/04/praise-to-vlc/">switched over to OSX</a>, so the question of clients is raised again, but see more on that later on.</li>
<li>My IRC proxy changed from ctrlproxy to <a href="http://znc.sourceforge.net">ZNC</a>. It's easier to configure and I've had much better success with it than with any other bouncer I tried. It's the only bouncer I know so far that does the thing I need bouncers for out of the box: log the channel while I'm away and replay the log when I reconnect.</li>
<li>The client has radically changed. While I stayed true to X-Chat after my switch to OSX (using <a href="http://sourceforge.net/projects/xchataqua/">X-Chat Aqua</a>), recently I have tried other clients such as Colloquy (no light-on-dark theme with colored nicks) and Linkinus (not offering any features making up for its commercial nature), but honestly, <a href="http://www.mibbit.com">Mibbit</a> works best. And it works when I'm on my girlfriend's computer. And it even (somewhat) works on my iPod Touch. And it looks good. And it provides all the features I need. I'm of course still connecting to my bouncer, but that's easily configured once you create an account there.</li>
</ul>
<p>IRC is still a very techie world, but it’s such a nice way to spend time.</p>
First mail, then office, now IRC. What's next?2008-06-04T00:00:00+00:00http://pilif.github.com/2008/06/first-mail-then-office-now-irc-whats-next<p>I know that I may be really late with this, but I recently came across <a href="http://www.mibbit.com/">Mibbit</a>, a web based IRC client. This is another instance of the recent rush of applications being transported over to the web platform.</p>
<p>In the early days, there were webbased email services. Like Hotmail (or the third CGI script I’ve ever written - the firewall/proxy in my school only supported traffic on port 80 and I didn’t know about tunnels, nor did I have the infrastructure to create a fitting one).</p>
<p>Then came office applications like Google’s offering. And of course games. Many games.</p>
<p>Of course there were webbased chats in the earlier days. But they either required a plugin like Java or Flash, or they worked by constantly reloading the page the chat appeared on. Neither of these solutions provided what I’d call a full IRC client. And many of the better solutions required a plugin to work.</p>
<p>Mibbit is, though. It provides many of the features a not-too-advanced IRC user would want to have. Sure, scripting is (currently) absent, but everything else is here. In a pleasant interface.</p>
<p>What’s interesting is the fact that so many applications can nowadays be perfectly represented on the web. In fact, XHTML/CSS is perfectly suited to present a whole lot of data to the user. Some desktop IRC clients, for example, use HTML for their chat rendering as well.</p>
<p>So in the case of IRC clients, both types of applications sooner or later reach the same state: representing chat messages in good-looking HTML while providing a myriad of features to put off everyone but the most interested and tech-savvy user :-)</p>
<p>Still. The trend is an interesting thing to note. As more and more applications hop over to the web, we get more and more independent of infrastructure and OSes. Sometime in the future, maybe we’ll have the paradise of just having a browser to access all our data and applications from wherever we are.</p>
<p>No more software installations. No more viruses and spyware. No more software inexplicably ceasing to work. And for the developer: easy deployment of fixes, shorter turnaround times.</p>
<p>Interesting times ahead indeed.</p>
Why is nobody using SSL client certificates?2008-05-26T00:00:00+00:00http://pilif.github.com/2008/05/why-is-nobody-using-ssl-client-certificates<p>Did you know that ever since the days of Netscape Navigator 3.0, there is a technology that allows you to</p>
<ul>
<li>securely sign on without using passwords</li>
<li>allow for non-annoying two-factor authentication</li>
<li>uniquely identify yourself to third-party websites without giving the second party any account information</li>
</ul>
<p>All of this can be done using SSL client certificates.</p>
<p>You know: Whenever you visit an SSL protected page, what usually happens is that your browser checks the identity of the remote site by checking their certificate. But what also could happen is that the remote site could check <strong>your </strong>identity using a previously issued certificate.</p>
<p>This is called SSL client side certificate.</p>
<p>Sites can make the browser generate a keypair for you. Then they'll sign your public key using their private key and they'll be able to securely identify you from then on.</p>
<p>The certificate is stored in the browser itself and your browser will send it to any (SSL protected) site requesting it. The site in turn could then identify you as the owner of the private key associated to the presented certificate (provided the key wasn't generated on a <a href="http://lists.debian.org/debian-security-announce/2008/msg00152.html">pre-patch</a> Debian installation *sigh*).</p>
<p>The keypair is bound to the machine it was generated on, though it can be exported and re-imported on a different machine.</p>
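<p>To make the issuing flow above concrete, here is a minimal sketch using plain <code>openssl</code> commands. This is an illustration only, not what browsers do internally (the browser generates and stores the key itself, and the private key never travels to the site); all file names and subject names here are made up:</p>

```shell
# The site acts as a small CA: a key and a self-signed CA certificate
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -subj "/CN=Example Site CA" -days 365 -out ca.crt

# The client generates a keypair plus a signing request;
# the private key (client.key) stays with the client
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=alice" -out client.csr

# The site signs the client's public key, producing the client certificate
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out client.crt

# From now on, the site can verify any certificate presented to it
openssl verify -CAfile ca.crt client.crt   # prints "client.crt: OK"
```

<p>A certificate that fails this check, or that the site has put on its revocation list, is simply rejected.</p>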
<p>It solves our three introductory problems like this:</p>
<ul>
<li>by presenting the certificate, the origin server can identify you. <em>No need to enter a user name or a password</em>.</li>
<li>By asking for a password (something you know) and comparing the SSL certificate (something you have), you get cheap and <em>easy two factor authentication</em> that's a lot more secure than asking for your mother's maiden name.</li>
<li>If the requesting party in a three-site scenario knows your public key and uses that to request information from the requested party, you <em>can revoke access by this key at any time</em> without any of the parties knowing your username and password.</li>
</ul>
<p>Looks very nice, doesn't it?</p>
<p>So why isn't it used more often (read: at all)?</p>
<p>This is why:</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2008/05/scary_small.png"><img class="aligncenter size-full wp-image-411" title="Complicated SSL Dialogs" src="http://www.gnegg.ch/wp-content/uploads/2008/05/scary_small.png" alt="Picture underlining the complicated SSL certificate dialogs" width="450" height="473" /></a></p>
<p>The screenshot shows what’s needed to actually have a look at the client side certificates installed in your browser, which currently is the only way of accessing them. Let’s say you want to copy a keypair from one machine to another. You’ll have to:</p>
<ol>
<li>Open the preferences (many people are afraid of even that)</li>
<li>Select Advanced (scary)</li>
<li>Click Encryption (encry... what?)</li>
<li>Click "View Certificates" (what do the other buttons do? oops! Another dialog?)</li>
<li>Select your certificate (which one?) and click "Export" (huh?)</li>
</ol>
<p>Even generation of the key is done in-browser, without any feedback to the site requesting the key.</p>
<p>This is like basic authentication (nobody uses this one) vs. forms based authentication (which is what everybody uses): It's non-themeable, scary, modal and complicated.</p>
<p>What we need for client side certificates to become useful is a way for sites to get more access to the functionality than they currently do: They need information on the key generation process. They should allow the user to export the key and to re-import it (just spawning two file dialogs should suffice - of course the key must not be transmitted to the site in the process). They need a way to list the keys installed in a browser. They need to be able to add and remove keys (on the user's request).</p>
<p>In the current state, this excellent idea is rendered completely useless by the awful usability and the completely detached nature: This is a browser feature. It's browser dependent without a way for the sites to control it - to guide users through steps.</p>
<p>For this to work, sites need more control.</p>
<p>Without giving them access to your keys.</p>
<p>Interesting problem, isn’t it?</p>
pilif.ch is back2008-05-19T00:00:00+00:00http://pilif.github.com/2008/05/pilifch-is-back<p>It has been a while since <a href="/2005/09/domain-grabbers-loveem/">I lost pilif.ch</a>. Two years to be exact.</p>
<p>Fortunately, it looks like the domain grabber who took pilif.ch after that unfortunate accounting incident has since lost interest, so now pilif.ch belongs to me again. About bloody time!</p>
<p>Aside from the fact that my online identity has always been pilif (despite lipfi sounding much friendlier when pronounced in Swiss German), there are other reasons for me wanting the domain back:</p>
<ul>
<li>it's in my MSN-ID (passport@pilif.ch)</li>
<li>various other @pilif.ch addresses are registered at various services I've since forgotten the password for.</li>
<li>it was the very first domain I bought - ever.</li>
</ul>
<p>So it’s back to the roots for me. MX, Web and DNS are already configured (the zone file is actually symlinked to lipfi.ch - I have no idea whether this is a legal thing to do, but it works).</p>
<p>Home - sweet home!</p>
Broken by design2008-05-06T00:00:00+00:00http://pilif.github.com/2008/05/broken-by-design<p>The concept sounds nice: To control all the various remote controllable devices you accumulate in your home cinema, why not just use one programmable remote? With enough intelligence, I would even be able to do much more than provide some way of switching personality.</p>
<p>I mean: press one button and you have a remote for your receiver; press another and it controls your media center instead, losing the receiver functionality.</p>
<p>Why not put it in “Media Mode” where it controls the volume by sending commands the receiver understands while still providing full navigation support for your media center?</p>
<p>Logitech’s <a href="http://www.logitech.com/index.cfm/remotes/universal_remotes">Harmony family</a> promises to provide that functionality.</p>
<p>Unfortunately, it’s broken by design as</p>
<ul>
<li>it tries to be intelligent while it is completely stupid. For example, I can add a "Music Player" functionality with the intention of it sending commands to a Squeezebox, but as soon as you add a media center, it insists on using that to play music, with no way to change it.</li>
<li>The web based programming interface is awful. It forces you through multi step assistants, each time reloading the (ugly) pages, asking questions which could easily be placed on one screen.</li>
<li>It only works on Mac and Windows (no Linux support)</li>
</ul>
<p>Especially the first point rendered this interesting concept completely unusable for me.</p>
<p>Now, Engadget just had <a href="http://www.engadget.com/2008/05/05/concordance-enables-logitech-harmony-programming-in-unix-linux/">an article</a> about project <a href="http://www.phildev.net/harmony/">Concordance</a>, a free software project that allows access to the functionality (the whole functionality) from any UNIX machine using a command line tool, while also providing a library (with Perl and Python bindings) on top of which a useful GUI could be written.</p>
<p>I can’t wait to try this out as this easily circumvents the awful UI and may actually provide me with means to make Harmony work for my setup.</p>
<p>Also, it’s a real shame to see a very interesting project be made completely unusable by bad UI design.</p>
Nice weather2008-05-05T00:00:00+00:00http://pilif.github.com/2008/05/nice-weather<p><img class="aligncenter size-full wp-image-406" title="weather" src="http://www.gnegg.ch/wp-content/uploads/2008/05/weather.png" alt="Nice Weather" width="363" height="167" /></p>
<p>it has <a href="/2006/07/nice-summer/">been a while</a> since I’ve last talked about the weather, but the current official forecast by the <a href="http://www.meteoschweiz.admin.ch/web/en/weather.html">Federal Office of Meteorology and Climatology</a> makes me really, really happy. Especially after a very wet and cold April.</p>
<p>The days of barbecues are back!</p>
Dependent on working infrastructure2008-04-28T00:00:00+00:00http://pilif.github.com/2008/04/dependent-on-working-infrastructure<p>If you create and later deploy and run a web application, then you are dependent on a working infrastructure: You need a working web server, you need a working application server and in most cases, you’ll need a working database server.</p>
<p>Also, you’d want a solution that always and consistently works.</p>
<p>We’ve been using lighttpd/FastCGI/PHP for our deployment needs lately. I’ve preferred this to apache due to the easier configuration possible with lighty (out of the box automated virtual hosting for example), the potentially higher performance (due to long-running FastCGI processes) and the smaller amount of memory consumed by lighttpd.</p>
<p>But last week, I had to learn the price of walking off the beaten path (Apache, mod_php).</p>
<p>In one particular constellation (lighty, FastCGI and PHP running on a Gentoo box), a certain script sometimes (read: 50% of the time) didn’t output all the data it should have. Instead, lighty randomly sent out RST packets. This without any indication of what could be wrong in any of the involved log files.</p>
<p>Naturally, I looked everywhere.</p>
<p>I read the source code of PHP. I’ve created reduced test cases. I’ve tried workarounds.</p>
<p>The problem didn’t go away until I tested the same script with Apache.</p>
<p>This is where I’m getting pragmatic: I depend on a working infrastructure. I need it to work. Our customers need it to work. I don’t care who is to blame. Is it PHP? Is it lighty? Is it Gentoo? Is it the ISP (though it would have to be on the sender’s end, as I’ve seen the described failure with different ISPs)?</p>
<p>I don’t care.</p>
<p>My interest is in developing a web application. Not in deploying one. Not really, anyways.</p>
<p>I’m willing (<a href="/2007/07/php-stream-filters-bzip2compress/">and able</a>) to fix bugs in my development environment. I may even be able to fix bugs in my deployment platform. But I’m certainly not willing to. Not if there is a competing platform that works.</p>
<p>So after quite some time with lighty and fastcgi, it’s back to Apache. The prospect of having a consistently working backend largely outweighs the theoretical benefits of memory savings, I’m afraid.</p>
Ubuntu 8.042008-04-24T00:00:00+00:00http://pilif.github.com/2008/04/ubuntu-804<p>I’m sure that you have heard the news: <a href="http://www.ubuntu.com">Ubuntu</a> 8.04 is out.</p>
<p>Congratulations to Canonical and their community for another fine release of a really nice Linux distribution.</p>
<p>What prompted me to write this entry though is the fact that I have updated <a href="http://www.gnegg.ch/2006/07/computers-under-my-command-issue-1-shion/">shion</a> from 7.10 to 8.04 this afternoon. Over an SSH connection.</p>
<p>The whole process took about 10 minutes (including the download time) and was completely flawless. Everything kept working as it was before. After the reboot (which also went flawlessly), even OpenVPN came back up and connected to the office so I could have a look at how the update went.</p>
<p>This is very, very impressive. Updates are tricky. Especially considering that it’s not one application that’s updated, not even one OS. It’s a seemingly random collection of various applications with their interdependencies, making it virtually impossible to test each and every configuration.</p>
<p>This shows that with a good foundation, everything is possible - even when you don’t have the opportunity to test for each and every case.</p>
<p>Congratulations again, Ubuntu team!</p>
Web service authentication2008-04-18T00:00:00+00:00http://pilif.github.com/2008/04/web-service-authentication<p>When reading an <a href="http://googlesystem.blogspot.com/2008/04/subscribe-to-authenticated-feeds-in.html">article</a> about how to make google reader work with authenticated feeds, one big flaw behind all those web 2.0 services sprang to my mind: Authentication.</p>
<p>I know that there are <a href="http://oauth.net/">efforts</a> underway to standardise on a common method of service authentication, but we are nowhere near there yet.</p>
<p>Take facebook: They let you enter your email account data into a form to send an invitation to all your friends. Or the article I was referring to: They want your account data for an authenticated feed to make it available in google reader.</p>
<p>But think of what you are giving away…</p>
<p>For your service provider to be able to interact with that other service, they need to store your password. Be it short term (facebook, hopefully) or long term (any online feed reader with authentication support). They can (and do) assure you that they will store the data in encrypted form, but to be able to access the service in the end, they need the unencrypted password, thus requiring them to not only use reversible encryption, but to also keep the encryption key around.</p>
<p>Do you want a company in a country whose laws you are not familiar with to have access to all your account data? Do you want to give them the password to your personal email account? Or to everything else in case you share passwords?</p>
<p>People don’t seem to get this problem as account data is freely given all over the place.</p>
<p>Efforts like <a href="http://oauth.net/">OAuth</a> are clearly needed, but as webbased technology, they clearly can’t solve all the problems (what about Email accounts for example).</p>
<p>But is this the right way? <a href="http://www.codinghorror.com/blog/archives/001072.html">We can’t even trust desktop applications</a>. Personally, I think the good old username/password combination is at the end of its usefulness (was it ever really useful?). We need new, better, ways for proving our identity. Something that is easily passed around and yet cannot be copied.</p>
<p>SSL client certificates feel like an underused but very interesting option. Let’s make two examples. The first one is your authenticated feed. The second one is your SSL-enabled email server. Let’s say that you want to give a web service revocable access to both services without ever giving away personal information.</p>
<p>For the authenticated feed, the external service will present the feed server with its client side certificate which you have signed. By checking your signature, the authenticated feed knows your identity and by checking your CRL it knows whether you authorized the access or not. The service doesn’t know your password and can’t use your signature for anything but accessing that feed.</p>
<p>The same goes for the email server: The third party service logs in with your username and the signed client certificate (signed by you), but without password. The service doesn’t need to know your password and in case they do something wrong, you revoke your signature and be done with it (I’m not sure whether mail servers support client certificates, but I gather they do as it’s part of the SSL spec).</p>
<p>Client side certificates already provide a standard means for secure authentication without ever passing a known secret around. Why isn’t it used way more often these days?</p>
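<p>For the mail server case, presenting such a certificate is part of the standard TLS handshake, so stock tooling can exercise it. A rough sketch with the <code>openssl</code> command line client (the host name and the certificate file names are hypothetical, and the server would have to be configured to request client certificates):</p>

```shell
# Hypothetical: connect to an IMAPS server and present a client
# certificate (client.crt/client.key) instead of a password.
# The server authenticates the user by verifying the certificate
# against the CA that signed it; revoking the signature locks
# this particular key out without touching any password.
openssl s_client -connect mail.example.com:993 \
    -cert client.crt -key client.key -CAfile ca.crt
```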
VMware shared folders and Visual Studio2008-04-17T00:00:00+00:00http://pilif.github.com/2008/04/vmware-shared-folders-and-visual-studio<p>Ever since <a href="http://www.gnegg.ch/2008/03/impressed-by-git/">I’ve seen the light</a>, I’m using git for every possible situation. Subversion is ok, but git is fun. It changed the way I do development. It allowed me to create ever so many fun features for our product. Even in spare time - without the fear of never completing and thus wasting them.</p>
<p>I have so many branches of all our projects, every one of them containing a useful but just-not-ready-for-prime-time feature. But when the time is right, I will be able to use that work. No more wasting it away because a bugfix touches the same file.</p>
<p>The day I dared to use git was the day that changed how I work.</p>
<p>Now naturally, I wanted to use all that freedom for my Windows work as well, but as you know, git just isn’t quite there yet. In fact, I had an awful lot of trouble with it, mainly because its integrated SSH client doesn’t work with my PuTTY/Pageant setup and such.</p>
<p>So I resorted to storing my windows development stuff on my mac file system and using VMware Fusion’s shared folder feature to access the source files.</p>
<p>Unfortunately, it didn’t work very well at first as this is what I got:</p>
<p><a href="http://www.gnegg.ch/wp-content/uploads/2008/04/nottrusted.png"><img class="aligncenter size-medium wp-image-402" title="The project location is not trusted" src="http://www.gnegg.ch/wp-content/uploads/2008/04/nottrusted-300x156.png" alt="Error message saying that the 'Project location is not trusted'" width="300" height="156" /></a></p>
<p>I didn’t even try to find out what happens when I compile and run the project from there, so I pressed F1 and followed the instructions given there to get rid of the message that the “Project location is not trusted”.</p>
<p>I followed them, but it didn’t help.</p>
<p>I tried adding various UNC paths to the intranet zone, but neither worked.</p>
<p>Then I tried sharing the folder via Mac OS X’s built in SMB server. This time, the path I’ve set up using mscorcfg.msc actually seemed to do something. Visual Studio stopped complaining. And then I found out:</p>
<p>Windows treats host names containing a dot (.) as internet resources. Hostnames without dots are considered to be intranet resouces.</p>
<p>\\celes\windev worked in mscorcfg.msc because celes, not containing a dot, was counted as an intranet resource.</p>
<p>\\.host contains a dot and is thus counted as an internet resource.</p>
<p>This means that to make the .NET framework trust your VMWare shared folder, you have to add the path to the “Internet_Zone”. Not the “LocalIntranet_Zone”, because the framework loader doesn’t even look there.</p>
<p>Once I’ve changed that configuration, Visual Studio complained that it was unable to parse the host name - it seems to assume that host names don’t start with a dot.</p>
<p>This was fixed by mapping the path to a drive letter like we did centuries ago.</p>
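<p>For reference, the drive letter mapping boils down to a single command in a Windows shell. This is a sketch only: the share path below is what VMware exposed on my setup and may differ on yours, and the drive letter is arbitrary:</p>

```batch
REM Map the VMware shared folder to a drive letter so that paths
REM no longer start with "\\.host" (adjust the share name to whatever
REM the VMware tools expose on your guest)
net use W: "\\.host\Shared Folders\windev"
```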
<p>Now VS is happy and I can have the best of all worlds:</p>
<ul>
<li>I can keep my windows development work in a git repository</li>
<li>I have a useful (and working) shell and ssh-agent to actually "git svn dcommit" my work</li>
<li>I don't have to export any folders of my mac via SMB</li>
<li>Time Machine now also backs up my Windows Work which I had to do manually until now.</li>
</ul>
<p>Very nice indeed, but now back to work (with git :-) ).</p>
git branch in ZSH prompt2008-04-14T00:00:00+00:00http://pilif.github.com/2008/04/git-branch-in-zsh-prompt<p style="padding-top: 20px !important">
<img src="http://www.gnegg.ch/wp-content/uploads/2008/04/gitprompt.png" alt="Screenshot of the terminal showing the current git branch" title="git prompt" width="365" height="36" class="aligncenter size-full wp-image-400" />
</p>
<p>Today, I came across a little trick on how to <a href="http://unboundimagination.com/Current-Git-Branch-in-Bash-Prompt">output the current git branch on your bash prompt</a>. This is very useful, but not as much for me as <a href="http://www.gnegg.ch/2005/04/praise-to-zsh/">I’m using ZSH</a>. Of course, I wanted to adapt the method (and to use fewer backslashes :-) ).</p>
<p>Also, in my setup, I’m making use of ZSH’s prompt themes feature of which I’ve chosen the theme “adam1”. So let’s use that as a starting point.</p>
<ol>
<li>First, create a copy of the prompt theme into a directory of your control where you intend to store private ZSH functions (~/zshfuncs in my case).
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">cp /usr/share/zsh/4.3.4/functions/prompt_adam1_setup ~/zshfuncs/prompt_pilif_setup</code></pre></figure>
</li>
<li>Tweak the file. I've adapted the prompt from the original article, but I've managed to get rid of all the backslashes (to actually make the regex readable) and to place it nicely in the adam1 prompt framework.</li>
<li>Advise ZSH about the new ZSH function directory (if you haven't already done so).
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nv">fpath</span><span class="o">=(</span>~/zshfunc <span class="nv">$fpath</span><span class="o">)</span></code></pre></figure>
</li>
<li>Load your new prompt theme.
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">prompt pilif</code></pre></figure>
</li>
</ol>
<p>And here’s the adapted adam1 prompt theme:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># pilif prompt theme</span>
prompt_pilif_help <span class="o">()</span> <span class="o">{</span>
cat <span class="sh"><<'EOF'
This prompt is color-scheme-able. You can invoke it thus:
prompt pilif [<color1> [<color2> [<color3>]]]
This is heavily based on adam1 which is distributed with ZSH. In fact,
the only change from adam1 is support for displaying the current branch
of your git repository (if you are in one)
EOF
</span><span class="o">}</span>
prompt_pilif_setup <span class="o">()</span> <span class="o">{</span>
<span class="nv">prompt_adam1_color1</span><span class="o">=</span><span class="k">${</span><span class="nv">1</span><span class="k">:-</span><span class="s1">'blue'</span><span class="k">}</span>
<span class="nv">prompt_adam1_color2</span><span class="o">=</span><span class="k">${</span><span class="nv">2</span><span class="k">:-</span><span class="s1">'cyan'</span><span class="k">}</span>
<span class="nv">prompt_adam1_color3</span><span class="o">=</span><span class="k">${</span><span class="nv">3</span><span class="k">:-</span><span class="s1">'green'</span><span class="k">}</span>
<span class="nv">base_prompt</span><span class="o">=</span><span class="s2">"%{</span><span class="nv">$bg_no_bold</span><span class="s2">[</span><span class="nv">$prompt_adam1_color1</span><span class="s2">]%}%n@%m%{</span><span class="nv">$reset_color</span><span class="s2">%} "</span>
<span class="nv">post_prompt</span><span class="o">=</span><span class="s2">"%{</span><span class="nv">$reset_color</span><span class="s2">%}"</span>
<span class="nv">base_prompt_no_color</span><span class="o">=</span><span class="k">$(</span><span class="nb">echo</span> <span class="s2">"</span><span class="nv">$base_prompt</span><span class="s2">"</span> | perl -pe <span class="s2">"s/%{.*?%}//g"</span><span class="k">)</span>
<span class="nv">post_prompt_no_color</span><span class="o">=</span><span class="k">$(</span><span class="nb">echo</span> <span class="s2">"</span><span class="nv">$post_prompt</span><span class="s2">"</span> | perl -pe <span class="s2">"s/%{.*?%}//g"</span><span class="k">)</span>
precmd <span class="o">()</span> <span class="o">{</span> prompt_pilif_precmd <span class="o">}</span>
preexec <span class="o">()</span> <span class="o">{</span> <span class="o">}</span>
<span class="o">}</span>
prompt_pilif_precmd <span class="o">()</span> <span class="o">{</span>
setopt noxtrace localoptions
<span class="nb">local </span>base_prompt_expanded_no_color base_prompt_etc
<span class="nb">local </span>prompt_length space_left
<span class="nb">local </span>git_branch
<span class="nv">git_branch</span><span class="o">=</span><span class="sb">`</span>git branch 2>/dev/null | grep -e <span class="s1">'^*'</span> | sed -E <span class="s1">'s/^\* (.+)$/(\1) /'</span><span class="sb">`</span>
<span class="nv">base_prompt_expanded_no_color</span><span class="o">=</span><span class="k">$(</span>print -P <span class="s2">"</span><span class="nv">$base_prompt_no_color</span><span class="s2">"</span><span class="k">)</span>
<span class="nv">base_prompt_etc</span><span class="o">=</span><span class="k">$(</span>print -P <span class="s2">"</span><span class="nv">$base_prompt</span><span class="s2">%(4~|...|)%3~"</span><span class="k">)</span>
<span class="nv">prompt_length</span><span class="o">=</span><span class="k">${#</span><span class="nv">base_prompt_etc</span><span class="k">}</span>
<span class="k">if</span> <span class="o">[[</span> <span class="nv">$prompt_length</span> -lt 40 <span class="o">]]</span>; <span class="k">then
</span><span class="nv">path_prompt</span><span class="o">=</span><span class="s2">"%{</span><span class="nv">$fg_bold</span><span class="s2">[</span><span class="nv">$prompt_adam1_color2</span><span class="s2">]%}%(4~|...|)%3~%{</span><span class="nv">$fg_bold</span><span class="s2">[white]%}</span><span class="nv">$git_branch</span><span class="s2">"</span>
<span class="k">else
</span><span class="nv">space_left</span><span class="o">=</span><span class="k">$((</span> <span class="nv">$COLUMNS</span> <span class="o">-</span> <span class="nv">$#base_prompt_expanded_no_color</span> <span class="o">-</span> <span class="m">2</span> <span class="k">))</span>
<span class="nv">path_prompt</span><span class="o">=</span><span class="s2">"%{</span><span class="nv">$fg_bold</span><span class="s2">[</span><span class="nv">$prompt_adam1_color3</span><span class="s2">]%}%</span><span class="k">${</span><span class="nv">space_left</span><span class="k">}</span><span class="s2"><...<%~ %{</span><span class="nv">$reset_color</span><span class="s2">%}</span><span class="nv">$git_branch</span><span class="s2">%{</span><span class="nv">$fg_bold</span><span class="s2">[</span><span class="nv">$prompt_adam1_color3</span><span class="s2">]%} </span><span class="nv">$prompt_newline</span><span class="s2">%{</span><span class="nv">$fg_bold_white</span><span class="s2">%}"</span>
<span class="k">fi
</span><span class="nv">PS1</span><span class="o">=</span><span class="s2">"</span><span class="nv">$base_prompt$path_prompt</span><span class="s2"> %# </span><span class="nv">$post_prompt</span><span class="s2">"</span>
<span class="nv">PS2</span><span class="o">=</span><span class="s2">"</span><span class="nv">$base_prompt$path_prompt</span><span class="s2"> %_&gt; </span><span class="nv">$post_prompt</span><span class="s2">"</span>
<span class="nv">PS3</span><span class="o">=</span><span class="s2">"</span><span class="nv">$base_prompt$path_prompt</span><span class="s2"> ?# </span><span class="nv">$post_prompt</span><span class="s2">"</span>
<span class="o">}</span>
prompt_pilif_setup <span class="s2">"</span><span class="nv">$@</span><span class="s2">"</span></code></pre></figure>
<p>The theme file can be downloaded <a href="http://www.lipfi.ch/prompt_pilif_setup">here</a></p>
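The only functional addition over adam1 is the <code>git_branch</code> pipeline in the precmd. It can be exercised on its own; here is a self-contained sketch in a throwaway repository (the path and identity are made-up values, not part of the theme):

```shell
# Demo of the theme's branch-detection pipeline in a throwaway repo.
# "git init -b" needs git >= 2.28; the user config only enables the commit.
set -e
rm -rf /tmp/prompt-demo
git init -q -b master /tmp/prompt-demo
cd /tmp/prompt-demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init
# the exact pipeline from prompt_pilif_precmd; prints "(master) "
git branch 2>/dev/null | grep -e '^*' | sed -E 's/^\* (.+)$/(\1) /'
```

Outside a repository, <code>git branch</code> fails, and the <code>2&gt;/dev/null</code> plus the empty grep result leave <code>git_branch</code> empty - which is why the prompt degrades gracefully when you are not in a checkout.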
This tram runs Microsoft® Windows® XP™2008-04-11T00:00:00+00:00http://pilif.github.com/2008/04/this-tram-runs-microsoft-windows-xp<p style="text-align: center;"><a href="http://www.gnegg.ch/wp-content/uploads/2008/04/tramcrash.jpg"><img class="aligncenter size-medium wp-image-397" title="General Protection Fault" src="http://www.gnegg.ch/wp-content/uploads/2008/04/tramcrash-300x213.jpg" alt="Image of the new station information system in some of Zurich's tramways with a Windows GPF on top of the display" width="300" height="213" /></a></p>
<p style="text-align: left;">The trams here in Zürich recently were upgraded with a really useful system providing an overview over the next couple of stations and the times when they will be reached.</p>
<p style="text-align: left;">Today, I managed to grab this picture, which once again clearly shows why Windows maybe isn't the right platform for something like this. Also, have a look at the number of applications in the taskbar (I know, the picture is bad, but that's all I can get out of my mobile phone)...</p>
<p style="text-align: left;">If I were tasked with implementing something like this, I'd probably use Linux in the backend and a web browser as a frontend. That way it's easier to debug, more robust and less embarrassing if it blows up.</p>
Shell history stats2008-04-10T00:00:00+00:00http://pilif.github.com/2008/04/shell-history-stats<p>It seems to be cool nowadays to <a href="http://jimmac.musichall.cz/log/?p=427">post</a> the output of a certain unix command to one's blog. So here I come:</p>
<pre class="code">pilif@celes ~
% fc -l 0 -1 |awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}'|sort -rn|head
467 svn
369 cd
271 mate
243 git
209 ssh
199 sudo
184 grep
158 scp
124 rm
115 ./clitest.sh</pre>
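The pipeline is easier to dissect on canned input: <code>fc -l</code> prints the history number in field one and the command in field two, awk counts occurrences of field two, and <code>sort -rn | head</code> ranks them. A stand-in for real history output:

```shell
# Same counting pipeline as above, fed fake "fc -l" output so it
# runs without any real shell history; prints "3 svn" then "1 cd".
printf '1 svn\n2 cd\n3 svn\n4 svn\n' |
  awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}' |
  sort -rn | head
```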
<p>clitest.sh is a small wrapper around wget which I use to do protocol-level debugging of the <a href="http://www.popscan.ch">PopScan</a> server.</p>
Converting Java keytool-certificates2008-04-09T00:00:00+00:00http://pilif.github.com/2008/04/converting-java-keytool-certificates<p>To be able to read barcodes from connected barcode scanners in the web-based version of <a href="http://www.popscan.ch">PopScan</a>, we have to use a signed applet - there is no other way to get the needed level of hardware access.</p>
<p>The signature, by the way, doesn’t prevent a developer from doing bad stuff - it just puts their name below it (literally), which raises the bar for distributing malware that way: the checks when applying for a certificate are usually rigid enough that nobody can forge their application, so the origin of any piece of signed code is very traceable.</p>
<p>But there is no validation done of the actual code to be signed, and I doubt that the certificate authorities out there actually revoke certificates used to sign malware, though that remains to be seen.</p>
<p>Anyways. Back to the topic.</p>
<p>In addition to the Java applet, we also develop the Windows client frontend to the PopScan server. And we have a small frontend to run on Windows CE (or Windows Mobile) based barcode-capable devices. Traditionally, neither of these was signed.</p>
<p>But lately, with Vista and Windows Mobile 6, signing has become more and more important: Both systems complain with varying loudness about unsigned code, so I naturally prefer the code to be signed - we DO have a code signing certificate for our applet, after all.</p>
<p>Now the thing is that keytool, Java’s tool for handling code signing keys, doesn’t allow a private key to be exported. This means that there was no obvious way for me to ever use the certificate we got for our applet to sign Windows EXEs.</p>
<p>Going back to the CA and asking them to send over an additional certificate was no option for me: Aside from the fact that it would certainly have cost another two years’ fees, it would have meant proving our identity all over again - one year too early, as our current certificate is valid until 2009.</p>
<p>But then, I found a solution. Here’s how you convert a java keystore certificate to something you can use with Microsoft’s Authenticode:</p>
<ol>
<li>Start <a href="http://yellowcat1.free.fr/keytool_iui.html">KeyTool GUI</a></li>
<li>In the Treeview, click "Export", "Private Key"</li>
<li>Select your java keystore-file</li>
<li>Enter two target file names for your key and the certificate chain (and select PEM format)</li>
<li>Click OK</li>
</ol>
<p>Now you will have two more files. One is your private key (I’ve named it key.pem), the other is the certificate chain (named cert.pem in my case). Now, use OpenSSL to convert this into something Microsoft likes to see:</p>
<pre class="code">% openssl pkcs12 -inkey key.pem -in cert.pem -out keypair.pfx -export</pre>
<p>openssl will ask for a password to encrypt the pfx file with, and you’ll be done. Now you can use the pfx file like any other pfx file you received from your certificate authority (double-click it to install it or use it with signcode.exe to directly sign your code).</p>
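The whole conversion can be dry-run with a throwaway self-signed certificate before touching the real key material (all file names, the subject and the password here are made up for the demonstration):

```shell
set -e
mkdir -p /tmp/pfx-demo && cd /tmp/pfx-demo
# throwaway key + self-signed cert standing in for the exported PEM files
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=pfx-test" \
  -keyout key.pem -out cert.pem -days 1 2>/dev/null
# the conversion step from above, with a non-interactive password
openssl pkcs12 -inkey key.pem -in cert.pem -out keypair.pfx \
  -export -passout pass:secret
# verify that the pfx can be read back
openssl pkcs12 -info -in keypair.pfx -passin pass:secret -noout
```

If the last command exits cleanly, the pfx file is intact and ready for signcode.exe.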
<p>Remember to delete key.pem as it’s the unencrypted private key!</p>
Thanks, Ebi2008-04-07T00:00:00+00:00http://pilif.github.com/2008/04/thanks-ebi<p>Yesterday, Ebi invited me and my girlfriend over for dinner and a round of trivial pursuit.</p>
<p>I fail to find words to describe how awesomely good the meal was. I would have wanted a fourth serving, but I just couldn’t stuff in even a microgram more.</p>
<p>And the trivial pursuit was fun as ever - that game just shines if you don’t take it seriously.</p>
<p>Thanks Ebi. I had a blast!</p>
Old URLs fixed2008-04-07T00:00:00+00:00http://pilif.github.com/2008/04/old-urls-fixed<p>I have just added two rewrite rules to automatically translate most of the old s9y-URLs to something WordPress understands.</p>
<p>The first one was easy and could be done in WP’s .htaccess-file:</p>
<pre class="code">RewriteRule ^archives/([0-9]+)/([0-9]+)\.html$ /$1/$2 [R=permanent,L]</pre>
<p>This handles the s9y-style URLs for monthly archives - something that apparently got quite a lot of hits - at least it’s one of the 404 errors I’ve encountered the most in my logfiles.</p>
<p>The second one is the direct link to old posts. While this could be done with a PHP/.htaccess-only solution, I took the opportunity to learn how to do custom URL maps for mod_rewrite. These, of course, only work in httpd.conf, so this probably isn’t something everyone can do on their hosting plan:</p>
<pre class="code">RewriteEngine On
RewriteMap s9yconv prg:/home/pilif/url-s9y2wp.php</pre>
<p>After defining this, I could use the map in WP’s .htaccess:</p>
<pre class="code">RewriteRule ^archives/([0-9]+)-(.*)\.html$ /${s9yconv:$2} [R=permanent,L]</pre>
<p>The <a href="http://www.lipfi.ch/url-s9y2wp.php">script</a> is very simple as you can see here:</p>
<figure class="highlight"><pre><code class="language-php" data-lang="php">#!/usr/bin/php
<span class="cp"><?php</span>
<span class="k">include</span><span class="p">(</span><span class="s1">'wp/wp-includes/formatting.php'</span><span class="p">);</span>
<span class="k">while</span> <span class="p">((</span><span class="nv">$line</span> <span class="o">=</span> <span class="nb">fgets</span><span class="p">(</span><span class="nx">STDIN</span><span class="p">))</span> <span class="o">!==</span> <span class="kc">false</span><span class="p">){</span>
<span class="nv">$line</span> <span class="o">=</span> <span class="nb">preg_replace</span><span class="p">(</span><span class="s1">'#\.html$#'</span><span class="p">,</span> <span class="s1">''</span><span class="p">,</span> <span class="nv">$line</span><span class="p">);</span>
<span class="nv">$line</span> <span class="o">=</span> <span class="nx">sanitize_title_with_dashes</span><span class="p">(</span><span class="nb">preg_replace</span><span class="p">(</span><span class="s1">'#^[0-9]+-#'</span><span class="p">,</span> <span class="s1">''</span><span class="p">,</span> <span class="nv">$line</span><span class="p">));</span>
<span class="k">echo</span> <span class="s2">"</span><span class="nv">$line</span><span class="se">\n</span><span class="s2">"</span><span class="p">;</span>
<span class="p">}</span>
<span class="cp">?></span></code></pre></figure>
<p>While WP is configured to create permalinks containing the date, you can usually just feed it the URL-ized title and it’ll find the correct entry to use. This has the advantage that the script, which is long-running per the specification of prg rewrite maps, is kept as simple as possible - which is needed, as PHP doesn’t always free all allocated memory, something you don’t want in a long-running process like this one. This is why I redirect to something WP still has to do some work on: It spares me all the database handling and such.</p>
<p>If I had to do this without the ability to change httpd.conf, I would use a rule like this:</p>
<pre class="code">RewriteRule ^archives/([0-9]+)-(.*)\.html$ /s9y-convert.php/$2 [L]</pre>
<p>and then do above logic in that script.</p>
<p>Both approaches work the same, but I wanted to try out how to do a dynamic rewrite map.</p>
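For the curious: a <code>prg:</code> map is nothing more than a long-running filter - Apache writes one lookup key per line to the program's stdin and reads exactly one result line back, which is why the script above loops over <code>fgets(STDIN)</code>. The protocol can be poked at from the shell; here sed stands in for the PHP script (the real one needs the WordPress includes):

```shell
# sed mimics url-s9y2wp.php: strip ".html", drop the numeric id,
# lowercase the rest (a rough stand-in for sanitize_title_with_dashes);
# prints "some-old-post"
printf '123-Some-Old-Post.html\n' |
  sed -e 's/\.html$//' -e 's/^[0-9]*-//' |
  tr '[:upper:]' '[:lower:]'
```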
Hosted Code Repository?2008-04-05T00:00:00+00:00http://pilif.github.com/2008/04/hosted-code-repository<p>Recently (yesterday), the <a href="http://www.rubyonrails.com/">Ruby on Rails</a> project announced their switch to <a href="http://git.or.cz/">git</a> for their revision controlling needs. Also, they announced that they will use the hosted service <a href="http://github.com/">github</a> as the place to host the main repository on (even though git is decentralized, there is some sense in having a “main tree” which contains what’s going to be the official releases).</p>
<p>I didn’t know github, so I had a look at their project.</p>
<p>What I don’t understand is that they seem to also target commercial entities with their offering. Think about it: Suppose you are a commercial entity doing commercial software development. Would you send all your source code <em>and</em> your entire development history to another company?</p>
<p>Sure. They call themselves “Secure”. But what does that mean? They have SSL and SSH support, but frankly, I’m less concerned about patches travelling over the network unencrypted than about trusting anybody else to host my code.</p>
<p>Even if they don’t screw up storage security (think: “accessing the code of your competition”), even if they are completely 100% trustworthy (think: “displeased employee selling out to your competition before leaving his employer”), there is still the issue of government/legal access.</p>
<p>When using an external hosting provider, you are storing your code (and history) in a foreign country with its own legislation. Are you prepared for that?</p>
<p>And finally, do you want the government of the country you’ve just sent your code (and history) to, to really have access to all that data? Who guarantees that the hosting provider of your choice won’t cooperate as soon as the government comes knocking (it <a href="http://www.theregister.co.uk/2005/09/07/yahoo_china_dissident_case/">happened before</a>, even without any <a href="http://seclists.org/nmap-hackers/2007/0000.html">legal basis at all</a>)?</p>
<p>All that is never worth the risk for a larger company (or for smaller ones - <a href="http://www.sensational.ch">like ours</a>).</p>
<p>So what exactly are these hosting companies (github is one. <a href="http://www.codespaces.com/">Code Spaces</a> is another) targeted at?</p>
<ul>
<li>Free Software developers? Their code is open to begin with, so they have to face the problems I described anyways. But they are much harder to sue. Also, I'm not sure how compelling it is for a free software project to use a non-free tool (rails being the exception, but we'll talk about that later on)</li>
<li>Large companies? No way (see above)</li>
<li>Smaller companies? Probably not. Smaller companies are less of a target due to lower visibility, but suing them is more likely to get you something in return quickly, as they usually don't dare engage in prolonged legal fights.</li>
</ul>
A rant on brace placement2008-04-04T00:00:00+00:00http://pilif.github.com/2008/04/a-rant-on-brace-placement<p>Many people consider it good coding style to place braces (in languages that use them for block boundaries) on their own line. Like so:</p>
<figure class="highlight"><pre><code class="language-php" data-lang="php">function doSomething($param1, $param2)
{
echo "param1: $param1 / param2: $param2";
}</code></pre></figure>
<p>Their argument usually is that it clearly shows the block boundaries, thus increasing readability. I, as a proponent of placing braces at the end of the statement opening the block, strongly disagree. I would format the above code like so:</p>
<figure class="highlight"><pre><code class="language-php" data-lang="php">function doSomething($param1, $param2){
echo "param1: $param1 / param2: $param2";
}</code></pre></figure>
<p>Here is why I prefer this:</p>
<ul>
<li>In many languages, code blocks don't have their own identity - functions have, but not blocks (they don't provide scope). By placing the opening brace on its own line, you emphasize the block but actually make it harder to see what opened the block in the first place.</li>
<li>Using correct indentation, the presence of the block should be obvious anyways. There is no need to emphasize it more (at the cost of readability of the block opening statement).</li>
<li>I doubt that using one line per token really makes the code more readable. Heck... why don't we write that sample code like so?</li>
</ul>
<figure class="highlight"><pre><code class="language-php" data-lang="php">function
doSomething
(
$param1,
$param2
)
{
echo "param1: $param1 / param2: $param2";
}</code></pre></figure>
PostgreSQL on Ubuntu2008-04-01T00:00:00+00:00http://pilif.github.com/2008/04/postgresql-on-ubuntu<p>Today, it was time to provision another virtual machine. While I’m a big fan of <a href="http://www.gentoo.org">Gentoo</a>, there were some reasons that made me decide to gradually switch over to Ubuntu Linux for our servers:</p>
<ul>
<li>One of the big advantages of Gentoo is that it's possible to get bleeding-edge packages. Or at least it's supposed to be. Lately, it has been taking longer and longer for an ebuild of an updated version to become available. Take PostgreSQL for example: It took about 8 months for 8.2 to become available, and it looks like history is repeating itself for 8.3</li>
<li>It seems like there are more flamewars than real development going on in Gentoo-Land lately (which in the end leads to above problems)</li>
<li>Sometimes, init scripts and other infrastructure change over time and there is not always a clear upgrade path. <tt>emerge -u world</tt> once, then forget to <tt>etc-update</tt>, and on the next reboot, all hell will break loose.</li>
<li>Installing a new system takes ages due to the manual installation process. I'm not saying it's hard. It's just time-intensive</li>
</ul>
<p>Earlier, the advantage of having current packages greatly outweighed the issues that come with Gentoo, but lately, due to the current state of the project, it’s taking longer and longer for packages to become available. So that advantage fades away, leaving me with only the disadvantages.</p>
<p>So at least for now, I’m sorry to say, Gentoo has outlived its usefulness on my production servers and has been replaced by Ubuntu which, albeit not being bleeding-edge with packages, at least provides a very clean upgrade path and is installed quickly.</p>
<p>But back to the topic which is the installation of PostgreSQL on Ubuntu.</p>
<p>(it’s ironic, btw, that Postgres 8.3 actually is in the current hardy beta, together with a framework to concurrently use multiple versions whereas it’s still nowhere to be seen for Gentoo. Granted: An experimental overlay exists, but that’s mainly untested and I had some headaches installing it on a dev machine)</p>
<p>After installing the packages, you may wonder how to get it running. At least I wondered.</p>
<pre class="code">/etc/init.d/postgresql-8.3 start</pre>
<p>did nothing (not a very nice thing to do, btw). initdb wasn’t in the path. This was a real WTF moment for me and I assumed some problem in the package installation.</p>
<p>But in the end, it turned out to be an (underdocumented) feature: Ubuntu comes with a really nice framework to keep multiple versions of PostgreSQL running at the same time. And it comes with scripts helping to set up that configuration.</p>
<p>So what I had to do was to create a cluster with</p>
<pre class="code">pg_createcluster --lc-collate=de_CH --lc-ctype=de_CH -e utf-8 8.3 main</pre>
<p>(your settings may vary - especially the locale settings)</p>
<p>Then it worked flawlessly.</p>
<p>I do have some issues with this process though:</p>
<ul>
<li>it's underdocumented. Good thing I speak perl and bash, so I could use the source to figure this out.</li>
<li>in contrast to about every other package in Ubuntu, installing it does not leave you with a running installation. You have to manually create a cluster after installing the packages</li>
<li>pg_createcluster --help bails out with an error</li>
<li>I had /var/lib/postgresql on its own partition and forgot to remount it after a reboot, which caused the init script to fail with a couple of "uninitialized value" errors from perl itself. This should be handled more cleanly.</li>
</ul>
<p>Still. It’s a nice configuration scheme and real progress from Gentoo. The only thing left for me now is to report these issues to the bug tracker and hope to see them fixed eventually. And if they aren’t, there is this post here to remind me and my visitors.</p>
Another new look2008-03-31T00:00:00+00:00http://pilif.github.com/2008/03/another-new-look<p>It has been a while since the <a href="http://www.gnegg.ch/2006/06/one-day-with-serendipity/">last redesign</a> of gnegg.ch, but is a new look after just a little more than one year of usage really needed?</p>
<p>The point is that I have changed blogging engines yet again. This time it's from Serendipity to <a href="http://wordpress.org">WordPress</a>.</p>
<p>What motivated the change? </p>
<p>Interestingly enough, if you ask me, s9y is clearly the better product than WordPress. If WordPress is Mac OS, then s9y is Linux: It has more features, it's based on cleaner code, and it doesn't have any commercial backing at all. So the question remains: Why switch?</p>
<p>Because that OSX/Linux-analogy also works the other way around: s9y is an ugly duckling compared to WP. External tools won't work (well) with s9y due to it not being known well enough. The amount of knobs to tweak is sometimes overwhelming and the available plugins are not nearly as polished as the WP ones.</p>
<p>All these are reasons that made me switch. I've used a <a href="http://www.gnegg.ch/2006/06/one-day-with-serendipity/">s9y to wp converter</a>, but some heavy tweaking was needed to make it actually transfer category assignments and tags (the former didn't work, the latter wasn't even implemented). Unfortunately, the changes were too hackish to publish here, but it's quite easily done.</p>
<p>Aside from that, most of the site has survived the switch quite nicely (the permalinks are broken once again, though), so let's see how this goes :-)</p>
Impressed by git2008-03-04T00:00:00+00:00http://pilif.github.com/2008/03/impressed-by-git<p>The company I’m working with is a Subversion shop. It has been for a long time - since fall of 2004 actually where I finally decided that the time for CVS is over and that I was going to move to subversion. As I was the only developer back then and as the whole infrastructure mainly consisted of CVS and <a href="http://www.viewvc.org">ViewVC</a> (cvsweb back then), this move was an easy one.</p>
<p>Now, we are a team of three developers, heavy <a href="http://trac.edgewall.org">trac</a> users and truly dependent on <a href="http://subversion.tigris.org">Subversion</a> which is - mainly due to the amount of infrastructure we have built around it - not going away anytime soon.</p>
<p>But none the less: We (mainly I) were feeling the shortcomings of subversion:</p>
<ul>
<li>Branching is not something you do easily. I tried working with branches before, but merging them really hurt, thus making it somewhat prohibitive to branch often.</li>
<li>Sometimes, half-finished stuff ends up in the repository. This is unavoidable, considering that the only alternative is keeping a bucket load of uncommitted changes in the working copy.</li>
<li>Code review is difficult, as actually trying out patches is a real pain due to the manual process of sending, applying and reverting them.</li>
<li>A pet peeve of mine though is untested, experimental features developed out of sheer interest. Stuff like that lies in the working copy, waiting to be reviewed or even just to have its real-life use discussed. Sooner or later, a needed change must go in and you have three options: sneak in the experimental change alongside it (bad), manually diff it out (sometimes hard to do) or just forget it and <tt>svn revert</tt> it (a real shame).</li>
</ul>
<p>Ever since the Linux kernel first began using Bitkeeper to track development, I knew that there is no technical reason for these problems. I knew that a solution for all this existed and that I just wasn’t ready to try it.</p>
<p>Last weekend, I finally had a look at the different distributed revision control systems out there. Due to the insane amount of infrastructure built around Subversion and not to scare off my team members, I wanted something that integrated into subversion, using that repository as the official place where official code ends up while still giving us the freedom to fix all the problems listed above.</p>
<p>I had a closer look at both Mercurial and git, though in the end, the nicely working SVN integration of git was what made me have a closer look at that.</p>
<p>Contrary to what everyone is saying, I have no problem with the interface of the tool - once you learn the terminology, it’s quite easy to get used to the system. So far, I did a lot of testing with both live and test repositories - everything working out very nicely. I’ve already seen the impressive branch-merging abilities of git (to think that in subversion you actually have to a) find out at which revision a branch was created and b) remember every patch you cherry-picked… crazy) and I’m getting into the details more and more.</p>
<p>On our trac installation, I’ve written a tutorial on how we could use git in conjunction with the central Subversion server which allowed me to learn quite a lot about how git works and what it can do for us.</p>
<p>So for me it’s git-all-the-way now and I’m already looking forward to being able to create many little branches containing many little experimental features.</p>
<p>If you have the time and you are interested in gaining many unexpected freedoms in matters of source code management, you too should have a look at git. Also consider that on the side of the subversion backend, no change is needed at all, meaning that even if you are forced to use subversion, you can privately use git to help you manage your work. Nobody would ever have to know.</p>
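The cheap branch-and-merge workflow described above takes half a dozen commands end to end; here is a minimal local demonstration (repository path and identity are throwaway values, and <code>git init -b</code> needs git &gt;= 2.28):

```shell
set -e
rm -rf /tmp/git-branch-demo
git init -q -b master /tmp/git-branch-demo
cd /tmp/git-branch-demo
git config user.email demo@example.com
git config user.name demo
echo base > file && git add file && git commit -qm base
git checkout -qb experiment          # cheap, private branch
echo feature >> file && git commit -qam feature
git checkout -q master
git merge -q experiment              # no revision bookkeeping needed
grep feature file                    # the branch's change is on master
```

Contrast the merge step with subversion, where you would have to look up the branch-point revision and list every cherry-picked change by hand.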
<p>Very, very nice.</p>
Failing silently is bad2008-02-04T00:00:00+00:00http://pilif.github.com/2008/02/failing-silently-is-bad<p>Today, I've experienced the perfect example of why I prefer <a href="http://www.postgresql.org">PostgreSQL</a> (congratulations on a successful 8.3 release today, guys!) to <a href="http://www.mysql.com">MySQL</a>.</p>
<p>Let me first give you some code, before we discuss it (assume that the data which gets placed in the database is - wrongly so - in ISO-8859-1):</p>
<p>This is what PostgreSQL does:</p>
<pre class="code">bench ~ > <strong>createdb -Upilif -E utf-8 pilif</strong>
CREATE DATABASE
bench ~ > <strong>psql -Upilif</strong>
Welcome to psql 8.1.4, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit
pilif=> <strong>create table test (blah varchar(20) not null default '');</strong>
CREATE TABLE
pilif=> <strong>insert into test values ('gnügg');</strong>
ERROR: invalid byte sequence for encoding "UTF8": 0xfc676727293b
pilif=>
</pre>
<p>and this is what MySQL does:</p>
<pre class="code">bench ~ > <strong>mysql test</strong>
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 97
Server version: 5.0.44-log Gentoo Linux mysql-5.0.44-r2
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> <strong>create table test( blah varchar(20) not null default '')</strong>
-> <strong>charset=utf8;</strong>
Query OK, 0 rows affected (0.01 sec)
mysql> <strong>insert into test values ('gnügg');</strong>
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> <strong>select * from test;</strong>
+------+
| blah |
+------+
| gn |
+------+
1 row in set (0.00 sec)
mysql>
</pre>
<p>Obviously it is wrong to try to place latin1-encoded data in an utf-8 formatted data store: While every valid utf-8 byte sequence is a valid latin1 byte sequence (latin1 does not restrict the validity of bytes, though some positions may be undefined), the reverse certainly is not true. The character ü from my example is 0xfc in latin1 and U+00FC in unicode, which must be encoded as 0xc3 0xbc in utf-8. 0xfc alone is <em>no valid utf-8 byte sequence</em>.</p>
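The byte sequences in question are easy to inspect from a shell with iconv and od (assuming a UTF-8 terminal and script; the od column spacing may differ):

```shell
# ü encoded as UTF-8: the two bytes c3 bc
printf 'ü' | od -An -tx1
# the same character transcoded to Latin-1: the single byte fc,
# which on its own is not a valid UTF-8 sequence
printf 'ü' | iconv -f UTF-8 -t ISO-8859-1 | od -An -tx1
```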
<p>So if you pass this invalid sequence to any entity accepting an utf-8 encoded byte stream, it will not be clear what to do with that data. It's not utf-8, that's for sure. But assuming that no character set is specified with the stream, it's impossible to guess what to translate the byte sequence into.</p>
<p>So PostgreSQL sees the error and bails out (if both the server and the client are set to utf-8 encoding and data is sent in non-utf8-format - otherwise it knows how to convert the data - conversion from any character set <strong>to</strong> utf-8 is possible all the time). MySQL on the other hand decides to fail silently and to try to fix up the invalid input.</p>
<p>Now while I could maybe live with the default of assuming latin1 encoding, truncating the data at the invalid byte without any warning whatsoever leads to <em>undetected loss of data</em>!</p>
<p>What if I'm not just entering one word? What if it's a blog-entry like this one? What if the entry is done by a non tech-savvy user? Remember: This mistake can easily be produced: Wrong Content-Type headers, old browsers, broken browsers... it's very easy to get Latin1 when you want utf-8. </p>
<p>While I agree that sanitization <em>must be done in the application tier</em> (preferably in the model), it's <strong>unacceptable</strong> for a database to store different data than what it was ordered to store without warning the user in any way. This easily leads to data loss or data corruption.</p>
<p>There are many more little things like this where MySQL decides to silently fail while PostgreSQL (and any other database) correctly bails out. As a novice, this can feel tedious. It can feel like PostgreSQL is pedantic and like you are faster with MySQL. But let's be honest: What do you prefer? An error message, or lost data with no way of knowing that it's lost?</p>
<p>This, by the way, is the outcome of a lengthy debugging session on a Typo3 installation, which is also, but not ultimately, to blame here. In a perfect world, MySQL would bail out, but Typo3 would</p>
<ul>
<li>Not specify charset=utf8 when creating the table unless specifically asked to.</li>
<li>Send a charset=utf-8 http-header, knowing that the database has been created as containing utf-8.</li>
<li>Sanitize user input before handing it over to the mysql backend, which is obviously broken in this instance.</li></ul>
<p>Now back to debugging real software on real databases *grin*</p>
</li></li></ul>
reddit's commenting system2008-01-29T00:00:00+00:00http://pilif.github.com/2008/01/reddits-commenting-system<p>This is something I wanted to talk about for quite some time now, but I never got around to it. Maybe you know <a href="http://reddit.com">reddit</a>. reddit basically works like digg.com - it's one of these web2.0 mashup community social networking <a href="http://www.youtube.com/watch?v=dr3qPRAAnOg">bubble</a> sites. reddit is about links posted by users and voted for by users.</p>
<p>Unlike digg, reddit has an awful screen design and thus seems to attract a somewhat more mature crowd than digg does, but lately it seems to be taken over by politics and pictures, which devalues the whole site a bit.</p>
<p>What is really interesting though is the commenting system. In fact, it's interesting enough for me to write about it, and it works well enough for me to actually post a comment there every now and then. It's even good enough that I'm sure, whenever I'm in the situation of designing a system that allows users to comment on something, I will have a look at what reddit did and model my solution around that base.</p>
<p>There are so many commenting systems out there, but all fail in some regard. Either they disturb your reading flow, making it too difficult to post something; or they hide comments behind a foldable tree structure; or they display a flat list, making it difficult to see any kind of threading going on.</p>
<p>And once you actually are interested in a topic enough to post a comment or a reply to a comment, you'll quickly lose track of the discussion which gets as quickly buried by newly arriving posts.</p>
<p>reddit works differently.</p>
<p>First, messages are displayed in a threaded but fully expanded view, allowing you to skip over content you are not interested in while still providing all the overview you need. Then, posting is done inline via some AJAX interface. You see a comment you want to reply to, you hit the reply link, enter the text and hit "save". The page is not reloaded; you end up just where you left off.</p>
<p>But what good is answering a comment if the initial commenter quickly forgets about his or her comment? Or just plain doesn't find it again? </p>
<p>reddit puts all direct replies to any comments you made into your personal inbox folder. If you have any of these replies, the envelope to the top right will light up red allowing you to see newly arrived replies to your comments. With one click, you can show the context of the post you replied to, your reply and the reply you got. This makes it incredibly easy to be notified when someone posted something in response, thus keeping the discussion alive, no matter how deeply it may have been buried by comments arriving after yours.</p>
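<p>The mechanics described above can be sketched in a few lines. This is a toy model of my own making (the class and names are invented, not reddit's actual code), but it captures the clever bit: replies are delivered to an inbox keyed by the parent comment's author.</p>

```python
# Toy model of reddit-style threading with a reply inbox.
inboxes = {}  # username -> list of replies waiting to be read

class Comment:
    def __init__(self, author, text, parent=None):
        self.author, self.text = author, text
        self.parent, self.replies = parent, []
        if parent is not None:
            parent.replies.append(self)
            # The key idea: a direct reply lands in the original
            # commenter's inbox, no matter how deeply the thread
            # gets buried by later comments.
            inboxes.setdefault(parent.author, []).append(self)

root = Comment("alice", "reddit's threading works really well")
reply = Comment("bob", "agreed - the inbox is the clever part", parent=root)

# alice sees bob's reply without having to re-find the thread
print([c.author for c in inboxes["alice"]])  # ['bob']
```

<p>The thread tree (via <code>replies</code>) drives the expanded display; the inbox drives notification. Keeping the two separate is what makes deeply buried discussions stay alive.</p>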
<p>So even if reddit looks awful (one gets used to the plain look though), it has one of the best, if not the best, online discussion systems under its hood, and so many other sites should learn from that example. It's so easy that it even got me to post a comment every now and then - and I even got replies despite not obviously trolling (which usually helps you get instant replies, though I don't recommend the practice).</p>
The IE rendering dilemma - solved?2008-01-23T00:00:00+00:00http://pilif.github.com/2008/01/the-ie-rendering-dilemma-solved<p>A couple of months ago, I wrote about the <a href="/archives/379-The-IE-rendering-dilemma.html">IE rendering dilemma</a>: How to fix IE8's rendering engine without breaking all the corporate intranets out there? How to create both a standards-oriented browser and still ensure that the main customers of Microsoft - the enterprises - can still run a current browser without having to redo all their (mostly internal) web applications?</p>
<p>Only three days after my posting IEBlog talked about <a href="http://blogs.msdn.com/ie/archive/2007/12/19/internet-explorer-8-and-acid2-a-milestone.aspx">IE8 passing the ACID2 test</a>. And when you watch the video linked there, you'll notice that they indeed kept the IE7 engine untouched and added an additional switch to force IE8 into using the new rendering engine.</p>
<p>And yesterday, A List Apart showed us <a href="http://www.alistapart.com/articles/beyonddoctype/">how it's going to work</a>.</p>
<p>While I completely understand Microsoft's solution and the reasoning behind it, I can't see any other browser doing what Microsoft recommended as a new standard. The idea of keeping multiple rendering engines in the browser and defaulting to outdated ones is, in my opinion, a bad one. Download sizes of browsers increase considerably, security problems in browsers must be patched multiple times, and, as the Webkit blog put it, "[..] <a href="http://webkit.org/blog/155/versioning-compatibility-and-standards/">hurts the hackability of the code</a> [..]".</p>
<p>As long as the other browser vendors have neither IE's market share nor the big company intranets depending on their browsers, I don't see any reason at all for them to adopt IE's model.</p>
<p>Also, when I'm doing (X)HTML/CSS work, usually it works and displays correctly in every browser out there - with the exception of IE's current engine. As long as browsers don't have awful bugs all over the place and you are not forced to hack around them, deviating from the standard in the process, there is no way a page you create will only work in one specific version of a browser. Even more so: When it breaks on a future version, that's a bug in the browser that must be fixed there.</p>
<p>Assuming that Microsoft will, finally, get it right with IE8 and subsequent browser versions, we web developers should be fine with</p>
<pre class="code"><meta http-equiv="X-UA-Compatible" content="IE=edge" />
</pre>
<p>on every page we output to a browser. These compatibility hacks are for people that don't know what they are doing. We know. We follow standards. And if IE begins to do so as well, we are fine with using the latest version of the rendering engine there is. </p>
<p>If IE doesn't play well and we need to apply braindead hacks that break when a new version of IE comes out, then we'll all be glad that we have this method of forcing IE to use a particular engine, thus making sure that our hacks continue to work.</p>
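<p>Incidentally, the same opt-in can also be sent as an HTTP response header instead of repeating the meta tag in every template. A minimal sketch as a Python WSGI app follows; the app itself is my own illustration, only the <code>X-UA-Compatible</code> header name and the <code>IE=edge</code> value come from Microsoft's mechanism.</p>

```python
def app(environ, start_response):
    # Ask IE8+ for its most recent rendering engine on every response,
    # so no single template can forget the opt-in.
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("X-UA-Compatible", "IE=edge"),
    ]
    start_response("200 OK", headers)
    return [b"<!DOCTYPE html><html><body>standards mode, please</body></html>"]
```

<p>Served with <code>wsgiref.simple_server.make_server("", 8000, app)</code>, every page carries the header; the equivalent can of course be configured once in Apache or IIS.</p>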
Apple TV - Second try2008-01-16T00:00:00+00:00http://pilif.github.com/2008/01/apple-tv-second-try<p>When Apple announced their AppleTV a couple of months (or was it years?) ago, I was very skeptical of the general idea behind the device. Think of it: What was the big success behind the iPod? That it could run proprietary AAC files people buy from the music store?</p>
<p>No. That thing didn't even exist back then. The reason for the success was the totally easy (and FAST - remember: back in the days, every MP3 player used slow USB 1.1 vs. the 40MB/s Firewire of the iPod) handling and the fact that it was an MP3 player - playing the files everyone already had.</p>
<p>It was a device for playing the content that was available at the time.</p>
<p>The AppleTV in its first incarnation was a device capable of playing content that wasn't exactly available. Sure it could play the two video podcasts that existed back then (maybe more, but you get the point). And you could buy TV shows and movies in subpar quality on your PC (Windows or Mac) and then transfer them to the device. But the content that was available back then was in a different format: XVID dominated the scene. x264 was a newcomer and MP4 (and mov) wasn't exactly used.</p>
<p>So what you got was a device, but no content (and the compatible content you had was in subpar quality compared to the incompatible content that was available). And you needed a PC, so it wasn't exactly a device I could hook up to my parents' TV, for example.</p>
<p>All these things were fixed by Apple today:</p>
<ul> <li>There is a huge library of content available right here, right now (at least in the US): The new movie rental service. Granted. I think it's not quite there yet price vs. usability-wise (I think $5 is a totally acceptable price for a movie with unlimited replayability), but at least we have the content. </li> <li>It works without a PC. I can hook this thing up to my parents TV and they can immediately use it. </li> <li>The quality is OK. Actually, it's more than OK. There is HD content available (though maybe only 720p one, but frankly, on my expensive 1080p projector, I don't see that much of a difference between 720p and 1080p) </li> <li>It can still access the scarce content that was available before. </li> </ul>
<p>The fact that this provides very easy to use video-on-demand to a huge amount of people is what makes me think that this little device is even more of a disruptive technology than the iPod or the iPhone. Think of it: Countless companies are trying to make people pay for content these days. It's the telcos, it's the cable companies and it's the device manufacturers. But what do we get? Crappy, constantly crashing devices which are way too complicated for a non-geek and way too limited in functionality for a geek.</p>
<p>Now we got something that's perfect for the non-geek. It has the content. It has the ease-of-use. Plug it in, watch your movie. Done. This is what a whole industry tried to do and failed so miserably.</p>
<p>I for my part will still prefer the flexibility given by my custom Windows Media Center solution. I will still prefer the openness provided by illegal copies of movies. I totally refuse to pay multiple times for something just because someone says that I have to. But that's me. </p>
<p>And even I may sooner or later prefer the comfort of select-now-watch-now to the current procedure (log into private tracker, download torrent, wait for download to finish, watch - torrents are not streamable, even if the bandwidth would easily suffice in my case - the packets arrive out of order), so even for me, the AppleTV could be interesting.</p>
<p>This was yet another perfect move by Apple. Ignore the analysts out there who expected more out of this latest keynote. Ignore the bad reception of the keynote by the market (I hear that Apple stock just dropped a little bit). Ignore all that and listen to yourself: This wonderful device will certainly revolutionize the way we consume video content.</p>
<p>I'm writing this as a constant sceptic - as a person always trying to see a flaw in a certain device. But I'm sure that this time around, they really got it. Nice work!</p>
My PSP just got a whole lot more useful2008-01-04T00:00:00+00:00http://pilif.github.com/2008/01/my-psp-just-got-a-whole-lot-more-useful<p><a class="serendipity_image_link" href="http://www.gnegg.ch/uploads/update.jpg"><!-- s9ymdb:31 --><img width="170" height="128" style="float: left; border: 0px; padding-left: 5px; padding-right: 5px;" src="http://www.gnegg.ch/uploads/update.serendipityThumb.jpg" alt="" /></a><p>Or useful at all - considering the games that are available to that console. To be honest: Of all the consoles I have owned in my life, the PSP must be the most underused one. I basically own two games for it: <a href="http://www.gamefaqs.com/portable/psp/data/928759.html">Breath of Fire</a> and <a href="http://www.gamefaqs.com/portable/psp/data/920817.html">Tales of Eternia</a> - not only by this choice of titles, but also by reading this blog, you may notice a certain affinity to Japanese Style RPG’s.</p> <p>These are the closest thing to a successor of the classical graphic adventures I started my computer career with, minus hard to solve puzzles plus a much more interesting story (generally). So for my taste, these things are a perfect match.</p> <p>But back to the PSP. It’s an old model - one of the first here in Switzerland. One of the first in the world, to be honest: I bought the thing WAAAY back with hopes of seeing many interesting RPG’s - or even just good ports of old classics. Sadly neither really happened.</p> <p>Then, a couple of days ago, I found a usable copy of the game <a href="http://www.gamefaqs.com/portable/psp/data/924594.html">Lumines</a>. Usable in the sense that when the guy in the store told me that there was a sequel out and I told him that I did not intend to actually play the game, he just winked and wished me good luck with my endeavor. </p> <p>Or in layman’s terms: That particular version of Lumines had a security flaw allowing one to do a lot of interesting stuff with the PSP. 
Like installing an older, flawed version of the firmware which in turn allows one to completely bypass whatever security the PSP would provide.</p> <p>And now I’m running the latest M33 firmware: 3.71-M4. </p> <p>What does that mean? It means that the formerly quite useless device has just become the device of my dreams: It runs SNES games. It runs Playstation 1 games. It’s portable. I can use it in bed without a large assembly of cables, gamepads and laptops. It’s instant-on. It’s optimized for console games. It has a really nice digital directional pad (gone are the days of struggling with diagonally-emphasized joypads - try playing Super Metroid with one of these).</p> <p>It plays games like Xenogears, Chrono Cross, Chrono Trigger - it finally allows me to enjoy the RPG’s of old in bed before falling asleep. Or in the bathtub. Or whatever.</p> <p>It’s a real shame that once more I had to resort to legally questionable means to get a particular device to the state I imagined it to be in. Why can’t I buy <em>any</em> PS1 game directly from Sony? Why are the games I want to play not even available in Switzerland? Why is it illegal to play the games I want to play? Why are most of the gadgets sold today crippled in one way or another? Why is it illegal to un-cripple the gadgets we bought?</p> <p>Questions I, frankly, don’t want to answer. For years now I wanted a possibility to play Xenogears in bed and while taking a bath. Now I can, so I’m happy. And playing Xenogears. And loving it like when I was playing through that jewel of game history for the first time.</p> <p>If I find time, expect some more in-depth articles about the greatness of Xenogears (just kidding - just read the early articles in this blog) or how to finally get your PSP where you want it to be - there are lots of small things to keep in mind to make it work completely satisfactorily. </p></p>
SPAM insanity2007-12-31T00:00:00+00:00http://pilif.github.com/2007/12/spam-insanity<p><a class="serendipity_image_link" href="http://www.gnegg.ch/uploads/spam.png"><!-- s9ymdb:30 --><img width="170" height="87" style="float: left; border: 0px; padding-left: 5px; padding-right: 5px;" src="http://www.gnegg.ch/uploads/spam.serendipityThumb.png" alt="" /></a><p>I don’t see much point in complaining about SPAM, but it’s slowly but surely reaching complete insanity…</p></p>
<p>What you see here is the recent history view of my DSPAM - our second line of defense against SPAM.</p>
<p>Red means SPAM. (the latest of the messages was a quite clever phishing attempt which I had to manually reclassify)</p>
<p>To give even more perspective to this: The last genuine Email I received was this morning at 7:54 (it's now 10 hours later) and even that was just an automatically generated mail from Skype.</p>
<p>To put it into even more perspective: My DSPAM reports that since December 22nd, I got 897 SPAM messages and - brace yourself - 170 non-spam messages, of which 100 were subversion commit emails and 60 other emails sent from automated cron-jobs.</p>
<p>What I'm asking myself now is: Do these spammers still get <em>anything</em> out of their work? The signal-to-noise ratio has gone down the drain in a manner which can only mean that no person on earth would actually still read through all this spam and even be stupid enough to actually fall for it.</p>
<p>How bad does it have to get before it gets better?</p>
<p>Oh and don't think that DSPAM is all I'm doing... No... these 897 mails were the messages that passed through both the <a href="http://www.heise.de/ix/nixspam/dnsbl_en/">ix DNSBL</a> and SpamAssassin.</p>
<p>Oh and: Kudos to the DSPAM team. A recognition rate of 99.957% is really, really good.</p>
The IE rendering dilemma2007-12-16T00:00:00+00:00http://pilif.github.com/2007/12/the-ie-rendering-dilemma<p>There's a new release of Internet Explorer, aptly named IE8, pending and a whole lot of web developers are in fear of new bugs and no fixes to existing ones. Like the problems we had with IE7. </p>
<p>A couple of really <a href="http://www.positioniseverything.net/index.php">nasty bugs</a> were fixed, but there wasn't any significant progress in matters of extended support for web standards or even a really significant amount of bugfixes.</p>
<p>And now, so fear the web developers, history is going to repeat itself. Why, are people asking, aren't they just throwing away the currently existing code-base, replacing it with something <a href="http://www.mozilla.org">more</a> <a href="http://webkit.org">reasonable</a>? Or if licensing or political issues prevent using something not developed in-house, why not rewrite IE's rendering engine from scratch?</p>
<p>Backwards compatibility. While the web itself has more or less stopped using IE-only-isms and began embracing the way of the web standards (and thus began cursing on IE's bugs), corporate intranets, the websites accessed by Microsoft's main customer base, certainly have not.</p>
<p>ActiveX, &lt;font&gt;-tags, VBScript - the list is endless, and companies don't have the time or resources to remedy that. Remember: Rewriting for no real purpose other than "being modern" is a real waste of time and certainly not worth the effort. Sure, new applications can be developed in a standards compliant way. But think about the legacy! Why throw all that away when it works so well in the currently installed base of IE6?</p>
<p>This is why Microsoft can't just throw away what they have. </p>
<p>The only option I see, aside from trying to patch up what's badly broken, is to integrate another rendering engine into IE. One that's standards compliant and one that can be selected by some means - maybe an HTML comment (the DOCTYPE specification is <a href="http://www.quirksmode.org/css/quirksmode.html">already taken</a>).</p>
<p>But then, think of the amount of work this creates in the backend. Now you have to maintain two completely different engines with completely different bugs in different places. Think of security problems. And think of what happens if one of these buggers is detected in a third-party engine a hypothetical IE may be using. Is MS willing to take responsibility for third-party bugs? Is it reasonable to ask them to do this?</p>
<p>To me it looks like we are now paying the price for mistakes MS did a long time ago and for quick technological innovation happening at the wrong time on the wrong platform (imagine the intranet revolution happening <em>now</em>). And personally, I don't see an easy way out. </p>
<p>I'm very interested in seeing how Microsoft solves this problem. Ignore the standards crowd? Ignore the corporate customers? Add the immense burden of another rendering engine? Fix the current engine (impossible, IMHO)? We'll know once IE8 is out, I guess.</p>
VMWare Server 2.02007-11-28T00:00:00+00:00http://pilif.github.com/2007/11/vmware-server-20<p>Now that the <a href="http://www.gnegg.ch/archives/377-shion-died.html">time has come</a> to upgrade <a href="/archives/291-Computers-under-my-command-Issue-1-shion.html">shion</a>'s hardware, and now that I'm running a x86 based platform (well, it's the 64 bit server install of <a href="http://www.ubuntu.com">Ubuntu</a> Gutsy), I guessed it was time to have a look at my current bittorrent solution.</p>
<p>Of all the torrent clients out there, so far, I had the most painless experience with uTorrent: Acceptable download speeds, a very nice web interface and a nice looking user interface. The only drawback is that it requires Windows to run, and I had no constantly-running Windows PC at home.</p>
<p>In fact, I didn't even have a Windows PC <em>at all</em>. VMWare Fusion came to the rescue as it allowed me to install Windows in a virtual machine and run that on my main Mac at home. I chose Fusion as opposed to Parallels because I always knew that I was going to update shion sooner or later, so I wanted the portability of the VMWare virtual machines (they run everywhere VMWare runs - no conversion, no nothing).</p>
<p>And now that I did replace shion, I've installed the latest beta version of VMWare Server 2.0 and moved the virtual machine over to the newly born shion 2.0 which means that I now have a constantly running "Windows-PC" at home.</p>
<p>The move was painless as expected, but the whole process of installing VMWare server or the web interface was not as painless. VMWare Server feels exactly like every other proprietary Unix application I ever had to deal with. Problems with shared libraries (PAM, Gentoo, 32bit emulation and vmware server 1.0 is pure hell), problems with init-scripts not working, problems with incomprehensible error messages, you name it.</p>
<p>And once I actually got the thing to run, the first thing I had to do was to configure a whole bunch of iptables-rules because it seems impossible to bind all the 7 ports the web interface opens to localhost only (shion also is my access router, so I certainly don't want the vmware-stuff exposed on eth1).</p>
<p>And actually using the web interface means forwarding all the 7 ports. In VMWare Server 1, it sufficed to forward the one port the console application used.</p>
<p>All this to finally end up without a working console access - the browser plugin they use for this seems not to work with Mac OS X and adding all the 7 ports to putty in my client windows VM, frankly, was more complicated than what I could get out of it.</p>
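<p>The underlying issue here is which interface a service binds its listening sockets to. The sketch below is my own illustration (not VMware code) of the difference between a socket exposed on every interface and one reachable from localhost only - the latter being exactly what the web interface would not let me configure, forcing the iptables rules instead.</p>

```python
import socket

def open_port(bind_addr, port=0):
    """Bind a listening TCP socket. '0.0.0.0' exposes it on every
    interface (including eth1); '127.0.0.1' keeps it local-only.
    Port 0 lets the OS pick a free ephemeral port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((bind_addr, port))
    s.listen(1)
    return s

local_only = open_port("127.0.0.1")
everywhere = open_port("0.0.0.0")
print(local_only.getsockname()[0])  # 127.0.0.1
print(everywhere.getsockname()[0])  # 0.0.0.0
local_only.close()
everywhere.close()
```

<p>A daemon that binds like <code>local_only</code> needs no firewall rules at all on a router box; one that insists on <code>0.0.0.0</code> needs a DROP rule per port on the external interface.</p>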
<p>Before this goes final, if it is expected to be as useful as version 1 was, they need to give us back a native client and a smaller number of ports to forward.</p>
shion died2007-11-25T00:00:00+00:00http://pilif.github.com/2007/11/shion-died<p>After so many years of continued usage, <a href="/archives/291-Computers-under-my-command-Issue-1-shion.html">shion</a> (not the character from Xenosaga, my Mac Mini) died.</p>
<p>The few times it's actually capable of detecting its hard-drive at boot-time, it loses contact to it shortly after loading the kernel. And the hard drive makes an awful kind of noise which is a very good pointer at what's wrong.</p>
<p>Now, I could probably just replace the hard drive, but that old G4 processor, the 512 Megs of RAM and the two single USB-ports forcing me to cascade hub after hub all are good reasons to upgrade the hardware itself.</p>
<p>And thus, Shion 2.0 was born.</p>
<p>I grabbed an unused Mac Mini from the office and tried installing Ubuntu Gutsy on it, which worked well, but Leopard's "Startup Disk" preference pane didn't list the partition I installed Ubuntu on as a bootable partition. Booting Linux via pressing alt during pre-boot worked, but, hey, it's a server and I don't have a keyboard ready where shion is going to stand.</p>
<p>So I did it the brute-force way and just installed Ubuntu using the whole drive. It takes a hell of a lot of time for the EFI firmware to start missing the original GUID partition scheme and the original EFI partition, but when it does, it starts GRUB in the MBR partition, so I'm fine.</p>
<p>This does mean that I will be unable to install later firmware upgrades (due to the lack of a working OS X), but at least it means that I can reboot shion when needed without having to grab a keyboard.</p>
<p>This, provided that Domi will be able to solder me a <a href="http://www.mythic-beasts.com/support/macminicolo_howto.html">display adaptor</a> making the EFI BIOS emulation think that a display is connected.</p>
<p>All in all, I'm not totally happy with the next generation of shion. Not booting without a display attached, long boot times, non-working bios updates and, especially, no eSATA, but it's free, so I'll take it. I guess the old shion just chose a terribly inconvenient time to die.</p>
Closed Source on Linux2007-10-22T00:00:00+00:00http://pilif.github.com/2007/10/closed-source-on-linux<p>One of the developers behind the Linux port of the new Flex Builder has a blog post about how building <a href="http://www.swaroopch.com/archives/2007/10/22/closed-for-business/">closed source software for Linux is hard</a>.</p>
<p>Mostly, all the problems boil down to the fact that Linux distributors keep patching the upstream source to fit their needs which clearly is a problem rooted in the fact that open source software is, well, open sourced.</p>
<p>Don't get me wrong. I love the concepts behind free software and in fact, every piece of software I've written so far has been open source (aside from most of the code I'm doing for my employer, of course). I just don't see why every distribution feels the urge to patch around upstream code, especially as this issue applies to both open and closed source software projects.</p>
<p>And worse yet: Every distribution adds their own bits and pieces - sometimes doing the same stuff in different ways and thus making it impossible or at least very hard for a third party to create add-ons for a certain package.</p>
<p>What good is a plugin system if the interface works slightly different on each and every distribution?</p>
<p>And think of the time you waste learning configuration files over and over again. To make an example: Some time ago, SuSE delivered an apache server that was using a completely customized configuration file layout, thereby breaking every tutorial and documentation written out there because none of the directives were in the files they are supposed to be in.</p>
<p>Other packages are deliberately broken up. Bind for example often comes in two flavors: The server and the client, even though officially, you just get one package. Additionally, every library package these days is broken up in the real library and the development headers. Sometimes the source of these packages may even get patched to support such breaking up.</p>
<p>This creates an incredible mess for all involved parties:</p>
<ul>
<li>The upstream developer gets blamed for bugs she didn't cause because they were introduced by the packager.</li>
<li>Third-party developers can't rely on their plugins or other pluggable components working across distributions, even if they work with the upstream release.</li>
<li>Distributions have to do the same work over and over again as new upstream versions are released, thus wasting time better used for other improvements.</li>
<li>End users suffer from the general inability to reliably install precompiled third-party binaries (mysql recommends the use of their binaries, so this even affects open sourced software) and from the inability to follow online tutorials not written for the particular distribution that's in use.</li>
</ul>
<p>This mess must come to an end.</p>
<p>Unfortunately, I don't know how.</p>
<p>You see: Not all patches created by distributions get merged upstream. Sometimes, political issues <a href="http://kohei.us/2007/10/02/history-of-calc-solver/">prevent a cool feature from being merged</a>, sometimes clear bugs are not recognized as such upstream and sometimes <a href="http://kohei.us/2007/10/02/history-of-calc-solver/">upstream is dead</a> - you get the idea.</p>
<p>Solutions like the FHS and LSB tried to standardize many aspects of how linux distributions should work in the hope of solving this problem. Bureaucracy and <a href="http://blog.koehntopp.de/archives/860-Webanwendungen-und-der-FHS.html">idiotic ideas</a> (german link, I'm sorry) have been causing quite a bunch of problems lately, making it hard or even impossible to implement the standards. And often the standards don't specify the latest and greatest parts of current technology.</p>
<p>Personally, I'm hoping that we'll either end up with one big distribution defining the "state of the art", with the others being 100% compatible or with distributions switching to pure upstream releases with only their own tools custom-made.</p>
<p>What do you think? What has to change in your opinion?</p>
C, C#, Java2007-09-27T00:00:00+00:00http://pilif.github.com/2007/09/c-c-java<p>Today, I was working on porting an EAN128 parser from Java to C#. The parser itself was initially written in C, and porting it from there to Java was already quite easy - sure, it still looks like C, but it works nicely, and thankfully, understanding the algorithm once and writing it was enough for me, so I can live with not-so-well looking Java code.</p>
<p>What made me write this entry though is the fact that porting the Java version over to C# involved three steps:</p>
<ol> <li>Copy</li> <li>Paste</li> <li>Change byte barCode[] to byte[] barCode</li></ol>
<p>It's incredible how similar those two languages are - at least if what you are working with more or less uses the feature set C provided us with. </p>
Recursive pottery2007-09-25T00:00:00+00:00http://pilif.github.com/2007/09/recursive-pottery<p>Yesterday evening, my girlfriend and I had an interesting discussion about pottery techniques. She's studying archeology, so she has a real interest in pottery and the techniques used. I, in contrast, have my interests in different subjects, but this method of pottery we came up with was so funny that I thought I just had to post it.</p>
<p>Let's say you want to create a vase.</p>
<p>Our method involves the following steps:</p>
<ol> <li>Gather a vase that looks exactly like the one you want to build.</li> <li>Fill the vase with something that gets hard quickly, but crumbles easily.</li> <li>Wait for that material to dry out, then destroy the original vase.</li> <li>Put clay around the hardened-up filler material.</li> <li>Wait for the clay to dry and fire the vase.</li> <li>Remove the filler material.</li></ol> <p>Obviously this method will never allow you to produce more than one vase, as in the process of creating one, you are destroying the other.</p> <p>We continued our discussion of how such a method of pottery could have interesting side effects. One is that the only way for a potter to generate revenue from his work is by <em>renting out</em> his current vase. And should the vase be returned defective, the whole business of the potter is over - until he receives another initial vase to continue working.</p> <p>Of course, getting hold of that would be quite an interesting job if every potter only used this method.</p> <p>And the question remains: Where do you take the initial vase from?</p> <p>Stupid. I know. But fun in its own way. Sometimes, I take great pleasure in inventing something totally stupid and then laughing at it. And believe me: We really had a good laugh about this.</p>
The new iPods2007-09-20T00:00:00+00:00http://pilif.github.com/2007/09/the-new-ipods<p><a class="serendipity_image_link" href="http://www.apple.com/itunes/"><!-- s9ymdb:29 --><img style="border-top-width: 0px; padding-right: 5px; padding-left: 5px; border-left-width: 0px; float: right; border-bottom-width: 0px; border-right-width: 0px" height="170" alt="" src="http://www.gnegg.ch/uploads/touch.serendipityThumb.jpg" width="118" /></a> <p>So we have <a href="http://www.apple.com/itunes/">new iPods</a>.</p> <p>Richard sent me an email asking which model he should buy, which made me start thinking about whether to upgrade myself. Especially the new touch screen model seemed compelling to me - at first.</p> <p>Still: I was unable to answer that email with a real recommendation (though honestly, I don’t think it was as much about getting a recommendation as about letting me know that the models were released and hearing my comments about them) and still I don’t really know what to think.</p> <p>First off: This is a matter of taste, but I hate the new nano design: The screen still is too small to be useful for real video consumption, but it made the device very wide - too wide, I think, to comfortably keep it in my trouser pockets while biking (I may be wrong though).</p> <p>Also, I don’t like the rounded corners very much, and the new interface… really… why shrink the menu to half a screen and clutter the rest with some meaningless cover art which only the smallest minority of my files are tagged with?</p> <p>Coverflow feels tacked onto the great old interface and loses a lot of its coolness without the touch screen.</p> <p>They don’t provide any advantage in flash size compared to the older nano models, and I think the scroll wheel is way too small compared to the large middle button.</p> <p>All in all, I would never ever upgrade my second generation nano to one of the third generation as they provide no advantage, look (much) worse (IMHO) and seem to have a 
usability problem (too small a scroll wheel).</p> <p>The iPod classic isn’t interesting for me: Old-style hard drives are heavy and fragile and ever since I bought that 4GB nano a long while ago, I noticed that there is no real reason behind having all the music on the device.</p> <p>I’m using my nano way more often than I ever used my old iPod: The nano is lighter and I began listening to podcasts. Still: While I lost HD-based iPods around every year and a half due to faulty hard drives or hard drive connectors, my nano still works as well as it did on the first day.</p> <p>Additionally, the iPod classic shares the strange half-full-screen menu and it’s only available in black or white. Nope. Not interesting. At least for me.</p> <p>The iPod touch is appealing because it has a really interesting user interface. But even there I have my doubts: For one, it’s basically an iPhone without the phone. Will I buy an iPhone when (if) it becomes available in Switzerland? If yes, there’s no need to buy the iPod Touch. If no, there still remains that awful usability problem of touch-screen only devices:</p> <p>You can’t use them without taking them out of your pocket.</p> <p>On my nano, I can play and pause the music (or more often podcast) and I can adjust the volume and I can always see what’s on the screen.</p> <p>On the touch interface, I have to take the screen out of standby mode, I can’t do anything without looking at the device and I think it may be a bit bulky all in all.</p> <p>The touch is the perfect bathtub surfing device. It’s the perfect device to surf the web right before or after going to sleep. But it’s <b>not portable</b>.</p> <p>Sure. I can take it with me, but it fails in all the aspects of portability. 
It’s bulky, it can’t be used without taking it out of your pocket and stopping whatever you are doing, it requires two hands to use (so no changing tracks on the bike any more) and it’s totally useless until you manually turn the display back on and unlock it (which also requires two hands to do).</p> <p>So: Which device should Richard buy? I still don’t know. What I know is that I will not be replacing my second generation Nano as long as it keeps working.</p> <p>The Nano looks awesome, works like a charm and is totally portable. Sure. It can’t play video, but next to none of my videos actually fits the requirement of the video functionality anyways and I don’t see myself recoding already compressed content. That just takes an awful lot of time, greatly degrades the quality and generally is not at all worth the effort.</p></p>
PHP 5.2.42007-08-31T00:00:00+00:00http://pilif.github.com/2007/08/php-524<p>Today, the bugfix-release 5.2.4 of PHP has been released.</p>
<p>This is an interesting release, because it includes my fix for bug <a href="http://bugs.php.net/?id=42117">42117</a> which I <a href="/archives/365-PHP,-stream-filters,-bzip2.compress.html">discovered and fixed</a> a couple of weeks ago.</p>
<p>This means that with PHP 5.2.4 I will finally be able to bzip2-encode data as it is generated on the server and stream it out to the client, greatly speeding up our Windows client.</p>
<p>Now I only need to wait for the updated gentoo package to update our servers.</p>
The mother of all rubber duckies2007-08-23T00:00:00+00:00http://pilif.github.com/2007/08/the-mother-of-all-rubber-duckies<p><a class="serendipity_image_link" href="http://www.gnegg.ch/uploads/rubberduck.jpg"><!-- s9ymdb:27 --><img width="170" height="128" style="float: right; border: 0px; padding-left: 5px; padding-right: 5px;" src="http://www.gnegg.ch/uploads/rubberduck.serendipityThumb.jpg" alt="" /></a></p>
<p>Last sunday I came across the beauty you are seeing on the right and of course I <b>had</b> to have it.</p>
<p>I took a picture of her (Ebi suggested that it must be female and recommended I call her "Emma") standing next to a bottle of soap to give you some sense of scale.</p>
<p>Crazy. Cute. Perfect.</p>
More iPod fun2007-08-19T00:00:00+00:00http://pilif.github.com/2007/08/more-ipod-fun<p>Last time I explained <a href="/archives/369-Cheating-with-OGG-podcasts.html">how to get .OGG-feeds to your iPod</a>.</p>
<p>Today I'll show you one possible direction one could go to greatly increase the usability of non-official (read: not bought at audible.com) audiobooks you may have lying around in .MP3 format.</p>
<p>You see, your iPod treats every MP3 file in your library as music, regardless of length and content. This can be annoying as the iPod (rightly so) forgets the position in the file when you stop playback. So if you return to the file, you'll have to start from the beginning and seek through the file.</p>
<p>This is a real pain in the case of longer audiobooks and/or radio plays, of which I have a <strong>ton</strong>.</p>
<p>One way is to convert your audiobooks to AAC and rename the file to .m4b which will convince iTunes to internally tag the files as audiobooks and then enable the additional features (storing the position and providing UI to change play speed).</p>
<p>Of course this would have meant converting a considerable part of my MP3 library to the AAC-format which is not yet as widely deployed (not to speak of the quality-loss I'd have to endure when converting a lossy format into another lossy format).</p>
<p>It dawned on me that there's another way to make the iPod store the position - even with MP3-files: Podcasts.</p>
<p>So the idea was to create a script that reads my MP3-Library and outputs RSS to make iTunes think it's working with a Podcast.</p>
<p>And thus, <a href="http://www.lipfi.ch/audiobook2cast.phps">audiobook2cast.php</a> was born.</p>
<p>The script is very much tailored to my directory structure and probably won't work at your end, but I hope it'll provide you with something to work with.</p>
<p>In the script, I can only point out two interesting points:</p>
<ul>
<li>When checking a podcast, iTunes ignores the type-attribute of the enclosure when determining whether a file can be played or not. So I had to add the fake .mp3-extension.</li>
<li>I'm outputting a totally fake pubDate-Element in the &lt;item&gt;-Tag to force iTunes to sort the audiobooks in ascending order.</li>
</ul>
<p>As I said: This is probably not useful to you out-of-the-box, but it's certainly an interesting solution to an interesting problem.</p>
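<p>As a rough sketch of the feed side of this trick - file names, base URL and channel title are invented here, and the real audiobook2cast.php is more elaborate - the fake-pubDate part boils down to:</p>

```php
<?php
// Minimal fake-podcast feed. File names, base URL and titles are
// invented for illustration; the real script scans a directory tree.
function build_feed(array $files) {
    $xml = "<?xml version=\"1.0\"?>\n<rss version=\"2.0\"><channel>\n";
    $xml .= "<title>My Audiobook</title>\n";
    foreach ($files as $i => $file) {
        $url = 'http://example.com/audiobooks/' . rawurlencode($file);
        // A fake, strictly increasing pubDate per item forces iTunes
        // to sort the chapters in ascending order.
        $date = date('r', mktime(12, 0, $i, 1, 1, 2007));
        $xml .= '<item><title>' . htmlspecialchars($file) . '</title>';
        $xml .= "<enclosure url=\"$url\" length=\"0\" type=\"audio/mpeg\"/>";
        $xml .= "<pubDate>$date</pubDate></item>\n";
    }
    return $xml . "</channel></rss>\n";
}

header('Content-Type: application/rss+xml');
echo build_feed(array('chapter-01.mp3', 'chapter-02.mp3'));
```

<p>The enclosure URLs here already end in .mp3; for files that don't, the fake extension from the first point above would have to be appended to the URL.</p>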
Cheating with OGG-podcasts2007-08-17T00:00:00+00:00http://pilif.github.com/2007/08/cheating-with-ogg-podcasts<center><!-- s9ymdb:24 --><img width="470" height="60" style="border: 0px; padding-left: 5px; padding-right: 5px;" src="http://www.gnegg.ch/uploads/ogg.png" alt="" /></center>
<p>For about a year now, I've been listening to podcasts all the time. Until now, I was using my iPod nano with iTunes for my podcasting needs and I was pretty happy with it.</p>
<p>Lately though, I came across some podcasts that provide either only OGG versions or at least enhanced OGG versions (like stereo or additional content). Not wanting to start writing code to listen to Podcasts, I thought that maybe I should try out another player...</p>
<p>I settled on an iRiver Clix 2 which looks great, has a nice OLED display and plays OGG files.</p>
<p>Unfortunately though, it doesn't play AAC-files which is what one of the podcasts I listen to is distributed in.</p>
<p>So I sat down to code and wrote <a href="http://www.worldofwarcast.com/forums/showthread.php?t=871">some conversion scripts</a> that download the AAC-files, convert them to ogg and alter the RSS-feed to point to the converted files.</p>
<p>This worked perfectly, so today I rsynced two podcasts to the iRiver and went to the office, only to notice two big problems with the thing:</p>
<ol>
<li>It doesn't keep track of what podcasts I've already listened to. As I'm subscribed to quite a few podcasts, it's very hard to keep track manually.</li>
<li>And the killer: It doesn't store the playback position. This is totally bad as podcasts usually are long (up to two hours) and while I like the iRiver's nice 'press-the-edge-of-the-device' usage concept, it's a real pain to seek in the file: Either it's <b>way too slow</b> or <b>totally inaccurate</b>, so while seeking on the iPod would be tolerable, it's completely impossible to do on the iRiver.</li>
</ol>
<p>Just when I thought that the advantages of being able to play OGGs still outweigh the two disadvantages, I began thinking that maybe, maybe I could do the AAC to OGG-Hack again, but in the other direction...</p>
<p>So now I'm "cheating" myself into better quality and bonus content without actually really using the free format.</p>
<p>And this is how it works (it's basically the same thing as the scripts I linked in the forum post above, but it has some advanced features):</p>
<ul>
<li>At half past midnight (though I may increase the interval), <a href="http://www.lipfi.ch/ogg_cast_download.phps">ogg_cast_download.php</a> runs. It goes over a list of RSS-feeds (though I may actually automate this list in a later revision - as soon as I'm getting more and more ogg-casts), checks them for new entries (which is easy: If the file isn't there, it must be new), downloads the enclosures (using wget for resume functionality, proper handling of redirects and meaningful output), acquires tagging information and finally converts the files to AAC format using faac.</li>
<li>Whenever iTunes checks for new podcasts, it doesn't actually download the original, but uses <a href="http://www.lipfi.ch/oggcasts.phps">oggcasts.php</a> running on <a href="/archives/291-Computers-under-my-command-Issue-1-shion.html">shion</a>, passing the original URL</li>
<li>oggcasts.php checks the (symlinked) output directory of the ogg downloader and alters the feeds to match the converted files.</li>
</ul>
<p>And if you think you can just install the <a href="http://www.xiph.org/quicktime/">official quicktime OGG component</a> to import the feeds: That unfortunately won't work. iTunes refuses to directly download ogg-feeds.</p>
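<p>Stripped of the feed parsing and the wget/faac calls, the "if the file isn't there, it must be new" check at the heart of the downloader could look roughly like this (function and path names are made up for this sketch):</p>

```php
<?php
// Decide which enclosures still need downloading by checking whether
// a file of the same name already exists locally. Names illustrative.
function new_enclosures(array $enclosure_urls, $download_dir) {
    $new = array();
    foreach ($enclosure_urls as $url) {
        $local = $download_dir . '/' . basename(parse_url($url, PHP_URL_PATH));
        if (!file_exists($local)) {
            $new[] = $url;   // not on disk yet, so it must be a new episode
        }
    }
    return $new;
}

// In the real script, every URL returned here would then be fetched
// with wget and converted with faac.
var_dump(new_enclosures(
    array('http://example.com/casts/episode-1.ogg'),
    sys_get_temp_dir()
));
```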
Updating or replacing datasets2007-08-15T00:00:00+00:00http://pilif.github.com/2007/08/updating-or-replacing-datasets<p>This is maybe the most obvious trick in the world but I see people not doing it all over the place, so I guess it's time to write about it.</p>
<p>Let's say you have a certain set of data you need to enter into your RDBMS. Let's further assume that you don't know whether the data is already there or not, so you don't know whether to use INSERT or UPDATE.</p>
<p>Some databases provide us with something like REPLACE or "INSERT OR REPLACE", but others do not. Now the question is, how to do this efficiently?</p>
<p>What I always see is something like this (pseudo-code):</p>
<ol>
<li>select count(*) from xxx where primary_key = xxx</li>
<li>if (count > 0) update; else insert;</li>
</ol>
<p>This means that for every dataset you will have to do two queries. This can be reduced to only one query in some cases by using this little trick:</p>
<ol>
<li>update xxx set yyy where primary_key = xxx</li>
<li>if (affected_rows(query) == 0) insert;</li>
</ol>
<p>This method just goes ahead with the update, assuming that data is already there (which usually is the right assumption anyways). Then it checks if an update has been made. If not, it goes ahead and inserts the data set.</p>
<p>This means that in cases where the data is already there in the database, you can reduce the work on the database to one single query.</p>
<p>Additionally, doing a SELECT and then an UPDATE essentially does the select twice as the update will cause the database to select the rows to update anyways. Depending on your optimizer and/or query cache, this can be optimized away of course, but there are no guarantees.</p>
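<p>In PHP this could be sketched like so, using PDO with an in-memory SQLite database (table and column names are invented; the trick itself is database-agnostic):</p>

```php
<?php
// Update-first upsert: try the UPDATE, fall back to INSERT only when
// no row was affected. Table/column names are made up for this sketch.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE prefs (name TEXT PRIMARY KEY, value TEXT)');

function upsert(PDO $db, $name, $value) {
    // Assume the row is already there (usually the right assumption) ...
    $upd = $db->prepare('UPDATE prefs SET value = ? WHERE name = ?');
    $upd->execute(array($value, $name));
    // ... and only INSERT when the UPDATE didn't touch anything.
    if ($upd->rowCount() == 0) {
        $ins = $db->prepare('INSERT INTO prefs (name, value) VALUES (?, ?)');
        $ins->execute(array($name, $value));
    }
}

upsert($db, 'theme', 'dark');   // row absent: UPDATE misses, INSERT runs
upsert($db, 'theme', 'light');  // row present: a single UPDATE suffices
```

<p>One caveat: some drivers (MySQL without CLIENT_FOUND_ROWS, for instance) report zero affected rows when the new values happen to equal the old ones, which would trigger a pointless - and, on a primary key conflict, failing - INSERT, so check what "affected rows" means on your platform.</p>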
Careful when clean-installing TabletPCs2007-08-14T00:00:00+00:00http://pilif.github.com/2007/08/careful-when-clean-installing-tabletpcs<p>At work, I got my hands on an <a href="http://www.motioncomputing.com/products/tablet_pc_ls.asp">LS-800</a> TabletPC by Motion Computing and after spending a lot of time with it and as I'm <a href="/archives/152-Fun-with-a-tablet-pc.html">very interested</a> in TabletPCs anyways, I finally got myself its bigger brother, the <a href="http://www.motioncomputing.com/products/tablet_pc_le17.asp">LE-1700</a>.</p>
<p>The device is a joy to work with: Relatively small and light, one big display and generally nice to handle.</p>
<p>The tablet came with Windows XP preinstalled and naturally, I wanted to have a look at the new Tablet-centric features in Vista, so I went ahead and upgraded.</p>
<p>Or better: Clean-installed.</p>
<p>The initial XP installation was german and I was installing an english copy of Vista which makes the clean installation mandatory.</p>
<p>The LE-1700 is one of the few devices without official Vista-support, but I guess that's because of the missing software for the integrated UMTS modem - for all other devices, drivers either come prebundled with Vista, are available on Windows update or you can use the XP drivers provided at the Motion computing support site.</p>
<p>After the clean installation, I noticed that the calibration of the pen was a bit off - depending on the position on the screen, the tablet registered the pen up to 5 mm left of or above its actual position. Unfortunately, using the calibration utility in the control panel didn't seem to help much.</p>
<p>After some googling, I found out what's going on:</p>
<p>The end-user accessible calibration tool only calibrates the screen for the tilt of the pen relative to the current position. The calibration of the pens position is done by the device manufacturer and there is no tool available for end-users to do that.</p>
<p>Which, by the way, is understandable considering how the miscalibration showed itself: Towards the middle of the screen it was perfect and near the sides it got worse and worse. This means that a tool would have to present quite a lot of points for you to hit to actually get an accurately working calibration.</p>
<p>Of course, this was a problem for me - especially when I tried out Journal and had to notice that the error was bad enough to take all the fun out of hand-writing (imagine writing on paper and the text appearing 0.5 cm left of where you put the pen).</p>
<p>I needed to get the calibration data and I needed to put it back after the clean installation.</p>
<p>It turns out that the linear calibration data is stored in the registry under HKLM\SYSTEM\CurrentControlSet\Control\TabletPC\LinearityData in the form of a (large) binary blob.</p>
<p>Unfortunately, Motion does not provide a tool or even reg-file to quickly re-add the data should you clean-install your device, so I had to do the unthinkable (I probably could have called support, but my method had the side effect of not making me wait forever for a fix):</p>
<p>I restored the device to the factory state (by using the preinstalled Acronis True Image residing on a hidden partition), exported the registry settings, reinstalled Vista (at which time the calibration error resurfaced), imported the .reg-File and rebooted.</p>
<p>This solved the problem - the calibration was as smooth as ever.</p>
<p>Now, I'm not sure if the calibration data is valid for the whole series or even defined per device, but here is <a href="http://www.lipfi.ch/tabletcalib.reg">my calibration data</a> in case you have the same problem as I had.</p>
<p>If the settings are per device or you have a non-LE-1700, I strongly advise you to <em>export that registry key before clean-installing</em>.</p>
<p>Obviously I would have loved to know this beforehand, but... oh well.</p>
Gmail - The review2007-08-08T00:00:00+00:00http://pilif.github.com/2007/08/gmail-the-review<p>It has been quite a while since <a href="/archives/364-Trying-out-Gmail.html">I began routing</a> my mail to Gmail with the intention of checking that often-praised mail service out thoroughly.</p>
<p>The idea was to find out if it's true what everyone keeps saying: That gmail has a great user interface, that it provides all the features one needs and that it's a plain pleasure to work with it.</p>
<p>Personally, I'm blown away.</p>
<p>Despite the obviously longer load time to be able to access the mailbox (Mac Mail launches quicker than it takes gmail to load here - even with a 10 MBit/s connection), the gmail interface is much faster to use - especially with the nice keyboard shortcuts - but I'm getting ahead of myself.</p>
<p>When I began to use the interface for some real email work, I immediately noticed the shift of paradigm: There are no folders and - the real new thing for me - you are encouraged to move your mail out of the inbox as you take notice of them and/or complete the task associated with the message.</p>
<p>When you archive a message, it moves out of the inbox and is - unless you tag it with a label for quick retrieval - only accessible via the (quick) full text search engine built into the application.</p>
<p>The searching part of this usage philosophy is known to me. When I was using desktop clients, I usually kept arriving email in my inbox until it contained somewhere around 1500 messages or so. Then I grabbed all the messages and put them to my "Old Mail" folder where I accessed them strictly via the search functionality built into the mail client (or the server in case of a good IMAP client).</p>
<p>What's new for me is the notion of moving mail out of your inbox as you stop being interested in the message - either because you plain read it or because the associated task is completed.</p>
<p>This gives you a quick overview of the tasks still pending and keeps your inbox nice and clean.</p>
<p>If you want quick access to certain messages, you can tag them with any label you want (multiple labels per message are possible of course) in which case you can access the messages with one click, saving you the searching.</p>
<p>Also, it's possible to define filters that automatically apply labels to messages and - if you want - move them out of the inbox automatically, a perfect setup for the SVN commit messages I'm getting, allowing me to quickly access them at the end of the day and look over the commits.</p>
<p>But the real killer feature of gmail is the keyboard interface.</p>
<p>Gmail is nearly completely accessible without requiring you to move your hands off the keyboard. Additionally, you don't even need to press modifier keys as the interface is very much aware of state and mode, so it's completely usable with some very <a href="http://mail.google.com/support/bin/answer.py?hl=en&answer=6594">intuitive shortcuts</a> which all work by pressing just any letter button.</p>
<p>So usually, my workflow is like this: Open gmail, press o to open the new message, read it, press y to archive it, close the browser (or press j to move to the next message and press o again to open it).</p>
<p>This is as fast as using, say, <a href="http://www.mutt.org/">mutt</a> on the console, but with the benefit of <em>staying usable</em> even when you don't know which key to press (in that case, you just take the mouse).</p>
<p>Gmail is perfectly integrated into google calendar, and it's - contrary to mac mail - even able to detect outlook meeting invitations (and send back correct responses).</p>
<p>Additionally, there's a MIDP applet available for your mobile phone that's incredibly fast and does a perfect job of giving you access to all your email messages when you are on the road. As it's a Java application, it runs on pretty much every conceivable mobile phone and because it's a local application, it's fast as hell and can continue to provide the nice, keyboard shortcut driven interface which we are used to from the AJAXy web application.</p>
<p>Overall, the experiment of switching to gmail proved to be a real success and I will not switch back anytime soon (all my mail is still archived in our Exchange IMAP box). The only downside I've seen so far is that if you use different email-aliases with your gmail-account, gmail will set the Sender:-Header to your gmail-address (which is a perfectly valid - and even mandated - thing to do), and the stupid Outlook on the receiving end will display the email as being sent from your gmail address "on behalf of" your real address, exposing your gmail-address to the recipient. Meh. So for sending non-private email, I'm still forced to use Mac Mail - unfortunately.</p>
PHP, stream filters, bzip2.compress2007-07-27T00:00:00+00:00http://pilif.github.com/2007/07/php-stream-filters-bzip2compress<p>Maybe you remember that, more than a year ago, I had an interesting <a href="/archives/268-PHP-Stream-Filters.html">problem with stream filters</a>.</p>
<p>The general idea is that I want to output bz2-compressed data to the client as the output is being assembled - or, more to the point: The <a href="http://www.popscan.ch">PopScan</a> Windows-Client supports the transmission of bzip2 encoded data which gets really interesting as the amount of data to be transferred increases.</p>
<p>Even more so: The transmitted data is in XML format which is very easily compressed - especially with bzip2.</p>
<p>Once you begin to transmit multiple megabytes of uncompressed XML-data, you begin to see the sense in jumping through a hoop or two to decrease the time needed to transmit the data.</p>
<p>On the receiving end, I have an <a href="/archives/149-Refactoring-Its-worth-it.html">elaborate</a> <a href="/archives/314-XmlReader-I-love-thee.html">construct</a> capable of downloading, decompressing, parsing and storing data as it arrives over the network.</p>
<p>On the sending end though, I have been less lucky: Because of that problem I had, I was unable to stream out bzip2 compressed data as it was generated - the end of the file was sometimes missing. This is why I'm using ob_start() to gather all the output and then compress it with bzcompress() to send it out.</p>
<p>Of course this means that all the data must be assembled before it can be compressed and then sent to the client.</p>
<p>As we have more and more data to transmit, the client must wait longer and longer before the data begins to reach it.</p>
<p>And then comes the moment when the client times out.</p>
<p>So I finally really had to fix the problem. I could not believe that I was unable to compress and stream out data on the fly.</p>
<p>It turns out that I finally found the smallest possible amount of code to illustrate the problem in a non-hacky way:</p>
<p>So: This fails under PHP up until 5.2.3:</p>
<pre class="code">
<?php
$str = "BEGIN (%d)\n
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do
eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad
minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip
ex ea commodo consequat. Duis aute irure dolor in reprehenderit in
voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur
sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt
mollit anim id est laborum.
\nEND (%d)\n";
$h = fopen($_SERVER['argv'][1], 'w');
$f = stream_filter_append($h, "bzip2.compress", STREAM_FILTER_WRITE);
for ($x = 0; $x < 10000; $x++) {
    fprintf($h, $str, $x, $x);
}
fclose($h);
echo "Written\n";
?>
</pre>
<p>Even worse though: It doesn't fail with a message, but it writes out a corrupt bzip-File.</p>
<p>And it gets worse: With a little amount of data it works, but as the amount of data increases, it begins to fail - at different places depending on how you shuffle the data around.</p>
<p>Above script will write a bzip file which - when uncompressed - will end around iteration 9600.</p>
<p>So now that I had a small reproducible testcase, I could report a bug in PHP: <a href="http://bugs.php.net/?id=42117">Bug 42117</a>.</p>
<p>After spending so many hours on a problem which in the end boiled down to a bug in PHP (I've looked everywhere, believe me. I also tried workarounds, but all to no avail), I just could not let the story end there.</p>
<p>Some investigation quickly turned up a wrong check for a return value in bz2_filter.c which I was able to patch up very, very quickly, so if you visit that bug above, you will find a patch correcting the problem.</p>
<p>Then, when I finished patching PHP itself, hacking up the needed PHP-code to let the thing stream out the compressed data as it arrived was easy. If you want, you can have a look at <a href="http://www.lipfi.ch/bzcomp.phps">bzcomp.phps</a> which demonstrates how to plug the compression into the output buffer handling - or into something quicker and dirtier.</p>
<p>Oh, and if you are tempted to do this:</p>
<pre class="code">
function ob($buf){
    return bzcompress($buf);
}
ob_start('ob');
</pre>
<p>... it won't do any good because you will still gobble up all the data before compressing. And this:</p>
<pre class="code">
function ob($buf){
    return bzcompress($buf);
}
ob_start('ob', 32768);
</pre>
<p>will encode in chunks (good), but it will write a bzip2-end-of-stream marker after every chunk (bad), so neither will work.</p>
<p>Nothing more satisfying than to fix a bug in someone else's code. Now let's hope this gets applied to PHP itself so I don't have to manually patch my installations.</p>
</p>
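<p>With the patched filter in place (PHP 5.2.4 and later), the streaming setup this was all about reduces to attaching the filter to the output stream - a minimal sketch, assuming the bz2 extension is loaded and with an invented XML payload:</p>

```php
<?php
// Stream bzip2-compressed output as it is generated, instead of
// buffering everything with ob_start()/bzcompress(). Payload invented.
$out = fopen('php://output', 'w');
stream_filter_append($out, 'bzip2.compress', STREAM_FILTER_WRITE);
for ($x = 0; $x < 1000; $x++) {
    fwrite($out, "<item id=\"$x\">generated as we go</item>\n");
}
fclose($out);  // closing writes the single bzip2 end-of-stream marker
```

<p>Unlike the chunked ob_start() callback above, this produces one continuous bzip2 stream with exactly one end-of-stream marker.</p>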
Trying out Gmail2007-07-11T00:00:00+00:00http://pilif.github.com/2007/07/trying-out-gmail<p>Everyone and their friends seems to be using Gmail lately and I agree: The application has a clean interface, a very powerful search feature and is easily accessible from anywhere.</p>
<p>I have my Gmail address from back in the days when invites were scarce and the term AJAX wasn't even a term yet, but I never got around to really taking advantage of the service, as I just don't see myself checking various email accounts in various places - at least not for serious business.</p>
<p>But now I found a way to put gmail to the test as my main email application - at least for a week or two.</p>
<p>My main mail storage is and will be our Exchange server. I have multiple reasons for that:</p>
<ol>
<li>I have all my email I ever sent or received in that IMAP account. That's WAY more than the 2.8 GB you get in Gmail and even if I had enough space there, I would not want to upload all my messages there. </li>
<li>I don't trust gmail to be as diligent with the messages I store there as I would want it to. I managed to keep every single email message from 1998 till now and I'd hate to lose all that to a "glitch in the system".</li>
<li>I need IMAP access to my messages for various purposes.</li>
<li>I need strong server-side filtering to remove messages I'm more or less only receiving for logging purposes. I don't want to see these - not until I need them. No reason to even have them around usually.</li>
</ol>
<p>So for now I have added yet another filter to my collection of server-side filters: This time I'm redirecting a copy of all mail that didn't get filtered away for various reasons to my Gmail address. This way I get to keep all mail of my various aliases at the central location where it always was and I can still use Gmail to access the newly arrived messages.</p>
<p>Which leaves the problem with the sent messages which I ALSO want to archive at my own location - at least the important ones.</p>
<p>I fixed this by BCCing all Mail I'm writing in gmail to a new alias I created. Mail to that alias with my Gmail address as sender will be filtered into my sent-box by Exchange so it'll look as though I sent the message via Thunderbird and then uploaded the copy via IMAP.</p>
<p>I'm happy with this solution, so testing Gmail can begin.</p>
<p>I'm asking myself: Is a tag based storage system better than a purely search based (the mail I don't filter away is kept in one big INBOX which I access purely via search queries if I need something)? Is a web based application as powerful as a mail client like Thunderbird or Apple Mail? Do I unconsciously use features I'm going to miss when using Gmail instead of Apple Mail or Thunderbird? Will I be able to get used to the very quick keyboard-interface to gmail?</p>
<p>Interesting questions I intend to answer.</p>
Mail filtering belongs on the server2007-07-09T00:00:00+00:00http://pilif.github.com/2007/07/mail-filtering-belongs-on-the-server<p><a href="http://pixelated-dreams.com/archives/306-Im-now-officially-a-fanboy....html">Different</a> <a href="http://forums.macnn.com/103/ipod-iphone-and-apple-tv/340682/spam-on-the-iphone/">people</a> who got their iPhone are complaining about SPAM reaching their inbox and want Junk Mail controls on their new gadget, failing to realize the big problem with that approach:</p>
<p>Even if the iPhone is updated with a SPAM filter, the messages will get transmitted and filtered there, which means that you pay for receiving the junk just to throw it away afterwards.</p>
<p>Additionally, Bayes filters still seem to be the way to go with junk mail filtering. The Bayes rules can get pretty large, so this means that you either have to retrain your phone or that the seed data must be synchronized with the phone, which will take both a lot of time and space better used for something else.</p>
<p>No. SPAM filtering is a task for the mail server.</p>
<p>I'm using SpamAssassin and DSPAM to check the incoming mail for junk and then I'm using the server side filtering capabilities of our Exchange server to filter mail recognized as SPAM into the "Junk E-Mail" box.</p>
<p>If the filter is simple enough (checking for header values and moving into boxes), even though it is defined in Outlook, the server can process it regardless of which client is connecting to it to fetch the mail (Apple Mail, Thunderbird and the IMAP client on my W880i in my case). This means that all my junk is sorted away into the "Junk Email" folder just when it arrives. It never reaches the INBOX and I never see it.</p>
<p>I don't have an iPhone and I don't want to have one (I <em>depend</em> on bluetooth modem functionality and a real keypad), but the same thing applies to any mobile emailing solution. You don't want SPAM on your Blackberry and especially not on your even simpler non-smartphone.</p>
<p>Speaking of transferring data: The other thing I really don't like about the iPhone is the browser. Sure: It's standards-compliant, it renders nicely, it supports AJAX and small-screen rendering, but <em>it transmits the websites uncompressed</em>.</p>
<p>Let me make an example: The digg.com frontpage in Opera Mini causes 10KB of data to be transferred. It looks perfectly fine on my SonyEricsson W880 and works as such (minus some javascript functionality). Digg.com when accessed via Firefox causes 319 KB to be transmitted.</p>
<p>One MB costs CHF 7 here (though you can have some inclusive MB's depending on contract) which is around EUR 4.50, so for that money I could watch digg.com three times with the iPhone or 100 times with Opera Mini. The end-user experience is largely the same on both platforms - at least close enough not to warrant the 33 times more expensive access via a browser that works without a special proxy.</p>
<p>As long as GPRS data traffic is prohibitively expensive, junk mail filtering on the server and a prerendering-proxy based browser are a must. Even more so than the other stuff missing in the iPhone.</p>
Upscaling video2007-06-15T00:00:00+00:00http://pilif.github.com/2007/06/upscaling-video<p>I have an <a href="http://catalog2.panasonic.com/webapp/wcs/stores/servlet/ModelDetail?displayTab=O&storeId=11201&catalogId=13051&itemId=102052&catGroupId=21360&surfModel=PT-AE1000U">awesome Full-HD projector</a> and I have a lot of non-HD video material, ranging from <a href="/archives/330-DVD-ripping,-second-edition.html">DVD-rips</a> to <a href="/archives/142-Console-game-Videos.html">speedruns</a> of older consoles and I'm using a Mac Mini running Windows (first Vista RC2, then XP and now Vista again) connected to said projector to access the material.</p>
<p>The question was: How do I get the best picture quality out of this setup.</p>
<p>The answer boils down to the question of what device should do the scaling of the picture:</p>
<p>Without any configuration work, the video is scaled by your graphics card, which usually does quite a bad job of it unless it provides some special upscaling support - something the Intel chip in my Mac Mini apparently doesn't.</p>
<p>Then you could let the projector do the scaling, which would require the MCE application to change the screen resolution to that of the file being played. It would also mean that the projector has to support the different resolutions the files are stored in, which is hardly the case as there are some very strange resolutions now and then (think the Game Boy's native 160x144 resolution).</p>
<p>The last option is to let your CPU do the scaling - at least to some degree.</p>
<p>This is a very interesting option, especially as my Mac Mini comes with one of these nice dual core CPUs we can try and leverage for this task. Then, there are a lot of algorithms out there that are made exactly for the purpose of scaling video, some of which are very expensive to implement in specialized hardware like GPUs or the firmware of a projector.</p>
<p>So I went around and finally found <a href="http://www.avsforum.com/avs-vb/showthread.php?t=719041">this post</a> outlining the steps needed to configure ffdshow to do its thing.</p>
<p>I used the basic setting and modified it just a bit to keep the original aspect ratio of the source material and to only do the resizing up to a resolution of 1280x720. If the source is larger than this, there's no need to shrink the video just to have the graphics chip upscale it again to the projector's native 1920x1080 resolution (*sigh*).</p>
<p>Also, I didn't want ffdshow to upscale 1280x720 to the full 1920x1080. At first I tried that, but I failed to see a difference in picture quality while getting the odd frame drop now and then, so I'm running at the limits of my current setup.</p>
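<p>The sizing rule I ended up with can be summarized in a few lines (my own formulation of it, not ffdshow's actual resize logic):</p>

```javascript
// Fit a source inside 1280x720 while keeping the aspect ratio;
// sources that are already at least that large are left untouched.
function targetSize(width, height) {
  const scale = Math.min(1280 / width, 720 / height);
  if (scale <= 1) return { width, height }; // 720p or bigger: don't resize
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale)
  };
}

console.log(targetSize(720, 576)); // PAL DVD rip -> { width: 900, height: 720 }
console.log(targetSize(240, 160)); // GBA speedrun -> { width: 1080, height: 720 }
```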
<p>Finally, I compared the picture quality of a <a href="http://www.amazon.com/Columbo-Mystery-Movie-Collection-1989/dp/B000MV9OMM/ref=pd_bbs_sr_1/002-8839861-8394406?ie=UTF8&s=dvd&qid=1181859576&sr=8-1">Columbo</a> (non-referal link to Amazon - the package arrived last week) DVD rip with and without the resizing enabled.</p>
<p>The difference in quality is immense. The software-enhanced picture looks nearly like a real 720p movie - sure, some details are washed out, but the overall quality is <em>worlds</em> better than what I got with plain ffdshow and no scaling.</p>
<p>Sure. The CPU usage is quite a bit higher than before, but that's what the CPUs are for - to be used.</p>
<p>I highly recommend taking the 10 minutes needed to set up the ffdshow video decoder to do the scaling. Sure, the UI is awful and I didn't completely understand many of the settings, but the increased quality more than made up for the work it took to configure the thing.</p>
<p>Heck! Even the 240x160 pixel sized <a href="http://tasvideos.org/893M.html">Pokémon Sapphire run</a> looked much better after going through ffdshow with software scaling enabled.</p>
<p>Highly recommended!</p>
<p>By the way: this only works in MCE for video files, as MCE refuses to use ffdshow for MPEG2 decoding, which is needed for DVD or TV playback. But 100% of the video I watch is video files anyway, so this doesn't bother me at all.</p>
*sigh*2007-05-14T00:00:00+00:00http://pilif.github.com/2007/05/sigh<pre class="code">
% php -a
Interactive shell
php > if (0 == null) echo "*sigh*\n";
*sigh*
php > quit
</pre>
<p>That bit me today. Even after so many years. I should really get used to using ===.</p>
Newfound respect for JavaScript2007-05-09T00:00:00+00:00http://pilif.github.com/2007/05/newfound-respect-for-javascript<p>Around the year 1999 I began writing my own JavaScript code as opposed to copying and pasting it from other sources and only marginally modifying it.</p>
<p>In 2004 I practically discovered AJAX (XmlHttpRequest in particular) just before the hype started and I have been doing more and more JavaScript since then.</p>
<p>I always regarded JavaScript as something you have to do, but which you dislike. My code was dirty, mainly because I was of the wrong opinion that JavaScript was a procedural language with just one namespace (the global one). Also, I wasn't using JavaScript for a lot of functionality of my sites, partly because of old browsers and partly because I have not yet seen what was possible in that language.</p>
<p>But for the last year or so, I have been writing very large quantities of JS in very AJAXy applications, which made me really angry about the limited means available to structure your code.</p>
<p>And then I found a link on reddit to a <a href="http://developer.yahoo.com/yui/theater/">lecture by a Yahoo employee</a>, <a href="http://www.crockford.com/">Douglas Crockford</a>, which really managed to open my eyes.</p>
<p>JavaScript isn't procedural with some object oriented stuff bolted on. JavaScript is a functional language with object oriented and procedural concepts integrated where it makes sense for us developers to both quickly write code and to understand written code even with only a very little knowledge of how functional languages work.</p>
<p>The immensely powerful concept of having functions as first-class objects, of allowing closures and of allowing object prototypes to be modified at will turns JS into a really interesting language which can be used to write "real" programs with a clean structure.</p>
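<p>A tiny sketch of two of those concepts - closures over first-class functions, and extending a prototype at runtime (the names here are mine, purely for illustration):</p>

```javascript
// 1) Functions are first-class objects; the inner function closes
//    over `count`, giving us private state without any classes.
function makeCounter() {
  var count = 0;
  return function () { return ++count; };
}
var next = makeCounter();
next(); // 1
next(); // 2

// 2) Object prototypes can be modified at will - every string
//    instantly gains the new method.
String.prototype.shout = function () {
  return this.toUpperCase() + '!';
};
'hello'.shout(); // "HELLO!"
```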
<p>The day I saw those videos, I understood that I had completely wrong ideas about JavaScript, mainly because of my crappy learning experience so far, which initially consisted of copying and pasting crappy code from the web and later of reading library references, but always ignoring real introductions to the language («because I know that already»).</p>
<p>If you are interested in learning a completely new, powerful side of JavaScript, I highly recommend you watch these videos.</p>
A followup to MSI2007-04-17T00:00:00+00:00http://pilif.github.com/2007/04/a-followup-to-msi<p>My <a href="http://www.gnegg.ch/archives/357-Windows-Installer-Worked-around.html">last post about MSI</a> generated some nice responses, amongst them the <a href="http://blogs.xmission.com/legalize/2007/04/16/is-it-the-tool-or-the-author/">lengthy blog post</a> on <a href="http://blogs.xmission.com/legalize">Legalize Adulthood</a>.</p>
<p>Judging from the two track-backs on the MSI posting and especially after reading the linked post above, I come to the conclusion that my posting was very easy to misunderstand.</p>
<p>I agree that the workarounds I listed are <em>problems with the authoring</em>. I DO think, however, that all these workarounds were put in place because the platform provided by Microsoft is lacking in some way.</p>
<p>My rant was not about the side effects of these workarounds. It was about their sole existence. Why are some of us forced to apply workarounds to an existing platform to achieve their goals? Why doesn't the platform itself provide the essential features that would make the workarounds unneeded?</p>
<p>For my *real* problems with MSI from an end user's perspective, feel free to read <a href="http://www.gnegg.ch/archives/174-A-look-at-Windows-Installer.html">this rant</a> or <a href="http://www.gnegg.ch/archives/107-Why-o-why-is-my-harddrive-so-small.html">this one</a> (but bear in mind that both are a bit oldish by now).</p>
<p>Let's go once again through my points and try to understand what each workaround tries to accomplish:</p>
<ol>
<li><p><strong>EXE-Stub to install MSI</strong>: MSI, despite being the platform of choice still isn't as widely deployed as the installer authors want it to be. If Microsoft wants us to use MSI, it's IMHO their responsibility to ensure that the platform is actually available.</p>
<p>I do agree though that Microsoft is working on this, for example by requiring MSI 3.1 (the first release with acceptable patching functionality) for Windows Update. This is what makes the stubs useless over time.</p>
<p>And personally I think a machine that isn't using Windows Update and thus doesn't have 3.1 on it isn't a machine I'd want to deploy my software on, because a machine not running Windows Update is probably badly compromised and in an unsupportable state.</p>
</li>
<li><p><strong>EXE-Stub to check prerequisites</strong>: Once more I don't get why the underlying platform cannot provide functionality that is obviously needed by the community. Prerequisites are a fact of life and MSI does nothing to help with them. MSI packages can't install other MSI packages, only merge modules, but barely any of the libraries required by today's applications actually come in MSM format (.NET framework? Anyone?).</p>
<p>In response to the excellent post on Legalize Adulthood which gives an example about DirectX, I counter with: Why is there a DirectX Setup API? Why are there separate CAB files? Isn't MSI supposed to handle that? Why do I have to create a setup stub calling a third-party API to get stuff installed that isn't handled by the default MSI installation?</p>
<p>A useful packaging solution would provide a way to specify dependencies or at least allow for automated installation of dependencies from the original package.</p>
<p>It's ironic that an MSI package can - even though it's dirty - use a CustomAction to install a traditionally packaged .EXE-Installer-Dependency, but can't install a .MSI packaged dependency.</p>
<p>So my problem isn't with bootstrappers as such, but with the limitations in MSI itself requiring us developers to create bootstrappers to do work which IMHO MSI should be able to do.</p>
</li>
<li><p><strong>MSI-packaged .EXEs</strong>: I wasn't saying that MSI is to blame for the authors that repacked their .EXEs into .MSI packages. I'm just saying that this is another type of workaround that could have been chosen to get the installation to work despite (maybe only perceived) limitations in MSI. An ideal packaging solution would be as accessible and flexible as your common .EXE installer and thus make such a workaround unneeded.</p></li>
<li><p><strong>Third party scripting</strong>: In retrospect I think the motivation for these third party scripting solutions is mainly vendor lock-in. I'm still convinced, though, that with a more traditional structure and a bit more flexibility for installer authors, such third party solutions would become less and less necessary until they finally die out.</p></li>
<li><p><strong>Extracting, then merging</strong>: Also just another workaround that has been chosen because a distinct problem wasn't solvable using native MSI technology.</p></li>
</ol>
<p>I certainly don't blame MSI for a developer screwing up. I'm blaming MSI for not providing the tools necessary for the installer community to use native MSI to solve the majority of problems. I ALSO blame MSI for its messiness, for screwing up my system countless times and for screwing up my parents' system, which is plainly unforgivable.</p>
<p>Because MSI is a complicated black box, I'm unable to fix problems with constantly appearing installation prompts, with unremovable entries in "Add/Remove programs" and with installations failing with such useful error messages as "Unknown Error 0x[whatever]. Installation terminated".</p>
<p>I'm blaming MSI for not stopping the developer community from authoring packages with the above problems. I'm blaming MSI for its inherent complexity causing developers to screw up.</p>
<p>I'm disappointed with MSI because it works in a way that requires at least a part of the community to create messy workarounds for quite common problems MSI can't solve.</p>
<p>What I posted was a list of workarounds of varying stupidity for problems that shouldn't exist. Authoring errors that shouldn't need to happen.</p>
<p>I'm not picky here: A large majority of packages I had to work with <em>do</em> in fact employ one of these workarounds (the unneeded EXE-stub being the most common one), none of which should be needed.</p>
<p>And don't get me started about how other operating systems do their deployment. I think Windows could learn from some of them, but that's for another day.</p>
Altering the terminal title bar in Mac OS X2007-04-12T00:00:00+00:00http://pilif.github.com/2007/04/altering-the-terminal-title-bar-in-mac-os-x<p>After one year of owning a MacBook Pro, I finally got around to fix my <tt>precmd()</tt> ZSH-hack to really make the current directory and stuff appear in the title bar of Terminal.app and iTerm.app.</p>
<p>This is the code to add to your .zshrc:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="k">case</span> <span class="nv">$TERM</span> <span class="k">in</span>
<span class="k">*</span>xterm<span class="k">*</span><span class="p">|</span>ansi<span class="p">)</span>
<span class="k">function </span>settab <span class="o">{</span> print -Pn <span class="s2">"</span><span class="se">\e</span><span class="s2">]1;%n@%m: %~</span><span class="se">\a</span><span class="s2">"</span> <span class="o">}</span>
<span class="k">function </span>settitle <span class="o">{</span> print -Pn <span class="s2">"</span><span class="se">\e</span><span class="s2">]2;%n@%m: %~</span><span class="se">\a</span><span class="s2">"</span> <span class="o">}</span>
<span class="k">function </span>chpwd <span class="o">{</span> settab;settitle <span class="o">}</span>
settab;settitle
<span class="p">;;</span>
<span class="k">esac</span></code></pre></figure>
<p><tt>settab</tt> sets the tab contents in iTerm and <tt>settitle</tt> does the same thing for the title bar both in Terminal.app and iTerm.</p>
<p>The sample also shows the variables ZSH replaces in the strings (the parameter -P to print lets ZSH do prompt expansion; see <tt>zshmisc(1)</tt> for a list of all variables): %n is the currently logged-on user, %m the hostname up to the first dot, and %~ displays the current directory, or ~ if you are in $HOME. You can certainly add any other environment variable of your choice if you need more options, but this more or less does it for me.</p>
<p>Usually, the guides on the internet make you use <tt>precmd</tt> to set the title bar, but somehow Terminal wasn't pleased with that method and constantly kept overwriting the title with the default string.</p>
<p>And this is how it looks in both iTerm (above) and Terminal (below):</p>
<center><!-- s9ymdb:23 --><img width="330" height="129" style="border: 0px;" src="http://www.gnegg.ch/uploads/titlebars.png" alt="" /></center>
<p><br /></p>
Windows Installer - Worked around2007-04-11T00:00:00+00:00http://pilif.github.com/2007/04/windows-installer-worked-around<p>I've talked about Windows Installer (the tool that parses these .MSI files) before, and I've never really been convinced that this technology does its job. Just have a look at these previous articles: <a href="/archives/107-Why-o-why-is-my-harddrive-so-small.html">Why o why is my hard-drive so small?</a>, <a href="/archives/174-A-look-at-Windows-Installer.html">A look at Windows Installer</a> and <a href="/archives/261-The-myth-of-XCOPY-deployment.html">The myth of XCOPY deployment</a>.</p>
<p>Yesterday I had a look at the Delphi 2007 installation process and it dawned on me that I'm going to have to write yet another blog entry.</p>
<p>It's my gut feeling that 80% of all bigger software packages on Windows can't live with MSI's default feature set and have to work around inherent flaws in the design of that tool. Here's what I found installers doing (in increasing order of stupidity):</p>
<ol>
<li>Use a .EXE-stub to install the MSI engine. These days this really doesn't make sense any more, as 99% of all Windows installations already have MSI installed, and the ones that don't you don't want to support anyway (Windows Update requires MSI).</li>
<li>Use a .EXE-stub that checks for availability and thereafter installs a bunch of prerequisites - sometimes even <em>other MSI packages</em>. This isn't because MSI files are unable to detect the presence of prerequisites - it's because MSI files are unable to install other MSI files, and the workaround (using merge packages) doesn't work because most of the third party libraries to install don't come as merge packages.</li>
<li>Create a MSI-file which contains a traditional .EXE-Setup, unpack that to a temporary location and run it. This is what I call the "I want a Windows logo, but have no clue how to author MSI files" type of installation (and I completely understand the motivation behind that), which just defeats all the purposes MSI files ever had. Still, due to inherent limitations in the MSI engine, this is oftentimes the only way to go.</li>
<li>Create MSI-files that extract a vendor specific DLL, a setup script and all files to deploy (or even just an archive) and then use that vendor specific DLL to run the install script. This is what InstallShield does at least some of the time. This is another version of the "I have no clue how to author a MSI file"-installation with the additional "benefit" of being totally vendor-locked.</li>
<li>Create a custom installer that installs all files and registry keys and then launch the windows installer with a temporary .MSI-file to register your installation work in the MSI-installer. This is what Delphi 2007 does. I feel this is another workaround for Microsoft's policy that only MSI-driven software can get a windows-logo, but this time it's vendor-locked and totally unnecessary and I'm not even sure if such a behavior is consistent with any kind of specification.</li>
</ol>
<p>Only a small minority of installations really use pure MSI and these installations usually are installations of small software packages and as my previous articles show: The technology is far from fool-proof. While I see that Windows should provide a generalized means for driving software installations, MSI can't be the solution as evidenced by the majority of packages using workarounds to get by the inherent flaws of the technology.</p>
<p>*sigh*</p>
Software patents2007-03-19T00:00:00+00:00http://pilif.github.com/2007/03/software-patents<p>Like most programmers, I too hate software patents. But until now, I've never had a fine example of how bad they really are (though I've written about <a href="/archives/340-My-take-on-the-intellectual-property-debate.html">intellectual property</a> in general before).</p>
<p>But now I just found another <a href="http://www.google.com/patents?vid=USPAT7028023&id=Szh4AAAAEBAJ&dq=linked+list+Ming-Jen+Wang">granted patent</a> application linked on <a href="http://www.reddit.com">reddit</a>.</p>
<p>The patent covers... linked lists.</p>
<p>Granted. It's linked lists with pointers to objects further down the list than the immediate neighbors, but it's still a linked list.</p>
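<p>For illustration, here's a sketch of my reading of the claim - a list node that also carries a pointer past its immediate neighbor (the names and layout are mine, not the patent's):</p>

```javascript
// A singly linked list whose nodes also carry a pointer that skips
// over the immediate neighbour to the node two positions ahead.
function Node(value) {
  this.value = value;
  this.next = null; // immediate neighbour
  this.skip = null; // node two positions down the list
}

function buildList(values) {
  var head = null;
  // Build back-to-front so the nodes further ahead already exist.
  for (var i = values.length - 1; i >= 0; i--) {
    var node = new Node(values[i]);
    node.next = head;
    node.skip = head ? head.next : null;
    head = node;
  }
  return head;
}

var list = buildList([1, 2, 3, 4]);
list.skip.value;      // 3 (skipped over 2)
list.next.skip.value; // 4 (skipped over 3)
```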
<p>I first read about linked lists when I was 13, in my first book about C. That was 13 years ago - <em>way</em> before that patent application was originally filed.</p>
<p>So seeing a technology in use for at least 13 years being patented as «new invention», I'm asking myself two questions:</p>
<ol>
<li>How the hell could this patent application even be accepted seeing that it isn't inventive at all?</li>
<li>Why do companies file trivial patents for which prior art obviously exists and which are thus invalid to begin with?</li>
</ol>
<p>And based on that I'm asking the world: Why don't we stop the madness?</p>
<p>But let's have a look at the above two points. Answering the first one is easy: the people reviewing these applications have neither the interest nor the obligation to properly vet them. In fact, these «experts» may even be paid per granted patent and are thus totally interested in letting as many patents pass as possible. Personally, I also doubt their technical knowledge in the fields they are reviewing patents in.</p>
<p>Even more so: most of these applications are formulated in legal-speak targeted at lawyers, who usually have no clue about IT, whereas the IT people usually don't understand the texts of the applications.</p>
<p>Patent law (like trademark law) basically allows you to submit anything, and it's the submitter's responsibility to make sure that prior art doesn't exist. The patent offices can't be held liable for wrongly issued patents.</p>
<p>And this leads us to question 2: Why submit an obviously invalid patent?</p>
<p>For one, patent applications make the scientific achievement of a company measurable for non-tech people.</p>
<p>Analysts compare the «inventiveness» of companies by comparing the sheer number of granted patents. A company with more granted patents has a better value in the market and it's only about market-value these days. This is one big motivation for a company to try and have as many patents granted as possible.</p>
<p>The other issue is that once the patent is granted, you can use that (invalid) patent to sue as many competitors as possible. As you have the legally granted patent on your side, the sued party must prove that the patent is invalid. This means a long and very expensive trial with an uncertain outcome - you can never know if the jury/judge in question knows enough about technology to identify the patent as false or if they will just value the legally issued document higher than the possible doubts raised by the sued party.</p>
<p>This makes fighting an invalid patent a very risky adventure which many companies don't want to invest money in.</p>
<p>So in many (if not most) cases, your invalid patent is as valuable as a valid one if you intend to use it to sue competitors to make them pay royalties, or to hinder them from ever selling a product competing with yours - even though your legal measure is invalid.</p>
<p>One more question to ask: Why does the Free Software community seem so incredibly concerned about software patents while vendors of commercial software usually keep quiet?</p>
<p>It's all about the provability of infringing upon trivial patents.</p>
<p>Let's take above linked-list patent: It's virtually impossible to prove that any piece of compiled software is infringing on this (invalid) patent. In source form though, it's trivially easy to prove the same thing.</p>
<p>So where this patent serves only one purpose in the closed source world (increased shareholder value due to a higher number of patents granted), it also begins to serve the other purpose (a weapon against competitors) in an open source world.</p>
<p>And yes, I'm asserting that Free as well as non-Free software infringes upon countless patents, either willingly or unwillingly (I guess the former is limited to the non-free community). Just look at the sheer number of software patents granted! I'm asserting that it's plain <em>impossible</em> to write software today that doesn't infringe upon any patent.</p>
<p>Please, stop that software patent nonsense. The current system criminalizes developers and serves no purpose that trademark and intellectual property laws couldn't solve.</p>
Wii in a home cinema2007-03-02T00:00:00+00:00http://pilif.github.com/2007/03/wii-in-a-home-cinema<p>The day before yesterday I was lucky enough to get myself a Wii.</p>
<p>It was and basically still is impossible to get one here in Switzerland since the launch on December 8th. So I was very happy that I got the last device of a delivery of like 15 pieces to a game shop near where I work.</p>
<p>Unfortunately, my out-of-the-box experience with the Wii was quite poor which is why I didn't write the review yesterday - I wanted to spend a bit more time with the console before writing something bad about it.</p>
<p>Here's my story:</p>
<p>I'm using a projector, a receiver and a big screen - a real home cinema.</p>
<p>This means that the Wii is usually placed quite far away from either the screen or the receiver (and especially from the projector - about 25 meters in my case). This also means that I ran into big issues with the relatively short cable with which you are supposed to connect the sensor bar to the Wii.</p>
<p>And the short A/V-cable didn't help either, so I also couldn't just place the Wii near the screen because then I wouldn't be able to connect it to the receiver.</p>
<p>I ended up placing the Wii more or less in the middle of the room and while I like the looks of the console, it still doesn't fit the clean look of the rest of my home cinema.</p>
<p>It gets worse though: I placed the sensor bar on the top of my center speaker right below the screen. It turned out though that this placement was too far below my usual line of sight so that the Wiimote wasn't able to pick the signal up.</p>
<p>So currently, I have placed the sensor bar on top of an awful looking brown box right on the middle of my table - a setup I have to rebuild whenever I want to play and to put away when I'm not playing.</p>
<p>I <em>SO</em> want that wireless sensor bar to place it on the top of my screen.</p>
<p>But the not-quite-working goes on: At first I wasn't able to connect to my WLAN. The Wii just didn't find the network. Flashing the ZyXEL AP with a newer software helped there and the Wii recognized the network, but was unable to get an IP address.</p>
<p>Due to the awkward placement it was unable to get a strong signal.</p>
<p>I moved the device more to the middle of the room (making it even more visible to the casual eye) and it was finally able to connect.</p>
<p>My first visit to the shopping channel ended up with the whole console crashing hard. Not even the power button worked - I had to unplug and replug it at which time I had enough and just played Zelda (a review of that jewel will probably follow).</p>
<p>Yesterday I was luckier with the shopping channel (I didn't buy anything though) and, as I already had my terrible "sensor bar on a box" configuration up and running, I got a glimpse of what the Wii out-of-the-box experience could be: smoothly working, good-looking and a very nice user interface - using the Wiimote to point at the screen feels so ... natural.</p>
<p>In my opinion, Nintendo made an awful mistake in forcing that cable on the sensor bar. As we know by now, the bar contains nothing more than two IR-LEDs; the cable is only for powering them. Imagine the sensor bar being another BT device - maybe mains-powered or otherwise battery-powered (though these IR-LEDs suck power like mad). Imagine the console being able to turn it on and off wirelessly.</p>
<p>The whole thing would not have been that much more expensive (alternatively, they could sell it as an addon) but it would allow the same awesome out-of-the-box experience for all users - even the one with a real home entertainment system.</p>
<p>If it wasn't Nintendo (I admit that I am a «fanboi» in matters of Nintendo - the conditioning I got with the NES in my childhood still hasn't worn off), I would have been so incredibly pissed on that first evening that I would have returned the whole console and written one bad review here - even the Xbox 360 worked better than the Wii... *sigh*</p>
<p>And all that to save a couple of hours in the engineering department.</p>
External blogging tools2007-03-01T00:00:00+00:00http://pilif.github.com/2007/03/external-blogging-tools<!-- s9ymdb:22 -->
<p><a href="http://www.gnegg.ch/uploads/blog-textmate.png"><img width="110" height="60" style="float: right; border: 0px; padding-left: 5px; padding-right: 5px;" src="http://www.gnegg.ch/uploads/blog-textmate.serendipityThumb.png" alt="" /></a></p>
<p>Ever since I started blogging, I have been using different tools to help me do my thing.</p>
<p>At first, I was using the browser to directly write the articles in the MT interface, but after losing a significant amount of text that way, I quickly migrated to writing my entries in a text editor (<a href="http://www.jedit.org">jEdit</a> <a href="/archives/3-Why-I-like-jEdit.html">back then</a>) and pasting them into the MT interface.</p>
<p>Then I learned about the XML-RPC interface to MT and began using <a href="http://www.wbloggar.com/">w.bloggar</a> to do my writing, but stagnation and little quirks made me go back to a real text editor, which is what I used for a long time (though I migrated from MT to s9y in between).</p>
<p>Last year, I caught the buzz about <a href="http://windowslivewriter.spaces.live.com/">Windows Live Writer</a>, which was kind of nice, but generally I need more freedom in writing HTML than a WYSIWYG editor can provide - especially as I have special CSS rules, for code for example, that I prefer over just manually setting the font.</p>
<p>So I was back to the text editor (which went from jEdit to <a href="http://www.textmate.com">TextMate</a> in between).</p>
<p>And then I noticed the blogging bundle.</p>
<p>The blogging bundle for TextMate allows me to keep writing my blog entries in the tool of choice, while being able to post the finished entries direct from within TextMate.</p>
<p>Basically, you configure some basic settings for your blogs and then you write a colon-separated list of values at the beginning of the document which TextMate uses to post your entry.</p>
<p>It can fetch categories, configure pings and comments, set tags - whatever you want. Directly from your editor where you are doing your writing. Of course you can also fetch older postings and edit them.</p>
<p>So this provides me with the best of both worlds: direct posting to the blog with one key press (Ctrl-Command-P) while writing in the editor of my choice, which is very stable and provides me with maximum flexibility in laying out my articles.</p>
<p>I love it.</p>
PT-AE1000 HDMI woes2007-02-27T00:00:00+00:00http://pilif.github.com/2007/02/pt-ae1000-hdmi-woes<p>Today was the day when I got the crown jewel of my home entertainment system: A <a href="http://catalog2.panasonic.com/webapp/wcs/stores/servlet/ModelDetail?displayTab=O&storeId=11201&catalogId=13051&itemId=102052&catGroupId=21360&surfModel=PT-AE1000U">Panasonic PT-AE1000</a></p>
<p>The device is capable of displaying the 1920x1080 resolution, which means it can show 1080p content (at 50, 60 and even 24 Hertz). It's the thing that was needed to complete my home entertainment setup.</p>
<p>The projector is quite large but not that heavy. I also like the motorized lens controls for zoom and focus and I <em>love</em> the incredible lens shift range: You can basically move the picture the whole size of it in any direction. This allowed me not to tilt the device even though it's mounted quite high up on the ceiling. No tilt means no keystone distortion.</p>
<p>All projectors provide you with some means to correct the keystone effect, but you'll automatically lose picture quality and content when using it, so it's best to leave it off.</p>
<p>Unfortunately, the device has one flaw: it reports totally wrong screen resolutions via DDC when you connect the projector via DVI (or HDMI, but that's the same thing).</p>
<p>It tells windows (strangely enough, it works on Mac OS X) that it supports the resolution of 1920x540 at some strange refresh rate of around 54 Hz.</p>
<p>The Intel chipset of my Mac Mini can't output this resolution, so it falls back to 480p, and there's no possibility of changing this.</p>
<p>With the help of <a href="http://entechtaiwan.com/util/ps.shtm">PowerStrip</a> (which you won't even need when you are reading this), I created a corrected <a href="http://www.lipfi.ch/ae1000.inf">Monitor .INF-File</a> that has the correct resolution and acceptable refresh rates in it (taken from the projectors manual).</p>
<p>Once you tell windows to update the driver of your monitor and point it to this file specifically, it will allow you to set the correct resolution.</p>
<p>*phew* - problem solved.</p>
<p>Aside from this glitch, I love the projector so far. Very silent, very nice picture quality, perfect colors, and it even looks quite acceptable with its black casing. This is the projector I'm going to keep for many years, as there's no increase in resolution in sight for a very long time.</p>
Vista preloaded2007-02-20T00:00:00+00:00http://pilif.github.com/2007/02/vista-preloaded<p>Today I had the dubious "pleasure" of setting up a Lenovo Thinkpad R60 with Vista Business Edition preloaded.</p>
<p>We just needed to have a clean Vista machine to test components of our PopScan solution on and I just <em>didn't</em> have the disk space needed for yet another virtual machine.</p>
<p>I must say that I didn't look forward to the process, mainly because I hated the OEM installation process under XP. Basically, you got an installation cluttered with "free" "feature enhancements" which usually were really bad-looking if provided by the hardware manufacturer or nagged the hell out of you if they were trial releases of some anti-virus program or something else.</p>
<p>Ever since I started setting up Windows machines for personal use, my policy has been to wipe them clean and install a fresh copy of Windows.</p>
<p>With this background and the knowledge that just for testing purposes the out-of-the-box installation would do the trick, I turned on that R60 machine.</p>
<p>The whole initial setup process was very pleasant: it was just the usual Windows setup minus the whole copying-of-files process - the installation started by asking me what language and what regional settings to use and it actually guessed the keyboard settings right after setting the location (a first! Not even Apple can do that *sigh*).</p>
<p>Then came the performance testing process as we know it from non-OEM-preinstalled installations.</p>
<p>Then it asked me for a username and provided a selection of background images.</p>
<p>I <em>really, really</em> liked that because usually the vendor provided images are just crap.</p>
<p>The selection list even contained some Vista-native images and some Lenovo images - clearly separated.</p>
<p>The last question was a small list of "additional value-add products" with "No thank you" preselected.</p>
<p>You can't imagine how pleased I was.</p>
<p>...</p>
<p>up until what came after.</p>
<p>The system rebooted and presented me with a login screen to which I gave the credentials I provided during the setup process.</p>
<p>Then the screen turned black and a DOS command prompt opened. And a second one, minimized.</p>
<p>The first two lines in that DOS prompt were</p>
<pre class="code">echo "Please wait"
Please wait</pre>
<p>I can understand that Lenovo wanted to get their machines out and that they may be willing to sacrifice a bit of Vista's shininess. But they obviously lack even the basic batch knowledge of putting "@echo off" as the first command in their setup script, making the installation even more unpleasant.</p>
<p>But wait... it's getting worse...</p>
<p>The script ran and, with echo left on, displayed the horrors to me: ZIP file after ZIP file was unpacked into the Application Data folder of the new user. MSI file after MSI file was installed. All without meaningful progress reporting (to a non-techie, that is).</p>
<p>Then some Lenovo registration assistant popped up asking me all kinds of personal questions with no way to skip it, but the worst thing about it was the font it used: MS Sans Serif - without <em>any</em> font smoothing. This looked like Windows 98, removing the last bit of WOW from Vista ( :-) ).</p>
<p>Then it nagged me about buying Norton Internet Security.</p>
<p>And finally it let me through to the desktop.</p>
<p>And... oh the horror:</p>
<ul>
<li>My earlier choice of background image was ignored. I was seeing a Lenovo-Logo all over the place.</li>
<li>On the screen was a Vista built-in assistant telling me to update the Windows Defender signatures. <strong>It looked awful.</strong> Jagginess all over the place: ClearType was clearly off and the default font of Windows looks <em>awful</em> without ClearType.</li>
<li>It's <em>impossible</em> for a non-techie to fix that ClearType thing as it's buried deep in the Control Panel - it's supposed to be on and never to be touched by normal users.</li>
<li>In the notification area were <em>three</em> icons telling me about WLAN connectivity: Windows' own, the ThinkPad driver's and the one of the ThinkVantage Network Access tool (the last one has a bug, btw: it constantly keeps popping up a balloon telling me that it's connected. If I close it, it reopens 30 seconds later).</li>
</ul>
<p>I didn't do anything to fix this, but quickly joined the machine to the domain in the hope that logging in to that would give me the Vista default profile.</p>
<p>But no: another MSI installer and <em>still no ClearType</em>.</p>
<p>It's a shame to see how the OEMs completely destroy everything Microsoft puts into making their OSes look and feel "polished". Whatever they do, the OEMs succeed at screwing up the installations.</p>
<p>This is precisely where Apple outshines Windows by far. If you buy a computer by apple, you will have software on it that was put there by Apple, made by Apple, running on an OS made by Apple. Everything is shiny and works out of the box.</p>
<p>Microsoft will <em>never</em> be able to provide that experience to their users as long as OEMs think they can just throw in some crappily made installation tools that destroy all the good experience a new user could have with the system. From scary DOS prompts over crappy (and no longer needed) third-party applications to completely botched preconfiguration (I could *maybe* let that ClearType thingie pass IF they'd chosen a system font that was actually readable with ClearType off - this looked worse than a Linux distribution with an unpatched FreeType).</p>
<p>PC OEMs put no love at all into their products.</p>
<p>Just sticking a Windows Vista sticker on it isn't bringing that "WOW" to the customers at all.</p>
<p>Microsoft should go after the OEMs and force them to provide clean installations with only a minimal amount of customization done.</p>
HD-DVD unlocked2007-02-14T00:00:00+00:00http://pilif.github.com/2007/02/hd-dvd-unlocked<p>Earlier, it was possible to work around the AACS copy protection scheme in use for HD-DVD and Blu-ray on a disc-by-disc basis.</p>
<p>Now it's possible to work around it for every disc.</p>
<p>So once more we are in the situation where the illegal media pirate gets a better user experience than the legal user: the "pirate" can download the movie to watch on demand. He can store it on any storage medium he pleases (like home servers, NASes or optical discs). He can reformat the content to whatever format a particular output device requires (like an iPod) without having to buy another copy. And finally, he can watch the stolen media on whatever platform he chooses.</p>
<p>The original media, in contrast, is very much limited:</p>
<p>The source of the content is always the disc the user bought. It's not possible to store legally acquired HD content on a different medium than the source disc. It's not possible to watch it on <em>any</em> personal computer but the ones running operating systems from Microsoft. The disc may even force the legal user to watch advertisements or trailers before the main content. There is no guarantee that a purchased disc will work with any player - despite player and disc both bearing the same compatibility label (HD-DVD or Blu-ray logos). It's not possible to legally acquire the content on demand and it's impossible to reformat the content for different devices.</p>
<p>Back in the old days, the copy usually was inferior to the original.</p>
<p>In the digital age of DRM and user-money-milking, this has changed. Now the copy clearly provides many advantages the original currently can't provide or the industry does not want it to provide.</p>
<p>I salute the incredibly smart hackers that worked around yet another "unbreakable" copy protection scheme allowing me to create my personal backup copy of any medium I buy so that I can store the content on my NAS and I have the assurance that I'm able to play it when I want and where I want.</p>
<p>I assure you: my happiness is not based on the fact that I can now download pirated movies over BitTorrent. It's based on the fact that I can store legally purchased HD content on the hard drive of my home server and watch it on demand without having to switch media.</p>
<p>Piracy, for me, is a pure usability problem.</p>
Strange ideas gone wrong2007-02-13T00:00:00+00:00http://pilif.github.com/2007/02/strange-ideas-gone-wrong<center><!-- s9ymdb:21 --><img width="250" height="34" src="http://www.gnegg.ch/uploads/apply.png" alt="Screenshot of three buttons: OK - Cancel - Apply" /></center>
<p>The Apply button Windows brought to us with its Windows 95 release is a strange beast.</p>
<p>Nearly all people I know (myself included) misuse the button.</p>
<p>Ask yourself: When you see the three buttons as shown on the screenshot and you want the changes you made in the dialog to take effect, what button(s) do you hit?</p>
<p>Chances are that you press "Apply" and then "OK".</p>
<p>Which obviously is wrong.</p>
<p>The meaning of the buttons is as follows: "Apply" applies the changes you made, but leaves the dialog open. "Cancel" throws the changes away and closes the dialog. "OK" applies the changes and closes the dialog.</p>
<p>So in a situation like the above, hitting OK would suffice.</p>
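<p>The contract is easy to get wrong, so here is a minimal sketch of it as code (a hypothetical <code>Dialog</code> class, purely illustrative - not any real Windows API):</p>

```python
class Dialog:
    """Minimal model of the Win95-style OK / Cancel / Apply contract."""

    def __init__(self, settings):
        self.settings = settings        # the live system settings
        self.pending = dict(settings)   # edits made inside the dialog
        self.open = True

    def edit(self, key, value):
        self.pending[key] = value

    def apply(self):                    # Apply: commit changes, stay open
        self.settings.update(self.pending)

    def ok(self):                       # OK: commit changes AND close
        self.apply()
        self.open = False

    def cancel(self):                   # Cancel: discard pending edits, close
        self.pending = dict(self.settings)
        self.open = False


settings = {"font": "Tahoma"}
d = Dialog(settings)
d.edit("font", "Verdana")
d.ok()                                  # OK alone is enough
print(settings["font"])                 # → Verdana
```

<p>Note that if you hit Apply after every edit, <code>cancel()</code> has nothing left to discard - which is exactly the trap described above.</p>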
<p>I see no real reason why the apply button is there and personally, I don't understand why people insist on hitting it. Mind you, this also affects "educated" people: I perfectly well know how the buttons work and I'm <em>still</em> pressing Apply when it's not needed.</p>
<p>Actually, Apply is a dangerous option set out to defeat the purpose of the Cancel-Button: Many times, I catch myself making changes and hitting "Apply" after every modification I made in the dialog, thus rendering the cancel button useless because I'm constantly applying the changes so Cancel usually will do nothing.</p>
<p>Why is the Apply button there then?</p>
<p>It's to provide the user with feedback of her changes without forcing her to reopen the dialog.</p>
<p>Say you want to reconfigure the looks of your desktop. At first you change the font. Then you hit apply and you watch if you like the changes. If yes, you can now change the background and hit apply again. If not, you can manually change the font back.</p>
<p>Problem is that nobody uses the buttons that way and I personally have no idea why. Is it an emotional thing? Do you feel that you have to hit Apply and OK to really make it stick? I have no idea.</p>
<p>Personally, I prefer the Mac way of doing things: Changes you make are immediately applied, but there's (often) a way to reset all the changes you made when you initially opened the dialog. This combines the feature of immediate response with a clean, safe way to go back to square one.</p>
<p>My question to you is: Do you catch yourself too doing that pointless Apply-OK-sequence? Or is it just me, many people in screencasts, my parents and many customers doing it wrongly?</p>
MediaFork 0.8-beta12007-02-12T00:00:00+00:00http://pilif.github.com/2007/02/mediafork-08-beta1<p>A few <a href="http://www.gnegg.ch/archives/329-ripping-DVDs.html">months</a> <a href="http://www.gnegg.ch/archives/330-DVD-ripping,-second-edition.html">ago</a>, I was looking for a nice usable solution to rip DVDs. I was trying out a lot of different things, but the only application that had acceptable usability and speed was <a href="http://handbrake.m0k.org/">HandBrake</a></p>
<p>Unfortunately, the main developer of that tool has run out of time to continue to develop HandBrake which made the project stall for some time.</p>
<p>Capable fans of the tool have now created a fork, aptly named MediaFork, and they have just released <a href="http://mediafork.dynalias.com/blog/?p=35">Version 0.8-beta1</a> with some fixes.</p>
<p>But that's not all. Aside from the new release, they also created a blog and set up a <a href="http://www.gnegg.ch/archives/303-Developing-with-the-help-of-trac.html">trac</a> environment.</p>
<p>Generally, I'd say the project is back to being totally alive and kicking.</p>
<p>The new release provides a linux command line utility. Maybe I should go ahead and try it out on a machine even more powerful than my Mac Pro (which is running linux without X) - let's see how many FPS I'm going to get.</p>
<p>Anyways: Congratulations to the MediaFork developers for their great release! You're doing for video what iTunes did for audio: You make ripping DVDs <em>doable</em>.</p>
The return of Expect: 100-continue2007-02-02T00:00:00+00:00http://pilif.github.com/2007/02/the-return-of-except-100-continue<p>Yesterday I had to work with a PHP application using the cURL library to send an HTTP POST request to a <a href="http://www.lighttpd.net/">lighttpd server</a>.</p>
<p>Strangely enough I seemed unable to get anything back from the server when using PHP and I got the correct answer when I was using wget as a reference.</p>
<p>This made me check the lighttpd log and I <a href="/2006/09/lighttpd-net-httpwebrequest/">once more</a> (I recommend reading that entry as this one very much depends on it) came across the friendly error 417.</p>
<p>A quick check with <a href="http://www.wireshark.org/">Wireshark</a> confirmed: curl was sending the Expect: 100-continue header.</p>
<p>Personally, I think that 100-continue thing is a good thing and it even seems to me that the curl library is intelligent about it and only does that thing when the size of the data to send is larger than a certain threshold.</p>
<p>Also, even though people are complaining about it, I think lighttpd does the right thing. The expect-header is mandatory and if lighttpd doesn't support this particular header, the error 417 is the only viable option.</p>
<p>What I think though is that the libraries should detect that automatically.</p>
<p>This is because they are creating a behavior that's not consistent to the other types of request: GET, DELETE and HEAD requests all follow a fire-and-forget paradigm and the libraries employ a 1:1 mapping: Set up the request. Send it. Return the received data.</p>
<p>With POST (and maybe PUT), the library changes that paradigm and in fact sends two requests over the wire while pretending in the interface that it's only sending one.</p>
<p>If it does that, then it should at least be capable enough to handle the cases where their scheme of transparently changing semantics breaks.</p>
<p>Anyways: The fix for the curl-library in PHP is:</p>
<pre class="code">curl_setopt($ch, CURLOPT_HTTPHEADER, array('Expect:'));</pre>
<p>Though I'm not sure how pure this solution is.</p>
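<p>To illustrate that <code>Expect: 100-continue</code> is nothing more than a request header the client chooses to send, here is a small self-contained Python sketch using only the standard library. Note that <code>http.client</code> does <em>not</em> perform the wait-for-100 dance that curl implements; it just puts the header on the wire and sends the body right away:</p>

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}  # headers as seen by the server

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        received.update(self.headers)   # record what the client sent
        self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):       # keep the demo quiet
        pass

# serve exactly one request on an ephemeral port, in the background
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.handle_request, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("POST", "/", body=b"x" * 2048,
             headers={"Expect": "100-continue"})
resp = conn.getresponse()
print(resp.status, received.get("Expect"))   # → 200 100-continue
server.server_close()
```

<p>The server saw the header, but nothing special happened - which is why a server that doesn't implement the continue handshake is entitled to answer 417 instead.</p>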
Knives, Fingers and washing dishes2007-01-22T00:00:00+00:00http://pilif.github.com/2007/01/knives-fingers-and-washing-dishes<p><img width="166" height="133" style="float: right; border: 0px; padding-left: 5px; padding-right: 0px;" src="http://www.gnegg.ch/uploads/750px-Wiegemesser_fcm.jpg" alt="" /></p>
<p>About two or three weeks ago, I discovered a new passion of mine: cooking.</p>
<p>Don't laugh. Cooking is like programming: Doing it is a lot of fun and its rewards - when done right - are worth so much more than any work you could have put into it - especially if you value a good meal as much as I do.</p>
<p>With cooking comes the cleaning of dishes and the various tools you need while doing the job.</p>
<p>Last Saturday, after preparing a nice and very tasty tomato soup, I put a knife like the one you are seeing to the right (thanks <a href="http://de.wikipedia.org/wiki/Wiegemesser">Wikipedia</a>) into the dish water - together with other dirty things, ready to clean them up.</p>
<p>Then I reached into the foam-covered dishwater to take out one of the things in it to rinse it clean.</p>
<p>I'm sparing you the picture of how my finger looked once I finished pushing it into the blade of the <a href="http://en.wikipedia.org/wiki/Kitchen_knife#Mincing">mincing knife</a>.</p>
<p>Seeing how the finger looks right now, I'm pretty sure I should have gone to a doctor to have it stitched. But I didn't have time back then, and now it has healed enough that stitches won't do any good without reopening the wound, which I certainly don't want anybody to do right now (it stopped hurting this morning).</p>
<p>On the upside, I will have a nice scar to show around :-)</p>
<p>Conclusions:</p>
<ol>
<li>Don't put knives into foam covered dish water</li>
<li>Typing with nine fingers is quite hard if one of the disabled fingers is your middle-finger</li>
<li>Cooking can be a painful experience.</li>
<li>We never stop learning.</li>
<li>Blogs really are pointless sometimes</li>
</ol>
Two speed runs2007-01-15T00:00:00+00:00http://pilif.github.com/2007/01/two-speed-runs<p>You know I'm very much into speed running through games.</p>
<p>You probably aren't.</p>
<p>So, in the last few weeks, two runs were posted that may help you get going, as they show perfectly how much fun watching these videos can be. Both show an immense amount of precision and sheer speed:</p>
<ol>
<li><a href="http://tasvideos.org/Players-126up.html">Xaphan</a> did an <a href="http://tasvideos.org/750M.html">emulated run</a> of Mega Man Zero 2 on the GBA. Note that this game isn't played the way a real person could play it. During its creation, technical means like slowdown (or even frame advance for frame-by-frame precision) and save states were used. Still: enjoy the precision and speed in this one.</li>
<li>Josh Mangini did a <a href="http://speeddemosarchive.com/NinjaGaidenBlack.html#SS">single segment run</a> of Ninja Gaiden on a real XBox. This is not emulated. What ever you see is the skill of a real player playing through the game. I didn't know Ninja Gaiden before seeing this run, but have a look at the speed and effects you are seeing when watching this run. Isn't this just cool?</li>
</ol>
<p>Congratulations to both players. While both runs may not be perfect and both games may not be that famous, both runs are very impressive to watch due to sheer speed.</p>
<p>I for one had lots of fun watching them on my home cinema setup.</p>
10 Mbit/s2007-01-09T00:00:00+00:00http://pilif.github.com/2007/01/10-mbits<center>
<img width="361" height="94" src="http://www.gnegg.ch/uploads/10mbit.png" alt="" />
</center>
<p>Yesterday, my provider announced an upgrade of their bandwidth offerings to up to 10 Mbit/s.</p>
<p>Of course I went ahead and updated my 6Mbit subscription to the new speed.</p>
<p>And look: The change has already been applied.</p>
<p>This means that I now have 10 Mbit/s downstream and - which is becoming more and more important to me - the decent upstream of 1Mbit/s.</p>
Vista, AC3, S/PDIF2007-01-08T00:00:00+00:00http://pilif.github.com/2007/01/vista-ac3-spdif<p>Since around December 31st last year, my home cinema has been up and running. That day was the day when I finally had all the equipment needed to mount the screen that had arrived on December 29th.</p>
<p>It was a lot of work, but the installation just rocks. And I've <a href="http://www.gnegg.ch/archives/318-Upgrading-the-home-entertainment-system.html">already blogged</a> about the main components of the thing: The Gefen HDMI extender and the Denon AVR-4306.</p>
<p>The heart of the system consists of <a href="http://www.gnegg.ch/archives/291-Computers-under-my-command-Issue-1-shion.html">shion</a> serving content (thankfully, the terabyte hard drive was <a href="http://news.com.com/2100-1041_3-6147409.html">announced last week</a> - it's about time) and a brand new 1.8 GHz Mac Mini running Windows Vista (RC2) in Boot Camp which is actually displaying the content.</p>
<p>I've connected a Windows Media Center remote receiver (which Microsoft sells to OEMs) so I can use the old IR remote of my Shuttle MCE machine.</p>
<p>The Mac Mini is connected to the Denon receiver via a DVI-to-HDMI adapter and an optical digital cable for the audio.</p>
<p>And that last thing is what I'm talking about now.</p>
<p>The problem is that Microsoft changed a lot about how audio works in Vista and I had to learn it the hard way.</p>
<p>At first, I couldn't hear any sound at all. That's because Vista treats all outputs of your sound card as separate entities and lets you configure which sounds go to which connector.</p>
<p>The fix there was to set the S/PDIF connector as system default (in the sound applet of control panel) which fixed the standard windows sounds and stereo sound for me.</p>
<p>Actually, the properties screen of the S/PDIF connector already contains options for AC3 and DTS, complete with a nice testing feature allowing you to check your receiver's support for the different formats by actually playing some multichannel test sound.</p>
<p>The problem is that this new framework is mostly unsupported by the various video codecs out there.</p>
<p>This means that even if you get that control panel applet to play the test sound (which is easy enough), you won't get AC3 sound when you are playing a movie file. You <em>still</em> need to get a codec for that.</p>
<p>But most codecs don't work right any more in vista as the S/PDIF connector now is a separate entity and seems to be accessed differently than in XP.</p>
<p>Usually, the only thing I install on a new Windows machine I need to play video with is <a href="http://en.wikipedia.org/wiki/FFmpeg">ffmpeg</a> which actually has some limited support for Vista's way of handling S/PDIF: in the audio settings dialog, you can select "Output" and then in the formats list for S/PDIF, you can check AC3. Unfortunately, this unchecks the PCM formats.</p>
<p>This means that you will get sound in movies with an AC3 track, but no sound at all in every other movie - ffmpeg seems (emphasis on seems - I may just not have found a way yet) unable to either encode stereo to AC3 or output both PCM and AC3 without changing settings (not at the same time of course).</p>
<p><a href="http://ac3filter.net/">AC3filter</a> works better in that regard.</p>
<p>Depending on the hour of the day (...), it's even able to work with the S/PDIF output without forcing it to encode stereo to AC3 (which AC3filter is capable of doing).</p>
<p>So for me the solution to the whole mess was this:</p>
<ol>
<li>Install the latest build of ffmpeg, but don't let it handle audio</li>
<li>Install AC3filter</li>
<li>Open the configuration tool and on the first page enable S/PDIF.</li>
<li>On the system tab, enable passthrough for AC3 and DTS.</li>
</ol>
<p>This did the trick for me.</p>
<p>As time progresses, I'm certain that the various projects will work better and better with the new functionality in Vista which will make hassles like this go away.</p>
<p>Until then, I'm glad I found a workable solution.</p>
VMWare Server, chrony and slow clocks2007-01-03T00:00:00+00:00http://pilif.github.com/2007/01/vmware-server-chrony-and-slow-clocks<p>We have quite many virtual machines running under VMWare server. Some for testing purposes, some for quite real systems serving real webpages.</p>
<p>It's wonderful. Need a new server? Just <tt>cp -r</tt> the template I created. Need more RAM in your server? No problem. Just add it via the virtual machine configuration file. Move to another machine? No problem at all. Power down the virtual machine and move the file where you want it to be.</p>
<p>Today I noticed something strange: The clocks on the virtual machines were <em>way</em> slow.</p>
<p>One virtual second was about ten real seconds.</p>
<p>This was so slow that chrony which I used on the virtual machines thought that the data sent from the time servers was incorrect, so chrony was of no use.</p>
<p>After a bit of digging around, I learned that VMware Server needs access to /dev/rtc to provide the virtual machines with a usable time signal (usable as in "not too slow").</p>
<p>The host's /var/log/messages was full of lines like this (you'll notice that I found yet another girl from a console RPG to name that host):</p>
<pre class="code">
Dec 15 16:12:58 rikku /dev/vmmon[6307]: /dev/rtc open failed: -16
Dec 15 16:13:08 rikku /dev/vmmon[6307]: host clock rate change request 501 -> 500
</pre>
<p>-16 means "device busy"</p>
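<p>As a quick sanity check of that log line (nothing VMware-specific here - kernel drivers report failures as negative errno values, so -16 is just -EBUSY), in Python:</p>

```python
import errno
import os

log_code = -16                     # from: /dev/rtc open failed: -16
print(errno.errorcode[-log_code])  # → EBUSY
print(os.strerror(-log_code))      # e.g. "Device or resource busy" on Linux
```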
<p>The fix was to stop chrony from running on the host machine so VMWare could open /dev/rtc. This made the error messages vanish and additionally it allowed the clocks of the virtual machines to work correctly.</p>
<p>Problem solved. Maybe it's useful for you too.</p>
Button placement2006-12-04T00:00:00+00:00http://pilif.github.com/2006/12/button-placement<center><a href="http://www.gnegg.ch/uploads/activesync.png"><img width="300" height="182" style="border: 0px; padding: 5px;" src="http://www.gnegg.ch/uploads/activesync_thumb.png" alt="" /></a></center>
<p>Besides the fact that this message is lying to me (the device in question certainly is a Windows Mobile device and there can't be any cradle problem because it's an emulated image ActiveSync is trying to connect to), I have one question: What exactly do the OK and the Cancel button do?</p>
<p>And this newly created dialog is in ActiveSync 4.2 - way after the MS guys are said to have seen the light and are trying to optimize usability.</p>
<p>Oh and I could list some other "fishy" things about this dialog:</p>
<ul>
<li>It has no indication of what the real problem is (a soft reset of the emulator image helped, by the way).</li>
<li>It has way too much text on it</li>
<li>Trying to format a list using * and improper indentation looks very unprofessional. Judging from the bottom part of the dialog where the buttons are, this is no plain <tt>MessageBox</tt> anyways, so it would have been doable to fix that.</li>
<li>The spacing between the buttons is not exactly consistent with the Windows standard</li>
</ul>
<p>Dialogs like these are precisely why I doubt that Windows Mobile really is the right OS to run on a barcode scanner - at least if it's a scanner that will be distributed among end users with no clue about PCs. It's such a good thing that the scanners finally have GPRS included.</p>
Debugging PocketPCs2006-11-27T00:00:00+00:00http://pilif.github.com/2006/11/debugging-pocketpcs<p>Currently I'm working with Windows Mobile based barcode scanning devices. With .NET 2.0, actually developing real-world applications for the mobile devices using .NET has become a viable alternative.</p>
<p>.NET 2.0 combines sufficient runtime speed (though you often have to test for possible performance regressions) with a very powerful development library (really usable - as compared to .NET 1.0 on smart devices) and unbeatable development time.</p>
<p>All in all, I'm quite happy with this.</p>
<p>There's one problem though: The debugger.</p>
<p>When debugging, I have two alternatives and both suck:</p>
<ol>
<li><p>Use the debugger to connect to the real hardware. This is actually quite fast and works flawlessly, but whenever I need to forcibly terminate the application (for example when an exception happened or when I'm pressing the Stop-Button in the debugger), the hardware crashes somewhere in the driver for the barcode scanner.</p><p>Parts of the application stay in memory and are completely unkillable. The screen freezes</p><p>To get out of this, I have to soft-reset the machine and wait half a century for it to boot up again.</p></li>
<li><p>Use the emulator. This has the advantage of not crashing, but it's so <em>slow</em>.</p><p>From the moment of starting the application in VS until the screen of the application is loaded in the emulator, nearly three minutes pass. <strong>That</strong> slow.</p></li>
</ol>
<p>So programming for mobile devices mainly contains of waiting. Waiting for reboots or waiting for the emulator. This is wearing me down.</p>
<p>Usually, I change some 10 lines or so and then run the application to test what I've just written. That's how I work and it works very well because I get immediate feedback and it helps me to write code what's working in the first place.</p>
<p>Unfortunately, with these prohibitive long startup times, I'm forced to write more and more code in one batch which means even more time wasted with debugging.</p>
<p>*sigh*</p>
My take on the intellectual property debate2006-11-26T00:00:00+00:00http://pilif.github.com/2006/11/my-take-on-the-intellectual-property-debate<p>Despite the fact that I fear I'm not totally qualified to have an opinion regarding the ongoing debate over intellectual property, sometimes, I think about the problem too and I certainly <em>do</em> have an opinion.</p>
<p>To say it with the tongue of the usenet, IANAL, but bear with me when I finally take the time to write down my own ideas on the IP debate:</p>
<p>When you take a look at todays landscape, you'll clearly see clashing interests. On one side, you have the authors (I am one of these in a sense - I write software) that more or less wish to make a living with their work. Then you have the people selling the work created by the authors and then you have the consumers which should pay to actually consume the work produced.</p>
<p>Of course, we want neither the authors nor the resellers to starve to death, so there must be some incentive for the consumers to actually consume the goods and to compensate the authors and, even more so, the distributors for their work.</p>
<p>That's what we have created the term intellectual property for.</p>
<p>Even though you as the consumer get to consume the work of the author, that's all you can do. In theory, you can't resell, redistribute, copy or whatever else you'd want to do with the work of the initial author. You pay for your right to consume the initial work. If you want to do more (like creating a <em>derivative work</em>), you naturally have to pay more (per copy of that derived work you distribute) - at least that's what society works like.</p>
<p>Let me make an example. <a href="http://www.drs.ch/drs.html">DRS</a>, the Swiss national radio station, created wonderful audio plays about a certain private investigator called «Franz Musil». The first two parts of the series (currently, there are five of them if I counted correctly) will never ever be available on CD for us consumers to buy:</p>
<p>In the production they used tiny pieces of music for which they don't have the license to sell on CD.</p>
<p>Even though the original part of that audio play is immense compared to those small pieces of music, the original publisher of the pieces in question <em>still</em> has a say over the distribution of something completely different and <em>original</em> that has come out of the initial work.</p>
<p>Later audio plays contain music they created themselves and these plays are actually available to buy on CD. This whole situation is bad for us the consumers (the plays are really good), DRS (they'd like to sell their original work) and the initial author of the music in question (because fewer people now hear his work).</p>
<p>Especially in matters of software, it gets even worse though: while copyright law protects the work as a whole, there's the discussion about patents that actually manages to protect bits and pieces of your idea as an author.</p>
<p>Let's say I write a poem and I distribute it using the old and known methods (via some publisher); then that poem is protected by the publisher's copyright (I had to sign off all rights I had on the poem for the company to do the work).</p>
<p>If someone takes my publisher's poem (remember, it's not my poem any longer; it's the publisher's), sets his own name below it and sells it, then he violates my publisher's copyright. So far so good.</p>
<p>But imagine that my publisher went further ahead and, besides taking all rights to my work, also <em>patented</em> the «method or apparatus to put letters in context to form a meaning»... (don't laugh - today's understaffed and underqualified patent offices can clearly be fooled into granting such a patent)</p>
<p>Now my publisher not only made sure that <em>my</em> poem can't be copied, they also made sure that no one else will ever be able to write a poem by lining up characters.</p>
<p>Now let's go ahead to distribution to consumers, but let's stay with my poem (which is the only poem in existence, due to the act of spelling now <em>also</em> being my publisher's property).</p>
<p>Naturally, my publisher wants to maximize the cash they can make with their newly acquired poem. On one hand, they have expensive lawyer bills to pay and on the other, they try to use their new poem to recoup the money wasted on less successful poems that came before the one I initially wrote (just to say it once and for all: I don't write poems. And if I did, I would <strong>never</strong> assign the copyright to a publisher).</p>
<p>Now, for a poem, you have a fixed-sized group of recipients: People capable of reading (and thus violating that patent granted earlier) and interested in poems.</p>
<p>So to maximize income, the publisher must make sure that everyone in the targeted group goes ahead and pays the publisher for that new poem. Besides advertising it to reach an initial amount of people, the publisher makes sure that everyone reading that poem pays for doing so one way or another.</p>
<p>One way is to sell books. Another is to publicly perform the poem while getting paid both from entrance fees and third-party sponsors. Or they create an audiobook and sell that.</p>
<p>Of course, if the publisher sells a book to one person, they obviously would want to sell another book to a friend of that person. This is why copying is disallowed.</p>
<p>To further maximize profits, the publisher now sees a way to make the initial person actually buy more than one copy of the same book: A book you buy destroys itself after a set number of days. And you can only read the book while in one predefined room. When you move to another location, the book renders itself unreadable.</p>
<p>All that magic protecting that book can of course go wrong due to various reasons and in that case, the publisher can make the person go ahead and just buy another copy of the same book...</p>
<p>And <em>this</em> is what's fundamentally wrong.</p>
<p>People are not used to not owning something they pay for.</p>
<p>When I buy myself an apple, I can eat it when I want and where I want. When I buy myself furniture, I can place it where I want and I can sell it to whomever I want. But when I buy a piece of music in the iTunes music store (using this as an example because it's well-known), then I can only hear it on so many devices. If I buy the n-th new computer, I need to buy the song again. Also, I cannot resell the song. And one day, when Apple is gone or running the Music Store is no longer interesting for them, my songs will stop working too.</p>
<p>When I buy a book, it's my responsibility to handle it with care and if I succeed in doing that, then the book I buy today is still readable in hundreds of years. No external influence not ultimately under my control can take away that book from me. No company going out of business, no company losing interest in providing me with a "license" to read my book.</p>
<p>The more time passes, the more patents are granted and the more strict DRM is put in place.</p>
<p>And - now we finally come to the core of the whole thing - the more strict distribution of new content is handled, the more expensive creating derivative work gets, the more our society gets stuck.</p>
<p>I postulate that no person is able to create truly original works. <em>Everything</em> one creates is influenced by outside factors. News postings. Books. Music. Other software: Either you accept that outside influence and improve upon that or you get slowed down more and more, always hitting walls because "someone was already there".</p>
<p>By enforcing distribution limitations and patents, and thus restricting the building blocks of future work, society slows down scientific and cultural evolution. Or it passes control over that evolution fully to big distribution companies that actually have the money to pay all the royalties needed.</p>
<p>Individual authors (no matter what profession) lose their capability of creating and releasing novel work because each and every possible building block is protected and owned by a big company.</p>
<p>The end point of the current trajectory is a conglomerate of two or three big companies owning all rights to all new scientific and cultural advancement. These companies will be constantly paying each other royalty fees for the patents and copyrights they mutually violate.</p>
<p>If you want to be an author, you are not allowed to create any work until you have a contract with one of these big companies. Working will only be possible in close proximity to a lawyer because the big companies still want to maximize their earnings and thus watch closely to minimize the cost of the new work created.</p>
<p>When we reach that point, all advancement of civilization (which is in large part defined by advancement of culture) comes to a halt and we end up back in the middle ages, where only a few enlightened people (monks) were able to create cultural works (because only they could write). Everyone else had to work for their survival and pay taxes.</p>
<p>In an ideal world, copyright and patent law gets radically changed to allow freely creating derivative works, as long as there is a certain percentage of new content in the created work and the original content is properly attributed.</p>
<p>Let's say 60%, though this obviously must be tweaked by people far more intelligent than I am.</p>
<p>If I write a poem, in the ideal world, I can keep the copyright and I can distribute it however I see fit. Or I can ask a publisher to do that work for me while I keep the initial copyright on it. The more work the publisher has to do to advertise my work, the more I will be paying them. No changes here, besides the fact that I retain the copyright.</p>
<p>The publisher still tries to sell the product. But as creating derivative works is now permitted within certain boundaries, expenses for both legal and technical protection go down. The publisher can once again focus on what they were paid to do in the first place.</p>
<p>If someone really likes my poem, she can go ahead and take it to create a new, better poem. Maybe longer. Maybe with a completely different message. Maybe the new author just takes out a verse or two. Maybe the whole poem. It doesn't matter.</p>
<p>When she is finished, she roughly checks that there's 60% of novel art in it and then goes ahead to distribute the poem - either herself or via a publisher.</p>
<p>This model, by the way, works. It's in use today. Everyday. It's an invention by geeks like you and me. It's called Free Software. It doesn't even have a limitation that defines a percentage of new content to allow for redistribution under ones own copyright.</p>
<p>Despite creating a platform where knowledge can be openly shared, people are still able to make a living from their work. The money is in the services rendered for a specific need: customize a piece of software for a specific working environment. Publicly present the poem from the examples above at some poetry event. Provide the end user with a package of multiple poems collected together in one book...</p>
<p>There are so many things still to do and which are completely doable without forcing all scientific and cultural advancement of society to stop or at least go through a lawyer and through courts.</p>
<p>We are the new generation. It's our task to see the shortcomings of the current system. It's our task to see opportunities to create a new and better system.</p>
<p>It's our task to fix this problem once and for all.</p>
<p>The whole Free Software movement is a big step in the right direction. Thank you, Free Software community. You show us the way we all have to go.</p>
<p>Let's move!</p>
Quality of video game consoles2006-11-21T00:00:00+00:00http://pilif.github.com/2006/11/quality-of-video-game-consoles<p>First, there was <a href="http://editorials.teamxbox.com/xbox/1651/The-Red-Ring-of-Death/p1/">The Red Ring Of Death</a>, then we got <a href="http://www.joystiq.com/2006/11/18/ps3-is-following-in-360s-footsteps/">the beep of death</a> and now we got the <a href="http://www.kotaku.com/gaming/wii/wiiconnect-bricks-wii-216036.php">Error 110213 of death</a>.</p>
<p>What is it with modern game consoles?</p>
<p>Remember the NES? Plug in, turn on, play.</p>
<p>I know so many people who owned or still own a NES. Not one of them ever had a defective device.</p>
<p>Same goes for the SNES. Or any other console.</p>
<p>Is this obvious degradation in quality the price of ever-increasing complexity? Is this <a href="http://www.gnegg.ch/archives/133-The-price-of-abstraction.html">the price of abstraction</a>?</p>
<p>I wonder: what will ultimately put an end to the ever-increasing evolution of technical devices as we know them today? Physical limitations like the theory of relativity, or the plain inability of our brains to comprehend the complexity of the devices we create?</p>
Living without internet at home2006-11-20T00:00:00+00:00http://pilif.github.com/2006/11/living-without-internet-at-home<p><a href="http://www.lipfi.ch/gallery2/v/flat/renovation/IMG_0012.JPG.html"><img src="http://www.lipfi.ch/gallery2/d/1779-2/IMG_0012.JPG" width="133" height="200" style="float: left; margin: 5px" /></a></p>
<p>When your fuse box looks like the one on this photo and when your bedroom wall looks <a href="http://www.lipfi.ch/gallery2/v/flat/renovation/IMG_0014.JPG.html">like this</a> then you can be sure of one thing: You don't have power.</p>
<p>What's more interesting though is that for once in my whole life, Cablecom did something right: three months ago, I had them move my cable internet access from my old address to the new one by November 15th.</p>
<p>The problem is that you have to do this three months in advance and back then, I wasn't sure how long the renovation of my bathroom was going to take. So I guessed.</p>
<p>Of course that guess turned out to be wrong: The bathroom, while making splendid progress, is still two weeks off from being completed.</p>
<p>But there was no way to explain that to Cablecom.</p>
<p>They successfully switched over my internet connection from my current flat to the new one where I don't have my stuff, some essential parts of my furniture (like my bed), and even worse: No power, no water, no toilet (that is currently lying on the balcony waiting for the bathroom to be completed before they can replug it).</p>
<p>So for now, Internet is something I can only have at work.</p>
<p>The irony is that usually, Cablecom screws up everything you may want from them. Their internet access is flawless and always working, but whenever you have <em>any</em> administrative request, you can be sure that they screw up.</p>
<p>To underline this, I have two nice conversations with them:</p>
<div style="background-color: #f4f4f4; padding: 3px">
<p>
<b>Me:</b> Why do I not get any bills from you? As much as I like not paying for your service, I'd hate you turning it off because I'm not paying for it. Please start sending me bills!
</p>
<p><b>Them:</b> What's your customer ID?</p>
<p><b>Me:</b> No Idea. But my name is Philip Hofstetter and I live at ...</p>
<p><b>Them:</b> Let me check...</p>
<p><b>Them:</b> Are you sure that you are our customer? I can't find you here...</p>
<p><b>Me:</b> Totally sure. Yes.</p>
<p><b>Them:</b> That just can't be.</p>
<p><b>Me:</b> And yet it is: As a matter of fact, I'm currently using the phone you have sent to me calling over your connection you provide me with and I'd really like to pay for.</p>
<p><b>Them:</b> Sure?</p>
</div>
<p>That episode ended with me getting one hell of an envelope containing about 20 bills. I'm sure that had I not called, I would have been able to surf and phone for free, but I didn't want to take the risk of ending up with no internet and no way of getting it back. Besides, not paying for a service used is unfair for both the provider and the other people who are forced to pay.</p>
<p>The other episode was shorter and happened to Ebi, a friend of mine:</p>
<div style="background-color: #f4f4f4; padding: 3px">
<p><b>Ebi:</b> Hello, I have a question: What is my customer ID? My Name is xxx and I live in xxx</p>
<p><b>Them:</b> No problem. Can I first have your customer ID though?</p>
</div>
<p>Other episodes revolve around redundant modems being delivered, accounts where multiple bills are sent for the same service, not being able to fix an obvious defect at the in-house repeater or a CHF 100'000+ water damage caused by them not sealing a pipe properly (their insurance paid for that, of course).</p>
<p>Still: Their internet service is kick-ass! No downtime. Maximum speeds. No forced disconnection. No forced reverse proxy or other crap.</p>
<p>That's why I prefer them to any ADSL provider out there.</p>
<p>It's just ironic that a company this prone to screwing up administrative tasks actually does the right thing that one time where some delay would not have mattered - or would even have been preferred.</p>
<p>Well... at least I have one more reason to be looking forward to December now.</p>
ServeRAID - Fun with GUI-Tools2006-11-19T00:00:00+00:00http://pilif.github.com/2006/11/serveraid-fun-with-gui-tools<p>We've recently bought three more drives for our in-house file server. Up until now, we had a RAID 5 array (using an IBM ServeRAID controller) spanning three 33GB drives. That array recently got very, very close to being full.</p>
<p>So today, I wanted to create a second array using the three new 140GB drives.</p>
<p>When you download the ServeRAID support CD image, you get access to a nice GUI tool which is written in Java and can be used to create arrays on these ServeRAID controllers.</p>
<p>Unfortunately, I wasn't able to run the GUI at first because somehow, the Apple X11 server wasn't willing/able to correctly display the GUI. I always got empty screens when I tried (the server is headless, so I had to use X11 forwarding via ssh).</p>
<p>Using a Windows machine with <a href="http://www.straightrunning.com/XmingNotes/">Xming</a> (which is very fast, working perfectly and totally free as in speech) worked though and I got the GUI running.</p>
<p>All three drives were recognized, but one was listed as "Standby" and could not be used for anything. Additionally, I wasn't able to find any way in the GUI to actually move the device from Standby to Ready.</p>
<p>Even removing and shuffling the drives around didn't help. That last drive was always recognized as "Standby", independent of the bay I plugged it into.</p>
<p>Checking the feature list of that controller showed nothing special - at first I feared that the controller just didn't support more than 5 drives. That fear was without reason though: The controller supports up to 32 devices - more than enough for the server's 6 drive bays.</p>
<p>Then, looking around on the internet, I didn't find a solution for my specific problem, but I found out about a tool called "ipssend" and there was documentation how to use it in an <a href="http://www.redbooks.ibm.com/redbooks/pdfs/sg245853.pdf">old manual</a> by IBM.</p>
<p>Unfortunately, newer CD images don't contain ipssend any more, forcing you to use the GUI, which in this case didn't work for me. It may be that there's a knob to turn somewhere, but I just failed to see it.</p>
<p>In the end, I found a very, very old archive at the IBM website which was called <a href="ftp://ftp.software.ibm.com/pc/pccbbs/pc_servers/linux_dumplog.tgz">dumplog</a> and contained that <tt>ipssend</tt> command in a handy little .tgz archive. Very useful.</p>
<p>Using that utility solved the problem for me:</p>
<pre class="code">
# ./ipssend setstate 1 1 5 RDY
</pre>
<p>No further questions asked.</p>
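<p>If I read the linked Redbook correctly, the four arguments to <tt>setstate</tt> are controller number, channel, SCSI ID and target state. Had more than one drive been stuck in Standby, something like the following sketch would have generated all the commands in one go. The channel/ID pairs below are hypothetical, and the commands are only printed, not executed, so the sketch is safe to run anywhere:</p>

```shell
# Print (not run) the ipssend commands that would move several
# drives from Standby to Ready on controller 1.
# setstate arguments: controller, channel, SCSI ID, new state.
controller=1
for dev in "1 5" "1 6"; do     # hypothetical "channel id" pairs
    set -- $dev                # split the pair into $1 and $2
    echo "./ipssend setstate $controller $1 $2 RDY"
done
```

<p>Dropping the <tt>echo</tt> would actually issue the commands, but given how sparsely this state is documented, printing first and running by hand seems the safer approach.</p>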
<p>Then I used the Java-GUI to actually create the second array.</p>
<p>Now I'm asking myself a few questions:</p>
<ul>
<li>Why is the state "Standby" not documented anywhere (this is different from a drive in Ready state configured as Standby drive)?</li>
<li>Why is there no obvious way to de-standby a drive with the GUI?</li>
<li>Why is that cool little <tt>ipssend</tt> utility no longer officially available?</li>
<li>Why is everyone complaining that command line is more complicated to use and that GUIs are so much better when obviously, the opposite is true?</li>
</ul>
The atmosphere in good games2006-11-17T00:00:00+00:00http://pilif.github.com/2006/11/the-atmosphere-in-good-games<p>I'm a big fan of the <a href="http://en.wikipedia.org/wiki/Metroid_%28series%29">Metroid series</a>.</p>
<p>It took me a long while to get used to it though. Back in the day, when there was just Metroid, I never got very far in it - and I've only seen the game running at a friend's home.</p>
<p>Then came the emulators and I gave Super Metroid a shot, but I didn't <em>get it</em>. I didn't know what to do, where to go and how to progress - the whole thing didn't make any sense to me.</p>
<p>Then came Metroid Fusion on the GBA which I actually bought.</p>
<p>And this was when I got it.</p>
<p>The concept is the same as it's in Zelda: You walk as far as you can go with your current equipment, you get better equipment, opening new paths and then finally, you meet the last boss.</p>
<p>Of course there's another element to a real Metroid game: brilliant level design. The designers have thought of so many places where you can "cheat" and break the obvious sequence of events. Doing so varies in difficulty from quite difficult to pull off at first (but easy later on) to insanely hard.</p>
<p>Metroid Fusion is a bit off in this regard though - its sequence is quite linear and there's only one relevant part in the game where you can skip some content and are rewarded with some extra <a href="http://www.metroid2002.com/fusion/other_secret_message.php">movie sequence</a>. Additionally, it's hard as hell to pull off. Much, much harder than the linked video may make you think, as it depends on your reacting in tenths of seconds.</p>
<p>But now to the topic: Metroid Prime. And Prime Echoes.</p>
<p>When I started with Prime, I had the same problem as I had when I started with the 2D Metroids: I had no idea where to go, what to do or even finding out how to navigate the world.</p>
<p>This was partly caused by a bad projector with very, very bad contrast in dark areas of the picture - everything was more or less dark gray or black on that projector. Not much fun to play like that.</p>
<p>On the other hand, I played the game like I would play a 3D shooter, expecting the usual smaller levels, lots of shooting and shallow gameplay. Of course this is totally the wrong approach to a game like Metroid: Prime. For 10 minutes, force yourself to think you are playing Super Metroid. Immerse yourself into the world - you have to force yourself for these 10 minutes. And of course, get a better projector.</p>
<p>Then it clicked.</p>
<p>This <em>was</em> a real Metroid. It felt like one and it played like one.</p>
<p>But then something more happened. Something that's the reason why I don't play either Prime or Echoes any more. And the reason is the most impressive thing a game could ever accomplish: I stopped playing out of plain fear. Plain and simple fear.</p>
<p>Fear of the bosses. Fear of the lights turning off and these awful chozo ghosts spawning. Fear of small, cramped rooms. Fear of darkness. And in Echoes it was even worse: Fear of being alone in the dark. Fear of dying alone on the dark side of the planet. Fear of being eaten alive by the darkness surrounding Samus (and actually hurting her).</p>
<p>Notice though: This is not the usual fear of losing an extra life by missing a jump and landing in a hole. It's not the fear of running out of life energy. That's plain old style video game fears.</p>
<p>No. Metroid is <em>real</em>. The fear is <em>real</em>. You see, both games have an incredibly well balanced learning curve. You practically can't die. It can take you longer to accomplish something when you aren't that good/precise, but you don't die. At least I never did.</p>
<p>The atmosphere created by the games is what makes them seem real. There's that encyclopedia with an entry for every creature - even plant life - you encounter. Then there are no visible borders between levels. Sure, you zone between different places, but all is connected. Progress isn't something allowing you to leave zones behind you. Progress is fluent. You go there, come back, go there again... The world feels real.</p>
<p>Samus is all alone in that big world, while there are still artifacts reminding of that old civilization. And there are real dangers in that world.</p>
<p>And the music works very, very well too. Light tunes, sometimes menacing, always fitting.</p>
<p>The graphical art, too, helps complete the illusion of reality. It's not very detailed (it's a GC game after all), but it fits. It creates a believable world.</p>
<p>All those little parts come together to create something I've never before seen in any game I have played. It brings emotions to a new level. The fear I had when playing Prime and Echoes was real. Real fear of the darkness. Of loneliness. And of drowning in that crashed space pirate ship in Prime - I know there is no limit on how long you can be submerged, but still, it felt so incredibly real.</p>
<p>In the end, it was too much for me.</p>
<p>I couldn't get myself around to boot up the game any more - out of fear of dark areas or enemies jumping at me.</p>
<p>So what to say? Both GC Metroids are what I'd like to call the <em>perfect game</em> as they awaken real emotions. Something I never felt when using any other entertainment medium. Watching a movie feels like watching a movie. Reading a book is always reading a book. Playing Half-Life (with much better graphics but a much less credible atmosphere) is like playing a game. Even playing WoW is obviously playing a game.</p>
<p>But playing Metroid is living the game. It's living the world created by these talented designers.</p>
<p>Unfortunately, even though they have created the perfect game, I'm unable to play it. The perfection put into the design made me too afraid to actually play the game.</p>
<p>Now, after around two years, I finally realized that. And I'm just plain impressed.</p>
<p>Do you know the games I was writing about? Did you feel the same? Do you know other games making you feel like that?</p>
Mysql in Acrobat 82006-11-16T00:00:00+00:00http://pilif.github.com/2006/11/mysql-in-acrobat-8<p>I have Acrobat 8 running on my Mac. And look what I've found by accident:</p>
<p>I had console.log open to check something, when I found these lines:</p>
<pre class="code">
061115  9:57:48 [Warning] Can't open and lock time zone table: Table 'mysql.time_zone_leap_second' doesn't exist trying to live without them
/Applications/Adobe Acrobat 8 Professional/Adobe Acrobat Professional.app/Contents/MacOS/mysqld: ready for connections.
Version: '4.1.18-standard'  socket: '/Users/pilif/Library/Caches/Acrobat/8.0_x86/Organizer70'  port: 0  MySQL Community Edition - Standard (GPL)
</pre>
<p>MySQL shipped with Acrobat? Interesting.</p>
<p>The GPL-Version shipped with Acrobat? IMHO a clear license breach.</p>
<p>Of course, I peeked into the Acrobat bundle:</p>
<pre class="code">% pwd
/Applications/Adobe Acrobat 8 Professional/Adobe Acrobat Professional.app/Contents/MacOS
% dir mysql*
-rwxrwxr-x 1 pilif admin 2260448 Feb 20 2006 mysqladmin
-rwxrwxr-x 1 pilif admin 8879076 Feb 20 2006 mysqld
</pre>
<p>Interesting. Shouldn't the commercial edition <em>not</em> print "Community Edition (GPL)"? Even if Adobe doesn't violate the license (because they are just shipping the GPLed server and have either bought a commercial license for the client library - the regular one is GPL too - or written their own client), the GPL clearly states that I can get the source code and a copy of the license. I couldn't find either anywhere though...</p>
<p>I guess I should ask MySQL what's going on here.</p>
Fun with Tags2006-11-16T00:00:00+00:00http://pilif.github.com/2006/11/fun-with-tags<center><img width="309" height="98" style="border: 1px solid gray; padding: 4px;" src="http://www.gnegg.ch/uploads/tagfun.png" alt="" /></center>
<p>I've just seen this on Slashdot today. It seems like we have some opposing opinions on how to tag <a href="http://it.slashdot.org/article.pl?sid=06/11/16/0112214">this article</a>.</p>
<p>IMHO, it's clearly FUD. FUD does not necessarily have to mean something <b>you</b> may fear. It can also be about something <b>they</b> may fear.</p>
Bootcamp, Vista, EFI-Update2006-11-15T00:00:00+00:00http://pilif.github.com/2006/11/bootcamp-vista-efi-update<p>Near the end of October I wanted to install Vista on my Mac Pro, using Bootcamp of course. The reason is that I need a Windows machine at home to watch <a href="http://tasvideos.org">speedruns</a> on it, so it seemed like a nice thing to try.</p>
<p>Back then, I was unable to even get setup going: whenever I selected a partition that wasn't the first partition on the drive (where OS X must be), the installer complained that the BIOS reported the selected partition as non-bootable, and that was it.</p>
<p>Yesterday, Apple released another EFI update which was said to improve compatibility with Bootcamp and to fix some suspend/resume problems (I never had those).</p>
<p>Naturally, I went ahead and tried again.</p>
<p>The good news: Setup doesn't complain any more. Vista can be installed to the second (or rather third) partition without complaining.</p>
<p>The bad news: the Bootcamp driver installer doesn't work. It always cancels out with some MSI error and claims to roll back all changes (which it doesn't - sound keeps working even after that «rollback» has occurred). This means: no driver support for the NVIDIA card of my Mac Pro.</p>
<p>Even after trying to fetch a Vista-compliant driver from NVIDIA, I had no luck: the installer claimed the installation was successful, but the resolution stayed at 640x480x16 after a reboot. Device manager complained about the driver not finding certain resources to claim the device and suggested I turn off other devices... whatever.</p>
<p>So in the Mac Pro case, I guess it's waiting for updated Bootcamp drivers by Apple. I hear though that the other machines - those with an ATI card - are quite well supported.</p>
<p>All you have to do is launch the Bootcamp driver installer with the /a /v parameters to just extract the drivers; then you use the device manager, point it at that directory and manually install the drivers.</p>
The pain of email SPAM2006-11-08T00:00:00+00:00http://pilif.github.com/2006/11/the-pain-of-email-spam<p>Lately, the SPAM problem has gotten a lot worse in my email INBOX. Spammers seem to increasingly check whether their mail gets flagged by SpamAssassin and tweak the messages until they get through.</p>
<p>Due to some tricky aliasing going on on the mail server, I'm unable to properly use the bayes filter of SpamAssassin on our main mail server. You see, I have an infinite amount of addresses which are in the end delivered to the same account, and all that aliasing can only be done <em>after</em> the message has passed SpamAssassin.</p>
<p>This means that even though mail may go to one and the same user in the end, it's seen as mail for many different users by SpamAssassin.</p>
<p>This inability to use Bayes with SpamAssassin means that lately, SPAM has been getting through the filter.</p>
<p>So much SPAM that I began getting really, really annoyed.</p>
<p>I know that mail clients themselves also have bayes based SPAM filters, but I often check my email account with my mobile phone or on different computers, so I'm dependent on a solution that filters out the SPAM before it reaches my INBOX on the server.</p>
<p>The day before yesterday I had enough.</p>
<p>While all mail for all domains I'm managing is handled by a customized MySQL-Exim-Courier setting, mail to the @sensational.ch domain is relayed to another server and then delivered to our exchange server.</p>
<p>Even better: That final delivery step is done after all the aliasing steps (the catch-all aliases being the difficult part here) have completed. This means that I can in-fact have all mail to @sensational.ch pass through a bayes filter and the messages will all be filtered for the correct account.</p>
<p>This made me install <a href="http://dspam.nuclearelephant.com/">dspam</a> on the relay that transmits mail from our central server to the exchange server.</p>
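<p>For the curious: dspam commonly hooks into Exim as a pipe transport on such a relay, with dspam reinjecting the classified message for final delivery. I'm not reproducing my actual configuration here - the following is only a rough, from-memory sketch, and every path, flag and expansion variable in it is an assumption to be checked against the dspam and Exim documentation:</p>

```
# Hypothetical Exim pipe transport on the relay (sketch only).
# Verify the dspam flags against your installed dspam version.
dspam_delivery:
  driver = pipe
  command = /usr/bin/dspam --deliver=innocent,spam --user $local_part@$domain
```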
<p>Even after only one day of training, I'm getting impressive results: DSPAM only touches mail that isn't already flagged as spam by SpamAssassin, which means the mail it sees is carefully crafted to look "real".</p>
<p>After one day of training, DSPAM usually detects junk messages and I'm down to one false negative every 10 junk messages (and no false positives).</p>
<p>Even after running SpamAssassin and thus filtering out the obvious suspects, a whopping <b><em>40%</em></b> of emails I'm receiving are SPAM. So nearly half of the messages not already filtered out by SA are still SPAM.</p>
<p>If I take a look at the big picture, even when counting the various mails sent by various cron daemons as genuine email, I'm getting <em>much more</em> junk email than genuine email per day!</p>
<p>Yesterday, Tuesday, for example, I got - including mails from cron jobs and backup copies of order confirmations for PopScan installations currently in public tests - 62 genuine emails and <em>252 junk mails</em>, of which 187 were caught by SpamAssassin and the rest were detected by DSPAM (with the exception of two mails that got through).</p>
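<p>Those figures can be sanity-checked with a quick awk one-liner; the numbers are taken straight from the paragraph above:</p>

```shell
# Sanity-check of the day's mail statistics:
# 62 genuine mails, 252 junk mails, 187 of the junk caught by SpamAssassin.
awk 'BEGIN {
    genuine = 62; junk = 252; sa_caught = 187
    printf "junk share of all mail: %.0f%%\n", 100 * junk / (genuine + junk)
    past_sa = junk - sa_caught
    printf "junk share after SA:    %.0f%%\n", 100 * past_sa / (genuine + past_sa)
    printf "junk-to-genuine ratio:  %.1f\n",   junk / genuine
}'
```

<p>For this particular day, the junk share after SpamAssassin works out to roughly half of what's left - matching the "nearly half" observation; the 40% figure quoted earlier is presumably an average over a longer period.</p>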
<p>This is insane. I'm getting four times more spam than genuine messages! What the hell are these people thinking? With that volume of junk filling up our inboxes, how could one of these "advertisers" think that somebody is both stupid enough to fall for such a message and intelligent enough to pick the one to fall for from all the others?</p>
<p>Anyways. This isn't supposed to be a rant. It's supposed to be a praise to DSPAM. Thanks guys! You rule!</p>
podcast recommendation2006-10-30T00:00:00+00:00http://pilif.github.com/2006/10/podcast-recommendation<p>I haven't been much into podcasts till now: The ones I heard were boring, unprofessional or way too professional. Additionally, I didn't have a nice framework set up to get them and to listen to them.</p>
<p>That's because I don't often sync my iPod. Most of the time, it's not connected to a computer: about once every two months, I connect it to upload a new batch of audiobooks (I can't fit my whole collection on the nano). So podcasting was - even if I had found a podcast that could interest me - an experience to have while behind the computer monitor.</p>
<p>Now two things have changed:</p>
<ol>
<li>I found the <a href="http://www.linuxactionshow.com/">Linux Action Show</a>. The guys doing that podcast are incredibly talented people. The episodes sound very professionally made, while still not being on the obviously commercial side of things. They cover very, very interesting topics and they are everything but boring. Funny, entertaining and competent. Very good stuff.</li>
<li><p>At least since the release of SlimServer 6.5, my <a href="http://www.slimdevices.com">Squeezebox</a> is able to tune into RSS feeds with enclosures (or podcasts, for the less technically savvy people - not that those would read this blog). Even better: the current server release brought a firmware which finally gives the Squeezebox the capability of natively playing ogg streams.</p><p>Up until now, it could only play FLAC, PCM and MP3, requiring tools like <tt>sox</tt> to convert ogg streams on the fly. Unfortunately, that didn't work as stably as I would have liked, but native OGG support helped <em>a lot</em>.</p></li>
</ol>
<p>So now, whenever a new episode of the podcast is released (once per week - and each episode is nearly two hours in length), I can use my Squeezebox to hear it via my home stereo.</p>
<p>Wow... I'm so looking forward to doing that in front of a cozy fire in my fireplace once I can finally move into my new flat.</p>
DVD ripping, second edition2006-10-13T00:00:00+00:00http://pilif.github.com/2006/10/dvd-ripping-second-edition<p><a href="http://handbrake.m0k.org/">HandBrake</a> is a tool with the worst website possible: The screenshot presented on the index page leaves a completely wrong impression of the application.</p>
<p>When you just look at the screenshot, you will get the impression that the tool is fairly limited and totally optimized for creating movies for handheld devices.</p>
<p>That's not true though. The screenshot shows a light edition of the tool. The real thing is actually quite capable and only lacks the ability to store subtitles in the container format.</p>
<p>And it doesn't know about Matroska.</p>
<p>And it refuses to store x264 encoded video in the OGM container.</p>
<p>Another tool I found after my first <a href="/archives/329-ripping-DVDs.html">very bad experience</a> with ripping DVDs last time is <a href="http://ogmrip.sourceforge.net/">OGMrip</a>. The tool is a frontend for mencoder (of <a href="http://www.mplayerhq.hu/">mplayer</a> fame) and has all the features you'd ever want from a ripping tool, while still being easy to use.</p>
<p>It even provides a command line interface, allowing you to process your movies from the console.</p>
<p>It has one inherent flaw though: It's single threaded.</p>
<p>HandBrake on the other hand can split the encoding work (yes, the actual encoding) over multiple threads and thus can profit <em>a lot</em> from SMP machines.</p>
<p>Here's what I found in matters of encoding speed. I encoded the same video (from a DVD ISO image) with the same settings (x264, 1079kbit/s, 112kbit mp3 audio, 640x480 resolution at 30fps) on different machines:</p>
<ul>
<li>1.4Ghz, G4 Mac mini, running Gentoo Linux with OGMrip: 3fps</li>
<li>Thinkpad T43, running Ubuntu Edgy Eft, 1.6Ghz Centrino, OGMRip: 8fps</li>
<li>MacBook Pro, 2Ghz Core Duo, HandBrake: 22fps (both cores at 100%)</li>
<li>Mac Pro, Dual Dual Core 2.66Ghz, HandBrake: 110fps(!!), 80% total CPU usage (HDD I/O seems to limit the process)</li>
</ul>
<p>This means that encoding a whole 47-minute A-Team episode takes:</p>
<ul>
<li>OGMRip on Mac mini G4: 7.8 hours</li>
<li>OGMRip on Thinkpad: 2.35 hours per episode</li>
<li>HandBrake on MacBook Pro: 1.6 hours per episode</li>
<li>HandBrake on MacPro: 0.2 hours (12 minutes) per episode</li>
</ul>
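<p>For reference, the rough arithmetic behind those figures can be sketched like this (the helper function is mine; it assumes the 30fps source mentioned above, so the results only approximately match the measured times, which were real-world averages):</p>

```python
# Estimate encoding time from the encoder's sustained frame rate,
# assuming a fixed-length episode at a known source frame rate.
def encode_hours(minutes: float, source_fps: float, encode_fps: float) -> float:
    total_frames = minutes * 60 * source_fps
    return total_frames / encode_fps / 3600

# OGMRip on the G4 Mac mini at 3fps:
print(round(encode_hours(47, 30, 3), 1))      # ~7.8 hours
# HandBrake on the Mac Pro at 110fps:
print(round(encode_hours(47, 30, 110) * 60))  # ~13 minutes
```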
<p>Needless to say which method I'm using. Screw subtitles and Matroska - I want to finish ripping my collection this century!</p>
<p>On an additional closing note, I'd like to add that even after 3 hours of encoding video, the Mac Pro stayed very, very quiet. The only thing I could hear was the hard drive - the fans either didn't run or were quieter than the hard drive (which is quiet too).</p>
ripping DVDs2006-10-10T00:00:00+00:00http://pilif.github.com/2006/10/ripping-dvds<p>I have plenty of DVDs in my possession: Some movies of dubious quality which I bought when I was still going to school (like "Deep Rising" - eeew) and many, many episodes of various series (Columbo, the complete Babylon 5 series, A-Team and other pearls).</p>
<p>As you may know, I'm soon to move into a new flat which I thought would be a nice opportunity to reorganize my library.</p>
<p><a href="/archives/291-Computers-under-my-command-Issue-1-shion.html">shion</a> has around 1.5TB of storage space and I can easily upgrade her capacity (shion is the only computer I own that I refer to with a female pronoun - the machine is something really special to me - like the warships of old times) by plugging in yet another USB hub and USB hard drives.</p>
<p>It makes total sense to use that unlimited amount of storage capacity to store all my movies - not only the ones I've downloaded (like <a href="http://speeddemosarchive.com/news.html">video game speed runs</a>). Spoiled by the ease of ripping CDs, I thought that this would be just another little thing to do before moving.</p>
<p>You know: Enter the DVD, use the ripper, use the encoder, done.</p>
<p>Unfortunately, this is proving to be harder than it looked at first:</p>
<ul>
<li>Under Mac OS X, you can try to use the Unix tools with fink or some home-grown native tools. Whatever you do, you either get outdated software (fink) or not really working freeware tools documented in outdated tutorials. Nah.</li>
<li><p>Under Windows, there are two kinds of utilities: On one hand, you have the single-click ones (like <a href="http://www.autogk.me.uk/">AutoGK</a>) which really do what I initially wanted. Unfortunately, they are limited in their use: They provide only a limited number of output formats (no x264, for example) and they hard-code the subtitles into the movie stream. But they are easy to use. On the other hand, you have the hardcore tools like Gordian Knot or MeGUI or even StaxRip. These tools are frontends for other tools that work like Unix tools: Each does one thing, but tries to excel at that one thing.</p><p>This could be a good thing, but unfortunately, it fails at things like awful documentation, hard-coded paths to files everywhere and outdated tools.</p><p>I could not get any of the tools listed above to actually create an x264 AVI or MKV file without either throwing a completely unusable error message ("Unknown exception occurred"), just not working at all, or missing things like subtitles.</p></li>
<li>Linux has <a href="http://www.exit1.org/dvdrip/">dvd::rip</a> which is a really nice solution, but unfortunately, no solution for me as I don't have the right platform to run it on: My MCE machine is - well - running Windows MCE, my laptop is running Ubuntu (no luck with the debian packages and no ubuntu-packages). shion is running Gentoo, but she's headless, so I have to use a remote X-connection which is awfully slow and non-scriptable.</li>
</ul>
<p>The solution I want works on the Linux (or MacOS X) console, is scriptable and - well - works.</p>
<p>I guess I'm going the hard-core way and use <a href="http://www.transcoding.org">transcode</a>, which is what dvd::rip is using - provided I find good documentation (I'm more than willing to read and learn - if the documentation is current enough and actually documents the software that I'm running and not the software as it was two years ago).</p>
<p>I'll keep you posted on how I'm progressing.</p>
Prewritten content2006-10-09T00:00:00+00:00http://pilif.github.com/2006/10/prewritten-content<p>You may have noticed that last week had postings on nearly every day - and all the postings seem to have happened around 8:30am.</p>
<p>The reason for that is that I had a lot of inspiration on last Monday, allowing me to write two or three entries at once. I made Serendipity queue them up and post one on each day.</p>
<p>And as time progressed, I was adding more entries which I could schedule for the future too, thus keeping up the illusion that I was actually posting at 8:30 in the morning - a thing I'm certainly not thinking about actually doing.</p>
<p>While I'm awake at 8:30, I am most certainly not in the mood to post anything - not to speak of the lack of inspiration due to not having surfed the web yet.</p>
<p>Writing content ahead of time has some advantages, like allowing for better editing (much more time to proofread before the entry is posted) and helping to keep the blog alive (a post for every day), but it also has some disadvantages: For one, the entries may not be as deep as ones written in the moment.</p>
<p>After writing down an entry or two, I'm feeling a bit of a burnout, which certainly has negative effects on the entries' length and depth.</p>
<p>And even worse: s9y insists on sending pings when the entry is submitted - not when it's published.</p>
<p>This means that I'm sending out pings for non-existing entries (bad thing) or I'm not sending out pings at all (slightly better).</p>
<p>So in retrospect, I'm going to do both: Posting ahead and posting in real-time.</p>
<p>An insider trick to find out if the posting is pre-written or not would be to look at the posting time: If it's 8:30 in the morning, it's prewritten.</p>
Intel Mac Mini, Linux, Ethernet2006-10-07T00:00:00+00:00http://pilif.github.com/2006/10/intel-mac-mini-linux-ethernet<p>If you have one of these new Intel Macs, you will sooner or later find yourself in the situation of having to run Linux on one of them. (Ok. Granted: The situation may be coming sooner for some than for others).</p>
<p>Last weekend, I was in that situation: I had to install Linux on an Intel Mac Mini.</p>
<p>The whole thing is quite easy to do and if you don't need Mac OS X, you can just go ahead and install Linux like you would on any other x86 machine (provided the hardware is sufficiently new to have the BIOS emulation layer already installed - otherwise you have to install the firmware update first; you'll notice by the Mac not booting from the CD despite holding c during the initial boot sequence).</p>
<p>You can partition the disk to your liking - the Mac bootloader will notice that there's something fishy with the partition layout (the question-mark-on-a-folder icon will blink one or two times) before passing control to the BIOS emulation, which will be able to boot Linux from the partitions you created during installation.</p>
<p>Don't use grub as bootloader though.</p>
<p>I don't know if it's something grub does to the BIOS or if it's something about the partition table, but grub can't launch stage 1.5 and thus is unable to boot your installation.</p>
<p>lilo works fine though (use plain lilo when using the BIOS emulation for the boot process, not elilo)</p>
<p>When you are done with the installation process, something bad will happen sooner or later though: Ethernet will stop working.</p>
<p>This is what syslog has to say about it:</p>
<pre class="ccode">NETDEV WATCHDOG: eth0: transmit timed out
sky2 eth0: tx timeout
sky2 eth0: transmit ring 60 .. 37 report=60 done=60
sky2 hardware hung? flushing</pre>
<p>When I pulled the cable and plugged it in again, the kernel even oops'ed.</p>
<p>The Macs have a Marvell Yukon ethernet chipset. This is what lspci has to tell us: <tt>01:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8053 PCI-E Gigabit Ethernet Controller (rev 22)</tt>. The driver to use in the kernel config is "SysKonnect Yukon2 support (EXPERIMENTAL)" (CONFIG_SKY2)</p>
<p>I guess the EXPERIMENTAL tag is warranted for once.</p>
<p>The good news is, that this problem is fixable. The bad news is: It's tricky to do.</p>
<p>Basically, you have to update the driver with the version that is in the repository of what's going to be kernel 2.6.19</p>
<p>Getting a current version of sky2.c and sky2.h is not that difficult. Unfortunately though, the new driver won't compile with the current 2.6.18 kernel (and upgrading to a pre-rc is out of the question - even more so considering the ton of stuff going into 2.6.19).</p>
<p>So first, we have to patch in <a href="http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=84fa7933a33f806bbbaae6775e87459b1ec584c0">this changeset</a> to make the current release of sky2 compile.</p>
<p>Put the patch into /usr/src/linux and apply it with <tt>patch -p1</tt></p>
<p>Then fetch the current revision of sky2.c and sky2.h and overwrite the existing files. I used the web interface to git for that as I have no idea how the command line tools work.</p>
<p>Recompile the thing and reboot.</p>
<p>For me, this fixed the problem with the sky2 driver: The machine in question is now running for a whole week without any networking lockups - despite heavy network load at times.</p>
<p>While happy to see this fixed, my <a href="http://www.gnegg.ch/archives/6-Fun-with-Linux-and-new-Hardware.html">statement about not buying too-new hardware</a> if you intend to run Linux on it (posting number 6 here on gnegg.ch - ages ago) seems to still apply.</p>
XmlTextReader, UTF-8, Memory Corruption2006-10-06T00:00:00+00:00http://pilif.github.com/2006/10/xmltextreader-utf-8-memory-corruption<p>XmlTextReader on the .NET CF doesn't support anything but UTF-8, which can be a good thing as well as a bad thing.</p>
<p>Good thing because UTF-8 is a very flexible character encoding giving access to the whole Unicode character range while still being compact and easy to handle.</p>
<p>Bad thing because PopScan doesn't do UTF-8. It was just never needed as its primary market is countries well within the range of ISO-8859-1. This means that the protocol between server and client so far was XML encoded in ISO-8859-1.</p>
<p>To be able to speak with the Windows Mobile application, the server had to convert the data to UTF-8.</p>
<p>And this is where a small bug occurred: Part of the data wasn't properly encoded and was transmitted as ISO-8859-1.</p>
<p>The correct thing an XML parser should do with obviously incorrect data is to bail out, which is also what the .NET CF DOM parser did.</p>
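<p>That bail-out behaviour is easy to reproduce with any strict parser. A sketch using Python's built-in expat-based parser (not the .NET CF one, obviously; the document content is made up): a single ISO-8859-1 byte inside a stream declared as UTF-8 makes the parser refuse the document outright.</p>

```python
import xml.etree.ElementTree as ET

# "café" encoded in ISO-8859-1 (0xE9), inside a document that claims UTF-8.
bad = b'<?xml version="1.0" encoding="UTF-8"?><order><name>caf\xe9</name></order>'

try:
    ET.fromstring(bad)
    result = "parsed"
except ET.ParseError:
    # 0xE9 is not a valid UTF-8 sequence, so parsing stops with an error.
    result = "bailed out"

print(result)  # bailed out
```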
<p>XmlTextReader did something else though: It threw an uncatchable IndexOutOfRange exception either in Read() or ReadString(). And sometimes it miraculously changed its internal state - jumping from element to element even when just using ReadString().</p>
<p>To make things even worse, the exception happened at a location not even close to where the invalid character was in the stream.</p>
<p>In short, from what I have seen (undocumented and uncatchable exceptions being thrown at random places), it feels like the specific invalid character that was parsed in my particular situation caused memory corruption somewhere inside the parser.</p>
<p>Try to imagine how frustrating it was to find and fix this bug - it felt like the old days of manual memory allocation combined with stack corruption. And all because of one single bad byte in a stream of thousands of bytes.</p>
The price of automatisms2006-10-05T00:00:00+00:00http://pilif.github.com/2006/10/the-price-of-automatisms<p>Visual Studio 2005 and the .NET Framework 2.0 brought us the concept of table adapters and a nice visual designer for databases allowing you to quickly "write" (point and click) your data access layer.</p>
<p>Even when using the third party SQLite library, you can make use of this facility and it's true: Doing basic stuff works awfully well and quickly.</p>
<p>The problems start when what you intend to do is more complex. Then the tool becomes braindead.</p>
<p>The worst thing about it is that it's tailor-made for SQL-Server and that it insists on parsing your queries instead of letting the database or even the database driver do that.</p>
<p>If you add any feature to your query that is not supported by SQL-Server (keep in mind that I'm NOT working with SQL-Server - I don't even <em>have</em> a SQL-Server installed), the tool will complain about not being able to parse the query.</p>
<p>The dialog provides an option to ignore the error, but it doesn't work like I would have hoped: "Ignore" doesn't mean "keep the old configuration". It means "work as if there wasn't any query at all".</p>
<p>This means that even something as simple as writing "insert or replace" instead of "insert" (saves one query per batch item, and I'm doing lots of batch items) or just adding a limit clause ("limit 20") will make the whole database designer unusable for you.</p>
<p>The ironic thing about the limit clause is that the designer happily accepts "select top xxx from..." - which will then fail at run time due to SQLite not supporting that proprietary extension.</p>
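<p>For the record, "insert or replace" is perfectly valid SQLite and works fine outside the designer - a minimal sketch (table and values are made up for illustration):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table item (id integer primary key, qty integer)")
con.execute("insert into item values (1, 5)")

# One statement per batch item: replaces the row if the primary key
# already exists, inserts it otherwise - no separate existence check.
con.execute("insert or replace into item values (1, 9)")
con.execute("insert or replace into item values (2, 3)")

print(con.execute("select id, qty from item order by id").fetchall())
# [(1, 9), (2, 3)]
```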
<p>So in the end it's back to doing it manually.</p>
<p>But wait a minute: Doing it manually is even harder than it should be, because the help, tutorials, books and even Google all only talk about the automatic way, either unaware or not caring that it just won't work if you want to do more than example code.</p>
Oldstyle HTML - the worst offenders2006-10-04T00:00:00+00:00http://pilif.github.com/2006/10/oldstyle-html-the-worst-offenders<p>More and more, the WWW is cleansed of old, outdated pages. In more and more cases, the browsers will finally be able to go into standards mode - no more quirks.</p>
<p>But one bastion still remains to be conquered.</p>
<p>Consider this:</p>
<pre class="code"><br><font size=2 face="sans-serif">Danke</font>
<br><font size=2 face="sans-serif">Gruss</font>
<br><font size=2 face="sans-serif">xxxx</font>
</pre>
<p>By accident, I had my email client on "View Source" mode and this is the (complete) body of an email my dad sent me.</p>
<p>Besides the fact that it's a total abuse of HTML email (the message does not contain anything plain text would not have been able to contain), it's an obscene waste of bandwidth:</p>
<p>The email ALSO contains a text alternative part, effectively doubling its size - not to speak of the unneeded HTML tags.</p>
<p>What's even worse: This is presentational markup at its finest. Even if I insisted on creating an HTML mail for this message, this would have totally sufficed:
<pre class="code">Danke<br />
Gruss<br />
xxxx<br />
</pre>
<p>Or - semantically correct:</p>
<pre class="code">
<p>Danke</p>
<p>Gruss</p>
<p>xxx</p>
</pre>
<p>Personally, I actually see reason behind a certain kind of HTML email. Newsletter or product announcements come to mind. Why use plain text if you can send over the whole message in a way that's nice for users to view?</p>
<p>Your users are used to viewing rich content - every one of them probably has a web browser installed.</p>
<p>And with today's bandwidth it's even possible to transfer the whole message and all pictures in one nice package. No security warnings, no crappy-looking layout due to broken images.</p>
<p>What I don't see though is what email programs are actually doing. Why send messages like the one in the example as HTML? Why waste the user's bandwidth (granted: it doesn't matter any more) and even create security problems (by forcing the email client to display HTML) to send a message that doesn't look any different from one consisting of plain text?</p>
<p>The message also underlines another problem: The old presentational markup actually lent itself perfectly to creating WYSIWYG editors. But today's way of creating HTML pages just won't work in these editors, for the reasons I outlined in my <a href="/archives/319-Word-2007-So-much-wasted-energy.html">posting about Word 2007</a>.</p>
<p>Still - using a little bit of CSS could result in so much nicer HTML emails, which have the additional benefit of being totally readable even if the user has a client not capable of displaying HTML (which is a wise decision, security-wise).</p>
<p>Oh and in case you wonder what client created that email...</p>
<pre class="code">
X-MIMETrack: Serialize by Router on ZHJZ11/xxxx(Release 7.0.1FP1|April 17, 2006) at
02.10.2006 16:35:09,
Serialize complete at 02.10.2006 16:35:09,
Itemize by SMTP Server on ZHJZ05/xxxxx(Release 6.5.3|September 14, 2004) at
02.10.2006 16:36:15,
Serialize by Router on ZHJZ05/xxxxx(Release 6.5.3|September 14, 2004) at
02.10.2006 16:36:19,
Serialize complete at 02.10.2006 16:36:19
</pre>
<p>I wonder if using a Notes version from September '04 is a good thing to do in today's world full of spam, spyware and other nice things - especially considering that my dad works in a public office.</p>
</p>
My new Flat - Location2006-10-03T00:00:00+00:00http://pilif.github.com/2006/10/my-new-flat-location<p>As <a href="/archives/313-Where-have-I-been.html">I've told before</a>, I'm moving into my very own flat quite soonish.</p>
<p>I can't show pictures of the interior just yet as the current owners have not moved out yet. What I can show you though is a picture of the surroundings:</p>
<center><img width="389" height="372" src="http://www.gnegg.ch/uploads/FirefoxScreenSnapz001.png" alt="" /></center>
<p>The picture was ripped off the <a href="http://www.gis.zh.ch/gb4/bluevari/gb.asp">GIS Browser</a> Zürich provides for us. I could have used <a href="http://map.search.ch">map.search.ch</a> (which had AJAX before google maps and also has a prettier zoom than its hyped counterpart, btw) and I could even have created a link, but that would kind of give away my address (and the images of the GIS browser have a much higher resolution).</p>
<p>But now to the flat itself:</p>
<p>The green stuff to the north of the building is forest. And there's a nice creek flowing through it (in a more or less straight east -> west line). The forest is also quite big: It takes about 2 hours to walk from the entrance on the west side to the exit on the east.</p>
<p>Additionally, my parents live in the vicinity of the forest's top end, so it'll be a very nice walk for me when I visit them and decide to go by foot or bike.</p>
<p>Forest, no streets... way off the city life?</p>
<p>Not at all: The place is located near Zürich and I reach my work place by train (<a href="/archives/98-Some-suburban-railways-I..html">Forchbahn</a> even) in just 9 minutes - or 20 if I decide to walk through the forest.</p>
<p>So I'm getting the best of two worlds: Nature literally just outside my front door (I'll be getting myself a cat next year) and still closer to my work place than before. And about the same distance from the central parts of Zürich as I am right now.</p>
<p>Granted: Walking home right now is more or less level walking while it will be uphill later on, but it will be in the middle of the forest, alongside a creek, as opposed to a walk through the city.</p>
<p>But that's not all just yet.</p>
<p>It's very nearby the place where I've grown up.</p>
<p>Despite moving away from there back in 1993, I never bonded as much to any other place. That old place still feels like home to me and I'm getting warm feelings whenever I'm passing by.</p>
<p>Now I'm moving to a place where I played when I was a kid - granted, we weren't there every free minute as it was a bit out of the way, but we visited that forest every now and then - we even once played quite close to where the house is.</p>
<p>And only three years ago, I used dry ice to make PET bottles explode - right in the same forest - also quite near the place where I'll be living.</p>
<p>All these features make this flat the truly amazing thing it is. Granted: Room for a nice <a href="/archives/318-Upgrading-the-home-entertainment-system.html">home cinema</a>, a large bathtub, a <a href="http://www.slimdevices.com/pi_squeezebox.html">Squeezebox</a> in every room, heck, 140m<sup>2</sup> of room - all that is nice. But what really makes the flat special is its location.</p>
<p>November 1st, I'll officially be its owner and then I'll be able to post some pictures from the inside.</p>
OS X 10.4.8 - Update gone wrong2006-10-02T00:00:00+00:00http://pilif.github.com/2006/10/os-x-1048-update-gone-wrong<p>Today, Software Update popped up and offered me to upgrade the OS to 10.4.8.</p>
<p>Usually I'm turning down such offers as I don't want to reboot my system in mid-day, but it felt like a good time to do it none the less. This is why I accepted.</p>
<p>After the installation, the update asked me to reboot which I accepted.</p>
<p>What came afterwards was as scary as it was ironic: The system rebooted into Windows XP.</p>
<p>But no worries: The 10.4.8 update isn't a Windows installation in disguise: The Windows installation that greeted me was the one I have on a second partition - mostly to play WoW (which <a href="/archives/321-Correlation-between-gnegg.ch-and-WoW.html">I don't any more</a>).</p>
<p>A quick reboot showed me even more trouble: Whenever my MacBook tried to boot from the MacOS partition, it showed the folder-with-question-mark icon for a few seconds and then the EFI BIOS emulation kicked in and booted from the MBR, which is why I was seeing Windows on my screen.</p>
<p>Now, I'd gladly explain here what went wrong and how I fixed it, but as I was in a state of panic, I didn't exactly document my fix, and as I tried many steps at once without confirming which step fixed the problem, I don't even know what was wrong (which certainly doesn't stop me from guessing).</p>
<p>Anyways.</p>
<p>I booted from the MacBook DVD, first selected Disk Utility in the tools menu and let it check the disk for errors (none found, as I had expected), then let it repair permissions (tons of errors found, but I doubt this was the problem).</p>
<p>Then I quit the disk utility and launched terminal.</p>
<p>Besides the fact that I had some trouble actually entering commands (how do I set the keyboard layout in that pre-install terminal?), I quickly went to <tt>/System/Library</tt>, deleted the extensions cache (Extensions.kextcache), went to <tt>/System/Library/Extensions</tt> and removed all extensions installed by Parallels (which I suspected of being responsible for the problem).</p>
<p>I think the list was vmmain.kext, helper.kext, Pvsnet.kext and hypervisor.kext. You have to remove them with <tt>rm -r</tt> as they are bundles (directories)</p>
<p>After that, I rebooted the system and the question-mark-on-a-folder disappeared and the updating process completed.</p>
<p>I can't tell you how scared I was: My OS X installation is tweaked to oblivion and I'd really, really hate to lose all the stuff. Don't mind the data - it's configuration files and utilities and of course fink.</p>
<p>*shudder*</p>
<p>As I have not tried to reboot after completing each of the steps above, I'm unable to say what actually caused the problem. I doubt it was Parallels though, as I'm currently running 10.4.8 and Parallels (which I had to reinstall, of course). I also doubt it was the permissions issue, as wrong permissions are unlikely to cause boot failure.</p>
<p>So it probably was a corrupted Extension cache. Or the update process not able to cope with the Parallels extensions.</p>
<p>Me being in the dark makes me unable to place blame, so you won't find any statement about how a more or less forced OS update should never cause a failure like this...</p>
<p>For all I know, this could have happened without the update anyways.</p>
<p>The good news on the other hand is that I'm slowly reaching a state where I am as good at fixing macs as I am good at fixing Windows and Linux. Just don't tell this to my friends who have macs.</p>
Correlation between gnegg.ch and WoW2006-09-30T00:00:00+00:00http://pilif.github.com/2006/09/correlation-between-gneggch-and-wow<p>If you take a look at the <a href="http://www.gnegg.ch/archive">archive</a> (a feature I've actually only discovered just now), you'll notice quite an interesting distribution of posts here on gnegg.ch</p>
<p>2002 was where all started. November was still a bit slow, but in December I really got into blogging only to let it slip a bit during 2003.</p>
<p>2004, I began subscribing to tons of RSS feeds which provided me with a lot of inputs for my own articles. You'll notice a significant increase of posts during the whole year.</p>
<p>Then, in 2005, my WoW-time began. My first <a href="http://www.gnegg.ch/archives/229-World-of-Warcraft.html">WoW-related posting</a> was from February 21st, 2005 and makes a reference to when I bought WoW, which would be - provided I'm calculating correctly - February 15th 2005.</p>
<p>Going back to the archive, you'll immediately notice something happening to the post count: It's steadily going down. From a good 9 entries in January (pre-WoW) down to one entry in October, which is more or less when I got my first character to level 60. In November I was affected by my first fed-up-ness with WoW, which lasted till January 2006 (post count coming up again - despite Christmas and all, which was keeping me away from computers).</p>
<p>Then, in January, I was playing again, getting closer to 60 with my second character in February (just one posting).</p>
<p>March was WoW-less again due to my feeling of not having anything to do any more.</p>
<p>In mid-April, I began playing again and started my third character... (posts going down) - which I got 60 with at the end of May.</p>
<p>June was playing at 60 and before the end of the month, I began feeling fed up with WoW. And burned out. I clearly felt I had wasted <em>way</em> too much of my life. And I felt like I was truly addicted to WoW. So I pulled the emergency brake and stopped playing.</p>
<p>As you can see, I was back to 16 posts in July which also was due to my "Computers under my command"-series which was easy to do due to the topics being clear in advance.</p>
<p>August is interesting. Have a look at the <a href="http://www.gnegg.ch/archives/2006/08.html">month calendar</a> and guess when I took my lv60 character out again!</p>
<p>More or less regular postings here until August 10th. Then nothing.</p>
<p>September is better again because I put my WoW to a deep-freeze again - especially after having seen what WoW does to my other hobbies. gnegg.ch is a very nice indicator in that regard.</p>
<p>So I'm coming to all the same conclusion as <a href="http://www.artofadambetts.com">Adam Betts</a> who <a href="http://www.artofadambetts.com/weblog/?p=117">also stopped playing WoW</a> due to noticing his real life being severely affected by WoW.</p>
<p>World of Warcraft is highly addictive and I know of no person who could claim not to be affected by it. Once you start to play, you play. Even worse: Even if you think you've got it behind you and that you can control it, it just takes over again.</p>
<p>So for me it's clear what I have to do: I will stop playing. For real this time. No taking out my character again. No-more-playing. I won't delete my characters as they are the result of a lot of work, but I will cancel my subscription.</p>
<p>I'm really grateful for the archive function of gnegg.ch as it was a totally clear indicator of my addiction, and it still is a perfect way to keep me from going back, as everyone will know I have relapsed once the post count goes down again.</p>
SQLite, Windows Mobile 2005, Performance2006-09-29T00:00:00+00:00http://pilif.github.com/2006/09/sqlite-windows-mobile-2005-performance<p>As you know from <a href="/2006/07/sqlite-on-net-cf-revisited/">previous</a> <a href="/2004/10/sqlite-on-net-cf/">posts</a>, I’m working with SQLite on mobile devices, which lately means Windows Mobile 2005 (there was a <a href="http://www.gnegg.ch/archives/177-Extreme-fun-with-Linux.html">Linux device</a> before that though, but it was hit by the <a href="http://en.wikipedia.org/wiki/RoHS">RoHS</a> regulation of the European Union).</p>
<p>In previous experiments with the older generation of devices (Windows CE 4.x / PocketPC 2003), I was surprised by the high performance SQLite is able to achieve, even in complex queries. But this time, something felt strange: Searching for a string in a table was very, very slow.</p>
<p>The problem is that CE5 (and with it Windows Mobile 2005) uses non-volatile flash for storage. This has the tremendous advantage that the devices don’t lose their data when the battery runs out.</p>
<p>But compared to DRAM, Flash is slow. Very slow. Totally slow.</p>
<p>SQLite doesn’t load the complete database into RAM, but only loads small chunks of the data. This in turn means that when you have to do a sequential table scan (which you have to do when you have a LIKE ‘%term%’ condition), you are more or less dependent on the speed of the storage device.</p>
<p>This is what caused SQLite to be slow when searching. It also caused synchronizing data to be slow, because SQLite writes data out into journal files during transactions.</p>
<p>The fix was to trade off launch speed (the application is nearly never started fresh) for operating speed by loading the data into an in-memory table and using that for all operations.</p>
<pre class="code">attach ":memory:" as mem;
create table mem.prod as select * from prod;</pre>
<p>Later on, the trick was to just refer to mem.prod instead of just prod.</p>
<p>Of course you’ll have to take extra precaution when you store the data back to the file, but as SQLite even supports transactions, most of the time, you get away with</p>
<pre class="code">begin work;
delete from prod;
insert into prod (select * from mem.prod);
commit;</pre>
<p>So even if something goes wrong, you still have the state of the data of the time when it was loaded (which is perfectly fine for my usage scenario).</p>
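The whole staging pattern described above - copy into an attached in-memory database at startup, run the hot queries against the copy, write back in one transaction - can be sketched with Python's built-in sqlite3 module. The two-column `prod` table here is a made-up stand-in for the real one:

```python
import sqlite3

# This connection stands in for the on-device file database.
con = sqlite3.connect(":memory:")
con.execute("create table prod (id integer primary key, name text)")
con.execute("insert into prod (name) values ('apple'), ('banana')")

# At startup: attach an in-memory database and copy the table into it once.
con.execute("attach ':memory:' as mem")
con.execute("create table mem.prod as select * from prod")

# Hot path: sequential scans (LIKE '%term%') now hit RAM, not flash.
rows = con.execute("select name from mem.prod where name like '%an%'").fetchall()

# Write-back: replace the file table from the in-memory copy in one transaction.
with con:
    con.execute("delete from prod")
    con.execute("insert into prod select * from mem.prod")
print(rows)  # -> [('banana',)]
```

The trade-off is the same as in the post: startup pays for the full copy, every query afterwards runs at RAM speed.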
<p>So in conclusion some hints about SQLite on a Windows Mobile 2005 device:</p>
<ul>
<li>It works like a charm</li>
<li>It's very fast if it can use indexes</li>
<li>It's terribly slow if it has to scan a table</li>
<li>You can fix that limitation by loading the data into memory (you can even do it on a per-table basis)</li>
</ul>
Word 2007 - So much wasted energy2006-09-28T00:00:00+00:00http://pilif.github.com/2006/09/word-2007-so-much-wasted-energy<p>Today, I've come across a screencast showing how to <a href="http://www.jonesxml.com/jobailor/21stcenturydoc/21stcenturydoc.html">quickly format</a> a document using the all new Word 2007 - part of Office 2007 (don't forget to also read the associated <a href="http://blogs.msdn.com/microsoft_office_word/archive/2006/09/18/761200.aspx">blog post</a>).</p>
<p>If you have any idea how Word works and how to actually use it, you will be as impressed as the presenter (and, admittedly, I) was: Apply some styles, choose a theme and be done with it.</p>
<p>Operations that took ages to get right are now done in a minute and it'll be very easy to create good looking documents.</p>
<p>Too bad that it's looking entirely different in practice.</p>
<p>If I watch my parents or even my coworkers use Word, all I'm seeing is styles being avoided. Heading 1? Just use the formatting toolbar to make the font bigger and bold.</p>
<p>Increase spacing between paragraphs? Hit return twice.</p>
<p>Add empty spacing after a heading (which isn't even one from Word's point of view)? Hit return twice.</p>
<p>Indent text? Hit tab (or even space as seen in my mother's documents).</p>
<p>This also is the reason why those people never seem to have problems with word: The formatting toolbar works perfectly fine - the bugs lie in the "advanced" features like assigning styles.</p>
<p>Now the problem is that all features shown in that screencast are totally dependent on the styles being set correctly.</p>
<p>If you take the document shown as it is before you apply styling and then use the theme function to theme your document, nothing will happen as Word doesn't know the semantic structure of your document. What's a heading? What's a subtitle? It's all plain text.</p>
<p>Conversely, if you style your document the "traditional" way (using the formatting toolbar) and then try to apply the theme, nothing will happen either as the semantic information is still missing.</p>
<p>This is the exact reason why WYSIWYG looks like a nice gimmick at first glance, but it more or less makes further automated work on the document impossible to do.</p>
<p>You can try and hack around this of course - try to see patterns in the user's formatting and guess the right styles. But this can lead to even bigger confusion later on as you can make wrong guesses which will in the end make the theming work inconsistently.</p>
<p>Without actually using semantic analysis of the text (which currently is impossible to do), you will never be able to accurately use stuff like theming - unless the user provides the semantic information by using styles, which in turn defeats the purpose of WYSIWYG.</p>
<p>So, while I really like that new theming feature of Office 2007, I fear that for the majority of the people it will be completely useless as it plain won't work.</p>
<p>Besides, themes are clearly made for the end user at home - in a corporate environment you will have to create documents according to the corporate design, which probably won't be based on a pre-built style in Office.</p>
<p>And end users are the people the least able to understand how assigning styles to content works.</p>
<p>And once people "get" how to work with text styles and the themes begin to work, we'll be back at square one where everyone and their friends use the same theme because it's the only one that looks more or less acceptable, defeating whatever originality the theme initially had.</p>
Upgrading the home entertainment system2006-09-20T00:00:00+00:00http://pilif.github.com/2006/09/upgrading-the-home-entertainment-system
<p>The day when I will finally move into my new flat is coming closer and closer (expect some pictures as soon as the people currently living there have moved out).</p>
<p>Besides thinking about outdated and yet necessary stuff like furniture, I'm also thinking about my home entertainment solution which currently mostly consists of a Windows MCE computer (<a href="/archives/294-Computers-under-my-command-3-terra.html">terra</a>) and my GameCube (to be replaced with a Wii for sure).</p>
<p>The first task was to create distance.</p>
<p>Distance between the video source and the projector. Currently, that's handled simply by having the MCE connected to the projector via VGA (I'd prefer DVI, but the DVI output is taken by my 23" cinema display) and the GC, the PS2 and the XBox360 via composite to my receiver and the receiver via composite to the projector.</p>
<p>The distance between the projector and the receiver/MCE is currently about three meters tops, so no challenge there.</p>
<p>With a larger flat and a ceiling mounted projector, interesting problems arise distance-wise though: I'm going to need at least 20 meters of signal cable between receiver and projector - more than what VGA, DVI or even HDMI are specified for.</p>
<p>My solution in that department was the <a href="http://www.gefen.com/kvm/product.jsp?prod_id=3872">HDMI CAT-5 Extreme</a> by Gefen. It's a device which allows sending HDMI signals over two normal ethernet cables (shielded preferred) and reaching up to 60 meters of distance.</p>
<p>Additionally, CAT-5 cables are lighter, easier to bend and much easier to hide than HDMI or even DVI cables.</p>
<p>Now, terra only has a DVI and VGA out. This is a minor problem though as HDMI is basically DVI plus audio, so it's very easy to convert a DVI signal into a HDMI one - it's just a matter of connecting pins on one side with pins on the other side - no electronics needed there.</p>
<p>So with the HDMI CAT-5 Extreme and a DVI2HDMI adaptor, I can connect terra to the projector. All well, with one little problem: I can't easily connect the GameCube or the other consoles any more: Connecting them directly to the projector is not an option as it's ceiling mounted.</p>
<p>Connecting them to my existing receiver isn't a solution either as it doesn't support HDMI, putting me into the existing distance problem yet again.</p>
<p>While I could probably use a very good component cable to transport the signal over (it's after all an analog signal), it would mean I have three cables going from the receiver/MCE combo to the projector: Two for the HDMI extender and one big fat component cable.</p>
<p>Three cables to hide and a solution at the end of its life span anyways? Not with me! Not considering I'm moving into the flat of my dreams.</p>
<p>It looks like I'm going to need a new receiver.</p>
<p>After looking around a bit, it looks like the <a href="http://usa.denon.com/ProductDetails/2243.asp">DENON AVR-4306</a> is the solution for me.</p>
<p>It can upconvert (and is said to do so in excellent quality) any analog signal to HDMI with a resolution of up to 1080i which is more than enough for my projector.</p>
<p>It's also said to provide excellent sound quality and - for my geek heart's delight - it's completely remote-controllable over a telnet interface via its built-in ethernet port, even bidirectionally: the documented protocol emits events on the line whenever operating conditions change, for example when the user changes the volume on the device.</p>
<p>This way, I can have all sources connected to the receiver and the receiver itself connected to the projector over the CAT-5 Extreme. Problems solved, and considering how many input sources and formats the Denon supports, it's even quite future-proof.</p>
<p>I've already ordered the HDMI extender and I'm certainly going to have a long, deep look into that Denon thing. I'm not ready to order just yet though: It's not exactly cheap and while I'm quite certain to eventually buy it, the price may just drop a little bit between now and November 15th when I'm (hopefully) moving into my new home.</p>
Windows Vista, Networking, Timeouts2006-09-14T00:00:00+00:00http://pilif.github.com/2006/09/windows-vista-networking-timeouts<p>Today I went ahead and installed the RC2 of Windows Vista on my media center computer.</p>
<p>The main reason for this was because that installation was very screwed (as most of my Windows installations get over time - thanks to my experimenting around with stuff) and the recovery CD provided by Hush was unable to actually recover the system.</p>
<p>The hard drive is connected to an on-board SATA-RAID controller which the XP setup does not recognize. Usually, you just put the driver on a floppy and use setup's capability of loading drivers during install, but that's a bit hard without a floppy drive anywhere.</p>
<p>Vista, I hoped, would recognize the RAID controller and I read a lot of good things about RC2, so I thought I should give it a go.</p>
<p>The installation went flawlessly, though it took quite some time.</p>
<p>Unfortunately, surfing the web didn't actually work.</p>
<p>I could connect to some sites, but on many others, I just got a timeout. <tt>telnet site.com 80</tt> wasn't able to establish a connection.</p>
<p>This particular problem was in my Marvell Yukon chipset based network adapter: It seems to miscalculate TCP packet checksums here and there, and Vista actually uses the hardware's capability to calculate the sums.</p>
<p>To fix it, I had to open the advanced properties of the network card, select "TCP Checksum Offload (IPv4)" and set it to "Disabled".</p>
<p>Insta-Fix!</p>
<p>And now I'm going ahead and actually start to review the thing.</p>
lighttpd, .NET, HttpWebRequest2006-09-13T00:00:00+00:00http://pilif.github.com/2006/09/lighttpd-net-httpwebrequest<p>Yesterday, when I deployed the server for my PocketPC-Application to an environment running <a href="http://www.lighttpd.net">lighttpd</a> and PHP with FastCGI SAPI, I found out that the communication between the device and the server didn't work.</p>
<p>All I got on the client was an Exception because the server sent back error 417: Expectation Failed.</p>
<p>Of course there was nothing in lighttpd's error log, which made this a job for <strike>Ethereal</strike><a href="http://www.wireshark.org/">Wireshark</a>.</p>
<p>The response from the server had no body explaining what was going on, but in the request-header, something interesting was going on:</p>
<pre class="code">
Expect: 100-continue
</pre>
<p>Additionally, the request body was empty.</p>
<p>It looks like HttpWebRequest, with the help of the compact framework's ServicePointManager is doing something really intelligent which lighttpd doesn't support:</p>
<p>By first sending the POST request with an empty body and that <tt>Expect: 100-continue</tt>-header, HttpWebRequest basically gives the server the chance to do some checks based on the request header (like: Is the client authorized to access the URL? Is there a resource available at that URL?) without the client having to transmit the whole request body first (which can be quite big).</p>
<p>The idea is that the server does the checks based on the header and then either sends a error response (like 401, 403 or 404) or it advises the client to go ahead and send the request body (code 100).</p>
<p>Lighttpd doesn't support this, so it sends that 417 error back.</p>
<p>The fix is to set <a href="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemnetservicepointmanagerclassexpect100continuetopic.asp">Expect100Continue</a> of <tt>System.Net.ServicePointManager</tt> to false before getting a HttpWebRequest instance.</p>
<p>That way, the .NET Framework goes back to plain old POST and sends the complete request body.</p>
<p>In my case that's no big disadvantage because if the server is actually reachable, the requested URL is guaranteed to be there and ready to accept the data on HTTP-level (of course there may be some errors on the application level, but there has to be a request body for them to be detected).</p>
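The failed handshake is easy to reproduce without lighttpd or a device. The toy server below is a stand-in for a server without 100-continue support: it answers 417 to any request carrying an <tt>Expect</tt> header, while the very same POST without the header goes through - which is exactly the difference that setting <tt>Expect100Continue</tt> to false makes on the client side:

```python
import socket
import threading

def serve_once(srv):
    # Toy HTTP responder standing in for a server without 100-continue
    # support: 417 whenever an Expect header is present, 200 otherwise.
    conn, _ = srv.accept()
    data = b""
    while b"\r\n\r\n" not in data:
        data += conn.recv(4096)
    head = data.split(b"\r\n\r\n", 1)[0].lower()
    if b"expect: 100-continue" in head:
        conn.sendall(b"HTTP/1.1 417 Expectation Failed\r\nContent-Length: 0\r\n\r\n")
    else:
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    conn.close()

def post(port, with_expect):
    # Hand-rolled POST so we control the Expect header ourselves.
    body = b"payload"
    extra = b"Expect: 100-continue\r\n" if with_expect else b""
    req = (b"POST /sync HTTP/1.1\r\nHost: localhost\r\n"
           b"Content-Length: %d\r\n%s\r\n%s" % (len(body), extra, body))
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(req)
        return s.recv(4096).split(b" ")[1].decode()  # the status code

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(2)
port = srv.getsockname()[1]

results = {}
for with_expect in (True, False):
    t = threading.Thread(target=serve_once, args=(srv,))
    t.start()
    results[with_expect] = post(port, with_expect)
    t.join()
srv.close()
print(results)  # Expect header present -> '417', plain POST -> '200'
```

A real 100-continue client would additionally wait for the interim <tt>100 Continue</tt> response before sending the body; this sketch only shows the server-side rejection that the post ran into.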
.NET CF, Windows CE and Fullscreen2006-09-11T00:00:00+00:00http://pilif.github.com/2006/09/net-cf-windows-ce-and-fullscreen
<p>Assuming you are creating an application for the .NET compact framework and further assuming that the application is designed to be the only one running on the target device because the whole device is <em>defined</em> by your application.</p>
<p>Also, you don't want the end-users to tamper with the device.</p>
<p>This is why you sometimes want to put your application in a full-screen mode, hiding all other UI elements on the screen. Of course, to prevent tampering, you'd have to take additional measures, but that's another topic.</p>
<p>The application I'm currently working on is written for the .NET compact framework, so the explanations are made for that environment.</p>
<p>Putting your application to full screen on the PocketPC is easy: Set your form's <tt>FormBorderStyle</tt> to <tt>None</tt> and set <tt>WindowState</tt> to <tt>Maximized</tt>. That will do the trick.</p>
<p>On Windows CE (PocketPC is basically a special UI library and application collection running on top of Windows CE), there's a bit more work to do.</p>
<p>First of all, you have to remove the task bar, which is accomplished by some P/Invoke calls which are declared like this:</p>
<pre class="code">
// FindWindow is needed below to get the taskbar's window handle
[DllImport("coredll.dll", CharSet=CharSet.Auto)]
public static extern int FindWindow(string lpClassName, string lpWindowName);
[DllImport("coredll.dll", CharSet=CharSet.Auto)]
public static extern bool ShowWindow(int hwnd, int nCmdShow);
[DllImport("coredll.dll", CharSet = CharSet.Auto)]
public static extern bool EnableWindow(int hwnd, bool enabled);
</pre>
<p>Then, in your main form's constructor, do the magic:</p>
<pre class="code">
int h = FindWindow("HHTaskBar", "");
ShowWindow(h, 0);
EnableWindow(h, false);</pre>
<p>And don't forget to turn the task bar on again when your application exits.</p>
<pre class="code">
int h = FindWindow("HHTaskBar", "");
ShowWindow(h, 5);
EnableWindow(h, true);</pre>
<p>There's one important additional thing to do though:</p>
<p>WindowState = Maximized won't work!</p>
<p>Well. It <strong>will</strong> work, but it will resize your form in a way that there will be empty space at the bottom of the screen where the taskbar was. You will have to manually resize the form by using something like this:</p>
<pre class="code">
this.Height = Screen.PrimaryScreen.Bounds.Height;
this.Width = Screen.PrimaryScreen.Bounds.Width;</pre>
<p>That last bit hit me hard today :-)</p>
<p>On a side note: There's also the <tt>SHFullScreen</tt>-API call which also allows your application to position itself on top of the taskbar. This basically is the official way to go, but aygshell.dll, the DLL in which the function is implemented, is not always available on all CE configurations.</p>
XmlReader - I love thee2006-09-08T00:00:00+00:00http://pilif.github.com/2006/09/xmlreader-i-love-thee<p>Lately, I have been working with the .NET framework. Well. It was the compact framework actually. I'm currently writing software for one of these advanced barcode scanners which run Windows Mobile.</p>
<p>The one thing I want to talk about is <tt>XmlReader</tt>. You know: One of these devices actually has a built-in GPRS unit, so it lends itself as a really nice mobile client.</p>
<p>With mobility comes synchronization and synchronization is something PopScan can do quite well. The protocol is XML based, so I need to parse XML on the device.</p>
<p>It's even getting more interesting though: The server usually bzip2-compresses the XML-data while sending it out. The XML stream is perfectly compressible, so that's a good thing to do - even more so as the device communicates over a volume-taxed GPRS connection.</p>
<p>The naïve approach to this situation is to do this:</p>
<ol>
<li>Read data from server to the memory</li>
<li>Decompress the data in-memory</li>
<li>Use a DOM-Parser to build a DOM-Tree</li>
<li>Iterate over the tree and handle the article data</li>
</ol>
<p>This approach, of course, is completely unworkable. For one, you waste memory by storing the data multiple times in different forms. Then you build a DOM-tree which is pointless as it's more or less flat data anyway. And finally, you wait for the download and then for the decompression before you can begin parsing. So it's slow.</p>
<p>The way to go is to read data from the network, decompress it as it arrives, feed the data into a stream based XML-parser and work with its output.</p>
<p>That way, you only need some memory for buffers in the decompression engine and the XML parser. And you don't wait: as you receive data from the server, you can start decompressing and parsing it.</p>
<p>I've done this before. It was in Delphi. Receiving data from WinInet, feeding it through a bzip2 decompressor and finally parsing it with expat was truly hard work: Pointers here, malloc there, and that awful event-based interface of expat making it very difficult to track state.</p>
<p>And now I had to do it again, this time in C#.</p>
<p>Wow! This was easy.</p>
<p>First, there's the nice Stream interface using a decorator pattern: You can just wrap streams into each other and then just read from the "outermost" stream.</p>
<p>This means that I can wrap a bzip2-decompression stream around the HTTP-Response stream and make the XML parser read from the decompression stream which in turn reads from the HTTP-response stream.</p>
<p>And then you have the XmlReader interface.</p>
<p>Parsing XML is done in a while loop by calling the object's Read() method which returns whenever it encounters a start or end element in the stream. This makes tracking the state much easier and helps keeping your code clean.</p>
<p>All in all, I can't believe how easy it was to write that parser.</p>
<p>This shows that some nice thought went into the design of the .NET framework and I'm really looking forward to finding even more nice surprises such as this.</p>
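The same pipeline - decompress chunks as they arrive and feed them to a pull parser, handling elements as they complete - can be sketched in Python with <tt>bz2.BZ2Decompressor</tt> and <tt>XMLPullParser</tt>. The <tt>&lt;article&gt;</tt> elements are invented; the real protocol's schema isn't shown in this post:

```python
import bz2
import xml.etree.ElementTree as ET

# Pretend this compressed blob is arriving from the network in small chunks.
xml_doc = b"<articles><article id='1'>Apples</article><article id='2'>Pears</article></articles>"
compressed = bz2.compress(xml_doc)
chunks = [compressed[i:i + 16] for i in range(0, len(compressed), 16)]

decompressor = bz2.BZ2Decompressor()
parser = ET.XMLPullParser(events=("end",))
names = []

for chunk in chunks:                                 # network read
    parser.feed(decompressor.decompress(chunk))      # decompress + parse incrementally
    for _event, elem in parser.read_events():        # handle completed elements
        if elem.tag == "article":
            names.append(elem.text)

print(names)  # -> ['Apples', 'Pears']
```

Nothing ever holds the whole document in memory: each chunk is decompressed, parsed, and discarded in the same loop iteration.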
Where have I been?2006-09-07T00:00:00+00:00http://pilif.github.com/2006/09/where-have-i-been<p>Long time no see. Where did yours truly go? Back to World of Warcraft (which was the reason for the lack of postings during 05)? Or something even worse?</p>
<p>I'm pleased to say that the WoW-times are more or less over. Granted: I still log on to the game now and then, but the pleasure I was getting out of playing the game is more or less gone.</p>
<p>There are more fun things to do than playing WoW and I'm currently enjoying them. WoW has finally returned to the status of standard evening leisure - one of many ways to waste my time.</p>
<p>But back to the reason for my absence:</p>
<p>Since April this year, I have known that I will move into my very own flat. Back in April, it was a date far off with lots of things still needing to be done - things I didn't bother about yet back then.</p>
<p>But now, November 1st is getting closer and closer by the day. And stuff <em>still</em> needs to be done.</p>
<p>And this is precisely why I somewhat lack the time to blog.</p>
<p>Writing an entry here on gnegg.ch consists of many tasks: First there's inspiration. I browse the web, live through my day at work or just talk to colleagues of mine. Sooner or later something will happen about which I want to talk.</p>
<p>Then, I think about the subject and try to serialize my thoughts to create an entry that's (hopefully) interesting to read.</p>
<p>And then I sit down and write the thing. This is the task that actually takes the least amount of my time (inspiration is the hardest for me - often times, I think the subjects are too obvious or too uninteresting to blog about).</p>
<p>The final thing is the proofreading - a task I'm not really good at.</p>
<p>So an average entry here takes about two to four hours to do - time I currently rather use for planning where to put existing furniture, where to buy new furniture (and where to put it of course), who to hire to install a new bathtub and so on.</p>
<p>This is a big thing for me. When I moved to my current flat back in 2001, it was more or less a "getting away from my parents" (don't get me wrong: I love my parents). I moved more or less into the first available flat - also because it was hard as hell to get one in Zürich back then. So I took the opportunity.</p>
<p>Now it's different. For one, this is <strong>my</strong> flat. Yes. I bought it. It's mine. Then it's more than three times as big as my current one. And it's beautiful. Just filling it with my current furniture doesn't give it the credit it deserves.</p>
<p>So, this is what's keeping me absorbed.</p>
<p>Still, work is very, very interesting currently and I have lots of interesting stuff to write about in the pipeline (so inspiration is there) and I'm looking forward to post these entries. Today and in the near future.</p>
Profiling PHP with Xdebug and KCacheGrind2006-08-10T00:00:00+00:00http://pilif.github.com/2006/08/profiling-php-with-xdebug-and-kcachegrind<p><a class="serendipity_image_link" href="http://www.gnegg.ch/uploads/kcachegrind.png"><img width="110" height="91" border="0" hspace="5" align="left" src="http://www.gnegg.ch/uploads/kcachegrind.serendipityThumb.png" alt="" /></a></p>
<p>Profiling can provide real revelations.</p>
<p>Sometimes, you have that gut feeling that a certain code path is the performance bottleneck. Then you go ahead and fix that, only to see that the code is still slow.</p>
<p>This is when a profiler kicks in: It helps you determine the <em>real</em> bottlenecks, so you can start fixing <em>them</em>.</p>
<p>The PHP IDE I'm currently using, Zend Studio (it's the only PHP IDE meeting <a href="http://www.gnegg.ch/archives/255-On-the-search-of-a-text-editor.html">my requirements</a> on the Mac currently) does have a built-in profiler, but it's a real bitch to set up.</p>
<p>You need to install some binary component into your web server. Then the IDE should be able to debug and profile your application.</p>
<p>Emphasis on "should".</p>
<p>I got it to work once, but it broke soon after and I never really felt inclined to put more effort into this - even more so as I'm from time to time working with a snapshot version of PHP for which the provided binary component may not work at all.</p>
<p>There's an open source solution that works much better both in terms of information you can get out of it and in terms of ease of setup and use.</p>
<p>It's <a href="http://www.xdebug.org">Xdebug</a>.</p>
<p>On gentoo, installing is a matter of <tt>emerge dev-php5/xdebug</tt> and on other systems, <tt>pecl install xdebug</tt> might do the trick.</p>
<p>Configuration is easy too.</p>
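For the record, in the Xdebug 2.x versions current at the time, "easy" meant roughly this php.ini fragment (directive names as documented for Xdebug 2; Xdebug 3 later renamed them, e.g. <tt>xdebug.mode=profile</tt>):

```ini
zend_extension=xdebug.so
; write one cachegrind.out.* file per request
xdebug.profiler_enable=1
xdebug.profiler_output_dir=/tmp
```

The resulting <tt>cachegrind.out.*</tt> files are what the viewer discussed below consumes.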
<p>Xdebug generates profiling information in the same format as callgrind, part of the incredible <a href="http://valgrind.org/">valgrind</a> suite of debugging and profiling tools.</p>
<p>And once you have that profiling information, you can use a tool like <a href="http://kcachegrind.sourceforge.net/cgi-bin/show.cgi">KCacheGrind</a> to evaluate the data you've collected.</p>
<p>The tool provides some incredibly useful views of your code, making finding performance problems a joyful experience.</p>
<p>Best of all though is that I was able to compile KCacheGrind along with its dependencies on my MacBook Pro - another big advantage of having a real UNIX backend on your desktop.</p>
<p>By the way: Xdebug also is a debugger for PHP, though I've never used it for that as I never felt the need to step through PHP code. Because you don't have to compile it, you are often faster by instrumenting the code and just running the thing - especially once the code is spread over a multitude of files.</p>
Backup with dirvish2006-08-09T00:00:00+00:00http://pilif.github.com/2006/08/backup-with-dirvish<p>Using tape drives for your backups (in contrast to for example external hard drives) has some advantages and a whole lot of disadvantages which makes it impractical for me and a whole lot of other people:</p>
<ul>
<li>There's (next to) no random access. Need a specific file? You often have to restore the whole backup just to get that one file back.</li>
<li>Tapes are maintenance-intensive: You have to clean the streamer, clean the tapes, store the tapes in specific environmental conditions and so on.</li>
<li>Tapes are a slow medium. You won't get much more than 5-10 MB/s while writing to the tape.</li>
<li>The unreliability of tapes makes a verify run important, if not absolutely necessary.</li>
<li>The equipment is expensive. Both tapes and streamer (or tape robots) cost quite some money.</li>
</ul>
<p>That's why I have been using external hard drives for quite some time now. Granted, they have some serious disadvantages in longevity (but they still outperform tapes not stored in said environmental conditions), but really important documents must be archived on a read-only medium anyway.</p>
<p>What hard disks provide you with is cheap storage, random access and the possibility to use common file system access tools to work with them.</p>
<p>External drives can be disconnected and stored at a different location from the backup machine and as they have a much larger capacity per medium than tape drives, you usually get away with one or two drives where you'd use many more tapes (at least in the affordable range of things).</p>
<p>If you need a pragmatic yet perfectly working and clever backup solution to fill up these external drives, I'd recommend <a href="http://www.dirvish.org/">dirvish</a></p>
<p>Dirvish uses existing tools like SSH and mainly rsync to create backups.</p>
<p>What I like most about it is its ability to create incremental backups by hardlinking unchanged files (actually a feature of rsync).</p>
<p>That way, an initial backup of 22 GB can be incrementally backed up while adding only 20 MB of new/changed data here on the system I'm currently looking at.</p>
<p>This obviously depends on the type of data you are backing up and as the mechanism is file-based (it always operates on complete files), your savings won't be that good if you back up ever-growing files like log files.</p>
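The hardlink mechanism dirvish leans on (rsync's <tt>--link-dest</tt> feature) can be illustrated with a toy snapshot function: unchanged files in the new set become hard links into the previous set and therefore cost only a directory entry, not a copy. This is a sketch of the idea in Python, not of dirvish itself:

```python
import filecmp
import os
import shutil
import tempfile

def snapshot(source, dest, link_dest=None):
    """Copy `source` to `dest`, hardlinking files unchanged since `link_dest`."""
    os.makedirs(dest)
    for name in os.listdir(source):
        src = os.path.join(source, name)
        prev = os.path.join(link_dest, name) if link_dest else None
        if prev and os.path.exists(prev) and filecmp.cmp(src, prev, shallow=False):
            os.link(prev, os.path.join(dest, name))      # unchanged: free hardlink
        else:
            shutil.copy2(src, os.path.join(dest, name))  # changed/new: real copy

root = tempfile.mkdtemp()
src = os.path.join(root, "data")
os.makedirs(src)
for name, text in [("big.bin", "lots of data"), ("log.txt", "day 1")]:
    with open(os.path.join(src, name), "w") as f:
        f.write(text)

snapshot(src, os.path.join(root, "backup-1"))
with open(os.path.join(src, "log.txt"), "a") as f:
    f.write("\nday 2")                                   # only this file changes
snapshot(src, os.path.join(root, "backup-2"), os.path.join(root, "backup-1"))

same = os.path.samefile(os.path.join(root, "backup-1", "big.bin"),
                        os.path.join(root, "backup-2", "big.bin"))
print(same)  # -> True: the unchanged file is stored only once
```

Every backup set still looks like a complete copy of the tree, which is exactly what makes restoring single files so convenient.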
<p>Still. For my use, dirvish does exactly what I want it to do and it does it very, very well. Perfect!</p>
<p>The tool creates backup sets as folders containing all the backed up files in their original structure. Restoring a specific file is therefore very, very easy.</p>
<p>To get you started, I would recommend reading the <a href="http://edseek.com/~jasonb/articles/dirvish_backup/">dirvish HOWTO</a> by Jason Boxman - especially as dirvish sometimes uses not-quite-obvious terminology.</p>
IE7 - Where is the menu?2006-08-07T00:00:00+00:00http://pilif.github.com/2006/08/ie7-where-is-the-menu<p>Today, I finally went ahead and installed the current Beta 3 of Internet Explorer, so I too will have the opportunity to comment on it.</p>
<center>
<a class="serendipity_image_link" href="http://www.gnegg.ch/uploads/ie7.png"><img width="400" height="85" border="0" hspace="5" src="http://www.gnegg.ch/uploads/ie7-smaller.png" alt="" /></a>
</center>
<p>What I ask myself is: Where is the menu?</p>
<p>Well. I know it's to the right of the screen behind these buttons. But it's no real menu. It's something menu-like.</p>
<p>Why radically change the GUI? Even in Vista itself, there won't be a menu any more. Or at least not a permanently visible one.</p>
<p>The problem is: It took me years to teach my parents that all functionality of a program is accessible via the menu and that the menu is always at the top of the application (whereas it's at the top of the screen on the Mac).</p>
<p>Now with this new mood of removing the menus and putting them behind arbitrary buttons, how will I explain to my parents how to use the application? I can't say "Go to File / Save" any more. Besides the names of the menu items, in the future I will <em>also</em> have to remember where the menu is.</p>
<p>And as each application will do it differently, I'll have to remember it countless times for different applications.</p>
<p>And even if I know: How to explain it to them? "Click that Icon with the cogwheel"? I doubt they'd associate that icon with the same thing as I do. Thankfully in IE7, there's still text so I could say "Click on Tools". But what if some "intelligent" UI designer decides that not even the text is needed any more?</p>
<p>In my opinion, removing the menu has nothing to do with making the application easier to use. It's all about looking different. And looking different is counter-productive to usability. Why move away from something everyone knows? Why change for changes sake?</p>
<p>It's not that people were overwhelmed by that one line of text at the top of the application. People that didn't use the menu didn't bother. But in the case where they needed it, or needed assistance in using it, it was clear where it was and how to use it.</p>
<p>This has changed now.</p>
<p>And even worse: It has changed in an inconsistent way: Each application will display its own menu replacement where each one will work in a different way.</p>
<p>So I repeat my question: How can I teach my parents how to use one of these new applications? How can I remotely support them if I can't make them "read the menu" when I'm not sure of the layout of the application in question?</p>
<p>Thankfully, for my parents browsing needs, all this doesn't apply: They are happy users of Firefox.</p>
<p>But I'm still afraid of the day when the new layouts will come in effect in Office, Acrobat and even the file system explorer (in vista). How to help them? How to support them?</p>
Usable Playstation emulation2006-08-02T00:00:00+00:00http://pilif.github.com/2006/08/usable-playstation-emulation<p>Up until now, the Playstation emulation scene was - in my opinion - in a desolate state: Emulators do exist, but they are dependent on awfully complicated-to-configure plugins, each with its own bugs and features, and none of the emulators have been updated in the last two years.</p>
<p>Up until now, the best thing you could do was to use ePSXe with the right plugins. What you got then was a working emulation with glitches all over the place. Nothing even remotely comparable to the real thing.</p>
<p>And it failed my personal acceptance check: Final Fantasy 7 had severe performance problems ("slideshow" after battle and before opening the menu interface) and the blurred color animation on the start of a battle didn't work either.</p>
<p>Well. I was used to the latter problem: They never worked and it was - for me - a given fact that these animations just don't work in emulators.</p>
<p>The other thing that didn't work in ePSXe was FFIX. You could play up to that village where they are creating these black mage robots. The emulator crashed on the airship movie after that part of the game. The workaround was to downgrade to ePSXe 1.5.1 which actually worked, but IMHO that just underlines the fact that ePSXe is not what I'd call working software.</p>
<p>I was not willing to cope with that - mainly because I own a PSOne, so I could use that. Unfortunately, it's a European machine though, and both games I own and I'm currently interested in replaying, FFIX and FFVI, are German - and especially FFVI is the worst translation I've ever seen in a game (even the manual was bad, btw).</p>
<p>So, there is some incentive in getting an emulator to work: You know, getting the US versions of the games isn't exactly hard, but playing them on a European console is quite the challenge even if the games were obtained through retail channels.</p>
<p>Keep in mind that a) both games are no longer sold and b) I own them already albeit in the wrong language.</p>
<p>And today I found for the Playstation what ZSNES was compared to Snes9x back in 1995: I'm talking about the emulator <a href="http://psxemulator.gazaxian.com/">pSX emulator</a>.</p>
<p>Granted. The name is awfully uninventive, but the software behind that name, well... works very, very nicely:</p>
<ul>
<li>No cryptic plugins needed. Unpack, run, play.</li>
<li>It's fast. No performance problems at all on my MacBook Pro (using BootCamp)</li>
<li>It's stable. It just does not crash on me.</li>
<li>And you know what: Even the color animations work - precisely these color animations which we were told would never work properly on PC hardware.</li>
</ul>
<p>But I'm not finished yet: The software is even <b>under active development</b>! And the author is actually taking and even fixing(!) bug reports. The author never blames the player for a bad configuration on the emulator's forum, but looks into every report and often fixes what was reported.</p>
<p>It's not Free (freedom) software. It's Windows only. But it has one hell of an advantage over all other Playstation emulators out there: It works.</p>
<p>As I hinted above: This is just like SNES emulation back in 1995: You had many abandoned projects, one emulator you were told was good (Snes9x) and one that worked (ZSNES). It looks like history is once again repeating itself.</p>
<p>My big, heartfelt thanks to whoever is behind pSX emulator! You rock!</p>
mod_php, LightTPD, FastCGI - What's fastest?2006-08-01T00:00:00+00:00http://pilif.github.com/2006/08/mod_php-lighttpd-fastcgi-whats-fastest<p>Remember last April when I <a href="http://www.gnegg.ch/archives/274-Ruby-on-Rails.html">found out</a> that Ruby on Rails was that quick compared to a PHP application? Remember when I said that it might be caused by FastCGI, but that I didn’t have the time to benchmark the thing properly?</p>
<p>Well… today I needed to know.</p>
<p>This article is even larger than my usual articles, so I had to split it up and create an extended entry. I hope you don’t mind.</p>
<!--more-->
<p>You see, if you think of it, FastCGI has some advantages over the common mod_php in Apache scenario that's so widespread these days. Let me explain:</p>
<p>When you load PHP into Apache as a module (using mod_php), each Apache process you run will also contain a PHP interpreter which in turn will load all the compiled in libraries which themselves are not exactly small.</p>
<p>This means that even if the Apache process that just started will only serve images, it will contain a PHP interpreter with all assigned libraries. That in turn means that said Apache process uses a lot of memory and takes some time to start up (because PHP and all the shared libraries it's linked to need to be loaded). Wasted energy if the file that needs to be served is an image or a CSS file.</p>
<p>FastCGI in contrast loads the PHP interpreter into memory, <em>keeps it there</em> and Apache will only use these processes to serve the PHP requests.</p>
<p>That means that all the images and CSS, flashes and whatever other static content you may have can be served by a much smaller Apache process that does not contain a scripting language interpreter and that does not link in a bunch of extra libraries (think libxml, libmysqlclient, and so on).</p>
<p>Even if you only serve pages parsed by PHP - maybe because you process your stylesheets with PHP and because you do something with the served images - you are theoretically still better off with FastCGI as Apache will recycle its processes now and then (though that's configurable) while FastCGI processes <em>stay there</em>.</p>
<p>And if you go on and need to load-balance your application, FastCGI still can provide advantages: In the common load balancing scenario, you have a reverse proxy or a load balancer and a bunch of backend servers actually doing the work. In that case, if you use FastCGI, the backend servers will be running your PHP application and nothing else. No web server loading an interpreter loading your script. Just the interpreter and your script. So you save a whole lot of memory by not loading another web server in the backend (yes, FastCGI works over the network).</p>
<p>And if all that does not convince you: You even get Unix rights separation for different virtual servers using a SuEXEC wrapper - a thing you don't get when you work with mod_php. In that case, all PHP scripts are run directly by the Apache process and thus share the permissions - even across vhosts.</p>
<p>There are some Apache 2+ MPMs that try to fix that, but none of them is stable and none of them is really under development any more.</p>
<p>You can use SuEXEC with a standard CGI process, but the performance hit there is prohibitive.</p>
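The user separation described above can be sketched as an Apache vhost using mod_fcgid and suexec. This is only an illustration, not a tested configuration: the host name, user names, paths and the wrapper script are all made up.

```apache
# Per-vhost user separation with mod_fcgid + suexec (Apache 2.0 era).
# ServerName, users, paths and the wrapper script are hypothetical.
<VirtualHost *:80>
    ServerName customer-one.example.com
    DocumentRoot /home/customer1/htdocs

    # suexec runs this vhost's (Fast)CGI processes as its own Unix user
    SuexecUserGroup customer1 customer1

    # hand .php files to a FastCGI wrapper owned by that user
    AddHandler fcgid-script .php
    FCGIWrapper /home/customer1/cgi-bin/php-wrapper .php

    <Directory /home/customer1/htdocs>
        Options +ExecCGI
        AllowOverride None
    </Directory>
</VirtualHost>
```

Each vhost gets its own wrapper and its own Unix user, so a script in one vhost cannot read another vhost's files - exactly what mod_php cannot offer.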
<p>Well, and if you continue all these thoughts, you'll end up with another possibility of optimization: Today's web applications don't need many of the features Apache provides: They don't need to parse server side image maps. They usually don't need server based authentication (they do it themselves using forms and cookies), they don't need multiple competing alias implementations and they certainly don't need fancy directory indexes.</p>
<p>So maybe, one can even remove Apache out of the equation and replace it with something with fewer features and optimized for raw performance? Maybe something like <a href="http://www.lighttpd.net/">LightTPD</a>.</p>
<p>All that theory sounded quite interesting to me. Is there a way to speed up PHP applications? Does FastCGI provide a viable way to run scripts of different vhosts as different Unix users? Is FastCGI really faster?</p>
<p>Questions over questions. And this time around, I went after the answers:</p>
<p>I created a VMWare Server based virtual machine running Gentoo Linux and tested (using <tt>ab -c5 -n100</tt>) a PHP application using various settings.</p>
<p>These are the preconditions:</p>
<ul>
<li><p>The page I benchmarked was the unauthenticated start page of a fairly large PHP application using a framework that's quite similar to Ruby on Rails considering the levels of indirection and count of file inclusions going on.</p><p>I used the unauthenticated start page because building that one involves next to no database queries which was important as I wanted to test the performance of the web server and PHP, not the performance of the database.</p><p>I didn't use a simple test page, as I wanted real-life results, not canned ones.</p></li>
<li>For each test, I completely rebuilt the server environment by resetting the virtual machine to a clean state.</li>
<li>CFLAGS were conservative (-O2)</li>
<li>I didn't tweak the configuration of the used programs at all. This was done because I know the involved programs to various degrees (Apache is well-known to me, while LightTPD isn't). So I trusted the OS package maintainer to provide a usable initial configuration. Also, this allows you to recreate the experiment easily and it saved me from quite a lot of work.</li>
<li>All tests were done with <tt>ab -c5 -n100</tt>, which I ran six times. I removed the worst and the best result and took the median of the remaining 4 runs. I don't think that was necessary as the results were pretty constant over the runs.</li>
<li>I used the following software components (all stock Gentoo):
<table id="bench-art-comps" cellspacing="0">
<tr>
<th>Component</th>
<th>Version</th>
<th>USE-flags</th>
</tr>
<tr>
<td>Apache</td>
<td>2.0.58</td>
<td><tt>-ssl mpm-worker threads</tt> and <tt>-ssl mpm-prefork</tt></td>
</tr>
<tr>
<td>LightTPD</td>
<td>1.4.11</td>
<td><tt>-gdbm fastcgi -ssl php</tt></td>
</tr>
<tr>
<td>PHP</td>
<td>5.1.4-gentoo-r4</td>
<td><tt>fastbuild -memlimit iconv postgres xml xmlreader tokenizer bzip2 pear -tiff -xpm cli ftp -berkdb curl bcmath curlwrappers -gdbm -jpeg -ncurses pcre -png -readline session simplexml -spell spl sqlite -truetype</tt> with <tt>apache2</tt> and <tt>-apache2 cgi force-cgi-redirect</tt> (cgi contains FastCGI support). The used flags are the minimum requirements of the tested application.</td>
</tr>
<tr>
<td>mod_fcgid</td>
<td>1.0.8</td>
<td>-</td>
</tr>
<tr>
<td>APC</td>
<td>3.0.10</td>
<td><tt>mmap</tt></td>
</tr>
</table>
</li>
</ul>
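<p>The drop-the-best-and-worst, then-take-the-median step from the procedure above can be sketched in a few lines of shell. The six requests-per-second values are made-up example numbers, not taken from the actual runs:</p>

```shell
# Six hypothetical ab results (requests/s): drop the best and worst run,
# then take the median of the remaining four (mean of the two middle
# values, since the count is even).
echo "4.59 4.61 4.65 4.55 4.70 4.62" | tr ' ' '\n' | sort -n |
  sed '1d;$d' |                      # strip worst and best run
  awk '{ v[NR] = $1 }
       END { printf "%.3f\n", (v[NR/2] + v[NR/2+1]) / 2 }'
# prints 4.615
```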
<p>The results are as follows (in the order I made the tests. Winners are bolded):</p>
<table id="bench-art-res" cellspacing="0">
<tr>
<td> </td>
<th>Requests/s</th>
<th>Time per Request (mean, ms)</th>
<th>Failures</th>
</tr>
<tr>
<th>Apache (prefork), mod_php</th>
<td align="right">4.59</td>
<td align="right">1089.647</td>
<td align="right">0</td>
</tr>
<tr>
<th>Apache (prefork), mod_php, APC</th>
<td align="right">11.05</td>
<td align="right">452.485</td>
<td align="right">0</td>
</tr>
<tr>
<th>Apache (prefork), FastCGI, php</th>
<td align="right">4.89</td>
<td align="right">1022.645</td>
<td align="right">11</td>
</tr>
<tr>
<th>Apache (prefork), FastCGI, php, APC</th>
<td align="right"><b>11.24</b></td>
<td align="right"><b>444.905</b></td>
<td align="right">4</td>
</tr>
<tr>
<th>Apache (worker), FastCGI, php</th>
<td align="right">4.99</td>
<td align="right">1001.935</td>
<td align="right">12</td>
</tr>
<tr>
<th>Apache (worker), FastCGI, php, APC</th>
<td align="right">11.12</td>
<td align="right">449.818</td>
<td align="right">0</td>
</tr>
<tr>
<th>LightTPD, FastCGI, php</th>
<td align="right">4.65</td>
<td align="right">1075.088</td>
<td align="right">0</td>
</tr>
<tr>
<th>LightTPD, FastCGI, php, APC</th>
<td align="right">11.24</td>
<td align="right">444.919</td>
<td align="right">0</td>
</tr>
</table>
<p>Notes:</p>
<ul>
<li>The results you see here are not comparable to that other blog post I made as it's an entirely different configuration.</li>
<li>As some libraries used by PHP extensions are not thread-safe, one should not use mod_php with a threaded MPM. That's why I didn't either.</li>
<li>The failed requests were caused by the FastCGI process not answering in the allotted time. In a mod_php scenario, Apache (and the user) keeps waiting, blocking the process and causing another one to start. This is an example of a place where some tweaking is needed.</li>
<li>The glibc on the server wasn't compiled with NPTL enabled, so the worker MPM could be made a little faster I guess.</li>
<li>LightTPD needs some configuration tweaking to both php.ini and the fastcgi.conf to make PATH_INFO work which the tested PHP application depended on. This is documented in the <a href="http://www.lighttpd.net/documentation/fastcgi.html">LightTPD manual</a>.</li>
<li>My very rough notes (ab output anonymized) are <a href="http://www.lipfi.ch/gnegg-benchmark.txt">available</a>.</li>
</ul>
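<p>For reference, the PATH_INFO tweaking mentioned in the notes roughly comes down to two settings. The socket and binary paths below are made-up examples; the LightTPD manual linked above is the authoritative source:</p>

```conf
# fastcgi.conf (LightTPD 1.4.x) -- socket and bin-path are hypothetical:
fastcgi.server = ( ".php" =>
  (( "socket"   => "/tmp/php-fastcgi.socket",
     "bin-path" => "/usr/bin/php-cgi",
     # let PHP rebuild SCRIPT_NAME / PATH_INFO itself:
     "broken-scriptfilename" => "enable"
  ))
)

# php.ini:
cgi.fix_pathinfo = 1
```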
<p>So in conclusion, I can say this:</p>
<ul>
<li>APC is a must for larger PHP applications and it's time that piece of software is integrated into PHP</li>
<li>FastCGI is indeed faster than mod_php, but only by a very, very small difference. The positive things I said above do apply though: User separation and better memory management because of more lightweight httpd-processes.</li>
<li>LightTPD is smaller compared to Apache, but the old school server is still faster.</li>
<li>The speed-differences between servers and technologies are negligible, so there's no need for you to move away from mod_php right now (unless you are running out of RAM).</li>
<li>FastCGI indeed is an alternative to mod_php which could be very interesting in shared hosting scenarios.</li>
<li>Personally, before I did this test, I was certain that LightTPD would win the race. Obviously, software that is perceived as bloated isn't necessarily slow.</li>
</ul>
<p>So, in the end, I hope this was, is or will be useful for you. If you have any recommendations and suggestions on how to benchmark better, faster, shiny-er or whatever, please don't hesitate to leave a comment here!</p>
After 13 years something new in Monkey Island2006-08-01T00:00:00+00:00http://pilif.github.com/2006/08/after-13-years-something-new-in-monkey-island<p><a href="http://www.gnegg.ch/uploads/monkey-seq.png"><img src="http://www.gnegg.ch/uploads/monkey-seq.serendipityThumb.png" border="0" alt="" hspace="5" width="110" height="99" align="left" /></a></p>
<p>It was 14 years ago that I played Monkey Island for the first time. Well… maybe 13. I just don’t remember exactly whether it was 1992 or 1993 when my parents finally bought a computer and I illegally copied the game from a classmate (including a photocopied version of that “copy protection” wheel).</p>
<p>Of course it didn’t take me 13 years to complete it. But the game was so incredibly cool that I played through it over and over again. And with the downfall of DOS came the advent of <a href="http://www.scummvm.org">ScummVM</a>, allowing me to still play the game.</p>
<p>And just now I started another run - probably because I’ve seen Pirates of the Caribbean 2 last Monday and I noticed quite some similarities to Monkey Island - especially the second part (Voodoo Lady in a swamp comes to mind).</p>
<p>Anyways. Today was the first time ever I’ve seen the scene I screenshotted there.</p>
<p>In all my previous runs, I always “salvaged” the idol as my last task, which meant that as soon as I got out of the water, I saw the ghost ship fade away with Elaine on it.</p>
<p>Now, I did it first, which actually makes sense as that task requires the least amount of walking around. That led me to see this cute scene between Guybrush and Elaine (not to mention her stupid excuse for not kissing him shortly afterwards).</p>
<p>How nice to find something new after 13 long years.</p>
Tracking comments with cocomment2006-07-26T00:00:00+00:00http://pilif.github.com/2006/07/tracking-comments-with-cocomment<p>I'm subscribed to quite a long list of feeds lately. Most of them are blogs and almost all of them allow users to comment on posts.</p>
<p>I often leave comments on these blogs. Many times, they are as rich as a posting here as I got lots to say once you make me open my mouth. Many times, I quietly hope for people to respond to my comments. And I'm certainly eager to read these responses and to participate in a real discussion.</p>
<p>Now this is a problem: Some of the feeds I read are aggregated feeds (like PlanetGnome or PlanetPHP or whatever) and it's practically impossible to find the entry in question again.</p>
<p>Up until now, I had multiple workarounds: Some blogs (mainly those using the incredibly powerful <a href="http://www.s9y.org">Serendipity</a> engine) provide the commenter with a way to subscribe to an entry, so you get notified per Email when new comments are posted.</p>
<p>For all non-s9y-blogs, I usually dragged the link to the site to my desktop and tried to remember to visit them again to check if replies to my comments were posted (or maybe another interesting comment).</p>
<p>While the email method was somewhat comfortable to use, the link-to-desktop one was not: my desktop is cluttered enough with icons without these additional links anyway. And I often forgot to check them nonetheless (making a bookmark would guarantee that I'd forget them; the desktop link at least provides me with a slim chance of not forgetting).</p>
<p>Now, by accident, I came across <a href="http://www.cocomment.com/">cocomment</a>.</p>
<p>cocomment is interesting from multiple standpoints. For one, it just solves my problem as it allows you to track discussions on various blog entries - even if they share no affiliation at all with cocomment itself.</p>
<p>This means that I finally have a centralized place where I can store all my comments I post and I can even check if I got a response on a comment of mine.</p>
<p>No more links on the desktop, no more using the bandwidth of the blog owner's mail server.</p>
<p>As a blog owner, you can add a javascript-snippet to your template so cocomment is always enabled for every commenter. Or you just keep your blog unmodified. In that case, your visitors will use a bookmarklet provided by cocomment which does the job.</p>
<p>Cocomment will crawl the page in question to learn if more comments were posted (or it will be notified automatically if the blog owner added that javascript snippet). Now, crawling sounds like they waste the blog owner's bandwidth. True. In a way. But on the other hand: It's way better if one centralized service checks your blog once than if 100 different users each check your blog once. Isn't it?</p>
<p>Anyways. The other thing that impresses me about cocomment is how much you can do with JavaScript these days.</p>
<p>You see, even if the blog owner does not add that snippet, you can still use the service by clicking on that bookmarklet. And once you do that, so many impressive things happen: In-Page popups, additional UI elements appear right below the comment field (how the hell do they do that? I'll need to do some research on that), and so on.</p>
<p>The service itself currently seems a bit slow to me, but I guess that's because they are getting a lot of hits currently. I just hope they can keep up, as the service they are providing is really, really useful. For me and, I imagine, for others as well.</p>
Developing with the help of trac2006-07-21T00:00:00+00:00http://pilif.github.com/2006/07/developing-with-the-help-of-trac<p><a href="http://trac.edgewall.org/">trac</a> rules.</p>
<p>If you have a small team working on a project that's getting bigger and bigger, if you need a system to track the progress of your project, a system to allow communications within your team in a way that keeps track of what you've talked about, if you need a kick-ass frontend to subversion - if you need anything of that, consider trac.</p>
<p>trac is a web based subversion frontend with the nicest addons: It provides a wiki, some project management features and a bug tracker. One that's actually usable for non-scientists as well (in contrast to bugzilla).</p>
<p>But the tool's real strength comes from its networking features: all components are interconnected. You are looking at the svn history and you see links to your bugtracker. You are looking at the bugtracker and you see links to the wiki where you find more information about the bug. And you look at the wiki and you'll find links to individual changesets (SVN revisions). And so on.</p>
<p>All this is very nice in itself, but it's not what really made me write this post. The ease of use is. And the good looks.</p>
<p>The software, once it's running, looks very nice and is very, very easy to use. Some administration tasks require you to pay a visit to the command line, but all everyday tasks can be done from the web interface. In a completely hassle-free way.</p>
<p>No forms too complicated to understand for a normal person to be able to add a bug to the database. No complex customization needed to make these links between the modules work. And no ugly, bloated interface.</p>
<p>If you like the tool so far, be warned though: Installing the thing isn't exactly a piece of cake - at least if you want to integrate it into an existing apache installation. Still: The benefits far outweigh the hassle you have to go through to set the thing up.</p>
<p>Trac really is one nice piece of software.</p>
<p>Oh and in case you haven't noticed. Yepp. We are using it internally to manage our projects. One of them at least.</p>
Computers under my command (4): yuna2006-07-21T00:00:00+00:00http://pilif.github.com/2006/07/computers-under-my-command-4-yuna<p><img width="177" height="450" border="0" hspace="5" align="left" src="http://www.gnegg.ch/uploads/yuna.jpg" alt="" /></p>
<p>Yuna was the lead girl in Final Fantasy X, the first episode of the series being released for the Playstation 2.</p>
<p>Now, I know I'm alone with this opinion, but FFX was a big disappointment for me: obvious character backgrounds, unimpressive story, stupid mini games, no world map, much too short. No. I didn't like FFX.</p>
<p>But this doesn't change the fact that I played through the game and that I was seriously impressed by how good the thing looked. Yes. The graphics were good - unfortunately that's everything positive I can say about the game.</p>
<p>And this is why I'm getting straight to the computer behind the name:</p>
<p>I called my MacBook Pro "yuna".</p>
<p>My MacBook Pro is the one machine I use at work that impressed me the most yet: Fast, good looking, long battery life... and... running MacOS X.</p>
<p>Yuna did what was completely unthinkable for me not much more than 5 years ago: it converted me over to using MacOS X as my main OS. It's not a secondary OS. It's no dual boot (especially since I stopped playing WoW). It's no "MacOS is nice, but I'm still more productive in Windows". It's no "sometimes I miss Windows" and no "mmh... this would work better in Windows".</p>
<p>No. It's a full-blown remorseless conversion.</p>
<p>Granted: Some things DO work better in windows (patched emulators for use in Timeattack videos come to mind), but my point is: I don't miss them.</p>
<p>The slickness and polish of the OSX interface and especially the font rendering (I admit I put way too much emphasis on fonts when choosing my platform, but fonts after all are the most important interface between you and the machine) and the Unix backend make me wonder: how could I ever work without OS X?</p>
<p>It's funny. For some time now I thought about converting.</p>
<p>But what really made me do it was knowing that there's a safety net: you know, I still have that Windows partition on this Intel Mac. And I do have Parallels (which is much faster than Virtual PC) which I use for Delphi and lately Visual Studio.</p>
<p>Everyone who keeps claiming that Apple's switch to Intel will decrease their market share even further had better shut up. Now. Once you have such a machine, once you see the slickness of the interface, once you notice how quickly you can be productive in the new environment - once that happens, you'll see that there's no need, no need at all, to keep using Windows.</p>
<p>So, a wonderful machine with the name of an (admittedly) good-looking girl (with a crappy background story) from a crappy game. Too bad <a href="http://www.gnegg.ch/archives/293-Computers-under-my-command-2-marle.html">Marle</a> or <a href="http://www.gnegg.ch/archives/294-Computers-under-my-command-3-terra.html">Terra</a> weren't free any more.</p>
Template engines complexity2006-07-20T00:00:00+00:00http://pilif.github.com/2006/07/template-engines-complexity<p>The current edition of the german computer magazine <a href="http://www.heise.de/ix/">iX</a> has an article comparing different template engines for PHP.</p>
<p>When I read it, the old discussion about Smarty providing too many flow controlling options sprang to my mind again, even though that article itself doesn't say anything about whether providing a rich template language is good or not.</p>
<p>Many purists out there keep telling us that no flow control whatsoever should be allowed in a template. The only thing a template should allow is to replace certain markers with some text. Nothing more.</p>
<p>Some other people insist, that having blocks which are parsed in a loop is ok too. But all the options Smarty provides are out of the question as it begins intermixing logic and design again.</p>
<p>I somewhat agree on that argument. But the problem is that if you are limited to simple replacements and maybe blocks, you begin to create logic in PHP which serves no other purpose than filling that specially created block structure.</p>
<p>What happens is that you end up with a layer of PHP (or whatever other language) code that's so closely tailored to the template (or even templates - the limitations of the block/replacement engines often require you to split a template into many partial files) that even the slightest changes in layout structure will require a rewrite in PHP.</p>
<p>Experience shows me that if you really intend to touch your templates to change the design, it won't suffice to change the order of some replacements here and there. You will be moving parts around and more often than not the new layout will force changes in the different blocks / template files (imagine marker {mark} moving from block HEAD to block FOOT).</p>
<p>So if you want to work with the down-stripped template engines while still keeping the layout easily exchangeable, you'll create layout-classes in PHP which get called from the core. These in turn use tightly coupled code to fill the templates.</p>
<p>When you change the layout, you'll dissect the page layouts again, recreate the wealth of template files / blocks and then <em>update your layout classes</em>. This means that changing the layout does in-fact require your PHP backend coders to work with the designers yet again.</p>
<p>Take <a href="http://smarty.php.net">smarty</a>.</p>
<p>Basically you can feed a template a defined representation of your view data (or even better: your model data) in unlimited complexity and in raw form. You want to have floating-point numbers on your template represented with four significant digits? Not <b>your</b> problem with Smarty. The template guys can do the formatting. You just feed a float to the template.</p>
<p>In other engines, formatting numbers for example is considered backend logic and thus must be done in PHP.</p>
<p>This means that when the design requirement in my example changes and numbers must be formatted with 6 significant digits, the designer is stuck. He must refer back to you, the programmer.</p>
<p>Not with Smarty. Remember: You got the whole data in a raw representation. A Smarty template guy, knows how to format Numbers from within Smarty. He just makes the change (which is a presentation change only) right in the template. No need to bother the backend programmer.</p>
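<p>A minimal sketch of what that looks like in a Smarty template - <tt>$price</tt> is a hypothetical raw float pushed from PHP, and <tt>string_format</tt> is Smarty's sprintf-style modifier:</p>

```smarty
{* raw float from the backend, formatted in the template only *}
Price: {$price|string_format:"%.4g"}   {* four significant digits *}

{* when the spec changes to six significant digits, only this
   line changes - no PHP involved: *}
Price: {$price|string_format:"%.6g"}
```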
<p>Furthermore, look at complex structures. Let's say a shopping cart. With Smarty, the backend can push the whole internal representation of that cart to the template (maybe after some cleaning up - I usually pass an associative array of data to the template to have a unified way of working with model data over all templates). Now it's your Smarty guy's responsibility (and opportunity) to do whatever job he has to do to format your model (the cart) in the way the current layout specification asks him to.</p>
<p>If the presentation of the cart changes (maybe some additional text info must be displayed what the template was not designed for in the first place), the model and the whole backend logic can stay the same. The template just uses the model object it's provided with to display that additional data.</p>
<p>Smarty is <em>the</em> template engine allowing to completely decouple the layout from the business logic.</p>
<p>And let's face it: Layout DOES in-fact contain logic: Alternating row colors, formatting numbers, displaying different texts if no entries could be found,...</p>
<p>When you remove logic from the layout, you have to move it to the backend, which immediately means that you will need a backend worker whenever the layout logic changes (which it always does on redesigns).</p>
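<p>That kind of layout logic - alternating row colors, a "no entries" text - looks roughly like this in Smarty; a sketch, with <tt>$rows</tt> standing in for a hypothetical array of model data pushed from PHP:</p>

```smarty
{* alternating row classes and an "empty" fallback,
   handled entirely in the template *}
{section name=i loop=$rows}
  <tr class="{cycle values="odd,even"}">
    <td>{$rows[i].name}</td>
  </tr>
{sectionelse}
  <tr><td>No entries found.</td></tr>
{/section}
```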
<p>Granted. Smarty isn't exactly easy to get used to for a HTML only guy.</p>
<p>But think of it: they managed to learn to replace &lt;font&gt; tags in their code with something more reasonable (CSS) that works completely differently and follows a completely different syntax.</p>
<p>What I want to say is that your layout guys are not stupid. They are well capable of learning the little bits and pieces of logic you'd want to have in your presentation layer. Letting them have that responsibility means that you yourself can go back to the business logic once and for all. Your responsibility ends after pushing model objects to the view. The rest is the Smarty guy's job.</p>
<p>Being in the process of redesigning a fully Smarty-based application right now, I can tell you: it works. PHP does not need to get touched (mostly - design flaws exist everywhere). This is a BIG improvement over other stuff I've had to do before, which used the approach everyone calls clean: PHPLIB templates. I still remember fixing up tons and tons of PHP code that was tightly coupled to the limited structure of the templates.</p>
<p>In my world, you can have one backend, no layout code in PHP and an unlimited number of layout templates. Interchangeable without changing anything in the PHP code. Without adding any PHP code when creating a new template.</p>
<p>Smarty is the only PHP template engine I know of that makes that dream come true.</p>
<p>Oh and btw, Smarty won the performance contest in that article by a wide margin over the second-fastest entry. So bloat can't be used as an argument against Smarty. Even if it IS bloated, it's not slower than non-bloated engines. It's faster.</p>
PostgreSQL: Explain is your friend2006-07-19T00:00:00+00:00http://pilif.github.com/2006/07/postgresql-explain-is-your-friend<p>Batch updates to a database are a tricky thing because of multiple aspects. For one, many databases are optimized for fast read access (though not as optimized as say LDAP). Then, when you are importing a lot of data, you are changing the structure of the data already in there which means that it's very well possible that the query analyzer/optimizer has to change its plan in mid-batch. Also, even if a batch import is allowed to take a few minutes when running in the background, it must not take too long either.</p>
<p><a href="http://www.popscan.ch">PopScan</a> often relies heavily on large bulk imports into its database: as the application's feature set increased over time, it has become impossible to match all of the application's features to a database which may already be running on the vendor's side.</p>
<p>And sometimes, there is no database to work with. Sometimes, you're getting quite rough exports from whatever legacy system may be working at the other end.</p>
<p>All this is what forces me to work with large bulk amounts of data coming in in one of many possible formats: Other databases, text files, XML files, you name it.</p>
<p>Because of a lot of bookkeeping and especially tracking of changes in the data to allow to synchronize only changed datasets to our external components (Windows Client, Windows CE Scanner), I can't just use COPY to read in a complete dump. I have to work with UPDATE/INSERT which doesn't exactly help at speeding up the process.</p>
<p>Now what's interesting is how indexes come into play when working with bulk transfers: I have seen it go both ways now. Sometimes it's faster if you drop them before starting the bulk process. Sometimes you must not drop them if you want the process to finish this century.</p>
<p><tt>EXPLAIN</tt> (and <tt>top</tt> - if your postgres process is sitting there at constant 100% CPU usage, it's full-table-scanning) is your friend in such situations. That and an open eye. Sometimes, like yesterday, it was obvious that something was going wrong: that particular import I was working with slowed down the more data it processed. We all know: if speed is dependent on the quantity of data, something is wrong with your indexes.</p>
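<p>A sketch of what this looks like in practice - the table, columns and plan excerpt are made up for illustration; the tell-tale sign is the planner picking the wrong index and turning the selective condition into a filter that scans more rows with every completed dataset:</p>

```sql
-- Hypothetical lookup the import runs once per dataset:
EXPLAIN
SELECT * FROM articles
 WHERE article_id BETWEEN 1 AND 40000  -- matches more rows as the import grows
   AND vendor_no = 'A-100';            -- the selective condition

-- A plan like this is the warning sign described above: the primary
-- key index is scanned and the selective condition only filters.
--   Index Scan using articles_pkey on articles
--     Index Cond: ((article_id >= 1) AND (article_id <= 40000))
--     Filter: (vendor_no = 'A-100'::text)
```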
<p>Funny thing was: There was one index too many in that table: The primary key.</p>
<p>The query optimizer in PostgreSQL thought that using the primary key for one condition and then filtering for the other conditions was faster. But it was dead wrong as the condition on which I checked the primary key yielded more data with every completed dataset.</p>
<p>That means that PostgreSQL had to sequentially scan more and more data with every completed dataset. Using the other index, one I specifically made for the other conditions to be checked, always would have yielded a constant amount of datasets (one to four) so filtering after the PK condition <em>after</em> using that other index would have been much faster. And constant in speed even with increasing amounts of imported datasets.</p>
<p>This is one of the times when I wish PostgreSQL had a way to tell the optimizer what to do. To tell it: "Take index a for these conditions, then filter on that other condition."</p>
<p>The only way to accomplish that so far is to drop the index that was picked by accident. It just feels bad, dropping primary keys - but here it was the only solution. In PostgreSQL's defense, let me add though: my 8.1 installation took the right approach. It was the 7.3 installation that screwed up here.</p>
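<p>The situation can be reproduced in miniature. Here's a sketch using Python's built-in <tt>sqlite3</tt> module as a stand-in for PostgreSQL (the planners differ, but the EXPLAIN workflow is the same idea; the table and column names are all made up):</p>

```python
import sqlite3

# sqlite3 stands in for PostgreSQL here; "id" plays the role of the
# primary key the planner picked by mistake, "customer"/"article" are
# the columns the import actually filters on. All names are invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE datasets (
    id INTEGER PRIMARY KEY,
    customer TEXT,
    article TEXT,
    payload TEXT)""")
con.executemany(
    "INSERT INTO datasets VALUES (?, ?, ?, ?)",
    [(i, "c%d" % (i % 50), "a%d" % (i % 200), "x") for i in range(10000)])

# The index made specifically for the conditions checked on every row:
con.execute("CREATE INDEX idx_cust_art ON datasets (customer, article)")

# EXPLAIN shows which access path the planner picked; the two equality
# conditions make it use idx_cust_art and apply the PK condition afterwards.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM datasets"
    " WHERE customer = ? AND article = ? AND id < ?",
    ("c7", "a57", 9000)).fetchall()
print(plan[0][-1])
```

<p>Dropping <tt>idx_cust_art</tt> and re-running the EXPLAIN should show the plan falling back to the primary key - the same experiment, in reverse, that dropping the wrongly-chosen index performed on the 7.3 box.</p>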
<p>OK. So just drop the indexes when making a bulk import. Right? Wrong.</p>
<p>Sometimes, you get a full dump to import, but you want to update only changed datasets (to mark only the ones that actually changed as updated). Or you get data which is said to have a unique key, but which doesn't. Or you get data which is said to have a foreign key, but which violates it.</p>
<p>In all these cases, you have to check your database for what's already there before you can actually import your dataset. Otherwise you wrongly mark a set as updated, or your transaction dies because of a primary key uniqueness violation or because of a foreign key violation.</p>
<p>In such cases, you <strong>must not</strong> remove the index your database would use in your query to check if something is already there.</p>
<p>Believe me: the cost of updating the index on each insert is MUCH lower than the cost of doing a full table scan for every dataset you are trying to import ;-)</p>
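<p>The check-before-write step looks roughly like this (a sketch in Python with <tt>sqlite3</tt> standing in for PostgreSQL; all table, column and function names are made up). The SELECT in the middle is exactly the query whose index must survive the bulk import:</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE datasets (
    customer TEXT,
    article TEXT,
    payload TEXT,
    changed INTEGER DEFAULT 0)""")
# This index backs the existence check below - don't drop it for the import.
con.execute("CREATE INDEX idx_cust_art ON datasets (customer, article)")

def import_row(con, customer, article, payload):
    """Check what's already there before writing, so that only real
    changes get marked as updated and duplicate keys don't abort the
    whole transaction."""
    existing = con.execute(
        "SELECT payload FROM datasets WHERE customer = ? AND article = ?",
        (customer, article)).fetchone()
    if existing is None:
        con.execute(
            "INSERT INTO datasets (customer, article, payload) "
            "VALUES (?, ?, ?)", (customer, article, payload))
        return "inserted"
    if existing[0] != payload:
        con.execute(
            "UPDATE datasets SET payload = ?, changed = 1 "
            "WHERE customer = ? AND article = ?",
            (payload, customer, article))
        return "updated"
    return "unchanged"

print(import_row(con, "c1", "a1", "x"))  # inserted
print(import_row(con, "c1", "a1", "x"))  # unchanged
print(import_row(con, "c1", "a1", "y"))  # updated
```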
<p>So in conclusion, let me say this:</p>
<ul>
<li>Bulk imports are interesting. Probably even more interesting than complex data selection queries.</li>
<li>EXPLAIN is your best friend. Learn how to read it. Learn it now.</li>
<li>So-called "rules of thumb" don't apply all the time.</li>
<li>There are few things in life that beat the feeling of satisfaction you get when your previously crawling imports begin to fly - after staring at the output of EXPLAIN for hours and optimizing the queries and indexes in question countless times.</li>
</ul>
Six years of Sensational AG2006-07-14T00:00:00+00:00http://pilif.github.com/2006/07/six-years-of-sensational-ag<p>Six years and a day ago (I already made two posts yesterday, so this had to wait), we were at the Handelsregisteramt (the public office where you register companies here in Switzerland) where we officially founded the <a href="http://www.sensational.ch">Sensational AG</a>.</p>
<p>I even remember the weather (which is basically because I have a really hard time forgetting <em>anything</em>): it was one of the few days that summer where it didn't rain (completely contrary to the <a href="http://www.gnegg.ch/archives/295-Nice-summer.html">current summer</a>). There wasn't much of the sun to be seen either, but it was hot and humid.</p>
<p>When we founded the company, one of us was still going to school and I was absorbed by something else, so we kept our operation running at a slow pace.</p>
<p>On February 4th, 2001, we really took off.</p>
<p>By then, school was over for all of us and my other commitment was over too. We moved into a real office (the team had been working together even before we founded the real company - but back then we were all still at school, working from home and from the school house). I set up the basics of our internal network (which still works today - some of the hardware is even the same, namely Thomas, a ThinkPad 390 or something like that which serves as a central gateway). Internet access was still over an ISDN line, but at least it was something. ADSL was not available back then.</p>
<p>In mid-2001, we developed a barcode scanning application at a specific customer's request. This application is the foundation of <a href="http://www.popscan.ch">PopScan</a> - our current Big Thing.</p>
<p>In the last five years of operations, we did a lot of interesting stuff. Sometimes risky, sometimes just interesting and sometimes really, really great. I myself migrated to Mac OS, we migrated our telephone system to VoIP, we ran quite a big internet portal, and we developed applications from scratch for the web, for Windows and for Pocket PCs. We moved office (within the same building - even the same floor) and we finally hired two more people.</p>
<p>Looking back, we've come a long way while still being ourselves. And we managed to achieve an incredible amount with just three (and now five) people.</p>
<p>Thanks Lukas, thanks Richard. It's great to have this thing going with you!</p>
Horror Movies and LCDs2006-07-13T00:00:00+00:00http://pilif.github.com/2006/07/horror-movies-and-lcds<p>Have you ever tried watching a horror movie on an LCD screen?</p>
<p>I'm telling you, it sucks.</p>
<p>Movies in general and horror movies in particular may contain dark scenes with very little contrast.</p>
<p>That's perfectly OK - especially in the horror genre, where you expect a certain creepiness which is best achieved with dark shots.</p>
<p>Now the problem is that LCDs (the non-glossy editions in particular) suck at displaying black.</p>
<p>What you are seeing is not black - it's more like a blue-greyish bright surface - bright enough to suck up all the other dark tones that may be on the screen.</p>
<p>The effect is even worsened by watching the movie in a dark room. When there's ambient light, it's much better. But when it's dark around you, there's nothing you can do to fix it:</p>
<p>Increasing the screen's brightness will make the black glow even brighter, sucking up more of the surrounding areas. Turning the brightness down will make the black darker, but you'll lose the surrounding areas to the darkness too.</p>
<p>I tried the display of a ThinkPad T42p, a Cinema Display and the display of my MacBook Pro. None of them is particularly better or worse - they all plain suck.</p>
<p>This is why I always buy DLP projectors. The black really is quite black there (it's still not perfect - I imagine laser projectors will rule here).</p>
<p>The solution the industry is throwing at this is glossy displays. But where the non-glossy ones suck at horror movies in a dark room, the glossy ones suck at everything else - like working in the office, working outside and even watching a movie in a bright room.</p>
<p>To actually see anything of the higher contrast these glossy displays are said to provide (to actually see anything at all - I mean, besides yourself), you either need to be in a completely dark room or turn the brightness up very high, which is quite unpleasant for your eyes (I get a bad headache after working for more than 30 minutes on my Cinema Display with the brightness all the way up).</p>
<p>Can't wait for laser projectors, non-reflecting glossy displays (can a thing like that even exist?), hologram projectors, neuronal interfaces or something completely different. Till then, I guess I'll have to turn on the lights in my room when watching a dark movie.</p>
Blogroll is back - on steroids2006-07-13T00:00:00+00:00http://pilif.github.com/2006/07/blogroll-is-back-on-steroids<p>I finally got around to adding an excerpt of the list of blogs I'm regularly reading to the navigation bar to the right.</p>
<p>The list is somewhat special as it's auto-updating: it refreshes every 30 minutes and displays the blogs in descending order of last-updated time.</p>
<p>Adding the blogroll was a multi step process:</p>
<p>At first, I thought adding the Serendipity blogroll plugin and pointing it to my <a href="http://www.newsgator.com">Newsgator</a> subscription <a href="http://services.newsgator.com/ngws/svc/opml.aspx?uid=61859&mid=1">list</a> (I'm using Newsgator to always have an up-to-date read-status in both Net News Wire and FeedDemon) was enough, but unfortunately, that did not turn out to be the case.</p>
<p>First, the expat module of the PHP installation on this server has a bug making it unable to parse files with the Unicode byte order mark at the beginning (in UTF-8, three bytes marking the file as Unicode; in UTF-16, two bytes telling your machine whether the document was encoded on a little- or big-endian machine). So it was clear that I had to do some restructuring of the OPML feed (or patch around in the s9y plugin, or upgrade PHP).</p>
<p>Additionally, I wanted the list to be sorted in a way that the blogs with the most recent postings will be listed first.</p>
<p>My quickly hacked-together solution is <a href="http://www.lipfi.ch/pilif-feed.phps">this script</a>, which uses an RSS/Atom parser I took from WordPress - which means that the script is licensed under the GNU GPL (as the parser is).</p>
<p>I'm calling it from a cron job every 30 minutes (that's why the built-in cache is disabled in this configuration) to generate the OPML file sorted by the individual feeds' last-update timestamps.</p>
<p>That OPML-file then is fed into the serendipity plugin.</p>
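<p>The core of that cron job fits in a few lines. A sketch in Python (the feed data here is made up; in the real script each title/URL/timestamp triple comes from parsing the feed with the WordPress RSS/Atom parser):</p>

```python
from datetime import datetime, timezone
from xml.sax.saxutils import quoteattr

# Made-up stand-in for the parsed feed data: (title, feed URL,
# timestamp of the newest item).
feeds = [
    ("Coding Horror", "http://www.codinghorror.com/blog/index.xml",
     datetime(2006, 7, 12, 8, 30, tzinfo=timezone.utc)),
    ("gnegg", "http://www.gnegg.ch/index.rss",
     datetime(2006, 7, 13, 18, 0, tzinfo=timezone.utc)),
]

def to_opml(feeds):
    # Most recently updated feed first - that's the whole trick.
    newest_first = sorted(feeds, key=lambda f: f[2], reverse=True)
    lines = ['<?xml version="1.0" encoding="utf-8"?>',
             '<opml version="1.1"><body>']
    for title, url, _updated in newest_first:
        lines.append("<outline text=%s xmlUrl=%s />"
                     % (quoteattr(title), quoteattr(url)))
    lines.append("</body></opml>")
    return "\n".join(lines)

print(to_opml(feeds))
```

<p><tt>quoteattr</tt> takes care of the one real pitfall here: feed titles containing quotes or ampersands.</p>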
<p>The only problem I have now is that the list unfairly favors the aggregated feeds, as these are updated much more often than individual people's blogs. In the future I will thus either create a penalty for those feeds, remove them from the list, or just plain show more feeds on the page.</p>
<p>Still, this was a fun hack to do and fulfills its purpose. Think of it: whenever I add a feed in either Net News Wire or FeedDemon, it will automatically pop up on the blogroll on gnegg.ch - this is really nice.</p>
<p>On a side note: I could have used the Newsgator API to get the needed information faster and probably even without parsing the individual feeds. Still, I went the OPML way as that's an open format, making the script useful for other people - or for me, should I ever change the service.</p>
Amazing Ubuntu2006-07-10T00:00:00+00:00http://pilif.github.com/2006/07/amazing-ubuntu<p>I must say, I'm amazed how far <a href="http://www.ubuntu.com">Ubuntu Linux</a> has come in the last 6 months.</p>
<p>When I tried 5.10 last October, it was nice, but it was still how I had experienced Linux ever since I first tried it on the desktop - flaky: WLAN didn't work, DVDs didn't work, videos didn't work (well... they did, but audio and video desynched after playing for more than 10 seconds), fonts looked crappy compared to Windows and OS X, and suspend and hibernate didn't work (or rather worked too well - the notebook didn't come up again after suspending / hibernating).</p>
<p>I know there were tutorials explaining how to fix some of the problems, but why work through tons of configuration files when I can just install Windows or OS X and have it work out of the box?</p>
<p>Now, yesterday, I installed Ubuntu 6.06 on my Thinkpad T42.</p>
<p>Actually, I tried updating my 5.10 installation, but after doing so, my network didn't work any longer. And in comparison with Windows, OS X and even Gentoo Linux, where the fix is obvious or well documented with useful error messages, I had no chance of fixing it in Ubuntu on short notice.</p>
<p>Seeing that I had no valuable data on the machine, I could just go ahead with the reinstallation.</p>
<p>WPA still didn't work with the tools provided by default. Now, we all know that WEP is not safe any more, and in my personal experience it's also much flakier than WPA (connections dropping or not even coming up). How can a system like Linux, which is that security-centered, not support WPA? Especially as it also works better than WEP.</p>
<p>To Ubuntu's credit, I have to say that a tool to fix WPA on the desktop, <a href="http://www.gnome.org/projects/NetworkManager/">NetworkManager</a>, was released post-feature-freeze. If you know what to do, it's just a matter of installing the right packages to get it to work (and fixing some strange icon <a href="https://launchpad.net/distros/ubuntu/+source/network-manager/+bug/37128">resource error</a> preventing the GNOME applet from starting).</p>
<p>Aside from the connectivity issue (you won't read any praise for NetworkManager here, as a tool like that is nothing special in any other OS designed for desktop use), the Ubuntu experience was a very pleasant one.</p>
<p>Suspend to RAM worked (hibernate didn't - it doesn't even start hibernating). Fonts looked OK. And best of all:</p>
<p>I was able to play Videos (even HD with sufficient performance) and watch a DVD. Hassle-free.</p>
<p>Granted, I had to install some legally not-so-safe packages (with the help of <a href="http://easyubuntu.freecontrib.org/">EasyUbuntu</a>, which does the hard work for you), but you'd have to do that on any other system as well, so that's OK IMHO.</p>
<p>This was a really pleasant experience.</p>
<p>And in the whole process I only got three or four meaningless error messages, or things silently not working that are supposed to work according to the documentation.</p>
<p>I'm good enough with computers to fix stuff like that and I had enough time to do it, so I'm not very upset about it, but I'll only recommend Ubuntu as a real desktop OS once I can install it on a machine and connect to my home network without cryptic error messages and equally cryptic fixes (that NetworkManager bug).</p>
<p>Still: They've come a really long way in the past 6 months. Ubuntu is the first Linux distribution ever that manages to play an AVI video and a DVD without forcing me to tweak around for at least two hours.</p>
Nice summer2006-07-07T00:00:00+00:00http://pilif.github.com/2006/07/nice-summer<div align="center"><img width="292" height="70" border="0" hspace="5" src="http://www.gnegg.ch/uploads/weather.png" alt="" /></div>
<p>This is what the <a href="http://www.meteoschweiz.ch">Bundesamt für Meteorologie und Klimatologie</a> (basically the official entity to tell us and the world how the weather in Switzerland is) has to say about the upcoming weather.</p>
<p>It shows a certain niceness of this summer: it's neither just hot (like 2003) nor just cold (like all the other years for the last 10 years or so). There are hot days like last week, but there are also cooler ones like right now, where it's raining at a comfortable 22 degrees Celsius.</p>
<p>And just when you've had enough cool weather and want the sun back, it turns around again - just to get colder when it's getting too hot.</p>
<p>I wish every summer could be like this.</p>
Computers under my command (3): terra2006-07-07T00:00:00+00:00http://pilif.github.com/2006/07/computers-under-my-command-3-terra<p><img width="123" height="451" border="0" hspace="5" align="left" src="http://www.gnegg.ch/uploads/terra.jpg" alt="" /></p>
<p><a href="http://en.wikipedia.org/wiki/Final_Fantasy_VI">Final Fantasy VI</a> (known as Final Fantasy 3 in the US) begins with two guys and Terra using some mech-like device to raid a town with the objective to find a creature referred to as an Esper.</p>
<p>You soon learn that Terra is in fact wearing a device inhibiting her free will, and that she would never do something like that of her own free will - quite the contrary.</p>
<p>When the three people find that Esper, the two soldiers die at its hands, but Terra survives.</p>
<p>The rest of the game revolves around her, the balance between magic and technology, love and humanity.</p>
<p>Terra is the main character in what I think is the best Final Fantasy ever made, probably because it's in a way similar to Chrono Trigger (which is the second-best RPG ever made): well-thought-out characters, very free progression through the game (once the world ends, which is about halfway in), nice graphics and one hell of a story.</p>
<p>What really burns this game, and especially Terra, into your mind though is her theme song. Even on the SNES it sounds really nice, and in the end it's what got me really interested in game soundtracks.</p>
<p>Also, <a href="http://www.gnegg.ch/archives/252-Just-incredible.html">I've blogged</a> about a remix of that theme song. You should probably go ahead and listen to that to understand why FF6 is special to me and to everybody else.</p>
<p>Even after not having played FF6 for more than two years now, I still can't get that theme song out of my head.</p>
<p>The computer terra is a fan-less media center PC by <a href="http://www.hushtechnologies.net/">hush technologies</a>. It's running Windows XP Media Center edition and it's connected to my video projector.</p>
<p>I'm also using it to manage my iPod (and I'm using a script to rsync the music over to <a href="http://www.gnegg.ch/archives/291-Computers-under-my-command-Issue-1-shion.html">shion</a> both for backup and <a href="http://www.slimp3.com">SlimServer</a> access) and sometimes to play World of Warcraft on.</p>
<p>Even though the machine is quite silent, I can't have it running over night, so it doesn't have that big an uptime: it's right next to my bed, and the sound of the spinning hard disk and the blue power-indicator LED both keep me from sleeping at night.</p>
<p>Ever since I'm using the machine, I had small glitches with it: Crashes after updating the graphics driver (fixed via system restore), segmentation faults in the driver for the IR receiver, basically the stuff you get used to when you are running Windows.</p>
<p>I'm not complaining though: Even though the installation is fragile, my <a href="http://www.gnegg.ch/archives/264-Evening-leisure.html">home entertainment concept</a> depends on terra and usually it works just fine.</p>
<p>And after all, the original Terra was kind of fragile too (mentally and physically), so it's just fitting that the same applies to the computer named after her.</p>
<p>PS: Sorry for the bad picture quality, but I only found Terra on a black background, so I had to manually expose her. Problem is: I know as much about graphics software as a graphics designer knows about programming in C. Anyways: It turned out acceptable IMHO.</p>
More disk space needed2006-07-06T00:00:00+00:00http://pilif.github.com/2006/07/more-disk-space-needed<div align="center"><img width="439" height="182" border="0" hspace="5" src="http://www.gnegg.ch/uploads/space_needed.png" alt="" /></div>
<p>Can somebody explain to me why my Mac OS X needs 4 TB of disk space to encrypt my home directory, which is currently about 15 GB in size?</p>
<p>Before I got this message, it wanted me to free up another 1 KB, btw. When I did that and retried, this message popped up. Unfortunately, I can't reproduce that other message though.</p>
Computers under my command (2): marle2006-07-06T00:00:00+00:00http://pilif.github.com/2006/07/computers-under-my-command-2-marle<p><img width="220" height="393" border="0" hspace="5" align="left" src="http://www.gnegg.ch/uploads/marle2.jpg" alt="" /></p>
<p>While everyone keeps calling her Marle, she is actually Princess Nadia of the Kingdom of Guardia in what many people call the best console RPG ever made, <a href="http://en.wikipedia.org/wiki/Chrono_Trigger">Chrono Trigger</a>.</p>
<p>Chrono Trigger was one of the last RPGs Squaresoft ever did for the SNES, and it's special in many ways: excellent music (by <a href="http://www.procyon-studio.com/">Yasunori Mitsuda</a>), excellent graphics, smooth gameplay, a really nice story and excellently done characters.</p>
<p>Robo, Frog, Lucca, Marle, Crono, Magus and Ayla - each of them has their very own style and story. Aside from Crono, who is quite the ordinary guy, every one of them is special in their own way.</p>
<p>The server marle is special in its own way too.</p>
<p>It's not as outstanding as <a href="http://www.gnegg.ch/archives/291-Computers-under-my-command-Issue-1-shion.html">shion</a>: it was the first 64-bit machine running a 64-bit OS I ever deployed.</p>
<p>The OS was <a href="http://www.gentoo.org">Gentoo Linux</a> (as usual) and the machine itself is some IBM xSeries machine equipped with a 3 GHz Xeon processor and 2 GB of RAM - so basically nothing you need 64 bit for.</p>
<p>It was still an interesting experiment to get the machine to work with a 64-bit OS, though it all went completely uneventfully.</p>
<p>Ever since it was deployed, marle has been running at a customer's site without crashes or other problems.</p>
<pre class="code">
marle ~ # uptime
11:56:13 up 265 days, 44 min, 2 users, load average: 0.00, 0.01, 0.00
</pre>
<p>Not much happening there currently I guess. Also, it's amazing how quickly time passes - installing that machine feels like it was only yesterday.</p>
Computers under my command - Issue 1: shion2006-07-05T00:00:00+00:00http://pilif.github.com/2006/07/computers-under-my-command-issue-1-shion<p><img src="http://www.gnegg.ch/uploads/Shion.jpg" border="0" alt="Picture of the &quot;real&quot; Shion Uzuki" width="145" height="442" align="left" /></p>
<p>After <a href="/2006/07/linux-powerpc-gcc-segmentation-fault/">yesterday’s fun</a> with one of my servers, I thought I could maybe blog about some of them - especially the ones that are kind of “special” to me.</p>
<p>Of course, the first machine I’m looking at is my PowerPC Mac mini which I called “<a href="http://en.wikipedia.org/wiki/Shion_Uzuki">Shion</a>”, after the girl Shion Uzuki of the Xenosaga trilogy.</p>
<p>I don’t really have a very advanced naming scheme for my servers, but the important ones get names I tend to remember.</p>
<p>First it was people from Lord of the Rings (with Windows servers getting the names of the evil characters). Then, after I ran out of names, it was places in LotR, and after I ran out of those too, I began naming (important) servers after girls in console RPGs.</p>
<p>And of all the names, I guess shion is a very fitting one for a server. In the game, Shion is a robotics engineer and the inventor of the android called KOS-MOS.</p>
<p>And in my network, shion has a special place:</p>
<p>I initially bought the machine to run a <a href="http://www.slimp3.com">SlimServer</a> on, as my previous <a href="http://www.gnegg.ch/archives/238-The-greatest-gadget-ever.html">NSLU solution</a> was not really usable as hardware for the heavy, Perl-based SlimServer.</p>
<p>Once the SlimServer was in place, I obviously installed a Samba server on shion to serve the non-music files as well. Back then, I only had one external drive connected to the server.</p>
<p>Next thing to get installed was OpenVPN which I used for <a href="http://www.gnegg.ch/archives/242-Lots-of-fun-with-OpenVPN.html">quite a nice configuration</a> allowing me transparent access from and to the office.</p>
<p>Shortly after that, I finally found a USB ethernet adapter which made shion replace my ZyAir access point. I also had to buy a USB hub back then and I decided to use the remaining two ports of that to plug in additional hard drives, leading to shion’s current disk space capacity of roughly 1.2 TB.</p>
<p>Then I installed mp[3]act (I’ve also <a href="http://www.gnegg.ch/archives/266-mp3act.html">blogged about it</a>) and shortly after replaced it with Jinzora due to mp[3]act being quite bug-ridden and not in development any longer. (update 2013: links removed - mp[3]act is now pointing to a porn site and Jinzora is gone)</p>
<p>In all that time (one year of operation), shion never crashed on me. Overall, the stability of my home network has gone through the roof since I switched all those tasks over to her: no more strange connection losses. No more rebooting the router and cable modem when lots of outgoing connections are active. No more inexplicable slowness in the internal network.</p>
<p>Shion does a wonderful job for me and I would never ever go back to any less flexible or stable solution.</p>
<p>Lately, I thought about maybe ditching her for a more powerful intel-based Mac Mini, but in the end shion is fast enough for my current purpose and I could never ditch a machine as nice as this one.</p>
<p>Flexible, stable, fast, quiet and quite inexpensive. A machine worthy of being referred to by a name and a female pronoun.</p>
Linux, PowerPC, gcc, segmentation fault2006-07-04T00:00:00+00:00http://pilif.github.com/2006/07/linux-powerpc-gcc-segmentation-fault<p>If you ask me to name the one machine in my possession I love the most, that’ll be my Mac Mini.</p>
<p>It’s an old PPC one I bought a bit more than a year ago with the intention of installing Linux on it and using it as a home server/router. It’s not the quickest machine there is, but it’s the quietest, and it does its job like no other machine I ever had: Samba file server, <a href="http://www.gnegg.ch/archives/242-Lots-of-fun-with-OpenVPN.html">OpenVPN gateway</a>, BitTorrent client, <a href="http://www.jinzora.org/">MP3 streaming server</a>, <a href="http://www.slimp3.com">SlimServer</a> - just about everything you could ever use a home server for.</p>
<p>From the beginning, it was clear to me: the distribution I was going to install on the beauty would be <a href="http://www.gentoo.org">Gentoo Linux</a>. This decision was based on multiple reasons, from hard facts like always-current software to soft facts like nice command prompts.</p>
<p>Basically, the machine just sat there after I installed it, doing its job - until this week, when I wanted to install some software on it: mainly the unrar command, to extract some file right on one of the external HDs (shion - that’s what the machine is called - is connected to about 1 TB worth of external HDs).</p>
<p>Unfortunately, <tt>emerge unrar</tt> failed.</p>
<p>It failed hard with a SIGSEGV in gcc (or its cousin cc1).</p>
<p>Naturally I assumed there to be some bug in the gcc I originally installed (3.3 something - as I said: I did not touch the installation for a year now) and I tried to reemerge gcc.</p>
<p>… which ALSO failed with a segmentation fault.</p>
<p>I had no interest whatsoever in reinstalling the box - I had invested much too much time in its configuration. Cron jobs here, certificates there, home-grown scripts everywhere. Even with all the backups I had in mind - I did not want to do that kind of job. Besides: who says it’s really a software problem? Maybe the hardware is at fault, which would mean my work would have been in vain.</p>
<p>Searching for “gcc segmentation fault ppc” on Google is… interesting… but not really something you can do if you actually want a solution to this problem.</p>
<p>In the end, I mentally prepared myself to go ahead with the reinstallation - still hoping it’d be a software problem.</p>
<p>And by accident, I came across the <a href="http://www.gentoo.org/doc/en/gentoo-ppc-faq.xml">Gentoo PPC FAQ</a> which I more or less read out of pure interest while waiting for the ISO to be burned.</p>
<p>To my biggest delight, said FAQ was really helpful though as it had a question that went “<a href="http://www.gentoo.org/doc/en/gentoo-ppc-faq.xml#gccsegfaults">Why does gcc keep segfaulting during ebuilds?</a>”</p>
<p>So it is a kernel problem! Of course I had preemption enabled! And that option - while working perfectly on all my x86 boxes - causes cache corruption on PPC.</p>
<p>Now that I knew what the problem was, I had two possible ways to go on: Quick and dirty or slow, but safe and clean:</p>
<ol>
<li>Recompile the kernel on the obviously defective machine, hoping the cache corruption would not hit - or at least would not result in a non-bootable kernel.</li>
<li>Boot from a Gentoo live-CD, <tt>chroot</tt> into my installation, recompile the kernel.</li>
</ol>
<p>Obviously, I took option 1.</p>
<p>I had to repeat the <tt>make</tt> command about 20 times as it kept failing with a segmentation fault every now and then. Usually I got away with just repeating the command - the cache corruption is random, after all.</p>
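<p>That loop doesn't have to be manual, by the way. A sketch (in Python; the helper name is made up) of retrying a command until it exits cleanly - it works because <tt>make</tt> keeps the object files that already compiled and picks up where it left off:</p>

```python
import subprocess

def retry(cmd, max_tries=20):
    """Re-run a shell command until it exits 0.

    Sketch of the manual make-loop: with random cache corruption,
    every attempt gets a fresh chance to get past the spot that
    crashed last time.
    """
    for attempt in range(1, max_tries + 1):
        if subprocess.run(cmd, shell=True).returncode == 0:
            return attempt
    raise RuntimeError("%r still failing after %d tries" % (cmd, max_tries))
```

<p>Something like <tt>retry("make", max_tries=20)</tt> would have saved a fair amount of up-arrow-and-enter.</p>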
<p>I was unable to get past the compilation of reiserfs though - thank god I’m using ext3, so I could just remove that from the kernel and continue with my make-loop.</p>
<p>Rebooting that kernel felt like doing something really dangerous. I mean: If the cache corruption leads to a SIGSEGV, that’s fine. But what if it leads to a corrupted binary? And I was going to boot from it…</p>
<p>To my delight, this worked flawlessly though and I’m writing this blog entry behind the rebooted MacMini-router. This time, even compiling the all new gcc 4.1.1 worked as expected, so I guess the fix really helped and the hardware’s ok.</p>
<p>Personally, I think fixing this felt great. And in retrospect, I guess I was lucky as hell to have read that FAQ - without it, I would have gone ahead with the reinstallation, compiling yet another kernel with preemption enabled which would have led to just the same problems as before.</p>
<p>Maybe the (very talented) Gentoo Handbook guys should add a big, fat, red (and maybe even blinking) warning to the handbook telling users not to enable preemption in the kernel.</p>
<p>I know it’s in the FAQ, but why is it not in the installation handbook? That’s what you are reading anyway when installing Gentoo.</p>
<p>Still: Problem solved. Happy.</p>
SQLite on .NET CF - Revisited2006-07-01T00:00:00+00:00http://pilif.github.com/2006/07/sqlite-on-net-cf-revisited<p>Another year, another posting.</p>
<p>Back in 2004, I <a href="http://www.gnegg.ch/archives/188-SQLite-on-.NET-CF.html">blogged about Finisar.SQLite</a>, which at the time was the way to go.</p>
<p>Today, I am in much the same situation as I was back then, with the difference that this time it's not about experimenting - it's a real-world, will-go-live thing. I'm quite excited to finally have a chance at doing some PocketPC / Windows Mobile work that will actually be seen by someone other than myself.</p>
<p>Anyway: the project I blogged about is quite dead now and does not even support the latest versions of SQLite (3.2 is the newest supported file format). Additionally, it's an ADO.NET 1.0 library and thus does not provide the latest bells and whistles.</p>
<p>Fortunately, someone stepped up and provided the world with
<a href="http://sqlite.phxsoftware.com/">ADO.NET SQLite</a>, which is what I'm currently trying out. The project is alive and kicking, supporting the latest versions of SQLite.</p>
<p>So, if you, like me, need a fast and small database engine for your PocketPC application, this project is the place to look I guess.</p>
My task: RemoveTempHxDs2006-06-30T00:00:00+00:00http://pilif.github.com/2006/06/my-task-removetemphxds<p>Let's say you want to inform your user about what's going on (which is a nice thing to do).</p>
<p>This is an example of how not to do it:</p>
<center><a href="http://www.gnegg.ch/uploads/installer.png" class="thickbox"><img width="400" height="304" border="0" src="http://www.gnegg.ch/uploads/installer-thumb.png" alt="" /></a></center>
<p>What exactly is that "RemoveTempHxDs" the installer is doing right there? And why had the progress bar already been at 100% for more than three minutes when I took the screenshot?</p>
<p>If you are unable to provide meaningful progress information, don't provide it at all. Make your program display a "neutral" progress bar (some spinning wheel or something like that) and make it tell the user it's "Doing stuff...". Why expose useless internals?</p>
<p>I do see some value in displaying information like that when you need extra detail while supporting the application. But in that case, a log file of some kind is much more valuable, as it gives YOU as the developer the information you need without confusing your user.</p>
Programatically generating XML2006-06-26T00:00:00+00:00http://pilif.github.com/2006/06/programatically-generating-xml<p>If you have to generate XML, it's usually considered good style to use one of these defined APIs like DOM or <tt>XMLWriter</tt>.</p>
<p>Just writing the XML out as a plain string is considered bad practice because... why, actually?</p>
<p>Jeff Atwood once more <a href="http://www.codinghorror.com/blog/archives/000617.html">wrote down</a> what I have been thinking for quite some time now.</p>
<p>In many cases, just dumping out XML with <tt>sprintf</tt> or whatever your language provides you with is faster, independent of bugs in the libraries you use and easier to read.</p>
<p>There are five characters that need to be treated with caution in XML: the &amp;, the &lt;, the &gt;, the " and the '.</p>
<p>Escaping them is straightforward, and you usually don't run into niceties like quoting backslashes in regular expressions passed to <tt>perl -e</tt> inside a double-quoted string on your shell (I don't even want to count the \'s needed to get the regex parser in Perl to see just one of them).</p>
<p>And even if you screw up, you can still rely on the XML parser to bail out if something is wrong.</p>
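<p>To make the "occasional quoting problem" concrete: escaping those five characters is a handful of lines in any language. A sketch in Python (the helper names are my own, not from any library):</p>

```python
# Escape the five XML special characters before interpolating text
# into a hand-written XML string. The replacement order matters:
# "&" must be escaped first, or the other entities get double-escaped.
XML_ESCAPES = [
    ("&", "&amp;"),
    ("<", "&lt;"),
    (">", "&gt;"),
    ('"', "&quot;"),
    ("'", "&apos;"),
]

def xml_escape(text):
    for char, entity in XML_ESCAPES:
        text = text.replace(char, entity)
    return text

def status_fragment(code, group_id):
    # The sprintf-style approach: the structure is visible at a glance.
    return (
        '<status code="{0}" />\n'
        '<data>\n'
        '  <usergroup id="{1}" />\n'
        '</data>'
    ).format(xml_escape(code), xml_escape(group_id))
```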
<p>The time you waste learning your library, coping with its bugs and finally working with the usual bloat of today's OOP interfaces (interface as in "user interface") far outweighs the occasional quoting problem which should not happen anyways.</p>
<p>And don't make me get started on trying to understand the structure of the XML code like Jeff posted is going to create:</p>
<div>
<code>System.Text.StringBuilder sb = new System.Text.StringBuilder();
XmlWriterSettings xs = new XmlWriterSettings();
xs.ConformanceLevel = ConformanceLevel.Fragment;
xs.Indent = true;
XmlWriter xw = XmlWriter.Create(sb, xs);
xw.WriteStartElement("status");
xw.WriteAttributeString("code", "1");
xw.WriteEndElement();
xw.WriteStartElement("data");
xw.WriteStartElement("usergroup");
xw.WriteAttributeString("id", "usr");
xw.WriteEndElement();
xw.WriteEndElement();
xw.Flush();
return sb.ToString();
</code></div>
<p>If you are seeing this in code you have to maintain (but you have not written), how would you tell what XML it generates? How does the readability of that compare to this?</p>
<div><code>string s =
@"<status code=""{0}"" />
<data>
<usergroup id=""{1}"" />
</data>";
return String.Format(s, "1", "usr");
</code></div>
<p>Note that I'm not that much of a .NET guy, but I'm quoting Jeff's code here.</p>
<p>Summary in one word: Jeff's Article: ACK!</p>
One day with Serendipity2006-06-15T00:00:00+00:00http://pilif.github.com/2006/06/one-day-with-serendipity<p>Here we go: Everything migrated. Every link (hopefully) fixed. Worked around (I think) some problems with images uploaded from MT clashing with Serendipity's (s9y from now on) mod_rewrite handling and re-categorized every entry: the new gnegg.ch is up and running.</p>
<p>So, how is life with s9y?</p>
<p>First of all: I have not received a single SPAM comment. This is due to the better SPAM countermeasures and due to all URLs changing. I'll have to see how well the SPAM prevention works, though I have an idea it can't be that bad (see below).</p>
<p>While s9y is slower than MT in delivering pages (understandable considering MT is generating static pages), it's more feature-rich compared to MT - at least if you consider s9y to be a blogging engine, not a framework to create blogging-engine-like tools.</p>
<p>I love the plugin system: There's nothing you can't write a plugin for and people seem to have noticed that - at least considering the wealth of plugins available for you to download and install (directly from the administration interface).</p>
<p>Also, because I'm using a premade template and because s9y is a bit more intelligent in reusing templates, the whole site finally has a consistent look. No more usage of outdated templates when commenting or displaying error messages.</p>
<p>The most interesting thing though is the SPAM prevention: When you post a comment, it will go through the following procedure:</p>
<ul>
<li>Is it exactly the same comment as another posted before? If so, reject it. This prevents a spammer that got through once from getting through again. And it prevents you from double-posting by accident.</li>
<li>If your IP address posts another comment within 2 minutes of the previous one, the comment will be rejected. I know proxy servers and NAT routers exist and I will tweak the time if I should ever get more popular. A cookie-based approach obviously doesn't work to flood-protect the blog from malicious spammers.</li>
<li>If the comment points to a URL listed on <a href="http://www.surbl.org/">SURBL</a>, it'll be rejected. I'm sorry, but this is a sacrifice I must ask for.</li>
<li>If you post a comment to an entry older than 30 days, it'll be insta-moderated. I promise to activate it as soon as possible.</li>
<li>If you post a comment to an entry older than 7 days, you'll have to solve a captcha, just to be sure. If you cannot solve it, feel free to contact me via email.</li>
<li>If you post a comment with more than 3 links, I'll have to approve it first. If you post more than 20 links, it'll be rejected.</li>
<li>A word filter is active as well, though I think all these measures stop the spam before it even gets here.</li>
<li>If all this fails, I'm sure the SPAM will be detected by <a href="http://akismet.com">Akismet</a></li>
</ul>
<p>While I know that some restrictions may hurt you, please believe me that the restrictions are in place to both increase the overall quality of content here and to make my life a bit easier.</p>
<p>Serendipity really is a nice blogging engine. Go ahead and try it!</p>
New face, new engine, new everything2006-06-14T00:00:00+00:00http://pilif.github.com/2006/06/new-face-new-engine-new-everything<p>Management Summary of this longer entry: 1) Comments are back, 2) I'm using Serendipity instead of Movable Type and 3) This layout - though premade - is going to stay.</p>
<p>But now my reasoning:</p>
<p>As I've stated <a href="/archives/284-Comments-disabled.html">earlier today</a>, I had enough comment spam arriving on gnegg.ch. Not only the blog was filled up with junk, but also my mailbox was hit (MT was sending mails for every comment).</p>
<p>To underline how BAD it was, notice this: During last weekend I was off the internet most of the time. In the two nights (Friday to Saturday and Saturday to Sunday), gnegg.ch was hit by 683 SPAM comments, of which MT only classified 4 as spam.</p>
<p>For each of these 683 comments, I got an email message. Which was especially bad as I was checking mail from my mobile phone (that was the most expensive mail checking process in my life I guess - imagine the sheer size of only the headers)</p>
<p>Even worse was the interface for comment removal: The biggest page size I could select was 50 comments, so I had to delete the comments in groups of 50, each time waiting for the affected pages to be rebuilt over and over again.</p>
<p>There is a multiselect option in MT, but it always affects all comments per page, so choosing to display all comments and then using the "Select All" feature would not have helped as it would have deleted the legit comments too.</p>
<p>This just so you understand why I had to do something. I did not want to have another "fun" comment removal session next sunday evening (most of the comments get posted on the weekends - probably in the hope they will remain unnoticed for a while longer - which they did).</p>
<p>At first, I just wanted to turn off the comments and keep it at that.</p>
<p>But what is a blog without comments? Yeah. right... not much.</p>
<p>So I went ahead and installed <a href="http://www.s9y.org">Serendipity</a> because I knew that it had some really nice SPAM-countermeasures included.</p>
<p>As I currently don't have the time needed to port the old MT template over, I selected a template that comes with s9y and I have to say: It looks great, IMHO. I think I'll keep it at this.</p>
<p>I'm no web designer and even if I could convince Richard to create a new layout for me (thinking that the old one is just a bit too dark and grey for my current mood), it would take AGES for me to create a Smarty version out of it, so I decided to go with premade templates.</p>
<p>And this one (Perun Blue) is really nice - IMHO even better than the old, custom made, one. So, I hope, you can live with this.</p>
<p>While the import process worked flawlessly, many links inside the site are broken and I'm currently in the process of fixing them.</p>
Comments disabled2006-06-14T00:00:00+00:00http://pilif.github.com/2006/06/comments-disabled<p>Ok. this is it. I have enough.</p>
<p>While I value the legit comments of my visitors, I'm deleting over 200 spam comments per day lately. This must stop. NOW.</p>
<p>Unfortunately, no technical measure currently available really prevents comment spam, at least not without serious disadvantages.</p>
<p>Let me go into this:</p>
<ul>
<li>Use a captcha: Captchas can be broken and in fact ARE broken all over the place. No point in placing another hurdle that's easily overcome by machines, but can't be overcome at all by some humans. True: I could decrease the readability to make OCRing the thing harder, but what's the point? Once the captcha is unreadable, it can't be broken by machines, but it can't be solved by humans either.</li>
<li>Use a service like TypeKey to authenticate users and let only authenticated users post: Easy to implement, but unfortunately, no one seems to trust MT (neither do I - fully), so no one is using the service. Unfortunately, it doesn't solve the problem either as machines are well able to create TypeKey accounts (I doubt their captcha is so much better - and even if it currently is: the above problems apply to them as well).</li>
<li>Create your own authentication service: While this may be more liked than TypeKey, it means a lot of work to integrate it into MT and has the same drawbacks (machines can create accounts unless you use a captcha, where my first point applies again).</li>
<li>Use a SpamAssassin-like system to get rid of the SPAM. MT has such a system, but it doesn't really work. Neither do the blacklists seem to do their job.</li>
</ul>
<p>So I come to the only tool that really works to take care of all comment spam: Turn off comments. No discriminating against visually impaired people, no possibility for even the smartest algorithm to sneak a comment into the system. Problem solved.</p>
<p>Personally, I think MT is lacking in terms of counter-spam measures and I will once more have a look at <a href="http://www.s9y.org">Serendipity</a> which provides more fine-grained control. Until then, I'm sorry, but I have to disable comments on this site.</p>
<p>Spammers: 1, Freedom: 0</p>
How to kill your Mac? Use Acrobat!2006-05-19T00:00:00+00:00http://pilif.github.com/2006/05/howto-kill-your-mac-use-acrobat<p>Today I thought I'd lost my Mac.</p>
<p>I was creating a PDF from Word using Adobe Acrobat. While Mac OS provides a built-in method to generate PDFs, I was under the impression that using the distiller word macro will generate better PDFs - mostly because the macro knows meta information about the text it converts and thus can create links and other PDF hints.</p>
<p>The system completely locked up.</p>
<p>I was able to still move the mouse, but the rest was dead.</p>
<p>I did the 4-second power button trick to restart the machine, hoping that this was just the usual flakiness of Distiller on Intel Macs.</p>
<p>Unfortunately, the system just froze again after a short while (seconds).</p>
<p>And on every restart, the freeze happened earlier, until I could not even log in in the end.</p>
<p>Using bootcamp (which I'm only using for playing World of Warcraft currently) I determined that it was no hardware problem (Windows had no lockups) which made me considerably happier.</p>
<p>I also learned that you have to press and hold Shift on boot until the progress indicator is displayed in order to boot into some kind of Safe Mode.</p>
<p>Using that, I was able to boot, log in and even work in OSX without the crash. I began stripping my system.</p>
<p>I've uninstalled the newly released RC of Parallels, removed all startup items (/Library/StartupItems, /Users/pilif/Library/StartupItems) - thinking that one of those things may have caused the problem.</p>
<p>A reboot showed no improvement unfortunately. The system still locked up. The error log did not show anything of interest.</p>
<p>Googling about "lockup macbook pro" after rebooting yet again in safe mode (which takes AGES), showed me lots of people having this problem after the recent security update released by Apple.</p>
<p>Usually the hint was to reinstall the OS (like in the old days of Windows...) and to skip installing that security update. Unfortunately that was no option at all as I did not have the CDs at hand and I'm completely against not installing a security-relevant patch.</p>
<p>Remembering the crash first happening when using Acrobat, I opened the printer setup utility and got the message that no printer was installed.</p>
<p>This made me notice that Safe Mode also seemed to disable printers, giving me some hope that maybe Acrobat was the problem: More stuff to disable is always a good thing when debugging something like this.</p>
<p>The printer setup utility has a nice feature. It's called "Reset printing system..." and it's placed in the application menu.</p>
<p>The feature works exactly as advertised, thus removing that Acrobat printer (and all other printers *sigh*).</p>
<p>I rebooted once more and... it worked.</p>
<p>That recent security update did something to Rosetta (I'm guessing, but the same lockups seem to happen with Adobe Version Cue and they don't happen on PPC systems), causing these lockups. And the printing system probably reinitializes the printer drivers after the update.</p>
<p>And as I did not print after installing the update until now, the problem was triggered only just now.</p>
<p>While I'm happy that everything is working again, I'm certainly pissed right now.</p>
<p>A security update should never render your system unusable. I don't mind who screwed up here (Apple, Adobe, Rosetta), but something like this must not happen.</p>
<p>The only good thing is that I'm quite experienced with situations like this. But still: This is my first mac. It was sheer luck that I found out how to fix this.</p>
<p>If such situations can't be averted then please, please provide meaningful error reporting or just logging. Were there lines like 'initializing printers' in some logfile, I'd have known where the problem was.
</p>
<p>But no. It just crashes with no way whatsoever to just kill the hanging process. Why does failing to load a printer driver crash my whole system? Granted. The problem is probably in Rosetta, but something like this still MUST NOT happen.</p>
<p>The emulation layer stops responding? Easy: Kill and restart it.</p>
<p>This is majorly unpleasant. And it took away nearly two hours of my time which I'd have preferred to use for more useful stuff.</p>
Mac Mail: Can software perform worse?2006-05-16T00:00:00+00:00http://pilif.github.com/2006/05/mac-mail-can-software-perform-worse<p>
I'm a fan of Mac Mail (Mail.app). It looks nice, it renders fonts very nicely, it creates mails conforming to the relevant RFCs and it basically supports most of the <a href="/archives/34-Mail-for-Windows-as-I-like-it.html">requirements</a> I've posted back in 2003.
</p>
<p>
There are some drawbacks though. First one is no proper IMAP search support. This is not as bad as it sounds as the local full text index works very nicely (faster than our exchange server) and it's even integrated into Spotlight.</p>
<p>Then, the threading support sucks as it's not multi-level. This does not matter as much as back in 2003 though, as my daily dose of technology updates now comes from RSS and blogs. Actually, I'm currently not subscribed to any mailing list.</p>
<p>Everything else on that list is supported and the beautiful UI and font-rendering convince me to live with those two drawbacks and not use Mozilla Thunderbird for example which supports the whole set of features but looks foreign to OS X.</p>
<p>BUT. There's a big BUT</p>
<p>Performance is awful.</p>
<p>Even though I'm using IMAP, Mail.app insists on downloading all messages - probably to index them. I know that you can turn this behavior off, but then it doesn't download any message at all, rendering the program useless in offline situations. In Thunderbird you can make the program just download the messages as you read them and then use the contents of the cache for later offline display.</p>
<p>Then again: I have no problem with downloading and it even displays new mail while still downloading in the background. It does a better job at not blocking the UI than Thunderbird too.</p>
<p>What sucks is the performance while doing its thing.</p>
<p>I have around 3GB of mails on my IMAP server and before I could use Mail.app for the first time, the program downloaded the whole thing, utilizing 100% of one CPU core (it's not SMP capable ;-) ), forcing my MacBook Pro to turn on the fans - it was louder than after playing 4 hours of World of Warcraft in Windows (via Boot Camp - it's around twice as fast as the Mac version).</p>
<p>It also took lots and lots of RAM making working with the machine a non-fun experience.</p>
<p>Later I decided to throw away two years worth of Cron-Emails containing a basic "Job done" which were filtered away on the server so I never noticed them. Deleting those ~22000 emails took two hours - again with 100% CPU usage on one core.</p>
<p>Even worse: Mail.app does not send an IMAP move command to move the messages to the trash (or just mark them as deleted). It actually manually copies the messages over! Message by Message. From the local copy to the server. Then it deletes them. And then begins the awful "Writing Changes to disk", completely killing the performance of my MacBook.</p>
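<p>For contrast, a proper server-side delete in IMAP is three commands for the whole batch, with no download or re-upload involved. A sketch of the raw command sequence per RFC 3501 (the helper function is my own; with a live connection you'd issue these through imaplib's copy/store/expunge methods):</p>

```python
# Build the raw IMAP commands that trash a whole message range on the
# server side: one COPY for the batch, one STORE to flag the originals
# as deleted, one EXPUNGE to purge them - instead of copying the
# messages up from a local cache one by one.
def server_side_trash_commands(msg_set, trash="Trash"):
    return [
        "COPY %s %s" % (msg_set, trash),
        "STORE %s +FLAGS (\\Deleted)" % msg_set,
        "EXPUNGE",
    ]
```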
<p>Also annoying: Mail.app does not support IMAP folder subscriptions. It insists on fetching all folders - if you have a full calendar on your exchange server, it's going to fetch all those (useless for Mail.app) entries as well - and we know now how well Mail.app works with large folders.</p>
<p>My conclusion is: Mail.app is perfect for reading and writing your daily mail. It fails miserably at all mail administration jobs.</p>
<p>I'm going to stick with it nonetheless as reading my daily mail is what I'm doing most of the time. It's just a good thing that Thunderbird exists and I'm going to use that for the next round of cleanup (hoping that Mail.app picks up the changes and does not take too long to mirror them to its local store).</p>
RAM for my MacBook Pro2006-05-08T00:00:00+00:00http://pilif.github.com/2006/05/ram-for-my-macbook-pro<p>Today, another GB of RAM for my MacBook Pro arrived (I bought it at <a href="http://www.heinigerag.ch">Heiniger AG</a>. It's no original Apple RAM, but it's about a third as expensive as the original).</p>
<p>Installation was <a href="http://docs.info.apple.com/article.html?artnum=303491">very easy to do</a> (I link the instructions for the 17" model because that's what I found on the web - it works the same for the 15" model).</p>
<p>And I tell you: This is the best thing to ever happen to a computer of mine performance-wise.</p>
<p>While the system feels a lot snappier in "default mode", it shines even more when I'm running <a href="http://www.parallels.com">Parallels Workstation</a> in the background (at full screen - using <a href="http://virtuedesktops.info/">VirtueDesktop</a>).</p>
<p>I'm inclined to say that the parallels-thing just got usable with this upgrade.</p>
<p>Funny thing: In Windows XP, you won't notice much of a speed increase in normal operation when you upgrade your RAM from 1GB to 2GB. I guess the memory manager of OSX is just more eager to swap out stuff when RAM gets scarce. And as we all know: Swapping kills a system.</p>
More Asterisk stuff2006-05-02T00:00:00+00:00http://pilif.github.com/2006/05/more-asterisk-stuff<p>I thought I'd give a little update on what's going on in my Asterisk installation as some of the stuff might be useful for you:</p>
<p><a name="speeddial"></a></p>
<h3>Speed Dial</h3>
<p>If you have <a href="http://www.snom.com">Snom Phones</a> and want to program the function keys to dial a certain number, be sure to select "Speed Dial" and not "Destination" when entering the number.</p>
<p>Destination was used in earlier firmwares, but it is now used not only to make the phone dial that number, but also to subscribe to the line so the LED lights up when the line is in use.</p>
<p>This obviously makes no sense at all with external numbers and requires some configuration for internal ones (see below). The additional benefit is that buttons with "Speed Dial" assigned don't turn on the LED.</p>
<p><a name="dialbyclick"></a></p>
<h3>Dial by click</h3>
<p>You can dial a number from the Mac OS X address book as well. Asterisk will make your phone ring and redirect the call once you pick up (just like <a href="http://sourceforge.net/projects/asttapi/">AstTapi</a> on Windows). I had the best experience with <a href="http://mezzo.net/asterisk/app_notify.html">app_notify</a>. I don't quite like the way it notifies clients of incoming calls (hard-coding IP addresses of clients is NOT how I want my network to operate), but maybe there will be a better solution later on. Currently, I'm not using this feature.</p>
<p>Dialing works though.</p>
<p>You don't have to modify manager.conf, btw, if you already have the entry for the AstTapi-Solution. app_notify will ask for username (manager context) and password when it launches the first time.</p>
<h3>Subscription</h3>
<p>As noted above, your Snom Phone can be advised to monitor a line. The corresponding LED will blink (asterisk 1.2+) when it's ringing and light up when the line is busy.</p>
<p>Snom-wise, you'll have to configure a function key to a "Destination" and enter the extension you like to monitor.</p>
<p>Asterisk-wise you have to make various changes:</p>
<p><b>sip.conf</b><br />
<ul>
<li>Add <tt>subscribecontext=[context]</tt>, where context is the context in extensions.conf where the corresponding SNOM phone is configured in. I've put this to the [general]-Section because all phones are sharing the same context (<tt>internal</tt>).</li>
<li>Add <tt>notifyringing=yes</tt> if you have Asterisk >= 1.2 and want to make the LEDs blink when the line is ringing.</li>
</ul>
</p>
<p>
<b>extensions.conf</b><br />
This is a bit hacky: In the sip-context add a <tt>hint</tt> extension for every line you want to allow to be monitored. Unfortunately, you can't use macros or variables here, so it's messy.</p>
<p>On my configuration it's:</p>
<p><code>[internal]
exten => 61,hint,SIP/61
exten => 62,hint,SIP/62
exten => 63,hint,SIP/63
exten => 64,hint,SIP/64
exten => _6[1-9],1,Dial(SIP/${EXTEN},,tWw)
</code></p>
<p>While I would have preferred</p>
<p><code>[internal]
exten => _6[1-9],hint,SIP/${EXTEN}
exten => _6[1-9],1,Dial(SIP/${EXTEN},,tWw)
</code></p>
<p>This may have been fixed in 1.2.2, but I'm not sure just yet.</p>
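<p>Until patterns work for hint lines, one way to keep the messiness manageable is to generate the repetitive lines with a small script and paste them into extensions.conf - a throwaway helper of my own, nothing Asterisk ships with:</p>

```python
# Emit one "exten => N,hint,SIP/N" line per extension in a range,
# since the dialplan won't accept a pattern or variable in hint lines.
def hint_lines(first, last, tech="SIP"):
    return "\n".join(
        "exten => %d,hint,%s/%d" % (ext, tech, ext)
        for ext in range(first, last + 1)
    )
```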
<p>You may have to reboot your phone after making the configuration change there. To check the registration in Asterisk, use <tt>SIP show subscriptions</tt>.</p>
<p>You should get something like this:</p>
<p><code>asterisk*CLI> SIP show subscriptions
Peer User Call ID Extension Last state Type
192.168.2.152 62 3c26700b57e 61 Idle dialog-info+xml
1 active SIP subscription
</code></p>
<p>This is not quite tested as of yet because the guy at extension 61 is currently in his office and I don't want to bother him ;-)</p>
<p>Update while editing/correcting this text: It works. The guy has left and I checked it.</p>
Pragmatic AJAX2006-04-26T00:00:00+00:00http://pilif.github.com/2006/04/pragmatic-ajax<div class="floatimgauto"><img alt="img_hc_ajaxfreshflavour.jpg" src="http://www.gnegg.ch/archives/img_hc_ajaxfreshflavour.jpg" width="172" height="165" /></div>
<p>Today, my hardcopy of <a href="http://pragmaticprogrammer.com/titles/ajax/index.html">Pragmatic Ajax</a> arrived. I bought the bundle edition consisting of the PDF beta version of the book and the printed edition.</p>
<p>It's a wonderful book. I have been doing AJAX since a bit earlier than the launch of Google Maps, but I still thought that I need some brushing up especially in the matters of usable frameworks (I've been doing everything from scratch) and useful practices.</p>
<p>Considering that I liked every book of the <a href="http://www.pragprog.com">Pragmatic Programmers</a> I've read so far (we share the same mindset I guess), the decision to buy that book was a non-issue.</p>
<p>I've been reading the PDF-edition so far and completed about the first half of it, but I'm very happy now to see the dead-tree edition as reading that is much easier in terms of handling (no cheap printouts and no computer needed to read it).</p>
<p>The first half was very interesting, but I think I'll better write a complete review once I'm through with it.</p>
<p>And in case you wonder what that picture above means... Well... Here in Switzerland, we have a brand of cleaner fluid which is called <a href="http://www.colgate.ch/de/products/householdcare/allpurpose.shtml#">AJAX</a>.</p>
<p>And we have had that for <em>years</em>, so they are not just riding some strange fashion mood like <a href="http://www.flickr.com/photos/ristau/111447814/">these guys</a>. I wonder if the inventor of the AJAX acronym knew :-)</p>
GPRS, Bluetooth and Mac OS X2006-04-20T00:00:00+00:00http://pilif.github.com/2006/04/gprs-bluetooth-and-mac-os-x<p>GPRS in your mobile phone and bluetooth are a real dream-team: Phones like that are small, fit in your pocket and still, they allow you to connect to the internet from your laptop - and even at reasonable speeds.</p>
<p>In contrast to the widely spread PC-Cards or wired connections, the handling rocks too: Just keep your phone in your pocket, dial from the laptop and use the internet. No fiddling with cards (or drivers. or strange software. your OS comes with all it needs to get the connection going), no problems with forgotten cables.</p>
<p>Bluetooth brought simplicity to the connection with your phone. Earlier we had infrared or cables, but nothing works as reproducibly as bluetooth does - at least in Windows.</p>
<p>As you know, I switched over to using Mac OS for my main office workstation. And today I was in the situation of needing internet on the train and at a customer's site.</p>
<p>Naturally, I wanted to use my Mac to connect via Bluetooth to my K750i to use its GPRS capability.</p>
<p>While the bluetooth stack provides a very nice assistant to add a new bluetooth device and even allows you to create the GPRS connection, unfortunately, it does not work in the end.</p>
<p>Apple does provide some very specialized modem scripts, which is both good and bad. Good because if there's a script for your modem/phone, it'll work perfectly. Bad because if there is no script, it won't work at all.</p>
<p>The assistant provided a list of Ericsson phone scripts and suggested using "Ericsson Infrared". Naturally I first tried connecting with that, dialing *99***3# as I would in Windows (the GPRS data connection being the third configured connection on the phone).</p>
<p>The phone did not even begin the dialing process.</p>
<p>I rebooted the Mac, launched Windows, created the RAS connection there and connected via GPRS to google for a solution (oh the irony...).</p>
<p>And I quickly found one: The <a href="http://www.taniwha.org.uk/">modem scripts</a> by Kia ora Ross</p>
<p>One thing to note though: You must use the script using a CID which is <em>not</em> configured on your phone (which is different from windows) and use the name of the APN as phone number (which also is different). With that in mind, connecting is easy.</p>
<p>What remains to be told: Apple, which claims to have the superior OS usability-wise, fails on this not-so-advanced task. Not only that. It fails in multiple ways:</p>
<ul>
<li>It does not provide a generic modem script (like Windows does)</li>
<li>It suggests a completely non-working solution (instead of telling you "sorry, I have no matching script")</li>
<li>Once you get the right scripts, you have to click the "Show all" checkbox to actually be able to select one - despite all scripts listed in the default configuration being completely unusable.</li>
</ul>
<p>So I'm coming back to what I was saying all along: OS X or Windows? Doesn't matter. Both have advantages. Both have disadvantages. Neither is clearly more usable than the other. Just go with what you feel more comfortable with and live with the problems.</p>
<p>Oh and: Setting up a GPRS connection via a Bluetooth-connected phone arguably isn't a task the people OSX was designed for would attempt at all. So it's probably OK if it's a bit harder. But still... I'm not very happy about this.</p>
<p>PS: This is written and posted during a train ride. Connected via GPRS. Written on my MacBook Pro.</p>
Tweaking Mac OS X for the Linux/Windows user2006-04-11T00:00:00+00:00http://pilif.github.com/2006/04/tweaking-mac-os-x-for-the-linuxwindows-user<p>As you no doubt know by now, I'm gradually switching over from using Windows to using Mac OS X.</p>
<p>I have quite some experience with using Unix and I'd love to have the power of the command-line combined with the simplicity of a GUI here and then.</p>
<p>OSX provides that advantage to me: For one, I'm getting a polished and time-tested UI and the ability to run most applications I need (this is where Linux still has some problems), and on the other hand, I'm getting a nice, well-known (to me) command-line environment.</p>
<p>Of course, in my process of switching over, I made some tweaks to the system, I'm sure some of my readers may find useful:</p>
<ul>
<li>Use a useful default shell: I very much prefer <a href="http://www.zsh.org">ZSH</a>, so <tt>chsh -s /bin/zsh</tt> was the first thing I did.</li>
<li>Use a useful configuration for said shell: I'm using <a href="http://www.lipfi.ch/zshrc">this .zshrc</a>. It configures some options, enables a nice prompt, fixes the delete-key, sets the path and does other small cosmetical things.</li>
<li>Install the developer tools. They are on your install DVD(s).</li>
<li>Go and install <a href="http://fink.sf.net">Fink</a>. No UNIX without some GNU utilities and other small tools. The current source-distribution works perfectly well with the intel macs.</li>
<li>Fix the <a href="http://macromates.com/blog/archives/2005/07/05/key-bindings-for-switchers/">Home- and End-Keys</a>.</li>
<li>Tweak the terminal: Open the Window-Settings, chose "Display", use a reasonable cursor (underline) and set your terminal to Latin-1 (I had numerous problems using UTF with ZSH). If you want, enable Anti-Aliasing. Then chose "Color", use the "White on Black" preselection and play with the transparency slider. Use the settings as default.</li>
<li>Install <a href="http://www.videolan.org">VLC</a> - your solution for every thinkable multimedia need. Watch out to get the Intel nightly if you have an Intel Mac.</li>
<li>I never use sleep-mode because it feels "wrong" not to shut the machine down completely. That's why I entered <tt>sudo pmset -a hibernatemode 1</tt> to make the "Sleep" option in the Apple-Menu work like Hibernate in Windows.</li>
</ul>
<p>If you are a web developer on an intel mac and consider using PostgreSQL, don't use the <a href="http://www.entropy.ch/software/macosx/postgresql/">premade builds</a> on entropy.ch because they are still built for PPC. You may use the <a href="http://www2.entropy.ch/download/pgsql-startupitem-1.2.pkg.tar.gz">StartupItem</a> which is provided there though. If you do, call PostgreSQL's <tt>configure</tt> like this to get the paths right:</p>
<div><code>./configure --prefix=/usr/local/pgsql --bindir=/usr/local/bin --with-openssl \
--with-pam --with-perl --with-readline --with-libs=/sw/lib\
--with-includes=/sw/include
</code></div>
<p>This is after you've installed readline using fink. OS X itself does not come with readline and <tt>psql</tt> without readline sucks.</p>
<p>After installing PostgreSQL with <tt>make install</tt>, the paths are set correctly for the premade StartupItem, which makes PostgreSQL start when you turn on your machine.</p>
<p>Furthermore, I created my own customized PHP-installation (5.1.2) using the following configure line:</p>
<div>
<code>./configure --enable-cli --prefix=/usr/local --with-pear --with-libxml-dir=/sw \
--with-apxs=/usr/sbin/apxs --enable-soap --with-pgsql=/usr/local/pgsql \
--with-readline=/sw --with-pdo-pgsql=/usr/local/pgsql --enable-pcntl \
--with-curl=/usr --enable-ftp --with-gd --with-png-dir=/sw --with-jpeg-dir=/sw \
--with-zlib-dir=/usr --with-freetype-dir=/usr/X11R6 --with-bz2
</code></div>
<p>Use fink to install <tt>libxml2</tt>, <tt>libjpeg</tt> and <tt>libpng</tt></p>
<p>Using the hints provided here, you'll get a configuration which makes working with the machine <em>much</em> easier for a UNIX/Windows guy. I hope it's of some use for you.</p>
Praise to VLC2006-04-10T00:00:00+00:00http://pilif.github.com/2006/04/praise-to-vlc<p>Now that I can be assured to have a <a href="/archives/275-XP-on-the-MacBook-Pro.html">windows system ready at hand</a> should I need one, I'm more and more switching over to using Mac OS X for day-to-day productivity work - at least if it's not about doing delphi work.</p>
<p>Now this sounds crazy, but in the end it all boils down to (IMHO) better font rendering and an alpha-blended terminal.</p>
<p>Functionality-wise and productivity-wise, MacOS and Windows are on par. Both systems have little things that suck and both have advantages in other little things.</p>
<p>In the end, both are OSes.</p>
<p>Today, I was in the position of wanting to listen to the <a href="http://oc.ormgas.com">streaming version</a> of <a href="http://www.ocremix.org/">OCRemixes</a> once again.</p>
<p>They are using an .ogg stream, which I appreciate for two reasons: it provides a better bandwidth-to-quality ratio, and Ogg is a patent-free technology.</p>
<p>Problem: how do you listen to an Ogg stream on OS X?</p>
<p>Apple's arrogance regarding QuickTime is one of those things that bother me about OS X. Apple: there's more to the world of multimedia than just QuickTime and MP3, so make the infrastructure extensible in a way that actually works (hint: DirectShow works quite well, despite being a Microsoft product).</p>
<p>There are some QT/Ogg-plugins available on the net, but none of them (not even one I compiled myself to be 100% sure to have an Intel build) actually worked.</p>
<p>Just when I thought that all was lost, I remembered <a href="http://www.videolan.org">VLC</a>.</p>
<p>My experience with video already showed it to me: VLC just plays everything you can possibly throw at it. And yes: It managed (and still manages) to play the remixes stream.</p>
<p>And the UI is great on OS X (if you don't look at the awful preferences dialog).</p>
<p>VLC IMHO is a really nice example of how a cross-platform UI should be done: it looks like it's perfectly at home on my OS X. And it ALSO looks perfectly at home on Windows XP (a bit minimalistic, but it does its job).</p>
<p>And by feeling at home I don't mean "it looks the same on both platforms". No. It adapts perfectly to the look & feel of the platform it's running on. No common theme, no quasi-OS look. It looks as much like a native Mac OS X application as, say, iTunes or TextMate does.</p>
<p>So: Thanks guys. This is great stuff!</p>
XP on the MacBook Pro2006-04-06T00:00:00+00:00http://pilif.github.com/2006/04/xp-on-the-macbook-pro<p>As I've <a href="/archives/270-Powerbook-runs-XP.html">announced earlier</a>, I've bought myself a MacBook Pro with the intention of running Windows XP on it (see that other post for the reasoning behind that), though I changed my mind considering the multimedia capabilities: <a href="http://www.videolan.org">VLC</a> (the preview for intel macs) plays whatever I throw at it, has a nice GUI and does NOT use 100% CPU time all the time.</p>
<p>One day after I made that blog post, I actually got my machine and immediately used the <a href="http://www.onmac.net">XOM EFI-Hack</a> to actually install XP.</p>
<p>The process went quite smoothly despite it being quite a hack. The installation process actually was one of the fastest I've ever seen so far.</p>
<p>The problem with the XOM solution is the lack of drivers where it hurts the most: power management and graphics.</p>
<p>Having no graphics driver means: No acceleration, no DVI, no 2560 pixels resolution.</p>
<p>Useless for my purpose.</p>
<p>Yesterday, Apple announced <a href="http://www.apple.com/bootcamp">Bootcamp</a>, their solution for installing XP (or any other x86 OS for that matter) on the Intel Macs. Bootcamp requires a firmware update on the Macs which actually does nothing more but adding a real BIOS compatibility layer, allowing to install any non-EFI system.</p>
<p>Bootcamp itself is a graphical partitioning tool with the capability of resizing HFS+ partitions without data loss. And it comes fully packed with drivers for most of the integrated hardware (only the iSight, the harddisk shock protection and the keyboard backlight don't work).</p>
<p>As installing that driver package on a XOM installation sounded risky to me (and does not work, as I learned afterwards), I installed the whole thing from scratch, deleting the former two partitions and letting Bootcamp create new ones.</p>
<p>Installing XP was as fast as ever and installing the drivers was one of the most pleasant experiences: just double-click that large MSI and let it do its work. Reboot. Done.</p>
<p>Here's my desktop in the full 2560x1600 resolution of my 30-inch Cinema Display (warning: the linked full-size picture is <em>large</em>), showing some CPU specs, the resolution control panel applet and the tool for selecting the default OS which was installed by Apple's driver package.</p>
<div align="center">
<a href="/img/desktop.png"><img src="/img/desktop_thumb.png" width="370" height="231" /></a>
</div>
<p>The OS you select there is booted by default, but you can hold the Alt-Key while booting to bring up a boot manager.</p>
<p>I'm very pleased by the speed and low noise of the machine. Now the only thing I'm still wishing for is a docking solution, as I currently have to plug in three (DVI, USB, power) or four (if I want ethernet) cables each day.</p>
<p>Well done, Apple. And: Thanks!</p>
<p><b>Update</b>: This is a screenshot of the same machine running OS X. The installation is quite fresh still, but the most important things are installed already: Textmate, X11 and <a href="http://www.sshkeychain.org/">SSHKeychain</a>.</p>
<div align="center">
<a href="/img/macdesk.png"><img src="/img/macdesk_thumb.png" width="370" height="231" /></a>
</div>
<p><br /></p>
Ruby on Rails2006-04-04T00:00:00+00:00http://pilif.github.com/2006/04/ruby-on-rails<p>Today, our first project done in Ruby on Rails went live.</p>
<p>Christoph has done a wonderful job on it. The only thing I had to do was fix up some CSS buglets in IE and install a deployment environment (development was done using the Rails-integrated WEBrick server).</p>
<p>Personally, I think I'd have preferred using <a href="http://www.lighttpd.net/">LightTPD</a> with FastCGI instead of Apache, but the current setup pretty much prevented me from doing so.</p>
<p>Which is why I've installed mod_fastcgi on apache which was very, very easy on <a href="http://www.gentoo.org">Gentoo</a> (<tt>emerge mod_fastcgi</tt> - as usual).</p>
<p>Once I corrected the interpreter path in <tt>dispatch.fcgi</tt> (which was set to the location of Christoph's development environment), the thing began working quite nicely.</p>
<p>And fast.</p>
<p>Considering the incredible amount of magic rails does behind the scenes, those 73.15 requests per second I got are very, very impressive (<tt>ab -n 100 -c 5</tt>). And actually so much faster than a comparable PHP application running using mod_php on a little faster server (19.36 req/s, same ab call).</p>
<p>The results have to be taken with a grain of salt as it's different machines, different load and a different application.</p>
<p>But it's similar enough to be comparable for me: the PHP application is running on a framework somewhat similar to rails with lesser optimization but also with lesser complexity. Both benchmarks ran against the unauthenticated start page which comes pretty much down to including some files and rendering a template. No relevant database queries.</p>
<p>I wonder how much of this higher speed is caused by FastCGI (a very convincing technology) instead of running the code in the apache server itself and how much is just rails being faster.</p>
<p>I will set up a better-defined test environment to allow an accurate performance comparison: a comparable application in mod_php, php-fastcgi and rails-fastcgi. And if I have time, I'm going to run the two fastcgi tests on LightTPD as well.</p>
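<p>For reference, the requests-per-second figure ab reports is simply the request count divided by wall-clock time under a fixed concurrency level. Here is a minimal sketch of that measurement in Python (the <tt>benchmark</tt> helper and its names are my own illustration, not part of the setup described above):</p>

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def benchmark(url, requests=100, concurrency=5):
    """Fire `requests` GETs with `concurrency` parallel workers,
    roughly what `ab -n 100 -c 5` does, and return requests/second."""
    def fetch(_):
        with urllib.request.urlopen(url) as resp:
            resp.read()

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        # consume the iterator so we wait for every request to finish
        list(pool.map(fetch, range(requests)))
    elapsed = time.perf_counter() - start
    return requests / elapsed
```

<p>Calling <tt>benchmark("http://host/", 100, 5)</tt> then corresponds to the <tt>ab -n 100 -c 5</tt> invocation used for the numbers above.</p>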
<p>Benchmarking is fun. Time-consuming, but fun.</p>
<p>For now, I'm content with the knowledge that an application that took a very small effort to write (even considering that Christoph had to learn the rails environment first) is running fast enough for its intended purpose.</p>
<p>As Christoph said: Rails Rules</p>
<p><em>thanks, guys</em></p>
Jabber: Even file transfers work2006-03-29T00:00:00+00:00http://pilif.github.com/2006/03/jabber-even-file-transfers-work<p>Transferring large amounts of data is a problem to overcome for all the IM networks.</p>
<p>You see: usually you don't transfer files over the central IM server, because that would use the IM provider's bandwidth, which is better spent sending out those small text messages. That's why files are usually transferred in a P2P fashion.</p>
<p>The problem here is that there's usually a NATing router or even a firewall at one or, most of the time, both ends of the communication. This usually prevents a direct connection - at least without some <a href="http://www.secdev.org/conf/skype_BHEU06.handout.pdf">serious trickery</a> (warning: PDF link) going on.</p>
<p>This is why I never expect a file transfer to work - even when using the native IM client.</p>
<p>And today, a guy sent me some image. Via Jabber. Via PyMSN-t.</p>
<p>Just when I wanted to write that it's never going to work, I watched the bytes flow in.</p>
<p>I'm still unable to believe it: Wildfire/PyMSN-t/Psi succeeded in making a direct P2P file transfer happen from a friend on MSN to my PC running a Jabber client.</p>
<div align="center">
<a href="http://www.gnegg.ch/archives/transferworks.png"><img alt="transferworks.png" src="http://www.gnegg.ch/archives/transferworks-thumb.png" border="0" width="370" height="185" /></a>
</div>
<p><br /></p>
pilif's guide to jabber2006-03-28T00:00:00+00:00http://pilif.github.com/2006/03/pilifs-guide-to-jabber<p>I have been talking about <a href="http://www.jabber.org">jabber</a> before on this blog (<a href="/archives/36-Just-like-SMS-only-cheaper.html">here</a> and <a href="http://www.gnegg.ch/archives/150-Gentoo-and-Jabber.html">here</a>). And each time the euphoria was dampened by one or another malfunctioning element. You know: I would never use a third-party service for a fun project if I could be my own provider as well.</p>
<p>And being one's own provider is one of the biggest advantages of using jabber. Over the years (the experiment began in the winter of 2002/2003), I have been running a jabber server now and then.</p>
<p>First it was after reading the <a href="http://www.oreilly.com/catalog/jabber/index.html">jabber book</a> (a very interesting read) which ended with me installing a jabber server with transports for aim (aim-t) and icq (jit? I don't remember). Installing jabber on debian was quite hard because there was no (usable) package (too old - as usual), but once I got it to work, it was fun.</p>
<p>The problems began with the advent of iChat and .Mac: aim-t was not able to detect the presence of .Mac users and thus I was unable to talk with them via the jabber server. Unfortunately, Richard is one of those .Mac guys, so I had to find another solution to talk with him.</p>
<p>For long, the only solution was original AIM itself, but in the fall of 2003 <a href="http://www.trillian.cc">Trillian 2.0</a> was released with AIM/iChat-Support. This was the demise of my jabber-solution.</p>
<p>While I've always liked having the whole client configuration, contact list and whatnot stored on the server, the advantage of actually being able to chat with Richard made me switch to Trillian and thus even <em>pay</em> for an IM solution - regardless of the many free alternatives.</p>
<p>Remember: At the time, Trillian was the only AIM client capable of talking with the .Mac guys. The original AOL client excluded of course, but who wants to be running a ton of IM clients at the same time (most of my buddies were on ICQ which was not compatible with AIM then)? And who wants to cope with advertising all over the place?</p>
<p>After that, I was keeping Trillian, where I used a Jabber-Plugin to still be connected to the Jabber-Server (which was completely pointless as I was and am the only user on that server and no-one I know is using jabber (any more)).</p>
<p>Then the Debian installation went away and Gentoo came. I've written about the pleasant experience with <a href="http://www.gnegg.ch/archives/150-Gentoo-and-Jabber.html">jabber on Gentoo</a>. Still: as I was the only user, that jabber installation didn't live very long either (I never got around to having jabberd start automatically, so after a power outage the service was gone. And I didn't even notice *sigh*)</p>
<p>Only last week, my interest came back when I saw that iChat provides Jabber support. Don't ask me why. I just wanted to check the progress of the various projects once more.</p>
<p>I immediately noticed <a href="http://ejabberd.jabber.ru/">ejabberd</a> which is what's currently powering jabber.org</p>
<p>On their site I read about <a href="http://pyaim-t.blathersource.org/">PyAIM-t</a>. Finally a replacement for that old aim-t without .Mac support. And I checked the readme-file: Yes. PyAIM-t uses the oscar protocol which is what's needed to get the presence info of those .Mac users</p>
<p>Installing ejabberd failed miserably though.</p>
<p>For one, the gentoo ebuilds are outdated(!) and I never managed to install the whole thing in a way that the command-line administration tool was able to access the (working) server. I admit: I've not invested nearly enough time to understand that erlang-thing. But why should I? It's a for-fun-only project after all.</p>
<p>Via the installation instructions of that PyAIM-t transport I found out about <a href="http://www.jivesoftware.org/wildfire/">Wildfire</a>. Wildfire is GPL, but backed by a company with strict commercial interests. A bit like the MySQL thing. For me it did not matter, as I did not want to integrate the thing into a commercial solution. Heck! I did not even want to use the unmodified thing commercially.</p>
<p>Installing wildfire was - even though it required Java - easy to do. Especially as Gentoo provides a current ebuild (hard masked though because Wildfire depends on Java 1.5). Getting the thing to work was a matter of <tt>emerge wildfire</tt>, <tt>/etc/init.d/wildfire start</tt> and <tt>rc-update add wildfire default</tt> as it's the norm with Gentoo.</p>
<p>Then I read the documentation to learn how to add a SSL certificate (signed by our company's CA) which was a bit hairy (note: the web interface does not work. if you use the web interface, you corrupt the certificate store).</p>
<p>Installing the transports (PyAIM-t, PyMSN-t, PyICQ-t) was a matter of untarring the archives, entering the right connection settings I had configured in Wildfire and launching the scripts. Easy enough.</p>
<p>Then I went to select the right client (on windows this time around): I've already known <a href="http://jajc.ksn.ru">jajc</a>, new for me were <a href="http://exodus.jabberstudio.org/">Exodus</a> and <a href="http://psi-im.org/">Psi</a> and <a href="http://www.pandion.be/">Pandion</a>. I could have kept trillian, but the nicest thing about the jabber clients is that they can store their settings on the jabber server. Trillian can't do that. So if I'm working on a new machine, I have to reconfigure Trillian where every pure jabber client will just fetch the settings from the server. Also, I wanted to have an OpenSource solution.</p>
<p>Now, that client thing is a very subjective matter, as functionality-wise all of them are largely identical - at least concerning the Jabber featureset (I'm not counting addons like RSS readers or whatever).</p>
<p>So here's my subjective review:</p>
<p>Jajc is not open source, provides a ton of settings to tweak (too many for my taste) and does not look that attractive (UI-wise).</p>
<p>Exodus seemingly does not provide a way to make the contacts on the list look different depending on which transport they use, and the chat window is very, very limited in featureset and looks. If you dislike good-looking programs with tons of unimportant settings to tweak, go for Exodus (this was not meant as disrespect; I was one of those users myself).</p>
<p>What remains is Pandion and Psi.</p>
<p>What I like about Pandion is the nice contact list display. You know: With avatar display (which works cross-im-network with those python transports!) I also like the nice looking chat window. What I dislike is the limited amount of settings to tweak (hehe... It's hard to make it <em>right</em> for me. Isn't it?).</p>
<p>I like the space-economic, yet still nice looking contact list in PSI. I also like the design of the chat window and the count of settings to tweak.</p>
<p>Personally, I can't decide between Psi and Pandion, so I'm running both of them currently. One day I will sure as hell know which of them I want to use.</p>
<p>So finally I'm up to speed with jabber again: Nice opensource client, working server and - finally - the .Mac AIM-Users on my contact list, while even able to chat with them.</p>
<p>So, you may ask: Why go through all this? Why not just stick with trillian?</p>
<p>Easy!</p>
<ul>
<li>A pure Open Source solution. No strange subscription model</li>
<li>As settings are stored on the server, equal configuration wherever I am.</li>
<li>Jabber has inherent support for multiple connections with the same account.</li>
<li>Jabber works on many mobile phones. That way I can IM with my mobile phone while not being locked into a specific service</li>
<li>It was fun to set up!</li>
</ul>
<p><em>*happy*</em></p>
A praise to VMWare Server2006-03-17T00:00:00+00:00http://pilif.github.com/2006/03/a-praise-to-vmware-server<div align="center">
<a href="http://www.gnegg.ch/archives/putty.png"><img alt="putty.png" src="http://www.gnegg.ch/archives/putty-thumb.png" width="370" height="231" /></a>
</div>
<p>This is putty, showing the output of <tt>top</tt> on one of our servers. You may see that there are three processes running which are obviously <a href="http://www.vmware.com">VMWare</a> related.</p>
<p>What's running there is their new <a href="http://www.vmware.com/products/server/">VMWare Server</a>. Here's a screenshot of the web-interface which gives an overview over all running virtual machines and allowing to attach a remote console to anyone of them:</p>
<div align="center">
<a href="http://www.gnegg.ch/archives/web.png"><img alt="web.png" src="http://www.gnegg.ch/archives/web-thumb.png" width="370" height="344" /></a>
</div>
<p>As you can see, that server (which is not a very top-notch one) has more than enough capacity to do the work for three servers: A gentoo test machine and a Windows 2003 Server machine doing some reporting work.</p>
<p>Even under high load on the host machine or the two virtual machines, the whole system remains stable and responsive. And it takes so much work to even drive the VMs to high load that this configuration could be used in production right now.</p>
<p>Well... what's so great about this, you might ask.</p>
<p>Running production servers in virtual machines has some very nice advantages:</p>
<ul>
<li>It's hardware independent. You need more processing power? More RAM? Just copy the machine to a new host. No downtime, no reinstallation.</li>
<li>Need to move your servers to a new location? Easy. Just move one or two machines instead of five or more.</li>
<li>It's much easier to administer. Kernel update with the system not booting any more? Typed "shutdown -h" instead of "shutdown -r" (both happened to me)? Well... just attach the remote console. No more visits to the housing center.</li>
<li>Cost advantage. The host-server you see is not one of the largest ones ever. Still it's able to handle real-world-traffic for three servers and we still have reserve for at least two more virtual machines. Why buy expensive hardware?</li>
<li>Set up new machines in no time: Just copy over the template VM-folder and you're done.</li>
</ul>
<p>And in case you wonder about the performance? Well, the VMs don't feel the slightest bit slower than the host (I've not benchmarked anything yet, though).</p>
<p>We're currently testing this to put a configuration like this into real production use, but what I've seen so far looks very, very promising.</p>
<p>Even though I don't think we're going to need support for this (it's really straightforward and stable), I'm more than willing to pay for a fine product like this one (the basic product will be free, while you pay for support).</p>
<p>Now, please add a native 64bit edition ;-)</p>
Powerbook runs XP2006-03-16T00:00:00+00:00http://pilif.github.com/2006/03/powerbook-runs-xp<p>It's done. The <a href="http://onmac.net/">contest</a> has been won.</p>
<p>Windows XP is installable on a MacBook Pro and it even boots from there after the installation.</p>
<p>This solves a big problem I'm having: In the office, I'm using a 30" cinema display connected to a Windows XP box which I had to custom-build because no out-of-the box systems have graphics cards with dual-link DVI ports which is needed for all resolutions bigger than 1920x1080 (technically, the limit is even 1600x1200, but 1920 still works somehow).</p>
<p>Now, custom-built boxes are nice, but they have two flaws. The first is the noise. Even though I got it to run very quietly despite the kick-ass graphics card I had to put in, it's louder than your average laptop (it was even louder before I unplugged the chipset fan; as it's been working stably for more than a year now, I guess it doesn't matter). The second problem is the problem of data redundancy:</p>
<p>If you prefer and have the ability, like I do, to work both at home and in the office, you depend on having current data at both places. Version control systems help a lot here, but they don't solve all the problems (unfinished revisions which I don't like to commit, and binary files). In the end, the only way to have your data where you need it is to maintain it in only one place at a time.</p>
<p>This is <b>the</b> main advantage I'm seeing in laptops (beside the quietness).</p>
<p>Looking at the current laptop PC market, it's even worse in respect of dual-link DVI outputs: usually, you don't even get single-link DVI. And custom-building one is unfortunately not an option. There are some barebones, but what comes out of such an operation is a loud, badly manufactured "thing" with short battery life.</p>
<p>So, hardware-wise, powerbooks were always perfect because all of them have my direly needed dual-link DVI port.</p>
<p>The problem was the software. I depend on <a href="http://www.borland.com/delphi">Delphi</a> for my daily work. Even if days can pass without me actually using it, it still happens that I need it. And when I do, it must be quick.</p>
<p>The other thing is multimedia. No matter what you are now going to tell me: nothing on the Mac matches the perfect architecture of DirectShow, which allows media players and codecs to be developed independently. <a href="http://www.corecoded.com">Core Media Player</a> with the <a href="http://packs.matroska.org/">right codec pack</a> is unbeatable in performance, usability (at least for me) and versatility. Sorry, QuickTime (only QuickTime and maybe DivX). Sorry, MPlayer (always uses 100% CPU, awkward GUI). You just don't beat that.</p>
<p>Also multimedia related is my passion for speedruns and <a href="http://bisqwit.iki.fi/nesvideos/">superplay movies</a> in particular. For the latter ones, you need the emulator and the original ROM and especially the emulators (some of which are patched for the movies to work) don't run on the Mac Platform and if they do, they only do with some limitations (like pausing the emulation stopping the playback of the movie in Snes9x).</p>
<p>If I want a single computer for both home and the office, it must do both: provide an environment to work with and an environment to play with.</p>
<p>OS X allows me to do <em>some</em> development (<a href="http://macromates.com/">TextMate</a> comes to mind), but not all. It allows me to do some multimedia (XviD works sometimes), but not all. Thus, OS X is currently not a solution for me. At least not the only one (I'd love to work in TextMate for PHP and Ruby and eclipse for Java, but I can't do Delphi or Windows CE).</p>
<p>So what I <em>need</em> is Windows (where the software does everything I need) on a Macintosh Laptop (where the hardware does everything I need).</p>
<p>Up until today, this was not possible.</p>
<p>Now it is. This wondrous hack (which is not completely disclosed yet, but I have a very good idea how it works) solves my problem by allowing me to combine the optimal software (for me; I know lots of people who can be perfectly happy with OS X) with the optimal hardware (for me).</p>
<p>Needless to say that I've ordered a MacBook Pro at our hardware distributor. They even had 13 on stock, so I'll be getting mine tomorrow.</p>
<p>I hope the hack gets disclosed shortly, so I can do that nice dual-boot configuration :-)</p>
<p>Or maybe Virtual PC just works good enough for delphi and the speedruns...</p>
<p>
<b>Update</b>: A howto with the needed tools is now <a href="http://www.condoski.com/download/">ready to be downloaded</a></p>
Sure. Just dump your trash right here!2006-03-09T00:00:00+00:00http://pilif.github.com/2006/03/sure-just-dump-your-trash-right-here<p>Boy was I pissed when I read my mail today:</p>
<div align="center">
<a href="http://www.gnegg.ch/archives/trash.png"><img alt="Spam in Inbox" src="http://www.gnegg.ch/archives/trash-thumb.png" width="370" height="175" /></a>
</div>
<p>Dear spammer, what do you expect to get out of this? All links in your post will be masked, so no PageRank for you. And I will almost certainly not overlook something like this (400 comments in one night), so no chance of it persisting either.</p>
<p>I'm sick of cleaning up after you guys. Dump your trash somewhere else. /dev/null sounds like a nice alternative.</p>
<p>Oh, and MT: Why did your Spam filter not catch this?</p>
PHP Stream Filters2006-02-09T00:00:00+00:00http://pilif.github.com/2006/02/php-stream-filters<p>You know what I want? I want to append one of those nice and shiny PHP stream filters to the output stream.</p>
<p>I have this nice windows application that receives a lot of XML data which can be compressed with a very high compression factor. And as the windows application is for people with very limited bandwidth, this seems to be the perfect thing to do.</p>
<p>You know, I CAN compress all my output already. By doing something like this:</p>
<pre class="code">
<?php
ob_start();
echo "stuff";
$c = ob_get_clean();
echo bzcompress($c);
?>
</pre>
<p>The problem with this approach is that the data is only sent to the client once it's assembled completely. bzip2 on the other hand is a stream compressor that is very well able to compress a stream of data and send it out as soon as a chunk is ready.</p>
<p>The windows client on the receiving end is certainly capable of doing that. As soon as bytes come in, it decompresses them chunk-wise and feeds them to an Expat-based parser which handles the extracted data. Now I want this to happen on the sending side as well.</p>
<p>The following code does work sometimes:</p>
<pre class="code">
<?php
$fh = fopen('php://stdout', 'w');
// filter parameters per the PHP manual: 'blocks' is the bzip2 block
// size (1-9, in 100KB units), 'work' the work factor (0-250)
$param = array('blocks' => 9, 'work' => 0);
stream_filter_append($fh, 'bzip2.compress', STREAM_FILTER_WRITE, $param);
fwrite($fh, "Stuff");
fclose($fh);
?>
</pre>
<p>But sometimes it doesn't, and produces an incomplete bzip2 stream.</p>
<p>I have a certain idea of why this is happening (no flushing of data to the filter on shutdown), but I can't prevent it. Sometimes the data is not written out, which makes this method unusable.</p>
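<p>That suspicion - a last, half-full compressor buffer that is never flushed when the stream closes - is consistent with how block compressors work in general. As an illustration only (using Python's stdlib <tt>bz2</tt>, not the PHP filter in question), data fed in chunk-wise only becomes a complete, decodable stream after an explicit final flush:</p>

```python
import bz2

comp = bz2.BZ2Compressor()

# Feed the stream chunk by chunk. The compressor buffers internally and
# may return b"" until it has accumulated a full block's worth of input.
partial = b"".join(comp.compress(chunk) for chunk in [b"Stuff"] * 100)

# Without this final flush, the output is a truncated stream that a
# decompressor will reject: the same symptom as a filter that never
# writes out its half-full buffer on close.
complete = partial + comp.flush()

assert bz2.decompress(complete) == b"Stuff" * 100
assert len(complete) > len(partial)  # the flush carried real data
```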
<p>I'm afraid to report this to bugs.php.net as I'm sure it's something PHP was not designed for and it'll get marked as BOGUS faster than I can spell 'gnegg'.</p>
<p>So this means that the windows-client just has to wait for the data being extracted, converted to xml and compressed.</p>
<p>*sigh*</p>
<p>(thinking of it, there may be the option of writing the data to a temp file (to whose handle a filter is assigned) and then reading it out to the browser immediately afterwards. But come on, this can't be the solution, can it?)</p>
<p><b>Update:</b> I've since <a href="/archives/365-PHP,-stream-filters,-bzip2.compress.html">tracked the problem</a> to a bug in PHP itself for which I found a fix. My assumption of writing to a temporary file could help was wrong as PHP itself does not check the return value of a bzlib function correctly and never writes out a half-full buffer on stream close. Neither to the output stream nor to a file.</p>
Reporting Engines2006-01-26T00:00:00+00:00http://pilif.github.com/2006/01/reporting-engines<p>There are quite a few solutions available if you want to create printed (or PDFed) reports from your Delphi application, but surprisingly, only one is really convincing.</p>
<p>The so-called professional solutions like Crystal Reports or List&Label are very expensive, require a large runtime to be deployed and may even require you, the developer, to pay royalties per delivered client.</p>
<p>So for my particular problem, the only solution was to use a VCL based engine that can be compiled into the exe and does not need any additional components to be deployed.</p>
<p>Years ago, I was told to use <a href="http://www.digital-metaphors.com">ReportBuilder</a> and maybe people were even right there: Quick Reports (as included in Delphi back then) had and still has a very bad reputation, and RB was the only other VCL-based product available.</p>
<p>RB has some problems though: lacking Delphi 2006 support, limited report templates, field formatting on the database access layer and, last but not least, a nasty bug preventing barcodes from being correctly rendered to PDF files.</p>
<p>Then, I had a look at <a href="http://www.fast-report.com">Fast Report</a> and I tell you: That piece of software is perfect!</p>
<p>Granted, switching will come with a bit of work, though the paradigms of both engines roughly match. But once you've done the tedious rebuilding of the old templates, you'll notice how wonderful Fast Report actually is. And you will notice immediately, as it's very, very intuitive to use - compared to RB. Things that required custom programming or even a little hacking here and there in RB just work in FR. And they even work without forcing you to read through lots of documentation in advance.</p>
<p>And everything - just everything is in the report template. Number formats, even small scripts for changing the report in subtle ways while it's being printed. Just perfect for what I was doing.</p>
<p>So, if you are looking for a nice, powerful, really easy-to-use reporting engine that can be fully compiled into your EXE, you should really go with FR. It even costs less than RB.</p>
mp3act2006-01-11T00:00:00+00:00http://pilif.github.com/2006/01/mp3act<p>When you have a home server, sooner or later your coworkers and friends (and if all is well even both in one person ;-) ) will want to have access to your library</p>
<p>Cablecom, my ISP, has this nice 6000/600 service, so there's plenty of upstream for others to use in principle. And you know: Here in Switzerland, the private copy among friends is still legal.</p>
<p>Well, last sunday it was time again. Richard wanted access to my large collection of audiobooks and if you know me (and you do as a reader of this blog), you'll know that I can't just give him those files on a DVD-R or something. No. A webbased mp3-library had to be found.</p>
<p>Last few times, I used Apache::MP3, but that grew kinda old on me. You know: It's a perl module and my home server does not have mod_perl installed. And I'm running Apache 2 for which Apache::MP3 is not ported yet AFAIK. And finally, I'm far more comfortable with PHP, so I wanted something written in that language so I could make a patch or two on my own.</p>
<p>I found <a href="http://www.mp3act.net/">mp3act</a>, which is written in PHP and provides a very, very nice AJAX-based interface. Granted, it breaks the back button, but everything else is very well done.</p>
<p>And it's fast. Very fast.</p>
<p>Richard liked it, and Christoph is currently trying to install it on his Windows server - not as successfully as he'd like. mp3act is quite Unix-only currently.</p>
<p>The project is in an early stage of development and certainly has a rough edge here and there, but in the end, it's very well done, serves its need and is even easily modifiable (for me). Nice.</p>
Flattening Arrow Code2006-01-11T00:00:00+00:00http://pilif.github.com/2006/01/flattening-arrow-code<p>In an <a href="http://www.codinghorror.com/blog/archives/000486.html">equally named article</a>, the excellent (yes. Really. This is one of the blogs you HAVE to subscribe to) <a href="http://www.codinghorror.com/blog">Coding Horror</a> blog talks about flattening out deeply stacked IF-clauses in your code.</p>
<p>I so agree with the guy, though there seem to be two opinions regarding points 1 and 4 in the list the article provides:</p>
<blockquote>
Replace conditions with guard clauses. This code..
</blockquote>
<p>Many people disagree. Sometimes because they say that exceptions are a bad thing (I don't get that either) and sometimes because they say that a function should only have one return point.</p>
<blockquote>
Always opportunistically return as soon as possible from the function. Once your work is done, get the heck out of there! This isn't always possible -- you might have resources you need to clean up. But whatever you do, you have to abandon the ill-conceived idea that there should only be one exit point at the bottom of the function.
</blockquote>
<p>I once had to work with code an intern had written for us. It was written exactly as Coding Horror tells you not to. It was PHP code and all of it basically took place in one biiig else-clause around the whole page, with a structure like this:</p>
<pre class="code">
if (!$authenticated){
    die('not authenticated');
} else {
    // 1000 more lines of code, equally structured
}
</pre>
<p>This is a pain to read, understand and modify.</p>
<p>To <em>read</em> because the thing gets incredibly wide, requiring you to scroll horizontally; to <em>understand</em> because you sometimes find an <tt>}else{</tt> without having the slightest idea where it belongs, requiring you to scroll upwards for half a file to see the condition; and to <em>modify</em> because PHP's parser is inherently bad at reporting the exact position of missing or spurious braces, which is bound to happen when you extend the beast.</p>
<p>But back to the quote: I talked to that intern about his code style (there were other things) and he mostly agreed, but he refused to change those deeply stacked IF's. "<em>A function must only have one single point of return. Everything else is bad design</em>", he told me.</p>
<p>Point is: I kinda agree. Multiple exit points can make it hard to understand the workings of a function. But if it's a single, well-defined condition that makes the function unable to continue, or if the function somehow gets its result early (say, if it's able to read the data from a cache of some kind), IMHO there's nothing wrong with just stopping work there. That's easy to read and understand and certainly does not have the above problems.</p>
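<p>To make that concrete, the guard-clause style the article advocates can be sketched like this - a hypothetical variant of such an authentication check, with invented function and variable names:</p>

```php
<?php
// Guard clauses: handle the failure cases first and return early, so
// the main logic stays at the top indentation level - no else-cascade.
function render_page(bool $authenticated, ?array $items): string
{
    if (!$authenticated) {
        return 'not authenticated';   // guard clause: bail out early
    }
    if ($items === null) {
        return 'nothing to display';  // second guard, still no nesting
    }
    // ...the former "1000 more lines" would go here, un-indented...
    return 'rendered ' . count($items) . ' items';
}
```

<p>Each early return names exactly one reason the function cannot continue, which is the kind of single, well-defined exit I have no problem with.</p>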
<p>And of course every function should be short enough to fit on one screen, so scrolling is never necessary and it's always obvious where that }else{ belongs - at least without making you scroll.</p>
<p>Personally, I write code exactly as it is suggested in that article. And I try to keep my functions short. Like this, it's very easy to understand the code (most of the time) and thus to extend it. Even by third parties.</p>
<p>Christoph, do you agree? And: No, I'm not talking about that sort-by-material-group-thing. That <b>IS</b> unclean. I know that (and so do you now *evilgrin*)</p>
Evening leisure2006-01-10T00:00:00+00:00http://pilif.github.com/2006/01/evening-leisure<p>You know what's one of the greatest things to happen on the evening of a workday?</p>
<p>You come home, select 'Play Movie' on your <a href="http://www.logitech.com/index.cfm/products/features/harmonytopics/CH/EN,crid=2079,categoryid=396">Harmony Remote</a> and watch the <a href="http://speeddemosarchive.com/BanjoTooie.html">speedrun</a> that downloaded itself while you were at the office (using Windows Media Center - sorry, but that has the advantage of just working).</p>
<p>This is the future. This is like watching TV with the full control of the program.</p>
<p>Why should I watch TV only to see programs I don't want for most of the time? Why should I cope with advertisements every 10 minutes? Why should I be forced to watch all movies in the German synchronized version?</p>
<p>Not with me. This is what the internet is for. This is why I'm running a linux server at home.</p>
Asterisk Extended2005-12-23T00:00:00+00:00http://pilif.github.com/2005/12/asterisk-extended<p>Playing around with <a href="http://www.asterisk.org">Asterisk</a>, it was inevitable for me to stumble upon AGI.</p>
<p>AGI is a protocol quite like CGI which allows third party applications to be plugged into asterisk, giving them full control over the call handling. That way, even non-asterisk-developers are able to write interesting telephony applications.</p>
<p>One thing I always wanted to do is to set the CallerID on incoming calls. Some numbers are stored in our customer database. There is no reason not to show the customer names on the phones displays instead of only the number.</p>
<p>The snom phones do have a little address book, but it's very limited in both amount of memory and feature set, so it was clear that I'd have to set the CallerID via Asterisk (SIP allows for transmission of a caller ID. And <a href="http://www.voip-info.org/wiki/view/set+callerid">so does AGI</a>.)</p>
<p>Additionally, I thought, it would be very nice to use the swiss phone book at <a href="http://tel.search.ch">tel.search.ch</a> or even the non-free ETV to try and guess numbers not already in our database.</p>
<p>That scenario is exactly what AGI is for.</p>
<p>As AGI works like CGI, it creates a new process for every call to an AGI application. This is not an option if you want to use interpreted languages. Well, it *is* an option considering the low number of calls we get per time unit, but still: I don't like to deploy solutions with obvious drawbacks.</p>
<p>Besides, launching a PHP interpreter (I'd have written this in PHP) can easily take a second or so - not acceptable if you want the AGI script to be mandatory on each call. Think of it: you don't want the caller to wait for your application.</p>
<p>The solution to this is FastAGI, which works like FastCGI: A server keeps running and answers AGI-requests. Like this, you start the interpreter once and just serve the calls in the future. You save the startup-time of the interpreter.</p>
<p>Even better: it allows the AGI applications to run on a different machine than the PBX. This is good because you want the PBX to have as many CPU time slices as possible.</p>
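<p>The protocol itself is simple: whether over stdin (plain AGI) or a socket (FastAGI), Asterisk opens every request with a block of <tt>agi_name: value</tt> header lines terminated by an empty line. A minimal parsing sketch - in PHP for illustration, with helper and value names that are my own, not part of any AGI library:</p>

```php
<?php
// Parse the header block Asterisk sends at the start of every
// (Fast)AGI request: "agi_name: value" lines, ended by a blank line.
// Hypothetical helper; the function name is invented for this sketch.
function parse_agi_headers(array $lines): array
{
    $vars = [];
    foreach ($lines as $line) {
        $line = rtrim($line, "\r\n");
        if ($line === '') {
            break; // blank line terminates the header block
        }
        [$key, $value] = explode(':', $line, 2);
        $vars[$key] = trim($value);
    }
    return $vars;
}

// A FastAGI server would read lines like these from the client socket:
$request = [
    "agi_request: agi://pbx.example/callerid-lookup\n",
    "agi_channel: SIP/office-0001\n",
    "agi_callerid: 0441234567\n",
    "\n",
];
$vars = parse_agi_headers($request);
// After looking up $vars['agi_callerid'] in the customer database,
// the server would write back an AGI command such as SET CALLERID
// carrying the resolved name and number.
```

<p>That lookup-and-reply loop is exactly the internal-customer-database application I have in mind.</p>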
<p>Unfortunately, this made PHP quite unsuitable for me: while it is possible to write a socket server in PHP (ext/sockets does exist), I never managed to get it to work the way I wanted. It was slow, unstable and created zombies.</p>
<p>Then I found <a href="http://www.snapvine.com/code/ragi">RAGI</a> which was even better. <a href="/archives/ruby_on_rails.html">For quite some time now</a>, I have been looking for an excuse to do something with <a href="http://www.rubyonrails.org">Ruby on Rails</a>. With RAGI, I finally got it.</p>
<p>Getting the sample provided with RAGI to work was very easy (look at the README file). And reading through that sample file, I was very pleased to see how simple writing an AGI application in Ruby is (RAGI uses FastAGI, of course).</p>
<p>Now I can finally start hacking away in Rails to create my internal-customer-database / external-phone-lookup application (with some nice caching/timeout handling) to finally show the name behind the calling phone number on the displays of our SNOM phones.</p>
<p>Of course I'm going to provide the source code here once I'm done.</p>
Strangest JavaScript error ever2005-12-15T00:00:00+00:00http://pilif.github.com/2005/12/strangest-javascript-error-ever<p>Let's say you create some very nice AJAX-stuff for a web project of yours. With nice I mean: Not breaking the back-button where its functionality is needed, not doing something that works better without AJAX, and doing it while providing lots of useful visual feedback.</p>
<p>Let's further assume that the thing works like a charm in every browser out there (not counting Netscape 4.x and IE in all versions - those are no browsers).</p>
<p>And then, IE throws this at you:</p>
<center>
<img alt="Unknown Runtime Error" src="http://www.gnegg.ch/archives/jserror.png" width="437" height="290" />
</center>
<p>Needless to say that the HTML output in question had a line count not even close to 370, so finding this thing easily was out of the question.</p>
<p>The solution: IE is unable to write to the innerHTML of a TBODY element. But instead of providing a useful error message or even a link to the source with the line in question already highlighted (that's what Firefox would do), it just bails out with completely useless error information.</p>
<p>*sigh*</p>
<p>(btw: that mix of fonts in the details section of the error message is just another indication of IE's great code quality)</p>
The myth of XCOPY deployment2005-12-07T00:00:00+00:00http://pilif.github.com/2005/12/the-myth-of-xcopy-deployment<p>Since the advent of .NET, everyone is talking about XCOPY deployment.</p>
<p>XCOPY deployment means that applications are distributable without a setup routine: just copy the file(s) where you want them and that's it.</p>
<p>We are being told that this is much easier and safer than the previous non-.NET approaches which - as they continue - always required a setup program.</p>
<p>The problem with those statements is that they are all false.</p>
<p>First the ease of use: Think of it: Say you want to install <a href="http://blogs.geekdojo.net/brian/articles/Cropper.aspx">Cropper</a> (which made me write this entry. I found that screenshot utility via <a href="http://miksovsky.blogs.com/flowstate/2005/12/elegant_cropper.html">flow|state</a>). What you are getting is a ZIP-File, containing 5 files and a folder (containing another 6 files). Nearly all the files are needed for the application to run.</p>
<p>XCOPY deployment in this case means: create a folder somewhere (Windows guidelines advocate you create it in C:\Program Files, a folder Windows does not want you to mess with and whose contents it does not display by default) and copy over all those files, taking care not to forget a file or the folder in the archive.</p>
<p>But it does not end there: since launching the application means navigating all the way through those folders, you will want a shortcut in the start menu or on the desktop. With this new and "better" method of deployment, you'll have to create that yourself.</p>
<p>This is a tedious task involving lots of clicks and browsing. An inexperienced user may not be able to do this at all.</p>
<p>What an inexperienced user will want to do is copy that application right to the desktop. But in this case that does not work well, as the whole application consists of multiple interdependent parts. Copying only the .EXE will break the thing.</p>
<p>Compare this with Mac OS X.</p>
<p>In Mac OS X, applications <em>also</em> consist of multiple parts. But the shell is built with XCOPY deployment (not called that, of course; as a matter of fact, it does not have a name at all) in mind: in OS X, you can create a special kind of folder which is a folder only on the file system. The shell displays it to the user as a single file - the application.</p>
<p>Whenever you move that "file" around, OS X will move the whole folder. When you double-click the "file", the application will launch (the binary is a file somewhere in this special folder; the shell is intelligent enough to find and launch it). When you delete it, the shell will delete the folder including its contents (of course).</p>
<p>This makes XCOPY deployment possible as the application becomes one piece. You want it on the desktop? Drag it there. In the Applications folder (without warnings about not being allowed to mess with its contents, btw)? Drag it there. On a USB stick? Drag it there.</p>
<p>Well, there's one other thing: the user's data and the application's data. Most applications will be used to create data. And all applications somehow create their own data (for saving things like window state or position, for example). As all modern OSes are multiuser systems where a user does not necessarily have write access everywhere, there's the concept of the home directory. That one is yours. You may store whatever you want in there.</p>
<p>So naturally, this is the place where applications should store their data too.</p>
<p>User data goes to a specific folder of the users choice. Per default, applications should suggest some Documents-Folder. Like "My Documents" in Windows or "Documents" in Mac OS. In most of the cases you don't want to delete that on uninstall.</p>
<p>Application settings on Windows are stored in the registry (under HKEY_CURRENT_USER - a hive that belongs to the current user just like his home folder does; the file behind it is actually stored in the home folder as well (USER.DAT)) or in the Application Data folder below the user's home folder.</p>
<p>Mac OS X applications are advised to use the Preferences folder inside the Library folder inside the user's home directory.</p>
<p>Now. Application data is something you want to remove when you uninstall the application (which means deleting a bunch of files in Windows or one "File" in Mac OS). Application data is created by the application, for the application. No need to keep that.</p>
<p>In Mac OS, you can do that by going into the folder I've described above and delete the files - mostly named after your application. There are no warnings, no questions, no nothing. Just delete.</p>
<p>In Windows, editing the registry is off-limits for end users and very, very tedious even for experienced users (due to the suboptimal interface of regedit and because the whole thing is just too large to navigate easily), so you generally let the stuff stick around. Deleting the application data in the same-named folder is also impossible for the end user: that folder is hidden by default - Explorer does not display it. And it's hard as hell to find, as you have to manually navigate into your home directory; there's no easy GUI access to that. So that sticks too.</p>
<p>All in all, this means that windows is - at least in its current state - very unsuited for XCOPY deployment:</p>
<ul>
<li>It does not help at keeping together things that must be together</li>
<li>Its complex file system structure makes it hard to copy the application where windows wants it to be</li>
<li>Manually creating shortcuts is not feasible for an inexperienced user</li>
<li>Uninstallation of Application Data is impossible</li>
</ul>
<p>So, we found out that XCOPY deployment is not easy at all. Now let's find out how it's not true that only .NET enabled you to do this.</p>
<p>Ever since there is <a href="http://www.borland.com/delphi">Delphi</a>, there theoretically is XCOPY deployment.</p>
<p>Delphi is very good at creating self-contained executables.</p>
<p>With Delphi it's a breeze to create one single .EXE containing all the functionality you need. That one single .EXE can be moved around as a whole (obviously), can be deleted, can even be put right into the start menu (if you want that). It can even create the start menu shortcuts and delete its application data - basically configure and clean itself.</p>
<p>It can even uninstall itself (embed an uninstaller, launch it with CreateProcess and set the flag to delete the .EXE after it ran). And it can contain all the image, video and sound data it needs.</p>
<p>Just because nobody did it does not mean it was not possible.</p>
<p>Face it: Windows users are used to fancy installers. Windows users are not at all used to dragging and dropping an application somewhere. And currently Windows users are not even able to do so as dragging and dropping will break the application.</p>
<p>OS X and <a href="http://klik.atekon.de/">now</a> Linux allow for true XCOPY deployment of desktop applications.</p>
<p>Well, you say... then maybe XCOPY deployment is just for those fancy ASP.NET web applications?</p>
<p>Maybe. But after XCOPY you need to configure your webserver - at least create a virtual directory or host. A good installer could do that for you - if you want it to.</p>
<p>Microsoft too has seen that this XCOPY thingie is not as great as everyone expected, so they added the new "One-Click Install" technology, which is not much more than a brushed-up MSI file doing an old-fashioned install.</p>
<p>To really make XCOPY deployment a reality (btw, I'm a big fan of deploying software like this), there must be some changes within Windows itself. Microsoft, copy that application bundle feature from OS X. That one works really, really well.</p>
<p>Btw: Am I the only one that thinks "XCOPY deployment" is a very bad term? What is XCOPY? Who the hell still uses XCOPY these days? And when we are using the command line: COPY would be enough.</p>
PostgreSQL scales2005-12-02T00:00:00+00:00http://pilif.github.com/2005/12/postgresql-scales<p>Via <a href="http://people.planetpostgresql.org/xzilla/index.php?/archives/111-Thoughts-on-FeedLounge-Switching-to-PostgreSQL.html">zillablog</a>, I was notified of <a href="http://feedlounge.com/blog/2005/11/20/switched-to-postgresql/">FeedLounge switching to PostgreSQL</a>.</p>
<p>FeedLounge is just another in a series of web-based services switching their RDBMS away from MySQL.</p>
<p>For one thing, it's the features that are driving this. Postgres just has more features, and sometimes you need them. Triggers? Views? Until very recently, those features were not available in MySQL.</p>
<p>And when they switch, they notice another thing: <a href="http://www.postgresql.org">PostgreSQL</a> scales very well.</p>
<p>While everyone says that MySQL is optimized for speed and that there's no database system as fast as MySQL, this is only true for small setups.</p>
<p>In small setups MySQL scores with its ease of use and administration. But as soon as you want more (more features, more concurrent users), you will run into MySQL's limitations and - even more important - MySQL will slow down, it will use lots of RAM and disk space and it will even begin to <a href="http://www.google.com/search?hl=en&q=myi+error">corrupt its tables</a> (a thing an RDBMS should never ever do - except perhaps on broken hardware, where it's unavoidable).</p>
<p>PostgreSQL does not have these flaws. It may be a little bit slower under low load, but its speed and reliability scale with the load.</p>
<p>PostgreSQL scales.</p>
Opera Mini2005-11-10T00:00:00+00:00http://pilif.github.com/2005/11/opera-mini<p>Today, Opera released <a href="http://mini.opera.com">Opera Mini</a>, a browser written in Java for all the Java-capable mobile phones out there.</p>
<p>By the use of a special proxy server, they manage to both minimize the traffic a usual browsing session generates and to keep the application as performant as possible.</p>
<p>When I tried to download the application via WAP, all I got was an 'Invalid Jad Request' error (whatever that meant), but with some sneakiness, I found the <a href="http://mini.opera.com/builds/releases/operette-1.1.2231/operette-hifi_myopera.jar">download URL</a> for the jar file nonetheless (the linked version is the high-memory one; there's another for less advanced phones).</p>
<p>I copied the file over to my K750i via bluetooth which is cheaper than downloading it and it also had the advantage of actually working.</p>
<p>The browser is very nice. While it takes quite some time to launch, surfing is very quick. And the very good font rendering (of course Opera's small-screen rendering is active as well) makes this a pleasure to use and is the first justification for a phone having a screen resolution as big as the K750i's.</p>
<p>And the most interesting thing: Opera uses the default internet GPRS profile. Not the WAP one. This makes surfing via opera a whole lot cheaper than via the built-in wap browser of my phone.</p>
<p>Congratulations, Opera. This rules!</p>
<p>(and thanks, Christoph, for pointing it out to me)</p>
PostgreSQL 8.12005-11-08T00:00:00+00:00http://pilif.github.com/2005/11/postgresql-81<p>A new year, a new announcement of a new version of <a href="http://www.postgresql.org">PostgreSQL</a>, an all-time-member on my <a href="http://www.gnegg.ch/archives/138-All-time-favourite-Tools.html">favourite tools</a> list.</p>
<p><a href="/archives/8-PostgreSQL-7.3.html">2002</a> brought us PostgreSQL 7.3, 2003 brought 7.4 (no announcement on this blog) and <a href="/archives/209-ALTER-TABLE-in-PostgreSQL-8.0.html">2004</a> brought us PostgreSQL 8.0 (the dates of the blog entries match by sheer accident - I did not time them at all).</p>
<p>And now it's time for the next announcement. While the team is a bit early this time (it's not December 2nd yet), it once more brings a <a href="http://www.postgresql.org/docs/whatsnew">lot of good stuff</a>.</p>
<p>The most interesting aspect of those PostgreSQL releases: they always bring just the feature I need at the time of the release.</p>
<p>7.1 brought TOAST tables, 7.4 brought autovacuum, 8.0 brought the Windows version and now 8.1 brings some needed performance improvements for large tables and large COPY operations (which is what I'm doing currently).</p>
<p>And it's not just me. Christoph recently needed something like PHP's <tt>max()</tt> function on the database side. And what do we learn: 8.1 brings us <tt>greatest()</tt>.</p>
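<p>For illustration, the two really are counterparts - PHP's <tt>max()</tt> compares values in application code, while <tt>greatest()</tt> does the same comparison per row inside the query (the table and column names in the SQL comment are invented):</p>

```php
<?php
// PHP's max() compares its arguments and returns the largest:
$largest = max(3, 7, 5);   // 7

// PostgreSQL 8.1's greatest() does the same comparison per row,
// directly in SQL (table and column names invented for illustration):
//   SELECT greatest(list_price, min_price) FROM products;
// SQL's max() is an aggregate over *rows*, so before 8.1 a
// cross-column comparison like this had to happen in PHP.
```

<p>That cross-column case is exactly where you previously had to pull the rows into PHP just to compare two values.</p>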
<p>Congratulations to another splendid release, PostgreSQL team. I hope to see you going as strong for the next couple of years.</p>
Usability with the Browse for folder dialog2005-11-04T00:00:00+00:00http://pilif.github.com/2005/11/usability-with-the-browse-for-folder-dialog<center><img alt="browsefolder.png" src="http://www.gnegg.ch/archives/browsefolder.png" width="324" height="338" /></center>
<p>This dialog is the worst usability nightmare ever. It's so bad that I'm really afraid of using any function in any program that makes me choose a directory. Why? Don't get me started:</p>
<ul>
<li>It's too small. Newer versions of Windows allow you to resize it, but that depends on the program. The one I took this screenshot of does not.</li>
<li>It's uncommon. You don't see that dialog often. Many programs use a standard file selection dialog when they ask for a directory (I guess because of the problems outlined here)</li>
<li>It does not allow multiple selection. Meaning that if your program lets the user work with multiple directories at once, it can't be done with this dialog. You have to build your own solution, thus losing even more usability by forcing the user to learn something new.</li>
<li>The tree view is an uncommon view of the file system. Over time, Microsoft eliminated the tree view for directory navigation more and more. You have to willingly turn it on in Windows XP. There's no Explorer view with that tree by default.</li>
<li>It's context-less. Tell me: what's the reason to select a folder in the dialog you see in the screenshot above? I don't select a folder for the selection's sake. I want to do something with that folder. What? The dialog does not tell me. I know this can be set in the API call that brings up the dialog. But many programs do not.</li>
<li>It has no way to enter a path manually. Copy &amp; paste exists for a reason. If you have deeply nested paths, it can be a real timesaver. Navigate in Explorer (maybe already open anyway), copy the path to the clipboard and... nope... no pasting.</li>
<li>It has no autocomplete. I'm very fast in typing paths with the help of autocomplete: c:\Pro[arrow down]\Po[arrow down] and I'm in c:\Program Files\PopScan. Not in this dialog. In this dialog, I have to click through the whole path</li>
<li>Around Windows 2000, Microsoft extended the file selection dialogs with a shortcut bar allowing easy access to some commonly used folders. This bar is missing from this dialog.</li>
<li>It's read-only. What if I want to make a new folder? Some "editions" of the dialog do provide a 'New Folder' button. Even so, it works by adding a 'New Folder' folder which you then have to manually click to rename (at least on this system here; the behaviour is erratic).</li>
<li>There's no context menu. Usually when you see file system icons, you can right-click them to get the system-wide context menu. Not in this dialog. Well, you can right-click, and a context menu does appear. But the single entry is "What's this", providing an utterly pointless context-sensitive help entry that - I'm afraid - has no context at all: "Click the plus sign (+) to display more choices". Choices? What choices? Why do I want to see new choices? This is no answer to the question "what's this". It's no answer at all. What the <em>heck</em>.</li>
<li>It's context-less (yeah, we had that before). Tell me: what's the path I currently have selected in the screenshot? How can I know I have selected the correct folder? I may - after all - have multiple Richard Wagner folders on my hard drive.</li>
<li>I wanted to write that it violates Fitts's law because you have to click those small '+' signs (as the context-sensitive help tells you). And now - after <em>years</em> of using this dialog - I finally learned that you can double-click folders to expand them. I did not know that until now.</li>
<li>When you have mounted network drives, it's a living performance problem, as the top layer of the dialog displays all drives, which can take some time - during which the dialog (and the underlying app) freezes.</li>
<li>The deeper you get in the hierarchy, the more you have to <em>horizontally</em> scroll.</li>
</ul>
<p>I'm sure I could list even more, but enough is enough. I'm sure you get the point.</p>
<p>Microsoft, I beg of you: Redesign this dialog!</p>
<p>Force the programs to use the new design. Don't provide a fallback!</p>
<p>This way only the programs that are actually re-building this dialog (instead of calling the API) remain broken - and after all, they were broken to begin with: Why rebuild something crappy? Why rebuild it and risk it only being similar but not identical in usage? Why rebuild and risk it remaining broken even if the dialog gets fixed?</p>
<p>And believe me: there are people rebuilding existing OS features. For no reason at all (see another posting about Adobe's new file selection dialogs).</p>
Frustrated by personal firewalls2005-11-04T00:00:00+00:00http://pilif.github.com/2005/11/frustrated-by-personal-firewalls<p>As you may know, the <a href="http://www.sensational.ch">company</a> I'm working in develops <a href="http://www.popscan.net">barcode ordering solutions</a>.</p>
<p>Now for me it's very frustrating to see that whatever I do, those oh-so-good personal firewall and internet security and whatnot tools manage to screw up the experience for the end user. During development, I'm always careful to adhere to commonly known good practices with regard to handling the system. Works without admin rights? Yes. Uses system-wide functions wherever possible? Yes. Clean uninstall? Yes. Spyware-free? Of course. Trojan horse? God beware! No!</p>
<p>Nonetheless, PopScan gets majorly screwed here and there:</p>
<ul>
<li>Norton Internet Security is configured by default to let only 'Programs authorized by Symantec' access the internet. I don't even try to ask how to get on that list - besides the fact that we'd never have the resources to do whatever Symantec wants from us - if they provide such a possibility at all.</li>
<li>Whenever the offline version connects to the internet, a big scary warning from whatever personal firewall (besides Norton - that tool silently blocks everything that's not IE or LiveUpdate) pops up, telling the unknowing user that something bad is currently happening. End users are known to click 'block' here and then accuse us of creating trojan horses.</li>
<li>To circumvent many problems associated with installations on the client, we created the <a href="http://www.popscan.net/web_en/smb.html">Web version</a> of PopScan. And you know what: we're still screwed. Java applets get blocked (how the hell should we get the barcodes out of the scanner if not with Java or ActiveX??), popups get blocked (of course we don't open any unrequested ones; the only popup used is for reading the scanner, with onClick="window.open()" - it can't be more 'user-requested' than this). Still... some security program deemed it necessary to block that.</li>
</ul>
<p>The worst thing about all that is: those obviously broken programs that screw up applications all over the place call themselves 'security tools' and with this, they seem to be automatically trusted by the end users. If a security tool tells the user "Trojan Horse Alert", the user panics and blocks everything. If a security tool just silently blocks certain internet connections (PopScan Offline uses port 80 to communicate - using the WinInet API; a less intrusive, less sneaky way of connecting to the internet does not exist), everyone blames the blocked program for not working.</p>
<p>Connecting to the internet regardless of any PFW setting would mean injecting code into IE and using that to do your internet work. The better tools still detect that, but you can get around it by abusing the Windows message loop and simulating keypresses. But both solutions are actually trojan-like, and I'd never ever implement such a "feature". It compromises stability and integrity. And it's ethically flawed. Nonetheless: the tools force me to do something like this if I want it to work in 100% of the cases in 100% of the installations.</p>
<p>Those tools go way too far.</p>
<p>And don't forget: it's the inexperienced users who get bitten. They are the ones who install security tools, who don't know what those tools do, who trust them, and who draw the wrong conclusions (PopScan can't connect, so PopScan must be broken).</p>
<p>It's just frustrating. Why spend lots of time making a software non-intrusive and perfectly compliant with both technical and ethical standards when it's blocked just like your average trojan horse that trashes your installation and displays advertisements all over the place?</p>
<p>Actually I think, those trojans are better off because they have code to circumvent the security tools.</p>
<p>As it currently stands, I have the feeling these tools block more legitimate applications than trojan horses. And this frustrates me. Greatly.</p>
On the search of a text editor2005-10-18T00:00:00+00:00http://pilif.github.com/2005/10/on-the-search-of-a-text-editor<p>When I began with this blog, <a href="/archives/3-Why-I-like-jEdit.html">I was using</a> <a href="http://www.jedit.org">jEdit</a> because of its wonderful list of countless features directly optimized for a programmer's needs.</p>
<p>It was lacking one thing though: PHP support. While it provides (excellent) syntax highlighting, there's nothing more. No code completion, no parameter hints, no code browser. While many people dismiss those things as useful but not needed, I tend to disagree.</p>
<p>Sometime around autumn 2003, I gave the Zend Studio another try. And it has matured quite a lot since its first release. The speed problems were fixed, some editing features came back... nice.</p>
<p>What made me stick to Zend Studio are the features mentioned above: code completion and parameter hints.</p>
<p>I know that you can just look up the order of a function's parameters in either your (or someone else's) code or in the manual, but it always interrupts your work. Not only that, you have to actually know where to look. Is it in the PHP manual? In code file a? In file b? Maybe in some library installed in /usr/lib/php (PEAR)? Zend Studio provides me with the parameter hints regardless of where the file is stored - provided it can read them.</p>
<p>This is a killer feature. It immensely increases one's productivity. Whatever editor I'm ever going to use: It must have parameter hinting for PHP. And it has to work as well as it does in the Zend IDE.</p>
<p>The Zend IDE has other problems though. What it has in parameter hinting, it lacks in basic editing features. Remove whitespace at the end of lines? Comment out a block? Smart autoindent (another thing where jEdit shines)? Splitting the editing window? No. None of them.</p>
<p>What pisses me off most (besides the whitespace problem, as that creates very ugly SVN commits) is the font rendering though. Now that I finally <a href="http://www.gnegg.ch/archives/254-Nice-font....html">found a font</a> I really like, I'm unable to use it in the editor environment I use the most. Like many other Java applications, Zend Studio does not support ClearType. And if you hack around a bit to run ZDE in a Java 1.6 alpha, the whole application will use ClearType - the whole application except the editing window, of course. Consolas looks really bad without ClearType.</p>
<p>Actually, any of the fonts I do like for programming (basically any besides Courier New) looks bad without ClearType, which means that I'm programming PHP with Courier as my font.</p>
<p>PHP of course is my main language at this time, so I'm doing most of my work in an environment that is not at all to my liking.</p>
<p>So... time for a new editor. Here's what I've tried:</p>
<ul>
<li>jEdit again. Now has a PHP parser plugin, which is completely unusable unfortunately: It parses while you are typing and as soon as it detects a syntax error (which is bound to happen while you are writing a line), it puts the keyboard focus on the error list(!!!!). This means that I have to type like this: function gnegg([TAB]$param1,[TAB]$param2){[TAB], the [TAB] meaning me hitting tab to get the focus back to where it belongs. Additionally, there's no parameter hinting, which is a must for me. As much as I'd like to use jEdit, it's not possible like this. Sorry. (Even though jEdit actually renders Consolas quite nicely with its own implementation of subpixel hinting.)</li>
<li><a href="http://www.phpeclipse.de">PHPEclipse</a>: An Eclipse plugin (even though Eclipse is written in Java, it uses SWT and thus the native font rendering of the underlying platform, meaning that ClearType is usable) teaching the Java IDE how to do PHP. Unfortunately, many of the great features in Eclipse are part of the JDT plugin suite, so every language has to redo the stuff in there. PHPEclipse is seriously lacking in the features department and parameter completion is missing as well.</li>
<li>UEStudio. Well... let's try a commercial offering. UEStudio is an enhanced version of UltraEdit. They emphasize their PHP support. You guessed right: No parameter hints.</li>
<li>phpEd, Maguma Studio, ... I did not even try them again. My last experience was very, very painful. While Delphi is a RAD tool allowing you to make quick progress, you have to be as careful with memory allocation as in every other natively-compiled language. None of those Windows-only PHP editors seem to care about that, so they crash all the time. No alternative.</li>
</ul>
<p>Well... that's it for now. Please. Anyone! What are you working with? Is there an editor with the editing features of jEdit, the font rendering of Eclipse and the PHP-specific features of Zend Studio (auto completion and <b>parameter hinting</b>)? I don't need a profiler. No debugger. Just a good editor.</p>
<p>Am I doomed to write PHP with Courier New for the rest of my life?</p>
Nice font...2005-09-20T00:00:00+00:00http://pilif.github.com/2005/09/nice-font<p>I have my Windows set up with ClearType™ enabled. Now, for Longhorn, they have created some new fonts, specially hinted for configurations with ClearType™ enabled. One of them - Consolas - has a fixed width and is meant for use in programming environments, for example.</p>
<div align="center">
<img alt="Sample Screenshot" src="http://www.gnegg.ch/archives/fontsample.png" width="355" height="241" />
</div>
<p>I really like this font. It's very easily readable but still looks great and smoothed.</p>
<p>Unfortunately, in environments without ClearType, it looks <a href="http://www.codinghorror.com/blog/archives/000356.html">really crappy</a>. One of those environments, unfortunately, is still the Java Runtime and with it the Zend Studio. Actually, not a single font I've tried in there looks acceptable. The best of them - still - is Courier New, which is a real pain.</p>
<p>What I like most about Consolas is also visible on the screenshot: Usually I'm working with a bright-on-dark editor scheme because it makes things a whole lot more readable for me. With Consolas I don't need this any more. The font looks good and readable even on a white background. This in turn takes away a lot of strain from my eyes.</p>
<p>Nice. I've copied it over from my Longhorn VMware image to my default working environment and I'm really, really happy.</p>
Domain grabbers - love'em2005-09-20T00:00:00+00:00http://pilif.github.com/2005/09/domain-grabbers-loveem<p>Well... pilif.ch is gone. I forgot to pay the renewal fee for one single day and the next day, the domain was in the hands of a domain grabber. I'm sure nic.ch has some sort of deal with them to automatically forward expired domains.</p>
<p>I'm really grateful for that. And I'm also grateful that I did not get a warning in advance (which happened because of a wrong address in their database - mea culpa).</p>
<p>So: Visit <a href="http://www.lipfi.ch">lipfi.ch</a> for my personal webpage.</p>
<p>To be honest, I would not have resurrected the thing if it was only about www.pilif.ch, but I had lots of other services running on subdomains - services I depend on (like the administration tool for this blog).</p>
<p>Well... if you are one of the users of any of those services, replace pilif.ch with lipfi.ch and continue to use them as usual.</p>
<p>Stupid domain grabbers</p>
Just incredible2005-08-24T00:00:00+00:00http://pilif.github.com/2005/08/just-incredible<p>Maybe you know it: There's a big community around remixing video game music at <a href="http://www.ocremix.org">ocremix</a>. Additionally, there's an Ogg stream available <a href="http://oc.ormgas.com/ocmain.php">here</a>, which is what I'm currently mostly listening to while at work.</p>
<p>There's some techno in there I really dislike, but I usually just close my ears (mentally) then. But mostly it's really great sound - especially if you know the games.</p>
<p>Well... And just some moments ago, I listened to <a href="http://www.ocremix.org/remix/OCR00205/">Death on the Snowfield</a> which is a remix of Terra's theme from Final Fantasy VI.</p>
<p>I've always been a big fan of the music in said game - IMHO it's the best thing Nobuo Uematsu has done so far. Terra's theme is the best part of the best composition by the second-best video game composer... you could say that it's pretty good ;-)</p>
<p>But what made me write this entry: Said <a href="http://www.ocremix.org/remix/OCR00205/">remix</a> is so beautiful - it made me cry (and listen to it over and over again). I really recommend you get that mp3 and see if you like it as well.</p>
<p>So good!</p>
No topic-based help system installed2005-07-21T00:00:00+00:00http://pilif.github.com/2005/07/no-topic-based-help-system-installed<p>Recently I had to do some Delphi-work again. To my surprise, the online help seemed to have stopped working. I always got this error message:</p>
<blockquote>No topic-based help system installed</blockquote>
<p>Programming without an online help is very tedious and sometimes nearly impossible.</p>
<p>When I had to look up in which Unit <code>TWinControl</code> is declared, I had two possibilities: Either look it up in the source code (Borland ships the full source code to their class library) or fix the help system once and for all.</p>
<p>I decided to do the latter (searching for TWinControl is no fun).</p>
<p>Googling the web turned up nothing. In the newsgroups, the suggestion most of the time was to reinstall the whole thing.</p>
<p>I absolutely did not have time for this, so I dug deeper.</p>
<p>The problem is caused by the installation of the VS2005 Beta, which resets an AppID GUID. Afterwards, Delphi crashes while loading the IDE package htmlhelp290, which in the end causes Delphi to think that there’s no help installed.</p>
<p>I fixed it by doing the following:</p>
<ol>
<li>Reset the help-viewer-appid. In the registry under HKEY_CLASSES_ROOT\AppID\DExplore.exe, set AppId to {4A79114D-19E4-11d3-B86B-00C04F79F802}</li>
<li>In HKEY_CURRENT_USER\Software\Borland\BDS\3.0\Disabled IDE Packages remove the entry for htmlhelp290 that has been created.</li>
<li>Start Delphi and use the help again</li>
</ol>
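<p>(For reference, the two registry changes above can also be applied by importing a .reg file. This is a sketch based on the paths given above - the exact value name under "Disabled IDE Packages" may be the full path to htmlhelp290.bpl on your system, so check the key before deleting. Back up both keys first.)</p>

```
Windows Registry Editor Version 5.00

; Step 1: reset the help viewer's AppID
[HKEY_CLASSES_ROOT\AppID\DExplore.exe]
"AppID"="{4A79114D-19E4-11d3-B86B-00C04F79F802}"

; Step 2: remove the entry that disables the htmlhelp290 IDE package
; (the "=-" syntax deletes a value; adjust the value name to what you
; actually find in this key)
[HKEY_CURRENT_USER\Software\Borland\BDS\3.0\Disabled IDE Packages]
"htmlhelp290"=-
```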
<p>What I don’t know is whether this has a negative effect on VS, but that does not matter for me: I need Delphi to work.</p>
<p>The whole thing is a consequence of the .NET orientation of Delphi: Earlier, Delphi was as self-contained as the executables it could build: Drop it into a directory and run it. No problems, no questions asked.</p>
<p>With Delphi integrated into .NET and using .NET components, problems begin to arise: First there was a bug in D8 causing it to stop working after .NET 1.1 SP1; now this.</p>
<p>Hopefully, they find a way back to both .NET (for acceptance in the buzzword-centered world where you can’t have a dev tool that isn't .NET-capable) and self-containment.</p>
Fresh Air2005-07-08T00:00:00+00:00http://pilif.github.com/2005/07/fresh-air<div class="floatimgauto">
<a href="http://www.gnegg.ch/archives/pont04.JPG"><img alt="pont04.JPG" src="http://www.gnegg.ch/archives/pont04-thumb.JPG" width="150" height="112" border="0" /></a>
</div>
<p>Already <a href="http://www.gnegg.ch/archives/167-Mountains.html">another year</a> has passed.</p>
<p>It's fresh air time for me!</p>
<p>Hopefully (though not likely unfortunately), the weather will be better this time, but in the end I suppose it does not matter. It's about nature, free time and Evelyn, the best girlfriend in the world</p>
<p>As of tomorrow, I will be off for one week of holiday.</p>
<p>PS: I wonder how many points of rested bonus this will get me in <a href="http://www.worldofwarcraft.com">WoW</a> :-)</p>
Once more: PHP and SOAP2005-07-01T00:00:00+00:00http://pilif.github.com/2005/07/once-more-php-and-soap<p>I can't resist: I made my third attempt at getting a SOAP server in PHP to work (I only documented my <a href="http://www.gnegg.ch/archives/49-SOAP-needs-soap.html">first try</a> here on the blog).</p>
<p>My first try was a little more than two years ago. That one failed miserably.</p>
<p>The next try was last November. I came somewhat further than I did the first time, but Visual Studio was unable to import the WSDL correctly as soon as I was passing arrays of structs around.</p>
<p>And now I tried again - this time with PEAR SOAP 0.9.1.</p>
<p>This time it all looks so much better. First of all, I do this because I really have to: For one of our <a href="http://www.popscan.ch">PopScan</a> customers, we are accessing their IBM DB2 database - currently using a Perl-based server that's nearing the end of its maintainability, so I decided to redo it in PHP (PHP code is somewhat cleaner than Perl code and I'm more fluent in PHP than in Perl).</p>
<p>The DB2-client (especially the one needed for that old 7.1 database) is clumsy, a bit unstable and really not something I want to link into our Apache-Server that serves all our clients.</p>
<p>So the idea was to compile another Apache, run it on another port, bound to localhost only. Add PHP with the DB2 client. Access this combo via some kind of RPC from the nice, DB2-free standard installation.</p>
<p>Well. And instead of once again designing a custom protocol (like I did for the Perl server), I thought: Maybe give SOAP another shot.</p>
<p>In contrast to previous experience, this time it was the server that worked and the client that was failing. Using PEAR SOAP 0.9.1, creating the server (which generates the dreaded WSDL) went without a hitch. This time I was even able to import the WSDL into VS 2003, which I tried just for fun.</p>
<p>Passing around arrays of structs of structs was no problem at all. After building the <tt>self::$__typedef</tt> and <tt>self::$__dispatch_map</tt> arrays, passing around those data types has become really intuitive: Just create arrays of arrays in PHP and return them. No problem.</p>
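<p>(For illustration, this is roughly the shape those two maps take in a PEAR::SOAP server class. The type and method names here are made up, and the exact conventions vary between PEAR SOAP releases, so treat this as a sketch and check the documentation of your version.)</p>

```php
<?php
require_once 'SOAP/Server.php';

class ArticleService {
    // describe a struct type and an array-of-struct type for the WSDL
    var $__typedef = array(
        'Article' => array(
            'id'   => 'int',
            'name' => 'string'
        ),
        'ArticleArray' => array(array('item' => '{urn:ArticleService}Article'))
    );

    // map the method signature so the WSDL generator knows the in/out types
    var $__dispatch_map = array(
        'getArticles' => array(
            'in'  => array('groupId' => 'int'),
            'out' => array('articles' => '{urn:ArticleService}ArticleArray')
        )
    );

    function getArticles($groupId) {
        // plain nested PHP arrays map onto the declared types
        return array(
            array('id' => 1, 'name' => 'first article')
        );
    }
}

$server = new SOAP_Server();
$server->addObjectMap(new ArticleService(), 'urn:ArticleService');
$server->service(isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : '');
```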
<p>Well done, PEAR team!</p>
<p>This time I've had problems with the PEAR SOAP client. It insisted on passing around ints as strings, which the server (correctly) did not like.</p>
<p>Instead of using lots and lots of time debugging that, I went the pragmatic route and used PHP5's built-in <tt>SoapClient</tt> functionality. No problems there.</p>
<p>And then it suddenly broke.</p>
<p>My test client was written for the CLI version of PHP, which was version 5.0.4. The Apache module of the live server was 5.0.3.</p>
<p>All I got with 5.0.3 was a HTTP Client Error (SoapFault exception: [HTTP] Client Error).</p>
<p>Whatever I did, it did not go away, but to my delight I saw that PHP did not even connect to the server to fetch the WSDL. This was good, as I was able to debug much more quickly that way.</p>
<p>In the end it was the URL of the WSDL. Every version of PHP5 (even the 5.1 betas) - besides 5.0.4 - does not like this:</p>
<pre class="code">http://be.sen.work:5436/?wsdl</pre>
<p>it prefers this</p>
<pre class="code">http://be.sen.work:5436/index.php?wsdl</pre>
<p>I ask now: Why is it this way? The first version is a valid URL as well. The served WSDL is correct - it's the same file that gets called and it returns exactly the same content. This is so strange.</p>
<p>After all this, I have to say: SOAP with PHP - after two years - still is not ready for prime time. It's still in a state of "sometimes working - sometimes not". But as I now have an environment where it's known to be working, and as I'm in total control of said environment, I will go with SOAP nonetheless. It's so much cleaner (and more secure: more people than just me are looking at the SOAP code) than designing yet another protocol and server.</p>
<p>Oh. And the bottom line is: Never trust protocols that call themselves "simple" or "lightweight" ;-)</p>
Sorry. Connection's down2005-06-22T00:00:00+00:00http://pilif.github.com/2005/06/sorry-connections-down<p>We all know it: Network connections are unreliable. This is ok and I have no problem whatsoever with that. Connections can go down. Nothing serious, nothing special.</p>
<p>There are multiple ways how software can let you know that a connection dropped:</p>
<ul>
<li>Crash. This is the second worst way to handle it. At least the user knows what to do: Restart the application and it will (hopefully) work again.</li>
<li><em>Connection failed: Software caused the connection to abort</em>. Somewhat incorrect, too much information, a bit scary for the end user, but common for many Winsock applications as this is the default error message you can ask Windows to provide you with, given a specific error code</li>
<li><em>Sorry. The connection somehow went down. Should I try to connect again?</em>. Correct, not technical, not scary. This is how I try to explain it to my users.</li>
</ul>
<p>Well... and then there's the IBM DB2 client:</p>
<blockquote>
SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "3.134.144.87". Communication function detecting the error: "send". Protocol specific error code(s): "104", "*", "0". SQLSTATE=08001
</blockquote>
<p>What the hell?</p>
World of Warcraft: Language Packs2005-06-01T00:00:00+00:00http://pilif.github.com/2005/06/world-of-warcraft-language-packs<p>Well. <a href="http://www.gnegg.ch/archives/229-World-of-Warcraft.html">Back here</a>, I begged Blizzard to release a language pack for WoW, as I had real difficulties playing on an English server with my German version (a problem I later worked around with a semi-legal solution).</p>
<p>Today, they have released language packs called <a href="http://wow-europe.com/en/info/faq/elp.html">ELP</a> which do exactly what I asked for in my blog entry.</p>
<p>Now if the installation would not take that long, I'd happily remove my semi-legal setup and replace it with the original again.</p>
<p>Thank you so much for seeing and solving this problem, Blizzard!</p>
Firefo^WDeer Park Alpha 12005-06-01T00:00:00+00:00http://pilif.github.com/2005/06/firefowdeer-park-alpha-1<p>Yesterday, a developer preview of Firefox 1.1 was released. To not confuse end users, they've called it <a href="http://www.mozilla.org/projects/deerpark/releases/alpha1.html">Deer Park Alpha 1</a>. You won't see (m)any Firefox references in the UI.</p>
<p>As always with a major release, extensions and themes tend to break. And as always, you can try patching the install.rdf file inside the XPI file (it's just a zip archive) - change the MaxVersion - and see whether the extension still works. Here's what I got so far:</p>
<ul>
<li>Installing Deer Park Alpha 1 breaks the existing Firefox installation. You basically get an unstyled white screen when you start Firefox. This is not great, but unavoidable I suppose.</li>
<li>You can patch up the Qute-Theme and it mostly works (install it with <a href="http://www.winmatrix.com/forums/index.php?showtopic=2640">this script</a>). The preferences-screen looks funny though (it's mostly transparent). So if you don't change any preferences, you can go with qute.</li>
<li>The Web Developer toolbar continues to work without patching, though with limited functionality.</li>
<li>Download Manager Tweak works as always, though you can't access its preferences-screen from the preferences dialog (from the extensions window works fine though)</li>
<li>Feed Your Reader can be patched up. It does not work any more though</li>
<li>Greasemonkey can be patched up. It does not work though. It throws an error when trying to install a user script.</li>
<li>Platypus seems to work fine, though it's useless as Greasemonkey does not.</li>
<li>Adblock can be patched and <em>actually continues to work</em>.</li>
</ul>
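<p>(The MaxVersion patching described above can be scripted. A sketch, assuming GNU sed; the file name and the version strings are examples - use whatever version string the alpha actually reports.)</p>

```shell
# After extracting install.rdf from the XPI (an XPI is an ordinary zip
# archive: "unzip extension.xpi install.rdf"), bump the advertised
# maximum version and re-add the file with "zip extension.xpi install.rdf".
printf '<em:maxVersion>0.9+</em:maxVersion>\n' > install.rdf
sed -i 's#<em:maxVersion>[^<]*</em:maxVersion>#<em:maxVersion>1.0+</em:maxVersion>#' install.rdf
cat install.rdf
```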
<p>This scenario underlines the one problem I'm having with Firefox: They seem to be unable to provide a stable extensions API. On one hand this is a good thing: Cleaning up the API now and then helps keep the product clean and fast. On the other hand, this is bad for the end user. What do you do if your favourite plugin stops being developed and a new browser comes out? Either you don't use the plugin any more, or you stay with the old release of the browser (I'd do that if Adblock stopped working, for example).</p>
<p>But you can't stay on old versions. Sometime in the future, a security problem will show up. If you are unlucky enough, the older version is not supported any more. So the choice is: Not using the plugin or surfing with an insecure browser.</p>
<p>That's why I have so few extensions installed. Those I have are popular enough to give me some guarantee that they will be updated. Those that seem to come without such guarantees I won't install, so I don't get used to having them available.</p>
<p>This is not the best situation ever. The people at Mozilla should try to stabilize the API as soon as possible. And they should try to stay backward compatible for at least two major releases or so.</p>
<p>I will now go and look for the people responsible for all those extensions and will try to report my findings to them. And hope for the best.</p>
31337 OOP code?2005-05-27T00:00:00+00:00http://pilif.github.com/2005/05/31337-oop-code<p>In the current issue of <a href="http://www.phparch.com">php | architect</a>, there's an article about "enterprise ready" session management. While it provides a nice look at how to structure your application (besides the capital mistake of endorsing a multiple-entry application structure - but I'll save that for another post) and at some design patterns, I have one big objection to the article: It's basically saying that the $_SESSION facilities in PHP are not enterprise-ready. The article names three reasons:</p>
<ol>
<li>It is not OOP enough</li>
<li>The Session-ID is guessable</li>
<li>The storage location for the session-data does not work with load balancers</li>
</ol>
<p>The article then goes further and writes a complete replacement for PHP's session API.</p>
<p>Now, let's have a look at those points:</p>
<p>Point 3 is valid. If your load balancer cannot guarantee that each subsequent request from a user goes to the same server, /tmp is not a good place to store session data. What the article does not tell you is that most load balancers actually <b>do</b> make that guarantee. Reading the session data from a file, unserializing it, using it, serializing it, and storing it back to a file is probably faster than doing the same thing with a database. Maybe you should do some testing and then decide - at least when you have real enterprise-grade load balancers at your disposal.</p>
<p>Point 2 is also somewhat true, but the workaround provided by the article is not any better than what PHP already does. I especially dislike taking a hash of the first two octets of the IP address for protection against session spoofing. Hey: two octets of the IP range are not checked. That's 65,536 addresses. Say I want to spoof sessions on your site: instead of those 4 billion users I only have 65 thousand to try it with, and if even only 1% of the users in said range do some online financial transactions on your site, it's worth it for me. I just get an account at a particular ISP and try out my range.</p>
<p>It's unfair to say PHP's session ID generation is weak because it uses the system's time (amongst other things) and then create a replacement algorithm using the system's time (amongst other things).</p>
<p>The idea with the second ID is somewhat valid, but it does not protect at all against network-based attacks (listening on the network and sending a valid request).</p>
<p>My biggest concern - the one that actually made me write this - is point 1. Tell me: What's better at</p>
<pre class="code">
HTTPRequest::getSession()->getValue('gnegg');
</pre>
<p>than</p>
<pre class="code">
$_SESSION['gnegg'];
</pre>
<p>As I see it, the first version has three distinct disadvantages:</p>
<ul>
<li>Depending on the state of PHP's optimizer, this involves two function calls (in PHP userland code - and maybe countless others in the backend) per variable you query (and, with the proposed implementation, one additional database query(!)). Function calls are expensive, so this performs badly - not with two or three accesses, but certainly with 100 or 1000 per second</li>
<li>The second method is the one documented and endorsed by PHP. Any coder you will find will know what it means and how to work with it. Whenever you hire a new coder, he will immediately understand your session management code and will be able to concentrate on the business logic. The first method does not have this advantage. It's just another hurdle for the coder to clear before being able to be productive. A needless hurdle</li>
<li>It's more code. More to type. More work to do. Thus inefficient for your programmers.</li>
</ul>
<p>Saying the first one is better because it's more OOP is like saying "I am more 31337 than you because I'm using Windows", or "rogues in world of warcraft are more 31337 than warriors" or ... take your pick (a phrase involving vi and emacs springs to mind).</p>
<p>So. Of the three points the author of the article had to present, only one, maybe two are valid. Does this justify dumping the whole session management functionality in PHP? No, it does not. Dumping ready-to-use functionality is always bad. Especially if the functionality you want to dump is extensible (and thus fixable for your purpose).</p>
<p>The PHP session management can be customized! Just have <a href="http://php.benscom.com/manual/en/ref.session.php"> a look at the manual</a>. There is <tt>session.save_handler</tt>, <tt>session.serialize_handler</tt>. There's even <tt>session.entropy_file</tt></p>
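<p>(To illustrate that extension point: you can replace just the storage backend while keeping the documented <tt>$_SESSION</tt> interface, via <tt>session_set_save_handler()</tt>. A minimal sketch - the shared directory is hypothetical, and the same six hooks would work for a database backend:)</p>

```php
<?php
// Minimal file-based save handler. Point $dir at shared storage
// (or swap the function bodies for database calls) so sessions
// survive load balancing across servers.
$dir = '/shared/sessions';

function sess_open($save_path, $session_name) { return true; }
function sess_close() { return true; }

function sess_read($id) {
    global $dir;
    $file = "$dir/sess_$id";
    return file_exists($file) ? file_get_contents($file) : '';
}

function sess_write($id, $data) {
    global $dir;
    return file_put_contents("$dir/sess_$id", $data) !== false;
}

function sess_destroy($id) {
    global $dir;
    @unlink("$dir/sess_$id");
    return true;
}

function sess_gc($maxlifetime) {
    global $dir;
    foreach (glob("$dir/sess_*") as $file) {
        if (filemtime($file) + $maxlifetime < time()) @unlink($file);
    }
    return true;
}

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
session_start();

// the documented API keeps working unchanged
$_SESSION['gnegg'] = 42;
```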
<p>So after all, it's another of those people trying to be god-like by writing about <i>the enterprise</i> without really knowing what it means. The Java world is full of such individuals. And now PHP is getting them too. The price of being known? Maybe.</p>
The most pleasant installation experience2005-05-25T00:00:00+00:00http://pilif.github.com/2005/05/the-most-pleasant-installation-experience<p>The most pleasant experience I ever had when installing a web-based application was when I was installing <a href="http://gallery.sf.net">Gallery 2 Beta 1</a>. I've never seen such a polished assistant. I've never seen a web-based installer work as well as the Gallery one did.</p>
<p>While I was really, really happy with this, I have not blogged about it (shame on me).</p>
<p>But now that I have updated to Beta 3, this really, really is cause for a blog entry. </p>
<p>The update process uses the same assistant style as the installer and is just as pleasant and unproblematic as the installation process. Call your gallery, read, click "next", repeat. Done. Fast, pleasant and error-free.</p>
<p>Congratulations to the gallery team. You rock!</p>
<p>Oh, and the gallery is <a href="http://www.pilif.ch/gallery2/">here</a></p>
Lots of fun with OpenVPN2005-05-22T00:00:00+00:00http://pilif.github.com/2005/05/lots-of-fun-with-openvpn<p><a href="http://openvpn.net/">OpenVPN</a> may seem to you as being "just another VPN solution". And maybe you are right.</p>
<p>However, OpenVPN has some distinct advantages over other VPN solutions that make it quite interesting for deployment:</p>
<ul>
<li>NAT traversal. OpenVPN uses plain old UDP packets as its transport medium. Every NAT router in the world can forward them correctly out of the box. If not, create the usual port-forwarding rule and be done with it. If that fails too (however that might happen), use the TCP protocol.</li>
<li>Ease-of-use: Install, create two certificates, use the VPN. It's as easy as 1-2-3</li>
<li>Designed with small installations in mind. OpenVPN is not a big slow beast like IPSec for example. While it may not be as secure, it does not have all the problems associated with IPSec.</li>
<li>User space. OpenVPN runs completely in user space (while using the TUN device provided by the kernel). This way the installation is non-critical and requires no reboots. Updates in case of security problems do not require reboots either.</li>
</ul>
<p>So after this unexpected praise: What brings me to writing this posting?</p>
<p>Well. I've just deployed one of the coolest things on earth: Using OpenVPN, I have connected my home network to the network in the office. Both ends see each other and allow for direct connections. I'm not only able to print on the office's printers from home (which admittedly is as useless as it is cool), but I'm also able to - for example - stream music from home to the office over a secured channel. All using straight IP connections without any NAT trickery or other tricks.</p>
<p>Actually, not even one port is forwarded through my NAT gateway (a ZyAir B-2000 - as the AirPort base station does not allow for static routes (see below), I was forced to cross-grade).</p>
<p>I already had some of this functionality using my previously deployed PPTP-setup, though this had some disadvantages:</p>
<ul>
<li>Flaky support in Linux. Maintaining the beast across Windows and Mac versions was not easy, as something always broke with new versions.</li>
<li>Suboptimal security. You know: PPTP has flaws - quite like WEP. Though I've tried to work around them by using very very long passwords.</li>
<li>Suboptimal usability: When I wanted to connect to the office, I had to dial into the VPN, so user interaction was needed. Additionally, the default gateway was redirected (I could have turned that off), so all open TCP connections got disconnected when I dialled.</li>
</ul>
<p>My current solution does not have any of those problems (I don't know about the security of course - no one does; for now, OpenVPN is said to be secure): No dialling is required, and no problems with changing software versions are to be expected (as it runs on a dedicated router which I don't intend to change). The default gateway is not changed either, of course, so the usual internet connections go out directly. This way I'm unaffected by the office's suboptimal upstream of 65 KBytes/s (unless I use services from the office of course - but this is unavoidable).</p>
<p>So. What did I do?</p>
<p>At the very first, I had to recompile the kernel on the server side once, as I had not included TUN support when I created my <tt>.config</tt> last year. After this, <tt>emerge openvpn</tt> was all that was needed. I kept the default configuration file somewhat intact (install with the "examples" USE flag and use the example server.conf), but made some minor adjustments:
</p>
<pre class="code">
local x.x.x.x
push "route 192.168.2.0 255.255.255.0"
client-config-dir ccd
route 192.168.3.0 255.255.255.0
#push "redirect-gateway"
</pre>
<p>(just the changed lines)</p>
<p>and the /etc/openvpn/ccd/Philip_Hofstetter:</p>
<pre class="code">
iroute 192.168.3.0 255.255.255.0
</pre>
<p>Now, what does this configuration do?</p>
<ul>
<li>Bind to the external interface only. This is purely cosmetic</li>
<li>Push the route to the internal network to the client. Using the default configuration, all OpenVPN addresses are in the 10.8.0.0 network, which allows for nice firewall settings on the server side. The 192.168.2.0/24 network is our office network</li>
<li>Tell OpenVPN that there are some client-specific configuration options to reach the 192.168.3.0/24 net which is my home network</li>
<li>Comment out the option to let OpenVPN set the default gateway. We really don't want all the traffic in my home net going through the office</li>
</ul>
<p>Then we create this client configuration file. It's named after the CN you use in the SSL certificate, with spaces replaced by underscores. You can see the correct value by setting everything up and then connecting to the server while watching the logfile.</p>
<p>In the client specific configuration-file we confirm the additional route we want to create.</p>
<p>The configuration file on the client router is unchanged from the default.</p>
<p>The only thing you need now is the SSL certificates. Create one for the server and one more for each client. I won't go into this in this article as it's somewhat complicated in itself, but you'll find lots of guides out there.</p>
<p>I used our company's CA to create the certificates for both the server and the client.</p>
<p>After this, it's just a matter of <tt>/etc/init.d/openvpn start</tt> on both machines (the path to the certificates/keys in the configuration files must match your created files of course).</p>
<p>Just watch out for the routing: On the server I had to change nothing as the server was already entered as default gateway on all the clients in the office network.</p>
<p>In the client network I had to do some tweaking, as the default gateway was set to the Airport base station, which (understandably) knew nothing about the 192.168.2.0/24 network and so was unable to route those IP packets to the VPN gateway in the internal network (my Mac Mini).</p>
<p>Usually you solve that by installing a static route on your network's default gateway. Unfortunately, this is not possible on an Airport base station, a problem I solved by replacing it with a ZyAir B-2000 from ZyXEL, which does allow setting static routes.</p>
<p>On that new access-point I created a route equivalent to this unix-command:</p>
<pre class="code">route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.3.240</pre>
<p>Where 192.168.3.240 is the address of my Mac Mini on which OpenVPN was running as client.</p>
<p>Then I issued <tt>echo 1 &gt; /proc/sys/net/ipv4/ip_forward</tt> on the Mac Mini to allow the packets to be forwarded.</p>
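<p>That <tt>echo</tt> only lasts until the next reboot. On a Gentoo box like shion, forwarding can be made persistent; a minimal sketch:</p>

```
# one-shot equivalent of the echo above (as root):
#   sysctl -w net.ipv4.ip_forward=1
# persistent across reboots - add this line to /etc/sysctl.conf:
net.ipv4.ip_forward = 1
```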
<p>So whenever I send packets to one of the offices computers - let's say 192.168.2.98, this is what happens:</p>
<ol>
<li>The client uses its IP and netmask to determine that the packet cannot be delivered directly, so it sends it to the default gateway (my ZyAir)</li>
<li>The ZyAir consults its routing table for the route to 192.168.2.0/24 and finds 192.168.3.240 as the gateway for that network (any other address would have been routed through my cable modem)</li>
<li>192.168.3.240, shion, consults its own routing table, where OpenVPN has created a route through the VPN interface (10.8.0.x) to the 192.168.2.0/24 network, and delivers the packet there.</li>
<li>On the other end of the tunnel, the OpenVPN-Server delivers the packet to the destination server.</li>
</ol>
<p>The path of the reply-packets is the same - just from the bottom to the top.</p>
<p>After getting the routing the way I wanted it (verifiable by pinging between computers in both networks), the next step was pure cosmetics:</p>
<ul>
<li>Create an internal DNS-server. Use it as a slave for the office's DNS-server to allow for DNS-lookups to work without crossing the VPN each time</li>
<li>Use said DNS-server to create entries for the computers in my home network</li>
<li>Make the office DNS-server a slave for that home-zone (to reach my computers by name)</li>
</ul>
<p>All of this was most interesting to implement and went much more smoothly than anything else I've tried so far VPN-wise. Finally, I have the optimal solution concerning connectivity to my office.</p>
<p>And besides that: it was fun to implement. Just worthy of a "Supreme nerd" - the title I got <a href="http://www.nerdtests.com/ft_nq.php?im">here</a> for my 92 points.</p>
FreeNX2005-05-20T00:00:00+00:00http://pilif.github.com/2005/05/freenx<div class="floatimgauto">
<a href="http://www.gnegg.ch/archives/nx.png"><img alt="nx.png" src="http://www.gnegg.ch/archives/nx-thumb.png" width="150" height="116" /></a>
</div>
<p>FreeNX is the GPLed variant of <a href="http://www.nomachine.com/">NoMachine's</a> NX product.</p>
<p>While exporting X sessions has never been a problem, it was rather slow, especially on connections with limited bandwidth. NX tries to solve this with some tricks at the X11 protocol level, a small proxy server and a big local bitmap cache. They promise smoothly working X sessions even over a 56K modem.</p>
<p>Well. I have installed KDE and now FreeNX on my Mac Mini, which I bought for the sole purpose of being a little home server/VPN gateway. My <a href="http://www.gnegg.ch/archives/238-The-greatest-gadget-ever.html">NSLU2</a>, while being a really nice little thing, does not work with OpenVPN because its kernel lacks TUN support.</p>
<p>Installation was easy and flawless - aside from forcing me to forward port 5000 to the NATed Mac Mini, as the commercial (freeware) Windows client seems to have problems with the FreeNX server when tunneling the X session over ssh.</p>
<p>The client works very well too. And I can say: It's fast. Very, very fast.</p>
<p>Some more things to note about the screenshot:</p>
<ul>
<li>While I usually had the policy of naming servers after persons and locations from "Lord of the Rings", I somewhat ran out of names, so I began using names from RPGs. My Mac Mini is called Shion, after Shion Uzuki of Xenosaga.</li>
<li>I'm running <a href="http://www.gentoo.org">Gentoo</a>, of course.</li>
<li>Installing FreeNX is as easy as <tt>emerge nxserver-freenx</tt> on Gentoo.</li>
<li>The screenshot is of a session exported at 800x600 pixels. Using more pixels does not slow down the session significantly, but 800x600 was comfortable on my current display, leaving room for other things besides the session.</li>
</ul>
Very small file?2005-05-10T00:00:00+00:00http://pilif.github.com/2005/05/very-small-file<p>Can anyone please tell me what went wrong here?</p>
<p><img alt="longdl.png" src="http://www.gnegg.ch/archives/longdl.png" width="400" height="121" />
<br /></p>
Snom 1902005-04-27T00:00:00+00:00http://pilif.github.com/2005/04/snom-190<p>The <a href="http://www.snom.com/snom190_voip_phone.html">Snom 190</a> is a SIP hardware phone which I have ordered recently to continue my asterisk experiment.</p>
<p>Yesterday it arrived.</p>
<p>I have to say: I love that device. Contrary to those proprietary PBX phones, the Snom 190 is easy to use, provides a big heap of features (complete remote manageability, web interface, dialing via HTTP request (Outlook plugin - here I come)) and does not cost more than what the other companies ask for their lowest entry-level phones. The Snom even looks good!</p>
<p>Like many other devices today, the Snom 190 runs Linux (2.4), though this time I have not tried to hack it yet. All the sources including the development environment are available <a href="http://www.snom.com/snom_source.html">at snom's website</a>.</p>
<p>Contrary to the somewhat crappy ZyXEL 2000W, which I have tested too, the Snom 190 is ready for productive business use.</p>
<p>This makes implementing VoIP at our company seem more and more likely every day.</p>
The greatest gadget ever2005-04-17T00:00:00+00:00http://pilif.github.com/2005/04/the-greatest-gadget-ever<p>Recently I thought: "Well... having this <a href="http://www.gnegg.ch/archives/198-Pile-of-new-hardware.html">iMac as server</a> is all nice and well, but what about having all that in a more embedded fashion? What about not having to run this iMac all the time? After all, it is not always as silent as I would have wished it to be. And I really wanted to have something more 'hackish'."</p>
<p>So I went after the <a href="http://www.linksys.com/products/product.asp?grid=33&scid=35&prid=601" title="Linksys WRT54G product page">Linksys WRT54G</a>. There are two ROMs you can flash onto it: on one hand the more or less proprietary ROM by Sveasoft, on the other the ROM by <a href="http://www.openwrt.org">OpenWRT</a>, the latter being the only one that actually allows installing packages.</p>
<p>I bought myself one of those Linksys thingies and was less than pleased. The Sveasoft ROM worked well, adding some extended features to the device but not allowing me to install anything (or even change configuration files). OpenWRT fixed that read-only problem, but I could not get WPA to work.</p>
<p>After all, the device is of limited use as a home-server. The storage you have at your disposal is just too limited, so I went out to fix that problem.</p>
<p>The first thing that came to my mind was one of those "network harddrives" - the poor man's NAS.</p>
<p>I went to one of those big retailers and found the <a href="http://www.linksys.com/products/product.asp?grid=35&scid=43&prid=640" title="NSLU2 Product Page">Linksys NSLU2</a>, which exports externally attached USB drives via CIFS (or SMB or Samba or whatever you call it).</p>
<p>Before doing anything with the device - having in mind Linksys' relation to Linux - I googled around a bit and found <a href="http://www.nslu2-linux.org/">NSLU2 Linux</a>.</p>
<p>After getting it installed (the <a href="http://www.nslu2-linux.org/wiki/HowTo/ChangePasswordsFromTheCommandLine">root-password thing</a> was a bit tricky, but diligent RTFM helped here), I slowly got very, very impressed.</p>
<p>What you get is the usual stripped-down Linux distribution, but the root fs is writable, so you can change the configuration in place. You can then use the attached harddrive as storage for additional software, thus working around the single problem I had with the WRT54G: inextensibility.</p>
<p>After you install the basic distribution, there's little more than 1 MByte of free space left on the device's own flash ROM. But there's this script, <tt>unslug</tt>, that enables the drive plugged into the first USB port as storage for additional software. And additional software there's plenty of.</p>
<p>After installing the package <tt>unslug-feeds</tt> (with <tt>ipkg install unslug-feeds</tt>) you gain access to <a href="http://ipkg.nslu2-linux.org/feeds/unslung/native/">this repository</a> containing software like Apache, PHP, PostgreSQL, a BitTorrent client, CUPS, Perl (for Slimp3),... just about everything you need on a decent Linux distribution (and more, less-useful stuff like OpenLDAP). You even get <a href="http://www.asterisk.org">asterisk</a>, and there's a way to install additional USB drivers. If only AVM provided kernel modules for the ARM kernel running on the device - then the NSLU2 would be the smallest PBX on this planet.</p>
<p>The best thing is: while the Linksys firmware does not allow it, with the improved version you can plug a USB stick into the first USB port and use that as the target for additional software installations.</p>
<p>This allows for installing a complete Linux distribution on a device with no mechanical parts whatsoever. No PC you build yourself will ever be this silent. Neither is my iMac. Finally, a home server that makes no sound at all. This is great.</p>
<p>Because I have no USB-stick at hand, I have not run <tt>unslug</tt> yet, but I will tomorrow.</p>
<p>Then I'm going to plug my newly bought external 250GB harddisk into the second USB port and use it as storage for a BitTorrent client I'm eventually going to install on the USB stick - and for my MP3s, which a <a href="http://www.slimp3.com">Squeezebox</a> server installed on the USB stick will serve. So while I'm awake, I turn on the HDD to serve MP3s to the Squeezebox; when I go to sleep, I just turn the HD off, keeping the rest of the server running.</p>
<p>This little device is extremely great. I really, really like it so far and I can't wait to see it working at its full potential.</p>
<p>This is the best CHF 150.- I've ever spent in my whole life.</p>
Praise to ZSH2005-04-13T00:00:00+00:00http://pilif.github.com/2005/04/praise-to-zsh<p>Jochen Maes <a href="http://blog.sejo.be/archives/23-Z-shell.html">talks about</a> <a href="http://www.zsh.org">zsh</a> today. (I found that blog via <a href="http://planet.gentoo.org">planet.gentoo.org</a>)</p>
<p>
I wholeheartedly agree with Jochen here.
</p>
<p>
Finally someone else writing good stuff about zsh.
</p>
<p>
I've been using this shell since 2000, when I took my first serious steps with Unix. This has mainly three reasons:
</p>
<p>
One is the "User Friendly Users Guide" available <a href="http://zsh.sunsite.dk/Guide/">here</a>. Besides being an excellent introduction to zsh, it is one to Unix shells in general. When you learn Unix shells using this guide, you'll somewhat automatically stay with zsh.
</p>
<p>
The second reason is the great flexibility and extensibility. Zsh had a programmable completion feature long before bash did (or at least long before it was generally known) and, even better, it came with completion functions already enabled for some tools (like tar or even scp). Programmable completion lets you create specialized completions depending on the context in which you hit <tt>tab</tt>.</p>
<p>So let's say if you are beginning to type</p>
<pre class="code">
$ scp gnegg.dat pilif@server.example.com:~/gn
</pre>
<p>and then hit <tt>tab</tt>, zsh will actually autocomplete on the remote server(!) and create</p>
<pre class="code">
$ scp gnegg.dat pilif@server.example.com:~/gnegg
</pre>
<p>for you (assuming that directory exists)</p>
<p>The same goes for tar (even with .gz- or .bz2-compressed archives). Or cvs or svn.</p>
<p>While Gentoo provides <tt>bash-completion-config</tt>, which does the same for bash, zsh was there first. And it provides many sensible completions.</p>
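<p>For completeness: the completion system has to be initialized before any of this works. A minimal <tt>~/.zshrc</tt> sketch:</p>

```
# ~/.zshrc - load and initialize zsh's completion system
autoload -Uz compinit
compinit
# after this, the bundled completion functions (scp, tar, cvs, ...)
# are active; custom ones can be registered with compdef
```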
<p>The third reason for me going with zsh is the syntax of its shell scripts, which can be configured to be much more intuitive to a C programmer than the default syntax, while still being more like ksh/bash than (t)csh.</p>
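<p>As one illustration of that C-friendliness: zsh accepts C-style arithmetic for-loops (later bash versions do too), so a loop like this runs as-is:</p>

```shell
# C-style loop syntax, familiar to any C programmer
for (( i = 1; i <= 3; i++ )); do
  echo "iteration $i"
done
```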
<p>So for me, switching from bash to zsh was a no-brainer back in 2000. And as with text editors: once you use a certain tool, you will not change it afterwards.</p>
<p>I strongly recommend you to take a look at zsh too.</p>
Asterisk - it's getting real2005-04-05T00:00:00+00:00http://pilif.github.com/2005/04/asterisk-its-getting-real<p><a href="http://www.gnegg.ch/archives/235-Fun-with-VoIP.html">Last week</a> I talked about me and Christoph installing <a href="http://www.asterisk.org">Asterisk</a> on my thinkpad to do a little VoIP-Experiment.</p>
<p>While we were able to create a should-be-working configuration, actually calling to the outside PSTN network did not work. Read the details in my other article.</p>
<p>Last saturday, we fixed that.</p>
<p>There seems to be a problem somewhere between the AVM CAPI driver and the CAPI layer of the 2.6.11 kernel. After we downgraded to 2.6.10, the problem solved itself without us doing anything more.</p>
<p>So... this was getting interesting...</p>
<p>The first thing I did was to annoy my wonderful girlfriend:</p>
<div><code>
exten => s,1,Wait,1 ; Wait a second, just for fun
exten => s,2,Answer
exten => s,3,MP3Player(/home/pilif/mp3/3.mp3)
</code></div>
<p>(included into or used as the default context)</p>
<p>Where 3.mp3 is that endlessly stupid song "Tell me" (or whatever it's called) by Britney Spears (this is an insider joke - both of us just hate that song). Then I told her to call that number...</p>
<p>While this example is completely pointless, it was fun to watch my girlfriend connect and listen to the song (which soon ended in a disconnection log entry).</p>
<div><code>
exten => s,1,Wait,1
exten => s,2,Dial(SIP/12345,60,tr)
exten => s,3,Congestion
</code></div>
<p>This makes much more sense and directs all incoming calls to the SIP phone 12345 as configured in sip.conf. After 60 seconds it sends back a congestion signal. The first entry would not be necessary, but I hate it when I call somewhere and the phone is answered on the very first ring. So on my PBX, the answering party waits one second before the call is directed to the SIP phone.</p>
<p>In <tt>musiconhold.conf</tt> I've configured madplay as my MP3-Player for music on hold:</p>
<div><code>
default => custom:/home/pilif/mp3/,/usr/bin/madplay --mono -R 8000 --output=raw:-
</code></div>
<p>madplay is much better than mpg123 (used by default) as it accepts VBR-encoded input and bitrates &gt; 128 kbit/s, which is what nearly all of my MP3s are encoded with.</p>
<p>In <tt>zapata.conf</tt>, enable music on hold with <tt>musiconhold=default</tt> in the <tt>[channels]</tt> section.</p>
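<p>In config form, assuming the stock section name, that's:</p>

```
; /etc/asterisk/zapata.conf - use the MOH class defined above
[channels]
musiconhold=default
```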
<p>The next thing was an optimization of the SIP-Phone used...</p>
<p>X-Lite is nice, but in the end it's just a demo for other products by the same vendor. Call transfers are not possible, for example, which is what we wanted to try next.</p>
<p>The best soft phone we've seen so far is <a href="http://www.sjlabs.com/">SJPhone</a>. A configuration guide is <a href="http://www.jimradford.com/asterisk/sjphone/">here</a>.</p>
<p>But the real highlight is the <a href="http://www.zyxel.co.uk/Products.32+B6JnR4X1p5WEVMcHJvZHVjdHNfcGkxW3Nob3dVaWRdPTEzJmNIYXNoPTA4NWJjMjdlOTM_.0.html">ZyXEL 2000W</a> phone currently on my desk: it has a WLAN interface (unfortunately no WPA support) and can talk to asterisk perfectly well.</p>
<p>The phone has some problems though: it's slow, it supports neither call transfer nor hold, and nearly every configuration change causes it to reboot. In the end I really hope ZyXEL will further improve the firmware, which is what they seem to be doing - the current release is from the end of February, so quite recent.</p>
<p>The next step will be trying to install a web-based frontend to asterisk and creating a real dialplan with voicemail. Then our experiment will be over and we'll see how it can be put to practical use (like finally getting rid of our landlords' old, proprietary Alcatel PBX).</p>
Fun with VoIP2005-03-29T00:00:00+00:00http://pilif.github.com/2005/03/fun-with-voip<p>When I read for the n-th time about <a href="http://www.asterisk.org">Asterisk</a>, an open-source PBX solution, I decided to team up with Christoph and tame the beast.</p>
<p>I have actually two problems with asterisk as it stands now:</p>
<ol>
<li>There's not much really useful newbie documentation, and few tutorials. There are some sample configurations, but they are not very useful because...</li>
<li>the tool has an incredibly opaque and hard-to-understand syntax for its main configuration file (<tt>extensions.conf</tt>). It's just like with sendmail: many extremely low-level things to take care of to get complex high-level results.</li>
</ol>
<p>I figured that, teamed up with Christoph, we'd be more likely to see some results.</p>
<p>The first thing was defining the parameters of our experiment. Here's what we wanted to do:</p>
<ul>
<li>Act as a SIP-Proxy, so two softphones (we did not want to buy too much actual hardware yet) could talk to each other.</li>
<li>Provide a gateway to the ISDN-Network, so both SIP-Phones can dial out to the rest of the world.</li>
<li>The same gateway should be able to receive incoming calls and direct them to one of the Phones (just one for now).</li>
</ul>
<p>In the next session we want more advanced features, like voicemail and hold music. A third session should provide us with a web-based frontend (I know some exist). But for now, we wanted to concentrate on the basics.</p>
<p>The next step was to get the required hardware. I already have <a href="http://www.gentoo.org">Gentoo</a> running on my Thinkpad, so that was a good base. Furthermore, we needed an ISDN solution supported by Asterisk. As we had a plain old BRI interface and a very limited budget (it was just an experiment, after all), we went with the <a href="http://www.avm.de/en/index.php3?Produkte/FRITZ/FRITZ_Card_USB/index.js.html">Fritz Card USB</a> by AVM, which has Linux CAPI drivers, albeit only binary ones (we could also have used the PCMCIA version, but that one is three times as expensive as the USB one).</p>
<p>Said piece of hardware proved to be a real pearl: it's very compact, does not need a power adapter and was very easy to install under Linux. I would not use it for a real-world solution (which would most likely require PRI support and absolutely require open-source drivers), but for our test it was very, very nice.</p>
<p>Installing the needed software is where Gentoo really shined, as everything we needed was already in the distribution: after hooking up all the stuff, we emerged <tt>net-dialup/fritzcapi</tt>, <tt>net-misc/asterisk</tt> and <tt>net-misc/asterisk-chan_capi</tt>, which pulled in some more dependencies.</p>
<p>The next step is to reconfigure the kernel for the CAPI stuff to work. Just include everything you find under "Device Drivers / ISDN Support / CAPI" - even the one option marked as experimental (CAPIFS is needed and only available when enabling "CAPI2.0 Middleware support").</p>
<p>Then, we made sure that CAPI (a common ISDN access API) was running by issuing <tt>capiinit start</tt>.</p>
<p>Then we went on to asterisk.</p>
<p>The first thing you have to do is set up the phones you're using. As we worked with SIP phones, we used <tt>sip.conf</tt>:</p>
<pre class="code">
[general]
port = 5060
bindaddr = 0.0.0.0
tos = none
realm = sen.work
srvlookup = yes
[12345]
context = theflintstones
dtmfmode = rfc2833
disallow = all
allow = gsm
callerid = "Fred Flintstone" <12345>
secret = blah
auth = md5
host = dynamic
reinvite = no
canreinvite = no
nat = no
qualify = 1000
type = friend
[12346]
accountcode = 12346
dtmfmode = rfc2833
host = dynamic
auth = md5
secret = blah
canreinvite = no
context = theflintstones
qualify = 2000
type = friend
disallow = all
allow = gsm
</pre>
<p>This worked with our two test-phones running <a href="http://www.xten.com/index.php?menu=products&smenu=download">X-Lite</a></p>
<p>Interesting are the following settings:</p>
<table border="0">
<tr>
<td><b>realm</b></td><td>The realm. I used our internal domain here. The default is asterisk. Your VoIP-Address will be identifier@[realm].</td>
</tr>
<tr>
<td><b>accountcode</b></td><td>This is the username you're going to use on the phone</td>
</tr>
<tr>
<td><b>context</b></td><td>The context will be used when we create the dial plan in the feared <tt>extension.conf</tt></td>
</tr>
</table>
<p>Then, we configured CAPI in <tt>capi.conf</tt></p>
<pre class="code">
[general]
nationalprefix=0
internationalprefix=00
rxgain=0.8
txgain=0.8
[interfaces]
msn=44260XXXX
incomingmsn=*
controller=1
softdtmf=1
accountcode=
context=demo
devices=2
</pre>
<p>Those settings are said to work in Switzerland. Interesting is the setting for <tt>msn</tt>: this is where you enter the MSNs (phone numbers) assigned to your NT. I X-ed part of it out. Just don't use any leading zeroes in most countries. You can enter up to five, using commas as separators.</p>
<p>The next thing is to update <tt>modules.conf</tt>. In the <tt>[modules]</tt>-Section, add <tt>load => chan_capi.so</tt>, in the <tt>[global]</tt>-section, add <tt>chan_capi.so=yes</tt>.</p>
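<p>Spelled out, the relevant parts of <tt>modules.conf</tt> look like this (only the lines to add are shown):</p>

```
; /etc/asterisk/modules.conf
[modules]
load => chan_capi.so

[global]
chan_capi.so=yes
```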
<p>Without those entries, asterisk will complain about unresolved symbols when loading the CAPI module and will finally terminate with a "broken pipe" error. Trust us. We tried. ;-)</p>
<p>The best thing now is that you can already test your setup so far. Launch asterisk with <tt>asterisk -vvvvvc</tt> (each v adds a bit of verbosity, while -c tells it not to detach from the console). If it works well, you'll end up at a console. If not, make sure that capiinit did not report any errors and that you've really added those lines to modules.conf.</p>
<p>Now for the fun of it, call one of your MSNs with any phone.</p>
<p>Asterisk should answer and provide you with a demo-menu</p>
<p>The next step is configuring <tt>extensions.conf</tt>. This is somewhat complex and I will go into more detail as soon as I've figured out what's wrong with our test configuration. We've added this to the end:</p>
<pre class="code">
[ch-fest-netz]
exten => _0[1-9].,1,Dial(CAPI/44260XXXX:b${EXTEN},30)
exten => _0[1-9].,2,Hangup
[theflintstones]
include => ch-fest-netz
</pre>
<p>Just make sure you enter one of the MSNs you have configured in <tt>capi.conf</tt>.</p>
<p>Now what this configuration <em>should</em> do is to allow those SIP-phones (recognize the "context" we used in sip.conf?) to dial out via CAPI.</p>
<p>You best learn how to configure this beast by calling the demo voicebox and then comparing Asterisk's log output with the entries in extensions.conf. Basically, <tt>exten =&gt;</tt> defines a dial-plan entry. First comes the pattern of dialed numbers to match, then a (BASIC-like) sequence number, followed by the action to execute.</p>
<p>The format of the number pattern is explained in one of the comments in extensions.conf.</p>
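<p>Applying that reading to the first line of the context above (the MSN placeholder comes from <tt>capi.conf</tt>):</p>

```
exten => _0[1-9].,1,Dial(CAPI/44260XXXX:b${EXTEN},30)
;        |        |  `- action: dial out via CAPI with a 30-second
;        |        |     timeout; ${EXTEN} is the number as dialed
;        |        `---- sequence number: first step for this pattern
;        `------------- pattern: "_" marks a pattern (not a literal
;                       number), "0" a literal zero, "[1-9]" one digit
;                       from that range, "." any further digits
```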
<p>Now, this configuration does not work for us: when I dial on the SIP phone, Asterisk notices it and actually connects the ISDN line (the target phone rings), but does not seem to notice when the target phone is answered.</p>
<p>If I answer the phone, it's just silence in the line. The SIP-phone is still in the "trying to connect"-state.</p>
<p>This stays that way until I cancel the dial attempt on the SIP phone. After that, asterisk prints more log entries - one of them the notice that the connection was successfully established.</p>
<p>A question on the mailing list was promptly answered: my configuration is correct, but maybe I'm running into a bug in kernel 2.6.11. I was told to downgrade to 2.6.10, which is what I'm going to do next.</p>
<p>After this, I will extend the dial plan so I can call the internal SIP phones both from another softphone and from a real phone over ISDN.</p>
<p>It's hacky, it's just somewhat working, but it's a lot of fun!</p>
<p>I'll keep you updated.</p>
World Of Warcraft - A little Newbie-Guide2005-03-26T00:00:00+00:00http://pilif.github.com/2005/03/world-of-warcraft-a-little-newbie-guide<p>I just had three of the most pleasant hours I've ever had with gaming. As you can imagine, the game was World of Warcraft (I hereby promise not to post any more WoW-related stuff in the near future, but bear with me one last time ;-)</p>
<p>I'm playing as a human mage and I've now reached level 17 (looking forward to 18 to get more spells)</p>
<p>For some time I had problems getting along, but it's really better now, so I thought I'd give you some advice if you too play a human mage:</p>
<ul>
<li>When you're first sent to Westfall, you may be completely under-leveled. It began being fun for me at about level 15, but when you get there you're usually at 9 to 11. You can do two things to remedy that:<ul><li>Join groups (use the <tt>/4</tt> chat command). As a group you're so much more efficient</li><li>Use the underground railway in Stormwind (it's in the Dwarven District) to go to the dwarven capital Ironforge, and from there take some quests outside and in Loch Modan (east of the region around Ironforge). Those are easy for you to do and the scenery is much nicer than in Westfall</li></ul></li>
<li>Never hesitate to talk to people. So far, I never had problems getting along with other players. Don't be afraid and talk to them. You have so much better chance of accomplishing something if you work in groups.</li>
<li>Try to meet with people you've already met. Once you know them better, it becomes even more fun</li>
<li>As a mage, never... I repeat... never try to attack a group of enemies. Wait till they separate, or sheep them and attack just one.</li>
</ul>
<p>I really think the balance of difficulty is way off in Westfall, and maybe the folks at Blizzard will fix that in the future. Until then, you will have much fun in the dwarven lands. Return to Westfall after reaching level 15 or so and do the easier quests first. Talk to people. You'll see: it will get fun. Much fun.</p>
World Of Warcraft Patch2005-03-25T00:00:00+00:00http://pilif.github.com/2005/03/world-of-warcraft-patch<p>Today, when I wanted to login with my <a href="http://www.gnegg.ch/archives/230-WoW-Language-Hacking.html">somewhat tweaked</a> installation of WoW, I was greeted with an error message telling me something about not being able to verify my version.</p>
<p>This was fixed by temporarily using the US login-servers so that the new patch could be installed.</p>
<p>During installation of said patch I found this note here:</p>
<blockquote>
- Reduced the respawn rate of the troggs on the islands in Loch Modan.
</blockquote>
<p>This is nice - just yesterday I had some serious problems with those troggs there. Too bad the patch was released only today, now that I don't have to go back there.</p>
QTek S1002005-03-15T00:00:00+00:00http://pilif.github.com/2005/03/qtek-s100<p>I have been <a href="http://www.gnegg.ch/archives/15-Another-day-full-of-fun-with-hard-and-software.html">talking</a> <a href="http://www.gnegg.ch/archives/27-The-13-most-annoying-things-of-the-P800-phone.html">about</a> <a href="http://www.gnegg.ch/archives/83-T610Z600,-Outlook,-MobileAgent-and-Bluetooth.html">mobile</a> <a href="http://www.gnegg.ch/archives/48-iSync-1.1-but-I-will-not-need-it.html">phones</a> <a href="http://www.gnegg.ch/archives/148-SonyEricsson,-IMAP,-Exchange.html">quite a lot</a> on this blog.</p>
<p>I've always been on the lookout for the optimal phone for my needs, which I finally thought I had found in the combination of the SonyEricsson K700i and the HP iPAQ hx4700. I used the phone (good usability, small size) for communication and the iPAQ for email and the PIM applications. The combination bore the risk of not having my PIM data ready when I needed it, but all other smartphone offerings out there were either too heavy, too user-unfriendly, too large or just too limited in their feature set.</p>
<p>However, last week the joystick of my K700 stopped working completely (I've never met a single person whose joystick wasn't broken after about a year or so), so I needed a replacement.</p>
<p>SonyEricsson does not have any new devices to offer (the next one being the K750i, released in Q2 - about June or July, I suppose), so I was on the lookout for something different.</p>
<p>Then I found the QTek S100, quite by accident. You may know the device, produced by <a href="http://www.htc.com.tw/">HTC</a> (where it's called "Magician"), under the name <a href="http://www.clubimate.com">JAM</a> by i-Mate (or as SPV if you're a customer of Orange, or even MDA compact at T-Mobile - it's always the same device).</p>
<p>Size-wise it's a bit thinner than the K700, has the same height, but is a bit wider. It runs Windows Mobile 2003 Phone Edition, so it can be natively integrated into our Exchange environment. All known PocketPC software runs on it and it's even powerful enough for watching videos (only at 320x240 pixels - the device has no VGA screen). It has an SDIO-capable SD slot, so I could use that for WLAN, which the device unfortunately does not have built in.</p>
<p>It comes with Bluetooth support, which I've already used both for dialing into the internet and for synchronizing with the PC.</p>
<p>I'm told that the MS-Stack is a bit limited, but it fits my needs.</p>
<p>The sound quality isn't as good as the K700's, but far better than I feared it would be.</p>
<p>Usability-wise, this is the first smartphone that really works for me. I'm as fast with the QTek as I am with my K700. As I'm already used to the PocketPC's letter recognition, I'm quite fast at writing SMS too, though the device does have a special input panel with T9 support.</p>
<p>What surprises me the most (and actually led me to write this article) is the battery lifetime: it's now 5 days since I last charged it and it's still 45% full. That's already longer than what my K700 managed when it was completely new. I did not think it would last longer than 2 days at most...</p>
<p>Additionally, as it's a real PocketPC, you will have the device connected to your PC when you are in your office, so it gets charged automatically during the day and battery lifetime is even less of an issue.</p>
<p>For me, the QTek is a great device. Nearly the optimal phone (which I still have not found). The only things I'm missing are (in no particular order):</p>
<ul>
<li>A standard 3.5mm headphone connector. The S100 has a smaller 2.5mm connector, which doesn't allow me to plug in my headphones and use the phone as an MP3 player. I know adapters exist, but it would have been nice if it had the right connector in the first place.</li>
<li>A VGA screen. This is unrealistic for this small screen size, but whatever...</li>
<li>WLAN-Support. Public WLANs are getting more and more common here. It would have been nice to connect to those.</li>
<li>A real docking station. Currently they provide only a USB cable; a real docking station would have been a nice thing to have</li>
<li>A real keypad. While the soft keyboard is nice, an extendable hardware keypad has the advantage of being usable even without looking at the device.</li>
</ul>
<p>That's all. Small things. Not nearly as annoying as the <a href="http://www.gnegg.ch/archives/27-The-13-most-annoying-things-of-the-P800-phone.html">problems I found in the P800</a>.</p>
<p>So if you ask me what phone you should buy: For now it's clearly one of those HTC Magician based phones as it combines the power of the smartphone and the known user interface of the PocketPC with the small size and battery power of a regular cellphone.</p>
Hacking Hiltl2005-03-04T00:00:00+00:00http://pilif.github.com/2005/03/hacking-hiltl<p>The <a href="http://www.hiltl.ch">Hiltl</a> is an excellent vegetarian restaurant in the middle of Zürich. I eat there quite often because the food's great, the waiters are friendly and they always have space for you despite being constantly full of guests (others seem to think the same).</p>
<p>What's interesting from a technical point of view is their ordering system: all waiters are equipped with a Windows CE device by Symbol and use WLAN to communicate with a central server (two actually, but more on that later) to process your order, send it to the kitchen and finally print out the receipt for you.</p>
<p>What's even more impressive is the seemingly perfect user interface: the waiters are actually faster with those things than they'd ever be the old-fashioned paper way. Even if you have special wishes, they can enter them efficiently.</p>
<p>The only time paper is involved is when they print your receipt. The system automatically selects the nearest printer.</p>
<p>This is one of the secrets behind the efficiency of the Hiltl, allowing for an incredible throughput of guests while still giving them all the time they need to eat and chat. A table is ready for the next guests only about one minute after the previous ones have left.</p>
<p>The restaurant is divided into two floors. Each has a master waiter who has control over all the tables; they communicate via radio.</p>
<p>So you see: This is <b>the</b> restaurant for a geek to visit: Good food and good tech in one.</p>
<p>Now, the Zyxel access point they had mounted to the roof of the restaurant made me somewhat curious. I mean: it's WLAN after all. And I know the devices they are using - I wrote some lines of code for them too. So, maybe I could get some insight, I thought.</p>
<p>Armed with a notebook and the right software, me and Christoph took our meal in the Hiltl today.</p>
<p>The bad thing first: they don't even use WEP for their network. They just created an empty SSID and don't even hide it. So we did not need any WEP cracking equipment.</p>
<p>The devices communicate via SOAP over HTTP on a non-standard port. Additionally, the server often pings the known clients to check if they are still there. Then there's a misconfigured router sending out IPv6 packets which are not used in any way. Oh and a Win9x-machine is there too, announcing itself as a network browser.</p>
<p>There are two servers: one for ordering, the other for printing.</p>
<p>Unfortunately, the SOAP messages (especially those to the ordering system) contain a lot of binary data, so there's not much one can do without isolating one device and performing some known steps on it.</p>
<p>Also, our equipment was not up and running until after our order had been taken, so I don't even have a reference point.</p>
<p>The printing, though, uses clear-text XML parameters. I think I would be able to print some funny messages on all of those printers.</p>
<p>As I see it, no authentication whatsoever takes place - besides a hard-coded registration of the devices' IP addresses. ARP spoofing could get around that, though.</p>
<p>Now... what do I want to say with this? I'm certainly <b>not</b> going to attack them, as I <b>really, really</b> like their food and want to return there often for my nutritional needs. And it's a matter of honor: they are so progressive and efficient that I just can't punish them for their (quite obvious) security problem.</p>
<p>Still, for educational purposes, this little experiment was very useful. Maybe, another day, I will even try to decode those binary parameters - just to know how it <b>would</b> work, not to hack myself a cheaper meal or so ;-)</p>
<p>One last thing for this posting: I kindly ask you to do the same as I do. Don't crack the network there; go there to eat. It's really worth it.</p>
WoW: Language Hacking2005-02-21T00:00:00+00:00http://pilif.github.com/2005/02/wow-language-hacking<p>
As I explained in my <a href="http://www.gnegg.ch/archives/229-World-of-Warcraft.html">previous posting</a>, I very much like to play World of Warcraft in the english version.
</p>
<p>Now I got my hands on the US-version and installed it (after uninstalling the german version).</p>
<p>The problem came after patching to the current version: my account was not recognized anymore - no wonder: the game was connecting to the US servers while my account is on the European ones.</p>
<p>A bit of searching for worldofwarcraft.com in the game's directory revealed the string <tt>set realmlist [something]</tt> in <tt>base.mpq</tt></p>
<p>As always, google was my friend and showed me the solution: Add</p>
<pre class="code">SET realmlist "eu1.wow.battle.net"</pre>
<p>to the file <tt>config.wtf</tt> in the directory <tt>WTF</tt> of your WoW installation.</p>
<p>This lets you login to the european servers where your account is recognized.</p>
<p>Works well (at least until the next patch is released ;-)</p>
<p><b>Update:</b> if you have a file called <tt>realmlist.wtf</tt> in the main installation directory, change that one, not the <tt>config.wtf</tt> as it will get overwritten on every launch. And additionally, you should set the server to <tt>eu.logon.worldofwarcraft.com</tt> instead - the older one was for the beta.</p>
World of Warcraft2005-02-21T00:00:00+00:00http://pilif.github.com/2005/02/world-of-warcraft<p>For the last three years or so, I was constantly thinking about those online RPGs, but the high amount of micro-management you had to do, the steep learning curve, the newbie-killers and all those other factors led me to ask myself: "Why spend money on that kind of dubious entertainment?"</p>
<p>Then I read many good things about Blizzard's <a href="http://www.worldofwarcraft.com">World of Warcraft</a>: it was said to have a nice learning curve, little micro-management and to be entertainment-centered - now we were talking...</p>
<p>So I went ahead and bought it last Tuesday.</p>
<p>While there were some problems at first when I tried to create my character (Blizzard was quite overrun by the many people trying it out here in Europe), they were solved the same day and since then I had no problems with long waiting lists or disconnects. So from a technical point of view, it's very satisfying.</p>
<p>And then there's the gameplay of course.</p>
<p>This is very well done: there are many small things where the designers have tried to avoid the problems other MMORPGs seem to have. There's no doing senseless, stupid jobs with your alter ego just to earn money (you earn money by beating quests, which are somewhat Diablo-like). There's the concept of getting double experience points when you log in after a longer pause. And if you don't want to play in a designated player-vs-player area, it's immensely difficult to be slaughtered by another player - if you get killed by another player, it's entirely your own fault. Besides: other players cannot steal your inventory.</p>
<p>While the game provides an incredible amount of options how to progress your character, it introduces them nice and slowly. I'm still quite the newbie (playing about 2 hours per day I'm now at level 9) and I never felt overwhelmed. Very nice.</p>
<p>The most interesting experience I've had so far was yesterday when I was having problems concluding a certain quest alone: The boar I had to kill was just too strong for me.</p>
<p>So I did the logical thing: I went to the nearest tavern and asked around whether someone was willing to kill that beast with me. I soon found someone and we succeeded. This is what I expect from an MMORPG - not forging horseshoes and selling them for much too little money because of eBay-caused inflation, each horseshoe taking about 1000 senseless clicks to build.</p>
<p>So WoW is definitely getting my $11 monthly after my one month trial runs out.</p>
<p>Oh, there's one thing though: here in Switzerland, you only get the German version of the game. This is very unfortunate for me as I prefer playing in English realms. It's quite difficult to talk about something with another player if I just have my own translation of the German name instead of what's on the screen of the other players.</p>
<p>This is partly my own fault - I could play on a German realm - but partly Blizzard's too: here in Switzerland, many of us are used to reading and understanding English - all the movies are shown in the original language (mostly English) with subtitles, for example. I think many of us would really prefer an English version of the game.</p>
<p>I for myself will probably do as I always do: use the CD key of the German original with an English copy I get via other channels. This is not particularly legal, but not that illegal either, I think.</p>
<p>Please, Blizzard, if you hear me: provide us Swiss with an English version of your games in the future.</p>
Check for update2005-02-18T00:00:00+00:00http://pilif.github.com/2005/02/check-for-update<p>I've seen many different pieces of software.</p>
<p>Many of them provide the user with a way to go online and check for new versions of the program.</p>
<p>Nearly all of them have the corresponding menu entry in the "Help"-Menu.</p>
<p>Why is that so? Checking for updates does not provide you with help. Maybe, just maybe, it can fix a problem you are having - but it's nowhere near providing help.</p>
<p>If I wrote software, it would have this option in the Tools menu or - if the application had none - in the File menu, though it's misplaced even there. As is "Quit", for example...</p>
Security Tools2005-02-10T00:00:00+00:00http://pilif.github.com/2005/02/security-tools<p>There was <a href="http://www.zdnet.com.au/news/security/0,2000061744,39180674,00.htm">this security announcement</a> today: another time a Symantec product does not do what it's supposed to and actually executes <a href="http://upx.sourceforge.net/">UPX</a>-packaged .EXE files to find out whether they contain malicious code or not.</p>
<p>This is certainly not the best way to accomplish that...</p>
<p>So this is another reason why I'm no fan of security software in place of user education (and regular flaw-patching): such software creates a false sense of security ("should I click here? Oh well... I have my NAV running, so nothing's going to happen") and may even open bigger holes when it is itself insecure.</p>
<p>As it stands now, an educated user without NAV who receives an email with a prepared UPX-packaged .exe will just delete the file and be happy.</p>
<p>An educated user <b>with</b> NAV will delete the file too, but before he can, NAV will have scanned the email and thus <em>executed the malware</em>. This is a case where the infection comes from the software supposed to be preventing it.</p>
<p>It's just like with firewalls: why install a packet filter to drop unwanted packets to open ports when you can close the ports in the first place?</p>
<p>Security is (mostly) a social matter (not counting exploits, which must/can be prevented by updating the affected software) and is best achieved with social skills, not software barriers - software has flaws, while education at least has a chance of achieving its goals.</p>
<p>So I'm not bashing Symantec (for once), but security-software as such.</p>
AWStats2005-02-09T00:00:00+00:00http://pilif.github.com/2005/02/awstats<p>For the last five years or so, I've been using <a href="http://www.modlogan.org/">ModLogAn</a> for my/our web analyzing needs: the tool is fast and much more powerful than Webalizer, which I was using before modlogan.</p>
<p>Getting it to run was a bit difficult at first (requiring a hacked GD library and all that), but this gradually got better. Since then the tool does a wonderful job (except one broken release about three years ago).</p>
<p>With all this buzz about the phpBB.com incident which happened because of a hole in <a href="http://awstats.sourceforge.net/">AWStats</a>, I wanted to give said tool (in a fixed version - of course) a shot.</p>
<p>The gentoo ebuild is tightly integrated into <tt>webapp-config</tt> which I've not used before, so the installation was somewhat difficult for me, but some symlinks here and there soon brought me a working setup.</p>
<p>
I must say that I'm impressed by the tool's capabilities: it's quite fast (not as fast as modlogan, but fast enough), its CGI user interface benefits from its dynamic nature (filtering long lists in realtime, for example), the plugins provided with it are very cool (geoip, whois, ...) and as soon as one understands how it ticks, it's really easy to configure and manage.
</p>
<p>Also useful for some people is its ability to update the statistics in realtime by analyzing the current rotation of the logfile - another thing modlogan isn't capable of.</p>
<p>And finally there are the looks - as always. AWStats looks much more pleasant than modlogan does (even when using the template plugin, which has the nicest look of them all).</p>
<p>I've not decided yet whether I should replace the currently well-working modlogan setup or not, but I've certainly analyzed the whole backlog of gnegg.ch (link to the tool removed due to gnegg.ch redesign).</p>
IRC Clients2005-01-28T00:00:00+00:00http://pilif.github.com/2005/01/irc-clients<p>When my favourite <a href="http://bisqwit.iki.fi/nesvideos/">game movies site</a> (written about it <a href="http://www.gnegg.ch/archives/94-If-only-I-could-play-like-this.html">here</a> and <a href="http://www.gnegg.ch/archives/142-Console-game-Videos.html">here</a>) went offline last week, I ventured into its <a href="irc://irc.freenode.net/nesvideos">IRC channel</a> to find out what was going on.</p>
<p>Chatting with the guys there was so much fun that I decided it was time to get into IRC after all (I never really used it before, so I did not have much insight into this part of the net).</p>
<p>Soon after this decision, I began learning the ins and outs of IRC, and the first thing I did was set up a bouncer (an IRC proxy - it lets you stay logged into a channel despite your client machine being offline; very useful for getting an overview of what happened while you were away). There are quite a few available, but the only one that seems to be still maintained is <a href="http://jelmer.vernstok.nl/ctrlproxy/">ctrlproxy</a></p>
<p>If you plan on using mIRC with it, go and install the current pre-release 2.7pre2. Older versions don’t let you connect.</p>
<p>Next was the question which client to use.</p>
<p>While mIRC is nice, it has two problems: a) it's single-platform. As I'm constantly using all three of Win/Mac/Linux, a single program would be nice so I don't have to relearn all the shortcuts on each platform. b) It does not look very polished and cannot be made to do so.</p>
<p><a href="http://www.klient.com">Klient</a> looks much better, but is still single-platform and has problems recognizing the state when reconnecting to ctrlproxy (it sometimes does not notice that you are already in a channel).</p>
<p><a href="http://www.visualirc.net/">virc</a> looks better than mIRC, but worse than Klient. Plus, it seemed a bit unstable to me, and it was slow displaying the backlog. Very slow. It's single-platform too (and written in Delphi, it seems).</p>
<p><a href="http://irssi.org/">irssi</a> is single-platform too, but I could work around that by running it on our webserver inside <tt>screen</tt>.</p>
<p>A program that warns with</p>
<pre class="code">17:43 -!- Irssi: Looks like this is the first time you've run irssi.
17:43 -!- Irssi: This is just a reminder that you really should go read
17:43 -!- Irssi: startup-HOWTO if you haven't already. You can find it
17:43 -!- Irssi: and more irssi beginner info at http://irssi.org/help/
17:43 -!- Irssi:
17:43 -!- Irssi: For the truly impatient people who don't like any automatic
17:43 -!- Irssi: window creation or closing, just type: /MANUAL-WINDOWS</pre>
<p>before starting it, and with no obvious way to exit it (Ctrl-C, quit, exit - none of them worked), is something I'm afraid of (quite like <tt>vim</tt>, though I learned to love that one). So: no go.</p>
<p>Finally I ended up with <a href="http://www.xchat.org/">X-Chat</a>. It looks good, has all the features I need, a big userbase, is maintained and is multiplatform after all.</p>
<p>There was this fuss about the Windows version becoming shareware, but I can live with that as the tool is very, very good. To support its author, I gladly paid those $20 (I see it as a packaging fee - just like with those Linux distributions), though you can get a Windows binary for free <a href="http://www.silverex.org/news/">here</a>.</p>
<p>So for me, it’s X-Chat. And much fun in <tt>#nesvideos</tt></p>
Why I love the command line2005-01-24T00:00:00+00:00http://pilif.github.com/2005/01/why-i-love-the-command-line<p>Today I had the task of joining together quite a few MP3 files.</p>
<p>I had about 100 radio plays, each divided into three to six files, which I wanted joined into one file per play so I can better organize them on my iPod.</p>
<p>There are tools out there doing exactly that, mp3surgeon being one of them. All these tools a) have a non-scriptable GUI (meaning lots and lots of clicks) and b) cost money.</p>
<p>b) would not be a problem if those tools worked for me, but because of a) they do not.</p>
<p>Then I found <a href="http://www.mpgedit.org/mpgedit/">mpgedit</a>, a command line tool capable of joining MP3s (respecting VBR headers, but without re-encoding the new file).</p>
<p>As it's usable from the command line, I could write a small script doing exactly what I wanted:</p>
<pre class="code">
<?
$dir = dir(".");
while (false !== ($entry = $dir->read())) {
if (preg_match('/^\.+$/', $entry)) continue;
$path = '.\\'.$entry;
if (is_dir($path))
doJoin($path);
}
function doJoin($dir){
echo "Looking in $dir\n";
$of = escapeshellarg("..\\".basename($dir).".mp3");
chdir($dir);
$files = array();
$d = dir(".");
while (false !== ($entry = $d->read())) {
if (!preg_match('/\.mp3$/', $entry)) continue;
$files[] = $entry;
}
$d->close();
sort($files);
$files = array_map('escapeshellarg', $files);
system("c:\mp3\mpgedit_nocurses.exe -o $of -e- ".implode(' ', $files));
chdir("..");
}
?>
</pre>
<p>Note that it's written in PHP, as this is the language I currently do most of my work in, and that it's very customized to my needs. Nonetheless, it works very well and saves me from about 200'000 clicks.</p>
<p>Now this is exactly why I love the command line.</p>
Great things coming2005-01-23T00:00:00+00:00http://pilif.github.com/2005/01/great-things-coming<p>For fans of my <a href="http://www.gnegg.ch/archives/98-Some-suburban-railways-I..html">many</a> <a href="http://www.gnegg.ch/archives/90-Too-bad!.html">railroad</a> <a href="http://www.gnegg.ch/archives/103-Now-its-real.html">postings</a>, I have something very special I'm soon going to post. Stay tuned.</p>
Another day, another "head first" book2005-01-19T00:00:00+00:00http://pilif.github.com/2005/01/another-day-another-head-first-book<p>With pleasure I found out that <a href="http://www.oreilly.com/catalog/hfdesignpat/index.html">Head First Design Patterns</a> was in the bookstore I'm usually getting tech-books at (I like going to a store, buying the book and then immediately begin reading it - this is why I don't order all books over the web). The book was hidden in the shelf full of UML-books where it should have been placed near the Java-books: It's really Java-centric.</p>
<p>As I noted <a href="http://www.gnegg.ch/archives/180-Head-First-Servlets-JSP.html">here</a>, I really like the Head First series, and if you ask me, Head First Design Patterns is the best so far - which may be because the topic really, really interests me. Additionally, I have so far found far fewer mistakes than in Head First JSP (where there were quite a few).</p>
<p>This new book of the series has something the others don't: suspense. Whenever one of the patterns has been explained, I'm very much looking forward to learning what the next pattern and its example will be.</p>
<p><a href="http://www.gnegg.ch/archives/194-Learning-by-example.html">I'm not a theoretical guy</a>, so it's quite difficult to keep me reading when dry topics are being explained. Not so with Head First Design Patterns: they keep it interesting and they keep explaining by example (very good ones, by the way). It's really well done.</p>
<p>I'm now about in the middle of the book (the command pattern) and while I already knew some things, I was able to learn a good deal of new stuff (and the correct terminology to use) and, interestingly, it's sticking in my brain. I can remember every single important thing (the rocket-powered rubber duck, for example. Btw: rubber ducks do fly indeed: just throw one out of the window and it flies - in one direction only, but it flies. The fly()-method would have had to be overridden by many ducks anyway, but I agree, the strategy pattern is the better solution).</p>
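<p>Since the strategy pattern comes up here: the book's duck example boils down to a handful of lines. Below is a minimal PHP sketch of the idea (the book uses Java; the class and method names are my own adaptation, not the book's verbatim code):</p>

```php
<?php
// Strategy pattern: the flying behaviour lives in its own object, so each
// duck can be given a different one - and it can even be swapped at runtime.
interface FlyBehavior {
    public function fly();
}

class FlyWithWings implements FlyBehavior {
    public function fly() { return "I'm flying!"; }
}

class FlyNoWay implements FlyBehavior {
    public function fly() { return "I can't fly."; }
}

class Duck {
    private $flyBehavior;

    public function __construct(FlyBehavior $fb) {
        $this->flyBehavior = $fb;
    }

    public function setFlyBehavior(FlyBehavior $fb) {
        $this->flyBehavior = $fb;
    }

    // the duck delegates to whatever behaviour it currently holds
    public function performFly() {
        return $this->flyBehavior->fly();
    }
}

$rubberDuck = new Duck(new FlyNoWay());
echo $rubberDuck->performFly(), "\n";   // I can't fly.

// strap the rocket onto the rubber duck - no subclassing needed
$rubberDuck->setFlyBehavior(new FlyWithWings());
echo $rubberDuck->performFly(), "\n";   // I'm flying!
```

<p>The point being: no fly()-override in every duck subclass - you compose the behaviour instead of inheriting it.</p>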
<p>Even if you are not interested in design-patterns: Go and get this book. Even during reading the very first chapter you'll soon get interested and by the middle of the book you long for every second of free time to continue going on reading just to learn what the next pattern may be and what example may be used to explain it.</p>
<p>Incredibly great stuff.</p>
Just one year to go...2005-01-17T00:00:00+00:00http://pilif.github.com/2005/01/just-one-year-to-go<p>... and I'm on this world for a quarter of a century.</p>
<p>Today is my 24th birthday.</p>
<p>Sometimes I really would like to be much older than I am now. I would so much have liked to experience the whole IT revolution - as it stands, the first real program I wrote (besides many complicated batch files) was for Windows 95, so I missed all the fun of segmented memory and assembler. Too bad.</p>
<p>Then again, there are those days where I just wish to be younger, so I will be able to see even more new technologies rise and vanish before I die. It would be so nice to still be alive when we finally get fusion reactors, warp speed, matter transporters and all this stuff.</p>
<p>I love technology. I really do.</p>
Problems with my Thinkpad2005-01-12T00:00:00+00:00http://pilif.github.com/2005/01/problems-with-my-thinkpad<p>My <a href="http://www.gnegg.ch/archives/166-IBM-Thinkpad-42.html">Thinkpad T42</a> went down <a href="http://www.gnegg.ch/archives/183-Unforeseen-annoyance.html">again</a>.</p>
<p>It's the graphics chip again: As soon as I push the device just a little bit, the screen goes blank or becomes garbled.</p>
<p>I hate such things. What's become of IBM lately?</p>
A worm named pilif?2005-01-06T00:00:00+00:00http://pilif.github.com/2005/01/a-worm-named-pilif<p>I just heard that my long-time nickname has been "misused" by someone in his evil malware schemes.</p>
<p>At least the second entry in google after searching for "pilif" points to <a href="http://www.iamnotageek.com/a/pilif.exe.php">this page</a></p>
<p>This is very unfortunate. I've been using the name "pilif" since <a href="http://groups-beta.google.com/group/comp.os.ms-windows.programmer.win32/browse_frm/thread/b30be1ad2e451bdf/d1ce704a9ae46142?tvc=1&q=author:pilif&_done=%2Fgroups%3Fhl%3Den%26safe%3Dimages%26q%3Dauthor:pilif%26qt_s%3DSearch+Groups%26as_drrb%3Db%26as_mind%3D12%26as_minm%3D5%26as_miny%3D1981%26as_maxd%3D6%26as_maxm%3D1%26as_maxy%3D1999%26&_doneTitle=Back+to+Search&scrollSave=&&d#d1ce704a9ae46142">long before the first mail virus</a> (ILOVEYOU) was written. Pilif has the benefit of being nearly unused on the web so far (very convenient when registering somewhere) and it somewhat contains my name (Philip -> Filip -> Pilif)</p>
<p>I can assure you that I have nothing to do with this worm or any other worm, for that matter.</p>
<p>Besides, if I really wrote a virus I would never be so stupid as to name it after my nickname ;-)</p>
Serendipity2005-01-04T00:00:00+00:00http://pilif.github.com/2005/01/serendipity<p>Last sunday I somehow came across <a href="http://www.s9y.org/">Serendipity</a>.</p>
<p>Besides being the only project where hitting Ctrl-V is simpler than actually spelling the name (let alone pronouncing it), this blogging engine shows much promise for me.</p>
<p>It has some obvious advantages over MT for me:</p>
<ul>
<li>It's OpenSource. Hacking it isn't a crime.</li>
<li>It's written in PHP, one of the languages I'm <em>really</em> fluent in.</li>
<li>It has some great anti-spam-features (though forcing the preview here did help greatly).</li>
<li>It uses dynamically generated pages instead of statically generating each and every page.</li>
<li>It has many more features than MT does.</li>
</ul>
<p>My only problem: It does not have an importer for Movable Type. Well, actually, the current CVS HEAD does, but it does not work either. But because of the first two points above, <a href="http://sourceforge.net/mailarchive/forum.php?thread_id=6270930&forum_id=31275">I could do something about that</a>.</p>
<p>Now, this evening, I will be working on the comments importer, and tomorrow you will be able to have a look at how well my patches work (at least with this blog's data).</p>
<p>And sometime later this year, I will be using Serendipity as my blogging engine (hopefully with many more patches by myself). That's for sure!</p>
eAccelerator2005-01-03T00:00:00+00:00http://pilif.github.com/2005/01/eaccelerator<p>Maybe you remember turck-mmcache, a bytecode cache for PHP, released under the GNU GPL and said to be extremely fast.</p>
<p>Its author got employed by Zend (the maker of another, commercial bytecode cache) and turck-mmcache has been lingering around unmaintained since then (about a year ago).</p>
<p>Then PHP5 was released and turck-mmcache stopped working.</p>
<p>Finally, last month, some guys forked the dead mmcache and created <a href="http://eaccelerator.sf.net">eAccelerator</a>, first fixing up the cache and optimizer to work with PHP5.</p>
<p>And today, I gave it a shot on our development server, just to see if this magical cache thing really works.</p>
<p>I ran <tt>ab</tt> on one of the most computationally intensive pages of my current project (which makes heavy use of PHP5's new object-oriented language features).</p>
<p>Most interesting would be the "Requests per second" value:</p>
<p>Without eAccelerator: Requests per second: 7.89 [#/sec] (mean)
<br />
With eAccelerator: Requests per second: 24.77 [#/sec] (mean)
</p>
<p>That's a factor-of-three speed increase.</p>
<p>Please note that the absolute values are somewhat irrelevant, as this is quite a weak development server and the DB, the application and ab all run on the same machine. Anyway, the relative three-fold speed increase is quite cool. As soon as I have more confidence in eAccelerator, I think I'm going to deploy it in the production environment.</p>
<p>Btw: if you are trying this out for yourselves, your mileage may vary somewhat. The tested application uses quite a few classes, all separated into different files, so this is a case where a bytecode cache can help greatly.</p>
What I hate about PHP2004-12-29T00:00:00+00:00http://pilif.github.com/2004/12/what-i-hate-about-php<p>This is what I really hate about PHP:</p>
<pre class="code">
pilif@galadriel ~ % cat test.php
<?
if (10 == '10ABC')
echo "Gnegg!\n";
?>
pilif@galadriel ~ % php test.php
Gnegg!
</pre>
<p>This is the reason for a pretty serious bug in my current i'm-loving-doing-that-as-it's-the-greatest-ever-project.</p>
<p>What happens is that PHP implicitly converts 10ABC to an integer (yielding 10) and then makes an integer comparison.</p>
<p>In my opinion, this is wrong, as implicitly converting a string to an integer can cause information to be lost. Had PHP converted 10 to '10', the comparison would have worked as one expects, because converting an integer to a string loses no information.</p>
<p>Then again, integer conversions are more accurate than string conversions, so I can understand PHP's way. What I cannot understand is that a non-numeric string is converted to something other than 0 or nothing (while causing a runtime error). The comparison in my example should never have evaluated to a true value (which happened because <tt>intval('10abc') == 10</tt>).</p>
<p>And converting to string whenever one argument of a comparison is a string is not the holy grail either - problems with locale-specific decimal points come to mind (is it . or ,?).</p>
<p>So Perl's idea of using a dedicated string comparison operator may not have been so bad after all...</p>
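<p>Until then, the practical workaround in PHP itself is the strict comparison operator, which skips the type juggling entirely. A quick illustration:</p>

```php
<?php
// === compares type AND value - no implicit integer conversion happens.
var_dump(10 === '10ABC');          // bool(false): integer vs. string
var_dump('10' === '10ABC');        // bool(false): plain string comparison
var_dump(strcmp('10', '10ABC'));   // non-zero: the strings differ

// If the numeric value really is what you mean, make the cast explicit:
var_dump((string)10 === '10');     // bool(true)
```

<p>It's not as elegant as Perl's <tt>eq</tt>, but at least it makes the intended comparison explicit instead of leaving it to PHP's juggling rules.</p>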
Horde 3.02004-12-23T00:00:00+00:00http://pilif.github.com/2004/12/horde-30<p>Today <a href="http://www.horde.org">Horde 3.0</a>, along with some applications using it - the most noteworthy being IMP 4.0 - has been released.</p>
<p>
For me, horde has a long history of being a pain in the ass to install and extend. While installing the first versions was quite easy (though not possible for me back then, as I did not have access to my own server and the environment of our shared hoster did not have all the extensions needed - let alone shell access), it grew quite complicated from 3.0 onwards.
</p>
<p>My main problem has been and is that Horde is not really a framework for application development, but a frontend container. It's not possible to just install IMP: you're always installing a kind-of groupware application (Horde) and only <em>then</em> the webmail component.</p>
<p>If you don't do it right, you actually force your users to log in twice when checking their email (once in Horde, once in IMP).</p>
<p>As always, I really had to take a look at those new releases.</p>
<p>As the horde main server is quite busy at the moment, I downloaded from the mirror in the Netherlands - the others were either not reachable or not current.</p>
<p>After downloading the horde framework, satisfying the very long list of dependencies took some time. Especially tricky was the fileinfo PECL extension, but that was because of a problem with my local PHP installation. Glad I found out now and could fix it.</p>
<p>Then came the configuration. What a nicely done web interface! Unfortunately, I just managed to lock myself out (I chose "IMAP Server" as authentication source not knowing that this only works after IMP is installed and IMP cannot be installed without a working horde installation...)</p>
<p>After those things were settled, I came to the installation of IMP. An easy procedure - after having gotten used to the framework itself before.</p>
<p>Then I configured horde to use IMP as the authentication source, which did not work at first, but after copying over the backup configuration file and trying again, it finally worked (don't ask me what I did differently the second time).</p>
<p>My next problem was the preset settings for my users: by default, it uses a 12-hour time format, Arabia as the location and somewhere in Africa for the time zone.</p>
<p>As I cannot ask my (possible) users to change those preferences, I looked for a way to do it myself, and while doing that I began to understand how the Horde configuration system works.</p>
<p>
Now, I'm quite impressed about how they are doing this: it's generic, it's configurable and every single feature can be locked down for the end users. Very nice.</p>
<p>Just make your configuration changes in <tt>config/prefs.php</tt>. If you need a list of possible values, either read the source or, easier, just look at the HTML source of the preferences screens.</p>
<p>If I had a wish for the next release: provide a way for the administrator to change those settings via the web frontend.</p>
<p>While I first just installed IMP, which worked flawlessly out of the box, I ventured further and installed Kronolith, Turba, Nag and finally even Chora. Additionally, I configured Horde to give access to Chora only to me. Comfortable. Even more impressive when I recall that the whole user management is done via the <a href="http://www.xams.org">XAMS</a> environment (by using IMP to authenticate the users).</p>
<p>All in all, I still wish I could hide away Horde and just install IMP (with a small, simple integrated addressbook), but as a) IMP really is the best PHP-based webmailer out there (I don't know of any others that come close) and b) the other applications work really nicely (even with PHP5, though it's not officially supported), I can live with that limitation.</p>
<p>Now, I have two tasks ahead of me:</p>
<ol>
<li>Provide support for changing the XAMS account password from within the web interface. This will be a great opportunity to learn how the preferences system really works.</li>
<li>Teach <a href="http://www.horde.org/ingo/">Ingo</a> how to create Exim filters, as this is the filtering system that could most easily be integrated into XAMS. When I designed the <a href="http://www.pilif.ch/mail.txt">initial draft</a> of XAMS (then still called pmail), I took great pride in the fact that mail delivery does not cause a non-MTA process to be forked, and I want to keep it that way. It saves resources under high mail load.</li>
</ol>
<p>After the Christmas days, I certainly will know what the new Horde/IMP is made of. From an administrator's and user's perspective, it's a great release.</p>
<p>Thank you guys!</p>
Apache 22004-12-23T00:00:00+00:00http://pilif.github.com/2004/12/apache-2<p>There was this <a href="http://apache.slashdot.org/article.pl?sid=04/12/21/1837209&tid=169&tid=2">discussion</a> recently about whether Apache 2.0 should be recommended by the PHP guys or not.</p>
<p>While I find <a href="http://ch.php.net/manual/en/install.unix.apache2.php">their warning</a> a bit too harsh, I for myself still cannot run Apache 2 - though I'd really like to. So maybe it's time to add my two cents:</p>
<p>Last March, I was going to <a href="http://www.gnegg.ch/archives/110-Speed-up.html">newly set up</a> our production server. As the Apache guys keep saying that Apache 2.0 is production ready, I of course first went with the new version. Here's what did not work and finally forced me to go back to 1.3. It's not about PHP at all: the two extensions I'm depending on (MySQL and PostgreSQL) are available in a threadsafe edition, so even one of the threaded MPMs would have worked. What killed my intentions was mod_perl.</p>
<p>Back then, when the comment spam problem was not that big for me, I had been running gnegg.ch in a mod_perl environment, which at that time could not be set up with Apache 2: mod_perl itself had an even bigger warning about not working well than PHP still has. Additionally, they had changed their API, so even if I'd been able to get it to work, there would have been no guarantee of getting MT to work with that new API.</p>
<p>Anyway: I was willing to try it out, but libapreq, required by MT when running under mod_perl, was only available as an early preview too (and still is nowhere near production ready). My attempts to install it anyway led to a flurry of SIGSEGVs in Apache when using MT. Judging from the <a href="http://bugs.gentoo.org/show_bug.cgi?id=61893">Gentoo bugtracker</a>, this has not gotten better yet.</p>
<p>One of the strongest selling points for Apache isn't PHP. It's mod_perl. And currently, it's mod_perl that should have this big warning on its webpage. mod_perl, and not PHP (which works nicely under Apache 2 on an internal development system).</p>
<p>And even when mod_perl gets fixed: as they have changed the API, many existing (and no longer maintained) packages using mod_perl (like Apache::MP3, for example) will possibly stop working after the switch to Apache 2.</p>
<p>As soon as the first guy comes here and posts that he/she's gotten MT to work under mod_perl on Apache 2, I'm going to reconsider the switch. Not a second earlier.</p>
Wasted hours2004-12-22T00:00:00+00:00http://pilif.github.com/2004/12/wasted-hours<p>Today I wasted three hours finding a bug in my code: a server-side plugin of our PopScan server recently stopped working. Looking at the code, I quickly saw that some queries to the customer's MS SQL Server seemed to fail.</p>
<p>Nothing helped. I did not even get a message about what was going wrong. <tt>mssql_query()</tt> just returned false.</p>
<p>In the end, I created a small, reproducible testcase and reported a bug in PHP. And guess what: It's <a href="http://bugs.php.net/bug.php?id=31243">already fixed</a>.</p>
<p>This seems to have been introduced between 5.0.2 and 5.0.3, which is bad, as I had to update because of the recent security problems. I really find it questionable that a security update can introduce unrelated bugs. But that's life. I'm happy for now.</p>
<p>In the six years I've been working with PHP, this is the first time I've been hit by a bug in PHP itself in a critical situation. This is also the reason why I wasted three hours searching for the bug in my code instead of going after PHP: I just trusted PHP more than myself.</p>
Zwei Affichen2004-12-21T00:00:00+00:00http://pilif.github.com/2004/12/zwei-affichen<p>This is only for my fellow readers from the German-speaking part of Switzerland. I'm presenting it without further comment, as those who know the two newspapers and understand German will certainly get my point.</p>
<table border="0" cellspacing="5" cellpadding="0" align="center">
<tr>
<td><img alt="Tagesanzeiger" src="http://www.gnegg.ch/archives/ta.jpg" width="100" height="133" /></td>
<td><img alt="NZZ" src="http://www.gnegg.ch/archives/nzz.jpg" width="100" height="133" /></td>
</tr>
</table>
<p>I took the pictures with my cellphone, so the quality is rather bad, which is why I do not provide an enlarged version.</p>
Productive with Delphi 20052004-12-16T00:00:00+00:00http://pilif.github.com/2004/12/productive-with-delphi-2005<p>Yesterday and today, I finally had the opportunity to do some real work on PopScan with Delphi 2005. Here's what I really like besides the <a href="http://www.gnegg.ch/archives/204-Delphi-2005.html">obvious</a>:</p>
<ul>
<li>Those <tt>.bdsproj</tt> files are incredibly useful. They replace the old DSK and DOF files, have a convenient XML format and are the new project file you open in Delphi. This is very nice, as the old project file (.dpr) is actually program code and does not contain any project metadata. That's what those .dof and .dsk files were used for, but I never understood which setting went where, and the format wasn't XML either. So this consolidation really is convenient.</li>
<li>The history feature really saved my day. With me hitting Ctrl-S on nearly every line I write, the older .~pas approach wasn't very useful, and CVS was no help either, because I don't commit nearly as often as something can go wrong in the code.</li>
<li>The new exception-catching dialog of the debugger is really nice. I like this "Don't halt on this exception again" checkbox.</li>
<li>While it makes the application significantly slower under the debugger, the new "event log" is great.</li>
<li>Speaking of debugging: The "Local Variables"-Window is great too.</li>
<li>Delphi now distinguishes between a "Default Layout" and a "Debug Layout". You can configure both of them as you like and Delphi automatically switches between them. This is much more intuitive than before.</li>
<li>Maybe I'm the only person on earth, but I like the single-window approach: it's much cleaner than before. No more tons of clutter on the screen, and the important windows are always visible. No more Ctrl-Alt-F11 either.</li>
</ul>
<p>Additionally, I don't have as many speed problems as others seem to have: while starting the IDE takes its time, working with Delphi once it's open goes quite smoothly.</p>
<p>My only problem is opening the form designer. This definitely takes too long, but not long enough for me to switch back to the undocked layout.</p>
<p>Memory usage is of no concern to me. I have 1 GB of RAM, and even after a day of using Delphi, my Thinkpad remains responsive, even though not only Delphi but also Eclipse, Zend Studio, Firefox and many other programs are running. For me, the figure the task manager shows is not nearly as important as the responsiveness. If Delphi uses 500 MB of RAM, fine - as long as my PC stays responsive.</p>
<p>All in all, I really like this new Delphi and I already have uninstalled D7 (thus <a href="http://www.bradsoft.com/forums/shwmessage.aspx?ForumID=1&MessageID=7482">breaking FeedDaemon</a>).</p>
Tales of Symphonia2004-12-13T00:00:00+00:00http://pilif.github.com/2004/12/tales-of-symphonia<p>Now that the project I'm currently working on (for which I didn't really have much time and which I insisted on doing cleanly despite the time constraints; believe me, it's worth it, more on that later) is coming along very nicely, I actually had some time to do a little gaming yesterday.</p>
<p>About two weeks ago, I bought <a href="http://tales.namco.com/">Tales of Symphonia</a> for my GameCube, but only yesterday did I play it for the first time (while still waiting for Mario 64 to arrive for the DS I've imported and actually got last week). Read about my more than pleasant experience:</p>
<p>First of all, I actually could buy a legal European version. Relying on grey imports was, for once, not necessary, despite Tales of Symphonia (just "Tales" from now on) being quite a hardcore RPG. A really big <b>THANK YOU!</b> for that, Nintendo.</p>
<p>Additionally, while I would have preferred playing it in English, the German translation is really good (completely unlike the miserable translation of Pokémon, for example), and thankfully the voices were not dubbed: the English voice actors did a very good job on this one.</p>
<p>One thing is stupid though: you cannot turn off the German subtitles, and they do not vanish automatically. So I have to hit the A button at just the right time to avoid unnaturally cut-off sentences. This was a problem for the first 15 minutes. After that, I got quite used to it, maybe also because the German translation really is good (I'd translate most of the sentences the way they did).</p>
<p>The next thing I did not like at first was the story: first, there's this "wake the goddess to save the world by unsealing four seals" premise, which sounds kind of silly for a hardcore RPG. And then there are the two other main themes: "girl on a pilgrimage to save the world" and "boy brings destruction to his own village through an accident and gets banished for it; his first stop on the journey is a desert".</p>
<p>Both of those themes should sound familiar: the first one is an FFX rip-off, the second one is from the best RPG of all time, Xenogears.</p>
<p>Fortunately, this "seen that before" feeling quickly begins to wear off after about two hours, when the party crosses the sea and (hopefully) lives through the Governor Dorr sidequest. Now, that's something new (and great, too).</p>
<p>As I'm just about to finish said quest, I can't say anything further about the story yet, but I've read great things about it.</p>
<p>I really like the battle system. It's a bit like Star Ocean: fast-paced, but manageable nonetheless. In the desert right at the beginning, after being abducted by those maybe-Desians (the enemy race oppressing the humans, strangely equipped with technology well beyond that of the humans), I was hopelessly under-leveled: those visible enemies on the world map invite you to skip them instead of fighting. In the end, I got through, but it was not easy there.</p>
<p>On a side note, speaking of advanced technology: why the heck does Raine seem to know all that stuff? What is it about her? If she has something to hide, it's hidden much better than with Citan in Xenogears, where this is clear from the beginning. Besides, I really like her character. She is very likeable.</p>
<p>Another thing I really, really like is the graphics: I love this cel-shading technology - especially when it's done as well as in Tales. It's like watching an anime - just interactive.</p>
<p>Oh, and talking about "interactive": in contrast to what I had to <a href="http://www.gnegg.ch/archives/31-got-it....html">rant about</a> in Xenosaga, in Tales the balance between interactive and non-interactive sequences is struck very well. It's never boring, and the story is always developing. Very nice.</p>
<p>All in all, Tales certainly is the best I've seen RPG-wise on the GameCube, and it even matches some of the better-known Squaresoft titles. I really hope the story continues as it is now and does not fall back to re-telling things already told by other games.</p>
<p>If you have a Cube and are longing for good RPGs on it, go and buy Tales. You will not regret it.</p>
<p>So, now I'm just going to recompile and upload my little Java applet, and then I'm off home to play another round of Tales...</p>
ALTER TABLE in PostgreSQL 8.02004-12-02T00:00:00+00:00http://pilif.github.com/2004/12/alter-table-in-postgresql-80<p>I've just discovered my new favourite feature of the upcoming <a href="http://www.postgresql.org">PostgreSQL 8.0</a>. Let's say you have forgotten a column when creating the schema of a table. Let's also say there already exist foreign keys referencing this table, so dropping and recreating it with the updated schema from your text editor won't work (or would force you to recreate all the other tables too).</p>
<p>So, you need <tt>alter table</tt>.</p>
<p>Here's what Postgres &lt; 8 needs to add a column <tt>cdate</tt> which must be <tt>not null</tt> and have a default value of <tt>current_timestamp</tt>:</p>
<pre class="code">
alter table extart_prods add cdate timestamp;
update extart_prods set cdate = current_timestamp;
alter table extart_prods alter column cdate set not null;
alter table extart_prods alter column cdate set default current_timestamp;
</pre>
<p>And here's what it takes to do it in PostgreSQL 8:</p>
<pre class="code">
alter table extart_prods add cdate timestamp not null
default current_timestamp;
</pre>
<p>When typing this into <tt>psql</tt>, you're so much faster. This is actually the only feature I really missed when going from MySQL to PostgreSQL for all the bigger work.</p>
<p>Oh, and did I mention that in Postgres 8 (currently running Beta 4) the statement executes noticeably faster than in Postgres 7.4? (Though this doesn't really matter - you should not be altering production tables anyway.)</p>
Internet Explorer, File Downloads, PHP2004-12-01T00:00:00+00:00http://pilif.github.com/2004/12/internet-explorer-file-downloads-php<p>Have you ever tried sending a file to Internet Explorer for which an internal display plugin is installed? Take a .CSV file, for example (or a PDF, for that matter).</p>
<p>If so, then maybe you have noticed that some versions of IE just display an error message about not being able to find the file just downloaded, whenever you have a call to session_start() in your script.</p>
<p>The problem is with the headers PHP's session management sends to the browser: they disallow any caching and claim that the document expired somewhere around my year of birth (1981). It seems IE takes that literally and really does not cache the document, but then is naturally unable to forward it to the plugin (or ActiveX control or whatever).</p>
<p>Fortunately, you may change PHP's default headers by emitting some additional <tt>header()</tt> calls:</p>
<pre class="code">
header('Content-Type: application/csv');
header('Pragma: cache');
header('Cache-Control: public, must-revalidate, max-age=0');
header('Connection: close');
header('Expires: '.date('r', time()+60*60));
header('Last-Modified: '.date('r', time()));
</pre>
<p>A short explanation of the headers sent:</p>
<ol>
<li>The content-type tells the browser that there's a CSV file coming</li>
<li>Pragma is an old HTTP/1.0-Header. This one allows caching of the resource</li>
<li>Cache-Control is the new HTTP/1.1 header replacing Pragma. "public" means public proxies may cache the document ("private" would also work and would mean: cache only in the browser's cache). must-revalidate advises proxy servers (and browsers) to check whether the resource has been modified whenever the cached copy is older than max-age seconds.</li>
<li>The connection-header tells the server and browser what to do with the connection when the resource has been transmitted. The old HTTP/1.0 behaviour is <tt>close</tt>. <tt>keep-alive</tt> would be the newer behaviour. I'm not sure whether this really is necessary here, but with this header, it definitely works.</li>
<li>The Expires header tells the browser when the document is going to expire. PHP defaults this to somewhere in 1981, and I think this is what causes the problem for IE. I set it to one hour in the future. If it were possible to just turn off those default headers, I would simply send no Expires header at all.</li>
<li>Last-Modified tells the browser when the resource was last modified. I could actually get a timestamp of the underlying data and output that, so the browser would not have to re-download the resource when the data hasn't changed; but the data changes so often that this optimization is not worth the trouble, so I'm just saying it changed right now.</li>
</ol>
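<p>The two date headers must be valid HTTP dates (PHP's <tt>date('r')</tt> comes close enough). As a sketch, here's how the same header set could be assembled outside PHP; this is a small Python equivalent with a hypothetical helper name, not code from the post:</p>
<pre class="code">
import time
from email.utils import formatdate

def csv_download_headers(ttl=3600):
    """Build the IE-friendly CSV download headers (sketch; names are mine)."""
    now = time.time()
    return {
        "Content-Type": "application/csv",
        "Pragma": "cache",  # HTTP/1.0: allow caching
        "Cache-Control": "public, must-revalidate, max-age=0",
        "Connection": "close",
        # RFC 1123 dates, roughly what date('r') produces in PHP
        "Expires": formatdate(now + ttl, usegmt=True),   # one hour ahead
        "Last-Modified": formatdate(now, usegmt=True),   # "just changed"
    }

headers = csv_download_headers()
print(headers["Cache-Control"])  # public, must-revalidate, max-age=0
</pre>
<p>The point is only that Expires lies ahead of Last-Modified, so IE has no excuse to throw the file away before handing it to the plugin.</p>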
<p>I have confirmation that this solves the problems some clients were experiencing before. Very nice.</p>
Blinking SDRAM?2004-11-30T00:00:00+00:00http://pilif.github.com/2004/11/blinking-sdram<p>Just read about <a href="http://www.crucial.com/eu/pvtcontent/memorytype.asp?memtype=BallistixTracer184-pinDIMM">these</a> on <a href="http://www.golem.de/0411/34947.html">Golem.de</a> (German page).</p>
<p>It's all about SDRAM with integrated LEDs blinking on activity.</p>
<p>I really can't understand those case-modders...</p>
Any Eclipse users out there?2004-11-30T00:00:00+00:00http://pilif.github.com/2004/11/any-eclipse-users-out-there<p>Usually I'm not here to ask questions, but today I have two for my readers. Maybe someone can help?</p>
<p>It's about <a href="http://www.eclipse.org">Eclipse</a>:</p>
<ul>
<li>Is there a way to automatically switch back to the Java-perspective after a debugging-session ended?</li>
<li>Can the complete JDK documentation somehow be integrated into the help system? While there is Javadoc everywhere while writing code, using the full-text search capabilities of the help system would be really nice every now and then...</li>
</ul>
<p>I'm quite sure both problems can be solved; I'm just not seeing where. And additionally, I'm having quite some trouble devising useful Google keywords to find a solution.</p>
<p>So, I thought: Maybe some of my readers know Eclipse better than I do.</p>
<p>Any help is appreciated..</p>
Found on my iMac2004-11-29T00:00:00+00:00http://pilif.github.com/2004/11/found-on-my-imac<p>Today, I found a residual registration link lingering around in the home directory of my iMac. Looking at its contents with <tt>cat</tt> reveals quite an ordinary .plist XML file.</p>
<p>What's interesting is what the engineers at Apple obviously thought of the newsletters the user is given a chance to subscribe to:</p>
<pre class="code">
<key>RegistrationInfo</key>
<dict>
<key>Apple<b>Spam</b></key>
<string>NO</string>
<key>Location</key>
<string>B</string>
<key>Occupation</key>
<string>5</string>
<key>Others<b>Spam</b></key>
<string>NO</string>
</dict>
</pre>
<p>(the emphasis is mine)</p>
<p>Oh... how I agree with them!</p>
Delphi 20052004-11-25T00:00:00+00:00http://pilif.github.com/2004/11/delphi-2005<p>I got my hands on the demo version of Delphi 2005 (download it <a href="http://www.borland.com/products/downloads/download_delphi.html">here</a>), and I have actually already configured the beast, so I have my usual environment to work on PopScan with it. These are my first impressions. (I won't talk about this file-download-window-popping-up problem, as everyone knows it's a nasty problem with a security patch from Microsoft which will soon be fixed. Read about it <a href="http://blogs.borland.com/stevet/archive/2004/11/16/1844.aspx">here on Steve's blog</a>.)</p>
<ul>
<li>It takes quite some time to start up. After removing the Delphi.NET and C# personalities (I don't need them), it starts about as fast as my Delphi 7 did - just a little bit slower.</li>
<li>The compiler got faster, if you ask me.</li>
<li>Besides the great new features Borland is talking about, there are very nice usability-tunings everywhere which make working quite a bit easier.</li>
<li>The VCL form designer is extremely slow on my machine. Just displaying the PopScan Main Form within the designer takes nearly 10 Seconds. Delphi 7 does that instantly.</li>
<li>The debugger is slower too, which certainly has to do with the many great feature additions. I can live with that.</li>
<li>It's extremely compatible with Delphi 7: I could install every single third-party component without any problems. This is quite impressive, considering Delphi 2005 is largely a rewrite.</li>
<li>While I like the new docked form designer, there's one usability problem with it: when you have components that use their own property editors (like Toolbar 2000), those editors are opened in their own window (understandable). Now, if you select a button in the component editor and then click the Object Inspector to change a property, the Delphi main window will cover the property editor, rendering it invisible. An easy fix would be to make the property editor always-on-top; a better fix would be integrating it somewhere in the IDE.</li>
<li>Even JCLDebug could be compiled and installed (even the IDE Expert did work, though you have to manually install it)</li>
</ul>
<p>All in all, this release of Delphi is a very great one, providing the user with a ton of new features and fixes to long-standing usability problems (so long-standing that you got used to them and are now missing them...). I have not experienced any crashes so far (besides the one where the expat parser of a debugged application took all the RAM on my system, but I don't blame Delphi for that), which is very nice.</p>
<p>Now, if only the beast could be made to run a bit faster (which <a href="http://blogs.borland.com/ao/archive/2004/11/24/1937.aspx">will be done</a>), I'd say it's the best Delphi since Delphi 2 - which means quite a lot...</p>
<p>Thanks Borland.</p>
<p>PS: I know that it's currently in fashion to bash Borland and to whine about everything they do. And for the fourth consecutive year now, I read postings about Delphi's impending doom everywhere on the net. But consider this: Delphi still is the only RAD tool out there producing 100% native Windows executables. And it still has one of the liveliest communities I know of in the Windows world. Even if Borland <em>would</em> kill off Delphi, I'm quite certain it would not go down so easily. Not with this community.</p>
<p>Oh, and speaking of killing off Delphi: seeing this great release of Delphi 2005, I am quite assured that Borland will continue supporting us.</p>
<p>So: Quit whining around!</p>
XMLHTTP2004-11-23T00:00:00+00:00http://pilif.github.com/2004/11/xmlhttp<p>Imagine you are working on a webshop.</p>
<p>Imagine further that you have a page displaying the user's shopping cart. Left of each entry, there's an <tt>&lt;input type="text"&gt;</tt> for letting the user change the quantity of the article. So far, quite a common scenario, isn't it?</p>
<p>Now, in the time of DHTML and all that, you write some JavaScript to automatically recalculate the grand total of the shopping cart on the fly as the user changes the quantities. This is very nice, as the user gets immediate feedback on her actions. No page reload is involved.</p>
<p>Now imagine further that the user has changed quite a few quantities. The new cart is nothing like the old one was. The user is very happy with the total recalculating itself on every key she presses while the focus is in one of those edit fields. Very nice.</p>
<p>Now the user realizes that she needs another product. She clicks on the "Browse"-Link and ... </p>
<p>What happens?</p>
<p>Well... the link certainly works, and she browses around the shop looking for another product to order. But there's a serious problem lurking: as all the calculations were done on the client when the user changed the quantities, <em>the server knows nothing about the changes</em>. The server still thinks the quantities are unchanged (provided something like an HTTP session emulation is at work - but how would you implement a shopping cart without it?). When the user looks at the cart the next time (even after reloading the cart page), she will see all the old values.</p>
<p>How to fix this? (Jonas, if you read this entry: This is about the solution to a problem we faced about a year ago while working on PopScan SMB). Most common today is one of the following:</p>
<ul>
<li>Post the form on every change of the quantity. While this fixes the problem, it's not very convenient for the user - especially if she uses a slow modem link. And even if the link is fast: reloading the page whenever I tab out of an edit field is very disturbing (though I've seen sites where the page even reloads <em>on every key press</em>).</li>
<li>Don't recalculate anything, but provide an "Update values"-Button. This is what most users are used to as this is how the web worked so far: You enter something, you submit it to the server or you lose it.</li>
</ul>
<p>Now this is where XMLHTTP comes to play.</p>
<p>While it has XML in its name, it has very little to do with XML. It's a technology for sending HTTP requests from JavaScript. And not only that: the requests are sent in the background, completely transparently to the end user. She doesn't notice a thing while the script is posting requests. As the API is asynchronous, there is no waiting involved either - not even over slow lines.</p>
<p>So.. how does it work?</p>
<p>I used this function to post back quantity-changes from my shoppingcart:</p>
<pre class="code">
function updateToServer(quant, art) {
    var xmlhttp = false;
    /*@cc_on @*/
    /*@if (@_jscript_version >= 5)
        // JScript gives us conditional compilation, so we can cope
        // with old IE versions and security-blocked creation of
        // the objects.
        try {
            xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
        } catch (e) {
            try {
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            } catch (E) {
                xmlhttp = false;
            }
        }
    @end @*/
    if (!xmlhttp && typeof XMLHttpRequest != 'undefined') {
        xmlhttp = new XMLHttpRequest();
    }
    xmlhttp.open("GET", "/index.php/order/qchg?a=" + encodeURI(art) +
                 "&q=" + encodeURI(quant), true);
    /* not interested in feedback. if it doesn't work, too bad; other
       methods provide a fallback.
    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4) {
            alert(xmlhttp.responseText);
        }
    }
    */
    xmlhttp.send(null);
}
</pre>
<p>(Disclaimer: much of the code comes from <a href="http://jibbering.com/2002/4/httprequest.html">this page</a>. If you know what you are doing, copy&amp;paste really is a timesaver.)</p>
<p>What does it do?</p>
<ol>
<li>It uses some IE-trickery with conditional code to instantiate the object.</li>
<li>If the IE-code does not get run (on every standards-compliant browser), it uses the common way to instantiate the thing</li>
<li>It prepares the request</li>
<li>It sets up some event-handlers. As I'm not interested in the outcome, I'm not setting up anything.</li>
</ol>
<p>As you can see, I created a special url for accessing my shop-system, just for updating the quantities.</p>
<p>This function is called from the <tt>onChange</tt> event of the quantity input boxes. Now, whenever the user changes a quantity, <tt>/index.php/order/qchg</tt> is called, advising the server to update the quantity. (If you find the URL strange - using PATH_INFO and all that - I will post something about a PHP design pattern I'm using that has proven to be the most powerful in all those years I've been working with PHP.)</p>
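<p>The post doesn't show the server side of <tt>/index.php/order/qchg</tt>. As a rough sketch (in Python rather than the shop's PHP, with invented names and a plain dict standing in for the session-stored cart), the handler only has to parse the two query parameters and update the cart:</p>
<pre class="code">
from urllib.parse import parse_qs, urlparse

# session-stored cart: article id -> quantity (hypothetical representation)
cart = {"A100": 1, "B200": 3}

def handle_qchg(url, cart):
    """Update a cart quantity from a /order/qchg?a=...&q=... request (sketch)."""
    params = parse_qs(urlparse(url).query)
    article = params["a"][0]
    quantity = int(params["q"][0])
    if quantity > 0:
        cart[article] = quantity
    else:
        cart.pop(article, None)  # a quantity of 0 removes the line
    return cart

handle_qchg("/index.php/order/qchg?a=B200&q=5", cart)
print(cart)  # {'A100': 1, 'B200': 5}
</pre>
<p>Since the client never reads the response, the handler can return an empty body; all that matters is that the session's cart now matches what the user sees.</p>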
<p>Problem solved.</p>
<p>And just 30 minutes after implementing this method, I found out that, for the purpose I'm using it for, this whole XMLHTTP thing would not even be necessary:</p>
<p>While some trickery with FRAMEs could do the same thing, the best method - one that even works with Netscape 4.x (even 3.x, if I remember correctly) - would be to conditionally change the URL of a (transparent 1px<sup>2</sup>) image. This always works as long as no feedback from the script must be evaluated:</p>
<p>Pseudocode:</p>
<pre class="code">
function updateToServer(quant, art) {
    document.images['qposter'].src = "/index.php/order/qchg?a=" + encodeURI(art) +
                                     "&q=" + encodeURI(quant);
}
</pre>
<p>A one-liner: no frame trickery (frames are bad - even for such things), no figuring out which object to instantiate, no problems with almost-compliant browsers... very nice, but nowhere near structural markup, which is why I prefer the less hacky XMLHTTP solution.</p>
<p>I hope this was helpful for you. And as I'm progressing with this very interesting project I'm working on, I certainly will have more of such things to post.</p>
PostgreSQL vs. MySQL - a subjective view2004-11-17T00:00:00+00:00http://pilif.github.com/2004/11/postgresql-vs-mysql-a-subjective-view<p>Still quite enthusiastic about <a href="http://www.gnegg.ch/archives/201-PostgreSQL-rocks!.html">my success</a> with <a href="http://www.postgresql.org">PostgreSQL</a> earlier today, and after reading the first comment on that entry, I think it's time for a little list describing why I prefer PostgreSQL to MySQL, and another one describing what MySQL does better:</p>
<h3>PostgreSQL</h3>
<ul>
<li><tt>psql</tt>, the command-line tool for accessing the database, is much better than the MySQL counterpart. What many don't seem to know is <tt>\x</tt> (expanded display). Try it and you will ask yourself why <tt>mysql</tt> can't do that. Also, I really like that a pager is invoked when dealing with large result sets. MySQL does not do that either.</li>
<li>The license. While I certainly prefer any free software license to any proprietary one, I much prefer the more liberal BSD one. But I better leave the flam^Wphilosophizing about this to others...</li>
<li>All those "professional" database features like VIEWs, stored procedures (which can even be written in Perl or Python), triggers, rules, enforced referential integrity and all that stuff. I could never imagine going back to a database without VIEWs. Those things are incredibly useful, both for a much friendlier interface to complex data and for integrating different pieces of software.</li>
<li>The community around PostgreSQL is very strong. Reading the "general" and "developers" mailing lists is very interesting and often provides very good insight into database design.</li>
</ul>
<p>Back in 2002, when I was working on the new adsl.ch, I used VIEWs to satisfy the needs both PostNuke and phpBB2 had concerning their table of user accounts. With a view and a little customized scripting, I was able to integrate both without any patching around in either of them, which makes applying security updates so much easier. This is where I decided that I will never use anything but PostgreSQL for my database needs.</p>
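<p>The trick generalizes: keep one real account table and expose each application's expected layout as a VIEW over it. A minimal sketch using Python's sqlite3 module to host the SQL (the table and column names are invented here, not the actual PostNuke/phpBB2 schemas):</p>
<pre class="code">
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- the single, real account table (invented schema)
    CREATE TABLE accounts (
        uid     INTEGER PRIMARY KEY,
        login   TEXT NOT NULL,
        pw_hash TEXT NOT NULL
    );
    INSERT INTO accounts (login, pw_hash) VALUES ('pilif', 'abc123');

    -- the layout a second application expects, served by a VIEW
    CREATE VIEW forum_users AS
        SELECT uid     AS user_id,
               login   AS username,
               pw_hash AS user_password
        FROM accounts;
""")

row = conn.execute("SELECT username, user_password FROM forum_users").fetchone()
print(row)  # ('pilif', 'abc123')
</pre>
<p>In PostgreSQL you can additionally make such a view writable with rules (or, in later versions, triggers), so both applications can update the same accounts without ever knowing about each other.</p>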
<h3>The mySQL list</h3>
<ul>
<li><tt>mysqli</tt> is an object-oriented interface for PHP scripts built directly into the language (and thus fast). Too bad it requires MySQL 4.1, as Gentoo does not have fitting ebuilds yet. And don't get me wrong: Postgres' interface is not bad either.</li>
<li>Seems easier to handle. Just install and run. ALTER TABLE is much more powerful than in PostgreSQL, so changing the structure after the fact is easy. Nothing must be configured to get quite the optimum performance</li>
<li>Clustering built into the core of the database, though it's still a master-slave replication which provides fail-safety, but no (real) load balancing.</li>
</ul>
<p>ALTER TABLE in PostgreSQL 8 is about as powerful as the one in MySQL, but PostgreSQL 8 suffers from the same problem as MySQL 4.1: no Gentoo ebuild. Here on my iMac I'm already running the latest beta of 8.0.</p>
<p>The decision to go with PostgreSQL is an easy one: none of the advantages of MySQL are big enough to outweigh the missing features. Oh, and if you ask for benchmarks and tell me that PostgreSQL is slower than MySQL, let me tell you this: while I doubt that this statement is still true (mySQL got slower due to the transaction support and PostgreSQL got much faster), I can say one thing for certain: PostgreSQL is fast enough for my needs. What is it worth giving up data integrity and writing lots of dirty code that should really live directly in the database, just for a percent more performance or so?</p>
<p>Another thing is how those systems perform under high load. While I know for certain that PostgreSQL handles it well and stays fast for many more concurrent connections, I keep hearing about problems from people using mySQL: corrupted tables (sometimes beyond repair), hanging connections,... Nothing I want to happen to me, even if it means living with one or two percent less performance under unrealistic, benchmark-style load.</p>
<p>Oh, and everything I said about performance is quite unscientific. While I did some load tests with Postgres, all my experience with MySQL under the same conditions comes from other people. I never tried it myself. Why should I? PostgreSQL is perfect.</p>
PostgreSQL rocks!2004-11-17T00:00:00+00:00http://pilif.github.com/2004/11/postgresql-rocks<p>I <a href="http://www.gnegg.ch/archives/138-All-time-favourite-Tools.html">said so before</a>, but I have to say it again: <a href="http://www.postgresql.org">PostgreSQL</a> is incredibly cool.</p>
<p>Today I had the job of importing around 11'000'000 datasets distributed over 15 tables. Ten million of them went into one big table. And after importing, the whole thing should still respond fast to queries involving JOINs with this large table.</p>
<p>What surprises me: after a bit of tweaking of the settings (one of them being moving the beast to a partition with enough space to store the indexes ;-) ), the queries I ran on a much smaller amount of data before remain as fast as ever. PostgreSQL really makes great use of its indexes.</p>
<p>Granted, importing all those datasets was somewhat slow (I could not and cannot use COPY because I'm just receiving differences), but tweaking around with the indexes helped a lot (tip: drop them while inserting).</p>
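<p>The drop-the-indexes tip looks like this in practice. A small sketch using SQLite via Python (table and index names are invented) rather than PostgreSQL, but the pattern carries over: drop the index before the bulk insert, recreate it once afterwards, so the index is built a single time instead of being updated row by row:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE data (id INTEGER, val TEXT)")
cur.execute("CREATE INDEX idx_val ON data (val)")

# Drop the index before the bulk insert ...
cur.execute("DROP INDEX idx_val")
cur.executemany("INSERT INTO data VALUES (?, ?)",
                ((i, f"row-{i}") for i in range(100_000)))
# ... and rebuild it once, afterwards
cur.execute("CREATE INDEX idx_val ON data (val)")
conn.commit()

print(cur.execute("SELECT COUNT(*) FROM data").fetchone()[0])  # 100000
```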
<p>While processing the import, Postgres still was as responsive as ever while working with other parts of the database. </p>
<p>I know that all this is nothing fancy - I mean, I expect nothing less from a good RDBMS, but still... it's amazing how well and how flawlessly this worked and how fast it is.</p>
<p>Maybe I could get faster INSERT/UPDATE performance if I were using MySQL instead, but I absolutely want to use all those features a real database should have that MySQL lacks: views, referential integrity, subselects (I'm still on 4.0 until Gentoo releases a more current ebuild).</p>
<p>Yes. Postgres is just great.</p>
<p>Just for your interest:</p>
<pre class="code">
pilif@fangorn /home % sudo du -chs pgdata
1.8G pgdata
1.8G total
</pre>
<p>And it's still as fast as your common little webboard-application. I still cannot quite believe it.</p>
AirPort base station and external DHCP server2004-11-12T00:00:00+00:00http://pilif.github.com/2004/11/airport-basesation-and-external-dhcp-server<p>Recently, I bought an AirPort base station.</p>
<p>I wanted to use it as a NAT router and a wireless access point. DNS and DHCP I wanted to do via a fully-fledged BIND/dhcpd combination running on my <a href="/archives/pile_of_new_hardware.html">iMac</a>.</p>
<p>DNS I need because I'm doing some work for the office from home. As much of it is web based, I need virtual hosts on my server and I certainly don't want to go back to stone age and move around <tt>hosts</tt> files. DNS was invented for something, so please, let me use it.</p>
<p>DHCP I wanted because sometimes I'm using applications on my notebook that require some ports forwarded to them (BitTorrent, for example). Forwarding ports without fixed IP addresses can be difficult (especially if changing the forwarding address requires a restart of the router), so I wanted the possibility to give the MAC address of my notebook's NIC a fixed IP address. This is not possible with the AirPort's built-in DHCP server (and I don't blame them for this - it's quite a special feature).</p>
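<p>For reference, handing out a fixed address per MAC looks roughly like this in an ISC dhcpd configuration (the addresses and the MAC are placeholders, not my actual setup):</p>

```
# /etc/dhcp/dhcpd.conf - illustrative values only
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
}

host notebook {
  hardware ethernet 00:11:22:33:44:55;   # the notebook's MAC address
  fixed-address 192.168.1.10;            # always hand out this IP
}
```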
<p>Now, imagine how disappointed I was seeing that this is not possible when using Apple's configuration program:</p>
<p>They tie NAT and DHCP together: either you turn off both NAT and DHCP, NAT only, or none of them. Turning off DHCP only is not possible.</p>
<p>Looking around on the web, I came across Jon Sevy's <a href="http://edge.mcs.drexel.edu/GICL/people/sevy/airport/#Configurator">Java Based Configurator</a> again.</p>
<p>With this tool my configuration certainly is possible:</p>
<ol>
<li>Configure your base station using Apple's utility. Tell it to enable NAT and distribute IP addresses.</li>
<li>Update the configuration and exit Apple's utility.</li>
<li>Run the Java Based Configurator.</li>
<li>On the "DHCP Functions" tab, uncheck the checkbox.</li>
<li>On the "Bridging Functions" tab, uncheck "Disable bridging between Ethernet and wireless LAN".</li>
<li>Save the configuration.</li>
</ol>
<p>The bridging step is important if you want the base station to continue working as a usable wireless access point. I forgot to do this the first time I tried, did not get an IP address, and could not connect to the wired LAN after setting one manually either. Logical, but disturbing if you think you have the solution and it still does not work as expected...</p>
AC3-Divx on my PPC2004-11-09T00:00:00+00:00http://pilif.github.com/2004/11/ac3-divx-on-my-ppc<p>As I've written in the <a href="http://www.gnegg.ch/archives/196-My-hx4700.html">review</a> of my hx4700 PDA, the thing really shines when it comes to displaying XViD videos.</p>
<p>The single big problem about <a href="http://betaplayer.corecodec.org">Betaplayer</a> is that it lacks support for decoding AC3-streams. This is bad, as most of my movies have an AC3 audio stream (always looking for the optimum quality). So I was on the lookout for a solution.</p>
<p>
I quickly found <a href="http://divx.ppccool.com/">PocketDivXEncoder</a>, which comes with nice presets for encoding videos for a PocketPC.</p>
<p>The problem was that the current version always insists on re-encoding the video stream when converting the video file, thus reducing the overall quality (it's no use compressing twice) and taking a long time to do its job (about 50% of realtime or slower; I haven't measured).</p>
<p>Then, on the download page I found a not-so-visible link to the current <a href="http://www.l2ita.net/PDE/PocketDivXEncoder_0.3.51_RC7.exe">beta test version</a>, which has, under "Advanced Settings", an option to leave the video stream alone and just work on the audio stream.</p>
<p>Using this configuration, recoding just the AC3 stream becomes possible. As it leaves the video alone, it's reasonably fast too - about 4 times realtime on my Thinkpad.</p>
<p>This is a usable solution until Betaplayer gets AC3 support.</p>
Pile of new hardware2004-11-08T00:00:00+00:00http://pilif.github.com/2004/11/pile-of-new-hardware<p>Last Wednesday, I finally did what I have been talking about since the very beginning of this blog: I bought myself a Mac. In the end, what led me to the decision was that I wanted my home server back. I had some requirements for the new hardware:</p>
<ul>
<li>It must run quietly. I don't have enough space to designate a dedicated server room, so the server, if constantly running, must be quiet. This is where my older solution failed.</li>
<li>It must run a UNIX derivative. Much of the work I do requires a UNIX server. Having such a beast at home can save me from going to the office every now and then.</li>
</ul>
<p>The iMac (17 inch, 1.8 GHz) I finally bought fulfills those two requirements (it's quite quiet as long as I'm not doing anything calculation-intensive - which I'm not, at least not when sleeping) and has the additional benefit of a cool UI frontend.</p>
<p>So in the end, this was a logical decision: had I decided to go with a quiet Linux box, the parts alone would have been more expensive - not to speak of the time required to assemble the beast.</p>
<p>Setting up the iMac was easy (as I expected). First, I wanted to go with <a href="http://www.gentoo.org">Gentoo</a> for MacOS X, but this is still heavily under construction, so I went with <a href="http://fink.sf.net">Fink</a> for my UNIX needs.</p>
<p>Now I'm running a DNS, DHCP, PostgreSQL and Apache Server. All I need to do my work.</p>
<p>So, after my UNIXish needs were fulfilled, there came the Macish ones: I wanted to video-iChat with my girlfriend. This has proven to be quite a hassle to set up:</p>
<p>We never managed to get a working connection. I got timeouts every time I tried to connect. A bit of debugging around on my ZyWALL router quickly determined it as the culprit: despite my having configured the iMac as default NAT server, the device did not forward any UDP packets to the host. No wonder this did not work.</p>
<p>So it was time for my now nearly 4-year-old ZyWALL to be replaced (lately it began crashing quite often anyway). As I did not want to take any further risks, I bought an AirPort Base Station. Works nicely - also with my other gear (PocketPC and Thinkpad).</p>
<p>Furthermore, it became clear to me that I finally have a continuously running server in my home, so I could finally (at least somewhat) justify buying myself a <a href="http://www.slimp3.com">Squeezebox</a>. This device arrived only today and while I knew how great the thing is, it surprised me even more now that I've had a look at it: so many settings to tweak, and such great hardware quality. Very good.</p>
<p>In the end, the last week (ending today) was quite hardware-intensive:</p>
<ul>
<li>The iMac</li>
<li>Two <a href="http://www.apple.com/isight/">iSight</a>s (one for me, another for my girlfriend). Speaking of the iSight: I've just noticed that the iMac has a magnet in the middle of the screen, to be used with the iSight magnet holder to position it nicely in the middle of the screen. Very nice.</li>
<li>An AirPort Extreme Base Station. Not that I wanted one, but the investment into the iSights would have been in vain, as it's technically impossible to video-chat over the ZyWALL.</li>
<li>The squeezebox</li>
</ul>
<p>All in all quite a lot of junk, but so much fun to play with ;-)</p>
Two years of gnegg.ch2004-11-05T00:00:00+00:00http://pilif.github.com/2004/11/two-years-of-gneggch<p>Two years ago, I started to use my spare gnegg.ch domain with this weblog. My <a href="http://www.gnegg.ch/archives/1-Welcome.html">first posting </a> was quite the ordinary welcome-posting. Even back then, I promised to create a better layout for the site, which I finally did <a href="http://www.gnegg.ch/archives/84-CSS-Im-getting-into-it.html">this february</a>:</p>
<blockquote>
As I am not-so-good™ with layout, I kept the default one of Movable Type, my blogging-engine. Maybe Richard will help me here sometime in the future.
</blockquote>
<p>And Richard did a really good job with it. Thanks again.</p>
<p>Many times, gnegg was lingering around a bit, but I managed to pull myself together every time and in the last two years there was at least one post every month. Since around January 2004, I post much more often. Currently I have nearly 200 postings on the site, which means that I wrote the same number of postings in 6 months that took me a year and a half before: my <a href="http://www.gnegg.ch/archives/100-Entry-100.html">100th post</a> was only this March.</p>
<p>With the increased number of postings, I also got more visits: in 2003 there was an average of 115 visits a day producing 184 pageviews. Now it's more like 552 visits producing 12883 pageviews. Tendency: rising. Thank you, my fellow readers, for this.</p>
<p>With gnegg.ch becoming better known, the problems grew too: currently, I'm filtering about 50 spam comments per day. A year ago it was at most one per month.</p>
<p>Posting here still is a lot of fun and I'm certainly going to continue writing here.</p>
<p>And in case you wonder what "gnegg" actually means: it's nothing. In 2001 I created that word quite by accident by typing around on the keyboard to create some blind text, and I liked it so much that I reserved the domain... What I liked about the name was that it was quite uncommon on the internet so far. Ok, there's <a href="http://www.floom.com/words/lyrics_gnegg.htm">this</a>, but whatever it is, it's funny anyway...</p>
My hx47002004-11-01T00:00:00+00:00http://pilif.github.com/2004/11/my-hx4700<p>Last Thursday, I <a href="http://www.gnegg.ch/archives/195-HP-iPAQ-hx4700.html">wrote</a> about my iPAQ hx4700 and promised a more thorough review. Well, here it is:
</p>
<p>
First of all, I can't understand all the moaning out there about the device being so big. Granted, it's larger than most PDAs, but much less bulky than all its predecessors (the last one I used being the iPAQ 5550). Also, it's actually lighter than the previous model.
</p>
<p>And then there's this plastic cover everybody is complaining about. I can't confirm that either. You can flip it around without problems or fear of breaking it. So I actually think the cover is quite great, as it does not thicken the device while still providing excellent protection for the display.</p>
<p>Speaking of displays: this is where I absolutely concur with the other reviewers: it's great. Extremely great. And while I understand Microsoft's intentions when they created this special VGA mode (essentially you have the same amount of real estate on the screen as with the 320x240 resolution; it just looks better and more detailed), I got used to the extremely small look of the screen with <a href="http://www.pocketgear.com/software_detail.asp?id=14679">SE_VGA</a> enabled, which is how I currently have my device configured.</p>
<p>SE_VGA turns off this pixel duplication and provides real VGA resolution. Everything gets quite smallish, but you know: I'm the <a href="/archives/no_more_blur.html">resolution guy</a>...</p>
<p>Unfortunately, none of the programs currently out there are prepared for this extended VGA mode. The glitches range from too much whitespace over cut-off icons to quite unusable screens (iPAQ Wireless, the communications center of the device, being such an example). All in all, the high resolution outweighs those glitches for me.
</p>
<p>Turning this mode on and off requires a warm reboot, unfortunately.</p>
<p>The eye-candy software HP provides with the device is quite useless: the screensaver does not make sense to me (I don't have pictures I would place on it, and the status information provided on the today screen is more useful anyway) and the today-applet is unconfigurable and not really featureful either.
</p>
<p><a href="http://www.spbsoftwarehouse.com/products/pocketplus/?en">Spb Pocket Plus</a> is much more useful for that matter.</p>
<p>While HP provides a copy of <a href="http://www.pocketinformant.com">Pocket Informant</a> on the ROM, the version is already outdated. Updating is possible, but is a tedious process if you want to profit from the 50% rebate.</p>
<p>Really useful is the Bluetooth phone configurator. Getting a GPRS connection to work has never been easier. The phone and even the mobile provider were recognized and automatically configured.</p>
<p>What I extremely dislike about the device is that it has only 64 MB of RAM. This led me to install all the applications to the iPAQ File Store, which is about 100 MB large. While this is a solution, it has two problems:</p>
<ol>
<li>Flash ROM is slower than RAM. Starting Mobile Agent (a GPS application) already took a long time when installed in RAM. Now imagine starting it from ROM. We're talking <em>minutes</em> here!</li>
<li>The software installation is semi-automatic as I must change the folder on every installation.</li>
</ol>
<p>As software can easily be reinstalled even after power loss (and thus empty RAM), I don't see any advantage of the overly big ROM at the cost of more useful RAM.</p>
<p>Battery life is average. One day of heavy usage, most of it connected via WLAN and some via Bluetooth and GPRS (have I told you how great this Bluetooth phone tool is? I guess I have, but saying it again is the least I can do to emphasize how great it really is ;-) ), just about brings the battery down. This is neither more nor less than I had expected.</p>
<p>What I really looked forward to (besides a better user experience while bathtub-surfing) was watching videos on the device. Lying in bed, ready to sleep, watching an episode of Stargate or so on a device that turns off quickly is much more comfortable than using the notebook or even the video beamer.</p>
<p>First, I was quite disappointed: using Windows Media Encoder and the built-in WMP9 was quite laggy: many frame drops, bad sound quality.</p>
<p>Where the device really began to shine was with XViD movies and <a href="http://betaplayer.corecodec.org/">Betaplayer</a>: no frame drops, great picture quality and no encoding time. Very nice.</p>
<p>Now, if AVI were streamable and/or if the PocketPC were just a little bit faster at transferring files, this would get really great.</p>
<p>As it is now, copying a movie to the device without swapping SD cards around is impossible. Neither ActiveSync (much faster over USB 2) nor WLAN is fast enough for transferring a movie in a bearable time frame. I think this is either a problem in the OS or in the bus the CF and SD cards are connected to.</p>
<p>All in all, I'm very happy with the device. Its slick look, the metallic body and the more-than-sufficient performance make it a great upgrade from my 5550. What I'd wish for in a future incarnation would be more RAM and an HID-capable Bluetooth driver, so I could use <a href="/archives/fun_with_logitech.html">my BT keyboard</a> with it to write my short stories in bed too (none posted here yet).
</p>
<p>Oh, and last but not least: maybe you're asking yourself about the touchpad of the hx4700. It's no coincidence I forgot to write about it till just now: it's unspectacular. On one side it somewhat kills gaming (which does not matter to me as I'm not using my PDA for gaming - I have my GBA SP for that), on the other hand it's just there. Neither useful nor useless. Neither comfortable nor not. Neither an innovation nor not. It's just there.</p>
<p>I have not yet come across an occasion where it really is useful, but having the cursor mode on is no disadvantage either (besides the mouse pointer floating around), so it really doesn't matter to me.</p>
HP iPAQ hx47002004-10-28T00:00:00+00:00http://pilif.github.com/2004/10/hp-ipaq-hx4700<p>Finally.</p>
<p>Last July, Lukas and I ordered HP's iPAQ hx4700 and only just today did it finally arrive. Nice thing. I'm still looking at everything, so I'm going to post a deeper review sometime in the future.</p>
<p>But now back to my new toy ;-)</p>
Learning by example2004-10-27T00:00:00+00:00http://pilif.github.com/2004/10/learning-by-example<p>After getting through <a href="http://www.gnegg.ch/archives/180-Head-First-Servlets-JSP.html">Head First Servlets & JSP</a>, yesterday I bought <a href="http://www.oreilly.com/catalog/0596006519/index.html">Programming Jakarta Struts</a> just out of pure interest. You never know when knowing those things may come in handy.</p>
<p>Currently I'm somewhere in chapter 3 and already know quite a lot of things about Struts (that I really like the framework is one of them - I should really try to do something servlet-ish in the future). Chapter 3, for those who don't know the book, is an introduction to Struts by example of a very simple online banking application.</p>
<p>And this gets me to the point: I'm a very practical person and I despise doing lots of theoretical stuff. Usually I quite soon reach a point where I lose interest because the topic gets too theoretical.</p>
<p>This is why I learn best using examples.</p>
<p>When I have to learn some database structure, I usually don't even try to learn from the documentation. I just look at how the database is built to learn how to use it. That way, I'm doing something practical while still learning how to do the right thing. Only when I'm not sure about something do I go look at the documentation.</p>
<p>The same thing with meetings. As soon as it gets redundant, I almost immediately lose interest. My brain hungers for more, clear information. If there is some, it just sticks. I seldom take notes and I seldom forget important stuff - just as long as it's non-redundant and somewhat visual.</p>
<p>So chapter three of the Struts book is the optimal way for me to learn something, as it explains things by dissecting a complete application. This way I always know the big picture and have a practical goal (the application), which helps me greatly in understanding and memorizing the details.</p>
<p>And all this is the reason I like doing what I do at <a href="http://www.sensational.ch">our company</a> so much. Our philosophy has always been to try something out, never to think of being unable to do something, and every time to say yes to a request from a potential customer.</p>
<p>That way, I can always be on the lookout for practical solutions. I can always learn by example (the project I'm currently working on). In the last five years it seldom happened that I had to do something I had done before. It's learning, trying, erring, trying again all the time.</p>
<p>And as this is how I work best, we have never failed so far to actually deliver what we promised. From my very first CGI script ("CGI? Never did that... but it can't be that difficult") over streaming satellite TV over the internet to <a href="http://www.gnegg.ch/archives/177-Extreme-fun-with-Linux.html">Linux powered barcode scanners</a>: it always worked out. And it always will.</p>
Fix for comment spam?2004-10-25T00:00:00+00:00http://pilif.github.com/2004/10/fix-for-comment-spam<p>Yesterday, <a href="http://www.7nights.com/asterisk/">asterisk*</a> talks <a href="http://www.7nights.com/asterisk/archive/2004/10/easy-comment-spam-fix">about comment spam</a> and an easy fix for it.</p>
<p>Reading <a href="http://weblog.burningbird.net/archives/2002/10/29/comment-spam-quick-fix">the article</a> gives quite a good insight on how those spammers work: They don't seem to really request the page of your entry, but they only submit hardcoded values in some database.</p>
<p>This gets this seemingly simple trick to work: instead of reading the weblog page and submitting the real form, spammers still submit the hardcoded values, missing the additional form element.</p>
<p>Unfortunately, this problem is easy to fix for the spammer: just update the database with the new information from the forms. And I promise you: as soon as this hack gets better known (which is bound to happen soon, as it's so simple to implement), they will update their scripts.</p>
<p>The logical next consequence would be to change this additional tag more often, leading to the spammers updating their index more often.</p>
<p>The ultimate consequence would be a script generating some kind of random cookie which is different on every request. This in turn would lead the spammers to actually request the form before submitting it.</p>
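<p>Such a per-request random cookie could be minted as a signed nonce. Here is a hedged sketch in Python (the function names and the secret are invented for illustration, not any weblog engine's actual API): the page embeds the result of <tt>issue_token()</tt> in a hidden form field, and the comment handler calls <tt>check_token()</tt> before accepting the post, so a hardcoded submission without a freshly fetched token is rejected:</p>

```python
import hashlib
import hmac
import secrets

SECRET = b"server-side secret"  # assumption: kept out of the page source

def issue_token() -> str:
    """Mint a fresh token to embed in a hidden form field."""
    nonce = secrets.token_hex(8)
    sig = hmac.new(SECRET, nonce.encode(), hashlib.sha256).hexdigest()
    return f"{nonce}:{sig}"

def check_token(token: str) -> bool:
    """Accept only tokens we minted ourselves."""
    try:
        nonce, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

print(check_token(issue_token()))  # True
print(check_token("hardcoded"))    # False
```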
<p>I don't think I have to name the consequences of that: the spam will stay, but the bandwidth needed will increase greatly. Instead of just posting, the spammer will also request the whole page.</p>
<p>And the spammer will certainly do that on all weblogs. Regardless of whether they deploy this cookie or not.</p>
<p>So in the end, this "fix" just makes the whole thing worse for all us bloggers.</p>
<p>Sorry. No solution. Or is it? Convince me otherwise!</p>
Favourite Thunderbird Extensions2004-10-25T00:00:00+00:00http://pilif.github.com/2004/10/favourite-thunderbird-extensions<p><a href="http://www.gnegg.ch/archives/173-My-favourite-Firefox-extensions.html">last time</a> I talked about my favourite extensions to <a href="http://www.mozilla.org/products/firefox/">Firefox</a> and while this list is outdated already (I've got some more on the list), I think it's time for the <a href="http://www.mozilla.org/projects/thunderbird/">Thunderbird</a> list:</p>
<ul>
<li><a href="http://enigmail.mozdev.org/download.html">Enigmail</a> is the all-you-need solution for encryption matters. Unfortunately, not many of my usual addressees have GPG keys yet, but maybe that's going to change. Important emails I send out are signed.</li>
<li><a href="http://quotecolors.mozdev.org/">QuoteColors</a> is a must-have for me, as only with this extension does Thunderbird comply with point 7 on <a href="http://www.gnegg.ch/archives/34-Mail-for-Windows-as-I-like-it.html">my list</a> of features I want in a mail client.</li>
<li><a href="http://www.brunschwig.net/ClearSearch/">ClearSearch</a> re-adds the Clear button to the search toolbar. It's quite click-intensive to clear the filter without this button, and it even serves as another indicator of whether a filter is active or not (it's disabled if not).</li>
</ul>
<p>So: Not many Extensions, but absolutely important to me. I wonder: What are others using? The same? More? Different?</p>
Web Programming with CSS2004-10-22T00:00:00+00:00http://pilif.github.com/2004/10/web-programming-with-css<p>For the first time in a very long time, I'm able to use a completely de-table-ized design in pure XHTML and CSS for creating a little web-application in PHP.</p>
<p>While many people just quote less bandwidth usage and better maintainable HTML code as the big advantages of using pure CSS layouts, let me add another one: extremely increased productivity for programmers bringing interactivity to the layout.</p>
<p>It's a real pleasure: never was it so easy to just concentrate on the functionality. No layout information creeping into the business logic because it's the only way to get some stupid placeholder GIF into the layout. No more error-prone stitching together of immense and complicated HTML snippets. No more debugging what went wrong when building one of those complicated layout tables.</p>
<p>And of course: no more pulling out hairs when having a look at the size of the generated HTML code.</p>
<p>I've never been as productive in coding a web application as in this case, where the HTML code is clean and the design is where it belongs: in the CSS (which I don't have to touch (anymore - the whole thing happens to be written by myself - Richard isn't that good at CSS yet)).</p>
<p>If only all future projects were CSS-only. It would make life so much easier...</p>
Explain This!2004-10-22T00:00:00+00:00http://pilif.github.com/2004/10/explain-this<p>Would anyone care to explain this to me:</p>
<div align="center">
<a href="http://www.gnegg.ch/archives/stats.png"><img alt="stats.png" src="http://www.gnegg.ch/archives/stats-thumb.png" width="400" height="167" /></a>
</div>
<p>I mean: While I can understand that an entry concerning filesharing is very popular and while I really see the sense in the rdf-File being requested often, I can absolutely not understand what's so interesting about suburban railways!</p>
<p>I for myself certainly find it interesting, but none of the people around me share this interest. Who would have thought that there are more fans of railways out there on the net than there are people having problems with their P800 phone...
</p>
<p>Reading logfile analysis can be so interesting at times...</p>
<p>Oh, and on another note: I would be really interested to know how many people have actually subscribed to the RDF feed and thus come back regularly to read what I have to write. So, RSS subscribers: stand up and post a little comment here. An "I do" certainly suffices.</p>
<p>As the traffic really peaks whenever I post an entry, there certainly have to be <em>some</em> subscribers.</p>
SQLite on .NET CF2004-10-21T00:00:00+00:00http://pilif.github.com/2004/10/sqlite-on-net-cf<p><a href="http://www.sqlite.org">SQLite</a> never ceases to amaze me. First, we <a href="http://www.gnegg.ch/archives/177-Extreme-fun-with-Linux.html">got it to compile</a> on our small ucLinux-based barcode scanner, where it not only works flawlessly, but extremely fast too.</p>
<p>Now I thought about using SQLite in a little PocketPC application I intend to write using the .NET Compact Framework - this after some very bad experience with SQL Server CE:</p>
<ul>
<li>There is no useful frontend to modify the data in the sdf file: there is no tool for the desktop (besides using SQL Server and then replicating the data to the device, which I actually got to work this March or so, but it was a major pain in the ass to set up and is no solution for me. I mean: why should I install a whole SQL Server just to get some test data onto a smart device?) and the little frontend on the PocketPC suffers from the small screen and the lack of a keyboard.</li>
<li>Despite everyone claiming it's fast, it isn't (though this certainly is relative; I'm sure the marketing department of MS is still convinced that it's fast). Where some operations may be, others are not. Searching for strings is an example of extreme slowness.</li>
<li>Starting an application using SQL Server CE takes about a minute on a usual 400 MHz PocketPC. <b>Way</b> too long to be used in production with customers.</li>
</ul>
<p>So, using a lightweight local SQL engine which is fast even on a 66 MHz CPU without MMU sounded quite appealing to me. Just: how much work would it be? How well would I be able to integrate SQLite into .NET?</p>
<p>Knowing about the lack of features in P/Invoke on the compact framework and knowing that the SQLite API uses callback functions, I feared the worst, but fortunately, I googled before getting to work.</p>
<p>So, I found <a href="http://sourceforge.net/projects/adodotnetsqlite/">this project</a>.</p>
<p>They provide you with a full-fledged ADO.NET driver for SQLite, so you can use all the database classes and components you are used to, while still profiting from the advantages of SQLite</p>
<p>Compiling it was easy (while they provide pre-built binaries of sqlite.dll and the P/Invoke wrapper sqlite_wrapper.dll, they do so only for ARM and the desktop version of Windows for x86, so if you want to use the emulator for development, you have to build those two DLLs yourself - using eVC4) and a quick look (it's already late now - I got up more than 17 hours ago, so I'm quite tired) using the sample project <a href="http://www.eggheadcafe.com/PrintSearchContent.asp?LINKID=720">here</a> was quite successful: the application started (instantly, no wait) and displayed the data inside the SQLite file.</p>
<p>So, the speed problem is solved. What about the frontend? While I don't know any Windows GUI frontends for SQLite (though I know they exist), I have already worked with the <a href="http://www.ch-werner.de/sqliteodbc/">SQLite ODBC driver</a> (it's funny to think about: usually ODBC drivers are just a middleware between the application and the database, but in the case of sqliteodbc, the database engine is <b>linked into</b> the ODBC driver. Strange), and of course the command line tool and the PHP extension. So for my purposes, I'm going to create the database using a PHP script on Linux and copy the .db file to the PocketPC. As seamless as possible. No replication, no installation of servers, no nothing. Just plain old copy.</p>
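<p>Creating such a .db file from a script only takes a couple of lines. The post above mentions a PHP script; here is an equivalent sketch in Python's sqlite3 module (table name and test data are made up), producing a file ready to be copied over:</p>

```python
import os
import sqlite3
import tempfile

# Build the database in a fresh file; afterwards, just copy it to the device.
path = os.path.join(tempfile.mkdtemp(), "testdata.db")
conn = sqlite3.connect(path)
cur = conn.cursor()
cur.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
cur.executemany("INSERT INTO products (name, price) VALUES (?, ?)",
                [("Widget", 9.90), ("Gadget", 19.90)])
conn.commit()
conn.close()

print(os.path.exists(path))  # True
```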
Delete-Key in zsh2004-10-21T00:00:00+00:00http://pilif.github.com/2004/10/delete-key-in-zsh<p>I’m a big fan of <a href="http://www.zsh.org">zsh</a>. Besides having an awful lot of features, it was <a href="http://zsh.sourceforge.net/Guide/zshguide.html">this guide</a> (called “User-friendly user guide”) that brought me up to speed on unix-shell matters back then.</p>
<p>So it’s only logical that my default shell is the one the guide is about ;-)</p>
<p>What annoyed me majorly was that in Gentoo Linux, the delete key did not work in zsh (unless of course you count outputting ~ instead of forward-deleting as “working”).</p>
<p>Finally I got around to fixing that.</p>
<p>Adding</p>
<pre class="code">bindkey "^[[3~" delete-char
bindkey "^[3;5~" delete-char</pre>
<p>to your <tt>.zshrc</tt> enables your delete key on every conceivable keyboard. Finally!</p>
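<p>The <tt>^[[3~</tt> above is simply the escape sequence the terminal sends when you press Delete. If you want to check what your terminal's terminfo entry records for a key, Python's curses module can look it up (<tt>kdch1</tt> is the terminfo capability name for the Delete key; this assumes a terminfo database is installed):</p>

```python
import curses

def key_sequence(capname, term="xterm"):
    """Return the byte sequence terminfo records for a key capability."""
    curses.setupterm(term)          # load the terminfo entry for `term`
    return curses.tigetstr(capname) # bytes, or None if the cap is absent

# 'kdch1' = "delete character key"; for xterm this is ESC [ 3 ~,
# i.e. exactly the ^[[3~ that bindkey expects.
print(key_sequence("kdch1"))
```

The same lookup works for any capability name, e.g. <tt>khome</tt> and <tt>kend</tt> if Home and End misbehave too.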
Soundblaster Audigy 2 NX Driver2004-10-18T00:00:00+00:00http://pilif.github.com/2004/10/soundblaster-audigy-2-nx-driver<p>If you are like me, then you certainly throw away or lose CDs containing drivers for your hardware. I mean: why should you not? These days, every hardware vendor has the most current drivers on its webpage, ready to be downloaded.</p>
<p>Actually, using the drivers on the CD often does not even work... You know - little or no quality management.</p>
<p>But then, there is my Audigy 2 NX card. I bought it, so I can hook my AC3-Receiver to my ThinkPad when watching an occasional DivX movie with AC3-Sound. For that matter, I have not used the little silvery box for the last five months or so. As I noted before, I have long lost the driver CD.</p>
<p>You can certainly imagine how annoyed I was when I saw that Creative Labs only provides driver <b>updates</b> requiring the original driver to be already installed.</p>
<p>I don't even want to think what I would have to do to get back to a CD or a full installer. I can well imagine that the support - if it even answers my calls for help - would really like to see me buy another package - just for a new driver CD (which I will lose again - eventually)</p>
<p>So, I needed another solution.</p>
<p>This is what I did:</p>
<ol>
<li>Plug in the Audigy and turn it on</li>
<li>Download and run the driver installer (the older of both versions)</li>
<li>Wait for it to tell you that the software must be installed</li>
<li>Go to <tt>[Path_to_profile]\Local Settings\Temp</tt> (where [Path_to_profile] is usually C:\Documents and Settings\[your username]) and look for a folder that was just created. If you don't want to search, use <a href="http://www.sysinternals.com/ntw2k/freeware/procexp.shtml">Process Explorer</a> and look at which handles the installer has opened.</li>
<li>In this directory, you'll find a folder named "Drivers". Copy that to somewhere else</li>
<li>Open the Device Manager (Start - Settings - Control Panel - System - Hardware - Device Manager or much faster Windows-Pause)</li>
<li>Right-Click on your Soundcard (either not recognized or recognized as "USB Audio Device"), select "Update Driver..."</li>
<li>Don't let the assistant install anything automatically</li>
<li>Provide the path where you copied the <tt>Drivers</tt> directory to in step 5</li>
<li>Windows will install the driver which Creative's installer would not have let you install</li>
<li>Click "OK" in the installer from Creative. It'll think that the original software is installed, re-install your manually installed driver, flash the card's firmware and exit nicely</li>
</ol>
<p>This saved me from a lot of stupid asking around or even re-buying a piece of hardware I already own.</p>
<p>I can't understand why Creative Labs does not provide un-crippled installers for their drivers. This procedure is far from obvious, and many non-geeks are probably not able to do this. If this policy is supposed to be a new business case for selling more products, I don't really see how this will work in the long run...</p>
<p>Anyway. I could get it to work and I will now please myself watching this DivX-Video using my beamer and my AC3-Receiver. ;-)</p>
RSS Readers for PocketPC2004-10-15T00:00:00+00:00http://pilif.github.com/2004/10/rss-readers-for-pocketpc<p>I really like surfing the web on my PDA while taking a bath. Every morning before leaving home for the office, I use my PDA to check my email and read some news-sites like <a href="http://www.slashdot.org">Slashdot</a> or the <a href="http://www.heise.de/newsticker/pda/data/paket2.html">Heise Newsticker</a> (which is available in a PDA-optimized version, which I have linked here)</p>
<p>Some days ago, I thought that actually reading all the RSS feeds I'm subscribed to could help me prolong my bathing experience, so I went looking for a decent RSS reader for the PocketPC. Here's what I found (all products reviewed here understand all the RSS variants and Atom 0.3 and work without a permanent connection to the net):</p>
<h3>Egress</h3>
<p>
<a href="http://web.newsguy.com/GarishKernels/egress.html">Egress</a> is Shareware and costs $12. Like all the readers I'm reviewing here, I began using it by importing <a href="http://www.pilif.ch/gnegg.opml">this OPML file</a>. After importing, Egress insists on checking all the feeds for new entries, so don't do that if you are connected via your cellphone. Of all the readers reviewed here, Egress has the best UI: the whole channel list is compacted to just a little menu bar at the top of the screen, leaving lots and lots of space for the entry itself. To the left is a drop-down menu showing all your feeds, to the right are two arrows to navigate within the feeds containing unread items.
</p>
<p>While there is a "Manage Channels" function, it's not possible to move imported channels into subfolders - at least I have not found out how to do it.</p>
<p>The content viewer somewhat supports HTML, though I'm missing support for the &lt;pre&gt; tag, as I read quite a few programming blogs. Another thing that bothers me is the support for the four-way navigation button of my iPaq: accidentally hitting left or right instead of down will switch to another unread blog. Paging back will not scroll to the point where you were interrupted reading. Additionally, in contrast to PocketIE, scrolling down using the nav buttons just scrolls a few lines instead of a whole screen.</p>
<p>By clicking on an entry you can toggle its expanded/collapsed state. The former is used to read the whole entry, the latter just presents its title. Unread items are bold-faced.</p>
<p>The reader has quite an extensive preferences screen and is the only application tested here where one can set the User-Agent-Header with which it should request the feeds.</p>
<p>A Today-Screen plugin provides an alternative view of new entries, but is not really useful.</p>
<p>Unless you are constantly connected, you should turn off Egress's feature to automatically refresh the feeds, as it will trigger the Windows CE autodial routine for every feed in your subscription list, not stopping the process even when you hit cancel, which is a major pain in the ass.</p>
<h3>PocketFeed</h3>
<p>
<a href="http://www.furrygoat.com/Software.aspx">PocketFeed</a> is the only free software I'm reviewing here. Its interface is divided in two parts: at the top there is a tree view containing the feeds and the entries. Unfortunately, unread ones are not displayed in boldface or in any other way. </p>
<p>The bottom half is the reader which does not support scrolling with the navigation buttons.</p>
<p>There is no today-plugin or a way to automatically check for updated entries.</p>
<p>All in all, PocketFeed is a nice start, but many features are still missing.</p>
<h3>PocketRSS</h3>
<p>
<a href="http://www.happyjackroad.com/AtomicDB/pocketpc/pocketRSS/pocketRSS.asp">PocketRSS</a>, like Egress, is Shareware. It costs $5, so it's slightly cheaper than Egress. The user interface is similar to that of PocketFeed (with a tree view taking a lot of precious screen real estate), though this time the viewer is at the top of the screen. Unfortunately, scrolling with the navigation buttons jumps from entry to entry and does not scroll the current one.</p>
<p>Like Egress, PocketRSS has a Today-Screen Plugin - the most featureful of all the readers.</p>
<p>Settings-wise, there's not much the user can change, and the settings are divided into a "Preferences" and a "Configuration" menu. Not that useful.</p>
<p>Just by looking at the review length alone, you should be able to see which reader I prefer: it's Egress. Egress is the only one that's really practical for me (many full-content feeds subscribed). The problem of taking up screen real estate and the non-working navigation buttons is reason enough for me not to use the other two programs.</p>
<p>Granted: you can switch off the panel, but it always involves more tapping than with Egress.</p>
<p>Additionally, Egress is the only program properly displaying the unread-state of the feeds - an absolute must for me.</p>
<p>While there are some somewhat annoying buglets, I will probably pay those 12 bucks, but not without directing its author to this review here, in the hope that those little problems get fixed eventually.</p>
<p>
PS: I'm writing this entry on the <a href="http://www.gnegg.ch/archives/183-Unforeseen-annoyance.html">just repaired</a> IBM Thinkpad. I will write about the wonderful experience I had with the IBM support in another entry.</p>
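<p>All three readers start from an OPML import like the one linked above. For reference, pulling the feed URLs out of an OPML subscription list takes only a few lines with any XML parser; a sketch in Python, assuming the standard OPML structure of "outline" elements carrying "xmlUrl" attributes (the sample feed URLs here are purely illustrative):</p>

```python
import xml.etree.ElementTree as ET

# A tiny illustrative OPML document; real subscription lists just have
# more outline elements (possibly nested in folders).
OPML = """<opml version="1.1"><body>
  <outline title="gnegg" type="rss" xmlUrl="http://www.gnegg.ch/index.xml"/>
  <outline title="Slashdot" type="rss" xmlUrl="http://slashdot.org/index.rss"/>
</body></opml>"""

def feed_urls(opml_text):
    """Collect the xmlUrl attribute of every outline element, folders included."""
    root = ET.fromstring(opml_text)
    return [o.attrib["xmlUrl"] for o in root.iter("outline") if "xmlUrl" in o.attrib]

print(feed_urls(OPML))
```

Since `iter("outline")` walks the whole tree, folder hierarchies (nested outlines) are handled for free - which is exactly what the readers above do on import.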
More programmers fonts2004-10-15T00:00:00+00:00http://pilif.github.com/2004/10/more-programmers-fonts<p>In an <a href="http://urbanmainframe.com/folders/blog/20041006/">older post</a> on Urban Mainframe, I read about some fonts for programmers. Jonathan recommends <a href="http://www.ms-studio.com/FontSales/anonymous.html">Anonymous</a> and <a href="http://www.december14.net/fonts.shtml">Bitstream Vera Sans Mono</a>.</p>
<p>While both fonts are certainly better than the omnipresent Courier, I still think <a href="http://www.tobias-jung.de/seekingprofont/">ProFont</a>, which I've <a href="http://www.gnegg.ch/archives/80-Programmers-Font.html">already written about</a>, is much better than the other two alternatives.</p>
<p>ProFont is a bitmap font optimized for 8px size, which is incredibly small. Because every character is hand-painted, the font is still very readable despite its small size, so it's great if you want to fit a lot of text into little space - as always when programming ;-)
</p>
<p>Maybe it takes a little getting used to, but every monospaced font I've seen since I switched to ProFont in Putty, Thunderbird, Delphi, jEdit and all other applications looks clunky and too large for my eyes. Try it!</p>
What the heck?2004-10-11T00:00:00+00:00http://pilif.github.com/2004/10/what-the-heck<p>Friendly error messages?</p>
<p>Then tell me what PHP is trying to tell me with this:</p>
<blockquote>
<b>Parse error</b>: parse error, unexpected T_PAAMAYIM_NEKUDOTAYIM in <b>/home/pilif/.../include/simple_news.inc</b> on line <b>131</b><br /></blockquote>
<p>Whatever. Back to work.</p>
Unforeseen annoyance2004-10-07T00:00:00+00:00http://pilif.github.com/2004/10/unforeseen-annoyance<p>Today something happened that I never thought could happen:</p>
<p><a href="http://www.gnegg.ch/archives/166-IBM-Thinkpad-42.html">My Thinkpad</a> died.</p>
<p>And I really mean it. The device did nothing more than beep once long and twice short. A quick look at the very informative support pages of IBM showed me that the worst possible thing had happened: a failure of the system board or the RAM (which is just as bad, as I only have built-in RAM).</p>
<p>A call to the IBM support line got me a ticket number, and about half an hour later they told me to send the machine to their repair center. But because I can reach it by train and foot in about 15 minutes from my office, I asked whether I could bring it to them, which they accepted.</p>
<p>So now I'm working on a hopelessly underpowered Thinkpad T41 that was lying around in the office. Starting Delphi takes a bit more than a minute, just to give you an impression of its speed (256 MB of RAM is far too little for me).</p>
<p>They told me that the computer will be fixed in two to three days, so I will have to live through the week-end without my beloved Thinkpad. Too bad.</p>
<p>This is the first of about 7 Thinkpads in my life where something like this has happened, which should say a lot about their incredible quality. And while this whole story was certainly annoying, the support was helpful and prompt. This is all I could ask for. Thanks IBM.</p>
<p>In case you're asking how I could get to work on the new computer so fast: I removed the harddrive of the defective machine and put it into the older one. While Windows XP complained a bit about the graphics driver on the new machine, the rest of the hardware was detected flawlessly, so this was extremely un-painful. To be on the safe side nonetheless, I've created an image of the partition containing my personal data [note to self: finally blog about your partition scheme].</p>
Eclipse 3.1 M22004-10-06T00:00:00+00:00http://pilif.github.com/2004/10/eclipse-31-m2<p>The Eclipse team seem to have stopped announcing Milestone releases on the front page, so it's a bit tricky to know when a new one is released.</p>
<p>So it may be old news, but Eclipse 3.1 M2 has been released.</p>
<p>It contains some nice <a href="http://mirror.switch.ch/mirror/eclipse/downloads/drops/S-3.1M2-200409240800/eclipse-news-M2.html">new features</a>.
</p>
<p>The Milestone builds are extremely stable, so there should be no problems using it.</p>
Is that still POP3?2004-10-05T00:00:00+00:00http://pilif.github.com/2004/10/is-that-still-pop3<p>
My mobile phone provider here is <a href="http://www.sunrise.ch">sunrise</a>. I am subscribed to what they call "Onebox", a unified messaging solution.
</p>
<p>I did that because I have access to my voice mailbox via their web-interface which is much more comfortable (and cheaper) than to use the mobile phone.</p>
<p>Unfortunately, their interface does not allow forwarding those messages to another address. While they claim it does, entering a forwarding address actually forwards the emails sent to the sunrise mailbox, but the voice messages stay where they are.</p>
<p>Today I thought about accessing the box via <a href="http://catb.org/~esr/fetchmail/">fetchmail</a> and forwarding the messages to my regular mailbox.</p>
<p>While this turned out to work extremely well (even the simple notification flag gets cleared on my handset when the fetchmail job forwards the message), the protocol the server speaks is awfully strange. It's supposed to be POP3 passing around RFC 2822 messages, but it's actually something else... Just have a look:</p>
<pre class="code">
pilif@galadriel ~ % telnet um.sunrise.ch pop3
Trying 212.161.159.6...
Connected to um.sunrise.ch.
Escape character is '^]'.
1 +OK POP3 umsi3-c04d2.mysunrise.ch vUMSI v1.6.0.0 (UM2 Build 030408) server ready
2 user [phonenumber]
3 +OK User name accepted, password please
4 pass [password]
5 +OK Mailbox open, 1 messages
6 stat
7 +OK 1 192931
8 retr 1
9 +OK 1421099 octets
10 From: [calling number] <[calling number]@mysunrise.ch>
11 To: - <[phonenumber]@mysunrise.ch>
12 Date: 04 Oct 2004 09:29 +0200
13 Message-id: 0xe97d4b80-0x40-0x3735-0x50
14 Subject: Voice Message
15 Mime-Version: 1.0 (Voice Version 2.0)
16 Content-Type: multipart/voice-message;
17 boundary="2448314160_4000_141330_5000.04102004_0929"
18 Sensitivity: Normal
19 Importance: Normal
20 X-Priebity: 1 (Highest)
21 Content-Duration: 64
22 X-UMSI-Transferred: Server-Id="1"; Server-Type="INFINITY";
23 Profile="[phonenumber]@4:6";
24 Original-Message-UID="244831416 004 005 14133"
</pre>
<p>
(I've added the line numbers myself)
</p>
<p>Line 7: Oh nice. There's a message and it's about 188 KiB large</p>
<p>Line 9: Wait a minute... almost 1400 KiB? Didn't they say otherwise in Line 7? Actually, it's the server decompressing the voice message and converting it to WAV just after the <tt>retr</tt>.</p>
<p>Line 13: Is that supposed to be a valid Message-ID? Don't think so</p>
<p>Line 15: What's that? A parenthesized comment after the version number is actually allowed by RFC 2045, but "(Voice Version 2.0)" is an odd thing to put there.</p>
<p>Line 18+19: Are those really valid message headers?</p>
<p>Line 21: What the heck is "Priebity"? That's not an English word... Maybe they mean "Priority"?</p>
<p>Line 22: Is this a valid header?</p>
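<p>For the record, RFC 2822 expects a Message-ID to be an "id-left@id-right" pair wrapped in angle brackets. A quick sanity check in Python (a deliberately simplified pattern, not the full grammar) shows how far off the server's value from line 13 is:</p>

```python
import re

# Simplified sanity check for RFC 2822 Message-IDs: angle brackets around
# a local part and a domain joined by "@". The real grammar is larger;
# this only illustrates how far off the server's header is.
MSG_ID = re.compile(r"^<[^<>@\s]+@[^<>@\s]+>$")

def looks_like_message_id(value):
    return MSG_ID.match(value) is not None

print(looks_like_message_id("0xe97d4b80-0x40-0x3735-0x50"))   # → False
print(looks_like_message_id("<20041004.GA123@mysunrise.ch>")) # → True
```

The second value is a made-up but well-formed example of what the server should have generated.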
<p>I pity the developers of mail user agents: they must cope with such rubbish, and in the end they are blamed if they do not. It's never the vendors of the broken servers, because those are not visible to the end users.</p>
<p>Different question: Why is it always closed source commercial software doing such stupid things? They get paid to create working software and what you see above is not what I'd call "working".</p>
<p>When I'm writing software that communicates with some other component not written by me, I follow the defined protocol <em>to the character</em>, whether the software is going to be publicly released or not. It's just <em>polite</em>.</p>
Head First Servlets & JSP2004-09-28T00:00:00+00:00http://pilif.github.com/2004/09/head-first-servlets-jsp<p>Today I bought myself <a href="http://www.oreilly.com/catalog/headservletsjsp/">Head First Servlets & JSP</a>.</p>
<p>While I have absolutely no intention in passing any Java related exam or even do anything JSP-ish in the near future (see my reasons <a href="http://www.gnegg.ch/archives/145-PHP-scales-well.html">here</a>, <a href="http://www.gnegg.ch/archives/137-Web-Applications-and-the-View-State.html">here</a> and <a href="http://www.gnegg.ch/archives/58-What-I-dislike-about-Java.html">here</a>), I had to have the book.</p>
<p>Why?</p>
<p>For one thing: while I have my problems with Java, I'm still very interested in it and the technologies around it. This is the reason why I've already read one or the other book about JSP. The second reason: the "Head First" series is absolutely the best way a tech book can be written. It's a pleasure to read them.</p>
<p>They are so good that it's worth reading them even when you don't really need to learn what's in them. It's just fun to read. Try it yourself!</p>
XAMS (Exim) and SpamAssassin2004-09-24T00:00:00+00:00http://pilif.github.com/2004/09/xams-exim-and-spamassassin<p>It just came to me that with the new custom_query option for the SQL preferences, it will finally be possible to integrate <a href="http://www.spamassassin.org">SpamAssassin</a> 3.0.0 into <a href="http://www.xams.org">XAMS</a></p>
<p>For those that do not know: XAMS is a sophisticated configuration for handling multiple virtual email domains, keeping all users in a strictly normalized MySQL database.
</p>
<p>In contrast to things like vpopmail, it's easy to set up, does not require patches to any software component involved and is extremely feature-rich.</p>
<p>XAMS was built by Oliver Siegmar, taking my <a href="http://www.pilif.ch/mail.txt">initial idea</a> (in German, written for de.comm.software.mailserver), cleaning it up and adding a web interface.</p>
<p>I posted my SpamAssassin configuration to the <a href="http://sourceforge.net/mailarchive/forum.php?thread_id=5638889&forum_id=8171">XAMS Mailinglist</a>, so read it there.</p>
Geocaching2004-09-21T00:00:00+00:00http://pilif.github.com/2004/09/geocaching<div class="floatimgauto"><a href="http://www.gnegg.ch/archives/cache.jpg"><img alt="cache.jpg" src="http://www.gnegg.ch/archives/cache-thumb.jpg" width="150" height="112" border="0" /></a></div>
<p>You may have heard of <a href="http://www.geocaching.com">Geocaching</a>.</p>
<p>This weekend I found my first cache. It was quite well hidden, but as I knew the territory (it's where I lived from 1993 to 2000), it was not that difficult.</p>
<p>When I first tried to find the cache last Sunday, it was already quite dark outside, so I had to give up. Yesterday I returned and finally found <a href="http://www.geocaching.com/seek/cache_details.aspx?pf=&guid=da2afe51-d5f5-4b97-90e7-c3dd89d550c4&decrypt=y&log=">Magic Place / Forch-Denkmal</a>
</p>
<p> The picture was taken using my <a href="http://www.sonyericsson.com/spg.jsp?cc=us&lc=en&ver=4000&template=pp1_loader&php=php1_10139&zone=pp&lm=pp1&pid=10139">K700i</a>. I didn't have my camera with me</p>
Extreme fun with Linux2004-09-17T00:00:00+00:00http://pilif.github.com/2004/09/extreme-fun-with-linux<p>The question is: What's this device running?</p>
<div align="center">
<div class="centerimgauto">
<a href="http://www.gnegg.ch/archives/ck1.jpg"><img alt="Good looking device" border="0" src="http://www.gnegg.ch/archives/ck1-thumb.jpg" width="150" height="200" /></a>
</div>
</div>
<p>The answer:</p>
<div align="center">
<div class="centerimgauto">
<a href="http://www.gnegg.ch/archives/putty_screenshot_sqlite.gif"><img alt="A screenshot of Putty connected to the device running SQLite" border="0" src="http://www.gnegg.ch/archives/putty_screenshot_sqlite-thumb.gif" width="150" height="81" /></a></div></div>
<p><a href="http://www.sqlite.org/">SQLite</a>. It was not easy, but not particularly hard either. This is the first step towards putting a <a href="http://www.popscan.ch">PopScan</a> frontend on a quite inexpensive barcode scanner with both a keypad and a display.</p>
<p>I really like Linux for exactly these things!</p>
<p>Oh and before I forget: Thanks, Jonas. This was an incredibly good job!</p>
Somewhere around 9588...2004-09-16T00:00:00+00:00http://pilif.github.com/2004/09/somewhere-around-9588<p>OK. This has a definite technical reason and is neither wrong nor in any other way special. It's just funny and reminds me of school, where someone did the same thing in a short presentation, so I thought I could post it anyway...</p>
<p>In <a href="http://www.postgresql.org">PostgreSQL</a> you can help the query optimizer do its work even better by calling "VACUUM ANALYZE" - especially after inserting tons and tons of data.</p>
<p>I did that and found this status message:</p>
<pre class="code">
INFO: "art_pf": 209 pages, 3000 rows sampled, 9588 estimated total rows
</pre>
<p>What's funny about this is that PostgreSQL actually counted the rows (I did a full analyze) and still talks about having estimated the count. And 9588 is definitely not what we humans call an estimate. When we estimate, we talk in tens or even hundreds, like "9000 estimated total rows or so".
</p>
<p>In the presentation I mentioned at the beginning, a colleague of mine talked about a weather station "<em>about</em> 987.6 meters above sea level", which falls into the same category ;-)</p>
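<p>To see why a sampling-based statistics run spits out such precise-looking numbers, here is a toy illustration in Python - a deliberately simplified extrapolation from sampled pages, not PostgreSQL's actual estimator, and all numbers are made up:</p>

```python
import random

def estimate_total_rows(rows_per_page, sample_size, seed=42):
    """Sample a few pages and extrapolate the average row count to the
    whole table. A deliberately simplified sketch of ANALYZE-style
    sampling; PostgreSQL's real estimator is considerably more involved."""
    rng = random.Random(seed)
    sampled = rng.sample(rows_per_page, sample_size)
    return round(sum(sampled) / sample_size * len(rows_per_page))

# A hypothetical 209-page table holding roughly 40-52 rows per page.
rng = random.Random(7)
pages = [rng.randint(40, 52) for _ in range(209)]
print(estimate_total_rows(pages, sample_size=65))
```

The arithmetic (average times page count) naturally yields an exact-looking integer even though only a fraction of the table was inspected - hence "9588 estimated total rows".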
mod_perl or not to mod_perl2004-09-14T00:00:00+00:00http://pilif.github.com/2004/09/mod_perl-or-not-to-mod_perl<p>
Floating around the net I found a patch for the mod_perl problem <a href="http://www.gnegg.ch/archives/171-MT-3.1.html">I had</a> with MT 3.1, so I have re-enabled mod_perl, which actually sped up the whole system greatly, but forced me to remove <a href="http://www.jayallen.org/projects/mt-blacklist/">MT-Blacklist</a>, as it's not compatible with mod_perl environments (Internal Server Error, here I come!)
</p>
<p>"No big deal", I thought - deleting those five SPAM comments a day would not have been so bad - especially since MT 3.1 provides a far better comment-deleting UI than 2.6</p>
<p>
Then, today, I had to change my mind: between 6 am and noon, two of those f***ing spammers posted stupid comment spam to nearly every posting in my blog. After I deleted them, they gave me a break, just to continue their evil doing during the whole afternoon, forcing me to delete about 2 comments every 20 minutes. Inconvenient when I have to work in between.
</p>
<p>So - for me, it's back to non-mod_perl. It seems gnegg.ch is popular enough to actually depend on MT-Blacklist. Very nice. Thank you, stupid spammers!
</p>
A look at Windows Installer2004-09-13T00:00:00+00:00http://pilif.github.com/2004/09/a-look-at-windows-installer<p>
Before I begin, let me put this disclaimer: <a href="http://www.gnegg.ch/archives/138-All-time-favourite-Tools.html">I'm biased</a>, so this is maybe not objective, but it's something I wanted to say. And who knows: Maybe you even think the same.
</p>
<p>As you may know, Microsoft would really like to see all software using <a href="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/msi/setup/windows_installer_start_page.asp">Windows Installer</a> for its deployment needs. Windows Installer is a complex piece of technology, evaluating some kind of database that's stored in those .MSI-Files.</p>
<p>Windows 2000 Server and later, namely with its Active Directory, provides the system administrator with the ability to automatically install and update MSI-based applications on the client computers, which definitely is a good thing. Additionally, MSI should provide end users with clean uninstalls, automatic repair and the solutions to <em>COM</em>mon [;-)] problems. Sounds like a good thing, doesn't it?
</p>
<p>It would - if there was not quite a heap of problems associated with Windows Installer</p>
<ul>
<li>First, the thing is intransparent and <a href="http://www.gnegg.ch/archives/107-Why-o-why-is-my-harddrive-so-small.html">messy</a>.</li>
<li>I have migrated my user profile to <a href="http://www.gnegg.ch/archives/166-IBM-Thinkpad-42.html">another machine</a>, where not all the software I had on the previous machine was installed. So the control panel was full of software that was not installed on the machine. Hitting "Remove" caused MSI to request the original installation MSI file (why the heck?), and after I failed to provide it (why should I re-download something just to remove traces of it from my machine if I don't want it in the first place?) and hit "Cancel", the entry was removed, but it reappeared when the control panel applet was reloaded. Cleaning the registry helped, but show me an end user capable of doing that.</li>
<li>Whenever I drop an URL from my Browser to the desktop, MSI pops up and wants to repair some software I've already removed. For this it asks me to provide the path to the original media. Why?</li>
<li>Creating update packages is a pain in the ass: you have three possibilities:
<ol>
<li>Create a "full update" which will first uninstall the existing version. That way you have to go to great lengths to preserve the user's data, because it's not easy to detect whether the uninstall happens because of the upgrade or is just a normal uninstall. This, I want to add, is the recommended way of updating an application deployed by MSI</li>
<li>Create some update package. This often needs quite a bit of hacking on the MSI file, leading to problems like MSI dialogs popping up asking for some files.</li>
<li>Create a patch file (.MSP). In MSI pre-3.0 this is a pain in the ass if you want to spare the user from having to keep the original MSI file around. Too bad MSI 3.0 runs only on Win2k and later</li>
</ol></li>
</ul>
<p>
Windows Installer is very tightly integrated into the system. Even small problems here and there (non-clean uninstallation or whatever) can cause major problems that are not really fixable. This is not what I call an end-user friendly technology.</p>
<p>And it does not end there: have you ever tried engineering an MSI file? You may begin by reading the SDK documentation I linked to above, but you will soon be overwhelmed - the beast is incredibly complex. But with complexity does not come feature-richness. For example, it's impossible to install the .NET Framework from an MSI-based installation, as only one MSI installation may be running at the same time.</p>
<p>Because of this and many other problems, the common practice is to create a self-extracting .EXE that installs the prerequisites and then passes control to the MSI file - which still isn't capable of doing many things setup authors today are used to doing.</p>
<p>
Big tool vendors like <a href="http://www.wisesolutions.com">WISE Solutions</a> or <a href="http://www.installshield.com">InstallShield</a> go to great lengths to hide the complexity of MSI and to add features not present in the basic version, while sometimes breaking the validity of the generated packages or even the one big advantage of MSI: transactional functionality. Thus they take away the last benefits MSI may have.</p>
<p>Conventional installer tools are available for free (<a href="http://www.jrsoftware.org/isinfo.php">InnoSetup</a>, <a href="http://nsis.sourceforge.net/">NSIS</a>) and offer a much more pleasant user experience: no silly questions about source packages, no confusing breakage and more.</p>
<p>Of course: Some things those conventional tools will never be able to provide:</p>
<ul>
<li>Advertised Features (and Shortcuts)</li>
<li>Automatic Self-Repair</li>
<li>Advertised Installations (which allow restricted users to install certain packages)</li>
</ul>
<p>But to be honest: which one of those features provides real value to the end user? I know many people who have installed MS Office, for example, and not one of them - absolutely no one - has instructed setup to install features on demand. Nobody wants to insert the Office CD at random moments.
</p><p>
In contrast: everyone I know simply hates those Windows Installer dialogs popping up, requesting the source image. Lukas, for example, is unable to remove PGP Desktop because the uninstaller requires the installer package, which can't be provided as it's packaged in a self-extracting .EXE using a proprietary format. Simply reinstalling from this .EXE isn't possible either, because the extracted MSI detects the existing installation and wants to uninstall first, but still does not recognize the extracted original MSI. Bad luck.</p>
<p>So in the end MSI looks to me like an administrator's thing, but not like a tool making life easier for the end user. With tools like InnoSetup, I can create a user experience that even non-tech-savvy users understand and that has no further problems popping up later. Granted: more advanced tasks are better integrated into MSI (installations per user/per system), but they can be done with the conventional tools if some thought is put into the installation.</p>
<p>
For now, I will definitely stay with InnoSetup and keep my supporting work focused on real application issues, not weird MSI problems. For administrators installing <a href="http://www.popscan.ch/wholesale.html">PopScan</a> for their users, we provide detailed documentation describing which file goes where and what the installer does, thus providing the administrator with the means to either create an MSI (which can be done automatically these days; the result is not optimized, but it does its job, and combined with our documentation this can be a real alternative) or use other technologies to deploy the software.</p>
My favourite Firefox extensions2004-09-08T00:00:00+00:00http://pilif.github.com/2004/09/my-favourite-firefox-extensions<p>I'm using <a href="http://www.mozilla.org">Mozilla Firefox</a> as my main web browser. You know that you can install extensions for every feature imaginable, but maybe you are like me - undecided on what to install. Here is the list of extensions I have installed in my Firefox, which I'm constantly fond of having:</p>
<ul>
<li><a href="http://extensionroom.mozdev.org/more-info/popupcount">Popup Count</a>. While it has no real purpose, it seems to have no negative effects either, so it's fun to watch all those popups getting killed in real time.</li>
<li><a href="http://dmextension.mozdev.org/">Download Manager Tweak</a>. Useful because the downloads take less space on screen and because I can customize the download managers toolbar with it (I use "open folder" quite often)</li>
<li><a href="http://extensionroom.mozdev.org/more-info/targetalert">TargetAlert</a>: displays a small icon to the right of links that do not point to another HTML file on the same domain. Very useful, though it has two bugs: 1) sometimes when reloading, the icons get duplicated. I can't actually reproduce it, but I'm trying to. 2) The images are added by altering the document's structure (&lt;img&gt; tags are added). Using CSS would be nicer, IMHO</li>
</ul>
<p>While compiling the list and adding the URLs to the links above, I noticed that all the extensions come from the same author and that his webpage seems to be down. The former is a coincidence (really!) and the latter is too bad, as it kind of defeats the purpose of this entry. But now that it's written already, I'm going to post it anyway. Maybe the site comes back.</p>
<p>The one and only theme I use is <a href="http://www.quadrone.org/projects/mozilla/browser/">Qute</a>. I could possibly get used to the new default theme, but Qute was with me for the last year, so I'm having trouble adapting to the new one.</p>
MT 3.12004-09-03T00:00:00+00:00http://pilif.github.com/2004/09/mt-31<p>As you almost certainly know, <a href="http://www.movabletype.com">MovableType</a> 3.1 has been released.</p>
<p>Reading the feature list - especially the entry about dynamic publishing - I decided to upgrade.</p>
<p>Needless to say that much went wrong:</p>
<ul>
<li>The dynamic generation is of no use to me because I'm using (<a href="http://www.nonplus.net/software/mt/MTEntryIfComments.htm">exactly one</a>) custom tag in my archive template, and custom tags do not work with dynamic generation. Too bad. And too much documentation to read to port it to PHP.</li>
<li>My beautiful mod_perl setup ceased to work. Somehow MT sometimes (this is completely random) gets a random number back from <tt>$q->parse</tt> in lib/MT/App.pm. Updating Perl, Apache and mod_perl did not help. The effect of this bug is a randomly occurring "Upload too large" error. Back to CGI then... (I've opened a support ticket. Let's see how good this support really is.)</li>
</ul>
<p>At least I can now use <a href="http://www.jayallen.org/projects/mt-blacklist/">MT-Blacklist</a>, as it does not work under mod_perl. A lot of trouble to set up something I don't really like anyway because of its extremely commercial background. We'll see what the future brings...</p>
An experiment2004-09-03T00:00:00+00:00http://pilif.github.com/2004/09/an-experiment<p>Now that I have some problems with MT (it's so terribly slow when not using mod_perl), I thought to myself: "Let's do a little experiment. Let's try out <a href="http://www.wordpress.org">WordPress</a> and let's see what happens"</p>
<p><a href="http://wp.gnegg.ch/index2.php">This</a> is what happened. And <a href="http://wp.gnegg.ch/index2.phps">this</a> is the source of the template.</p>
<p>So. Was it worth it? How is it, working with WP?</p>
<p>While I really like the dynamic generation feature and the OPML upload, I have some problems with WP:</p>
<ul>
<li>It's not as flexible as MT. All those template functions output far too much HTML (every little bit of forced HTML is too much, actually). I had to change the stylesheet to accommodate WordPress's forced &lt;ul&gt; in the sidebar, and for the links I actually had to patch around in WP for my template.</li>
<li>MT seems much more polished.</li>
</ul>
<p>Anyway. As WP is written in PHP and contributions are certainly welcome (it's free software after all), maybe I should look into contributing something.</p>
<p>And as for the future of gnegg.ch: I've not decided yet what I should do. Adapting the other gnegg.ch templates would take about half a day to a day, which is a terrible amount of time to invest in replacing something that essentially works.</p>
<p>So, as I said <a href="http://www.gnegg.ch/archives/movabletype.html">here</a>, I'm going to stay with what I currently have - for now. At least until I hear back from MT about my support ticket, as mod_perl is a requirement for me to be running MT.</p>
How journalism should not be done2004-08-31T00:00:00+00:00http://pilif.github.com/2004/08/how-journalism-should-not-be-done<p>
I am subscribed to the German "Linux Magazin" (its articles are translated and published in the English "Linux Magazine"), and today I received their anniversary edition (10 years of Linux Magazin).</p>
<p>With great interest, I read the article "Insel Hüpfer" on page 56 and following. In it, the author tells the story of finding security holes in the setup of a big German hosting provider.</p>
<p>The author goes into great details when describing what he did and full of pride he actually tells the reader the MySQL-Root password of one of the compromised servers:</p>
<blockquote>
Und dann entdeckte ich erstmals etwas Erfreuliches: Das Passwort für MySQL-Root lautet: xxxxxx. So sollte ein sicheres Passwort aussehen.</blockquote>
<p>Which means in English: <em>Finally, I discovered something good: The MySQL root password is: xxxx. This is what a secure password should look like.</em> In contrast to the article in the Linux Magazin, I am definitely not naming the password here!</p>
<p>All this would not be bad enough for me to blog about here, had they not been so careless as to actually reveal the name of the provider!</p>
<p>While all URLs are left out and the article does not name the provider, they made two bad mistakes:</p>
<ol>
<li>On page 63, there is a screenshot of a compromised FAQ page. While they cleared out Mozilla's URL field, they did not do the same with the big, visible page title containing the domain name in the top left corner. And even if they had grayed out that text, googling the contents of the rest of the page would also have led me to the provider's address.</li>
<li>On page 64, they have a screenshot displaying the URL of the compromised phpMyAdmin, graying out the domain name but leaving the URL intact otherwise. Too bad the name of the provider is no secret anymore (see above).</li>
</ol>
<p>All this would not be so bad (it certainly is bad for the publisher of Linux Magazin, as this will get them in trouble with the provider), but it really is <b>catastrophic</b> that the provider <b>has not changed the password printed in the article!</b></p>
<p>This means that any reader of the Linux Magazin (currently only subscribers - I really hope they stop further delivery of this issue) can access the MySQL databases of many customers of said provider!</p>
<p>Posting stories like this is really nice and is actually what gets you readers, but if you do, please take care not to publicly post compromised passwords that still work when your edition goes to press. And don't leave clues like URLs and other material pointing to the victim in question! Please!</p>
No more blur2004-08-29T00:00:00+00:00http://pilif.github.com/2004/08/no-more-blur<p>
When reading my ThinkPad T42p <a href="http://www.gnegg.ch/archives/166-IBM-Thinkpad-42.html">review</a> the other day, you may have seen that the only problem I had with the fine machine was that the DVI port of the docking station supported only the 1280x1024 resolution. This forced me to use the analog video output to power my cool 21-inch 1600x1200 LCD at my workspace.
</p>
<p>My problem with this solution: The picture was blurry and a bit soft. While it got much better after upgrading the VGA cable to something better than what came with the display, it still did not get as sharp and crisp as the image I had on a 1280x1024 18-inch display connected via DVI. Actually, it was still quite blurry - at least for me, used to the sharper display.</p>
<p>
A comment on my blog entry (many thanks - comments like this are the only thing that keeps me deleting all those spam comments instead of disabling the comment function altogether) pointed me to <a href="http://forum.thinkpads.com/viewtopic.php?t=880&highlight=dvi">this forum entry</a>, which in turn pointed me <a href="http://www.omegadrivers.net/">here</a>.
</p>
<p>Omegadrivers provides a hacked version of ATI's Catalyst driver that enables the ThinkPad's DVI port to support the 1600x1200 resolution (actually, the driver is optimized for gaming performance, but that's not so important for me)! Very nice!
</p>
<p>Now the image is clear and crisp, just as I always wanted it to be. Cool</p>
<p>Now... if someone could tell me what I have to do to un-break the OpenGL support, I'd really appreciate it... Whenever a program uses OpenGL, it immediately crashes with those new drivers.</p>
Comments working again2004-08-29T00:00:00+00:00http://pilif.github.com/2004/08/comments-working-again<p>OK... there was this... embarrassing... problem with the pilif.ch domain. Talk about forgetting to pay for the registration ;-)</p>
<p>The problem is fixed, so the comments and the search function should be working again...</p>
Mountains2004-08-21T00:00:00+00:00http://pilif.github.com/2004/08/mountains<div class="floatimgauto"><a href="/archives/berge.jpg"><img alt="Mountains" src="/archives/berge-thumb.jpg" width="150" height="112" border="0" /></a></div>
<p>This is what I'm going to see for the next seven days. Yes. Finally, I will once again travel to a small cottage in Pontresina, enjoy the great landscape and do nothing business-related at all.</p>
<p>So don't expect any postings next week. While I will have an extremely limited internet connection (GPRS), I will use it only if <a href="http://www.gnegg.ch/archives/110-Speed-up.html">something bad should happen</a>.</p>
IBM Thinkpad 422004-08-18T00:00:00+00:00http://pilif.github.com/2004/08/ibm-thinkpad-42<p>
Quite exactly one year ago, I <a href="http://www.gnegg.ch/archives/57-IBM-Thinkpad-T40.html">reviewed</a> my then new IBM Thinkpad T40. To save you from going there and have a look: I really liked the device.
</p>
<p>In the year that has passed, some things began to bug me, though they are somewhat minor. I had not noticed them back when I wrote the review:</p>
<ul>
<li>The harddrive is slow. And when I say slow, I really mean it. Windows has a tendency to swap regardless of available memory, and the times when my TP was swapping made it nearly impossible to work with. The boot time between entering my password and the system really getting responsive (you know: the GUI is drawn but does not really react to input yet) was quite long. Stripping down the installation and defragmenting the drive did not really help, which - considering 1 GB of available RAM - led me to the conclusion that the drive really was quite a bottleneck.</li>
<li>The display had a resolution of just 1400x1050. I would really have liked the 1600x1200 one.</li>
<li>Soon after I got my T40, the T41 was released with a feature that automatically parks the heads of the harddrive and spins it down when the laptop is shaken. This feature was absent from my T40, and this March I had to learn that the hard way: The drive died (I was very lucky: it only had tons and tons of bad blocks on the system partition - my data was not affected). That was when I really wanted this spin-down feature.</li>
<li>Graphics performance was somewhat behind what I would have wished for. In particular, it was not possible to run <a href="http://www.epsxe.com">epsxe</a> at sufficient speed. Certainly not something I need in a computer I use mostly for work, but it would have been nice. Doom 3 comes to mind too, though I don't think any laptop existing today is actually powerful enough for that game - at least no portable one ;-)</li>
</ul>
<p>And that's about it: minor issues. I am a really big fan of my T40. Really. Believe me. And continue to believe me when I tell you this: IBM has announced the T42 model, which finally comes with the 1600x1200 screen resolution. And not only that: the built-in FireGL chip from ATI should definitely provide enough juice for epsxe (though I've not tested that yet, for lack of PSX CDs here in the office). I could not resist getting one.</p>
<p>I mean, a 1600x1200 resolution is just great for anything you do beyond surfing the web. While you can use more than one monitor, it's always more convenient to have everything on just one screen. Just think of Delphi with all its palettes and stuff. Very convenient.</p>
<p>And this harddrive spin-down feature. Very convenient too.</p>
<p>So, I'm writing this blog entry on my brand new IBM ThinkPad T42p. Time for a review, don't you think?</p>
<p>From the outside, IBM has not changed much: with its 15-inch monitor, the whole thing got a bit bigger (and a little bit thicker, if I'm not mistaken), but otherwise they have left the outside unchanged from my T40 model.</p>
<p>On the inside, when installing Windows XP (while the IBM preinstallations are quite unintrusive, I still prefer a completely uncustomized installation of Windows, downloading just the drivers I need. That way I could even test my slipstreamed SP2 installation), I noticed the immense power this thing has. After just about 15 minutes, the installation was completed (excluding the drivers, of course). Boot time was much shorter than on my T40 - even considering the emptiness of the harddrive. And it remained that short after copying over my profile. I really think they finally used a better harddrive. Because the new computer is just 100 MHz faster than the old one (1.6 -> 1.7 GHz), I think it must be the drive performing better.</p>
<p>The display is great. Highly readable and very bright. I really like the resolution. Display-related, though, is the one big problem I have with this wonderful toy (why oh why must everything have at least one flaw?):</p>
<p>The DVI port (provided by my docking station) is (still) limited to 1280x1024 pixels, so I have to use the analog output to power my 1600x1200 monitor, giving me somewhat suboptimal quality. Too bad. Maybe they'll fix that later.</p>
<p>Now I'm looking forward to checking the computer's 3D performance. If there's anything unusual about it, I'm going to post it, of course.</p>
<p>Overall, I think if you don't need the 1600x1200 resolution, you can live without upgrading. If you really like (or even depend on) that big resolution (and consequently high DPI count), you should maybe consider upgrading. Were it not for that problem with the DVI port, this would be the perfect notebook. With this flaw, it's just the best one on the market. ;-)</p>
<p><b>UPDATE:</b> Jepp. ePSXe works. It works extremely well, actually. I'm using Pete's OpenGL GPU plugin with nearly everything turned on and I'm still not getting any lag. This is nice.</p>
Working with subversion2004-08-17T00:00:00+00:00http://pilif.github.com/2004/08/working-with-subversion<p>
I'm currently making first steps using <a href="http://subversion.tigris.org">Subversion</a> and it's going quite well. It took some time to get the $Id$ expansion to work though, but <a href="http://wincent.org/article/articleview/236/1/0">this article</a> helped me in the end.
</p>
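<p>For reference, keyword expansion is off by default in Subversion; what the linked article describes boils down to setting the <tt>svn:keywords</tt> property on each file, which auto-props can do automatically for newly added files. A sketch of the relevant <tt>~/.subversion/config</tt> section (the <tt>*.pas</tt> pattern is just an illustration):</p>

```ini
# ~/.subversion/config -- enable automatic property assignment
[miscellany]
enable-auto-props = yes

# expand $Id$ in newly added files matching the pattern (pattern is illustrative)
[auto-props]
*.pas = svn:keywords=Id
```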
<p>The next thing I'm going to do is try to migrate a simple project (no branches, no tags) from CVS to Subversion. I know there are some tools out there that promise to do that for you, so I hope it'll work.</p>
<p>The final step would be to migrate over <a href="http://www.popscan.ch">PopScan</a>, which has gotten quite complex these days: about 5 branches, countless tags and three years' worth of history data. If that too goes well, it's "welcome Subversion" for me. If not, I think I'll postpone the migration until the tools get better. I absolutely don't want to have my code in different source management systems.
</p>
<p>I'll keep you posted.</p>
SUCON '042004-08-17T00:00:00+00:00http://pilif.github.com/2004/08/sucon-04<p>I will certainly <a href="http://www.suug.ch/sucon/04/">be there</a>. Will you?</p>
PHP 52004-08-16T00:00:00+00:00http://pilif.github.com/2004/08/php-5<p>
As you surely know, <a href="http://www.php.net">PHP 5</a> has been released. Actually, it's already 5.0.1.
</p>
<p>What you also may know is that Gentoo's dev-php/mod_php package was promoted from -x86 to ~x86. This means from broken to unstable in Gentoo-terms.</p>
<p>This means that I can now run some tests with PHP 5, which I have already begun doing: I've upgraded PHP on our development server to 5.0.1 and it's working quite well so far. The only problem I've come across is this stupid code in an <a href="http://www.oscommerce.com">osCommerce</a> installation:</p>
<pre class="code">
class something {
    function something() {
        // do something
        $this = null;
    }
}
</pre>
<p>New or old object model in PHP: This is just something you don't do. Not in PHP, and certainly not in any other language. You should not assign anything to <tt>this</tt>, <tt>self</tt> or even <tt>Me</tt> (or whatever the implicit pointer to your own object is called in your language).</p>
My new toy2004-08-14T00:00:00+00:00http://pilif.github.com/2004/08/my-new-toy<div class="floatimgauto">
<a href="http://www.gnegg.ch/archives/ipod.png"><img alt="ipod.png" src="http://www.gnegg.ch/archives/ipod-thumb.png" width="150" height="209" border="0" /></a></div>
<p>New year, new iPod. They made so many small usability enhancements with these new models that you actually ask yourself whether the predecessors were really made by Apple (because if they were, there wouldn't have been that many usability flaws in the first place)</p>
<ul>
<li>Playback stops when you unplug the headphones. Oh, and speaking of headphones, I'm using <a href="http://www.sonystyle.com/is-bin/INTERSHOP.enfinity/eCS/Store/en/-/USD/SY_BrowseCatalog-Start;sid=cYT0SPe1bZH0ebasca7-Q7ik3oDsrKxQJJE=?CategoryName=acc_Headphones_Fontopia%2e%2fEarbud">these</a>. They are a great compromise between extremely expensive and good-sounding.</li>
<li>The menu item where the music is stored is called - surprisingly - Music now. This is much better than the "Browse" of the older models.</li>
<li>The click wheel is the best user interface they have created so far. I hated those soft keys on the 2nd generation: they were extremely imprecise and often fired when I did not actually want them to.</li>
<li>It's faster. My old model paused quite a while when entering the artists list. The new model does this instantly.</li>
</ul>
<p>Convenience-wise, the jump to the third generation of iPods was the biggest step. Thanks, Apple.</p>
<p>Oh, and the music I'm playing in the photo is <a href="http://www.oneupstudios.com/albums.php?show=3">this CD</a>. The music is difficult to describe. A bit jazz-ish, but not really. I really like it - especially as a passionate gamer of the Chrono series and Xenogears, which the music is inspired by. Consider buying it. It's great!</p>
SSH daemon on installation CD2004-08-13T00:00:00+00:00http://pilif.github.com/2004/08/ssh-daemon-on-installation-cd<p>
First, my apologies for not posting for quite some time now, but I have had a hell of a lot of things to do. One of them was setting up yet another IBM xSeries 345 server. And yet again, I decided to install <a href="http://www.gentoo.org">Gentoo Linux</a> on it, and yet again this distribution does not cease to amaze me:
</p>
<p>On their current livecd (used for installing the distribution), they have actually installed an OpenSSH-Server ready to be started, allowing you to do the whole installation procedure remotely. This is incredibly nice.</p>
<p>So I could put the server in our basement where its noise did not annoy anyone and still do the installation from my comfortable chair in my office. This is great!</p>
<p>But then I took the thought further: imagine you modify the CD just a little bit: preconfigure the network with the IP of your server somewhere in a remote location, set a non-random root password and configure the SSH daemon to start automatically on boot.
</p>
<p>Then configure the server to boot from CD, if one is there.</p>
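<p>On a Gentoo-based CD, those customizations come down to a few configuration changes. A sketch with every value purely illustrative (and note the network syntax differs between baselayout versions, so treat it as an assumption):</p>

```
# /etc/conf.d/net on the remastered CD -- static address instead of DHCP
# (all values illustrative; exact syntax depends on the baselayout version)
config_eth0="192.0.2.10 netmask 255.255.255.0"
routes_eth0="default via 192.0.2.1"

# additionally, before mastering the image:
#   passwd                       (set the non-random root password)
#   rc-update add sshd default   (start the SSH daemon at boot)
```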
<p>Now, if your server (somewhere in a remote location where getting into is difficult or at least time-consuming) should crash and fail to come up properly after a reboot, just ask someone at the housing center to insert the CD and reboot. The rescue system from the CD will boot and the SSH daemon will start. Now you can try to fix your system remotely.
</p>
<p>When you are finished, your customized reboot script will eject the CD after unmounting it, allowing the server to reboot normally from its (hopefully) fixed installation. This would even allow you to completely reinstall a compromised system remotely, without forcing you to do it on location.</p>
<p>This is extremely nice and just another reason why I prefer the seemingly simple and anachronistic installation procedure of Gentoo. I mean: Just try doing this with either Fedora or SuSE...</p>
Wrong hand?2004-07-30T00:00:00+00:00http://pilif.github.com/2004/07/wrong-hand<div class="floatimgauto">
<a href="http://www.gnegg.ch/archives/wronghand.png"><img alt="wronghand.png" src="http://www.gnegg.ch/archives/wronghand-thumb.png" width="150" height="146" border="0" /></a>
<p class="legend">(Taken using <a href="http://www.snes9x.com">Snes9x</a>)</p>
</div>
<p>
As you <a href="http://www.gnegg.ch/archives/142-Console-game-Videos.html">may know</a> I really like watching speedruns of video games. And yesterday, when watching the <a href="http://bisqwit.iki.fi/jutut/nesvideos/movies.cgi?id=146">updated Super Metroid movie</a>, I came across this picture in the intro.
</p>
<p>Have you noticed? Samus is actually giving her <em>left</em> hand to this scientist. I needed quite some time to understand that this was not an error of the artist, but actually just another of those well thought-out details: the right arm of Samus' suit is equipped with her beam weapon, so she obviously can't use it to shake hands ;-)</p>
<p>And this leads to other conclusions:</p>
<ul>
<li>Her weapon seems hardwired. Otherwise she would have laid it down before delivering the captured Metroid.</li>
<li>She is right-handed. Because the beam weapon is quite necessary for her survival on her missions, she would not want to have it attached to the weaker hand.</li>
<li>It must be quite difficult to take off the suit. This has the same reason as my first conclusion.</li>
</ul>
<p>It's funny to see how much those designers seem to have thought about all these details - whether consciously or not.</p>
Look what I've found!2004-07-29T00:00:00+00:00http://pilif.github.com/2004/07/look-what-ive-found<p>
This is great. This makes me incredibly happy, as it documents quite a relevant part of my life, which <a href="http://www.lipfi.ch/info/rit/index.php">I thought was long gone</a>. Here it is:
</p>
<p style="text-align: center"><a href="http://www.lipfi.ch/pilif/">My old webpage</a></p>
<p>And even more: <a href="http://www.lipfi.ch/pilif/fabedit">Fabedit</a>, too is still there.</p>
<p>I took the time to fix all long dead links and the syntax of the navigation tree, so it works in Mozilla (somewhat). Hell, I even fixed those little cgi-scripts.
</p>
<p>I first thought, that <a href="http://www.lipfi.ch/pilif/fabgrats">fabgrats</a> was lost, but I found a copy of it lying somewhere else. This is so incredibly great!</p>
<p>So. What's the fuss?</p>
<p>In the years '96 till 2000, I was <a href="http://groups.google.com/groups?safe=images&ie=UTF-8&as_uauthors=Philip%20Hofstetter&as_drrb=b&as_mind=1&as_minm=1&as_miny=1996&as_maxd=31&as_maxm=12&as_maxy=2000&lr=&hl=en">quite active</a> on the web. The freeware tool <a href="http://www.lipfi.ch/cgi-bin/pilif/frameset.pl?template=bestbrowser.tml&right=http://www.lipfi.ch/pilif/english/application/rasintask/rit.html">RasInTask</a> [the page has a usability deficit: there are some deeper links on the right side under "subtopics"] (unfortunately I lost the installer, so you cannot download the tool any more - unless you want to use the source, which is well preserved) was quite well known on the net. I actually got quite good reviews in two German magazines, and I had quite a few fans.</p>
<p>And I did more. I wrote articles, short stories, Delphi components and similar stuff. All of it, I now think, was a kind of compensation for what I was not getting in real life: respect and a girlfriend. So this old page is interesting not only from a technical standpoint (I think that, despite being a bit amateurish, it was quite good back then [just look at how it's possible to link directly into the page with a unique URL despite frames being used - I did this with the RasInTask link above] - not to speak of RasInTask, which I still think is quite good, though no one would use it these days), but also from a psychological one.</p>
<p>This relic from old times is quite good proof that people who are extroverted on the web are often quite different in real life. And while I'm still kind of active in the net world, I think I can say about myself that I'm finally adjusting my real life to what I always was on the net. This is a good thing, it seems.</p>
<p>So, what's there for you, my fellow reader?</p>
<p>Not much. An <a href="http://www.lipfi.ch/cgi-bin/pilif/frameset.pl?template=bestbrowser.tml&right=http://www.lipfi.ch/pilif/english/other/hp/index.html">old picture of mine</a>, some texts written in quite bad English and the nostalgic flair of a webpage done in the 90ies. The only thing probably useful to you is, unfortunately, lost (and I really don't think you could still compile the source code of RasInTask anyway).</p>
<p>
For me it's something different. It's a testimonial of who I was and who I've become. Yes, those years between '99 and '02 were great. And quite a lesson for me. It's good to finally feel grown up after such a long time. And it's good to see who I was, just to learn who I really am.</p>
<p>Oh. And I will go back to more technical stuff the next time. I promise</p>
<p><b>UPDATE:</b> I actually found the RasInTask installer somewhere, so it's available to download now. But please note: I know that the code is not very clean. I'm quite convinced that there are some access violations and synchronization errors just waiting to annoy you. So while it's interesting from a nostalgic point of view, I'd recommend against installing it. Oh! And don't ask me for support. I have not touched this for years.</p>
What a tool2004-07-27T00:00:00+00:00http://pilif.github.com/2004/07/what-a-tool<p>
I really like to photograph. I have done so since I was a child. Then I bought my Canon Ixus 500, which reawakened this old passion of mine. About a week ago, I bought O'Reilly's <a href="http://www.oreilly.com/catalog/digphotohks/">Digital Photography Hacks</a> and read through it, which was quite fun - it's an excellent piece of work: easy to read while still providing quite some knowledge.
</p>
<p>You should definitely read the book too if you are interested in digital photography (some hacks apply to old-fashioned analog photography as well).</p>
<p>
One thing I noticed when reading through: there is quite a lot of stuff that can't be done with those compact cameras. Many hacks just begin with "if you have feature X, you can..."</p>
<p>The feature list of <a href="http://www.olympusamerica.com/e1/sys_body_spec.asp">this baby</a> actually contains all those Xes from the book. Wow. That looks nice (besides being written in light-gray on white ;-) ). Expensive, but nice.</p>
Full text search for outlook2004-07-27T00:00:00+00:00http://pilif.github.com/2004/07/full-text-search-for-outlook<p>
As you <a href="http://www.gnegg.ch/archives/63-Each-problem-has-a-solution....html">may know</a>, we are using Exchange and Outlook for our email and groupware needs. The thing just works and has some really useful groupware features while - in contrast to all those PHP solutions - still being well integrated into the usual working environment (read: it has a Windows client). And even better: using Outlook/Exchange, even synchronizing the PDA works out of the box without much tweaking.
</p>
<p>But with all this greatness, there are two problems: First, Outlook is not what I'd call a <a href="http://www.gnegg.ch/archives/34-Mail-for-Windows-as-I-like-it.html">good email client</a>, but it gets close. I still can't use it for mailing list consumption (bad threading, no quote highlighting, ...), but for the rest it's usable. The second problem is the search function. It's incredibly slow, even when you create a full-text index on the Exchange server (without it, it's even slower). And besides being slow, it appears to search forwards: when I enter a search term, it walks through the messages from the oldest to the newest, which is quite impractical.</p>
<p>So for reading mailinglists and for searching, I used <a href="http://www.mozilla.org/products/thunderbird/">Thunderbird</a></p>
<p>Then I found <a href="http://www.lookoutsoft.com/Lookout/">Lookout</a> which was recently bought by Microsoft and released as freeware. This wonderful Outlook Add-In builds a fulltext index of all your Outlook folders and actually uses it (in contrast to outlook and the indexes on the exchange server). Additionally it has quite a powerful query language.</p>
<p>And with "fast" I mean <b>fast</b>: It takes just about 0.1 seconds to search my about 33'000 mails for this one message containing a certain word. This is great.</p>
<p>I've actually only two small problems with the tool:</p>
<ol>
<li>It uses the .NET Framework, which must be loaded each time I start Outlook. This increases the already long startup time.</li>
<li>It uses its own window to display the search results. Outlook's "Look for" function does this better and reuses the message list.</li>
</ol>
<p>Besides that: Great tool!</p>
99 little emails2004-07-26T00:00:00+00:00http://pilif.github.com/2004/07/99-little-emails<pre class="code">
pilif@galadriel ~ % cat ebinerv.php
<?
for ($i = 0; $i < 100; $i++) {
    mail('xxx@sensational.ch', 'Gnegg', 'Gnegg!', 'From: xxx@xxx.ch');
    echo "\rSent Mail $i";
}
echo "\nDone!\n";
?>
</pre>
<p>In principle, I'm long past such little toys. But Ebi has this special configuration where each email that arrives in his mailbox is forwarded as an SMS to his very old mobile phone. And the phone has this nasty bug (or, as some may call it, strange behaviour) where the "Delete all" function does not really do its job.
</p>
<p>In the end it was quite funny to watch Ebi manually delete nearly each and every SMS he got because of my script. Maybe he will now buy a better phone or fix his configuration? We'll see.</p>
Copying with MOVE? Moving with copy?2004-07-25T00:00:00+00:00http://pilif.github.com/2004/07/copying-with-move-moving-with-copy<p>
Today I came across a situation where I had to copy - using Delphi - some chunk of memory from one place to another. I had never done that before (using OOP techniques gets you around it most of the time - at least in Delphi), so I had no idea how to do it. What I knew is that in C, I'd do it with <tt>memcpy</tt>. As a convinced fan of Pascal's intuitive API naming, I looked in the help for <tt>MemCopy</tt> or <tt>CopyMem</tt>. Nothing (which is strange, considering things like <tt>AllocMem</tt> actually exist).
</p>
<p>Some googling around turned out the name of the function: it's</p>
<pre class="code"><strong>procedure</strong> <font color="#2040a0">Move</font><font color="4444FF">(</font><strong>const</strong> <font color="#2040a0">Source</font><font color="4444FF">;</font> <strong>var</strong> <font color="#2040a0">Dest</font><font color="4444FF">;</font> <font color="#2040a0">Count</font><font color="4444FF">:</font> <font color="#2040a0">Integer</font><font color="4444FF">)</font><font color="4444FF">;</font></pre>
<p>Move? That can't be. Can it? I want to copy, not move. A quick glance at the help file revealed the truth: Move actually copies...</p>
<blockquote>Move copies Count bytes from Source to Dest. No range checking is performed. Move compensates for overlaps between the source and destination blocks.</blockquote>
<p>Descriptive procedure names? Usually, yes. But this can only be described as way beyond the optimum ;-)</p>
<p>Oh... on another note: What do you think, <tt>Copy</tt> does? Copying memory? No way:</p>
<pre class="code">
<strong>function</strong> <font color="#2040a0">Copy</font><font color="4444FF">(</font><font color="#2040a0">S</font><font color="4444FF">;</font> <font color="#2040a0">Index</font>, <font color="#2040a0">Count</font><font color="4444FF">:</font> <font color="#2040a0">Integer</font><font color="4444FF">)</font><font color="4444FF">:</font> <font color="#2040a0">string</font><font color="4444FF">;</font>
<strong>function</strong> <font color="#2040a0">Copy</font><font color="4444FF">(</font><font color="#2040a0">S</font><font color="4444FF">;</font> <font color="#2040a0">Index</font>, <font color="#2040a0">Count</font><font color="4444FF">:</font> <font color="#2040a0">Integer</font><font color="4444FF">)</font><font color="4444FF">:</font> <strong>array</strong><font color="4444FF">;</font></pre>
<blockquote>
S is an expression of a string or dynamic-array type. Index and Count are integer-type expressions. Copy returns a substring or subarray containing Count characters or elements starting at S[Index]. The substring or subarray is a unique copy (that is, it does not share memory with S, although if the elements of the array are pointers or objects, these are not copied as well.)
If Index is larger than the length of S, Copy returns an empty string or array.
If Count specifies more characters or array elements than are available, only the characters or elements from S[Index] to the end of S are returned.</blockquote>
<p>Yeah! Right.</p>
<p>Oh, and on second thought: the Move thing may have its roots in assembly language, where <tt>MOV</tt> actually copies the data too - at least I think so. But anyway: if even C got it right, why does my beloved Pascal have to fail in such an easy case?</p>
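<p>To make the two behaviours concrete, here is a rough Python analogue of the pair - an illustration of the semantics only, obviously not Delphi:</p>

```python
# Sketch of Delphi's oddly named pair: Move copies raw bytes and
# tolerates overlapping ranges; Copy returns a clamped sub-sequence.

def move(buf: bytearray, src: int, dest: int, count: int) -> None:
    """Like Delphi's Move: copy count bytes, overlap-safe."""
    # Snapshotting the source range first is the "overlap compensation".
    buf[dest:dest + count] = bytes(buf[src:src + count])

def copy(s, index: int, count: int):
    """Like Delphi's Copy: 1-based index, clamped to the end of s."""
    if index > len(s):
        return s[:0]  # empty string/array
    return s[index - 1:index - 1 + count]

data = bytearray(b"abcdef")
move(data, 0, 2, 3)          # overlapping source and destination
print(data)                  # bytearray(b'ababcf')
print(copy("hello", 2, 10))  # 'ello' - count clamped to the end
```

<p>Note how <tt>move</tt> has to snapshot the source range first - that is exactly the overlap compensation the Delphi help describes.</p>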
Strange preconfiguration2004-07-23T00:00:00+00:00http://pilif.github.com/2004/07/strange-preconfiguration<p>Ever since I updated our in-office Gentoo box to Samba 3, I've had very bad performance (throughput-wise). And by bad performance I mean at most 200 KBytes/s on a 100MBit network.</p>
<p>For quite some time I thought that it must be my client machine, so I let the case rest. Until today, when someone else complained about really bad performance. So I began investigating.</p>
<p>At first I had one of our ultra-cheap switches in mind, so I tested the performance using FTP. Too bad: full speed there, so it must be a Samba problem.</p>
<p>What was really strange: write performance to the server was great. It was just reading that took so incredibly long. So, armed with this information, I did some googling and found ... only vague stuff. While there are some people with the same problem as me, they are always told that it must be a hardware or Windows problem (the two easy answers), and in all cases there was no further discussion.</p>
<p>Somewhere I found the tip to set the following in smb.conf for maximum performance:</p>
<pre class="code">socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192</pre>
<p>I went and looked, but the setting was already there. Too bad. The next thing I did was to comment the line out and restart samba:</p>
<pre class="code">#socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192</pre>
<p>And you will not believe it, but it helped. The server is back to its old performance of 8 MBytes per second, which is a good value considering the cheap equipment involved.</p>
<p>Problem solved. Culprit: Strange preconfiguration by Gentoo. Why this helped? No idea! Why the wrong setting in the first place? No idea either. Why the wrong tip to put this option into smb.conf? Don't ask me. I'm just happy, it works again.</p>
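<p>For the curious: that smb.conf line translates directly into <tt>setsockopt()</tt> calls on Samba's connections. A minimal Python sketch of what the setting does at the system-call level (illustration only, not Samba source):</p>

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Equivalent of: socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8192)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8192)

# A fixed 8 KB receive buffer caps the TCP window and can throttle reads
# badly; without the option the kernel picks (and tunes) the size itself.
# Note: Linux reports back twice the requested value.
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```

<p>Removing the line hands buffer sizing back to the kernel, which would explain the jump back to full speed.</p>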
Responding to search-strings (II)2004-07-21T00:00:00+00:00http://pilif.github.com/2004/07/responding-to-search-strings-ii<p>While looking through the logfile analysis of gnegg.ch I saw that someone came to this site searching with</p>
<blockquote>
set ie proxy delphi
</blockquote>
<p>so I decided that it's time for another episode of "Responding to search-strings" (the other being <a href="http://www.gnegg.ch/archives/89-Responding-to-search-strings.html">here</a>). This time it's about setting IE's proxy server from a Delphi application.</p>
<p>When you do it manually, you access the proxy server settings from Tools / Internet Options / Connections in Internet Explorer. Whatever you change there is used not only by IE, but by every application on your system that calls the WinInet API function <a href="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wininet/wininet/internetopen.asp">InternetOpen</a> with the flag <tt>INTERNET_OPEN_TYPE_PRECONFIG</tt> set. Additionally, many applications use the WinInet API only to get the proxy server settings and then use their own routines to actually connect to the server via the proxy they got before.</p>
<p>So, if you want to change the Internet Explorer proxy settings, you actually change them for the better part of the whole system.</p>
<p>When you go to Tools / Internet Options / Connections, you will immediately see that setting the proxy is going to be quite a task: You don't just set one proxy server, you actually set one for LAN-Connections and one for each dialup connection that is installed on your system. Finally the proxy being used depends on the state of the radio buttons you see in the middle of the dialog because they define whether IE should even bother connecting to the LAN or just call one of the connections defined.</p>
<p>But it gets even more complicated: The proxy settings provided changed for each version of Internet Explorer. As always, it was an evolutionary process getting more complex in every iteration, so you will have to cope with that too.</p>
<p>But now to the details: While the settings are stored in the Registry, this is not the recommended way for changing them. Microsoft has created some API functions specifically for that, so you should use them as this is the only way guaranteed to be portable even for future versions of Windows.</p>
<p>The problem: the API is very painful to use - even more so because it is somewhat different for each version of IE (getting more complex along with the proxy feature itself). Oh, and please don't ask me how to get the version of the installed IE - that I do not know.</p>
<p>It all revolves around <a href="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wininet/wininet/internetsetoption.asp">InternetSetOption</a> and <a href="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wininet/wininet/internetqueryoption.asp">InternetQueryOption</a> respectively. Both require a parameter to tell them which option you are interested in. Have a look at <tt>INTERNET_OPTION_PER_CONNECTION_OPTION</tt> (for IE5 and later), <tt>INTERNET_OPTION_PROXY</tt> and <tt>INTERNET_PER_CONN_OPTION</tt>.</p>
<p>In the end, you will be calling InternetQueryOption quite a lot and changing some settings with InternetSetOption, but you will soon see that it's not actually worth it: there is always the possibility that you have not anticipated some obscure setting a user may have, which will disturb your application greatly.</p>
<p>And besides, changing the proxy server settings is a task for an administrator, not for a simple application. Before asking "How can I change the proxy server?", the question should be "Do I really have to change the proxy server? Isn't there a better way?"</p>
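<p>To give an idea of what such a query looks like in practice, here is a hedged ctypes sketch of reading the current system-wide proxy string via <tt>InternetQueryOption</tt> (Windows-only; the structure layout follows the documented <tt>INTERNET_PROXY_INFO</tt>, but treat this as an illustration rather than production code):</p>

```python
import ctypes
import sys

INTERNET_OPTION_PROXY = 38  # from wininet.h

class INTERNET_PROXY_INFO(ctypes.Structure):
    _fields_ = [("dwAccessType", ctypes.c_uint32),
                ("lpszProxy", ctypes.c_wchar_p),
                ("lpszProxyBypass", ctypes.c_wchar_p)]

def current_proxy():
    """Return the system-wide proxy string, or None for direct access."""
    buf = ctypes.create_string_buffer(1024)
    size = ctypes.c_uint32(ctypes.sizeof(buf))
    ok = ctypes.windll.wininet.InternetQueryOptionW(
        None, INTERNET_OPTION_PROXY, buf, ctypes.byref(size))
    if not ok:
        raise ctypes.WinError()
    info = ctypes.cast(buf, ctypes.POINTER(INTERNET_PROXY_INFO)).contents
    return info.lpszProxy

if sys.platform == "win32":
    print(current_proxy())
```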
Fun with a tablet pc2004-07-21T00:00:00+00:00http://pilif.github.com/2004/07/fun-with-a-tablet-pc<p>
I laughed at them. Just like everyone did. I mean: why on earth should I pay more to get less? Tablet PCs usually have much too small a monitor and are much too underpowered - not to speak of resolution (I'm quite the screen-resolution guy anyway, considering that I'm seriously thinking about buying myself a T42p with the cool 1600x1200 resolution just because of that. But then: have you ever really used Delphi? If yes, you know what I mean). And on top of that: why on earth should I rely on handwriting recognition when everyone knows that it doesn't work?
</p>
<p>
Then I got a panel on my table to evaluate its potential as a mobile device running our <a href="http://www.popscan.ch">PopScan</a>. While it's not important what brand the thing actually was (Acer in this case), and while I certainly did not have the opportunity to really test it like <a href="http://www.gnegg.ch/archives/57-IBM-Thinkpad-T40.html">I did</a> with my T40, one thing I've seen: Tablet PCs are cool. Really cool.
</p>
<p>For one there is that extremely powerful handwriting recognition engine. In contrast to all other engines I've seen, the one running on the tablets really works. Without training or getting used to on my part, I had a recognition rate of about 95% with the exceptions being some non-words anyway (like gnegg or "Sauklaue" which actually got recognized as Saddam [Sauklaue is what you call a really terrible handwriting in German]). The engine is so good that it actually <strong>can</strong> serve as a keyboard replacement - at least if you're not writing too long texts (like this entry here ;-) )</p>
<p>But the real killer application of that thing is the included Microsoft Journal: a digital notepad (the name Notepad was already taken for something... else... in Windows). You just make your notes, which works very well thanks to the pressure-sensitive pen and because you can rest your hand on the display while writing - the tablet reacts only to the pen. Then, when you are done, you can draw a circle around the text you want to have recognized. Journal will do as you ask and replace your writing with a common text box, leaving your drawings in place.</p>
<p>This is perfectly adapted for my workflow. I usually have a piece of paper lying on my desk, serving as container for all that small stuff I have to keep in mind. Line numbers, small concepts, interface definitions - quite a lot of stuff actually. Then, when the paper gets full, I usually throw it away and take a new one.</p>
<p>If I could do those notes on a Tablet PC, I could actually preserve them. But not only that, I could search them - <em>in full text</em> (recognition is done in the background)! And it does not stop there: when I actually wrote down program code in those notes, I could immediately reuse it instead of manually retyping it.</p>
<p>All this potential is realized with the really great UI Journal has: you can insert space wherever you want, pushing down the content below (and doing so quite intelligently), you can copy and paste your drawings (sometimes I really wished I could do that on paper), and all that with a really simple UI. This is so incredibly great.</p>
<p>So to all those people laughing at Tablet PCs: try them! Maybe you will be quite surprised. I for my part am quite sorry I had to send the thing back.</p>
Vendor lock-in2004-07-19T00:00:00+00:00http://pilif.github.com/2004/07/vendor-lock-in<blockquote>
But, as Tom Kyte points out in his latest book, Effective Oracle by Design (Oracle Press), database dependence should be your real goal because you maximize your investment in that technology. If you make generic access to Oracle, whether through ODBC or Perl's DBI library, you'll miss out on features other databases don't have. What's more, optimizing queries is different in each database.
</blockquote>
<p>
Needless to say on which vendor's webpage I've seen <a href="http://otn.oracle.com/pub/articles/hull_asp.html?_template=/ocom/technology/content/print">the article</a> the quote is coming from. One thing you learn in practical life is that it's extremely difficult to switch databases once you begin using proprietary features. And you <strong>will</strong> have to switch. Sooner or later. Be it insufficient functionality (as I've seen with MySQL - I am still cursing the day I began using SETs), vendors going out of business, or even political reasons.</p>
<p>While I certainly see some value in using proprietary features, let me tell you: Use them with care. Always be on the lookout for the availability of different approaches to do the same thing. If there are none, don't do it (don't use SETs in MySQL for example).</p>
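<p>To illustrate what a portable alternative to MySQL's SET looks like: the same multi-valued attribute fits into a plain many-to-many join table that every SQL database understands. A minimal sketch using SQLite (the table and column names are made up for the example):</p>

```python
import sqlite3

# Instead of a MySQL-only SET('new','featured',...) column, model the
# multi-valued attribute as a standard many-to-many join table.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE article (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE flag (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE article_flag (
        article_id INTEGER REFERENCES article(id),
        flag_id    INTEGER REFERENCES flag(id),
        PRIMARY KEY (article_id, flag_id)
    );
""")
db.execute("INSERT INTO article VALUES (1, 'Vendor lock-in')")
db.executemany("INSERT INTO flag (name) VALUES (?)", [("new",), ("featured",)])
db.execute("INSERT INTO article_flag VALUES (1, 1)")
db.execute("INSERT INTO article_flag VALUES (1, 2)")

flags = [row[0] for row in db.execute(
    "SELECT f.name FROM flag f JOIN article_flag af ON af.flag_id = f.id "
    "WHERE af.article_id = 1 ORDER BY f.name")]
print(flags)  # ['featured', 'new']
```

<p>The join table is more verbose, but it runs unchanged on every RDBMS and, unlike SET, doesn't cap the number of possible values.</p>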
<p>And if you can only get the full performance out of your RDBMS by relying on proprietary features, don't use the RDBMS at all as it's quite obviously not the right system. Performance must be available without being forced to use proprietary features. At least without relying on features in the query language itself - optimizations in the backend are ok for me.</p>
<p>This is one of the reasons I don't use oracle, by the way. The other being <a href="http://www.gnegg.ch/archives/138-All-time-favourite-Tools.html">this</a> ;-)</p>
Gentoo and Jabber2004-07-18T00:00:00+00:00http://pilif.github.com/2004/07/gentoo-and-jabber<p>
Already in 2002 I did my first experiments with jabber, and I really liked what I saw while still reading the documentation. Setting up the server was a real pain, but eventually I got it working.</p>
<p>
Then came the <a href="http://www.gnegg.ch/archives/110-Speed-up.html">thing with our server</a>, and having in mind the hard work needed to set up jabber, I decided not to rebuild the jabber configuration - even more so because aim-transport still does not support those fancy iChat AIM accounts while Trillian does.</p>
<p>But today, after having seen that iChat in Tiger is going to support jabber, I finally decided that adding my beloved server back would be a cool thing...</p>
<p>And the whole adventure turned out to be another point where <a href="http://www.gentoo.org">Gentoo</a> shines above all other distributions: the ebuilds for jabber and the two transports I am using (AIM and ICQ) were already beautifully preconfigured. And not only that: they were current too (hint to Debian... ;-) )</p>
<p>One thing did not work at the beginning: I could not register with the AIM transport. A quick glance at the configuration file of aim-t showed me that the preconfigured config file uses a different port (5233) than the one recommended in the main configuration file (5223).</p>
<p>All in all it took me about 10 minutes to get my old jabber installation back. With current versions of all the tools involved and without writing my own startup scripts or other fancy stuff. This is one of the reasons I really like Gentoo.</p>
<p>Oh... and in case you ask: My Jabber-ID is <tt>pilif@chat.sensational.ch</tt>. It's not listed in the global user directory.</p>
<p>And if you're asking what client I'm using: though its interface may need some improvement, <a href="http://jajc.ksn.ru">jajc</a> is in my opinion the best client you can get if you are using Windows.</p>
Refactoring - It's worth it2004-07-16T00:00:00+00:00http://pilif.github.com/2004/07/refactoring-its-worth-it<p>
Just shortly after <a href="http://www.gnegg.ch/archives/146-Refactoring-If-only-Id-had-time.html">complaining</a> about not having time to do some refactoring, I reached a place in my code where it was absolutely impossible to add feature x without cleaning up the mess I had created three years ago. And - what's even better - I had the time to really fix it. Cleanly.</p>
<p>
What I did was sit down and recreate the whole module in a new Delphi project. I knew what features I wanted to have when finished, and I more or less knew the interface I had to comply with. The latter proved impractical, so I made some modifications to the interface itself (that thing was hacky too). Redoing the whole module took about a week (it's about downloading stuff, extracting and then XML-parsing it - everything in a thread while still providing feedback to the main thread), but it was absolutely worth it:</p>
<ul>
<li>The code is clean. And by clean I mean so clean that adding further features will still be clean - despite that not even being needed, as the new framework I've created is extremely powerful.</li>
<li>The thing is fast. Twelve times faster than the old version. I'm processing 7000 datasets in just 20 seconds now (including time needed for downloading and decompressing) which took me four minutes before.</li>
<li>The thing is more usable. Status reporting to the end user went from nearly nothing to everything the user may need. And she can now cancel the process - of course.</li>
</ul>
<p>A task fully worth undertaking. I've not been this pleased with my code for quite some time now.</p>
SonyEricsson, IMAP, Exchange2004-07-10T00:00:00+00:00http://pilif.github.com/2004/07/sonyericsson-imap-exchange<p>Since we <a href="/2003/10/each-problem-has-a-solution/">switched to Exchange</a> I’ve been unable to get my email from my SonyEricsson phones (first T610, then Z600 - talk about buying too many mobiles per time unit ;-). Every time I tried to connect, I immediately got a <tt>Server not found</tt>.</p>
<p>Today I’d had enough. This must be fixed, I told myself, and set out to fix it. And as the category for this entry is “Solutions”, I actually did solve it.</p>
<p>A quick check with <tt>netcat</tt> on the firewall (after turning off the port forwarding rules) revealed that it’s not actually a connection problem I was running into: The phone connected fine. So it must be something with Exchange…</p>
<p>The event log on the server revealed nothing at all. As always with Microsoft products: those messages are either not there or completely incomprehensible.</p>
<p>Next I tried to set the server to maximum logging (Exchange-Manager, right click on your Server, Properties, Tab “Diagnostics Logging”, IMAP4Svc). The result were two entries in the event log: Client XXX connected, Client XXX disconnected. Extremely helpful. Nearly as helpful as the “Server not found” my cellphone was throwing at me (see <a href="#errnote">note below</a>).</p>
<p>I noticed that this wasn’t getting me anywhere, so I went and got the cannon to shoot sparrows with: I downloaded <a href="http://www.ethereal.com/">Ethereal</a> and listened in on the conversation my phone was having with Exchange:</p>
<pre class="code">S: * OK Microsoft Exchange Server 2003 IMAP4rev1 server version x.x (xxx) ready.
C: A128 AUTH xxx\x xxxx
S: A128 BAD Protocol Error: "Expected SPACE not found".</pre>
<p style="font-size: 0.8em">(I won't ask why the phone isn't checking the capabilities before logging in. This is not what I call a clean implementation)</p>
<p>Not very helpful either. At least not for me, knowing the IMAP RFC just enough to understand what the A128 stands for (it’s a transaction tag which allows for asynchronous command execution; the server prefixes its answers with the tag given by the client), but not much else. So I had to do something else: log in with Mozilla Thunderbird, where I had no problems. After one failed attempt where I forgot to turn off SSL (…), I got this:</p>
<pre class="code">S: * OK Microsoft Exchange Server 2003 IMAP4rev1 server version x.x (xxx) ready.
C: 1 capability
S: * CAPABILITY (...) AUTH=NTLM
S: 1 OK CAPABILITY completed.
C: 2 login "xxx\\xx" "xxx"
S: 2 OK LOGIN completed.</pre>
<p style="font-size: 0.8em">(now that I'm reading through this (still without having read the RFC): isn't the server lying here? It only advertises NTLM auth, but Mozilla seems to ignore that and uses a plain LOGIN, which the server accepts too. Enlighten me!)</p>
<p>Aha! We seem to be having quoting issues in the phone. Good. Even better: the issue seems to be that the phone does no quoting at all, which is fine, because then we can do the quoting ourselves in the preferences screen.</p>
<p>After one failed attempt with two spaces after the username in the LOGIN-Line which was fixed by removing the somehow added trailing space in the phone’s username-field, <strong>I actually got it working</strong>. Yes. I’m reading my mail with the phone. It did work!</p>
<p>So, if you are having problems connecting to an Exchange-Server using SonyEricssons Phones, do the following:</p>
<ul>
<li>Enter the username as <tt>"DOMAIN\\username"</tt> (with quotes). Make sure there are no spaces before the first or after the last quote.</li>
<li>Enter the password as <tt>"password"</tt>. Include the quotes too and remove any spaces that may linger around.</li>
</ul>
<p>In other words:</p>
<ul>
<li>Escape \-es with another one of them: \ -> \\</li>
<li>Put username and Password in double quotes (")</li>
</ul>
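<p>In effect, the workaround is doing the IMAP quoting by hand, because the phone sends whatever you type verbatim. The quoting rule itself (per RFC 3501 quoted strings: wrap in double quotes, escape backslashes and embedded quotes) can be sketched like this:</p>

```python
def imap_quote(value: str) -> str:
    """Quote a string for an IMAP LOGIN command (RFC 3501 quoted string)."""
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'

user = imap_quote("DOMAIN\\username")   # what you type into the phone's field
password = imap_quote("secret")
print(f"A128 LOGIN {user} {password}")  # A128 LOGIN "DOMAIN\\username" "secret"
```

<p>A well-behaved client does this quoting itself; the phone apparently doesn't, so typing the pre-quoted form into its settings fields produces a valid LOGIN command on the wire.</p>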
<p><em>Dann klappt’s auch mit dem Nachbarn!</em> (from a stupid german commercial. Forget it if you don’t understand it)</p>
<p><a name="errnote"></a>One final note: &lt;rant&gt;Everything would have been so much easier if only there were more useful error messages involved. While I completely understand that the designers of the software don’t want to overwhelm their users and thus create seemingly simple messages, they should absolutely provide a “Details” link somewhere where the whole message can be read. Granted, cellphones are limited, so in a way I can accept the message I got there. What I cannot accept is the way Exchange logs the errors it encounters. Why on earth doesn’t a protocol error get logged when logging is set to “Maximum”?&lt;/rant&gt;</p>
Manager or programmer2004-07-06T00:00:00+00:00http://pilif.github.com/2004/07/manager-or-programmer<p>
When I read <a href="http://weblogs.asp.net/oldnewthing/archive/2004/07/06/173935.aspx">this blog entry</a> I could not resist posting a big warm</p>
<p align="center"><strong>ACK!</strong></p>
<p>The theory works quite well here in Switzerland too.</p>
Refactoring - If only I'd had time2004-07-05T00:00:00+00:00http://pilif.github.com/2004/07/refactoring-if-only-id-had-time<p>
Refactoring is a cool thing to do: you go back to the drawing board and redesign some parts of your application so that they better fit the new requirements that have built up over time. Sometimes you take old code and restructure it, sometimes you just rewrite the functionality in question (or even the whole application, but I don't count that as refactoring any more).
</p>
<p>Code always has the tendency to get messy over time as new requirements arise and must be implemented on the basis of existing code. Not even the most brilliant design can save your code. It's impossible to know what you are going to do in the future with your code.</p>
<p>Let's say you have an application that is about orders. Orders with ordersets that somehow get created and then processed. Now let's say you create quite a usable model of your order and ordersets. Very well. It's nice, it's clean and it works.</p>
<p>And now comes the customer and over the years new features are added, let's call it an inventory mode. You notice that these new inventory records have quite a lot in common with your orders, so you reuse them, but add some features.</p>
<p>Now full stop! It already happened. Why on earth are you reusing the old code and "just adding features"? That's not the way to go. The correct solution would be to abstract away the common parts of your order and inventory records to something like TProductContainer (using Delphi naming conventions here) which has two descendants TOrder and TInventoryRecord.</p>
<p>But this comes at a cost: It requires time. It requires quite some steps:</p>
<ol>
<li>Think of a useful abstraction (just naming it is not easy. My TProductContainer above is stupid).</li>
<li>Create the Interface</li>
<li>Implement the new subclasses</li>
<li>Change the application where appropriate (and if it's just changing declarations, it still sucks as it's time consuming)</li>
<li>Test the whole thing</li>
</ol>
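<p>In a Python-flavoured sketch, the abstraction step might look like this (the class and method names, including the admittedly stupid ProductContainer, are illustrative only):</p>

```python
class ProductContainer:
    """Common base for anything that holds product line items."""
    def __init__(self):
        self.items = []

    def add_item(self, product: str, quantity: int) -> None:
        self.items.append((product, quantity))

    def total_quantity(self) -> int:
        return sum(qty for _, qty in self.items)

class Order(ProductContainer):
    """Order-specific behaviour lives here, not in the shared base."""
    def process(self) -> str:
        return f"processing {len(self.items)} order lines"

class InventoryRecord(ProductContainer):
    """Inventory reuses the shared parts without hacking Order."""
    def reconcile(self, counted: int) -> int:
        return counted - self.total_quantity()

order = Order()
order.add_item("widget", 3)
print(order.process())  # processing 1 order lines
```

<p>The point of the exercise: new inventory features go into InventoryRecord, order features into Order, and neither pollutes the other.</p>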
<p>Now try to convince the project-manager or even your customer that implementing the required feature can be done in x days, but you'd like to do it in x*2 days because that would be cleaner. The answer would be another question like: "If you do it in x days, will it work?". You'll have to answer "yes", in the end. So you will be asked "if you do it in x*2 days, will it work better than in x days?" and you'd have to answer "No" as the whole sense in cleaning up messy code is to keep it running just the same.
</p>
<p>So, in the end those things will accumulate until it cannot be put off any longer and the refactoring has to be done no matter what, just because implementing the next feature takes x days plus y days just for understanding the mess you have created over time - y being 2x or so.</p>
<p>The mean thing is: the longer you wait to do the inevitable, the longer it will take to fix, so in the end it should always be the x*2 way - if only the non-technical people would understand.</p>
PHP scales well2004-07-04T00:00:00+00:00http://pilif.github.com/2004/07/php-scales-well<blockquote>
I think PHP scales well because Apache scales well because the Web scales well. PHP doesn't try to reinvent the wheel; it simply tries to fit into the existing paradigm, and this is the beauty of it.
</blockquote>
<p>Read on <a href="http://shiflett.org/archive/46">shiflett.org</a> after a <a href="http://developers.slashdot.org/article.pl?sid=04/07/03/1319245&mode=nested&tid=126&tid=156&tid=169">small pointer</a> from Slashdot in the right direction. This guy really knows what he is writing about - or at least it seems that way to me, as I think exactly the same way he does (which is a somewhat arrogant way of putting it, I suppose :-)).
</p>
Read by the PostgreSQL team2004-07-02T00:00:00+00:00http://pilif.github.com/2004/07/read-by-the-postgresql-team<p>
Seeing <a href="http://lwn.net/Articles/90603/">this</a> in my referrer log and seeing that Robert, who commented <a href="http://www.gnegg.ch/archives/105-Quote-of-the-day.html">here</a>, is on the PostgreSQL team too, I come to the conclusion that someone on the Postgres team - with obviously enough influence to propose links for the weekly newsletter - seems to be reading my humble blog.</p>
<p>Thank you for mentioning my posting in your weekly news. That was very kind.
</p>
WinInet, Proxies and NTLM2004-06-30T00:00:00+00:00http://pilif.github.com/2004/06/wininet-proxies-and-ntlm<p>For quite some time now, customers have been telling me that PopScan seems to have problems with proxy servers using NTLM authentication. I knew that, and I told everyone that this is not supported.</p>
<p>But I could not understand it: why did it not work? I mean, I went from my own HTTP routines to WinInet precisely to be able to use the system-wide proxy server settings and connections.</p>
<p>When using WinInet and <tt>INTERNET_OPEN_TYPE_PRECONFIG</tt> with <tt>InternetOpen</tt>, the whole thing is supposed to just work - as long as IE itself does. But in my application this wasn't the case, and I had no idea why. As soon as NTLM was enabled at the proxy, I was just getting a 407 <tt>HTTP_PROXY_AUTHENTICATION_REQUIRED</tt> status from the proxy, despite the correct password being used.
</p>
<p>MSDN was of help (taken from the documentation of <a href="http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wininet/wininet/httpopenrequest.asp">HttpOpenRequest</a>):</p>
<blockquote>
If authentication is required, the INTERNET_FLAG_KEEP_CONNECTION flag should be used in the call to HttpOpenRequest. The INTERNET_FLAG_KEEP_CONNECTION flag is required for NTLM and other types of authentication in order to maintain the connection while completing the authentication process
</blockquote>
<p>I've added this flag (and some more - now that I was already at it), recompiled, tested and - yes - it finally does what it should: it works just out of the box. No more 407, no more entering passwords for the users. One more thing that switched its state from "not supported" to "supported and working splendidly".</p>
<p>This is with an NTLM-enabled Squid proxy, but it should work with Microsoft ISA too.</p>
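<p>For reference, the fix amounts to passing one extra flag to <tt>HttpOpenRequest</tt>. A hedged ctypes sketch (Windows-only, error handling omitted; the flag value is from wininet.h, and this is an illustration rather than the actual PopScan code):</p>

```python
import ctypes
import sys

# Constants from wininet.h
INTERNET_OPEN_TYPE_PRECONFIG = 0
INTERNET_FLAG_KEEP_CONNECTION = 0x00400000  # required for NTLM proxy auth
INTERNET_DEFAULT_HTTP_PORT = 80
INTERNET_SERVICE_HTTP = 3

def fetch(host: str, path: str) -> None:
    """Issue a GET via WinInet, keeping the connection alive for NTLM."""
    wininet = ctypes.windll.wininet
    h = wininet.InternetOpenW("demo", INTERNET_OPEN_TYPE_PRECONFIG,
                              None, None, 0)
    conn = wininet.InternetConnectW(h, host, INTERNET_DEFAULT_HTTP_PORT,
                                    None, None, INTERNET_SERVICE_HTTP, 0, 0)
    # Without KEEP_CONNECTION, the multi-step NTLM handshake dies with a 407.
    req = wininet.HttpOpenRequestW(conn, "GET", path, None, None, None,
                                   INTERNET_FLAG_KEEP_CONNECTION, 0)
    wininet.HttpSendRequestW(req, None, 0, None, 0)
    wininet.InternetCloseHandle(req)
    wininet.InternetCloseHandle(conn)
    wininet.InternetCloseHandle(h)

if sys.platform == "win32":
    fetch("example.com", "/")
```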
Console game Videos2004-06-28T00:00:00+00:00http://pilif.github.com/2004/06/console-game-videos<p>
I've already <a href="http://www.gnegg.ch/archives/94-If-only-I-could-play-like-this.html">posted</a> about <a href="http://bisqwit.iki.fi/jutut/nesvideos/">this site</a> with its speedruns of old console games. What I did not know back then is that these videos are created using slow motion and savestates, which makes them look so exceptionally good (if you are up for movies not using savestates, then <a href="http://planetquake.com/sda/other/">this</a> is for you).
</p>
<p>Though the videos are made with savestates, they are extremely fun to watch, so bisqwit's page is one of those I have been visiting every day just to look for updates. Recently it went all quiet...</p>
<p>And today I see what was keeping bisqwit from posting new movies: the whole page got redesigned (on a wiki basis) and SNES and Genesis (Mega Drive here in Europe) movies were added. Very nice. My BitTorrent client is already hard at work ;-)</p>
Optimized comment display2004-06-23T00:00:00+00:00http://pilif.github.com/2004/06/optimized-comment-display<p>Yesterday, when I was reading through old entries here on gnegg.ch, it occurred to me that I never really styled the comments section of my postings during the redesign. I had taken the old MT template and style definitions and let it rest at that.</p>
<p>I wanted to change that and so I did:</p>
<ul>
<li>The comments are in one of those grey boxes now. I think they can be distinguished from each other a lot better now.</li>
<li>The comment-form is hidden by default. The JavaScript used is quite straightforward, but if you don't want to use JS and still want to comment, use a user-defined stylesheet and set
<pre> #comment-form{
	display: block !important;
}</pre>
</li>
<li>Using <a href="http://www.nonplus.net/software/mt/MTEntryIfComments.htm">MTEntryIfComments</a>, the trackback-list is only shown if there actually <strong>are</strong> trackbacks</li>
<li>Using the same plugin, if there are no comments, a message is displayed, encouraging to write one.</li>
</ul>
<p>I like this solution quite a lot. The entries are quite a bit less cluttered that way. What do you think?</p>
RAM doubler ;-)2004-06-22T00:00:00+00:00http://pilif.github.com/2004/06/ram-doubler<p>I have a server (running gnegg.ch) with 1.5 GBytes of RAM and I'm running <a href="http://www.gentoo.org">Gentoo Linux</a> (another candidate for my <a href="http://www.gnegg.ch/archives/138-All-time-favourite-Tools.html">all-time favourites list</a>, but it's still too soon for that. I've only been working with it for a little more than one year). And as I wanted the thing to be as secure as possible, I created a kernel from scratch <b>without module support</b>.</p>
<p>
What I've always asked myself is why the heck "free" lists just 896 MBytes of available memory:</p>
<pre style="font-size: 10px; background-color: #DEDEDE; border: 1px solid #6E6E6E; padding: 4px;">
galadriel root # free -m
total used free shared buffers cached
Mem: 885 193 692 0 6 69
-/+ buffers/cache: 117 768
Swap: 976 0 976
</pre>
<p>At first I had a BIOS problem in mind, but after having seen GRUB recognize the whole amount of memory, I came to the conclusion that there must be some problem in the kernel.</p>
<p>
As 2.6 was still quite new, I waited for the next <tt>gentoo-dev-sources</tt> to be released, which happened somewhere around today. With the new kernel the problem still existed, so I dug deeper.</p>
<p>
<tt>dmesg</tt> output something like this in its first lines:</p>
<pre>
Warning only 896MB will be used.
Use a HIGHMEM enabled kernel.
</pre>
<p>Though I misread the second line as a status message (stating that HIGHMEM is being used) instead of a request, I fed the above message into <a href="http://groups.google.com">Google Groups</a> and found out that the second line is indeed the solution to the problem.</p>
<p>In <tt> Processor type and features</tt>, set <tt>High Memory Support</tt> to <tt>4GB</tt> and recompile your kernel.</p>
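<p>In .config terms, the menu setting above should correspond to something like this (symbol names as in the 2.6 x86 Kconfig; double-check against your own kernel version):</p>

```text
# CONFIG_NOHIGHMEM is not set
CONFIG_HIGHMEM4G=y
CONFIG_HIGHMEM=y
```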
<p>What I didn't understand at first: I'm having this problem with 1.5GB of RAM, yet the option sounded like it was about 4GB. The explanation: without HIGHMEM, a 32-bit kernel can directly map only the lowest 896MB of RAM; the 4GB setting enables highmem support for everything between 896MB and 4GB, which covers my 1.5GB just fine. So Google was helpful like most of the time, enabling me to virtually double the available RAM.</p>
<pre style="font-size: 10px; background-color: #DEDEDE; border: 1px solid #6E6E6E; padding: 4px;">
galadriel root # free -m
total used free shared buffers cached
Mem: 1520 333 1186 0 12 158
-/+ buffers/cache: 162 1358
Swap: 976 0 976
</pre>
<p>Nice! Isn't it?</p>
<p><b>Update:</b> For those that have not yet noticed it: The title of this entry does hint at products like <a href="http://ramdefrag.sourceforge.net">this</a>, though this one is at least honest in its description.</p>
Movable Type licensing2004-06-22T00:00:00+00:00http://pilif.github.com/2004/06/movable-type-licensing<p>While looking for some documentation for improving my comments-system (later post), I came across a link to <a href="http://www.sixapart.com/log/2004/06/announcing_pric.shtml">this blog entry</a> that announces a revised licensing scheme for Movable Type 3.0.</p>
<p>This time they actually did it right: The (still) free edition is now clearly announced. The personal edition is what quite a lot of users (including myself) have wanted (unlimited blogs) and it is quite affordable. This is nice.
</p>
<p>Thank you, Movable Type</p>
All-time favourite Tools2004-06-21T00:00:00+00:00http://pilif.github.com/2004/06/all-time-favourite-tools<p>
Who doesn't have them? Those all-time favourite tools. It's not just software, it's passion. Those are the tools that you <strong>always</strong> have to use. Tools where all objectivity seems to fade away when it comes to making recommendations. Tools where you actively monitor (or even participate in) the development. Tools to which you, though they are free, gladly donate some money. Tools you love.
</p>
<p>
Of course, I too know of some tools. And this is my list (in no particular order):</p>
<ul>
<li><a href="http://www.exim.org">Exim</a> is a Unix MTA (mail server). It is not only extremely configurable, it's even easy to configure. Back in 2000, Exim was the only MTA capable of being used in an environment where all accounts are stored in a MySQL database. Since then I have been using Exim for all my mail serving needs, and I still have not stopped discovering new ways the incredibly flexible configuration scheme can be used to do even fancier stuff. But the greatest thing about Exim is its creator, Philip Hazel. Phil is an ingenious programmer - a really pragmatic one. I love to read his emails on the Exim mailing list. I love to see his solutions, which are quite often so much simpler than what others suggested but leave nothing to ask for at all. Btw: During summer 2001 I even extended my accounts-in-MySQL configuration and put it on the web as a .txt file. Oliver Siegmar was convinced enough to build <a href="http://www.xams.org">XAMS</a> on it. I really like Exim.</li>
<li><a href="http://www.postgresql.org">PostgreSQL</a> came to my rescue when I desperately needed an RDBMS that really merits that name. I constantly ran into limitations of MySQL, so I was on the lookout for a better alternative. With the TOAST tables of PostgreSQL 7.1, it finally became possible to have length-unlimited columns, which I needed in the web application I was working on (for storing long comments), so it became a real solution. Since then PostgreSQL has never failed me or any of our customers. In my journey with PostgreSQL I learned a lot about programming database systems while reading through the posts of people like the ever so conservative Tom Lane and others. What a great community. What a great database server!</li>
<li><a href="http://www.jrsoftware.org/isinfo.php">InnoSetup</a> (and its graphical frontend <a href="http://www.istool.org">ISTool</a>) is an easy-to-use and extremely powerful generator for Windows installers. I know that you are supposed to use MSI these days, but InnoSetup works, has every feature you could dream of and - that's the point - is terribly easy to use. My journey with InnoSetup is a long one. It began back in 1996, when I was its very first translator (that translation is now long outdated), and it goes on through nearly all releases until today. Inno's programmer, Jordan Russell, is another one of those extremely talented people. Reading his posts in the support newsgroups is a real pleasure - reading Inno's source code is very enlightening. How powerful such a little tool can be!</li>
</ul>
<p>
And you? Do you have such tools in your toolbox? Do you use the words love and software in the same phrase? I certainly do!</p>
Web Applications and the View State2004-06-19T00:00:00+00:00http://pilif.github.com/2004/06/web-applications-and-the-view-state<p>Today it came to my mind that I know of a problem with some web applications which apparently few others seem to know about. What is worse is that new technologies like ASP.NET and Java Server Faces seem to run straight into the problem.</p>
<p>This article is even bigger than the usual, so I split it up.
<!--more--></p>
<p>This is about tracking the state of the application, which is usually done with whatever your environment provides for using HTTP-Sessions (Cookie or whatever based - it doesn’t matter). The concept is always the same: For each concurrent visitor on your application, the server allocates some kind of storage, assigns that an ID and sends that back to the web browser of the client, which, on every further request, provides the Server with that ID which in turn looks for the associated data.</p>
<p>This works very well, but can badly break when the user opens another browser window. Let me make an example (which is a bit constructed, but I’m going to give you a real-world one later):</p>
<p>Suppose you have a web application that lists some items and provides a link to delete them. When the user clicks one of those links, another page will open asking the user for confirmation. This could look like this:</p>
<table border="1">
<tbody>
<tr>
<td>Item1</td>
<td>[Delete]</td>
</tr>
<tr>
<td>Item2</td>
<td>[Delete]</td>
</tr>
<tr>
<td>Item3</td>
<td>[Delete]</td>
</tr>
<tr>
<td>Item4</td>
<td>[Delete]</td>
</tr>
<tr>
<td>Item5</td>
<td>[Delete]</td>
</tr>
</tbody></table>
<p>(of course, there would also be an Edit link and a possibility to add another entry, but this doesn’t matter for this example)</p>
<p>Now, if the user clicks on one of those delete-links, another page will open looking about like this:</p>
<hr />
<p>&lt;Item&gt; is going to be deleted. OK?
[yes] [no]</p>
<hr />
<p>Now let’s suppose that on this delete page there is code in the back end that sets a session variable called $item_to_delete to the ID of the item for which the user clicked the delete link on the index page. Then, when the user clicks “OK”, the next page will use this session variable and delete the (apparently) selected item.</p>
<p>This workflow is very nice and works well <strong>as long as the user does not open more than one browser window</strong>.</p>
<p>Let me explain with an example. Suppose the user does the following:</p>
<ol>
<li>Open the delete-Link of the Item1 in a new window ($item_to_delete is set to 1)</li>
<li>Open the delete-link of the Item2 in a new window ($item_to_delete is set to 2)</li>
<li>Go to the Window asking for confirmation to delete Item 1 and click ok</li>
</ol>
<p>What happens is that the session-variable $item_to_delete is looked at and the corresponding item is deleted. Too bad, it’s not set to 1, but has been set to 2 while the user opened the other confirmation-page in another window. <strong>The wrong item is deleted</strong>.</p>
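<p>The race described above can be simulated with a tiny sketch (plain Python standing in for the server-side code; the names are made up for illustration):</p>

```python
# Hypothetical, simplified server-side code: one session dict shared by
# all browser windows of the same user.
session = {}
items = {1: "Item1", 2: "Item2"}

def open_confirmation_page(item_id):
    # The confirmation page's back end stores the target in the shared session.
    session["item_to_delete"] = item_id

def confirm_delete():
    # The "OK" handler trusts whatever is currently in the session.
    del items[session["item_to_delete"]]

open_confirmation_page(1)  # window 1: "Delete Item1?"
open_confirmation_page(2)  # window 2: "Delete Item2?" silently overwrites it
confirm_delete()           # user clicks OK in window 1...
print(items)               # {1: 'Item1'} - Item2 is gone, Item1 survived
```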
<p>Session variables work per user session, not per open browser window, which generally is what the developer intended, as it should be possible to open multiple windows and still stay logged in, for example.</p>
<p>Now you may call this example of mine a bit constructed, and you are right about that. But then again, go to <a href="http://www.linux-community.de">linux-community.de</a> (it’s German, but it doesn’t matter for this example) and do the following:</p>
<ol>
<li>Open any article linked on the front page in a new browser window</li>
<li>Open another article in another browser window</li>
<li>In the window displaying the first article, change some comments-viewing preferences (with those buttons below the article)</li>
</ol>
<p>Now though you have correctly changed the viewing-style of the comments, the article you are seeing is the wrong one - it’s the one you have opened in the second browser window. Not the original one. This is exactly the same problem as the one I’ve described in my example above.</p>
<p>There are two approaches to fix that problem:</p>
<ul>
<li>Disallow the user to open another browser window. This is a no-solution, as it introduces quite a usability problem and may not even be technically possible. I mean: Just imagine reading the above Linux website without the ability to open multiple windows at once (the same happens with tabs, anyway)</li>
<li>Fix the problem by putting the object which the application currently is working on into the context of the currently visible page. This may be the url (/del_form.php?id=xxx) or a hidden form-field. Whatever. Just not the session-data</li>
</ul>
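<p>The second fix can be sketched like this (again plain Python with made-up names; the point is that the item ID travels with each request instead of living in the session):</p>

```python
# Hypothetical, simplified sketch of the fix: the item id travels with every
# request (URL parameter or hidden form field) instead of living in the session.
items = {1: "Item1", 2: "Item2"}

def render_confirmation_page(item_id):
    # Embed the id in the page itself, so each window carries its own state.
    return ('<form action="/del_form.php">'
            f'<input type="hidden" name="id" value="{item_id}"></form>')

def handle_delete(params):
    # Read the id back from the request parameters, never from session data.
    del items[int(params["id"])]

window1 = render_confirmation_page(1)
window2 = render_confirmation_page(2)  # does not disturb window 1
handle_delete({"id": "1"})             # OK in window 1 deletes Item1
print(items)                           # {2: 'Item2'}
```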
<p>Now. That’s an easy fix. Why am I blogging about this if it’s so easily fixed?</p>
<p>The problem is those new environments that allow you to do web programming event-centered, like you would do client-side GUI programming. Those environments (ASP.NET and Java Server Faces come to my mind) depend heavily on the state of the application stored somewhere, and they try to abstract away any web-specific problems (like this one), but - and that’s my point - both systems I know of (I’ve never worked with them, but I have read quite some documentation) don’t seem to be aware of the buglet I’m describing here. This would not be that big a problem, as the developer can work around it easily enough. The problem is the high degree of abstraction provided, which allows the developer to (seemingly) forget about the stateless environment she is working with, which in the end leads to her not thinking about problems like the one I’m writing about.</p>
<p>What a developer may see is that the application sometimes seems to fail for some users, which is one of those extremely difficult-to-debug problems. In the end the developer may notice that it has to do with multiple browser windows, and instead of trying to analyze the problem further, she will be compelled to just disallow opening multiple windows, thus creating a big usability problem.</p>
<p>Later, the vendors of those new environments may even recognize the problem and “fix it” by doing interesting things like checking the integrity of the view state with some kind of hash or whatever and present the user that “dares” to open another windows with a message like this:</p>
<blockquote>Viewstate corruption detected
A corruption of the viewstate has been detected. Please close all browser windows and try again. All changes you may have made are lost. We apologize for any inconvenience</blockquote>
<p>or whatever.</p>
<p>I’m really no fan of those new web technologies, as they abstract away too many details. In general, I have no problems with abstractions or with anything else that makes it easier for us developers to build applications. But when it comes down to creating a usability nightmare just because it must be easy for the developer, we have gone a step too far. And we should be looking for a better solution.</p>
Zelda: Four Swords Adventures2004-06-18T00:00:00+00:00http://pilif.github.com/2004/06/zelda-four-swords-adventures<p>
About a year ago, I bought myself a GBA with the GBA-port of "A Link to the past" - one of the best Zelda games (and they all are great [except Majoras Mask which I didn't really like]). What I did not know when I bought the game is that the add-on "Four Sword" that was on the cartridge could be so incredibly fun to play.</p>
<p>It was more to prove that a storyless multiplayer Zelda can't be any fun that I played a round with Richard and Evelyn back then.</p>
<p>As always with prejudice, I have been proven wrong: It was so incredibly fun to play Zelda together. I mean: Though you have to be cooperative to play through, it's the player with the most rupees that finally wins the round, so it creates quite a bit of dynamics.</p>
<p>I have not played the GBA game since last December, as I now more than know all the randomized levels that can get generated, and I think we have killed Vaati about 100 times, so the fun finally went away - after nearly a whole year of fun.</p>
<p>But then I found out that multiplayer Zelda is coming back. To the Game Cube this time: <a href="http://www.nintendo.com/gamemini?gameid=m-Game-0000-1849">The Legend of Zelda: Four Swords Adventures</a>.</p>
<p>What concerned me was that a GBA is needed to play the game and that you always play with four Links. And then there was the thing with the non-autogenerated levels. Is there any replay value? Do the not-so-great graphics work on the GameCube? Is it difficult to follow the action, alternating between the video beamer and the GBA screen?</p>
<p>Many questions. I wanted them answered and I wanted more multiplayer Zelda! So I've imported the US-Version of the game and received it last thursday. And yesterday I played it with someone else for the first time.</p>
<p>I can guarantee you: It's fun. As always I will list the things I don't quite like first:</p>
<ul>
<li>The guy that designed the GBA->GC cable without a pass-through power plug should be shot. I hate having to interrupt playing just because one of the GBA SPs lost power, with no way to attach the power cable.</li>
<li>The GC->GBA cable makes it quite inconvenient to use the L and R buttons on the GBA, which are quite often used when playing FSA (Four Swords Adventures)</li>
<li>It takes some getting used to the GBA-GC combination, but when you get the hang of it, it works quite well</li>
</ul>
<p>And that's already all I have to say to the negative points. Now to the positive:</p>
<ul>
<li>It's so much fun. Whether you play alone or with opponents (or is it partners? You can never be sure when playing multiplayer Zelda). I really like the sequences where hordes of enemies appear and you can slay through them. Great!</li>
<li>I really like the graphics. It's 2D, yes, but they did a really great job with it. It's very smooth and detailed, with zooming and everything. Cool.</li>
<li>Great sounds. As a fan of "A link to the past", it's great to hear the old tunes on current more capable hardware.</li>
<li>Did I mention it's incredibly fun? This is more a property of any multiplayer Zelda, but it's no different this time. I so much like throwing my so-called partner into the water just to get some rupees^W Force Gems. Then again, ruthless playing gets you nowhere, as you can't proceed without the help of the other players. And then there is that election at the end of each level which can still change the tide...</li>
<li>It's long. And, thus, unrepetitive. Much longer than the GBA.</li>
<li>Though it's not much, there is a bit of story in the game. Much more than on the GBA title. I like that</li>
<li>It's just great!</li>
</ul>
<p>If you can get it, go and buy the game. But don't play it alone. It's so much more fun when played together.</p>
Eclipse, CVS and putty2004-06-17T00:00:00+00:00http://pilif.github.com/2004/06/eclipse-cvs-and-putty<p>I'm a really big fan of <a href="http://www.eclipse.org">Eclipse</a>. This Java IDE has many great features I have never come across in other IDEs so far. The new context-based syntax highlighting comes to mind (it analyzes your source code and can - for example - distinguish between local variables and constants)</p>
<p>Actually, it's only because of Eclipse that I can now write fairly good Java code. The thing was incredibly helpful during my first struggle to get something to work, so I made quite some progress in quite a small timeframe.
</p>
<p>There is one thing though, I never got to work: CVS integration</p>
<p>I'm using CVS strictly over SSH, with the help of Putty, Pageant and public key authentication. Despite the fact that I entered the correct settings for the "ext" method (using Putty's plink.exe as CVS_RSH) in Eclipse, it never worked (it failed with various messages)</p>
<p>Of course there is the new extssh method, but this is non-standard. While I can access the CVS server using extssh from Eclipse, it does not really help, because then the command line tools and TortoiseCVS stop working, as they don't understand extssh</p>
<p>Finally I found the solution: Even though it doesn't make sense, you have to enter "cvs" under CVS_SERVER in the CVS-Settings. I don't know why. It's just that way. So to use Eclipse together with the command line tools and Tortoise to access the CVS-Repository from the same working copy, this is what you have to enter under Window/Preferences/Team/CVS/Ext Connection Method:</p>
<table>
<tr>
<th>CVS_RSH</th><td><tt>your\full\path\to\plink.exe</tt></td>
</tr>
<tr>
<th>Parameters</th><td><tt>{user}@{host}</tt></td>
</tr>
<tr>
<th>CVS_SERVER</th><td><tt>cvs</tt></td>
</tr>
</table>
<p>Then you add a repository in the repository-view using the following settings:</p>
<table>
<tr>
<th>Host</th><td><tt>your.host.name</tt></td>
</tr>
<tr>
<th>Repository path</th><td><tt>/path/to/repos</tt></td>
</tr>
<tr>
<th>User</th><td><tt>username</tt></td>
</tr>
<tr>
<th>Password</th><td>empty</td>
</tr>
<tr>
<th>Connection type</th><td>ext</td>
</tr>
</table>
<p>Before you finally click "Finish", open up a command line window and log in to your CVS-Server using plink:</p>
<pre>
plink user@host
</pre>
<p>You may be asked to store the host key in plink's database. Do so. Then make sure that you can log in without a password request popping up (Pageant must be running, your key must be loaded and authorized on the server). If that works, click "Finish" in Eclipse.
</p>
Worst scrollbar ever2004-06-14T00:00:00+00:00http://pilif.github.com/2004/06/worst-scrollbar-ever<div class="floatimg">
<img alt="scrollbar.png" src="http://www.gnegg.ch/archives/scrollbar.png" width="78" height="173" border="0" /><p class="legend">Taken from thawte.com. I made the thing a bit shorter, but changed nothing else.</p>
</div>
<p>Small quiz: What is this thing you are seeing in this image (possibly displayed on the right in your browser)?</p>
<p>It's a scrollbar. Or at least, it's supposed to be one. I came across this "wonderful" ... thing when I was looking around for a code signing certificate on thawte.com. Have a look for yourself: It's right <a href="http://www.thawte.com/codesign/index.html">there</a>.</p>
<p>I have quite some problems with this maybe-scrollbar:</p>
<ul>
<li>I can't read it. Maybe it's my eyes, but I simply have problems recognizing a dark blue slider on black background.</li>
<li>It's no real scrollbar. It does just look like one. This has many "advantages":
<ul>
<li>The mouse wheel does not work with this one.</li>
<li>The arrow-buttons are much too small for my preference</li>
<li>It's terribly slow as it's pure DHTML</li>
<li>Clicking somewhere on it does scroll the document to the designated point, but it's terribly slow too.</li>
<li>Dragging and Dropping the knob does not work. Firefox (correctly) thinks I want to drag the knob-image somewhere to my desktop</li>
<li>Keyboard navigation is not possible</li>
</ul></li>
<li>But the worst thing of them all: If the browser window is small enough, it gets a (real) scrollbar on its own. But with this one, the mouse wheel does work (no surprise: it's a real scrollbar). Thus, scrolling through the page with the wheel scrolls, but it scrolls the wrong thing. There is no way to read the whole page using the mousewheel.</li>
</ul>
<p>The page has a fixed height, so the last problem above can occur quite frequently. I see absolutely no sense in creating a DHTML monster to do something the browser already does. I can't see any drawback in the page having a non-fixed height and relying on the browser's scrollbar.
</p>
<p>Can somebody enlighten me?</p>
<p>PS: As comments about usability seem to get constantly more numerous, I have created a usability category. Maybe some time in the future I will actually create a category-based listing :-)</p>
The price of abstraction2004-06-10T00:00:00+00:00http://pilif.github.com/2004/06/the-price-of-abstraction<p>
<a href="http://www.osnews.com/story.php?news_id=7324">This article</a> was featured on Slashdot today. It's about the current state-of-the-art Linux desktop being quite demanding in hardware - even more demanding than the arch-nemesis Windows XP.</p>
<p>And it's true.</p>
<p>I see one of the problems in the basics of the Unix philosophy: Use small tools to do a specific task and another in the OpenSource-Philosophy: Write clean code.</p>
<p>These two approaches create wonderful architectures and abstractions of small tools doing their work.
</p>
<p>What nobody seems to recognize: This so wonderful and well thought-out architecture is bloated per se. Let's say you are playing a Video-File in a KDE-Video-Player running in KDE. This is what's running on your system to acomplish this task (I hope I get all the (bigger) components really running - maybe there are more (or less) of them):
</p>
<ul>
<li>Linux Kernel</li>
<li>KDE-Sound-Server</li>
<li>X-Window-Server (complete with unused network transparency, which would not work with the video anyway)</li>
<li>The whole QT-Library</li>
<li>Some basic KDE-Abstractions (kdelibs)</li>
<li>Your media player</li>
</ul>
<p>
Every component is cleanly separated from the others - every one can be replaced without disturbing other components. Every one is designed cleanly, using many abstractions to provide this replaceability even for internal components.</p>
<p>But it gets even more complicated: Many of the acting components are independent processes, which creates the need for quite a bit of IPC, which is always slower than direct calls.</p>
<p>No wonder this is slow!</p>
<p>In Windows for example quite a lot of the stuff described above is actually running in the kernel or at least very close to it, maybe using undocumented interfaces to the kernel.
</p>
<p> Playing a video mostly depends on DirectX, which uses mostly in-process calls. It's dirty, it's unstable (maybe), but it's fast, doesn't flicker and happens to just work (the fewer independent components involved, the less can go wrong).</p>
<p>Of course that's not how software should be written. It's how it <em>is</em> written when fast and impressive results are requested.</p>
New PowerMacs2004-06-09T00:00:00+00:00http://pilif.github.com/2004/06/new-powermacs<div class="floatimg" style="width: 210px">
<img alt="designcluttervertical060904.jpg" src="http://www.gnegg.ch/archives/applesmall.jpg" width="150" height="290" border="0" /><p class="legend">(Taken from apple.com)</p>
</div>
<p>
Today Apple announced new G5 Power Macs, where the most expensive one has two 2.5 GHz CPU's and - that's the reason for this entry - a liquid cooling system.
</p>
<p>
When I see the two words liquid and cooling together with Computer in one Phrase, I think of things like <a href="http://www.golem.de/0406/31643-star_ice_b.jpg">this</a>, <a href="http://www.caseking.de/shop/catalog/default.php?cPath=29_409&osCsid=f138953d9013d47def3ed5c8e4a1eb59">this</a> and <a href="http://personal.telefonica.terra.es/web/articiapower/FOTO11.jpg">especially this</a>. I find that really stupid</p>
<p>The German magazine <a href="http://www.heise.de/ct">c't</a> recently had an article about liquid cooling systems for PCs, and none of them was both more efficient than conventional air cooling and secure enough to be used (you know: water and current aren't a very good team. And then comes the whole chemistry with its cool things like corrosion and other stuff). So for me, liquid cooling is just another gadget for overclocked gaming PCs. Often a liquid cooling system is applied to keep the hopelessly overclocked CPU cool. And all is done because overclocking is supposed to be cheaper than buying the real thing, but in the end all this cooling stuff is much more expensive - even more so when something goes wrong.</p>
<p>
I fail to see where Apple's solution is something different. The old G5s were quiet too, so I don't see any reason for this besides it being a cool feature.
</p>
<p>
But whatever. The comparison between the internals of a G5 and a common PC I've taken from the <a href="http://www.apple.com/powermac/design.html">Apple page</a> is quite cool. What they don't tell you: If you buy a complete PC from a manufacturer like IBM, you won't get something extremely different from what you see on the left. But it's cool nonetheless.</p>
Another one...2004-06-09T00:00:00+00:00http://pilif.github.com/2004/06/another-one<p>
Now that Bluetooth support is being integrated into the Windows OS, I thought maybe Logitech had created a new driver for its diNovo package that does not <a href="http://www.gnegg.ch/archives/72-Fun-with-Logitech.html">insist</a> on the Logitech hub being installed.
</p>
<p>This time the driver did not complain about the hub not being installed, so maybe this is a good sign (I have not rebooted yet), but the installer presented me with the dialog below, which has such bad wording that it's worth blogging about:</p>
<div align="center">
<a href="http://www.gnegg.ch/archives/setpoint.html" onclick="window.open('http://www.gnegg.ch/archives/setpoint.html','popup','width=504,height=260,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img src="http://www.gnegg.ch/archives/setpoint-thumb.png" width="320" height="165" border="0" /></a><br />(I cut out some empty space at the bottom and created red overlays to highlight my point)
</div>
<p>This thing not only has one flaw, it has many:</p>
<ul>
<li>The text speaks about check boxes, but there are radio buttons</li>
<li>The text suggest it's possible to select more than one item in the list below. It isn't</li>
<li>There is no way to install Keyboard, Trackball <b>and</b> Mouse support</li>
<li>I don't have a trackball plugged in for now - yet the option is still there</li>
<li>The list is extremely badly readable as it combines options to install in one point: "Keyboard and Mouse". The approach with Checkboxes would be much easier to understand</li>
</ul>
<p>A fix would be to actually do what the text says: Use checkboxes. If it's not possible to install both mouse and trackball drivers (why is that so? Older drivers did not have that limitation), uncheck the other box, or even better, create one checkbox and two radio buttons:</p>
<pre>
Please choose what drivers to install:
[ ] Keyboard
( ) Mouse
( ) Trackball
</pre>
<p>The problem is that the installer used (<a href="http://www.installshield.com">InstallShield</a>) is extremely bloated, and though it has the features necessary to create a dialog like the one I described above, it's so complicated that a separate developer is needed just for the installer - and often there's no time for that.</p>
<p>Which makes me find <a href="http://support.installshield.com/kb/view.asp?articleid=Q105444">this</a> even more hypocritical than it already is - especially since the installer the end user sees is not made by InstallShield but by the respective developer: the final installer is only as good as whoever created it, not as the tool used to create it</p>
<p>If you ask me what a good installer may be, I'll answer <a href="http://www.jrsoftware.org">InnoSetup</a>. It's small, simple, fast and extremely powerful.</p>
Relaunch2004-06-08T00:00:00+00:00http://pilif.github.com/2004/06/relaunch<p>
Richard did a wonderful job on the new <a href="http://www.popscan.ch">PopScan</a> website. Looks great and even uses CSS to some extent. It's not perfect yet, but it's getting there.
</p>
<p>
The page as such is much better than the previous one, though it's not translated to english yet.</p>
XP SP2, Delphi Debugger2004-06-07T00:00:00+00:00http://pilif.github.com/2004/06/xp-sp2-delphi-debugger<p>
This weekend, a pre-RC2-Release of Windows XP SP2 could be found on the net. Eager to learn whether the Delphi debugger now works again, I've downloaded and installed the thing.
</p>
<p>
I'm happy to report, that delphi indeed works again, so I'm keeping the build installed for now, as I really like the integrated bluetooth-support.</p>
pilif.ch SPFed...2004-05-29T00:00:00+00:00http://pilif.github.com/2004/05/pilifch-spfed<p>
I'm quite proud to announce that as of now, <a href="http://www.pilif.ch">pilif.ch</a> (my personal webpage - in contrast to gnegg.ch, my blog) has a TXT record that follows the <a href="http://spf.pobox.com/">SPF</a> specification. If you already use SPF on your mail server, you can now be sure whether mail seemingly coming from pilif.ch is legit or not.</p>
<p>But there's another thing. While I was quite impressed by the simplicity and the good protection from spam that SPF could provide, I had some thoughts about how to circumvent SPF-based filters, and I found that it's disturbingly easy...</p>
<p>The problem lies in the fact that any spammer can just buy himself a nice new domain, use it for this one session of spam, and add a nice SPF record. It's even possible to still use cracked zombie systems when the SPF entry is "wisely" chosen (like adding 0.0.0.0 to the permitted senders).</p>
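<p>For reference, such a TXT record looks like the following (an illustrative zone file fragment; the mechanisms shown are examples, not necessarily the record actually published for pilif.ch):</p>
<pre>
pilif.ch.    IN    TXT    "v=spf1 a mx -all"
</pre>
<p>Here <tt>a</tt> and <tt>mx</tt> permit the hosts behind the domain's A and MX records to send mail, and <tt>-all</tt> rejects everything else - whereas the loophole described above amounts to publishing a deliberately over-permissive record.</p>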
<p>But even if that's going to happen, there are some drawbacks for the spammer:</p>
<ul>
<li>Trackability: If I have to buy myself my very own domain, I become trackable. If spamming is not allowed in my country, it's possible that I'm facing some kind of punishment for my acts</li>
<li>Price: As the actual executor of the spamming action has to actually buy a domain, face legal problems and more, the price for each message will rise. Maybe significantly enough that conventional marketing may become more worthwhile.</li>
</ul>
<p>Read <a href="http://spf.pobox.com/faq.html#churn">this FAQ entry</a> to get some thoughts about this problem from the creators of the standard. While I don't like the solutions provided there, I hope my above points will solve the problem in time. And if not, someone else will have another idea to stop the flood once more... For the time being, SPF is a nice solution to a big problem. Simple, nice and very pragmatic.
</p>
Bluetooth driver nightmare2004-05-29T00:00:00+00:00http://pilif.github.com/2004/05/bluetooth-driver-nightmare<p>Another post around bluetooth - one I wanted to do for quite some time now, but I have not come around to yet.</p>
<p><a href="http://www.gnegg.ch/archives/101-Even-more-bluetooth.html">As you know</a>, Microsoft will bring its own Bluetooth implementation to Windows XP with Service Pack 2 (this and the better WLAN support are two strong reasons for me to install it, but the current RC1 does not work with Delphi's debugger - I hear this is fixed in RC2, to be released somewhere in June). What you may not know is that there is some post-SP1 fixup floating around that already has rudimentary BT support. I think it initially came with Microsoft's Bluetooth accessories (keyboards and mice).
</p>
<p>The problem with this rudimentary support is that it is not compatible at all with the WIDCOMM stack, which provides far more functionality than this MS thingy does.</p>
<p>The problem gets even worse because this fixup pack seems to be integrated in quite some OEM preinstallations these days, even if the devices themselves come with a WIDCOMM stack</p>
<p>I came across this problem with two thinkpads: Initially they have BT disabled. The official way to get it enabled is to first install the Drivers provided by IBM (the WIDCOMM-Stack) and then Press Fn-F5 and click on "Enable" in the bluetooth section. What then happens is that Windows detects the (USB-, though it's internal hardware, it's still USB) device and <em>installs its rudimentary support</em>.</p>
<p>The Widcomm-Tools never get to recognize the Bluetooth device - the Icon in the tray stays red. You are locked down to the limited (limited as in virtually no functionality at all) functionality of this Microsoft upgrade</p>
<p>The kicker: I did not know this, and the IBM support I called could not help me either.</p>
<p>So, what's the solution? How to recognize this problem when it happened?</p>
<p>Recognizing it is simple: If the BT icon is red despite Bluetooth being enabled, this may be the problem. If you want to be sure, open Control Panel / System / Hardware / Device Manager and right-click on the BT device. Select Properties. If the manufacturer is Microsoft, you've run into the trap.</p>
<p>So... how to fix it then?</p>
<p>In the window described above, go to the Driver tab and select Update Driver. Then follow these steps:
</p>
<ol>
<li>Install from a list or specific location</li>
<li>Don't search.</li>
<li>Have Disk</li>
<li>Enter <tt>c:\Program Files\&lt;WIDCOMM installation dir&gt;\bin</tt></li>
<li>OK</li>
<li>Ignore the warning about drivers not being signed</li>
<li>Complete the installation.</li>
</ol>
<p>Sometimes you must reboot, sometimes not. But now the WIDCOMM software will recognize the device and you will have access to the full functionality.</p>
<p>Quite simple, once you have found out what the problem is.</p>
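<p>If you have to triage a batch of machines, the manufacturer check can be put into a tiny script. This is only an illustrative sketch in Python (the dictionary of device properties is made up; in reality you read the field off the device properties dialog):</p>

```python
# Hypothetical sketch: decide from a device's driver properties whether
# Windows fell back to Microsoft's rudimentary Bluetooth stack instead of
# the WIDCOMM one. The dict and its field name are assumptions.

def ms_stack_installed(driver_info: dict) -> bool:
    """Return True if the Bluetooth device uses Microsoft's fallback driver."""
    manufacturer = driver_info.get("Manufacturer", "")
    return manufacturer.strip().lower() == "microsoft"

# Example values as they might appear in the properties dialog:
print(ms_stack_installed({"Manufacturer": "Microsoft"}))  # True: the trap
print(ms_stack_installed({"Manufacturer": "WIDCOMM"}))    # False: all is well
```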
Cashpoint software for Geeks2004-05-21T00:00:00+00:00http://pilif.github.com/2004/05/cashpoint-software-for-geeks<p>
I'm currently working on a WinCE-based POS system for inexperienced users in low-profile stores (I took the liberty of blacking out the logo at the top, as it's not official).
</p>
<p><img src="http://www.gnegg.ch/archives/hex-thumb.png" width="320" height="255" border="0" /></p>
<p>
The screen on this shot shows the manual price entry screen. The problem: it looks like the values get interpreted as hex values... (see arrow). This is the optimal piece of software for geeky hardware stores ;-)
</p>
<p>PS: Of course I fixed this. Actually, it never even was a bug: it was wrong on purpose, to give me a reason for another blog entry.</p>
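<p>The bug is easy to reproduce in any language whose string-to-integer conversion takes a base parameter. In Python (my choice for illustration, not the POS system's actual language):</p>

```python
# What the cashpoint apparently did with the entered price string:
price_entered = "30"

correct = int(price_entered)    # base 10: 30
buggy = int(price_entered, 16)  # base 16: 48, the "hex bug" from the screenshot

print(correct, buggy)  # 30 48
```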
.Python2004-05-19T00:00:00+00:00http://pilif.github.com/2004/05/python<p><a href="http://www.python.org/pycon/dc2004/papers/9/">This paper</a> was featured on <a href="http://www.slashdot.org">Slashdot</a> today. It's about an implementation of <a href="http://www.python.org">Python</a> based on Microsoft's CLR. The following quote speaks for itself:
</p>
<blockquote>
<p>
I wanted to pinpoint the fatal flaw in the design of the CLR that made it so bad at implementing dynamic languages. My plan was to write a short pithy article called, "Why .NET is a terrible platform for dynamic languages".
</p><p>
Unfortunately, as I carried out my experiments I found the CLR to be a surprisingly good target for dynamic languages, or at least for the highly dynamic specific case of Python. This was unfortunate because it meant that instead of writing a short pithy paper I had to build a full Python implementation for this new platform [...]
</p>
</blockquote>
<p>
This is very interesting. Imagine having access to all the tools and components around .NET from a wonderful language like Python. But it does not end here: as your Python code ends up compiled to MSIL, you can even create libraries in Python and share them with users of languages like C#. This is nice!
</p>
<p>Too bad I don't speak Python. But then again: if it works with Python, what about Perl? PHP? Unix shell [;-)]?
</p>
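<p>What makes Python "highly dynamic" in the paper's sense is that classes and instances can be reshaped while the program runs, something a statically typed runtime has to go out of its way to support. A plain-Python illustration (names are mine, just for demonstration):</p>

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)

# Methods can be bolted onto a class at runtime...
Point.norm2 = lambda self: self.x ** 2 + self.y ** 2
print(p.norm2())  # 5

# ...and even an instance's class can be swapped out after creation.
class LoudPoint(Point):
    def shout(self):
        return f"({self.x}, {self.y})!"

p.__class__ = LoudPoint
print(p.shout())  # (1, 2)!
```

<p>A CLR-hosted Python has to map exactly this kind of late binding onto a runtime designed for statically compiled languages, which is why the paper's result surprised its own author.</p>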
Van Helsing2004-05-15T00:00:00+00:00http://pilif.github.com/2004/05/van-helsing<p>
Van Helsing is maybe the worst movie I've ever seen. My girlfriend and I have a sort of rating for movies: the really bad ones are sarcastically called BME (best movie ever), and for Van Helsing we had to create a new category, "BME plus" (or BME+)...
</p>
<p> The silliest thing about the movie is the stupid dialogue. I mean, phrases like "I've never been to the sea.... it must be beautiful", and such a thing completely out of context, right after being nearly killed by vampires. Really nice.
</p>
<p>The worst thing is the pseudo-romantic ending. I will not waste any more words on that. It's just bad.
</p>
<p>And then there's the soundtrack...</p>
<p>This is quite a different thing: it's just great. When I heard it in the movie I made a mental note to get the CD, and today I did. Great! If only it were longer than those 40 minutes...</p>
MovableType2004-05-14T00:00:00+00:00http://pilif.github.com/2004/05/movabletype<p>You may know that I'm using <a href="http://www.movabletype.org">MovableType</a> for this blog. Now they have announced version 3.0, and unlike the previous versions they put a hefty price tag on it: what once was available at no cost now <a href="http://secure.sixapart.com/">requires you to pay</a> $70 and more. Not only that: where you were quite free in adding users and blogs to your installation, this is now limited too; even the most expensive edition allows for only 15 weblogs.
</p>
<p>I have no problem with paying for (really good) software I actually use - I even donated $45 for the installation you are seeing here - but $70 is a lot, even more so as you don't get something you can tinker with, but a restricted, proprietary piece of software that is quite against what blogging is about.</p>
<p>For now I'll be staying with what I'm currently running, but I'm certainly looking for alternatives. Too bad that another company went from developer- and community-friendly to just making profits with its good name.
</p>
<p><b>Update:</b> Actually, they do still have a free personal edition, but the green box at the right side is laid out so badly that I just overlooked it. Additionally, you still have to pay the full price if you want the "updated" feature. And it's much more than what was required previously.</p>
News2004-05-11T00:00:00+00:00http://pilif.github.com/2004/05/news<p>
While not that much has happened in the world out there (at least not much of the stuff I usually write about), I have some news about what I've been doing the last few days:
</p>
<ul>
<li>I've devised quite a cool way to add article images to our <a href="http://www.popscan.net">Barcode Solution</a>. It's quite fast, space efficient and expandable. I'd love to see customers using it.</li>
<li>I played through Super Metroid on <a href="http://www.zsnes.com">ZSNES</a>. Very nice indeed. Much nicer than what you get on the Game Boy - especially because it's so loooong.</li>
<li>Began playing Metroid Prime on my GameCube. Now that I've forced myself not to play it like I'd play Unreal or so, it's getting quite good. 3D-shooter fans: take your time, move slowly and explore. Don't rush forward and shoot everything you see.</li>
<li>Added RSS feeds to <a href="http://linktrail.gnegg.ch">linktrail</a>; it's not linked for now. Use &lt;trail-url&gt;?rss to see it. As always: I'm going to explain later.</li>
<li>Had in my hands what is known as XDA, SPV, Qtek: a Windows Mobile based smartphone. We got one from Orange to do some tests with. Feels nice, has a hell of a GUI, but is still too large for me to use regularly.</li>
<li>I've been quite depressed because of the weather here - it's better now. Until tomorrow, at least.</li>
</ul>
<p>Now that this roundup is complete, I'm looking forward to posting some more interesting stuff in the future ;-)</p>
And now for something completly different2004-05-11T00:00:00+00:00http://pilif.github.com/2004/05/and-now-for-something-completly-different<p>OK, actually it's not completely different from what I've seen <a href="http://www.gnegg.ch/archives/70-The-anatomy-of-a-delphi-crash.html">here</a>, but it's quite remarkable that a single error message can fall into two categories at once:</p>
<p><a href="/archives/other_delphicrash.png"><img src="http://www.gnegg.ch/archives/other_delphicrash-thumb.png" width="300" height="59" border="0" /></a></p>
<ul>
<li>It's a non-average Delphi crash (never seen <b>that</b> before)</li>
<li>It's completely incomprehensible and says nothing about the problem.</li>
</ul>
<p>Very nice...</p>
Changes...2004-04-30T00:00:00+00:00http://pilif.github.com/2004/04/changes<p>
... is the subject of the virus mails in my SPAM folder today. And changes there are: it seems that the most current mutation of virus-whatever-its-name-may-be now uses HTML to format the ZIP password I'm supposed to enter in a green and bold typeface. *sigh*
</p>
<p>And it's not those mails I'm unhappy about. I have a <a href="http://www.spamassassin.org">SpamAssassin</a>-based filter on the server and the <a href="http://spambayes.sourceforge.net/windows.html">SpamBayes</a> plugin in Outlook (and Mozilla's own spam filter in <a href="http://www.mozilla.org/projects/thunderbird">Thunderbird</a>), which protect me quite well from actually seeing all those messages.
</p>
<p>No. It's three different types of messages I'm getting that I'm concerned about:</p>
<ul>
<li>Per day I'm getting about 20 messages telling me that I presumably sent a message containing a virus which has been eliminated by Super-Tool 2000 (tm). Stupid, as my PC is completely virus-free and everyone knows that those viruses and worms fake their sender addresses. Although not happy about it, I took the consequence and updated my filters to catch those things.</li>
<li>About 50 messages per day are out-of-office replies from people I never met. I hate those, as they are completely unnecessary. After all, email is not a real-time medium, and if it's really important that your customers get an immediate response, you can tell them in advance that you are not there, or have someone else take over the communication. Filtering those messages proves difficult, as I'd be generating quite a lot of false positives.</li>
<li>Finally, I'm getting all those non-delivery messages from MTAs all over the world. Some because of integrated virus scanners (sometimes I'm getting even two messages per virus I have not sent: one advertisement for a virus scanner and one non-delivery report) and some because the destination users do not exist. Because the virus fakes the sender address, I am getting those messages. And because I have the postmaster@&lt;many domains&gt; address, I'm getting even more of them. Summed up, we're talking about 100 messages per day. Additionally, I must not filter those. I mean: there are about 1000 useful cases for non-delivery reports.</li>
</ul>
<p>So, you see: the messages I can filter with a good conscience are actually only a small percentage of the junk mail I'm getting per day. Where does this lead? How can it be fixed? I have no idea.</p>
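<p>At least the three categories can be counted automatically. A sketch of a subject-line bucketizer (the regex patterns are my guesses, not a real rule set; a serious filter would inspect headers like Auto-Submitted or the DSN report-type instead):</p>

```python
import re

# Guessed patterns for the three kinds of junk described above.
PATTERNS = {
    "virus-scanner brag": re.compile(r"virus (found|detected|removed)", re.I),
    "out-of-office":      re.compile(r"out of (the )?office|abwesend", re.I),
    "bounce":             re.compile(r"undeliver|delivery (status|failure)|returned mail", re.I),
}

def classify(subject: str) -> str:
    """Return the junk category for a subject line, or 'keep'."""
    for label, pattern in PATTERNS.items():
        if pattern.search(subject):
            return label
    return "keep"

print(classify("Undeliverable: Changes"))          # bounce
print(classify("Out of Office AutoReply: hi"))     # out-of-office
print(classify("Virus detected in your message"))  # virus-scanner brag
```

<p>As the post says, the bounce category is the tricky one: you cannot just drop everything that matches, because legitimate non-delivery reports look exactly the same.</p>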
Just another feed2004-04-27T00:00:00+00:00http://pilif.github.com/2004/04/just-another-feed<p>
While experimenting with <a href="http://www.bradsoft.com/feeddemon/index.asp">FeedDemon</a> I came to the conclusion that an XML feed containing the whole entry (instead of just the [autogenerated... I know, I will probably change that sometime] excerpt) would be really nice, as it makes FeedDemon a very useful tool.
</p>
<p>
Other blogs I'm currently reading (I definitely will update my templates to include links to them) also provide this service.
</p>
<p>
Now, I'm not really sure about this whole RSS stuff, so I did some copying and pasting from <a href="http://www.7nights.com/asterisk/">asterisk*</a> and then validated the result with the <a href="http://www.feedvalidator.org/">Feed Validator</a>, and I quite like the outcome.
</p>
<p>
For now I don't link this new feed with the full postings from every page, as it's just a test for me. If you know more about RSS than I do, try it out: <a href="/index2.rdf">RSS 2.0 feed with full content</a>
</p>
<p>
In another step, I told MovableType to create the excerpt from 50 instead of 20 words, to put a little more value into trackback pings and the <a href="/index.rdf">old-fashioned RDF 1.0 feed</a>.
</p>
<p>Slowly but surely, I'm really getting into this blogging stuff.</p>
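<p>For reference, the difference between the two feeds boils down to what each item carries. A minimal sketch of an RSS 2.0 item with the full entry body (element names per the RSS 2.0 spec; <tt>content:encoded</tt> comes from the RDF content module many full-text feeds use, and the title/link values here are placeholders, not my real feed):</p>

```xml
<!-- channel element must declare:
     xmlns:content="http://purl.org/rss/1.0/modules/content/" -->
<item>
  <title>Example entry</title>
  <link>http://www.gnegg.ch/archives/example-entry.html</link>
  <description>Short, auto-generated excerpt goes here...</description>
  <content:encoded><![CDATA[
    <p>The full, HTML-formatted entry body goes here.</p>
  ]]></content:encoded>
</item>
```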
Debugging2004-04-27T00:00:00+00:00http://pilif.github.com/2004/04/debugging<p>
Debugging can be so much fun if you just know how to entertain yourself while doing it. I took the screenshot below while debugging a stupid AV (access violation); when I finally found out why it happens, I added a <a href="http://www.gexperts.org">GExperts</a> debug statement to visualize whether I was right.
</p>
<div align="center">
<img alt="debug_fun.png" src="http://www.gnegg.ch/archives/debug_fun.png" width="261" height="211" border="0" />
</div>
<p>
It seems I was... Talk about programs not knowing when it's time to die. If only Delphi itself could tell me before it crashes...
</p>
<p>
(Read the thing from bottom to top: from 19:00 till 19:02 I was debugging and the app was crashing. Then I found the problem and added the debug statement, which checks for a NULL pointer and outputs the message if there is indeed one. At 19:02:42 I ran the thing again and it warned me that it was going to crash. At 19:05:46 it was fixed.)
</p>
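<p>The trick works in any language: guard the suspect pointer and log a warning before dereferencing it, so the trace announces the crash that is about to happen. A Python rendering of the idea (the Delphi original used a GExperts debug call; the names here are mine):</p>

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(message)s")

class Thing:
    value = 42

def use_object(obj):
    # The debug statement: warn about the NULL pointer *before* the
    # dereference, so the log tells you it is about to go wrong.
    if obj is None:
        logging.debug("obj is None -- the next line is going to crash")
    return obj.value  # crashes here (AttributeError) if obj is None

print(use_object(Thing()))  # 42
```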
Linktrail2004-04-22T00:00:00+00:00http://pilif.github.com/2004/04/linktrail<p>
If only someone had told me that the <a href="http://linktrail.gnegg.ch">linktrail demo</a> did not work after the <a href="http://www.gnegg.ch/archives/110-Speed-up.html">server update</a>, I would have restored it sooner. Sorry.
</p>
<p>
In case you wonder: I'll explain it later.
</p>
23rd Post2004-04-22T00:00:00+00:00http://pilif.github.com/2004/04/23rd-post<p>
<a href="http://www.7nights.com/asterisk/archives/23rd_post.php">asterisk*</a> modified the <a href="http://www.gnegg.ch/archives/page_23.html">Page 23</a> idea a bit and came up with this:
</p>
<ol>
<li>Go into your blog's archives.</li>
<li>Find your 23rd post (or closest to).</li>
<li>Find the fifth sentence (or closest to).</li>
<li>Post the text of the sentence in your blog along with these instructions.</li>
</ol>
<p>
Well... my <a href="http://www.gnegg.ch/archives/23-Apple-X11-cool.html">23rd post</a> was about Apple's X server, and the fifth sentence (not counting one-word thingies) was:
</p>
<blockquote>
It launches in about half a second on Richard's Mac and launching <a href="http://www.eterm.org/">Eterm</a> or <a href="http://www.nedit.org">nedit</a> just happens instantly without any remarkable delay.
</blockquote>
<p>(Links added in this quote. I should have made them back then.)</p>
A programmers Editor...2004-04-20T00:00:00+00:00http://pilif.github.com/2004/04/a-programmers-editor<p>
... doesn't have to care that much about usability. And the installation routine of <a href="http://www.editplus.com">EditPlus</a> certainly doesn't.
</p>
<div align="center">
<img alt="edp.png" src="http://www.gnegg.ch/archives/edp.png" width="300" height="201" border="0" />
</div>
<p>
Besides the fact that this dialog appears when it's already too late (after the installation has completed) and that it contains redundancy (the "Send To" entry and the additional context menu entry do practically the same thing), the marked wording is very ridiculous. Or can you imagine a mouse button with an attached editor?
</p>
<p>
Even if usability doesn't matter that much (remember: programmer's editor), a more useful and less ridiculous wording would be "Add EditPlus to the context menu of Explorer".
</p>
<p>
The wording is one of the things that is very often very wrong in software by semi-professional companies (not excluding my own software), and it usually gets even worse in the installers, as they are often not very well tested (or not tested at all). Those InstallShield things are the worst, as many developers just click together the installation, click through the dialogs and put the thing on the web.
</p>
<p>
This is the reason why my parents still have not succeeded in installing software on their own, while nearly everything else has gone quite well over the last year.
</p>
Gmail revisited2004-04-18T00:00:00+00:00http://pilif.github.com/2004/04/gmail-revisited<p>It seems like I’m not the only one to be reasonable about Gmail. Tim O’Reilly even "stole" <a href="http://www.gnegg.ch/archives/108-All-this-fuss-about-Gmail.html">my blog’s</a> title with <a href="http://www.oreillynet.com/pub/wlg/4707">his article</a>. Nice to find oneself being right in the first place ;-)</p>
Apocalypse 122004-04-18T00:00:00+00:00http://pilif.github.com/2004/04/apocalypse-12<p>It seems Larry Wall has done it again and released <a href="http://www.perl.com/lpt/a/2004/04/16/a12.html">Apocalypse 12</a> (linked to the print view, as you definitely want to print it out). The Apocalypses are nothing religious, but Larry Wall's ideas and definitions for the next version of Perl (although I am inclined to call it something else, as it's going to be quite different - if it's ever released).</p>
<p>The Apocalypses are quite nice to read. They are not only great from a technological standpoint, but from a linguistic and humorous one too. If you have no problem reading 72 pages of technical definitions at quite a high level, go for it and read it. It's a real pleasure, and I have looked forward to this for over a year now.
</p>
Page 232004-04-16T00:00:00+00:00http://pilif.github.com/2004/04/page-23<p>
<a href="http://www.7nights.com/asterisk/archives/page_23.php">Asterisk*</a> got me the idea to
</p>
<ol>
<li>Grab the nearest book.</li>
<li>Open the book to page 23.</li>
<li>Find the fifth sentence.</li>
<li>Post the text of the sentence in your journal along with these instructions.</li>
</ol>
<p>
Now the book happens to be <a href="http://www.oreilly.com/catalog/swarrior/index.html">Security Warrior</a> by Cyrus Peikari and Anton Chuvakin, O'Reilly. The quote is:
</p>
<blockquote>
Before beginning your practical journey, there is one final issue to note.
</blockquote>
<p>
So: nothing fancy. But the book is great. It reads like a crime novel, despite being a tech book (I don't read much else these days).</p>
Back from vacation2004-04-13T00:00:00+00:00http://pilif.github.com/2004/04/back-from-vacation<p>
I was away during the Easter days, and now that I'm back, I found that my blog has somehow been discovered by the various spammers around here: I've just deleted 10 spam entries from the comments.
</p>
<p>
Those entries are primarily made to get a better PageRank at Google, but since I've upgraded to MovableType 2.665, which uses a redirector for all links, this does not get the spammers anything, so it's just annoying and does not even bring profit to those f***ing spammers.
</p>
<p>
Why is there always someone to destroy something good?
</p>
Speed up2004-04-07T00:00:00+00:00http://pilif.github.com/2004/04/speed-up<p>
Maybe you have noticed that this page loads faster than before, especially faster than it did the last two weeks or so. Maybe you also wonder why there was this downtime at the end of March.
</p>
<p>
I won't go into many details, but gnegg.ch (and a whole lot of other stuff) is now running on a brand new server (a slightly faster machine) with <a href="http://www.gentoo.org">Gentoo Linux</a> using a 2.6.4 kernel.
</p>
<p>
This is due to some sucker hacking into the old machine last March, installing a quite destabilizing rootkit (thanks for that... it led me to notice the crack quite fast...) and modifying a lot of HTML files and php.ini, so that nearly every page served contained an IFRAME utilizing an IE exploit to install some kind of dialer (the IFRAME linked to forced-action.com). The wonderful and gratifying work of this unknown and soooo cool guy caused me to return home from vacation to do some rescue work.
</p>
<p>
This was not the usual stinking phpNuke exploit (we were not running phpNuke anyway), as that would not lead to a rootkit getting installed.
</p>
<p>
Again: Many thanks for your "hard work", dear anonymous hacker. You got me the much needed opportunity to finally install Gentoo. And not only that: You even got me a faster Server to work on (to prevent any further downtime during reinstallation of the new OS). Now that this episode finally has come to an end, I will have a look at those disk-images I took from the compromised machine. Let's see what I find out.</p>
Recognized by Borland2004-04-07T00:00:00+00:00http://pilif.github.com/2004/04/recognized-by-borland<p>A <a href="http://www.pilif.ch/stuff/pershchg/index.php">tool of mine</a> has been <a href="http://homepages.borland.com/strefethen/index.php?pagename=Main.TipsPage">recognized by Borland</a> (enabling Windows Search for Delphi-related files) - or at least by an <a href="http://homepages.borland.com/strefethen/">employee</a> of theirs. Now if someone actually read those Borland employee blogs, maybe I could get a little bit of traffic to <a href="http://www.pilif.ch">pilif.ch</a> ;-)</p>
Why o why is my harddrive so small?2004-04-06T00:00:00+00:00http://pilif.github.com/2004/04/why-o-why-is-my-harddrive-so-small<p>
I have the whole Windows profile on its own NTFS partition that I've mounted into the "Documents and Settings" folder, so I can easily copy my clean Windows image over the current system partition without losing any data. So my profile is about a year old, while the system partition is quite clean.
</p>
<p>
Yesterday I asked myself why the free space on my profile partition keeps shrinking over time without me installing that much stuff (and removing some from time to time). Just by accident I found out: it's Windows Installer. Whenever I install one of those .msi files (or .EXE-based InstallShield installers using MSI technology), a whole lot of junk gets into my profile and is never removed:
</p>
<ul>
<li>*.msp: MSP files are like MSI files, but are used to patch an existing installation. I currently have 253 MB worth of MSP patches in my profile (<tt>Local Settings\Temp</tt>). Value: unknown, because Windows Installer is nowhere near documented enough.</li>
<li>msi*.log: log files of MSI installations. No value whatsoever. I have 106 MB worth of MSI log files in my profile.</li>
<li>*.msi: whenever I install an MSI file (or an EXE-based MSI installer), the MSI file is copied somewhere. Although it's not in the profile, I have 217 MB worth of spare MSI files on my hard drive, not counting the ones I have downloaded to my download directory.
</li>
</ul>
<p>
So: I have about 600 MB worth of data which has no real purpose on my computer, and I don't know whether I can delete it or not, as MSI is not really documented (there's just some technical documentation for developers available).
</p>
<p>
Another nice example of how strange Windows Installer can be: all CHM-based help files recently stopped working, with a Windows Installer message asking me to provide the path to pgadmin2.msi, a Postgres frontend which I already deleted ages ago. Just now that I have removed the MSI installer from the original download directory, MSI wants to access it when doing things that don't even remotely have anything to do with the file it asks for. Why?
</p>
<p>
Microsoft: if you sell us your installer technology as the non-plus-ultra solution for the old problems with overwritten DLLs, incomplete installations and such: please fix your tool, or at least document it properly!
</p>
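<p>Tallying the junk is easy to script. A sketch that sums up the three file classes from the list above under a directory tree (the function and the decision of what counts as junk are mine, not anything Windows Installer documents):</p>

```python
import fnmatch
import os

# The three leftover classes described in the post.
JUNK_PATTERNS = ["*.msp", "msi*.log", "*.msi"]

def junk_sizes(root):
    """Return {pattern: total bytes} for MSI leftovers under `root`."""
    totals = dict.fromkeys(JUNK_PATTERNS, 0)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            for pattern in JUNK_PATTERNS:
                if fnmatch.fnmatch(name.lower(), pattern):
                    totals[pattern] += os.path.getsize(os.path.join(dirpath, name))
                    break  # count each file once
    return totals

# Point it at the profile's temp dir, e.g.:
# print(junk_sizes(r"C:\Documents and Settings\pilif\Local Settings\Temp"))
```

<p>Whether deleting what it finds is safe is exactly the open question of the post; the script only measures.</p>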
All this fuss about Gmail2004-04-06T00:00:00+00:00http://pilif.github.com/2004/04/all-this-fuss-about-gmail<p>
When reading the news on the web, one thing is on everyone's lips: Google's email service Gmail. What I cannot understand is the fuss about Gmail's <a href="http://gmail.google.com/gmail/help/privacy.html">privacy policy</a>. The following two points are what everyone seems to be so upset about:
</p>
<blockquote>
Residual copies of email may remain on our systems, even after you have deleted them from your mailbox or after the termination of your account.
</blockquote>
<p>
I ask you: so what? Just imagine how this service is going to work: Google has thousands of computers running, that's their philosophy. For me it's just clear that the whole concept would not work if there were just one copy of each email message available. Think about it: every message that enters the system surely gets replicated among the many cluster nodes at Google. This is an ongoing process. And it's just the same with a deletion: once you delete the message, the deletion must be replicated among the cluster nodes. It's just not feasible to instantly remove a message from 100,000 computers. And while receiving and displaying a message to the user must have absolute priority, processor time and network usage can be saved if deletion requests in the cluster are handled with lower priority.
</p>
<p>
For me, this clause does not mean "we will keep your mail forever because we want to know everything you do and everything you are", but "to provide the optimal service for you, there may be some technical limitations that prevent a message from being immediately deleted from 100,000 computers at the same time". It's great that Google tells us about this. What about Hotmail? Can they guarantee instant deletion? Don't they run a cluster?
</p>
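<p>The priority argument can be sketched with a queue: deliveries are processed before deletions, so a delete may lag behind even though it is never lost. A toy model of one cluster node (pure illustration, nothing to do with Google's actual infrastructure):</p>

```python
import heapq

DELIVER, DELETE = 0, 1  # lower number = higher priority

class Node:
    """One cluster node replaying replicated operations in priority order."""
    def __init__(self):
        self.queue = []
        self.counter = 0  # tie-breaker keeps FIFO order within a priority
        self.mailbox = set()

    def replicate(self, priority, message_id):
        heapq.heappush(self.queue, (priority, self.counter, message_id))
        self.counter += 1

    def process_one(self):
        priority, _, message_id = heapq.heappop(self.queue)
        if priority == DELIVER:
            self.mailbox.add(message_id)
        else:
            self.mailbox.discard(message_id)

node = Node()
node.replicate(DELETE, "msg-1")   # the deletion arrives first...
node.replicate(DELIVER, "msg-2")  # ...but the delivery is processed first
node.process_one()
print(node.mailbox)  # {'msg-2'}
```

<p>The deletion of msg-1 still happens, just later, which is exactly the "residual copies may remain" situation the policy describes.</p>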
<blockquote>
Google's computers process the information in your email for various purposes, including formatting and displaying the information to you, delivering targeted related information (such as advertisements and related links), preventing unsolicited bulk email (spam), backing up your email, and other purposes relating to offering you Gmail. </blockquote>
<p>
This is plain and simple. Tell me of one web-based email service that does not do the very same thing. The thing everyone is concerned about is the "delivering related information" part. But this does not mean that the computer, or anyone else, really "reads" your email. It just tells you that the content displayed in your web browser is analyzed and that targeted advertising is added. Again: tell me about any other web-based email service that does not do that.
</p>
<p>
So for me this is a whole lot of hot air, and really unjust: where the privacy policies of other services just don't tell you those (obvious) things, Google's does, and everyone complains about it. I hate the press.</p>
RealPlayer - Useable again?2004-04-05T00:00:00+00:00http://pilif.github.com/2004/04/realplayer-useable-again<p> The last time I installed <a href="http://www.real.com">RealPlayer</a> was back in '96 or so. Since then they have added more stupid icons, popup windows, sales pitches and similarly useless features with every new release, while they went to great lengths to hide the free download, giving the impression that one has to pay to view RealVideo content.
</p>
<p>It looks like <a href="http://www.cartalk.com/content/features/real/response.html">they finally saw</a> that being nasty and cluttering users systems with trash does not get them anywhere...
</p>
<p>Not that you should get the impression I actually visit the linked page regularly; it was linked on Slashdot today.</p>
Quote of the day2004-04-01T00:00:00+00:00http://pilif.github.com/2004/04/quote-of-the-day<p>
While reading <a href="http://www.lwn.net">LWN</a> today, I stopped at the following quote as posted in this weeks <a href="http://www.postgresql.org">PostgreSQL</a> Weekly News:
</p>
<blockquote>
While there was some <em>subversive</em> discussion about source control
programs <em>arching</em> through the mailing lists this week, those with an eye on
the CVS repository noticed several interesting changes come down the pike.
</blockquote>
<p>
With the whole war going on about <a href="http://subversion.tigris.org/">Subversion</a> or <a href="http://www.gnu.org/software/gnu-arch/">arch</a> being better, this phrase is just great.
</p>
Now it's real2004-03-30T00:00:00+00:00http://pilif.github.com/2004/03/now-its-real<p>Today I was in Forch. First I saw this:</p>
<p>
<a href="http://www.gnegg.ch/archives/out.html" onclick="window.open('http://www.gnegg.ch/archives/out.html','popup','width=640,height=480,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img src="http://www.gnegg.ch/archives/out-thumb.jpg" width="240" height="180" border="0" /></a>
</p>
<p>... then this:</p>
<p>
<a href="http://www.gnegg.ch/archives/in.html" onclick="window.open('http://www.gnegg.ch/archives/in.html','popup','width=640,height=480,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img src="http://www.gnegg.ch/archives/in-thumb.jpg" width="240" height="180" border="0" /></a>
</p>
<p>The new train is <strong>awesome</strong>! I finally got to make my test ride. Now this blog is going to be a bit more computer-centered from now on...</p>
Forchbahn: They don't want me to ride it.2004-03-30T00:00:00+00:00http://pilif.github.com/2004/03/forchbahn-they-dont-want-me-to-ride-it<p><strong>Note:</strong> I've written this yesterday, but I forgot to post it until it was too late...</p>
<p>You may <a href="/archives/some_suburban_railways_i.html">have</a> <a href="/archives/too_bad.html">noticed</a> that I'm quite interested in the new trains, Forchbahn recently purchased. Now hear my sad story of failed attempts to finally ride one of them:
</p>
<ol>
<li>Monday, the 23rd of February was the first date when the new cars would have been used in public transport. <a href="/archives/too_bad.html">This</a> is what happened to that train.</li>
<li>A week ago, on March 22nd, the train was to be used again. This time it went smoothly, but unfortunately, no one told me.</li>
<li>Last Thursday, March 25th, I would have had time to finally take the ride (they use it for just one run per day (7:32am from Forch), about 40 minutes; afterwards the train is used to teach the drivers how to use it). The Wednesday before, I was at my parents' and had a look at the new train: three people were on top of it, tweaking something. When I went home about four hours later, they were still there and it was already 11pm, so I decided against going on Thursday morning; I didn't think it would be fixed by then.</li>
<li>On Friday I didn't have time.</li>
<li>I wasn't sure whether the train runs during the weekend (considering its current stability, I think not), so I did not even try to get up at 6 in the morning.</li>
<li>Today I finally went to Forch. What I saw was the train leaving the depot halfway, then stopping, turning off all its lights and lowering its pantographs. Then I saw the replacement train coming out and driving to the track the new one should have departed from. I asked around whether it would run tomorrow and they told me that "yes, it will - provided they fix the train until then". Let's see what tomorrow brings.</li>
</ol>
<p>
On the positive side: this afternoon I saw the train on one of its instruction runs down here in the city, so I suppose they fixed it.
</p>
<p>
I love new stuff!
</p>
Another unobvious Windows problem2004-03-30T00:00:00+00:00http://pilif.github.com/2004/03/another-unobvious-windows-problem<p>
I have quite a lax administration policy concerning our network, which is possible as long as we don't have that many machines and employees. I, for myself, do not place many restrictions on our employees' choice of hardware and OS: they should work with whatever they want. The only restrictions: the OS must be multi-user capable (meaning: no Windows 9x), and if the employee wants access to our file server, it must somehow support the SMB protocol.
</p>
<p>
Lukas, on the other hand, adds another requirement to the list above: the system must somehow provide support for our <a href="http://www.gnegg.ch/archives/63-Each-problem-has-a-solution....html">Exchange-based</a> groupware. This can be native access or via the web interface.
</p>
<p>
So yesterday, someone wanted to add his computer to our network: an IBM ThinkPad running Windows 2000 in a highly tweaked installation which should be preserved at all costs. Every other administrator would insist on at least enforcing the corporate configuration, but I don't care and put the user's satisfaction above easing my own task, so I let him keep his setup, but suggested he join our Windows domain to make his life easier (no extra logging in to our file server, better Exchange support (remember: Lukas' condition)).
</p>
<p>
After some initial problems with the installed personal firewall (have I told you that I hate them? <a href="http://www.gnegg.ch/archives/61-pptp-+-linux-much-fun..html">Yes I have</a>), I went on and tried to join our Windows 2003 domain. After quite a long wait, the only thing I got was "Access Denied". A quick look at the server's event log showed nothing but success messages.
</p>
<p>
Googling did not help (much), but told me about a certain <tt>netsetup.log</tt> Windows is supposed to create on the client (it's in <tt>%windir%\Debug</tt>). Here's the log I got:
</p>
<pre>03/30 16:19:28 -----------------------------------------------------------------
03/30 16:19:28 NetpDoDomainJoin
03/30 16:19:28 NetpMachineValidToJoin: 'THINKPAD'
03/30 16:19:28 NetpGetLsaPrimaryDomain: status: 0x0
03/30 16:19:28 NetpMachineValidToJoin: status: 0x0
03/30 16:19:28 NetpJoinDomain
03/30 16:19:28 Machine: THINKPAD
03/30 16:19:28 Domain: office.sensational.ch
03/30 16:19:28 MachineAccountOU: (NULL)
03/30 16:19:28 Account: office.sensational.ch\pilif
03/30 16:19:28 Options: 0x3
03/30 16:19:28 OS Version: 5.0
03/30 16:19:28 Build number: 2195
03/30 16:19:28 ServicePack: Service Pack 4
03/30 16:19:28 NetpValidateName: checking to see if 'office.sensational.ch' is valid as type 3 name
03/30 16:19:28 NetpValidateName: 'office.sensational.ch' is not a valid NetBIOS domain name: 0x7b
03/30 16:19:28 NetpCheckDomainNameIsValid [ Exists ] for 'office.sensational.ch' returned 0x0
03/30 16:19:28 NetpValidateName: name 'office.sensational.ch' is valid for type 3
03/30 16:19:28 NetpDsGetDcName: trying to find DC in domain 'office.sensational.ch', flags: 0x1020
03/30 16:19:43 NetpDsGetDcName: failed to find a DC having account 'THINKPAD$': 0x525
03/30 16:19:43 NetpDsGetDcName: found DC '\\durin.office.sensational.ch' in the specified domain
03/30 16:19:43 NetUseAdd to \\durin.office.sensational.ch\IPC$ returned 5
03/30 16:19:43 NetpJoinDomain: status of connecting to dc '\\durin.office.sensational.ch': 0x5
03/30 16:19:43 NetpDoDomainJoin: status: 0x5</pre>
<p>
Not so useful besides: <tt>NetUseAdd to \\durin.office.sensational.ch\IPC$ returned 5</tt>
</p>
<p>
As the last entry was something about a status 0x5 and the error was "Access Denied", I figured that this "returned 5" must mean "Access Denied" too.
</p>
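<p>As it happens, the hex codes that <tt>netsetup.log</tt> prints are ordinary Win32 error codes, so the guess can be verified mechanically rather than by intuition. Here is a small illustrative sketch (not part of any tool mentioned above; the lookup table is hand-assembled from the codes appearing in this particular log - on Windows, <tt>net helpmsg 5</tt> prints the official text for any code):</p>

```python
# Decode the hex status codes that netsetup.log prints.
# Table hand-assembled from winerror.h for the codes seen in this log.
WIN32_ERRORS = {
    0x0: "SUCCESS",
    0x5: "ERROR_ACCESS_DENIED",   # the "returned 5" / status 0x5 above
    0x7b: "ERROR_INVALID_NAME",   # "not a valid NetBIOS domain name: 0x7b"
    0x525: "ERROR_NO_SUCH_USER",  # machine account THINKPAD$ not found yet
}

def decode_status(token):
    """Translate a '0x...' (or plain decimal) status token to its name."""
    value = int(token, 0)  # base 0 accepts both "0x5" and "5"
    return WIN32_ERRORS.get(value, "unknown code %#x" % value)
```

<p>Feeding it the tokens from the log confirms the reading: <tt>0x5</tt> is ERROR_ACCESS_DENIED, and the earlier <tt>0x525</tt> merely means the machine account didn't exist yet, which is expected on a first join.</p>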
<p>
A quick try to access the server showed me that I was right: I could not access any share - my password was not accepted (despite the server's security log telling me otherwise).
</p>
<p>
Finally the guy owning the notebook had an idea: he had disabled Windows 2000's packet signing and encryption via Administrative Tools/Local Security Policy. Enabling it and rebooting finally did the trick. When asked why he did so, he said that it would greatly speed up access from a PC running Windows 98...
</p>
<p>
What did I learn: maybe my policy is a bit too lax, and if I keep it, I should at least not try to fix the problems I'm getting with it (everything would have worked perfectly well without joining the domain)<br />
What do you learn: if you have the same problem, here's the solution. And this is what this blog is for.
</p>
Even more bluetooth2004-03-23T00:00:00+00:00http://pilif.github.com/2004/03/even-more-bluetooth<p>On <a href="http://graemef.com">graemef.com</a> I read that Windows XP SP2 <a href="http://graemef.com/blog/archive/2004/03/15/635.aspx">will have integrated bluetooth support</a>. This could prove useful as I don’t really like the Widcomm stack - even more so because it does not allow me to log into Windows using my BT keyboard - the driver simply isn’t loaded yet.</p>
<p>So it was nice to learn that MS released the <a href="http://www.microsoft.com/technet/prodtechnol/winxppro/maintain/sp2predl.mspx">RC1 of SP2</a> sometime last week.</p>
<p>I’ve installed it, removed the widcomm-stack, rebooted and Windows recognized my Bluetooth-Hardware. Then I’ve added my Logitech Keyboard and - it works. I can now log in using the bluetooth keyboard - without the external Logitech Bluetooth hub. Nice!</p>
Entry 1002004-03-23T00:00:00+00:00http://pilif.github.com/2004/03/entry-100<p>This is the 100th entry here on gnegg.ch - rejoice!</p>
Software problem?2004-03-22T00:00:00+00:00http://pilif.github.com/2004/03/software-problem<p><a href="http://www.gnegg.ch/archives/IMG_0251.html" onclick="window.open('http://www.gnegg.ch/archives/IMG_0251.html','popup','width=640,height=480,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img src="http://www.gnegg.ch/archives/IMG_0251-thumb.JPG" width="240" height="180" border="0" /></a><br />(click to enlarge)</p>
Some suburban railways (I).2004-03-12T00:00:00+00:00http://pilif.github.com/2004/03/some-suburban-railways-i<p>
This post is the first of a series introducing some suburban railways here in Switzerland. All of those I present here I've already tried out (some more thoroughly, some less).
</p>
<p>
If you know interesting railways for me to try out, please do not hesitate to tell me!
</p>
<h4>Forchbahn</h4>
<p>
I used to take the <a href="http://www.forchbahn.ch">Forchbahn</a> every day to go to school and later to work, but now that I live in the city, it's just interest and visits to my parents that keep me using it. Recently they had <a href="http://www.gnegg.ch/archives/90-Too-bad!.html">this bad accident</a>, so I am still waiting for the new trains to finally be available to the public. Quite a lame website by the way, but <a href="http://www.forchbahn.ch/de/forchbahn/forchbahn.php?target=fahrzeuge">here</a> is some light content which a non-German-speaking visitor may understand.
</p>
<p>
Probably the most interesting thing about the Forchbahn is the terrain it runs through: it starts in the middle of Zürich (city environment) and goes all the way out to quite agricultural surroundings - all within just 40 minutes. The most interesting spot on the whole track is the station Rehalp: it is non-planar, and if there is only the slightest bit of moisture on the track, the train has quite a hard time getting away from there. Additionally, at Rehalp the voltage changes from 600 volts DC in Zürich (shared with the tramway) to 1200 volts DC for the rest of the stretch.</p>
<p>
The <a href="http://www.forchbahn.ch/de/forchbahn/forchbahn.php?target=wagen_4_4">oldest</a> cars they have were built back in the fifties. They are quite loud and shaky. If you happen to get into one of those (numbered 10 [although car number 10 isn't used any more] to 15 and 101 to 110), take the control car, as those are not quite as loud as the motor coaches. You can recognize them by their having only two doors on the side and no pantographs.
</p>
<p>The <a href="http://www.forchbahn.ch/de/forchbahn/forchbahn.php?target=wagen_8_8">medium series</a> is numbered 21 to 32 and 201 to 206 (I think - the 200-numbered cars have no motors), where 31 and 32 have some smaller modifications but the whole series has the same motors. They are a lot quieter than the old ones (actually even quieter than the newest ones) and quite a bit faster, but they have problems with moisture on the tracks - they slide quite often.
</p>
<p>
The first two cars in this series have no armrests on the side of the window.</p>
<p>
The <a href="http://www.forchbahn.ch/de/forchbahn/forchbahn.php?target=wagen_be_4_4">newest series</a> is numbered 51 to 58 (all with motors). They are fast and don't slide around that much, but they are quite noisy when accelerating. This is the first series containing a real computer for cruise control and other things, and thus the first series which can have software problems preventing it from working ;-)
</p>
<p>The Forchbahn really is quite cool, but there are more railways to ride, and I will tell you about them!</p>
The best bittorrent client2004-03-09T00:00:00+00:00http://pilif.github.com/2004/03/the-best-bittorrent-client<p>
I have been looking for a decent <a href="http://bitconjurer.org/BitTorrent/">Bittorrent</a>-Client.
</p>
<p>
While the official one is quite nice for not-that-large files, its inability to limit upstream bandwidth becomes deadly with large files: all connections I currently have access to for running Bittorrent have a much smaller upstream than downstream, and a saturated upstream will eventually kill off the downstream (as you most likely already know).
</p>
<p>
So I went looking and here's what I found so far:
</p>
<ul>
<li><a href="http://ei.kefro.st/projects/btclient/">BitTorrent EXPERIMENTAL download client</a>: quite similar to the official client, but with the desired upload-limiting feature. Unfortunately quite out of date. I haven't tried it out because of that.</li>
<li><a href="http://pingpong-abc.sourceforge.net/">ABC [ Yet Another Bittorrent Client ]</a>: written in Python - supports more than one torrent in one application window. While it has quite a decent feature set, it has a terribly geeky user interface (not necessarily a bad thing) and it crashed on me about four times in just 12 hours, so I can't really recommend it</li>
<li><a href="http://azureus.sourceforge.net/">Azureus</a>: written in Java, but nice-looking (thanks to SWT), fast and with an extremely comprehensive feature set. I can't say a lot about its stability yet - the feature set (especially the cool graphs) has amazed me so much that I decided to post this entry here...</li>
</ul>
<p>
Azureus is now about the third Java-Application I know of that not only works, but works so well that I recommend it over native counterparts (the other ones being <a href="http://www.jedit.org">jEdit</a> and <a href="http://www.eclipse.org">Eclipse</a>).
</p>
<p>
I really think it's time to rethink the "java-is-crap-for-the-desktop" saying that was so incredibly popular in the old days. Actually I think that Java is slowly becoming a real alternative.
</p>
<p>
I mean: if you just stop thinking about the (for end users) difficult installation of the JRE and the (until now) slow speed, Java indeed has some advantages which make it <em>the</em> tool for desktop development: it's platform-independent (ok... nearly - at least the major platforms are supported), it's (quite) easy to work with (I don't like it very much myself, but it's definitely much easier to work with than C, for example) and it has very convenient memory management which makes it a bit more secure than your standard C application (think buffer overflows, for example).
</p>
<p>
In short: it's the optimal toolset for building desktop applications where a lot of features, fast development and high security (unconcerned users, not admins, are working with the software) are the key to success.
</p>
<p>
I really think that the big time for Java is just coming, not fading away.
</p>
Save query2004-03-08T00:00:00+00:00http://pilif.github.com/2004/03/save-query<p>By the way: the Gnome guys are the ones trying to simplify everything by removing “interface bloat”, if I remember correctly.</p>
<p>Then please explain to me what this “If you don’t save, changes from <strong>the last 23 seconds</strong> will be definitively lost” in <a href="http://www.gnegg.ch/archives/gedit_save.jpg">this</a> dialog box has to say? I mean, tracking this value costs a little bit of performance, putting it in this message uses valuable screen real estate and thus makes the dialog less readable, and finally the thing has no real value.</p>
<p>What if I’d opened the editor an hour ago to enter some temporary text snippet, then forgot about it, and now that I’ve finished working I’m closing my apps down?</p>
<p>The counter would be insanely high, suggesting a lot of unsaved changes, which would be neither correct, nor would the changes be valuable.</p>
<p>What if I’ve fixed an important bug in my program code by just changing one line, which takes me about two seconds to do? Now the counter would be low but the changes would be very significant.</p>
<p>What I want to say: This counter has no real-world value. It’s just a geeky thing. Not that I don’t like geeky things, but adding geeky bloated things to a GNOME application seems quite hypocritical.</p>
<p>Doing something like “You have quite a lot of unsaved changes in this document. Are you really sure?” (appearing depending on the real size of the changes, not the time spent editing) would be friendlier and more useful but - of course - would mean an even bigger tracking overhead.</p>
<p>But then again: I think this message is read about once. Every later time, the user knows what it says and presses the buttons without reading. So it would seem better to just keep the message static, so the user is not forced to re-read a semantically unchanged message - assuming her subconscious detects the slightly different look of the familiar dialog and thus causes it to be actively re-read.</p>
<p>PS: Please don’t get me wrong about this nit-picking: GNOME and KDE both are great projects. Both have their problems and both have their unique solutions. This just sprung to my eye and whenever I find something in any other app I surely will write about it.</p>
Imitating Windows?2004-03-08T00:00:00+00:00http://pilif.github.com/2004/03/imitating-windows<p>The <a href="http://www.gnome.org">Gnome</a> fanatics and other <a href="http://www.kde.org">KDE</a>-bashers always talk about KDE imitating Windows with its UI.</p>
<p>Today I’ve read <a href="http://sdg.agreatserver.com/GNOME_2_6.html">this guide</a> to the upcoming Gnome 2.6 and with pictures like <a href="http://www.gnegg.ch/archives/gedit_save.jpg">this</a> (look at the icon, the caption of the buttons and the whole look of the dialog) or <a href="http://www.gnegg.ch/archives/filesel_save.jpg">this</a>, this seems awfully Mac-ish to me. KDE-folks out there: Start bashing Gnome for imitating Mac OS X!</p>
<p>;-)</p>
If only I could play like this2004-03-02T00:00:00+00:00http://pilif.github.com/2004/03/if-only-i-could-play-like-this<p>Somehow I came across <a href="http://bisqwit.iki.fi/jutut/nesvideos.html">this site</a>. This is a collection of videos from someone playing old NES games in an emulator. They try to get through a game as fast as possible.</p>
<p>What this <a href="http://soramimi.egoism.jp/">Morimoto</a> did in Super Mario Bros 3 is quite unbelievable - regardless of which tricks he may have used. Look at <a href="http://bisqwit.iki.fi/torrents/supermariobros3j-timeattack-morimoto.avi.torrent">this video</a> (it’s a <a href="http://www.bitconjurer.org/BitTorrent/">Bittorrent</a>-File)</p>
Today's little PC-Problems2004-02-24T00:00:00+00:00http://pilif.github.com/2004/02/todays-little-pc-problems<p>
Today, I decided to track the daily PC problems I have to solve, just because I wondered why I generally think that PCs suck, and to learn how much time I lose just fixing things that should work.
</p>
<p>Today's list of software-stupidity I've had to learn:</p>
<ul>
<li><a href="http://fma.sf.net">floAt's Mobile Agent</a> has a feature to react when the connected BT phone goes out of range. I've configured fma to lock the workstation as soon as I leave it. Quite nice. Every time the BT connection drops, the screen is locked. Unfortunately, it does not check for the cause of the connection drop: if it's because I'm just quitting fma, it locks the screen anyway. This stupid thing happened to me one time too many, so I decided to post this whole entry here.</li>
<li>I've got 1 GB of RAM in my <a href="http://www.gnegg.ch/archives/ibm_thinkpad_t40.html">Thinkpad</a>, which I think should be enough for the machine to swap only very rarely. Then why is my system virtually unusable because of swapping when I bring Firefox to the foreground after it sat in the background for an hour or so? I'm not blaming Firefox for this. I'm blaming Windows for its less-than-optimal memory handling. Why swap if it's not needed? Why does the system stop responding when it's swapping?</li>
<li>On the <a href="http://www.gnegg.ch/archives/each_problem_has_a_solution.html">Exchange Server</a>, I've set up a daily backup task using Microsoft Backup. It never ran and did not provide any error message at all. Why? Because the command line created by MS Backup's planning assistant was too long to be executed by Windows. Why is such an invalid command line created? Why is there no error message?</li>
<li>I had to support one computer where surfing to any website immediately produced a 404 error. I double-checked - the pages were there, the websites did in fact work. Clearing the browser cache (the supported person insists on using IE) helped. Why? What was the problem?</li>
</ul>
<p>Stupid, Stupid, Stupid.</p>
<p>I mean: I'm writing software myself and I really try not to do such stupid things. They happen. They happen all the time when programming. Your mind works completely differently when you are buried deep in program code. But then: why don't people take some time to actually test their products? Why do such stupid things happen? Why can't we live in a world without bugs? Without software stupidity?
</p>
<p>Apple, you are coming closer and closer...</p>
Have a bite...2004-02-24T00:00:00+00:00http://pilif.github.com/2004/02/have-a-bite<div align="center">
<img alt="0860095.jpg" src="http://www.gnegg.ch/archives/0860095.jpg" width="255" height="177" border="0" />
</div>
<p>
... if you still can ;-)
</p>
<p>
I found this in the online shop of one of our customers, which distributes dental equipment. The device is called "Lippenexpander" and I have no idea what practical use a dentist may have for it, but it's no pleasant picture at all.
</p>
Too bad!2004-02-19T00:00:00+00:00http://pilif.github.com/2004/02/too-bad<div align="center"><img alt="Crash" src="http://www.gnegg.ch/archives/image_1.jpg" width="356" height="272" border="0" /></div>
<p>
The red train you see in this picture is the new model of train that the <a href="http://www.forchbahn.ch">Forchbahn</a>, a small railway leading from Zürich to Forch and Esslingen, recently bought. This is the very same train that should have been used for public transport next Monday. The first new model in 12 years.
</p>
<p>
Now it seems that I have to wait for some more time before trying it out ;-)
</p>
Final Fantasy in concert2004-02-19T00:00:00+00:00http://pilif.github.com/2004/02/final-fantasy-in-concert<p>And again: Final Fantasy <a href="http://www.square-enix-usa.com/uematsu/concert/index.html">in concert</a> not as far away as the last one in Japan, but still too far. Of course I will be getting the CD’s…</p>
Responding to search-strings2004-02-18T00:00:00+00:00http://pilif.github.com/2004/02/responding-to-search-strings<p>I’ve just looked at the logs of this webserver and - among the search strings used to find this page - found this: <blockquote>delphi cannot debug anymore</blockquote>It happens that though I have not written about this particular topic, I certainly have some hints for this fellow searcher (although they possibly come too late now):</p>
<ul>
<li>Have you compiled your project with debug information? (Project/Options/Compiler).</li>
<li>Have you rebuilt your project after changing above settings?</li>
<li>Do your files by any chance have Unix line endings? If so, the debugger won't work</li>
<li>Have you restarted your PC? Sometimes this works too.</li>
</ul>
<p>I’m quite sure there are more things that could make the debugger unusable, but unfortunately I can’t currently think of any more of them. Maybe because just the ones listed above are common enough that I remember them? Delphi is very nice, but sometimes it can be <a href="http://www.gnegg.ch/archives/70-The-anatomy-of-a-delphi-crash.html">so unstable</a><br /><br /></p>
Personal Toolbar2004-02-18T00:00:00+00:00http://pilif.github.com/2004/02/personal-toolbar<p>My own bookmark management is somewhat non-existent. Those few pages I’ve actually bookmarked are the ones with long URLs (longer than <a href="http://www.lwn.net">lwn.net</a> for example - a page which definitely is one worthy of being in my bookmarks file). There are so few bookmarks that I use only the “Personal Toolbar” feature of <a href="http://www.mozilla.org/products/firefox/">Mozilla Firefox</a> - the Bookmarks menu is completely empty.</p>
<p>As I wanted to do some coding and tweaking around with MovableType, I’ve decided to create <a href="/perstoolbar.php">this little tool</a> which renders the original <tt>bookmarks.html</tt> from my Mozilla profile into something more useful.</p>
<p>I initially tried to just apply a CSS style to the original file, but that was not possible because a) it has far too few named identifiers or structure to style it properly, b) I would have had to do some coding anyway because I wanted to display the bookmark image and c) the original file is nowhere near XHTML-compliant - another reason which would have forced me to do some coding anyway.</p>
<p>I hope you like the thing and forgive me those two “evil” links - there is virtually no way to get subtitled anime the legal way, so I have to refer to “other” channels. Much the same with English video games: you simply can’t get them here in Switzerland, so I usually buy the German version and download the English one (Broken Sword 3 <a href="http://www.gnegg.ch/archives/broken_sword_iii.html#000073">was the last one</a>).</p>
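<p>The parsing step of a tool like this can be sketched with a tolerant HTML parser: Mozilla stores each bookmark as a DT-wrapped anchor element with HREF and ICON attributes, and since the file is nowhere near well-formed markup, a forgiving parser is mandatory. The following is a Python illustration under those assumptions - it is not the code behind the tool above (which, judging by its URL, is PHP):</p>

```python
from html.parser import HTMLParser

class BookmarkExtractor(HTMLParser):
    """Collect (href, title) pairs from a Mozilla bookmarks.html file."""

    def __init__(self):
        super().__init__()
        self.bookmarks = []  # finished (href, title) pairs
        self._href = None    # link currently being read, if any
        self._title = ""

    def handle_starttag(self, tag, attrs):
        # HTMLParser hands us lowercased tag and attribute names,
        # even though Mozilla writes them in upper case.
        if tag == "a":
            attrs = dict(attrs)
            if "href" in attrs:
                self._href = attrs["href"]
                self._title = ""

    def handle_data(self, data):
        # Text between <A> and </A> is the bookmark's title.
        if self._href is not None:
            self._title += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.bookmarks.append((self._href, self._title.strip()))
            self._href = None
```

<p>Feed it the file's contents and <tt>.bookmarks</tt> holds the pairs, ready to be rendered into whatever markup you like - which is exactly why parsing beats trying to style the raw file with CSS.</p>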
Printing2004-02-16T00:00:00+00:00http://pilif.github.com/2004/02/printing<p>I’ve just finished a first version of a printing-stylesheet. Now when you print one of the pages here, you will get a more suitable layout without navigation and background-colors.</p>
<p>The more I’m working with stylesheets, the more I begin to like that. There is virtually nothing you can’t do, it’s quite browser-interoperable, it’s easy to do. It’s just nice.</p>
<p>Now the only thing left to do is to convince Richard to also use CSS whenever possible so that our company’s webpages get the same clean code.</p>
<p>Very nice!</p>
That's it... for now2004-02-15T00:00:00+00:00http://pilif.github.com/2004/02/thats-it-for-now<p>With the monthly archives now working, I finally enabled the new layout by default. There are still some tweaks to do, but those will require me to learn even more CSS, which is the next thing I’ll do. And the search results template still comes in the old design. Will fix that too.</p>
<p>But for now: Enjoy!</p>
<p>Many thanks go to Richard for providing me with the template.</p>
Going on2004-02-15T00:00:00+00:00http://pilif.github.com/2004/02/going-on<p>Now that I’ve created the new design for the <a href="index2.htm">front page</a>, the next step was to get working at the archive. As I disliked the Popup-Window, MT created for comments, I had a look into the detail view of an entry, aka. the individual archives.</p>
<p>It’s not yet linked from the new index page, but <a href="http://www.gnegg.ch/archives/27-The-13-most-annoying-things-of-the-P800-phone.html">here’s an entry</a> using the new template. You may see that I’ve chosen some more useful filenames for the individual entries, that the sidebar is now also visible in the sub-pages, and that the comments still look like they did in the old MT layout - Richard has not provided me with a good-looking comments template yet.</p>
<p>By the way: <a href="http://www.gnegg.ch/archives/78-Delphi,-WinXP-and-Password-Edits.html">This</a> would be an entry with trackback-pings. Looks quite nice to me.</p>
<p>If you by any chance visit gnegg.ch, tell me what you think!</p>
CSS - I'm getting into it2004-02-14T00:00:00+00:00http://pilif.github.com/2004/02/css-im-getting-into-it<p>With my recent motivation in posting here, the desire grew to actually create a layout for gnegg.ch. I’ve asked richard whether he would be so kind to create one which he actually did. Many thanks Richard.</p>
<p>Now, my knowledge of HTML is quite limited. The last time I actually did something in HTML - besides tweaking some templates here and there to insert some dynamic content - was about three years ago, way before CSS was something browsers actually understood.</p>
<p>So - to make things interesting for me, I decided that I will use no tables and no old-fashioned HTML hacks, but straight CSS and DIVs for “converting” Richard's Photoshop file to something your browser understands.</p>
<p>At first, it was quite difficult, but I made quite some progress as time went on. The code looks quite nice, too.</p>
<p>But <a href="index2.html">have a look</a> for yourself.</p>
<p>Currently I’m working on adapting the layout to the rest of these MT-based pages, which at first will involve quite a bit of reading the documentation. Until I’m ready, the alternative index page above is the only thing you’re going to see of the new layout, but it’s already MT-integrated, so if you want, you can bookmark it instead of the old index page.</p>
<p>I will post some newbie-notes for CSS-beginners later on.</p>
T610/Z600, Outlook, MobileAgent and Bluetooth2004-02-11T00:00:00+00:00http://pilif.github.com/2004/02/t610z600-outlook-mobileagent-and-bluetooth<p>
If you own either a <a href="http://www.sonyericsson.com/t610/">T610</a> or <a href="http://www.sonyericsson.com/z600/">Z600</a> mobile phone, you may know of <a href="http://fma.xinium.com/index2.htm">floAt's Mobile Agent</a>, a not-so-stable but all the more powerful tool for accessing the phone from your PC. Sending SMSes, managing contacts, even getting a popup window when somebody calls you - everything is possible.
</p>
<p>
Everything but synchronizing with Outlook. There's just some kind of CSV export for your contacts, but this is very uncomfortable to handle. The Bluetooth sync profile the Widcomm software provides would do the trick, but I've got many more contacts in Outlook than there's space for on the phone. So I need a way to specify which contacts to synchronize.
</p>
<p>
The software that comes from Ericsson, XTNDConnect PC, has support for filters (I've created a category T610 and I'm syncing only contacts within this category), so it would do the job.
</p>
<p>
Unfortunately, this Ericsson PhoneMonitor thing which XTNDConnect relies on is slightly incompatible with MobileAgent - either the phone is not detected or MobileAgent loses its connection (which locks my workstation because I'm using the proximity detection). I've never succeeded in finding a way to reproducibly use both programs concurrently.
</p>
<p>
Not 'till now.
</p>
<p>
(The BT driver is Widcomm 1.4.x, but it should work with 1.3 too)
</p>
<ol>
<li>Open the Advanced Bluetooth configuration.</li>
<li>Client Applications Tab.</li>
<li>Add COM Port</li>
<li>OK to everything</li>
<li>Double click the BT-Icon in the Tray</li>
<li>"View devices in range"</li>
<li>Double click your phone</li>
<li>Right-click "Serial Port 2" and create a shortcut.</li>
<li>go up two levels.</li>
<li>right click the created shortcut, properties.</li>
<li>Select the newly created port</li>
<li>OK everything</li>
<li>In the control panel open the Ericsson Phone Monitor</li>
<li>In COM Ports, select the newly created port, chose "Reserve" and "Enable"</li>
<li>OK</li>
</ol>
<p>
Before synchronizing, double-click the newly created shortcut in your "Bluetooth places". The phone will not be detected immediately, but as soon as you start XTNDConnect and hit "synchronize", it will be.
</p>
<p>
What you did with these steps is create two virtual COM ports for the phone that can be used concurrently. That way you can use XTNDConnect to synchronize with Outlook and MobileAgent for everything else. Very nice.</p>
Forte Agent 2.02004-02-10T00:00:00+00:00http://pilif.github.com/2004/02/forte-agent-20<p><a href="http://www.forteinc.com">Forte</a> finally released Agent 2.0, a complete rewrite of the IMHO best newsreader for the Windows platform. I’ve just downloaded (and registered - there’s a quite reasonable upgrade fee of $15) the new version and I’m going to have a look at it…</p>
Programmers Font2004-02-04T00:00:00+00:00http://pilif.github.com/2004/02/programmers-font<p>When writing code every now and then, you usually use a monospaced font for better alignment of the characters. The default in Windows is Courier New, which is quite readable, but its characters are a bit wide.</p>
<p>My favourite so far has been Lucida Console, which comes with Windows XP and maybe Microsoft Office. The font's characters are a bit less wide than those of Courier New. This is my default font in <a href="http://www.google.com/url?sa=U&start=1&q=http://www.chiark.greenend.org.uk/~sgtatham/putty/&e=7415">Putty</a> as it’s very readable (for a monospaced font) and doesn’t use as much space as Courier does.</p>
<p>Somewhere on the web, I came across <a href="http://www.tobias-jung.de/seekingprofont/">ProFont</a>, a modified version of Monaco, the default monospaced font of Mac users. ProFont is optimized for programmers, can be set to a very small size (lots of code visible without scrolling) and is very, very readable.</p>
<p>On the page you’ll find a bitmap and a TrueType version. The latter looks quite bad on Windows XP with ClearType, and the former doesn’t work in Java applications. Unfortunately <a href="http://www.jedit.org">jEdit</a> is indeed written in Java, so there I have to use the TT variant.</p>
<p>Then again, Delphi and Putty are not written in Java, so I want to use the bitmap version.</p>
<p>Unfortunately, it’s not possible to install both fonts at the same time as they are both named the same, so the TT version always wins.</p>
<p>My solution: I opened the bitmap font with a hex editor and changed all occurrences of <tt>ProFontWindows</tt> to something else, which finally allowed me to install both fonts at the same time.</p>
<p>Get the hacked version <a href="/files/profont.zip">here</a>. Note that I’ve only changed the font name. All copyright belongs to the author of the page above.</p>
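<p>For the record, the hex-editor trick can be reproduced as a tiny script. The sketch below is illustrative, not the tool I used; the replacement name is an arbitrary example, chosen deliberately to have exactly the same byte length as the original, because font files carry internal offsets and string lengths that a shorter or longer name would corrupt - the same constraint you silently obey when overtyping bytes in a hex editor.</p>

```python
def rename_font(src_path, dst_path,
                old=b"ProFontWindows", new=b"ProFontBitmaps"):
    """Copy a font file, replacing every occurrence of the family name.

    The new name must have the same byte length as the old one: font
    files store internal offsets and string lengths, so changing the
    total size would corrupt the file.
    """
    if len(new) != len(old):
        raise ValueError("replacement must keep the same byte length")
    with open(src_path, "rb") as f:
        data = f.read()
    with open(dst_path, "wb") as f:
        f.write(data.replace(old, new))
```

<p>Run against the bitmap font, this yields a file that installs alongside the TrueType version, since Windows now sees two distinct family names.</p>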
Omniweb 52004-02-04T00:00:00+00:00http://pilif.github.com/2004/02/omniweb-5<p>Last Monday, <a href="http://www.omnigroup.com">OmniGroup</a> released the public BETA version of <a href="">OmniWeb 5</a>. The most interesting thing I’ve seen in it (I’m currently writing this entry using OmniWeb) is this text in the EULA:</p>
<blockquote>
IMPORTANT - Read this License Agreement carefully before clicking on the "Agree" button. By clicking on the "Agree" button, you agree to be bound by the terms of the License Agreement.
--
The document that follows this paragraph is a license agreement. Why do we need such a thing? Well, to be perfectly honest, our lawyers have told us that we need to protect ourselves. We at The Omni Group pride ourselves on our low-key style, but the global nature of the software business means that one lawsuit from one user in a far-flung jurisdiction could put us out of business. It also means that, without this agreement, we might not have protection from people who misuse our software. We do not want to bet our entire company on such possibilities, however unlikely, because we like doing what we do and want to continue to be able to do it. And, so, we require you to read and agree to this license. We think you will find it quite reasonable. Obviously, if you disagree, click "Disagree." But, don't just stop there. Let us know. Send some e-mail to <info@omnigroup.com> telling us what you find unacceptable about our license agreement. We can't promise to change anything, but we will do our best to get back to you.
</blockquote>
<p>Nice. It’s their style, so it does not really surprise me, but it’s nice anyway.</p>
<p>What I like is this tab-drawer. I’ve just seen that even the bookmark-window is displayed as a tab. Very nice. Very integrated.</p>
<p><img alt="omniweb_textarea.png" src="http://www.gnegg.ch/archives/omniweb_textarea.png" width="120" height="189" border="0" align="right" /></p>
<p>In the status bar there are some icons. I don’t really understand what they do and there’s no tooltip, but I’m sure this will be fixed in a later release.</p>
<p>Very good-looking. I’m going to surf around a bit and will possibly post something more later on.</p>
<p>Just before hitting “submit”, I saw this small plus sign above the scroll bar of any <tt>&lt;textarea&gt;</tt> element. This thing is just great. When you click on it, an extra window opens, providing you with a nice large text editor and the possibility to import any text file. Nice! Very nice.</p>
<p>I think, I’m going to find a whole lot of nice little things as I continue trying the thing out.</p>
dvdupgrades.ch2004-02-03T00:00:00+00:00http://pilif.github.com/2004/02/dvdupgradesch<p>When I was looking for a new AV receiver, I soon found out that my wishes could not be fulfilled in common consumer shops like <a href="http://www.mediamarkt.ch">Media Markt</a>. I found <a href="http://www.sony.ch/view.x?cat=1000077&prod=1008751&loc=de_CH">my device</a> by googling.</p>
<p>Where to get it? Although I am one of those guys who like to go into a store and just take the stuff with them, this was not possible this time, as said receiver is quite uncommon (newly on the market and quite expensive [but sounds very nice]), so buying online was my only way to get it.</p>
<p>Browsing around a bit finally led to <a href="http://www.dvdupgrades.ch">dvdupgrades.ch</a>, which has the most complicated user interface I’ve ever seen on a website (and runs only on IE, as the JavaScript for the menus at the top is somewhat strange), but looked quite OK anyway.</p>
<p>They promised delivery somewhere around February 10th, but the stuff was already here yesterday. They worked fast and professionally. Very nice.</p>
<p>If you live near Switzerland and need exotic AV stuff, especially modified consoles or DVD players (region code, Macrovision, … - they even wrote a new firmware for Pioneer players from scratch), give them a shot!</p>
Delphi, WinXP and Password Edits2004-01-29T00:00:00+00:00http://pilif.github.com/2004/01/delphi-winxp-and-password-edits<p>I’m still into making Delphi apps look more “native” when run under Windows XP. Now that <a href="http://www.gnegg.ch/archives/73-Delphi,-Windows-XP,-Styles-and-embedded-IE.html">I got the IE-Control working</a>, I was looking into the password-edit case.</p>
<p>The problem: when using the standard way of creating password edits (drop a TEdit on the form and set the PasswordChar property to *), this may look and work fine on Win 9x, NT and 2000, but under XP, some features are missing:</p>
<ul>
<li>In XP, password edits cannot be read from other applications by sending the WM_GETTEXT message. Delphi's TEdit can.</li>
<li>In XP, the edits show a nice bullet instead of a * to mark the entered characters.</li>
<li>When CapsLock is active, a balloon hint appears, warning the user that maybe she is not doing what she expects.</li>
</ul>
<p>How to fix this?</p>
<p>Simple: create a descendant of <tt>TEdit</tt>, override <tt>CreateParams</tt> and add <tt>ES_PASSWORD</tt> to the control's style. Provided you are supplying a valid manifest for XP, you now have a fully fledged and nicely working password edit:</p>
<pre>
procedure TPasswordEdit.CreateParams(var Params: TCreateParams);
begin
  inherited;
  Params.Style := Params.Style or ES_PASSWORD;
end;
</pre>
<p>Oh, one thing is still missing: the dots look wrong. This is because Delphi does not use Windows' standard font by default but overrides it with "MS Sans Serif", whereas "Tahoma" is the standard under XP. So Delphi apps generally look kind of foreign - even more so when ClearType is enabled (MS Sans Serif is a bitmap font and cannot be antialiased).</p>
<p>This can be fixed by setting each form's <tt>DesktopFont</tt> property to <tt>true</tt>. Note that it's a protected property, so it must be set from within the form.</p>
<p>Now the bullets look right and the fonts in your application are properly anti-aliased (provided <tt>ParentFont</tt> is set to true in every component on the form).</p>
<div align="center">
<img alt="delphi_pwchar.png" src="http://www.gnegg.ch/archives/delphi_pwchar.png" width="408" height="190" border="0" /></div>
Easier to use? Cheaper because of that? Dream on!2004-01-28T00:00:00+00:00http://pilif.github.com/2004/01/easier-to-use-cheaper-because-of-that-dream-on<p>The Exchange Server I already had <a href="http://www.gnegg.ch/archives/18-Things-I-hate.html">strange problems</a> [read this and related postings there] with, today had another one of them. I had to give reading-permission to some public folder to some users (although the GUI to do that from within Outlook is <em>really</em> easy to use, some people rely on me doing that for them because that’s even easier).</p>
<p>The Exchange Server Manager threw a strange message at me whenever I tried to expand the folder list in the tree. The text was useless as ever, and nothing was posted to the event logs - as ever. Why is there a logging framework if it is not used? (Besides, had it been used, the message would have been just as incomprehensible as the one I was getting.)</p>
<p>This time, I was lucky: I got an error number along with the message that was even known to the knowledgebase. The error was 80040e19 and the knowledgebase article was Q328659.</p>
<p>The problem was easy to fix and had something to do with some “security-tool” that got installed alongside the IIS-Lockdown tool which itself got installed alongside the common Windows-Update procedure. Nice to know that just updating the system via such an easy procedure can bring essential functions down without any warning.</p>
<p>Microsoft always emphasises the ease of use of their products and the better support you get when using their closed-source solutions. Granted: the “ease of use” thing can sometimes really be true (many things just work out of the box with not nearly as much work as I would have using a common Linux distribution), but when something does not work, fixing Microsoft's server software is much more difficult than fixing equivalent Linux software, as the fixes are unobvious and the error messages are unusable.</p>
<p>The level of support for me is just the same as with comparable open-source software: use Google, enter the error message you get and pray someone has posted a fix for it somewhere. If not, I see virtually no solution in Microsoft land (besides paying a lot of money for support), whereas in open-source land I would be able to fix the problem sometime later, as I'd have readable error messages, and if that did not help, I could try to understand the problem by reading the source code.</p>
<p>That’s why I usually prefer open solutions. Or have you ever seen software working flawlessly?</p>
Double double dash2004-01-28T00:00:00+00:00http://pilif.github.com/2004/01/double-double-dash<p><a href="http://hello.typepad.com/hello_nintendo/2004/01/double_double_d.html">This</a> is what I’ve always wanted to do, but I have neither enough room, nor enough friends willing to participate, nor enough equipment. Maybe later…</p>
Keyboard review2004-01-21T00:00:00+00:00http://pilif.github.com/2004/01/keyboard-review<p><a href="http://www.pcmag.com/article2/0,4149,1436357,00.asp">This</a> review of Logitech's diNovo wireless desktop was slashdotted today. I wonder why the reviewer does not say anything about the stupid size of the delete key, which actually spans both rows of the two-by-three block with the page-up/down, home and end keys. Insert is where you expect scroll-lock to be, which is missing.</p>
<p>You can’t believe how many times I mass-deleted some files in <a href="http://www.totalcommander.com">Total Commander</a> instead of just marking them.</p>
<p>Then again, maybe this layout will be the new “standard”: Looking at the other reviewed desktops, the one from Microsoft also has this layout and because of the slashdot-effect I cannot check out the others.</p>
<p>By the way: besides this delete problem, the diNovo is the best keyboard I’ve had so far: great typing feel, great design and good access technology (if you can live with <a href="http://www.gnegg.ch/archives/000078.html">this</a>).</p>
Java and native libraries2004-01-21T00:00:00+00:00http://pilif.github.com/2004/01/java-and-native-libraries<p>As you may know, I am working with barcode scanners - actually, it’s all about my company's product <a href="http://www.popscan.net">PopScan</a>, a software tool built around a nice little device which is essentially a barcode scanner and nothing more, and thus quite inexpensive.</p>
<p>We have two similar products: One is the enterprise version which is sort of a framework for implementing custom made barcode solutions. Two quite big companies here in Switzerland are already using it (just visit PopScan’s webpage to learn more, I won’t make any more sales-pitches here).</p>
<p>The other product - PopScan SMB - is an out-of-the-box solution for small and medium businesses which allows them to provide an easy-to-use barcode ordering system to their customers (OK, now I’m really finished sales-pitching; I’m coming to the technical aspects…).</p>
<p>PopScan SMB is largely web-based: on the client side we have a very small application that does nothing but sit there and wait for a scanner to be connected. When that happens, it reads the scanned codes and displays (using the IE ActiveX control) the webpage with the filled shopping cart - very nice and simple.</p>
<p>The drawback so far was that we could only support Windows with this solution (written in Delphi - but as a reader of this blog, you may know that already). The point is that we got quite some requests to get this to work on the Mac and additionally we have some ideas involving Linux….</p>
<p>As I have wanted to learn Java for quite some time now, I decided to rework the thing in Java (applet, Web Start, see below).</p>
<p>The first problem was accessing the serial port the scanner is connected to. Possible? Yes. Sun has created a specification for accessing serial and parallel ports and provided a <a href="http://java.sun.com/products/javacomm/index.jsp">sample implementation</a> for Windows and Solaris.</p>
<p>If you want support for all the other OSes, and if you want a solution that is actually working, I suggest you have a look at the library from <a href="http://www.serialio.com">SerialIO</a>, which is what I’m using. Works like a charm and is definitely worth the money.</p>
<p>Next problem: How do we install the thing on the clients and how do we keep it upgraded? Two solutions come to my mind:</p>
<ol>
<li>Java Web Start: just put a JNLP file somewhere on your server and link to it. The browser downloads it and Java Web Start does the rest, meaning installing and keeping the software updated. The big advantage: the mechanism has explicit support for native libraries (which is what this blog entry is about) and works quite nicely. The disadvantages: 1) I'm not sure whether <tt>java.net.URLConnection</tt> uses appropriately preconfigured proxy servers, which is a requirement for the solution to be usable (quite a lot of our potential customers have quite strict firewalls and forced proxies), and 2) it does not work on Mac OS < X, which only has Java 1.1.</li>
<li>Java applet: put it on a webpage which the user opens and that's it. No installation necessary, proxy support, Java 1.1 support - you name it. The optimal solution if there weren't that one little problem: no support for native libraries (which I have to install to access the serial port). Anyhow: the applet is what I did.</li>
</ol>
<p>(Actually there is a third solution: create a "normal" application and a platform-specific installer and let the user install and run it. This would work, but it would force me to again create a special auto-update mechanism, and it would require quite a lot of user intervention.)</p>
<p>So it all can be broken down to one question: how to handle native libraries with Java applets?</p>
<p>The answer is as simple as the question:</p>
<ol>
<li>Write your code.</li>
<li>When the library is accessed for the first time and can't be loaded, a <tt>java.lang.UnsatisfiedLinkError</tt> is thrown. Catch it and...</li>
<li>... download the required libraries to the local computer into the correct directory.</li>
<li>Tell the user to restart the browser.</li>
</ol>
<p>Of course your applet has to be signed for this to work, but this can be done quite nicely in an <a href="http://ant.apache.org/">Ant</a> task.</p>
<p>Where to download the file to? Into some directory in <tt>java.library.path</tt>, where each platform has its preferred location (which is - by the way - not what the SerialIO documentation suggests):</p>
<table border="0">
<tr>
<td>Windows</td><td><tt>{java.home}/bin</tt></td>
</tr>
<tr>
<td>OS X (Java 1.3)</td><td>somewhere under <tt>/System</tt>, which is bad</td>
</tr>
<tr>
<td>OS X (Java 1.4)</td><td><tt>~/Library/Java/Extensions</tt></td>
</tr>
</table>
<p>(I must check OS 9 later.)</p>
<p><a href="http://www.apple.com/safari">Safari</a> uses Java 1.4, while both IE and Mozilla (<a href="http://www.mozilla.org/products/camino/">Camino</a>, <a href="http://www.mozilla.org/products/firebird/">Firebird</a> and <a href="http://www.mozilla.org/products/mozilla1.x/">Mozilla</a> itself) use 1.3.</p>
<p>The problem with Mac OS's 1.3 library path is that it's never writable by the currently logged-in user (not even if she's in the <tt>admin</tt> group). To put a file there from within the Finder, you must authenticate yourself as super-user (which calls <tt>sudo</tt> somewhere under the hood), and that is not possible from within Java.</p>
<p>The solution: the current directory "." is also in <tt>java.library.path</tt>. On Richard's Mac, "." pointed to the root of the hard drive "Macintosh HD" (/), which is writable by users in the <tt>admin</tt> group. So for now, installing the library under "." when using the 1.3 VM works as long as the current user is an administrator - the same requirement as under Windows, and it can be explained somewhere in the handbook or on the webpage. Problem solved. (Safari users have the advantage of being able to use the applet even without an admin installing the native library first, as a directory in the user's home directory is in the library path in 1.4.)</p>
<p>I really searched the web before writing this entry and I've not found anything about applets and native libraries (especially not under Mac OS). Maybe there is a simpler way to do what I am doing. I'd be glad to hear from you!</p>
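<p>The per-platform table and the catch-and-download flow described in this post can be sketched in Java. This is a minimal sketch, not the actual PopScan code: the class and method names (<tt>NativeLibDir</tt>, <tt>nativeLibDir</tt>) and the "." fallback for platforms not in the table are my own assumptions.</p>

```java
import java.io.File;

// Sketch: pick the directory on java.library.path where an applet should
// drop its native library, per the platform table in this post.
// The names and the "." fallback are illustrative assumptions.
public class NativeLibDir {
    static String nativeLibDir(String osName, String javaVersion) {
        if (osName.startsWith("Windows")) {
            // {java.home}/bin is on java.library.path under Windows
            return System.getProperty("java.home") + File.separator + "bin";
        }
        if (osName.startsWith("Mac")) {
            if (javaVersion.compareTo("1.4") >= 0) {
                // Java 1.4 on OS X searches a per-user extensions directory
                return System.getProperty("user.home")
                        + "/Library/Java/Extensions";
            }
            // Java 1.3: the system location needs root, so fall back to "."
            return ".";
        }
        // Other platforms: "." is also on java.library.path
        return ".";
    }

    public static void main(String[] args) {
        // On first use the applet would try System.loadLibrary(...), catch
        // the UnsatisfiedLinkError, download the library into this
        // directory, and ask the user to restart the browser.
        System.out.println(nativeLibDir(
                System.getProperty("os.name"),
                System.getProperty("java.specification.version")));
    }
}
```

<p>The version strings compare lexically here, which works for "1.3" vs. "1.4" but would need real parsing for anything fancier.</p>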
Fun with Logitech2004-01-09T00:00:00+00:00http://pilif.github.com/2004/01/fun-with-logitech<p>I recently bought the <a href="http://www.logitech.com/index.cfm?page=products/details&CRID=486&CONTENTID=7321&countryid=16&languageid=1">diNovo Media Desktop</a> from Logitech: I really liked its design and the Bluetooth support, as this is the only really usable way to do wireless equipment (no problems with multiple devices per room, encryption, … - you name it).</p>
<p>The problem was: the driver on the CD-ROM installed just another Widcomm Bluetooth stack which, despite being the same piece of software (down to the version) that was installed with my ThinkPad's internal Bluetooth adaptor (you have to update to version 1.4 from IBM's webpage to use the HID profile), was not compatible with the prior Widcomm software (which is a political/legal problem and has no technical reasons at all).</p>
<p>So, when using the diNovo drivers, the internal Bluetooth adaptor does not work (too bad when trying to use your cell phone to connect to the internet when other means of connectivity are not available), and when not using them, I cannot configure the special keys and the media player support (which is stupid anyway, as it does not support Winamp).</p>
<p>My final solution was to revert back to only IBM's internal driver and pair the Logitech devices with that one (hint: the mouse uses the key 0000). Installing SetPoint, which would work perfectly well with IBM's BT stack (as it's the same as Logitech's), was not possible because the Logitech BT adaptor could not be found. Ergo: no media keys, but at least a really nice keyboard and mouse together with working BT support.</p>
<p>Talk about BT-interoperability…</p>
<p>I really look forward to the Windows-integrated BT-stack (which probably will be the widcomm one too - just look at the stack of Windows Mobile 2003)</p>
Delphi, Windows XP, Styles and embedded IE2004-01-09T00:00:00+00:00http://pilif.github.com/2004/01/delphi-windows-xp-styles-and-embedded-ie<p>Let’s say you have a Delphi application (Delphi 7 - although prior versions can use Mike Lischke's <a href="http://www.delphi-gems.com/ThemeManager.php">Theme Manager</a>) which embeds the Microsoft Internet Explorer ActiveX control. Let’s assume further that you have created your manifest, so the application appears in the themed style under Windows XP.</p>
<p>Unfortunately, the embedded IE does not do that: controls are still drawn in the old theme-less style. Why? How do you tell the control to use the themed style (which it certainly supports - just look at IE itself)?</p>
<p>For a long time I was looking for a solution, which I've just found.</p>
<p>First, call <tt>SetThemeAppProperties</tt> (defined in <tt>UxTheme.pas</tt>), then send <tt>WM_THEMECHANGED</tt> to your forms - at least to the one that uses the IE control. Example:</p>
<pre>
SetThemeAppProperties( STAP_ALLOW_NONCLIENT OR
                       STAP_ALLOW_CONTROLS OR
                       STAP_ALLOW_WEBCONTENT );
PostMessage(frmBrowser.Handle, WM_THEMECHANGED, 0, 0);
</pre>
<p>Especially important is the flag <tt>STAP_ALLOW_WEBCONTENT</tt>.</p>
<p>Then, in the form containing the browser, just add a message procedure to the form declaration:</p>
<pre>
private
  procedure wmthemechanged (var msg: TMessage); message wm_themechanged;
</pre>
<p><b>Update:</b> I've turned off the comment feature, as this entry somehow got listed in some spammer's database. I'm currently deleting about 10 entries per day that are just there to provide links to some strange sites. I'll post about this later.</p>
Woah! It works?2003-12-02T00:00:00+00:00http://pilif.github.com/2003/12/woah-it-works<p>A little history lesson:</p>
<ul>
<li>My goal <a href="http://www.gnegg.ch/archives/43-Fun-with-OpenLDAP.html">was</a> single sign-on on our Linux, OS X and Windows boxes.</li>
<li>It did <a href="http://www.gnegg.ch/archives/44-And-on-to-replication.html">not</a> <a href="http://www.gnegg.ch/archives/45-Its-coming-along....html">work</a> <a href="http://www.gnegg.ch/archives/46-OSX-and-OpenLDAP.html">very</a> <a href="http://www.gnegg.ch/archives/47-LDAP-again....html">well</a>.</li>
<li>So I turned it off and forgot about it. Or better: I kept it in its sort-of-working state until I had to upgrade SASL for Cyrus Imapd, which in turn brought the OpenLDAP replica server to a state where it would consume 100% of CPU time and not respond to any requests. This is where I gave up. Talk about DLL hell ;-)</li>
</ul>
<p>Then came the time with our <a href="http://www.gnegg.ch/archives/63-Each-problem-has-a-solution....html">Exchange trial</a>, which turned out to work quite nicely.</p>
<p>And finally, yesterday, Jonas asked for a shell account on one of the Linux boxes - Samba access was already working (by using <tt>security = domain</tt> and <tt>password server = *</tt> in smb.conf). This is where I really wanted to rethink the whole single-sign-on thing - even more so since I really want to create users just once, so I don't forget to remove them in different places should I have to remove (or disable) one once in a while.</p>
<p>LDAP was no alternative (as you can read here on gnegg.ch). I hadn't tried out <tt>winbind</tt> back then, which is what I set up this morning.</p>
<p>And it's funny: it just worked. First I joined the Samba servers to the ADS domain following <a href="http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection.html#ads-member">this guide</a>. No problems (which I could not believe at first). Then I followed <a href="http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection.html#winbind">this guide</a> and the manpage of <tt>smb.conf</tt> to get <tt>winbind</tt> to work, and as before: it runs flawlessly (after adding <tt>UsePAM yes</tt> to <tt>sshd_config</tt>). Even more interesting: here on the <a href="http://www.gentoo.org">Gentoo</a> box where I tried this out first, it worked even without any PAM configuration at all.</p>
<p>Nice. What do I have? I can manage my users in a central place - this time on the Windows server, with quite good-looking GUI tools. This is what I've always wanted to do. Nothing more, nothing less.</p>
<p>I'm a bit afraid of trying to configure our Mac OS X computer, but we'll see. Very nice and satisfying.</p>
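<p>For reference, the smb.conf settings mentioned in this post, combined with the winbind options from the Samba HOWTO, look roughly like this. This is a sketch, not my actual config: the workgroup name and the idmap ranges are placeholders you have to adapt.</p>

```
[global]
   workgroup = EXAMPLE            ; placeholder - your domain's NetBIOS name
   security = domain              ; as used in this post
   password server = *            ; locate the domain controller automatically

   ; winbind settings per the Samba HOWTO (ranges are placeholders)
   idmap uid = 10000-20000
   idmap gid = 10000-20000
   winbind enum users = yes
   winbind enum groups = yes
   template shell = /bin/bash
```

<p>After joining the domain and starting <tt>winbindd</tt>, <tt>wbinfo -u</tt> should list the domain users.</p>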
The anatomy of a delphi crash2003-11-26T00:00:00+00:00http://pilif.github.com/2003/11/the-anatomy-of-a-delphi-crash<p>Delphi has the habit of crashing on exit from time to time. This time it was quite resourceful in finding different styles of error messages:</p>
<center>
<img alt="Harmless" src="http://www.gnegg.ch/archives/1harmless.png" width="399" height="126" border="0" />
<i>Quite ordinary</i>
<img alt="Overlay" src="http://www.gnegg.ch/archives/2overlay.png" width="399" height="126" border="0" />
<i>Overlay</i>
<img alt="Transparent" src="http://www.gnegg.ch/archives/3transparent.png" width="399" height="126" border="0" />
<i>Transparent</i>
<img alt="Captionless" src="http://www.gnegg.ch/archives/4captionless.png" width="399" height="126" border="0" />
And finally: <i>Captionless</i>
</center>
<p>New messages popped up just after closing the previous one with “OK”. Finally I had to kill the <tt>delphi32.exe</tt> process using the Task Manager. Delphi would be the perfect piece of software if only it were more stable.</p>
What a cute fairy this is...2003-11-22T00:00:00+00:00http://pilif.github.com/2003/11/what-a-cute-fairy-this-is<p><a href="http://www.gnegg.ch/archives/philip_die_gluecksfee.jpg"><img alt="philip_die_gluecksfee.jpg" src="http://www.gnegg.ch/archives/philip_die_gluecksfee-thumb.jpg" width="100" height="144" border="0" align="left" hspace="5" /></a>Jonas’ girlfriend does some work for us updating <a href="http://internet.sunrise.ch/de/adsl">sunrise ADSL-World</a>. If possible, we try to get people to return to the site by using contests, like the one that ended about a week ago, where they needed me to choose the lucky winner, as I did not yet have the time to write a tool to take this work off my hands.</p>
<p>Anyway: instead of just reminding me to do it, Nina sent this picture, which is a really great Photoshop work. Thanks, Nina.</p>
Cinecard2003-11-20T00:00:00+00:00http://pilif.github.com/2003/11/cinecard<p>Here in Zurich, one of the companies running the cinemas (Kitag AG) has a quite well-working reservation system based on the “Cinecard”, which (among other things) allows you to reserve or buy tickets over the internet with a real-time preview of which seats you’re going to get.</p>
<p>Recently they changed the old chip cards to ones containing an RFID tag. As my problems with this (they don’t even have a privacy policy on their site) mostly concern people in Zürich, Basel and Bern, I’ll post a small article I wrote about it. This is from an email I sent to an employee of Kitag AG; she doesn’t like what I wrote either:</p>
<blockquote>
<p>To be honest, I didn’t know that about the Ciné-Card - how does it work?? Via the magnetic stripe/chip? Help, then I can be tracked too!! And on the internet everywhere anyway… that’s terrible.</p>
</blockquote>
<p>The chip built into the new Cinecards (on the CDs it was still visible under the white paper around the hole - on the brand-new ones it is worked into the material) is called an RFID chip. If you buy in sufficiently large quantities, the thing costs around CHF 1.50 apiece, is less than a millimeter thin, and works like this:</p>
<p>You can power the thing by induction (remember: a right-handed system, a right-handed system) from about 30 meters away. Once it has power, you can send special commands to read out the stored ID.</p>
<p>So: every RFID tag stores a unique number, and that number can be read from 30 meters away without you noticing.</p>
<p>So much for the technology itself.</p>
<p>The idea was to use it to replace barcodes. And there is something to it: at Migros you fill your shopping cart with stuff, push it to the register and <em>pop</em> - there’s what you have to pay, because the register has read the RFID tags of the goods in the cart. Convenient.</p>
<p>Same in the warehouse: you have a warehouse with various shelves. An RFID scanner constantly monitors their stock levels. When the last item is taken from the shelf, <em>pop</em> - a reorder goes out. Convenient.</p>
<p>Wal-Mart in the US took the system further: an RFID scanner and a webcam were mounted at the shelf with the Gillette razor blades (hideously expensive). The camera photographed everyone who took a pack of blades from the shelf. At the register, another RFID scanner detected whether someone was carrying razor blades. If so, a photo was taken and compared with the shelf photos. Would-be thieves could then conveniently be grabbed by the store detectives after passing the register. Too bad the system didn’t work reliably (think of putting blades back on the shelf), which led to tons of pointless searches and charges. Less convenient.</p>
<p>A scenario: Benetton puts RFID tags on their clothes. The tags are constantly active and can be read from anywhere. I put on such a sweater and then buy a crate of beer at Coop. An RFID scanner at the register finds an unknown tag (the one in my sweater) and reports it to the register, which at the same time stores my preference for beer. Remember: RFID tags are globally unique. The next time I come into Coop, the RFID scanner at the entrance recognizes my sweater and <em>pop</em> - I have a Coop employee on my back who wants to sell me a crate of beer. Annoying.</p>
<p>It gets even better: wearing my sweater, I now go and buy a PC, which I pay for by credit card. The RFID scanner records my sweater’s unique ID and sends it, together with data about my purchase and my credit card number, to the local Mastercard office. Two days later: <em>pop</em> - advertising for a matching printer in my mailbox, straight from Mastercard.</p>
<p>Since nowhere on kitag.com does it say that they won’t pass my personal data on to third parties, I unfortunately have no guarantee that my name and address - now unambiguously linkable to the unique ID on the Cinecard’s RFID tag - won’t sooner or later be passed on. Convenient for merchants and law enforcement (from whom, luckily, I have nothing to fear) who partner with Kitag: whenever I enter a partner’s store with my Cinecard (I always have it in my wallet), an RFID scanner could record the ID and thereby link the purchases made to my name and the address I entered myself on kitag.com. In no time, Kitag and their partners would have a precise profile of what Philip Hofstetter does. What he buys, where he does it, which movies he watches, where he lives, … Cool, huh?</p>
<p>If you live here in Switzerland, coordinate with <a href="mailto:cinecard-article@pilif.ch">me</a> to get something done. I’d already be fine with a statement from Kitag that they do not give away personal data.</p>
Broken Sword III2003-11-16T00:00:00+00:00http://pilif.github.com/2003/11/broken-sword-iii<p>I really don’t like sequels. Really.</p>
<p>But then again: <a href="http://www.revolution.co.uk">Broken Sword III</a> has been released these days (it was called “Baphomeths Fluch” here in the German-speaking part of the world). BS was one of the greatest classical adventure games I’ve ever played.</p>
<p>It’s not only about the great story, it’s about the things I learned (quite a lot about the Order of the Templars) and the great sequences I came across here and there (the scene with the boy, the kebab guy and the toilet brush in Syria comes to mind).</p>
<p>It was really great and I think I played through it at least five times (with enough time between each session to forget how to solve the scene in Paris with the painter, the watchman and the old toilet - how I liked that puzzle!!!).</p>
<p>The second part was good too, but not <em>that</em> good. What I really find a shame is that they did not provide such a nice booklet as with the first part, where quite a thick booklet was in the package, explaining the rise and fall of the Templars. And then there was the not-so-great story, the not-so-great humor and so on. All in all a great adventure, but nothing compared to the first part (just like Monkey 3 compared to Monkey 2 or even 1 [part 4 was better again]).</p>
<p>And now the third incarnation of “Broken Sword” is out.</p>
<p>I can’t wait to get it. If it’s as good as “The Longest Journey” was, I will do the same as back then and buy multiple copies - that’s the only thing that keeps my favourite genre alive. Adventures were the reason I got a computer in the first place, and thus opened the path in my life I am currently walking. And I really hope I can get my hands on an English version (that’s quite difficult here in Switzerland).</p>
<p>If not, I will buy the German version, but I will use “different channels” to get to the English version anyway.</p>
<p>Can’t wait. Must have!</p>
<p>Review follows ;-)</p>
Fun with SPAM2003-11-14T00:00:00+00:00http://pilif.github.com/2003/11/fun-with-spam<p>Sometimes some spam slips through my <a href="http://www.spamassassin.org">SpamAssassin</a> filter and is delivered to my mailbox. A real pearl of quality spam arrived today. It contained the following paragraph:</p>
<p><tt>Note that this is not one of the emails that some corrupt Africans do distribute to other countries of the world. You are the only person I am contacting for this transaction and can only contact another person if I found out that you are not ready to be of help. This requires a private arrangement.</tt></p>
<p>yeah, right!</p>
daily strips2003-11-14T00:00:00+00:00http://pilif.github.com/2003/11/daily-strips<p>My newest toy is a small script that uses a <a href="http://dailystrips.sf.net">larger script</a> to download daily comic strips and send them as HTML-Messages to my mailbox. Maybe it’s something for you too?</p>
<p><a href="http://www.pilif.ch/stuff/stripget/index.php">stripget.php</a></p>
KDE 3.2 Beta 12003-11-08T00:00:00+00:00http://pilif.github.com/2003/11/kde-32-beta-1<p>I finally found some time to compile the current beta version of the upcoming <a href="http://www.kde.org">KDE</a> release.</p>
<p>Although it takes quite a bit longer to start, the overall impression of speed is much better.</p>
<p>The user experience can be described in one word: slick (very slick, actually).</p>
<p>What a nice work!</p>
<p>I’ll definitely post something more as soon as I’ve finished reviewing it.</p>
<p>Many thanks to the KDE-Team for this great release - actually the first one where I not only like the functionality, but also the look of it. Very nice indeed.</p>
Each problem has a solution...2003-10-31T00:00:00+00:00http://pilif.github.com/2003/10/each-problem-has-a-solution<p>… it’s just a question whether you like it or not.</p>
<p>But then again: Does idealism justify using the wrong tool for a particular problem just because the right tool does not seem ideologically right?</p>
<p>We’ve installed an evaluation version of <a href="http://www.microsoft.com/exchange">Microsoft Exchange</a> and, despite some problems at first, it’s working very well and is the best groupware solution we have tried so far.</p>
<p>Needless to say, there are many proxies and relays between the net and the actual box. <em>That</em> much I don’t want to trust it ;-)</p>
Usability2003-10-29T00:00:00+00:00http://pilif.github.com/2003/10/usability<p>I really forgot to post this message Access recently dared to show me:</p>
<p><img src="http://www.gnegg.ch/archives/usability.png" width="368" height="115" border="0" /></p>
<p>Another great product!</p>
pptp + linux = much fun.2003-10-29T00:00:00+00:00http://pilif.github.com/2003/10/pptp-linux-much-fun<p>Actually, it’s not that bad. It’s just another of those things-that-work-stop-working-and-it-takes-ages-to-find-out-why things.</p>
<p>For about four weeks I had a problem where LAN connections did not work after resuming from hibernation, and I was unable to access my PPTP server in the office from home. On the Linux side I got a timeout while waiting for the LCP response (or something like that), and on the Windows side the whole process stalled while validating my (long and thus quite secure, despite the flaws in the PPTP protocol) password.</p>
<p>Who would have thought that those problems share one thing: The common cause ;-)</p>
<p>For accessing another server of a client behind a Cisco router, they provided me with the “CISCO VPN Dialer” which, when connected, provides an option called “Stateful Firewall (Always On)”. I confess: “always on” does suggest that this not-so-well-working firewall (have I mentioned that I hate desktop firewalls, especially those by <a href="http://www.zonelabs.com">ZoneLabs</a>, which this VPN dialer obviously uses) keeps running even when the application is not. But then again: who would think that something stays running even though there is no GUI indication whatsoever (and no way to turn it off, besides re-dialing)?</p>
<p>I found this out when I tried to ping my workstation from a Linux server within our network, which I tried after noticing that VMware (incredibly useful for making screenshots of strange OSes) had stopped working too.</p>
<p>So my experience with this cool Cisco VPN dialer is as follows:</p>
<ul>
<li>It breaks well-working applications (VMware).</li>
<li>It makes me unable to use my own network while connected (despite the checkbox telling me otherwise).</li>
<li>It breaks PPTP (and I had already suspected Linux).</li>
<li>It is incompatible with the hibernation mode that comes with Win 2000 and later.</li>
<li>It is a usability nightmare, as it gives no visual feedback of being active - even though an always-running firewall and a VPN dialer have nearly nothing in common.</li>
<li>It is an even worse usability nightmare, as there is no way to turn that firewall thing off other than bringing up the VPN connection, which has even less to do with a firewall than the tool alone.</li>
<li>It is insecure: everything besides the PPTP connection worked fine when using WLAN to connect to the network - even the ping from the server to my machine.</li>
</ul>
<p>Great product indeed.</p>
Gentoo on a xSeries 235 Server2003-09-18T00:00:00+00:00http://pilif.github.com/2003/09/gentoo-on-a-xseries-235-server<p>Yesterday, one of the hard disks (or was it the SCSI controller - it does not matter…) of our very old, self-assembled development/file server went down. As we had backed up the important data and I had a spare PC running Linux (the multimedia machine I wrote about <a href="http://www.gnegg.ch/archives/6-Fun-with-Linux-and-new-Hardware.html">here</a>), getting a working environment up was a matter of about two hours (one of which I used up trying to get the old server to boot again).</p>
<p>Anyway: we decided it was time to move away from self-assembled machines to something more professional (and hopefully more reliable), so we ordered an IBM xSeries 235 machine (we really like those machines - great support, long warranty, and rock solid), which arrived today.</p>
<p>I decided to install <a href="http://www.gentoo.org">Gentoo Linux</a> on the machine, as it will mostly be used as my development server (and as a Windows file server for our data), so occasional downtime does not really matter (but current versions of the installed software do) - a nice testbed for this distribution until I roll out production machines running Gentoo.</p>
<p>Besides the hardware RAID5 the new server had built in, we plugged in an old 120GB IDE drive to be used as storage for not-so-important files (read: music, temporary files, …) - it also contained all the current development work, so I had to copy its contents down to the new virtual RAID5 drive.</p>
<p>Installing was quite easy, but unfortunately the current <tt>gentoo-sources</tt> kernel (2.4.20 - heavily patched) does not support DMA mode for IDE devices on the onboard chipset (ServerWorks something), so copying about 30 GB of data from the IDE drive to the RAID was no fun, and neither was doing anything on the server while transfers to the IDE drive were running. It was slooow!</p>
<p>Installing a current 2.4.22 <tt>vanilla-sources</tt> kernel solved the DMA problem but raised another: the xSeries 235 uses a Broadcom bcm5700 Gigabit Ethernet chipset which is not supported by a vanilla kernel. Of course, I forgot to patch the driver in before I rebooted into the newly built kernel, which forced me to go down to the basement, compile the driver, and come back up here to write this text.</p>
<p>Anyway: the server is now working like a charm. I really look forward to putting it to real use and taking advantage of the greatly increased speed (PII 500 MHz -> Xeon 2.6 GHz, and more than twice as much RAM as before).</p>
Another Mail client2003-08-26T00:00:00+00:00http://pilif.github.com/2003/08/another-mail-client<p>It’s just not over yet. As a fellow reader of my blog, you may have noticed that I am looking for the <a href="http://www.gnegg.ch/archives/000035.html">best email client</a> (see there for my requirements). Becky!, which I reviewed back in March, really is great, but the threading function does not work very well and neither does GPG or PGP, which I made a requirement in our company.</p>
<p>I have long known of the program <a href="http://www.ritlabs.com">The Bat!</a>, which was no alternative so far, as it fulfilled all my requirements except IMAP support (apart from that, it’s a <em>really</em> great program)…</p>
<p>Now they released a <a href="ftp://ftp.ritlabs.com/pub/the_bat/beta/">beta version</a> of The Bat 2.0 which has full IMAP-Support. I am currently looking into that and I’m going to post a full review soon.</p>
What I dislike about Java2003-08-21T00:00:00+00:00http://pilif.github.com/2003/08/what-i-dislike-about-java<p>I really tried to get into programming something in Java. The possibilities with JSP/Servlets alone seem very interesting compared to what you get with PHP. Another draw is the many excellent IDEs out there (even free ones like Eclipse) that only support Java (I know PHP plugins for Eclipse exist, but they are not really usable - a thing I’ll write about in the future).</p>
<p>But I never really took off, and until today I could never quite describe what was holding me back.</p>
<p>Today I found <a href="http://www.ferg.org/projects/python_java_side-by-side.html">this article</a>, which explains it by using examples to compare Python to Java. Although it’s about Python (a language I don’t really like), most of the points in there apply to other scripting languages (PHP, Perl, Ruby, …) too.</p>
<p>Really good read and finally the explanation I was looking for.</p>
Just another debian install2003-08-16T00:00:00+00:00http://pilif.github.com/2003/08/just-another-debian-install<p>Today I was going to install <a href="http://www.debian.org">Debian Linux</a> on another of those <a href="http://www-132.ibm.com/webapp/wcs/stores/servlet/CategoryDisplay?catalogId=-840&storeId=1&categoryId=2559454&langId=-1&dualCurrId=73">IBM xSeries 345</a> servers.</p>
<p>I really like these machines, as they are quite powerful and take up only two units in your rack. And they are rack-mountable without screws, which makes the whole process quite a pleasure.</p>
<p>The problem when installing those machines is that Debian 3.0 does not support the built-in ServeRAID controller. There is an extended boot floppy at <a href="http://people.debian.org/~blade/install/preload/">http://people.debian.org/~blade/install/preload/</a>, but unfortunately, today people.debian.org is down.</p>
<p>My solution was (on another Debian machine):</p>
<ol>
<li><tt>apt-get install kernel-headers-2.4.18-bf2.4</tt>,</li>
<li>download the vanilla 2.4.18 kernel sources,</li>
<li>copy <tt>/usr/src/kernel-headers-2.4.18-bf2.4/.config</tt> into the directory where I unpacked the vanilla sources,</li>
<li><tt>make oldconfig</tt>, then <tt>make menuconfig</tt> and select support for IBM ServeRAID in the configuration tool,</li>
<li>finally, <tt>make modules</tt>.</li>
</ol>
<p>I then copied the compiled <tt>ips.o</tt> to a blank floppy, into a directory called <tt>/boot</tt>. Later I could use this disk in the Debian installation process (booted from CD-ROM with bf24 at the boot prompt) at the point where it offers to “Load essential modules from disk”.</p>
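<p>For reference, a sketch of the kernel configuration bit involved - the symbol name is how I remember it from the 2.4 tree, so verify it in <tt>make menuconfig</tt> before relying on it:</p>
<pre>
# excerpt from the .config used for "make modules"
# SCSI low-level drivers: IBM ServeRAID support (builds ips.o)
CONFIG_SCSI_IPS=m
</pre>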
<p>I did about the same thing for the e1000 driver, which the built-in Ethernet chipset requires:</p>
<ul>
<li>Download it <a href="ftp://aiedownload.intel.com/df-support/2897/eng/e1000-5.1.13.tar.gz">here</a> and uncompress it.</li>
<li>Hack src/Makefile to use the kernel sources from above.</li>
<li><tt>make</tt></li>
<li>Ignore the warning that a module not matching the current kernel will be built (because that's exactly what I want).</li>
<li>Copy <tt>e1000.o</tt> to the disk.</li>
</ul>
<p>Now it installs flawlessly and I'm quite happy...</p>
IBM Thinkpad T402003-08-16T00:00:00+00:00http://pilif.github.com/2003/08/ibm-thinkpad-t40<p>I got my hands on one of the new T40 model Thinkpads from IBM and thought I should post a little review here.</p>
<p>I was working on a T23 for a very long time, so I am quite used to that machine. This review will focus on the changes between the models, but should still provide a good overview for users without knowledge of the T23.</p>
<p><b>Outside</b>
The new Thinkpad comes with a new, flatter but bigger TrackPoint which I don’t really like, but this may be a matter of getting used to it (if not, IBM provides you with the old cap, but I’m really trying to use the new one, as it is made from a plastic that does not get so dirty in everyday use). Additionally, the T40 has a standard trackpad for those users who do not like the TrackPoint (I am definitely not one of them; although nothing feels as good as a trackball, the TrackPoint is a really cool thing, and I was a bit disappointed to see IBM putting an additional trackpad there).</p>
<p>As usual, the keyboard is just great. This time it’s even a bit better, but I am not sure whether this is just because it’s new or because IBM really changed something. What I really, really, really hate about the new keyboard are those Back and Forward keys above the left and right cursor keys. I used the empty space around “up” to orient myself (down was where there was no free space around). So I am constantly hitting “up” when I mean “down” and - even worse - those senseless keys instead of left or right. I really hope I will get used to this, or I will have to plug in an external keyboard (programming is a quite cursor-intensive task).</p>
<p>The volume and the “Thinkpad” (now renamed to “Access IBM”) buttons got smaller and have more spacing between them, making it easier to hit them in the dark.</p>
<p>The status LEDs moved from above the keyboard to below the display and are much more visible now. Good thing. Additionally, the Scroll Lock indicator was removed. In Windows it does not make sense anyway (while a missing indicator may be very disturbing in Linux, as Scroll Lock effectively locks the console when it’s on). They added another LED indicating that the notebook is powered. Not so important.</p>
<p>When the display is closed, only three LEDs are visible: battery, sleep and Bluetooth (why Bluetooth and not WLAN, or both?).</p>
<p>The whole device got a little flatter than the previous model. <i>Extremely flat</i> would be a good term to describe the T40. This unfortunately breaks compatibility with older UltraBay extensions (batteries and drives), as the new bay is slightly flatter.</p>
<p>The T40 is the first Thinkpad where the ThinkLight (a small white LED at the top of the display that illuminates the keyboard when working in dark places) is really useful. It got <i>bright</i>.</p>
<p>Where the connectors were on the back of my T23, they moved to the side of the device on the T40 - only the parallel port and the AC plug remain at the back (along with the big extended battery providing the computer with enough power for about 5 to 6 hours). The COM port went away. I liked it for the development of our barcode solution, but the scanner we use has a USB cable, and USB-to-serial adaptors do exist.</p>
<p>The PCMCIA slots moved to the front - the audio plugs (now color-coded) to the back. I don’t really like this, as I plug in audio equipment much more often than PCMCIA cards.</p>
<p>Speaking of audio: it’s really not that bad for a notebook, but not noticeably better than in my T23.</p>
<p><b>Running the thing</b></p>
<p>When you turn it on, the first thing you notice is that it is <i>calm</i>. But Thinkpads have a tendency to grow louder as you use them, so the T40 will probably get louder in a few months too…</p>
<p>The BIOS cannot be accessed directly any more. Instead, something called the “Predesktop Area” can be reached by pressing the “Access IBM” button during the boot sequence. It can be controlled with the mouse and provides access to the recovery system, to the BIOS (a standard text-mode one), to a partition imager (without support for external storage) and to quite extensive support material. The whole thing eats up quite a lot of hard drive space in a hidden partition (which I have not removed so far; having read the Service Manual and all those scary error messages about an inaccessible service partition, I doubt I’d like the outcome. Maybe some time later ;-)</p>
<p>The first thing I did was reinstall a clean retail Windows XP Professional - I do not like all those customizations the vendors make to the OS these days. This went flawlessly, besides the fact that the setup routine did not recognize most of the hardware: no network, standard VGA, no power management, no nothing.</p>
<p>The IBM Support Area on their website provided me with all the drivers I needed (and <em>only</em> those I needed - not a lot of useless tools).</p>
<p>Bluetooth support must be turned on by pressing Fn-F5, which is documented nowhere. It’s an (integrated) USB device, by the way: when turning on Bluetooth for the first time (<em>after</em> installing the driver, of course), Windows reports that it has found a new USB device.</p>
<p>The Bluetooth software is provided by Widcomm (as nearly always) and comes in the really current version 1.3-something.</p>
<p>The driver for the trackpad is a really great piece of software, as it allows quite a lot of tuning to your habits. I really like the scrolling feature, where moving your finger along the right border of the pad triggers a scroll event in the window the mouse cursor is currently over. Nice.</p>
<p>The WLAN support of the T40 is the first I have come across that worked without any tweaking in more than one wireless network. Cool. Maybe WLAN is finally ready for end users?</p>
<p>The experience with the notebook is a very pleasing one: it’s fast, stable and looks great.</p>
<p>If you can spare the money (IBM notebooks certainly are more expensive than others but they work better and have three years of warranty), you should go and buy yourself a T40 - it’s a great piece of hardware.</p>
iPod 1.3 for windows2003-07-30T00:00:00+00:00http://pilif.github.com/2003/07/ipod-13-for-windows<p>Yesterday, Apple released a <a href="http://www.apple.com/ipod/download/ipodsoftwareupdate13.html">Windows installer</a> for the 1.3 firmware. This really isn’t big news, as there are already several ways to get the 1.3 Mac firmware onto a Windows iPod:</p>
<ul>
<li>Using a Mac (requires reformatting twice)</li>
<li>Using <a href="http://www.mediafour.com">XPlay</a></li>
<li>Using PodTronics' <a href="http://www.podtronics.com/ipodupdater.htm">Updater</a></li>
</ul>
<p>And about the hassle with the 2.0 firmware not being supported on old devices: I am quite sure the new firmware can be installed on old iPods using the last two methods above. Unfortunately, I don't have the old iPod any more (my father is having much fun with it), so I cannot try this out [and you should not try it either - at least not with the slightest idea that I am going to be responsible for what you are doing - I may very well be mistaken].</p>
by the way2003-07-30T00:00:00+00:00http://pilif.github.com/2003/07/by-the-way<p>I’ve fixed the search engine and the comments feature yesterday. <tt>apt-get upgrade</tt> can be disastrous when you have manually installed Perl modules and Perl is automatically updated from 5.6.1 to 5.8.0. I had to comment out the mod_perl stuff in httpd.conf just to get the server up again. And then, in the rush of fixing everything else, I completely forgot to re-enable the mod_perl directives for this weblog. Sorry.</p>
and again...2003-07-30T00:00:00+00:00http://pilif.github.com/2003/07/and-again<p>I wonder if someone has already noticed the all-new design of <a href="http://www.mozilla.org/">mozilla.org</a>. I really like it.</p>
Progress is relative2003-07-29T00:00:00+00:00http://pilif.github.com/2003/07/progress-is-relative<p><a href="http://www.gnegg.ch/archives/progress.html" onclick="window.open('http://www.gnegg.ch/archives/progress.html','popup','width=499,height=307,scrollbars=no,resizable=no,toolbar=no,directories=no,location=no,menubar=no,status=no,left=0,top=0'); return false"><img src="http://www.gnegg.ch/archives/progress-thumb.png" width="200" height="123" border="0" align="right" /></a>
I was installing Microsoft Encarta today when I noticed an interesting remark during the installation. I took the liberty of making a screenshot and highlighting the phrase for you…</p>
<p>The good thing about this: It’s honest!</p>
<p>I’m really asking myself why they created those progress bars in the first place. They never work.</p>
Firebird and Thunderbird2003-07-29T00:00:00+00:00http://pilif.github.com/2003/07/firebird-and-thunderbird<p>I should have posted it yesterday: Both <a href="http://www.mozilla.org/projects/firebird">Firebird</a> and <a href="http://www.mozilla.org/projects/thunderbird">Thunderbird</a> got a little Mozilla 1.5a caused refreshment. Firebird does not crash during autocomplete anymore and thunderbird still works as it used to when I was using the nightly builds. The good thing about Thunderbird 0.1: I got <a href="http://enigmail.mozdev.org/thunderbird.html">Enigmail</a> to work with it (my Public-Key can be found <a href="http://www.pilif.ch/phofstetter.asc">here</a>).</p>
How to get a Lamp2003-07-09T00:00:00+00:00http://pilif.github.com/2003/07/how-to-get-a-lamp<p>Last sunday, the lamp of my IBM iL2220 video projector (no link as it is neither available nor would I recommend it any more) exploded. This was especially stupid as I just bought <a href="http://www.nintendo.com/games/gamepage/gamepage_main.jsp?gameId=1272">Wario World</a> and <a href="http://www.metroid.com/prime/">Metroid Prime</a> (which I <em>had</em> to have after finishing Metroid Fusion on my GBA and getting to know this wonderful series) and I really wanted to play them.</p>
<p>On Monday, I called IBM’s support line and asked for the part number of the replacement lamp, to be able to buy it at the IBM distributor our company has an account with. The supporter told me he would need the serial and part number of my projector, which I did not know.</p>
<p>Today I finally wrote down those numbers before I went to the office and gave IBM another call.</p>
<p>This time the supporter told me (without needing either the part number or the serial number) to call another number, which I did thereafter.</p>
<p>This time I landed in one hell of a call-center menu, requiring me to press buttons, give my name and finally my phone number for an automated callback. When I finally had a human on the other end of the line (of course I had to make the call with my cellphone - our PBX does not support DTMF sequences), he laughed at me: he was from software support, and should he perhaps “flash” my defunct lamp? Funny, but not after having waited 30 minutes for it ;-)</p>
<p>Anyway: I got another number where I called later on.</p>
<p>This time the supporter knew what I was talking about (after I had explained about three times that I <b>knew</b> the warranty had expired and that I just wanted the part number to place an <b>order</b> for another lamp). She told me that she was not allowed to give out any part numbers, but that she would try to help me anyway.</p>
<p>20 minutes of stupid music.</p>
<p>“hmmh… please hold the line. This is complicated”</p>
<p>Another 10 minutes.</p>
<p>Finally she told me she would connect me to someone else who knew what to do.</p>
<p>2 minutes.</p>
<p>Now I had yet another supporter on the phone. I told my story again, and she finally gave me that part number (33L3426) which the previous supporter was not allowed to give me.</p>
<p>In the webshop of our IBM retailer, I learned that the lamp would cost ~CHF 700.- and that I would have to wait at least 20 days for the new lamp to arrive. Not good as I really want to play Metroid Prime.</p>
<p>Using Google I learned that the IBM iL2220 is nothing more than an InFocus LP350 with an IBM label on top of it. Something worth a try.</p>
<p>The supporter at InFocus gave me the number of one of their retailers. I called them and learned that they have a lamp in stock and that it would cost at most CHF 485.-, more than CHF 200 less than the IBM lamp. Needless to say, I placed my order. The lamp will arrive on Friday - about 10 times sooner than the IBM lamp would have.</p>
<p>So much for the great IBM support. So much for buying an IBM product to have a good supply of replacement parts.</p>
SOAP needs soap2003-06-06T00:00:00+00:00http://pilif.github.com/2003/06/soap-needs-soap<p>For our Web-Portal <a href="http://www.adsl.ch">superspeed</a>, I am working on a webservice to give some clients access to our provider/offer database.</p>
<p>As the whole portal is written in PHP, I decided to write the webservice (fully fledged, using the SOAP protocol) in PHP too. After some searching, I found <a href="http://dietrich.ganx4.com/nusoap/index.php">NuSOAP</a> and the SOAP package in <a href="http://pear.php.net/package-info.php?pacid=87">PEAR</a>.</p>
<p>Both packages have virtually no documentation, but the PEAR package ships some nicely documented samples (disco_server.php and example_server.php, to name the two most interesting ones).</p>
<p>While NuSOAP is very easy to handle, it has no way to autogenerate WSDL output, which would have forced me to learn to write WSDL. Unfortunately I did not have time for this, so I went with the PEAR package, which can create the WSDL for you.</p>
<p>The first tests using PHP as the SOAP client worked very well.</p>
<p>Tip: to raise "debuggability" to an actually useful level, use something like this for debugging your server:</p>
<pre>
// include the actual server class
require_once 'modules/ss3_Provider/xml_access.php';

if ($_SERVER['argv'][1] != 'direct') {
    // talk to our class through the SOAP interface
    include("SOAP/Client.php");
    $wsdl   = new SOAP_WSDL("http://your.server.com/server.php?wsdl");
    $object = $wsdl->getProxy();
} else {
    // use the class directly
    $object = new CProvServiceInfo_Class();
}
// do something with $object
</pre>
<p>If the script is called with the "direct" parameter, the class is used directly, giving you back all the debug information you need without an XML parser trying (and failing) to understand it.</p>
<p>As the customer for this service is going to use ASP.NET to access the webservice, the next step was to try accessing the service via Visual Studio .NET. This was not fun (pasting the complete error here in the hope that Google will catch it and help future users with my problem):</p>
<p><tt>Custom tool warning: At least one import Binding for the ServiceDescription is an unsupported type and has been ignored.</tt></p>
<p>The hairy thing: I have no experience at all with VS.NET, so I first thought this was a minor problem and I was just too stupid to actually use the imported class. But sooner or later (after trying to import the Google webservice), I came to the conclusion that this warning is actually a grave error: nothing got imported. Nothing at all.</p>
<p>Searching Google did not yield any results.</p>
<p>The next step for me was to learn WSDL after all (which I did not want to do in the first place ;-). Unfortunately, the PHP-generated WSDL file seemed quite OK (besides the missing &lt;documentation&gt; tags).</p>
<p>I could not get VS to report a more detailed/useful error message.</p>
<p>Just when I wanted to give up, I thought of this tool, <tt>wsdl.exe</tt>, that gets installed with the .NET Framework SDK. Running <tt>wsdl &lt;filename.wsdl&gt;</tt> gave me the same error message, but with a note to look into the generated <tt>.cs</tt> file.</p>
<p>This finally produced a usable error message:</p>
<p><tt>// CODEGEN: The binding 'SuperspeedProvidersBinding' from namespace 'urn:SuperspeedProviders' was ignored. There is no SoapTransportImporter that understands the transport 'http://schemas.xmlsoap.org/soap/http/'.</tt></p>
<p>A quick comparison of the &lt;soap:binding&gt; tags showed:</p>
<p>Google's version: <tt>http://schemas.xmlsoap.org/soap/http</tt><br />
PHP's version: <tt>http://schemas.xmlsoap.org/soap/http/</tt></p>
<p>Note the slash at the end.</p>
<p>I hate problems with simple solutions that are so awfully difficult to find because of unusable error messages!</p>
<p>Just for reference: the following patch fixes the wrong transport URL in PEAR::SOAP (0.7.3 - I will report this to the author, so maybe it's fixed in later versions):</p>
<pre>
--- Base.php Thu Jun 5 13:16:03 2003
+++ - Fri Jun 6 22:51:08 2003
@@ -91,7 +91,7 @@
 define('SCHEMA_DISCO_SCL', 'http://schemas.xmlsoap.org/disco/scl/');
 define('SCHEMA_SOAP', 'http://schemas.xmlsoap.org/wsdl/soap/');
-define('SCHEMA_SOAP_HTTP', 'http://schemas.xmlsoap.org/soap/http/');
+define('SCHEMA_SOAP_HTTP', 'http://schemas.xmlsoap.org/soap/http');
 define('SCHEMA_WSDL_HTTP', 'http://schemas.xmlsoap.org/wsdl/http/');
 define('SCHEMA_MIME', 'http://schemas.xmlsoap.org/wsdl/mime/');
 define('SCHEMA_WSDL', 'http://schemas.xmlsoap.org/wsdl/');
</pre>
<p>As you can see, there are more URLs with a slash at the end - possibly more candidates? We'll see. At least I now know how to debug such problems...</p>
iSync 1.1 but I will not need it2003-06-03T00:00:00+00:00http://pilif.github.com/2003/06/isync-11-but-i-will-not-need-it<p>Apple has finally released iSync 1.1 with <a href="http://www.apple.com/isync/devices.html">P800</a> support, although it remains to be seen whether this support is just for iSync or also for the address book which, in my opinion, is the killer feature of Apple’s Bluetooth initiative.</p>
<p>I will definitely try that out some time in the future, but not now: I was weak and could not resist buying myself a Sony Ericsson <a href="http://www.sonyericsson.com/t610">T610</a>, which is - besides the known problem with heavy noise while making calls - the best cellphone I’ve seen so far:</p>
<ul>
<li>It's very small. It's very comfortable finally not having to remove the phone from my pocket when I sit down.</li>
<li>The UI looks great. OK, that should not be important, but it's a point anyway.</li>
<li>It has a <em>real</em> AT interface which closely resembles that of the T68. This makes tools like <a href="http://sourceforge.net/projects/fma">MobileAgent</a> (an excellent freeware for Windows) possible.</li>
<li>It has a T9 dictionary: although I thought handwriting recognition would be fast, T9 is much faster for text entry.</li>
<li>It has a really good keypad: like the T68, the T610 has a really great keypad - the best I've seen so far.</li>
<li>It has no blinking LEDs - uncommon for Ericsson phones; maybe a tribute to Nokia?</li>
</ul>
<p>The only drawbacks are the limited PIM functionality and the much smaller (and less sophisticated) software selection, but I can more than live with those problems. I just hope they will fix the noise problem - and I hope they will do the repair for free.</p>
OSX and OpenLDAP2003-05-05T00:00:00+00:00http://pilif.github.com/2003/05/osx-and-openldap<p>Finally. It works. I got Richard’s OS X box to authenticate against the OpenLDAP server I set up yesterday (actually, it authenticates against the replica, but that does not make any difference). Here’s what I did:</p>
<ol>
<li>As I have the <tt>homeDirectory</tt> attribute in the form <tt>/home/username</tt>, and Mac OS X keeps users in <tt>/Users/username</tt>, I had two ways to fix this: a) add another attribute to the LDAP server called osxHomeDirectory or something like that - no option, as I don't have an enterprise number yet and so could not legally create an OID for such an attribute; b) symlink /home to /Users. That's what I did.</li>
<li>Then I started the "Directory Access" utility in the <tt>Applications/Utilities</tt> folder.</li>
<li>I removed the checkmark on LDAPv2, selected LDAPv3 and clicked "Configure".</li>
<li>The next step was to remove the checkmark "Use DHCP-supplied LDAP Server", as my DHCP server does not supply an LDAP server (and I don't even know which option code that would be on the DHCP server).</li>
<li>Then I clicked the "more" arrow to display the advanced settings, where I entered the hostname of the internal (replica) LDAP server. Under LDAP Mappings I selected "Custom"; the SSL checkbox stayed unchecked after my unsuccessful attempts yesterday to get OpenLDAP to use my self-signed certificate. I'll get back to this before I go productive with the setup.</li>
<li>In the dialog that popped up, I had to make some adjustments (in these explanations, I assume your accounts have the objectClasses <tt>inetOrgPerson</tt>, <tt>posixAccount</tt> and <tt>shadowAccount</tt>):
<ol>
<li>Under "Users", set RecordName to "uid".</li>
<li>I had to add a record called "Group" to Users and assign "primaryUID" to it, or the user's group was not recognized (see the previous entry in this blog).</li>
<li>Under "Group", add the RecordName attribute and assign cn to it, or the group was not recognized later on.</li>
</ol>
</li>
<li>Close the dialog by hitting "OK", then close the next dialog with "OK" too.</li>
<li>Select the "Authentication" tab and choose a "Custom" search path. Add your newly added LDAP server.</li>
<li>Do the same on the "Contacts" tab - although I have not yet figured out how to get that to work.</li>
<li>Hit "Apply".</li>
<li>Reboot.</li>
</ol>
<p>The last step is very annoying: I had to experiment quite a bit with the mapping settings to finally get my LDAP groups recognized and the right primary group assigned to LDAP users (it was always 0/wheel, which is not what I wanted - not at all). There is no way to make the OS pick up changes you make in the Directory Access utility other than rebooting the machine. I'm happy OS X boots that fast; if it had been Windows, I'd still be waiting for the reboots to complete ;-)</p>
<p><b>What have I accomplished?</b></p>
<ul>
<li>I can log in with the LDAP accounts by selecting "Other" in the login screen and then entering username and password.</li>
<li>I can <tt>su</tt> to any LDAP account.</li>
</ul>
<p><b>What still does not work:</b></p>
<ul>
<li><tt>passwd</tt></li>
<li>Although I can set a new password in the System Preferences, the change does not get written back to the LDAP server.</li>
</ul>
<p>Regarding the password-changing problems, I will have a look at PAM. Until then, I'm quite happy I finally got it to work. I really hope someone will find this useful...</p>
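<p>To make the mapping discussion above concrete, here is a minimal LDIF sketch of the kind of account entry those mappings assume - all names and numbers are made-up examples, not my actual directory:</p>
<pre>
dn: uid=testuser,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: testuser
cn: Test User
sn: User
uidNumber: 1001
gidNumber: 100
homeDirectory: /home/testuser
loginShell: /bin/bash
userPassword: {SSHA}xxxxxxxx
</pre>
<p>The RecordName mapping described above corresponds to the <tt>uid</tt> attribute here, and the group resolution presumably keys off <tt>gidNumber</tt> and the group entry's <tt>cn</tt>.</p>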
LDAP again...2003-05-05T00:00:00+00:00http://pilif.github.com/2003/05/ldap-again<p>I know… it’s getting boring…</p>
<p>I just wanted to say that I’ve successfully fixed two problems:</p>
<ol>
<li>I had a problem where <tt>passwd</tt> immediately failed on another server I had just LDAPed:
<pre>pilif@sen1 ~ % passwd
LDAP Password incorrect
passwd: User not known to the underlying authentication module
pilif@sen1 ~ %</pre>
The problem was a <tt>use_first_pass</tt> I had on the pam_ldap line of <tt>/etc/pam.d/passwd</tt>. When changing the password, it checked authentication with an empty password (first_pass was empty - I never entered one), which failed. If somebody could please tell me the log level to set in slapd.conf to actually get useful logging information describing the problem: step forward!
<li>You have to set <tt>rootbinddn</tt> in your (pam|nss)_ldap configuration file. This will enable <tt>root</tt> to change a user's password without having to know it first.
</ol>
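For reference, the two fixes boil down to something like this. This is only a sketch: the file paths, the module ordering and the admin DN are examples and may well differ on your distribution.

```
# /etc/pam.d/passwd -- no use_first_pass on the pam_ldap line,
# or the "old" password is checked as an empty string and fails
password   sufficient   pam_ldap.so
password   required     pam_unix.so obscure md5

# /etc/pam_ldap.conf -- rootbinddn lets root set a user's password
# without knowing the old one; the bind password is read from
# /etc/ldap.secret (which should be mode 600, owned by root)
rootbinddn cn=admin,dc=example,dc=com

# slapd.conf -- loglevel 256 logs connections/operations/results,
# usually enough to see why a bind or password change fails
loglevel 256
```

The important part is that pam_ldap must actually prompt for the old password instead of reusing a (possibly empty) earlier one.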
Oh.. both updatedn and updateref were not correctly set in the replica's slapd.conf. I've fixed this too.
</li></li></ol>
It's coming along...2003-05-05T00:00:00+00:00http://pilif.github.com/2003/05/its-coming-along<p>I’ve just authenticated my first test-user on Richard’s Mac OS X (10.2.5) box via LDAP. It worked nicely - apart from the fact that the GID was not assigned correctly. I will have a look into this before I post a little tutorial here.</p>
<p>Stay tuned…</p>
Fun with OpenLDAP2003-05-04T00:00:00+00:00http://pilif.github.com/2003/05/fun-with-openldap<p>I bought “<a href="http://www.oreilly.com/catalog/ldapsa/">LDAP System Administration</a>” because I was interested in LDAP for a long time and I never really understood what one could do with it.</p>
<p>While reading the book was great (it lacks some details here and there, but it’s really nice to read and has very understandable explanations), putting it to work wasn’t:</p>
<p>What I want to accomplish is a central user database for our three-person company: two Windows PCs, one Linux router, a Mac OS X workstation, three Linux servers, my home PC - I want to be able to log into all of them with the one password I have in the LDAP server. That’s what LDAP is for anyway.</p>
<p>Setting up the server was done in no time, although it required some sweat: I first installed the <a href="http://www.openldap.org">OpenLDAP</a> server from Debian stable but then decided to upgrade to the current release (Debian is lagging as ever) by using the server from the unstable distribution. I got it to install eventually, after purging the former installation, which had caused the update script of the new installation to quit because ldap-utils was not installed (side note: if a package’s installation script requires tools from another package, why isn’t this dependency marked in the package?).</p>
<p>Soon I had created my test account, installed nss_ldap and pam_ldap, and it seemed to work.</p>
<p>Actually, it crashed my SSH daemon as soon as I tried to log on to the machine, I could not change the password of LDAP accounts, su did not work and login was not possible either - despite the fact that I was following the clear instructions in the LDAP book.</p>
<p>The SSH problem got solved by updating to the latest version (uncommenting the LDAP support for groups in <tt>/etc/nsswitch.conf</tt> did help with the older version, but this was no alternative). <tt>su</tt>ing eventually began to work without me really changing anything; changing the password required me to edit <tt>/etc/pam.d/passwd</tt> despite the fact that the in-file documentation of that file states that this is not necessary. Just like the <tt>su</tt> problem, the one with login went away eventually.</p>
<p><tt>/bin/passwd</tt> still requires me to enter the user’s old password when called as root. Stupid, but it can be circumvented by using an LDAP admin tool. <tt>chsh</tt> authenticates via PAM and gets the current entries correctly but tries to save back to <tt>/etc/passwd</tt>. As stupid as the thing with <tt>passwd</tt>.</p>
<p>So the adventure is not even half completed, a day is used up, and I had to fight problems which are not even supposed to exist…</p>
<p>Is what I am trying to do really that sophisticated that it simply does not work? Or am I just plain stupid?</p>
<p>I’ll keep you updated…</p>
And on to replication2003-05-04T00:00:00+00:00http://pilif.github.com/2003/05/and-on-to-replication<p>The show must go on. As our ADSL connection from the office to the internet is not that reliable, I decided to use <a href="http://www.openldap.org">OpenLDAP</a>’s <tt>slurpd</tt> to replicate the LDAP tree to an internal LDAP server. The setup is quite well described in my LDAP book and it worked the first time I tried it.</p>
<p>At least it sort of worked…</p>
<p>Although changed attributes appeared on the replica, a newly created user was not synchronized. There was no reject on the master either - the data just vanished (side note: why is there a replication reject log if data can vanish anyway? This is not reliable behaviour at all).</p>
<p>Reading the syslog finally gave me the idea: the permissions of the replica’s data directory were not set correctly: some of the files (and the directory itself) belonged to <tt>root.root</tt> while <tt>slapd</tt> was running as <tt>slapd.slapd</tt>.</p>
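For the record, here is roughly what the relevant slapd.conf directives look like on both sides of a slurpd setup. A sketch only - hostnames, DNs and credentials are made up, and paths depend on your distribution:

```
# master slapd.conf
replogfile /var/lib/ldap/replog
replica host=replica.example.com:389
        binddn="cn=replicator,dc=example,dc=com"
        bindmethod=simple credentials=secret

# replica slapd.conf -- updatedn must match the binddn above,
# and updateref refers write attempts back to the master
updatedn "cn=replicator,dc=example,dc=com"
updateref ldap://master.example.com

# and the permission fix from above, so slapd can write its files:
#   chown -R slapd.slapd /var/lib/ldap
```

If the replica's data directory is not writable by the slapd user, updates can disappear without ever showing up in the reject log.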
<p>Now it’s working like a charm and I am looking forward to trying to authenticate Richard’s Mac against the internal LDAP server.</p>
<p>When this works, I’m going to finally convert the Samba installation to a PDC and set up something to synchronize the Windows password with the Unix one (both in LDAP - of course).</p>
<p>I’ll keep you updated on my progress…</p>
Philips Streamium2003-04-30T00:00:00+00:00http://pilif.github.com/2003/04/philips-streamium<p>I got my hands on a <a href="http://www.streamium.com/">Philips Streamium</a>. Not because I wanted one, but because I’m going to write a review for our <a href="http://www.superspeed.ch">broadband portal</a>. I really wondered whether it was possible to use the device without the stupid musicmatch jukebox, so I went behind the scenes using a network sniffer.</p>
<p>I will post a deeper review of what I’ve found (it’s just plain old XML over HTTP) later today, because now I have to do some real work. Till then, you can have a look at the exchange between my musicmatch and the Streamium <a href="/files/smlog.txt">here</a> (and before you ask: I really have bought all the CDs from which I ripped the MP3s you will see in the log. I rarely ever download music from P2P networks).</p>
Two more bugs... gone!2003-04-29T00:00:00+00:00http://pilif.github.com/2003/04/two-more-bugs-gone<p>No. This is not about the new iPods Apple announced today (of course I’ve ordered myself a 30GB one, but that really is another story).</p>
<p>I’m just very pleased that two bugs in <a href="http://www.jedit.org">jEdit</a>’s current CVS version were fixed by Slava the same day I reported them. This is just great!</p>
<p>If you are in need of a good editor, go and get jEdit!</p>
Mario...2003-04-29T00:00:00+00:00http://pilif.github.com/2003/04/mario<p>It just came to my mind: I am through with Super Mario Advance 2 on my GameBoy Advance SP - at least, I’ve finished all 96 goals. Now I only have to get all the Yoshi coins, but when I think of the damned special world, I come to the conclusion that I’ll possibly never manage to get those coins.</p>
That's nice...2003-04-28T00:00:00+00:00http://pilif.github.com/2003/04/thats-nice<p>You may know <a href="http://www.codeweavers.com/products/office/">CrossOver Office</a> from <a href="http://www.codeweavers.com">CodeWeavers</a>: It’s a commercial <a href="http://www.winehq.com">Wine</a>-Distribution specifically targeted at supporting MS Office and a couple of other often used Windows applications under Linux.</p>
<p>As you can imagine, the CodeWeavers people implement features for their product independently of the Wine community but feed them back to the open source project once a new release of CrossOver Office is out. This practice makes sense as it allows them to get media coverage by announcing lots of not-there-before features while still working together with the community.</p>
<p>Just now that CrossOver Office 2.0 got released, there was a thread on the Wine mailing list because someone tried to implement tablet support for the open source version only to learn that it already exists in CrossOver Office. The changes got committed to the Wine code, but there was some discussion about why it was not announced to the community, so senseless duplicated work could have been prevented.</p>
<p>I was really happy to see the response of the guy at CodeWeavers. I just wish every company would react to and work with the community in that way…</p>
<p>Read a conclusion of the thread <a href="http://kt.zork.net/wine/latest.html#3">here</a></p>
A name is a name... or not?2003-04-17T00:00:00+00:00http://pilif.github.com/2003/04/a-name-is-a-name-or-not<p>I really saw this mess coming when I read the announcement that Mozilla’s Phoenix will be called Firebird for now: <a href="http://firebird.sf.net">Firebird</a> is a spin-off of the once open-sourced Interbase database server by Borland, existing for three years now and using the name “Firebird” since then.</p>
<p>As you can imagine, the Firebird (DB) people were not too happy about this - Phoenix had to be renamed because of a naming conflict and the new solution still creates one - but this time it’s not a commercial product it’s conflicting with - it’s another open source project.</p>
<p>I can understand both sides:</p>
<p><b>Mozilla</b>
The name Firebird has been checked by Netscape’s/AOL’s legal department (why have they not noticed this? Or did they think it would not matter?) and another name change would again involve the legal department, which would please neither the BIOS vendor Phoenix nor the Mozilla team, as they will not release another milestone called Phoenix.</p>
<p><b>Firebird</b>
Firebird already suffers from not really being known to the public. The RDBMS it spun off from is known mainly to Delphi developers, and neither Interbase nor Firebird were often in the press these days. A better-known product with the same name will further divert attention. And the psychological reason: the name Firebird <a href="http://firebird.sourceforge.net/index.php?op=history&id=opensource">was chosen</a> based on the real political mess around open-sourcing Interbase and is, in my opinion, a very well chosen name.</p>
<p>While I can understand the arguments on both sides, I can neither offer a solution pleasing both projects (besides the question why Phoenix is not simply to be called “Mozilla” - after all, the browser component in the Mozilla Suite is to be replaced by Firebird (the browser) anyway) nor can I understand the way the folks around Firebird (the DB) <a href="http://www.ibphoenix.com/main.nfs?a=ibphoenix&page=ibp_Mozilla0">react to the problem</a> (and <a href="http://www.mozillazine.org/weblogs/dave/archives/2003_04.html#003073">here</a> - an entry in Dave Hyatt’s blog). War is never a solution - never!</p>
Long time no see2003-04-16T00:00:00+00:00http://pilif.github.com/2003/04/long-time-no-see<p>I really should have more discipline concerning this little weblog ;-)</p>
<p>Just some notes for now:</p>
<p><b>P800 and the calendar</b>
About a week ago, after I updated the SonyEricsson P800 sync software to 1.3.1, it stopped synchronizing my calendar entries. There was no error message, but it did not work either - it did not touch the calendar entries at all. Then I performed a full synchronization, overwriting the phone, with the effect of having no entries at all on the phone.</p>
<p>Reinstalling the sync software did not help. What finally had an effect was reinstalling both Office (with Outlook) and the sync software, cleaning the registry of the Office settings (first removing everything and then reinstalling it). This process took about 3 hours (many of them spent figuring out how to fix it <i>without</i> having to reinstall everything). Stupid software.</p>
<p><b>iPod and Linux</b>
I am running <a href="http://www.gentoo.org">Gentoo</a> Linux using kernel 2.4.20-gentoo-r2. Although I had HFS and IEEE1394 support in the kernel, one of the modules (sbp2, I think) oopsed when I plugged in my Mac-formatted iPod and <tt>modprobe</tt>’d ohci1394. Reformatting the iPod with the FAT32 filesystem (use the <a href="http://www.apple.com/ipod/download/">Windows iPod updater</a> from apple.com, but remove MacDrive if it’s installed - else your iPod will not be detected) helped me with this, so I finally have a device to quickly exchange large amounts of data between home and office.</p>
<p><b>Browsers (1)</b>
Apple recently released Beta 2 of <a href="http://www.apple.com/safari">Safari</a>. Looks great - especially the tabbed browsing feature. Too bad I still don’t have my own Mac.</p>
<p><b>Mario</b>
I’m in World 6 of Super Mario Advance 2 and the game really is great. If only the Special World would not be so difficult to master…</p>
<p><b>Browsers (2)</b>
I recently tried out the latest build of <a href="http://www.mozilla.org/projects/phoenix">Phoenix</a> which will <a href="http://www.mozilla.org/roadmap.html">replace</a> the browser part of the Mozilla Suite someday. It really looks great and is a pleasure to use. I am thinking of dropping Mozilla entirely and using Phoenix for the web and Becky! for email.</p>
<p>So. That’s half a month of notable internet experience. I promise to report more often in the future!</p>
Just like SMS - only cheaper2003-03-18T00:00:00+00:00http://pilif.github.com/2003/03/just-like-sms-only-cheaper<p>When surfing around on <a href="http://www.my-symbian.com">my-symbian.com</a>, I came across <a href="http://my-symbian.com/uiq/applications/applications.php?fldAuto=62&faq=2">myBuddies</a>, a free ICQ client for the P800. Most surprisingly it works quite well (I did my first testing with an active link to my desktop PC to avoid senseless GPRS charges).</p>
<p>What may look like a little toy is actually quite useful: GPRS connections are paid for by transmitted data, not connection time, so I can stay connected to the network without much cost and reach most of the people I usually send SMS to via the internet.</p>
<p>So let’s do a little calculation of how many characters I can send via ICQ to be as expensive as an SMS:</p>
<p>Swisscom currently charges CHF 0.20 for an SMS (max. 160 characters, as you know). According to the current <a href="http://www.swisscom-mobile.ch/sp/FDAAAAAA-de.html">price list</a>, you get 10 KBytes of GPRS transfer volume for the same price, which corresponds to 10240 bytes. Subtract a protocol overhead of about 20% and you still get 8192 bytes for the price of an SMS - that’s 51 times cheaper than an SMS!</p>
<p>Drawback: Swisscom charges at least 10KB for every connection, so I will try to stay connected ;-)</p>
<p>Of course, if I switched to <a href="http://mobile.sunrise.ch">Sunrise</a> it would get even cheaper: there is no 10 KB limit and it’s just <a href="http://mobile.sunrise.ch/wap/wap_sun_gprs/tar_ser.htm">CHF 7.50 per MByte</a> (CHF 0.07 per 10 KBytes), so it’s 3 times cheaper than Swisscom [note to myself: I really should switch! NOW!], which means 8192*3 = 24576 bytes for the price of an SMS.</p>
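Putting the numbers above into a few lines of Python (prices as quoted in these paragraphs; this is just the arithmetic, nothing more):

```python
sms_price_chf = 0.20      # Swisscom: CHF 0.20 per SMS
sms_chars = 160           # max. characters per SMS

# Swisscom GPRS: 10 KB of transfer for the price of one SMS
swisscom_bytes = 10 * 1024
usable_bytes = int(swisscom_bytes * 0.8)   # minus ~20% protocol overhead

print(usable_bytes)                  # 8192 bytes per CHF 0.20
print(usable_bytes // sms_chars)     # 51 -> about 51x more text per franc

# Sunrise: CHF 7.50 per MByte, i.e. about CHF 0.07 per 10 KB
sunrise_per_10kb = 7.50 / 1024 * 10
print(round(sunrise_per_10kb, 2))    # 0.07 -> roughly a third of Swisscom
print(usable_bytes * 3)              # 24576 bytes for the price of one SMS
```

So even with the 20% overhead estimate, the per-character price via GPRS is a tiny fraction of the SMS price.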
<p>You get the idea how cool this seemingly senseless ICQ-Application for the P800 can be ;-)</p>
<p>If only someone would release a Jabber client allowing me to connect to my own pet jabber-server… Or maybe it’s time to <em>really</em> begin brushing up my Java-knowledge?</p>
So I'm not alone...2003-03-07T00:00:00+00:00http://pilif.github.com/2003/03/so-im-not-alone<p>As I’m quite interested in Apple’s software, especially Safari, and as I’m a big fan of Dave Hyatt’s writing style, I am a reader of his <a href="http://www.mozillazine.org/weblogs/hyatt/">weblog</a>. I was quite happy to see that he <a href="http://www.mozillazine.org/weblogs/hyatt/archives/2003_03.html#002589">seems to like</a> Xenosaga quite a bit ;-)</p>
Mail for Windows as I like it2003-03-06T00:00:00+00:00http://pilif.github.com/2003/03/mail-for-windows-as-i-like-it<p>I had a problem.</p>
<p>My problem was to still utilize Windows (I have customers requiring me to build Windows programs for them) while having a decent mail program anyway. With decent, I mean that it must at least support the following feature set (in order of decreasing importance):</p>
<ol>
<li>IMAP support. Not just IMAP support, but good IMAP support, with things like storing sent mail on the server, using the server to search for messages (although I doubt the efficiency of this as long as I am using <a href="http://www.inter7.com/courierimap">Courier</a> on the server side. Searching through 10'000+ text files without any index whatsoever is not what *I* call efficient), but also some kind of local caching so that opening folders does not require getting all headers again (which disqualifies <a href="http://www.cyrusoft.com">Mulberry</a> [and <a href="http://www.mutt.org">Mutt</a> on Linux]).
<li>Threading Support. I want to have nicely sorted message threads and I want to see a real tree structure
<li>Correct formatting of messages. I absolutely don't want a thing like Outlook Express that does not allow proper quoting, MIME headers, line breaks and so on...
<li>Multiple identities. I have a corporate email address I use for customers and a more private one I use to subscribe to some mailing lists. At least I need to be able to enter more than one email address per account (Mulberry does this), or even better, tell the program to use different sender addresses depending on the currently opened folder.
<li>Address book synchronization. As a reader of this weblog you may have seen that I am quite a "synchronization guy". I want the addresses from my P800 to be in my mail program. How does not matter to me.
<li>Checking for new mail in subfolders. I am subscribed to a whole lot of mailing lists and I filter them already on the server (using <a href="http://www.exim.org">Exim</a> as MTA, this can be done without spawning many subprocesses for every message). Many mail programs insist on checking just the "INBOX" folder for new messages, despite the fact that Courier provides a new message count for every folder.
<li>Color coding of messages (quotations in different colors).
</ol>
I've been using Mozilla Mail so far and it fails to support items 4, 5 and 6 in the list above. Mulberry, which I tried for a month or two, even failed to support item 1. Calling mutt via ssh on the mail server also worked, but I had problems with 1, 5 and 6.
Now the point of this article is that I've finally found what I was looking for for the last three years: a mail client for Windows supporting the whole list above! The tool is called "Becky!" and comes from a Japanese company called <a href="http://www.rimarts.co.jp">RimArts</a>. Only the import of my addresses required a bit of hacking, but everything else (and even more, like the "Mailinglist Manager", the excellent editor, the possibility to use external editors, ...) is there.
Importing the addresses involved a tool called <a href="http://www.stoer.de/ipod/ipod_en.htm">OutPod</a> which is meant for getting a vCard file out of Outlook to store it on the iPod. Becky! has an import filter for vCard files, so this worked nicely (Mozilla *does* have an import tool for Outlook, but it did not work on my system).
Just go and give Becky! a try. It's great!
</li></li></li></li></li></li></li></ol>
Another reason to buy a mac...2003-03-05T00:00:00+00:00http://pilif.github.com/2003/03/another-reason-to-buy-a-mac<p>… would be <a href="http://www.ambrosiasw.com/games/pop-pop/">pop-pop</a>. This is the greatest adaptation of Breakout I’ve ever seen. Additionally it comes with a greeeeat 2-player mode and is the first game ever with support for two mice. Go and get it!</p>
SonyEricsson did it again...2003-03-04T00:00:00+00:00http://pilif.github.com/2003/03/sonyericsson-did-it-again<p>I’ve read that SonyEricsson today announced another phone, the <a href="http://www.sonyericsson.com/t610/">T610</a>, which should be the successor of the really cool T68i. The site is not very specific about the size of the device, but it seems to be a bit larger than the T68. It has a built-in camera, Bluetooth and all the other things you like on those advanced phones.</p>
<p>It seems like it uses the same OS as the T68 did (no Symbian), has just 2 MBytes of memory and comes in various design flavours. Unfortunately, the <a href="http://www.sonyericsson.com/t610/specifications/">specifications page</a> is not as detailed as I’d like, but I suspect the T610 is optimized for not-so-advanced users that like having a phone and not a difficult-to-use PDA.</p>
<p>I’ve read on <a href="http://slashdot.org/article.pl?sid=03/03/04/1354235&mode=nested&tid=137">Slashdot</a> that the T610 will be iSync-compatible from the beginning, which worries me a bit as my P800 is not. This does not really matter for me as I am more of a PC guy (though I am thinking more and more of buying a Mac just for fun), but it matters for Richard, who I’d really like to see buying a P800 too… Possibly the P800 will never be supported on the Mac as all the other phones are, and there may be no need for Apple to implement another protocol just for one phone. I really hope that’s not the case.</p>
<p>Another note: I really think SonyEricsson is currently doing the right thing: they release quite a lot of devices for quite a broad range of possible users. Combined with the high quality those devices have (Richard recently dropped his T68i into the water and it’s still working…), they really may be able to put Nokia where they belong…</p>
got it...2003-03-03T00:00:00+00:00http://pilif.github.com/2003/03/got-it<p>If only I’d check my mailbox more often…</p>
<p>When I was writing my article about getting Xenosaga today, the game was already lying in my mailbox for some hours. Anyway: when I found it, I started up my beamer and the PlayStation and began playing the game. Be prepared for a first kind-of review (nearly no spoilers - I could not really give any as I’ve only played 8 hours so far):</p>
<p>First of all: I like it.
Second: I don’t like it as much as I liked Xenogears (which is not very surprising given the fact that Xenogears is the best RPG ever created ;-)</p>
<p>As usual I am first providing you with the things I <i>dislike</i>:</p>
<ul>
<li>Story: Please: you can do better than this. The whole thing is much too clear in the first place. Where is the slow unfolding of events I liked so much in Xenogears? And: mysterious plate floating through the universe - androids freaking out - lunatic professors working for the government: that's nothing new at all. I hope there is more to come and I hope it's less obvious.
<li>Loading times: Too long. It's not that there are loading screens all over the place - there are no screens at all. I am currently on this ship of space trash collectors (what have I told you about the story??) and where the loading times when changing rooms in previous games by Squaresoft were not really noticeable, in Xenosaga they are: about 30 seconds waiting before a black screen just for entering the passengers' cabins? That's too long.
<li>Stupidity: This is related to my complaint about the story, and actually I've come across the problem only once: on said spaceship, the sequence of events is as follows:
<ul>
<li>From the kitchen, go all the way down to the cargo bay where KOS-MOS and the commander are.
<li>From there go all the way up to the bridge just to learn that there may be a problem with the catapult which of course is again all the way back down in the ship.
<li>When I've finally reached the damn catapult (that's no spaceship, it's a labyrinth), nothing seems to be wrong, so I am ordered all the way back up to the bridge where the next story sequence awaits me. Note: up to this point, going back and forth involved nothing more than going back and forth - no enemy encounters at all, so no fights, so: boring.
<li>Of course, although nothing seemed wrong, the catapult actually malfunctions during the story sequence (talk about a non-obvious story) and I once again have to go all the way back - but this time *with* enemy encounters.
</ul>
This is boring, stupid and not what I expect from a successor of the best RPG ever.
<li>Movie or game? I really like story sequences. I also like long ones with much content. But those in Xenosaga are too long. Many times during those 8 hours the play counter is displaying, I sort of forgot that I was playing a game instead of watching a movie.
<li>Camera perspective: No, it's not nearly as annoying as in Kingdom Hearts, for example. After all, the camera is fixed. And this is as much good as it is bad: many times I think that I don't see something and I wish to rotate the scenery - but unfortunately that's not possible. However, I think this is about getting used to it. Before Xenogears and FFX, the camera had always been fixed.
</ul>
So. That's it. In all other aspects, the game is just great. Especially I'd like to note the following points:
<ul>
<li>Music. Just Great. Mr. Yasunori Mitsuda did a wonderful job once again. And this time it's even better as the Soundtrack is played by a real orchestra.
<li>Voice acting: We are not quite there yet, but it's waaaay better than FFX or Kingdom Hearts.
<li>Graphics and Animations: Great. I like them very much.
<li>Battle time counter: On the victory screen after a battle there is a timer that shows you how long it took to finish the opponent off. This is nice (I'd never have thought that killing a boss may well take 20 minutes).
</ul>
If you can: go and get the game. I've not yet rated it relative to the other RPGs I've played so far, but it will certainly occupy one of the top positions just because of the atmosphere, the good music, the balanced gameplay and the really good leveling-up system, which is quite sophisticated but understandable anyway (and does not have the same strange side effects as the system of FF8, where leveling up was actually a <b>bad thing</b>).
</li></li></li></li></ul></li></li></li></li></li></li></ul></li></li></li></ul>
Yippieh!2003-03-01T00:00:00+00:00http://pilif.github.com/2003/03/yippieh<p>I have just been on my favourite game importer’s <a href="http://www.alcom.ch">website</a> and I have seen that <a href="http://www.xenosaga.com">Xenosaga</a>, which I pre-ordered on January 13th, was shipped yesterday.</p>
<p>You cannot believe how much I am looking forward to Monday when I will receive the game!</p>
Yippieh! - New Software2003-02-23T00:00:00+00:00http://pilif.github.com/2003/02/yippieh-new-software<p>I’ve just visited the acer-website and downloaded the <a href="ftp://ftp.support.acer-euro.com/wireless/bluetooth/acer/usb-bt500/bt500_1327.exe">driver</a> for my BT500 Bluetooth USB Adaptor. There was no modification date on the website, but a short view on the <a href="ftp://ftp.support.acer-euro.com/wireless/bluetooth/acer/usb-bt500/">FTP-Server</a> revealed that the current release is quite new - from February 19th, actually.</p>
<p>Launching the setup first wanted to remove the current driver (it said that it was already installed in the newest version and asked whether it should uninstall itself - not quite true - the new software definitely is newer…)</p>
<p>The new Acer driver release comes with a lot of new assistants, audio profile support (a complete new feature for free - I can now use my PC as a headset for my P800) and of course, the way Symbian devices connect to the PC is now fully supported and no more error messages are being displayed. Using Bluetooth to synchronize my phone is finally fun.</p>
<p>Too bad the P800 comes with a USB base station which is faster than BT and is now permanently plugged into my PC ;-) But it was fun to get BT working anyway.</p>
The 13 most annoying things of the P800 phone2003-02-23T00:00:00+00:00http://pilif.github.com/2003/02/the-13-most-annoying-things-of-the-p800-phone<p>I had to buy myself the <a href="http://www.sonyericsson.com">SonyEricsson</a> smartphone P800 as I really liked what all the reviews wrote about it. And it’s cool. I really like it - much more than my former Nokia 7650 (don’t tell me that I am buying far too many cellphones. I know that, but I had not found <i>my</i> solution yet - at least not until I bought the P800…)</p>
<p>Anyway. During the first three days of using the phone, I came across the following list of annoying things you should keep in mind when buying the phone:</p>
<ul>
<li>The "Select all" option in the Messaging application is quite well hidden. When I decided to write this article here, I still thought there was no such option at all and I wanted to write a big complaint about having to select each and every SMS in the "Sent" box to empty it. If you are in the position to design a GUI: use menu separators wisely and don't mix toggle options with commands. The "Select all" command is just above the display toggle option in the Edit menu (don't ask me what display options have to do in an "Edit" menu)
<li>There is no support for SMS delivery reports. A pity. I really liked the handling of SMS reports on my Nokia 7650. But then: how many times did I really *read* those reports? Lesson: not every feature is important enough to be implemented...
<li>The handwriting recognition works really nicely - besides one problem: on PocketPCs and of course Palms, there is a separate area on the screen for entering text. The P800 uses the whole screen. This seems nice as it allows you to write quite large letters. But actually it is a big problem: as the recognition area overlays the GUI, the software in the phone has to guess whether the current screen contact was meant for a GUI element (like a button) or as a stroke. This, combined with the fact that a dot (.) is just a line from top left to bottom right, and with the extreme sensitivity of the recognition engine, led me to overwrite many text fields with a dot instead of pressing a button in the GUI - I made a small line instead of just a click. I really want either an extra recognition area or an adjustable sensitivity for the recognition.
<li>Why is the clock application not available with a closed lid? I hate having to open the keypad just to see what time I have the alarm set for.
<li>The T68i and the Nokia 7650 both had a screensaver/standby screen that was useful as it displayed the current time. The P800 does not: when inactive, it first displays a screensaver (an animated GIF) and then turns the display off. Nice for battery lifetime - bad for people who do not wear watches. Workaround: make just one click with the jogdial and the P800 will display the standard screen showing the time (but not the date - see below).
<li>If you have cell info enabled, the cell info string will be shown below the provider name where the current date would otherwise be placed. There is no way to see the current date with cell info enabled besides opening the phone and selecting the clock application. Stupid.
<li>Jogdial: Veeery nice idea. This is great. Too bad it's so hard to push it down into the phone. I always push it forward instead of down. But this may be a problem with my fingers.
<li>Keyboard. I know there is no better (at least none as cheap as the current) way to create a removable keyboard than to have the keys press on virtual keys on the touchscreen... But: the keys are very hard to press down and pressing them feels very "rubber-ish". Buärk. And: I had to recalibrate the display to make those two small "back" and "c" buttons work. I am not sure an average consumer knows about this...
<li>mRouter: Say what you want, but I live by the principle: once broken - always broken. And I have never seen a more broken piece of software than the Nokia PC Suite with its mRouter tool (even Microsoft Word is better). The P800 uses the same thing (I think this is Symbian-related and cannot be changed that easily). Anyway: the Ericsson software looks more stable to me than the Nokia software did. It worked flawlessly with IR and USB on my notebook. I'll see what it does tomorrow on my office PC where I will try to synchronize via Bluetooth and USB.
<li>Browser: Why does it open the startpage when I open the browser? OK... actually this makes sense... but then: Mobile Internet Connections are expensive. I want an option to open the bookmarks-page per default. Not the homepage. After all: To change the homepage, I have to open the browser which will automatically open a connection to the internet and display the homepage set per default by your Mobile Provider.
<li>Browser-Key: I prefer Opera as my Webbrowser. SonyEricsson gives it away <a href="http://www.sonyericsson.com/spg.jsp?page=gsapopup&Redir=template%3DSP1_17_1%26B%3Die%26PID%3D9940%26LM%3DSD_V%26FID%3D2300">for free</a> and it works much better than the internal browser for HTML. Why can't I reconfigure the browser-key at the side of my Phone to launch opera instead of the internal browser?
<li>Shortcuts: I like the shortcuts to the different applications. Why are the shortcuts for the closed operation mode configured in the control panel and the shortcuts for the mode with the keyboard open in the Preferences-Menu of the launcher-application? Or in common: Why are some settings in the control panel and others in the corresponding application?
<li>Multitasking: Yes. It's a Phone - no PC and no PDA either. I understand that multitasking may not be possible, but then please provide me with a) a list of last started applications or b) the possibility to sort the application-list after self-defined criteria or c) at least sort that list alphapethically! Let me make an example for this: Say you are playing the (greeeeeeat) Solitaire-Game, then you want to have a look at the current time. Thus you start the clock-application (why is the clock not always displayed in the status-bar?). Now to go back to your game, you have to go back to the launcher, scroll all the way down in the list and restart solitaire (after you have found it - the list is sorted by the ideas of a marketing-guy - not of one that really uses the phone (Camera before Adresses for example)...
</ul>
This list seems quite long. Didn't I say the phone is great? Yes, I did. The phone <b>is</b> great. The above list is complete: there are no more problems, and many of the existing ones are quite easily fixable. I will write them up in a more professional way and send them to SonyEricsson support. I don't think I'll get an answer, but maybe they'll fix one of the problems in a future release...
Go and buy your P800 - you will like it.
</li></li></li></li></li></li></li></li></li></li></li></li></li></ul>
P800 and Bluetooth2003-02-23T00:00:00+00:00http://pilif.github.com/2003/02/p800-and-bluetooth<p>I’ve just arrived at my office and tried connecting to my P800 (see earlier posting) via Bluetooth. As the software underlying the SonyEricsson PC Suite is the same as in the new Nokia PC Suite (mRouter strikes back…), I expected everything to nearly work, as usual.</p>
<p>I am using an Acer BT500 USB Bluetooth adaptor that comes with the usual Widcomm software. Connecting to the P800 requires me to check the COM port that is assigned to the BT adaptor (<b>not</b> to the phone!) in the mRouter configuration. Then I open the COM port on the phone with the Bluetooth software on my PC. The phone receives this request, closes the port again (resulting in an error message on the PC) and then opens the COM port of the PC’s BT adaptor.</p>
<p>Every now and then (about every second time), the mRouter software notices this and opens the channel to the phone.</p>
<p>I heard that newer versions of the Widcomm software can handle the way those Symbian phones connect via Bluetooth without annoying me with error messages. I will check the Acer website to see whether they have updated their driver, but I don’t really think they have…</p>
Apple X112003-02-11T00:00:00+00:00http://pilif.github.com/2003/02/apple-x11<p>Yesterday, a new release of Apple’s X11 server came out. It can be downloaded at the <a href="http://www.apple.com/macosx/x11/download/">usual location</a>.</p>
<p>What I really like: Apple has addressed all concerns with the application so far. The feedback on the mailing list really got attention and everything has been implemented as requested: keyboard mappings, the different hints to the window manager, …</p>
<p>The tool is still as fast as the previous release.</p>
<p>I’ve read about one problem: the new release 0.2 reads the global /etc/X11/xinitrc, which the old release did not. This can lead to an already-installed twm or other window manager being executed instead of the quartz-wm one would expect.</p>
<p>The solution is either to delete the above file or to customize the installation of the new release and choose to install “XConfig”, which will overwrite any configuration file possibly still on the system from a different X server.</p>
Back Again2003-02-01T00:00:00+00:00http://pilif.github.com/2003/02/back-again<p>It’s done. I’ve not only successfully survived the relaunch of our broadband portal <a href="http://www.superspeed.ch">Superspeed</a>, I have also survived the installation of a new server at a new location. Although I’ve had much too little time, I think everything should be working again - everything besides my <a href="http://www.spamassassin.org">SpamAssassin</a> installation, which I will again patch to use my mail server’s virtual-user SQL authentication scheme. And believe me: after today’s look at my non-spam-assassinated mailbox, I came to the conclusion that this issue has my top priority ;-)</p>
More X112003-01-15T00:00:00+00:00http://pilif.github.com/2003/01/more-x11<p>As you really seem to like my last posting about the Apple X server, here is a follow-up:</p>
<p>I’ve not spent much time with the tool, as I am primarily a Linux and Windows guy. Although I really like Mac OS X and the nice design of the Apple computers, I do not own one and thus can only use the one that Richard has in our office.</p>
<p>We are currently in the last phase of a big project, which leads to less free time for me and to Richard’s computer being occupied most hours of the day…</p>
<p>Anyway: Apple recently opened a <a href="http://www.lists.apple.com/mailman/listinfo/x11-users">mailing list</a> which I have subscribed to. It’s quite cool to read the messages: the level is quite high - as is the traffic. And best of all: people from Apple working on the project are actively posting there.</p>
<p>Someone already created an <a href="http://www.misplaced.net/fom/X11/">unofficial FAQ</a> (the official one is still a text document posted to the mailing list). <a href="http://www.misplaced.net/fom/X11/26.html">One article</a> deals with the keymapping, but goes a bit further and explains how to get the Alt keys working.</p>
<p>Unfortunately I’ve not yet had the time to check it out, but I will keep you updated…</p>
Apple X11 - cool2003-01-08T00:00:00+00:00http://pilif.github.com/2003/01/apple-x11-cool<p>OK. It took me quite some time to review the <a href="http://www.apple.com/macosx/x11/">X server</a> (and to fix the one big problem I’ve had with it - but see below). I got tired and had to go home, so I’m writing this now.</p>
<p>First: the thing is fast. I am used to the speed of XDarwin, so I was really surprised by Apple’s work. It launches in about half a second on Richard’s Mac, and launching Eterm or nedit happens instantly, without any noticeable delay. I’ve read that the X server is not only 2D-accelerated (which alone is a big improvement over XDarwin), but also provides OpenGL support for X11 applications. I’ve not tried that out yet.</p>
<p>When launched, the server starts an xterm along with it and I’ve not yet found out how to change that. I was really disappointed to see that it used a US keymap which, although I know where one or another character lies on my Swiss keyboard, is not an option for production use.</p>
<p>It turns out that the US keymap is hardcoded in this release, so it cannot be changed. But a workaround exists anyway: create a symlink from <tt>/System/Library/Keyboards/<<your keymap>></tt> to <tt>~/Library/Keyboards/US.keymapping</tt> and the X server will use your keymapping. Of course this breaks any US keyboard possibly plugged in under your account, but if you really have a US keyboard, there is nearly nothing to stop you from using it ;-)</p>
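<p>For a Swiss keyboard, the workaround boils down to something like the following sketch. The <tt>Swiss German.keymapping</tt> filename is my assumption - look in /System/Library/Keyboards for the file matching your actual layout:</p>

```shell
# Sketch of the keymap workaround. "Swiss German.keymapping" is an
# assumption; pick the file matching your layout from
# /System/Library/Keyboards.
mkdir -p ~/Library/Keyboards
ln -sf "/System/Library/Keyboards/Swiss German.keymapping" \
       ~/Library/Keyboards/US.keymapping
# The X server looks for US.keymapping and now follows the link to
# your own layout; the system directory itself stays untouched.
```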
<p>The xterm provided by Apple is not able to display umlauts, which may well be a configuration problem. I’ve yet to find that out, although I am not really motivated to do so. <a href="http://www.eterm.org/">Eterm</a> is a much better alternative.</p>
<p>So I am quite happy with Apple’s solution - even copy & paste works between Aqua and X, something XDarwin cannot do. The only problem: characters you get by combining keys with the Alt modifier cannot be entered (which is maybe the reason why Apple hardcoded the US keymap), but the only one of those characters I use really often is the @ sign, which I can produce with copy & paste for now.</p>
<p>Another tip: I wrote yesterday that Safari does not support window-cycling shortcuts. This turned out not to be true: the shortcuts are just not listed in the menu and are Command-> and Command-<. This allows Richard to use the browser and makes me happy, as he will finally stop using IE ;-)</p>
New Year / Macworld Keynote2003-01-07T00:00:00+00:00http://pilif.github.com/2003/01/new-year-macworld-keynote<p>First of all: a happy new year to my fellow readers. I was in Paris from December 26th to January 2nd, which (at least partially) should explain the lack of updates here.</p>
<p>I’ve just watched the QuickTime stream of Steve Jobs’ keynote at this year’s Macworld in San Francisco. And I mostly like what I saw.</p>
<p>OK, the loooooong introduction of iDVD was quite boring and the presentation of iMovie was quite uninteresting (to me), but the rest was quite cool.</p>
<p>The whole thing began with <a href="http://www.apple.com/ipod/burton/">this little thing</a>, which I really like but which is much too expensive for what it is. Then a stripped-down version of Final Cut Pro, <a href="http://www.apple.com/finalcutexpress/">Final Cut Express</a>, was introduced. Not that interesting for me.</p>
<p>The <a href="http://www.apple.com/ilife/">renewed i-Applications</a> were also not that interesting to me. One exception: iPhoto seems quite cool and I will try it out on Richard’s Mac here in the office when it’s available. What I really liked: contrary to what “analysts” were saying (and what Slashdot gladly <a href="http://apple.slashdot.org/apple/03/01/03/2149209.shtml?tid=107">picked up</a>), the i-Apps remain free to use.</p>
<p>The new presentation software <a href="http://www.apple.com/keynote/">Keynote</a> really looks interesting. Maybe I should give it a shot. If it’s just half as annoying as PowerPoint, I will really like it.</p>
<p>I was quite surprised to see the new web browser <a href="http://www.apple.com/safari/">Safari</a>, which was announced just after Keynote. I went straight to apple.com and downloaded it. Some points:</p>
<ul>
<li>It does not support tabs
<li>There is no shortcut for window-cycling (which will render it useless for Richard)
<li>It's fast.
<li>It's great-looking
<li>I've no idea why it's in the metal-look
</ul>
I was surprised to learn that Apple did not use the <a href="http://www.mozilla.org">Gecko engine</a> but took KHTML from the <a href="http://www.kde.org">KDE project</a>. This is now the second big project preferring KHTML over Gecko (the other one will probably be <a href="http://www.wine.org">Wine</a>). Steve Jobs proudly announced that Apple will give back to the community any modifications they made to KHTML. He said that Apple belongs to the nice guys who respect Free Software. But when I think about it, I come to the conclusion that they really <b>had</b> to give the source back. Actually, they even must provide us with the <b>full source code</b> of Safari (which they have not yet done) because KHTML, like the rest of KDE, is licensed under the GPL and Safari definitely is a "derived work".
I hope to see the source code soon - mostly because I want to see this browser with tab support.
And then came those <a href="http://www.apple.com/powerbook/">Powerbooks</a>...
I really like them, and one of those will definitely be the first Mac I am going to buy myself. I am not quite sure which of them, as both have a flaw:
<ul>
<li>The <a href="http://www.apple.com/powerbook/index17.html">17''-PB</a> is just a little too big to carry around. Additionally, I am asking myself why they did not use the free space to enlarge the keyboard. It seems quite small to me, and the wide free space to the right and left of it looks stupid.
<li>The <a href="http://www.apple.com/powerbook/index12.html">12''-PB</a> is too small for my liking. I prefer a bigger resolution than 1024x768 (which is already very high for a 12'' display).
</ul>
Anyway: the devices are quite cool and I really want to get one.
All in all, the keynote was cool to watch and I am looking forward to the next one.
PS: When downloading Safari, I came across an <a href="http://www.xfree.org">XFree86</a>-based <a href="http://www.apple.com/macosx/x11/">X server by Apple</a>, but the download script for collecting email addresses did not work, so I could not get it (yet). I wonder: does this have something to do with FilmGimp? And: does the clipboard work with this X server (it did not with XDarwin)? I'll keep you informed...
</li></li></ul></li></li></li></li></li></ul>
Downloadscript fixed2003-01-07T00:00:00+00:00http://pilif.github.com/2003/01/downloadscript-fixed<p>They have just fixed their <a href="http://www.apple.com/macosx/x11/download/">download-script</a> for the X-Server. I am downloading now… The archive is 40 Megabytes in size.</p>
<p>What I am keen to see: I have <a href="http://fink.sf.net">Fink</a> and, via Fink, XDarwin installed on this machine here.</p>
<p>We’ll see, what the Mac-X-Server does to the current X11-installation ;-)</p>
Things I hate2002-12-24T00:00:00+00:00http://pilif.github.com/2002/12/things-i-hate<p>Long time, no post. Sorry for that, but I was quite busy.</p>
<p>Today, I was invited to a nice pre-Christmas dinner by the mother of my girlfriend. I really looked forward to the event and I decided to just come to the office for some hours and then take the train to Erlenbach, where my girlfriend lives.</p>
<p>As soon as I was in the office, someone came to me and told me that a Win2k-Server just went down. I did what I always do in such cases: Go and reboot the thing.</p>
<p>But this time, it did not help.</p>
<p>So I went to get a TFT-Display and a keyboard to see what’s wrong. And I was not pleased: Bluescreen at startup.</p>
<p>None of the debugging tools provided by Microsoft was of any help, so I took the server to my desk and inserted the original installation disk.</p>
<p>As I suspected, the repair tool launched by pressing “R” in the setup screen did not help. The <em>really</em> good system repair tool can only be reached by choosing “I” to install a new installation and <em>then</em> choosing “R” when the old installation has been found.</p>
<p>I was pleased to see that the server booted again when the installation was complete. All the settings and the whole configuration were still there - <em>yes!</em></p>
<p>But two things were wrong:</p>
<ul>
<li>The WINS service could not be started. The error in the error log was "File not found". An indication of <em>what</em> file was missing was not given.
<li>The Exchange server used by our tenant was down and could not be started. The error in the log is in German and I will not even try to translate it for you, as it is meaningless anyway.
</ul>
In short: I could not fix the problem before I went to Erlenbach, so I had to return to the office instead of going back home after the (excellent) dinner, because I will be away around Christmas.
My solutions for the problems:
<ul>
<li>The WINS server could be reanimated by uninstalling and re-installing it.
<li>With the Exchange server I am still trying, but I think <a href="http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q257415">Q257415</a> and <a href="http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q296790">Q296790</a> may be of help. (Note: <a href="http://groups.google.com">Google Groups</a> is really great if you don't know any solutions any more.)
</ul>
I'll keep you updated on my progress here.
</li></li></ul></li></li></ul>
Things I hate (3)2002-12-24T00:00:00+00:00http://pilif.github.com/2002/12/things-i-hate-3<p>Jepp. The test was successful. The installation is up and running again.</p>
<p>After many hours of stupid system administration work, I am thinking about what I would have had to do if Linux had been running on said server.</p>
<p>First of all, it would be highly unlikely that something like this i-will-not-boot-anymore situation would happen on a Linux server. The architecture is more straightforward there, and the system cannot destabilize itself without external intervention. But let’s say it happened anyway (stupid administrator, or even a hardware defect, like defective RAM causing corrupted data to be written to the hard drive at an incorrect location).</p>
<p>If I cannot boot Linux (or whatever other UNIX flavor you like), I just take a rescue disk and boot from it. Unlike the disk provided by Microsoft, I get a full-fledged console allowing me to do everything I could do on the defective installation. The Windows disk provides me with a recovery console, which does not allow much more than writing a new boot record to the hard drive, and an automated recovery procedure (actually two - one works better, the other worse; as usual, the better one is hidden behind the “new installation” step) which opaquely does something that is supposed to fix your installation. And: I had to work with a German Windows installation disk, and the translation is <em>really</em> bad. I would have preferred the English version, but the administrator does not get that choice.</p>
<p>As always: opaqueness is bad. Where the boot process of every Linux distribution is well documented and <em>very</em> transparent - and thus can be modified, debugged or even stripped down to the bare minimum (init=/bin/sh) - the process in Windows is very complex and cannot be altered at all. This forces the user to do unnecessary, time-consuming reinstallations, as the software is not smart enough to fix the problem and the administrator is not allowed to.</p>
<p>Debugging the problem: on UNIX/Linux I get, most of the time, a nice and understandable error message. If I can’t understand it, I can feed it to Google and usually get answers. If not, I can even <tt>grep</tt> through the source code and get an idea of what it means.</p>
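<p>As a toy illustration of that last step (the one-file "source tree" below is entirely made up), grep takes you straight from an error string to the line that emits it:</p>

```shell
# Fabricated mini source tree standing in for a real checkout.
mkdir -p /tmp/grep-demo/fs
printf '%s\n' 'return error("Stale file handle");' > /tmp/grep-demo/fs/nfs.c

# -r: recurse, -n: print line numbers - shows file and line of the message.
grep -rn "Stale file handle" /tmp/grep-demo
```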
<p>Under Windows - at least in some parts of the Windows servers - getting a really useful error message is difficult: the Event Viewer uses the same error codes for completely different things, and completely different problems may produce the same error message, which renders Google quite useless (and don’t even try to understand those messages - they are <em>not</em> helpful at all). Grepping through the source code is no alternative at all.</p>
<p>So, all in all, I think my odyssey with this crashed server would have taken much less time and work if the server had been running Linux or a different flavor of UNIX. Too bad it isn’t.</p>
<p>Now I am really going home.</p>
Things I hate (2)2002-12-24T00:00:00+00:00http://pilif.github.com/2002/12/things-i-hate-2<p>I got it to work.</p>
<p>The <tt>/disasterrecovery</tt> option for the Setup.exe of the Exchange server was not enough. More searching on Google finally brought the solution: <a href="http://support.microsoft.com/default.aspx?scid=KB;en-us;267573&">Q267573</a>.</p>
<p>I’ve created a .reg-File so you don’t have to make 5000 clicks when in the same situation:</p>
<pre>
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Rpc\ClientProtocols]
"ncadg_ip_udp"="rpcrt4.dll"
"ncacn_http"="rpcrt4.dll"
"ncacn_nb_tcp"="rpcrt4.dll"
"ncacn_ip_tcp"="rpcrt4.dll"
"ncacn_np"="rpcrt4.dll"
</pre>
<p>If somebody can tell me why the dedicated disaster-recovery-option of the setup program does not create those entries, please tell me here and now!</p>
<p>I will now make some tests with an Outlook-Client and then finally go home (it’s 1:30am localtime)</p>
ScummVM 0.32002-12-10T00:00:00+00:00http://pilif.github.com/2002/12/scummvm-03<p><a href="http://scummvm.sourceforge.net/">ScummVM 0.3.0b</a> was released a couple of days ago, even without the agreement between the project group and <a href="http://www.lucasarts.com">LucasArts</a>. ScummVM is a free (as in speech) engine capable of running most of the old LucasArts adventures everyone likes so much.</p>
<p>Why is this important? All the older games (read: all the good ones) came out when computers were powered by DOS (mostly MS-DOS) and had an original SoundBlaster ISA card (or something compatible with it). Even with <a href="http://ntvdm.cjb.net/">tricks</a> it’s impossible to get those old games to run with sound on most modern systems.</p>
<p>And this is where ScummVM kicks in: the program runs on many modern OSes and understands the format the old games were written in (the actual executable was also just an interpreter back then), and thus allows you to play the original games in the new environment. Of course you still need the original game, but who doesn’t?</p>
<p>The sad thing: LucasArts did not seem to understand what the project is actually doing and tried to shut it down using the damned DMCA. Fortunately the voices of many fans stopped LucasArts from proceeding, and they began negotiating an agreement with the ScummVM group. But just read <a href="http://scummvm.sourceforge.net/">here</a> (last paragraph of the announcement).</p>
<p>Anyway: I did not have the time to test the new version, but the <a href="http://sourceforge.net/project/shownotes.php?release_id=126358">Release Notes</a> look promising.</p>
Syncing, Syncing and Syncing....2002-12-09T00:00:00+00:00http://pilif.github.com/2002/12/syncing-syncing-and-syncing<p>OK… the odyssey goes on: when I posted the last entry here, I had just synced my Nokia 7650 (links in previous entry) over Bluetooth with my Outlook, and besides my disappointment about not being able to send SMS to Richard and call him from within Outlook (which is perfectly possible from the Mac’s Address Book application), I was happy to finally have Bluetooth working with my cellphone.</p>
<p>Some time later, when Richard left the office, I decided to try out my phone with his Mac (I have my own account there). First thing I noted: BT pairings are per computer and not per user on the Mac, which is not really what I would have expected, as it can lead to problems.</p>
<p>Anyway: I was quite pleased to see that Richard’s Mac does not recognize my Nokia phone and thus does not offer any of those cool options, making my Windows BT configuration actually superior to the one that is possible with the Mac. At least with <em>my</em> phone :-) [not that I’m really happy about this - it’s just better than before, but not good at all. As a PDA, the 7650 rocks. As a phone too. But not as a companion for other devices. And the PC Suite provided with the phone is quite crappy too, besides its capability to sync with Outlook]</p>
<p>I don’t really use Outlook for anything but as a common denominator between all my PIM devices and applications, as every one of them can synchronize with Outlook: my Zaurus, my iPAQ (which the company provided me to write applications for) and finally Mozilla for sending email.</p>
<p>Anyway: after the synchronisation of Outlook with the Nokia, the Outlook address book was cluttered with two empty contacts, and many of the imported ones had fields containing just a single space. You can call me a perfectionist, but I did not want them in my other devices/applications. So I removed them.</p>
<p>Then I synced again. Effect: contacts were doubled on my cellphone and in Outlook. The corrupt ones were back.</p>
<p>I removed all the contacts from the Nokia address book (on the phone itself; the PC Suite does not provide a GUI to access the contacts directly. It was a <em>lengthy</em> procedure) and I synced again - with the effect that the contact database of Outlook was now completely empty. I had somewhat foreseen this and thus made a backup before the synchronisation, which I reimported into Outlook.</p>
<p>Again: Synchronisation.</p>
<p>This time it seemed to work. But my own contact entry was doubled again: once as the correct entry and once with spaces in the secondary email address.</p>
<p>I removed the wrong entry and synced again. -> Yippie! It worked.</p>
<p>Then I made the next mistake.</p>
<p>I synchronized the iPAQ (which does not work over Bluetooth, regardless of the software telling me the opposite).</p>
<p>Effect: double contacts in Outlook again. I’ve no idea why because all the entries on the IPAQ had a much older modification date.</p>
<p>Again: Removed the old contacts and synchronized again. IPAQ and Outlook were in sync.</p>
<p>Then: Sync with the Nokia.</p>
<p>Double contacts again….</p>
<p>I hope you are getting the point. It’s strictly impossible for me to have a single working contact database on all the devices. I don’t know where the error slips in (though I tend to blame Nokia for it) and I certainly don’t know how to fix it. It’s just terribly annoying. And as the Nokia way of synchronizing is completely proprietary, there is no way to replace the faulty part.</p>
<p>I’m beginning to regret having bought the Nokia 7650 and given my old T68i to my girlfriend. But then again: I really like the user interface, the speed, the stability [I’m trying not to remember the having-to-reflash-the-software incident last week] and the feature set of the 7650. All in all, I must conclude that the perfect solution for a techie like me does not exist yet. Hopefully it will sometime.</p>
Another day full of "fun" with hard- and software2002-12-09T00:00:00+00:00http://pilif.github.com/2002/12/another-day-full-of-fun-with-hard-and-software<p>I was very happy this morning when I’ve seen that my <a href="http://www.acer.ch/vi/page0.jsp-page79,,,16,,,116,,,,,,,,,1516,,,,,,,,,,,116,16,,16,16,,,,16,,,,16,,,,16,,,,,,,,,,,16,,,0,0,16,,2051291666.htm">Bluetooth-USB-Adaptor</a> (link points to a german page, but I could not find the product on the english pages) finally arrived. It took me about 3 months to actually get one.</p>
<p>I ordered the part to back up and synchronize my <a href="http://www.nokia.com/nokia/0,,137,00.html">Nokia 7650</a> with my desktop PC, as I’ve not seen a way to get the data from my notebook (where I can use infrared for synchronisation) to my desktop in a simple, automated way that does not involve writing a program myself. The additional benefit: BT is a lot faster than the old IR connection.</p>
<p>I began installing the adaptor at the same time as Richard did. The difference: He had an Ericsson T68i and - that’s my point - a PowerMac with OS 10.2.</p>
<p>The sad story: getting the Nokia PC Suite to work involved hacking the 3Com driver to get it to install with my Acer BT card and rebooting about 500 times. And - after many <em>hours</em> of trial and error - the results were not satisfying at all: I can synchronize with my Outlook (the good thing), but I cannot do anything else Richard can with his Mac (where the installation took about 2 minutes): sending and receiving SMS, making calls, receiving calls, …</p>
<p>I really am thinking about buying myself a Mac…</p>
Underground History2002-12-08T00:00:00+00:00http://pilif.github.com/2002/12/underground-history<p>There are not many things I like more than the atmosphere of a dark, dusty place some meters underground. When I was a child, we used to play in a small cavern below the house where my parents had their flat (the whole story ended with us nearly being arrested by the criminal police of Zollikon, which suspected us of being trespassing drug addicts - but that is another story, which I will perhaps tell you another time).</p>
<p>Anyway: on <a href="http://www.slashdot.org">Slashdot</a>, I just saw an article about abandoned subway stations in London and I <em>had</em> to visit the site to learn everything about another place with this great atmosphere. To be honest: just while writing this, I decided that I will have to go to those subway stations sometime to take my own photographs - to breathe the dusty air for myself and to tell you, fellow reader, about it.</p>
<p>Until then: Have fun with <a href="http://www.starfury.demon.co.uk/uground/index.html">Underground History</a></p>
pots.ch2002-12-06T00:00:00+00:00http://pilif.github.com/2002/12/potsch<p>I must confess: I’ve slept not nearly enough last night. And today I was ICQ-ing a bit with Jonas and we finally came to the point when we found that POTS (Plain Old Telephone Service - the official technical name for the standard analog telephone technology) is quite a nice name.</p>
<p>Following the tradition of buying domain names for internet access technologies (adsl.ch), I went and reserved pots.ch for me.</p>
<p>The domain is active, and from about 2 o’clock CET you will be able to reach me at <a href="mailto:ph@pots.ch">ph@pots.ch</a>.</p>
<p>I really like this domain. It comes just after gnegg.ch in the list of strange domain-names I own :-)</p>
MLDonkey2002-12-04T00:00:00+00:00http://pilif.github.com/2002/12/mldonkey<p>Possibly you have heard of the <a href="http://www.edonkey2000.com">eDonkey</a> filesharing program. For a long time there has been a compatible open-source program called <a href="http://savannah.nongnu.org/projects/mldonkey">MLDonkey</a>. MLDonkey needs a Unix-based system to run (although I think I’ve heard of a Cygwin port). MLDonkey has a nice GUI and generally seems to work better than the original Linux client - even more so in combination with the Windows remote-control GUI <a href="http://run.to/mldonkeywatch">MLDonkey Watch</a>.</p>
<p>The developer of MLDonkey seems to no longer have time to continue development, which is a shame, as there are still some small problems with the client - some of them causing problems on the eDonkey servers out there.</p>
<p>Pierre Etchemaite now provides some patches under <a href="http://concept.free.free.fr/mldonkey">http://concept.free.free.fr/mldonkey</a> which fix a lot of the problems currently still in the client. If you want to use MLDonkey, you should really apply them.</p>
<p>On the Mailinglist (subscription via the Savannah-Page linked above) the patches and their results are discussed.</p>
<p>Don’t hesitate - give MLDonkey and those patches a try!</p>
PostgreSQL 7.3 - it works2002-12-03T00:00:00+00:00http://pilif.github.com/2002/12/postgresql-73-it-works<p>I’ve installed the new release of my favourite database, <a href="http://www.postgresql.org">PostgreSQL</a>, today, and I can happily announce that the upgrade from 7.2.3 went without any problems (a strange thing to announce, given my luck with software ;-).</p>
<p>I’ve not yet had the time to check out all the wonderful new features (Schemas, Domains, a very extended ALTER TABLE and much more), but I will try it somewhere this week.</p>
<p>What I noticed during the update: the current <a href="http://www.webmin.com/">Webmin</a> module (1.030) for administering Postgres users does not work with the current format of <tt>pg_hba.conf</tt>, but editing the file by hand is quite straightforward - all the more so because of the very extensive comments in the file.</p>
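<p>For illustration, a minimal hand-edited <tt>pg_hba.conf</tt> in the 7.3-style format (which added the per-user column) might look roughly like this - the database/user names and the md5 method are assumptions for the sketch, not taken from my actual setup:</p>

```
# TYPE  DATABASE  USER      IP-ADDRESS   IP-MASK          METHOD
local   all       all                                     trust
host    all       webshop   127.0.0.1    255.255.255.255  md5
```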
JclDebug2002-12-03T00:00:00+00:00http://pilif.github.com/2002/12/jcldebug<p>If you are a Delphi Programmer like me, you surely know the problem with users reporting an exception here and there but you cannot reproduce it at your place. This can get even more dramatical if such exceptions are thrown within threads as this will lead to an immediate bluescreen in Windows 9x/ME and to a “visit” by Dr. Watson in the NT-based versions of windows.</p>
<p>Imagine you could get a detailed error report containing a full call stack of where the error occurred, combined with file and line-number information. This report could be generated directly on the user’s computer and sent to you via email or directly over the internet using a custom procedure - even creating entries in your bug-tracking tool directly.</p>
<p>This and more is made possible by the <a href="http://www.delphi-jedi.org/">Project JEDI</a> - more accurately, the JCL subproject with its JclDebug framework. Once you have completed the installation of the package, a new menu option called “Insert JCL Debug Data” is added to the Project menu of your Delphi IDE.</p>
<p>Now add an exception dialog to your application using “File, New, Other…” followed by “Dialog, Exception Dialog”.</p>
<p>The newly added form can easily be customized to your liking.</p>
<p>Now make a complete build. The IDE plugin will create a MAP file, compress it and add it to your project’s .EXE file. When an exception is thrown, the new error dialog is used, displaying a complete call stack with file names and line numbers.</p>
<p>I’ve created a small CGI script that receives such reports and automatically files them into my <a href="http://phpbt.sourceforge.net/">phpBugTracker</a> (a very nice “Bugzilla light” written in PHP). This has already helped me track down two stupid bugs I was never able to reproduce on my development system.</p>
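<p>The post doesn’t show that script, but the receiving side of such a setup might be sketched roughly like this - the field names and the plain-text output are my assumptions; the real script filed reports into phpBugTracker’s database instead:</p>

```python
# Hypothetical sketch of a CGI endpoint accepting JclDebug crash
# reports POSTed by the application. Field names ("app", "version",
# "callstack") are assumptions, not taken from the original script.
from urllib.parse import parse_qs

def format_report(body: str) -> str:
    """Turn an urlencoded POST body into a readable bug description."""
    fields = parse_qs(body)
    get = lambda key: fields.get(key, ["unknown"])[0]
    return (f"Crash in {get('app')} {get('version')}\n\n"
            f"Call stack:\n{get('callstack')}")

# A real CGI script would read the POST body from stdin, store the
# formatted report (e.g. insert it into the bug tracker's database)
# and print a Content-Type header plus a short reply.
print(format_report("app=MyApp&version=1.2&callstack=TForm1.ButtonClick"))
```

<p>From there, filing the report is a matter of whatever your bug tracker accepts - a database INSERT, an email, or an HTTP request.</p>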
<p>Oh, and before I forget: the whole thing can be downloaded from its <a href="http://jvcl.sourceforge.net/">webpage</a> at SourceForge.</p>
Fixed Searchengine2002-12-03T00:00:00+00:00http://pilif.github.com/2002/12/fixed-searchengine<p>I’ve just realized that the search-engine setup for Movable Type in a mod_perl environment was never documented, so I never added the needed directives to the Apache configuration - which is why the search engine on this site did not work.</p>
<p>This is fixed now.</p>
<p>For those wondering what to add to the Apache configuration to enable the search engine under mod_perl, the following snippet should enlighten you:</p>
<pre>
PerlModule MT::App::Search
&lt;Location /mt/search&gt;
SetHandler perl-script
PerlHandler MT::App::Search
PerlSetVar MTConfig /path/to/your/mt.cfg
&lt;/Location&gt;
</pre>
PostgreSQL 7.32002-12-02T00:00:00+00:00http://pilif.github.com/2002/12/postgresql-73<p>I’ve just seen that <a href="http://www.postgresql.org">PostgreSQL 7.3</a> has been released. Postgres is an open-source database with a surprisingly rich feature set. Just like the MTA <a href="http://www.exim.org">Exim</a>, PostgreSQL belongs to the group of software I <em>really</em> like. I’m looking forward to trying out the new release…</p>
Amazing delivery times2002-11-30T00:00:00+00:00http://pilif.github.com/2002/11/amazing-delivery-times<p>Three days ago I finally decided that I had to get the OST of my favourite game, Xenogears. Last year I bought the OSTs of Chrono Cross and FFVII through Amazon and had to wait about three weeks from order to delivery.</p>
<p>As I really like the Xenogears soundtrack (no wonder: it’s by Yasunori Mitsuda, the same guy who created the soundtracks for Chrono Cross and Chrono Trigger), I did not want to wait that long.</p>
<p>So I gave <a href="http://www.animenation.com/">Animenation</a> a shot. I was surprised when they asked me for a photograph of my credit card for security reasons, but I gave it to them anyway. They were satisfied with my picture, in which I had blurred out the middle six digits of the number. I suggested to them that I would gladly provide an unaltered picture if they provided me with their GPG public key.</p>
<p>What was really amazing: three days after I placed the order, a small package was on my office desk. The delivery had already arrived.</p>
<p>So: what took Amazon three weeks was done in three days by Animenation! Congratulations! I will definitely buy there again!</p>
Fun with Linux and new Hardware2002-11-28T00:00:00+00:00http://pilif.github.com/2002/11/fun-with-linux-and-new-hardware<p>Ooops… what a delay between the last post and this one. I really should post more often, or this will get quite uninteresting.</p>
<p>However: recently I could not resist any longer and bought myself a new desktop PC - initially intended for use at home, but I never got around to moving it out of the office. One of the reasons may be Richard and our shared love of Unreal Tournament, which certainly works better on a 2.5 GHz P4 with a Radeon 9700 than on my Thinkpad ;-)</p>
<p>Anyway: after having seen KDE 3.1rc3 using TrueType fonts with font hinting on my Gentoo box at home, I finally decided it was time to give Linux a shot on this new PC and use it for my daily development work (which I had been doing in jEdit [see below] under Windows on SAMBA-exported directories).</p>
<p>I mean, the time was right: ATI had just released a <a href="http://mirror.ati.com/support/drivers/linux/radeon-linux.html">driver</a> for the new Radeon series and I finally wanted to give it a shot.</p>
<p>And I shouldn’t have.</p>
<p>I chose Gentoo as my distribution - on the one hand because I wanted to see how long the new box would take to compile everything I need, and on the other because I really <em>knew</em> that every other distribution would not work, as they do not let the user customize enough during installation and would certainly not recognize my new hardware.</p>
<p>In short: even installing Gentoo with its always-brand-new software packages was a time-consuming, frustrating affair. Some points:</p>
<ul>
<li>I used the integrated Broadcom NetXtreme Gigabit chipset on my Asus P4PE mainboard. Unfortunately the driver is not included in the kernel, and the Gentoo install CD ships no compiler to build a module matching the running kernel. My solution was to use <a href="http://www.knoppix.de">Knoppix</a> with its /lib copied to the partition I wanted to use for Gentoo. Another would have been to get the kernel headers used for the Gentoo install kernel and compile the driver on another machine.</li>
<li>2.4.19 does not support the ICH4-integrated IDE controller, so I had to install 2.4.20-rc2. I was too lazy to patch in the cool Gentoo patches. I will not upgrade the kernel anytime soon, as I will certainly forget to recompile all the modules I had to build in addition to the ones shipped with the kernel.</li>
<li>On the first night of running emerge &lt;a lot of stuff&gt; without sitting in front of the monitor, emerge failed about ten minutes after I left while compiling <a href="http://www.postgresql.org">PostgreSQL</a>, because of a bug in that ebuild. One night the PC ran in vain.</li>
<li>The ATI drivers did not work for me: when starting XFree, a strange error about fglrx not containing some object data appeared and X shut down. Possibly the <a href="http://dri.sf.net">DRI project</a> can help to at least get X working (the current CVS version seems to support the new Radeon chips) - although not very fast and without all the 3D features I could have. As I am currently not sitting in front of the machine, I could only see that X did not shut down, but I could not yet check whether it really works.</li>
</ul>
<p>I’ve learned that I will <em>never again</em> install Linux on hardware less than six months old. I really am no crack at setting up Linux, and the procedure I had to go through was a pain in the ass. Many times I wanted to give up, as with every problem I solved, another one arose.</p>
<p>Finally, my liking for <a href="http://www.gentoo.org">Gentoo</a> may be another problem. Compiling everything from source is cool, but on the other hand it does not bring that much of a performance improvement and certainly takes time - even more so when ebuilds marked for production use are outright broken and do not compile. As compiling is a time-consuming process, I nearly <em>demand</em> that it works without me having to sit in front of the monitor just to fix a compile problem here and there, as this (nearly) defeats the whole point of using Gentoo instead of <a href="http://www.linuxfromscratch.org">LFS</a>.</p>
<p>Anyway: I am looking forward to the evening when I will possibly, finally be ready to start using Linux productively.</p>
MacOS 10.2.22002-11-12T00:00:00+00:00http://pilif.github.com/2002/11/macos-1022<p>MacOS X 10.2.2 has just been released. As always, there is a <a href="http://docs.info.apple.com/article.html?artnum=107140">document</a> describing what’s changed. It’s inexplicable to me why they have not fixed the bug of Mac Mail not recognizing the IMAP folder prefix, which leads to IMAP folders not being displayed. Apple itself <a href="http://docs.info.apple.com/article.html?artnum=107069">suggests</a> quite a stupid workaround: create another dummy account and the folders will be displayed.</p>
<p>After all, just setting the IMAP folder prefix in the account properties to “INBOX” helps in most cases without requiring the user to create a dummy account. Anyway: this is clearly a bug and should be fixed by Apple. I don’t know why they have not.</p>
Xenogears - Yepp. I like it2002-11-11T00:00:00+00:00http://pilif.github.com/2002/11/xenogears-yepp-i-like-it<p>This seems to be becoming more game-centered than I intended… However:</p>
<p>Yesterday I did another lengthy session of Kingdom Hearts on my video beamer, and after failing to kill the boss in Atlantica (Ursula is its name) - remembering the frustrating hours spent trying to kill the fish boss in “Zelda - Majora’s Mask” - I switched my PlayStation off, inserted the Xenogears CD and turned it on again.</p>
<p>The next seven hours were most interesting. I began playing just after Fei and his crew were shot down by Bart while fleeing from the attack on Kislev, and arrived at the Thames fleet. From there I played until the fight against Bishop Stone in his gear, where I screwed up and decided that two o’clock in the morning was a good time to stop playing anyway (why must this always happen to me in fights after a long cut-scene, with no chance to save after having seen it? Square should consider offering the possibility to skip a scene once you have seen it at least once).</p>
<p>All Squaresoft RPGs (with the exception of Kingdom Hearts) have a good, deep story, but Xenogears has the best of them all. I vividly remember when the whole plot around the Ethos - something the player fears for the whole game - was revealed to Billy. It’s just great.</p>
<p>If you find the game somewhere, consider buying it. It’s just great. I would even suggest importing an American PS just for Xenogears, as the game was never released here in Europe (I imported mine for Chrono Cross, which is also great, but not nearly as great as Xenogears).</p>
Why I like jEdit2002-11-08T00:00:00+00:00http://pilif.github.com/2002/11/why-i-like-jedit<table>
<tbody>
<tr>
<td><a href="/assets/images/jedit.png"><img src="/assets/images/jedit-thumb.png" width="200" height="136" align="right" /></a> <a href="http://www.jedit.org">jEdit</a> is a text editor written in Java. Actually, it’s not just a text editor - it’s <i>the</i> text editor. It combines the usage conventions known from other programs running under Windows or Windows-like environments with the functionality (as an editor, not as a newsreader/web browser/file manager/«insert whatever else Emacs can do») of Emacs.</td>
</tr>
</tbody>
</table>
<p>When you download the current release (you can safely take the current 4.1pre release - even the CVS snapshot is stable enough for production use [at least for me]) and install it via the provided installer, you get a rather plain-looking UI. So the first thing to do is open the plugin manager to download and install whatever you need. Then restart the program and begin the configuration session…</p>
<p>On the screenshot, you can see many of the features I like about jEdit:</p>
<ul>
<li>The cool look&amp;feel (install the L&amp;F plugin and choose the Metouia look to get mine).</li>
<li>The file browser, always open on the left side. You have to select it under Global Options/Docking to make it stick to the left side.</li>
<li>The search bar, which even supports regular expressions.</li>
<li>The split view. I am currently looking at the same file in both panes, but changing this is a matter of selecting another tab (install the BufferTabs plugin to get those) in one of the views.</li>
<li>The color scheme of the edit field. I really like having bright text on a dark background - it’s so much easier to read.</li>
<li>The yellow triangle marks in the gutter of the edit field are for folding the source code. Click one, and the associated block is folded.</li>
</ul>
<p>Please give jEdit a chance even though it’s written in Java: it is extremely feature-laden and really fast. Trust me!</p>
Kingdom Hearts2002-11-06T00:00:00+00:00http://pilif.github.com/2002/11/kingdom-hearts<p>Yesterday evening I played another round of Square’s newest RPG, <a href="http://www.squaresoft.com/playonline/kingdomhearts/index1.html">Kingdom Hearts</a>, and I have to admit this is the first Square RPG I don’t really like. Although it’s from the same team as the Final Fantasy series, it’s more like Zelda than FF. With one difference: the camera sucks. When playing an action RPG like Zelda, it’s important to always have your character in view and to know where you will be jumping. Unfortunately the camera in KH is quite buggy, so you run into situations where you cannot see where to jump next. Rotating the view does not help either, as it’s quite jumpy and seems to know what the user wants better than the user does…</p>
<p>And: what I like best about the FF series (at least the newer games), or even more so in Xenogears, is the complex story. Unfortunately, KH’s story is soooo obvious and simple. And finally, the battle system: as I said, it’s like Zelda. But in Zelda there is only Link to take care of; in KH it’s three characters, two of them controlled by the AI, and they have a tendency to senselessly use Ethers and Potions all the time - which would not be that big a problem if only those were not so expensive to get…</p>
<p>After all, the only two things I like about the game are its graphics and its sound effects.</p>
<p>I hope, Xenosaga comes out soon…</p>
Welcome2002-11-05T00:00:00+00:00http://pilif.github.com/2002/11/welcome<p>My first entry in the newly opened gnegg.ch weblog. Here you will find all kinds of stuff, from technical stories and reviews to unimportant things I came across, but also announcements whenever I find time to add more to my full-fledged <a href="http://www.pilif.ch">webpage</a>. As I am not-so-good™ with layout, I kept the default one from <a href="http://www.movabletype.org/">Movable Type</a>, my blogging engine. Maybe <a href="http://www.rhaydon.ch">Richard</a> will help me out here sometime in the future.</p>