When the 2nd generation of the AppleTV came out and offered AirPlay support, I bought one more or less for curiosity value, but it worked so well in conjunction with AirVideo that it has completely replaced my previous attempts at an in-home media center system.

It was silent, never really required OS or application updates, never crashed and never overheated. And thanks to AirVideo, it was able to play everything I could throw at it (at the cost of a server running in the closet, of course).

The only inconvenience was that playing a video took too many devices: the TV, the AppleTV and my iOS device, plus remotes for the TV and the AppleTV. Personally, I didn't really mind much, but while I would have loved to give my parents access to my media library (1 Gbit/s upstream FTW), having to juggle three devices and correctly enable AirPlay made that a complete impossibility.

So I patiently awaited the day when the AppleTV would finally be able to run apps itself. There was no technical reason preventing that - the AppleTV was easily powerful enough, and it was already running iOS.

You can imagine how happy I was when I finally got what I wanted and the new 4th generation AppleTV was announced. Finally a solution my parents could use. Finally something to help me ditch the majority of the devices involved.

So of course I bought the new device the moment it became available.

I even had to go through additional trouble due to the lack of an optical digital port (the old AppleTV was connected to a Sonos Playbar), but I found an audio extractor that works well enough.

So now, after a few weeks of use, the one thing that actually pushed me to write this post is the fact that the new AppleTV is probably the most unfinished and unpolished product I have ever bought from Apple. Does it work? Yes. But the list of small oversights and missing pieces is longer than in any Apple product I have ever seen. Ever.

Let me give you a list - quite like the one I made 12 years ago for a very different device:

  • While the AppleTV lets you touch it with an iOS device to configure the Wi-Fi and Apple ID settings, I still had to type in my Apple ID password twice: once for the App Store and once for Game Center. Mind you, my Apple ID password is 30 characters long, containing uppercase, lowercase, digits and symbols. Have fun doing that on the on-screen keyboard.
  • The UI is laggy. The reason I had to type in the Game Center password at all was that the UI was still loading the system Apple ID as I pressed the "Press here to login" button. First nothing happened, then the button turned into a "Press here to sign out" button, and only then did the device react to my button press. Thank you.
  • The old AppleTV supported either the Remote app on an iPhone or even a Bluetooth keyboard for character entry. The new one supports neither, so there's really no way around the crappy on-screen keyboard.
  • While the device allows you to turn off automatic app updates, there is no list of apps with pending updates. There's only "Recently updated", but that is a) limited to 20 apps, b) lists all recently updated apps, c) gives no indication of which apps have been updated and which haven't, and d) isn't even sorted by date of the last update. This UI is barely acceptable with automatic updates enabled, but completely unusable with them disabled - to the point that I decided to just bite the bullet and turn them back on.
  • The sound settings offer "Automatic", "Stereo" and "Dolby Surround". Now, "Dolby Surround" is an old matrix technology from the analog days that encodes one additional rear channel into a stereo signal and is definitely not what you want (which would be "Dolby Digital"). Of course I assumed that some "helpfulness" was at work here, detecting that my TV doesn't support Dolby Digital (but the Playbar does, so it's perfectly fine to send an AC-3 signal). Only after quite a bit of debugging did I find out that what Apple calls "Dolby Surround" is actually "Dolby Digital". WHY??
  • The remote is way too sensitive. If you so much as pick it up, you'll start seeking through your video (which, granted, works way better than anything I've seen before, but still...)
  • Until the first update (provided without a changelog or anything of the sort), the YouTube app would constantly interrupt playback and reload the stream once you had paused a video even once.
  • Of course Siri doesn't work in Switzerland, even though I would totally be able to use it in English (or German - it's available in Germany, after all). Not that it matters much: the Swiss store is devoid of media I'd actually be interested in anyway, and there's no way for third parties to integrate into the consolidated system-wide interface for media browsing.
  • Home Sharing doesn't work for me. At. All. Even after typing in my Apple ID password a third time (which, yes, it asked me to).
  • It still doesn't wake on network access, nor does it appear in my phone's list of AirPlay devices when it's in sleep mode. This only happens in one segment of my network, so it might be an issue with a switch - wouldn't be the first time :/

I'm sure we'll see updates to fix this mess as time goes on, but I cannot for the life of me understand why Apple thought this was ready to release.

Again: it works fine, and I will be bringing one to my mother next Friday because I know she'll be able to use it just fine (especially with the Plex app). But this kind of lack of polish is what we're used to from Android and Windows. How can Apple produce something like this?


Today marks another big milestone in the availability of ubiquitous SSL encryption: the «Let's Encrypt» project got their cross-signature, so in a few more weeks they will be ready for the public to use.

However, with an unlimited number of free SSL certificates available, we get another problem: because back in the day nobody thought about name-based virtual hosting, the initial implementation of SSL didn't support the client telling the server which host it's trying to connect to. This means the server didn't know which certificate to present when multiple host names shared the same address.

This meant that for every site you wanted to offer over SSL, you needed a dedicated IP address - and those get harder to come by as we run out of them.

«SNI» is a protocol extension that allows the client to tell the server the host name it's connecting to, so the server can choose the correct certificate to serve. This fixes the above issue and finally allows name-based virtual hosting even over SSL.

Unfortunately, SNI isn't as widely supported as we'd like: older Android devices and every version of IE on Windows XP (which is still a sizeable portion of our users) don't support SNI.

What's also tricky is that you don't know a client doesn't support SNI until it's too late: it connects to your port 443 without sending a host name, and now the server needs to a) answer and b) send a server certificate. So unless the client happened to want the host name behind the default certificate, it will get a certificate mismatch and display the usual SSL error message.

This is of course not very good UX as you don't even get to tell the user what's wrong before they see the browser-specific error message.

However, I still want to support SSL for all my sites wherever I can. If I could keep non-SNI clients on an unencrypted site and add encryption only for clients that support SNI, encryption would become a progressive enhancement. The sites I'm dealing with aren't that far into «needs encryption» territory, so offering encryption only to good (read: non-outdated) browsers is a viable option, especially as I want to offer this for free for the sites I'm hosting and I only have so many IP addresses at my disposal right now.

The usual advice here is user-agent sniffing, but that's error-prone. I'd much rather feature-detect.

So after a bit of thinking, I came up with this (it requires JS though):

  • Over port 80, serve the normal site unencrypted instead of just redirecting to https.
  • On that regular site do a jsonp request for some beacon file on your site over https.
  • If that beacon loads properly, then your client is obviously SNI compliant, so redirect to the https version of your site using JS.
  • If the beacon doesn't load, then the browser probably doesn't support SNI, so keep them on the unencrypted page. If you want to, you can set a cookie to prevent further probing on subsequent requests.
  • On port 443, serve an HSTS header, so the next time the browser visits, it'll use HTTPS from the start.
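Sketched out in JavaScript, the probe might look something like this. The beacon URL and cookie name are made up for illustration, and the pure decision logic is split out from the DOM wiring so it can be tested on its own:

```javascript
// Hypothetical beacon: https://example.com/sni-beacon.js contains just a
// call to the JSONP callback:  sniOk();
// The plain-HTTP page would load it like so:
//
//   <script>function sniOk() { handleBeacon(true); }</script>
//   <script src="https://example.com/sni-beacon.js"
//           onerror="handleBeacon(false)"></script>

// Decide what to do once the probe has settled.
function upgradeDecision(beaconLoaded, currentUrl) {
  if (beaconLoaded) {
    // The HTTPS request succeeded, so the client speaks SNI: move to the
    // encrypted version of the current page. (The HTTPS site then serves
    // the HSTS header so future visits skip the probe entirely.)
    return { redirect: currentUrl.replace(/^http:/, "https:") };
  }
  // Beacon failed to load: probably no SNI. Stay on HTTP and set a cookie
  // so subsequent page views don't probe again for a week.
  return { redirect: null, cookie: "no_sni=1; path=/; max-age=604800" };
}

function handleBeacon(loaded) {
  var decision = upgradeDecision(loaded, window.location.href);
  if (decision.redirect) {
    window.location.href = decision.redirect;
  } else {
    document.cookie = decision.cookie;
  }
}
```

Note that a failed beacon can also mean an ad blocker or a flaky connection, which is one more reason to keep the cookie's lifetime short rather than writing the client off as non-SNI forever.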

IE8 will still show the page correctly, but it will also show a warning that it has blocked content for your security, so you might want to immediately redirect again (with the cookie set) to get rid of the warning.

Unlike a normal immediate redirect to HTTPS, this means that the first page view even of a compliant browser is unencrypted, so absolutely make sure you serve all your cookies with the secure flag. It also means that to reach the encrypted version of the page, JavaScript must be enabled - at least on the first visit.

Maybe you can come up with some crazy hack using frames, but this method seems to be the cleanest.


Yesterday I talked about why we need IPv6, and to make that actually happen, I decided to do my part and make sure that all of our infrastructure is available over IPv6.

Here's a story of how that went:

The first step was to request an IPv6 allocation from our hosting provider: thankfully our contract with them included a /64, but it was never enabled, and when I asked for it, they initially tried to bill us an extra CHF 12/month. After pointing them to the contract, they started to make IPv6 happen.

That it still took them multiple days was a sign to me that they were not ready at all, and that by asking I was forcing them into readiness. I think I did a good deed there.


DNS

Before doing anything else, I made sure that our DNS servers were accessible over IPv6 and that IPv6 glue records existed for them.

We're using PowerDNS, so supporting IPv6 connectivity was trivial, though a bit of tweaking was needed to tell it which address to use for outgoing zone transfers.
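For the record, on the PowerDNS version we run, this boiled down to a couple of lines in pdns.conf (the addresses are documentation-range placeholders, and newer PowerDNS releases have renamed some of these settings):

```
# listen on IPv6 in addition to IPv4
local-address=192.0.2.53
local-ipv6=2001:db8::53

# source address to use for outgoing IPv6 queries and zone transfer traffic
query-local-address6=2001:db8::53
```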

Creating the glue records for the DNS servers was trivial too - nic.ch has a nice UI for handling glue records. I already had IPv4 glue records, so all I had to do was add the v6 addresses.

web properties

Making our web properties available over IPv6 was trivial. All I had to do was to assign an IPv6 address to our frontend load balancer.
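In nginx terms (our load balancer's config differs in the details, but the shape of the change is the same) it amounts to one extra listen directive per vhost:

```
server {
    listen 443 ssl;
    listen [::]:443 ssl;  # new: same vhost, now also reachable over IPv6
    server_name example.com;
}
```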

I did not change any of the backend network though. That's still running IPv4, and it probably will be for a long time to come: I have already carefully allocated addresses, configured DHCP, and I even know the IP addresses by heart. No need to change this.

I had to update the web application itself a tiny bit in order to cope with a REMOTE_ADDR that didn't quite look the same any more:

  • There were places where we put the remote address into the database. Thankfully, we are using PostgreSQL, whose native inet type (it even supports handy type-specific operators) has supported IPv6 since practically forever. If you're using another database and storing the address in a VARCHAR, be prepared to lengthen the column, as IPv6 addresses are much longer.
  • There were some places where we were using CIDR matching for some privileged API calls we allow from the internal network. Because I haven't changed the internal network, no code change was strictly needed, but I updated the code (and unit tests) to deal with IPv6 too.
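For illustration, here's the gist of a family-aware CIDR check (our real implementation lives inside the application and looks different; this is just a self-contained sketch of the idea):

```javascript
// Sketch: match an IPv4 or IPv6 address against a CIDR range by
// normalizing addresses to big integers and comparing network prefixes.

function ipToBigInt(addr) {
  if (addr.indexOf(".") !== -1) {
    // IPv4: four decimal octets
    return addr.split(".").reduce(function (acc, octet) {
      return (acc << 8n) + BigInt(parseInt(octet, 10));
    }, 0n);
  }
  // IPv6: expand the "::" shorthand to all eight groups first
  var parts = addr.split("::");
  var head = parts[0] ? parts[0].split(":") : [];
  var tail = parts.length > 1 && parts[1] ? parts[1].split(":") : [];
  var groups = head.concat(new Array(8 - head.length - tail.length).fill("0"), tail);
  return groups.reduce(function (acc, group) {
    return (acc << 16n) + BigInt(parseInt(group, 16));
  }, 0n);
}

function inCidr(addr, cidr) {
  var net = cidr.split("/")[0];
  var prefixLen = parseInt(cidr.split("/")[1], 10);
  var v4 = net.indexOf(".") !== -1;
  if (v4 !== (addr.indexOf(".") !== -1)) return false; // family mismatch
  var shift = BigInt((v4 ? 32 : 128) - prefixLen);
  // Addresses match when their network prefixes are identical.
  return (ipToBigInt(addr) >> shift) === (ipToBigInt(net) >> shift);
}
```

On the database side, by the way, PostgreSQL's inet operators (such as `<<=`, "is contained within or equals") do the same job without any application code.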

The last step was to add the AAAA record for our load balancer.
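In the zone, that's one extra record next to the existing A record (documentation addresses shown):

```
www   IN  A     192.0.2.10
www   IN  AAAA  2001:db8::10
```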

From that moment on, our web properties were available via IPv6 and while there's not a lot of traffic from Switzerland, over in Germany, about 30% of all requests are happening over IPv6.


email

Of the bunch, dealing with email was the most complicated step. Not so much for enabling IPv6 support in the MTA, as that has been supported forever (we're using Exim (warning: very old post)).

The difficulty lay in getting everything else to work smoothly - mostly with regard to spam filtering:

  • Many RBLs don't support IPv6, so I had to make sure we weren't accidentally treating all mail delivered to us over IPv6 as spam.
  • If you want to have any chance at your mail being accepted by remote parties, then you must have a valid PTR record for your mail server. This meant getting reverse DNS to work right for IPv6.
  • Of course you also need to update the SPF record now that you are sending email over IPv6.
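With documentation prefixes standing in for our real ranges, the updated SPF record looks something like:

```
example.com.  IN  TXT  "v=spf1 ip4:192.0.2.0/24 ip6:2001:db8::/64 mx -all"
```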

PTR record

The PTR record was actually the crux of the matter.

With IPv4, it's impractical or even impossible to get a reverse delegation for anything smaller than a /24, because of the way reverse lookup works in DNS. There was RFC 2317, but that was just too much hassle for most ISPs to implement.

So the process normally was to let the ISP handle the few PTR records you wanted.

This changes with IPv6 in two ways: allocations are mostly fixed at a /64 or larger, and there are so many IPv6 addresses that networks can be split at nibble boundaries without being stingy, which makes proper reverse delegation to customers trivially easy.

And because there are so many addresses available for a customer (a /64 allocation is enough addresses to cover 2^32 whole internets), reverse delegation is the only way to make good use of all these addresses.
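The reverse name of an IPv6 address is just its 32 hex nibbles in reverse order under ip6.arpa, which is exactly what makes delegation at nibble boundaries so clean. A small sketch:

```javascript
// Sketch: compute the PTR owner name (the ip6.arpa form) of an IPv6 address.

function expandIPv6(addr) {
  // Expand the "::" shorthand and left-pad every group to four hex digits.
  var parts = addr.split("::");
  var head = parts[0] ? parts[0].split(":") : [];
  var tail = parts.length > 1 && parts[1] ? parts[1].split(":") : [];
  var groups = head.concat(new Array(8 - head.length - tail.length).fill("0"), tail);
  return groups.map(function (g) { return ("0000" + g).slice(-4); }).join("");
}

function ptrName(addr) {
  // 32 nibbles, least significant first, joined with dots.
  return expandIPv6(addr).split("").reverse().join(".") + ".ip6.arpa";
}

// e.g. ptrName("2001:db8::1") ends in "...8.b.d.0.1.0.0.2.ip6.arpa"
```

A /64 delegation then simply hands the zone covering the first 16 nibbles (for 2001:db8::/64 that would be `0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa`) over to the customer's name servers.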

This is where I hit my next roadblock with the ISP though.

They were not at all set up for proper reverse delegation - the support ticket I opened in November of 2014 took over 6 months to finally get closed in May of this year.

As an aside: This was a professional colocation provider for business customers that was, in 2014, not prepared to even just hand out IPv6 addresses and who required 6 months to get reverse delegation to work.

My awesome ISP has been handing out IPv6 addresses since the late '90s, and they offer reverse delegation for free to anybody who asks. As a matter of fact, it was them who asked me whether I wanted a reverse delegation when I signed up with them last year.

Of course I said yes :-)

This brought me to the paradoxical situation of having a fully working IPv6 setup at home while I had to wait for 6 months for my commercial business ISP to get there.

it's done now

So after spending about two days learning about IPv6, about two days updating our application, one day convincing our ISP to give us the IPv6 allocation they promised in the contract, and after waiting 6 months for the reverse delegation, I can finally say that all our services are now accessible via IPv6.

Here are the headers of the very first email we transmitted via IPv6:

(screenshot of the email headers)

And here's the achievement badge I waited so patiently (because of the PTR delegation) to finally earn 🎉

IPv6 Certification Badge for pilif

I can't wait for the accompanying T-Shirt to arrive 😃


As we are running out of IPv4 addresses (and yes, we are), there are only two possible future scenarios, and most people are not going to like one of them at all.

As IP addresses get more and more scarce, things will start to suck for both clients and content providers.

As more and more clients connect, carrier-grade NAT will become the norm. NAT already sucks, but at least you get to control it, and with NAT-PMP or UPnP, applications on your network get some ability to accept incoming connections.

Carrier-grade NAT is different: the NAT is done on the ISP's end, so you don't get to open ports at all. This will affect gaming performance, your ability to use VoIP clients and, of course, file-sharing clients.

For content providers on the other hand, it will become more and more difficult to get the public IP addresses needed for them to be able to actually provide content.

Back in the day, if you wanted to launch a service, you would just do it. No need to ask anybody for permission. But in the future, as addresses become scarce and controlled by big ISPs that also act as content providers, the ISPs become the gatekeepers for new services.

Either you do something they like, or you don't get an address: as there will be way more content providers fighting over addresses than there are addresses available, it's easy for them to be picky.

Old companies who still have addresses of course are not affected, but competing against them will become hard or even impossible.

More power to the ISPs and no competition for existing content services - both are very good things for the players already in the game, so that's certainly a possible future they are looking forward to.

If we want to prevent this possible future from becoming reality, we need a way out. IPv4 is drying up. IPv6 has existed for a long time, but people are reluctant to upgrade their infrastructure.

It's a vicious cycle: People don't upgrade their infrastructure to IPv6 because nobody is using IPv6 and nobody is using IPv6 because there's nothing to be gained from using IPv6.

If we want to keep the internet as an open medium, we need to break the cycle. Everybody needs to work together to provide services over IPv6, to the point of even offering services over IPv6 exclusively.

Only then can we start to build pressure for ISPs to support IPv6 on their end.

If you are a content provider, ask your ISP for IPv6 support and start offering your content over IPv6. If you are an end user, pressure your ISP to offer IPv6 connectivity.

Knowing this, one year ago, motivated by my awesome ISP, which has offered IPv6 connectivity practically forever, I started to get our commercial infrastructure up to speed.

Read on to learn how that went.
