The Care And Feeding Of Songs

It is a truth universally acknowledged that some songs are better than others. And it ought to be pretty widely acknowledged that there are some people who can take a perfectly good song and perform it in a way that makes you wish you were temporarily deaf.

The second thought came to me this morning. Our neighbour on one side is a perfectly nice old Aussie bloke who likes music from a certain period and/or genre. The genius of the Shadows frequently features on his playlist. Top hits of the late 60s and early 70s can often be heard, especially since he’s a little deaf and likes to turn it up a bit. But this is all fine… or was, until he played Richard Clayderman whilst we were eating breakfast.

To be fair, it may not have been Mr Clayderman. I’m sure there are other instrumentalists who make their livings by reducing good songs to feeble elevator muzak… in any event, it got me thinking about what makes a good song. Or rather, a good “track”, and by that I mean a performance of a song, rather than just the song itself.

Here’s my theory: in order to be listenable, a track has to have at least one of:
(a) an interesting melody that’s balanced between predictability and surprise
(b) interesting lyrics that mean something to the listener
(c) a genuine and moving performance

Time for some examples. Let’s take (c) first, because I’m feeling perverse. The Sex Pistols’ ‘God Save The Queen’ has no real melody and the lyrics aren’t all that hot (“potential H-bomb”? What?). But it works (for me) because it’s played with a sneering, raucous enthusiasm that makes a very simple song work. The same’s true of, say, Iggy Pop’s ‘I Wanna Be Your Dog’: four chords, not very many words and yet it can grab some people by the scruff of their brain. Or (c) can be achieved by just a voice; personally, I could listen to Cerys Matthews reciting the digits of pi for hours just to hear her voice.

Moving away from tracks to songs: it’s easy to find examples for (b). And it’s this that Mr Clayderman’s performance was making me ponder this morning. Since there were no words, all you could hear was the melody, and when he performed Foreigner’s ‘I Want To Know What Love Is’, it showed up just how boring the melody of that song is. Not that it’s a song I particularly like anyway, but if it works at all, it only works with the words. As a piece of music, it’s tedious.

Or we could go back a generation or two: the first verse of Cole Porter’s ‘Night And Day’ is all sung on one note. The words are what’s important. Or take almost anything by AC/DC: there’s more melody in one bar of any of Angus’s solos than in any verse, but (especially when Bon Scott was writing them) the lyrics carry the song. Or for an example that’s a whole subculture: rap. No melody, all words.

So finally, we come to (a): songs with melodies that can lift even simple lyrics. This is what (for me) separates real musicians from wannabes. Example one: Kate Bush’s ‘Wuthering Heights’. Now, I quite like Ms B’s music, but I wouldn’t argue that the words make the song. Or even that the words make sense half the time… but you could play the melody of that song without any words and it stands up as a damn good piece of music.

Example two (and this is going to date me): any one of a whole bunch of tracks by Yes. Let’s take ‘Close To The Edge’: the lyrics are incomprehensible stoned-hippy trash, and yet (assuming you’re ok with progressive rock) they’re carried by interesting music.

And then there are the exceptions that prove the rule: tracks with none of the above. Well, you could turn on the radio and wait ten minutes and you’re bound to hear an example or two. Anything by Good Charlotte would do: worthless, lazy songwriting. Churning out ‘product’ with as much attention as the average burger-flipper pays to the hundredth Big Mac of the day.

But then again… there are the tracks with two or even three of the Key Attributes. In the 70s you could have heard Carole King produce a whole album of them (Tapestry). In the last few years, Elbow have done the same. The art of songwriting is far from dead, and one great song can remind me that music is worth persevering with.

Of course, these are my examples. Yours will be different. But I reckon that in every track you really love, there will be at least one of (a), (b) or (c).

Posted via LiveJournal app for iPad.

Ready-to-handness, and bed.

So here’s an interesting study done on a sample size of one (me). I’m pondering a blog entry, and I have two ways to write it. I can (a) get up, walk to the study, unlock one of the Linux laptops waiting patiently, fire up the LiveJournal web page and then type, on a proper keyboard. And it’ll be a fast machine, with a sprightly web browser to find interesting and relevant links.

Or, (b), I could use the LiveJournal app on the iPad. Which means two-finger typing on a screen keyboard, fighting the autocorrect. And a slow browser, with the sluggish copy-and-paste that iOS 5 has brought to the original iPad (thanks, Apple, I was worried the iPad was too fast before you released iOS 5). So, in terms of the actual task I want to do, no contest.

But (b) means I don’t have to get up yet. And the iPad’s already here. And I’m comfy. Damn.

Posted via LiveJournal app for iPad.

Hash and cache smash

Working around the egregious Firefox cache collision defect.  Properly.

It just occurred to me that the subject line of this post won’t work nearly so well if you’re Australian, or American, since natives of those nations pronounce "cache" in a novel colonial way; it should of course rhyme with "cash", which is how the Queen would pronounce it, if she knew what a cache was.

Anyway… those of you who have to deal with the complex and complicated world of efficiently serving up HTTP responses may know about an issue with Firefox (or possibly Mozilla, depending on how you like to classify it).  The most relevant bug report is probably Mozilla bug 290032, but here’s a summary:

  • Firefox stores cached data by hashing the URL to a value.
  • The hash algorithm is weak, and generates duplicate values for similar URLs.
  • The cache can only store one cached URL per hash value.

The upshot is that some URLs are always reloaded from the server and never cached.  Now, this may not seem like a very big deal, and if you’re just a plain old Firefox user, you may not care if there’s a little bit more network traffic.  But for anyone running a high-traffic server, it can be a serious issue.  Every time a user reloads data from your server rather than their cache, you have a server hit and a bandwidth hit.  Multiply that by a large number of users and it becomes a problem worth considering.

Let’s look at the problem in more detail.  First, we could do with a way to generate hashes from URLs so that we can do some tests.  Here, in the Programming Language Of The Gods, is an implementation of the Firefox/Mozilla cache hash algorithm:

def hash(url):
    # Hash starts as zero
    h = 0
    # Iterate through the characters of the URL
    for c in url:
        # Take the ASCII value of each character
        cv = ord(c)
        # Rotate the hash value left four bits, as a 32-bit value
        bits1 = (h & 0xF0000000) >> 28
        bits2 = (h & 0x0FFFFFFF) << 4
        h = bits1 | bits2
        # XOR the ASCII character value with the hash
        h = h ^ cv
    return h

There are various places online where you can find the fault with this algorithm summed up along the lines of: the Firefox disk cache hash function can generate collisions for URLs that differ only slightly, namely only on 8-character boundaries, or that it generates collisions for URLs that are "similar enough", where (going by the bug) "similar enough" seems to mean the URLs are the same every 4 characters (or perhaps 8).  In fact, it’s more complex than that.  Let’s take two example URLs provided by sembiance at Stack Overflow.

ben@quad-14:~/working/hashing$ python
Python 2.6.2 (release26-maint, Apr 19 2009, 01:58:18)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from hashcheck import hash
>>> print "0x%08x" % hash("http://worldofsolitaire.com/decks/paris/5/12c.png")
0x40d8c8e9
>>> print "0x%08x" % hash("http://worldofsolitaire.com/decks/paris/5/13s.png")
0x40d8c8e9
>>>

Yow.  Both URLs generate the same hash, though they differ by two adjacent characters.  This means that only one of them can be cached by Firefox; if pages on the site regularly use both images, then at least one of them will be reloaded more often than it should. If you care to do the binary mathematics of the hash, you’ll see that the reason for the clash is that differences between two characters in the two URLs can be cancelled out by differences in later characters.
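To make that concrete, here’s a quick check on the only two characters that differ between those URLs (nothing clever, just ord() and XOR):

>>> print "0x%02x" % (ord('2') ^ ord('3'))
0x01
>>> print "0x%02x" % (ord('c') ^ ord('s'))
0x10
>>>

The ‘2’/‘3’ difference sits in bit 0 and the ‘c’/‘s’ difference sits in bit 4. Each character’s contribution to the final hash is rotated left four bits for every character that follows it, so the earlier difference ends up rotated exactly four bits further than the later one; the two line up and XOR away to nothing, which is why the hashes come out identical.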

Someone else who suffers from this bug: Google, or more specifically, Google Maps.  Like most mapping sites that are based on "image tiles" (which includes Google Maps, Bing Maps, OpenStreetMap, NearMap, etc), they have a URL for loading any individual 256×256 pixel "tile" image from their servers.  Since there are many[1] possible tiles, there are many very similar URLs to load them.  Let’s consider a couple of Google URLs that clash:

>>> print "0x%08x"%hash("http://khm2.google.com/kh/v=52&x=118346&y=80466&z=17")
0x198cc05f
>>> print "0x%08x"%hash("http://khm2.google.com/kh/v=52&x=118350&y=80470&z=17")
0x198cc05f
>>>

Because these two URLs clash, as with the other example above, only one of them can be cached by Firefox at any one time.  I picked these two also because they’re URLs for two images that are quite likely to appear together; they’re only 4 tiles away from each other horizontally and vertically, so it’s entirely possible to have them displayed together.

But if you crank up Firebug and watch Google URLs, you’ll see that they use a trick to work around the issue; they append a substring of "Galileo"[3] (as the s parameter) to each URL.  This is sometimes (incorrectly) referred to as a "safeword" and described as a security measure that Google use to restrict access to their site; it’s nothing of the sort, as you can see by testing the URLs without the s parameter, or even with it set to something completely different.  The use of the substring is just to make similar URLs sufficiently different that they don’t generate the same hash value in the Firefox cache system.

The length of the substring is generated from the x and y values using the algorithm length=((3*x)+y)%8 (where % means modulo).  So if we use the URLs with the s parameter added, we get:

>>> print "0x%08x"%hash("http://khm2.google.com/kh/v=52&x=118346&y=80466&z=17&s=")
0xcc05d095
>>> print "0x%08x"%hash("http://khm2.google.com/kh/v=52&x=118350&y=80470&z=17&s=")
0xcc05d095
>>>

Oops. What happened?  The answer is that in both cases, the s parameter value is the same; 0 characters from the string "Galileo":

>>> ((3*118346)+80466)%8
0
>>> ((3*118350)+80470)%8
0

So there are still clashes. Google’s extra parameter reduces the number of collisions, but certainly doesn’t eliminate them.  In fact, it’s extremely difficult to completely eliminate collisions[2] between URLs, but we can do better than Google’s approach.
At NearMap, we’ve been testing out a new way to generate the substring, one which uses the individual characters of the variant parts of the URL: the x and y, and also the nmd (date) parameter that NearMap have (and Google don’t). Here’s a Python sample implementation:

substring = "Vk52edzNRYKbGjF8Ur0WhmQlZs4wgipDETyL1oOMXIAvqtxJBuf7H36acCnS9P"      # Nonrepeating A-Z, a-z and digits
def differenceEngine(s, a):
    """Use string a to return a deterministic but pseudo-random substring from within string s"""
    result = ""
    # Walk the characters of a, which MUST all be digits
    offset = 0
    for c in a:
        try:
            v = int(c)
        except ValueError:
            # This exception fires if the character is not a digit, which is wrong,
            # but we should cope without crashing.
            continue
        # Advance a running offset by the value of this digit, and take the
        # character at that position (wrapping around the end of the string).
        offset += v
        p = s[offset % len(s)]
        result += p
    return result

def substringFor(x, y, nmd=None):
    """Return the substring parameter for a URL, given the x, y and nmd values.
    The x and y should be integers, the nmd should be a string - if no date is
    included in the URL, pass an empty string or None."""
    arg = str(x) + str(y) + str((3 * x) + y)
    if nmd:
        arg += nmd
    return differenceEngine(substring, arg)
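As a quick sanity check (nothing rigorous, just something to type into the same session, assuming both hash() and substringFor() are in scope), we can bolt the generated substring onto the two clashing Google-style tile URLs from earlier and re-hash them:

for x, y in ((118346, 80466), (118350, 80470)):
    url = "http://khm2.google.com/kh/v=52&x=%d&y=%d&z=17&s=%s" % (x, y, substringFor(x, y))
    print "0x%08x  %s" % (hash(url), url)

The two generated substrings diverge after the first few characters, so this particular pair should no longer hash to the same value; more generally, because the substring is driven by the individual characters of x, y and nmd, URLs that differ anywhere in those values pick up substrings that differ too.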

To test this, I took the logs for a day’s worth of traffic on the NearMap site and analyzed the hash collisions.  With the Google "Galileo" algorithm, around 10% of all URLs collided.  With this alternative approach, the collision rate is around 0.02%.

Okay, so this isn’t a simple and clear chunk of code that you can use to modify your own URLs, but it demonstrates some principles:

  • The hash algorithm is based on characters, so if you use the ‘add a substring parameter’ approach, base that substring on the characters of the URL, not parameter values.
  • Use as many different characters as possible in your substring so that it varies as much as possible; "Galileo" doesn’t allow for very many different substrings.
  • Build yourself a test setup (feel free to use the hash method above) that you can pass URLs through to spot collisions; there’s a minimal sketch of one just below.  Use that to tune your URL construction code to minimize the collision rate.
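For what it’s worth, that test setup doesn’t need to be anything grander than a dictionary keyed on hash value. Here’s a minimal sketch (it assumes the hash function above is saved as hashcheck.py, as in the earlier interactive session, and that urls.txt holds one URL per line; both names are just placeholders):

from collections import defaultdict
from hashcheck import hash      # the cache hash implementation from the top of this post

# Bucket every URL by its hash value; any bucket holding more than one
# distinct URL is a collision.
buckets = defaultdict(set)
for line in open("urls.txt"):
    url = line.strip()
    if url:
        buckets[hash(url)].add(url)

collisions = [(h, urls) for h, urls in buckets.items() if len(urls) > 1]
for h, urls in collisions:
    print "0x%08x is shared by %d URLs" % (h, len(urls))
total = sum(len(urls) for urls in buckets.values())
print "%d distinct URLs checked, %d colliding hash values" % (total, len(collisions))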

[1]Many? The standard tiling system that all these sites use divides the world up into square tiles. At zoom level 0, the whole world is shown in one tile. At zoom level 1, the world is shown in 4 tiles, at zoom level 2, 16 tiles, and so on. So at each zoom level z, the number of tiles is 2^z squared. By the time you’re at zoom level 21, where Google tend to stop, there are more than 4×10^12 tiles. Which is many, by most standards.
[2]Actually, it’s impossible, since to completely avoid them, you’d need to modify all the URLs used in the world. Which you can’t. What I’m really aiming at here is to try and avoid collisions across the set of similar URLs used on one site.
[3]Paolo Emilio Sfondrati was the Secretary of the Inquisition at the time that Galileo was denounced for heresy and brought to trial. So probably "Sfondrati" would be an invalid string to replace "Galileo".

Solving Sudoku With Superpositions

So I’m dead.

Actually, no, but that is a killer first line, from the movie Confidence. And when you have a killer first line, you go with it.

So I find myself writing up some examples and explanations of how to do basic object oriented design and development, and thus having to come up with something to serve as an example of how you might actually approach designing and implementing some code in C#. A problem slightly more connected to the real world than, say, yet another example like:

class Shape
{
    public virtual void Draw()
    { }
}

class Circle : Shape
{
    public override void Draw()
    { }
}

Sudoku will serve pretty well. First of all, it’s a fairly easy problem to understand. Also, anyone who’s of an algorithmic frame of mind and has encountered a Sudoku puzzle has probably spent a while thinking about how to solve them in general. Programmers know that working out how to solve any problem of type X is more fun than actually solving any given individual problem of type X. And writing the code to do it also keeps you looking busy while someone else goes and does the actual work.

So, here is the algorithm that I’ve used as a basis, which I like to call solving with superposition because it lets me get all quantum and esoteric about it. To follow, you’ll need to know at least the basic rules of Sudoku puzzles, which are ably explained in the relevant Wikipedia article.

On first encountering a Sudoku puzzle, such as this:
[Example Sudoku puzzle image]
we can see that:

  • It’s made up of 81 cells
  • They’re arranged into 9 rows, 9 columns and 9 squares, each of which contains 9 cells.
  • Each cell either contains a number (we’ll call these solved), or is blank (unsolved)

For the purposes of this algorithm, let’s start off by assuming that each unsolved cell contains all the possible numbers 1 to 9 that it might hold. The work of solving the puzzle then involves finding numbers that cannot be in unsolved cells, and removing them, until each unsolved cell only contains one possible number, which means that it’s then a solved cell. I call this process pruning.

I should point out here that there are many different ways to solve Sudoku puzzles. This is just one, and not necessarily the best one; I’ve chosen it because it’s a nice little algorithmic example. If you have better approaches, go write a Wikipedia article.

Anyway, back to the explanation. Let’s show how the top-left group of nine cells in the example puzzle might be written out by a programmer trying out this algorithm. Conceptually it looks like this:

+-----+-----+-----+
|     |   3 | 123 |
|  5  |     | 456 |
|     |     | 789 |
+-----+-----+-----+
|     | 123 | 123 |
|   6 | 456 | 456 |
|     | 789 | 789 |
+-----+-----+-----+
| 123 |     |     |
| 456 |     |     |
| 789 |   9 |  8  |
+-----+-----+-----+
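The exercise itself is in C#, but a quick Python sketch (with names that are entirely my own) shows how little is needed to represent this: the board is just a grid of candidate sets, and a solved cell is a set holding a single value.

# Build a 9x9 board in which every cell starts as the full set of
# candidates 1..9, then drop in the clues from the square shown above.
def empty_board():
    return [[set(range(1, 10)) for _ in range(9)] for _ in range(9)]

board = empty_board()
clues = {(0, 0): 5, (0, 1): 3, (1, 0): 6, (2, 1): 9, (2, 2): 8}
for (row, col), value in clues.items():
    board[row][col] = set([value])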

When we prune the puzzle, we follow this approach (sketched in code just after this list):

  • For each solved cell in turn, remove the number it contains from all the unsolved cells in the same row, column and square.
  • If, as a result of that, an unsolved cell is left containing only one number, that cell is now solved.
  • Keep doing this until either the puzzle is completely solved, or there are no more solved cells to process.
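In code, that loop might look something like the following (again a Python sketch of the idea rather than the C# the students would write; prune() and peers() are names I’ve invented, operating on the grid-of-sets from the sketch above):

def peers(row, col):
    """Every cell sharing a row, column or 3x3 square with (row, col)."""
    others = set()
    others.update((row, c) for c in range(9))
    others.update((r, col) for r in range(9))
    top, left = 3 * (row // 3), 3 * (col // 3)
    others.update((r, c) for r in range(top, top + 3) for c in range(left, left + 3))
    others.discard((row, col))
    return others

def prune(board):
    """Remove solved values from their peers until nothing more changes.
    Returns False if any cell ends up with no candidates at all."""
    changed = True
    while changed:
        changed = False
        for row in range(9):
            for col in range(9):
                if len(board[row][col]) != 1:
                    continue                      # not solved yet, nothing to propagate
                value = next(iter(board[row][col]))
                for r, c in peers(row, col):
                    if value in board[r][c]:
                        board[r][c].discard(value)
                        if not board[r][c]:
                            return False          # contradiction: this board is impossible
                        changed = True
    return True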

The thoughtful amongst you will have noticed that this will, so far, only solve very simple puzzles. Patience: more is yet to follow. For the moment, let’s prune that example square of the board I showed above.

First, the topmost/leftmost cell contains a 5. So we know that we must remove the 5 from all the unsolved cells in the same row, column and square.

+-----+-----+-----+
|     |   3 | 123 |
|  5  |     | 4 6 |
|     |     | 789 |
+-----+-----+-----+
|     | 123 | 123 |
|   6 | 4 6 | 4 6 |
|     | 789 | 789 |
+-----+-----+-----+
| 123 |     |     |
| 4 6 |     |     |
| 789 |   9 |  8  |
+-----+-----+-----+

We can then do the same for the other cells that are already solved (the 3, 6, 9 and 8):

+-----+-----+-----+
|     |   3 | 12  |
|  5  |     | 4   |
|     |     | 7   |
+-----+-----+-----+
|     | 12  | 12  |
|   6 | 4   | 4   |
|     | 7   | 7   |
+-----+-----+-----+
| 12  |     |     |
| 4   |     |     |
| 7   |   9 |  8  |
+-----+-----+-----+

Okay, we now have a number of cells that are still ambiguous; that is, we know that they may still be one of n possible values, even though we’ve reduced the value of n by eliminating values that can’t be in the cell.

Here’s the superposition bit: we take a cell that still needs to be solved, and for every value that the cell could hold, we generate a different version of the board. So if we take the cell after the 5 & 3, that could be 1, 2, 4 or 7 (marked with asterisks here):

+-----+-----+-----+
|     |   3 |*12 *|
|  5  |     |*4  *|
|     |     |*7  *|
+-----+-----+-----+
|     | 12  | 12  |
|   6 | 4   | 4   |
|     | 7   | 7   |
+-----+-----+-----+
| 12  |     |     |
| 4   |     |     |
| 7   |   9 |  8  |
+-----+-----+-----+

…we generate four versions of the board, one in which the marked cell holds 1, one in which it holds 2, one in which it holds 4 and one in which it holds 7. You can think of this (if you like to play with quantum terminology) as a superposition of four possible boards. We then take each board in turn and proceed to try and solve it, using the same techniques again. For example, if we take the superposition in which that cell holds 2, then we need to remove the 2 from any other cells in the same row, column and square…

+-----+-----+-----+
|     |   3 |  2  |
|  5  |     |     |
|     |     |     |
+-----+-----+-----+
|     |*1  *| 1   |
|   6 |*4  *| 4   |
|     |*7  *| 7   |
+-----+-----+-----+
| 1   |     |     |
| 4   |     |     |
| 7   |   9 |  8  |
+-----+-----+-----+

We still have some ambiguous cells, such as the one marked with asterisks above, which could hold 1, 4 or 7. So we create three further superpositions, one for each of the possible values of that cell, and then proceed to solve each of those.

Okay, the software-oriented amongst you will immediately have started thinking about recursion and stacks. Which is exactly what I was after: an example which requires that the students think about objects as elements in data structures that must then be subjected to processing. I need only make it compulsory for them to use the generic System.Collections.Generic.Stack collection, and I have an interesting algorithmic problem for them to solve…
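If it helps to see the whole shape of that, here’s the search written out in Python with an explicit stack standing in for the Stack collection (it reuses the prune() sketch from earlier; solve() is my name, not part of the exercise):

import copy

def solve(board):
    """Depth-first search over the superpositions, using an explicit stack."""
    stack = [board]
    while stack:
        current = stack.pop()
        if not prune(current):
            continue                              # contradiction: discard this superposition
        # Find the cells that are still ambiguous; if there are none, we're done.
        unsolved = [(r, c) for r in range(9) for c in range(9)
                    if len(current[r][c]) > 1]
        if not unsolved:
            return current
        row, col = unsolved[0]
        # Push one copy of the board for each value the chosen cell could still hold.
        for value in current[row][col]:
            candidate = copy.deepcopy(current)
            candidate[row][col] = set([value])
            stack.append(candidate)
    return None                                   # no solution exists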

The Same Only Different

A tiny entry, but I have to…

A friend of mine: “C# is the closest thing to Python of all the C-derived languages”.

I think I get this: C# isn’t dynamic, of course, but the Collections bring to mind the power of Python lists and dictionaries, nullable types are as useful as None and Namespaces are one honking great idea.

I’m not doing much Python these days, but I am doing quite a lot of C#.  So I’m minded[0] to post a bunch of C# stuff.

[0] minded is the UK politician’s phrase for going to.  As in I am minded to do what I’ve been paid to do (as is a politician’s wont).

Sometimes, The Point Is To Have No Point

It occurred to me last night, as I examined the calluses and blisters on the fingertips of my left hand, that I’ve been playing the guitar for thirty years.

This is an approximation; I don’t remember exactly when I started, but it would have been around age 12 or so. The fact that I don’t remember starting probably means that it snuck[1] up on me and gradually became, at first, something I did and then later, a part of my self-definition. Interestingly, it would have been around the same sort of time that I began to see myself as a programmer, and then an engineer. Thus are sown the seeds of one’s own self-limitation, or something along those lines.

Part of what interests me about music is the complexity. It’s like trying to understand a fractal; every part of it that you open up reveals yet more to learn. On the other hand, as Sid Vicious said You just pick a chord, go twang, and you’ve got music, so it’s both complex and accessible in one easy measure. It slices and it dices. That fractal complexity[2], though, can be intimidating. It’s a sort of endless challenge, a mountain range that always has a higher peak. Nobody can be best at every single aspect of music; for all musicians, there is always someone else who is better than you at some part of what you do. You can become dispirited by that, or you can learn to set your own goals and measure your progress by them.

To do that, I think it’s important to understand who it is you’re playing for. Most of us have our own internal critics (and when I played in bands I wished, on occasion, that some people had more of them) but it can take a degree of introspection to work out whether those critics are worth impressing. For example, at some point in my twenties I became aware that I was judging my playing by whether my father would be impressed. As soon as I realised this, it was evident how ridiculous it was: my dad’s a talented, intelligent man of towering achievements, but he can’t play a note on the guitar and our tastes in music overlap only slightly at best. The worst internal critic, though, is myself, at around age sixteen or so. For him, what matters is being able to play better than someone else; faster, using fancier fretboard tricks, and so forth. It’s taken much longer to get rid of him than it should, and to accept, finally, that the only person who need judge how well I can play is me. The same me who sets myself goals to achieve, for no other (or better) reason than I think that they would be fun to do.

Which lets me segue towards some sort of point; the reason that you’re doing something, whether it’s playing the guitar or building software, is easy to forget and yet vital to keep in mind. This is a simple and obvious truism, captured in the endearingly bluff American aphorism When you’re up to your ass in alligators it’s difficult to remember that your initial objective was to drain the swamp. Simple and obvious, yet there are still those moments, usually at a pause in the meeting, when someone[3] says “hang on a minute, let’s get back to why we’re doing this” and thus short-circuits a deep and inwardly-spiralling argument (which is usually very technical).

Anyway, getting back to playing the guitar (which is far more interesting than actual work); since I realised that there is actually no point to it, that there is no final grade to be given or accolade to be awarded, it’s become far more enjoyable. My latest goal is to be able to play the guitar parts from Pink Floyd’s Money, Shine On You Crazy Diamond and Another Brick In The Wall (Part 2), including the solos, to my own satisfaction. In this I will be ably abetted by the excellent backing tracks available from LickLibrary[4]. Should you also be of a guitar-playing frame of mind, you may find it a Good Site To Visit.

Let there be Rock…

[1] A far nicer past participle than “sneaked”, even if it is American.
[2] An excellent name for a geek jazz band, if anyone fancies it.
[3] Occasionally this someone is actually me, but not often enough for me to feel superior about it.
[4] Anyone else get a frisson of Spinal Tap when you hear that name?

It Lives, Igor!

It’s been a while since my last entry, but there have been reasons.

The job of writing a technical article lands on my to-do list every so often. Between occurrences, I manage to forget about the need to do these so that each time a new one pops up it comes as the same surprise: a mix of anticipation at the chance to do some writing and worry about where the time to do it will come from. Anyway, in order to bolster my usual mix of weak reasoning and doubtful conclusions with some half-baked evidence, I turned to Google. In the recesses of my memory I seemed to recall that someone had once combined a PC and a Sega Megadrive[1] in one unit; a spectacular example of the misplaced faith that will lead to a convergence disaster. Googling for combined pc megadrive found me what I wanted… and also, ninth entry on the first page was a blog posting by some lamebrained pontificator who seemed to think that using eight words where one would do was somehow admirable or witty. Of course, it was me.

When I’d raided my own half-forgotten ramblings for some vaguely relevant and nearly true “facts” for the article, I noticed that the last entry was back in November 2005 and decided that, come coffee-break, it was time to blog again[2].

The reasons for the break are several:

  • Back at the end of last year, I had to change jobs, quite suddenly and not out of choice, having been made redundant. It wasn’t a pleasant parting of ways, and it turned legal (you should read that phrase as roughly equivalent to and then it turned gangrenous). During times like that blogging is neither fun nor advisable: you don’t want to say anything that might get quoted out of context against you. All the legal stuff carried on until a couple of months ago.
  • My new job (like many) started off fairly undefined and has only recently evolved to the point where I could say for sure what it is I’m responsible for and where we’re going. During times like that, blogging is tricky, since finding your feet in a new organisation is difficult enough without potentially revealing all your incorrect assumptions to everyone you work with 🙂
  • I just flat out haven’t had the time or mental bandwidth to come up with anything worth writing about.

Well, anyone who reads more than a couple of entries here will know that the last point shouldn’t really have been any barrier: there’s not an entry in here that’s ever been worth the time. However, as ever, I shall quixotically endeavour to say something worth saying, knowing that my continual failure will annoy others far more than it inconveniences me. Hooray for the Internet 🙂

So now I’m working for the company who built a good part of The Mobile Phone Project; an excellent bunch of engineers. And despite entries like this I’ve been involved in something that’s remarkably heavy on C++. Which has confirmed some prejudices. Other prejudices about C++ have remained unscathed at the very least.

A large part of what I’m doing is concerned with the way in which a services organization develops products. It’s fascinating and rewarding to start reusing the same insights gained and lessons learned (the hard way) about the vast gap that lies between the unsullied, tender idea, newly conceived, dew-fresh and trembling with antici….pation[3] and the eventual product: buttressed with user guides, FAQs, installation (and deinstallation) scripts and ready to face the cruel, unforgiving world of actual users. Which leads me, finally and via a deliberately circuitous route, to what might actually be a useful link for this entry: Finance For Geeks, where Eric Sink summarises a number of basic business principles in a way that makes sense to yer average techie. I think he does a cracking job (in that and other articles) of wrapping up stuff that I learned the hard way in the dim and distant days when it began to dawn on me that the world did not operate by the same sort of rules as computer systems. But that’s another posting…

[1] That’d be a Sega Genesis, for Americans. To save you the tedium of looking, it was Amstrad, a British company who have often displayed a flair for producing items that are so high-tech they’re ahead of the market and yet so cheaply made that they’re beneath the contempt of even the wildest early adopter.
[2] “Oh, baby, I’ll try to blog again but I know… the first post is the deepest”. Etc. Well, it was in my head, now it’s in yours.
[3] It’s a Rocky Horror reference. Either you’ll get it, or you won’t.

Doing The Light Handango

I’m getting rather brassed off with hearing the word “Handango”. Not, you understand, because I have a problem with them or their business, though they are in fact rather difficult to do business with. Nor because of any aesthetic judgements and opinions I may have about their site. Not even because of their distinctly American Corporate brand of English… no, my problem is with the word as an answer to a question.

The question to which I refer is: “suppose we develop product X, how would we sell it?” In the world of mobile applications it’s a pretty important consideration when evaluating any idea; arguably more important than “what could we build” or “hey, look at what I coded up last night”. If you can’t sell it, you can’t make money from it, and you’re out of business[0]. Unfortunately, “Handango” is not an answer to the question.

At the time of writing, Handango are proudly advertising 75,000 downloads. Let’s think about that a second; seventy five thousand different things that you might want for your device. They don’t break it down by platform unfortunately (and don’t get me started on the way they use “Symbian” as a platform – as if the average end user knew what that meant or whether they even have a Symbian phone). But let’s say that there are 10,000 applications in the Symbian section. A new application offered for sale there is, therefore, competing for attention with all those thousands of other products. There’s no way to get attention; it’s the same problem as was faced in the early days of the web – unless you promote your site, nobody knows that you’re there.

So putting your product on Handango is really just the very first step in a process of getting it to market. Where will it be advertised? Who will pay for the advertising? Who will write the copy for the adverts (and for the pages on Handango, come to that)? Too often the assumption seems to be that one puts the product on Handango and then starts leafing through the Rolls-Royce brochures whilst waiting for the cheques. In the immortal words of Helen Parr[1], “I don’t think so!”.

[0] Of course, if you’re talking open source, freeware or just-put-it-up-for-vanity-ware, this doesn’t (necessarily) apply. But you still want people to find it, right?
[1] Elastigirl, of course.

Two Nations Divided By A Common Language

Now that was annoying. I see news of an excellent new feature in Google Mail; custom “From” addresses. Now, this is a good thing because it allows me to appear to my contacts via the same address they’ve been used to seeing, but also because “googlemail.com” addresses seem to be helping my emails get marked as spam. Hmph. Anyway, off I go to set up a new address… and I find that there’s no way to do it. The “Accounts” tab that the Help so carefully explains doesn’t appear for me. I feel so rejected.

However, a few moments’ thought and Googling shows me the problem; I changed my language to “English (UK)”. Changing it to “English (US)” causes the magical new options to appear in full. Hoorah. Now my email has the right “From:” header… but also a different “Sender:” header, causing yet more spam detectors to scream with suspicion. Hey ho.

Anyway, there you have it; blogged so that others in the same situation might find the solution.

The Appliance Of Science

A couple of contacts have become involved with science.tv, a “broadband TV channel dedicated to science”. Maybe it’s just me, but every time I type the word science I hear Magnus Pyke, as sampled in Thomas Dolby’s timeless[0] classic “She Blinded Me With Science”.

Anyway, hand-waving wild-haired boffins apart, they’re soliciting Ideas for Science Programmes. A bit of a wide brief, really. I tend to find myself thinking about this like I do technology product development; don’t start from what can we make? but start from who would we make it for?, or even better, who can we sell it to?. Find the market need first, guys. And there is a market need; as I’ve noted before, the public perception of science and scientists is pretty skewed by people like Dan Brown (on whom may everlasting opprobrium and contempt fall) or even Michael “We’re In The Hands Of Engineers” Crichton. Where are the Bronowskis of today? David Attenborough is hanging up his optimistically white suits after his next series. There’s a need, oh yes.

[0] Timeless in the sense that the 80s classics are all, um, timeless. Which is to say, dated.