Unintended Consequences

In which I dare to dip a toe into the murky waters that surround the subject of… Microsoft.  But hey, no zealotry here, one way or the other.

Though I can find no source for it, I have a feeling it was Larry Niven who wrote this (probably for one of his characters): “Why would anyone want to conquer the world?  They’d have to run it…”  If you know the actual source, let me know.  Any road up[1], I have no particular hatred or enthusiasm for That Company In Redmond.  They’re a fact of life, like the lousy weather this summer, or the way gravity pulls me towards the centre of the planet.  One just deals with it.

But I do think that some of their top executives, in their fervour to dominate the world’s computers, have missed an Unintended Consequence.  They have, in effect, won that domination as far as Home Computers are concerned.  Okay, great, fine, well done Mr Gates and the boys.  But with great power comes great responsibility (as some arachnid’s Uncle once said).

In the UK we have an interesting approach to vital services.  The Conservative governments of the 80s and early 90s privatised many of them – water, electricity, phones and so on – but appointed ombudsmen to oversee them.  Thus you have private companies operating within government-set regulations, so water companies (for example) can’t arbitrarily disconnect people.  The idea seems to be that it’s okay to make money, as long as you face up to your responsibilities.

Being a woolly-minded liberal[2] Guardian reader, I’m not sure I agree with that, but let’s take it as a model and run with it, see where it goes.  First, I’ll posit that in your average Western society, computing, and Internet connectivity, should be considered alongside phone service; not an absolute necessity of life, perhaps, but pretty basic and to be expected.  If Microsoft want to make money from it, so be it, but they have to follow the rules.  And the rules are where it bites, or should bite.  If your product isn’t up to scratch, there should be redress.  If you provide something which exposes people to hazards (of whatever kind), you carry some liability.  If your product contains defects, you have to be held to that.  You took money for that product; people have a right to expect that it operate.  Do it well, you’ll be rich forever.  But you can’t do it badly and expect to get away with it indefinitely.  Running the world is difficult, sure; it’s insanely complex, has a zillion different groups with different agendas and you can’t please all of the people all of the time.  But Mr Gates, what did you expect?

Is this an anti-MS rant?  No.  Like I said, I have no agenda one way or the other.  There are five computers in this house, and all but one run XP.  I’d hold Apple to the same standards, or Linux vendors if they had the same dominant market position.  But they don’t, and there’s a get-out.

Here’s a prediction: shrink-wrap license agreements for software won’t last as a protection any more than the equivalent would for a car.  Yes, the problems with Windows are due to Bad People who unleash Nasty Things on the web.  That’s the world as it is; it would be nice if things were different, but they’re not, and software that is sold today should be sold in that context, not as though the world was basically nice and fluffy and your software isn’t to blame if Bad People exploit holes in it.  Hey, Microsoft, you won; you control it all.  Now you have to run it.  We’re waiting.

[1] As we say Up North here in the UK; means “anyway”.
[2] Liberal doesn’t mean the same thing in the UK as it does in the USA, ok?

Interesting Times

As in the Chinese curse: “may you live in interesting times”.  wxPython is currently going through some, as the migration from 2.4 to 2.5 takes place.  I saw the announcement of the new PythonCard pop up on the RSS feed, and thought that this could be just what I’ve been looking for as a way to play very-rapid-development games with Python apps and indulge Small Daughter.

You see, the company I work for is an ideas company.  We think things up.  On learning this, Small Daughter disappeared into her room and came out with a drawing of her Toy Idea; not just a conception, but carefully labelled, the function and position of every button clearly thought out, the whole forming a product whose form and function had been conceived together, mutually supporting.  Of course, it’s in the nature of fathers to see such things in the work of their children, but by God, this was a whole lot better considered than some of the ideas I’ve heard from adults in my time.

Anyway, what I wanted to do (and still will do, in my Copious Free Time[1]) is to take her image and turn it into a working demo, where she can click on the buttons she drew and have the thing actually appear to work for her.  PythonCard is just the job.  Could you hear the “but” approaching in that sentence?

The shiny new PythonCard requires wxPython 2.5.something.  Now, I know that one of my favourite editors, SPE, doesn’t yet work with 2.5, but (I reasoned) I can live without that for a while – Boa Constructor does all the Zope editing, including Python scripts, so I may as well work in that.  Eventually SPE will catch up and all will be well.  So I install wxPython 2.5.  Bang.  Goodbye Boa Constructor too.

That I can’t deal with – I need at least one tabbed, STC-esque, session-saving editor at hand.  Usually I have at least five or six files open at any one time; IDLE or PythonWin just don’t do it for me anymore.  So (I reason again), I’ll go to a previous release of PythonCard, one that’s less functional but which works with an older wxPython.  Ah.  There are no earlier releases on the PythonCard SourceForge page.

Do I have any sort of reasoned conclusion here?  Well; for one, I should stop reasoning about the most likely combinations of packages; my track record is demonstrably lousy.  Two; there’s a phenomenon that occurs in open source projects wherein the world starts to look like one of those cartoon-style tourist maps they sell in the USA.  You know, the ones that show all the local landmarks in $area large and close up, spread the rest of the US around further away, and reduce the rest of the entire planet to a dim strip on the horizon.  A sort of distance-based scaling, presenting one’s current point of view as definitive – or, more importantly, one’s current set of installed packages, modules and other related stuff.  This isn’t a bad thing; it’s probably inevitable.  The world is too full of variables to try and cope with them all.  I notice that more and more of the Python stuff I play with these days sports a note about use of Python 2.3; testing code with a myriad of different versions and combinations is a lot of effort, and not nearly as Interesting as pushing forward with new functionality.  Testing, for open source, is sometimes seen as something achieved by getting lots of people to use your code.

Ho hum.  I add “learn to knock up a basic wxPython app” to my to-do list and return to the Inbox.
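
For the record, that to-do item is hardly Everest – a bare-bones wxPython app with a single clickable button comes out roughly like this (a sketch only; the names and the button are invented for the purpose, and I haven’t wrestled with sizers or anything grown-up):

import wx

class ToyFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "Toy Idea")
        panel = wx.Panel(self, -1)
        button = wx.Button(panel, -1, "Press Me", pos=(20, 20))
        # Route button clicks to a handler, 2.5-style.
        self.Bind(wx.EVT_BUTTON, self.on_press, button)

    def on_press(self, event):
        wx.MessageBox("It appears to work!", "Toy Idea")

app = wx.PySimpleApp()
frame = ToyFrame()
frame.Show()
app.MainLoop()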

Edit: Andy47 pointed me at the older PythonCard releases – they’re under the file release name “prototype”, so I’d assumed that was something else altogether.  Ah well, it’s being an educational day so far 🙂

[1] This is a Joke.  I don’t have any.

On The Reading Of Lines

A selection of readline modules, their proliferation, care and feeding.

When I typed the title of this entry, it kept emerging from my fingers as “The Readling Of Linels”; how wonderfully Stanley Unwin.  Which has nothing much to do with anything, but I like it.

Those of us who try to write shells, or other command-line-oriented stuff, eventually come up against the world of confusion that is cunningly hidden behind the simple statement:

import readline

If you’re in the happy situation of running code on only one machine, or one platform whose configuration you control utterly, then that may be all you need; a fully functional readline implementation may spring into being, installing itself as the handler for any calls to sys.stdin.readline() and providing you with hours of error-free input, history recall and what not.  Or, if you’re not so fortunate, it may not.  This will probably matter to you whether you’re writing console-based Python or not – the Python interpreter itself loads readline at startup to provide line-editing functionality.
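
If your code has to run wherever it lands, the usual first move is a defensive import – a minimal sketch, nothing clever:

# Pull in whichever readline is installed, if any; carry on regardless if not.
try:
    import readline
    HAVE_READLINE = True
except ImportError:
    readline = None
    HAVE_READLINE = False

def read_command(prompt):
    # raw_input picks up readline's editing and history automatically once
    # the module has been imported; without it you just get plain input.
    return raw_input(prompt)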

Most (if not all) Unix-based Python installations will have a readline that wraps the GNU readline library or some equivalent thereof.  This is the model for readline functionality against which other implementations are measured.  It’s not perfect from a Python point of view (for example, the command history content is available only via get_history_item() and get_current_history_length()), but since it’s essentially a C library wrapped up, such things can be forgiven.
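
Getting the history back out, for instance, means walking it item by item – note that get_history_item() counts from 1, not 0:

import readline

def history_as_list():
    # The GNU wrapper exposes no history list directly; rebuild one by index.
    length = readline.get_current_history_length()
    return [readline.get_history_item(i) for i in range(1, length + 1)]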

On Windows, the situation’s much more variable.  Those of us who use Cygwin and run Python from a bash prompt get an equivalent readline built in, assuming we use the Python environment that comes with Cygwin; but run another Python distribution, such as Enthought’s or ActiveState’s, especially under the Windows CMD shell, and you’re back with very basic line-editing functionality.

There are at least two alternative readlines that you can install.  Chris Gonnerman’s Python Alternative Readline falls short of the GNU benchmark in a couple of significant respects.  For instance, it has minimal history handling – there’s no way to access the actual history list other than by referencing readline.history directly (which is not especially safe), and the history file is read at module import time.  It provides a number of the “standard” methods, such as set_completer, parse_and_bind and so on, but they’re all stubs (all they do is pass).  This is somewhat less than useless, since one can’t even detect whether the module has the actual functionality without inspecting the source.

Gary Bishop’s equivalent from the UNC Python Tools project is considerably more functional.  It requires Thomas Heller’s ctypes to work, making installation not quite as simple as drop-into-site-packages, but it’s well worth it.  It has an almost complete set of equivalents to the GNU module, but scores highly by providing a large number of extended methods (as attributes of a Readline class) and adding intelligence to the others.  For instance, add_history is smart enough not to add a duplicate entry to the end of the existing list.  The best feature, in my humble and problem-oriented opinion, is that the readline.Readline class is easy to subclass, for example to re-enable tab functionality (by default “tab” is the command-completion character, and if you’re used to using it for indent, losing it can drive you mad).

In fact, with this module, readline support is arguably better on Windows than on Linux (heresy!).  The ability to subclass the behaviour of readline is a real advantage, allowing extra functions to be added by defining methods and binding them at runtime using parse_and_bind.
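
For what it’s worth, the basic completion machinery looks much the same whichever implementation you’re on (assuming, of course, that the calls aren’t stubs) – roughly:

import readline

COMMANDS = ['history', 'recall', 'help', 'quit']    # an invented command set

def complete(text, state):
    # The standard completer protocol: readline calls this repeatedly with
    # state = 0, 1, 2... and expects the state'th match, or None when done.
    matches = [c for c in COMMANDS if c.startswith(text)]
    if state < len(matches):
        return matches[state]
    return None

readline.set_completer(complete)
readline.parse_and_bind("tab: complete")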

The problem with there being several different ways to address the same problem is that those of us attempting to write portable code have to try and deal with them all.  Thus Quasi has to attempt to detect what subset of all readline functionality is available and work around the bits that aren’t.  Of course, the obvious solution to that is to write yet another readline that interoperates with all the existing ones… and if I had time for that I’d be a happier person than I am now 🙂  Anyway, should you know of any implementations I’ve missed, or find fault with the way Quasi uses readline (hint: try the history and recall commands), please let me know.
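
The detection itself is nothing clever – just probing for the bits you need before leaning on them.  Something along these lines (illustrative only, not Quasi’s actual code):

try:
    import readline
except ImportError:
    readline = None

HAS_HISTORY = readline is not None and hasattr(readline, 'add_history')
HAS_COMPLETION = readline is not None and hasattr(readline, 'set_completer')

def remember(line):
    # Only record history where this readline actually supports it - and
    # bear in mind that some implementations provide the method as a stub,
    # which hasattr() cannot tell you.
    if HAS_HISTORY:
        readline.add_history(line)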

Start At The End

The Mobile Phone Project is, metaphorically, tipping over the summit of the whole development life cycle and reaching that point where it will begin to slither at ever increasing speed towards completion.  Those doomed to stand between the majestic unstoppable oncoming mass of it and the fragile walls of deadlines risk being crushed.  That’d be me, then.  There is one fact which, at first sight, seems to offer hope; the deadline is entirely of our own making and subject only to our control, for the moment.  This puts us in the allegedly happy situation where the developers of the project are also those mandating the pace and dates for development.  However, it’s not actually that convenient.  In fact, it’s been rather annoying.

There seems to be a general truism about most people (including me; I’m sure there are exceptions).  We know what we have to do today.  We’re pretty sure what we have to do this week.  We have a general idea of how next week may go… but beyond that, things get fuzzy.  Yes, there’s a to-do list, maybe even a task database and a project plan, but conceptually there’s this big empty space under the carpet of the future where it’s all too easy to sweep anything that you can’t get around to right now.

There are, of course, many ways in which time-perception affects work, and those of us who do project management have seen this particular demon before.  But I’m not really referring to the problem of estimating, more to the way in which it’s easy to become unmoored and to drift in the timeline of a project if there’s no definite end.

So, we’re fixing the end date.  Nailing it, in fact, by committing external parties and actual financial resources even though, in theory, we don’t actually need to do that yet.  When that date’s attached to huge anchors and embedded in the calendar, all sorts of good consequences flow.  Chief amongst them is that the project’s paths now extend backwards in time from that point, forming chains of tasks in dependency order.  It’s actually easier to do this backwards, since it focuses attention on the actual things needed for product launch as opposed to the things-we-thought-we-would-need-to-do-but-don’t.  Not being a fan of BigDesignUpFront, I find this a liberating and useful shift of focus.

Incidentally, I make no apologies for linking ad nauseam to the C2 Wiki on Programs, Patterns and Projects; one of the best resources out there for anyone even nominally in charge of software development, whether or not Extreme Programming is your bag.

Within You And Without You

A blog on namespaces and, simultaneously, a tribute to the late George Harrison.  I do this all without a safety net, you know.

As the Zen of Python so, er, neatly puts it: namespaces are indeed one honking great idea.  Though one meaning of the verb “to honk” in Brit/Oz circles is “to vomit”, and I’m sure Tim Peters didn’t actually intend that.  Namespaces are Interesting, when they’re not being tricky – I’m sure most beginners at Python have been caught at least once by the name-resolution rules in methods.  Anyway, yet again I can use a pondering on a Python point to blatantly plug Quasi, where namespaces have led to some intriguing code.

Consider a Python interpreter, into which one types some code, such as:

while True:
  print "Repeat after me; endless loops are bad"

There are two parallel programs operating here: the shell, which receives keystrokes or lines of input, and the actual interpreter, which runs the resulting parsed-and-compiled code.  Both are part of the same process, and so they share process-level attributes like the current working directory.  However, depending on how the shell’s been written, it and the interpreter may not share namespaces.

The exec keyword allows passing of dicts for the local and global namespaces in which the code object is to be executed.  In Quasi, these are separate namespaces, so a command like:

x = 1

will define an x that’s in the interpreter namespace, but not the shell’s.  This has some interesting practical consequences.  Firstly, when exceptions are raised in the executed code, control returns back to the shell, which doesn’t have access to the variables that may have caused it, and therefore can’t do much about it, other than catch the exceptions and show a traceback.  This isn’t, in practice, a problem in any context other than that of someone trying to run a debugger on a shell… but if people will insist on such mental convolutions, that’s their own lookout.
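
A stripped-down illustration of the split, using nothing but plain exec (the dict names are mine, not Quasi’s):

# The shell keeps its own names; user code runs against a separate dict
# standing in for the interpreter's namespace.
shell_namespace = {'prompt': '>>> '}
interpreter_namespace = {}

exec "x = 1" in interpreter_namespace

print interpreter_namespace['x']     # 1 - visible to the executed code
print 'x' in shell_namespace         # False - the shell never sees it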

Secondly, and more Interestingly, any shell commands (that is, stuff that’s executed directly by the shell) don’t have access to that namespace.  You can’t, for example, hack a cd command into the shell and allow it to do stuff like

myDir = "/mnt/extended/downloads"
cd myDir

Instead, you must implement the cd command so that it passes some equivalent command to the Python interpreter, where myDir is accessible.  There are some nice advantages to this: the shell variables don’t pollute the interpreter namespace, the interpreter and the shell can have different sets of loaded modules, and so forth.
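
A rough sketch of what that means in practice (illustrative only – Quasi’s real command handling does rather more):

def shell_cd(argument, interpreter_namespace):
    # 'argument' is the raw text the user typed, e.g. "myDir".  Rather than
    # evaluating it in the shell (where the name doesn't exist), build a line
    # of Python source and execute it in the interpreter's namespace.
    source = "import os; os.chdir(%s)" % argument
    exec source in interpreter_namespace
    # The working directory is a process-level attribute, so the shell
    # sees the change too.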

This has led to a Design Principle for Quasi that’s been sort of implicit for a while, but which was thrown into sharper focus when cd, pwd, pushd and popd were added.  It is this: with very minor exceptions, Quasi does everything by turning the commands you type into Python source code that’s then compiled and passed to the interpreter.

Consider that most canonical of Quasi examples:

myFiles = `os ls *.py`

This gets translated to the poetic:

myFiles = quasi.QuasiOs(True).execute('os',"ls *.py",())

Just rolls off the tongue, doesn’t it?  It gets even more involved if we do variable substitution:

os cp $myFiles backup

expands to:

quasi.QuasiOs(True).execute('os',"cp %s backup",[(myFiles,'i')])

Note: that’s a string, which is then compiled (using the CommandCompiler class from codeop) and finally passed to exec.
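
For the curious, that compile-and-run step is roughly this shape (simplified – error handling and Quasi’s bookkeeping omitted):

from codeop import CommandCompiler

compiler = CommandCompiler()

def run(source, namespace):
    # CommandCompiler returns a code object, or None if the source is an
    # incomplete statement (more input needed), just like the real REPL.
    code = compiler(source)
    if code is not None:
        exec code in namespace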

So Quasi is, in fact, an interpreter for an interpreter, which interprets typed commands and compiles them into Python source which is then compiled into Python bytecodes and executed by a Python interpreter which is also the same interpreter that’s executing the whole thing.

And some people wonder why programming is so fascinating…

Caterpillar to Chrysalis to Butterfly

Or: idea to software to product.

I like the guys at FudCo.  They write thoughtful, reasoned pieces; since I found them, I’ve tried to limit myself to working back through their archive slowly, one article at a time, so that I get a chance to think about what they’ve said; to chew it over and spit out the gristle.  There’s remarkably little gristle; Chip’s How To Deconstruct Almost Anything is a tiny polished gem of demolition; a shaped charge placed against the weakest point of the great castle of post-modern theory.  But this is a technical blog, so I’ll link instead to the first of his Beware The Platform posts.

In this, Chip says:

A better paradigm is a different paradigm, and a different paradigm is nearly unsellable. The problem was that in order to enable customers to do things they couldn’t [previously] do… they had to do things that you just don’t do… (which was simply unacceptable). We were selling a technically superior solution, but one which asked too much of its customers. It wasn’t that what we were asking them to do was hard, but we were asking them to adopt a completely different view of application architecture, and that’s just not the kind of change people make without an overwhelmingly compelling reason.

Well, uh, yeah.  Don’t get the wrong idea here; I’m not claiming superior insight or anything, but that’s a lesson that we (meaning the somewhat off-the-wall company that employs me) learned the hard way.

A while ago, we went through a phase where people would bring us Ideas.  We got very good at assessing these very fast, sorting the few grains of actual value from the heap of chaff with which we were presented.  There were object lessons aplenty: in how an inventor tends to become blind to the problems of the invention, how oddball individuals can’t resist generalizing from themselves to model people in general and how wildly absurd the valuation of one’s own cherished ideas can be.  I’m talking “pay me a million pounds up front and you can have the idea” absurd here.

One of the biggest traps was the familiar old boil-the-ocean conceit.  If you’ve never come across this, I think the canonical reference is here, but you can use this example: my newly invented rolling-ball mode of transport is a fantastic idea and worth a fortune.  All that’s required is that every country in the world refit their entire transport infrastructure to accommodate it.

Very, very few ideas and products that require a paradigm shift ever succeed[1].  Some do, but to use them as models to follow is to miss the point; new products catch on where they follow the way in which things are already done, with an incremental improvement.  If you can get an order of magnitude improvement, but still follow the way people expect to work, even better.

Let’s take one of my absolute favourite examples du jour, video calls.  Here in Europe[2], we have video phones – not so advanced as in Asia, but they’re here.  The 3 network (website so feeble that it fails to show anything in Opera; use IE) started out promoting video calls with a vengeance.  They were, it was reasoned, so amazingly cool and better than boring ole voice that the population would flock to buy phones – even if coverage was patchy and the handsets uncomfortably brick-sized, with a battery life that could be measured in mayfly lifetimes.  They quickly shifted to trying to promote video clips, then to selling vast numbers of free minutes.  Look on their works, ye mighty, and despair.

Video calling shifts paradigms.  Phones, experience says, are held to the ear, not in front of the face.  People do other things whilst on the phone – they type, read, scribble rude notes about the caller for their colleagues – even drive (though, of course, you and I would never dream of doing that, even hands-free).  The mental model of a phone call does not include seeing the person at the other end.  That requires a different model – and if/when video calling succeeds, it’ll be because that different model catches on, in different social contexts, for different uses.  Mobile phones work sufficiently closely to the way phones always have worked to succeed.  It’s a phone – you dial, you hold it up and talk.  The amount of adaptation required is minimal.

Anyway, enough derivative pontification.  How do I relate this to software?  Chip’s article is, in part, about the failure of a software product that attempted to do something better by doing it in rather too different a way.  When software moves from being an intellectual curiosity to a consumer or business product, certain things must be true if there’s to be a reasonable expectation of success.

Firstly, it must fit into the way the real world works.  Users and developers rarely have time to try new things for the fun of it (unlike geeks).  There are risks; financial (to the buyer, company or CIO/CTO who signs off on the purchase), political (“hey, there goes Ben, the guy who committed us to writing the thing in SmallTalk to run on Plan 9 – he’ll never make promotion”) and personal (“If I knew then what I know now about Zope, I’d not be working late every night this week”).

Secondly, it needs to be presented correctly.  I’m notorious in certain places for saying things like “packaging is 80% of a software product” but hell, I stand by that.  People buy things based (in part) on the packaging – and by that term I don’t just mean the box.  I mean the box plus the manuals plus the advertising plus the website plus the name plus the reputation plus the user community plus the knowledge base plus the… you get the idea.  The Mobile Phone Project we’re working on is a case in point; the idea is simple.  The effort is going into presenting it, with an appropriate name, image and way of working when in front of The Average User.

Even software that’s technically mediocre can succeed with all that; an uncomfortable truth, but it’s demonstrably true for other markets, why should software be different?  Nobody eats at McDonalds because the food is wonderful; it’s familiar and well-packaged.  Most people understand how a McDonalds[3] works.  Windows has many technical weaknesses but people know and understand the way it works; the packaging is reassuring.[4]

Enough.  There’s work to do, and tea to drink.  Oh, and my video phone is ringing; excuse me while I go and put on some clothes.

[1] I can’t help but remember the Dilbert cartoon (no online reference) in which someone refers to “a paradigm shifting without a clutch”.  Excellent mental image.
[2] I write with the built-in attitude that most of the people reading this are American; 60% US IP addresses last time I bothered to do a study.
[3] The grammar pedant in me just screamed and fell over at that; “a McDonalds works”.  Where the hell does the apostrophe go?  And is it really singular?
[4] And Linux has the opposite problem; technical strengths, but far too high a paradigm-shift-cost for the mass market.  Bear in mind by “people” here, I mean the completely non-technical majority of the population.

Don’t Talk To Me About Life (Hacks)

Danny O’Brien says that Life Hacks are those little quick hacks you do to solve a problem – hacks that the rest of the world could do with too.  Invest 53 minutes of your time (plus download) in watching the video of his talk at NOTCON.  Not living in London, or the Valley, or working in the sort of job where I can jaunt off to stuff like this at the drop of a hat, I couldn’t attend.  But hey; this is the twenty-first century, so I can watch at leisure and ponder.

Releasing small hacks is good.  BUT: what are they useful for?  What’s the intended context of use?  I’d never expect to take someone else’s personalised backup script and drop it into the mess of interrelated machines and VPNs that is my digital environment.  I guess it’s more for inspiration; the how has someone else done this? thing.  Yesterday I spent twenty minutes finding out how Amazon get those little one-pixel blue borders around the edges of their “boxes” (left hand column).  It’s a bizarre trick – see if you can work it out.  Those who do HTML for a living are excused.  Anyway, this example-nature, to me, is the true value of a Life Hack.

Releasing Life Hacks has a potential downside.  When I was younger (so much younger than today), I released a little package to let Java classes be run under IIS and other ISAPI web servers.  It was a good hack to give out, but one thing I learned is that packaging is 80% of software development (that’s a deliberately controversial figure).  It’s not the code that does the job that matters so much; it’s the code that installs it, cushions it, protects it, interfaces with stupid (and clever) humans on its behalf.  That’s the difference between something that you can release and something that should stay in your home directory, hidden from sight like the mad deformed cousin in the attic.

Danny also talks about how scripting isn’t more widely used because it’s “brittle”.  I see this as a symptom of the worldview that inevitably gets wired into an application – no matter how many objects it exposes, the worldview permeates – and the more objects exposed, the more complex the scripting.  The worldview is, of course, that of the developer(s); it’s the way they’ve chosen to represent the problem domain in software.  The more one tries to use a package for some task that is not quite the one it was written to solve, the more you can feel it creak around you as it’s bent to fit the shape of the new problem.  But then this is inevitable, because you can never really design for the future; there are always unintended consequences and unexpected uses.  What you can do is to expect that your software will be used as a toolbox.

I still prefer, in general, what one might call the Unix approach – a bunch of separate tools that each do one thing well, and expect to be automated, scripted and generally mixed together in a heterogeneous environment[1].  Canonical example: the entire set of grep, sed, bash, uniq, sort, cut (and so on) utilities.  This is the opposite of what I tend to end up labelling “monolithic” code; one mucking great system that does everything for you, just as long as you work the way the designers intended.  Canonical example: Outlook.  There’s a future blog entry about how much I dislike Outlook but lack the time required to replace it, but that’s something too many bloggers have done.  And yes, I’ve been into the Outlook object model.

For me, the way in which the libraries of scripting languages have evolved represents the Unix approach in a different context (and here I have both Python and Perl in mind).  With each comes a vast assembly of modules that do things, like parse strings, invoke http requests or walk directory structures.  They’re all intended to be scripted; to be mixed together and used to solve general problems.  Along with this goes the wrapping of other applications or packages in Perl or Python libraries so they can be jolted around from inside a scripting language.  Part of what’s required to let this happen is that the wrapped thing has take-it-apart-to-see-how-it-works-ness, that the scriptor can peer into the workings of the thing to be scripted to see what’s actually going on.  Windows is bad at this – COM discovery sucks[2], even if you have the documentation.  Python, in contrast, is very good at it; it’s all built into the language – you can dir() stuff, read __doc__ strings, and so forth.  The killer is the ability to experiment that a scripting language gives you.  Don’t know what a module does?  Fire up the Python interpreter and instantiate some objects.  Play around.  Explore.  The very antithesis of the monolithic approach.  The essence of exploration.
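
Which is trivially easy to demonstrate – pick a module you’ve never looked inside and poke at it from the prompt:

import textwrap    # any module you're curious about will do

# dir() lists what the module exposes; __doc__ tells you what it thinks it's for.
print dir(textwrap)
print textwrap.fill.__doc__

# Then just try things and see what happens.
print textwrap.fill("the essence of exploration " * 10, width=40)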

If you can’t find out how to actually do it, you won’t script it unless you’re paid to.  And nobody’s paid to do Life Hacks.

P.S. I noticed today (6/8/04) that Danny O’B’s coverage of OSCON talks about part of the Subversion philosophy: “Binding surfaces is big with Subversion. Having a lot of ways to plug in your own code into a system is good (CVS just has a pipe). The Apache foundation are big on big binding surfaces for glue, because that’s how they felt Apache beat out Netscape server”. Couldn’t agree more. It’s on the Oblomovka site I linked at the top.

[1] I know that, to be pedantic, this is the Unix shell approach, but that would make for an uglier sentence.
[2] I’ve never been 100% sure that this Americanism isn’t rude, but now it’s on children’s TV, so it can’t be that bad.  Or maybe it’s like “freak” – people are just being carefully ignorant of what it actually means…