Wednesday 30 December 2009

Say it with me: dumb ideas are dumb

There is a dangerous meme rife in society today, and though some people may find the following offensive, judgemental or unfashionable, I believe it needs to be said. Your forbearance is therefore appreciated while I do so. ;-)

First, some axioms. These should be unarguable:

  • Everyone is entitled to their own opinion.
  • Not everyone's opinion is as valid or useful, or has as much merit, as everyone else's in every single situation.
  • Nobody is entitled to their own facts.
  • You have freedom of speech, thought and association. You do not have freedom from criticism, freedom from offence or freedom from correction.

The problem arose when the first axiom (a healthy recognition that other people hold different opinions) mutated into further beliefs: that everyone's opinion is equally valid, and that contradicting someone who is in error is impolite, arrogant or somehow an infringement of their freedoms.

One look in some Lit Crit classrooms will show you what happens when you aren't allowed to contradict or dispute someone else's opinions, and one look in a politicised fundamentalist church will show you what happens when you believe you're allowed your own facts, instead of just your own opinions.

And while people might enjoy studying Lit Crit or subscribe to fundamentalist religions, if they've got any sense they'll notice that people acting in either of these two roles have rarely done anything tangible to better the overall lot of their fellow man... unlike all those rude, elitist, judgemental, snobby scientists, engineers, geeks and other educated types (who instinctively recognise that ideas vary in quality and efficacy, and have therefore been quietly and industriously changing the world for the better for the last few hundred years).

The Western world (ably led, as ever, by America) is learning the hard way what happens when you confuse recognising the existence of everyone's opinions with granting equal worth to everyone's opinions. Moreover, while we mouth thought-terminating clichés like "everyone deserves an equal say", we routinely disregard them in practice. Who seriously consults their toddler in the back seat on how to get home when lost in the car? Who leaves their neurosurgeon's office and seeks a second opinion from their local garage mechanic?

It's ok to judge and disregard things which demonstrably have no merit. We all commonly agree that "all people" deserve some sort of minimum baseline of freedoms, protection, treatment and standard of living. And yet we still deny some of those benefits to people whom we have judged and found undeserving of them, or actively dangerous (imprisoned criminals, for example).

We try to pretend that all ideas are equal, but it's not true - some ideas are brilliant, explanatory and useful, while others are stupid, dangerous or self-destructive. And refusing to judge them, pretending those ideas are harmless, valid or beneficial, has much the same long-term effect on society as refusing to judge dangerous people would: internal chaos and developmental stagnation.

We don't have to ban stupid ideas or opinions, just as we don't have to kill criminals. Instead we isolate criminals in jails so they can't damage society any more.

We can do the same with ideas, simply by agreeing they're dumb.

Refusing to publicly label a dumb idea "dumb" for fear of offending someone is - long term - as bad for our culture and society as refusing to lock away criminals "because their families might be upset".

Although it's unpopular to point out, sometimes people and ideas need to be judged for the good of society, even if it does end up upsetting or offending some people.

For the last decade or two - beginning around the advent of political correctness, though I suspect that was a symptom rather than a cause - we've done the intellectual equivalent of systematically dismantling the judicial system and all the courts and prisons in society. Now - just as we'd be overrun with criminals if we dismantled all the prisons - we're overrun with stupid ideas, unqualified but strongly-expressed opinions and people who act as if they can choose their own facts.

The only way you can help redress this situation is by not being afraid to offend people - if someone says something stupid, call them on it. Politely but firmly correct people when they make erroneous claims. Question badly-thought-out ideas, and don't let people get away with hand-waving or reasoning based on obvious flaws or known logical fallacies. Yes they'll get annoyed, and yes they'll choose to take offence, but we don't free criminals because they or their families are "offended" at their having to stay in prison. They are there - largely - because they deserved and invited it, and because the world is better with them there. Likewise, dumb ideas deserve and invite correction, and the world would be a better place for everybody if more people judged and criticised them when they came across them.

Sometimes uncomfortable things do need to happen to people, and certainly if they invite them. There's no advancement without the possibility of failure, and removing the opportunity for failure removes the opportunity to develop. If no-one ever tells you you're wrong, how will you ever learn?

But most important of all, while judging people is unfashionable, can be dangerous and should largely be left to trained professionals, don't ever be afraid to judge ideas.

Internet memes are not without purpose

Internet memes get a lot of stick - they're usually considered mildly amusing at best, and sterile, content-free, mindless, bovine group-think at worst. However, both these assessments are incomplete - they fall into the trap of judging memes as "good" or "bad", instead of asking why they exist at all.

Memes aren't just jokes - they're the way we form bonds and generate shared context in distributed virtual communities, just like "living near" and "saying hello every day" were the ways we formed context and social bonds in physical, centralised communities like villages, and "chatting around the water-cooler" and "bitching about the boss" are ways we form social bonds and shared context at work.

Part of the problem in society is that as we centralise in huge cities full of people we don't know, we lose the feeling of belonging to a distinct community, which is why city life can be so isolating for some, while others fulfil the need elsewhere (churches, sports teams, hobby/interest clubs, etc).

The only difference between these groups and the communities at the core of sites like reddit, Fark or 4chan is that instead of physically going somewhere to interact with other community members, we're geographically separated and typically a lot more diverse in terms of outlook, age, race, physical appearance and interests.

This means that - for a community to form - we require shared context and some way of differentiating between people "in the community" and those outside it. This is where memes, references and in-jokes come in... and it's also why we have terms like "redditor" or "digger", instead of "people who read reddit" or "people who read Digg".

You can even compare different kinds of communities, and memes seem overwhelmingly to arise where other, more traditional forms of shared-context-building are unavailable or inapplicable:

  • At one extreme, memes rarely arise in traditional physical communities - it's pretty rare for a village - say - to give birth to catchphrases or memes, because the community already has plenty of shared context from living in the same region, sharing the same culture and language, sharing largely the same core beliefs and seeing each other regularly.
  • TV shows pioneered the way: catchphrases and quotes (though typically only a few per show) could be used to find and bond with like-minded individuals when we encountered them, even though we didn't necessarily live near them or see them regularly.
  • Moving online, sites like Facebook are still largely clustered around groups of people who have some real-world relationship, and for this reason - though people occasionally import memes from other communities for the purposes of humour - these sites still rarely give birth to new memes of their own.
  • More frequently, memes arise from forums (fora?) or social news sites like Slashdot, reddit and Digg. These are sites with a strictly limited ability to share context - their communities are culturally, socially and intellectually extraordinarily diverse, and stories are posted (and disappear beneath later submissions) so fast that there's no guarantee that any two individuals will have seen the same news or read the same content from one day to the next. Practically all that these sites offer in the way of shared-context-building is the ability to recognise the usernames of other users when they post, which - with the sheer number of users - is a wildly inadequate method to generate strong social bonds.
  • Most clearly of all, 4chan is a website which is prolific in the generation of new memes - indeed, many memes which users of other sites assume originated on those sites in fact originated on 4chan. 4chan is also unusual in that it does not enforce uniqueness of usernames, instead assigning a deeply unmemorable number as the only guarantee that a given "Bob Smith" is the same "Bob Smith" whose comments you remember reading previously. In fact, 4chan even allows completely anonymous posting, and on 4chan's most famous meme-originating boards (/b/ and others) the overwhelming majority of posters post anonymously. This means that users are literally bereft of any way to reliably recognise each other or establish a sense of community, and yet they're simultaneously the most prolific creators of internet memes.

You can see from this trend that memes are a distinct method of community-building, almost unknown in human history, which has largely evolved in the last few decades in response to the increasing isolation of modern life, with its lack of traditional ways to build shared context or easily encounter familiar individuals.

When you get right down to it we're social monkeys, who are usually happiest in a tribe of one kind or another. Due to lifestyle and technology, how we form and maintain those tribes is changing, even over the last few years, and if we can resist the temptation to dismissively complain about this emergent behaviour it can teach us a lot - both about ourselves and about the new kinds of communities we are forming.

Friday 20 November 2009

Your Kids Aren't Lazy; They're Just Smarter Than You

There's a recurring theme in the media, and in conversations with members of older generations, and it goes something like this:

"Kids these days have no concentration span. They're always Twittering or texting or instant messaging, and they're always playing these loud, flashy computer games instead of settling down to listen to the radio or read a good book. Computer games and the internet are ruining our kids minds! Won't someone think of the children?"

Oddly enough, these criticisms are often associated with complaints that "kids will spend all hours of the day on the bloody internet or playing these damned games, instead of going outside and climbing trees or riding their bikes", although nobody seems to see the inherent contradiction there.

In a nutshell it's this: surely if these kids really had poor attention spans they'd get bored of the game in short order and move onto something else? And if they lacked the ability for delayed gratification how would they manage to spend hours unlocking every achievement in Soul Calibur or grinding for loot on World of Warcraft?

I've been thinking for a while that much of the perceived "reduction in attention span" is merely kids getting bored with an activity that has inadequate input bandwidth to satisfy them.

For example, my grandparents could sit and listen to the radio with their eyes shut for hours on end, but the pathetically slow drip... drip... drip of information through the radio would rapidly drive me to distraction. Even my parents have trouble doing this - they usually listen to the radio while also doing other things, like household chores or driving.

Likewise, my parents can sit and watch TV for hours on end, but even this eventually bores me - being forced into passively watching and waiting for programmes to get to the point or adverts to finish leaves my brain with too much spare capacity - I either start to over-analyse the content of the show and get annoyed by the perceived agenda, or I start to get fidgety and end up picking up a book or going and doing something more engaging.

Conversely I can browse the web, program or play computer games for hours on end, and observation of most younger people will bear out that this is the norm, rather than the exception. The problem here is clearly not attention-span, or I'd rapidly get bored of surfing or gaming just as I get bored of the radio or TV.

The problem here is that with radio and TV the rate at which information comes to me is slower, and is determined by an external source - the broadcaster.

Conversely, when I'm playing a game or surfing the web the information-flow is limited only by my ability to absorb it. Result: my attention is fully engaged, I don't get fidgety or bored, and I'm happy indefinitely.

Books are another telling case: personally I love reading, and most "short attention span" kids I know who have a good reading-speed can still sit and read books (surely the least instantly-gratifying and most boring-looking of all media) indefinitely. Their reading-speed matches or exceeds their information-absorption rate, so they're happy.

On the other hand, even "normal" kids I know who have a slow reading-speed get bored and restless after only minutes of reading - even though their information-absorption rate is low, it's still higher than their reading-speed can provide, so they get bored.

I've noticed this in my grandparents, parents and myself, and I'm just past 30. I'd be frankly gob-smacked if this didn't apply to kids who'd only grown up in a world of globally-networked computers, millions of channels, the web at their fingertips and ever-increasing amounts of data to sift through.

It also raises questions about the sudden and questionable upsurge in diagnoses of low-grade ADHD and related disorders in young people over the last few years. Although in the more serious cases these are undoubtedly very real disorders, it's entirely possible that at the lower end much of what the older generation (and the psycho-pharmaceuticals industry) perceive as pathological behaviour is simply plain old frustrated boredom in minds adapted to faster and better information-processing than their elders are capable of.

In summary, I suspect this phenomenon has little to do with "short attention spans", and everything to do with old media (still largely aimed at the older generations) appearing frustratingly slow and boring to ever-more-agile minds raised in our ever-more-information-rich society.

If this is true, this phenomenon could actually be a good thing - our brains are getting faster and better at information-processing, so things which seemed fun to our slower, less-capable ancestors now seem un-stimulating, or no better than momentary diversions.

However, generations who found crocheting or games of "tag" or charades the most amazingly fun experience in their lives now have to watch kids try their cherished childhood hobbies before discarding them as boring, trivial or simplistic.

It's therefore understandable that they find it a lot more comforting to automatically decide there's something wrong with kids today (a refrain that echoes down through the generations)... rather than realise that their own brains are by comparison so poor at information-processing that activities that were stimulating to them as children are just too simple for kids these days.

Wednesday 29 July 2009

So piracy is killing the music and movie industries?

The MPAA and RIAA (and their various non-USA equivalents) are famous for claiming that piracy (tapes, DVDs, digital downloads, etc) is "killing the [music/movie] industry".

Apart from the fact that these industry bodies (with a strong vested interest) have been claiming this ever since the introduction of 8-track tapes in the 1960s (and yet still the MPAA/RIAA lurch onward, living hand-to-mouth in their luxury penthouses and somehow barely scraping by on continuing exponential growth), there's another problem with their claim:

The very day our culture stops producing popular music, television, news reports, novels, movies or art, I will agree that "something needs to be done" about the expectation of free access, and would support instituting some sort of internet-wide micropayments system.

In fact, scratch "stops producing", and replace it with "noticeably slows production of". In fact, screw it all; scratch even that and instead substitute "stops continually increasing production of popular music, television, news reports, novels, movies or art".

Until then, commentary like this is simply worrying that the sky will fall because you don't have the wit or imagination to develop new business models not based around repressive monopolies and artificial scarcity.

We had music as a species long before we had trade bodies like the MPAA and RIAA around, and we'll have music long afterwards, too. The only difference is that in those times there wasn't an enormous, fat, unnecessary middle-man sat square between the artist and the audience, raking in cash hand over fist from both sides.[1]

If newspapers start to die because they can't afford to give away their on-line content for free then they'll stop doing it, people's expectations will change, and they'll start paying for subscriptions.

Although I hate to sound like a (big-L) Libertarian, the market will sort this one out just fine if left to its own devices.

Claims that "music/movies" or "news" will die are really worries about the deaths of "the RIAA/MPAA" or "some news organs unable to adapt to the changing conditions".

Obviously, however, the thought that bloated dinosaurs who refuse to adapt to the technological advancement of our species might go extinct doesn't really bother people.

Hansom cab drivers were pretty pissed off at the idea of the motor-car, but they didn't form industry bodies and try to ban cars or trains. And now - a hundred or more years later - aren't we really fucking glad they didn't?


Footnotes

[1] And although we haven't had movies for anything like as long, ever-cheaper video recording equipment, the increasing popularity of sites like YouTube and the ability of even basic modern desktop computers to create high-quality special effects show that even high-quality movies aren't out of reach of the talented amateur.

Cognitive dissonance

A reddit discussion came up recently, in relation to a recently-surfaced (and previously suppressed) World Health Organisation report on global cocaine use, which concluded that the western "War on Drugs" is overblown, and that automatically criminalising all prohibited-drug users[1] is counter-productive and unsupported by scientific evidence.

A comment was posted asking why - in the face of the evidence, and after the conclusions of the experts tasked with investigating this very topic - governments and supporters would not only disregard the conclusions and evidence, but actively seek to suppress the report and the evidence it contains.

The answer is simple, and generalises to many different opinions and topics. Largely, it's because they're making decisions subconsciously and emotively, instead of consciously and/or rationally.

People holding these positions claim to be pro-prohibition because it "saves lives" (and that may well be how they initially started believing in it, or how they justify the belief to themselves and others), but when you believe something strongly enough for long enough it ends up becoming part of your identity. Then, if you're confronted by evidence your long-cherished belief is in fact wrong, you have one of two choices:

  1. Reject the belief, and accept that - by some measure - you've wasted your life and been an idiot for however long you've held the opinion (possibly as much as your whole life up to that point!), or
  2. Reject the evidence, and continue believing you're right.

Believing you're right is comfortable and safe, but believing you're wrong (and moreover, have been for years) is uncomfortable and scary. It's like having the rug yanked out from under you - first you have to find a defensible new position that you can take, and then you have to revisit every single opinion you hold that depended on the "wrong" one, and see if any of those also need changing in light of the new information.

This process takes effort, and may involve discarding many other ideas that you hold dear. This is - needless to say - highly unnerving and uncomfortable for most people.

Although it's irrational to the point of complete intellectual bankruptcy, when faced with this choice many people will simply ignore the evidence to the contrary. They might go quiet and try to change the subject, they might bluster and try to shout you down, or they might declare that the contrary evidence or reasoning "offends them", and demand you stop out of politeness.

All of these things are simply tactics to get the inconvenient evidence to go away - believing something you know at some level to be false is normally easy when you don't think about it, but gets increasingly difficult and uncomfortable the closer the contradiction rises to your conscious mind.

By offering counter-indicative evidence you force the cognitive dissonance closer to their conscious mind, so they become increasingly irritable and uncomfortable. However, the rationalisation process is almost entirely subconscious, so they often don't realise why they're getting worked up - all they know is that you're the cause of it, so they tend to become frustrated and irritated with you.

This is also why you can't easily persuade people out of irrational ideas, and why it's hard to have a good conversation about politics, religion or the like which doesn't end in offence or shouting.

The key problem in the irrational person's psyche isn't saving face; it's one or more of:

  • Laziness - the person doesn't want to have to undergo the effort of re-evaluating all their beliefs, so they just don't.
  • Egocentricity - the person doesn't want to admit to themselves how wrong (or stupid, or duped) they were.
  • Excessive attachment to their present identity - the person is too attached to their present identity (for reasons of comfort, personal gain, etc) to allow themselves to accept that part of it might need changing.
  • Centrality of the opinion to who they are - the threatened belief is so central to the core of the person's identity and beliefs that re-evaluating it might leave them a largely different person (effectively, the current version of them might "die" in the process).

This approach explains many, many things that otherwise seem inexplicable - why it's so hard for people to leave religions, why it's so hard to convert people to another political party, why people continue to back politicians who violate the very tenets they espouse and why people will stick to comfort beliefs even in the face of absolute proof to the contrary.

The only way to cure someone of this kind of egotistical, emotive self-deception is to bring the cognitive dissonance to the surface, and show them how irrational it is.

Nevertheless, they'll fight you every step of the way, and if you force the issue they just end up seeing you as the enemy and disregarding what you show them.

It's a knotty problem, because you can't "cure" someone of identity-related emotive irrationality unless they want to be cured... but there are literally billions of people with these kind of incorrect or irrational opinions, and they're materially retarding the progress and development of the entire human race.


Footnotes

[1] Of course, legal-drug use - like caffeine, alcohol or tobacco - is usually considered perfectly ok, and ingesting any of these three is a right that would cause rioting in the streets if a government tried to ban it.

On homosexuality as a choice

Many people - usually religious, right-wing "family values" types - claim that "homosexuality is a choice", and that one piece of legislation or another will "encourage kids to be gay".

This is the bit I don't get - even as a straight guy raised in a pretty liberal household, I've never once looked at the idea of hot gay buttsecks and thought "y'know, I think that's the sex for me!".

I'm straight and non-homophobic, but even offering tax-breaks and free ice-cream to gays wouldn't tempt me to indulge in hot man-loving.

I literally can't conceive of someone examining their own feelings and deciding homosexuality was a choice, unless they're naturally inclined that way themselves and in viciously deep denial about it[1] ("it's got to be a choice, so I can choose not to be gay!").

So when they say that X or Y will encourage homosexuality, what they actually mean is that it will encourage people who are naturally that way inclined to not live their lives miserable, unhappy and in denial, never knowing the companionship they crave and at constant war with their own essential nature, until they become bitter and twisted by their own unrelenting self-loathing.

It therefore appears that the correct response to "Homosexuality is a choice" is "Well maybe in your case, ducky".


Footnotes

[1] This is a truly fascinating study, and I thoroughly recommend reading it. A full version of the paper (in PDF format) is available here

Your opinion is worthless

This is a slightly self-indulgent post, relating to website and forum discussions, rather than a generally-applicable epiphanette. Nevertheless, I think it's an important point, and one which far too few people understand...

When browsing internet discussion forums, whenever someone with a controversial or non-mainstream opinion posts and gets voted down, I frequently run across comments similar to the following:

I find I get downmodded a lot because I'm a person willing to speak my mind. That makes a lot of the insecure people here (of which there are many!) uncomfortable, and to try and counter that they downmod my posts.

Straight to it: although sometimes the commenter has a point (people get very attached to their ideas, and can react irrationally when they're threatened), general attitudes like this always make me uncomfortable, because they smack of self-delusion and comfort-beliefs.

Everyone has some element of this in their thinking, but it's rarely justified. As an experiment, consider the following:

Aside from your own clearly-biased personal opinion of your posts, what evidence do you have that your thoughts or beliefs are generally:

  1. Insightful
  2. Interesting
  3. Well-expressed, or
  4. Correct?

Secondly, how many people - even really stupid, boring people - do you think get up in the morning, look in the mirror and think "shit man, I'm a really windy, boring, unoriginal fucker", and then spend a lot of time expressing their opinions to others?

Most people think what they have to say is insightful, interesting, adequately-expressed and correct, or they wouldn't bother posting it.

Now, this idea is correct in that some people vote down anything which contradicts the prevailing wisdom, but people also vote down things which are wrong, stupid, ridiculous or badly-expressed.

Conversely, I know from repeated personal experience that in many communities a well-written, well-argued, non-whingey post which counters the prevailing wisdom frequently still gets a high score, sometimes because of its contrary position.

I know when I post all I have to go on is my own opinion of my posts, which (as we've established) is almost laughably unreliable. Instead, the votes my posts get serve as a useful barometer of how well my opinion of what constitutes a well-written, well-argued post matches the general opinion.

It's terribly flattering to think of oneself as a persecuted martyr, but it also usually requires a lot of egotism and a willing blindness to statistics.

To paraphrase the great Carl Sagan:

They laughed at Galileo... but they also laughed at Bozo the clown.

Given a poster's personal opinion is biased to the point it's worthless, and given there are many more clowns in the world than misunderstood geniuses, on what basis do people claim to be downmodded for the content of their opinions, rather than for their worth, or the reliability of the arguments they use to support them?

Claiming you're being downvoted simply because your opinions run counter to the prevailing wisdom, rather than simply because you're self-important or wrong requires you to not only assume you're vastly more intelligent or educated than the average person, but also that most people voting you down are doing so because of a deficiency in their psychology, rather than your own.

When all the objective evidence you have is that a lot of other people disagree with you, it's terribly tempting to believe you're a misunderstood intellectual martyr like Galileo.

The trouble with this, of course, is that while paradigm-shifting geniuses like Galileo only come along a few times a generation, we're knee-deep in idiots, and the tide is rising.

There are literally thousands of times more idiots than geniuses, so claiming you must be a genius on the basis that you were voted down doesn't make you a genius - it means not only that you're overwhelmingly likely to be a self-important idiot, but also that you're bad at maths.
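
To put rough numbers on that (every figure below is invented purely for illustration), a back-of-the-envelope Bayes calculation makes the point:

    # Back-of-the-envelope Bayes, with every number invented purely for
    # illustration: assume geniuses are *always* downvoted and idiots only
    # sometimes, and see what a downvote actually tells you.
    p_genius = 1 / 10_000            # assumed base rate of misunderstood geniuses
    p_idiot = 1 - p_genius
    p_down_given_genius = 1.0        # geniuses always get downmodded (generous!)
    p_down_given_idiot = 0.2         # idiots only get downmodded a fifth of the time

    p_down = p_down_given_genius * p_genius + p_down_given_idiot * p_idiot
    p_genius_given_down = p_down_given_genius * p_genius / p_down
    print(f"P(genius | downvoted) = {p_genius_given_down:.4%}")   # roughly 0.05%

Even on assumptions wildly generous to the downvoted poster, the odds overwhelmingly favour "self-important idiot" over "misunderstood genius".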

Act appropriately.

Thursday 18 June 2009

The myth of idiot-proofing

Idiot-proofing is a myth. Attempting to simplify an over-complex task is good, but be careful how you do it - beyond a certain point you aren't idiot-proofing, just idiot-enabling.

Three classes of tool

Tools (like programming languages, IDEs, applications and even physical tools) can be grouped into three loose categories: the idiot-prohibiting, the idiot-discouraging and the merely idiot-enabling. (The "idiot-proof" tool, as we'll see, is a myth.)

Idiot-prohibiting tools are those which are effectively impossible to do anything useful with unless you've at least taken some steps towards learning the subject - tools like assembly language, or C, or Emacs. Jumping straight into assembly without any idea what you're doing and typing code to see what happens will never, ever generate you a useful program.

Perhaps "prohibiting" is too strong a word - rather than prohibiting idiots these tools may only discourage naive users. These idiot-discouraging tools are those with which it's possible to get some results, but which leave you in no doubt as to your level of ability - tools like perl, or the W3C XHTML validator. Sure you might be able to open a blank .html or .pl file and write a few lines of likely-looking pseudocode, but if you want to have your HTML validate (or your Perl code actually do anything useful and expected), you're soon going to be confronted with a screen-full of errors, and you're going to have to stop, study and get to grips with the details of the system. You're going to have to spend time learning how to use the tool, and along the way you'll practice the skills you need to get good at what you're doing.
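
As a small aside - using Python's standard-library XML parser as a stand-in for the W3C validator, with the snippet and markup invented for the example - this is roughly what "idiot-discouraging" looks like in practice: plausible-but-sloppy input is rejected outright rather than quietly accepted.

    # An "idiot-discouraging" tool in action: well-formed markup parses, while
    # sloppy-but-plausible markup is rejected immediately with an error,
    # forcing you to stop and learn the rules.
    import xml.etree.ElementTree as ET

    good = "<p>Hello, <em>world</em></p>"
    bad = "<p>Hello, <em>world</p>"        # unclosed <em> - close enough for a human reader

    ET.fromstring(good)                    # parses fine
    try:
        ET.fromstring(bad)
    except ET.ParseError as err:
        print("Rejected:", err)            # e.g. "mismatched tag"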

The myth of idiot-proofing

Garbage in, garbage out is an axiom of computing. It's generally impossible to design a system such that it's completely impossible for someone sufficiently incompetent to screw it up - as the old saw goes: "make something idiot-proof and they'll invent a better idiot".

A natural consequence of this is that making anything completely idiot-proof is effectively impossible - any tool will always sit somewhere on the "prohibiting->discouraging->enabling" scale.

Furthermore, even if your tool merely makes it harder for "idiots" to screw things up, at the same time that very feature will attract more idiots to use it.

Shaper's Law of Idiot-Proofing:

Lowering the bar to prevent people falling over it increases the average ineptitude of the people trying to cross it.

Or, more simply:

Making a task easier means (on average) the people performing it will be less competent to undertake it.

Obviously I don't have hard statistics to back this up (though there could be an interesting project for a Psychology graduate...), but it often seems the proportion of people failing at the task will stay roughly constant - all you're really doing is increasing the absolute number of people who just scrape over the bar... and who then go on to produce nasty, inefficient, buggy - or just plain broken - solutions.

Fixing the wrong problem

The trouble with trying to make some things "idiot-proof" is that you're solving the wrong problem.

For example, when people are first learning programming they typically spend a lot of their time concentrating on the syntax of the language - when you need to use parentheses, remembering when and when not to end lines with semi-colons, etc. These are all problems for the non-programmer, but they aren't the important ones.

The major difficulty in most programming tasks is understanding the problem in enough detail to solve it[1]. Simplifying the tools doesn't help you understand the problem any better - all it does is allow people who would ordinarily never have attempted to solve a problem to try it.

Someone who's already a skilled developer won't benefit much from the simplicity, but many unskilled or ignorant people will be tempted by the extra simplicity to try and do things that are (realistically) completely beyond their abilities. By simplifying the wrong thing you permit more people to "succeed", but you don't increase the quality of the average solution (if anything, you drastically decrease it).

By analogy, spell-checkers improved the look of finished prose, but they didn't make anyone a better author. All they did was make it easier to look professional, and harder to tell the difference between someone who knew what they were talking about and a kook.

Postel's Law

The main difficulty of programming is the fact that - by default - most people simply don't think in enough detail to successfully solve a programming problem.

Humans are intelligent, flexible and empathic, and we share many assumptions with each other. We operate on a version of Postel's Law:

"Be conservative in what you emit, liberal in what you accept"

The trouble begins because precision requires effort, and Postel's Law means we're likely to be understood even if we don't try hard to express ourselves precisely. Most people communicate primarily with other humans, so to save time and effort we express ourselves in broad generalities - we don't need to specify every idea to ten decimal places, because the listener shares many of our assumptions and understands the "obvious" caveats. Moreover, such excessive precision is often boring to the listener - when I say "close the window" I don't have to specify "with your hand, using the handle, without damaging it and in such a way that we can open it again later", because the detail is assumed.
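
For those who haven't met the protocol-level original, here's a minimal sketch of Postel's Law in code (the function names and accepted spellings are invented for the example) - liberal about what comes in, conservative and canonical about what goes out:

    # A minimal sketch of Postel's Law: be liberal in the spellings of a yes/no
    # flag you accept from a human, but conservative (one canonical form) in
    # what you pass on to the next program in the chain.
    def parse_flag(raw: str) -> bool:
        """Liberal input: tolerate case, whitespace and common synonyms."""
        value = raw.strip().lower()
        if value in {"y", "yes", "true", "1", "on"}:
            return True
        if value in {"n", "no", "false", "0", "off"}:
            return False
        raise ValueError(f"unrecognised flag: {raw!r}")

    def emit_flag(flag: bool) -> str:
        """Conservative output: exactly one canonical spelling."""
        return "yes" if flag else "no"

    print(emit_flag(parse_flag("  YES ")))   # -> yes
    print(emit_flag(parse_flag("0")))        # -> no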

Sapir-Whorf - not a character on Star Trek

The problem is that because we communicate in generalities, we also tend to think in generalities - precision requires effort and unnecessary effort is unpleasant, so we habitually think in the vaguest way we can get away with. When I ask you to close the window I'm not imagining you sliding your chair back, getting up, navigating your way between the bean-bag and the coffee table, walking over to the window, reaching out with your hand and closing the window - my mental process goes more like "window open -> ask you -> window closed".

To be clear: there's nothing inherently wrong with thinking or communicating in generalities - it simplifies and speeds our thought processes. Problems only arise when you try to communicate with something which doesn't have that shared library of context and assumptions - something like a computer. Suddenly, when that safety-net of shared experience is removed - and our communication is parsed exclusively on its own merits - we find that a lifetime of dealing in vague generalities has diminished our ability to deal with specifics.

For example, consider probably the simplest programming-style problem you'll ever encounter:

You want to make a fence twenty metres long. You have ten wooden boards, each two metres long, and you're going to the hardware shop - how many fence-posts do you need?

No programmer worth his salt should get this wrong, but most normal people will have to stop and think carefully before answering[2]. In fact this type of error is so common to human thinking that it even has its own name - the fencepost (or off-by-one) error.
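
For anyone who wants the arithmetic spelled out, here's a minimal sketch in code (the function name and the assumption of a single straight run with a post at both ends are mine):

    # The fencepost arithmetic, assuming a straight fence with a post at each
    # end and one between every pair of adjacent boards.
    def posts_needed(fence_length_m: float, board_length_m: float) -> int:
        boards = int(fence_length_m // board_length_m)   # 20 // 2 == 10 boards
        return boards + 1                                # always one more post than boards

    print(posts_needed(20, 2))   # 11, not 10 - the classic off-by-one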

Solve the right problem

Any task involves overcoming obstacles.

Obstacles which are incidental to the task (having to learn HTML, say, when your real goal is publishing your writing) are safe to ameliorate. These obstacles are mere by-products of immature or inadequate technology, unrelated to the actual task. You can remove these without affecting the nature of the task at hand.

Obstacles which are an intrinsic part of the task are risky or even counter-productive to remove. Removing these obstacles doesn't make the task any less difficult, but it removes the experience of difficulty.

Counter-intuitively, these obstacles can actually be a good thing - when you don't know enough to judge directly, the difficulties you experience in solving a problem serve as a useful first-order approximation of your ability at the task[3].

Despite years of trying, making programming languages look more like English hasn't helped people become better programmers[4].

This is because learning the syntax of a programming language isn't an important part of learning to program. Issues like task-decomposition, system architecture and correct control flow are massively more important (and difficult) than remembering "if x then y" is expressed as "if(x) { y; }".

Making the syntax more familiar makes it easier to remember and reduces compile-time errors - making the task seem easier to a naive user - but it does nothing to tackle the real difficulties of programming - inadequately-understood problems or imprecisely-specified solutions.

The trouble is that the "worth" of a program is in the power and flexibility of its design, the functionality it offers and (inversely) the number of bugs it has, not in how many compiler errors it generates the first time it's compiled.

However, to a naive beginner beauty and solidity of design are effectively invisible, whereas compiler-errors are obvious and easy to count. To a beginner a program that's poorly designed but compiles first time will seem "better" than a beautifully-designed program with a couple of trivial syntax errors.

Thus to the beginner a language with a familiar syntax appears to make the entire task easier - not because it is easier, but because they can't accurately assess the true difficulty of the task. Moreover, by simplifying the syntax we've also taken away the one indicator of difficulty they will understand.

If a user's experiencing frustration because their fire-alarm keeps going off, the solution is for the user to learn to put out the fire, not for the manufacturer to make quieter fire alarms.

This false appearance of simplicity begets over-confidence, directly working against the scepticism about the current solution which is an essential part of the improvement process[5].

Giving a short person a stepladder is a good thing. Giving a toddler powertools isn't.


Footnotes

[1] This is why talented programmers will so often use exploratory programming (or, more formally, RAD) in preference to designing the entire system first on paper - because although you might have some idea how to tackle a problem, you often don't really understand it fully until you've already tried to solve it. This is also why many developers prefer to work in dynamic scripting languages like Perl or Python rather than more static languages like C or Java - scripting languages are inherently more flexible, allowing you to change how a piece of code works more easily. This means your code can evolve and mutate as you begin to understand more about the problem, instead of limiting your options and locking you into what you now know is the wrong (or at least sub-optimal) approach.

[2] Obviously, the answer's not "ten".

[3] When I'm learning a new language I know I'm not very good at it, because I have to keep stopping to look up language syntax or the meanings of various idioms. As I improve I know I'm improving, because I spend less time wrestling with the syntax and more time wrestling with the design and task-decomposition. Eventually I don't even notice the syntax any more - all I see is blonde, brunette, redhead... ;-)

[4] Regardless of your feelings about the languages themselves, it's a truism that for years many of the most skilled hackers have preferred to code in languages like Lisp or Perl (or these days, Python or Ruby), which look little or nothing like English. Conversely, it's a rare developer who would disagree that some of the worst code they've seen was written in VB, BASIC or PHP. Hmm.

[5] From Being Popular by Paul Graham, part 10 - Redesign:

To write good software you must simultaneously keep two opposing ideas in your head. You need the young hacker's naive faith in his abilities, and at the same time the veteran's scepticism... The trick is to realize that there's no real contradiction here... You have to be optimistic about the possibility of solving the problem, but sceptical about the value of whatever solution you've got so far. People who do good work often think that whatever they're working on is no good. Others see what they've done and are full of wonder, but the creator is full of worry. This pattern is no coincidence: it is the worry that made the work good.

Stereotypes are useful tools

Humans generalise. It's what we do.

If you chose to handle every single experience as an isolated event, you'd never go anywhere or do anything, because you'd be forever re-investigating your options - much as you'd never get out of your house if you had to check every room was empty before leaving: by the time you've checked the last one, someone could have entered the house and got into the first one again, so you'd have to start back at the beginning and check them all over again.

Stereotyping is a very useful, essential mechanism for bypassing all of that - when we meet a new situation, we compare it to situations we've experienced before, and this gives us a guide as to what this one is likely to be like. For example, "this room was empty and I closed the door. People don't generally break into second-story rooms in any given five-minute period, so it's safe to assume it's still empty and leave the house".

The problem comes when people assume that stereotypes are facts - stereotypes/generalisations only give good indications of probabilities, and as long as you're always aware of the possibility that this situation is an edge-case where the "general rule" doesn't apply, there's no harm in it.

In our touchy-feely, inclusive, non-discriminatory society it's become deeply un-trendy to stereotype or generalise. People feel that because stereotypes have been over-used, or used to excuse discrimination or bigotry, there must be something inherently wrong with stereotyping. This is itself stereotyping, and - in this case - it's wrong.

What people really disapprove of are:

  • Unfair generalisations (although since stereotypes come from repeated observations, there are a lot fewer of these than you might think)
  • People mistaking statistical guidelines for hard facts.

However, as ever, as a culture we err on the side of throwing the baby out with the bathwater, and conclude that because some people have tried to use stereotypes to justify bad actions in the past, there's something inherently wrong with the whole idea of stereotypes. That's not the case.

Friday 1 May 2009

The incompetent leading the credulous - your mildly disconcerting thought for the day

It's well-known to psychologists, public speakers, politicians and con-men[1] that in general the more confident an individual appears, the more persuasive they are to other people. This effect holds regardless of the veracity or provability of their assertions. In other words, confidently and assertively talking horseshit will make you more persuasive than simply talking horseshit on its own, regardless of the fact it's horseshit.

In other news, the Dunning-Kruger effect demonstrates that - in general - the more incompetent or ignorant someone is of a subject, the more they will over-estimate their own expertise or understanding of it. Equally, the more experienced and competent a person becomes in a subject, the more they will begin to underestimate their true level of knowledge or expertise, downplaying their understanding and qualifying their statements. In effect, when people try to assess their own ability in a subject, confidence tends to vary inversely with actual expertise.

The net effect of this is that - again, in general - ignorant or incompetent people are subconsciously predisposed to be more confident in their opinions, and all people are subconsciously predisposed to find confident people persuasive.

In a nutshell, all things being equal, people are instinctively predisposed to find ignorant or incompetent people disproportionately persuasive and trustworthy compared to more competent, more experienced experts.

Distressingly it appears that in the kingdom of the blind the one-eyed man is not king. Instead, in the kingdom of the blind the true king is the one blind guy who's sufficiently incompetent or delusional that he honestly believes he can still see.

This has been your mildly disconcerting thought for the day.


Footnotes

[1] The author acknowledges that there may be some overlap in these categories.

Monday 16 March 2009

Rules for system designing #1: If a system can be gamed, it will

I first encountered this rule in web development, but once spotted I discovered it holds true in many, many diverse areas of life.

When designing a system of rules or procedures (a computer program, laws, a business's internal policies and procedures, etc) it's always tempting to ignore or avoid edge-cases - they seem so obscure or unlikely that it's easy to decide they don't matter, and not to bother resolving or fixing any ambiguities or loopholes.

People think about systems of rules the way they think about other people - you don't have to be too precise, because it'll be clear what the intent of your words is.

However, once set up, systems are administered according to the rules which define them - while "the original designer's intent" is nebulous and open to interpretation, the letter of the law is usually quite specific, even if its eventual result is quite different from what the original architects intended. Nobody ever got fired for following the letter of the law, even if by doing so they did great violence to its spirit.

This can be seen in all walks of life - if you're a naive programmer developing a web application it can be tempting to ignore security holes or undefined edge-cases. "Who will ever spot that?" you think to yourself, "nobody will bother poking around in odd corners of my application, or try firing odd URL parameters into my server. I'm much better off adding Whizzy New Feature #436 to my application than tidying up some dusty old corner of the code".

This sounds perfectly reasonable to most people, but any experienced web developer will be shaking their head about now - first off, when you write code for websites your code is exposed to the entire internet, and there's always someone out there who'll start poking it with a stick, just to see what it does.

Even worse, there are also whole swathes of entirely automated systems - web spiders, spam-bots and automated vulnerability scanners - that will systematically follow every link and try every combination of URL parameters they can imagine, simply to see what will happen.

The key point to take away here is that - almost invariably - your audience will turn out to be a lot larger and more diverse than you imagine, and what might seem obscure, boring or unimportant to you might not seem the same way to all of them... and neglecting to handle these edge-cases can lead to the entire system becoming compromised.
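
As a concrete (if deliberately simplified) illustration, here's the shape of the problem and the boring fix in a hypothetical request handler - the parameter name and the limits are invented for the example:

    # A simplified sketch of an unhandled edge-case that an automated scanner
    # will find long before a human does. Parameter names and limits are
    # invented for the example.
    from urllib.parse import parse_qs

    def naive_handler(query_string: str) -> str:
        # "Who would ever pass anything but a small number?"
        count = int(parse_qs(query_string)["count"][0])   # KeyError/ValueError/huge values
        return "x" * count                                # ?count=999999999 -> trouble

    def careful_handler(query_string: str) -> str:
        params = parse_qs(query_string)
        try:
            count = int(params.get("count", ["1"])[0])
        except ValueError:
            count = 1                                     # reject garbage, use a safe default
        count = max(1, min(count, 100))                   # clamp to sane, documented bounds
        return "x" * count

    print(careful_handler("count=5"))        # xxxxx
    print(careful_handler("count=bananas"))  # x - garbage handled, not crashed on

The careful version isn't any cleverer - it just refuses to leave the odd corners of its input space undefined.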

Likewise, laws suffer from this problem - they're typically crafted using vague language, and - like any non-trivial system - typically contain numerous unspotted edge-cases and loopholes. Moreover, the equivalent of issuing a patch to an existing law once it has been passed is about as complicated, fraught and long-winded as passing the law in the first place, making it difficult, time-consuming and expensive to correct errors once a law has been passed.

As with programs, relying on obscurity to paper over these loopholes is a mug's game - when laws apply to everyone in an entire country you're pretty much guaranteed that eventually someone will either deliberately target or just stumble upon an unhandled edge-case. When this happens the system can be gamed, and the laws fail to perform their required function.

When this happens, the results may be anything from a single individual getting away with a parking ticket to your entire society taking a turn for the worse.

Remember: if a system can be gamed, it will be, so take care to eliminate all possible edge-cases, and practice defence in depth so that when an unknown compromise or loophole is inevitably discovered, the amount of the system which is affected and can be compromised is limited.

This advice applies equally whether you're a developer writing computer code, a politician crafting new laws or a manager adjusting business processes in a company. If it's a system of rules, this design axiom applies.

Friday 9 January 2009

Lifecycle of a meme

A recent discussion on reddit prompted me to sketch out the stages of an internet meme's lifecycle.

Lest there be any confusion, I'm talking here about internet memes - LOLcats, Soviet Russia jokes and the like, not about memes in the more general sense of the word.

As far as I can see, all memes go through this lifecycle:

  1. Meme is born. Almost nobody understands it, and it's barely funny even when you do.
  2. Meme gets adopted by a specific social group. Meme now serves as a shibboleth indicating membership of the group, and encourages feelings of belonging and "insidership" whenever it's encountered. At this stage, the meme is usually either utterly baffling or hilarious, depending on whether or not you're an insider in that social group.
  3. Meme becomes mainstream - everyone is using it at every opportunity, and - its use as a shibboleth negated - it gradually gets stale from overuse. Meme is hilarious to newcomers, but increasingly sterile and boring to older users.
  4. Meme effectively dies - people using it are generally downmodded or castigated for trying for "easy" posts. Importantly, it can still be funny even at this stage if it's used particularly well... however, 99% of the uses at this point are people trying to cash in on easy karma - the kind of people who tell the same jokes for years without realising that the 17th time you hear it, it's no longer funny.
  5. Meme is effectively dead, but may experience rare and infrequent resurrection in particularly deserving cases. Generally these uses get applauded, because nobody wants to risk opprobrium for posting stale memes unless they're really sure this is a perfect opportunity for it - one that's literally too good to miss.

Importantly, by stage 5 the meme starts once again to be funny, because it's once again serving as a shibboleth... though this time instead of showing how advanced and up-to-date the poster is, it instead serves to indicate his membership in the "old guard" of whatever social group it's posted to - "I've been around here so long I remember when this was funny", it quietly indicates to other old-timers and well-educated newbies alike.

Thursday 1 January 2009

Engines of reason

Initially we acted as individuals - what we understood of reality was what we experienced and determined for ourselves. There was no understanding or appreciation of the world outside our direct experience.

Later we developed language, and what we understood of reality was formed from our own perceptions and conclusions, influenced by the perceptions and conclusions of our social group (family, clan, village, etc).

Next we developed the printing press, and mass media. These allowed centralised governments and organisations to accumulate and weigh information and experiences and broadcast them to the populace. We still kept our own counsel on personal or local matters, but since we rarely (if ever) knew anyone who had experienced events outside our local region, we largely received all our knowledge and understanding of the outside world from centralised authority - governments, news media organisations, etc.

Finally, with the advent of the web we're enabling anyone to publish their personal experiences in a way that anyone else in the world can then receive. No longer are we simply denied access to information, or restricted to distilled, refined, potentially biased information from one or a few sources. Now we're capable - in theory - of hearing every point of view from every participant in an event, untainted by anything but their personal, arbitrary biases.

We are still receiving information on a global scale, but for the first time it's potentially all primary evidence, untainted and unfiltered by a single agenda or point of view.

The trouble with this is that brains, personalities, cultures and institutions long-accustomed to received wisdom now have to compare, contrast, weigh and discern the trustworthiness of multiple conflicting points of view for themselves. For the first time since our pre-linguistic ancestors you - and only you - are primarily responsible for determining truth from falsehood, and for the first time in history you have to do so on a global scale, involving events of which you have no direct experience.

To be clear: this is hard. Many people instinctively reject the terrifying uncertainty and extra effort, instead abdicating their personal responsibility and fleeing to any source of comforting certainty they can find. This explains why even in these supposedly scientific and rational times people still subscribe to superstitions or religions, or simplistic, fundamentalist philosophies, or blindly consume and believe opinionated but provably-biased sources like political leaders, charismatic thinkers or biased news organisations.

So it's a double-edged sword - for the first time in history we have access to primary evidence about events in the world, rather than receiving conclusions from a central source along with only whatever secondary or tertiary evidence supports them. However, in the process the one thing we've noticed is that the channels we've relied upon until now are biased, agenda-laden and incomplete.

Obviously in an ideal world everyone could be relied upon to train their bullshit-filters and research and determine the truth for themselves. However, given the newness of the current situation we can't rely upon this any time soon. Likewise, given both the sheer volume of information and humanity's propensity for laziness and satisficing, we'll likely never be able to rely on the majority of people doing this for every single issue they hold an opinion on.

So what's a species to do? We've turned on the firehose of knowledge, and it's shown that the traditional channels of received wisdom are unreliable, but many people find it impractically hard to drink from it.

There are three choices here:

  • We could allow the majority of people to reject their responsibilities and abdicate their reasoning processes to others of unknown reliability... though this is the kind of thing that leads to fundamentalism, anti-intellectualism and cultural and scientific stagnation.
  • Alternatively, we could encourage people to distrust authority and try to decide for themselves... though even if we succeed, if the effort of self-determination is too great we risk merely leaving people floundering in a morass of equally-likely-looking alternatives (I believe this is a primary cause of baseless, unproven but trendy philosophies like excessive cultural relativism - if you're lost in a sea of indistinguishable alternatives, it's flattering and tempting to believe there is no difference in their correctness).
  • Lastly, we can make an effort to formalise and simplify the process of determining reliability and truth - striving to create democratic, transparent mechanisms where objective truth is determined and rewarded, and falsehood or erroneous information is discarded... lowering the bar to individual decision-making, but avoiding unilateral assumption of authority by a single (or small group of) agendas.

Stupid as it may seem, I believe this is the ultimate destination towards which sites like reddit or Wikipedia are slowly converging - people post evidence, assertions or facts; those facts are discussed, weighed and put in context; and (so the theory goes) accuracy and factual truth are ascertained by exposing the process to a large enough consensus.

It doesn't always work - many of these early attempts suffer from a poor mechanism, or attract a community who vote based on their prejudices rather than rational argument, or end up balkanised in one or more isolated areas of parochial groupthink.

However, the first heavier-than-air aircraft didn't work too well either, and here we are a few decades later flying around the planet faster than the sun. As a species we're still only a few years into what may be a decades- or centuries-long process - one which could change the very foundations of (and mechanism by which we determine) what we understand as factual reality.

People love to rag on social news sites, discussion forums and sites like Wikipedia for what amounts to failing to have already achieved perfection. I prefer to salute them for what they are - hesitant, often blind, stumbling baby-steps towards solving a problem many people don't yet even realise exists.