
Thursday, 15 November 2012

Why did Google release Ingress?

Google recently announced a move into a market that's confused many people - an Alternate Reality Game called Ingress.

It's hardly an obvious move - Google is a Big Data, search and advertising company, not a games manufacturer.  They provide mass-appeal services that people typically use every day or so, like e-mail (GMail), search (Google search), social networking (Google+), blogging (Blogger) and others, but from the perspective of Google as a business these are - fundamentally - all merely ways to deliver paid advertising to consumers.

An Alternate Reality Game is none of these things.  It lacks mass appeal (most people still don't fully understand what an ARG even is, let alone play one), even dedicated players spend most of their lives and leisure time not playing their chosen game, and there's no clear, obvious way to insert advertising into the game in any particularly lucrative way.

So why release it? 

What Google are good at 

As stated above, Google are a Big Data company - they excel at managing, querying and manipulating huge datasets, and at making simple interfaces to this data available to the public.

However, they also excel at another related skill - getting other people to generate the data for them in the first place.

Remember the free automated directory-enquiries service that Google launched several years ago, to widespread bafflement?  "Why would they do that?  What's the benefit?  Where's the business model?", people asked.  It seemed mystifying at the time, but Google were cleverly using the service to quickly and effectively build a vast corpus of spoken-word queries in a variety of accents, to train the voice-recognition systems that subsequently made it into Google Voice and Android... and then, as soon as that corpus was built, they shut down GOOG-411.

It's worth pausing at this point to reflect on the genius of this move - most commenters focused on the (not-inconsiderable) expense of setting up an automated directory-enquiries service.  However, Google were playing a much larger game, in which that cost was trivial compared to the value of building a world-class database of voice samples to train voice-recognition software.  Not only did they gain a vast corpus of sound samples, but by setting up an interactive service where users were strongly motivated to correct any misinterpretations (in order to obtain accurate results), they also ensured they had the correct interpretation of each sample - neatly removing almost all of the effort not just from collecting the learning dataset, but from categorising and interpreting it too.

The value of this dataset really can't be overstated - Google were already capable of indexing and querying all the digitised text data in the world, but they had no comparable capability for audio data - spoken word, recorded audio, video soundtracks and the like.

By developing an acceptable voice-to-text system they made whole rafts of new applications and services possible - automatically generated e-mail transcripts of voicemail messages left via Google Voice, auto-transcripts of video media (allowing both closed-caption subtitles and the possibility of text-searching within video the way it's already possible to search within text), reliable voice search, voice control of software and hardware, and many others.

Or for another example, consider ReCAPTCHA (initially created by an independent company, but quickly snapped up by Google), whose free CAPTCHA service also helped Google to automatically resolve edge-cases and unrecognised words during the production-line digitisation of books for Google Books (thereby turning analogue, offline text into digitised, searchable text).
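The core trick is simple enough to sketch.  Below is a toy Python illustration of the widely-described mechanism - pair a word the system already knows with one the OCR couldn't read, and let the known word vouch for the human.  The function, names and threshold here are my own invention for illustration, not Google's actual code:

    from collections import Counter

    def check_and_harvest(control_answer: str, control_truth: str,
                          unknown_answer: str, votes: Counter) -> bool:
        """Verify the user against the known word; if they pass, record
        their reading of the unknown word as one vote on its transcription."""
        if control_answer.strip().lower() != control_truth.strip().lower():
            return False  # failed the human test - discard both answers
        votes[unknown_answer.strip().lower()] += 1
        return True

    # Aggregate independent users' readings until a consensus emerges.
    votes = Counter()
    check_and_harvest("overlook", "overlook", "parsimony", votes)
    check_and_harvest("overlook", "overlook", "parsimony", votes)
    word, count = votes.most_common(1)[0]
    print(word if count >= 2 else "still ambiguous")  # -> "parsimony"

Every user who passes the test does a tiny amount of useful transcription work for free, and agreement between strangers does the quality control.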

Google Image Labeler was another - a small, fun game Google launched that got users to collaboratively tag images, which in turn hugely improved the accuracy and specificity of Google's image search.

So how does this relate to Ingress?

At a guess, it's about getting Google good data on footpath routes, to compete with Nokia's recently-announced turn-by-turn navigation for pedestrians.  Google Maps/Navigation are great for driving directions, but pretty terrible for walking - they typically just tell you to follow the nearest road, which can lead you on annoyingly roundabout, unattractive or even unnecessarily dangerous routes in many areas.

Many hints about the new game are dropped in this All Things D article.  Note how Ingress is specifically geared around pedestrians:
Users can generate virtual energy needed to play the game by... travelling walking paths, like a real-world version of Pac-Man. Then they spend the energy going on missions around the world to “portals,” which are virtually associated with public art, libraries and other widely accessible places... Outdoor physical activity is a big component of this, though driving between locations isn’t banned
I.e., it's very, very much about walking places... while carrying a GPS-enabled mobile device with a camera, accelerometer, wi-fi and mobile data connection built in... while running an app that can report whatever it wants back to Google's servers - and must, for you to be able to play the game at all.  No privacy concerns, in other words, as there would be if Google simply started aggressively tracking and recording the movements of every mobile Google Maps user.

Players walk around footpaths and pedestrian routes that Google Maps currently doesn't cover well, and then as a reward they get to... walk around art installations, libraries and other large, pedestrian-only public areas that Google Streetview cars can't get to.  All the while the game client is reporting players' positions, speeds and the like back to Google, so Google gets to build a massive database of popular pedestrian-accessible areas and common routes between and around them.  It's genius.

Based on this I also predicted that Ingress would encourage users to take geotagged photos of various locations as mission objectives.  After all, if you've just managed to convince thousands or millions of people to build you a massive GPS-tagged database of pedestrian-accessible locations and routes essentially for free, you'd have to be pretty stupid not to also get them to take geotagged photos and similar media for you while they do it.

I also argued that the game probably records wi-fi SSIDs and a whole bunch of other useful datapoints too... both predictions borne out by further investigation in this reddit thread.
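To make the data-gathering concrete, here's a purely speculative Python sketch of the kind of record such a game client could report on each update.  Every field and name below is my guess at plausible telemetry, not Niantic's actual protocol:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TelemetrySample:
        """One hypothetical client-side report (all fields are guesses)."""
        timestamp: float                 # seconds since epoch
        latitude: float                  # from GPS
        longitude: float
        speed_m_per_s: float             # derived from successive GPS fixes
        heading_deg: float               # from the compass/accelerometer
        visible_wifi_ssids: List[str] = field(default_factory=list)

    # Aggregated across thousands of players, a stream of these is exactly
    # the raw material for a footpath graph: positions become nodes, repeated
    # point-to-point traversals become edges, timestamps give journey times.
    sample = TelemetrySample(1353000000.0, 51.5074, -0.1278, 1.4, 270.0,
                             ["CoffeeShopWiFi", "Library_Guest"])
    print(sample)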

Google are very good at manipulating vast datasets, and if anything they're even better at finding inventive and mutually-beneficial ways to convince large numbers of people to voluntarily build those datasets for them.

In short, whatever the plot of Ingress is about, the point of it is to quickly and cheaply build an unrivalled corpus of pedestrian-accessible routes, locations and journey-times for the next generation of foot-enabled Google Maps and Navigation apps, or I'll eat my hat.

Postscript

This idea seems to have gained some traction since I first advanced it on reddit - within a day or two it was all over the social web: suspiciously similar pieces (usually with exactly the same examples) popped up without attribution on half-a-dozen SEO and spammy aggregation blogs, passing reference was made to the idea in a FastCompany article about Niantic Labs, and it was even featured (and cited!) on Forbes' technology blog.

Sunday, 17 January 2010

There are fewer conspiracies than theorists think, but you should still listen carefully to them

Having been online for the last 15 years, and having a strong (if sceptical) fascination with conspiracy theories, I've run into quite a few.

Many are obviously ridiculous on their face, while others somehow suddenly turn from "ridiculous paranoid fantasy" into "boring history" in the public consciousness - usually (and oddly) without ever passing through the stage of "important and shocking revelation" in between.

Obviously these days (after years of The X-Files and similar cultural touchstones) "conspiracy theory" is a loaded and negative label, and most people instinctively disregard anything described as such. However, I think this is somewhat unfair - there are more conspiracies out there than people typically realise, and they've often played a much larger role in shaping the world than most people give them credit for: starting wars, bringing down presidents and contributing to the maiming or deaths of hundreds of innocent citizens.

In addition to the "obviously idiotic" and the "obvious-with-hindsight", I believe there is a class of conspiracy theories which - while incomplete and mis-attributed - still conceal a nugget of truth and worthwhile insight, as long as you disregard their more fanciful claims.

As an example, with the rise in filtering systems and various countries' attempts to filter the net, the meme is gaining strength that these are simply cynical excuses by authoritarian governments to restrict their citizens' freedom, and censor the public discourse.

These concerns are persuasive in that they recognise the problems with such systems - that once in place they only tend to ratchet tighter, and that people will accept any amount of change as long as it's introduced in small enough increments. However, systems like censorship (and by extension even really huge conspiracy theories like the idea of the so-called New World Order - an internationalist/globalist conspiracy to dissolve national boundaries and unite the world under one global government) wouldn't necessarily even require a conscious conspiracy.

These trends (if they exist) aren't some Machiavellian super-conspiracy implemented by a smoky room full of the rich and powerful - they're simply the emergent behaviour of lots and lots of different people, all following their own, parochial agendas, who find themselves (often quite unconsciously, or inadvertently) all pushing society in a similar direction.

Returning to net censorship, what happens is that one short-sighted government puts a filtering system in place to block "unambiguously evil" content like child pornography, and that mechanism is then inherited by subsequent governments, who have their own ideas about what's considered ban-worthy.

Successive governments only encroach on freedom a tiny bit beyond the previous government, but every time someone complains you get people shouting down dissenters on the grounds that "it's only a trivial change, so why are you getting so bent out of shape about it?", or the ever-popular "Yeah, but X is evil - how can you not want X filtered out?" (where X is "terrorism", "hate speech", "child pornography" or the current bête noire).

The other important part of this process is that it's a ratchet effect. Almost no government - short of massive upheaval like a revolution or regime-change - is going to ease off on the filtering, because firstly there's no political capital in doing so, and secondly it would make them look soft on terrorism/paedophilia/whatever the current reason is.

So you have a mechanism where controls ratchet ever-tighter, it's practically impossible to ever loosen them short of a major social upheaval, each step is such a tiny one that people can't emotionally appreciate the importance of resisting it, and anyone who does resist is easily dismissed as reacting disproportionately, or being actively in favour of terrorists, or paedophilia, or whatever the excuse du jour is for "just tightening restrictions a little bit, just this once".

Importantly, and this can't be said enough, this doesn't even require a Machiavellian conspiracy or a particularly authoritarian government behind it - it can happen simply by lots of honest but short-sighted people of good conscience just doing what they think is for the greatest good... but if allowed to run unchecked (and as previously indicated, it's hard to check it without looking like a lunatic or conspiracy theorist) it still ends up in a more restrictive, less free, more authoritarian state in the end.

Project this trend far enough ahead (a few decades is usually enough, although sometimes as little as one will do) and you can quite easily get from an open, successful democratic society to an authoritarian police-state with no large or jarring social upheavals required.

This is exactly why it's so vitally important to never, ever grant any additional powers to any government unless they're absolutely, unarguably necessary - and even then to grant them for a limited span of time, and never, ever renew them unless there's a proven requirement to do so (i.e., never renew simply because keeping the law on the books is the default position, as was arguably the case with the PATRIOT Act renewal in 2005/2006).

Plenty of people instinctively recognise themes and trends like these, but a common cognitive illusion - an overactive sense of external agency - causes them to mistake simple but counter-intuitive emergent behaviour for a conscious, intentional conspiracy. This makes it easy to dismiss them as paranoid or crazy, and with them any legitimate trend they've identified (an example of the Association Fallacy, also known as damning by association).

Clearly I'm not suggesting that all (or even most) conspiracy theories are realistic, accurate or plausible. However, if you run across one it's always worth making an effort to separate out the What and the How from the Who and the Why, and seeing if the processes and effects it describes have any validity on their own.

If someone tells you that a concerted cabal of international bankers and financiers are attempting to bring together and integrate the disparate economies of the world, dissolving national sovereignty and bringing the world to heel under one world government made up of shape-changing lizards, you can safely laugh at the lizards.

However, shorn of its intentional (and sensationalist) framing, there is a distinct trend towards economic and political integration in international politics; the advent of the internet and of international trade deals has inadvertently made national boundaries progressively more porous, and increasing geopolitical integration necessarily reduces national sovereignty somewhat.

When you put it like that it's boring and mundane, but wild-eyed, crazy-haired conspiracy theorists have been pointing out the What of it since the 70s or 80s, and - vaccinated against listening by their kook-like presentation and the cultural stereotype of the "crazy conspiracy theorist" - most of us still aren't even consciously aware it's going on.

I find that extremely interesting, although I ascribe to it no particular group, agenda or intent.

Wednesday, 30 December 2009

Internet memes are not without purpose

Internet memes get a lot of stick - they're usually considered mildly amusing at best, and sterile, content-free, mindless, bovine group-think at worst. However, both these assessments are incomplete - they fall into the trap of judging memes as "good" or "bad", instead of asking why they exist at all.

Memes aren't just jokes - they're the way we form bonds and generate shared context in distributed virtual communities, just like "living near" and "saying hello every day" were the ways we formed context and social bonds in physical, centralised communities like villages, and "chatting around the water-cooler" and "bitching about the boss" are ways we form social bonds and shared context at work.

Part of the problem in society is that as we centralise in huge cities, surrounded by too many people we don't know, we lose the feeling of belonging to a distinct community - which is why city life can be so isolating for some, while others fulfil the need elsewhere (churches, sports teams, hobby/interest clubs, etc.).

The only difference between those communities and the kind of people who make up the core of sites like reddit, Fark or 4chan is that instead of physically going somewhere to interact with other community-members, we're geographically separated and typically a lot more diverse in terms of outlook, age, race, physical appearance and interests.

This means that - for a community to form - we require shared context and some way of differentiating between people "in the community" and those outside it. This is where memes, references and in-jokes come in... and it's also why we have terms like "redditor" or "digger", instead of "people who read reddit" or "people who read Digg".

You can even compare different kinds of communities, and memes seem overwhelmingly to arise where other, more traditional forms of shared-context-building are unavailable or inapplicable:

  • At one extreme, memes rarely arise in traditional physical communities - it's pretty rare for a village, say, to give birth to catchphrases or memes, because the community already has plenty of shared context from living in the same region, sharing the same culture, language and core beliefs, and seeing each other regularly.
  • TV shows pioneered the way, where catchphrases and quotes (though typically only a few per show) could be used to find and bond with like-minded individuals when we encountered them, even though we didn't necessarily live near them, or see them regularly.
  • Moving online, sites like Facebook are still largely clustered around groups of people who have some real-world relationship, and for this reason they too rarely give birth to new memes - though people occasionally import memes from other communities for the sake of humour.
  • More frequently, memes arise from forums (fora?) or social news sites like Slashdot, reddit and Digg. These are sites with a strictly limited ability to share context - their communities are culturally, socially and intellectually extraordinarily diverse, and stories are posted (and disappear beneath later submissions) so fast that there's no guarantee that any two individuals will have seen the same news or read the same content from one day to the next. Practically all that these sites offer in the way of shared-context-building is the ability to recognise the usernames of other users when they post, which - with the sheer number of users - is a wildly inadequate method to generate strong social bonds.
  • Most clearly of all, 4chan is prolific in the generation of new memes - indeed, many memes that users of other sites assume originated on those sites in fact originated on 4chan. 4chan is also unusual in that it does not enforce unique usernames, instead assigning a deeply unmemorable post number as the only guarantee that a given "Bob Smith" is the same "Bob Smith" whose comments you remember reading previously. In fact, 4chan even allows completely anonymous posting, and on its most famous meme-originating boards (/b/ and others) the overwhelming majority of posters post anonymously. Its users are literally bereft of any way to reliably recognise each other or establish a sense of community... and they're simultaneously the most prolific creators of internet memes.

You can see from this trend that memes are a distinct method of community-building, almost unknown in human history, which has largely evolved in the last few decades in response to the increasing isolation of modern life, with its lack of traditional ways to build shared context or easily encounter familiar individuals.

When you get right down to it we're social monkeys, usually happiest in a tribe of one kind or another. Thanks to lifestyle and technology, how we form and maintain those tribes is changing - even within the last few years - and if we can resist the temptation to dismissively complain about this emergent behaviour, it can teach us a lot, both about ourselves and about the new kinds of communities we're forming.

Friday, 20 November 2009

Your Kids Aren't Lazy; They're Just Smarter Than You

There's a recurring theme in the media, and in conversations with members of older generations, and it goes something like this:

"Kids these days have no concentration span. They're always Twittering or texting or instant messaging, and they're always playing these loud, flashy computer games instead of settling down to listen to the radio or read a good book. Computer games and the internet are ruining our kids minds! Won't someone think of the children?"

Oddly enough, these criticisms are often associated with complaints that "kids will spend all hours of the day on the bloody internet or playing these damned games, instead of going outside and climbing trees or riding their bikes", although nobody seems to see the inherent contradiction there.

In a nutshell it's this: surely if these kids really had poor attention spans they'd get bored of the game in short order and move onto something else? And if they lacked the ability for delayed gratification how would they manage to spend hours unlocking every achievement in Soul Calibur or grinding for loot on World of Warcraft?

I've been thinking for a while that much of the perceived "reduction in attention span" is merely kids getting bored with an activity that has inadequate input bandwidth to satisfy them.

For example, my grandparents could sit and listen to the radio with their eyes shut for hours on end, but the pathetically slow drip... drip... drip of information through the radio would rapidly drive me to distraction. Even my parents have trouble doing this - they usually listen to the radio while also doing other things, like household chores or driving.

Likewise, my parents can sit and watch TV for hours on end, but even this eventually bores me - being forced into passively watching and waiting for programmes to get to the point or adverts to finish leaves my brain with too much spare capacity - I either start to over-analyse the content of the show and get annoyed by the perceived agenda, or I start to get fidgety and end up picking up a book or going and doing something more engaging.

Conversely I can browse the web, program or play computer games for hours on end, and observation of most younger people will bear out that this is the norm, rather than the exception. The problem here is clearly not attention-span, or I'd rapidly get bored of surfing or gaming just as I get bored of the radio or TV.

The problem here is that with radio and TV the rate at which information comes to me is slower than I could absorb it, and is determined by an external source - the broadcaster.

Conversely, when I'm playing a game or surfing the web the information-flow is limited only by my ability to absorb it. Result: my attention is fully engaged, I don't get fidgety or bored, and I'm happy indefinitely.

Books are another telling case: personally I love reading, and most "short attention span" kids I know who have a good reading-speed can still sit and read books (surely the least instantly-gratifying and most boring-looking of all media) indefinitely. Their reading-speed matches or exceeds their information-absorption rate, so they're happy.

On the other hand, even "normal" kids I know who have a slow reading-speed get bored and restless after only minutes of reading - even though their information-absorption rate is low, it's still higher than their reading-speed can provide, so they get bored.
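Some rough arithmetic ties these observations together. The figures below are commonly-quoted ballpark rates, assumed for illustration rather than measured:

    # Ballpark figures only - assumptions for illustration, not measurements:
    # broadcast speech is commonly quoted at roughly 150 words per minute,
    # while comfortable adult reading speeds are commonly put at 200-300+ wpm.

    speech_wpm = 150        # assumed pace of radio/TV narration
    fast_reader_wpm = 300   # assumed quick, practised reader
    slow_reader_wpm = 100   # assumed struggling reader

    # A fast reader takes in text at twice the rate speech can supply it, so
    # listening feels like a drip-feed; a slow reader falls below even the
    # speech rate, so reading is the activity that starves and bores them.
    print(f"fast reader vs speech: {fast_reader_wpm / speech_wpm:.1f}x")  # 2.0x
    print(f"slow reader vs speech: {slow_reader_wpm / speech_wpm:.1f}x")  # 0.7x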

I've noticed this in my grandparents, parents and myself, and I'm just past 30. I'd be frankly gob-smacked if this didn't apply to kids who'd only grown up in a world of globally-networked computers, millions of channels, the web at their fingertips and ever-increasing amounts of data to sift through.

It also raises questions about the sudden and questionable upsurge in diagnoses of low-grade ADHD and related disorders in young people over the last few years. Although in the more serious cases these are undoubtedly very real disorders, it's entirely possible that at the lower end, much of what the older generation (and the psycho-pharmaceuticals industry) perceives as pathological behaviour is simply plain old frustrated boredom, in minds adapted to faster and better information-processing than the material on offer can supply.

In summary, I suspect this phenomenon has little to do with "short attention spans", and everything to do with old media (still largely aimed at the older generations) appearing frustratingly slow and boring to ever-more-agile minds raised in our ever-more-information-rich society.

If this is true, this phenomenon could actually be a good thing - our brains are getting faster and better at information-processing, so things which seemed fun to our slower, less-capable ancestors now seem un-stimulating, or no better than momentary diversions.

However, generations who found crocheting or games of "tag" or charades the most amazingly fun experience in their lives now have to watch kids try their cherished childhood hobbies before discarding them as boring, trivial or simplistic.

It's therefore understandable that they find it a lot more comforting to automatically decide there's something wrong with kids today (a refrain that echoes down through the generations)... rather than realise that their own brains are by comparison so poor at information-processing that activities that were stimulating to them as children are just too simple for kids these days.

Wednesday, 29 July 2009

Your opinion is worthless

This is a slightly self-indulgent post, relating to website and forum discussions, rather than a generally-applicable epiphanette. Nevertheless, I think it's an important point, and one which far too few people understand...

I find when browsing internet discussion forums that when someone with controversial or non-mainstream opinions posts and gets voted down, I frequently run across comments similar to the following:

I find I get downmodded a lot because I'm a person willing to speak my mind. That makes a lot of the insecure people here (of which there are many!) uncomfortable, and to try and counter that they downmod my posts.

Straight to it: although sometimes the commenter has a point (people get very attached to their ideas, and can react irrationally when they're threatened), general attitudes like this always make me uncomfortable, because they smack of self-delusion and comfort-beliefs.

Everyone has some element of this in their thinking, but it's rarely justified. As an experiment, consider the following:

Aside from your own clearly-biased personal opinion of your posts, what evidence do you have that your thoughts or beliefs are generally:

  1. Insightful
  2. Interesting
  3. Well-expressed, or
  4. Correct?

Secondly, how many people - even really stupid, boring people - do you think get up in the morning, look in the mirror and think "shit man, I'm a really windy, boring, unoriginal fucker", and then spend a lot of time expressing their opinions to others?

Most people think what they have to say is insightful, interesting, adequately-expressed and correct, or they wouldn't bother posting it.

Now, this idea is correct in that some people vote down anything which contradicts the prevailing wisdom, but people also vote down things which are wrong, stupid, ridiculous or badly-expressed.

Conversely, I know from repeated personal experience that in many communities a well-written, well-argued, non-whingey post which counters the prevailing wisdom frequently still gets a high score, sometimes because of its contrary position.

I know when I post that all I have to go on is my own opinion of my posts, which (as we've established) is almost laughably unreliable. Instead, the votes my posts receive serve as a useful barometer of how closely my own judgement of what's well-written and well-argued matches the general opinion.

It's terribly flattering to think of oneself as a persecuted martyr, but it also usually requires a lot of egotism and a willing blindness to statistics.

To quote the great Carl Sagan:

They laughed at Galileo... but they also laughed at Bozo the clown.

Given a poster's personal opinion is biased to the point it's worthless, and given there are many more clowns in the world than misunderstood geniuses, on what basis do people claim to be downmodded for the content of their opinions, rather than for their worth, or the reliability of the arguments they use to support them?

Claiming you're being downvoted simply because your opinions run counter to the prevailing wisdom, rather than simply because you're self-important or wrong requires you to not only assume you're vastly more intelligent or educated than the average person, but also that most people voting you down are doing so because of a deficiency in their psychology, rather than your own.

When all the objective evidence you have is that a lot of other people disagree with you, it's terribly tempting to believe you're a misunderstood intellectual martyr like Galileo.

The trouble with this, of course, is that while paradigm-shifting geniuses like Galileo only come along a few times a generation, we're knee-deep in idiots, and the tide is rising.

There are literally thousands of times more idiots than geniuses, so being voted down doesn't mean you're a genius - it means that not only are you overwhelmingly likely to be a self-important idiot, but you're also bad at maths.
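If you want the maths spelled out, here's a back-of-the-envelope Bayes calculation in Python. Every number in it is an assumption, deliberately skewed in the "genius" direction, and the conclusion still comes out overwhelmingly against you:

    # All figures below are illustrative assumptions, chosen generously
    # in favour of the self-identified genius.
    p_genius = 1 / 10_000         # assumed base rate of genuinely visionary posters
    p_idiot = 1 - p_genius        # everyone else: wrong, boring or badly argued

    p_down_given_genius = 0.9     # assume visionaries are almost always shouted down
    p_down_given_idiot = 0.5      # assume the rest are downvoted half the time

    # Bayes' theorem: P(genius | downvoted)
    p_down = p_down_given_genius * p_genius + p_down_given_idiot * p_idiot
    p_genius_given_down = p_down_given_genius * p_genius / p_down

    print(f"P(genius | downvoted) = {p_genius_given_down:.5f}")  # ~0.00018

Even with every assumption stacked in your favour, being downvoted leaves you with roughly a one-in-five-thousand chance of being the genius in the room.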

Act appropriately.

Friday, 5 December 2008

The web is not a sheet of paper

Print-turned-web designers:

  1. Learn the medium you're working in. A five minute video of even the best print advert makes for a lousy TV advert. Likewise, techniques and habits refined by years of print design are often sub-optimal or flatly counter-productive when applied to the web.
  2. For the love of god, give up on pixel-perfect positioning and learn to appreciate flow layout. Sure, it makes design harder... but if you think designing flow layouts is hard, think about the poor schmuck developers who have to implement the damn things. And if you think flow layouts are ugly, let's see how well your precious pixel-perfect design holds up when I do something freakishly unusual like resizing my browser window.
  3. Print pages are Things To Look At. Web sites are Things To Use. Prioritising aesthetics over usability or functionality is like putting a car steering wheel in the middle of the dashboard. Sure it looks nicer, but it makes the whole product useless. Incidentally, I swear if I get one more design through with a "button" image specified but no "pressed button" image (or "link" style but no "active/hover/visited link" style) I will personally bite off your head and defecate into your body-cavity. You have been warned.
  4. Conventions are not boring - conventions are your friend. Putting light-switches near doors is a convention. Sure, putting them square in the middle of the ceiling is innovative, but then so is cheesegrating your knees (hey - do you know anyone who's done it?). Innovative means "nobody else is doing it". Accept the possibility that nobody else is doing it because it's a fucking stupid idea.
  5. I don't want to "explore the interface". I want to get in, do my shit and get out again. If you think forcing users to explore the interface is such a good idea, try ripping the labels off all the cans of food in your cupboard. A couple of meals of cat-food, chilli and peaches should demonstrate exactly how "fun" this is for your users.

PANT, pant, pant... pant... ahem.


Originally via reddit.

Wednesday, 16 April 2008

What's wrong with TV?

Why do people who watch TV typically watch so much of it, and why do people who stop watching TV do so? And why do people who don't watch TV often instinctively have such a poor opinion of it? After all, they usually used to watch it too, didn't they?

For the purpose of explaining my theory, I'm going to start with the conclusion I reached and work back from there:

People dislike TV because it's an inherently sensationalist, emotive and passive medium. It positively discourages critical thought and subtly but definitely encourages a receptive, passive, uncritical mindset.

Constrained input frequency

Television drip-feeds you information without any activity on your part - indeed, you're better off not moving or doing anything else, as any activity on your part will only distract from the medium. Unlike reading (where you can read as fast as you feel comfortable), the speed of information-flow is limited by the television. This encourages a passive, receptive mindset, as there's literally nothing you can do to affect the incoming information flow without degrading it (e.g., by fast-forwarding).

Water, water everywhere...

The (often largely irrelevant) moving visual image also means that though there's a lot of information to take in at any one time, precious little of this is useful data.

Compare the amount of time and raw information in the written sentence

"The man walked twenty metres down the road."

compared with a video imparting exactly the same thing. Also compare the amount of useful, important data in the written sentence versus the sheer volume of unimportant information you have to assimilate from a video to get the same amount of data[1].
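For a rough sense of scale, here's an illustrative calculation. The walking pace and video bitrate are assumptions, but any plausible numbers yield the same conclusion:

    sentence = "The man walked twenty metres down the road."
    text_bytes = len(sentence.encode("utf-8"))          # 43 bytes

    distance_m = 20
    walking_pace_m_per_s = 1.4                          # assumed typical pace
    clip_seconds = distance_m / walking_pace_m_per_s    # ~14 seconds of footage

    video_bitrate_bps = 2_000_000                       # assumed modest SD stream
    video_bytes = clip_seconds * video_bitrate_bps / 8  # ~3.6 MB

    print(f"text:  {text_bytes} bytes")
    print(f"video: {video_bytes / 1e6:.1f} MB - "
          f"roughly {video_bytes / text_bytes:,.0f}x the raw information")

The video carries several orders of magnitude more raw information than the sentence, almost none of which is the actual message.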

Nevertheless, with TV you still have to sift through all this unimportant information to select out the important parts, and this extra cognitive workload impairs and discourages any other thoughts you may be having. Additionally, having to handle such a comparatively large amount of input in real-time reduces the amount of attention you have left over to consider, analyse and critically evaluate what you're receiving.

Compared to other media

Reading, in contrast, is like accessing pre-filtered meaning - very little written text doesn't relate data essential to the communication, and text which does break this rule quickly becomes dull, windy and boring. Text is low-bandwidth (lower even than radio) and isn't inherently interesting to look at, so since the form won't hold your interest the content is required to be more interesting. Written text typically has a higher ratio of data-to-information - analogous to the idea of the signal:noise ratio in electronics.

Likewise, while radio has many of the same faults as TV (no random access, a constrained input frequency), its lower bandwidth (audio rather than video) means it communicates meaning much more efficiently than television - again, a higher data:information ratio.

Poor at communicating meaning

TV therefore communicates surprisingly inefficiently, in terms of the amount of raw information you have to sift through to extract meaning. This means that while it typically requires a large amount of attention to parse out the incoming information, it imparts relatively little actual data or knowledge... and what data it does impart is drip-fed to you at a rate much slower than you could typically assimilate it if it were presented in a more condensed or refined form.

Finally, its imposed (and slow) rhythm, its high-bandwidth but data-poor input and its complete lack of interactivity (indeed, its disincentive to any activity at all) mean that no matter what you're watching, the medium itself acts to cultivate a more passive, receptive and uncritical mindset than you would otherwise experience.

Don't get me wrong - obviously there are plenty of situations where information could only sensibly be transmitted in a video medium (sports events, any communication where movement and visual change over time are important aspects, etc), and in any one particular instance these effects are typically very small[2]. However, when you compare the very nature of the medium of TV to other media (books, the web... arguably even computer games) it's hard not to come to the conclusion that it's the least interactive, least efficient and most unchallenging of all the mainstream media we typically use.


Footnotes

[1] Finally, if any extra detail is required or desired by the receiver (What man? How old was he? Was he wearing a hat? What colour?), compare the amount of mental exercise required of the receiver to imagine all these things with that needed merely to observe them.

[2] But the rather more worrying question is: are they cumulative? If one gets used to regularly existing in a passive, receptive, uncritical mindset, does that make it easier (and more common) to experience it in future? Obviously I'm not trying to claim TV turns people into mooing idiots or anything so excessive... nevertheless, at the very least it raises interesting questions...