How Data Can Make You More Creative (slides)

Slides from a talk I gave to the 3rd-year media students at Regent’s University in London a few days ago. Lots of examples from TV and from my own experience. The title was: How Data Can Make You More Creative.

The deck contains a framework for understanding the many different types of data available in the TV industry.

Questions welcome – enjoy!

How to understand Facebook

Joanne Garde-Hansen again:

Facebook is a database of users for users; each user’s page is a database of their life.

I disagree. Facebook does have a huge database, but it’s how it uses that data that is important.

Its real strength is using context to create meaning. Check this out.

A user (in this case, my bro!) types something into the status update box. Just one simple line of text.

On its own, that text is pretty meaningless. A short string of characters sitting on a server somewhere.

So let’s look at how Facebook connects pieces of information to create context.

Here that string of text has been connected with other info: the user’s name and profile pic, the time he posted it, and the method he used to do so.

There’s also the opportunity for other users to interact, and they do:

So now there are two Likes: there's social information connected too – information generated by other users of the site. That one simple string of text has been put into context by Facebook and re-presented to Sam and his friends, along with a whole load of other information which, in sum, creates an engaging experience for users of the site.
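
To make that concrete, here's a minimal sketch in Python of the same idea – a bare string being enriched with context. The status text, field names and friends are all invented for illustration; this is not Facebook's actual data model.

```python
from datetime import datetime, timezone

# A bare status update: on its own, just a short string of characters
# sitting on a server somewhere. The text itself is invented.
raw_text = "Just handed in my dissertation!"

# The same string once the platform has connected it to other information.
# All field names and values here are invented for illustration.
status_in_context = {
    "text": raw_text,
    "author": {"name": "Sam", "profile_pic": "sam.jpg"},
    "posted_at": datetime(2013, 5, 4, 21, 15, tzinfo=timezone.utc),
    "posted_via": "mobile",
    "likes": ["Friend A", "Friend B"],   # social info generated by other users
    "comments": [],
}

# The meaning comes from the connections, not from the string itself.
print(
    f'{status_in_context["author"]["name"]} posted '
    f'"{status_in_context["text"]}" via {status_in_context["posted_via"]} – '
    f'{len(status_in_context["likes"])} likes so far'
)
```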

And when you start to see Facebook like that – as a connector of small snippets of information – then you can see just how clever and complex it is:

Facebook connects many small things to make one big thing – and that big thing looks different to everyone, depending on the context Facebook creates for all those millions of small things.

So Facebook is not a database. Databases don’t offer context. Facebook connects information to create context, and context creates meaning.

Facebook is a meaning machine.

+++

Garde-Hansen, J. (2009). My memories? Personal digital archive fever and Facebook. In Garde-Hansen, J., Hoskins, A. and Reading, A. (eds), Save As… Digital Memories. London: Palgrave Macmillan, pp. 135-150

Full text: http://ds.haverford.edu/fortherecord/wp-content/uploads/2012/06/Garde-Hansen.pdf

A typology for interactivity in TV

Academics have offered a wide range of definitions of interactivity (Kiousis 2002). I prefer to follow the lead of Cover (2006, p.140) in defining interactivity as occurring when

Content is affected, resequenced, altered, customized or renarrated in the interactive process of audiencehood.

So an interactive show is defined as one in which the audience directly affects what happens on the TV screen by sending information that is collected by the production team. Many shows fit that description – but how can we differentiate between them in a useful way?

I propose a four-part model based on two axes. To illustrate the differences, I have included shows as examples: The X Factor, The Million Pound Drop, Football Focus and Smart Live – Interactive Roulette.

In the diagram below, interactivity in TV shows is mapped according to the frequency of input and the impact of viewer contributions:

Interactivity in The X Factor occurs with low frequency but with high impact: once a week, during the two or more hours the show is on-air, the public decides who should progress.

Football Focus has low-frequency, low-impact interactivity: occasionally viewers’ tweets are read out, and only some of them inform the studio discussion.

The Million Pound Drop offers high-frequency, low-impact interactivity: viewers can give their answers to every question faced by the contestants; those answers may be used to create editorial content, but do not affect the course of the show.

High-frequency, high-impact shows tend to appear on smaller channels but, like Smart Live – Interactive Roulette, they focus entirely on the actions of viewers who are playing along at home.
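
If you prefer code to diagrams, the same typology can be expressed as a tiny lookup table – a sketch only, with the four example shows placed in the quadrants described above:

```python
# A sketch of the two-axis typology: each show is placed according to the
# frequency of viewer input and the impact of that input on the programme.
SHOWS = {
    "The X Factor":                      {"frequency": "low",  "impact": "high"},
    "Football Focus":                    {"frequency": "low",  "impact": "low"},
    "The Million Pound Drop":            {"frequency": "high", "impact": "low"},
    "Smart Live - Interactive Roulette": {"frequency": "high", "impact": "high"},
}

def quadrant(show: str) -> str:
    """Return the quadrant a show occupies in the typology."""
    s = SHOWS[show]
    return f'{s["frequency"]}-frequency / {s["impact"]}-impact'

for name in SHOWS:
    print(f"{name}: {quadrant(name)}")
```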

+++

This is an adapted excerpt from an article I co-authored with Dr. Kris Erickson, now of Glasgow University.

References:

Cover, R., 2006. Audience inter/active: Interactive media, narrative control and reconceiving audience history. New Media & Society, 8(1), 139-158.

Kiousis, S., 2002. Interactivity: a concept explication. New Media & Society, 4(3), 355-383.

Programmes as platforms

This post proposes a new approach to understanding how interactivity changes traditional models of TV production.

It can be summarised simply: programmes as platforms.

A platform is a space into which consumers are invited, where their contributions are solicited and facilitated, and in which collaborative creations can be undertaken for the mutual benefit of the platform owner and the consumer-contributors.

The most prominent interactive media successes of recent years have been internet-powered platforms. Among them are Twitter and Facebook, which have amassed 200m and 1.1bn MAU (monthly active users) respectively (Etherington 2012, Olanoff 2013; MAU = people who have logged into the service 1+ times in the past 30 days).
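
As an aside, the MAU metric itself is simple to compute if you have a log of logins. A minimal sketch, assuming a plain list of (user, timestamp) records – not either company's real reporting pipeline:

```python
from datetime import datetime, timedelta

def monthly_active_users(logins, as_of):
    """Count users with at least one login in the 30 days before `as_of`.

    `logins` is an iterable of (user_id, login_time) pairs. This mirrors the
    definition above (1+ logins in the past 30 days), nothing more.
    """
    cutoff = as_of - timedelta(days=30)
    return len({user for user, t in logins if cutoff <= t <= as_of})

# Toy data – the users and dates are invented:
logins = [
    ("alice", datetime(2013, 4, 10)),
    ("bob",   datetime(2013, 4, 28)),
    ("alice", datetime(2013, 5, 1)),
    ("carol", datetime(2013, 2, 1)),   # too long ago to count
]
print(monthly_active_users(logins, as_of=datetime(2013, 5, 4)))  # -> 2
```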

Both Twitter and Facebook deal with consumer contributions in the same way. Consumers input information, and that information is given context and meaning by the framework that the platform owner has created.

For example, if a consumer types an update into Facebook on what they have been doing today – such as ‘I’ve been writing my article!’ – that input means little in isolation. But Facebook’s platform connects consumers who know one another – so the update is seen only by the consumer’s friends. The update is submitted for display on each friend’s Facebook homepage, but Facebook has an algorithm that gives a single update a different level of prominence for different friends – so it will be more prominent on the home page of friends with whom the consumer regularly interacts, and may not appear at all on more distant friends’ homepages. If a friend sees and chooses to comment on the update, the update gains greater prominence on the homepages of the consumer’s friends – so more interesting or amusing updates will gain greater prominence over time.
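
Facebook's real ranking algorithm is proprietary, but the behaviour described above – more prominence for close friends, a boost for updates that attract comments, a gradual fade over time – can be sketched as a toy scoring function. The weights and parameters below are invented for illustration:

```python
# A toy prominence score for one update on one friend's homepage. This is an
# illustration of the behaviour described above, not Facebook's actual
# ranking algorithm; the weights are invented.
def prominence(interactions_with_author: int, comment_count: int, hours_old: float) -> float:
    affinity = interactions_with_author   # how often this friend interacts with the author
    engagement = 2.0 * comment_count      # comments make an update more prominent for everyone
    decay = 1.0 / (1.0 + hours_old)       # older updates gradually fade
    return (1.0 + affinity + engagement) * decay

# The same update gets a different score on different friends' homepages:
close_friend = prominence(interactions_with_author=25, comment_count=3, hours_old=2.0)
distant_friend = prominence(interactions_with_author=0, comment_count=3, hours_old=2.0)
print(close_friend, distant_friend)   # the close friend sees it far more prominently
```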

The contribution from the consumer is simple – just like a contribution to an interactive TV show. It is the way the contribution is handled by the platform that turns it into something useful, interesting, or entertaining – again, just like an interactive TV show.

TV executives seeking to understand the effects of interactivity should therefore think about interactive programmes as platforms, not as slightly modified versions of traditional shows.

In this context, ‘interactive programmes’ becomes a more suitable term than ‘interactive shows’. ‘Shows’ is historically connected with theatrical performances – pre-arranged, pre-packaged, and closed to consumer contributions; ‘programmes’ refers to computing, and so is more closely linked with web platforms.

The similarities are evident in the language used to describe web platforms. Cohen (2008) offers an analysis of web 2.0 sites such as Twitter and Facebook that could just as easily be applied to interactive TV shows:

Web 2.0 has [adjusted] consumers’ role in the production process. In [previous] models, the role of consumers has been… to consume, or to watch and read the product. Web 2.0 consumers, however, have become producers who fulfil a critical role.

There is also a connection in the language used to describe open innovation and open-source creation, both of which are usually facilitated by web platforms. Open innovation and open-source creation are discussed by Boudreau and Lakhani (2009). They believe that one critical ingredient in the success of the projects they cover is a high level of skill in channelling external contributions, and they describe open innovation in terms that neatly fit the creation of interactive TV programmes:

[It] is necessarily about carefully designing a set of mechanisms to govern, shape, direct, and even constrain external innovators; it is not about blindly giving up control and hoping for the best.

Web platforms are therefore suggested as an excellent example for those seeking to understand interactive TV shows – a much better example, in fact, than traditional TV shows.

Programmes as platforms.

+++

This is an adapted excerpt from an article I co-authored with Dr. Kris Erickson, now of Glasgow University.

References:

Etherington, D., 18 December 2012. Twitter Passes 200M Monthly Active Users, A 42% Increase Over 9 Months. TechCrunch. http://techcrunch.com/2012/12/18/twitter-passes-200m-monthly-active-users-a-42-increase-over-9-months/ [Accessed 4 May 2013]

Olanoff, D., 1 May 2013. Facebook’s Monthly Active Users Up 23% to 1.11B; Daily Users Up 26% To 665M; Mobile MAUs Up 54% To 751M. TechCrunch. Available from: http://techcrunch.com/2013/05/01/facebook-sees-26-year-over-year-growth-in-daus-23-in-maus-mobile-54/ [Accessed 4 May 2013]

Cohen, N.S. (2008). The valorization of surveillance: Towards a political economy of Facebook. Democratic Communiqué, 22(1), 5-22.

Boudreau, K. and Lakhani, K. (2009). How to Manage Outside Innovation: Competitive Markets or Collaborative Communities? MIT Sloan Management Review, 50(4), pp. 69-75. Available from: http://bit.ly/1686joX [Accessed 7 September 2013]

How to segment a TV audience

Thirty years ago, Gary Alan Fine adapted Erving Goffman's idea of frame analysis to describe tabletop gaming.

Frame analysis looks at how engrossed a participant is in the game:
1. The primary frame of the real world, the reference point for all activities
2. The game context, with its rules and structures
3. The fictional world presented within the game, in which players appear as characters

It strikes me that frame analysis would also be helpful in understanding different levels of engagement with other participatory media, such as interactive TV shows.

The three frames would look like this:
1. The real world beyond the TV set – everyone exists and everything happens within this frame
2. The show context, with the format's rules and mechanics – everyone watching the show experiences this frame
3. The world of the contestants in the show, for whom a certain percentage of viewers have voted – those who vote experience this frame. The recipients of their votes are their representatives in the show, analogous to the characters in a game.

At each level you see different degrees of engagement with the show. There could also be additional frames/levels, e.g. for those who only watch the show occasionally, for those watching for the first time, or for those who cast a very high number of votes.

These frames are useful because they create a segmentation profile of the audience. Segmentation is very well understood in games, where I work now – but in TV (my alma mater) it is barely even recognised by most producers.

The great gift of segmentation is that you can tailor what you provide to audience members according to their level of engagement.
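
A minimal sketch of what that tailoring could look like, treating the frames above as segments. The thresholds and the tailored experiences are invented for illustration:

```python
# Segment viewers by engagement, using the frames above as rough segments.
# The thresholds and the tailored experiences are invented for illustration.
def segment(episodes_watched: int, votes_cast: int) -> str:
    if votes_cast > 0:
        return "frame 3: voters"            # invested in the contestants' world
    if episodes_watched > 0:
        return "frame 2: viewers"           # engaged with the format and its rules
    return "frame 1: general public"        # the real world beyond the TV set

def tailored_experience(viewer_segment: str) -> str:
    return {
        "frame 3: voters": "second-screen stats on how your contestant is doing",
        "frame 2: viewers": "a recap and a prompt to vote",
        "frame 1: general public": "a trailer for next week's show",
    }[viewer_segment]

print(tailored_experience(segment(episodes_watched=6, votes_cast=12)))
```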

The big difference between TV and games is this: a game can look different to every player, whereas in TV you must serve the same show to everyone.

The only way around this is to customise TV viewers' experiences somewhere else, somewhere other than the TV screen. This is the real potential of interactive TV, a potential whose surface producers have barely scratched: serving viewer-specific experiences not through the TV, but through a second screen – laptop, phone, tablet, or whatever device comes next.

Frame analysis could be very helpful in segmenting the audience and in defining who should see what.

+++

Fine, G.A. (1983), Shared Fantasy: Role-Playing Games as Social Worlds (Chicago: University of Chicago Press), cited in Glas, R. (2013), Breaking Reality: Exploring Pervasive Cheating in Foursquare, Transactions of the Digital Games Research Association 1:1, pp. 2-3

Link to full text of the Glas article: http://todigra.org/index.php/todigra/article/view/4/13

Interactive overload

More interactivity does not = more enjoyment.

Somehow the two have been conflated over the past few years, as if every TV viewer, radio listener, newspaper reader and games player had been barely able to contain their enthusiasm to 'interact' in some way with the content they were consuming – and as if, thanks to new technologies, that latent desire could finally burst forth and flourish to create a brave new world of 'better' media products.

As Espen Aarseth put it, interactivity has come to imply that

the role of the consumer [has] (or very soon [will]) change for the better

This idea has gained traction both in industry and in academia.

But it is nonsense.

The role of the creator/producer/manager is to ensure that interactivity is channeled by the product and integrated into it in such a way that the whole becomes stronger.

American Idol is not a hyper-democratic free-for-all because the show would be terrible if the audience voted not only for the winners, but also for the presenters, the judges, the cameramen, the director, and the production team.

The best interactivity is actually structured and limited. Radical democracy does not great products make. Even Wikipedia (winner of Democratic Content Creation Idol ten years in a row) depends on a small army of admins and editors.

More interactivity does not = more enjoyment.

+++

Aarseth, E.J. (1997), Cybertext: Perspectives on Ergodic Literature (Baltimore: Johns Hopkins University Press), p.48, quoted by Glas, R. (2013), Breaking Reality: Exploring Pervasive Cheating in Foursquare, Transactions of the Digital Games Research Association 1:1, pp. 2-3

Link to full text: http://todigra.org/index.php/todigra/article/view/4/13

Apophenia

A follow-up to my post on A/B testing being both an art and a science.

Here’s another quote from boyd and Crawford:

Big Data tempts some researchers to believe that they can see everything at a 30,000-foot view. It is the kind of data that encourages the practice of apophenia: seeing patterns where none actually exist, simply because massive quantities of data can offer connections that radiate in all directions.

This risk is becoming serious.

As government agencies start to crunch the numbers in a Big way, with analysts no better than those working in the private sector, there's a risk of guilt by algorithmic association.
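
A toy demonstration of how easy apophenia is: generate nothing but random noise, test enough pairs of variables, and a handful of 'strong' patterns will appear by chance alone.

```python
import random

# Apophenia in miniature: every series below is pure noise, yet testing
# enough pairs still turns up some 'strong' correlations by chance alone.
random.seed(1)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

series = [[random.gauss(0, 1) for _ in range(20)] for _ in range(50)]

spurious = []
for i in range(len(series)):
    for j in range(i + 1, len(series)):
        r = corr(series[i], series[j])
        if abs(r) > 0.6:                 # looks like a 'pattern'
            spurious.append((i, j, round(r, 2)))

print(f"{len(spurious)} 'strong' correlations found in {50 * 49 // 2} pairs of pure noise")
```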

+++

Reference:

boyd, d. and Crawford, K. (2011), Six Provocations for Big Data. Symposium on the Dynamics of the Internet and Society, September 2011.

Full text of draft paper: http://www.danah.org/papers/2012/BigData-ICS-Draft.pdf

#DoctorWho and The Norwegian Texters

Espen Ytreberg reports an interview with a Norwegian TV viewer who was describing the experience of sending an SMS and seeing it appear on the TV.

Interviewee: It’s great fun… You get that return message, ’Your message will be screened within so and so long’, right? That’s a real kick, you have to try it too. You haven’t sent a message yet?

Researcher: No, maybe I should try it too…

Interviewee: Yes, yes, yes! You have to experience that for yourself, seeing your words roll over the screen, it’s like ‘Wow, I wrote that’.

The thrill of interaction can be very simple. In the show Ytreberg describes, every single SMS would appear on TV. This was the 'guaranteed display' model of interaction.

That model is no longer possible.

As the volume of interactions rises, the way in which viewers' contributions are re-presented to them is starting to change. And that in turn changes viewers' expectations of, and reasons for, interacting.

The announcement of the new Doctor Who on Sunday generated more than a million tweets – far more than could be displayed meaningfully on-screen under the old model.

Instead, producers have two choices:
1. Pick out select interactions, and ignore the rest
2. Aggregate interactions so that all are taken into account, but no individual messages are singled out

#1 promotes competitive punditry.

#2 promotes peer-to-peer and peer-to-producer conversations.
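
A minimal sketch of the difference between the two approaches, with a toy list of tweets standing in for the real firehose (the tweets and the word-counting are invented for illustration):

```python
from collections import Counter

# Toy tweets standing in for the real firehose – all invented.
tweets = [
    "Capaldi! Brilliant choice #DoctorWho",
    "Not sure about this one #DoctorWho",
    "Capaldi is going to be amazing #DoctorWho",
    "I wanted someone younger #DoctorWho",
]

# Option 1: pick out a handful of interactions and ignore the rest.
def select_for_broadcast(tweets, n=2):
    return tweets[:n]        # in reality an editorial choice, not simply the first n

# Option 2: aggregate so every interaction counts, but none is singled out.
def aggregate(tweets):
    words = Counter(w.strip("#!,.").lower() for t in tweets for w in t.split())
    return words.most_common(3)

print(select_for_broadcast(tweets))
print(aggregate(tweets))
```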

The BBC went for #1, with a handful of selected tweets scrolling across the foot of the screen.

Neither is wrong. But for shows of any real size, Ytreberg’s Norwegian TV texters will have to be satisfied with a different model of interaction.

+++

Reference: Espen Ytreberg (2009), ‘Extended livefulness and eventfulness in multi-platform reality formats’, New Media & Society 11(4), p.475

Full text: http://www.academia.edu/3201460/Extended_liveness_and_eventfulness_in_multi-platform_reality_formats

The future of History

Libraries today are full of books. The Radcliffe Camera in Oxford is a fine example – a beautiful library with space for no fewer than 600,000 books.

Books are the main way in which we understand the past five hundred years or so. We supplement them with art, architecture, and other physical relics (bits of things, bits of people) to create an overall picture of the past.

Studying history, though, basically means reading lots of books – certainly when you’re studying History (with a capital ‘H’) at university, like I did – because it’s in books that previous cultures and societies were recorded best.

Clearly that’s not going to be true of our culture or society today.

The ability to create artefacts that in some way record aspects of the current age has been democratised. It’s quick, cheap and easy to create and store material in blogs, on Facebook, on Twitter, on YouTube, on SoundCloud – and that material is likely to be around a good while since digital items should degrade much more slowly than physical ones.

That will have an impact on how our era is studied by the historians of the future. The Radcliffe Camera won’t be as useful for the early twenty-first century as it is for the eighteenth.

What might the study of History look like in the future?

1. Increased scale

There will be a whole lot more material to get through. In 2010, Google Executive Chairman Eric Schmidt said that every two days we create as much information as we did from the dawn of civilisation up to 2003. Even if it’s not quite as much as that, that’s an incredible amount of info and it’s only going to get bigger.

2. Fewer books

As described above, a smaller proportion of the information created about our time will be stored in printed books.

3. Language barriers will get smaller

There are numerous web-based tools and apps that translate foreign languages on the fly. Google Translate is going to get better rather than worse, so written texts in other languages will become increasingly accessible, and combining it with something like Siri will open up spoken texts too. The end game here is that studying becomes language-agnostic, since texts in all languages are accessible.

4. Digitisation

As pretty much everything will be digitised (either its original form will be digital, or a digital copy will be made), it should be much easier to access information that might be of interest. Many books and journals currently exist only in paper form, so you need a physical copy in order to reach the information they contain.

5. The role of the librarian will be reinvented (and become much more highly valued)

Students of the history of today will need expert sherpas to guide them through the wealth of information available. There’s an interesting parallel here with some of the suggestions on how high street shops (including bookshops) might survive – as locations in which subject-matter expertise can be delivered in person.

6. Standardisation to assist discovery

We may need international standards for meta-tagging, categorising, or otherwise making information searchable. Otherwise even the experts will find navigating a path through it rather tricky.

The future of History will be very different to its past.

+++

Photo credit: MPerel on Wikimedia Commons

What is B?


As we enter the age of ‘Big Data’, we’re at risk of excluding ourselves from decision-making a little too hastily.

danah boyd and Kate Crawford quote Chris Anderson, ex-Editor-in-Chief of Wired, speaking in praise of what he terms ‘The Petabyte Age’:

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

It is true that millions of companies are running billions of A/B tests on trillions of data points to try to determine how to improve their services.

But even though examining the data might tell you what you’re doing wrong, it can’t tell you how to put it right.
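
You can see that limit in the arithmetic of a basic A/B comparison. Here's a minimal sketch (a two-proportion z-test with made-up conversion numbers): it can tell you whether B beats A, but nothing in it generates B.

```python
from math import sqrt, erf

def ab_test(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test: does variant B convert better than variant A?"""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error under H0
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))               # one-sided: B > A
    return p_b - p_a, p_value

# Made-up conversion numbers. The test can say whether B beat A –
# but a human had to imagine, design and build B in the first place.
lift, p_value = ab_test(conversions_a=200, n_a=10_000, conversions_b=260, n_b=10_000)
print(f"lift: {lift:.2%}, p-value: {p_value:.4f}")
```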

If A is what you have now, what is B?

B must be defined, built, designed by humans.

+++

Reference:

boyd, d. and Crawford, K. (2011), Six Provocations for Big Data. Symposium on the Dynamics of the Internet and Society, September 2011.

Full text of draft paper: http://www.danah.org/papers/2012/BigData-ICS-Draft.pdf