A Couple Of Thoughts From Twitter

Now and again you read something that perfectly sums up the jumble of thoughts in your head.

I think these 2 are spot on.

Faris

I’ve cracked it! do stuff people like! then people will like you more and maybe buy some of your stuff. if it’s good stuff.

James Cooper

Spend money on doing something interesting. Then make YouTube film about it rather than TV ad.

Anyone got any others?

Our Path To Truly Rich, Personalised Video Experiences

Dom wrote this feature for the 1st anniversary of the rebranded Revolution magazine. This is a copy+paste of the expanded version posted on his blog.

It gives you a glimpse into some of the projects I’ve worked on at glue, and the technologies we’re looking into at the moment.

Little did I know it at the time, but a project for Mars’ sponsorship of Euro 2006 was the catalyst for a new approach to personalised video content here at glue.

What we did was crude and simple: we allowed people to create a fan by choosing a head, body and hands. Each asset existed as a PNG on the server and, depending on what was chosen, a JPEG was composited using ImageMagick. Without thinking too much more about it, we moved on to the next project.
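For the curious, that compositing step can be sketched as follows. This is a minimal Python sketch that just constructs an ImageMagick `convert` command line; the asset filenames and function are illustrative assumptions, not the original campaign code:

```python
def build_convert_command(body, head, hands, out_jpeg):
    """Compose chosen PNG layers into one JPEG via ImageMagick's `convert`.

    Layers are composited in order, so later assets sit on top of earlier ones.
    """
    return [
        "convert",
        body,                 # base layer: the chosen body
        head, "-composite",   # overlay the chosen head
        hands, "-composite",  # overlay the chosen hands
        out_jpeg,             # output format inferred from the .jpg extension
    ]

# Hypothetical asset names for illustration:
cmd = build_convert_command("body_1.png", "head_3.png", "hands_2.png", "fan.jpg")
```

The command list could then be handed to `subprocess.run(cmd)` on a server with ImageMagick installed.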

A year later our Get The Message recruitment campaign for the Royal Navy was born:

[image: navy.jpg]

We quickly realised that the audience we wanted to recruit weren’t exclusively those sat behind PCs all day. In fact the bulk of them weren’t. For this audience the only real channel available at scale was mobile.

The problem was that we’d become experts in interactive video using Flash, but Flash wasn’t (and broadly still isn’t) compatible with many handsets. The file format of choice was, and still is, MPEG video, so we needed to replicate the browser experience using it.

We scratched our heads and fairly quickly came round to the idea that if we could create individual JPEGs on the fly, stitching them together would create video. So that’s exactly what we did – this time combining ImageMagick with FFMPEG.
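The stitching step can be sketched in the same spirit. Again this is a hedged Python sketch that only builds an FFMPEG command line; the frame pattern, frame rate and codec are illustrative assumptions rather than the actual production pipeline:

```python
def build_ffmpeg_command(frame_pattern, fps, out_file):
    """Build an FFMPEG command that stitches numbered JPEG frames into a video.

    frame_pattern is a printf-style pattern such as "frame_%04d.jpg",
    matching the sequence of frames generated upstream.
    """
    return [
        "ffmpeg",
        "-framerate", str(fps),  # frame rate of the source image sequence
        "-i", frame_pattern,     # input: the generated JPEG frames
        "-c:v", "mpeg4",         # a handset-friendly MPEG-4 video codec
        out_file,
    ]

cmd = build_ffmpeg_command("frame_%04d.jpg", 25, "message.mp4")
```

As with the compositing sketch, the list could be run via `subprocess.run(cmd)` wherever FFMPEG is available.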

The video message is delivered as an SMS. The recipient downloads and watches the video, and also has the ability to respond directly on the handset:


Get the message mobile video from Dom O’Brien on Vimeo.

At the time this was a first, and we all felt pretty happy and gave ourselves a slap on the back like only the ad industry can. But almost naively, and for the second time, we’d stumbled upon the door to a much bigger opportunity:

Replicating the Flash experience had fulfilled the requirements of this project, but we soon recognised that by automating motion graphics or 3D packages it’s immediately possible to generate video without creative limits.

Enter DYNAMIC VIDEO (a phrase we’ve bandied about the agency for a few years now that REALLY needs a better name…)

Whilst traditional video is shot with a camera and broadcast, dynamic video allows for content to be generated specific to the person watching it, at the moment of viewing.

To help understand this complex concept, think about the gaming world where a game is produced but each game-play is unique to the actions of the game player. With dynamic video the same is now true for brand experiences.

Here’s one such example we created in 2008 for Bacardi using their existing endorsement of UK beatboxing champion Beardyman.

The project was initiated by the simple thought, ‘wouldn’t it be great if everyone could beatbox as well as Beardyman.’ And from there a project was born.

It’s a simple upload-your-face mechanic, using Kofi Annan here for the purposes of the demo:


Bacardi Beatology from Dom O’Brien on Vimeo.

[image: kofi_annan-150x150.jpg]

There’s all sorts of complex things going on under the bonnet.

There’s proprietary image recognition software interpreting the uploaded photo, identifying facial features and stripping the face out from its background (no need for manual intervention).

Then, using 3ds Max, the video is generated by mapping the face texture onto existing wireframe animations.

This technique has two immediate benefits:

1. Visually, pretty much anything is possible (at least anything that’s possible within motion graphics or 3D applications)

2. The generated file is the ubiquitous MPEG, enabling distribution across channels without the need to re-engineer

However the technique is fairly processor-intensive, taking around 20 seconds per person to generate. That gives a throughput of 4,320 videos per processor per day. Whilst this is OK on a smallish campaign, for larger ones the only real option is to throw more hardware at it, which can be costly and only becomes viable once a client really values what’s being achieved creatively.
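The throughput figure falls straight out of the arithmetic, as a quick sanity check:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds in a day
RENDER_TIME = 20                # seconds to generate one personalised video

videos_per_day = SECONDS_PER_DAY // RENDER_TIME
print(videos_per_day)  # 4320 videos per processor per day
```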

The emergence of cloud computing farms, and the rendering capacity they offer, solves this issue to an extent, but it’s early days. These cloud farms not only offer scalable rendering; with the proliferation of smaller devices in all our pockets, they also enable richer experiences to be rendered remotely and viewed on the device.

Another sector dabbling with cloud farms in this way is the handful of cloud gaming companies that have recently emerged, which negate the need for a console by rendering the game remotely and streaming it into the home over your broadband. (Can our broadband really cope with realtime 1080p video? Or is that partly why these services haven’t yet taken off?) Definitely one to keep an eye on.

As is the recent emergence of open source, video-specific encoding farms like PandaStream.

Or potentially the answer lies not in saving the generated video to file, but in constructing the video dynamically within the stream, as done here:

[image: audicar.jpg]

It’s a neat solution, but its SDK means the production process is alien to existing skill sets, at least in the short term.

So generally speaking it would be fair to say there’s a lot of trial and error still needed. And I can’t help but notice that the aforementioned gaming industry is set on a collision course with the digital industry – both attacking a similar problem from different angles. It’s a most exciting prospect. (Here’s the closest example of the two together I’ve seen to date.)

In the meantime it would be great to think that the Adobes of this world, or maybe more likely hardware players like Nvidia or AMD, will move into this space and create a tool to ease the production process. Until they do, these experiences will be built through ingenuity, combining niche technologies to fit the needs of the project.

It therefore becomes apparent that to stay ahead of competitors, R&D can’t be undervalued. The same goes for having the time and freedom to explore, trial and learn new technologies and techniques on paid-for work. As we’ve seen here, bits of work that may not seem like much at the time can prove invaluable later by re-emerging as a wholly different entity.

So collectively we (the industry) have come to a juncture where new creative opportunities exist. This brings with it the need for internal re-education, both in how we approach briefs conceptually and in how we capture assets in a new way that allows them to be manipulated with these techniques.

And with an eye on the future: glue recently ventured into the world of TV. I for one am really excited at the prospect of the day the archaic TV broadcasting infrastructure is modernised and we can apply our digital know-how to the currently stagnant format. It defies belief that everything is still run from Betamax. Admittedly I don’t know the setup intimately, but I’d have thought all it needs is for systems to be driven by an internet-enabled computer, which happens on occasion, but not enough.

Here’s another, more dynamic example that the clever boys and girls at MiniVegas negotiated as a special short-term deal for S4C a few years ago:


S4C ident by Minivegas from Dom O’Brien on Vimeo.

We’re undoubtedly in exciting times, and hats off to the team here driving all of this forward: @SuperScam @BananaFritter @hellokinsella

Bring on the next project..

Glenn Beck – The Onion Just NAILED Him!

News as entertainment. Entertainment as news. Confusing?

I read recently that teens were getting their “real news” from satirical TV news programmes like The Daily Show with Jon Stewart.

Plus – once you begin to understand how media ‘makes the news’ you can’t watch it in the same way. Brass Eye and Screenwipe spring to mind as the main influencers for me.

News media doesn’t help itself when it gives airtime to weirdo dangerous pinheads like Glenn Beck. However, The Onion just smashed him into a million tiny pieces. I love The Onion.

The Onion – Victim In Fatal Car Accident Tragically Not Glenn Beck

Victim In Fatal Car Accident Tragically Not Glenn Beck

If you don’t know who Glenn Beck is then watch this clip from Screenwipe by Charlie Brooker. It’s a brilliant bit of television that should be required viewing for all media students.

Beck gets going at 7m 45s.

Charlie Brooker on British and American TV News

Can you believe this guy’s got a primetime slot on a major network?

Ringing alarm bells yet?

Google Maps Trike View

You’re all familiar with Google Street View and the camera-topped Google Car – but what about all of the interesting places inaccessible to cars?

Enter the Google Trike, which started as a project by Daniel Ratner, a Senior Mechanical Engineer on the Street View team:

“I began thinking about building a bicycle-based Street View system after realizing how many interesting places around the world – ranging from historic landmarks to beautiful trails to shopping districts – aren’t accessible by car,” says Dan. “When I’m riding the trike, so many people come up to me and ask where it’s off to next or how they can get imagery of their favorite spot, so I can’t wait to see what our users come up with.”

Google’s Street View trike lets us take pictures of places that are not accessible by car.

Tell us where to send the trike next at http://www.google.com/trike

onedotzero interactive festival identity

It was the onedotzero / glue london party in Shoreditch last night.

One of the highlights (apart from the Microsoft Surface table and free bar) was a chance to see the visual identity and interactive installation for the upcoming “adventures in motion” event this September at BFI Southbank.

Wieden+Kennedy worked with Karsten Schmidt (aka Toxi) on a Processing application that collects conversations around onedotzero from the web (Twitter, Flickr, Vimeo, Facebook and blogs) and generates the identity.


onedotzero interactive festival identity – preview from onedotzero on Vimeo.

Using the Nokia N900 people at the party could control the live conversations behind the identity – twisting, turning and feeding the aggregated words to help build our first living, breathing onedotzero identity.

A big slap on the back goes to Sermad for getting the installation working in the nick of time, and a happy 10th birthday to my gang at glue London. Tuesday night drinking; bet there were a few sore heads in the office today.

Here’s a bit about onedotzero_cascade which glue also supported.


onedotzero_cascade_09 from onedotzero on Vimeo.

I’m on holiday now for 2 weeks. No updates until I’m back. Cya…