The build of the We Like Small iPod wall

http://vimeo.com/13404489

The iPod Wall from welikesmall on Vimeo.

We Like Small built a wall out of 20 iPods, plus an app that displays photos from a library either randomly across all the screens or as one large image split between them. Future releases will allow all sorts of external user interaction.
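
For the split-image mode, the basic trick is presumably just tiling one big image across the grid. Here’s a minimal sketch of that slicing with Pillow, assuming a 4×5 arrangement (the layout and filenames are my guesses, not We Like Small’s code):

```python
# Sketch: slice one large photo into tiles for a grid of screens.
# A 4x5 arrangement of the 20 devices is assumed, not confirmed.
from PIL import Image

COLS, ROWS = 4, 5

def slice_for_wall(path):
    img = Image.open(path)
    tile_w, tile_h = img.width // COLS, img.height // ROWS
    for row in range(ROWS):
        for col in range(COLS):
            box = (col * tile_w, row * tile_h,
                   (col + 1) * tile_w, (row + 1) * tile_h)
            img.crop(box).save(f"tile_{row}_{col}.png")  # one file per screen

slice_for_wall("photo.png")
```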

Plenty of blurb on the Vimeo site.

They’ve got a nice website too – http://www.welikesmall.com

New distorted ads from Sony + Honda

A couple of new pieces of work from 2 agencies that I really admire. Can’t decide about these. What do you think?

Sony 3D by Anomaly

Credits:

Agency: Anomaly, London
Creative Director: Mike Byrne
Director: Frank Budgen
Production: Gorgeous Enterprises
Producer: Rupert Smythe

Honda CR-Z by Wieden + Kennedy

Credits:

Agency: Wieden + Kennedy London
Creatives: Sam Heath, Chris Groom
Production company: Gorgeous Enterprises
Director: Frank Budgen
Post: The Mill

Style over substance? Technique over message? Or fuck it, they’re interesting, people will notice them and if it grabs their attention job done?

Both directed by Frank Budgen.

Would love to know your thoughts.

Help raise money for the Make-A-Wish Foundation this Christmas

Send a Xmas wish with the glue London 2009 Xmas card – and we’ll donate to the Make-A-Wish Foundation. Just share your wish on Twitter with the hashtag #tweetawish – and we’ll do the rest.

[Images: xmas-home.png, xmas-scene.png, xmas-twitter.png]

We’ve partnered with Titan Outdoor, so all your festive wishes are being shown live in Liverpool Street station. There’s also some festive tracking on the site for things like Reindeer, Turkey, Family and other Christmas fun.

http://www.tweetawish.co.uk
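
For the curious, collecting wishes by hashtag is the easy bit. A minimal sketch against the public Twitter search API of the day (endpoint and fields as publicly documented at the time; not our production code):

```python
# Sketch: poll the 2009-era public Twitter search API for #tweetawish.
# Endpoint and response fields are from that (since retired) API;
# this is not our production code.
import json
import urllib.request

def fetch_wishes(hashtag="tweetawish"):
    url = "http://search.twitter.com/search.json?q=%23" + hashtag
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # each result carried the author handle and the tweet text
    return [(r["from_user"], r["text"]) for r in data["results"]]

for user, wish in fetch_wishes():
    print(user + ": " + wish)
```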

Advertising trends explained… via Toshiba and Grey London

Toshiba Space Chair Project by Grey London

YouTube link.

Escape Vehicle no.6 (chair in space), Simon Faithfull, 2004, 4min extract

YouTube link.

Looks like it all kicked off on the “Making Of..” page – then this appeared:

Toshiba’s new Space Chair ad was inspired by a subculture of scientists and artists who send objects to the edge of space using weather balloons. Grey London collaborated with a number of talented individuals, including British artist Simon Faithfull, to re-create the concept of launching a generic chair into space and, by using their own HD cameras, to demonstrate how Toshiba technology can take something ordinary and make it extraordinary.

That’s proper PR ad-speak for a YouTube forum, Team Tosh. And I love the carefully worded, slightly ass-covering formal tone as well. Bonus points for the single post and closed conversation. In and out. Boom!!

Our Path To Truly Rich, Personalised Video Experiences

Dom wrote this feature for the 1st anniversary of the rebranded Revolution magazine. This is a copy+paste of the expanded version posted on his blog.

It gives you a glimpse into some of the projects I’ve worked on at glue, and the technologies we’re looking into at the moment.

Little did I know it at the time, but a project for Mars’ sponsorship of Euro 2006 was the catalyst for a new approach to personalised video content here at glue.

What we did was crude and simple: we allowed people to create a fan by choosing a head, body and hands. These individual assets existed as PNGs on the server and, depending on what was chosen, a JPEG was created using ImageMagick. Not thinking too much more about it, we moved on to the next project.
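
For flavour, the compositing amounted to layering the chosen PNGs and flattening to JPEG. A minimal sketch of the idea, shelling out to ImageMagick’s convert (filenames hypothetical, and the original ran server-side rather than like this):

```python
# Sketch: flatten the chosen PNG layers into one JPEG with ImageMagick.
# Filenames are hypothetical; the original ran server-side on submit.
import subprocess

def build_fan(head, body, hands, out="fan.jpg"):
    subprocess.run(
        ["convert",
         body,                 # bottom layer
         head, "-composite",   # layer the chosen head on top
         hands, "-composite",  # then the hands
         "-quality", "90", out],
        check=True)

build_fan("heads/head_03.png", "bodies/body_01.png", "hands/hands_02.png")
```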

A year later our Get The Message recruitment campaign for the Royal Navy was born:

[Image: navy.jpg]

We quickly realised that the audience the Navy wanted to recruit weren’t exclusively people sat behind PCs all day. In fact the bulk of them weren’t. For this audience the only channel available at real scale was mobile.

The problem was we’d become experts in interactive video using Flash, but Flash wasn’t (and broadly still isn’t) compatible with many handsets. The file format of choice was, and still is, MPEG video, so we needed to replicate the browser experience using it.

We scratched our heads and fairly quickly came round to the idea that if we could create individual JPEGs on the fly, stitching them together would create video. So that’s exactly what we did – this time combining ImageMagick with FFmpeg.
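
In essence the pipeline looked like this (a simplified sketch, not the production code; the text overlay, paths and frame rate are assumptions):

```python
# Sketch: personalise each frame with ImageMagick, then stitch the
# numbered JPEGs into an MPEG with FFmpeg. Paths, the text overlay
# and the frame rate are all assumptions.
import subprocess

def personalise_frames(name, count):
    for i in range(count):
        subprocess.run(
            ["convert", f"base/frame_{i:04d}.jpg",
             "-gravity", "south", "-annotate", "0", name,  # burn the name in
             f"out/frame_{i:04d}.jpg"],
            check=True)

def encode(fps=25, out="message.mpg"):
    subprocess.run(
        ["ffmpeg", "-y", "-r", str(fps),
         "-i", "out/frame_%04d.jpg",  # numbered frame sequence
         out],
        check=True)

personalise_frames("DOM", 250)  # 10 seconds at 25fps
encode()
```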

The video message is delivered as an SMS. The recipient downloads and watches the video, and can also respond directly from the handset:

http://vimeo.com/7395885

Get the message mobile video from Dom O’Brien on Vimeo.

At the time this was a first and we all felt pretty happy and gave ourselves a slap on the back like only the ad industry can. But almost naively, and for a second time, we’d stumbled on the door to a much bigger opportunity:

Replicating the Flash experience had fulfilled the requirements of this project, but we soon recognised that by automating motion-graphics or 3D packages it’s immediately possible to generate video without creative limits.

Enter DYNAMIC VIDEO (a phrase we’ve bandied about the agency for a few years now that REALLY needs a better name…)

Whilst traditional video is shot with a camera and broadcast, dynamic video allows for content to be generated specific to the person watching it, at the moment of viewing.

To help understand this, think about the gaming world: a game is produced once, but each play-through is unique to the actions of the player. With dynamic video the same is now true for brand experiences.

Here’s one such example we created in 2008 for Bacardi using their existing endorsement of UK beatboxing champion Beardyman.

The project was initiated by the simple thought: ‘wouldn’t it be great if everyone could beatbox as well as Beardyman?’ And from there it grew.

It’s a simple upload-your-face mechanic, using Kofi Annan here for demo purposes:

http://vimeo.com/7393426

Bacardi Beatology from Dom O’Brien on Vimeo.

[Image: kofi_annan-150x150.jpg]

There’s all sorts of complex things going on under the bonnet.

There’s proprietary image-recognition software interpreting the uploaded photo, identifying facial elements and stripping the face out from its background (no need for manual intervention).
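
That software is proprietary, so as a stand-in here’s the general shape of the detection-and-crop step, sketched with OpenCV’s stock face cascade (a minimal illustration only, nothing like the real thing):

```python
# Sketch: detect and crop the face from an upload using OpenCV's
# bundled Haar cascade - a stand-in for the proprietary software.
import cv2

def extract_face(path, out="face.png"):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    x, y, w, h = faces[0]  # take the first detection
    cv2.imwrite(out, img[y:y + h, x:x + w])

extract_face("upload.jpg")
```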

Then, using 3ds Max, the video is generated by mapping the face texture onto existing wireframe animations.
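
In practice that render has to run unattended for each user. A rough sketch of the orchestration (the scene name, texture path and 3dsmaxcmd invocation are all illustrative assumptions, not our actual setup):

```python
# Sketch: drop the extracted face into the scene's texture slot and
# batch-render. The scene name, texture path and 3dsmaxcmd invocation
# are illustrative assumptions, not our production setup.
import shutil
import subprocess

def render_personalised(face_png, scene="beatbox.max"):
    # the scene's face material points at this fixed texture path
    shutil.copy(face_png, "textures/face.png")
    # non-interactive render using the output settings saved in the scene
    subprocess.run(["3dsmaxcmd", scene], check=True)
    # the rendered frame sequence then goes through the same FFmpeg
    # stitching step shown earlier

render_personalised("face.png")
```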

This technique has 2 immediate benefits:

1. Visually, pretty much anything is possible (at least anything that’s possible within motion-graphics or 3D applications)

2. The generated file is the ubiquitous MPEG – enabling distribution across channels without the need to re-engineer

However the technique is fairly processor-intensive, taking around 20 seconds per video to generate. That gives a throughput of 4,320 videos per processor per day. Whilst this is OK on a smallish campaign, for larger ones the only real option is to throw more hardware at it, which is costly and only becomes viable once a client really values what’s being achieved creatively.
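
The capacity sums are easy enough to run when scoping hardware (back-of-envelope only, using the 20-second figure above):

```python
# Back-of-envelope capacity maths from the figures above:
# 86,400 seconds per day / 20 seconds per video = 4,320 videos/processor/day
SECONDS_PER_VIDEO = 20

def processors_needed(videos_per_day):
    per_processor = 24 * 60 * 60 // SECONDS_PER_VIDEO  # 4,320
    return -(-videos_per_day // per_processor)  # round up

print(processors_needed(4_320))    # 1
print(processors_needed(100_000))  # 24 - larger campaigns need a farm
```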

The emergence of cloud computing farms, and the rendering capacity they offer, goes some way to solving this, but it’s early days. These cloud farms not only offer scalable rendering; with the proliferation of smaller devices in all our pockets, they also enable richer experiences to be created remotely and viewed on the device.

Another sector dabbling with cloud farms in this way is the handful of game-streaming companies that have recently emerged, which remove the need for a console by rendering content remotely and piping it into the home over your broadband. (Can our broadband really cope with realtime 1080p video content? Is that partly why these services haven’t yet taken off?) Definitely one to keep an eye on.

As is the recent emergence of open-source, video-specific rendering farms like PandaStream.

Or potentially the answer is not to save the generated video to file at all, but to construct it dynamically within the stream, as done here:

[Image: audicar.jpg]

It’s a neat solution, but the SDK means the production process is alien to existing skill sets in the short term.

So, generally speaking, it’s fair to say there’s lots of trial and error needed. And I can’t help but notice the aforementioned gaming industry is set on a collision course with the digital industry – both attacking a similar problem from different angles. This is a most exciting prospect. (Here’s the closest example of the two together I’ve seen to date).

In the meantime it would be great to think that the Adobes of this world – or, maybe more likely, the hardware guys like Nvidia or AMD – will move into this space and create a tool to ease the production process. Until they do, these experiences will be built by ingeniously combining niche technologies to the needs of the project.

It therefore becomes apparent that, to stay ahead of competitors, R&D can’t be undervalued. The same goes for having the time and freedom to explore, trial and learn new technologies and techniques on paid-for work. As we’ve seen here, bits of work that may not seem like much at the time can prove invaluable later, re-emerging as a wholly different entity.

So collectively we (the industry) have come to a juncture where new creative opportunities exist. This brings with it the need for internal re-education, both in how we approach briefs conceptually and in how we capture assets in a way that enables them to be manipulated with these techniques.

And with an eye on the future: glue recently ventured into the world of TV. I for one am really excited at the prospect of the day the archaic TV broadcasting infrastructure is modernised and we can apply our digital know-how to the currently stagnant format. It defies belief that everything is still run from Betamax. Admittedly I don’t know the setup intimately, but I’d have thought all it needs is for systems to be driven by an internet-enabled computer – which happens on occasion, but not enough.

Here’s another, more dynamic example, which the clever boys and girls at MiniVegas negotiated as a special short-term deal for S4C a few years ago:

http://vimeo.com/7396307

S4C ident by Minivegas from Dom O’Brien on Vimeo.

We’re undoubtedly in exciting times, and hats off to the team here driving all of this forward: @SuperScam @BananaFritter @hellokinsella

Bring on the next project…

Robots Wear Nike Too. More Impressive Spec Work

Big Lazy Robot Visual Effects created this impressive ‘spec commercial inspired by Nike’ in 1 month.

YouTube link.

We’re starting to see more and more great spec / off-roster / unofficial / unpaid work for some brands.

All this just as Coke are looking into paying agencies purely on a results-based model.

Will this affect unknown outfits who make big shockwaves with spec work? How does this fit into the future of brand comms? Gah! What does it all mean?

I’m gonna see if one of our team of crack planners can make sense of it. It’s not one for me on a Saturday night.

What do you think?

via AdWeek