The New PUMA Fuseproject – Packaging as Branding

http://www.youtube-nocookie.com/v/vwRulz8hPKI&hl=en_GB&fs=1&rel=0&hd=1

Really nice work by Yves Béhar for Puma to reduce packaging waste.

“Rethinking the shoebox is an incredibly complex problem, and the cost of cardboard and the printing waste are huge, given that 80M are shipped from China each year,” Béhar tells FastCompany.com. “Cargo holds in the ships can reach temperatures of 110 degrees for weeks on end, so packaging becomes an enormous problem. This solution protects the shoes, and helps stores to stock them, while saving huge costs in materials.”

The impact: Puma estimates that the bag will slash water, energy, and fuel consumption during manufacturing alone by 60% – in one year, that comes to a savings of 8,500 tons of paper, 20 million megajoules of electricity, 264,000 gallons of fuel, and 264 gallons of water. Ditching the plastic bags will save 275 tons of plastic, and the lighter shipping weight will save another 132,000 gallons of diesel.

There’s no doubting the green credentials, and I was gonna wax more lyrical. But then I read the comments on YouTube. Check out these two gems:

DJHELLO 21 months? To replace a box with a bag? I think Yves saw you coming

FCule 21 FUCKING MONTHS?!?!? Are you f kidding me.. jesus christ

Genius.

via

Our Path To Truly Rich, Personalised Video Experiences

Dom wrote this feature for the 1st anniversary of the rebranded Revolution magazine. This is a copy+paste of the expanded version posted on his blog.

It gives you a glimpse into some of the projects I’ve worked on at glue, and the technologies we’re looking into at the moment.

Little did I know it at the time, but a project for Mars’ sponsorship of Euro 2006 was the catalyst for a new approach to personalised video content here at glue.

What we did was crude and simple: we allowed people to create a fan by choosing a head, body and hands. These individual assets existed as PNGs on the server and, depending on what was chosen, a JPEG was composited together using ImageMagick. Without thinking too much more about it, we moved on to the next project.
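For anyone curious what that compositing step looks like in practice, here’s a minimal sketch of the idea, driving ImageMagick from Python. The filenames, canvas size and layer order are illustrative assumptions, not the actual campaign assets:

    # Sketch only: layer the chosen PNG parts onto a canvas and flatten
    # them to a single JPEG. Asset names and sizes are made up for
    # illustration; the original used the same principle server-side.
    import subprocess

    def build_fan(head, body, hands, out_path="fan.jpg"):
        subprocess.run([
            "convert",
            "-size", "300x400", "xc:white",      # blank canvas
            f"body_{body}.png", "-composite",    # body layer first
            f"head_{head}.png", "-composite",    # then the head
            f"hands_{hands}.png", "-composite",  # hands on top
            "-quality", "85",
            out_path,
        ], check=True)
        return out_path

    build_fan(head=2, body=5, hands=1)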

A year later our Get The Message recruitment campaign for the Royal Navy was born:

[Image: Royal Navy ‘Get The Message’ campaign]

We quickly realised that the audience the Navy was looking to recruit weren’t exclusively sat behind PCs all day. In fact, the bulk of them weren’t. For this audience the only real channel available at scale was mobile.

The problem was that we’d become experts in interactive video using Flash, but Flash wasn’t (and broadly still isn’t) compatible with many handsets. The file format of choice was, and is, MPEG video, so we needed to replicate the browser experience using it.

We scratched our heads and fairly quickly came round to the idea that if we could create individual JPEGs on the fly, stitching them together would create video. So that’s exactly what we did – this time combining ImageMagick with FFmpeg.
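As a rough illustration of the principle (not the production pipeline itself), the trick boils down to rendering a numbered sequence of personalised frames with ImageMagick and handing them to FFmpeg to encode. The frame rate, file names and text overlay below are assumptions for the sketch:

    # Sketch of the frame-stitching idea: render a numbered JPEG sequence,
    # then let FFmpeg encode it as an MPEG file playable on handsets.
    import subprocess

    def render_frame(i, name, out_dir="frames"):
        # Stand-in for the per-frame personalisation, e.g. compositing
        # the recipient's name over a background plate.
        subprocess.run([
            "convert", "background.png",
            "-gravity", "center", "-pointsize", "36",
            "-annotate", "0", f"{name} - frame {i}",
            f"{out_dir}/frame_{i:04d}.jpg",
        ], check=True)

    def stitch(pattern="frames/frame_%04d.jpg", out_path="message.mpg"):
        # FFmpeg reads the image sequence and writes the MPEG video
        subprocess.run([
            "ffmpeg", "-y",
            "-framerate", "25",
            "-i", pattern,
            out_path,
        ], check=True)

    for i in range(1, 126):          # five seconds at 25 fps
        render_frame(i, "Dom")
    stitch()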

The video message is delivered as an SMS. The recipient downloads and watches the video, and also has the ability to respond directly on the handset:

http://vimeo.com/moogaloop.swf?clip_id=7395885&server=vimeo.com&show_title=1&show_byline=1&show_portrait=1&color=00adef&fullscreen=1

Get the message mobile video from Dom O’Brien on Vimeo.

At the time this was a first, and we all felt pretty happy and gave ourselves a slap on the back like only the ad industry can. But almost naively, and for a second time, we’d stumbled upon the door to a much bigger opportunity:

Replicating the Flash experience had fulfilled the requirements of this project, but we soon recognised that by automating motion graphics or 3D packages it’s immediately possible to generate video without creative limits.

Enter DYNAMIC VIDEO (a phrase we’ve bandied about the agency for a few years now that REALLY needs a better name…)

Whilst traditional video is shot with a camera and broadcast, dynamic video allows for content to be generated specific to the person watching it, at the moment of viewing.

To help understand this complex concept, think about the gaming world, where a game is produced once but each game-play is unique to the actions of the player. With dynamic video the same is now true for brand experiences.

Here’s one such example we created in 2008 for Bacardi using their existing endorsement of UK beatboxing champion Beardyman.

The project was initiated by the simple thought: ‘wouldn’t it be great if everyone could beatbox as well as Beardyman?’ And from there the project was born.

It’s a simple ‘upload your face’ mechanic, using Kofi Annan here for the purposes of the demo:

http://vimeo.com/moogaloop.swf?clip_id=7393426&server=vimeo.com&show_title=1&show_byline=1&show_portrait=1&color=00adef&fullscreen=1

Bacardi Beatology from Dom O’Brien on Vimeo.

[Image: the uploaded Kofi Annan photo]

There’s all sorts of complex things going on under the bonnet.

There’s proprietary image recognition software interpreting the uploaded photo, identifying the facial elements and stripping them out from the background (no need for manual intervention).
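The recognition software itself is proprietary, so purely as an illustration of the shape of that step, here’s how the face-finding part might look using OpenCV’s stock Haar-cascade detector (an assumption on my part, not what was actually used):

    # Illustration only: find the face in the uploaded photo and crop it
    # out as a texture for the 3D stage. The real project used proprietary
    # software that also strips out the background automatically.
    import cv2

    def extract_face_texture(photo_path, out_path="face_texture.png"):
        img = cv2.imread(photo_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            raise ValueError("no face found in uploaded photo")
        x, y, w, h = faces[0]          # take the first detection
        cv2.imwrite(out_path, img[y:y + h, x:x + w])
        return out_path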

Then, using 3ds Max, the video is generated by mapping the face texture onto existing wireframe animations.

This technique has 2 immediate benefits:

1. Visually, pretty much anything is possible (at least anything that’s possible within motion graphics or 3D applications)

2. The generated file is the ubiquitous MPEG, enabling distribution across channels without the need to re-engineer

However, the technique is fairly processor-intensive, taking around 20 seconds per person to generate. That gives a throughput of 4,320 videos per processor per day. Whilst this is OK for a smallish campaign, for larger ones the only real option is to throw more hardware at it, which can be costly and only becomes viable once a client really values what’s being achieved creatively.
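The sums behind that figure are simple enough; here’s the back-of-envelope version, with the campaign size below purely as an example:

    # Back-of-envelope capacity maths behind the 4,320-a-day figure.
    SECONDS_PER_DAY = 24 * 60 * 60        # 86,400
    RENDER_TIME = 20                      # seconds per personalised video

    videos_per_processor_per_day = SECONDS_PER_DAY // RENDER_TIME   # 4,320

    def processors_needed(videos_per_day):
        # Round up: any overflow needs another processor
        return -(-videos_per_day // videos_per_processor_per_day)

    print(processors_needed(100_000))     # ~24 processors for 100k videos a day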

The emergence of cloud computing farms, and the rendering capacity they offer, to an extent solves this issue, but it’s early days. These cloud farms not only offer scalable rendering capabilities; with the proliferation of smaller devices in all our pockets, they also enable richer experiences to be created remotely and viewed on the device.

Another sector dabbling with cloud farms in this way is the handful of virtual-rendering games companies that have recently emerged, which negate the need for a console by rendering content remotely and bringing it into the home via your broadband. (Can our broadband really cope with realtime 1080p video content? Or is this partly the reason these services haven’t yet taken off?) Definitely one to keep an eye on.

As is the recent emergence of open-source, video-specific rendering farms like PandaStream.

Or potentially the answer lies not in saving the generated video to a file, but in dynamically constructing the video within the stream, as done here:

[Image: Audi car example]

It’s a neat solution, but the SDK means the production process is, in the short term, alien to existing skill sets.
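To make the in-stream idea concrete (this is my own sketch, not the SDK used above), the principle is to skip the intermediate file entirely and pipe generated frames straight into an encoder whose output is the stream the viewer receives:

    # Sketch: push generated frames straight into FFmpeg over stdin and
    # encode on the fly, so no finished video file ever touches disk.
    # Resolution, frame rate and the dummy frames are illustrative.
    import subprocess
    import numpy as np

    WIDTH, HEIGHT, FPS = 640, 360, 25

    encoder = subprocess.Popen([
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
        "-i", "-",                      # raw frames arrive on stdin
        "-f", "mpegts", "output.ts",    # in practice, a network destination
    ], stdin=subprocess.PIPE)

    for i in range(FPS * 5):            # five seconds of generated video
        # Stand-in for the personalised render of each frame
        frame = np.full((HEIGHT, WIDTH, 3), (i % 256, 64, 128), dtype=np.uint8)
        encoder.stdin.write(frame.tobytes())

    encoder.stdin.close()
    encoder.wait()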

So, generally speaking, it would be fair to say there’s a lot of trial and error needed. And I can’t help but notice that the aforementioned gaming industry is set on a collision course with the digital industry – both attacking a similar problem, but from different angles. This is a most exciting prospect. (Here’s the closest example of the two together I’ve seen to date.)

In the meantime it would be great to think that the Adobes of this world, or maybe more likely the hardware guys like Nvidia or AMD, will move into this space and create a tool to ease the production process. Until they do, these experiences will be built through the ingenuity of combining niche technologies to meet the needs of the project.

It therefore becomes apparent that, to stay ahead of competitors, R&D can’t be undervalued. The same goes for having the time and freedom to explore, trial and learn new technologies and techniques on paid-for work. As we’ve shown here, bits of work that at the time may not seem like much may in the future prove invaluable by re-emerging as a wholly different entity.

So collectively we (the industry) have come to a juncture where new creative opportunities exist. With this comes the need for internal re-education, both in how we approach briefs conceptually and in how we capture assets in new ways that enable them to be manipulated with these techniques.

And with an eye on the future: glue recently ventured into the world of TV. I for one am really excited at the prospect of the day the archaic TV broadcasting infrastructure is modernised and we can apply our digital know-how to the currently stagnant format. It defies belief that everything is still run from Betamax. Admittedly I don’t know the setup intimately, but I’d have thought all it needs is for systems to be driven by an internet-enabled computer – which happens on occasion, but not enough.

Here’s another, more dynamic example, which the clever boys and girls at MiniVegas were able to negotiate as a special short-term deal for S4C a few years ago:

http://vimeo.com/moogaloop.swf?clip_id=7396307&server=vimeo.com&show_title=1&show_byline=1&show_portrait=1&color=00adef&fullscreen=1

S4C ident by Minivegas from Dom O’Brien on Vimeo.

We’re undoubtedly in exciting times, and hats off to the team here driving all of this forward: @SuperScam @BananaFritter @hellokinsella

Bring on the next project..

Proof of Concept: The Brain To Brain Internet

YouTube link.

This is a brain-computer interface (BCI) experiment in which one person uses BCI to transmit a series of digits over the internet to another person, whose computer receives the digits and presents them to the second user by flashing an LED array. The encoded information is then extracted from the brain activity of the second user.

This shows true brain-to-brain activity. This is done as a proof of concept – to show that B2B *is* possible – which it is, as we show here.
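To give a feel for the decoding half (this is my own simplified sketch, not the researchers’ code), the digits can be thought of as LEDs flickering at two different rates, with an FFT of the second user’s EEG revealing which flicker they were attending to:

    # Simplified sketch of SSVEP-style decoding, assuming the two binary
    # digits are presented as LEDs flickering at 8 Hz and 13 Hz. Sample
    # rate and frequencies are assumptions for illustration.
    import numpy as np

    FS = 256                              # EEG sample rate (Hz)
    FREQ_FOR_0, FREQ_FOR_1 = 8.0, 13.0    # flicker rates for digits 0 and 1

    def decode_digit(eeg_segment):
        # Compare spectral power at the two candidate flicker frequencies
        spectrum = np.abs(np.fft.rfft(eeg_segment))
        freqs = np.fft.rfftfreq(len(eeg_segment), d=1.0 / FS)
        power_0 = spectrum[np.argmin(np.abs(freqs - FREQ_FOR_0))]
        power_1 = spectrum[np.argmin(np.abs(freqs - FREQ_FOR_1))]
        return 0 if power_0 > power_1 else 1

    # Fake a 4-second EEG segment dominated by the 13 Hz response
    t = np.arange(4 * FS) / FS
    fake_eeg = np.sin(2 * np.pi * 13.0 * t) + 0.5 * np.random.randn(len(t))
    print(decode_digit(fake_eeg))         # -> 1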

It doesn’t look particularly glamorous, but what they’ve done is pretty amazing.

via

Augmented Reality Texture Extraction Experiment

http://vimeo.com/moogaloop.swf?clip_id=6660264&server=vimeo.com&show_title=1&show_byline=1&show_portrait=1&color=ff0179&fullscreen=1

Augmented Reality Texture Extraction Experiment from Lee Felarca on Vimeo.

This is an AR-based experiment that enables the user to lift textures from real-world objects in live video and apply them to 3D objects that are overlaid on top of them.

Only box primitives are supported here, but the general idea could be extended to other types of 3D primitives or potentially even more complex objects with some clever image compositing and UV mapping.
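The core ‘lift a texture’ step boils down to a perspective warp: take the four corners of a visible face of the tracked box and unwrap that quad into a square image. Here’s a rough sketch of that idea in OpenCV (the demo itself is Flash-based, so this is an analogy rather than the author’s code):

    # Sketch: warp one quad face of the tracked box into a square texture
    # that can then be UV-mapped onto the overlaid 3D box.
    import cv2
    import numpy as np

    def lift_texture(frame, corners, size=256):
        # corners: four (x, y) points in the frame, ordered top-left,
        # top-right, bottom-right, bottom-left
        src = np.float32(corners)
        dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
        homography = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(frame, homography, (size, size))

    # e.g. texture = lift_texture(frame, [(120, 80), (320, 95), (300, 310), (110, 290)])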

See the blog post for more info, and a live version of the demo:
http://www.zeropointnine.com/blog/augmented-reality-texture-extraction-experiment/