I watched a great episode of Horizon last night – asking the biggest questions that science can possibly ask: where did everything in our universe come from? How did it all begin? What followed was a dizzying exploration of cosmic bounces, rips and multiple universes. If you like that sort of stuff I can’t recommend it highly enough. Proper BBC documentary film making.
BBC iPlayer – Horizon: 2010-2011: What Happened Before the Big Bang?
If you want more, here are a couple of others that might help blow your fragile mind fully apart.
Powers of Ten
Imagining the Tenth Dimension
The invisible ebb and flow of magnetic fields.
Magnetic Movie from Semiconductor on Vimeo.
Are we observing a series of scientific experiments, the universe in flux, or a documentary of a fictional world? The secret lives of invisible magnetic fields are revealed as chaotic ever-changing geometries.
Read lots more about it here:
An Animate Projects commission for Channel 4 in association with Arts Council England.
Aurora Borealis in a laboratory. Fascinating.
A bit off-topic from my usual stuff, but watch this and think about what it means for a while.
At least 100 billion galaxies, each with something like 100 billion stars. That’s 10 thousand billion billion stars (10 sextillion).
via Make: Online
Will extremely large databases of information, starting at the petabyte level, change how we learn? It may turn out that tremendously large volumes of data are sufficient to skip the theory entirely and still make a predicted observation.
Google was one of the first to notice this. Take Google’s spell checker, for instance. When you misspell a word while googling, Google suggests the proper spelling. How does it know? How does it predict the correctly spelled word? It is not because it has a theory of good spelling, or has mastered spelling rules. In fact, Google knows nothing about spelling rules at all.
Instead, Google operates on a very large dataset of observations which show that, for any given spelling of a word, x number of people say “yes” when asked if they meant to spell word “y.” Google’s spelling engine consists entirely of these datapoints, rather than any notion of what correct English spelling is. That is why the same system can correct spelling in any language.
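To make the idea concrete, here is a minimal sketch of that purely data-driven approach. Everything in it is hypothetical — the observation log, the function names — and the real system works at vastly larger scale, but the principle is the same: no spelling rules, just counts of what people confirmed they meant.

```python
from collections import Counter, defaultdict

# Hypothetical log of (typed, confirmed-intended) pairs. In reality this
# would be billions of query-refinement observations, not five tuples.
observations = [
    ("teh", "the"), ("teh", "the"), ("teh", "tea"),
    ("recieve", "receive"), ("recieve", "receive"),
]

# Count how often each intended spelling was confirmed for each typed string.
corrections = defaultdict(Counter)
for typed, intended in observations:
    corrections[typed][intended] += 1

def suggest(typed):
    """Return the most frequently confirmed spelling, or the input unchanged.

    Note there is no theory of spelling anywhere in here -- only frequency.
    """
    if typed in corrections:
        return corrections[typed].most_common(1)[0][0]
    return typed

print(suggest("teh"))      # "the"
print(suggest("recieve"))  # "receive"
```

Because the engine is just a frequency table, feeding it observations from French or Chinese queries would make it correct French or Chinese spelling with no change to the code.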
In fact, Google uses the same philosophy of learning via massive data for its translation programs. It can translate from English to French, or German to Chinese, by matching up huge datasets of human-translated material. For instance, Google trained its French/English translation engine by feeding it Canadian documents, which are often released in both English and French versions. The Googlers have no theory of language (especially not of French) and no AI translator. Instead they have zillions of datapoints which, in aggregate, link “this to that” from one language to another.
The Google Way of Science
The End of Theory: The Data Deluge Makes the Scientific Method Obsolete