|Photo credit: Yoram Reshef for Stratasys|
|Photo credit: Jonathan Williams|
Collaborative, Crowd-Sourced Symphonies
Led by Professor Tod Machover, the Opera of the Future group created new software tools to crowdsource collaborative music compositions in Toronto and Edinburgh.
Toronto Symphony: Concerto for Composer and City
Tod Machover, Peter Torpey, Akito van Troyer, and Ben Bloomberg used Hyperscore and the Social Computing group’s DOG programming language to crowdsource and compose a symphony about Toronto.
“Festival City” at the Edinburgh International Festival
|Photo source: http://www.eif.co.uk/blog/festival-city-project|
Akito van Troyer and Tod Machover created Constellation and Cauldron to gather and remix music samples contributed by citizens and lovers of Edinburgh.
Director’s Fellows Program
E14 Fund
The E14 Fund is an independent investment fund that gives recent Media Lab alumni a “six-month runway” to entrepreneurship, in the form of startup support that includes a stipend, legal advice, meetings with venture capitalists, and more. The program also returns a portion of the profits from successful spinoffs to MIT.
Ed Boyden Awarded the Brain Prize
|Photo credit: Dominick Reuter|
A new technique that converts an ordinary camera into a light-field camera, from Kshitij Marwah, Gordon Wetzstein, Yosuke Bando, and Professor Ramesh Raskar of the Camera Culture group. Focii is a light-field camera attachment and software tool that can produce a full, 20-megapixel multi-perspective 3D image from a single exposure of a 20-megapixel sensor.
A carving tool designed by postdoc Amit Zoran of the Responsive Environments group. FreeD allows the user to control the carving process while aided by a computer guidance system that is preprogrammed with the desired three-dimensional shape.
An interactive dynamic display table from Sean Follmer, Daniel Leithinger, and Professor Hiroshi Ishii of the Tangible Media group. inFORM is a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way. inFORM can also interact with the physical world around it, for example moving objects on the table’s surface.
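A shape display like inFORM can be thought of as a low-resolution physical heightmap: 3D content is quantized onto a grid of motorized pins. The sketch below illustrates that core idea only; the grid size, travel range, and function names are illustrative and are not inFORM’s actual specifications or code.

```python
# Illustrative sketch: quantizing a normalized heightmap onto a pin grid,
# the basic idea behind a dynamic shape display. GRID and MAX_TRAVEL are
# made-up values, not inFORM's real pin count or actuator range.

GRID = 8          # pins per side (illustrative)
MAX_TRAVEL = 100  # pin travel in arbitrary units (illustrative)

def to_pin_heights(heightmap):
    """Clamp normalized heights (0.0-1.0) and quantize them to pin positions."""
    return [[round(max(0.0, min(1.0, h)) * MAX_TRAVEL) for h in row]
            for row in heightmap]

# Render a simple dome shape on the grid.
center = (GRID - 1) / 2
dome = [[max(0.0, 1 - ((x - center) ** 2 + (y - center) ** 2) / center ** 2)
         for x in range(GRID)] for y in range(GRID)]
pins = to_pin_heights(dome)
```

The same mapping runs in reverse for inFORM’s other direction of interaction: sensing pin displacement turns physical pushes back into digital input.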
Joe Jacobson Wins the Exner Medal
|Photo credit: Shuguang Zhang|
An automated conversation coach that helps with interview skills, social interactions, and conversational skills from Ehsan Hoque of the Affective Computing group. MACH, or My Automated Conversation CoacH, is a software program that simulates face-to-face interactions in different social and professional contexts, and offers feedback to improve performance.
New Faculty: Kevin Slavin and Sputniko!
The Privacy Bounds of Human Mobility
Yves-Alexandre de Montjoye of the Human Dynamics group and Professor César Hidalgo of the Macro Connections group used 15 months of data from 1.5 million people to show that four spatio-temporal points (approximate places and times) are enough to identify 95 percent of individuals in a mobility database. The findings have since been used to understand the NSA’s use of metadata and have been cited in numerous media reports and editorials. Their paper, “Unique in the Crowd: The Privacy Bounds of Human Mobility,” was published in Scientific Reports.
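The identification argument can be illustrated with a toy simulation (this is not the paper’s method or data; the trace sizes and parameters below are invented): knowing a few of a person’s (time, place) points, count how often those points match exactly one trace in the dataset.

```python
# Toy illustration of mobility-trace uniqueness: with synthetic traces,
# measure how often k known (time slot, cell) points pin down one user.
# All parameters here are illustrative, not from the published study.
import random

random.seed(0)
N_USERS, N_SLOTS, N_CELLS, TRACE_LEN = 1000, 50, 30, 12

# Each user's trace is a set of (time slot, antenna cell) points.
traces = []
for _ in range(N_USERS):
    slots = random.sample(range(N_SLOTS), TRACE_LEN)
    traces.append({(t, random.randrange(N_CELLS)) for t in slots})

def unique_fraction(k, trials=500):
    """Fraction of trials in which k points from a user's trace match only that user."""
    hits = 0
    for _ in range(trials):
        target = random.choice(traces)
        probe = set(random.sample(sorted(target), k))
        matches = sum(1 for tr in traces if probe <= tr)
        hits += (matches == 1)
    return hits / trials

for k in (1, 2, 3, 4):
    print(k, unique_fraction(k))
```

Even in this crude model, uniqueness climbs steeply with the number of known points, which is the intuition behind the paper’s finding that a handful of approximate points suffices to single out most individuals.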
Science Fiction to Science Fabrication
|Photo credit: Guillermo Bernal|
The Silk Pavilion
An exploration of the relationship between digital and biological fabrication from Professor Neri Oxman, Jorge Duro-Royo, Carlos Gonzalez, Markus Kayser, and Jared Laucks of the Mediated Matter group. The Silk Pavilion comprises a dome of CNC-deposited silk threads; the researchers placed 6,500 silkworms at the bottom rim of the primary structure, which spun flat, non-woven silk patches across the gaps in the dome. The silkworms responded to spatial and environmental conditions such as the density of the existing silk threads and variations in temperature and sunlight: they migrated toward darker and denser areas, creating a unique pattern of spun silk across the dome.
|Photo credit: Simon Bruty|
What We Watch
A tool to examine who’s watching what, when, and where, all over the world, created by Ed Platt, Rahul Bhargava, and Ethan Zuckerman of the Center for Civic Media. What We Watch collects data from YouTube’s Trends Dashboard to determine what videos are popular in any of 61 countries at any given time; it then compares the video trends in one country to those in other countries.
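The country-to-country comparison can be sketched as a set-overlap measure between trending-video lists. This is a minimal illustration of the idea, not What We Watch’s actual code; the video IDs and the choice of Jaccard similarity are assumptions for the example.

```python
# Sketch of comparing trending-video overlap between countries.
# The data below is made up; the real tool reads YouTube's Trends Dashboard.

def trend_overlap(a, b):
    """Jaccard similarity of two countries' trending-video ID sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

trending = {
    "US": {"v1", "v2", "v3", "v4"},
    "CA": {"v2", "v3", "v4", "v5"},
    "JP": {"v1", "v6", "v7", "v8"},
}

for country in ("CA", "JP"):
    print("US vs", country, trend_overlap(trending["US"], trending[country]))
```

A high overlap score suggests two countries share a media sphere; a low score suggests largely separate viewing audiences, which is the kind of pattern the project surfaces.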