Channel: MIT EECS

Enabling wireless virtual reality

November 14, 2016

Adam Conner-Simons | MIT News

Developed at MIT's Computer Science and Artificial Intelligence Laboratory, the “MoVR” system allows VR headsets to communicate without a cord.

photo of virtual reality device

A new cordless virtual reality device consists of two directional "phased-array" antennas, each less than half the size of a credit card. Future versions could be small enough for users to have several in a single room, enabling multi-player gameplay. Photo: MIT CSAIL


One of the limits of today’s virtual reality (VR) headsets is that they have to be tethered to computers in order to process data well enough to deliver high-resolution visuals. But being tethered by an HDMI cable reduces mobility and can even lead to users tripping over cords.

Fortunately, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently unveiled a prototype system called “MoVR” that allows gamers to use any VR headset wirelessly.

In tests, the team showed that MoVR can enable untethered communication at a rate of multiple Gbps, or billions of bits per second. The system uses special high-frequency radio signals called “millimeter waves” (mmWaves) that many experts think could someday help deliver blazingly-fast 5G smartphones.

“It’s very exciting to get a step closer to being able to deliver a high-resolution, wireless-VR experience,” says MIT professor Dina Katabi, whose research group has developed the technology. “The ability to use a cordless headset really deepens the immersive experience of virtual reality and opens up a range of other applications.”

Researchers tested the system on an HTC Vive but say that it can work with any headset. Katabi co-wrote a paper on the topic with PhD candidate Omid Abari, postdoc Dinesh Bharadia, and master’s student Austin Duffield. The team presented their findings last week at the ACM Workshop on Hot Topics in Networks (HotNets 2016) in Atlanta.

How it works

One issue with existing wireless technologies like WiFi is that they can’t sustain the multi-gigabit-per-second data rates that high-resolution VR requires.

“Replacing the HDMI cable with a wireless link is very challenging since we need to stream high-resolution multi-view video in real-time,” says Haitham Hassanieh, an assistant professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign who was not involved in the research. “This requires sustaining data rates of more than 6 Gbps while the user is moving and turning, which cannot be achieved by any of today's systems.”

Since VR platforms have to work in real-time, systems also can’t use compression to accommodate lower data rates. This has led companies to make some pretty awkward attempts at untethered VR, like stuffing the equivalent of a full PC in your backpack.
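
The data-rate gap Hassanieh describes is easy to see with back-of-the-envelope arithmetic. Using the HTC Vive's publicly listed display specs (2160×1200 combined resolution, 90 Hz refresh, 24-bit color) as a rough stand-in, the uncompressed video stream alone lands in the multi-Gbps range:

```python
# Back-of-envelope estimate of the raw video bandwidth a tethered VR
# headset consumes, using the HTC Vive's nominal display specs.

WIDTH, HEIGHT = 2160, 1200   # combined pixels across both eyes
REFRESH_HZ = 90              # frames per second
BITS_PER_PIXEL = 24          # 8 bits per RGB channel

bits_per_second = WIDTH * HEIGHT * REFRESH_HZ * BITS_PER_PIXEL
gbps = bits_per_second / 1e9

print(f"Uncompressed video stream: {gbps:.1f} Gbps")  # → 5.6 Gbps
```

That figure is for raw pixels only; headroom for head-tracking latency and multi-view rendering pushes the requirement higher, consistent with the 6 Gbps quoted above.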

The CSAIL team instead turned to mmWaves, which have promising applications for everything from high-speed Internet to cancer diagnosis. These high-frequency waves have one major downside, which is that they don’t work well with obstacles or reflections. If you want mmWaves to deliver constant connectivity for your VR game, you would need to always have a line of sight between transmitter and receiver. (The signal can be blocked even by just briefly moving your hand in front of the headset.)

To overcome this challenge, the team developed MoVR to act as a programmable mirror that detects the direction of the incoming mmWave signal and reconfigures itself to reflect it toward the receiver on the headset. MoVR can learn the correct signal direction to within two degrees, allowing it to correctly configure its angles.

“With a traditional mirror, light reflects off the mirror at the same angle as it arrives,” says Abari. “But with MoVR, angles can be specifically programmed so that the mirror receives the signal from the mmWave transmitter and reflects it towards the headset, regardless of its actual direction.”

Each MoVR device consists of two directional antennas that are each less than half the size of a credit card. The antennas use what are called “phased arrays” in order to focus signals into narrow beams that can be electronically steered at a timescale of microseconds.
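
The steering trick behind phased arrays can be sketched with the standard uniform-linear-array model: applying a linear phase ramp across the elements moves the array's peak response to a chosen angle. The element count and spacing below are illustrative, not MoVR's actual hardware parameters:

```python
import numpy as np

N = 8                      # antenna elements (illustrative)
d_over_lambda = 0.5        # half-wavelength element spacing
steer_deg = 30.0           # desired beam direction

n = np.arange(N)
# Per-element phase shifts that steer the main lobe to steer_deg
phases = -2 * np.pi * d_over_lambda * n * np.sin(np.radians(steer_deg))

# Evaluate the array factor (beam pattern) over all look angles
angles = np.linspace(-90, 90, 181)
af = np.abs([
    np.exp(1j * (2 * np.pi * d_over_lambda * n * np.sin(np.radians(a)) + phases)).sum()
    for a in angles
])

peak = angles[np.argmax(af)]
print(f"Beam peaks at {peak:.0f} degrees")  # → 30 degrees
```

Because the phase ramp is set electronically, re-pointing the beam is just a matter of rewriting `phases`, which is what makes microsecond-scale steering possible.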

Abari says that future versions of MoVR’s hardware could be as small as a smartphone, allowing for users to put several devices in a single room. This would enable multiple people to play a game at the same time without blocking each other’s signals.

Read this article on MIT News.



Teaching Hong Kong students to embrace computational thinking

November 15, 2016

School of Engineering | MIT News

Powered by MIT expertise and regional universities, CoolThink@JC will offer training to primary teachers and students at 32 schools.

photo of students

CoolThink@JC will target more than 16,500 students at 32 primary schools across the city of Hong Kong, offering tools and expertise to boost computational thinking abilities. Insights from the initiative, being done in collaboration with MIT and others, will eventually inform the development of curriculum for all Hong Kong teachers and students. Photo courtesy of the City University of Hong Kong.


CoolThink@JC, a four-year initiative of The Hong Kong Jockey Club Charities Trust, was launched today to empower the city’s primary school teachers and students with computational thinking skills, including coding.

Developed through a collaboration with MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the Education University of Hong Kong, and City University of Hong Kong, the initiative's eventual aim is to integrate computational thinking into all Hong Kong primary schools. Initially, CoolThink@JC will target over 16,500 students at 32 primary schools across the city.

Deputy Chair of The Hong Kong Jockey Club Anthony W. K. Chow said that CoolThink@JC was not only about coding and programming but also about empowering students to become problem-solvers and creators in the digital world. "The trust has encompassed innovation as one of the core value drivers in its charitable works to create more social impact, be it innovation in service models, application of technology, or development of innovation capacity in the community.”

CoolThink@JC will offer training to students in grades 4 through 6 and their teachers, and also provide orientation for parents. Areas of focus will include foundational programming concepts, problem solving skills, and creating mobile apps.

MIT has played a leading role in developing the two key technologies used in CoolThink@JC: Scratch and MIT App Inventor. Scratch is a visual programming language designed for novice programmers. MIT App Inventor is an intuitive introduction to programming and mobile computing.

Hal Abelson, the Class of 1922 Professor of Computer Science and Engineering in the Department of Electrical Engineering and Computer Science and a member of CSAIL, is MIT’s principal investigator for CoolThink@JC. The project is directed by CSAIL researcher Josh Sheldon.

“We see this as a great way to showcase the power of App Inventor and demonstrate our commitment to making the creative power of computing accessible to everyone,” says Abelson. “An experiment at this scale will be a real-life laboratory for technology-enriched primary education, not just a small demonstration.”

To maximize the potential education and research benefits and to ease adoption throughout the city and beyond, CoolThink@JC is taking a three-pronged approach: developing evidence-based instruction and conducting and publishing rigorous research to assess student benefits throughout the initiative; increasing teacher capacity by training 100 teachers from the program’s pilot schools as well as educators from Hong Kong non-governmental organizations; and creating a network for community support, with City University of Hong Kong reaching out to more than 2,000 parents to improve their understanding of computational thinking and coding education.

In addition to committing HK$216 million in funding (about $30 million USD, with around $4 million being received by MIT) over three years, The Hong Kong Jockey Club Charities Trust has been actively engaging stakeholders in the government, businesses, and other local institutions of learning.

The Hong Kong Innovation Node, part of the MIT Innovation Initiative, launched its first formal programming in June 2016 and will soon house collaborative space and resources, like advanced manufacturing capabilities, and offer convening opportunities in Hong Kong and the neighboring Pearl River Delta. EdX, the online learning platform co-founded by MIT and Harvard University, counts the Hong Kong University of Science and Technology, Hong Kong University, and Hong Kong Polytechnic University among its charter members, and offers a course on building apps with App Inventor. Finally, MIT President L. Rafael Reif will visit Hong Kong as part of MIT’s Campaign for a Better World Road Show in December.

Read this article on MIT News.


SuperUROP Research Preview


https://www.eecs.mit.edu/news-events/calendar/events/superurop-research-preview-0

Learn about innovative research being conducted by undergrads across the School of Engineering at the SuperUROP Research Preview.

Meeting of the Minds for Machine Intelligence


Alison F. Takemura, Department of Electrical Engineering and Computer Science

Industry leaders, computer scientists, and venture capitalists gather to discuss how smarter computers are remaking our world. 

photo of students

Photo by Marcy Rolerson 


Surviving breast cancer changed the course of Regina Barzilay’s research. The experience showed her, in stark relief, that oncologists and their patients lack tools for data-driven decision making. That includes what treatments to recommend, but also whether a patient’s sample even warrants a cancer diagnosis, she explained at the Nov. 10 Machine Intelligence Summit, organized by MIT and venture capital firm Pillar.

“We do more machine learning when we decide on Amazon which lipstick you would buy,” said Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science. “But not if you were deciding whether you should get treated for cancer.”

Barzilay now studies how smarter computing can help patients. She wields the powerful predictive approach called machine learning, a technique that allows computers, given enough data and training, to pick out patterns on their own — sometimes even beyond what humans are capable of pinpointing.

Machine learning has long been vaunted in consumer contexts — Apple’s Siri can talk with us because machine learning enables her to understand natural human speech — yet the summit gave a glimpse of the approach’s much broader potential. Its reach could offer not only better Siris (e.g., Amazon’s “Alexa”), but improved health care and government policies.

Machine intelligence is “absolutely going to revolutionize our lives,” said Jamie Goldstein ’89, Pillar co-founder. Goldstein and Anantha Chandrakasan, department head of Electrical Engineering and Computer Science (EECS) and the Vannevar Bush Professor of Electrical Engineering and Computer Science, organized the conference to bring together industry leaders, venture capitalists, students, and faculty from the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Institute for Data, Systems, and Society (IDSS), and the Laboratory for Information and Decision Systems (LIDS) to discuss real-world problems and machine learning solutions.

Barzilay is already thinking along those lines. Her group’s work aims to help doctors and patients make more informed medical decisions with machine learning. She has a vision for the future patient in the oncologist’s office: “If you’re taking this treatment, [you’ll see] how your chances are going to be changed.”

Machine senses

Machine learning has already proven powerful. But Antonio Torralba, professor of electrical engineering and computer science, believes that machines can learn faster, and thereby do more. His team’s approach mimics the way humans learn in infancy. “We just start playing with things and seeing how they feel,” Torralba said. To illustrate, he showed the room a video of a baby turning over squeaky bubble wrap in her hands. Importantly, we notice the noises things make when we move them around, he said.

To give machines a similar sensory experience of the world, a student of Torralba’s recorded himself tapping more than a thousand objects with a wooden drumstick. Called “Greatest Hits,” the sound collection captured the drumstick clanging ceramic cups, ruffling bushes, and splashing water. After feasting on these videos, a computer could start predicting the sounds of the world — essentially reflecting a grasp of its physics — all without explicit instruction.

Videos of everyday scenes (sans drumstick) also prove deft teachers. Machines are usually guided to pick out objects by training them on annotated images. That means people would meticulously outline a photograph’s individual objects, such as people, lamps, and bar stools, so that computers could learn to identify them. But Torralba and his team have found that by giving computers video complete with objects’ sounds—a street’s ambient noise or people talking—a machine’s neural network could begin to pick out objects without any guidance at all.
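
The sound-as-supervision idea can be sketched, in highly simplified form, as a two-step pipeline: cluster the audio into pseudo-labels, then train a visual model to predict those clusters. The toy sketch below uses synthetic feature vectors rather than real video and audio, and nearest-centroid models in place of the team's neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "object types" that both look different and sound different
vis = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
aud = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])

# Step 1: k-means (k=2) on the audio stream yields pseudo-labels, no humans involved
centers = np.array([aud.min(), aud.max()])
for _ in range(10):
    labels = (np.abs(aud - centers[0]) > np.abs(aud - centers[1])).astype(int)
    centers = np.array([aud[labels == k].mean() for k in (0, 1)])

# Step 2: a nearest-centroid "visual classifier" trained only on the pseudo-labels
vis_centroids = np.array([vis[labels == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(
    np.linalg.norm(vis[:, None, :] - vis_centroids[None, :, :], axis=2), axis=1
)

true = np.array([0] * 100 + [1] * 100)
acc = max((pred == true).mean(), (pred != true).mean())  # label-permutation invariant
print(f"agreement with ground truth: {acc:.2f}")
```

Even though no ground-truth labels are ever used for training, the visual model ends up separating the two object types, which is the essence of the self-supervised setup described above.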

Torralba recounted how a machine trained this way begins to identify water, the sky, and people’s faces. Machines become remarkably adroit at identifying infants, because “they make a very special noise,” Torralba said. The recognition of sounds resides in a machine’s artificial neurons called units. He continued: “There were a lot of units devoted to babies.”

Decision helpers

Once a machine is educated, it can help experts make better decisions.

Stefanie Jegelka, an assistant professor of electrical engineering and computer science, showed how to make machines learn faster and predict more reliably by identifying maximally informative data. Her team has recently developed new techniques that make this process much more practical.
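
One textbook way to formalize "maximally informative data" is to label next the candidate point where the model's predictive uncertainty is largest. The sketch below uses Gaussian-process posterior variance under an RBF kernel as the uncertainty measure; it illustrates the general criterion, not Jegelka's specific techniques:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel between two 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

X_train = np.array([0.0, 1.0, 2.0])          # points already labeled
candidates = np.array([0.5, 1.5, 5.0])       # points we could label next

K = rbf(X_train, X_train) + 1e-6 * np.eye(len(X_train))
K_star = rbf(candidates, X_train)

# GP posterior variance: k(x,x) - k_x^T K^{-1} k_x
post_var = 1.0 - np.einsum("ij,jk,ik->i", K_star, np.linalg.inv(K), K_star)

best = candidates[np.argmax(post_var)]
print(f"most informative next point: {best}")  # → 5.0, far from the existing data
```

The point far from the training data wins because the model knows least about that region; labeling it shrinks uncertainty the most.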

Alternatively, savvy machines can help us evaluate policies. Tamara Broderick, an assistant professor of electrical engineering and computer science, showed how this works. In collaboration with MIT economist Rachael Meager, her team focused on the question of quickly and accurately quantifying uncertainty: for instance, is microlending, or giving people small loans to jumpstart businesses, actually helping alleviate poverty? Answering that requires understanding the variation in returns on these loans.

When we ask a computer to tell us how much more value a loan creates — for instance, $4 made for $3 invested — we can also use machine learning to evaluate how robust that outcome is. “What would happen if we were to tweak the model?” Broderick asked. “Are we going to get the same number out at the end? Or are we going to get fundamentally different numbers and therefore fundamentally different decisions about what to do — what policy to make?” Machine learning can guide the way.
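
A minimal version of this robustness check might look like the following, where both the loan returns and the "tweak" (dropping one extreme loan) are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

invested = 3.0
returns = np.array([1, 2, 3, 3, 4, 4, 5, 20], dtype=float)  # one outlier loan

# Point estimate: value created per dollar lent
point = returns.mean() / invested

# Bootstrap the data to get a 95% interval for the estimate
boot = np.array([
    rng.choice(returns, size=len(returns)).mean() / invested
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])

# "Tweak the model": drop the single largest loan and re-estimate
tweaked = np.sort(returns)[:-1].mean() / invested

print(f"estimate: {point:.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")
print(f"after dropping one outlier: {tweaked:.2f}")
```

Here a single extreme loan moves the answer substantially, which is exactly the kind of fragility Broderick's question is probing: a policy conclusion that survives such tweaks is far more trustworthy than one that does not.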

To our health

But the application of machine intelligence most discussed at the summit was in health care. Mandy Korpusik, a graduate student in CSAIL who shared her work during a pitch session, described an app called Lana that serves as a personal nutritionist. You can tell her what you ate for lunch, and she can recommend what nutrient-rich foods to have in your next meal.

Barzilay, the cancer survivor, wants not only to feed computers clinical reports, but medical scans. These images contain a wealth of information humans alone might be unable to articulate, she said. For example, a machine might be able to discern that given your mammogram, a particular treatment might be 90 percent likely to be effective.

With colleague Tommi Jaakkola, professor of computer science and engineering, Barzilay is also working on extracting the machine’s reasoning, a murkier but necessary endeavor. “Doctors, at least the ones at MGH, are not happy just getting a number at the end,” Barzilay said. “They need to know why.”

Intelligent machines can aid decision making beyond the doctor’s office. Data scientists capable of implementing machine learning have become ubiquitous in government agencies, said Aman Bhandari in a fireside chat-style interview with Ash Ashutosh, CEO of information technology firm Actifio. Bhandari is now at pharmaceutical developer Merck, but worked at the White House in President Barack Obama’s Office of Science and Technology Policy. During his tenure, the administration heavily pushed digitizing all medical records.

“If you think about health care, we’ve moved from — and we’re still moving from — this stone age of data collection, capture, production, and analysis into this possibly ‘industrial era’ of all of those things,” Bhandari said. “So, the first phase is digitizing the system. The next phase is unleashing data from the US government across every single sector."

Jacqueline Xi, an electrical engineering and computer science senior, came away feeling enthusiastic about machine learning’s possibilities. “Just to see everyone in the same room, and people who are founding startups, all here discussing these bigger ideas about how we can connect machine learning across all these groups, is really eye-opening,” she said. “It’s inspiring.”

November 17, 2016


Professor Emeritus Jay Forrester, digital computing and system dynamics pioneer, dies at 98


Zach Church | MIT Sloan School of Management

The inventor of an early form of RAM had an outsized influence on organizational dynamics, supply chains, and sustainability.

photo of jay forrester

Jay Forrester in 2008


Jay W. Forrester SM ’45, professor emeritus in the MIT Sloan School of Management, founder of the field of system dynamics, and a pioneer of digital computing, died Nov. 16. He was 98.

Forrester’s time at MIT was rife with invention. He was a key figure in the development of digital computing, the national air defense system, and MIT’s Lincoln Laboratory. He developed servomechanisms (feedback-based controls for mechanical devices), radar controls, and flight-training computers for the U.S. Navy. He led Project Whirlwind, an early MIT digital computing project. It was his work on Whirlwind that led him, in 1949, to invent magnetic core memory, an early form of RAM for which he holds the patent.

MIT Sloan Professor John Sterman, a student, friend, and colleague of Forrester’s since the 1970s, points to a 2003 photo of Forrester on a Segway as an illustration of his work’s lasting impact.

“He really is standing on top of the fruits of his many careers,” Sterman said. “He’s standing on a device that integrates servomechanisms, digital controllers, and a sophisticated feedback control system.”

“From the air traffic control system to 3-D printers, from the software companies use to manage their supply chains to the simulations nations use to understand climate change, the world in which we live today was made possible by Jay’s work,” he said.

System dynamics: A new view of management

It was after turning his attention to management in the mid-1950s that Forrester developed system dynamics — a model-based approach to analyzing complex organizations and systems — while studying a General Electric appliance factory. An MIT Technology Review article explores how he sought to combat the factory’s boom-and-bust cycle by examining its “weekly orders, inventory, production rate, and employees.” He then developed a computer simulation of the GE supply chain to show how management practices, not market forces, were causing the cycle.
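
The flavor of such a model can be sketched in a few lines: a fixed ordering rule plus a production delay is enough to generate a boom-and-bust cycle even though outside demand never changes. All parameters below are invented for illustration, not drawn from Forrester's GE study:

```python
# Minimal stock-and-flow simulation: inventory feedback with a delay.
TARGET, DEMAND, GAIN, DELAY = 100.0, 10.0, 0.5, 4

inventory = 50.0                       # start below target
pipeline = [DEMAND] * DELAY            # orders already in production
history = []

for week in range(60):
    arrival = pipeline.pop(0)          # production finished this week
    inventory += arrival - DEMAND      # ship steady demand
    # Naive policy: order based on the inventory gap, ignoring orders
    # already in the pipeline -- the classic source of cycles
    order = max(0.0, DEMAND + GAIN * (TARGET - inventory))
    pipeline.append(order)
    history.append(inventory)

peak = max(history)
trough = min(history[history.index(peak):])
print(f"peak inventory: {peak:.1f}, trough after peak: {trough:.1f}")
```

The inventory overshoots the target and then collapses below it, purely because the ordering policy ignores the delay: management practice, not market forces, creates the cycle.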

Forrester’s “Industrial Dynamics” was published in 1961. The field expanded to chart the complexities of economies, supply chains, and organizations. Later, he cast the principles of system dynamics on global issues in “Urban Dynamics,” published in 1969, and “World Dynamics,” published in 1971. The latter was an integrated simulation model of population, resources, and economic growth. Forrester became a critic of growth, a position that earned him few friends.

“Many businesses, government officials, and academics hated it,” Sterman said, “yet today, the collision between the finite resources of our planet and population and economic growth drives issues from climate change to deforestation, collapsing fisheries, resource conflict, and mass migrations.” Four of Forrester’s students would rely on his ideas to write “The Limits to Growth,” a 1972 book that helped to launch the field of global modeling and the sustainability movement around the world.

In many ways, system dynamics stands in opposition to the idea that a charismatic or talented leader can steer a wayward firm to success, a tension Forrester explained to MIT Technology Review.

“Very often people are just role players within a [company’s] system,” he said. “They are not running it; they are acting within it. This has not been a popular idea with people who think they are in charge … but in fact, unless they are knowledgeable in systems, they will fall into a pattern of doing what the system dictates. If they understand the system, they can alter that behavior.”

At MIT Sloan, Forrester created the Refrigerator Game, a supply chain simulation that teaches the principles of system dynamics. It was later dubbed the Beer Game and remains a popular exercise during student orientation.

“What made Jay so special is because of his background in digital computing, he saw, with the advent of the digital computer, the ability to do simulations that were both large-scale and practical,” said MIT Sloan Professor Nelson Repenning. “He appreciated that far before anyone.”

From the family ranch to MIT

All this from a boy who grew up working the family ranch.

“I’ve had several careers,” he told MIT Technology Review. “Starting with ranch hand.”

Forrester was born July 14, 1918, in Nebraska. He earned a bachelor’s degree in electrical engineering from the University of Nebraska in 1939. He arrived at MIT the same year as a graduate student in the School of Engineering, earning a master's in 1945. He joined what would become the MIT Sloan faculty in 1956 and retired in 1989.

“To me, Jay was MIT,” Repenning said. “He showed up to work on gunsights and radar mounts for the U.S. military, ended up playing a pioneering role in digital computing, and suddenly became a social scientist. I can’t imagine that happening anywhere else. It was the perfect match of a unique person [and institution].”

Sterman said Forrester had high standards as a teacher, but that submitting work to his rigorous inspection was rewarding.

“It was a great experience to have Jay mark up one of your papers with his red pen,” he said. “The way to learn the most from Jay was first of all to recognize that he was probably right and you were wrong, and secondly, to just be grateful for the gift of all that criticism, because everything you did after that was better.”

Forrester was married for 64 years to Susan (Swett) Forrester, who died in 2010. He is survived by a daughter, Judith; two sons, Nathan and Ned; four grandchildren, Matthew, Julia, Neil, and Katherine; and two great-grandchildren, Everett and Faraday.

Read this article on MIT News.

November 19, 2016


CSAIL's Howard Shrobe one of four MIT faculty elected 2016 AAAS Fellows

November 21, 2016

Adam Conner-Simons | CSAIL

Green, Ketterle, Nedivi, and Shrobe are among those recognized for their efforts toward advancing science.

photo of Howard Shrobe

Pictured: Howard Shrobe


Four current MIT faculty members have been elected as fellows of the American Association for the Advancement of Science (AAAS), according to a news release published today by the journal Science.

The new fellows are among a group of 391 AAAS members elected by their peers in recognition of their scientifically or socially distinguished efforts to advance science. This year’s fellows will be honored at a ceremony on Feb. 18, 2017, at the AAAS Annual Meeting in Boston.

Howard E. Shrobe, a principal research scientist at the Computer Science and Artificial Intelligence Laboratory, was recognized for “distinguished research in knowledge representation and its applicability to human-serving artificial intelligence systems, and both research and service to the federal government in comprehensive approaches to addressing cybersecurity problems.”

This year’s fellows will be formally announced in the AAAS News and Notes section of Science on Nov. 25.

For the full list of fellows, read this article on MIT News.


Four MIT students named 2017 Marshall Scholars


Julia Mongo | Office of Distinguished Fellowships

Matthew Cavuto, Zachary Hulcher, Kevin Zhou, and Daniel Zuo will pursue two years of study in the U.K.

photos of students

Left: Daniel Zuo; Right: Zach Hulcher; Photo: Casey Atkins


Four MIT students — Matthew Cavuto, Zachary Hulcher, Kevin Zhou, and Daniel Zuo — are winners in this year’s prestigious Marshall Scholarship competition. Another student, Charlie Andrews-Jubelt, was named an alternate. The newest Marshall Scholars come from the MIT departments of Mechanical Engineering, Physics, Mathematics, and Electrical Engineering and Computer Science.

Funded by the British government, the Marshall Scholarships provide exceptional young Americans the opportunity for two years of graduate study in any field at a U.K. institution. Up to 40 scholarships are awarded each year in the rigorous nationwide competition. Scholars are selected on the basis of academic merit, leadership potential, and ambassadorial potential.

“The Presidential Committee on Distinguished Fellowships is so proud — as am I, personally — to have had the opportunity to help all the nominated MIT students through the Marshall Scholarship process,” says Kim Benard, assistant dean of distinguished fellowships and academic excellence. “Matthew, Zach, Kevin, and Daniel represent the very best of MIT. We have also had the great pleasure to work with students who ultimately didn’t win, but who will have extraordinary careers that will increase the reputation of MIT.”

Zachary Hulcher

Zachary Hulcher, from Montgomery, Alabama, is pursuing a dual major in electrical engineering and computer science and physics, with a minor in mathematics. As a Marshall Scholar, he will study and perform research in high-energy physics at Cambridge University, following in the footsteps of such luminary physicists as Newton, Maxwell, and Hawking. Hulcher plans to earn a PhD and, as a professor of physics, make contributions to expand the field of high-energy physics.

Hulcher spent his sophomore summer conducting research with Professor Yen-Jie Lee at the Compact Muon Solenoid (CMS) Experiment at CERN’s Large Hadron Collider in Geneva, Switzerland. He returned to CERN his junior summer to continue with and present on his research. Since the fall of 2015, he has been a research assistant in the group of physics professor Krishna Rajagopal at the Center for Theoretical Physics at MIT. Hulcher has been improving the analysis and modeling of how CMS measurements can be used to probe quark-gluon plasma, a substance connected to the Big Bang that may lead to greater understanding of the formation of the universe. "Zach took on, mastered, and then drove a theoretical physics research project,” observes Rajagopal. “He will be the principal author of a paper describing an important advance, and he showed fearless confidence in giving a talk at an international workshop in which he showed new results (some only hours old) that garnered much attention. All the while, he is both well-grounded and well-rounded.”

Hulcher is also motivated by a desire to teach others. He has been a teaching assistant for the physics department at MIT, a grader in the mathematics department, and a tutor for MIT’s chapter of Eta Kappa Nu, the national honor society for electrical engineering and computer science. Through the MIT International Science and Technology Initiatives’ Global Teaching Labs, he traveled to Xalapa, Mexico, to assist with courses focused on mobile and internet technologies, and he taught courses on physics to high school students in Italy and Israel.

Since his freshman year, Hulcher has been an offensive lineman with MIT’s varsity football team and was named this year to the NEWMAC all-academic team for his outstanding scholarly and athletic performance. Hulcher also serves on the executive board for the MIT chapter of the Tau Beta Pi engineering honor society.

Daniel Zuo

Daniel Zuo, from Memphis, Tennessee, is graduating next June with a bachelor’s degree in electrical engineering and computer science, an MEng in electrical engineering and computer science, and a minor in creative writing. At Cambridge University, Zuo will do two consecutive one-year master’s degree programs: an MPhil in advanced computer science and an MPhil in machine learning, speech, and language technology. After completing his studies in the U.K., Zuo will pursue a PhD and hopes to develop a startup venture that will advance internet connectivity in the developing world. He ultimately plans to teach and conduct research as a professor of computer science.

Zuo is particularly interested in lossless datacenter architectures and their potential to help people interact more effectively with massive amounts of data. He is currently a research assistant for TIBCO Career Development Assistant Professor Mohammad Alizadeh in the Networks and Mobile Systems group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Alizadeh’s group works to improve the performance, usability, and robustness of networks and cloud services; Zuo has been investigating algorithms that provide scheduling and congestion control to enhance network performance. “Daniel is brilliant,” Alizadeh says. “It’s been a joy to work with him. He is one of those rare students that can jump into an unfamiliar area and quickly figure out exactly the right way to think about the hard technical problems.”
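
For readers unfamiliar with congestion control, the classic baseline that modern schemes refine is TCP's additive-increase/multiplicative-decrease (AIMD) rule: grow the sending window slowly, and cut it sharply on signs of congestion. The toy loop below illustrates AIMD's characteristic sawtooth and is not one of the group's algorithms:

```python
CAPACITY = 50.0      # packets the bottleneck link can carry per RTT
cwnd = 1.0           # congestion window (packets in flight)
history = []

for rtt in range(200):
    if cwnd <= CAPACITY:
        cwnd += 1.0          # additive increase while the link keeps up
    else:
        cwnd /= 2.0          # multiplicative decrease on congestion loss
    history.append(cwnd)

# Steady state: a sawtooth probing between roughly CAPACITY/2 and CAPACITY
late = history[100:]
print(f"window oscillates between {min(late):.1f} and {max(late):.1f}")
```

The sawtooth keeps the sender probing for spare capacity while backing off quickly when the link saturates; research like Alizadeh's aims to react faster and more precisely than this simple rule.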

Zuo has also conducted research in Professor Manolis Kellis’ group at CSAIL, which focuses on computational methods for accessing large data sets for the analysis of human disease. He developed “greedy” algorithms to produce a comprehensive set of overlapping enhancers across cell types for a specific gene. He has also worked as a software engineer at several technology and finance companies, including Electronic Arts, Arcadia Funds, and Complete Solar Solutions. Zuo’s own projects include Fold, a mobile payment service to allow easy and secure peer-to-peer Bitcoin transactions over Bluetooth technology.
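
The "greedy" strategy mentioned here is a standard covering heuristic: at each step, pick the candidate that covers the most still-uncovered elements. The sketch below applies it to made-up enhancer regions and cell types purely to illustrate the idea:

```python
# Greedy set cover over hypothetical enhancer regions and cell types.
candidates = {
    "enhA": {"neuron", "liver", "skin"},
    "enhB": {"neuron", "blood"},
    "enhC": {"liver"},
    "enhD": {"blood", "skin", "muscle"},
}
universe = set().union(*candidates.values())

chosen, uncovered = [], set(universe)
while uncovered:
    # Greedy step: take the candidate covering the most uncovered cell types
    best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
    chosen.append(best)
    uncovered -= candidates[best]

print("greedy cover:", chosen)
```

Greedy covering is not guaranteed to be optimal, but it carries a well-known logarithmic approximation guarantee and scales to the large genomic candidate sets this kind of analysis involves.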

In his freshman year, Zuo helped launch MakeMIT, the largest hardware hackathon in the nation, and has continued his involvement with the project as a committee member with the MIT student organization TechX. Zuo is also active in public service in the Boston community through his leadership roles with the Phi Kappa Theta fraternity.

For the full list of Marshall Scholars, read this article on MIT News.

November 28, 2016


Toward X-ray movies

November 22, 2016

Larry Hardesty | MIT News

Low-power tabletop source of ultrashort electron beams could replace car-size laboratory devices.

Illustration

This illustration shows a miniature electron gun driven by terahertz radiation. A UV pulse (blue) back-illuminates the gun photocathode, producing a high-density electron bunch inside the gun. The bunch is immediately accelerated by ultra-intense terahertz pulses to energies approaching 1 kiloelectronvolt. These high-field optically-driven electron guns can be utilized for ultrafast electron diffraction or injected into the accelerators for X-ray light sources. Courtesy of W. Ronny Huang


Ultrashort bursts of electrons have several important applications in scientific imaging, but producing them has typically required a costly, power-hungry apparatus about the size of a car.

In the journal Optica, researchers at MIT, the German Synchrotron, and the University of Hamburg in Germany describe a new technique for generating electron bursts, which could be the basis of a shoebox-sized device that consumes only a fraction as much power as its predecessors.

Ultrashort electron beams are used to directly gather information about materials that are undergoing chemical reactions or changes of physical state. But after being fired down a particle accelerator half a mile long, they’re also used to produce ultrashort X-rays.

Last year, in Nature Communications, the same group of MIT and Hamburg researchers reported the prototype of a small “linear accelerator” that could serve the same purpose as the much larger and more expensive particle accelerator. That technology, together with a higher-energy version of the new “electron gun,” could bring the imaging power of ultrashort X-ray pulses to academic and industry labs.

Indeed, while the electron bursts reported in the new paper have a duration measured in hundreds of femtoseconds, or quadrillionths of a second (which is about what the best existing electron guns can manage), the researchers’ approach has the potential to lower their duration to a single femtosecond. An electron burst of a single femtosecond could generate attosecond X-ray pulses, which would enable real-time imaging of cellular machinery in action.

“We’re building a tool for the chemists, physicists, and biologists who use X-ray light sources or the electron beams directly to do their research,” says Ronny Huang, an MIT PhD student in electrical engineering and first author on the new paper. “Because these electron beams are so short, they allow you to kind of freeze the motion of electrons inside molecules as the molecules are undergoing a chemical reaction. A femtosecond X-ray light source requires more hardware, but it utilizes electron guns.” In particular, Huang explains, with a technique called electron diffraction imaging, physicists and chemists use ultrashort bursts of electrons to investigate phase changes in materials, such as the transition from an electrically conductive to a nonconductive state, and the creation and dissolution of bonds between molecules in chemical reactions.

Ultrashort X-ray pulses have the same advantage that ordinary X-rays do: They penetrate more deeply into thicker materials. The current method for producing ultrashort X-rays involves sending electron bursts from a car-sized electron gun through a billion-dollar, kilometer-long particle accelerator that increases their velocity. Then they pass between two rows of magnets — known as an “undulator” — that convert their energy into X-rays.

In the paper published last year — on which Huang was a coauthor — the MIT-Hamburg group, together with colleagues from the Max Planck Institute for the Structure and Dynamics of Matter in Hamburg and the University of Toronto, described a new approach to accelerating electrons that could shrink particle accelerators to tabletop size. “This is supposed to complement that,” Huang says, about the new study.

Franz Kärtner, who was a professor of electrical engineering at MIT for 10 years before moving to the German Synchrotron and the University of Hamburg in 2011, led the project. Kärtner remains a principal investigator at MIT’s Research Laboratory of Electronics and is Huang’s thesis advisor. He and Huang are joined on the new paper by eight colleagues from both MIT and Hamburg.

Subwavelength confinement

The researchers’ new electron gun is a variation on a device called an RF gun. But where the RF gun uses radio frequency (RF) radiation to accelerate electrons, the new device uses terahertz radiation, the band of electromagnetic radiation between microwaves and visible light.

The researchers’ device, which is about the size of a matchbox, consists of two copper plates that, at their centers, are only 75 micrometers apart. Each plate has two bends in it, so that it looks rather like a trifold letter that’s been opened and set on its side. The plates bend in opposite directions, so that they’re farthest apart — 6 millimeters — at their edges.

At the center of one of the plates is a quartz slide on which is deposited a film of copper that, at its thinnest, is only 30 nanometers thick. A short burst of light from an ultraviolet laser strikes the film at its thinnest point, jarring loose electrons, which are emitted on the opposite side of the film.

At the same time, a burst of terahertz radiation passes between the plates in a direction perpendicular to that of the laser. All electromagnetic radiation can be thought of as having electrical and magnetic components, which are perpendicular to each other. The terahertz radiation is polarized so that its electric component accelerates the electrons directly toward the second plate.

The key to the system is that the tapering of the plates confines the terahertz radiation to an area — the 75-micrometer gap — that is narrower than its own wavelength. “That’s something special,” Huang says. “Typically, in optics, you can’t confine something to below a wavelength. But using this structure we were able to. Confining it increases the energy density, which increases the accelerating power.”
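A quick back-of-the-envelope check puts a concrete number on that accelerating power. The arithmetic below is my own estimate, not a figure from the paper, and it assumes a uniform field across the full 75-micrometer gap, which the real device does not have:

```python
# Back-of-the-envelope field estimate (my own arithmetic, not from the paper).
# Assumes a uniform accelerating field across the full gap.
target_energy_eV = 1_000.0   # bunch energy "approaching 1 kiloelectronvolt"
gap_m = 75e-6                # plate separation at the center, 75 micrometers

# An electron gains 1 eV per volt of potential crossed, so the required
# voltage equals the energy in eV, and the average field is voltage / gap.
field_V_per_m = target_energy_eV / gap_m
print(f"{field_V_per_m / 1e6:.1f} MV/m")  # 13.3 MV/m
```

On this rough estimate, the confined terahertz pulse must sustain an average field on the order of 13 megavolts per meter across the gap.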

Because of that increased accelerating power, the device can make do with terahertz beams whose power is much lower than that of the radio-frequency beams used in a typical RF gun. Moreover, the same laser can generate both the ultraviolet beam and, with a few additional optical components, the terahertz beam.

According to James Rosenzweig, a professor of physics at the University of California at Los Angeles, that’s one of the most attractive aspects of the researchers’ system. “One of the main problems you have with ultrafast sources like this is timing jitter between, say, the laser and accelerating field, which produces all sorts of systematic effects that make it harder to do time-resolved electron diffraction,” Rosenzweig says.

“In the case of Kärtner’s device, the laser produces the terahertz and also produces the photoelectrons, so the jitter is highly suppressed. You could do pump-probe experiments where the laser is the driver and the electrons would be the probe, and they would be more successful than what you have right now. And of course it would be a very small-sized and modest-cost device. So it might turn out to be very important as far as that scenario goes.”

The researchers’ work was funded by the U.S. Air Force Office of Scientific Research and by the European Research Council. Ronny Huang was supported by a National Defense Science and Engineering Graduate fellowship.

Read this article on MIT News.



MIT Admissions Blog: Kevin’s Room


Date Posted: Monday, November 28, 2016 - 10:15am

Card Title: MIT Admissions Blog: Kevin’s Room

Card URL: http://mitadmissions.org/blogs/entry/kevins-room

Card Description: Vincent, class of 2017, profiles his friend's unique dorm room for the MIT Admissions blog.


Lim inducted into Consumer Technology Hall of Fame

November 28, 2016

Department of Electrical Engineering and Computer Science

Jae S. Lim recognized for role in development of HDTV.

photo of Jae S. Lim

Pictured: Jae S. Lim at the CT Hall of Fame Induction Ceremony on November 9, 2016.


Jae S. Lim, professor of electrical engineering and computer science, has been inducted into the Consumer Technology Hall of Fame in recognition of his role in the development of HDTV. The honor pays tribute to researchers and members of industry who have “advanced innovation and developed a foundation for the consumer technologies we enjoy today and others still to come.”

Lim was inducted in a November 9 ceremony in New York City for his role as a Distinguished Leader in the Digital HDTV Grand Alliance.

Lim participated in the Federal Communications Commission’s Advanced Television Standardization Process. He served as an ex-officio member of the FCC’s Advisory Committee on Advanced Television Service. He represented MIT in designing the MIT/GI System, which became one of the four finalist systems, and again when the four finalist systems were merged into a single system through the formation of the Grand Alliance. The Grand Alliance HDTV System became the basis for the U.S. Digital Television Standard adopted by the FCC in December 1996. Terrestrial digital HDTV broadcasting began in the United States in 1998 and coexisted with analog terrestrial TV broadcasting until the analog service was discontinued in 2009.

The Consumer Technology Hall of Fame was created by the Consumer Technology Association. Inductees are selected by a group of media and industry professionals, who judge nominations submitted by manufacturers, retailers and industry journalists. Lim is one of 17 members inducted into the 2016 class.


Harder, better, faster, stronger


Camilla Brinkman | MIT News

The MIT Formula SAE team takes a great leap forward to build their 2017 race car.

photo of race car

MIT senior Brian Wanek wears a racing helmet inside a prototype of what will be the driver’s seat in the 2017 MIT Formula SAE car as sophomore Wasay Anwer measures the frame. Photo: Camilla Brinkman


Imagine if you and a group of students were tasked with designing, building, testing, and driving a Formula-style electric race car from the ground up. Every year.

For students who are members of MIT Motorsports — a.k.a. the MIT Formula SAE (FSAE) team, originally founded in 2001 — that task determines how they spend their free time: weekends, evenings, and often the January Independent Activities Period and the summer as well.

On a recent Saturday in the Edgerton Center’s Area 51 Student Shop (Room N51), about two dozen students on the FSAE team were sitting at large work tables, huddled over their laptops, designing components for their 2017 vehicle using the Solidworks design software. Friendly banter, shared jokes, and periods of serious focus characterized the day. One student, senior Brian Wanek, sat wearing a helmet inside a prototype of what would be the driver’s seat, while sophomore Wasay Anwer measured the frame.

The task for this year — borrowing from a Daft Punk song — is building a harder, better, faster, stronger car for the June 2017 Collegiate Design Series hosted by the Society of Automotive Engineers in Lincoln, Nebraska.

Last year’s performance was nothing to scoff at: MIT's team passed all inspections, placed fourth in vehicle cost (an event that rewards spending the least money to construct the car), placed fourth overall in design, and finished sixth overall in a field of 21 teams from around the globe.

“Our previous car was an evolutionary step in our team’s history,” says team captain and MIT junior Luis Alberto Mora, who joined FSAE as a freshman. “We kept the working components the same and made minor improvements on previous designs. This year’s car is a revolutionary step: a brand-new electric powertrain and batteries, and a new tire size, giving us the freedom to make no compromises in performance.”

The team operates on a tight budget, just over $100,000, and they actively raise funds from sponsors inside and outside of MIT. The Edgerton Center, the Department of Mechanical Engineering, Ford Motor Company, and General Motors provide the bulk of funding necessary to keep the team running year after year.

More than 1,000 parts go into the construction of their car, most of which is fabricated in the shop using a mix of manual and computer numeric control (CNC) machines, rapid prototyping machines, and water jet cutters. The team typically purchases all the raw materials needed to build their new car, as well as high-cost items such as battery cells, electric motors, and motor controllers.

This year the team is designing their own battery pack based on high-discharge lithium-ion cylindrical cells. “Our previous battery pack [used on last year’s car] was purchased from a company that later went out of business, and it needed lots of repairs. We gained a lot of knowledge from having to fix it,” says Elliot Owen, a junior in mechanical engineering and battery lead for MIT FSAE. “This year’s battery pack will have a different chemistry in a different format. Previously we had big floppy sheets; now we have little canisters, similar to what is used in Tesla. We can make a 25 percent weight reduction, we have more energy, and it’s safer,” Owen says.

The new car weighs in at 215 kilograms without a driver, 30 kg lighter than last year’s vehicle. It will have a chassis and suspension of a hybrid design: a primary steel-tube structure with stressed carbon-fiber panels.

Last year’s car also serves a purpose for this year’s build. “We can try out new software on the old car, we can debug it, and by the time the new car is built, the software, the mechanical parts, are already worked out,” Mora says.

To sustain a team that, without fail, loses a number of valuable team members each year to graduation is no small task — and recruiting new members is essential. Tianye Chen, a junior in electrical engineering and computer science who is the low voltage electrical systems lead, says: “We try to give new members meaningful things to do, teach them how to use the tools, and give them projects to keep them engaged.”

Patrick McAtamney, technical instructor and master machinist in Area 51, works closely with the FSAE team and sees multiple benefits to being on an Edgerton Center team. “One thing that students get is social skills: how to work with other team members on sub-teams. A mechanical engineering student will work with an aero-astro student to solve engineering problems together.”

“FSAE provides one of the most real-world engineering experiences offered on campus,” says assistant professor of mechanical engineering Amos Winter, the team’s faculty advisor who meets with them regularly to go over design specifications. “As an educator, it makes me very happy to see the students absorb, synthesize, and apply the theory we teach in classes into practical engineering solutions.”

While the June competition is still a ways off, for the FSAE students it will mark the culmination of a year-long project of long hours, hard work, and fun — a noteworthy accomplishment for all.

Read this article on MIT News.

November 29, 2016


In Depth Interview with Ray Stata

In the Media: CSAIL research project recruits robots made by high schoolers and middle schoolers


Date Posted: Wednesday, November 30, 2016 - 9:45am


Card Title: In the Media: CSAIL research project recruits robots made by high schoolers and middle schoolers

Card URL: http://lcnme.com/currentnews/local-students-build-robots-mit-research/

Card Description: The Lincoln County News reports that Professor Berthold Horn of CSAIL is partnering with Maine high schools and middle schools to test traffic flow models using robots.


Helping Policy and Technology Work Together


Rachel van Heteren | Department of Electrical Engineering and Computer Science

MEng student Keertan Kini reflects on his work at the intersection of policy and technology, inside MIT and out.

Illustration

Photo credit: Rachel van Heteren


“When you’re part of a community, you want to leave it better than you found it,” says Keertan Kini, an MEng student in the Department of Electrical Engineering and Computer Science, or Course 6. That philosophy has guided Kini throughout his years at MIT, as he works to improve policy both inside and outside of MIT.

As a member of the Undergraduate Student Advisory Group (USAGE), former chair of the Course 6 Underground Guide committee, member of the Internet Policy Research Initiative (IPRI), and of the Advanced Network Architecture group, Kini’s research focus has been in finding ways that technology and policy can work together. As Kini puts it, “there can be unintended consequences when you don’t have technology makers who are talking to policymakers and you don’t have policymakers talking to technologists.” His goal is to allow them to talk to each other.

Kini first became interested in politics at 14, when he volunteered for President Obama’s 2008 campaign, making calls and putting up posters. “That was the point I became civically engaged,” says Kini. After that, he campaigned for a ballot initiative to raise more funding for his high school, and he hasn’t stopped being interested in public policy since.

High school was also where Kini became interested in computer science. He took a computer science class in high school on the recommendation of his sister, and in his senior year, he started watching computer science lectures on MIT OpenCourseWare by Hal Abelson, a professor in MIT’s Department of Electrical Engineering and Computer Science.

“That lecture reframed what computer science was. I loved it,” Kini recalls. “The professor said, ‘It’s not about computers, and it’s not about science.’ It might be an art or engineering, but it’s not science, because what we’re working with are idealized components, and ultimately the power of what we can actually achieve with them is not based so much on physical limitations as on the limitations of the mind.”

In part thanks to Abelson’s OCW lectures, Kini came to MIT to study electrical engineering and computer science. Kini is currently pursuing an MEng in electrical engineering and computer science, a fifth-year master’s program following his undergraduate studies in 6-2, electrical engineering and computer science.

Combining two disciplines

Kini set his policy interest to the side his freshman year, until he took 6.805J, Foundations of Information Policy, with Professor Hal Abelson, the same professor from MIT OCW who inspired Kini to study computer science. After taking Professor Abelson’s course, Kini joined him and Daniel Weitzner, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory, in putting together a big data and privacy workshop for the White House in the wake of the Snowden revelations. Four years later, Kini is now a TA for 6.805J.

With Weitzner as his advisor, Kini went on to work on a SuperUROP, an advanced version of UROP in which students take on their own research project for a full year. Kini’s project focused on making it easier for organizations that had experienced a cybersecurity breach to share how the breach happened with other organizations, without accidentally sharing private or confidential information as well.

Typically, when a security breach happens, there is a “human bottleneck,” as Kini puts it. Humans have to manually check all information they share with other organizations to ensure they don’t share private information or get themselves into legal hot water. The process is time-consuming, slowing down the improvement of cybersecurity for all organizations involved. Kini created a prototype of a system that could automatically screen information about cybersecurity breaches, determining what data had to be checked by a human, and what was safe to send along.
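The triage idea can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in of my own construction (the pattern list, the field text, the two-way routing), not the rules of Kini's actual prototype:

```python
import re

# Hypothetical sensitive-data patterns; a real screening system would encode
# organizational policy and use far richer detectors than these three.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),        # email addresses
    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),    # IPv4 addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security numbers
]

def triage(record: str) -> str:
    """Route one breach-report field: flag it for human review if any
    sensitive pattern matches, otherwise mark it safe to share."""
    if any(p.search(record) for p in SENSITIVE_PATTERNS):
        return "needs_review"
    return "safe_to_share"

print(triage("Attacker used CVE-2014-0160 against the VPN gateway"))   # safe_to_share
print(triage("Phishing email sent from ops@example.com to 10.0.0.7"))  # needs_review
```

The point of such a filter is exactly the one the article describes: only records that trip a detector reach the human bottleneck, while the rest can be shared automatically.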

Once finished with his SuperUROP, Kini became involved in the development of Votemate, a web app designed to simplify the voter registration process in all fifty states.

Kini’s interest in Votemate wasn’t only about increasing voter registration. “I think most people in this nation are centrist, and one of the reasons our political system gets polarized is because people who are polarized primarily turn out to vote,” he says. “I think the only reliable way to fix that is to get more people to turn out to vote.”

Shaping policy on campus

Kini is also involved in making changes within the Institute. “I feel like the same interest that’s gotten me interested in policy is the same thing that’s gotten me interested in working with the Department of Electrical Engineering and Computer Science,” Kini admits.

As a member of USAGE, Kini has been involved in exploring ways to revitalize the electrical engineering curriculum, redesigning the undergraduate lounge, and compiling a list of the resources available to Course 6 students. On the reason for the list of resources, Kini recalls, “When I was a senior, I realized there were some resources that I had no idea about. And this was after I had been involved in the department and USAGE for 5 years! I should have known!”

Kini is especially interested in making sure students know about the MIT resources for prospective entrepreneurs, such as StartMIT, which he enrolled in last year.

StartMIT is an IAP course designed to help students learn about what it takes to create a startup from the ground up. With the advice of over 60 speakers involved in the startup space, StartMIT offers practical advice on how to actually get a startup off the ground.

On the usefulness of StartMIT, Kini says, “at MIT, we try to solve very difficult challenges, we try to solve very meaningful technical problems, but what gets lost in the shuffle, is after you come up with a great idea, how do you get it out of your head and into the world?” He adds, “there’s a saying: ‘if you build it they will come.’ I disagree heartily. But StartMIT helps bridge that divide.”

Thanks to his experience at StartMIT, Kini knows that he wants to start his own company one day. “I see starting a company not only as an option, but the option. It’s a way to make sustainable change in the world.”

Looking back at his experience with USAGE, curriculum development, and policy making, Kini observes: “It’s not just about the nitty-gritty of education; it’s about community.”

December 1, 2016


Computer learns to recognize sounds by watching video

December 2, 2016

Larry Hardesty | MIT News

Machine-learning system doesn’t require costly hand-annotated data.

Illustration

The researchers’ neural network was fed video from 26 terabytes of video data downloaded from the photo-sharing site Flickr. Researchers found the network can interpret natural sounds in terms of image categories. For instance, the network might determine that the sound of birdsong tends to be associated with forest scenes and pictures of trees, birds, birdhouses, and bird feeders. Image: Jose-Luis Olivares/MIT


In recent years, computers have gotten remarkably good at recognizing speech and images: Think of the dictation software on most cellphones, or the algorithms that automatically identify people in photos posted to Facebook.

But recognition of natural sounds — such as crowds cheering or waves crashing — has lagged behind. That’s because most automated recognition systems, whether they process audio or visual information, are the result of machine learning, in which computers search for patterns in huge compendia of training data. Usually, the training data has to be first annotated by hand, which is prohibitively expensive for all but the highest-demand applications.

Sound recognition may be catching up, however, thanks to researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). At the Neural Information Processing Systems conference next week, they will present a sound-recognition system that outperforms its predecessors but didn’t require hand-annotated data during training.

Instead, the researchers trained the system on video. First, existing computer vision systems that recognize scenes and objects categorized the images in the video. The new system then found correlations between those visual categories and natural sounds.

“Computer vision has gotten so good that we can transfer it to other domains,” says Carl Vondrick, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We’re capitalizing on the natural synchronization between vision and sound. We scale up with tons of unlabeled video to learn to understand sound.”

The researchers tested their system on two standard databases of annotated sound recordings, and it was between 13 and 15 percent more accurate than the best-performing previous system. On a data set with 10 different sound categories, it could categorize sounds with 92 percent accuracy, and on a data set with 50 categories it performed with 74 percent accuracy. On those same data sets, humans are 96 percent and 81 percent accurate, respectively.

“Even humans are ambiguous,” says Yusuf Aytar, the paper’s other first author and a postdoc in the lab of MIT professor of electrical engineering and computer science Antonio Torralba. Torralba is the final co-author on the paper.

“We did an experiment with Carl,” Aytar says. “Carl was looking at the computer monitor, and I couldn’t see it. He would play a recording and I would try to guess what it was. It turns out this is really, really hard. I could tell indoor from outdoor, basic guesses, but when it comes to the details — ‘Is it a restaurant?’ — those details are missing. Even for annotation purposes, the task is really hard.”

Complementary modalities

Because it takes far less power to collect and process audio data than it does to collect and process visual data, the researchers envision that a sound-recognition system could be used to improve the context sensitivity of mobile devices.

When coupled with GPS data, for instance, a sound-recognition system could determine that a cellphone user is in a movie theater and that the movie has started, and the phone could automatically route calls to a prerecorded outgoing message. Similarly, sound recognition could improve the situational awareness of autonomous robots.

“For instance, think of a self-driving car,” Aytar says. “There’s an ambulance coming, and the car doesn’t see it. If it hears it, it can make future predictions for the ambulance — which path it’s going to take — just purely based on sound.”

Visual language

The researchers’ machine-learning system is a neural network, so called because its architecture loosely resembles that of the human brain. A neural net consists of processing nodes that, like individual neurons, can perform only rudimentary computations but are densely interconnected. Information — say, the pixel values of a digital image — is fed to the bottom layer of nodes, which processes it and feeds it to the next layer, which processes it and feeds it to the next layer, and so on. The training process continually modifies the settings of the individual nodes, until the output of the final layer reliably performs some classification of the data — say, identifying the objects in the image.
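The layered pass described above can be sketched in a few lines of NumPy. The layer sizes and random weights below are arbitrary toy values, not the researchers' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward network: made-up sizes, random (untrained) weights.
def layer(x, w, b):
    return np.maximum(0.0, x @ w + b)   # each node does a rudimentary computation

x = rng.normal(size=(1, 8))             # bottom-layer input, e.g. pixel values
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

h = layer(x, w1, b1)                    # first layer processes and feeds the next
logits = h @ w2 + b2                    # final layer scores 4 hypothetical classes
logits -= logits.max()                  # shift for numerical stability
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.shape)                      # (1, 4): one score per class
```

Training, which the sketch omits, is the process of adjusting `w1, b1, w2, b2` until the final layer's classification is reliable.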

Vondrick, Aytar, and Torralba first trained a neural net on two large, annotated sets of images: one, the ImageNet data set, contains labeled examples of images of 1,000 different objects; the other, the Places data set created by Torralba’s group, contains labeled images of 401 different scene types, such as a playground, bedroom, or conference room.

Once the network was trained, the researchers fed it 26 terabytes of video data downloaded from the photo-sharing site Flickr. “It’s about 2 million unique videos,” Vondrick says. “If you were to watch all of them back to back, it would take you about two years.” Then they trained a second neural network on the audio from the same videos. The second network’s goal was to correctly predict the object and scene tags produced by the first network.

The result was a network that could interpret natural sounds in terms of image categories. For instance, it might determine that the sound of birdsong tends to be associated with forest scenes and pictures of trees, birds, birdhouses, and bird feeders.

Benchmarking

To compare the sound-recognition network’s performance to that of its predecessors, however, the researchers needed a way to translate its language of images into the familiar language of sound names. So they trained a simple machine-learning system to associate the outputs of the sound-recognition network with a set of standard sound labels.

For that, the researchers did use a database of annotated audio — one with 50 categories of sound and about 2,000 examples. Those annotations had been supplied by humans. But it’s much easier to label 2,000 examples than to label 2 million. And the MIT researchers’ network, trained first on unlabeled video, significantly outperformed all previous networks trained solely on the 2,000 labeled examples.
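That final mapping step, a cheap supervised model fitted to a small labeled set on top of features learned without labels, can be illustrated with a toy stand-in. The 2-D "features," the three classes, and the nearest-centroid classifier here are all my own invented example, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for "network outputs": 2-D feature vectors for
# 3 sound classes, 20 human-labeled examples each.
centers = np.array([[0.0, 5.0], [5.0, 0.0], [-5.0, -5.0]])
X = np.vstack([c + rng.normal(scale=0.5, size=(20, 2)) for c in centers])
y = np.repeat([0, 1, 2], 20)

# The cheap supervised step: fit one centroid per labeled sound class.
centroids = np.array([X[y == k].mean(axis=0) for k in range(3)])

def predict(x):
    """Assign a feature vector the label of its nearest class centroid."""
    return int(np.argmin(((centroids - x) ** 2).sum(axis=1)))

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print(acc)  # 1.0 on this well-separated toy data
```

The design point is the one the article makes: the expensive representation comes from millions of unlabeled videos, so the supervised step on top needs only a couple of thousand labeled examples.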

“With the modern machine-learning approaches, like deep learning, you have many, many trainable parameters in many layers in your neural-network system,” says Mark Plumbley, a professor of signal processing at the University of Surrey. “That normally means that you have to have many, many examples to train that on. And we have seen that sometimes there’s not enough data to be able to use a deep-learning system without some other help. Here the advantage is that they are using large amounts of other video information to train the network and then doing an additional step where they specialize the network for this particular task. That approach is very promising because it leverages this existing information from another field.”

Plumbley says that both he and colleagues at other institutions have been involved in efforts to commercialize sound recognition software for applications such as home security, where it might, for instance, respond to the sound of breaking glass. Other uses might include eldercare, to identify potentially alarming deviations from ordinary sound patterns, or to control sound pollution in urban areas. “I really think that there’s a lot of potential in the sound-recognition area,” he says.

Read this article on MIT News.



How the brain recognizes faces

December 2, 2016

Larry Hardesty | MIT News

Machine-learning system spontaneously reproduces aspects of human neurology.

Illustration

Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines, has long thought that the brain must produce “invariant” representations of faces and other objects, meaning representations that are indifferent to objects’ orientation in space, their distance from the viewer, or their location in the visual field. Image: MIT News


MIT researchers and their colleagues have developed a new computational model of the human brain’s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.

The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation — say, 45 degrees from center — but not the direction — left or right.

This property wasn’t built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.

“This is not a proof that we understand what’s going on,” says Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines (CBMM), a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. “Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it’s strong evidence that we are on the right track.”

Indeed, the researchers’ new paper includes a mathematical proof that the particular type of machine-learning system they use, which was intended to offer what Poggio calls a “biologically plausible” model of the nervous system, will inevitably yield intermediary representations that are indifferent to angle of rotation.

Poggio, who is also a principal investigator at MIT’s McGovern Institute for Brain Research, is the senior author on a paper describing the new work, which appeared today in the journal Computational Biology. He’s joined on the paper by several other members of both the CBMM and the McGovern Institute: first author Joel Leibo, a researcher at Google DeepMind, who earned his PhD in brain and cognitive sciences from MIT with Poggio as his advisor; Qianli Liao, an MIT graduate student in electrical engineering and computer science; Fabio Anselmi, a postdoc in the IIT@MIT Laboratory for Computational and Statistical Learning, a joint venture of MIT and the Italian Institute of Technology; and Winrich Freiwald, an associate professor at the Rockefeller University.


Emergent properties

The new paper is “a nice illustration of what we want to do in [CBMM], which is this integration of machine learning and computer science on one hand, neurophysiology on the other, and aspects of human behavior,” Poggio says. “That means not only what algorithms does the brain use, but what are the circuits in the brain that implement these algorithms.”

Poggio has long believed that the brain must produce “invariant” representations of faces and other objects, meaning representations that are indifferent to objects’ orientation in space, their distance from the viewer, or their location in the visual field. Magnetic resonance scans of human and monkey brains suggested as much, but in 2010, Freiwald published a study describing the neuroanatomy of macaque monkeys’ face-recognition mechanism in much greater detail.

Freiwald showed that information from the monkey’s optic nerves passes through a series of brain locations, each of which is less sensitive to face orientation than the last. Neurons in the first region fire only in response to particular face orientations; neurons in the final region fire regardless of the face’s orientation — an invariant representation.

But neurons in an intermediate region appear to be “mirror symmetric”: That is, they’re sensitive to the angle of face rotation without respect to direction. In the first region, one cluster of neurons will fire if a face is rotated 45 degrees to the left, and a different cluster will fire if it’s rotated 45 degrees to the right. In the final region, the same cluster of neurons will fire whether the face is rotated 30 degrees, 45 degrees, 90 degrees, or anywhere in-between. But in the intermediate region, a particular cluster of neurons will fire if the face is rotated by 45 degrees in either direction, another if it’s rotated 30 degrees, and so on.
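The three response types described above can be caricatured numerically. The tuning curves below are hypothetical, with made-up widths and arbitrary units; they only illustrate the qualitative difference between orientation-selective, mirror-symmetric, and invariant cells:

```python
import numpy as np

angles = np.array([-90.0, -45.0, -30.0, 0.0, 30.0, 45.0, 90.0])

def first_region(theta):
    # Orientation-selective: fires only near +45 degrees of rotation.
    return np.exp(-((theta - 45.0) / 15.0) ** 2)

def middle_region(theta):
    # Mirror-symmetric: fires near 45 degrees of rotation in EITHER direction.
    return np.exp(-((np.abs(theta) - 45.0) / 15.0) ** 2)

def final_region(theta):
    # Invariant: responds the same at every orientation.
    return np.ones_like(theta)

print(middle_region(angles))  # equal peaks at -45 and +45
```

The middle curve responds identically at -45 and +45 degrees, which is exactly the mirror symmetry the intermediate region exhibits.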

This is the behavior that the researchers’ machine-learning system reproduced. “It was not a model that was trying to explain mirror symmetry,” Poggio says. “This model was trying to explain invariance, and in the process, there is this other property that pops out.”

Neural training

The researchers’ machine-learning system is a neural network, so called because it roughly approximates the architecture of the human brain. A neural network consists of very simple processing units, arranged into layers, that are densely connected to the processing units — or nodes — in the layers above and below. Data are fed into the bottom layer of the network, which processes them in some way and feeds them to the next layer, and so on. During training, the output of the top layer is correlated with some classification criterion — say, correctly determining whether a given image depicts a particular person.
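That layered pass can be sketched in a few lines. This is a toy feed-forward network with made-up layer sizes, not the researchers' actual architecture; it shows only the forward flow of data, with training omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer sizes: a 64-dimensional input flows up through two hidden
# layers to a 4-way output (e.g., one unit per candidate identity).
sizes = [64, 32, 16, 4]
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x):
    # Each layer transforms its input and feeds the result to the layer above.
    for W in weights:
        x = np.maximum(0.0, W @ x)  # simple rectifying nonlinearity
    return x

out = forward(rng.standard_normal(64))
print(out.shape)  # (4,)
```

During training, the weights would be adjusted so that the top layer's output satisfies the classification criterion.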

In earlier work, Poggio’s group had trained neural networks to produce invariant representations by, essentially, memorizing a representative set of orientations for just a handful of faces, which Poggio calls “templates.” When the network was presented with a new face, it would measure its difference from these templates. That difference would be smallest for the templates whose orientations were the same as that of the new face, and the output of their associated nodes would end up dominating the information signal by the time it reached the top layer. The measured difference between the new face and the stored faces gives the new face a kind of identifying signature.
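A minimal sketch of that template idea follows, with random feature vectors standing in for face images and min-pooling standing in for the "dominating" node outputs — both are illustrative simplifications, not the group's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical memory: 3 template faces, each stored at 5 orientations,
# every (face, orientation) view reduced to a 32-dim feature vector.
templates = rng.standard_normal((3, 5, 32))

def signature(new_face):
    # Distance from the new face to every stored view; keeping, for each
    # template, the minimum over orientations picks out the best-matching
    # view -- the one whose orientation matches the new face's.
    dists = np.linalg.norm(templates - new_face, axis=-1)  # shape (3, 5)
    return dists.min(axis=1)                               # shape (3,)

# A face identical to template 0 at orientation 2 scores zero against
# template 0, regardless of its orientation relative to the other views.
sig = signature(templates[0, 2])
print(sig[0])  # 0.0
```

Because the pooling runs over orientations, the resulting signature changes little as the input face rotates — the invariance the earlier model was built to produce.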

In experiments, this approach produced invariant representations: A face’s signature turned out to be roughly the same no matter its orientation. But the mechanism — memorizing templates — was not, Poggio says, biologically plausible.

So instead, the new network uses a variation on Hebb’s rule, which is often described in the neurological literature as “neurons that fire together wire together.” That means that during training, as the weights of the connections between nodes are being adjusted to produce more accurate outputs, nodes that react in concert to particular stimuli end up contributing more to the final output than nodes that react independently (or not at all).
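A toy sketch of Hebbian learning in this spirit uses Oja's normalized variant for a single linear neuron — an illustrative stand-in, not the paper's exact rule:

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated 2-D inputs: the two input channels tend to "fire together"
# along the direction (1, 1).
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# Oja's rule: Hebbian growth (y * x) plus a decay term that keeps the
# weights bounded. The weight vector converges to the input's top
# principal component -- the direction of strongest co-activation.
w = rng.standard_normal(2) * 0.1
eta = 0.005
for x in X:
    y = w @ x                     # postsynaptic response
    w += eta * y * (x - y * w)    # "fire together, wire together," normalized

print(w)  # close to +/-(0.707, 0.707), the leading eigenvector of cov
```

Inputs that co-vary end up with the largest weights, which is the sense in which nodes reacting in concert come to contribute more to the final output.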

This approach, too, ended up yielding invariant representations. But the middle layers of the network also duplicated the mirror-symmetric responses of the intermediate visual-processing regions of the primate brain.

“I think it’s a significant step forward,” says Christof Koch, president and chief scientific officer at the Allen Institute for Brain Science. “In this day and age, when everything is dominated by either big data or huge computer simulations, this shows you how a principled understanding of learning can explain some puzzling findings.”

“They’re very careful,” Koch adds. “They’re only looking at the feed-forward pathway — in other words, the first 80, 100 milliseconds. The monkey opens its eyes, and within 80 to 100 milliseconds, it can recognize a face and push a button signaling that. The question is what goes on in those 80 to 100 milliseconds, and the model that they have seems to explain that quite well.”

Read this article on MIT News.

Bold research visions recognized and rewarded


Alice McCarthy | MIT News

Two EECS faculty members and four more MIT faculty members to receive Professor Amar G. Bose Research Grants.

Group photo

Four proposals from six MIT faculty members, pictured here, have been awarded Professor Amar G. Bose Research Grants. Left to right: Angela Belcher, Amy Keating, Karl Berggren, Betar Gallant, Domitilla Del Vecchio, and Ron Weiss.


Since 2013, the Professor Amar G. Bose Research Grant has been supporting MIT faculty with big, bold, and unconventional research visions. In the latest round of grants, four proposals from six MIT faculty members — Angela Belcher, Betar Gallant, Amy Keating, Karl Berggren, Domitilla Del Vecchio, and Ron Weiss — were selected from more than 100 project submissions. The researchers aim to make groundbreaking advances in environmental bioremediation, cell reprogramming, new electrochemical reactions, and protein nanofabrication.

The researchers were honored at a Nov. 21 reception featuring past and current awardees, hosted by MIT President L. Rafael Reif.

Bose Grants are awarded to support innovative projects that may be unlikely to receive funding through traditional means but will offer fellows an exciting opportunity for exploration likely to benefit their fields of research. Grants provide up to $500,000 over three years for each selected project.

The grant program celebrates the legacy of the late Amar Bose, a longtime member of the MIT faculty and the founder of Bose Corporation, well known for his visionary and intellectually adventurous career. “My father would be very happy with the innovation and freedom of exploration that these grants have made possible as it was exactly what he was all about,” said his son Vanu Bose ’88, SM ’94, PhD ’99, at the reception. “The awards acknowledge the spirit of insatiable curiosity that my father embraced.”

“Through the Bose Research Grant program, which is now in its fourth year, we have a unique community of individuals synonymous with learning, teaching, exploration, and opportunity,” said President Reif. “MIT is about making a better world, and I cannot think of a better example of this than what the Bose research fellows and scholars are doing at MIT today.”

Reprogramming a cell’s fate

Using today’s technologies, researchers can reprogram cells of the body into stem cells capable of becoming any cell type. However, massive amounts of biochemical factors are required to force the change. And even when this reprogramming happens, less than 1 percent of the original cells actually make the full transformation to a stem cell. For more than 10 years, these issues have plagued practical applications of these induced stem cells in medicine.

Domitilla Del Vecchio, associate professor in the Department of Mechanical Engineering, and Ron Weiss, a professor in the departments of Biological Engineering and Electrical Engineering and Computer Science, propose a new technology that could substantially increase transformation efficiency while requiring smaller amounts of factors. “Our project is about changing how the reprogramming process works,” says Del Vecchio. The pair proposes a feedback strategy whereby the cell itself adds in the needed factors at different times along the transformative process. “We want to make a genetic circuit that can be inserted into the cell so that the cell automatically adjusts the level of needed factors,” she explains.

Adds Weiss, “Our approach is to push the cells to make the protein factors needed both to induce the change and to override the cells’ natural resistance, which essentially fights the change.”

Tool kit for novel protein nanofabrication

“We are looking for ways to combine two different fields — protein engineering and nanofabrication — to build a tool kit for arranging biomolecules in new ways on physical interfaces,” says Amy Keating, a professor of biology, explaining her project work with Karl Berggren, a professor of electrical engineering.

Keating, whose work centers on how proteins interact and function, will partner with Berggren, a nanofabrication and electrical engineering scientist, to explore new technologies for combining proteins with advanced silicon device surfaces. “We are hoping to find new ways of building very small-scale biological molecule complexes on surfaces,” she says. While living organisms naturally organize proteins and DNA into intricate pathways and complexes for a variety of functions, few engineering solutions are available now to provide that kind of design complexity.

Berggren and Keating hope to create giant biomolecular systems with the complexity of integrated circuits by leveraging their expertise in designing custom proteins with nanofabrication techniques. “We are looking at ways of making scaffolds that we can attach more complex molecules to, like sensors, or for driving biological interactions,” says Berggren. Though they are not themselves focused on a particular application, the researchers imagine possible uses for this work in the life sciences, materials sciences, and in computing.

For the full list of recipients, read this article on MIT News.

December 1, 2016


Community forum gives insight into how The Engine will run


Rob Matheson | MIT News

MIT leadership answers questions about the mission and features of the new accelerator.

engine logo


At a community forum on MIT’s new startup accelerator, The Engine, administrators discussed the new enterprise and fielded questions about its formation, components, mission, funding, mentor and equipment access, startup-selection process, and other issues.

The forum, held last night in Building 32, opened with remarks from MIT’s leadership who helped launch The Engine: MIT President L. Rafael Reif, professor and head of the Department of Electrical Engineering and Computer Science Anantha Chandrakasan, Provost Martin Schmidt, and Executive Vice President and Treasurer Israel Ruiz.

The floor then opened up to professors, students, alumni, and other MIT community members, who posed questions about The Engine’s selection process and components, and offered advice and potential opportunities for collaboration.

Announced in October, The Engine is a new venture aimed at supporting entrepreneurs pursuing transformative technologies that are capital- and time-intensive. The venture aims to provide those entrepreneurs hundreds of millions of dollars in funding, and make available hundreds of thousands of square feet of space in Kendall Square and nearby communities. A web-based app, called the Engine Room, will allow entrepreneurs to use or rent specialized resources from MIT, and participating companies and institutions, including office and conference spaces on and off campus, clean rooms, and other facilities and specialized equipment. The venture will also introduce entrepreneurs to peers, mentors, and established companies in innovation clusters across the region and around the world.

Within a day following the announcement, The Engine’s website received about a thousand inquiries from those wanting to become involved. In February, The Engine will host a forum for startups that have submitted or plan to submit applications to discuss the program and selection process in more detail. In the spring, The Engine program will formally be kicked off and the first startups will enter the accelerator.

Making The Engine

In his opening remarks, President Reif laid out key reasons why MIT launched The Engine. One is to provide “patient” capital to entrepreneurs developing transformative technologies, who sometimes have difficulty finding funding; another is to help keep startups in the region. A third reason, he added, is to set successful examples for aspiring entrepreneurs in developing transformative technologies.

“Many students at MIT that are interested in [pursuing] these ideas in science-based innovation see that ideas like theirs don’t make it through, because there’s no patient capital,” he said. “We’d like those ideas to have a path for success to the marketplace to inspire others that want to do something similar.”

Ruiz recalled the term “innovation orchard” from President Reif’s May 2015 op-ed in The Washington Post. The piece discussed how supporters of innovation from the public, for-profit, and nonprofit sectors could form coalitions to provide physical space, mentorship, and bridge funding to transformative startups, to support their transition from idea to investment.

In creating MIT’s “innovation orchard,” Ruiz said, the Institute considered several realities: transformative-technology startups at MIT didn’t have a lot of early-stage resources; venture capital funding was focusing increasingly on shorter timeframes, higher valuations, and higher expectations on returns; and most MIT and local entrepreneurs had to head to the West Coast to find support. “We wanted to think about how we could … say, ‘Yes, there are the same opportunities here at MIT,’” Ruiz said, referencing the final point.

Chandrakasan introduced the Engine Room app and discussed how The Engine will work collaboratively with MIT’s existing innovation programs to address issues with funding, scaling up, equipment usage, forming founding teams, and other matters.

In selecting startups, Chandrakasan added, The Engine’s evaluation committee will consist of experts from outside MIT to avoid any conflicts of interest. “It’s also worth pointing out that funding for this, the investors, will be those who are committed to patient capital and/or seeing regional growth,” he said.

Schmidt introduced several ad hoc working groups formed for The Engine, “which are going to be structured to look at some of the key pieces that are critical to the success of people from our campus community exercising and using The Engine and the Engine Room.”

The committees are: the Engine Advisory Committee, led by Chandrakasan and populated by the leaders of each other group; the Facilities Access Working Group, led by Martin Culpepper, a professor of mechanical engineering and the “maker czar” in MIT’s Department of Mechanical Engineering; the Technology Licensing Working Group, led by Tim Swager, the John D. MacArthur Professor of Chemistry; the Conflict of Interest Working Group, led by Klavs Jensen, a professor of chemical engineering; the Visas for Entrepreneurs Working Group, led by Dick Yue, the Philip J. Solondz Professor of Engineering; and MIT’s Innovation Ecosystem Working Group, led by MIT Innovation Initiative co-directors Fiona Murray, who is the William Porter (1967) Professor of Entrepreneurship at MIT Sloan School of Management, and Vladimir Bulovic, the Fariborz Maseeh (1990) Professor of Emerging Technology.

Addressing community needs

After remarks, Ruiz and Chandrakasan fielded questions from more than a dozen MIT community members, including entrepreneurs, students, professors, and representatives of organizations and startup incubators in the region.

Answers to many questions shed light on The Engine’s components and selection process. For example, digital resources, such as CAD and other software for hardware design, will most likely be offered through the equipment-sharing program; there are no current restrictions on the types of startups accepted into the accelerator; The Engine plans to begin talks about potential collaborations with local incubators, such as Greentown Labs; and coordinated events will connect engineering students with MIT Sloan students, other entrepreneurs, and mentors.

One commenter asked if The Engine would support clinical trials for medical-device startups, a key startup category along with biotechnology, robotics, manufacturing, and energy. Ruiz noted MIT has strong existing partnerships with Boston hospitals, which could help startups transition more easily into clinical trials. But, he added, the venture only aims to carry startups through early stages. “At some point, The Engine will not carry you through clinical trials, but we will facilitate early on [resources for] the development, which is the most crucial aspect,” Ruiz said.

An alumna from the MIT Media Lab who founded an education-hardware startup asked if there was a place in The Engine for startups that may not turn much profit, even in the long-run. Noting that the question is very important, Ruiz said, “There’s one word that drives MIT and that we want to translate into The Engine: ‘impact.’ The Engine would indeed be interested in, say, low-cost diagnostic technologies for developing countries, that don’t generate much profit. Certainly, we’re interested in social entrepreneurs with the opportunity to create a much wider impact.”

A few commenters offered advice and opportunities for collaboration with The Engine. MIT Sloan alumnus Peter Rothstein, now president of the Northeast Clean Energy Council, which works with startup incubators across the region, said his organization could discuss collaborating with The Engine on shared equipment, mentorship, and other resources. His recommendation for administrators was to form committees designated for facilitating partnerships with investors, industry, customers, and other incubators.

“Great ideas,” Chandrakasan replied. “That’s exactly the type of things we’re thinking about.”

Read this article on MIT News.

December 2, 2016


Smith awarded IEEE Noyce Medal


Department of Electrical Engineering and Computer Science

Henry Smith, professor emeritus of electrical engineering, recognized for contributions to field of nanofabrication.

Henry Smith

Photo courtesy of RLE/Nathan Fiske


Henry I. Smith, professor emeritus of electrical engineering, has been awarded the 2017 IEEE Robert N. Noyce Medal in recognition of his “contributions to lithography and nanopatterning through experimental advances in short-wavelength exposure systems and attenuated phase-shift masks."

The Noyce Medal honors exceptional contributions to the microelectronics industry and was established in 1999 in honor of Robert N. Noyce, co-founder of Intel Corporation and co-inventor of the integrated circuit. Recipients are judged on the basis of their leadership in the field, research contributions, originality, breadth, and inventive value, among other criteria.

Smith is known for a number of innovations in nanoscale science and engineering, including x-ray lithography, the attenuating phase shifter, interference lithography, immersion photolithography, zone-plate-array lithography, graphoepitaxy, and a variety of quantum-effect, short-channel, single-electron, and microphotonic devices.

Smith founded the NanoStructures Laboratory (NSL) in the Research Laboratory of Electronics and was an affiliate of the Microsystems Technology Laboratories. He held the Joseph F. and Nancy P. Keithley Chair from 1990 to 2005. He is a Fellow of the American Academy of Arts and Sciences, the National Academy of Inventors, the International Society for Nanomanufacturing, the IEEE, and the OSA, and a member of the National Academy of Engineering.

“Hank’s work with colleagues in the NSL has made a lasting impact in the field of nanofabrication,” said Anantha Chandrakasan, head of the Department of Electrical Engineering and Computer Science. “The Robert N. Noyce Medal is well-deserved recognition of his many contributions to the microelectronics industry.”

December 2, 2016

