Graphene to the rescue - Englund leads group to develop optoelectronic computer chips

http://web.mit.edu/newsoffice/2013/graphene-could-yield-cheaper-optical-chips-0915.html

Optoelectronic computer chips that will use light to move data rather than electricity are much closer to reality thanks to the clever application of graphene (on silicon) by researchers from MIT, Columbia University and IBM's T.J. Watson Research Center under the direction of Dirk Englund, the Jamieson Career Development Assistant Professor of Electrical Engineering and Computer Science at MIT. The work, titled "Silicon photonics: Graphene benefits," appears in Nature Photonics, Sept. 15, 2013. Englund, a member of the Research Laboratory of Electronics (RLE) and the Microsystems Technology Laboratories (MTL), also directs the Quantum Photonics Laboratory. [Image: In a new graphene-on-silicon photodetector, electrodes (gold) are deposited, slightly asymmetrically, on either side of a silicon waveguide (purple). The asymmetry causes electrons kicked free by incoming light to escape the layer of graphene (hexagons) as an electrical current. Graphic courtesy of the researchers.]

Read more in the Sept. 16, 2013 MIT News Office article by Larry Hardesty titled "Graphene could yield cheaper optical chips - Researchers show that graphene — atom-thick sheets of carbon — could be used in photodetectors, devices that translate optical signals to electrical," also posted below.


Graphene — which consists of atom-thick sheets of carbon atoms arranged hexagonally — is the new wonder material: Flexible, lightweight and incredibly conductive electrically, it’s also the strongest material known to man.

In the latest issue of Nature Photonics, researchers at MIT, Columbia University and IBM’s T. J. Watson Research Center describe a promising new application of graphene, in the photodetectors that would convert optical signals to electrical signals in integrated optoelectronic computer chips. Using light rather than electricity to move data both within and between computer chips could drastically reduce their power consumption and heat production, problems that loom ever larger as chips’ computational capacity increases.

Optoelectronic devices built from graphene could be much simpler in design than those made from other materials. If a method for efficiently depositing layers of graphene — a major area of research in materials science — can be found, it could ultimately lead to optoelectronic chips that are simpler and cheaper to manufacture.

“Another advantage, besides the possibility of making device fabrication simpler, is that the high mobility and ultrahigh carrier-saturation velocity of electrons in graphene makes for very fast detectors and modulators,” says Dirk Englund, the Jamieson Career Development Assistant Professor of Electrical Engineering and Computer Science at MIT, who led the new research.

Graphene is also responsive to a wider range of light frequencies than the materials typically used in photodetectors, so graphene-based optoelectronic chips could conceivably use a broader-band optical signal, enabling them to move data more efficiently. “A two-micron photon just flies straight through a germanium photodetector,” Englund says, “but it is absorbed and leads to measurable current — as we actually show in the paper — in graphene.”

Unbiased account

As Englund explains, the problem with graphene as a photodetector has traditionally been its low responsivity: A sheet of graphene will convert only about 2 percent of the light passing through it into an electrical current. That’s actually quite high for a material only an atom thick, but it’s still too low to be useful.

When light strikes a photoelectric material like germanium or graphene, it kicks electrons orbiting atoms of the material into a higher energy state, where they’re free to flow in an electrical current. If they don’t immediately begin to move, however, they’ll usually drop back down into the lower energy state. So one standard trick for increasing a photodetector’s responsivity is to “bias” it — to apply a voltage across it that causes the electrons to flow before they lose energy.

The problem is that the voltage will inevitably induce a slight background current that adds “noise” to the detector’s readings, making them less reliable. So Englund, his student Ren-Jye Shiue, Columbia’s Xuetao Gan — who, together with Shiue, is lead author on the paper — and their collaborators instead used a photodetector design developed by Fengnian Xia and his colleagues at IBM, which produces a slight bias without the application of a voltage.

In the new design, light enters the detector through a silicon channel — a “waveguide” — etched into the surface of a chip. The layer of graphene is deposited on top of and perpendicular to the waveguide. On either side of the graphene layer is a gold electrode. But the electrodes’ placement is asymmetrical: One of them is closer to the waveguide than the other.

“There’s a mismatch between the energy of electrons in the metal contact and in graphene,” Englund says, “and this creates an electric field near the electrode.” When electrons are kicked up by photons in the waveguide, the electric field pulls them to the electrode, creating a current.

Hot topic

In experiments, the researchers found that, unbiased, their detector would generate 16 milliamps of current for each watt of incoming light. Its detection frequency was 20 gigahertz — already competitive with germanium. (Some experimental germanium photodetectors have achieved higher speeds, but only when biased.) With the application of a slight bias, the detector could get up to 100 milliamps per watt, a responsivity commensurate with that of germanium.
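
For readers unused to responsivity figures, the numbers above translate directly into photocurrent per unit of optical power. A minimal back-of-envelope sketch in Python, assuming a hypothetical 1 mW of light reaching the detector (the power level is illustrative, not taken from the paper):

```python
# Photocurrent = responsivity x optical power (a rough, idealized relation).
responsivity_unbiased = 0.016   # A/W, i.e. 16 milliamps per watt (reported above)
responsivity_biased = 0.100     # A/W, with a slight bias applied
optical_power = 1e-3            # W, a hypothetical 1 mW guided signal

print(responsivity_unbiased * optical_power * 1e6, "microamps")  # 16.0
print(responsivity_biased * optical_power * 1e6, "microamps")    # 100.0
```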

Englund is confident that better engineering — thinner electrodes, or a narrower waveguide — could yield a photodetector whose responsivity is even higher. “It’s a matter of engineering,” he says. “We are already testing some new tricks to get another factor of two or four.”

“I think it’s great work,” says Thomas Mueller, an assistant professor at the Vienna University of Technology’s Photonics Institute. “The main drawback of graphene photodetectors was always their low responsivity. Now they have two orders of magnitude higher responsivity, which is really great.”

“The other thing that I like very much is the integration with a silicon chip,” Mueller adds, “which really shows that, in the end, you’ll be able to integrate graphene into computer chips to realize optical links and things like that.”

In fact, the same issue of Nature Photonics also features a paper by Mueller and colleagues, reporting work very similar to that conducted by Englund and his team. “We did not know that we were doing the same thing,” Mueller says. “But I’m very happy that two papers are coming out in the same journal on the same topic, which shows that it’s an important thing, I think.”

The chief difference between the two groups’ work, Mueller says, is that “we used slightly different geometry.” But, he adds, “Honestly, I think that Dirk’s geometry is more practical. We were also thinking about the same thing, but we didn’t have the technical capabilities to do this. There’s one process that they do that we were not able to do.”

September 16, 2013

MITx offers new XSeries - course sequences starting fall 2013

MITx has announced two new online learning opportunities that will provide sequences of related courses culminating in course-sequence certificates. One of these 'XSeries' sequences, titled "Foundations of Computer Science," has been developed by faculty members in the Electrical Engineering and Computer Science Department at MIT. The two-year sequence begins this fall (2013) and will offer certificates (not credit) on completion.

Read more in the Sept. 18, 2013 MIT News Office article by Steve Carson of the MIT Office of Digital Learning, titled "MITx introduces 'XSeries' course-sequence certificates on edX; edX also introduces new ID verification service," also posted below.


MITx, the massive open online course (MOOC) effort at MIT, has announced new certificates for completion of sequences of related modules or courses on the edX platform. The sequences, called “XSeries,” represent a new approach to MOOC instruction and certification across integrated offerings more expansive than the individual courses that have thus far defined the MOOC landscape.

The two initial XSeries sequences are “Foundations of Computer Science” and “Supply Chain and Logistics Management.” Curriculum for each XSeries is developed by MIT faculty members and overseen by their academic departments.

“These sequences are an opportunity for MIT to both explore how subjects can be addressed in depth through the MOOC format and to better understand student interest in various types of certification,” said Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering at MIT and head of MIT’s Department of Electrical Engineering and Computer Science (EECS). “XSeries sequences allow our departments to re-imagine the building blocks that structure teaching in our disciplines for the digital environment.”

Each XSeries will cover content equivalent to two to four traditional residential courses and take between six months and two years to complete. In a break from previous MITx offerings, the “Foundations of Computer Science” and “Supply Chain and Logistics Management” XSeries sequences are composed of shorter, more targeted modules without direct equivalents among MIT’s residential courses.

“We are no longer constrained to structure course material in 14-week units to fit the academic semester,” said Chris Terman, a senior lecturer in EECS and part of the instructional team for the "Foundations of Computer Science" XSeries. “We can split the material into more approachable modules, each focused on key concepts of computer science and computational thinking, and assemble those modules into new programs intended for a larger audience.”

The first module of the “Foundations of Computer Science” XSeries will begin this fall, and the “Supply Chain and Logistics Management” XSeries will start in fall 2014. As part of the pilot, the initial XSeries sequences are pitched at different student levels: The “Foundations of Computer Science” XSeries is designed at the introductory undergraduate level, and the “Supply Chain and Logistics Management” XSeries has been developed at the graduate level for learners seeking to work professionally in the field. The programs will offer certificates of achievement but not academic credit.

“We’re hoping to understand more about the credentials that learners value,” said Chris Caplice, executive director of MIT’s Center for Transportation and Logistics (CTL), who, together with CTL faculty members, is developing the “Supply Chain and Logistics Management” XSeries. “We hope that learners and employers will ultimately find the ‘Supply Chain and Logistics Management’ XSeries certificate to be valuable in signaling meaningful professional development, but we are in the early stages of exploring these kinds of programs.”

Starting in spring 2014, the XSeries sequences will use edX’s new ID verification process, providing the added value of identity assurance for the certificates. This new edX functionality uses webcam photos to confirm student identity, provides linkable online certificates and requires a modest fee. Prices for XSeries courses will be announced later this fall; students will also have the option of auditing the sequences for free. EdX is piloting ID verification on three standalone courses this fall, including 6.002x (Circuits and Electronics) from MITx, and two courses from BerkeleyX: 169.1x (Software as a Service) and Stat 2.1x (Intro to Statistics: Descriptive Statistics). These courses will continue to be offered with an honor-code certificate option as well.

Related:

MIT News, June 24, 2013: "MIT's Academic Media Production Services joins the Office of Digital Learning"

MIT News, July 1, 2013: "OCW, MITx provide Myanmar student rigorous, high-quality educational opportunities"

September 18, 2013

Teaching computers to see — by learning to see like computers

In an effort to understand how today's object-recognition systems fall short of their desired goal, Antonio Torralba, EECS associate professor, and members of his group in the Computer Science and Artificial Intelligence Lab (CSAIL) have created a system that allows humans to see the world the way an object-recognition system does, and that should ultimately become a tool for researchers to improve current recognition systems. Torralba and his students, including first author and EECS graduate student Carl Vondrick, will present their work, titled "Inverting and Visualizing Features for Object Detection," at an upcoming International Conference on Computer Vision.

Read more in the Sept. 19, 2013 MIT News Office article by Larry Hardesty titled "Teaching computers to see — by learning to see like computers - By translating images into the language spoken by object-recognition systems, then translating them back, researchers hope to explain the systems’ failures," also posted below.

[Image left: Given the raw images of the photos in color, today's state-of-the-art object-detection algorithms make errors — such as identifying a car (above) — that initially seem baffling. A new technique enables the visualization of a common mathematical representation of images (in black and white), which should help researchers understand why their algorithms fail. Images courtesy of the researchers/MIT News Office.]


Object-recognition systems — software that tries to identify objects in digital images — typically rely on machine learning. They comb through databases of previously labeled images and look for combinations of visual features that seem to correlate with particular objects. Then, when presented with a new image, they try to determine whether it contains one of the previously identified combinations of features.

Even the best object-recognition systems, however, succeed only around 30 or 40 percent of the time — and their failures can be totally mystifying. Researchers are divided in their explanations: Are the learning algorithms themselves to blame? Or are they being applied to the wrong types of features? Or — the “big-data” explanation — do the systems just need more training data?

To attempt to answer these and related questions, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have created a system that, in effect, allows humans to see the world the way an object-recognition system does. The system takes an ordinary image, translates it into the mathematical representation used by an object-recognition system and then, using inventive new algorithms, translates it back into a conventional image.

In a paper to be presented at the upcoming International Conference on Computer Vision, the researchers report that, when presented with the retranslation of a translation, human volunteers make classification errors that are very similar to those made by computers. That suggests that the learning algorithms are just fine, and throwing more data at the problem won’t help; it’s the choice of features that’s the culprit. The researchers are hopeful that, in addition to identifying the problem, their system will also help solve it, by letting their colleagues reason more intuitively about the consequences of particular feature decisions.

Whole HOG

Today, the feature set most widely used in object-detection research is called the histogram of oriented gradients, or HOG (hence the name of the MIT researchers’ system: HOGgles). HOG first breaks an image into square chunks, usually eight pixels by eight pixels. Then, for each square, it identifies a “gradient,” or change in color or shade from one region to another. It characterizes the gradient according to 32 distinct variables, such as its orientation — vertical, horizontal or diagonal, for example — and the sharpness of the transition — whether it changes color suddenly or gradually.

Thirty-two variables for each square translates to thousands of variables for a single image, which define a space with thousands of dimensions. Any conceivable image can be characterized as a single point in that space, and most object-recognition systems try to identify patterns in the collections of points that correspond with particular objects.
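
To make the cell-and-gradient construction above concrete, here is a deliberately simplified sketch in Python (using NumPy) that computes one orientation histogram per 8-by-8 cell. It uses 9 orientation bins per cell rather than the 32 variables of the descriptor described above, which adds normalization and extra channels; the point is only to show how an image becomes a grid of gradient statistics whose total dimensionality quickly reaches the thousands.

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """One orientation histogram per cell: a stripped-down stand-in for the
    32-variable-per-cell descriptor described above."""
    gy, gx = np.gradient(img.astype(float))       # vertical and horizontal gradients
    mag = np.hypot(gx, gy)                        # gradient strength
    ang = np.arctan2(gy, gx) % np.pi              # unsigned orientation in [0, pi)
    h, w = img.shape
    rows, cols = h // cell, w // cell
    feats = np.zeros((rows, cols, bins))
    for i in range(rows):
        for j in range(cols):
            block = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = (ang[block] / np.pi * bins).astype(int).clip(0, bins - 1)
            np.add.at(feats[i, j], idx.ravel(), mag[block].ravel())
    return feats                                  # shape: (rows, cols, bins)

# Scale of the feature space: a 640x480 image has 80x60 cells; at 32 variables
# per cell, that is 80 * 60 * 32 = 153,600 numbers describing a single image.
```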

“This feature space, HOG, is very complex,” says Carl Vondrick, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “A bunch of researchers sat down and tried to engineer, ‘What’s the best feature space we can have?’ It’s very high-dimensional. It’s almost impossible for a human to comprehend intuitively what’s going on. So what we’ve done is built a way to visualize this space.”

Vondrick; his advisor, Antonio Torralba, an associate professor of electrical engineering and computer science; and two other researchers in Torralba’s group, graduate student Aditya Khosla and postdoc Tomasz Malisiewicz, experimented with several different algorithms for converting points in HOG space back into ordinary images. One of those algorithms, which didn’t turn out to be the most reliable, nonetheless offers a fairly intuitive understanding of the process.

The algorithm first produces a HOG for an image and then scours a database for images that match it — on a very weak understanding of the word “match.”

“Because it’s a weak detector, you won’t find very good matches,” Vondrick explains. “But if you average all the top ones together, you actually get a fairly good reconstruction. Even though each detection is wrong, each one still captures the statistics of the original image patch.”
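
A hedged sketch of that averaging idea in Python: the function names, the scoring rule (a normalized dot product) and the patch database are illustrative assumptions, not the authors' exact pipeline, but the structure is the one Vondrick describes above: match the query HOG weakly against many natural image patches, then average the best matches.

```python
import numpy as np

def reconstruct_by_averaging(query_hog, patches, hog_fn, k=20):
    """Approximate the image behind `query_hog` by averaging the k patches
    whose HOG features match it best (a weak, illustrative matcher)."""
    scores = []
    for patch in patches:                         # patches: list of same-size 2-D arrays
        h = hog_fn(patch).ravel()                 # hog_fn could be the hog_cells sketch above
        scores.append(float(query_hog.ravel() @ h) / (np.linalg.norm(h) + 1e-8))
    best = np.argsort(scores)[-k:]                # indices of the k highest-scoring patches
    return np.mean([patches[i] for i in best], axis=0)
```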

Dictionary definition

The reconstruction algorithm that ended up proving the most reliable is more complex. It uses a so-called “dictionary,” a technique that’s increasingly popular in computer-vision research. The dictionary consists of a large group of HOGs with fairly regular properties: One, for instance, might have a top half that’s all diagonal gradients running bottom left to upper right, while the bottom half is all horizontal gradients; another might have gradients that rotate slowly as you move from left to right across each row of squares. But any given HOG can be represented as a weighted combination of these dictionary “atoms.”

The researchers’ algorithm assembled the dictionary by analyzing thousands of images downloaded from the Internet and settled on the dictionary that allowed it to reconstruct the HOG for each of them with, on average, the fewest atoms. The trick is that, for each atom in the dictionary, the algorithm also learned the ordinary image that corresponds to it. So for an arbitrary HOG, it can apply the same weights to the ordinary images that it does to the dictionary atoms, producing a composite image.
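
In code, the paired-dictionary idea reduces to a few lines. The sketch below assumes two jointly learned matrices, D_hog (atoms by HOG dimensions) and D_img (atoms by pixels), whose rows correspond to the same visual structures; these names, and the use of plain least squares in place of the sparse solver implied above, are simplifications for illustration.

```python
import numpy as np

def invert_hog(hog_vec, D_hog, D_img):
    """Express `hog_vec` as a weighted combination of HOG atoms, then apply
    the same weights to the paired image atoms to form a composite image."""
    # Solve D_hog.T @ w ~= hog_vec for the atom weights w (least squares here;
    # the actual method favors a sparse combination of atoms).
    w, *_ = np.linalg.lstsq(D_hog.T, hog_vec, rcond=None)
    return w @ D_img   # weighted sum of image atoms: the reconstructed image (flattened)
```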

Those composites are quite striking. What appears to be a blurry image of a woman sitting at a vanity mirror, for instance, turns out to be a reconstruction of the HOG produced by a photo of an airplane sailing over a forest canopy. And, indeed, a standard object-recognition system will, erroneously, identify a person in the image of the plane. It’s a mistake that’s baffling without the elucidation offered by the HOGgles.

To quantify the intuition that, given the representations of images in HOG space, object detectors’ false positives are not as bizarre as they initially seem, the MIT researchers presented collections of their HOG reconstructions to volunteers recruited through Amazon’s Mechanical Turk crowdsourcing service. The volunteers were slightly better than machine-learning algorithms at identifying the objects depicted in the reconstructions, but only slightly — nowhere near the disparity of 60 or 70 percent when object detectors and humans are asked to identify objects in the raw images. And the dropoff in accuracy as the volunteers moved from the easiest cases to the more difficult ones mirrored that of the object detectors.

Building intuitions

“One of the beauties of our field is that, unlike something like statistics or some kind of financial data, you can see what you’re working on,” says Alexei Efros, an associate professor of computer science and electrical engineering at the University of California at Berkeley who works on computer vision. “I think having large-scale data in computer vision is a very important phenomenon, but a negative side product of this has been that the new students, the new researchers … don’t look at the pixels anymore. They’re so overwhelmed with the data, there are so many images, that they’re just treating it as if it were stock market data, or biosequence data, or any kind of other data. They’re just looking at graphs and curves and spreadsheets and tables.”

The MIT researchers’ work could be a corrective to that trend, Efros says. “I think that is what appeals to me,” he says. “It’s breaking the tide of students not looking at images.”

Efros adds that, in a more direct way, HOGgles could be a useful research tool. “If you’re looking to do some task, and you’re using this [HOG] descriptor, and it doesn’t work, then before, you basically just stared at your code and you stared at the numbers and you thought, ‘I have no idea,’” he says. “Now you can really just invert the data and at least look to see whether the computer even had any chance.”

“But it’s not just a tool for getting better descriptors,” he adds. “It’s a tool for building up intuitions.”

September 23, 2013

Heldt, Sze and Vaikuntanathan join EECS Faculty

Meet the newest members of the MIT Electrical Engineering and Computer Science faculty — Thomas Heldt, Assistant Professor of Electrical and Biomedical Engineering, Vivienne Sze, Assistant Professor of Electrical Engineering, and Vinod Vaikuntanathan, Assistant Professor of Computer Science.


THOMAS HELDT

Thomas Heldt joined the EECS Department in July 2013 as Assistant Professor of Electrical and Biomedical Engineering. He was also appointed to MIT's new Institute for Medical Engineering and Science, where he holds the Hermann von Helmholtz Career Development Professorship. Thomas studied physics at Johannes Gutenberg University, Germany, at Yale University and at MIT. In 2004, he received the PhD degree in Medical Physics from MIT's Division of Health Sciences and Technology and commenced postdoctoral training at MIT's Laboratory for Electromagnetic and Electronic Systems. Prior to joining the faculty, Thomas was a Principal Research Scientist with MIT's Research Laboratory of Electronics, where he co-founded and co-directed (with Prof. George Verghese) the Computational Physiology and Clinical Inference Group.

Thomas's research interests focus on signal processing, mathematical modeling, and model identification to support real-time clinical decision making, monitoring of disease progression, and titration of therapy, primarily in neurocritical and neonatal critical care. In particular, Thomas is interested in developing a mechanistic understanding of physiologic systems, and in formulating appropriately chosen computational physiologic models for improved patient care. His research is conducted in close collaboration with colleagues at MIT and clinicians from Boston-area hospitals.

Thomas has been active in teaching Quantitative Physiology (with Prof. Roger Mark) and is currently co-teaching Cellular Biophysics and Neurophysiology (with Prof. Jay Han). He is looking forward to developing a course on Physiological Systems Modeling and Identification that puts equal emphasis on formulating mechanistic mathematical models of physiological systems, and on using these models to interpret clinical and physiological data.


VIVIENNE SZE

Vivienne Sze joined the EECS Department in August 2013 as an Assistant Professor and a member of RLE and MTL. She received the B.A.Sc. degree in Electrical Engineering from the University of Toronto in 2004, and the S.M. and Ph.D. degrees in Electrical Engineering and Computer Science from MIT in 2006 and 2010, respectively. From September 2010 to July 2013, she was a Member of Technical Staff in the Systems and Applications R&D Center at Texas Instruments, where she designed low-power algorithms and architectures for video coding.

Her research focuses on pushing power and performance limits through the joint design of algorithms, architectures and circuits to build energy-efficient, high-performance systems for portable multimedia applications. Her work on implementation-friendly video compression algorithms was used in the development of the latest video coding standard, HEVC/H.265.

She has received various awards for academic achievement, including the Jin-Au Kong Outstanding Doctoral Thesis Prize in 2011, the 2008 A-SSCC Outstanding Design Award, the 2007 DAC/ISSCC Student Design Contest Award, the Natural Sciences and Engineering Research Council of Canada (NSERC) Julie Payette fellowship in 2004, the NSERC Postgraduate Scholarships in 2005 and 2007, and the Texas Instruments Graduate Woman's Fellowship for Leadership in Microelectronics in 2008. In 2012, she was selected by IEEE-USA as one of the "New Faces of Engineering."


VINOD VAIKUNTANATHAN

Vinod Vaikuntanathan joined the EECS Department as an Assistant Professor of Computer Science in September 2013. No stranger to MIT, Vinod earned his SM and PhD in Computer Science in 2005 and 2008, respectively.

Following graduation from MIT, Vinod was the Josef Raviv Postdoctoral Fellow at IBM T.J. Watson Research Center (2008–2010), a Researcher in the Cryptography group at Microsoft Research, Redmond (2010–2011), and an Assistant Professor in the Computer Science department at the University of Toronto (2011–2013).

Prof. Vaikuntanathan's research is in the area of theoretical computer science, where he studies information-theoretic and computational techniques to achieve authenticity and privacy in computation and communication. In particular, he is interested in techniques for computing on encrypted data and programs, including fully homomorphic encryption, functional encryption and program obfuscation. He also develops new mathematical tools in cryptography, drawing from the theory of integer lattices. Prof. Vaikuntanathan was recently cited by the MIT News Office for his work with Prof. Shafi Goldwasser and Prof. Nickolai Zeldovich on securing the cloud while letting web servers process data without decrypting it, work that they presented at the Association for Computing Machinery's 45th Symposium on the Theory of Computing in early June.

Vinod Vaikuntanathan is a recipient of the George M. Sprowls PhD thesis award from the MIT EECS Department, an Alfred P. Sloan research fellowship and a University of Toronto Connaught New Researcher award.

September 23, 2013

Dina Katabi selected as MacArthur Fellow

Dina Katabi, professor in the MIT EECS Department, principal investigator in the Computer Science and Artificial Intelligence Lab (CSAIL) and co-director of Wireless@MIT, has been selected as a 2013-14 MacArthur Fellow. She is cited by the MacArthur Fellows Program for her work "at the interface of computer science and electrical engineering to improve the speed, reliability, and security of data exchange. Katabi has contributed to a range of networking issues, from protocols to minimize congestion in high-bandwidth networks to algorithms for spectrum analysis, though most of her work centers on wireless data transmission." She is noted, along with six of the 24 Fellows in the 2013 MacArthur class, for pioneering insights, in her case into the reliability and security of wireless networks.

The MacArthur Fellows Program awards unrestricted fellowships to talented individuals who have shown extraordinary originality and dedication in their creative pursuits and a marked capacity for self-direction. Between June 1981 and September 2012, 897 Fellows were named.

Read the article posted with the 2013 MacArthur Fellows Sept. 24 announcement below.


Dina Katabi is a communications researcher working at the interface of computer science and electrical engineering to improve the speed, reliability, and security of data exchange. Katabi has contributed to a range of networking issues, from protocols to minimize congestion in high-bandwidth networks to algorithms for spectrum analysis, though most of her work centers on wireless data transmission.

In WiFi (802.11) networks, it is common for two devices to send packets of information nearly simultaneously, resulting in partial data loss and rejection of both packets, a process that is repeated until each packet is transmitted without interference. Katabi and colleagues developed a “ZigZag” algorithm that reconstructs the contents of the collided packets by combining the usable fragments from each, thereby reducing the retransmission rates significantly. Additionally, while WiFi signals are typically thought of as communication signals, Katabi and her students have shown that they can be used to track the movements of humans, even if they are in a closed room or behind a wall. This technology can also be used to send commands to a computer via a person’s gestures as the signals reflect off of the person’s body.
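
The paragraph above compresses a clever idea: two successive collisions of the same pair of packets almost never line up with the same offset, so the interference-free start of one collision bootstraps the decoding, and each recovered chunk can then be subtracted from the other collision to expose the next chunk. The toy Python sketch below works on idealized, noise-free symbol streams; real ZigZag receivers operate on radio samples with channel estimation, and the function names and symbol-level model here are illustrative assumptions, not Katabi's implementation.

```python
import numpy as np

def collide(a, b, offset):
    """Superpose packet a with packet b delayed by `offset` symbols (ideal, noise-free)."""
    y = np.zeros(len(a) + offset)
    y[:len(a)] += a
    y[offset:offset + len(b)] += b
    return y

def zigzag_decode(y1, d1, y2, d2, n):
    """Recover two length-n packets from two collisions with different offsets d1 != d2."""
    a = np.full(n, np.nan)        # NaN marks symbols not yet recovered
    b = np.full(n, np.nan)
    progress = True
    while progress:               # keep sweeping until no new symbol is recovered
        progress = False
        for y, d in ((y1, d1), (y2, d2)):
            for t in range(len(y)):
                ia, ib = t, t - d                       # packet indices overlapping at time t
                va = a[ia] if 0 <= ia < n else 0.0      # out-of-range symbols contribute 0
                vb = b[ib] if 0 <= ib < n else 0.0
                if 0 <= ia < n and np.isnan(a[ia]) and not np.isnan(vb):
                    a[ia] = y[t] - vb                   # subtract the known b symbol
                    progress = True
                elif 0 <= ib < n and np.isnan(b[ib]) and not np.isnan(va):
                    b[ib] = y[t] - va                   # subtract the known a symbol
                    progress = True
    return a, b

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 64).astype(float)                # two random bit-like packets
b = rng.integers(0, 2, 64).astype(float)
ra, rb = zigzag_decode(collide(a, b, 3), 3, collide(a, b, 7), 7, 64)
assert np.allclose(ra, a) and np.allclose(rb, b)        # both packets fully recovered
```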

Because 802.11 networks are radio broadcasts, their signals are vulnerable to interception and manipulation by nefarious third parties. Katabi designed a method that uses random wireless signals to protect low-power devices during the exchange of encryption keys and make it impossible for intermediaries to insert themselves undetected (“man-in-the-middle” attack) in the data stream. For safety reasons, some wireless devices need to transmit unencrypted data—for example, pacemakers—which makes them sensitive to malevolent interference. She and her colleagues are designing wearable devices that protect pacemakers against unwanted manipulation while allowing medical personnel emergency access without security codes.

Additional projects, such as showing the potential of radio interference to increase bandwidth and developing data protocols that address network congestion, demonstrate Katabi’s ability to translate long-recognized theoretical advances into practical solutions that could be deployed in the real world. Through her numerous contributions, Katabi has become a leader in accelerating our capacity to communicate high volumes of information securely without restricting mobility.

Dina Katabi received a B.S. (1995) from Damascus University and an M.S. (1999) and Ph.D. (2003) from the Massachusetts Institute of Technology. She joined the faculty of MIT in 2003, where she is currently a professor in the Department of Electrical Engineering and Computer Science, director of the MIT Center for Wireless Networks and Mobile Computing (Wireless@MIT), and a member of the Computer Science and Artificial Intelligence Laboratory, where she leads the Networks at MIT group (NETMIT).

 

September 25, 2013

TEDxCambridge features Manolis Kellis for a glimpse of the Genomics Revolution's effects on medicine

SuperUROP raises bar for undergraduate research and innovation

[Photos from the Sept. 26 kickoff reception:
SuperUROP students Chase Lambert and Abdul Alfozan check the new SuperUROP brochure at the kickoff reception held for the students, their industry mentors, faculty advisors and guests on Sept. 26 in the MIT Stata R&D area.
SuperUROP students David Couto and Rishi Patel are two of the 80 students who gathered for the Sept. 26 reception following the SuperUROP seminar class, 6.UAR, which covers topics in undergraduate advanced research.
Stacy Ho (left), Senior Technical Manager at MediaTek, talks with EECS Professor Hae-Seung Lee and his advisee, SuperUROP student Yonglin Wu.
SuperUROP student Xinyi Zhang discusses her project with her Amazon mentors Wei Jing and Ramez Nachman.
CSAIL Research Scientist Boris Katz talks with his SuperUROP advisee Alvaro Morales.
SuperUROP students Erika Ye and Neil Fitzgerald (foreground, from left) talk with Jim Fiorenza, Chief Scientist, GaN, at Analog Devices.
The crowd at the SuperUROP kickoff reception gathers for brief remarks by event speakers.
Anantha Chandrakasan gives the opening remarks and introduces MIT president emerita Susan Hockfield, followed by inaugural SuperUROP graduate Gustavo Goretkin and Ray Stata '57, SM '58, Chairman and Co-Founder of Analog Devices, Inc.
The teaching staff for the SuperUROP class 6.UAR, including (from left) Anantha Chandrakasan, Jesika Haria and Nivedita Chandrasekaran, with Susan Hockfield, Aakanksha Sarda, and EECS professor and MIT Dean for Undergraduate Education Dennis Freeman.
SuperUROP students gather with Susan Hockfield for a group photo: (front, from left) Christy Swartz, Harshini Jayarama, Qui Nguyen and Michelle Chen; (standing, from left) Angela Zhang, Chelsea Finn, Susan Hockfield, Jennifer Liu and Anvisha Pai; (back row, from left) Francis Chen and Benoit Landry.
Inaugural SuperUROP student Gustavo Goretkin, who spoke earlier at the reception, poses with his advisor Prof. Tomas Lozano-Perez and Ray Stata.
SuperUROP students Yoana Gyurova and Manushaqe Muco (from left) talk with Susan Hockfield.
Zoran Zvonar, MediaTek Fellow and Senior Director (left), talks with 6.UAR Teaching Assistant Nivedita Chandrasekaran (far right) and Bob Meisenhelder, Director of Government R&D Programs and University Gifts and Grants at Analog Devices, Inc.]

SuperUROP in its second year features guest speakers Susan Hockfield and Ray Stata at its Sept. 26 reception

by Patricia Sampson and Danielle Festino
Photo credit: Bethany Versoy

The MIT Department of Electrical Engineering and Computer Science (EECS) held a kickoff reception at the Stata Center on Sept. 26 for 80 of its juniors and seniors who recently started their yearlong participation in the Advanced Undergraduate Research Opportunities Program (or SuperUROP).

Speakers at the reception praised SuperUROP, now in its second year, for continuing to raise the bar at MIT for undergraduate research and innovation, while fostering collaboration between faculty and industry.

In his welcoming remarks, Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering and head of EECS, recollected his own experience at the University of California at Berkeley, where he was able to conduct extended research as an undergraduate. This experience not only excited and prepared him for graduate school and an academic career, but also inspired him to create the SuperUROP.

Chandrakasan went on to discuss the critical role that the SuperUROP seminar class 6.UAR (Preparation for Research), which he co-teaches with MIT professor Dennis Freeman, plays in acclimating students to the program — including introducing them to social and networking opportunities (such as the kickoff event). “We recognize how important it is for our students to get to know each other and develop a sense of community,” he said.

Chandrakasan acknowledged the generous support from the 15 companies and several donors who support the Research and Innovation Scholars Program (RISP), which funds the SuperUROP students and provides some discretionary funding for the host research group. The RISP program makes the SuperUROP possible, he said. “The industry mentors provide not only suggestions and research directions,” he said, “but detailed feedback on the technical aspects of the project.”

At the reception, SuperUROP mentor Steve Muir, director of the VMware Academic Program, was excited about returning to the program — particularly about the wide range of research projects available to the students. “We are looking forward to seeing what the VMware SuperUROP scholars can accomplish this year,” he said.

“We are also encouraged by the addition of a dedicated staff member to manage industry relationships,” he added, referring to MIT’s Ted Equi, who is serving as a SuperUROP industrial liaison.

[Image: MIT president emerita Susan Hockfield speaks at the SuperUROP reception, Sept. 26, 2013.]

SuperUROP is more than a ‘warm-up’

Former MIT president Susan Hockfield welcomed the SuperUROP crowd, saying, “[The program] is as obvious as the sun in the sky.” She noted that not only do 85 percent of graduating seniors participate in a UROP, but also that UROP research is more than a “warm-up” for these students — it is real.

Hockfield praised the extraordinary ability and capacity of MIT students, saying, “If it’s OK for undergraduates to take graduate-level classes, why not graduate-level research?” She continued, “SuperUROP meets our students where they are.” She also congratulated and thanked EECS and Chandrakasan for taking on the SuperUROP “experiment.”

Gustavo Goretkin ’13 followed Hockfield in speaking to encourage the new SuperUROP class. In a unique position, Goretkin was a member of the original Undergraduate Student Advisory Group in EECS (USAGE) in 2011 and 2012 that helped shape the development of SuperUROP. He was also a member of the inaugural SuperUROP class and now, as an EECS graduate student, was given the chance to speak about his experience.

“SuperUROP not only made it possible for me to apply for graduate school with strong recommendation letters, but it prepared me well to present my research,” he said. “Most useful was hearing from the ‘rock star’ people who presented to the [SuperUROP] class members during the year.”

Gustavo’s remarks fell on eager ears, including those of new SuperUROP student Benoit Landry, the MIT EECS-DENSO Undergraduate Research and Innovation Scholar, who is working with associate professor Russ Tedrake to build robotic control systems that allow for contact without interrupting feedback and control. Landry said, “The 6.UAR lectures are a constant reminder that doing research at MIT makes us a part of something truly exciting.”

Similarly, EECS senior Chelsea Finn applied to SuperUROP because she wants to gain experience in conducting research under the guidance of top EECS faculty. As an EECS-Qualcomm Research and Innovation Scholar with professor Seth Teller, Finn is working to improve computer vision methods for quickly detecting text in natural environments to create a real-time system that can aid the blind and vision-impaired.

The many legs of SuperUROP

Ray Stata ’57, SM ’58, chairman and co-founder of Analog Devices, was one of the first RISP partners for SuperUROP. In his remarks, Stata enthusiastically noted that as SuperUROP embarks on its sophomore year, the program presents “lots of learning on the how and why to engage industry.” He noted that the unique way the program pairs faculty and industry in a shared mentorship presents incentives and opportunities for building research collaborations.

[Image: Analog Devices Chairman and Co-founder Ray Stata '57, SM '58, speaks at the Sept. 26 SuperUROP reception.]

Stata gave a historical perspective on what he termed the two pillars that MIT has built so successfully over the years: education and research. Now, he noted, there is increasing focus by MIT, in a very deliberate way, on innovation and entrepreneurship. The SuperUROP program, Stata said, is contributing to this process.

He also pointed to the need in industry for cross-disciplinary teaming in order to learn from and solve complex problems — something that is at the root of the SuperUROP.

Zoran Zvonar, Fellow and senior director at MediaTek, found resonance with what he heard at the SuperUROP reception. Since MediaTek joined SuperUROP in 2012, he noted, “We are eager to leverage existing relationships with research centers at MIT such as the Center for Integrated Circuits and Systems (CICS) and Wireless@MIT, and expand to building new relationships with a broader talent pool of MIT students and faculty. This year we are approaching the program with more focused projects, representing the intersection of research initiatives defined by MIT faculty and students and technology areas that interest the company.”

From the student perspective, SuperUROP represents deep research with the best possible support, and, for some of those students, it might provide the opportunity to feed this work into a startup.

Angela Zhang wants to accomplish both. As an MIT EECS-Amazon Undergraduate Research and Innovation Scholar, Zhang is working with the Database@CSAIL group under professor Sam Madden and postdoc Aditya Parameswaran. Excited about this work, Zhang said, “This fall, I hope to build out the fundamentals of the database platform so we can later experiment using different approaches to solve the problem of data visualization.” Her goal in taking on this SuperUROP, she said, “is to answer some very big unanswered questions as well as to potentially develop my research project into a startup.”

Regardless of the SuperUROP students’ goals — for graduate school, career in industry or launching a startup — the program, Chandrakasan said, “has demonstrated a new way to innovate how students, faculty and industry work together to generate ideas while building new leaders. It’s a win-win-win.”

October 17, 2013

Sze, Vaikuntanathan named for career development professorships

Department Head Anantha Chandrakasan recently announced the appointments of Vivienne Sze as the Emanuel E. Landsman (1958) Career Development Assistant Professor and Vinod Vaikuntanathan as the Steven G. ('68) and Renee Finn Career Development Assistant Professor. Chandrakasan also recognized Thomas Heldt for his appointment by the Institute for Medical Engineering and Science as the Hermann von Helmholtz Career Development Assistant Professor.
Read more about these three new members of the EECS faculty in the Sept. 23, 2013 spotlight.

October 18, 2013

Start6: A Bootcamp for EECS Entrepreneurs and Innovators

How to Start a Startup? Come to the Start6 Info Session to learn about applying and about the class structure for the three-week IAP class called Start6: A Bootcamp for EECS Entrepreneurs and Innovators.

 

Attention EECS students! Have you already done a UROP, or are you currently a SuperUROP, MEng or graduate student? The EECS Department is running its first-ever IAP Bootcamp for EECS Entrepreneurs and Innovators, titled Start6, from Jan. 13 to 29, 2014.

If you have developed an idea that you would like to carry to the level of preparing for a startup, please come to the Start6 Info Session at 5pm on Nov. 5 in Grier B (34-401B). You will learn about applying to Start6 (visit the application site for Start6) and about the class structure and objectives.

October 24, 2013

Rivest is appointed to Vannevar Bush Professorship

[Image: Prof. Ron Rivest, the Vannevar Bush Professor at MIT.]

EECS Department Head Anantha Chandrakasan announced today the appointment of Ron Rivest as the new holder of the Vannevar Bush Professorship. The Bush Chair is an Institute-wide professorship established in 1982 as a memorial to one of the outstanding scientists and engineers of the twentieth century.

As one of the founding fathers of modern cryptography, Ron Rivest worked with colleagues Len Adleman and Adi Shamir to create the public-key system known worldwide as the RSA system, one which has resisted sophisticated attack in the more than three decades since its invention and which is based on the first known algorithm that supports both digital signing (authenticating the sender) and encryption. Besides playing a critical role in the success of today's Internet commerce, the RSA algorithm on which the system is based represents an example of elegant and abstract theory that has ultimately had immense practical impact.
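
As a reminder of why the construction is both simple and powerful, here is a toy, textbook-RSA sketch in Python with deliberately tiny primes. It is illustrative only (real deployments use very large primes, padding and hashing), but it shows the property noted above: the same key pair supports both encryption and digital signing.

```python
p, q = 61, 53                  # toy primes; real RSA uses primes hundreds of digits long
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, chosen coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e (Python 3.8+)

m = 42                         # a message encoded as an integer smaller than n

c = pow(m, e, n)               # encrypt with the public key ...
assert pow(c, d, n) == m       # ... decrypt with the private key

s = pow(m, d, n)               # sign with the private key ...
assert pow(s, e, n) == m       # ... verify with the public key
```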

Prof. Rivest is also a dedicated teacher, mentor and educator. Professors Rivest and Leiserson co-developed 6.046, Introduction to Algorithms, a course that Rivest has taught over a dozen times, and Rivest co-authored the textbook of the same name with colleagues Professors Cormen, Leiserson and Stein. The text, dubbed the "CLRS book," has been listed as the best-selling textbook in all of computer science; over 500,000 copies have been sold. Generations of computer programmers worldwide have learned their craft from the CLRS book, considered the standard reference on the subject.

Prof. Rivest, formerly the Viterbi Professor of Computer Science in the EECS Department at MIT, is a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), a member of CSAIL’s Theory of Computation Group and is a leader of the CSAIL Cryptography and Information Security Group. Prof. Rivest has made significant contributions in many other areas of computer science including computer-aided design of integrated circuits, data structures, computer algorithms and systems for electronic voting. In the past he has also worked extensively in the area of machine learning.

Prof. Rivest has served as a Director of the International Association for Cryptologic Research, the organizing body for the Eurocrypt and Crypto conferences, and as a Director of the Financial Cryptography Association. He is a founder of RSA Data Security, which was bought by Security Dynamics; the combined company was renamed RSA Security and later purchased by EMC. He is also a co-founder of Verisign and of Peppercoin.

As a member of the Caltech/MIT Voting Technology Project, Prof. Rivest served on the Technical Guidelines Development Committee (TGDC), which advises the Election Assistance Commission. In this role, he developed recommendations for voting system certification standards. He was chair of the TGDC's Computer Security and Transparency Subcommittee and serves on the Advisory Board of the Verified Voting Foundation. He was a member of the 'Scantegrity' team developing and testing voting systems that are verifiable "end-to-end."

October 24, 2013

Baldo, Barzilay, Madden and Perreault are selected for 2013 Faculty Research and Innovation Fellowships

[Image: Faculty Research and Innovation Fellowship recipients for 2013, from left: Marc Baldo, Regina Barzilay, Samuel Madden and David Perreault.]

Department Head Chandrakasan announced today that the Faculty Research and Innovation Fellowship (FRIF) recipients for 2013–2014 are Marc Baldo, Regina Barzilay, Samuel Madden, and David Perreault. The FRIF is given to recognize senior EECS faculty members for outstanding research contributions and international leadership in their fields. Each Fellow will receive a three-year award totaling $60k (i.e., $20k per year for 3 years) to be used at the faculty member’s discretion for support of new or ongoing research projects. 

Professor Baldo's research focuses on building inexpensive and highly efficient organic light emitting devices and solar cells. His seminal contributions hinge on engineering the spin of excitons, which are quasiparticles of energy that mediate the emission and absorption of light within organic semiconductors. He has mixed excitons to quadruple the efficiency of LEDs, and split excitons to obtain more than one electron per photon in solar cells. Marc is currently the director of the Center for Excitonics at MIT. He is a member of the Research Laboratory of Electronics (RLE) at MIT.

Professor Barzilay studies how computers can understand and generate human language. She develops models of natural language, and uses those models to solve real-world language processing tasks. Her research enables the automated summarization of documents, machine interpretation of natural language instructions, and the deciphering of ancient languages.  She is acknowledged to be a world leader in computational linguistics.  She is a wonderful mentor to her students, who have received recognition within MIT and internationally for their doctoral theses and their research. She is a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

Professor Madden works on databases and computer systems. His research ranges over data management, live sensing, and networking, and he is broadly known for his contributions in sensor data management (TinyDB and CarTel) and database storage (C-Store and H-Store). Professor Madden is a leader in the data management community, with a number of award papers in top conferences over the past few years. His recent research efforts include crowd-sourced databases, data management for the cloud, and interactive data visualization. He leads the BigData@CSAIL initiative and directs the Intel Science and Technology Center (ISTC) in Big Data. He is a member of CSAIL.

Professor Perreault’s research focuses on the design, manufacturing, control and application of power electronics.  He has made outstanding contributions to the development of power converters operating at very high frequencies and in their use to benefit efficiency and performance in renewable energy, lighting, communications, computation and transportation.  He is co-founder of Eta Devices, a startup company focusing on high-efficiency RF power amplifiers. Professor Perreault is a Fellow of the IEEE and co-author of six IEEE prize papers. He is a member of both the Microsystems Technology Laboratories (MTL) and RLE at MIT. 

October 27, 2013

EECS Faculty take on new leadership roles at MIT

[Image: Vladimir Bulovic, appointed by President L. Rafael Reif as the School of Engineering's Associate Dean for Innovation.]

Bulovic is new School of Engineering Associate Dean for Innovation; Jesús del Alamo becomes MTL Director

In a recent letter to the MIT community, MIT President L. Rafael Reif announced the creation of an Institute-wide Initiative on Innovation. Reif selected two MIT faculty members to lead this initiative as new associate deans for innovation: Vladimir Bulovic, the Fariborz Maseeh (1990) Professor of Emerging Technology and director of the Microsystems Technology Laboratories (MTL), will be the MIT School of Engineering's Associate Dean for Innovation, and Fiona Murray, the Alvin J. Siteman (1948) Professor of Entrepreneurship, will be MIT Sloan's Associate Dean for Innovation. (See the MIT News Office Oct. 17 article.)

In selecting the two, Reif noted that the feedback he has received since September 2012 converged on two interconnected themes: strengthening MIT's innovation ecosystem and focusing on manufacturing. Reif, a member of the EECS Department faculty and a former department head as well as MIT provost, asked a 19-member advisory committee, led by Murray and Bulovic, to engage the MIT community in building the Innovation Initiative and to report its findings on Jan. 31, 2014. The committee's members include EECS faculty members Yoel Fink (joint with EECS), David Gifford and Ron Weiss. Separately, School of Engineering Dean Ian Waitz announced that Jesús del Alamo has been appointed director of MTL.

In his Oct. 22 letter to the MIT community announcing the new associate deans for innovation, Reif described Vladimir Bulovic's new role and his earlier leadership as director of the Organic and Nanostructured Electronics Laboratory (the ONE Lab) and co-director of the Eni-MIT Solar Frontiers Center, one of MIT's largest sponsored programs, in addition to his leadership of the Microsystems Technology Laboratories (MTL), which supports over 700 investigators and $80M of research programs from across the Institute. (Read more in the MIT News Office Oct. 17 article.)

Jesús del Alamo, the Donner Professor of Science in the EECS Department and a MacVicar Faculty Fellow, has been named director of MTL, a position he assumed on Oct. 28. In an email announcement to the MTL community, School of Engineering Dean Ian Waitz said he is looking forward to del Alamo's "creative and energetic input as MTL continues to evolve, especially under the institute's newly announced Innovation Initiative." (Read more in the Oct. 28 MIT News Office article about del Alamo's work and leadership in both silicon and compound-semiconductor transistor technologies and in the iLab Project, which he founded in 1998.)
 

[Image: Chancellor Eric Grimson is selected by President Reif to be Chancellor for Academic Advancement at MIT.]

Grimson and Schmidt are named for new leadership roles as Chancellor for Academic Advancement and MIT Acting Provost, respectively
 

MIT President Reif wrote to the MIT community on Oct. 22 announcing a new role for MIT Chancellor W. Eric Grimson: working with the Institute's faculty and students to ensure that their needs and priorities are reflected in the upcoming multiyear, multibillion-dollar capital campaign. As noted in the MIT News Office Oct. 22, 2013 article, Grimson, the Bernard Gordon Professor of Medical Engineering (and former head of MIT's largest department, Electrical Engineering and Computer Science), will assume the new ad hoc position of "Chancellor for Academic Advancement," a key role in making the case for MIT's fundraising priorities with alumni and donors around the world. (Read more about Eric Grimson's new role and the work he has accomplished as MIT chancellor since 2011 in the MIT News Oct. 22 article.)

In announcing Eric Grimson's new role in the MIT capital campaign, President Reif also announced that Martin Schmidt, associate provost since 2008, professor of electrical engineering and former MTL director, will become acting provost on Nov. 1, upon the return of current Provost Chris Kaiser to his faculty role. Reif noted Schmidt's work as associate provost in meeting the challenges of allocating physical space on campus as MIT responded to the global financial crisis and, in the period that followed, as MIT developed its plans for Kendall Square and campus improvements. (Read more in President Reif's MIT News Office Oct. 22 letter to the MIT community.)

 

October 27, 2013

Daniela Rus is appointed as Andrew (1956) and Erna Viterbi Professor


EECS Department Head Anantha Chandrakasan announced today the appointment of Professor Daniela Rus as the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science. The chair was established in 1999 by Andrew and Erna Viterbi to recognize significant contributions in the field of communications and signal processing. Prof. Ronald Rivest was the first holder of the Viterbi Professorship.

Professor Rus, the director of MIT’s Computer Science and Artificial Intelligence Laboratory, has made seminal contributions to motion planning and to the control and fielding of autonomous robots. Her research covers a broad spectrum of technical problems related to self-organizing networks of robots, including robot design, control, locomotion, manipulation, and high-level planning and control for groups of robots. Her work on shape-shifting robots is foundational for the field of modular self-reconfiguring robot systems, whose objective is to design robot modules, together with planning and control algorithms, that enable the resulting robot systems to self-organize into the shape best suited to the sensing, navigation, or manipulation needs of a task. Her work has contributed several new robot platforms with novel capabilities, along with algorithms for controlling networks of robots. (See the links to media coverage of the numerous robotic systems developed by Prof. Rus.)

In his announcement, Prof. Chandrakasan noted Prof. Rus’ dedication as an educator, saying, “She is also an outstanding educator and a wonderful mentor to her students.” Prof. Rus developed, in collaboration with Professor Seth Teller, two very popular courses in the Robotics: Science and Systems sequence (6.141 and 6.142). Two offerings of the advanced course (6.142) resulted in refereed conference publications, authored with the class, that were nominated for best-paper awards at the premier robotics conferences ICRA and IROS. The first paper described a class project on an autonomous greenhouse; the second described a class project on assembling IKEA furniture with robots. More recently, Professor Rus has worked with Professor Erik Demaine and Chuck Hoberman to create an innovative course that explores the role of computation in mechanical innovation.

Professor Rus has also devoted significant time to robotics education outside of MIT. As education co-chair of the IEEE Robotics and Automation Society, she spearheaded an effort to create an electronic repository of robotics teaching materials, with the goal of enabling non-experts in the field to offer undergraduate robotics courses. She has also played a leadership role in the field, serving on the Long Range Planning Committee of the IEEE Robotics and Automation Society, as general chair for several robotics conferences, including the International Symposium on Experimental Robotics and Algorithmic Foundations of Robotics, and as a program committee member, associate editor, and technical contributor for the premier robotics journals and conferences. Her contributions were recognized with a MacArthur Fellowship in 2002, while she was an associate professor at Dartmouth College, where she directed the Dartmouth Robotics Laboratory, which she founded in 1994.

October 28, 2013

CSAIL News: Berthold Horn develops algorithm to alleviate traffic flow instabilities


Berthold Horn develops an algorithm that, paired with adaptive cruise-control systems, can alleviate unexplained traffic flow problems on major highways. At this month’s IEEE Conference on Intelligent Transport Systems, Horn, professor of computer science and engineering in the EECS Department at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), presented the new algorithm for alleviating traffic flow instabilities, which he believes could be implemented as a variation of the adaptive cruise-control systems that are an option on many of today’s high-end cars. [See his work (pdf).]

Read more in the Oct. 28, 2013 MIT News Office article by Larry Hardesty titled "Eliminating unexplained traffic jams - If integrated into adaptive cruise-control systems, a new algorithm could mitigate the type of freeway backup that seems to occur for no reason," also posted below.


Everybody’s experienced it: a miserable backup on the freeway, which you think must be caused by an accident or construction, but which at some point thins out for no apparent reason.

Such “traffic flow instabilities” have been a subject of scientific study since the 1930s, but although there are a half-dozen different ways to mathematically model them, little has been done to prevent them.

At this month’s IEEE Conference on Intelligent Transport Systems, Berthold Horn, a professor in MIT’s Department of Electrical Engineering and Computer Science, presented a new algorithm for alleviating traffic flow instabilities, which he believes could be implemented by a variation of the adaptive cruise-control systems that are an option on many of today’s high-end cars.

A car with adaptive cruise control uses sensors, such as radar or laser rangefinders, to monitor the speed and distance of the car in front of it. That way, the driver doesn’t have to turn the cruise control off when traffic gets backed up: The car will automatically slow when it needs to and return to its programmed speed when possible.

Counterintuitively, a car equipped with Horn’s system would also use sensor information about the distance and velocity of the car behind it. A car that stays roughly halfway between those in front of it and behind it won’t have to slow down as precipitously if the car in front of it brakes; but it will also be less likely to pass on any unavoidable disruptions to the car behind it. Since the system looks in both directions at once, Horn describes it as “bilateral control.”

Traffic flow instabilities arise, Horn explains, because variations in velocity are magnified as they pass through a lane of traffic. “Suppose that you introduce a perturbation by just braking really hard for a moment, then that will propagate upstream and increase in amplitude as it goes away from you,” Horn says. “It’s kind of a chaotic system. It has positive feedback, and some little perturbation can get it going.”
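
To make the idea concrete, here is a minimal, self-contained Python simulation sketch written for this article rather than taken from Horn’s paper: the gains, the size of the braking perturbation, and the measure of “unevenness” are all invented for illustration, and the real system is analyzed far more carefully in continuous time.

    import numpy as np

    def simulate(n_cars=20, steps=2000, dt=0.05, bilateral=True):
        """Toy 1-D car-following simulation (illustrative only; not Horn's model).

        With bilateral=True, each car accelerates so as to sit midway between
        the car ahead and the car behind and to match their average speed.
        With bilateral=False it reacts only to the car in front, roughly like
        a conventional adaptive cruise controller.
        """
        spacing, v0 = 30.0, 25.0                 # initial gap (m) and speed (m/s), assumed
        kd, kv = 0.4, 1.0                        # position and velocity gains, assumed
        x = np.arange(n_cars)[::-1] * spacing    # car 0 leads the platoon
        v = np.full(n_cars, v0)
        peak = 0.0                               # worst deviation from even spacing seen

        for t in range(steps):
            a = np.zeros(n_cars)
            for i in range(1, n_cars):
                front_gap = x[i - 1] - x[i]
                if bilateral and i < n_cars - 1:
                    rear_gap = x[i] - x[i + 1]
                    a[i] = kd * (front_gap - rear_gap) / 2.0 \
                         + kv * ((v[i - 1] + v[i + 1]) / 2.0 - v[i])
                else:                            # last car, or forward-only mode
                    a[i] = kd * (front_gap - spacing) + kv * (v[i - 1] - v[i])
            if t == 200:                         # the lead car brakes briefly
                v[0] -= 5.0
            v += a * dt
            x += v * dt
            gaps = x[:-1] - x[1:]
            peak = max(peak, float(np.abs(gaps - spacing).max()))
        return peak

    if __name__ == "__main__":
        print("peak gap deviation, forward-only:", round(simulate(bilateral=False), 1))
        print("peak gap deviation, bilateral:   ", round(simulate(bilateral=True), 1))

Running it compares how unevenly spaced the platoon becomes after the lead car brakes, with and without the bilateral term; how much the disturbance grows as it moves up the lane depends entirely on the chosen parameters.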

See the MIT News Office article for a video of a sample run of Horn’s simulator (complete with brake lights). The system starts out in a stable state, but backups begin about 30 seconds in, even though all the cars are executing algorithms typical of current adaptive-cruise-control systems. The bilateral-control algorithm is switched on at the one-minute mark. Video courtesy of Berthold Horn.

Doing the math

Horn hit upon the notion of bilateral control after suffering through his own share of inexplicable backups on Massachusetts’ Interstate 93. Since he’s a computer scientist, he built a computer simulation to test it out.

The simulation seemed to bear out his intuition, but to publish, he needed mathematical proof. After a few false starts, he found that bilateral control could be modeled using something called the damped-wave equation, which describes how oscillations, such as waves propagating through a heavy fluid, die out over distance. Once he had a mathematical description of his dynamic system, he used techniques standard in control theory — in particular, the Lyapunov function — to demonstrate that his algorithm could stabilize it.
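
For reference, the generic damped-wave equation has the form below; the exact variables and coefficients in Horn’s analysis may differ, but here u can be read as the deviation of traffic density or spacing from equilibrium, k as a damping coefficient, and c as a wave speed:

    \frac{\partial^2 u}{\partial t^2} + k\,\frac{\partial u}{\partial t} = c^2\,\frac{\partial^2 u}{\partial x^2}

The damping term k ∂u/∂t is what makes disturbances die out as they propagate rather than grow.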

Horn’s proof accounts for several variables that govern real-life traffic flow, among them drivers’ reaction times, their desired speed, and their eagerness to reach that speed — how rapidly they accelerate when they see gaps opening in front of them. Horn found that the literature on traffic flow instabilities had proposed a range of values for all those variables, and within those ranges, his algorithm works very efficiently. But in fact, for any plausible set of values, the algorithm still works: All that varies is how rapidly it can smooth out disruptions.

Horn’s algorithm works, however, only if a large percentage of cars are using it. And laser rangefinders and radar systems are relatively costly pieces of hardware, which is one of the reasons that adaptive cruise control has remained a high-end option.

Digital cameras, on the other hand, have become extremely cheap, and many cars already use them to monitor drivers’ blind spots. “There are several techniques,” Horn says. “One is using binocular stereo, where you have two cameras, and that allows you to get distance as well as relative velocity. The disadvantage of that is, well, two cameras, plus alignment. If they ever get out of alignment, you have to recalibrate them.”

Time to impact

Horn’s chief area of research is computer vision, and his group previously published work on extracting information about distance and velocity from a single camera. “We’ve developed monocular methods that allow you to very accurately get the ratio of distance to velocity,” Horn says — a ratio known in transportation studies as “time to contact,” since it captures information about the imminence of collision. “Strangely, while it’s, from a monocular camera, difficult to get distance accurately without additional information, and it’s difficult to get velocity accurately without additional information, the ratio can be had.” In ongoing work, Horn is investigating whether his algorithm can be adapted so that it uses only information about time to contact, rather than absolute information about speed and distance.
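
The geometric relation behind time to contact is easy to state, even though Horn’s own estimators work directly from image brightness gradients rather than from tracked object sizes. Under a pinhole model, an object of fixed size S at distance Z has image size s = fS/Z, so s/(ds/dt) = Z/(−dZ/dt), which is exactly the time to contact. A hypothetical two-frame sketch (not Horn’s method):

    def time_to_contact(size_prev, size_curr, dt):
        """Estimate time to contact from the apparent size of an object in two
        successive frames.  Pinhole model: s = f*S/Z, so
        TTC = Z / (-dZ/dt) = s / (ds/dt) ~ dt * size_prev / (size_curr - size_prev).
        Sizes can be in any consistent image units, e.g. bounding-box width in pixels.
        """
        growth = size_curr - size_prev
        if growth <= 0:
            return float("inf")   # the object is not getting closer
        return dt * size_prev / growth

    # e.g. a car whose image width grows from 80.0 to 80.4 pixels in 1/30 s
    print(time_to_contact(80.0, 80.4, 1.0 / 30.0))   # about 6.7 seconds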

“This is a beautiful paper,” says Mohan Trivedi, a professor of electrical engineering and computer science at the University of California at San Diego, and director of the school’s Laboratory for Intelligent and Safe Automobiles. “It’s a welcome addition to our literature, and I’m looking forward to other people picking up on this and pushing it forward.”

The real obstacle to the system’s adoption, Trivedi says, is not technical but psychological. “Generally, drivers really worry about what is good for me, rather than what is good for the whole platoon or the community of vehicles that are moving on this road with me,” he says.

But, Trivedi adds, his own group is investigating the use of rearview cameras for safety applications, rather than applications that address questions of collective welfare. “A lot of times, we might be intending to change lanes to overtake, to exit, to merge, and in those cases, we may not be aware of things that can come from behind us,” he says. “We are developing these large, wide-angle field-of-view systems that look behind and can fuse motion cues and appearance cues to look at those surrounding criticalities.” If such systems catch on, Trivedi says, Horn’s algorithm could piggyback on top of them, without increasing cars’ sticker prices.

October 29, 2013


Calculating when our computers can safely make a small mistake


[Graphic by Christine Daniloff, MIT News Office.] Martin Rinard, professor in the MIT EECS Department and principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and members of his research group have developed a new programming framework that knows when a bit of data can be sacrificed to permit timely and energy-efficient performance, while allowing developers to calculate the probability that the program still produces accurate results. Last week, two graduate students in Prof. Rinard’s group, Michael Carbin and Sasa Misailovic, presented the new system at the Association for Computing Machinery’s Object-Oriented Programming, Systems, Languages and Applications conference, where their paper, co-authored with Rinard, won a best-paper award.

Read more in the MIT News Office Nov. 4, 2013 article by Larry Hardesty titled "How to program unreliable chips - A new language lets coders reason about the trade-off between fidelity of execution and power or time savings in the computers of the future," also posted below.


As transistors get smaller, they also become less reliable. So far, computer-chip designers have been able to work around that problem, but in the future, it could mean that computers stop improving at the rate we’ve come to expect.

A third possibility, which some researchers have begun to float, is that we could simply let our computers make more mistakes. If, for instance, a few pixels in each frame of a high-definition video are improperly decoded, viewers probably won’t notice — but relaxing the requirement of perfect decoding could yield gains in speed or energy efficiency.

In anticipation of the dawning age of unreliable chips, Martin Rinard’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory has developed a new programming framework that enables software developers to specify when errors may be tolerable. The system then calculates the probability that the software will perform as it’s intended.

“If the hardware really is going to stop working, this is a pretty big deal for computer science,” says Rinard, a professor in the Department of Electrical Engineering and Computer Science. “Rather than making it a problem, we’d like to make it an opportunity. What we have here is a … system that lets you reason about the effect of this potential unreliability on your program.”

Last week, two graduate students in Rinard’s group, Michael Carbin and Sasa Misailovic, presented the new system at the Association for Computing Machinery’s Object-Oriented Programming, Systems, Languages and Applications conference, where their paper, co-authored with Rinard, won a best-paper award.

On the dot

The researchers’ system, which they’ve dubbed Rely, begins with a specification of the hardware on which a program is intended to run. That specification includes the expected failure rates of individual low-level instructions, such as the addition, multiplication, or comparison of two values. In its current version, Rely assumes that the hardware also has a failure-free mode of operation — one that might require slower execution or higher power consumption.

A developer who thinks that a particular program instruction can tolerate a little error simply adds a period — a “dot,” in programmers’ parlance — to the appropriate line of code. So the instruction “total = total + new_value” becomes “total = total +. new_value.” Where Rely encounters that telltale dot, it knows to evaluate the program’s execution using the failure rates in the specification. Otherwise, it assumes that the instruction needs to be executed properly.

Compilers — applications that convert instructions written in high-level programming languages like C or Java into low-level instructions intelligible to computers — typically produce what’s called an “intermediate representation,” a generic low-level program description that can be straightforwardly mapped onto the instruction set specific to any given chip. Rely simply steps through the intermediate representation, folding the probability that each instruction will yield the right answer into an estimation of the overall variability of the program’s output.

“One thing you can have in programs is different paths that are due to conditionals,” Misailovic says. “When we statically analyze the program, we want to make sure that we cover all the bases. When you get the variability for a function, this will be the variability of the least-reliable path.”
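
The arithmetic behind that kind of analysis can be sketched in a few lines. The toy Python below is not Rely itself; the instruction names, failure rates, and the product-then-minimum rule are simplified stand-ins for the paper’s much more detailed semantics, but they show how per-instruction reliabilities combine along a path and why the least-reliable path bounds the whole function:

    # Toy reliability analysis in the spirit of Rely (not the actual tool).
    # Each relaxed ("dotted") instruction runs on unreliable hardware with the
    # failure rate from an assumed hardware spec; exact instructions never fail.

    HW_SPEC = {"add": 1e-7, "mul": 1e-7, "cmp": 1e-9}   # assumed failure rates

    def path_reliability(instructions):
        """Probability that every instruction on one execution path is correct."""
        r = 1.0
        for op, relaxed in instructions:
            if relaxed:                 # the '+.'-style annotated operations
                r *= 1.0 - HW_SPEC[op]
            # exact operations contribute a factor of 1.0
        return r

    def program_reliability(paths):
        """Conservative bound: the reliability of the least-reliable path."""
        return min(path_reliability(p) for p in paths)

    then_branch = [("add", True)] * 1000                 # e.g. a loop of 1000 relaxed adds
    else_branch = [("mul", True)] * 10 + [("cmp", False)]
    print(program_reliability([then_branch, else_branch]))   # about 0.9999, set by the adds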

“There’s a fair amount of sophisticated reasoning that has to go into this because of these kind of factors,” Rinard adds. “It’s the difference between reasoning about any specific execution of the program where you’ve just got one single trace and all possible executions of the program.”

Trial runs

The researchers tested their system on several benchmark programs standard in the field, using a range of theoretically predicted failure rates. “We went through the literature and found the numbers that people claimed for existing designs,” Carbin says.

With the existing version of Rely, a programmer who finds that permitting a few errors yields an unacceptably low probability of success can go back and tinker with his or her code, removing dots here and there and adding them elsewhere. Re-evaluating the code, the researchers say, generally takes no more than a few seconds.

But in ongoing work, they’re trying to develop a version of the system that allows the programmer to simply specify the accepted failure rate for whole blocks of code: say, pixels in a frame of video need to be decoded with 97 percent reliability. The system would then go through and automatically determine how the code should be modified to both meet those requirements and maximize either power savings or speed of execution.

“This is a foundation result, if you will,” says Dan Grossman, an associate professor of computer science and engineering at the University of Washington. “This explains how to connect the mathematics behind reliability to the languages that we would use to write code in an unreliable environment.”

Grossman believes that for some applications, at least, it’s likely that chipmakers will move to unreliable components in the near future. “The increased efficiency in the hardware is very, very tempting,” Grossman says. “We need software work like this work in order to make that hardware usable for software developers.”

November 4, 2013


Faculty positions beginning September 2014


MASSACHUSETTS INSTITUTE OF TECHNOLOGY

FACULTY POSITIONS
 

The Department of Electrical Engineering and Computer Science (EECS) seeks candidates for faculty positions starting in September 2014. Appointment will be at the assistant or untenured associate professor level. In special cases, a senior faculty appointment may be possible. Faculty duties include teaching at the undergraduate and graduate levels, advising students, conducting original scholarly research, supervising student research, and developing course materials at the graduate and undergraduate levels. We will consider candidates with backgrounds and interests in any area of electrical engineering and computer science. Faculty appointments will commence after completion of a doctoral degree.

Candidates must register with the EECS search website at https://eecs-search.eecs.mit.edu, and must submit application materials electronically to this website. Candidate applications should include a description of professional interests and goals in both teaching and research. Each application should include a curriculum vita and the names and addresses of three or more individuals who will provide letters of recommendation. Letter writers should submit their letters directly to MIT, preferably on the website or by mailing to the address below. Please submit a complete application by December 1, 2013.

Send all materials not submitted on the website to:

Professor Anantha Chandrakasan
Department Head, Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Room 38-401
77 Massachusetts Avenue
Cambridge, MA 02139

M.I.T. is an equal opportunity/affirmative action employer.

 

Data sets simplified - AI technique may aid flight-delay analysis and more


Work by Willsky and Liu in LIDS leads to a new AI algorithm that could increase the efficiency of data analysis.

Using artificial-intelligence techniques, including probabilistic graphical models, Ying Liu, a graduate student in EECS working with Alan Willsky, EECS professor and director of the Laboratory for Information and Decision Systems (LIDS), has developed a technique that can efficiently infer vital information about the propagation of flight delays at U.S. airports. Liu and Willsky will present their work, which has potential application to a wide range of areas, at the annual conference of the Neural Information Processing Systems Foundation in early December.

Read more in the Nov. 14, 2013 MIT News Office article by Larry Hardesty titled "Machine learning branches out - An algorithm that extends an artificial-intelligence technique to new tasks could aid in analysis of flight delays and social networks," also posted below.


Much artificial-intelligence research is concerned with finding statistical correlations between variables: What combinations of visible features indicate the presence of a particular object in a digital image? What speech sounds correspond with instances of what words? What medical, genetic, and environmental factors are correlated with what diseases?

As the number of variables grows, calculating their aggregate statistics becomes dauntingly complex. But that calculation can be drastically simplified if you know something about the structure of the data — that, for instance, the sound corresponding to the letter “T” is frequently followed by the sound corresponding to the letter “R,” but never by the sound corresponding to the letter “Q.”

In a paper being presented in December at the annual conference of the Neural Information Processing Systems Foundation, MIT researchers describe a new technique that expands the class of data sets whose structure can be efficiently deduced. Not only that, but their technique naturally describes the data in a way that makes it much easier to work with.

In the paper, they apply their technique to several sample data sets, including information about commercial airline flights. Using only flights’ scheduled and actual departure times, the algorithm can efficiently infer vital information about the propagation of flight delays through U.S. airports. It also identifies those airports where delays are most likely to have far-reaching repercussions, which makes it simpler to reason about the behavior of the network as a whole.

Thinking graphically

In technical terms, the researchers’ work concerns probabilistic graphical models. In this context, a graph is a mathematical construct that consists of nodes and edges, usually depicted as, respectively, circles and the lines that connect them. A network diagram is a familiar example of a graph; a family tree is another.

In a graphical model, the edges have an associated number, which describes the statistical relationship between the nodes. In the linguistic example, the nodes representing the sounds corresponding to “T” and “R” would be connected by a highly weighted edge, while the nodes corresponding to “T” and “Q” wouldn’t be connected at all.

Graphical models simplify reasoning about data correlations because they eliminate the need to consider certain dependencies. Suppose, for instance, that your artificial-intelligence algorithm is looking for diagnostically useful patterns in a mountain of medical data, where the variables include patients’ symptoms, their genetic information, their treatment histories, and prior diagnoses. Without the graph structure, the algorithm would have no choice but to evaluate the relationships among all the variables at once. But if it knows, for instance, that gene “G” is a cause of disease “D,” which is treated with medication “M,” which has side effect “S,” then it has a much simpler time determining whether, for instance, “S” is a previously unidentified indicator of “D.” A graphical model is a way of encoding those types of relationships so that they can be understood by machines.

Historically, graphical models have sped up machine-learning algorithms only when they’ve had a few particular shapes, such as that of a tree. A tree is a graph with no closed loops: In a family tree, for instance, a closed loop would indicate something biologically impossible — that, say, someone is both parent and sibling to the same person.

Out of the loop

According to Ying Liu, a graduate student in MIT’s Department of Electrical Engineering and Computer Science who co-wrote the new paper with his advisor, Alan Willsky, the Edwin Sibley Webster Professor of Electrical Engineering, loops pose problems because they can make statistical inference algorithms “overconfident.” The algorithm typically used to infer statistical relationships within graphical models, Liu explains, is a “message-passing algorithm, where each node sends messages to only its neighbors, using only local information and incoming messages from other neighbors. It’s a very good way to distribute the computation.” If the graph has loops, however, “a node may get some message back, but this message is partly from itself. So it gets overconfident about the beliefs.”
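
For readers unfamiliar with message passing, the sketch below implements generic sum-product belief propagation on a tiny tree-structured model with binary variables; it is purely illustrative, the potentials are made up, and handling graphs that are not trees is exactly the problem Liu and Willsky address:

    import numpy as np
    from collections import defaultdict

    # Minimal sum-product message passing on a tree-structured pairwise model.
    unary = {n: np.array([0.5, 0.5]) for n in "ABCD"}
    unary["A"] = np.array([0.9, 0.1])          # pretend we observed evidence at A
    edges = {("A", "B"): np.array([[0.8, 0.2], [0.2, 0.8]]),
             ("B", "C"): np.array([[0.7, 0.3], [0.3, 0.7]]),
             ("B", "D"): np.array([[0.6, 0.4], [0.4, 0.6]])}

    nbrs = defaultdict(list)
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)

    def potential(u, v):
        """Pairwise potential indexed as [state of u, state of v]."""
        return edges[(u, v)] if (u, v) in edges else edges[(v, u)].T

    def message(src, dst):
        """Message from src to dst: src's local belief (excluding dst's
        influence) pushed through the pairwise potential."""
        b = unary[src].copy()
        for n in nbrs[src]:
            if n != dst:
                b = b * message(n, src)
        m = potential(src, dst).T @ b          # sum over src's states
        return m / m.sum()

    def marginal(node):
        b = unary[node].copy()
        for n in nbrs[node]:
            b = b * message(n, node)
        return b / b.sum()

    print(marginal("C"))    # how the evidence at A propagates to C through the tree

On a tree this recursion terminates and gives exact marginals; on a loopy graph a node’s own message can circle back to it, which is the overconfidence Liu describes.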

In prior work, Liu and Willsky showed that efficient machine learning can still happen in a “loopy” probabilistic graph, provided it has a relatively small “feedback vertex set” (FVS) — a group of nodes whose removal turns a loopy graph into a tree. In the airline-flight example, many of the nodes in the FVS were airline hubs, which have flights to a large number of sparsely connected airports. The same structure is seen in other contexts in which machine learning is currently applied, Liu says, such as social networking.
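
To illustrate what a feedback vertex set is, here is a simple greedy heuristic in Python: it repeatedly peels off nodes that cannot lie on any cycle and then removes a remaining high-degree node (a “hub”). This is only a generic heuristic for finding some FVS, not Liu and Willsky’s procedure, and the example graph is invented:

    # Greedy FVS heuristic: peel degree-<2 nodes (never on a cycle), then
    # remove the highest-degree remaining node; repeat until a forest is left.

    def greedy_fvs(adj):
        """adj: dict node -> set of neighbours (undirected).
        Returns a set of nodes whose removal leaves a forest."""
        adj = {u: set(vs) for u, vs in adj.items()}
        fvs = set()
        while True:
            changed = True
            while changed:                       # prune the tree-like fringe
                changed = False
                for u in list(adj):
                    if len(adj[u]) <= 1:
                        for v in adj[u]:
                            adj[v].discard(u)
                        del adj[u]
                        changed = True
            if not adj:                          # nothing left: remainder is a forest
                return fvs
            hub = max(adj, key=lambda u: len(adj[u]))   # e.g. an airline hub
            fvs.add(hub)
            for v in adj[hub]:
                adj[v].discard(hub)
            del adj[hub]

    # a 4-cycle A-B-C-D plus a pendant node E: one cycle node suffices
    graph = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B", "D"},
             "D": {"C", "A", "E"}, "E": {"D"}}
    print(greedy_fvs(graph))     # {'A'}  (any single node on the cycle would do)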

In the new paper, they show that the structure of a graphical model can be deduced by similar means. A structureless data set is equivalent to a graph in which every node is connected to every other node. Liu and Willsky’s algorithm goes through the graph, sequentially removing nodes that break loops and, using the efficient algorithm they demonstrated previously, calculating how close the statistical dependencies of the resulting graph are to those of the fully connected graph.

In this manner, the algorithm builds up an FVS for the graph. What remains is a tree — or something very close to a tree — that allows for efficient calculation. In practice, Liu and Willsky found that in order to make machine learning efficient, they required an FVS whose size was only about the logarithm of the total number of nodes in the graph.

Oscillations

Sujay Sanghavi, an assistant professor of electrical and computer engineering at the University of Texas at Austin, has also studied the problem of learning the structure of graphical models. Of the MIT researchers’ work, Sanghavi says, “They kind of decompose the problem into two problems, each of which is much simpler to solve individually, and then they alternate between the two. That is a much better way to solve the problem than the original one, which has both issues mixed together.”

In cases where the FVS has been identified, Sanghavi says, “You can easily find the graph that does not include those nodes. That’s a cute observation, that once you have these nodes, the rest of the problem is easy. The other problem is also easy, which is, I give you only those nodes, which are a small number, and you need to find only those edges which have one of the endpoints as those nodes. It’s just when you try to solve both problems together that it becomes hard. That’s the main insight in this paper, and I think it’s quite nice.”

Sanghavi has recently been using graphical models to examine the structure of gene regulatory networks, and he’s curious to see whether the MIT researchers’ technique will apply to that problem. “Some genes fire and they fire other genes and so on,” Sanghavi explains. “Really what you want to do is find the dependence network between genes, and that can be posed as a Gaussian graphical-model learning problem. It would be interesting to see if their methods perform well.”

November 15, 2013


Understanding Boston Transportation: MIT Big Data launches initiative

November 14, 2013

The Big Data Initiative at the MIT Computer Science and Artificial Intelligence Lab (CSAIL) today announced two new activities aimed at improving the use and management of Big Data. The first is a series of data challenges designed to spur innovation in how people use data to solve problems and make decisions. The second is a new Big Data and Privacy Working Group that will bring together leaders from industry, government and academia to address the role of technology in protecting and managing privacy. Both activities were announced as part of the White House event “Data to Knowledge to Action: Building New Partnerships,” which highlighted new high-impact collaborations in the field of Big Data. 
Read more in the November CSAIL press release.


Fall Thesis Awards are presented


Ten EECS graduate students were selected this fall to receive department SM and PhD thesis awards, completing the roster of student awards the department presents each year; the remainder are given in the spring. The fall-term awards were presented at the Nov. 18, 2013 faculty lunch. See the awards presentation (pdf) for the complete list, including thesis titles. Congratulations!

Winston Chern received the Ernst A. Guillemin SM Thesis Prize for Electrical Engineering (first prize). Prof. Jesus del Alamo (left), Student Awards Chair, announced the awards; Prof. Judy Hoyt (second from left) is Chern’s supervisor; and Dept. Head Chandrakasan (right) presented the award certificates to the recipients.

From left: Prof. David Perreault, supervisor; Alexander Jurkov, winner of the second prize for the Ernst A. Guillemin SM Thesis Prize for Electrical Engineering; Dept. Head Anantha Chandrakasan; and Prof. Jesus del Alamo.

The William A. Martin Memorial SM Thesis Prize for Computer Science (second prize) was awarded to Aaron Sidford (second from left). Sidford’s supervisor, Prof. Jonathan Kelner, is pictured at left; Dept. Head Anantha Chandrakasan and Prof. Jesus del Alamo appear to Sidford’s left.

The Jin-Au Kong Doctoral Thesis Prize for Electrical Engineering (first prize) was awarded to Han Wang (second from left). His supervisor, Tomas Palacios, appears at left; Dept. Head Chandrakasan and Jesus del Alamo are standing at right.

The second prize for the Jin-Au Kong Doctoral Thesis Prize for Electrical Engineering was awarded to Jouya Jadidian, whose supervisor is Prof. Mark Zahn (unable to attend).

Yang Cai received the George M. Sprowls PhD Thesis Award in Computer Science (honorable mention). His supervisor, Prof. Costis Daskalakis, is pictured at left.

The awardees in attendance posed with the supervisors present. Front, left to right: Winston Chern, Han Wang, Yang Cai, Alexander Jurkov, and Prof. Perreault. Back, from left: Jonathan Kelner, Aaron Sidford, Costis Daskalakis (behind), Jesus del Alamo, Anantha Chandrakasan, and Jouya Jadidian.

 

November 18, 2013


Graphene: information at e-speed. Englund discusses


Dirk Englund is cited by Marketplace Tech for his work using graphene to push computing to its next level of speed.
“There’s a very strong need for that computer to turn electrical signals into optical signals very efficiently,” Dirk Englund, the Jamieson Career Development Assistant Professor in the MIT Electrical Engineering and Computer Science Department, explained to Marketplace Tech. Englund was approached to discuss his work in the Quantum Photonics Laboratory, where computer chips made of graphene and silicon could allow information to move at nearly the speed of light.

Read more in the Nov. 21, 2013, Marketplace Tech feature titled "Graphene: OMG! The magic material that an elephant standing on a pencil couldn't break."

November 22, 2013
