
Smart workout wear, vegetable-killers, and inspiration from medieval music


EECS sophomore Mussie Demisse demonstrates his team's "Smart Suit" for workouts. Photo: Gretchen Ertl

Stephanie Schorow | EECS Contributor

It doesn’t get any better than this — at least not at MIT. There’s the roar of raucous laughter as students play games or test products that they themselves have designed and built. There’s the chatter of questions asked and answered, all to the effect of “How did you do that?” and “Here’s what I did.”  

To top it off, there’s the welcoming smell of pizza, slices being pulled from rapidly cooling boxes by a group of students and teaching assistants from the four sections of 6.08 (Introduction to EECS via Interconnected Embedded Systems). They have gathered for a special occasion during the last week of spring term: to show off their class final projects.

“This is the best class I have taken here,” says EECS sophomore Mussie Demisse, dressed in a hoodie with a square contraption on his back that could have fallen off Iron Man. He and his team have designed a “Smart Suit” that analyzes and assesses a user’s pushup form.

“The class has given me the opportunity to do research on my own,” Demisse says. “It’s introduced us to many things and it now falls on us to pursue the things we like.”

Slideshow: 6.08 class projects. All photos by Gretchen Ertl for EECS.

The course introduces students to working with multiple platforms, servers, databases, and microcontrollers. For the final project, four-person teams design, program, build, and demonstrate their own cloud-connected, handheld, or wearable Internet of Things systems. The result: about 85 projects ranging from a Frisbee that analyzes velocity and acceleration to a “better” GPS system for tracking the location of the MIT shuttle.

“Don’t hit the red apple! Noooo,” yells first-year student Bradley Albright as Joe Steinmeyer, EECS lecturer and 6.08 instructor, hits the wrong target while playing “Vegetable Assassins.” The object of the game is to slice the vegetables scrolling by on a computer screen, but Steinmeyer, using an internet-connected foam sword, has managed to hit an apple instead.  

Albright had the idea for a “Fruit Ninja”-style game during his first days at MIT, when he envisioned the visceral experience of slicing the air with a katana, or Japanese sword, and hitting a virtual target. Then, he and his team of Johnny Bui and Eesam Hourani, both sophomores in EECS, and Tingyu Li, a junior in management, were able to, as they put it, “take on the true villains of the food pyramid: vegetables.” They built a server-client model in which data from the sword is sent to a browser via a server connection. The server facilitates communication between all components through multiple WebSocket connections.
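In outline, that kind of relay is straightforward to sketch. The example below is purely illustrative and is not the team's code: it assumes Python with the third-party websockets package (version 10 or later), and the "sword" and "browser" roles, port number, and JSON fields are hypothetical stand-ins for whatever the students actually built.

    # Minimal sketch of a WebSocket relay: a "sword" client streams motion
    # readings to the server, which broadcasts them to connected "browser"
    # clients. Illustrative only; assumes the `websockets` package (>= 10).
    import asyncio
    import json

    import websockets

    browsers = set()  # browser connections waiting for sword data


    async def handler(ws):
        role = await ws.recv()  # first message declares "sword" or "browser"
        if role == "browser":
            browsers.add(ws)
            try:
                await ws.wait_closed()
            finally:
                browsers.discard(ws)
        else:  # sword: forward each swing reading to every browser
            async for message in ws:
                swing = json.loads(message)  # e.g. {"ax": ..., "ay": ..., "az": ...}
                websockets.broadcast(browsers, json.dumps(swing))


    async def main():
        async with websockets.serve(handler, "0.0.0.0", 8765):
            await asyncio.Future()  # run forever


    if __name__ == "__main__":
        asyncio.run(main())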

 “It took a lot of work. Coming down to the last night, we had some problems that we had to spend a whole night finishing but I think we are all incredibly happy with the work we put into it,” Albright says.

Steinmeyer teaches 6.08 with two EECS colleagues: Max Shulaker, Emmanuel E. Landsman (1958) Career Development Assistant Professor, and Stefanie Mueller, X-Window Consortium Career Development Assistant Professor. The course was co-created by Steinmeyer and Joel Voldman, an EECS professor and associate department head.

Mueller, for one, is impressed with the students’ collaborative efforts as they developed their projects in just four weeks: “They really had to pull together to work,” she says. 

Even projects that don’t quite work as expected are learning experiences, Steinmeyer notes. “I’m a big fan of having people do work early on and then go and do it again later. That’s how I learned the best. I always had to learn a dumb way first.”

Demisse and his team — Amadou Bah and Stephanie Yoon, both sophomores in EECS, and Sneha Ramachandran, a junior in EECS — confronted a few setbacks in developing their Smart Suit. “We wanted something to force ourselves to play around with electronics and hardware,” he explains. “During our brainstorming session, we thought of things that would monitor your heart rate.”

Initially, they considered something that runners might use to track their form. “But running’s pretty hard. [We thought,] ‘Let’s take a step back,’” Demisse recalls. “It was a natural evolution from that to pushups.”

They designed a zip-up hoodie with inertial measurement unit (IMU) sensors on an elbow, the upper back, and the lower back to measure the acceleration of each body part as the user does pushups for 10 seconds. That data is then analyzed and compared to the measurements of what is considered the “ideal” pushup form. 

A particular challenge: getting the data from various sources analyzed in a reasonable amount of time. The system uses a multiplex approach, but just “listens” to one input at a time. “That makes it easier to record data at a faster rate,” Demisse says.
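The pattern Demisse describes, polling one sensor at a time through a multiplexer and then comparing the recorded traces against a reference, can be sketched in a few lines. The snippet below is a hypothetical illustration, not the team's firmware; select_channel and read_accel stand in for whatever multiplexer and IMU drivers the suit actually uses.

    # Illustrative sketch only (not the Smart Suit firmware). `select_channel`
    # and `read_accel` are hypothetical stand-ins for the multiplexer and IMU
    # drivers; the point is the pattern of "listening" to one input at a time
    # and then scoring the traces against an "ideal" push-up profile.
    import time

    SENSORS = {"elbow": 0, "upper_back": 1, "lower_back": 2}  # mux channels


    def record_pushups(select_channel, read_accel, duration_s=10, rate_hz=50):
        """Poll each IMU in turn, collecting acceleration samples for duration_s."""
        traces = {name: [] for name in SENSORS}
        t_end = time.time() + duration_s
        while time.time() < t_end:
            for name, channel in SENSORS.items():
                select_channel(channel)            # point the multiplexer at one IMU
                traces[name].append(read_accel())  # e.g. an (ax, ay, az) tuple
            time.sleep(1.0 / rate_hz)
        return traces


    def form_score(trace, reference):
        """Mean absolute difference from the 'ideal' trace (vertical axis only)."""
        n = min(len(trace), len(reference))
        if n == 0:
            return float("inf")
        return sum(abs(a[2] - b[2]) for a, b in zip(trace, reference)) / n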

Another team developed a fishing game in which users cast a handheld pole and pick up “fish” viewed on a nearby screen. First-year Rafael Olivera-Cintron demonstrates by casting; a soft noise accompanies the movement. “Do you hear that ambient sound? That’s lake sounds, the sounds of water and mosquitos,” he says. He casts again and waits. And waits. “Yes, it’s a lot like fishing. A lot of waiting,” he says. “That’s my favorite part.” His teammates included EECS juniors Mohamadou Bella Bah and Chad Wood and EECS sophomores Julian Espada and Veronica Muriga.

Several teams’ projects involve music. Diana Voronin, Julia Moseyko, and Terryn Brunelle, all first-year students, are happy to show off  “DJam,” an interconnected spin on Guitar Hero. Rather than pushing buttons that correspond to imaginary guitar chords, users spin a turntable to different positions — all to the beat of a song playing in the background. 

 “We just knew we wanted to do something with music because it would be fun,” Moseyko says. “We also wanted to work with something that turned. From a technical point of view, it was interesting to use that kind of sensor.”

Music from the Middle Ages inspired the team of Shahir Rahman and Patrick Kao, both sophomores in EECS, and Adam Potter and Lilia Luong, both first-years. Using a plywood version of a medieval instrument called a hurdy-gurdy, they created “Hurdy-Gurdy Hero,” which uses a built-in microphone to capture and save favorite songs to a database that processes the audio into a playable game.

“The idea is to give joy, to be able to play an actual instrument but not necessarily just for those who [already] know how to play,” Rahman says. He cranks the machine and slightly squeaky but oddly harmonic notes emerge. Other students are clearly impressed by what they’re hearing. Olivera-Cintron sums up in just three words: “That is awesome.”

 

 

Date Posted: 

Monday, June 24, 2019 - 4:45pm


Card Description: 

In this popular class, student teams design, build, and demonstrate their own cloud-connected, handheld, or wearable embedded systems.



Two departments team up to study human and artificial intelligence


A new joint major, to be launched in the fall of 2019, combines human cognition, neuroscience, and computer science.

Department of Electrical Engineering and Computer Science | Department of Brain and Cognitive Sciences

As human brains increasingly interact with technology that mimics their own capabilities, the need for students to understand both the science and engineering of intelligence continues to grow as well. At the same time, ongoing advances in these technologies are driving demand for a deeper understanding of how the brain works.

In response, EECS and the Department of Brain and Cognitive Sciences (BCS) have teamed up to offer a new joint degree. Approved by the MIT faculty in April, the bachelor of science in computation and cognition (Course 6-9) is designed to help students explore how the brain produces intelligent behavior and how it can be replicated in machines. In May, the faculty also approved an associated master of engineering (MEng) degree.

“The 6-9 major fulfills a growing educational need at the intersection of cognitive science, neuroscience, and computer science,” says James DiCarlo, BCS department head and the Peter de Florez Professor of Neuroscience. “It is a forward-thinking step that embraces a dynamic new field of research that our faculty and students have shown great interest in. It’s an incredibly exciting time for students to be educated in the foundations of these efforts and to participate in shaping the future of the science and engineering of intelligence.”

Both DiCarlo and EECS department head Asu Ozdaglar noted that 6-9 also reflects Institute-wide enthusiasm for interdisciplinary initiatives such as the Quest for Intelligence and the new MIT Stephen A. Schwarzman College of Computing. “The new joint major also builds on an already strong relationship between our two departments. Ultimately, it will create a new community of graduates who are uniquely qualified to tackle cutting-edge research questions in this exciting emerging field,” says Ozdaglar, the School of Engineering Distinguished Professor of Engineering.

Anyone seeking to do novel research in artificial intelligence (AI) or machine learning will find it helpful to know “both how machines work and how humans make decisions and learn,” says Dennis M. Freeman, the Henry Ellis Warren (1894) Professor of Electrical Engineering and EECS education co-officer. “Both perspectives are critical to transformational advances.”  

"There’s a lot of shared interest around emerging new fields at the intersection of AI and cognitive science and of machine learning and brain science,” says Michale S. Fee, the Glen V. (1946) & Phyllis F. Dorflinger Professor of Neuroscience and the associate department head for education in BCS. “There is a lot of synergy between those areas, and advances would be facilitated by the cross-fertilization of ideas.”

The new course of study will launch in the fall of 2019 in recognition of the explosion of interest in topics at the intersection of the two departments. “We’ve always had students in EECS interested in cognitive sciences and vice versa,” Freeman says. “Recently, these have become the growth areas in both of our departments.” For example, he notes, nearly 500 students enrolled in 6.036 (Introduction to Machine Learning) in the spring of 2019.

The new major is expected to attract about 50 students annually, based on a survey of students already enrolled in BCS subjects focused on human cognition and computation, such as 9.66 (Computational Cognitive Science). “This is a substantial number, if it materializes — and we have every reason to think it will,” Fee says.

“There’s a big commercial push for these skills,” Freeman adds, noting that many of the methods used to help computers conduct “thinking” tasks — such as recognizing faces, driving cars, and even diagnosing diseases — are based on knowledge obtained studying humans.

At the same time, it’s also increasingly important for neuroscientists to have computational skills, Fee says. “One of the really transformative things that’s happening in brain science is that new technologies and methods are creating enormous data sets,” he says. One example: It’s now possible to record the activity of hundreds of thousands of neurons. “These are incredibly huge data sets, and the best way to analyze them is to use artificial intelligence. We’re basically building artificial brains to analyze the data in order to figure out how the human brain works,” he says.

The new major will equip students to take on advanced subjects in EECS; the architecture, circuits, and physiology of the brain; and computational approaches to cognition and intelligence. To provide a foundation for these academic pathways, 6-9 majors will be required to take both 6.0001 (Introduction to Computer Science and Programming in Python) and 9.01 (Introduction to Neuroscience), as well as a foundational math class. They will also have to complete a senior-level, project-based class.

“One of the challenges we faced was how to combine these two disciplines into a flexible program,” Fee says. “I think we’ve got a really great curriculum that provides that flexibility.”

Freeman says the new major should prove a boon for students who might otherwise have double-majored in EECS and BCS, because few of the requirements of established majors overlap. For that reason, students should have more freedom to choose electives from options such as 6.021J/9.21J (Cellular Neurophysiology and Computing), 9.35 (Perception), 9.19 (Computational Psycholinguistics), and 9.40 (Introduction to Neural Computation).

Students who want to pursue the associated MEng degree would need to take six additional subjects, conduct lab research in either Course 6 or 9, and write a master’s thesis. “We think that master’s will be a great opportunity for students who want to get advanced training to be more competitive for employment either in industry or for graduate study,” Fee says.

This is the fourth EECS-related joint degree program launched in recent years. The first, Course 6-7 (Computer Science and Molecular Biology), launched in 2012, followed by Course 6-14 (Computer Science, Economics, and Data Science) in 2017 and 11-6 (Urban Science and Planning with Computer Science) last fall. The 6-9 major will jointly reside in EECS and BCS, and enrolled students will have a primary academic advisor in BCS with a secondary advisor in EECS.

For more information about Course 6-9, students should contact Jillian Auerbach in BCS and Anne Hunter in EECS.

 

 

Date Posted: 

Tuesday, June 25, 2019 - 4:30pm


Card Description: 

A new joint major offered by EECS and the Department of Brain and Cognitive Sciences combines human cognition, neuroscience, and computer science.


Machine learning for everyone


Mingman Zhao, a PhD student in EECS, spoke to the 6.883/6.S083 class about common issues in using machine learning tools to address problems. Photo: Lillie Paquette, School of Engineering
 

Emily Makowski | School of Engineering

A graduate student researching red blood cell production, another studying alternative aviation fuels, and an MBA candidate: What do they have in common? They all enrolled in 6.883/6.S083 (Modeling with Machine Learning: From Algorithms to Applications) in the spring of 2019. The class, offered for the first time during that term, focused on machine learning applications in engineering and the sciences, attracting students from fields ranging from biology to business to architecture.

Among them was Thalita Berpan, who was in her last term before graduating from the MIT Sloan School of Management in June. Berpan previously worked in asset management, where she observed how financial companies increasingly focus on machine learning and related technologies. “I wanted to come to business school to dive into emerging technology and get exposure to all of it,” says Berpan, who has also taken courses on blockchain and robotics. “I thought, ‘Why not take the class so I can understand the building blocks?’”

The class provided Berpan with a thorough grounding in the basics — and more. “Not only do you understand the fundamentals of machine learning, but you actually know how to use them and apply them,” she says. “It’s very satisfying to know how to build machine learning algorithms myself and know what they mean.”

Berpan will go into product management immediately after graduation, and she plans to use what she has learned about data and algorithms to work with design engineers. “What are some of the ways that engineers and data scientists can leverage a data set? For me to be able to help guide them through that process is going to be extremely useful,” she says.

Open to both undergraduate and graduate students, 6.883/6.S083 enrolled 66 students for credit in its debut semester. EECS professors Regina Barzilay and Tommi Jaakkola created it as an experimental alternative to 6.036 (Introduction to Machine Learning), a course the pair also developed and initially taught and which has become one of the most popular on campus since its introduction in 2013.

Having received feedback that 6.036 was too specialized for some non-electrical engineering and computer science (EECS) majors, Barzilay and Jaakkola designed 6.883/6.S083 to focus on different applications of machine learning. For example, Berpan, along with students from the Department of Biology and the Department of Aeronautics and Astronautics, worked on a group project that used machine learning to predict the accuracy of DNA repair in the CRISPR/Cas9 genome-editing system.

“It doesn’t necessarily mean that the class is easier. It just has a different emphasis,” says Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science. “Our goal was to provide the students with a set of tools that would enable them to solve problems in their respective areas of specialization.”

The class includes live lectures that focus on modeling and online materials for building a shared background in machine learning methods, including tutorials for students who have less prior exposure to the subject. “We wanted to help students learn how to model and predict, and understand when they succeeded — skills that are increasingly needed across the Institute,” says Jaakkola, the Thomas Siebel Professor in EECS and the Institute for Data, Systems, and Society (IDSS).

In fact, about two-thirds of those enrolled for spring term were non-EECS majors. “We had a surprising number of people from the MIT School of Architecture and Planning. That’s very exciting,” Jaakkola says.

Ultimately, the instructors say, the new course was built to bring a variety of students together to study an exciting, fast-growing area. “They constantly hear about the wonders of AI, and this enables them to become part of it,” says Barzilay. “Obviously, it brings challenges, too, because they are now in totally new, uncharted territory. But I think they are learning a lot about their abilities to expand to new areas.”

 

Date Posted: 

Tuesday, June 25, 2019 - 7:00pm


Card Description: 

A new EECS course on applications of machine learning teaches students from a variety of disciplines about one of today’s hottest topics.


Song Han named to MIT Technology Review list of Innovators Under 35


EECS Assistant Professor Song Han

EECS Staff

EECS faculty member Song Han has been named to MIT Technology Review’s prestigious annual Innovators Under 35 list in the Pioneer category.

Han, who is the Robert J. Shillman (1974) Career Development Assistant Professor in EECS, was recognized for designing software that lets powerful artificial intelligence (AI) programs run more efficiently on low-power mobile devices.

Han’s “deep-compression” technique makes it possible to compress deep neural networks by 10 to 50 times and run — on a smartphone, in real time — “AI algorithms that can recognize objects, generate imagery, and understand human language,” MIT Technology Review noted in its announcement, adding that Facebook and other companies use the technology.
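Han's published deep-compression pipeline combines pruning, trained quantization, and Huffman coding. The NumPy sketch below illustrates only the first of those ingredients, magnitude-based pruning, and is a toy example rather than his implementation.

    # Toy illustration of magnitude-based weight pruning, one ingredient of
    # deep compression (the full technique also quantizes the surviving
    # weights and entropy-codes them). Not Han's actual implementation.
    import numpy as np


    def prune_by_magnitude(weights, sparsity=0.9):
        """Zero out the `sparsity` fraction of weights with smallest magnitude."""
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
        mask = np.abs(weights) >= threshold
        return weights * mask, mask  # keep the mask so pruned weights stay zero


    # Example: a random 256x128 "layer" reduced to roughly 10% nonzero weights.
    layer = np.random.randn(256, 128)
    pruned, mask = prune_by_magnitude(layer, sparsity=0.9)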

He also cofounded DeePhi Tech, a startup providing solutions for efficient deep-learning computing; the company was acquired by Xilinx last year.

Han received a PhD in electrical engineering from Stanford University. He joined the EECS faculty in July 2018, and is a core faculty member in MIT’s Microsystems Technology Laboratories (MTL) and an affiliate member of the Laboratory for Information and Decision Systems (LIDS).

His work has been featured by O’Reilly, IEEE Spectrum, TechEmergence, The Next Platform, and MIT News, among others. He received best-paper awards at the International Conference on Learning Representations and the International Symposium on Field Programmable Gate Arrays. Other honors include a Facebook Research Award, an Amazon Machine Learning Research Award, and a Sony Faculty Award.

Every year, MIT Technology Review’s Innovators Under 35 list recognizes 35 “exceptionally talented technologists whose work has great potential to transform the world.” Categories include Entrepreneurs, Humanitarians, Inventors, Pioneers, and Visionaries.

Date Posted: 

Wednesday, June 26, 2019 - 2:45pm


Card Description: 

The EECS faculty member was recognized for designing technology that lets powerful artificial intelligence programs run more efficiently on low-power hardware.


Roundup: Recent EECS faculty awards, prizes, fellowships, and other honors


Constantinos Daskalakis (left) and Vivienne Sze were among the EECS faculty members honored in the past year. Daskalakis received the Grace Murray Hopper Award from the Association for Computing Machinery, while Sze received MIT's Harold E. Edgerton Faculty Achievement Award.

EECS Staff

EECS professors are frequently recognized for excellence in teaching, research, service, and other areas. Following is a list of awards, prizes, medals, fellowships, memberships, grants, and other honors received by EECS faculty, primarily during the 2018-2019 academic year.

Mohammad Alizadeh, associate professor of electrical engineering and computer science, received a Microsoft Faculty Research Fellowship.

Dimitri Antoniadis, Ray and Maria Stata Professor of Electrical Engineering Emeritus, was elected to the American Academy of Arts and Sciences.

Regina Barzilay, Delta Electronics Professor of Electrical Engineering and Computer Science, was named among the “Top 100 AI Leaders in Drug Discovery and Advanced Healthcare” by Deep Knowledge Analytics.

Lecturer Ana Bell received the Louis D. Smullin (’39) Award for Excellence in Teaching from EECS.

Sir Tim Berners-Lee, 3Com Founders Professor of Engineering in the School of Engineering, was named Person of the Year by the Financial Times.

Dimitri Bertsekas, the Jerry McAfee (1940) Professor of Engineering, and John Tsitsiklis, director of the Laboratory for Information and Decision Systems (LIDS) and Clarence J. Lebel Professor in Electrical Engineering, jointly received the John von Neumann Theory Prize from the Institute for Operations Research and the Management Sciences (INFORMS).

Duane Boning, Clarence J. LeBel Professor in Electrical Engineering, received the Jerome H. Saltzer Award for Excellence in Teaching from EECS.  

Tamara Broderick, associate professor of electrical engineering and computer science, received a Notable Paper Award from the International Conference on AI and Statistics (AISTATS) for “A Swiss Army Infinitesimal Jackknife.”

Anantha Chandrakasan, dean of the School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science, was elected to the American Academy of Arts and Sciences.

Adam Chlipala, associate professor of electrical engineering and computer science, received the Ruth and Joel Spira Award for Excellence in Teaching from the School of Engineering and EECS.

Luca Daniel, professor of electrical engineering and computer science, received a Thornton Family Faculty Research Innovation Fellowship from EECS.

Constantinos Daskalakis, professor of electrical engineering and computer science, received the Grace Murray Hopper Award from the Association for Computing Machinery (ACM), a Bodossaki Foundation Scientific Prize from the Bodossaki Foundation, and a Frank Quick Faculty Research Innovation Fellowship from EECS.

Jesús A. del Alamo, director of the Microsystems Technology Laboratories (MTL) and Donner Professor of Science, was named a fellow of the Materials Research Society.

Erik Demaine, professor of computer science and engineering, was selected as a Margaret MacVicar Faculty Fellow and received a Teaching with Digital Technology Award from MIT Open Learning and the Office of the Vice Chancellor.

Srini Devadas, Edwin Sibley Webster Professor of Electrical Engineering, received the Distinguished Alumnus Award from the Indian Institute of Technology, Madras.

Dirk Englund, associate professor of electrical engineering and computer science, received a Professor Amar G. Bose Research Grant from MIT.

Ruonan Han, associate professor of electrical engineering and computer science, received a 2019 Outstanding Researcher Award from Intel.

Song Han, Robert J. Shillman (1974) Assistant Professor of Electrical Engineering and Computer Science, was named to MIT Technology Review’s annual list of 35 Innovators Under 35.

Lecturer Adam Hartz received the Digital Innovation Award from EECS.  

Thomas Heldt, W.M. Keck Career Development Associate Professor in Biological Engineering and Associate Professor of Electrical and Biomedical Engineering, was named Distinguished Lecturer by the IEEE Engineering in Medicine and Biology Society. He also received the Burgess (1952) and Elizabeth Jamieson Prize for Excellence in Teaching from EECS.

Tommi Jaakkola, Thomas Siebel Professor in EECS and the Institute for Data, Systems, and Society (IDSS), was named among the “Top 100 AI Leaders in Drug Discovery and Advanced Healthcare” by Deep Knowledge Analytics.

David R. Karger, professor of computer science and engineering, was elected to the American Academy of Arts and Sciences.

Manolis Kellis, professor of electrical engineering and computer science, was named among the “Top 100 AI Leaders in Drug Discovery and Advanced Healthcare” by Deep Knowledge Analytics.

Jing Kong, professor of electrical engineering and computer science, received a Thornton Family Faculty Research Innovation Fellowship from EECS.

Luqiao Liu, associate professor of electrical engineering and computer science, received a Young Investigator Research Program grant from the U.S. Air Force Office of Scientific Research.

Nancy Lynch, EECS Associate Department Head for Strategic Directions and NEC Professor of Software Science and Engineering, will receive the Award for Outstanding Technical Achievement from the IEEE Technical Committee on Distributed Processing, to be presented at the IEEE International Conference on Distributed Computing Systems in July 2019.

Muriel Médard, Cecil H. Green Professor of Electrical Engineering, was named as a fellow of the National Academy of Inventors.

Robert T. Morris, professor of computer science and engineering, was elected to the National Academy of Engineering.

Stefanie Mueller, X-Window Consortium Career Development Assistant Professor, received a National Science Foundation (NSF) CAREER Award.

Alan V. Oppenheim, Ford Professor of Engineering, received the 2018-2019 Creative Advising Activity Award from MIT’s Office of the First Year.

MIT President L. Rafael Reif, professor of electrical engineering, was named as a fellow of the National Academy of Inventors.

Ronitt Rubinfeld, professor of computer science and engineering, received the Seth J. Teller Award for Excellence, Inclusion, and Diversity from EECS.

Max Shulaker, Emmanuel E. Landsman (1958) Career Development Assistant Professor of Electrical Engineering and Computer Science, received the Ruth and Joel Spira Award for Excellence in Teaching from the School of Engineering and EECS.

Julian Shun, Douglas Ross (1954) Career Development Assistant Professor of Software Technology, received a National Science Foundation (NSF) CAREER Award.

David Sontag, Hermann von Helmholtz Associate Professor of Medical Engineering, and Peter Szolovits, professor of computer science and engineering, jointly received the Burgess (1952) and Elizabeth Jamieson Prize for Excellence in Teaching from EECS. Szolovits, also a professor of health sciences and technology, was also named among the “Top 100 AI Leaders in Drug Discovery and Advanced Healthcare” by Deep Knowledge Analytics.

Suvrit Sra, Edgerton Career Development Associate Professor, received a National Science Foundation (NSF) CAREER Award.

Lecturer Joseph Steinmeyer received the Outstanding Educator Award from EECS.

Vivienne Sze, associate professor of electrical engineering and computer science, received the Harold E. Edgerton Faculty Achievement Award from MIT.

Caroline Uhler, associate professor of electrical engineering and computer science, received a Simons Investigator Award in the Mathematical Modeling of Living Systems (MMLS) category from the Simons Foundation. 

Jacob White, Cecil H. Green Professor in Electrical Engineering, received the Bose Award for Excellence in Teaching from MIT.

Please send clarifications, updates, and information on new or pending EECS faculty awards to eecs-communications@mit.edu.

Date Posted: 

Wednesday, June 26, 2019 - 6:00pm


Card Description: 

Constantinos Daskalakis and Vivienne Sze were among more than 40 EECS faculty members honored for excellence in teaching, research, service, and more.


Roundup: EECS student awards, prizes and fellowships


EECS PhD candidate Arman Rezaee, who received MIT's 2019 Collier Medal, was among many EECS students honored over the past year. (L to R) MIT Police Captain Craig Martin, Rezaee, MIT Police Chief John DiFava, and MIT President L. Rafael Reif. Photo: Justin Knight.

EECS Staff

EECS undergraduates and graduate students routinely win major scholarships, fellowships, and awards. Following is a sampling of EECS student honors from the 2018-2019 academic year.

30 Under 30 Asia: EECS PhD student Nelson X. Wang was named to the 2019 Forbes 30 Under 30 Asia list in the Industry, Manufacturing, and Energy category.

Burchard Scholars: Several EECS majors are among 36 MIT undergraduates selected as 2019 Burchard Scholars. The program, run by the School of Humanities, Arts, and Social Sciences (SHASS), brings together MIT faculty and promising MIT sophomores and juniors who have demonstrated excellence in some aspect of SHASS fields for seminars and discussions. EECS Burchard Scholars include: Fiona Chen, Isabelle Chong, Patricia Gao, Robert Henning, Catherine Huang, Natasha Joglekar, Tara Liu, Erica Weng, Isabelle Yen, and Yiwei Zhu.

Collier Medal: EECS PhD candidate Arman Rezaee received MIT’s 2019 Collier Medal, which celebrates an individual or group whose actions demonstrate the importance of community. The award honors the memory of MIT Officer Sean Collier, shot and killed on campus during the manhunt for suspects following the Boston Marathon bombing in April 2013.

Fulbright Fellowships: Four EECS students were among 12 MIT students who received 2019 Fulbright U.S. Student Fellowships. All will spend the 2019-2020 academic year studying, teaching, or conducting research in other countries. EECS recipients include Annamaria Bair ’18, MEng ’19; Abigail Bertics ’18; Miranda McClellan ’18, MEng ’19; and Samira Okudo ’18.

Google PhD Fellowships: Two EECS students received 2019 Google PhD Fellowships. Amy Greene ’14, SM ’16 received the Google Fellowship in Quantum Computing, while Shibani Santurkar SM ’17 received the Google Fellowship in Machine Learning.

Hertz Fellowship: First-year EECS PhD student Dylan Cable was among 11 scholars from nine U.S. research universities selected for a 2019 Hertz Research Fellowship from the nonprofit Fannie and John Hertz Foundation. Fellowship recipients receive up to $250,000 for up to five years.

Imperial College Exchange: Four EECS undergraduates were selected to participate in the new MIT-Imperial Academic Exchange program, allowing them to study at Imperial College London for the spring and summer 2019 terms. They are: Lily Bailey, Michael Hiebert, Dain Kim, and Chase Warren.

Siebel Scholars: Five EECS graduate students are among 96 worldwide chosen for the Siebel Scholars Class of 2019. Each will receive a $35,000 award to cover the final year of graduate study. The 2019 EECS Siebel Scholars include Nichole Clarke ’18, MEng ’19; Logan Engstrom ’19, MEng ’19; Alireza Fallah SM ’19; James Mawdsley ’18, MEng ’19; and Andrew Mullen ’17, MEng ’19.

Soros Fellowship: Helen Zhou ’17, MEng ’18, and current PhD student Jonathan Zong were among just 30 people nationwide to receive 2019 Paul and Daisy Soros Fellowships for New Americans. The awards are for U.S. immigrants or children of immigrants, based on their potential to make significant contributions to U.S. society, culture, or academia. They will receive up to $90,000 apiece to fund their doctoral educations.

Microsoft Research PhD Fellowship: EECS graduate student Joana M. F. da Trindade was among 10 students nationwide who received a 2019 Microsoft Research PhD Fellowship. The prestigious fellowships cover tuition and fees for two academic years and provide a $42,000 stipend to help with living expenses and conference travel. Recipients also receive invitations to interview for salaried research internships and to attend the PhD Summit at Microsoft.

In addition, EECS presented awards to a variety of students at the annual EECS Celebrates ceremony in May 2019.

Please send news about EECS student awards to eecs-communications@mit.edu.

 

 

Date Posted: 

Thursday, June 27, 2019 - 5:30pm


Card Description: 

PhD candidate Arman Rezaee, who won MIT’s 2019 Collier Medal, was among numerous EECS students to receive major honors in the past year.


Dina Katabi honored for contributions to American society


Professor Dina Katabi

Anne Stuart | EECS 

Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, has been named as a Great Immigrant by the Carnegie Corporation of New York. Katabi, who was born in Syria, is among 38 naturalized citizens from 35 countries of origin who are being celebrated for their contributions to American society.

Katabi, a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL), was recognized for her “efforts to improve the speed, reliability, and data security of wireless networks.” In particular, the Carnegie Corporation cited Katabi’s work in “developing a wireless device that is able to track motion using radio signals reflected off of the human body, even through walls ­— a technology that has great potential for medical use.”

For example, the device can help medical personnel monitor patients’ breathing, heartbeat, and other vital signs — at home and without sensors. “Using artificial intelligence, the device is also able to recommend a course of action, such as a phone call to a care provider in the case of an elderly person who has fallen,” the organization noted.

Katabi, who joined the EECS faculty in 2003, also leads the Networks at MIT (NETMIT) research group and is a director of the MIT Center for Wireless Networks and Mobile Computing (Wireless@MIT), both in CSAIL.

Among other honors, Katabi has received a MacArthur Fellowship, the Association for Computing Machinery (ACM) Prize in Computing, the ACM Grace Murray Hopper Award,  a Test of Time Award from the ACM’s Special Interest Group on Data Communications (SIGCOMM), a National Science Foundation (NSF) CAREER Award, and a Sloan Research Fellowship. She is an ACM Fellow and was elected to the National Academy of Engineering. She received a bachelor’s degree from Damascus University and master’s and PhD degrees from MIT.

The Carnegie Corporation of New York, a philanthropic organization founded in 1911 by industrialist and Scottish immigrant Andrew Carnegie, launched its annual Great Immigrants program in 2006. This year’s class of Great Immigrants also includes Angelika Amon, the Marble Professor of Cancer Research in MIT’s Department of Biology; Honeywell CEO Darius Adamczyk; CNN anchor Wolf Blitzer; Nobel Prize-winning economist Angus Deaton; violinist Midori; former Miami Mayor Tomás Pedro Regalado; New York Yankees relief pitcher Mariano Rivera; Linux developer Linus Torvalds; and Planned Parenthood President Leana Wen, among other luminaries.

 

Date Posted: 

Thursday, June 27, 2019 - 5:45pm


Card Description: 

The EECS faculty member is named as a 'Great Immigrant' by the Carnegie Corporation of New York.


Drag-and-drop data analytics


For years, researchers from MIT and Brown University have been developing an interactive system that lets users drag-and-drop and manipulate data on any touchscreen. Now, they’ve added a tool that automatically generates machine-learning models to run prediction tasks on that data. Image: Melanie Gonick

In the "Iron Man" movies, Tony Stark uses a holographic computer to project 3-D data into thin air, manipulate them with his hands, and find fixes to his superhero troubles. In the same vein, researchers from MIT and Brown University have now developed a system for interactive data analytics that runs on touchscreens and lets everyone — not just billionaire tech geniuses — tackle real-world issues.

For years, the researchers have been developing an interactive data-science system called Northstar, which runs in the cloud but has an interface that supports any touchscreen device, including smartphones and large interactive whiteboards. Users feed the system datasets, and manipulate, combine, and extract features on a user-friendly interface, using their fingers or a digital pen, to uncover trends and patterns.

In a paper being presented at the Association for Computing Machinery (ACM) Special Interest Group on Management of Data (SIGMOD) conference, the researchers — including Tim Kraska, an EECS associate professor and long-time Northstar project lead — detail a new component of Northstar. Called VDS, for “virtual data scientist,” the component instantly generates machine-learning models to run prediction tasks on their datasets. Doctors, for instance, can use the system to help predict which patients are more likely to have certain diseases, while business owners might want to forecast sales. If using an interactive whiteboard, everyone can also collaborate in real-time.  

The aim is to democratize data science by making it easy to do complex analytics, quickly and accurately.

“Even a coffee shop owner who doesn’t know data science should be able to predict their sales over the next few weeks to figure out how much coffee to buy,” says Kraska, the paper's co-author, who is a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and founding co-director of the new Data System and AI Lab (DSAIL). “In companies that have data scientists, there’s a lot of back and forth between data scientists and nonexperts, so we can also bring them into one room to do analytics together.”

VDS is based on an increasingly popular technique in artificial intelligence called automated machine-learning (AutoML), which lets people with limited data-science know-how train AI models to make predictions based on their datasets. Currently, the tool leads the DARPA D3M Automatic Machine Learning competition, which every six months decides on the best-performing AutoML tool.    

Joining Kraska on the paper are: first author Zeyuan Shang, an EECS graduate student, and Emanuel Zgraggen, a postdoc and main contributor of Northstar, both of EECS, CSAIL, and DSAIL; Benedetto Buratti, Yeounoh Chung, Philipp Eichmann, and Eli Upfal, all of Brown; and Carsten Binnig, who recently moved from Brown to the Technical University of Darmstadt in Germany.

An “unbounded canvas” for analytics

The new work builds on years of collaboration on Northstar between researchers at MIT and Brown. Over four years, the researchers have published numerous papers detailing components of Northstar, including the interactive interface, operations on multiple platforms, accelerating results, and studies on user behavior.

Northstar starts as a blank, white interface. Users upload datasets into the system, which appear in a “datasets” box on the left. Any data labels will automatically populate a separate “attributes” box below. There’s also an “operators” box that contains various algorithms, as well as the new AutoML tool. All data are stored and analyzed in the cloud.

The researchers like to demonstrate the system on a public dataset that contains information on intensive care unit patients. Consider medical researchers who want to examine co-occurrences of certain diseases in certain age groups. They drag and drop into the middle of the interface a pattern-checking algorithm, which at first appears as a blank box. As input, they move into the box disease features labeled, say, “blood,” “infectious,” and “metabolic.” Percentages of those diseases in the dataset appear in the box. Then, they drag the “age” feature into the interface, which displays a bar chart of the patient’s age distribution. Drawing a line between the two boxes links them together. By circling age ranges, the algorithm immediately computes the co-occurrence of the three diseases among the age range.  

“It’s like a big, unbounded canvas where you can lay out how you want everything,” says Zgraggen, who is the key inventor of Northstar’s interactive interface. “Then, you can link things together to create more complex questions about your data.”

Approximating AutoML

With VDS, users can now also run predictive analytics on that data by getting models custom-fit to their tasks, such as data prediction, image classification, or analyzing complex graph structures.

Using the above example, say the medical researchers want to predict which patients may have blood disease based on all features in the dataset. They drag and drop “AutoML” from the list of algorithms. It’ll first produce a blank box, but with a “target” tab, under which they’d drop the “blood” feature. The system will automatically find best-performing machine-learning pipelines, presented as tabs with constantly updated accuracy percentages. Users can stop the process at any time, refine the search, and examine each model’s error rates, structure, computations, and other things.

According to the researchers, VDS is the fastest interactive AutoML tool to date, thanks, in part, to their custom “estimation engine.” The engine sits between the interface and the cloud storage. The engine automatically creates several representative samples of a dataset that can be progressively processed to produce high-quality results in seconds.

“Together with my co-authors I spent two years designing VDS to mimic how a data scientist thinks,” Shang says, meaning it instantly identifies which models and preprocessing steps it should or shouldn’t run on certain tasks, based on various encoded rules. It first chooses from a large list of those possible machine-learning pipelines and runs simulations on the sample set. In doing so, it remembers results and refines its selection. After delivering fast approximated results, the system refines the results in the back end. But the final numbers are usually very close to the first approximation.
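The general idea of trading a little accuracy for immediate feedback can be sketched independently of Northstar's internals. The toy loop below is not the VDS engine: it evaluates hypothetical candidate pipelines on progressively larger random samples and reports the current ranking after every round.

    # Conceptual sketch of progressive approximation (not the VDS engine).
    # `candidates` is a list of (name, factory) pairs, where each factory
    # returns a fresh model exposing fit(X, y) and score(X, y); these are
    # hypothetical stand-ins for real machine-learning pipelines.
    import random


    def progressive_automl(candidates, X, y, sample_sizes=(500, 2000, 10000)):
        """Yield a ranking of (name, approximate accuracy) after each round."""
        indices = list(range(len(X)))
        for n in sample_sizes:
            sample = random.sample(indices, min(n, len(indices)))
            split = int(0.8 * len(sample))
            train, test = sample[:split], sample[split:]
            scores = {}
            for name, factory in candidates:
                model = factory()
                model.fit([X[i] for i in train], [y[i] for i in train])
                scores[name] = model.score([X[i] for i in test], [y[i] for i in test])
            # A front end could already display this ranking while the next,
            # larger sample refines it in the background.
            yield sorted(scores.items(), key=lambda kv: kv[1], reverse=True)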

“For using a predictor, you don’t want to wait four hours to get your first results back. You want to already see what’s going on and, if you detect a mistake, you can immediately correct it. That’s normally not possible in any other system,” Kraska says. The researchers’ previous user studies, in fact, “show that the moment you delay giving users results, they start to lose engagement with the system.”

The researchers evaluated the tool on 300 real-world datasets. Compared with other state-of-the-art AutoML systems, VDS’ approximations were as accurate but were generated within seconds, far faster than the minutes to hours other tools require.

Next, the researchers are looking to add a feature that alerts users to potential data bias or errors. For instance, to protect patient privacy, sometimes researchers will label medical datasets with patients aged 0 (if they do not know the age) and 200 (if a patient is over 95 years old). But novices may not recognize such errors, which could completely throw off their analytics.  

“If you’re a new user, you may get results and think they’re great,” Kraska says. “But we can warn people that there, in fact, may be some outliers in the dataset that may indicate a problem.”

For additional information on this story, including video clips, please visit the MIT News website.

 

Date Posted: 

Thursday, June 27, 2019 - 6:15pm


Card Description: 

The system lets nonspecialists use machine-learning models to make predictions for medical research, sales, and more.



Phi Beta Kappa inducts 20 EECS graduates


The Phi Beta Kappa Society, the nation’s oldest academic honor society, invited 76 graduating seniors from the Class of 2019 into the MIT chapter, Xi of Massachusetts. Twenty EECS-affiliated students were among this year's inductees.

Phi Beta Kappa (PBK), founded in 1776 at the College of William and Mary, honors the nation’s most outstanding undergraduate students for excellence in the liberal arts, which include the humanities, the arts, sciences, and social sciences. Only 10 percent of higher education institutions have PBK chapters, and fewer than 10 percent of students at those institutions are selected for membership.

Congratulations to the newest EECS members of Phi Beta Kappa:

  • Abigail C. Bertics, La Jolla, California
  • Run Chen, Hangzhou, China
  • Neena Dugar, Doncaster, England
  • Rohan S. Kodialam, Marlboro, New Jersey
  • Chung-Yueh Lin, Taipei, Taiwan
  • Sophie Mori, Marietta, Georgia
  • Max K. Murin, Seattle, Washington
  • Timothy L. Ngotiaoco, Fremont, California
  • Fernando A. Ortiz-Soto, Bayamon, Puerto Rico
  • Priya P. Pillai, Oak Brook, Illinois
  • Elijah E. Rivera, Lake City, Florida
  • Basil N. Saeed, Jenin, West Bank
  • Helena A. Sakharova, Ridgewood, New Jersey
  • Christopher Shao, Princeton, New Jersey
  • Sky Shin, Seoul, South Korea
  • Kimberly M. Villalobos Carballo, Alajuela, Costa Rica
  • Katherine Y. Wang, Chapel Hill, North Carolina
  • Grace Q. Yin, Cambridge, Massachusetts
  • Yueyang Ying, Rockville, Maryland
  • Yunkun Zhou, Foster City, California

 

 

Date Posted: 

Friday, June 28, 2019 - 4:00pm


Card Description: 

In all, 76 members of MIT's Class of 2019 were invited to join the prestigious honor society.


EECS Professor Erik Demaine honored for innovative teaching


EECS Professor Erik Demaine

MIT Open Learning

Editor's note: For a related video of Erik Demaine's class, please visit the MIT News website.

EECS faculty member Erik Demaine is among seven MIT educators honored recently for significant innovations in digital learning.

Demaine, also a principal investigator in the Computer Science and Artificial Intelligence Lab (CSAIL), received a Teaching with Digital Technology Award for his course 6.892 (Fun with Hardness Proofs). Co-sponsored by MIT Open Learning and the Office of the Vice Chancellor, the student-nominated awards are presented to faculty and instructors who have improved teaching and learning at MIT with digital technology.

Demaine's course flipped the traditional classroom model. Instead of lecturing in person, all lectures were posted online and problems were done in class. That approach allowed the students to spend class time working together on collaborative problem solving through an online application that Demaine created, called Coauthor.

MIT students nominated 117 faculty and instructors for this award this year, more than in any previous year. Other winners included John Belcher, Class of 1922 Professor of Physics; Amy Carleton, lecturer in Comparative Media Studies/Writing; and Jared Curhan, associate professor of work and organization studies at the MIT Sloan School of Management.

Three other MIT educators shared the third annual MITx Prize for Teaching and Learning in MOOCs for their work in developing massive open online courses (better known as MOOCs). They are: Polina Anikeeva, associate professor in the Department of Materials Science and Engineering (DMSE); Martin Z. Bazant, E.G. Roos (1944) Professor of Chemical Engineering; and Jessica Sandland, a DMSE lecturer and MITx Digital Learning Scientist.

For an extended report on all the digital-learning award winners, please visit the MIT News website.

 

Date Posted: 

Wednesday, July 3, 2019 - 12:15pm


Card Description: 

EECS faculty member is among several recognized for using digital technology to improve classroom instruction and student engagement.


EECS Professor Emeritus Fernando Corbato, MIT computing pioneer, dies at 93


Professor Emeritus Fernando “Corby” Corbató. Photo courtesy of the Corbató family.

Fernando “Corby” Corbató, an MIT professor emeritus whose work in the 1960s on time-sharing systems broke important ground in democratizing the use of computers, died on Friday, July 12, at his home in Newburyport, Massachusetts. He was 93.

Decades before the existence of concepts like cybersecurity and the cloud, Corbató led the development of one of the world’s first operating systems. His “Compatible Time-Sharing System” (CTSS) allowed multiple people to use a computer at the same time, greatly increasing the speed at which programmers could work. It’s also widely credited as the first computer system to use passwords.

After CTSS, Corbató led a time-sharing effort called Multics, which directly inspired operating systems like Linux and laid the foundation for many aspects of modern computing. Multics doubled as a fertile training ground for an emerging generation of programmers that included C programming language creator Dennis Ritchie, Unix developer Ken Thompson, and spreadsheet inventors Dan Bricklin and Bob Frankston.

Before time-sharing, using a computer was tedious and required detailed knowledge. Users would create programs on cards and submit them in batches to an operator, who would enter them to be run one at a time over a series of hours. Minor errors would require repeating this sequence, often more than once.

But with CTSS, which was first demonstrated in 1961, answers came back in mere seconds, forever changing the model of program development. Decades before the PC revolution, Corbató and his colleagues also opened up communication between users with early versions of email, instant messaging, and word processing. 

“Corby was one of the most important researchers for making computing available to many people for many purposes,” says long-time colleague Tom Van Vleck. “He saw that these concepts don’t just make things more efficient; they fundamentally change the way people use information.”

Besides making computing more efficient, CTSS also inadvertently helped establish the very concept of digital privacy itself. With different users wanting to keep their own files private, CTSS introduced the idea of having people create individual accounts with personal passwords. Corbató’s vision of making high-performance computers available to more people also foreshadowed trends in cloud computing, in which tech giants like Amazon and Microsoft rent out shared servers to companies around the world. 

“Other people had proposed the idea of time-sharing before,” says Jerry Saltzer, who worked on CTSS with Corbató after starting out as his teaching assistant. “But what he brought to the table was the vision and the persistence to get it done.”

CTSS was also the spark that convinced MIT to launch “Project MAC,” the precursor to the Laboratory for Computer Science (LCS). LCS later merged with the Artificial Intelligence Lab to become MIT’s largest research lab, the Computer Science and Artificial Intelligence Laboratory (CSAIL), which is now home to more than 600 researchers. 

“It’s no overstatement to say that Corby’s work on time-sharing fundamentally transformed computers as we know them today,” says CSAIL Director Daniela Rus. “From PCs to smartphones, the digital revolution can directly trace its roots back to the work that he led at MIT nearly 60 years ago.” 

In 1990, Corbató was honored for his work with the Association for Computing Machinery’s Turing Award, often described as “the Nobel Prize for computing.”

From sonar to CTSS

Corbató was born on July 1, 1926, in Oakland, California. At 17, he enlisted as a technician in the U.S. Navy, where he first got the engineering bug working on a range of radar and sonar systems. After World War II he earned his bachelor's degree at Caltech before heading to MIT to complete a PhD in physics. 

As a PhD student, Corbató met Professor Philip Morse, who recruited him to work with his team on Project Whirlwind, the first computer capable of real-time computation. After graduating, Corbató joined MIT's Computation Center as a research assistant, soon moving up to become deputy director of the entire center. 

It was there that he started thinking about ways to make computing more efficient. For all its innovation, Whirlwind was still a rather clunky machine. Researchers often had trouble getting much work done on it, since they had to take turns using it for half-hour chunks of time. (Corbató said that it had a habit of crashing every 20 minutes or so.) 

Since computer input and output devices were much slower than the computer itself, in the late 1950s a scheme called multiprogramming was developed to allow a second program to run whenever the first program was waiting for some device to finish. Time-sharing built on this idea, allowing other programs to run while the first program was waiting for a human user to type a request, thus allowing the user to interact directly with the first program.
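A toy version of the scheduling idea can be written in a few lines of modern Python; this is only a conceptual illustration and bears no resemblance to how CTSS was actually implemented. Each "job" yields whenever it would block on slow input or output, and a round-robin loop hands the processor to whichever job is ready next.

    # Conceptual toy only, nothing like CTSS itself: jobs are generators that
    # yield when they would block (e.g., waiting for a user to type), and a
    # round-robin loop shares the "processor" among them.
    from collections import deque


    def job(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # pretend to wait on slow I/O; give up the processor


    def round_robin(jobs):
        queue = deque(jobs)
        while queue:
            current = queue.popleft()
            try:
                next(current)          # run one time slice
                queue.append(current)  # not finished: back of the line
            except StopIteration:
                pass                   # job finished; drop it


    round_robin([job("alice", 3), job("bob", 2)])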

Saltzer says that Corbató pioneered a programming approach that would be described today as agile design. 

“It’s a buzzword now, but back then it was just this iterative approach to coding that Corby encouraged and that seemed to work especially well,” he says.  

In 1962, Corbató published a paper about CTSS that quickly became the talk of the slowly-growing computer science community. The following year, MIT invited several hundred programmers to campus to try out the system, spurring a flurry of further research on time-sharing.

Foreshadowing future technological innovation, Corbató was amazed — and amused — by how quickly people got habituated to CTSS’ efficiency.

“Once a user gets accustomed to [immediate] computer response, delays of even a fraction of a minute are exasperatingly long,” he presciently wrote in his 1962 paper. “First indications are that programmers would readily use such a system if it were generally available.”

Multics, meanwhile, expanded on CTSS’ more ad hoc design with a hierarchical file system, better interfaces to email and instant messaging, and more precise privacy controls. Peter Neumann, who worked at Bell Labs when they were collaborating with MIT on Multics, says that its design prevented the possibility of many vulnerabilities that impact modern systems, like “buffer overflow” (which happens when a program tries to write data outside the computer’s short-term memory). 

“Multics was so far ahead of the rest of the industry,” says Neumann. “It was intensely software-engineered, years before software engineering was even viewed as a discipline.” 

In spearheading these time-sharing efforts, Corbató served as a soft-spoken but driven commander in chief — a logical thinker who led by example and had a distinctly systems-oriented view of the world.

“One thing I liked about working for Corby was that I knew he could do my job if he wanted to,” says Van Vleck. “His understanding of all the gory details of our work inspired intense devotion to Multics, all while still being a true gentleman to everyone on the team.” 

Another legacy of the professor’s is “Corbató’s Law,” which states that the number of lines of code someone can write in a day is the same regardless of the language used. This maxim is often cited by programmers when arguing in favor of using higher-level languages.

Corbató was an active member of the MIT community, serving as associate department head for computer science and engineering from 1974 to 1978 and 1983 to 1993. He was a member of the National Academy of Engineering, and a fellow of the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science. 

Corbató is survived by his wife, Emily Corbató, from Brooklyn, New York; his stepsons, David and Jason Gish; his brother, Charles; and his daughters, Carolyn and Nancy, from his marriage to his late wife Isabel; and five grandchildren. 

In lieu of flowers, gifts may be made to MIT’s Fernando Corbató Fellowship Fund via Bonny Kellermann in the Memorial Gifts Office. 

CSAIL will host an event to honor and celebrate Corbató in the coming months.

For additional information on Professor Corbató's life and legacy, please visit the MIT News website.
 

 

Date Posted: 

Tuesday, July 16, 2019 - 10:30am


Card Description: 

Longtime MIT professor developed early “time-sharing” operating systems and is widely credited as the creator of the world’s first computer password.


EECS Professor Patrick Winston, former director of the MIT Artificial Intelligence Laboratory, dies at 76

$
0
0

Patrick Winston, a beloved professor and computer scientist at MIT, died on July 19 at Massachusetts General Hospital in Boston. He was 76.
 
A professor at MIT for almost 50 years, Winston was director of MIT’s Artificial Intelligence Laboratory from 1972 to 1997 before it merged with the Laboratory for Computer Science to become MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
 
A devoted teacher and cherished colleague, Winston led CSAIL’s Genesis Group, which focused on developing AI systems that have human-like intelligence, including the ability to tell, perceive, and comprehend stories. He believed that such work could help illuminate aspects of human intelligence that scientists don’t yet understand.
 
“My principal interest is in figuring out what’s going on inside our heads, and I’m convinced that one of the defining features of human intelligence is that we can understand stories,” Winston, the Ford Professor of Artificial Intelligence and Computer Science, said in a 2011 interview for CSAIL. “Believing as I do that stories are important, it was natural for me to try to build systems that understand stories, and that shed light on what the story-understanding process is all about.”
 
He was renowned for his accessible and informative lectures, and gave a hugely popular talk every year during the Independent Activities Period called “How to Speak.” 
 
“As a speaker he always had his audience in the palm of his hand,” says MIT Professor Peter Szolovits. “He put a tremendous amount of work into his lectures, and yet managed to make them feel loose and spontaneous. He wasn’t flashy, but he was compelling and direct.”
 
Winston’s dedication to teaching earned him many accolades over the years, including the Baker Award, the Eta Kappa Nu Teaching Award, and the Graduate Student Council Teaching Award.
 
“Patrick’s humanity and his commitment to the highest principles made him the soul of EECS,” MIT President L. Rafael Reif wrote in a letter to the MIT community. “I called on him often for advice and feedback, and he always responded with kindness, candor, wisdom and integrity.  I will be forever grateful for his counsel, his objectivity, and his tremendous inspiration and dedication to our students.”
 
Teaching computers to think

Born Feb. 5, 1943, in Peoria, Illinois, Winston was always exceptionally curious about science, technology and how to use such tools to explore what it means to be human. He was an MIT "lifer" starting in 1961, earning his bachelor’s, master’s and doctoral degrees from the Institute before joining the faculty of the Department of Electrical Engineering and Computer Science in 1970.
 
His thesis work with Marvin Minsky centered on the difficulty of learning, setting off a trajectory of work where he put a playful, yet laser-sharp focus on fine-tuning AI systems to better understand stories.
 
His Genesis project aimed to faithfully model computers after human intelligence in order to fully grasp the inner workings of our own motivations, rationality, and perception. Using MIT research scientist Boris Katz’s START natural language processing system and a vision system developed by former MIT PhD student Sajit Rao, Genesis can digest short, simple chunks of text, then spit out reports about how it interpreted connections between events.
 
While the system has processed many works, Winston chose “Macbeth” as a primary text because the tragedy offers an opportunity to take big human themes, such as greed and revenge, and map out their components.
 
“[Shakespeare] was pretty good at his portrayal of ‘the human condition,’ as my friends in the humanities would say,” Winston told The Boston Globe. “So there’s all kinds of stuff in there about what’s typical when we humans wander through the world.”
 
His deep fascination with humanity, human intelligence, and how we communicate information spilled over into what he often described as his favorite academic activity: teaching.
 
“He was a superb educator who introduced the field to generations of students,” says EECS Professor and longtime colleague Randall Davis. “His lectures had an uncanny ability to move in minutes from the details of an algorithm to the larger issues it illustrated, to yet larger lessons about how to be a scientist and a human being.”
 
A past president of the Association for the Advancement of Artificial Intelligence (AAAI), Winston also wrote and edited numerous books, including a seminal textbook on AI that’s still used in classrooms around the world. Outside of the lab he also co-founded Ascent Technology, which produces scheduling and workforce management applications for major airports.
 
He is survived by his wife Karen Prendergast and his daughter Sarah.

For more on Professor Winston's life and legacy, please visit the MIT News website.

Date Posted: 

Friday, July 19, 2019 - 7:45pm


Card Description: 

The beloved faculty member conducted pioneering research on imbuing machines with human-like intelligence, including the ability to understand stories.


Want to know what software-driven health care looks like? This class offers some clues

$
0
0

Students in 6.S897 at a health care poster session. Photo: Irene Chen

MIT professors David Sontag and Peter Szolovits don’t assign a textbook for their class, 6.S897/HST.956 (Machine Learning for Healthcare), because there isn’t one. Instead, students read scientific papers, solve problem sets based on current topics like opioid addiction and infant mortality, and meet the doctors and engineers paving the way for a more data-driven approach to health care. Jointly offered by MIT’s Department of Electrical Engineering and Computer Science (EECS) and the Harvard-MIT Program in Health Sciences and Technology, the class is one of just a handful offered across the country.

“Because it’s a new field, what we teach will help shape how AI is used to diagnose and treat patients,” says Irene Chen, an EECS graduate student who helped design and teach the course. “We tried to give students the freedom to be creative and explore the many ways machine learning is being applied to health care.”

Two-thirds of the syllabus this spring was new. Students were introduced to the latest machine-learning algorithms for analyzing doctors’ clinical notes, patient medical scans, and electronic health records, among other data. Students also explored the risks of using automated methods to explore large, often messy observational datasets, from confusing correlation with causation to understanding how AI models can make bad decisions based on biased data or faulty assumptions.

With all of the hype around AI, the course had more takers than seats. After 100 students showed up on the first day, students were assigned a quiz to test their knowledge of statistics and other prerequisites. That helped whittle the class down to 70. Michiel Bakker, a graduate student at the MIT Media Lab, made the cut and says the course exposed him to medical concepts that most engineering courses don’t provide.

“In machine learning, the data are often either images or text,” he says. “Here we learned the importance of combining genetic data with medical images with electronic health records. To use machine learning in health care you have to understand the problems, how to combine techniques and anticipate where things could go wrong.”

Most lectures and homework problems focused on real-world scenarios, drawing from MIT’s MIMIC critical care database and a subset of the IBM MarketScan Research Databases focused on insurance claims. The course also featured regular guest lectures by Boston-area clinicians. In a reversal of roles, students held office hours for doctors interested in integrating AI into their practice. 

“There are so many people in academia working on machine learning, and so many doctors at hospitals in Boston,” says Willie Boag, an EECS graduate student who helped design and teach the course. “There’s so much opportunity in fostering conversation between these groups.”

In health care, as in other fields where AI has made inroads, regulators are discussing what rules should be put in place to protect the public. The U.S. Food and Drug Administration recently released a draft framework for regulating AI products, which students got to review and comment on, in class and in feedback published online in the Federal Register.

Andy Coravos, a former entrepreneur in residence at the FDA, now CEO of Elektra Labs in Boston, helped lead the discussion and was impressed by the quality of the comments. “Many students identified test cases relevant to the current white paper, and used those examples to draft public comments for what to keep, add, and change in future iterations,” she says.

The course culminated in a final project in which teams of students used the MIMIC and IBM datasets to explore a timely question in the field. One team analyzed insurance claims to explore regional variation in screening patients for early-stage kidney disease. Many patients with hypertension and diabetes are never tested for chronic kidney disease, even though both conditions put them at high risk. The students found that they could predict fairly well who would be screened, and that screening rates diverged most between the southern and northeastern United States.

“If this work were to continue, the next step would be to share the results with a doctor and get their perspective,” says team member Matt Groh, a PhD student at the MIT Media Lab. “You need that cross-disciplinary feedback.”

The MIT-IBM Watson AI Lab made the anonymized data available, and provided student access to the IBM cloud, out of an interest in helping to educate the next generation of scientists and engineers, says Kush Varshney, principal research staff member and manager at IBM Research. “Health care is messy and complex, which is why there are no substitutes for working with real-world data,” he says.

Szolovits agrees. Using synthetic data would have been easier but far less meaningful. “It’s important for students to grapple with the complexities of real data,” he says. “Any researcher developing automated techniques and tools to improve patient care needs to be sensitive to its many nuances.” (In May 2019, Szolovits and Sontag received the Burgess (1952) and Elizabeth Jamieson Prize for Excellence in Teaching in recognition of their work in developing the class.)

In a recent recap on Twitter, Chen gave shout-outs to the students, guest lecturers, professors, and her fellow teaching assistant. She also reflected on the joys of teaching. “Research is rewarding and often fun, but helping someone see your field with fresh eyes is insanely cool.”

 

Date Posted: 

Friday, July 26, 2019 - 4:45pm


Card Description: 

An EECS-based course that combines machine learning and health care explores the promise of applying artificial intelligence to medicine.


Microfluidics device helps diagnose sepsis in minutes

$
0
0

A microfluidics device could help doctors diagnose sepsis, a leading cause of death in U.S. hospitals. Image: Felice Frankel

A novel sensor designed by MIT researchers could dramatically accelerate the process of diagnosing sepsis, a leading cause of death in U.S. hospitals that kills nearly 250,000 patients annually.

Sepsis occurs when the body’s immune response to infection triggers an inflammation chain reaction throughout the body, causing high heart rate, high fever, shortness of breath, and other issues. If left unchecked, it can lead to septic shock, where blood pressure falls and organs shut down. To diagnose sepsis, doctors traditionally rely on various diagnostic tools, including vital signs, blood tests, and other imaging and lab tests.

In recent years, researchers have found protein biomarkers in the blood that are early indicators of sepsis. One promising candidate is interleukin-6 (IL-6), a protein produced in response to inflammation. In sepsis patients, IL-6 levels can rise hours before other symptoms begin to show. But even at these elevated levels, the concentration of this protein in the blood is too low overall for traditional assay devices to detect it quickly.

In a paper being presented this week at the Engineering in Medicine and Biology Conference, MIT researchers describe a microfluidics-based system that automatically detects clinically significant levels of IL-6 for sepsis diagnosis in about 25 minutes, using less than a finger prick of blood.

In one microfluidic channel, microbeads laced with antibodies mix with a blood sample to capture the IL-6 biomarker. In another channel, only beads containing the biomarker attach to an electrode. Running voltage through the electrode produces an electrical signal for each biomarker-laced bead, which is then converted into the biomarker concentration level.

“For an acute disease, such as sepsis, which progresses very rapidly and can be life-threatening, it’s helpful to have a system that rapidly measures these nonabundant biomarkers,” says first author Dan Wu, a PhD student in the Department of Mechanical Engineering. “You can also frequently monitor the disease as it progresses.”

Joining Wu on the paper is Joel Voldman, a professor and associate head of the Department of Electrical Engineering and Computer Science, co-director of the Medical Electronic Device Realization Center, and a principal investigator in the Research Laboratory of Electronics and the Microsystems Technology Laboratories.

Integrated, automated design

Traditional assays that detect protein biomarkers are bulky, expensive machines relegated to labs that require about a milliliter of blood and produce results in hours. In recent years, portable “point-of-care” systems have been developed that use microliters of blood to get similar results in about 30 minutes.

But point-of-care systems can be very expensive, since most use pricey optical components to detect the biomarkers. They also capture only a small number of proteins, many of which are among the more abundant ones in blood. Any effort to decrease the price, shrink components, or increase the range of detectable proteins negatively impacts sensitivity.

In their work, the researchers wanted to shrink the components of the magnetic-bead-based assay, which is often used in labs, onto an automated microfluidics device measuring just a few square centimeters. That required manipulating beads in micron-sized channels and fabricating a device in the Microsystems Technology Laboratories that automated the movement of fluids.

The beads are coated with an antibody that attracts IL-6, as well as a catalyzing enzyme called horseradish peroxidase. The beads and blood sample are injected into the device, entering into an “analyte-capture zone,” which is basically a loop. Along the loop is a peristaltic pump — commonly used for controlling liquids — with valves automatically controlled by an external circuit. Opening and closing the valves in specific sequences circulates the blood and beads to mix together. After about 10 minutes, the IL-6 proteins have bound to the antibodies on the beads.

Automatically reconfiguring the valves at that time forces the mixture into a smaller loop, called the “detection zone,” where they stay trapped. A tiny magnet collects the beads for a brief wash before releasing them around the loop. After about 10 minutes, many beads have stuck on an electrode coated with a separate antibody that attracts IL-6. At that time, a solution flows into the loop and washes the untethered beads, while the ones with IL-6 protein remain on the electrode.

The solution carries a specific molecule that reacts to the horseradish enzyme to create a compound that responds to electricity. When a voltage is applied to the solution, each remaining bead creates a small current. A common chemistry technique called “amperometry” converts that current into a readable signal. The device counts the signals and calculates the concentration of IL-6.
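
The article describes the readout only at this level of detail, so the following is a minimal sketch, not the device's actual firmware: it assumes a hypothetical linear calibration curve (the `SLOPE_NA_PER_PG_ML` and `BASELINE_NA` constants are purely illustrative) relating the measured current to an IL-6 concentration, plus a clinician-supplied threshold for flagging elevated readings.

```python
# Minimal sketch (not the device's actual firmware): convert an amperometric
# current reading into an IL-6 concentration using a hypothetical linear
# calibration curve, then flag readings above a clinician-chosen threshold.

SLOPE_NA_PER_PG_ML = 0.8   # illustrative: nanoamps of signal per pg/mL of IL-6
BASELINE_NA = 2.5          # illustrative: background current with no IL-6 bound

def il6_concentration(current_na: float) -> float:
    """Estimate IL-6 concentration (pg/mL) from the measured current (nA)."""
    return max(0.0, (current_na - BASELINE_NA) / SLOPE_NA_PER_PG_ML)

def is_elevated(concentration_pg_ml: float, threshold_pg_ml: float) -> bool:
    """Compare the estimate against a threshold chosen by clinicians."""
    return concentration_pg_ml >= threshold_pg_ml

if __name__ == "__main__":
    reading_na = 18.1  # example current reading
    conc = il6_concentration(reading_na)
    print(f"Estimated IL-6: {conc:.1f} pg/mL, elevated: {is_elevated(conc, 100.0)}")
```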

“On their end, doctors just load in a blood sample using a pipette. Then, they press a button and 25 minutes later they know the IL-6 concentration,” Wu says.

The device uses about 5 microliters of blood, which is about a quarter the volume of blood drawn from a finger prick and a fraction of the 100 microliters required to detect protein biomarkers in lab-based assays. The device captures IL-6 concentrations as low as 16 picograms per milliliter, which is below the concentrations that signal sepsis, meaning the device is sensitive enough to provide clinically relevant detection.

A general platform

The current design has eight separate microfluidics channels to measure as many different biomarkers or blood samples in parallel. Different antibodies and enzymes can be used in separate channels to detect different biomarkers, or different antibodies can be used in the same channel to detect several biomarkers simultaneously.

Next, the researchers plan to create a panel of important sepsis biomarkers for the device to capture, including interleukin-6, interleukin-8, C-reactive protein, and procalcitonin. But there’s really no limit to how many different biomarkers the device can measure, for any disease, Wu says. Notably, more than 200 protein biomarkers for various diseases and conditions have been approved by the U.S. Food and Drug Administration.

“This is a very general platform,” Wu says. “If you want to increase the device’s physical footprint, you can scale up and design more channels to detect as many biomarkers as you want.”

The work was funded by Analog Devices, Maxim Integrated, and the Novartis Institutes for BioMedical Research.

 

Date Posted: 

Friday, July 26, 2019 - 5:00pm


Card Description: 

When time matters in hospitals, this automated system can detect an early biomarker for the potentially life-threatening condition.


Making it easier to program and protect the web

$
0
0

EECS Associate Professor Adam Chlipala. Photo: M. Scott Brauer

Behind the scenes of every web service, from a secure web browser to an entertaining app, is a programmer’s code, carefully written to ensure everything runs quickly, smoothly, and securely. For years, EECS Associate Professor Adam Chlipala has been toiling away behind the scenes, developing tools to help programmers more quickly and easily generate their code — and prove it does what it’s supposed to do.

Scanning the many publications on Chlipala’s webpage, you’ll find some commonly repeated keywords, such as “easy,” “automated,” and “proof.” Much of his work centers on designing simplified programming languages and app-making tools for programmers, systems that automatically generate optimized algorithms for specific tasks, and compilers that automatically prove that the complex math written in code is correct.

“I hope to save a lot of people a lot of time doing boring repetitive work, by automating programming work as well as decreasing the cost of building secure, reliable systems,” says Chlipala, who is a recently tenured professor of computer science, a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and head of the Programming Languages and Verification Group.

One of Chlipala’s recent systems automatically generates optimized — and mathematically proven — cryptographic algorithms, freeing programmers from hours upon hours of manually writing and verifying code by hand. And that system is now behind nearly all secure Google Chrome communications.

But Chlipala’s code-generating and mathematical proof systems can be used for a wide range of applications, from protecting financial transactions against fraud to ensuring autonomous vehicles operate safely. The aim, he says, is catching coding errors before they lead to real-world consequences.

“Today, we just assume that there’s going to be a constant flow of serious security problems in all major operating systems. But using formal mathematical methods, we should be able to automatically guarantee there will be far fewer surprises of that kind,” he says. “With a fixed engineering budget, we can suddenly do a lot more, without causing embarrassing or life-threatening disasters.”

A heart for system infrastructure

As he was growing up in the Lehigh Valley region of Pennsylvania, programming became “an important part of my self-identity,” Chlipala says. In the late 1980s, when Chlipala was young, his father, a researcher who ran physics experiments for AT&T Bell Laboratories, taught him some basic programming skills. He quickly became hooked.

In the late 1990s, when the family finally connected to the internet, Chlipala had access to various developer resources that helped him delve “into more serious stuff,” meaning designing larger, more complex programs. He worked on compilers — programs that translate programming language into machine-readable code — and web applications, “when apps were an avant-garde subject.”  

In fact, apps were then called “CGI scripts.” CGI is an acronym for Common Gateway Interface, which is a protocol that enables a program (or “script”) to talk to a server. In high school, Chlipala and some friends designed CGI scripts that connected them in an online forum for young programmers. “It was a means for us to start building our own system infrastructure,” he says.

And as an avid computer gamer, the logical thing for a teenaged Chlipala to do was design his own games. His first attempts were text-based adventures coded in the BASIC programming language. Later, in the C programming language, he designed a “Street Fighter”-like game, called Brimstone, and some simulated combat tabletop games.

It was exciting stuff for a high schooler. “But my heart was always in systems infrastructure, like code compilers and building help tools for old Windows operating systems,” Chlipala says.

From then on, Chlipala worked far in the background of web services, building the programming foundations for developers. “I’m several levels of abstraction removed from the type of computer programming that’s of any interest to any end-user,” he says, laughing.

Impact in the real world

After high school, in 2000, Chlipala enrolled at Carnegie Mellon University, where he majored in computer science and got involved in a programming language compiler research group. In 2007, he earned his PhD in computer science from the University of California at Berkeley, where his work focused on developing methods that can prove the mathematical correctness of algorithms.

After completing a postdoc at Harvard University, Chlipala came to MIT in 2011 to begin his teaching career. What drew Chlipala to MIT, in part, was an opportunity “to plug in a gap, where no one was doing my kind of proofs of computer systems’ correctness,” he says. “I enjoyed building that subject here from the ground up.”

Testing the source code that powers web services and computer systems today is computationally intensive. It mostly relies on running the code through tons of simulations, and correcting any caught bugs, until the code produces a desired output. But it’s nearly impossible to run the code through every possible scenario to prove it’s completely without error.

Chlipala’s research group instead focuses on eliminating the need for those simulations, by designing proven mathematical theorems that capture exactly how a given web service or computer system is supposed to behave. From that, they build algorithms that check if the source code operates according to that theorem, meaning it performs exactly how it’s supposed to, mostly during code compiling.

Even though such methods can be applied to any application, Chlipala likes to run his research group like a startup, encouraging students to target specific, practical applications for their research projects. In fact, two of his former students recently joined startups doing work connected to their thesis research.  

One student is working on developing a platform that lets people rapidly design, fabricate, and test their own computer chips. Another is designing mathematically proven systems to ensure the source code powering driverless car systems doesn’t contain errors that’ll lead to mistakes on the road. “In driverless cars, a bug could literally cause a crash, not just the ‘blue-screen death’ type of a crash,” Chlipala says.

Now on sabbatical from this summer until the end of the year, Chlipala is splitting his time between MIT research projects and launching his own startup based around tools that help people without programming experience create advanced apps. One such tool, which lets nonexperts build scheduling apps, has already found users among faculty and staff in his own department. About the new company, he says: “I’ve been into entrepreneurship over the last few years. But now that I have tenure, it’s a good time to get started.”

 

Date Posted: 

Tuesday, July 30, 2019 - 11:30am


Card Description: 

EECS Professor Adam Chlipala builds tools to help programmers more quickly generate optimized, secure code.



A much less invasive way to monitor pressure in the brain

$
0
0

Traumatic brain injuries, as well as infectious diseases such as meningitis, can lead to brain swelling and dangerously high pressure in the brain. If untreated, patients are at risk for brain damage, and in some cases elevated pressure can be fatal.

Current techniques for measuring pressure within the brain are so invasive that the measurement is only performed in the patients at highest risk. However, that may soon change, now that a team of researchers from MIT and Boston Children’s Hospital has devised a much less invasive way to monitor intracranial pressure (ICP).

“Ultimately the goal is to have a monitor at the bedside in which we only use minimally invasive or noninvasive measurements and produce estimates of ICP in real time,” says Thomas Heldt, the W. M. Keck Career Development Professor in Biomedical Engineering in MIT’s Institute of Medical Engineering and Science, an associate professor of electrical and biomedical engineering, and a principal investigator in MIT’s Research Laboratory of Electronics.

In a study of patients ranging in age from 2 to 25 years, the researchers showed that their measurement is nearly as accurate as the current gold standard technique, which requires drilling a hole in the skull.

Heldt is the senior author of the paper, which appears in the Aug. 23 issue of the Journal of Neurosurgery: Pediatrics. MIT research scientist Andrea Fanelli is the study’s lead author.

Elevated risk

Under normal conditions, ICP is between 5 and 15 millimeters of mercury (mmHg). When the brain suffers a traumatic injury or swelling caused by inflammation, pressure can go above 20 mmHg, impeding blood flow into the brain. This can lead to cell death from lack of oxygen, and in severe cases swelling pushes down on the brainstem — the area that controls breathing — and can cause the patient to lose consciousness or even stop breathing.

Measuring ICP currently requires drilling a hole in the skull and inserting a catheter into the ventricular space, which contains cerebrospinal fluid. This invasive procedure is only done for patients in intensive care units who are at high risk of elevated ICP. When a patient’s brain pressure becomes dangerously high, doctors can help relieve it by draining cerebrospinal fluid through a catheter inserted into the brain. In very severe cases, they remove a piece of the skull so the brain has more room to expand, then replace it once the swelling goes down.

Heldt first began working on a less invasive way to monitor ICP more than 10 years ago, along with George Verghese, Henry Ellis Warren Professor of Electrical Engineering at MIT, and then-graduate student Faisal Kashif. The researchers published a paper in 2012 in which they developed a way to estimate ICP based on two measurements: arterial blood pressure, which is taken by inserting a catheter at the patient’s wrist, and the velocity of blood flow entering the brain, measured by holding an ultrasound probe to the patient’s temple.

For that initial study, the researchers developed a mathematical model of the relationship between blood pressure, cerebral blood flow velocity, and ICP. They tested the model on data collected several years earlier from patients with traumatic brain injury at Cambridge University, with encouraging results.

In their new study, the researchers wanted to improve the algorithm that they were using to estimate ICP, and also to develop methods to collect their own data from pediatric patients.

They teamed up with Robert Tasker, director of the pediatric neurocritical care program at Boston Children’s Hospital and a co-author of the new paper, to identify patients for the study and help move the technology to the bedside. The system was tested only on patients whose guardians approved the procedure. Arterial blood pressure and ICP were already being measured as part of the patients’ routine monitoring, so the only additional element was the ultrasound measurement.

Fanelli also devised a way to automate the data analysis so that only data segments with the highest signal-to-noise ratio were used, making the estimates of ICP more accurate.

“We built a signal processing pipeline that was able to automatically detect the segments of data that we could trust versus the segments of data that were too noisy to be used for ICP estimation,” he says. “We wanted to have an automated approach that could be completely user-independent.”
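
The published pipeline is not spelled out in the article, so here is a minimal sketch of the general idea under stated assumptions: split a recording into fixed-length windows, estimate each window's signal-to-noise ratio against an assumed noise floor, and keep only the windows above a hypothetical threshold (the window length, noise floor, and 10 dB cutoff are all illustrative, not values from the study).

```python
import numpy as np

# Minimal sketch (not the study's actual pipeline): keep only data windows
# whose signal-to-noise ratio exceeds a hypothetical threshold, mirroring the
# idea of automatically selecting segments trustworthy enough for estimation.

def window_snr_db(signal: np.ndarray, noise_floor: float) -> float:
    """Rough SNR estimate in dB, comparing signal power to an assumed noise floor."""
    signal_power = np.mean(signal ** 2)
    return 10.0 * np.log10(signal_power / (noise_floor ** 2 + 1e-12))

def select_windows(recording: np.ndarray, window_len: int,
                   noise_floor: float, snr_threshold_db: float = 10.0):
    """Split a recording into windows and keep those above the SNR threshold."""
    n_windows = len(recording) // window_len
    kept = []
    for i in range(n_windows):
        window = recording[i * window_len:(i + 1) * window_len]
        if window_snr_db(window, noise_floor) >= snr_threshold_db:
            kept.append(window)
    return kept
```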

Expanded monitoring

The ICP estimates generated by this new technique were, on average, within about 1 mmHg of the measurements taken with the invasive method. “From a clinical perspective, it was well within the limits that we would consider useful,” Tasker says.

In this study, the researchers focused on patients with severe injuries because those are the patients who already had an invasive ICP measurement being done. However, a less invasive approach could allow ICP monitoring to be expanded to include patients with diseases such as meningitis and encephalitis, as well as malaria, which can all cause brain swelling.

“In the past, for these conditions, we would never consider ICP monitoring. What the current research has opened up for us is the possibility that we can include these other patients and try to identify not only whether they’ve got raised ICP but some degree of magnitude to that,” Tasker says.

“These findings are very encouraging and may open the way for reliable, non-invasive neuro-critical care,” says Nino Stocchetti, a professor of anesthesia and intensive care medicine at Policlinico of Milan, Italy, who was not involved in the research. “As the authors acknowledge, these results ‘indicate a promising route’ rather than being conclusive: additional work, refinements and more patients remain necessary.”

The researchers are now running two additional studies, at Beth Israel Deaconess Medical Center and Boston Medical Center, to test their system in a wider range of patients, including those who have suffered strokes. In addition to helping doctors evaluate patients, the researchers hope that their technology could also help with research efforts to learn more about how elevated ICP affects the brain.

“There’s been a fundamental limitation of studying intracranial pressure and its relation to a variety of conditions, simply because we didn’t have an accurate and robust way to get at the measurement noninvasively,” Heldt says.

The researchers are also working on a way to measure arterial blood pressure without inserting a catheter, which would make the technology easier to deploy in any location.

“This estimate could be of greatest benefit in the pediatrician’s office, the ophthalmologist’s office, the ambulance, the emergency department, so you want to have a completely noninvasive arterial blood pressure measurement,” Heldt says. “We’re working to develop that.”

The research was funded by the National Institute of Neurological Disorders and Stroke, Maxim Integrated Products, and the Boston Children’s Hospital Department of Anesthesiology, Critical Care, and Pain Medicine.

 

Date Posted: 

Wednesday, August 28, 2019 - 1:45pm


Card Description: 

The new technique could help doctors determine whether patients are at risk from elevated pressure.


Artificial intelligence could help data centers run far more efficiently

$
0
0

Researchers have developed a system that “learns” to allocate data-processing operations across thousands of servers.

A novel system developed by MIT researchers automatically “learns” how to schedule data-processing operations across thousands of servers — a task traditionally reserved for imprecise, human-designed algorithms. Doing so could help today’s power-hungry data centers run far more efficiently.

Data centers can contain tens of thousands of servers, which constantly run data-processing tasks from developers and users. Cluster scheduling algorithms allocate the incoming tasks across the servers, in real-time, to efficiently utilize all available computing resources and get jobs done fast.

Traditionally, however, humans fine-tune those scheduling algorithms, based on some basic guidelines (“policies”) and various tradeoffs. They may, for instance, code the algorithm to get certain jobs done quickly or to split resources equally between jobs. But workloads — meaning groups of combined tasks — come in all sizes. Therefore, it’s virtually impossible for humans to optimize their scheduling algorithms for specific workloads and, as a result, the algorithms often fall short of their true efficiency potential.

The MIT researchers instead offloaded all of the manual coding to machines. In a paper presented at a recent conference of the Association for Computing Machinery (ACM) Special Interest Group on Data Communications (SIGCOMM), they describe a system that leverages “reinforcement learning” (RL), a trial-and-error machine-learning technique, to tailor scheduling decisions to specific workloads in specific server clusters.

To do so, they built novel RL techniques that could train on complex workloads. In training, the system tries many possible ways to allocate incoming workloads across the servers, eventually finding an optimal tradeoff between making full use of computation resources and keeping processing speeds high. No human intervention is required beyond a simple instruction, such as, “minimize job-completion times.”

Compared to the best handwritten scheduling algorithms, the researchers’ system completes jobs about 20 to 30 percent faster, and twice as fast during high-traffic times. Mostly, however, the system learns how to compact workloads efficiently to leave little waste. Results indicate the system could enable data centers to handle the same workload at higher speeds, using fewer resources.

“If you have a way of doing trial and error using machines, they can try different ways of scheduling jobs and automatically figure out which strategy is better than others,” says Hongzi Mao, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). “That can improve the system performance automatically. And any slight improvement in utilization, even 1 percent, can save millions of dollars and a lot of energy in data centers.”

“There’s no one-size-fits-all to making scheduling decisions,” adds co-author Mohammad Alizadeh, an EECS professor and researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “In existing systems, these are hard-coded parameters that you have to decide up front. Our system instead learns to tune its schedule policy characteristics, depending on the data center and workload.”

Joining Mao and Alizadeh on the paper are: postdocs Malte Schwarzkopf and Shaileshh Bojja Venkatakrishnan, and graduate research assistant Zili Meng, all of CSAIL.

RL for scheduling

Typically, data processing jobs come into data centers represented as graphs of “nodes” and “edges.” Each node represents some computation task that needs to be done, where the larger the node, the more computation power needed. The edges connecting the nodes link connected tasks together. Scheduling algorithms assign nodes to servers, based on various policies.

But traditional RL systems are not accustomed to processing such dynamic graphs. These systems use a software “agent” that makes decisions and receives a feedback signal as a reward. Essentially, the agent tries to maximize its rewards for any given action to learn an ideal behavior in a certain context. Such systems can, for instance, help robots learn to perform a task like picking up an object by interacting with the environment, but in those settings the input is video or images laid out on a simple, fixed grid of pixels.

To build their RL-based scheduler, called Decima, the researchers had to develop a model that could process graph-structured jobs, and scale to a large number of jobs and servers. Their system’s “agent” is a scheduling algorithm that leverages a graph neural network, commonly used to process graph-structured data. To come up with a graph neural network suitable for scheduling, they implemented a custom component that aggregates information across paths in the graph — such as quickly estimating how much computation is needed to complete a given part of the graph. That’s important for job scheduling, because “child” (lower) nodes cannot begin executing until their “parent” (upper) nodes finish, so anticipating future work along different paths in the graph is central to making good scheduling decisions.
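
Decima's graph neural network itself is more involved, but a toy example helps show what "aggregating information across paths" buys a scheduler. The sketch below, with an entirely illustrative job DAG, computes the total work still reachable downstream of each task, the kind of path-aggregated quantity the custom component is described as estimating.

```python
# Illustrative only (not Decima's graph neural network): for a job expressed as
# a DAG of tasks, compute the total compute cost reachable downstream of each
# task. A scheduler that knows this can favor tasks that unlock a lot of work.

# task -> (its own compute cost, children that can only start after it finishes)
JOB_DAG = {
    "extract":   (4.0, ["clean"]),
    "clean":     (2.0, ["featurize", "stats"]),
    "featurize": (6.0, ["train"]),
    "stats":     (1.0, []),
    "train":     (8.0, []),
}

def downstream_work(task: str) -> float:
    """Sum the cost of `task` and every task reachable from it (each counted once)."""
    seen, stack, total = set(), [task], 0.0
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        cost, children = JOB_DAG[current]
        total += cost
        stack.extend(children)
    return total

if __name__ == "__main__":
    for task in JOB_DAG:
        print(f"{task:10s} downstream work: {downstream_work(task):5.1f}")
```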

To train their RL system, the researchers simulated many different graph sequences that mimic workloads coming into data centers. The agent then makes decisions about how to allocate each node along the graph to each server. For each decision, a component computes a reward based on how well it did at a specific task — such as minimizing the average time it took to process a single job. The agent keeps going, improving its decisions, until it gets the highest reward possible.

Baselining workloads

One concern, however, is that some workload sequences are more difficult than others to process, because they have larger tasks or more complicated structures. Those will always take longer to process — and, therefore, the reward signal will always be lower — than simpler ones. But that doesn’t necessarily mean the system performed poorly: It could make good time on a challenging workload but still be slower than an easier workload. That variability in difficulty makes it challenging for the model to decide what actions are good or not.

To address that, the researchers adapted a technique called “baselining” in this context. This technique takes averages of scenarios with a large number of variables and uses those averages as a baseline to compare future results. During training, they computed a baseline for every input sequence. Then, they let the scheduler train on each workload sequence multiple times. Next, the system took the average performance across all of the decisions made for the same input workload. That average is the baseline against which the model could then compare its future decisions to determine if its decisions are good or bad. They refer to this new technique as “input-dependent baselining.”
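
As a rough illustration of input-dependent baselining (a simplification, not the paper's implementation), the sketch below runs a stand-in policy several times on the same workload, averages the resulting rewards, and scores each rollout against that per-workload average; `run_policy` is a hypothetical placeholder for a real scheduling simulator.

```python
import random
import statistics

def run_policy(workload_id: int) -> float:
    """Hypothetical stand-in for simulating the scheduler once on one workload.
    Returns a reward (for example, negative average job-completion time); the
    random term stands in for the stochastic policy's decisions."""
    difficulty = (workload_id % 5) + 1          # harder workloads score lower
    return -difficulty * (1.0 + random.random())

def input_dependent_advantages(workload_id: int, n_rollouts: int = 8) -> list:
    """Score each rollout against the average reward on the SAME input workload,
    so hard workloads are not mistaken for bad scheduling decisions."""
    rewards = [run_policy(workload_id) for _ in range(n_rollouts)]
    baseline = statistics.mean(rewards)         # the input-dependent baseline
    return [r - baseline for r in rewards]

if __name__ == "__main__":
    print(input_dependent_advantages(workload_id=3))
```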

That innovation, the researchers say, is applicable to many different computer systems. “This is a general way to do reinforcement learning in environments where there’s this input process that affects the environment, and you want every training event to consider one sample of that input process,” he says. “Almost all computer systems deal with environments where things are constantly changing.”

Aditya Akella, a professor of computer science at the University of Wisconsin at Madison, whose group has designed several high-performance schedulers, found the MIT system could help further improve their own policies. “Decima can go a step further and find opportunities for [scheduling] optimization that are simply too onerous to realize via manual design/tuning processes,” Akella says. “The schedulers we designed achieved significant improvements over techniques used in production in terms of application performance and cluster efficiency, but there was still a gap with the ideal improvements we could possibly achieve. Decima shows that an RL-based approach can discover [policies] that help bridge the gap further. Decima improved on our techniques by [roughly] 30 percent, which came as a huge surprise.”

Right now, their model is trained on simulations that try to recreate incoming online traffic in real-time. Next, the researchers hope to train the model on real-time traffic, which could potentially crash the servers. So, they’re currently developing a “safety net” that will stop their system when it’s about to cause a crash. “We think of it as training wheels,” Alizadeh says. “We want this system to continuously train, but it has certain training wheels that if it goes too far we can ensure it doesn’t fall over.”

 

Date Posted: 

Wednesday, August 28, 2019 - 2:00pm


Card Description: 

The MIT system “learns” how to optimally allocate workloads across thousands of servers to cut costs and save energy.


A battery-free sensor for underwater exploration

$
0
0

To investigate the vastly unexplored oceans covering most of our planet, researchers aim to build a submerged network of interconnected sensors that send data to the surface — an underwater “internet of things.” But how do you supply constant power to scores of sensors designed to stay for long durations in the ocean’s depths?

MIT researchers have an answer: a battery-free underwater communication system that uses near-zero power to transmit sensor data. The system could be used to monitor sea temperatures to study climate change and track marine life over long periods — and even sample waters on distant planets. They presented the system at the Association for Computing Machinery (ACM) Special Interest Group on Data Communications (SIGCOMM) conference, where the paper received the “best paper” award.

The system makes use of two key phenomena. One, called the “piezoelectric effect,” occurs when vibrations in certain materials generate an electrical charge. The other is “backscatter,” a communication technique commonly used for RFID tags, which transmits data by reflecting modulated wireless signals off a tag and back to a reader.

In the researchers’ system, a transmitter sends acoustic waves through water toward a piezoelectric sensor that has stored data. When the wave hits the sensor, the material vibrates and stores the resulting electrical charge. Then the sensor uses the stored energy to reflect a wave back to a receiver — or it doesn’t reflect one at all. Alternating between reflection in that way corresponds to the bits in the transmitted data: For a reflected wave, the receiver decodes a 1; for no reflected wave, the receiver decodes a 0.

“Once you have a way to transmit 1s and 0s, you can send any information,” says co-author Fadel Adib, an assistant professor in the Department of Electrical Engineering and Computer Science and the MIT Media Lab, where he is founding director of the Signal Kinetics Research Group. “Basically, we can communicate with underwater sensors based solely on the incoming sound signals whose energy we are harvesting.”

The researchers demonstrated their Piezo-Acoustic Backscatter System in an MIT pool, using it to collect water temperature and pressure measurements. The system was able to transmit 3 kilobits per second of accurate data from two sensors simultaneously at a distance of 10 meters between sensor and receiver.

Applications go beyond our own planet. The system, Adib says, could be used to collect data in the recently discovered subsurface ocean on Saturn’s largest moon, Titan. In June, NASA announced the Dragonfly mission to send a rover in 2026 to explore the moon, sampling water reservoirs and other sites.

“How can you put a sensor under the water on Titan that lasts for long periods of time in a place that’s difficult to get energy?” says Adib, who co-wrote the paper with Media Lab researcher JunSu Jang. “Sensors that communicate without a battery open up possibilities for sensing in extreme environments.”

Preventing deformation

Inspiration for the system hit while Adib was watching “Blue Planet,” a nature documentary series exploring various aspects of sea life. Oceans cover about 72 percent of Earth’s surface. “It occurred to me how little we know of the ocean and how marine animals evolve and procreate,” he says. Internet-of-things (IoT) devices could aid that research, “but underwater you can’t use Wi-Fi or Bluetooth signals … and you don’t want to put batteries all over the ocean, because that raises issues with pollution.”

That led Adib to piezoelectric materials, which have been around and used in microphones and other devices for about 150 years. They produce a small voltage in response to vibrations. But that effect is also reversible: Applying voltage causes the material to deform. If placed underwater, that effect produces a pressure wave that travels through the water. They’re often used to detect sunken vessels, fish, and other underwater objects.

“That reversibility is what allows us to develop a very powerful underwater backscatter communication technology,” Adib says.

Communicating relies on preventing the piezoelectric resonator from naturally deforming in response to strain. At the heart of the system is a submerged node, a circuit board that houses a piezoelectric resonator, an energy-harvesting unit, and a microcontroller. Any type of sensor can be integrated into the node by programming the microcontroller. An acoustic projector (transmitter) and underwater listening device, called a hydrophone (receiver), are placed some distance away.

Say the sensor wants to send a 0 bit. When the transmitter sends its acoustic wave at the node, the piezoelectric resonator absorbs the wave and naturally deforms, and the energy harvester stores a little charge from the resulting vibrations. The receiver then sees no reflected signal and decodes a 0.

However, when the sensor wants to send a 1 bit, the process changes. When the transmitter sends a wave, the microcontroller uses the stored charge to send a small voltage to the piezoelectric resonator. That voltage reorients the material’s structure in a way that stops it from deforming, so it reflects the wave instead. Sensing a reflected wave, the receiver decodes a 1.
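
On the receiver side, the decoding step described above reduces to thresholding: a strong echo in a symbol slot is a 1, and a weak or absent echo is a 0. The sketch below is only a minimal illustration under that assumption, with a hypothetical amplitude threshold; it is not the actual Piezo-Acoustic Backscatter receiver code.

```python
# Minimal illustration (not the actual Piezo-Acoustic Backscatter receiver):
# threshold one echo amplitude per symbol slot into a bit, then pack bits into
# bytes. The 0.5 amplitude threshold is a hypothetical value.

def decode_bits(echo_amplitudes, threshold=0.5):
    """Reflected (strong) echo -> 1, absorbed (weak or no) echo -> 0."""
    return [1 if amplitude >= threshold else 0 for amplitude in echo_amplitudes]

def bits_to_bytes(bits):
    """Pack bits, most significant bit first, ignoring a partial trailing byte."""
    packed = bytearray()
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        packed.append(byte)
    return bytes(packed)

if __name__ == "__main__":
    echoes = [0.9, 0.1, 0.8, 0.7, 0.05, 0.95, 0.2, 0.85]  # example amplitudes
    bits = decode_bits(echoes)
    print(bits)                 # [1, 0, 1, 1, 0, 1, 0, 1]
    print(bits_to_bytes(bits))  # b'\xb5'
```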

Long-term deep-sea sensing

The transmitter and receiver must have power but can be planted on ships or buoys, where batteries are easier to replace, or connected to outlets on land. One transmitter and one receiver can gather information from many sensors covering one area or many areas.

“When you’re tracking a marine animal, for instance, you want to track it over a long range and want to keep the sensor on them for a long period of time. You don’t want to worry about the battery running out,” Adib says. “Or, if you want to track temperature gradients in the ocean, you can get information from sensors covering a number of different places.”

Another interesting application is monitoring brine pools, large bodies of brine that collect in depressions in ocean basins and are difficult to monitor over the long term. They exist, for instance, on the Antarctic Shelf, where salt settles during the formation of sea ice, and monitoring them could aid in studying melting ice and how marine life interacts with the pools. “We could sense what’s happening down there, without needing to keep hauling sensors up when their batteries die,” Adib says.

Polly Huang, a professor of electrical engineering at National Taiwan University, praised the work for its technical novelty and potential impact on environmental science. “This is a cool idea,” Huang says. “It’s not news one uses piezoelectric crystals to harvest energy … [but is the] first time to see it being used as a radio at the same time [which] is unheard of to the sensor network/system research community. Also interesting and unique is the hardware design and fabrication. The circuit and the design of the encapsulation are both sound and interesting.”

While noting that the system still needs more experimentation, especially in sea water, Huang adds that “this might be the ultimate solution for researchers in marine biology, oceanography, or even meteorology — those in need of long-term, low-human-effort underwater sensing.”

Next, the researchers aim to demonstrate that the system can work at farther distances and communicate with more sensors simultaneously. They’re also hoping to test if the system can transmit sound and low-resolution images.

The work is sponsored, in part, by the U.S. Office of Naval Research.

 

Date Posted: 

Wednesday, August 28, 2019 - 2:30pm


Card Description: 

The submerged system uses the vibration of “piezoelectric” materials to generate power and send and receive data.


Automating artificial intelligence for medical decision-making

$
0
0

A new MIT-developed model automates a critical step in using AI for medical decision making. Image courtesy of the researchers.

MIT computer scientists are hoping to accelerate the use of artificial intelligence to improve medical decision-making, by automating a key step that’s usually done by hand — and that’s becoming more laborious as certain datasets grow ever-larger.

The field of predictive analytics holds increasing promise for helping clinicians diagnose and treat patients. Machine-learning models can be trained to find patterns in patient data to aid in sepsis care, design safer chemotherapy regimens, and predict a patient’s risk of having breast cancer or dying in the ICU, to name just a few examples.

Typically, training datasets consist of many sick and healthy subjects, but with relatively little data for each subject. Experts must then find just those aspects — or “features” — in the datasets that will be important for making predictions.

This “feature engineering” can be a laborious and expensive process. But it’s becoming even more challenging with the rise of wearable sensors, because researchers can more easily monitor patients’ biometrics over long periods, tracking sleeping patterns, gait, and voice activity, for example. After only a week’s worth of monitoring, experts could have several billion data samples for each subject.  

In a paper recently presented at the Machine Learning for Healthcare conference, MIT researchers demonstrate a model that automatically learns features predictive of vocal cord disorders. The features come from a dataset of about 100 subjects, each with about a week’s worth of voice-monitoring data and several billion samples — in other words, a small number of subjects and a large amount of data per subject. The dataset contains signals captured from a small accelerometer sensor mounted on subjects’ necks.

In experiments, the model used features automatically extracted from these data to classify, with high accuracy, patients with and without vocal cord nodules. These are lesions that develop in the larynx, often because of patterns of voice misuse such as belting out songs or yelling. Importantly, the model accomplished this task without a large set of hand-labeled data.

“It’s becoming increasingly easy to collect long time-series datasets. But you have physicians that need to apply their knowledge to labeling the dataset,” says lead author Jose Javier Gonzalez Ortiz, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to remove that manual part for the experts and offload all feature engineering to a machine-learning model.”

The model can be adapted to learn patterns of any disease or condition. But the ability to detect the daily voice-usage patterns associated with vocal cord nodules is an important step in developing improved methods to prevent, diagnose, and treat the disorder, the researchers say. That could include designing new ways to identify and alert people to potentially damaging vocal behaviors.

Joining Gonzalez Ortiz on the paper are John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering and head of CSAIL’s Data Driven Inference Group; Robert Hillman, Jarrad Van Stan, and Daryush Mehta, all of Massachusetts General Hospital’s Center for Laryngeal Surgery and Voice Rehabilitation; and Marzyeh Ghassemi PhD '17, PD '17, who is now an assistant professor of computer science and medicine at the University of Toronto.

Forced feature-learning

For years, the MIT researchers have worked with the Center for Laryngeal Surgery and Voice Rehabilitation to develop and analyze data from a sensor to track subject voice usage during all waking hours. The sensor is an accelerometer with a node that sticks to the neck and is connected to a smartphone. As the person talks, the smartphone gathers data from the displacements in the accelerometer.

In their work, the researchers collected a week’s worth of this data — called “time-series” data — from 104 subjects, half of whom were diagnosed with vocal cord nodules. For each patient, there was also a matching control, meaning a healthy subject of similar age, sex, occupation, and other factors.

Traditionally, experts would need to manually identify features that may be useful for a model to detect various diseases or conditions. That helps prevent a common machine-learning problem in health care: overfitting. That’s when, in training, a model “memorizes” subject data instead of learning just the clinically relevant features. In testing, those models often fail to discern similar patterns in previously unseen subjects.

“Instead of learning features that are clinically significant, a model sees patterns and says, ‘This is Sarah, and I know Sarah is healthy, and this is Peter, who has a vocal cord nodule.’ So, it’s just memorizing patterns of subjects. Then, when it sees data from Andrew, which has a new vocal usage pattern, it can’t figure out if those patterns match a classification,” Gonzalez Ortiz says.
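
The paper's remedy, described below, is to keep subject identity out of the learned features. A more generic safeguard against this kind of memorization, shown here only for context and not as the authors' method, is to split the data by subject so that no person contributes windows to both the training and test sets; `X`, `y`, and `subject_ids` below are placeholders.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Generic safeguard (not the method used in this paper): subject-wise folds
# guarantee that every subject's windows land entirely in either the training
# split or the test split, so a model cannot pass the test by memorizing people.

def subject_wise_folds(X, y, subject_ids, n_splits=5):
    """Yield (train_idx, test_idx) index pairs with no subject shared across folds."""
    splitter = GroupKFold(n_splits=n_splits)
    yield from splitter.split(X, y, groups=subject_ids)

if __name__ == "__main__":
    X = np.random.rand(12, 4)                  # 12 toy feature windows
    y = np.array([0, 1] * 6)                   # toy labels
    subjects = np.repeat(["s1", "s2", "s3", "s4", "s5", "s6"], 2)
    for train_idx, test_idx in subject_wise_folds(X, y, subjects, n_splits=3):
        # No subject appears on both sides of the split.
        assert not set(subjects[train_idx]) & set(subjects[test_idx])
```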

The main challenge, then, was preventing overfitting while automating manual feature engineering. To that end, the researchers forced the model to learn features without subject information. For their task, that meant capturing all moments when subjects speak and the intensity of their voices.

As their model crawls through a subject’s data, it’s programmed to locate voicing segments, which comprise only roughly 10 percent of the data. For each of these voicing windows, the model computes a spectrogram, a visual representation of the spectrum of frequencies varying over time, which is often used for speech processing tasks. The spectrograms are then stored as large matrices of thousands of values.

But those matrices are huge and difficult to process. So, an autoencoder — a neural network optimized to generate efficient data encodings from large amounts of data — first compresses the spectrogram into an encoding of 30 values. It then decompresses that encoding into a separate spectrogram.  

Basically, the model must ensure that the decompressed spectrogram closely resembles the original spectrogram input. In doing so, it’s forced to learn the compressed representation of every spectrogram segment input over each subject’s entire time-series data. The compressed representations are the features that help train machine-learning models to make predictions.  
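
As a rough sketch of that compression step (the layer sizes and the flattened-input representation are assumptions, not the paper's architecture), an autoencoder squeezes each spectrogram window down to a 30-value code and is trained to reconstruct the original window; the 30-value codes then serve as the learned features. The sketch below uses PyTorch.

```python
import torch
import torch.nn as nn

# Rough sketch (hypothetical layer sizes, not the paper's architecture): an
# autoencoder that compresses a flattened spectrogram window into 30 values
# and is trained to reconstruct the original, so the 30 values become features.

class SpectrogramAutoencoder(nn.Module):
    def __init__(self, n_inputs: int, code_size: int = 30):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 256), nn.ReLU(),
            nn.Linear(256, code_size),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_size, 256), nn.ReLU(),
            nn.Linear(256, n_inputs),
        )

    def forward(self, x):
        code = self.encoder(x)           # the 30-value compressed representation
        return self.decoder(code), code  # reconstruction plus features

if __name__ == "__main__":
    n_inputs = 128 * 64                          # illustrative spectrogram size
    model = SpectrogramAutoencoder(n_inputs)
    windows = torch.rand(16, n_inputs)           # a batch of 16 voicing windows
    reconstruction, features = model(windows)
    loss = nn.functional.mse_loss(reconstruction, windows)  # training objective
    print(features.shape, loss.item())
```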

Mapping normal and abnormal features

In training, the model learns to map those features to “patients” or “controls.” Patients’ data will contain more abnormal voicing segments than controls’ data. In testing on previously unseen subjects, the model similarly condenses all spectrogram segments into a reduced set of features. Then, it’s majority rules: If the subject has mostly abnormal voicing segments, they’re classified as patients; if they have mostly normal ones, they’re classified as controls.
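
The majority-rules step can be made concrete in a few lines; this is only an illustration of the decision rule described above, with the per-window labels assumed to come from the trained classifier.

```python
from collections import Counter

# Illustration of the majority-rules step: each voicing window gets a
# per-window prediction ("patient" or "control"), and the subject-level label
# is whichever prediction occurs most often (ties default to "control" here).

def classify_subject(window_labels):
    """window_labels: iterable of 'patient'/'control' predictions for one subject."""
    counts = Counter(window_labels)
    return "patient" if counts["patient"] > counts["control"] else "control"

if __name__ == "__main__":
    print(classify_subject(["patient", "control", "patient", "patient"]))  # patient
```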

In experiments, the model performed as accurately as state-of-the-art models that require manual feature engineering. Importantly, the researchers’ model performed accurately in both training and testing, indicating it’s learning clinically relevant patterns from the data, not subject-specific information.

Next, the researchers want to monitor how various treatments — such as surgery and vocal therapy — impact vocal behavior. If patients’ behaviors move from abnormal to normal over time, they’re most likely improving. They also hope to use a similar technique on electrocardiogram data, which is used to track muscular functions of the heart.


High-precision technique stores cellular 'memory' in DNA


Researchers have developed a new way to encode complex memories in the DNA of living cells. Image: MIT News

Using a technique that can precisely edit DNA bases, MIT researchers have created a way to store complex “memories” in the DNA of living cells, including human cells.

The new system, known as DOMINO, can be used to record the intensity, duration, sequence, and timing of many events in the life of a cell, such as exposures to certain chemicals. This memory-storage capacity can act as the foundation of complex circuits in which one event, or series of events, triggers another event, such as the production of a fluorescent protein.

“This platform gives us a way to encode memory and logic operations in cells in a scalable fashion,” says Fahim Farzadfard, a Schmidt Science Postdoctoral Fellow at MIT and the lead author of the paper. “Similar to silicon-based computers, in order to create complex forms of logic and computation, we need to have access to vast amounts of memory.”

Applications for these types of complex memory circuits include tracking the changes that occur from generation to generation as cells differentiate, or creating sensors that could detect, and possibly treat, diseased cells.

Timothy Lu, an MIT associate professor of electrical engineering and computer science and of biological engineering, is the senior author of the study, which appears in the Aug. 22 issue of Molecular Cell. Other authors of the paper include Harvard University graduate student Nava Gharaei, former MIT researcher Yasutomi Higashikuni, MIT graduate student Giyoung Jung, and MIT postdoc Jicong Cao.

Written in DNA

Several years ago, Lu’s lab developed a memory storage system based on enzymes called DNA recombinases, which can “flip” segments of DNA when a specific event occurs. However, this approach is limited in scale: It can only record one or two events, because the DNA sequences that have to be flipped are very large, and each requires a different recombinase.

Lu and Farzadfard then developed a more targeted approach in which they could insert new DNA sequences into predetermined locations in the genome, but that approach only worked in bacterial cells. In 2016, they developed a memory storage system based on CRISPR, a genome-editing system that consists of a DNA-cutting enzyme called Cas9 and a short RNA strand that guides the enzyme to a specific area of the genome.

This CRISPR-based process allowed the researchers to insert mutations at specific DNA locations, but it relied on the cell’s own DNA-repair machinery to generate mutations after Cas9 cut the DNA. This meant that the mutational outcomes were not always predictable, thus limiting the amount of information that could be stored.

The new DOMINO system uses a variant of the CRISPR-Cas9 enzyme that makes more well-defined mutations because it directly modifies and stores bits of information in DNA bases instead of cutting DNA and waiting for cells to repair the damage. The researchers showed that they could get this system to work accurately in both human and bacterial cells.

“This paper tries to overcome all the limitations of the previous ones,” Lu says. “It gets us much closer to the ultimate vision, which is to have robust, highly scalable, and defined memory systems, similar to how a hard drive would work.”

To achieve this higher level of precision, the researchers attached a version of Cas9 to a recently developed “base editor” enzyme, which can convert the nucleotide cytosine to thymine without breaking the double-stranded DNA.

Guide RNA strands, which direct the base editor where to make this switch, are produced only when certain inputs are present in the cell. When one of the target inputs is present, the guide RNA leads the base editor either to a stretch of DNA that the researchers added to the cell’s nucleus, or to genes found in the cell’s own genome, depending on the application. Measuring the resulting cytosine-to-thymine mutations allows the researchers to determine what the cell has been exposed to.

“You can design the system so that each combination of the inputs gives you a unique mutational signature, and from that signature you can tell which combination of the inputs has been present,” Farzadfard says.
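
As a purely illustrative abstraction of that readout (the signatures below are invented for the example; the real system reads mutations by sequencing or by a fluorescent reporter), decoding could be reduced to a lookup from observed C-to-T conversions to input combinations:

```python
# Hypothetical mapping from observed C->T conversions at three register positions
# to the combination of inputs that produced them; 1 means the cytosine at that
# position was converted to thymine.
SIGNATURE_TO_INPUTS = {
    (0, 0, 0): set(),
    (1, 0, 0): {"input_A"},
    (0, 1, 0): {"input_B"},
    (1, 1, 0): {"input_A", "input_B"},
    (1, 1, 1): {"input_A", "input_B", "input_C"},
}

def decode(signature):
    """Return the inferred input combination, or None if the pattern is not in the design."""
    return SIGNATURE_TO_INPUTS.get(tuple(signature))

print(decode([1, 1, 0]))   # {'input_A', 'input_B'}
```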

Complex calculations

The researchers used DOMINO to create circuits that perform logic calculations, including AND and OR gates, which can detect the presence of multiple inputs. They also created circuits that can record cascades of events that occur in a certain order, similar to an array of dominos falling.
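
A toy software model (not the molecular implementation) captures that domino-like logic: each register position can flip only after its own trigger arrives and the previous position has already flipped, so the final pattern records whether events occurred in the designed order:

```python
def record_cascade(events, order=("A", "B", "C")):
    """Toy model of order-sensitive recording: position i flips from 'C' to 'T'
    only when its event occurs and position i-1 has already flipped,
    mimicking a chain of falling dominos."""
    register = ["C"] * len(order)
    for event in events:
        for i, name in enumerate(order):
            if event == name and (i == 0 or register[i - 1] == "T"):
                register[i] = "T"
    return "".join(register)

print(record_cascade(["A", "B", "C"]))  # 'TTT': all three events, in order
print(record_cascade(["B", "A", "C"]))  # 'TCC': 'B' fired before 'A', so the chain breaks
```

An AND gate falls out of the same idea: a downstream output is triggered only when both upstream positions have flipped.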

“This is very innovative work that enables recording and retrieving cellular information using DNA. The ability to perform sequential or logic computation and associative learning is particularly impressive,” says Wilson Wong, an associate professor of biomedical engineering at Boston University, who was not involved in the research. “This work highlights novel genetic circuits that can be achieved with CRISPR/Cas.”

Most previous versions of cellular memory storage have required stored memories to be read by sequencing the DNA. However, that process destroys the cells, so no further experiments can be done on them. In this study, the researchers designed their circuits so that the final output would activate the gene for green fluorescent protein (GFP). By measuring the level of fluorescence, the researchers could estimate how many mutations had accumulated, without killing the cells. The technology could potentially be used to create mouse immune cells that produce GFP when certain signaling molecules are activated, which researchers could analyze by periodically taking blood samples from the mice.

Another possible application is designing circuits that can detect gene activity linked to cancer, the researchers say. Such circuits could also be programmed to turn on genes that produce cancer-fighting molecules, allowing the system to both detect and treat the disease. “Those are applications that may be further away from real-world use but are certainly enabled by this type of technology,” Lu says.

The research was funded by the National Institutes of Health, the Office of Naval Research, the National Science Foundation, the Defense Advanced Research Projects Agency, the MIT Center for Microbiome Informatics and Therapeutics, and the NSF Expeditions in Computing Program Award.

 


