
Empowering Innovation


Terri Park | MIT Innovation Initiative

Panel highlights women innovators and entrepreneurs from a range of fields.

StartMIT featured panelists from a variety of fields, from academia to industry to startups. Left to right: Jesse Draper, Helen Greiner, Susan Hockfield, Payal Kadakia, and Dina Katabi. Susan Hockfield, MIT president emerita and president-elect of the American Association for the Advancement of Science, shared her experiences fostering innovation in Kendall Square. Jesse Draper, creator and host of the “Valley Girl Show,” moderated the panel. Joi Ito, director of the MIT Media Lab, offered closing remarks. Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science and StartMIT workshop chair, introduced the panel. Left to right: Susan Hockfield, Payal Kadakia, Dina Katabi, Helen Greiner, Jesse Draper, Anantha Chandrakasan.

All photos by Rose Lincoln.

An all-star panel of women entrepreneurs shared their experiences as part of the evolving innovation ecosystem at “Empowering Innovation and Entrepreneurship” — the capstone event of StartMIT, an IAP class aimed at exposing students to the elements of entrepreneurship.

Moderated by Jesse Draper, creator and host of the Emmy-nominated “Valley Girl Show,” the Tuesday night panel included Susan Hockfield, president emerita of MIT; Helen Greiner ’89, SM ’90, CEO and founder of CyPhy Works and co-founder of iRobot; Payal Kadakia ’05, CEO and co-founder of ClassPass; and Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science.

Draper, a former Nickelodeon star, brought her trademark approachable style to the event, encouraging panelists to jump into the discussion with a question on how MIT has touched their careers.

“The key critical characteristic I think I found in myself and in other entrepreneurs is the ability to problem solve, and that’s the thing I learned here the most in my curriculum,” Kadakia shared. She emphasized that this skill has seen her through the development of many phases of ClassPass, a fitness membership startup that she launched in 2013. “It taught me to always take something and figure out the solution, I never got stuck.”

The advice hit home for the audience made up of alumni and students enrolled in StartMIT, a program developed by the Department of Electrical Engineering and Computer Science and supported by the Innovation Initiative, and chaired by EECS department head Anantha Chandrakasan. Over the last two weeks, the undergraduates, graduate students, and postdocs in the program have heard from founders and innovators in startups, industry, and academia about challenges they faced in their own careers.

For Greiner, it was the power of the MIT network that proved to be most valuable. “I met my business partners at MIT and many other people in my network. You want to use this opportunity while you’re at MIT to meet the professors, meet other people that are in your field, because you never know where people will end up.”

Shifting the focus to MIT’s history and its involvement in creating the current ecosystem of entrepreneurship in greater Boston, Hockfield shared that during her tenure as president, she saw a new wave of innovation rising in the region. She knew MIT could foster it, leading the Institute to “participate in accelerating the development of Kendall Square by being a really good partner to the city and to the companies. My role was to pour a little gasoline on the flames.”

Addressing the topic of raising funding, Draper asked the panel whether it was the idea, the product, or the user base that mattered most.

To Katabi, who has seen a couple of startups spin out of her lab at MIT, it was the promise to the investor that mattered most. “People don’t see much at the beginning. Your promise is in the future. So it’s you, it’s the idea and it’s the market. It’s also that the promise once delivered it will make a difference.”

An audience member asked the panel to share what catalyzed their decision to leap into entrepreneurship. Greiner replied that she made the move early in her career, co-founding iRobot shortly after graduating from MIT. She continued that it took the company 12 years to move into the market, calling it “the longest overnight success story you’ve ever seen.” She further explained that her experience developing iRobot had its low points as well, when funded projects couldn’t move forward, and that “it’s really not always about the idea, it’s about the timing of the idea, which is just as critical.”

When asked about the challenges of being a woman in technology, Greiner answered, “You have to look at everything for what it is. It can be a double-edged sword. Back in 1990, there were even fewer women in technology. That was bad, but on the other hand when I would go to meetings, people would remember me as the ‘robot lady.’ Everything’s a double-edged sword and if you look at the positive, you keep going forward because we need women in tech.”

Joi Ito, director of the MIT Media Lab, followed the panel with closing remarks on the innovative research happening at the Media Lab. Ito, who spent much of his career as an entrepreneur and venture capitalist, commented on the complementary roles that academia and startups play in developing new technology.

The difference between the mindset at startups and in the academic world was particularly interesting to Ito, who joined MIT in 2011. Whereas startups must focus on short-term goals and the marketability of their products, academics can spend time thinking about long-term goals and the math and science underpinning new technology, he remarked, and both ways of thinking play a role in deploying new technologies.

“I’ve spent the last five years trying to understand how technology makes it out into the real world,” Ito said. “Having done that I see the importance of translation of technology into the real world, and the role that startups have in that.”

January 22, 2016



Marvin Minsky, “father of artificial intelligence,” dies at 88

January 26, 2016

MIT Media Lab

Professor emeritus was a co-founder of CSAIL and a founding member of the Media Lab, and an EECS faculty member for 43 years.


Marvin Minsky. Photo: Louis Fabian Bachrach


Marvin Minsky, a mathematician, computer scientist, and pioneer in the field of artificial intelligence, died at Boston’s Brigham and Women’s Hospital on Sunday, Jan. 24, of a cerebral hemorrhage. He was 88.

Minsky, a professor emeritus at the MIT Media Lab, was a pioneering thinker and the foremost expert on the theory of artificial intelligence. His 1985 book “The Society of Mind” is considered a seminal exploration of intellectual structure and function, advancing understanding of the diversity of mechanisms interacting in intelligence and thought. Minsky’s last book, “The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind,” was published in 2006.

Minsky viewed the brain as a machine whose functioning can be studied and replicated in a computer — which would teach us, in turn, to better understand the human brain and higher-level mental functions: How might we endow machines with common sense — the knowledge humans acquire every day through experience? How, for example, do we teach a sophisticated computer that to drag an object on a string, you need to pull, not push — a concept easily mastered by a two-year-old child?

A native New Yorker, Minsky was born on Aug. 9, 1927, and entered Harvard University after returning from service in the U.S. Navy during World War II. After graduating from Harvard with honors in 1950, he attended Princeton University, receiving his PhD in mathematics in 1954. In 1951, his first year at Princeton, he built the first neural network simulator.

Minsky joined the faculty of MIT’s Department of Electrical Engineering and Computer Science in 1958, and co-founded the Artificial Intelligence Laboratory (now the Computer Science and Artificial Intelligence Laboratory) the following year. At the AI Lab, he aimed to explore how to endow machines with human-like perception and intelligence. He created robotic hands that can manipulate objects, developed new programming frameworks, and wrote extensively about philosophical issues in artificial intelligence.

“Marvin Minsky helped create the vision of artificial intelligence as we know it today,” says CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science. “The challenges he defined are still driving our quest for intelligent machines and inspiring researchers to push the boundaries in computer science.” Minsky was convinced that humans will one day develop machines that rival our own intelligence. But frustrated by a shortage of both researchers and funding in recent years, he cautioned, “How long this takes will depend on how many people we have working on the right problems.”

In 1985, Minsky became a founding member of the MIT Media Lab, where he was named the Toshiba Professor of Media Arts and Sciences, and where he continued to teach and mentor until recently.

Professor Nicholas Negroponte, co-founder and chairman emeritus of the Media Lab, says: “Marvin talked in riddles that made perfect sense, were always profound and often so funny that you would find yourself laughing days later. His genius was so self-evident that it defined ‘awesome.’ The Lab bathed in his reflected light.”

In addition to his renown in artificial intelligence, Minsky was a gifted pianist — one of only a handful of people in the world who could improvise fugues, the polyphonic counterpoint that distinguishes Western classical music. His influential 1981 paper “Music, Mind and Meaning” illuminated the connections between music, psychology, and the mind.

Minsky’s other achievements include inventing the earliest confocal scanning microscope. He also helped invent the first “turtle,” or cursor, for the LOGO programming language, with Seymour Papert, and the “Muse” synthesizer for musical variations, with Ed Fredkin.

Minsky received the world’s top honors for his pioneering work and mentoring role in the field of artificial intelligence, including the A.M. Turing Award — the highest honor in computer science — in 1969.

In addition to the Turing Award, Minsky received honors over the years including the Japan Prize; the Royal Society of Medicine’s Rank Prize (for Optoelectronics); the Optical Society of America’s R.W. Wood Prize; MIT’s James R. Killian Jr. Faculty Achievement Award; the Computer Pioneer Award from the IEEE Computer Society; the Benjamin Franklin Medal; and, in 2014, the Dan David Foundation Prize for the Future of Time Dimension, titled “Artificial Intelligence: The Digital Mind,” and the BBVA Group’s BBVA Foundation Frontiers of Knowledge Lifetime Achievement Award.

Minsky is survived by his wife, Gloria Rudisch Minsky, MD, and three children: Henry, Juliana, and Margaret Minsky. The family requests that memorial contributions be directed to the Marvin Minsky Foundation, which supports research in artificial intelligence, including support for graduate students.

A celebration of Minsky’s life will be held at the MIT Media Lab later this year.

Read this article on MIT News.


Galvanizing entrepreneurs


Rob Matheson | MIT News

Intensive course helps students navigate early challenges in starting a company.


A panel of MIT alumni who shared their experiences with starting and working at young companies included Alice Brooks (speaking), co-founder of Roominate. Other speakers were (from left to right): moderator Arun Saigal, co-founder and CEO of Rappidly; Wei Li, a principal engineer at Eta Devices; Amrita Saigal, co-founder of Saathi; and Theodora Koullias, founder and CEO of Jon Lou. Photo: Audrey Resutek


There are myriad challenges for entrepreneurs when first starting a company: fundraising, recruiting talent, developing an innovative product, networking, scaling, and — not least of all — finding customers.

StartMIT, a course offered during MIT’s Independent Activities Period between semesters, aims to help engineering students navigate those early challenges, with advice from founders who have been through it all. The course is co-organized by the Department of Electrical Engineering and Computer Science (EECS) and the MIT Innovation Initiative.

Held this year from Jan. 11 to Jan. 26, StartMIT (formerly Start6) organized an extensive schedule of talks and panel discussions that focused on a broad range of topics, including product development, founders’ stories, MIT’s entrepreneurial resources, networking, common startup mistakes, and creating company culture. The lineup of speakers was equally diverse, ranging from startup novices to serial entrepreneurs, and spanning multiple industries. Additional activities — such as mock customer interviews and creating a pitch — focused on honing basic entrepreneurial skills.

“The aim is to give our students and postdocs … a rich view of what it means to be an entrepreneur,” says StartMIT head organizer and EECS department head Anantha Chandrakasan. “In three weeks, [participants] walk away with an understanding of what entrepreneurship is all about, the different views, and learning how to put a pitch together, among other things.”

During the final two days, student groups were required to deliver brief pitches for commercial ideas they formed during the course. These included hacking-recruitment services, smart windows, new airplane-de-icing technologies, various apps, and waterproof purses, among other ideas. The course also included field trips to local companies — including iRobot, Ministry of Supply, and Kayak — as well as the Cambridge Innovation Center and MassChallenge startup incubators.

Around 100 engineering students participated in this year’s StartMIT, now in its third year. On Jan. 14, President L. Rafael Reif dropped by Building 34-101, where most of the talks were held, to stress the importance of StartMIT in carrying out the Institute’s mission of using commercial innovations to make a tangible impact on society.

“There are many ways in which we can do good … and one of those ways is to start companies,” he said, adding: “By learning from experts and by learning from one another, you’re on your way to fulfilling not just … the MIT mission, but fulfilling your own mission in starting companies.”

Seasoned entrepreneurs

Eight days of talks and panel discussions saw seasoned MIT-affiliated entrepreneurs and innovators offering sage advice to students about starting companies.

In her Jan. 13 talk, MIT professor Sangeeta Bhatia, who has launched several biotech companies, gave a behind-the-scenes look at the sometimes-arduous process of taking innovations from lab to market — “to highlight everything you don’t see in the press,” she said.

In 2008, Bhatia, the John J. and Dorothy Wilson Professor in the Institute for Medical Engineering and Science and the Department of Electrical Engineering and Computer Science, spun out 14 years of MIT research into Hepregen, now a successful company.

Hepregen’s “micro-liver” platform allows liver cells to function outside the body for up to six weeks, for use by researchers and pharmaceutical firms. Among other topics, Bhatia discussed struggles of developing the technology for commercial use, manufacturing hassles, and anxieties of dealing with big-name pharmaceutical customers.

“What did we learn? We learned it takes a long time [to start a biotech company],” Bhatia said. However, “as engineers and scientists, translation of your technology to make an impact through commercialization is actually imperative.”

In a kickoff talk on Jan. 11, EECS lecturer Christina Chase, a former Entrepreneur-in-Residence at the Martin Trust Center for MIT Entrepreneurship and founder of several tech companies, detailed key reasons tech startups fail. Among these, she said, are scaling a company too soon by hiring too many employees, splurging on machinery, or renting unnecessarily large office space.

“Ultimately, understand ‘what can I not spend my money on,’ because cash is oxygen to your company,” said Chase, who also led activities on discovering value propositions and conducting mock customer interviews.

Other speakers included: David H. Koch Institute Professor Robert Langer, founder of more than 20 companies, who discussed commercializing breakthrough technologies; Ethernet co-inventor and 3Com founder Robert Metcalfe ’68, who discussed forming Internet companies and led an activity on writing effective press releases; Michael Stonebraker, a pioneer in database management systems with three big data companies under his belt, who gave students five easy steps for starting a company; and Dropbox co-founder and CEO Drew Houston ’05, who gave advice to budding entrepreneurs via satellite.

Fresh faces

While hearing from seasoned entrepreneurs was certainly informative, mechanical engineering senior Keertan Kini connected most with a panel of recent MIT alumni entrepreneurs.

That panel, held on Jan. 19, included four MIT alumni who shared their experiences of starting or working at young companies: Alice Brooks '10, co-founder of Roominate, which is developing STEM-focused toys; Theodora Koullias '13, founder and CEO of luxury fashion-tech brand Jon Lou; Wei Li SM '09, PhD '13, a principal engineer at Eta Devices, which is making mobile communications more efficient; Amrita Saigal '10, co-founder of Saathi, which manufactures sanitary pads made from waste banana tree fiber, for girls in rural India; and moderator Arun Saigal '13, SM '13, co-founder and CEO of Rappidly, a startup that makes drag-and-drop programming tools to build apps.

“Hearing from people who are just recently out of MIT … was incredibly meaningful and incredibly impactful,” he says. The alumni could “relate easily to our experiences, or some of the doubts we have about our own abilities, or discuss how certain classes might actually make an impact, or relate to the challenges we’d faced as first-time founders because we don’t have the track record that a lot of the other professionals have.”

With support from MIT’s Sandbox Innovation Fund, Kini and his partner are now prototyping their StartMIT project, called Ember, an interactive cooking app that uses voice commands to walk people through recipes.

A talk by MIT alumnus Jeremy Conrad ’06, founding partner of Lemnos Labs, a seed-stage investment firm and incubator in San Francisco, resonated with freshman Anelise Newman, who developed a sewing-education startup for StartMIT.

Conrad’s talk, Newman said, introduced her to one important and sometimes overlooked facet of entrepreneurship: networking. Conrad discussed how he’d rush around to make sure he was in the same room as an investor, or build relationships with people who could provide an introduction to a person of interest.

“It opened my eyes to the hustle that you have to get used to if you want to be an entrepreneur,” Newman said. “Not only do you need to define your product … you have to define your goals and go about them and vigorously pursue them in whatever way possible.”

In three years, more than 100 projects have been developed through StartMIT, including Smarking (2014), now a successful company that uses big data analytics to help parking-garage managers maximize pricing and availability; GelSight (2014), which is commercializing sensors that can make 3-D maps of surfaces and could be used for more sensitive robotics fingertips; and Belleds Q (2015), which is developing a consumer product that uses streaming music to control wireless smart LED bulbs in homes.

Read this article on MIT News.

January 29, 2016


People person

January 29, 2016

Catherine Curro Caruso | MIT News Correspondent

Senior Sami Alsheikh helps others, solves problems, and has fun doing both.


“In research contexts, I’m driven mostly by the technical difficulty of the problem, and the opportunity to learn,” Sami Alsheikh explains. Photo: Ian MacLellan


Sami Alsheikh’s time at MIT has been guided by a few simple principles: pursue things you enjoy, develop useful skills that can be applied in different ways, and help people as much as possible. Over the past four years, whether he has been absorbed in computer science courses, building a platform to increase political engagement, or playing with campers as a counselor at Camp Kesem, Alsheikh has managed to live by all three.

Alsheikh, a senior computer science and engineering major, grew up in Pensacola, Florida. His parents, both physicians, moved from Syria to the United States before he was born, seeking better professional opportunities. Alsheikh’s extended family still lives in Syria, and he and his family visited every summer until 2011. Alsheikh describes Pensacola as a politically conservative region, a sharp contrast to the more liberal area around MIT. He appreciates that his experiences living in parts of the United States with differing political views, along with his visits to Syria, have provided him with diverse perspectives about the world.

Alsheikh first set his eyes on MIT while researching top institutions for math and science, but for him, what makes MIT special is the people.

“That was one of the biggest draws for me and has proven to be one of the best things,” explains Alsheikh. “There's kind of this guarantee that everyone here has worked to become really clever at something. It's awesome to be a part of a collaborative community with people like that.”

Computer science is the tool

When Alsheikh came to MIT he planned on studying physics, but over the course of his freshman year, he found himself gravitating toward computer science, which he describes as more of a tool than a subject.

“I saw that computer science touched on a lot of different subjects,” he says. “It has this theoretical and precise appeal of purer sciences like math and physics, while still providing the applicability of engineering.”

He experienced this first-hand during an MIT course where he designed a computer game called “Balloon Boy” that involves a boy with a balloon who must move left and right to avoid falling nails that could pop the balloon.

“That sounds really silly,” he admits. “But it actually kick-started everything because of how interdisciplinary it was.”

As Alsheikh and his classmates designed the game, they drew on a number of different fields including physics, which they needed to accurately simulate the drag and buoyancy of the balloon.

Alsheikh is currently doing computer science research in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) with Wojciech Matusik, associate professor of electrical engineering and computer science, who leads the Computational Fabrication Group. The research focuses on using newly developed radar technologies to understand different environments. Radar offers advantages over video because it doesn’t require light and has the potential to pick up on additional characteristics such as texture or material properties. Alsheikh’s work involves taking readings with small, new high-frequency radar chips, and exploring different analysis methods to derive meaning from the data. He stays motivated by focusing on the technical problem at hand.

“In research contexts, I’m driven mostly by the technical difficulty of the problem, and the opportunity to learn,” he explains.

Alsheikh is also interested in socially impactful applications of his computer science skills. As part of Start IAP, a four-week startup accelerator for MIT students, he is working on building a Web-based platform that will encourage people to become more engaged in U.S. policy.

“I don't know what legislation is passed, and neither do a lot of people,” explains Alsheikh. “We're trying to build this platform where, based on your demographic information and your interests, we tell you how current legislation that's being considered affects you.”

Helping others and having fun

Beyond his academic interests, Alsheikh also enjoys helping people, offering the simple explanation that he does it because it makes him feel good. Over the past three years, Alsheikh has been involved in Camp Kesem, a one-week overnight camp for kids ages 6 to 18 whose parents have been affected by cancer. The camp, which is celebrating its 10th anniversary this summer, now serves over 170 children each summer, free of charge. It is a place where the campers are able to open up and find support, but also have fun and enjoy themselves despite their tough situations.

“It's really a great deal,” says Alsheikh. “It sounds like it's supposed to be a therapy camp, maybe have a little bit of a sad mood, but it's actually nothing like that.”

Alsheikh, who enjoys that camp allows him to embrace his goofy, energetic side, reveals that all of the campers and counselors choose camp names, new identities that heighten the fun. His is Cheese, the counterpart of Mac, a.k.a. Jonathon Zuniga, Alsheikh’s freshman-year roommate and fellow counselor. While there are opportunities for campers to share their stories and talk about everything they are dealing with, for Alsheikh the magic of camp is that the kids can be regular kids, which often isn’t the case at home.

“It's really about the small, fun moments more than anything,” he explains. “The campers already know their situation at home and they don't always want to explore that further. I think they enjoy being with counselors and other campers who are there to have fun and offer support. It shows them that they can still have a light-hearted time despite whatever situation they’re in.”

Alsheikh joined Camp Kesem as a counselor during his freshman year so he could help kids who were dealing with circumstances beyond their control. The following year he wanted to learn more about the families, so he returned to camp as the outreach coordinator, the point of contact for all families. The next year, Alsheikh, along with Mac and Peaches (counselor and MIT senior Lizy Trujillo), co-directed the camp, leading a 13-member coordinating board that handles fundraising, program development, event organization, and counselor recruitment.

This summer will be his final year with Camp Kesem, and he is excited to return as a counselor and focus on spending time with the kids. “Our campers are amazing,” he says. “They're sharing these things that I cannot even imagine having gone through, and the next day we're singing a camp song about Chocolate Oreos, and they’re just as into it as anyone else. Their resilience is unbelievable.”

The confidence to tackle any challenge

Alsheikh will graduate from MIT this June, and he hopes to pursue a master’s degree in computer science at MIT. His short-term goal is to explore topics that interest him and develop technical skills that he can utilize in different contexts; eventually he wants to use what he’s learned to help people as much as possible.

“Right now I'm just trying to build skills and learn more about different aspects of the world,” he says. “I've had exposure to a few different areas, and now I'm exploring computer science research to hopefully gain some technical rigor that I could apply elsewhere.”

For Alsheikh, one of the most important things he has gained while at MIT is the confidence to tackle any challenge. He has learned how to research a problem, ask the right questions, contact the right people, and quickly get up to speed so that he can attempt to solve it.

“Honestly, I think a lot of people are capable of tackling these problems, because often the resources are out there,” he explains. “But the confidence is a big deal … having the confidence to tackle things and knowing that you have just as good a chance as anybody else.”

Read this article on MIT News.


Learning to solve


Audrey Resutek | Department of Electrical Engineering and Computer Science

Hallmark program “SuperUROP” lets undergrad engineers dive into a year-long research experience.


Course 6 junior Ashley Wang presents her project, “Visualizing Big Data in Mobile Application Development,” at the SuperUROP Research Preview in December. Photo: Gretchen Ertl


From developing smart 3-D scanners, to refining desalination techniques, to designing football helmets that can prevent concussions — undergraduates across the School of Engineering are midway through the year-long research projects that are part of the Advanced Undergraduate Research Opportunities Program, or SuperUROP.

Students participating in the now school-wide program, which was launched in 2012 in the Department of Electrical Engineering and Computer Science (EECS), are immersed in a graduate-level research experience. Taking a deep dive into a single problem, participants work under the supervision of an MIT faculty member or researcher, and their projects often lead to published research or a prototype.

“Engaging in research gives our undergraduates the confidence to push boundaries and solve problems that no one has ever solved before — and that’s the very definition of research,” says Ian Waitz, dean of the School of Engineering. “The skills that students gain from SuperUROP and other research-based programs continue to impact their lives well after research is finished.”

 

Watch a montage of student research that was presented at the SuperUROP Poster Session 2015. Video courtesy of SuperUROP.

Tackling the hard problems

The 177 students enrolled in SuperUROP are not shying away from hard-hitting problems.

Bradley Walcher is working on a football helmet design to reduce the risk of concussion. Walcher, a junior studying aeronautics and astronautics, notes that concussions are a growing issue for football players — an estimated 15 percent of football players get a concussion every year. The prototype Walcher and his colleagues are developing uses an inverted cushioning structure and 3-D printing to produce a snug fit.

“I took SuperUROP because I want to go to grad school,” Walcher says, “and this has given me the experience of doing higher-level research, which is hard to do as an undergraduate.”

Christian Argenti, a junior studying mechanical engineering, is tackling another health problem, investigating a compression bandage design for treating venous leg ulcers. Currently, health care professionals don’t have a good indication of the pressure being placed on a wound and may thus miss the ideal pressure range for the healing process. Argenti’s potential solution is a bright one: employing polymer-coated fibers that change color as they are stretched, indicating the amount of pressure the bandage applies.

He has been working in the Lab for Bio-inspired Photonic Engineering in the Department of Mechanical Engineering for two years and says SuperUROP gave him an opportunity to be immersed in a research setting: “I love SuperUROP because it gave me a bigger project to work on. It really helped me be independent as well as challenging me with projects like the poster session and writing papers.”

In addition to completing the research project, students round out their experience by enrolling in a yearlong course, 6.UAR (Preparation for Undergraduate Research), where assignments include conducting a literature review, writing a journal or conference-style research paper, and presenting a research poster.

Flora Tan, a senior studying computer science, is applying machine learning to financial data — an idea that is just taking hold in the finance sector. Tan’s project uses deep-learning techniques to identify and perhaps predict trends in the exchange rates of currencies such as Bitcoin.

“I’m interested in how technology can disrupt industries like finance, and I’d like to start a company or technology team in the future,” Tan says. “SuperUROP has given me a chance to spend time getting a deep understanding of the technologies being used today.”

An interdisciplinary community of scholars

Expanding SuperUROP to include students from departments across the School of Engineering has already shown great promise, creating community and exposing undergraduates to new fields.

“We hope to create an interdisciplinary community of scholars,” says Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science and EECS department head. “It is amazing to see the enthusiasm and innovative ideas that emerge as they interact with their peers in their own and other areas.”

One of the new departments participating in SuperUROP this year is the Department of Aeronautics and Astronautics (AeroAstro), in which students are working on projects ranging from biometric telemonitoring for astronauts to airfoil-enabled heat exchangers that reduce fuel burn in advanced aeropropulsion systems.

"SuperUROPs are a fantastic opportunity for AeroAstro undergraduates to step ‘outside the box’ and work with industry, faculty, and grad students on exciting, real-world challenges,” said Jaime Peraire, AeroAstro department head and the H.N. Slater Professor Aeronautics and Astronautics.

“When we say, ‘only at MIT,’ SuperUROP is exactly what we are talking about,” adds Waitz. In fact, nearly 90 percent of undergraduate students participate in research during their time at MIT, a hallmark of MIT’s motto, “mens et manus” (“mind and hand”).

“I think that’s profound,” Waitz says. “Our emphasis on research and hard problems exemplifies how we train our engineers, and that has become a magnet for the kinds of fearless students we attract.”

Read this article on MIT News.

February 1, 2016


Undergraduate Research on Showcase

Monday, February 1, 2016 - 10:45am

New chip fabrication approach


Larry Hardesty | MIT News Office

Depositing different materials within a single chip layer could lead to more efficient computers.


Researchers used the MIT and Tim the Beaver logos to show photoluminescence emissions from a monolayer of molybdenum disulfide inlaid onto graphene. The arrow indicates the graphene-MoS2 lateral heterostructure, which could potentially form the basis for ultrathin computer chips. Courtesy of the researchers.


Today, computer chips are built by stacking layers of different materials and etching patterns into them. But in the latest issue of Advanced Materials, MIT researchers and their colleagues report the first chip-fabrication technique that enables significantly different materials to be deposited in the same layer. They also report that, using the technique, they have built chips with working versions of all the circuit components necessary to produce a general-purpose computer.

The layers of material in the researchers’ experimental chip are extremely thin — between one and three atoms thick. Consequently, this work could abet efforts to manufacture thin, flexible, transparent computing devices, which could be laminated onto other materials. “The methodology is universal for many kinds of structures,” says Xi Ling, a postdoc in the Research Laboratory of Electronics and one of the paper’s first authors. “This offers us tremendous potential with numerous candidate materials for ultrathin circuit design.”

The technique also has implications for the development of the ultralow-power, high-speed computing devices known as tunneling transistors and, potentially, for the integration of optical components into computer chips.

“It’s a brand new structure, so we should expect some new physics there,” says Yuxuan Lin, a graduate student in electrical engineering and computer science and the paper’s other first author.

Ling and Lin are joined on the paper by Mildred Dresselhaus, an Institute Professor emerita of physics and electrical engineering; Jing Kong, an ITT Career Development Professor of Electrical Engineering; Tomás Palacios, an associate professor of electrical engineering; and by another 10 MIT researchers and two more from Brookhaven National Laboratory and Taiwan’s National Tsing-Hua University.

Strange bedfellows

Computer chips are built from crystalline solids, materials whose atoms are arranged in a regular geometrical pattern known as a crystal lattice. Previously, only materials with closely matched lattices have been deposited laterally in the same layer of a chip. The researchers’ experimental chip, however, uses two materials with very different lattice sizes: molybdenum disulfide and graphene, which is a single-atom-thick layer of carbon.

Moreover, the researchers’ fabrication technique generalizes to any material that, like molybdenum disulfide, combines elements from group 6 of the periodic table, such as chromium, molybdenum, and tungsten, with elements from group 16, such as sulfur, selenium, and tellurium. Many of these compounds are semiconductors — the type of material that underlies transistor design — and exhibit useful behavior in extremely thin layers.

Graphene, which the researchers chose as their second material, has many remarkable properties. It’s not only the strongest known material, but it also has the highest known electron mobility, a measure of how rapidly electrons move through it. As such, it’s an excellent candidate for use in thin-film electronics or, indeed, in any nanoscale electronic devices.

To assemble their laterally integrated circuits, the researchers first deposit a layer of graphene on a silicon substrate. Then they etch it away in the regions where they wish to deposit the molybdenum disulfide.

Next, at one end of the substrate, they place a solid bar of a material known as PTAS.

They heat the PTAS and flow a gas across it and across the substrate. The gas carries PTAS molecules with it, and they stick to the exposed silicon but not to the graphene. Wherever the PTAS molecules stick, they catalyze a reaction with another gas that causes a layer of molybdenum disulfide to form.

In previous work, the researchers characterized a range of materials that promote the formation of crystals of other compounds, any of which could be plugged into the process.

Future electronics

The new fabrication method could open the door to more powerful computing if it can be used to produce tunneling-transistor processors. Fundamentally, a transistor is a device that can be modulated to either allow a charge to cross a barrier or prohibit it from crossing. In a tunneling transistor, the charge crosses the barrier by means of a counterintuitive quantum-mechanical effect, in which an electron can be thought of as disappearing at one location and reappearing at another.

These effects are subtle, so they’re more pronounced at extremely small scales, like the one- to three-atom thicknesses of the layers in the researchers’ experimental chip. And, because electron tunneling is immune to the thermal phenomena that limit the efficiency of conventional transistors, tunneling transistors can operate at very low power and could achieve much higher speeds.

"This work is very exciting,” says Philip Kim, a physics professor at Harvard University. “The MIT team demonstrated that controlled stitching of two completely different, atomically thin 2-D materials is possible. The electrical properties of the resulting lateral heterostructures are very impressive."

Read this article on MIT News.

January 27, 2016


Recognizing correct code


Larry Hardesty | MIT News Office

Automatic bug-repair system fixes 10 times as many errors as its predecessors.


“One of the most intriguing aspects of this research is that we’ve found that there are indeed universal properties of correct code that you can learn from one set of applications and apply to another set of applications,” Martin Rinard says. Photo: MIT News


MIT researchers have developed a machine-learning system that can comb through repairs to open-source computer programs and learn their general properties, in order to produce new repairs for a different set of programs.

The researchers tested their system on a set of programming errors, culled from real open-source applications, that had been compiled to evaluate automatic bug-repair systems. Where those earlier systems were able to repair one or two of the bugs, the MIT system repaired between 15 and 18, depending on whether it settled on the first solution it found or was allowed to run longer.

While an automatic bug-repair tool would be useful in its own right, professor of electrical engineering and computer science Martin Rinard, whose group developed the new system, believes that the work could have broader ramifications.

“One of the most intriguing aspects of this research is that we’ve found that there are indeed universal properties of correct code that you can learn from one set of applications and apply to another set of applications,” Rinard says. “If you can recognize correct code, that has enormous implications across all software engineering. This is just the first application of what we hope will be a brand-new, fabulous technique.”

Fan Long, a graduate student in electrical engineering and computer science at MIT, presented a paper describing the new system at the Symposium on Principles of Programming Languages last week. He and Rinard, his advisor, are co-authors.

Users of open-source programs catalogue bugs they encounter on project websites, and contributors to the projects post code corrections, or “patches,” to the same sites. So Long was able to write a computer script that automatically extracted both the uncorrected code and patches for 777 errors in eight common open-source applications stored in the online repository GitHub.

Feature performance

As with all machine-learning systems, the crucial aspect of Long and Rinard’s design was the selection of a “feature set” that the system would analyze. The researchers concentrated on values stored in memory — either variables, which can be modified during a program’s execution, or constants, which can’t. They identified 30 prime characteristics of a given value: It might be involved in an operation, such as addition or multiplication, or a comparison, such as greater than or equal to; it might be local, meaning it occurs only within a single block of code, or global, meaning that it’s accessible to the program as a whole; it might be the variable that represents the final result of a calculation; and so on.

Long and Rinard wrote a computer program that evaluated all the possible relationships between these characteristics in successive lines of code. More than 3,500 such relationships constitute their feature set. Their machine-learning algorithm then tried to determine what combination of features most consistently predicted the success of a patch.

“All the features we’re trying to look at are relationships between the patch you insert and the code you are trying to patch,” Long says. “Typically, there will be good connections in the correct patches, corresponding to useful or productive program logic. And there will be bad patterns that mean disconnections in program logic or redundant program logic that are less likely to be successful.”

Ranking candidates

In earlier work, Long had developed an algorithm that attempts to repair program bugs by systematically modifying program code. The modified code is then subjected to a suite of tests designed to elicit the buggy behavior. This approach may find a modification that passes the tests, but it could take a prohibitively long time. Moreover, the modified code may still contain errors that the tests don’t trigger.

Long and Rinard’s machine-learning system works in conjunction with this earlier algorithm, ranking proposed modifications according to the probability that they are correct before subjecting them to time-consuming tests.
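As a rough illustration of that two-stage design (a sketch under assumed interfaces, not Prophet’s actual code: `score` stands in for the learned model over patch features, and `passes_tests` for the bug-eliciting test suite):

```python
from typing import Callable, Iterable, List


def rank_and_validate(candidates: Iterable[str],
                      score: Callable[[str], float],
                      passes_tests: Callable[[str], bool],
                      stop_at_first: bool = True) -> List[str]:
    """Order candidate patches by a learned correctness score, then run
    the expensive test suite in that order. Stopping at the first passing
    patch mirrors the faster configuration described in the article;
    letting it keep running corresponds to the longer 12-hour searches."""
    ranked = sorted(candidates, key=score, reverse=True)
    accepted = []
    for patch in ranked:
        if passes_tests(patch):
            accepted.append(patch)
            if stop_at_first:
                break
    return accepted
```

Sorting before testing means the limited time budget goes to the candidates the model considers most promising, rather than being spent on an arbitrary search order.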

The researchers tested their system, which they call Prophet, on a set of 69 program errors that had cropped up in eight popular open-source programs. Of those, 19 are amenable to the type of modifications that Long’s algorithm uses; the other 50 have more complicated problems that involve logical inconsistencies across larger swaths of code.

When Long and Rinard configured their system to settle for the first solution that passed the bug-eliciting tests, it was able to correctly repair 15 of the 19 errors; when they allowed it to run for 12 hours per problem, it repaired 18.

Of course, that still leaves the other 50 errors in the test set untouched. In ongoing work, Long is working on a machine-learning system that will look at more coarse-grained manipulation of program values across larger stretches of code, in the hope of producing a bug-repair system that can handle more complex errors.

“A revolutionary aspect of Prophet is how it leverages past successful patches to learn new ones,” says Eran Yahav, an associate professor of computer science at the Technion in Israel. “It relies on the insight that despite differences between software projects, fixes — patches — applied to projects often have commonalities that can be learned from. Using machine learning to learn from ‘big code’ holds the promise to revolutionize many programming tasks — code completion, reverse-engineering, et cetera.”

Read this article on MIT News.

January 29, 2016



MIT students win first round of SpaceX Hyperloop contest


Leda Zimmerman | School of Engineering

Design tops more than 100 entries at an international high-speed transportation competition inspired by Elon Musk and sponsored by SpaceX.


MIT's Hyperloop pod design. Image courtesy of MIT Hyperloop Team.


A team from MIT took top honors Saturday at a competition at Texas A&M University to design the Hyperloop, a high-speed transportation concept dreamed up by Tesla Motors and SpaceX CEO Elon Musk.

Beating out a field of more than 100 other teams from around the world, the group of MIT graduate students won the best overall design award for a vehicle, or pod, that will ride inside the Hyperloop, a system of tubes connecting major cities — or what Musk calls “a fifth mode of transportation.” They will now move on to build a small-scale prototype of their design and test it this summer on a track being built next to the SpaceX headquarters in Hawthorne, California.

“MIT has been involved in so many technological breakthroughs in the past century,” says team captain Philippe Kirschen, a master’s student in aeronautics and astronautics. “It just makes sense we would help advance what might be the future of transportation.”


In 2013, Musk declared war on conventional inter-city travel. Last summer, he threw down the gauntlet, announcing a year-long competition to design vehicles for his Hyperloop scheme, a transit system ideally suited for major city pairs separated by 900 miles or less (think San Francisco and Los Angeles). In Hyperloop, people and freight are propelled in pods through tubes maintained at a near-vacuum. In the absence of air or surface friction, the pods travel at close to the speed of sound (around 750 miles per hour), using low-energy propulsion systems.

Since the fall, Kirschen and approximately two dozen fellow graduate students from a variety of engineering disciplines have been racing to create a design and sub-scale, prototype pod for the competition. Pods must accommodate a mechanical pusher that will serve as a propulsion system, and may levitate inside a near-vacuum tube that encloses the track. The capsules must also be equipped with sensors that can broadcast real-time telemetry data during the mile-long run.

With strengths in aeronautics, mechanical engineering, and electrical engineering and computer science, the MIT Hyperloop Team focused on speed, braking, stability, and levitation. For the latter problem, they developed a model for electrodynamic suspension that relies on powerful magnets placed over a conducting plate, which in this case is the aluminum track SpaceX is building. The magnets generate lift. “The beauty of the system we designed,” says Kirschen, “is that it’s completely passive, an elegant property that will make our pod very scalable.”

This innovation, a departure from Musk’s original notion of pods levitating on a cushion of air, required a major research thrust. “None of us knew anything about magnets, and there has definitely been a steep learning curve for us,” Kirschen says.

The team is gathering support from all over MIT. Douglas P. Hart of the Department of Mechanical Engineering is facilitating team members working on the project for credit as part of his Engineering System Development course. During MIT’s Independent Activities Period in January, the team finalized its pod design for the competition at Texas A&M. The final capsule came in at roughly 2.5 meters long and about one meter wide; it weighs 250 kilograms and has the aerodynamic feel, says Kirschen, of a bobsled.

With the first stage of the competition behind them, the action now shifts to fabrication on a larger scale. The team will move from simulations to aluminum and carbon fiber, trying out braking systems, and, with great caution, testing dangerously strong magnets. Final assembly must be complete by mid-May. “Ideally, it will reach a speed in excess of 100 meters per second,” Kirschen says. There will be no passengers on board for the 20-second inaugural run.

Read this article on MIT News.

February 1, 2016


A virtual “guide dog” for navigation


Larry Hardesty | MIT News Office

Low-power chip processes 3-D camera data, could enable wearable device to guide the visually impaired.


A new device lets visually impaired users carry a mechanical Braille interface developed at MIT’s Computer Science and Artificial Intelligence Laboratory, which conveys information about the distance to the nearest obstacle in the direction the user is moving.


MIT researchers have developed a low-power chip for processing 3-D camera data that could help visually impaired people navigate their environments. The chip consumes only one-thousandth as much power as a conventional computer processor executing the same algorithms.

Using their chip, the researchers also built a prototype of a complete navigation system for the visually impaired. About the size of a binoculars case and similarly worn around the neck, the system uses an experimental 3-D camera from Texas Instruments. The user carries a mechanical Braille interface developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), which conveys information about the distance to the nearest obstacle in the direction the user is moving.

The researchers reported the new chip and the prototype navigation system in a paper presented earlier this week at the International Solid-State Circuits Conference in San Francisco.

“There was some prior work on this type of system, but the problem was that the systems were too bulky, because they require tons of different processing,” says Dongsuk Jeon, who was a postdoc at MIT’s Microsystems Technology Laboratories (MTL) when the work was done and who joined the faculty of Seoul National University in South Korea this year. “We wanted to miniaturize this system and realized that it is critical to make a very tiny chip that saves power but still provides enough computational power.”

Jeon is the first author on the new paper, and he’s joined by Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science; Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; Priyanka Raina, a graduate student in electrical engineering and computer science; Nathan Ickes, a former research scientist at MTL who’s now at Apple; and Hsueh-Cheng Wang, a postdoc at CSAIL when the work was done, who will join National Chiao Tung University in Taiwan as an assistant professor this month.

In work sponsored by the Andrea Bocelli Foundation, which was founded by the blind singer Andrea Bocelli, Rus’ group had developed an algorithm for converting 3-D camera data into useful navigation aids. The output of any 3-D camera can be converted into a 3-D representation called a “point cloud,” which depicts the spatial locations of individual points on the surfaces of objects. The Rus group’s algorithm clustered points together to identify flat surfaces in the scene, then measured the unobstructed walking distance in multiple directions.

For the new paper, the researchers modified this algorithm, with power conservation in mind. The standard way to identify planes in point clouds, for instance, is to pick a point at random, then look at its immediate neighbors, and determine whether any of them lie in the same plane. If one of them does, the algorithm looks at its neighbors, determining whether any of them lie in the same plane, and so on, gradually expanding the surface.

This is computationally efficient, but it requires frequent requests to a chip’s main memory bank. Because the algorithm doesn’t know in advance which direction it will move through the point cloud, it can’t reliably preload the data it will need into its small working-memory bank.

Fetching data from main memory, however, is the biggest energy drain in today’s chips, so the MIT researchers modified the standard algorithm. Their algorithm always begins in the upper left-hand corner of the point cloud and scans along the top row, comparing each point only to the neighbor on its left. Then it starts at the leftmost point in the next row down, comparing each point only to the neighbor on its left and to the one directly above it, and repeats this process until it has examined all the points. This enables the chip to load as many rows as will fit into its working memory, without having to go back to main memory.
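A minimal sketch of that scan order, assuming an organized (row-by-row) point cloud with precomputed per-point surface normals and using normal agreement as a loose stand-in for the paper’s plane test (the array layout, threshold, and names are illustrative, not the chip’s implementation):

```python
import numpy as np


def label_coplanar_regions(normals, angle_tol_deg=5.0):
    """Label an H x W organized point cloud in a single row-major sweep.

    Each point is compared only to its left neighbor and to the neighbor
    directly above it, so entire rows can be streamed through a small
    working memory without random accesses back into main memory.
    A full implementation would also merge labels when the left and upper
    neighbors lie in the same plane but carry different labels
    (e.g., with union-find); that step is omitted here for brevity."""
    H, W, _ = normals.shape
    labels = -np.ones((H, W), dtype=int)
    next_label = 0
    cos_tol = np.cos(np.radians(angle_tol_deg))

    def same_plane(a, b):
        # Treat two points as coplanar if their surface normals agree.
        return float(np.dot(normals[a], normals[b])) > cos_tol

    for i in range(H):
        for j in range(W):
            here, left, up = (i, j), (i, j - 1), (i - 1, j)
            if j > 0 and same_plane(here, left):
                labels[here] = labels[left]
            elif i > 0 and same_plane(here, up):
                labels[here] = labels[up]
            else:
                labels[here] = next_label
                next_label += 1
    return labels
```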

This and similar tricks drastically reduced the chip’s power consumption. But the data-processing chip isn’t the component of the navigation system that consumes the most energy; the 3-D camera is. So the chip also includes a circuit that quickly and coarsely compares each new frame of data captured by the camera with the one that immediately preceded it. If little changes over successive frames, that’s a good indication that the user is still; the chip sends a signal to the camera, which can lower its frame rate, saving power.
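In software terms, that coarse frame-to-frame check might look something like the sketch below (the thresholds and frame rates are illustrative, not the chip’s actual values):

```python
import numpy as np


def choose_frame_rate(prev_frame, new_frame, low_fps=1, high_fps=30,
                      depth_tol=10, change_threshold=0.02):
    """Compare two depth frames coarsely: if only a small fraction of
    pixels changed by more than depth_tol, assume the user is standing
    still and request a lower camera frame rate to save power."""
    diff = np.abs(new_frame.astype(np.int32) - prev_frame.astype(np.int32))
    fraction_changed = float((diff > depth_tol).mean())
    return low_fps if fraction_changed < change_threshold else high_fps
```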

Although the prototype navigation system is less obtrusive than its predecessors, it should be possible to miniaturize it even further. Currently, one of its biggest components is a heat dissipation device atop a second chip that converts the camera’s output into a point cloud. Adding the conversion algorithm to the data-processing chip should have a negligible effect on its power consumption but would significantly reduce the size of the system’s electronics.

In addition to the Andrea Bocelli Foundation, the work was cosponsored by Texas Instruments, and the prototype chips were manufactured through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.

Read this article on MIT News.

February 2, 2016


Hack-proof RFID chips


Larry Hardesty | MIT News Office

New technology could secure credit cards, key cards, and pallets of goods in warehouses.


Researchers have designed an RFID chip that prevents so-called side-channel attacks, which analyze patterns of memory access or fluctuations in power usage when a device is performing a cryptographic operation, in order to extract its cryptographic key. Pictured here is a standard RFID chip.


Researchers at MIT and Texas Instruments have developed a new type of radio frequency identification (RFID) chip that is virtually impossible to hack.

If such chips were widely adopted, it could mean that an identity thief couldn’t steal your credit card number or key card information by sitting next to you at a café, and high-tech burglars couldn’t swipe expensive goods from a warehouse and replace them with dummy tags.

Texas Instruments has built several prototypes of the new chip, to the researchers’ specifications, and in experiments the chips have behaved as expected. The researchers presented their work this week at the International Solid-State Circuits Conference in San Francisco.

According to Chiraag Juvekar, a graduate student in electrical engineering at MIT and first author on the new paper, the chip is designed to prevent so-called side-channel attacks. Side-channel attacks analyze patterns of memory access or fluctuations in power usage when a device is performing a cryptographic operation, in order to extract its cryptographic key.

“The idea in a side-channel attack is that a given execution of the cryptographic algorithm only leaks a slight amount of information,” Juvekar says. “So you need to execute the cryptographic algorithm with the same secret many, many times to get enough leakage to extract a complete secret.”

One way to thwart side-channel attacks is to regularly change secret keys. In that case, the RFID chip would run a random-number generator that would spit out a new secret key after each transaction. A central server would run the same generator, and every time an RFID scanner queried the tag, it would relay the results to the server, to see if the current key was valid.
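
As a toy illustration of that scheme (not the researchers' actual protocol; the HMAC-based key derivation here is my own assumption), the tag and the server can stay in sync by advancing a shared key after every transaction:

```python
import hashlib
import hmac

def next_key(current_key: bytes) -> bytes:
    # Both sides advance the key the same way after every transaction,
    # so a captured power trace is only ever useful for a single key.
    return hmac.new(current_key, b"rotate", hashlib.sha256).digest()

class Tag:
    def __init__(self, seed: bytes):
        self.key = seed

    def respond(self, challenge: bytes) -> bytes:
        resp = hmac.new(self.key, challenge, hashlib.sha256).digest()
        self.key = next_key(self.key)        # rotate after each use
        return resp

class Server:
    def __init__(self, seed: bytes):
        self.key = seed

    def verify(self, challenge: bytes, resp: bytes) -> bool:
        expected = hmac.new(self.key, challenge, hashlib.sha256).digest()
        self.key = next_key(self.key)        # stay in sync with the tag
        return hmac.compare_digest(expected, resp)

# A reader relays the tag's response to the server for validation.
seed = bytes(32)
tag, server = Tag(seed), Server(seed)
assert server.verify(b"reader-nonce-001", tag.respond(b"reader-nonce-001"))
```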

Blackout

Such a system would still, however, be vulnerable to a “power glitch” attack, in which the RFID chip’s power would be repeatedly cut right before it changed its secret key. An attacker could then run the same side-channel attack thousands of times, with the same key. Power-glitch attacks have been used to circumvent limits on the number of incorrect password entries in password-protected devices, but RFID tags are particularly vulnerable to them, since they’re charged by tag readers and have no onboard power supplies.

Two design innovations allow the MIT researchers’ chip to thwart power-glitch attacks: One is an on-chip power supply whose connection to the chip circuitry would be virtually impossible to cut, and the other is a set of “nonvolatile” memory cells that can store whatever data the chip is working on when it begins to lose power.

For both of these features, the researchers — Juvekar; Anantha Chandrakasan, who is Juvekar’s advisor and the Vannevar Bush Professor of Electrical Engineering and Computer Science; Hyung-Min Lee, who was a postdoc in Chandrakasan’s group when the work was done and is now at IBM; and TI’s Joyce Kwong, who did her master’s degree and PhD with Chandrakasan — use a special type of material known as a ferroelectric crystal.

As a crystal, a ferroelectric material consists of molecules arranged into a regular three-dimensional lattice. In every cell of the lattice, positive and negative charges naturally separate, producing electrical polarization. The application of an electric field, however, can align the cells’ polarization in either of two directions, which can represent the two possible values of a bit of information.

When the electric field is removed, the cells maintain their polarization. Texas Instruments and other chip manufacturers have been using ferroelectric materials to produce nonvolatile memory, or computer memory that retains data when it’s powered off.

Complementary capacitors

A ferroelectric crystal can also be thought of as a capacitor, an electrical component that separates charges and is characterized by the voltage between its negative and positive poles. Texas Instruments’ manufacturing process can produce ferroelectric cells with either of two voltages: 1.5 volts or 3.3 volts.

The researchers’ new chip uses a bank of 3.3-volt capacitors as an on-chip energy source. But it also features 571 1.5-volt cells that are discretely integrated into the chip’s circuitry. When the chip’s power source — the external scanner — is removed, the chip taps the 3.3-volt capacitors and completes as many operations as it can, then stores the data it’s working on in the 1.5-volt cells.

When power returns, before doing anything else the chip recharges the 3.3-volt capacitors, so that if it’s interrupted again, it will have enough power to store data. Then it resumes its previous computation. If that computation was an update of the secret key, it will complete the update before responding to a query from the scanner. Power-glitch attacks won’t work.
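
The defense itself is built in hardware, but its ordering can be sketched behaviorally; everything below (the dictionary standing in for the ferroelectric cells, the method names, and the key-update rule) is invented for illustration.

```python
import hashlib
import hmac

def _advance(key: bytes) -> bytes:
    return hmac.new(key, b"rotate", hashlib.sha256).digest()

class GlitchTolerantTag:
    """Behavioral sketch: the point is the ordering. On power-up the tag
    finishes any interrupted key update before answering a reader, so
    cutting power mid-update cannot freeze the key in place."""

    def __init__(self, nv_store):
        self.nv = nv_store                   # dict standing in for nonvolatile cells

    def on_power_up(self):
        # The real chip first recharges its 3.3-volt capacitor bank;
        # here we simply resume any checkpointed work.
        if self.nv.get("pending_update", False):
            self._finish_key_update()

    def handle_query(self, challenge: bytes) -> bytes:
        if self.nv.get("pending_update", False):
            self._finish_key_update()        # never respond with a stale key
        return hmac.new(self.nv["key"], challenge, hashlib.sha256).digest()

    def _finish_key_update(self):
        self.nv["key"] = _advance(self.nv["key"])
        self.nv["pending_update"] = False
```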

Because the chip has to charge capacitors and complete computations every time it powers on, it’s somewhat slower than conventional RFID chips. But in tests, the researchers found that they could get readouts from their chips at a rate of 30 per second, which should be more than fast enough for most RFID applications.

“In the age of ubiquitous connectivity, security is one of the paramount challenges we face,” says Ahmad Bahai, chief technology officer at Texas Instruments. “Because of this, Texas Instruments sponsored the authentication tag research at MIT that is being presented at ISSCC. We believe this research is an important step toward the goal of a robust, low-cost, low-power authentication protocol for the industrial Internet.”

The MIT researchers' work was also funded by the Japanese automotive company Denso.

Read this article on MIT News.

February 3, 2016


Energy-friendly chip can perform powerful artificial-intelligence tasks

February 3, 2016

Larry Hardesty | MIT News Office

Advance could enable mobile devices to implement “neural networks” modeled on the human brain.

Illustration

MIT researchers have designed a new chip to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing. Image: MIT News


In recent years, some of the most exciting advances in artificial intelligence have come courtesy of convolutional neural networks, large virtual networks of simple information-processing units, which are loosely modeled on the anatomy of the human brain.

Neural networks are typically implemented using graphics processing units (GPUs), special-purpose graphics chips found in all computing devices with screens. A mobile GPU, of the type found in a cell phone, might have almost 200 cores, or processing units, making it well suited to simulating a network of distributed processors.

At the International Solid-State Circuits Conference in San Francisco this week, MIT researchers presented a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.

Neural nets were widely studied in the early days of artificial-intelligence research, but by the 1970s, they’d fallen out of favor. In the past decade, however, they’ve enjoyed a revival, under the name “deep learning.”

“Deep learning is useful for many applications, such as object recognition, speech, face detection,” says Vivienne Sze, an assistant professor of electrical engineering at MIT whose group developed the new chip. “Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”

The new chip, which the researchers dubbed “Eyeriss,” could also help usher in the “Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination. With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.

Division of labor

A neural network is typically organized into layers, and each layer contains a large number of processing nodes. Data come in and are divided up among the nodes in the bottom layer. Each node manipulates the data it receives and passes the results on to nodes in the next layer, which manipulate the data they receive and pass on the results, and so on. The output of the final layer yields the solution to some computational problem.

In a convolutional neural net, many nodes in each layer process the same data in different ways. The networks can thus swell to enormous proportions. Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.
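
As a toy illustration of that structure (not the Eyeriss dataflow itself), the sketch below runs a single convolutional layer in plain NumPy; every filter revisits the same image patches, which is exactly the kind of data reuse a specialized chip tries to keep close to its cores.

```python
import numpy as np

def conv_layer(image, filters):
    """Naive convolutional layer: each filter (one 'node' per output
    channel) slides over the same input, so every image patch is reused
    once per filter."""
    H, W = image.shape
    n_filters, k, _ = filters.shape
    out = np.zeros((n_filters, H - k + 1, W - k + 1))
    for f in range(n_filters):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                patch = image[i:i + k, j:j + k]   # same patch for every filter
                out[f, i, j] = np.sum(patch * filters[f])
    return np.maximum(out, 0)                     # simple ReLU nonlinearity

# Tiny example: a 16 x 16 "image" and four random 3 x 3 filters.
rng = np.random.default_rng(0)
activations = conv_layer(rng.standard_normal((16, 16)),
                         rng.standard_normal((4, 3, 3)))
```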

The particular manipulations performed by each node in a neural net are the result of a training process, in which the network tries to find correlations between raw data and labels applied to it by human annotators. With a chip like the one developed by the MIT researchers, a trained network could simply be exported to a mobile device.

This application imposes design constraints on the researchers. On one hand, the way to lower the chip’s power consumption and increase its efficiency is to make each processing unit as simple as possible; on the other hand, the chip has to be flexible enough to implement different types of networks tailored to different tasks.

Sze and her colleagues — Yu-Hsin Chen, a graduate student in electrical engineering and computer science and first author on the conference paper; Joel Emer, a professor of the practice in MIT’s Department of Electrical Engineering and Computer Science, and a senior distinguished research scientist at the chip manufacturer NVidia, and, with Sze, one of the project’s two principal investigators; and Tushar Krishna, who was a postdoc with the Singapore-MIT Alliance for Research and Technology when the work was done and is now an assistant professor of computer and electrical engineering at Georgia Tech — settled on a chip with 168 cores, roughly as many as a mobile GPU has.

Act locally

The key to Eyeriss’s efficiency is to minimize the frequency with which cores need to exchange data with distant memory banks, an operation that consumes a good deal of time and energy. Whereas many of the cores in a GPU share a single, large memory bank, each of the Eyeriss cores has its own memory. Moreover, the chip has a circuit that compresses data before sending it to individual cores.

Each core is also able to communicate directly with its immediate neighbors, so that if they need to share data, they don’t have to route it through main memory. This is essential in a convolutional neural network, in which so many nodes are processing the same data.

The final key to the chip’s efficiency is special-purpose circuitry that allocates tasks across cores. In its local memory, a core needs to store not only the data manipulated by the nodes it’s simulating but data describing the nodes themselves. The allocation circuit can be reconfigured for different types of networks, automatically distributing both types of data across cores in a way that maximizes the amount of work that each of them can do before fetching more data from main memory.
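
A rough software analogy for that blocking strategy (the chip's actual allocation logic is not described in the article) is to size the working set so that each block of input is fetched from main memory only once:

```python
import numpy as np

def tiled_conv(image, filt, tile=8):
    """Process the output in tiles sized to fit a core's local buffer,
    so each input block (plus a small halo for the filter) is fetched
    from 'main memory' once, rather than once per output pixel."""
    k = filt.shape[0]
    H, W = image.shape
    out = np.zeros((H - k + 1, W - k + 1))
    fetches = 0
    for ti in range(0, H - k + 1, tile):
        for tj in range(0, W - k + 1, tile):
            # One bulk fetch per tile into the "local buffer".
            local = image[ti:ti + tile + k - 1, tj:tj + tile + k - 1].copy()
            fetches += 1
            h, w = local.shape
            for i in range(h - k + 1):
                for j in range(w - k + 1):
                    out[ti + i, tj + j] = np.sum(local[i:i + k, j:j + k] * filt)
    return out, fetches
```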

At the conference, the MIT researchers used Eyeriss to implement a neural network that performs an image-recognition task, the first time that a state-of-the-art neural network has been demonstrated on a custom chip.

“This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” says Mike Polley, a senior vice president at Samsung’s Micro Plasma Ion Lab. “In addition to hardware considerations, the MIT paper also carefully considers how to make the embedded core useful to application developers by supporting industry-standard [network architectures] AlexNet and Caffe.”

Read this article on MIT News.


Computer science meets economics


Larry Hardesty |  MIT News Office

Constantinos Daskalakis adapts techniques from theoretical computer science to game theory.

Picture of Constantinos Daskalakis

Daedalus of Crete — who, according to Greek myth, designed the labyrinth that trapped the Minotaur — is one of the oldest symbols of human ingenuity, credited with the invention of the saw, the ax, glue, and the ship’s sail, among other things.

Constantinos Daskalakis, a recently tenured associate professor of computer science and engineering at MIT, comes from a Cretan family, and while it’s fanciful to suggest that the ingenuity of his work in theoretical computer science owes anything to the example of Daedalus, the problems he explores are undoubtedly labyrinthine.

Much of Daskalakis’ work concentrates on the application of computer science techniques to game theory, a discipline that attempts to get a quantitative handle on human strategic reasoning. Game theory models human interactions as a series of moves in a clearly defined game; each move represents an instance of a particular strategy and may elicit a different response from the other players, leading to different rewards. Even a simple game with only a handful of players can take vastly more twists and turns than the largest physical labyrinth.

Daskalakis’ parents are both from Crete, but they met in Athens, where they had come for college — his father to study mathematics, and his mother to study Ancient Greek literature and philosophy. Until he came to the U.S. for graduate school, Daskalakis lived in the greater Athens area — like one-third of the country’s population. “When you ask the question ‘Where are you from?’ in Greece, it has a different meaning than when you ask it in the States,” Daskalakis says. “In the States it means where you grew up. In Greece it means where your family originated from.”

Both of Daskalakis’ parents were teachers, and as a child, he showed an interest in and an aptitude for both of their disciplines. In junior high, however, he competed in the math Olympiad and finished second in the country. Though literature remains important to him — his MIT Web page features the complete text of a poem by the great Greek modernist poet Constantine Cavafy — from then on, he was marked as a student with exceptional mathematical promise.

Game theory

In Greece, every high school senior opts to take one of three sets of standardized tests, which determines his or her university placement. Daskalakis’ score on the technical exam was the fifth highest in the country, earning him a spot at the prestigious National Technical University of Athens. He enrolled in the five-year electrical engineering and computer science curriculum, the first half of which is spent canvassing a huge range of topics, from the physics of individual electrical components to the most esoteric questions in theoretical computer science.

“Sampling this big spectrum satiated my desire in the applied domain,” Daskalakis says. “I understood I could write a complicated program and then decided, ‘OK, now I know how to write a program. Let’s do math.’”

Daskalakis applied to and was accepted by graduate programs at several U.S. universities, and during his fifth year, he came to the United States to visit them. At the University of California at Berkeley, he was captivated by the computer scientist Christos Papadimitriou, a recipient of both of the Association for Computing Machinery’s major awards for theoretical computer science. Papadimitriou’s larger-than-life personality was celebrated in the bestselling 2009 comic book “Logicomix.”

“He’s a very inspiring person,” Daskalakis says, “and a founding father of the interaction between computer science and economics.”

After returning to Greece, Daskalakis chose to focus on that interaction for his undergraduate thesis. Game theory has been a staple of economics research since 1950, when John Nash, who taught at MIT from 1951 to 1959 and is the subject of the movie “A Beautiful Mind,” published the seminal paper that would ultimately win him the Nobel Prize in economics. Every game has what’s called a Nash equilibrium, which describes a balance of strategies that no player has an incentive to change unilaterally. Daskalakis’ thesis investigated Nash equilibria for games that can be represented as highly regular networks of interactions. The paper was accepted to the 13th Annual European Symposium on Algorithms. “I still find it a very elegant piece of work,” Daskalakis says.
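
For concreteness (the example is mine, not from the article), a pure-strategy Nash equilibrium of a small two-player game can be found by brute force, checking that neither player gains by changing strategy alone; Nash's theorem guarantees an equilibrium only once mixed (randomized) strategies are allowed, so this simple check can come up empty.

```python
import numpy as np

def pure_nash_equilibria(payoff_a, payoff_b):
    """Return every (row, col) strategy pair at which neither player can
    improve by deviating unilaterally."""
    equilibria = []
    rows, cols = payoff_a.shape
    for r in range(rows):
        for c in range(cols):
            best_for_a = payoff_a[r, c] >= payoff_a[:, c].max()
            best_for_b = payoff_b[r, c] >= payoff_b[r, :].max()
            if best_for_a and best_for_b:
                equilibria.append((r, c))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
A = np.array([[-1, -3],
              [ 0, -2]])
B = A.T                                 # the game is symmetric
print(pure_nash_equilibria(A, B))       # [(1, 1)]: mutual defection
```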

Intractability

In 2004, after graduating, Daskalakis moved to Berkeley, to continue his study of algorithmic game theory with Papadimitriou. Four years later, his doctoral dissertation won the Association for Computing Machinery’s thesis award.

In it, Daskalakis proves that computing the Nash equilibrium for a three-person game is computationally intractable. That means that, for any but the simplest of games, all the computers in the world couldn’t calculate its Nash equilibrium in the lifetime of the universe. Consequently, Daskalakis argues, it’s unlikely that the real-world markets modeled by game theorists have converged on Nash equilibria either.

“I have been blessed throughout my career with the most brilliant graduate students and collaborators, but Costis [Daskalakis] is different from all,” Papadimitriou says. “I had been working on what ended up being his thesis problem — the complexity of Nash equilibria — for more than two decades. In the fall of 2004, conversations with Costis, who, remarkably, had just started his first year of graduate studies at Berkeley, inspired me to give it another good push, and this ultimately led to an important result.”

During his last year at Berkeley, Daskalakis got a job offer from MIT, but he deferred it for a year to do a postdoc at Microsoft Research New England. “It was really a year for me to step back and think about what I want to do next before coming to MIT and being very busy,” Daskalakis says.

When computer scientists run up against an intractable problem, their first recourse is to investigate the tractability of approximate solutions to it. After his doctoral thesis, Daskalakis focused on importing notions of approximation from computer science into economics. First, he published several papers examining the computation of approximate Nash equilibria. Some of those results were disheartening: For general games, even relatively coarse approximations are still intractably hard to find.

Ideal auctions

Other problems in game theory, however, have proven more susceptible to analysis from a computational perspective. In 2012, after coming to MIT, Daskalakis and his students solved a 30-year-old problem in economics, a generalization of work that helped earn the University of Chicago’s Roger Myerson the Nobel Prize in economics. That problem was how to structure auctions for multiple items so that, even if all the bidders adopt strategies that maximize their own returns, the auctioneer can still extract the greatest profit.

Since then, Daskalakis’ group has taken on topics in computational genetics, probability theory, and machine learning. They’ve also been working to generalize their results on auction design. “The computer science aesthetic is, ‘Given a problem, I am looking for an algorithm that solves instances of this problem,’” Daskalakis says. “The economics aesthetic is, ‘Given a problem, I want to understand the structure of the solutions to different instances of this problem. I want to be able to make universal statements about the structure of these solutions.’ Working at this interface of economics and computation, you have to balance the two aesthetics. Now we’re trying to import more of that economics aesthetic into our work.”

“To do that,” Daskalakis adds, “it turned out we had to develop new tools in the field of mathematics called optimal transport theory,” which examines the most efficient way to move objects — or data — between multiple origins and destinations. The labyrinthine path that Daskalakis started down as a senior in college continues to branch in unexpected ways.

Read this article on MIT News.

February 4, 2016


Welcome the MIT Siebel Scholars for 2016


Michael Patrick Rutter |  School of Engineering

Graduate students from bioengineering, business, computer science, and energy science join a distinguished intellectual community.

Picture of Siebel Scholars

EECS group photo at the Siebel Scholars Award Lunch. Shown (top row, left to right): EECS Graduate Administrator Leslie Kolodziejski and Department Head Anantha Chandrakasan; (bottom row, left to right): Jane Hsin-Yu Lai, Sirma Orguc, Alvaro Morales, and Dogyoon Song. Missing from the photo: Wei Ouyang. Photo: Audrey Resutek.


Sixteen MIT graduate students are among the 2016 cohort of Siebel Foundation Scholars hailing from the world’s top graduate programs in business, bioengineering, computer science, and energy science.

Honored for their academic achievements, leadership, and commitments to addressing crucial global challenges, the select MIT students are part of a class of 90 individuals receiving a $35,000 award for their final year of study. In addition, they will join a community of more than 1,000 past Siebel Scholars, including 200 MIT affiliates.

“We are deeply pleased and thankful that our students continue to benefit from the Siebel Foundation’s longstanding support for academic excellence and leadership,” says Ian A. Waitz, dean of the MIT School of Engineering. “The program provides a rare and wonderful opportunity to convene a group of exceptional students who are poised to become future innovators, educators, and thought leaders. By connecting them and asking them to collectively think about some of the world’s most challenging problems, it amplifies their individual talents.”

Established in 2000 by the Thomas and Stacey Siebel Foundation, the program offers grants through 25 partner programs at global universities in the United States, China, France, Italy, and Japan.

“In addition to recognizing the individual excellence of our students, the Siebel Scholars program opens the door to a powerful global network,” says David C. Schmittlein, dean of the MIT Sloan School of Management. “Past MIT Sloan recipients have all conveyed the value of bumping up against disciplines and approaches different from their own. In the world of modern business, that perspective is not just nice to have, but a necessity.”

The Siebel Scholars are chosen by the deans of their respective schools on the basis of outstanding academic achievement and demonstrated leadership.

MIT was one of the few schools that named a scholar in the newly established energy-science field: Morgan Edwards, a PhD candidate in the Institute for Data, Systems, and Society (IDSS), who is developing tools to assess the performance of energy technologies in the face of changing climate and environmental constraints.

With 16 scholars, MIT represents the largest group of Siebel recipients. The full list of this year’s MIT honorees appears in the MIT News article linked below.

Read this article on MIT News.

February 4, 2016


John Wyatt, professor and cofounder of Boston Retinal Implant Project, passes away at 69


Department of Electrical Engineering and Computer Science

Dedicated researcher was a circuits expert developing a retinal implant to help the blind see.

John Wyatt

Photo credit: Boston Retinal Implant Project


John Wyatt ’68, who served as a professor of electrical engineering for 36 years, passed away at home in the company of his family on Wednesday, February 3. He was 69.

Wyatt was a devoted researcher who spent decades developing retinal implants to restore sight to people affected by age-related macular degeneration and retinitis pigmentosa, the two leading causes of blindness worldwide. An expert in circuits, he focused his work on developing a chip that could be implanted in the retina to transmit visual information to the optic nerve.

A native of Nashville, Tennessee, Wyatt received an SB from MIT in 1968, an MS from Princeton University in 1970, and a PhD from the University of California at Berkeley in 1979, all in electrical engineering. He joined the EECS faculty in 1979, and retired from MIT in June 2015.

Wyatt was a driven researcher who pursued big ideas. In 1989, he cofounded the Boston Retinal Implant Project with Dr. Joseph Rizzo of Harvard Medical School and the Massachusetts Eye and Ear Infirmary, and led the project’s engineering team. Their group was the first to use microfabricated electrode arrays to electrically stimulate the human retina. Wyatt and Rizzo also cofounded Bionic Eye Technologies, which is working to commercialize their work to help the blind.

Their prosthetic design uses a camera embedded in a pair of glasses worn by the user to “see.” The camera then transmits visual information to a chip embedded in the retina, with the goal of restoring enough sight that a user might be able to find a door in a room, or walk down the street without the aid of a cane.

More recent advances in microfabrication and packaging made by their team led to the development of a prosthetic with the largest number of individually-controllable stimulation channels of any neural prosthetic, an advance that could allow a substantially greater level of vision to be obtained.

“I said, that sounds really like science fiction. I spent about three months trying to think about why it couldn’t be done, and I really couldn’t find a reason it couldn’t be done,” Wyatt remembered about his reaction to the implant’s concept in a 2012 video. “So I said, ‘ok, I’ll give it a shot,’ and I’ve been at it for 23 years.”

Wyatt did his first research on the retina during his graduate studies at the University of California at Berkeley, in the lab of Professor Frank Werblin. His doctoral dissertation included a study of how circuits could be used to model forces and flows in biological processes, and he developed this work further during his postdoctoral work at the Medical College of Virginia.

In 1990, Wyatt was appointed the Department of Electrical Engineering and Computer Science’s first Adler Scholar. Named for Richard B. Adler, a professor of electrical engineering and computer science and a friend of Wyatt’s, the appointment allows MIT faculty to take a class for one semester as a student.

In an article about the Adler Scholar program published the same year, the New York Times described Wyatt’s delight at the chance to be a student again: “I don’t want to be a research supervisor particularly. I want to be a researcher.” He enrolled in 6.866 (Machine Vision), a course taught by Berthold K.P. Horn.

In 1998 the Retinitis Pigmentosa International Foundation awarded him the Jules Stein Living Tribute Award.

“John devoted his research to improving the quality of life for millions of people affected by blindness,” wrote David Perreault, associate department head of the Department of Electrical Engineering and Computer Science, in an email to faculty. “He will be long remembered by the many colleagues, students, and patients whose lives he touched.”

Wyatt is survived by his wife, Christie Baxter, his daughter, Julia Wyatt, and stepson Andrew Cook, all of Sudbury, and his brother James Wyatt and nephew Timothy Wyatt, both of Berlin, Germany.

A funeral service for Wyatt will be held on Tuesday, February 9, at 11:00 A.M. at the First Parish Church in Sudbury. Calling hours will be the previous evening, Monday, February 8, from 5:00 P.M.–8:00 P.M., at the Duckett Funeral Home in Sudbury.

February 5, 2016



Peh named 2016 Singapore Research Professor


Department of Electrical Engineering and Computer Science

Li-Shiuan Peh, associate professor of electrical engineering and computer science, receives professorship from the Singapore-MIT Alliance for Research and Technology.

Professor Peh


Li-Shiuan Peh, associate professor of electrical engineering and computer science, has been appointed as a 2016 Singapore Research Professor.

Peh has been conducting research in the Singapore-MIT Alliance for Research and Technology (SMART), a major research enterprise established by MIT in partnership with the National Research Foundation of Singapore (NRF) in 2007.

Her research focuses on low-power interconnection networks, on-chip networks, and parallel computer architectures. Peh describes her research thrust as “network-driven computing,” in which the architecture of future computer chips is driven more by how cores are interconnected than by the design of the cores themselves. Her work has motivated, proposed, and prototyped such on-chip networks to enable the continued scaling of Moore's Law into future many-core chips.

Peh is also involved in the SMART Low Energy Electronic Systems (LEES) research group, which aims to open a new innovation path in the semiconductor industry for creating novel integrated circuits. The LEES Interdisciplinary Research Group (IRG) is structured as a vertically integrated organization, with research expertise spanning materials to circuits. The LEES research process relies on iterative innovation, in which researchers create self-organized constraints through their work, allowing the research to have maximum potential impact in the marketplace.

February 5, 2016

SuperUROP Info Sessions

Spend a year in the lab conducting research for your own SuperUROP project. Learn how at one of our info sessions: https://www.eecs.mit.edu/news-events/calendar/events/superurop-info-session#

Faculty Promotions


Tomás Palacios, Devavrat Shah, Russ Tedrake and Dirk Englund

Top row, left to right: Tomás Palacios, Devavrat Shah; bottom row, left to right: Russ Tedrake, Dirk Englund.

The Department of Electrical Engineering and Computer Science is pleased to announce the promotions of Tomás Palacios, Devavrat Shah, and Russ Tedrake to Full Professor and Dirk Englund to Associate Professor without Tenure. The promotions are effective July 1.

Tomás Palacios, Full Professor
Tomás Palacios has made groundbreaking contributions to electron devices through the use of new materials and nanotechnology. His work includes both advancing the design, fabrication, and application of semiconductors such as GaN, which is ubiquitous in today’s solid-state lighting and power electronics, and developing some of the first device concepts and applications of graphene and other two-dimensional materials. In addition to leading endeavors such as the MIT GaN Energy Initiative, Palacios has made important contributions to our undergraduate classes from the freshman through senior levels and has taken on prominent service activities such as the directorship of the EECS VI-A program.

Devavrat Shah, Full Professor
Devavrat Shah is a leading figure in the area of statistical inference and stochastic networks. His seminal contributions span a variety of areas, including resource allocation in communications networks, inference and learning on graphical models, and algorithms for social data processing such as ranking, recommendation, and crowdsourcing. He has received many accolades for his work, including numerous prize paper awards and the Erlang Prize for outstanding contributions to applied probability. Shah is also a highly valued teacher who has been very active in curriculum development and has taken a leading role in developing the educational program of the Center for Statistics, part of the new Institute for Data, Systems, and Society (IDSS).

Russ Tedrake, Full Professor
Russ Tedrake is a world leader at the intersection of robotics and control theory. He pushes the frontier of humanoid and flying robots by developing both better theory and remarkably novel algorithms. In the DARPA Robotics Challenge, his team demonstrated robots capable of walking autonomously over rough terrain and climbing stairs. He and his students have also demonstrated robots that can fly at high speeds using only a commodity cell-phone processor. Tedrake has received wide recognition for his groundbreaking work and has won numerous best-paper awards from influential robotics conferences. A superb teacher and mentor, Tedrake has vastly enriched our curriculum, and his edX courses are followed by thousands of students.

Dirk Englund, Associate Professor without Tenure
Dirk Englund researches the development of photonic and quantum devices and systems and their use in quantum computation, communications, and sensing. Quantum-based approaches promise revolutionary impact on computation, secure communications, high-precision sensing, and many other areas. Englund has made important contributions to quantum and photonic devices, quantum information processing, quantum communications, and quantum sensing, and he has become widely recognized for his work in these areas. He has also been making valuable educational contributions to the department, including the introduction of a new combined undergraduate and graduate course, “Fundamentals of Photonics.”

February 8, 2016

Leiserson elected to the National Academy of Engineering

February 10, 2016

Michael Patrick Rutter |  School of Engineering

EECS faculty Charles E. Leiserson one of three from MIT community elected to National Academy of Engineering.

Charles Leiserson

Photo: Charles E. Leiserson


Three members of the MIT community — Charles E. Leiserson, Emanuel M. Sachs, and Grant H. Stokes — are among the 80 new members and 22 foreign associates elected to the National Academy of Engineering.

Election to the National Academy of Engineering (NAE) is among the highest professional distinctions accorded to an engineer. Academy membership honors those who have made outstanding contributions to "engineering research, practice, or education, including, where appropriate, significant contributions to the engineering literature," and to "the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education."

Elected this year:

Charles E. Leiserson, the Edwin Sibley Webster Professor in the Department of Electrical Engineering and Computer Science, for theoretically grounded approaches to digital design and parallel computer systems;

Emanuel M. Sachs, a professor in the MIT Department of Mechanical Engineering, for contributions and commercialization in photovoltaics and three-dimensional printing; and

Grant H. Stokes, head of the Space Systems and Technology Division at the MIT Lincoln Laboratory, for innovations in systems for space situational awareness and the discovery of near-Earth asteroids.

“It’s the depth and breadth of their scholarship and level of impact of the work that impresses me about our latest group of NAE members,” says Ian A. Waitz, dean of the School of Engineering and the Jerome C. Hunsaker Professor in the Department of Aeronautics and Astronautics. “Election to the NAE is one of the highest honors an engineer can receive from his or her peers.” Including this year’s inductees, 134 current faculty and staff from MIT are members of the National Academy of Engineering. With this week’s announcement, NAE’s total U.S. membership stands at 2,275; the number of foreign associates is at 232.

At least eight MIT alumni were also named to the NAE this year, including Neal Bergano MS ’83; Frederick Chang ’91; Richard A. Gottscho PhD ’79; James E. Hubbard Jr. ’77, PhD ’82; David S. Johnson PhD ’73; Brian D. Kelley ’92; Michael Maloney PhD ’89; Molly Shoichet ’87; and Jennifer L. West ’92.

 

Read this article on MIT News.

 


Chandrakasan receives honorary doctorate from KU Leuven

February 11, 2016

EECS Department Head Anantha Chandrakasan recognized as global authority in the field of electronic chip development by KU Leuven. 

Anantha Chandrakasan

Photo: Anantha Chandrakasan. Photo by Michael De Lausnay


EECS Department Head Anantha Chandrakasan has been recognized by KU Leuven with an honorary doctorate, conferred at the university's Patron Saint's Day celebration in Leuven, Belgium on February 10, 2016.

Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science, was recognized in a laudatio presented by Wim Dehaene, head of KU Leuven's MICAS (Microelectronics and Sensors) research division. 

"Professor Anantha Chandrakasan is one of the most influential researchers in the field of integrated circuit design. This sentence makes most sense to other researchers from that field. To anyone else, the enormous impact of Professor Chandrakasan’s work may not be obvious at first. And yet, integrated circuits have radically changed our everyday lives. Is it still possible to imagine a society without internet, tablets, or mobile phones? Can you imagine your daily life without wireless garage openers or TV remote controls? None of these things can exist without integrated circuits or chips, as they are usually called. It is a common mistake to think that the information society is only about software. Software needs a platform to run on, and that platform is a piece of hardware. At the heart of that hardware are chips."

Read the full Laudatio from the ceremony.

Learn more about Anantha Chandrakasan's work in a profile by KU Leuven.

