Channel: MIT EECS

Duane Boning named head of Leaders for Global Operations program

August 17, 2016

School of Engineering | MIT News

Boning succeeds David Simchi-Levi as engineering faculty director of master’s program.

Duane Boning

 


Duane S. Boning has been named engineering faculty co-director of the Leaders for Global Operations (LGO) program, effective Sept. 1.

“Professor Boning’s distinguished record of teaching and service to MIT makes him uniquely qualified to lead this important program,” Dean Ian A. Waitz wrote in an email to LGO-affiliated faculty and industry partners earlier today. “I have worked closely with Duane on a range of projects, and he is always a thoughtful, diligent, and energetic partner. The LGO program will continue to thrive under his direction.”

Founded in 1988 as the Leaders for Manufacturing (LFM) program, LGO is a dual-degree program run as a collaboration between the MIT School of Engineering and the MIT Sloan School of Management. Its two-year curriculum features internships at partner companies that are leaders in their industries. Students develop leadership skills for the pharmaceutical, manufacturing, geosciences, energy, high-tech, and global supply chain industries. Boning, who is the Clarence J. LeBel Professor in the Department of Electrical Engineering and Computer Science (EECS), has deep connections with LGO, having co-advised more than 40 student theses with the program.

Boning’s research focuses on manufacturing and design, with emphasis on statistical modeling, control, and variation reduction in semiconductor, microelectromechanical, photonic, and nanomanufacturing processes. He is a recognized leader in the characterization and modeling of spatial variation in integrated circuit and nanofabrication processes, including plasma etch and chemical-mechanical polishing. The tools developed in his group have been commercialized and widely adopted in industry. His recent work is in developing and applying statistical and machine learning methods to model and reduce variation in emerging technologies, including integrated silicon photonics, nanoimprint processes, and biomedical electronics. He served as editor in chief for the IEEE Transactions on Semiconductor Manufacturing from 2001 to 2011 and was named a fellow of the IEEE for contributions to modeling and control in semiconductor manufacturing in 2005.

Boning’s teaching has been recognized at both the undergraduate and graduate levels. He is active as a recitation and laboratory instructor, and as a formal and informal advisor to MIT students at all levels. He helped redesign the undergraduate curriculum in electrical engineering and has created a number of new undergraduate and graduate classes. He is a past recipient of the Ruth and Joel Spira Teaching Award, won the Best Advisor Award from the MIT ACM/IEEE student organization in 2012, and received the 2016 Capers and Marion McDonald Award for Excellence in Mentoring and Advising in the School of Engineering.

Boning has an especially lengthy and distinguished record of service to MIT, with a particular emphasis on international engagements. Since 2011, he has served as the director for the MIT/Masdar Institute Cooperative Program, for which he has worked to help establish a new graduate university in Abu Dhabi, where he has fostered many joint research, education, and outreach activities between MIT and Masdar Institute faculty and students. From 2011 to 2013 he served as founding faculty lead in the MIT Skoltech Initiative, working to conceive and launch a new graduate university, the Skolkovo Institute of Science and Technology (Skoltech) outside Moscow, Russia. And from 2003 to 2004 he served as co-director for undergraduate education in the Cambridge-MIT Institute, fostering a number of undergraduate education research efforts.

Currently associate director of the Microsystems Technology Laboratories, where he oversees the information technology and computer-aided design services organization, Boning also served from 2004 to 2011 as associate head of EECS, where he helped implement a new undergraduate curriculum. He is also a former chair of the Committee on Undergraduate Admissions and Financial Aid and of the Committee on the Undergraduate Program.

Boning received BS degrees in electrical engineering and in computer science from MIT in 1984. He went on to earn an MS and a PhD, both also from MIT, in 1986 and 1991, respectively. A National Science Foundation Fellow from 1984 to 1989 and an Intel Graduate Fellow in 1990, Boning worked on semiconductor process representation, process/device simulation tool integration, and statistical modeling and optimization at the Texas Instruments semiconductor process and design center in Dallas, Texas, from 1991 to 1993. He joined the faculty at MIT in 1992.

Boning succeeds David Simchi-Levi, a professor of civil and environmental engineering and of engineering systems. Simchi-Levi served as faculty co-director of the LGO program through the change of its name and strategy in 2009. He was also instrumental in launching the China LGO program at Shanghai Jiao Tong University in 2006, helped the program reach a historic high of industrial partners (now 26), and recently helped to create a supply chain track for LGO within the Department of Civil and Environmental Engineering.

Read this article on MIT News.



User-friendly language for programming efficient simulations

August 10, 2016

Larry Hardesty | MIT News

New language can speed up computer simulations 200-fold or reduce the code they require by 90 percent.

illustration

Computer simulations of physical systems are common in science, engineering, and entertainment, but they use several different types of tools.

If, say, you want to explore how a crack forms in an airplane wing, you need a very precise physical model of the crack’s immediate vicinity. But if you want to simulate the flexion of an airplane wing under different flight conditions, it’s more practical to use a simpler, higher-level description of the wing.

If, however, you want to model the effects of wing flexion on the crack’s propagation, or vice versa, you need to switch back and forth between these two levels of description, which is difficult not only for computer programmers but for computers, too.

A team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, Adobe, the University of California at Berkeley, the University of Toronto, Texas A&M, and the University of Texas has developed a new programming language that handles that switching automatically.

In experiments, simulations written in the language were dozens or even hundreds of times as fast as those written in existing simulation languages. At the same time, they required only one-tenth as much code as meticulously hand-optimized simulations that could achieve similar execution speeds.

“The story of this paper is that the trade-off between concise code and good performance is false,” says Fredrik Kjolstad, an MIT graduate student in electrical engineering and computer science and first author on a new paper describing the language. “It’s not necessary, at least for the problems that this applies to. But it applies to a large class of problems.”

Indeed, Kjolstad says, the researchers’ language has applications outside physical simulation, in machine learning, data analytics, optimization, and robotics, among other areas. Kjolstad and his colleagues have already used the language to implement a version of Google’s original PageRank algorithm for ordering search results, and they’re currently collaborating with researchers in MIT’s Department of Physics on an application in quantum chromodynamics, a theory of the “strong force” that holds atomic nuclei together.
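The researchers' Simit implementation of PageRank isn't reproduced here, but the core of the algorithm itself is just repeated matrix-vector multiplication (power iteration), which is easy to sketch in a few lines of NumPy. The damping factor of 0.85 is the conventional choice, and the tiny three-page link graph below is invented purely for illustration:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9):
    """Power-iteration PageRank on a dense adjacency matrix.

    adj[i][j] = 1 if page j links to page i.
    """
    n = adj.shape[0]
    # Column-normalize: each page splits its rank evenly among its out-links.
    out_degree = adj.sum(axis=0)
    out_degree[out_degree == 0] = 1  # avoid division by zero for sink pages
    M = adj / out_degree
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * M @ rank
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank

# Toy 3-page web: page 0 links to pages 1 and 2; both link back to page 0.
adj = np.array([[0., 1., 1.],
                [1., 0., 0.],
                [1., 0., 0.]])
ranks = pagerank(adj)
```

Since page 0 receives links from both other pages, it ends up with the highest rank in this toy graph.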

“I think this is a language that is not just going to be for physical simulations for graphics people,” says Saman Amarasinghe, Kjolstad’s advisor and a professor of electrical engineering and computer science (EECS). “I think it can do a lot of other things. So we are very optimistic about where it’s going.”

Kjolstad presented the paper in July at the Association for Computing Machinery’s Siggraph conference, the major conference in computer graphics. His co-authors include Amarasinghe; Wojciech Matusik, an associate professor of EECS; and Gurtej Kanwar, who was an MIT undergraduate when the work was done but is now an MIT PhD student in physics.

Graphs vs. matrices

As Kjolstad explains, the distinction between the low-level and high-level descriptions of physical systems is more properly described as the distinction between descriptions that use graphs and descriptions that use linear algebra.

In this context, a graph is a mathematical structure that consists of nodes, typically represented by circles, and edges, typically represented as line segments connecting the nodes. Edges and nodes can have data associated with them. In a physical simulation, that data might describe tiny triangles or tetrahedra that are stitched together to approximate the curvature of a smooth surface. Low-level simulation might require calculating the individual forces acting on, say, every edge and face of each tetrahedron.

Linear algebra instead represents a physical system as a collection of points, which exert forces on each other. Those forces are described by a big grid of numbers, known as a matrix. Simulating the evolution of the system in time involves multiplying the matrix by other matrices, or by vectors, which are individual rows or columns of numbers.

Matrix manipulations are second nature to many scientists and engineers, and popular simulation software such as MatLab provides a vocabulary for describing them. But using MatLab to produce graphical models requires special-purpose code that translates the forces acting on, say, individual tetrahedra into a matrix describing interactions between points. For every frame of a simulation, that code has to convert tetrahedra to points, perform matrix manipulations, then map the results back onto tetrahedra. This slows the simulation down drastically.
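As a rough illustration of the linear-algebra style of simulation described above, here is a toy one-dimensional mass-spring chain in Python with NumPy, where advancing the system one time step boils down to a single matrix-vector product. The stiffness matrix, step size, and initial displacements are all invented for this sketch and say nothing about Simit's internals:

```python
import numpy as np

# Hypothetical chain of 4 unit masses joined by unit springs (free ends).
# K is the stiffness matrix: the force on every point at once is -K @ x.
n = 4
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[0, 0] = K[-1, -1] = 1.0  # end masses have only one neighbor

x = np.array([0.0, 0.1, -0.1, 0.0])  # displacements from rest
v = np.zeros(n)                      # velocities
dt = 0.01                            # time step

for _ in range(1000):       # semi-implicit Euler time-stepping
    v += dt * (-K @ x)      # one matrix-vector product per frame
    x += dt * v
```

Because each row of K sums to zero, total momentum (`v.sum()`) is conserved up to floating-point error, which is a quick sanity check on this kind of matrix formulation.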

So programmers who need to factor in graphical descriptions of physical systems will often write their own code from scratch. But manipulating data stored in graphs can be complicated, and tracking those manipulations requires much more code than matrix manipulation does. “It’s not just that it’s a lot of code,” says Kjolstad. “It’s also complicated code.”

Automatic translation

Kjolstad and his colleagues’ language, which is called Simit, requires the programmer to describe the translation between the graphical description of a system and the matrix description. But thereafter, the programmer can use the language of linear algebra to program the simulation.

During the simulation, however, Simit doesn’t need to translate graphs into matrices and vice versa. Instead, it can translate instructions issued in the language of linear algebra into the language of graphs, preserving the runtime efficiency of hand-coded simulations.

Unlike hand-coded simulations, however, programs written in Simit can run on either conventional microprocessors or on graphics processing units (GPUs), with no change to the underlying code. In the researchers’ experiments, Simit code running on a GPU was between four and 20 times as fast as on a standard chip.

“One of the biggest frustrations as a physics simulation programmer and researcher is adapting to rapidly changing computer architectures,” says Chris Wojtan, a professor at the Institute of Science and Technology Austria. “Making a simulation run fast often requires painstakingly specific rearrangements to be made to the code. To make matters worse, different code must be written for different computers. For example, a graphics processing unit has different strengths and weaknesses compared to a cluster of CPUs, and optimizing simulation code to perform well on one type of machine will usually result in sub-optimal performance on a different machine.”

“Simit and Ebb” — another experimental simulation language presented at Siggraph — “aim to handle all of these frustratingly specific optimizations automatically, so programmers can focus their time and energy on developing new algorithms,” Wojtan says. “This is especially exciting news for physics simulation researchers, because it can be difficult to defend creative and raw new ideas against traditional algorithms which have been thoroughly optimized for existing architectures.”

This work was supported by the National Science Foundation and by the Defense Advanced Research Projects Agency SIMPLEX program.

Read this article on MIT News.


Booting up spin-based device studies

August 15, 2016

Denis Paiste | Materials Processing Center

Summer Scholar Grant Smith works to establish parameters for making ferromagnetic thin films in the Luqiao Liu lab.


Summer Scholar Grant Smith looks into a sputter deposition chamber, where he makes ultrathin films — from 2 to 10 nanometers thick — of magnetic materials suitable for spin-based electronics such as those used in computer memory systems. He is working under MIT assistant professor of electrical engineering and computer science Luqiao Liu. Smith’s summer project involves growing the films, making experimental device prototypes, and measuring their properties. Photo: Denis Paiste/Materials Processing Center


MIT Materials Processing Center (MPC)-Center for Materials Science and Engineering (CMSE) Summer Scholar Grant Smith is working in the lab of MIT assistant professor of electrical engineering and computer science Luqiao Liu, to create special thin film materials suitable for spin-based devices such as magnetic tunnel junctions used in computer memory.

For his summer project, Smith is operating a sputter deposition chamber, where he grows ultrathin films from 2 to 10 nanometers thick. He is making devices that are precursors to a memory device and measuring their properties.

Magnetic tunnel junctions used in spin-based systems for computer memory got their start with a key breakthrough in 1994 at MIT by research scientist Jagadeesh S. Moodera and colleagues. They are especially valued because they retain information even when the power is off.

A magnetic tunnel junction pairs two thin film materials, each with a special property called ferromagnetism. “Those ferromagnetic layers can either have their magnetizations aligned or anti-aligned,” Smith explains. If they are aligned (that is, if their magnetic fields both point in the same direction), the electrons in one layer will have more states available to them in the other layer; if they are anti-aligned, with magnetic fields pointing in opposite directions, there will be fewer states available to electrons in that other layer.

Change in resistance

“When you’re trying to push a current through and the magnetizations are aligned, the resistance is much lower. So if you fix one of the magnetic layers and flip the other one based on whether you want it to be a zero or a one, or if you’re just trying to detect the existence of a magnetic field, you’ll be able to see something on the order of a 100 to 300 percent change in the resistance of that device,” Smith says. This is about 10 to 30 times greater than the approximately 10 percent shift in resistance in the first such devices.
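The percentage change Smith describes is the tunnel magnetoresistance (TMR) ratio, conventionally defined as (R_antiparallel - R_parallel) / R_parallel, expressed as a percentage. A trivial Python sketch; the resistance values are made up for illustration, not measured:

```python
def tmr_ratio(r_parallel, r_antiparallel):
    """Tunnel magnetoresistance ratio in percent:
    TMR = (R_AP - R_P) / R_P * 100."""
    return (r_antiparallel - r_parallel) / r_parallel * 100.0

# Illustrative values only: a junction whose resistance triples when the
# layers are anti-aligned shows a 200 percent change,
print(tmr_ratio(1.0, 3.0))   # 200.0
# while the earliest devices showed roughly a 10 percent change.
print(tmr_ratio(1.0, 1.1))
```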

Smith is working with a dual layer of an antiferromagnet called iridium manganese and a ferromagnet called cobalt iron boron. “Those two in conjunction, when you condition them in a specific way, they pin the magnetization of the one ferromagnet in that one specific direction. So that is your fixed layer,” he explains. For his summer project, Smith seeks to establish the ability to grow these magnetic tunnel junctions in Liu’s lab, and if that is a success, to try to manipulate that magnetization with the spin texture of a topological semimetal in order to do switching.

Nice spot to be

“I’m just happy to learn anything about this field basically,” says Smith, a rising senior at Penn State University majoring in physics, who hopes to pursue a doctorate in the sciences. “I’m glad to be learning how to manufacture these magnetic tunnel junctions. That’s a really important skill. They’re used everywhere as far as doing experiments in this field. They’re useful in industry. It’s actually a very nice spot to be in.”

Liu, who joined the MIT faculty in September 2015, says, “So far I have been very glad with Grant Smith's performance. Having a summer intern working in our lab does provide a good advantage to our research as it allows us to look into directions that we were not able to previously due to a shortage of manpower. Moreover, Mr. Smith is really diligent and smart. It is a very nice experience so far to work with such a motivated undergraduate student.”

For Smith, working in Liu’s lab on materials at room temperature is a change of pace from his work at Penn State on materials at extremely low temperatures, in the range of 4 kelvins (-452.47 degrees Fahrenheit). “When you’re working with these sorts of things you can learn about new behaviors, new scientific phenomena,” he says. “Here everything is very room-temperature focused, working much more closely with where industry is at right now.”

MPC and CMSE sponsor the nine-week National Science Foundation (NSF) Research Experience for Undergraduates internships with support from NSF’s Materials Research Science and Engineering Centers program. The program runs from June 7 through Aug. 6.

Read this article on MIT News.


Recording analog memories in human cells


Anne Trafton | MIT News

Engineers program human cells to store complex histories in their DNA.

Illustration

MIT biological engineers have devised a memory storage system illustrated here as a DNA-embedded meter that is recording the activity of a signaling pathway in a human cell. Courtesy of the researchers


MIT biological engineers have devised a way to record complex histories in the DNA of human cells, allowing them to retrieve “memories” of past events, such as inflammation, by sequencing the DNA.

This analog memory storage system — the first that can record the duration and/or intensity of events in human cells — could also help scientists study how cells differentiate into various tissues during embryonic development, how cells experience environmental conditions, and how they undergo genetic changes that lead to disease.

“To enable a deeper understanding of biology, we engineered human cells that are able to report on their own history based on genetically encoded recorders,” says Timothy Lu, an associate professor of electrical engineering and computer science, and of biological engineering. This technology should offer insights into how gene regulation and other events within cells contribute to disease and development, he adds.

Lu, who is head of the Synthetic Biology Group at MIT’s Research Laboratory of Electronics, is the senior author of the new study, which appears in the Aug. 18 online edition of Science. The paper’s lead authors are Samuel Perli SM ’10, PhD ’15 and graduate student Cheryl Cui.

Analog memory

Many scientists, including Lu, have devised ways to record digital information in living cells. Using enzymes called recombinases, they program cells to flip sections of their DNA when a particular event occurs, such as exposure to a particular chemical. However, that method reveals only whether the event occurred, not how much exposure there was or how long it lasted.

Lu and other researchers have previously devised ways to record that kind of analog information in bacteria, but until now, no one has achieved it in human cells.

The new MIT approach is based on the genome-editing system known as CRISPR, which consists of a DNA-cutting enzyme called Cas9 and a short RNA strand that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut.

CRISPR is widely used for gene editing, but the MIT team decided to adapt it for memory storage. In bacteria, where CRISPR originally evolved, the system records past viral infections so that cells can recognize and fight off invading viruses.

“We wanted to adapt the CRISPR system to store information in the human genome,” Perli says.

When using CRISPR to edit genes, researchers create RNA guide strands that match a target sequence in the host organism’s genome. To encode memories, the MIT team took a different approach: They designed guide strands that recognize the DNA that encodes the very same guide strand, creating what they call “self-targeting guide RNA.”

Led by this self-targeting guide RNA strand, Cas9 cuts the DNA encoding the guide strand, generating a mutation that becomes a permanent record of the event. That DNA sequence, once mutated, generates a new guide RNA strand that directs Cas9 to the newly mutated DNA, allowing further mutations to accumulate as long as Cas9 is active or the self-targeting guide RNA is expressed.

By using sensors for specific biological events to regulate Cas9 or self-targeting guide RNA activity, this system enables progressive mutations that accumulate as a function of those biological inputs, thus providing genomically encoded memory.

For example, the researchers engineered a gene circuit that only expresses Cas9 in the presence of a target molecule, such as TNF-alpha, which is produced by immune cells during inflammation. Whenever TNF-alpha is present, Cas9 cuts the DNA encoding the guide sequence, generating mutations. The longer the exposure to TNF-alpha or the greater the TNF-alpha concentration, the more mutations accumulate in the DNA sequence.

By sequencing the DNA later on, researchers can determine how much exposure there was.

“This is the rich analog behavior that we are looking for, where, as you increase the amount or duration of TNF-alpha, you get increases in the amount of mutations,” Perli says.

“Moreover, we wanted to test our system in living animals. Being able to record and extract information from live cells in mice can help answer meaningful biological questions,” Cui says. The researchers showed that the system is capable of recording inflammation in mice.

Most of the mutations result in deletion of part of the DNA sequence, so the researchers designed their RNA guide strands to be longer than the usual 20 nucleotides, so they won’t become too short to function. Sequences of 40 nucleotides are more than long enough to record for a month, and the researchers have also designed 70-nucleotide sequences that could be used to record biological signals for even longer.
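The accumulation behavior described above can be caricatured with a toy stochastic model: while the input signal is present, each time step gives Cas9 some probability of cutting and deleting a few bases from the guide-encoding DNA, until the guide drops below a functional length. Every parameter here (the cut probability, deletion sizes, and functional floor) is invented for illustration and is not from the paper:

```python
import random

def record(signal_duration, cut_prob=0.05, guide_len=40, min_len=20, seed=0):
    """Toy model of the self-targeting guide RNA recorder: count mutations
    accumulated while an input signal (e.g. TNF-alpha) is present."""
    rng = random.Random(seed)
    length, cuts = guide_len, 0
    for _ in range(signal_duration):
        # Cutting only happens while the guide is long enough to function.
        if length > min_len and rng.random() < cut_prob:
            cuts += 1                    # Cas9 cuts; repair leaves a deletion
            length -= rng.randint(1, 3)  # small deletion per repair event
    return cuts

# Longer exposure leaves more mutations to read out by sequencing later.
short_exposure = record(signal_duration=50)
long_exposure = record(signal_duration=500)
```

With a fixed seed the comparison is deterministic: the longer exposure never records fewer cuts than the shorter one, mirroring the duration dependence the researchers report.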

Tracking development and disease

The researchers also showed that they could engineer cells to detect and record more than one input, by producing multiple self-targeting RNA guide strands in the same cell. Each RNA guide is linked to a specific input and is only produced when that input is present. In this study, the researchers showed that they could record the presence of both the antibiotic doxycycline and a molecule known as IPTG.

Currently this method is most likely to be used for studies of human cells, tissues, or engineered organs, the researchers say. By programming cells to record multiple events, scientists could use this system to monitor inflammation or infection, or to monitor cancer progression. It could also be useful for tracing how cells specialize into different tissues during development of animals from embryos to adults.

“With this technology you could have different memory registers that are recording exposures to different signals, and you could see that each of those signals was received by the cell for this duration of time or at that intensity,” Perli says. “That way you could get closer to understanding what’s happening in development.”

Read this article on MIT News.

August 18, 2016


EECS Communication Lab


Grad students: Do you need help polishing a paper or fine-tuning a poster? How about putting together the perfect conference talk or designing visuals? The EECS Communication Lab, a new resource open to graduate students in the Department of Electrical Engineering and Computer Science, can help you with these tasks.

Make an appointment to attend a coaching session for feedback on a wide range of scientific tasks, including publications, job applications, figures, and conference talks. The Communication Advisors, since they are scientists first and foremost, focus on high-level issues such as motivation, audience, and clarity; they are not intended to serve as line editors for grammar or English.

How to use the EECS Communication Lab:

 

There are 11 EECS Communication Advisors — your peers
Currently there are eleven EECS Communication Advisors: Samiya Alkhairy, Ross Finman, Chris Foy, Alex Hanson, Joel Jean, Pete Lindahl, Phillip Stanley-Marbell, Chris Musco, Julia Rubin, Greg Stein, and Samantha Dale Strasser.

The EECS online “CommKit” will be available in mid-September
The Communication Advisors have spent the summer developing an online searchable “CommKit.” This new resource breaks down approximately 20 graduate student communication tasks into quick tips and annotated examples in a range of EECS fields so that, when students are writing at 2 A.M., they have some tools to help them get started. The CommKit is intended to complement the Communication Lab’s individualized in-person consultations.

EECS Communication Lab will offer targeted workshops
This year the Communication Lab will be hosting several 1-2 hour scientific communication workshops for graduate students on a range of topics. The goal of these targeted workshops is to provide timely, hands-on support for graduate students in connection with authentic deadlines. For instance, the Communication Advisors will be working closely with the leadership for Masterworks to offer best practices and individualized feedback to students right before the spring poster deadline.

Alison Takemura will join EECS as the Communication Lab Administrator in September
Dr. Alison Takemura will be joining the EECS Communication Lab team as their Program Administrator in mid-September. Takemura received a PhD from MIT, previously served as a Communication Fellow in the Department of Biological Engineering, and is an active science journalist. EECS Prof. Dirk Englund and Communication Lab Program Director Jaime Goldstein will continue to provide support and oversight for the initiatives.

Scientific communication speaker series will launch in September
As part of the School of Engineering Communication Lab program, EECS’s Communication Lab will co-host six high-profile events this year to help engineers and scientists think critically about their role in communicating their research to a range of audiences. The speaker series will cover topics such as the future of science journalism, data visualization, funding your ideas, and the next generation of scientific publication.

The EECS Communication Lab looks forward to working with you
The Communication Advisors and Communication Lab staff will be partnering closely with EECS headquarters and student leadership groups to develop a strategy on how best to support the EECS graduate student community in learning key writing, speaking, and visual design skills. The team looks forward to hearing from faculty, students, and staff on how to be most useful to the department. If you have a workshop idea or have a group of students who want support on a topic, please let us know.

Date Posted: Friday, August 19, 2016 - 4:30pm

New peer-coaching program will help EECS graduate students learn to write, speak, and design visuals.

Family and friends recall Drew Esquivel


Kathy Wren | MIT News

A talented scholar and athlete, the MIT rising senior “never failed to see the good in others.”

Drew Esquivel

Drew Esquivel


At a memorial service for Drew Esquivel in California, a family friend described a photo that reflected the rising MIT senior’s joyful, adventurous spirit.

Drew and his friends were jumping off some boulders into a pool, all “getting some good air,” according to Steve Vargas, whose family has been close to Drew’s for many years. High above the others, arms outstretched as if shouting “Cowabunga!,” soared Drew.

“He was a go-for-it guy,” Vargas said. “He’d say, ‘Let’s get going.’ ‘I’ll go first.’ ‘We can DO this!’ Drew also had the ability to move others to action and used that wisely to accomplish great things. … We all know he was fiercely competitive, yet he encouraged the success of others and found joy in it.”

Drew Esquivel died on July 16, killed by an alleged drunk driver in Brooklyn, New York. Rising MIT junior Sophia Tabchouri, alum James Balchunas '14, and a third friend, Divya Menezes, were all seriously injured.

On campus at MIT and at the California memorial service in July, which drew hundreds of people from across the country, those who knew Drew recalled a gifted, fun-loving, and compassionate young man, who shared tight bonds with his family and friends.

An “energetic, curious, engaged” student

A native of Healdsburg, California, Drew was majoring in electrical engineering and computer science. He had been living in New York for the summer, working at an internship with the mobile marketing firm Appboy.

MIT faculty knew him as a strong student who connected easily with others, in classes, campus life, and on the wrestling club, of which he was an officer and respected leader.

“His smiling face and gentle manner is imprinted in my mind,” says Arvind, the Johnson Professor of Computer Science and Engineering at MIT and Drew’s academic advisor. “We used to meet to discuss his course registration every term, and he would tell me what was going on in his life. I knew he was keen on wrestling, but he did not quite fit my naive model of a wrestler, who I thought ought to look more menacing.”

Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science, got to know Drew well while teaching him in Course 6.S062 (Mobile and Sensor Computing), a small class co-taught with Professor Sam Madden that involved programming for iPhones in Xcode.

“Drew was a pleasure to talk to and interact with, both in class and during our project meetings through the term,” says Balakrishnan. “He was energetic, curious, engaged. A great team player. Not only smart technically, but he also had the ability to collaborate well with others and communicate ideas well.”

In Balakrishnan’s course, Drew’s group developed a location-based, gesture-encrypted messaging application that allowed users to send each other targeted messages based on their locations.

Drew also participated in the Undergraduate Research Opportunities Program (UROP) in the Laboratory for Computational Physiology, where he worked on developing electronic medical record systems for underserved areas.

In high school, Drew attended the distinguished Summer Science Program in New Mexico. When selected for the prestigious scholarship that would give him a full ride to MIT, Drew told the Healdsburg Tribune, “I have been thinking of college since second grade. I always had a dream school in mind and that made it easy to focus on school.”

Athlete and adventurer

A member of MIT’s wrestling club, Drew was named most valuable player in 2015 and rookie of the year in 2014. He was team captain, a three-time National Collegiate Wrestling Association (NCWA) national qualifier, and a NCWA Northeast conference finalist.

In high school, Drew was a varsity swimmer as well as wrestler, and he ran cross-country. He was an Eagle Scout who completed 50-mile hikes with his troop in the High Sierras, and he adventured in the California outdoors with his family and their friends, backpacking, kayaking, and skiing. Drew’s adventurous spirit and strong work ethic led him to scale many summits, both literal and figurative, says his mother, Susanne Esquivel. “He always ‘bagged the peak,’ thought outside the box to solve problems, strove to be the best, found joy in helping others, put family and friends first, and had fun,” she says.

Forging bonds at MIT

Drew was very “deliberate” about where he applied to college, according to Susanne Esquivel, who says he was “really only interested in Stanford or MIT.” He was offered early admission to Stanford, in his home state, but he had also applied for and received a James Family Foundation scholarship, available at the time to a student at his high school, which provided full tuition to a highly competitive, out-of-state school.

So, Drew attended MIT’s Campus Preview Weekend and returned “excited and firmly committed to MIT,” Susanne says: “He felt he had found the right environment for learning, competing, and making life-long friends. He was right. He received the best education in the world, took advantage of amazing opportunities, and made a difference.”

On campus, Drew became very close with his friends, including his fraternity brothers at Lambda Chi Alpha.

“I remember meeting Drew at a fraternity event in the fall of 2013. I noticed immediately how lively and happy he was to be there meeting all these new people — his energy was infectious,” said rising senior Zak Psaras, in a eulogy at the memorial service.

“As quickly as he bonded with the world around him, he bonded with the city and each of us,” Psaras said, reminiscing about the tight-knit group of friends he and Drew were part of.

“Drew knew how to make everyone laugh and would go out of his way to make others feel included,” Psaras said. “He never failed to see the good in others. For as comical as Drew was, he always knew when to be serious. If he noticed one of his friends was feeling down, he would be the first to check in. He always went out of his way to make conversation with those who needed it most. When he would ask ‘How's it going, man?’ in the most nonchalant, genuine way possible, you knew he wanted the real answer.

“We all deserve to have a Drew in our lives,” Psaras said.

Drew is survived by his parents, Susanne and Andrew Esquivel, and his sisters, Elisabeth and Emma, all of Healdsburg, California; his grandfather Donald B. Boyd, PhD, of Greenwood, Indiana; and his grandparents Andy and Maria Esquivel of Fremont, Ohio.

His parents have established the Drew Esquivel Memorial Scholarship, through the Rotary Club of Healdsburg Sunrise Foundation. This annual, merit-based scholarship will be available to graduates from Healdsburg High School.

A July 20 gathering at the MIT chapel also honored Drew. It followed an email sent to the MIT community at the request of MIT President L. Rafael Reif, who was traveling at the time, in which Chancellor Cynthia Barnhart extended MIT’s deepest sympathy to Drew’s family and friends. To the larger circle of the accident victims’ MIT friends and connections, “we join you in your shock and grief,” she wrote. A memorial service for Drew at MIT is planned for early in the fall semester.

Read this article on MIT News

August 23, 2016


Programmable network routers


Larry Hardesty | MIT News

New design should enable much more flexible traffic management, without sacrificing speed.



Like all data networks, the networks that connect servers in giant server farms, or servers and workstations in large organizations, are prone to congestion. When network traffic is heavy, packets of data can get backed up at network routers or dropped altogether.

Also like all data networks, big private networks have control algorithms for managing network traffic during periods of congestion. But because the routers that direct traffic in a server farm need to be superfast, the control algorithms are hardwired into the routers’ circuitry. That means that if someone develops a better algorithm, network operators have to wait for a new generation of hardware before they can take advantage of it.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and five other organizations hope to change that, with routers that are programmable but can still keep up with the blazing speeds of modern data networks. The researchers outline their system in a pair of papers being presented at the annual conference of the Association for Computing Machinery’s Special Interest Group on Data Communication.

“This work shows that you can achieve many flexible goals for managing traffic, while retaining the high performance of traditional routers,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science at MIT. “Previously, programmability was achievable, but nobody would use it in production, because it was a factor of 10 or even 100 slower.”

“You need to have the ability for researchers and engineers to try out thousands of ideas,” he adds. “With this platform, you become constrained not by hardware or technological limitations, but by your creativity. You can innovate much more rapidly.”

The first author on both papers is Anirudh Sivaraman, an MIT graduate student in electrical engineering and computer science, advised by both Balakrishnan and Mohammad Alizadeh, the TIBCO Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT, who are coauthors on both papers. They’re joined by colleagues from MIT, the University of Washington, Barefoot Networks, Microsoft Research, Stanford University, and Cisco Systems.

Different strokes

Traffic management can get tricky because of the different types of data traveling over a network, and the different types of performance guarantees offered by different services. With Internet phone calls, for instance, delays are a nuisance, but the occasional dropped packet — which might translate to a missing word in a sentence — could be tolerable. With a large data file, on the other hand, a slight delay could be tolerable, but missing data isn’t.

Similarly, a network may guarantee equal bandwidth distribution among its users. Every router in a data network has its own memory bank, called a buffer, where it can queue up packets. If one user has filled a router’s buffer with packets from a single high-definition video, and another is trying to download a comparatively tiny text document, the network might want to bump some of the video packets in favor of the text, to help guarantee both users a minimum data rate.
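As a rough software analogue of such a policy (a hypothetical sketch, not any router's actual algorithm), a buffer can cap the share of slots any single flow may hold, so a large video flow cannot starve a small text transfer:

```python
from collections import defaultdict, deque

class FairBuffer:
    """Toy router buffer that caps any one flow's share of slots.

    Illustrative only; real routers implement policies like this
    (e.g. fair-queuing variants) directly in hardware.
    """
    def __init__(self, capacity, max_share=0.5):
        self.capacity = capacity
        self.max_per_flow = int(capacity * max_share)
        self.queues = defaultdict(deque)
        self.count = 0

    def admit(self, flow, packet):
        if len(self.queues[flow]) >= self.max_per_flow:
            return False  # drop: this flow already holds its cap
        if self.count >= self.capacity:
            return False  # drop: buffer is full
        self.queues[flow].append(packet)
        self.count += 1
        return True

buf = FairBuffer(capacity=4, max_share=0.5)
results = [buf.admit("video", i) for i in range(3)]  # third packet dropped
ok = buf.admit("text", 0)  # the text flow still gets a slot
```

The point of the sketch is only the admission decision: the video flow is cut off at half the buffer, leaving room for other flows.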

A router might also want to modify a packet to convey information about network conditions, such as whether the packet encountered congestion, where, and for how long; it might even want to suggest new transmission rates for senders.

Computer scientists have proposed hundreds of traffic management schemes involving complex rules for determining which packets to admit to a router and which to drop, in what order to queue the packets, and what additional information to add to them — all under a variety of different circumstances. And while in simulations many of these schemes promise improved network performance, few of them have ever been deployed, because of hardware constraints in routers.

The MIT researchers and their colleagues set themselves the goal of finding a set of simple computing elements that could be arranged to implement diverse traffic management schemes, without compromising the operating speeds of today’s best routers and without taking up too much space on-chip.

To test their designs, they built a compiler — a program that converts high-level program instructions into low-level hardware instructions — which they used to compile seven experimental traffic-management algorithms onto their proposed circuit elements. If an algorithm wouldn’t compile, or if it required an impractically large number of circuits, they would add new, more sophisticated circuit elements to their palette.

Assessments

In one of the two new papers, the researchers provide specifications for seven circuit types, each of which is slightly more complex than the last. Some simple traffic management algorithms require only the simplest circuit type, while others require more complex types. But even a bank of the most complex circuits would take up only 4 percent of the area of a router chip; a bank of the least complex types would take up only 0.16 percent.

Beyond the seven algorithms they used to design their circuit elements, the researchers ran several other algorithms through their compiler and found that they compiled to some combination of their simple circuit elements.

“We believe that they’ll generalize to many more,” says Sivaraman. “For instance, one of the circuits allows a programmer to track a running sum — something that is employed by many algorithms.”
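The running-sum primitive Sivaraman mentions can be modeled in a few lines. This Python sketch only illustrates the semantics of such a stateful circuit element, not the hardware itself:

```python
class RunningSumAtom:
    """Toy model of a stateful 'running sum' circuit element.

    A real programmable switch updates this state once per packet at
    line rate; here we just mimic the per-packet update in software.
    """
    def __init__(self):
        self.total = 0

    def process(self, packet_bytes):
        # One state update per arriving packet, as the hardware would do.
        self.total += packet_bytes
        return self.total

atom = RunningSumAtom()
sizes = [1500, 64, 1500, 512]          # packet sizes in bytes (illustrative)
sums = [atom.process(s) for s in sizes]  # [1500, 1564, 3064, 3576]
```

Many traffic-management algorithms reduce to exactly this kind of per-packet state update, which is why a small palette of such elements can cover so many schemes.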

In the second paper, they describe the design of their scheduler, the circuit element that orders packets in the router’s queue and extracts them for forwarding. In addition to queuing packets according to priority, the scheduler can also stamp them with particular transmission times and forward them accordingly. Sometimes, for instance, it could be useful for a router to slow down its transmission rate, in order to prevent bottlenecks elsewhere in the network, or to help ensure equitable bandwidth distribution.
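A software analogue of such a scheduler (a simplified sketch, not the paper's circuit) can be built on a priority queue keyed by each packet's stamped transmission time:

```python
import heapq

class TimestampScheduler:
    """Toy model of a scheduler that releases packets at stamped times.

    Illustrative only: the field names and interface are hypothetical,
    and the real design is a hardware circuit, not a Python class.
    """
    def __init__(self):
        self._queue = []

    def enqueue(self, packet, send_time):
        # Packets are ordered by their stamped transmission time.
        heapq.heappush(self._queue, (send_time, packet))

    def dequeue_ready(self, now):
        # Forward every packet whose stamped time has arrived.
        ready = []
        while self._queue and self._queue[0][0] <= now:
            ready.append(heapq.heappop(self._queue)[1])
        return ready

sched = TimestampScheduler()
sched.enqueue("video", send_time=5)
sched.enqueue("text", send_time=2)
early = sched.dequeue_ready(now=3)  # only the text packet is released
```

Stamping a later `send_time` on a packet is how such a scheduler deliberately slows a flow down to relieve a bottleneck elsewhere.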

Finally, the researchers drew up specifications for their circuits in Verilog, the language electrical engineers typically use to design commercial chips. Verilog’s built-in analytic tools verified that a router using the researchers’ circuits would be fast enough to support the packet rates common in today’s high-speed networks, forwarding a packet of data every nanosecond.

“There are a lot of problems in computer networking we’ve never been able to solve at the speed that traffic actually flows through the network, because there wasn’t support directly in the network devices to analyze the traffic or act on the traffic as it arrives,” says Jennifer Rexford, a professor of computer science at Princeton University. “What’s exciting about both of these works is that they really point to next-generation switch hardware that will be much, much more capable — and more importantly, more programmable, so that we can really change how the network functions without having to replace the equipment inside the network.”

“At the edge of the network, applications change all the time,” she adds. “Who knew Pokémon Go was going to happen? It’s incredibly frustrating when applications’ needs evolve years and years more quickly than the equipment’s ability to support it. Getting the time scale of innovation inside the network to be closer to the time scale of innovation in applications is, I think, quite important.”

Read this article on MIT News.

August 23, 2016



Using light to control genome editing

August 29, 2016

Anne Trafton | MIT News

New technique offers precise manipulation of when and where genes are targeted.


MIT researchers have added an extra layer of control to the CRISPR gene-editing approach, by making the system responsive to light.


The genome-editing system known as CRISPR allows scientists to delete or replace any target gene in a living cell. MIT researchers have now added an extra layer of control over when and where this gene editing occurs, by making the system responsive to light.

With the new system, gene editing takes place only when researchers shine ultraviolet light on the target cells. This kind of control could help scientists study in greater detail the timing of cellular and genetic events that influence embryonic development or disease progression. Eventually, it could also offer a more targeted way to turn off cancer-causing genes in tumor cells.

“The advantage of adding switches of any kind is to give precise control over activation in space or time,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Electrical Engineering and Computer Science at MIT and a member of MIT’s Koch Institute for Integrative Cancer Research and its Institute for Medical Engineering and Science.

Bhatia is the senior author of a paper describing the new technique in the journal Angewandte Chemie. The paper’s lead author is Piyush Jain, a postdoc in MIT’s Institute for Medical Engineering and Science.

Light sensitivity

Before coming to MIT, Jain developed a way to use light to control a process called RNA interference, in which small strands of RNA are delivered to cells to temporarily block specific genes.

“While he was here, CRISPR burst onto the scene and he got very excited about the prospect of using light to activate CRISPR in the same way,” Bhatia says.

CRISPR relies on a gene-editing complex composed of a DNA-cutting enzyme called Cas9 and a short RNA strand that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut. When Cas9 and the guide RNA are delivered into cells, a specific cut is made in the genome; the cells’ DNA repair processes glue the cut back together but permanently delete a small portion of the gene, making it inoperable.
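As a toy illustration of this targeting step (a hypothetical helper, not part of the MIT study), one can sketch how a 20-nucleotide guide sequence followed by an "NGG" PAM motif identifies where Cas9 cuts:

```python
def find_cut_site(genome, guide_rna):
    """Toy model of CRISPR targeting: a 20-nt guide match followed by
    an NGG 'PAM' marks where Cas9 cuts (about 3 bp upstream of the PAM).

    Hypothetical sketch for illustration; real guide design also
    considers strand, off-target sites, and chromatin accessibility.
    """
    spacer = guide_rna.replace("U", "T")  # guide RNA written as DNA
    idx = genome.find(spacer)
    if idx == -1:
        return None  # no matching protospacer in this sequence
    pam = genome[idx + len(spacer): idx + len(spacer) + 3]
    if len(pam) == 3 and pam.endswith("GG"):  # NGG PAM required
        return idx + len(spacer) - 3  # cut site, ~3 bp upstream of PAM
    return None

guide = "GACGUACGUACGUACGUACG"  # 20-nt guide RNA (illustrative)
genome = "TTTT" + "GACGTACGTACGTACGTACG" + "TGG" + "AAAA"
cut = find_cut_site(genome, guide)
```

The light-activated "protector" described below works upstream of this step: it simply prevents the guide from binding its target until the protector is cleaved off.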

In previous efforts to create light-sensitive CRISPR systems, researchers have altered the Cas9 enzyme so that it only begins cutting when exposed to certain wavelengths of light. The MIT team decided to take a different approach and make the binding of the RNA guide strand light-sensitive. For possible future applications in humans, it could be easier to deliver these modified RNA guide strands than to program the target cells to produce light-sensitive Cas9, Bhatia says.

“You really don’t have to do anything different with the cargo you were planning to deliver except to add the light-activated protector,” she says. “It’s an attempt to make the system much more modular.”

To make the RNA guide strands light-sensitive, the MIT team created “protectors” consisting of DNA sequences with light-cleavable bonds along their backbones. These DNA strands can be tailored to bind to different RNA guide sequences, forming a complex that prevents the guide strand from attaching to its target in the genome.

When the researchers expose the target cells to light with a wavelength of 365 nanometers (in the ultraviolet range), the protector DNA breaks into several smaller segments and falls off the RNA, allowing the RNA to bind to its target gene and recruit Cas9 to cut it.

Targeting multiple genes

In this study, the researchers demonstrated that they could use light to control editing of the gene for green fluorescent protein (GFP) and two genes for proteins normally found on cell surfaces and overexpressed in some cancers.

“If this is really a generalizable scheme, then you should be able to design protector sequences against different target sequences,” Bhatia says. “We designed protectors against different genes and showed that they all could be light-activated in this way. And in a multiplexed experiment, when a mixed population of protectors was used, the only targets that were cleaved after light exposure were those being photo-protected.”

This precise control over the timing of gene editing could help researchers study the timing of cellular events involved in disease progression, in hopes of determining the best time to intervene by turning off a gene.

“CRISPR-Cas9 is a powerful technology that scientists can use to study how genes affect cell behavior,” says James Dahlman, an assistant professor of biomedical engineering at Georgia Tech, who was not involved in the research. “This important advance will enable precise control over those genetic changes. As a result, this work gives the scientific community a very useful tool to advance many gene editing studies.”

Bhatia’s lab is also pursuing medical applications for this technique. One possibility is using it to turn off cancerous genes involved in skin cancer, which is a good target for this approach because the skin can be easily exposed to ultraviolet light.

The team is also working on a “universal protector” that could be used with any RNA guide strand, eliminating the need to design a new one for each RNA sequence, and allowing it to inhibit CRISPR-Cas9 cleavage of many targets at once.

The research was funded by the Ludwig Center for Molecular Oncology, the Marie-D. and Pierre Casimir-Lambert Fund, a Koch Institute Support Grant from the National Cancer Institute, and the Marble Center for Cancer Nanomedicine.

Read this article on MIT News.


Nancy Lynch named associate department head


Expert in distributed computing to join department leadership.

Nancy Lynch

Nancy Lynch, the NEC Professor of Software Science and Engineering, has been appointed as associate head of the Department of Electrical Engineering and Computer Science (EECS), effective September 1.

Lynch is known for her fundamental contributions to the foundations of distributed computing. Her work applies a mathematical approach to explore the inherent limits on computability and complexity in distributed systems.

Her best-known research is the “FLP” impossibility result for distributed consensus in the presence of process failures. Her other contributions include the I/O automata modeling frameworks. Lynch’s recent work focuses on wireless network algorithms and biological distributed algorithms.

The longtime head of the Theory of Distributed Systems (TDS) research group in CSAIL, Lynch joined MIT in 1981. She received her BS from Brooklyn College in 1968 and her PhD from MIT in 1972, both in mathematics. She also served for several years as head of CSAIL’s Theory of Computation (TOC) group.

She is also the author of several books and textbooks, including the graduate textbook Distributed Algorithms, considered a standard reference in the field. Lynch has also co-authored several hundred articles about distributed algorithms and impossibility results, and about formal modeling and verification of distributed systems. She is the recipient of numerous awards, an ACM Fellow, a Fellow of the American Academy of Arts and Sciences, and a member of both the National Academy of Sciences and the National Academy of Engineering.

Lynch succeeds Silvio Micali, the Ford Professor of Computer Science and Engineering, who has served as associate department head since January 2015.

“Silvio brought his characteristic diligence and energy to all aspects of his work as associate department head,” said Anantha Chandrakasan, EECS department head and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “I would like to extend my sincere thanks and express my appreciation for his tremendous service.”

Date Posted: 

Tuesday, September 6, 2016 - 3:15pm


New applications for ultracapacitors

September 7, 2016

Rob Matheson | MIT News

Startup’s energy-storage devices find uses in drilling operations, aerospace applications, electric vehicles.


FastCAP Systems' ultracapacitors (pictured) can withstand extreme temperatures and harsh environments, opening up new uses for the devices across a wide range of industries, including oil and gas, aerospace and defense, and electric vehicles. Courtesy of FastCAP Systems


Devices called ultracapacitors have recently become attractive forms of energy storage: They recharge in seconds, have very long lifespans, work with close to 100 percent efficiency, and are much lighter and less volatile than batteries. But they suffer from low energy-storage capacity and other drawbacks, meaning they mostly serve as backup power sources for things like electric cars, renewable energy technologies, and consumer devices.

But MIT spinout FastCAP Systems is developing ultracapacitors, and ultracapacitor-based systems, that offer greater energy density and other advancements. This technology has opened up new uses for the devices across a wide range of industries, including some that operate in extreme environments.

Based on MIT research, FastCAP’s ultracapacitors store up to 10 times the energy and achieve 10 times the power density of commercial counterparts. They’re also the only commercial ultracapacitors capable of withstanding temperatures reaching as high as 300 degrees Celsius and as low as minus 110 C, allowing them to endure conditions found in drilling wells and outer space. Most recently, the company developed a AA-battery-sized ultracapacitor with the perks of its bigger models, so clients can put the devices in places where ultracapacitors couldn’t fit before.

Founded in 2008, FastCAP has already taken its technology to the oil and gas industry, and now has its sights set on aerospace and defense and, ultimately, electric, hybrid, and even fuel-cell vehicles. “In our long-term product market, we hope that we can make an impact on transportation, for increased energy efficiency,” says co-founder John Cooley PhD ’11, who is now president and chief technology officer of FastCAP.

FastCAP’s co-founders and technology co-inventors are MIT alumnus Riccardo Signorelli PhD ’09 and Joel Schindall, the Bernard Gordon Professor of the Practice in the Department of Electrical Engineering and Computer Science.

A “hairbrush” of carbon nanotubes

Ultracapacitors use electric fields to move ions to and from the surfaces of positive and negative electrode plates, which are usually coated with a porous material called activated carbon. Ions cling to the electrodes and let go quickly, allowing for quick cycling, but the small surface area limits the number of ions that cling, restricting energy storage. Traditional ultracapacitors can, for instance, hold about 5 percent of the energy that lithium ion batteries of the same size can.
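The energy a capacitor stores follows E = ½CV², which makes the capacity gap easy to see in a back-of-envelope sketch. The numbers below are illustrative commercial-cell values, not FastCAP specifications:

```python
def capacitor_energy_wh(capacitance_f, voltage_v):
    """Energy stored in a capacitor, E = 0.5 * C * V^2, converted
    from joules to watt-hours (1 Wh = 3600 J)."""
    return 0.5 * capacitance_f * voltage_v ** 2 / 3600.0

# Illustrative numbers only: a large 3000 F ultracapacitor
# cell rated at 2.7 V stores roughly 3 Wh.
e_wh = capacitor_energy_wh(3000, 2.7)
```

A lithium-ion cell of comparable size stores far more energy, which is the "about 5 percent" gap the article describes; the ultracapacitor's advantage is in power and cycle life, not capacity.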

In the late 2000s, the FastCAP founding team had a breakthrough: They discovered that a tightly packed array of carbon nanotubes vertically aligned on the electrode provided much more surface area. The array was also uniform, whereas the porous material was irregular and difficult for ions to move in and out of. “A way to look at it is the industry standard looks like nanoscopic sponge, and the vertically aligned nanotube arrays look like a nanoscopic hairbrush” that provides the ions more efficient access to the electrode surface, Cooley says.

With funding from the Ford-MIT Alliance and MIT Energy Initiative, the researchers built a fingernail-sized prototype that stored twice the energy and delivered seven to 15 times more power than traditional ultracapacitors.

In 2008, the three researchers launched FastCAP, and Cooley and Signorelli brought the business idea to Course 15.366 (Energy Ventures), where they designed a three-step approach to market. The idea was first to build a product for an early market: oil and gas. Once they gained momentum, they would focus on two additional markets, which turned out to be aerospace and defense, and then automotive and stationary storage, such as server farms and grids. “One of the paradigms of Energy Ventures was that steppingstone approach that helped the company succeed,” Cooley says.

FastCAP then earned a finalist spot in the 2009 MIT Clean Energy Prize (CEP), which came with some additional perks. “The value there was in the diligence effort we did on the business plan, and in the marketing effect that it had on the company,” Cooley says.

Based on their CEP business plan, that year FastCAP won a $5 million U.S. Department of Energy (DOE) Advanced Research Projects Agency-Energy grant to design ultracapacitors for its target markets in automotive and stationary storage. FastCAP also earned a 2012 DOE Geothermal Technologies Program grant to develop very high-temperature energy storage for geothermal well drilling, where temperatures far exceed what available energy-storage devices can tolerate. Still under development, these ultracapacitors have proven to perform from minus 5 C to over 250 C.

From underground to outer space

Over the years, FastCAP made several innovations that have helped the ultracapacitors survive in the harsh conditions. In 2012, FastCAP designed its first-generation product, for the oil and gas market: a high-temperature ultracapacitor that could withstand temperatures of 150 C and posed no risk of explosion when crushed or damaged. “That was an interesting market for us, because it’s a very harsh environment with [tough] engineering challenges, but it was a high-margin, low-volume first-entry market,” Cooley says. “We learned a lot there.”

In 2014, FastCAP deployed its first commercial product. The Ulysses Power System is an ultracapacitor-powered telemetry device, a long antenna-like system that communicates with drilling equipment. This replaces the battery-powered systems that are volatile and less efficient. It also amplifies the device’s signal strength by 10 times, meaning it can be sent thousands of feet underground and through subsurface formations that were never thought penetrable in this way before.

After a few more years of research and development, the company is now ready to break into aerospace and defense. In 2015, FastCAP completed two grant programs with NASA to design ultracapacitors for deep space missions (involving very low temperatures) and for Venus missions (involving very high temperatures).

In May 2016, FastCAP continued its relationship with NASA to design an ultracapacitor-powered module for components on planetary balloons, which float to the edge of Earth’s atmosphere to observe comets. The company is also developing an ultracapacitor-based energy-storage system to increase the performance of the miniature satellites known as CubeSats. There are other aerospace applications too, Cooley says: “There are actuator systems for stage separation devices in launch vehicles, and other things in satellites and spacecraft systems, where onboard systems require high power and the usual power source can’t handle that.”

A longtime goal has been to bring ultracapacitors to electric and hybrid vehicles, providing high-power capabilities for stop-start and engine starting, torque assist, and longer battery life. In March, FastCAP penned a deal with electric-vehicle manufacturer Mullen Technologies. The idea is to use the ultracapacitors to augment the batteries in the drivetrain, drastically improving the range and performance of the vehicles. Based on their wide temperature capabilities, FastCAP’s ultracapacitors could be placed under the hood, or in various places in the vehicle’s frame, where they were never located before and could last longer than traditional ultracapacitors.

The devices could also be an enabling component in fuel-cell vehicles, which convert chemical energy from hydrogen gas into electricity that is then stored in a battery. These zero-emissions vehicles have difficulty handling surges of power — and that’s where FastCAP’s ultracapacitors can come in, Cooley says.

“The ultracapacitors can sort of take ownership of the power and variations of power demanded by the load that the fuel cell is not good at handling,” Cooley says. “People can get the range they want for a fuel-cell vehicle that they’re anxious about with battery-powered electric vehicles. So there are a lot of good things we are enabling by providing the right ultracapacitor technology to the right application.”

Read this article on MIT News

Research Themes: 

News Image: 

Research Area: 

Summer in San Francisco

September 7, 2016

Course 6 sophomore Anelise N. writes for the MIT admissions blog about her summer interning at PlayStation.

http://mitadmissions.org/blogs/entry/summer-in-san-francisco1

In the Media: Using machine learning to create videos from photos

September 12, 2016

MIT researchers create neural nets to 'predict' what happens next in stills, reports The Verge.

http://www.theverge.com/2016/9/12/12886698/machine-learning-video-image-prediction-mit

In the Media: MIT's Bhatia refocuses spotlight on gender diversity in biotech

September 12, 2016

Professor Sangeeta Bhatia talks to Xconomy about diversity in the life-science and high-tech industries.

http://www.xconomy.com/boston/2016/09/12/mits-bhatia-refocuses-spotlight-on-gender-diversity-in-biotech/#


Tiny gold grids yielding secrets


Denis Paiste | MIT News

Summer Scholar Justin Cheng explores process in Berggren group for making ordered metal nanostructures that display interesting new properties.


Summer Scholar Justin Cheng holds an experimental sample of nanostructured gold on silicon that has potential for use in sensors and display technologies based on its selective light absorption properties.  Photo: Maria E. Aglietti/Materials Processing Center


Ordered patterns of gold nanoparticles on a silicon base can be stimulated to produce collective electron waves known as plasmons that absorb only certain narrow bands of light, making them promising for a wide range of arrays and display technologies in medicine, industry, and science.

Materials Processing Center (MPC)-Center for Materials Science and Engineering (CMSE) Summer Scholar Justin Cheng worked this summer in MIT professor of electrical engineering Karl K. Berggren’s Quantum Nanostructures and Nanofabrication Group to develop specialized techniques for forming these patterns in gold on silicon. “Ideally, we’d want to be able to get arrays of gold nanoparticles to be completely ordered,” says Cheng, a rising senior at Rutgers University.

“My work deals with the fundamentals of how to write a pattern using electron-beam lithography, how to deposit the gold, and how to heat up the substrate so we can get completely regular arrays of particles,” Cheng explains.

In MIT’s NanoStructures Laboratory, Cheng wrote code to produce a pattern that will guide the dewetting of a thin gold film into nanoparticles, examined partially ordered grids with an electron microscope, and worked in a clean room to develop a polymer resist, spin coat the resist onto samples, and plasma clean the samples. He is part of a team that includes graduate student Sarah Goodman and postdoctoral associate Mostafa Bedewy. He was also assisted by the NanoStructures Lab manager James Daley.

“Plasmons are collective oscillations of the free-electron density at the surface of a material, and they give metal nanostructures amazing properties that are very useful in applications like sensing, optics and various devices,” Goodman explained in a presentation to Summer Scholars in June. “Plasmonic arrays are very good for visible displays, for example, because their color can be tuned based on size and geometry.”

This multi-step fabrication process begins with spin coating hydrogen silsesquioxane (HSQ), which is a special electron-beam resist, or mask, onto a silicon substrate. Cheng worked on software used to write a pattern onto the resist through electron-beam lithography. Unlike some resists, HSQ becomes more chemically resistant as you expose it to electron beams, he says. The entire substrate is about 1 centimeter by 1 centimeter, he notes, and the write area is about 100 microns (or 0.01 centimeter) wide.

After the electron-beam lithography step, the resist is put through an aqueous (water-based) developer solution of sodium hydroxide and sodium chloride, which leaves behind an ordered array of posts on top of the silicon layer. “When we put the sample in the developer solution, all of the less chemically resistant areas of the HSQ mask come off, and only the posts remain,” Cheng says. Then, Daley deposits a gold layer on top of the posts with physical vapor deposition. Next, the sample is heat treated until the gold layer decomposes into droplets that self-assemble into nanoparticles guided by the posts.

Solid-state dewetting

A key underlying materials science phenomenon at work in this self-assembly, Cheng says, is known as solid-state dewetting. “Self-assembly is a process where you apply certain conditions to a material that allow it to undergo a transformation over a large area. So it’s a very efficient patterning technique,” Goodman explains.

Because of repulsive interaction between the silicon and gold layers, the gold tends to form droplets, which can be coaxed into patterns around the posts. The Berggren group is working collaboratively with Carl V. Thompson, the Stavros Salapatas Professor of Materials Science and Engineering and the director of the Materials Processing Center, who is an expert in solid-state dewetting. Using a scanning electron microscope, Cheng examines these patterns to determine their quality and consistency. “The gold naturally forms droplets because there is a driving force for it to decrease the surface area it shares with the silicon. It doesn’t look completely ordered but you can see beginnings of some order in the dewetting,” he says, while showing an SEM image on a computer. “[In] other pictures you can clearly see the beginnings of patterning.”

“When we take the posts and we make them closer together, you can see that the gold likes to dewet into somewhat regular patterns. These aren’t completely regular in all cases, but for certain post sizes and spacings, we start to see regular arrays. Our goal is to successfully fabricate a plasmonic array of ordered, monodisperse [equally sized] gold nanoparticles,” Cheng says.

Goodman notes that Thompson's group has demonstrated exquisite control over dewetting in single crystalline films at the micron scale, but the Berggren group hopes to extend this control down to the nanoscale. “This will be a really key result if we’re able to bring this dewetting that’s beautifully controlled on the micro scale and enable that on the nanoscale,” Goodman says.

Cheng says that during his summer internship in Berggren’s lab, he learned to operate the scanning electron microscope and learned about nanofabrication processes. “I have learned a lot. Aside from the lab work that I’m doing, I’ve been scripting for the [LayoutEditor] CAD program that I use, and I’ve been using Matlab, too,” he says. “I actually learned a lot about image analysis because there are a lot of steps that go into image analysis. Since we have so much data and so many images to analyze, I’m doing it quantitatively and automatically to make sure I have repeatability.”
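Cheng's actual analysis scripts aren't published here, but the kind of quantitative, repeatable image analysis he describes can be sketched in a few lines: binarize an SEM micrograph, label connected bright regions as particles, and compare their areas to measure how monodisperse the array is. Everything below, including the synthetic "image," is illustrative only.

```python
# Illustrative sketch (not the group's actual code): label connected bright
# regions in a binarized SEM image and compare particle areas.
import numpy as np
from collections import deque

def label_particles(binary):
    """Label 4-connected components in a boolean image; return their areas."""
    labels = np.zeros(binary.shape, dtype=int)
    areas = []
    next_label = 0
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            if binary[i, j] and labels[i, j] == 0:
                next_label += 1
                labels[i, j] = next_label
                area = 0
                queue = deque([(i, j)])
                while queue:  # breadth-first flood fill of one particle
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                areas.append(area)
    return areas

# Synthetic "SEM image": three equally sized 4x4 particles on a dark field.
img = np.zeros((20, 20), dtype=bool)
for top, left in [(2, 2), (2, 12), (12, 7)]:
    img[top:top + 4, left:left + 4] = True

areas = label_particles(img)
cv = np.std(areas) / np.mean(areas)  # coefficient of variation; 0 = monodisperse
print(len(areas), areas, round(cv, 3))  # → 3 [16, 16, 16] 0.0
```

A coefficient of variation near zero would indicate the monodisperse arrays the team is after; automating this over many micrographs is what makes the comparison repeatable.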

MPC and CMSE sponsor the nine-week National Science Foundation Research Experience for Undergraduates (NSF REU) internships, with support from NSF's Materials Research Science and Engineering Centers program. The program ran from June 7 through Aug. 6.

Read this article on MIT News.

September 7, 2016


Faster parallel computing


Larry Hardesty | MIT News

New programming language delivers fourfold speedups on problems common in the age of big data.


Researchers have designed a new programming language that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets. In tests on several common algorithms, programs written in the new language were four times as fast as those written in existing languages. Image: Christine Daniloff/MIT


In today’s computer chips, memory management is based on what computer scientists call the principle of locality: If a program needs a chunk of data stored at some memory location, it probably needs the neighboring chunks as well.

But that assumption breaks down in the age of big data, now that computer programs more frequently act on just a few data items scattered arbitrarily across huge data sets. Since fetching data from main memory is the major performance bottleneck in today's chips, having to fetch it more frequently can dramatically slow program execution.

This week, at the International Conference on Parallel Architectures and Compilation Techniques, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new programming language, called Milk, that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets.

In tests on several common algorithms, programs written in the new language were four times as fast as those written in existing languages. But the researchers believe that further work will yield even larger gains.

The reason that today’s big data sets pose problems for existing memory management techniques, explains Saman Amarasinghe, a professor of electrical engineering and computer science, is not so much that they are large as that they are what computer scientists call “sparse.” That is, with big data, the scale of the solution does not necessarily increase proportionally with the scale of the problem.

“In social settings, we used to look at smaller problems,” Amarasinghe says. “If you look at the people in this [CSAIL] building, we’re all connected. But if you look at the planet scale, I don’t scale my number of friends. The planet has billions of people, but I still have only hundreds of friends. Suddenly you have a very sparse problem.”

Similarly, Amarasinghe says, an online bookseller with, say, 1,000 customers might like to provide its visitors with a list of its 20 most popular books. It doesn’t follow, however, that an online bookseller with a million customers would want to provide its visitors with a list of its 20,000 most popular books.

Thinking locally

Today’s computer chips are not optimized for sparse data — in fact, the reverse is true. Because fetching data from the chip’s main memory bank is slow, every core, or processor, in a modern chip has its own “cache,” a relatively small, local, high-speed memory bank. Rather than fetching a single data item at a time from main memory, a core will fetch an entire block of data. And that block is selected according to the principle of locality.

It’s easy to see how the principle of locality works with, say, image processing. If the purpose of a program is to apply a visual filter to an image, and it works on one block of the image at a time, then when a core requests a block, it should receive all the adjacent blocks its cache can hold, so that it can grind away on block after block without fetching any more data.

But that approach doesn’t work if the algorithm is interested in only 20 books out of the 2 million in an online retailer’s database. If it requests the data associated with one book, it’s likely that the data associated with the 100 adjacent books will be irrelevant.
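The difference between the two access patterns can be made concrete with a toy cache model. The 64-byte line holding eight 8-byte items is a typical configuration but an assumption here; this is a simulation of the principle, not a measurement of real hardware.

```python
# Toy model: count how many "cache lines" must be fetched from main memory
# for sequential vs. scattered access patterns.
LINE_ITEMS = 8  # 8-byte items per 64-byte cache line (typical, but assumed)

def lines_fetched(indices):
    """Count distinct cache lines touched, i.e., main-memory fetches needed."""
    fetched = set()
    for i in indices:
        fetched.add(i // LINE_ITEMS)
    return len(fetched)

N = 1_000_000
sequential = range(1000)            # neighboring items: each fetch serves 8
scattered = range(0, N, N // 1000)  # 1,000 items spread across a huge array
print(lines_fetched(sequential))  # → 125
print(lines_fetched(scattered))   # → 1000
```

Sequential access amortizes each fetch over eight useful items; the scattered pattern pays a full fetch for every single item, which is exactly the sparse-data penalty the article describes.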

Going to main memory for a single data item at a time is woefully inefficient. “It’s as if, every time you want a spoonful of cereal, you open the fridge, open the milk carton, pour a spoonful of milk, close the carton, and put it back in the fridge,” says Vladimir Kiriansky, a PhD student in electrical engineering and computer science and first author on the new paper. He’s joined by Amarasinghe and Yunming Zhang, also a PhD student in electrical engineering and computer science.

Batch processing

Milk simply adds a few commands to OpenMP, an extension of languages such as C and Fortran that makes it easier to write code for multicore processors. With Milk, a programmer inserts a couple of additional lines of code around any instruction that iterates through a large data collection looking for a comparatively small number of items. Milk’s compiler — the program that converts high-level code into low-level instructions — then figures out how to manage memory accordingly.

With a Milk program, when a core discovers that it needs a piece of data, it doesn’t request it — and a cacheful of adjacent data — from main memory. Instead, it adds the data item’s address to a list of locally stored addresses. When the list is long enough, all the chip’s cores pool their lists, group together those addresses that are near each other, and redistribute them to the cores. That way, each core requests only data items that it knows it needs and that can be retrieved efficiently.
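In sketch form, the batching idea looks like the following. This is a conceptual model only: Milk itself extends OpenMP in C-family languages, and its real compiler machinery handles multiple cores and cache levels, none of which appears in this single-threaded toy.

```python
# Conceptual sketch of Milk's strategy: instead of dereferencing scattered
# indices immediately, buffer the addresses, group nearby ones, and then
# visit memory in order.
import random
random.seed(0)

LINE = 8  # items per toy "cache line"

def line_switches(order):
    """Main-memory fetches needed if the cache held just one line (crude proxy)."""
    fetches, current = 0, None
    for i in order:
        if i // LINE != current:
            current = i // LINE
            fetches += 1
    return fetches

# 5,000 scattered accesses into a 10,000-item table: many indices share a
# line, but in arrival order those sharing a line are rarely adjacent.
accesses = [random.randrange(10_000) for _ in range(5_000)]

naive = line_switches(accesses)            # dereference immediately
batched = line_switches(sorted(accesses))  # buffer addresses, group, then go
print(naive > batched)  # → True: grouping nearby addresses cuts fetches sharply
```

Sorting the buffered addresses means every access to a given line happens back to back, so each line is fetched once and used fully — the same payoff the pooled-and-redistributed lists deliver across cores.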

That’s the high-level description, but the details get more complicated. In fact, most modern computer chips have several different levels of caches, each one larger but also slightly less efficient than the last. The Milk compiler has to keep track of not only a list of memory addresses but also the data stored at those addresses, and it regularly shuffles both around between cache levels. It also has to decide which addresses should be retained because they might be accessed again, and which to discard. Improving the algorithm that choreographs this intricate data ballet is where the researchers see hope for further performance gains.

“Many important applications today are data-intensive, but unfortunately, the growing gap in performance between memory and CPU means they do not fully utilize current hardware,” says Matei Zaharia, an assistant professor of computer science at Stanford University. “Milk helps to address this gap by optimizing memory access in common programming constructs. The work combines detailed knowledge about the design of memory controllers with knowledge about compilers to implement good optimizations for current hardware.”

Read this article on MIT News.

September 13, 2016


Calculating the financial risks of renewable energy


Rob Matheson | MIT News

Financial-modeling software for sustainable-infrastructure projects could boost investment in sector.


MIT spinout EverVest has built a data-analytics platform that gives investors rapid, accurate cash-flow models and financial risk analyses for renewable-energy projects. Illustration: Christine Daniloff/MIT


For investors, deciding whether to invest money into renewable-energy projects can be difficult. The issue is volatility: Wind-powered energy production, for instance, changes annually — and even weekly or daily — which creates uncertainty and investment risks. With limited options to accurately quantify that volatility, today’s investors tend to act conservatively.

But MIT spinout EverVest has built a data-analytics platform that aims to give investors rapid, accurate cash-flow models and financial risk analyses for renewable-energy projects. Recently acquired by asset-management firm Ultra Capital, EverVest’s platform could help boost investment in sustainable-infrastructure projects, including wind and solar power.

Ultra Capital acquired the EverVest platform and team earlier this year, with aims of leveraging the software for its own risk analytics. The acquisition will enable the EverVest platform to expand to a broader array of sustainable infrastructure sectors, including water, waste, and agriculture projects.

“If an investor has confidence in the performance and risk they are taking, they may be willing to invest more capital into the sustainable infrastructure asset class. More capital means more projects get built,” says EverVest co-founder and former CEO Mike Reynolds MBA ’14, now director of execution at Ultra Capital. “We wanted to give people more firepower when it comes to evaluating risk.”

The platform’s core technology was initially based on research at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), by EverVest co-founder and former chief technology officer Teasha Feldman ’14, now director of engineering at Ultra Capital.

The strength of data

EverVest’s platform analyzes data on a variety of factors that may impact the performance of renewable-energy projects. Layout and location of a site, certain contracts, type of equipment, grid connection, weather, and operation and maintenance costs can all help predict the financial rate of return.

Today, financial analysts use Excel spreadsheets to find a flat, annual production average for the next 20 to 30 years. “It leaves a lot to the imagination,” Reynolds says. “Renewable energy is volatile and uncertain.” By the time of its acquisition, EverVest had clients in the United States and Europe, including banks, investors, and developers for wind and solar power projects. Users entered information about their prospective project into the software, which provided a detailed cash-flow model along with a statistical analysis of the project’s financial risks.

“It’s the strength of the data that we wanted to give to investors, banks, and developers, to get a better understanding of their assets,” Reynolds says.

For example, consider a wind farm. With location data, the platform can use public data sets to calculate the last few decades of wind speed and determine the project’s overall performance. Location can also help determine the project’s profitability in the market. California could be a better market than, say, Texas or Maine.

Specific types of equipment and manufacturers matter, too. If an investor considers a certain type of wind turbine, “we can pull data to determine that a turbine in that location is going to need $2 million of replacement parts in year five,” Reynolds says. “In year seven, you might have a 50 percent probability that something is going to fail, potentially resulting in a shut-down of the site.”

The end result is a more detailed projection of the rate of return, Reynolds says. While an electronic spreadsheet might give an average rate of return of, say, 12 percent, EverVest’s platform would show a full analysis of the quarterly performance, including the statistical uncertainty of the rate of return. While 12 percent may be the average, the returns may vary between 4 and 18 percent. “By understanding that range of risk, you can understand the true value,” Reynolds says.
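The kind of analysis Reynolds describes can be approximated with a simple Monte Carlo sketch. Every figure below — project cost, power price, production mean and volatility — is a made-up placeholder, and the undiscounted return metric is a simplification, not EverVest's model.

```python
# Hedged illustration: Monte Carlo over annual wind production turns a single
# average-return number into a distribution of possible outcomes.
import random
random.seed(42)

INVESTMENT = 10_000_000            # hypothetical project cost, $
PRICE_PER_MWH = 50                 # hypothetical power price, $/MWh
MEAN_MWH, SD_MWH = 34_000, 10_000  # assumed annual production and volatility
YEARS, TRIALS = 20, 10_000

returns = []
for _ in range(TRIALS):
    # Draw 20 independent production years; production can't go negative.
    revenue = sum(PRICE_PER_MWH * max(0.0, random.gauss(MEAN_MWH, SD_MWH))
                  for _ in range(YEARS))
    # Simple (undiscounted) average annual return on the initial investment.
    returns.append((revenue - INVESTMENT) / INVESTMENT / YEARS)

returns.sort()
mean = sum(returns) / TRIALS
p5, p95 = returns[int(0.05 * TRIALS)], returns[int(0.95 * TRIALS)]
print(f"mean {mean:.1%}, 5th-95th percentile {p5:.1%} to {p95:.1%}")
```

The point of the exercise is the spread, not the average: two projects with the same mean return can carry very different percentile bands, which is the "range of risk" an investor actually prices.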

Now at Ultra Capital, Feldman is further developing the platform. Reynolds is using it to invest in a wide array of sustainable-infrastructure projects, including solar energy projects, waste-to-energy assets, water treatment facilities, and recycling plants. “We’ve brought our technology in-house and have expanded it a great deal,” Reynolds says. “Now I get to use the software we built to make better investments.”

EverVest: The happy accident

EverVest (formerly Cardinal Wind) began as a CSAIL research project that was refined and developed through MIT’s entrepreneurial ecosystem before going to market.

As an MIT junior in 2012, Feldman wanted to branch out from her theoretical physics coursework to focus on renewable energy. She discovered a CSAIL project, led by research scientist Una-May O’Reilly, that involved collecting and analyzing data on wind farm energy. “I showed up in [O’Reilly’s] office and begged her to let me work on the project,” Feldman says.

In a year, Feldman had designed a machine-learning algorithm that collected 30 years of wind data from airports and other sites, to predict future wind power there for the next 30 years. During that time, she sought enrollment in Course 15.366 (Energy Ventures), where students from across departments plan businesses around clean technologies. Undergraduates are seldom accepted. But as luck would have it, the class wanted O’Reilly to speak about her research — and O’Reilly told them to ask Feldman.

“I said, ‘Yes, I’m working on that research. You should just let me into the class,’” Feldman says, laughing.

Enrolling in fall 2013, Feldman pitched her algorithm to the class, and it caught the eye of one student. Reynolds had come to MIT Sloan School of Management, he says, “with scars from working on Wall Street in investment banking … and I wanted to open my horizons and work with engineers who were building amazing things at MIT.”

During his time as an investment banker, Reynolds dealt with funding large projects in infrastructure, energy, and transportation. So Feldman’s prediction algorithm resonated immediately. “I saw her algorithm and thought of how great it would be for investors to have a more accurate way to measure the rate of return for a potential wind project investment,” Reynolds says.

Joining forces, Feldman and Reynolds launched Cardinal Wind in 2013. The startup was somewhat of a “happy accident,” Feldman says. “The company took an insane amount of hard work to start and build. But by showing up in a lab and convincing them to give me a job, and then bringing the research to class, we were able to determine that there was a great opportunity and need for better financial risk analysis tools in the marketplace."

The following summer, Cardinal Wind entered the Global Founders’ Skills Accelerator (GFSA), run by the Martin Trust Center for MIT Entrepreneurship, “which was a huge boost,” Reynolds says. Mentors and entrepreneurs-in-residence offered guidance and feedback on pitches, and generous GFSA funding paid the startup’s bills. “And we worked alongside other startups going through the same challenges,” Reynolds says. “All those resources were incredibly helpful.”

By October 2015, Cardinal Wind had expanded Feldman’s algorithm into a full cash-flow modeling platform that also included analyses for solar power projects. That month, Cardinal Wind rebranded as EverVest, and this July it was acquired by Ultra Capital.

A key to EverVest’s success, Feldman says, was constantly developing the technology to fit customer needs — such as including solar power. “When we found the actual need was more than just predicting wind patterns, we departed from using that particular algorithm, and we’ve built a lot of our core platform since then,” she says.

Read this article on MIT News.

September 15, 2016


Sixteen MIT grad students named Siebel Scholars for 2017


MIT News Office

MIT graduate students from bioengineering, business, computer science, and energy fields are honored.


Each year through the Siebel Scholars program, a formidable group brings together their diverse perspectives from business, science, and engineering to influence the technologies, policies, and economic and social decisions that shape the future.


Sixteen MIT graduate students are among the 2017 cohort of Siebel Foundation Scholars hailing from the world’s top graduate programs in business, bioengineering, computer science, and energy science.

Honored for their academic achievements, leadership, and commitments to addressing crucial global challenges, the select MIT students are part of a class of 92 individuals receiving a $35,000 award for their final year of study.

In addition, they will join a community of more than 1,000 past Siebel Scholars, including 216 MIT affiliates.

Among the honorees are five EECS students:

Brian Axelrod

Karan Kashyap

Chengtao Li

Ruizhi (Ray) Liao

Srinivasan Raghuraman

Read about the other honorees on MIT News.

September 15, 2016


Project Sandcastle


Meg Murphy | MIT News

Five MIT EECS students take their startups to San Francisco for a summer of innovation.


Left to right: MIT seniors Mohamed “Hassan” Kane, Guillermo Webster, and Kevin Kwok work on their startup projects at the “Sandcastle” in San Francisco. The orange cast on Kwok’s leg is the result of a trampoline accident, and has inspired plenty of teasing from his MIT housemates. Photo: Andrei Ivanov '16


On a foggy night in San Francisco, a Bat-Signal appears in the sky. It flickers above a house on the highest hilltop in the city, where five MIT students live in what other people call a “hacker house.” It’s a label the students avoid.

Inside, the lights are on at all hours. It has been that way since the group arrived in June, set down their things, claimed spots on the furniture, and opened their laptops. They've barely looked up since, except to blow off steam, when they send up Bat-Signals, invent gadgets, or take their waterproof air mattresses for a row.

Rising MIT seniors Guillermo Webster, Anish Athalye, Kevin Kwok, Mohamed “Hassan” Kane, and rising junior Laser Nite swap news about their various startups, which they refer to collectively as “Project Sandcastle” — their preferred moniker for the residence. They are spread out on an expansive sectional couch and a wicker chair, tapping away on their laptops. All five are electrical engineering and computer science majors, and all five of them are in San Francisco to pursue startup ideas.

For each, it was a choice that came at a cost. Many of their classmates took internships at industry giants, such as Google, Facebook, and Apple. Within the group, several received lucrative offers, and the majority worked at enviable resume-building gigs last summer. The five estimate that their collective summer income would have been at least $100,000, but none of them cracked.

“We are the diehards,” says Webster. “Nobody in this room was willing to accept an internship. We were going to work on our projects no matter what.”

Working together, the students approached the Sandbox Innovation Fund and the Department of Electrical Engineering and Computer Science (EECS) for help in covering their summer living expenses. After spirited lobbying, particularly by Kane, MIT agreed to pitch in. “This is an exciting experiment in how we can support our students to pursue their passions,” says Ian A. Waitz, the dean of engineering and faculty director of the Sandbox Innovation Fund Program.

For their part, the students are thrilled they didn’t end up in their parents’ basements, or some other isolated location, trying to stay in touch via videoconference. “Most of us didn’t think MIT would ever give us money to come out here and be creative,” says Webster. He snaps his gum, flips shut his computer, and smiles. “And here we are.”

“Our first priority was to be together this summer,” Athalye chimes in. “Our next was to be in San Francisco.”

Project Sandcastle

Webster, Athalye, Kwok, Kane, and Nite each has his own workspace — a particular stretch of couch or chair where they always seem to end up. It’s dinner time and they are firing news and updates back and forth as they type. Kane came up with the idea for the weekly ritual, but in its original version, there was a two-minute timer to keep things moving. Nobody pays attention to that anymore. Often they’ll spend two hours on a single project — heatedly asking each other things like: “How are you thinking about users?” “What about security?” They sum up the gist of the conversation as: “Change this. Start over.”

The sessions are loud, fun, productive, and lead them to entertain different approaches, new thoughts, and as they say “a lot of suggestions of various utility.” They also typically involve takeout food. Ordering out has become an art form with them. They deftly cycle through deals from Sprig, UberEATS, Freshly, and so on. As Webster says, “There’s a huge amount of venture capital going into food startups in San Francisco. It’s like a free food program.”

Their projects are ambitious. Kwok and Webster have created Carbide, a new programming environment that interleaves code, charts, math, and prose to help people compose, teach, and understand coding. It’s based on the idea of “notebook programming,” where code and prose are interleaved in blocks, known as cells. In Carbide, people can inspect, visualize, and manipulate any part of the expression of the code. No need to break up your code to see what goes on or to explain it. Carbide, say Kwok and Webster, represents “a step toward the future of programming itself.”

Meanwhile, Athalye and Kane intend to transform online learning with LearnX, which will use artificial intelligence and machine-learning algorithms to organize educational material. Athalye came up with the idea during an MIT class: He was listening to a professor talk about online education and realized “people spend so much time learning how to learn stuff.” Imagine you are making a video game and want to learn how to do ray tracing. You register for an online university course in computer graphics. You discover a prerequisite is linear algebra, and so take an entire course in that too. You end up spending a lot of time learning material that is not necessary for ray tracing. But, says Athalye, “What if you could type in what you want to learn and get an exact set of instructions?”
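The idea Athalye describes maps naturally onto a prerequisite graph: collect the goal topic's transitive prerequisites, then order just that subgraph so prerequisites come first. The sketch below is a guess at the underlying data structure, not LearnX's actual implementation, and the graph contents are hypothetical.

```python
# Hypothetical sketch of a "minimal learning path": walk a prerequisite graph
# backward from a goal and emit only the topics actually needed, in order.
from graphlib import TopologicalSorter  # Python 3.9+

# topic -> prerequisites (illustrative, invented graph)
prereqs = {
    "ray tracing": ["vector math", "intersection tests"],
    "intersection tests": ["vector math"],
    "vector math": [],
    "rasterization": ["vector math"],  # in the full course, but not needed here
}

def learning_path(goal):
    needed, stack = set(), [goal]
    while stack:  # collect the goal's transitive prerequisites
        topic = stack.pop()
        if topic not in needed:
            needed.add(topic)
            stack.extend(prereqs.get(topic, []))
    # Topologically sort only the needed subgraph: prerequisites come first.
    sub = {t: [p for p in prereqs.get(t, []) if p in needed] for t in needed}
    return list(TopologicalSorter(sub).static_order())

print(learning_path("ray tracing"))
# → ['vector math', 'intersection tests', 'ray tracing']
```

Note that "rasterization" never appears in the output: the learner skips everything in the full course that the stated goal doesn't actually depend on, which is exactly the time-saving Athalye describes.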

Like the others, Nite’s creation, Websee, aims to influence society at large. He is developing an information crowdsourcing platform enabling people to combine their browsing activity anonymously to discover what’s happening across the web in real time. You install a browser extension to passively share what you see, which is combined anonymously with other users to create a more open, democratized, and unbiased aggregator of web content. Nite says it will lead to “more transparency, diversity, and public influence on our information sources online.”

World-changing ideas

San Francisco is saturated with startups focused on the trivial, or as some say, “getting your sushi faster.” Business magnate Bill Gates has observed that innovation is thriving in California but “half of the companies are silly.” He has also said that from within that noise, a handful of brilliant world-changing ideas will surface. The Project Sandcastle students are aiming for the latter, and relying on a summer of mutual support and collaboration to get it done.

“These students came to Sandbox with ideas they wanted to pursue — and a plan for how to pursue them,” says Sandbox Executive Director Jinane Abounadi SM ’90, PhD ’98. “They wanted help running an experiment on innovation itself.”

Along with providing financial resources, Sandbox keeps students connected to activities in Boston through mentorship from MIT Venture Mentoring Service, alumni, and experienced professionals. Abounadi says Sandbox is working with the students and others to evaluate the educational benefits of investments like this one. And that “regardless of what happens with their own ideas, we know they’re going to learn a lot. That’s how we think about success.”

Crashing at the node

The Sandcastle is in Miraloma, a neighborhood of curving hilltop streets and Art Deco homes. It is a fair bet that most other residents do not, like Kwok and Webster, sleep on air mattresses. Spending money on a bed never even occurred to them. “What? Why do that?” asks Kwok. If the place had not been partially furnished, they’d probably all be working on the floor.

Their neighborhood is quieter than enclaves like the Mission District, a startup haven four miles away where the young, broke, and ambitious tend to gravitate. In the minds of the Sandcastle group, they are “really far” from the action. It works for them, though. Friends in the tech community, many of them MIT students, pass through a lot. “It’s like a node,” says Laser Nite. Growing up, he lived in “a hippie town” in Iowa and attended the Maharishi School of the Age of Enlightenment, a.k.a. high school. He was artistic from an early age and realized that creativity requires interactivity.

“Just look over there,” says Kwok, taking a bite of an egg salad sandwich that Nite has prepared. “That’s Logan, our favorite piece of furniture.” Rising sophomore Logan Engstrom, also an EECS major, is working on his laptop in an adjoining room. He waves. Engstrom is an intern at a big tech company in Cupertino, and crashed at the Sandcastle the night before.

Everything is connected

San Francisco is sometimes called “the Hollywood of Technology.” The concentration of well-known tech startups within the city proper, a grid area of 7 miles by 7 miles, includes Twitter, Uber, Airbnb, Pinterest, Dropbox, Yelp, and others. Also here are big tech internships, venture capitalists, networking opportunities, and a key element for startups: a market of early adopters.

“Everything is here and everything is connected,” says Anish Athalye. He and Kane, a native of Côte d'Ivoire, are part of Greylock X, an initiative in which the well-known Bay Area venture capital firm adopts a dozen talented young people and connects them directly to this network with invitations to social events where they can meet the heavy hitters in Silicon Valley.

The pair are measured and articulate, and they often act as an organizing (and steadying) influence at Sandcastle. They began LearnX after talking about online learning at a tea party — as in, a real tea party. About 15 to 20 students gathered in an MIT dorm room, drinking tea. “Instead of drinking or dancing,” says Athalye, “we talk about ideas.”

Creative paths to MIT

There is a massive cardboard check in a corner of the Sandcastle living room. It is from the Greylock Hackfest, which was hosted at Facebook this summer. Athalye, Engstrom, Kwok, and Webster won the $10,000 grand prize for a system that turns any laptop into a touchscreen with about $1 of hardware. (The earnings, they say, “mean a lot of Instacart orders.”)

The project was an offshoot of an idea Kevin Kwok came up with in junior high school. He grew up in Virginia and sold his first app when he was 15. As a teenager, he liked to work on and blog about his “little projects.” When he was still in high school an MIT student came across his work, encouraged him to apply, and offered to write a recommendation letter. And that, says Kwok, is how a high school kid with “an okay GPA and not great SAT scores,” who was rejected by his state school, landed at MIT. “I’m glad to have finally used that junior high idea for something,” he says.

Guillermo Webster grew up in Los Angeles, the child of a painter and a musician, and attended an arts school with “no walls, homework, grades, or calculus.” He concentrated in cello and modern dance, but self-studied math and excelled in the sciences, leading to MIT.

Webster and Kwok — who has one leg in plaster after a trampoline accident — have been working on projects together since they met in an MIT Media Lab first-year program. They worked through the night on a final presentation, went back to their dorm rooms, set their alarms — and slept through them. “We’ve gone on to sleep through bigger and better things,” Webster jokes.

Two years ago, the two created “Project Naptha,” browser extension software for Google Chrome that allows users to highlight, copy, edit, and translate text within images. It currently has more than 200,000 users. More recently, they created matching rings, which from a distance resemble the “brass rat,” or MIT class ring. In fact, they are 3-D-printed joke rings for the Class of 2017 that feature, as Kwok says, “a dog elegantly facing left, adjacent the moon.”

The real world of innovation

Before coming to the Sandcastle, Laser Nite slept in a closet at Thiel Manor. The well-used Atherton mansion — so nicknamed because many living there have been or are fellows in a two-year accelerator program funded by early Facebook investor Peter Thiel — is packed with “an extremely eclectic mix of people,” and is a convening place for people “who want to create things that are awesome.” Nite lists off a sampling of its projects: photonic computers, mind-mapping interfaces, augmented reality neural investigation software, virtual reality education software, self-driving cars, programmable spreadsheets, and machine learning-based physical component optimization.

There are many “hacker houses” that are imposters — predatory “startups” that rent bunk beds to unwitting souls for thousands a month. In the real ones like the Sandcastle, though, Nite says “there's an underlying energy and optimism about creating the future, and a warmth and open-mindedness toward people and new ideas.”

“We hold the somewhat quirky shared belief that we can actually do big things that change the world for the better,” says Nite. “And here we are.”

Read this article on MIT News.

September 19, 2016



