
“Giant atoms” enable quantum processing and communication in one

July 30, 2020

Two superconducting qubits acting as giant artificial atoms. These “atoms” are protected from decoherence yet still interact with each other through the waveguide.

Courtesy of the researchers

Michaela Jarvis | MIT News correspondent

MIT researchers have introduced a quantum computing architecture that can perform low-error quantum computations while also rapidly sharing quantum information between processors. The work represents a key advance toward a complete quantum computing platform.

Before this work, small-scale quantum processors had successfully performed tasks at a rate exponentially faster than that of classical computers. However, it has been difficult to controllably communicate quantum information between distant parts of a processor. In classical computers, wired interconnects are used to route information back and forth throughout a processor during the course of a computation. In a quantum computer, however, the information itself is quantum mechanical and fragile, requiring fundamentally new strategies to simultaneously process and communicate quantum information on a chip.

“One of the main challenges in scaling quantum computers is to enable quantum bits to interact with each other when they are not co-located,” says William Oliver, an associate professor of electrical engineering and computer science, MIT Lincoln Laboratory fellow, and associate director of the Research Laboratory of Electronics. “For example, nearest-neighbor qubits can easily interact, but how do I make ‘quantum interconnects’ that connect qubits at distant locations?”

The answer lies in going beyond conventional light-matter interactions.

While natural atoms are small and point-like with respect to the wavelength of light they interact with, in a paper published today in the journal Nature, the researchers show that this need not be the case for superconducting “artificial atoms.” Instead, they have constructed “giant atoms” from superconducting quantum bits, or qubits, connected in a tunable configuration to a microwave transmission line, or waveguide.

This allows the researchers to adjust the strength of the qubit-waveguide interactions so the fragile qubits can be protected from decoherence, or a kind of natural decay that would otherwise be hastened by the waveguide, while they perform high-fidelity operations. Once those computations are carried out, the strength of the qubit-waveguide couplings is readjusted, and the qubits are able to release quantum data into the waveguide in the form of photons, or light particles.

“Coupling a qubit to a waveguide is usually quite bad for qubit operations, since doing so can significantly reduce the lifetime of the qubit,” says Bharath Kannan, MIT graduate fellow and first author of the paper. “However, the waveguide is necessary in order to release and route quantum information throughout the processor. Here, we’ve shown that it’s possible to preserve the coherence of the qubit even though it’s strongly coupled to a waveguide. We then have the ability to determine when we want to release the information stored in the qubit. We have shown how giant atoms can be used to turn the interaction with the waveguide on and off.”


An optical micrograph image of a chip with two superconducting qubits (yellow) acting as giant artificial atoms. Each giant atom connects to the waveguide (blue) at three distinct and well-separated locations.

Courtesy of the researchers

The system realized by the researchers represents a new regime of light-matter interactions, the researchers say. Unlike models that treat atoms as point-like objects smaller than the wavelength of the light they interact with, the superconducting qubits, or artificial atoms, are essentially large electrical circuits. When coupled with the waveguide, they create a structure as large as the wavelength of the microwave light with which they interact.

The giant atom emits its information as microwave photons at multiple locations along the waveguide, such that the photons interfere with each other. This process can be tuned to complete destructive interference, meaning the information in the qubit is protected. Furthermore, even when no photons are actually released from the giant atom, multiple qubits along the waveguide are still able to interact with each other to perform operations. Throughout, the qubits remain strongly coupled to the waveguide, but because of this type of quantum interference, they can remain unaffected by it and be protected from decoherence, while single- and two-qubit operations are performed with high fidelity.
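This interference mechanism is simple enough to sketch numerically. Below is a minimal Python model, assuming the standard small-coupler picture of waveguide quantum electrodynamics, in which the giant atom’s effective emission rate scales as the squared magnitude of the summed emission amplitudes from its coupling points; the two-point geometry and rates are illustrative, not the parameters of the device in the paper.

```python
import numpy as np

def effective_decay(phases, gamma_single=1.0):
    """Decay rate of a giant atom coupled to a waveguide at several points.

    In the small-coupler model, emission amplitudes from the coupling
    points interfere: Gamma_eff ~ gamma * |sum_n exp(i*phi_n)|^2, where
    phi_n is the propagation phase accumulated to coupling point n.
    """
    amplitude = np.sum(np.exp(1j * np.asarray(phases)))
    return gamma_single * np.abs(amplitude) ** 2

# Two coupling points separated by phase delta = k * d; the phase is
# frequency-dependent, so tuning the qubit frequency sweeps delta.
for delta in [0.0, np.pi / 2, np.pi]:
    rate = effective_decay([0.0, delta])
    print(f"phase separation {delta:.2f} rad -> Gamma_eff = {rate:.2f}")
# delta = pi gives Gamma_eff = 0: destructive interference protects the
# qubit; delta = 0 gives maximal emission, releasing the photon on demand.
```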

“We use the quantum interference effects enabled by the giant atoms to prevent the qubits from emitting their quantum information to the waveguide until we need it,” says Oliver.

“This allows us to experimentally probe a novel regime of physics that is difficult to access with natural atoms,” says Kannan. “The effects of the giant atom are extremely clean and easy to observe and understand.”

The work appears to have much potential for further research, Kannan adds.

“I think one of the surprises is actually the relative ease by which superconducting qubits are able to enter this giant atom regime,” he says. “The tricks we employed are relatively simple and, as such, one can imagine using this for further applications without a great deal of additional overhead.”

Andreas Wallraff, professor of solid-state physics at ETH Zurich, says the research "investigates a piece of quantum physics that is hard or even impossible to fathom for microscopic objects such as electrons or atoms, but that can be studied with macroscopic engineered superconducting quantum circuits. With these circuits, using a clever trick, they are able both to protect their giant atom from decay and simultaneously to allow for coupling two of them coherently. This is very nice work exploring waveguide quantum electrodynamics."

The coherence time of the qubits incorporated into the giant atoms, meaning the time they remained in a quantum state, was approximately 30 microseconds, comparable to the 10 to 100 microseconds typical of qubits not coupled to a waveguide, according to the researchers.

Additionally, the research demonstrates two-qubit entangling operations with 94 percent fidelity. This represents the first time a two-qubit fidelity has been quoted for qubits strongly coupled to a waveguide; the fidelity of such operations using conventional small atoms is often low in such an architecture. With further calibration, operation tune-up procedures, and optimized hardware design, Kannan says, the fidelity can be improved further.

Original article published on the MIT News website on July 29, 2020

 



Algorithm finds hidden connections between paintings at the Met

July 30, 2020


A machine learning system developed at MIT was inspired by an exhibit in Amsterdam's Rijksmuseum that featured the unlikely but similar pairing of Francisco de Zurbarán’s "The Martyrdom of Saint Serapion" (left) and Jan Asselijn’s "The Threatened Swan."

Image courtesy of MIT CSAIL.

Art is often heralded as the greatest journey into the past, solidifying a moment in time and space; the beautiful vehicle that lets us momentarily escape the present. 

With the boundless treasure trove of paintings that exist, the connections between these works of art from different periods of time and space can often go overlooked. It’s impossible for even the most knowledgeable of art critics to take in millions of paintings across thousands of years and be able to find unexpected parallels in themes, motifs, and visual styles. 

To streamline this process, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Microsoft created an algorithm to discover hidden connections between paintings at the Metropolitan Museum of Art (the Met) and Amsterdam’s Rijksmuseum. 

Inspired by a special exhibit, “Rembrandt and Velazquez,” in the Rijksmuseum, the new “MosAIc” system finds paired or “analogous” works from different cultures, artists, and media by using deep networks to understand how “close” two images are. In that exhibit, the researchers were struck by an unlikely yet similar pairing: Francisco de Zurbarán’s “The Martyrdom of Saint Serapion” and Jan Asselijn’s “The Threatened Swan,” two works that portray scenes of profound altruism with an eerie visual resemblance.

“These two artists did not have a correspondence or meet each other during their lives, yet their paintings hinted at a rich, latent structure that underlies both of their works,” says CSAIL PhD student Mark Hamilton, the lead author on a paper about “MosAIc.” 

To find two similar paintings, the team used a new algorithm for image search to unearth the closest match by a particular artist or culture. For example, in response to a query about “which musical instrument is closest to this painting of a blue-and-white dress,” the algorithm retrieves an image of a blue-and-white porcelain violin. These works are not only similar in pattern and form, but also draw their roots from a broader cultural exchange of porcelain between the Dutch and Chinese. 

“Image retrieval systems let users find images that are semantically similar to a query image, serving as the backbone of reverse image search engines and many product recommendation engines,” says Hamilton. “Restricting an image retrieval system to particular subsets of images can yield new insights into relationships in the visual world. We aim to encourage a new level of engagement with creative artifacts.” 

How it works 

For many, art and science are irreconcilable: one grounded in logic, reasoning, and proven truths, and the other motivated by emotion, aesthetics, and beauty. But recently, AI and art took on a new flirtation that, over the past 10 years, developed into something more serious. 

A large branch of this work, for example, has previously focused on generating new art using AI. There was the GauGAN project developed by researchers at MIT, NVIDIA, and the University of California at Berkeley; Hamilton and others’ previous GenStudio project; and even an AI-generated artwork that sold at Sotheby’s for $51,000.

MosAIc, however, doesn’t aim to create new art so much as help explore existing art. One similar tool, Google’s “X Degrees of Separation,” finds paths of art that connect two works of art, but MosAIc differs in that it only requires a single image. Instead of finding paths, it uncovers connections in whatever culture or media the user is interested in, such as finding the shared artistic form of “Anthropoides paradisea” and “Seth Slaying a Serpent, Temple of Amun at Hibis.” 

Hamilton notes that building out their algorithm was a tricky endeavor, because they wanted to find images that were similar not just in color or style, but in meaning and theme. In other words, they’d want dogs to be close to other dogs, people to be close to other people, and so forth. To achieve this, they probe a deep network’s inner “activations” for each image in the combined open access collections of the Met and the Rijksmuseum. Distance between the “activations” of this deep network, which are commonly called “features,” was how they judged image similarity.
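The similarity judgment itself can be sketched in a few lines. The snippet below uses an off-the-shelf torchvision ResNet as the feature extractor and cosine similarity between activations; the network, the distance measure, and the image paths are stand-ins for illustration, not the exact MosAIc pipeline.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Off-the-shelf feature extractor: a ResNet with its classifier head
# removed, so the output is the penultimate "activation" vector.
backbone = models.resnet18(pretrained=True)  # newer torchvision: weights=...
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def features(path):
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return extractor(image).flatten()

# Similarity = closeness of activations (hypothetical file paths).
query = features("dress_painting.jpg")
candidate = features("porcelain_violin.jpg")
similarity = torch.nn.functional.cosine_similarity(query, candidate, dim=0)
print(f"feature similarity: {similarity.item():.3f}")
```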

To find analogous images between different cultures, the team used a new image-search data structure called a “conditional KNN tree” that groups similar images together in a tree-like structure. To find a close match, they start at the tree’s “trunk” and follow the most promising “branch” until they are sure they’ve found the closest image. The data structure improves on its predecessors by allowing the tree to quickly “prune” itself to a particular culture, artist, or collection, quickly yielding answers to new types of queries.
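The conditional KNN tree itself prunes whole branches of the search, but its query semantics can be shown with a brute-force stand-in: restrict the candidates to items matching a condition, then return the nearest neighbor by feature distance. The data below is synthetic.

```python
import numpy as np

def conditional_nearest(query, feats, metadata, condition):
    """Nearest neighbor restricted to items matching a condition.

    A conditional KNN tree reaches the same answer without scanning
    everything, by pruning branches that cannot match the condition;
    this brute-force version only demonstrates the query semantics.
    """
    candidates = np.flatnonzero([condition(m) for m in metadata])
    dists = np.linalg.norm(feats[candidates] - query, axis=1)
    return candidates[np.argmin(dists)]

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 64))  # synthetic deep features
metadata = [{"culture": rng.choice(["Dutch", "Chinese"]),
             "medium": rng.choice(["painting", "porcelain"])}
            for _ in range(1000)]

query = feats[0]  # e.g., features of the blue-and-white dress painting
idx = conditional_nearest(query, feats, metadata,
                          lambda m: m["medium"] == "porcelain")
print("closest porcelain item:", idx, metadata[idx])
```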

What Hamilton and his colleagues found surprising was that this approach could also be applied to helping find problems with existing deep networks, related to the surge of “deepfakes” that have recently cropped up. They applied this data structure to find areas where probabilistic models, such as the generative adversarial networks (GANs) that are often used to create deepfakes, break down. They coined these problematic areas “blind spots,” and note that they give us insight into how GANs can be biased. Such blind spots further show that GANs struggle to represent particular areas of a dataset, even if most of their fakes can fool a human.

The machine learning system can uncover connections in whatever culture or media a user is interested in, such as finding the shared artistic form of “Anthropoides paradisea” and “Seth Slaying a Serpent, Temple of Amun at Hibis.”

Image courtesy of MIT CSAIL.

Testing MosAIc 

The team evaluated MosAIc’s speed and how closely it aligned with human intuition about visual analogies.

For the speed tests, they wanted to make sure that their data structure provided value over simply searching through the collection with quick, brute-force search. 

To understand how well the system aligned with human intuitions, they made and released two new datasets for evaluating conditional image retrieval systems. One dataset challenged algorithms to find images with the same content even after they had been “stylized” with a neural style transfer method. The second dataset challenged algorithms to recover English letters across different fonts. A bit less than two-thirds of the time, MosAIc was able to recover the correct image in a single guess from a “haystack” of 5,000 images.

“Going forward, we hope this work inspires others to think about how tools from information retrieval can help other fields like the arts, humanities, social science, and medicine,” says Hamilton. “These fields are rich with information that has never been processed with these techniques and can be a source for great inspiration for both computer scientists and domain experts. This work can be expanded in terms of new datasets, new types of queries, and new ways to understand the connections between works.” 

Hamilton wrote the paper on MosAIc alongside Professor Bill Freeman and MIT undergraduates Stefanie Fu and Mindren Lu. The MosAIc website was built by Fu, Lu, Zhenbang Chen, Felix Tran, Darius Bopp, Margaret Wang, Marina Rogers, and Johnny Bui at the Microsoft Garage winter externship program.

Original article published on the MIT News website on July 29, 2020

 


New design principle could prevent catheter failure in brain shunts

July 31, 2020


This is a time-lapse microscopic image of particle-based flow visualization inside a ventricular catheter. The image shows the confluence of three flows: the upstream flow coming from the catheter tip direction and two flows coming in via holes facing each other.

Image courtesy of Bourouiba Research Group.

For medical professionals treating hydrocephalus — a chronic neurological condition caused by an abnormal accumulation of cerebrospinal fluid (CSF), resulting in pressure on the brain — there has been a limited range of treatment options. The most common is the surgical placement of a shunt, a flexible tube inserted into the ventricular system of the brain to divert the flow of CSF from the brain to elsewhere in the body.

While effective, this surgery comes with risks (the procedure requires drilling a hole into the skull, after all), and the failure rate for these shunts, despite their lifesaving properties, is quite high. Whether congenital (present at birth, including spina bifida) or acquired (from a brain injury, for instance), hydrocephalus affects more than 1 million Americans, ranging from infants and older children to seniors.

Now, MIT researchers have released a paper in the Journal of the Royal Society Interface that proposes and validates a new design principle for hydrocephalus catheters, one that seeks to overcome a central challenge in the design of these devices: that they regularly become clogged. A clogged catheter has life-threatening implications, especially for children, and usually leads to emergency surgery, the reopening of sealed scars, and the possible resection of the implanted catheter from the brain before a new catheter can be placed, followed by additional healing time. This process carries with it the risk of damage to brain tissue and infection. For pediatric patients, catheters have a 60 percent chance of failure, often due to tissue that clogs the catheters, eventually stopping the flow of CSF away from the brain.

The new research focuses on the potential redesign of the shunts, according to one of the authors of the paper, Thomas Heldt, an associate professor of electrical and biomedical engineering in the Department of Electrical Engineering and Computer Science and the Institute of Medical Engineering and Science (IMES). He points out that an important part of the research process was to conduct in vitro experiments exposing cell cultures to fluid shear stress, in addition to microfluidic flow imaging, and conducting fluid dynamic calculation and measurements.

“The point we are seeking to bring across is how to best design the catheter geometry to optimize the function of this medical device,” says Heldt. “These are design parameters that can change in such a way that a minimum force on the catheter walls is imposed to ensure minimal risk of cell adhesion in the first place.”

 


These images provide a comparison of the detachment of astrocytes from static and flow culture upon sudden high fluid shear stress. Astrocytes that were cultured under prolonged fluid shear stress detached from the surface much easier compared with those cultured under no flow. This result suggests to researchers that they might design catheter shapes to maximize the wall shear stress distribution on the inner surfaces of ventricular catheters.

Image courtesy of Bourouiba Research Group.

Lydia Bourouiba, the senior author of the paper and an associate professor in the departments of Civil and Environmental Engineering, Mechanical Engineering, and IMES, who directs The Fluid Dynamics of Disease Transmission Laboratory, says of the research: “The novelty is that we leveraged the coupling between mechanical (i.e., fluid dynamics here) principles and biological and cell response to enable novel pathways in design principles of these lifesaving medical devices.”

Along with Bourouiba and Heldt, the authors of the paper are Songkwon Lee, a PhD student in the Department of Mechanical Engineering; Nicholas Kwok, an MD student in the Harvard-MIT Program in Health Sciences and Technology; and James Holsapple, chief of neurosurgery at Boston Medical Center.

According to Heldt, the new research could lead to redesigned shunts that would “keep the minimum wall shear stress sufficiently high, above a threshold value we identified to be sufficient to minimize cell adhesion and proliferation. If  we prevent these cells from adhering in the first place, we undercut the key step responsible for long-term clogging and failure of brain catheters.”
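The design quantity in play can be estimated with textbook fluid mechanics. For steady laminar (Poiseuille) flow through a circular opening of radius R, the wall shear stress is τ = 4μQ/(πR³), so the hole count and hole radius set the minimum shear directly. The sketch below uses illustrative values; the threshold is a placeholder, not the value identified in the paper.

```python
import math

def wall_shear_stress(flow_rate_m3s, radius_m, viscosity_pa_s=0.7e-3):
    """Wall shear stress for laminar Poiseuille flow in a circular duct:
    tau = 4 * mu * Q / (pi * R^3). CSF viscosity is roughly water-like."""
    return 4.0 * viscosity_pa_s * flow_rate_m3s / (math.pi * radius_m ** 3)

# Typical CSF production is ~0.35 mL/min, split across the catheter holes.
q_total = 0.35e-6 / 60.0   # m^3/s
threshold = 0.01           # Pa -- placeholder, not the paper's value

for n_holes in (4, 8, 16):
    tau = wall_shear_stress(q_total / n_holes, radius_m=250e-6)
    status = "above" if tau >= threshold else "below"
    print(f"{n_holes:2d} holes: tau = {tau:.4f} Pa ({status} threshold)")
# Fewer holes concentrate the flow and raise the per-hole wall shear:
# geometry, rather than surface chemistry, keeps cells from adhering.
```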

Dwight Meglan, an engineer who is the chief technology officer of HeartLander Surgical, a medical device company, has a daughter, Emma, who has needed hydrocephalus catheters since birth. He says that due to his own background as an engineer, he has puzzled over how catheters could be more resistant to failure, and has sometimes conferred with Heldt on the challenge. He says that what he finds interesting about the new research, if it leads to a new catheter construction, is that “this is more foundational than some other research I’ve seen, because they are actually looking at this from the point of view that perhaps the problem is due to an underlying design failure.”

Bourouiba says that previously, research on preventing shunt failures has often focused on “surface engineering, with little translation into practice due to the sensitive location in which these catheters are used: the brain. A major concern is the durability and stability of chemical solutions in long-term usage in a patient’s brain, particularly when developing brains are involved.”

By contrast, she continues, “Our paper leveraged a novel combination of state-of-the-art flow visualization and quantification, fluid dynamics modeling, coupled with in-vitro experiments, to arrive at new design principles for these catheters, based on the concept of maximizing the minimal fluid shear stress so as to prevent cells from successfully adhering to and weakly proliferating onto the catheter in the first place.”

Kwok, a fourth-year medical student, said he was looking for a research project for his thesis when Heldt suggested the hydrocephalus catheter research idea, combining “engineering and medicine to develop new diagnostic and therapeutic technologies … and I was hooked.” He says he hopes to pursue basic science research during an internal medicine residency he will apply for in the fall, with the goal of “clinically oriented engineering research as a practicing physician, combining patient care with therapeutic innovation.”

For Edward Smith, the R. Michael Scott Chair in Neurosurgery at Boston Children’s Hospital, the potential for lifesaving advances that could mitigate the frequency of shunt malfunctions is encouraging. “The data presented in this manuscript are novel, and offer a different way of looking at a serious problem routinely faced by clinicians,” he adds.

Now that the researchers have experimentally validated the design principles, prototypes need to be produced and used in clinical trials. But whether or not the research results in better-functioning shunts somewhere down the line, Bourouiba says that the process has already proved rewarding. “It was very exciting to gain a fundamental understanding of the coupling between flow and particular brain cell behavior, and to leverage such understanding to develop fluid-dynamics-based and validated algorithms, guiding a novel design principle for hydrocephalus catheters, rooted in the inherent coupling between the physics and biology involved,” she says.

Original article published on the MIT News website on July 30, 2020

 


An automated health care system that understands when to step in

August 3, 2020


The system either queries the expert to diagnose the patient based on their X-ray and medical records, or looks at the X-ray to make the diagnosis itself.

Image courtesy of MIT CSAIL.

In recent years, entire industries have popped up that rely on the delicate interplay between human workers and automated software. Companies like Facebook work to keep hateful and violent content off their platforms using a combination of automated filtering and human moderators. In the medical field, researchers at MIT and elsewhere have used machine learning to help radiologists better detect different forms of cancer.

What can be tricky about these hybrid approaches is understanding when to rely on the expertise of people versus programs. This isn’t always merely a question of who does a task “better”; indeed, if a person has limited bandwidth, the system may have to be trained to minimize how often it asks for help.

To tackle this complex issue, researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have developed a machine learning system that can either make a prediction about a task, or defer the decision to an expert. Most importantly, it can adapt when and how often it defers to its human collaborator, based on factors such as its teammate’s availability and level of experience.

The team trained the system on multiple tasks, including looking at chest X-rays to diagnose specific conditions such as atelectasis (lung collapse) and cardiomegaly (an enlarged heart). In the case of cardiomegaly, they found that their human-AI hybrid model performed 8 percent better than either could on their own (based on AU-ROC scores).  

“In medical environments where doctors don’t have many extra cycles, it’s not the best use of their time to have them look at every single data point from a given patient’s file,” says PhD student Hussein Mozannar, lead author with David Sontag, the Von Helmholtz Associate Professor of Medical Engineering in the Department of Electrical Engineering and Computer Science, of a new paper about the system that was recently presented at the International Conference on Machine Learning. “In that sort of scenario, it’s important for the system to be especially sensitive to their time and only ask for their help when absolutely necessary.”

The system has two parts: a “classifier” that can predict a certain subset of tasks, and a “rejector” that decides whether a given task should be handled by either its own classifier or the human expert.
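The decision rule can be sketched with a hand-coded, confidence-based rejector; note that the actual system learns the rejector jointly with the classifier and models a specific expert, which this toy version only approximates.

```python
import numpy as np

def predict_or_defer(x, classifier, expert_accuracy, deferral_cost):
    """Route a case to the model or the human expert.

    classifier(x) returns class probabilities; we defer whenever the
    expert's expected accuracy, discounted by the cost of their time,
    beats the model's confidence. The real rejector is learned jointly
    with the classifier rather than hand-coded like this.
    """
    probs = classifier(x)
    if expert_accuracy - deferral_cost > probs.max():
        return "defer-to-expert"
    return int(probs.argmax())

# Toy classifier: confident on some inputs, uncertain on others.
def toy_classifier(x):
    logits = np.array([x, 1.0 - x])
    e = np.exp(logits - logits.max())
    return e / e.sum()

for x in (0.95, 0.55):
    print(x, "->", predict_or_defer(x, toy_classifier,
                                    expert_accuracy=0.9, deferral_cost=0.2))
```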

Through experiments on tasks in medical diagnosis and text/image classification, the team showed that their approach not only achieves better accuracy than baselines, but does so with a lower computational cost and with far fewer training data samples.

“Our algorithms allow you to optimize for whatever choice you want, whether that’s the specific prediction accuracy or the cost of the expert’s time and effort,” says Sontag, who is also a member of MIT’s Institute for Medical Engineering and Science. “Moreover, by interpreting the learned rejector, the system provides insights into how experts make decisions, and in which settings AI may be more appropriate, or vice-versa.”

The system’s particular ability to help detect offensive text and images could also have interesting implications for content moderation. Mozannar suggests that it could be used at companies like Facebook in conjunction with a team of human moderators. (He is hopeful that such systems could minimize the amount of hateful or traumatic posts that human moderators have to review every day.)

Sontag clarified that the team has not yet tested the system with human experts, but instead developed a series of “synthetic experts” so that they could tweak parameters such as experience and availability. In order to work with a new expert it’s never seen before, the system would need some minimal onboarding to get trained on the person’s particular strengths and weaknesses.

In future work, the team plans to test their approach with real human experts, such as radiologists for X-ray diagnosis. They will also explore how to develop systems that can learn from biased expert data, as well as systems that can work with — and defer to — several experts at once. For example, Sontag imagines a hospital scenario where the system could collaborate with different radiologists who are more experienced with different patient populations.

“There are many obstacles that understandably prohibit full automation in clinical settings, including issues of trust and accountability,” says Sontag. “We hope that our method will inspire machine learning practitioners to get more creative in integrating real-time human expertise into their algorithms.” 

Mozannar is affiliated with both CSAIL and the MIT Institute for Data, Systems, and Society (IDSS). The team’s work was supported, in part, by the National Science Foundation.

Original article published on the MIT News website on July 30, 2020


Can a quantum strategy help bring down the house?

August 4, 2020


Can a quantum strategy help bring down the house? MIT study finds quantum entanglement gives slight advantage in playing against the house.

Image: Christine Daniloff, MIT

Jennifer Chu | MIT News Office

In some versions of the game blackjack, one way to win against the house is for players at the table to work as a team to keep track of and covertly communicate amongst each other the cards they have been dealt. With that knowledge, they can then estimate the cards still in the deck, and those most likely to be dealt out next, all to help each player decide how to place their bets, and as a team, gain an advantage over the dealer.

This calculating strategy, known as card-counting, was made famous by the MIT Blackjack Team, a group of students from MIT, Harvard University, and Caltech, who for several decades starting in 1979, optimized card-counting and other techniques to successfully beat casinos at blackjack around the world — a story that later inspired the book “Bringing Down the House.”
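The classical baseline is easy to state in code. The sketch below implements Hi-Lo, the standard public card-counting scheme (the MIT team’s exact methods varied): low cards already seen leave the remaining deck rich in high cards, which favors the player.

```python
def hilo_value(card):
    """Hi-Lo tag: 2-6 count +1, 10/face/ace count -1, 7-9 are neutral."""
    if card in (2, 3, 4, 5, 6):
        return +1
    if card in (10, 11):  # 10/J/Q/K tallied as 10, ace as 11
        return -1
    return 0

def true_count(cards_seen, total_decks=6):
    """Running count normalized by the estimated number of decks left."""
    running = sum(hilo_value(c) for c in cards_seen)
    decks_left = max(total_decks - len(cards_seen) / 52.0, 0.5)
    return running / decks_left

# Many low cards dealt -> positive count -> the shoe favors the players.
seen = [2, 3, 5, 6, 4, 2, 9, 10, 3, 5]
tc = true_count(seen)
print(f"true count {tc:+.2f}: {'bet big' if tc > 1 else 'bet small'}")
```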

Now researchers at MIT and Caltech have shown that the weird, quantum effects of entanglement could theoretically give blackjack players even more of an edge, albeit a small one, when playing against the house.

In a paper published this week in the journal Physical Review A, the researchers lay out a theoretical scenario in which two players, playing cooperatively against the dealer, can better coordinate their strategies using a quantumly entangled pair of systems. Such systems exist now in the laboratory, although not in forms convenient for any practical use in casinos. In their study, the authors nevertheless explore the theoretical possibilities for how a quantum system might influence outcomes in blackjack.

They found that such quantum communication would give the players a slight advantage compared to classical card-counting strategies, though in limited situations where the number of cards left in the dealer’s deck is low.

“It’s pretty small in terms of the actual magnitude of the expected quantum advantage,” says first author Joseph Lin, a former graduate student at MIT. “But if you imagine the players are extremely rich, and the deck is really low in number, so that every card counts, these small advantages can be big. The exciting result is that there’s some advantage to quantum communication, regardless of how small it is.”

Lin’s MIT co-authors on the paper are professor of physics Joseph Formaggio, associate professor of physics Aram Harrow, and Anand Natarajan of Caltech, who will start at MIT in September as assistant professor of electrical engineering and computer science.

Quantum dealings

Entanglement is a phenomenon described by the rules of quantum mechanics, which state that two physically separate objects can be “entangled,” or correlated with each other, in such a way that the correlations between them are stronger than what would be predicted by the classical laws of physics and probability.

In 1964, physicist John Bell proved mathematically that quantum entanglement could exist, and also devised a test — known as a Bell test — that scientists have since applied to many scenarios to ascertain if certain spatially remote particles or systems behave according to classical, real-world physics, or whether they may exhibit some quantum, entangled states.

“One motivation for this work was as a concrete realization of the Bell test,” says Harrow of the team’s new paper. “People wrote the rules of blackjack not thinking of entanglement. But the players are dealt cards, and there are some correlations between the cards they get. So does entanglement work here? The answer to the question was not obvious going into it.”

After casually entertaining the idea during a regular poker night with friends, Formaggio decided to explore the possibility of quantum blackjack more formally with his MIT colleagues.

“I was grateful to them for not laughing and closing the door on me when I brought up the idea,” Formaggio recalls.

Correlated cards

In blackjack, the dealer deals herself and each player a face-up card that is public to all, and a face-down card. With this information, each player decides whether to “hit,” and be dealt another card, or “stand,” and stay with the cards they have. The goal after one round is to have a hand with a total that is closer to 21, without going over, than the dealer and the other players at the table.

In their paper, the researchers simulated a simple blackjack setup involving two players, Alice and Bob, playing cooperatively against the dealer. They programmed Alice to consistently bet low, with the main objective of helping Bob, who could hit or stand based on any information he gained from Alice.

The researchers considered how three different scenarios might help the players win over the dealer: a classical card-counting scenario without communication; a best-case scenario in which Alice simply shows Bob her face-down card, demonstrating the best that a team can do in playing against the dealer; and lastly, a quantum entanglement scenario.

In the quantum scenario, the researchers formulated a mathematical model to represent a quantum system, which can be thought of abstractly as a box with many “buttons,” or measurement choices, that is shared between Alice and Bob.

For instance, if Alice’s face-down card is a 5, she can push a particular button on the quantum box and use its output to inform her usual choice of whether to hit or stand. Bob, in turn, looks at his face-down card when deciding which button to push on his quantum box, as well as whether to use the box at all. In the cases where Bob uses his quantum box, he can combine its output with his observation of Alice’s strategy to decide his own move. This extra information — not exactly the value of Alice’s card, but more information than a random guess — can help Bob decide whether to hit or stand.
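The statistics such a box provides can be sampled classically once quantum mechanics supplies the rule. For a shared singlet pair measured at angles θ_A and θ_B, the outcomes agree with probability sin²((θ_A − θ_B)/2). The sketch below samples that rule; the card-to-angle mapping is an arbitrary illustration, not the optimized strategy from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def singlet_outcomes(theta_a, theta_b, shots=100_000):
    """Sample correlated bits with singlet-state statistics.

    For measurement angles theta_a and theta_b on a shared singlet,
    quantum mechanics gives P(outcomes equal) = sin^2((theta_a - theta_b)/2),
    a correlation pattern no classical shared coin can reproduce for all
    angle choices (the content of Bell's theorem).
    """
    p_same = np.sin((theta_a - theta_b) / 2.0) ** 2
    a = rng.integers(0, 2, size=shots)
    same = rng.random(shots) < p_same
    b = np.where(same, a, 1 - a)
    return a, b

# Hypothetical mapping: each player picks a measurement angle based on
# their face-down card, then feeds the output bit into hit/stand logic.
angle_for_card = lambda card: card * np.pi / 13.0
a, b = singlet_outcomes(angle_for_card(5), angle_for_card(9))
print(f"P(outputs agree) = {np.mean(a == b):.3f}")  # ~ sin^2(2*pi/13) ~ 0.22
```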

The researchers ran all three scenarios, with many combinations of cards between each player and the dealer, and with an increasing number of cards left in the dealer’s deck, to see how often Alice and Bob could win against the dealer.

After running thousands of rounds for each of the three scenarios, they found that the players had a slight advantage over the dealer in the quantum entanglement scenario, compared with the classical card-counting strategy, though only when a handful of cards were left in the dealer’s deck.

“As you increase the deck and therefore increase all the possibilities of different cards coming to you, the fact that you know a little bit more through this quantum process actually gets diluted,” Formaggio explains.

Nevertheless, Harrow notes that “it was surprising that these problems even matched, that it even made sense to consider entangled strategy in blackjack.”

Do these results mean that future blackjack teams might use quantum strategies to their advantage?

“It would require a very large investor, and my guess is, carrying a quantum computer in your backpack will probably tip the house,” Formaggio says. “We think casinos are safe right now from this particular threat.”

This research was funded, in part, by the National Science Foundation, the Army Research Office, the U.S. Department of Energy, and the MIT Undergraduate Research Opportunities Program (UROP).

Original article published on the MIT News website on August 3, 2020


Data systems that learn to be better

August 11, 2020


One of the biggest challenges in computing is handling a staggering onslaught of information while still being able to efficiently store and process it.

Adam Conner-Simons | MIT CSAIL

Big data has gotten really, really big: By 2025, all the world’s data will add up to an estimated 175 trillion gigabytes. For a visual, if you stored that amount of data on DVDs, it would stack up tall enough to circle the Earth 222 times. 

One of the biggest challenges in computing is handling this onslaught of information while still being able to efficiently store and process it. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that the answer rests with something called “instance-optimized systems.”  

Traditional storage and database systems are designed to work for a wide range of applications because of how long it can take to build them — months or, often, several years. As a result, for any given workload such systems provide performance that is good, but usually not the best. Even worse, they sometimes require administrators to painstakingly tune the system by hand to provide even reasonable performance. 

In contrast, the goal of instance-optimized systems is to build systems that optimize and partially re-organize themselves for the data they store and the workload they serve. 

“It’s like building a database system for every application from scratch, which is not economically feasible with traditional system designs,” says MIT Professor Tim Kraska. 

As a first step toward this vision, Kraska and colleagues developed Tsunami and Bao. Tsunami uses machine learning to automatically re-organize a dataset’s storage layout based on the types of queries that its users make. Tests show that it can run queries up to 10 times faster than state-of-the-art systems. What’s more, its datasets can be organized via a series of "learned indexes" that are up to 100 times smaller than the indexes used in traditional systems. 

Kraska has been exploring the topic of learned indexes for several years, going back to his influential work with colleagues at Google in 2017. 
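The core idea from that work is compact enough to sketch: replace a B-tree with a model that predicts a key’s position in the sorted data, then correct the prediction with a bounded local search. Below is a one-segment linear version; production learned indexes use hierarchies of models, and this toy is not Tsunami’s implementation.

```python
import bisect
import numpy as np

class LearnedIndex:
    """A one-segment learned index: a linear model predicts where a key
    sits in the sorted array, and the recorded worst-case model error
    bounds the final binary search window."""

    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys))
        positions = np.arange(len(self.keys))
        self.slope, self.intercept = np.polyfit(self.keys, positions, 1)
        predictions = self.slope * self.keys + self.intercept
        self.max_err = int(np.ceil(np.abs(predictions - positions).max()))

    def lookup(self, key):
        guess = int(self.slope * key + self.intercept)
        lo = max(guess - self.max_err, 0)
        hi = min(guess + self.max_err + 1, len(self.keys))
        i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
        return i if i < len(self.keys) and self.keys[i] == key else None

rng = np.random.default_rng(1)
keys = np.unique(rng.integers(0, 10_000_000, size=100_000))
index = LearnedIndex(keys)
print("position of", keys[1234], "->", index.lookup(keys[1234]),
      "| search window:", 2 * index.max_err + 1)
```

Storing a slope, an intercept, and an error bound in place of an entire tree of index nodes is where the up-to-100x space savings come from.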

Harvard University Professor Stratos Idreos, who was not involved in the Tsunami project, says that a unique advantage of learned indexes is their small size, which, in addition to space savings, brings substantial performance improvements.

“I think this line of work is a paradigm shift that’s going to impact system design long-term,” says Idreos. “I expect approaches based on models will be one of the core components at the heart of a new wave of adaptive systems.”

Bao, meanwhile, focuses on improving the efficiency of query optimization through machine learning. A query optimizer rewrites a high-level declarative query to a query plan, which can actually be executed over the data to compute the result of the query. However, there is often more than one query plan that can answer a given query; picking the wrong one can cause a query to take days to compute the answer, rather than seconds.

Traditional query optimizers take years to build, are very hard to maintain, and, most importantly, do not learn from their mistakes. Bao is the first learning-based approach to query optimization that has been fully integrated into the popular database management system PostgreSQL. Lead author Ryan Marcus, a postdoc in Kraska’s group, says that Bao produces query plans that run up to 50 percent faster than those created by the PostgreSQL optimizer, meaning that it could help to significantly reduce the cost of cloud services, like Amazon’s Redshift, that are based on PostgreSQL.

By fusing the two systems together, Kraska hopes to build the first instance-optimized database system that can provide the best possible performance for each individual application without any manual tuning. 

The goal is to not only relieve developers from the daunting and laborious process of tuning database systems, but to also provide performance and cost benefits that are not possible with traditional systems.

Traditionally, the systems we use to store data are limited to only a few storage options and, as a result, they cannot provide the best possible performance for a given application. What Tsunami can do is dynamically change the structure of the data storage based on the kinds of queries that it receives, and create new ways to store data that are not feasible with more traditional approaches.

Johannes Gehrke, a managing director at Microsoft Research who also heads up machine learning efforts for Microsoft Teams, says that this work opens up many interesting applications, such as doing so-called “multidimensional queries” in main-memory data warehouses. Harvard’s Idreos also expects the project to spur further work on how to maintain the good performance of such systems when new data and new kinds of queries arrive.

Bao is short for “bandit optimizer,” a play on words related to the so-called “multi-armed bandit” analogy where a gambler tries to maximize their winnings at multiple slot machines that have different rates of return. The multi-armed bandit problem is commonly found in any situation that has tradeoffs between exploring multiple different options, versus exploiting a single option — from risk optimization to A/B testing.
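A minimal epsilon-greedy sketch of that analogy applied to plan selection appears below. Bao itself pairs a learned cost model with exploration over optimizer hint sets, so this captures only the flavor; the plan names and costs are invented.

```python
import random

random.seed(7)

def choose_plan(latencies, epsilon=0.1):
    """Epsilon-greedy bandit over query plans: usually exploit the plan
    with the best observed mean latency, occasionally explore another."""
    means = {p: sum(l) / len(l) for p, l in latencies.items() if l}
    if not means or random.random() < epsilon:
        return random.choice(list(latencies))
    return min(means, key=means.get)

# Hypothetical plans with different true costs (seconds); the optimizer
# doesn't know these -- it only sees the samples it collects.
true_cost = {"hash_join": 1.0, "merge_join": 2.5, "nested_loop": 40.0}
observed = {p: [] for p in true_cost}

for _ in range(200):
    plan = choose_plan(observed)
    observed[plan].append(random.gauss(true_cost[plan], 0.2))

for plan, runs in observed.items():
    print(f"{plan:12s} chosen {len(runs):3d} times")
# After a short learning period the bandit all but stops picking the
# catastrophic nested-loop plan -- it learns from its mistakes.
```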

“Query optimizers have been around for years, but they often make mistakes, and usually they don’t learn from them,” says Kraska. “That’s where we feel that our system can make key breakthroughs, as it can quickly learn for the given data and workload what query plans to use and which ones to avoid.”

Kraska says that in contrast to other learning-based approaches to query optimization, Bao learns much faster and can outperform open-source and commercial optimizers with as little as one hour of training time. In the future, his team aims to integrate Bao into cloud systems to improve resource utilization in environments where disk, RAM, and CPU time are scarce resources.

“Our hope is that a system like this will enable much faster query times, and that people will be able to answer questions they hadn’t been able to answer before,” says Kraska.

A related paper about Tsunami was co-written by Kraska, PhD students Jialin Ding and Vikram Nathan, and MIT Professor Mohammad Alizadeh. A paper about Bao was co-written by Kraska, Marcus, PhD students Parimarjan Negi and Hongzi Mao, visiting scientist Nesime Tatbul, and Alizadeh.

The work was done as part of the Data System and AI Lab (DSAIL@CSAIL), which is sponsored by Intel, Google, Microsoft, and the U.S. National Science Foundation. 

Original article published on the MIT News website on August 10, 2020


MIT researchers lead high school educational initiative on quantum computing

August 11, 2020


Students from the week-long Qubit by Qubit summer camp on quantum computing

Photo courtesy of The Coding School

Research Laboratory of Electronics

Quantum computing has the potential to change the world as we know it, yet limited K-12 educational resources on quantum computing exist. MIT researchers have partnered with The Coding School (TCS), a technology education nonprofit, to address this gap. This first-of-its-kind initiative, playfully named “Qubit by Qubit,” aims to introduce high school students to quantum computing through two programs: a week-long summer camp and a year-long course, led respectively by Amir Karamlou and Francisca Vasconcelos of the MIT Research Laboratory of Electronics (RLE)’s Engineering Quantum Systems (EQuS) research group.

Delivered in an all-online format with live instruction by MIT researchers, the program is open to high school students across the United States and worldwide. The goal of these programs is twofold: first, to excite students about quantum computing and provide them with real-world quantum skills; and second, to increase diversity in the field of quantum computing by focusing on outreach to underrepresented groups in STEM, including women, students of color, and those from low socioeconomic backgrounds. To achieve these ends, the organizers have partnered with MIT’s Center for Quantum Engineering, which supports the development of a U.S. quantum workforce through a range of fellowship programs, educational curriculum development, and quantum-related research activities at MIT. The educational initiative is also sponsored by Google AI and IBM Quantum.

MIT alumnus Amir Karamlou '18, MEng '18, who recently led the week-long summer camp, is currently a second-year graduate research fellow with EQuS at MIT. His research focuses on experimental quantum computation using superconducting qubits to simulate many-body quantum systems, and he has been teaching the undergraduate-level Introduction to Quantum Computing course during MIT’s Independent Activities Period (IAP) since 2017.

“The most exciting part of teaching this camp is that we get to introduce quantum mechanics and quantum computing to many students for the first time!” Karamlou says. “And through this process, we also get to reach out to groups that are normally underrepresented in our field, and get students interested in pursuing quantum computing in college and grad school. I vividly remember the excitement of learning about quantum computation from my freshman advisor, and would like to instill the same enthusiasm in the next generation of scientists and engineers. The technology has advanced considerably since my freshman year at MIT; learners now have access to real quantum processors through the cloud and can learn by running small-scale quantum programs in real time.”


Francisca Vasconcelos of the Engineering Quantum Systems Group

Photo courtesy of the researchers

The camp had approximately 300 participants from 18 countries and 30 states, 70 percent of whom were from underrepresented backgrounds. Advanced topics in quantum computing were met with enthusiasm and excitement, and with an overall interest in learning more. During the week, the students coded and executed over 1,000 quantum circuits on actual quantum hardware available through IBM Quantum’s cloud platform.

“Teaching this camp challenged me to think about many of the concepts that I use on a daily basis from a whole new perspective, which helped me a lot as well,” says Karamlou. “I found the experience to be very rewarding, especially after reading feedback that we got from the students at the end of the program. An overwhelming majority of the students indicated that attending this camp increased their interest in quantum computing, and that they want to learn more about quantum computing in the future. The program received many amazing testimonials, which means a lot to me personally.”

Student testimonials from the Qubit by Qubit camp

  • “As a Latin American student, I don’t have a lot of opportunities to learn topics like quantum computing or quantum mechanics. This experience was awesome; I enjoyed everything. Of course it was challenging at first, but I learned a lot, and it made me realize that maybe this is what I want to do for the rest of my life.”

 

  • “I enjoyed the Quantum Computing camp greatly. I have felt a lot of times in my life like I wasn’t good enough at STEM to pursue it in the future, and this camp helped me to realize this is false. Being taught by MIT researchers who are genuinely passionate about quantum computing allowed me to also get super excited about technology, coding, and the future. It has motivated me to continue to pursue this field, and computer science in general, on my own and perhaps in my final year of high school and moving into college.”

 

  • “I really enjoyed being able to learn about quantum in an environment where I felt comfortable asking questions and interacting with teachers. The course material was challenging, but not so challenging that it was impossible to understand, and after reviewing it, things began to make sense. This was in no small part a result of the amazing teachers, who were incredibly patient and clear in explaining and re-explaining things in a simple, understandable way. The camp got me much more interested in quantum, and I hope to study it more intensively in the future.”

 

Leading the curriculum development for a year-long “Intro to Quantum Computing” course

Francisca Vasconcelos graduated from MIT this year with a bachelor’s degree in electrical engineering and computer science and in physics, and will be studying statistical science at Oxford University this fall as a Rhodes Scholar. During her two years as an Undergraduate Research Opportunities Program student in the EQuS group, as well as through internships at Rigetti Computing and Microsoft Research Quantum, Vasconcelos worked on a number of research problems related to benchmarking, readout, and noise mitigation of superconducting qubits. In her future research, she is interested in studying the intersection of machine learning and quantum computing.


Amir Karamlou of the Engineering Quantum Systems Group

Photo courtesy of the researchers

Vasconcelos was first introduced to quantum computation and information in her first year, when she took Karamlou’s IAP course. Sophomore year, she began volunteering for The Coding School, improving the “Intro to Python” curriculum. In 2019, she joined the IAP quantum course staff, which was her first classroom teaching experience. The experience gave her the confidence and enthusiasm to pitch the idea of a quantum computing curriculum for high schoolers to Kiera Peltz, founder and leader of The Coding School. Vasconcelos has since been leading the development of the year-long “Intro to Quantum Computing” course for high schoolers, which will launch this fall.

“I am looking forward to introducing many students to quantum computing for the first time, through the year-long course,” she says. “I still remember when I first heard about a quantum computer in high school, via a YouTube video, and was absolutely amazed. I hope to spark similar excitement and enthusiasm in the kids taking our course, while giving them the opportunity to delve further. That being said, it is challenging to make such interdisciplinary content broadly accessible, since most students would otherwise only see it late in undergraduate or even graduate school. I am thankful that a committed team of undergraduate and graduate students from MIT, Yale, Harvard, and EPFL have joined me in this effort and continue to find new ways to make the material approachable, yet exciting.”

The Coding School nonprofit is committed to empowering minorities in STEM, including students of color, women, students with disabilities, and students from lower socioeconomic backgrounds. The organization has led numerous initiatives to provide these groups with scholarships that enable participation.

Building the quantum workforce from the ground up

Professor William D. Oliver, who leads the EQuS group and serves as director of the Center for Quantum Engineering (CQE) in the RLE, strongly supports the development of a diverse quantum workforce.

“One critical aspect of the CQE mission is educating a quantum workforce. This includes, of course, undergraduate and graduate students at MIT. But, there is also substantial work that we can and should do beyond the Institute. For example, the CQE in conjunction with MIT xPRO has developed a series of professional development courses aimed at midcareer professionals who want to pivot to quantum technologies.”

Oliver continues, “In addition to reaching more senior learners, we also have the opportunity to reach junior learners, including high school students. Instilling the highly nonintuitive concepts of quantum mechanics in young people will help motivate and prepare them for quantum-related education at university. And, it’s not hard to find interested students! Quantum is ‘cool,’ and there is broad interest in the strange world of quantum mechanics and how it works. Capturing and building on this enthusiasm is why the CQE wholeheartedly supported the EQuS students and The Coding School. It’s a great way to reach students who are eager to learn about quantum technologies.”

Original article published on the MIT News website on August 10, 2020


Shrinking deep learning's carbon footprint

August 11, 2020


Deep learning has driven much of the recent progress in artificial intelligence, but as demand for computation and energy to train ever-larger models increases, many are raising concerns about the financial and environmental costs. To address the problem, researchers at MIT and the MIT-IBM Watson AI Lab are experimenting with ways to make software and hardware more energy efficient, and in some cases, more like the human brain.

Image: Niki Hinkle/MIT Spectrum

Kim Martineau | MIT Quest for Intelligence

In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can write creative fiction, translate legalese into plain English, and answer obscure trivia questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information.

But it came at a hefty price: at least $4.6 million and 355 years in computing time, assuming the model was trained on a standard neural network chip, or GPU. The model’s colossal size — 1,000 times larger than a typical language model — is the main factor in its high cost.

“You have to throw a lot more computation at something to get a little improvement in performance,” says Neil Thompson, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.”

Some of the excitement over AI’s recent progress has shifted to alarm. In a study last year, researchers at the University of Massachusetts at Amherst estimated that training a large deep-learning model produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency. Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough. 
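The arithmetic behind such estimates is straightforward even when the inputs must be guessed: energy is power times time, and emissions are energy times the grid’s carbon intensity, inflated by datacenter overhead. All figures in the sketch below are illustrative placeholders, not the UMass study’s inputs.

```python
def training_emissions_lbs(gpu_count, gpu_power_w, hours,
                           pue=1.6, grid_lbs_per_kwh=0.95):
    """Estimated CO2 from a training run: energy in kWh, scaled by the
    datacenter power-usage overhead (PUE) and grid carbon intensity."""
    energy_kwh = gpu_count * gpu_power_w * hours / 1000.0
    return energy_kwh * pue * grid_lbs_per_kwh

# Illustrative run: 512 GPUs at 300 W for two weeks of wall-clock time.
co2 = training_emissions_lbs(gpu_count=512, gpu_power_w=300, hours=14 * 24)
# ~126,000 lbs per car is the lifetime figure implied by the study's
# "626,000 pounds equals five cars" comparison.
print(f"~{co2:,.0f} lbs CO2, i.e., {co2 / 126_000:.1f} car lifetimes")
```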

“We need to rethink the entire stack — from software to hardware,” says Aude Oliva, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence. “Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.”

Computational limits have dogged neural networks from their earliest incarnation — the perceptron — in the 1950s. As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters — the mathematical operations, or weights, that tie the model together — making it 100 times bigger than its predecessor, itself just a year old.

In work posted on the pre-print server arXiv, Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements). At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient.

Toward leaner, greener algorithms

The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea to make models for recognizing actions, in video and in real life, more compact. In a paper at the European Conference on Computer Vision (ECCV) in August, researchers at the MIT-IBM Watson AI Lab describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data.

Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost of the next-best model, the researchers say.

“Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author, Rogerio Feris, research manager at the MIT-IBM Watson AI Lab. “We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.”
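A minimal sketch of the idea, with a stand-in saliency function in place of a trained policy network, might look like the following; the frame counts, the scoring function, and the downsampling factor are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
video = rng.random((32, 224, 224, 3))   # 32 frames of a toy "video"

def policy_scores(frames):
    # Stand-in for a lightweight policy network: frame-wise variance
    # as a cheap relevance proxy (the real system learns this scoring).
    return frames.var(axis=(1, 2, 3))

def downsample(frame, factor=4):
    # Represent a less-relevant frame at lower resolution.
    return frame[::factor, ::factor]

# Keep the 8 highest-scoring frames at full resolution; shrink the rest.
keep = set(np.argsort(policy_scores(video))[-8:].tolist())
processed = [video[i] if i in keep else downsample(video[i])
             for i in range(len(video))]

# A second model would classify from this abbreviated representation;
# here we just report the fraction of pixels actually processed.
print(sum(f.size for f in processed) / video.size)
```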

In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search. Song Han, an assistant professor at MIT, has used automated search to design models with fewer weights for language understanding and for scene recognition, where quickly picking out looming obstacles is acutely important in driving applications.

In a paper at ECCV, Han and his colleagues propose a model architecture for three-dimensional scene recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. They used an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method. 

In another recent paper, they use evolutionary search within an augmented design space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny Raspberry Pi. Separating the search and training process leads to huge reductions in computation, they say.
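The shape of such a search loop can be sketched in a few lines. The fitness function below is a made-up proxy standing in for trained accuracy minus a compute penalty, and the architectures are reduced to just a depth and a width; real systems evaluate far richer design spaces.

```python
import random

random.seed(0)

def fitness(arch):
    # Made-up proxy: quality improves with depth and width but saturates,
    # while a compute term penalizes expensive architectures.
    depth, width = arch
    quality = 1 - 1 / (1 + 0.3 * depth + 0.01 * width)
    compute = depth * width ** 2        # rough FLOP proxy
    return quality - 1e-7 * compute

def mutate(arch):
    # Small random perturbation of an architecture's depth and width.
    depth, width = arch
    return (max(1, depth + random.choice([-1, 0, 1])),
            max(8, width + random.choice([-16, 0, 16])))

population = [(random.randint(1, 20), random.randrange(8, 512, 8))
              for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]           # the fittest architectures survive
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("best (depth, width):", max(population, key=fitness))
```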

In a third approach, researchers are probing the essence of deep nets to see whether it might be possible to train just a small part of even hyper-efficient networks like those above. Under their lottery ticket hypothesis, PhD student Jonathan Frankle and MIT Professor Michael Carbin proposed that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.” 

They showed that an algorithm could retroactively find these winning subnetworks in small image-classification models. Now, in a paper at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer influences the training outcome. 

In less than two years, the lottery ticket idea has been cited more than 400 times, including by Facebook researcher Ari Morcos, who has shown that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too. 

“The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it's all about finding an appropriate starting point. The big downside, of course, is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.”

Frankle says he’s hopeful that an efficient way to find winning tickets will be found. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings.
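A minimal sketch of the iterative magnitude pruning procedure behind these experiments, with rewinding, appears below; the `train` function is a stand-in for real optimization, and the pruning rate is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(weights, mask, steps):
    # Stand-in for real optimization: nudge only the unmasked weights.
    return weights + mask * 0.01 * rng.standard_normal(weights.shape) * steps

w_init = rng.standard_normal(1000)
mask = np.ones_like(w_init)

# Snapshot from early in training: the "rewind point".
w_early = train(w_init, mask, steps=5)

for r in range(5):                        # prune 20% of survivors each round
    w_final = train(w_early, mask, steps=100)
    threshold = np.quantile(np.abs(w_final[mask == 1]), 0.2)
    mask = mask * (np.abs(w_final) > threshold)
    w_early = w_early * mask              # survivors keep their early values

print(f"surviving weights: {int(mask.sum())} of {mask.size}")  # roughly a third
```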

Hardware designed for efficient deep net algorithms

As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once.

Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. The trend began with the discovery that video-game graphical chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware. 

Much of this work hinges on finding ways to store and reuse data locally, across the chip’s processing cores, rather than waste time and energy shuttling data to and from a designated memory site. Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices.

Vivienne Sze, a professor at MIT, has literally written the book on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely varying shapes of both large and small deep learning models. Called Eyeriss 2, the chip uses 10 times less energy than a mobile GPU.

Its versatility lies in its on-chip network, called a hierarchical mesh, that adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput. 

“The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.”
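The reuse principle can be sketched in software as loop tiling: fetch a block of data once into a small local buffer, then reuse it across many partial results before touching memory again. The code below is a conceptual analogy only, not a model of Eyeriss 2’s hierarchical mesh; the tile size and the load counter are illustrative.

```python
import numpy as np

A = np.random.rand(64, 64)
B = np.random.rand(64, 64)
C = np.zeros((64, 64))
T = 16          # tile size: what fits in a core's local buffer (illustrative)
loads = 0       # count "expensive" fetches from main memory

for i in range(0, 64, T):
    for k in range(0, 64, T):
        a_tile = A[i:i+T, k:k+T]        # fetched once from "memory"...
        loads += 1
        for j in range(0, 64, T):
            b_tile = B[k:k+T, j:j+T]
            loads += 1
            # ...then reused across a whole row of output tiles.
            C[i:i+T, j:j+T] += a_tile @ b_tile

assert np.allclose(C, A @ B)
print("tile loads:", loads)   # 80 tile fetches, far fewer than per-element access
```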

Other hardware innovators are focused on reproducing the brain’s energy efficiency. Former Go world champion Lee Sedol may have lost to a computer, but his performance was fueled by a mere 20 watts of power. AlphaGo, by contrast, burned an estimated megawatt, or 50,000 times more.

Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting.

An electrochemical device, developed at MIT and recently published in Nature Communications, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium or potassium ions flow across the synaptic membrane dividing them. The device uses the flow of protons — the smallest and fastest ion in solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion.

“Even though it is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says the study’s senior author, Bilge Yildiz, a professor at MIT.

Energy-efficient algorithms and hardware can shrink AI’s environmental impact. But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications.

“For all of these reasons, we need to embrace efficient AI,” she says.

Original article published on the MIT News website on August 7, 2020



Remembering Murray Eden, 1920-2020

September 1, 2020

Professor Emeritus Murray Eden, photo credit Abby Eden


Murray Eden, Emeritus Professor in Electrical Engineering at MIT, passed away on August 9, 2020, in Tucson, AZ. He was one week shy of his 100th birthday.


Eden was associated with MIT from 1959 until 1979; his groundbreaking body of work was split between MIT, Harvard Medical School, the National Institutes of Health, and the World Health Organization, and focused on pattern recognition and its application in medical image processing. The groups of researchers and students he collaborated with and mentored were responsible for foundational advances in medical technology, including the development of one of the first PET scanners and the first applications of wavelets to computed tomography. Eden also contributed to a dizzying number of collaborative efforts that marked the world in ways profound (during WWII, he helped produce Uranium-235 in the Princeton section of the Manhattan Project alongside then-student Richard Feynman) and picayune (he was responsible for the inclusion of numbers underneath every modern UPC code).


Eden’s career at MIT began in 1958, when he joined the Communications Biophysics Lab as an Associate Professor, becoming a Professor of Electrical Engineering in 1964. He cofounded the Cognitive Information Processing Group, served as House Master of Senior House, and co-edited and -authored the seminal work “Recognizing Patterns: Studies in Living and Automatic Systems”, published by MIT Press in 1968.


Eden’s influence on the seniors under his supervision was memorable, even a bit mischievous. Reflecting on his time in Senior House, lifelong friend Ken Kotovsky, now Professor Emeritus of psychology at Carnegie Mellon University, says, “[Eden] was a guide to a richer political world than growing up in the 1950s exposed me to, both the ideas and even a sort of activist stance toward the world. One example of the latter is about a lounge we wanted to put in the basement of our dorm. Murray accompanied us to a meeting with Institute administration where they explained that there wasn’t enough money in the housemaster’s budget to do the demolition and the renovation. Murray casually commented on the way back that in his day, students might have knocked down the walls themselves. That night, we did just that, leaving a floor covered with about two feet of rubble! Lest this suggests that Murray was a negative influence on us unruly undergrads, the dorm won the academic award for highest cumulative GPA that year—something that seemed to belong to the fraternity system up until then.”


Another of Eden’s early colleagues at MIT, Oleh Tretiak (now Professor Emeritus at Drexel University) said, “I was blessed by my association with Murray Eden. He served on my doctoral committee in 1963, and I continued working in the Cognitive Information Processing Group for 10 more years. This research collective, headed by Samuel Mason, William Schreiber, and Murray himself, initiated research in many aspects of Digital Image Processing, as well as in other areas such as Image Coding (now HDTV), sensory aids for the handicapped, and the study of human cognition. Murray and I started research on the analysis of microscopic images and published some of the earliest papers on Computer Tomography. In addition to research and teaching, Murray also involved me in the MIT student experience: as Housemaster of Senior House, he engaged me to become Senior Tutor at that dormitory. I enjoyed the stimulation and excitement of living with young people, fresh away from home and experiencing the excitement of the 1960’s.” 


Even within the context of that tumultuous decade, Eden was a particularly vocal and committed activist. One of the original members of the Union of Concerned Scientists, Eden was active in the anti-war movement during the Vietnam War, and introduced keynote speaker George Wald at the famous March 4, 1969 “Scientists Strike for Peace”, in which MIT faculty, students and staff brought research and teaching to a halt to protest institutional complicity in the war. By this point in his career, Eden was already a seasoned activist; as an undergraduate, he’d written for the City College of New York’s radical student newspaper, City College Campus, only quitting when he was pressured to join the Young Communist League. Looking for a group more in tune with his beliefs, he joined a secular Zionist student organization, Avukah, in which he made lifelong friends with noted linguists Zellig Harris and Noam Chomsky. Their association was wildly productive, guiding all three scholars to prescient interdisciplinary inquiry. Kotovsky remembers, “I took a course 6 class from [Eden], co-taught by Noam Chomsky, that was, in 1960-61, a harbinger of the cognitive science revolution that was to guide the rest of my academic life. It was a tour de force of interdisciplinary exploration, ranging from neuroscience to experimental psychology with many disciplines in between, and it led to my doing my senior thesis under Murray’s direction, even though I was not an electrical engineering student. The thesis was on using Claude Shannon’s information theory to model the transmission of information in visually presented words, based on word and trigram frequency. Its reach was typical of Murray’s breadth and openness, and it led to my subsequent career in cognitive science.”


In 1979, Eden accepted an offer to head NIH’s Biomedical Engineering and Instrumentation Program (BEIP); he received the Director’s Award in 1993. He also held visiting appointments as adjunct professor of electrical engineering at Johns Hopkins University and guest professor at the Ecole Polytechnique Federale de Lausanne (Switzerland), and acted as a consultant on research and development for the Director-General of the World Health Organization—work for which he was awarded the WHO Medical Society medal in 1983.


After retiring from NIH in 1994, Eden remained active with invited lectures and seminars on optical illusions, and as an adjunct professor in environmental health at the Boston University School of Public Health. A Life Fellow of the IEEE, Eden served for many decades as the Editor of Information and Control; additionally, he served as a member of the IEEE’s Advisory Board as well as the editorial board of IEEE Spectrum magazine. He spent his retirement between Cherryfield, ME, and Tucson, AZ, where he was the oldest member of the University of Arizona Community Chorus.


He leaves a brother, Dr. Alvin Eden, NYC, NY; five children, Abigail Eden of Cherryfield, ME; Susanna Eden of Tucson, AZ; Mark D. Eden of Taos, NM; Shirley H. McDaniel, Venice, FL; and John W. Hartle, Juneau, AK; and seven grandchildren.


With the family's permission, information from this published obituary was included in this report. Photo courtesy of Abby Eden.


EECS Welcomes Five New Faculty Members in Fiscal Year 2020


Top row, L to R: Oliver, Corrigan-Gibbs, Chen. Bottom row, L to R: Yan, Ragan-Kelley


2020 has seen the addition of many new faculty members, including five recent hires within EECS. Learn more about their fascinating research below.


Yufeng (Kevin) Chen joined the Department of Electrical Engineering and Computer Science as an assistant professor in January 2020. He received his Ph.D. in engineering science from Harvard University and his B.S. in applied physics from Cornell. He did postdoctoral research at Harvard University, leading to the development of small robots that are highly agile, multifunctional, and robust. His work has appeared in top journals including Science Robotics, Nature, and Nature Communications, among others. He has been a Forbes 30 Under 30 fellow. He investigates millimeter-scale biomechanics, distilling the underlying physical principles, and then applies these findings to enable novel functions in microrobots. He is also interested in developing novel soft actuators to enable agile and robust locomotion in microrobots.


Henry Corrigan-Gibbs joined the Department of Electrical Engineering and Computer Science as an assistant professor in July 2020. He received a Ph.D. and an M.S. in computer science from Stanford University and a B.S. in computer science from Yale University. His research interests are in computer security, cryptography, systems, and privacy. He has received the Eurocrypt Best Young Researcher Paper Award, the Caspar Bowden Award for Outstanding Research in Privacy-Enhancing Technologies, an IEEE Security and Privacy Distinguished Paper Award, a National Defense Science and Engineering Graduate (NDSEG) Fellowship, and an NSF Fellowship. Before coming to MIT, he was a postdoc at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. Previously he was an intern at Microsoft Research and the New York Times Interactive News Group, and he has also worked on computer-science projects in Ghana, Nepal, and Uganda.


William Oliver is a newly tenured associate professor working with the Quantum Information and Integrated Nanosystems Group at Lincoln Laboratory and the Engineering Quantum Systems Group at MIT, where he provides programmatic and technical leadership for programs related to the development of quantum and classical high-performance computing technologies for quantum information science applications. His interests include the materials growth, fabrication, design, and control of superconducting quantum processors, as well as the development of cryogenic packaging and control electronics involving cryogenic CMOS and single-flux quantum digital logic. He is director of the Center for Quantum Engineering and associate director of the Research Laboratory of Electronics.


Jonathan Ragan-Kelley joined the Department of Electrical Engineering and Computer Science as an assistant professor in January 2020. He received a Ph.D. and an S.M. in electrical engineering and computer science from MIT, and a B.S. in computer science from Stanford University. His research focuses on computer graphics, compilers, domain-specific languages, and high-performance systems. Among other honors, he has received research highlights in the Communications of the Association for Computing Machinery (CACM) journal, an Intel Foundation Ph.D. Fellowship, an NVIDIA Graduate Fellowship, an NSF Fellowship, and MIT’s William A. Martin Award for Best Master’s Thesis in Computer Science. Before coming to MIT, he was an assistant professor at the University of California at Berkeley, and he has taught at Stanford and MIT. He was a postdoc at Stanford, and has been a researcher, intern, or consultant for Google, Adobe, and Intel, among others.


Mengjia Yan joined the Department of Electrical Engineering and Computer Science as an assistant professor in November 2019. She received a Ph.D. and an M.S. in computer science from the University of Illinois at Urbana-Champaign and a B.S. in computer science from Zhejiang University in China. Her research interests are in computer architecture, focusing on hardware support for security. Among other honors, she was a selected participant for Rising Stars in EECS at MIT and for Rising Stars in Computer Architecture at the Georgia Institute of Technology in 2018. At UIUC, she was a Mavis Future Faculty Fellow, a distinction awarded to students planning careers as engineering professors, and she received the W.J. Poppelbaum Memorial Award for architecture design creativity. She also served as a research intern for the NVIDIA Architecture Research Group.


September 4, 2020


Asu Ozdaglar and Joel Voldman appointed to new chairs within EECS


L to R, Joel Voldman and Asu Ozdaglar


July saw two new chair appointments within the department’s leadership. Please join us in congratulating Asu Ozdaglar and Joel Voldman on their accomplishments, and learn more about the new chairs below.

Asu Ozdaglar has been appointed the inaugural MathWorks Professor of Electrical Engineering and Computer Science, effective July 1, 2020. Recognizing an outstanding senior faculty member, the inaugural professorship honors Professor Ozdaglar’s exceptional leadership and accomplishments.

Ozdaglar, a principal investigator at the Laboratory for Information and Decision Systems and Deputy Dean of Academics for the Schwarzman College of Computing, was named head of the Department of Electrical Engineering and Computer Science in January 2018. In 2018 she was also appointed the School of Engineering Distinguished Professor of Engineering, a professorship she held until June 2020.

A member of the MIT faculty since 2003, Ozdaglar earned her bachelor’s degree in electrical engineering from the Middle East Technical University in Ankara, Turkey, in 1996, and her SM and PhD degrees in electrical engineering and computer science from MIT in 1998 and 2003, respectively.

Ozdaglar, the 2014 recipient of the Spira Award for Excellence in Teaching, has made substantial educational contributions to MIT. She has developed a range of graduate and undergraduate courses, including a graduate-level game theory subject and an undergraduate course on networks that is jointly listed with the Department of Economics. She likewise served as a champion of curriculum innovations through her role in launching the new undergraduate major 6-14 (Computer Science, Economics and Data Science) and in creating Course 11-6 (Urban Science and Planning with Computer Science), a new program that offers students an opportunity to investigate some of the most pressing problems and challenges facing urban areas today. Ozdaglar also served as technical program co-chair of the Rising Stars in EECS career-development workshop in 2015.

In her role as Deputy Dean for Academics in the College of Computing, Ozdaglar is working on developing the Common Ground for Computing Education, an interdepartmental teaching collaborative that will facilitate the offering of computing classes and coordination of computing-related curricula across academic units.

Through her research, Professor Ozdaglar has made fundamental contributions to optimization theory, economic and social networked systems, and game theory. Her research in optimization ranges from convex analysis and duality to distributed methods for large-scale systems and optimization algorithms for machine learning. Her work has integrated analysis of social and economic interactions within the study of networks and spans many dimensions of these areas, including the analysis of learning and communication, diffusion and information propagation, influence and misinformation in social networks, and cascades and systemic risk in economic and financial systems.

Recently, her research on targeted interventions in networked and multi-risk SIR models, and their implications for reopening the economy while containing the spread of the pandemic, received funding through the C3.ai Digital Transformation Institute, a new consortium for which Ozdaglar also serves as MIT’s faculty lead.

Among many honors and achievements, Ozdaglar has received a Microsoft fellowship, the NSF CAREER Award, and the 2008 Donald P. Eckman Award of the American Automatic Control Council; in 2011, she was named a Kavli Fellow of the National Academy of Sciences.

Joel Voldman has been named the Clarence J. LeBel Professor, effective July 1, 2020. This chair appointment recognizes his pioneering research, excellent teaching and mentoring, and outstanding contributions to the department and MIT.

Joel is a principal investigator in the Research Laboratory of Electronics (RLE) and the Microsystems Technology Laboratories (MTL). He received the B.S. degree in electrical engineering from the University of Massachusetts, Amherst, in 1995. He received the M.S. and Ph.D. degrees in electrical engineering from MIT in 1997 and 2001, respectively. He did his postdoctoral research at Harvard Medical School before joining the MIT faculty in the Department of Electrical Engineering and Computer Science in 2002. Joel has been the Faculty Head of Electrical Engineering since January 2020. From 2018 to 2019, he was the Associate Department Head in EECS.

Joel is an exceptional educator. Since joining the EECS faculty in 2002, Joel has taught 6.01 (Introduction to EECS), 6.002 (Circuits & Electronics), 6.003 (Signals and Systems), 6.021 (Quantitative Physiology), and 6.777 (MEMS). In addition, he co-developed two introductory EECS courses. 6.03 (Introduction to EECS via Medical Technology) uses medical devices to introduce EECS concepts such as signal processing and machine learning, while 6.S08/6.08 (Interconnected Embedded Systems) uses the Internet of Things to introduce EECS concepts such as system partitioning, energy management, and hardware/software co-design.  This latter course has been immensely successful, drawing over 350 students in its offerings. He was also part of the team that substantially revised 6.002 in 2017.

Joel’s research has also made pioneering contributions to BioMEMS, applying microfabrication technology to illuminate biological systems, ranging from point-of-care diagnostics to fundamental cell biology to applied neuroengineering. His work develops microfluidic technology for biology and medicine, with an emphasis on cell sorting and immunology. He has developed a host of technologies to arrange, culture, and sort diverse cell types, including immune cells, endothelial cells, and stem cells. His group was the first to demonstrate microfluidic culture of pluripotent stem cells, and the first to show that such culture conditions could productively alter the local environment of the cells. His group also has developed the highest-performance device to pair and fuse cells, and used this to establish microfluidic systems for longitudinal monitoring of cell-cell interactions. By careful arrangement of fluids and electric fields, Joel was also able to develop the first microfluidic system capable of continuous separation of cells based specifically on their electrical properties, and has recently used this technology to monitor the immune system during sepsis.

Joel’s awards include a National Science Foundation (NSF) CAREER award, an American Chemical Society (ACS) Young Innovator Award, a Bose Fellow grant, MIT’s Jamieson Teaching Award, a Louis D. Smullin (’39) Award for Teaching Excellence from EECS, a Frank Quick Faculty Research Innovation Fellowship from EECS, the IEEE/ACM Best Advisor Award (2017 and 2019), election as an AIMBE Fellow, and awards for posters and presentations at international conferences.

The LeBel chair was created in 1967-1968 by a bequest from the late Mr. LeBel, who received SB and SM degrees from MIT in 1927, was a founder of the Audio Engineering Society, and served as its president in 1958. The chair was previously held by Professors Kenneth Stevens and Charles Sodini, and is currently also held by Professors Duane Boning and John Tsitsiklis.

September 16, 2020


Three new career development chairs appointed within EECS


L to R: Chen, Yan, Ragan-Kelley

This summer has seen three new career development chairs appointed within the EECS faculty. We congratulate Kevin Chen, Jonathan Ragan-Kelley, and Mengjia Yan on their achievements; you can learn more about the new chairs below.

Yufeng (Kevin) Chen, an assistant professor of EECS since January 2020, has been named the D. Reid Weedon, Jr. ’41 Career Development Professor. Kevin received his PhD in engineering science from Harvard and his BS in applied physics from Cornell. He did postdoctoral research at Harvard, leading to the development of small robots that are highly agile, multifunctional, and robust. Kevin focuses specifically on millimeter-scale robots, which have applications in search and rescue, environmental exploration, and more. In addition, these millimeter-scale microrobots have the potential to perform tasks (e.g., perching or walking on the surface of water) that are difficult for traditional robots, by exploiting the physics that dominate at the millimeter scale (electrostatics, surface tension). His work has appeared in top journals including Science Robotics, Nature, and Nature Communications, among others.

He has been a Forbes 30 Under 30 fellow. He investigates millimeter-scale biomechanics, distilling the underlying physical principles, and then applies these findings to enable novel functions in microrobots. He is also interested in developing novel soft actuators to enable agile and robust locomotion in microrobots.

The chair was established by a bequest from D. Reid and Barbara J. Weedon and is a wonderful expression of our gratitude for their dedication to the Institute.

Jonathan Ragan-Kelley, an assistant professor of EECS since January 2020, has been named the Esther and Harold E. Edgerton Career Development Assistant Professor. Jonathan received a PhD and an SM in EECS from MIT and a BS in computer science from Stanford University. His research focuses on computer graphics, compilers, domain-specific languages, and high-performance systems. Jonathan's best-known work is the computer graphics language Halide, which has become the industry-standard language for image processing. His earlier work, on the language Lightspeed, was used in producing many movies, and was even a finalist for a technical Academy Award.

Among other honors, he has received research highlights in the Communications of the Association for Computing Machinery (CACM) journal, an Intel Foundation PhD Fellowship, an NVIDIA Graduate Fellowship, an NSF Fellowship, and MIT’s William A. Martin Award for Best Master’s Thesis in Computer Science. Prior to MIT, Jonathan was an assistant professor at the University of California at Berkeley. He was a postdoc at Stanford, and has been a researcher, intern, or consultant for Google, Adobe, and Intel, among others.

The Edgerton Professorships were established in 1973 by the MIT Corporation to honor the late Professor and Mrs. Harold E. Edgerton. The Edgertons were a source of friendship and encouragement to students and young faculty members for more than half a century.

Mengjia Yan, an assistant professor of EECS since November 2019, has received the Homer A. Burnell Career Development Professorship. Mengjia received a PhD and an MS in computer science from the University of Illinois at Urbana-Champaign (UIUC) and a BS in computer science from Zhejiang University in China. Her research interests are in computer architecture, focusing on hardware support for security. Her research vision is to rethink computer architecture from the ground up for security. She proposed the first comprehensive hardware defense against speculative execution attacks in multiprocessor cache hierarchies, a design called InvisiSpec that makes speculative execution invisible. As a graduate student, she proposed InvisiSpec to Intel and received a three-year Intel Strategic Research Alliance (ISRA) Award.

Among other honors, she was a selected participant for Rising Stars in EECS at MIT and for Rising Stars in Computer Architecture at the Georgia Institute of Technology in 2018. At UIUC, she was a Mavis Future Faculty Fellow, a distinction awarded to students planning careers as engineering professors, and she received the W.J. Poppelbaum Memorial Award for architecture design creativity. She also served as a research intern for the NVIDIA Architecture Research Group.

This professorship was made possible by the bequest of Homer A. Burnell (1928) to support a junior faculty member.


September 16, 2020


Roundup: EECS faculty & lecturers awards and honors, 2020-2021



L to R: Nancy Lynch, Shafi Goldwasser

EECS professors are frequently recognized for excellence in teaching, research, service, and other areas. Following is an ongoing list of awards, prizes, medals, fellowships, memberships, grants, and other honors received by EECS faculty from September 2020 through August 2021.

Shafi Goldwasser, RSA Professor of Electrical Engineering and Computer Science, received an honorary degree from Tel Aviv University, which will be formally awarded in May 2021.

Nancy Lynch, NEC Professor of Software Science and Engineering, received the inaugural CONCUR Test-of-Time Award from the International Conference on Concurrency Theory (CONCUR) on September 2.

September 17, 2020


A legacy of curiosity in the name of Hugh Hampton Young

September 21, 2020

Hugh Hampton Young Fellows for 2020: (top row, l-r) Juncal Arbelaiz, Sarah Cen, Emily Hanhauser, and Stewart Isaacs; (bottom row, l-r) Kristy Johnson, Tse Yang Lim, Erin Rousseau, and George Varnavides.


Office of Graduate Education

American surgeon, urologist, and medical researcher Hugh Hampton Young (1870-1945) was known for leading with his curiosity. His inquisitive mind led him to explore the fields of aviation, the arts, and civic enhancement, though he is best known for innovation in medical science. The aim of the Hugh Hampton Young Fellowship, administered by MIT's Office of Graduate Education (OGE), is therefore to reward academic achievement across multiple disciplines and to honor students who possess exceptional character strengths. These students harbor outstanding potential to make a positive impact on humanity.

An anonymous donor established the fellowship in 1965 for the benefit of MIT graduate students, approximately 175 of whom have received funding over the past 55 years. Award recipients are selected by an external selection committee comprised of former Hugh Hampton Young Fellowship recipients and MIT alumni, who meet with fellowship finalists for interviews. "The committee is looking for individuals who give their curiosity free rein, allowing for a broad focus, as Hugh Hampton Young did," explains OGE Director of Graduate Fellowships Scott Tirrell. "Additionally, demonstrated leadership and initiative bring a candidate to the fore.”  

Indeed, the talent, drive, and multidisciplinary focus of the selected eight recipients should stand them in good stead among the ranks of former fellows, and provide a tribute to the storied innovator. 

Juncal Arbelaiz

Juncal Arbelaiz is a PhD candidate in applied mathematics, with a background in engineering. In her research, she draws inspiration from nature and biological cybernetics to inform novel optimal decentralized architectures for control and estimation of large-scale and spatially-distributed dynamic systems. She also addresses how autonomous systems can effectively handle uncertainty, and the role that the statistical properties of the noise present in the system dynamics and sensing play in the information requirements of control and estimation strategies. From a fundamental viewpoint, her research contributes to understanding the performance-decentralization trade-off in large-scale systems. From an engineering perspective, her contributions guide the design of algorithms for autonomous machines. She deeply enjoys multidisciplinary research, and particularly exploiting mathematical insight to solve complex technical problems. 

Sarah Cen 

Algorithms have the potential to offer widespread benefits, but they can also produce harmful trickle-down effects, such as the amplification of fake news on social media or the propagation of biases in advertising. Many view these effects as necessary evils, raising the question: What if the assumed trade-off between performance and social responsibility is false? In many cases, no such trade-off exists, making it feasible to design socially responsible algorithms without sacrificing performance. In designing these algorithms in the Department of Electrical Engineering and Computer Science, Cen considers exogenous factors (e.g., legal norms and financial stakeholders) that are critical to enforceability. She also proves that the algorithms satisfy important properties in order to prevent undesirable corner cases from being discovered after deployment. Her group's most recent work shows that it is possible to regulate content filtering on social media in a way that is consistent with principles of online governance while imposing little to no penalty on the platform.

Emily Hanhauser 

Emily is a PhD candidate in mechanical engineering (MechE). Her research centers on microtechnologies for the isolation and detection of pathogens. She seeks to use understanding of microscale transport processes to design and implement rapid, cost-effective and widely deployable biological analyses. This work combines training from both her BS in biology from University of Wisconsin-Madison and MS in MechE at MIT. Prior to MIT, Hanhauser worked on potential cures for HIV and microfluidic devices. This experience catalyzed her interest in equitable health technologies and her desire to work at the intersection of biology and engineering. Outside of research, she is a member of MechE Resources for Easing Friction and Stress (REFS) and a graduate resident assistant at the undergraduate residence New House. 

Stewart Isaacs 

As a PhD candidate in aeronautics and astronautics, Isaacs quantifies the economic and climate costs of using electrofuels for transportation. Electrofuels are a type of low-carbon transportation fuel derived from renewable energy, water, and atmospheric carbon that can work in existing engine technology to reduce life cycle emissions. He also quantifies the impact of dust and other aerosols on solar energy generation in West Africa.

Kristy Johnson 

Kristy Johnson is a PhD candidate in the Affective Computing research group at MIT, where she works at the intersection of neuroscience, engineering, and autism. In particular, she focuses on science and technology to improve the lives of individuals with complex neurodevelopmental differences, especially those with nonverbal autism and intellectual disabilities. Her research centers on personalized, naturalistic study paradigms; strengths-based research questions; and translational work extending from the lab to daily life. She combines techniques ranging from deep brain stimulation and fMRI neuroimaging to wearable biosensors and human-centered AI systems. Her most recent work has developed personalized machine learning models to interpret non-speech vocalizations from nonverbal individuals. This research connects fundamental questions (“What is verbal communication?”) to real-world solutions that can enrich the lives of neurodiverse individuals. 

Tse Yang Lim 

Tse Yang Lim is a PhD candidate in the System Dynamics group at the MIT Sloan School of Management, and an Oak Ridge Institute for Science and Education fellow at the U.S. Food and Drug Administration, where he is developing a systems model of the U.S. opioid crisis to help guide and inform government policy-making. He also currently works on modeling Covid-19 across multiple countries to estimate the true extent of the pandemic. His previous work has addressed inter-organizational learning and coordination in sustainable development practice, focusing on the United Nations. Fundamentally, he is interested in translating knowledge into action to advance human dignity and social resilience, while avoiding negative unintended consequences. Outside of research, he has been involved in campus organizing with Fossil Free (now Divest) MIT and the MIT Day of Action. He holds a BS in biology and a master’s in environmental management from Yale University. 

Erin Rousseau

Rousseau creates technologies to enable investigation and treatment of neurological disease in the Health Sciences and Technology program. Addiction, or substance use disorder, is partially mediated by small proteins known as neuropeptides. Until recently, studying neuropeptides has been difficult due to their low concentrations in the brain and device failure in biological systems. Her research aims to tackle these problems through the creation of a chronically implantable microfluidic device coupled to powerful analytical techniques. These devices will give a clearer picture of the changing protein landscape in both health and disease states and will lead to new treatment paradigms for people with substance use disorder.

George Varnavides 

Georgios Varnavides is a PhD candidate in the Department of Materials Science and Engineering, focused on nonequilibrium carrier transport. As a condensed matter theorist in a neural engineering lab, Varnavides is fascinated by the fundamental science mysteries such phenomena present at the nanoscale and hopeful his research will advance the Bioelectronics lab’s mission to understand and treat brain disorders. Constantly humbled by his experimental collaborators, he is excited to be part of one of the first teams to directly image nonequilibrium carrier transport using advanced characterization. Outside of research, Varnavides has a keen interest in the role computation and visualization can play in pedagogy, co-teaching several undergraduate courses, leading a materials-inspired generative-art workshop for MIT's Independent Activities Period, and coordinating a data visualization challenge within his department.

Original article published on the MIT News website on Sept. 15, 2020.


Helping robots avoid collisions

September 22, 2020

The startup Realtime Robotics is helping robots solve the motion planning problem by giving them collision avoidance capabilities. Here, a robot avoids a researcher’s waving hand. Courtesy of Realtime Robotics

MIT News Office

George Konidaris still remembers his disheartening introduction to robotics.

“When you’re a young student and you want to program a robot, the first thing that hits you is this immense disappointment at how much you can’t do with that robot,” he says.

Most new roboticists want to program their robots to solve interesting, complex tasks — but it turns out that just moving them through space without colliding with objects is more difficult than it sounds.

Fortunately, Konidaris is hopeful that future roboticists will have a more exciting start in the field. That’s because roughly four years ago, he co-founded Realtime Robotics, a startup that’s solving the “motion planning problem” for robots.

The company has invented a solution that gives robots the ability to quickly adjust their path to avoid objects as they move to a target. The Realtime controller is a box that can be connected to a variety of robots and deployed in dynamic environments.

“Our box simply runs the robot according to the customer’s program,” explains Konidaris, who currently serves as Realtime’s chief roboticist. “It takes care of the movement, the speed of the robot, detecting obstacles, collision detection. All [our customers] need to say is, ‘I want this robot to move here.’”

Realtime’s key enabling technology is a unique circuit design that, when combined with proprietary software, has the effect of a plug-in motor cortex for robots. In addition to helping to fulfill the expectations of starry-eyed roboticists, the technology also represents a fundamental advance toward robots that can work effectively in changing environments.

Helping robots get around

Konidaris was not the first person to get discouraged about the motion planning problem in robotics. Researchers in the field have been working on it for 40 years. During a four-year postdoc at MIT, Konidaris worked with School of Engineering Professor in Teaching Excellence Tomas Lozano-Perez, a pioneer in the field who was publishing papers on motion planning before Konidaris was born.

Humans take collision avoidance for granted. Konidaris points out that the simple act of grabbing a beer from the fridge actually requires a series of tasks such as opening the fridge, positioning your body to reach in, avoiding other objects in the fridge, and deciding where to grab the beer can.

“You actually need to compute more than one plan,” Konidaris says. “You might need to compute hundreds of plans to get the action you want. … It’s weird how the simplest things humans do hundreds of times a day actually require immense computation.”

In robotics, the motion planning problem revolves around the computational power required to carry out frequent tests as robots move through space. At each stage of a planned path, the tests help determine if various tiny movements will make the robot collide with objects around it. Such tests have inspired researchers to think up ever more complicated algorithms in recent years, but Konidaris believes that’s the wrong approach.
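Each of those tests is simple and independent, which is what makes the problem amenable to parallel hardware. The sketch below, with a point robot and circular obstacles as deliberately simplified geometry, discretizes one candidate motion into tiny steps and checks every step against every obstacle; it illustrates the concept rather than Realtime’s actual implementation.

```python
import numpy as np

obstacles = [((3.0, 2.0), 1.0), ((6.0, 5.0), 1.5)]   # (center, radius) pairs

def motion_is_safe(start, goal, steps=100):
    # Discretize the motion and test every intermediate pose for collision.
    # The tests are independent, so hardware could evaluate them in parallel.
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    for t in np.linspace(0.0, 1.0, steps):
        pose = (1 - t) * start + t * goal
        for center, radius in obstacles:
            if np.linalg.norm(pose - np.asarray(center)) <= radius:
                return False             # this tiny movement collides
    return True

print(motion_is_safe((0, 0), (8, 8)))   # False: the diagonal grazes an obstacle
print(motion_is_safe((0, 0), (0, 8)))   # True: clear of both obstacles
```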

“People were trying to make algorithms smarter and more complex, but usually that’s a sign that you’re going down the wrong path,” Konidaris says. “It’s actually not that common that super technically sophisticated techniques solve problems like that.”

Konidaris left MIT in 2014 to join the faculty at Duke University, but he continued to collaborate with researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Duke is also where Konidaris met Realtime co-founders Sean Murray, Dan Sorin, and Will Floyd-Jones. In 2015, the co-founders collaborated to make a new type of computer chip with circuits specifically designed to perform the frequent collision tests required to move a robot safely through space. The custom circuits could perform operations in parallel to test short motions for collisions more efficiently.

“When I left MIT for Duke, one thing bugging me was this motion planning thing should really be solved by now,” Konidaris says. “It really did come directly out of a lot of experiences at MIT. I wouldn’t have been able to write a single paper on motion planning before I got to MIT.”

The researchers founded Realtime in 2016 and quickly brought on robotics industry veteran Peter Howard MBA ’87, who currently serves as Realtime’s CEO and is also considered a co-founder.

“I wanted to start the company in Boston because I knew MIT and a lot of robotics work was happening there,” says Konidaris, who moved to Brown University in 2016. “Boston is a hub for robotics. There’s a ton of local talent, and I think a lot of that is because MIT is here — PhDs from MIT became faculty at local schools, and those people started robotics programs. That network effect is very strong.”

Removing robot restraints

Today the majority of Realtime’s customers are in the automotive, manufacturing, and logistics industries. The robots using Realtime’s solution are doing everything from spot welding to making inspections to picking items from bins.

After customers purchase Realtime’s control box, they load in a file describing the configuration of the robot’s work cell, information about the robot such as its end-of-arm tool, and the task the robot is completing. Realtime can also help optimally place the robot and its accompanying sensors around a work area. Konidaris says Realtime can shorten the process of deploying robots from an average of 15 weeks to one week.

Once the robot is up and running, Realtime’s box controls its movement, giving it instant collision-avoidance capabilities.

“You can use it for any robot,” Konidaris says. “You tell it where it needs to go and we’ll handle the rest.”

Realtime is part of MIT’s Industrial Liaison Program (ILP), which helps companies make connections with larger industrial partners, and it recently joined ILP’s STEX25 startup accelerator.

With a few large rollouts planned for the coming months, the Realtime team’s excitement is driven by the belief that solving a problem as fundamental as motion planning unlocks a slew of new applications for the robotics field.

“What I find most exciting about Realtime is that we are a true technology company,” says Konidaris. “The vast majority of startups are aimed at finding a new application for existing technology; often, there’s no real pushing of the technical boundaries with a new app or website, or even a new robotics ‘vertical.’ But we really did invent something new, and that edge and that energy is what drives us. All of that feels very MIT to me."



Regina Barzilay wins $1m Association for the Advancement of Artificial Intelligence Squirrel AI Award



Professor Regina Barzilay of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL)

By Adam Conner-Simons, CSAIL

For more than 100 years, Nobel Prizes have been given out annually to recognize breakthrough achievements in chemistry, literature, medicine, peace, and physics. As these disciplines undoubtedly continue to impact society, newer fields like artificial intelligence (AI) and robotics have also begun to profoundly reshape the world.

In recognition of this, the world’s largest AI society, the Association for the Advancement of Artificial Intelligence (AAAI), today announced the winner of its new Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity, a $1 million award given to honor individuals whose work in the field has had a transformative impact on society.

The recipient, professor Regina Barzilay of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is being recognized for her work developing machine learning models to discover antibiotics and other drugs, and to detect and diagnose breast cancer at early stages.

In February, AAAI will officially present Barzilay with the award, which comes with an associated prize of $1 million provided by the online education company Squirrel AI.

“Only world-renowned recognitions such as the Association of Computing Machinery’s A.M. Turing Award, and the Nobel Prize carry monetary rewards at the million-dollar level,” says AAAI’s Past-President and Awards Committee Chair Yolanda Gil. “This award aims to be unique in recognizing the positive impact of artificial intelligence for humanity.” 

Barzilay has conducted research on a range of topics in computer science, from explainable machine learning to deciphering dead languages. Since surviving breast cancer in 2014, she has increasingly focused her efforts on healthcare. She created algorithms for early breast cancer diagnosis and risk assessment that have been tested at multiple hospitals around the globe, including in Sweden and Taiwan, and at Boston’s Massachusetts General Hospital. She is now working with breast cancer organizations such as Institute Protea in Brazil to make her diagnostic tools available for underprivileged populations around the world. (She realized from doing her work that, if a system like hers had existed at the time, her doctors could have detected her cancer two or three years earlier.)

In parallel, she has been working on developing machine learning models for drug discovery: with collaborators she’s created models for selecting molecule candidates for therapeutics that have been able to speed up drug development, and last year helped discover a new antibiotic called halicin that was shown to kill many species of antibiotic-resistant, disease-causing bacteria, including Acinetobacter baumannii and Clostridium difficile (“C. diff”).

“Through my own life experience, I came to realize that we can create technology that can alleviate human suffering and change our understanding of diseases,“ says Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science at MIT and a member of the Koch Institute for Integrative Cancer Research. “I feel lucky to have found collaborators who share my passion and who have helped me realize this vision.”

Barzilay also serves as a member of MIT’s Institute for Medical Engineering and Science, and as faculty co-lead for MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health. One of the Jameel Clinic’s most recent efforts is “AI Cures,” a cross-institutional initiative focused on developing affordable COVID-19 antivirals.

“Regina has made truly game-changing breakthroughs in imaging breast cancer and predicting the medicinal activity of novel chemicals,” says MIT biology professor Phillip Sharp, a Nobel laureate who has served as director of both the McGovern Institute for Brain Research and what’s now the Koch Institute. “I am honored to have as a colleague someone who is such a pioneer in using deeply creative machine learning methods to transform the fields of healthcare and biological science.”

Barzilay joined the MIT faculty in 2003 after earning her undergraduate degree at Ben-Gurion University of the Negev in Israel and her PhD at Columbia University. She is also the recipient of a MacArthur “genius grant,” the National Science Foundation CAREER Award, a Microsoft Faculty Fellowship, multiple “best paper” awards in her field, and MIT’s Jamieson Award for excellence in teaching.

"We believe AI advances will benefit a great many fields, from healthcare and education to smart cities and the environment," says Derek Li, founder and chairman of Squirrel AI. “We believe that Dr. Barzilay and other future awardees will inspire the AI community to continue to contribute to and advance AI’s impact on the world.”

AAAI’s Gil says the organization was very excited to partner with Squirrel AI for this new award to recognize the positive impacts of artificial intelligence “to protect, enhance, and improve human life in meaningful ways.” With more than 300 elected fellows and 6,000 members from 50 countries across the globe, AAAI is the world’s largest scientific society devoted to artificial intelligence.  Its officers have included many AI pioneers such as Allen Newell and John McCarthy. AAAI confers several influential AI awards including the Feigenbaum Prize, the Newell Award (jointly with ACM), and the Engelmore Award. 

“Regina has been a trailblazer in the field of healthcare AI by asking the important questions about how we can use machine learning to treat and diagnose diseases,” says Daniela Rus, director of CSAIL and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science. “She has been both a brilliant researcher and a devoted educator, and all of us at CSAIL are so inspired by her work and proud to have her as a colleague.” 

September 23, 2020


Provably exact artificial intelligence for nuclear and particle physics

September 28, 2020


Artist’s impression of the machine learning architecture that explicitly encodes gauge symmetry for a 2D lattice field theory. Image courtesy of the MIT-DeepMind collaboration.

Researchers from the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have discovered a new way to manufacture human red blood cells (RBCs) that cuts culture time in half compared to existing methods and uses novel sorting and purification techniques that are faster, more precise, and less costly.

Blood transfusions save millions of lives every year, but over half the world’s countries do not have sufficient blood supply to meet their needs. The ability to manufacture RBCs on demand, especially universal donor blood (type O negative), would significantly benefit those in need of transfusion for conditions like leukemia by circumventing the need for large-volume blood draws and difficult cell isolation processes.

Easier and faster manufacturing of RBCs would also have a significant impact on blood banks worldwide and reduce dependence on donor blood, which has a higher risk of infection. It is also critical for disease research, such as malaria, which affects over 220 million people annually, and can even enable new and improved cell therapies.

However, manufacturing RBCs is time-consuming and creates undesirable by-products, and current purification methods are costly and not optimal for large-scale therapeutic applications. SMART’s researchers have therefore designed an optimized intermediate cryogenic storage protocol that reduces the post-thaw cell culture time to 11 days, eliminating the need for a continuous 23-day manufacturing run. This is aided by complementary technologies the team developed for highly efficient, low-cost RBC purification and more targeted sorting.

In a paper titled “Microfluidic label-free bioprocessing of human reticulocytes from erythroid culture,” recently published in the journal Lab on a Chip, the researchers describe the significant technical advances they have made toward improving RBC manufacturing. The study was carried out by researchers from two of SMART’s Interdisciplinary Research Groups (IRGs) — Antimicrobial Resistance (AMR) and Critical Analytics for Manufacturing Personalised-Medicine (CAMP) — co-led by principal investigators Jongyoon Han, a professor of electrical engineering and computer science and of biological engineering at MIT, and Peter Preiser, a professor at Nanyang Technological University (NTU). The team also included AMR and CAMP IRG faculty appointed at the National University of Singapore and NTU.

“Traditional methods for producing human RBCs usually require 23 days for the cells to grow, expand exponentially, and finally mature into RBCs,” says Kerwin Kwek, lead author of the paper and senior postdoc at SMART CAMP. “Our optimized protocol stores the cultured cells in liquid nitrogen on what would normally be Day 12 in the typical process, and upon demand thaws the cells and produces the RBCs within 11 days.”

The researchers also developed novel purification and sorting methods by modifying two existing techniques, Dean flow fractionation (DFF) and deterministic lateral displacement (DLD): a microfluidic chip with a trapezoidal cross-section for DFF sorting, and a unique sorting system built around an inverse L-shaped pillar structure for DLD sorting.

SMART’s new sorting and purification techniques using the modified DFF and DLD methods leverage the RBCs’ size and deformability for purification, rather than spherical size alone. Because most human cells are deformable, the technique can have wide biological and clinical applications, such as cancer cell and immune cell sorting and diagnostics.

When tested, the purified RBCs were found to retain their cellular functionality, as demonstrated by high malaria parasite infectivity; the parasite requires highly pure and healthy cells for infection. This confirms that SMART’s new RBC sorting and purification technologies are well suited to investigating malaria pathology.

Compared with conventional cell purification by fluorescence-activated cell sorting, SMART’s enhanced DFF and DLD methods offer comparable purity while processing at least twice as many cells per second at less than a third of the cost. In scaled-up manufacturing, DFF is better suited because of its high volumetric throughput, whereas in cases where cell purity is pivotal, DLD’s higher precision is most advantageous.

“Our novel sorting and purification methods result in significantly faster cell processing time and can be easily integrated into current cell manufacturing processes. The process also does not require a trained technician to perform sample handling procedures and is scalable for industrial production,” Kwek continues.

The results of this research promise to give scientists faster access to final cell products that are fully functional, with high purity, at a reduced cost of production.

SMART was established by MIT in partnership with the National Research Foundation of Singapore (NRF) in 2007. SMART is the first entity in the Campus for Research Excellence and Technological Enterprise (CREATE) developed by NRF. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five IRGs: AMR, CAMP, Disruptive and Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems.

SMART research is funded by the NRF under the CREATE program.

The AMR IRG is a translational research and entrepreneurship program that tackles the growing threat of antimicrobial resistance. By leveraging talent and convergent technologies across Singapore and MIT, the group develops multiple innovative and disruptive approaches to identify, respond to, and treat drug-resistant microbial infections. Through strong scientific and clinical collaborations, it provides transformative, holistic solutions for Singapore and the world.

 

 


Closing the GAAP: a new mentorship program encourages underrepresented students in the final stretch of their academic marathon.


Photo credit: Unsplash and the WOCinTech stock OSS Creative Commons photo collection

Many departmental diversity initiatives “start small,” but in MIT’s Department of Electrical Engineering and Computer Science, thinking small was never an option. With more than 700 enrolled doctoral students, MIT EECS is one of the largest research departments in the United States. Despite the department’s size, only 5% of EECS doctoral students are underrepresented minority students and only 22% are women, according to MIT Institutional Research and the MIT Registrar.

Now, with the launch of the Graduate Application Assistance Program (GAAP), a determined group of graduate students is hoping to boost those numbers. “We wanted to know what we could do, as students, to boost diversity from a graduate admissions standpoint, so we started discussing how to launch an assistance program,” said Madeleine Laitz, a fourth-year PhD candidate in EECS. “The issue we encountered when trying to translate it to EECS is the sheer size of the department.”

With over 3,600 applicants to the graduate program last year, the students behind GAAP knew they had to narrow their focus to a specific, achievable goal. “The responsibility of making the department reflect the diversity of amazing EECS students in the world should not fall on us as graduate students; but as a small piece of the puzzle, we’re focusing on what we can control, and the outcomes that we can make possible,” said Micah Smith, GAAP organizer and fifth-year PhD student in computer science in the Data to AI Lab. The resulting GAAP initiative will be one of the largest student-run graduate mentorship programs in the United States in its first year, pairing nearly 100 committed mentors with as many as 200 applicants from underrepresented backgrounds. Applicants and their mentors will work closely together throughout the fall semester to refine their applications. As the application deadline approaches, GAAP will also offer a series of office hours for participants putting the finishing touches on their application materials.

Those application materials require a lot of thought. “If you’re applying to grad school, it may seem simple at first,” says Smith. “You send in materials including personal and research statements, plus your CV and transcript. You also need letters of recommendation sent out on your behalf. Those are relatively few materials, but for that reason each one has really high importance, so you have to be very careful and deliberate in order to make each material stand out.” The process is complicated by issues of access and power. “Some applicants might be coming from elite schools where they worked with really well-respected professors, or they might have had work experience at top tech companies with a network of coworkers who’ve gone through PhD programs, so they have a lot of background on how to craft those materials. But many applicants don’t have that extensive support from their network,” Smith explains.

GAAP will work to demystify the process and help make each application count. “We hope a conversation will happen about organization, storytelling, impact,” says Laitz. “Every word counts; the structure of it counts; the way the story is presented and told is a huge differentiator. Someone on the inside, who understands what the admissions committee hopes to see and what the grad student is expected to contribute, can help bring those qualities to the fore of the application.”

Each of the hundred GAAP mentors will receive extensive training on how to productively and confidentially critique application and research statements, as well as guidance on the wide variety of resources available to graduate school applicants, such as fee waivers. “We also want to support students who aren’t as early along in that process,” Smith points out. “We want to have conversations about what are your plans for grad school, what do you look for in an advisor. So, in addition to working with the application materials, we want them to coach people to have the best possible experience.” The GAAP program is designed to increase applicants’ confidence and encourage them to find the right program match for them, either within the Institute or outside it. “Hopefully, they’ll go to a grad school program within MIT; but if they end up at a similar program within another school, and feel really good about it, that’s a win, too,” says Smith.

GAAP is a concerted effort made possible with the help of multiple student groups and offices within MIT, including THRIVE at EECS, the EECS Graduate Students Association, Graduate Women in Course 6, and the EECS Communication Lab, with support from the EECS Graduate Office and the Committee on Diversity, Equity and Inclusion. Eligible applicants can learn more about the initiative and sign up for GAAP through November 8, 2020.

 

September 29, 2020


A Circuit Board With A View: oscilloscopes from Keysight Technologies enable new insights in EECS teaching labs


A recent donation from Keysight Technologies includes 120 new oscilloscopes in two cutting-edge models.  

When MIT’s in-person labs once again fill, the students of EECS will have a fascinating new set of tools at their fingertips. Thanks to a generous equipment donation from Keysight Technologies, the department has recently completed a renovation of its teaching labs with 120 new oscilloscopes whose advanced features will make exciting new experiments possible. “We used to have to transfer data from scopes to computers to analyze, but these scopes are so astonishing that you can sit and analyze the data while you’re working on it,” says Steven Leeb, professor of electrical engineering and associate director of the Research Laboratory of Electronics. “They also have secure internet connectivity, so they’re perfect for lecturing and teaching.”

If you aren’t one of Leeb’s students, a primer might be helpful. “An oscilloscope is, essentially, a TV for electronic wave forms,” Leeb explains. “When an oscilloscope’s probe tips touch the metal on a circuit board, they act in the same way as an EKG reading the electrical signals from your heart, and plot the waveform as a function of time.”

But the new oscilloscopes offer more than just display. “These scopes come with a thousand amazing built-in functions to give all sorts of figures and plots, for example a fast Fourier transform, which will tell you all the sine waves in a given signal,” Leeb explains. This type of function is critical for pulling apart the discrete frequencies involved in complex electronic signals. Leeb analogizes those complex signals to orchestral sound: “A whole pile of musical instruments can make a middle C, but they don’t sound the same because while they all have the same base frequency of middle C, they add on different sine waves. Say you have a tuning fork and an oboe. While the tuning fork has the purest and simplest sine wave, the oboe adds all kinds of different higher frequencies. Within the human voice, a whistle is a pure sine wave, but a full-throated ‘ahh’—even at the same note—adds other features. These scopes will expose those features; they can figure out all the different sine waves going on so you can isolate and recreate the signal.” One teaching experiment involves the creation of a small circuit board “piano,” played by touching oscilloscope probes to its individual keys.
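
Leeb’s orchestral analogy maps directly onto what the scopes’ FFT function computes. As a minimal sketch in Python (an illustration with assumed parameters, not code from the teaching labs), here is a composite tone decomposed back into its component sine waves:

    import numpy as np

    fs = 8000                      # assumed sampling rate, in Hz
    t = np.arange(0, 1.0, 1 / fs)  # one second of samples

    # Middle C is ~261.6 Hz; we round to 262 Hz so each tone lands exactly on
    # an FFT bin. A "tuning fork" is the pure fundamental; an "oboe-like" tone
    # stacks weaker sine waves at harmonics of the same fundamental.
    f0 = 262.0
    tone = (1.00 * np.sin(2 * np.pi * f0 * t)
            + 0.50 * np.sin(2 * np.pi * 2 * f0 * t)
            + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

    # The FFT exposes the individual sine waves hiding in the composite signal.
    spectrum = np.abs(np.fft.rfft(tone))
    freqs = np.fft.rfftfreq(len(tone), d=1 / fs)

    strongest = np.sort(freqs[np.argsort(spectrum)[-3:]])
    print(strongest)  # [262. 524. 786.]: the fundamental plus two harmonics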

Here, a Keysight InfiniiVision DSOX4154A Digital Storage Oscilloscope displays a Lissajous figure, created when one slower and one faster sine wave are plotted against each other in XY mode. At bottom, the student circuit board “playing” the sine waves is connected to the oscilloscope using two probes.
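
For readers without a bench scope at hand, the same figure is easy to reproduce in software. A hypothetical Python sketch (the 3:5 frequency ratio and the phase offset are arbitrary choices, not the values used in the lab):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 1, 5000)
    x = np.sin(2 * np.pi * 3 * t)              # slower sine wave, X channel
    y = np.sin(2 * np.pi * 5 * t + np.pi / 4)  # faster sine wave, Y channel

    # Plotting one channel against the other (the scope's XY mode) traces a
    # Lissajous figure whose lobes reflect the frequency ratio.
    plt.plot(x, y, linewidth=0.8)
    plt.xlabel("channel 1 (X)")
    plt.ylabel("channel 2 (Y)")
    plt.title("Lissajous figure, 3:5 frequency ratio")
    plt.show()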

“These oscilloscopes are a fantastic upgrade for us,” says Manuel Gutierrez, an EECS PhD student in Prof. Leeb’s Electromechanical Systems Group, who has been designing new lesson plans for undergraduates just beginning to delve into electrical engineering. “The large touch screens make it easy for us to interact with and interpret waveforms, and added features like the built-in dual-channel waveform generators make them highly versatile.”

The students are especially likely to appreciate the scopes’ capabilities when they are challenged to begin building their own electronic circuits, such as oscillators. “Oscillators are a staple of electrical engineering art—it would be difficult to make cellphones, computers, synthesizers, and lots of other products we love without them,” explains Leeb. Like a pendulum, an oscillator swings back and forth at a given rate—its frequency, measured in hertz—powering electronics much as a heartbeat powers a human body.

“Among many other advanced capabilities, these scopes create Bode plot graphs, which are a staple that students use to make oscillators,” says Leeb. “You can do a bunch of math to predict where a resonance is, but the scope helps you identify and verify that peak, which matters because even after you do all the math, you’re not 100% sure what’s going on until you check to ensure your model was a good one.” Thomas Krause, another EECS graduate student working with Leeb, finds that teaching is more effective with the scopes: “The scopes help emphasize what issues can come about when you build electrical circuits. We […] don’t really think about how construction or other unmodeled effects can impact circuit performance. A good scope visualizes the impact and makes this point very clear.”
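
The workflow Leeb describes, predict the resonance with math and then verify the peak, can also be mimicked numerically. A minimal sketch using SciPy, with an assumed second-order resonant system standing in for a student’s oscillator circuit:

    import numpy as np
    from scipy import signal

    # Assumed second-order system, H(s) = w0^2 / (s^2 + (w0/Q)s + w0^2);
    # the math predicts a magnitude peak near the resonant frequency w0.
    w0 = 2 * np.pi * 1e3  # 1 kHz resonance (an arbitrary choice for this sketch)
    Q = 10                # quality factor: higher Q gives a sharper peak

    system = signal.TransferFunction([w0**2], [1, w0 / Q, w0**2])
    w, mag, phase = signal.bode(system, w=np.logspace(2, 6, 1000))

    # Like reading the peak off the scope's Bode plot, locate it numerically
    # to verify the prediction.
    f_peak = w[np.argmax(mag)] / (2 * np.pi)
    print(f"resonant peak near {f_peak:.0f} Hz")  # ~1 kHz, as predicted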

Here, a Keysight DSOX1204G displays a Bode plot. The “electronic pendulum” creating the graphed peak is at bottom.

Gutierrez agrees that seeing the circuit board in action makes learning more visceral than a purely theoretical approach to circuitry: “We spend a lot of time teaching students how best to debug their circuits. There are many techniques that can be taught to help find and fix issues, but almost all of them rely on an oscilloscope. With this crucial equipment, we can visualize clearly what is going on in our circuits.” Even for advanced students like Gutierrez and Krause, the Keysight scopes have provided a new perspective on the finer details of circuitry. Krause notes, “I’ve been lucky to use one of the higher performance models, and it is the jewel of my workbench. The impressive technical specs of the scope such as bandwidth and update rate help me see properties of electrical signals that I might miss otherwise. This helps visualize circuit performance and troubleshoot issues.”

The Keysight gift is the latest in a long-standing tradition of generosity: in 2013, Agilent (a corporate predecessor of Keysight) donated 100 oscilloscopes to the student labs at MIT, enabling half the labs to be refitted. Under the continued guidance of Chief Technology Officer Jay Alexander, the company has now contributed 40 Keysight InfiniiVision DSOX4154A scopes and 80 Keysight DSOX1204G scopes, valued at approximately $1.2 million, thus completing the upgrade across all EECS teaching labs. This state-of-the-art equipment means better preparation for the students starting out on their EECS journey—and the scientists and engineers they will become.

October 5, 2020


Generating photons for communication in a quantum computing system

October 8, 2020

Entangled pairs of photons are generated by and propagate away from qubits placed along a waveguide.
Image credit: Sampson Wilcox

Michaela Jarvis | Research Laboratory of Electronics

 

MIT researchers using superconducting quantum bits connected to a microwave transmission line have shown how the qubits can generate on demand the photons, or particles of light, necessary for communication between quantum processors.

The advance is an important step toward achieving the interconnections that would allow a modular quantum computing system to perform operations at rates exponentially faster than classical computers can.

“Modular quantum computing is one technique for reaching quantum computation at scale by sharing the workload over multiple processing nodes,” says Bharath Kannan, MIT graduate fellow and first author of a paper on this topic published today in Science Advances. “These nodes, however, are generally not co-located, so we need to be able to communicate quantum information between distant locations.”

In classical computers, wires are used to route information back and forth through a processor during computation. In a quantum computer, the information itself is quantum mechanical and fragile, requiring new strategies to simultaneously process and communicate information. 

“Superconducting qubits are a leading technology today, but they generally support only local interactions (nearest-neighbor or qubits very close by). The question is how to connect to qubits that are at distant locations,” says William Oliver, an associate professor of electrical engineering and computer science, MIT Lincoln Laboratory fellow, director of the Center for Quantum Engineering, and associate director of the Research Laboratory of Electronics. “We need quantum interconnects, ideally based on microwave waveguides that can guide quantum information from one location to another.”

That communication can occur via the microwave transmission line, or waveguide, as the excitations stored in the qubits generate photon pairs, which are emitted into the waveguide and then travel to two distant processing nodes. The identical photons are said to be “entangled,” acting as one system. As they travel to distant processing nodes, they can distribute that entanglement throughout a quantum network.

“We generate the entangled photons on demand using the qubits and then release the entangled state to the waveguide with very high efficiency, essentially unity,” says Oliver.

The research reported in the Science Advances paper utilizes a relatively simple technique, Kannan says.

“Our work presents a new architecture for generating photons that are spatially entangled in a very simple manner, using only a waveguide and a few qubits, which act as the photonic emitters,” says Kannan. “The entanglement between the photons can then be transferred into the processors for use in quantum communication or interconnection protocols.”

While the researchers said they have not yet implemented those communication protocols, their ongoing research is aimed in that direction.

“We did not yet perform the communication between processors in this work, but rather showed how we can generate photons that are useful for quantum communication and interconnection,” Kannan says. 

Previous work by Kannan, Oliver, and colleagues introduced a waveguide quantum electrodynamics architecture using superconducting qubits that act, in essence, as giant artificial atoms. That research demonstrated how such an architecture can perform low-error quantum computation and share quantum information between processors. This is accomplished by adjusting the frequency of the qubits to tune the qubit-waveguide interaction strength, so that the fragile qubits are protected from waveguide-induced decoherence while they perform high-fidelity qubit operations, and then readjusting the qubit frequency so the qubits can release their quantum information into the waveguide in the form of photons.
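
As a toy illustration of that tunable coupling (a caricature, not the researchers’ model), one can treat emission into the waveguide as a qubit decay rate that frequency tuning switches between “protected” and “emitting” values, then watch the excited-state population in a master-equation simulation. A hypothetical sketch using QuTiP:

    import numpy as np
    import qutip as qt

    # Toy model: tuning a giant atom's frequency changes how emission from its
    # separated coupling points interferes, so the effective decay rate into
    # the waveguide can be switched nearly off or fully on. Both rates below
    # are made-up values chosen only to illustrate the contrast.
    gamma_protected = 0.01  # coupling tuned "off": qubit keeps its state
    gamma_emitting = 1.0    # coupling tuned "on": qubit releases a photon

    psi0 = qt.basis(2, 1)           # qubit prepared in its excited state
    tlist = np.linspace(0, 5, 200)

    for gamma in (gamma_protected, gamma_emitting):
        c_ops = [np.sqrt(gamma) * qt.destroy(2)]  # decay into the waveguide
        result = qt.mesolve(qt.qzero(2), psi0, tlist, c_ops, e_ops=[qt.num(2)])
        print(f"gamma = {gamma}: excited population at t = 5 "
              f"is {result.expect[0][-1]:.3f}")

With the coupling “off,” the excited population barely decays over the simulated window; with it “on,” the excitation is released almost completely, which is the on/off behavior the architecture exploits.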

The new paper demonstrates the photon-generation capability of the waveguide quantum electrodynamics architecture, showing that the qubits can be used as quantum emitters for the waveguide. The researchers demonstrated that quantum interference between the photons emitted into the waveguide generates entangled, itinerant photons that travel in opposite directions and can be used for long-distance communication between quantum processors.

Generating spatially entangled photons in optical systems is typically accomplished using spontaneous parametric down-conversion and photodetectors, but the entanglement generated that way is random and therefore less useful for enabling on-demand communication of quantum information in a distributed system.

“Modularity is a key concept of any extensible system,” says Oliver. “Our goal here is to demonstrate the elements of quantum interconnects that should be useful in future quantum processors.”

 

 
