System allows drones to cooperatively explore terrain under thick forest canopies where GPS signals are unreliable.
Finding lost hikers in forests can be a difficult and lengthy process, as helicopters and drones can’t get a glimpse through the thick tree canopy. Recently, it’s been proposed that autonomous drones, which can bob and weave through trees, could aid these searches. But the GPS signals used to guide the aircraft can be unreliable or nonexistent in forest environments.
In a paper being presented at the International Symposium on Experimental Robotics conference next week, MIT researchers describe an autonomous system for a fleet of drones to collaboratively search under dense forest canopies. The drones use only onboard computation and wireless communication — no GPS required.
Each autonomous quadrotor drone is equipped with laser-range finders for position estimation, localization, and path planning. As the drone flies around, it creates an individual 3-D map of the terrain. Algorithms help it recognize unexplored and already-searched spots, so it knows when it has fully mapped an area. An off-board ground station fuses individual maps from multiple drones into a global 3-D map that can be monitored by human rescuers.
In a real-world implementation, though not in the current system, the drones would come equipped with object detection to identify a missing hiker. When located, the drone would tag the hiker’s location on the global map. Humans could then use this information to plan a rescue mission.
“Essentially, we’re replacing humans with a fleet of drones to make the search part of the search-and-rescue process more efficient,” says first author Yulun Tian, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro).
The researchers tested multiple drones in simulations of randomly generated forests, and tested two drones in a forested area within NASA’s Langley Research Center. In both experiments, each drone mapped an area of roughly 20 square meters in about two to five minutes, and the drones collaboratively fused their individual maps in real time. The drones also performed well across several metrics, including overall speed and time to complete the mission, detection of forest features, and accurate merging of maps.
Co-authors on the paper are: Katherine Liu, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and AeroAstro; Kyel Ok, a PhD student in CSAIL and the Department of Electrical Engineering and Computer Science; Loc Tran and Danette Allen of the NASA Langley Research Center; Nicholas Roy, an AeroAstro professor and CSAIL researcher; and Jonathan P. How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics.
Exploring and mapping
On each drone, the researchers mounted a LIDAR system, which creates a 2-D scan of the surrounding obstacles by shooting laser beams and measuring the reflected pulses. This can be used to detect trees; however, to drones, individual trees appear remarkably similar. If a drone can’t recognize a given tree, it can’t determine if it’s already explored an area.
The researchers programmed their drones to instead identify multiple trees’ orientations, which is far more distinctive. With this method, when the LIDAR signal returns a cluster of trees, an algorithm calculates the angles and distances between trees to identify that cluster. “Drones can use that as a unique signature to tell if they’ve visited this area before or if it’s a new area,” Tian says.
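The paper’s implementation isn’t reproduced here, but the gist of such a cluster signature can be sketched in a few lines of Python. The function name, the use of sorted pairwise distances (angles between trees could be folded in the same way), and the rounding tolerance are illustrative assumptions, not the authors’ code.

```python
import math
from itertools import combinations

def cluster_signature(trees, precision=1):
    """Order-independent signature for a small cluster of detected trees.

    `trees` is a list of (x, y) tree positions estimated from LIDAR returns.
    Sorted, rounded pairwise distances capture the cluster's shape regardless
    of the direction the drone approached from. (Illustrative sketch only.)
    """
    dists = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in combinations(trees, 2)]
    return tuple(sorted(round(d, precision) for d in dists))

# A drone can remember signatures of clusters it has already visited:
visited = set()
cluster = [(0.0, 0.0), (1.4, 0.3), (0.9, 2.1)]
sig = cluster_signature(cluster)
print("seen before:", sig in visited)
visited.add(sig)
```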
This feature-detection technique helps the ground station accurately merge maps. The drones generally explore an area in loops, producing scans as they go. The ground station continuously monitors the scans. When two drones loop around to the same cluster of trees, the ground station merges the maps by calculating the relative transformation between the drones, and then fusing the individual maps to maintain consistent orientations.
“Calculating that relative transformation tells you how you should align the two maps so it corresponds to exactly how the forest looks,” Tian says.
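Computing that relative transformation from a shared cluster is a standard rigid-alignment problem. The sketch below uses the textbook least-squares (Kabsch/Procrustes) solution in 2-D as a stand-in for whatever alignment the ground station actually runs; the example cluster and map points are made up.

```python
import numpy as np

def relative_transform(points_a, points_b):
    """Least-squares 2-D rigid transform (R, t) mapping points_a onto points_b.

    points_a, points_b: (N, 2) arrays of the *same* tree cluster as seen by
    two different drones. Standard Kabsch solution; illustrative only.
    """
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - ca).T @ (points_b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Fuse drone A's map into drone B's frame using the shared cluster:
cluster_a = np.array([[0.0, 0.0], [1.4, 0.3], [0.9, 2.1]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
cluster_b = cluster_a @ R_true.T + np.array([5.0, -2.0])
R, t = relative_transform(cluster_a, cluster_b)
map_a = np.array([[3.0, 1.0], [-1.0, 4.0]])   # other points from drone A
map_a_in_b = map_a @ R.T + t                  # now expressed in drone B's frame
print(np.round(map_a_in_b, 2))
```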
In the ground station, robotic navigation software called “simultaneous localization and mapping” (SLAM) — which both maps an unknown area and keeps track of an agent inside the area — uses the LIDAR input to localize and capture the position of the drones. This helps it fuse the maps accurately.
The end result is a map with 3-D terrain features. Trees appear as blocks shaded from blue to green, depending on height. Unexplored areas are dark but turn gray as they’re mapped by a drone. On-board path-planning software tells a drone to always explore these dark unexplored areas as it flies around. Producing a 3-D map is more reliable than simply attaching a camera to a drone and monitoring the video feed, Tian says. Transmitting video to a central station, for instance, requires a lot of bandwidth that may not be available in forested areas.
More efficient searching
A key innovation is a novel search strategy that lets the drones explore an area more efficiently. In a more traditional approach, a drone would always search the closest possible unknown area. However, that could be in any number of directions from the drone’s current position. The drone usually flies a short distance, and then stops to select a new direction.
“That doesn’t respect dynamics of drone [movement],” Tian says. “It has to stop and turn, so that means it’s very inefficient in terms of time and energy, and you can’t really pick up speed.”
Instead, the researchers’ drones explore the closest possible area, while considering their current direction. They believe this can help the drones maintain a more consistent velocity. This strategy — where the drone tends to travel in a spiral pattern — covers a search area much faster. “In search and rescue missions, time is very important,” Tian says.
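One simple way to make a frontier choice "consider the current direction" is to score candidate frontiers by distance plus a turning penalty, as in the sketch below. The cost form and the weight are illustrative guesses, not the strategy published in the paper.

```python
import math

def choose_frontier(position, heading, frontiers, turn_weight=2.0):
    """Pick the next unexplored frontier, trading distance against turning.

    `heading` is the drone's current travel direction in radians; `frontiers`
    is a list of (x, y) unexplored points. Illustrative cost only.
    """
    px, py = position
    best, best_cost = None, float("inf")
    for fx, fy in frontiers:
        dist = math.hypot(fx - px, fy - py)
        bearing = math.atan2(fy - py, fx - px)
        # smallest angle the drone would have to turn through
        turn = abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
        cost = dist + turn_weight * turn   # nearby *and* roughly ahead wins
        if cost < best_cost:
            best, best_cost = (fx, fy), cost
    return best

print(choose_frontier((0, 0), 0.0, [(1, 5), (4, 0.5), (-3, 0)]))
```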
In the paper, the researchers compared their new search strategy with a traditional method. Compared to that baseline, the researchers’ strategy helped the drones cover significantly more area, several minutes faster and with higher average speeds.
One limitation for practical use is that the drones still must communicate with an off-board ground station for map merging. In their outdoor experiment, the researchers had to set up a wireless router that connected each drone and the ground station. In the future, they hope to design the drones to communicate wirelessly when approaching one another, fuse their maps, and then cut communication when they separate. The ground station, in that case, would only be used to monitor the updated global map.
Innovative approach to controlling magnetism could lead to next-generation memory and logic devices.
A new approach to controlling magnetism in a microchip could open the doors to memory, computing, and sensing devices that consume drastically less power than existing versions. The approach could also overcome some of the inherent physical limitations that have been slowing progress in this area until now.
Researchers at MIT and at Brookhaven National Laboratory have demonstrated that they can control the magnetic properties of a thin-film material simply by applying a small voltage. Changes in magnetic orientation made in this way remain in their new state without the need for any ongoing power, unlike today’s standard memory chips, the team has found.
The new finding is being reported today in the journal Nature Materials, in a paper by Geoffrey Beach, a professor of materials science and engineering and co-director of the MIT Materials Research Laboratory; graduate student Aik Jun Tan; and eight others at MIT and Brookhaven.
As silicon microchips draw closer to fundamental physical limits that could cap their ability to continue increasing their capabilities while decreasing their power consumption, researchers have been exploring a variety of new technologies that might get around these limits. One of the promising alternatives is an approach called spintronics, which makes use of a property of electrons called spin, instead of their electrical charge.
Because spintronic devices can retain their magnetic properties without the need for constant power, which silicon memory chips require, they need far less power to operate. They also generate far less heat — another major limiting factor for today’s devices.
But spintronic technology suffers from its own limitations. One of the biggest missing ingredients has been a way to easily and rapidly control the magnetic properties of a material electrically, by applying a voltage. Many research groups around the world have been pursuing that challenge.
Previous attempts have relied on electron accumulation at the interface between a metallic magnet and an insulator, using a device structure similar to a capacitor. The electrical charge can change the magnetic properties of the material, but only by a very small amount, making it impractical for use in real devices. There have also been attempts at using ions instead of electrons to change magnetic properties. For instance, oxygen ions have been used to oxidize a thin layer of magnetic material, causing extremely large changes in magnetic properties. However, the insertion and removal of oxygen ions causes the material to swell and shrink, causing mechanical damage that limits the process to just a few repetitions — rendering it essentially useless for computational devices.
The new finding demonstrates a way around that, by using hydrogen ions instead of the much larger oxygen ions used in previous attempts. Since the hydrogen ions can zip in and out very easily, the new system is much faster and provides other significant advantages, the researchers say.
Because the hydrogen ions are so much smaller, they can enter and exit from the crystalline structure of the spintronic device, changing its magnetic orientation each time, without damaging the material. In fact, the team has now demonstrated that the process produces no degradation of the material after more than 2,000 cycles. And, unlike oxygen ions, hydrogen can easily pass through metal layers, which allows the team to control properties of layers deep in a device that couldn’t be controlled in any other way.
“When you pump hydrogen toward the magnet, the magnetization rotates,” Tan says. “You can actually toggle the direction of the magnetization by 90 degrees by applying a voltage — and it’s fully reversible.” Since the orientation of the poles of the magnet is what is used to store information, this means it is possible to easily write and erase data “bits” in spintronic devices using this effect.
Beach, whose lab discovered the original process for controlling magnetism through oxygen ions several years ago, says that initial finding unleashed widespread research on a new area dubbed “magnetic ionics,” and now this newest finding has “turned on its end this whole field.”
“This is really a significant breakthrough,” says Chris Leighton, the Distinguished McKnight University Professor in the Department of Chemical Engineering and Materials Science at the University of Minnesota, who was not involved in this work. “There is currently a great deal of interest worldwide in controlling magnetic materials simply by applying electrical voltages. It’s not only interesting from the fundamental side, but it’s also a potential game-changer for applications, where magnetic materials are used to store and process digital information.”
Leighton says, “Using hydrogen insertion to control magnetism is not new, but being able to do that in a voltage-driven way, in a solid-state device, with good impact on the magnetic properties — that is pretty significant!” He adds, “this is something new, with the potential to open up additional new areas of research. … At the end of the day, controlling any type of materials function by literally flipping a switch is pretty exciting. Being able to do that quickly enough, over enough cycles, in a general way, would be a fantastic advance for science and engineering.”
Essentially, Beach explains, he and his team are “trying to make a magnetic analog of a transistor,” which can be turned on and off repeatedly without degrading its physical properties.
Just add water
The discovery came about, in part, through serendipity. While experimenting with layered magnetic materials in search of ways of changing their magnetic behavior, Tan found that the results of his experiments varied greatly from day to day for reasons that were not apparent. Eventually, by examining all the conditions during the different tests, he realized that the key difference was the humidity in the air: The experiment worked better on humid days compared to dry ones. The reason, he eventually realized, was that water molecules from the air were being split up into oxygen and hydrogen on the charged surface of the material, and while the oxygen escaped to the air, the hydrogen became ionized and was penetrating into the magnetic device — and changing its magnetism.
The device the team has produced consists of a sandwich of several thin layers, including a layer of cobalt where the magnetic changes take place, sandwiched between layers of a metal such as palladium or platinum, and with an overlay of gadolinium oxide, and then a gold layer to connect to the driving electrical voltage.
The magnetism gets switched with just a brief application of voltage and then stays put. Reversing it requires no power at all, just short-circuiting the device to connect its two sides electrically, whereas a conventional memory chip requires constant power to maintain its state. “Since you’re just applying a pulse, the power consumption can go way down,” Beach says.
The new devices, with their low power consumption and high switching speed, could eventually be especially useful for mobile computing devices, Beach says, but the work is still at an early stage and will require further development.
“I can see lab-based prototypes within a few years or less,” he says. Making a full working memory cell is “quite complex” and might take longer, he says.
The work was supported by the National Science Foundation through the Materials Research Science and Engineering Center (MRSEC) Program.
Neuroscientists discover a circuit that helps redirect attention to focus on potential threats.
Imagine a herd of deer grazing in the forest. Suddenly, a twig snaps nearby, and they look up from the grass. The thought of food is forgotten, and the animals are primed to respond to any threat that might appear.
MIT neuroscientists have now discovered a circuit that they believe controls the diversion of attention away from everyday pursuits, to focus on potential threats. They also found that dopamine is key to the process: It is released in the brain’s prefrontal cortex when danger is perceived, stimulating the prefrontal cortex to redirect its focus to a part of the brain that responds to threats.
“The prefrontal cortex has long been thought to be important for attention and higher cognitive functions — planning, prioritizing, decision-making. It’s as though dopamine is the signal that tells the router to switch over to sending information down the pathway for escape-related behavior,” says Kay Tye, an MIT associate professor of brain and cognitive sciences and a member of MIT’s Picower Institute for Learning and Memory.
When this circuit is off-balance, it could trigger anxious and paranoid behavior, possibly underlying some of the symptoms seen in schizophrenia, anxiety, and depression, Tye says.
Tye is the senior author of the study, which appears in the Nov. 7 issue of Nature. The lead authors are former graduate student Caitlin Vander Weele, postdoc Cody Siciliano, and research scientist Gillian Matthews.
One major role of the prefrontal cortex, which is the seat of conscious thought and other complex cognitive behavior, is to route information to different parts of the brain.
In this study, Tye identified two populations of neurons in the prefrontal cortex, based on other brain regions that they communicate with. One set of neurons sends information to the nucleus accumbens, which is involved in motivation and reward, and the other group relays information to the periaqueductal gray (PAG), which is part of the brainstem. The PAG is involved in defensive behavior such as freezing or running.
When we perceive a potentially dangerous event, a brain region called the ventral tegmental area (VTA) sends dopamine to the prefrontal cortex, and Tye and her colleagues wanted to determine how dopamine affects the two populations they had identified. To achieve that, they designed an experiment in which rats were trained to recognize two visual cues, one associated with sugar water and one with a mild electrical shock. Then, they explored what happened when both cues were presented at the same time.
They found that if they stimulated dopamine release at the same time that the cues were given, the rats were much more likely to freeze (their normal response to the shock cue) than to head for the port where they would receive the sugar water. If they stimulated dopamine when just one of the cues was given, the rats’ behavior was not affected, suggesting that dopamine’s role is to enhance the escape response when the animals receive conflicting information.
“The reward-associated neurons drop their spiking by a substantial amount, making it harder for you to pay attention to a reward,” Tye says.
Further experiments suggested that dopamine acts by adjusting the signal-to-noise ratio in neurons of the prefrontal cortex. “Noise” is random firing of neurons, while the “signal” is the meaningful input coming in, such as sensory information. When neurons that connect to the PAG receive dopamine at the same time as a threatening stimulus, their signal goes up and the noise decreases. The researchers aren’t sure how this happens, but they suspect that dopamine may activate other neurons that help to amplify the signals already coming into the PAG-connected neurons, and suppress the activity of neurons that project to the nucleus accumbens.
Adapted for survival
This brain circuit could help give animals a better chance of surviving a threatening situation, Tye says. Any kind of danger sign, such as the snapping twig that startles a herd of deer, or a stranger roughly bumping into you on the sidewalk, can produce a surge of dopamine in the prefrontal cortex. This dopamine then promotes enhanced vigilance.
“You would be on the defensive,” Tye says. “There may be some times that you run when you don’t need to, but more often than not, it might make sense to turn your attention to a potential threat.”
Dysregulation of this dopamine-controlled switching may contribute to neuropsychiatric disorders such as schizophrenia, Tye says. Among other effects, too much dopamine could lead the brain to weigh negative inputs too highly. This could result in paranoia, often seen in schizophrenia patients, or anxiety.
Tye now hopes to determine more precisely how dopamine affects other neurotransmitters involved in the modulation of the signal-to-noise ratio. She also plans to further explore the role of this kind of modulation in anxiety and phobias.
The research was funded by the JPB Foundation, the Picower Institute Innovation Fund, the Picower Neurological Disorders Research Fund, the Junior Faculty Development Program, the Klingenstein Foundation, a NARSAD Young Investigator Award, the New York Stem Cell Foundation, the National Institutes of Health, the NIH Director’s New Innovator Award, and the NIH Pioneer Award.
In simulations, robots move through new environments by exploring, observing, and drawing from learned experiences.
When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.
MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they’ve learned before in similar situations. A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Popular motion-planning algorithms will create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is these algorithms rarely learn: Robots can’t leverage information about how they or other agents acted previously in similar environments.
“Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents,” says co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute. “The thousandth time they go through the same crowd is as complicated as the first time. They’re always exploring, rarely observing, and never using what’s happened in the past.”
The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot’s movement in an environment.
In their paper, “Deep sequential models for sampling-based planning,” the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents. A promising real-world application is helping autonomous cars navigate intersections, where they have to quickly evaluate what others will do before merging into traffic. The researchers are currently pursuing such applications through the Toyota-CSAIL Joint Research Center.
“When humans interact with the world, we see an object we’ve interacted with before, or are in some location we’ve been to before, so we know how we’re going to act,” says Yen-Ling Kuo, a PhD student in CSAIL and first author on the paper. “The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient.”
Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL, is also a co-author on the paper.
Trading off exploration and exploitation
Traditional motion planners explore an environment by rapidly expanding a tree of decisions that eventually blankets an entire space. The robot then looks at the tree to find a way to reach the goal, such as a door. The researchers’ model, however, offers “a tradeoff between exploring the world and exploiting past knowledge,” Kuo says.
The learning process starts with a few examples. A robot using the model is trained on a few ways to navigate similar environments. The neural network learns what makes these examples succeed by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and features of the goals. In short, the model “learns that when you’re stuck in an environment, and you see a doorway, it’s probably a good idea to go through the door to get out,” Barbu says.
The model combines the exploration behavior from earlier methods with this learned information. The underlying planner, called RRT*, was developed by MIT professors Sertac Karaman and Emilio Frazzoli. (It’s a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT.) The planner creates a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next. When the network makes a prediction with high confidence, based on learned information, it guides the robot on a new path. If the network doesn’t have high confidence, it lets the robot explore the environment instead, like a traditional planner.
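Conceptually, the planner’s sampling step might look like the sketch below: a learned policy proposes the next point when it is confident, and the planner falls back to uniform sampling otherwise. The confidence threshold and the `policy` callable are assumptions for illustration, and RRT*’s nearest-neighbor extension and rewiring steps are omitted.

```python
import random

def guided_sample(tree_nodes, policy, bounds, conf_threshold=0.8):
    """Propose the next sample for a sampling-based planner.

    `policy(tree_nodes)` stands in for the neural network: it returns a
    candidate point and a confidence in [0, 1]. When the network is
    confident, its suggestion is used; otherwise we fall back to uniform
    sampling, i.e. plain exploration. (Sketch only, not the published code.)
    """
    candidate, confidence = policy(tree_nodes)
    if confidence >= conf_threshold:
        return candidate                      # exploit learned experience
    (xmin, xmax), (ymin, ymax) = bounds
    return (random.uniform(xmin, xmax),       # explore like vanilla RRT
            random.uniform(ymin, ymax))

# Toy policy: always suggests the doorway at (9, 5) with moderate confidence.
toy_policy = lambda nodes: ((9.0, 5.0), 0.6)
print(guided_sample([(0, 0)], toy_policy, bounds=((0, 10), (0, 10))))
```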
For example, the researchers demonstrated the model in a simulation known as a “bug trap,” where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room. Blind alleys on either side of the channel can get robots stuck. In this simulation, the robot was trained on a few examples of how to escape different bug traps. When faced with a new trap, it recognizes features of the trap, escapes, and continues to search for its goal in the larger room. The neural network helps the robot find the exit to the trap, identify the dead ends, and get a sense of its surroundings so it can quickly find the goal.
Results in the paper are based on the chances that a path is found after some time, the total length of the path that reached a given goal, and how consistent the paths were. In both simulations, the researchers’ model plotted far shorter and more consistent paths than a traditional planner, and did so more quickly.
“This model is interesting because it allows a motion planner to adapt to what it sees in the environment,” says Stephanie Tellex, an assistant professor of computer science at Brown University, who was not involved in the research. “This can enable dramatic improvements in planning speed by customizing the planner to what the robot knows. Most planners don't adapt to the environment at all. Being able to traverse long, narrow passages is notoriously difficult for a conventional planner, but they can solve it. We need more ways that bridge this gap.”
Working with multiple agents
In another experiment, the researchers trained and tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially for navigating intersections and roundabouts. In the simulation, several agents are circling an obstacle. A robot agent must successfully navigate around the other agents, avoid collisions, and reach a goal location, such as an exit on a roundabout.
“Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on,” Barbu says. “You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”
Results indicate that the researchers’ model can capture enough information about the future behavior of the other agents (cars) to cut off the process early, while still making good decisions in navigation. This makes planning more efficient. Moreover, they only needed to train the model on a few examples of roundabouts with only a few cars. “The plans the robots make take into account what the other cars are going to do, as any human would,” Barbu says.
Going through intersections or roundabouts is one of the most challenging scenarios facing autonomous cars. This work might one day let cars learn how humans behave and how to adapt to drivers in different environments, according to the researchers. This is the focus of the Toyota-CSAIL Joint Research Center work.
“Not everybody behaves the same way, but people are very stereotypical. There are people who are shy, people who are aggressive. The model recognizes that quickly and that’s why it can plan efficiently,” Barbu says.
More recently, the researchers have been applying this work to robots with manipulators that face similarly daunting challenges when reaching for objects in ever-changing environments.
Hydrogen peroxide-sensing molecule reveals whether chemotherapy drugs are having their intended effects.
MIT chemical engineers have developed a new sensor that lets them see inside cancer cells and determine whether the cells are responding to a particular type of chemotherapy drug.
The sensors, which detect hydrogen peroxide inside human cells, could help researchers identify new cancer drugs that boost levels of hydrogen peroxide, which induces programmed cell death. The sensors could also be adapted to screen individual patients’ tumors to predict whether such drugs would be effective against them.
“The same therapy isn’t going to work against all tumors,” says Hadley Sikes, an associate professor of chemical engineering at MIT. “Currently there’s a real dearth of quantitative, chemically specific tools to be able to measure the changes that occur in tumor cells versus normal cells in response to drug treatment.”
Sikes is the senior author of the study, which appears in the Aug. 7 issue of Nature Communications. The paper’s first author is graduate student Troy Langford; other authors are former graduate students Beijing Huang and Joseph Lim and graduate student Sun Jin Moon.
Tracking hydrogen peroxide
Cancer cells often have mutations that cause their metabolism to go awry and produce abnormally high fluxes of hydrogen peroxide. When too much of the molecule is produced, it can damage cells, so cancer cells become highly dependent on antioxidant systems that remove hydrogen peroxide from cells.
Drugs that target this vulnerability, which are known as “redox drugs,” can work by either disabling the antioxidant systems or further boosting production of hydrogen peroxide. Many such drugs have entered clinical trials, with mixed results.
“One of the problems is that the clinical trials usually find that they work for some patients and they don’t work for other patients,” Sikes says. “We really need tools to be able to do more well-designed trials where we figure out which patients are going to respond to this approach and which aren’t, so more of these drugs can be approved.”
To help move toward that goal, Sikes set out to design a sensor that could sensitively detect hydrogen peroxide inside human cells, allowing scientists to measure a cell’s response to such drugs.
Existing hydrogen peroxide sensors are based on proteins called transcription factors, taken from microbes and engineered to fluoresce when they react with hydrogen peroxide. Sikes and her colleagues tried to use these in human cells but found that they were not sensitive in the range of hydrogen peroxide they were trying to detect, which led them to seek human proteins that could perform the task.
Through studies of the network of human proteins that become oxidized with increasing hydrogen peroxide, the researchers identified an enzyme called peroxiredoxin that dominates most human cells’ reactions with the molecule. One of this enzyme’s many functions is sensing changes in hydrogen peroxide levels.
Langford then modified the protein by adding two fluorescent molecules to it — a green fluorescent protein at one end and a red fluorescent protein at the other end. When the sensor reacts with hydrogen peroxide, its shape changes, bringing the two fluorescent proteins closer together. The researchers can detect whether this shift has occurred by shining green light onto the cells: If no hydrogen peroxide has been detected, the glow remains green; if hydrogen peroxide is present, the sensor glows red instead.
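Reading out such a sensor typically comes down to comparing red and green emission per cell. A minimal sketch of that ratio calculation follows; the threshold and intensity values are invented for illustration and are not from the study.

```python
def fret_ratio(red_intensity, green_intensity, eps=1e-9):
    """Red/green emission ratio for a FRET-style readout.

    A rising ratio indicates the sensor has changed shape (hydrogen peroxide
    detected). The epsilon just avoids division by zero.
    """
    return red_intensity / (green_intensity + eps)

def classify_cell(red, green, threshold=1.0):
    # Threshold is purely illustrative, not a calibrated value.
    return "H2O2 elevated" if fret_ratio(red, green) > threshold else "baseline"

print(classify_cell(red=850, green=400))   # -> "H2O2 elevated"
print(classify_cell(red=200, green=900))   # -> "baseline"
```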
The researchers tested their new sensor in two types of human cancer cells: one set that they knew was susceptible to a redox drug called piperlongumine, and another that they knew was not susceptible. The sensor revealed that hydrogen peroxide levels were unchanged in the resistant cells but went up in the susceptible cells, as the researchers expected.
Sikes envisions two major uses for this sensor. One is to screen libraries of existing drugs, or compounds that could potentially be used as drugs, to determine if they have the desired effect of increasing hydrogen peroxide concentration in cancer cells. Another potential use is to screen patients before they receive such drugs, to see if the drugs will be successful against each patient’s tumor. Sikes is now pursuing both of these approaches.
“You have to know which cancer drugs work in this way, and then which tumors are going to respond,” she says. “Those are two separate but related problems that both need to be solved for this approach to have practical impact in the clinic.”
The research was funded by the Haas Family Fellowship in Chemical Engineering, the National Science Foundation, a Samsung Fellowship, and a Burroughs Wellcome Fund Career Award at the Scientific Interface.
In a novel system developed by MIT researchers, underwater sonar signals cause vibrations that can be decoded by an airborne receiver.
MIT researchers have taken a step toward solving a longstanding challenge with wireless communication: direct data transmission between underwater and airborne devices.
Today, underwater sensors cannot share data with those on land, because each uses wireless signals that work only in its own medium. Radio signals that travel through air die very rapidly in water. Acoustic signals, or sonar, sent by underwater devices mostly reflect off the surface without ever breaking through. This causes inefficiencies and other issues for a variety of applications, such as ocean exploration and submarine-to-plane communication.
In a paper being presented at this week’s SIGCOMM conference, MIT Media Lab researchers have designed a system that tackles this problem in a novel way. An underwater transmitter directs a sonar signal to the water’s surface, causing tiny vibrations that correspond to the 1s and 0s transmitted. Above the surface, a highly sensitive receiver reads these minute disturbances and decodes the sonar signal.
“Trying to cross the air-water boundary with wireless signals has been an obstacle. Our idea is to transform the obstacle itself into a medium through which to communicate,” says Fadel Adib, an assistant professor in the Media Lab, who is leading this research. He co-authored the paper with his graduate student Francesco Tonolini.
The system, called “translational acoustic-RF communication” (TARF), is still in its early stages, Adib says. But it represents a “milestone,” he says, that could open new capabilities in water-air communications. Using the system, military submarines, for instance, wouldn’t need to surface to communicate with airplanes, compromising their location. And underwater drones that monitor marine life wouldn’t need to constantly resurface from deep dives to send data to researchers.
Another promising application is aiding searches for planes that go missing underwater. “Acoustic transmitting beacons can be implemented in, say, a plane’s black box,” Adib says. “If it transmits a signal every once in a while, you’d be able to use the system to pick up that signal.”
Today’s technological workarounds to this wireless communication issue suffer from various drawbacks. Buoys, for instance, have been designed to pick up sonar waves, process the data, and shoot radio signals to airborne receivers. But these can drift away and get lost. Many are also required to cover large areas, making them impractical for, say, submarine-to-surface communications.
TARF includes an underwater acoustic transmitter that sends sonar signals using a standard acoustic speaker. The signals travel as pressure waves of different frequencies corresponding to different data bits. For example, when the transmitter wants to send a 0, it can transmit a wave traveling at 100 hertz; for a 1, it can transmit a 200-hertz wave. When the signal hits the surface, it causes tiny ripples in the water, only a few micrometers in height, corresponding to those frequencies.
To achieve high data rates, the system transmits multiple frequencies at the same time, building on a modulation scheme used in wireless communication, called orthogonal frequency-division multiplexing. This lets the researchers transmit hundreds of bits at once.
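As a rough illustration of that encoding, the sketch below switches one sonar tone on per “1” bit and superposes the tones, in the spirit of OFDM (the subcarrier spacing is set to the inverse of the symbol duration so the tones stay orthogonal). The specific frequencies, apart from the article’s 100- and 200-hertz example, are assumptions.

```python
import numpy as np

def encode_bits(bits, base_freq=100.0, spacing=10.0, rate=8000, duration=0.1):
    """Superpose one tone per bit position: subcarrier k is switched on
    when bit k is 1. With spacing = 1/duration the tones are orthogonal
    over the symbol, OFDM-style. (Illustrative sketch, not TARF's modem.)
    """
    t = np.arange(0, duration, 1.0 / rate)
    signal = np.zeros_like(t)
    for k, bit in enumerate(bits):
        if bit:
            signal += np.sin(2 * np.pi * (base_freq + k * spacing) * t)
    return t, signal

t, s = encode_bits([1, 0, 1, 1])
print(s[:5])
```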
Positioned in the air above the transmitter is a new type of extremely-high-frequency radar that processes signals in the millimeter wave spectrum of wireless transmission, between 30 and 300 gigahertz. (That’s the band where the upcoming high-frequency 5G wireless network will operate.)
The radar, which looks like a pair of cones, transmits a radio signal that reflects off the vibrating surface and rebounds back to the radar. Due to the way the signal collides with the surface vibrations, the signal returns with a slightly modulated angle that corresponds exactly to the data bit sent by the sonar signal. A vibration on the water surface representing a 0 bit, for instance, will cause the reflected signal’s angle to vibrate at 100 hertz.
“The radar reflection is going to vary a little bit whenever you have any form of displacement like on the surface of the water,” Adib says. “By picking up these tiny angle changes, we can pick up these variations that correspond to the sonar signal.”
Listening to “the whisper”
A key challenge was helping the radar detect the water surface. To do so, the researchers employed a technology that detects reflections in an environment and organizes them by distance and power. As water has the most powerful reflection in the new system’s environment, the radar knows the distance to the surface. Once that’s established, it zooms in on the vibrations at that distance, ignoring all other nearby disturbances.
The next major challenge was capturing micrometer waves surrounded by much larger, natural waves. The smallest ocean ripples on calm days, called capillary waves, are only about 2 centimeters tall, but that’s 100,000 times larger than the vibrations. Rougher seas can create waves 1 million times larger. “This interferes with the tiny acoustic vibrations at the water surface,” Adib says. “It’s as if someone’s screaming and you’re trying to hear someone whispering at the same time.”
To solve this, the researchers developed sophisticated signal-processing algorithms. Natural waves occur at about 1 or 2 hertz — or, a wave or two moving over the signal area every second. The sonar vibrations of 100 to 200 hertz, however, are a hundred times faster. Because of this frequency differential, the algorithm zeroes in on the fast-moving waves while ignoring the slower ones.
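A minimal version of that frequency separation is an ordinary high-pass filter, as sketched below; the cutoff, filter order, sample rate, and toy amplitudes are illustrative choices, not the researchers’ actual algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def suppress_slow_waves(displacement, rate=2000, cutoff=20.0, order=4):
    """High-pass filter the measured surface displacement.

    Natural waves live around 1-2 Hz while the sonar-induced vibrations sit
    at 100-200 Hz, so a cutoff in between keeps the 'whisper' and drops the
    'scream'. Cutoff and order are illustrative, not TARF's.
    """
    b, a = butter(order, cutoff / (rate / 2), btype="highpass")
    return filtfilt(b, a, displacement)

t = np.arange(0, 2.0, 1.0 / 2000)
# 2-cm swell at 1.5 Hz plus a few-micrometer sonar vibration at 150 Hz:
surface = 0.02 * np.sin(2 * np.pi * 1.5 * t) + 2e-6 * np.sin(2 * np.pi * 150 * t)
vibration = suppress_slow_waves(surface)
```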
Testing the waters
The researchers took TARF through 500 test runs in a water tank and in two different swimming pools on MIT’s campus.
In the tank, the radar was placed at ranges from 20 centimeters to 40 centimeters above the surface, and the sonar transmitter was placed from 5 centimeters to 70 centimeters below the surface. In the pools, the radar was positioned about 30 centimeters above the surface, while the transmitter was immersed about 3.5 meters below. In these experiments, the researchers also had swimmers creating waves that rose to about 16 centimeters.
In both settings, TARF was able to accurately decode various data — such as the sentence, “Hello! from underwater” — at hundreds of bits per second, similar to standard data rates for underwater communications. “Even while there were swimmers swimming around and causing disturbances and water currents, we were able to decode these signals quickly and accurately,” Adib says.
In waves higher than 16 centimeters, however, the system isn’t able to decode signals. The next steps are, among other things, refining the system to work in rougher waters. “It can deal with calm days and deal with certain water disturbances. But [to make it practical] we need this to work on all days and all weathers,” Adib says.
“TARF is the first system that demonstrates that it is feasible to receive underwater acoustic transmissions from the air using radar,” says Aaron Schulman, an assistant professor of computer science and engineering at the University of California at San Diego. “I expect this new radar-acoustic technology will benefit researchers in fields that depend on underwater acoustics (for example, marine biology), and will inspire the scientific community to investigate how to make radar-acoustic links practical and robust.”
The researchers also hope that their system could eventually enable an airborne drone or plane flying over the water’s surface to constantly pick up and decode the sonar signals as it zooms by.
The research was supported, in part, by the National Science Foundation.
Machine-learning system determines the fewest, smallest doses that could still shrink brain tumors.
MIT researchers are employing novel machine-learning techniques to improve the quality of life for patients by reducing toxic chemotherapy and radiotherapy dosing for glioblastoma, the most aggressive form of brain cancer.
Glioblastoma is a malignant tumor that appears in the brain or spinal cord, and prognosis for adults is no more than five years. Patients must endure a combination of radiation therapy and multiple drugs taken every month. Medical professionals generally administer maximum safe drug doses to shrink the tumor as much as possible. But these strong pharmaceuticals still cause debilitating side effects in patients.
In a paper being presented next week at the 2018 Machine Learning for Healthcare conference at Stanford University, MIT Media Lab researchers detail a model that could make dosing regimens less toxic but still effective. Powered by a “self-learning” machine-learning technique, the model looks at treatment regimens currently in use, and iteratively adjusts the doses. Eventually, it finds an optimal treatment plan, with the lowest possible potency and frequency of doses that should still reduce tumor sizes to a degree comparable to that of traditional regimens.
In simulated trials of 50 patients, the machine-learning model designed treatment cycles that reduced the potency to a quarter or half of nearly all the doses while maintaining the same tumor-shrinking potential. Many times, it skipped doses altogether, scheduling administrations only twice a year instead of monthly.
“We kept the goal, where we have to help patients by reducing tumor sizes but, at the same time, we want to make sure the quality of life — the dosing toxicity — doesn’t lead to overwhelming sickness and harmful side effects,” says Pratik Shah, a principal investigator at the Media Lab who supervised this research.
The paper’s first author is Media Lab researcher Gregory Yauney.
Rewarding good choices
The researchers’ model uses a technique called reinforcement learning (RL), a method inspired by behavioral psychology, in which a model learns to favor certain behavior that leads to a desired outcome.
The technique comprises artificially intelligent “agents” that complete “actions” in an unpredictable, complex environment to reach a desired “outcome.” Whenever it completes an action, the agent receives a “reward” or “penalty,” depending on whether the action works toward the outcome. Then, the agent adjusts its actions accordingly to achieve that outcome.
Rewards and penalties are basically positive and negative numbers, say +1 or -1. Their values vary by the action taken, calculated by probability of succeeding or failing at the outcome, among other factors. The agent is essentially trying to numerically optimize all actions, based on reward and penalty values, to get to a maximum outcome score for a given task.
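That bookkeeping can be pictured as a bare-bones action-value update, as in the sketch below; the action names, learning rate, and +1/-1 rewards are invented for illustration and are not the paper’s model.

```python
def update_action_value(q, action, reward, lr=0.1):
    """One step of reward/penalty bookkeeping.

    `q` maps each action to the agent's current estimate of how good it is;
    a positive reward nudges the estimate up, a penalty nudges it down.
    """
    q[action] = q[action] + lr * (reward - q[action])
    return q

q = {"give_full_dose": 0.0, "give_half_dose": 0.0, "skip_dose": 0.0}
q = update_action_value(q, "give_half_dose", reward=+1)   # tumor shrank
q = update_action_value(q, "give_full_dose", reward=-1)   # penalized for toxicity
print(q)
```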
The approach was used to train DeepMind’s program AlphaGo, which in 2016 made headlines for beating one of the world’s best human players in the game Go. It’s also used to train driverless cars in maneuvers, such as merging into traffic or parking, where the vehicle will practice over and over, adjusting its course, until it gets it right.
The researchers adapted an RL model for glioblastoma treatments that use a combination of the drugs temozolomide (TMZ) and procarbazine, lomustine, and vincristine (PVC), administered over weeks or months.
The model’s agent combs through traditionally administered regimens. These regimens are based on protocols that have been used clinically for decades and are informed by animal testing and various clinical trials. Oncologists use these established protocols to decide how large a dose to give patients based on body weight.
As the model explores the regimen, at each planned dosing interval — say, once a month — it decides on one of several actions. It can, first, either initiate or withhold a dose. If it does administer, it then decides if the entire dose, or only a portion, is necessary. At each action, it pings another clinical model — often used to predict a tumor’s change in size in response to treatments — to see if the action shrinks the mean tumor diameter. If it does, the model receives a reward.
However, the researchers also had to make sure the model doesn’t just dish out a maximum number and potency of doses. Whenever the model chooses to administer all full doses, therefore, it gets penalized, so it instead chooses fewer, smaller doses. “If all we want to do is reduce the mean tumor diameter, and let it take whatever actions it wants, it will administer drugs irresponsibly,” Shah says. “Instead, we said, ‘We need to reduce the harmful actions it takes to get to that outcome.’”
This represents an “unorthodox RL model, described in the paper for the first time,” Shah says, that weighs potential negative consequences of actions (doses) against an outcome (tumor reduction). Traditional RL models work toward a single outcome, such as winning a game, and take any and all actions that maximize that outcome. On the other hand, the researchers’ model, at each action, has flexibility to find a dose that doesn’t necessarily solely maximize tumor reduction, but that strikes a perfect balance between maximum tumor reduction and low toxicity. This technique, he adds, has various medical and clinical trial applications, where actions for treating patients must be regulated to prevent harmful side effects.
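One way to express that balance is a reward that pays for tumor shrinkage while charging for the size of the dose, as in the sketch below; the linear form and the weights are assumptions for illustration, not the formula used in the paper.

```python
def dosing_reward(diameter_before, diameter_after, dose_fraction,
                  shrink_weight=1.0, toxicity_weight=0.5):
    """Reward for one dosing decision: pay for shrinkage, charge for dose.

    `dose_fraction` is 0 (skip), 0.25, 0.5, or 1.0 of the standard dose.
    Illustrative trade-off only.
    """
    shrinkage = diameter_before - diameter_after
    return shrink_weight * shrinkage - toxicity_weight * dose_fraction

print(dosing_reward(3.0, 2.6, dose_fraction=1.0))   # full dose, modest gain
print(dosing_reward(3.0, 2.7, dose_fraction=0.25))  # small dose, similar gain
```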
The researchers trained the model on 50 simulated patients, randomly selected from a large database of glioblastoma patients who had previously undergone traditional treatments. For each patient, the model conducted about 20,000 trial-and-error test runs. Once training was complete, the model learned parameters for optimal regimens. When given new patients, the model used those parameters to formulate new regimens based on various constraints the researchers provided.
The researchers then tested the model on 50 new simulated patients and compared the results to those of a conventional regimen using both TMZ and PVC. When given no dosage penalty, the model designed nearly identical regimens to human experts. Given small and large dosing penalties, however, it substantially cut the doses’ frequency and potency, while reducing tumor sizes.
The researchers also designed the model to treat each patient individually, as well as in a single cohort, and achieved similar results (medical data for each patient was available to the researchers). Traditionally, the same dosing regimen is applied to groups of patients, but differences in tumor size, medical histories, genetic profiles, and biomarkers can all change how a patient is treated. These variables are not considered during traditional clinical trial designs and other treatments, often leading to poor responses to therapy in large populations, Shah says.
“We said [to the model], ‘Do you have to administer the same dose for all the patients?’ And it said, ‘No. I can give a quarter dose to this person, half to this person, and maybe we skip a dose for this person.’ That was the most exciting part of this work, where we are able to generate precision medicine-based treatments by conducting one-person trials using unorthodox machine-learning architectures,” Shah says.
The model offers a major improvement over the conventional “eye-balling” method of administering doses, observing how patients respond, and adjusting accordingly, says Nicholas J. Schork, a professor and director of human biology at the J. Craig Venter Institute, and an expert in clinical trial design. “[Humans don’t] have the in-depth perception that a machine looking at tons of data has, so the human process is slow, tedious, and inexact,” he says. “Here, you’re just letting a computer look for patterns in the data, which would take forever for a human to sift through, and use those patterns to find optimal doses.”
Schork adds that this work may particularly interest the U.S. Food and Drug Administration, which is now seeking ways to leverage data and artificial intelligence to develop health technologies. Regulations still need to be established, he says, “but I don’t doubt, in a short amount of time, the FDA will figure out how to vet these [technologies] appropriately, so they can be used in everyday clinical programs.”
Novel chip keeps time using the constant, measurable rotation of molecules as a timing reference.
MIT researchers have developed the first molecular clock on a chip, which uses the constant, measurable rotation of molecules — when exposed to a certain frequency of electromagnetic radiation — to keep time. The chip could one day significantly improve the accuracy and performance of navigation on smartphones and other consumer devices.
Today’s most accurate time-keepers are atomic clocks. These clocks rely on the steady resonance of atoms, when exposed to a specific frequency, to measure exactly one second. Several such clocks are installed in all GPS satellites. By “trilaterating” time signals broadcast from these satellites — a technique similar to triangulation that uses three-dimensional data for positioning — your smartphone and other ground receivers can pinpoint their own location.
But atomic clocks are large and expensive. Your smartphone, therefore, has a much less accurate internal clock that relies on three satellite signals to navigate and can still calculate wrong locations. Errors can be reduced with corrections from additional satellite signals, if available, but this degrades the performance and speed of your navigation. When signals drop or weaken — such as in areas surrounded by signal-reflecting buildings or in tunnels — your phone primarily relies on its clock and an accelerometer to estimate your location and where you’re going.
Researchers from MIT’s Department of Electrical Engineering and Computer Science (EECS) and Terahertz Integrated Electronics Group have now built an on-chip clock that exposes specific molecules — not atoms — to an exact, ultrahigh frequency that causes them to spin. When the molecular rotations cause maximum energy absorption, a periodic output is clocked — in this case, a second. As with the resonance of atoms, this spin is reliably constant enough that it can serve as a precise timing reference.
In experiments, the molecular clock averaged an error under 1 microsecond per hour, comparable to miniature atomic clocks and 10,000 times more stable than the crystal-oscillator clocks in smartphones. Because the clock is fully electronic and doesn’t require bulky, power-hungry components used to insulate and excite the atoms, it is manufactured with the low-cost, complementary metal-oxide-semiconductor (CMOS) integrated circuit technology used to make all smartphone chips.
“Our vision is, in the future, you don’t need to spend a big chunk of money getting atomic clocks in most equipment. Rather, you just have a little gas cell that you attached to the corner of a chip in a smartphone, and then the whole thing is running at atomic clock-grade accuracy,” says Ruonan Han, an associate professor in EECS and co-author of a paper describing the clock, published today in Nature Electronics.
The chip-scale molecular clock can also be used for more efficient time-keeping in operations that require location precision but involve little to no GPS signal, such as underwater sensing or battlefield applications.
Joining Han on the paper are: Cheng Wang, a PhD student and first author; Xiang Yi, a postdoc; and graduate students James Mawdsley, Mina Kim, and Zihan Wang, all from EECS.
In the 1960s, scientists officially defined one second as 9,192,631,770 oscillations of radiation, which is the exact frequency of the radiation that causes cesium-133 atoms to change from a low state to a high state of excitability. Because that change is constant, that exact frequency can be used as a reliable time reference of one second. Essentially, every time 9,192,631,770 oscillations occur, one second has passed.
Atomic clocks are systems that use that concept. They sweep a narrow band of microwave frequencies across cesium-133 atoms until a maximum number of the atoms transition to their high states — meaning the frequency is then at exactly 9,192,631,770 oscillations. When that happens, the system clocks a second. It continuously tests that a maximum number of those atoms are in high-energy states and, if not, adjusts the frequency to keep on track. The best atomic clocks come within one second of error every 1.4 million years.
In recent years, the U.S. Defense Advanced Research Projects Agency has introduced chip-scale atomic clocks. But these run about $1,000 each — too pricey for consumer devices. To shrink the scale, “we searched for different physics all together,” Han says. “We don’t probe the behavior of atoms; rather, we probe the behavior of molecules.”
The researchers’ chip functions similarly to an atomic clock but relies on measuring the rotation of the molecule carbonyl sulfide (OCS), when exposed to certain frequencies. Attached to the chip is a gas cell filled with OCS. A circuit continuously sweeps frequencies of electromagnetic waves along the cell, causing the molecules to start rotating. A receiver measures the energy of these rotations and adjusts the clock output frequency accordingly. At a frequency very close to 231.060983 gigahertz, the molecules reach peak rotation and form a sharp signal response. The researchers divided down that frequency to exactly one second, matching it with the official time from atomic clocks.
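The “divide down to one second” step is, at heart, counting a known number of reference cycles. The arithmetic sketch below makes the comparison with the cesium definition explicit; real hardware uses chains of frequency dividers rather than a software counter.

```python
OCS_LINE_HZ = 231.060983e9    # rotational absorption peak of carbonyl sulfide (OCS)
CESIUM_HZ = 9_192_631_770     # cesium-133 transition that defines the SI second

def cycles_per_second(reference_hz):
    """Counting this many cycles of the reference marks exactly one second."""
    return round(reference_hz)

# Same principle, different physical reference:
print(f"cesium clock:    count {cycles_per_second(CESIUM_HZ):,} oscillations")
print(f"molecular clock: count {cycles_per_second(OCS_LINE_HZ):,} cycles of the ~231 GHz probe")
```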
“The output of the system is linked to that known number — about 231 gigahertz,” Han says. “You want to correlate a quantity that is useful to you with a quantity that is a physical constant, that doesn’t change. Then your quantity becomes very stable.”
A key challenge was designing a chip that can shoot out a 200-gigahertz signal to make a molecule rotate. Consumer device components can generally produce signals of only a few gigahertz. The researchers developed custom metal structures and other components that increase the efficacy of transistors, in order to shape a low-frequency input signal into a higher-frequency electromagnetic wave, while using as little power as possible. The chip consumes only 66 milliwatts of power. For comparison, common smartphone features — such as GPS, Wi-Fi, and LED lighting — can consume hundreds of milliwatts during use.
The chips could be used for underwater sensing, where GPS signals aren’t available, Han says. In those applications, sonic waves are shot into the ocean floor and return to a grid of underwater sensors. Inside each sensor, an attached atomic clock measures the signal delay to pinpoint the location of, say, oil under the ocean floor. The researchers’ chip could be a low-power and low-cost alternative to the atomic clocks.
The chip could also be used on the battlefield, Han says. Bombs are often remotely triggered on battlefields, so soldiers use equipment that suppresses all signals in the area so the bombs won’t go off. “Soldiers themselves then don’t have GPS signals anymore,” Han says. “Those are places where an accurate internal clock for local navigation becomes quite essential.”
The prototype still needs some fine-tuning before it’s ready to reach consumer devices. The researchers plan to shrink the clock even further and reduce its average power consumption to a few milliwatts, while cutting its error rate by another one or two orders of magnitude.
This work was supported by a National Science Foundation CAREER award, MIT Lincoln Laboratory, MIT Center of Integrated Circuits and Systems, and a Texas Instruments Fellowship.
Personalized machine-learning models capture subtle variations in facial expressions to better gauge how we feel.
MIT Media Lab researchers have developed a machine-learning model that takes computers a step closer to interpreting our emotions as naturally as humans do.
In the growing field of “affective computing,” robots and computers are being developed to analyze facial expressions, interpret our emotions, and respond accordingly. Applications include, for instance, monitoring an individual’s health and well-being, gauging student interest in classrooms, helping diagnose signs of certain diseases, and developing helpful robot companions.
A challenge, however, is that people express emotions quite differently, depending on many factors. General differences can be seen among cultures, genders, and age groups. But other differences are even more fine-grained: The time of day, how much you slept, or even your level of familiarity with a conversation partner leads to subtle variations in the way you express, say, happiness or sadness in a given moment.
Human brains instinctively catch these deviations, but machines struggle. Deep-learning techniques were developed in recent years to help catch the subtleties, but they’re still not as accurate or as adaptable across different populations as they could be.
The Media Lab researchers have developed a machine-learning model that outperforms traditional systems in capturing these small facial-expression variations, gauging mood more accurately as it trains on thousands of images of faces. Moreover, with a little extra training data, the model can be adapted to an entirely new group of people with the same efficacy. The aim is to improve existing affective-computing technologies.
“This is an unobtrusive way to monitor our moods,” says Oggi Rudovic, a Media Lab researcher and co-author on a paper describing the model, which was presented last week at the Conference on Machine Learning and Data Mining. “If you want robots with social intelligence, you have to make them intelligently and naturally respond to our moods and emotions, more like humans.”
Co-authors on the paper are: first author Michael Feffer, an undergraduate student in electrical engineering and computer science; and Rosalind Picard, a professor of media arts and sciences and founding director of the Affective Computing research group.
Traditional affective-computing models use a “one-size-fits-all” concept. They train on one set of images depicting various facial expressions, optimizing features — such as how a lip curls when smiling — and mapping those general feature optimizations across an entire set of new images.
The researchers, instead, combined a technique called “mixture of experts” (MoE) with model personalization techniques, which helped them mine more fine-grained facial-expression data from individuals. This is the first time these two techniques have been combined for affective computing, Rudovic says.
In MoEs, a number of neural network models, called “experts,” are each trained to specialize in a separate processing task and produce one output. The researchers also incorporated a “gating network,” which calculates probabilities of which expert will best detect moods of unseen subjects. “Basically the network can discern between individuals and say, ‘This is the right expert for the given image,’” Feffer says.
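A minimal sketch of that idea, not the paper’s actual architecture: per-expert prediction heads whose outputs are blended by a softmax gating network. The feature size, number of experts, and linear heads below are illustrative assumptions.

```python
# Minimal mixture-of-experts sketch (illustrative, not the researchers' model):
# each "expert" maps frame features to a valence/arousal pair, and a gating
# network weights the experts for every input frame.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, feature_dim=128, num_experts=9, output_dim=2):
        super().__init__()
        # One small expert head per training subject (hypothetical sizing).
        self.experts = nn.ModuleList(
            [nn.Linear(feature_dim, output_dim) for _ in range(num_experts)]
        )
        # Gating network: a probability over experts for each input.
        self.gate = nn.Sequential(
            nn.Linear(feature_dim, num_experts),
            nn.Softmax(dim=-1),
        )

    def forward(self, features):
        weights = self.gate(features)                                       # (batch, experts)
        outputs = torch.stack([e(features) for e in self.experts], dim=1)   # (batch, experts, 2)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)                 # (batch, 2)

model = MixtureOfExperts()
frame_features = torch.randn(4, 128)  # stand-in for ResNet features of four video frames
print(model(frame_features).shape)    # torch.Size([4, 2]): valence and arousal per frame
```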
For their model, the researchers personalized the MoEs by matching each expert to one of 18 individual video recordings in the RECOLA database, a public database of people conversing on a video-chat platform designed for affective-computing applications. They trained the model using nine subjects and evaluated it on the other nine, with all videos broken down into individual frames.
Each expert, and the gating network, tracked facial expressions of each individual, with the help of a residual network (“ResNet”), a neural network used for object classification. In doing so, the model scored each frame based on level of valence (pleasant or unpleasant) and arousal (excitement) — commonly used metrics to encode different emotional states. Separately, six human experts labeled each frame for valence and arousal, based on a scale of -1 (low levels) to 1 (high levels), which the model also used for training.
The researchers then performed further model personalization, where they fed the trained model data from some frames of the remaining videos of subjects, and then tested the model on all unseen frames from those videos. Results showed that, with just 5 to 10 percent of data from the new population, the model outperformed traditional models by a large margin — meaning it scored valence and arousal on unseen images much closer to the interpretations of human experts.
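A sketch of what that personalization step might look like in code, under assumed training details (the optimizer, loss, and step count are not from the paper): fine-tune an already-trained model on a small labeled slice of a new subject’s frames before evaluating it on the rest.

```python
# Illustrative personalization sketch (assumed details, not the paper's procedure):
# fine-tune a trained model on ~5 percent of a new subject's labeled frames,
# then evaluate it on that subject's remaining frames.
import torch

def personalize(model, frames, labels, fraction=0.05, steps=100, lr=1e-4):
    """Fine-tune `model` on a small labeled slice of a new subject's data."""
    n = max(1, int(fraction * len(frames)))
    x, y = frames[:n], labels[:n]
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)  # valence/arousal regression against human labels
        loss.backward()
        optimizer.step()
    return model

# The held-out frames would then be scored by the personalized model and
# compared against the human experts' -1 to 1 annotations.
```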
This shows the potential of the models to adapt from population to population, or individual to individual, with very little data, Rudovic says. “That’s key,” he says. “When you have a new population, you have to have a way to account for shifting of data distribution [subtle facial variations]. Imagine a model set to analyze facial expressions in one culture that needs to be adapted for a different culture. Without accounting for this data shift, those models will underperform. But if you just sample a bit from a new culture to adapt our model, these models can do much better, especially on the individual level. This is where the importance of the model personalization can best be seen.”
Currently available data for such affective-computing research isn’t very diverse in skin colors, so the researchers’ training data were limited. But when such data become available, the model can be trained for use on more diverse populations. The next step, Feffer says, is to train the model on “a much bigger dataset with more diverse cultures.”
Better machine-human interactions
Another goal is to train the model to help computers and robots automatically learn from small amounts of changing data to more naturally detect how we feel and better serve human needs, the researchers say.
It could, for example, run in the background of a computer or mobile device to track a user’s video-based conversations and learn subtle facial expression changes under different contexts. “You can have things like smartphone apps or websites be able to tell how people are feeling and recommend ways to cope with stress or pain, and other things that are impacting their lives negatively,” Feffer says.
This could also be helpful in monitoring, say, depression or dementia, as people’s facial expressions tend to subtly change due to those conditions. “Being able to passively monitor our facial expressions,” Rudovic says, “we could over time be able to personalize these models to users and monitor how much deviations they have on daily basis — deviating from the average level of facial expressiveness — and use it for indicators of well-being and health.”
A promising application, Rudovic says, is human-robotic interactions, such as for personal robotics or robots used for educational purposes, where the robots need to adapt to assess the emotional states of many different people. One version, for instance, has been used in helping robots better interpret the moods of children with autism.
Roddy Cowie, professor emeritus of psychology at the Queen’s University Belfast and an affective computing scholar, says the MIT work “illustrates where we really are” in the field. “We are edging toward systems that can roughly place, from pictures of people’s faces, where they lie on scales from very positive to very negative, and very active to very passive,” he says. “It seems intuitive that the emotional signs one person gives are not the same as the signs another gives, and so it makes a lot of sense that emotion recognition works better when it is personalized. The method of personalizing reflects another intriguing point, that it is more effective to train multiple ‘experts,’ and aggregate their judgments, than to train a single super-expert. The two together make a satisfying package.”