Are You Ready for Workplace Brain Scanning?
Get ready: Neurotechnology is coming to the workplace. Neural sensors are now reliable and affordable enough to support commercial pilot projects that extract productivity-enhancing data from workers’ brains. These projects aren’t confined to specialized workplaces; they’re also happening in offices, factories, farms, and airports. The companies and people behind these neurotech devices are certain that they will improve our lives. But there are serious questions about whether work should be organized around certain functions of the brain, rather than the person as a whole.
To be clear, the kind of neurotech that’s currently available is nowhere close to reading minds. Sensors detect electrical activity across different areas of the brain, and the patterns in that activity can be broadly correlated with different feelings or physiological responses, such as stress, focus, or a reaction to external stimuli. These data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier. Two of the most interesting innovators in this field are the Israel-based startup InnerEye, which aims to give workers superhuman abilities, and Emotiv, a Silicon Valley neurotech company that’s bringing a brain-tracking wearable to office workers, including those working remotely.
The fundamental technology that these companies rely on is not new: Electroencephalography (EEG) has been around for about a century, and it’s commonly used today in both medicine and neuroscience research. For those applications, the subject may have up to 256 electrodes attached to their scalp with conductive gel to record electrical signals from neurons in different parts of the brain. More electrodes, or “channels,” mean that doctors and scientists can get better spatial resolution in their readouts—they can better tell which neurons are associated with which electrical signals.
What is new is that EEG has recently broken out of clinics and labs and has entered the consumer marketplace. This move has been driven by a new class of “dry” electrodes that can operate without conductive gel, a substantial reduction in the number of electrodes necessary to collect useful data, and advances in artificial intelligence that make it far easier to interpret the data. Some EEG headsets are even available directly to consumers for a few hundred dollars.
While the public may not have gotten the memo, experts say the neurotechnology is mature and ready for commercial applications. “This is not sci-fi,” says James Giordano, chief of neuroethics studies at Georgetown University Medical Center. “This is quite real.”
How InnerEye’s TSA-boosting technology works
InnerEye Security Screening Demo
In an office in Herzliya, Israel, Sergey Vaisman sits in front of a computer. He’s relaxed but focused, silent and unmoving, and not at all distracted by the seven-channel EEG headset he’s wearing. On the computer screen, images rapidly appear and disappear, one after another. At a rate of three images per second, it’s just possible to tell that they come from an airport X-ray scanner. It’s essentially impossible to see anything beyond fleeting impressions of ghostly bags and their contents.
“Our brain is an amazing machine,” Vaisman tells us as the stream of images ends. The screen now shows an album of selected X-ray images that were just flagged by Vaisman’s brain, most of which are now revealed to have hidden firearms. No one can consciously identify and flag firearms among the jumbled contents of bags when three images are flitting by every second, but Vaisman’s brain has no problem doing so behind the scenes, with no action required on his part. The brain processes visual imagery very quickly. According to Vaisman, the decision-making process to determine whether there’s a gun in complex images like these takes just 300 milliseconds.
What takes much more time are the cognitive and motor processes that occur after the decision making—planning a response (such as saying something or pushing a button) and then executing that response. If you can skip these planning and execution phases and instead use EEG to directly access the output of the brain’s visual processing and decision-making systems, you can perform image-recognition tasks far faster. The user no longer has to actively think: For an expert, just that fleeting first impression is enough for their brain to make an accurate determination of what’s in the image.
InnerEye’s image-classification system operates at high speed by providing a shortcut to the brain of an expert human. As an expert focuses on a continuous stream of images (from three to 10 images per second, depending on complexity), a commercial EEG system combined with InnerEye’s software can distinguish the characteristic response the expert’s brain produces when it recognizes a target. In this example, the target is a weapon in an X-ray image of a suitcase, representing an airport-security application.
Vaisman is the vice president of R&D of InnerEye, an Israel-based startup that recently came out of stealth mode. InnerEye uses deep learning to classify EEG signals into responses that indicate “targets” and “nontargets.” Targets can be anything that a trained human brain can recognize. In addition to developing security screening, InnerEye has worked with doctors to detect tumors in medical images, with farmers to identify diseased plants, and with manufacturing experts to spot product defects. For simple cases, InnerEye has found that our brains can handle image recognition at rates of up to 10 images per second. And, Vaisman says, the company’s system produces results that are just as accurate as those of a human recognizing and tagging images manually—InnerEye is merely using EEG as a shortcut to that person’s brain to drastically speed up the process.
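For a sense of how a target/nontarget EEG classifier of this general kind can be put together, here is a toy sketch on synthetic data. It uses a simple linear model rather than InnerEye’s deep-learning pipeline, and every number in it (channel count, epoch length, the simulated 300-millisecond deflection) is illustrative rather than drawn from the company’s system.

```python
# Toy sketch of target/nontarget EEG classification, in the spirit of rapid serial
# visual presentation (RSVP) brain-computer interfaces. Synthetic data stands in for
# real recordings; InnerEye's actual system uses deep learning on multichannel EEG.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 400, 7, 128   # e.g., a 7-channel headset, 0.5-second epochs at 256 Hz
y = rng.integers(0, 2, n_epochs)                # 1 = target image, 0 = nontarget

# Fake EEG epochs: target trials get a small added deflection roughly 300 ms after image onset.
X = rng.normal(0.0, 1.0, (n_epochs, n_channels, n_samples))
p300_window = slice(70, 90)                     # crude stand-in for the ~300 ms response window
X[y == 1, :, p300_window] += 0.6

# Flatten each epoch into a feature vector and train a simple linear classifier.
X_flat = X.reshape(n_epochs, -1)
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"cross-validated target-detection accuracy: {scores.mean():.2f}")
```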
While using the InnerEye technology doesn’t require active decision making, it does require training and focus. Users must be experts at the task, well trained in identifying a given type of target, whether that’s firearms or tumors. They must also pay close attention to what they’re seeing—they can’t just zone out and let images flash past. InnerEye’s system measures focus very accurately, and if the user blinks or stops concentrating momentarily, the system detects it and shows the missed images again.
In interactive demos accompanying the online version of this article (courtesy of InnerEye), readers can try the task themselves: one loop shows 10 images per second for five seconds and contains three manufacturing defects; another shows three images per second for five seconds and contains a single weapon.
Having a human brain in the loop is especially important for classifying data that may be open to interpretation. For example, a well-trained image classifier may be able to determine with reasonable accuracy whether an X-ray image of a suitcase shows a gun, but if you want to determine whether that X-ray image shows something else that’s vaguely suspicious, you need human experience. People are capable of detecting something unusual even if they don’t know quite what it is.
“We can see that uncertainty in the brain waves,” says InnerEye founder and chief technology officer Amir Geva. “We know when they aren’t sure.” Humans have a unique ability to recognize and contextualize novelty, a substantial advantage that InnerEye’s system has over AI image classifiers. InnerEye then feeds that nuance back into its AI models. “When a human isn’t sure, we can teach AI systems to be not sure, which is better training than teaching the AI system just one or zero,” says Geva. “There is a need to combine human expertise with AI.” InnerEye’s system enables this combination, as every image can be classified by both computer vision and a human brain.
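Geva’s point about teaching AI systems “to be not sure” maps onto what machine-learning practitioners call soft labels. The sketch below (invented numbers, not InnerEye’s training code) shows how a label derived from graded human confidence changes the cross-entropy loss a model is trained against, compared with a hard 0-or-1 label.

```python
# Sketch: cross-entropy against hard vs. soft labels.
# The soft label (0.7) stands in for a graded "how sure was the expert's brain" signal;
# all numbers here are invented for illustration.
import numpy as np

def cross_entropy(label: float, p: float) -> float:
    """Binary cross-entropy between a (possibly soft) label and a predicted probability p."""
    return -(label * np.log(p) + (1.0 - label) * np.log(1.0 - p))

for p_model in (0.70, 0.99):
    hard = cross_entropy(1.0, p_model)   # trained against the expert's final yes/no decision
    soft = cross_entropy(0.7, p_model)   # trained against the graded confidence instead
    print(f"p = {p_model:.2f}   loss vs. hard label = {hard:.3f}   loss vs. soft label = {soft:.3f}")

# Against the hard label, the loss keeps shrinking as the model pushes toward p = 1.0.
# Against the soft label, the loss is lowest near p = 0.7, so the model learns to
# reproduce the human's uncertainty rather than an overconfident yes/no.
```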
Using InnerEye’s system is a positive experience for its users, the company claims. “When we start working with new users, the first experience is a bit overwhelming,” Vaisman says. “But in one or two sessions, people get used to it, and they start to like it.” Geva says some users do find it challenging to maintain constant focus throughout a session, which lasts up to 20 minutes, but once they get used to working at three images per second, even two images per second feels “too slow.”
In a security-screening application, three images per second is approximately an order of magnitude faster than an expert can manually achieve. InnerEye says its system allows far fewer humans to handle far more data, with just two human experts redundantly overseeing 15 security scanners at once, supported by an AI image-recognition system that is being trained at the same time, using the output from the humans’ brains.
InnerEye is currently partnering with a handful of airports around the world on pilot projects. And it’s not the only company working to bring neurotech into the workplace.
How Emotiv’s brain-tracking technology works
Emotiv’s MN8 earbuds collect two channels of EEG brain data. The earbuds can also be used for phone calls and music.
When it comes to neural monitoring for productivity and well-being in the workplace, the San Francisco–based company Emotiv is leading the charge. Since its founding 11 years ago, Emotiv has released three models of lightweight brain-scanning headsets. Until now the company had mainly sold its hardware to neuroscientists, with a sideline business aimed at developers of brain-controlled apps or games. Emotiv started advertising its technology as an enterprise solution only this year, when it released its fourth model, the MN8 system, which tucks brain-scanning sensors into a pair of discreet Bluetooth earbuds.
Tan Le, Emotiv’s CEO and cofounder, sees neurotech as the next trend in wearables, a way for people to get objective “brain metrics” of mental states, enabling them to track and understand their cognitive and mental well-being. “I think it’s reasonable to imagine that five years from now this [brain tracking] will be quite ubiquitous,” she says. When a company uses the MN8 system, workers get insight into their individual levels of focus and stress, and managers get aggregated and anonymous data about their teams.
The Emotiv Experience
Emotiv’s MN8 system uses earbuds to capture two channels of EEG data, from which the company’s proprietary algorithms derive performance metrics for attention and cognitive stress. It’s very difficult to draw conclusions from raw EEG signals [top], especially with only two channels of data. The MN8 system relies on machine-learning models that Emotiv developed using a decade’s worth of data from its earlier headsets, which have more electrodes.
To determine a worker’s level of attention and cognitive stress, the MN8 system uses a variety of analyses. One shown here [middle, bar graphs] reveals increased activity in the low-frequency ranges (theta and alpha) when a worker’s attention is high and cognitive stress is low; when the worker has low attention and high stress, there’s more activity in the higher-frequency ranges (beta and gamma). This analysis and many others feed into the models that present simplified metrics of attention and cognitive stress [bottom] to the worker.
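As a rough illustration of that kind of band-power analysis, here is a generic signal-processing sketch on synthetic two-channel data. The band definitions are standard, but the summary “attention” and “stress” scores at the end are invented for illustration and are not Emotiv’s proprietary metrics.

```python
# Sketch: compute EEG band powers from a two-channel window and form crude
# attention/stress scores. Synthetic data; Emotiv's real metrics come from
# machine-learning models trained on years of multichannel recordings.
import numpy as np
from scipy.signal import welch

fs = 128                                    # assumed sampling rate, in Hz
t = np.arange(0, 10, 1 / fs)                # a 10-second analysis window
rng = np.random.default_rng(1)

# Two fake EEG channels: noise plus a 10 Hz (alpha) rhythm and a weaker 25 Hz (beta) rhythm.
eeg = (rng.normal(0.0, 1.0, (2, t.size))
       + 2.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 25 * t))

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)           # power spectral density per channel

def band_power(lo: float, hi: float) -> float:
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[:, mask].mean())                    # average over both channels and the band

power = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}

# Invented summary scores, loosely mirroring the pattern described above:
# more low-frequency power counts toward "attention", more high-frequency power toward "stress".
attention_score = power["theta"] + power["alpha"]
stress_score = power["beta"] + power["gamma"]
print({name: round(val, 3) for name, val in power.items()})
print(f"attention score: {attention_score:.2f}   stress score: {stress_score:.2f}")
```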
Emotiv launched its enterprise technology into a world that is fiercely debating the future of the workplace. Workers are feuding with their employers about return-to-office plans following the pandemic, and companies are increasingly using “bossware” to keep tabs on employees—whether staffers or gig workers, working in the office or remotely. Le says Emotiv is aware of these trends and is carefully considering which companies to work with as it debuts its new gear. “The dystopian potential of this technology is not lost on us,” she says. “So we are very cognizant of choosing partners that want to introduce this technology in a responsible way—they have to have a genuine desire to help and empower employees.”
Lee Daniels, a consultant who works for the global real estate services company JLL, has spoken with a lot of C-suite executives lately. “They’re worried,” says Daniels. “There aren’t as many people coming back to the office as originally anticipated—the hybrid model is here to stay, and it’s highly complex.” Executives come to Daniels asking how to manage a hybrid workforce. “This is where the neuroscience comes in,” he says.
Emotiv has partnered with JLL, which has begun to use the MN8 earbuds to help its clients collect “true scientific data,” Daniels says, about workers’ attention, distraction, and stress, and how those factors influence both productivity and well-being. Daniels says JLL is currently helping its clients run short-term experiments using the MN8 system to track workers’ responses to new collaboration tools and various work settings; for example, employers could compare the productivity of in-office and remote workers.
Emotiv CTO Geoff Mackellar believes the new MN8 system will succeed because of its convenient and comfortable form factor: The multipurpose earbuds also let the user listen to music and answer phone calls. The downside of earbuds is that they provide only two channels of brain data. When the company first considered this project, Mackellar says, his engineering team looked at the rich data set they’d collected from Emotiv’s other headsets over the past decade. The company boasts that academics have conducted more than 4,000 studies using Emotiv tech. From that trove of data—from headsets with 5, 14, or 32 channels—Emotiv isolated the data from the two channels the earbuds could pick up. “Obviously, there’s less information in the two sensors, but we were able to extract quite a lot of things that were very relevant,” Mackellar says.
Once the Emotiv engineers had a hardware prototype, they had volunteers wear the earbuds and a 14-channel headset at the same time. By recording data from the two systems in unison, the engineers trained a machine-learning algorithm to identify the signatures of attention and cognitive stress from the relatively sparse MN8 data. The brain signals associated with attention and stress have been well studied, Mackellar says, and are relatively easy to track. Although everyday activities such as talking and moving around also register on EEG, the Emotiv software filters out those artifacts.
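A schematic version of that calibration step might look like the sketch below: derive reference attention labels from the richer headset recording and fit a model that predicts them from features of the two earbud channels. The data and the regression model here are stand-ins; Emotiv’s actual features, labels, and algorithms are proprietary.

```python
# Sketch: learn to predict an attention label derived from a full headset
# using only features from two earbud channels. Everything here is synthetic
# and illustrative; it is not Emotiv's model or data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_windows = 2000

# Features extracted from the two earbud channels for each time window
# (for example, band powers); here they are just random stand-ins.
earbud_features = rng.normal(size=(n_windows, 8))

# Reference attention score for each window, as it might be derived from the
# 14-channel headset recorded at the same time (here: a noisy linear function).
true_weights = rng.normal(size=8)
headset_attention = earbud_features @ true_weights + rng.normal(0.0, 0.5, n_windows)

X_train, X_test, y_train, y_test = train_test_split(
    earbud_features, headset_attention, test_size=0.25, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"R^2 on held-out windows: {model.score(X_test, y_test):.2f}")
# In deployment only the earbud features are available, and the trained model
# stands in for the metric the full headset would have produced.
```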
The app that’s paired with the MN8 earbuds doesn’t display raw EEG data. Instead, it processes that data and shows workers two simple metrics relating to their individual performance. One squiggly line shows the rise and fall of workers’ attention to their tasks—the degree of focus and the dips that come when they switch tasks or get distracted—while another line represents their cognitive stress. Although short periods of stress can be motivating, too much for too long can erode productivity and well-being. The MN8 system will therefore sometimes suggest that the worker take a break. Workers can run their own experiments to see what kind of break activity best restores their mood and focus—maybe taking a walk, or getting a cup of coffee, or chatting with a colleague.
What neuroethicists think about neurotech in the workplace
While MN8 users can easily access data from their own brains, employers don’t see individual workers’ brain data. Instead, they receive aggregated data to get a sense of a team or department’s attention and stress levels. With that data, companies can see, for example, on which days and at which times of day their workers are most productive, or how a big announcement affects the overall level of worker stress.
Emotiv emphasizes the importance of anonymizing the data to protect individual privacy and prevent people from being promoted or fired based on their brain metrics. “The data belongs to you,” says Emotiv’s Le. “You have to explicitly allow a copy of it to be shared anonymously with your employer.” If a group is too small for real anonymity, Le says, the system will not share that data with employers. She also predicts that the device will be used only if workers opt in, perhaps as part of an employee wellness program that offers discounts on medical insurance in return for using the MN8 system regularly.
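A toy sketch of that kind of aggregation rule appears below: individual scores are averaged per team, and any group smaller than a threshold is suppressed rather than reported. The threshold, field names, and data are all invented; Emotiv has not published its implementation.

```python
# Sketch: report only aggregated team metrics, and suppress groups that are too
# small to be meaningfully anonymous. Threshold and data are invented.
from statistics import mean

MIN_GROUP_SIZE = 5  # assumed policy: below this size, no aggregate is shared with the employer

# worker -> (team, attention score, stress score); all values are synthetic
records = [
    ("ana",  "design", 0.72, 0.31),
    ("bo",   "design", 0.64, 0.44),
    ("chen", "design", 0.58, 0.52),
    ("dia",  "design", 0.81, 0.25),
    ("eli",  "design", 0.69, 0.38),
    ("fay",  "legal",  0.55, 0.61),
    ("gus",  "legal",  0.60, 0.47),
]

teams = {}
for _, team, attention, stress in records:
    teams.setdefault(team, []).append((attention, stress))

for team, scores in teams.items():
    if len(scores) < MIN_GROUP_SIZE:
        print(f"{team}: group too small for real anonymity, nothing shared")
        continue
    avg_attention = mean(a for a, _ in scores)
    avg_stress = mean(s for _, s in scores)
    print(f"{team}: average attention {avg_attention:.2f}, average stress {avg_stress:.2f}")
```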
However, workers may still be worried that employers will somehow use the data against them. Karen Rommelfanger, founder of the Institute of Neuroethics, shares that concern. “I think there is significant interest from employers” in using such technologies, she says. “I don’t know if there’s significant interest from employees.”
Both she and Georgetown’s Giordano doubt that such tools will become commonplace anytime soon. “I think there will be pushback” from employees on issues such as privacy and worker rights, says Giordano. Even if the technology providers and the companies that deploy the technology take a responsible approach, he expects questions to be raised about who owns the brain data and how it’s used. “Perceived threats must be addressed early and explicitly,” he says.
Giordano says he expects workers in the United States and other Western countries to object to routine brain scanning. In China, he says, workers have reportedly been more receptive to experiments with such technologies. He also believes that brain-monitoring devices will really take off first in industrial settings, where a momentary lack of attention can lead to accidents that injure workers and hurt a company’s bottom line. “It will probably work very well under some rubric of occupational safety,” Giordano says. It’s easy to imagine such devices being used by companies involved in trucking, construction, warehouse operations, and the like. Indeed, at least one such product, an EEG headband that measures fatigue, is already on the market for truck drivers and miners.
Giordano says that using brain-tracking devices for safety and wellness programs could be a slippery slope in any workplace setting. Even if a company focuses initially on workers’ well-being, it may soon find other uses for the metrics of productivity and performance that devices like the MN8 provide. “Metrics are meaningless unless those metrics are standardized, and then they very quickly become comparative,” he says.
Rommelfanger adds that no one can foresee how workplace neurotech will play out. “I think most companies creating neurotechnology aren’t prepared for the society that they’re creating,” she says. “They don’t know the possibilities yet.”
This article appears in the December 2022 print issue.
As powerful as quantum computers may one day prove, quantum physics can make it challenging for the machines to carry out quantum versions of the most basic computing operations. Now scientists in China have created a more practical quantum version of the simple AND operation, which may help quantum computing reach successful near-term applications.
Conventional electronics nowadays rely on transistors, which flick on or off to symbolize data as ones and zeroes. Transistors are connected together to build devices known as logic gates, which implement logical operations such as AND, OR, and NOT. Logic gates are the building blocks of all digital circuits.
In contrast, quantum computers depend on components known as quantum bits or “qubits.” These can exist in a quantum state known as superposition, in which they are essentially both 1 and 0 at the same time. Quantum computers work by running quantum algorithms, which describe sequences of elementary operations called quantum logic gates applied to a set of qubits.
Superposition essentially lets each qubit take part in two computational paths at once, and each additional qubit doubles the number of states a quantum computer can hold in superposition, so its computational power can grow exponentially with the number of qubits. With enough qubits, a quantum computer could theoretically vastly outperform all classical computers on certain tasks. For instance, on quantum computers, Shor’s algorithm can crack modern cryptography, and Grover’s algorithm can search databases at sometimes staggering speeds.
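To see where that exponential scaling comes from, here is the standard textbook way to write the state of an n-qubit register (general background, not a result from the new paper): it is a weighted superposition of all 2^n classical bit strings, so describing it classically can require tracking up to 2^n amplitudes.

```latex
% State of an n-qubit register: a superposition of all 2^n bit strings.
\[
\lvert \psi \rangle \;=\; \sum_{x \in \{0,1\}^{n}} \alpha_x \,\lvert x \rangle ,
\qquad \sum_{x} \lvert \alpha_x \rvert^{2} = 1 .
\]
% A classical description must, in general, track all 2^n complex amplitudes alpha_x.
```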
However, quantum computers face a physical limitation: All quantum operations must be reversible in order to work. In other words, a quantum computer may perform an operation only if it can also carry out an opposite operation that returns it to its original state. (Reversibility is necessary until a quantum computation is run and its results measured.)
In everyday life, many actions are reversible—for example, you can both tie and untie shoelaces. Others are irreversible—for instance, you can cook an egg but not uncook it.
Similarly, some logical operations are reversible—you can apply the NOT operation to a bit and then apply it again to return that bit to its original state. Others are irreversible—the AND operation, for instance, outputs 1 only when both of its inputs are 1, so an output of 0 could have come from any of three different input pairs. Given only the output, you cannot recover the inputs unless you already know at least one of them.
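In symbols (standard background, not specific to the new work): AND on two bits maps three different inputs to the same output, so it has no inverse, while carrying the inputs along and writing the result into a third bit gives the reversible Toffoli map.

```latex
% AND is not injective: three inputs collapse onto the same output.
\[
(0,0)\mapsto 0, \qquad (0,1)\mapsto 0, \qquad (1,0)\mapsto 0, \qquad (1,1)\mapsto 1 .
\]
% Embedding AND into a three-bit map (the classical Toffoli gate) makes it reversible:
\[
(a,\, b,\, c) \;\mapsto\; \bigl(a,\, b,\, c \oplus (a \wedge b)\bigr),
\]
% which is a bijection and, applied twice, returns every input to itself.
```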
The AND gate is a fundamental ingredient of both classical and quantum algorithms. However, the demand for reversibility in quantum computing makes it challenging to implement. One workaround is to essentially use an extra or “ancilla” qubit for each AND gate that stores the data needed to reverse the operation.
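Here is a minimal sketch of that ancilla-qubit workaround, written with Qiskit purely for illustration (the researchers used their own superconducting hardware and tooling, not this software): a Toffoli (CCX) gate writes a AND b into a fresh ancilla while leaving the inputs untouched, and applying the same gate again undoes it.

```python
# Sketch: reversible AND using one ancilla qubit and a Toffoli (CCX) gate.
# Qiskit is used here only as a convenient illustration, not the authors' tooling.
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, name="reversible_AND")  # qubits 0 and 1 are inputs a and b; qubit 2 is the ancilla, starting in |0>
qc.ccx(0, 1, 2)  # ancilla <- ancilla XOR (a AND b); with the ancilla in |0>, it now holds a AND b
qc.ccx(0, 1, 2)  # because a and b are untouched, applying CCX again restores the ancilla to |0>

print(qc.draw())  # text diagram of the three-qubit circuit
```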
However, quantum computers are currently noisy intermediate-scale quantum (NISQ) platforms, meaning their qubits number up to a few hundred at most and are error-ridden as well. Given quantum computing’s primitive state right now, it would prove “extremely cumbersome to design and build hardware for accommodating extra ancilla qubits on an already crowded processor,” says study cosenior author Fei Yan, a quantum physicist at the Southern University of Science and Technology in Shenzhen, China.
Now Yan and his colleagues have constructed a new quantum version of the AND gate that removes this need for ancilla qubits. By getting rid of this overhead, they say, their new strategy could make quantum computing more efficient and scalable than ever.
“Our work will help narrow the gap between the most anticipated near-term applications and existing noisy devices,” Yan says. “We hope to see quantum AND functionality added to quantum programs on machines elsewhere, such as the IBM quantum cloud, and played with by more people.”
Instead of using ancilla qubits, the new quantum AND gate relies on the fact that qubits often can encode more than just zeroes and ones. In the new study, the researchers had their qubits encode three states. This extra state temporarily holds the data needed to perform the AND operation.
“We do not use any ancilla qubits,” Yan says. “Instead, we use ancilla states.”
In the new study, the scientists implemented quantum AND gates on a superconducting quantum processor with tunable-coupling architecture. Google also employs this architecture with its quantum computers, and IBM plans to start using it in 2023.
“We think that our scheme is well-suited for superconducting qubit systems where ancilla states are abundant and easy to access,” Yan says.
In experiments, the researchers used their quantum AND gate to help construct Toffoli gates, with which quantum computers can implement any classical circuit. Toffoli gates are key elements of many quantum-computing applications, such as Shor’s and Grover’s algorithms and quantum error-correction schemes.
In addition, with six qubits the researchers could run Grover’s algorithm on a database with up to 64 entries. “To our knowledge, previous demonstrations of Grover’s search on any system were limited to 16 entries,” Yan says. This highlights the way in which the quantum AND operation can help scale up quantum computing, he adds.
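For context, here is the standard Grover bookkeeping behind those numbers (textbook arithmetic, not a figure from the paper): six qubits index 2^6 = 64 entries, and finding one marked entry takes on the order of the square root of that.

```latex
% Grover's search with 6 qubits over N = 2^6 = 64 database entries:
\[
N = 2^{6} = 64, \qquad
k_{\mathrm{opt}} \approx \left\lfloor \tfrac{\pi}{4}\sqrt{N} \right\rfloor
              = \left\lfloor \tfrac{\pi}{4}\cdot 8 \right\rfloor = 6
\quad \text{Grover iterations,}
\]
% versus roughly N/2 = 32 lookups on average for an unstructured classical search.
```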
All in all, “what we really want to emphasize is that our technique presents a scaling advantage,” Yan says. “The more qubits are involved, the more cost-saving our technique would be compared to the traditional one.”
Although these experiments were conducted with superconducting qubits, Yan notes that their quantum AND gate could get implemented with other quantum-computing platforms, “such as trapped ions and semiconductor qubits, by utilizing appropriate ancilla levels.”
The scientists detailed their findings online 14 November in the journal Nature Physics.