26th May 2023
Brain Insights is the dedicated student section published in the British Neuroscience Association (BNA) Bulletin. It represents the voice of the BNA student: written by students for students. Selected articles are reproduced here on the BNA website.
In this student article Sudarshan Sreeram, 3rd year MEng Student in Computing at Imperial College London, peers through the lens of science fiction to evaluate the looming future of brain-computer interfaces (BCIs).
Humans have the innate ability to draw up fictional solutions to challenges we encounter in our daily lives. But does thinking in fiction encourage a narrow mindset that closes pathways to innovative solutions that could benefit humanity? Has mass media introduced an unconscious bias that has shifted consumer behaviour towards future technology? Or is fiction a tool to help gauge public reception of new product categories in a more invasive fashion? This article explores the design, functionality and consumer expectations of brain-computer interfaces, a technology ubiquitous in fiction yet scarce in reality.
For many years, science fiction (sci-fi) writers have employed the concept of beings who combine the human body with the technology of machines (cyborgs) to attain novel cognitive and physical prowess. The predominant mechanism through which this is achieved is the integration of artificial components with existing neural pathways, enabling an array of cognitive functions alongside precise motor control. Given the multitude of ways in which this integration can be achieved, sci-fi writers, armed with an unrestricted scope for imagination and creativity, have explored many of them.
For instance, the antagonist in Spider-Man 2 (2004), “Doctor Octopus”, is a cyborg fitted with four highly flexible, artificial intelligence (AI)-powered mechanised arms that borrow visual and functional characteristics from the tentacles of an octopus. This advanced prosthetic was originally intended to assist with research experiments in hazardous working environments. The arms’ AI (i.e. a synthetic brain) allows them to make decisions autonomously on Doctor Octopus’s behalf: albeit a fictional conceit, the arms read his intended movements and act accordingly. Real octopuses do, in fact, exhibit this kind of orchestrated, distributed intelligence. Early in the screenplay, two plot devices are introduced: a “neural link” and an “inhibitor chip”. The former interfaces his neural pathways with the arms’ AI, and the latter, a crucial part of the link, supposedly protects these pathways from being manipulated by said intelligence. Predictably, a failed experiment destroys the chip, opening a reversed stream of communication and control that allows the arms to take over as Doctor Octopus’s sinister alter ego.
This “neural link” is one of many aliases for the term brain-computer interface (BCI); a few others include neural chip, brain-machine interface, cybernetic implant, and neural interface system. The terminology is not universal, owing to the many nuances of the device’s functionality and use cases. Herein, “BCI” is used as a generalised term.
Fictional works best highlight the potential versatility of BCIs: in film franchises such as Star Wars and RoboCop, BCIs are central control units for exoskeletons and advanced prostheses; in video-game series such as Halo and Deus Ex, they are intelligence-augmentation devices that present the bearer with enhanced environmental monitoring and suggested actions based on situational analysis; in serials such as The 100 and Altered Carbon, they store memories and human consciousness, an unusual yet thought-provoking use case. In novels such as Archangel Protocol and Nova, they are identity-authentication systems for accessing the internet, payment networks, electronic devices, and transportation. These creative takes on how BCIs could address diverse, challenging situations across many aspects of our day-to-day lives are only the tip of the iceberg.
A Lesson on Perception
Fictional ideas tend to foster irrational expectations of the real world, as writers recurrently exaggerate them. In the late 1950s, visionaries optimistic about the pace of progress in technology and engineering predicted the public adoption of flying cars by the early 2000s. Sci-fi works from the 1980s that feature flying cars, such as Blade Runner and Back to the Future, contributed to public appeal and awareness of this new form of transportation. Despite the remarkable research and development efforts that have brought personal vertical take-off and landing (VTOL) aircraft a few steps closer to widespread reality, the results fall far short of the visionaries' expectations. As Max Tegmark notes in his renowned book Life 3.0, “history is full of technological over-hyping”.
In stark contrast to fictional ones, real-world BCIs, especially commercially available ones, are primitive. Our limited knowledge of the human brain, coupled with technological limitations, ethical implications, safety regulations, and even economic constraints, has grounded the practical applications of this technology in commercial settings.
Many people unintentionally overstep the line into fictional territory when contemplating state-of-the-art academic contributions, including the research applications of BCIs as prostheses in medical settings (e.g. decoding neural activity to synthesise speech or reconstruct images). Naturally, speculation steers consideration of these applications towards 'mind-reading' or 'thought deciphering', raising concerns about citizen privacy and freedom. Whilst one could ponder how BCIs will impact our lives, they may also help deepen our understanding of what we currently consider to be the “real” world, a world constrained by our very senses and socially agreed perceptions.
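To make "decoding neural activity" slightly more concrete, here is a minimal, purely illustrative Python sketch rather than any real laboratory's pipeline: it simulates band-power-style features for two imagined-movement classes and decodes them with a simple nearest-centroid classifier. The channel count, noise levels and class names are all hypothetical.

```python
# Illustrative toy decoder: simulated neural features in, decoded intent out.
# All numbers and class names are hypothetical, not from any real study.
import numpy as np

rng = np.random.default_rng(seed=0)

N_TRIALS, N_CHANNELS = 200, 8  # hypothetical recording setup
class_names = np.array(["left_hand", "right_hand"])

# Simulate per-trial features: each class has a slightly different mean
# activity pattern across channels, plus trial-to-trial noise.
labels = rng.integers(0, 2, size=N_TRIALS)
class_means = rng.normal(0.0, 1.0, size=(2, N_CHANNELS))
features = class_means[labels] + rng.normal(0.0, 1.5, size=(N_TRIALS, N_CHANNELS))

# Split trials into training and held-out sets.
split = int(0.8 * N_TRIALS)
X_train, y_train = features[:split], labels[:split]
X_test, y_test = features[split:], labels[split:]

# "Train" the decoder: one mean feature vector (centroid) per class.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

# Decode held-out trials by choosing the closest class centroid.
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
predictions = dists.argmin(axis=1)

accuracy = (predictions == y_test).mean()
print("Decoded intents (first 5 held-out trials):", class_names[predictions[:5]])
print(f"Held-out decoding accuracy: {accuracy:.2f}")
```

Real research systems replace every step here, from signal acquisition and feature extraction to the decoder itself, with far more sophisticated methods, but the overall shape (statistical features in, predicted intention out) is the same, and it is a long way from reading arbitrary thoughts.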
Humans typically contemplate the immediately apparent niches before the greater context when discussing any high-impact technology; it's akin to spotting needles before acknowledging the haystack. One common theme is the hard-to-answer ethical and moral questions surrounding BCI usage: who would be liable in a scenario where a BCI-controlled prosthesis inflicts harm on an individual by misinterpreting the bearer's intentions? Many such concerns are already prevalent in discussions surrounding self-driving vehicles; a popular variant is the “trolley dilemma”, in which a vehicle's control unit must decide between risking the lives of its passengers or that of a pedestrian. Similar discussions surround the impact of AI on future employability and job loss due to automation.
Further, there is a need to address socioeconomic inequalities in access to these advanced technologies (e.g. prosthetics) and to develop strategies to avoid social disruption from a wave of “purists” opposed to cognitive enhancement. Discussions on these issues may aid progress towards ethically conscious research efforts and safe, security-focused applications that ultimately benefit humanity.
A Tale of Two Story Arcs
Large corporate entities seeking to commercialise BCIs may employ unethical practices in internal, confidential research efforts; the public's inability to hold them accountable is a concerning issue. Early investments in promising commercial BCI ventures, especially in healthcare and entertainment (gaming), such as NextMind, Neuralink, Neurable, Emotiv and Kernel, have shown great potential for growth in BCI market value. Coupled with the technology's commercial infancy and the lack of BCI-specific regulations, this growth creates an environment with the necessary conditions to spawn domain-specific monopolies. In academic and clinical settings, neuroethicists serve to integrate social benefit into the foundations of these technologies. However, given their limited influence in corporate and commercial settings, a wealth of sensitive data is produced with the scope to be used in a manner that mostly benefits capitalist groups.
Given that public perceptions of this technology are based largely on works of fiction, shaping public regulation of these technologies is a complex issue. Whilst transitions to some “smart” technologies have been relatively seamless because expectations were managed with respect to existing products, as when smartphones absorbed the roles of personal digital assistants, point-and-shoot cameras, sat-navs and handheld music players, and as is the case with “smartwatches”, this will not be the case for every future technology.
In contrast to these integrated technologies, which are increasingly common in the home and on our person, BCIs have no relatable physical predecessor to anchor concrete expectations. Rather, most expectations are drawn from parallels in science fiction. With this in mind, it is unsurprising that the public struggles to gauge the utility and validity of technologists' claims regarding BCIs, including those related to Elon Musk's Neuralink chip, which can supposedly stream music directly to the user's brain. This gap in user expectation and knowledge is one that must be filled to inform future decision-making.
Overall, a creative, multi-disciplinary approach is likely to be the way forward for safely and effectively integrating BCIs into the home, society and, potentially, ourselves. Example investigations may include assessing the benefits of AI for augmenting intelligence or the possibilities of nanotechnology in neurosurgery. In navigating these waters, we must be conscious of the social, ethical and moral implications of these technologies, especially in nascent fields. Safe regulation of emerging technologies will be the cornerstone of managing this technological integration. We must ask ourselves: whilst we have the innate ability to imagine fictional solutions, do we have the flexibility to implement them?