The radical symbiosis of the human body or mind with machines via technological intervention is one area of cutting-edge research in neural engineering that is reminiscent of many speculative science fiction stories about robotics [1]. “You just can’t differentiate between a robot and the very best of humans,” the writer and biochemist Isaac Asimov once warned. This rendezvous is another instance in which the introduction of artificial intelligence (AI) has demonstrated the capability to revolutionize many facets of human life, notably our physical and mental health. Precisely because the failing condition of the human brain is often invisible, there is a pressing need to restore welfare to those who suffer from severe neuropsychiatric or neuromuscular disorders that constrict human flourishing.

   One prospective solution, representative of the work of for-profit neurotechnology companies such as Kernel, Neuralink, and Synchron, is the development of brain-computer interface (BCI) technology. BCIs are electronic feedback systems that record and analyze the brain activity of the user in real time or near-real time using AI algorithms, thereby enabling the user to control an external device (computer cursors, prosthetic limbs, automated wheelchairs, etc.) with their mental faculties [2]. The purpose of BCI is to afford neurologically compromised individuals (i.e., noncommunicative, paralyzed, etc.) some degree of control over their social environment by enabling them to operate a given external machine to compensate for the loss of certain cortical functions (e.g., speech, motor control). In application, BCI can be utilized to treat conditions such as cerebral palsy, spinal cord injury, locked-in syndrome, and amyotrophic lateral sclerosis [3]. Using machine learning techniques, BCIs can also be automated to predict the likelihood of impending seizures in individuals with epilepsy, for example, by detecting precursory neuronal events and subsequently advising the individual to take precautionary measures through sensory cues [4]. 
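To make the seizure-advisory idea concrete, such a pipeline can be pictured as a classifier over EEG-derived features that triggers a sensory cue when the estimated risk crosses a threshold. The following is a minimal sketch, not the system described in [4]: the band-power features, weights, and threshold are all hypothetical placeholders for quantities a real system would learn from the patient’s own recordings.

```python
import math

# Hypothetical pre-seizure detector: a logistic model over two EEG
# band-power features (theta, beta). The weights are illustrative only;
# a real advisory system would fit them to patient-specific data.
WEIGHTS = {"theta": 1.8, "beta": -0.9}
BIAS = -1.0

def seizure_probability(features: dict) -> float:
    """Map band-power features to an estimated seizure likelihood."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def advise(features: dict, threshold: float = 0.5) -> str:
    """Issue a cautionary cue (here, a text message) if risk is high."""
    p = seizure_probability(features)
    return "warning: take precautions" if p >= threshold else "ok"

print(advise({"theta": 2.0, "beta": 0.5}))  # elevated theta power
print(advise({"theta": 0.2, "beta": 1.0}))  # baseline activity
```

The point of the sketch is only the shape of the loop: continuous feature extraction, a learned risk estimate, and a threshold that converts the estimate into a user-facing cue.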

   BCI technology falls into three distinct categories according to the amount of volitional control needed to generate signals: active, reactive, and passive [5]. Active BCIs require the user to deliberately produce specific neuronal patterns, such as mentally picturing the movement of certain body parts. Reactive BCIs have the user voluntarily attend to one external stimulus among several in order to evoke changes in brain activity. Finally, passive BCIs rely on the involuntary brain activity of the individual, such as mental workload or affective states.

   In practice, BCIs (i.e., microelectrodes) are implanted on the surface of or within the cerebral cortex by means of invasive surgery to establish the intimate connection between the brain and an external machine that translates a mental process into an executable output [2]. One example of such an invasive BCI is deep brain stimulation (DBS), which employs a two-directional (closed-loop) system as opposed to the one-directional algorithm of other common BCIs [6]. DBS is effective in the treatment of movement disorders such as Parkinson’s disease and dystonia, as well as obsessive-compulsive disorder. DBS requires the implantation of electrodes into certain areas of the brain, which generate electrical impulses that can subdue abnormal activity in targeted brain regions (i.e., the subthalamic nucleus or the globus pallidus internus) and regulate the chemical imbalances that are characteristic of specific circuitopathies [7]. The required electrical stimulation is delivered through an extension wire connected to an internal pulse generator implanted under the skin of the upper chest. Noninvasive BCI methods allow brain activity to be recorded from the scalp using neuroimaging instruments (e.g., EEG, fMRI, NIRS), thereby eliminating the need for functional neurosurgery [8]. Interestingly, the work of Rakhmatulin and Volkl has demonstrated that a simple, noninvasive BCI device can be made inexpensively with a Raspberry Pi to interpret EEG patterns and allow the user to control mechanical objects—unlike other forms of BCI, which pose challenges of affordability and equity due to the high cost of the technology [9]. 
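The two-directional character of closed-loop DBS can be illustrated as a feedback loop: the device reads a biomarker (for instance, pathological band power in a target nucleus) and writes stimulation back, adjusting its amplitude so the biomarker approaches a target level. The sketch below is a toy proportional controller with invented numbers and a toy “brain response”; real closed-loop systems [6,7] involve far more sophisticated sensing, decoding, and safety logic.

```python
# Toy closed-loop DBS controller. All constants are illustrative.
TARGET = 1.0      # desired biomarker level (arbitrary units)
GAIN = 0.5        # proportional controller gain
MAX_AMP = 3.0     # safety cap on stimulation amplitude

def update_stimulation(amplitude: float, biomarker: float) -> float:
    """One control step: raise stimulation when the biomarker is
    above target, lower it when below, clamped to a safe range."""
    error = biomarker - TARGET
    amplitude += GAIN * error
    return min(max(amplitude, 0.0), MAX_AMP)

# Simulate a pathological signal that stimulation gradually suppresses.
amp, biomarker = 0.0, 3.0
for _ in range(20):
    amp = update_stimulation(amp, biomarker)
    biomarker = max(TARGET, 3.0 - 0.8 * amp)  # toy brain response
print(round(amp, 2), round(biomarker, 2))
```

Contrast this with a one-directional BCI, which would only decode (read) without the stimulation (write) path; the closed loop is what lets the device continuously regulate activity rather than merely report it.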

   While many instances of BCI technology are still in their infancy and undergoing clinical trials, further applications of BCI can be extrapolated to commercial purposes such as entertainment, especially in the video game industry [2], and even as far as augmenting one’s natural intelligence or military combat abilities for cognitive or physical enhancement, respectively, and allowing for brain-to-brain communication among users [10,11]. Only the clinical applications of invasive BCIs (all mentions of BCIs hereafter refer to invasive BCIs, unless stated otherwise) will be subjected to discussion and scrutiny herein, however, for the development of BCI technology and its application to medicine and healthcare is an ethically questionable undertaking in spite of its novelty and benefits. One specific peculiarity in our discussion poses the following ethical challenge: in patients with BCI implants, the degree of autonomy and agency can potentially be altered [12,13]. This, in turn, could impact the accountability, privacy, and identity of patients on a psychological, social, and legal level [10]—along with other iatrogenic complications (i.e., acute trauma, glial scarring) from BCI-induced effects that raise concerns for safety and could ultimately transmute the hope of yesterday into the despair of tomorrow [2].

   It is of foremost importance to distinguish autonomy from agency, as both terms assume that the individual in question is a free agent with conscious control over their thoughts and actions. Whereas autonomy refers to the ability to independently choose a course of action using one’s reason and knowledge, free of imposed interference or limitation [10], agency refers to the ability to influence a course of action as desired and to feel ownership of those actions [13]. These are merely general definitions, however, as philosophers of different schools of thought do not agree on the precise contours of these contested concepts. 

  Neurologically compromised patients with BCI implants may ultimately experience a diminished sense of autonomy and agency such that they are no longer free agents in the world. Rather, they would simply be subject to the laws of determinism in the same fashion that the biological brain and the physical universe are constructed, as evidenced through the first-person narratives of patients with BCI implants, which will be discussed later. As a result, this concern raises several moral and philosophical questions in relation to one’s humanity and personhood: To what extent does an individual with a BCI implant feel “artificial”? What fraction of the thoughts and decisions that the individual produces are reflective of their authentic self? How much of the individual’s sense of self and judgment is fused with the technology in any given context [1]? In the same vein, Li and Zhang have demonstrated the cyborgization of live Madagascar hissing cockroaches that can be remotely controlled by the human brain. Using a portable SSVEP (steady-state visual evoked potential)-based BCI paradigm that delivers electrical stimulation to the cyborg cockroaches (the receiver), the cockroaches can be directed to move in various directions in accordance with the intentions of the human subject (the controller), who wears an EEG headset [14]. While the use of cockroaches may be ethically justifiable to some extent, the story would be different if humans were placed on the receiving end. The line that distinguishes BCI as an assistive tool from the user’s body schema and self-understanding is thus blurred in the hybridization of humankind with AI in the forthcoming mechanistic evolution of Homo sapiens to Homo sapiens technologicus [2]. 
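The decoding step of an SSVEP paradigm is worth seeing in miniature: each direction cue flickers at its own frequency, and the issued command is whichever cue frequency dominates the recorded EEG. The sketch below is an illustration of that principle only, not the system of [14]; the sampling rate, cue frequencies, and synthetic signal are all invented for the example.

```python
import math

# Illustrative SSVEP decoder: map dominant EEG frequency to a command.
FS = 250                      # sampling rate in Hz (assumed)
CUES = {8.0: "left", 10.0: "right", 12.0: "forward"}  # hypothetical cues

def band_power(signal, freq):
    """Power of `signal` at `freq`, via correlation with a sinusoid."""
    n = len(signal)
    c = sum(x * math.cos(2 * math.pi * freq * i / FS) for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * freq * i / FS) for i, x in enumerate(signal))
    return (c * c + s * s) / n

def decode_command(signal):
    """Pick the direction whose cue frequency dominates the EEG."""
    return CUES[max(CUES, key=lambda f: band_power(signal, f))]

# Synthetic one-second EEG segment dominated by the 10 Hz ("right") cue.
eeg = [math.sin(2 * math.pi * 10.0 * i / FS) for i in range(FS)]
print(decode_command(eeg))  # → right
```

In the human-to-cockroach setup, a command decoded this way would then be translated into an electrical stimulus delivered to the receiver, which is precisely where the ethical asymmetry between controller and controlled arises.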

  Perhaps the strongest objection to the argument regarding the preservation of one’s autonomy and agency is that the issues of humanity and personhood are irrelevant to the ethical debate on BCI. After all, the changes to identity experienced by patients with untreated neurological conditions are, arguably, far greater than the changes brought on by BCI. Therefore, some BCI researchers have argued, the moral concern over identity change should not hold too much weight in the ethical guidelines for BCI, as changes in self-perception among BCI users are only natural in the implementation of such neurotechnology [15]. Indeed, a fraction of researchers has also asserted that a lack of independence to make decisions on the basis of one’s desires is not the fault of BCIs but should rather be ascribed to the pathological condition that hinders the individual’s will to express themselves or act [2]. If anything, BCIs are tools of empowerment, and studies on patients’ attitudes toward BCI have shown evidence of optimism [16]. 

   Nevertheless, the sense of autonomy and agency experienced by some BCI users is possibly illusory and prone to attribution errors [2]. The interplay between human and machine decision-making is becoming increasingly complex and intimate, such that BCIs can initiate certain outputs without the user’s volitional input [17]. This is technologically feasible, given that BCI systems contain a “black box” of information outsourced from the patient to which the patient has no access [12]. Current BCI systems offer only a restricted level of guidance control over executed movements, and the ability to veto initiated commands through specific output channels is also very much lacking [17]. Theoretically, BCIs, especially passive ones, can algorithmically learn the neural activity patterns of the user in specific situations, then use the collected and stored information to render selectively fewer options for the user to act upon, even in cases where the user may not endorse these alterations. Such algorithm-derived options may therefore attenuate the user’s freedom to produce autonomous commands and their capacity to make choices [12]. This problem is notably pronounced in DBS, where the patient’s ability to think and make decisions is prompted only when electrical currents alter the patient’s brain activity, which suggests a direct means of manipulation [1]. The effects of BCI on the autonomy and agency of patients will be further examined through a first-person perspective. 
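The option-narrowing worry can be made tangible with a toy model: a passive system logs which commands the user issues in each context and, over time, only surfaces the options its statistics favor. Everything below is hypothetical; no real BCI exposes such an interface. The point is that the narrowing happens silently, inside the “black box,” whether or not the user endorses it.

```python
from collections import Counter, defaultdict

class OptionFilter:
    """Toy passive-BCI menu filter: learns per-context command
    frequencies and hides options it deems unlikely."""

    def __init__(self, keep_top=2):
        self.keep_top = keep_top
        self.history = defaultdict(Counter)

    def observe(self, context, command):
        """Record that `command` was issued in `context`."""
        self.history[context][command] += 1

    def offer(self, context, all_options):
        """Return the narrowed menu; unseen contexts get everything."""
        counts = self.history[context]
        if not counts:
            return list(all_options)
        top = [cmd for cmd, _ in counts.most_common(self.keep_top)]
        return [o for o in all_options if o in top]

f = OptionFilter()
for _ in range(5):
    f.observe("kitchen", "lights_on")
f.observe("kitchen", "radio_on")
f.observe("kitchen", "call_help")
options = ["lights_on", "radio_on", "call_help", "open_door"]
print(f.offer("kitchen", options))   # narrowed menu
print(f.offer("bedroom", options))   # full menu (no data yet)
```

Note that a rarely used but safety-critical command such as the hypothetical `call_help` can drop out of the menu entirely, which is exactly the kind of algorithm-derived constraint on choice that [12] warns about.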

   Gilbert et al. conducted a series of individual interviews with six patients with epilepsy who volunteered to undergo BCI implantation [18]. The objective of the interviews was to capture the contrast between the pre- and post-operative experiences of these patients with respect to their self-image and perceived self-change as a result of BCI-mediated events. The patients’ responses were mixed, however. One patient asserted that the incorporation of the BCI changed her life favorably, such that it “changed [her] confidence [and her] abilities,” further elaborating that “with this device [she] found [herself]” [18]. Clearly, this particular patient experienced a sense of control and empowerment over her life with the BCI implant, yet her experience stands in opposition to that of a different patient, who claimed that the BCI merely made her otherness more apparent in the eyes of society and “made [her] feel [she] had no control [… she] got really depressed” [18]. The inability to control BCI-driven events suggests an issue regarding accountability. Noisy signals arising from subconscious neural activity are known to feed into the output system of BCIs, thereby creating unintended movements that are not indicative of the user’s true desires [13]. Because unintended movements can lead to unpredictable and harmful consequences in specific contexts, accountability could be misattributed to the BCI user, and feelings of having no control in those circumstances may elicit the same miserable sentiment that the latter patient expressed. The risks of device failure, hacker intrusion, and akrasia are also among the diverse factors that could cause BCIs to perform unintended acts, which may distort the user’s sense of self and identity along with the user’s moral and legal responsibilities [10]. 

   It is worth noting that when the aforementioned patient whose life was favorably impacted by the BCI implant was ultimately forced to have it removed because the company that administered the implant declared bankruptcy, her world fell apart. In the mind of that particular patient, it was as if a piece of her own flesh were being torn from her body. Gilbert later remarked that “the company [now] owned the existence of this new person” [1]. Notice, however, that one key element contributing to the difference in the lived experiences of patients with BCI implants is whether or not they view their disability as a part of who they are. Patients who accepted their epilepsy as a part of themselves were more likely to regard the BCI implants in a positive or neutral manner than patients who did not view themselves as epileptic, who instead experienced more distress and estrangement [18]. Although this observation cannot be generalized to the entire population of disabled individuals, since the study contained a sample of only six individuals, it does raise the question of whether BCI is a form of treatment or enhancement for disabled individuals. Depending on how disabled individuals see themselves, using a BCI without identifying oneself as disabled may be interpreted as an enhancement, for example [2]. Alternatively, individuals who do identify as disabled may perceive BCI as a treatment, but such individuals may well fear being subjugated to normality and thus refuse to undergo BCI implantation out of attachment to their disability identity, which runs counter to the original purpose of BCI in restoring normal capabilities to individuals who are neurologically compromised. 

The curious case of the patient who experienced a loss of control thus serves as the basis for why BCI technology may not fully pass the ethical test. Notably, the psychological aftermath of BCI-induced effects with respect to the disembodiment of one’s sense of autonomy and agency should be taken into account. Let us consider the following hypothetical case scenario [19] that highlights our ethical intrigue: Frank, a 35-year-old man, suffers from alcohol use disorder and is at risk of further health complications, such as cardiovascular disease and liver cancer, as a result of his excessive alcohol consumption. Due to his inability to fight off withdrawal symptoms, his family suggested he seek DBS treatment to alleviate his compulsive drinking behaviors. Days after his surgery, Frank became indifferent to alcohol and was able to control his intake. However, Frank’s loss of interest in alcohol eventually caused him to feel remorse for undergoing the DBS treatment, and he could no longer feel a connection to his old self, as though the treatment had changed something about him beyond his alcoholism. Nevertheless, Frank is hesitant to express his inner feelings to his family, who would likely stigmatize him for his drinking behavior if he were to have the DBS device removed. 

  In the above scenario, the existential crisis that Frank experiences after his DBS treatment involves a paradox: the desire to be able to enjoy drinking alcohol and the desire to be rid of his alcohol addiction. Even more so, Frank also felt his identity threatened by the implanted DBS device, as though his decisions were entirely attributable to the device, thus depriving him of his right to autonomy and agency—a testament to the fact that such risks are commonly ill-considered when meaningful consent is obtained, and should be prioritized as much as protecting one’s privacy from being breached through BCI systems [6]. One social factor, moreover, that compromises Frank’s ability to decide and act in accordance with his own wishes is the stigma imposed upon him by his family, whose biased point of view aligns with the societal norm and runs contrary to Frank’s own [2]. The disembodiment of Frank’s identity and the closely affiliated psychological aspects of autonomy and agency are thus the focal points of this ethical debate regarding the unprecedented effects of BCI on an individual’s humanity and personhood. 

   With utmost certainty, BCIs are among the technological breakthroughs that will eventually bring about profound changes to the way clinical patients live and prosper. Yet the imminent ethical challenges that BCIs impose on patients contain a myriad of uncertainties with respect to changes in their psychology, notably their sense of autonomy and agency, given the disembodied nature of BCI technology. Interacting with the world through neurotechnological means, it seems, is the epitomized reality of the 21st century. Be that as it may, it ought to be recognized that BCI is to neuroscience what human cloning is to genetics and what nuclear weapons are to nuclear physics—a perpetual cycle of progress and destruction in the sustainable development of futuristic societies. 

References 

1. Drew, L. (2019). The ethics of brain-computer interfaces. Nature, 571(7766), S19-S21. 

2. Burwell, S., Sample, M., & Racine, E. (2017). Ethical aspects of brain computer interfaces: a scoping review. BMC medical ethics, 18(1), 1-11. 

3. Shih, J. J., Krusienski, D. J., & Wolpaw, J. R. (2012, March). Brain-computer interfaces in medicine. In Mayo clinic proceedings (Vol. 87, No. 3, pp. 268-279). Elsevier. 

4. Cook, M. J., O'Brien, T. J., Berkovic, S. F., Murphy, M., Morokoff, A., Fabinyi, G., ... & Himes, D. (2013). Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study. The Lancet Neurology, 12(6), 563-571. 

5. Kögel, J., Schmid, J. R., Jox, R. J., & Friedrich, O. (2019). Using brain-computer interfaces: a scoping review of studies employing social research methods. BMC medical ethics, 20, 1-17. 

6. Klein, E., Goering, S., Gagne, J., Shea, C. V., Franklin, R., Zorowitz, S., ... & Widge, A. S. (2016). Brain-computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces, 3(3), 140-148. 

7. Lozano, A. M., Lipsman, N., Bergman, H., Brown, P., Chabardes, S., Chang, J. W., ... & Krauss, J. K. (2019). Deep brain stimulation: current challenges and future directions. Nature Reviews Neurology, 15(3), 148-160. 

8. Vlek, R. J., Steines, D., Szibbo, D., Kübler, A., Schneider, M. J., Haselager, P., & Nijboer, F. (2012). Ethical issues in brain–computer interface research, development, and dissemination. Journal of neurologic physical therapy, 36(2), 94-99. 

9. Rakhmatulin, I., & Volkl, S. (2022). PIEEG: Turn a Raspberry Pi into a Brain-Computer-Interface to measure biosignals. arXiv preprint arXiv:2201.02228.

10. Zeng, Y., Sun, K., & Lu, E. (2021). Declaration on the ethics of brain–computer interfaces and augment intelligence. AI and Ethics, 1(3), 209-211. 

11. Fuchs, T. (2006). Ethical issues in neuroscience. Current opinion in psychiatry, 19(6), 600-607. 

12. Friedrich, O., Racine, E., Steinert, S., Pömsl, J., & Jox, R. J. (2021). An analysis of the impact of brain-computer interfaces on autonomy. Neuroethics, 14, 17-29. 

13. Davidoff, E. J. (2020). Agency and accountability: ethical considerations for brain-computer interfaces. The Rutgers journal of bioethics, 11, 9. 

14. Li, G., & Zhang, D. (2017). Brain-computer interface controlling cyborg: A functional brain-to-brain interface between human and cockroach. Brain-Computer Interface Research: A State-of-the-Art Summary 5, 71-79. 

15. Nijboer, F., Clausen, J., Allison, B. Z., & Haselager, P. (2013). The asilomar survey: Stakeholders’ opinions on ethical issues related to brain-computer interfacing. Neuroethics, 6, 541-578. 

16. Schicktanz, S., Amelung, T., & Rieger, J. W. (2015). Qualitative assessment of patients’ attitudes and expectations toward BCIs and implications for future technology development. Frontiers in systems neuroscience, 9, 64. 

17. Steinert, S., Bublitz, C., Jox, R., & Friedrich, O. (2019). Doing things with thoughts: Brain-computer interfaces and disembodied agency. Philosophy & Technology, 32, 457-482. 

18. Gilbert, F., Cook, M., O’Brien, T., & Illes, J. (2019). Embodiment and estrangement: results from a first-in-human “intelligent BCI” trial. Science and engineering ethics, 25, 83-96. 

19. Brown, T., & CSNE Ethics Thrust (2014, October). Case studies in neuroscience. Center for Neurotechnology. https://centerforneurotech.uw.edu/sites/default/files/CSNE%20Neuroethics%20Cases_for%20distribution.pdf
