Over the past decades, one line of inquiry that has merited the attention of neuroscientists, clinicians, ethicists, and policymakers stems from the development and growing sophistication of advanced neuroimaging techniques. Neuroimaging has become a revolutionary tool at the frontier of neuroscience for both diagnostic and research purposes. Since the emergence of neuroimaging technology, physicians have employed structural imaging, such as magnetic resonance imaging (MRI) and computed axial tomography (CAT), to visualize the anatomical structures of the brain, thereby facilitating the diagnosis of neurologic or psychiatric conditions (e.g., Alzheimer’s disease, brain cancer, schizophrenia) as a supplement to genetic tests and other examinations [1]. These structural modalities are often complemented by functional imaging modalities, which researchers frequently use to examine brain activity in real time as participants perform certain tasks or react to stimuli, in both healthy and diseased brains; examples include, inter alia, functional magnetic resonance imaging (fMRI), which measures blood flow; positron emission tomography (PET), which measures metabolic activity [2]; and magnetoencephalography (MEG), which measures the magnetic fields produced by electrical currents in neurons [3].

Each neuroimaging technique has its own strengths and limitations; for example, the limited temporal resolution of fMRI can be compensated for by the high temporal resolution of MEG in combined-modality studies that yield more useful neuroimaging data [4]. In imaging studies that require a large pool of participants, fMRI offers a more favorable risk-benefit ratio because it is less invasive than PET, which requires injecting radioactive contrast agents into participants [2]. The capabilities of neuroimaging extend further, to revealing one’s unconscious thoughts [1] and evaluating one’s susceptibility to a particular disease or behavior [2]. Yet each of these capabilities carries inadvertent consequences that warrant neuroethical analysis of the attendant medical, social, and legal issues and challenges.

A prevalent occurrence in research studies (as well as in clinical practice) involving human subjects undergoing neuroimaging procedures is the unintended discovery of incidental findings—neurological anomalies that are unrelated to the objective of the brain scan [5] but may be clinically significant (e.g., the detection of asymptomatic brain tumors or cysts). Depending on a multitude of factors, such as participants’ age and the region of the body being scanned, incidental findings occur in approximately 2% of imaging scans at the lower extreme and 47% at the higher extreme [6]. Under such circumstances, the appropriate management of incidental findings, especially in brain scans, poses ethical dilemmas, for instance during the process of informed consent, or in deciding whether a detected incidental finding should be disclosed to the subject.

It is broadly agreed that reporting incidental findings to research participants is ethically sensible for the sake of their health; the story of Sarah Hilgenberg is one prime example. When Hilgenberg was a medical student at Stanford, she participated in an fMRI study on learning and memory. The scan revealed that Hilgenberg had malformed connections between the blood vessels in her brain, a condition known as arteriovenous malformation that, if left untreated, could lead to bleeding under high blood pressure and further complications. Hilgenberg later reflected that this incidental finding had positively affected her life and that she felt grateful in retrospect [7]. Yet while disclosing an incidental finding serves the participant’s interest, it may also impose a psychological burden. In one case, an anonymous correspondent wrote to the journal Nature describing how, after volunteering to help test a new MRI machine, he was informed of a tumor in his brain. He subsequently underwent neurosurgery to remove the tumor, but the episode devastated him and his family financially because knowledge of the scans adversely affected his medical insurability [8]. In other cases, incidental findings may prompt unnecessary treatment when a clinically insignificant finding is perceived to be significant, or when results are false positives [2]; all of these outcomes can cause severe emotional and financial distress for the participant.

Disclosing incidental findings can therefore have negative implications for participants, even though one study’s data suggested that over 90% of participants wished to be informed in the event of incidental findings [9]. Given the uncertainties associated with incidental findings over the course of neuroimaging studies, transparent protocols (e.g., facilitating direct communication with the participant, reporting incidental findings to a research ethics review board) should be implemented from the informed consent process onward to ease tensions among participants [10]. It is the researcher’s moral and legal obligation to manage disclosure and non-disclosure in a way that respects participants’ autonomy and interests while taking into account the clinical relevance of the findings.

Beyond incidental findings, the wealth of neuroimaging databases and the discoveries of social neuroscience have made thought privacy and confidentiality integral issues in the ethics of neuroimaging. Intimate and personal thought processes distinctive to an individual can be captured during neuroimaging to yield brain activation profiles. Individual thought privacy is thus at risk: researchers in past studies have identified neural correlates of cooperation, rejection, moral judgment, implicit racial bias [11,12,13,14], and a myriad of other human social behaviors grounded in personal thoughts and the subconscious mind. The Human Brain Project and the International Consortium for Brain Mapping (ICBM) are just two of the large-scale research initiatives that maintain archives of neuroimaging data that can be easily accessed and shared [2]. Data confidentiality is thus an ethical concern, for researchers could identify individuals from their corresponding neuroprofiles, a risk that parallels the handling of genomic data. Neuroimaging data could also be commercialized by for-profit sectors: because neural correlates of consumers’ economic decision-making exist, companies exploit them as a neuromarketing strategy that puts consumers’ confidentiality at severe risk [4]. This raises the need for individuals to guard against traceability by securing their cognitive liberty—the right to autonomy over one’s own mental processes [15]. The idea of cognitive liberty is often raised against neuroimaging because of a prevalent Orwellian fear: those who can probe into our brains can act as a ‘Big Brother’ figure and seek contents that the subject is unaware of or does not wish to be known.

One prominent application that highlights the medical implications of neuroimaging is predicting the onset of neurologic or psychiatric disorders before overt symptoms appear [4]. By identifying potential biomarkers or recognizing regional brain activation patterns or neural signatures characteristic of a disease’s pathological mechanisms, one’s risk of developing that disease can be probabilistically assessed [2]. However, such probabilistic data are not strictly reliable given the complex and plastic nature of the brain, and the correlation between imaging data and human behavior does not imply causation when external factors like culture play a role [4]. In addition, many disease pathologies are not yet well understood, which could lead to the premature application of neuroimaging techniques in this regard [2]. Furthermore, while predictive neuroimaging may encourage early disease prevention through therapeutic interventions, individuals identified as at risk of a predicted disease may be subjected to societal stigma and discrimination [2], as is often seen with vulnerable populations and minority identities.

This stigma and discrimination can be attributed in part to the popular media’s tendency to oversimplify and overinterpret neuroimaging data through reductionist and pessimistic stances that support deterministic beliefs [2]. Media portrayals that ground our essence in neuroimaging results—a view known as ‘neuroessentialism’—can shape public perception toward stigmatizing and discriminatory tendencies and thereby ostracize those with neurologic or psychiatric conditions [2]. This further creates a divide between what may be perceived as “superior” and “inferior” brain characteristics and promotes a non-moralistic view of human nature, such as fatalistic biological determinism [2], the notion that our physiology or genes are the roots of our behavior and personality [16]. Hence, misunderstood neuroimaging data can be damaging for vulnerable populations, and the burden of knowledge from predictive neuroimaging should be communicated carefully to those who may be at risk. Otherwise, access to insurance, education, and employment may be limited or denied to individuals living with an invisible disease [17].

Neuroimaging data could also see more extensive use in the criminal justice system, which may result in legal complications. As neuroimaging techniques have gradually improved, brain scans have become admissible as evidence in court, for example in murder cases where questions are raised about the sanity of the convicted individual (often with psychiatric assistance) [17]. To illustrate, in the 1992 New York Supreme Court case People v. Weinstein, the defendant Herbert Weinstein was charged with second-degree murder for killing his wife. PET scans, however, revealed an arachnoid cyst in Weinstein’s brain, a defect that may have influenced him to act with violence; ultimately, the murder charge was resolved with a manslaughter plea [17]. Whether a convicted individual should be absolved of responsibility for a crime thus becomes disputable when neuroimaging suggests that individuals can be predisposed to aggressive and recidivist acts such as homicide as a consequence of brain damage or even childhood abuse and neglect [4]. Indeed, neuroimaging studies have shown that individuals with impaired prefrontal cortical functioning demonstrate a diminished ability to control impulses, assess risks, and empathize—behavioral characteristics that are common among criminal offenders [4].

Another application of neuroimaging in forensic science is the practice known as brain fingerprinting, in which an individual’s brain waves (the P300 response) are measured by electroencephalography (EEG) while the individual is presented with information and facts relevant to the case in question [4]. The stimuli are presented at millisecond intervals to test for P300 wave responses [18]. Arguably, neuroimaging may help discern “truths from lies or real memories from false ones,” according to lawyer and law professor Henry Greely [17]. Such was the case for Terry Harrington, who was convicted of murdering a retired police officer in 1977 and was thereafter sentenced to prison after the verdict in the Iowa Supreme Court case Iowa v. Harrington [18]. Harrington’s murder conviction was later reversed after he underwent brain fingerprinting in 2000 with negative results: no P300 signals were evoked in response to details related to his case [18]. Of note, however, is that using neuroimaging for lie detection does not assure the absolute accuracy sometimes promised, and its rates of error are uncertain [19].

While it is crucial to consider mental defects and diseases in convicted individuals, to what extent does a link between brain abnormality and criminal behavior suffice to relieve someone of legal responsibility? How reliable and valid must neuroimaging data be before being admitted as evidence in court? Does forcibly imaging an offender’s brain in search of incriminating memories or other information violate the 4th and 5th Amendments to the U.S. Constitution [19]? The fine line that distinguishes innocence from guilt in such cases is blurred by the foregoing concerns, and the ethics surrounding the use of neuroimaging in the judicial process remains to be scrutinized.

The key to an ethical framework for neuroimaging lies in the proper interpretation and application of neuroimaging data, with due regard for the technology’s many limitations. Thus far, neuroimaging has yet to achieve direct mind-reading, but it offers many inferences into our innermost thoughts and psychological traits. Neuroimaging can be pervasive in many domains of life beyond medically diagnosing and treating patients, chiefly by influencing societal viewpoints and legal decisions in ways that could have unforeseen ramifications, even leaving individual lives and values at their most vulnerable.




References

  1. Brašić, J. R., & Mohamed, M. (2014). Human brain imaging of autism spectrum disorders. In Imaging of the human brain in health and disease (pp. 373-406). Academic Press.

  2. Racine, E., & Illes, J. (2007). Emerging ethical challenges in advanced neuroimaging research: review, recommendations and research agenda. Journal of Empirical Research on Human Research Ethics, 2(2), 1-10.

  3. Singh, S. P. (2014). Magnetoencephalography: basic principles. Annals of Indian Academy of Neurology, 17(Suppl 1), S107.

  4. Fuchs, T. (2006). Ethical issues in neuroscience. Current Opinion in Psychiatry, 19(6), 600-607.

  5. Shaw, R. L., Senior, C., Peel, E., Cooke, R., & Donnelly, L. S. (2008). Ethical issues in neuroimaging health research: an IPA study with research participants. Journal of Health Psychology, 13(8), 1051-1059.

  6. Bomhof, C. H., Van Bodegom, L., Vernooij, M. W., Pinxten, W., De Beaufort, I. D., & Bunnik, E. M. (2020). The impact of incidental findings detected during brain imaging on research participants of the Rotterdam study: an interview study. Cambridge Quarterly of Healthcare Ethics, 29(4), 542-556.

  7. Hilgenberg, S. (2005). Formation, malformation, and transformation: my experience as medical student and patient. Stanford Med Student Clin J, 9, 22-25.

  8. Anonymous. (2005). How volunteering for an MRI scan changed my life. Nature, 434(7029), 17.

  9. Kirschen, M. P., Jaworska, A., & Illes, J. (2006). Subjects' expectations in neuroimaging research. Journal of Magnetic Resonance Imaging: An Official Journal of the International Society for Magnetic Resonance in Medicine, 23(2), 205-209.

  10. Leung, L. (2013). Incidental findings in neuroimaging: Ethical and medicolegal considerations. Neuroscience Journal, 2013.

  11. Decety, J., Jackson, P. L., Sommerville, J. A., Chaminade, T., & Meltzoff, A. N. (2004). The neural bases of cooperation and competition: an fMRI investigation. Neuroimage, 23(2), 744-751.

  12. Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). Does rejection hurt? An fMRI study of social exclusion. Science, 302(5643), 290-292.

  13. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108.

  14. Lieberman, M. D., Hariri, A., Jarcho, J. M., Eisenberger, N. I., & Bookheimer, S. Y. (2005). An fMRI investigation of race-related amygdala activity in African-American and Caucasian-American individuals. Nature Neuroscience, 8(6), 720-722.

  15. Sententia, W. (2004). Neuroethical considerations: cognitive liberty and converging technologies for improving human cognition. Annals of the New York Academy of Sciences, 1013(1), 221-228.

  16. Allen, G. E. (1984). The roots of biological determinism.

  17. Illes, J., & Racine, E. (2005). Imaging or imagining? A neuroethics challenge informed by genetics. The American Journal of Bioethics, 5(2), 5-18.

  18. Thompson, P., Cannon, T. D., Narr, K. L., Van Erp, T., Poutanen, V. P., Huttunen, M., & Toga, A. W. (2001). Forensic neuroscience on trial. Nature Neuroscience, 4(1), 1.

  19. Roskies, A. (2021, March 3). Neuroethics. Stanford Encyclopedia of Philosophy. Retrieved January 8, 2023, from https://plato.stanford.edu/entries/neuroethics/
