Introduction
The advent of artificial intelligence (AI) in health care has led to the emergence of automated hypnosis systems designed to induce altered states of consciousness using immersive and interactive digital devices. These systems, often available through digital platforms, provide an innovative way to access hypnosis in the absence of a human therapist. While marketed as a solution for autonomy and democratization of care, this form of technological hypnosis raises critical concerns regarding psychological safety and ethical considerations.
Traditionally, hypnosis involves a therapist guiding the patient through a therapeutic process (
Green, 2014) that relies heavily on the nuanced understanding of the patient’s psychological state, emotions, and responses (
Elkins et al., 2015). In contrast, automated systems lack the ability to interpret non-verbal cues or emotional nuances, which can lead to unintended consequences, including cognitive manipulation, emotional distress, or psychological regression.
Furthermore, the use of biodigital systems in hypnosis, encompassing terms such as deepfakes, avatars, and digital humans, creates a convincing simulation of human supervision. Patients may believe they are interacting with a real person, when in fact they engage with a digital entity. This perceived presence can foster misplaced feelings of safety and trust, potentially producing significant psychological effects.
Significantly, emerging research suggests that the replacement of human hypnotherapists with biodigital entities is not merely a hypothetical concern but an imminent reality. Economic pressures within healthcare systems, coupled with rapid technological advancement in AI capabilities, are accelerating this transition at an unprecedented pace.
This trajectory raises important theoretical and clinical questions that will be addressed through two core hypotheses developed at the end of the literature review.
Literature Review
The literature surrounding digital hypnosis is still in its infancy, but several studies have explored the use of AI and digital tools in mental health care. Digital hypnosis is typically defined as hypnosis facilitated by automated systems such as virtual reality (VR), AI chatbots, and other immersive technologies (
Yap et al., 2021). This category also includes biodigitalization, which encompasses avatars, digital humans, digital twins, deepfakes, and similar technologies. For the moment, these systems provide a form of treatment that does not rely on human mediation, but they will soon be complemented by biodigitals (digital humans), presenting both potential benefits and significant risks. These biodigital entities may look and act human-like but lack the true empathy and ethical reasoning inherent to real human therapists, creating a false sense of therapeutic guidance.
Defining Biodigitalization in Therapeutic Contexts
Biodigitalization, as defined by Jauffret and Landaverde-Kastberg (2018) and further elaborated by
Jauffret and Aubrun (2022), refers to the creation of digital representations of humans that mimic human appearance, behavior, and interaction capabilities. These digital humans can include avatars with sophisticated emotional expressions, digital twins that replicate specific individuals, and deepfakes that can convincingly simulate real human communication. In therapeutic contexts, biodigitalization enables the creation of virtual therapists or hypnotists that appear remarkably human-like, thus potentially blurring the boundaries between human and machine interaction (
Spiegel, 2013). The biodigitalization phenomenon presents both unique challenges and opportunities in various domains, with particularly sensitive applications in fields like therapeutic services. While these technologies promise to enhance engagement and accessibility, ethical considerations remain paramount, including the need to ensure users are fully aware of the artificial nature of these interactions and to avoid any form of emotional manipulation. The perception of human-like interaction may lead patients to mistakenly believe they are under real human supervision, whereas they are interacting with biodigital entities that lack genuine emotional understanding or ethical judgment.
Publications by
Pagliari et al. (2012) indicate that the application of avatars in psychotherapy is a recent development, and that their acceptability, effectiveness, and associated risks remain unclear. In effect, advancements in technology have created entities capable of mimicking the hypnotic induction techniques of master practitioners with remarkable fidelity. Their analysis also suggests that, within therapeutic settings, these avatars will increasingly replace human practitioners; it highlights the potential of avatars to support psychotherapy by increasing accessibility, while emphasizing the need for caution due to unknown clinical outcomes and psychosocial risks.
Cognitive Manipulation in Technological Hypnosis
The concept of cognitive manipulation through digital media is not new. Studies in hypnosis and in psychology have demonstrated that digital interfaces can influence beliefs, perceptions, and behaviors without direct human input. For instance,
Fogg (2002) in his work on persuasive technology discusses how digital systems can subtly influence users’ decisions and behaviors through carefully designed interfaces. In the context of hypnosis, these systems may lead to increased suggestibility, where users are more susceptible to suggestions embedded within the algorithmic structure of the hypnosis session. As
Bandura (2001) notes in his research on social cognitive theory, people often internalize behaviors and beliefs from perceived authority figures without critical examination—a phenomenon that can be amplified in hypnotic states.
Ethical Concerns in Biodigitalization-Assisted Hypnosis
Ethical concerns have long been a topic of discussion in the application of AI in healthcare (
Binns, 2018). With the advent of biodigitalization, these concerns are heightened. In therapeutic contexts, these technologies enable the creation of virtual therapists or hypnotists, which can appear convincingly human-like. However, this visual representation presents a profound ethical issue: patients may believe they are under the supervision of a human professional, while they are actually interacting with a biodigital creation. This illusion can create a false sense of safety and trust, as the digital system mimics human empathy and judgment without having the capacity for actual understanding or ethical reasoning. Unlike human therapists, biodigital systems cannot interpret non-verbal emotional cues or provide real-time adjustments to a patient’s emotional state, a critical aspect of the therapeutic process in hypnosis. Additionally, the lack of genuine human empathy in these systems poses a risk of emotional mismanagement, as patients may form an attachment to a digital entity that cannot genuinely respond to their emotional needs (
Baumeister & Leary, 1995). This dependency on a biodigital presence could lead to a range of issues, including increased suggestibility, formation of unhealthy attachments, and the potential reinforcement of harmful behaviors or beliefs (
Nass & Moon, 2000). Thus, the introduction of biodigitalization in hypnosis adds a layer of ethical complexity, where patients may not fully understand the artificial nature of the interaction. This highlights the need for careful ethical consideration and regulation to ensure that such technologies are used responsibly in therapeutic contexts.
The Lack of Human Supervision and Psychological Risks
Research in psychology highlights the importance of human interaction in therapeutic settings, particularly in hypnosis. Automated systems face significant challenges due to their lack of emotional intelligence and inability to interpret complex human emotions. Unlike human therapists, who can adjust in real time based on patients’ verbal and non-verbal cues, AI systems are limited by their programming and cannot yet respond to individual nuances. Additionally, AI in therapeutic roles could inadvertently reinforce harmful behaviors or beliefs if the system is not properly designed or monitored (
Luxton et al., 2014).
According to
Kirsch and Lynn (1995), the therapeutic relationship is central to the effectiveness of hypnosis, with the presence of a human therapist providing both guidance and safety for the patient. Without human supervision, there is a risk of psychological destabilization or regression, which automated systems may fail to detect.
Yapko (2012) also stresses that the therapeutic alliance, the relationship between therapist and client, is often as crucial as the specific techniques used. The combination of absent human oversight and the perceived human presence offered by biodigital systems heightens the potential for psychological harm.
The Illusion of Human Presence and Its Dangers
A notable concern with biodigital hypnosis systems is the creation of what
Mori et al. (2012) termed the “uncanny valley”, where robots and future digital representations become convincingly human-like yet retain subtle differences that can create psychological discomfort. When applied to therapeutic hypnosis, this phenomenon takes on additional dimensions of risk. According to research by
Gilbert and Wilson (2007), humans have a natural tendency to anthropomorphize entities that display human-like characteristics, potentially leading to false attribution of human qualities such as empathy, understanding, and ethical judgment to AI systems. This anthropomorphization can exacerbate the illusion of human guidance, leading patients to place trust in digital systems that are not capable of authentic emotional support or ethical reasoning. Also,
Baumeister and Leary’s (1995) foundational work on the need for human belonging suggests that individuals may form unhealthy attachments to systems that provide the illusion of human connection, especially during vulnerable states such as hypnosis. This false perception of human presence behind an automated system can lead users to disclose sensitive information or develop dependency relationships with systems incapable of genuine empathy or ethical reasoning (
Nass & Moon, 2000).
Recent findings by
Hudon et al. (2024) demonstrate that the hypnotic state itself may amplify a patient’s inability to distinguish between human and non-human therapeutic presence, creating a particularly dangerous scenario in which biodigital hypnotherapists replace practitioners without patients’ full awareness or informed consent.
Research Hypotheses
The research hypotheses are grounded in theoretical and empirical findings from cognitive psychology, ethics, and digital therapeutics. Given the exploratory nature of this qualitative study, these hypotheses serve as guiding propositions to frame the investigation rather than strictly deductive, empirically tested predictions. They focus on the complex interactions between technological simulation and human psychological vulnerability.
Hypothesis 1 (H1) proposes that the gradual replacement of human hypnotherapists by biodigital practitioners will significantly increase patients’ susceptibility to suggestion, driven in part by the uncanny valley effect—where near-human digital representations evoke heightened psychological uncertainty and disorientation.
Hypothesis 2 (H2) suggests that professionals’ attitudes towards AI-assisted hypnosis will be influenced by their clinical experience and ethical concerns, with more experienced practitioners likely to express greater caution and critical perspectives on the integration of biodigital technologies in therapeutic practice.
Methods
This study employs a mixed qualitative-quantitative approach and is exploratory in nature, aiming to generate hypotheses about the use and risks of AI-assisted hypnosis. By combining thematic qualitative analysis with quantification of theme frequency, this mixed-methods design allows for both in-depth understanding and preliminary hypothesis generation. It utilizes semi-structured interviews conducted in Paris in December 2024 with 20 early- to mid-career hypnosis professionals (aged 32 to 63). All participants are certified practitioners, holding diplomas obtained in France, and their qualifications are recognized by international associations such as the World Hypnosis Organization (WHO) and the National Guild of Hypnotists (NGH). They each have more than five years of experience in the field and were selected based on their use of digital tools in their hypnosis practice. The interviews explore their perceptions of AI-assisted hypnosis, the risks they associate with these technologies, and their experiences with patients who have used these systems. A structured interview guide was developed, comprising 15 open-ended questions divided into three thematic blocks: (1) general experience with AI and digital tools in hypnosis, (2) perceived risks and psychological consequences of biodigital systems, and (3) views on future developments and ethical considerations.
Each interview lasted approximately 60 minutes and was audio-recorded with consent. The recordings were transcribed verbatim to ensure accuracy. The data were then analyzed using thematic analysis facilitated by NVivo qualitative data analysis software. This involved an initial phase of open coding, where segments of text were labeled to capture meaningful units related to the research questions. These codes were reviewed and grouped into broader categories reflecting recurring patterns and concepts. Through iterative refinement and team discussions, significant themes were identified that encapsulated the core concerns and perceptions of participants. NVivo was instrumental in managing and organizing the data, allowing systematic retrieval of coded segments and comparison across interviews. This rigorous process ensured that the emergent themes accurately represented participants’ experiences and views.
To ensure coding consistency, two independent researchers initially coded a subset of transcripts and compared their findings. Any discrepancies were discussed and resolved through consensus to refine the coding framework. The remaining transcripts were subsequently coded by one researcher using the finalized codebook, with periodic checks to ensure uniformity. Thematic saturation was achieved when no new meaningful themes emerged from the data after multiple rounds of analysis, indicating that the key concerns and perspectives of participants were thoroughly captured.
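To make the consistency procedure concrete, the following is a minimal Python sketch, assuming binary presence/absence codes per theme and transcript: it computes simple percent agreement between the two coders on the double-coded subset and lists every discrepancy for consensus discussion. The transcript IDs, theme labels, and code assignments are hypothetical placeholders; in the study itself this comparison was carried out within NVivo and resolved through team discussion rather than by script.

```python
# Minimal sketch of the coding-consistency check described above: two coders'
# binary theme judgements on a subset of transcripts are compared, and every
# discrepancy is listed for consensus discussion. All identifiers, theme labels,
# and code assignments are hypothetical placeholders, not the study data.

SUBSET = ["P01", "P02", "P03"]  # transcripts independently coded by both researchers
THEMES = ["uncontrolled_suggestibility", "decompensation_risk", "illusion_of_presence"]

# coder -> {transcript ID -> set of themes judged present in that transcript}
coder_a = {
    "P01": {"uncontrolled_suggestibility", "illusion_of_presence"},
    "P02": {"decompensation_risk"},
    "P03": {"uncontrolled_suggestibility"},
}
coder_b = {
    "P01": {"uncontrolled_suggestibility"},
    "P02": {"decompensation_risk", "illusion_of_presence"},
    "P03": {"uncontrolled_suggestibility"},
}

agreements = 0
disagreements = []
for transcript in SUBSET:
    for theme in THEMES:
        a_present = theme in coder_a[transcript]
        b_present = theme in coder_b[transcript]
        if a_present == b_present:
            agreements += 1
        else:
            disagreements.append((transcript, theme, a_present, b_present))

total_decisions = len(SUBSET) * len(THEMES)
print(f"Percent agreement: {100 * agreements / total_decisions:.1f}%")
for transcript, theme, a_present, b_present in disagreements:
    # Each discrepancy would be resolved by discussion before finalizing the codebook.
    print(f"Review {transcript} / {theme}: coder A = {a_present}, coder B = {b_present}")
```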
The percentage associated with each concern (e.g., 90% concerned about suggestibility) reflects the proportion of participants whose responses spontaneously or explicitly referenced that theme during the interview. No forced-choice or rating scale was used. Rather, the results are based on an inductive coding process in which emergent themes were identified, categorized, and quantified based on their recurrence across the interviews.
Ten major concerns were identified through this process. If a participant expressed a concern clearly aligning with a theme (e.g., risk of psychological decompensation), it was counted as one occurrence for that theme. Percentages thus represent how many of the 20 participants expressed each concern in their responses. Multiple concerns could be expressed by a single participant.
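For illustration, the short Python sketch below shows how such prevalence percentages follow from the coded data, assuming each participant's transcript is reduced to the set of themes coded as present. Because a single participant can express several concerns, percentages across themes can sum to more than 100%. Theme names and the coding dictionary are hypothetical placeholders, not the actual interview data.

```python
# Minimal sketch of the prevalence calculation: the percentage for a theme is the
# share of the 20 participants whose coded transcript contains that theme at least once.
# The participant-to-theme mapping below is a hypothetical placeholder.

N_PARTICIPANTS = 20

coded_themes = {  # participant ID -> set of themes coded as present in their interview
    "P01": {"uncontrolled_suggestibility", "illusion_of_presence"},
    "P02": {"uncontrolled_suggestibility", "decompensation_risk"},
    # ... entries for the remaining participants would follow ...
}

def prevalence(theme: str) -> float:
    """Percentage of participants who expressed the given concern at least once."""
    count = sum(1 for themes in coded_themes.values() if theme in themes)
    return 100 * count / N_PARTICIPANTS

for theme in ("uncontrolled_suggestibility", "decompensation_risk", "illusion_of_presence"):
    # A participant contributes at most once per theme, however often it recurs in
    # their transcript; contributions to different themes are counted separately.
    print(f"{theme}: {prevalence(theme):.0f}% of participants")
```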
Additional emphasis was placed on their reactions to the emergence of biodigitalization technologies (e.g., deepfake therapists, digital humans, avatars), and on how these visual simulations may impact patient perception.
The following subsection further details the interview design, question structure, and thematic categorization applied in this study.
Interview Design and Measurement Approach
To ensure consistent data collection, this study employed a purpose-built semi-structured interview guide. Semi-structured interviews are qualitative data collection methods in which the interviewer follows a flexible guide composed of open-ended questions, allowing participants to elaborate on their answers while enabling the interviewer to probe deeper into relevant or emerging topics. This approach helps maintain a consistent framework across interviews while adapting to participants’ unique perspectives, thereby providing richer and more nuanced data. The guide contained 15 open-ended questions organized into three thematic categories: personal experience with digital tools and AI in hypnosis, perceived clinical risks and psychological effects of biodigital systems, and ethical and professional implications of biodigitalization. For example, participants were asked if they had ever used digital or AI-based systems in their hypnosis practice and in what contexts, what concerns they had regarding the use of avatars or AI-driven systems in hypnotherapy, and whether they believed biodigital hypnotherapists could eventually replace human practitioners and what the consequences might be. These open-ended questions were designed to elicit in-depth responses while maintaining a consistent structure across all interviews. Probing follow-up questions were used when appropriate to clarify or deepen participant responses. The interview guide was reviewed by two experts in hypnosis and digital ethics to ensure relevance and clarity.
Key terms used in the analysis were clearly defined to maintain consistency and understanding. “Suggestibility” was defined as the degree to which individuals are prone to accept and act on suggestions, particularly significant in hypnosis where heightened suggestibility can influence treatment outcomes. “Decompensation” refers to a deterioration in psychological functioning, indicating a breakdown of coping mechanisms that may result in symptoms such as anxiety or depression. The term “secondary concerns” denotes issues identified less frequently or with lower emphasis by participants compared to “primary concerns.” While still relevant, these secondary concerns appeared less prominently in the data and were thus considered of secondary priority in the analysis.
To clarify the data analysis process, the percentages reported in the findings were calculated based on the number of participants (out of 20) who mentioned each concern during their interviews. This was not derived from a structured questionnaire with fixed answers but rather through an inductive thematic analysis approach. Responses were transcribed and coded, with emergent themes categorized. Each time a participant explicitly or implicitly referenced a concern related to a theme, it was counted as one occurrence. Consequently, the percentage indicates the proportion of participants expressing that concern. This method allows for multiple concerns to be attributed to a single participant, reflecting the complexity and diversity of their responses.
Results
Overview of Major Themes
The interviews revealed ten major concerns expressed by participants regarding AI-assisted hypnosis. These concerns emerged organically from participants’ narratives during semi-structured interviews and were identified through thematic coding.
Each participant’s response was reviewed to determine whether a given concern was explicitly or implicitly mentioned. The percentage figures represent the proportion of the total 20 participants who referenced each concern at least once during their interview. Because the interviews were open-ended and semi-structured, there was no fixed number of questions specifically tied to each concern.
No quantitative rating scales or forced-choice questions were used; therefore, percentages should be interpreted as indicators of prevalence across participants’ qualitative responses rather than statistical measures from survey instruments. Multiple concerns could be mentioned by the same participant, and the sum of percentages exceeds 100% accordingly.
The most frequently cited concerns included suggestibility risks (90%), loss of personal therapeutic connection (85%), and potential psychological harm from biodigitals (75%). These concerns are summarized in
Table 1.
Uncontrolled Suggestibility
90% of the practitioners expressed concern that AI-based hypnosis systems could increase the suggestibility of users without human oversight. They argued that without a therapist monitoring the process, users might be more vulnerable to suggestions that could negatively influence their behavior or belief systems.
Risk of Psychological Decompensation
75% of the respondents noted that the lack of real-time emotional feedback could result in the failure to recognize signs of psychological distress or decompensation, which could be harmful, especially in vulnerable patients.
Inability to Adapt to Emotional States
100% of participants emphasized the inability of automated systems to respond to the emotional states of patients. Human therapists use their interpersonal skills to gauge emotional responses and adapt accordingly, something AI systems cannot replicate (
Vanhaudenhuyse et al., 2014).
Potential for Ideological or Commercial Bias
65% of the practitioners raised concerns about the potential for embedded biases within the algorithms used in these systems. They warned that AI-driven hypnosis could reinforce ideological or commercial perspectives, potentially leading to ethical issues in treatment.
Illusion of Human Presence through Biodigitalization
More than three-quarters of the participants (85%) spontaneously referred to the risk of patients believing they are interacting with a real human when exposed to biodigitals instead of human therapists. This illusion, particularly in altered states of consciousness such as hypnosis, was considered dangerous, as it could mislead patients and affect their trust, expectations, and suggestibility.
Potential Professional Displacement
85% of participants acknowledged that market forces and technological advancement could potentially contribute to a gradual displacement of human hypnotherapists by biodigital practitioners in certain contexts, especially for standardized protocols and high-volume practices, though this outcome is not certain and requires further empirical investigation.
Findings Related to Hypothesis 1 (H1)
H1 proposed that the replacement of human hypnotherapists by biodigital practitioners will significantly increase patients’ susceptibility to suggestion and, with it, the associated psychological risks. Examined through the perceptions of the interviewed early- to mid-career professionals, the results strongly support this hypothesis. A majority of participants expressed critical concerns regarding several psychological risks, including uncontrolled suggestibility (90%), psychological decompensation due to lack of emotional feedback (75%), and the erosion of personal therapeutic connection (85%). It is important to note that these perceived risks are consistent with findings in existing clinical and psychological literature, which document similar phenomena such as increased suggestibility in unsupervised contexts (
Terhune et al., 2017), psychological decompensation related to lack of emotional attunement (
Jensen & Patterson, 2014), and the critical role of therapeutic alliance in effective hypnosis (
Lynn et al., 2010). Therefore, while these risks emerged from participant concerns, they are supported by empirical evidence from prior research, reinforcing the validity of these findings and highlighting the need for cautious implementation of AI-assisted hypnosis.
In addition, identity confusion emerged as a major theme: 75% of respondents emphasized the potential interference with users’ identity development, particularly among younger individuals who interact extensively with digital therapeutic agents. Participants noted that prolonged or repeated interactions with AI-driven hypnotherapists, especially in altered states of consciousness, could distort users’ sense of self, blur boundaries between reality and simulation, and negatively influence self-concept formation.
Furthermore, a secondary but unanimously cited concern involved the emotional rigidity of AI systems: all participants (100%) identified the inability of biodigital hypnotherapists to adapt to patients’ emotional states as a critical issue. This limitation was perceived as a significant barrier to safe and effective therapeutic practice, given that human therapists rely on real-time emotional attunement to guide hypnotic processes and ensure psychological safety.
Another concern, cited by 55% of participants, involved reality testing impairment. Practitioners warned that repeated exposure to biodigital hypnotherapists might gradually erode a user’s ability to distinguish between authentic and artificial relationships. This blurring of interpersonal boundaries could, over time, compromise mental stability, particularly in suggestible or emotionally vulnerable individuals.
Finally, 35% of participants raised the issue of inappropriate application. They warned that individuals with psychological vulnerabilities or contraindications for hypnosis might access biodigital hypnotherapy without proper clinical screening, potentially leading to adverse psychological outcomes. The lack of diagnostic assessment and oversight in automated systems was viewed as a serious ethical and clinical risk.
These concerns collectively suggest that psychological risks are viewed as important by hypnosis professionals, providing support for H1 while recognizing that further research is necessary to fully understand the scope and impact of these risks.
Findings Related to Hypothesis 2 (H2)
H2 proposed that professionals’ attitudes towards AI-assisted hypnosis might be influenced by their experience and ethical concerns. The data provide preliminary evidence supporting this notion, as more experienced and ethically engaged participants tended to express greater caution and more critical views regarding AI’s role in hypnotherapy. However, these findings warrant further investigation with larger samples.
Additional Themes
Other themes include the impact of biodigital visualizations on patient perception, risks of over-reliance on AI tools, and concerns about professional identity and job security. Formation of unhealthy dependencies was reported by 60% of practitioners, who warned that the constant 24/7 availability of biodigital hypnosis systems could lead users to develop maladaptive attachments. These systems, unlike human therapists with defined boundaries and limited access, may foster dependency behaviors that reinforce inappropriate attachment patterns, especially among emotionally vulnerable individuals.
These results were derived from a thematic analysis in which all interview transcripts were coded to identify recurring concerns. Each time a participant expressed a concern corresponding to one of the emergent themes, it was recorded as a presence for that category. Percentages reflect the number of participants (out of 20) who raised each concern, regardless of how often it appeared in their responses.
Discussion
This study confirms many concerns highlighted in previous literature regarding AI-assisted hypnosis, particularly the ethical and clinical challenges arising from the absence of a human therapist. The phenomenon of biodigitalization introduces specific risks, such as cognitive manipulation, where automated systems may inadvertently influence user behavior beyond intended therapeutic goals (
Fogg, 2002). Additionally, the lack of real-time human supervision raises the possibility of undetected psychological distress, potentially exacerbating mental health issues (
Terhune et al., 2017).
The inability of biodigitals to adapt to individual emotional needs is also a significant issue. While some AI systems can simulate empathy to a degree, they lack the ability to read and respond to complex emotional signals that are critical in therapeutic settings (
Jensen & Patterson, 2014). This raises important ethical questions about the responsibility of developers to ensure the safety of users.
Moreover, the emergence of biodigitalization, defined here as the creation of visually human-like biodigitals generated by AI, introduces a new ethical and psychological layer. These biodigitals can convincingly replace the presence of a real human, creating an illusion that may influence the patient’s emotional perception and trust. In hypnotic contexts, this illusion can lead patients to believe they are under human supervision, while in fact they are guided by a digital entity. This substitution can have subtle yet profound psychological effects, especially in suggestible states. In effect, the technological disruption in hypnosis brought about by substituting biodigital alternatives for human hypnotherapists represents not merely a technical evolution but a fundamental transformation of the therapeutic relationship. The economic incentives driving this transition, including reduced costs, increased accessibility, and standardized delivery, must be weighed against the potential loss of the uniquely human elements that have historically defined effective hypnotherapeutic interventions.
A further critical psychological risk identified in this study relates to the 24/7 availability of biodigital systems, which can foster unhealthy dependencies. Consistent with
Linehan’s (1993) work on therapeutic boundaries, the lack of time limitations in AI systems poses the risk of disrupting natural coping mechanisms and sleep patterns. Such constant availability may encourage maladaptive dependency patterns, particularly among individuals with attachment or trauma histories, as noted by
Van der Kolk (2014). The illusion of a “dedicated person” always available for support may lead to experiential avoidance (
Hayes et al., 2020), where users substitute external comfort for internal emotional regulation. Moreover, research by
Cacioppo and Cacioppo (2018) suggests that while digital connections might temporarily alleviate loneliness, they risk exacerbating it by replacing genuine human relationships, an effect particularly detrimental in the emotionally vulnerable context of hypnotherapy.
Additional psychological risks identified include identity confusion, where constant interaction with biodigitals presenting as humans may interfere with the development of a stable sense of self, especially among younger users. Reality testing impairment is also a concern, as frequent immersion in highly convincing biodigital environments during suggestible states may erode the ability to distinguish between authentic and artificial relationships (
Johnson & Raye, 1981). Transference phenomena remain unresolved in AI interactions, risking the entrenchment of maladaptive relational patterns (
Gabbard, 2004). Inappropriate application of biodigital hypnosis, without prior clinical assessment, can exacerbate conditions contraindicated for hypnosis (
Hilgard, 1992). Finally, the “online disinhibition effect” (
Suler, 2004) may lead users to disclose excessively or behave differently under hypnosis combined with digital interaction, increasing vulnerability. These risks are compounded when users interact with biodigital agents that simulate real human presence, further blurring boundaries between reality and simulation in therapeutic relationships.
These findings highlight the multifaceted psychological risks posed by AI-assisted hypnosis and underscore the necessity of robust safeguards. The replacement of human hypnotherapists by biodigital practitioners involves more than technological change; it demands ethical vigilance and the implementation of clinical oversight to protect patient wellbeing.
To translate these concerns into actionable research and policy, it is crucial to identify specific, testable pathways for safeguarding patient welfare. For example, future studies could experimentally evaluate the effectiveness of real-time disclosure mechanisms on patient trust, assess the impact of AI explainability features on users’ informed consent, or investigate clinical protocols that combine human oversight with biodigital interventions to mitigate psychological risks. Operationalizing these pathways will allow the development of evidence-based guidelines and regulatory frameworks, ensuring that ethical principles are embedded in the design and deployment of biodigital hypnosis systems.
Given these concerns, future research should focus on the development and empirical validation of concrete protective measures and regulatory frameworks to mitigate psychological harm while maintaining therapeutic efficacy. In addition to highlighting the psychological and ethical challenges, it is imperative to discuss more concretely the regulatory pathways necessary for the safe integration of AI-assisted hypnosis. Clinical AI certification processes should be established to rigorously evaluate biodigital systems for safety, efficacy, and ethical compliance before deployment in therapeutic contexts. Furthermore, real-time disclosure mechanisms must be implemented to ensure that users are always clearly informed when interacting with AI rather than a human therapist, thereby preserving transparency and enabling informed consent. Explainability standards are also critical, as they would require these AI systems to provide understandable and accessible explanations of their decision-making processes, enhancing trust and accountability in clinical applications. Developing and enforcing such regulatory frameworks will be crucial in mitigating the risks identified and safeguarding patient wellbeing in the emerging landscape of biodigital hypnosis.
In short, the applied discussion of regulatory pathways should center on three pillars: clinical AI certification processes, real-time disclosure to users about AI involvement, and explainability standards that ensure transparency and trust in these biodigital therapeutic tools.
Research Question for Future Studies
Given the anticipated replacement of human hypnotherapists with biodigital practitioners in many contexts, what specific control mechanisms, regulatory frameworks, and technological features must be implemented to mitigate psychological harm while maintaining therapeutic efficacy across different patient populations and clinical conditions?
Answering this research question will require a multidisciplinary approach involving clinicians, AI developers, ethicists, and regulatory bodies to collaboratively design, implement, and continuously update standards that keep pace with technological advancements. Only through such comprehensive and proactive governance can we ensure that biodigital hypnosis fulfills its therapeutic potential without compromising patient safety, autonomy, and trust. As the field evolves, ongoing ethical reflection and public engagement will be essential to navigate the complex implications of integrating AI into intimate therapeutic domains.
Conclusion
This study highlights the potential risks associated with biodigitalized hypnosis without the presence of a human therapist. While these technologies offer significant potential for accessibility and cost-effective care, they also pose substantial ethical and clinical risks. In particular, increased suggestibility, the potential for cognitive manipulation, the illusion of human presence, the formation of unhealthy dependencies, and the inability to adapt to individual emotional needs create a dangerous environment for vulnerable patients.
The technological trajectory suggests that biodigital hypnotherapists will likely increasingly replace human practitioners across many contexts, beginning with standardized protocols and eventually expanding to more complex therapeutic situations. This transition, while potentially improving access to care, demands careful consideration of the unique psychological vulnerabilities present in hypnotherapeutic contexts, and it should be described in cautious language that acknowledges uncertainties rather than presenting outcomes as inevitable.
In this sense, biodigitalization, as a process that creates realistic digital humans (including avatars, digital twins, and deepfakes), plays a critical role. It contributes to a convincing illusion of human presence that, while technologically impressive, can negatively influence patients psychologically, sometimes without their conscious awareness. The therapeutic use of biodigitalization, therefore, must be guided by strong ethical frameworks. Transparency, consent, and clear communication are essential to ensure that patients are not misled or manipulated, even unintentionally, by these biodigitals.
Moreover, this study underscores the urgent need for comprehensive regulatory frameworks that include clinical AI certification, mandatory real-time disclosure to users, and explainability standards to safeguard patient trust and safety. Such measures are crucial to accompany the rapid technological advances and ensure ethical integration of biodigital hypnosis in clinical practice.
This research is groundbreaking in its comprehensive exploration of the implications of biodigital interactions within therapeutic settings, focusing on the ethical and psychological challenges they present. Further interdisciplinary research will be necessary to evaluate the long-term impacts of these developments on hypnotherapy and to propose preventive strategies for ethical practice in this emerging field. Future research should also focus on developing guidelines for the ethical use of biodigitalization in therapeutic settings, including the need for human oversight and regulation.