Alanezi, F. (2024). Examining the role of ChatGPT in promoting health behaviors and lifestyle changes among cancer patients.
Nutrition and Health, 02601060241244563.
https://doi.org/10.1177/02601060241244563
Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents.
Telematics and Informatics,
54, 101473.
https://doi.org/10.1016/j.tele.2020.101473
Ayo-Ajibola, O., Davis, R. J., Lin, M. E., Riddell, J., & Kravitz, R. L. (2024). Characterizing the adoption and experiences of users of artificial intelligence-generated health information in the United States: Cross-sectional questionnaire study.
Journal of Medical Internet Research,
26, e55138.
https://doi.org/10.2196/55138
Baek, T. H., & Kim, M. (2023). Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence.
Telematics and Informatics,
83, 102030.
https://doi.org/10.1016/j.tele.2023.102030
Bains, S. S., Dubin, J. A., Hameed, D., Sax, O. C., Douglas, S., Mont, M. A., Nace, J., & Delanois, R. E. (2024). Use and application of large language models for patient questions following total knee arthroplasty.
The Journal of Arthroplasty,
39(9), 2289-2294.
https://doi.org/10.1016/j.arth.2024.03.017
Chevalier, A., & Dosso, C. (2025). The influence of medical expertise and information search skills on medical information searching: Comparative analysis from a free data set.
JMIR Formative Research,
9, e62754.
https://doi.org/10.2196/62754
Choudhury, A., Shahsavar, Y., & Shamszare, H. (2025). User intent to use DeepSeek for health care purposes and their trust in the large language model: Multinational survey study.
JMIR Human Factors,
12, e72867.
https://doi.org/10.2196/72867
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology.
MIS Quarterly,
13(3), 319-340.
https://doi.org/10.2307/249008
Doll, W. J., Hendrickson, A., & Deng, X. (1998). Using Davis’s perceived usefulness and ease-of-use instruments for decision making: A confirmatory and multigroup invariance analysis.
Decision Sciences,
29(4), 839-869.
https://doi.org/10.1111/j.1540-5915.1998.tb00879.x
Dutta-Bergman, M. J. (2004b). Interpersonal communication after 9/11 via telephone and internet: A theory of channel complementarity.
New Media & Society,
6(5), 659-673.
https://doi.org/10.1177/146144804047086
Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. University of Akron Press.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error.
Journal of Marketing Research,
18(1), 39-50.
https://doi.org/10.2307/3151312
Fuller, C. M., Simmering, M. J., Atinc, G., Atinc, Y., & Babin, B. J. (2016). Common methods variance detection in business research.
Journal of Business Research,
69(8), 3192-3198.
https://doi.org/10.1016/j.jbusres.2015.12.008
Ghanem, D., Shu, H., Bergstein, V., Marrache, M., Love, A., Hughes, A., Sotsky, R., & Shafiq, B. (2024). Educating patients on osteoporosis and bone health: Can “ChatGPT” provide high-quality content?
European Journal of Orthopaedic Surgery & Traumatology,
34(5), 2757-2765.
https://doi.org/10.1007/s00590-024-03990-y
Gilson, A., Safranek, C. W., Huang, T., Socrates, V., Chi, L., Taylor, R. A., & Chartash, D. (2023). How does ChatGPT perform on the United States Medical Licensing Examination (USMLE)? The implications of large language models for medical education and knowledge assessment.
JMIR Medical Education,
9(1), e45312.
https://doi.org/10.2196/45312
Henseler, J., Hubona, G., & Ray, P. A. (2016). Using PLS path modeling in new technology research: Updated guidelines.
Industrial Management & Data Systems,
116(1), 2-20.
https://doi.org/10.1108/IMDS-09-2015-0382
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling.
Journal of the Academy of Marketing Science,
43(1), 115-135.
https://doi.org/10.1007/s11747-014-0403-8
Hong, S., Thong, J. Y. L., & Tam, K. Y. (2006). Understanding continued information technology usage behavior: A comparison of three models in the context of mobile internet.
Decision Support Systems,
42(3), 1819-1834.
https://doi.org/10.1016/j.dss.2006.03.009
Hsieh, J.-K., Hsieh, Y.-C., Chiu, H.-C., & Feng, Y.-C. (2012). Post-adoption switching behavior for online service substitutes: A perspective of the push-pull-mooring framework.
Computers in Human Behavior,
28(5), 1912-1920.
https://doi.org/10.1016/j.chb.2012.05.010
Hu, L.-T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives.
Structural Equation Modeling,
6(1), 1-55.
https://doi.org/10.1080/10705519909540118
Jin, Q., Leaman, R., & Lu, Z. (2023). Retrieve, summarize, and verify: How will ChatGPT affect information seeking from the medical literature?
Journal of the American Society of Nephrology,
34(8).
https://doi.org/10.1681/ASN.0000000000000166
Johnson, D., Goodman, R., Patrinely, J., Stone, C., Zimmerman, E., Donald, R., Chang, S., Berkowitz, S., Finn, A., & Jahangir, E. (2023). Assessing the accuracy and reliability of AI-generated medical responses: An evaluation of the Chat-GPT model.
Research Square, rs.3.rs-2566942.
https://doi.org/10.21203/rs.3.rs-2566942/v1
Kerstan, S., Bienefeld, N., & Grote, G. (2024). Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare.
Risk Analysis,
44(4), 939-957.
https://doi.org/10.1111/risa.14216
Kim, J. H., Kim, J., Park, J., Kim, C., Jhang, J., & King, B. (2025). When ChatGPT gives incorrect answers: The impact of inaccurate information by generative AI on tourism decision-making.
Journal of Travel Research,
64(1), 51-73.
https://doi.org/10.1177/00472875231212996
Lee, J.-C., Tang, Y., & Jiang, S. (2023). Understanding continuance intention of artificial intelligence (AI)-enabled mobile banking applications: An extension of AI characteristics to an expectation confirmation model.
Humanities and Social Sciences Communications,
10(1), 333.
https://doi.org/10.1057/s41599-023-01845-1
Lee, S. T., Dutta, M. J., Lin, J., Luk, P., & Kaur-Gill, S. (2018). Trust ecologies and channel complementarity for information seeking in cancer prevention.
Journal of Health Communication,
23(3), 254-263.
https://doi.org/10.1080/10810730.2018.1433253
Lim, J. S., Shin, D., Lee, C., Kim, J., & Zhang, J. (2025). The role of user empowerment, AI hallucination, and privacy concerns in continued use and premium subscription intentions: An extended technology acceptance model for generative AI.
Journal of Broadcasting & Electronic Media, 1-17.
https://doi.org/10.1080/08838151.2025.2487679
Lim, J. S., Shin, D., Zhang, J., Masiclat, S., Luttrell, R., & Kinsey, D. (2023). News audiences in the age of artificial intelligence: Perceptions and behaviors of optimizers, mainstreamers, and skeptics.
Journal of Broadcasting & Electronic Media,
67(3), 353-375.
https://doi.org/10.1080/08838151.2022.2162901
Lim, J. S., & Zhang, J. (2022). Adoption of AI-driven personalization in digital news platforms: An integrative model of technology acceptance and perceived contingency.
Technology in Society,
69, 101965.
https://doi.org/10.1016/j.techsoc.2022.101965
Mendel, T., Singh, N., Mann, D. M., Wiesenfeld, B., & Nov, O. (2025). Laypeople’s use of and attitudes toward large language models and search engines for health queries: Survey study.
Journal of Medical Internet Research,
27, e64290.
https://doi.org/10.2196/64290
Mickle, T. (2025, May 21). Google introduces A.I. chatbot, signaling big changes to search.
The New York Times.
https://nyti.ms/3FiYNlc
Miller, L. M. S., & Bell, R. A. (2012). Online health information seeking: The influence of age, information trustworthiness, and search challenges.
Journal of Aging and Health,
24(3), 525-541.
https://doi.org/10.1177/0898264311428167
Moreno, A., Lara, C. F., Tench, R., & Romenti, S. (2023). COVID-19 communication management in Europe: A comparative analysis of the effect of information-seeking in the public’s sense-making in Italy, Spain and the United Kingdom.
Corporate Communications: An International Journal,
28(5), 744-768.
https://doi.org/10.1108/CCIJ-06-2022-0063
Nah, F. F.-H., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration.
Journal of Information Technology Case and Application Research,
25(3), 277-304.
https://doi.org/10.1080/15228053.2023.2233814
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.
Oeding, J. F., Lu, A. Z., Mazzucco, M., Fu, M. C., Taylor, S. A., Dines, D. M., Warren, R. F., Gulotta, L. V., Dines, J. S., & Kunze, K. N. (2025). ChatGPT-4 performs clinical information retrieval tasks using consistently more trustworthy resources than does Google search for queries concerning the Latarjet procedure.
Arthroscopy: The Journal of Arthroscopic & Related Surgery,
41(3), 588-597.
https://doi.org/10.1016/j.arthro.2024.05.025
Ohanian, R. (1990). Construction and validation of a scale to measure celebrity endorsers’ perceived expertise, trustworthiness, and attractiveness.
Journal of Advertising,
19(3), 39-52.
https://doi.org/10.1080/00913367.1990.10673191
Panteli, D., Adib, K., Buttigieg, S., Goiana-da-Silva, F., Ladewig, K., Azzopardi-Muscat, N., Figueras, J., Novillo-Ortiz, D., & McKee, M. (2025). Artificial intelligence in public health: Promises, challenges, and an agenda for policy makers and public health institutions.
The Lancet Public Health,
10(5), e428-e432.
https://doi.org/10.1016/S2468-2667(25)00036-2
Rains, S. A. (2007). Perceptions of traditional information sources and use of the World Wide Web to seek health information: Findings from the Health Information National Trends Survey.
Journal of Health Communication,
12(7), 667-680.
https://doi.org/10.1080/10810730701619992
Rains, S. A., & Ruppel, E. K. (2016). Channel complementarity theory and the health information-seeking process: Further investigating the implications of source characteristic complementarity.
Communication Research,
43(2), 232-252.
https://doi.org/10.1177/0093650213510939
Rogers, E. M. (1995). Diffusion of innovations (4th ed.). Free Press.
Rouzrokh, P., Khosravi, B., Faghani, S., Moassefi, M., Shariatnia, M. M., Rouzrokh, P., & Erickson, B. (2025). A current review of generative AI in medicine: Core concepts, applications, and current limitations.
Current Reviews in Musculoskeletal Medicine.
https://doi.org/10.1007/s12178-025-09961-y
Ruppel, E. K., & Rains, S. A. (2012). Information sources and the health information-seeking process: An application and extension of channel complementarity theory.
Communication Monographs,
79(3), 385-405.
https://doi.org/10.1080/03637751.2012.697627
Sbaffi, L., & Rowley, J. (2017). Trust and credibility in web-based health information: A review and agenda for future research.
Journal of Medical Internet Research,
19(6), e218.
https://doi.org/10.2196/jmir.7579
Sezgin, E., Jackson, D. I., Kocaballi, A. B., Bibart, M., Zupanec, S., Landier, W., Audino, A., Ranalli, M., & Skeens, M. (2025). Can large language models aid caregivers of pediatric cancer patients in information seeking? A cross-sectional investigation.
Cancer Medicine,
14(1), e70554.
https://doi.org/10.1002/cam4.70554
Shah, S. V. (2024). Accuracy, consistency, and hallucination of large language models when analyzing unstructured clinical notes in electronic medical records.
JAMA Network Open,
7(8), e2425953-e2425953.
https://doi.org/10.1001/jamanetworkopen.2024.25953
Shahsavar, Y., & Choudhury, A. (2023). User intentions to use ChatGPT for self-diagnosis and health-related purposes: Cross-sectional survey study.
JMIR Human Factors,
10(1), e47564.
https://doi.org/10.2196/47564
Shiferaw, M. W., Zheng, T., Winter, A., Mike, L. A., & Chan, L.-N. (2024). Assessing the accuracy and quality of artificial intelligence (AI) chatbot-generated responses in making patient-specific drug-therapy and healthcare-related decisions.
BMC Medical Informatics and Decision Making,
24(1), 404.
https://doi.org/10.1186/s12911-024-02824-5
Shin, D., Jitkajornwanich, K., Lim, J. S., & Spyridou, A. (2024). Debiasing misinformation: How do people diagnose health recommendations from AI?
Online Information Review,
48(5), 1025-1044.
https://doi.org/10.1108/OIR-04-2023-0167
Shin, D., Koerber, A., & Lim, J. S. (2024). Impact of misinformation from generative AI on user information processing: How people understand misinformation from generative AI.
New Media & Society, 14614448241234040.
https://doi.org/10.1177/14614448241234040
Song, Y., Mingjia, C., Fei, W., Zhengwang, Y., & Jiang, J. (2025). AI hallucination in crisis self-rescue scenarios: The impact on AI service evaluation and the mitigating effect of human expert advice.
International Journal of Human-Computer Interaction. Advance online publication.
https://doi.org/10.1080/10447318.2025.2483858
Soroya, S. H., Farooq, A., Mahmood, K., Isoaho, J., & Zara, S.-e. (2021). From information seeking to information avoidance: Understanding the health information behavior during a global health crisis.
Information Processing & Management,
58(2), 102440.
https://doi.org/10.1016/j.ipm.2020.102440
Sousa, V. D., & Rojjanasrirat, W. (2011). Translation, adaptation and validation of instruments or scales for use in cross-cultural health care research: A clear and user-friendly guideline.
Journal of Evaluation in Clinical Practice,
17(2), 268-274.
https://doi.org/10.1111/j.1365-2753.2010.01434.x
Sun, X., Ma, R., Zhao, X., Li, Z., Lindqvist, J., Ali, A. E., & Bosch, J. A. (2024). Trusting the search: Unraveling human trust in health information from Google and ChatGPT.
arXiv preprint arXiv:2403.09987.
https://doi.org/10.48550/arXiv.2403.09987
Swar, B., Hameed, T., & Reychav, I. (2017). Information overload, psychological ill-being, and behavioral intention to continue online healthcare information search.
Computers in Human Behavior,
70, 416-425.
https://doi.org/10.1016/j.chb.2016.12.068
Ukaegbu, O. C., & Fan, M. (2025). Examining the influence of personal eHealth literacy on continuance intention towards mobile health applications: A TAM-based approach.
Health Policy and Technology, 101024.
https://doi.org/10.1016/j.hlpt.2025.101024
Walker, H. L., Ghani, S., Kuemmerli, C., Nebiker, C. A., Müller, B. P., Raptis, D. A., & Staubli, S. M. (2023). Reliability of medical information provided by ChatGPT: Assessment against clinical guidelines and patient information quality instrument.
Journal of Medical Internet Research,
25, e47479.
https://doi.org/10.2196/47479
Wang, T., Wang, W., Liang, J., Nuo, M., Wen, Q., Wei, W., Han, H., & Lei, J. (2022). Identifying major impact factors affecting the continuance intention of mHealth: A systematic review and multi-subgroup meta-analysis.
npj Digital Medicine,
5(1), 145.
https://doi.org/10.1038/s41746-022-00692-9
Yang, S., Lu, Y., & Chau, P. Y. K. (2013). Why do consumers adopt online channel? An empirical investigation of two channel extension mechanisms.
Decision Support Systems,
54(2), 858-869.
https://doi.org/10.1016/j.dss.2012.09.011
Yun, H. S., & Bickmore, T. (2025). Online health information-seeking in the era of large language models: Cross-sectional web-based survey study.
Journal of Medical Internet Research,
27, e68560.
https://doi.org/10.2196/68560
Zhang, X., Xiangda, Y., Xiongfei, C., Yongqiang, S., Hui, C., & She, J. (2018). The role of perceived e-health literacy in users’ continuance intention to use mobile healthcare applications: An exploratory empirical study in China.
Information Technology for Development,
24(2), 198-223.
https://doi.org/10.1080/02681102.2017.1283286
Zheng, H., Chen, X., Jiang, S., & Sun, L. (2023). How does health information seeking from different online sources trigger cyberchondria? The roles of online information overload and information trust.
Information Processing & Management,
60(4), 103364.
https://doi.org/10.1016/j.ipm.2023.103364
Zhou, T., & Li, S. (2024). Understanding user switch of information seeking: From search engines to generative AI.
Journal of Librarianship and Information Science, 09610006241244800.
https://doi.org/10.1177/09610006241244800