RECIPROCAL HUMAN-AI LEARNING IN CROWDSOURCED DESIGN: ALGORITHM AVERSION, TACIT KNOWLEDGE, AND ETHICAL TRANSPARENCY
DOI: https://doi.org/10.24191/jca.v3i1.11084

Keywords: Reciprocal human-AI learning, creative synergy, crowdsourced design, algorithm aversion, ethical transparency

Abstract
The rapid integration of generative artificial intelligence into digital design has shifted the role of AI systems from automated tool to creative partner. Despite this shift, empirical research explaining how adaptive human-AI interaction leads to better creative outcomes remains limited, especially in crowdsourced design settings. This paper explores how reciprocal human-AI learning (RHML) influences creative synergy, examining the mediating role of algorithm aversion and the moderating role of ethical transparency. A 2 x 2 factorial controlled experiment was carried out with 320 participants with design backgrounds, who completed an AI-assisted creative task in a simulated crowdsourced design setting. Expert raters assessed the creative outputs, and validated questionnaires measured the psychological constructs. Data were analyzed using factorial ANOVA and moderated mediation analysis with the PROCESS macro. The findings reveal that RHML significantly enhances creative synergy (F(1,316) = 28.74, p < .001). RHML also reduces algorithm aversion, which partially mediates its positive influence on creative outcomes. Moreover, ethical disclosure significantly strengthens the relationship between RHML and creative synergy (interaction F(1,316) = 12.63, p < .001). The results indicate that successful human-AI cooperation in creative systems rests not only on technological capability but also on psychological acceptance and transparent governance mechanisms. The paper provides empirical support for research on hybrid intelligence and offers design implications for AI-assisted creative platforms.
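The 2 x 2 design described above (RHML present/absent crossed with ethical disclosure present/absent, with creative synergy as the outcome) can be illustrated with a minimal analysis sketch. This is not the authors' pipeline: the study used the PROCESS macro, and the data, effect sizes, and variable names below are hypothetical, simulated only to show the shape of the factorial ANOVA reported in the abstract.

```python
# Illustrative sketch with simulated (hypothetical) data: a 2 x 2
# between-subjects factorial ANOVA analogous to the one in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n = 320  # sample size matching the study

df = pd.DataFrame({
    "rhml": rng.integers(0, 2, n),        # reciprocal human-AI learning: 0 = absent, 1 = present
    "disclosure": rng.integers(0, 2, n),  # ethical disclosure: 0 = absent, 1 = present
})
# Simulated creative-synergy ratings with two main effects and an interaction
df["synergy"] = (
    3.0
    + 0.8 * df["rhml"]
    + 0.3 * df["disclosure"]
    + 0.5 * df["rhml"] * df["disclosure"]
    + rng.normal(0, 1, n)
)

# Fit an OLS model with both factors and their interaction,
# then print Type II F tests for each term.
model = smf.ols("synergy ~ C(rhml) * C(disclosure)", data=df).fit()
print(anova_lm(model, typ=2))
```

The interaction term `C(rhml):C(disclosure)` in the resulting table corresponds to the moderation effect the abstract reports (ethical disclosure strengthening the RHML-synergy link); a full moderated mediation would additionally model algorithm aversion as a mediator.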
License
Copyright (c) 2026 Rainal Hidayat Wardi, Rusmadiah Anwar, Rahman Rosman, Faradiba Liana Naser, Ahmad Rizal Abd Rahman

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

