About the Author(s)


Anthony Brown
School of Interdisciplinary Research and Graduate Studies, College of Graduate Studies, University of South Africa, Pretoria, South Africa

Jane Rossouw
Department of Psychology of Education, College of Education, University of South Africa, Pretoria, South Africa

Citation


Brown, A. & Rossouw, J., 2026, ‘Artificial intelligence as a reflexive collaborator in graduate studies supervision’, Transformation in Higher Education 11(0), a657. https://doi.org/10.4102/the.v11i0.657

Original Research

Artificial intelligence as a reflexive collaborator in graduate studies supervision

Anthony Brown, Jane Rossouw

Received: 27 July 2025; Accepted: 19 Nov. 2025; Published: 07 Feb. 2026

Copyright: © 2026. The Author(s). Licensee: AOSIS.
This work is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license (https://creativecommons.org/licenses/by/4.0/).

Abstract

The incorporation of generative artificial intelligence (AI) in doctoral supervision signifies a transformative evolution in higher education. This has been significant, particularly within intricate and emotionally complex research such as sexuality studies. This reflective, collaborative autoethnographic study investigates the experiences of a doctoral student and her supervisor. They explored AI generative tools to enhance research processes, quality of supervision and intellectual inquiry. Anchored in Kolb’s Experiential Learning Theory and reconceptualised through an augmented experiential learning framework, the study elucidates how AI tools like ChatGPT encourage critical thinking. These tools were also used to foster methodological innovation and facilitate ethical reflexivity. Through iterative engagements, AI supported the formulation of sophisticated research questions and bolstered academic writing. It also aided emotional resilience in traversing heteronormative and interdisciplinary landscapes. The study highlights the evolving role of supervisors, not as gatekeepers but as collaborators in AI-informed learning. Significant emphasis was placed on prompt engineering, scholarly scrutiny and academic integrity. Ethical guidelines and rigorous documentation practices ensured a responsible AI application without sacrificing originality.

Contribution: The findings reveal that AI-augmented supervision promotes deeper theoretical engagement and enhances self-directed learning. It also introduces new pedagogical possibilities for complex research endeavours. Nonetheless, the study also underscores the challenges of bias, overreliance and contextual insensitivity inherent in AI outputs. By suggesting actionable strategies for ethical integration, this paper contributes to emerging global discussions on AI in higher education. It presents a framework for inclusive, transformative and contextually aware supervision practices.

Keywords: AI-augmented supervision; doctoral research; experiential learning; ethical AI integration; graduate studies; augmented experiential learning.

Introduction

The rise of artificial intelligence (AI) tools in higher education has opened new opportunities for the supervision of graduate research. Graduate students globally have reported how AI generative tools enhance research competencies, refine academic writing and deepen conceptual grasp while upholding ethical standards (Khalifa & Albadawy 2024; Nartey 2024; Vargas-Murillo, De la Asuncion & De Jesús Guevara-Soto 2023). More specifically, in studies related to sexual health, 14 studies illustrated how AI generative tools were used to augment research processes with precision and thoroughness (Latt et al. 2025). In Hong Kong, one-third of students were found to have used AI generative tools to aid research essay writing (Kong, Cheung & Zhang 2023). This is because AI-assistive tools have been found to significantly enhance students’ motivation to tackle complex learning processes (Jia & Tu 2024). Hence the unsurprising yet significant growth in the utilisation of AI tools in higher education, from 55% to 78% in 2023 (Stanford HAI 2025). ChatGPT, one of the most widely adopted AI tools among students at the University of South Africa (UNISA) (Chauke et al. 2024), swiftly gained 100 million active users globally within 2 months of its release (Dempere et al. 2023). Higher education institutions globally have grappled with the opportunities and challenges presented by these tools (Inside Higher Education 2024). In the United States, many higher education institutions were restrictive, banning their students from using AI generative tools on the grounds that they compromised academic integrity (Park & Ahn 2024). Canadian higher education institutions, by contrast, embraced these tools and promptly developed guidelines for their use (Markkula Centre for Applied Ethics 2023). The Australian higher education sector acknowledged that AI generative tools are assistive learning resources that can tackle new challenges and harness opportunities (TEQSA 2024).
This paper aligns with these best practices of AI utilisation in higher education. We provide a comprehensive analysis of how the utilisation of AI generative tools enhances the quality of doctoral supervision between students and supervisors. Jane’s research focused on the complex intersectionality of same-sex families, parent communication strategies and sexuality education in the home environment (Rossouw 2025). This paper also provides actionable insights on how AI-augmented research skills, integrated into the delicate and intricate sphere of sexuality studies, have enhanced critical thinking skills, advanced research methodological considerations and enabled interdisciplinary depth. This study explores AI-augmented doctoral supervision at a South African university. South Africa and many parts of the Global South are confronted with structural inequalities, multilingual settings and limited research supervisor capacities. These challenges make AI a potentially democratising tool. This study explores how AI can enhance the quality of mentorship and enrich research experiences. Consequently, this study explores the following question: In what ways does the incorporation of generative AI tools affect the procedures, dynamics and results of graduate research supervision?

Collaborating with artificial intelligence generative tools in graduate research supervision

The emergence of generative AI has brought significant shifts to the landscape of graduate supervision. Historically, supervision depended significantly on in-person synchronous meetings. The advent of contemporary intelligent technology has facilitated continuous supervision and feedback, fostering self-directed learning in graduate studies (Cowling et al. 2023; Thong et al. 2025). The integration of AI generative tools in graduate supervision can enhance the quality, effectiveness and accessibility of support provided to students (Dai et al. 2023). This enhancement must be understood within the extended discussions related to digital pedagogies. Researchers are advised to guard against reductionist views of AI as simply a neutral educational tool. These AI technology advancements have been found to augment guiding processes that address unique, delicate and intricate conceptual and methodological considerations in thesis development (Rababah et al. 2023). The medium of instruction in graduate studies at higher education institutions in South Africa is predominantly English, although it is an additional language for most students (Munyai 2024). In such contexts, AI generative tools have been found to enable sophisticated corrections of grammar and structure (Chauke et al. 2024). Students can also utilise AI generative tools to co-construct nuanced research questions, refine complex arguments and test literature synthesis processes (Alasadi & Baiz 2023). Before submitting their work for human review, students can utilise these technological tools for revision purposes (Link, Mehrzad & Rahimi 2022). Through such practices, AI generative tools could profoundly impact self-directed learning for graduate students (Wang & Li 2024).
Artificial intelligence-integrated self-directed learning has been reported to enhance graduate student attributes such as analytical skills, intellectual self-reliance and metacognitive abilities to oversee their own learning progress during their academic path (Dai et al. 2023). A key concern in managing graduate supervision is that students often become passive recipients of supervision guidance (Wilson & Jay 2021). Contemporary technologies, such as AI generative tools, now encourage students to be proactive contributors to their learning experience (Baidoo-Anu & Ansah 2023). In essence, the innovative role of AI generative tools in graduate research supervision is lauded for its flexible co-pilot function (Thong et al. 2025). As we embrace these advances, AI should not be perceived as a replacement for human mentoring (Cowling et al. 2023). Although AI generative tools offer substantial advantages, it is crucial to recognise their potential biases and limited understanding of context (Jenks 2025). Therefore, reflections through human mentoring processes should remain the cornerstone of the supervision process. When AI generative tools are thoughtfully and ethically integrated into academic writing to respond to personalised instructions, they can be complementary to human research guidance (Dai et al. 2023). Artificial intelligence integration in graduate studies has shown remarkable potential to revolutionise supervision across research environments.

Harnessing the power of artificial intelligence-driven innovations in sexuality research

Graduate-level research in sexuality studies is conceptually rich and involves complex nuances. One point of contention is that pervasive heteronormative assumptions shape expectations and perceived moral functions. Compulsory heteronormative scripts not only marginalise minority sexual identities but also police, regulate and discipline them for deviating from the norm. This research demonstrates that the study of sexuality is situated within a complex and contested social framework. In this context, the researcher is significantly involved from the early stages of design to the analysis of data (Diprose, Thomas & Rushton 2013). This research space demands that scholars be more reflexive across disciplines (Dodgson 2019). Researchers must constantly negotiate their positioning to produce legitimate knowledge (Alabi 2023). For this reason, the interdisciplinary scholarship of sexuality necessitates carefully crafted methodological considerations across multiple epistemologies (Woodward & Woodward 2015). In this multifaceted landscape, AI generative tools present an enormous opportunity for critical engagement to augment the iterative and reflective task of sexuality research. Scholarship in sexuality requires an acknowledgement of disputed epistemological frameworks within this domain. It is important to be aware of the intertwined positionality, power dynamics and cultural norms. Artificial intelligence has the potential to contest conventions rather than perpetuate assumptions. For example, AI generative tools could be used to develop different types of interview questions that demonstrate how minor shifts can produce certain authentic forms of knowledge. Engaging with these tools could help to enhance knowledge enquiry that goes beyond the reproduction of dominant scripts.
Qualitative research, such as sexuality studies, is deeply contextual and requires researchers to constantly critique, appraise and evaluate their subjectivity (Olmos-Vega et al. 2023). Artificial intelligence generative technologies could be used as co-thinkers; as conversational interlocutors, they could cross-check for possible biases and emotionally charged, sensitive projections. These tools could help to test tone and refine clarity in discussions. It is, however, important that graduate students and researchers are aware that these language tools are trained on dominant discourses and may potentially reinforce norms. Researchers and students in sexuality studies must therefore be critical of the suggestions generated by the tools. In this section, we argue that sexuality research is inherently complex and demands a specific epistemological humility, rigorous methodological openness and emotional reflexivity. Graduate students are often not equipped with the skills to navigate this emotionally laborious and complex terrain. A reflective and critical use of AI generative tools could augment self-directed learning in sexuality studies to enhance conceptualisation and synthesis and to facilitate emotional scaffolding through iterative feedback. Such innovations are crucial for addressing the complex challenges within the interdisciplinary field of sexuality education.

Kolb’s experiential learning theory

Kolb’s experiential learning theory (ELT) is instrumental in this study, as it establishes a reflective framework for teaching and learning within the realm of graduate studies supervision. Kolb’s experiential learning model elucidates the process by which knowledge is generated through a cyclical sequence of four fundamental stages: concrete experience, reflective observation, abstract conceptualisation and active experimentation (Taneja, Kiran & Bose 2022). Kolb’s theory is particularly relevant in the context of the recent conceptualisation of AI generative tools in learning development. Students who engage ethically and effectively with AI generative tools in their research anchor their learning in pragmatic academic tasks (Radanliev et al. 2024). The interaction between human learning and technology facilitates the enhancement of self-efficacy, motivation levels and critical thinking (Jia & Tu 2024). Through well-structured reflexivity practices, when adeptly facilitated, students will be able to critically evaluate AI outputs (Johnson & Paulus 2024). Doing so would foster a nuanced understanding that may promote enhanced learning. Students may generate novel ideas that amplify human reflective capabilities. In this manner, AI generative tools can transcend the role of a writing assistant to become a co-constructor of knowledge while preserving essential human agency. We assert that through AI-integrated tools, Kolb’s ELT could evolve into augmented experiential learning theory (AELT). Kolb’s emphasis on learning through experience as transformational implies that learning does not occur through the passive consumption of AI-generated content. When students actively engage with AI tools in their research projects, the AELT cycle may foster and promote deeper learning (Wijnen-Meijer et al. 2022). In the context of graduate supervision, the emphasis could now shift to higher-order learning facilitation rather than the mere transmission of information (Xu 2024).
Thus, we maintain that AI-augmented supervision enables a fundamental philosophical commitment to learning as a transformation through experience.

Research methods and design

This study examines AI-enhanced supervision dynamics between a doctoral candidate and her thesis supervisor using a reflective pedagogical framework. This study employs a reflective pedagogical and learning approach, examining the dynamics of AI-augmented supervision. It is imperative to systematically evaluate the integration of AI tools in higher education to comprehend the learning processes and identify areas of improvement (Machost & Stains 2023). The supervisor and doctoral candidate explored a collaborative autoethnographic methodology (Karalis, Minematsu & Bosca 2023) to share their reflective experiences of graduate research collaboration facilitated by AI tools. Successful thesis supervision is captured in the dynamics of experience-based learning partnerships and mutual development (Nuis, Lundquist & Beausaert 2025). Diverse data sources were utilised, spanning a period of 18 months. These included reflective supervision journals, AI interaction logs and meeting minutes. Anthony and Jane adhered to optimal methodological outlines and considerations of academic integrity, aligned with institutional ethics protocol. The analysis comprises three interconnected stages derived from frameworks for assessing the incorporation of AI within educational settings. Firstly, the study examined each participant’s reflective experience with AI tools within the supervision process. Secondly, the study explored how the integration of AI tools shaped the supervision relationship, drawing on research that focuses on mentoring conversation tools (Goldshaft 2024). In the third stage, using a systemic lens, Anthony and Jane analysed the broader implications of AI-augmented thesis supervision in higher education. This study was conducted in accordance with institutional ethics requirements (SU Project ID 28755). The preliminary doctoral research conformed to institutional guidelines mandating the transparent utilisation of AI tools.
Regular ethical reflections were conducted (Balalle & Pannilage 2025) to facilitate academic integrity, research quality and graduate research attribute development.

Ethical considerations

Ethical clearance to conduct this study was obtained from Stellenbosch University Social, Behavioural and Education Research Ethics Committee (No. 28756).

Findings and discussion

In this section, the authors analyse their collaborative and reflective involvement in AI-augmented supervision. Utilising Kolb’s ELT, the findings are divided into three thematic categories to uncover the transformative possibilities and constraints of AI in graduate thesis supervision. These themes trace a journey from the student’s learning and research framework to the transformation of supervision. Lastly, this paper discusses the wider institutional and philosophical impacts. Each theme emphasises not only AI’s capabilities but also the conditions its responsible use requires, drawing on aspects of ethical judgement, intellectual modesty and dialogic mentorship.

Artificial intelligence-augmented learning in complex graduate research

The introduction of AI tools in April 2023 represented a pivotal phase in Jane’s doctoral research trajectory. This transition coincided with the period when ChatGPT achieved the fastest consumer technology uptake in history, reaching 100 million active users in just 2 months (Dempere et al. 2023). Jane’s initial response to the widespread attention surrounding emerging AI generative tools was characterised by a sense of inquisitiveness. She reflected that ‘everyone was talking about it and wanted to know what it was all about. I however had serious reservations because of the threats from universities that it is cheating’. This was puzzling because her ‘[my] workplace swiftly adopted it by promptly introducing guidelines and skills’. Conversely, her ‘PhD registration institution sent conflicting messages about compromised academic integrity and penalties’. In collaboration with Anthony, they collectively resolved to embark on a comprehensive exploratory journey. At that juncture, Anthony successfully identified a mentor to help him understand and apply these tools. Under Anthony’s guidance and through the development of AI literacy skills at Jane’s workplace, she began to recognise the potential of these technological tools as enhancements to critical research skills rather than replacements for them.

Jane’s initial substantive implementation of AI tools pertained to the enhancement of her literature review. Jane used AI tools such as ChatGPT, Claude AI and Scispace AI to summarise, compare and critically assess essential sources for her literature review. Over several weeks, she refined her prompts to enhance the quality of analysis, ensuring they were aligned with data analysis and supervisory feedback. Furthermore, she employed these tools within a methodological framework to tackle the challenges inherent in conducting sensitive interdisciplinary research. Artificial intelligence tools have demonstrated high accuracy and completeness in sexual health information (Sanchez, Slovacek & Wang 2024). Jane pointed out that:

‘ChatGPT became an essential tool in my pursuit of my research. When confronted with the nuanced distinctions between communication accommodation theory and symbolic interactionism, specifically in the context of same sex parenting. I engaged in comprehensive dialogues that facilitated the articulation of my understanding. It also helped with the identification of knowledge gaps.’ (Doctoral graduate, female)

The integration of AI in academic inquiry has been demonstrated to enhance the articulation of theoretical concepts. This experience aligns with research on AI literacy, which encompasses the critical evaluation and effective utilisation of AI tools (Long & Magerko 2020). Jane assessed AI-generated results, created effective prompts and incorporated AI feedback into her research projects. These included enhancing interview protocols, evaluating comprehensive questions and investigating alternative analytical techniques. Her experience also unveiled limitations, which required critical scrutiny. She observed, ‘I learned to discern [that] when AI-generated information seemed plausible, it lacked the nuanced and contextual comprehension of lived experiences’. This insight is consistent with recent AI literacies that point out that generated information may sound good but requires scrutiny by experts (University of Washington 2024). Research in Hong Kong indicates that, without such scrutiny, students could potentially use these tools to cheat by submitting work that is not originally produced (Kong et al. 2023). Chinese institutions predominantly focused on skills development that emphasised the evaluation of AI responses, enabling critical thinking (Zou et al. 2023).

Artificial intelligence tools significantly enhanced Jane’s methodological thinking and research design. ChatGPT enabled her to develop protocols for working with same-sex families. This undertaking was quite complex, considering the intricate cultural sensitivities and ethical considerations. She used ChatGPT and Claude AI to refine methodological approaches and anticipate interview challenges. ‘ChatGPT helped me to think through different scenarios and helped me to design different questioning approaches that could improve participant comfort and data quality’ (Doctoral graduate, female). This aligns with Jia and Tu’s (2024) view that AI enhances learning conceptualisation and aids critical reflections across contexts and scenarios.

Another critical attribute was Jane’s academic writing progress, which was profoundly influenced by the use of AI tools. Tools such as ChatGPT and Claude AI fulfilled roles as writing coaches and editors. Jane independently drafted sections of the thesis and subsequently engaged AI tools for critical feedback and clarity. ‘ChatGPT became a valuable writing partner that assisted me when my arguments were unclear and convoluted with assumptions’. The benefit of using AI tools in writing processes is their timely and organised feedback (Ratih & Kastuhandani 2024). Despite this assistance, ‘I, however, still had to consult with Anthony to verify aspects and alignment within a particular scholarly debate’. As identified earlier, the field of sexuality education often involves pervasive normative thinking, which requires frequent counter-narratives to disrupt and enhance the scholarship. Jane cultivated advanced prompting skills to elicit alternative framings or highlight additional evidence. She articulated that, ‘I learned to craft prompts that would challenge my thinking’. This process enhanced her critical capabilities and skills in argument development, providing deeper insights, implications and disciplinary sophistication. This is because generative AI has the potential to analyse lived experiences and emotional complexity, which necessitates scrutiny (WHO 2024). This functionality was particularly critical in Jane’s study, which required an in-depth analysis of intersectionality, enabling a comprehension of how overlapping factors of race, class, culture and geography shape and influence the field of sexuality education. Jane, though, observed that at times:

‘ChatGPT could offer oversimplified and dominant Western-centric perspectives of sexuality. This taught me the importance of diverse scholarly sources and community voices. I wish to emphasise that the reading of the literature remains paramount and fundamental in graduate studies, or else you would not be able to verify the suggestions by the AI tools.’ (Doctoral graduate, female)

This finding underscores the idea that AI should not be used as an authoritative voice and generated output should always be verified through human supervision (Kasneci et al. 2023).

Reflexive engagements in transformative ethical artificial intelligence-augmented supervision

The role of Anthony as a supervisor evolved significantly with the introduction of AI tools in the supervision process. Rather than merely incorporating technologies into conventional practices, this integration required a fundamental re-evaluation of the supervision process. Prior to its practical integration, Anthony’s preliminary strategy prioritised understanding the capacities and boundaries of AI. He aligned himself with principles promoted by academics who maintain that effective pedagogical strategies need to be established for the deployment of large language models in education (Kasneci et al. 2023). This included a systematic orientation to acquaint Jane with various AI tools and their suitable applications.

‘My initial priority was to ensure Jane perceived these as thinking instruments rather than a mere answer generator’, Anthony reflected. ‘We dedicated significant time examining methods for constructing effective prompts. We also explored how to critically analyse AI-generated responses and how to integrate AI-derived insights with independent reasoning’. This approach aligns with global advocacy that students must develop literacies to verify AI-generated information, an emphasis that foregrounds critical thinking (Kasneci et al. 2023). The orientation process reflected ethical guidelines emerging from academic institutions. The Markkula Centre for Applied Ethics (2023) recommends that students find out what the institutional stance and principles are regarding the integration of AI. Ongoing discussions were instrumental in ensuring that the use of AI continued to uphold academic integrity while enabling critical reflection (Cotton, Cotton & Shipway 2024). Anthony emphasised the significance of transparency and meticulous documentation of how the different tools were engaged. As such, comprehensive records of all notable AI interactions were maintained and reviewed in periodic supervisory meetings. This approach enabled the monitoring of research progression, fostered accountability and encouraged reflective learning.

The formulation of ethical protocols for AI implementation necessitated a thorough consideration of various factors. These include academic integrity, research excellence and intellectual growth. Jane and Anthony jointly drew on a set of guidelines focused on these elements from recent studies (Markkula Centre for Applied Ethics 2023). The considerations included several key principles. The implementation of AI must be comprehensively documented, and the use of the tools should be transparent (University of Toronto 2024). Comprehensively documenting the flow of AI use has the potential to alleviate the concern that AI use impairs individual critical thinking (Ruiz-Rojas, Salvador-Ullauri & Acosta-Vargas 2024). The documentation of AI usage formed part of the supervision discussions. ‘We facilitated regular evaluations [of] how the AI integration aided Jane’s scholarly development or whether it was creating dependencies’. Given the absence of well-defined regulatory frameworks for AI integration into graduate studies, we relied heavily on a variety of international frameworks.

Supervision conversations increasingly concentrated on in-depth theoretical insights and advanced methodologies. Anthony pointed out that ‘AI tools were used to handle basic information needs that freed our supervision time for higher-order discussion. These mainly focused on theoretical frameworks, ethical considerations and research implications.’ According to Nuis et al. (2025), this shift enables more sophisticated intellectual engagement and mentoring. Anthony drew attention to how:

‘[T]he tools also equipped Jane to refine questions in preparation for supervision discussion. This process accelerated learning, and we could explore complex aspects of the study. Critical reflections on AI generated content also sparked in-depth philosophical dialogues about sexuality knowledge which further generated larger inquiries.’ (Doctoral supervisor, male)

Anthony highlighted that:

‘[W]e questioned AI’s capacity to understand the lived experiences and cultural context pertained to this study. Since these tools provide 24/7 access to scholarly support, Jane could immediately advance scholarly engagement post the human mentorship and supervision session.’ (Doctoral supervisor, male)

As Anthony noted:

‘Jane developed sophisticated analytical skills since we often examined AI generated content together and discussed how to build on AI insights through critical thinking.’ (Doctoral supervisor, male)

These reflective dialogues were mutually intellectually gratifying. As Anthony pointed out, ‘Jane’s inquiries compelled me to clarify my personal views on learning and the development of knowledge. This interaction fostered a two-way learning dynamic that strengthened the supervisory relationship’.

Systematic implications of artificial intelligence-augmented graduate thesis supervision

Emerging AI tools, in the form of large language chatbots, have had a significant impact on higher education practices and policies. Artificial intelligence tools in graduate studies offer notable possibilities and hurdles that need addressing. According to recent international research, adoption among educational institutions has been slow (TechTarget 2025). This is despite the considerable investment in AI technologies. Among the primary opportunities of AI integration in educational institutions is its augmenting potential. Research found that AI tools improve the writing process by offering timely, structured feedback (Ratih & Kastuhandani 2024). It is important to note that AI tools cannot completely substitute traditional supervisor feedback. The experiences of Jane and Anthony validated these findings. The navigation of AI-integrated supervision uncovered further enhancement opportunities in line with new international best practices. Their experience also highlighted the continuous feedback that AI tools offer. They pointed to the around-the-clock writing support, conceptual clarification and methodological guidance. This was particularly helpful with intricate research topics such as sexualities, which can be emotionally challenging and involve multifaceted discussions. These instruments provided entry to a wide range of insights and specialised knowledge (Kong et al. 2023). Experienced supervisors researching in interdisciplinary fields such as lesbian, gay, bisexual and transgender (LGBT) studies can be challenging to find. In the case of Anthony and Jane, the applied AI tools presented equalising potential. In postgraduate research supervision, quality assurance and assessment of AI-produced augmentations are vital. Ongoing human oversight is essential because AI outputs frequently reflect simplistic or culturally biased viewpoints (WHO 2024).

It is crucial to emphasise the challenges accompanying the advantages of AI integration in graduate research. The primary focus is to ensure that the use of AI supports critical thinking rather than substituting for it (Ruiz-Rojas et al. 2024). Anthony noted the temptation to accept AI responses uncritically. This occurrence arises because the syntax of AI outputs resembles that of human language (Christou 2025). Such characteristics could potentially engender greater trust among users. Anthony and Jane advocate for perpetual vigilance and the employment of sophisticated methodologies in the verification of AI outputs. The verification of AI outputs becomes increasingly imperative because of the technical constraints encountered. It has been observed that AI-generated outputs may convey inaccurate information, particularly in the context of critical literature within complex research or culturally nuanced scenarios. As previously mentioned, a fundamental AI literacy skill is the capacity for critical evaluation, which underpins critical thinking.

A fundamental requirement at the graduate level is the maintenance of authenticity alongside the commitment to academic integrity. Universities have established that graduate thesis work must demonstrate originality and independent thought. Beyond traditional markers of integrity, the integration of AI necessitates meticulous documentation of how these tools are used. The incorporation of AI in graduate studies therefore requires a re-evaluation of ethical considerations by both supervisor and student.

These considerations underscore the need to re-evaluate the methodologies of graduate thesis supervision in the context of generative AI chatbots. While it is essential to empower students with ethical AI literacy skills, it is even more imperative for thesis supervisors to cultivate these competencies in order to mentor effectively. Undoubtedly, the objectives of thesis supervision necessitate fundamental restructuring: supervision must transcend the mere dissemination of information and the monitoring of progress. Artificial intelligence-augmented supervision ought to prioritise the development of critical thinking, the facilitation of ethical decision-making and the enhancement of intellectual engagement. This transition corresponds with the contemporary conceptualisation of mentoring as a reciprocal partnership (Nuis et al. 2025). The change requires an understanding that AI-augmented supervision is more than merely integrating technology into current methods; it is a profound reimagining of educational relationships. Artificial intelligence-integrated supervision facilitates an enhanced focus on intricate cognitive processes and fosters deeper intellectual engagement.

Data included meticulously kept reflective journals documenting interactions with AI tools. Records of formal and informal supervisory engagements and learning reflections were also included as components of the dataset. Finally, comprehensive documentation pertaining to the deployment of AI formed part of the dataset.

Conclusion

This reflective study offers a critical examination of the dynamic assimilation of generative AI tools within doctoral supervision. Challenging the human-centred, linear orientation of Kolb's (1984) Experiential Learning Theory (ELT), the study introduces an augmented experiential learning theory (AELT). This extension highlights the dynamic, non-linear interactions between human intentions and algorithms. Artificial intelligence fosters a distributed epistemology wherein cognition emerges from human and machine interactions, redefining 'experience' as a hybrid cognitive system. The supervisor's role transitions to facilitating ethical, AI-enhanced reflection, fostering critical thinking rather than cognitive outsourcing. Kolb's ELT posits that students are central to reflection, gaining experiential knowledge through engagement with a social or material context. Conversely, AI-driven learning establishes a distributed epistemology in which cognition and reflection are co-developed through human–machine interactions (Kasneci et al. 2023). The study reflects frameworks that are ethical, dialogical and epistemologically robust (Alabi 2023). The findings emphasise that when generative AI tools are employed with ethical and reflective intent, they do not detract from the significance of human mentorship. The doctoral candidate in this study developed critical research skills, refined methodological precision and fostered epistemological reflexivity. The integration of AI promoted deeper intellectual engagement, facilitated iterative feedback loops and supported emotional labour. Significantly, AI tools served not as conclusive sources of knowledge but as catalysts for questioning, expansion and synthesis.

Future directives

The recursive interplay between human agency and machine intelligence signifies a paradigmatic shift in knowledge creation. However, the findings reiterate that AI is not neutral; it is influenced by prevailing cultural, epistemic and linguistic codes. Jenks (2025) and Christou (2025) highlight that generative AI systems often display biases aligned with dominant linguistic and ideological norms. Consequently, AELT should embrace a critical technocultural perspective, encouraging supervisors and students to scrutinise AI's influence on knowledge frameworks. Supervisors must guide students in leveraging AI to enhance metacognitive abilities, creativity and academic integrity (Balalle & Pannilage 2025; Markkula Center for Applied Ethics 2023). The aim is to maintain research authenticity and ensure academic integrity. From a policy standpoint, the study advocates nuanced, context-specific institutional frameworks that embrace AI innovation. Supervision practices must evolve to incorporate AI literacy and critical engagement as fundamental graduate attributes, and institutions should invest in equipping supervisors to navigate these emerging pedagogies. For example, educational institutions should develop frameworks blending innovation with responsibility, emphasising AI ethics, data management and intellectual autonomy as essential graduate attributes (University of Toronto 2024). Artificial intelligence-augmented supervision, when rooted in ethical intent, critical reflexivity and human mentorship, presents significant opportunities to enhance research quality (Long & Magerko 2020; University of Washington 2024). It enables emotional resilience and democratises access to scholarly expertise. It is essential for supervisors to cultivate a critical understanding of generative AI to engage effectively with its emerging applications.
Supervisors must guide students to use AI as a collaborator that supports metacognitive awareness and fosters theoretical innovation (Balalle & Pannilage 2025; Markkula Center for Applied Ethics 2023); the incorporation of ethical reflection and verification is crucial in this endeavour. As academia navigates the increasing presence of generative AI within the knowledge ecosystem, future research must continue to explore, refine and reconceptualise the boundaries of supervision so as to sustain its most enduring commitment: nurturing independent thinkers. The study elucidates how AI-driven supervision can facilitate educational reform oriented towards equity. This paradigm addresses epistemic exclusion, advocates multilingualism and affords scalable solutions. The augmented experiential learning theory (AELT) furnishes a transformative framework that enhances accessibility, fosters dialogue and advances democratisation within supervisory contexts, thereby contributing to initiatives aimed at reconfiguring higher education in pursuit of social justice and technological advancement. Artificial intelligence in graduate research studies should advance human understanding and broaden the horizons of inquiry.

Acknowledgements

This article is based on research originally conducted as part of Jane P. Rossouw’s doctoral thesis titled ‘Conversations about sexuality in LGBTQ headed home environments’, submitted to the Faculty of Education, Department of Educational Psychology, Stellenbosch University in 2025. The thesis was supervised by Prof Anthony Brown and Dr Carmelita Jacobs. The manuscript has since been revised and adapted for journal publication.

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

CRediT authorship contribution

Anthony Brown: Conceptualisation; Data curation; Methodology; Writing-original draft. Jane Rossouw: Conceptualisation; Formal analysis; Writing-review & editing. All authors reviewed the article, contributed to the discussion of results, approved the final version for submission and publication and take responsibility for the integrity of its findings.

Funding information

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability

The data that support the findings of this study are available on request from the corresponding author, Anthony Brown.

Disclaimer

The views and opinions expressed in this article are those of the authors and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency or the publisher. The authors are responsible for this article's findings and content.

References

Alabi, O., 2023, ‘Critical reflection on sexuality research in Nigeria: Epistemology, fieldwork and researcher’s positionality’, Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 24(3), 1–25. https://doi.org/10.17169/fqs-24.3.3972

Alasadi, E.A. & Baiz, C.R., 2023, ‘Generative AI in education and research: Opportunities, concerns, and solutions’, Journal of Chemical Education 100(8), 2965–2971. https://doi.org/10.1021/acs.jchemed.3c00323

Baidoo-Anu, D. & Ansah, L.O., 2023, ‘Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning’, Journal of AI 7(1), 52–62. https://doi.org/10.61969/jai.1337500

Balalle, H. & Pannilage, S., 2025, ‘Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity’, Social Sciences & Humanities Open 11, 101299. https://doi.org/10.1016/j.ssaho.2025.101299

Chauke, T.A., Mkhize, T.R., Methi, L. & Dlamini, N., 2024, ‘Postgraduate students’ perceptions on the benefits associated with artificial intelligence tools on academic success: In case of ChatGPT AI tool’, Journal of Curriculum Studies Research 6(1), 44–59. https://doi.org/10.46303/jcsr.2024.4

Christou, P.A., 2025, ‘A critical inquiry into the personal and societal perils of Artificial Intelligence’, AI and Ethics 5(3), 2547–2555.

Cotton, D.R., Cotton, P.A. & Shipway, J.R., 2024, ‘Chatting and cheating: Ensuring academic integrity in the era of ChatGPT’, Innovations in Education and Teaching International 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148

Cowling, M., Crawford, J., Allen, K.-A. & Wehmeyer, M., 2023, ‘Using leadership to leverage ChatGPT and artificial intelligence for undergraduate and postgraduate research supervision’, Australasian Journal of Educational Technology 39(4), 89–103. https://doi.org/10.14742/ajet.8598

Dai, Y., Lai, S., Lim, C.P. & Liu, A., 2023, ‘ChatGPT and its impact on research supervision: Insights from Australian postgraduate research students’, Australasian Journal of Educational Technology 39(4), 74–88. https://doi.org/10.14742/ajet.8848

Dempere, J., Modugu, K., Hesham, A. & Ramasamy, L.K., 2023, ‘The impact of ChatGPT on higher education’, Frontiers in Education 8, 1–13. https://doi.org/10.3389/feduc.2023.1206936

Diprose, G., Thomas, A.C. & Rushton, R., 2013, ‘Desiring more: Complicating understandings of sexuality in research processes’, Area 45(3), 292–298. https://doi.org/10.1111/area.12031

Dodgson, J.E., 2019, ‘Reflexivity in qualitative research’, Journal of Human Lactation 35(2), 220–222. https://doi.org/10.1177/0890334419830990

Goldshaft, B., 2024, ‘Mentoring in practicum: Supporting student teachers’ learning to notice with collaborative observational tools’, Professional Development in Education 1–20. https://doi.org/10.1080/19415257.2024.2441837

Inside Higher Ed, 2024, How will AI influence higher ed in 2025?, Inside Higher Ed., viewed 02 March 2025, from https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/12/19/how-will-ai-influence-higher-ed-2025.

Jenks, C.J., 2025, ‘Communicating the cultural other: Trust and bias in generative AI and large language models’, Applied Linguistics Review 16(2), 787–795. https://doi.org/10.1515/applirev-2024-0196

Jia, X.H. & Tu, J.C., 2024, ‘Towards a new conceptual model of AI-enhanced learning for college students: The roles of artificial intelligence capabilities, general self-efficacy, learning motivation, and critical thinking awareness’, Systems 12(3), 1–25. https://doi.org/10.3390/systems12030074

Johnson, C.W. & Paulus, T., 2024, ‘Generating a reflexive AI-assisted workflow for academic writing’, The Qualitative Report 29(10), 2772–2792. https://doi.org/10.46743/2160-3715/2024.7634

Karalis, N.T., Minematsu, A. & Bosca, N., 2023, ‘Collective autoethnography as a transformative narrative methodology’, International Journal of Qualitative Methods 22, 1–9. https://doi.org/10.1177/16094069231203944

Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F. et al., 2023, ‘ChatGPT for good? On opportunities and challenges of large language models for education’, Learning and Individual Differences 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Khalifa, M. & Albadawy, M., 2024, ‘Using artificial intelligence in academic writing and research: An essential productivity tool’, Computer Methods and Programs in Biomedicine 5, 100145. https://doi.org/10.1016/j.cmpbup.2024.100145

Kolb, D.A., 1984, Experiential learning: Experience as the source of learning and development, Prentice Hall, Englewood Cliffs.

Kong, S.C., Cheung, W.M.Y. & Zhang, G., 2023, ‘A comprehensive AI policy education framework for university teaching and learning’, International Journal of Educational Technology in Higher Education 20, 41. https://doi.org/10.1186/s41239-023-00408-3

Latt, P.M., Aung, E.T., Htaik, K., Soe, N.N., Lee, D., King, A.J. et al., 2025, ‘Evaluation of artificial intelligence (AI) chatbots for providing sexual health information: A consensus study using real-world clinical queries’, BMC Public Health 25(1), 1788. https://doi.org/10.1186/s12889-025-22933-8

Link, S., Mehrzad, M. & Rahimi, M., 2022, ‘Impact of automated writing evaluation on teacher feedback, student revision, and writing improvement’, Computer Assisted Language Learning 35(4), 605–634.

Long, D. & Magerko, B., 2020, ‘What is AI literacy? Competencies and design considerations’, in Proceedings of the 2020 CHI conference on human factors in computing systems, Honolulu, HI, 25–30 April 2020, pp.1–16, ACM, New York. https://doi.org/10.1145/3313831.3376727

Machost, H. & Stains, M., 2023, ‘Reflective practices in education: A primer for practitioners’, CBE – Life Sciences Education 22(2), 1–11. https://doi.org/10.1187/cbe.22-07-0148

Markkula Center for Applied Ethics, 2023, Guidelines for the ethical use of generative AI (i.e. ChatGPT) on campus, Santa Clara University, viewed from https://www.scu.edu/ethics/focus-areas/campus-ethics/guidelines-for-the-ethical-use-of-generative-ai-ie-chatgpt-on-campus/.

Munyai, A., 2024, ‘Language conundrum in higher education institutions in South Africa: One step forward or two steps back?’, De Jure Law Journal 57(1), 177–195. https://doi.org/10.17159/2225-7160/2024/v57a13

Nartey, E.K., 2024, ‘Guiding principles of generative AI for employability and learning in UK universities’, Cogent Education 11(1), 2357898. https://doi.org/10.1080/2331186X.2024.2357898

Nuis, W., Lundquist, R. & Beausaert, S., 2025, ‘Exploring master’s students’ perceptions of mentoring support for reflection in a one-year employability-oriented mentoring program’, Higher Education 1–32. https://doi.org/10.1007/s10734-025-01449-5

Olmos-Vega, F.M., Stalmeijer, R.E., Varpio, L. & Kahlke, R., 2023, ‘A practical guide to reflexivity in qualitative research: AMEE Guide No. 149’, Medical Teacher 45(3), 241–251. https://doi.org/10.1080/0142159X.2022.2057287

Park, H. & Ahn, D., 2024, ‘The promise and peril of ChatGPT in higher education: Opportunities, challenges, and design implications’, in Proceedings of the 2024 CHI conference on human factors in computing systems, viewed 24 February 2025, from https://dl.acm.org/doi/10.1145/3613904.3642785.

Rababah, L.M., Al-Khawaldeh, N.N. & Rababah, M.A., 2023, ‘Mobile-assisted listening instructions with Jordanian audio materials: A pathway to EFL proficiency’, International Journal of Interactive Mobile Technologies 17(21), 129–140.

Radanliev, P., Santos, O., Brandon-Jones, A. & Joinson, A., 2024, ‘Ethics and responsible AI deployment’, Frontiers in Artificial Intelligence 7, 1377011. http://doi.org/10.3389/frai.2024.1377011

Ratih, M.C. & Kastuhandani, F.C., 2024, ‘Students’ lived experiences in utilizing artificial intelligence for thesis writing’, NUSRA: Jurnal Penelitian dan Ilmu Pendidikan 5(2), 760–769. https://doi.org/10.55681/nusra.v5i2.2696

Rossouw, J., 2025, ‘Conversations about sexuality in LGBTQ + headed home environments’, unpublished PhD thesis, Stellenbosch University.

Ruiz-Rojas, L.I., Salvador-Ullauri, L. & Acosta-Vargas, P., 2024, ‘Collaborative working and critical thinking: Adoption of generative artificial intelligence tools in higher education’, Sustainability 16, 5367. https://doi.org/10.3390/su16135367

Sanchez, D., Slovacek, H. & Wang, R., 2024, ‘Shaping the future of men’s sexual health: How artificial intelligence can assist in the management and treatment of erectile dysfunction’, UroPrecision 2(1), 1–8. http://doi.org/10.1002/uro2.31

Stanford HAI, 2025, AI Index 2025: State of AI in 10 charts, Stanford Human-Centered AI Institute, viewed 02 March 2025, from https://hai.stanford.edu/news/ai-index-2025-state-of-ai-in-10-charts.

Taneja, M., Kiran, R. & Bose, S.C., 2022, ‘Critical analysis of Kolb experiential learning process: Gender perspective’, International Journal of Health Sciences 6(S1), 8713–8723. https://doi.org/10.53730/ijhs.v6nS1.6962

TechTarget, 2025, 8 AI and machine learning trends to watch in 2025, TechTarget, viewed 10 March 2025, from https://www.techtarget.com/searchenterpriseai/tip/9-top-AI-and-machine-learning-trends.

TEQSA, 2024, Artificial intelligence resources for higher education, Tertiary Education Quality and Standards Agency, viewed 10 March 2025, from https://www.teqsa.gov.au/guides-resources/higher-education-good-practice-hub/artificial-intelligence.

Thong, C.L., Atallah, Z., Islam, S., Lim, W. & Cherukuri, A.K., 2025, ‘AI-powered tools for doctoral supervision in higher education: A systematic review’, Journal of Information & Knowledge Management 24(2), 1–27. https://doi.org/10.1142/S0219649225300013

University of Toronto School of Graduate Studies, 2024, Guidance on the appropriate use of generative artificial intelligence in graduate theses, School of Graduate Studies, viewed 10 March 2025, from https://www.sgs.utoronto.ca/about/guidance-on-the-use-of-generative-artificial-intelligence/.

University of Washington Graduate School, 2024, Effective and responsible use of AI in research, UW Graduate School, Seattle, Washington.

Vargas-Murillo, A.R., De la Asuncion, I.N.M. & De Jesús Guevara-Soto, F., 2023, ‘Challenges and opportunities of AI-assisted learning: A systematic literature review on the impact of ChatGPT usage in higher education’, International Journal of Learning, Teaching and Educational Research 22(7), 122–135. https://doi.org/10.26803/ijlter.22.7.7

Wang, L. & Li, W., 2024, ‘The impact of AI usage on university students’ willingness for autonomous learning’, Behavioral Sciences 14(10), 1–16. https://doi.org/10.3390/bs14100956

WHO, 2024, The role of artificial intelligence in sexual and reproductive health and rights: Technical brief, p. 15, World Health Organization, viewed 10 March 2025, from https://d-nb.info/1358469903/34.

Wijnen-Meijer, M., Brandhuber, T., Schneider, A. & Berberat, P.O., 2022, ‘Implementing Kolb’s experiential learning cycle by linking real experience, case-based discussion and simulation’, Journal of Medical Education and Curricular Development 9, 23821205221091511. https://doi.org/10.1177/23821205221091511

Wilson, J. & James, W., 2022, ‘Ph.D. partnership: Effective doctoral supervision using a coaching stance’, Journal of Further and Higher Education 46(3), 341–353. https://doi.org/10.1080/0309877X.2021.1945555

Woodward, K. & Woodward, S., 2015, ‘Gender studies and interdisciplinarity’, Palgrave Communications 1(1), 1–5. https://doi.org/10.1057/palcomms.2015.18

Xu, Z., 2024, ‘AI in education: Enhancing learning experiences and student outcomes’, Applied and Computational Engineering 51(1), 104–111. https://doi.org/10.54254/2755-2721/51/20241187

Zou, X., Su, P., Li, L. & Fu, P., 2024, ‘AI-generated content tools and students’ critical thinking: Insights from a Chinese university’, IFLA Journal 50(2), 228–241. https://doi.org/10.1177/03400352231214963
