About the Author(s)


Sioux McKenna
Centre for Postgraduate Studies, Rhodes University, Makhanda, South Africa

Citation


McKenna, S., 2025, ‘Foregrounding doctoral knowledge and knower in the age of Generative Artificial Intelligence’, Transformation in Higher Education 10(0), a653. https://doi.org/10.4102/the.v10i0.653

Review Article

Foregrounding doctoral knowledge and knower in the age of Generative Artificial Intelligence

Sioux McKenna

Received: 17 July 2025; Accepted: 04 Nov. 2025; Published: 09 Dec. 2025

Copyright: © 2025. The Author. Licensee: AOSIS.
This work is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license (https://creativecommons.org/licenses/by/4.0/).

Abstract

While Generative Artificial Intelligence (AI) presents new challenges for doctoral education, it also offers an opportunity to refocus doctoral programmes on their fundamental purposes: contributing to knowledge and developing critical researchers. This article draws on the literature on doctoral education to contend that a fixation on efficiency and market-driven outcomes has made doctoral education particularly vulnerable to the misuse of AI. This is because seeing the doctorate as a product to be acquired within a minimum time diminishes the likelihood of substantive conversations taking place about scholarly responsibility and the nature of knowledge creation. By outlining a few ethical deliberations about Generative AI pertinent to all doctoral candidates, this opinion article optimistically suggests that the common concerns about the misuse of AI in our institutions might act as a catalyst, turning the focus onto the knowledge and knowers of doctoral education.

Contribution: The emergence of Generative AI, in the form of Large Language Models such as ChatGPT, Claude and DeepSeek, has led to concerns about authorship in the doctorate. This article suggests that we should instead use this potential threat as the impetus to turn the focus onto the knowledge and knower purposes of doctoral education.

Keywords: doctoral education; Generative AI; critical scholarship; research development; postgraduate education; LLMs.

Introduction

There are significant variations in doctoral education across countries and disciplines, including the range of supervisory structures, the inclusion of publications or creative outputs, the use of a viva, whether there are coursework requirements, and so on (Council on Higher Education [CHE] 2023). Despite this, there exists a remarkable consensus around the world on the two fundamental objectives of doctoral education.

Firstly, doctoral programmes should facilitate knowledge creation. In South Africa, the Higher Education Qualifications Sub-Framework (CHE 2013:40) defines this as making ‘a significant and original academic contribution at the frontiers of a discipline or field’. While what constitutes the new knowledge that doctoral education requires remains highly contested, there is general agreement that doctoral research should advance scholarship.

Secondly, doctoral education should develop autonomous researchers who have acquired the attributes expected of someone capable of contributing to knowledge – what we might term the cultivation of the ‘knower’ (Boud & Lee 2009). While there are marked differences in the dispositions that different fields expect of expert knowers (Maton 2014), there is agreement that the doctoral graduate should embody a ‘researcher identity’ (Castelló et al. 2021). In South Africa, the Qualification Standard for Doctoral Degrees (CHE 2018) articulates this identity broadly in the form of nine graduate attributes, whereby the graduate is meant to have expert knowledge of their topic, understand how it connects to other research, have the ability to communicate their work to both experts and the public, act in ethically responsible ways, and so on.

While we may debate the nuances of what is meant by both a ‘contribution to knowledge’ and ‘the nurturing of a researcher identity’, and whether we are indeed achieving these outcomes, there is a broad consensus that both should be fundamental goals of doctoral education (Boud & Lee 2009; Wisker 2012). However, this article argues that contemporary doctoral education has been significantly shaped by the framing of universities as training centres that sell qualifications (Giroux 2025; Shore & Wright 2024). This context has constrained both what it means to contribute to knowledge and the kinds of knowers emerging with doctoral qualifications. In some cases, we have effectively emptied out the substance of contributing to knowledge and nurturing researchers in order to meet market demands, provide customer satisfaction and protect institutional brands (Ball 2012; Brown 2015; Giroux 2014, 2025).

This article employs a critical review of existing literature to develop a conceptual argument regarding the use of Generative AI in doctoral education. It reflects on how these core purposes of the doctorate, the creation of knowledge and the nurturing of the autonomous researcher, are affected by the emergence of Generative Artificial Intelligence (AI). While the article acknowledges that Generative AI provides enormous opportunities for doctoral studies and enables leaps forward in knowledge creation, the focus here is on the ways in which the neoliberal context shapes the uptake of Generative AI in potentially negative ways. The article ends with recommendations on how we can temper some of these deleterious effects and ensure that our doctoral candidates engage with Generative AI in ethical, critical and meaningful ways.

Market logic and the emergence of artificial intelligence

The neoliberalisation of higher education has been extensively documented by scholars who highlight how market logics have shaped academic institutions and conditioned their practices (e.g. Di Leo 2024; Fleming 2021; Slaughter & Rhoades 2004). Globalisation has seen the entrenchment of neoliberal ideology, characterised by marketisation, privatisation and competition, across higher education systems worldwide, although its manifestation varies according to national and regional contexts (Madra & Adaman 2018; Shore & Wright 2024). Concerns about the impact of neoliberalism in shaping the activities and practices of higher education have also been expressed in the South African context (see, for example, the 2024 special edition of Transformation in Higher Education on the neoliberal turn in higher education). Such concerns should not be dismissed as a misplaced nostalgia for a golden age of higher education that never existed; rather, these concerns emerge from the idea that the good university is one that serves society and the environment (Ashwin 2020; Connell 2019). Despair about instrumentalist approaches to higher education is thus not a longing for the past but rather dismay at how far we are from where we want to be:

At its most basic, neoliberalism, as an ideology, validates anti-statism, deregulation, and a domination of the market over governance structures. Such attributes pose great risk to the stability, standards and quality of doctoral programmes. (Brabazon 2016:16)

Within doctoral education, this neoliberalism has manifested as a fixation on efficient graduation rates at the cost of deliberations about what kind of knowledge is being created and whom that knowledge serves (Boud & Lee 2009; Grant 2003). In our focus on getting students through the system in ‘regulation time’, we have perhaps neglected conversations about what kind of person an ethical researcher is and what responsibilities should accompany the societal status often accorded to the title of ‘Doctor’. The neoliberal context has created what Giroux (2025) describes as a ‘crisis of agency’ in higher education, where the development of critical consciousness takes a back seat to credentialing and employability metrics. For our doctoral candidates, this is experienced as an emphasis on throughput and completion rates rather than the nurturing of their scholarly identity and critical thinking capabilities within the knowledge creation processes of the field (Grant 2003; McAlpine & Norton 2006).

It is within this neoliberal context that Generative AI has emerged. Much of the response by universities to Generative AI to date has been concerned with catching students who ‘cheat the system’ by using these programs to write their work for them (Kramm & McKenna 2023). This response is in part shaped by the idea that the commodity of a qualification will be devalued if students can acquire it without labour. It will no longer be an elite good that credentials successful students for industry and accords societal status if Generative AI can instantly produce the assessment outputs by which we previously determined which students to accredit.

The dominant police-catch-punish reaction to students’ use of AI generally fails to address the larger pedagogical concern that relying on AI can enable students to avoid grappling with complex, principled knowledge and skip the strenuous intellectual work that this entails. This then hampers the likelihood of students enjoying a transformative relationship with knowledge which ‘changes their sense of who they are, their understanding of the world, and their understanding of what they can do in the world’ (Ashwin 2020:68); a transformative relationship that should be at the very heart of doctoral education.

Undoubtedly, Generative AI allows for research at a pace and reach that would have been unfathomable just a few years ago. These capabilities are opening unprecedented research opportunities, from analysing vast datasets and synthesising cross-disciplinary literature to rapidly prototyping methodologies and generating novel hypotheses, transforming how doctoral candidates conduct and disseminate their scholarly work. One large study with doctoral students demonstrated that ChatGPT streamlined their writing process through assistance with drafting, organising ideas, grammar correction and sentence restructuring (Rafi & Amjad 2025). Other studies suggest the use of GenAI can enhance data analysis (Ringo 2025), augment thinking processes (Kumar & Gunn 2024) and serve as complementary resources to traditional database searches (Kumar & Gunn 2024; Mabirizi et al. 2025). Collectively, such studies demonstrate that Generative AI is fundamentally reshaping doctoral education by accelerating research capabilities and providing support throughout the doctoral journey.

But clearly neither concerns about ‘catching’ students who ‘cheat’ the system, nor acknowledgement of the benefits of Generative AI, tells the whole story. The implications of Generative AI for doctoral studies are extensive and multifaceted.

If grappled with fully, these implications raise fundamental questions about the nature of knowledge creation, the role of the researcher, and the meaning of original scholarship. If the doctorate is simply about producing a thesis good enough to pass muster in the examination process, and Generative AI can produce such a thesis almost instantly, one might ask whether doctoral programmes should simply focus on training students in prompt engineering and ensuring they know how to scrutinise outputs for hallucinations and inaccuracies. This approach would certainly be more efficient, aligning with current imperatives for streamlined, cost-effective education delivery.

However, this would of course fail to address the purposes of doctoral education. The doctorate is not about knowledge creation in the mechanical sense of reviewing the literature and implementing the data collection and analysis methods accepted in the field; it is also fundamentally about nurturing the independent, responsible researcher who undertakes the work of critically engaging with literature and grappling with data in order to make a contribution.

The critical researcher as outcome of doctoral education

The responsible, independent researcher is fundamentally someone who is critical. Criticality in academia refers to the ability to question assumptions, weigh up alternatives, challenge dominant narratives and analyse power structures within knowledge production. It involves examining whose voices are privileged in scholarly discourse and how knowledge might serve some interests over others (Freire 2005). Becoming critical entails coming to understand how one’s education ‘embodies selective values, is entangled with relations of power, entails judgments about what knowledge counts, legitimates specific social relations, [and] defines agency in particular ways’ (Giroux 2011:6). When we position the development of responsible, independent and critical researchers at the heart of doctoral education, this has significant implications for how we engage with Generative AI in doctoral programmes.

Most importantly, this approach requires ensuring that doctoral students develop a comprehensive understanding of what Generative AI is – how it functions and what its limitations are. This understanding is essential for maintaining the critical stance that should characterise all advanced scholarly work. To begin with, doctoral candidates must understand that Large Language Models like ChatGPT, Claude and DeepSeek are not search engines. Rather, they create new text and images based on sophisticated pattern recognition algorithms (Barreto et al. 2023). Crucially, AI systems cannot at present discern truth from falsehood, yet they are designed to adopt confident positions in their outputs, at odds with the cautious proclamations valued in scientific knowledge creation (Peters & Chin-Yee 2025; Stavrova et al. 2024). Generative AI’s combination of apparent authority, tendency towards overgeneralisation, and potential for inaccuracy presents challenges for doctoral students who must learn to evaluate and verify AI-generated content critically. Beyond ensuring that our doctoral candidates have such a basic comprehension of how Generative AI works is the need for us to nurture ethical engagement with AI.

Nurturing ethical engagement with artificial intelligence

Any critical researcher engaging with Generative AI must be aware of the numerous ethical dimensions that accompany its use. These include the extensive environmental costs of AI systems, which are particularly concerning in a world already facing climate change and food and water insecurities (Bashir et al. 2024; Berthelot et al. 2024). The carbon footprint of training large language models raises serious questions about the sustainability of widespread AI adoption in academic contexts. Critical researchers also need to reflect on how current AI systems have been trained on texts, images, videos and other media produced by countless individuals, typically without recognition for those who created the original works (Chesterman 2025; Samuelson 2023).

Furthermore, much of the content moderation and training work has been performed by workers in ‘developing countries’, often under exploitative conditions; for instance, workers in Kenya were paid minimal wages for much of the training while experiencing the significant mental health consequences of this work (Perrigo 2023). And while current models are trained to avoid reproducing overt forms of sexism, racism, xenophobia and homophobia, this does not mean that these problems have been eliminated (Bender et al. 2021; Shah, Schwartz & Hovy 2020). These biases persist in the subtle ways that already permeate much of our research. And there is no guarantee that anti-hate training will be maintained in future AI systems.

Another key ethical issue we need today’s doctoral candidates to grapple with is that Generative AI represents a billion-dollar industry that primarily benefits a small number of large technology corporations (Rudolph et al. 2025). As doctoral candidates begin to draw on AI tools useful in their fields, they need to engage critically with what this industry means for the digital divide. The most sophisticated versions (especially discipline-specific AI software) are not free for use and thus are not accessible to doctoral scholars in most of the Global South. Furthermore, Generative AI is primarily trained on materials from the Global North and thus replicates all the ontological blind spots characteristic of such materials.

We need to ask our doctoral candidates what AI might mean for job displacement in their fields as AI becomes more widespread (Rudolph et al. 2025), especially given that we live in a world where those without salaried employment typically have their basic human rights neglected. These are vital deliberations for doctoral students, who, as future scholars and potential leaders in their fields, have a responsibility to consider the broader social implications of the technologies that are rapidly becoming ubiquitous.

Perhaps most immediately, doctoral candidates need to engage in sustained reflection about what it means to work with AI and what it means to allow AI to perform significant portions of their scholarly work. Have they contributed to knowledge if AI has generated significant portions of the analysis or writing? What kind of critical researchers are they becoming through this process? These questions go to the heart of doctoral education’s purposes.

While Generative AI is a recent phenomenon, the concerns it raises for doctoral education are not entirely new. Rather, they represent intensified versions of questions that should always be asked of our doctoral candidates: What will your doctoral contribution to knowledge be? Who will it benefit? Who are you becoming as a novice researcher? What are the responsibilities that accompany your privileged educational position?

The neoliberal context of contemporary higher education has unfortunately reduced the extent of such conversations in doctoral education. The emphasis on efficiency, completion rates and market relevance has often crowded out these more fundamental questions about purpose, responsibility and scholarly identity.

Conclusion: Artificial intelligence as catalyst for focus on the doctorate’s purpose

Perhaps our current concerns about students relying entirely on AI to complete their doctoral studies will serve as the necessary impetus to foreground foundational questions about the purpose of such studies. Rather than focusing on the police-catch-punish methods we have unsuccessfully used for issues of plagiarism, we might conceptualise Generative AI as an opportunity to engage more substantively with what doctoral education should accomplish.

This reconceptualisation requires moving beyond framings that reduce doctoral education to credential acquisition and skills development. Instead, it demands a focus on the cultivation of critical, responsible scholars who can engage thoughtfully with emerging technologies while maintaining their commitment to rigorous, ethical scholarship that serves broader social purposes.

In the South African context, where transformation remains a central imperative for higher education, this refocusing on the substantive purposes of doctoral education, both knowledge creation that serves society and the development of ethically responsible scholars, directly addresses transformation goals. By foregrounding questions about whose knowledge counts, whom that knowledge serves, and what responsibilities accompany scholarly privilege, we create conditions for doctoral education that can genuinely contribute to social justice and decolonial knowledge production.

The integration of Generative AI into doctoral education thus presents both challenges and opportunities. By addressing these challenges head-on, through critical education about AI’s capabilities and limitations, sustained reflection on ethical implications, and renewed emphasis on scholarly responsibility, we can use this moment to strengthen rather than diminish the fundamental purposes of doctoral education.

Acknowledgements

Competing interests

The author declares that no financial or personal relationships inappropriately influenced the writing of this article.

Authorship contribution

Sioux McKenna: Conceptualisation, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Visualisation, Writing – original draft, Writing – review & editing.

Ethical considerations

Ethical clearance to conduct this study was obtained from the Rhodes University Human Research Ethics Committee (No. RUHREC-2025-0028).

Funding information

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability

The author confirms that the data supporting the findings of this study are available within the article and its references.

Disclaimer

The views and opinions expressed in this article are those of the author and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency or that of the publisher. The author is responsible for this article’s results, findings and content.

References

Ashwin, P., 2020, Transforming higher education: A manifesto, Bloomsbury, London.

Ball, S.J., 2012, Global Education Inc.: New policy networks and the neo-liberal imaginary, Routledge, London.

Bashir, N., Donti, P., Cuff, J., Sroka, S., Ilic, M., Sze, V. et al., 2024, The climate and sustainability implications of generative AI. An MIT Exploration of Generative AI, MIT Press, Cambridge.

Bender, E.M., Gebru, T., McMillan-Major, A. & Shmitchell, S., 2021, ‘On the dangers of stochastic parrots: Can language models be too big?’, in Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, March 3-10, 2021, pp. 610–623, Association for Computing Machinery, New York.

Berthelot, A., Caron, E., Jay, M. & Lefevre, L., 2024, ‘Estimating the environmental impact of Generative-AI services using an LCA-based methodology’, Procedia CIRP 122, 707–712. https://doi.org/10.1016/j.procir.2024.01.098

Boud, D. & Lee, A., 2009, Changing practices of doctoral education, Routledge, Abingdon.

Brabazon, T., 2016, ‘Winter is coming: Doctoral supervision in the Neoliberal University’, International Journal of Social Sciences & Educational Studies 3(1), 13–21.

Brown, W., 2015, Undoing the demos: Neoliberalism’s stealth revolution, MIT Press, Cambridge.

Castelló, M., McAlpine, L., Sala-Bubaré, A., Inouye, K. & Skakni, I., 2021, ‘What perspectives underlie “researcher identity”? A review of two decades of empirical studies’, Higher Education 81, 567–590. https://doi.org/10.1007/s10734-020-00557-8

Chesterman, S., 2025, ‘Good models borrow, great models steal: intellectual property rights and generative AI’, Policy and Society 44(1), 23–37. https://doi.org/10.1093/polsoc/puae006

Connell, R., 2019, The Good University: What universities actually do and why it’s time for radical change, Zed Books, London.

Council on Higher Education (CHE), 2013, Higher education qualifications sub-framework, Council on Higher Education, Pretoria. (As per Notice No 549, Government Gazette No. 36721, 02 August 2013).

Council on Higher Education (CHE), 2018, Higher education qualifications sub-framework qualification standard for doctoral degrees, Council on Higher Education, Pretoria.

Council on Higher Education (CHE), 2023, Briefly speaking #25: Models of postgraduate supervision and the need for a research-rich culture, Council on Higher Education, Pretoria.

Di Leo, J.R., 2024, Dark academe: Capitalism, theory, and the death drive in higher education, Palgrave, London.

Fleming, P., 2021, Dark academia: How universities die, Pluto Press, London.

Freire, P., 2005, Pedagogy of the oppressed, 30th anniversary edn., Continuum International Publishing, New York.

Giroux, H.A., 2011, On critical pedagogy, Continuum, New York.

Giroux, H.A., 2014, Neoliberalism’s war on higher education, Haymarket Books, Chicago.

Giroux, H.A., 2025, The burden of conscience: Educating beyond the veil of silence, Bloomsbury Books, London.

Grant, B.M., 2003, ‘Mapping the pleasures and risks of supervision’, Discourse: Studies in the Cultural Politics of Education 24(2), 175–190. https://doi.org/10.1080/01596300303042

Kramm, N. & McKenna, S., 2023, ‘AI amplifies the tough question: What is higher education really for?’, Teaching in Higher Education 28(8), 2173–2178. https://doi.org/10.1080/13562517.2023.2263839

Kumar, S. & Gunn, A., 2024, ‘Doctoral students’ reflections on generative artificial intelligence (GenAI) use in the literature review process’, Innovations in Education and Teaching International 62(4), 1395–1408. https://doi.org/10.1080/14703297.2024.2427049

Mabirizi, V., Katushabe, C., Muhoza, G. & Rugasira, J., 2025, ‘A systematic review of the impact of generative AI on postgraduate research: Opportunities, challenges, and ethical implications’, Discover Artificial Intelligence 5, 238. https://doi.org/10.1007/s44163-025-00495-3

Madra, Y. & Adaman, F., 2018, ‘Neoliberal turn in the discipline of economics: Depoliticization through economization’, in D. Cahill, M. Cooper, M. Konings & D. Primrose (eds.), SAGE handbook of neoliberalism, pp. 113–127, Sage, Melbourne.

Maton, K., 2014, Knowledge and Knowers, Routledge, Abingdon.

McAlpine, L. & Norton, J., 2006, ‘Reframing our approach to doctoral programs: An integrative framework for action and research’, Higher Education Research & Development 25(1), 3–17. https://doi.org/10.1080/07294360500453012

Perrigo, B., 2023, ‘OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic’, Time Magazine, 18 January.

Peters, U. & Chin-Yee, B., 2025, ‘Generalization bias in large language model summarization of scientific research’, Royal Society Open Science 12, 241776. https://doi.org/10.1098/rsos.241776

Rafi, M.S. & Amjad, I., 2025, ‘The role of generative AI in writing doctoral dissertation: Perceived opportunities, challenges, and facilitating strategies to promote human agency’, Discover Education 4, 165. https://doi.org/10.1007/s44217-025-00503-9

Ringo, D.S., 2025, ‘The effect of generative AI use on doctoral students’ academic research progress: The moderating role of hedonic gratification’, Cogent Education 12(1), 2475268. https://doi.org/10.1080/2331186X.2025.2475268

Rudolph, J., Ismail, F., Tan, S. & Seah, P., 2025, ‘Don’t believe the hype. AI myths and the need for a critical approach in higher education’, Journal of Applied Learning & Teaching 8(1), 6–27. https://doi.org/10.37074/jalt.2025.8.1.1

Samuelson, P., 2023, ‘Generative AI meets copyright’, Science 381, 158–161. https://doi.org/10.1126/science.adi0656

Shah, D., Schwartz, H.A. & Hovy, D., 2020, ‘Predictive biases in natural language processing models: A conceptual framework and overview’, in Proceedings of the 58th annual meeting of the Association for Computational Linguistics, July 5-10, 2020, pp. 5248–5264, Association for Computational Linguistics, Stroudsburg.

Shore, C. & Wright, S., 2024, Audit culture: How indicators and rankings are reshaping the world, Pluto Press, London.

Slaughter, S. & Rhoades, G., 2004, Academic capitalism and the new economy: Markets, state, and higher education, Johns Hopkins University Press, Baltimore.

Stavrova, O., Kleinberg, B., Evans, A.M. & Ivanović, M., 2024, ‘Expressions of uncertainty in online science communication hinder information diffusion’, PNAS Nexus 3(10), 439. https://doi.org/10.1093/pnasnexus/pgae439

Wisker, G., 2012, The good supervisor: Supervising postgraduate and undergraduate research for doctoral theses and dissertations, 2nd edn., Palgrave Macmillan, London.


