After fellowship evaluation season: a guide to future applicants
It is the season again: I’ve just finished reviewing several dozen MA and PhD applications, research fellowship submissions, and the evaluations of conference, research, and travel grants.
Let me tell you what has changed compared to previous years.
Photo: Fortepan / Magyar Rendőr, no. 17962. Entrance examination, Egyetem tér 1-3, ELTE, 1953.
Science is built on the principle that, within the framework of academic citizenship, scholars evaluate each other’s grant proposals and applications for free.
This is an extremely time‑consuming service: one must read each application carefully to see whether it meets the requirements of the call, whether the proposal engages with the cutting‑edge literature, and, more importantly, how it compares with the other applications. But the situation has now changed.
What has changed?
While reading applications from gender‑studies scholars, and from scholars hoping to enter this booming field, it struck me that AI is being used without acknowledgement. The AI‑written statements repeated strikingly uniform claims about “changing the world” and “making society a better place” by exposing inequalities and discrimination. They ignored the fact that the people reading these applications do so voluntarily, out of goodwill and an interest in promoting gender‑studies scholarship, on the trust that applicants have submitted their own work.
Applicants do not realize that evaluators also know how to use internet search engines.
During the evaluation process, I think I spent more time checking the fake conference acceptance letters than the AI needed to generate them. When several of these conferences turned out to have been held in another country, at another time, by other organizers, I felt disheartened. What did the applicants think of me?
Language competence is certified by AI. I am not impressed.
When I first saw a Duolingo certificate attached to an MA application, I thought it was a joke. After seeing it several times, I began to reflect on the consequences of outsourcing language teaching to IT companies without any professional oversight. What language is actually being taught “for free”, or very cheaply, when trained specialists, that is, language teachers, are not involved?
The reviewers are still experts.
Reviewers are invited because they are experts; they know the scholarly literature. AI, however, is notorious for generating nonexistent references. When applicants ask AI to generate a proposal, they forget one thing. The reviewers know the field.
When, in several applications, I encountered citations to works that I had supposedly written but had never seen before, I had to reconsider the conditions under which AI can access open‑access publications. Maybe this whole open‑access discourse is going very, very wrong.
And why does the applicant assume that I will not notice that the references in the application do not exist?
The future of evaluating grant applications. (Spoiler: it is gloomy)
Grant evaluation has now become detective work rather than scholarly dialogue. Universities are now introducing AI into the admissions and interview process. I am sure that AI could have caught the AI‑faked invitation letters, proposals, and references as easily, cheaply, and quickly as I did.
Authors have organised protest campaigns, publishing empty books to protest against AI stealing their work. Here the problem is reversed: AI is producing content that humans must spend their time evaluating.
This will have long‑lasting consequences, such as a return to individual interviews until we can reliably distinguish a human applicant from an AI chatbot.


