I have already written and published several literature reviews. Some were pure review papers, but most were empirical research papers that reviewed extant theory before reporting methods and results.
I have had more luck publishing systematic literature reviews than critical literature reviews (one has already bounced back with three rejections). Beyond my own experience, I suspect that my students know more about the online tools available for literature reviews than I do.
I have a very carefully curated collection of references organized in .bib files, but my students might write literature reviews faster even though they have not read, digested, and organized hundreds of papers as I did before. I learned during my master's thesis that you (1) get an article, (2) read a sentence, (3) imagine throwing the sentence, written on paper, into the garbage, and (4) write the sentence in your own words and cite the source. However, now as a teacher, I come across students who do not get the articles, barely read them, and go to one of the recent generative AI tools that can write and format the literature review for them in a fairly advanced state.
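For readers who have not worked with .bib files: each one is just a plain-text collection of BibTeX records, one per reference. A sketch of what a single entry looks like, using the Metzger (2007) paper discussed later in this post (the journal name and page details here are from memory and should be verified before use):

```bibtex
@article{metzger2007credibility,
  author  = {Metzger, Miriam J.},
  title   = {Making sense of credibility on the Web: Models for
             evaluating online information and recommendations
             for future research},
  journal = {Journal of the American Society for Information
             Science and Technology},
  year    = {2007}
}
```

The citation key (`metzger2007credibility`) is whatever you choose; it is what you type in `\cite{...}` in the manuscript, and the bibliography style takes care of the formatting.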
Whether it is ethical or not, whether it leads to plagiarism or not, it is productive. Still, literature reviews generated with those systems will not lead to serious scientific advances per se. But I think that in the future many great scientific discoveries will be communicated and integrated with a literature review that leveraged the power of generative AI tools or AI "co-pilots".
It is a hot topic. Last December, when I attended the International Conference on Information Systems, many panels discussed the use of generative AI systems in teaching and research. A panel on the use of generative AI systems in papers submitted to top journals led to a lot of discussion and disagreement.
It's time to learn about these things, and here is a good place to share my lessons learned. I started learning about this on YouTube, where I ended up following the videos of a few academics.
They introduced me to a good few tools and processes that should help. It's time now to test them and see if they can help me with my current research on how students evaluate information in the context of sports and fitness. At the moment, I already know a few "must cite" papers. Would AI tools find them? Would I get plagiarized content? Would it be worth learning these tools, or is it faster to just do it the good old way? So far it seems a very interesting and creative process to combine all those different tools.
Andy shows many approaches to using AI in literature reviews on his channel. Some are more basic, some are more advanced, and they blend many tools and methods. As I am still at the beginning of my learning journey on this topic, I followed his video entitled "3 Unbelievable AI Technologies to Automate Your Literature Review". I am trying to keep things easy!! Like my students.
The first tool Andy used was wisio.app. I registered, received a confirmation code in my email, was all set to try it, and then, kaboom, https://wisio.app/ led me to
“Application error: a client-side exception has occurred (see the browser console for more information).” Is this because I use Linux? Because I use Firefox?
Well, I will give it a second chance later; time to move on to the next tool. Fast but unproductive so far.
The second tool was jenni.ai. And wow ... I just wrote "how students evaluate information in the context of sports and fitness", and the tool started writing content, with citations and references in APA format that I could accept or reject.
People are increasingly relying on the internet for information, including sports and fitness-related content. This trend has led to an abundance of information sources, ranging from official sports websites and blogs to social media platforms and online forums (Deng et al., 2022). With such a vast array of sources available, it is crucial for students to develop the skills to evaluate the reliability and credibility of the information they come across. One crucial aspect of evaluating sports and fitness-related information is to consider the credentials of the author or website. Students should look for content written by professionals in the field, such as certified trainers, sports scientists, or reputable organizations like the American College of Sports Medicine. Additionally, checking for citations and references can also help determine the reliability of the information. Another important factor to consider is the date of publication, as new developments in sports science and fitness research can quickly make older information outdated. Furthermore, students should be mindful of potential biases in the information they consume. Whether it's sponsored content from a fitness brand or a personal blog promoting a specific training method, it's important to critically analyze the motives behind the information. Encouraging students to seek out a variety of sources and compare information can also help them develop a more well-rounded understanding of sports and fitness topics. Teaching students how to evaluate sports and fitness-related information will not only equip them with valuable critical thinking skills, but it will also help them make informed decisions about their own health and well-being. Additionally, educators should also emphasize the importance of cross-referencing information with reputable sources and encourage students to question the validity of claims or statistics that seem too good to be true. 
This literature review highlights the significance of evaluating information in the context of sports and fitness.
It found only one reference, but there is almost no research in this area, so that is OK. And it wrote beautiful text.
I do know that students use YouTube information in the context of sports and fitness.
So I asked jenni.ai again, this time for a subset of my final literature review, telling it: "I am writing a systematic literature review about how students evaluate information on YouTube."
Jenni found some important references: (Metzger et al., 2003), (Walraven et al., 2009), (Jean et al., 2011), (Maraza-Quispe et al., 2019).
I could open the chat, create section headings, and start writing my own text together with the Jenni-generated text.
When I asked jenni.ai to summarize the work of Metzger et al. (2003), it wrote:
Miriam J. Metzger's 2007 research, as encapsulated in the work titled "Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research," offers a comprehensive examination of the assessment of online information credibility. Metzger emphasizes the challenges that the vast variety of information on the internet presents for users trying to evaluate the reliability of this information. Metzger's research includes a critical literature review and analysis of existing models of credibility assessment, pinpointing the need for improved online credibility education. She acknowledges that the Web complicates traditional notions of credibility and necessitates more sophisticated approaches for users to effectively discern credible from non-credible information. The author concludes with proposals for developing future research and theory on the evaluation of internet-based information. Metzger suggests ideas for online credibility education practices, aiming to better equip users to locate reliable information in the digital age. Overall, the research underscores the importance of understanding how credibility is construed and judged on the Web, proposing avenues for enhancing users' evaluation skills in the face of ever-expanding online content (Metzger, 2007).
Not bad! I could then cite in my own paper:
“importance of understanding how credibility is construed and judged on the Web” in the motivation section, and
“She acknowledges that the Web complicates traditional notions of credibility and necessitates more sophisticated approaches for users to effectively discern credible from non-credible information.” in the literature review.
I also asked Jenni whether there is any research about evaluating online information in the context of sports or fitness.
And then she answered:
Based on the sources provided, there are insights related to online education in the context of sports and fitness, which could pertain to evaluating online information in these fields:

1. **"Foundations of Sport and Exercise Psychology, Seventh Edition With Web Study Guide"** (Exercise, 2021) (Jesselucas, 2021) discusses connecting research to practice in sports psychology. This resource emphasizes the critical role of evaluation and critical thinking in applying material learned from both online and traditional sources.

2. **"EXPLORING THE UTILIZATION OF WEBCAM VIDEOS TO ASSESS EXERCISE TRAINING AND FITNESS ASSESSMENT SKILLS OF STUDENTS IN AN ONLINE GRADUATE EXERCISE SCIENCE COURSE: A CASE STUDY"** (McGlumphy, 2008) addresses online education's challenges in assessing hands-on skills, including psychomotor skills and fitness assessment techniques, implying that evaluating online information and methods is crucial in fitness education.

3. **"Research on the Physical Training under the Internet Environment"** (Chuan, 2014) describes web-based physical education and its role in helping educators and students access sports information and learn sports skills online. The trustworthiness and credibility of such online information would naturally be a critical consideration.

4. **"Main Content and Implementation Path of College Sports Informatization Construction"** (Yu & Pan, 2020) offers a detailed analysis of how information technology influences physical education, suggesting that evaluation of online information is important for optimizing sports training, monitoring physical fitness, and other activities within college sports informatization.

While these sources don't address the evaluation of online information credibility in the context of sports and fitness directly, they do touch upon the broader themes of online education, information access, and technology use in these fields, which implicitly involves the critical assessment of such information.