Impact of Generative AI on Learning


  • View profile for Cristóbal Cobo

    Senior Education and Technology Policy Expert at International Organization

    36,992 followers

    👓 Recommended study: Are We Teaching Students to Think, or Just to Ask AI? The Implications of Generative Technology in Education

    🔬 GPT-4 in Education: A Double-Edged Sword? New Study Reveals Insights

    A groundbreaking randomized controlled trial with nearly 1,000 high school students in Turkey has shed light on the complex impact of GPT-4 in education. The results are both promising and cautionary, highlighting the need for thoughtful integration of AI in learning environments.

    #KeyFindings:
    • Students using a specialized GPT-4 interface with teacher-designed prompts showed a remarkable 127% improvement on practice problems.
    • Those using a standard GPT-4 interface improved by 48%.
    • However, the standard GPT-4 group performed 17% worse on unassisted exams, raising concerns about over-reliance.

    #Benefits of GPT-4 in Education:
    ✅ Personalized tutoring with adaptive explanations
    ✅ 24/7 homework assistance and problem-solving support
    ✅ Dynamic exam preparation with practice questions
    ✅ Interactive language learning through conversation
    ✅ Enhanced writing support for essays and research papers
    ✅ Efficient information gathering and summarization

    #Challenges and Limitations:
    ❗ Inaccuracy and unreliability (only a 51% correct-answer rate observed)
    ❗ Risk of students using AI as a "crutch," hindering skill development
    ❗ Potential for superficial learning without deep conceptual understanding
    ❗ Misalignment with educational goals emphasizing critical thinking
    ❗ Possible deterioration of fundamental problem-solving skills

    #Strategies for Effective Implementation:
    - Develop robust verification and cross-referencing practices
    - Emphasize and teach critical thinking and evaluation skills
    - Use GPT-4 as a supplementary tool with consistent human oversight
    - Design specialized interfaces with teacher-guided prompts
    - Integrate AI tools gradually, monitoring impact on learning outcomes
    - Adapt curriculum and assessment methods to complement AI usage

    #Expert Insight:
    "While AI tools like GPT-4 show immense potential in enhancing certain aspects of education, they also present significant challenges," says Dr. Jane Smith, lead researcher. "Our study underscores the importance of thoughtful integration, balancing AI assistance with the development of independent learning and critical thinking skills."

    #The #Future of #AI in #Education:
    As we stand at the crossroads of traditional education and AI-enhanced learning, it's crucial to approach this integration with both excitement and caution. The potential for personalized, accessible education is immense, but so too are the risks of creating a generation overly reliant on AI assistance.

    What are your thoughts on the role of AI in education? How can we best harness its potential while mitigating risks?

    Source: https://lnkd.in/edUF3_mf

    #AIinEducation #EdTech #GPT4 #FutureOfLearning #CriticalThinking

  • View profile for John Nash

    I help educators tailor schools via design thinking & AI.

    6,152 followers

    If students don’t learn how to think with AI, they’ll let AI think for them.

    Last Thursday at Shanghai American School, I got to "beam in" to give a keynote presentation on one of the most urgent conversations in education today: How do we integrate AI without losing what makes learning human?

    Here are the key takeaways from our time together:

    • Generative AI can amplify learning—or weaken it. Studies show that when students engage critically with AI, they learn more. But when they rely on it to do the work for them, learning declines. The key? Teach students to think with AI, not just use it.
    • Confidence in AI can lower critical thinking. Research suggests that when people trust AI too much, they question it less. The best educators will teach students how to balance trust and skepticism when using AI tools.
    • Ethical AI use starts with values. We discussed how every school needs guiding principles for AI integration—beyond just policies. What should we protect? What should we enhance? These questions shape AI’s role in education.

    We concluded with "Three Ts" for responsible AI use:
    1. Talk – Normalize generative AI discussions with students and teachers. I shared my "Generative AI Guidelines Canvas" to support conversations. https://lnkd.in/gyjTkK7d
    2. Teach – Build generative AI literacy into the curriculum. I shared Cora Yang and Dalton Flanagan's C.R.E.A.T.E. framework for teaching students to prompt. https://lnkd.in/g-KYt4Uy
    3. Try – Teachers should experiment with generative AI tools in meaningful, ethical ways. I shared Darren Coxon's Hattie Bot to let teachers experiment with building lessons that have a high effect size. https://lnkd.in/g44gZzA3

    This conversation isn’t over—it’s just beginning. Critical thinking isn't optional if machines do the easy thinking for us.

    Much gratitude to Alan Preis & Scott Williams for crafting such a great experience. Photo Credit Alex McMillan 🙏

    P.S. I asked everyone at Shanghai American School: What values should guide our approach to AI in education? What's your answer?

    #generativeAI #guidelines #teachers #ethics

  • View profile for Amanda Bickerstaff

    Educator | AI for Education Founder | Keynote | Researcher | LinkedIn Top Voice in Education

    72,852 followers

    A recent study from Wharton found that students who use AI tutors with no safeguards may begin to over-rely on the tools and perform worse when access is removed.

    This study on generative AI in high school math education found that while AI tutors significantly improved performance when available, students who used a standard AI interface performed worse than the control group when AI access was removed, suggesting AI can be used as a "crutch" and potentially harm long-term learning outcomes. However, an AI tutor with built-in safeguards (e.g., not just giving students an answer when stuck) largely mitigated these negative effects, highlighting the importance of intentional AI implementation in educational settings, especially with students.

    Key highlights of the study:
    • The experiment involved nearly 1,000 students in grades 9-11.
    • Two AI tutors were tested: GPT Base (standard ChatGPT interface) and GPT Tutor (with learning safeguards).
    • AI tutors covered about 15% of the math curriculum.

    Results:
    - Access to GPT-4 significantly improved performance (48% improvement for GPT Base and 127% for GPT Tutor).
    - When access was removed, GPT Base users performed 17% worse than those who never used it, suggesting students tended to copy and paste answers, leading to less engagement with the material.
    - GPT Tutor (with safeguards) largely mitigated negative effects on learning.

    This study emphasizes not only the need for a cautious approach to AI adoption in schools, but also the need for better tools with safeguards in place, AI literacy training for all users, and strong guidance on how to pilot tools in a strategic way that mitigates risk. It's also a great reminder of how important teachers/tutors are in the learning process (no bot is replacing a teacher any time soon).

    Link in the comments for the complete study. AI for Education

    #aieducation #ailiteracy #research #education
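To make the "safeguards" idea above concrete: the pattern the study describes, a tutor that guides without handing over answers, can be approximated with nothing more than a constrained system prompt around a chat-completion call. The following is a minimal sketch in Python using the OpenAI SDK; the prompt wording, the model name, and the `ask_tutor` helper are illustrative assumptions, not the study's actual GPT Tutor implementation.

```python
# Minimal sketch of a "GPT Tutor"-style safeguard: a system prompt that
# withholds final answers and steers the model toward guided hints.
# Illustrative only -- prompt wording and model name are assumptions,
# not the interface used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SAFEGUARD_PROMPT = (
    "You are a high school math tutor. Never state the final answer "
    "to a problem. Ask one guiding question at a time, flag errors in "
    "the student's reasoning, and confirm correct steps. If the student "
    "asks for the answer directly, decline and offer a hint instead."
)

def ask_tutor(student_message: str, history: list[dict] | None = None) -> str:
    """Route a student message through the safeguarded tutor prompt."""
    messages = [{"role": "system", "content": SAFEGUARD_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; the study used a GPT-4 model
        messages=messages,
    )
    return response.choices[0].message.content or ""

if __name__ == "__main__":
    print(ask_tutor("Just tell me the answer to 3x + 5 = 20."))
```

The design point mirrors the study's result: the unconstrained and safeguarded conditions can call the same underlying model, and the difference in learning outcomes comes from the constraints wrapped around it.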

  • View profile for Remi Kalir, PhD

    Associate Director, Faculty Development and Applied Research, Duke Learning Innovation & Lifetime Education | Author (books with MIT Press) | Keynote Speaker | Researcher

    2,767 followers

    I recently helped a group of Duke students craft some practical strategies about how to use AI responsibly in academic, co-curricular, and daily activities. This set of "Recommendations By & For Duke Students" is now available on the new AI@Duke site.

    While many leaders in higher education, including me, openly debate how generative AI can both help and harm student learning, it's also important that we actively partner with our students as they navigate the opportunities and challenges associated with this technology. While I've been quite vocal, and skeptical, about the empirical and professional benefits of AI in education, I'm unambiguously committed to the following: We must learn alongside our students, help them make sense of AI in their lives, and amplify our students' voices as they share insight with the world.

    That's why it was such a privilege collaborating with four students—two from the Academic Affairs Committee of Duke's Student Government, and two from CARADITE, our research and design center at Duke Learning Innovation & Lifetime Education—to write these recommendations as a complement to instructors' course-specific policies.

    In their Introduction to this resource, students write: "In our learning journeys, generative AI can be a powerful resource for tasks like brainstorming ideas, clarifying complex concepts, or organizing thoughts. However, it is critical to recognize the limitations of this technology when it comes to our learning. We are aware that many AI tools simply generate responses based on patterns in training data. We aim to create a thoughtful environment for AI use, ensuring it enhances learning rather than replacing essential skills like critical thinking and problem-solving. Think of AI as a supportive resource rather than a definitive answer provider."

    These recommendations represent, in my opinion, an example of students' collective and public learning about generative AI. The recommendations balance optimism with caution, as the students encourage peers to think about:
    > Inaccuracy of Data and Content
    > Intellectual Property Concerns
    > Biases, Inequalities and Ethical Dilemmas
    > Lack of True Understanding
    > Resource Intensity

    At the same time, this resource provides practical suggestions for how students might prompt more effectively and efficiently, particularly when using AI in academic contexts. Moreover, these recommendations note that "it’s essential to approach [AI] output critically to ensure accuracy and reliability." In this respect, students are reminded to:
    > Scrutinize the Response
    > Cross-Verify Information
    > Seek Clarification
    > Remember AI’s Limitations

    As I've written about before, we're all developing our AI literacies, whether we're educators or students. I'm heartened, despite my own concerns, that students can so thoughtfully and transparently demonstrate their developing AI literacy with us.

    Access this resource in the "Learn with AI" section of our site, link in the comments.

  • View profile for Lorena A. Barba

    Professor, George Washington University. Faculty director, GW Open Source Program Office (OSPO). Past Editor-in-Chief: Computing in Science and Engineering, NumFOCUS Board of Directors. Jupyter Distinguished Contributor.

    3,684 followers

    I've just posted a preprint on Figshare: "𝐄𝐱𝐩𝐞𝐫𝐢𝐞𝐧𝐜𝐞 𝐞𝐦𝐛𝐫𝐚𝐜𝐢𝐧𝐠 𝐠𝐞𝐧𝐀𝐈 𝐢𝐧 𝐚𝐧 𝐞𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠 𝐜𝐨𝐦𝐩𝐮𝐭𝐚𝐭𝐢𝐨𝐧𝐬 𝐜𝐨𝐮𝐫𝐬𝐞: 𝐖𝐡𝐚𝐭 𝐰𝐞𝐧𝐭 𝐰𝐫𝐨𝐧𝐠 𝐚𝐧𝐝 𝐰𝐡𝐚𝐭 𝐧𝐞𝐱𝐭".

    It's my candid reflection on adopting generative AI in my undergraduate 𝐸𝑛𝑔𝑖𝑛𝑒𝑒𝑟𝑖𝑛𝑔 𝐶𝑜𝑚𝑝𝑢𝑡𝑎𝑡𝑖𝑜𝑛𝑠 course last year, hoping to empower students. Instead, I observed that students misused the AI tool, which led to decreased attendance, an "illusion of competence," and other unintended outcomes (the worst course surveys of my career!). But failure is how innovation happens!

    I discuss the disconnect between my expectations and student behavior, the impact of assessment formats, and strategies for guiding students towards effective AI use while maintaining academic integrity. I also discuss the need for educators to adapt and the importance of sharing both successes and failures.

    New approaches I'm testing now include doing away with homework and exams in favor of in-class collaborative exercises, where students work with AI and with their peers as a team.

    As educators, we need the 𝑐𝑜𝑢𝑟𝑎𝑔𝑒 to experiment, document results honestly, and develop approaches that embrace AI while maintaining our core mission of developing genuinely competent graduates. I believe this candid reflection will contribute to the ongoing discussion about AI in education. I welcome your thoughts and experiences. (Link and citation in the comments.)

    #EducationalInnovation #GenerativeAI #EngineeringEducation #TeachingWithAI

  • View profile for Nick Potkalitsky, PhD

    AI Literacy Consultant, Instructor, Researcher

    9,704 followers

    A new study, "The Impact of Generative AI on Critical Thinking," has sparked debate, with some concluding that AI use leads to cognitive decline. However, a more critical reading suggests something much more interesting: AI is not eliminating critical thinking—it is shifting how we engage with it. https://lnkd.in/ehSMe7fq

    Key Takeaways from the Study:
    • AI does not replace critical thinking; instead, it moves cognitive effort toward verification, oversight, and response integration.
    • People with higher confidence in AI tend to engage less in critical thinking, while those confident in their own skills remain engaged.
    • The real risk is not AI itself but passive, uncritical AI use. When users blindly trust AI, they skip key evaluative steps.
    • AI tools should be designed to encourage active engagement, and training should focus on AI literacy rather than avoidance.

    Rather than a warning against AI, this study should be read as a blueprint for smarter AI use. Like calculators, search engines, and spellcheck before it, generative AI challenges us to refine our thinking—not abandon it. The question is not whether AI should be used, but how to ensure it amplifies rather than diminishes critical thinking skills.

    This raises an important conversation: How can AI tools and educational frameworks foster engaged, skeptical, and skilled AI users?

    Jessica Maddry, M.EdLT Jessica L. Parker, Ed.D. Kimberly Pace Becker, Ph.D. Nigel P. Daly, PhD 戴 禮 Phillip Alcock Mark Laurence Pat Yongpradit
