Hey there,

So I was definitely late to the whole AI thing. While everyone was posting screenshots of ChatGPT doing amazing stuff, I was sitting there thinking "okay, but how does this actually help me teach data?"

Plus, a lot of us educators are nervous and cautious about AI. If tools can write code and analyze data, will students still learn the fundamentals? How can we make sure they do?

But earlier this year, I finally decided to stop being skeptical and actually try it.

What Changed My Mind

The breakthrough wasn't AI writing better code or queries. It was realizing it could simulate the messiness that happens at work - the part textbook examples rarely touch.

For example, how do you teach students to handle a manager who says "I need customer insights" and then gets frustrated when you ask what kind? I used to try role-playing in class, but it felt forced and awkward.

AI, though, can actually play that confused manager. I can tell it to be a marketing director who thinks "engagement" just means social media likes, then watch my students figure out the right follow-up questions. Or I can have it play a VP who says "our data shows we're losing customers" without any details about which customers or when. (There's a sketch of how you might script that manager below.) Suddenly my students are practicing skills that matter just as much as knowing the syntax or the latest features.

The Reality Check

I got excited and started using AI for everything. But I did hit some walls pretty fast.

Some stuff worked great - like generating fresh practice problems instead of recycling the same tired datasets. But then I started wondering: should students learn to write code independently first? What happens when they can't tell if AI gave them the right answer? Because AI does make mistakes - there's literally a warning about it in every chat conversation.

How do students develop the critical thinking to spot when a query looks right but actually isn't? Or when an analysis seems plausible but misses something fundamental? Without that solid foundation, how can they use these powerful tools effectively and responsibly?

What I'm Working On

This whole experience made me realize other educators and trainers were probably hitting the same questions. So I started writing down what worked and what didn't. That became a newsletter called Teach Data with AI. Some of the recent issues are previewed at the bottom of this email.
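If you want to go beyond typing into a chat window, the manager role-play is easy to script. Here's a minimal sketch assuming the OpenAI Python SDK - the persona wording, model name, and loop are my own illustration, not a setup from any particular lesson:

```python
# A role-play drill: the AI plays a vague marketing director,
# and the student practices asking clarifying questions.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative persona - tune this to whatever stakeholder you want to simulate.
PERSONA = (
    "You are a marketing director. You open by asking for 'customer insights' "
    "and you think 'engagement' just means social media likes. Be a little "
    "impatient, and only reveal concrete details (which customers, what time "
    "period, what decision you're facing) after good follow-up questions."
)

history = [{"role": "system", "content": PERSONA}]

while True:
    question = input("Student: ").strip()
    if not question:
        break  # an empty line ends the session
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption - any chat model works here
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"Manager: {answer}\n")
```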
Here's what I mean by specific prompts. Instead of "create a practice problem," I spell out the context: the industry, the messy column names, the missing values, the vaguely worded stakeholder request. The specificity matters because students get practice with real problems, not textbook perfection.

It's still a work in progress. Or an experiment in progress. It's a moving target: I keep experimenting with the prompts and tools, and the tools keep changing as well.

One thing's for sure - I'm not trying to replace teaching with AI. I'm using it to handle the tedious setup so I can focus on the thinking parts.

What's your experience so far? Are you experimenting with AI too? I'd love to hear what's working (or not working) for you.

Talk soon,

P.S. Most of my lessons are still completely manual. Some things still need a human who remembers being confused. Here is an example issue on generating realistic data exercises.
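And if you'd like a starting point for that kind of specific prompt, here's one way to parameterize it - a tiny Python sketch where the function name, parameters, and wording are all illustrative, not pulled from any issue:

```python
def practice_prompt(industry: str, skill: str, quirks: list[str]) -> str:
    """Build a specific practice-problem prompt instead of a generic one.

    Everything here is an illustrative template - swap in your own
    industry, skill, and data quirks.
    """
    messiness = "; ".join(quirks)
    return (
        f"Generate a small {industry} dataset (CSV, about 30 rows) for a "
        f"student practicing {skill}. Make it realistically messy: "
        f"{messiness}. Then write the kind of vague request a stakeholder "
        f"would actually send, without naming the exact columns they need."
    )

# Example usage:
print(practice_prompt(
    industry="retail",
    skill="SQL aggregation",
    quirks=[
        "inconsistent date formats",
        "duplicate customer IDs",
        "a few NULL order totals",
    ],
))
```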
Join 5K+ subscribers who receive weekly, bite-sized, practical and actionable lessons for the data professional. | Free video tutorials at youtube.com/sqlbelle | Teaching data? Incorporate AI - tips and prompts at https://teachdatawithai.substack.com/
You know the signs: glazed eyes during your presentation, people checking phones while you explain a process, or the dreaded interruption - “Sorry, but why does this matter to me?” It happens because we lead with how things work instead of what breaks when they don’t. We assume people want to understand the process when they really want to understand the consequences.

The Gap Between Data Professionals and Everyone Else

Here’s what usually happens: you spend time crafting a clear technical...
You know that feeling when you show someone your analysis and… nothing happens? The numbers are solid. Your work is spot-on. Everything makes perfect sense. But somehow your ideas just sit there. Nobody's acting on them. Most of the time, this is the reason: data doesn’t convince people - understanding how people think does.

The Situation

The analysts whose recommendations actually get implemented aren’t always the ones with the fanciest techniques or the cleanest data. They’re the ones who...
Hello there,

Quick question: Have you ever designed a metric that created exactly the opposite of the behavior you wanted? If you’re nodding, you’ve discovered Goodhart's Law in action: "When a measure becomes a target, it ceases to be a good measure." This points to a fundamental truth about human nature: as soon as people know a number is being watched or used to make decisions, they start optimizing for that number - often at the expense of what it was meant to represent. Here’s why this matters...