Banning AI While Secretly Using It? That’s the Real Academic Dishonesty
I don’t ban AI. I build with it—and teach students to do the same. That’s what real education looks like now.
Let’s start with the elephant in the classroom: AI isn’t the problem. Hypocrisy is.
A recent New York Times article spotlighted students calling out professors for secretly using ChatGPT while banning it in their syllabi. One student noticed AI-generated lecture slides riddled with errors and demanded her tuition back. Others uncovered grading rubrics and feedback copied and pasted straight from bots. The reaction? Outrage and distrust, and rightfully so.
Students aren’t asking for perfection. They’re asking for honesty, transparency, and standards that apply to everyone, not just them.
And that’s exactly where I decided to do things differently.
I Teach AI in Real Time Because It’s Moving in Real Time
AI isn’t just something we talk about in theory. In my classroom, it’s a living, breathing part of the learning experience. And if it changes tomorrow? We change with it.
Every semester, I don’t just update the slides—I overhaul how I teach AI based on what’s happening now. If there’s a new model, I show it. If there’s a better prompt strategy, we test it. If a tool drops that could help them think or build better, it goes straight into the class discussion.
Here’s what that looks like:
I demo NotebookLM to show how structured data can inform structured thinking—and how to build AI workflows that don’t just summarize, but synthesize.
We compare models like Claude, ChatGPT, and Perplexity not to crown a winner but to explore different strengths, biases, and use cases. If students only use one tool, they’re only seeing one lens.
I teach effective image prompting so they can stop generating cringey, over-styled nonsense and start building prompts with creative control. We don’t do “make a cat wearing a crown” for fun. We build visual campaigns with intention.
We explore frontier tools like Sora (text-to-video), deep research tools, and emerging integrations across platforms—because that’s where the marketing world is heading.
I discuss AI hallucinations, bias, data integrity, and source transparency because if people don’t understand the limits of these tools, they’ll fall for their output.
And here's the key: I don’t gatekeep any of it.
Students get full access. I encourage experimentation. But I also demand rigor. That means:
Prompts that show critical thinking, not just clever wording.
Outputs that are vetted, revised, and reworked, not just pasted and turned in.
A mindset that uses AI like a creative partner, not a shortcut button.
If a student uses AI well, it shows. If they don’t, that shows even louder.
So yes, I teach AI. But more importantly, I teach them how to fight smarter with it. Because that’s what their future will require.
I Use AI to Grade. And I Tell My Students.
From the first day of class, I’m crystal clear: AI is part of the grading process, for them and for me.
But I don’t use it to cut corners. I use it to raise the bar. I built a custom GPT that doesn’t just assess grammar or spit out comments; it evaluates work based on four key pillars: clarity, revision, originality, and real-world relevance.
The GPT reviews submissions and offers detailed feedback and a draft grade. Then I step in, refine the feedback, adjust the grade, and ensure the critique is accurate, fair, and actionable.
It’s not auto-grading. It’s co-grading, which allows me to give richer, more specific feedback than any educator could alone.
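For readers who want the mechanics, here is a minimal sketch of what a co-grading loop like that can look like, assuming the OpenAI Python SDK. The model name and rubric prompt are stand-ins I wrote for this article, not the instructions behind my actual grading GPT.

```python
# A minimal co-grading sketch, assuming the OpenAI Python SDK.
# The model name and rubric prompt are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC_PROMPT = (
    "You are a grading assistant for a marketing course. Evaluate the "
    "submission on four pillars: clarity, revision, originality, and "
    "real-world relevance. Give specific, actionable feedback for each "
    "pillar, then propose a draft grade. The instructor makes the final call."
)

def draft_feedback(submission: str) -> str:
    """Return pillar-by-pillar feedback and a draft grade for one submission."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": RUBRIC_PROMPT},
            {"role": "user", "content": submission},
        ],
    )
    return response.choices[0].message.content

# Co-grading, not auto-grading: the draft goes to the instructor, who refines
# the feedback and adjusts the grade before anything reaches the student.
if __name__ == "__main__":
    with open("student_submission.txt", encoding="utf-8") as f:
        print(draft_feedback(f.read()))
```

The design choice that matters is in the last comment: the model’s output is a draft, and nothing reaches a student until I’ve reviewed and edited it.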
I Shared My GPT With Students, And the Smart Ones Used It
To level the playing field, I created the Marketing Reality Check GPT—a tool trained on the principles of the Marketing Accountability Council. It’s designed to flag weak strategy, shallow storytelling, and misaligned creative.
I gave students access. No charge. No gimmicks. And no mercy for lazy thinking.
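What sits behind a tool like that is less mysterious than it sounds: a set of written instructions that encode the standards you want enforced. Here is a hypothetical sketch of the shape those instructions can take, kept as a Python constant so it could also be reused programmatically; it is not the actual instruction set behind the Marketing Reality Check GPT.

```python
# A hypothetical illustration of "reality check" instructions for a custom GPT.
# These are NOT the actual Marketing Reality Check GPT instructions; they only
# show the shape such instructions can take.
REALITY_CHECK_INSTRUCTIONS = """
You are a marketing reality check, grounded in the principles of the
Marketing Accountability Council.

When a student pastes a draft campaign:
1. Flag weak strategy: vague objectives, no defined audience, no positioning.
2. Flag shallow storytelling: claims without proof, no narrative arc.
3. Flag misaligned creative: content that does not serve the stated goal.

Quote the draft when you critique it. Do not rewrite the work for the
student; explain what is weak and why, and let them revise it themselves.
"""
```

Paste instructions like these into a custom GPT builder, or send them as a system prompt, and you have a critic, not a ghostwriter.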
Some used it to elevate their work. They uploaded rough drafts, let the GPT punch holes, and returned stronger. They got better, and it showed in both their projects and their grades.
Others ignored it.
Or worse, they used ChatGPT like a vending machine and submitted generic work, hoping for a pass.
My final assignment, Act Like a Marketer, was designed for students to demonstrate how they applied the skills and concepts taught throughout the course. Some students rose to the challenge and showed precise, thoughtful application. Others turned in work that looked as if they hadn’t spent 15 weeks in a class built around structured, strategic lessons.
For those who missed the mark, the AI feedback wasn’t vague. It wasn’t sugar-coated. It laid out, line by line, what was missing, what was expected, and why their submission didn’t hold up. I didn't rely on generic comments. I trained the GPT to reflect the actual grading rubric and standards we used all semester, and then I refined the responses with my observations.
The result? Detailed, specific, no-spin critiques.
If they read that feedback and sat with it, it might be the biggest wake-up call of their academic lives. Unlike most assignments, this one didn’t just say, “You could’ve done better.” It told them how and why they fell short.
And honestly? That’s the best preparation they could ask for.
In the real world, there’s no partial credit for effort. No curve. No redo because “you were busy.” There’s just accountability. In a job, turning in work that ignores the assignment, the strategy, or the point? That’s how you get fired.
I’d rather give them that jolt now, when the stakes are grades, than let them learn it the hard way on their first job.
Here is an example of the feedback I provided to a student after they questioned my AI grading and called it unfair.
***
I want to be clear and final about this: your project did not meet the expectations of this assignment. This was not about proving you can make content; you clearly can. This was about proving that you understood and could apply the strategy, tools, and frameworks we spent the semester learning.
And you didn’t.
Even if your posts did contain some of those elements, it’s not my job to dig through your content to find them. Your final submission was your chance to present a cohesive, strategic, and reflective campaign that told me what you did, why you did it, and how it connected to the course content. You didn’t do that.
This Is What You Were Supposed to Do — And Didn’t:
According to the assignment, this was your checklist:
1. Define clear goals using OKRs
Required: Specific objectives + measurable key results
Missing entirely from your work

2. Craft a strategic campaign
Required: Target audience, messaging strategy, positioning, frameworks (StoryArch, Nesting Doll, SMP)
None of these were applied or explained

3. Create 3–5 pieces of content with strategy behind channel and tone
You created content, yes
But you didn’t justify your platform choices, tone, or strategic intent

4. Track performance with real analytics and optimize
Required: Measurement, iteration, optimization
You listed surface-level observations, not marketing metrics or real analysis

5. Provide a structured summary and deep reflection
Required: What worked, what didn’t, what you learned, what you’d do differently
Your reflection was minimal and lacked depth; there was no strategic insight

6. Deliver a professional, organized report
Required: Up to 10 slides, polished, labeled, with commentary and structure
Your submission was disorganized and hard to follow; there was no clear narrative arc or campaign logic
This project wasn’t about content creation. It was about demonstrating strategic marketing competency.
You missed the class session where I explicitly outlined all of this, and unfortunately, your final product shows no evidence of your engagement with the course's learning outcomes. As I said, if you had done this project on day one of the class, it would’ve looked the same, defeating the assignment's entire purpose.
Your work was completely inadequate compared to that of your peers, many of whom created structured, thoughtful, insight-driven campaigns. Considering what was missing, you received a generous grade.
This matter is closed.
***
Let’s be clear: generating that level of feedback manually at scale would be insane. It would be inefficient, exhausting, and impossible to maintain. AI didn’t replace me; it scaled my intent.
I Encourage Students to Use AI—But I Expect a Fight
Yes, students can use AI. But I expect them to battle it.
I want to see iterative prompts, synthesis, creative direction, and human judgment layered on machine assistance.
But what I often get is something halfway done, written in AI’s recognizable tone, with no strategic backbone. I can sniff it out instantly.
When that happens, I don’t scold; I explain. I show them what didn’t work. I invite them to do better. And the ones who rise to the challenge? They come out thinking sharper and working smarter.
I Took This Conversation to the Center for Academic Excellence
I’ve presented this approach at workshops with my School’s Center for Academic Excellence. I’ve met with other faculty, instructional designers, and academic technologists to unpack what AI in the classroom means.
It’s not about gimmicks. It’s not about speed. It’s about integrity.
Faculty from multiple departments shared their curiosity and concerns. Some still cling to the blue books, while others, like me, are experimenting. We talked about co-grading, transparency, and student trust. They asked thoughtful questions. We plan to host a workshop so more educators can explore this approach in detail.
This isn’t just about grading anymore. It’s about saving the credibility of higher education.
Is College Even Worth It Anymore?
That same New York Times article addresses a deeper issue: students questioning the entire point of college. And honestly, they’re not wrong.
If all you’re offering is outdated textbooks, forced essays, and hidden AI use, then no, it’s not worth the money.
But if you’re teaching students how to think, how to build, how to critique, and how to lead? Now you’re delivering value.
Marketing isn’t an ivory-tower discipline. It’s a practice, a business, a fast-moving, high-stakes ecosystem. Students need to learn to ship, adapt, revise, and resonate. And if AI is part of their future (and it is), we’d better prepare them to engage with it like professionals, not passive consumers.
This Isn’t About Efficiency. It’s About Standards.
I didn’t build my GPTs to make grading easier. I built them to make grading better.
Better for students, who get clear feedback they can grow from.
Better for educators, who can uphold high standards without burnout.
Better for academia, which desperately needs to catch up with the world that students will graduate into.
AI didn’t break the grading system.
It exposed how broken it already was.
We Can’t Pretend Anymore
Professors hiding behind ChatGPT while banning it in class? That game is up. Students read the room—and the rubrics—and they know when something doesn’t add up.
I’m not here to protect an outdated system.
I’m here to help build a new one.
We act like marketers in my class.
We test. We revise. We challenge.
No shortcuts. No excuses.
Compass Disclosure: How This Article Was Built
This article didn’t come from nowhere, and it wasn’t ghostwritten in a ChatGPT fever dream.
It was born from months of teaching, testing, grading, reflecting, and listening. I wrote it after multiple conversations with students, administrators, and colleagues who—like me—are trying to make sense of AI’s role in education.
The spark came from a May 2025 New York Times article that detailed the backlash students are having toward professors quietly using AI. That piece made one thing clear: the issue isn’t AI. The issue is secrecy, hypocrisy, and inconsistency. I’ve lived the other side of that story—one where AI use is transparent, intentional, and pedagogically sound.
Here’s what informed this piece:
Classroom experience: Real feedback, real grading conversations, and real student reactions to AI use in my marketing courses.
Custom GPT tools: I designed and deployed grading and strategy GPTs tailored to the Marketing Accountability Council values. These tools informed both how I assessed student work and how I wrote this article.
Faculty collaboration: I’ve participated in ongoing workshops with the Center for Academic Excellence, where I shared my AI grading practices and listened to peers navigating similar issues across disciplines.
Student pushback: Not just complaints, but thoughtful challenges. I welcomed those conversations, and the feedback shaped my approach. The most critical voices made this piece stronger.
Academic values: This isn’t marketing fluff. It’s a pedagogical argument grounded in real experience and backed by deliberate design.
AI helped organize this, but the stories, strategy, and stance are all mine. If anything, AI helped me write more honestly and efficiently, but never instead of me.
If you have questions about my methods, tools, or sources, ask. Candor, not spin, is part of the compass, too.
— Jay Mandel
❤️ If This Hit a Nerve, Say Thanks
Honestly? I didn’t even think to include a tip jar at first.
But then I realized… I tip for lattes. I tip on apps. I tip $5 for someone handing me a paper menu.
So if this hit a nerve—if it gave you clarity, inspiration, or something you forwarded to your team—consider tipping the person who wrote it.
I’m not backed by a big brand. I’m a strategist and writer with a family to support and a deep belief that marketing should be more ethical and human.
👉 Leave a Tip
👉 Become a Paid Subscriber
You don’t owe me anything. But if you want to support work like this? I’d be honored.