
πŸ› A Bot Aced a College Course. The Response Was a Cease-and-Desist.

What this means for educators + more

Welcome to Playground Post, a bi-weekly newsletter that keeps education innovators ahead of what's next.

This week's reality check: The federal office responsible for guiding schools on education technology no longer exists. Into that vacuum, a 22-year-old built an AI tool that completes entire college courses autonomously, and 124,000 people visited the website in three days. Meanwhile, three-quarters of students who use AI say they're stressed about being falsely accused of cheating by detection tools that don't work reliably.

Have a bold idea for education? Put it to the test.

Tiny Fellowship is a 10-week program for builders in education who want to pilot their idea with real people.

You'll get coaching, a $5,000 stipend, and a proven method to test what works.

Applications are rolling; the sooner you apply, the better.

Want to learn more? Feel free to reach out to [email protected].

πŸ‘‰ Apply now

Data Gem

About 3 in 10 U.S. teens now use AI chatbots daily, and just over half have used them for schoolwork, according to a Pew Research Center survey of 1,458 teens. Twelve percent say they've used a chatbot for emotional support.

Teachers Need AI Training. The Office That Should Provide It No Longer Exists.

Republicans and Democrats don't agree on much in education. But at a recent House subcommittee hearing, they agreed on this: teachers need professional development on AI, and the federal government should help.

The problem is the agency best positioned to do that has been gutted.

The administration shuttered the U.S. Department of Education's Office of Educational Technology last year, eliminating the primary federal source of guidance on technology in schools.

"Historically, the department has helped provide critical resources to assist states, schools and districts in navigating technological challenges," said Ranking Member Suzanne Bonamici, D-Ore. "What this administration has done to the department has diminished or even obliterated its capacity to provide these resources."

Subcommittee Chairman Kevin Kiley, R-Calif., agreed that professional development matters, particularly around AI's risks to critical thinking and student data privacy. 

But he acknowledged the core difficulty: AI is moving at a "dizzying pace," and guidance from experts today could be outdated by next week.

The training gap is wide: teachers worry they're not getting enough support to work effectively with the AI tools their students are already using.

West Virginia's state superintendent Michele Blatt offered one bright spot: when her state provides AI training through its professional learning management system, teachers adopt quickly. "It's not taking the level of training that we've seen with other things that we've rolled out in the state," she said.

But without federal guidance, private companies are rushing to fill the void.

Experts warned the subcommittee that while these companies may be well-intentioned, they are still for-profit. To make money, they may try to make their products addictive, like social media.

"Without some federal guidance and some top-level guidance to sort of prevent that kind of thing, it just enters into a very dangerous space," David Slykhuis, dean of the Dewar College of Education said. He describes the current landscape as the "wild, wild West."

With federal guidance gone and only a handful of states piloting AI programs, the market for AI professional development platforms is wide open. 

The West Virginia example suggests teachers are ready for training when it's available, practical, and doesn't add to their workload. Products that combine AI literacy training with classroom-ready implementation tools could find rapid adoption. And the warning about addictive design points to a differentiation opportunity: AI tools built for learning outcomes rather than engagement metrics.

A 22-Year-Old's AI Bot Just Completed an Entire College Course

Advait Paliwal didn't set out to break higher education's credentialing model.

The 22-year-old tech entrepreneur, who dropped out of Brown University's computer science master's program in 2024, was building an AI agent with access to a computer. When a student co-worker complained about finishing an assignment, Paliwal suggested trying the tool on Canvas, the popular learning management system.

"It did it, which was very unexpected for me," Paliwal said. "I started questioning more and more about what it means to be a student."

The tool, called Einstein, does everything: it logs into Canvas, watches lectures, reads essays, writes papers, participates in discussions, and submits homework. Automatically. Even while the student sleeps.

Over 124,000 people visited the website in three days. Instructure, which owns Canvas, sent a cease-and-desist letter, and Paliwal took down the site. 

He has since relaunched with softer language, rebranding Einstein as "the personal tutor every student deserves," though he admits it has all the same functions.

Faculty debate erupted. The courses most vulnerable to Einstein are the "transactional," content-based ones that rely on quizzes, asynchronous discussions, and term papers, according to Jonathan D. Becker, an associate professor at Virginia Commonwealth University who specializes in education technology.

"The most destructive educational technology we have is the large lecture hall," Becker said. "I would be happy if these technologies forced us to stop putting 400 students in a room."

But the implications go beyond pedagogy. 

Anna Mills, an AI faculty developer at the College of Marin, warned that without guardrails, students who use agentic AI to complete courses fraudulently will undermine the perceived value of online credentials for everyone.

"Their credits will be suspect and won't count in the job market unless we can do something about this," Mills said. "And if we have to move to in-person proctoring to accompany online courses, it will undermine access for a lot of people, including those living in rural areas, working long hours or raising families."

The Modern Language Association saw this coming. In an October statement, it warned of a "fully automated loop in which assignments are generated by AI with the support of a learning management system, AI-generated content is submitted by an agentic AI on behalf of the student, and AI-driven metrics evaluate the work on behalf of the instructor."

Mills went further, sending an open letter to OpenAI, Perplexity, Google, and Anthropic asking them to program agentic browsers to refuse to complete LMS assignments.

Paliwal says the outrage was the point. "The only way to make change is to create some sort of understanding of what's at stake," he said. "This was a way to create an understanding of what's happening by showing what's possible."

For education innovators, Einstein exposes a fundamental vulnerability in how institutions verify learning. The immediate market need is for AI-resistant assessment design that can't be completed by autonomous agents. Identity verification systems, competency-based credentialing platforms, and proctoring technology that works without requiring physical presence are all growth areas. 

But the deeper opportunity may be in course design itself: Becker's point about transactional courses suggests that institutions willing to redesign around active, synchronous learning will need new platforms to support that shift.

75% of Students Who Use AI Are Stressed About Being Falsely Accused of Cheating

Institutions are racing to catch AI-generated work. Students are caught in the crossfire.

A YouGov survey of 2,373 university students, commissioned by student support company Studiosity, found that 75% of students who use AI tools report significant stress about their work being wrongly flagged as plagiarism by detection tools.

71% of students now use AI for assignments or study, up from 64% in the previous year's polling. 

And 60% of all students surveyed experienced stress while using AI tools.

The single biggest source of that stress? 

52% cited "being accused of cheating when I did nothing wrong."

The survey also revealed that infrequent AI users feel more anxious than regular ones. Students who had used the tools only once or twice were more likely to stress about false accusations than habitual users, suggesting that unclear institutional policies create more anxiety than the technology itself.

"This highlights a significant trust gap between students, their tools and institutional detection methods," the report states.

Usage is highest among business students (80%) and law students (75%), and lowest in creative arts (52%) and humanities (58%). 

But across disciplines, the same tension holds: students are using AI because it's available, useful, and in many cases encouraged, while simultaneously fearing punishment for using it.

When asked if they would rely entirely on AI if permitted, only 21% said yes. Nearly half expressed concern that AI is eroding their critical thinking and communication skills. 

Most students want to learn. They just don't trust institutions to fairly distinguish between using AI as a tool and using it to cheat.

The report recommends institutions reconsider detection tools that produce false positives and establish protections against wrongful accusations.

The data suggests the biggest gap isn't detection accuracy but the integrity model itself. 

Process-based assessment platforms that evaluate how students develop their work, not just the finished product, could resolve the trust gap. Clear institutional AI policy frameworks that define acceptable use would reduce the anxiety that comes from ambiguity.

⚡️ More Quick Hits

This week in education:

β€’ LAUSD superintendent placed on paid leave after FBI raids linked to $6.2M AI startup contract β€” District simultaneously faces an $877 million projected deficit, 3,200 layoff notices, and a 94% strike authorization vote from teachers

β€’ Fourth graders generated explicit AI images using Adobe Express at a California school β€” California published new AI guidelines in response; at least 31 states have issued AI guidance, but safety incidents continue to outpace policy

β€’ Education groups propose $2.5 billion plan to rebuild teacher preparation pipeline β€” AACTE-led coalition calls for a national educator workforce data system and integration of AI training into preparation programs

β€’ Wisconsin parents, teachers, and five districts sue legislature over school funding formula β€” A record 241 school funding referendums went before voters in 2024 as the state refused any increase in general school aid for two consecutive fiscal years

To stay up-to-date on all things education innovation, visit us at playgroundpost.com.

What did you think of today’s edition?
