
πŸ› $6.4 Million to Answer One Question. Does AI Actually Work in Schools?

What this means for educators + more

Welcome to Playground Post, a bi-weekly newsletter that keeps education innovators ahead of what's next.

This week's reality check: While $6.4 million is being spent to find out if AI can actually improve student outcomes, the reading crisis it's supposed to help solve turns out to be deeper than test scores suggest. Five million multilingual kids can decode words just fine. They just can't understand them.

Last Chance to Join & Build With Us!

Playground Summit (Feb 26–27, New Orleans)

A different kind of education gathering - for builders who want more imagination and better ideas.

Playground Post readers get discounted tickets. Email [email protected] to save.

πŸ‘‰ Learn more

πŸ’Ž Data Gem

21% of public schools reported at least one special education vacancy last year, and 55% reported difficulty filling those roles, according to U.S. Government Accountability Office (GAO) and School Pulse Panel data.

Eight School Districts Just Became AI's Biggest Test

For decades, the ed-tech industry has sold tools on vendor promises and anecdotal enthusiasm. Districts buy, hope for the best, and rarely measure what happens next.

A new initiative backed by the Bill & Melinda Gates Foundation and Microsoft is trying to change that. 

Eight school districts across the country have begun implementing AI tools under a "test-and-learn" framework that requires rigorous evaluation against specific, measurable benchmarks.

The projects are collectively funded with $6.4 million from the Gates Foundation and supported with Azure cloud computing and technical resources from Microsoft. The initiative is coordinated by the Council of the Great City Schools and Digital Promise.

The districts were selected competitively and span urban, suburban, and rural communities. 

That diversity is intentional: a tool that works in a well-resourced suburban district with one-to-one devices may fail entirely in a rural district where students share Chromebooks and connectivity is unreliable. 

The designers want to understand not just whether AI helps, but under what conditions and for whom.

What are the districts actually doing? 

Some are deploying AI tutoring systems in math that adapt to student responses in real time. Others are testing AI-assisted literacy tools that analyze student writing and provide feedback teachers can review. Still others are targeting administrative burden: the lesson planning and grading that consume enormous amounts of teacher time.

Here's why this matters beyond the eight districts: every deployment must be accompanied by data collection and evaluation protocols. If the initiative produces rigorous, publicly available results, it becomes a reference point for the thousands of districts making purchasing decisions right now with almost no evidence to guide them.
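To make "evaluation against specific, measurable benchmarks" concrete, here is a minimal sketch of one common approach: comparing test-score gains in classrooms piloting a tool against comparison classrooms and reporting an effect size. The data, field names, and the choice of Cohen's d are hypothetical illustrations, not the initiative's actual protocol.

```python
# Hypothetical sketch of a pre/post evaluation. All data and names are invented.
from statistics import mean, stdev

def gains(records):
    """Per-student gain: post-test score minus pre-test score."""
    return [r["post"] - r["pre"] for r in records]

def cohens_d(treatment, comparison):
    """Standardized difference in mean gains, using a pooled standard deviation."""
    n1, n2 = len(treatment), len(comparison)
    s1, s2 = stdev(treatment), stdev(comparison)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(comparison)) / pooled

# Toy records: a handful of students per group, scores out of 100.
pilot = [{"pre": 52, "post": 68}, {"pre": 60, "post": 71}, {"pre": 45, "post": 59}]
control = [{"pre": 55, "post": 61}, {"pre": 58, "post": 63}, {"pre": 47, "post": 50}]

print("Mean gain, pilot classrooms:     ", round(mean(gains(pilot)), 1))
print("Mean gain, comparison classrooms:", round(mean(gains(control)), 1))
print("Effect size (Cohen's d):         ", round(cohens_d(gains(pilot), gains(control)), 2))
```

A real protocol would add controls for prior achievement, attendance, and how faithfully the tool was used; the point is simply that a measurable benchmark means a pre-specified outcome, a comparison group, and a number at the end.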

Professional development is built into the program design, with teachers trained not just on using the tools but on evaluating their effectiveness and providing feedback to developers. 

The goal: technology that adapts to classroom realities rather than demanding classrooms adapt to the technology.

For education innovators, this initiative is a market signal worth taking seriously. If evidence-based purchasing becomes the norm, companies that can demonstrate measurable outcomes will have a massive competitive advantage. And for startups building AI products: the initiative's emphasis on diverse deployment environments means tools that work only under ideal conditions won't survive the real-world test. Products designed for low-bandwidth, shared-device, high-need classrooms will be the ones that scale.

5 Million Kids Can Decode Words. They Don't Know What They Mean.

A first grader asked her teacher for help with an assignment that required reading a word and matching it to a picture. The teacher assumed the student couldn't read the word.

The student's reply: "I read the word, but I don't know what it means."

That moment, described by a veteran North Carolina educator, captures a widening gap in American literacy instruction.

Students are getting better at decoding. They are not getting better at comprehension.

NAEP results show that reading comprehension outcomes have worsened nationwide, with more students scoring below proficiency in 2024 than in either 2022 or 2019. The sharpest declines are among African American, Hispanic, Native American, and multilingual learners.

There are over 5 million multilingual learners in U.S. schools. Science of reading reforms have boosted their decoding skills. But those reforms were built for monolingual, culturally narrow classrooms.

Here's the problem: many science of reading curricula assume students come from white, English-speaking, middle-class households. 

Research shows students improve in reading when texts reflect their racial, cultural, and linguistic identities, because culture shapes the oral language needed for comprehension. When curricula ignore students' lived experiences, understanding suffers. Not from lack of ability, but from lack of relevance.

Decodable texts deepen this gap. They are designed to practice phonics, not to develop rich vocabulary, complex language, or connections to meaning. As a result, students may look strong on decoding data while continuing to lag in comprehension, a pattern consistent with the widening gaps in NAEP results even as phonics instruction improves.

For education innovators, this data reveals a massive underserved market. The 5 million multilingual learners in U.S. schools need literacy tools built for them, not adapted as an afterthought from English-only curricula. There's clear demand for culturally relevant content platforms that embed vocabulary and comprehension instruction alongside phonics, bilingual family engagement tools, and assessment systems that measure comprehension separately from decoding.
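As a rough illustration of that last point, here is a minimal sketch of what measuring comprehension separately from decoding could look like in an assessment data model. The field names, cut scores, and student records are invented; the idea is simply that a student can clear the decoding bar while a comprehension flag still fires.

```python
# Hypothetical sketch: flag students whose decoding outpaces their comprehension.
# Thresholds, field names, and records are illustrative only.
DECODING_CUT = 70       # assumed "on track" threshold for decoding
COMPREHENSION_CUT = 70  # same threshold applied to comprehension

students = [
    {"id": "S1", "decoding": 88, "comprehension": 52, "home_language": "Spanish"},
    {"id": "S2", "decoding": 91, "comprehension": 84, "home_language": "English"},
    {"id": "S3", "decoding": 76, "comprehension": 49, "home_language": "Hmong"},
]

def masked_gap(s):
    """True when a student looks on track in decoding but not in comprehension."""
    return s["decoding"] >= DECODING_CUT and s["comprehension"] < COMPREHENSION_CUT

for s in filter(masked_gap, students):
    print(f"{s['id']}: decoding {s['decoding']}, comprehension {s['comprehension']} -> flag for support")
```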

Schools Can't Secure What They Can't See

In Q1 2025, ransom demands targeting schools averaged $608,000 globally.

Many of these attacks begin with everyday web activity: compromised websites, phishing links, and malicious downloads.

Schools are in their most digitally connected period to date. Cloud-based platforms and web resources have expanded access and flexibility. But that same connectivity has made education a more visible target for cybercriminals.

The problem isn't just the attacks themselves. 

It's that learning has moved off campus, but security hasn't followed. Students now learn across cloud platforms, shared devices, and home networks. Security can no longer be a perimeter around the school building. It has to travel with users.

In November, a sophisticated phishing attack sent to 10,000 student email addresses in New Haven, Connecticut, compromised at least four student accounts. The attackers used a single convincing email, not a technical exploit, to breach the system.

Across OECD countries, 81% of central education authorities provide guidance on privacy and data protection. 

But fewer than half (43%) have monitoring or enforcement mechanisms to ensure those measures are actually implemented at the school level. Guidance without enforcement leaves schools exposed.

Emerging threats are making this worse - generative AI tools enable attackers to craft more convincing phishing emails and create deepfake content for cyberbullying. The sophistication of attacks is rising while many schools still rely on basic filtering and network-level protections that weren't designed for a distributed learning environment.

Web security has shifted from an IT add-on to core infrastructure. There's growing demand for zero-trust security solutions that protect students regardless of location or network, compliance monitoring tools that help districts move from guidance to actual enforcement, and AI-aware threat detection systems designed specifically for education environments where student data is heavily regulated under FERPA and COPPA.
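As a rough sketch of what moving from guidance to enforcement could mean in practice, the snippet below compares each school's reported settings against a district policy baseline and lists the gaps. The policy items, school names, and settings are all hypothetical; a real monitoring tool would pull this data from identity providers and device-management systems rather than hand-entered records.

```python
# Hypothetical sketch: check each school's settings against a district policy baseline.
# Policy items, school names, and settings are invented examples.
POLICY_BASELINE = {
    "mfa_required_for_staff": True,
    "student_data_encrypted_at_rest": True,
    "phishing_training_completed": True,
    "web_filtering_off_campus": True,
}

schools = {
    "Lincoln Elementary": {
        "mfa_required_for_staff": True,
        "student_data_encrypted_at_rest": True,
        "phishing_training_completed": False,
        "web_filtering_off_campus": False,
    },
    "Roosevelt Middle": {
        "mfa_required_for_staff": False,
        "student_data_encrypted_at_rest": True,
        "phishing_training_completed": True,
        "web_filtering_off_campus": True,
    },
}

def compliance_gaps(settings):
    """Return the baseline items a school has not implemented."""
    return [item for item, required in POLICY_BASELINE.items()
            if required and not settings.get(item, False)]

for school, settings in schools.items():
    gaps = compliance_gaps(settings)
    print(f"{school}: " + ("compliant" if not gaps else "gaps: " + ", ".join(gaps)))
```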

⚑️More Quick Hits

This week in education:

β€’ What's worse for students: a boring worksheet or ineffective ed tech? β€” EdWeek explores the growing debate over whether bad technology is actually worse for learning than traditional low-tech instruction

β€’ New education security toolkit provides AI-aware cybersecurity guidance β€” Toolkit offers schools practical frameworks for addressing AI-driven threats alongside traditional cybersecurity challenges

β€’ Khan Academy partners with Google to integrate Gemini into writing and reading tools β€” Partnership brings Google's AI model into Khan Academy's literacy products, expanding AI tutoring beyond math

β€’ Unmasking EdTech's surveillance infrastructure in the age of AI β€” Analysis argues student monitoring tools have expanded well beyond safety into surveillance, raising civil liberties concerns as AI capabilities grow.

To stay up-to-date on all things education innovation, visit us at playgroundpost.com.

What did you think of today’s edition?
