On March 24, 2026, the NYC Department of Education released the AI guidance it had been working on since ChatGPT launched in late 2022. Teachers are allowed to use AI for brainstorming. Not for grading. This is what 2+ years of institutional deliberation produced.
If you teach in NYC — or anywhere a district is still “working on its AI policy” — this framework is coming to your school too. Understanding what it actually does and doesn’t solve is more useful than reading the press release.
The guidance formalizes what experienced teachers already knew and avoids the questions that remain genuinely unresolved. It will not change practice for teachers who have been thoughtfully using AI since 2022. What it reveals is how school systems handle technology: slowly, defensively, and in ways designed to protect the institution rather than empower the practitioner.
Here is what the policy actually says, why it took two years to say it, and what it means for teachers beyond the traffic-light diagram.
What NYC’s AI Policy Actually Says
NYC’s guidance uses a red/yellow/green framework — think of it as a traffic light for every task where AI might come up.
Green light (go ahead): brainstorming, organizing information, creating initial drafts of non-sensitive communications, scheduling, formatting documents.
Yellow light (proceed with human oversight): research, translating for English language learners, trend analysis.
Red light (never, no exceptions): grading, discipline decisions, IEP development, accommodations for students with disabilities, counseling, behavioral monitoring.
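To make the framework's structure concrete: functionally, it is a lookup table mapping tasks to permission levels, with everything the DOE has not classified defaulting to "proceed with caution." Here is a minimal sketch in Python (purely illustrative: the task names are paraphrased from the summary above, and nothing like this code exists in the DOE guidance itself).

```python
# A minimal sketch of the traffic-light framework as a lookup table.
# Task names and categories are paraphrased from the guidance summary
# above; this is an illustration, not an official DOE artifact.

POLICY = {
    # Green: permitted without special review
    "brainstorming": "green",
    "organizing_information": "green",
    "drafting_non_sensitive_communications": "green",
    "scheduling": "green",
    "formatting_documents": "green",
    # Yellow: permitted only with human oversight
    "research": "yellow",
    "ell_translation": "yellow",
    "trend_analysis": "yellow",
    # Red: never permitted
    "grading": "red",
    "discipline_decisions": "red",
    "iep_development": "red",
    "disability_accommodations": "red",
    "counseling": "red",
    "behavioral_monitoring": "red",
}

def check_task(task: str) -> str:
    """Return the traffic-light category for a task, defaulting to caution."""
    # Anything not explicitly listed lands in yellow: much of a real
    # classroom day is unclassified, which is part of the critique below.
    return POLICY.get(task, "yellow")

if __name__ == "__main__":
    for task in ("brainstorming", "grading", "writing_rubrics"):
        print(f"{task}: {check_task(task)}")
```

Notice what the default case exposes: a task as common as writing rubrics is not in any bucket, so in practice it falls to individual judgment, which is exactly the gap the rest of this piece is about.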
The guidance covers approximately 78,000 teachers across NYC public schools (Gothamist, March 2026). It is preliminary: public feedback is open through May 8, 2026, and a comprehensive Playbook is expected in June 2026 (NYC DOE).
One significant catch: before any AI tool can be used in NYC classrooms, it must pass through the Enterprise Request Management Application (ERMA). According to Chalkbeat’s March 24 reporting, ERMA currently lacks guidelines for evaluating algorithmic bias or instructional effectiveness. The tools teachers are officially permitted to use have not been meaningfully vetted for the things that matter most in a classroom.
The Context: What Was Happening Before the Policy Arrived
Here is the abbreviated version of the last three years in NYC schools.
Late 2022: ChatGPT launches. Early 2023: NYC bans it in schools. A few months later: NYC reverses the ban. For the next two-plus years: nothing resembling a coherent policy.
In the vacuum, individual schools improvised. English and social studies teachers restructured assignments entirely, moving to in-class handwritten work to address academic integrity concerns. Many operated without institutional cover of any kind. A Brooklyn high school English teacher put it plainly in Chalkbeat: “I would love clear rules … and I feel that I do not have backup.”
That quote is from March 2026. She said it the week the guidance finally arrived.
Meanwhile, two very loud camps were fighting over what the DOE should do. The MORE Caucus of the UFT, along with Class Size Matters, Climate Families NYC, and the Parent Coalition for Student Privacy, has been pushing for a 2-year AI moratorium. The coalition's petition on Action Network has gathered approximately 1,500 signatures since October 2025. On March 14, 2026, ten days before the guidance was released, teachers, parents, and students protested at the Chancellor's town hall demanding the moratorium (Fight Back! News).
In the other direction: the DOE announced a proposed Next Generation Technology High School in the Financial District, with Google and OpenAI as planning partners. That proposal drew its own pushback: a separate petition opposing the school gathered nearly 1,300 signatures (Chalkbeat, March 13, 2026). Leonie Haimson, co-chair of the Parent Coalition for Student Privacy, said: “The thing that’s infuriating to me is that they continue to not only use AI, but expand AI.”
So: 2+ years of absent policy, an improvised ban that lasted a few months, individual teachers figuring it out on their own, a moratorium movement, a vendor-partnered AI high school in the works, and then — the traffic-light framework.
Does the Guidance Actually Help Teachers Who Use AI?
Honestly? Not much.
Teachers who have been using AI thoughtfully since 2022 were already not using it for grading, discipline, or IEP decisions. They were already using it for lesson planning and brainstorming. The green lights are not news. The red lights are not controversial. The guidance formalizes existing good practice — for teachers who already had good practice.
For teachers who have been avoiding AI out of uncertainty, a traffic-light diagram is unlikely to change that. The barrier was never the absence of a framework. It was the absence of trust — trust that their district would back them up if something went wrong, trust that the tools were safe, trust that the guidance was coming from people who understood what a classroom actually looks like.
The ERMA problem compounds this. AMNY described the guidance as “riddled with warnings and concerns around potential privacy violations, data accuracy and risks of spreading misinformation.” But the vetting process designed to address those concerns still lacks criteria for algorithmic bias or instructional effectiveness. You are being given a list of approved tools whose safety has not been fully evaluated by the standards that matter for your students.
The guidance also covers teacher use comprehensively while leaving student use largely unresolved — a significant gap, given that students are already using AI to study regardless of any policy. An analysis of 415 Reddit posts about AI in schools found that the most common teacher complaint is not lack of AI access but vague or inconsistent institutional guidance that places liability back on the individual teacher. One anonymous teacher reported that their school’s policy simply told staff to use “best judgment” — without defining what that meant.
That was the whole policy.
Our Take: The Policy Protects the Institution — Not You
Let’s be direct about what this document actually is.
The traffic-light framework is not a teacher empowerment document. It is a liability management document. Green lights = tasks where AI error has no consequential student welfare impact. Red lights = tasks where AI error creates legal or institutional exposure. Yellow lights = areas where the DOE has not yet made up its mind.
That is not cynicism; it is just reading the categories in order. The DOE did not green-light brainstorming because brainstorming is safe for children. It permitted brainstorming because if a brainstorming session produces a bad idea, no one sues the district.
The two loudest voices in this debate are both missing the same thing.
The moratorium coalition’s demand — no AI in NYC schools for two years — is structurally identical to the vendor mandate it opposes. Both treat teachers as subjects of a top-down policy rather than authors of their own professional practice. MORE-UFT frames AI as a labor issue: “Educators face an unsustainable workload, and AI is not the solution to this problem.” That concern is legitimate. But a 2-year institutional freeze doesn’t give teachers more control — it just hands control to different administrators.
The DOE’s AI high school proposal is the real tell. The same week the DOE released guidance urging caution about research and trend analysis, it was co-planning a public school with Google and OpenAI as design partners. The district’s actual AI posture is not the traffic-light framework. It is the vendor relationship.
The guidance does state the right principle: “AI supports — it never replaces — educator decision-making.” That sentence is correct. The question is whether it gets operationalized as a protection for teachers or as a constraint on them. Given that the vetting process was incomplete at launch, given that student use remains unaddressed, and given the Financial District AI school down the street, take a guess.
Neither the moratorium advocates nor the expansion advocates started by asking what teachers who actually use AI in their classrooms every day need. That omission is the policy, not an oversight.
What Teachers Should Actually Do With This
Whether you teach in NYC or anywhere else, here is what this analysis means for your practice.
If your district has released a policy: Read the red lights carefully. Those categories — grading, discipline, IEPs — define the areas with real ethical stakes regardless of who is in charge. The green lights are permission to keep doing what you were already doing. Nothing in the framework changes what thoughtful AI use actually looks like.
If your district has not released a policy yet: You are in the same position NYC teachers were for the past two years. Do not wait. Document your own framework — what you use AI for, why, and how you maintain professional judgment over outcomes. If your practice is ever questioned, you want to demonstrate deliberate reasoning, not improvisation.
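If “document your own framework” feels abstract, here is one hypothetical shape such a record could take, sketched in Python. Every field name here is an invention for illustration, not a district requirement; a notebook or spreadsheet with the same columns works just as well.

```python
# A hypothetical shape for a personal AI-use log, as suggested above.
# Field names are illustrative inventions, not any district's format.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseRecord:
    when: date              # date of use
    task: str               # what you used AI for
    tool: str               # which tool, once cleared by your district
    rationale: str          # why AI was appropriate for this task
    human_review: str       # how you verified or revised the output
    student_data_involved: bool = False  # should stay False outside vetted tools

log: list[AIUseRecord] = []
log.append(AIUseRecord(
    when=date(2026, 4, 2),
    task="brainstormed discussion questions for a unit on the Gilded Age",
    tool="district-approved assistant",
    rationale="green-light task: idea generation, no student data",
    human_review="selected and rewrote 4 of 12 suggestions to fit my class",
))
```

The point of a record like this is the last two fields: if your practice is ever questioned, they are the evidence that your judgment, not the tool's, governed the outcome.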
On tool vetting: The ERMA problem is not unique to NYC. Before using any AI tool with student data, verify independently that it has passed your district’s data privacy review. The tool an administrator just recommended may not be cleared for classroom use. Do not assume administrative enthusiasm equals official approval.
On the moratorium vs. expansion binary: Ignore it. Students are already using AI to study and complete assignments regardless of any policy. The question is not whether AI belongs in school — it is who gets to decide how it is used. That answer should start with the teacher in the classroom, not a petition or a vendor contract.
For lesson planning, brainstorming, and communications (the green-light zone), tools built specifically for educator workflows are the right starting point. See our full comparison of MagicSchool AI vs Khanmigo for teachers if you want a closer look at which tool fits different classroom needs. And if you’re evaluating the best AI grading tools for teachers, note that grading sits squarely in the red-light category under NYC’s framework, which should inform how any such tool is evaluated and deployed.
Frequently Asked Questions
What does NYC’s new AI policy say teachers can and cannot do with AI?
Green (go ahead): brainstorming, organizing, initial drafts of non-sensitive communications, scheduling, formatting. Yellow (with human oversight): research, ELL translation, trend analysis. Red (never): grading, discipline, IEP development, disability accommodations, counseling, behavioral monitoring. Full guidance at schools.nyc.gov.
Why did NYC schools take 2+ years to release an AI policy?
ChatGPT launched in late 2022. NYC briefly banned it in early 2023, reversed the ban, and then spent over two years in deliberation. The delay reflects institutional caution around liability and data privacy — not the inherent complexity of deciding whether teachers can use AI to brainstorm. Many schools filled the vacuum by developing their own informal guidelines, which varied widely.
Does the NYC AI guidance actually help teachers who want to use AI effectively?
For teachers who have been using AI thoughtfully, not much changes. The guidance codifies what experienced practitioners already knew. For teachers avoiding AI out of uncertainty, a traffic-light diagram is unlikely to shift their practice. The document’s primary function is institutional liability protection. It is useful as official cover for what you were already doing — treat it that way.
Why are some teachers protesting the NYC AI expansion while others want more access?
The MORE Caucus/UFT coalition frames AI as a labor issue — workload, deskilling, deprofessionalization, data privacy. Their moratorium demand reflects genuine concerns about who controls the tools and who benefits from them. Teachers wanting more access are frustrated by years of inconsistent or absent guidance. Both camps are reacting to the same underlying problem: practitioners have had no seat at the table in these decisions.
What should teachers do when their district’s AI policy is vague or arrived too late?
Document your own classroom framework: what you use AI for, why, and how student outcomes remain under your professional judgment. Regardless of what your district says, treat the red-light categories — grading, discipline, IEPs — as the ethical floor. And always verify data privacy independently before using any AI tool that touches student information.
How does the ‘red light, green light’ model compare to how experienced teachers already used AI?
Experienced teachers were already not using AI for grading, discipline, or IEP decisions — and were already using it for lesson planning and brainstorming. The framework mostly formalizes existing good practice. Its most practical value is giving teachers institutional cover for what they were already doing. If the traffic-light categories surprise you, you may not have been using AI carefully enough.
Are top-down school AI policies protecting students or protecting administrators?
Primarily protecting administrators. The red lights align precisely with areas of legal and institutional liability — grades, discipline, IEPs. The incomplete ERMA vetting process, which still lacks guidelines for algorithmic bias, suggests student protection is secondary to risk management for the institution. The right principle is in the guidance itself (“AI supports, never replaces, educator decision-making”) — whether it’s enforced as a protection for teachers or as a constraint on them depends entirely on who’s implementing it and why.
Read the Red Lights, Then Make Your Own Call
NYC’s “red light, green light” AI policy is a liability document dressed up as pedagogical guidance — and that’s fine, as long as you understand what it actually is.
Read the red lights carefully. They define the areas with real ethical and institutional stakes, and those standards hold regardless of who wrote the policy or why. Treat the green lights as permission to keep doing what you were already doing. Don’t wait for June’s comprehensive Playbook before building your own classroom framework.
The guidance arriving two years late, with an incomplete vetting process, while the district co-plans a vendor-partnered AI high school down the street — that’s not a policy failure. That’s just what institutional AI decision-making looks like up close.
The districts that get AI right won’t be the ones with the best policies. They’ll be the ones that trusted teachers to use their professional judgment in the first place.