The Last Bell: What the Class of 2028 Walks Into, and Why Most American High Schools Aren't Ready
White Paper


Jake McCluskey

Foreword

I was sitting with a group of high school seniors a few nights ago, and they were talking about AI. How bad it is. How it ruins education. How people shouldn't use it. I asked a series of questions, looking for a single positive thing any of them had to say about it. Every answer came back negative. AI is cheating. AI is dumbing kids down. People should quit using it. It won't help anyone do their job in the future.

These were not the loudest kids on TikTok. These were not the parents at a school board meeting. These were the seniors. The kids who walk this spring into the same labor market where AI is already the second or third hire on most knowledge-work teams. They have been taught that the most powerful tool reshaping their career field is something they should refuse to use, and they believe it.

That conversation is the reason I wrote this paper. Those seniors are not stupid, and they are not lazy. They are absorbing what their schools, their teachers, and the loudest adults in their lives are telling them. The schools believe they are doing right by these kids. They are not.

I'm not an educator. I've spent 25 years in digital marketing watching technology waves reshape industries (the early web, mobile, the social platforms, the attention economy), and I've spent the last six years shipping production AI systems for businesses. My wife is in classrooms every day. Between her work and mine, this paper sits at the intersection of two perspectives that rarely show up in the same room: someone who has watched technology reshape labor markets, and someone who is in a school watching kids try to learn through it.

What follows is not another "future of education" think piece. It is a plain-language brief for a high school principal or superintendent who is trying to make a real decision in the next 90 days about something that is changing faster than any policy committee can keep up with. It contains three claims, one framework, and a 12-month roadmap.

It is meant to be read in 25 minutes and forwarded.

Jake McCluskey, Elite AI Advantage

With classroom insight and editorial review from Morgyn McCluskey, paraeducator and online educator pursuing her degree in business and marketing education, who reviewed every section of this paper for fit with classroom reality.

Executive Summary

This brief argues three things, plainly:

1. The labor market your seniors are walking into has fundamentally shifted, and your curriculum has not. AI is now the second or third hire on most knowledge-work teams in America. Roughly 22 percent of the jobs your graduating seniors will apply for will face structural change in the next four years. The skills that are about to compound are not the skills most high schools are teaching. The good news is that this is fixable inside a single school year, with policy, training, and curriculum touchpoints that fit existing schedules and existing budgets.

2. The cheating panic is a distraction. The literacy gap is the real risk. Most current school AI policies are built around the wrong question. The question is not "how do we stop students from using AI?" They are already using it, and bans simply hide the problem from administrators. The right question is "how do we teach students and educators to use AI as a thinking partner rather than as a way to skip the thinking?" That is a literacy problem, not a discipline problem, and it requires a different rollout than most schools are attempting.

3. The bigger gap is not in your students, it is in your faculty. The students are using AI, usually badly. The faculty mostly are not, and that is the real crisis. A teacher who has never used Claude or ChatGPT cannot evaluate a student's use of it, cannot write a defensible policy, and cannot answer a parent's question with confidence. Educator literacy is the first lever, not the last.

This brief proposes a three-stage rollout (policy first, educator literacy second, student curriculum touchpoints third) that can be implemented over a single school year, on existing budgets, without abandoning standards. The framework is in Part 3. The 12-month roadmap is in Part 4. A sample two-page board policy and a parent-night talking-points sheet are in the appendices.

The asks of the reader are simple. Read it. Hand a copy to your board chair and your curriculum director. Pick the section that scares you the least and start there. The doors are already moving. The only question is whether your school chooses to move with them.

Part 1: What the Class of 2028 Walks Into

The labor market your seniors are walking into is not the labor market they are being prepared for. That sentence sounds dramatic. The numbers behind it are not.

Three data points to anchor the rest of this paper:

Stanford's 2026 AI Index reports that 70 percent of US organizations now use generative AI in at least one business function. That number was 33 percent in 2023. In two years, AI use in American workplaces has more than doubled. By the time your current sophomores graduate, the percentage will not be 70; it will be 90 or more.

Anthropic's 2026 Economic Index found that 49 percent of jobs have already had at least a quarter of their daily tasks performed using AI tools. Half of all jobs. Already. Computer programmers run at 74.5 percent AI exposure. Customer service representatives at 70.1 percent. Office and administrative roles at 34.3 percent. Even education and library occupations, the field that has been slowest to integrate AI, sit at 18.2 percent. The "AI is going to change work someday" framing is two years out of date.

The Bureau of Labor Statistics' 2024-2034 projections show computer and mathematical occupations growing at 10.1 percent, the second-fastest of any occupational group. Wind turbine technicians, solar installers, and nurse practitioners lead the headline growth list. Almost every fast-growing role assumes AI fluency as a baseline skill, not a specialty.

Walk those numbers through what they actually mean for a high school senior in 2026:

  • They will graduate into workplaces where 90 percent or more of organizations expect them to use AI.
  • They will compete for jobs against peers who started using AI for school work in 7th grade.
  • The fastest-growing roles assume they already know how to prompt, evaluate, and integrate AI output before the first day on the job.

Their schools, in many cases, told them not to touch it.

Sidebar: The Real Labor Market for North Dakota Graduates

Most national AI-and-jobs commentary assumes the typical graduate is on a college-to-knowledge-work path. More than a third of North Dakota seniors are not.

North Dakota is one of 12 states (plus DC) where the count of high school graduates is projected to keep growing through 2034, against a national decline. More ND kids will be graduating into the workforce, not fewer. Where they go after graduation is split:

  • Roughly 60 to 65 percent enroll in some form of postsecondary education.
  • Roughly 35 to 40 percent go directly to work, military service, or skilled trades.

The state's largest employer industries, in rough order: healthcare, energy (oil and gas extraction in the Bakken), agriculture, education, retail, manufacturing (Bobcat/Doosan, RDO, John Deere dealer networks), financial services, and a growing tech corridor in Fargo anchored by Microsoft, Bell Bank, and Sanford Health.

Notice what jumps out: every one of those industries is being rebuilt around AI. The oil patch is using AI for predictive maintenance and seismic interpretation. Healthcare is using it for documentation, imaging, and intake. Agriculture is using it for satellite-driven field analysis and equipment optimization. Manufacturing is using it for inventory, quality control, and predictive supply chain. Financial services is using it for everything. The kid who graduates from a North Dakota high school in 2026 and goes to work driving a haul truck in the oil patch is operating equipment with more AI on board than the laptop in the principal's office.

ND's Choice Ready metric, the state's measure of whether seniors are prepared for workforce, military, or postsecondary, climbed from 71 percent to 73 percent between the 2023-24 and 2024-25 school years. That's a real win. It is also a metric that does not yet measure AI literacy in any direct way. A 2026 senior could be Choice Ready under the current rubric and still walk into their first job unable to use the most important tool on the desk.

Which post-graduation path a senior takes does not matter. The literacy gap shows up in all of them.

The AI Literacy Gap Is Already Here

The most important fact about the AI gap in American high schools is that it is already producing graduates with five years of head start.

A senior at a $90,000-per-year private high school in Massachusetts has, in many cases, been using AI tools as a sanctioned part of class work since 2023. They have written essays with AI feedback loops. They have used AI to study for AP exams. They have built capstone projects that integrate AI as a research tool. They will arrive at an internship in 2027 fluent in the workflows their managers are still learning.

A senior at a public high school where AI is banned has been told the tool that runs their professional future is something to refuse.

This is not hypothetical. AI integration in K-12 varies wildly by district and by family income. Roughly half of American teachers say students are already using AI on assignments, but only a fraction of teachers have received any formal training in evaluating that use. The gap between students and teachers is already wider than the gap between any two demographic groups in American education.

This is the context every principal in this country is operating in, whether they have written a single AI policy yet or not.

The Skills That Compound vs. The Skills That Do Not

The temptation, when reading the data above, is to conclude that schools should "teach kids how to use AI." That framing is wrong, and it will produce another generation of people who treat the tool as a shortcut. The real work is teaching the skills that compound when AI is in the mix, and reducing the time spent on the skills that no longer earn it.

Skills that compound (more valuable every quarter):

  • Asking sharp, structured questions
  • Reading critically (judging what an AI output is missing, getting wrong, or hiding)
  • Writing as thinking (using the act of writing to clarify what you actually mean)
  • Synthesizing across multiple sources
  • Persuading and explaining to other humans
  • Taste. Knowing what good looks like.

Skills that no longer earn the time they used to (commoditized fast):

  • Standardized summarization
  • Boilerplate code
  • First-draft writing of routine documents
  • Formatted research reports
  • Translation of common content
  • Rote answer recall

A high school curriculum that takes the second list seriously, and reduces the time spent on those activities while increasing the time spent on the first list, is a curriculum preparing its students for the labor market that already exists. A curriculum that does the opposite is preparing its students for 2018.

The point of teaching AI literacy is not to teach prompts. The prompts will be different in six months. The point is to teach the durable skills, and to use AI as the accelerant for those skills, not the replacement.

That is the framework. Part 3 builds it out. Part 4 walks a building principal through how to roll it into policy and a 12-month plan.

But before any of that, we have to talk about what is already happening inside the building, because the policy you write next quarter will be useless if it does not start from an honest read of what your students and your faculty are actually doing today.

Part 2: What's Happening Inside the Building

Most schools writing AI policy in 2026 are writing about a problem that ended in 2024.

The cheating panic that kicked off in late 2022, when ChatGPT went public, was real. It was also temporary. By the end of 2024, AI use among American teenagers had already passed the threshold where cheating-style use was no longer the dominant use case. Today, the typical high school student in this country uses AI for at least three things in a normal week: study help, drafting outlines for assignments, and explaining concepts they don't understand. Usually in that order. The "kid hands in an AI-written essay and pretends they wrote it" scenario still happens, but it is no longer the central question.

The central question is what the other 90 percent of student AI use looks like, and whether the school is helping or fighting it.

Why Bans Do Not Work, Empirically

The instinct to ban AI on school networks is understandable. It is also indistinguishable from banning calculators in 1985. Two facts make it ineffective:

First, kids will use it anyway. They have phones. They have personal laptops. They have free AI tools available in any app store. The school network is one of maybe ten places where they could reach an AI tool, and it is the only one the school controls. Banning it on the school network does not reduce student AI use. It reduces school visibility into student AI use.

Second, the ban shifts who teaches them how to use it. Right now, the kids using AI hardest are getting their AI habits from YouTube, TikTok, Reddit, and their friends. Not from their teachers. Not from a curated curriculum. The ban does not protect students from AI. It cedes the literacy education to the worst possible curators.

The schools that have softened or reversed bans in the last 18 months are not doing it because they have given up on academic integrity. They are doing it because they realized the ban did not improve outcomes and did remove the school's seat at the table.

The Cheating Panic Is the Wrong Diagnosis

The panic narrative goes: AI lets students skip the thinking, so if we ban AI, the thinking comes back. It does not.

The students who would have skipped the thinking have always had ways to skip the thinking. They paid older siblings. They copied homework on the bus. They Googled and pasted. They bought essays on the open market. The serious cheating problem in American high schools is decades old and is not caused by AI.

What AI changed is the default expected output of an assignment. A five-paragraph essay that took a student two hours to write in 2018 can now be produced with AI in 90 seconds, and it reads about as well. The right response is not to ban the tool. The right response is to redesign the assignment so it tests the thinking, not the formatting.

This is a curriculum problem, not a discipline problem. The schools that have leaned into it have moved toward in-class writing for assessment, presentation-based projects, oral defenses of papers, and assignments that require AI use as part of the workflow with a written reflection on what the AI got wrong. Those formats test thinking. They are also harder to write and harder to grade. That is the work.

The Educator Literacy Gap Is the Bigger Gap

Here is the part most school AI policies skip: students using AI badly are a smaller problem than faculty not using AI at all.

A teacher who has never used Claude or ChatGPT cannot tell when a student's work shows signs of unedited AI output. They cannot write a defensible AI policy. They cannot answer a parent's question about it with confidence. They cannot model the discernment they want their students to develop. They cannot use AI themselves to grade faster, plan curriculum faster, or build differentiated materials faster. They are operating at a meaningful disadvantage relative to their colleagues at private schools where the entire faculty has been using these tools since 2023.

This is not a moral failure on the part of teachers. American public school teachers are among the most overworked, under-trained, and under-resourced professionals in the country. Asking them to add "learn to use AI" on top of the existing job, with no time and no budget, is asking them to fail. That is on the system, not on them.

But the system has to fix it, because the gap will not close on its own. Multiple 2024 and 2025 surveys show that fewer than half of US K-12 teachers have received any formal AI training, and among those who have, the typical exposure is measured in hours, not days or weeks. Meanwhile, the average high school senior has had between 18 months and three years of unstructured AI use. The student is more fluent than the teacher. That is the gap.

Closing it does not require sending every teacher to grad school. It requires a structured 6 to 12 hour summer training focused on three things: hands-on use of one or two AI tools, classroom-specific use cases for grading and lesson planning, and the framework for evaluating student AI use. That is doable on a typical district professional development budget. Most districts simply have not made it a priority yet.

What Teachers Are Caught Between

In rooms with teachers who actually want to talk about this, the most common thing said is some variation of: "I do not know what I am supposed to do."

The teacher knows the kids are using AI. The teacher knows the school's policy is unclear or unenforceable. The teacher knows their own district has not given them training. The teacher knows the parents will be angry whatever the school decides. The teacher is being asked to make individual judgment calls every day on something they have not been trained on, and to be the front line of a policy they did not write.

The result, predictably, is that teachers make different calls. One teacher accepts AI use. Another bans it. A third ignores it entirely. Within a single school building, the policy is whoever the kid happens to have for English that semester.

That inconsistency is the actual policy crisis in most American high schools right now. It is not that the rules are too strict or too loose. It is that there are no rules, and every teacher is a one-person committee writing them in real time.

The next two parts, the framework in Part 3 and the 12-month roadmap in Part 4, exist to fix that. The framework gives a school one shared language for thinking about AI use. The roadmap gives a principal a sequenced plan to get policy, training, and curriculum touchpoints into place over a single school year.

The hardest part of either is not the technical work. It is deciding to start.

Sidebar: What the Kids Are Actually Saying

I sat with a group of high school seniors a few nights ago. Not a focus group. Not a school-sponsored panel. Just a group of kids in their last few months of high school, in their natural setting, talking among themselves about AI. They were discussing it the way teenagers discuss anything that has been put in front of them: with conviction. The conviction was that AI is bad.

I asked questions for the better part of an hour, looking for any positive answer. There were none.

I asked whether AI was useful in any context. The answer was no. AI is for cheating. AI is for people who are too lazy to think. AI is making people stupid.

I asked whether they used AI themselves. Some said yes, reluctantly, as if confessing. The use cases were narrow: studying for tests, getting concepts re-explained when a teacher's explanation hadn't landed, checking work. None of them framed those uses as positive. They were using a tool they had been taught was wrong, in private, and not telling anyone about it.

I asked whether AI would be useful in any future career. The answer was no. AI does not help do jobs. People should quit using it. The world would be better if it disappeared.

I asked where those positions came from. The answers, almost in unison: teachers, parents, things heard at school. Not a single one of them named a personal experience that produced the position. They had been told.

I asked one of them what kind of work she planned to do after graduation. She named a career field that, according to the Bureau of Labor Statistics' 2024-2034 occupational projections, will be among the fastest-growing in America. Every system she will touch in that career, in five years, will be built around AI. She has been taught that the right answer is to refuse to learn how to use it.

This was not one outlier conversation. The same pattern shows up across this state's school system in every group of teenagers I have talked with over the last six months. The kids are not making this up. They are absorbing what the adults around them are saying, and the adults around them are mostly saying that AI is something to fear and avoid.

The result, predictably: a generation graduating into the largest tool-driven labor market shift since the personal computer, holding the position that the tool is the enemy. A position that no one in the room could defend with a personal example, because none of them had been given one. They were just reciting.

That is the gap this paper is trying to close.

Part 3: The Discernment Framework

If a school took only one thing from this paper, this would be it.

The framework has three principles. Each one is a position, and each one has a competing position that most schools are quietly defaulting to. The point of stating them out loud is to make the choice visible.

Principle 1: AI Is a Thinking Partner, Not an Output Replacement

The default school question is "did the student use AI to produce this work?" That is the wrong question. The right question is "did the student use AI to skip the thinking, or to deepen it?"

Those are very different uses, and they are easy to tell apart once you know what to look for. A student who used AI as an output replacement will hand in work that reads polished, has answers that don't surprise them, and cannot defend the reasoning if you ask follow-up questions. A student who used AI as a thinking partner will hand in work that reads in their voice, has the marks of struggle in it, and can defend every claim with where the AI helped and where they pushed back.

Both happened with AI. Only one of them is what school is supposed to teach.

The framing schools should adopt: AI is allowed in the same way calculators are allowed for math class once students have demonstrated they understand the underlying operation. Use is permitted; use without thinking is the violation. The integrity standard is not "did you use AI" but "can you defend the work as yours."

Principle 2: Educator Literacy First, Then Policy, Then Student Curriculum

Most schools attempt these in the wrong order. They write policy first, hand it to teachers who have never used AI, and expect those teachers to enforce a policy they cannot evaluate. The policy gets ignored, then the school writes a stricter policy, then that gets ignored too.

The order that works:

  1. Train the educators first (one semester, hands-on, paid time, structured curriculum).
  2. Write the full policy second, with input from the now-trained faculty who understand the tool.
  3. Integrate student curriculum touchpoints third, designed by faculty who can model the discernment they want students to develop.

Skipping step 1 is the most common failure mode in this work. It is also the one that costs the most to recover from.

Principle 3: Integrate, Don't Quarantine

The third instinct schools have is to create a single "AI literacy class" that lives in one corner of the schedule and treats the topic as a stand-alone subject. This is a mistake for two reasons.

First, AI is not a subject. It is a tool that touches every subject. A student needs to use AI well in English, in history, in science, in a college application, and in a job interview. A single AI class teaches the tool in a vacuum and leaves every other class un-integrated.

Second, the standalone-class approach signals to the rest of the faculty that AI is "the AI teacher's job." It is not. AI literacy is the same kind of literacy as reading literacy or numeracy. It belongs everywhere or it belongs nowhere.

The right model is curriculum touchpoints, three to five places per grade level where AI use is integrated into existing assignments with structured reflection. Not a new class. Not a new department. New expectations of every teacher who already exists.

The Three-Stage Rollout

The framework above maps to a three-stage rollout that fits inside one school year:

Stage 1, Policy (Weeks 1 to 8). A small working group of 4 to 6 people (principal, 2 to 3 teachers, IT lead, optionally one parent) writes interim guidance from the policy skeleton in the appendix of this paper. The interim guidance is two pages, board-ready, and explicitly labeled as a starting point that will be revised after educator literacy training.

Stage 2, Educator Literacy (Weeks 9 to 24). A 6 to 12 hour structured training, delivered as paid summer professional development or as a series of release-time afternoons during the school year. The training has three required components: hands-on use of one or two AI tools, classroom-specific use cases for grading and lesson planning, and the framework for evaluating student AI use. Outside expertise can deliver this; in 2026, many districts cannot yet deliver it internally.

Stage 3, Student Curriculum Touchpoints (Weeks 25 to 52). Each subject department identifies three to five places in their existing curriculum where AI use is integrated as a learning tool with structured reflection. A junior English class might use AI to draft an outline and then have students write a one-page reflection on what the AI got wrong. A history class might require AI-assisted research with a citation standard. A college-prep elective might walk students through prompting for college essay feedback. The touchpoints are integrated, graded, and modeled by teachers who have completed Stage 2.

Each stage builds on the prior. Trying to skip ahead, especially trying to skip Stage 2, is what produces the policy-then-ignored-then-stricter-policy cycle that wastes years.

What Not to Do

A short list of the most common failure modes in this work:

  • Buying a vendor's "AI for education" platform before having a literacy framework. The platform becomes the strategy. The strategy is now whatever the vendor sells.
  • Writing policy by a committee that does not use AI. The output is unenforceable and signals to faculty that the people writing rules do not know the tool.
  • Mandating teacher AI use without paid training time. Equivalent to mandating any other major skill update without resourcing it. Predictably fails.
  • The "soft ban" pattern, where official policy bans AI but informal practice tolerates it. Worse than either a real ban or real permission, because nobody knows what the rules are.
  • Treating student AI use as primarily a discipline problem. It is primarily a curriculum problem. Discipline approaches address symptoms; curriculum approaches address the actual gap.

If a school recognizes itself in any of those, the way out is not to add a sixth failure mode on top. It is to back up to Stage 1 and rebuild.

Part 4: The 12-Month Roadmap for a Building Principal

This is what a single school year looks like if a principal commits to running the framework end to end. Adapt the months to whatever your school's calendar actually is.

Month 1 (typically July or August before the school year starts). Form the AI Working Group. 4 to 6 people: yourself, two to three teachers (mix of departments), your IT lead, optionally a parent representative. First meeting: read this paper together. Decide if the framework fits.

Month 2. Working group drafts interim guidance using the policy skeleton in Appendix A. Two pages. Not a full board policy yet. The interim guidance covers permitted student use, faculty expectations, and the timeline for the full policy.

Month 3. Communicate the interim guidance to faculty, parents, and students. Hold one parent-night session using the talking points in Appendix B. Set the expectation that a full policy will follow once educator training is complete.

Months 4 to 6. Educator literacy training. Either build this internally if you have the in-house expertise, or bring in outside delivery. The non-negotiable: paid teacher time. Unpaid optional training does not produce uniform faculty literacy.

Month 7. Working group reconvenes, now with a literate faculty. Draft full board policy from the skeleton, incorporating what the faculty learned during training. Open a 30-day comment period for parents and the wider school community.

Month 8. Board approval. Public release of the full policy. Communication to parents, students, and faculty.

Months 9 to 12. Curriculum touchpoint integration. Each department identifies three to five places per grade level where AI use is integrated into existing assignments. Quarterly review of how the integration is going. End-of-year check on whether the framework is working, and what to revise for year two.

Year one is the hardest. By year two, the policy is baseline, the faculty literacy is baseline, and the curriculum touchpoints are part of how the school operates. The work after that is maintenance, not setup.

Appendix A: Sample Two-Page Board Policy

Use this as a starting draft. Adapt to your district's policy formatting and legal review. Every numbered section maps to a question your board chair will be asked.

1. Purpose

This policy governs the use of generative AI tools by students, faculty, and staff. The school recognizes generative AI as a category of tool that will be present in the personal, academic, and professional lives of every graduate. The school's responsibility is to teach students to use these tools with discernment, not to remove them from the educational environment.

2. Permitted Student Use

Students may use generative AI tools for the following purposes, subject to course-specific guidance from each teacher:

  • Studying for exams and quizzes
  • Generating practice problems or alternative explanations of concepts
  • Brainstorming and outlining drafts
  • Receiving feedback on student-authored work, with the student making all final decisions
  • Incorporating AI assistance into submitted work, provided the use is cited per the citation standard in Section 4

3. Restricted Student Use

Students may not use generative AI tools in the following ways:

  • To produce final submitted work that the student cannot defend through follow-up questioning
  • To bypass demonstration of skills the assignment is designed to assess
  • During in-class assessments unless explicitly permitted by the teacher

4. Citation Standard

Any student submission that incorporated AI assistance must include a brief AI use statement: which tool was used, at what stage of the work, and what the student did with the output. Failure to disclose is treated under the existing academic integrity policy.

5. Equity Provisions

The school provides free or subsidized access to a sanctioned AI tool for any student whose family does not have access at home. No student is required to provide a personal AI subscription to complete any required assignment.

6. Data Privacy and FERPA Compliance

Faculty and staff may not enter student personally identifiable information into any AI tool that does not have a signed data processing agreement with the district. The IT department maintains the list of approved tools.

7. Faculty Use Standards

Faculty are expected to complete the school's annual AI literacy training. Faculty may use AI tools for lesson planning, grading assistance, differentiated material creation, and parent communication drafts. All AI-assisted parent communication is reviewed by the faculty member before sending.

8. Parent Communication

Parents are informed of the school's AI policy at the start of each school year and at any major revision. The school holds at least one parent-night session annually on AI use in education.

9. Review and Update Cycle

This policy is reviewed annually by the AI Working Group and updated as needed. Generative AI is a fast-moving category; a static policy is not appropriate.

Appendix B: Parent Night Talking Points

Common questions principals receive at parent night, with suggested directions. These are starting points, not scripts. Adapt to your community.

"My kid is using ChatGPT and I don't know what to do. Should I take it away?"

Probably not, and the school is not asking you to. The kids who do not learn to use AI well at home or at school will be at a meaningful disadvantage in the labor market they're graduating into. The right move is to ask your kid to show you how they use it, talk through what's a good use and what's not, and check in regularly.

"How will the teacher know if my kid used AI on an assignment?"

In most cases, the teacher will not catch every use, and that is not the goal. The goal is to design assignments where AI use is either part of the workflow or does not compromise assessment of the skill being tested. The school's policy requires students to disclose AI use; failure to disclose is treated under the existing academic integrity policy.

"Aren't kids going to stop learning to write or think or reason?"

The schools that have integrated AI thoughtfully have not seen a drop in student writing or reasoning quality. The schools at risk are the ones that ban AI while continuing to rely on take-home assignments graded with no way to verify authorship. The strongest countermeasures are in-class writing, oral defenses, and AI-required-with-reflection assignments.

"What about cheating?"

Cheating predates AI by centuries. The school's academic integrity policy applies to AI in the same way it applies to any other tool a student might use to misrepresent their own work. The bigger question is whether the assignments are designed to test thinking or formatting; we have updated several of our assignment formats accordingly.

"What is the school doing for teachers?"

All faculty have completed (or will complete) a structured AI literacy training before the policy goes into full effect. We are not asking teachers to enforce a policy on a tool they have not been trained to use. That training is paid and ongoing.

About / How to Work Together

I am Jake McCluskey. I have spent 25 years in digital marketing, the last six shipping production AI systems for businesses. I run Elite AI Advantage, a US-based consultancy for organizations trying to use AI without falling for vendor hype. My wife Morgyn is a paraeducator and online educator pursuing her degree in business and marketing education. We have school-age kids. The intersection of those things is why this paper exists.

If you are a high school principal, superintendent, or curriculum director who wants help moving the framework in this paper into your building, three options:

  1. The AI Policy Generator tool at eliteaiadvantage.com/education. Eight questions, generates a defensible draft AI policy you can adapt for board review. Free. No email required to see the output.
  2. Paid pilot engagement. A 60-day rollout that delivers the full policy, a faculty literacy training session, and a parent-night talk for your school. Fixed scope, fixed price. Designed to fit a single school's professional development budget. Book a 30-minute scoping call at eliteaiadvantage.com/scope.
  3. Speaking and workshops. I speak at state and national education conferences on AI in K-12. If your association is planning a 2027 program, eliteaiadvantage.com/speaking has the topics and rates.

The paper itself is open access. No gate, no email capture, no "request the full version." Forward it, print it, hand it to your board chair, post it on your faculty share drive. It does its work by being read, not by being downloaded behind a form.

The doors are already moving. Move with them.

Sources

  • Stanford HAI, 2026 AI Index Report, Economy section: 70% of US organizations using generative AI in at least one business function (up from 33% in 2023).
  • Anthropic, Economic Index Report (2026): 49% of jobs have had at least a quarter of their daily tasks performed using AI tools; computer programmers at 74.5% exposure, customer service reps at 70.1%, education/library at 18.2%.
  • US Bureau of Labor Statistics, Employment Projections 2024-2034: computer and mathematical occupations projected to grow 10.1%; wind turbine technicians, solar PV installers, and nurse practitioners lead growth rankings.
  • Western Interstate Commission for Higher Education (WICHE), Knocking at the College Door (2024): North Dakota among 12 states with high school graduate counts projected to grow through 2034.
  • North Dakota Department of Public Instruction (DPI): Choice Ready metric rose from 71% (2023-24) to 73% (2024-25).
  • Common Sense Media (2024-2025): roughly half of American teachers report students using AI on assignments; teacher training lags student usage.
  • EdWeek Research Center, RAND American Educator Panel, and similar 2024-2025 surveys: fewer than half of US K-12 teachers have received any formal AI training; typical exposure is measured in hours.