How Can District IT Directors Vet AI Vendors Pitching Their Schools?

Jake McCluskey · Intermediate · 30 min

Every district IT director I talk to in 2026 is getting the same calendar. Two AI vendor pitches a week, sometimes three. Each one slick. Each one citing another district that piloted it. Each one asking for a 30-minute call that turns into 60 because the salesperson keeps adding use cases. By the end of the quarter, the IT director has watched 25 demos, has no clearer sense of which one is real, and has a procurement team asking which one to pilot first.

This is not a vendor problem. It is a diligence problem. Vendor pitches are designed to be slippery on the data layer because the data layer is where most AI EdTech vendors are weak. The fix is a pre-demo screen that filters out the unprepared vendors before they get on your calendar. The 75% of pitches that cannot survive a serious diligence pass should never reach the demo stage.

This guide is the screen. Twelve questions to send before any AI vendor demo. Five red flags that mean walk away on the spot. Seven contract clauses that are not optional in 2026. And the practical workflow that turns vendor evaluation from a calendar drain into a 90-minute-per-vendor diligence routine.

Why this matters for district IT directors specifically

The IT director sits at the structural choke point of every district AI decision. Curriculum picks the use case. The cabinet picks the budget. The IT director picks whether the deployment is FERPA-safe, whether the integration actually works, and whether the contract gives the district an exit when the vendor changes hands or pivots its roadmap. Three out of four AI vendor relationships go sideways in the first 18 months because of integration realities or pricing changes the IT director could have predicted at procurement.

The other thing that matters: the IT director is the one who gets called when something goes wrong. A FERPA incident lands in IT before it lands in the superintendent's office. A vendor going out of business and taking student data with them is an IT problem. A roster sync that breaks at the start of a school year and surfaces during the first board meeting is an IT problem. The diligence at the front of the procurement process is what determines how many of those phone calls happen six months later.

What AI vendor evaluation actually does

Vendor evaluation is the structured process of comparing AI EdTech vendors against a consistent rubric covering data scope, contract terms, integration, pedagogical fit, support, and exit clauses. The output is a ranked vendor list the district can defend to the school board, the cabinet, and a parent audience if any of the three asks how the choice was made.

Three things separate a real vendor evaluation from a procurement free-for-all:

  • It uses the same rubric across every vendor. No vendor gets graded on a different scale because the demo was prettier.
  • It sequences the diligence: pre-demo questionnaire first, demo second, contract redline third, reference calls fourth, decision fifth. Skipping steps is what creates the calendar problem.
  • It documents the evaluation. The rubric, the questionnaire responses, and the contract redlines live in the district's records. If a board member or a state auditor asks how the choice was made, the answer is a folder, not a memory.

Think of it as procurement diligence applied to a category where most vendors are running on the assumption that diligence will be light because districts are overwhelmed. The diligence does not have to be heavy. It has to be consistent.

Before you start

You need:

  • A standing AI vendor rubric. Build it once, reuse it for every pitch. The rubric in this guide is a starting point.
  • A standard pre-demo questionnaire you send to every vendor. Same 12 questions every time.
  • An AI vendor folder structure in your records system. One folder per vendor, with subfolders for the questionnaire response, the demo notes, the contract redline, the reference calls, and the decision memo.
  • The names and contact info of three peer IT directors at districts of similar size who have already evaluated EdTech AI vendors. Reference calls with peers cut through marketing faster than any other diligence step.
  • Buy-in from the curriculum director or assistant superintendent for instruction. If curriculum is not at the table during evaluation, the IT director gets blamed for blocking tools that curriculum wanted.

One thing to settle before you take any vendor call: the FERPA rule. We have a dedicated section on this below. It is non-negotiable. The questionnaire and the contract clauses below are calibrated to it.
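If you want to stand up the vendor folder structure from the checklist above without building it by hand each time, here is a minimal sketch. The subfolder names mirror the artifacts listed above; the root path and naming convention are assumptions you would point at your own records system.

```python
from pathlib import Path

# Subfolders mirror the evaluation artifacts named in the checklist above.
SUBFOLDERS = [
    "01-questionnaire-response",
    "02-demo-notes",
    "03-contract-redline",
    "04-reference-calls",
    "05-decision-memo",
]

def create_vendor_folder(root: str, vendor_name: str) -> Path:
    """Create one folder per vendor with the standard evaluation subfolders."""
    vendor_dir = Path(root) / vendor_name.lower().replace(" ", "-")
    for sub in SUBFOLDERS:
        (vendor_dir / sub).mkdir(parents=True, exist_ok=True)
    return vendor_dir

# Example: create_vendor_folder("ai-vendor-evaluations", "Acme Tutor AI")
```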

Step 1: The 12 pre-demo questions

The failure pattern most IT directors fall into: taking the demo first, then trying to evaluate the data layer afterward. Vendors are good at demos. Vendors are uneven on the data layer. Inverting the order saves enormous calendar time.

The 12 questions to send before any demo:

  • 1. Which foundation model do you run on? OpenAI, Anthropic, Google, Cohere, Mistral, your own. If they cannot say, the call ends here.
  • 2. What is your contractual relationship with that foundation model provider? Enterprise tier with a Data Processing Addendum, or consumer API. Consumer API is disqualifying for K-12 student data.
  • 3. Does student data leave the United States at any point in your processing pipeline? Including subprocessors. State data residency commitments matter for some districts.
  • 4. Will student data from our district be used to train your model or any third party's model? "No" should be in writing in the DPA, not just in the sales call.
  • 5. What is your data return or destruction policy at contract end? Specific timeline and documentation. "30 days with attestation" is the standard.
  • 6. What is your breach notification window and what is your incident response history? 24 to 72 hours is standard. History should be available; new vendors should disclose maturity honestly.
  • 7. Are you SOC 2 Type II attested? Or an equivalent standard. New vendors may be in progress; ask for the timeline.
  • 8. What rostering and SSO do you support? Clever, ClassLink, OneRoster, LTI 1.3. The LTI version in the answer matters.
  • 9. What SIS and LMS integrations are live in production at peer districts? PowerSchool, Infinite Campus, Schoology, Canvas. Peer reference for each.
  • 10. What is your pricing model for a pilot and for production? Per student, per school, per district, per teacher. Beware vendors who will not commit to production pricing during pilot diligence.
  • 11. Who are three peer districts of similar size who have completed at least 12 months in production with you? Reference calls. Same size matters; a 50-school district reference is not useful for a 4-school district.
  • 12. What is the exit process if we terminate the contract early? Specific clauses, specific timelines, specific data transfer commitments.

This questionnaire goes to the vendor before the demo is scheduled. Response window: one week. Vendors who respond within that window with substantive answers earn the demo. Vendors who push back on the questionnaire, drag the response, or send marketing copy instead of answers do not earn the demo.

For pilot-stage vendors who do not yet have 12 months of production references: ask for the longest production reference they have, plus the pilot references, plus a clear roadmap commitment. New vendors are not disqualifying; vague new vendors are.
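One way to keep the screen identical across vendors is to store the 12 questions as structured data and log each vendor's written answers against them. A minimal sketch, assuming a simple "no substantive answer means no demo" rule; judging whether a given answer is itself disqualifying (consumer API tier, vague pricing) still takes a human read.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    number: int
    text: str
    answer_required: bool  # mark questions where no substantive answer means no demo

@dataclass
class VendorScreen:
    vendor: str
    answers: dict[int, str] = field(default_factory=dict)  # question number -> written answer

    def earns_demo(self, questions: list[Question]) -> bool:
        """A vendor earns the demo only if every required question has a written answer."""
        for q in questions:
            answer = self.answers.get(q.number, "").strip()
            if q.answer_required and not answer:
                return False
        return True

QUESTIONS = [
    Question(1, "Which foundation model do you run on?", True),
    Question(2, "Contractual relationship with the foundation model provider?", True),
    Question(4, "Will district student data train your model or any third party's?", True),
    # ... remaining questions from the list above
]
```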

Step 2: The 5 red flags that mean walk away

Not every weak answer is a deal-breaker. Five specific patterns are.

Red flag 1: They cannot name the foundation model or describe the data flow to it. Every AI EdTech tool runs on a foundation model. If the vendor will not say which one, they are either running on a consumer API tier without proper contractual coverage, or they are obscuring something else. Walk away.

Red flag 2: The standard contract does not include a Data Processing Addendum or an equivalent. Some vendors have a DPA they offer when asked. Others have nothing prepared because they have never had to deliver one. The second category is not ready for K-12. Walk away.

Red flag 3: They cannot produce three production references at districts of comparable size. Pilot references are not the same as production references. If the vendor has been around for two years and only has pilots to show, the pilots have not converted, which is its own diligence signal. Walk away.

Red flag 4: The pricing model gets vague when you ask about year three. Vendors that lock in a low pilot price and refuse to commit to production pricing are setting up a price-hike trap. The contract clause to ask for: a multi-year price cap or a price-increase formula. If they will not provide one, the year-three price will be the price they decide. Walk away or contract for a single year only.

Red flag 5: The exit process is undefined or punitive. Standard contracts include data return, contract termination, and reasonable notice periods. Aggressive exit terms (90-day data destruction with no return option, large termination penalties, evergreen renewals with short opt-out windows) are vendors who know their product will be hard to keep. Walk away.

When in doubt: a single red flag is a hard conversation, two is a deal-breaker. Three is a vendor who will not survive 18 months in your district.
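That escalation rule can be written down so every evaluator on the team applies it the same way. A minimal sketch, assuming you record red flags as a simple tally per vendor:

```python
def red_flag_action(count: int) -> str:
    """Map the red-flag tally to the rule above."""
    if count == 0:
        return "proceed to contract redline"
    if count == 1:
        return "hard conversation with the vendor before proceeding"
    return "deal-breaker: walk away"
```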

Step 3: The 7 contract clauses that are not optional

Once a vendor has cleared the questionnaire and the demo, the contract phase is where the diligence either holds up or falls apart.

The seven clauses to require in writing:

  • 1. School official designation under FERPA. Vendor is contracted as a school official with legitimate educational interest, performing services the district would otherwise perform with its own staff.
  • 2. Data use restriction. Data is used only for the contracted educational purpose. No model training, no marketing use, no aggregation that would result in cross-district identification.
  • 3. Subprocessor disclosure. All third-party data processors (the foundation model provider, any cloud infrastructure, any analytics tools) named in writing, with FERPA-aligned terms flowing through.
  • 4. Data return or destruction. Specific timeline, specific scope, documented attestation.
  • 5. Breach notification. Maximum 72 hours, cooperation with district notification obligations, ongoing remediation commitment.
  • 6. SOC 2 Type II or equivalent. Current attestation available for review on request.
  • 7. Termination for convenience. District can terminate for convenience with reasonable notice (30 to 90 days), no termination penalty, full data return.

If any of those seven is missing from the proposed contract, the redline goes back to the vendor with those terms inserted. If the vendor pushes back on more than two of the seven, the procurement does not proceed. These terms are standard for any vendor that has been deployed in K-12 for more than 12 months. Vendors who cannot accept them are vendors who have not done the work.

For districts with state-specific student privacy statutes (California, Colorado, Connecticut, Illinois, New York, others): add the state-statute-specific clauses your district counsel requires. The seven above are the federal floor. State laws may add more.
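A simple way to track the redline across finalists is a clause checklist per vendor. This is a sketch, not a prescribed tool: the clause labels paraphrase the seven above, and the threshold of two pushed-back clauses mirrors the rule stated earlier.

```python
REQUIRED_CLAUSES = [
    "school official designation (FERPA)",
    "data use restriction (no model training, no marketing)",
    "subprocessor disclosure",
    "data return or destruction",
    "breach notification (max 72 hours)",
    "SOC 2 Type II or equivalent",
    "termination for convenience",
]

def redline_verdict(pushed_back: set[str]) -> str:
    """Apply the rule above: more than two rejected clauses ends the procurement."""
    rejected = [c for c in pushed_back if c in REQUIRED_CLAUSES]
    if len(rejected) > 2:
        return "do not proceed"
    if rejected:
        return f"return redline with {len(rejected)} clause(s) reinserted"
    return "proceed to reference calls"
```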

Step 4: The reference call that actually surfaces information

Vendor-provided references are usually customers chosen for friendliness. The trick is asking the questions that get past the friendliness layer.

The four reference call questions that work:

  • "What did you not know going in that you wish you had known?" This question opens the door to the actual issues. Reference customers are not allowed to bash vendors but they are usually willing to share lessons learned.
  • "What does your renewal conversation look like in the next 6 months?" Renewing customers are signal. Customers who are wavering or shopping are stronger signal in the other direction.
  • "How does the vendor respond when something breaks?" Support quality is the variable that determines whether year two is fine or painful. Reference customers know.
  • "Who at your district owns the relationship?" If the answer is "the IT director" you get one perspective. If the answer is "a curriculum coach who never went through IT" you have learned something about the vendor's procurement process at peer districts, which is a yellow flag.

Three reference calls per finalist vendor. Twenty minutes each. The diligence cost is one hour per finalist. The information density is higher than any demo.

Step 5: The decision memo

The last step in the evaluation is the decision memo. One page. Goes to the cabinet, the procurement team, and into the vendor folder.

What the memo includes:

  • The use case and the business problem the vendor solves.
  • The vendor finalists who passed the questionnaire and demo screens.
  • The contract terms accepted, redlined, or rejected.
  • The pricing structure for the pilot and the projection for production.
  • The reference call summary across the three peer districts.
  • The recommended decision: pilot, no pilot, conditional pilot pending policy work.
  • The success criteria for the pilot or the next decision point.

The memo is what makes the vendor evaluation defensible. If a board member or an auditor asks how the choice was made six months from now, the memo is the answer. Without it, the evaluation lives in someone's email and gets re-litigated every time a new question comes up.
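If you want the memo format identical from vendor to vendor, a small template helps. A minimal sketch; the field names follow the outline above and the exact wording is an assumption you would adapt.

```python
MEMO_TEMPLATE = """AI Vendor Decision Memo: {vendor}

Use case / business problem: {use_case}
Finalists that passed the questionnaire and demo screens: {finalists}
Contract terms accepted / redlined / rejected: {contract_summary}
Pilot pricing and production projection: {pricing}
Reference call summary (three peer districts): {references}
Recommendation: {recommendation}
Pilot success criteria / next decision point: {success_criteria}
"""

def build_decision_memo(**fields: str) -> str:
    """Fill the one-page memo; a missing field raises KeyError so nothing ships half-finished."""
    return MEMO_TEMPLATE.format(**fields)
```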

Step 6: The standing rubric and the next vendor

The last piece of the workflow is the standing rubric. The 12 questions, the 5 red flags, and the 7 contract clauses become a permanent district artifact. Every new AI vendor pitch goes through the same process. Every new pilot uses the same exit framework. The first time through is slow. The fifth time through is 90 minutes of work for a clean go-or-no-go decision.

For districts running AI policy committees: the standing rubric becomes part of the policy framework. The committee can say "vendors must clear the rubric before procurement" without re-defining the rubric every meeting. This is what scales the diligence from one IT director's process to a district-wide policy.

The IT-director-specific prompts that actually work

When IT directors use AI to draft the rubric, the questionnaire, the redlines, or the decision memo, the difference between useful AI output and generic output comes down to four prompt moves.

Specify the audience. "District CTO at a 50-school district sending this to an AI EdTech vendor" produces a different questionnaire than "school technology evaluation." The first one calibrates the formality, the diligence depth, and the contract awareness. The second one produces something close to vendor marketing.

Specify the constraint that actually matters. For a questionnaire: "12 questions or fewer, every question disqualifying if the answer is wrong." For a contract redline: "these seven clauses are non-negotiable and must appear verbatim or in stronger form." Pick the constraint that, if AI got it wrong, the artifact would not survive a real procurement.

Specify the regulatory frame. "FERPA, COPPA for under-13 use, plus our state's student privacy statute" calibrates the AI to the actual rules. "Education compliance" produces generic output. The specifics are what make the artifact usable.

Specify what stays static and what changes. The rubric is reusable across vendors. The vendor name, the use case, and the references change. Tell the AI which is which when you ask for a template.
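Put together, the four moves look something like this when you ask an AI assistant to draft the questionnaire. A sketch only; the bracketed placeholders are assumptions you would fill with your district's specifics.

```python
# The four prompt moves: audience, constraint, regulatory frame, static vs. variable.
PROMPT = """You are drafting a pre-demo diligence questionnaire for an AI EdTech vendor.

Audience: district CTO at a {district_size} district sending this to the vendor's sales team.
Constraint: 12 questions or fewer; every question is disqualifying if answered wrong.
Regulatory frame: FERPA, COPPA for under-13 use, plus the {state} student privacy statute.
Static vs. variable: the questionnaire is reusable across vendors; only the vendor name,
the use case, and the reference requests change per vendor.
"""

# Example: PROMPT.format(district_size="12-school", state="Illinois")
```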

The FERPA non-negotiables

This section is short because the rule is simple, but it is the most important section in this guide.

Do not put any of the following into a vendor's evaluation environment without a signed FERPA-aligned contract and a Data Processing Addendum already in place:

  • Real student names or identifiers, even for demo purposes
  • Real student IDs or local identifiers
  • IEP, 504, or special education service details
  • Disciplinary records or behavioral incident notes
  • Real family contact information
  • Photos of identifiable students
  • Real student work that could be linked to a child

Vendors who ask for real student data during the evaluation phase are vendors who have skipped the standard EdTech procurement playbook. The right answer during evaluation is synthetic test data, dummy student records, or a sample data set the vendor provides. Real student data flows after the contract is signed and the DPA is in place. Not before.
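For the evaluation environment itself, synthetic records are cheap to generate. A minimal sketch, assuming all you need is plausible-looking dummy students with no connection to real children; the field names are illustrative, not a required schema.

```python
import random

FIRST_NAMES = ["Avery", "Jordan", "Riley", "Sam", "Casey", "Morgan"]
LAST_NAMES = ["Testcase", "Sample", "Demo", "Placeholder"]

def synthetic_students(count: int, grade_levels: range = range(3, 9)) -> list[dict]:
    """Generate dummy student records for vendor evaluation. Nothing here maps to a real child."""
    return [
        {
            "student_id": f"TEST-{i:05d}",
            "first_name": random.choice(FIRST_NAMES),
            "last_name": random.choice(LAST_NAMES),
            "grade": random.choice(list(grade_levels)),
        }
        for i in range(count)
    ]

# Example: synthetic_students(50) -> 50 dummy records safe to hand to a vendor sandbox
```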

If your district has signed an enterprise agreement with a foundation model provider directly (Anthropic, OpenAI, Google) that includes a Data Processing Addendum, the rules can be different. Ask your general counsel what is covered. The contract terms with the foundation model provider may flow through to vendors that build on top, but only if both contracts say so.

When NOT to run the full evaluation

The full evaluation has limits. There are categories where lighter diligence is appropriate.

Skip the full evaluation for:

  • Productivity tools that do not touch student data. A scheduling tool for staff meetings, an internal documentation assistant, a meeting note-taker for cabinet sessions. These need standard IT review, not the K-12 student-data questionnaire.
  • Pure curriculum content libraries with no AI generation. A library of teacher-vetted reading materials sold by an EdTech company is a content procurement, not an AI vendor evaluation. The data scope is different.
  • Pilots inside a vendor you have already evaluated for production. If the vendor is already cleared for one use case in your district, the second use case can use the existing contract framework with a scope amendment, not a fresh evaluation.
  • Free, open-source tools that run locally. No vendor relationship, no data exfiltration, different evaluation entirely.

A simple rule: the full evaluation is the right diligence for any vendor who will touch K-12 student data, generate output for students, or integrate with the SIS. For vendors who do none of those, the standard IT procurement process applies.

The quick-start template

Here is the email template that works for sending the pre-demo questionnaire to a new AI vendor. Copy it, fill in the brackets, send.

Subject: Pre-demo diligence questions for [Vendor Name]

Hi [Salesperson],

Thanks for the outreach about [Product Name]. Before we schedule a demo, our district AI evaluation process requires written answers to the questions below. Most vendors return responses within 5 to 7 business days. Once we have the answers, we can schedule the demo with the curriculum lead and the procurement team.

[Paste the 12 questions.]

If your team has not put answers like these together for K-12 prospects yet, that is fine to say. We can send a sample DPA and a peer-district reference list as starting points. We just need the answers in writing before the demo.

Best, [Name], [Title]

That is the whole pattern. For 80% of vendor pitches, this is the first email and the last email if the vendor is not ready.

For recurring use across multiple AI categories: save the questionnaire as a district procurement template. Reading tutoring, math intervention, AI scribe for IEP meetings, instructional assistants for teachers all go through the same screen.

Bigger wins beyond the vendor screen

Once the vendor evaluation rubric is in place, the next layer of value shows up in places adjacent to procurement.

District AI policy framework. The rubric, the contract clauses, and the FERPA non-negotiables become the operational backbone of the district AI policy. The policy committee does not have to invent the procurement piece; it already exists. This shortens the policy timeline from 18 months to 6 months in most districts I have worked with.

Cabinet AI literacy. When the IT director presents the vendor evaluation memo at the cabinet, the cabinet learns how to ask the right questions about AI. By the third memo, the cabinet asks about foundation models, DPAs, and exit clauses without coaching. The IT director becomes a teacher of the cabinet, which strengthens the IT-cabinet relationship for the next 10 procurement decisions.

Peer-district network. The reference calls into peer districts compound. The IT director who has called five peer CTOs has a network they can call back when their own district is the reference. This is the informal information layer that determines which vendors actually deliver in K-12 versus which ones are marketing-heavy.

Standardized vendor folder structure. The folder per vendor, the consistent artifacts inside, and the decision memo become a research archive. Two years in, the IT director can answer the question "why did we go with this vendor" by opening a folder. That archive is the difference between an evaluation process and an evaluation memory.

The education AI consulting connection

This is one diligence process in one role. The bigger AI question for districts is structural. Districts that develop a real procurement and policy posture for AI end up able to evaluate the next 20 vendor pitches in 90 minutes each. Districts that do not develop the posture get pulled into 60 demos a year and end up either banning AI awkwardly, deploying it badly, or both.

If your district is wrestling with the broader AI question, the AI Consulting in Education page covers the full scope: where AI actually fits in K-12, the FERPA-safe vendor patterns, the contract templates, the policy framework, and what an engagement looks like when it works.

Closing

The goal is not for IT directors to become procurement gatekeepers. It is for districts to make AI vendor decisions that hold up at the 18-month mark. A clean diligence process is what makes that possible. The 12 questions, 5 red flags, and 7 contract clauses are not a moat against AI. They are the diligence that lets the right AI deployments happen and keeps the wrong ones from getting on the calendar at all.

Pick the next AI vendor pitch in your inbox. Send the 12 questions today. See what comes back in a week. The first run through the rubric will take longer than future runs. The fifth one will take 90 minutes. The compounding savings show up in calendar relief, in cleaner contracts, and in the small number of vendor relationships that actually deliver value.

If you want to talk about how AI fits into your district at the program level, the AI Consulting in Education page lays out the full picture and how an engagement works.

Frequently asked questions

Do we need a paid evaluation tool to do this work?

No. The work is contract review, data flow analysis, and reference calls. The artifacts are spreadsheets, contract redlines, and meeting notes. Some districts buy into procurement platforms like Lightspeed or LearnPlatform that maintain catalogs of vetted EdTech vendors with privacy ratings. Those help, but they do not replace the diligence on a specific vendor for your specific data scope. The catalogs are a starting filter, not a substitute for the questions in this guide. Free tools (your existing spreadsheet software, your contract review process, a checklist) handle the actual work.

Is any AI vendor truly FERPA compliant for K-12 student data?

FERPA compliance is a contractual and operational posture, not a product feature. A vendor is FERPA-aligned when their contract designates them as a school official with legitimate educational interest, restricts data use to the contracted purpose, requires data return or destruction at contract end, and provides reasonable safeguards. Plenty of EdTech vendors have those terms ready. Plenty more market themselves as FERPA-compliant without the underlying contract terms. The diligence question is not whether the vendor claims compliance. It is whether their data processing agreement says so in plain English. If the contract does not say it, the marketing claims do not matter.

Will vendor demos still feel useful after this evaluation process?

More useful, not less. The pre-demo questionnaire filters out the vendors who cannot survive a serious diligence pass, which means the demos you do take are with vendors who have already shown the data layer works. That changes the demo from a sales pitch into a working session on integration, pedagogical fit, and pilot scope. Most district IT directors who run this process tell me the demos go from one hour of sales theater to thirty minutes of real evaluation. The other thirty minutes goes back to actual work.

How do we share the evaluation rubric with our procurement and curriculum teams?

Build the rubric in a shared spreadsheet your district already uses (Google Sheets, Excel Online, your district SharePoint). One row per vendor, columns for the 12 questions and the 7 contract clauses. The procurement team filters and sorts. The curriculum team adds pedagogical and implementation notes. The IT director owns the technical and data scoring. By the time the rubric reaches the cabinet for a decision, three teams have weighed in with consistent criteria and no vendor has been judged by interface alone. This also creates a reusable template for the next AI vendor pitch, which will land within the month.
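If you want to seed that shared spreadsheet without building the columns by hand, a short script can write the starter CSV. A sketch; the column names follow the rubric described in this guide, and the file name is an assumption.

```python
import csv

COLUMNS = (
    ["vendor", "use_case"]
    + [f"q{i}" for i in range(1, 13)]       # the 12 pre-demo questions
    + [f"clause_{i}" for i in range(1, 8)]  # the 7 contract clauses
    + ["curriculum_notes", "it_score", "decision"]
)

def write_rubric_template(path: str = "ai-vendor-rubric.csv") -> None:
    """Write a one-row-per-vendor rubric template for the shared drive."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(COLUMNS)

# write_rubric_template()  # then import into Google Sheets or Excel Online
```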

What if the vendor refuses to answer the pre-demo questions?

That is the answer. A vendor who will not put data scope, contract terms, integration paths, and breach notification commitments in writing before a demo is a vendor who will not put them in writing at any point. Walk away. The five minutes of awkwardness is cheaper than the eight months of vendor relationship that ends in a data incident and a contract you cannot exit cleanly. The vendors worth your time will answer the questions in 48 to 72 hours and will appreciate that the district takes data scope seriously. The ones who push back on the questionnaire are the ones the questionnaire is designed to filter out.

Can teachers still recommend AI tools without going through this process?

They can recommend, but the district decides. The procurement and policy work has to be centralized, even if individual tool requests originate at the teacher level. The pattern that works in most districts: a simple intake form for any AI tool a teacher wants to use, IT review against the standard rubric, and either an approved-tools list or a per-request approval depending on data scope. Teachers who try to deploy AI tools outside the process are not bad actors. They are usually trying to solve a real problem in the absence of clear guidance. The fix is faster intake, not stricter policing.

I am one IT director for a small district. Is this realistic at our scale?

Yes, and arguably more realistic at small scale than at large. A small district has fewer vendors to evaluate per year and a closer relationship between IT, curriculum, and the cabinet. The 12-question pre-demo questionnaire takes one hour to send. Vendor responses come back in a week. The rubric scoring takes another hour. For 4 to 8 vendor pitches per year, the total annual time on this work is 8 to 16 hours. That is not the bottleneck. The bottleneck for small districts is usually the cabinet conversation about whether to invest in AI at all, which the rubric helps inform but does not replace.

What is the single biggest red flag in 2026 vendor pitches?

A vendor that cannot tell you which foundation model they run on or how their data flows to that foundation model. Every AI EdTech tool in 2026 runs on a foundation model from Anthropic, OpenAI, Google, or one of a small number of others. The vendor either has a contractual relationship with the foundation model provider that includes a Data Processing Addendum, or they do not. Vendors who cannot answer that question clearly are running on the consumer API of the foundation model and exposing your students' data to a tier that does not include FERPA-aligned terms. This is the single most common diligence failure right now and the easiest to screen for.