Why Teacher Evaluations Don't Have to Feel Like Compliance
By Trellis Team

You became an instructional leader to help teachers grow. You pictured yourself in classrooms, coaching conversations, and collaborative planning sessions — working alongside teachers to improve instruction and student outcomes.
Instead, you're writing the same three paragraphs of evaluation feedback you wrote last year for a different teacher. You're checking rubric boxes at 10 PM on a Sunday. You're scheduling post-observation conferences that both you and the teacher know will feel more like a formality than a conversation.
The teacher evaluation process in most American schools is structurally broken. Not because administrators don't care — but because the system was built for compliance, not coaching. Here's why it doesn't have to stay that way.
Table of Contents
- Why Evaluations Feel Like Compliance
- What Evaluation Could Feel Like Instead
- The Key Insight: Feedback Should Have Memory
- Practical Shifts You Can Make Today
- Making the Shift Systematic
Why Evaluations Feel Like Compliance
The compliance trap isn't about any single administrator or school. It's structural. Here's how the evaluation system works against its own stated goals:
Each Observation Is Treated as an Isolated Event
Most evaluation systems treat every classroom visit as if it exists in a vacuum. You observe a lesson, write feedback about that lesson, file it, and move on. The next observation starts from scratch — no reference to what you discussed last time, no connection to the teacher's growth goals, no acknowledgment that this teacher has been working on questioning techniques since September.
When observations are disconnected, feedback becomes repetitive. Teachers notice. They start treating evaluations like weather — something that happens to them periodically, with no cumulative meaning.
Time Pressure Produces Generic Output
The math is brutal. If you're responsible for 40 teachers and each formal observation write-up takes 1.5 hours, a single observation per teacher adds up to 60 hours of writing just for first-semester observations. Add walkthroughs, pre-conferences, post-conferences, and documentation — and you're looking at 200+ hours per year on the evaluation process.
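If you want to sanity-check that figure against your own caseload, here's a minimal back-of-envelope sketch. Every number except the 40-teacher caseload and the 1.5-hour write-up from the paragraph above is an assumed placeholder; swap in your own values.

```python
# Back-of-envelope estimate of annual evaluation hours.
# Only `teachers` and `hours_per_writeup` come from the article;
# every other value is an illustrative assumption.

teachers = 40                     # caseload from the article
writeups_per_teacher = 2          # assumed: one formal observation per semester
hours_per_writeup = 1.5           # from the article
hours_per_conferences = 1.0       # assumed: pre- + post-conference per cycle
walkthroughs_per_teacher = 4      # assumed informal visits per year
hours_per_walkthrough = 0.5       # assumed: visit plus quick documentation

writeup_hours = teachers * writeups_per_teacher * hours_per_writeup          # 120
conference_hours = teachers * writeups_per_teacher * hours_per_conferences   # 80
walkthrough_hours = teachers * walkthroughs_per_teacher * hours_per_walkthrough  # 80

total_hours = writeup_hours + conference_hours + walkthrough_hours
print(f"{total_hours:.0f} hours per year")  # -> 280 hours per year
```

Even with conservative assumptions, the total lands well past 200 hours — roughly five full work weeks spent on evaluation paperwork alone.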
Under that time pressure, quality suffers predictably. You default to safe, generic language. You copy your own sentences from prior evaluations and change the names. You write "continue to develop higher-order questioning strategies" for the twelfth time because writing something specific for each teacher would take time you don't have.
The irony: the part of the evaluation process that could actually improve teaching — the feedback — is the part that gets compressed the most.
Rubric-Driven Scoring Replaces Actual Coaching
Evaluation frameworks like Danielson and Marzano were designed to create a shared language for good teaching. They're genuinely useful for that purpose. But when the framework becomes the evaluation — when the goal is to assign a score on each component rather than to coach a teacher toward growth — something important gets lost.
Teachers start performing for the rubric rather than teaching authentically. Administrators start evaluating against a checklist rather than observing a classroom. The post-observation conversation becomes "here's your score and why" rather than "here's what I noticed and what you might try next."
Compliance Systems Reward Completion, Not Quality
Most districts track whether evaluations were completed, not whether they were useful. The central office dashboard shows green checkmarks for submitted evaluations and red flags for missing ones. Nobody checks whether the feedback was specific, actionable, or connected to prior observations.
This creates a rational but destructive incentive: finish the evaluation as quickly as possible. Check the box. Move on.
What Evaluation Could Feel Like Instead
Imagine this: you walk into a teacher's classroom for their third observation of the year. Before you even sit down, you already know what you discussed in your last two visits. You know the teacher set a goal in October to increase student discourse. You know that in December, you noticed improvement in partner talk but suggested working on whole-class discussion. Today, you're watching for how that progression continues.
After the observation, you sit down to write feedback. You reference the October and December observations naturally. You note the growth you've seen. You identify the next step in the teacher's development. The teacher reads the feedback and thinks: "My administrator actually knows my story."
This is what evaluation feels like when it's designed for development instead of compliance:
Connected. Every observation builds on the last. There's a through-line — a growth narrative — that gives each individual visit meaning within a larger story.
Specific. Feedback references actual moments from the lesson — quotes, timestamps, student behaviors — not generic rubric language that could apply to anyone.
Forward-looking. Every evaluation ends with a concrete next step. Not "continue to improve" but "try this specific strategy in your next lesson, and I'll stop by to see how it goes."
Efficient. Because the system tracks context and connections, administrators spend their time on professional judgment rather than reconstructing what happened three months ago.
Mutual. Teachers experience evaluations as coaching conversations, not accountability measures. They look forward to feedback because it's useful.
The Key Insight: Feedback Should Have Memory
The single biggest shift that transforms evaluations from compliance to coaching is this: feedback should remember.
Every observation should build on the last one. The growth area you identified in September should be referenced in your November feedback — either to celebrate progress or to refine the approach. The strengths you noted early in the year should reappear as anchors when the teacher faces new challenges.
When feedback has memory, teachers experience continuity. They see that their administrator is tracking their growth, not just dropping in randomly to judge a single lesson. This continuity builds trust — the foundation of every coaching relationship.
Without memory, evaluations reset to zero every time. The teacher who improved dramatically from October to January never hears about it because each evaluation is written as if the others don't exist. The growth area that persists across three observations never gets addressed systematically because nobody connects the dots.
The reason feedback doesn't have memory in most schools isn't philosophical. Everyone agrees it should. The reason is practical: maintaining longitudinal awareness of 30-50 teachers across multiple observations per year — remembering what you said, what they tried, what improved, and what didn't — is impossible to sustain without a system that does it for you.
Practical Shifts You Can Make Today
Even without changing your tools, you can start shifting your evaluation practice from compliance toward development:
1. Start Every Evaluation by Re-reading the Last One
Before writing feedback, spend 3 minutes re-reading your last write-up for this teacher. Look for one thing to reference: a growth area they were working on, a strength you want to reinforce, or a next step you suggested. Opening your feedback with "Building on our October conversation..." changes the entire tone.
2. Cut Your Growth Areas to One
If your evaluations typically list 3-4 areas for improvement, cut to one. Go deep on that single area with specific evidence, a clear explanation of why it matters, and a concrete next step. Teachers can only work on one thing at a time. Giving them four things to improve is giving them nothing.
3. Replace Rubric Language with Classroom Language
Instead of "The teacher demonstrates developing proficiency in Component 3b," write "Your questions today were mostly factual — 'What happened next?' and 'Who can tell me the answer?' Try preparing three questions before tomorrow's lesson that start with 'Why,' 'How,' or 'What would happen if.'" You can reference the framework component if needed, but lead with language a teacher can act on.
4. End with a Specific Offer
Don't just recommend a strategy — offer to support it. "Would you like me to model this in your classroom?" or "I'll stop by Thursday to see how the new approach is working" or "Ms. Rodriguez does this well — I can arrange for you to observe her." The offer transforms you from evaluator to coach.
5. Deliver Feedback Within 48 Hours
Timeliness beats polish. A rough-but-specific observation delivered the next day is more useful than a polished-but-late evaluation delivered two weeks later. If the full write-up will take time, send a brief email within 24 hours with one strength and one growth area, and follow up with the formal document.
Making the Shift Systematic
The shifts above work for individual administrators committed to changing their practice. But for schools and networks that want to transform evaluation culture at scale, individual effort isn't enough. The system itself needs to support developmental evaluation.
This is why Trellis exists. It's a teacher development platform that makes the shifts described in this article systematic and sustainable:
- Feedback with memory: Trellis maintains longitudinal teacher profiles, so every observation automatically builds on prior ones. Growth areas are tracked. Strengths are reinforced. The teacher's development story writes itself.
- Specificity by design: Trellis works from your actual observation notes — typed or audio-recorded — ensuring feedback is grounded in what you saw, not generated from generic templates.
- Time that enables quality: By reducing evaluation write-ups from 1-2 hours to about 15 minutes, Trellis gives administrators the time to invest in the coaching conversations that follow the written feedback.
- Consistency across a school or network: When every administrator uses the same system, feedback quality becomes consistent — even as principals turn over or new administrators join.
The goal isn't to remove the human from the evaluation process. It's to remove the barriers that prevent humans from doing the evaluation process well.