Feb 15, 2026 · 9 min read

Trellis Blog

From Evaluation to Development: Rethinking How We Support Teachers

By Trellis Team


Most schools run two separate systems for teacher growth. One is the evaluation system: formal observations, rubric scores, compliance documentation, and summative ratings. The other is the professional development system: workshops, coaching cycles, PLCs, and growth goals. These two systems rarely talk to each other.

The evaluation says a teacher is "developing" in questioning techniques. The PD calendar offers a workshop on differentiation. The teacher's growth goal, written in August, mentions classroom management. Three systems, three different priorities, no coherent story about how this teacher is actually growing.

The best schools in the country don't make this mistake. They treat teacher evaluation as the backbone of their professional development strategy — not a separate compliance exercise that runs on a parallel track. This post makes the case for that shift and explains how to make it practical.

Table of Contents

  • The False Dichotomy
  • What Research Says About Evaluation and Growth
  • The Structural Problem
  • What an Integrated Approach Looks Like
  • How to Get Started
  • Building the Connective Tissue

The False Dichotomy

Somewhere along the way, schools decided that evaluation and professional development serve different purposes. Evaluation is about accountability — documenting performance, assigning ratings, making employment decisions. Professional development is about growth — building skills, exploring new strategies, improving practice.

This separation makes intuitive sense. Teachers understandably resist the idea that the same process used to rate them is also supposed to help them grow. "How can I be honest about my struggles if this goes in my personnel file?"

But the separation creates a bigger problem: it means the richest source of data about a teacher's practice — observation feedback — never informs the support they receive. Evaluation data sits in a compliance database. Professional development is planned from a district calendar. The two systems produce a lot of activity but very little coherent teacher growth.

The schools that get the best results from teacher development don't separate these systems. They integrate them. The observation isn't just a compliance event — it's the foundation for the next coaching conversation. The evaluation feedback doesn't just rate the teacher — it identifies the specific professional learning that will help them improve. The growth goal isn't a form teachers fill out in August and forget — it's a living thread that runs through every observation and development opportunity.

What Research Says About Evaluation and Growth

The question of whether teacher evaluation can actually improve teaching has been studied extensively. The evidence is encouraging — but with important conditions.

A RAND Corporation study examining teacher evaluation systems found that teachers who received more frequent observation-based feedback rated it as more helpful for improving their practice. Notably, 76% of teachers in the study reported making improvements to their teaching as a direct result of feedback from evaluation systems.

But here's the condition that matters: the feedback had to be specific, timely, and connected to actionable next steps. Generic rubric scores didn't move the needle. Lengthy narratives that arrived weeks after the observation didn't move it either. What worked was feedback that told teachers exactly what to work on and how — delivered soon enough that they could actually apply it.

The National Council on Teacher Quality (NCTQ) has similarly found that evaluation systems are most effective when they're explicitly linked to professional development planning. When schools use evaluation data to identify individual growth areas and then provide targeted support for those specific areas, teacher improvement is measurable and meaningful.

The implication is clear: evaluation and professional development aren't just compatible — they're most effective when they're the same system.

The Structural Problem

If the research supports integration, why are evaluation and professional development still separate in most schools? The answer is structural:

Evaluation Data Is Locked in Compliance Systems

Most evaluation platforms — Frontline, Vector, district-built databases — store observation data as compliance documentation. The data is structured around completion rates, summative scores, and rubric ratings. It's designed to answer "Did this teacher get evaluated?" not "What should this teacher work on next?"

Extracting actionable development insights from these systems usually requires manual work: reading through narrative feedback, identifying patterns, and connecting those patterns to available PD resources. Few administrators have time for this, so it doesn't happen.

Professional Development Is Planned at the Wrong Level

Most PD is planned at the school or district level based on broad priorities: "This year we're focusing on culturally responsive teaching" or "Our PD theme is data-driven instruction." These may be worthwhile topics, but they're disconnected from what individual teachers actually need based on their observation data.

A teacher whose evaluations consistently flag differentiation as a growth area sits through a PD session on assessment design because that's what was scheduled. The session is fine. It's just not what she needs right now.

Growth Goals Are Set and Forgotten

Teachers set professional growth goals at the beginning of the year, usually in a formal goal-setting document. These goals are supposed to guide the year's development. In practice, they're written in September and never referenced again until the end-of-year summative evaluation, when administrators scramble to connect the dots.

The goals aren't bad. The problem is that nothing in the evaluation or PD process systematically connects back to them. They exist in a document that nobody opens.

What an Integrated Approach Looks Like

Here's what it looks like when evaluation and professional development function as a single system:

Step 1: Observation Generates Personalized Feedback

An administrator observes a teacher and writes specific, evidence-based feedback identifying strengths and growth areas. So far, this is what most schools already do (or try to do).

Step 2: Feedback Connects to Growth Goals

The evaluation feedback explicitly references the teacher's professional growth goals. "Your goal this year is to increase student discourse. During today's observation, I noticed strong partner talk but minimal whole-class discussion. Here's where I'd focus next..."

This connection means the growth goal is a living element of every observation — not a document that gathers dust.

Step 3: Growth Areas Inform Development Plans

The growth area identified in the evaluation directly shapes the professional development the teacher receives. Instead of attending a generic workshop, the teacher gets targeted support: a peer observation of a colleague who excels in whole-class discussion, a coaching session focused on discussion protocols, or a short video study of specific techniques.

Step 4: The Next Observation References Prior Feedback

When the administrator returns for the next observation, they're watching for progress on the identified growth area. The feedback references what was discussed last time: "In our November conversation, we talked about moving from partner talk to whole-class discussion. Today, I saw you implement the fishbowl protocol we discussed, and the result was striking — eight students contributed to a sustained academic conversation."

Step 5: Longitudinal Data Tells a Growth Story

Over the course of the year, the evaluation data tells a coherent story about each teacher's development. The administrator can see the trajectory: where the teacher started, what was identified as a growth area, what support was provided, and what progress was made. This story is useful for the teacher (who sees their own growth), the administrator (who can advocate for the teacher), and the school (which can identify patterns and plan resources).
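The longitudinal record described above can be sketched as a simple data model. This is a minimal illustration, not Trellis's actual schema — the class and field names (`Observation`, `TeacherProfile`, `growth_area`, and so on) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    date: str                 # ISO date, e.g. "2025-11-04"
    growth_area: str          # e.g. "student discourse"
    support_provided: str     # e.g. "peer observation of fishbowl protocol"
    progress_note: str        # evidence of progress seen this visit

@dataclass
class TeacherProfile:
    name: str
    goal: str                                      # the August growth goal
    observations: list = field(default_factory=list)

    def growth_story(self) -> str:
        """Summarize the year's trajectory in chronological order:
        goal, then each observation's growth area, support, and progress."""
        lines = [f"Goal: {self.goal}"]
        for obs in sorted(self.observations, key=lambda o: o.date):
            lines.append(
                f"{obs.date}: {obs.growth_area} -> "
                f"{obs.support_provided} ({obs.progress_note})"
            )
        return "\n".join(lines)
```

The point of the sketch is the shape of the record: each observation carries forward the growth area and the support provided, so the year-end story assembles itself instead of being reconstructed from scattered write-ups.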

How to Get Started

Integrating evaluation and professional development doesn't require new policies or board approval. It starts with practical changes at the administrator level:

1. Reference Growth Goals in Every Observation

Pull up the teacher's growth goal before writing evaluation feedback. Include at least one sentence that connects your observation to their stated goal — whether they're making progress, facing a challenge related to it, or demonstrating a strength that connects to it.

2. End Every Evaluation with a Development Recommendation

Don't just identify growth areas — connect them to specific development opportunities. "Based on this observation, I recommend..." followed by a concrete resource: a peer to observe, an article to read, a technique to try, a coaching session to schedule. The evaluation should answer "What should I do next?" not just "How did I do?"

3. Start Each Observation by Reviewing the Last One

Spend 3 minutes before every observation reading your prior feedback for that teacher. What growth area did you identify? What next step did you suggest? Watch for evidence that the teacher acted on your recommendations, and acknowledge what you saw in your feedback — whether they did or not.

4. Use Evaluation Data to Plan PD

At the school level, aggregate the growth areas identified across evaluations. If 60% of your teachers received feedback about questioning techniques, that's your next PD topic — not because it was on the district calendar, but because your observation data says it's what your teachers need.
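The aggregation step can be as simple as counting growth-area tags across observation write-ups. A minimal sketch, assuming each record carries a single growth-area tag — the field names and the 50% threshold are illustrative, not a prescribed rule:

```python
from collections import Counter

# Hypothetical observation records, one growth-area tag each.
observations = [
    {"teacher": "A", "growth_area": "questioning techniques"},
    {"teacher": "B", "growth_area": "questioning techniques"},
    {"teacher": "C", "growth_area": "differentiation"},
    {"teacher": "D", "growth_area": "questioning techniques"},
    {"teacher": "E", "growth_area": "pacing"},
]

def next_pd_topic(records, threshold=0.5):
    """Return the most common growth area if it covers at least
    `threshold` of the records; otherwise None (no clear school-wide need)."""
    counts = Counter(r["growth_area"] for r in records)
    topic, n = counts.most_common(1)[0]
    return topic if n / len(records) >= threshold else None

print(next_pd_topic(observations))  # questioning techniques (3 of 5 = 60%)
```

The threshold matters: a topic flagged for 60% of teachers is a school-wide PD priority; a topic flagged for two teachers is a case for individual coaching, not a faculty workshop.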

5. Share the Integration with Teachers

Tell teachers what you're doing: "I'm going to connect my observation feedback to your growth goals this year, and I'm going to reference prior observations in every write-up. I want you to experience evaluation as part of your development, not separate from it." Transparency builds trust and sets expectations.

Building the Connective Tissue

The shifts above are powerful, but they're also labor-intensive. Connecting every observation to growth goals, referencing prior feedback in every write-up, and translating evaluation data into development plans requires the one resource administrators don't have: time.

This is why Trellis was built as a teacher development platform, not just an evaluation tool. Trellis serves as the connective tissue between evaluation and professional development:

  • Longitudinal teacher profiles automatically track growth areas, strengths, and goals across observations — so every evaluation builds on the last without requiring you to re-read prior feedback manually
  • Growth goal integration connects observation feedback to each teacher's stated professional development goals, making the link between evaluation and development automatic
  • Pattern recognition across observations surfaces insights like "This teacher has shown consistent growth in classroom management over three observations" or "Questioning techniques remains a growth area — consider targeted coaching support"
  • Elli AI assistant lets you query your observation data to inform PD planning: "What are the most common growth areas across my 7th-grade team?" or "Which teachers would benefit from peer observation of strong discussion facilitation?"

The result: evaluation and professional development stop being parallel systems and become an integrated cycle — observation generates personalized feedback, feedback informs targeted development, and the next observation measures growth.

As one pilot administrator described it: "I used to dread writing evaluations — they felt generic and unhelpful. Now I spend 15 minutes on observations that used to take me 2 hours, and teachers actually thank me for the feedback."

See how Trellis connects evaluation to development →