The Teacher Evaluation Platform enables Observers to assess the performance of public school teachers and students twice a year. Beyond facilitating observations, the platform allows school-based facilitators to set post-observation objectives, track progress, conduct surveys, and reflect on the year's outcomes.
In this Case Study I will guide you through my process and highlight my collaboration with a cross-functional team.
1. UX Design/Research
2. PM
3. Developers/Software Architecture
4. IT Specialist
The project scope covers developing and implementing the five main modules of the evaluation platform. Any additional improvements outside this scope are excluded from the project.


According to stakeholder and user interviews, 35% of Observers who used this platform quit their jobs because of it.
Training a new Observer took 167 hours, 110% more than the allotted time.
The error rate across multiple tasks reached 78% in a test with 6 participants.
The user research goal was to understand the current evaluation process, identify the key user journeys, and uncover the pain points evaluators face.
What is the workflow for the Observation process?
8 users in different roles supporting and
managing the Teacher Evaluation Platform

Teachers are identified by numbers for privacy. Observers want to see the teacher's name while still keeping it discreet at first access.
Observers want to find a teacher by filtering on cycle, cohort year, or school.
Observers want to edit, delete, or add comments they have entered.
Observers want to print the evaluation, submit a completed one, or save it for later submission.
Observers want to start an observation with less data entry.
Users want to save and exit at any time during the observation process.
RESEARCH

UX PRINCIPLES

MID-AUGUST 2022

LATE AUGUST 2022
Each session lasted 60 minutes.
Search for an Observation: Participants were asked to
search for a specific observation from the
Observations tab on the Dashboard
Filter Results: Participants were asked to find a specific observation by filtering and sorting the list
Start Observation: Participants were asked to start a new observation session

Successful Task Completion: 8/8 = 1 × 100 = 100%. All 8 testers successfully completed the "Enter Teacher's Details" task, filtering the data and sorting the list.
Time on Task: This KPI was one of the most important, as each observation runs on a timer. We asked 8 participants to "Complete 2 steps of the Observation: Teacher and Class Details". Time on task in seconds: 22, 15, 60, 24, 18, 31, 17, 45; (22 + 15 + 60 + 24 + 18 + 31 + 17 + 45) / 8 = 29. The result shows it took 29 seconds on average to complete the first 2 steps of the observation, 47 seconds faster than the old platform.
Error Rate: On the Start Observation task, 7 of 8 participants completed the task correctly: 7/8 = 0.875 × 100 = 87.5% success, i.e., a 12.5% error rate.
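The three KPI calculations above can be sketched in a few lines of Python. This is an illustrative script using the study's reported numbers, not part of the platform itself; the variable names are my own.

```python
# Usability KPIs from the test with 8 participants (illustrative sketch).
participants = 8

# Successful Task Completion ("Enter Teacher's Details"): all 8 finished.
completed = 8
success_rate = completed / participants * 100  # 100.0%

# Time on Task ("Teacher and Class Details", seconds per participant).
times = [22, 15, 60, 24, 18, 31, 17, 45]
avg_time = sum(times) / len(times)  # 232 / 8 = 29.0 seconds

# Error Rate ("Start Observation"): 7 of 8 completed the task correctly.
correct = 7
correct_rate = correct / participants * 100  # 87.5%
error_rate = 100 - correct_rate               # 12.5%

print(f"Success: {success_rate}%  Avg time: {avg_time}s  Errors: {error_rate}%")
```

Averaging over all 8 participants (rather than 7) is what yields the 29-second figure reported above.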
Finally, we updated the wireframes based on two pieces of feedback: the Stored icon and

I collaborated closely with the engineering team to confirm the implementation of the design and copy. In addition to daily stand-up meetings, we held weekly demonstrations and periodically sat down together to review and compare the implementation with the final designs.
THE RESULT
If you like what you see and want to work together, please get in touch! ☺️
aigerimuz@gmail.com