Tuesday, September 13, 2011

Changing Education Paradigms


Sir Ken Robinson is a British author and creativity expert who challenges the way we're educating our children.  The video clip above features Sir Ken Robinson's talk at the 2006 TED conference, in which he calls for a radical rethinking of our school systems.  Here are a few quotes from the talk:

  • "And my contention is, all kids have tremendous talents.  And we squander them pretty ruthlessly.  My contention is that creativity now is as important in education as literacy, and we should treat it with the same status."
  • "If you're not prepared to be wrong, you'll never come up with anything original."


In the video clip below Sir Ken Robinson examines the current factory model of the education system and the urgent need to re-examine how we teach students.



Ken Robinson is the author of the book Out of Our Minds: Learning to be Creative. You can learn more about Ken Robinson here.

What is a highly effective teacher actually worth to students?

The number one variable impacting student learning is the quality of the teacher in the classroom.  Stanford University professor Eric Hanushek set out to determine the impact of highly effective teachers upon the future economic earnings of students. It is important to note that he was not seeking solely to isolate value added, as in achievement gains, but rather the aggregate differences in lifetime earnings for students taught by highly effective public school teachers compared to less effective teachers, expressed in percentiles.

His argument is basically this: the current salary structure for public school teachers provides financial incentives based upon experience and advanced degrees, and these factors are uncoupled from any systematic influence on student achievement.  His study of the aggregate impact of effective teachers upon students' future economic earnings points toward two important variables: teacher effectiveness and class size.  The figure below illustrates this relationship in more detail.


Dr. Eric Hanushek is the Paul and Jean Hanna Senior Fellow at the Hoover Institution at Stanford University and, according to his webpage, is "a leader in the development of economic analysis of educational issues, and his work on efficiency, resource usage, and economic outcomes of schools has frequently entered into the design of both national and international educational policy."  He recently explained his ideas in a podcast on EconTalk. The entire podcast can be found online here.

Dr. Hanushek's ideas are developed in more detail in his recent paper, "The Economic Value of Higher Teacher Quality" (Urban Institute, National Center for Analysis of Longitudinal Data in Education Research, Working Paper 56, December 2010). The published version appears in Economics of Education Review, volume 30, issue 3, June 2011, pp. 466-479.

Wednesday, July 27, 2011

How many individuals does it take to change an entire school system?

The question posed by researchers at Rensselaer Polytechnic Institute is more general but still applicable to the school change process: "how can a committed set of minority opinion holders on a network reverse the majority opinion?"  The question centers on how individuals adopt new behaviors and opinions under the influence of group members.  These computer scientists examined how small groups of committed agents, who consistently proselytize the opposing opinion and are immune to influence, can change the behavior of an entire group.

The researchers identified a minimum statistical threshold of 10% required to alter the group's majority opinion.  The online article, Minority Rules: Scientists Discover Tipping Point for the Spread of Ideas, explains the research: "Scientists at Rensselaer Polytechnic Institute have found that when just 10 percent of the population holds an unshakable belief, their belief will always be adopted by the majority of the society. The scientists, who are members of the Social Cognitive Networks Academic Research Center (SCNARC) at Rensselaer, used computational and analytical methods to discover the tipping point where a minority belief becomes the majority opinion."


The image below depicts group behavior over time and shows how an initial ten percent of committed opinion holders can change the behavior of the entire group.  It is important to note that the initial ten percent must hold an unshakable belief.  This study has potential implications for entire school systems and the adoption of innovative behavior.


The abstract of the research is below, and an accompanying PDF is located here.  The entire research article, titled "Social consensus through the influence of committed minorities," is available online.

Abstract:


We show how the prevailing majority opinion in a population can be rapidly reversed by a small fraction p of randomly distributed committed agents who consistently proselytize the opposing opinion and are immune to influence. Specifically, we show that when the committed fraction grows beyond a critical value p_c ≈ 10%, there is a dramatic decrease in the time, T_c, taken for the entire population to adopt the committed opinion. In particular, for complete graphs we show that when p < p_c, T_c ∼ exp(α(p)N), while for p > p_c, T_c ∼ ln N. We conclude with simulation results for Erdős–Rényi random graphs and scale-free networks which show qualitatively similar behavior.
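The dynamics in the abstract can be illustrated with a toy simulation.  The sketch below is a minimal, hypothetical implementation of the binary agreement (two-word "naming game") model the researchers describe, on a complete graph, with committed agents who always hold opinion A and never change.  The function name, population size, and parameter choices are my own, not the paper's.

```python
import random

def time_to_consensus(n=100, p_committed=0.15, max_steps=500_000, seed=1):
    """Binary agreement model on a complete graph.

    Each agent holds 'A', 'B', or both ('AB').  The first agents are
    committed to 'A': they always speak 'A' and never change state.
    Returns the number of speaker-listener interactions until everyone
    holds only 'A', or None if consensus is not reached in max_steps.
    """
    rng = random.Random(seed)
    n_comm = int(round(n * p_committed))
    state = ['A'] * n_comm + ['B'] * (n - n_comm)

    for step in range(1, max_steps + 1):
        speaker, listener = rng.sample(range(n), 2)
        # The speaker voices one word from its inventory.
        word = state[speaker] if state[speaker] != 'AB' else rng.choice('AB')
        if word in state[listener]:
            # Success: both collapse to the spoken word
            # (committed agents are immune and stay at 'A').
            if listener >= n_comm:
                state[listener] = word
            if speaker >= n_comm:
                state[speaker] = word
        elif listener >= n_comm:
            # Failure: the listener adds the unknown word to its inventory.
            state[listener] = 'AB'
        if step % n == 0 and all(s == 'A' for s in state):
            return step
    return None
```

With the committed fraction above the reported ~10% tipping point, runs like this reach consensus quickly; well below it, consensus times on a complete graph grow exponentially with population size, matching the T_c ∼ exp(α(p)N) versus T_c ∼ ln N behavior in the abstract.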

Sunday, June 26, 2011

A Collaborative Data Driven Teaching Experiment

The case study below details how two teachers collaboratively implemented a series of common assessments and analyzed the data to alter instruction, with the stated goal of improving student achievement in a public middle school.

Assessment Methodologies

Teacher A and Teacher B decided to collaborate and analyze student achievement data based upon pre-test, formative, and summative common assessments for one unit of study.  The teachers agreed to focus these common assessments on two student performance standards:

1. Student understanding of US History content
2. Reading comprehension of primary and secondary source documents

Before any instruction was provided in the US History classes, both teachers gave students the same pre-test. The use of eInstruction's CPS clicker response pads expedited the grading of these assessments.  The pre-test clicker data was disaggregated by the two standards: questions that were content-specific and questions assessing reading comprehension skill.

The ability to quickly use student achievement data was also improved by exporting the CPS clicker data directly into a spreadsheet-compatible CSV format.  This data was then uploaded to a Google Docs spreadsheet to enable real-time collaboration and analysis between the teachers.
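As an illustration of the disaggregation step, here is a minimal Python sketch that splits exported clicker results by standard and computes a percent-correct average for each.  The column names ("student", "standard", "correct") are hypothetical; a real CPS export will have its own layout.

```python
import csv
from statistics import mean

def averages_by_standard(path):
    """Average percent correct per standard from a clicker-export CSV.

    Assumes one row per student answer, with a 'standard' column
    (e.g. 'content' or 'reading') and a 'correct' column (1 or 0).
    """
    by_standard = {}
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            by_standard.setdefault(row['standard'], []).append(int(row['correct']))
    return {std: round(100 * mean(scores), 1)
            for std, scores in by_standard.items()}
```

A summary of this shape could then be pasted into a shared spreadsheet for both teachers to review together.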

Pre-test Results

Comparing the pre-test performance of students between Teacher A and Teacher B yields a few statistical differences; however, combining the pre-test results provides a larger sample size for later post-test analysis aimed at determining effect size.  Not surprisingly, the teachers learned that students knew very little about the upcoming content of the unit: the combined average score on all content-specific questions for all students was 44%.

The data also showed that all students assessed with the pre-test had an average total score of 47%.  On average, students did marginally better on the reading skills questions, at 57%.  One statistic worth noting was the large combined standard deviation of 15 points from the average score for all students on the reading skill questions.

This statistic indicates that we have a wide range of student reading ability in our classes.  See the pre-test student achievement data summary from Google Docs below.


Formative Assessment of Reading Standard Proficiency

Next, both teachers gave a formative common assessment to all students to gauge change on the reading standard only.  It is important to note that the formative achievement data is based upon a very small sample of questions, which potentially skewed the results below.

Teacher A: 87%
Teacher B: 88%

Summative Data Results

Following the formative assessment results, Teachers A and B decided to use different teaching methodologies to address student reading comprehension.  Teacher A partnered students based upon proficiency levels determined from the pre-test and formative results.  Teacher B focused upon active reading and thematic reading skills and introduced and reviewed text reading strategies.
After altering instruction in response to the pre-test and formative assessment results, a summative assessment was given and an effect size was calculated based upon the formula below:


Teacher A:
Sample: 53
Mean: 88%
Std Dev: 11.77
Mean reading standard: 81%
Mean content standard: 91%
Effect Size = 1.681

Teacher B:
Sample: 38
Mean: 90.7%
Std Dev: 8.118
Mean reading standard: 83%
Mean content standard: 93%
Effect size = 1.513 
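One common way to compute an effect size from pre- and post-assessment scores is the standardized mean difference (Cohen's d) with a pooled standard deviation.  The sketch below shows that calculation; it is a standard formula from the research literature and not necessarily the exact one the teachers used.

```python
from statistics import mean, stdev

def cohens_d(pre_scores, post_scores):
    """Standardized mean difference between two sets of scores:
    (post mean - pre mean) divided by the pooled sample standard deviation."""
    n1, n2 = len(pre_scores), len(post_scores)
    v1, v2 = stdev(pre_scores) ** 2, stdev(post_scores) ** 2
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (mean(post_scores) - mean(pre_scores)) / pooled_sd
```

By the usual benchmarks (0.2 small, 0.5 medium, 0.8 large), the effect sizes of 1.68 and 1.51 reported above would both count as very large.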

Analysis

Comparing the student achievement data across the pre-test/formative/summative assessment cycle, it is clear that student achievement increased.  Student gains were greatest on the content standards, largely due to a lack of prior knowledge.  Student achievement on the reading standard also increased, but not at a similar rate.  Comparing the standard deviations across the assessment cycle, both teachers significantly narrowed the variability of scores while increasing performance. This data indicates that both teachers were able to raise achievement for all students and narrow the gap between proficient and non-proficient students.  In addition, the effect size data shows that the teaching methods used, while divergent, were both highly effective instructional strategies.  It is difficult to isolate causality in this case study, however, and it is important to stress the extremely small sample size.

Thursday, June 2, 2011

From MCAS to Teacher Evaluation

A recent task force report from the Massachusetts Department of Elementary and Secondary Education proposes to overhaul the teacher evaluation system across the Commonwealth.

Described in an NPR article, the task force recommendations call for teachers to be evaluated using results from two types of student assessment, one of which must be growth data from the Massachusetts Comprehensive Assessment System exam where it applies. The task force analyzed current teacher evaluation systems and provided guidelines for improvement. The entire NPR article is available online here.

The task force published the report "Building a Breakthrough Framework for Educator Evaluation in the Commonwealth." The entire report is available online; on page five it outlines the following reasons for changing the existing system:

The Task Force concludes that current educator evaluation practice in Massachusetts:

• Rarely includes student outcomes as a factor in evaluation
• Often fails to differentiate meaningfully between levels of educator effectiveness
• Fails to identify variation in effectiveness within schools and districts
• Rarely singles out excellence among educators
• Does not address issues of capacity, or “do-ability”
• Fails to calibrate ratings, allowing inconsistent practices across the state
• Fails to ensure educator input or continuous improvement
• Is often under-resourced or not taken seriously

The task force recommends that evaluators use a wide variety of other local, district, state, or commercially available standardized exams. In addition, the recommendations state that student work samples can also be used and that teachers should also be judged during classroom observations on elements such as instruction, student assessment, and curriculum measures.

These changes were recently described in a Boston Globe article published on April 17, 2011, titled "Rating Teachers on MCAS Results: Sweeping changes pushed by state education leader."

The Boston Globe article states that the new teacher evaluation system "also gives teachers who do not make the grade a year to show improvement or face termination. A fiery debate subsequently emerged over how much weight testing data should have in determining the overall effectiveness of a teacher or administrator." The entire Boston Globe article can be found online here.

Sunday, March 6, 2011

How Data and Technology Improve Schools

Pedro Noguera, a Professor of Teaching and Learning at New York University, talks about how data and technology can improve schools in the video below.  The main idea he expresses in the video is that,

  • "Data relieves teachers of the burden of guesswork, while technology gives students more control over learning." Pedro Noguera

In the video below, Dr. Noguera comments upon SmartBoard technology and the mathematics program EPGY.  EPGY, the Education Program for Gifted Youth at Stanford University, provides multimedia computer learning courses using Computer-Assisted Instruction (CAI).  More information about Pedro Noguera can be found online at bigthink.com.



A case study of a district-wide public school implementation of computer-aided instruction was conducted by Thomas Trautman of the American Education Corporation.  His research describes the impact upon student achievement following the implementation of the A+nyWhere Learning System.  The study examined the achievement of students in Illinois Public School District 159 and offered the following findings:
  • "The study showed that schools where the use of the A+nyWhere Learning System was encouraged used the software more than the neighboring schools where the use of the A+nyWhere Learning System was only made available and permitted. More importantly, the schools where the use of the A+nyWhere Learning System was encouraged and used more had greater gains in both reading and mathematics as measured by the Iowa Test of Basic Skills." (Trautman, pg 22)
The entire study, titled "Computer Aided Instruction and Academic Achievement," can be found online here.  More examples of CAI implementation can be found here.

Saturday, March 5, 2011

Getting Teacher Assessment Right

The National Education Policy Center (NEPC) released a report titled "Getting Teacher Assessment Right: What Policymakers Can Learn from Research" in December 2010.  The report was written by Patricia Hinchey of Penn State University and seeks to address three main questions concerning the evaluation of teacher performance:

1. What should teachers be assessed upon?
2. What are the purposes of assessing teachers?
3. How can teachers be assessed?


What should teachers be assessed upon? The report notes the lack of an agreed-upon definition of teacher quality but points to ongoing research in the areas of teacher quality, teacher performance, and teacher effectiveness.  Some of the research the report cites includes the following:


The chart below from the report summarizes each of the categories derived from this research.

What is the purpose of assessing teachers?
The report describes two very different functions of teacher assessment.  Summative assessment is used to make a judgement, while formative assessment is "used to gain information that can help teachers improve or expand their abilities." (Hinchey, pg. 8)  Using formative assessment of teachers to improve practice requires positive, open relationships between teachers and administrators operating in a non-threatening environment.

What tools are available to better assess teachers? The report offers a wide array of methods of educator performance appraisal, such as traditional classroom observation, use of instructional artifacts, portfolios, teacher self-reports, student surveys, value-added assessment (VAA), and peer assistance and review (PAR).

The entire report is available in PDF online here.