Proof Points Archives - The Hechinger Report
https://hechingerreport.org/tags/proof-points/
Covering Innovation & Inequality in Education

PROOF POINTS: Four things a mountain of school discipline records taught us
https://hechingerreport.org/proof-points-four-things-a-mountain-of-school-discipline-records-taught-us/
Mon, 15 Apr 2024 10:00:00 +0000


Editor’s note: Substituting for Jill Barshay is Sarah Butrymowicz, The Hechinger Report’s investigations editor. Jill will return next week.

Every school day, thousands of students are suspended for vague, subjective reasons, such as defiance and disorderly conduct. Our investigative team recently took a deep dive into these punishments, based on 20 states for which we were able to obtain data. Our analysis revealed more than 2.8 million suspensions and expulsions from 2017-18 to 2021-22 under these ambiguous categories. 

Here’s a closer look at some of what we found:

1. Suspensions for these categories of behavior are incredibly common. 

Our analysis found that nearly a third of suspensions and expulsions reported by states were meted out under these types of categories, which also included insubordination, disruptive behavior, and disobedience. 

In Alabama, educators have 56 categories to choose from as justification for student punishment; a full third of the punishments in our sample were assigned for one of four vague violations, which the state calls “defiance of authority,” “disorderly conduct — other,” “disruptive demonstrations,” and “disobedience — persistent, willful.” 

In North Carolina, Ohio and Oregon, about half or more of all suspensions were classified in similar categories. 

There are a few reasons why these categories are so widely used. For one, they often capture the low-level infractions that are most common in schools, such as ignoring a teacher’s direction, yelling in class or swearing. By comparison, more clear-cut and serious violations, such as those involving weapons or illegal substances, are rarer: they made up only 2 percent and 9 percent of the discipline records, respectively. 

But experts also say that terms such as disorder or defiance are so broad and subject to interpretation that they can quickly become a catchall. For instance, in Oregon, the umbrella category of disruptive behavior includes insubordination and disorderly conduct, as well as harassment, obscene behavior, minor physical altercations, and “other” rule violations.

2. Educators classify a huge range of behavior as insubordination or disruption. 

As part of our reporting, we obtained more than 7,000 discipline records from a dozen school districts across eight states to see what specific behavior was leading to suspensions labeled this way. It was a wide range, sometimes even within a single school district. Sometimes students were suspended for behavior as minor as being late to class; others, because they punched someone. And it was all called the same thing, which experts say prevents school discipline decisions from being transparent to students and the greater public. 

There were some common themes, though: behaviors like yelling at peers, throwing things in a classroom or refusing to do work. We developed a list of 15 commonly repeated behaviors and coded about 3,000 incidents by hand, marking whether they described that type of conduct. We used machine learning to analyze the rest.
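
A workflow like this can be approximated with a standard text-classification setup. The sketch below is purely illustrative: the file names, column names and model choice (TF-IDF features plus logistic regression) are assumptions, not the pipeline used for this analysis.

```python
# Minimal sketch of a hand-code-then-classify workflow like the one described above.
# Hypothetical inputs: a CSV of hand-coded incident narratives and a CSV of
# uncoded narratives; the column names are assumptions for illustration.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

coded = pd.read_csv("hand_coded_incidents.csv")    # ~3,000 hand-labeled rows
uncoded = pd.read_csv("remaining_incidents.csv")   # the rest, unlabeled

# Turn each narrative into word and word-pair frequency features.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
X = vectorizer.fit_transform(coded["narrative"])
y = coded["refused_direct_order"]                  # 1 if the coder marked this behavior

# Hold out part of the hand-coded data to check the model before trusting it.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Apply the checked model to the incidents no one coded by hand.
uncoded["refused_direct_order_pred"] = model.predict(
    vectorizer.transform(uncoded["narrative"])
)
```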

Related: Young children misbehave. Some are suspended for acting their age

In fewer than 15 percent of cases, students got in trouble for using profanity, for talking back, or for yelling at school staff. In at least 20 percent of cases, students refused a direct order, and in 6 percent they were punished for misusing technology, including being on their cell phones during class or using school computers inappropriately.

3. Inequities can be even more pronounced in these ambiguous categories. 

We know from decades of research and federal data collection that Black students are more likely to be suspended from school than their white peers. In many places, that is especially true when it comes to categories like insubordination. 

In Indiana, for example, Black students were suspended or expelled for defiance at four times the rate of white students on average. In 2021-22, eight Black students received this punishment per 100 students, compared with just two white students. In all other categories, Black students were punished at three times the rate of white students. 

Research suggests that teachers sometimes react to the same behavior differently depending on a child’s race. A 2015 study found that when teachers were presented with school records describing two instances of misbehavior by a student, they felt more troubled when they believed the student who repeatedly misbehaved was Black rather than white.

Black students “are more likely to be seen as ‘troublemakers’ when they misbehave in some way than their white peers,” said Jason Okonofua, an assistant professor at the University of California, Berkeley, and a co-author of the study. Teachers are usually making quick decisions in situations where they are removing a child from the classroom, he said, and biases tend to “rear their heads” under those circumstances.

Related: What happens when suspensions get suspended?

Similar disparities exist for students with disabilities. In all states for which we had demographic data, these students were more likely to be suspended for insubordination or disorderly conduct violations than their peers. In many states, those differences were larger than for other suspensions. 

4. Suspension rates vary widely within states. 

Further underscoring how much educator discretion exists in determining when or whether to suspend a student, individual districts report hugely different suspension rates. 

Take Georgia, for instance, which allows students to be punished for disorderly conduct and “student incivility.” In 2021-22, the 3,300-student McDuffie County School System cited these two reasons for suspensions more than 1,250 times, according to state data. That’s nearly 40 suspensions per 100 students. Similarly sized Appling County issued so few suspensions for disorderly conduct and student incivility that the numbers were redacted to protect student privacy. 

Editors’ note: The Hechinger Report’s Fazil Khan had nearly completed the data analysis and reporting for this project when he died in a fire in his apartment building. Read about the internship fund created to honor his legacy as a data reporter. USA TODAY Senior Data Editor Doug Caruso completed data visualizations for this project based on Khan’s work.

This story about school discipline data was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Proof Points newsletter.

Another way to quantify inequality inside colleges
https://hechingerreport.org/another-way-to-quantify-inequality-inside-colleges/
Mon, 17 Feb 2020 11:00:33 +0000

Credit: Chart from "Understanding Equity Gaps in College Graduation," Urban Institute, January 2020.

One way to look at inequality in America is to notice how people of different races and ethnicities get sorted into different colleges. In Virginia, for example, more than 50 percent of all black college students attend just four colleges, according to research by the Urban Institute, a Washington think tank. They’re not the state’s most selective institutions, such as the University of Virginia in Charlottesville, the College of William and Mary or Virginia Tech. Instead, large numbers of black students attend Old Dominion and Virginia Commonwealth universities.

This segregated sorting in higher education is somewhat understandable. Black students are less likely to apply and be admitted to the state’s top-tier schools. Black Virginians were more likely to have grown up in poor neighborhoods and attended elementary, middle and high schools with other poor students, less experienced teachers and fewer resources. Still, the racial concentration in just a few institutions is striking, given that Virginia has sizable black populations in many counties throughout the eastern half of the state and a wide choice of colleges and universities.

But another way to think about educational inequality is to analyze how students fare at the same institution.  The Urban Institute researchers calculated graduation rates by race and ethnicity in Virginia and Connecticut and found that white and Asian students graduate at higher rates than black and Latino students at most colleges. At some colleges in Virginia, the gap exceeds 30 percentage points. For example, 50 percent of white and Asian students obtained their four-year bachelor’s degrees within six years at the Jefferson College of Health Science in Roanoke (now part of Radford University) compared to only 18 percent of blacks and Latinos.  At all four-year universities and colleges in Virginia, the average graduation gap between whites and Asians and blacks and Latinos is 16 percentage points. Black and Latino students do graduate at higher rates than white and Asian students at two historically black Virginia universities, Hampton and Virginia State. The gaps are smaller in Connecticut and at two-year colleges but they still exist.

“Institutional leaders need to think strongly about the way their rhetoric does not align with the actual institutional policies in who gains access to and graduates from their college,” said Dominique Baker, an assistant professor of education policy at Southern Methodist University, who reviewed the Urban Institute study, in an e-mail interview.  “I mean, why does a student’s lack of financial resources predict the gap in graduation likelihood between Black and white students?”

Related: Behind the Latino college degree gap

“Understanding Equity Gaps in College Graduation” was written by Erica Blom and Tomas Monarrez at the Urban Institute and published in January 2020. The authors focused on Virginia and Connecticut because those are the two states where they were able to obtain detailed student data, but it is likely that the findings are similar nationally. The researchers looked at data from all public and private nonprofit universities in Virginia and from all public and two private institutions in Connecticut that volunteered to participate. (The Urban Institute received funding from Arnold Ventures, a philanthropic corporation, which is also among the many funders of The Hechinger Report.)

A common rebuttal to this simple calculation of graduation rates by race and ethnicity is to point out that black and Latino students generally have more obstacles in college than their white and Asian peers. They often arrive at college with lower test scores and high school grades. Students with weaker academic preparation might be more likely to fail classes and drop out of college. Black and Latino students also often encounter more financial hardship in college and drop out for economic reasons.

This is where the Urban Institute analysis gets really interesting. One might hope that if we could adjust for these factors —  academic preparation and family income —  racial graduation gaps would disappear. Blacks and whites with the same smarts and money ought to be graduating at the same rates, right? The researchers at the Urban Institute were able to dig back into college application and school records and see students’ SAT scores and high school grades. For the state of Virginia, they had access to family income, which was listed on college applications. For Connecticut, they could see if the student came from a family poor enough to qualify for federal Pell grants.

Then they mathematically adjusted the graduation rates, comparing students with the same academic preparation and family income or poverty status at each college. In most cases, the gaps shrink by more than half. But there are still differences along race and ethnicity at many institutions. At one Virginia community college, Paul D. Camp in Franklin, there is still a 20 percentage point difference between the graduation rates of whites and Asians and those of blacks and Latinos. What this means is that even among students with the same high school grades and family income, a white student is, on average, 20 percentage points more likely to get a two-year associate degree than a black student. The graduation gap at the Jefferson College of Health Science, which I referred to above, falls from above 30 percentage points to around 15 percentage points — still significant.
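
An adjustment of this kind is typically done with a regression that predicts graduation from a race indicator while holding test scores, grades and income constant, then compares the model's average predictions for each group. The sketch below is a generic illustration under assumed column names and a logistic-regression setup; it is not the Urban Institute's actual code.

```python
# Illustrative sketch of covariate-adjusted graduation gaps, not the Urban
# Institute's code. Assumed columns: graduated (0/1), black_latino (0/1),
# sat, hs_gpa, family_income, college.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_records.csv")  # hypothetical student-level file

# Raw gap at one college: the simple difference in graduation rates by group.
one_college = df[df["college"] == "Example College"]
raw_gap = (one_college.loc[one_college.black_latino == 0, "graduated"].mean()
           - one_college.loc[one_college.black_latino == 1, "graduated"].mean())

# Adjusted gap: the group difference that remains after holding SAT scores,
# high school GPA and family income constant.
model = smf.logit(
    "graduated ~ black_latino + sat + hs_gpa + family_income",
    data=one_college,
).fit()

# Predict graduation for every student twice, once with each group label,
# and compare the average predictions.
as_white_asian = one_college.assign(black_latino=0)
as_black_latino = one_college.assign(black_latino=1)
adjusted_gap = (model.predict(as_white_asian).mean()
                - model.predict(as_black_latino).mean())

print(f"Raw gap: {raw_gap:.1%}, adjusted gap: {adjusted_gap:.1%}")
```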

Blom and her co-author didn’t have enough data to adjust for other factors that could explain the graduation differences. Black and Latino families often have less savings and fewer assets to fall back on. But the researchers didn’t know students’ wealth. It’s possible that the adjusted graduation gaps would further close to zero if family wealth were factored in. Perhaps a black student with the same academic preparation, family income and wealth, does graduate college at the same rate as a white student. We don’t know.

In the research literature, this “unexplained” gap in college attainment is often explained as racial discrimination or institutional racism. But the Urban Institute report does not make such accusations. When I talked with Erica Blom, one of the authors, she applauded education leaders in Virginia and Connecticut for wanting to understand their equity gaps and opening their student data records to this kind of scrutiny. “I hope that our findings spur institutions to do some reflection on whether there is more they could do to help all students succeed,” she said.

Related: Federal data shows 3.9 million students dropped out of college with debt in 2015 and 2016

Blom pointed out that Pasadena City College in California was able to reduce graduation inequities after it scrutinized its student data and changed dozens of policies. The college’s efforts were explained at length in a story published September 2019 in Politico. For example, biology professors at the college erased a rule banning late assignments or makeup exams. That helped accommodate low-income students who have to juggle work and school.  The director of professional development formed faculty book clubs to discuss Claude Steele’s “Whistling Vivaldi,” a book about “stereotype threat,” which is a psychological theory about how minorities perform worse in environments where people like them traditionally don’t succeed.

The bar charts below show the raw and adjusted graduation gaps for institutions in Virginia and Connecticut. For Virginia, the researchers were able to reveal the names of the colleges, but not for Connecticut.

Yellow dots adjust for prior academic achievement and family income, but there are still racial differences in graduation rates. For example, 50 percent of white and Asian students obtained their four-year bachelor’s degrees within six years at the Jefferson College of Health Science in Roanoke (now part of Radford University) compared to only 18 percent of blacks and Latinos. After adjusting for students’ prior academic achievement and family income, the graduation gap falls from above 30 percentage points to around 15 percentage points — still significant. Credit: Chart from "Understanding Equity Gaps in College Graduation," Urban Institute, January 2020.
The graduation gaps tend to be smaller in Connecticut than in Virginia. In Connecticut only public universities and two private institutions participated and their names were not disclosed. Yellow dots adjust for prior academic achievement and family income, but there are still racial differences in graduation rates. Credit: Chart from "Understanding Equity Gaps in College Graduation," Urban Institute, January 2020.
The graduation gaps at Virginia’s community colleges tend to be smaller than at four-year institutions in the state but they still exist. Yellow dots adjust for prior academic achievement and family income but there are still racial differences in graduation rates. Credit: Chart from "Understanding Equity Gaps in College Graduation," Urban Institute, January 2020.
These are the racial graduation gaps between whites and Asians and blacks and Latinos at Connecticut’s community colleges. The names of the institutions were not disclosed. Credit: Chart from "Understanding Equity Gaps in College Graduation," Urban Institute, January 2020.

Addressing these graduation gaps will probably be expensive and involve more financial aid, tutoring and advising for students. “If the will or money is not present, it is difficult to see large-scale structural change occurring,” Baker said by e-mail.

We have a problem in America. Only 21 percent of blacks and 15 percent of Latinos have a bachelor’s degree or more, compared with 35 percent of whites and 54 percent of Asians.  If we want more college-educated Americans, we need to do something about it.

This story about college graduation rates by race and ethnicity was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

The science of talking in class
https://hechingerreport.org/the-science-of-talking-in-class/
Mon, 03 Feb 2020 11:00:07 +0000

Students in Shana Cunningham’s fourth-grade reading class discuss a nonfiction passage.
An analysis of 71 studies finds that peer discussions and group work boost learning. Credit: Kayleigh Skinner

One of the hallmarks of so-called “progressive” schools is freedom for students to talk to each other in class. Students aren’t required to sit quietly all day, obediently listening to a teacher lecture or silently completing an assignment on their own. The Swiss psychologist Jean Piaget, whose theories of child development inspire many teachers today, thought peer interaction was important both for a child’s social development and for learning itself. Piaget believed that children were passive recipients of knowledge when instructed by adults. But he noticed that when a child was asking questions and arguing with a fellow student, the child became an active, engaged learner. It’s that active engagement that leads to learning, Piaget theorized.

Yet when teachers open the classroom to group work and children’s chatter, peer learning can seem like a waste of time. Students often veer off-task, talking about Fortnite or Lizzo. Noise levels rise. Conflicts erupt. Are they really learning? Whether it’s productive to allot precious classroom minutes for children to talk with each other remains a debate with practical consequences.

A team of U.K. researchers collected all the studies they could find on peer interaction, in which children are either discussing or collaborating on an assignment together in small groups of two, three or four students. They found 71 studies, covering more than 7,000 children and teens. Most of the studies took place in the United States and the United Kingdom.  The results: Piaget was partly right … and wrong. Students tend to learn better by interacting with each other rather than wrestling with an assignment or a new topic on their own. But interacting with an adult one-to-one is even better than peer-to-peer interaction.

“That’s great, but how many times can a teacher work one on one with a child?” said Harriet Tenenbaum, an expert in learning at the University of Surrey and one of the study’s authors. “Group work should not be seen as a waste of time. There’s something about conversation that helps people learn. Doing problems on your own isn’t as beneficial. ”

Tenenbaum and her co-authors found that peer interaction helped children at all ages, from the youngest four-year-olds to the oldest 18-year-olds in the studies. Groups of three or four students were as productive as pairs, though most of the studies had children working in pairs. In the studies that noted gender, boys and girls benefited equally from working in pairs or groups, regardless of whether they worked together or apart. The study, “How effective is peer interaction in facilitating learning? A meta-analysis,” was published online in December 2019 in the Journal of Educational Psychology.

Students didn’t always learn more from interacting with each other than working alone in the 71 underlying studies. The ones that produced the strongest learning gains for peer interaction were those where adults gave children clear instructions for what to do during their conversations. Explicit instructions to “arrive at a consensus” or “make sure you understand your partner’s perspective” helped children learn more. Simply telling students to “work together” or “discuss” often didn’t generate learning improvements for students in the studies. That’s because students often repeat what they already believe in an unstructured conversation. The instructions force children to debate and negotiate, during which they can clear up misunderstandings and deepen their knowledge.

Related: A study finds promise in project-based learning for young low-income children

“Instructions are really important,” said Tenenbaum. In other words, the trendy direction to “turn and talk to your neighbor” isn’t sufficient.

Unfortunately, this meta-analysis didn’t shed light on many questions I have about peer dynamics and learning. Do kids still learn well from peers when one student is dominating the conversation, or when a partner is slacking off and forcing the other to do all the work? These studies didn’t document behaviors during a conversation. Can you learn as much from a bright peer as from a struggling one? Too few of the studies noted students’ prior achievement levels to compare that with how much they learned during the exercise.

I was especially disappointed that there wasn’t much insight into when a peer-to-peer discussion is most productive during a lesson. These underlying studies didn’t take place in real classrooms but in controlled laboratory conditions. Generally students came to a room and were tested to see what they already knew about a topic. Then some students were randomly assigned to work in pairs or groups on that same topic. Students assigned to comparison control groups did different things depending on the study: toiling on their own, working with an adult or even doing nothing. Then everyone was tested at the end of the exercise to see how much they learned. Learning gains were measured by comparing before and after tests.
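
In a design like this, a student's "learning gain" is simply the post-test score minus the pre-test score, and the quantity a meta-analysis typically pools is the standardized difference in average gains between conditions. The numbers in the sketch below are invented, purely to show the arithmetic.

```python
# Toy illustration of the pre/post comparison described above; the scores are
# invented and the effect-size formula is a common simplification.
import math
import statistics

# Pre- and post-test scores for students randomized to each condition.
peer_pre, peer_post = [10, 12, 9, 11, 13, 10], [15, 18, 11, 14, 19, 12]
solo_pre, solo_post = [11, 10, 12, 9, 13, 10], [14, 11, 16, 10, 17, 11]

peer_gains = [post - pre for pre, post in zip(peer_pre, peer_post)]
solo_gains = [post - pre for pre, post in zip(solo_pre, solo_post)]

# Difference in average gains, standardized by the pooled spread of gains
# (a Cohen's-d-style effect size that a meta-analysis can combine across studies).
diff = statistics.mean(peer_gains) - statistics.mean(solo_gains)
pooled_sd = math.sqrt((statistics.variance(peer_gains) + statistics.variance(solo_gains)) / 2)

print(f"Average gain, peer condition: {statistics.mean(peer_gains):.2f}")
print(f"Average gain, solo condition: {statistics.mean(solo_gains):.2f}")
print(f"Standardized effect size: {diff / pooled_sd:.2f}")
```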

Tenenbaum’s advice isn’t to get rid of traditional instruction but to use peer discussions to reinforce a lecture. She recommends that teachers continue to teach a lesson to the whole class, as usual, and then break the students up and have them work with a peer for five to 10 minutes to reinforce the concept. “I wouldn’t say to five-year-olds, ‘work with peers all day long,’” she said. Keeping the peer-to-peer sessions short might also help keep children and adolescents on topic.

But the science doesn’t yet prove this advice is sound. “The next step is to test this in real classrooms,” said Tenenbaum, suggesting that a teacher could give a half-hour lesson and then researchers could split the kids up into working in groups or alone and see who does better. “But it’s really hard for researchers to get into classrooms,” said Tenenbaum. “For schools, it’s disruptive.” Much like talking in class.

This story about peer interaction was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

Reframing ed tech to save teachers time and reduce workloads
https://hechingerreport.org/ai-in-education-reframing-ed-tech-to-save-teachers-time-and-reduce-workloads/
Mon, 27 Jan 2020 11:00:24 +0000


For much of the previous decade, advocates of education technology imagined a classroom where computer algorithms would differentiate instruction for each student, delivering just the right lessons at the right time, like a personal tutor. The evidence that students learn better this way has not been strong and, instead, we’re reading reports that technology use at school sometimes hurts student achievement.

So it was interesting to see McKinsey & Co., an elite consulting firm, reframe the argument for buying education technology away from computerized instruction to something more pedestrian: saving teachers time. A January 2020 report by the firm estimated that between 20 and 40 percent of the 50 hours a typical teacher currently works each week could be saved through existing automation technology, often enabled by artificial intelligence (AI). That works out to roughly 13 saved hours a week, hours of freedom that could help relieve teacher burnout. Those hours could also be reallocated so that teachers can do more of what teachers do best: interact with students.

“Many of the attributes that make good teachers great are the very things that AI or other technology fails to emulate: inspiring students, building positive school and class climates, resolving conflicts, creating connection and belonging, seeing the world from the perspective of individual students, and mentoring and coaching students,” the McKinsey authors wrote. “These things represent the heart of a teacher’s work and cannot — and should not — be automated.”

The McKinsey authors suggest that existing technology can be used to help teachers in several areas: planning lessons, assessing students, grading homework, giving feedback and administrative paperwork. The consultants aren’t suggesting that computers can replace any of these tasks entirely but rather reduce the amount of time teachers have to spend on them. For example, they estimate that lesson preparation could be cut from almost 11 to six hours. They calculate that weekly grading could be cut in half from six to three hours. And they say that two hours a week of administrative paperwork could be trimmed.
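
A quick back-of-the-envelope check of those figures: the three categories quantified here account for about 10 of the roughly 13 hours, so the remainder presumably comes from the report's other areas, such as assessment and feedback (that split is an inference, not a number from the report).

```python
# Back-of-the-envelope check of the time-savings figures cited above.
typical_week = 50  # hours a typical teacher works per week, per McKinsey

hours_saved = {
    "lesson preparation": 11 - 6,   # cut from almost 11 hours to 6
    "grading": 6 - 3,               # cut in half, from 6 hours to 3
    "administrative paperwork": 2,  # trimmed by about 2 hours
}

cited_total = sum(hours_saved.values())  # 10 hours from the categories quoted here
print(f"Cited categories: {cited_total} hours, or {cited_total / typical_week:.0%} of the week")
print(f"Remaining to reach ~13 hours: about {13 - cited_total} hours from other tasks")
```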

The report is provocatively titled, “How artificial intelligence will impact K-12 teachers,” though many of the recommendations are for categories of software applications that don’t necessarily use sophisticated AI algorithms at all, such as sites where teachers can find curriculum materials posted by other teachers. I was curious to learn what scientists who are involved in studying and developing AI in education thought of McKinsey’s analysis and heard a range of praise, skepticism and outright criticism.

A lot of automatic grading technology isn’t very good yet, AI experts told me. “There’s stuff out there that you can use tomorrow, but I also think there’s still a lot to be done,” said Ryan Baker, a professor at the University of Pennsylvania who studies how students learn from educational software. Computers can easily grade math computations, he said, but automated writing feedback or feedback for more complicated math or science problems still needs to get better. 

Related: Most English lessons on Teachers Pay Teachers and other sites are ‘mediocre’ or ‘not worth using,’ study finds

Another shortcoming with a lot of existing technology, Baker said, is using the data that computerized systems generate for lesson planning. If computers are grading homework or assessing what students know, the systems need to convey results for each student in a way that’s useful for teachers. “It’s a hard challenge,” he said. “There’s a lot to be done in taking sophisticated AI models that I work with and translate them into something that’s understandable to teachers and that they trust.”

Often developers of educational software create dashboards that teachers must decipher, which can add to their workloads instead of saving them time. Or the feedback for teachers is very simplistic, such as reporting that 56 percent of the class got a particular homework problem wrong. But there’s no insight into how to help each student.

Stefania Druga, a doctoral student at the University of Washington, who studies AI in education and who founded a coding project to teach students how AI models work, argues that automated grading can have the unintended consequence of breaking the feedback loop between teacher and student. Often a student can trick a robograder and get the problem right without understanding the underlying concept, she explained. Or the student learns what metrics the automatic grader looks at and “optimizes” for them. For example, a writing feedback program might emphasize the use of the word “evidence” and other synonyms and give high marks to incomprehensible essays that sprinkle those keywords throughout. Though time consuming, having teachers actually read student work directly, she says, is important for the learning process.
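
To see how easily a keyword-driven scorer can be gamed, consider the deliberately crude example below. It is a toy, not how any real grading product works, but it shows the failure mode Druga describes: text stuffed with the "right" words outscores a coherent answer that never uses them.

```python
# Deliberately crude keyword scorer, only to illustrate the failure mode
# described above; no real grading product is this simple.
KEYWORDS = {"evidence", "therefore", "argument", "furthermore", "analysis"}

def keyword_score(essay: str) -> int:
    """Count how many times scoring keywords appear in the essay."""
    words = essay.lower().replace(",", " ").replace(".", " ").split()
    return sum(word in KEYWORDS for word in words)

coherent = ("The data show that students who read more score higher, "
            "so the school should expand its library program.")
word_salad = ("Evidence evidence therefore furthermore analysis argument "
              "evidence banana therefore analysis.")

print(keyword_score(coherent))    # 0 -- a sensible essay with no keywords
print(keyword_score(word_salad))  # 9 -- incoherent text "optimized" for the metric
```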

Some of the biggest time savings, according to McKinsey, could be in using existing technology for lesson planning. But curriculum experts who reviewed popular online curriculum resources found them to be mediocre. McKinsey acknowledges that quality in ed tech is a problem. Schools and teachers are pitched a “myriad of competing solutions,” some of which “promise great things but deliver little,” the report explains. McKinsey called for “neutral arbiters” to evaluate the quality of software with “objective and rigorous performance data.” 

Related: Research shows lower test scores for fourth graders who use tablets in schools

The cynic in me was wondering if McKinsey is merely offering ed tech companies new talking points to sell their existing wares. Instead of boasting about how much they boost student performance, ed tech marketers can tout how many teacher hours they save. But I was intrigued with McKinsey’s suggestion to use AI more for the back office. The consultants suggest that automation software could fill out forms or suggest potential responses, maintain inventories of materials, equipment, and products, and automatically order replacements.

Andrew Berning, the president of the Renaissance Institute, a company that sells back-office tech services to schools, told me that McKinsey’s reframing is “spot on.” “The market has been too focused on developing AI ‘teaching machines’ and not enough in the area where we can really have an impact, such as facilitation, automation and teachers support,” he said, via e-mail. His big growth area: tools for schools to monitor their internet traffic to detect cyber threats, such as phishing, fraud and ransomware.

That’s what schools need: more technology to protect them from the harm that the technology they’ve already bought is causing.

This story about AI in education was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

The learning effect of air quality in classrooms
https://hechingerreport.org/the-learning-effect-of-air-quality-in-classrooms/
Mon, 20 Jan 2020 11:00:51 +0000

Hotter classroom temperatures and polluted air are affecting children’s brains at school, according to a new body of education research. (AP Photo/Nick Ut)

Education researchers have traditionally focused on the obvious ingredients of teaching and learning, such as instruction, curriculum, student motivation and school funding. But now researchers are scrutinizing the physical environment that surrounds students, especially the air quality and temperature in classrooms. In the past four years, scholars have produced a growing body of research, making a persuasive case that both air pollution and heat — an increasing concern with climate change — harm student achievement.

The latest is a draft working paper from a scholar at New York University who studied the use of air filters in one community in California and concluded that test scores rose a lot more in the schools that installed them than in nearby schools that didn’t. It’s a small study, and the results seem to be influenced by one school with very large test score gains. Something other than air filters might have produced the test score improvements. The study received a lot of attention and generated controversy among statisticians after Vox wrote about it earlier in January 2020. But it reminded me that there’s an important new body of research about air quality in classrooms that’s worth watching.

We’ve known for quite some time that pollution is bad for your health, but researchers are now documenting how it affects our brains. A 2016 Israeli study found that high rates of pollution on the day of an exam tamped down high school test scores. The same students scored higher on different test dates with cleaner air. Boys and low-income students were the most affected. A 2019 draft working paper on university students in London also found that exposure to indoor air pollutants was associated with lower exam scores. Again, males were more affected than females, and the mental acuity problems were triggered by particulate levels below current U.S. Environmental Protection Agency guidelines. Sefi Roth, an economist at the London School of Economics, is an author of both studies.

Claudia Persico, a policy scholar at American University, has been spearheading this line of research into pollution and student achievement in this country. Three of her papers came out in 2019. One, a draft working paper about traffic pollution, studied students from the same community and found that those who attended schools located downwind from a highway had lower test scores, more behavioral problems and more absences than students who attended upwind schools, which were more protected from toxic auto emissions. A second study found that students exposed to more air pollution in Florida had lower test scores and were more likely to get suspended from school. A third study found that children who experienced prenatal exposure to a toxic waste site had worse cognitive and behavioral outcomes than their siblings who were unaffected because of a family move or the timing of a Superfund site cleanup.

Similarly, there’s new scientific evidence for something we’ve all felt on a hot day: it’s hard to think clearly in the heat. Taking advantage of a giant data set of PSAT scores, a team of researchers examined 10 million students across the country and found that they had lower scores if they experienced hotter school days in years preceding the test, with extreme heat being particularly damaging. The study, forthcoming in American Economic Journal: Economic Policy, calculated that every Fahrenheit degree increase in the outdoor temperature over a school year reduced that year’s learning by 1 percent. (An earlier draft version was circulated by the National Bureau of Economic Research in 2018.) Given rising temperatures, those learning reductions are adding up.

Another study by Jisung Park, an economist at the University of California, Los Angeles, who was also a co-author of the PSAT study, found that just one day of ill-timed heat could have an impact on high stakes exams in New York City.  In the study, which will be published in the Journal of Human Resources, Park found that hot testing days reduced students’ performance on Regents exams, which are required for graduation in New York, and thus decreased the probability of  a student graduating from high school. For example, he found that students are 10 percent more likely to fail an exam when the temperature is 90 degrees than when it’s 72 degrees.

In the PSAT paper, Park and his colleagues make the argument that air conditioning can offset this heat problem. But they based this analysis on surveys of guidance counselors and students who answered questions about whether their classrooms felt too hot and if their schools had added air conditioners over the past decade. The researchers did not actually observe where air conditioners were running and compare test scores. It’s also possible that wealthier schools could more easily afford air conditioning and that wealthier students were more likely to score higher on the PSAT anyway. On the other hand, it’s plausible that rising heat and uneven access to air conditioning is contributing to growing achievement gaps between rich and poor that we’ve seen on several standardized tests.

Related: What 2018 PISA international rankings tell us about U.S. schools

As with the air filter study I referred to at the top of this column, it’s not yet clear that a simple remedy will make a big difference. Another unpublished study of air conditioners in Chicago public schools didn’t find an improvement in test scores in classrooms that used them, two researchers told me.

In other words, this new body of environmental research in education is still in its early stages. “We have more confidence that air pollution and heat are affecting student learning,” said Joshua Goodman, an economist at Brandeis University who is a co-author of the PSAT and heat study. “The question of what to do about it is still open. It’s not at the point where every district should be spending money on this.”

“If I were a principal or a superintendent, I wouldn’t take this research, run out and spend a lot of money on air filters and air conditioners,” Goodman added. “But it’s worth thinking about if there are particular buildings where we already have a sense that there is a problem.”

Of course, teachers will tell you where the classrooms are insufferable in May, June, August and September. Pollution can be subtler. Classrooms with open windows near highways are a place to start.

The irony, of course, is that powerful air conditioners and air filtration systems use a lot of energy and contribute further to pollution and climate change. In the end, these fixes could end up doing more damage than good. Progress isn’t easy.

This story about air quality in classrooms was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

A study on teaching critical thinking in science
https://hechingerreport.org/a-study-on-teaching-critical-thinking-in-science/
Mon, 13 Jan 2020 11:00:50 +0000

A study of 2,500 middle school students finds that the acquisition of scientific reasoning skills produces stronger learning gains. Credit: Education Images/Universal Images Group via Getty Images)

In September 2019, I wrote about a review of the research on how to teach critical thinking by University of Virginia professor Daniel Willingham. His conclusion was that generic critical thinking skills don’t translate from one subject to another, but that subject-specific critical thinking skills can be explicitly taught as you go deep into a lesson, be it history or math, because students need a lot of content knowledge in order to think critically about a subject.

A large study on teaching science to middle school students was published afterwards and it adds more nuance to this debate between critical thinking skills and content knowledge. A team of researchers found that students who learn to think like a scientist (more on what that means later) learn many more of the facts and figures in their science classes and absorb the content better.

The big wrinkle is that the researchers found that it’s not effective to start the school year with a couple of weeks of lessons on the scientific method, which many teachers do.  Instead, the successful teaching of scientific thinking starts with a content-rich lesson where students learn to ask the right questions and evaluate evidence while they are processing information on dolphins, molecules or another specific science topic.

In other words, critical thinking is necessary to absorb content better but a content-rich lesson is needed to teach critical thinking in the first place.

The study, “Scientific sensemaking supports science content learning across disciplines and instructional contexts,” was published in October 2019 in the journal Contemporary Educational Psychology. The researchers were Matthew Cannady at the University of California at Berkeley, Joo Man Chung from Georgetown University and Christian Schunn and Paulette Vincent-Ruz, both from the University of Pittsburgh.

In the study, researchers tested more than 2,500 sixth and eighth grade students multiple times during fall term, measuring their scientific reasoning abilities separately from how much content knowledge they were learning in biology or chemistry classes. The students hailed from a large Eastern city with a high proportion of black students and from several school districts in the West where there were many Latino students.

The researchers measured “thinking like a scientist” by breaking down the thinking and analyzing that scientists do into categories, such as asking questions, designing experiments, interpreting data, constructing cause-and-effect explanations, making arguments and understanding how scientific theories evolve. Much of it involves the skills you would need in order to follow the scientific method of generating, testing and modifying hypotheses. For example, the researchers were interested in how students were able to connect claims to evidence, prioritize which forms of justification to use as evidence and refute alternative claims. The table below, taken from the study, provides concrete examples of how students were assessed.

A breakdown of thinking like a scientist

On the left are the researchers’ categories of thinking scientifically. They largely adhere to aspects of the scientific method. On the right are examples of questions that the researchers used to gauge whether middle school students grasped how to think scientifically. Click here to see a larger version of this table. Credit: Table 2 from "Scientific sensemaking supports science content learning across disciplines and instructional contexts" by Matthew A. Cannady, Paulette Vincent-Ruz, Joo Man Chung and Christian D. Schunn, published October 2019 in the journal of Contemporary Educational Psychology

The researchers calculated that students who started the year with a higher degree of scientific reasoning learned considerably more content during the term than students who didn’t have as strong a sense of scientific reasoning. It was true in both biology and chemistry classes and for both sixth and eighth graders.

The researchers also paid close attention to the style of instruction in the science classrooms. Some teachers adhered closely to a textbook. Others had a more hands-on approach with lots of experiments and projects. Hands-on instruction tended to produce more growth in scientific reasoning for students; doing science is helpful for learning to think like a scientist. However, one surprising result is that the researchers didn’t see a difference in the amount of content that a student learned between the two teaching approaches.

Both styles could be done well, helping students to learn how to reason like a scientist, and both could be done poorly. For example, a hands-on teacher can merely ask students to follow step-by-step instructions in a prepackaged experiment kit without pushing students to think through why they’re doing these steps. Meanwhile, a textbook teacher can ask fabulous questions.

Those questions seem to matter the most. For example: Why did you ask that question? Why is it a good or reasonable question? Why is the study you designed the right study to answer that question? Why does the data support your idea?

Related: Scientific research on how to teach critical thinking contradicts education trends

Why strong scientific reasoning helps you learn more facts and figures is mysterious. Learning scientists have a couple of theories. One has to do with information processing. The human brain is constantly mishearing and misreading things. If we understand what should be going on and are trying to make sense of the information coming to us, we fix the errors.

The other theory is based on how human memory works.  When we try to understand new information and make connections between ideas, learning scientists say it takes less effort to memorize new facts and figures. Another approach is repetition. And certainly studying something 10 times is more effective than only five times. But understanding it once is better.

When I first became an education reporter almost a decade ago, I remember education policymakers and experts explaining to me that we don’t need to teach students as much boring content anymore because you can look up everything you need to know on Google. They believed that thinking skills are the really important thing to teach students and that’s what will last with students year after year after they’ve forgotten all about the periodic table.  But this research shows that you can’t learn how to think as an abstract skill. You have to dig into the “boring” and “forgettable” content to acquire it.

This story about teaching the scientific method was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

A ‘wildly intrusive’ way to help older college students get their degrees
https://hechingerreport.org/a-wildly-intrusive-way-to-help-older-college-students-get-their-degrees/
Mon, 06 Jan 2020 11:00:34 +0000


John Jay College of Criminal Justice is known for training New York City’s future police officers, and as Dara Byrne rose through the ranks of the college’s administration, she noticed a mystery right on her campus: Why were 2,000 seniors, with only one year left to graduate, not enrolling in the fall?

“Students in that 90-credit zone, really close to graduation, were leaving,” said Byrne, when I talked to her by phone. “I couldn’t understand why because it’s counter to what you would think. You’re almost there. You’ve got only three, four, five, eight classes left.” Many of these students intended to return. “But data showed that very few of them came back to us,” Byrne said. “Thousands of students a year, never getting a degree.”

In 2018, Byrne and her colleagues decided to play detective using predictive analytics, which tracks student data to predict who is likely to graduate and who is not. In addition to being dean of undergraduate studies, Byrne has a second job as associate provost for undergraduate retention and it is her job to make sure John Jay’s students — mostly low-income and many the first in their families to attend college — graduate. Typically, colleges work on improving graduation rates by investing more in students when they first arrive on campus. But Byrne’s observation convinced her that she needed to turn this model around and nurture older, nontraditional students at the end of their college careers.

Byrne’s colleagues presented the results of their recent experiment in using predictive analytics at the Complete College America annual conference in Phoenix, Ariz., in December 2019.

Predictive analytics helped Byrne and her colleagues figure out which seniors were most likely to leave before getting their degrees. John Jay College sent more than 10 years of student data to a nonprofit organization, DataKind, and its data scientists figured out which attributes were associated with dropping out at the very end of a college career. Twenty-four data points, such as financial aid status, plus a dozen calculations, such as falling grades, rose to the top as important indicators.

In the spring of 2018, DataKind ranked more than 1,100 rising seniors according to their risk of dropping out of school. The school especially focused on 380 students with the highest risk scores. Byrne hired three employees plus a student liaison to give these students extra academic advising and financial aid.*
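
Risk ranking of this kind is typically built by training a model on historical records of students who did and did not graduate, then scoring current seniors and sorting them by predicted risk. The sketch below is a generic illustration with assumed file and column names; it is not DataKind's actual model.

```python
# Generic illustration of dropout-risk ranking, not DataKind's actual model.
# Assumed columns: a student_id, a dropped_out outcome in the historical file,
# and predictors such as credits_earned, gpa_trend, unmet_need, pell_eligible.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

history = pd.read_csv("past_seniors.csv")    # hypothetical 10+ years of records
current = pd.read_csv("rising_seniors.csv")  # hypothetical rising seniors to score

features = ["credits_earned", "gpa_trend", "unmet_need", "pell_eligible"]
model = GradientBoostingClassifier().fit(history[features], history["dropped_out"])

# Score current seniors and rank them from highest to lowest predicted risk,
# then flag the top of the list for intensive advising and emergency aid.
current["risk_score"] = model.predict_proba(current[features])[:, 1]
outreach_list = current.sort_values("risk_score", ascending=False).head(380)
print(outreach_list[["student_id", "risk_score"]])
```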

Byrne calls this level of advising “wildly intrusive,” with advisers telephoning and forcing these high-risk students to come in for face-to-face meetings to learn about their problems. Many of the students had already spent more than four years in college and had exhausted their federal and state financial aid. Many had taken on more part-time work to pay for school.

Related: Colleges are using big data to track students in an effort to boost graduation rates, but it comes at a cost

“It’s exactly those financial pressures that make students think about not coming back,” Byrne said. “Emergency funds are critical…It’s amazing to see that $300 is what is standing between someone and a college degree.”

John Jay gave many of these students aid that they otherwise wouldn’t have qualified for to pay small, unpaid tuition bills. Even mediocre students received these “emergency funds.”

“There’s not a lot of infrastructure for students whose performance is declining over time,” Byrne said, noting that more than half of the school’s graduates go into public service, from police and firefighting to social work and youth nonprofits. “We think a 2.5 student [C+ student] is just as valuable as 3.0 [B student].”

The “wildly intrusive” advisers were encouraged to remove administrative roadblocks, sometimes bending college rules on removing and repeating courses. Byrne told me about one student who needed to retroactively withdraw from a class where she did not do well. The school allows students to withdraw from classes if they are experiencing mental health challenges but this student didn’t have any mental health documentation of the domestic violence she suffered that semester. Her adviser advocated for her and helped to gather other forms of documentation. That student’s exemption from the rules eventually led to a policy change that allows for the program’s advisers to advocate for students instead of requiring students to pay for a mental health professional to document a mental health crisis.**

After one year of Byrne’s interventions, which John Jay calls the Completion for Upper Division Students Program, 51 percent of the 380 students identified as high risk and targeted for help graduated in the 2018-19 academic year. In the previous year, before the college had started this experiment, only 47 percent of the students that the DataKind algorithm would have identified as high-risk seniors completed a college degree on their own within a year.  That’s a four-percentage-point jump in the graduation rate for this group.

The gains were primarily achieved by an even smaller subgroup of 180 students who had the very highest risks of dropping out, according to the algorithmic predictions. More than half of them were Hispanic, much higher than the 46 percent Hispanic population of the overall student body.  And they were notably older: 38 percent of this very high-risk group was at least 25 years of age. This very high-risk group saw a six-percentage-point improvement in their graduation rate, from 33 percent previously to 39 percent after the data-driven intervention. (The remaining 200 students in the intervention actually saw their graduation rate deteriorate slightly from 64 percent to 62 percent.)

Among the larger group of 1,100 students who had originally been analyzed for risk, nearly 73 percent graduated within a year. Without the program, the school had expected only 54 percent to graduate within two years.***

Those jumps helped increase the overall graduation rate at John Jay College, which has 13,000 undergraduates, by 2 percentage points, according to the college’s estimates. It calculates that nearly 38 percent of its students who started in 2015 received a bachelor’s degree within four years and almost 52 percent of the students who started in 2013 received a bachelor’s degree within six years.

Graduation rates at John Jay were already trending upward because of other things the college has been doing. But without this predictive analytics experiment, John Jay estimates that its four-year and six-year graduation rates would have grown to less than 36 percent and 50 percent, respectively. It’s one of the first times I’ve seen a college try to measure the effectiveness of predictive analytics and compare it to what might have happened otherwise.
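To keep the reported numbers straight, here is a back-of-the-envelope sketch that restates the gains above as percentage-point differences and rough student counts. It uses only the figures quoted in this story (the “nearly” and “almost” qualifiers mean the campus-wide values are approximate), and the calculation itself is plain subtraction and multiplication, not anything from John Jay’s or DataKind’s analysis.

```python
# Back-of-the-envelope arithmetic on the figures reported in this story.
# The campus-wide rates are approximate ("nearly 38 percent," "almost 52 percent").
def pp_gain(after_pct, before_pct):
    """Difference between two rates, in percentage points."""
    return after_pct - before_pct

# Targeted group of 380 high-risk seniors: 51% graduated vs. 47% the prior year
print("High-risk group:", pp_gain(51, 47), "points, roughly",
      round(pp_gain(51, 47) / 100 * 380), "extra graduates")
# The 180 students with the very highest risk scores: 39% vs. 33%
print("Very-high-risk subgroup:", pp_gain(39, 33), "points, roughly",
      round(pp_gain(39, 33) / 100 * 180), "extra graduates")
# Campus-wide rates vs. the college's own counterfactual estimates
print("Four-year rate:", pp_gain(38, 36), "points")
print("Six-year rate:", pp_gain(52, 50), "points")
```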

Of course, predictive analytics, counseling and financial aid come with price tags. Two foundations, MasterCard and Robin Hood, footed John Jay’s bill for DataKind. (Companies typically charge colleges $300,000 a year for providing predictive analytics services.) And the Price Family Foundation**** gave John Jay $800,000 to hire extra staff and give students emergency grants.

Because John Jay is part of the City University of New York system, Byrne is well aware of the financial pressures at public institutions. She argues that this kind of program can be affordable if colleges hunt for cheaper, open-source data analysis and reallocate existing advisers. Getting more seniors to graduate seems worth the effort. But ultimately, sleuthing out and solving each student’s obstacles one at a time is slow, painstaking work.
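The story doesn’t describe DataKind’s actual model, so purely as a hedged illustration of what “cheaper, open-source data analysis” can look like, here is a minimal dropout-risk ranking built with free tools. The feature names, the synthetic data and the choice of logistic regression are all assumptions made for the sketch, not details from John Jay or DataKind.

```python
# Illustrative sketch only: DataKind's real model isn't described in this story.
# Features, data and the logistic-regression choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic historical records: [credits completed, GPA, unpaid balance, age]
X_hist = rng.normal(loc=[100, 2.8, 300, 24], scale=[15, 0.5, 250, 4], size=(2000, 4))
y_hist = rng.integers(0, 2, size=2000)  # 1 = left without graduating, 0 = graduated

model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

# Score the current cohort of roughly 1,100 rising seniors and rank by predicted risk.
X_now = rng.normal(loc=[100, 2.8, 300, 24], scale=[15, 0.5, 250, 4], size=(1100, 4))
risk = model.predict_proba(X_now)[:, 1]       # predicted probability of not graduating
flagged = np.argsort(risk)[::-1][:380]        # the 380 highest-risk students get outreach
print(f"{len(flagged)} students flagged; highest risk score: {risk[flagged[0]]:.2f}")
```

In practice a college would want to validate such a model against past cohorts before acting on it; the point here is only that the ranking step itself requires nothing more exotic than free libraries.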

*Clarification: This paragraph was revised after John Jay College provided additional clarifying information to explain how the list of more than 1,100 students was generated.

**Clarification: This sentence was modified after publication to make clear that program advisers are involved in the new policy. 

***Correction: The estimate of the two-year graduation rate without the program was revised down to 54 percent after John Jay College provided additional clarifying data.

****Correction: This story previously misidentified the Price foundation that donated funds to John Jay College. The Price Family Foundation that donated funds has no connection with Sol Price, the founder of Price Club.

This story about John Jay College graduation was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

10 of the most popular stories about education research in 2019 (Dec. 30, 2019)
https://hechingerreport.org/10-of-the-most-popular-stories-about-education-research-in-2019/


For my year-end post, I’m highlighting 10 of the most well-read Proof Points stories of 2019. They are listed in the order of popularity — by the number of times readers viewed them on our website, The Hechinger Report. What stands out for me is how popular education trends, from social-emotional learning to school discipline, aren’t standing up to scientific scrutiny. The research evidence for education technology continues to be weak.

Thank you to everyone who has read and commented on my weekly stories about education data and research. I look forward to continuing this conversation with you next year. If you would like to receive an email newsletter and a notification when the column comes out each week, please click here and fill out the form. Happy New Year, and I’ll be back again on Jan. 6, 2020.

1. Scientific research on how to teach critical thinking contradicts education trends

Daniel Willingham, a professor of psychology at the University of Virginia, makes the case that one of the biggest education trends — critical thinking — isn’t taught properly in schools. An extensive review of the research on how to teach critical thinking argues for teaching students old-fashioned content knowledge instead of abstract critical thinking skills that don’t transfer between subjects and disciplines.

2. Gifted classes may not help talented students move ahead faster

Gifted education is getting renewed attention because it is one of the big ways that U.S. school systems separate children by race. This study found that students in gifted classrooms are learning the same topics and curriculum as students in general education classes. That calls into question why we’re placing bright kids in separate classrooms if they’re not getting an accelerated education. In a related column, “Is there a trade-off between racial diversity and academic excellence in gifted classrooms?,” I looked at how both the racial and ethnic composition of gifted classrooms and student achievement levels might change if we picked the top students in each neighborhood.

3. Research scholars to air problems with using ‘grit’ at school

One of the more popular concepts of the past decade — Angela Duckworth’s grit — isn’t standing up to scientific scrutiny. At least five studies have found problems with the underlying research on how grit is measured and whether more grit helps students do better at school.

4. The promise of ‘restorative justice’ starts to falter under rigorous research

Another big trend is to take a softer approach to school discipline. But research scholars are finding that it’s very hard for schools to implement restorative justice programs, and schools that have tried them aren’t seeing much of a decline in discipline rates compared with schools that discipline students as usual.

5. The dark side of education research: widespread bias

An analysis of 30 years of educational research by scholars at Johns Hopkins University found that when a maker of an educational intervention conducted its own research or paid someone to do the research, the results commonly showed greater benefits for students than when the research was independent.

6. Research shows lower test scores for fourth graders who use tablets in schools

A mounting body of evidence indicates that technology in schools isn’t boosting student achievement as its proponents had hoped it would. In fact, a study from the Reboot Foundation finds that students who use technology more often do worse in school.

7. An analysis of achievement gaps in every school in America shows that poverty is the biggest hurdle

Sean Reardon, a sociologist at Stanford University, calculated achievement gaps and racial segregation in nearly every school in the United States. He finds that poverty rates are the biggest driver of academic achievement gaps, but racial segregation matters because black and Hispanic students are concentrated in high-poverty schools. If you haven’t clicked on the data for your local school, I encourage you to do so. It’s eye-opening.

8. Evidence increases for reading on paper instead of screens

This was my favorite meta-analysis of the year. A North Dakota education professor collected every study she could find that compared reading comprehension on screens versus paper. Paper beat screens almost every time.

9. Five years after Common Core, a mysterious spike in failure rate among NY high school students

After five years of high schools teaching to the Common Core standards, a local education policy consultant noted a sudden spike in the failure rate on high school exams in New York State. The results suggest that some students are having greater trouble learning the material than they used to. One hypothesis is that low-achieving kids who were introduced to Common Core standards midway through their educational career might have been harmed in the sometimes rocky transition.

10. Weakest students more likely to take online college classes but do worse in them

A January 2019 paper documents the rise of online learning and reviews a large body of academic research on the topic. The researchers conclude that most students, especially those with weak academic backgrounds, aren’t being well served by the kinds of online courses that colleges are typically offering.

Related: 10 of the most important stories about education research in 2018

This story about the top education research stories of 2019 was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

Most English lessons on Teachers Pay Teachers and other sites are ‘mediocre’ or ‘not worth using,’ study finds (Dec. 23, 2019)
https://hechingerreport.org/most-english-lessons-on-teachers-pay-teachers-and-other-sites-are-mediocre-or-not-worth-using-study-finds/

A Thomas B. Fordham Institute study reviewed popular English lessons on Teachers Pay Teachers and two other sites and found that most weren’t good.

One of the most popular English lessons in the instructional marketplace Teachers Pay Teachers is a unit on how to teach William Shakespeare’s “Romeo and Juliet.”  It costs only $14.99 and claims to explain how to teach the tragedy of the star-crossed lovers in a fun-filled way while hitting almost 50 Common Core standards in five weeks. More than 1,200 Yelp-style reviews from teachers are posted. Many gush like this one:  “Everything is clearly laid out and takes the guesswork out of trying to feel like I have to do it all. Such a lifesaver!”

There’s just one problem. A curriculum expert who reviewed the pedagogy explained to me that the unit was “weak” and did not meet the Common Core standards the creator claimed it did. The one exception was the essay prompts, which the reviewer rated as good. But otherwise, instruction in this Shakespeare unit rarely goes deeper than a surface understanding of the text. It doesn’t push students to delve into the themes or into how characters drive the plot. Too often, students are encouraged to think about how aspects of the story relate to their own lives instead of analyzing Shakespeare’s play, the reviewer said. To understand Juliet’s line, “A rose by any other name would smell as sweet,” students are asked how names matter in our lives. “If you had a different first name, would your life have been different?”

“Playing a game of insults, that’s all well and good,” said Jennifer Dean, an educational consultant and former English teacher who has a Ph.D. in English literature and was the lead reviewer in a December 2019 study of online instructional materials. “But you have to have the excitement of digging into the text. That’s what’s missing here….And it pings from sonnets to tone to character without any logic.”

This Romeo and Juliet unit isn’t an outlier, according to the expert reviewers in the study. Five reviewers rated most of the online materials that teachers download as “mediocre” or “probably not worth using.” Specifically, they studied 328 of the most downloaded units and lessons in high school English on three websites (Teachers Pay Teachers, ReadWriteThink and Share My Lesson) and rated their quality in ten areas, such as whether the materials build students’ content knowledge and how cognitively demanding they are. They found that 64 percent of them “should not be used” or were “probably not worth using.” A majority of the materials were rated 0 or 1 on a 0-3 quality scale. Lessons were often missing explanations for how to use the materials in the classroom or how to differentiate instruction for struggling students.
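The reviewers’ underlying ratings aren’t published alongside this story, so the sketch below uses simulated scores; it is meant only to show that the 64 percent figure is a straightforward tally of materials landing in the bottom two verdicts on that 0-3 scale.

```python
# Simulated ratings only -- the Fordham reviewers' actual scores aren't reproduced here.
from collections import Counter
import random

random.seed(1)
# 328 reviewed units, each given an overall verdict on a 0-3 scale
# (0 = "should not be used" ... 3 = high quality), skewed low as the study found.
ratings = random.choices([0, 1, 2, 3], weights=[30, 34, 24, 12], k=328)

counts = Counter(ratings)
low_share = (counts[0] + counts[1]) / len(ratings)
print(f"Rated 0 or 1: {low_share:.0%} of {len(ratings)} units")  # lands near the study's ~64 percent
```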

“Virtually all teachers supplement to some degree,” said Morgan Polikoff, an associate professor at the University of Southern California who co-authored the study with Dean. “This is a huge phenomenon that is woefully understudied. This first study certainly casts doubt on the content of these lessons in high school English. There’s a lot of not very good stuff out there. And it’s going to require a lot of work to identify what’s good.”

Related: Scientific evidence on how to teach writing is slim

“We need to do more to get quality materials in teachers’ hands,” added Polikoff.

Polikoff and Dean’s study, “The Supplemental Curriculum Bazaar: Is What’s Online Any Good?” was published on Dec. 10, 2019 by the Thomas B. Fordham Institute, a conservative think tank. This research was funded by the Chan Zuckerberg Initiative and the Thomas B. Fordham Foundation. The Chan Zuckerberg Initiative is among the many funders of The Hechinger Report.

The online world of supplemental teaching materials is enormous, and the researchers started by limiting the universe to a particular subject and a narrow range of grades. Polikoff focused on high school English because textbooks are uncommon there and teachers are often on their own as they select which novels to teach. Reviewers analyzed only complete lessons or multi-lesson units. Individual worksheets or games were excluded because they cannot be expected to fulfill so many educational goals.

Some of the quality ratings are subjective — just as movie critics have different opinions. Reasonable people can disagree. Even some of the categories are debatable. Should you ding a lesson because it doesn’t include diverse authors or cover culturally diverse topics? These reviewers did. But there were enough categories that a lesson wouldn’t be classified as mediocre for shortcomings in only one or two areas.

Not all the lessons were disappointing to the expert reviewers. Dean pointed me to another Romeo and Juliet lesson on Share My Lesson that wasn’t as “snazzy and beautiful” as the Teachers Pay Teachers one described above but was instructionally complete, she said. “Here you have themes, conflicts, figurative language,” Dean said. “It’s progressing in a thoughtful, logical way, lesson to lesson.” The unit begins with a lesson on the Hatfields and the McCoys so that students can see parallels with the Capulets and the Montagues in later weeks. “This amazing lesson is free,” Dean said. “The other amazing thing is that an amazing lesson is rare.”

Teachers Pay Teachers is one of the most popular sites for all kinds of materials. More than 80 percent of U.S. teachers use it, according to the company. (According to a 2017 federal survey of teachers, more than half — millions of teachers — were using materials from Teachers Pay Teachers every week.) But the other two giants where teachers seek materials, Pinterest and Google, were conspicuously missing from the study. The researchers excluded those sites, they explained, because they couldn’t identify which lessons are intended for high school English and they were unable to sort lessons by downloads.

Polikoff hopes to extend this analysis beyond the narrow slice of high school English to other subjects and grades. If the findings are replicated in future studies, that would call into question the effectiveness of online marketplaces for teaching materials and whether we can rely on crowdsourcing to spot quality. It was surprising to me that even the nonprofit site ReadWriteThink, which carefully approves who can post lessons and insists that the lessons be peer reviewed, had such uneven quality. ReadWriteThink is a joint venture between the International Literacy Association and the National Council of Teachers of English. Share My Lesson, also a nonprofit, is run by the national teachers’ union, the American Federation of Teachers, and allows any teacher with an account to post materials, just as Teachers Pay Teachers does.

Related: The ‘dirty secret’ about educational innovation

Why is the market dysfunctional? Part of the problem is an excess of choice. More than four million items are on the Teachers Pay Teachers site, according to the for-profit company. How is a teacher to sort through them and choose? The star ratings aren’t much of a guide. Polikoff told me that the average star rating of the materials he downloaded was 3.98 stars out of 4. “That really flies in the face of the idea that somehow this market is going to sort of self correct in a way that results in the best materials rising to the top,” Polikoff said.

Dean, the lead reviewer, suspects that many teachers are lured by slick marketing. “Presentations can be dazzling,” she said. “It leads teachers astray because it looks so good.”

Many teachers might also genuinely think that a lesson is aligned with Common Core standards when it isn’t. A 2018 RAND study documented how a majority of teachers give the wrong answer to questions about what the Common Core standards are asking for.

“The market is breaking down, I think, because teachers are not generally experts in creating curriculum materials or maybe even identifying good curriculum materials,” said Polikoff.  “I don’t view that as a failing of them as teachers or as humans. I view that as a failure of… their teacher education programs and their in-service professional development programs in the school districts that they work in.”

Polikoff says the burden shouldn’t be on teachers to sort through thousands of lessons. He suggests that districts and states, and even textbook publishers, could get in the business of recommending specific supplemental materials that are good.

In a written statement, Teachers Pay Teachers asserted that teachers are finding “high-quality, standards aligned materials that support effective instruction” on its site, but said it plans to revamp its ratings and review system in January 2020. “Improving our ratings and reviews system has been a major company-wide priority,” a company spokeswoman wrote me. The company will allow users to give feedback on the accuracy of the standards tagging, and it will ask new questions, such as whether the material is at the appropriate level of difficulty.

Related: 3 lessons from data on how students are actually using educational apps and software at school

Share My Lesson says it goes beyond crowdsourcing to indicate which lessons are high quality. It gives a green checkmark to lessons that are vetted by a cadre of expert teachers. Still, many popular lessons don’t have those green checks. (I reached out to ReadWriteThink, but didn’t hear back from the organization before deadline.)

As part of the study, researchers interviewed teachers who told them that the ability to download lessons online is a welcome time saver. I wondered if students are worse or better off now that teachers have access to a vast marketplace of lessons of questionable quality. In previous decades, English teachers had to spend hours developing lessons themselves. Probably some of those self-created lessons were good and some weren’t. Even though the average quality of materials online isn’t great, that doesn’t necessarily mean that they’re worse than what teachers otherwise would have taught to students. Answering this question would be a great area for future research.

This story about Teachers Pay Teachers was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.

What 2018 PISA international rankings tell us about U.S. schools (Dec. 16, 2019)
https://hechingerreport.org/what-2018-pisa-international-rankings-tell-us-about-u-s-schools/


In 1967, on the first international comparison of educational achievement in math, the United States ranked 11th out of 12 nations. Students in Germany, England, France and Japan all scored ahead of students in the U.S. The only country behind the U.S. was Sweden. No one was surprised. A Washington Post news article explained that U.S. teachers weren’t as well trained in math pedagogy and that American society didn’t value mathematical achievement as much as other countries did.

After the release of the latest 2018 rankings by the Programme for International Student Assessment, or PISA, earlier in December 2019, there was considerable hand-wringing and consternation, but the result wasn’t much different. The U.S. still ranks behind the same group of countries, with two exceptions: Israel, which has slipped below the U.S., and Sweden, which has risen above it. In math, the U.S. ranks 36th out of the 79 countries and regions that participate in the test.

It’s worth noting that the U.S. Department of Education considers the U.S. ranking to be 30th, not 36th. That’s because some of the numerically higher scores are so close to the U.S. score that the National Center for Education Statistics calculates them to be statistically equivalent. And not all of the 79 geographic entities are countries. In some cases, autonomous regions, such as Hong Kong, participate separately from their countries. The Organisation for Economic Co-operation and Development (OECD), which runs PISA, also allows partial participation for some nations. The top rank in the world is held by a group of four provinces and municipalities in China (Beijing, Shanghai, Jiangsu and Zhejiang).
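The gap between “30th” and “36th” comes down to treating score differences smaller than their sampling error as ties. As a hedged sketch of that logic (the scores and standard errors below are invented for illustration, not actual PISA values), here is the kind of two-sample comparison such rankings rely on:

```python
# Illustrative only: means and standard errors below are invented, not real PISA values.
# Rankings that respect statistical significance treat "too close to call" as a tie.
import math

def significantly_different(mean_a, se_a, mean_b, se_b, z_crit=1.96):
    """Two-sample z-test: is the gap bigger than sampling error would explain?"""
    z = (mean_a - mean_b) / math.sqrt(se_a**2 + se_b**2)
    return abs(z) > z_crit

us_mean, us_se = 480, 3.2
others = [("Country A", 483, 3.0), ("Country B", 494, 2.5), ("Country C", 481, 4.1)]

truly_ahead = [name for name, m, se in others
               if m > us_mean and significantly_different(m, se, us_mean, us_se)]
print("Statistically ahead:", truly_ahead)  # Countries A and C out-score the U.S. but count as ties
```

NCES applies significance tests of this kind across the whole table, which is how several numerically higher scores can end up counted as ties with the U.S.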

But however you count it, U.S. math performance is below the international average.

“What surprises me is how stable U.S. performance is,” said Tom Loveless, an education expert who was formerly at the Brookings Institution. “The scores have always been mediocre.”

The U.S. performs relatively better in reading: average, rather than below average. It ranks 13th out of the 79 countries and regions, according to the 2018 PISA scores in reading. As with math, U.S. performance hasn’t changed much since the first PISA tests in 2000. Today’s scores in reading and math aren’t statistically different from when PISA started testing the subjects in the early 2000s.

The latest PISA scores reinforce the results from the 2019 National Assessment of Educational Progress (NAEP) test of math and reading, which U.S. fourth and eighth graders take every two years. Those results, released in October 2019, also found that U.S. achievement hasn’t progressed over the past decade and, for low-performing students, was the same as 30 years ago. The international PISA test is taken by older students, 15-year-olds, every three years. The majority of U.S. test takers are at the beginning of their high school sophomore year.

Amid the long-term stagnation, there is an important change to note. Inequality is growing. Peggy Carr, associate commissioner of the National Center for Education Statistics (NCES), points out that both exams are showing a widening achievement gap between high- and low-performing students. One in five American 15-year-olds, 19 percent, scored so low on the PISA test that they had difficulty with basic aspects of reading, such as identifying the main ideas in a text of moderate length.

Related: U.S. now ranks near the bottom among 35 industrialized nations in math

But the inequality story is a nuanced one. Part of the inequality is between schools, with students at wealthier schools posting much higher test scores than students at schools with large numbers of disadvantaged students. But the vast majority of educational inequality in America is inside each school, according to the PISA test score report. Statisticians mathematically teased out inequality between schools versus within each school and found that, in the U.S., only 20 percent of the variation in student performance is between schools. The remaining 80 percent is inside each school.

I wanted to understand this more and talked with Miyako Ikeda, a Paris-based senior analyst in charge of PISA data analysis at the OECD. For readers who want to geek out with me, here’s the explanation. Imagine five schools, each with 10 students. Students in the first school come from the poorest families and students in the fifth school come from the wealthiest. The other three schools lie between the two extremes. If you calculate the average test score for the 10 students in each school, you would see that the average test score for each rises with wealth. In the U.S., 93 points separate the average score in the poorest schools from the wealthiest. That’s about three grade levels — the difference between 10th grade achievement and 7th grade achievement.
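For readers who want to go one step further, here is a minimal sketch of that between-school versus within-school decomposition, using made-up scores for five schools of 10 students each rather than actual PISA data. The share of total variance carried by the school averages is the “between schools” piece; everything else is variation inside each school.

```python
# Made-up scores for five schools of 10 students each -- not actual PISA data.
# Mirrors the between-school vs. within-school split described above.
import numpy as np

rng = np.random.default_rng(3)
school_means = np.array([440.0, 465.0, 490.0, 510.0, 533.0])  # poorest to wealthiest school
scores = np.concatenate([rng.normal(m, 80, size=10) for m in school_means])
schools = np.repeat(np.arange(5), 10)

grand_mean = scores.mean()
# Between-school variance: how far each school's average sits from the overall average
between = np.mean([(scores[schools == s].mean() - grand_mean) ** 2 for s in range(5)])
# Within-school variance: how spread out students are around their own school's average
within = np.mean([scores[schools == s].var() for s in range(5)])

total = between + within
print(f"Between-school share of variance: {between / total:.0%}")
print(f"Within-school share of variance:  {within / total:.0%}")
```

With a wide spread inside each school, like the 80-point standard deviation assumed here, most of the variance lands within schools, which is the pattern PISA reports for the U.S.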

[Diagram omitted. Credit: sketch by Miyako Ikeda, OECD]

But what’s interesting is that the difference in test scores between the top-performing and lowest-performing students in each school is much greater. In a simplified diagram of the U.S. that Ikeda drew for me, “A” on the right, you can see that the top-performing students in the most disadvantaged schools are scoring as well as average students in the “best” schools. Each “x” represents a student.

This is a sharp contrast with other school systems around the world. In Germany, for example, there is much less variation in each school. Student test scores are clustered closely together under each roof. But there are greater differences between schools, with the least advantaged schools scoring much lower than the wealthiest ones. Germany is closer to something like “B” in the diagram. In this case, no one in the least advantaged schools approaches the scores of the most advantaged schools.

Why the U.S. has so much inequality inside each school is up for debate. Even if family incomes are similar in each school, American schools might have more cultural diversity, with some families emphasizing the importance of education more than others. In other cases, there might be a wide range of incomes in a large high school, and student performance mirrors that wide range.

Related: U.S. education achievement slides backwards

Andreas Schleicher, director for education and skills at the OECD, hypothesizes that the common practice of “tracking,” or separating more advanced students into more challenging classes, is to blame. Other scholars have come to the same conclusion in their analyses of international test scores. If what students are learning in their classrooms is different, you’d expect the test scores to be different too.

Although the debate over interpreting the data is likely to continue, one thing seems clear. We need to rethink reform. While it is vital to fix dysfunctional schools where too few students can read well and add fractions, these PISA test results show that we also need to understand what goes wrong at our most functional and revered suburban schools where the bottom students get left behind.

This story about PISA rankings 2018 was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.
