December 12, 2024

Linking Northern and Central NJ, Bronx, Manhattan, Westchester and CT

Why Higher SAT Scores Don’t Mean Boys Are Better at Math

When the College Board released data on the updated 2016 SAT, the results raised some hackles.

For more than 40 years, girls have consistently scored lower than boys on the math section of the SAT, a standardized assessment designed to measure reasoning skills and gauge preparedness for college. The trend is documented in an American Enterprise Institute analysis of four decades of score data.

So, is it true that boys do better? In a word: yes. But these tests do not tell the whole story.

 

A Look at the Numbers

Mean math scores showed a 31-point gap between male and female students in the 2015 cohort of test-takers, with a similar male advantage across all ethnicities. Males made up 62.3 percent of students scoring in the highest range, 700-800, and remained in the majority in the next range down: girls represented only 45 percent of students scoring between 600 and 700 on the math section. The Nation’s Report Card, which summarizes results from the National Assessment of Educational Progress (NAEP), showed a similar though notably less dramatic pattern: in 2013, 2 percent more fourth-grade boys than girls met math proficiency standards, and 1 percent more among eighth-graders. In other words, boys’ edge on standardized math tests appears well before they take the SAT.

Particularly strange alongside these math scores are essentially all the other numbers on female achievement, also featured in College Board reports. Female test-takers consistently outperform their male counterparts on verbal tasks, including a 12-point advantage in writing, and are overrepresented in AP and honors math courses (54 percent of those students were female in 2015). On average, girls had taken more math classes (52 percent of students who had taken more than four years were female), and they dominated the statistics on GPA and class rank: 59 percent of test-takers with an A+ average (97–100) were girls, a difference of 18 percentage points.

 

Narrowing the Achievement Gap

So while test results may seem to hint at some math deficiency in the female brain, the issue is much more complex. Some interpretations of the data, such as “Differences in the Gender Gap: Comparisons Across Racial/Ethnic Groups in Education and Work,” a report from the Educational Testing Service, find that the gap is slowly narrowing over time. Hansen, Guilfoy and Pillai’s “More Than Title IX” describes the post-Sputnik push for math and science instruction for all students and a steady rise in math scores across the board, with the most visible gains in girls’ scores. The authors may have good reason to be optimistic about the effects of gender equity in education; today’s 30-point difference between male and female test-takers in math is certainly an improvement over the 34-point difference observed in 1994 and the 44-point difference registered in 1972.

 

Rethinking Assessment

Even as the achievement gap between male and female students narrows, some acknowledge that it may never fully close with this particular assessment tool. Voyer and Voyer, publishing with the American Psychological Association, conducted a meta-analysis of 502 effect sizes (an effect size measures the size of the difference between two groups) drawn from 369 data samples to try to explain the discrepancy between female and male performance. They noted that girls tend to emphasize mastery over performance: young women tend to work to understand the course material fully rather than to maximize scores. Grades reflect persistence and focused effort over time in a particular social environment, while test scores provide a single snapshot of performance. In this context, it makes sense that girls consistently earn higher grades than their male counterparts, and that they have done so for as long as the data have been available, dating back to 1914.
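The effect sizes that Voyer and Voyer pooled are standardized differences between two groups’ averages. As a rough illustration of the concept only, here is a minimal sketch of one common effect-size measure, Cohen’s d, computed on invented grade samples (the numbers and group labels are made up for illustration, not drawn from the study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups (Cohen's d)."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pool the two sample variances, weighting each by its degrees of freedom.
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical grade samples, invented for illustration.
girls = [88, 91, 85, 90, 87, 92]
boys = [84, 86, 83, 88, 85, 87]
print(round(cohens_d(girls, boys), 2))  # → 1.46
```

A positive d means the first group’s average is higher; a meta-analysis like Voyer and Voyer’s aggregates hundreds of such values across studies.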

In an article in The Journal of Economic Perspectives, authors Niederle and Vesterlund describe how the competitive nature of standardized testing contributes to the score discrepancy. Especially in mixed-sex settings, women respond to competitive pressure differently than men. Because women tend to shy away from the competitive situations in which men often thrive, the authors argue, measuring performance in those situations (for instance, on standardized tests) probably does not accurately reflect non-competitive performance. As Niederle and Vesterlund point out, the question of girls’ math scores resurfaces with each new data release, followed by great debate over its significance; but there are plenty of other questions we should ask. Why, for example, do boys so consistently underperform on language tasks? For that matter, why haven’t boys’ grades, in any subject, risen as consistently as girls’ math scores?

Just as the persistent achievement gap between male and female scores is of concern, so too is the disturbingly stubborn gap between white and black students’ achievement, evident in an SAT score difference of nearly 100 points in both reading and math. Family income predicts scores as well: scores rise approximately 20 points for each $20,000 increase in family income. In short, girls are not the only demographic at a disadvantage on test day.
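The income trend above is described as roughly linear, about 20 points per $20,000 of family income. A toy sketch of that rule of thumb (the function name and the strictly linear form are illustrative assumptions, not the College Board’s methodology):

```python
def projected_score_gap(income_low, income_high, points_per_20k=20):
    """Approximate SAT-score gap implied by an income difference,
    using the rough linear trend of ~20 points per $20,000."""
    return (income_high - income_low) / 20_000 * points_per_20k

# A family earning $100,000 versus one earning $20,000:
print(projected_score_gap(20_000, 100_000))  # → 80.0
```

On this rule of thumb, an $80,000 income difference corresponds to roughly an 80-point score difference, comparable in size to the white-black gap cited above only when income differences grow very large.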

Given all of the factors that contribute to student scores—beyond measures of reasoning and preparedness for college, the purported objectives of the SAT and many other standardized tests—we should not just be asking why boys score higher than girls on these exams. If so many demographics are at a perpetual disadvantage, we should also be asking: What are these assessments really testing? And do these assessments yield sufficient, unbiased information about reasoning skills and college readiness? If the answer to the last question is no, then it may be time to reassess our assessments.

By Kathryn deBros

Courtesy of Noodle Pros
