The new SAT is supposed to be aligned with current Common Core standards in schools (and is startlingly ACT-like in many ways). In other words, the SAT is working hard to fight for its relevance.

Every summer in recent memory, alarmist news reports about the state of SAT and ACT scores flood the media. But it isn't helicopter parents or indignant students driving these reports; it's the College Board and the ACT. It's a smart move on their part: such reports rest on the automatic assumption that the ACT and SAT are accurate measures of college potential. That is a questionable assumption, as many teenagers who consider themselves excellent students but poor test-takers would not hesitate to point out. This is not to say there are no correlations between test scores and academic potential, but the SAT and ACT are far from fair judges.

Test Scores Are Declining Because More Students Are Taking the Test
OK, so the tests aren't entirely fair, and there's an ulterior motive behind the widespread reports about declining scores. But why are scores actually decreasing?
The simplest answer is that a far more diverse group of students is taking the SAT and ACT, including more low-income students and underrepresented groups. The number of students using a fee waiver, an indicator of low income, was at its highest in recent years, and the share of underrepresented minority students rose to 32.5%, compared to 29% in the class of 2011. This is good news: more students than ever before are being encouraged to pursue higher education. As Andrew Ho, a professor at the Harvard Graduate School of Education, argues, headlines about declining test scores could just as easily be headlines about increasing test participation.
Why SAT and ACT Test Scores Will Never Change That Much
It's also worth mentioning that standardized test scores are subject to a process called equating. It's not exactly a curve; rather, it's a process that ensures a student's score will be roughly the same no matter when he or she takes the test. SAT and ACT questions are carefully field-tested on students to make sure there won't be drastic changes in difficulty between test forms. This means we're unlikely to ever see test scores increase or decrease by large amounts year to year; if we did, it would mean the testmakers weren't doing their jobs. If students as a whole become far worse or far better at the test, the scaled scores are adjusted to reflect this (in other words, it becomes either easier or harder to earn a given score).
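To make equating concrete, here is a minimal, hypothetical sketch in Python. The form names and raw-to-scaled conversion tables are invented for illustration (they are not real College Board data); the point is simply that a harder form converts each raw score to a slightly higher scaled score, so equal ability earns roughly the same scaled score on either form.

```python
# Illustrative only: hypothetical conversion tables showing how equating
# lets two test forms of different difficulty yield comparable scaled scores.

# Form B is slightly harder, so each raw score converts to a slightly
# higher scaled score than the same raw score on Form A.
RAW_TO_SCALED = {
    "form_a": {50: 600, 51: 610, 52: 620, 53: 630},  # easier form
    "form_b": {50: 620, 51: 630, 52: 640, 53: 650},  # harder form
}

def scaled_score(form: str, raw: int) -> int:
    """Convert a raw score (number of questions right) to a scaled score."""
    return RAW_TO_SCALED[form][raw]

# A student of the same ability might get 52 right on the easier form
# but only 50 right on the harder one; equating gives both the same result.
assert scaled_score("form_a", 52) == scaled_score("form_b", 50) == 620
```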
Because the SAT has a lot of points to work with, a shift of a few points in either direction (out of a total of 2400) is not a huge deal. It also makes sense that ACT scores (which range from 1 to 36) have remained "stagnant" in recent years rather than declining or increasing, because it would take a much larger change in performance to move the average score from, say, a 21 to a 22.
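A rough back-of-the-envelope check (illustrative arithmetic, not reported averages) shows why the same "few points" reads so differently on the two scales:

```python
# Illustrative arithmetic: how big is a small shift relative to each scale?
sat_shift, sat_range = 10, 2400 - 600   # SAT composite spans 600-2400
act_shift, act_range = 1, 36 - 1        # ACT composite spans 1-36

print(f"10-point SAT shift: {sat_shift / sat_range:.1%} of the scale")  # ~0.6%
print(f"1-point ACT shift:  {act_shift / act_range:.1%} of the scale")  # ~2.9%
```

A one-point move in the ACT average is roughly a five-times-larger relative swing than a ten-point move in the SAT average, which is why year-to-year ACT averages barely budge.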
So, ultimately, doomsday reports about SAT and ACT scores are both rather hyperbolic and rather ingenious marketing on the part of the SAT and ACT.
What We Should Be Talking About Regarding Declining SAT and ACT Scores
Opponents of standardized testing are quick to point out that income is often the strongest predictor of test scores. A glance a bit further down in the College Board's annual report reveals this correlation:
[Chart: SAT scores by family income, from the College Board's annual report]