The calculations we use to determine who lives under the poverty line haven’t really changed since the 1960s. Maybe it’s time to update them.
In March 2021, Congress passed and President Biden signed the American Rescue Plan Act. Expected to cut child poverty in half, the Act drew critical acclaim for expanding the child tax credit to provide lower- and middle-income families with $3,000 to $3,600 per year for each child age 17 and younger. There’s no doubt that in these uncertain times, investing in the resources that allow young people to flourish is a worthwhile idea. How is it, though, that such a small amount, equivalent to a mere $1.50 to $1.80 per hour in take-home pay for a full-time worker, could be the difference between a child growing up above the poverty line or below it?
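The per-hour figure is simple arithmetic. A quick sketch, assuming a full-time year of 40 hours a week for 50 weeks (the hour count is my assumption, not stated in the article):

```python
# Spread the expanded child tax credit over a full-time work year.
# Assumption: 40 hours/week x 50 weeks = 2,000 hours.
HOURS_PER_YEAR = 40 * 50

for annual_credit in (3_000, 3_600):
    hourly = annual_credit / HOURS_PER_YEAR
    print(f"${annual_credit:,}/year is about ${hourly:.2f}/hour")
# -> $3,000/year is about $1.50/hour
# -> $3,600/year is about $1.80/hour
```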
The calculations that determine the poverty line haven’t changed in a scandalously long time. In the early 1960s, Mollie Orshansky, who worked for the Social Security Administration, set out to update the government’s definition of poverty. First, she looked at four food plans designed by Department of Agriculture dietitians to provide adequate nutrition, and chose the cheapest one, the “economy” level diet that was meant to be sufficient “for temporary or emergency use when funds are low.”
Then, Orshansky consulted a 1955 Department of Agriculture survey which reported that families of three or more people spent about a third of their take-home pay on food. Figuring that a family spending that share on food must be eating an “economy” diet, and that a family so stretched would also have pared its non-food spending down to a similarly bare-bones level, she multiplied the cost of the food plan by three (using a slightly higher factor, 3.70, for two-person families, whose fixed costs loom larger) and extrapolated poverty lines for families of various sizes from there, accounting also for the gender of the head of household, the ages of the children, and whether the family farmed. She published the guidelines in 1965, noting that while it was hard to say how much income was enough, she felt confident this level set a benchmark for “too low.”
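Orshansky’s rule of thumb is easy to express in code. A minimal sketch, taking the one-third food share as a round multiplier of three and using a hypothetical food-plan cost (her actual 1963 dollar figures are not reproduced here):

```python
# Orshansky's rule: if families spend ~1/3 of take-home pay on food,
# the poverty line is the economy food plan's annual cost times 3.
FOOD_MULTIPLIER = 3  # inverse of the one-third food share from the 1955 survey

def poverty_threshold(annual_food_plan_cost):
    """Estimate a poverty line from the cost of a minimal food plan."""
    return annual_food_plan_cost * FOOD_MULTIPLIER

# Hypothetical: an economy plan costing $1,000/year implies a $3,000 line.
print(poverty_threshold(1_000))  # -> 3000
```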
Orshansky’s math was based on after-tax income, but it was applied to Census Bureau data that used pre-tax figures, the most complete data available at the time. By 1968, it was clear the thresholds were inadequate, and the SSA sought to update them based on rising standards of living. The Bureau of the Budget blocked this effort, allowing adjustments based only on rising prices, not on the idea that expectations were changing with the times.
The next minor changes came in 1981, when the difference between male- and female-headed families was averaged out and the maximum hypothetical family size was increased from 7 to 9 people. In 2010, new guidance admitted that families also have to pay for clothes and utilities, and may have unrelated people (such as foster children) living with them, but this “supplemental poverty measure” didn’t replace the official Orshansky definition. And there you have it: the poverty line is based on the idea that a family spends about a third of its income on a subsistence meal plan. If you are better off than that, by golly, you’re not poor, right? You’re doing fine, apparently. (Pardon my eyeroll.)
Times were different in the 1960s. While food is cheaper now (so a figure that triples food expenditure to determine poverty is relatively smaller), other expenses such as housing, health care, and education have exploded. In 2016, the official poverty rate in the United States was 12.7%, with 40.6 million Americans living below the poverty line. That’s $24,339 (before taxes, in 2016) per year for a family of four. Is it a blessing that the American Rescue Plan’s extra ~$3,000/year pulled so many children out of poverty, or a moral outrage that so many families have so little, before or after the law took effect?
In recent years, there have been calls to modernize the poverty threshold to better reflect current conditions. Some would adjust the measure to account for benefits that poor families may already receive, such as SNAP or Medicaid. Others would like to see a definition that takes relative poverty into account, since $24,339 doesn’t go as far in, say, San Francisco, as it does in Tennessee.
Another idea is to throw out the notion that poverty is about mere income and base it instead on whether a family is able to obtain what it needs to live. Craig Gundersen, a professor of economics at Baylor University, suggests looking at food insecurity. In 2020, an estimated 45 million Americans experienced food insecurity, the inability to obtain the food they need for a balanced diet. Gundersen presented three ways the government could end food insecurity for all Americans, laying out the cost of each. The top-tier plan, a universal program that would provide all families with the current maximum SNAP benefits, would reduce food insecurity by 89% at a cost of $730 billion per year.
However, not everyone experiences food insecurity, so a universal program overshoots. Gundersen also modeled a targeted option: provide benefits to families making less than four times the poverty threshold, while increasing benefits for people already on SNAP by $42 per week, the average amount participants would need to completely eliminate food insecurity. This plan would reduce food insecurity by 98% and come to $564 billion per year.
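Taking the quoted figures at face value, here is a rough cost-per-percentage-point comparison of the two plans (my framing, not Gundersen’s):

```python
# Compare the two plans by annual cost per percentage point of
# food insecurity eliminated, using the figures quoted above.
plans = {
    "universal max-SNAP":         {"cost_bn": 730, "reduction_pct": 89},
    "targeted + $42/week top-up": {"cost_bn": 564, "reduction_pct": 98},
}

for name, p in plans.items():
    per_point = p["cost_bn"] / p["reduction_pct"]
    print(f"{name}: ~${per_point:.1f}B per point of reduction")
# -> universal max-SNAP: ~$8.2B per point of reduction
# -> targeted + $42/week top-up: ~$5.8B per point of reduction
```

By this crude measure, the cheaper targeted plan is also the more cost-effective one.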
Which is why it probably won’t get done. The United States is the sixth most prosperous of the OECD countries (per capita), yet has the third highest level of poverty – worse than Mexico. Much like Americans would rather see the world burn around them than make the changes needed to mitigate the climate emergency, they seem more afraid of being called Socialists (and that someone, somewhere, might eat a Pop Tart at their expense) than they are of redefining and alleviating poverty to allow every American a basic level of food and shelter, let alone the means to live a decent life. Americans like poverty, as long as it’s other people living in it. Less competition, y’know?
Related: America’s Big Government Harvest Box