“It really makes you wonder, ‘Where did it all go wrong? Why wasn’t this child protected?’” Those were the words of Merced, California police sergeant Kalvin Haygood, among the first to find the lifeless body of eight-year-old Sophia Mason earlier this year. “It was probably the most disturbing thing I’ve seen in my career,” he told the Mercury News. Appearances suggest that the girl was kept and punished in a locked metal shed, abused physically and sexually, and exploited for pornography.
After a three-month investigation into Sophia’s murder at the hands of her mother and the mother’s boyfriend, reporters found plenty of instances where things went wrong, including eight separate reports of abuse or neglect in the 15 months before her death that were all but ignored.
How much can a person’s past behavior tell us about the future? Common sense tells us the answer is “a lot,” but policymakers seem intent on ignoring it. The single biggest predictor of whether a parent will abuse or neglect a child is whether they have done it before. This is not to say that we should charge people with crimes they haven’t committed, à la Minority Report. But we should be particularly concerned about children in the care of those with a history of abuse. This is the thinking behind, say, sex-offender registries, which keep people found guilty of hurting children out of roles and positions where they might have unfettered access to kids.
But all too often, public officials seem intent on ignoring information reported to child-welfare agencies. A few years ago, California legislators proposed a system of “blind removals,” which would not only conceal identifying details such as the race, ethnicity, and zip code of abusers but also stop child-welfare decisionmakers from seeing records of past abuse. The measure was advanced in the name of racial equity.
Oregon recently decided to stop using predictive risk modeling (PRM) to help decide which families social workers would investigate—not because the modeling was inaccurate or did a poor job of determining which children were at highest risk, but because policymakers wanted more racially equitable outcomes. If that’s all you’re looking for, why not just stop investigating all cases of child abuse in black homes until more reports from white homes come in? Or just assign investigations like you’re dealing a deck of cards—one investigation for white families, one for black families, one for Hispanics, one for Asians, and then back to whites. Sophia Mason, who was black, would just have to wait her turn.
If these ideas sound absurd, consider the decisions made to keep Sophia with her mother. Relatives expressed concerns even before her mother, Samantha Johnson, had given birth. Her long history of substance abuse and prostitution, combined with what one cousin described as the intellectual capacity of a 10- or 12-year-old, did not seem like a recipe for responsible parenthood.
Sophia was mostly raised by her grandmother and a support system of aunts, cousins, and neighbors until she was seven. But then her mother returned with a boyfriend, and the two began to sneak Sophia out of the house at night. The Department of Children and Family Services (DCFS) conducted an assessment and determined that there was no risk of future maltreatment, but officials recorded no notes of it. On another occasion, when DCFS workers saw Sophia’s body covered with bruises and scabs, they determined that she could safely remain with the couple. Her mother and boyfriend then took her out of town for a few months; when they returned, Sophia had lost a significant amount of weight. Another report of suspected abuse was filed with DCFS. On yet another occasion, a social worker claimed that Johnson had no history of drug abuse, even though she was residing in a “sober-living” home.
“I fail to understand why we keep using check-box tools based on case-workers’ subjective views,” says Rhema Vaithianathan, a professor in the School of Economics at Auckland University of Technology and co-director of the Centre for Social Data Analytics. Vaithianathan, who has helped develop predictive-analytics models for Allegheny County and other jurisdictions, notes, “These tools continue to fail—yet agencies seem hesitant to adopt modern technologies such as PRM that can systematically harvest all the information that is held in the case files and generate data-driven insights.”
No one likes the idea of computers making decisions for us, but using PRM doesn’t mean that people would no longer be in charge. With data collected in one central database and an algorithm to help determine when a child is at high risk, it would be much harder for supervisors in child-welfare agencies to ignore red flags. A child reported by different parties on multiple occasions in a single year for signs of abuse should set off alarm bells at an agency. It wouldn’t be up to a caseworker to dig through a child’s history or figure out whether the mother has had a drug problem; that information, drawn from public records and previous reports, would already be in the system.
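To make the idea concrete, here is a minimal, purely hypothetical sketch in Python. The record fields, thresholds, and flag wording are all invented for illustration; actual PRM tools, such as the one used in Allegheny County, are statistical models trained on years of case data, not a handful of hand-written rules. The point is only to show how a system that pools reports in one place can surface red flags that no single caseworker has to reconstruct by hand.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical illustration only: field names and thresholds are invented,
# and real predictive-risk models are trained statistically, not hand-coded.

@dataclass
class ChildRecord:
    child_id: str
    report_dates: list = field(default_factory=list)  # dates of maltreatment reports
    caregiver_prior_findings: int = 0                  # substantiated past abuse/neglect
    caregiver_substance_history: bool = False          # drawn from public records

def risk_flags(record: ChildRecord, as_of: date) -> list:
    """Return human-readable red flags a screener should not be able to overlook."""
    flags = []
    one_year_ago = as_of - timedelta(days=365)
    recent = [d for d in record.report_dates if d >= one_year_ago]
    if len(recent) >= 3:
        flags.append(f"{len(recent)} separate reports in the past 12 months")
    if record.caregiver_prior_findings > 0:
        flags.append("caregiver has prior substantiated findings")
    if record.caregiver_substance_history:
        flags.append("documented caregiver substance-abuse history")
    return flags

# Example: a child reported repeatedly in one year is surfaced automatically,
# rather than depending on a caseworker to piece the history together by hand.
example = ChildRecord(
    child_id="C-001",
    report_dates=[date(2021, 3, 1), date(2021, 6, 15), date(2021, 9, 20), date(2022, 1, 5)],
    caregiver_prior_findings=2,
    caregiver_substance_history=True,
)
for flag in risk_flags(example, as_of=date(2022, 2, 1)):
    print("ALERT:", flag)
```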
Of course, we could also improve the workforces at child-welfare agencies in other ways. Caseworkers frequently come from the lowest educational ranks and are undertrained, under-supported, and underpaid. But if it can help save kids like Sophia Mason, using artificial intelligence should be a no-brainer.