A trial using data to prevent child abuse helped a UK police force detect a child gang and drastically cut the time it takes child protection experts to review cases.
High-profile abuse cases such as the death of four-year-old Daniel Pelka led to the creation of Multi-Agency Safeguarding Hubs, which help agencies including the police and social care groups more effectively “join the dots” across the fragmented data they hold on children.
But Ravi Gogna at defence firm BAE Systems says such hubs don’t help in many cases. “That really only works for the red flag events. If something really big, really bad happens, you hit your threshold of risk and data sharing occurs.
“But what we’d learned from the Daniel Pelka case and a number of other cases is, actually, a lot of kids don’t have one big red flag event, they have lots of small events.”
The company partnered with Gloucestershire Constabulary in a £250,000 pilot project to take siloed data from police, social care, education and health systems, analyse it automatically and flag cases to child protection experts. In total, it looked at 100 indicators of risk, such as poor school attendance.
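The article gives no implementation details, but the cross-agency approach it describes can be illustrated with a minimal sketch: merge records held in separate silos into one profile per child, test each profile against a list of risk indicators, and flag cases where several indicators trip. All field names, indicators, thresholds and data below are hypothetical, chosen only to mirror the examples in the text (such as poor school attendance).

```python
# Illustrative sketch only: all field names, thresholds and records
# here are hypothetical, not drawn from the actual pilot system.

# Hypothetical records from separate agency silos, keyed by a child ID.
police = {"c1": {"domestic_callouts": 2}}
education = {"c1": {"school_attendance": 0.78}, "c2": {"school_attendance": 0.97}}
health = {"c2": {"missed_appointments": 0}}

# A few hypothetical indicators (the pilot reportedly used around 100).
INDICATORS = [
    ("poor school attendance", lambda r: r.get("school_attendance", 1.0) < 0.85),
    ("police domestic callouts", lambda r: r.get("domestic_callouts", 0) >= 2),
    ("missed health appointments", lambda r: r.get("missed_appointments", 0) >= 3),
]

def merge_records(*silos):
    """Join per-agency records into one combined profile per child."""
    merged = {}
    for silo in silos:
        for child_id, fields in silo.items():
            merged.setdefault(child_id, {}).update(fields)
    return merged

def flag_cases(profiles, threshold=2):
    """Return children whose triggered indicators meet the threshold."""
    flagged = {}
    for child_id, record in profiles.items():
        hits = [name for name, test in INDICATORS if test(record)]
        if len(hits) >= threshold:
            flagged[child_id] = hits
    return flagged

profiles = merge_records(police, education, health)
print(flag_cases(profiles))  # c1 trips two indicators; c2 trips none
```

The point of the sketch is the join step: no single silo contains enough signal on its own, which matches Gogna's observation that many children show "lots of small events" rather than one red flag.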
Reviewing cases faster
The technology identified children who might be candidates for early intervention by agencies, before abuse occurred.
The analysis also uncovered a gang of children in the area, the older members of which were committing crimes. “We accidentally found a gang. We weren’t setting out to look for gangs,” said Gogna, speaking at New Scientist Live in London yesterday. He said the police took the decision not to arrest children in the gang who had committed crimes, but instead to speak at assemblies at the schools they attend.
The pilot also promises to save police and other agencies time, cutting the review of an individual’s case from 2.5 hours by hand to 15 minutes by machine. Gloucestershire Constabulary told New Scientist: “Our officers were very impressed by the potential of the technology to help us protect children.”
The project was overseen by the UK’s data watchdog, the Information Commissioner’s Office, and an independent barrister.
William Wong at Middlesex University London says that without joined-up data it isn’t possible to usefully tell if a child is at risk, but there is a danger of the system identifying the wrong people. “When the consequences are significant, such as in identifying vulnerable kids, we need to ensure some form of algorithmic transparency – so that the people involved are able to ascertain for themselves if the results of the algorithm are sensible,” he says.