This AI tool shows which politicians and issues are getting the most TV time
The Stanford Cable TV News Analyzer uses computer vision to identify public figures on the idiot box
AI’s risks and rewards
The tool can also identify gender biases in TV coverage. In the graph below, the move towards on-air gender parity appears to have reversed since 2015.
However, this example raises one of the ethical concerns around the project. The system uses computer vision to make a binary assessment of each presenter's gender, based on the appearance of their face. But a person's appearance can differ from their gender identity or sex assigned at birth. This shortcoming risks misgendering people and excluding non-binary individuals from the analysis.
In addition, facial recognition is notoriously biased and error-prone. But the researchers claim their application has a low potential for harm.
They also say all the people in their database were identified by the Amazon Rekognition Celebrity Recognition API, which only includes public figures. However, Amazon hasn’t revealed its definition of a public figure.
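For readers curious how such identification works in practice, here is a minimal sketch of querying the Rekognition Celebrity Recognition API with boto3 and filtering its matches by confidence. The helper names and the 90% threshold are illustrative assumptions, not the researchers' actual pipeline; the response shape follows Amazon's documented format.

```python
def recognize_celebrities(image_bytes):
    """Send a video frame to Rekognition and return its raw response.
    Requires AWS credentials; boto3 is imported lazily so the parsing
    helper below can be used without it."""
    import boto3  # assumes boto3 is installed and AWS credentials are configured
    client = boto3.client("rekognition")
    return client.recognize_celebrities(Image={"Bytes": image_bytes})

def confident_names(response, min_confidence=90.0):
    """Keep only celebrity matches above a confidence threshold.
    Parses the documented response shape: a 'CelebrityFaces' list whose
    entries carry 'Name' and 'MatchConfidence' fields."""
    return [
        face["Name"]
        for face in response.get("CelebrityFaces", [])
        if face.get("MatchConfidence", 0.0) >= min_confidence
    ]

# Example against a mocked response in Rekognition's documented shape
# (names here are made up, not real detections):
sample = {
    "CelebrityFaces": [
        {"Name": "Jane Doe", "MatchConfidence": 97.5},
        {"Name": "John Roe", "MatchConfidence": 55.0},
    ],
    "UnrecognizedFaces": [],
}
print(confident_names(sample))
```

Filtering on `MatchConfidence` is one way a downstream system could reduce false identifications, though it does nothing to address the demographic biases discussed above.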
Despite these qualms, the tool could provide some useful insights into media biases — particularly with another nasty and divisive US election campaign underway.
You can try the tool out for yourself at the Stanford Cable TV News Analyzer website.
Story by Thomas Macaulay
Thomas is a senior reporter at TNW. He covers European tech, with a focus on AI, cybersecurity, and government policy.