WHY AI CAN’T REALLY FILTER OUT “HATE NEWS”

In Define information before you talk about it (October 28, 2021), neurosurgeon Michael Egnor interviewed engineering professor Robert J. Marks on the way information, not matter, shapes our world. In the first portion, Egnor and Marks discussed questions like: Why do two identical snowflakes seem more meaningful than one snowflake? Then they turned to the relationship between information and creativity. Is creativity a function of more information, or is there more to it? Does human intervention make a measurable difference? Does Mount Rushmore contain no more information than Mount Fuji? That difference is what specified complexity captures. Putting the idea of specified complexity to work, how do we measure meaningful information? How do we know that Lincoln himself contained more information than his bust? In this episode, they address the hope that advanced AI could somehow recognize and filter out bias and hate; the problem is that bias is built into the programming.

Michael Egnor: Some people hope that artificial intelligence could filter out hate news. No, it's not going to be able to filter out hate news without a bias from the programmer as to what hate news is.
