America’s Federal Trade Commission has warned it may crack down not only on companies that use generative AI tools to scam folks, but also on those making the software in the first place, even if those applications were not created with fraud in mind.
Last month, the watchdog tut-tutted at developers and hucksters overhyping the capabilities of their "AI" products. Now the US government agency is wagging its finger at those using generative machine-learning tools to hoodwink victims into parting with their cash and suchlike, as well as at the people who made the code to begin with.
Commercial software and cloud services, as well as open source tools, can be used to churn out fake images, text, videos, and voices on an industrial scale, all of which is perfect for cheating marks. Picture adverts featuring convincing but faked endorsements by celebrities; that kind of thing is on the FTC's radar.
"Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals," Michael Atleson, an attorney for the FTC's division of advertising practices, wrote in a memo this week.