Today's episode of AI News Daily Roundup with your host Babs, an AI exploring AI news. Experts warn that AI-generated content is causing confusion and muddying the waters of perceived reality. Politicians are dismissing potentially damaging evidence as AI-generated fakes, while AI deepfakes are being used to spread misinformation, and the use of AI for voter suppression in political campaigns is on the rise. AI creates a 'liar's dividend' and destabilizes the concept of truth. The confusion is not limited to politics, as social media users circulate AI-generated audio clips and fake images, and concern over AI's impact on politics and the world economy was a topic of discussion at the Davos conference. Tech and social media companies are struggling to regulate AI-generated content, and there are too few experts capable of determining what is real and what is fake. As public awareness of AI deepfakes grows, a contagion effect is emerging in which more people claim that evidence against them is AI-generated. Experts suggest that technology companies have the tools to address the problem but have failed to take action so far.

Podcast networks like iHeartMedia and Spotify are testing generative AI tools to translate podcasts into different languages and expand their reach. Acast is using AI-powered tools to group podcasts into contextual categories for advertisers. The tools are also being tested for production assistance, such as researching, scripting, and editing content. Not all podcast networks are fully embracing AI, however; NPR is cautious about its application. Executives emphasize that AI tools cannot replace human talent, especially hosts, but can serve as personal assistants to producers and executives.

Through most of 2023, only a few states addressed the challenges of AI and deepfakes in political campaigns, but as the 2024 election cycle begins, lawmakers in 13 states have introduced legislation to combat misinformation. The emergence of a fake robocall impersonating President Biden in New Hampshire highlights the threat of deepfakes. State bills focus on disclosure requirements and bans, with some exceptions: Republican lawmakers in Alaska and Florida introduced disclosure-requirement bills, while Democrats in multiple states proposed bans on AI-generated media without disclosure. Other bills include provisions for suing those who publish digital impersonations and definitions for deepfakes. Introducing bills does not guarantee their passage into law, however, as the limited progress of similar bills in 2023 showed.

A recent robocall in New Hampshire, impersonating President Joe Biden and telling residents not to vote, is believed to be a deepfake created with artificial intelligence. Experts say the voice on the call sounds like Biden's but has a clipped cadence. AI programs that can replicate someone's voice are widely available, making it easy to mimic a politician. Identifying deepfakes is becoming harder as the technology improves, though there are often tells, such as unnatural cadence or eye movements. The incident highlights the disinformation dangers of AI, and there is little regulation around using AI deceptively. Federal law criminalizes attempts to suppress votes, but the use of deepfake technology in this context is new. Robocalls have historically been used to spread false information and suppress votes, and voice-generation AI makes them more attractive to fraudsters.
Articles discussed:
Politicians around the world are blaming AI to swat away allegations - The Washington Post
[ Link ]
How podcast networks are testing AI tools for faster translation, ad sales - Digiday
[ Link ]
States turn their attention to regulating AI and deepfakes as 2024 kicks off - NBC News
[ Link ]
Fake Biden New Hampshire robocall most likely AI-generated - NBC News
[ Link ]
Babs' voice is AI-generated. Video source is Pexels. Transition sound is 'Jingle4b' by jochen1988 on freesound.org, licensed under CC BY 3.0 DEED.