"IBM urges Congress to adopt a precision regulation approach to AI," Montgomery said. "This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself." Montgomery said that Congress should clearly define the risks of AI and impose "different rules for different risks," with the strongest rules "applied to use cases with the greatest risks to people and society."
People trying to use Discord are being asked to identify an object that does not exist. The object in question is a “Yoko,” which appears to be a mix between a snail and a yoyo. Multiple people have reported being shown a prompt to identify a Yoko when solving a simple captcha while trying to use Discord.
The picture of the Yoko, as well as the other images in the captcha, appear to be AI-generated. Another user complained on Twitter that they’d failed to pass a captcha to log into Discord when it asked them to identify images of a puzzle cube. Again, the pictures appeared to be AI-generated.
“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.
DrCaleb
CKA Moderator
Posts: 50457
Posted: Mon Jun 05, 2023 6:04 am
It's funny how many stories I've read about this drone killing its operator, even after the story was disputed by the US Air Force.
TL;DR: AI-generated content contains errors, and when an AI is trained on previously generated content to create more content, those errors compound with each generation until the output is useless.
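A rough back-of-the-envelope sketch of that compounding effect (my own toy model, not from the article): if each re-generation pass independently corrupts some small fraction of the content that was still correct, the total corrupted fraction grows with every generation.

```python
def corrupted_fraction(generations: int, per_gen_error: float) -> float:
    """Toy model: fraction of content that is wrong after repeated
    re-generation, assuming each pass independently corrupts
    `per_gen_error` of whatever was still correct."""
    correct = 1.0
    for _ in range(generations):
        correct *= (1.0 - per_gen_error)
    return 1.0 - correct

# With a 5% error rate per pass, more than half the content is wrong
# after 14 generations.
for g in (1, 5, 14, 50):
    print(g, round(corrupted_fraction(g, 0.05), 3))
```

The 5% figure is an arbitrary illustration; the point is only that the error fraction is monotonically increasing, so quality can only decay under pure recursive training.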