The biggest AI flops of 2024


AI professionals have had a busy 12 months, with more successful product launches than we can count and even Nobel Prizes. But it hasn't always been smooth sailing.

AI can be unpredictable at the best of times, and the wide availability of generative models has led people to test their limits in strange, and sometimes harmful, ways. Here are some of the biggest AI failures of 2024.

AI has infiltrated every corner of the internet

Generative AI makes it easy to churn out reams of content quickly and cheaply, including text, pictures, and video. In 2024, we finally settled on a name for the (generally low-quality) material these models produce: AI slop.

Because it is so cheap to make, AI slop can now be found almost everywhere on the internet: in newsletters and books on Amazon, in articles and ads across the web, and in shonky images on social media. The more emotionally evocative those images are (crying children, wounded veterans), the more likely they are to be shared, which means more engagement and higher ad revenue for their creators.

The rise of AI slop is more than just annoying; it poses a real problem for the very models that helped produce it. Because these models are trained on data scraped from the web, the growing number of slop-filled websites risks degrading the performance and output of future models.

AI art is affecting our perception of reality

In 2024, the effects of AI-generated images began to seep into the real world. Willy's Chocolate Experience, an unofficial immersive event inspired by Roald Dahl's Charlie and the Chocolate Factory, made news around the globe in February after its fantastical AI-generated promotional materials gave visitors the impression the event would be far grander than the warehouse it was actually held in.

Similarly, people lined the streets of Dublin for a Halloween parade that didn't exist. A Pakistan-based website had used AI to generate a list of events happening around the city, which was shared widely on social media ahead of October 31. Although the SEO-baiting site (myspirithalloween.com) has since been taken down, both incidents illustrate how misplaced public trust in AI-generated material online can come back to haunt us.

Grok lets users create pictures of almost any situation

Most major AI image generators have guardrails that limit what their models are allowed to produce, in order to prevent the creation of violent, sexually explicit, or illegal content and to keep users from infringing on others' intellectual property. Grok, the assistant made by Elon Musk's AI company xAI, ignores nearly all of these principles, in keeping with Musk's rejection of what he calls "woke AI."

While other image models will refuse to depict celebrities, copyrighted material, or violent or terrorist acts (unless tricked, of course), Grok will happily generate images such as Donald Trump holding a bomb or Mickey Mouse firing a bazooka. It draws the line at nude images, but its general refusal to follow the rules undermines other companies' efforts to avoid creating controversial material.

Sexually explicit deepfakes of Taylor Swift circulated online

In January, nonconsensual deepfake nudes of the singer Taylor Swift began circulating on social networks including X and Facebook. A Telegram community had tricked Microsoft Designer, Microsoft's AI image generator, into creating the explicit pictures, demonstrating how easily guardrails can be circumvented.

Microsoft quickly closed the loopholes in its system, but the incident exposed the platforms' poor content-moderation policies: the images circulated widely, and some posts stayed up for days. Most sobering of all is how powerless we still remain in the fight against nonconsensual deepfake porn. Watermarking and data-poisoning tools can help, but they will need to be adopted far more widely to have any impact.

Business chatbots ran amok

Other high-profile chatbots did more harm than good. The delivery company DPD's customer service bot happily swore and declared itself useless with little prompting, while another bot, built to give New Yorkers accurate information about their city government, ended up dispensing advice on how to break the law.

The market for AI devices isn’t exactly booming

In 2024, the AI industry tried and failed to make hardware assistants happen. Humane struggled to convince customers to buy its Ai Pin, a wearable computer, and even a price cut did not boost sales. The Rabbit R1, a ChatGPT-based personal assistant device, met a similar fate after a string of reviews reported that it was slow and buggy. Both products seemed to be solving a problem that didn't exist.

AI search summaries went awry

Would you ever eat a rock or put glue on your pizza? Those were among the outlandish suggestions Google's AI Overviews feature gave users in May, after the company added generated summaries to the top of its search results. Because AI systems can't reliably tell the difference between a factually accurate news article and a joke post on Reddit, users raced to find the strangest responses AI Overviews would produce.

AI summaries are not only funny; they can have grave consequences. A new iPhone feature that groups app notifications together and summarizes their content recently generated a false BBC News headline. The summary falsely claimed that Luigi Mangione, the man accused of murdering health-insurance CEO Brian Thompson, had shot himself. The same feature also produced a headline falsely announcing that Israeli Prime Minister Benjamin Netanyahu had been arrested. Errors like these can spread misinformation and undermine the trust people place in news organizations.
