HomeArtificial IntelligenceFive Dangers Of AI Development For The Future

Five Dangers Of AI Development For The Future

Dangers of AI: Artificial intelligence (AI) that generates text and images has been used for controversial and dangerous purposes.

Since programs such as ChatGPT, DALL-E 2 and Midjourney became popular, there have been reports of shared fake news, author theft and even virtual crimes carried out with the help of these technologies. This has caused governments worldwide to be concerned about the issue – such as Italy, which banned ChatGPT after accusations of leaking user information. Next, discover five dangers of AIs that generate texts and images for the future.

Misinformation

Since their emergence, chatbots have been concerned about their potential to spread false information. This happens because ChatGPT can sometimes provide misleading or outdated data on different subjects, and it compellingly does this. Furthermore, in the case of image generators, such as Midjourney, the photos developed by the program can create confusion among several users.

In this sense, a concern among experts is that, in the future, these technologies will evolve and make it difficult to discern what is reality from what is artificially generated. According to The New York Times, OpenAI has a specific policy that prohibits using ChatGPT technology to encourage violence promote dishonesty and attempts at political influence, but does not have secure coverage for non-English speaking countries. 

Data Leak

Artificial intelligence has access to diverse information, and, therefore, risks are involved in the personal data that is passed on to them. Programs powered by AIs can become vulnerable to hacking for reasons ranging from the lack of encryption necessary to communicate between customers and the chatbot to failures in the platforms that host the programs themselves. It was out of fear of these system weaknesses that Italy banned ChatGPT, for example.

Furthermore, there is the possibility of these chatbots being invaded by malware and ransomware that aim to capture sensitive information from users. It is worth highlighting the cases in which there were data leaks from ChatGPT Plus subscribers, according to a study by the technology company ESET.

Data on usage history and private information about user payments, name, address and last four digits of cards became available on Twitter and Reddit, including in foreign languages. This happened due to a bug that allowed a series of hacker attacks, which even stole the chatbot’s internet user accounts.

Assistance In Phishing Scams

Malicious users also use artificial intelligence to steal password data and credit card numbers, for example. Criminals use the weaknesses present in chatbot systems to manipulate them by creating hidden “prompts”, which direct users to malicious websites and, thus, extract sensitive information.

After gaining access to email, for example, hackers can send messages to other contacts as if they were the actual owner of the account – which increases the number of people who can fall for scams. 

Jailbreaks

ChatGPT has also been the jailbreaking target, as several users have tried manipulating the commands to “break” the platform. The practice aims to make the chatbot ignore its rules and produce content against its guidelines, such as discriminatory or malicious texts. To do this, jailbreakers write specific commands to circumvent the ChatGPT system.

Viruses via Extensions

Criminals have also taken advantage of the fame of chatbots like ChatGPT to defraud users. An excellent example of this crime is the fake extension for Google Chrome, “Quick access to ChatGPT”, reported by the company specializing in digital security ESET.

The plugin was used by hackers to steal Facebook accounts and spread malware, in addition to publishing prohibited content and malicious advertisements on the original hijacked accounts. According to information released by ESET, once installed, the application invaded the victim’s computer and gained access to users’ personal information.

What Has Been Done To Mitigate These Risks?

With the increase in problems involving AI programs, different institutions have spoken out against development without specific regulations. The most concrete example of this trend was the open letter created by the organization Future of Life Institute, which businessman Elon Musk signed. The document calls for interrupting AIs such as ChatGPT-4 and DALL-E 2 for at least six months. Furthermore, it informs about the dangers that these technologies can cause to society.

Also Read: Ethical Principles Of Artificial Intelligence (AI)

Tech Galaxieshttps://techgalaxies.com
Techgalaxies is the perfect destination for tech news readers, as our team is dedicated to publishing innovative and informative tech articles to our visitors.
RELATED ARTICLES

Recent Articles