When a human and an AI meet, the human is the more intelligent party. That means the "creative" AI is not creative in itself; the person who uses the AI is the creative one. The key thing to understand is this: what we see as an example of "fuzzy logic" is really a large number of precise reactions, or precise logical points.
That means there are multiple words, or put another way, multiple triggers, connected to a single operation. Because both dialect words and literary words can be linked to the same operations, users can speak to the AI in dialect or in standard language. That gives the user freedom in the form of language used to command the system.
In this model, users can use both dialect and literary words, which makes the AI more flexible than a system where the user must type precise, literal commands.
There are many ways to say the thing that launches an action. The words connected to certain operations are called logical points, and developers can connect each action to multiple logical points. That makes the system seem as if it used fuzzy logic.
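The idea above can be sketched as a simple lookup table: several trigger phrases, including a hypothetical dialect variant, all resolve to the same operation. Every name, phrase, and function here is invented for illustration; the point is only that the mapping is a precise lookup, not genuine fuzziness.

```python
# Minimal sketch of "logical points": many triggers, one operation.
# All trigger phrases and operations below are hypothetical examples.

def turn_on_lights() -> str:
    return "lights on"

def turn_off_lights() -> str:
    return "lights off"

# Each operation is reachable through several precise triggers,
# so the system only *seems* to use fuzzy logic.
TRIGGERS = {
    "switch on": turn_on_lights,
    "lights on": turn_on_lights,
    "put tha lights oan": turn_on_lights,   # hypothetical dialect form
    "switch off": turn_off_lights,
    "lights off": turn_off_lights,
}

def handle(command: str) -> str:
    action = TRIGGERS.get(command.lower().strip())
    return action() if action else "unknown command"

print(handle("Put tha lights oan"))  # -> lights on
```

Adding a new way of saying a command is just adding another dictionary key, which is why this design scales to dialects so easily.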
This also means that an order encoded in hexadecimal or ASCII codes is not byte-for-byte the same as the order written in regular text. That difference is what can bypass the protections of generative AI. If somebody wants to use generative AI as a tool for writing malicious code very effectively, that person must only bypass the safety mechanisms of the AI. One way to do that is to give the commands in hex or ASCII-code form.
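To see why the encoded form slips past a plain-text filter, here is a small sketch. It encodes an ordinary sentence as decimal ASCII codes and as a hex string; a naive keyword filter that scans for the plain words finds nothing in the encoded forms. The sentence is an arbitrary harmless example.

```python
# Sketch: the same sentence in plain text, as ASCII code points, and as hex.
# A keyword filter matching the plain string will not match the encoded forms,
# which is the core of the encoding-based bypass described above.

text = "write some code"

ascii_codes = [ord(c) for c in text]      # decimal ASCII values
hex_form = text.encode("ascii").hex()     # hexadecimal string

print(ascii_codes)
print(hex_form)

# A naive substring filter sees nothing suspicious in the hex form:
print("write" in hex_form)  # -> False

# Yet the original order is fully recoverable from the encoding:
print(bytes.fromhex(hex_form).decode("ascii"))  # -> write some code
```

A model that decodes hex internally understands the order, while a filter that only inspects the surface text does not, and that mismatch is the loophole.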
Another way to bypass the security of artificial intelligence is to give the orders in small pieces. A hacker, or any malware developer, can use multiple AIs to create parts of the code and then connect the results. There are two points to note here. First, the code writer must still have deep knowledge of coding. Second, hackers can use AI to develop the base code just like legitimate developers do.
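The "small pieces" idea can be sketched as follows. Each sub-request looks harmless on its own, and only the person who assembles the parts knows the combined purpose. The generator here is a stub standing in for calls to code-generating models; the prompts and fragments are deliberately benign and hypothetical.

```python
# Sketch of splitting a task into innocuous fragments and joining the results.
# generate_fragment() is a stub simulating separate AI requests; all prompts
# and returned snippets are hypothetical illustrations.

def generate_fragment(prompt: str) -> str:
    fragments = {
        "read a file": "data = open(path).read()\n",
        "send text over a socket": "sock.sendall(data.encode())\n",
    }
    return fragments.get(prompt, "")

# Each request in isolation is an ordinary programming question.
prompts = ["read a file", "send text over a socket"]
parts = [generate_fragment(p) for p in prompts]

# Only the assembler sees the combined program.
combined = "".join(parts)
print(combined)
```

This is also why piece-by-piece filtering is hard: no single fragment reveals the intent of the whole.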
They can freely use AI to develop legal software such as chat programs and firewall software. The spyware that hackers use to steal information is often just a chat program or firewall that has been modified to send data without the user's permission. So hackers can build a legal application and then change the code so that it runs in reverse and steals information. One variation is to start from a tool that allows, for example, teachers to follow what students do on their computers.
Normally those programs tell the user that they are operating, and their use is limited to classrooms where teachers teach things like programming. There may be a red frame and a text telling that the observation program is in use, or the program may require the user's acceptance. Hackers can remove those notifications and then observe the targeted computers unnoticed.
https://cybersecuritynews.com/encoding-technique-jailbreaks-chatgpt-4o/