Some researchers argue that an artificial superintelligence may need the ability to feel pain. Artificial intelligence can already mimic feelings, and it can read our emotions by observing our faces. The AI can communicate with people through virtual actors or robot bodies. In the movie "Aliens", the human-shaped android called "Bishop" keeps contact with the spacecraft's central computers and acts as a medium between humans and those computers. In the "Terminator" movies, the Terminator robot feels damage to its body when it is shot, and that much is possible for modern robots. On today's internet, an AI or large language model (LLM) can likewise interact with users through virtual actors, and those systems can also mimic feelings. If a robot or virtual actor mimics pain and other feelings, the system can play human.
When we think about feelings and therapy, AI can deliver psychotherapy remotely about as effectively as humans: in reported studies, AI-based remote therapies had the same effect as therapies given by human therapists. That suggests an intelligent chatbot could even be a better therapist than a human. The AI will not get angry, and nobody can harm or threaten it. And the best thing is that the AI will not try to continue the relationship after the therapy ends.
Does an artificial superintelligence need to feel pain? And how can we be sure that such a thing feels pain? We can build a robot that says "ouch" when we put too much weight on its feet. That does not mean the robot really feels pain. Its sensor tells the computer that something above a certain weight limit is pressing on its foot, and that activates a "sense-and-response" circuit.
That is one thing that lets the system mimic pain. In the same way, the robot can say "Don't hit me" or hit back when we hit it, and it can yell "Be careful" or "You almost hit me" if somebody pushes it with a car. The words the robot uses come from its database: when the robot senses something, it activates the matching public reaction. But the robot does not feel pain.
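A minimal sketch of such a sense-and-response circuit could look like the code below. The sensor name, the weight threshold, and the phrase table are assumptions made for illustration, not any real robot API; the point is that a canned reaction is triggered by a threshold, with no feeling involved.

```python
from typing import Optional

# Assumed weight limit on the foot sensor (illustrative value only).
PAIN_THRESHOLD_KG = 20.0

# Canned "public reactions" stored in a simple phrase table.
REACTIONS = {
    "overload": "Ouch!",
    "hit": "Don't hit me!",
    "near_miss": "Be careful, you almost hit me!",
}

def react(event: str, value: float = 0.0) -> Optional[str]:
    """Map a sensed event to a canned phrase.
    No feeling is involved, only a lookup triggered by a threshold or event label."""
    if event == "foot_pressure":
        return REACTIONS["overload"] if value > PAIN_THRESHOLD_KG else None
    return REACTIONS.get(event)

print(react("foot_pressure", 35.0))  # -> Ouch!
print(react("hit"))                  # -> Don't hit me!
```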
In the same way, we can think of the robot as a system connected to a morphing neural network that lets it operate through large language models. When the robot receives a command, it sends it straight to its central computers, which can be morphing neural networks. Those morphing neural networks decide how the robot should react to the command. And of course, the LLM must check whether the command is on the list of allowed commands.
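That allowed-command check can be sketched as a simple allowlist filter in front of the central system. The command names and the forwarding step below are hypothetical placeholders, not a real robot interface.

```python
# Hypothetical allowlist of commands the robot is permitted to forward.
ALLOWED_COMMANDS = {"move_forward", "stop", "pick_up", "report_status"}

def handle_command(command: str) -> str:
    """Forward a command to the central system only if it is on the allowlist."""
    if command not in ALLOWED_COMMANDS:
        return f"Rejected: '{command}' is not an allowed command."
    # In a real system this would be passed on to the central computers
    # (the morphing neural networks) that decide how the robot reacts.
    return f"Forwarded '{command}' to the central system."

print(handle_command("stop"))          # allowed
print(handle_command("open_airlock"))  # rejected
```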
The robot is a tool that acts as an intelligent, multipurpose microphone between humans and the computing centers. To outsiders, the morphing neural networks can look like one single computer, and that is one of the things that makes the robot a tool that gets more flexibility from the AI. The robot is connected to central computers and, maybe quite soon, to quantum computers. The quantum system is the top level of computing, and the regular morphing neural networks operate as a gate to the quantum system.
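This layered setup can be pictured as a gateway pattern: the robot submits jobs to one entry point, the entry point hides the individual back-end nodes so the network looks like a single computer, and only some jobs pass through the gate to the quantum level. The node names and the routing rule in the sketch below are assumptions made for illustration, not a description of any existing system.

```python
# Sketch of the layered architecture: one gateway hides several back-end nodes,
# and the classical layer acts as a gate that decides which jobs reach the
# quantum back end. Node names and the routing rule are illustrative assumptions.

class Backend:
    def __init__(self, name: str):
        self.name = name

    def run(self, job: str) -> str:
        return f"{self.name} processed: {job}"

class Gateway:
    """Single entry point; the robot never sees the individual nodes behind it."""
    def __init__(self):
        self.classical_nodes = [Backend("classical-1"), Backend("classical-2")]
        self.quantum_node = Backend("quantum-backend")

    def submit(self, job: str) -> str:
        # Hypothetical gate rule: only jobs tagged "quantum:" pass to the quantum level.
        if job.startswith("quantum:"):
            return self.quantum_node.run(job)
        node = self.classical_nodes[hash(job) % len(self.classical_nodes)]
        return node.run(job)

gateway = Gateway()
print(gateway.submit("plan route to the charging dock"))
print(gateway.submit("quantum: optimize the whole fleet's routes"))
```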
New quantum systems can send data between quantum computers in the form of qubits. Quantum networks make data transmission more secure than regular binary systems, and those robots require the highest level of data security, which should block things like attempts to hack the system.
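One reason qubit transmission is considered more secure is that measuring a qubit in the wrong basis disturbs it, so eavesdropping becomes detectable. The toy simulation below illustrates this with a BB84-style key exchange; it is a classical sketch written only for illustration, not part of any real quantum network stack, and the parameters are arbitrary.

```python
import random

def bb84(n_bits: int, eavesdrop: bool) -> float:
    """Return the error rate on rounds where sender and receiver used the same basis.
    With an intercept-and-resend eavesdropper the error rate rises to roughly 25%,
    which the two parties can detect by comparing a sample of their key."""
    errors, compared = 0, 0
    for _ in range(n_bits):
        bit = random.randint(0, 1)
        send_basis = random.choice("ZX")
        value, basis_in_flight = bit, send_basis

        if eavesdrop:
            # Eve measures in a random basis and resends; a wrong basis randomizes the bit.
            eve_basis = random.choice("ZX")
            if eve_basis != basis_in_flight:
                value = random.randint(0, 1)
            basis_in_flight = eve_basis

        recv_basis = random.choice("ZX")
        received = value if recv_basis == basis_in_flight else random.randint(0, 1)

        if recv_basis == send_basis:  # keep only matching-basis rounds
            compared += 1
            errors += (received != bit)
    return errors / compared

print("error rate, no eavesdropper:  ", round(bb84(20000, False), 3))  # ~0.0
print("error rate, with eavesdropper:", round(bb84(20000, True), 3))   # ~0.25
```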
https://bigthink.com/mini-philosophy/if-we-want-an-artificial-superintelligence-we-may-need-to-let-it-feel-pain/
https://www.thelancet.com/journals/lanpsy/article/PIIS2215-0366(24)00404-8/abstract