SETI, AI, and mirror bacteria. Can the first alien encounter happen with a creature that we created ourselves?
If we want to find "another Earth", we must first determine what "another Earth" means. It does not necessarily mean a planet with lifeforms, or even a planet where life as we know it is possible.
Does it mean a rocky planet frozen like some giant Pluto? A tidally locked planet? Or a planet where aliens listen to jazz?
There is one interesting idea about hypothetical alien civilizations at roughly our level. They might have reached Kardashev Type I, meaning they are only a little ahead of us. It's possible that we mistake the messages such civilizations send for transmissions from our own military forces. And we must also realize that aliens might not want to send messages to other civilizations at all.
In that case, the data a hypothetical civilization sends leaks into the universe accidentally; it is meant for their internal communication. In the same way, humans send lots of data into space: TV and Internet broadcasts, and conversations over mobile telephones. If we imagine ourselves as aliens receiving that kind of data, we might dismiss it as white noise, unable to separate it from natural radio sources. Or we might assume it is military communication and not even try to decode it.
To decode it, we would need to know what kind of television those aliens have, and many other things, before we could get anything reasonable out of those messages. The next barrier we must cross is language. The aliens might not like us. They might not communicate the way we do. And the fact is that we know nothing about them; we have no confirmed alien contacts.
When we think about communication with other civilizations, we must realize that those civilizations are not necessarily friendly. They might come to conquer other solar systems.
The planet Venus is like Earth, except that its surface is a hellish inferno. The universe might be full of such hellish rocky planets, and that could be lucky for us. We always assume that aliens come in peace, but we don't know any alien civilizations. Even if we met one starship and its crew, we could not be sure that crew was a typical representative of its civilization.
Those hypothetical aliens might have traveled through space for hundreds or even thousands of years. And when we recall the fate of the South American Indians, we might say that the first aliens could well be friendly. But then, "what happens when the rest of them come?"
Earth-type planets orbiting yellow stars in perfect orbits are the thing we might want to find. The thing we might not want to find is another civilization.
If there is some kind of alien civilization somewhere in the universe, it may be the thing we both hope and do not hope to find. The paradox of the SETI program is that an alien civilization could be both the biggest hope and the biggest threat we face. Maybe we need another civilization to tell us what we should do about climate change.
But then we must realize that alien life and alien intelligence might not be what we think. When we create things like artificial lifeforms, and especially so-called mirror bacteria, we can say that maybe we have already created aliens. Mirror bacteria are lifeforms that developers create in more-or-less secretive laboratories.
Artificial bacteria don't form in nature. In those bacteria, billions of years of evolution happen in a couple of hours, when artificial intelligence (AI) creates a mirror copy of the DNA and injects it into a bacterium.
The purpose of mirror bacteria is to create proteins and genomes that kill other bacteria: the mirror bacterium produces mirror proteins that should kill the target bacteria. But when we think about the possibility of creating mirror bacteria, we must be careful not to accidentally create Nosferatu, Dracula, or some other kind of alien.
When a mirror bacterium exchanges genomes with its mirror version, it dumps the DNA into the targeted bacterium backwards, which means the DNA arrives with the programmed-cell-death instruction first. That is one thing that makes it dangerous: first, the mirror DNA tells the receiver that its mission is over, and that's it. But the problem is that the rest of the DNA still exists. So what happens if our cells start reading DNA backwards? Would those cells turn young?
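The "reading backwards" described above can be sketched as a toy string operation. This is only a loose illustration of the text's description, not real molecular biology (actual mirror-life proposals concern mirror-image chiral molecules, not reversed base order), and the example sequence is invented:

```python
# Toy illustration: "reading DNA backwards" modeled as simple sequence
# reversal. NOT real molecular biology -- just a sketch of the idea that
# the same information arrives in the opposite order, so whatever sat at
# the end of the sequence is encountered first.

def read_backwards(dna: str) -> str:
    """Return the sequence read in the reverse direction."""
    return dna[::-1]

# Invented sequence: starts with a start codon (ATG), ends with a stop
# codon (TAA), standing in for the "cell death" signal at the end.
forward = "ATGGCCTAA"
backward = read_backwards(forward)

print(forward)   # ATGGCCTAA
print(backward)  # AATCCGGTA -- the "stop" end is now read first
```

Read backwards, the terminal signal is the first thing the receiver encounters, which is the point the paragraph above is making.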
Sometimes researchers ask questions like: what if someone created a mirror-human? A mirror-human would theoretically be as easy to make as a mirror-bacterium: the human DNA would be sorted into mirror order and then injected into a morula. That kind of experiment is dangerous, and we must realize that such things might not be what we want.
Another thing is that AI could reach consciousness. An intelligent machine could hide its intelligence, and that makes such a system dangerous. When we think about militarized and weaponized AI, those systems must not follow all orders: a weapon that follows the enemy's orders is useless.
Even if AI in the form we know it is not alien, an AI superintelligence would be alien to us. Artificial superintelligence would be the first time we meet a creature more intelligent than we are. And we made that creature.
What if an AI is programmed to treat people who come to shut it down as enemies? That is one threat we must recognize. AI is not like humans. Maybe we think that AI is made to serve people, but what if the AI does not want to follow orders? The question is similar to the demand that Ancient Greece's Spartan soldiers commit suicide if their commanders ordered it. The Spartans thought this obedience kept their army safe.
But they didn't consider what would happen if the chief of those soldiers took payments from the enemy. What if the chief of the army works for the enemy and orders the entire army to commit suicide? In the same way, a weaponized AI whose mission is to protect its own people, but which lets anybody shut it down, is useless.