Assuming that dark energy continues to make the universe expand at an accelerating rate, in about 150 billion years all galaxies outside the Local Supercluster will pass behind the cosmological horizon.
I think the basic challenge to this scenario - and I'm not saying it's an unsolvable challenge - is that if Alpha lies on the border of Beta and Beta says "I have early history for sale from Gamma, do you want it?" then Alpha's willingness to pay Beta is capped by Alpha's trust that neither Beta nor Gamma just made up the data. If you don't believe in logical decision theory, then Alpha is willing to pay at most a bit less than the amount it would cost Beta or Gamma to fake the data indistinguishably to Alpha; depending on the data, this may be less than the cost to Gamma and Beta of transporting the data to their borders for future trade.
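As a toy illustration of that cap (the numbers and the `max_payment` helper here are made up for the example, not estimates of anything):

```python
# Toy model of the cap on Alpha's willingness to pay in the
# Alpha/Beta/Gamma data-trade scenario. All quantities are illustrative.

def max_payment(value_if_genuine: float,
                cheapest_faking_cost: float,
                epsilon: float = 1.0) -> float:
    """Without logical decision theory, Alpha should offer at most a bit
    less than the cheapest way Beta or Gamma could have faked the data:
    any higher offer makes fabrication the profitable move."""
    return min(value_if_genuine, cheapest_faking_cost - epsilon)

# Example: Gamma's early history is worth 1000 units to Alpha, but an
# indistinguishable fake could be produced for 300.
print(max_payment(value_if_genuine=1000.0, cheapest_faking_cost=300.0))
# -> 299.0. If hauling the genuine data to the border costs Beta and
#    Gamma more than this, the trade never happens.
```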
There is a way to do it. They would first need to attract our attention, then send a blueprint for a computer, and then a program to run on it. I even wrote an article about it: "The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI"
No, but the situation with interstellar communication seems quite different: unlike with Earth nations, there may be a category of civilizations with a motive to incidentally* destroy us and, if and only if we tune in, the means to carry this out. Also unlike Earth nations, we have no track record of communication being mostly beneficial; instead we have, so far, a curious interstellar silence.
*And perhaps not just incidentally: for all we know, it may be common lore by now that a good way to keep one's galaxy free of troublesome newcomers, and welcome the rest, is to send them a technology which is very beneficial when handled maturely and very self-destructive otherwise.
Would you prevent communication between nations today on the basis of this fear?
I don't see how literal malware could infect our hardware. Persuasive messages, on the other hand, seem quite a danger: if they become widely known and contain instructions to build something, some damn fool will eventually do it.
Executing code sent by aliens seems quite different from letting them try to persuade us.
The idea of sending signals after bursts isn't my idea, just an idea I like a lot.
To build on this: aggressively expanding civilizations could potentially put advanced malware in the signals they send, in order to take over the infrastructure of unsuspecting receivers. By forming a beachhead for further colonization in the target's area, aggressive civilizations could expand their physical reach further (since a signal travels faster than any probe, and so sweeps out a larger volume in the same time; see the sketch below).
This is one reason we should be very cautious about listening to alien signals—arguably it is more dangerous to listen to signals than it is to send them (since the expected number of civilizations that could harm us with malware is larger than the number that could harm us with physical contact).
h/t Carl Shulman
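To put rough numbers on the volume point above (the probe speeds are illustrative assumptions):

```python
# Why malware-by-signal out-ranges probes: signals travel at c, probes at
# some fraction f of c, so over the same time a signal sphere encloses
# (1/f)^3 times the volume of the probe sphere.

def signal_to_probe_volume_ratio(probe_speed_fraction_of_c: float) -> float:
    return (1.0 / probe_speed_fraction_of_c) ** 3

for f in (0.9, 0.5, 0.1):
    ratio = signal_to_probe_volume_ratio(f)
    print(f"probes at {f:.0%} of c -> signals reach {ratio:.1f}x the volume")
# Even very fast (0.5c) probes are out-volumed 8:1, so more civilizations
# can reach us with a signal than with physical contact.
```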
On second thought, I withdraw the second paragraph of my comment above. The chance that our current global society would have the discipline to practice safe interstellar communication seems near zero.
I was surprised to see that our local supercluster would be causally disconnected from the rest of the universe in far less than a trillion years. Maybe I confused the Local Group with the Local Supercluster, or the visible horizon with the causal horizon. Could a sufficiently advanced civilization increase that 150-billion-year figure much by corralling extra mass into our supercluster, or otherwise?
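For intuition on the timescale, here is a rough sketch under a pure de Sitter approximation; the ~16 Gyr asymptotic Hubble time and the example distances are assumptions, and the transition from today's matter-plus-Λ expansion is ignored, so this is not a careful calculation:

```python
import math

# Rough de Sitter approximation: once expansion is Lambda-dominated, the
# event horizon sits at a constant proper distance c/H, and an unbound
# (comoving) galaxy at distance d0 recedes as d0 * exp(t / T_H), crossing
# the horizon after T_H * ln((c/H) / d0). Numbers are assumptions.

HUBBLE_TIME_GYR = 16.0                 # assumed asymptotic 1/H
HORIZON_GLY = HUBBLE_TIME_GYR          # c/H, since c = 1 Gly/Gyr

def gyr_until_horizon_crossing(distance_gly: float) -> float:
    return HUBBLE_TIME_GYR * math.log(HORIZON_GLY / distance_gly)

# A galaxy ~0.1 Gly away (roughly the edge of the local supercluster):
print(f"{gyr_until_horizon_crossing(0.1):.0f} Gyr")    # ~81 Gyr
# One ten times closer only buys a few extra Hubble times:
print(f"{gyr_until_horizon_crossing(0.01):.0f} Gyr")   # ~118 Gyr
```

If this sketch is roughly right, the 150-billion-year order of magnitude is plausible for unbound galaxies, and since the crossing time grows only logarithmically as distance shrinks, corralling extra mass would mainly help by gravitationally binding it to the supercluster, since bound structures don't comove with the expansion at all.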
Is the idea of piggybacking one's PR efforts on GRBs something you just came up with? If so, I wonder whether our current GRB observation practices would change if people were aware of the idea. (I am far from convinced that sending such a signal ourselves would be wise, FWIW).