20 Comments

Maybe post-Soviet privatizations are relevant. In Russia they gave out shares of newly privatized industries to workers, since the industries used to be nationally owned. Most communist-bloc people had no experience with capitalism; they didn't know what to do, and sold their shares cheap or got scammed. It was all happening very quickly, since there were worries the Communists would get back into power; there was a huge rush to privatize immediately and build up an anti-communist power bloc. Oligarchs and criminals quickly took over huge swathes of the economy.

author

yeah this is a great analogy!

So... BDs* and group BDs jacked into FOREVER as an AI procedurally generates content based upon the container owner's desires.

😬

This is not the take.

At some point on some platform (possibly here), I said that the Demiurgos cracks down on ascension by creating deeper layers of abstraction from base reality.

"Digital life" is the worst possible manifestation of that.

*braindances

author

Your own personal utopia can be anything you want. Are you projecting, maybe?

Most of the stuff in here is correct, but I'm not sure about Elysium itself. Bostrom's "Deep Utopia" talks about this topic. There's value in my experience being "real".

author

What is not "real" here?

Your suggestions sound similar to Nozick's experience machine, and perhaps even to wireheading.

You can get real experience by visiting other people or inviting them to you.

I like this idea, but I have some questions. Apologies if you're planning to cover any of these points in your next essay.

1. How would this system function under the condition of many sovereign entities competing, whether that is regular countries, network states, unrestricted corporations that have built different AIs, or whatever else? Is the assumption that humans have no say in this future, so the ASI implementing Elysium will act without any human input? If that were not the case, or if there were multiple ASIs with different goals, then we would have the same situation as before: various entities attempting to exert control over populations of humans and in the worst case scenario getting into conflicts with each other that would probably cause a catastrophic number of human casualties.

2. Since you mentioned resource limitations, there would need to be an enforcement mechanism against humans or other entities that did not agree to obey resource constraints. Wouldn't that imply we need a single enforcing entity that is effectively a god over all of human space and can exert overwhelming power against anyone who defects from the system? I'm not saying this is necessarily bad, but it would basically be a secular realization of the Christian rapture and would mean no one will have any meaningful agency over their lives after this ASI is brought into being.

3. Where exactly do the beings who are created in people's individualized utopias come from? You mentioned there would be rules like "you can't torture them", which seems to imply the ASI provides some kind of template you have to work within; if I understand correctly, it is as if you are a video game designer creating characters, only they are conscious entities? To me that seems less efficient than just requesting what you want from the ASI and having it split off a subroutine to model a certain "character". Obviously torture is bad since it violates the consent of the digital entities, but what about situations where the consent is blurrier? An example I am thinking of: if you create a planet of catgirls or whatever that are theoretically very intelligent but incapable of desiring anything except sex, couldn't that be considered equivalent to drugging humans for their entire lives to use them as sex slaves? I guess my inclination would be that either they should not have any capacities they don't need to serve their function (so they would be retarded outside of their sexual capacities, or simply lack qualia), or they have traits that bias them toward particular behavior, such as nymphomania, but are still fully functioning beings capable of other behavior as well. Admittedly this is very philosophical and relies on a lot of speculative assumptions.

4. How long do you think we have before we either achieve something like Elysium or are killed by AI? If Kurzweil's 2040 or 2045 estimate is accurate, then I would be surprised if we can completely replace the entire global political system without a lot of intermediate steps in which AIs that are advanced but not godlike take over most of the functions of government. Otherwise, I think it's plausible that new polities could be created within that timeframe, whether in the ocean, off Earth, or within part of the territory of presently existing nation-states, but it's doubtful democratic governance will completely cease to exist anywhere. In that scenario, do you think it would be possible to develop aligned AI, or do the presently existing superpowers need to be removed?

author

> no one will have any meaningful agency over their lives after this ASI is brought into being.

You have agency to decide what happens in your part of ELYSIUM.

I get that, but I'm talking about the fact that this premise hinges on the assumption that no one will ever gain even an infinitesimal fraction of the power of this ASI, or else they would be able to challenge it. Presumably creating other ASIs would be harshly prevented as well, so there would be a single all-powerful god ruling over as much of space as it could expand into for the rest of the age of the universe. I'm not saying that's necessarily bad, just clarifying, since it sounds like that's what Elysium would entail.

author

yes, that is exactly the plan, and it's quite similar to a society with military forces that have a monopoly on violence.

author

> in the worst case scenario getting into conflicts with each other

This post does not cover conflict.

author

> without any human input?

No input is needed. And indeed it would be very dangerous to have anyone giving input to it.

If there is no human input, what makes you think it would decide on the solution you want on its own? Also, what about the other questions?

author

It wouldn't decide; this would be its programming. Once in motion, no input is needed.

And who is going to create this AI? If it is a particular nation, other nations will oppose its imperative; will their governments just get crushed?

author

> And who is going to create this AI

We are, man. Nobody else is gonna do it.

Alright, I am a wordcel, so probably not me personally, but I will try to do what I can to help the people working on it. I saw you are involved in the Praxis project; have you proposed this idea to them?
