The basic problem with this (that I can see) is that you begin with a highly abstract proof of the existence of an AIXI-style mathematical formalism (representing an idealised superintelligence), and then shift to empirical, heuristic arguments based on extrapolations from current technology and social systems. These are quite different kinds of argument, and the former does not necessarily shed much light on the latter.

On your AIXI-style proof: as far as I can tell the core idea is that the universe is finite, and that therefore any kind of intelligent agent within the universe could in principle be replaced with a ginormous lookup table. (Presumably this includes agents that receive some inputs S1...St, take some action At, receive new inputs St+1...St+n, take another action, etc.) [1]
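
To make the lookup-table equivalence concrete, here is a minimal sketch in Python. The percept alphabet, action set, horizon, and toy policy below are all invented purely for illustration; the point is only the construction itself.

```python
from itertools import product

# Toy finite setting (invented for illustration): a small percept alphabet,
# a small action set, and a fixed, finite interaction horizon.
PERCEPTS = ("s0", "s1")
ACTIONS = ("a0", "a1")
HORIZON = 3

def agent_policy(history):
    """Stand-in for any deterministic agent: maps the percept history seen so far to an action."""
    # Arbitrary toy rule: act on the parity of 's1' percepts received.
    return ACTIONS[sum(1 for s in history if s == "s1") % 2]

# Because percepts, actions, and the horizon are all finite, the policy can be
# tabulated exhaustively -- one entry per possible history -- and the table then
# reproduces the agent's behaviour exactly, with no "reasoning" left inside it.
lookup_table = {
    hist: agent_policy(hist)
    for t in range(HORIZON + 1)
    for hist in product(PERCEPTS, repeat=t)
}

assert lookup_table[("s1", "s0")] == agent_policy(("s1", "s0"))
print(f"Entries needed even in this toy case: {len(lookup_table)}")  # 1 + 2 + 4 + 8 = 15
```

Nothing in the construction changes when the percept set, action set, and horizon are made astronomically larger; only the size of the table does.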

I do not think anyone familiar with mathematics and computer science would argue with this. The problem is that it does not shed much light on the empirical, heuristic arguments you make in this article. I think there are practical or philosophical criticisms that could be made of those empirical arguments, but by framing the argument in mathematical terms you pre-empt most such criticisms, since you can simply say that the critics don't understand, or haven't addressed, the mathematical argument.

[1] Regarding modelling an intelligent agent as a lookup table, there is a tangential problem, which is separate from my points above, but which I think is relevant. Take any intelligent agent -- superintelligent or not -- which updates on new information. To model such an agent -- one that begins not knowing everything -- the lookup table or equivalent basically has to incorporate all the information in the universe, or at least all the information the agent would act on once it possesses it.

E.g., suppose you are modelling a strictly limited tool AI that performs chemistry experiments, and uses the results of those experiments to plan new experiments. Assume the AI's decisions are determined by the data it has received up to a given point, such that it can be modelled by a lookup table or an equivalent structure of the kind you describe. The lookup table therefore *has to incorporate* (in some form) a model of all the aspects of chemistry the AI might conceivably interact with. If the AI is able to synthesise new molecules, the model has to be even more complicated.
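
A toy sketch of that chemistry example makes the point visible. The outcome set, experiment menu, and planning rule below are all invented for illustration; the sense in which the table "incorporates" the relevant chemistry is that every entry pre-answers one possible history of experimental results.

```python
from itertools import product

# Invented toy version of the tool AI: each experiment yields one of a few outcomes,
# and the next experiment is a deterministic function of all outcomes seen so far.
OUTCOMES = ("no_reaction", "precipitate", "colour_change")
EXPERIMENTS = ("exp_A", "exp_B", "exp_C")
MAX_STEPS = 4

def plan_next_experiment(results_so_far):
    """Stand-in for the tool AI's planner (arbitrary toy rule)."""
    informative = sum(1 for r in results_so_far if r != "no_reaction")
    return EXPERIMENTS[informative % len(EXPERIMENTS)]

# Tabulating the planner means pre-answering every result history the AI could ever see.
# Each entry implicitly encodes a chemistry-dependent fact: "given these outcomes, run this next".
table = {
    hist: plan_next_experiment(hist)
    for t in range(MAX_STEPS + 1)
    for hist in product(OUTCOMES, repeat=t)
}

print(f"Entries for {len(OUTCOMES)} outcomes over {MAX_STEPS} steps: {len(table)}")  # 121
```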

For a superintelligent agent the same argument holds, but the lookup table would in essence have to incorporate a model of the entire universe -- including the complete laws of physics and the physical properties of every entity the SAI might interact with. By "incorporate a model" I mean that the astronomically vast network of inputs and outputs constituting the lookup table would in some sense have to correspond to a complete model of the universe (amongst other things).
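
To put rough numbers on "astronomically vast": with k distinguishable percepts per step and a horizon of T steps, a complete table needs on the order of k^T entries. The (k, T) pairs below are arbitrary illustrative choices, not claims about any real agent.

```python
# Back-of-the-envelope table sizes: sum_{t=0..T} k^t entries, i.e. roughly k^T.
def table_size(num_percepts, horizon):
    return sum(num_percepts ** t for t in range(horizon + 1))

for k, T in [(2, 10), (10, 20), (10, 100)]:
    print(f"k={k:>2}, T={T:>3}: about {table_size(k, T):.2e} entries")
```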

Author:

Thanks.

Perhaps I should add something to the post making clear why I think the empirical jump is justified.
