
Unlike Word, the human genome is self-hosting. That means that it is paying fair and square for any complexity advantage it might have -- if Microsoft found that the x86 was not expressive enough to code in a space-efficient manner, they could likewise implement more complex machinery to host it.

Of course, the core fact is that the DNA of eukaryotes looks memory-efficient compared to the bloat of Word.

There was a time when Word was shipped on floppy disks. From what I recall, it came on multiple floppies, but on the order of ten, not a thousand. With modern CD-ROMs and DVDs, there is simply less incentive to optimize for size. People are not going to switch away from Word to LibreOffice just because the latter is only a gigabyte.

I think that, formally, the Kolmogorov complexity would have to be stated as the length of a description of a Turing machine (not that this gets completely rid of the wiggle room).
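For concreteness, the usual formalization (where the choice of universal machine shifts the value only by an additive constant -- which is exactly the remaining wiggle room):

$$K_U(x) = \min\{\,|p| : U(p) = x\,\}$$

where $U$ is a fixed universal Turing machine and $p$ ranges over programs that make $U$ output $x$.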

Of course, TMs do not offer a great gaming experience.

"The operating system and the hardware" is certainly an upper bound, but also quite certainly to be overkill.

Your floating-point unit or your network stack is not going to be very busy while you play Tetris.

If you cut it down to the essentials (getting rid of things like scores which have to be displayed as characters, or background graphics or music), you have a 2d grid in which you need to toggle fields, which is isomorphic to a matrix display. I don't think that having access to Boost or JCL or the Python ecosystem is going to help you much in terms of writing a shorter program than you would need for a bit-serial processor. And these things can be crazy small -- this one takes about 200 LUTs and FFs. If we can agree that a universal logic gate is a reasonable primitive which would be understandable to any technological civilization, then we are talking on the order of 1k or 2k logic gates here. Specifying that on a circuit diagram level is not going to set you back by more than 10kB. (A back-of-envelope check of that figure is sketched below.)
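A quick sanity check of the 10kB claim, as a sketch under my own assumptions: a netlist of two-input NAND gates (universal, so no per-gate type field), each gate specified by naming its two input signals.

```python
import math

# Assumed encoding: each of N NAND gates names its two inputs among
# the N gate outputs plus a handful of primary inputs.
n_gates = 2000                    # upper end of the 1k-2k estimate above
n_signals = n_gates + 64          # gate outputs plus some primary inputs
bits_per_ref = math.ceil(math.log2(n_signals))  # 12 bits to name a signal
bits_per_gate = 2 * bits_per_ref

print(n_gates * bits_per_gate / 8)  # 6000.0 bytes -- under the 10kB bound
```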

So while you are technically correct that there is some overhead, I think directionally Malmesbury is correct in that the binary file makes for a reasonable estimate of the information content, while adding the size of the OS (sometimes multiple floppy disks, these days!) will lead to a much worse estimate.

Agreed.

If the authors claim that adding randomness to the territory in classical mechanics requires making it more complex, they should also notice that, for quantum mechanics, removing the probability from the territory (as Bohmian mechanics does) tends to make the theories more complex.

Also, QM is not a weird edge case to be discarded at leisure; it is, to the best of our knowledge, a fundamental aspect of what we call reality. Sidelining it is like arguing "any substance can be divided into arbitrarily small portions" -- sure, as far as everyday objects such as a bottle of water are concerned, this is true to some very good approximation, but it will not convince anyone.

Also, I am not sure that, for the many-worlds interpretation, the probability of observing spin-up when looking at a mixed state is something which firmly lies in the map. From what I can tell, what happens in MWI is that the observer becomes entangled with the mixed state. From the point of view of the observer, they find themselves either in the state where they observed spin-up or the one where they observed spin-down, but describing their world model before the observation as "I will find myself either in spin-up-world or spin-down-world, and my uncertainty about which of these it will be is subjective" seems to grossly misrepresent that model. They would say "One copy of myself will find itself in spin-down-world, and one in spin-up-world, and if I were to repeat this experiment to establish a frequentist probability, I would find that the probability of each outcome is given by the squared magnitude of the coefficient of that part of the wave function."
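In symbols, that last statement is just the Born rule: for a pre-measurement state written in the measurement basis as

$$\psi = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

the long-run frequencies come out as $P(\uparrow) = |\alpha|^2$ and $P(\downarrow) = |\beta|^2$.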

So, in my opinion,

  • If a blackjack player wonders if a card placed face-down on the table is an ace, that is uncertainty in their map.
  • If someone wonders how a deterministic but chaotic physical system will evolve over time, that is also uncertainty in the map.
  • If someone wonders what outcome they are likely to measure in QM, that is (without adding extra epicycles) uncertainty in the territory.
  • If someone wonders how a large statistical ensemble which is influenced by QM at the microscopic level (such as a real gas) will evolve, that might be mostly uncertainty in the map on very short time scales (where position and momentum uncertainty and the statistical nature of scattering cross-sections would not constrain Laplace's demon too much), but it lies in the territory for most useful time spans.

Okay. So from what I understand, you want to use a magnetic effect observed in plasma as a primary energy source.

Generally, a source of energy works by taking a fuel which contains energy and turning it into a less energetic waste product. For example, carbon and oxygen can be burned to form CO2. Or one can split some uranium nucleus into two fragments which are more stable and reap the energy difference as heat.

Likewise, a wind turbine will consume some of the kinetic energy of the air, and a solar panel will take energy from photons. For a fusion reactor, you gain energy because you turn lighter nuclei (hydrogen isotopes or helium-3) into helium-4, which is extraordinarily stable.
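To make "reap the energy difference" concrete, here is a sketch of the bookkeeping for D-T fusion, using standard tabulated nuclide masses:

```python
# Energy released per D + T -> He-4 + n reaction, from the mass defect.
m_D, m_T = 2.014102, 3.016049    # atomic masses in u (tabulated values)
m_He4, m_n = 4.002602, 1.008665  # helium-4 and neutron

amu_to_MeV = 931.494             # 1 u in MeV/c^2

delta_m = (m_D + m_T) - (m_He4 + m_n)     # mass that became binding energy
print(f"{delta_m * amu_to_MeV:.1f} MeV")  # ~17.6 MeV per reaction
```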

Thus, my simple question: where does the energy for your invention come from? "The plasma" is not a sufficient answer, because on Earth we encounter plasma far too rarely to exploit it, and in fusion reactor designs it is generated painstakingly at a huge energy cost.

Something goes into your reactor, and something comes out of it. If the two are the same, then your reactor can hardly have extracted energy from them.

Seconded. Also, in the second picture, that line is missing, so it seems that it is just Zvi complaining about the "win probability"?

My guess is that the numbers (sans the weird negative sign) might indicate the returns in percent for betting on either team. Then, if the odds were really 50:50 and the bookmaker were not taking a cut, they should be 200 each? So 160 would be fair if the first team had a win probability of 0.625, while 125 would be fair if the other team had a win probability of 0.8. Of course, these add up to more than one, which is to be expected: the bookmaker wants to make money. If they added up to 1.1, that would (from my gut feeling) be a ten percent cut for the bookie. Here, it looks like the bookie is taking almost a third of the money? Why would anyone play at those odds? I find it hard to imagine that anyone can outperform the wisdom of the crowds by a third. The only reason to bet here would be if you knew the outcome beforehand because you had rigged the game.
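To spell out the arithmetic behind my guess (which, per the edit below, turned out not to match the actual convention), a minimal sketch where a quote of 160 is taken to mean "a winning bet returns 160% of the stake":

```python
def implied_prob(quoted_return_pct: float) -> float:
    """Win probability at which this payout would be fair (zero edge)."""
    return 100.0 / quoted_return_pct

p_a, p_b = implied_prob(160), implied_prob(125)
print(p_a, p_b)                    # 0.625 and 0.8

overround = p_a + p_b              # 1.425; would be 1.0 for a fair book
print(f"{1 - 1 / overround:.0%}")  # ~30% -- "almost a third" for the bookie
```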

This is all hypothetical; for all I know, the odds in sports betting are customarily stated as the returns on a 70$ bet or whatever.

Edit: seems I was not very correct in my guess.


There is also a quantum version of that puzzle.

I have two identical particles of non-zero spin in identical states (except possibly for the spin direction). One of them is spin-up. What is the probability that both of them are spin-up?

For fermions, that probability is zero, of course. Pauli exclusion principle.

For bosons, ...

... the key insight is that you cannot distinguish them. The possible wave functions are either (spin-up, spin-up) or (spin-up, spin-down) = (spin-down, spin-up). Hence, you get p=1/2. (From this, we can conclude that boys (p=1/3) are made up of 2/3 bosons and 1/3 fermions.)
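A counting sketch of both versions (assuming, as above, that the distinguishable orderings and the symmetrized boson states are each equally likely):

```python
from itertools import product

# Distinguishable particles (the classical "boy or girl" version):
# UU, UD, DU are the equally likely outcomes with at least one U.
classical = [s for s in product("UD", repeat=2) if "U" in s]
print(classical.count(("U", "U")) / len(classical))  # 1/3

# Bosons: (U,D) and (D,U) are the same symmetric state, so they
# collapse into a single outcome.
bosonic = sorted({tuple(sorted(s)) for s in classical})
print(bosonic.count(("U", "U")) / len(bosonic))      # 1/2
```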

Let us assume that the utility of personal wealth is logarithmic, which is intuitive enough: 10k$ matters a lot more to you if you are broke than if your net worth is 1M$.
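A minimal illustration of that assumption (the "broke" baseline of 1k$ is my choice, to keep the logarithm finite):

```python
import math

def utility_gain(wealth: float, windfall: float = 10_000) -> float:
    """Increase in log-utility from receiving the windfall, in nats."""
    return math.log(wealth + windfall) - math.log(wealth)

print(utility_gain(1_000))      # ~2.40 nats: life-changing when nearly broke
print(utility_gain(1_000_000))  # ~0.01 nats: barely noticeable at 1M$
```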

Then, by your definition of exploitation, every transaction in which a poor person pays a rich person and thereby increases the rich person's wealth is exploitative. The worker surely needs the rent money more than the landlord, so the landlord should cut the rent to the point where he does not make a profit. Likewise the physician providing aid to the poor, or the CEO selling smartphones to the middle class.

Classifying most of capitalism as "exploitative" (per se) is of course not helpful. You can add epicycles to your definition by considering money rather than its subjective utility, but under that definition all financial transactions would be 'fair': the poor person values 1k$ more than his kidney, while the rich person values the kidney more than 1k$, so they trade and everyone is better off (even though we would likely consider this trade exploitative).

More generally, to have a definition of fairness, we need some unit of interpersonal utility. Consider the ultimatum game. If both participants are of similar wealth, with similar needs, and have worked a similar amount for the gains, then a 50:50 split seems fair. But if we don't have a common frame of utility, perhaps because one of them is a peasant and the other is a feudal king, then objectively determining what a fair split is seems impossible.

Some comments.

 

> [...] We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.

Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.

 

Personally, I am reluctant to tell superintelligences how they should coordinate. It feels like some ants looking at the moon and thinking "surely, if some animal is going to make it to the moon, it will be a winged ant." Just because market economies have absolutely dominated the period of human development we might call 'civilization', it is not clear that ASIs will not come up with something better.

> The era of human achievement in hard sciences will probably end within a few years because of the rate of AI progress in anything with crisp reward signals.

As an experimental physicist, I have opinions about that statement. Doing stuff in the physical world is hard. The business case for AI systems which can drive motor vehicles on the road is obvious to anyone, and yet autonomous vehicles remain the exception rather than the rule. (Yes, regulations are part of that story, but not all of it.) By contrast, the business case for an AI system which can cable up a particle detector is basically non-existent. I can see an AI either using a generic mobile robot developed for other purposes to plug in all the BNC cables, or using a minimum-wage worker with a heads-up display as a bio-drone -- but more likely in two decades than in a few years.

Of course, experimental physics these days is very much a team effort -- the low-hanging fruit has mostly been picked; nobody is going to discover radium or fission again, at most they will be a small cog in a large machine which discovers the Higgs boson or gravitational waves.[1] So you might argue that experimental physics today is already not a place for peak human excellence (a la the Humanists in Terra Ignota).

 

More broadly, I agree that if ASI happens, most unaugmented humans are unlikely to stay at the helm of our collective destiny (to the limited degree they ever were). Even if some billionaire manages to align the first ASI to maximize his personal wealth, if he is clever he will obey the ASI just like all the peasants. His agency is reduced to not following the advice of his AI on some trivial matters. ("I have calculated that you should wear a blue shirt today for optimal outcomes." -- "I am willing to take a slight hit in happiness and success by making the suboptimal choice to wear a green shirt, though.") 

Relevant fiction: Scott Alexander, The Whispering Earring.

Also, if we fail to align the first ASI, human inequality will drop to zero. 

  1. ^

     Of course, being a small cog in some large machine, I will say that.


Some additional context.

> This fantasy world is copied from a role-playing game setting—a fact I discovered when Planecrash literally linked to a Wiki article to explain part of the in-universe setting.

The world of Golarion is a (or the?) setting of the Pathfinder role-playing game, which is a fork of the D&D 3.5 rules[1] (but notably different from the Forgotten Realms, which is owned by WotC/Hasbro). The core setting is defined in some twenty-odd books which cover everything from the political landscape in dozens of polities to detailed rules for how magic and divine spells work. From what I can tell, the Planecrash authors mostly take what is given and fill in the blanks in a way which makes the world logically coherent, in the same way Eliezer did with the world of Harry Potter in HPMOR.

Also, Planecrash (aka Project Lawful) is a form of collaborative writing (or free-form roleplaying?) called glowfic. In this case, EY writes Keltham and the setting of dath ilan (which exists only in Keltham's mind, as far as the plot is concerned), plus a few other minor characters. Lintamande writes Carissa and most of the world -- it is clear that she is an expert in the Pathfinder setting -- as well as most of the other characters. At some point, other writers (including Alicorn) join and write minor characters. If you have read through Planecrash and want to read more from Eliezer in that format, glowfic.com has you covered. For example, here is a story which sheds some light on the shrouded past of dath ilan (plus more BDSM, of course).

 

  1. ^

    D&D 3 was my first exposure to D&D (via Bioware's Neverwinter Nights), so it is objectively the best edition. Later editions are mostly WotC milking the franchise with a complete overhaul every few years. AD&D 2 is also fine, if a bit less streamlined -- THAC0, armor classes going negative, and all that.


> One big aspect of Yudkowskian decision theory is how to respond to threats. Following causal decision theory means you can neither make credible threats nor commit to deterrence to counter threats. Yudkowsky endorses not responding to threats to avoid incentivising them, while also having deterrence commitments to maintain good equilibria. He also implies this is a consequence of using a sensible functional decision theory. But there's a tension here: your deterrence commitment could be interpreted as a threat by someone else, or vice versa.

I have also noted this tension. Intuitively, one might think that it depends on the morality of the action -- the robber who threatens to blow up a bank unless he gets his money might be seen as a threat, while a policy of blowing up your own banks in case of robberies might be seen as a deterrence commitment. 

However, this cannot be it, because decision theory works with arbitrary utility functions.

The other idea is that 'to make threats' is one of those irregular verbs: I make credible deterrence commitments, you show a willingness to escalate, they try to blackmail me with irrational threats. This is of course just as silly in the context of game theory.

One axis of difference might be whether you physically restrict your own options to prevent yourself from not following through on your threat (like a Chicken player removing their steering wheel, or the doomsday machine from Dr. Strangelove). But this only makes a difference for people who are known to follow causal decision theory, who would try to maximize utility in whatever branch of reality they find themselves in. From my understanding, adherents of functional decision theory do not need to physically constrain their options -- they would be happy to burn the world in one branch of reality if that was the dominant strategy before their opponent had made their choice.

Consider the ultimatum game (which gets covered in Planecrash, naturally), where one party makes a proposal on how to split 10$ and the other party can either accept it (gaining their share) or reject it (in which case neither party gains anything). In Planecrash, the dominant strategy is presented as rejecting unfair allocations with some probability, so that the expected value for the proposing party is lower than if they had proposed a fair split (see the sketch below). However, this hinges on the concept of fairness. If each dollar has the same utility to every participant, then a 50:50 split seems fair. But in the more general case, the utilities of both parties might be utterly incomparable, or the effort of both players might be very different -- an isomorphic situation is a silk merchant encountering the first of possibly multiple highwaymen and having to agree on a split, with both parties having the option to burn all the silk if they don't agree. Agreeing to a 50:50 split each time could easily make the business model of the silk merchant impossible.
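For concreteness, here is a sketch of that rejection scheme for the 10$ game, with a 50:50 split taken as the fair point; the exact margin by which unfairness is made unprofitable is my choice, not anything canonical:

```python
def accept_probability(offer: float, pot: float = 10.0, fair: float = 5.0) -> float:
    """Accept unfair offers just rarely enough that the proposer's
    expected take stays strictly below the fair amount."""
    if offer >= fair:
        return 1.0
    return 0.99 * fair / (pot - offer)  # proposer's EV: 0.99 * fair

for offer in (5.0, 4.0, 2.0, 1.0):
    p = accept_probability(offer)
    print(offer, round(p, 3), round(p * (10.0 - offer), 2))  # proposer's EV
```

However low the unfair offer, the proposer's expected value stays pinned at 4.95, strictly worse than the 5.0 they would get by proposing a fair split.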

"This is my strategy, and I will not change it no matter what, so you better adapt your strategy if you want to avoid fatal outcomes" is an attitude likely to lead to a lot of fatal outcomes. 
