Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Vulture 23 July 2014 07:53:18PM 0 points [-]

Okay, this is probably a stupid question but: What's the B word?

Comment author: GuySrinivasan 23 July 2014 07:27:42PM 0 points [-]

This is very important. Feeling like there are status consequences has an effect on decision making in humans regardless of whether there are actual status consequences.

Comment author: polymathwannabe 23 July 2014 07:27:08PM *  0 points [-]

I'm interested, but who reads Captain Planet fanfiction these days?

In response to How to Be Happy
Comment author: therufs 23 July 2014 07:02:47PM *  0 points [-]

We overestimate the misery we will experience after a romantic breakup, failure to get a promotion, or even contracting an illness. We also overestimate the pleasure we will get from buying a nice car, getting a promotion, or moving to a lovely coastal city. So: lower your expectations about the pleasure you'll get from such expenditures.

I found this useful for updating my views on what conditions are conducive to happiness: http://www.ted.com/talks/dan_gilbert_asks_why_are_we_happy/transcript

Comment author: iarwain1 23 July 2014 06:46:40PM *  0 points [-]

Oh yes, and check out hpmor.com.

Comment author: Tyrrell_McAllister 23 July 2014 06:29:17PM 0 points [-]

I know the basics of Causal and Evidential Decision Theory but I am baffled by Timeless Decision Theory. If you could point me in the direction of where to find articles on these issues that would be greatly appreciated. Thank you again for the thoughtful and useful reply, it helped a lot.

If you want to get a handle on the "Less Wrong" approach to decision theory, I'd recommend starting with Wei Dai's Updateless Decision Theory (UDT) rather than with Timeless Decision Theory (TDT). The basic mathematical outline of UDT is more straightforward, so you will be up and running quicker.

Wei's posts introducing UDT are here and here. I wrote a brief write-up that just gives a precise description of UDT without any motivation, justification, or examples.

Comment author: AmandaEHouse 23 July 2014 05:18:37PM 0 points [-]

Here are some relevant blockquotes of Bostrom's reasoning on brain-computer interfaces, from Superintelligence chapter 2:

It is sometimes proposed that direct brain–computer interfaces, particularly implants, could enable humans to exploit the fortes of digital computing—perfect recall, speedy and accurate arithmetic calculation, and high-bandwidth data transmission—enabling the resulting hybrid system to radically outperform the unaugmented brain.64 But although the possibility of direct connections between human brains and computers has been demonstrated, it seems unlikely that such interfaces will be widely used as enhancements any time soon.65

To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain. ... One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls. Treated subjects also reported more cognitive complaints.66 Such risks and side effects might be tolerable if the procedure is used to alleviate severe disability. But in order for healthy subjects to volunteer themselves for neurosurgery, there would have to be some very substantial enhancement of normal functionality to be gained.


Enhancement is likely to be far more difficult than therapy. Patients who suffer from paralysis might benefit from an implant that replaces their severed nerves or activates spinal motion pattern generators.67 Patients who are deaf or blind might benefit from artificial cochleae and retinas.68 Patients with Parkinson’s disease or chronic pain might benefit from deep brain stimulation that excites or inhibits activity in a particular area of the brain.69 What seems far more difficult to achieve is a high-bandwidth direct interaction between brain and computer to provide substantial increases in intelligence of a form that could not be more readily attained by other means. Most of the potential benefits that brain implants could provide in healthy subjects could be obtained at far less risk, expense, and inconvenience by using our regular motor and sensory organs to interact with computers located outside of our bodies. We do not need to plug a fiber optic cable into our brains in order to access the Internet.

Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence. Yet if one had a human-level AI, one could dispense with neurosurgery: a computer might as well have a metal casing as one of bone. So this limiting case just takes us back to the AI path, which we have already examined.

Comment author: E_Ransom 23 July 2014 05:02:22PM 0 points [-]

Hello and welcome to LessWrong!

We have something of a cross-pollination with tvtropes as well as a few other sites. The similar "archive diving" structures probably don't hurt.

Glad you decided to join in! The site could always use some bioscience expertise to complement our large computer-science population. Looking forward to seeing your contributions.

Comment author: Skeptityke 23 July 2014 04:58:12PM 2 points [-]

This seems highly exploitable.

Anyone here want to try to use these bogus numbers to get a publisher to market their own fanfiction?

In response to comment by TimS on The Affect Heuristic
Comment author: Pastafarianist 23 July 2014 04:54:54PM *  0 points [-]

Doing badly on written word problems can be explained by failure to comprehend linguistic concepts. Doing badly on math problems can be explained by failure to comprehend mathematical concepts.

You see, this explanation makes perfect sense.

Comment author: PhilGoetz 23 July 2014 04:45:16PM 0 points [-]

Agreed. Done. Thanks!

Comment author: paper-machine 23 July 2014 04:35:38PM 2 points [-]

The title's a lot funnier if you s/as/of/.

Comment author: Drahflow 23 July 2014 12:59:47PM 0 points [-]

Given identical monetary payoffs between two options (even after adjusting for the non-linear utility of money), choosing the non-ambiguous one has the added advantage of giving a bounded-rationality agent fewer possible futures to spend computing resources on while the utility-generating process runs.

Consider two options: a) You wait one year and get 1 million dollars. b) You wait one year and get 3 million dollars with 0.5 probability (decided after this year).

If you take option b), then depending on how fine-grained your planning is, all planning for the time after that year must essentially be done twice: once for the case where the 3 million dollars materialize and once for the case where they don't.
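To make the point concrete, here is a toy sketch (the purchase options and prices are entirely made up) of a bounded agent that plans by picking the best affordable purchase. The ambiguous option forces the planner to run once per possible future, doubling the computation:

```python
# Toy illustration: an agent that plans for each possible future
# must repeat its planning work once per branch, so the ambiguous
# option b) costs twice the planning computation of option a).

def plan(budget):
    """Toy planner: pick the priciest affordable purchase (names are made up)."""
    options = {"house": 900_000, "fund": 2_500_000, "island": 3_000_000}
    affordable = {k: v for k, v in options.items() if v <= budget}
    return max(affordable, key=affordable.get) if affordable else None

# Option a): one known future, so planning runs once.
plans_a = [plan(1_000_000)]

# Option b): two possible futures, so planning runs twice.
plans_b = [plan(3_000_000), plan(0)]

print(plans_a)  # ['house']
print(plans_b)  # ['island', None]
```

Of course, real planning is vastly more expensive than a dictionary lookup, which is exactly why the doubled branch count matters to a resource-limited agent.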

Comment author: eli_sennesh 23 July 2014 12:46:54PM 0 points [-]

So, basically, how are the interventions going so far? Are we winning or losing?

Comment author: polymathwannabe 23 July 2014 12:40:22PM 0 points [-]

One question I do have is what exactly is the importance of decision theories? [...] Are they applicable in real life situations or only in thought experiments?

One of the main functions of a good decision theory is to bridge the territory-map divide: by solving problems in your head, it shows you how to solve problems in the real world. You can identify a good decision theory when it works in theory and in practice. If a decision theory seems to work in practice, but is not describable in a precise language (e.g. "do what feels good"), it actually hasn't been well thought out and puts you at risk of being paralyzed when a very serious and very complex situation arises. On the other hand, if it only works in theory but is impracticable (e.g. "pray to Minerva for an omen"), it will be a waste of storage space in your head. In short, a decision theory should serve as a tool for you to manage your life.

Comment author: Wei_Dai 23 July 2014 10:54:47AM *  4 points [-]

What is the importance of finding a perfect decision theory?

Three motivations are common around here:

  1. Building a Friendly AI that is based on decision theory.
  2. Understanding what ideal rationality looks like, so we have a better idea of what to aim for as far as improving our own rationality.
  3. Curiosity. If we knew what the perfect decision theory was, many philosophical questions may be answered or would be closer to being answered.

For some relevant posts, see 1 and 2.

Comment author: Qwake 23 July 2014 07:54:49AM *  0 points [-]

I think luminosity is very important: making conscious, self-aware decisions instead of mindlessly responding to external stimuli is what separates humans from very complex robots. The more conscious we are, the better decisions we can make, because we can analyze our thought processes and eliminate biases and emotional flaws in our thinking. In my opinion, consciousness and rationality are directly proportional in humans. In short, any human who wants to become a more rational thinker would be well advised to take steps to increase their consciousness, or luminosity if you want to call it that. That is certainly what I am trying to do. Great series, by the way.

Comment author: private_messaging 23 July 2014 07:40:33AM -1 points [-]

I don't even see how to phrase your position coherently.

There are far more hypotheses that a human updates on when learning early in life than there are genes, so there simply aren't enough genes to assign priors to hypotheses individually. A lot in the human body (minor blood vessels, details of fingerprint patterns, etc.) is not set by genes; most of the fine detail isn't individually controlled by genes.

But there are genes that control how strong synapses become under what conditions, and there are genes that control those conditions in different parts of the neocortex.

The fidelity is very low; it's not a blueprint. The thing is, you can't predict what would evolve just from what would be beneficial. It would be beneficial for many mammals to have extra eyes in the back of the head, but not a single mammal has them, because the developmental process doesn't provide a simple mutation that yields such eyes.

To use an analogy to a computer, I would argue that studying the properties of neurons will get you about as far in understanding psychology as studying the properties of circuits will get you in understanding software.

Not when the guys who speculate about the software keep insisting that Microsoft Windows is installed into the computers at the semiconductor chip factory; that's probably the best analogy. Hardware determines how and from where the software can be loaded. For example, from hardware considerations we can see that RAM comes blank, while the hard drive comes with head-positioning tracks and some firmware, but not the OS.

Comment author: lukeprog 23 July 2014 05:01:40AM 3 points [-]

Good work!

Not sure if you're planning to make further clarifications to the visualizations and the post, but one suggestion would be to introduce a new arrow (or arrows) showing that multipolar scenarios may very well resolve into a unipolar outcome after not much time (decades or centuries). This provides one major justification for the book's focus on singleton scenarios, another justification being that singleton scenarios are easier to analyze.

Comment author: wedrifid 23 July 2014 04:18:43AM 0 points [-]

From a technological standpoint, I would have thought that a hybrid of biological and machine intelligence would be likely to have the best aspects of each

Unfortunately the 'biological' part is not too great at recursive self-improvement (at the fundamental level). It's a mess. If we merely wanted to create cyborgs with cognitive advantages, then this strategy is a no-brainer (except literally). If we are trying to create a superintelligence, then the recursive self-improvement feature is more or less obligatory.
