In response to AI Policy?
Comment author: blogospheroid 12 November 2013 08:53:52AM 3 points [-]

David Brin believes that high-speed trading bots are a high-probability route to human-indifferent AI. If you agree with him, then laws governing the usage of high-speed trading algorithms could be useful. There is a downside in terms of stock liquidity, but how much that would affect overall economic growth is still an open research question.

In response to comment by blogospheroid on AI Policy?
Comment author: hylleddin 16 November 2013 04:07:27AM *  1 point [-]

I see trading bots as a not-unlikely source of human-indifferent AI, but I don't see how a transaction tax would help. Penalizing high-frequency traders just incentivizes smarter trades over faster trades.

Comment author: hylleddin 13 November 2013 02:24:06AM 2 points [-]

From my experience doing group study for classes, there don't seem to be any major advantages or disadvantages to pairs vs. small groups. The most relevant factor is how many eyeballs are looking at something, but even that isn't a huge effect. Both are more effective than working alone (as the article concludes).

For a lot of things, getting together IRL looks like it would work best, but the logistics there can be difficult. For people who have Lesswrong meetups nearby, those are an obvious way to potentially coordinate meatspace study groups.

Comment author: Armok_GoB 08 November 2013 05:05:10PM 0 points [-]

You are probably counting more of the properties things can vary along as "ontological". I'm mostly going by software vs. hardware, needing to be puppeteered vs. running automatically, and able to interact with the environment vs. stuck in a simulation here.

I'm basing the moral status largely on "well realized", "complex", and "technically sentient" here. You'll notice all my examples ALSO have the actual utility-function multiplier at "unknown".

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and thus not all of it counts towards its power over reality.

Comment author: hylleddin 08 November 2013 10:43:03PM 1 point [-]

Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host's, and thus not all of it counts towards its power over reality.

Ah. I see what you mean. That makes sense.

Comment author: Armok_GoB 06 November 2013 04:57:57PM 3 points [-]

I've been doing some research (mainly hanging out on their subreddit) and I think I have a fairly good idea of how tulpas work and of the answers to your questions.

There are a myriad of very different things tulpas are described as, and thus "tulpas exist in the way people describe them" is not well defined.

There indisputably exist SOME specific interesting phenomena that are the referent of the word Tulpa.

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, a late-stage Alzheimer's victim, a dolphin, or a beloved family pet dog.

I estimate its ontological status to be similar to that of a video game NPC, a recurring dream character, or a schizophrenic hallucination.

I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

It does not seem that deciding to make a tulpa is a sign of being crazy. Tulpas themselves seem not to be automatically unhealthy and can often help their host overcome depression or anxiety. However, there are many signs that the act of making a tulpa is dangerous: it can trigger latent tendencies or be easily done in a catastrophically wrong way. I estimate the risk is similar to doing extensive meditation or taking a single largish dose of LSD. For this reason I have not attempted, and will not attempt, to make one.

I am too lazy to find citations or examples right now, but I probably could. I've tried to be a good rationalist, and I am fairly certain of most of these claims.

Comment author: hylleddin 08 November 2013 01:40:30AM *  1 point [-]

As someone with personal experience with a tulpa, I agree with most of this.

I estimate its ontological status to be similar to that of a video game NPC, a recurring dream character, or a schizophrenic hallucination.

I agree with the last two, but I think a video game NPC has a different ontological status from either of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how "well realized" they are.

I estimate a well-developed tulpa's moral status to be similar to that of a newborn infant, a late-stage Alzheimer's victim, a dolphin, or a beloved family pet dog.

I have no idea what a tulpa's moral status is, besides that it's not less than a fictional character's and not more than a typical human's.

I estimate its power over reality to be similar to that of a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.

I would expect most of them to have about the same intelligence as their host, rather than lower intelligence.

Comment author: [deleted] 04 October 2013 08:40:38PM 5 points [-]

Is there a way to explain that to a non-mathematician?

In response to comment by [deleted] on The best 15 words
Comment author: hylleddin 22 October 2013 02:12:07AM 0 points [-]

Or even a non-category theorist?

Comment author: pnrjulius 29 May 2012 03:47:47AM 2 points [-]

Not 50 years. Craig Venter already did it in 2010, so it took 3 years to do what you thought would take 50.

Comment author: hylleddin 25 August 2013 03:33:22AM 2 points [-]

He didn't actually synthesize a whole living thing. He synthesized a genome and put it into a cell. There's still a lot of chemical machinery we don't understand.

Comment author: Bayeslisk 16 August 2013 07:56:12AM 0 points [-]

How does Korean relate to this? I speak it semi-fluently and none of those three things happen in it. I have, however, found its folding of adjectives into verbs to be one of several useful toeholds for learning Lojban.

Comment author: hylleddin 17 August 2013 12:10:24AM 0 points [-]

It doesn't directly relate. I'm currently learning Korean and don't want to try learning multiple languages at the same time. Also, I want a broader experience with languages before I try to make my own.

Comment author: hylleddin 02 August 2013 09:24:40PM *  13 points [-]

The mark of a great man is one who knows when to set aside the important things in order to accomplish the vital ones.

-- Tillaume, The Alloy of Law

Comment author: So8res 17 July 2013 02:23:14PM *  2 points [-]

What do you think the word "terminal" means in this context, and what do you think I think it means?

Edit: Seriously, I'm not being facetious. I think I am using the word correctly, and if I'm not, I'd like to know. The downvotes tell me little.

Comment author: hylleddin 25 July 2013 08:33:19PM 1 point [-]

In local parlance, "terminal" values are a decision maker's ultimate values, the things they consider ends in themselves.

A decision maker should never want to change their terminal values.

For example, if a being has "wanting to be a music star" as a terminal value, then it should adopt "wanting to make music" as an instrumental value, since making music is a means to stardom.

For humans, how these values feel psychologically is a different question from whether they are terminal or not.
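
As a rough illustrative sketch (all names and functions here are made up for this comment, not drawn from anywhere official): the agent optimizes only its terminal utility, and instrumental values simply fall out of planning toward it.

```python
# Toy sketch of terminal vs. instrumental values. Purely illustrative.

def terminal_utility(world):
    # The agent's end in itself: it only cares about being a music star.
    return 1.0 if world.get("is_music_star") else 0.0

def choose_action(actions, predict):
    # Instrumental values emerge from planning: the agent picks whichever
    # action its world model says best serves the *terminal* utility.
    # Note that it never modifies terminal_utility itself.
    return max(actions, key=lambda a: terminal_utility(predict(a)))

# Toy world model in which making music is the only route to stardom.
predict = lambda action: {"is_music_star": action == "make_music"}

print(choose_action(["make_music", "do_nothing"], predict))
# -> "make_music": valued instrumentally, as a means to the terminal goal
```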

See here for more information.

Comment author: Glen 25 July 2013 05:22:16PM 14 points [-]

Hello all, my name is Glen and I am a fairly long-time lurker here. I first found this site through the Sword of Good short story, filed it in my "List of things I want to read but will never actually get around to", and largely forgot about it until I recognized the name while reading HPMOR. I've read most, but not all, of the sequences and am currently going through Quantum Mechanics. I'm Chicago-based and work as a programmer for an advertising company. I consider myself a low-to-mid-level rationalist and am working at getting better.

I run or play in a wide range of tabletop games, where I'm known as being a GM-Friendly Munchkin. That is to say, I like finding exploits and unusual combinations, but then I talk to the person running the game about them and usually explain why I shouldn't be allowed to do that. It lets me have fun breaking the system without actually making the game less fun. I've also used basic information theory to great effect, unless the GM tells me to knock it off. Currently in love with Exalted. Been burned by Shadowrun in the past, but I just can't stay mad at her.

Comment author: hylleddin 25 July 2013 07:50:31PM *  4 points [-]

We're curious how you've used information theory in RPGs. It sounds like there are some interesting stories there.
