Comment author: taygetea 31 March 2015 05:18:22PM *  0 points [-]

Nitpick: BTC can be worth effectively less than $0 if you buy some and the price then drops. But in a Pascalian scenario, that's a rounding error.

More generally, the difference between a Mugging and a Wager is that the Wager offers a low chance of a large positive outcome at low opportunity cost, while the Mugging is about avoiding a large negative outcome. So, unless you've bet all the money you have on Bitcoin, it maps much better onto a Wager scenario than a Mugging. This is played out in the common reasoning of "There's a low chance of this becoming extremely valuable. I will buy a small amount corresponding to the EV of that chance, just in case".
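That "buy a small amount corresponding to the EV" reasoning can be sketched numerically. All probabilities and payoffs below are made-up illustrative numbers, not real estimates of Bitcoin's prospects:

```python
# Toy expected-value comparison of a Wager vs. a Mugging.
# Every number here is an illustrative assumption.

def expected_value(p_win, payoff, stake):
    """EV of risking `stake` for probability `p_win` of gaining `payoff`."""
    return p_win * payoff - (1 - p_win) * stake

# Wager: small stake, small chance of a large positive outcome.
wager_ev = expected_value(p_win=0.01, payoff=100_000, stake=100)

# Mugging: a large stake demanded to avoid a claimed astronomical loss.
# Framed as EV, paying "wins" you the avoided loss with tiny probability.
mugging_ev = expected_value(p_win=1e-9, payoff=10**12, stake=1000)

print(wager_ev)    # 901.0 -- positive EV at low opportunity cost
print(mugging_ev)  # approximately 0 -- dominated by the tiny probability
```

The point of the sketch is that the Wager stays attractive at a small stake, while the Mugging's EV hinges entirely on taking the mugger's tiny probability and huge payoff at face value.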

Edit: I may have misread, but just to make sure, you were making the gold comparison as a way to determine the scale of the mentioned large positive outcome, correct? And my jump to individual investing wasn't a misinterpretation?

Comment author: [deleted] 27 March 2015 09:37:56AM 0 points [-]

But I think I am "strong" enough to avoid my usual tribal arguments ("copying is not stealing, as it does not remove the original") and be fully consequentialist ("copying kills pop culture, and that is good because...") and how would that be a bad thing? My point is precisely that we are probably strong enough to discuss such topics without slogan-chanting, well within epistemic rationality.

And I am unsure how you failed to recognize that the sentence you quoted is not the usual four-legs-good tribal chant, but a claim with a clearly predicted consequence that is easy to approach rationally ("what is the chance it kills pop culture?", "what is the chance good things happen if pop culture gets killed?").

Comment author: taygetea 29 March 2015 02:03:46AM 0 points [-]

The entire point of "politics is the mind-killer" is that no, not even here is immune to tribalistic idea-warfare politics. The politics just get more complicated. And the stopgap solution, until we figure out a way around that tendency (which doesn't appear reliably avoidable), is to sandbox the topic and keep it limited. You should have a high prior that the belief that you can be "strong" is the Dunning-Kruger effect talking.

Comment author: Xerographica 27 March 2015 06:16:32PM 0 points [-]

The decline happened as a result of my indefinite banishment from Wikipedia. How many page views did I generate when I was active on Wikipedia? A lot more than I generate now that I'm banned!

I'm kinda kidding around but there's more than a kernel of truth in there. When Wikipedia was first created... there were more than a gazillion bits of knowledge missing. Over time though... the "easiest" bits were filled in. As all the lowest-hanging fruit was picked... there were fewer and fewer people tall enough to reach the higher fruit. Clearly this resulted in a significant decrease in editing activity and, by extension... a decrease in page views.

What percentage of the total decline in page views does this explanation actually account for? Beats me. It has to account for some though.

On a tangentially related note... a few weeks after famous economists die... I like to try and grab a screenshot of the page views for their Wikipedia entries. Their page views have a huge spike as their life/death is widely discussed... but then the page views decline pretty quickly afterwards. Unfortunately, in too many cases I've forgotten to grab screenshots. And the graph doesn't look as good when you have to go back in time. :( Why economists? Well... I think they'd appreciate it more than most famous dead people. Plus, it could be interesting/informative to compare their graphs in order to try and discern some useful information about something... economical.

In exchange for my explanation... how about you try and resolve Satt's Paradox? Or, you can try and predict if/when/how quarters up are going to replace thumbs up.

Comment author: taygetea 29 March 2015 01:50:14AM 2 points [-]

This would rely on a large fraction of pageviews being from Wikipedia editors. That seems unlikely. Got any data for that?

Comment author: taygetea 28 March 2015 03:35:01PM 3 points [-]

You could construct an argument about needing to explicitly rehearse system-2 ethics on common situations, to make sure you associate those ethics implicitly with normal situations and not just contrived edge cases. But even that seems a bit too charitable, and easily fixed if so.

Comment author: taygetea 28 March 2015 12:44:59PM 0 points [-]

From my experiences trying similar things over IRC, the lack of anything holding you to your promises is definitely a detriment for most people. I have found a few for whom that's not the case, but they are very much the exception. That's definitely a failure mode to look out for: doing this online (especially in text) won't work for many people. In addition, this discrepancy can create friction between people.

The general structure of the failure tends to be one person feeling vaguely bad about not talking as much, or about missing a session. And when they don't have many channels through which to viscerally receive signals of disapproval, of the kind that would make them uncomfortable enough to follow through even when they don't want to, it becomes easier to skip again the next time. Schelling fences are easier to break without face-to-face interaction.

There should be ways to bypass that problem. One of the memes around LW is actively reinforcing positive things instead of relying on implied approval. If you can create a culture of actively rewarding success, and of treating apathy as something to be stamped out at every point, then you can make it work. You can also make a point of creating norms where people go out of their way to help anyone who falls behind figure out what the true problem is. If you can manage that, instead of silence or simple berating, it can work; ideas around Tell Culture can help here. Unfortunately, this also requires diverting a lot of focus into preserving those conditions. Creating community norms is hard, but that seems like the way to avoid the problem.

I don't mean to imply that you want to start a community around this along the lines of the LW study hall, but this is what I have found from my attempts. Maybe someone will find it helpful.

Comment author: Vladimir_Nesov 02 August 2013 01:29:26AM *  2 points [-]
  • Aliens won't produce a FAI, their successful AI project would have alien values, not ours (complexity of value). It would probably eat us. I suspect even our own humane FAI would eat us, at the very least get rid of the ridiculously resource-hungry robot-body substrate. The opportunity cost of just leaving dumb matter around seems too enormous to compete with whatever arguments there might be for not touching things, under most preferences except those specifically contrived to do so.
  • UFAI and FAI are probably about the same kind of thing for the purposes of powerful optimization (after initial steps towards reflective equilibrium normalize away flaws of the initial design, especially for "scruffy" AGI). FAI is just an AGI that happens to be designed to hold our values in particular. UFAI is not characterized by having "simple" values (if that characterization makes any sense in this context, it's not clear in what way should optimization care about the absolute difficulty of the problem, as compared to the relative merits of alternative plans). It might even turn out to be likely for a poorly-designed AGI to have arbitrarily complicated "random noise" values. (It might also turn out to be relatively simple to make an AI with values so opaque that it would need to turn the whole universe into an only instrumentally valuable computer in order to obtain a tiny chance of figuring out where to move a single atom, the only action it ever does for terminal reasons. Make it solve a puzzle of high computational complexity or something.)
  • There doesn't appear to be a reason to expect values to influence the speed of expansion to any significant extent; for almost all values, delay is astronomical waste (which yields an instrumental drive to start optimizing the matter as soon as possible).
Comment author: taygetea 02 August 2013 07:59:09AM -2 points [-]

Relating to your first point, I've read several stories that look at that in reverse: AIs (whether F or UF is debatable for this kind) that expand out into the universe and completely ignore aliens, destroying them for resources. That seems like a problem solvable with a wider definition of the sort of stuff the AI is supposed to be Friendly to, and I'd hope aliens would think of that, but it's certainly possible.

Comment author: CAE_Jones 28 July 2013 03:44:10AM 1 point [-]

Oogely Boogely? Summoning a desk and transfiguring it into a pig? Petrifying numerous terminally ill people, transfiguring them into something small and stable (aka the ringmione hypothesis), and using a finite to turn them into a shield? Filling a mokeskin pouch with chilled snakes? (Imagines Voldemort constantly casting AK at Harry, who constantly shouts "snake" and pulls something out of his pouch.) Or maybe even Serpensortia, if the conjured snake counts for purposes of AK (it can be finited, after all). Or one could just summon a cloud of spiders ("The Amazing Spider-Mage! Not to be confused with Spider-Muggle!").

In canon, Faux-Moody demonstrated AK on a spider. Are spiders still vulnerable to AK in MoR?

Comment author: taygetea 28 July 2013 03:59:55AM 0 points [-]

According to Quirrell, yes, they are: "anything with a brain". And I notice that you've only looked at what we've directly seen. The presence of spells like the ones you mentioned leads me to think that you can do more directed things with spells Harry hasn't come across yet.

Comment author: elharo 27 July 2013 09:17:44PM 4 points [-]

A much better title would be "Norbert Wiener on Automation and Unemployment." It's not clear that Wiener here was considering AI at all.

Comment author: taygetea 27 July 2013 09:49:11PM 0 points [-]

the second, cybernetic, industrial revolution “is [bound] to devalue the human brain, at least in its simpler and more routine decisions”

It certainly seems like he considered it, at least on a basic level, enough to be extrapolated.

Comment author: [deleted] 26 July 2013 06:10:44AM 0 points [-]

the need for a specific type of dry-erase marker

I meant that it's obvious that a given piece of chalk will work, whereas a given dry-erase marker may have dried up without obviously looking like it's dried up.

In response to comment by [deleted] on Open thread, July 23-29, 2013
Comment author: taygetea 26 July 2013 06:15:23AM 0 points [-]

Well, I did say it far outweighed it. Even that's less of an inconvenience in my mind, but that's getting to be very much a personal preference thing.

Comment author: gwillen 26 July 2013 04:04:45AM 12 points [-]

You jest, but it seems -- depending on whether one believes that AK works on animals or not -- that you have just come up with a way to block the unblockable curse. That's some serious lateral thinking, right there.

Comment author: taygetea 26 July 2013 06:10:35AM *  6 points [-]

Creating arbitrary animals that are barely alive, need no food, water, air, or movement, and are made of easily workable material that also works as armor seems like a good place to start, and within the bounds of magic. This isn't as absurd as it seems: essentially living armor plates. You'd want them to be thin so you could have multiple layers, to fall off when they die, and various similar things. Or maybe at a different scale, like scale mail or lamellar armor.
