All of jwray's Comments + Replies

jwray10

We don't observe any Dyson swarms in our lightcone, and we would be capable of detecting Dyson swarms tens of thousands of light-years away, yet the maximum expansion speed of AI control, extrapolating from known physics, is much less than the speed of light.  This should be taken as weak evidence that AIs don't inevitably build Dyson swarms and try to take over the universe.  I still think the probability of AI doom is quite large.

jwray40

My experience is very different.  I feel unitary, without any IFS or jungian shadow or other sort of subconscious parts trying to deceive my conscious self.  I violate quite a lot of social norms without feeling any shame or guilt about it, because I've got an 'internal scorecard'.  So long as I'm true to my own values/morality, and I can protect myself with some combination of power / occlumency / disengaging, all three of which come easily to me, social norms don't matter in private.

2Giskard
Non-sarcastically, it must be AMAZING to be you.
5Valentine
To me this is exciting. I deduced that the mental architecture you're describing should be possible. It's extremely cool to hear someone just name it as a lived experience. Like, what would a mind that's actually systematically free of Newcomblike self-deception have to be like, assuming the hostile telepaths problem is real? This is one possible solution. Assuming I haven't misunderstood what you're describing!
jwray20

Prices are one of the best mechanisms for communicating the strength of preferences, but perhaps among friends you want a separate made-up currency with a more equal distribution. Daniel Reeves just bites the bullet and uses dollars, though:  https://messymatters.com/tv/

jwray21

Re computational unkindness, optimizing solely for what one person wants is easy.  The complexity mostly arises from the picker trying to satisfy others' implicit preferences that they pretend not to have for the sake of being "flexible".

2AnnaJo
Complexity also arises when you have weak preferences, and think that others' preferences might be stronger than yours. So you're more "flexible" relatively, but there's no good way of calibrating the strength of their preferences without repeated interactions. 
jwray70

If our corrupted hardware can't be trusted to compute the consequences in a specific case, it probably also can't be trusted to compute the consequences of a general rule.  All our derivations of deontological rules will be tilted in the direction of self interest or tribalism or unexamined disgust responses, not some galaxy-brained evaluation of the consequences of applying the rule to all possible situations.

Russell conjugation:  I have deontological guardrails, you have customs, he has ancient taboos.

[edit: related Scott post which I endorse i... (read more)

jwray10

Is there somewhere I can sign up to get notified of all the future St Louis meetups?

1SebastianG
Send me your email address! Also if you click St. Louis Junto, you can then click 'Subscribe to Group.'
jwray50

Suppose my decision algorithm is:  I obtain the source code of Omega and run its prediction algorithm to determine what it predicts I will do, and then do the opposite of that.

This would be kind of like the proof that the halting problem is non-computable.
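The diagonalization move above can be sketched in code. This is a toy illustration under obvious simplifying assumptions: the agent and predictor here are made-up Python functions for the example, not anything a real Omega would be.

```python
def contrarian_agent(predict):
    """An agent that runs Omega's predictor on its own source
    (here, on this very function) and then does the opposite.
    `predict` stands in for Omega's prediction algorithm."""
    predicted = predict(contrarian_agent)
    return 'two-box' if predicted == 'one-box' else 'one-box'

def naive_predictor(agent):
    """A stand-in Omega that predicts this agent will one-box."""
    return 'one-box'

# By construction, no predictor can be right about this agent:
# whatever it predicts, the agent does the other thing.
assert contrarian_agent(naive_predictor) != naive_predictor(contrarian_agent)
```

As with the halting-problem proof, the contradiction shows only that Omega can't be a computable function that the agent can also run; the usual Newcomb setup quietly assumes the agent has no such access.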

jwray10

Certainly perfect prediction is impossible in some cases.  Look at the halting problem in computer science.

jwray30

This seems like a subset of point #7 here (https://slatestarcodex.com/2016/02/20/writing-advice/)

7. Figure out who you’re trying to convince, then use the right tribal signals

I would define weirdness as emitting signals that the tribe recognizes as "other" but not "enemy".  Emitting enough of the in-group signals may counteract that.

This is also reminiscent of John Gottman's empirical research on married couples where he found they were much more likely to split if the ratio of positive to negative interactions was less than 5 to 1.

jwray10

Intertemporal arbitrage: buying corn when there's a bumper crop and selling it when there's a drought.  How do we get rid of that?   Either time travel, or giving everyone lots of storage space plus prior knowledge of all the goods they will ever need and their future abundance/scarcity time series.

Price signals arising from trade are also an incentive for consuming less of / producing more of scarce things to make them less scarce.  Without the incentives of prices we'd need some other way of enforcing rationing of the finite capacity of the iron mines and communicating each person's marginal utility for iron.  A Borg hivemind.
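The rationing role of prices can be sketched with a toy market-clearing computation. The linear demand and supply curves here are made up for illustration; the point is just that the price rises under excess demand until the scarce good is rationed to whoever values it most at the margin.

```python
def clearing_price(demand, supply, lo=0.0, hi=1000.0, tol=1e-6):
    """Find the price at which quantity demanded equals quantity
    supplied, by bisection.  Assumes demand is decreasing in price
    and supply is increasing in price on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) > supply(mid):
            lo = mid  # excess demand: the price must rise
        else:
            hi = mid  # excess supply: the price must fall
    return (lo + hi) / 2

# Toy iron market: demand 100 - p, supply p.
p = clearing_price(lambda p: 100 - p, lambda p: p)
# The market clears near p = 50, where demand equals supply.
```

No central planner needed to know anyone's marginal utility: each person just buys less as the price rises.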

jwray30

Bellingham was also one of my top finalists for a personal move after I spent dozens of hours poring over statistics and maps.   I used a tool on city-data.com that works like a stock screener for cities.

http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=10&sd=1&states=ALL&near=&nam_crit1=5195&b5195=20000&e5195=MAX&i5195=1&nam_crit2=1033&b1033=20000&e1033=MAX&i1033=1&nam_crit3=5900&b5900=MIN&e5900=5&i5900=1&nam_crit4=4048&b4048=MIN&e4048=45&i4048=1&... (read more)

jwray30

I thought of a second potential problem in my layman armchair.  Every cell that a virus infects, it kills (when the cell dies, the new viruses pop out).  But what if the mRNA for a single protein just messes up a cell, without killing it?   Possibly worse than just killing it.

Answer by jwray30

The mRNA tricks your cells into making a spike protein.

The alternative is the live adenovirus vector, which tricks your cells into making a spike protein PLUS all the other proteins that make up the adenovirus.

So it seems like the former probably can't be any worse unless it "infects" different types of cells.
