
Dagon

Just this guy, you know?

Posts


Wikitag Contributions

Comments

I can't tell if my ideas are good anymore because I talked to robots too much
Dagon · 5d

I don't know how long you've been talking to real people, but the vast majority are not particularly good at feedback - less consistent than AI, but that doesn't make them more correct or helpful.  They're less positive on average, but still pretty uncorrelated with "good ideas".  They shit on many good ideas, support a lot of bad ideas, and are a lot harder to query for reasons than AI is.

I think there's an error in thinking that talk can ever be sufficient - you can do some light filtering, and it's much better if you talk to more sources, but eventually you have to actually try stuff.

Reply
Roman Malov's Shortform
Dagon · 5d

Hmm.  What about the claim "physicality -> no free will"?  This is the more common assertion I see, and the one I find compelling.

The simplicity/complexity argument is one I more often see applied to "consciousness" (and I agree: complexity does not imply consciousness, but simplicity denies it), but that's at least partly orthogonal to free will.

Reply
Roman Malov's Shortform
Dagon · 5d

They can overgeneralize that feeling over all physical systems (like humans), missing out on the fact that this feeling should only be felt

I don't follow why this is "overgeneralize" rather than just "generalize".  Are you saying it's NOT TRUE for complex systems, or just that we can't fit it in our heads?   I can't compute the Mandelbrot Set in my head, and I can't measure initial conditions well enough to predict a multi-arm pendulum beyond a few seconds.  But there's no illusion of will for those things, just a simple acknowledgement of complexity.

Reply
Time Machine as Existential Risk
Dagon · 7d

TL;DR: The laws of physics seem to prevent us from accidentally erasing our own history through time travel.

What makes you think this?  I have some introspective consistency in my memories, but I can't tell if they're actually real, or if they've been (subjectively) recently changed/implanted or otherwise made to fit the "current" timeline.

Reply
Getting To and From Monism
Dagon · 9d

We can start from a point of complete skepticism about everything. Regardless of your specific beliefs about the probability of a simulated universe, or if you are a pure idealist, you can say with confidence that at least something exists

If you can say that, you can say a lot more.  You can say what experiences and memories you have.  There is variance and a perception of time and change.  Oops, monism no longer makes sense.

Reply
If Moral Realism is true, then the Orthogonality Thesis is false.
Dagon · 9d

I think this is correct, but I strongly doubt that any strong version of moral realism applies to our universe.  I further suspect that there's a separate argument you'd need to address: "if moral realism is true and we have wrong beliefs about moral truths, then correct beliefs could look nearly arbitrary".  I've not seen this second argument made (let alone rebutted), because most people I talk to don't give much weight to moral realism.

There's yet another argument against this in that steps 3 and 4 seem not to be universally true in humans - they might (or might not) have some explanatory power for a median individual, but we see plenty of examples of high-intelligence (for a human) people committing presumably-immoral acts.  Even if intelligence and morality are correlated, individual intelligent agents can vary widely in their moral actions.

Reply
"It isn't magic"
Dagon · 12d

The presumption of complete reducibility is, with some great certainty, unclear at best and at worst absolutely impossible,

Oh, I fully agree.  But "complete" is not necessary to achieve the change in categorization from "unknown and magical" to "just (big/difficult) math".  

I don't know how much people around here have played with Mandelbrot set coding, but it's a useful comparison here.  It's very clearly NOT magic in any literal sense - the calculation is trivial (and even the iteration to determine convergence/divergence is pretty easy to understand).  But the results remain captivating, and it astounds me that they come from such simple rules.
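For anyone who hasn't tried it: the escape-time iteration behind those pictures really does fit in a few lines.  A minimal sketch (the function name and parameter choices here are my own, not from any particular library):

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z*z + c from z = 0; return the step at which |z|
    exceeds 2 (a proven escape bound), or max_iter if it never does.
    Points that never escape are treated as members of the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# Points inside the set never diverge; points far outside escape quickly.
inside = escape_time(-1 + 0j)   # -1 cycles 0, -1, 0, -1, ... forever
outside = escape_time(2 + 2j)   # escapes on the first check after one step
```

Rendering the familiar image is just running this over a grid of `c` values and coloring by the returned count - the whole structure comes from that one line of arithmetic.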

In this sense, I suspect many complex systems will remain impressive and astounding, no matter how good we get at modeling and understanding their components.  In the sense that knowing the underlying rules DOES turn it from "fully magical" into "an interesting corner of math", this will probably happen to current LLMs, and likely eventually to primate intelligence.


Reply
2 · Dagon's Shortform · 6y · 92
No wikitag contributions to display.
14 · What epsilon do you subtract from "certainty" in your own probability estimates? (Q) · 7mo · 6
3 · Should LW suggest standard metaprompts? (Q) · 10mo · 6
8 · What causes a decision theory to be used? (Q) · 2y · 2
2 · Adversarial (SEO) GPT training data? (Q) · 2y · 0
24 · {M|Im|Am}oral Mazes - any large-scale counterexamples? (Q) · 3y · 4
17 · Does a LLM have a utility function? (Q) · 3y · 11
8 · Is there a worked example of Georgian taxes? (Q) · 3y · 12
9 · Believable near-term AI disaster · 3y · 3
2 · Laurie Anderson talks · 4y · 0
76 · For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? (Q) · 5y · 11