All of intrepidadventurer's Comments + Replies

Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks is a paper I recently read; I tried to recreate its findings and succeeded. Whether or not LLMs have theory of mind feels directionally unanswerable. Is this a consciousness-level debate?

However, I followed up by asking questions built around the phrase "explain Sam's theory of mind", which got much more cohesive answers. It's not intuitive to me yet how much order can arise from prompts, or where the order arises from. Opaque boxes indeed.
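To make the contrast concrete, here is a minimal sketch of the two framings I tried. The vignette and function names are my own illustrative stand-ins, not taken from the paper; the point is only the difference between asking the question bare and scaffolding it with "explain Sam's theory of mind" first.

```python
# A toy false-belief vignette (illustrative, not from the paper).
VIGNETTE = (
    "Sam fills an opaque bag with popcorn, but the bag's label says "
    "'chocolate'. Sam cannot see inside the bag and has not read the label."
)

def direct_prompt(vignette):
    # The bare question format that trivially altered tasks tend to trip up.
    return vignette + " What does Sam believe is in the bag?"

def scaffolded_prompt(vignette):
    # The framing that elicited more cohesive answers for me.
    return (vignette + " Explain Sam's theory of mind, then answer: "
            "what does Sam believe is in the bag?")

print(direct_prompt(VIGNETTE))
print(scaffolded_prompt(VIGNETTE))
```

Feeding each string to the same model and comparing the answers is the whole experiment; only the second framing asks the model to represent Sam's mental state explicitly before answering.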

Also consider including non-ML researchers in the actual org building: project management, for example, or other administration folks — people who've got experience in ensuring organizations don't fail. ML researchers need to eat, pay their taxes, etc.

I posit that we've imagined basically everything available with known physics, and extended into theoretical physics. We don't need to capitulate to the ineffability of a superintelligence; known and theoretical capabilities already suffice to absolutely dominate if managed by an extremely competent entity.

I agree with the conclusions. Now that you've brought up the incomprehensibility of an advanced mind, an FAI almost certainly will have plans that we deem hostile but that are to our benefit; monkeys being vaccinated seems like a reasonable analogy. I want us to move past "we couldn't imagine their tech" to a more reasonable "we couldn't imagine how they did their tech".

I find this thought pattern frustrating: that these AIs possess magic powers that are unimaginable. Even with our limited brains, we can imagine all the way past the current limits of physics, including potential worlds where the AI could manipulate space-time in ways we don't know how to.

I've seen people imagining computronium and omni-universal computing clusters, figuring out ways to generate negentropy, literally re-writing the laws of the universe, bootstrapped nano-factories, using the principle of non-locality to effect changes at... (read more)

2RomanS
I do think that an advanced enough AGI might possess powers that are literally unimaginable for humans, because of their cognitive limitations. (Can a chimpanzee imagine a Penrose Mechanism?)  Although that's not the point of my post. The point is, the FAI might have plans of such a deepness and scope and complexity, humans could perceive some of its actions as hostile (e.g. global destructive mind uploading, as described in the post). I've edited the post to make it clearer.
6JBlack
I would be extremely surprised if a superintelligence doesn't devise physical capabilities that are beyond science fiction and go some way into fantasy. I don't expect them to be literally omnipotent, but at least have Clarkean "sufficiently advanced technology". We may recognise some of its limits, or we may not. "Computronium" just means an arrangement of matter that does the most effective computation possible given the constraints of physical law with available resources. It seems reasonable to suppose that technology created by a superintelligence could approach that. Bootstrapped nano-factories are possible with known physics, and biology already does most of it. We just can't do the engineering to generalize it to things we want. To suppose that a superintelligence can't do the engineering either seems much less justified than supposing that it can. The rest are far more speculative, but I don't think any of them can be completely ruled out. I agree that the likelihood on any single one of these is tiny, but disagree in that I expect the aggregate of "capabilities that are near omnipotent by our standards" to be highly likely.
Answer by intrepidadventurer30

The number of popular X in a human system Y:

  • highly attractive people as a % of the population; because it's a competitive force, no matter the actual underlying state, people will just change the thing they are competing on.
    • to change this I imagine you'd have to change "the bowl", i.e. add more attention available per human
  • YouTube creators as a % of watchtime / engagement

Orbits just came to me; not sure if that counts as novel, but I had never thought of them before as a stable equilibrium. They should stay the same unless perturbed by an outside force... bu

... (read more)

I have been thinking about the argument of the singularity in general: the proposition that a sufficiently advanced intellect can/will change the world by introducing technology that is literally beyond comprehension. I guess my question is this: is there some level of intelligence at which there are no possibilities it can't imagine, even if it can't actually go and do them?

Are humans past that mark? We can imagine things literally all the way past what is physically possible and/or constrained to realistic energy levels.

1MrMind
A difficult question to answer, because many elements are not precisely defined. But let's just say for a moment that 'intellect' is conflated with 'universal Turing machine' and 'thinking' with 'processing a program'. There are of course limits for any finite UTM: on one side, you cannot 'probe' thoughts too deeply because of constraints on memory/energy/time, on the other side there are thoughts that are simply too complex. So no, an AI could never imagine anything that is too complex or too expensive for it to think. For us humans the situation is even worse, because our brains are no computers, and we are very capable of imagining incoherent things.

I did encounter this problem (once), and I was experiencing resistance to going back even though I had a lot of success with the chat. I figured having a game plan for next time would be my solution.

This post and reading "Why Our Kind Can't Cooperate" kicked me off my ass to donate. Thanks, Tuxedage, for posting.

0[anonymous]
.

Fair critique. Despite the lack of clarity on my part, the comments have more than satisfactorily answered the question about community norms here. I suppose the responders can thank g-factor for that :)

-2hyporational
Well played.

It does answer my question. Also, thanks for the suggestion to focus on the behaviour rather than the person. I didn't even realize I was thinking like that till you two pointed it out.

What are community norms here about sexism (and related passive-aggressive "jokes" and comments about free speech) at the LW co-working chat? Is LW going for Wheaton's law or free speech, and to what extent should I be attempting to make people who engage in such activities feel unwelcome — or should I be at all?

I have hesitated to bring this up because I am aware it's a mind-killer, but I figured if Facebook can contain a civil discussion about vaccines, then LW should be able to talk about this?

0passive_fist
I'd like to see some evidence that such stuff is going on before pointing fingers and making rules that could possibly alienate a large fraction of people. I've been attending the co-working chat for about a week, on and off (I take the handle of 'fist'), and so far everyone seems friendly and more than willing to accommodate the girls in the chat. Have you personally encountered any problems?
6hyporational
I connotationally interpret your question as: "what are the community norms about bad things?" You're not giving us enough information so that we could know what you're talking about, and you're asking our blind permission to condemn behaviour you disagree with.
4Viliam_Bur
I don't have an answer here, just a note that this question actually contains two questions, and it would be good to answer both of them together. It would also be a good example of using rationalist taboo. A: What are the community norms for defining sexism? B: What are the community norms for dealing with sexism (as defined above)? Answering B without answering A can later easily lead to motivated discussions about sexism, where people would be saying: "I think that X is [not] an example of sexism" when what they really wanted to say would be: "I think that it is [not] appropriate to use the community norm B for X".
7Lumifer
Depends on how you define sexism. Some people consider admitting that men and women are different to be sexism, never mind acting on that belief :-/ TheOtherDave's answer is basically correct. Crass and condescending people don't get far, but it's possible to have a discussion of the issues which cost Larry Summers so dearly.
3matheist
(I haven't seen the LW co-working chat) If you want to tell people off for being sexist, your speech is just as free as theirs. People are free to be dicks, and you're free to call them out on it and shame them for it if you want. I think you should absolutely call it out, negative reactions be damned, but I also agree with NancyLebovitz that you may get more traction out of "what you said is sexist" as opposed to "you are sexist". To say nothing is just as much an active choice as to say something. Decide what kind of environment you want to help create.

There are no official community norms on the topic.

For my own part, I observe a small but significant number of people who seem to believe that LessWrong ought to be a community where it's acceptable to differentially characterize women negatively as long as we do so in the proper linguistic register (e.g, adopting an academic and objective-sounding tone, avoiding personal characterizations, staying cool and detached).

The people who believe this ought to be unacceptable are either less common or less visible about it. The majority is generally silent o... (read more)

Ideally, I'd want the people to feel that the behavior is unwelcome rather than that they themselves are unwelcome, but people are apt to have their preferred behaviors entangled with their sense of self, so the ideal might not be feasible. Still, it's probably worth giving a little thought to discouraging behaviors rather than getting rid of people.

I have committed to a food log with social backup. I am testing the hypothesis that, to a first degree of approximation, calories out > calories in = weight loss.
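The hypothesis above is simple arithmetic, so here is a minimal sketch of how a food log could turn it into a weekly prediction. It assumes the common (and admittedly rough) rule of thumb that a deficit of about 3500 kcal corresponds to roughly 1 lb of fat; the function name and numbers are illustrative.

```python
# Energy-balance sketch: assumes the rough rule of thumb that a
# ~3500 kcal cumulative deficit corresponds to ~1 lb of fat lost.
KCAL_PER_LB = 3500.0

def weekly_weight_change_lb(intake_kcal_per_day, expenditure_kcal_per_day):
    """Positive result = predicted pounds lost per week."""
    daily_deficit = expenditure_kcal_per_day - intake_kcal_per_day
    return daily_deficit * 7 / KCAL_PER_LB

# Eating 2000 kcal/day while burning 2500 kcal/day:
# a 500 kcal/day deficit predicts about 1 lb lost per week.
print(weekly_weight_change_lb(2000, 2500))  # → 1.0
```

A food log supplies the intake side of the equation; the expenditure side is the harder number to estimate, which is part of why this is only a first approximation.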

I have started to hand-code a personal website using Treehouse (style sheets and two pages complete). I figure that the last comparative advantage we have before the machines take over is coding, so why not test if I can do it.

So un-retracting is not possible.

I have committed to a food log with social backup. I am testing the hypothesis that, to a first degree of approximation, calories out > calories in = weight loss.

I have started to hand-code a personal website using Treehouse (style sheets and two pages complete). I figure that the last comparative advantage we have before the machines take over is coding, so why not test if I can do it.

[This comment is no longer endorsed by its author]
5intrepidadventurer
I have committed to a food log with social backup. I am testing the hypothesis that, to a first degree of approximation, calories out > calories in = weight loss. I have started to hand-code a personal website using Treehouse (style sheets and two pages complete). I figure that the last comparative advantage we have before the machines take over is coding, so why not test if I can do it. So un-retracting is not possible.

To what extent do you prefer the spreadsheet to have additional rows versus complete columns?

0[anonymous]
I prefer additional rows, because it is harder to find great researchers than it is to find information about them given that you know their name (one exception might be the "notable publications" column). The informational value of rows is hence greater.