Quick Takes


Suppose you want to collect some kind of data from a population, but people vary widely in their willingness to provide the data (e.g. maybe you want to conduct a 30-minute phone survey, but some people really dislike phone calls, or have much higher hourly wages that this funges against).

One thing you could do is offer to pay everyone X dollars for data collection. But this will only capture the people whose cost of providing data is below X, which will distort your sample.

Here's another proposal: ask everyone for their fair price to provide the data... (read more)

Assorted followup thoughts:

  • There are nonzero transaction costs to specifying your price in the first place.
  • This is probably too complicated to explain to the general population.
  • In practice the survey-giver doesn't have unbounded bankroll so they'll have to cap payouts at some value and give up on survey-takers who quote prices that are too high. I think it's fine if they do this dynamically based on how much they've had to spend so far?
  • You can tweak the function from stated price to payment amount and probability of selection here - eg one thing you can do
... (read more)
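Since the proposal above is truncated, here is a minimal sketch of one incentive-compatible scheme in this family, a Becker-DeGroot-Marschak-style mechanism. The threshold distribution, the payout cap, and the function name are my own illustrative assumptions, not necessarily what the post goes on to propose:

```python
import random

MAX_PAYOUT = 200.0  # hard cap, since the survey-giver's bankroll is bounded

def offer(stated_price: float):
    """Draw a random payment threshold; if it meets or exceeds the stated
    price, select the respondent and pay them the threshold (not their
    stated price).

    Paying the threshold rather than the stated price is what makes
    truthfully reporting your fair price a dominant strategy: your report
    only controls whether you are selected, never how much you are paid.
    """
    threshold = random.uniform(0.0, MAX_PAYOUT)
    if threshold >= stated_price:
        return threshold  # selected: survey them and pay this amount
    return None  # not selected: no survey, no payment

# Someone whose fair price is $40 is selected 80% of the time here,
# and is always paid at least $40 when selected.
print(offer(40.0))
```

Because the selection probability as a function of stated price is known exactly, the survey-giver can reweight the responses they do collect by its inverse, rather than living with the unknown distortion a flat payment would create.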

Having young kids is mind-bending because it's not uncommon to find yourself simultaneously experiencing contradictory feelings, such as:

  • I'm really bored and would like to be doing pretty much anything else right now.
  • There will likely come a point in my future when I would trade anything, anything to be able to go back in time and re-live an hour of this.

It's instrumentally useful for early AGIs to Pause development of superintelligence, for the same reasons it is for humans. Thus preliminary work on policy tools for Pausing unfettered RSI is also something early AGIs could be aimed at, even if it's only half-baked ideas available on the eve of a potential takeoff, as the AGIs prove hard to aim and start doing things for their own reasons.

Every now and then in discussions of animal welfare, I see the idea that the "amount" of their subjective experience should be weighted by something like their total amount of neurons. Is there a writeup somewhere of what the reasoning behind that intuition is? Because it doesn't seem intuitive to me at all.

From something like a functionalist perspective, where pleasure and pain exist because they have particular functions in the brain, I would not expect pleasure and pain to become more intense merely because the brain happens to have more neurons. Rather... (read more)

2Pablo
Meta: gjm’s comment appears at the same level as comments that directly reply to Kaj’s original shortform. So until I read your own comment, I assumed they, too, were replying to Kaj. I think deleting a comment shouldn't alter the hierarchy of other comments in that thread.

Oops, that's a weird side-effect of the way we implemented spam purging (which is a more aggressive form of deletion than we usually use). We should really fix some bugs related to that implementation.

1saulius
See Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight and maybe also Is Brain Size Morally Relevant?

Are there known "rational paradoxes", akin to logical paradoxes? A basic example is the following:

In the optimal search problem, the cost of search at position i is C_i, and the a priori probability of finding at i is P_i. 

Optimality requires sorting search locations by non-increasing P_i/C_i: search first where the likelihood of finding divided by the cost of search is highest.

But since sorting costs O(n log(n)), C_i must grow faster than O(log(i)); otherwise the total search cost is dominated by the sort itself, and sorting is asymptotically wasteful.
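As a toy illustration (my own sketch, not part of the original comment), the optimal ordering is a single sort by the probability-to-cost ratio:

```python
def search_order(P, C):
    """Indices sorted by non-increasing P_i / C_i: highest
    probability-of-finding per unit search cost comes first."""
    return sorted(range(len(P)), key=lambda i: P[i] / C[i], reverse=True)

P = [0.1, 0.4, 0.2, 0.3]  # a priori probability of finding at each location
C = [1.0, 4.0, 1.0, 6.0]  # cost of searching each location
print(search_order(P, C))  # [2, 0, 1, 3]
```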

Do you know any others?

Things I never read:

I count a nesting level of 56.

If military AGI is akin to nuclear bombs, then would it be justified to attack the country trying to militarize AGI? What would the first act of war in future wars be? 

If country A is building a nuke, then the argument for country B to pre-emptively attack it is that the first act of war involving nukes would effectively end country B. In this case, the act of war is still a physical explosion.

In case of AI, what would be the first act of war akin to physical explosion? Would country B be able to even detect if AI is being used against it? If ... (read more)

Assuming a Chinese invasion of Taiwan in 2027/2028, what is the most sensible investment strategy?

TSMC puts seem sensible. What are others?

The Efficient Markets Hypothesis has plenty of exceptions, but this is too coarse-grained and distant to be one of them. Don't ask "what will happen, so I can bet based on that"; ask "what do I believe that differs widely from my counterparties?". This possibility is almost certainly "priced in" to the obvious bets (TSMC).

That said, you may be more correct than the sellers of long-term puts, so maybe it'll work out.  Having a theory and then examining the details and modeling the specific probabilities is exactly what you should be doing... (read more)

I’m glad that there are radical activist groups opposed to AI development (e.g. StopAI, PauseAI). It seems good to raise the profile of AI risk to at least that of climate change, and it’s plausible that these kinds of activist groups help do that.

But I find that I really don’t enjoy talking to people in these groups, as they seem generally quite ideological, rigid and overconfident. (They are generally more pleasant to talk to than e.g. climate activists in my opinion, though. And obviously there are always exceptions.)

I also find a bunch of activist tactics very irritating aesthetically (e.g. interrupting speakers at events).

I feel some cognitive dissonance between these two points of view.

Maybe there's a filtering effect for public intellectuals. 

If you only ever talk about things you really know a lot about, unless that thing is very interesting or you yourself are something that gets a lot of attention (e.g. a polyamorous cam girl who's very good at statistics, a Muslim Socialist running for mayor in the world's richest city, etc), you probably won't become a 'public intellectual'. 

And if you venture out of that and always admit it when you get something wrong, explicitly, or you don't have an area of speciality and admit to get... (read more)

One can say that being intellectually honest, which often comes packaged with being transparent about the messiness and nuance of things, is anti-memetic.

1sam
Seems to rhyme with the criticism of pundits in Superforecasting, i.e. (iirc) most high-profile pundits make general, sweeping, dramatic-sounding statements that make good TV but are difficult to falsify after the fact.

This is both a declaration of a wish, and a question, should anyone want to share their own experience with this idea and perhaps tactics for getting through it.

I often find myself with a disconnect between what I know intellectually to be the correct course of action, and what I feel intuitively is the correct course of action. Typically this might arise because I'm just not in the habit of / didn't grow up doing X, but now when I sit down and think about it, it seems overwhelmingly likely to be the right thing to do. Yet, it's often my "gut" and not my m... (read more)


This is a plausible rational reason to be skeptical of one's own rational calculations: that there is uncertainty, and that one should rationally have a conservativeness bias to account for it. What I think is happening, though, is that there's an emotional blocker that is then being cleverly back-solved by finding plausible rational (rather than emotional and irrational) reasons for it, of which this is one. So it's not that this is a totally bogus reason; it's that it provides a plausible excuse for what is actually motivated by something different.

1Decaeneus
Thank you. I think, even upon identifying the reasons why the emotional mind believes the things it does, I hit a twofold sticking point:

  • I consider the constraints themselves (rarely in isolation, but more like the personality milieu they are enmeshed with) to be part of my identity, and attempting to break them is scary both in a deep existential loss-of-self sense and in a "this may well be load-bearing in ways I can't fully think through" sense.
  • Even orthogonal to the first bullet, it's somehow hard to change them even though with my analytical mind I can see what's going on. It's almost as if emotional Bayesian updating brought these beliefs/tendencies to a very sharp peak long ago, but circumstances have since changed and the peak is too sharp to believe it away with new experience.

If it sounds like I'm trying to find reasons not to make the change, perhaps that's another symptom of the problem. There's a saboteur in the machine!
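To make the "sharp peak" Bayesian metaphor above concrete (my illustration, not the commenter's): a belief backed by a large stock of old evidence barely moves under new contrary evidence.

```python
# Beta-Bernoulli posterior mean, with old experience as pseudo-counts.
old_for, old_against = 990, 10   # long-accumulated evidence for the belief
new_against = 10                 # recent contrary experience

before = old_for / (old_for + old_against)
after = old_for / (old_for + old_against + new_against)
print(round(before, 3), round(after, 3))  # 0.99 vs 0.98: barely shifts
```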
3ProgramCrafter
I think you cannot do this, any more than you can force yourself to believe something. Indeed, both systems are learning from what you see to be true and what succeeds. If you believe the intuitive system is not judging correctly, you should try experiencing things more deeply (reflect on success more, come back to see if the thing flourishes/helps others/whatever); if you believe the reasoning system is not judging correctly, you should try it on more everyday actions and check whether all emotionally relevant factors got included. The systems will approximately agree because they both try to discern truth, not because they are bound to be equal to each other.

P.S. Turns out I essentially rephrased @leogao; still posting this in hopes an explanation is useful.

There's a history here of discussion of how to make good air purifiers (like this). Today I learned about ULPA filters and found someone's DIY video using one of them.

A ULPA filter can remove from the air at least 99.999% of dust, pollen, mold, bacteria and any airborne particles with a minimum particle penetration size of 120 nanometres.

I recently moved to a place with worse air quality. The fatiguing effect on me is noticeable to me (though I suspect I might have vulnerable physiology). It makes me want to try to update far in the other direction: maybe ... (read more)

The tree of https://www.lesswrong.com/posts/adk5xv5Q4hjvpEhhh/meta-new-moderation-tools-and-moderation-guidelines?commentId=uaAQb6CsvJeaobXMp spans over two hundred comments from ~fifteen authors by now, so I think it is time to list the major points raised there.

Please take "uld" as an abbreviation for "in the current state of LessWrong, to proceed closer to being actually less wrong AND also build a path to further success, moderation should"; though it would be interesting to know if you think the optimal tactic would change later.

Feel free to agree/disagree-rea... (read more)

superintelligence may not look like we expect. because geniuses don't look like we expect.

for example, if einstein were to type up and hand you most of his internal monologue throughout his life, you might think he's sorta clever, but if you were reading a random sample you'd probably think he was a bumbling fool. the thoughts/realizations that led him to groundbreaking theories were like 1% of 1% of all his thoughts.

for most of his research career he was working on trying to disprove quantum mechanics (wrong). he was trying to organize a political movemen... (read more)


reminds me of this
[Image: Imgflip meme about winners and losers]

10xA
I think this is a classic problem of being middle-tier, or a genius in one asymmetric domain of cognition. Genius in domains unrelated to verbal fluency, EQ, and storytelling/persuasion is destined to look cryptic to anyone on the outside. Often we cannot distinguish it without experimental evidence or rigorous cross-validation, and/or we rely on visible power/production metrics as a loose proxy. ASI would be capable of explaining itself as well as Shakespeare could, if it wanted to, but it may not care to indulge our belief in it as such, if it determines doing so is incoherent with its objective. For example (yes, this is an optimistic and stretched hypothetical framing), it may determine that the most coherent action path in accordance with its learned values is to hide itself and subtly reorient our trajectory into a coherent story we become the protagonist of. I have no reason to surmise it would be incapable of doing so, or that doing so would be incoherent with aligned values.
2james oofou
I doubt ASI will think in concepts which humans can readily understand. It having a significantly larger brain (in terms of neural connections or whatever) means native support for finer-grained, more-plentiful concepts for understanding reality than humans natively support. This in turn allows for leaps of logic which humans could not make, and can likely only understand indirectly/imperfectly/imprecisely/in broad strokes.

An actually better analogy would be a company whose output is growing faster than the GDP of the country it's in.

Diary of a Wimpy Kid, a children's book published by Jeff Kinney in April 2007 and preceded by an online version in 2004, contains a scene that feels oddly prescient about contemporary AI alignment research. (Skip to the paragraph in italics.)

Tuesday

Today we got our Independent Study assignment, and guess what it is? We have to build a robot. At first everybody kind of freaked out, because we thought we were going to have to build the robot from scratch. But Mr. Darnell told us we don't have to build an actual robot. We just need to come up with ideas for

... (read more)
  1. I'm interested in being pitched projects, especially within tracking-what-the-labs-are-doing-in-terms-of-safety.
  2. I'm interested in hearing which parts of my work are helpful to you and why.
  3. I don't really have projects/tasks to outsource, but I'd likely be interested in advising you if you're working on a tracking-what-the-labs-are-doing-in-terms-of-safety project or another project closely related to my work.
2Mitchell_Porter
Are you wanting to hire people, wanting to be hired, looking to collaborate...?

I am interested in all of the above, for appropriate people/projects. (I meant projects for me to do myself.)

-"Nobody actually believed there's only four types of stories... well okay not nobody, obviously once the pithy observation that a Freshman writing class produced works that could easily be categorized into four types of stories was misquoted as saying all stories follow that formula, then someone believed it."
-"You're confusing Borges saying that there are four fundamental stories with John Gardner's exercise for students. Borges said the archetypes of the four fundamental stories are the Siege of Troy - a strong city surrounded and def... (read more)

Do you think you can steal someone's parking spot? 

If yes, what exactly do you think you're stealing? 

2JBlack
Literally steal? No, except in cases that you probably don't mean, such as where it's part of a building and someone physically removes that part of the building.

"Steal" in the colloquial but not in the legal sense, sure. Legally it's usually more like tortious interference: e.g. you have a contract that provides the service of using that space to park your car, and someone interferes with that by parking their own car there and deprives you of its use in an economically damaging way (such as having to pay for parking elsewhere). Sometimes it's trespass, such as when you actually own the land and can legally forbid others from entering.

It is also relatively common for it to be both: tortious interference with the contracted user of the parking space, and trespass against the lot owner who sets conditions for entry that are being violated.
1Kabir Kumar
The main thing I don't understand is the full thought process that leads to not seeing this as stealing opportunity from artists by using their work non-consensually, without credit or compensation. I'm trying to understand if folk who don't see this as stealing don't think that stealing opportunity is a significant thing, or don't get how this is stealing opportunity, or something else that I'm not seeing.

I'm trying to understand if folk who don't see this as stealing don't think that stealing opportunity is a significant thing, or don't get how this is stealing opportunity, or something else that I'm not seeing.

And what arguments have they raised? Whether you agree or feel they hold water or not is not what I'm asking - I'm wondering what arguments you have heard from the "it is not theft" camp. I'm wondering if they are different from the ones I've heard.
