All of NeroWolfe's Comments + Replies

NeroWolfe53

Probably even negatively correlated. If you think you're protected, you're going to engage in sex more often without real protection than you would if you knew you were just 15 minutes away from being a parent.

NeroWolfe11

I think the point by the OP is that while YOU might think NYC is a great place, not everybody does. One of the nice things about the current model is that you can move to NYC if you want to, but you don't have to. In the hypothetical All-AGI All Around The World future, you get moved there whether or not you like it. Some people will, but it's worth thinking about the people who won't like it and consider what you might do to make that future better for them as well.

4O O
I think this post was supposed to be some sort of gotcha to SF AI optimists, given how it's worded, but in reality a lot of tech workers without family here would gladly move to NYC.[1] A better example would be Dubai: objectively not a bad city, and you could possibly make a lot more without tax, but there are obvious reasons you'd be hesitant. I still don't think this is that huge of a gotcha. The type of people this post is targeting are generally risk-tolerant. So yeah, if you effectively tripled their pay and made them move to Dubai, they'd take it with high likelihood.  1. ^ I don't get the "misses the point" reaction, as I'm pretty sure this was the true motivation of the post; think about it. Who could they be talking about for whom a NYC relocation is within the realm of possibility, who are tech workers, and who are chill with AI transformations?

Your black table of income levels and taxes paid has something wrong with it. I looked at the Tax Foundation link you provide, and it says something rather different from what you report.

Here is how I read their numbers compared to yours:

Top 5%: 23.3% rate vs. your 18.9%

Top 10%: 21.5% rate vs. your 14.3%

Top 25%: 18.4% rate vs. your 10.3%

Top 50%: 16.2% rate vs. your 7.2%

I also note that your row for School Teachers has the same bracket as truck drivers and police officers, but the rate for teachers is from the next bracket up.

1Ben Turtel
Hey @NeroWolfe.   I think you're looking at the wrong numbers. For example, their 23.3% for the top 5% INCLUDES the top 1%, which skews the averages precisely because of the power laws at play. They have another table further down where they break this out:

I hope you market it under the name Soylent.

Why do you think that the space colonists would be able to create a utopian society just because they are not on earth? You will still have all the same types of people up there as down here, and they will continue to exhibit the Seven Deadly Sins. They will just be in a much smaller and more fragile environment, most likely making the consequences of bad behavior worse than here on earth.

3Vanessa Kosoy
It's not because they're not on Earth, it's because they have a superintelligence helping them. Which might give them advice and guidance, take care of their physical and mental health, create physical constraints (e.g. that prevent violence), or even give them mind augmentation like mako yass suggested (although I don't think that's likely to be a good idea early on). And I don't expect their environment to be fragile because, again, designed by superintelligence. But I don't know the details of the solution: the AI will decide those, as it will be much smarter than me.
5mako yass
They have superintelligence, the augmenting technologies that come of it, and the self-reflection that follows receiving those, they are not the same types of people.

So, does this mean that you have descended past "We need to eliminate the suffering of fruit flies" and gone straight for "We need to eliminate the suffering of atomic nuclei that are forced to fuse together?" This seems like a pretty wildly wrong view, and not because rectifying the problem is beyond our technological abilities. It seems like there is plenty of human suffering to attend to without having to invent new kinds of suffering based on atoms in the sun.

6the gears to ascension
it does not. human suffering is the priority because they contain the selfhoods we'd want to imbue descendants of onto the sun's negentropy. earth is rapidly losing the information-theoretic selves of beings and this is a catastrophe. My moral system adds up to being pretty normal in familiar circumstances, the main way I disagree with mainstream is that I want to end farmed animal suffering asap too. But my main priority in the near term is preserving human life and actualization; my concern that the sun is pure suffering is relative to the beings who are themselves dying. The underlying principle here is measuring what could have been in terms of complex beings actualizing themselves with that negentropy, and in order for that could-have-been to occur we need to end the great many sources of death, disease, and suffering that mean those people won't be with us when we can achieve starlifting.

I saw that too and I don’t think it’s a nitpick. All of that was raised in support of the idea that human limits are much greater than we think, so having a couple of examples that are off by a factor of two is not a small difference. In addition to the wild claims about a human with 350 kg of muscle mass, I know the world record for unequipped deadlift is just shy of 1,100 pounds/500kg. “Lifting a car” can’t mean picking it off the ground entirely no matter how small it is; my Miata weighs about 2,400 pounds and other than something like a Lotus Elise it’... (read more)

1George3d6
See my correction; I agree with both points, and I don't think it changes the example. I did a quick google, and I'm not into weightlifting/strongman stuff, so I didn't realize my misinformation was an order of magnitude off. I still think it's essentially fair to say these dudes are "buffer" than historical dudes and seem to owe that to advances in training and (primarily) PEDs.
3[comment deleted]

I may have used too much shorthand here. I agree that flying cars are impractical for the reasons you suggest. I also agree that anybody who can justify it uses a helicopter, which is akin to a flying car.

According to Wikipedia, this is not a concept that first took off (hah!) in the 1970s - there have been working prototypes since at least the mid-1930s. The point of mentioning the idea is that it represents a cautionary tale about how hard it is to make predictions, especially about the future. When cars became widely used (certainly post-WWII), futurist... (read more)

1[anonymous]
Right, I am just trying to ask if you personally thought they were far-fetched when you learned of them. Or were there serious predictions that this was going to happen? Flying cars don't pencil in. AGI financially does pencil in. AGI killing everyone with 95 percent probability in 5 years doesn't, because it requires several physically unlikely assumptions. The two assumptions are: A. being able to optimize an algorithm to use many OOMs less compute than right now; B. the "utility gain" of superintelligence being so high it can just do things credentialed humans don't think are possible at all, like developing nanotechnology in a garage rather than needing a bunch of facilities that resemble IC fabs. If you imagined you might be able to find a way to make flying cars like regular cars, and reach mpg similar to that of regular cars, and the entire FAA drops dead... then yeah, flying cars sound plausible, but you made physically unlikely assumptions.

I gather from the recent census article that most of the readers of this site are significantly younger than I am, so I'll relay some first-hand experiences you probably didn't live through.

I was born in 1964. The Cuban Missile Crisis was only a few years in the past, and Kennedy had just been shot, possibly by Russians, or The Mob, or whomever. Continuing through at least the end of the Cold War in 1989, there was significant public opinion that we were all going to die in a nuclear holocaust (or Nuclear Winter), so really, what was the point in making lon... (read more)

2[anonymous]
I have a big peeve about that. When I try to model a flying car, I see the tradeoffs of (high fuel consumption, higher cost to build, higher skill to drive, noise, falling debris) vs. (less time to reach work). As long as the value per hour of a worker's time is less than the cost per hour of the VTOL + externalities, there isn't ROI for most workers. A smaller market means higher cost, and thus we just have helicopters for billionaires while everyone else drives. Did this come up in the 1970s, or after the oil shocks were over in the 80s? Because it just jumps out at me as a doomed idea that doesn't happen because it doesn't make money. Even now: electric VTOLs fix the fuel cost, using commodity parts makes them cheaper, automation makes them easier to fly, but you still have the negative externalities. AI makes immediate money; GPT-4 seems to be 100+ percent annual ROI... ($60 mil to train, $2 billion annual revenue after a year, assuming a 10 percent profit margin)
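That ROI claim is a simple back-of-the-envelope calculation; a minimal sketch, using the comment's own rough figures (all three numbers are the commenter's assumptions, not confirmed financials):

```python
# Rough annual ROI for GPT-4 using the comment's assumed figures.
# All three inputs are assumptions from the comment, not confirmed financials.
training_cost = 60e6     # ~$60M to train
annual_revenue = 2e9     # ~$2B revenue after a year
profit_margin = 0.10     # assumed 10% margin

annual_profit = annual_revenue * profit_margin   # $200M/year
annual_roi = annual_profit / training_cost       # profit relative to training cost

print(f"annual ROI = {annual_roi:.0%}")  # annual ROI = 333%
```

On these assumptions the ROI is roughly 333% per year, comfortably above the "100+ percent" the comment claims.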
gwern3014

However, you should probably hedge your bets to a certain extent just in case you manage to live to retirement age.

Do you need to, though? People have been citing Feynman's anecdote about the RAND researchers deciding to stop bothering with retirement savings in the 1940s/50s because they thought the odds of nuclear war were so high. But no one has mentioned any of those RAND researchers dying on the streets or living off the proverbial dog chow in retirement. And why would they have?

First, anyone who was a RAND researcher is a smart cookie doing white-c... (read more)

I see a problem with this approach when the speaker does not know the answer to the question:

Under Abs-E, binary questions ("yes"-or-"no") are less obvious to answer. If your answer would ordinarily be "no", you must instead reply as if the question was open-ended. For example, your reply to "will you be here tomorrow?" may be "yes", or "I will be in the office tomorrow", or "I will stay home tomorrow". This forces you to speak with more information.

How do you respond when you don't know what you will be doing tomorrow? This could be a case where you haven... (read more)

1dkl9
You almost always have some information to concentrate your priors. Between mutually-helpful speakers, implicit with an answer to a question is that the answer gives all the information you have on the question that could benefit the questioner. E.g. "Almost certainly somewhere between $150 and $250."

But the big caveat is the exception "with the consent of both parties." I realize that Eliezer doesn't want to play against all comers, but presumably, nobody is expecting Ra and Datawitch to defend themselves against random members of the public.

I'm willing to believe that the "AI" can win this game, since there have been multiple claims of having done so; knowing the method seems like it would benefit everybody.

[edited to fix a misspelling of Eliezer's name]

9datawitch
We kept the secrecy rule because it was the default but I stand by it now as well. There are a lot of things I said in that convo that I wouldn't want posted on lesswrong, enough that I think the convo would have been different without the expectation of privacy. Observing behavior often changes it.

In what way is rugby wildly different from combat :-)

I'm skeptical that the government regulation will amount to much that addresses the main problems. Highly profitable industries have routinely engaged in regulatory capture, often for decades, before things are bad enough that the population as a whole demands change. Oil and gas companies, pharmaceutical companies, etc., all come to mind as current examples of "regulated" industries that do pretty much what they like. Before that we had examples like tobacco, asbestos, narcotics-laced health tonics, etc., that took decades before they were really impeded ... (read more)

Don't underestimate the value of a red-headed woman...

And we knew it is the plural form of "medium," which is isomorphic to the message.

This is a good presentation of the idea. However, I think there are a few important things missing from the discussion:

  • Divisibility of goods for barter. You mention it briefly, but this seems like a big reason to pick an arbitrary accounting unit that can be subdivided at will.
  • Portability. Theoretically, we could agree on grains of wheat as the accounting unit since they are sufficiently fine-grained (hah!) to avoid the divisibility problem. However, having to haul 100 tons of grain down to the auto dealer is pretty inconvenient.
  • Time-shifting. Again, you to
... (read more)

Given that it's been a while since @Kat Woods and @Emerson Spartz claimed they had "incontrovertible proof" that warranted a delay in publishing, I'm hoping it's coming out soon. If not, a simple "we goofed" response would seem appropriate.

NeroWolfe3316

The "give us a week" message appears either misleading or overly optimistic. Unless there have been replies from Nonlinear in a separate thread, I don't think they have explained anything beyond their initial explanation of getting food. Coupled with the fact that it's hard to imagine a valid context or explanation for some of the things they confirm to have happened (drug smuggling, driving without a license or much experience), I have to conclude that they're not likely to change my mind at this point. I realize that probably doesn't matter to them since... (read more)

I'm going to have to work the phrase "delusions of grandeur without the substance to back that up" into my repertoire. Sort of like Churchill's comment about Clement Attlee: "A modest man with much to be modest about."

I'm not sure if this will improve or harm your understanding, but I appear to have taken a slightly different route to the problem. Rather than having two simultaneous equations, I reasoned as follows:

  1. We're trying to figure out how much the ball costs, so that's our X
  2. We know the bat is X + 1
  3. So we know (X) + (X + 1) = 1.1, which becomes 2X + 1 = 1.1, so 2X = 0.1 and X is 5 cents

That seems simpler to me than the two simultaneous equations, even though it's essentially the same math. I'm also not sure if I would have gotten this correct, as the first time I saw this it had the giveaway clue "Note: the answer is NOT 10." I suspect I would have come up with 10 without the warning, but I can't be sure.
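The single-equation route above is easy to check mechanically; here's a minimal sketch using exact fractions to avoid floating-point noise:

```python
from fractions import Fraction

# Bat-and-ball puzzle: total cost $1.10, and the bat costs $1.00 more than the ball.
# Let x be the ball's price. Then x + (x + 1) = 1.10, i.e. 2x = 0.10.
total = Fraction(11, 10)   # $1.10
difference = Fraction(1)   # bat = ball + $1.00

ball = (total - difference) / 2
bat = ball + difference

print(f"ball = ${float(ball):.2f}, bat = ${float(bat):.2f}")
# ball = $0.05, bat = $1.05
```

The check confirms the intuitive-but-wrong answer of 10 cents fails: a 10-cent ball plus a $1.10 bat totals $1.20, not $1.10.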

I think it would be good to word this as "and intends to publish a detailed point-by-point response by September 15th," or whatever the correct date turns out to be.

So, I'm new here, and apparently, I've misunderstood something. My comment didn't seem all that controversial to me, but it's been down-voted by everybody who gave it a vote. Can somebody pass me a clue as to why there is strong disagreement with my statement? Thanks.

5Viliam
As of now, the votes are positive. I guess it sometimes happens that some people like your comment, some people don't like it, and the ones who don't like it just noticed it first. (By the way, I mostly agree with the spirit of your comment, but I think you used too strong words. So I didn't vote either way. For example, as mentioned elsewhere, a good reason to wait for a week might be that the "context" is someone else's words, and you want to get their consent to publish the record. Also, the conclusion that "the Nonlinear side seems pretty fishy" is like... yeah, I suppose that most readers feel the same, but the debate is precisely about whether Nonlinear can produce in a week some context that will make it seem "less fishy". They would probably agree that the text as it is written now does not put them in good light.)
6Adam Zerner
I think that if a comment gets lots and lots of eyes on it, the upvotes and agreement votes will end up being reasonable enough. But I think there are other situations (not uncommon) where there are not enough eyes on it and the vote counts are unreasonable. I also think that there is a risk of unreasonable vote counts even once there are lots of eyes on the comment in question in situations like these where the dynamics are particularly mind-killing. For your comment, I don't see anything downvote worthy. My best guess is that the downvoters didn't think you were being charitable enough. Personally I think the belief that you were being uncharitable enough to justify a downvote is pretty unreasonable.

How complicated is providing context for that without a week of work on your side? The only plausible exculpatory context I can imagine is something akin to: "If somebody sent me a text like this, I would sever all contact with them, so I'm providing it as an example of what I consider to be unacceptable." I fail to see how hard it is to explain why the claims are false now and then provide detailed receipts within the week.

I don't know any of the parties involved here, but the Nonlinear side seems pretty fishy so far.


I don't share your optimistic view that transnational agencies such as the IAEA will be all that effective. The history of the nuclear arms race is that those countries that could develop weapons did, leading to extremes such as the Tsar Bomba, a 50-megaton monster that was more of a dick-waving demonstration than a real weapon. The only thing that ended the unstable MAD doctrine was the internal collapse of the Soviet Union. So, while countries have agreed to allow limited inspection of their nuclear facilities and stockpiles, it's nothing like the level ... (read more)

2alex.herwix
Thanks for engaging with the post and acknowledging that regulation may be a possibility we should consider and not reject out of hand.

My position is actually not that optimistic. I don't believe that such transnational agencies are very likely to work or are a safe bet to ensure a good future; it is more that it seems to be in our best interest to really consider all of the options that we can put on the table, try to learn from what has more or less worked in the past, but also look for creative new approaches and solutions, because the alternative is dystopia or catastrophe.

A key difference between AI and nuclear weapons is that the AI labs are not as sovereign as nation states. If the US, UK, and EU were to impose strong regulation on their companies and "force them to cooperate" similar to what I outlined, this would seem (at least theoretically) possible and already a big win to me. For example, more resources could be allocated to alignment work compared to capabilities work. China seems much more concerned about regulation and control of companies anyway, so I see a chance that they would follow suit in approaching AI carefully.

To be honest, it's overdue that we find the guts to face up to them and put them in their place. Of course that's easier said than done, but the first step is to not be intimidated before we have even tried. Similarly, the call for worldwide regulations often seems to me to be a case of "don't let the perfect be the enemy of the good". Of course, worldwide regulations would be desirable, but if we only get the US, UK, and EU, or even the US or EU alone, to make some moves here, we would be in a far better position. It's a bogeyman that companies will simply turn around and set up shop in the Bahamas to pursue AGI development, because they would not be able to a) secure the necessary compute to run development and b) sell their products in the largest markets. We do have some leverage here.

Thanks for acknowledging the issue that I am pointing to.

I'm a new member here and curious about the site's view on responding to really old threads. My first comment was on a post that turned out to be four years old. It was a post by Wei Dai and appeared at the top of the page today, so I assumed it was new. I found the content to be relevant, but I'd like to know if there is a shared notion of "don't reply to posts that are more than X amount in the past."

4Adam Zerner
I'm very confident that there is no norm of pushing people away from posting on old threads. I'm generally confident that most people appreciate comments on old posts. However, I think it is also true that comments on old posts are unlikely to be seen, voted on, or responded to.
3Raemon
It's totally normal to comment on old posts. We deliberately design the forum to make it easier to do and for people to see that you have.
5Zack_M_Davis
I love getting comments on old posts! (There would be less reason to write if all writing were doomed to be ephemera; the reverse-chronological format of blogs shouldn't be a straitjacket or death sentence for ideas.)

Long-time listener, first-time caller here! I think this is an interesting viewpoint, and I wonder how you decide if you're making forward progress. With standard publications, media presentations, or whatever, you can identify that you've contributed Idea X to Field Y when you're published, but it seems harder to know that for yourself when the main contribution is forum posts. I'm interested in your views on this. 

2Wei Dai
On a forum you can judge other people's opinions of your contributions by the karma (or the equivalent) of your posts, and by their comments. Of course there's a risk that people on some forum liking your posts might represent groupthink instead of genuine intellectual progress, but the same risk exists with academic peer review, and one simply has to keep this risk/uncertainty in mind.