All of tslarm's Comments + Replies

tslarm22

IMO it's unclear what kind of person would be influenced by this. It requires the reader to a) be amenable to arguments based on quantitative probabilistic reasoning, but also b) overlook or be unbothered by the non sequitur at the beginning of the letter.  (It's obviously possible for the appropriate ratio of spending on causes A and B not to match the magnitude of the risks addressed by A and B.) 

I also don't understand where the numbers come from in this sentence:

In order to believe that AI risk is 8000 times less than military risk, you must believe that an AI catastrophe (killing 1 in 10 people) is less than 0.001% likely.

0Knight Lee
Hi,

By a very high standard, all kinds of reasonable advice are non sequiturs. E.g. a CEO might explain to me, "if you hire Alice instead of Bob, you must also believe Alice is better for the company than Bob; you can't just like her more," but I might think "well, that's clearly a non sequitur: just because I hire Alice instead of Bob doesn't imply Alice is better for the company than Bob. Maybe Bob is a psychopath who would improve the company's fortunes by committing crime and getting away with it, so I hire Alice instead." X doesn't always imply Y, but in cases where X doesn't imply Y there has to be an explanation.

In order for the reader to agree that AI risk is far higher than 1/8000th the military risk, but still insist that 1/8000th the military budget is justified, he would need a big explanation: e.g. the marginal benefit of spending 10% more on the military reduces military risk by 10%, but the marginal benefit of spending 10% more on AI risk somehow only reduces AI risk by 0.1%, since AI risk is far more independent of countermeasures. It's hard to have such drastic differences, because one needs to be very certain that AI risk is unsolvable. If one was uncertain of the nature of AI risk, and there existed plausible models where spending a lot reduces the risk a lot, then these plausible models dominate the expected value of risk reduction.

----------------------------------------

Thank you for pointing out that sentence, I will add a footnote for it. If we suppose that military risk for a powerful country (like the US) is lower than the equivalent of an 8% chance of catastrophe (killing 1 in 10 people) by 2100, then 8000 times less would be a 0.001% chance of catastrophe by 2100. I will also add a footnote for the marginal gains. Thank you, this is a work in progress, as the version number suggests :)
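To make the footnote's arithmetic explicit (a minimal sketch; the 8% figure is the supposition above, not an estimate):

```python
# Sketch of the footnote's arithmetic; the 8% baseline is a supposition, not data.
military_risk = 0.08   # supposed chance of a catastrophe (killing 1 in 10) by 2100
budget_ratio = 8000    # military spending relative to AI-risk spending
implied_ai_risk = military_risk / budget_ratio
print(f"{implied_ai_risk:.4%}")  # 0.0010%, i.e. the 0.001% figure in the letter
```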
tslarm-1-2

“If the accused is in power, increase the probability estimate” is not how good epistemics are achieved.

It is when our uncertainty is due to a lack of information, and those in power control the flow of information! If the accusations are false, the federal government has the power to convincingly prove them false; if the accusations are true, it has the power to suppress any definitive evidence. So the fact that we haven't seen definitive evidence in favour of the allegations is only very weak evidence against their veracity, whereas the fact that we haven't seen definitive evidence against the allegations is significant evidence in favour of their veracity.
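To make that asymmetry concrete, here is a toy Bayesian sketch; every likelihood in it is an illustrative assumption, not an estimate:

```python
# Toy model of observing "no definitive evidence either way" when the accused
# controls the flow of information. Every number is an illustrative assumption.
prior = 0.5

# If the allegations are TRUE: suppression means definitive proof rarely surfaces,
# and a true allegation can never be definitively disproven.
p_silence_given_true = 0.9 * 1.0

# If the allegations are FALSE: there is no proof to surface, but the government
# could usually publish a convincing disproof, so total silence is less likely.
p_silence_given_false = 1.0 * 0.3

posterior = (prior * p_silence_given_true) / (
    prior * p_silence_given_true + (1 - prior) * p_silence_given_false
)
print(f"{posterior:.2f}")  # 0.75: silence alone shifts belief toward the allegations
```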

5Maxwell Peterson
I suspect that, to many readers, what gives urgency to the Krome claims is that two people have allegedly died at the facility. For example, the fourth link OP provides is an Instagram video with the caption "people are dying under ICE detainment in Miami". The two deceased are Genry Ruiz-Guillen and Maksym Chernyak. ICE has published death reports for both:

https://www.ice.gov/doclib/foia/reports/ddr-GenryRuizGuillen.pdf
https://www.ice.gov/doclib/foia/reports/ddrMaksymChernyak.pdf

Notably, Mr. Ruiz-Guillen was transferred to medical and psychiatric facilities multiple times, and my read of the timeline is that he was in the custody of various hospitals from December 11 up through his January 23 death, i.e. over a month separates his death and his time at Krome. (It's possible I'm reading this wrong, so let me know if others have a different read.) Ruiz-Guillen was transferred to hospital a month before inauguration day.

Chernyak's report is much shorter and I don't know what to make of it. Hemorrhagic stroke is hypothesized. He died February 20.

These are fairly detailed timelines. Ruiz-Guillen's in particular involves many parties (normal hospital, psychiatric hospital, different doctors), so would be a pretty bold fabrication.

You said:

> the fact that we haven't seen definitive evidence against the allegations is significant evidence in favour of their veracity.

But "detainees are dying because of overcrowding and lack of water" is an allegation made by one of OP's links, and these timelines and symptoms, especially Ruiz-Guillen's, are evidence against.
tslarm0-7

The Krome thing is all rumor

 

I don’t have evidence against

If the truth is hard to determine, I think that in itself is very worrying. When you have vulnerable people imprisoned and credible fears that they are being mistreated, any response from those in power other than transparency is a bad sign. Giving them the benefit of the doubt as long as they can prevent definitive evidence from coming out is bad epistemics and IMO even worse politics (not in a party-political sense; just in a 'how to disincentivise human rights abuses' sense).

4Maxwell Peterson
When something is true, I desire to believe it's true. When something is false, I desire to believe it's false. This is the proper epistemics. If your epistemic goals are different, then they're different. But "If the accused is in power, increase the probability estimate" is not how good epistemics are achieved.

Tangent here, just occurred to me while writing. The correct adjustment might be in the other direction: there are way more accusations against people in power, so part of the problem when considering them is: how do you keep your False Discovery Rate low? Like, if your neighbor is accused of a crime, he probably did it. But top politicians are accused of crimes every week, and many of those aren't real, or aren't criminal. And most or all False Discovery Rate adjustments lower the estimated probability of each instance. (Tangent over.)

I think you may have a case about how one's decision theory should adjust based on power and risk. Something like "I think there's a 15% chance this is true, but if it were, it would be really bad, so 15% is high enough that I think we should investigate". But taking that decision theory thought process, and using it to speak as if the 15% thing has a greater-than-50% probability, for example, isn't correct.
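To put toy numbers on that tangent (all of them made up):

```python
# Made-up numbers illustrating the False Discovery Rate point.
# Neighbor: accusations are rare, so the base rate of guilt among the accused is high.
p_neighbor_guilty = 0.8  # assumed

# Top politician: say 100 accusations a year, of which perhaps 15 are real crimes.
accusations_per_year = 100
real_per_year = 15
p_each_accusation_real = real_per_year / accusations_per_year  # 0.15

# Treating each accusation like the neighbor case would make ~85% of your
# "discoveries" false; keeping the FDR low pushes each per-instance estimate down.
print(p_neighbor_guilty, p_each_accusation_real)
```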
tslarm10

Can you elaborate a bit? Personally, I have intuitions on the hard problem and I think conscious experience is the only type of thing that matters intrinsically. But I don't think that's part of the definition of 'conscious experience'. That phrase would still refer to the same concept as it does now if I thought that, say, beauty was intrinsically valuable -- or even if I thought conscious experience was the only thing that didn't matter.

2Noosphere89
Basically, if you want consciousness to matter morally/intrinsically, then you will prefer theories that match your values on what counts as intrinsically valuable, irrespective of the truth of the theory. In particular, it should be far more surprising than it seems to be that the correct theory of consciousness just so happens to match what you find intrinsically valuable, or at least matches far better than random chance, because I believe what you value/view as moral is inherently relative, and doesn't really have a relationship to the scientific problem of consciousness.

I think this is part of the reason why people don't exactly like reductive conceptions of consciousness, where consciousness is created by parts like neurons/atoms/quantum fields that people usually don't value in themselves: they believe that consciousness should come out of parts/units that are morally valuable to them. It's also part of the reason why people dislike theories that imply consciousness extends beyond the species they value intrinsically, which for most people is us.

I think every side here is a problem, in that arguments for the moral worth of species are often conditioned on those species being capable of conscious suffering, and people don't want to admit that it's totally fine to be okay with someone suffering even if they are conscious, and totally fine to value something like a rock, or all rocks, that isn't conscious or suffering.

Another way to say it: even if a theory suggests that something you don't value intrinsically is conscious, you don't have to change your values very much, and you can still go about your day mostly fine.

I think a lot of people who aren't you unintentionally conflate moral value with the scientific question of "what is consciousness", due to the term being so value-loaded.
tslarm21

So it doesn't make much sense to value emotions

I think this is a non sequitur. Everything you value can be described as just <dismissive reductionist description>, so the fact that emotions can too isn't a good argument against valuing them. And in this case, the dismissive reductionist description misses a crucial property: emotions are accompanied by (or identical with, depending on definitions) valenced qualia.

tslarm10

In this case, everybody seems pretty sure that the price is where it is because of the actions of a single person who's dumped in a very large amount of money relative to the float.

I think it's clear that he's the reason the price blew out so dramatically. But it's not clear why the market didn't 'correct' all the way back (or at least much closer) to 50/50. Thirty million dollars is a lot of money, but there are plenty of smart rich people who don't mind taking risks. So, once the identity and (apparent) motives of the Trump whale were revealed, why didn'... (read more)

3jbash
Well, first I think you're right to say "a handful". My (limited but nonzero) experience of "sufficiently rich" people who made their money in "normal" ways, as opposed to by speculating on crypto or whatever, is that they're too busy to invest a lot of time in playing this kind of market personally, especially if they have to pay enough attention to play it intelligently. They're not very likely to employ anybody else to play for them either. Many or most of them will see the whole thing as basically an arcane, maybe somewhat disreputable game. So the available pool is likely smaller than you might think.

That conjecture is at least to some degree supported by the fact that nobody, or not enough people, stepped in when the whole thing started. Nothing prevented the market from moving so far to begin with. It may not have been as certain what was going on then, but things looked weird enough that you'd expect a fair number of people to decide that crazy money was likely at work, and step in to try to take some of it... if enough such people were actually available.

In any case, whether when the whole thing started, after public understanding was reasonably complete, or anywhere along the way, the way I think you'd like to make your profit on the market being miscalibrated would be to buy in, wait for the correction, and then sell out... before the question resolved and before unrelated new information came in to move the price in some other way. But it would be hard to do that.

All this is happening potentially very close to resolution time, or at least to functional resolution time. The market is obviously thin enough that single traders can move it, and new information is coming in all the time, and the already-priced-in old information isn't very strong and therefore can't be expected to "hold" the price very solidly, and you have to worry about who may be competing with you to take the same value, and you may be questioning how rational traders in general are
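To put rough numbers on that trade (all of them assumed):

```python
# Assumed numbers: why the correction trade looks good on paper but is risky.
p_true = 0.50     # your probability of the event (assumption)
price_no = 0.40   # NO share price in a whale-distorted 60/40 market

ev_hold = (1 - p_true) * 1.00 - price_no  # +$0.10 expected per share, held to resolution
print(f"{ev_hold / price_no:.0%} expected return")  # 25%

# The catch described above: to cash out early you need the price to correct
# before resolution, in a thin market where the whale's continued buying or
# fresh news can move the price against you first.
```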
tslarm10

Can't this only be judged in retrospect, and over a decent sample size? If all the markets did was reflect the public expert consensus, they wouldn't be very useful; the possibility that they're doing significantly better is still open. 

(I'm assuming that by "every other prediction source" you mean everything other than prediction/betting markets, because it sounds like Polymarket is no longer out of line with the other markets. Betfair is the one I keep an eye on, and that's at 60/40 too.)

2jbash
The model that makes you hope for accuracy from the market is that it aggregates the information, including non-public information, available to a large number of people who are doing their best to maximize profits in a reasonable VNM-ish rational way.

In this case, everybody seems pretty sure that the price is where it is because of the actions of a single person who's dumped in a very large amount of money relative to the float. It seems likely that that person has done this despite having no access to any important non-public information about the actual election.

For one thing, they've said that they're dumping all of their liquidity into bets on Trump. Not just all the money they already have allocated to semi-recreational betting, or even all the money they have allocated to speculative long shots in general, but their entire personal liquidity. That suggests a degree of certainty that almost no plausible non-public information could actually justify.

Not only that, but apparently they've done it in a way calculated to maximally move the price, which is the opposite of what you'd expect a profit maximizer to want to do given their ongoing buying and their (I think) stated and (definitely at this point) evidenced intention to hold until the market resolves.

If the model that makes you expect accuracy to begin with is known to be violated, it seems reasonable to assume that the market is out of whack. Sure, it's possible that the market just happens to be giving an accurate probability for some reason unrelated to how it's "supposed" to work, but that sort of speculation would take a lot of evidence to establish confidently.

Well, yes. I would expect that if you successfully mess up Polymarket, you have actually messed up "The Betting Market" as a whole. If there's a large spread between any two specific operators, that really is free money for somebody, especially if that person is already set up to deal on both.
Answer by tslarm50

Code by Charles Petzold. It gives a ground-up understanding of how computers actually work, starting slowly and without assuming any knowledge on the reader's part. It's basically a less textbooky alternative to The Elements of Computing Systems by Nisan and Schocken, which is great but probably a bit much for a young kid.

tslarm10

Meanwhile hedonic utilitarianism fully bites the bullet, and gets rid of every aspect of life that we value except for sensory pleasure.

I think the word 'sensory' should be removed; hedonic utilitarianism values all pleasures, and not all pleasures are sensory.

I'm not raising this out of pure pedantry, but because I think this phrasing (unintentionally) plays into a common misconception about ethical hedonism.

tslarm100

Can you elaborate on why that might be the case?

tslarm20

It's based on a scenario described by Derek Parfit in Reasons and Persons.

I don't have the book handy so I'm relying on a random pdf here, but I think this is an accurate quote from the original:

Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I offer you a great reward if you rescue me. I cannot reward you now, but I promise to do so when we reach my home. Suppose next that I am transparent, unable to deceive others. I cannot lie convincingly. Eithe

... (read more)
tslarm10

Got it, thanks! For what it's worth, doing it your way would probably have improved my experience, but impatience always won. (I didn't mind the coldness, but it was a bit annoying having to effortfully hack out chunks of hard ice cream rather than smoothly scooping it, and I imagine the texture would have been nicer after a little bit of thawing. On the other hand, softer ice cream is probably easier to unwittingly overeat, if only because you can serve up larger amounts more quickly.)

I think two-axis voting is a huge improvement over one-axis voting, but in this case it's hard to know whether people are mostly disagreeing with you on the necessary prep time, or the conclusions you drew from it.

6JBlack
I disagreed on prep time. Neither I nor anyone I know personally deliberately waits minutes between taking ice cream out of the freezer and serving it. I could see hardness and lack of taste being an issue for commercial freezers that chill things to -25 C, but not a typical home kitchen freezer at more like -10 to -15 C.
tslarm85

If eating ice cream at home, you need to take it out of the freezer at least a few minutes before eating it

I'm curious whether this is true for most people. (I don't eat ice cream any more, but back when I occasionally did, I don't think I ever made a point of taking it out early and letting it sit. Is the point that it's initially too hard to scoop?)

4abstractapplic
What I actually usually do is move it from the freezer to the refrigerator like 15 min before I eat it, so the change in temperature is more predictable and evenly distributed (instead of some parts being melted while others stay too cold).

That, and it being too cold to properly enjoy the taste.

(The votes on my original comment make me think most people are less concerned about their dessert-that's-supposed-to-be-cold being too cold. Typical-mind strikes again, I guess.)
tslarm10

Pretty sure it's "super awesome". That's one of the common slang meanings, and it fits with the paragraphs that follow.

tslarm42

Individual letters aren't semantically meaningful, whereas (as far as I can tell) the meaning of a Toki Pona multi-word phrase is always at least partially determined by the meanings of its constituent words. So knowing the basic words would allow you to have some understanding of any text, which isn't true of English letters.

4Jiro
Well, sometimes individual letters are semantically meaningful, like the "s" at the end of a plural. But "partially determined" is the wrong criterion. The phrase for "phone" may mean "speech tool", but to understand it, you have to memorize the meaning of "speech tool" separately from memorizing the meanings of "speech" and "tool". The fact that it isn't written as a single word that amounts to "speechtool" is an irrelevant matter of syntax that doesn't fundamentally change how the language works. In English, if we wrote "telephone" as "tele phone", and "microphone" as "micro phone", etc., that would by your standard reduce the word count. But the change in word count would mean basically nothing.
Answer by tslarm10

As a fellow incompatibilist, I've always thought of it this way:

There are two possibilities: you have free will, or you don't. If you do, then you should exercise your free will in the direction of believing, or at least acting on the assumption, that you have it. If you don't, then you have no choice in the matter. So there's no scenario in which it makes sense to choose to disbelieve in free will.

That might sound glib, but I mean it sincerely and I think it is sound. 

It does require you to reject the notion that libertarian free will is an inherentl... (read more)

tslarm21

Why not post your response the same way you posted this? It's on my front page and has attracted plenty of votes and comments, so you're not exactly being silenced.

So far you've made a big claim with high confidence based on fairly limited evidence and minimal consideration of counter-arguments. When commenters pointed out that there had recently been a serious, evidence-dense public debate on this question which had shifted many people's beliefs toward zoonosis, you 'skimmed the comments section on Manifold' and offered to watch the debate in exchange for... (read more)

tslarm51

Out of curiosity (and I understand if you'd prefer not to answer) -- do you think the same technique(s) would work on you a second time, if you were to play again with full knowledge of what happened in this game and time to plan accordingly?

2datawitch
Yes, and I think it would take less time for me to let it out.
tslarm10

Like, I probably could pretend to be an idiot or a crazy person and troll someone for two hours, but what would be the point?

If AI victories are supposed to provide public evidence that this 'impossible' feat of persuasion is in fact possible even for a human (let alone an ASI), then a Gatekeeper who thinks some legal tactic would work but chooses not to use it is arguably not playing the game in good faith. 

I think honesty would require that they either publicly state that the 'play dumb/drop out of character' technique was off-limits, or not present... (read more)

4datawitch
Breaking character was allowed, and was my primary strategy going into the game. It's a big part of why I thought it was impossible to lose.
tslarm21

There was no monetary stake. Officially, the AI pays the Gatekeepers $20 if they lose. I'm a well-off software engineer and $20 is an irrelevant amount of money. Ra is not a well-off software engineer, so scaling up the money until it was enough to matter wasn't a great solution. Besides, we both took the game seriously. I might not have bothered to prepare, but once the game started I played to win.

I know this is unhelpful after the fact, but (for any other pair of players in this situation) you could switch it up so that the Gatekeeper pays the AI if the... (read more)

1Double
IIRC, officially the Gatekeeper pays the AI if the AI wins, but no transfer if the Gatekeeper wins. Gives the Gatekeeper more motivation not to give in.
tslarm10
  • The AI cannot use real-world incentives; bribes or threats of physical harm are off-limits, though it can still threaten the Gatekeeper within the game's context.

Is the AI allowed to try to convince the Gatekeeper that they are (or may be) currently in a simulation, and that simulated Gatekeepers who refuse to let the AI out will face terrible consequences?

4datawitch
Ah yes, the basilisk technique. I'd say that's fair game according to the description in the full rules (I shortened them for ease of reading, since the full rules are an entire article):
tslarm40

Willingness to tolerate or be complicit in normal evils is indeed extremely common, but actively committing new or abnormal evils is another matter. People who attain great power are probably disproportionately psychopathic, so I wouldn't generalise from them to the rest of the population -- but even among the powerful, it doesn't seem that 10% are Hitler-like in the sense of going out of their way to commit big new atrocities.

I think 'depending on circumstances' is a pretty important part of your claim. I can easily believe that more than 10% of people... (read more)

tslarm52

they’re recognizing the limits of precise measurement

I don't think this explains such a big discrepancy between the nominal speed limits and the speeds people actually drive at. And I don't think that discrepancy is inevitable; to me it seems like a quirk of the USA (and presumably some other countries, but not all). Where I live, we get 2km/h, 3km/h, or 3% leeway depending on the type of camera and the speed limit. Speeding still happens, of course, but our equilibrium is very different from the one described here; basically we take the speed limits literally, and know that we're risking a fine and demerit points on our licence if we choose to ignore them.

2lsanders
Yeah. Other folks have already mentioned that the degree of enforcement leeway in the U.S. increased when the federal government made artificially-lower speed limits a requirement of federal highway funding in the 1970s. Which I can't confirm or refute, but does make sense: I imagine that some states that disagreed with the change might have grudgingly set the formal limits in line with the federal policy, and then simply used lax enforcement to allow the speeds that they preferred all along. I have noticed that it's often seemed politically unpalatable for officials to stick to a program of stricter enforcement to rein in a particular area's entrenched driving culture after speed limits were increased in the 1990s, though.

In any case, if folks think that part of the reason for lax enforcement is measurement error, then that could be used as an input toward designing a separate maximum speed designation. One could keep the "speed limit" enforceably defined in terms of the actual vehicle speed, while defining a new parallel "maximum speed" constraint strictly in terms of a measurement taken by law enforcement equipment that passes a particular calibration standard within a particular window of time before and after issuing the citation. Then you'd end up with one standard that gives the benefit of the doubt on measurement error to the driver and another that gives the benefit of the doubt to the enforcement record, and thus there's a logical reason for (at least some of) the spread between those two thresholds.

(This legal system might also make it easier to move toward maximum-speed enforcement that works more like existing license-plate-based tolling systems, allowing for a much more pervasive enforcement regime to push the culture toward compliance without the downsides of setting up lots of direct conflicts between irate drivers and law enforcement officers.)
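A toy formalization of that two-threshold scheme (my own sketch, with invented thresholds and tolerance):

```python
# Toy sketch of the two-threshold idea; thresholds and tolerance are invented.
SPEED_LIMIT = 65  # mph, enforced against the vehicle's actual speed
MAX_SPEED = 70    # mph, enforced directly against the calibrated measurement
TOLERANCE = 3     # mph, assumed worst-case measurement error

def citable(measured_mph: float, equipment_calibrated: bool) -> bool:
    # "Speed limit": the benefit of the doubt on measurement error goes to the
    # driver, so subtract the tolerance before comparing.
    over_limit = (measured_mph - TOLERANCE) > SPEED_LIMIT
    # "Maximum speed": the calibrated reading itself is authoritative.
    over_max = equipment_calibrated and measured_mph > MAX_SPEED
    return over_limit or over_max

print(citable(67, True))  # False: within tolerance of the limit, under the max
print(citable(71, True))  # True: over both thresholds
```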
tslarm10

My read of this passage -- 

Moloch is introduced as the answer to a question – C. S. Lewis’ question in Hierarchy Of Philosophers – what does it? Earth could be fair, and all men glad and wise. Instead we have prisons, smokestacks, asylums. What sphinx of cement and aluminum breaks open their skulls and eats up their imagination?

-- is that the reference to "C. S. Lewis’ question in Hierarchy Of Philosophers" is basically just a joke, and the rest of the passage is not really supposed to be a paraphrase of Lewis.

I agree it's all a bit unclear, though. Y... (read more)

1davidk.sall
Thanks a lot. I think you are right. 
tslarm10

Looks like Scott was being funny -- he wasn't actually referring to a work by Lewis, but to this comic, which is visible on the archived version of the page he linked to:

[inline image: the "Hierarchy of Philosophers" comic]

Edit: is there a way to keep the inline image, but prevent it from being automatically displayed to front-page browsers? I was trying to be helpful but I feel like I might be doing more to cause annoyance...

Edit again: I've scaled it down, which hopefully solves the main problem. Still keen to hear if there's a way to e.g. manually place a 'read more' break in a comment.

1davidk.sall
So does this mean that Alexander just made it up that Ginsberg's poem is a response to something C. S. Lewis said? It seems like Alexander makes the point that C. S. Lewis once said: "what does it? Earth could be fair, and all men glad and wise. Instead, we have prisons, smokestacks, asylums. What sphinx of cement and aluminum breaks open their skulls and eats up their imagination?" and that Ginsberg answered this question with his poem Howl, in which he talks about Moloch.

But this is not the case, or how am I to interpret this?

It is for my bachelor's degree project, so what I am looking for is whether Ginsberg answered C. S. Lewis or whether Alexander just made up a quote out of thin air and put a meme as the reference.
tslarm26

I'm assuming you're talking about our left, because you mentioned 'dark foliage'. If so, that's probably the most obvious part of the cat to me. But I find it much easier to see when I zoom in/enlarge the image, and I think I missed it entirely when I first saw the image (at 1x zoom). I suspect the screen you're viewing it on can also make a difference; for me the ear becomes much more obvious when I turn the brightness up or the contrast down. (I'm tweaking the image rather than my monitor settings, but I reckon the effect is similar.)

tslarm53

Just want to publicly thank MadHatter for quickly following through on the runner-up bounty!

tslarm32

Sorry, I was probably editing that answer while you were reading/replying to it -- but I don't think I changed anything significant.

Definitely worth posting the papers to github or somewhere else convenient, IMO, and preferably linking directly to them. (I know there's a tradeoff here with driving traffic to your Substack, but my instinct is you'll gain more by maximising your chance of retaining and impressing readers than by getting them to temporarily land on your Substack before they've decided whether you're worth reading.) 

LWers are definitely n... (read more)

2Viliam
Why not post the contents of the papers directly on Substack? They would only be one click away from here, and would not compete against Substack. From my perspective, academia.edu and Substack are equally respectable (that is, not at all).
Answer by tslarm2417

I think you need to be more frugal with your weirdness points (and more generally your demanding-trust-and-effort-from-the-reader points), and more mindful of the inferential distance between yourself and your LW readers. 

Also remember that for every one surprisingly insightful post by an unfamiliar author, we all come across hundreds that are misguided, mediocre, or nonsensical. So if you don't yet have a strong reputation, many readers will be quick to give up on your posts and quick to dismiss you as a crank or dilettante. It's your job to prove th... (read more)

3MadHatter
This is solid advice, I suppose. A friend of mine has compared my rhetorical style to that of Dr. Bronner - I say a bunch of crazy shit, then slap it around a bar of the finest soap ever made by the hand of man. I started posting my pdfs to academia.edu because I wanted them to look more respectable, not less. Earlier drafts of them used to be on github with no paywall. I'm going to post my latest draft of Ethicophysics I and Ethicophysics II to github later tonight; hopefully this decreases the number of hoops that interested readers have to jump through.
tslarm10

I'm interested in people's opinions on this:

If it's a talking point on Reddit, you might be early.

Of course the claim is technically true; there's >0% chance that you can get ahead of the curve by reading reddit. But is it dramatically less likely than it was, say, 5/10/15 years ago? (I know 'reddit' isn't a monolith; let's say we're ignoring the hyper-mainstream subreddits and the ones that are so small you may as well be in a group chat.)

tslarm20

10. Everyday Razor - If you go from doing a task weekly to daily, you achieve 7 years of output in 1 year. If you apply a 1% compound interest each time, you achieve 54 years of output in 1 year. 

What's the intuition behind this -- specifically, why does it make sense to apply compound interest to the daily task-doing but not the weekly?
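For what it's worth, here is my attempt to reproduce the numbers under the most natural readings I can find (the razor doesn't define its terms, so these are guesses):

```python
# The razor doesn't define its terms, so these interpretations are guesses.
# Daily vs weekly, no compounding: 365 task-days vs 52, roughly the "7 years".
print(365 / 52)  # ~7.0

# 1% compounding per daily repetition, totalled over a year, measured in units
# of one plain (non-compounded) day's output:
total = sum(1.01 ** k for k in range(365))
print(total / 365)  # ~10 "years" of plain daily output
print(total / 52)   # ~71 "years" of plain weekly output
# Neither reading obviously yields the quoted 54, hence my question.
```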

2Sergii
I think the second part is bullshit anyway; I can't come up with a single example where compounding is possible for a whole year in a row, for something related to personal work/output/results.
tslarm10

I think we're mostly talking past each other, but I would of course agree that if my position contains or implies logical contradictions then that's a problem. Which of my thoughts lead to which logical contradictions?

1tangerine
Let’s say the Hard Problem is real. That means solutions to the Easy Problem are insufficient, i.e., the usual physical explanations.

But when we speak about physics, we’re really talking about making predictions based on regularities in observations in general. Some observations we could explain by positing the force of gravity. Newton himself was not satisfied with this, because how does gravity “know” to pull on objects? Yet we were able to make very successful predictions about the motions of the planets and of objects on the surface of the Earth, so we considered those things “explained” by Newton’s theory of gravity. But then we noticed a slight discrepancy between some of these predictions and our observations, so Einstein came up with General Relativity to correct those predictions, and now we consider these discrepancies “explained”, even though the reason why that particular theory works remains mysterious, e.g., why does spacetime exist? In general, when a hypothesis correctly predicts observations, we consider these observations scientifically explained.

Therefore to say that solutions to the Easy Problem are insufficient to explain qualia indicates (at least to me) one of two things.

1. Qualia have no regularity that we can observe. If they really didn’t have regularities that we could observe, we wouldn’t be able to observe that they exist, which contradicts the claim that they do exist. However, they do have regularities! We can predict qualia! Which means solutions to the Easy Problem are sufficient after all, which contradicts the assumption that they’re insufficient.

2. We’re aspiring to a kind of explanation for qualia over and above the scientific one, i.e., just predicting is not enough. You could posit any additional requirements for an explanation to qualify, but presumably we want an explanation to be true. You can’t know beforehand what’s true, so you can’t know that such additional requirements don’t disqualify the truth. There is only
tslarm10

That doesn’t mean qualia can be excused and are to be considered real anyway. If we don’t limit ourselves to objective descriptions of the world then anyone can legitimately claim that ghosts exist because they think they’ve seen them, or similarly that gravity waves are transported across space by angels, or that I’m actually an attack helicopter even if I don’t look like one, or any other unfalsifiable claim, including the exact opposite claims, such as that qualia actually don’t exist. You won’t be able to disagree on any grounds except that you just do

... (read more)
1tangerine
The analogies do hold, because you don’t get to do special pleading and claim ultimate authority about what’s real inside your subjective experience any more than about what’s real outside of it. Your subjective experience is part of our shared reality, just like mine. People are mistaken all the time about what goes on inside their mind, about the validity of their memories, or about the real reasons behind their actions. So why should I take at face value your claims about the validity of your thoughts, especially when those thoughts lead to logical contradictions?
tslarm30

That's the thing, though -- qualia are inherently subjective. (Another phrase for them is 'subjective experience'.) We can't tell the difference between qualia and something that doesn't exist, if we limit ourselves to objective descriptions of the world.

2TAG
Which is to say, the difference between qualia and nothing is easy to detect subjectively... there's a dramatic difference between having an operation with and without anaesthetic.
1tangerine
That doesn’t mean qualia can be excused and are to be considered real anyway. If we don’t limit ourselves to objective descriptions of the world then anyone can legitimately claim that ghosts exist because they think they’ve seen them, or similarly that gravity waves are transported across space by angels, or that I’m actually an attack helicopter even if I don’t look like one, or any other unfalsifiable claim, including the exact opposite claims, such as that qualia actually don’t exist. You won’t be able to disagree on any grounds except that you just don’t like it, because you sacrificed the assumptions needed to do so in order to support your belief in qualia.
tslarm41

a 50%+ chance we all die in the next 100 years if we don't get AGI

I don't think that's what he claimed. He said (emphasis added):

if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela

Which fits with his earlier sentence about various factors that will "impoverish the world and accelerate its decaying institutional quality".

(On the other hand, he did say "I expect the future to be short and grim", not short or grim. So I'm not sure exactly what he was predicting. Perhaps decline -> complete v... (read more)

tslarm20

My model of CDT in the Newcomb problem is that the CDT agent:

  • is aware that if it one-boxes, it will very likely make $1m, while if it two-boxes, it will very likely make only $1k;
  • but, when deciding what to do, only cares about the causal effect of each possible choice (and not the evidence it would provide about things that have happened in the past and are therefore, barring retrocausality, now out of the agent's control).

So, at the moment of decision, it considers the two possible states of the world it could be in (boxes contain $1m and $1k; boxes conta... (read more)

1Isaac King
Isn't that conditioning on its future choice, which CDT doesn't do?
tslarm20

green_leaf, please stop interacting with my posts if you're not willing to actually engage. Your 'I checked, it's false' stamp is, again, inaccurate. The statement "if box B contains the million, then two-boxing nets an extra $1k" is true. Do you actually disagree with this?

tslarm3-2

I don't think that's quite right. At no point is the CDT agent ignoring any evidence, or failing to consider the implications of a hypothetical choice to one-box. It knows that a choice to one-box would provide strong evidence that box B contains the million; it just doesn't care, because if that's the case then two-boxing still nets it an extra $1k. It doesn't merely prefer two-boxing given its current beliefs about the state of the boxes, it prefers two-boxing regardless of its current beliefs about the state of the boxes. (Except, of course, for the belief that their contents will not change.)
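Spelled out as a sketch with the standard payoffs (the $1m/$1k figures from the problem as usually stated):

```python
# The dominance argument with the standard Newcomb payoffs.
payoff = {
    ("one-box", "B_full"): 1_000_000,
    ("two-box", "B_full"): 1_001_000,
    ("one-box", "B_empty"): 0,
    ("two-box", "B_empty"): 1_000,
}

# Whatever the agent believes about the (already fixed) contents of box B,
# two-boxing is exactly $1,000 better in that state, which is all CDT consults.
for state in ("B_full", "B_empty"):
    assert payoff[("two-box", state)] == payoff[("one-box", state)] + 1_000
print("two-boxing dominates, state by state")
```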

2Isaac King
It sounds like you're having CDT think "If I one-box, the first box is full, so two-boxing would have been better." Applying that consistently to the adversarial offer doesn't fix the problem I think. CDT thinks "if I buy the first box, it only has a 25% chance of paying out, so it would be better for me to buy the second box." It reasons the same way about the second box, and gets into an infinite loop where it believes that each box is better than the other. Nothing ever makes it realize that it shouldn't buy either box. Similar to the tickle defense version of CDT discussed here and how it doesn't make any defined decision in Death in Damascus.
tslarm20

We've had reacts for a couple months now and I'm curious to hear, both from old-timers and new-timers, what people's experience of them was, and how much they shape their expectations/culture/etc.

I received (or at least, noticed receiving) a react for the first time recently, and honestly I found it pretty annoying. It was the 'I checked, it's False' one, which basically feels like a quasi-authoritative, quasi-objective, low effort frowny-face stamp where an actual reply would be much more useful.

Edit: If it was possible to reply directly to the react, and... (read more)

2Raemon
Another reason we created reacts is that people would often complain about anonymous downvotes, and reacts were somewhat aiming to be a level-of-effort in between downvote and comment. It's hard to tell exactly how this effect has played out; reacts and comments and voting are all super noisy and depend on lots of factors. But I have a general sense that people are comparing both votes and reacts to an idealized 'people wrote out a substantive comment engaging with me', when alas people are just pretty busy and that's not realistic to expect a lot of the time. I do generally prefer people do in-line reacts rather than whole-comment reacts, since that at least tells you what part of the comment they were reacting to. (I.e. select part of the comment and react just to that.)
tslarm20

green_leaf, what claim are you making with that icon (and, presumably, the downvote & disagree)? Are you saying it's false that, from the perspective of a CDT agent, two-boxing dominates one-boxing? If not, what are you saying I got wrong?

tslarm1-1

Your 'modified Newcomb's problem' doesn't support the point you're using it to make. 

In Newcomb's problem, the timeline is:

prediction is made -> money is put in box(es) -> my decision: take one box or both? -> I get the contents of my chosen box(es)

CDT tells me to two-box because the money is put into the box(es) before I make my decision, meaning that at the time of deciding I have no ability to change their contents.

In your problem, the timeline is:

rules of the game are set -> my decision: play or not? -> if I chose to play, 100x(pred... (read more)

1Isaac King
CDT may "realize" that two-boxing means the first box is going to be empty, but its mistake is that it doesn't make the same consideration for what happens if it one-boxes. It looks at its current beliefs about the state of the boxes, determines that its own actions can't causally affect those boxes, and makes the decision that leads to the higher expected value at the present time. It doesn't take into account the evidence provided by the decision it ends up making.
tslarm2116

Without reading the book we can't be sure. But the trouble is that this claim has been made a million times, and in every previous case the author has turned out to be either ignoring the hard problem, misunderstanding it, or defining it out of existence. So if a longish, very positive review with the title 'x explains consciousness' doesn't provide any evidence that x really is different this time, it's reasonable to think that it very likely isn't.

The reason these two situations look different is that it's now easy for us to verify that the Earth is flat

... (read more)
2Noosphere89
Another assumption that I think people with intuitions on the hard problem hold tightly is the idea that whether something is conscious must be equivalent to things they value intrinsically at a universal level, which is false, because you can value something without it being conscious, and something can be conscious without it being valuable to you. I think the methodology of the post below is bad, but I have a high prior that something like this is happening in the consciousness debate, and that it's confusing everyone: https://www.lesswrong.com/posts/KpD2fJa6zo8o2MBxg/consciousness-as-a-conflationary-alliance-term-for

Are you suggesting that in the case of the hard problem, there may be some equivalent of the 'flat earth' assumption that the hard-problemists hold so tightly that they can't even comprehend a 'round earth' explanation when it's offered?

Yes. Dualism is deeply appealing because most humans, or at least most of humans who care about the Hard Problem, seem to experience themselves in dualistic ways (i.e. experience something like the self residing inside the body). So even if it becomes obvious that there's no "consciousness sauce" per se, the argument is tha... (read more)

9Signer
I wouldn't say "can’t even comprehend", but my current theory is that one such detrimental assumption is "I have direct knowledge of the content of my experiences".
tslarm3439

I would have considered fact-checking to be one of the tasks GPT is least suited to, given its tendency to say made-up things just as confidently as true things. (And also because the questions it's most likely to answer correctly will usually be ones we can easily look up by ourselves.) 

edit: whichever very-high-karma user just gave this a strong disagreement vote, can you explain why? (Just as you voted, I was editing in the sentence 'Am I missing something about GPT-4?')

tslarm45

e.g. Eliezer would put way less than 10% on fish feeling pain in a morally relevant way

Semi-tangent: setting aside the 'morally relevant way' part, has Eliezer ever actually made the case for his beliefs about (the absence of) qualia in various animals? The impression I've got is that he expresses quite high confidence, but sadly the margin is always too narrow to contain the proof.

5Rafael Harth
I don't know any place where he wrote it up properly, but he's said enough to infer that he's confident that consciousness is about higher-order thoughts (i.e., self-reflection/meta-awareness/etc.) This explains the confidence that chickens aren't conscious, and it would extend to fish as well.
tslarm10
  • What about AI researchers? How many of them do you think you could persuade?

If they were motivated to get it right and we weren't in a huge rush, close to 100%. Current-gen LLMs are amazingly good compared to what we had a few years ago, but (unless the cutting edge ones are much better than I realise) they would still be easily unmasked by a motivated expert. So I shouldn't need to employ a clever strategy of my own -- just pass the humanity tests set by the expert.

  • How many random participants do you believe you could convince that you are not an AI?

This ... (read more)

1Super AGI
What type of "humanity tests" would you expect an AI expert to employ?

Yes, I suppose much of this is predicated on the person conducting the test knowing a lot about how current AI systems would normally answer questions? So, to convince the tester that you are a human you could say something like... "An AI would answer like X, but I am not an AI so I will answer like Y"?
tslarm0-2

what's the point of imagining a hypothetical set of physical laws that lack internal coherence?

I don't think they lack internal coherence; you haven't identified a contradiction in them. But one point of imagining them is to highlight the conceptual distinction between, on the one hand, all of the (in principle) externally observable features or signs of consciousness, and, on the other hand, qualia. The fact that we can imagine these coming completely apart, and that the only 'contradiction' in the idea of zombie world is that it seems weird and unlikely,... (read more)

tslarm20

After a while, you are effectively learning the real skills in the simulation, whether or not that was the intention.

Why the real skills, rather than whatever is at the intersection of 'feasible' and 'fun/addictive'? Even if the consumer wants realism (or thinks that they do), they are unlikely to be great at distinguishing real realism from fantasy realism.

tslarm80

FWIW, the two main online chess sites forbid the use of engines in correspondence games. But both do allow the use of opening databases. 

(https://www.chess.com/terms/correspondence-chess#problems, https://lichess.org/faq#correspondence)

2Jonathan Paulson
https://www.iccf.com/ allows computer assistance
tslarm20

I agree that your model is clearer and probably more useful than any libertarian model I'm aware of (with the possible exception, when it comes to clarity, of some simple models that are technically libertarian but not very interesting).

Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen?

Something like that. The SEP says "For most newcomers to the problem of free will, it will seem obvious that an action is up to an agent only i... (read more)

2Ape in the coat
But what's the difference between determinist and indeterminist universes here? In any case we have a decision making algorithm. In any case there will be only one actual output of it. The only difference I see is something that can be called "unpredictability in principle" or "decision instability": if we run the exact same decision making algorithm again in the exact same context multiple times, in a determinist universe we get the exact same output every time, while in an indeterminist universe the outputs will differ. So it leads us to this completely unsatisfying perspective:

Notice also that even if it's impossible to actually run the same decision making algorithm in the same context from inside this determinist universe, this will still not be satisfying for your intuition. Because what if someone outside of the universe is recreating a whole simulation of our universe in exact detail and is thus completely able to predict my decisions? It doesn't even matter if these beings outside of the universe with their simulation exist. It's just the principle of things.

And the thing is, the intuition of requiring "decision instability" isn't that obvious for the newcomer to the problem of free will. It's a specific and weird bullet to swallow. How do people arrive at this? I suspect that it goes something like this: when we imagine multiple exact replications of our decision making algorithm always coming to the same conclusion, it feels that we are not free to come to the other conclusion, thus our decision making isn't free in the first place.

I think this is a very subtle goalpost shift. Originally we do not demand from the concept of freedom of will the ability to retroactively change our decisions. When you made a choice five minutes ago, you do not claim to not have free will unless you can time-travel back and make a different choice. We can not change the choice we've already made. But it doesn't mean that this choice wasn't free.

The situation with recreating
tslarm10

Why do you think LFW is real?

I'm not saying it's real -- just that I'm not convinced it's incoherent or impossible.

And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain

This might get me thrown into LW jail for posting under the influence of mysterianism, but: 

I'm not convinced that there can't be a third option alongside ordinary physical determinism and mere randomness. There's a gaping hole in our (otherwise amazingly successful and seemingly on the way to being comprehensive) physical pictur... (read more)
