“If the accused is in power, increase the probability estimate” is not how good epistemics are achieved.
It is when our uncertainty is due to a lack of information, and those in power control the flow of information! If the accusations are false, the federal government has the power to convincingly prove them false; if the accusations are true, it has the power to suppress any definitive evidence. So the fact that we haven't seen definitive evidence in favour of the allegations is only very weak evidence against their veracity, whereas the fact that we haven't seen definitive evidence against the allegations is significant evidence in favour of their veracity.
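To make the asymmetry concrete, here's a toy Bayesian version of it. The specific numbers are purely illustrative (my own, not claims about the actual case), but they show how the two kinds of "no definitive evidence" can cut in opposite directions:

```python
# Toy Bayes update. Numbers are illustrative only.
prior = 0.5  # prior probability the allegations are true

# Observation 1: no definitive evidence FOR the allegations has appeared.
# If they're true, those in power can plausibly suppress proof; if they're
# false, proof of course never appears.
p_no_proof_if_true = 0.8
p_no_proof_if_false = 0.99
posterior = (prior * p_no_proof_if_true) / (
    prior * p_no_proof_if_true + (1 - prior) * p_no_proof_if_false
)
print(round(posterior, 2))  # ~0.45: only a weak update against

# Observation 2: no definitive evidence AGAINST the allegations has appeared.
# If they're false, the government could likely have disproven them by now.
p_no_refutation_if_true = 0.95
p_no_refutation_if_false = 0.3
posterior = (prior * p_no_refutation_if_true) / (
    prior * p_no_refutation_if_true + (1 - prior) * p_no_refutation_if_false
)
print(round(posterior, 2))  # ~0.76: a significant update in favour
```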
The Krome thing is all rumor
I don’t have evidence against
If the truth is hard to determine, I think that in itself is very worrying. When you have vulnerable people imprisoned and credible fears that they are being mistreated, any response from those in power other than transparency is a bad sign. Giving them the benefit of the doubt as long as they can prevent definitive evidence from coming out is bad epistemics and IMO even worse politics (not in a party-political sense; just in a 'how to disincentivise human rights abuses' sense).
Can you elaborate a bit? Personally, I have intuitions on the hard problem and I think conscious experience is the only type of thing that matters intrinsically. But I don't think that's part of the definition of 'conscious experience'. That phrase would still refer to the same concept as it does now if I thought that, say, beauty was intrinsically valuable -- or even if I thought conscious experience was the only thing that didn't matter.
So it doesn't make much sense to value emotions
I think this is a non sequitur. Everything you value can be described as just <dismissive reductionist description>, so the fact that emotions can too isn't a good argument against valuing them. And in this case, the dismissive reductionist description misses a crucial property: emotions are accompanied by (or identical with, depending on definitions) valenced qualia.
In this case, everybody seems pretty sure that the price is where it is because of the actions of a single person who's dumped in a very large amount of money relative to the float.
I think it's clear that he's the reason the price blew out so dramatically. But it's not clear why the market didn't 'correct' all the way back (or at least much closer) to 50/50. Thirty million dollars is a lot of money, but there are plenty of smart rich people who don't mind taking risks. So, once the identity and (apparent) motives of the Trump whale were revealed, why didn'...
Can't this only be judged in retrospect, and over a decent sample size? If all the markets did was reflect the public expert consensus, they wouldn't be very useful; the possibility that they're doing significantly better is still open.
(I'm assuming that by "every other prediction source" you mean everything other than prediction/betting markets, because it sounds like Polymarket is no longer out of line with the other markets. Betfair is the one I keep an eye on, and that's at 60/40 too.)
Code by Charles Petzold. It gives a ground-up understanding of how computers actually work, starting slowly and without assuming any knowledge on the reader's part. It's basically a less textbooky alternative to The Elements of Computing Systems by Nisan and Schocken, which is great but probably a bit much for a young kid.
Meanwhile hedonic utilitarianism fully bites the bullet, and gets rid of every aspect of life that we value except for sensory pleasure.
I think the word 'sensory' should be removed; hedonic utilitarianism values all pleasures, and not all pleasures are sensory.
I'm not raising this out of pure pedantry, but because I think this phrasing (unintentionally) plays into a common misconception about ethical hedonism.
Can you elaborate on why that might be the case?
It's based on a scenario described by Derek Parfit in Reasons and Persons.
I don't have the book handy so I'm relying on a random pdf here, but I think this is an accurate quote from the original:
...Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I offer you a great reward if you rescue me. I cannot reward you now, but I promise to do so when we reach my home. Suppose next that I am transparent, unable to deceive others. I cannot lie convincingly. Eithe
Got it, thanks! For what it's worth, doing it your way would probably have improved my experience, but impatience always won. (I didn't mind the coldness, but it was a bit annoying having to effortfully hack out chunks of hard ice cream rather than smoothly scooping it, and I imagine the texture would have been nicer after a little bit of thawing. On the other hand, softer ice cream is probably easier to unwittingly overeat, if only because you can serve up larger amounts more quickly.)
I think two-axis voting is a huge improvement over one-axis voting, but in this case it's hard to know whether people are mostly disagreeing with you on the necessary prep time, or the conclusions you drew from it.
If eating ice cream at home, you need to take it out of the freezer at least a few minutes before eating it
I'm curious whether this is true for most people. (I don't eat ice cream any more, but back when I occasionally did, I don't think I ever made a point of taking it out early and letting it sit. Is the point that it's initially too hard to scoop?)
Pretty sure it's "super awesome". That's one of the common slang meanings, and it fits with the paragraphs that follow.
Individual letters aren't semantically meaningful, whereas (as far as I can tell) the meaning of a Toki Pona multi-word phrase is always at least partially determined by the meanings of its constituent words. So knowing the basic words would allow you to have some understanding of any text, which isn't true of English letters.
As a fellow incompatibilist, I've always thought of it this way:
There are two possibilities: you have free will, or you don't. If you do, then you should exercise your free will in the direction of believing, or at least acting on the assumption, that you have it. If you don't, then you have no choice in the matter. So there's no scenario in which it makes sense to choose to disbelieve in free will.
That might sound glib, but I mean it sincerely and I think it is sound.
It does require you to reject the notion that libertarian free will is an inherentl...
Why not post your response the same way you posted this? It's on my front page and has attracted plenty of votes and comments, so you're not exactly being silenced.
So far you've made a big claim with high confidence based on fairly limited evidence and minimal consideration of counter-arguments. When commenters pointed out that there had recently been a serious, evidence-dense public debate on this question which had shifted many people's beliefs toward zoonosis, you 'skimmed the comments section on Manifold' and offered to watch the debate in exchange for...
Out of curiosity (and I understand if you'd prefer not to answer) -- do you think the same technique(s) would work on you a second time, if you were to play again with full knowledge of what happened in this game and time to plan accordingly?
Like, I probably could pretend to be an idiot or a crazy person and troll someone for two hours, but what would be the point?
If AI victories are supposed to provide public evidence that this 'impossible' feat of persuasion is in fact possible even for a human (let alone an ASI), then a Gatekeeper who thinks some legal tactic would work but chooses not to use it is arguably not playing the game in good faith.
I think honesty would require that they either publicly state that the 'play dumb/drop out of character' technique was off-limits, or not present...
There was no monetary stake. Officially, the AI pays the Gatekeepers $20 if they lose. I'm a well-off software engineer and $20 is an irrelevant amount of money. Ra is not a well-off software engineer, so scaling up the money until it was enough to matter wasn't a great solution. Besides, we both took the game seriously. I might not have bothered to prepare, but once the game started I played to win.
I know this is unhelpful after the fact, but (for any other pair of players in this situation) you could switch it up so that the Gatekeeper pays the AI if the...
- The AI cannot use real-world incentives; bribes or threats of physical harm are off-limits, though it can still threaten the Gatekeeper within the game's context.
Is the AI allowed to try to convince the Gatekeeper that they are (or may be) currently in a simulation, and that simulated Gatekeepers who refuse to let the AI out will face terrible consequences?
Willingness to tolerate or be complicit in normal evils is indeed extremely common, but actively committing new or abnormal evils is another matter. People who attain great power are probably disproportionately psychopathic, so I wouldn't generalise from them to the rest of the population -- but even among the powerful, it doesn't seem that 10% are Hitler-like in the sense of going out of their way to commit big new atrocities.
I think 'depending on circumstances' is a pretty important part of your claim. I can easily believe that more than 10% of people...
they’re recognizing the limits of precise measurement
I don't think this explains such a big discrepancy between the nominal speed limits and the speeds people actually drive at. And I don't think that discrepancy is inevitable; to me it seems like a quirk of the USA (and presumably some other countries, but not all). Where I live, we get 2km/h, 3km/h, or 3% leeway depending on the type of camera and the speed limit. Speeding still happens, of course, but our equilibrium is very different from the one described here; basically we take the speed limits literally, and know that we're risking a fine and demerit points on our licence if we choose to ignore them.
My read of this passage --
Moloch is introduced as the answer to a question – C. S. Lewis’ question in Hierarchy Of Philosophers – what does it? Earth could be fair, and all men glad and wise. Instead we have prisons, smokestacks, asylums. What sphinx of cement and aluminum breaks open their skulls and eats up their imagination?
-- is that the reference to "C. S. Lewis’ question in Hierarchy Of Philosophers" is basically just a joke, and the rest of the passage is not really supposed to be a paraphrase of Lewis.
I agree it's all a bit unclear, though. Y...
Looks like Scott was being funny -- he wasn't actually referring to a work by Lewis, but to this comic, which is visible on the archived version of the page he linked to:
Edit: is there a way to keep the inline image, but prevent it from being automatically displayed to front-page browsers? I was trying to be helpful but I feel like I might be doing more to cause annoyance...
Edit again: I've scaled it down, which hopefully solves the main problem. Still keen to hear if there's a way to e.g. manually place a 'read more' break in a comment.
I'm assuming you're talking about our left, because you mentioned 'dark foliage'. If so, that's probably the most obvious part of the cat to me. But I find it much easier to see when I zoom in/enlarge the image, and I think I missed it entirely when I first saw the image (at 1x zoom). I suspect the screen you're viewing it on can also make a difference; for me the ear becomes much more obvious when I turn the brightness up or the contrast down. (I'm tweaking the image rather than my monitor settings, but I reckon the effect is similar.)
Just want to publicly thank MadHatter for quickly following through on the runner-up bounty!
Sorry, I was probably editing that answer while you were reading/replying to it -- but I don't think I changed anything significant.
Definitely worth posting the papers to github or somewhere else convenient, IMO, and preferably linking directly to them. (I know there's a tradeoff here with driving traffic to your Substack, but my instinct is you'll gain more by maximising your chance of retaining and impressing readers than by getting them to temporarily land on your Substack before they've decided whether you're worth reading.)
LWers are definitely n...
I think you need to be more frugal with your weirdness points (and more generally your demanding-trust-and-effort-from-the-reader points), and more mindful of the inferential distance between yourself and your LW readers.
Also remember that for every one surprisingly insightful post by an unfamiliar author, we all come across hundreds that are misguided, mediocre, or nonsensical. So if you don't yet have a strong reputation, many readers will be quick to give up on your posts and quick to dismiss you as a crank or dilettante. It's your job to prove th...
I'm interested in people's opinions on this:
If it's a talking point on Reddit, you might be early.
Of course the claim is technically true; there's >0% chance that you can get ahead of the curve by reading reddit. But is it dramatically less likely than it was, say, 5/10/15 years ago? (I know 'reddit' isn't a monolith; let's say we're ignoring the hyper-mainstream subreddits and the ones that are so small you may as well be in a group chat.)
10. Everyday Razor - If you go from doing a task weekly to daily, you achieve 7 years of output in 1 year. If you apply a 1% compound interest each time, you achieve 54 years of output in 1 year.
What's the intuition behind this -- specifically, why does it make sense to apply compound interest to the daily task-doing but not the weekly?
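For reference, here's how I tried to reconstruct the arithmetic. Under my reading (which the list doesn't spell out), "1% compound interest each time" means each repetition produces 1% more output than the previous one; the quoted 54 only seems to fall out if the weekly schedule is compounded as well:

```python
# Sanity check of the Everyday Razor figures. Assumptions are mine:
# "1% compound interest each time" = each repetition yields 1% more output
# than the previous repetition.
def yearly_output(reps_per_year, growth=0.0):
    return sum((1 + growth) ** k for k in range(reps_per_year))

weekly_flat  = yearly_output(52)          # 52 units
daily_flat   = yearly_output(365)         # 365 units
weekly_grown = yearly_output(52, 0.01)    # ~67.8 units
daily_grown  = yearly_output(365, 0.01)   # ~3678 units

print(daily_flat / weekly_flat)    # ~7.0  -> the "7 years of output"
print(daily_grown / weekly_flat)   # ~70.7 -> compounding the daily task only
print(daily_grown / weekly_grown)  # ~54.3 -> compounding both; matches "54"
```

If that reconstruction is right, the 54 actually compares a compounded daily schedule against a compounded weekly one, which makes the stated framing even more confusing to me.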
I think we're mostly talking past each other, but I would of course agree that if my position contains or implies logical contradictions then that's a problem. Which of my thoughts lead to which logical contradictions?
...That doesn’t mean qualia can be excused and are to be considered real anyway. If we don’t limit ourselves to objective descriptions of the world then anyone can legitimately claim that ghosts exist because they think they’ve seen them, or similarly that gravity waves are transported across space by angels, or that I’m actually an attack helicopter even if I don’t look like one, or any other unfalsifiable claim, including the exact opposite claims, such as that qualia actually don’t exist. You won’t be able to disagree on any grounds except that you just do
That's the thing, though -- qualia are inherently subjective. (Another phrase for them is 'subjective experience'.) We can't tell the difference between qualia and something that doesn't exist, if we limit ourselves to objective descriptions of the world.
a 50%+ chance we all die in the next 100 years if we don't get AGI
I don't think that's what he claimed. He said (emphasis added):
if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela
Which fits with his earlier sentence about various factors that will "impoverish the world and accelerate its decaying institutional quality".
(On the other hand, he did say "I expect the future to be short and grim", not short or grim. So I'm not sure exactly what he was predicting. Perhaps decline -> complete v...
My model of CDT in the Newcomb problem is that the CDT agent:
So, at the moment of decision, it considers the two possible states of the world it could be in (boxes contain $1m and $1k; boxes conta...
green_leaf, please stop interacting with my posts if you're not willing to actually engage. Your 'I checked, it's false' stamp is, again, inaccurate. The statement "if box B contains the million, then two-boxing nets an extra $1k" is true. Do you actually disagree with this?
I don't think that's quite right. At no point is the CDT agent ignoring any evidence, or failing to consider the implications of a hypothetical choice to one-box. It knows that a choice to one-box would provide strong evidence that box B contains the million; it just doesn't care, because if that's the case then two-boxing still nets it an extra $1k. It doesn't merely prefer two-boxing given its current beliefs about the state of the boxes, it prefers two-boxing regardless of its current beliefs about the state of the boxes. (Except, of course, for the belief that their contents will not change.)
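Here's a minimal sketch of the dominance reasoning I'm attributing to the CDT agent (my own illustration, using the standard $1k/$1m payoffs):

```python
# Newcomb payoffs with box contents held fixed at decision time, which is
# exactly the assumption CDT makes. Box A always holds $1k; box B holds $1m
# or nothing, depending on the earlier prediction.
payoffs = {
    ("B has $1m", "one-box"): 1_000_000,
    ("B has $1m", "two-box"): 1_001_000,
    ("B is empty", "one-box"): 0,
    ("B is empty", "two-box"): 1_000,
}

for state in ("B has $1m", "B is empty"):
    extra = payoffs[(state, "two-box")] - payoffs[(state, "one-box")]
    print(f"{state}: two-boxing pays ${extra} more")  # $1000 in both states
```

Whatever the agent believes about which state it's in, the comparison comes out the same way, which is why its credences about box B don't enter into the decision.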
We've had reacts for a couple months now and I'm curious to hear, both from old-timers and new-timers, what people's experience of them was, and how much they shape their expectations/culture/etc.
I received (or at least, noticed receiving) a react for the first time recently, and honestly I found it pretty annoying. It was the 'I checked, it's False' one, which basically feels like a quasi-authoritative, quasi-objective, low effort frowny-face stamp where an actual reply would be much more useful.
Edit: If it was possible to reply directly to the react, and...
green_leaf, what claim are you making with that icon (and, presumably, the downvote & disagree)? Are you saying it's false that, from the perspective of a CDT agent, two-boxing dominates one-boxing? If not, what are you saying I got wrong?
Your 'modified Newcomb's problem' doesn't support the point you're using it to make.
In Newcomb's problem, the timeline is:
prediction is made -> money is put in box(es) -> my decision: take one box or both? -> I get the contents of my chosen box(es)
CDT tells me to two-box because the money is put into the box(es) before I make my decision, meaning that at the time of deciding I have no ability to change their contents.
In your problem, the timeline is:
rules of the game are set -> my decision: play or not? -> if I chose to play, 100x(pred...
Without reading the book we can't be sure. But the trouble is that this claim has been made a million times, and in every previous case the author has turned out to be either ignoring the hard problem, misunderstanding it, or defining it out of existence. So if a longish, very positive review with the title 'x explains consciousness' doesn't provide any evidence that x really is different this time, it's reasonable to think that it very likely isn't.
...The reason these two situations look different is that it's now easy for us to verify that the Earth is flat
Are you suggesting that in the case of the hard problem, there may be some equivalent of the 'flat earth' assumption that the hard-problemists hold so tightly that they can't even comprehend a 'round earth' explanation when it's offered?
Yes. Dualism is deeply appealing because most humans, or at least most of the humans who care about the Hard Problem, seem to experience themselves in dualistic ways (i.e. experience something like the self residing inside the body). So even if it becomes obvious that there's no "consciousness sauce" per se, the argument is tha...
I would have considered fact-checking to be one of the tasks GPT is least suited to, given its tendency to say made-up things just as confidently as true things. (And also because the questions it's most likely to answer correctly will usually be ones we can easily look up by ourselves.)
edit: whichever very-high-karma user just gave this a strong disagreement vote, can you explain why? (Just as you voted, I was editing in the sentence 'Am I missing something about GPT-4?')
e.g. Eliezer would put way less than 10% on fish feeling pain in a morally relevant way
Semi-tangent: setting aside the 'morally relevant way' part, has Eliezer ever actually made the case for his beliefs about (the absence of) qualia in various animals? The impression I've got is that he expresses quite high confidence, but sadly the margin is always too narrow to contain the proof.
- What about AI researchers? How many of them do you think you could persuade?
If they were motivated to get it right and we weren't in a huge rush, close to 100%. Current-gen LLMs are amazingly good compared to what we had a few years ago, but (unless the cutting edge ones are much better than I realise) they would still be easily unmasked by a motivated expert. So I shouldn't need to employ a clever strategy of my own -- just pass the humanity tests set by the expert.
- How many random participants do you believe you could convince that you are not an AI?
This ...
what's the point of imagining a hypothetical set of physical laws that lack internal coherence?
I don't think they lack internal coherence; you haven't identified a contradiction in them. But one point of imagining them is to highlight the conceptual distinction between, on the one hand, all of the (in principle) externally observable features or signs of consciousness, and, on the other hand, qualia. The fact that we can imagine these coming completely apart, and that the only 'contradiction' in the idea of zombie world is that it seems weird and unlikely,...
After a while, you are effectively learning the real skills in the simulation, whether or not that was the intention.
Why the real skills, rather than whatever is at the intersection of 'feasible' and 'fun/addictive'? Even if the consumer wants realism (or thinks that they do), they are unlikely to be great at distinguishing real realism from fantasy realism.
FWIW, the two main online chess sites forbid the use of engines in correspondence games. But both do allow the use of opening databases.
(https://www.chess.com/terms/correspondence-chess#problems, https://lichess.org/faq#correspondence)
I agree that your model is clearer and probably more useful than any libertarian model I'm aware of (with the possible exception, when it comes to clarity, of some simple models that are technically libertarian but not very interesting).
Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen?
Something like that. The SEP says "For most newcomers to the problem of free will, it will seem obvious that an action is up to an agent only i...
Why do you think LFW is real?
I'm not saying it's real -- just that I'm not convinced it's incoherent or impossible.
And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain
This might get me thrown into LW jail for posting under the influence of mysterianism, but:
I'm not convinced that there can't be a third option alongside ordinary physical determinism and mere randomness. There's a gaping hole in our (otherwise amazingly successful and seemingly on the way to being comprehensive) physical pictur...
IMO it's unclear what kind of person would be influenced by this. It requires the reader to a) be amenable to arguments based on quantitative probabilistic reasoning, but also b) overlook or be unbothered by the non sequitur at the beginning of the letter. (It's obviously possible for the appropriate ratio of spending on causes A and B not to match the magnitude of the risks addressed by A and B.)
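To spell out why it's a non sequitur, here's a toy allocation example (all numbers are mine and purely illustrative): the sensible split of spending depends on how much each marginal dollar reduces each risk, not on how the risks' magnitudes compare.

```python
# Illustrative only: cause A addresses a risk 10x larger than cause B, but a
# marginal dollar on B buys 100x more risk reduction, so a reasonable budget
# can still heavily favour the "smaller" cause.
risk_size            = {"A": 0.20,  "B": 0.02}   # probability of catastrophe
reduction_per_dollar = {"A": 1e-12, "B": 1e-10}  # absolute risk reduced per $

budget = 1_000_000
for cause in ("A", "B"):
    reduced = reduction_per_dollar[cause] * budget
    print(f"{cause}: risk {risk_size[cause]:.2f}, "
          f"reduction bought by ${budget:,}: {reduced:.6f}")
# A: risk 0.20, reduction bought by $1,000,000: 0.000001
# B: risk 0.02, reduction bought by $1,000,000: 0.000100
```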
I also don't understand where the numbers come from in this sentence: