All of AVoropaev's Comments + Replies

What's the source of that letter from 505 employees? I mean, the contents aren't too crazy, but isn't it strange that the only thing we have is a screenshot of the first page?

3Robert_AIZI
It was covered by Axios, who also link to it as a separate pdf with all 505 signatories.
2Zvi
Initially I saw it from Kara Swisher (~1mm views), then I saw it from a BB employee. I presume it is genuine.

Re: TikTok viral videos. I think that the cliff is simply because recent videos have had too little time to be watched 10m times. The second graph in the article is not the same plot for 0.1m views, but average views per week (among videos with >0.1m views), which stays stable.

I don't understand the point of questions 1 and 3.

If we forget about the details of how the model works, question 1 essentially checks whether the entity in question has a good enough RNG. Which doesn't seem to be particularly relevant? A human with a vocabulary and random.org can do that. AutoGPT with access to a vocabulary and random.org also has a rather good shot. A superintelligence that for some reason decides not to use an RNG and to answer deterministically will fail. I suppose it would be very interesting to learn that say GPT-6 can do it without an external rng, ... (read more)

1Jacob Pfau
Maybe I should've emphasized this more, but I think the relevant part of my post to think about is where I say: "Another way of putting this is that to achieve low loss, an LM must learn to output high entropy in cases of uncertainty." Separately, LMs learn to follow instructions during fine-tuning. I propose measuring an LM's ability to follow instructions in cases where instruction-following requires deviating from that 'high-entropy under uncertainty' learned rule. In particular, in the cases discussed, rule following further involves using situational information. Hopefully this clarifies the post to you. Separately, insofar as the proposed capability evals have to do with RNG, the relevant RNG mechanism has already been learned, cf. the Anthropic paper section of my post (though TBF I don't remember if the Anthropic paper is talking about p_theta in terms of logits or corpus-wide statistics; regardless, I've seen similar experiments succeed with logits). I don't think this test is particularly meaningful for humans, and so my guess is thinking about answering some version of my questions yourself probably just adds confusion? My proposed questions are designed to depend crucially on situational facts about an LM. There are no immediately analogous situational facts about humans. Though it's likely possible to design a similar-in-spirit test for humans, that would be its own post.
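(To make the 'high entropy under uncertainty' point concrete, here is a minimal sketch -- my own toy illustration, not code from the post -- of measuring the entropy of a model's next-token distribution from its logits:)

```python
import numpy as np

# Toy illustration (not from the post): entropy of a next-token
# distribution recovered from logits via softmax. An "output a random
# word" instruction asks the model to realize a high-entropy
# distribution; a deterministic answer corresponds to near-zero entropy.
def next_token_entropy(logits: np.ndarray) -> float:
    p = np.exp(logits - logits.max())  # numerically stable softmax
    p /= p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

print(next_token_entropy(np.array([5.0, 0.1, 0.1])))  # peaked -> low entropy
print(next_token_entropy(np.zeros(3)))                # uniform -> log 3 ~ 1.10
```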

Yeah, you are right. It seems that it was actually one of the harder ones I tried. This particular problem was solved by 4 of 28 members of a relatively strong group. I distinctly remember also trying some easy problems from a relatively weak group, but I don't have notes and Bing doesn't save chats.

I guess I should just try again, especially in light of gwillen's comment. (By the way, if somebody with access to the actual GPT-4 is willing to help me with testing it on some math problems, I'd really appreciate it.)

That would explain a lot. I've heard this rumor, but when I tried to trace the source, I couldn't find anything better than guesses. So I dismissed it, but maybe I shouldn't have. Do you have a better source?

I agree that there are some impressive improvements from GPT-3 to GPT-4. But they seem to me a lot less impressive than the jump from GPT-2 producing barely coherent texts to GPT-3 (somewhat) figuring out how to play chess.

I disagree with your take on LLMs' math abilities. Wolfram Alpha helps with tasks like the SAT -- and GPT-4 is doing well enough on them. But for some reason it (at least in the incarnation of Bing) has trouble with simple logic puzzles like the one I mentioned in another comment.

Can you tell me more about the success with theoretical physics concepts? I don't think I've seen anybody try that.

3Portia
Not coherently, no. My girlfriend is a theoretical physics and theory of machine learning prof; my understanding of her work is extremely fuzzy. But she was stuck on something where I was being a rubber ducky, which is tricky insofar as I barely understand what she does, and I proposed talking to ChatGPT. She basically entered the problem she was stuck on (her suspicion that two different things were related somehow, though she couldn't quite pinpoint how). It took some tweaking - at first, it was super superficial, giving an explanation more suited for Wikipedia or school homework than getting to the actual science; she needed to push it over and over to finally get equations and not just superficial, unconnected explanations. And at the time, the internet plugin was not out, so the lack of access to recent papers was a problem. But she said eventually it spat out some accurate equations (though also the occasional total nonsense), made a bunch of connections between concepts that were accurate (though it could not always correctly identify why), and made some proposals for connections that she at least found promising. She was very intrigued by its ability to spot those connections; in some ways, it seemed to replicate the intuition an advanced physicist eventually obtains. She compared the experience to talking to an A-star Bachelor student who has memorised all the concepts and is very well read, but if you start prodding, often has not truly understood them; and yet suddenly makes some connections that should be vastly beyond them, though unable to properly explain why. She still found it helpful and interesting. - I am still under the impression it does much worse in this area than in e.g. biology or computer science. With the logic puzzle, the GPT-4 technical report also seems confused about that. They had some logic puzzles which ChatGPT failed at, and where it got worse and worse with each iteration, only to suddenly learn them with no warning. I h

I didn't say "it's worse than a 12-year-old at any math task". I meant nonstandard problems. Perhaps that's the wrong English terminology? Sort of easy olympiad problems?

The actual test that I performed was "take several easy problems from a math circle for 12-year-olds and try various 'let's think step-by-step' prompts to make Bing write solutions".

Example of such a problem:

Between 20 poles, several ropes are stretched (each rope connects two different poles; there is no more than one rope between any two poles). It is known that at least 15 ropes are attached to each pole. The poles are divided into groups so that each rope connects poles from different groups. Prove that there are at least four groups.
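(For reference, a solution sketch -- mine, not from the thread -- via a pigeonhole argument:)

```latex
% Solution sketch (mine, not from the thread): pigeonhole on group sizes.
Suppose the poles were split into at most $3$ groups. Then some group $G$
contains at least $\lceil 20/3 \rceil = 7$ poles. Every rope from a pole
$p \in G$ leads to a pole outside $G$, and there are at most $20 - 7 = 13$
such poles, with at most one rope per pair of poles. Hence $p$ has at most
$13$ ropes, contradicting the assumption that at least $15$ ropes are
attached to every pole. Therefore there are at least $4$ groups. $\square$
```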

6ChristianKl
Most 12-year-olds are not going to be able to solve that problem. 
3gwillen
It's extremely important in discussions like this to be sure of what model you're talking to. Last I heard, Bing in the default "balanced" mode had been switched to GPT-3.5, presumably as a cost-saving measure.

Two questions about the capabilities of GPT-4.

  1. The jump in capabilities from GPT-3 to GPT-4 seems much less impressive than the jump from GPT-2 to GPT-3. Part of that is likely because later versions of GPT-3 were noticeably smarter than the first ones, but that reason doesn't seem sufficient to me. So what's up? Should I expect that GPT-4 -> GPT-5 will be barely noticeable?
  2. In particular I am rather surprised at the apparent lack of ability to solve nonstandard math problems. I didn't expect it to beat the IMO, but I did expect that problems for 12-year-olds would be
... (read more)
1Portia
For 30% of tasks, users actually prefer 3 over 4. For many tasks, the output will barely vary. Yet there are some where the output changed drastically and for the better. If you aren't noticing it, these were not your areas of focus. A lot of it concerns things like psychological bias and deception, tricks children fall for and adults spot. Also spatial reasoning, visual reasoning. LLMs are terrible at math. Not because it is harder, but because the principles are different, and machine learning is a shitty way to learn them. The very thing that makes them good at poetry makes them suck at math. They can't even count the words in a text accurately. This will likely not improve that much from improving LLMs themselves - the solution is external plug-ins, e.g. into Wolfram Alpha, which are already being done. My girlfriend had moderate success getting it to work on theoretical physics concepts, after extensive prompting for being more technical, and guiding it through steps. If you like math, that might be more interesting for you.
8ChristianKl
GPT-4 scored 700/800 on the SAT math test. I don't think a 12-year-old gets such a score.
4the gears to ascension
well, the fact that I don't have an answer ready is itself a significant component of an answer to my question, isn't it? A friend on an alignment chat said something to the effect of: And so I figured I'd come here and ask about it. This eval seems super shallow, only checking if the model is, on its own, trying to destroy the world. It seems rather uncreative - it barely touched on any of the jailbreaks or ways to pressure or trick the model into misbehaving.

Can it in some way describe itself? Something like "a picture of DALL-E 2".

#2: My impression is that something like 2%-10% of the Ukrainian population believed that a month ago (would you consider that worrying enough?). My evidence for that is very shaky and it is indeed quite possible that I am overestimating it by an order of magnitude (still kind of worrying, though I might be overestimating even more).

First, my aunt is among them. Second, over the last few years I've seen multiple (something like 5-10, concentrated around the present date?) discussions on social media where friends of friends (all Russians) said that they believe in na... (read more)

Yes, I think that it is the most likely scenario. Still, it bothered me enough that I mentioned it -- I consider such an omission 2-3 times more likely in a world where there are other important (intentional) omissions that I haven't noticed than in a world where he is honest.

I still think that reading Galeev is worth it and that he is a trustworthy enough source. But if, for example, he makes a thread on the modern Russian opposition that doesn't mention Navalny, it'll be a huge red flag for me.

2kjz
Galeev mentions Navalny in his newest thread about power dynamics and how they might change in response to the current crisis. It's a long thread so you'll need to scroll down quite a bit to see the section on Navalny. Galeev doesn't portray him in a very positive manner.

To clarify: this site contains very effective propaganda, which makes it a cognitohazard. You are likely underestimating its danger. It is not "just a bunch of fake statements". It is "a bunch of statements optimized for inflicting particular effects on its readers". Such "particular effects" are not limited to believing what the news says. In fact, news sources regularly contradict what they said a few months ago even in peacetime, so believing what they are literally saying is probably not the point.

Before reading propaganda, consider that such materials:

1) Convinc... (read more)

4Zvi
Any old nonsense will convince a non-zero number of people, but I don't see any evidence for #2 for people living in areas Russia doesn't physically control in numbers that should worry us? The UA rate of believing in Nazi rule seems much lower than e.g. the USA rate of believing in Nazi rule at home here, which seems importantly non-zero. (Also worth noting the word Nazi in this context means something importantly distinct, although still quite terrible.)  On #3 I would very much expect the opposite. People at LW are very good vs. such tactics in general, and are high-information, and have access to Western sources, and this stuff is optimized to appeal to people in the former USSR. Consider how effective Russian efforts have been in the West in general. And we're going in knowing it is what it is, which is highly protective.  Strangely, this site seems like it's an attempt to be a sane Russian-slanted source that is being careful not to say obviously false things, as opposed to most Russian sources, which are not doing that - and I expect that most people who believe the RU line are coming from the other kind of news source. Many of the front-page news items match exactly my Western sources, without even an attempt to spin the information, and are indeed newsworthy developments.
3Viliam
Yes. I am repeatedly surprised how some people, even those otherwise quite intelligent, just don't seem to realize (or perhaps they just don't mind) that they keep believing sources that contradict what they said yesterday. I guess some people just don't bother building a consistent model of reality, and they live fully at Simulacrum level 3. I recently listened to one of Putin's speeches which pro-Putin people have praised. And on one hand, I was like: "Yeah, what he says is internally consistent, and I feel like I can empathise with what he is trying to convey." But on the other hand, I also noticed that there were just too many things that contradict what I currently believe about the world, so either pretty much everything I believe is wrong, or he is simply lying. And, well, the probability is never 0 or 1, but the priors on "a politician is lying" are not that low. That is, definitely a cognitohazard. An average person would probably accept 50% of it. A person who tries to be consistent will either resist it... or with a tiny probability switch to a new gestalt. I mean... of course, if anyone desires to spend their time reading that shit, I can't stop them anyway. I just wish in general that there would be less sharing of such links. There is this theory that freedom of speech is always good, because exposure to evil memes builds your immune system. But, analogously to the actual immune system, sometimes it makes you stronger, and sometimes it (mind-)kills you. In real life, we do not actually keep ourselves healthy by exposing ourselves to every possible toxin. Hygiene actually increases our lifespan. Also, the theory assumes that there is a marketplace of ideas, where people meet both good and bad ideas, and they compete. But currently we have clickbait-powered online bubbles, where people exposed to some memes just isolate themselves from information that opposes those memes. (Which, I am quite aware of the irony, is how they would describe me. Becau

As a Russian I confirm that everything that Galeev says seems legit. I haven't been following our politics that much, but Galeev's model of Putin fits my observations.

The only thing that looked a little suspicious to me was the thread on Russian parliamentarism -- there was an opportunity to say something about Navalny's team there (e.g. as a central example of a party that can't be registered, or something about them organizing protests), and I expected that he would mention it, but he didn't. In fact, I don't think he has ever mentioned Navalny in any of his threads. Why?

2Zvi
Interesting question. My guess is he doesn't consider it important? 

I think that if LessWrong wants to be less wrong, then questions like "why do you believe that?" should not be downvoted.

As for the question itself, I know next to nothing about the situation at this NPP, but just from priors I'd give 70% that if someone shelled it, it was the Russian army.

1) It is easier to shoot at an NPP if you don't know what you're shooting at. The Russian army is much more likely to mistake this target for something else.

2) p(Russian government lies that it wasn't them | it was them) > p(Ukrainian government lies that it wasn't them | it was them) ... (read more)

Yeah, doesn't seem to be true. There is this law, and a general attitude of treating posts on vk/facebook as mass media -- but it is 'just' 3 years or a huge fine, and it is rarely enforced (yet). (There might be some other relevant laws that I don't know about, but I would be very surprised (and concerned) if they involved 10-year prison terms.) It might be wise to take some minimal precautions though -- like making all posts that are not meant to be read by tovarishch major "friends only".

2AVoropaev
Update: the Prosecutor General's Office says that protest will be treated as "participation in a radical group", which carries up to 6 years. Probably won't be used too massively, at least initially.

Thank you for treating it as a "today's lucky 10,000" event. I am aware of quines (though not much more than just 'aware'), and what I am worried about is whether the people that created FairBot were careful enough.

1TLW
I haven't spotted any inherent contradictions in the FairBot variants thus far. That being said, now that I look at it there's no guarantee that said FairBot variants will ever halt... in which case they aren't proper agents, in much the same way that while (true) {} isn't a proper agent.

"Definition" was probably a wrong word to use. Since we are talking in the context of provability, I meant "a short string of text that replaces a longer string of text for ease of human reader, but is parsed as a longer string of text when you actually work with it". Impredicative definitions are indeed quite common, but they go hand in hand with proofs of their consistency, like proof that a functional equation have a solution, or example of a group to prove that group axioms are consistent, or more generally a model of some axiom system.

Sadly I am not f... (read more)

It's been ages since I studied provability logic, but those bots look suspicious to me. Has anybody actually formalized them? Like, the definition of FairBot involves itself, so it is not actually a definition. Is it a statement that we consider true? Is it just a statement that we consider provable? Why wouldn't adding something like this to GL result in a contradiction?

3TLW
Ah, you are part of today's Lucky 10,000. Quines are a thing. It is possible to write a program that does arbitrary computation on its own source code, by doing variants of the above. (That being said, you have to be careful to avoid running into Halting-problem variants.)
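(For instance, here is a minimal Python sketch of the trick -- a standard quine variant, offered only as an illustration: a program that reconstructs its own source and computes on it.)

```python
# The string s is a template for a three-line program; "s % s" rebuilds
# that program's source text exactly, which it can then compute on.
s = 's = %r\nsrc = s %% s\nprint(len(src))'
src = s % s      # src is the exact source of the program that s encodes
print(len(src))  # arbitrary computation on that source (here: its length)
```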
2Adele Lopez
Yes, you can play with them yourself: https://github.com/machine-intelligence/provability Impredicative definitions are quite common in mathematics.
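(To see why the formalization question is interesting, here is a deliberately naive Python caricature -- mutual simulation with a step budget, not the provability-logic construction in the linked repo:)

```python
# Each bot maps (opponent, budget) to "C" or "D". This naive FairBot
# cooperates iff its simulation of the opponent (run against FairBot with
# a smaller budget) cooperates; it defects when the budget runs out,
# which is the crude fix for nontermination.
def cooperate_bot(opponent, budget):
    return "C"

def defect_bot(opponent, budget):
    return "D"

def fairbot(opponent, budget):
    if budget == 0:
        return "D"
    return "C" if opponent(fairbot, budget - 1) == "C" else "D"

print(fairbot(cooperate_bot, 3))  # C
print(fairbot(defect_bot, 3))     # D
print(fairbot(fairbot, 3))        # D -- the naive version defects against
                                  # itself once the budget bottoms out; the
                                  # provability-logic FairBot achieves mutual
                                  # cooperation here via Lob's theorem, which
                                  # is exactly what the formalization buys.
```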

I'm no programmer, so I have no comment on the "how to develop" part. The "safe" part seems extremely unsafe to me though.

1) Your strategy relies on the human supervisor's ability to recognize a threat that is disguised by a superintelligence, which is doomed to failure almost by definition.

2) The supervisor himself is not protected from possible threats. He is also one of the main targets that the AI would want to affect.

3) >Moreover, the artificial agent won’t be able to change the operational system of the computer, its own code or any offline task that could fundament... (read more)

I'm trying to see what makes those numbers so implausible, and as far as I understand (at least without looking into regional data) the most surprising/suspicious thing is that the number of new Delta cases is dropping too fast.

But why shouldn't it be dropping fast? The odds of people getting Omicron (as opposed to Delta) are growing fast enough -- if we assume that they are (# of Omicron cases)/(# of Delta cases) * (some coefficient like their relative R_0), then due to Omicron's fast doubling they can go from 1:2 to 4:1 in just a week. That will make new Delta c... (read more)
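(A back-of-the-envelope sketch of that speed, with illustrative numbers of my own -- e.g. a ~2.5-day doubling time for Omicron against roughly flat Delta incidence:)

```python
# If Delta incidence is roughly flat while Omicron doubles every ~2.5
# days, the Omicron:Delta odds themselves double every ~2.5 days.
doubling_time_days = 2.5   # illustrative assumption, not a measured value
odds = 0.5                 # starting odds of 1:2 (Omicron:Delta)
growth = 2 ** (7 / doubling_time_days)  # one week of growth, ~7x
print(odds * growth)       # ~3.5, i.e. roughly 4:1 within a week
```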

I don't see why all possible ways for an AGI to critically fail to do what we built it for must involve taking over the lightcone.

That doesn't stop other people from building one.

So let's also blow up the Earth. By that definition, alignment would be solved.

When you say "create an AGI which doesn't do this", do you mean one that has about a 0% probability of doing it, or one that has less than a 100% probability of doing it?

Edit: my impression was that the point of alignment was producing an AGI that has a high probability of good outcomes and a low probability of bad outcomes. Creating an AGI that simply has a low probability of destroying the universe seems to be trivial. Take a hypothetical AGI before it has produced output, toss a coin, and if it's tails then destroy it. Voila, the probability of destroying the un... (read more)

1JBlack
I don't see how your scenario addresses the statement "Taking over the lightcone is the default behavior". Yes, it's obvious that you can build an AGI and then destroy it before you turn it on. You can also choose to just not build one at all with no coin flip. There's also the objection that if you destroy it before you turn it on, have you really created an AGI, or just something that potentially might have been an AGI? It also doesn't stop other people from building one. If theirs destroys all human value in the future lightcone by default, then you still have just as big a problem.

To teach a million kids you need something like a hundred thousand teachers from Dath Ilan. They don't currently exist.

It can be circumvented by first teaching, say, a hundred students, 10% of whom become teachers and help teach the new 'generation'. If each 'generation' takes 5 years, and one teacher can teach 10 students in one generation, the number of teachers will be multiplied by 2 every 5 years, and you'll get a million Dath Ilanians in something like 50 years.
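(A quick sketch of that compounding, with a hypothetical seed cohort of my own choosing; the exact horizon depends heavily on the seed size:)

```python
# Assumptions from the comment: a 'generation' is 5 years, each teacher
# teaches 10 students per generation, and 10% of students become
# teachers, so the teacher pool doubles every generation.
teachers, trained = 10, 0   # hypothetical seed: 10 teachers, nobody trained yet
year = 0
while trained < 1_000_000:
    students = teachers * 10       # each teacher takes 10 students
    trained += students
    teachers += students // 10     # 10% of students become teachers
    year += 5
print(year, teachers, trained)     # ~70 years with this small seed; a larger
                                   # seed cohort shortens the horizon
```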

One teacher teaching 10 students and 1 of them becoming a teacher might be more possible than it seems. For exampl... (read more)

How would having an AGI that has a 50% chance to obliterate the lightcone, a 40% chance to obliterate just the Earth, and a 10% chance to correctly produce 1,000,000 paperclips without casualties solve alignment?

5Vivek Hebbar
Taking over the lightcone is the default behavior.  If you can create an AGI which doesn't do this, you've already figured out how to put some constraint on its activities.  Notably, not destroying the lightcone implies that the AGI doesn't create other AGIs which go off and destroy the lightcone.

I think that since the lady also said something about a pharmacy, it's more likely that "lusi" = "Luke's".

It's not about relative age (either as in the age of one person divided by the age of another, or one age subtracted from another); it's about their month of birth. So it's evidence for the relevance of the amount of sunshine received during pregnancy, the relevance of the age of being admitted to school, and the relevance of astrology.

Since it seems to somewhat align with different kinds of education starting at different times of the year, my personal bet is on schools, though I wouldn't completely discount differences between pregnancies at different times of the year (sorry, astrology, but I need a lot more evidence to seriously consider you).

1AnthonyC
Did they compare regions with different cut-off dates? For example, in NY it's in December, and in MA it's in September. Actually looks like it's 12/1 in NYC and 12/31 on Long Island, too. Those would help distinguish age vs sunlight or other seasonal effects on development.

But that's a fix to a global problem that you won't fix anyway. What you can do is allocate some resources to fixing the lesser problem of "this guy had nothing to eat today".

It seems to me that your argument proves too much -- when faced with a problem that you can fix, you can always say "it is a part of a bigger problem that I can't fix" and do nothing.

What do you mean by 'real fix' here? What if I said that the real-real fix requires changing human nature and materializing food and other goods out of nowhere? That might be a more effective fix, but it is unlikely to happen in the near future and it is unclear how you could make it happen. Donating money now might be less effective, but it is something that you can actually do.

1[anonymous]
A real fix is forcing everyone in a large area to contribute to fixing a problem. If enough people can't be compelled to contribute, the problem can't be fixed. Doing something that costs you resources but doesn't fix the problem, and negatively affects you vs. others who aren't contributing but are competing with you, isn't a viable option. In the prisoner's dilemma you may preach 'always cooperate', but you have to defect if your counterparty won't play fair. Similarly, Warren Buffett can preach that billionaires should pay more taxes but not pay any extra voluntarily until all billionaires have to.

Detailed categorizations of mental phenomena sound useful. Is there a way for me to learn that without reading religious texts?

1frcassarino
The Qualia Research Institute is working on building a catalogue of qualia, IIRC.

How can you check a proof of any interesting statement about the real world using only math? The best you can do is check for mathematical mistakes.

3[anonymous]
"what do they claim to know and how do they know it" No amount of credentials or formal experience makes an expert not wrong if they do not have high quality evidence, that they have shown, to get their conclusions from. And an algorithm formally proven to be correct that they show they are using. Or in the challenge trials : ethicist claims to value human life. A challenge trial only risks the lives of a few people, where even if they die it would have saved hundreds of thousands. In this case the " basic math" is one of multiplication and quantities, showing the "experts" don't know anything. As you might notice, ethicists do not have high quality information as input to generate their conclusions from. Without that information you cannot expect more than expensive bullshitting. "Ethics" today is practiced by reading ancient texts and more modern arguments, many of which have cousins with religion. But ethics is not philosophy. It is actually a math problem. Ultimately, there are things you claim to value ("terminal values"). There are actions you can consider doing. Some actions have an expected value that with a greater score on the things you care about, and some actions have a lesser expected value. Any other action but taking the one with the highest expected value (factoring in variance), is UNETHICAL. Yes, professional ethicists today are probably mostly all liars and charlatans, no more qualified than a water douser. I think EY worked down to this conclusion in a sequence but this is the simple answer. One general rule of thumb if you didn't read the above: if an expert claims to know what they are doing, look at the evidence they are using. I don't know the anatomy of the human body enough to gainsay an orthopedic surgeon, but I'm going to trust the one that actually looks at a CT scan over one that palpates my broken limb and reads from some 50 year old book. Doesn't matter if the second one went to the most credible medical school and has 50 year

I assume you mean that I assume P(money in B_i | buyer chooses B_i) = 0.25? Yes, I assume this, although really I assume that the seller's prediction is accurate with probability 0.75 and that she fills the boxes according to the specified procedure. From this, it then follows that P(money in B_i | buyer chooses B_i) = 0.25.

Yes, you are right. Sorry.

Why would it be a logical contradiction? Do you think Newcomb's problem also requires a logical contradiction?

Okay, it probably isn't a contradiction, because the situation "Buyer writes his decision and it is common... (read more)

1Caspar Oesterheld
Sorry for taking some time to reply! >You might wonder why am I spouting a bunch of wrong things in an unsuccessful attempt to attack your paper. Nah, I'm a frequent spouter of wrong things myself, so I'm not too surprised when other people make errors, especially when the stakes are low, etc. Re 1,2: I guess a lot of this comes down to convention. People have found that one can productively discuss these things without always giving the formal models (in part because people in the field know how to translate everything into formal models). That said, if you want mathematical models of CDT and Newcomb-like decision problems, you can check the Savage or Jeffrey-Bolker formalizations. See, for example, the first few chapters of Arif Ahmed's book, "Evidence, Decision and Causality". Similarly, people in decision theory (and game theory) usually don't specify what is common knowledge, because usually it is assumed (implicitly) that the entire problem description is common knowledge / known to the agent (Buyer). (Since this is decision and not game theory, it's not quite clear what "common knowledge" means. But presumably to achieve 75% accuracy on the prediction, the seller needs to know that the buyer understands the problem...) 3: Yeah, *there exist* agent models under which everything becomes inconsistent, though IMO this just shows these agent models to be unimplementable. For example, take the problem description from my previous reply (where Seller just runs an exact copy of Buyer's source code). Now assume that Buyer knows his source code and is logically omniscient. Then Buyer knows what his source code chooses and therefore knows the option that Seller is 75% likely to predict. So he will take the other option. But of course, this is a contradiction. As you'll know, this is a pretty typical logical paradox of self-reference. But to me it just says that this logical omniscience assumption about the buyer is implausible and that we should consider agents who
AVoropaev

I've skimmed over the beginning of your paper, and I think there might be several problems with it.
 

  1. I don't see where it is explicitly stated, but I think the information "seller's prediction is accurate with probability 0.75" is supposed to be common knowledge. Is it even possible for a non-trivial probabilistic prediction to be common knowledge? Like, not as in some real-life situation, but as in this condition not being a logical contradiction? I am not a specialist on this subject, but it looks like a logical contradiction. And you can prove absolutel
... (read more)
4Caspar Oesterheld
>I think information "seller's prediction is accurate with probability 0.75" is supposed to be common knowledge. Yes, correct! >Is it even possible for a non-trivial probabilistic prediction to be common knowledge? Like, not as in some real-life situation, but as in this condition not being a logical contradiction? I am not a specialist on this subject, but it looks like a logical contradiction. And you can prove absolutely anything if your premise contains a contradiction. Why would it be a logical contradiction? Do you think Newcomb's problem also requires a logical contradiction? Note that in neither of these cases does the predictor tell the agent the result of a prediction about the agent. >What kinds of mistakes does the seller make? For the purpose of the paper it doesn't really matter what beliefs anyone has about how the errors are distributed. But you could imagine that the buyer is some piece of computer code and that the seller has an identical copy of that code. To make a prediction, the seller runs the code. Then she flips a coin twice. If the coin does not come up Tails twice, she just uses that prediction and fills the boxes accordingly. If the coin does come up Tails twice, she uses a third coin flip to determine whether to (falsely) predict one of the two other options that the agent can choose from. And then you get the 0.75, 0.125, 0.125 distribution you describe. And you could assume that this is common knowledge. Of course, for the exact CDT expected utilities, it does matter how the errors are distributed. If the errors are primarily "None" predictions, then the boxes should be expected to contain more money and the CDT expected utilities of buying will be higher. But for the exploitation scheme, it's enough to show that the CDT expected utilities of buying are strictly positive. >When you write "$1 − P(money in B_i | buyer chooses B_i) · $3 = $1 − 0.25 · $3 = $0.25", you assume that P(money in B_i | buyer chooses B_i) = 0.75. I assume you mean
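(A Monte Carlo sketch of this prediction procedure -- my reading of the comment's description, not code from the paper; the box names and the fill rule are assumptions:)

```python
import random

OPTIONS = ["B1", "B2", "None"]  # buy box 1, buy box 2, or buy nothing

def seller_predict(true_choice):
    # 75% accurate; otherwise one of the two other options, 12.5% each,
    # matching the 0.75/0.125/0.125 distribution described above.
    if random.random() < 0.75:
        return true_choice
    return random.choice([o for o in OPTIONS if o != true_choice])

def buyer_payoff(choice):
    prediction = seller_predict(choice)
    if choice == "None":
        return 0.0
    # Assumed fill rule: $3 goes into each box the seller predicts the
    # buyer will NOT take, so P(money in B_i | choose B_i) = 0.25.
    return -1.0 + (3.0 if prediction != choice else 0.0)

n = 1_000_000
print(sum(buyer_payoff("B1") for _ in range(n)) / n)  # ~ -0.25 per purchase
```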

I've skimmed over A Technical Explanation of Technical Explanation (you can make links and do other stuff by selecting the text you want to edit (as if you want to copy it); if your browser is compatible, a toolbar should appear). I think that's the first time in my life that I've found out that I need to know more math to understand a non-mathematical text. The text is not about Bayes' Theorem, but it is about the application of probability theory to reasoning, which is relevant to my question. As far as I understand, Yudkowsky writes about the same algorithm that... (read more)

That's interesting. I've heard about probabilistic modal logics, but I didn't know that not only are logicians working toward statistics, but also vice versa. Is there some book or video course accessible to mathematical undergraduates?

This formula is not Bayes' Theorem, but it is a similar simple formula from probability theory, so I'm still interested in how you can use it in daily life.

Writing P(x|D) implies that x and D are the same kind of object (data about some physical process?), and there are probably a lot of subtle problems in defining a hypothesis as a "set of things that happen if it is true" (especially if you want to have hypotheses that involve probabilities).

Use of this formula allows you to update the probabilities you assign to hypotheses, but it is no... (read more)

2Vladimir_Nesov
Here's an example of applying the formula (to a puzzle).

The above formula -- P(H1|D)/P(H2|D) = [P(D|H1)/P(D|H2)] · [P(H1)/P(H2)] -- is usually called the "odds form of Bayes' formula". We get the standard form by letting H2 be a tautology in the odds form (so that P(D|H2) = P(D) and the H2 terms drop out), and we get the odds form from the standard form by dividing it by itself for two hypotheses (P(D) cancels out).

The serious problem with the standard form of Bayes is the P(D) term, which is usually hard to estimate (as we don't get to choose what D is). We can try to get rid of it by expanding P(D) = P(D|H) · P(H) + P(D|¬H) · P(¬H), but that's also no good, because now we need to know P(D|¬H). One way to state the problem... (read more)
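(A worked toy example of the odds form -- the numbers are mine, chosen only to illustrate that P(D) never needs to be estimated:)

```python
# Odds form of Bayes: posterior odds = likelihood ratio * prior odds.
# Toy setup: H1 = "coin is double-headed", H2 = "coin is fair",
# data D = "three heads in a row". Note that P(D) never appears.
prior_odds = 0.01 / 0.99               # P(H1) / P(H2)
likelihood_ratio = 1.0 / 0.5 ** 3      # P(D|H1) / P(D|H2) = 8
posterior_odds = likelihood_ratio * prior_odds
# Since H2 = not-H1 here, odds convert back to a probability:
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_odds, posterior_prob)  # ~0.081, ~0.075
```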