Is that table representative of the data? If so, it is a very poor dataset. Most of those questions look very in-group; for those, forecasting 0.5 is accurate, since anyone outside that bubble has no idea of the answer.

I wonder how different it would be if you filtered out every question containing a first-person pronoun, or mentioning anyone who was not Wikipedia-notable as of the cutoff date.

Perhaps it does well in politics and sports because those are the only general-knowledge categories with a decent number of questions to evaluate (per the y-scale in the per-category graphs). Though finance appears to contradict that, since it has a similar number of questions and a similar uncertainty score.

It appears you only show uncertainty relative to its own predictions, not whether the Manifold data showed the question was uncertain even to Manifold users.

I also would've expected to see some evidence of that being a good prompt, rather than leaving it open whether the entire outcome is an artifact of the prompt given.

I'm confused why you don't expect some other Republican candidate to do it. Have you not paid attention to Gov. DeSantis's actions in Florida? https://www.sltrib.com/opinion/commentary/2023/05/05/commentary-is-ron-desantis-fascist/

I'm not familiar with Nikki Haley, but this article seems to indicate she is at least far right: https://www.newstatesman.com/quickfire/2023/02/nikki-haley-is-extremist-moderates-clothing-donald-trump

Mike Pence risked his life to oppose Trump's January 6th coup attempt, so even though he is a Christian evangelical Dominionist, and I vehemently disagree with him on policy, I'm going to count him as pro-democracy. I also couldn't easily find any clear point-by-point evidence that he's a fascist, separate from Trump. Mostly stuff like this, which calls him one but never backs it up: https://www.google.com/amp/s/www.thenation.com/article/politics/mike-pence-gridiron-january-6th/tnamp/

So out of the 4 people Politico considers contenders for the Republican nomination, 3 are far right or fascist, and the 1 who is partially pro-democracy is considered not likely to win, but might be able to influence who does. https://www.politico.com/interactives/2023/republican-candidates-2024-gop-presidential-hopefuls-list/

I only read up to remark 5.B before I got too distracted by the fact that remark 1 does not describe the GPT I interact with.

How did you come to the conclusion that the token deletion rule is to remove 1 token from the front?

The API exposed by OpenAI does not delete any tokens. If you exceed the context window, you receive an error, and you are responsible for deciding how to delete tokens to get back within it. (If I understand correctly, this is dynamic GPT, calculating one token at a time, but only appending to the end of the input tokens until it reaches a stop token or the completion-length parameter. Prompt length + max completion length must be <= context length. Due to per-token billing, the max completion length is usually set much smaller than the context limit, but I could see how the most interesting behavior for your purposes would arise with a larger limit.)
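If I've understood that loop correctly, the append-only behavior can be sketched as follows. This is a toy simulation, not the real API: `next_token`, the integer token values, and the parameter names are all stand-ins I've made up for illustration.

```python
def generate(prompt_tokens, max_completion, context_length, next_token, stop_token):
    """Toy sketch of append-only decoding: tokens are only ever appended,
    never deleted, until a stop token or the completion limit is reached."""
    # Mirrors the API constraint: prompt + max completion must fit the window.
    if len(prompt_tokens) + max_completion > context_length:
        raise ValueError("prompt + max completion exceeds context length")
    tokens = list(prompt_tokens)
    for _ in range(max_completion):
        tok = next_token(tokens)  # stand-in for one forward pass of the model
        if tok == stop_token:
            break  # stop token is not appended; generation simply halts
        tokens.append(tok)
    return tokens[len(prompt_tokens):]  # the completion only
```

Under this sketch, nothing is ever removed from the front of the window; the request is simply rejected up front if it could not fit.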

The deletion rule I've been working with, langchain.memory.ConversationSummaryBufferMemory, is very different. When the threshold token count is exceeded, it splits off the first n chat messages to get below the goal, runs GPT with a summarization prompt over those n messages, and prepends the resulting summary to the remaining messages (those from n+1 on). This is far more selective about which history it throws away, which can have a large impact on behavior.
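A minimal sketch of that rule, with a stubbed-out summarizer standing in for the GPT call. The function and parameter names here are my own for illustration, not langchain's actual internals:

```python
def prune_with_summary(messages, max_tokens, count_tokens, summarize):
    """Sketch of a summary-buffer deletion rule: when the history exceeds
    max_tokens, peel whole messages off the front until the remainder fits,
    summarize the peeled-off prefix, and prepend that summary."""
    kept = list(messages)
    if sum(count_tokens(m) for m in kept) <= max_tokens:
        return kept  # under the threshold: nothing to do
    overflow = []
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        overflow.append(kept.pop(0))  # oldest message first, whole messages at a time
    # summarize() stands in for a GPT call with a summarization prompt
    return [summarize(overflow)] + kept
```

Note the contrast with a pure sliding window: the old messages are condensed rather than discarded outright, so some of their information survives in the summary.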

Langchain does have simpler rules that just throw away history, but they usually throw away an entire message at a time, not a single token at a time.

Why are you ignoring prompts much smaller than the context window? These appear to be the vast majority of prompts: given the way the API works, you need to leave room for the reply, and have some way to handle continuation if the reply hits the limit before it hits the stop token. The tokens past the stop token in the context window never seem to matter, though I have not investigated how that is achieved, e.g. whether they are all forced to zero.

I think your supposition that most people have trouble critiquing arguments they're encountering for the first time is incorrect. I don't find this hard myself. Learning how to critique arguments is a skill you can study. Even just googling "how to critique an argument you've never seen before" gives some reasonable starting points. I'm not surprised a background in Evangelical Christianity has left you lacking this skill, as unquestioning belief is favored there.

Seeking out and listening to podcasts from several distinct but not obviously incorrect philosophies might give you a better perspective on alternative values you might apply to the Rationalist approach. (My favored alternative happens to be ecological approaches.)

Singularity.FM has some good interviews with people holding contrary views to mainstream singularitarian thought, and some of those offer useful alternative lenses to view the world through, even when you don't agree with their conclusions.

Reading about those who have taken Rationalist-style approaches and reached obviously crazy conclusions is also useful for seeing where people are prone to going off the rails, so you can avoid the same mistakes or recognize the signs when others make them.

As for perfectionism... As an interview with a specialist in innovation pointed out, if you aren't failing, you aren't taking big enough risks to find something new.

"When I was a kid, I thought mistakes were simply bad, and to be avoided. As an adult I realized many problems are best solved by working in two phases, one in which you let yourself make mistakes, followed by a second in which you aggressively fix them."--Paul Graham

I hope some of this helps, and good luck in your journey!

Most programming is not about writing the code; it is about translating a human description of a problem into a computer description of that problem. This is also why all attempts so far to make a system so simple that "non-programmers" can program it have failed. The difficult aptitude programming requires is the ability to think abstractly and systematically, and to recognize which parts of a human description of the problem need to be translated into code, and which unspoken parts also need to be translated into code.

The article noted it was high frequency stimulus that had the effect, and seemed to be disrupting normal function.

The article also says the patient was awake.

Good article, I'll have to see if reminding myself of this helps at work tomorrow.

Success and happiness cause you to regain willpower;

This is dangerously incorrect - studies show willpower is only an expendable resource for people who believe it to be expendable. People who don't think willpower is expendable have longer-lasting willpower.

I feel like the question there is "Does the map match the territory?"

If atoms are real, then there is something in the territory to which the symbol atom on our map refers.

I'm tempted to say that if an atom is real, then any sufficiently accurate model must include something that refers to it. However, wouldn't that lead to the conclusion that no, atoms do not exist, and we were mistaken? Really, quantum wave functions exist, and "atom" is just shorthand for a particular type of collection of electron, quark, and gluon wave functions. (Um, oops, I've exceeded my knowledge of quantum mechanics here; replace what I said with whatever quantum mechanics says an atom is.) Or would it lead to the conclusion that "atom" is a name for a particular well-defined class of collections of wave functions?

If something such as an atom is not real, then it is just a convenient organizing principle that lets us build a simplified, but necessarily incorrect, model. Whether to keep using a known-incorrect model tends to depend on its usefulness, but you must always account for the incorrectness. (For example, we keep using Newtonian mechanics and the ideal gas law, even though both are known to be incorrect; we just know in which domains they are accurate enough to keep using.)

What's the usefulness of "I think that everything exists, by the way: there's an ensemble universe"? How does it constrain your expectations?

I don't see how having specific beliefs either way about stuff outside the observable universe is useful.

Now, if you can show that whether the universe beyond the observable is infinite or non-infinite but much larger than the Hubble Volume constrains expectations about the contents of the observable universe, then it might be useful.
