All of Ericf's Comments + Replies

Years ago, I advocated banning crypto as a means of limiting the damage AI could do, thinking an advanced AI might be able to mine or hack exchanges (eg by guessing passwords of "lost" bitcoins) and accumulate wealth (ie power).

Apparently it could also just make a meme coin and generate billions from nothing, given a sufficiently edgy coin (for example, AI itself).

https://www.axios.com/2025/01/19/donald-trump-crypto-billionaire

I am once again humbly suggesting that all un-regulated currency, especially distributed ledgers, be banned worldwide as a precautionary measure.

Another example of how reality (especially anything involving technology) is not constrained by the need to be realistic. What SF author would dare write a story with meme coins, much less one in which the meme coins involved AIs like Claude?

The clean lines make me think you didn't use hypergeometric calculations. If I have 2 extrovert friends, on any given day 0 (25%), 1 (50%), or 2 (25%) of them will want to hang out. If I want to hang out on day N, there is a 25% chance I fail to.
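A quick sketch of that arithmetic (a minimal binomial model, assuming each friend independently wants to hang out on half of all days):

```python
from math import comb

# Two extrovert friends, each independently available on any given
# day with probability 1/2 (the example above).
n, p = 2, 0.5

for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"P({k} friends want to hang out) = {prob:.2f}")

# Prints 0.25 / 0.50 / 0.25 -- so on any target day N there is a
# 25% chance that zero friends are available.
```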

Virtually no one differentiates between a 4 and a 5 on these kinds of surveys. Either they are the kind of person who "always" puts 5, or they "never" do.

With rationalist-adjacent or other overthinkers, you can give more specific anchors (eg 5 = this was the best pairing). Or you can have specific anchored questions (ie:

  1. I would go to another room to avoid this person at a party
  2. I do not want to see this person again
  3. Whatever
  4. If someone else did the work to plan it, I would show up and spend time with this person again
  5. I will schedule time to see this person again.

Airline tickets are a bad example because they are priced dynamically. So if more people find/exploit the current pricing structure, the airline will (and does) shift the pricing slightly so that it remains profitable.

+1 for substituting brain processes. High-g neurodivergents of all flavors tend to run apps in the "wrong" parts of their brain to do things that neurotypicals do automatically. Low-g neurodivergents just fail at the tasks.

Rana Dexsin
In less serious (but not fully unserious) citation of that particular site, it also contains an earlier depiction of literally pulling up ladders (as part of a comic based on treating LOTR as though it were a D&D campaign) that shows off what can sometimes result: a disruptive shock from the ones stuck on the lower side, in this case via a leap in technology level.

Why not both? It's a minority of people who have the ability and inclination to learn how to conform to a different milieu than their natural state.

CK, as used here, seems more transactional and situation-specific. Emotional Labor usually refers to a pattern over time, including things like checking for unknown unknowns and "making sure X gets done." Both ideas are playing in a similar space.

Bonus points in a dating context: by being specific and authentic you drive away people who won't be compatible. In the egg example, even if the second party knows nothing about the topic, they can continue the conversation with "I can barely boil water, so I always take a frozen meal in to work" or "I don't like eggs, but I keep pb&j at my desk" or just swipe left and move on to the next match.

Follow up question: is this a permanent gain or temporary optimization (eg without further intervention, what scores would the subject get in 6 months?)

We know for sure that eating well and getting a good night's sleep dramatically improves performance on a wide array of mental tasks. It's not a stretch to think other interventions could boost short term performance even higher.

For further study: Did the observed increase represent a repeatable gain, or an optimization? Within-subject studies show a full SD variation between test sessions for many subjects, so I would predict that "a set of interventions" could produce a "best possible score" for an individual but hit rapid diminishing returns.
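A toy simulation of that optimization-vs-gain distinction (assumed numbers: a fixed true score of 100 with session-to-session noise of one full SD, 15 points):

```python
import random

random.seed(0)
TRUE_SCORE, SESSION_SD, TRIALS = 100, 15, 100_000

def best_of(k: int) -> float:
    """Mean best-of-k score when each session is independently noisy."""
    total = 0.0
    for _ in range(TRIALS):
        total += max(random.gauss(TRUE_SCORE, SESSION_SD) for _ in range(k))
    return total / TRIALS

for k in (1, 2, 4, 8):
    print(f"best of {k} sessions: {best_of(k):.1f}")

# Roughly 100, 108, 115, 121: interventions that merely let you hit
# your best day look like a gain but plateau fast, with no change to
# the underlying ability.
```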

George3d6
Will have an update on this in 2 weeks or so.

Communication bandwidth: if you find that you’re struggling to understand what the person is saying or get on the same page as them, this is a bad sign about your ability to discuss nuanced topics in the future if you work together.

Just pulling this quote out to highlight the most critical bit. Everything else is about distinguishing between BS and ability to remember, understand, and communicate details of an event (note: this is a skill not often found at the 100 IQ level). That second thing isn't necessarily a job requirement for all positions (eg sales, entry level positions), but being comfortable talking with your direct reports is always critical.

The described "next image" bot doesn't have goals like that, though. Can you take the pre-trained bot and give it a drive to "make houses" and have it do that? When all the local wood is used up, will it know to move elsewhere, or plant trees?

gilch
Yes, maybe? That kind of thing is presumably in the training data and the generator is designed to have longer term coherence. Maybe it's not long enough for plans that take too long to execute, so I'm not sure if Sora per se can do this without trying it (and we don't have access), but it seems like the kind of thing a system like this might be able to do.

If you have to give it a task, is it really an agent? Is there some other word for "system that comes up with its own tasks to do"?

gilch
Did you come up with your hunger drive on your own? Sex drive? Pain aversion? Humans count as agents, and we have these built in. Isn't it enough that the agent can come up with subgoals to accomplish the given task?

Note that you have reduced the raw quantity of dust specks by "a lot" with that framing. The heat death of the universe is "only" ~10^106 years away, so that would be no more than 2^(10^106) people (if we somehow double every year), compared to 3^^(3^27), which is 3^(10^(a number too big to write down)).
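Written as power towers, the gap is visible at a glance (a sketch; reading the garbled figure as Knuth's 3↑↑(3^27) is my assumption):

```latex
% Doubling yearly until heat death caps the count at a tower of
% height three-and-a-bit, since 106 = 10^{2.03}:
\[
  2^{10^{106}} < 10^{10^{106}} = 10^{10^{10^{2.03}}}
  \qquad\text{vs.}\qquad
  3\uparrow\uparrow(3^{27}) =
    \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{3^{27}\ \text{threes}}
\]
```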

200 years ago was 1824. So compare it to buying land, company stocks (the London and NY stock exchanges were well established by then), or government bonds.

Matt Goldenberg
Some quick calculations from ChatGPT put the value from a British government bond (Britain being considered the world power then) at about equal to the value of gold, assuming a fixed interest rate of 3%, with gold coming out slightly ahead. I haven't really checked these calculations, but they pass the sniff test (except the part where ChatGPT tried to adjust today's dollars for inflation).
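The compounding itself is easy to check; here is a minimal sketch (the gold prices are rough assumed figures, not verified data, and reinvestment assumptions dominate the outcome):

```python
# Toy comparison: a bond compounding at a fixed 3% for 200 years
# vs. buy-and-hold gold. All prices are illustrative assumptions.
YEARS = 200

def compound(rate: float, years: int) -> float:
    """Growth multiple of one unit at `rate`, compounded annually."""
    return (1 + rate) ** years

bond_multiple = compound(0.03, YEARS)  # ~369x, if coupons reinvest

gold_1824 = 19.39    # assumed: rough US mint-era price, USD/oz
gold_2024 = 2400.0   # assumed: ballpark 2024 spot price, USD/oz
gold_multiple = gold_2024 / gold_1824  # ~124x nominal

print(f"3% bond, reinvested: {bond_multiple:,.0f}x")
print(f"gold, buy and hold:  {gold_multiple:,.0f}x")
```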

Narrator: gold has been a poor bet for 90% of the last 200 years.

(Don't quote me on that, but it is true that gold was a good bet for about 10 years in recent memory, and a bad bet for most post-industrial time)

Matt Goldenberg
Compared to what?  My guess is it's a better bet than most currencies during that time, aside from a few winners that it would have been hard to predict ahead of time.  E.g., if 200 years ago, you had taken the most powerful countries and their currencies, and put your money into those, I predict you'd be much worse off than gold.

I can't tie up cash in any sort of escrow, but I'd take that bet on a handshake.

Mr. Perot got fewer votes than either major party candidate. Not a ringing endorsement. And I didn't say the chances were quite low, I said they were zero*. Which is at least 5 orders of magnitude difference from "quite low," so I don't think we agree about his chances.

*technically odds can't be zero, but I consider anything less likely than "we are in a simulation that is subject to intervention from outside" to be zero for all decision making purposes.

Zane
Maybe the chance that Kennedy wins, given a typical election between a Republican and a Democrat, is too low to be worth tracking. But this election seems unusually likely to have off-model surprises - Biden dies, Trump dies, Trump gets arrested, Trump gets kicked off the ballot, Trump runs independently, controversy over voter fraud, etc. If something crazy happens at the last minute, people could end up voting for Kennedy. If you think the odds are so low, I'll bet my 10 euros against your 10,000 that Kennedy wins. (Normally I'd use US dollars, but the value of a US dollar in 2024 could change based on who wins the election.)

There is an actual 0% chance that anyone other than the Democratic or Republican nominee (or their replacement in the event of death etc.) becomes president. Voting for/supporting any other candidate has, historically, done nothing to support that candidate's platform in the short or long term. If you find both options without merit, you should vote for your preferred enemy:

  1. Who will be most receptive to your message, either in a compromise or an argument? And/or
  2. So sorry about your number 1 issue; neither party cares. What's your number 2 issue? Maybe there is a difference there.
Zane
I wouldn't entirely dismiss Kennedy just yet; he's polling better than any independent or third party candidate since Ross Perot. That being said, I do agree that his chances are quite low, and I expect I'll end up having to vote for one of the main two candidates.

Do you have a link to the study validating that the LLM responses actually match the responses given by humans in that category?

Note one weakness of this technique. An LLM is going to provide what the average generic written account would be. But messages are intended for a specific audience, sometimes a specific person, and that audience is never "generic median internet writer." Beware WEIRDness. And note that visual/audio cues are 50-90% of communication, and 0% of LLM experience.

Gordon Seidoh Worley
You can actually ask the LLM to give an answer as if it were some particular person. For example, just now, to test this, I did a chat with Claude about the phrase "wear a mask", and it produced different responses when I asked it what it would do upon hearing this phrase from public health officials if it were a scientist, a conspiracy theorist, or a general member of the public, and in each case it gave reasonably tailored responses that reflect those differences. So if you know your message is going to a particularly unusual audience or you want to know how different types of people will interpret the same message, you can get it to give you some info on this.

How does buying "none of the above" work as you add more entries? If someone buys NOTA today, and the winning entry is #13, does everyone who bought NOTA before it was posted also win?

Isaac King
Yes, if you buy "other" it splits those shares among all new answers.

Agree that closer to reality would be one advisor, who has a secret goal, and player A just has to muddle through against an equal-skill bot while deciding how much advice to take. And playing like 10 games in a row, so performance can be accurately evaluated against the EV of 5 wins.

Plausible goals to decide randomly between:

  1. Player wins
  2. Player loses
  3. Game is a draw
  4. Player loses their Queen (ie opponent still has their queen after all immediate trades and forcing moves are completed)
  5. Player loses on time
  6. Player wins, delivering checkmate with a bishop or knight move
  7. M…

Arguing against A doesn't support Not A, but arguing against Not Not A is arguing against A (while still not arguing in favor of Not A) - albeit less strongly than arguing against A directly. No back translation is needed, because arguments are made up of actual facts and logic chains. We abstract it to "not A" but even in pure Mathematics, there is some "thing" that is actually being argued (eg, my grass example).

Arguing at a meta level can be thought of as putting the object level debate on hold and starting a new debate about the rules that do/should govern the object level domain.

Alice: grass is green -> grass isn't not green
Bob: the grass is teal -> the grass is provably teal
Alice: your spectrometer is miscalibrated -> your spectrometer isn't not miscalibrated.

...

I'm having trouble with the statement {...and has some argument against C'}. The point of the double negative translation is that any argument against not not A is necessarily an argument against A (even though some arguments against A would not apply to not not A). And the same applies to the other translation - Alice is steelmanning Bob's argument, so there shouldn't be any drift of topic.

abramdemski
That's an interesting point, but I have a couple of replies.

* First and foremost, any argument against 'not not A' becomes an argument against A if Alice translates back into classical logic in a different way than I've assumed she is. Bob's argument might conclude 'not A' (because ¬¬¬A→¬A even in intuitionistic logic), but Alice thinks of this as a tricky intuitionistic assertion, and so she interprets it indirectly as saying something about proofs. For Alice to notice and understand your point would, I think, be Alice fixing the failure case I'm pointing out.
* Second, an argument against an assumption need not be an argument for its negation, especially for intuitionists/constructivists. (Excluded middle is something they want to argue against, but definitely not something they want to negate, for example.) The nature of Bob's argument against Alice's claim can be quite complex and can occur at meta-levels, rather than occurring directly in the logic.

So I guess I'm not clear that things are as simple as you claim, when this happens.
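The cited direction does check out constructively; a one-line Lean 4 sketch, using nothing beyond core logic:

```lean
-- ¬¬¬A → ¬A needs no excluded middle: from a proof a : A we could
-- build ¬¬A as (fun na => na a), contradicting the hypothesis h.
example (A : Prop) (h : ¬¬¬A) : ¬A :=
  fun a => h (fun na => na a)
```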

Additionally and separately, the law "X takes effect at time t" will be opposed by the interests that oppose X, regardless of the value of t.

FCCC
I think your point is that this strategy only works if the voting block’s short-term interests conflict and their long-term interests don’t conflict. I fully agree with that.

Consider a scale that runs from "authentic real life" to "Lotus eater box." At any point along that scale, you can experience euphoria. At the Lotus Eater end, it is automatic. At the real life end, it is incidental. "Games" fall towards the Lotus Eater end of the spectrum, not as far as slot machines, but further from real life than Exercise or Eating Chocolate. Modern game design is about exploiting what is known about what brains like, to guide the players through the (mental) paths necessary to generate happy chems. They call it "being Fun" but that's j…

90% of games are designed to be fun. Meaning the point is to stimulate your brain to produce feel-good chemicals. No greater meaning, or secret goal. To do this, they have goals, rules, and other features, but the core loop is very simple:

  1. I want to get a dopamine hit, therefore
  2. I open up a game, and
  3. The game provides a structure that I follow, subordinating my "real life" to the artificial goals and laws of the game
  4. Profit!

When the brain generates good feelings, it usually has reasons for doing that, which a game designer has to be aware of. If you keep trying to make it generate good feelings without respecting the deeper purposes of the source of the feelings, afaik it generally stops working after a bit.

My aspiration is to make games that are compatible with living in real life. It's a large underserved market.

I don't think the assumption of equal transaction costs holds. If I want to fill in some potholes on my street, I can go door to door and ask for donations - which costs me time but has minimal and well-understood costs to the other contributors. If I have to add "explain this new thing" and "keep track of escrow funds" and "cycle back and tell everyone how the project funding is going, making them re-decide if/how much to contribute," that is a whole bunch of extra costs.

Also, if the public good is not quantum (eg, I could fix anywhere from 1-10 o…

A normal Kickstarter is already impossible to price correctly - 99% either deliberately underprice to ensure "success" (the "preorders" model), or accidentally underprice and cost the founders a ton of unpaid overtime (the GitP model), or overprice and don't get funded.

A clarification:

Consider the premises (with scare quotes indicating technical jargon):

  1. "Acting in Bad Faith" is Baysean evidence that a person is "Evil"
  2. "Evil" people should be shunned

The original poster here is questioning statement 1, presenting evidence that "good" people act in bad faith too often for it to be evidence of "evil."

However, I believe the original poster is using a broader definition of "Acting in Bad Faith" than the people who support premise 1.

That definition, concisely, would be "engaging in behavior that is recognized in context a…

A strange game.

The only winning move is not to play.

Just don't use the term "conspiracy theory" to describe a theory about a conspiracy. Popular culture has driven "false" into the definition of that term, and wishful appeals to bare text don't make that connection go away. It hurts that some terms are limited in usability, but the burden of communication falls on the writer.

Setting aside the object level question here, trying to redefine words in order to avoid challenging connotations is a way to go crazy.

If someone is theorizing about a conspiracy, that's a conspiracy theory by plain meaning of the words. If it's also true, then the connotation about conspiracy theories being false is itself at least partly false. 

The point is to recognize that it does belong in the same class, and how accurate/strong those connotations are for this particular example of that reference class, and letting connotations shift to match as…

Answer by Ericf

The innocent explanation is that the SS got back to him just before some sort of 90 day deadline, and he did the math. In which case the tweet could have been made out of ignorance, like flashing an "OK" sign in the "White Power" orientation. It's not easy to keep up with all the dog whistles out there.

Still political malpractice to not track and avoid those signals, though. If you "accidentally" have a rainbow in the background of a campaign photo, that counts as aligning with the LGBTQ+ crowd - same thing with putting "88" in a campaign post & Nazis.

So, the tweet aligns his campaign with the Nazis, but might have done it accidentally.

Neurotypicals have weaker preferences regarding textures and other sensory inputs. By and large, they would not write, read, or expect others to be interested in a blow-by-blow of aesthetics. Also, at a meta level, the very act of writing down specifics about a thing is not neurotypical. Contrast this post with the equivalent presentation in a mainstream magazine. The same content would be covered via pictures, feeling words, and generalities, with specific products listed in a footnote or caption, if at all. Or consider what your neurotypical friend's Face…

JenniferRM
Oooh! High agreement on something this downvoted is curiosity catnip! (Currently I see -18 for position, and +7 for agreement... I haven't touched either button, but I'll definitely upvote a response to my questions here <3)

I thought "this is nice" would be a common human reaction, but apparently I'm miscalibrated? The "agreement votes" suggest that even people who think you're being mean kinda grudgingly admit that you're saying something accurate...

...but like... What? Don't "normal people" also like, in a basic public space (that isn't a museum or a dance club or a bedroom or... etc), clean bright soft simple naturalistic spaces? I'm honestly curious if some things that I think of as kinda sorta universally joyful are actually "mere" parochial preference?

One possibility that I'm considering is that by "neuro-divergent" you just mean "smart and thoughtful and hopeful and optimistic, and willing to try things according to naive first principles just in case they turn out as great as it seems like they'd turn out, and generally having an uncrushed spirit"? It does seem to me like maybe normal people are extremely sad and broken a lot of times, and maybe that's all you're pointing to somehow, but that's a self-congratulatory theory, and also it isn't very predictive of any details, and so my default mental move is to doubt it and test it.

Hence: can you explain what you meant by that? :-)
Answer by Ericf

The real answer is that you should minimize the risk that you walk away and leave the door open for hours, and open it zero times whenever possible. The relative heat loss from 1 vs many separate openings is not significantly different from each other, but it is much more than 0, and the tail risk of "all the food gets warm and spoils" should dominate the decisions.

Answer by Ericf

I don't think your model is correct. Opening the fridge causes the accumulated cold air to fall out over a period of a few (maybe 4-7?) seconds, after which it doesn't really matter how long you leave it open, as the air is all room temp. The stuff will slowly take heat from the room-temp air, at a rate of about 1 degree/minute. Once the door is closed, it takes a few minutes (again, IDK how long) to get the air back to 40F, and then however long to extract the heat from the stuff. If you are choosing between "stand there with it open" and "take something o…

Trevor Hill-Hand
Something that may help build a better model/intuition is this video from Technology Connections: https://www.youtube.com/watch?v=CGAhWgkKlHI

I mentally visualize the cold air as a liquid when I open the door, or maybe picturing it looking similar to the fog from dry ice. Since it's cold, it falls downward, "pouring" out onto the floor, and probably does not take more than a few seconds, though I would love to see someone capture it on video with a thermal camera. After that, I figure it doesn't really matter how long the door is open, until you start talking about leaving it open for 10+ minutes, where you can then start to worry about the food's temperature rising and the fridge wasting energy trying to cool the open space. On the timescale of just a few moments while you grab stuff, the damage is already done once you open it the first time, and leaving it open or opening/closing it again doesn't really affect anything.

This is also why grocery stores and restaurant kitchens tend to have reach-in fridges, open from the top like a chest freezer, instead of vertical doors (though, that's also for convenience).
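A toy numerical sketch of the model described in this thread (the warming rate and air-exchange time are assumptions taken from the comments above):

```python
# Fridge toy model: cold air is gone within seconds of opening,
# after which food warms at roughly 1 F per minute (assumed rates).
FOOD_WARMING_F_PER_MIN = 1.0   # "about 1 degree/minute"
AIR_EXCHANGE_SECONDS = 5.0     # "a few (maybe 4-7?) seconds"

def food_temp_rise(door_open_seconds: float) -> float:
    """Approximate food temperature rise (F) for one door opening."""
    warm_seconds = max(0.0, door_open_seconds - AIR_EXCHANGE_SECONDS)
    return FOOD_WARMING_F_PER_MIN * warm_seconds / 60

# Two 10-second openings vs. one 20-second opening: nearly the same,
# i.e. the damage is mostly done the moment the door first opens.
print(2 * food_temp_rise(10))  # ~0.17 F
print(food_temp_rise(20))      # ~0.25 F
```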

Re: happiness, it's that meme graph:

Dumb: low expectations, low results, is happy
Top: can self-modify expectations to match reality: is happy
Muddled middle: takes expectations from environment, can't achieve them, is unhappy.

Jackson Wagner
This is funny, although of course what this is really pointing to isn't a literal U-shaped graph, but that it's really better to think about this in a much more multidimensional way, rather than just trying to graph happiness vs intelligence. Of course there are all sorts of other traits (like conscientiousness, etc) that might influence happiness. But more important IMO is what you are pointing to -- there are all sorts of different "mindsets" that you can take towards your life, which have a huge impact on happiness... maybe high IQ slightly helps you grope your way towards a healthier mindset, but to a large extent these mindsets / life philosophies seem independent of intelligence. By "mindset", I am thinking of things like:

- "internal vs external locus of control"
- level of expectations like you say, applied to lots of different life areas where we have expectations
- stoic vs neurotic/catastrophizing attitude towards events
- how you relate to / take expectations and desires from your social environment (trying to keep up with the Joneses, vs deliberately rebelling, vs lots of other stances)
- being really hard on yourself vs having self-compassion vs etc

And so on; too many to mention.

The definition of Nash equilibrium is that you assume all other players will stay with their strategy. If, as in this case, that assumption does not hold, then you have (I guess) an "unstable" equilibrium.

The other thing that could happen is silent deviations, where some players aren't doing "punish any defection from 99" - they are just doing "play 99" to avoid punishments. The one brave soul doesn't know how many of each there are, but can find out when they suddenly go for 30.

It's not. The original Nash construction is that player N picks a strategy that maximizes their utility, assuming all other players get to know what N picked and then pick a strategy that maximizes their own utility given that. Minimax as a goal is only valid for atomic game actions, not complex strategies - specifically because of this "trap."

localdeity
Ok, let's see. Wikipedia: This is sensible. Then... from the Twitter thread: This seems incorrect. The Wiki definition of Nash equilibrium posits a scenario where the other players' strategies are fixed, and player N chooses the strategy that yields his best payoff given that; not a scenario where, if player N alters his strategy, everyone else responds by changing their strategy to "hurt player N as hard as possible". The Wiki definition of Nash equilibrium doesn't seem to mention minimax at all, in fact (except in "see also").

In this case, it seems that everyone's starting strategy is in fact something like "play 99, and if anyone plays differently, hurt them as hard as possible". So something resembling minimax is part of the setup, but isn't part of what defines a Nash equilibrium. (Right?)

Looking more at the definitions... The "individual rationality" criterion seems properly understood as "one very weak criterion that obviously any sane equilibrium must satisfy" (the logic being "If it is the case that I can do better with another strategy even if everyone else then retaliates by going berserk and hurting me as hard as possible, then super-obviously this is not a sane equilibrium"). It is not a definition of what is rational for an individual to do. It's a necessary but nowhere near sufficient condition; if your decisionmaking process passes this particular test, then congratulations, you're maybe 0.1% (metaphorically speaking) on the way towards proving yourself "rational" by any reasonable sense of the word.

Does that seem correct?
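To pin down the definition being quoted: a strategy profile is a Nash equilibrium iff no single player can do better by deviating while everyone else's strategy stays fixed. A minimal sketch of that check (toy Prisoner's Dilemma payoffs, not the 99/100 game from the thread):

```python
import itertools

# payoffs[p][i][j]: payoff to player p when P1 plays i and P2 plays j.
# 0 = cooperate, 1 = defect (a standard Prisoner's Dilemma).
payoffs = [
    [[3, 0], [5, 1]],  # player 1
    [[3, 5], [0, 1]],  # player 2
]

def is_nash(i: int, j: int) -> bool:
    """True iff neither player gains by a unilateral deviation."""
    p1_ok = all(payoffs[0][i][j] >= payoffs[0][k][j] for k in range(2))
    p2_ok = all(payoffs[1][i][j] >= payoffs[1][i][k] for k in range(2))
    return p1_ok and p2_ok

for i, j in itertools.product(range(2), range(2)):
    if is_nash(i, j):
        print(f"Nash equilibrium: ({i}, {j})")  # only (1, 1)
```

Note that nothing in the check lets the other players react to the deviation, which is the point being made above.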

There is a more fundamental objection: why would a set of 1s and 0s represent (given periodic repetition in 1/3 of the message, so dividing it into groups of 3 makes sense) specifically 3 frequencies of light and not

  1. Sound (hat tip Project Hail Mary)
  2. An arrangement of points in 3d space
  3. Actually 6 or 9 "bytes" to define each "point"
  4. Or the absolute intensity or scale of the information (hat tip Monty Python tiny aliens)
qvalq
I think it could deduce it's an image of a sparse 3D space with 3 channels. From there, it could deduce a lot, but maybe not that the channels are activated by certain frequencies.
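A tiny sketch of the ambiguity (the bit pattern is made up): a stream of values grouped in threes carries no label saying whether the triples are color channels, sound parameters, or coordinates.

```python
# A made-up 36-bit message, parsed as six 6-bit values in triples.
bits = "000001010011100101110111001000011010"

values = [int(bits[i:i + 6], 2) for i in range(0, len(bits), 6)]
triples = list(zip(values[0::3], values[1::3], values[2::3]))

# The numbers are identical either way; the modality is our guess.
print("as RGB-ish pixels:  ", triples)
print("as 3D point coords: ", triples)
```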

I think the key facility of an agent vs a calculator is the capability to create new short-term goals and actions. A calculator (or water, or bacteria) can only execute the "programming" that was present when it was created. An agent can generate possible actions based on its environment, including options that might not even have existed when it was created.

I think even these first rough concepts have a distinction between beliefs and values. Even if the values are "hard coded" from the training period and the manual goal entry.

Being able to generate short-term goals and execute them, and see if you are getting closer to your long-term goals, is basically all any human does. It's a matter of scale, not kind, between me and a dolphin and AgentGPT.

In summary: Creating an agent was apparently already a solved problem, just missing a robust method of generating ideas/plans that are even vaguely possible.

Star Trek (and other sci-fi) continues to be surprisingly prescient, and "Computer, create an adversary capable of outwitting Data" creating an agent AI is actually completely realistic for 24th-century technology.

Our only hopes are:

  1. The accumulated knowledge of humanity is sufficient to create AIs with an equivalent IQ of 200, but not 2000.
  2. Governments step in and ban things.
  3. Adversarial action kee…
DirectedEvolution
Speculative: Another point is that it may be speed of thought, action, and access to information that bottlenecks human productive activities - that these are the generators of the quality of human thought. The difference between you and Von Neumann isn't necessarily that each of his thoughts was magically higher-quality than yours. It's that his brain created (and probably pruned) thoughts at a much higher rate than yours, which left him with a lot more high-quality thoughts per unit time. As a result, he was also able to figure out what information would be most useful to access in order to continue being even more productive. Genius is just ordinary thinking performed at a faster rate and for a longer time.

GPT-4 is bottlenecked by its access to information and long-term memory. AutoGPT loosens or eliminates those bottlenecks. When AutoGPT's creators figure out how to more effectively prune its ineffective actions, and if costs come down, then we'll probably have a full-on AGI on our hands.