All of ike's Comments + Replies

ike20

Two of us showed up, we'll be hanging out in the back under the large square white tent if anyone else is looking

ike41

Maybe buying IPv4 addresses? Some discussion here: https://news.ycombinator.com/item?id=32416043

Civilization has been trying to move to IPv6 for decades, but IPv4 is still widespread and commands a premium. With more internet growth this could blow up more.

ike43

Yeah, the version I liked was that someone else bribed a guard/guards into letting him kill himself but that's basically the same.

ike100

It's been hard keeping to it, but I do notice myself being more productive when I do. One thing that has stayed is not having an email tab always open. Hoping that over time I get better at following it strictly; it has such immediate positive effects that I'm not so worried I'll gradually forget and stop, like happened with other productivity attempts (e.g. making to-do lists.)

2TurnTrout
Yay! Keep up the good work :) I bet there's a way to stick to it better, I'd advise you to keep trying things on that front.
ike20

> Consider the shortest algorithm that simulates the universe perfectly.

Meaningless, on my metaphysics. Definition is circular - in order to define fundamental you have to already assume that the universe can be simulated "perfectly", but to define a perfect simulation you'll need to rely on concepts like "fundamental", or "external reality".

> Assuming that the way the universe looks changes continuously with these constants, it seems strange to insist that if the changes are so small you can't notice them they don't exist.

The assumption is meaningless.... (read more)

ike20

On my metaphysics it's not coherent to talk about "fundamental" constants, for multiple reasons. Try tabooing that and ask about what, if anything, is actually meant.

If you can't measure any of these constants past a hundred significant digits, what does it mean to talk about the constant having any digits beyond that? And what does it mean for a constant to be fundamental?

4Yair Halberstadt
Fundamental physical constants are easy. Consider the shortest algorithm that simulates the universe perfectly. That algorithm will consist of some rules, and some data. The data are fundamental physical constants. Assuming that the way the universe looks changes continuously with these constants, it seems strange to insist that if the changes are so small you can't notice them they don't exist. The aliens running the universe might well be able to read off all infinity digits of these constants, and measure precisely what difference changing the nth digit will make for all n.
ike50

I really liked this post. As a result of reading it, I'm trialling the following:

Every time I go on my computer or phone, I need to have a plan for one specific thing I am going to do. This can be "check all notifications from X/Y/Z", or "write this one long email", or even "15 minutes of unstructured time", but it should always be intentional. If I get the urge to do something else, I need to save it for a future session, which can be immediately afterwards.

2TurnTrout
Interested to hear how this goes.
ike20

Yes, but you said they're buying the no-coup shares, which subsidizes a coup. Article contradicts itself. 

ike20

> Mars buys shares that pay out 5 million Dogecoin if there is not a coup

> Suppose the prior implied probability of a regime change is 0.20. Mars can buy its shares for 1 million Dogecoin, pocketing a risk-free net utility equivalent to 4 million Dogecoin.

I'm confused - if the prior probability of a coup is 20% and Mars is buying shares that pay out if no coup, Mars would pay 4M?

2lsusr
A prior implied probability of 20% that there will be a coup means Mars would pay 0.2 for shares that pay out 1.0 if there is a coup. Multiply both sides of the equation by M (1 million). It costs Mars 0.2M to buy shares that pay out 1M if there is a coup. Multiply both sides of the equation by another 5. It costs Mars 1M to buy shares that pay out 5M if there is a coup. If Mars pays 1M to buy shares that pay out 5M if there is a coup then Mars pockets 5M − 1M = 4M.
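For concreteness, here is a minimal sketch (Python, not from the original post) of the pricing arithmetic lsusr describes; the function name and variable names are illustrative assumptions.

```python
# Illustrative sketch only: share price = implied probability * payout.
# Names and structure are assumptions for illustration, not from the post.

def share_cost(implied_probability: float, payout: float) -> float:
    """Cost of shares that pay `payout` if the event occurs."""
    return implied_probability * payout

payout = 5_000_000   # shares pay 5M Dogecoin if there is a coup
p_coup = 0.2         # prior implied probability of a coup

cost = share_cost(p_coup, payout)   # 0.2 * 5M = 1M Dogecoin
profit_if_coup = payout - cost      # 5M - 1M = 4M Dogecoin
print(cost, profit_if_coup)         # 1000000.0 4000000.0
```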
ike10

I wrote the post, everything is true.

In general I find stories of how others have succeeded interesting and useful. It's also interesting to see how various markets are inefficient at times. Same motivation as my post on prediction markets last year.

4Viliam
Ah, sorry, didn't mean to accuse you. It's just general skepticism, when I read something online, before I ask myself: "how did this happen?", I try to ask myself: "do I have any evidence that it actually happened?". Because there are people out there, such as Kiyosaki, whose business model is to write a fictional book about getting rich using superior wisdom, and then selling seminars where they teach this wisdom. Whether the wisdom works for you or not, Kiyosaki definitely gets his money.
ike30

I have no problem with the probability request here; I have a problem with the scenario. What kind of evidence are you getting that makes these two and only these two outcomes possible? Solomonoff/Bayes would never rule out any outcome, just make some of them low probability.

I've talked about the binding problem in Solomonoff before, see https://www.lesswrong.com/posts/Jqwb7vEqEFyC6sLLG/solomonoff-induction-and-sleeping-beauty and posts it links back to. See also "dust theory".

ike20

Not deployed yet except some minor positions. Will probably be ~30% private fiat funds, some crypto funds, and most chasing short term yield farming opportunities or related DeFi plays. It's not something easily scalable and requires quite a lot of active management.

I blew away both the market and crypto this year with a range of exotic strategies and low risk. Think it's basically impossible to repeat performance at current levels, but it seems hard to lose money given my risk appetite and diversification, and the only way I wouldn't make significant money is if crypto enters a bear which is probably correlated to a stock market bear as well, in which case I'm probably close to flat.

ike140

Putting together an article; for now, here are 22 predictions:

 

Tether is worth less than 99c by end of year: 2%

Biden or Harris mentions GPT-4: 2%

At least one school district or university is reported to tell students not to use language models (not just one teacher): 15%

A deepfake goes viral (>1 million views) with very few people, at first, realizing that it's fake: 15%

Russia is widely considered to have captured the capital of Ukraine, Kyiv: 10%

S&P 500 goes up: 80%

Annual CPI inflation released Jan 2023 (for Dec 2022) over 3.5%: 40%

The 7 day roll... (read more)

ike00

Probability is in the mind. Your question is meaningless. What is meaningful is what your expectations should be for specific experiences, and that's going to depend on the kind of evidence you have and Solomonoff induction - it's pretty trivial once you accept the relevant premises.

It's just not meaningful to say "X and Y exist", you need to reframe it in terms of how various pieces of evidence affect induction over your experiences.

2mako yass
I already am thinking about it in those terms, so I'm not sure what's going wrong here. Would it have been clearer if the focusing question was more like "what is the probability that, if you manage to find a pair of mirrors that you can use to check the model number on the back of your head, you'll see a model number corresponding to the heavier brain?"
ike190

There's been a handful of kashrus scandals where people mixed in meat from other supply chains into stuff that was supposed to be kosher certified, which seems a useful reference point for how this can slip through even with an extensive monitoring system intended to prevent that.

1Richard Horvath
I agree. I think it is more likely that "real" meat will be mixed into lab-grown, to dilute the cost/keep up with the demand. I think it is more likely that some wholesalers and retailers will be faking the lab-grown meat without the knowledge of the original producer, selling in similar boxes.
ike30

Technically part of the US, if you move here as a US citizen you get 0% capital gains tax rates which is really good if you're investing/trading/etc. Would love if more people moved here.

Answer by ike40

San Juan, Puerto Rico.

3ike
Technically part of the US, if you move here as a US citizen you get 0% capital gains tax rates which is really good if you're investing/trading/etc. Would love if more people moved here.
ike40

I'm currently looking into buying a bank or insurance company to do exactly that.

It's really non-trivial to borrow large amounts at low rates to lend. Way easier said than done.

ike30

Crypto yields are currently 20%+ annualized with fairly low risk if you know what you're doing 

2Barry_Cotter
If you are in that position surely the economically rational thing to do would be to juice your returns by borrowing to invest more?
Answer by ike60

No, because that's a meaningless claim about external reality. The only meaningful claims in this context are predictions.

"Do you expect to see chaos, or a well formed world like you recall seeing in the past, and why?"

The latter. Ultimately that gets grounded in Occam's razor and Solomonoff induction making the latter simpler.

ike20

I've spent a lot of time and written a handful of posts (including one on the interaction between Solomonoff and SIA) building my ontology. Parts may be mistaken but I don't believe it's "confused". Tabooing core concepts will just make it more tedious to explain, probably with no real benefit.

In particular, the only actual observations anyone has are of the form "I have observed X", and that needs to be the input into Solomonoff. You can't input a bird's eye view because you don't have one.

Anyway, it seems weird that being altruistic affects the agent's d... (read more)

1rvnnt
Thanks for the suggestions. Clearly there's still a lot of potentially fruitful disagreement here, some of it possibly mineable for insights; but I'm going to put this stuff on the shelf for now. Anyway, thanks.
ike20

A couple of things.

If you're ok with time inconsistent probabilities then you can be dutch booked.

I think of identity in terms of expectations. Right before you go to sleep, you have a rational subjective expectation of "waking up" with any number from 1-20 with a 5% probability.

It's not clear how the utility function in your first case says to accept the bet given that you have the probability as 50/50. You can't be maximizing utility, have that probability, and accept the bet - that's just not what maximizes utility under those assumptions.

My version of the bet shouldn't depend on if you care about other agents or not, because the bet doesn't affect other agents.

1rvnnt
Sure. Has some part of what I've written given the impression that I think time-inconsistent probabilities (or preferences) are OK? I want to give a thumbs-up to the policy of sharing ways-of-thinking-about-stuff. (Albeit that I think I see how that particular way of thinking about this stuff is probably confused. I'm still suggesting Tabooing "I", "me", "you", "[me] waking up in ...", etc.) Thanks. True, that part of what I wrote glossed over a large bunch of details (which may well be hiding confusion on my part). To try to quickly unpack that a bit:

* In the given scenario, each agent cares about all similar agents.
* Pretending to be a Solomonoff inductor, and updating on all available information/observations -- without mapping low-level observations into confused nonsense like "I/me is observing X" -- an agent in a green room ends up with p(coin=1) = 0.5.
* The agent's model of reality includes a model of {the agent itself, minus the agent's model of itself (to avoid infinite recursion)}.
* Looking at that model from a bird's-eye-view, the agent searches for an action a that would maximize ∑_{w∈W} (utility received by xeroxed agents in the version of w where this agent outputs a), where W is the set of "possible" worlds. (I.e. W is the set of worlds that are consistent with what has been observed thus far.) (We're not bothering to weight the summed terms by p(w) because here all w are equiprobable.)
* According to the agent's model, all in-room-agents are running the same decision-algorithm, and thus all agents observing the same color output the same decision. This constrains what W can contain. In particular, it only contains worlds w where if this agent is outputting a, then also all other agents (in rooms of the same color) are also outputting a.
* The agent's available actions are "accept bet" and "decline bet". When the agent considers those worlds where it (and thus, all other agents-in-green) outputs "accept bet", it calculates the total utilit
ike20

You can start with Bostrom's book on anthropic bias. https://www.anthropic-principle.com/q=book/table_of_contents/

The bet is just that each agent is independently offered a 1:3 deal. There's no dependence as in EY's post.

1rvnnt
It seems to me that, also with the bet you describe, there is no paradox/inconsistency. To make sure we're talking about the same thing: The bet I'm considering is: One likely source of confusion that I see here is: If one thinks about {what the agent cares about} in terms of "I", "me", "this agent", or other such concepts which correspond poorly to the Territory (in this kind of dilemma/situation). To properly deconfuse that, I recommend Tabooing "this", "I", "me", "you", etc. (A trick I found useful for that was to consider variations of the original dilemma, where the agents additionally have numbers tattooed on them; either numbers from 1 to 20, or random UUIDs, or etc.; either visible to the agent or not. Then one can formulate the agent's utility function in terms of "agent with number N tattooed on it", instead of e.g. "instances of me".) For brevity, below I do use "this" and "I" and etc. Hopefully enough of the idea still comes through to be useful.

If what the agent cares about is something like "utilons gained in total, by computations/agents that are similar to the original agent", then:

* Before the experiment: The agent would want agents in green rooms to accept the bet, and agents in red rooms to reject the bet.
* Upon waking up in a green room: The agent has received no information which would allow it to distinguish between coin-flip-outcomes, and its probability for coin=1 is still 50/50. I.e., the agent is in practically the same situation as before the experiment, and so its answer is still the same: accept the bet. (And conversely if in a red room.)

The above seems consistent/paradox-free to me.(?)

If what the agent cares about is something like "utilons gained, by this particular blob of atoms, and the temporal sort-of-continuation of it, as usually understood by e.g. humans", then:

* Before the experiment: The original description of the dilemma leaves unclear what happens to the original blob of atoms, but here I'll assume that t
ike50

You're just rejecting one of the premises here, and not coming close to dissolving the strong intuitions / arguments many people have for SIA. If you insist the probability is 50/50 you run into paradoxes anyway (if each agent is offered a 1:3 odds bet, they would reject it if they believe the probability is 50%, but you would want in advance for agents seeing green to take the bet.)

4Charlie Steiner
You're right, we didn't even get to the part where the proposed game is weird even without the anthropics.
1rvnnt
Thanks for the response. I hadn't heard of SIA before. After a bit of searching, I'm guessing you're referring to the Self-Indication Assumption.(?)

SIA, intuitions about it: Looks like there's a lot of stuff to read, under SIA (+ SSA). My current impression is that SIA is indeed confused (using a confused ontology/Map). But given how little I know of SIA, I'm not super confident in that assessment (maybe I'm just misunderstanding what people mean by SIA). Maybe if I find the time, I'll read up on SIA, and write a post about why/how I think it's confused. (I'm currently guessing it'd come down to almost the same things I'd write in the long version of this post -- about how people end up with confused intuitions about nonexistent sampling processes inserting nonexistent "I/me" ghosts into some brains but not others.) If you could share links/pointers to the "strong intuitions / arguments many people have for SIA" you mentioned, I'd be curious to take a look at them.

Bets and paradoxes: I don't understand what you mean by {running into paradoxes if I insist the probability is 50/50 and each agent is given a 1:3 odds bet}. If we're talking about the bet as described in Eliezer's original post, then the (a priori) expected utility of accepting the bet would be 0.5*(18 − 2*3) + 0.5*(2 − 18*3) = −20, so I would not want to accept that bet, either before or after seeing green, no? I'm guessing you're referring to some different bet. Could you describe in more detail what bet you had in mind, or how a paradox arises?
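For reference, a quick sketch (Python, illustrative only) of the a priori expected-value calculation quoted above. The 18/2 room split and the +1/−3 payouts are assumptions reconstructed from the quoted formula, not taken from anywhere else in this thread.

```python
# Sketch of the a priori EV of "accept the bet", reconstructing the formula
# quoted above; the 18/2 split and +1/-3 payouts are assumptions taken from
# that formula.

p_heads = 0.5  # probability of the coin outcome that puts 18 copies in green rooms

ev_heads = 18 * 1 - 2 * 3   # 18 green accepters gain 1 each, 2 in red lose 3 each
ev_tails = 2 * 1 - 18 * 3   # 2 green accepters gain 1 each, 18 in red lose 3 each

ev_accept = p_heads * ev_heads + (1 - p_heads) * ev_tails
print(ev_accept)  # -20.0
```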
ike10

Yes, rejecting probability and refusing to make predictions about the future is just wrong here, no matter how many fancy primitives you put together.

I disagree that standard LW rejects that, though.

ike20

Variance only increases chance of Yes here. If cases spike and we're averaging over 100k, reporting errors won't matter. If we're averaging 75k, a state dumping extra cases could plausibly push it over 100k.

3gbear605
To rephrase that, "Yes" requires that at least one day has reported >100k cases while "No" requires that all days have reported <100k cases. So if there is variance it will increase the chance any given day will be reported wrongly and a single wrong reporting of >100k will make "Yes" inaccurately occur. Of course, if it only spikes >100k real cases on a few days, and those days also have variance, that will make "No" inaccurately occur, but I agree that that's an unlikely situation. The real problem would be if the CDC has a consistent reporting error. For example if some states with high real case counts were to stop reporting data and the CDC then extrapolated from the remaining lower case count states, they could report an inaccurately low number of cases.
ike20

Two Moderna doses here with no significant side effects

ike20

I know what successful communication looks like. 

What does successful representation look like? 

1TAG
Communication uses symbols, which are representations.
ike20

Yes, it appears meaningless; I and others have tried hard to figure out a possible account of it.

I haven't tried to get a fully general account of communication but I'm aware there's been plenty of philosophical work, and I can see partial accounts that work well enough.

1TAG
You're implicitly assuming it works by using it. So why can't I assume that representation works, somehow?
ike40

I'm communicating, which I don't have a fully general account of, but is something I can do and has relatively predictable effects on my experiences. 

1TAG
Your objection to representation was that there is no account of it.
ike20

Not at all, to the extent the head is a territory. 

1TAG
Tell me what you are doing, then.
ike20

What does it mean for a model to "represent" a territory?

-1TAG
You're assuming that the words you are using can represent ideas in your head.
ike20

>On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine.

That's just part of my model. To the extent that empathy of this nature is useful for predicting what other people will do, that's a useful thing to have in a model. But to then say "other people have subjective experiences somewhere 'out there' in external reality" seems meaningless - you're just asserting your model is "real", which is a category error in my view. 

1TAG
"The model is the territory" is a category error, but "the model accurately represents the territory" is not.
Answer by ike20

My own argument, see https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism and the post it links back to.

It seems that if external reality is meaningless, then it's difficult to ground any form of morality that says actions are good or bad insofar as they have particular effects on external reality.

1Michele Campolo
That is an interesting point. More or less, I agree with this sentence in your first post: in the sense that one can do science by speaking only about their own observations, without making a distinction between what is observed and what "really exists". On the other hand, when I observe that other nervous systems are similar to my own nervous system, I infer that other people have subjective experiences similar to mine. How does this fit in your framework? (Might be irrelevant, sorry if I misunderstood)
ike40

> But, provided you speak about this notion, why would verificationism lead to external world anti-realism?

Anti-realism is not quite correct here, it's more that claims about external reality are meaningless as opposed to false. 

> One could argue that synthetic statements aren't really about external reality: What we really mean is "If I were to check, my experiences would be as if there were a tree in what would seem to be my garden". Then our ordinary language wouldn't be meaningless. But this would be a highly revisionary proposal. We arguably don't

... (read more)
2Lukas_Gloor
This is semantics but I'd say what you're describing fits the label "anti-realism" perfectly well. I wrote a post on Why Realists and Anti-Realists disagree. (It also mentions existence anti-realism briefly at the end.)   
1TAG
From my POV , you are external reality.
ike20

I granted your supposition of such things existing. I myself don't believe any objective external reality exists, as I don't think those are meaningful concepts.

1TAG
They're in the dictionary.
ike20

Perhaps. It's not clear to me how such facts could exist, or what claims about them mean.

If you've got self locating uncertainty, though, you can't have objective facts about what atoms near you are doing.

1TAG
The existence of a set of facts is implied by the existence of a world or worlds. You are supposing the existence of a multiverse, not me. I can have good-enough knowledge of what atoms near me are doing, because otherwise science wouldn't work. Of course, that's only subjective, but you are the one supposing the existence of a large objective world.
ike20

>If they didn't write the sentence, then they are not identical to me and don't have to accept that they are me.

Sure, some of those people are not identical to some other people. But how do you know which subset you belong to? A version of you that deluded themselves into thinking they wrote the sentence is subjectively indistinguishable from any other member of the set. You can only get probabilistic knowledge here, i.e. "most of the people in my position are not deluding themselves", which lets you make probabilistic predictions. But saying "X is true" and grounding that as "X is probable" doesn't seem to work. What does "X is true" mean here, when there's a chance it's not true for you?  

1TAG
Suppose there is no personal identity at all. Then there are still objective facts about what some bunch of atoms somewhere is doing.
ike20

I'm tentatively ok with claims of the sort that a multiverse exists, although I suspect that too can be dissolved.

Note that in your example, the relevant subset of the multiverse is all the people who are deluding themselves into thinking they typed that sentence. If there's no meaningful sense in which you're self located as someone else vs that subset, then there's no meaningful sense in which you "actually" typed it.

1TAG
If my supposed counterparts are identical in every way, then there is no confusion about whether they wrote the sentence. If they didn't write the sentence, then they are not identical to me and don't have to accept that they are me. You don't just need multiverse theory to be true, you need strong claims about transworld identity to be true.
ike20

What form of realism is consistent with my statement about level 4?

6TAG
That there is an external world. Which, in this case, happens to be a multiverse. You seem to be taking an epistemology-flavoured approach, where realism depends on having a set of facts, rather than a set of things. But even at that, it's not clear that multiverses imply a lack of facts. If there is a duplicate me somewhere that didn't just type that sentence, that doesn't indicate a lack of clarity about what I did, any more than if I had a twin who didn't just type that sentence.
Answer by ike120

External reality is not a meaningful concept, some form of verificationism is valid. I argued for it in various ways previously on LW, one plausible way to get there is through a multiverse argument.

Verificationism w.r.t level 3 multiverse - "there's no fact of the matter where the electron is before it's observed, it's in both places and you have self locating uncertainty."

Verificationism w.r.t. level 4 multiverse - "there's no fact of the matter as to anything, as long as it's true in some subsets of the multiverse and false in others, you just have self locating uncertainty."

Lots of people seem to accept the first but not the second.

2cubefox
Verificationism in the sense of the logical positivists is a theory of meaning. According to this theory, knowing the meaning of a statement p would amount to knowing the conditions under which it would be true and under which it would be false. (To give it a Bayesian slant, I like to widen this as "knowing what would be evidence for/against p".) Is this what you have in mind? Verificationism in this sense was used against postulating transcendent entities or states of affairs. Something is transcendent if it is beyond every possible experience. Therefore there is nothing which could verify or falsify facts about it. The logical positivists argued on the basis of verificationism that statements about transcendent things (certain conceptions of God, for example) are meaningless. Not false, but meaningless.

(Verificationism lost a lot of popularity in the 1950s and 60s because there was very little progress in making the notion precise. Also, some apparently unverifiable theories (e.g. in astronomy) seemed to be perfectly meaningful. Whether those problems can be met I don't know. Another point is that verificationism was meant only as a condition of meaningfulness of so-called synthetic statements. Statements are synthetic iff their truth depends not only on their meaning. In contrast, the truth of "analytic" statements depends only on their meaning. The logical positivists assumed that logical and mathematical statements were analytic. Since verificationism doesn't apply to the meaning of those latter statements, it arguably isn't a theory of meaning in the general sense.)

But, provided you speak about this notion, why would verificationism lead to external world anti-realism? Because statements like "there is a tree in my garden" cannot be truly "verified" -- because there might be no garden and no tree, and I might instead be deceived by a Cartesian demon? For the "wider" conception mentioned above this wouldn't be a problem I think. Having
3TAG
OTOH, realism isn't defined as every observable having a simultaneous sharp value.
ike60

How is that different than say the CIA taking ESP seriously, MKULTRA etc?

4Dumbledore's Army
I would say the UFO thing is different because the defence people are reporting physical phenomena which they can’t explain. So far as I know, the CIA didn’t have evidence that ESP worked and subsequently decide to investigate it, rather someone persuaded them to spend some money looking for evidence (which they didn’t find). The UFO reports give the impression that the DoD didn’t want to take them seriously but they got smacked in the face by enough evidence that they didn’t have much choice. Again, I’m not saying it’s definitely something weird. But if there’s a one-third chance the UFO reports are from something interesting, isn’t it worth investigating? Remember that aliens are only one of the interesting possibilities. The other ones are that China/Russia/someone has either made a big leap ahead in technology; or has figured out how to spoof multiple US military systems and is testing their abilities by generating UFO sightings. Or the third option, something we haven’t even thought of.
ike30

From what I can tell, most of the people who lost significant sums on the CO2 markets were generally profitable and +EV. Although I guess I'm mostly seeing input from the people who hang out on the discord all day, which is a skewed sample.

1Liam Donovan
Because Mr. CO2 guy was clearly making a -EV bet that happened to pay off this time :)
ike90

Prediction markets are tiny compared to real world markets. Something like $100 million total volume on Polymarket since inception. There just aren't as many people making sure they're efficient.

ike70

It's actually a bit worse - there's a 2% fee paid to liquidity providers, so if you only bet and don't provide liquidity then you lose money on average. Of course you can lose money providing liquidity too if the market moves against you. Anyone can provide liquidity and get a share of that 2%.

Answer by ike40

Probability is in the mind. It's relative to the information you have.

In practical terms, you typically don't have good enough resolution to get individual percentage point precision, unless it's in a quantitative field with well understood processes.

ike90

USDC is a very different thing than tether.

Do you have most of your net worth tied up in Eth, or something other than USD at any rate? If not I don't see how the volatility point could apply.

1Zac Hatfield-Dodds
With the capital I have on hand as a PhD student, there's just no way that running something like Vitalik's pipeline to make money on prediction markets will have a higher excess return-on-hours-worked over holding ETH than my next-best option (which I currently think is a business I'm starting). If I was starting with a larger capital pool, or equivalently a lower hourly rate, I can see how it would be attractive though.