Cole Wyeth

I am a PhD student in computer science at the University of Waterloo, supervised by Professor Ming Li and advised by Professor Marcus Hutter.

My current research is related to applications of algorithmic probability to sequential decision theory (universal artificial intelligence). Recently I have been trying to start a dialogue between the computational cognitive science and UAI communities. Sometimes I build robots, professionally or otherwise. Another hobby (and a personal favorite of my posts here) is the Sherlockian abduction master list, which is a crowdsourced project seeking to make "Sherlock Holmes" style inference feasible by compiling observational cues. Give it a read and see if you can contribute!

See my personal website colewyeth.com for an overview of my interests and work.

I do ~two types of writing: academic publications and (LessWrong) posts. With the former I try to be careful enough that I can stand by ~all (strong/central) claims in 10 years, usually by presenting a combination of theorems with rigorous proofs and only more conservative intuitive speculation. With the latter, I try to learn enough by writing that I have changed my mind by the time I'm finished - and though I usually include an "epistemic status" to suggest my (final) degree of confidence before posting, the ensuing discussion often changes my mind again. As of mid-2025, I think that the chances of AGI in the next few years are high enough (though still <50%) that it's best to focus on disseminating safety-relevant research as rapidly as possible, so I'm focusing less on long-term goals like academic success and the associated incentives. That means most of my work will appear online in an unpolished form long before it is published.

Sequences

I recklessly speculate about timelines
Meta-theory of rationality
AIXI Agent foundations
Deliberative Algorithms as Scaffolding

Comments
Cole Wyeth's Shortform
Cole Wyeth · 1mo · 22

Semantics; it’s obviously not equivalent to physical violence. 

AI 2027: What Superintelligence Looks Like
Cole Wyeth · 7mo · 68

I expect this to start not happening right away.

So at least we’ll see who’s right soon.

"But You'd Like To Feel Companionate Love, Right? ... Right?"
Cole Wyeth · 8h · 92

I like it relatively better when those values are relatively more aligned with mine, but I still put some weight on people doing their own thing even when I otherwise don’t like it. And third, I am not the sort of person who would try to convince you to pursue values which are not your own (including by self-modifying into someone whose values are not in line with your current values). I might fight you, if your values are sufficiently opposed to mine, but I’m not going to try to convince you that I’m doing you a favor by fighting you. I’m certainly an asshole sometimes, but I at least strive to be an honest asshole.

Well said. I can identify with this part (and it reminds me of MtG's Black). In fact, I would go even further and say that "human values" being maximized by a Singleton forever would importantly fall short of my ideal future.

I basically agree with the rest of the essay, though I certainly feel companionate love. It has a lot of direct and indirect practical benefits (as well as being valuable for its own sake), but also means I have to make tradeoffs to pursue my ambitions (however, my revealed preferences are to follow my ambitions anyway, e.g. moving to Canada to do a PhD). 

Lambda Calculus Prior
Cole Wyeth · 15h · 90

Some quick thoughts:

It appears that under this pair encoding, each new input would be applied as a two-argument function to the existing list, when we presumably want it to be fed to the last element of the list in order to continue the stream of output growing to the right. Otherwise there's effectively still a restriction to finite input, before an infinite output can be produced.

Assuming that problem were fixed:

Under Vanessa’s distribution on terms, only finite terms will be sampled with probability one, so you’d effectively feed in an infinite sequence of finite terms. 

There’s some flexibility on the input distribution preserving universality; see “Learning universal predictors” theorem 9.

If I had to guess, this is probably enough for universality, but I don’t know this. 

Li and Vitányi has a section on concrete models or some such that includes a discrete lambda calculus prior (not for induction).
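For readers unfamiliar with the kind of pair encoding under discussion, here is an illustrative sketch of Church-style pairs written as Python lambdas. This is my own hypothetical example, not the specific encoding from the post:

```python
# Illustrative sketch only: Church-style pair encoding in Python lambdas,
# not the specific encoding from the post under discussion.
pair = lambda a: lambda b: lambda f: f(a)(b)  # encode (a, b) as a function
fst = lambda p: p(lambda a: lambda b: a)      # project the first component
snd = lambda p: p(lambda a: lambda b: b)      # project the second component

# A right-nested "stream" of pairs: (x0, (x1, (x2, rest)))
stream = pair(0)(pair(1)(pair(2)("rest")))
print(fst(stream))       # 0
print(fst(snd(stream)))  # 1
```

The point of contention above is where a new input attaches: applied at the top, it acts as a two-argument function on the whole nested structure, rather than extending the rightmost tail of the stream.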

What's so hard about...? A question worth asking
Cole Wyeth · 2d · 102

Similar to Wentworth’s advice to ask experts what they are mentally tracking.

Turing-Complete vs Turing-Universal
Cole Wyeth · 2d · 50

I think probabilistic lambda calculus is exactly equivalent to monotone Turing machines, so in the continuous case relevant to Solomonoff induction there is no difference. It’s “unfair” to compare the standard lambda calculus to monotone Turing machines because the former does not admit infinite length “programs.”
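The monotone-machine picture can be sketched concretely. This is my own illustrative toy (hypothetical names and transformation, not from the comment): a program that reads an input stream one symbol at a time and extends its output as it goes, so an infinite output arises as the limit of finite output prefixes:

```python
import itertools

def monotone_program(bits):
    # Illustrative sketch: a monotone machine reads its input stream one bit
    # at a time and only ever appends to its output stream, so an infinite
    # output is the limit of the finite prefixes emitted so far.
    for b in bits:
        yield b ^ 1  # toy transformation: flip each bit

# Take a finite prefix of the output on an infinite input stream.
prefix = list(itertools.islice(monotone_program(itertools.cycle([0, 1])), 6))
print(prefix)  # [1, 0, 1, 0, 1, 0]
```

The "unfairness" noted above is that a standard lambda term must be a finite expression applied to its whole argument, whereas a monotone machine's description naturally accommodates this incremental, never-retracted output.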

On the Normativity of Debate: A Discussion With Said Achmiz
Cole Wyeth · 4d · 40

Yea, though I walk through the valley of arguments, I will fear no argument, for I understand that adversarially selected reasoning-chains are memetic hazards. 

Question the Requirements
Cole Wyeth · 4d · 88

Still an awesome visual.

MtG Colour Wheel applied to Politics
Cole Wyeth · 5d · 20

I mean, the color wheel is from Magic: The Gathering, not Duncan Sabien. He just wrote a particular (interesting) take on it. My impression is that Mark Rosewater has had the most influence on shaping the philosophy (or at least on communicating it).

Overview of strong human intelligence amplification methods
Cole Wyeth · 8d · 40

I still have a couple of years left. I have been advocating human brain uploading / emulation and other forms of rationality enhancement with deep learning lately, and might actually work on a related project this spring, but so far I am still mainly focused on agent foundations. 

Posts

23 · Nontrivial pillars of IABIED · 1mo · 3
69 · Alignment as uploading with more steps · 2mo · 33
16 · Sleeping Experts in the (reflective) Solomonoff Prior · 3mo · 0
53 · New Paper on Reflective Oracles & Grain of Truth Problem · 3mo · 0
46 · Launching new AIXI research community website + reading group(s) · 3mo · 2
26 · Pitfalls of Building UDT Agents · 4mo · 5
16 · Explaining your life with self-reflective AIXI (an interlude) · 4mo · 0
29 · Unbounded Embedded Agency: AEDT w.r.t. rOSI · 4mo · 0
19 · A simple explanation of incomplete models · 4mo · 1
67 · Paradigms for computation · 5mo · 10
Wikitag Contributions

AIXI · 10 months ago · (+11/-174)
Anvil Problem · a year ago · (+119)