Martin Vlach

If you get an email from aisafetyresearch@gmail.com, that is most likely me. I also read that inbox weekly, so you can pass a message into my mind that way.
Other ~personal contacts: https://linktr.ee/uhuge 

Posts


Wikitag Contributions

Comments

1 · Martin Vlach's Shortform · 3y · 36
Why Future AIs will Require New Alignment Methods
Martin Vlach · 5h · 20

Task duration for software engineering tasks that AIs can complete with 50% success rate (50% time horizon)

This paragraph seems duplicated.

 

medical research doing so in concerning domains

Is an "instead of" missing here?
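The 50% time horizon quoted above can be sketched numerically: fit a logistic curve of success probability against log task duration and find where it crosses 50%. A minimal illustrative sketch, with made-up toy data (this is not METR's actual code):

```python
# Hedged sketch: estimating a "50% time horizon" from (task duration, success)
# pairs by fitting P(success) = sigmoid(a + b*log(duration)) via gradient descent.
import math

def fit_50_percent_horizon(durations_min, successes, steps=5000, lr=0.1):
    """Return the task duration (minutes) where fitted success probability is 50%."""
    xs = [math.log(d) for d in durations_min]
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, successes):
            p = 1 / (1 + math.exp(-(a + b * x)))
            ga += (p - y) / n
            gb += (p - y) * x / n
        a -= lr * ga
        b -= lr * gb
    # sigmoid crosses 0.5 where a + b*log(d) = 0, i.e. d = exp(-a/b)
    return math.exp(-a / b)

# Toy data: short tasks mostly succeed, long tasks mostly fail.
durations = [1, 2, 4, 8, 16, 32, 64, 128]
success   = [1, 1, 1, 1, 0,  1,  0,  0]
print(fit_50_percent_horizon(durations, success))
```

The crossing point lands somewhere between the durations where successes turn into failures; real estimates would use many more tasks and a proper maximum-likelihood fit.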

Martin Vlach's Shortform
Martin Vlach · 16d · 10

My friend's (M.K., he's on GitHub) honorable aim is to establish a term in the AI evals field: the cognitive asymmetry, i.e. the generating-verifying complexity gap for model-as-judge evals.

Desired are tasks with a clear intelligence-to-solve vs. intelligence-to-verify-a-solution gap, i.e. only X00-B LMs have a shot at solving them, but an X-B model is strong at verifying.
It fits nicely into the incremental, iterative alignment-scaling playbook, I hope.
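The proposed eval could be sketched as scoring generation and verification separately, with the gap between the two accuracies as the quantity of interest. All model calls below are hypothetical stubs; a real harness would query actual large and small LMs:

```python
# Hedged sketch of a generating-verifying gap eval (stub models, toy tasks).

def small_model_verify(task, candidate):
    """Stand-in for a small (X-B) model acting as judge/verifier."""
    # Assumption: verification reduces to checking the candidate answer.
    return candidate == task["answer"]

def large_model_generate(task):
    """Stand-in for a large (X00-B) model generating a solution."""
    return task["answer"]  # pretend the large model usually solves the task

def gap_eval(tasks, generate, verify):
    """Score generation and verification separately; their difference
    is the generating-verifying asymmetry of interest."""
    gen_ok = sum(verify(t, generate(t)) for t in tasks)
    ver_ok = sum(verify(t, t["answer"]) for t in tasks)  # verify known-good answers
    n = len(tasks)
    return {"generate_acc": gen_ok / n, "verify_acc": ver_ok / n}

tasks = [{"prompt": "2+2?", "answer": "4"}, {"prompt": "3*3?", "answer": "9"}]
print(gap_eval(tasks, large_model_generate, small_model_verify))
```

A task suite would qualify for the proposed benchmark when a small verifier's accuracy stays high while small generators' accuracy collapses.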

GPT-oss is an extremely stupid model
Martin Vlach · 1mo · 10

I'd bet a "re-based" model à la https://huggingface.co/jxm/gpt-oss-20b-base would, when instruction-tuned, do the same as similarly sized Qwen models.

Project Vend: Can Claude run a small shop?
Martin Vlach · 3mo · 10

It's provided the current time together with ~20k other system-prompt tokens, so that has a substantially more diluted influence on the behaviour..?

So You Think You've Awoken ChatGPT
Martin Vlach · 3mo · 10

Folks like this guy hit it at hyperspeed:

https://www.facebook.com/reel/1130046385837121/?mibextid=rS40aB7S9Ucbxw6v

 

I still remember a university teacher explaining how early TV transmissions would very often include/display ghosts of dead people, especially dead relatives.

As the tech matures from an art, these phenomena, or hallucinations, evaporate.

Energy-Based Transformers are Scalable Learners and Thinkers
Martin Vlach · 3mo · 40

You seem to report one OOM less than this figure: https://alexiglad.github.io/blog/2025/ebt/#:~:text=a%20log%20function).-,Figure%208,-%3A%20Scaling%20for

Open Thread - Summer 2025
Martin Vlach · 4mo · 10

The link to the Induction section on https://www.lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/#induction seems broken in mobile Chrome, @habryka.

Open Thread - Summer 2025
Martin Vlach · 4mo · 20

I've heard that hypothesis in a review of that Anthropic blog post, likely by AI Explained, maybe by bycloud. They've called it "Chekhov's gun".

Open Thread - Summer 2025
Martin Vlach · 4mo · 10

What's your view on skeptical claims about RL on transformer LMs, like https://arxiv.org/abs/2504.13837v2, or the one that CoT instruction yields better results than <thinking> training?

Open Source Search (Summary)
Martin Vlach · 4mo · 10

Not the content I expected to be labeled AI Capabilities, although I see how that'd be vindicated.

By the way, if I write an article about LMs generating SVG, that's plaintext, and if I put an SVG illustration up, that's an image, not plaintext?

4 · Draft: A concise theory of agentic consciousness · 4mo · 2
0 · Thou shalt not command an alighned AI · 5mo · 4
8 · G.D. as Capitalist Evolution, and the claim for humanity's (temporary) upper hand · 5mo · 3
6 · Would it be useful to collect the contexts, where various LLMs think the same? · Q · 2y · 1
1 · Martin Vlach's Shortform · 3y · 36
Zombies · a year ago · (+52/-50)