Johannes C. Mayer

Check out my biography.


Typst is better than LaTeX

I started to use Typst and feel a lot more productive in it. LaTeX feels sluggish, while Typst doesn't slow me down when typing math or code. The most important features for me are the online collaborative editor and the very fast rendering; here is a fuller list:

  • It has an online collaborative editor.
  • It compiles instantly (at least for my main 30-page document)
  • The online editor has Vim support.
  • It's free.
  • It can syntax highlight lots of languages (e.g. LISP and Lean3 are supported).
  • Its embedded scripting language is much easier to use than LaTeX macros.
  • The paid version has Google Doc-style comment support.
  • It's open source and you can compile documents locally, though the online editor is closed source.

Here is a comparison of encoding the Game of Life's update rule in logic:

LaTeX

$$
\forall i, j \in \mathbb{Z}, A_{t+1}(i, j) = \begin{cases}
                                0 &\text{if} \quad A_t(i, j) = 1 \land N_t(i, j) < 2 \\
                                1 &\text{if} \quad A_t(i, j) = 1 \land N_t(i, j) \in \{2, 3\} \\
                                0 &\text{if} \quad A_t(i, j) = 1 \land N_t(i, j) > 3 \\
                                1 &\text{if} \quad A_t(i, j) = 0 \land N_t(i, j) = 3 \\
                                0 &\text{otherwise}
                              \end{cases}
$$

Typst

$
forall i, j in ZZ, A_(t+1)(i, j) = cases(
                                0 "if" A_t(i, j) = 1 and N_t(i, j) < 2 \
                                1 "if" A_t(i, j) = 1 and N_t(i, j) in {2, 3} \
                                0 "if" A_t(i, j) = 1 and N_t(i, j) > 3 \
                                1 "if" A_t(i, j) = 0 and N_t(i, j) = 3 \
                                0 "otherwise")
$

Typst in Emacs Org Mode

Here is some Elisp to treat LaTeX fragments in Emacs Org mode as Typst math when exporting to HTML (they are rendered and embedded as SVG images):

;;;; Typst Exporter
;;; This exporter requires that you have inkscape and typst in your PATH.
;;; Call org-html-export-to-html-with-typst.

;;; TODO
;;; - Signal an error if inkscape or typst is not installed
;;;   (a minimal sketch follows the require lines below).
;;; - Make it show up in the org export dispatcher, so we are not limited to
;;;   exporting only to output.html.
;;; - Automatically set up the HTML header, and possibly also automatically start the server as described in: [[id:d9f72e91-7e8d-426d-af46-037378bc9b15][Setting up org-typst-html-exporter]]
;;; - Make it such that the temporary buffers are deleted after use.


(require 'org)
(require 'ox-html) ; Make sure the HTML backend is loaded
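
;; The following is a minimal sketch of the first TODO item above (it assumes
;; `executable-find' is a sufficient check); it is not yet called anywhere.
(defun org-typst-check-dependencies ()
  "Signal an error if typst or inkscape is missing from PATH."
  (dolist (tool '("typst" "inkscape"))
    (unless (executable-find tool)
      (error "org-typst exporter: `%s' not found in PATH" tool))))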

(defun spawn-trim-svg (svg-file-path output-file-path)
  "Asynchronously crop SVG-FILE-PATH to its drawing area with inkscape, writing OUTPUT-FILE-PATH."
  (start-process svg-file-path
		 nil
		 "inkscape"
		 svg-file-path
		 "--export-area-drawing"
		 "--export-plain-svg"
		 (format "--export-filename=%s" output-file-path)))

(defun correct-dollar-signs (typst-src)
  "Turn the org $$ delimiters around TYPST-SRC into single Typst $ delimiters."
  (replace-regexp-in-string "\\$\\$$"
			    " $" ; replace the trailing $$ with ' $'
			    (replace-regexp-in-string "^\\$\\$" "$ " ; and the leading $$ with '$ '
						      typst-src)))

(defun math-block-p (typst-src)
  "Return non-nil if TYPST-SRC is a $$ ... $$ display block rather than inline math."
  (string-match "^\\$\\$\\(\\(?:.\\|\n\\)*?\\)\\$\\$$" typst-src))

(defun html-image-centered (image-path)
  (format "<div style=\"display: flex; justify-content: center; align-items: center;\">\n<img src=\"%s\" alt=\"Centered Image\">\n</div>" image-path))

(defun html-image-inline (image-path)
  (format " <img hspace=3px src=\"%s\"> " image-path))

(defun spawn-render-typst (file-format input-file output-file)
  (start-process input-file nil "typst" "compile" "-f" file-format input-file output-file))

(defun generate-typst-buffer (typst-source)
  "Given typst-source code, make a buffer with this code and neccesary preamble."
  (let ((buffer (generate-new-buffer (generate-new-buffer-name "tmp-typst-source-buffer"))))
    (with-current-buffer buffer
      (insert "#set text(16pt)\n")
      (insert "#show math.equation: set text(14pt)\n")
      (insert "#set page(width: auto, height: auto)\n")1
      (insert typst-source))
    buffer))
  
(defun embed-math (is-math-block typst-image-path)
  (if is-math-block
      (html-image-centered typst-image-path)
    (html-image-inline typst-image-path)))

(defun generate-math-image (output-path typst-source-file)
  (let* ((raw-typst-render-output (make-temp-file "my-temp-file-2" nil ".svg"))
	 (render-process (spawn-render-typst "svg" typst-source-file raw-typst-render-output)))
    ;; Wait for typst to finish so inkscape does not read a half-written SVG.
    (while (process-live-p render-process)
      (accept-process-output render-process 0.05))
    (spawn-trim-svg raw-typst-render-output output-path)))

(defun my-typst-math (latex-fragment contents info)
  ;; Extract LaTeX source from the fragment's plist
  (let* ((typst-source-raw (org-element-property :value latex-fragment))
	 (is-math-block (math-block-p typst-source-raw))
	 (typst-source (correct-dollar-signs typst-source-raw))
	 (file-format "svg") ;; This is the only supported format.
	 (typst-image-dir "./typst-svg")
	 (typst-buffer (generate-typst-buffer typst-source)) ; buffer of full typst code to render
	 (typst-source-file (make-temp-file "my-temp-file-1" nil ".typ"))
	 ;; Name is unique for every typst source we render to enable caching.
	 (typst-image-path (concat typst-image-dir "/"
				   (secure-hash 'sha256 (with-current-buffer typst-buffer (buffer-string)))
				   "." file-format)))
    ;; Only render if necessary
    (unless (file-exists-p typst-image-path)
      (message "Rendering: %s" typst-source)
      (make-directory typst-image-dir t) ; make sure the image directory exists
      ;; Write the typst code to a file
      (with-current-buffer typst-buffer
	(write-region (point-min) (point-max) typst-source-file))
      (generate-math-image typst-image-path typst-source-file))
    (kill-buffer typst-buffer)
    (embed-math is-math-block typst-image-path)))

(org-export-define-derived-backend 'my-html 'html
    :translate-alist '((latex-fragment . my-typst-math))
    :menu-entry
    '(?M "Export to My HTML"
	((?h "To HTML file" org-html-export-to-html))))

;; Export command that exports through the my-html backend defined above:
(defun org-html-export-to-html-with-typst (&optional async subtreep visible-only body-only ext-plist)
  (interactive)
  (let* ((buffer-file-name (buffer-file-name (window-buffer (minibuffer-selected-window))))
	 (html-output-name (concat (file-name-sans-extension buffer-file-name) ".html")))
    (org-export-to-file 'my-html html-output-name
      async subtreep visible-only body-only ext-plist)))

(setq org-export-backends (remove 'html org-export-backends))
(add-to-list 'org-export-backends 'my-html)

Simply eval this code and then call org-html-export-to-html-with-typst.
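
For example, an inline fragment like $forall x in ZZ$ in the org buffer is compiled by typst, trimmed by inkscape, cached in ./typst-svg/ under the SHA-256 hash of its source, and embedded as an inline image; a $$ ... $$ block is embedded centered in its own div.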

Here is a model of mine that seems related.

[Edit: added epistemic status]
Epistemic status: I have used this successfully in the past and found it helpful. It is relatively easy to do, and the effect is large for me.

I think it is helpful to be able to emotionally detach yourself from your ideas. There is an implicit "concept of I" in our minds. When somebody criticizes this "concept of I", it is painful. If somebody says "You suck", that hurts.

There is an implicit assumption in the mind that this concept of "I" is eternal. This has the effect that when somebody says "You suck", it is actually more like they are saying "You sucked in the past, you suck now, and you will suck, always and forever".

In order to emotionally detach yourself from your ideas, you need to sever the links in your mind between your ideas and this "concept of I". You need to see an idea as an object that is not related to you. Don't see it as "your idea", but just as an idea.

It might help to imagine that there is an idea-generation machine in your brain. That machine makes ideas magically appear in your perception as thoughts. Normally when somebody says "Your idea is dumb", you feel hurt. But now we can translate "Your idea is dumb" to "There is idea-generating machinery in my brain. This machinery has produced some output. Somebody says this output is dumb".

Instead of feeling hurt, you can think "Hmm, the idea-generating machinery in my brain produced an idea that this person thinks is bad. Well maybe they don't understand my idea yet, and they criticize their idea of my idea, and not actually my idea. How can I make them understand?" This thought is a lot harder to have while being busy feeling hurt.

Or "Hmm, this person that I think is very competent thinks this idea is bad, and after thinking about it I agree that this idea is bad. Now how can I change the idea-generating machinery in my brain, such that in the future I will have better ideas?" That thought is a lot harder to have when you think that you yourself are the problem. What is that even supposed to mean that you yourself are the problem? This might not be a meaningful statement, but it is the default interpretation when somebody criticizes you.

The basic idea here is to frame everything without any reference to yourself. It is not me producing a bad plan, but some mechanism that I just happened to observe the output of. In my experience, this not only helps alleviate pain but also makes you think thoughts that are more useful.

Answer by Johannes C. Mayer

Here is what I would do in the hypothetical scenario where I have taken over the world.

  1. Guard against existential risk.
  2. Make sure that every conscious being I have access to is at least comfortable, as a baseline.
  3. Figure out how to safely self-modify, and become much much much ... much stronger.
  4. Deconfuse myself about what consciousness is, such that I can do something like 'maximize positive experiences and minimize negative experiences in the universe' without it going horribly wrong. I expect that this very roughly points in the right direction, and I don't expect that to change after a long reflection, or after getting a better understanding of consciousness.
  5. Optimize hard for what I think is best.

Though this is what I would do in any situation really. It is what I am doing right now. This is what I breathe for, and I won't stop until I am dead.

[EDIT 2023-03-01_17-59: I have recently realized that this is just how one part of my mind feels. The part that feels like me. However, there are tons of other parts in my mind that pull me in different directions. For example, there is one part that wants me to do lots of random improvements to my computer setup, which are fun to do but probably not worth the effort. I have been ignoring these parts in the past, and I think that their grip on me is stronger because I did not take them into account appropriately in my plans.]

I totally agree with this. I expect the majority of early AI researchers fell into this trap. The main problem I am focusing on is how a mind can construct a model of the world in the first place.

Ideally, the goal is to have a system with no unlabeled parameters. That would be the world-modeling system. It would then build a world model that has many unlabeled parameters. By understanding the world-modeler system, you can ensure that the world model has certain properties. E.g. there is some property (which I don't know) that makes the world model not contain dangerous minds.

E.g. imagine the AI is really good at world modeling, and now it models you (you are part of the world) so accurately that you are basically copied into the AI. Now you might try to escape the AI, which would actually be really good, because then you could save the world as a speed intelligence (assuming the model of you were really accurate, which it probably wouldn't be). But if it models another mind (maybe it considers dangerous adversaries), then maybe they could also escape, and they would not be aligned.

By understanding the system, you could put constraints on what world models can be generated, such that no generated world model can contain such dangerous minds, or at least such minds become much less likely.

I propose that a more realistic example would be “classifying images via a ConvNet with 100,000,000 weights” versus “classifying images via 5,000,000 lines of Python code involving 1,000,000 nonsense variable names”. The latter is obviously less inscrutable on the margin but it’s not a huge difference.

Python code is a discrete structure. You can do proofs on it more easily than on a NN. You could try to apply program transformations to it that preserve functional equality, optimizing for some measure of "human-understandable structure". IIRC there are image-classification algorithms that are worse than NNs but much more interpretable, and these algorithms would be at most hundreds of lines of code, I guess (I haven't really looked a lot at them).

Anyway, it’s fine to brainstorm on things like this, but I claim that you can do that brainstorming perfectly well by assuming that the world model is a Bayes net (or use OpenCog AtomSpace, or Soar, or whatever), or even just talk about it generically.

You give examples of recognizing problems. I tried to give examples of how you can solve these problems. I'm not brainstorming on "how could this system fail". Instead I understand something, and then I just notice, without really trying, that now I can do a thing that seems very useful, like making the system not think about human psychology given certain constraints.

Probably I completely failed at making clear why I think that, because my explanation was terrible. In any case, I think your suggested brainstorming is completely different from the thing that I am actually doing.

To me it just seems that limiting the depth of a tree search is better than limiting the compute of a black-box neural network. It seems like you can get a much better grip on what it means to limit the depth, and what this implies about the system's behavior, when you actually understand how tree search works. Of course, tree search here is only an example.
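
To make the kind of grip I mean concrete, here is a minimal sketch (hypothetical names, written in Elisp only because that is the language used elsewhere on this page): the capability knob is an explicit, legible parameter of a search procedure you can read, rather than an opaque compute budget.

(require 'seq)

;; A depth-limited tree search: MAX-DEPTH is a legible capability knob.
(defun limited-tree-search (node goal-p children max-depth)
  "Depth-first search from NODE, never looking deeper than MAX-DEPTH."
  (cond ((funcall goal-p node) node)
        ((<= max-depth 0) nil)              ; the limit cuts search off here
        (t (seq-some (lambda (child)
                       (limited-tree-search child goal-p children (1- max-depth)))
                     (funcall children node)))))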

Here. There is a method you can have. This is just a small piece of what I do. I also probably haven't figured out many important methodological things yet.

Also this is very important.

John's post is quite weird, because it only says true things but implicitly implies a conclusion, namely that NNs are not less interpretable than some other thing, which is totally wrong.

Example: a neural network implements modular arithmetic with Fourier transforms. If you implement that Fourier algorithm in Python, it's harder for a human to understand than the obvious modular-arithmetic implementation in Python.
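
For concreteness, a sketch (in Elisp, as elsewhere on this page; the description of the Fourier-style algorithm follows the published reverse-engineering of such networks):

;; The "obvious" implementation of addition mod n: trivial for a human to read.
(defun modular-add (a b n)
  (mod (+ a b) n))
;; The Fourier-style algorithm found inside trained networks instead encodes a and b
;; as sines and cosines of several frequencies and combines them with trig identities,
;; which is much harder to recognize as modular addition.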

It doesn't matter if the world model is inscrutable when you look directly at it, if you can change the generating code such that certain properties must hold. Figuring out what these properties are is, of course, not directly solved by understanding intelligence.

This is bad because, if AGI is very compute-efficient, then when we have AGI at all, we will have AGI that a great many actors around the world will be able to program and run, and that makes governance very much harder.

Totally agree, so obviously try super hard to not leak the working AGI code if you had it.

But you won’t get insight into those distinctions, or how to ensure them in an AGI, by thinking about whether world-model stuff is stored as connections on graphs versus induction heads or whatever.

No, you can. E.g. if I know enough about the structure of the system, specifically how concepts are stored, I could theoretically define a general algorithm that identifies the minimal concepts necessary for solving a task. That's of course not perfect, but it seems that for very many problems it would make the AI unable to think about things like human manipulation, or about the fact that it is a constrained AI, even if that knowledge was somewhere in a learned black-box world model. This is just an example of something you can do by knowing the structure of a system.

If your system is some plain code with for loops, just reduce the number of iterations the for loops of search processes do. Now decreasing/increasing the iterations somewhat will correspond to making the system dumber/smarter. Again, this obviously doesn't solve the problem completely, but it is clearly a powerful thing to be able to do.

Of course many low-level details do not matter. Often you'd only care that something is a sequence, or a set. I am talking about higher-level program structure.

It feels like you are somewhat missing the point. The goal is to understand how intelligence works. Clearly that would be very useful for alignment, even if you end up with a black-box world model? But of course it would also enable you to think about how to make such a world model more interpretable. I think that is possible; it's just not what I am focusing on now.

I specifically am talking about solving problems that nobody knows the answer to, where you are probably even wrong about what the problem is. I am not talking about taking notes on existing material. I am talking about documenting the process of generating knowledge.

I am saying that I forget important ideas that I generated in the past; probably they are not yet refined enough to be impossible to forget.

A robust alignment scheme would likely be trivial to transform into an AGI recipe.

Perhaps if you had the full solution, but it feels like there are some parts of a solution that you could figure out such that those parts don't tell you that much about the other parts of the solution.

And it also feels like there could be a book such that, if you read it, you would gain a lot of knowledge about how to align AIs without knowing that much more about how to build one. E.g. a theoretical solution to the stop-button problem seems like it would not tell you that much about how to build an AGI, compared to figuring out how to properly learn a world model of Minecraft. And knowing how to build a world model of Minecraft probably helps a lot with solving the stop-button problem, but it doesn't just trivially yield a solution.

If you had a system with "ENTITY 92852384 implies ENTITY 8593483", it would be a lot of progress, as currently in neural networks we don't even understand the internal structures.

I want to have an algorithm that creates a world model. The world is large. A world model is uninterpretable by default through its sheer size, even if you had interpretable but low-level labels. By default we don't get any interpretable labels. I think there are generic data-processing procedures, which don't talk about the human mind at all, that would yield a more interpretable world model. Similar to how you could probably specify some very general property of Python programs such that a program with that property becomes easier for humans to understand. E.g. a formalism of what it means for the control flow to be straightforward: don't use goto in C.

But even if you wouldn't have this, understanding the system still allows you to understand what the structure of the knowledge would be. It seems plausible that, simply by understanding the system very well, one could make the learned data structures take particular shapes, such that these shapes correspond to relevant alignment properties.

In any case, it seems that this is a problem that any possible way of building an intelligence runs into? So I don't think it is a case against the project. When building an AI with NNs, you might not even think about the fact that the internal representations might be weird and alien (even for an LLM trained on human text)[1], but the same problem persists.

  1. ^

    I haven't looked into this, or thought about it at all, though that's what I expect.
