Podcast with Kanjun Qiu and Josh Albrecht (Generally Intelligent)

In June, I did a podcast with my friend Kanjun and Josh from Generally Intelligent. I forgot to fit the announcement into the blog schedule, so here it is now for any of you interested in listening to it.

I mostly sounded like a goofball in the raw tape, but Kanjun and Josh's editing made me sound at least nominally reasonable. Here's one lightly edited excerpt, on the difficulty of communicating novel ideas:

One thing is that a lot of the ideas I was pursuing were a little bit off the wall, at least compared to what someone would do if they were just thinking about things straightforwardly. So I had a lot of early conference rejections that I think were just due to not knowing how to communicate my ideas in a way that would be compelling to reviewers. Of course, it’s always something you’re going to have to learn, but I think it was harder here because there’s not as much of a template. If you’re writing a paper that does better on some benchmark, a bunch of papers like that have been written before, so you have a template you can follow. I had to learn how to write a good paper without having a template.

I think it required me to become a significantly better writer. And I think that helped later on, because it made me feel more comfortable pursuing unusual ideas. I knew I had the skills to present those ideas, so as long as I believed in them, I could get other people to believe in them.

And here's part of a longer excerpt, on how thinking about AI alignment led me to work on robustness:

So the first attempt at this was thinking about how to specify what an AI system should be doing. I was thinking of this from a very traditional computer science-y perspective: okay, well, you often have functions that are supposed to be doing something. You can write down APIs that say what they’re supposed to do, but if you want to do something like automated debugging or verification, you have to actually formally specify what each function is supposed to do. And it turns out that it’s an enormous amount of work, and not even clearly possible, to go from the English description that you see in programming APIs to an actual formal description. So I spent a while thinking about this, but not making much progress on it.

We shifted away from this formal specification, because it seemed way too hard, to just thinking about: okay, what makes software do what you want it to do? Well, if it’s well encapsulated, in the sense that you know when it ought to work and when it ought not to work, that seems good. What are the senses in which machine learning fails to live up to that? One way is that it relies on this contract that your test distribution should be the same as your training distribution.

More at the link.

Jacob Steinhardt
