
Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0

Alessio + swyx
The podcast by and for AI Engineers! In 2023, over 1 million visitors came to Latent Space to hear about news, papers and interviews in Software 3.0. We cover ...

Available episodes

5 of 95
  • Agents @ Work: Lindy.ai
Alessio will be at AWS re:Invent next week and hosting a casual coffee meetup on Wednesday, RSVP here! And subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!

If you've been following the AI agents space, you have heard of Lindy AI; while founder Flo Crivello is hesitant to call it "blowing up," when folks like Andrew Wilkinson start obsessing over your product, you're definitely onto something.

In our latest episode, Flo walked us through Lindy's evolution from late 2022 to now, revealing design choices about agent platforms that go against conventional wisdom in the space.

The Great Reset: From Text Fields to Rails

Remember late 2022? Everyone was "LLM-pilled," believing that if you just gave a language model enough context and tools, it could do anything. Lindy 1.0 followed this pattern:

* Big prompt field ✅
* Bunch of tools ✅
* Prayer to the LLM gods ✅

Fast forward to today, and Lindy 2.0 looks radically different. As Flo put it (~17:00 in the episode): "The more you can put your agent on rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user."

Instead of a giant, intimidating text field, users now build workflows visually:

* Trigger (e.g., "Zendesk ticket received")
* Required actions (e.g., "Check knowledge base")
* Response generation

This isn't just a UI change - it's a fundamental rethinking of how to make AI agents reliable. As Swyx noted during our discussion: "Put Shoggoth in a box and make it a very small, minimal viable box. Everything else should be traditional if-this-then-that software."

The Surprising Truth About Model Limitations

Here's something that might shock folks building in the space: with Claude 3.5 Sonnet, the model is no longer the bottleneck. Flo's exact words (~31:00): "It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small."

Some context: Lindy started when context windows were 4K tokens. Today, their system prompt alone is larger than that. But what's really interesting is what this means for platform builders:

* Raw capabilities aren't the constraint anymore
* Integration quality matters more than model performance
* User experience and workflow design are the new bottlenecks

The Search Engine Parallel: Why Horizontal Platforms Might Win

One of the spiciest takes from our conversation was Flo's thesis on horizontal vs. vertical agent platforms. He draws a fascinating parallel to search engines (~56:00): "I find it surprising the extent to which a horizontal search engine has won... You go through Google to search Reddit. You go through Google to search Wikipedia... search in each vertical has more in common with search than it does with each vertical."

His argument: agent platforms might follow the same pattern because:

* Agents across verticals share more commonalities than differences
* There's value in having agents that can work together under one roof
* The R&D cost of getting agents right is better amortized across use cases

This might explain why we're seeing early vertical AI companies starting to expand horizontally.
The core agent capabilities - reliability, context management, tool integration - are universal needs.

What This Means for Builders

If you're building in the AI agents space, here are the key takeaways:

* Constrain First: Rather than maximizing capabilities, focus on reliable execution within narrow bounds
* Integration Quality Matters: With model capabilities plateauing, your competitive advantage lies in how well you integrate with existing tools
* Memory Management is Key: Flo revealed they actively prune agent memories - even with larger context windows, not all memories are useful
* Design for Discovery: Lindy's visual workflow builder shows how important interface design is for adoption

The Meta Layer

There's a broader lesson here about AI product development. Just as Lindy evolved from "give the LLM everything" to "constrain intelligently," we might see similar evolution across the AI tooling space. The winners might not be those with the most powerful models, but those who best understand how to package AI capabilities in ways that solve real problems reliably.

Full Video Podcast

Flo's talk at AI Engineer Summit

Chapters

* 00:00:00 Introductions
* 00:04:05 AI engineering and deterministic software
* 00:08:36 Lindys demo
* 00:13:21 Memory management in AI agents
* 00:18:48 Hierarchy and collaboration between Lindys
* 00:21:19 Vertical vs. horizontal AI tools
* 00:24:03 Community and user engagement strategies
* 00:26:16 Rickrolling incident with Lindy
* 00:28:12 Evals and quality control in AI systems
* 00:31:52 Model capabilities and their impact on Lindy
* 00:39:27 Competition and market positioning
* 00:42:40 Relationship between Factorio and business strategy
* 00:44:05 Remote work vs. in-person collaboration
* 00:49:03 Europe vs US Tech
* 00:58:59 Testing the Overton window and free speech
* 01:04:20 Balancing AI safety concerns with business innovation

Show Notes

* Lindy.ai
* Rick Rolling
* Flo on X
* TeamFlow
* Andrew Wilkinson
* Dust
* Poolside.ai
* SB1047
* Gathertown
* Sid Sijbrandij
* Matt Mullenweg
* Factorio
* Seeing Like a State

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're joined in the studio by Florent Crivello. Welcome.Flo [00:00:15]: Hey, yeah, thanks for having me.Swyx [00:00:17]: Also known as Altimore. I always wanted to ask, what is Altimore?Flo [00:00:21]: It was the name of my character when I was playing Dungeons & Dragons. Always. I was like 11 years old.Swyx [00:00:26]: What was your classes?Flo [00:00:27]: I was an elf. I was a magician elf.Swyx [00:00:30]: Well, you're still spinning magic. Right now, you're a solo founder and CEO of Lindy.ai. What is Lindy?Flo [00:00:36]: Yeah, we are a no-code platform letting you build your own AI agents easily. So you can think of we are to LangChain as Airtable is to MySQL. Like you can just spin up AI agents super easily by clicking around and no code required. You don't have to be an engineer and you can automate business workflows that you simply could not automate before in a few minutes.Swyx [00:00:55]: You've been in our orbit a few times. I think you spoke at our Latent Space anniversary. You spoke at my summit, the first summit, which was a really good keynote. And most recently, like we actually already scheduled this podcast before this happened. But Andrew Wilkinson was like, I'm obsessed by Lindy. He's just created a whole bunch of agents.
So basically, why are you blowing up?Flo [00:01:16]: Well, thank you. I think we are having a little bit of a moment. I think it's a bit premature to say we're blowing up. But why are things going well? We revamped the product majorly. We called it Lindy 2.0. I would say we started working on that six months ago. We've actually not really announced it yet. It's just, I guess, I guess that's what we're doing now. And so we've basically been cooking for the last six months, like really rebuilding the product from scratch. I think I'll list you, actually, the last time you tried the product, it was still Lindy 1.0. Oh, yeah. If you log in now, the platform looks very different. There's like a ton more features. And I think one realization that we made, and I think a lot of folks in the agent space made the same realization, is that there is such a thing as too much of a good thing. I think many people, when they started working on agents, they were very LLM peeled and chat GPT peeled, right? They got ahead of themselves in a way, and us included, and they thought that agents were actually, and LLMs were actually more advanced than they actually were. And so the first version of Lindy was like just a giant prompt and a bunch of tools. And then the realization we had was like, hey, actually, the more you can put your agent on Rails, one, the more reliable it's going to be, obviously, but two, it's also going to be easier to use for the user, because you can really, as a user, you get, instead of just getting this big, giant, intimidating text field, and you type words in there, and you have no idea if you're typing the right word or not, here you can really click and select step by step, and tell your agent what to do, and really give as narrow or as wide a guardrail as you want for your agent. We started working on that. We called it Lindy on Rails about six months ago, and we started putting it into the hands of users over the last, I would say, two months or so, and I think things really started going pretty well at that point. The agent is way more reliable, way easier to set up, and we're already seeing a ton of new use cases pop up.Swyx [00:03:00]: Yeah, just a quick follow-up on that. You launched the first Lindy in November last year, and you were already talking about having a DSL, right? I remember having this discussion with you, and you were like, it's just much more reliable. Is this still the DSL under the hood? Is this a UI-level change, or is it a bigger rewrite?Flo [00:03:17]: No, it is a much bigger rewrite. I'll give you a concrete example. Suppose you want to have an agent that observes your Zendesk tickets, and it's like, hey, every time you receive a Zendesk ticket, I want you to check my knowledge base, so it's like a RAG module and whatnot, and then answer the ticket. The way it used to work with Lindy before was, you would type the prompt asking it to do that. You check my knowledge base, and so on and so forth. The problem with doing that is that it can always go wrong. You're praying the LLM gods that they will actually invoke your knowledge base, but I don't want to ask it. I want it to always, 100% of the time, consult the knowledge base after it receives a Zendesk ticket. And so with Lindy, you can actually have the trigger, which is Zendesk ticket received, have the knowledge base consult, which is always there, and then have the agent. 
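To make the "on rails" structure Flo describes concrete, here is a minimal sketch in Python. This is not Lindy's actual engine; the Ticket type, the function names, and the stubbed LLM call are all illustrative. The point is only that the knowledge-base lookup is a fixed step in code rather than a tool the model may or may not decide to invoke.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    subject: str
    body: str

def search_knowledge_base(ticket: Ticket) -> list[str]:
    # Required action: always runs; the model cannot decide to skip it.
    # (Stubbed here; a real system would query a vector store or search index.)
    return [f"KB article related to '{ticket.subject}'"]

def draft_reply_with_llm(ticket: Ticket, passages: list[str]) -> str:
    # The only non-deterministic step. Stubbed so the sketch runs offline.
    prompt = (
        "Answer the customer using ONLY these passages:\n"
        + "\n".join(passages)
        + f"\n\nTicket: {ticket.body}"
    )
    return f"<LLM draft based on a {len(prompt)}-character prompt>"

def handle_ticket(ticket: Ticket) -> str:
    """Trigger -> required retrieval -> constrained generation."""
    passages = search_knowledge_base(ticket)  # the rail: consulted by construction
    return draft_reply_with_llm(ticket, passages)

if __name__ == "__main__":
    print(handle_ticket(Ticket("Billing", "I was charged twice this month.")))
```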
So you can really set up your agent any way you want like that.Swyx [00:04:05]: This is something I think about for AI engineering as well, which is the big labs want you to hand over everything in the prompts, and only code of English, and then the smaller brains, the GPU pours, always want to write more code to make things more deterministic and reliable and controllable. One way I put it is put Shoggoth in a box and make it a very small, the minimal viable box. Everything else should be traditional, if this, then that software.Flo [00:04:29]: I love that characterization, put the Shoggoth in the box. Yeah, we talk about using as much AI as necessary and as little as possible.Alessio [00:04:37]: And what was the choosing between kind of like this drag and drop, low code, whatever, super code-driven, maybe like the Lang chains, auto-GPT of the world, and maybe the flip side of it, which you don't really do, it's like just text to agent, it's like build the workflow for me. Like what have you learned actually putting this in front of users and figuring out how much do they actually want to add it versus like how much, you know, kind of like Ruby on Rails instead of Lindy on Rails, it's kind of like, you know, defaults over configuration.Flo [00:05:06]: I actually used to dislike when people said, oh, text is not a great interface. I was like, ah, this is such a mid-take, I think text is awesome. And I've actually come around, I actually sort of agree now that text is really not great. I think for people like you and me, because we sort of have a mental model, okay, when I type a prompt into this text box, this is what it's going to do, it's going to map it to this kind of data structure under the hood and so forth. I guess it's a little bit blackmailing towards humans. You jump on these calls with humans and you're like, here's a text box, this is going to set up an agent for you, do it. And then they type words like, I want you to help me put order in my inbox. Oh, actually, this is a good one. This is actually a good one. What's a bad one? I would say 60 or 70% of the prompts that people type don't mean anything. Me as a human, as AGI, I don't understand what they mean. I don't know what they mean. It is actually, I think whenever you can have a GUI, it is better than to have just a pure text interface.Alessio [00:05:58]: And then how do you decide how much to expose? So even with the tools, you have Slack, you have Google Calendar, you have Gmail. Should people by default just turn over access to everything and then you help them figure out what to use? I think that's the question. When I tried to set up Slack, it was like, hey, give me access to all channels and everything, which for the average person probably makes sense because you don't want to re-prompt them every time you add new channels. But at the same time, for maybe the more sophisticated enterprise use cases, people are like, hey, I want to really limit what you have access to. How do you kind of thread that balance?Flo [00:06:35]: The general philosophy is we ask for the least amount of permissions needed at any given moment. I don't think Slack, I could be mistaken, but I don't think Slack lets you request permissions for just one channel. But for example, for Google, obviously there are hundreds of scopes that you could require for Google. There's a lot of scopes. And sometimes it's actually painful to set up your Lindy because you're going to have to ask Google and add scopes five or six times. 
We've had sessions like this. But that's what we do because, for example, the Lindy email drafter, she's going to ask you for your authorization once for, I need to be able to read your email so I can draft a reply, and then another time for I need to be able to write a draft for them. We just try to do it very incrementally like that.Alessio [00:07:15]: Do you think OAuth is just overall going to change? I think maybe before it was like, hey, we need to set up OAuth that humans only want to kind of do once. So we try to jam-pack things all at once versus what if you could on-demand get different permissions every time from different parts? Do you ever think about designing things knowing that maybe AI will use it instead of humans will use it? Yeah, for sure.Flo [00:07:37]: One pattern we've started to see is people provisioning accounts for their AI agents. And so, in particular, Google Workspace accounts. So, for example, Lindy can be used as a scheduling assistant. So you can just CC her to your emails when you're trying to find time with someone. And just like a human assistant, she's going to go back and forth and offer other abilities and so forth. Very often, people don't want the other party to know that it's an AI. So it's actually funny. They introduce delays. They ask the agent to wait before replying, so it's not too obvious that it's an AI. And they provision an account on Google Suite, which costs them like $10 a month or something like that. So we're seeing that pattern more and more. I think that does the job for now. I'm not optimistic on us actually patching OAuth. Because I agree with you, ultimately, we would want to patch OAuth because the new account thing is kind of a clutch. It's really a hack. You would want to patch OAuth to have more granular access control and really be able to put your sugar in the box. I'm not optimistic on us doing that before AGI, I think. That's a very close timeline.Swyx [00:08:36]: I'm mindful of talking about a thing without showing it. And we already have the setup to show it. Why don't we jump into a screen share? For listeners, you can jump on the YouTube and like and subscribe. But also, let's have a look at how you show off Lindy. Yeah, absolutely.Flo [00:08:51]: I'll give an example of a very simple Lindy and then I'll graduate to a much more complicated one. A super simple Lindy that I have is, I unfortunately bought some investment properties in the south of France. It was a really, really bad idea. And I put them on a Holydew, which is like the French Airbnb, if you will. And so I received these emails from time to time telling me like, oh, hey, you made 200 bucks. Someone booked your place. When I receive these emails, I want to log this reservation in a spreadsheet. Doing this without an AI agent or without AI in general is a pain in the butt because you must write an HTML parser for this email. And so it's just hard. You may not be able to do it and it's going to break the moment the email changes. By contrast, the way it works with Lindy, it's really simple. It's two steps. It's like, okay, I receive an email. If it is a reservation confirmation, I have this filter here. Then I append a row to this spreadsheet. And so this is where you can see the AI part where the way this action is configured here, you see these purple fields on the right. Each of these fields is a prompt. And so I can say, okay, you extract from the email the day the reservation begins on. You extract the amount of the reservation. 
You extract the number of travelers of the reservation. And now you can see when I look at the task history of this Lindy, it's really simple. It's like, okay, you do this and boom, appending this row to this spreadsheet. And this is the information extracted. So effectively, this node here, this append row node is a mini agent. It can see everything that just happened. It has context over the task and it's appending the row. And then it's going to send a reply to the thread. That's a very simple example of an agent.Swyx [00:10:34]: A quick follow-up question on this one while we're still on this page. Is that one call? Is that a structured output call? Yeah. Okay, nice. Yeah.Flo [00:10:41]: And you can see here for every node, you can configure which model you want to power the node. Here I use cloud. For this, I use GPT-4 Turbo. Much more complex example, my meeting recorder. It looks very complex because I've added to it over time, but at a high level, it's really simple. It's like when a meeting begins, you record the meeting. And after the meeting, you send me a summary and you send me coaching notes. So I receive, like my Lindy is constantly coaching me. And so you can see here in the prompt of the coaching notes, I've told it, hey, you know, was I unnecessarily confrontational at any point? I'm French, so I have to watch out for that. Or not confrontational enough. Should I have double-clicked on any issue, right? So I can really give it exactly the kind of coaching that I'm expecting. And then the interesting thing here is, like, you can see the agent here, after it sent me these coaching notes, moves on. And it does a bunch of other stuff. So it goes on Slack. It disseminates the notes on Slack. It does a bunch of other stuff. But it's actually able to backtrack and resume the automation at the coaching notes email if I responded to that email. So I'll give a super concrete example. This is an actual coaching feedback that I received from Lindy. She was like, hey, this was a sales call I had with a customer. And she was like, I found your explanation of Lindy too technical. And I was able to follow up and just ask a follow-up question in the thread here. And I was like, why did you find too technical about my explanation? And Lindy restored the context. And so she basically picked up the automation back up here in the tree. And she has all of the context of everything that happened, including the meeting in which I was. So she was like, oh, you used the words deterministic and context window and agent state. And that concept exists at every level for every channel and every action that Lindy takes. So another example here is, I mentioned she also disseminates the notes on Slack. So this was a meeting where I was not, right? So this was a teammate. He's an indie meeting recorder, posts the meeting notes in this customer discovery channel on Slack. So you can see, okay, this is the onboarding call we had. This was the use case. Look at the questions. How do I make Lindy slower? How do I add delays to make Lindy slower? And I was able, in the Slack thread, to ask follow-up questions like, oh, what did we answer to these questions? And it's really handy because I know I can have this sort of interactive Q&A with these meetings. It means that very often now, I don't go to meetings anymore. I just send my Lindy. And instead of going to like a 60-minute meeting, I have like a five-minute chat with my Lindy afterwards. And she just replied. 
She was like, well, this is what we replied to this customer. And I can just be like, okay, good job, Jack. Like, no notes about your answers. So that's the kind of use cases people have with Lindy. It's a lot of like, there's a lot of sales automations, customer support automations, and a lot of this, which is basically personal assistance automations, like meeting scheduling and so forth.Alessio [00:13:21]: Yeah, and I think the question that people might have is memory. So as you get coaching, how does it track whether or not you're improving? You know, if these are like mistakes you made in the past, like, how do you think about that?Flo [00:13:31]: Yeah, we have a memory module. So I'll show you my meeting scheduler, Lindy, which has a lot of memories because by now I've used her for so long. And so every time I talk to her, she saves a memory. If I tell her, you screwed up, please don't do this. So you can see here, oh, it's got a double memory here. This is the meeting link I have, or this is the address of the office. If I tell someone to meet me at home, this is the address of my place. This is the code. I guess we'll have to edit that out. This is not the code of my place. No dogs. Yeah, so Lindy can just manage her own memory and decide when she's remembering things between executions. Okay.Swyx [00:14:11]: I mean, I'm just going to take the opportunity to ask you, since you are the creator of this thing, how come there's so few memories, right? Like, if you've been using this for two years, there should be thousands and thousands of things. That is a good question.Flo [00:14:22]: Agents still get confused if they have too many memories, to my point earlier about that. So I just am out of a call with a member of the Llama team at Meta, and we were chatting about Lindy, and we were going into the system prompt that we sent to Lindy, and all of that stuff. And he was amazed, and he was like, it's a miracle that it's working, guys. He was like, this kind of system prompt, this does not exist, either pre-training or post-training. These models were never trained to do this kind of stuff. It's a miracle that they can be agents at all. And so what I do, I actually prune the memories. You know, it's actually something I've gotten into the habit of doing from back when we had GPT-3.5 powering Lindy agents. I suspect it's probably not as necessary in the Claude 3.5 Sonnet days, but I prune the memories. Yeah, okay.Swyx [00:15:05]: The reason is because I have another assistant that also is recording and trying to come up with facts about me. It comes up with a lot of trivial, useless facts that I... So I spend most of my time pruning. Actually, it's not super useful. I'd much rather have high-quality facts that it accepts. Or maybe I was even thinking, were you ever tempted to add a wake word to only memorize this when I say memorize this? And otherwise, don't even bother.Flo [00:15:30]: I have a Lindy that does this. So this is my inbox processor, Lindy. It's kind of beefy because there's a lot of different emails. But somewhere in here,Swyx [00:15:38]: there is a rule where I'm like,Flo [00:15:39]: aha, I can email my inbox processor, Lindy. It's really handy. So she has her own email address. And so when I process my email inbox, I sometimes forward an email to her. And it's a newsletter, or it's like a cold outreach from a recruiter that I don't care about, or anything like that. And I can give her a rule. And I can be like, hey, this email I want you to archive, moving forward. 
Or I want you to alert me on Slack when I have this kind of email. It's really important. And so you can see here, the prompt is, if I give you a rule about a kind of email, like archive emails from X, save it as a new memory. And I give it to the memory saving skill. And yeah.Swyx [00:16:13]: One thing that just occurred to me, so I'm a big fan of virtual mailboxes. I recommend that everybody have a virtual mailbox. You could set up a physical mail receive thing for Lindy. And so then Lindy can process your physical mail.Flo [00:16:26]: That's actually a good idea. I actually already have something like that. I use like health class mail. Yeah. So yeah, most likely, I can process my physical mail. Yeah.Swyx [00:16:35]: And then the other product's idea I have, looking at this thing, is people want to brag about the complexity of their Lindys. So this would be like a 65 point Lindy, right?Flo [00:16:43]: What's a 65 point?Swyx [00:16:44]: Complexity counting. Like how many nodes, how many things, how many conditions, right? Yeah.Flo [00:16:49]: This is not the most complex one. I have another one. This designer recruiter here is kind of beefy as well. Right, right, right. So I'm just saying,Swyx [00:16:56]: let people brag. Let people be super users. Oh, right.Flo [00:16:59]: Give them a score. Give them a score.Swyx [00:17:01]: Then they'll just be like, okay, how high can you make this score?Flo [00:17:04]: Yeah, that's a good point. And I think that's, again, the beauty of this on-rails phenomenon. It's like, think of the equivalent, the prompt equivalent of this Lindy here, for example, that we're looking at. It'd be monstrous. And the odds that it gets it right are so low. But here, because we're really holding the agent's hand step by step by step, it's actually super reliable. Yeah.Swyx [00:17:22]: And is it all structured output-based? Yeah. As far as possible? Basically. Like, there's no non-structured output?Flo [00:17:27]: There is. So, for example, here, this AI agent step, right, or this send message step, sometimes it gets to... That's just plain text.Swyx [00:17:35]: That's right.Flo [00:17:36]: Yeah. So I'll give you an example. Maybe it's TMI. I'm having blood pressure issues these days. And so this Lindy here, I give it my blood pressure readings, and it updates a log that I have of my blood pressure that it sends to my doctor.Swyx [00:17:49]: Oh, so every Lindy comes with a to-do list?Flo [00:17:52]: Yeah. Every Lindy has its own task history. Huh. Yeah. And so you can see here, this is my main Lindy, my personal assistant, and I've told it, where is this? There is a point where I'm like, if I am giving you a health-related fact, right here, I'm giving you health information, so then you update this log that I have in this Google Doc, and then you send me a message. And you can see, I've actually not configured this send message node. I haven't told it what to send me a message for. Right? And you can see, it's actually lecturing me. It's like, I'm giving it my blood pressure ratings. It's like, hey, it's a bit high. Here are some lifestyle changes you may want to consider.Alessio [00:18:27]: I think maybe this is the most confusing or new thing for people. So even I use Lindy and I didn't even know you could have multiple workflows in one Lindy. I think the mental model is kind of like the Zapier workflows. It starts and it ends. It doesn't choose between. How do you think about what's a Lindy versus what's a sub-function of a Lindy? 
Like, what's the hierarchy?Flo [00:18:48]: Yeah. Frankly, I think the line is a little arbitrary. It's kind of like when you code, like when do you start to create a new class versus when do you overload your current class. I think of it in terms of like jobs to be done and I think of it in terms of who is the Lindy serving. This Lindy is serving me personally. It's really my day-to-day Lindy. I give it a bunch of stuff, like very easy tasks. And so this is just the Lindy I go to. Sometimes when a task is really more specialized, so for example, I have this like summarizer Lindy or this designer recruiter Lindy. These tasks are really beefy. I wouldn't want to add this to my main Lindy, so I just created a separate Lindy for it. Or when it's a Lindy that serves another constituency, like our customer support Lindy, I don't want to add that to my personal assistant Lindy. These are two very different Lindys.Alessio [00:19:31]: And you can call a Lindy from within another Lindy. That's right. You can kind of chain them together.Flo [00:19:36]: Lindys can work together, absolutely.Swyx [00:19:38]: A couple more things for the video portion. I noticed you have a podcast follower. We have to ask about that. What is that?Flo [00:19:46]: So this one wakes me up every... So wakes herself up every week. And she sends me... So she woke up yesterday, actually. And she searches for Lenny's podcast. And she looks for like the latest episode on YouTube. And once she finds it, she transcribes the video and then she sends me the summary by email. I don't listen to podcasts as much anymore. I just like read these summaries. Yeah.Alessio [00:20:09]: We should make a latent space Lindy. Marketplace.Swyx [00:20:12]: Yeah. And then you have a whole bunch of connectors. I saw the list briefly. Any interesting one? Complicated one that you're proud of? Anything that you want to just share? Connector stories.Flo [00:20:23]: So many of our workflows are about meeting scheduling. So we had to build some very open unity tools around meeting scheduling. So for example, one that is surprisingly hard is this find available times action. You would not believe... This is like a thousand lines of code or something. It's just a very beefy action. And you can pass it a bunch of parameters about how long is the meeting? When does it start? When does it end? What are the meetings? The weekdays in which I meet? How many time slots do you return? What's the buffer between my meetings? It's just a very, very, very complex action. I really like our GitHub action. So we have a Lindy PR reviewer. And it's really handy because anytime any bug happens... So the Lindy reads our guidelines on Google Docs. By now, the guidelines are like 40 pages long or something. And so every time any new kind of bug happens, we just go to the guideline and we add the lines. Like, hey, this has happened before. Please watch out for this category of bugs. And it's saving us so much time every day.Alessio [00:21:19]: There's companies doing PR reviews. Where does a Lindy start? When does a company start? Or maybe how do you think about the complexity of these tasks when it's going to be worth having kind of like a vertical standalone company versus just like, hey, a Lindy is going to do a good job 99% of the time?Flo [00:21:34]: That's a good question. We think about this one all the time. I can't say that we've really come up with a very crisp articulation of when do you want to use a vertical tool versus when do you want to use a horizontal tool. 
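Backing up to the find-available-times action Flo mentioned: the production version is reportedly around a thousand lines, but even a toy sketch (illustrative only; the parameter names are made up) shows why the parameter surface gets large, with duration, the search window, allowed weekdays, buffers around existing meetings, and a cap on returned slots.

```python
from datetime import datetime, timedelta

def find_available_times(
    busy: list[tuple[datetime, datetime]],        # existing meetings
    window_start: datetime,
    window_end: datetime,
    duration: timedelta = timedelta(minutes=30),
    buffer: timedelta = timedelta(minutes=10),    # padding around existing meetings
    weekdays: frozenset[int] = frozenset({0, 1, 2, 3, 4}),  # Mon-Fri
    max_slots: int = 5,
) -> list[datetime]:
    """Toy slot finder: walk the window in 15-minute steps and keep start times
    whose padded interval overlaps no existing meeting."""
    slots: list[datetime] = []
    step = timedelta(minutes=15)
    t = window_start
    while t + duration <= window_end and len(slots) < max_slots:
        padded_start, padded_end = t - buffer, t + duration + buffer
        clash = any(s < padded_end and padded_start < e for s, e in busy)
        if t.weekday() in weekdays and not clash:
            slots.append(t)
        t += step
    return slots

if __name__ == "__main__":
    day = datetime(2024, 11, 25, 9, 0)  # a Monday
    busy = [(day.replace(hour=10), day.replace(hour=11))]
    for slot in find_available_times(busy, day, day.replace(hour=17)):
        print(slot.isoformat())
```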
I think of it as very similar to the internet. I find it surprising the extent to which a horizontal search engine has won. But I think that Google, right? But I think the even more surprising fact is that the horizontal search engine has won in almost every vertical, right? You go through Google to search Reddit. You go through Google to search Wikipedia. I think maybe the biggest exception is e-commerce. Like you go to Amazon to search e-commerce, but otherwise you go through Google. And I think that the reason for that is because search in each vertical has more in common with search than it does with each vertical. And search is so expensive to get right. Like Google is a big company that it makes a lot of sense to aggregate all of these different use cases and to spread your R&D budget across all of these different use cases. I have a thesis, which is, it's a really cool thesis for Lindy, is that the same thing is true for agents. I think that by and large, in a lot of verticals, agents in each vertical have more in common with agents than they do with each vertical. I also think there are benefits in having a single agent platform because that way your agents can work together. They're all like under one roof. That way you only learn one platform and so you can create agents for everything that you want. And you don't have to like pay for like a bunch of different platforms and so forth. So I think ultimately, it is actually going to shake out in a way that is similar to search in that search is everywhere on the internet. Every website has a search box, right? So there's going to be a lot of vertical agents for everything. I think AI is going to completely penetrate every category of software. But then I also think there are going to be a few very, very, very big horizontal agents that serve a lot of functions for people.Swyx [00:23:14]: That is actually one of the questions that we had about the agent stuff. So I guess we can transition away from the screen and I'll just ask the follow-up, which is, that is a hot topic. You're basically saying that the current VC obsession of the day, which is vertical AI enabled SaaS, is mostly not going to work out. And then there are going to be some super giant horizontal SaaS.Flo [00:23:34]: Oh, no, I'm not saying it's either or. Like SaaS today, vertical SaaS is huge and there's also a lot of horizontal platforms. If you look at like Airtable or Notion, basically the entire no-code space is very horizontal. I mean, Loom and Zoom and Slack, there's a lot of very horizontal tools out there. Okay.Swyx [00:23:49]: I was just trying to get a reaction out of you for hot takes. Trying to get a hot take.Flo [00:23:54]: No, I also think it is natural for the vertical solutions to emerge first because it's just easier to build. It's just much, much, much harder to build something horizontal. Cool.Swyx [00:24:03]: Some more Lindy-specific questions. So we covered most of the top use cases and you have an academy. That was nice to see. I also see some other people doing it for you for free. So like Ben Spites is doing it and then there's some other guy who's also doing like lessons. Yeah. Which is kind of nice, right? Yeah, absolutely. You don't have to do any of that.Flo [00:24:20]: Oh, we've been seeing it more and more on like LinkedIn and Twitter, like people posting their Lindys and so forth.Swyx [00:24:24]: I think that's the flywheel that you built the platform where creators see value in allying themselves to you. 
And so then, you know, your incentive is to make them successful so that they can make other people successful and then it just drives more and more engagement. Like it's earned media. Like you don't have to do anything.Flo [00:24:39]: Yeah, yeah. I mean, community is everything.Swyx [00:24:41]: Are you doing anything special there? Any big wins?Flo [00:24:44]: We have a Slack community that's pretty active. I can't say we've invested much more than that so far.Swyx [00:24:49]: I would say from having, so I have some involvement in the no-code community. I would say that Webflow going very hard after no-code as a category got them a lot more allies than just the people using Webflow. So it helps you to grow the community beyond just Lindy. And I don't know what this is called. Maybe it's just no-code again. Maybe you want to call it something different. But there's definitely an appetite for this and you are one of a broad category, right? Like just before you, we had Dust and, you know, they're also kind of going after a similar market. Zapier obviously is not going to try to also compete with you. Yeah. There's no question there. It's just like a reaction about community. Like I think a lot about community. Latent Space is growing the community of AI engineers. And I think you have a slightly different audience of, I don't know what.Flo [00:25:33]: Yeah. I think the no-code tinkerers is the community. Yeah. It is going to be the same sort of community as what Webflow, Zapier, Airtable, Notion to some extent.Swyx [00:25:43]: Yeah. The framing can be different if you were, so I think tinkerers has this connotation of not serious or like small. And if you framed it to like no-code EA, we're exclusively only for CEOs with a certain budget, then you just have, you tap into a different budget.Flo [00:25:58]: That's true. The problem with EA is like, the CEO has no willingness to actually tinker and play with the platform.Swyx [00:26:05]: Maybe Andrew's doing that. Like a lot of your biggest advocates are CEOs, right?Flo [00:26:09]: A solopreneur, you know, small business owners, I think Andrew is an exception. Yeah. Yeah, yeah, he is.Swyx [00:26:14]: He's an exception in many ways. Yep.Alessio [00:26:16]: Just before we wrap on the use cases, is Rick rolling your customers? Like an officially supported use case or maybe tell that story?Flo [00:26:24]: It's one of the main jobs to be done, really. Yeah, we woke up recently, so we have a Lindy obviously doing our customer support and we do check after the Lindy. And so we caught this email exchange where someone was asking Lindy for video tutorials. And at the time, actually, we did not have video tutorials. We do now on the Lindy Academy. And Lindy responded to the email. It's like, oh, absolutely, here's a link. And we were like, what? Like, what kind of link did you send? And so we clicked on the link and it was a Rickroll. We actually reacted fast enough that the customer had not yet opened the email. And so we reacted immediately. Like, oh, hey, actually, sorry, this is the right link. And so the customer never reacted to the first link. And so, yeah, I tweeted about that. It went surprisingly viral. And I checked afterwards in the logs. We did like a database query and we found, I think, like three or four other instances of it having happened before.Swyx [00:27:12]: That's surprisingly low.Flo [00:27:13]: It is low. 
And we fixed it across the board by just adding a line to the system prompt that's like, hey, don't Rickroll people, please don't Rickroll.Swyx [00:27:21]: Yeah, yeah, yeah. I mean, so, you know, you can explain it retroactively, right? Like, that YouTube slug has been pasted in so many different corpuses that obviously it learned to hallucinate that.Alessio [00:27:31]: And it pretended to be so many things. That's the thing.Swyx [00:27:34]: I wouldn't be surprised if that takes one token. Like, there's this one slug in the tokenizer and it's just one token.Flo [00:27:41]: That's the ID of a YouTube video.Swyx [00:27:43]: Because it's used so much, right? And you have to basically get it exactly correct. It's probably not. That's a long speech.Flo [00:27:52]: It would have been so good.Alessio [00:27:55]: So this is just a jump maybe into evals from here. How could you possibly come up with an eval that says, make sure my AI does not Rickroll my customer? I feel like when people are writing evals, that's not something that they come up with. So how do you think about evals when it's such like an open-ended problem space?Flo [00:28:12]: Yeah, it is tough. We built quite a bit of infrastructure for us to create evals in one click from any conversation history. So we can point to a conversation and we can be like, in one click we can turn it into effectively a unit test. It's like, this is a good conversation. This is how you're supposed to handle things like this. Or if it's a negative example, then we modify a little bit the conversation after generating the eval. So it's very easy for us to spin up this kind of eval.Alessio [00:28:36]: Do you use an off-the-shelf tool which is like Braintrust on the podcast? Or did you just build your own?Flo [00:28:41]: We unfortunately built our own. We're most likely going to switch to Braintrust. Well, when we built it, there was nothing. Like there was no eval tool, frankly. I mean, we started this project at the end of 2022. It was like, it was very, very, very early. I wouldn't recommend it to build your own eval tool. There's better solutions out there and our eval tool breaks all the time and it's a nightmare to maintain. And that's not something we want to be spending our time on.Swyx [00:29:04]: I was going to ask that basically because I think my first conversations with you about Lindy was that you had a strong opinion that everyone should build their own tools. And you were very proud of your evals. You're kind of showing off to me like how many evals you were running, right?Flo [00:29:16]: Yeah, I think that was before all of these tools came around. I think the ecosystem has matured a fair bit.Swyx [00:29:21]: What is one thing that Braintrust has nailed that you always struggled to do?Flo [00:29:25]: We're not using them yet, so I couldn't tell. But from what I've gathered from the conversations I've had, like they're doing what we do with our eval tool, but better.Swyx [00:29:33]: And like they do it, but also like 60 other companies do it, right? So I don't know how to shop apart from brand. Word of mouth.Flo [00:29:41]: Same here.Swyx [00:29:42]: Yeah, like evals or Lindys, there's two kinds of evals, right? Like in some way, you don't have to eval your system as much because you've constrained the language model so much. And you can rely on OpenAI to guarantee that the structured outputs are going to be good, right? 
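For readers wondering what "each field is a prompt" plus a single structured-output call looks like in practice, here is a rough sketch in the spirit of the reservation-email demo from earlier. The schema, field names, and model choice are invented for the example, and it assumes the JSON-schema response format of the current OpenAI Python SDK; Lindy's own implementation is not public.

```python
import json
from openai import OpenAI  # assumes an OpenAI-compatible structured-output API

# Each extracted column is a short prompt plus a typed field in one schema;
# the model is constrained to return exactly these fields.
RESERVATION_SCHEMA = {
    "name": "reservation",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "check_in_date": {"type": "string", "description": "Day the reservation begins, ISO format"},
            "amount_eur": {"type": "number", "description": "Total amount of the reservation in euros"},
            "num_travelers": {"type": "integer", "description": "Number of travelers"},
        },
        "required": ["check_in_date", "amount_eur", "num_travelers"],
        "additionalProperties": False,
    },
}

def extract_reservation(email_body: str) -> dict:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Extract booking details from the email."},
            {"role": "user", "content": email_body},
        ],
        response_format={"type": "json_schema", "json_schema": RESERVATION_SCHEMA},
    )
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    print(extract_reservation("You earned 200 EUR! 2 travelers arrive on 2024-07-14."))
```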
We had Michelle sit where you sit and she explained exactly how they do constraint grammar sampling and all that good stuff. So actually, I think it's more important for your customers to eval their Lindys than you evaling your Lindy platform because you just built the platform. You don't actually need to eval that much.Flo [00:30:14]: Yeah. In an ideal world, our customers don't need to care about this. And I think the bar is not like, look, it needs to be at 100%. I think the bar is it needs to be better than a human. And for most use cases we serve today, it is better than a human, especially if you put it on Rails.Swyx [00:30:30]: Is there a limiting factor of Lindy at the business? Like, is it adding new connectors? Is it adding new node types? Like how do you prioritize what is the most impactful to your company?Flo [00:30:41]: Yeah. The raw capabilities for sure are a big limit. It is actually shocking the extent to which the model is no longer the limit. It was the limit a year ago. It was too expensive. The context window was too small. It's kind of insane that we started building this when the context windows were like 4,000 tokens. Like today, our system prompt is more than 4,000 tokens. So yeah, the model is actually very much not a limit anymore. It almost gives me pause because I'm like, I want the model to be a limit. And so no, the integrations are ones, the core capabilities are ones. So for example, we are investing in a system that's basically, I call it like the, it's a J hack. Give me these names, like the poor man's RLHF. So you can turn on a toggle on any step of your Lindy workflow to be like, ask me for confirmation before you actually execute this step. So it's like, hey, I receive an email, you send a reply, ask me for confirmation before actually sending it. And so today you see the email that's about to get sent and you can either approve, deny, or change it and then approve. And we are making it so that when you make a change, we are then saving this change that you're making or embedding it in the vector database. And then we are retrieving these examples for future tasks and injecting them into the context window. So that's the kind of capability that makes a huge difference for users. That's the bottleneck today. It's really like good old engineering and product work.Swyx [00:31:52]: I assume you're hiring. We'll do a call for hiring at the end.Alessio [00:31:54]: Any other comments on the model side? When did you start feeling like the model was not a bottleneck anymore? Was it 4.0? Was it 3.5? 3.5.Flo [00:32:04]: 3.5 Sonnet, definitely. I think 4.0 is overhyped, frankly. We don't use 4.0. I don't think it's good for agentic behavior. Yeah, 3.5 Sonnet is when I started feeling that. And then with prompt caching with 3.5 Sonnet, like that fills the cost, cut the cost again. Just cut it in half. Yeah.Swyx [00:32:21]: Your prompts are... Some of the problems with agentic uses is that your prompts are kind of dynamic, right? Like from caching to work, you need the front prefix portion to be stable.Flo [00:32:32]: Yes, but we have this append-only ledger paradigm. So every node keeps appending to that ledger and every filled node inherits all the context built up by all the previous nodes. And so we can just decide, like, hey, every X thousand nodes, we trigger prompt caching again.Swyx [00:32:47]: Oh, so you do it like programmatically, not all the time.Flo [00:32:50]: No, sorry. Anthropic manages that for us. 
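A minimal sketch of that append-only ledger idea (hypothetical code, not Lindy's): because nodes only ever append to one shared message list, the prefix stays byte-for-byte stable from step to step, which is what provider-side prompt caching needs in order to reuse earlier work.

```python
# Hypothetical append-only context ledger. Nodes never edit or reorder earlier
# entries, so the prefix of the message list is identical across steps and a
# provider-side prompt cache can keep hitting it.

class ContextLedger:
    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def append(self, role: str, content: str) -> None:
        # Append-only: downstream nodes inherit everything built up so far.
        self.messages.append({"role": role, "content": content})

    def as_prompt(self) -> list[dict]:
        # A real implementation would mark the stable prefix as cacheable
        # (e.g. Anthropic's cache_control content blocks) before sending it.
        return list(self.messages)

if __name__ == "__main__":
    ledger = ContextLedger("You are an email assistant.")
    ledger.append("user", "Trigger: reservation email received.")
    ledger.append("assistant", "Extracted: 2 travelers, 200 EUR, 2024-07-14.")
    ledger.append("user", "Now append a row to the spreadsheet.")
    print(len(ledger.as_prompt()), "messages in the shared context")
```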
But basically, it's like, because we keep appending to the prompt, the prompt caching works pretty well.Alessio [00:32:57]: We have this small podcaster tool that I built for the podcast and I rewrote all of our prompts because I noticed, you know, I was inputting stuff early on. I wonder how much more money OpenAN and Anthropic are making just because people don't rewrite their prompts to be like static at the top and like dynamic at the bottom.Flo [00:33:13]: I think that's the remarkable thing about what we're having right now. It's insane that these companies are routinely cutting their costs by two, four, five. Like, they basically just apply constraints. They want people to take advantage of these innovations. Very good.Swyx [00:33:25]: Do you have any other competitive commentary? Commentary? Dust, WordWare, Gumloop, Zapier? If not, we can move on.Flo [00:33:31]: No comment.Alessio [00:33:32]: I think the market is,Flo [00:33:33]: look, I mean, AGI is coming. All right, that's what I'm talking about.Swyx [00:33:38]: I think you're helping. Like, you're paving the road to AGI.Flo [00:33:41]: I'm playing my small role. I'm adding my small brick to this giant, giant, giant castle. Yeah, look, when it's here, we are going to, this entire category of software is going to create, it's going to sound like an exaggeration, but it is a fact it is going to create trillions of dollars of value in a few years, right? It's going to, for the first time, we're actually having software directly replace human labor. I see it every day in sales calls. It's like, Lindy is today replacing, like, we talk to even small teams. It's like, oh, like, stop, this is a 12-people team here. I guess we'll set up this Lindy for one or two days, and then we'll have to decide what to do with this 12-people team. And so, yeah. To me, there's this immense uncapped market opportunity. It's just such a huge ocean, and there's like three sharks in the ocean. I'm focused on the ocean more than on the sharks.Swyx [00:34:25]: So we're moving on to hot topics, like, kind of broadening out from Lindy, but obviously informed by Lindy. What are the high-order bits of good agent design?Flo [00:34:31]: The model, the model, the model, the model. I think people fail to truly, and me included, they fail to truly internalize the bitter lesson. So for the listeners out there who don't know about it, it's basically like, you just scale the model. Like, GPUs go brr, it's all that matters. I think it also holds for the cognitive architecture. I used to be very cognitive architecture-filled, and I was like, ah, and I was like a critic, and I was like a generator, and all this, and then it's just like, GPUs go brr, like, just like let the model do its job. I think we're seeing it a little bit right now with O1. I'm seeing some tweets that say that the new 3.5 SONNET is as good as O1, but with none of all the crazy...Swyx [00:35:09]: It beats O1 on some measures. On some reasoning tasks. On AIME, it's still a lot lower. Like, it's like 14 on AIME versus O1, it's like 83.Flo [00:35:17]: Got it. Right. But even O1 is still the model. Yeah.Swyx [00:35:22]: Like, there's no cognitive architecture on top of it.Flo [00:35:23]: You can just wait for O1 to get better.Alessio [00:35:25]: And so, as a founder, how do you think about that, right? Because now, knowing this, wouldn't you just wait to start Lindy? You know, you start Lindy, it's like 4K context, the models are not that good. 
It's like, but you're still kind of like going along and building and just like waiting for the models to get better. How do you today decide, again, what to build next, knowing that, hey, the models are going to get better, so maybe we just shouldn't focus on improving our prompt design and all that stuff and just build the connectors instead or whatever? Yeah.Flo [00:35:51]: I mean, that's exactly what we do. Like, all day, we always ask ourselves, oh, when we have a feature idea or a feature request, we ask ourselves, like, is this the kind of thing that just gets better while we sleep because models get better? I'm reminded, again, when we started this in 2022, we spent a lot of time because we had to around context pruning because 4,000 tokens is really nothing. You really can't do anything with 4,000 tokens. All that work was throwaway work. Like, now it's like it was for nothing, right? Now we just assume that infinite context windows are going to be here in a year or something, a year and a half, and infinitely cheap as well, and dynamic compute is going to be here. Like, we just assume all of these things are going to happen, and so we really focus, our job to be done in the industry is to provide the input and output to the model. I really compare it all the time to the PC and the CPU, right? Apple is busy all day. They're not like a CPU wrapper. They have a lot to build, but they don't, well, now actually they do build the CPU as well, but leaving that aside, they're busy building a laptop. It's just a lot of work to build these things. It's interesting because, like,Swyx [00:36:45]: for example, another person that we're close to, Mihaly from Repl.it, he often says that the biggest jump for him was having a multi-agent approach, like the critique thing that you just said that you don't need, and I wonder when, in what situations you do need that and what situations you don't. Obviously, the simple answer is for coding, it helps, and you're not coding, except for, are you still generating code? In Indy? Yeah.Flo [00:37:09]: No, we do. Oh, right. No, no, no, the cognitive architecture changed. We don't, yeah.Swyx [00:37:13]: Yeah, okay. For you, you're one shot, and you chain tools together, and that's it. And if the user really wantsFlo [00:37:18]: to have this kind of critique thing, you can also edit the prompt, you're welcome to. I have some of my Lindys, I've told them, like, hey, be careful, think step by step about what you're about to do, but that gives you a little bump for some use cases, but, yeah.Alessio [00:37:30]: What about unexpected model releases? So, Anthropic released computer use today. Yeah. I don't know if many people were expecting computer use to come out today. Do these things make you rethink how to design, like, your roadmap and things like that, or are you just like, hey, look, whatever, that's just, like, a small thing in their, like, AGI pursuit, that, like, maybe they're not even going to support, and, like, it's still better for us to build our own integrations into systems and things like that. Because maybe people will say, hey, look, why am I building all these API integrationsFlo [00:38:02]: when I can just do computer use and never go to the product? Yeah. No, I mean, we did take into account computer use. We were talking about this a year ago or something, like, we've been talking about it as part of our roadmap. 
It's been clear to us that it was coming, My philosophy about it is anything that can be done with an API must be done by an API or should be done by an API for a very long time. I think it is dangerous to be overly cavalier about improvements of model capabilities. I'm reminded of iOS versus Android. Android was built on the JVM. There was a garbage collector, and I can only assume that the conversation that went down in the engineering meeting room was, oh, who cares about the garbage collector? Anyway, Moore's law is here, and so that's all going to go to zero eventually. Sure, but in the meantime, you are operating on a 400 MHz CPU. It was like the first CPU on the iPhone 1, and it's really slow, and the garbage collector is introducing a tremendous overhead on top of that, especially a memory overhead. For the longest time, and it's really only been recently that Android caught up to iOS in terms of how smooth the interactions were, but for the longest time, Android phones were significantly slowerSwyx [00:39:07]: and laggierFlo [00:39:08]: and just not feeling as good as iOS devices. Look, when you're talking about modules and magnitude of differences in terms of performance and reliability, which is what we are talking about when we're talking about API use versus computer use, then you can't ignore that, right? And so I think we're going to be in an API use world for a while.Swyx [00:39:27]: O1 doesn't have API use today. It will have it at some point, and it's on the roadmap. There is a future in which OpenAI goes much harder after your business, your market, than it is today. Like, ChatGPT, it's its own business. All they need to do is add tools to the ChatGPT, and now they're suddenly competing with you. And by the way, they have a GPT store where a bunch of people have already configured their tools to fit with them. Is that a concern?Flo [00:39:56]: I think even the GPT store, in a way, like the way they architect it, for example, their plug-in systems are actually grateful because we can also use the plug-ins. It's very open. Now, again, I think it's going to be such a huge market. I think there's going to be a lot of different jobs to be done. I know they have a huge enterprise offering and stuff, but today, ChatGPT is a consumer app. And so, the sort of flow detail I showed you, this sort of workflow, this sort of use cases that we're going after, which is like, we're doing a lot of lead generation and lead outreach and all of that stuff. That's not something like meeting recording, like Lindy Today right now joins your Zoom meetings and takes notes, all of that stuff.Swyx [00:40:34]: I don't see that so farFlo [00:40:35]: on the OpenAI roadmap.Swyx [00:40:36]: Yeah, but they do have an enterprise team that we talk to You're hiring GMs?Flo [00:40:42]: We did.Swyx [00:40:43]: It's a fascinating way to build a business, right? Like, what should you, as CEO, be in charge of? And what should you basically hireFlo [00:40:52]: a mini CEO to do? Yeah, that's a good question. I think that's also something we're figuring out. The GM thing was inspired from my days at Uber, where we hired one GM per city or per major geo area. We had like all GMs, regional GMs and so forth. And yeah, Lindy is so horizontal that we thought it made sense to hire GMs to own each vertical and the go-to market of the vertical and the customization of the Lindy templates for these verticals and so forth. What should I own as a CEO? 
I mean, the canonical reply here is always going to be, you know, you own the fundraising, you own the culture, you own the... What's the rest of the canonical reply? The culture, the fundraising.Swyx [00:41:29]: I don't know,Flo [00:41:30]: products. Even that, eventually, you do have to hand off. Yes, the vision, the culture, and the foundation. Well, you've done your job as a CEO. In practice, obviously, yeah, I mean, all day, I do a lot of product work still and I want to keep doing product work for as long as possible.Swyx [00:41:48]: Obviously, like you're recruiting and managing the team. Yeah.Flo [00:41:52]: That one feels like the most automatable part of the job, the recruiting stuff.Swyx [00:41:56]: Well, yeah. You saw myFlo [00:41:59]: design your recruiter here. What's the relationship between Factorio and building Lindy? We actually very often talk about how the business of the future is like a game of Factorio. Yeah. So, for instance, it's like Slack and you've got like 5,000 Lindys in the sidebar and your job is to somehow manage your 5,000 Lindys. And it's going to be very similar to company building because you're going to look for like the highest leverage way to understand what's going on in your AI company and understand what levers you have to make impact in that company. So, I think it's going to be very similar to like a human company except it's going to go infinitely faster. Today, in a human company, you could have a meeting with your team and you're like, oh, I'm going to build a facility and, you know, now it's like, okay,Swyx [00:42:40]: boom, I'm going to spin up 50 designers. Yeah. Like, actually, it's more important that you can clone an existing designer that you know works because the hiring process, you cannot clone someone because every new person you bring in is going to have their own tweaksFlo [00:42:54]: and you don't want that. Yeah.Swyx [00:42:56]: That's true. You want an army of mindless dronesFlo [00:42:59]: that all work the same way.Swyx [00:43:00]: The reason I bring this, bring Factorio up as well is one, Factorio: Space Age just came out. Apparently, a whole bunch of people stopped working. I tried out Factorio. I never really got that much into it. But the other thing was, you had a tweet recently about how the sort of intentional top-down design was not as effective as just build. Yeah. Just ship.Flo [00:43:21]: I think people read a little bit too much into that tweet. It went weirdly viral. I was like, I did not intend it as a giant statement online.Swyx [00:43:28]: I mean, you notice you have a pattern with this, right? Like, you've done this for eight years now.Flo [00:43:33]: You should know. I legit was just sharing an interesting story about the Factorio game I had. And everybody was like, oh my God, so deep. I guess this explains everything about life and companies. There is something to be said, certainly, about focusing on the constraint. And I think it is Patrick Collison who said, people underestimate the extent to which moonshots are just one pragmatic step taken after the other. And I think as long as you have some inductive bias about, like, some loose idea about where you want to go, I think it makes sense to follow a sort of greedy search along that path. I think planning and organizing is important. And having order is important.Swyx [00:44:05]: I'm wrestling with that. There's two ways I encountered it recently. One with Lindy. 
When I tried out one of your automation templates and one of them was quite big and I just didn't understand it, right? So, like, it was not as useful to me as a small one that I can just plug in and see all of. And then the other one was me using Cursor. I was very excited about O1 and I just up frontFlo [00:44:27]: stuffed everythingSwyx [00:44:28]: I wanted to do into my prompt and expected O1 to do everything. And it got itself into a huge jumbled mess and it was stuck. It was really... There was no amount... I wasted, like, two hours on just, like, trying to get out of that hole. So I threw away the code base, started small, switched to Claude Sonnet and built up something working and just added to it over time and it just worked. And to me, that was the Factorio sentiment, right? Maybe I'm one of those fanboys that's just, like, obsessing over the depth of something that you just randomly tweeted out. But I think it's true for company building, for Lindy building, for coding.Flo [00:45:02]: I don't know. I think it's fair and I think, like, you and I talked about there's the Tuft & Metal principle and there's this other... Yes, I love that. There's the... I forgot the name of this other blog post but it's basically about this book Seeing Like a State that talks about the need for legibility and people who optimize the system for its legibility and anytime you make a system... So legible is basically more understandable. Anytime you make a system more understandable from the top down, it performs less well from the bottom up. And it's fine but you should at least make this trade-off with your eyes wide open. You should know, I am sacrificing performance for understandability, for legibility. And in this case, for you, it makes sense. It's like you are actually optimizing for legibility. You do want to understand your code base but in some other cases it may not make sense. Sometimes it's better to leave the system alone and let it be its glorious, chaotic, organic self and just trust that it's going to perform well even though you don't understand it completely.Swyx [00:45:55]: It does remind me of a common managerial issue or dilemma which you experienced in the small scale of Lindy where, you know, do you want to organize your company by functional sections or by products or, you know, whatever the opposite of functional is. And you tried it one way and it was more legible to you as CEO but actually it stopped working at the small level. Yeah.Flo [00:46:17]: I mean, one very small example, again, at a small scale is we used to have everything on Notion. And for me, as founder, it was awesome because everything was there. The roadmap was there. The tasks were there. The postmortems were there. And so, the postmortem was linkedSwyx [00:46:31]: to its task.Flo [00:46:32]: It was optimized for you. Exactly. And so, I had this, like, one pane of glass and everything was on Notion. And then the team, one day,Swyx [00:46:39]: came to me with pitchforksFlo [00:46:40]: and they really wanted to implement Linear. And I had to bite my fist so hard. I was like, fine, do it. Implement Linear. Because I was like, at the end of the day, the team needs to be able to self-organize and pick their own tools.Alessio [00:46:51]: Yeah. But it did make the company slightly less legible for me. Another big change you had was going away from remote work. Every other month, the discussion comes up again. What was that discussion like? How did your feelings change? 
Was there kind of like a threshold of employees and team size where you felt like, okay, maybe that worked. Now it doesn't work anymore. And how are you thinking about the futureFlo [00:47:12]: as you scale the team? Yeah. So, for context, I used to have a business called TeamFlow. The business was about building a virtual office for remote teams. And so, being remote was not merely something we did. It was, I was banging the remote drum super hard and helping companies to go remote. And so, frankly, in a way, it's a bit embarrassing for me to do a 180 like that. But I guess, when the facts changed, I changed my mind. What happened? Well, I think at first, like everyone else, we went remote by necessity. It was like COVID and you've got to go remote. And on paper, the gains of remote are enormous. In particular, from a founder's standpoint, being able to hire from anywhere is huge. Saving on rent is huge. Saving on commute is huge for everyone and so forth. But then, look, we're all here. It's like, it is really making it much harder to work together. And I spent three years of my youth trying to build a solution for this. And my conclusion is, at least we couldn't figure it out and no one else could. Zoom didn't figure it out. We had like a bunch of competitors. Like, Gathertown was one of the bigger ones. We had dozens and dozens of competitors. No one figured it out. I don't know that software can actually solve this problem. The reality of it is, everyone just wants to get off the darn Zoom call. And it's not a good feeling to be in your home office, if you're even going to have a home office, all day. It's harder to build culture. It's harder to get in sync. I think software is peculiar because it's like an iceberg. It's like the vast majority of it is submerged underwater. And so, the quality of the software that you ship is a function of the alignment of your mental models about what is below that waterline. Can you actually get in sync about what it is exactly fundamentally that we're building? What is the soul of our product? And it is so much harder to get in sync about that when you're remote. And then you waste time in a thousand ways because people are offline and you can't get a hold of them or you can't share your screen. It's just like you feel like you're walking in molasses all day. And eventually, I was like, okay, this is it. We're not going to do this anymore.Swyx [00:49:03]: Yeah. I think that is the current builder San Francisco consensus here. Yeah. But I still have a big... One of my big heroes as a CEO is Sid Sijbrandij from GitLab.Flo [00:49:14]: Mm-hmm.Swyx [00:49:15]: Matt MullenwegFlo [00:49:16]: used to be a hero.Swyx [00:49:17]: But these people run thousand-person remote businesses. The main idea is that at some company size, your company is remote anyway. Yeah. Because if you go from one building to two buildings, congrats, you're now remote from the other building. If you want to go from one city office to two city offices, they're remote from each other.Flo [00:49:35]: But the teams are co-located. Every time anyone talks about remote success stories, they always talk about the same four. Yeah. It's always GitLab and WordPress and Zapier. Zapier. It used to be InVision. And I will point out that in every one of these examples, you have a co-located counterfactual that is sometimes orders of magnitude bigger. Look, I like Matt Mullenweg a lot, but WordPress is a commercial failure. 
They run 60% of the internet and they're like a fraction of the size of even Substack. Right?Swyx [00:50:05]: They're trying to get more money.Flo [00:50:07]: Yeah, that's my point, right? Look, GitLab is much smaller than GitHub. InVision, you know, is no more. And Figma, like, completely took off. And Figma was like very in-person. So, I think if you're optimizing for productivity, if you really know, hey, this is a support ticket, right, and I want to have my support ticket for a buck 50 per support ticket and next year I want it for a buck 20, then sure, send your support ticket team offshore, like the Philippines or whatever, and just optimize for cost. If you're optimizing for cost, absolutely be remote. If you're optimizing for creativity, which I think that software and product building is a creative endeavor, if you're optimizing for creativity, it's kind of like you have to be in person and hear the music to do that.Swyx [00:50:52]: Yeah. Maybe the line is that all jobs that can be remote should be AI or Lindys and all jobs that are not remote are in person. Like, there's a very,Flo [00:51:04]: very clear separation of jobs. Sure. Well, I think over the long term,Swyx [00:51:09]: every job is going to be AI anyway. It would be curious to break down what you think is creativity in coding and in product defining and how to express that for sure. You're definitely what I call a temperature zero use case of LLMs. You want it to be reliable, predictable, small. And then there's other use cases of LLMs that are more for creativity and engines. Right? I haven't checked, but I'm pretty sure no one uses Lindy for brainstorming. Actually,Flo [00:51:36]: probably they do. I use Lindy for brainstormingSwyx [00:51:38]: a lot, actually. Yeah, yeah. But you want to have something that's anti-fragile to hallucination. Hallucinations are good.Flo [00:51:45]: By creativity, I mean, is it about direction or magnitude? If it is about direction, like decide what to do, then it's a creative endeavor. If it is about magnitude and just do it as fast as possible, as cheap as possible, then it's magnitude. And so sometimes, you know, software companies are not necessarily creative. Sometimes you know what you're doing. And I'll say that it's going to come across the wrong way, but Linear. I look up to them a huge amount, like such amazing product builders, but they know what they're building. They're building a... I don't mean to throw shade at them. Like, good for them.Swyx [00:52:20]: I think they're aware that they're not like... They recently got s**t for saying that they have work-life balance on their job description.Flo [00:52:26]: They're like, what do you mean by this? We're building a new kind of product that no one's ever built before. And so we're just scratching our heads all day trying to get in sync about like, what exactly is itSwyx [00:52:37]: that we're building? What does it consist of? Inherently creative struggle. Yeah. Dare we ask about San Francisco? And there's a whole bunch of tough stuff in here. Probably the biggest one I would just congratulate you on is becoming American, right? Very French, but your heart was sort of in the U.S. You eventually found your way here. What are your takes for founders? A few years ago, you wrote this post on Go West, young man. And now you've basically completed that journey, right? 
You're now here, to the point where you're kind of mystified by how Europe has been so decel.Flo [00:53:11]: In a way, though, I feel vindicated because I was making the prediction that Europe was over 14 years ago or something like that. I think it's been a walking corpse for a long time. I think it is only now becoming obvious that it is paying the consequences of its policies from 10, 20, 30 years ago. I think at this point, I wish I could rewrite the Go West, young man article but really even more extreme. I think at this point, if you are in tech, especially in AI, but if you're in tech and you're not in San Francisco, you either lack judgment or you lack ambition. It's funny, I recently told that to someone and they were like, oh, not everyone wants to be like a unicorn founder. And I was like, like I said, judgment or ambition. It's fine to not have ambition. It's fine to want to prioritize other things than your company in life or your career in life. That's perfectly okay. But know that that's the trade-off you're making. If you prioritize your career, you've got to be here.Alessio [00:54:03]: As a fellow European escapist, I grew up in Rome.Flo [00:54:05]: Yeah, how do you feel?Swyx [00:54:06]: We never talk about your feelings about Europe.Alessio [00:54:08]: Yeah, I've been in the U.S. now six years. Well, I started my first company in Europe 10 years ago, something like that. Yeah, you can tell nobody really wants to do much. And then you're like, okay. It's funny, I was looking back through some old tweets and I was sending all these tweets to Marc Andreessen like 15 years ago, like trying to learn more about why are you guys putting money in these things that most people here would say you're crazy to even back. And eventually, you know, I started doing venture five, six years ago. And I think just like so many people in Europe reach out and ask, hey, can you talk to our team, and they just cannot comprehend the risk appetite that people have here. It's just so foreign to people, at least in Italy and in some parts of Europe. I'm sure there are some great founders in Europe, but the average European founder is like, why would I leave my job at the post office to go work on a startup that could change everything and become very successful, but might go out of business? Instead, in the U.S., you have, like, you know, we host a hackathon and it's like 400 people and it's like, where can I go work where there's, like, no job security, you know? It's just completely different and there's no incentive from the government to change that. There's no way you can change such a deep-rooted culture of, like, you know, going out for wine and Aperol spritzFlo [00:55:27]: and all of thatAlessio [00:55:28]: early in the afternoon.Flo [00:55:29]: So, I don't really know how it's going to change.Alessio [00:55:32]: It's quality of life. Yeah, totally. That's why I left. The quality is so high that I left. But again, I think it's better to move here and just, if you want to do this job and do this, you should be here. If you don't want to, that's fine.Flo [00:55:47]: But like,Alessio [00:55:48]: don't copium. Don't be like, oh no, you can also be successful doing this there or, like, wherever. No, probably not, you know? So,Flo [00:55:59]: yeah,Alessio [00:56:00]: I've already done my N400Flo [00:56:01]: so I should get my U.S. citizenship interview soon. Yeah. 
And I think, to be fair, what's happening right now to Europe is that they've said no to capitalism. They decided to say no to capitalism a long time ago. They've completely over-regulated. Taxation is much too high and so forth. But I also think some of this is a little bit of a self-fulfilling prophecy or it's a self-perpetuating phenomenon because, look, to your point, once there is a network effect, that's just so incredibly powerful, it can't be broken, really. And we tried with San Francisco. I tried with San Francisco. Like during COVID,Swyx [00:56:35]: there was a movement of people moving to Miami.Flo [00:56:38]: How did that pan out? You can't break the network effect,Swyx [00:56:41]: you know? It's so annoying because first principles wise, tech should not be here. Like tech should be in Miami because it's just a better city.Flo [00:56:48]: San Francisco does not want tech to be here.Swyx [00:56:50]: San Francisco hates tech.Flo [00:56:51]: 100%.Swyx [00:56:52]: This is the thing I actually wrote down.Alessio [00:56:54]: San Francisco hates tech. It is true. I think the people that are in San Francisco that were here before tech hated it and then there's kind of like this passed down thing. But I would say people in Miami would hate it too if there were too much of it. You know? The Miami Beach crowd would also not gel.Swyx [00:57:08]: They're just rich enough and chill enough to not care.Flo [00:57:10]: Yeah, I think so too.Swyx [00:57:11]: They're like, oh, crypto kids.Flo [00:57:13]: Okay, cool. Yeah. Miami celebrates success which is one thingSwyx [00:57:17]: I loved about it.Flo [00:57:18]: A little bit too much.Swyx [00:57:19]: Maybe the last thing I'll mention, I just wanted a little bit of EUAC talk. I think that's good. I'll maybe carve out that I think the UK has done really well. That's maybe an argument for the UK not being part of Europe: the AI institutions there, at least, have done very well. Right?Flo [00:57:34]: Sure. I think a lot of Britain is in the gutter. Yeah, exactly.Swyx [00:57:38]: They've been stagnating at best. And then France has a few wins.Flo [00:57:41]: Who?Swyx [00:57:42]: Mistral.Flo [00:57:43]: Who uses Mistral?Swyx [00:57:44]: Hugging Face.Flo [00:57:45]: A few wins.Swyx [00:57:46]: I'm just saying. They just appointed their first AI minister. You know the meme with the guyFlo [00:57:51]: who's celebrating with his trophy and then he's like, no, that's France. Right? To me, that's France. It's like, aha, look, we've got Mistral! It's like champagne! It's like maybe 1% of market share. And by the way, and it's not a critique of them, it's a critique of France and of Europe. And by the way, I think I've heard that the Mistral guys were moving to the US. They're opening an office here. They're opening an office here. But, I mean,Swyx [00:58:15]: they're very French, right?Flo [00:58:16]: Right.Swyx [00:58:17]: You can't really avoid it. There's one interesting counter move which is Jason Warner and Eiso Kant moving to Paris for poolside. I don't know. It remains to be seen how that move is going. Maybe the last thing I'll say, you know, that's the Europe talk. We try not to do politics so much, but you're here. One thing that you do a lot is you test the Overton window. Right? Like far more than any founder I know. You know it's not your job. Some of it, for sure, you're just indulging. But also, I think you consciously test. And I just want to see what drives you there and why do you keep doing it? 
Because you tweet very spicy stuff, especially for like the San Francisco sort of liberal dynasty.Flo [00:58:59]: I don't know, because I assume you're referring to... I posted something about pronouns and how nonsense...Swyx [00:59:05]: Just in general. I don't want you to focus on any particular thing unless you want to.Flo [00:59:09]: You know, well, that tweet in particular, when I was tweeting it, I was like, oh, this is kind of spicy. Should I do this? And then I just did it. And I received zero pushback.Swyx [00:59:20]: And the tweet was actuallyFlo [00:59:21]: pretty successful and I received a lot of people reaching out like, oh my God, so true. I think it's coming from a few different places. One, life is more fun this way. Like, if everyone always self-censors, you never know what anyone thinks. And so it's becoming like a self-perpetuating thing. It's like a public lies, private truths sort of phenomenon. Or like, you know, there's this phenomenon called the preference cascade. It's like, there's this joke. It's like, oh, there's only one communist left in the USSR. The problem is no one knows which one it is. So everyone pretends to be communist because everyone else pretends to be communist. And so I think there's a role to be played here: I don't have a boss who's going to fire me. It's like, look, if I don't speak up and if founders don't speak up, I'm like, why? What are you afraid of? Right? Like, there's really not that much downside. And I think there'sSwyx [01:00:14]: something to be said about standing up for what you think is right and being real and owning your opinions. I think there's a correlation there between having that level of independence for your political beliefs and free speech or whatever and the wayFlo [01:00:27]: that you think about business too. But I think there's such a powerful insight at its core, which is groupthink is real and pervasive and really problematic. Like, your brain constantly shuts down because you're not even thinking any other way, or you're not thinking at all. You just look around you and you decide to adopt the same beliefs as people around you. And everyone thinksSwyx [01:00:48]: they're immuneFlo [01:00:49]: and everyone elseSwyx [01:00:50]: is doing itFlo [01:00:51]: except themselves. I'm a special snowflake. I have free will. That's right. And so I actually make it a point to look for that, and then I think about it and I'm like, do I believe this thing? And very often the answer is yes. And then I just say it. And so I think AI safety is an example of that. Like, at some point, Marc Andreessen blocked me on Twitter and it hurt, frankly. I really look up to Marc AndreessenSwyx [01:01:13]: and I knew he would block me. It means you're successful on Twitter.Flo [01:01:17]: It's just the right message. Marc Andreessen was really my booster initially on Twitter. He really made my account. And I was like, look, I'm really concerned about AI safety. It is an unpopular viewSwyx [01:01:27]: among my peers. I remember, you were one of the few that actually came out in support of the bill.Flo [01:01:32]: I came out in support of SB1047. A year and a half ago, I put out some tweet storms about how I was really concerned. And yeah, I was blocked by a bunch of AI safety people and I don't like it, but you know, it's funny, maybe it's my French education. 
But look, in France, World War II is very present in people's minds and the phenomenon of people collaborating with the Nazis, and there's always this sort of debate that people have at dinner, like, ah, would you really have resisted during World War II? And everybody always says, oh yeah, we totally would have resisted. It's like, yeah, but no. The reality of it is 95% of the country did not resist and most of it actually collaborated actively with the Nazis. And so 95% of y'all are wrong. You would actually have collaborated, right? I've always told myself I will stand up for what I think is right, because some people got attacked, and the way I was brought up is if someone gets attacked in front of you, you get involved. It doesn't matter, you get involved and you help the person, right? And so, look, I'm not pretending we're anywhere near a World War II phenomenon, but exactly because we are nowhere nearAlessio [01:02:45]: this kind of phenomenon. The stakes are so low and if you're not going to stand upFlo [01:02:49]: for what you think is right when the stakes are so low,Swyx [01:02:52]: are you going to stand up when it matters? There's an inconsistency in your statements because you simultaneously believe that AGI is very soon and you also say stakes are low. You can't believe both are real.Flo [01:03:03]: Well, why does AGI make the stakes of speaking up higher?Swyx [01:03:06]: Sorry, the stakes of safety.Flo [01:03:08]: Oh yeah, no, the stakes of AISwyx [01:03:11]: are like physical safety?Flo [01:03:12]: No, AI safety. Oh no, the stakes of AI safety couldn't be higher.Swyx [01:03:17]: I meant the stakesFlo [01:03:18]: of speaking up aboutAlessio [01:03:19]: pronouns or whatever. How do you figure out who's real and who isn't? Because there was a manifesto for responsible AI that hundreds of VCs and people signed and I don't think anybody actually thinks about it anymore.Flo [01:03:30]: Was that the pause letter?Swyx [01:03:31]: The six-month pause?Flo [01:03:32]: No,Alessio [01:03:33]: there was something else that I think General Catalyst and some funds signed. And then there's maybe the Anthropic case which is like, hey, we're leaving OpenAI because you guys don't take safety seriously, and then it's like, hey, what if we gave AI access to a whole computerFlo [01:03:49]: to just go do things?Alessio [01:03:50]: How do you reconcile that? I mean, you could say the same thing about Lindy. It's like, if you're worried about AI safety, why are you building AI? Right? That's kind of like the extreme thinking. How do you internally decide between participation and talking about it and saying, hey, I think this is important but I'm still going to build towards that, and building actually makes it safer because I'm involved, versus just being anti: I think this is unsafe, but then not doing anything about it and just kind of removing yourselfFlo [01:04:20]: from the whole thing. What I think about our own involvement here is I'm acutely concerned about the risks at the model layer and I'm simultaneously very excited about the upside. Like, for the record, my p(doom), insofar as I can quantify it, which I cannot, but if I had to, like my vibe is like 10% or something like that, and so there's like a 90% chance that we live in like a pure utopia. Right? And that's awesome. Right? So like, let's go after utopia. Right? Let's talk about the 90% chance that we live in a utopia where there's no disease and it's like a post-scarcity world. 
I think that utopia is going to happen through, like again, I'm bringing my little contribution to the movement. I think it would be silly to say no to the upside because you're concerned about the downside. At the same time, we want to be concerned about the downside. I know that it's very self-serving to say, oh, you know, the downside doesn't exist at my layer, it exists at the model layer. But truly, look at Lindy, look at the app we're building. I struggle to see exactly how it would, like, blow up, if my concern is at the model layer.Swyx [01:05:21]: Okay. Well, this kind of discussion can go on for hours. It is still daylight, so not the best time for it. But I really appreciate you spending the time. Any other last calls to action or thoughts that you feel like you want to get off your chest?Flo [01:05:33]: AGI is coming.Flo [01:05:37]: Are you hiringAlessio [01:05:38]: for any roles? We are.Flo [01:05:40]: Oh yeah, I guess that should be the...Swyx [01:05:43]: Don't bother.Flo [01:05:44]: No, can you stop saying AGI is coming and just talk about it? We are also hiring, yeah, we are hiring designers and engineers right now. Yeah. So hit me up at flo.lindy.aiAlessio [01:05:55]: And then go talk to my Lindy. You're not actually going to read it.Flo [01:05:58]: Actually, I have wonderedSwyx [01:05:59]: how many times when I talk to you, I'm talking to a bot. Part of that is I don't have to know, right?Flo [01:06:05]: That's right. Well, it's actually doubly confusing because we also have a teammateSwyx [01:06:09]: whose name is Lindy. Yes, I was wondering when I met her, I was like, wait, did you hire her first?Flo [01:06:14]: Marketing is fun. No, she was an inspiration after we named the company both after her. Oh, okay.Swyx [01:06:19]: Interesting. Yeah, wonderful. I'll comment on the design piece just because I think that there are a lot of AI companies that very much focus on the functionality and the models and the capabilities and the benchmarks. But I think that increasingly I'm seeing people differentiate with design, and people want to use beautiful products, and people who can figure that out can integrate the AI into their human lives. You know, design at the limit. One, at the lowest level, is to make this look pretty, make this look like Stripe or Linear's homepage. That's design. But at the highest level of design it is make this integrate seamlessly into my life. Intuitive, beautiful, inspirational maybe even. And I think that companies that, you know, this is kind of like a blog post I've been thinking about, companies that emphasize design actually are going to win more than companies that don't. Yeah,Flo [01:07:06]: I love Steve Jobs' quote and I'm going to butcher it. It's something like, design is the expression of the soul of a man-made product through successive layers of design. Jesus. Right? He was good. He was cooking. He was cooking on that one. He was cooking. It starts with the soul of the product, which is why I was saying it is so important to reach alignment about that soul of the product, right? It's like an onion, you peel the onion in those layers, right? And you design an entire journey, just like the user experiencing your product chronologically, all the way from the beginning, like the awareness stage. I think it is also the job of the designer to design that part of the experience. It's like, okay, design is immensely important. Okay.Alessio [01:07:46]: Lovely. Yeah.Flo [01:07:48]: Thanks for coming on, Flo. Yeah, absolutely. 
Thanks for having me. Get full access to Latent Space at www.latent.space/subscribe
    --------  
    1:09:53
  • Agents @ Work: Dust.tt
    We are recording our next big recap episode and taking questions! Submit questions and messages on Speakpipe here for a chance to appear on the show!Also subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!In our first ever episode with Logan Kilpatrick we called out the two hottest LLM frameworks at the time: LangChain and Dust. We’ve had Harrison from LangChain on twice (as a guest and as a co-host), and we’ve now finally come full circle as Stanislas from Dust joined us in the studio.After stints at Oracle and Stripe, Stan had joined OpenAI to work on mathematical reasoning capabilities. He describes his time at OpenAI as "the PhD I always wanted to do" while acknowledging the challenges of research work: "You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, 'oh, yeah, that was obvious.' And you go back to digging." This experience, combined with early access to GPT-4's capabilities, shaped his decision to start Dust: "If we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down."The History of DustDust's journey can be broken down into three phases:* Developer Framework (2022): Initially positioned as a competitor to LangChain, Dust started as a developer tooling platform. While both were open source, their approaches differed – LangChain focused on broad community adoption and integration as a pure developer experience, while Dust emphasized UI-driven development and better observability that wasn’t just `print` statements.* Browser Extension (Early 2023): The company pivoted to building XP1, a browser extension that could interact with web content. This experiment helped validate user interaction patterns with AI, even while using less capable models than GPT-4.* Enterprise Platform (Current): Today, Dust has evolved into an infrastructure platform for deploying AI agents within companies, with impressive metrics like 88% daily active users in some deployments.The Case for Being HorizontalThe big discussion for early stage companies today is whether or not to be horizontal or vertical. Since models are so good at general tasks, a lot of companies are building vertical products that take care of a workflow end-to-end in order to offer more value and becoming more of “Services as Software”. Dust on the other hand is a platform for the users to build their own experiences, which has had a few advantages:* Maximum Penetration: Dust reports 60-70% weekly active users across entire companies, demonstrating the potential reach of horizontal solutions rather than selling into a single team.* Emergent Use Cases: By allowing non-technical users to create agents, Dust enables use cases to emerge organically from actual business needs rather than prescribed solutions.* Infrastructure Value: The platform approach creates lasting value through maintained integrations and connections, similar to how Stripe's value lies in maintaining payment infrastructure. Rather than relying on third-party integration providers, Dust maintains its own connections to ensure proper handling of different data types and structures.The Vertical ChallengeHowever, this approach comes with trade-offs:* Harder Go-to-Market: As Stan talked about: "We spike at penetration... but it makes our go-to-market much harder. 
Vertical solutions have a go-to-market that is much easier because they're like, 'oh, I'm going to solve the lawyer stuff.'"* Complex Infrastructure: Building a horizontal platform requires maintaining numerous integrations and handling diverse data types appropriately – from structured Salesforce data to unstructured Notion pages. As you scale integrations, the cost of maintaining them also scales. * Product Surface Complexity: Creating an interface that's both powerful and accessible to non-technical users requires careful design decisions, down to avoiding technical terms like "system prompt" in favor of "instructions." The Future of AI PlatformsStan initially predicted we'd see the first billion-dollar single-person company in 2023 (a prediction later echoed by Sam Altman), but he's now more focused on a different milestone: billion-dollar companies with engineering teams of just 20 people, enabled by AI assistance.This vision aligns with Dust's horizontal platform approach – building the infrastructure that allows small teams to achieve outsized impact through AI augmentation. Rather than replacing entire job functions (the vertical approach), they're betting on augmenting existing workflows across organizations.Full YouTube EpisodeChapters* 00:00:00 Introductions* 00:04:33 Joining OpenAI from Paris* 00:09:54 Research evolution and compute allocation at OpenAI* 00:13:12 Working with Ilya Sutskever and OpenAI's vision* 00:15:51 Leaving OpenAI to start Dust* 00:18:15 Early focus on browser extension and WebGPT-like functionality* 00:20:20 Dust as the infrastructure for agents* 00:24:03 Challenges of building with early AI models* 00:28:17 LLMs and Workflow Automation* 00:35:28 Building dependency graphs of agents* 00:37:34 Simulating API endpoints* 00:40:41 State of AI models* 00:43:19 Running evals* 00:46:36 Challenges in building AI agents infra* 00:49:21 Buy vs. build decisions for infrastructure components* 00:51:02 Future of SaaS and AI's Impact on Software* 00:53:07 The single employee $1B company race* 00:56:32 Horizontal vs. vertical approaches to AI agentsTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:11]: Hey, and today we're in a studio with Stanislas, welcome.Stan [00:00:14]: Thank you very much for having me.Swyx [00:00:16]: Visiting from Paris.Stan [00:00:17]: Paris.Swyx [00:00:18]: And you have had a very distinguished career. It's very hard to summarize, but you went to college in both Ecopolytechnique and Stanford, and then you worked in a number of places, Oracle, Totems, Stripe, and then OpenAI pre-ChatGPT. We'll talk, we'll spend a little bit of time about that. About two years ago, you left OpenAI to start Dust. I think you were one of the first OpenAI alum founders.Stan [00:00:40]: Yeah, I think it was about at the same time as the Adept guys, so that first wave.Swyx [00:00:46]: Yeah, and people really loved our David episode. We love a few sort of OpenAI stories, you know, for back in the day, like we're talking about pre-recording. Probably the statute of limitations on some of those stories has expired, so you can talk a little bit more freely without them coming after you. But maybe we'll just talk about, like, what was your journey into AI? You know, you were at Stripe for almost five years, there are a lot of Stripe alums going into OpenAI. 
I think the Stripe culture has come into OpenAI quite a bit.Stan [00:01:11]: Yeah, so I think the buses of Stripe people really started flowing in, I guess, after ChatGPT. But, yeah, my journey into AI is a... I mean, Greg Brockman. Yeah, yeah. From Greg, of course. And Daniela, actually, back in the days, Daniela Amodei.Swyx [00:01:27]: Yes, she was COO, I mean, she is COO, yeah. She had a pretty high job at OpenAI at the time, yeah, for sure.Stan [00:01:34]: My journey started as anybody else, you're fascinated with computer science and you want to make them think, it's awesome, but it doesn't work. I mean, it was a long time ago, I was like maybe 16, so it was 25 years ago. Then the first big exposure to AI would be at Stanford, and I'm going to, like, disclose how old I am, because at the time it was a class taught by Andrew Ng, and there was no deep learning. It was hand-crafted features for vision and the A-star algorithm. So it was fun. But those were the early days of deep learning. A few years after, I think, there was that first project at Google, you know, the cat face or the human face trained from many images. I hesitated doing a PhD, more in systems, and eventually decided to go get a job. Went to Oracle, started a company, made a gazillion mistakes, got acquired by Stripe, worked with Greg Brockman there. And at the end of Stripe, I started getting interested in AI again, felt like it was the time, you had the Atari games, you had the self-driving craziness at the time. And I started exploring projects, it felt like the Atari games were incredible, but they were still games. And I was looking into exploring projects that would have an impact on the world. And so I decided to explore three things, self-driving cars, cybersecurity and AI, and math and AI. I'm listing them in decreasing order of impact on the world, I guess.Swyx [00:03:01]: Discovering new math would be very foundational.Stan [00:03:03]: It is extremely foundational, but it's not as direct as driving people around.Swyx [00:03:07]: Sorry, you're doing this at Stripe, you're like thinking about your next move.Stan [00:03:09]: No, it was at Stripe, kind of a bit of time where I started exploring. I did a bunch of work with friends on trying to get RC cars to drive autonomously. Almost started a company in France or Europe about self-driving trucks. We decided to not go for it because it was probably very operational. And I think the idea of the company, of the team wasn't there. And also I realized that if I wake up one day and, because of a bug I wrote, I killed a family, it would be a bad experience. And so I just decided like, no, that's just too crazy. And then I explored cybersecurity with a friend. We were trying to apply transformers to fuzzing. So with fuzzing, you have kind of an algorithm that goes really fast and tries to mutate the inputs of a library to find bugs. And we tried to apply a transformer to that and do reinforcement learning with the signal of how much you propagate within the binary. Didn't work at all because the transformers are so slow compared to evolutionary algorithms that it kind of didn't work. Then I got interested in math and AI and started working on SAT solving with AI. And at the same time, OpenAI was kind of starting the reasoning team that was tackling that project as well. I was in touch with Greg and eventually got in touch with Ilya and finally found my way to OpenAI. I don't know how much you want to dig into that. 
The way to find your way to OpenAI when you're in Paris was kind of an interesting adventure as well.Swyx [00:04:33]: Please. And I want to note, this was a two-month journey. You did all this in two months.Stan [00:04:38]: The search.Swyx [00:04:40]: Your search for your next thing, because you left in July 2019 and then you joined OpenAI in September.Stan [00:04:45]: I'm going to be ashamed to say that.Swyx [00:04:47]: You were searching before. I was searching before.Stan [00:04:49]: I mean, it's normal. No, the truth is that I moved back to Paris through Stripe and I just felt the hardship of being remote from your team nine hours away. And so it kind of freed a bit of time for me to start the exploration before. Sorry, Patrick. Sorry, John.Swyx [00:05:05]: Hopefully they're listening. So you joined OpenAI from Paris and from like, obviously you had worked with Greg, but notStan [00:05:13]: anyone else. No. Yeah. So I had worked with Greg, but not Ilya, but I had started chatting with Ilya and Ilya was kind of excited because he knew that I was a good engineer through Greg, I presume, but I was not a trained researcher, didn't do a PhD, never did research. And I started chatting and he was excited all the way to the point where he was like, hey, come pass interviews, it's going to be fun. I think he didn't care where I was, he just wanted to try working together. So I go to SF, go through the interview process, get an offer. And so I get Bob McGrew on the phone for the first time, he's like, hey, Stan, it's awesome. You've got an offer. When are you coming to SF? I'm like, hey, it's awesome. I'm not coming to SF. I'm based in Paris and we just moved. He was like, hey, it's awesome. Well, you don't have an offer anymore. Oh, my God. No, it wasn't as hard as that. But that's basically the idea. And it took maybe a couple more rounds of chatting and they eventually decided to try a contractor setup. And that's how I kind of started working at OpenAI, officially as a contractor, but in practice really felt like being an employee.Swyx [00:06:14]: What did you work on?Stan [00:06:15]: So it was solely focused on math and AI. And in particular in the application, so the study of large language models' mathematical reasoning capabilities, and in particular in the context of formal mathematics. The motivation was simple, transformers are very creative, but yet they make mistakes. Formal math systems have the ability to verify a proof, and the tactics they can use to solve problems are very mechanical, so you miss the creativity. And so the idea was to try to explore both together. You would get the creativity of the LLMs and the kind of verification capabilities of the formal system. A formal system, just to give a little bit of context, is a system in which a proof is a program and the formal system is a type system, a type system that is so evolved that you can verify the program. If the type checks, it means that the program is correct.Swyx [00:07:06]: Is the verification much faster than actually executing the program?Stan [00:07:12]: Verification is instantaneous, basically. So the truth is that what you code involves tactics that may involve computation to search for solutions. So it's not instantaneous. You do have to do the computation to expand the tactics into the actual proof. 
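Stan's framing of a proof as a program that a type system checks is easiest to see in a proof assistant like Lean, the system that comes up a moment later in the conversation. A minimal illustrative sketch, purely our example and not anything from the OpenAI work itself:

```lean
-- The statement `2 + 2 = 4` is a type; `rfl` (reflexivity) is a program of that type.
-- "Verification" is just the kernel type-checking this term, which is near-instant.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A marginally less trivial statement, discharged by a core library lemma.
-- The creative work is finding a term like this; checking it is mechanical.
theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

This is the division of labor Stan describes: the model supplies the creative search for a proof term or tactic, and the type checker supplies the cheap, mechanical guarantee that the result is correct.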
The verification of the proof at the very low level is instantaneous.Swyx [00:07:32]: How quickly do you run into, like, you know, halting problem, P versus NP type things, like impossibilities where you're just like that?Stan [00:07:39]: I mean, you don't run into it at the time. It was really trying to solve very easy problems. So I think the... Can you give an example of easy? Yeah, so that's the MATH benchmark that everybody knows today. The Dan Hendrycks one. The Dan Hendrycks one, yeah. And I think it was the low end part of the MATH benchmark at the time, because that benchmark includes AMC problems, AMC 8, AMC 10, 12. So these are the easy ones. Then AIME problems, somewhat harder, and some IMO problems, like crazy hard.Swyx [00:08:07]: For our listeners, we covered this in our Benchmarks 101 episode. AMC is literally the grade of like high school, grade 8, grade 10, grade 12. So you can solve this. Just briefly to mention this, because I don't think we'll touch on this again. There's a bit of work with like Lean, and then with, you know, more recently with DeepMind doing like scoring like silver on the IMO. Any commentary on like how math has evolved from your early work to today?Stan [00:08:34]: I mean, that result is mind blowing. I mean, from my perspective, I spent three years on that. At the same time, Guillaume Lample in Paris, we were both in Paris, actually. He was at FAIR, was working on some problems. We were pushing the boundaries, and the goal was the IMO. And we cracked a few problems here and there. But the idea of getting a medal at an IMO was like just remote. So this is an impressive result. And I think the DeepMind team just did a good job of scaling. I think there's nothing too magical in their approach, even if it hasn't been published. There's a David Silver talk from seven days ago where it goes a little bit into more details. It feels like there's nothing magical there. It's really applying reinforcement learning and scaling up the amount of data they can generate through autoformalization. So we can dig into what autoformalization means if you want.Alessio [00:09:26]: Let's talk about the tail end, maybe, of the OpenAI. So you joined, and you're like, I'm going to work on math and do all of these things. I saw on one of your blog posts, you mentioned you fine-tuned over 10,000 models at OpenAI using 10 million A100 hours. How did the research evolve from GPT-2, and then getting closer to DaVinci 003? And then you left just before ChatGPT was released, but tell people a bit more about the research path that took you there.Stan [00:09:54]: I can give you my perspective of it. I think at OpenAI, there's always been a large chunk of the compute that was reserved to train the GPTs, which makes sense. So it was pre-Anthropic split. Most of the compute was going to a product called Nest, which was basically GPT-3. And then you had a bunch of, let's say, remote, not core research teams that were trying to explore maybe more specific problems or maybe the algorithm part of it. The interesting part, I don't know if it was where your question was going, is that in those labs, you're managing researchers. So by definition, you shouldn't be managing them. But in that space, there's a managing tool that is great, which is compute allocation. Basically by managing the compute allocation, you can message the team of where you think the priority should go. And so it was really a question of, you were free as a researcher to work on whatever you wanted. 
But if it was not aligned with the OpenAI mission, and that's fair, you wouldn't get the compute allocation. As it happens, solving math was very much aligned with the direction of OpenAI. And so I was lucky to generally get the compute I needed to make good progress.Swyx [00:11:06]: What do you need to show as incremental results to get funded for further results?Stan [00:11:12]: It's an imperfect process because there's a bit of a... If you're working on math and AI, obviously there's kind of a prior that it's going to be aligned with the company. So it's much easier than going into something much riskier, I guess. You have to show incremental progress, I guess. It's like you ask for a certain amount of compute and you deliver a few weeks after and you demonstrate that you have progress. Progress might be a positive result. Progress might be a strong negative result. And a strong negative result is actually often much harder to get or much more interesting than a positive result. And then it generally goes into, as in any organization, you would have people finding your project or any other project cool and fancy. And so you would have that kind of phase of growing compute allocation for it all the way to a point. And then maybe you reach an apex and then maybe you go back mostly to zero and restart the process because you're going in a different direction or something else. That's how I felt. Explore, exploit. Yeah, exactly. Exactly. Exactly. It's a reinforcement learning approach.Swyx [00:12:14]: Classic PhD student search process.Alessio [00:12:17]: And you were reporting to Ilya, like the results you were kind of bringing back to him or like what's the structure? It's almost like when you're doing such cutting edge research, you need to report to somebody who is actually really smart to understand that the direction is right.Stan [00:12:29]: So we had a reasoning team, which was working on reasoning, obviously, and so math in general. And that team had a manager, but Ilya was extremely involved in the team as an advisor, I guess. Since he brought me into OpenAI, I was lucky, mostly during the first years, to have kind of direct access to him. He would really coach me as a trainee researcher, I guess, with good engineering skills. And Ilya, I think at OpenAI, he was the one showing the North Star, right? It was his job and I think he really enjoyed it and he did it super well: going through the teams and saying, this is where we should be going, and trying to, you know, flock the different teams together towards an objective.Swyx [00:13:12]: I would say like the public perception of him is that he was the strongest believer in scaling. Oh, yeah. Obviously, he has always pursued the compression thesis. You have worked with him personally, what does the public not know about how he works?Stan [00:13:26]: I think he's really focused on building the vision and communicating the vision within the company, which was extremely useful. I was personally surprised that he spent so much time, you know, working on communicating that vision and getting the teams to work together versus...Swyx [00:13:40]: To be specific, vision is AGI? Oh, yeah.Stan [00:13:42]: Vision is like, yeah, it's the belief in compression and scaling compute. I remember when I started working on the Reasoning team, the excitement was really about scaling the compute around Reasoning and that was really the belief we wanted to ingrain in the team. 
And that's what has been useful to the team, and the DeepMind results and the success of GPT-4 show that it was the right approach.Swyx [00:14:06]: Was it according to the neural scaling laws, the Kaplan paper that was published?Stan [00:14:12]: I think it was before that, because those ones came with GPT-3, basically at the time of GPT-3 being released or being ready internally. But before that, there really was a strong belief in scale. I think it was just the belief that the transformer was a generic enough architecture that you could learn anything. And that was just a question of scaling.Alessio [00:14:33]: Any other fun stories you want to tell? Sam Altman, Greg, you know, anything.Stan [00:14:37]: Weirdly, I didn't work that much with Greg when I was at OpenAI. He had always been mostly focused on training the GPTs and rightfully so. One thing about Sam Altman, he really impressed me because when I joined, he had joined not that long ago and it felt like he was kind of a very high level CEO. And I was mind blown by how deep he was able to go into the subjects within a year or something, all the way to a situation where, when I was having lunch with him by year two at OpenAI, he would just know quite deeply what I was doing. With no ML background. Yeah, with no ML background, but I didn't have any either, so I guess that explains why. But I think it's a question about, you don't necessarily need to understand the very technicalities of how things are done, but you need to understand what's the goal and what's being done and what the recent results are, and all of that. And we could have kind of a very productive discussion. And that really impressed me, given the size at the time of OpenAI, which was not negligible.Swyx [00:15:44]: Yeah. I mean, you've been a, you were a founder before, you're a founder now, and you've seen Sam as a founder. How has he affected you as a founder?Stan [00:15:51]: I think having that capability of changing the scale of your attention in the company, because most of the time you operate at a very high level, but being able to go deep down and being in the know of what's happening on the ground is something that I feel is really enlightening. That's not a place in which I ever was as a founder, because first company, we went all the way to 10 people. Current company, there's 25 of us. So the high level, the sky and the ground are pretty much at the same place. No, you're being too humble.Swyx [00:16:21]: I mean, Stripe was also like a huge rocket ship.Stan [00:16:23]: At Stripe, I wasn't a founder. So I was, like at OpenAI, really happy being on the ground, pushing the machine, making it work. Yeah.Swyx [00:16:31]: Last OpenAI question. The Anthropic split you mentioned, you were around for that. Very dramatic. David also left around that time, you left. This year, we've also had a similar management shakeup, let's just call it. Can you compare what it was like going through that split during that time? And then like, does that have any similarities now? Like, are we going to see a new Anthropic emerge from these folks that just left?Stan [00:16:54]: That I really, really don't know. At the time, the split was pretty surprising because they had been training GPT-3, it was a success. And to be completely transparent, I wasn't in the weeds of the split. What I understood of it is that there was a disagreement about the commercialization of that technology. 
I think the focal point of that disagreement was the fact that we started working on the API and wanted to make those models available through an API. Is that really the core disagreement? I don't know.Swyx [00:17:25]: Was it safety?Stan [00:17:26]: Was it commercialization?Swyx [00:17:27]: Or did they just want to start a company?Stan [00:17:28]: Exactly. Exactly. That I don't know. But I think what I was surprised of is how quickly OpenAI recovered at the time. And I think it's just because we were mostly a research org and the mission was so clear that some divergence in some teams, some people leave, the mission is still there. We have the compute. We have a site. So it just keeps going.Swyx [00:17:50]: Very deep bench. Like just a lot of talent. Yeah.Alessio [00:17:53]: So that was the OpenAI part of the history. Exactly. So then you leave OpenAI in September 2022. And I would say in Silicon Valley, the two hottest companies at the time were you and LangChain. What was that start like and why did you decide to start with a more developer focused kind of like an AI engineer tool rather than going back into some more research and something else?Stan [00:18:15]: Yeah. First, I'm not a trained researcher. So going through OpenAI was really kind of the PhD I always wanted to do. But research is hard. You're digging into a field all day long for weeks and weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13th second, you're like, oh, yeah, that was obvious. And you go back to digging. I'm not a trained, like formally trained researcher, and it wasn't necessarily an ambition of mine to have a research career. And I felt the hardness of it. I enjoyed it a ton. But at the time, I decided that I wanted to go back to something more productive. And the other fun motivation was like, I mean, if we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down. And so that was kind of the true motivation for like trying to go there. So that's kind of the core motivation at the beginning, personally. And the motivation for starting a company was pretty simple. I had seen GPT-4 internally at the time, it was September 2022. So it was pre-ChatGPT, but GPT-4 had been ready for a few months internally. I was like, okay, that's obvious, the capabilities are there to create an insane amount of value to the world. And yet the deployment is not there yet. The revenue of OpenAI at the time was ridiculously small compared to what it is today. So the thesis was, there's probably a lot to be done at the product level to unlock the usage.Alessio [00:19:49]: Yeah. Let's talk a bit more about the form factor, maybe. I think one of the first successes you had was kind of like the WebGPT-like thing, like using the models to traverse the web and like summarize things. And the browser was really the interface. Why did you start with the browser? Like why was it important? And then you built XP1, which was kind of like the browser extension.Stan [00:20:09]: So the starting point at the time was, if you wanted to talk about LLMs, it was still a rather small community, a community of mostly researchers and to some extent, very early adopters, very early engineers. 
It was almost inconceivable to just build a product and go sell it to the enterprise, though at the time there were a few companies doing that. The one on marketing, I don't remember its name... Jasper. But so the natural first intention, the first, first, first intention was to go to the developers and try to create tooling for them to create products on top of those models. And so that's what Dust was originally. It was quite different than LangChain, and LangChain just beat the s**t out of us, which is great. It's a choice.Swyx [00:20:53]: You were cloud, and closed source. They were open source.Stan [00:20:56]: Yeah. So technically we were open source and we still are open source, but I think that doesn't really matter. I had the strong belief from my research time that you cannot create an LLM-based workflow on just one example. Basically, if you just have one example, you overfit. So as you develop your interaction, your orchestration around the LLM, you need a dozen examples. Obviously, if you're running a dozen examples on a multi-step workflow, you start parallelizing stuff. And if you do that in the console, you just have like a messy stream of tokens going out and it's very hard to observe what's going on there. And so the idea was to go with a UI so that you could easily introspect the output of each interaction with the model and dig in there through a UI, which is-Swyx [00:21:42]: Was that open source? I actually didn't come across it.Stan [00:21:44]: Oh yeah, it was. I mean, Dust is entirely open source even today. We're not going for an open source-Swyx [00:21:48]: If it matters, I didn't know that.Stan [00:21:49]: No, no, no, no, no. The reason why is that we're not doing an open source strategy. It's not an open source go-to-market at all. We're open source because we can and it's fun.Swyx [00:21:59]: Open source is marketing. You have all the downsides of open source, which is like people can clone you.Stan [00:22:03]: But I think that downside is a big fallacy. Okay. Yes, anybody can clone Dust today, but the value of Dust is not the current state. The value of Dust is the number of eyeballs and hands of developers that are contributing to it in the future. And so yes, anybody can clone it today, but that wouldn't change anything. There is some value in being open source. In a discussion with the security team, you can be extremely transparent and just show the code. When you have a discussion with users and there's a bug or a feature missing, you can just point to the issue, show the pull request... exactly, oh, PR welcome. That doesn't happen that much, but you can show the progress, and if the person that you're chatting with is a little bit technical, they really enjoy seeing the pull request advancing and seeing it all the way to deploy. And then the downsides are mostly around security. You never want to do security by obfuscation. But the truth is that your vector of attack is facilitated by you being open source. But at the same time, it's a good thing because if you're doing anything like bug bounties or stuff like that, you just give many more tools to the bug bounty hunters so that their output is much better. So there are many, many, many trade-offs. I don't believe in the value of the code base per se. I think it's really the people that are on the code base that have the value, and the go-to-market and the product and all of those things that are around the code base. Obviously, that's not true for every code base. 
If you're working on a very secret kernel to accelerate the inference of LLMs, I would buy that you don't want to be open source. But for product stuff, I really think there's very little risk. Yeah.Alessio [00:23:39]: I signed up for XP1, I was looking, January 2023. I think at the time you were on DaVinci 003. Given that you had seen GPT-4, how did you feel having to push a product out that was using this model that was so inferior? And you're like, please, just use it today. I promise it's going to get better. Just overall, as a founder, how do you build something that maybe doesn't quite work with the model today, but you're just expecting the new model to be better?Stan [00:24:03]: Yeah, so actually, XP1 was even on a smaller one, the post-GPT release, small version, so it was... Ada, Babbage... No, no, no, not that far away. But it was the small version of GPT, basically. I don't remember its name. Yes, you have a frustration there. But at the same time, I think XP1 was an experiment, but was designed as a way to be useful at the current capability of the model. If you just want to extract data from a LinkedIn page, that model was just fine. If you want to summarize an article in a newspaper, that model was just fine. And so it was really a question of trying to find a product that works with the current capability, knowing that you will always have tailwinds as models get better and faster and cheaper. So that was kind of a... There's a bit of a frustration because you know what's out there and you know that you don't have access to it yet. It's also interesting to try to find a product that works with the current capability.Alessio [00:24:55]: And we highlighted XP1 in our anatomy of autonomy post in April of last year, which was, you know, where are all the agents, right? So now we spent 30 minutes getting to what you're building now. So you basically had a developer framework, then you had a browser extension, then you had all these things, and then you kind of got to where Dust is today. So maybe just give people an overview of what Dust is today and the core thesis behind it. Yeah, of course.Stan [00:25:20]: So Dust, we really want to build the infrastructure so that companies can deploy agents within their teams. We are horizontal by nature because we strongly believe in the emergence of use cases from people having access to creating an agent who don't need to be developers. They have to be thinkers. They have to be curious. But anybody can create an agent that will solve an operational thing that they're doing in their day-to-day job. And to make those agents useful, there are two focuses, which is interesting. The first one is an infrastructure focus. You have to build the pipes so that the agent has access to the data. You have to build the pipes such that the agents can take action, can access the web, et cetera. So that's really an infrastructure play. Maintaining connections to Notion, Slack, GitHub, all of them is a lot of work. It is boring work, boring infrastructure work, but that's something that we know is extremely valuable in the same way that Stripe is extremely valuable because it maintains the pipes. And we have that dual focus because we're also building the product for people to use it. And there it's fascinating because everything started from the conversational interface, obviously, which is a great starting point. But we're only scratching the surface, right? I think we are at the Pong level of LLM productization. 
And we haven't invented the C3. We haven't invented Counter-Strike. We haven't invented Cyberpunk 2077. So our mission is really to create the product that lets people equip themselves to just offload all the work that can be automated or assisted by LLMs.Alessio [00:26:57]: And can you just comment on different takes that people had? So maybe the most open is like AutoGPT. It's just kind of like just trying to do anything. It's like it's all magic. There's no way for you to do anything. Then you had Adept, you know, we had David on the podcast. They're like super hands-on with each individual customer to build super tailored solutions. How do you decide where to draw the line between 'this is magic' and 'this is exposed to you', especially in a market where most people don't know how to build with AI at all? So if you expect them to do the thing, they're probably not going to do it. Yeah, exactly.Stan [00:27:29]: So the AutoGPT approach obviously is extremely exciting, but we know that the agentic capabilities of models are not quite there yet. It just gets lost. So we're starting, we're starting where it works. Same with XP1. And where it works is pretty simple. It's like simple workflows that involve a couple of tools, where you don't even need to have the model decide which tools it uses, in the sense that you just want people to put it in the instructions. It's like take that page, do that search, pick up that document, do the work that I want in the format I want, and give me the results. There's no smartness there, right? In terms of orchestrating the tools, it's mostly using English for people to program a workflow where you don't have the constraint of having compatible APIs between the two.Swyx [00:28:17]: That kind of personal automation, would you say it's kind of like an LLM Zapier type of thing? Like if this, then that, and then, you know, do this, then this. You're programming with English?Stan [00:28:28]: So you're programming with English. So you're just saying, oh, do this and then that. You can even create some form of APIs. You say, when I give you the command X, do this. When I give you the command Y, do this. And you describe the workflow. But you don't have to create boxes and create the workflow explicitly. You just need to describe what the tasks are supposed to be and make the tools available to the agent. The tool can be a semantic search. The tool can be querying into a structured database. The tool can be searching on the web. And obviously, the interesting tools that we're only starting to scratch are actually creating external actions like reimbursing something on Stripe, sending an email, clicking on a button in the admin or something like that.Swyx [00:29:11]: Do you maintain all these integrations?Stan [00:29:13]: Today, we maintain most of the integrations. We do always have an escape hatch for people to kind of custom integrate. But the reality of the market today is that people just want it to work, right? And so it's mostly us maintaining the integrations. As an example, a very good source of information that is tricky to productize is Salesforce. Because Salesforce is basically a database and a UI. And people do whatever the f**k they want with it. And so every company has different models and stuff like that. So right now, we don't support it natively. And the type of support, or real native support, will be slightly more complex than just OAuthing into it, like is the case with Slack as an example. 
Because it's probably going to be, oh, you want to connect your Salesforce to us? Give us the SOQL, that's the Salesforce query language. Give us the queries you want us to run on it and inject into the context of Dust. So that's interesting, how not all integrations are equal, and some of them require a bit of work from the user. And for some of them that are really valuable to our users, but we don't support yet, they can just build them internally and push the data to us.Swyx [00:30:18]: I think I understand the Salesforce thing. But let me just clarify, are you using browser automation because there's no API for something?Stan [00:30:24]: No, no, no, no. In that case, so we do have browser automation for all the use cases that apply to the public web. But for most of the integrations with the internal systems of the company, it really runs through APIs.Swyx [00:30:35]: Haven't you felt the pull to RPA, browser automation, that kind of stuff?Stan [00:30:39]: I mean, what I've been saying for a long time, maybe I'm wrong, is that if the future is that you're going to stand in front of a computer and look at an agent clicking on stuff, then I'll hit my computer. And my computer is a big Lenovo. It's black. Doesn't sound good at all compared to a Mac. And if the APIs are there, we should use them. There is going to be a long tail of stuff that doesn't have APIs, but as the world is moving forward, that's disappearing. So the core RPA value in the past has really been, oh, this old 90s product doesn't have an API. So I need to use the UI to automate. I think for most of the ICP companies, the companies that are ICP for us, the scale-ups that are between 500 and 5,000 people, tech companies, most of the SaaS they use have APIs. Now there's an interesting question for the open web, because there is stuff that you want to do that involves websites that don't necessarily have APIs. And the current state of web integration from the field, which is us and OpenAI and Anthropic, and I don't even know if they have web navigation, but I don't think so. The current state of affairs is really, really broken because you have what? You have basically search and headless browsing. But headless browsing, I think everybody's doing basically body.innerText and filling that into the model, right?Swyx [00:31:56]: There are parsers into Markdown and stuff.Stan [00:31:58]: I'm super excited by the companies that are exploring the capability of rendering a web page in a way that is compatible with a model, being able to maintain the selectors, so basically the places where to click in the page, through that process expose the actions to the model, have the model select an action in a way that is compatible with the model, which is not a big page of full DOM that is very noisy, and then being able to decompress that back to the original page and take the action. And that's something that is really exciting and that will kind of change the level of things that agents can do on the web. That I find exciting, but I also feel that the bulk of the useful stuff that you can do within the company can be done through APIs. The data can be retrieved by API. The actions can be taken through API.Swyx [00:32:44]: For listeners, I'll note that you're basically completely disagreeing with David Luan. Exactly, exactly. We've seen it since the summer. Adept is where it is, and Dust is where it is. So Dust is still standing.Alessio [00:32:55]: Can we just quickly comment on function calling? 
You mentioned you don't need the models to be that smart to actually pick the tools. Have you seen the models not be good enough? Or is it just like, you just don't want to put the complexity in there? Like, is there any room for improvement left in function calling? Or do you feel you usually consistently get always the right response, the right parametersStan [00:33:13]: and all of that?Alessio [00:33:13]: FRANCESC CAMPOY So that's a tricky product question.Stan [00:33:15]: Because if the instructions are good and precise, then you don't have any issue, because it's scripted for you. And the model will just look at the scripts and just follow and say, oh, he's probably talking about that action, and I'm going to use it. And the parameters are kind of abused from the state of the conversation. I'll just go with it. If you provide a very high level, kind of an auto-GPT-esque level in the instructions and provide 16 different tools to your model, yes, we're seeing the models in that state making mistakes. And there is obviously some progress can be made on the capabilities. But the interesting part is that there is already so much work that can assist, augment, accelerate by just going with pretty simply scripted for actions agents. What I'm excited about by pushing our users to create rather simple agents is that once you have those working really well, you can create meta agents that use the agents as actions. And all of a sudden, you can kind of have a hierarchy of responsibility that will probably get you almost to the point of the auto-GPT value. It requires the construction of intermediary artifacts, but you're probably going to be able to achieve something great. I'll give you some example. We have our incidents are shared in Slack in a specific channel, or shipped are shared in Slack. We have a weekly meeting where we have a table about incidents and shipped stuff. We're not writing that weekly meeting table anymore. We have an assistant that just go find the right data on Slack and create the table for us. And that assistant works perfectly. It's trivially simple, right? Take one week of data from that channel and just create the table. And then we have in that weekly meeting, obviously some graphs and reporting about our financials and our progress and our ARR. And we've created assistants to generate those graphs directly. And those assistants works great. By creating those assistants that cover those small parts of that weekly meeting, slowly we're getting to in a world where we'll have a weekly meeting assistance. We'll just call it. You don't need to prompt it. You don't need to say anything. It's going to run those different assistants and get that notion page just ready. And by doing that, if you get there, and that's an objective for us to us using Dust, get there, you're saving an hour of company time every time you run it. Yeah.Alessio [00:35:28]: That's my pet topic of NPM for agents. How do you build dependency graphs of agents? And how do you share them? Because why do I have to rebuild some of the smaller levels of what you built already?Swyx [00:35:40]: I have a quick follow-up question on agents managing other agents. It's a topic of a lot of research, both from Microsoft and even in startups. What you've discovered best practice for, let's say like a manager agent controlling a bunch of small agents. It's two-way communication. 
I don't know if there should be a protocol format.Stan [00:35:59]: To be completely honest, the state we are at right now is creating the simple agents. So we haven't even explored yet the meta agents. We know it's there. We know it's going to be valuable. We know it's going to be awesome. But we're starting there because it's the simplest place to start. And it's also what the market understands. If you go to a company, a random SaaS B2B company, not necessarily specialized in AI, and you take an operational team and you tell them, build some tooling for yourself, they'll understand the small agents. If you tell them, build AutoGPT, they'll be like, Auto what?Swyx [00:36:31]: And I noticed that in your language, you're very much focused on non-technical users. You don't really mention API here. You mention instruction instead of system prompt, right? That's very conscious.Stan [00:36:41]: Yeah, it's very conscious. It's a mark of our designer, Ed, who kind of pushed us to create a friendly product. I was knee-deep into AI when I started, obviously. And my co-founder, Gabriel, was at Stripe as well. We started a company together that got acquired by Stripe 15 years ago. He was at Alan, a healthcare company in Paris, after that. So he was a little bit less knee-deep in AI, but really focused on product. And I didn't realize how important it is to make that technology not scary to end users. It didn't feel scary to me, but it was really seen by Ed, our designer, that it was feeling scary to the users. And so we were very proactive and very deliberate about creating a brand that feels not too scary and creating a wording and a language, as you say, that really tried to communicate the fact that it's going to be fine. It's going to be easy. You're going to make it.Alessio [00:37:34]: And another big point that David had about Adept is we need to build an environment for the agents to act. And then if you have the environment, you can simulate what they do. How's that different when you're interacting with APIs and you're kind of touching systems that you cannot really simulate? If you call the Salesforce API, you're just calling it.Stan [00:37:52]: So I think that goes back to the DNA of the companies that are very different. Adept, I think, was a product company with a very strong research DNA, and they were still doing research. One of their goals was building a model. And that's why they raised a large amount of money, et cetera. We are 100% deliberately a product company. We don't do research. We don't train models. We don't even run GPUs. We're using the models that exist, and we try to push the product boundary as far as possible with the existing models. So that creates an issue. Indeed, so to answer your question, when you're interacting in the real world, well, you cannot simulate, so you cannot improve the models. Even improving your instructions is complicated for a builder. The hope is that you can use models to evaluate the conversations so that you can at least get feedback and some information about the performance of the assistants. But if you take actual traces of interactions of humans with those agents, it is, even for us humans, extremely hard to decide whether it was a productive interaction or a really bad interaction. You don't know why the person left. You don't know if they left happy or not. So being extremely, extremely, extremely pragmatic here, it becomes a product issue. 
We have to build a product that gets the end users to provide feedback so that, as a first step, the person that is building the agent can iterate on it. As a second step, maybe later when we start training models and post-training, et cetera, we can optimize around that for each of those companies. Yeah.Alessio [00:39:17]: Do you see in the future products offering kind of like a simulation environment, the same way all SaaS now kind of offers APIs to build programmatically? Like in cybersecurity, there are a lot of companies working on building simulated environments so that you can then use agents to red team, but I haven't really seen that.Stan [00:39:34]: Yeah, no, me neither. That's a super interesting question. I think it's really going to depend on how much, because you need to simulate to generate data, and you need data to train models. And the question at the end is, are we going to be training models or are we just going to be using frontier models as they are? On that question, I don't have a strong opinion. It might be the case that we'll be training models because in all of those AI-first products, the model is so close to the product surface that as you get big and you want to really own your product, you're going to have to own the model as well. Owning the model doesn't mean doing the pre-training, that would be crazy. But at least having an internal post-training realignment loop, it makes a lot of sense. And so if we see many companies going towards that all the time, then there might be incentives for the SaaSes of the world to provide assistance in getting there. But at the same time, there's a tension because those SaaS providers, they don't want to be interacted with by agents, they want the human to click on the button. Yeah, they got to sell seats. Exactly.Swyx [00:40:41]: Just a quick question on models. I'm sure you've used many, probably not just OpenAI. Would you characterize some models as better than others? Do you use any open source models? What have been the trends in models over the last two years?Stan [00:40:53]: We've seen over the past two years kind of a bit of a race in between models. And at times, it's the OpenAI model that is the best. At times, it's the Anthropic models that are the best. Our take on that is that we are agnostic and we let our users pick their model. Oh, they choose? Yeah, so when you create an assistant or an agent, you can just say, oh, I'm going to run it on GPT-4, GPT-4 Turbo, or...Swyx [00:41:16]: Don't you think for the non-technical user, that is actually an abstraction that you should take away from them?Stan [00:41:20]: We have a sane default. So we move the default to the latest model that is cool. And we have a sane default, and it's actually not very visible. In our flow to create an agent, you would have to go into the advanced settings and pick your model. So this is something that the technical person will care about. But that's something that obviously is a bit too complicated for the...Swyx [00:41:40]: And do you care most about function calling or instruction following or something else?Stan [00:41:44]: I think we care most about function calling because you want to... There's nothing worse than a function call including incorrect parameters or being a bit off, because it just drives the whole interaction off.Swyx [00:41:56]: Yeah, so you've got the Berkeley function calling leaderboard.Stan [00:42:00]: These days, it's funny how the comparison between GPT-4o and GPT-4 Turbo is still up in the air on function calling. 
I personally don't have proof, but I know many people, and I'm probably part of them, who think that GPT-4 Turbo is still better than GPT-4o on function calling. Wow. We'll see what comes out of the o1 class if it ever gets function calling. And Claude 3.5 Sonnet is great as well. They kind of innovated in an interesting way, which was never quite publicized. But it's that they have that kind of chain of thought step whenever you use a Claude model or Sonnet model with function calling. That chain of thought step doesn't exist when you just interact with it just for answering questions. But when you use function calling, you get that step, and it really helps getting better function calling.Swyx [00:42:43]: Yeah, we actually just recorded a podcast with the Berkeley team that runs that leaderboard this week. So they just released V3.Stan [00:42:49]: Yeah.Swyx [00:42:49]: It was V1 like two months ago, and then V2, V3. Turbo is on top.Stan [00:42:53]: Turbo is on top. Turbo is above 4o.Swyx [00:42:54]: And then the third place is xLAM from Salesforce, which is a large action model they've been trying to popularize.Stan [00:43:01]: Yep.Swyx [00:43:01]: o1-mini is actually on here, I think. o1-mini is number 11.Stan [00:43:05]: But arguably, o1-mini has been in a line for that. Yeah.Alessio [00:43:09]: Do you use leaderboards? Do you have your own evals? I mean, this is kind of intuitive, right? Like using the older model is better. I think most people just upgrade. Yeah. What's the eval process like?Stan [00:43:19]: It's funny because I've been doing research for three years, and we have bigger stuff to cook. When you're deploying in a company, one thing where we really spike is that when we manage to activate the company, we have a crazy penetration. The highest penetration we have is 88% daily active users within the entire employee base of the company. The kind of average penetration and activation we have in our current enterprise customers is more like 60% to 70% weekly active. So we basically have the entire company interacting with us. And when you're there, there is so much stuff that matters more than getting evals, getting the best model. Because there are so many places where you can create products or do stuff that will give you the 80% with the work you do. Whereas deciding if it's GPT-4 or GPT-4 Turbo or et cetera, you know, it'll just give you the 5% improvement. But the reality is that you want to focus on the places where you can really change the direction or change the interaction more drastically. But that's something that we'll have to do eventually because we still want to be serious people.Swyx [00:44:24]: It's funny because in some ways, the model labs are competing for you, right? You don't have to make any effort. You just switch models and then it'll grow. What are you really limited by? Is it additional sources?Stan [00:44:36]: It's not models, right?Swyx [00:44:37]: You're not really limited by quality of model.Stan [00:44:40]: Right now, we are limited by the infrastructure part, which is the ability for users to connect easily to all the data they need to do the job they want to do.Swyx [00:44:51]: Because you maintain all your own stuff.Stan [00:44:53]: You know, there are companies out thereSwyx [00:44:54]: that are starting to provide integrations as a service, right? I used to work in an integrations company. 
Stan [00:44:59]: Yeah, I know. It's just that there are some intricacies about how you chunk stuff and how you process information from one platform to the other. If you look at the end of the spectrum, you could think of, you could say, oh, I'm going to support Airbyte and Airbyte has- I used to work at Airbyte.Swyx [00:45:12]: Oh, really?Stan [00:45:13]: That makes sense.Swyx [00:45:14]: They're the French founders as well.Stan [00:45:15]: I know Jean very well. I'm seeing him today. And the reality is that if you look at Notion, Airbyte does the job of taking Notion and putting it in a structured way. But the way it does it is not really usable to actually make it available to models in a useful way. Because you get all the blocks, details, et cetera, which is useful for many use cases.Swyx [00:45:35]: It's also for data scientists and not for AI.Stan [00:45:38]: The reality of Notion is that sometimes you have a- so when you have a page, there's a lot of structure in it and you want to capture the structure and chunk the information in a way that respects that structure. In Notion, you have databases. Sometimes those databases are real tabular data. Sometimes those databases are full of text. You want to get the distinction and understand that this database should be considered like text information, whereas this other one is actually quantitative information. And to really get a very high quality interaction with that piece of information, I haven't found a solution that will work without us owning the connection end-to-end.Swyx [00:46:15]: That's why I don't invest in, there's Composio, there's All Hands from Graham Neubig. There are all these other companies that are like, we will do the integrations for you. You just, we have the open source community. We'll do off the shelf. But then you are so specific in your needs that you want to own it.Swyx [00:46:28]: Yeah, exactly.Stan [00:46:29]: You can talk to Michel about that.Swyx [00:46:30]: You know, he wants to put the AI in there, but you know. Yeah, I will. I will.Stan [00:46:35]: Cool. What are we missing?Alessio [00:46:36]: You know, what are like the things that are sneakily hard that you're tackling that maybe people don't even realize are really hard?Stan [00:46:43]: The hard parts, as we touched on throughout the conversation, are really building the infra that works for those agents, because it's tedious work. It's an evergreen piece of work because you always have an extra integration that will be useful to a non-negligible set of your users. What I'm super excited about is that there are so many interactions that shouldn't be conversational interactions and that could be very useful. Basically, we have the firehose of information of those companies, and there are not going to be that many companies that capture the firehose of information. When you have the firehose of information, you can do a ton of stuff with models that are not just accelerating people, but giving them superhuman capabilities, even with the current model capabilities, because you can just sift through much more information. An example is documentation repair. If I have the firehose of Slack messages and new Notion pages, if somebody says, I own that page, I want to be updated when there is a piece of information that should update that page, this is now possible. You get an email saying, oh, look at that Slack message. It says the opposite of what you have in that paragraph. Maybe you want to update or just ping that person. 
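To make the documentation repair idea concrete, here is a minimal sketch of that kind of check: watch a doc, compare it against new chat messages, and ping the owner when something looks contradictory. Everything here (the Doc type, llm_contradicts, notify_owner, the sample data) is a hypothetical stand-in for illustration, not Dust's actual API; a real version would call a model for the contradiction check and first narrow candidates with semantic search over the firehose instead of scanning every pair.

```python
# Sketch of a "documentation repair" loop: all names and data are hypothetical.
from dataclasses import dataclass

@dataclass
class Doc:
    owner: str
    paragraphs: list[str]

def llm_contradicts(paragraph: str, message: str) -> bool:
    """Placeholder for an LLM call answering: does `message` contradict
    `paragraph`? A real version would prompt a model and parse yes/no."""
    return "deprecated" in message.lower() and "supported" in paragraph.lower()

def notify_owner(owner: str, paragraph: str, message: str) -> None:
    """Placeholder for sending a Slack DM or email to the doc owner."""
    print(f"@{owner}: '{message}' may contradict: '{paragraph}'")

def documentation_repair(doc: Doc, new_messages: list[str]) -> None:
    # Naive paragraphs x messages scan; a real system would pre-filter
    # candidates with semantic search before spending model calls.
    for paragraph in doc.paragraphs:
        for message in new_messages:
            if llm_contradicts(paragraph, message):
                notify_owner(doc.owner, paragraph, message)

if __name__ == "__main__":
    doc = Doc(owner="stan", paragraphs=["The v1 endpoint is fully supported."])
    documentation_repair(doc, ["Heads up: the v1 endpoint is deprecated as of today."])
```

The point of the sketch is the shape, not the heuristic: the value comes from having the firehose of Slack and Notion updates flowing through one place so a check like this can run continuously.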
I think there is a lot to be explored on the product layer in terms of what it means to interact productively with those models. And that's a problem that's extremely hard and extremely exciting.Swyx [00:48:00]: One thing you keep mentioning about infra work, obviously, Dust is building that infra and serving that in a very consumer-friendly way. You always talk about infra being additional sources, additional connectors. That is very important. But I'm also interested in the vertical infra. There is an orchestrator underlying all these things where you're doing asynchronous work. For example, the simplest one is a cron job. You just schedule things. But also, for if-this-then-that, you have to wait for something to be executed and then proceed to the next task. I used to work on an orchestrator as well, Temporal.Stan [00:48:31]: We used Temporal. Oh, you used Temporal? Yeah. Oh, how was the experience?Swyx [00:48:34]: I need the NPS.Stan [00:48:36]: We're doing a customer discovery call now.Swyx [00:48:39]: But you can also complain to me because I don't work there anymore.Stan [00:48:42]: No, we love Temporal. There are some edges that are a bit rough, surprisingly rough. And you would say, why is it so complicated?Swyx [00:48:49]: It's always versioning.Stan [00:48:50]: Yeah, stuff like that. But we really love it. And we use it for exactly what you said, like managing the entire set of stuff that needs to happen so that in semi-real time, we get all the updates from Slack or Notion or GitHub into the system. And whenever we see a piece of information go through, maybe trigger workflows to run agents because they need to provide alerts to users and stuff like that. And Temporal is great. Love it.Swyx [00:49:17]: You haven't evaluated others. You don't want to build your own. You're happy with...Stan [00:49:21]: Oh, no, we're not in the business of replacing Temporal. And Temporal is so... I mean, it, or any other competitive product, they're very general. If it's there, there's an interesting theory about buy versus build. I think in that case, when you're a high-growth company, your buy-build trade-off is very much on the side of buy. Because if you have the capability, you're just going to be saving time, you can focus on your core competency, etc. And it's funny because we're seeing, we're starting to see the post-high-growth company, post-SKF company, going back on that trade-off, interestingly. So that's the Klarna news about removing Zendesk and Salesforce. Do you believe that, by the way?Alessio [00:49:56]: Yeah, I did a podcast with them.Stan [00:49:58]: Oh, yeah?Alessio [00:49:58]: It's true.Swyx [00:49:59]: No, no, I know.Stan [00:50:00]: Of course they say it's true,Swyx [00:50:00]: but also how well is it going to go?Stan [00:50:02]: So I'm not talking about deflecting the customer traffic. I'm talking about building AI on top of Salesforce and Zendesk, basically, if I understand correctly. And all of a sudden, your product surface becomes much smaller because you're interacting with an AI system that will take some actions. And so all of a sudden, you don't need the product layer anymore. And you realize that, oh, those things are just databases that I pay a hundred times the price for, right? Because you're a post-SKF company and you have tech capabilities, you are incentivized to reduce your costs and you have the capability to do so. And then it makes sense to just scratch the SaaS away. 
So it's interesting that we might see kind of a bad time for SaaS in post-hyper-growth tech companies. So it's still a big market, but it's not that big because if you're not a tech company, you don't have the capabilities to reduce that cost. If you're a high-growth company, you're always going to be buying because you go faster with that. But that's an interesting new space, new category of companies that might remove some SaaS.Swyx [00:51:02]: Yeah, Alessio's firm has an interesting thesis on the future of SaaS in AI.Alessio [00:51:05]: Service as a software, we call it. It's basically like, well, the most extreme is like, why is there any software at all? You know, ideally, it's all a labor interface where you're asking somebody to do something for you, whether that's a person, an AI agent or whatnot.Stan [00:51:17]: Yeah, yeah, that's interesting.Swyx [00:51:19]: I have to ask. Are you paying for Temporal Cloud or are you self-hosting?Stan [00:51:22]: Oh, no, no, we're paying, we're paying.Swyx [00:51:24]: Oh, okay, interesting.Stan [00:51:26]: We're paying way too much. It's crazy expensive, but it makes us-Swyx [00:51:28]: That's why as a shareholder, I like to hear that.Stan [00:51:31]: It makes us go faster, so we're happy to pay.Swyx [00:51:33]: Other things in the infra stack, I just want a list for other founders to think about. Ops, API gateway, evals, you know, anything interesting there that you build or buy?Stan [00:51:41]: I mean, there's always an interesting question. We've been building a lot around the interface with models, because Dust, the original version, was an orchestration platform and we basically provide a unified interface to every model provider.Swyx [00:51:56]: That's what I call gateway.Stan [00:51:57]: We had that because Dust was that, and so we continued building upon it and we own it. But that's an interesting question: do you want to build that or buy it?Swyx [00:52:06]: Yeah, I always say LiteLLM is the current open source consensus.Stan [00:52:09]: Exactly, yeah. There's an interesting question there.Swyx [00:52:12]: Ops, Datadog, just tracking.Stan [00:52:14]: Oh yeah, so Datadog is an obvious one... What are the mistakes that I regret? I started as pure JavaScript, not TypeScript, and I think if you're wondering, oh, I want to go fast, I'll do a little bit of JavaScript. No, don't, just start with TypeScript. I see, okay.Swyx [00:52:30]: So interesting, you are a research engineer that came out of OpenAI that bet on TypeScript.Stan [00:52:36]: Well, the reality is that if you're building a product, you're going to be doing a lot of JavaScript, right? And Next.js, we're using Next.js as an example. It's a great platform. And our internal service is actually not built in Python either, it's built in Rust.Swyx [00:52:50]: That's another fascinating choice. The Next.js story is interesting because Next.js is obviously the king of the world in JavaScript land, but recently ChatGPT just rewrote from Next.js to Remix. We are going to be having them on to talk about the big rewrite. That is like the biggest news in front-end world in a while.Alessio [00:53:07]: All right, just to wrap, in 2023, you predicted the first billion dollar company with just one person running it, and you said that's basically like a sign of AGI, once we get there. And you said it had already been started. Any 2024 updates on the take?Stan [00:53:20]: That quote was probably independently invented, but Sam Altman stole it from me eventually. 
But anyway, it's a good quote. So I hypothesized it was maybe already being started, but if it's a one-person company, it would probably grow really fast, and so we should probably see it already. I guess we're going to have to wait for it a little bit. And I think it's because the Dusts of the world don't exist yet. And so you don't have that thing that lets you run those, just do anything with models. But one thing that is exciting is maybe that we're going to be able to scale a team much further than before. A whole generation of companies might be the first billion dollar companies with engineering teams of 20 people. That would be so exciting as well. That would be so great. You know, you don't have the management hurdle, you're just 20 focused people with a lot of assistance from machines to achieve your job. That would be great. And that I believe in a bit more. Yeah.Alessio [00:54:14]: I've written a post called Maximum Enterprise Utilization, kind of like you have MFU for GPUs, but it's basically like so many people are focused on, oh, it's going to like displace jobs and whatnot. But I'm like, there's so much work that people don't do because they don't have the people. And maybe the question is that you just don't scale to that size, you know, to begin with. And maybe everybody will use Dust and Dust is only going to be 20 people and then people using Dust will be two people.Swyx [00:54:39]: So my hot take is, I actually know what vertical they'll be in. They'll be content creators and podcasters.Alessio [00:54:44]: There's already two of us, so we're at max capacity.Swyx [00:54:47]: Most people would regard Jimmy Donaldson, like Mr. Beast, as a billionaire, but his team is, he's got about like 200 people. So he's not a single person company. The closest one actually is Joe Rogan, where he basically just has like a guy. Hey, Jamie, put it on the screen. But Joe, I don't think, he sold his future for 250 million to Spotify. So he's not going to hit that billionaire status. The non-consensus one, it will be the Hawk Tuah girl.Swyx [00:55:12]: Anyway, but like you want creators who are empowered by a bunch of agents, Dust agents, to do all this stuff because then ultimately it's just the brand, the curation. What is the role of the human then? What is that one person supposed to do if you have all these agents?Stan [00:55:28]: That's a good question. I mean, I think it was the Pinterest or Dropbox founder who said at the time that when you're CEO, you mostly have an editorial position. You're here to say yes and no to the things you are supposed to do.Swyx [00:55:42]: Okay, so I make a daily AI newsletter where I just, it's 99% AI generated, but I serve the role as the editor. Like I write commentary. I choose between four options.Stan [00:55:53]: You decide what goes in and goes out. And ultimately, as you said, you build up your brand through those many decisions.Swyx [00:56:00]: You should pursue creators.Stan [00:56:03]: And you've made, I think you have an upcoming podcast with NotebookLM, which has been doing crazy stuff. That is exciting.Swyx [00:56:09]: They were just in here yesterday. I'll tell you one agent that we need. If you want to pursue the creator market, the one agent that we would pay for is our video editor agent. So if you want, you need to, you know, wrap FFmpeg in a GPT.Alessio [00:56:24]: Awesome. This was great. Anything we missed? Any final kind of call to action, hiring? 
It's like, obviously people should buy the product.Stan [00:56:32]: And no, I think we didn't dive into the vertical versus horizontal approach to AI agents. We mentioned a few things. We spike at penetration and that's just awesome because we carry the tool that the entire company has and use. So we create a ton of value, but it makes our go-to-market much harder. Vertical solutions have a go-to-market that is much easier because they're like, oh, I'm going to solve the lawyer stuff. But the potential within the company after that is limited. So there's really a nice tension there. We are true believers of the horizontal approach and we'll see how that plays out. But I think it's an interesting thing to think about when as a founder or as a technical person working with agents, what do you want to solve? Do you want to solve something general or do you want to solve something specific? And it has a lot of impact on eventually what type of company you're going to build.Swyx [00:57:21]: Yeah, I'll provide you my response on that. So I've gone the other way. I've gone products over platform. And it's basically your sense on the products drives your platform development. In other words, if you're trying to be as many things to as many people as possible, we're just trying to be one thing. We build our brand in one specific niche. And in future, if we want to choose to spin off platforms for other things, we can because we have that brand. So for example, Perplexity, we went for products in search, right? But then we also have Perplexity Labs that like here's the info that we use for search and whatever.Stan [00:57:51]: The counter argument to that is that you always have lateral movement within companies, but if you're Zendesk, you're not going to be Zendesk- Serving web services.Swyx [00:58:03]: There are a few, you know, there's success stories on both sides, but there's Amazon and Amazon web services, right? And sorry by platform,Stan [00:58:08]: I don't really mean the platform as the platform platform. I mean like the product that is useful to everybody within the company. And I'll take on that is that there is so many operations within the company. Some of them have been extremely rationalized by the markets, like salespeople, like support has been extremely rationalized. And so you can probably create very powerful vertical product around that. But there is so many operations that make up a company that are specific to the company that you need a product to help people get assisted on those operations. And that's kind of the bet we have. Excellent.Alessio [00:58:40]: Awesome, man. Thanks again for the time. Thank you very much for having me.Stan [00:58:42]: It was so much fun. Yeah, great discussion.Swyx [00:58:44]: Thank you.Stan [00:58:46]: Thank you. Get full access to Latent Space at www.latent.space/subscribe
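One idea from the conversation worth picturing in code is Stan's 'agents as actions' pattern: small, well-scripted agents that a meta agent simply calls in sequence to assemble something like the weekly meeting page. Below is a minimal sketch of that shape; the agent names and the run_agent helper are hypothetical stand-ins, not Dust's API.

```python
# Sketch of the "agents as actions" pattern: a few narrowly scripted agents
# composed by a meta agent. `run_agent` and the agent names are hypothetical.
from typing import Callable

def run_agent(name: str, instructions: str) -> str:
    """Placeholder for invoking a scripted agent (instructions + tools)
    on an agent platform and returning its output as text."""
    return f"[output of {name}: {instructions}]"

# Simple agents: one narrow, well-scripted job each.
SIMPLE_AGENTS: dict[str, Callable[[], str]] = {
    "incidents_table": lambda: run_agent(
        "incidents_table", "Take one week of #incidents Slack messages and build a table."),
    "shipped_table": lambda: run_agent(
        "shipped_table", "Take one week of #shipped Slack messages and build a table."),
    "arr_graphs": lambda: run_agent(
        "arr_graphs", "Generate the ARR and progress graphs from the finance data."),
}

def weekly_meeting_meta_agent() -> str:
    """Meta agent: no clever orchestration, it just calls the simple agents
    as actions and assembles the meeting page from their outputs."""
    sections = [f"## {name}\n{build()}" for name, build in SIMPLE_AGENTS.items()]
    return "# Weekly meeting\n\n" + "\n\n".join(sections)

if __name__ == "__main__":
    print(weekly_meeting_meta_agent())
```

The design choice this illustrates is the one Stan argues for: get each small agent reliable on rails first, and only then let a meta agent treat them as actions, rather than starting from an AutoGPT-style free-form loop.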
    --------  
    1:00:06
  • In the Arena: How LMSys changed LLM Benchmarking Forever
    Apologies for lower audio quality; we lost recordings and had to use backup tracks. Our guests today are Anastasios Angelopoulos and Wei-Lin Chiang, leads of Chatbot Arena, fka LMSYS, the crowdsourced AI evaluation platform developed by the LMSys student club at Berkeley, which became the de facto standard for comparing language models. Arena Elo is often more cited than MMLU scores to many folks, and they have attracted >1,000,000 people to cast votes since its launch, leading top model trainers to cite them over their own formal academic benchmarks:The Limits of Static BenchmarksWe’ve done two benchmarks episodes: Benchmarks 101 and Benchmarks 201. One issue we’ve always brought up with static benchmarks is that 1) many are getting saturated, with models scoring almost perfectly on them 2) they often don’t reflect production use cases, making it hard for developers and users to use them as guidance. The fundamental challenge in AI evaluation isn't technical - it's philosophical. How do you measure something that increasingly resembles human intelligence? Rather than trying to define intelligence upfront, Arena let users interact naturally with models and collect comparative feedback. It's messy and subjective, but that's precisely the point - it captures the full spectrum of what people actually care about when using AI.The Pareto Frontier of Cost vs IntelligenceBecause the Elo scores are remarkably stable over time, we can put all the chat models on a map against their respective cost to gain a view of at least 3 orders of magnitude of model sizes/costs and observe the remarkable shift in intelligence per dollar over the past year:This frontier stood remarkably firm through the recent releases of o1-preview and price cuts of Gemini 1.5:The Statistics of SubjectivityIn our Benchmarks 201 episode, Clémentine Fourrier from HuggingFace thought this design choice was one of shortcomings of arenas: they aren’t reproducible. You don’t know who ranked what and what exactly the outcome was at the time of ranking. That same person might rank the same pair of outputs differently on a different day, or might ask harder questions to better models compared to smaller ones, making it imbalanced. Another argument that people have brought up is confirmation bias. We know humans prefer longer responses and are swayed by formatting - Rob Mulla from Dreadnode had found some interesting data on this in May:The approach LMArena is taking is to use logistic regression to decompose human preferences into constituent factors. As Anastasios explains: "We can say what components of style contribute to human preference and how they contribute." By adding these style components as parameters, they can mathematically "suck out" their influence and isolate the core model capabilities.This extends beyond just style - they can control for any measurable factor: "What if I want to look at the cost adjusted performance? Parameter count? We can ex post facto measure that." This is one of the most interesting things about Arena: You have a data generation engine which you can clean and turn into leaderboards later. If you wanted to create a leaderboard for poetry writing, you could get existing data from Arena, normalize it by identifying these style components. Whether or not it’s possible to really understand WHAT bias the voters have, that’s a different question.Private EvalsOne of the most delicate challenges LMSYS faces is maintaining trust while collaborating with AI labs. 
The concern is that labs could game the system by testing multiple variants privately and only releasing the best performer. This was brought up when 4o-mini released and it ranked as the second best model on the leaderboard:But this fear misunderstands how Arena works. Unlike static benchmarks where selection bias is a major issue, Arena's live nature means any initial bias gets washed out by ongoing evaluation. As Anastasios explains: "In the long run, there's way more fresh data than there is data that was used to compare these five models." The other big question is WHAT model is actually being tested; as people often talk about on X / Discord, the same endpoint will randomly feel “nerfed” like it happened for “Claude European summer” and corresponding conspiracy theories:It’s hard to keep track of these performance changes in Arena as these changes (if real…?) are not observable.The Future of EvaluationThe team's latest work on RouteLLM points to an interesting future where evaluation becomes more granular and task-specific. But they maintain that even simple routing strategies can be powerful - like directing complex queries to larger models while handling simple tasks with smaller ones.Arena is now going to expand beyond text into multimodal evaluation and specialized domains like code execution and red teaming. But their core insight remains: the best way to evaluate intelligence isn't to simplify it into metrics, but to embrace its complexity and find rigorous ways to analyze it. To go after this vision, they are spinning out Arena from LMSys, which will stay as an academia-driven group at Berkeley.Full Video PodcastChapters* 00:00:00 - Introductions* 00:01:16 - Origin and development of Chatbot Arena* 00:05:41 - Static benchmarks vs. Arenas* 00:09:03 - Community building* 00:13:32 - Biases in human preference evaluation* 00:18:27 - Style Control and Model Categories* 00:26:06 - Impact of o1* 00:29:15 - Collaborating with AI labs* 00:34:51 - RouteLLM and router models* 00:38:09 - Future of LMSys / ArenaShow Notes* Anastasios Angelopoulos* Anastasios' NeurIPS Paper Conformal Risk Control* Wei-Lin Chiang* Chatbot Arena* LMSys* MTBench* ShareGPT dataset* Stanford's Alpaca project* LLMRouter* E2B* DreadnodeTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:14]: Hey, and today we're very happy and excited to welcome Anastasios and Wei Lin from LMSys. Welcome guys.Wei Lin [00:00:21]: Hey, how's it going? Nice to see you.Anastasios [00:00:23]: Thanks for having us.Swyx [00:00:24]: Anastasios, I actually saw you, I think at last year's NeurIPS. You were presenting a paper, which I don't really super understand, but it was some theory paper about how your method was very dominating over other sort of search methods. I don't remember what it was, but I remember that you were a very confident speaker.Anastasios [00:00:40]: Oh, I totally remember you. Didn't ever connect that, but yes, that's definitely true. Yeah. Nice to see you again.Swyx [00:00:46]: Yeah. I was frantically looking for the name of your paper and I couldn't find it. Basically I had to cut it because I didn't understand it.Anastasios [00:00:51]: Is this conformal PID control or was this the online control?Wei Lin [00:00:55]: Blast from the past, man.Swyx [00:00:57]: Blast from the past. 
It's always interesting how NeurIPS and all these academic conferences are sort of six months behind what people are actually doing, but conformal risk control, I would recommend people check it out. I have the recording. I just never published it just because I was like, I don't understand this enough to explain it.Anastasios [00:01:14]: People won't be interested.Wei Lin [00:01:15]: It's all good.Swyx [00:01:16]: But Elo scores, Elo scores are very easy to understand. You guys are responsible for the biggest revolution in language model benchmarking in the last few years. Maybe you guys want to introduce yourselves and maybe tell a little bit of the brief history of LMSysWei Lin [00:01:32]: Hey, I'm Wei Lin. I'm a fifth year PhD student at UC Berkeley, working on Chatbot Arena these days, doing crowdsourced AI benchmarking.Anastasios [00:01:43]: I'm Anastasios. I'm a sixth year PhD student here at Berkeley. I did most of my PhD on like theoretical statistics and sort of foundations of model evaluation and testing. And now I'm working 150% on this Chatbot Arena stuff. It's great.Alessio [00:02:00]: And what was the origin of it? How did you come up with the idea? How did you get people to buy in? And then maybe what were one or two of the pivotal moments early on that kind of made it the standard for these things?Wei Lin [00:02:12]: Yeah, yeah. The Chatbot Arena project was started last year in April, May, around then. Before that, we were basically experimenting in the lab with how to fine-tune an open source chatbot based on the Llama 1 model that Meta released. At that time, Llama 1 was like a base model and people didn't really know how to fine-tune it. So we were doing some explorations. We were inspired by Stanford's Alpaca project. So we basically, yeah, grew a data set from the internet, which is called the ShareGPT data set, which is like a dialogue data set of user and ChatGPT conversations. It turns out to be like pretty high quality dialogue data. So we fine-tuned on it and then we trained it and released the model called Vicuna. And people were very excited about it because it kind of demonstrated that an open weight model can reach conversation capability similar to ChatGPT. And then we basically released the model weights and also built a demo website for the model. People were very excited about it. But during the development, the biggest challenge to us at the time was like, how do we even evaluate it? How do we even argue this model we trained is better than others? And then what's the gap between this open source model and other proprietary offerings? At that time, it was like GPT-4 was just announced and there was Claude 1. What's the difference between them? And then after that, like every week, there's a new model being fine-tuned, released. So even still now, right? And then we had that demo website for Vicuna. And then we thought like, okay, maybe we can add a few more models as well, like API models as well. And then we quickly realized that people need a tool to compare between different models. So we had a side-by-side UI implemented on the website so that people could choose and compare. And we quickly realized that maybe we can do something like a battle on top of these LLMs, like just anonymize it, anonymize the identity, and have people vote which one is better. So the community decides which one is better, not us, not us arguing, you know, our model is better or what. And that turns out to be like, people are very excited about this idea. 
And then we tweet, we launch, and that's, yeah, that's April, May. And then it was like first two, three weeks, like just a few hundred thousand views tweet on our launch tweets. And then we have regularly double update weekly, beginning at a time, adding new model GPT-4 as well. So it was like, that was the, you know, the initial.Anastasios [00:04:58]: Another pivotal moment, just to jump in, would be private models, like the GPT, I'm a little,Wei Lin [00:05:04]: I'm a little chatty. That was this year. That was this year.Anastasios [00:05:07]: Huge.Wei Lin [00:05:08]: That was also huge.Alessio [00:05:09]: In the beginning, I saw the initial release was May 3rd of the beta board. On April 6, we did a benchmarks 101 episode for a podcast, just kind of talking about, you know, how so much of the data is like in the pre-training corpus and blah, blah, blah. And like the benchmarks are really not what we need to evaluate whether or not a model is good. Why did you not make a benchmark? Maybe at the time, you know, it was just like, Hey, let's just put together a whole bunch of data again, run a, make a score that seems much easier than coming out with a whole website where like users need to vote. Any thoughts behind that?Wei Lin [00:05:41]: I think it's more like fundamentally, we don't know how to automate this kind of benchmarks when it's more like, you know, conversational, multi-turn, and more open-ended task that may not come with a ground truth. So let's say if you ask a model to help you write an email for you for whatever purpose, there's no ground truth. How do you score them? Or write a story or a creative story or many other things like how we use ChatterBee these days. It's more open-ended. You know, we need human in the loop to give us feedback, which one is better. And I think nuance here is like, sometimes it's also hard for human to give the absolute rating. So that's why we have this kind of pairwise comparison, easier for people to choose which one is better. So from that, we use these pairwise comparison, those to calculate the leaderboard. Yeah. You can add more about this methodology.Anastasios [00:06:40]: Yeah. I think the point is that, and you guys probably also talked about this at some point, but static benchmarks are intrinsically, to some extent, unable to measure generative model performance. And the reason is because you cannot pre-annotate all the outputs of a generative model. You change the model, it's like the distribution of your data is changing. New labels to deal with that. New labels are great automated labeling, right? Which is why people are pursuing both. And yeah, static benchmarks, they allow you to zoom in to particular types of information like factuality, historical facts. We can build the best benchmark of historical facts, and we will then know that the model is great at historical facts. But ultimately, that's not the only axis, right? And we can build 50 of them, and we can evaluate 50 axes. But it's just so, the problem of generative model evaluation is just so expansive, and it's so subjective, that it's just maybe non-intrinsically impossible, but at least we don't see a way. We didn't see a way of encoding that into a fixed benchmark.Wei Lin [00:07:47]: But on the other hand, I think there's a challenge where this kind of online dynamic benchmark is more expensive than static benchmark, offline benchmark, where people still need it. 
Like when they build models, they need static benchmark to track where they are.Anastasios [00:08:03]: It's not like our benchmark is uniformly better than all other benchmarks, right? It just measures a different kind of performance that has proved to be useful.Swyx [00:08:14]: You guys also published MTBench as well, which is a static version, let's say, of Chatbot Arena, right? That people can actually use in their development of models.Wei Lin [00:08:25]: Right. I think one of the reasons we still do this static benchmark, we still wanted to explore, experiment whether we can automate this, because people, eventually, model developers need it to fast iterate their model. So that's why we explored LM as a judge, and ArenaHard, trying to filter, select high-quality data we collected from Chatbot Arena, the high-quality subset, and use that as a question and then automate the judge pipeline, so that people can quickly get high-quality signal, benchmark signals, using this online benchmark.Swyx [00:09:03]: As a community builder, I'm curious about just the initial early days. Obviously when you offer effectively free A-B testing inference for people, people will come and use your arena. What do you think were the key unlocks for you? Was it funding for this arena? Was it marketing? When people came in, do you see a noticeable skew in the data? Which obviously now you have enough data sets, you can separate things out, like coding and hard prompts, but in the early days, it was just all sorts of things.Anastasios [00:09:31]: Yeah, maybe one thing to establish at first is that our philosophy has always been to maximize organic use. I think that really does speak to your point, which is, yeah, why do people come? They came to use free LLM inference, right? And also, a lot of users just come to the website to use direct chat, because you can chat with the model for free. And then you could think about it like, hey, let's just be kind of like more on the selfish or conservative or protectionist side and say, no, we're only giving credits for people that battle or so on and so forth. Strategy wouldn't work, right? Because what we're trying to build is like a big funnel, a big funnel that can direct people. And some people are passionate and interested and they battle. And yes, the distribution of the people that do that is different. It's like, as you're pointing out, it's like, that's not as they're enthusiastic.Wei Lin [00:10:24]: They're early adopters of this technology.Anastasios [00:10:27]: Or they like games, you know, people like this. And we've run a couple of surveys that indicate this as well, of our user base.Wei Lin [00:10:36]: We do see a lot of developers come to the site asking polling questions, 20-30%. Yeah, 20-30%.Anastasios [00:10:42]: It's obviously not reflective of the general population, but it's reflective of some corner of the world of people that really care. And to some extent, maybe that's all right, because those are like the power users. And you know, we're not trying to claim that we represent the world, right? We represent the people that come and vote.Swyx [00:11:02]: Did you have to do anything marketing-wise? Was anything effective? Did you struggle at all? Was it success from day one?Wei Lin [00:11:09]: At some point, almost done. Okay. Because as you can imagine, this leaderboard depends on community engagement participation. 
If no one comes to vote tomorrow, then no leaderboard.Anastasios [00:11:23]: So we had some period of time when the number of users was just, after the initial launch, it went lower. Yeah. And, you know, at some point, it did not look promising. Actually, I joined the project a couple months in to do the statistical aspects, right? As you can imagine, that's how it kind of hooked into my previous work. At that time, it wasn't like, you know, it definitely wasn't clear that this was like going to be the eval or something. It was just like, oh, this is a cool project. Like Wei Lin seems awesome, you know, and that's it.Wei Lin [00:11:56]: Definitely. In the beginning, because people don't know us, people don't know what this is for. So we had a hard time. But I think we were lucky enough that we have some initial momentum. And as well as the competition between model providers just becoming, you know, became very intense. Intense. And then that makes the eval onto us, right? Because always number one is number one.Anastasios [00:12:23]: There's also an element of trust. Our main priority in everything we do is trust. We want to make sure we're doing everything like all the I's are dotted and the T's are crossed and nobody gets unfair treatment and people can see from our profiles and from our previous work and from whatever, you know, we're trustworthy people. We're not like trying to make a buck and we're not trying to become famous off of this or that. It's just, we're trying to provide a great public leaderboard community venture project.Wei Lin [00:12:51]: Yeah.Swyx [00:12:52]: Yes. I mean, you are kind of famous now, you know, that's fine. Just to dive in more into biases and, you know, some of this is like statistical control. The classic one for human preference evaluation is humans demonstrably prefer longer contexts or longer outputs, which is actually something that we don't necessarily want. You guys, I think maybe two months ago put out some length control studies. Apart from that, there are just other documented biases. Like, I'd just be interested in your review of what you've learned about biases and maybe a little bit about how you've controlled for them.Anastasios [00:13:32]: At a very high level, yeah. Humans are biased. Totally agree. Like in various ways. It's not clear whether that's good or bad, you know, we try not to make value judgments about these things. We just try to describe them as they are. And our approach is always as follows. We collect organic data and then we take that data and we mine it to get whatever insights we can get. And, you know, we have many millions of data points that we can now use to extract insights from. Now, one of those insights is to ask the question, what is the effect of style, right? You have a bunch of data, you have votes, people are voting either which way. We have all the conversations. We can say what components of style contribute to human preference and how do they contribute? Now, that's an important question. Why is that an important question? It's important because some people want to see which model would be better if the lengths of the responses were the same, were to be the same, right? People want to see the causal effect of the model's identity controlled for length or controlled for markdown, number of headers, bulleted lists, is the text bold? Some people don't, they just don't care about that.
The idea is not to impose the judgment that this is not important, but rather to say ex post facto, can we analyze our data in a way that decouples all the different factors that go into human preference? Now, the way we do this is via statistical regression. That is to say the arena score that we show on our leaderboard is a particular type of linear model, right? It's a linear model that takes, it's a logistic regression that takes model identities and fits them against human preference, right? So it regresses human preference against model identity. What you get at the end of that logistic regression is a parameter vector of coefficients. And when the coefficient is large, it tells you that GPT 4.0 or whatever, very large coefficient, that means it's strong. And that's exactly what we report in the table. It's just the predictive effect of the model identity on the vote. The other thing that you can do is you can take that vector, let's say we have M models, that is an M dimensional vector of coefficients. What you can do is you say, hey, I also want to understand what the effect of length is. So I'll add another entry to that vector, which is trying to predict the vote, right? That tells me the difference in length between two model responses. So we have that for all of our data. We can compute it ex post facto. We added it into the regression and we look at that predictive effect. And then the idea, and this is formally true under certain conditions, not always verifiable ones, but the idea is that adding that extra coefficient to this vector will kind of suck out the predictive power of length and put it into that M plus first coefficient and quote, unquote, de-bias the rest so that the effect of length is not included. And that's what we do in style control. Now we don't just do it for M plus one. We have, you know, five, six different style components that have to do with markdown headers and bulleted lists and so on that we add here. Now, where is this going? You guys see the idea. It's a general methodology. If you have something that's sort of like a nuisance parameter, something that exists and provides predictive value, but you really don't want to estimate that. You want to remove its effect. In causal inference, these things are called like confounders often. What you can do is you can model the effect. You can put them into your model and try to adjust for them. So another one of those things might be cost. You know, what if I want to look at the cost adjusted performance of my model, which models are punching above their weight, parameter count, which models are punching above their weight in terms of parameter count, we can ex post facto measure that. We can do it without introducing anything that compromises the organic nature of theWei Lin [00:17:17]: data that we collect.Anastasios [00:17:18]: Hopefully that answers the question.Wei Lin [00:17:20]: It does.Swyx [00:17:21]: So I guess with a background in econometrics, this is super familiar.Anastasios [00:17:25]: You're probably better at this than me for sure.Swyx [00:17:27]: Well, I mean, so I used to be, you know, a quantitative trader and so, you know, controlling for multiple effects on stock price is effectively the job. So it's interesting. Obviously the problem is proving causation, which is hard, but you don't have to do that.Anastasios [00:17:45]: Yes. Yes, that's right. And causal inference is a hard problem and it goes beyond statistics, right? 
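A hedged sketch of the regression Anastasios just walked through, not the production code (which lives in the team's open-source notebooks): each battle becomes a row whose features are +1 for the model on one side, -1 for the model on the other side, plus a style column such as the difference in response length, and the label is which side won. The data below is synthetic and the scaling choices are illustrative.

```python
# Hedged sketch of a Bradley-Terry-style fit with a length "nuisance" column.
# The battles below are synthetic; the real pipeline is in LMSYS's open notebooks.
import numpy as np
from sklearn.linear_model import LogisticRegression

models = ["model_a", "model_b", "model_c"]
idx = {m: i for i, m in enumerate(models)}

# Each battle: (first model, second model, len(first) - len(second) in chars, first_won)
battles = [
    ("model_a", "model_b", 120, 1),
    ("model_b", "model_c", -40, 1),
    ("model_a", "model_c", 300, 1),
    ("model_c", "model_a", -250, 0),
    ("model_b", "model_a", 10, 0),
]

X, y = [], []
for first, second, len_diff, first_won in battles:
    row = np.zeros(len(models) + 1)
    row[idx[first]] += 1.0        # +1 for the model on the "first" side
    row[idx[second]] -= 1.0       # -1 for the model on the "second" side
    row[-1] = len_diff / 1000.0   # style covariate: normalized length difference
    X.append(row)
    y.append(first_won)

# fit_intercept=False because only rating differences are identified in Bradley-Terry
clf = LogisticRegression(fit_intercept=False).fit(np.array(X), np.array(y))
strengths = dict(zip(models, clf.coef_[0][:-1]))
print("model coefficients (relative strength):", strengths)
print("length coefficient (the nuisance effect):", clf.coef_[0][-1])
```

Adding the length column soaks up the predictive power of verbosity, so the model coefficients read as strength with length held fixed; the production version adds several more style columns (markdown headers, bold text, bulleted lists) in exactly the same way.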
It's like you have to build the right causal model and so on and so forth. But we think that this is a good first step and we're sort of looking forward to learning from more people. You know, there's some good people at Berkeley that work on causal inference, so we're looking forward to learning from them on, like, what are the really most contemporary techniques that we can use in order to estimate true causal effects if possible.Swyx [00:18:10]: Maybe we could take a step through the other categories. So style control is a category. It is not a default. I had thought that when you wrote that blog post, actually, I thought it would be the new default because it seems like the most obvious thing to control for. But you also have other categories, you have coding, you have hard prompts. We consider that.Anastasios [00:18:27]: We're still actively considering it. It's just, you know, once you make that step, once you take that step, you're introducing your opinion and I'm not, you know, why should our opinion be the one? That's kind of a community choice. We could put it to a vote.Wei Lin [00:18:39]: We could pass.Anastasios [00:18:40]: Yeah, maybe do a poll. Maybe do a poll.Swyx [00:18:42]: I don't know. No opinion is an opinion.Wei Lin [00:18:44]: You know what I mean?Swyx [00:18:45]: Yeah.Wei Lin [00:18:46]: There's no neutral choice here.Swyx [00:18:47]: Yeah. You have all these others. You have instruction following too. What are your favorite categories that you like to talk about? Maybe you tell a little bit of the stories, tell a little bit of like the hard choices that you had to make.Wei Lin [00:18:57]: Yeah. Yeah. Yeah. I think the, uh, initially the reason why we want to add these new categories is essentially to answer some of the questions from our community, which is we won't have a single leaderboard for everything. So these models behave very differently in different domains. Let's say this model is trained for coding, this model trained for more technical questions and so on. On the other hand, to answer people's question about like, okay, what if all these low quality, you know, because we crowdsource data from the internet, there will be noise. So how do we de-noise? How do we filter out these low quality data effectively? So that was like, you know, some questions we want to answer. So basically we spent a few months, like really diving into these questions to understand how do we filter all these data because these are like millions of data points. And then if you want to re-label yourself, it's possible, but we need to kind of like automate this kind of data classification pipeline for us to effectively categorize them to different categories, say coding, math, structure, and also harder problems. So that was like, the hope is when we slice the data into these meaningful categories to give people more like better signals, more direct signals, and that's also to clarify what we are actually measuring for, because I think that's the core part of the benchmark. That was the initial motivation. Does that make sense?Anastasios [00:20:27]: Yeah. Also, I'll just say, this does like get back to the point that the philosophy is to like mine organic, to take organic data and then mine it ex post facto.Alessio [00:20:35]: Is the data cage-free too, or just organic?Anastasios [00:20:39]: It's cage-free.Wei Lin [00:20:40]: No GMO. Yeah. And all of these efforts are like open source, like we open source all of the data cleaning pipeline, filtering pipeline.
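As a toy illustration of the categorization step Wei Lin describes, here is a keyword-based placeholder; the real pipeline uses LLM-based classifiers over the crowd-sourced prompts, and the category names below are just the ones mentioned in the conversation.

```python
# Toy placeholder for the prompt-categorization step described above.
# The real pipeline uses LLM-based classifiers; this keyword heuristic is only a sketch.
import re

def categorize(prompt: str) -> set:
    """Assign coarse categories to a crowd-sourced prompt."""
    tags = set()
    if re.search(r"```|def |class |import |SELECT |function|python|code", prompt, re.I):
        tags.add("coding")
    if re.search(r"\d+\s*[\+\-\*/^=]\s*\d+|integral|derivative|\bprove\b", prompt, re.I):
        tags.add("math")
    if len(prompt.split()) > 150:
        tags.add("hard/long")   # crude stand-in for "harder problems"
    if not tags:
        tags.add("general")
    return tags

print(categorize("Write a Python function that parses a CSV file."))   # {'coding'}
print(categorize("Prove that the sum of two even numbers is even."))   # {'math'}
```

A production version would swap the regexes for an LLM classifier and audit a labeled sample for accuracy, but the downstream slicing into per-category leaderboards works the same way.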
Yeah.Swyx [00:20:50]: I love the notebooks you guys publish. Actually really good just for learning statistics.Wei Lin [00:20:54]: Yeah. I'll share these insights with everyone.Alessio [00:20:59]: I agree on the initial premise of, Hey, writing an email, writing a story, there's like no ground truth. But I think as you move into like coding and like red teaming, some of these things, there's like kind of like skill levels. So I'm curious how you think about the distribution of skill of the users. Like maybe the top 1% of red teamers is just not participating in the arena. So how do you guys think about adjusting for it? And like feels like this where there's kind of like big differences between the average and the top. Yeah.Anastasios [00:21:29]: Red teaming, of course, red teaming is quite challenging. So, okay. Moving back. There's definitely like some tasks that are not as subjective that like pairwise human preference feedback is not the only signal that you would want to measure. And to some extent, maybe it's useful, but it may be more useful if you give people better tools. For example, it'd be great if we could execute code within the arena, that would be fantastic.Wei Lin [00:21:52]: We want to do it.Anastasios [00:21:53]: There's also this idea of constructing a user leaderboard. What does that mean? That means some users are better than others. And how do we measure that? How do we quantify that? Hard in chatbot arena, but where it is easier is in red teaming, because in red teaming, there's an explicit game. You're trying to break the model, you either win or you lose. So what you can do is you can say, Hey, what's really happening here is that the models and humans are playing a game against one another. And then you can use the same sort of Bradley Terry methodology with some extensions that we came up with; you can read one of our recent blog posts for the sort of theoretical extensions. You can attribute like strength back to individual players and jointly attribute strength to like the models that are in this jailbreaking game, along with the target tasks, like what types of jailbreaks you want.Wei Lin [00:22:44]: So yeah.Anastasios [00:22:45]: And I think that this is, this is a hugely important and interesting avenue that we want to continue researching. We have some initial ideas, but you know, all thoughts are welcome.Wei Lin [00:22:54]: Yeah.Alessio [00:22:55]: So first of all, on the code execution, the E2B guys, I'm sure they'll be happy to helpWei Lin [00:22:59]: you.Alessio [00:23:00]: I'll set that up. They're big fans. We're investors in a company called Dreadnought, which does a lot in AI red teaming. I think to me, the most interesting thing has been, how do you do sure? Like the model jailbreak is one side. We also had Nicholas Carlini from DeepMind on the podcast, and he was talking about, for example, like, you know, context stealing and like weight stealing. So there's kind of like a lot more that goes around it. I'm curious just how you think about the model and then maybe like the broader system, even with Red Team Arena, you're just focused on like jailbreaking of the model, right?
You're not doing kind of like any testing on the more system level thing of the model where like, maybe you can get the training data back, you're going to exfiltrate some of the layers and the weights and things like that.Wei Lin [00:23:43]: So right now, as you can see, the Red Team Arena is at a very early stage and we are still exploring what could be the potential new games we can introduce to the platform. So the idea is still the same, right? And we build a community driven project platform for people. They can have fun with this website, for sure. That's one thing, and then help everyone to test these models. So one of the aspects you mentioned is stealing secrets, stealing training sets. That could be one, you know, it could be designed as a game. Say, can you still use their credential, you know, we hide, maybe we can hide the credential into system prompts and so on. So there are like a few potential ideas we want to explore for sure. Do you want to add more?Anastasios [00:24:28]: I think that this is great. This idea is a great one. There's a lot of great ideas in the Red Teaming space. You know, I'm not personally like a Red Teamer. I don't like go around and Red Team models, but there are people that do that and they're awesome. They're super skilled. When I think about the Red Team arena, I think those are really the people that we're building it for. Like, we want to make them excited and happy, build tools that they like. And just like chatbot arena, we'll trust that this will end up being useful for the world. And all these people are, you know, I won't say all these people in this community are actually good hearted, right? They're not doing it because they want to like see the world burn. They're doing it because they like, think it's fun and cool. And yeah. Okay. Maybe they want to see, maybe they want a little bit.Wei Lin [00:25:13]: I don't know. Majority.Anastasios [00:25:15]: Yeah.Wei Lin [00:25:16]: You know what I'm saying.Anastasios [00:25:17]: So, you know, trying to figure out how to serve them best, I think, I don't know where that fits. I just, I'm not expressing. And give them credits, right?Wei Lin [00:25:24]: And give them credit.Anastasios [00:25:25]: Yeah. Yeah. So I'm not trying to express any particular value judgment here as to whether that's the right next step. It's just, that's sort of the way that I think we would think about it.Swyx [00:25:35]: Yeah. We also talked to Sander Schulhoff of the HackerPrompt competition, and he's pretty interested in Red Teaming at scale. Let's just call it that. You guys maybe want to talk with him.Wei Lin [00:25:45]: Oh, nice.Swyx [00:25:46]: We wanted to cover a little, a few topical things and then go into the other stuff that your group is doing. You know, you're not just running Chatbot Arena. We can also talk about the new website and your future plans, but I just wanted to briefly focus on O1. It is the hottest, latest model. Obviously, you guys already have it on the leaderboard. What is the impact of O1 on your evals?Wei Lin [00:26:06]: Made our interface slower.Anastasios [00:26:07]: It made it slower.Swyx [00:26:08]: Yeah.Wei Lin [00:26:10]: Because it needs like 30, 60 seconds, sometimes even more to, the latency is like higher. So that's one. Sure. But I think we observe very interesting things from this model as well. Like we observe like significant improvement in certain categories, like more technical or math. 
Yeah.Anastasios [00:26:32]: I think actually like one takeaway that was encouraging is that I think a lot of people before the O1 release were thinking, oh, like this benchmark is saturated. And why were they thinking that? They were thinking that because there was a bunch of models that were kind of at the same level. They were just kind of like incrementally competing and it sort of wasn't immediately obvious that any of them were any better. Nobody, including any individual person, it's hard to tell. But what O1 did is it was, it's clearly a better model for certain tasks. I mean, I used it for like proving some theorems and you know, there's some theorems that like only I know because I still do a little bit of theory. Right. So it's like, I can go in there and ask like, oh, how would you prove this exact thing? Which I can tell you has never been in the public domain. It'll do it. It's like, what?Wei Lin [00:27:19]: Okay.Anastasios [00:27:20]: So there's this model and it crushed the benchmark. You know, it's just like really like a big gap. And what that's telling us is that it's not saturated yet. It's still measuring some signal. That was encouraging. The point, the takeaway is that the benchmark is comparative. There's no absolute number. There's no maximum ELO. It's just like, if you're better than the rest, then you win. I think that was actually quite helpful to us.Swyx [00:27:46]: I think people were criticizing, I saw some of the academics criticizing it as not apples to apples. Right. Like, because it can take more time to reason, it's basically doing some search, doing some chain of thought that if you actually let the other models do that same thing, they might do better.Wei Lin [00:28:03]: Absolutely.Anastasios [00:28:04]: To be clear, none of the leaderboard currently is apples to apples because you have like Gemini Flash, you have, you know, all sorts of tiny models like Lama 8B, like 8B and 405B are not apples to apples.Wei Lin [00:28:19]: Totally agree. They have different latencies.Anastasios [00:28:21]: Different latencies.Wei Lin [00:28:22]: Control for latency. Yeah.Anastasios [00:28:24]: Latency control. That's another thing. We can do style control, but latency control. You know, things like this are important if you want to understand the trade-offs involved in using AI.Swyx [00:28:34]: O1 is a developing story. We still haven't seen the full model yet, but it's definitely a very exciting new paradigm. I think one community controversy I just wanted to give you guys space to address is the collaboration between you and the large model labs. People have been suspicious, let's just say, about how they choose to A-B test on you. I'll state the argument and let you respond, which is basically they run like five anonymous models and basically argmax their Elo on LMSYS or chatbot arena, and they release the best one. Right? What has been your end of the controversy? How have you decided to clarify your policy going forward?Wei Lin [00:29:15]: On a high level, I think our goal here is to build a fast eval for everyone, and including everyone in the community can see the data board and understand, compare the models. More importantly, I think we want to build the best eval also for model builders, like all these frontier labs building models. They're also internally facing a challenge, which is how do they eval the model? That's the reason why we want to partner with all the frontier lab people, and then to help them testing. That's one of the... 
We want to solve this technical challenge, which is eval. Yeah.Anastasios [00:29:54]: I mean, ideally, it benefits everyone, right?Wei Lin [00:29:56]: Yeah.Anastasios [00:29:57]: And people also are interested in seeing the leading edge of the models. People in the community seem to like that. Oh, there's a new model up. Is this strawberry? People are excited. People are interested. Yeah. And then there's this question that you bring up of, is it actually causing harm?Wei Lin [00:30:15]: Right?Anastasios [00:30:16]: Is it causing harm to the benchmark that we are allowing this private testing to happen? Maybe stepping back, why do you have that instinct? The reason why you and others in the community have that instinct is because when you look at something like a benchmark, like an image net, a static benchmark, what happens is that if I give you a million different models that are all slightly different, and I pick the best one, there's something called selection bias that plays in, which is that the performance of the winning model is overstated. This is also sometimes called the winner's curse. And that's because statistical fluctuations in the evaluation, they're driving which model gets selected as the top. So this selection bias can be a problem. Now there's a couple of things that make this benchmark slightly different. So first of all, the selection bias that you include when you're only testing five models is normally empirically small.Wei Lin [00:31:12]: And that's why we have these confidence intervals constructed.Anastasios [00:31:16]: That's right. Yeah. Our confidence intervals are actually not multiplicity adjusted. One thing that we could do immediately tomorrow in order to address this concern is if a model provider is testing five models and they want to release one, and we're constructing the models at level one minus alpha, we can just construct the intervals instead at level one minus alpha divided by five. That's called Bonferroni correction. What that'll tell you is that the final performance of the model, the interval that gets constructed, is actually formally correct. We don't do that right now, partially because we know from simulations that the amount of selection bias you incur with these five things is just not huge. It's not huge in comparison to the variability that you get from just regular human voters. So that's one thing. But then the second thing is the benchmark is live, right? So what ends up happening is it'll be a small magnitude, but even if you suffer from the winner's curse after testing these five models, what'll happen is that over time, because we're getting new data, it'll get adjusted down. So if there's any bias that gets introduced at that stage, in the long run, it actually doesn't matter. Because asymptotically, basically in the long run, there's way more fresh data than there is data that was used to compare these five models against these private models.Swyx [00:32:35]: The announcement effect is only just the first phase and it has a long tail.Anastasios [00:32:39]: Yeah, that's right. And it sort of like automatically corrects itself for this selection adjustment.Swyx [00:32:45]: Every month, I do a little chart of LMSys Elo versus cost, just to track the price per dollar, the amount of like, how much money do I have to pay for one incremental point in ELO? And so I actually observe an interesting stability in most of the Elo numbers, except for some of them. 
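The Bonferroni adjustment Anastasios describes is a one-line change to how the intervals are built. A hedged sketch under a normal approximation, with made-up numbers: if a provider privately tests k variants and ships the best one, report that model's interval at level 1 - alpha/k instead of 1 - alpha.

```python
# Sketch of the Bonferroni correction discussed above: if a provider tests k anonymous
# variants and ships the best one, report its interval at level 1 - alpha/k.
# The score and standard error below are made up for illustration.
from scipy.stats import norm

def arena_interval(score: float, std_err: float, alpha: float = 0.05, k: int = 1):
    """Normal-approximation CI, Bonferroni-adjusted for k simultaneous comparisons."""
    z = norm.ppf(1 - (alpha / k) / 2)   # two-sided critical value at level alpha/k
    return score - z * std_err, score + z * std_err

score, se = 1275.0, 4.0                 # hypothetical arena score and standard error
print("unadjusted 95% interval: ", arena_interval(score, se))       # roughly (1267, 1283)
print("Bonferroni with k=5:     ", arena_interval(score, se, k=5))  # roughly (1265, 1285)
```

As he notes, they don't apply this today because simulations suggest the selection bias from testing a handful of private variants is small next to ordinary voter noise, and fresh votes wash any residual bias out over time.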
For example, GPT-4o August has fallen from 1290 to 1260 over the past few months. And it's surprising.Wei Lin [00:33:11]: You're saying like a new version of GPT-4o versus the version in May?Swyx [00:33:17]: There was May. May is 1285. I could have made some data entry error, but it'd be interesting to track these things over time. Anyway, I observed like numbers go up, numbers go down. It's remarkably stable. Gotcha.Anastasios [00:33:28]: So there are two different track points and the Elo has fallen.Wei Lin [00:33:31]: Yes.Swyx [00:33:32]: And sometimes ELOs rise as well. I think a core rose from 1200 to 1230. And that's one of the things, by the way, the community is always suspicious about, like, hey, did this same endpoint get dumber after release? Right? It's such a meme.Anastasios [00:33:45]: That's funny. But those are different endpoints, right?Wei Lin [00:33:47]: Yeah, those are different API endpoints, I think. For GPT-4o, August and May. But if it's for like, you know, endpoint versions we fixed, usually we observe small variation after release.Anastasios [00:34:04]: I mean, you can quantify the variations that you would expect in an ELO. That's a closed-form number that you can calculate. So if the variations are larger than we would expect, then that indicates that we shouldWei Lin [00:34:17]: look into that. For sure.Anastasios [00:34:19]: That's important for us to know. So maybe you should send us a reply. Yeah, please.Wei Lin [00:34:22]: I'll send you some data. Yeah.Alessio [00:34:24]: And I know we only got a few minutes before we wrap, but there are two things I would definitely love to talk about. One is RouteLLM. So talking about models, maybe getting dumber over time, blah, blah, blah. Are routers actually helpful in your experience? And Sean pointed out that MOEs are technically routers too. So how do you kind of think about the router being part of the model versus routing different models? And yeah, overall learnings from building it?Wei Lin [00:34:51]: Yeah. So RouteLLM is a project we released a few months ago, I think. And our goal was to basically understand, can we use the preference data we collect to route model based on the question, conditional on the questions, because we will make assumption that some model are good at math, some model are good at coding, things like that. So we found it somewhat useful. For sure, this is like ongoing effort. Our first phase with this project is pretty much like open source, the framework that we develop. So for anyone interested in this problem, they can use the framework, and then they can train their own router model, and then to do evaluation to benchmark. So that's our goal, the reason why we released this framework. And I think there are a couple of future stuff we are thinking. One is, can we just scale this, do even more data, even more preference data, and then train a reward model, train like a router model, better router model. Another thing is, release a benchmark, because right now, currently, there seems to be, one of the pain points when we developed this project was like, there's just no good benchmark for a router. So that will be another thing we think could be a useful contribution to community. And there's still, for sure, methodology, new methodology we can use.Swyx [00:36:18]: I think my fundamental philosophical doubt is, does the router model have to be at least as smart as the smartest model? What's the minimum required intelligence of a router model, right?
Like, if it's too dumb, it's not going to route properly.Anastasios [00:36:32]: Well, I think that you can build a very, very simple router that is very effective. So let me give you an example. You can build a great router with one parameter, and the parameter is just like, I'm going to check if my question is hard. And if it's hard, then I'm going to go to the big model. If it's easy, I'm going to go to the little model. You know, there's various ways of measuring hard that are like, pretty trivial, right? Like, does it have code? Does it have math? Is it long? That's already a great first step, right? Because ultimately, at the end of the day, you're competing with a weak baseline, which is any individual model. And you're trying to ask the question, how do I improve cost? And that's like a one-dimensional trade-off. It's like performance cost, and it's great. Now, you can also get into the extension, which is what models are good at what particularWei Lin [00:37:23]: types of queries.Anastasios [00:37:25]: And then, you know, I think your concern starts taking into effect is, can we actually do that? Can we estimate which models are good in which parts of the space in a way that doesn't introduce more variability and more variation and error into our final pipeline than just using the best of them? That's kind of how I see it.Swyx [00:37:44]: Your approach is really interesting compared to the commercial approaches where you use information from the chat arena to inform your model, which is, I mean, smart, and it's the foundation of everything you do. Yep.Alessio [00:37:56]: As we wrap, can we just talk about LMSYS and what that's going to be going forward? Like, LMRENA, I'm becoming something. I saw you announced yesterday you're graduating. I think maybe that was confusing since you're PhD students, but this is a different typeWei Lin [00:38:09]: of graduation.Anastasios [00:38:10]: Just for context, LMSYS started as like a student club.Wei Lin [00:38:15]: Student driven. Yeah.Anastasios [00:38:16]: Student driven, like research projects, you know, many different research projects are part of LMSYS. Sort of chatbot arena has, of course, like kind of become its own thing. And Lianmin and Ying, who are, you know, created LMSYS, have kind of like moved on to working on SGLANG. And now they're doing other projects that are sort of originated from LMSYS. And for that reason, we thought it made sense to kind of decouple the two. Just so, A, the LMSYS thing, it's not like when someone says LMSYS, they think of chatbot arena. That's not fair, so to speak.Wei Lin [00:38:52]: And we want to support new projects.Anastasios [00:38:54]: And we want to support new projects and so on and so forth. But of course, these are all like, you know, our friends.Wei Lin [00:38:59]: So that's why we call it graduation. I agree.Alessio [00:39:03]: That's like one thing that people wear. Maybe a little confused by where LMSYS kind of starts and ends and where arena startsWei Lin [00:39:10]: and ends.Alessio [00:39:10]: So I think you reach escape velocity now that you're kind of like your own thing.Swyx [00:39:15]: So I have one parting question. Like, what do you want more of? Like, what do you want people to approach you with?Anastasios [00:39:21]: Oh, my God, we need so much help. One thing would be like, we're obviously expanding into like other kinds of arenas, right? We definitely need like active help on red teaming. 
We definitely need active help on our different modalities, different modalities.Wei Lin [00:39:35]: So pilot, yeah, coding, coding.Anastasios [00:39:38]: You know, if somebody could like help us implement this, like REPL in REPL in chatbot arena,Wei Lin [00:39:44]: massive, that would be a massive delta.Anastasios [00:39:45]: And I know that there's people out there who are passionate and capable of doing it. It's just, we don't have enough hands on deck. We're just like an academic research lab, right? We're not equipped to support this kind of project. So, yeah, we need help with that. We also need just like general back-end dev. And new ideas, new conceptual ideas. I mean, honestly, the work that we do spans everything from like foundational statistics, like new proofs to full stack dev. And like anybody who's like, wants to contribute something to that pipeline is, should definitely reach out.Wei Lin [00:40:22]: We need it. And it's an open source project anyways. Anyone can make a PR.Anastasios [00:40:26]: And we're happy to, you know, whoever wants to contribute, we'll give them credit, you know? We're not trying to keep all the credit for ourselves. We want it to be a community project.Wei Lin [00:40:33]: That's great.Alessio [00:40:34]: And fits this pair of everything you've been doing over there. So, awesome, guys. Well, thank you so much for taking the time. And we'll put all the links in the show notes so that people can find you and reach out if they need it. Thank you so much.Anastasios [00:40:46]: It's very nice to talk to you. And thank you for the wonderful questions.Wei Lin [00:40:49]: Thank you so much. Get full access to Latent Space at www.latent.space/subscribe
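Going back to the router discussion above: a minimal sketch of the one-parameter "is this question hard?" baseline Anastasios describes, with placeholder model names and heuristics. This is an illustration of the baseline any learned router has to beat, not RouteLLM itself, which learns the routing decision from Chatbot Arena preference data.

```python
# Minimal sketch of the "is this question hard?" router described in the episode.
# Model names and the heuristics are placeholders; this is a baseline, not RouteLLM.
import re

BIG_MODEL = "big-frontier-model"    # expensive, strong
SMALL_MODEL = "small-cheap-model"   # cheap, weaker

def looks_hard(question: str) -> bool:
    """Crude hardness check: code, math, or very long prompts count as hard."""
    has_code = bool(re.search(r"```|def |class |#include|SELECT ", question))
    has_math = bool(re.search(r"\bprove\b|\bintegral\b|\bderivative\b|\d+\s*[\+\-\*/]\s*\d+",
                              question, re.I))
    is_long = len(question.split()) > 200
    return has_code or has_math or is_long

def route(question: str) -> str:
    """One-parameter policy: anything that looks hard goes to the big model."""
    return BIG_MODEL if looks_hard(question) else SMALL_MODEL

print(route("What's the capital of France?"))                    # small-cheap-model
print(route("Prove that sqrt(2) is irrational, step by step."))  # big-frontier-model
```

Even a check this crude already trades cost against quality, which is the one-dimensional comparison Anastasios mentions.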
    --------  
    41:02
  • How NotebookLM Was Made
    If you’ve listened to the podcast for a while, you might have heard our ElevenLabs-powered AI co-host Charlie a few times. Text-to-speech has made amazing progress in the last 18 months, with OpenAI’s Advanced Voice Mode (aka “Her”) as a sneak peek of the future of AI interactions (see our “Building AGI in Real Time” recap). Yet, we had yet to see a real killer app for AI voice (not counting music).Today’s guests, Raiza Martin and Usama Bin Shafqat, are the lead PM and AI engineer behind the NotebookLM feature flag that gave us the first viral AI voice experience, the “Deep Dive” podcast:The idea behind the “Audio Overviews” feature is simple: take a bunch of documents, websites, YouTube videos, etc, and generate a podcast out of them. This was one of the first demos that people built with voice models + RAG + GPT models, but it was always a glorified speech-to-text. Raiza and Usama took a very different approach:* Make it conversational: when you listen to a NotebookLM audio there are a ton of micro-interjections (Steven Johnson calls them disfluencies) like “Oh really?” or “Totally”, as well as pauses and “uh…”, like you would expect in a real conversation. These are not generated by the LLM in the transcript, but they are built into the the audio model. See ~28:00 in the pod for more details. * Listeners love tension: if two people are always in agreement on everything, it’s not super interesting. They tuned the model to generate flowing conversations that mirror the tone and rhythm of human speech. They did not confirm this, but many suspect the 2 year old SoundStorm paper is related to this model.* Generating new insights: because the hosts’ goal is not to summarize, but to entertain, it comes up with funny metaphors and comparisons that actually help expand on the content rather than just paraphrasing like most models do. We have had listeners make podcasts out of our podcasts, like this one.This is different than your average SOTA-chasing, MMLU-driven model buildooor. Putting product and AI engineering in the same room, having them build evals together, and understanding what the goal is lets you get these unique results. The 5 rules for AI PMsWe always focus on AI Engineers, but this episode had a ton of AI PM nuggets as well, which we wanted to collect as NotebookLM is one of the most successful products in the AI space:1. Less is more: the first version of the product had 0 customization options. All you could do is give it source documents, and then press a button to generate. Most users don’t know what “temperature” or “top-k” are, so you’re often taking the magic away by adding more options in the UI. Since recording they added a few, like a system prompt, but those were features that users were “hacking in”, as Simon Willison highlighted in his blog post.2. Use Real-Time Feedback: they built a community of 65,000 users on Discord that is constantly reporting issues and giving feedback; sometimes they noticed server downtime even before the Google internal monitoring did. Getting real time pings > aggregating user data when doing initial iterations. 3. Embrace Non-Determinism: AI outputs variability is a feature, not a bug. Rather than limiting the outputs from the get-go, build toggles that you can turn on/off with feature flags as the feedback starts to roll in.4. Curate with Taste: if you try your product and it sucks, you don’t need more data to confirm it. Just scrap that and iterate again. 
This is even easier for a product like this; if you start listening to one of the podcasts and turn it off after 10 seconds, it’s never a good sign. 5. Stay Hands-On: It’s hard to build taste if you don’t experiment. Trying out all your competitors products as well as unrelated tools really helps you understand what users are seeing in market, and how to improve on it.Chapters00:00 Introductions01:39 From Project Tailwind to NotebookLM09:25 Learning from 65,000 Discord members12:15 How NotebookLM works18:00 Working with Steven Johnson23:00 How to prioritize features25:13 Structuring the data pipelines29:50 How to eval34:34 Steering the podcast outputs37:51 Defining speakers personalities39:04 How do you make audio engaging?45:47 Humor is AGI51:38 Designing for non-determinism53:35 API when?55:05 Multilingual support and dialect considerations57:50 Managing system prompts and feature requests01:00:58 Future of NotebookLM01:04:59 Podcasts for your codebase01:07:16 Plans for real-time chat01:08:27 Wrap upShow Notes* Notebook LM* AI Test Kitchen* Nicholas Carlini* Steven Johnson* Wealth of Nations* Histories of Mysteries by Andrej Karpathy* chicken.pdf Threads* Area 120* Raiza Martin* Usama Bin ShafqatTranscriptNotebookLM [00:00:00]: Hey everyone, we're here today as guests on Latent Space. It's great to be here, I'm a long time listener and fan, they've had some great guests on this show before. Yeah, what an honor to have us, the hosts of another podcast, join as guests. I mean a huge thank you to Swyx and Alessio for the invite, thanks for having us on the show. Yeah really, it seems like they brought us here to talk a little bit about our show, our podcast. Yeah, I mean we've had lots of listeners ourselves, listeners at Deep Dive. Oh yeah, we've made a ton of audio overviews since we launched and we're learning a lot. There's probably a lot we can share around what we're building next, huh? Yeah, we'll share a little bit at least. The short version is we'll keep learning and getting better for you. We're glad you're along for the ride. So yeah, keep listening. Keep listening and stay curious. We promise to keep diving deep and bringing you even better options in the future. Stay curious.Alessio [00:00:52]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Residence at Decibel Partners. And I'm joined by my co-host, Swyx, founder of Smol.ai.Swyx [00:01:01]: Hey, and today we're back in the studio with our special guest, Raiza Martin. And Raiza, I forgot to get your last name, Shafqat.Raiza [00:01:10]: Yes.Swyx [00:01:10]: Okay, welcome.Raiza [00:01:12]: Hello, thank you for having us.Swyx [00:01:14]: So AI podcasters meet human podcasters, always fun. Congrats on the success of Notebook LM. I mean, how does it feel?Raiza [00:01:22]: It's been a lot of fun. A lot of it, honestly, was unexpected. But my favorite part is really listening to the audio overviews that people have been making.Swyx [00:01:29]: Maybe we should do a little bit of intros and tell the story. You know, what is your path into the sort of Google AI org? Or maybe, actually, I don't even know what org you guys are in.Raiza [00:01:39]: I can start. My name is Raisa. I lead the Notebook LM team inside of Google Labs. So specifically, that's the org that we're in. It's called Google Labs. It's only about two years old. And our whole mandate is really to build AI products. That's it. We work super closely with DeepMind. 
Our entire thing is just, like, try a bunch of things and see what's landing with users. And the background that I have is, really, I worked in payments before this, and I worked in ads right before, and then startups. I tell people, like, at every time that I changed orgs, I actually almost quit Google. Like, specifically, like, in between ads and payments, I was like, all right, I can't do this. Like, this is, like, super hard. I was like, it's not for me. I'm, like, a very zero-to-one person. But then I was like, okay, I'll try. I'll interview with other teams. And when I interviewed in payments, I was like, oh, these people are really cool. I don't know if I'm, like, a super good fit with this space, but I'll try it because the people are cool. And then I really enjoyed that, and then I worked on, like, zero-to-one features inside of payments, and I had a lot of fun. But then the time came again where I was like, oh, I don't know. It's like, it's time to leave. It's time to start my own thing. But then I interviewed inside of Google Labs, and I was like, oh, darn. Like, there's definitely, like—Alessio [00:02:48]: They got you again.Raiza [00:02:49]: They got me again. And so now I've been here for two years, and I'm happy that I stayed because especially with, you know, the recent success of Notebook LM, I'm like, dang, we did it. I actually got to do it. So that was really cool.Usama [00:03:02]: Kind of similar, honestly. I was at a big team at Google. We do sort of the data center supply chain planning stuff. Google has, like, the largest sort of footprint. Obviously, there's a lot of management stuff to do there. But then there was this thing called Area 120 at Google, which does not exist anymore. But I sort of wanted to do, like, more zero-to-one building and landed a role there. We were trying to build, like, a creator commerce platform called Kaya. It launched briefly a couple years ago. But then Area 120 sort of transitioned and morphed into Labs. And, like, over the last few years, like, the focus just got a lot clearer. Like, we were trying to build new AI products and do it in the wild and sort of co-create and all of that. So, you know, we've just been trying a bunch of different things. And this one really landed, which has felt pretty phenomenal. Really, really landed.Swyx [00:03:53]: Let's talk about the brief history of Notebook LM. You had a tweet, which is very helpful for doing research. May 2023, during Google I.O., you announced Project Tailwind.Raiza [00:04:03]: Yeah.Swyx [00:04:03]: So today is October 2024. So you joined October 2022?Raiza [00:04:09]: Actually, I used to lead AI Test Kitchen. And this was actually, I think, not I.O. 2023. I.O. 2022 is when we launched AI Test Kitchen, or announced it. And I don't know if you remember it.Swyx [00:04:23]: That's how you, like, had the basic prototype for Gemini.Raiza [00:04:26]: Yes, yes, exactly. Lambda.Swyx [00:04:28]: Gave beta access to people.Raiza [00:04:29]: Yeah, yeah, yeah. And I remember, I was like, wow, this is crazy. We're going to launch an LLM into the wild. And that was the first project that I was working on at Google. But at the same time, my manager at the time, Josh, he was like, hey, I want you to really think about, like, what real products would we build that are not just demos of the technology? That was in October of 2022. I was sitting next to an engineer that was working on a project called Talk to Small Corpus. His name was Adam. 
And the idea of Talk to Small Corpus is basically using LLM to talk to your data. And at the time, I was like, wait, there's some, like, really practical things that you can build here. And just a little bit of background, like, I was an adult learner. Like, I went to college while I was working a full-time job. And the first thing I thought was, like, this would have really helped me with my studying, right? Like, if I could just, like, talk to a textbook, especially, like, when I was tired after work, that would have been huge. We took a lot of, like, the Talk to Small Corpus prototypes, and I showed it to a lot of, like, college students, particularly, like, adult learners. They were like, yes, like, I get it, right? Like, I didn't even have to explain it to them. And we just continued to iterate the prototype from there to the point where we actually got a slot as part of the I.O. demo in 23.Swyx [00:05:42]: And Corpus, was it a textbook? Oh, my gosh.Raiza [00:05:45]: Yeah. It's funny. Actually, when he explained the project to me, he was like, talk to Small Corpus. It was like, talk to a small corpse?Swyx [00:05:51]: Yeah, nobody says Corpus.Raiza [00:06:00]: It was like, a small corpse? This is not AI. Yeah, yeah. And it really was just, like, a way for us to describe the amount of data that we thought, like, it could be good for.Swyx [00:06:02]: Yeah, but even then, you're still, like, doing rag stuff. Because, you know, the context length back then was probably, like, 2K, 4K.Raiza [00:06:08]: Yeah, it was basically rag.Raiza [00:06:09]: That was essentially what it was.Raiza [00:06:10]: And I remember, I was like, we were building the prototypes. And at the same time, I think, like, the rest of the world was. Right? We were seeing all of these, like, chat with PDF stuff come up. And I was like, come on, we gotta go. Like, we have to, like, push this out into the world. I think if there was anything, I wish we would have launched sooner because I wanted to learn faster. But I think, like, we netted out pretty well.Alessio [00:06:30]: Was the initial product just text-to-speech? Or were you also doing kind of, like, synthesizing of the content, refining it? Or were you just helping people read through it?Raiza [00:06:40]: Before we did the I.O. announcement in 23, we'd already done a lot of studies. And one of the first things that I realized was the first thing anybody ever typed was, summarize the thing. Right?Raiza [00:06:53]: Summarize the document.Raiza [00:06:54]: And it was, like, half like a test and half just like, oh, I know the content. I want to see how well it does this. So it was part of the first thing that we launched. It was called Project Tailwind back then. It was just Q&A, so you could chat with the doc just through text, and it would automatically generate a summary as well. I'm not sure if we had it back then.Raiza [00:07:12]: I think we did.Raiza [00:07:12]: It would also generate the key topics in your document, and it could support up to, like, 10 documents. So it wasn't just, like, a single doc.Alessio [00:07:20]: And then the I.O. demo went well, I guess. And then what was the discussion from there to where we are today? Is there any, maybe, intermediate step of the product that people missed between this was launch or?Raiza [00:07:33]: It was interesting because every step of the way, I think we hit, like, some pretty critical milestones. 
So I think from the initial demo, I think there was so much excitement of, like, wow, what is this thing that Google is launching? And so we capitalized on that. We built the wait list. That's actually when we also launched the Discord server, which has been huge for us because for us in particular, one of the things that I really wanted to do was to be able to launch features and get feedback ASAP. Like, the moment somebody tries it, like, I want to hear what they think right now, and I want to ask follow-up questions. And the Discord has just been so great for that. But then we basically took the feedback from I.O., we continued to refine the product.Raiza [00:08:12]: So we added more features.Raiza [00:08:13]: We added sort of, like, the ability to save notes, write notes. We generate follow-up questions. So there's a bunch of stuff in the product that shows, like, a lot of that research. But it was really the rolling out of things. Like, we removed the wait list, so rolled out to all of the United States. We rolled out to over 200 countries and territories. We started supporting more languages, both in the UI and, like, the actual source stuff. We experienced, like, in terms of milestones, there was, like, an explosion of, like, users in Japan. This was super interesting in terms of just, like, unexpected. Like, people would write to us and they would be like, this is amazing. I have to read all of these rules in English, but I can chat in Japanese. It's like, oh, wow. That's true, right? Like, with LLMs, you kind of get this natural, it translates the content for you. And you can ask in your sort of preferred mode. And I think that's not just, like, a language thing, too. I think there's, like, I do this test with Wealth of Nations all the time because it's, like, a pretty complicated text to read. The Evan Smith classic.Swyx [00:09:11]: It's, like, 400 pages or something.Raiza [00:09:12]: Yeah. But I like this test because I'm, like, asking, like, Normie, you know, plain speak. And then it summarizes really well for me. It sort of adapts to my tone.Swyx [00:09:22]: Very capitalist.Raiza [00:09:25]: Very on brand.Swyx [00:09:25]: I just checked in on a Notebook LM Discord. 65,000 people. Yeah.Raiza [00:09:29]: Crazy.Swyx [00:09:29]: Just, like, for one project within Google. It's not, like, it's not labs. It's just Notebook LM.Raiza [00:09:35]: Just Notebook LM.Swyx [00:09:36]: What do you learn from the community?Raiza [00:09:39]: I think that the Discord is really great for hearing about a couple of things.Raiza [00:09:43]: One, when things are going wrong. I think, honestly, like, our fastest way that we've been able to find out if, like, the servers are down or there's just an influx of people being, like, it saysRaiza [00:09:53]: system unable to answer.Raiza [00:09:54]: Anybody else getting this?Raiza [00:09:56]: And I'm, like, all right, let's go.Raiza [00:09:58]: And it actually catches it a lot faster than, like, our own monitoring does.Raiza [00:10:01]: It's, like, that's been really cool. So, thank you.Swyx [00:10:03]: Canceled eat a dog.Raiza [00:10:05]: So, thank you to everybody. Please keep reporting it. I think the second thing is really the use cases.Raiza [00:10:10]: I think when we put it out there, I was, like, hey, I have a hunch of how people will use it, but, like, to actually hear about, you know, not just the context of, like, the use of Notebook LM, but, like, what is this person's life like? 
Why do they care about using this tool?Raiza [00:10:23]: Especially people who actually have trouble using it, but they keep pushing.Raiza [00:10:27]: Like, that's just so critical to understand what was so motivating, right?Raiza [00:10:31]: Like, what was your problem that was, like, so worth solving? So, that's, like, a second thing.Raiza [00:10:34]: The third thing is also just hearing sort of, like, when we have wins and when we don't have wins because there's actually a lot of functionality where I'm, like, hmm, IRaiza [00:10:42]: don't know if that landed super well or if that was actually super critical.Raiza [00:10:45]: As part of having this sort of small project, right, I want to be able to unlaunch things, too. So, it's not just about just, like, rolling things out and testing it and being, like, wow, now we have, like, 99 features. Like, hopefully we get to a place where it's, like, there's just a really strong core feature set and the things that aren't as great, we can just unlaunch.Swyx [00:11:02]: What have you unlaunched? I have to ask.Raiza [00:11:04]: I'm in the process of unlaunching some stuff, but, for example, we had this idea that you could highlight the text in your source passage and then you could transform it. And nobody was really using it and it was, like, a very complicated piece of our architecture and it's very hard to continue supporting it in the context of new features. So, we were, like, okay, let's do a 50-50 sunset of this thing and see if anybody complains.Raiza [00:11:28]: And so far, nobody has.Swyx [00:11:29]: Is there, like, a feature flagging paradigm inside of your architecture that lets you feature flag these things easily?Raiza [00:11:36]: Yes, and actually...Raiza [00:11:37]: What is it called?Swyx [00:11:38]: Like, I love feature flagging.Raiza [00:11:40]: You mean, like, in terms of just, like, being able to expose things to users?Swyx [00:11:42]: Yeah, as a PM. Like, this is your number one tool, right?Raiza [00:11:44]: Yeah, yeah.Swyx [00:11:45]: Let's try this out. All right, if it works, roll it out. If it doesn't, roll it back, you know?Raiza [00:11:49]: Yeah, I mean, we just run Mendel experiments for the most part. And, actually, I don't know if you saw it, but on Twitter, somebody was able to get around our flags and they enabled all the experiments.Raiza [00:11:58]: They were, like, check out what the Notebook LM team is cooking.Raiza [00:12:02]: I was, like, oh!Raiza [00:12:03]: And I was at lunch with the rest of the team and I was, like, I was eating. I was, like, guys, guys, Magic Draft League!Raiza [00:12:10]: They were, like, oh, no!Raiza [00:12:12]: I was, like, okay, just finish eating and then let's go figure out what to do.Raiza [00:12:15]: Yeah.Alessio [00:12:15]: I think a post-mortem would be fun, but I don't think we need to do it on the podcast now. Can we just talk about what's behind the magic? So, I think everybody has questions, hypotheses about what models power it. I know you might not be able to share everything, but can you just get people very basic? How do you take the data and put it in the model? What text model you use? What's the text-to-speech kind of, like, jump between the two? Sure.Raiza [00:12:42]: Yeah.Raiza [00:12:42]: I was going to say, SRaiza, he manually does all the podcasts.Raiza [00:12:46]: Oh, thank you.Usama [00:12:46]: Really fast. 
You're very fast, yeah.Raiza [00:12:48]: Both of the voices at once.Usama [00:12:51]: Voice actor.Raiza [00:12:52]: Good, good.Usama [00:12:52]: Yeah, so, for a bit of background, we were building this thing sort of outside Notebook LM to begin with. Like, just the idea is, like, content transformation, right? Like, we can do different modalities. Like, everyone knows that. Everyone's been poking at it. But, like, how do you make it really useful? And, like, one of the ways we thought was, like, okay, like, you maybe, like, you know, people learn better when they're hearing things. But TTS exists, and you can, like, narrate whatever's on screen. But you want to absorb it the same way. So, like, that's where we sort of started out into the realm of, like, maybe we try, like, you know, two people are having a conversation kind of format. We didn't actually start out thinking this would live in Notebook, right? Like, Notebook was sort of, we built this demo out independently, tried out, like, a few different sort of sources. The main idea was, like, go from some sort of sources and transform it into a listenable, engaging audio format. And then through that process, we, like, unlocked a bunch more sort of learnings. Like, for example, in a sense, like, you're not prompting the model as much because, like, the information density is getting unrolled by the model prompting itself, in a sense. Because there's two speakers, and they're both technically, like, AI personas, right? That have different angles of looking at things. And, like, they'll have a discussion about it. And that sort of, we realized that's kind of what was making it riveting, in a sense. Like, you care about what comes next, even if you've read the material already. Because, like, people say they get new insights on their own journals or books or whatever. Like, anything that they've written themselves. So, yeah, from a modeling perspective, like, it's, like Reiza said earlier, like, we work with the DeepMind audio folks pretty closely. So, they're always cooking up new techniques to, like, get better, more human-like audio. And then Gemini 1.5 is really, really good at absorbing long context. So, we sort of, like, generally put those things together in a way that we could reliably produce the audio.Raiza [00:14:52]: I would add, like, there's something really nuanced, I think, about sort of the evolution of, like, the utility of text-to-speech. Where, if it's just reading an actual text response, and I've done this several times. I do it all the time with, like, reading my text messages. Or, like, sometimes I'm trying to read, like, a really dense paper, but I'm trying to do actual work. I'll have it, like, read out the screen. There is something really robotic about it that is not engaging. And it's really hard to consume content in that way. And it's never been really effective. Like, particularly for me, where I'm, like, hey, it's actually just, like, it's fine for, like, short stuff. Like, texting, but even that, it's, like, not that great. So, I think the frontier of experimentation here was really thinking about there is a transform that needs to happen in between whatever.Raiza [00:15:38]: Here's, like, my resume, right?Raiza [00:15:39]: Or here's, like, a 100-page slide deck or something. There is a transform that needs to happen that is inherently editorial. And I think this is where, like, that two-person persona, right, dialogue model, they have takes on the material that you've presented. 
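Usama's two-persona framing, where the information gets "unrolled" because two AI voices with different angles keep reacting to each other, can be sketched in a few lines. This is only an illustration: `generate` stands in for whatever LLM call you have (the real pipeline uses Gemini 1.5 plus DeepMind's audio models and is certainly more involved), and the persona descriptions and turn count are assumptions.

```python
from typing import Callable, List

def dialogue_from_source(
    source_text: str,
    generate: Callable[[str], str],  # placeholder LLM call, not NotebookLM's API
    turns: int = 8,
) -> List[str]:
    """Alternate two personas over a shared source so each turn reacts to the last."""
    personas = [
        "Host A: curious generalist who asks questions and summarizes plainly.",
        "Host B: the expert who adds detail, examples, and gentle pushback.",
    ]
    script: List[str] = []
    for i in range(turns):
        prompt = (
            f"You are {personas[i % 2]}\n"
            f"Source material:\n{source_text[:4000]}\n\n"
            "Conversation so far:\n" + "\n".join(script[-4:]) +
            "\n\nWrite the next turn (2-3 sentences). Do not state your name; "
            "react to what was just said and reveal only one new idea."
        )
        script.append(generate(prompt))
    return script

# e.g. dialogue_from_source(open("deck_notes.txt").read(), generate=my_llm_call)
```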
That's where it really sort of, like, brings the content to life in a way that's, like, not robotic. And I think that's, like, where the magic is, is, like, you don't actually know what's going to happen when you press generate.Raiza [00:16:08]: You know, for better or for worse.Raiza [00:16:09]: Like, to the extent that, like, people are, like, no, I actually want it to be more predictable now. Like, I want to be able to tell them. But I think that initial, like, wow was because you didn't know, right? When you upload your resume, what's it about to say about you? And I think I've seen enough of these where I'm, like, oh, it gave you good vibes, right? Like, you knew it was going to say, like, something really cool. As we start to shape this product, I think we want to try to preserve as much of that wow as much as we can. Because I do think, like, exposing, like, all the knobs and, like, the dials, like, we've been thinking about this a lot. It's like, hey, is that, like, the actual thing?Raiza [00:16:43]: Is that the thing that people really want?Alessio [00:16:45]: Have you found differences in having one model just generate the conversation and then using text-to-speech to kind of fake two people? Or, like, are you actually using two different kind of system prompts to, like, have a conversation step-by-step? I'm always curious, like, if persona system prompts make a big difference? Or, like, you just put in one prompt and then you just let it run?Usama [00:17:05]: I guess, like, generally we use a lot of inference, as you can tell with, like, the spinning thing takes a while. So, yeah, there's definitely, like, a bunch of different things happening under the hood. We've tried both approaches and they have their, sort of, drawbacks and benefits. I think that that idea of, like, questioning, like, the two different personas, like, persists throughout, like, whatever approach we try. It's like, there's a bit of, like, imperfection in there. Like, we had to really lean into the fact that, like, to build something that's engaging, like, it needs to be somewhat human and it needs to be just not a chatbot. Like, that was sort of, like, what we need to diverge from. It's like, you know, most chatbots will just narrate the same kind of answer, like, given the same sources, for the most part, which is ridiculous. So, yeah, there's, like, experimentation there under the hood, like, with the model to, like, make sure that it's spitting out, like, different takes and different personas and different, sort of, prompting each other is, like, a good analogy, I guess.Swyx [00:18:00]: Yeah, I think Steven Johnson, I think he's on your team. I don't know what his role is. He seems like chief dreamer, writer.Raiza [00:18:08]: Yeah, I mean, I can comment on Steven. So, Steven joined, actually, in the very early days, I think before it was even a fully funded project. And I remember when he joined, I was like, Steven Johnson's going to be on my team? You know, and for folks who don't know him, Steven is a New York Times bestselling author of, like, 14 books. He has a PBS show. He's, like, incredibly smart, just, like, a true, sort of, celebrity by himself. And then he joined Google, and he was like, I want to come here, and I want to build the thing that I've always dreamed of, which is a tool to help me think. I was like, a what? Like, a tool to help you think? I was like, what do you need help with? Like, you seem to be doing great on your own. 
And, you know, he would describe this to me, and I would watch his flow. And aside from, like, providing a lot of inspiration, to be honest, like, when I watched Steven work, I was like, oh, nobody works like this, right? Like, this is what makes him special. Like, he is such a dedicated, like, researcher and journalist, and he's so thorough, he's so smart. And then I had this realization of, like, maybe Steven is the product. Maybe the work is to take Steven's expertise and bring it to, like, everyday people that could really benefit from this. Like, just watching him work, I was like, oh, I could definitely use, like, a mini-Steven, like, doing work for me. Like, that would make me a better PM. And then I thought very quickly about, like, the adjacent roles that could use sort of this, like, research and analysis tool. And so, aside from being, you know, chief dreamer, Steven also represents, like, a super workflow that I think all of us, like, if we had access to a tool like it, would just inherently, like, make us better.Swyx [00:19:46]: Did you make him express his thoughts while he worked, or you just silently watched him, or how does this work?Raiza [00:19:52]: Oh, now you're making me admit it. But yes, I did just silently watch him.Swyx [00:19:57]: This is a part of the PM toolkit, right? They give user interviews and all that.Raiza [00:20:00]: Yeah, I mean, I did interview him, but I noticed, like, if I interviewed him, it was different than if I just watched him. And I did the same thing with students all the time. Like, I followed a lot of students around. I watched them study. I would ask them, like, oh, how do you feel now, right?Raiza [00:20:15]: Or why did you do that? Like, what made you do that, actually?Raiza [00:20:18]: Or why are you upset about, like, this particular thing? Why are you cranky about this particular topic? And it was very similar, I think, for Steven, especially because he was describing, he was in the middle of writing a book. And he would describe, like, oh, you know, here's how I research things, and here's how I keep my notes. Oh, and here's how I do it. And it was really, he was doing this sort of, like, self-questioning, right? Like, now we talk about, like, chain of, you know, reasoning or thought, reflection.Raiza [00:20:44]: And I was like, oh, he's the OG.Raiza [00:20:46]: Like, I watched him do it in real time. I was like, that's, like, L-O-M right there. And to be able to bring sort of that expertise in a way that was, like, you know, maybe, like, costly inference-wise, but really have, like, that ability inside of a tool that was, like, for starters, free inside of NotebookLM, it was good to learn whether or not people really did find use out of it.Swyx [00:21:05]: So did he just commit to using NotebookLM for everything, or did you just model his existing workflow?Raiza [00:21:12]: Both, right?Raiza [00:21:12]: Like, in the beginning, there was no product for him to use. And so he just kept describing the thing that he wanted. And then eventually, like, we started building the thing. And then I would start watching him use it. One of the things that I love about Steven is he uses the product in ways where it kind of does it, but doesn't quite. Like, he's always using it at, like, the absolute max limit of this thing. But the way that he describes it is so full of promise, where he's like, I can see it going here. 
And all I have to do is sort of, like, meet him there and sort of pressure test whether or not, you know, everyday people want it. And we just have to build it.Swyx [00:21:47]: I would say OpenAI has a pretty similar person, Andrew Mason, I think his name is. It's very similar, like, just from the writing world and using it as a tool for thought to shape ChatGPT. I don't think that people who use AI tools to their limit are common. I'm looking at my NotebookLM now. I've got two sources. You have a little, like, source limit thing. And my bar is over here, you know, and it stretches across the whole thing. I'm like, did he fill it up?Raiza [00:22:09]: Yes, and he has, like, a higher limit than others, I think. He fills it up.Raiza [00:22:14]: Oh, yeah.Raiza [00:22:14]: Like, I don't think Steven even has a limit, actually.Swyx [00:22:17]: And he has Notes, Google Drive stuff, PDFs, MP3, whatever.Raiza [00:22:22]: Yes, and one of my favorite demos, he just did this recently, is he has actually PDFs of, like, handwritten Marie Curie notes. I see.Swyx [00:22:29]: So you're doing image recognition as well. Yeah, it does support it today.Raiza [00:22:32]: So if you have a PDF that's purely images, it will recognize it.Raiza [00:22:36]: But his demo is just, like, super powerful.Raiza [00:22:37]: He's like, okay, here's Marie Curie's notes. And it's like, here's how I'm using it to analyze it. And I'm using it for, like, this thing that I'm writing.Raiza [00:22:44]: And that's really compelling.Raiza [00:22:45]: It's like the everyday person doesn't think of these applications. And I think even, like, when I listen to Steven's demo, I see the gap. I see how Steven got there, but I don't see how I could without him. And so there's a lot of work still for us to build of, like, hey, how do I bring that magic down to, like, zero work? Because I look at all the steps that he had to take in order to do it, and I'm like, okay, that's product work for us, right? Like, that's just onboarding.Alessio [00:23:09]: And so from an engineering perspective, people come to you and it's like, hey, I need to use this handwritten notes from Marie Curie from hundreds of years ago. How do you think about adding support for, like, data sources and then maybe any fun stories and, like, supporting more esoteric types of inputs?Raiza [00:23:25]: So I think about the product in three ways, right? So there's the sources, the source input. There's, like, the capabilities of, like, what you could do with those sources. And then there's the third space, which is how do you output it into the world? Like, how do you put it back out there? There's a lot of really basic sources that we don't support still, right? I think there's sort of, like, the handwritten notes stuff is one, but even basic things like DocX or, like, PowerPoint, right? Like, these are the things that people, everyday people are like, hey, my professor actually gave me everything in DocX. Can you support that? And then just, like, basic stuff, like images and PDFs combined with text. Like, there's just a really long roadmap for sources that I think we just have to work on.Raiza [00:24:04]: So that's, like, a big piece of it.Raiza [00:24:05]: On the output side, and I think this is, like, one of the most interesting things that we learned really early on, is, sure, there's, like, the Q&A analysis stuff, which is like, hey, when did this thing launch? Okay, you found it in the slide deck. Here's the answer.
But most of the time, the reason why people ask those questions is because they're trying to make something new. And so when, actually, when some of those early features leaked, like, a lot of the features we're experimenting with are the output types. And so you can imagine that people care a lot about the resources that they're putting into NotebookLM because they're trying to create something new. So I think equally as important as, like, the source inputs are the outputs that we're helping people to create. And really, like, you know, shortly on the roadmap, we're thinking about how do we help people use NotebookLM to distribute knowledge? And that's, like, one of the most compelling use cases is, like, shared notebooks. It's, like, a way to share knowledge. How do we help people take sources and, like, one-click new documents out of it, right? And I think that's something that people think is, like, oh, yeah, of course, right? Like, one push a document. But what does it mean to do it right? Like, to do it in your style, in your brand, right?Raiza [00:25:08]: To follow your guidelines, stuff like that.Raiza [00:25:09]: So I think there's a lot of work, like, on both sides of that equation.Raiza [00:25:13]: Interesting.Swyx [00:25:13]: Any comments on the engineering side of things?Usama [00:25:16]: So, yeah, like I said, I was mostly working on building the text to audio, which kind of lives as a separate engineering pipeline, almost, that we then put into NotebookLM. But I think there's probably tons of NotebookLM engineering war stories on dealing with sources. And so I don't work too closely with engineers directly. But I think a lot of it does come down to, like, Gemini's native understanding of images really well with the latest generation.Raiza [00:25:39]: Yeah, I think on the engineering and modeling side, I think we are a really good example of a team that's put a product out there, and we're getting a lot of feedback from the users, and we return the data to the modeling team, right? To the extent that we say, hey, actually, you know what people are uploading, but we can't really support super well?Raiza [00:25:56]: Text plus image, right?Raiza [00:25:57]: Especially to the extent that, like, NotebookLM can handle up to 50 sources, 500,000 words each. Like, you're not going to be able to jam all of that into, like, the context window. So how do we do multimodal embeddings with that? There's really, like, a lot of things that we have to solve that are almost there, but not quite there yet.Alessio [00:26:16]: On then turning it into audio, I think one of the best things is it has so many of the human... Does that happen in the text generation that then becomes audio? Or is that a part of, like, the audio model that transforms the text?Usama [00:26:27]: It's a bit of both, I would say. The audio model is definitely trying to mimic, like, certain human intonations and, like, sort of natural, like, breathing and pauses and, like, laughter and things like that. But yeah, in generating, like, the text, we also have to sort of give signals on, like, where those things maybe would make sense.Alessio [00:26:45]: And on the input side, instead of having a transcript versus having the audio, like, can you take some of the emotions out of it, too? If I'm giving, like, for example, when we did the recaps of our podcast, we can either give audio of the pod or we can give a diarized transcription of it. 
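A note on the scale problem Raiza raises above: 50 sources at up to 500,000 words each won't fit in any context window, which is why the team points at multimodal embeddings. A generic chunk-embed-retrieve sketch, assuming you have some `embed(text) -> list[float]` function (Google's actual embedding setup is not public), looks roughly like this.

```python
import math
from typing import Callable, List, Tuple

def chunk(text: str, size: int = 2000, overlap: int = 200) -> List[str]:
    """Split one long source into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_chunks(
    sources: List[str],
    query: str,
    embed: Callable[[str], List[float]],  # assumed embedding fn, not Google's
    k: int = 8,
) -> List[str]:
    """Embed every chunk of every source and keep the k closest to the query."""
    q = embed(query)
    scored: List[Tuple[float, str]] = [
        (cosine(embed(c), q), c) for src in sources for c in chunk(src)
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]
```

Only the retrieved chunks go into the prompt, which is how you stay grounded in the user's sources without blowing past the context limit.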
But, like, the transcription doesn't have some of the, you know, voice kind of, like, things.Raiza [00:27:05]: Yeah, yeah.Alessio [00:27:05]: Do you reconstruct that when people upload audio or how does that work?Raiza [00:27:09]: So when you upload audio today, we just transcribe it. So it is quite lossy in the sense that, like, we don't transcribe, like, the emotion from that as a source. But when you do upload a text file and it has a lot of, like, that annotation, I think that there is some ability for it to be reused in, like, the audio output, right? But I think it will still contextualize it in the deep dive format. So I think that's something that's, like, particularly important is, like, hey, today we only have one format.Raiza [00:27:37]: It's deep dive.Raiza [00:27:38]: It's meant to be a pretty general overview and it is pretty peppy.Raiza [00:27:42]: It's just very upbeat.Raiza [00:27:43]: It's very enthusiastic, yeah.Raiza [00:27:45]: Yeah, yeah.Raiza [00:27:45]: Even if you had, like, a sad topic, I think they would find a way to be, like, silver lining, though.Raiza [00:27:50]: Really?Raiza [00:27:51]: Yeah.Raiza [00:27:51]: We're having a good chat.Raiza [00:27:54]: Yeah, that's awesome.Swyx [00:27:54]: One of the ways, many, many, many ways that deep dive went viral is people saying, like, if you want to feel good about yourself, just drop in your LinkedIn. Any other, like, favorite use cases that you saw from people discovering things in social media?Raiza [00:28:08]: I mean, there's so many funny ones and I love the funny ones.Raiza [00:28:11]: I think because I'm always relieved when I watch them. I'm like, haha, that was funny and not scary. It's great.Raiza [00:28:17]: There was another one that was interesting, which was a startup founder putting their landing page and being like, all right, let's test whether or not, like, the value prop is coming through. And I was like, wow, that's right.Raiza [00:28:26]: That's smart.Usama [00:28:27]: Yeah.Raiza [00:28:28]: And then I saw a couple of other people following up on that, too.Raiza [00:28:32]: Yeah.Swyx [00:28:32]: I put my about page in there and, like, yeah, if there are things that I'm not comfortable with, I should remove it. You know, so that it can pick it up. Right.Usama [00:28:39]: I think that the personal hype machine was, like, a pretty viral one. I think, like, people uploaded their dreams and, like, some people, like, keep sort of dream journals and it, like, would sort of comment on those and, like, it was therapeutic. I didn't see those.Raiza [00:28:54]: Those are good. I hear from Googlers all the time, especially because we launched it internally first. And I think we launched it during the, you know, the Q3 sort of, like, check-in cycle. So all Googlers have to write notes about, like, hey, you know, what'd you do in Q3? And what Googlers were doing is they would write, you know, whatever they accomplished in Q3 and then they would create an audio overview. And these people they didn't know would just ping me and be like, wow, I feel really good, like, going into a meeting with my manager.Raiza [00:29:25]: And I was like, good, good, good, good. You really did that, right?Usama [00:29:29]: I think another cool one is just, like, any Wikipedia article. Yeah. Like, you drop it in and it's just, like, suddenly, like, the best sort of summary overview.Raiza [00:29:38]: I think that's what Karpathy did, right? 
Like, he has now a Spotify channel called Histories of Mysteries, which is basically, like, he just took, like, interesting stuff from Wikipedia and made audio overviews out of it.Swyx [00:29:50]: Yeah, he became a podcaster overnight.Raiza [00:29:52]: Yeah.Raiza [00:29:53]: I'm here for it. I fully support him.Raiza [00:29:55]: I'm racking up the listens for him.Swyx [00:29:58]: Honestly, it's useful even without the audio. You know, I feel like the audio does add an element to it, but I always want, you know, paired audio and text. And it's just amazing to see what people are organically discovering. I feel like it's because you laid the groundwork with NotebookLM and then you came in and added the sort of TTS portion and made it so good, so human, which is weird. Like, it's this engineering process of humans. Oh, one thing I wanted to ask. Do you have evals?Raiza [00:30:23]: Yeah.Swyx [00:30:23]: Yes.Raiza [00:30:24]: What? Potatoes for chefs.Swyx [00:30:27]: What is that? What do you mean, potatoes?Raiza [00:30:29]: Oh, sorry.Raiza [00:30:29]: Sorry. We were joking with this, like, a couple of weeks ago. We were doing, like, side-by-sides. But, like, Raiza sent me the file and it was literally called Potatoes for Chefs. And I was like, you know, my job is really serious, but you have to laugh a little bit. Like, the title of the file is, like, Potatoes for Chefs.Swyx [00:30:47]: Is it like a training document for chefs?Usama [00:30:50]: It's just a side-by-side for, like, two different kind of audio transcripts.Swyx [00:30:54]: The question is really, like, as you iterate, the typical engineering advice is you establish some kind of test or benchmark. You're at, like, 30 percent. You want to get it up to 90, right?Raiza [00:31:05]: Yeah.Swyx [00:31:05]: What does that look like for making something sound human and interesting and voice?Usama [00:31:11]: We have the sort of formal eval process as well. But I think, like, for this particular project, we maybe took a slightly different route to begin with. Like, there was a lot of just within the team listening sessions. A lot of, like, sort of, like... Dogfooding.Raiza [00:31:23]: Yeah.Usama [00:31:23]: Like, I think the bar that we tried to get to before even starting formal evals with raters and everything was much higher than I think other projects would. Like, because that's, as you said, like, the traditional advice, right? Like, get that ASAP. Like, what are you looking to improve on? Whatever benchmark it is. So there was a lot of just, like, critical listening. And I think a lot of making sure that those improvements actually could go into the model. And, like, we're happy with that human element of it. And then eventually we had to obviously distill those down into an eval set. But, like, still there's, like, the team is just, like, a very, very, like, avid user of the product at all stages.Raiza [00:32:02]: I think you just have to be really opinionated.Raiza [00:32:05]: I think that sometimes, if you are, your intuition is just sharper and you can move a lot faster on the product.Raiza [00:32:12]: Because it's like, if you hold that bar high, right?Raiza [00:32:15]: Like, if you think about, like, the iterative cycle, it's like, hey, we could take, like, six months to ship this thing. To get it to, like, mid where we were. Or we could just, like, listen to this and be like, yeah, that's not it, right? And I don't need a rater to tell me that. That's my preference, right? 
And collectively, like, if I have two other people listen to it, they'll probably agree. And it's just kind of this step of, like, just keep improving it to the point where you're like, okay, now I think this is really impressive. And then, like, do evals, right? And then validate that.Swyx [00:32:43]: Was the sound model done and frozen before you started doing all this? Or are you also saying, hey, we need to improve the sound model as well? Both.Usama [00:32:51]: Yeah, we were making improvements on the audio and just, like, generating the transcript as well. I think another weird thing here was, like, we needed to be entertaining. And that's much harder to quantify than some of the other benchmarks that you can make for, like, you know, SWE-bench or get better at this math.Swyx [00:33:10]: Do you just have people rate one to five or, you know, or just thumbs up and down?Usama [00:33:14]: For the formal rater evals, we have sort of like a Likert scale and, like, a bunch of different dimensions there. But we had to sort of break down what makes it entertaining into, like, a bunch of different factors. But I think the team stage of that was more critical. It was like, we need to make sure that, like, what is making it fun and engaging? Like, we dialed that as far as it goes. And while we're making other changes that are necessary, like, obviously, they shouldn't make stuff up or, you know, be insensitive.Raiza [00:33:41]: Hallucinations. Safety.Swyx [00:33:42]: Other safety things.Raiza [00:33:43]: Right.Swyx [00:33:43]: Like a bunch of safety stuff.Raiza [00:33:45]: Yeah, exactly.Usama [00:33:45]: So, like, with all of that and, like, also just, you know, following sort of a coherent narrative and structure is really important. But, like, with all of this, we really had to make sure that that central tenet of being entertaining and engaging and something you actually want to listen to. It just doesn't go away, which takes, like, a lot of just active listening time because you're closest to the prompts, the model and everything.Swyx [00:34:07]: I think sometimes the difficulty is because we're dealing with non-deterministic models, sometimes you just got a bad roll of the dice and it's always on the distribution that you could get something bad. Basically, how many do you, like, do ten runs at a time? And then how do you get rid of the non-determinism?Raiza [00:34:23]: Right.Usama [00:34:23]: Yeah, that's bad luck.Raiza [00:34:25]: Yeah.Swyx [00:34:25]: Yeah.Usama [00:34:26]: I mean, there still will be, like, bad audio overviews. There's, like, a bunch of them that happens. Do you mean for, like, the rater? For raters, right?Swyx [00:34:34]: Like, what if that one person just got, like, a really bad rating? You actually had a great prompt, you actually had a great model, great weights, whatever. And you just, you had a bad output.Usama [00:34:42]: Like, and that's okay, right?Raiza [00:34:44]: I actually think, like, the way that these are constructed, if you think about, like, the different types of controls that the user has, right? Like, what can the user do today to affect it?Usama [00:34:54]: We push a button.Raiza [00:34:55]: You just push a button.Swyx [00:34:56]: I have tried to prompt engineer by changing the title. Yeah, yeah, yeah.Raiza [00:34:59]: Changing the title, people have found out.Raiza [00:35:02]: Yeah.Raiza [00:35:02]: The title of the notebook, people have found out. You can add show notes, right? You can get them to think, like, the show has changed.
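On the formal rater evals Usama mentions above, the mechanics of a Likert scale across several dimensions plus side-by-side comparisons reduce to simple aggregation. The dimension names and the 1-5 scale below are assumptions for illustration, not the team's actual rubric.

```python
from statistics import mean
from typing import Dict, List

# One rater's scores for one audio overview, on an assumed 1-5 Likert scale.
Rating = Dict[str, int]

DIMENSIONS = ["entertaining", "coherent", "grounded", "natural_audio"]  # illustrative

def aggregate(ratings: List[Rating]) -> Dict[str, float]:
    """Mean score per dimension across raters."""
    return {d: mean(r[d] for r in ratings) for d in DIMENSIONS}

def side_by_side_winner(a: List[Rating], b: List[Rating]) -> str:
    """Compare two candidates ('potatoes for chefs' style) by overall mean."""
    def overall(ratings: List[Rating]) -> float:
        return mean(aggregate(ratings).values())
    return "A" if overall(a) >= overall(b) else "B"

ratings_a = [{"entertaining": 4, "coherent": 5, "grounded": 4, "natural_audio": 4}]
ratings_b = [{"entertaining": 3, "coherent": 5, "grounded": 5, "natural_audio": 3}]
print(aggregate(ratings_a), side_by_side_winner(ratings_a, ratings_b))
```

The hard part the team describes isn't this arithmetic; it's deciding which dimensions capture "entertaining" in the first place, which is why the informal listening sessions came before the formal evals.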
Someone changed the language of the output. Changing the language of the output. Like, those are less well-tested because we focused on, like, this one aspect. So it did change the way that we sort of think about quality as well, right? So it's like, quality is on the dimensions of entertainment, of course, like, consistency, groundedness. But in general, does it follow the structure of the deep dive? And I think when we talk about, like, non-determinism, it's like, well, as long as it follows, like, the structure of the deep dive, right? It sort of inherently meets all those other qualities. And so it makes it a little bit easier for us to ship something with confidence to the extent that it's like, I know it's going to make a deep dive. It's going to make a good deep dive. Whether or not the person likes it, I don't know. But as we expand to new formats, as we open up controls, I think that's where it gets really much harder. Even with the show notes, right? Like, people don't know what they're going to get when they do that. And we see that already where it's like, this is going to be a lot harder to validate in terms of quality, where now we'll get a greater distribution. Whereas I don't think we really got, like, varied distribution because of, like, that pre-process that Raiza was talking about. And also because of the way that we'd constrain, like, what were we measuring for? Literally, just like, is it a deep dive?Swyx [00:36:18]: And you determine what a deep dive is. Yeah. Everything needs a PM. Yeah, I have, this is very similar to something I've been thinking about for AI products in general. There's always like a chief tastemaker. And for Notebook LM, it seems like it's a combination of you and Steven.Raiza [00:36:31]: Well, okay.Raiza [00:36:32]: I want to take a step back.Swyx [00:36:33]: And Raiza, I mean, presumably for the voice stuff.Raiza [00:36:35]: Raiza's like the head chef, right? Of, like, deep dive, I think. Potatoes.Raiza [00:36:40]: Of potatoes.Raiza [00:36:41]: And I say this because I think even though we are already a very opinionated team, and Steven, for sure, very opinionated, I think of the audio generations, like, Raiza was the most opinionated, right? And we all, like, would say, like, hey, I remember, like, one of the first ones he sent me.Raiza [00:36:57]: I was like, oh, I feel like they should introduce themselves. I feel like they should say a title. But then, like, we would catch things, like, maybe they shouldn't say their names.Raiza [00:37:04]: Yeah, they don't say their names.Usama [00:37:05]: That was a Steven catch, like, not give them names.Raiza [00:37:08]: So stuff like that is, like, we all injected, like, a little bit of just, like, hey, here's, like, my take on, like, how a podcast should be, right? And I think, like, if you're a person who, like, regularly listens to podcasts, there's probably some collective preference there that's generic enough that you can standardize into, like, the deep dive format. But, yeah, it's the new formats where I think, like, oh, that's the next test. Yeah.Swyx [00:37:30]: I've tried to make a clone, by the way. Of course, everyone did. Yeah. Everyone in AI was like, oh, no, this is so easy. I'll just take a TTS model. Obviously, our models are not as good as yours, but I tried to inject a consistent character backstory, like, age, identity, where they work, where they went to school, what their hobbies are. 
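Raiza's shipping criterion above, "as long as it follows the structure of the deep dive," suggests a cheap automated gate that can run before any human rating. The checks below (two alternating hosts, an intro framing, no self-introductions) are guesses at what such a validator might test, not the team's actual rules.

```python
from typing import List, Tuple

def check_deep_dive(script: List[Tuple[str, str]]) -> List[str]:
    """Return a list of format violations for a script of (speaker, line) pairs."""
    problems: List[str] = []
    speakers = [s for s, _ in script]
    if set(speakers) != {"HOST_A", "HOST_B"}:  # exactly two voices
        problems.append("expected exactly two speakers")
    if any(a == b for a, b in zip(speakers, speakers[1:])):
        problems.append("speakers should alternate")
    if script and "deep dive" not in script[0][1].lower():
        problems.append("missing intro framing")
    if any("my name is" in line.lower() for _, line in script):
        problems.append("hosts should not introduce themselves by name")
    return problems

demo = [("HOST_A", "Welcome to your deep dive for today."),
        ("HOST_B", "Buckle up, this one is dense."),
        ("HOST_A", "Let's start with the big picture.")]
print(check_deep_dive(demo) or "looks like a deep dive")
```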
Then it just, the models try to bring it in too much.Raiza [00:37:49]: Yeah.Swyx [00:37:49]: I don't know if you tried this.Raiza [00:37:51]: Yeah.Swyx [00:37:51]: So then I'm like, okay, like, how do I define a personality? But it doesn't keep coming up every single time. Yeah.Raiza [00:37:58]: I mean, we have, like, a really, really good, like, character designer on our team.Raiza [00:38:02]: What?Swyx [00:38:03]: Like a D&D person?Raiza [00:38:05]: Just to say, like, we, just like we had to be opinionated about the format, we had to be opinionated about who are those two people talking.Raiza [00:38:11]: Okay.Raiza [00:38:12]: Right.Raiza [00:38:12]: And then to the extent that, like, you can design the format, you should be able to design the people as well.Raiza [00:38:18]: Yeah.Swyx [00:38:18]: I would love, like, a, you know, like when you play Baldur's Gate, like, you roll, you roll like 17 on Charisma and like, it's like what race they are. I don't know.Raiza [00:38:27]: I recently, actually, I was just talking about character select screens.Raiza [00:38:30]: Yeah. I was like, I love that, right.Raiza [00:38:32]: And I was like, maybe there's something to be learned there because, like, people have fallen in love with the deep dive as a, as a format, as a technology, but also as just like those two personas.Raiza [00:38:44]: Now, when you hear a deep dive and you've heard them, you're like, I know those two.Raiza [00:38:48]: Right.Raiza [00:38:48]: And people, it's so funny when I, when people are trying to find out their names, like, it's a, it's a worthy task.Raiza [00:38:54]: It's a worthy goal.Raiza [00:38:55]: I know what you're doing. But the next step here is to sort of introduce, like, is this like what people want?Raiza [00:39:00]: People want to sort of edit the personas or do they just want more of them?Swyx [00:39:04]: I'm sure you're getting a lot of opinions and they all, they all conflict with each other. Before we move on, I have to ask, because we're kind of on this topic. How do you make audio engaging? Because it's useful, not just for deep dive, but also for us as podcasters. What is, what does engaging mean? If you could break it down for us, that'd be great.Usama [00:39:22]: I mean, I can try. Like, don't, don't claim to be an expert at all.Swyx [00:39:26]: So I'll give you some, like variation in tone and speed. You know, there's this sort of writing advice where, you know, this sentence is five words. This sentence is three, that kind of advice where you, where you vary things, you have excitement, you have laughter, all that stuff. But I'd be curious how else you break down.Usama [00:39:42]: So there's the basics, like obviously structure that can't be meandering, right? Like there needs to be sort of a, an ultimate goal that the voices are trying to get to, human or artificial. I think one thing we find often is if there's just too much agreement between people, like that's not fun to listen to. So there needs to be some sort of tension and build up, you know, withholding information. For example, like as you listen to a story unfold, like you're going to learn more and more about it. And audio that maybe becomes even more important because like you actually don't have the ability to just like skim to the end of something. You're driving or something like you're going to be hooked because like there's, and that's how like, that's how a lot of podcasts work. Like maybe not interviews necessarily, but a lot of true crime, a lot of entertainment in general. 
There's just like a gradual unrolling of information. And that also like sort of goes back to the content transformation aspect of it. Like maybe you are going from, let's say the Wikipedia article of like one of the History of Mysteries, maybe episodes. Like the Wikipedia article is going to state out the information very differently. It's like, here's what happened would probably be in the very first paragraph. And one approach we could have done is like maybe a person's just narrating that thing. And maybe that would work for like a certain audience. Or I guess that's how I would picture like a standard history lesson to unfold. But like, because we're trying to put it in this two-person dialogue format, like there, we inject like the fact that, you know, there's, you don't give everything at first. And then you set up like differing opinions of the same topic or the same, like maybe you seize on a topic and go deeper into it and then try to bring yourself back out of it and go back to the main narrative. So that's, that's mostly from like the setting up the script perspective. And then the audio, I was saying earlier, it's trying to be as close to just human speech as possible. I think was the, what we found success with so far.Raiza [00:41:40]: Yeah. Like with interjections, right?Raiza [00:41:41]: Like I think like when you listen to two people talk, there's a lot of like, yeah, yeah, right. And then there's like a lot of like that questioning, like, oh yeah, really?Raiza [00:41:49]: What did you think?Swyx [00:41:50]: I noticed that. That's great.Raiza [00:41:52]: Totally.Usama [00:41:54]: Exactly.Swyx [00:41:55]: My question is, do you pull in speech experts to do this? Or did you just come up with it yourselves? You can be like, okay, talk to a whole bunch of fiction writers to, to make things engaging or comedy writers or whatever, stand up comedy, right? They have to make audio engaging, but audio as well. Like there's professional fields of studying where people do this for a living, but us as AI engineers are just making this up as we go.Raiza [00:42:19]: I mean, it's a great idea, but you definitely didn't.Raiza [00:42:22]: Yeah.Swyx [00:42:24]: My guess is you didn't.Raiza [00:42:25]: Yeah.Swyx [00:42:26]: There's a, there's a certain field of authority that people have. They're like, oh, like you can't do this because you don't have any experience like making engaging audio. But that's what you literally did.Raiza [00:42:35]: Right.Usama [00:42:35]: I mean, I was literally chatting with someone at Google earlier today about how some people think that like you need a linguistics person in the room for like making a good chatbot. But that's not actually true because like this person went to school for linguistics. And according to him, he's an engineer now. According to him, like most of his classmates were not actually good at language. Like they knew how to analyze language and like sort of the mathematical patterns and rhythms and language. But that doesn't necessarily mean they were going to be eloquent at like while speaking or writing. So I think, yeah, a lot of we haven't invested in specialists in audio format yet, but maybe that would.Raiza [00:43:13]: I think it's like super interesting because I think there is like a very human question of like what makes something interesting. And there's like a very deep question of like what is it, right? Like what is the quality that we are all looking for? Is it does somebody have to be funny? 
Does something have to be entertaining? Does something have to be straight to the point? And I think when you try to distill that, this is the interesting thing I think about our experiment, about this particular launch is first, we only launched one format. And so we sort of had to squeeze everything we believed about what an interesting thing is into one package. And as a result of it, I think we learned it's like, hey, interacting with a chatbot is sort of novel at first, but it's not interesting, right? It's like humans are what makes interacting with chatbots interesting.Raiza [00:43:59]: It's like, ha ha ha, I'm going to try to trick it. It's like, that's interesting.Raiza [00:44:02]: Spell strawberry, right?Raiza [00:44:04]: This is like the fun that like people have with it. But like that's not the LLM being interesting.Raiza [00:44:08]: That's you just like kind of giving it your own flavor. But it's like, what does it mean to sort of flip it on its head and say, no, you be interesting now, right? Like you give the chatbot the opportunity to do it. And this is not a chatbot per se. It is like just the audio. And it's like the texture, I think, that really brings it to life. And it's like the things that we've described here, which is like, okay, now I have to like lead you down a path of information about like this commercialization deck.Raiza [00:44:36]: It's like, how do you do that?Raiza [00:44:38]: To be able to successfully do it, I do think that you need experts. I think we'll engage with experts like down the road, but I think it will have to be in the context of, well, what's the next thing we're building, right? It's like, what am I trying to change here? What do I fundamentally believe needs to be improved? And I think there's still like a lot more studying that we have to do in terms of like, well, what are people actually using this for? And we're just in such early days. Like it hasn't even been a month. Two, three weeks.Usama [00:45:05]: Three weeks.Raiza [00:45:06]: Yeah, yeah.Usama [00:45:07]: I think one other element to that is the fact that you're bringing your own sources to it. Like it's your stuff. Like, you know this somewhat well, or you care to know about this. So like that, I think, changed the equation on its head as well. It's like your sources and someone's telling you about it. So like you care about how that dynamic is, but you just care for it to be good enough to be entertaining. Because ultimately they're talking about your mortgage deed or whatever.Swyx [00:45:33]: So it's interesting just from the topic itself. Even taking out all the agreements and the hiding of the slow reveal. I mean, there's a baseline, maybe.Usama [00:45:42]: Like if it was like too drab. Like if someone was reading it off, like, you know, that's like the absolute worst.Raiza [00:45:46]: But like...Swyx [00:45:47]: Do you prompt for humor? That's a tough one, right?Raiza [00:45:51]: I think it's more of a generic way to bring humor out if possible. I think humor is actually one of the hardest things. Yeah.Raiza [00:46:00]: But I don't know if you saw...Raiza [00:46:00]: That is AGI.Swyx [00:46:01]: Humor is AGI.Raiza [00:46:02]: Yeah, but did you see the chicken one?Raiza [00:46:03]: No.Raiza [00:46:04]: Okay. If you haven't heard it... We'll splice it in here.Swyx [00:46:06]: Okay.Raiza [00:46:07]: Yeah.Raiza [00:46:07]: There is a video on Threads. I think it was by Martino Wong. And it's a PDF.Raiza [00:46:16]: Welcome to your deep dive for today. Oh, yeah. 
Get ready for a fun one. Buckle up. Because we are diving into... Chicken, chicken, chicken. Chicken, chicken. You got that right. By Doug Zongker. Now. And yes, you heard that title correctly. Titles. Our listener today submitted this paper. Yeah, they're going to need our help. And I can totally see why. Absolutely. It's dense. It's baffling. It's a lot. And it's packed with more chicken than a KFC buffet. What? That's hilarious.Raiza [00:46:48]: That's so funny. So it's like stuff like that, that's like truly delightful, truly surprising.Raiza [00:46:53]: But it's like we didn't tell it to be funny.Usama [00:46:55]: Humor is contextual also. Like super contextual is what we're realizing. So we're not prompting for humor, but we're prompting for maybe a lot of other things that are bringing out that humor.Alessio [00:47:04]: I think the thing about AI-generated content, if we look at YouTube, like we do videos on YouTube and it's like, you know, a lot of people like screaming in the thumbnails to get clicks. There's like everybody, there's kind of like a meta of like what you need to do to get clicks. But I think in your product, there's no actual creator on the other side investing the time. So you can actually generate a type of content that is maybe not universally appealing, you know, at a much, yeah, exactly. I think that's the most interesting thing. It's like, well, is there a way for like, take Mr.Raiza [00:47:36]: Beast, right?Alessio [00:47:36]: It's like Mr. Beast optimizes videos to reach the biggest audience and like the most clicks. But what if every video could be kind of like regenerated to be closer to your taste, you know, when you watch it?Raiza [00:47:48]: I think that's kind of the promise of AI that I think we are just like touching on, which is, I think every time I've gotten information from somebody, they have delivered it to me in their preferred method, right?Raiza [00:47:59]: Like if somebody gives me a PDF, it's a PDF.Raiza [00:48:01]: Somebody gives me a hundred slide deck, that is the format in which I'm going to read it. But I think we are now living in the era where transformations are really possible, which is, look, like I don't want to read your hundred slide deck, but I'll listen to a 16 minute audio overview on the drive home. And that, that I think is, is really novel. And that is, is paving the way in a way that like maybe we wanted, but didn'tRaiza [00:48:24]: expect.Raiza [00:48:25]: Where I also think you're listening to a lot of content that normally wouldn't have had content made about it. Like I watched this TikTok where this woman uploaded her diary from 2004.Raiza [00:48:36]: For sure, right?Raiza [00:48:36]: Like nobody was going to make a podcast about a diary.Raiza [00:48:39]: Like hopefully not. Like it seems kind of embarrassing. It's kind of creepy. Yeah, it's kind of creepy.Raiza [00:48:43]: But she was, she was doing this like live listen of like, oh, like here's a podcast of my diary.Raiza [00:48:48]: And it's like, it's entertaining right now to sort of all listen to it together. But like the connection is personal. It was like, it was her interacting with like her information in a totallyRaiza [00:48:57]: different way.Raiza [00:48:58]: And I think that's where like, oh, that's a super interesting space, right? Where it's like, I'm creating content for myself in a way that suits the way that I want to, I want to consume it.Usama [00:49:06]: Or people compare like retirement plan options.
Like no one's going to give you that content. Like for your personal financial situation.Raiza [00:49:14]: Yeah.Usama [00:49:14]: And like, even when we started out the experiment, like a lot of the goal was to go for really obscure content and see how well we could transform that. So like if you look at the mountain view, like city council meeting notes, like you're never going to read it. But like if it was a three minute summary, like that would be interesting. I see.Swyx [00:49:33]: You have one system, one prompt that just covers everything you threw at it.Raiza [00:49:37]: Maybe.Swyx [00:49:39]: I'm just, I'm just like, yeah, it's really interesting. You know what? I'm trying to figure out what you nailed compared to others. And I think that the way that you treat your, the AI is like a little bit different than a lot of the builders I talked to. So I don't know what it is. You said, I wish I had a transcript right in front of me, but it's something like people treat AI as like a tool for thought, but usually it's kind of doing their bidding and you know, what you're really doing is loading up these like two virtual agents. I don't, you've never said the word agents. I put that in your mouth, but two virtual humans or AIs and letting them from the, from their own opinion and letting them kind of just live and embody it a little bit. Is that accurate?Raiza [00:50:17]: I think that that is as close to accurate as possible. I mean, in general, I try to be careful about saying like, oh, you know,Raiza [00:50:24]: letting, you know, yeah, like these, these personas live.Raiza [00:50:27]: But I think to your earlier question of like, what makes it interesting? That's what it takes to make it interesting.Raiza [00:50:32]: Yeah.Raiza [00:50:32]: Right. And I think to do it well is like a worthy challenge. I also think that it's interesting because they're interested, right? Like, is it interesting to compare?Raiza [00:50:42]: Yeah.Raiza [00:50:42]: Is it, is it interesting to have two retirement plans?Raiza [00:50:46]: No, but to listen to these two talk about it.Raiza [00:50:50]: Oh my gosh.Raiza [00:50:50]: You'd think it was like the best thing ever invented, right? It's like, get this, deep dive into 401k through Chase versus, you know,Raiza [00:50:59]: whatever.Swyx [00:51:00]: They do do a lot of get this.Raiza [00:51:02]: I know. I know.Raiza [00:51:03]: I dream about it.Raiza [00:51:06]: I'm sorry.Swyx [00:51:08]: There's a, I have a few more questions on just like the engineering around this. And obviously some of this is just me creatively asking how this works. How do you make decisions between when to trust the AI overlord to decide for you? In other words, stick it, let's say products as it is today. You want to improve it in some way. Do you engineer it into the system? Like write code to make sure it happens or you just stick it in the prompt and hope that the LLM does it for you?Raiza [00:51:38]: Do you know what I mean?Raiza [00:51:39]: Do you mean specifically about audio or sort of in general?Swyx [00:51:41]: In general, like designing AI products. I think this is like the one thing that people are struggling with. And there's, there's compound AI people and then there's big AI people. So compound AI people will be like Databricks, have lots of little models, chain them together to make an output. It's deterministic. You control every single piece and you know, you produce what you produce. The open AI people, totally the opposite. 
Like write one giant prompts and let the model figure it out.Raiza [00:52:05]: Yeah.Swyx [00:52:06]: And obviously the answer for most people is going to be a spectrum in between those two, like big model, small model. When do you decide that?Raiza [00:52:11]: I think it depends on the task. It also depends on, well, it depends on the task, but ultimately depends on what is your desired outcome? Like what am I engineering for here? And I think there's like several potential outputs and there's sort of like generalRaiza [00:52:24]: categories.Raiza [00:52:24]: Am I trying to delight somebody? Am I trying to just like meet whatever the person is trying to do? Am I trying to sort of simplify a workflow?Raiza [00:52:31]: At what layer am I implementing this?Raiza [00:52:32]: Am I trying to implement this as part of the stack to reduce like friction, you know, particularly for like engineers or something? Or am I trying to engineer it so that I deliver like a super high qualityRaiza [00:52:43]: thing?Raiza [00:52:44]: I think that the question of like which of those two, I think you're right, itRaiza [00:52:48]: is a spectrum.Raiza [00:52:49]: But I think fundamentally it comes down to like it's a craft, like it's still a craft as much as it is a science. And I think the reality is like you have to have a really strong POV about like what you want to get out of it and to be able to make that decision. Because I think if you don't have that strong POV, like you're going to get lost in sort of the detail of like capability. And capability is sort of the last thing that matters because it's like, models will catch up, right? Like models will be able to do, you know, whatever in the next five years. It's going to be insane. So I think this is like a race to like value. And it's like really having a strong opinion about like, what does that lookRaiza [00:53:25]: like today?Raiza [00:53:25]: And how far are you going to be able to push it? Sorry, I think maybe that was like very like philosophical.Swyx [00:53:31]: We get there.Usama [00:53:32]: And I think that hits a lot of the points it's going to make.Alessio [00:53:35]: I tweeted today or I ex-posted, whatever, that we're going to interview you on what we should ask you. So we got a list of feature requests, mostly. It's funny. Nobody actually had any like specific questions about how the product was built. They just want to know when you're releasing some feature. So I know you cannot talk about all of these things, but I think maybe it would give people an idea of like where the product is going. So I think the most common question I think five people asked is like, are you going to build an API? And, you know, do you see this product as still be kind of like a full head product for like a login and do everything there? Or do you want it to be a piece of infrastructure that people build on?Raiza [00:54:13]: I mean, I think why not both?Raiza [00:54:16]: I think we work at a place where you could have both. I think that end user products, like products that touch the hands of usersRaiza [00:54:23]: have a lot of value.Raiza [00:54:24]: For me personally, like we learn a lot about what people are trying to do and what's like actually useful and what people are ready for. And so we're going to keep investing in that. I think at the same time, right, there are a lot of developers that are interested in using the same technology to build their own thing. We're going to look into that, how soon that's going to be ready. 
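The spectrum Swyx lays out above, many small deterministic steps versus one giant prompt, is easiest to see side by side. Both sketches below assume a generic `generate(prompt)` LLM call; neither is how NotebookLM is actually wired.

```python
from typing import Callable

def big_prompt(source: str, generate: Callable[[str], str]) -> str:
    """One giant prompt: hand everything to the model and trust it to plan."""
    return generate(
        "Read the source below, pick the key points, choose a tone, and write "
        "a two-host audio script.\n\n" + source
    )

def compound_pipeline(source: str, generate: Callable[[str], str]) -> str:
    """Chained small steps: each stage is inspectable and testable on its own."""
    outline = generate("List the 5 most important points in this source:\n" + source)
    angles = generate("For each point, give two contrasting takes:\n" + outline)
    return generate("Turn these points and takes into a two-host script:\n" + angles)
```

The compound version is easier to eval stage by stage; the single prompt leans on raw model capability, which, as Raiza notes, keeps improving, so where you land on the spectrum is mostly a product call rather than a technical one.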
I can't really comment, but these are the things that like, Hey, we heard it.Raiza [00:54:47]: We're trying to figure it out.Raiza [00:54:48]: And I think there's room for both.Swyx [00:54:50]: Is there a world in which this becomes a default Gemini interface because it's technically different org?Raiza [00:54:55]: It's such a good question.Raiza [00:54:56]: And I think every, every time someone asks me, it's like, Hey, I just leadRaiza [00:55:00]: Domogolem.Raiza [00:55:02]: We'll ask the Gemini folks what they think.Alessio [00:55:05]: Multilingual support. I know people kind of hack this a little bit together. Any ideas for full support, but also I'm mostly interested in dialects. In Italy, we have Italian obviously, but we have a lot of local dialects. Like if you go to Rome, people don't really speak Italian, they speak localRaiza [00:55:20]: dialect.Alessio [00:55:21]: Do you think there's a path to which these models, especially the speech can learn very like niche dialects? Like how much data do you need? Can people contribute? Like I'm curious, like if you see this as a possibility.Raiza [00:55:35]: Totally.Usama [00:55:35]: So I guess high level, like we're definitely working on adding moreRaiza [00:55:39]: languages.Usama [00:55:39]: That's like top priority. We're going to start small, but like theoretically we should be able to cover like most languages pretty soon. What a ridiculous statement, by the way.Swyx [00:55:48]: That's, that's crazy.Usama [00:55:49]: Unlike the soon or the pretty soon part.Swyx [00:55:52]: No, but like, you know, a few years ago, like a small team of like, I don't know, 10 people saying that we will support the top 100, 200 languages is like absurd, but you can do it. Yeah, you can do it.Raiza [00:56:03]: And I think like the speech team, you know, we are a small team, but the speech team is another team and the modeling team, like these folks are just like absolutely brilliant at what they do. And I think like when we've talked to them and we've said, Hey, you know, howRaiza [00:56:17]: about more languages? How about more voices? How about dialects?Raiza [00:56:20]: Right?Raiza [00:56:20]: This is something that like they are game to do. And like, that's, that's the roadmap for them.Usama [00:56:25]: The speech team supports like a bunch of other efforts across Google, like Gemini Live, for example, is also the models built by the same like sort of deep mind speech team. But yeah, the thing about dialects is really interesting. Cause like, and some of our sort of earliest testing with trying out other languages, we actually noticed that sometimes it wouldn't stick to a certain dialect, especially for like, I think for French, we noticed that like when we presented it to like a native speaker, it would sometimes go from like a Canadian person speaking French versus like a French person speaking French or an American person speaking French, which is not what we wanted. So there's a lot more sort of speech quality work that we need to do there to make sure that it works reliably. And at least sort of like the, the standard dialect that we want, but that does show that there's potential to sort of do the thing that you're talking about of like fixing a dialect that you want, maybe contribute your own voice or like you pick from one of the options. There's, there's a lot more headroom there. Yeah.Alessio [00:57:20]: Because we have movies, like we have old Roman movies that have like different languages, but there's not that many, you know? 
So I'm always like, well, I'm sure like the Italian is so strong in the model that like when you're trying to like pull that away from it, like you kind of need a lot, but right.Usama [00:57:35]: That's, that's all sort of like wonderful deep mind speech team.Swyx [00:57:39]: Well, anyway, if you need Italian, he's got you.Swyx [00:57:44]: Specifically Singlish.Raiza [00:57:45]: I got you.Swyx [00:57:46]: Managing system prompts. People want a lot of that. I assume.Raiza [00:57:50]: Yes.Swyx [00:57:50]: Ish.Raiza [00:57:51]: Definitely looking into it for just core notebook LM. Like everybody's wanted that forever. So we're working on that. I think for the audio itself, we're trying to figure out the best way to do it. So we'll launch something sooner rather than later. So we'll probably stage it. And I think like, you know, just to be fully transparent, we'll probably launch something that's more of a fast follow than like a fully baked feature first.Raiza [00:58:15]: Just because like, I see so many people put in like the fake show notes.Raiza [00:58:18]: It's like, Hey, I'll, I'll help you out.Raiza [00:58:19]: We'll just put a text box. Yeah. Yeah.Usama [00:58:21]: I think a lot of people are like, this is almost perfect, but like, I just need that extra 10, 20%. Yeah.Swyx [00:58:26]: I noticed that you say no a lot, I think, or you try to ship one thing and that there's different about you than maybe other PMs or other teams that try to ship, but they're like, Oh, here are all the knobs.Raiza [00:58:38]: I'm just.Swyx [00:58:38]: Take all my knobs. Yeah.Raiza [00:58:40]: Yeah.Swyx [00:58:40]: Top P top cake. It doesn't matter. I'll just put it in the docs and you figure it out. Right. Whereas for you, it's you, you actually just, you make one product.Raiza [00:58:49]: Yeah.Swyx [00:58:49]: As opposed to like 10, you could possibly have done.Raiza [00:58:51]: Yeah.Swyx [00:58:51]: I don't know.Raiza [00:58:52]: It's interesting. I think about this a lot.Raiza [00:58:53]: I think it requires a lot of discipline because I thought about the knobs.Raiza [00:58:57]: I was like, Oh, I saw on Twitter, you know, on X people want the knobs. It's like, great.Raiza [00:59:02]: Start mocking it up, making the text boxes, designing like the little fiddles.Raiza [00:59:06]: Right.Raiza [00:59:07]: And then I looked at it and I was kind of sad. I was like, well, right. It's like, Oh, it's like, this is not cool.Raiza [00:59:12]: This is not fun.Raiza [00:59:13]: This is not magical. It is sort of exactly what you would expect knobs to be. Then, you know, it's like, Oh, I mean, how much can you, you know, design a knob?Raiza [00:59:24]: I thought about it. I was like, but the thing that people really like was that there wasn't any.Raiza [00:59:29]: That they just pushed a button and it was cool.Raiza [00:59:32]: And so I was like, how do we bring more of that?Raiza [00:59:34]: Right.Raiza [00:59:34]: That still gives the user the optionality that they want. And so this is where like, you have to have a strong POV. I think you have to like really boil down. What did I learn in like the month since I've launched this thing that people really want? And I can give it to them while preserving like that, that delightful sort of fun experience. And I think that's actually really hard.Raiza [00:59:54]: Like I'm not going to come up with that by myself.Raiza [00:59:55]: And like, that's something that like our team thinks about every day. We all have different ideas. 
We're all experimenting with sort of how to get the most out of like the insight and also ship it quick. So, so we'll see.Raiza [01:00:06]: We'll find out soon if people like it or not.Usama [01:00:08]: I think the other interesting thing about like AI development now is that the knobs are not necessarily like speak going back to all the sort of like craft and like human taste and all of that that went into building it. Like the knobs are not as easy to add as simply like I'm going to add a parameter to this and it's going to make it happen. It's like you kind of have to redo the quality process for everything. Yeah, the prioritization is also different.Raiza [01:00:36]: It goes back to sort of like, it's a lot easier to do an eval for like the deep dive format than if like, okay, now I'm going to let you inject like these random things, right?Raiza [01:00:45]: Okay.Raiza [01:00:45]: How am I going to measure quality?Raiza [01:00:46]: Either?Raiza [01:00:46]: I say, I don't care because like you just input whatever.Raiza [01:00:50]: Or I say, actually wait, right?Raiza [01:00:53]: Like I want to help you get the best output ever.Raiza [01:00:55]: What's it going to take?Usama [01:00:56]: The knob actually needs to work reliably.Raiza [01:00:58]: Yeah. Yeah. Very important part.Alessio [01:01:00]: Two more things we definitely want to talk about. I guess now people equivalent notebook LM to like a podcast generator, but I guess, you know, there's a whole product suite there.Raiza [01:01:09]: Yeah.Alessio [01:01:10]: How should people think about that? Like is this, and also like the future of the product as far as monetization too, you know, like, is it going to be the voice thing going to be a core to it? Is it just going to be one output modality? And like, you're still looking to build like a broader kind of like a interface with data and documents.Raiza [01:01:27]: I mean, that's such a, that's such a good question that I think the answer it's I'm waiting to get more data. I think because we are still in the period where everyone's really excited about it, everyone's trying it. I think I'm getting a lot of sort of like positive feedback on the audio. We have some early signal that says it's a really good hook, but people stay for the other features.Raiza [01:01:49]: So that's really good too.Raiza [01:01:50]: I was making a joke yesterday.Raiza [01:01:51]: I was like, it'd be really nice, you know, if it was just the audio, because then I could just like simplify the train.Raiza [01:01:58]: Right.Raiza [01:01:58]: I don't have to think about all this other functionality, but I think the reality is that the framework kind of like what we were talking about earlier that we had laid out, which is like you bring your own sources. There's something you do in the middle and then there's an output is that really extensible one. And it's a really interesting one. And I think like, particularly when we think about what a big business looks like, especially when we think about commercialization, audio is just one such modality. But the editor itself, like the space in which you're able to do these things is like, that's the business, right? Like maybe the audio by itself, not so much, but like in this big package, like, oh, I could see that. 
I could see that being like a really big business.Raiza [01:02:37]: Yep.Alessio [01:02:37]: Any thoughts on some of the alternative ways to interact with data and documents, like Claude Artifacts, like a ChatGPT Canvas, you know, kind of how do you see, maybe, where NotebookLM stops, but like, Gemini starts, like you have so many amazing teams and products at Google. There's sometimes like, I'm sure you have to figure that out.Raiza [01:02:56]: Yeah.Raiza [01:02:56]: Well, I love Artifacts.Raiza [01:02:59]: I played a little bit with Canvas. I got a little dizzy using it. I was like, oh, there's something.Raiza [01:03:03]: Well, you know, I like the idea of it fundamentally, but something about the UX was like, oh, this is like more disorienting than like Artifacts.Raiza [01:03:11]: And I couldn't figure out what it was. And I didn't spend a lot of time thinking about it, but I love that, right?Raiza [01:03:16]: Like the thing where you are like, I'm working with, you know, an LLM, an agent, a chatbot or whatever to create something new. And there's like the chat space.Raiza [01:03:26]: There's like the output space. I love that. And the thing that I think I feel angsty about is like, we've been talking about this for like a year, right?Raiza [01:03:35]: Like, of course, like I'm going to say that, but it's like, but like for a year now I've had these like mocks that I was just like, I want to push the button.Raiza [01:03:42]: But we prioritize other things.Raiza [01:03:43]: We were like, okay, what can we like really win at? And like we prioritize audio, for example, instead of that. But just like when people were like, oh, what is this magic draft thing? Oh, it's like a hundred percent, right?Raiza [01:03:54]: It's like stuff like that that we want to try to build into notebook too.Raiza [01:03:57]: And I'd made this comment on Twitter as well, where I was like, now I don't know, actually, right? I don't actually know if that is the right thing.Raiza [01:04:05]: Like, are people really getting utility out of this? I mean, from the launches, it seems like people are really getting it.Raiza [01:04:11]: But I think now if we were to ship it, I have to rev on it like one layer more, right? I have to deliver like a differentiating value compared to like Artifacts or Canvas, which is hard.Swyx [01:04:20]: Which is because you've, you demonstrated the ability to fast follow. So you don't have to innovate every single time. I know, I know.Raiza [01:04:27]: I think for me, it's just like the bar is high to ship.Raiza [01:04:30]: And when I say that, I think it's sort of like conceptually like the value that you deliver to the user. I mean, you'll, you'll see in NotebookLM. There are a lot of corners that like that I have personally cut where it's like our UX designer is always like, I can't believe you let us ship with like these ugly scroll bars. And I'm like, no, no one notices, I promise.Raiza [01:04:47]: He's like, no, everyone.Raiza [01:04:48]: It's a screenshot, this thing.Raiza [01:04:50]: But I mean, kidding aside, I think that's true that it's like we do want to be able to fast follow.Raiza [01:04:54]: But I think we want to make sure that things also land really well. So the utility has to be there.Swyx [01:04:59]: Coding, especially on our podcast, has a special place. Is a code NotebookLM interesting to you? I haven't, I've never, I don't see like a connect my GitHub to this thing. Yeah, yeah.Raiza [01:05:10]: I think code, code is a big one. Code is a big one.
I think we have been really focused, especially when we had like a much smaller team, we were really focused on like, let's push like an end to end journey together. Let's prove that we can do that. Because then once you lay the groundwork of like sources, do something in the chat output, once you have that, you just scale it up from there. Right. And it's like, now it's just a matter of like scaling the inputs, scaling the outputs, scaling the capabilities of the chat. So I think we're going to get there. And now I also feel like I have a much better view of like where the investment is required. Whereas previously I was like, Hey, like let's flesh out the story first before we put more engineers on this thing, because that's just going to slow us down.Usama [01:05:49]: For what it's worth, the model still understands code. So I've seen at least one or two people just like download their GitHub repo, put it in there and get like an audio overview of your code.Raiza [01:06:00]: Yeah, yeah. I've never tried that.Usama [01:06:01]: This is like, these are all the files are connected together because the model still understands code. Like even if you haven't like.Raiza [01:06:07]: I think on sort of like the creepy side of things, I did watch a student like with her permission, of course, I watched her do her homework in Notebook LM.Raiza [01:06:17]: And I didn't tell her like what kind of homework to bring, but she brought like her computer science homework.Raiza [01:06:23]: And I was like, Oh, and she uploaded it. And she said, here's my homework, read it. And it was just the instructions. And Notebook LM was like, okay, I've read it. And the student was like, okay, here's my code so far.Raiza [01:06:37]: And she copy pasted it from the editor.Raiza [01:06:39]: And she was like, check my homework. And Notebook LM was like, well, number one is wrong.Raiza [01:06:44]: And I thought that was really interesting because it didn't tell her what was wrong. It just said it's wrong.Raiza [01:06:48]: And she was like, okay, don't tell me the answer, but like walk me through like how you think about this. And it was what was interesting for me was that she didn't ask for the answer.Raiza [01:06:58]: And I asked her, I was like, oh, why did you do that? And she was like, well, I actually want to learn it. She's like, because I'm gonna have to take a quiz on this at some point. And I was like, oh, yeah, it's a really good point.Raiza [01:07:05]: And it was interesting because, you know, Notebook LM, while the formatting wasn't perfect, like did say like, hey, have you thought about using, you know, maybe an integer instead of like this?Raiza [01:07:14]: And so that was, that was really interesting.Alessio [01:07:16]: Are you adding like real-time chat on the output? Like, you know, there's kind of like the deep dive show and then there's like the listeners call in and say, hey.Raiza [01:07:26]: Yeah, we're actively, that's one of the things we're actively prioritizing. Actually, one of the interesting things is now we're like, why would anyone want to do that? Like, what are the actual, like kind of going back to sort of having a strong POV about the experience? It's like, what is better? Like, what is fundamentally better about doing that? That's not just like being able to Q&A or Notebook. How is that different from like a conversation? Is it just the fact that there was a show and you want to tweak the show? Is it because you want to participate? 
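A minimal sketch of the repo trick Usama describes above: flatten a local clone into one labeled text file and upload that file to NotebookLM as a source by hand (there is no public upload API, so that last step stays manual). The repo path, file extensions, and size cap below are illustrative assumptions, not anything the product prescribes.

```python
# Sketch: flatten a local repo clone into a single text source for manual upload.
# REPO_DIR, EXTENSIONS, and MAX_CHARS are illustrative assumptions.
from pathlib import Path

REPO_DIR = Path("my-repo")            # hypothetical local clone
OUTPUT = Path("repo_flattened.txt")   # the file you would upload by hand
EXTENSIONS = {".py", ".md", ".toml"}  # pick whatever matters for your project
MAX_CHARS = 500_000                   # rough cap to keep the source manageable

def flatten_repo(repo_dir: Path, output: Path) -> None:
    parts = []
    for path in sorted(repo_dir.rglob("*")):
        if path.is_file() and path.suffix in EXTENSIONS:
            text = path.read_text(encoding="utf-8", errors="ignore")
            # Label each file so the model can see how the files connect.
            parts.append(f"\n===== {path.relative_to(repo_dir)} =====\n{text}")
    output.write_text("".join(parts)[:MAX_CHARS], encoding="utf-8")

if __name__ == "__main__":
    flatten_repo(REPO_DIR, OUTPUT)
```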
So I think there's a lot there that like we can continue to unpack. But yes, that's coming.Swyx [01:07:58]: It's because I formed a parasocial relationship. Yeah, that just might be part of your life.Raiza [01:08:03]: Get this.Raiza [01:08:05]: Totally.Swyx [01:08:07]: Yeah, but it is obviously because OpenAI has just launched a real-time chat. It's a very hot topic. I would say one of the toughest AI engineering disciplines out there because even their API doesn't do interruptions that well, to be honest. And, you know, yeah, so real-time chat is tough.Raiza [01:08:25]: I love that thing.Raiza [01:08:26]: I love it.Swyx [01:08:27]: Okay, so we have a couple ways to end. Either call to action or laying out one principle of AI PMing or engineering that you really think about a lot. Is there anything that comes to mind?Raiza [01:08:39]: I feel like that's a test.Raiza [01:08:40]: Of course, I'm going to say go to notebooklm.google.com, try it out, join the Discord and tell us what you think.Swyx [01:08:46]: Yeah, especially like you have a technical audience. What do you want from a technical engineering audience?Raiza [01:08:52]: I mean, I think it's interesting because the technical and engineering audience typically will just say, hey, where's the API?Raiza [01:08:58]: But, you know, I think we addressed it. But I think what I would really be interested to discover is, is this useful to you?Raiza [01:09:05]: Why is it useful?Raiza [01:09:05]: What did you do? Right? Is it useful tomorrow?Raiza [01:09:08]: How about next week?Raiza [01:09:08]: Just the most useful thing for me is if you do stop using it or if you do keep using it, tell me why.Raiza [01:09:14]: Because I think contextualizing it within your life, your background, your motivations, is what really helps me build really cool things.Swyx [01:09:22]: And then one piece of advice for AI PMs.Raiza [01:09:24]: Okay, if I had to pick one, it's just always be building. Build things yourself. I think for PMs, it's such a critical skill. And just take time to pop your head up and see what else is new out there. On the weekends, I try to have a lot of discipline. I only use ChatGPT and Cloud on the weekend. I try to use the APIs. Occasionally, I'll try to build something on GCP over the weekend because I don't do that normally at work. But it's just the rigor of just trying to be a builder yourself. And even just testing, right? You could have an idea of how a product should work and maybe your engineers are building it. But it's like, what was your proof of concept? What gave you conviction that that was the right thing?Raiza [01:10:06]: Call to action?Usama [01:10:07]: I feel like consistently, the most magical moments out of AI building come about for me when I'm really, really, really just close to the edge of the model capability. And sometimes it's farther than you think it is. I think while building this product, some of the other experiments, there were phases where it was easy to think that you've approached it. But sometimes at that point, what you really need is to show your thing to someone and they'll come up with creative ways to improve it. We're all sort of learning, I think. So yeah, I feel like unless you're hitting that bound of this is what Gemini 1.5 can do, probably the magic moment is somewhere there, in that sort of limit.Swyx [01:10:48]: So push the edge of the capability. 
Yeah, totally.Alessio [01:10:51]: It's funny because we had Nicholas Carlini from DeepMind on the pod and he was like, if the model is always successful, you're probably not trying hard enough to give it hard problems.Raiza [01:11:00]: Right. Thanks.Alessio [01:11:00]: So, yeah.Swyx [01:11:03]: My problem is sometimes I'm not smart enough to judge. Yeah, right.Raiza [01:11:08]: Well, I think I hear that a lot.Raiza [01:11:11]: Like people are always like, I don't know how to use it.Raiza [01:11:14]: And it's hard.Raiza [01:11:15]: Like I remember the first time I used Google search. I was like, what do we type?Raiza [01:11:18]: My dad was like, anything.Raiza [01:11:19]: It's like anything.Raiza [01:11:20]: I got nothing in my brain, dad. What do you mean?Raiza [01:11:23]: And I think there is a lot of like for product builders is like, have a strong opinion about like, what is the user supposed to do?Raiza [01:11:30]: Yeah. Help them do it.Swyx [01:11:31]: Principle for AI engineers or like just one advice that you have others?Usama [01:11:36]: I guess like in addition to pushing the bounds and to do that, that often means like you're not going to get it right in the first go. So like, don't be afraid to just like batch multiple models together. I guess that's I'm basically describing an agent, but more thinking time equals just better results consistently. And that holds true for probably every single time that I've tried to build something.Swyx [01:12:01]: Well, at some point we will talk about the sort of longer inference paradigm. It seems like DeepMind is rumored to be coming out with something. You can't comment, of course.Raiza [01:12:09]: Yeah.Swyx [01:12:09]: Well, thank you so much. You know, you've created. I actually said, I think you saw this. I think that NotebookLM was kind of like the ChatGPT moment for Google.Raiza [01:12:18]: That was so crazy when I saw that.Raiza [01:12:19]: I was like, what?Raiza [01:12:20]: Like, ChatGPT was huge for me. And I think, you know, when you said it and other people have said it, I was like, is it?Raiza [01:12:27]: Yeah. That's crazy.Swyx [01:12:28]: People weren't like really cognizant of NotebookLM before and audio overviews and NotebookLM like unlocked the, you know, a use case for people in the way that I would go so far as to say Claude Projects never did. And I don't know. You know, I think a lot of it is competent PMing and engineering, but also just, you know, it's interesting how a lot of these projects are always like low key research previews for you. It's like you're a separate org, but like, you know, you built products and UI innovation on top of also working with research to improve the model. That was a success that wasn't planned to be this whole big thing. You know, your TPUs were on fire, right?Raiza [01:13:06]: Oh my gosh, that was so funny.Raiza [01:13:08]: I didn't know people would like really catch on to the Elmo fire, but it was just like one of those things where I was like, you know, we had to ask for more TPUs.Raiza [01:13:16]: Yeah, we many times.Raiza [01:13:18]: And, you know, it was a little bit of a, of a subtweet of like, Hey, reminder, give us more TPUs on here.Raiza [01:13:25]: It's weird.Swyx [01:13:25]: I just think like when people try to make big launches, then they flop. And then like when they're not trying and they just, they're just trying to build a good thing, then, then they succeed.
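Usama's advice above about batching multiple model calls for more thinking time is easy to prototype. Below is a minimal, provider-agnostic sketch of a draft, critique, revise loop; the call_model parameter is a placeholder you would wire to whatever LLM API you actually use, not a real library function.

```python
# Sketch of the "more thinking time" pattern: chain several model calls
# (draft -> critique -> revise) instead of a single shot.
from typing import Callable

def answer_with_revision(
    call_model: Callable[[str], str],  # your own LLM call; assumed, not a real API
    question: str,
    rounds: int = 2,
) -> str:
    """Draft an answer, then spend extra passes critiquing and revising it."""
    draft = call_model(f"Answer the question:\n{question}")
    for _ in range(rounds):
        critique = call_model(
            f"Question:\n{question}\n\nDraft answer:\n{draft}\n\n"
            "List the concrete problems with this draft."
        )
        draft = call_model(
            f"Question:\n{question}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the draft so every problem in the critique is fixed."
        )
    return draft
```

The trade-off is straightforward: each extra round costs latency and tokens, but the self-critique pass consistently buys quality when you are operating near the edge of what the model can do in one shot.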
It's, it's this fundamentally really weird magic that I haven't really encapsulated yet, but you've, you've done it. Well, thank you.Raiza [01:13:40]: Thank you.Raiza [01:13:40]: And, you know, I think we'll just keep going in like the same way. We just keep trying, keep trying to make it better.Raiza [01:13:45]: I hope so.Swyx [01:13:46]: All right.Raiza [01:13:47]: Cool.Swyx [01:13:47]: Thank you.Raiza [01:13:48]: Thank you. Thanks for having us. Thanks. Get full access to Latent Space at www.latent.space/subscribe
    --------  
    1:13:57
  • Building the AI Engineer Nation — with Josephine Teo, Minister of Digital Development and Information, Singapore
    Singapore's GovTech is hosting an AI CTF challenge with ~$15,000 in prizes, starting October 26th, open to both local and virtual hackers. It will be hosted on Dreadnode's Crucible platform; signup here!It is common to say if you want to work in AI, you should come to San Francisco. Not everyone can. Not everyone should. If you can only do meaningful AI work in one city, then AI has failed to generalize meaningfully.As non-Americans working in the US, we know what it’s like to see AI progress so rapidly here, and yet be at a loss for what our home countries can do. Through Latent Space we’ve tried to tell the story of AI outside of the Bay Area bubble; we talked to Notion in New York and Humanloop and Wondercraft in London and HuggingFace in Paris and ICLR in Vienna, and the Reka, RWKV, and Winds of AI Winter episodes were taped in Singapore (the World’s Fair also had Latin America representation and we intend to at least add China, Japan, and India next year).The Role of Government with AIAs an intentionally technical resource, we’ve mostly steered clear of regulation and safety debates on the podcast; whether it is safety bills or technoalarmism, often at the cost of our engagement numbers or ability to book big name guests with a political agenda. When SOTA shifts 3x faster than it takes to pass a law, when nobody agrees on definitions of important things, when you can elicit never-before-seen behavior by slightly different prompting or sampling, it is hard enough to simply keep up to speed, so we are happy limiting our role to that. The story of AI progress has more often been achieved in the private sector, usually in spite of, rather than with thanks to, government intervention.But industrial policy is inextricably linked to the business of AI, which we do very much care about, has an explicitly accelerationist intent if not impact, and has a track record of success in correcting for legitimate market failures in private sector investment, particularly outside of the US. It is with this lens we approach today’s episode and special guest, our first with a sitting Cabinet member.Singapore’s National AI StrategyIt is well understood that much of Singapore’s economic success is attributable to industrial policy, from direct efforts like the Jurong Town Corporation industrialization to indirect ones like going all in on English as national first language. Singapore’s National AI Strategy grew out of its 2014 Smart Nation initiative, first launched in 2019 and then refreshed in 2023 by Minister Josephine Teo, our guest today.While Singapore is not often thought of as an AI leader, the National University ranks in the top 10 in publications (above Oxford/Harvard!), and many overseas Singaporeans work at the leading AI companies and institutions in the US (and some of us even run leading AI Substacks?). OpenAI has often publicly named the Singapore government as their model example of government collaborator and is opening an office in Singapore in time for DevDay 2024.AI Engineer NationsSwyx first pitched the AI Engineer Nation concept at a private Sovereign AI summit featuring Dr. 
He Ruimin, Chief AI Officer of Singapore, which eventually led to an invitation to discuss the concept with Minister Teo, the country’s de-facto minister for tech (she calls it Digital Development, for good reasons she explains in the pod).This chat happened (with thanks to Jing Long, Joyce, and other folks from MDDI)!The central pitch for any country, not just Singapore, to emphasize and concentrate bets on AI Engineers, compared with other valuable efforts like training more researchers, releasing more government-approved data, or offering more AI funding, is a calculated one, based on the fact that: * GPU clusters and researchers have massive returns to scale and colocation, mostly concentrated in the US, that are irresponsibly expensive to replicate* Even if research stopped today and there was no progress for the next 30 years, there are far more capabilities to unlock and productize from existing foundation models and we * Good AI Engineering requires genuine skill and is deepening enough to justify sub-specialization as a sub-industry of Software Engineering* Companies and countries with better AI engineer workforces will disproportionately benefit from AI vs those who equivocate it as one of many equivalent priorities* Tech progress is often framed as “the future is here but it is not evenly distributed”. The role of the AI Engineer is therefore to better distribute the state of the art to as much of humanity as possible, including the elderly, poor, and differently abled.All of which are themes we first identified in the Rise of the AI Engineer. Singapore simply has a few additional factors that make it not just a good fit, but an economic imperative:* English speaking, very-online country that is great at STEM* Aging, ex-growth population (Total Fertility Rate of 1.1)* #3 GDP per capita (PPP) country in the world* Physically remote from major economic growth centers ex China/SEAThat basically dictates that any continued economic growth must be disconnected to geography, timezone, or headcount, or reliance on existing industrial drivers. Short of holding Taylor Swift hostage, making an intentional, concentrated bet on AI industrial policy is Singapore’s best option to keep up progress in the 21st century. 
As a pioneer in education policy being the primary long term determinant of economic success, this may result in Python as Singapore’s next National Language in the long run, a proposal we also discussed extensively at the RAISE retreat where this episode was recorded.Because of upcoming election season concerns around the globe, we also took the opportunity to ask about Singapore’s recent deepfake (election integrity) law.Full YouTube episodeShow Notes* Josephine Teo Official Bio, Wikipedia* Singapore National AI Strategy* 2019 - v1* 2023 - v2* ICLR (machine learning conference)* Philipp Kandal (CPO of Grab)* Temasek* GIC* EDBI* Economic Development Board (EDB)* Michael Fay incident* Quincy Larson* AIBots (internal RAG system for Singapore government)* Slovakia election incident* National AI Strategy - Singapore* Singapore AI Safety Institute* AI Verify* SkillsFuture* Ministry of Digital Development and Information (MDDI)* GovTech* NTU (Nanyang Technological University)Timestamps00:00:00 Introductions00:00:34 Singapore's National AI Strategy00:02:50 Ministry of Digital Development and Information00:08:49 Defining a National AI Strategy00:14:32 AI Safety and Governance00:16:50 AI Adoption in Companies and Government00:19:53 Balancing AI Innovation and Safety00:22:56 Structuring Government for Rapid Technological Change00:27:08 Doing Business with Singapore00:32:21 Training and Workforce Development in AI00:37:05 Career Transition Help for Post-AI Jobs00:40:19 AI Literacy and Coding as a Language00:43:28 Sovereign AI and Digital Infrastructure00:50:48 Government and AI Workloads00:51:02 Favorite AI Use Case in Government00:53:52 AI and ElectionsTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small.ai.Swyx [00:00:13]: Hey everyone, this is a very, very special episode. We have here Mr. Josephine Teo from Singapore. Welcome.Josephine [00:00:19]: Hi Shawn and hi Alessio. Thank you for having me. Of course.Swyx [00:00:23]: You are the Minister for Digital Development and Information and Second Minister for Home Affairs. We're meeting here at RAISE, which is effectively your agency. Maybe we want to explain a little bit about what Singapore is doing in AI.Josephine [00:00:34]: Well, we've had an AI strategy at the national level for some years now, and about two years ago when generative AI became so prominent, we thought it was about time for us to refresh our national AI strategy. And it's not unusual on such occasions for us to consult widely. We want to talk to people who are familiar with the field. We want to talk to people who are active as practitioners, and we also want to talk to people in Singapore who have an interest in seeing the AI ecosystem develop. So when we put all these together, we discovered something else by chance, and it was really a bonus. This was the fact that there were already Singaporeans that were active in the AI space, particularly in the US, particularly in the Bay Area. And one of the exciting things for us was how could we also consult these Singaporeans who clearly still have a passion for Singapore, they do care about what happens back home, and they want to contribute to it. So that's how RAISE came about. And RAISE actually preceded the publication of the refresh of our national AI strategy, which took place in December last year. 
So the inputs of the participants from RAISE helped us to sharpen what we thought would be important in building up the AI ecosystem. And also with the encouragement of participants at RAISE, primarily Singaporeans who were doing great work in the US, we decided to raise our ambitions, literally. That's why we say AI for the public good, recognising the fact that commercial interest will certainly drive exciting developments in the industry space. But keep in mind, there is a need to make sure that AI serves the public good. And we say for Singapore and the world. So the idea is that experiments that are carried out in Singapore, things that are scaled up in Singapore potentially could have contributions elsewhere in the world. And so AI for the public good, for Singapore and the world. That's how it came about.Alessio [00:02:50]: I was listening to some of your previous interviews, and even the choice of the name development in the ministry name was very specific. You mentioned naming is your ethos. Can you explain maybe a bit about what the ministry does, which is not simply funding R&D, but it's also thinking about how to apply the technologies in industry and just maybe give people an overview since there's not really an equivalent in the US?Josephine [00:03:13]: Yeah, so when people talk about our Smart Nation efforts, it was helpful in articulating a few key pillars. We talked about one pillar being a vibrant digital economy. We also talk about a stable digital society because digital technologies, the way in which they are used, can sometimes cause divisions in society or entrench polarisation. They can also have the potential of causing social upheaval. So when we talked about stable digital society, that was what we had in mind. How do you preserve cohesion? Then we said that in this domain, government has to be progressive too. You can't expect the rest of Singapore to digitalise, and yet the government is falling behind. So a progressive digital government is another very important pillar. And underpinning all of this has to be comprehensive digital security. There is, of course, cyber security, but there is also how individuals feel safe in the digital domain, whether as users on social media or if they're using devices and they're using services that are delivered digitally. So when we talk about these four pillars of a Smart Nation, people get it. When we then asked ourselves, what is the appropriate way to think of the ministry? We used to be known as the Ministry of Communications and Information, and we had been doing all this digital stuff without actually putting it into our name. So when we eventually decided to rename the ministry, there were a couple of options to choose from. We could have gone for digital technologies, we could have gone for digital advancement, we could have gone for digital innovation. But ultimately we decided on digital development because it wasn't the technologies, the advancements or the innovation that we cared about, they are important, but we're really more interested in their impact to society, impact to communities. So how do we shape those developments? How do we achieve a digital experience that is trustworthy? How do we make sure that everyone, not just individuals who are savvy from the get-go in digital engagements, how does everyone in society, regardless of age, regardless of background, also feel that they have a sense of progression, that embracing technology brings benefits to them? 
And we also believe that if you don't pay attention to it, then you might not consciously apply the use of technology to bring people together. And you may passively just allow society to break apart without being too...Swyx [00:06:05]: Oh my god, that's drastic.Josephine [00:06:06]: That sounds very drastic, that sounds a bit scary. But we thought that it's important to say that we do have the objective of bringing people together with the help of technology. So that's how we landed on the idea of digital development. And there's one more dimension, that one we draw reference from perhaps the physical developmental aspects of cities. We say that if you think of yourself as a developer, all developers have to conceptualise, all developers have to plan, developers have to implement, and in the process of implementation you will monitor and things don't go as well as you'd like them to, you have to rectify. Yeah, it sucks, essentially, it is. But that's what any developer, any good developer must do. But a best-in-class developer would also have to think about the higher purpose that you're trying to achieve. Should also think about who are the partners that you bring into the picture and not try to do everything alone. And I think very importantly, a best-in-class developer seeks to be a leader in thought and action. So we say that if we call ourselves the Ministry of Digital Development, how do we also, whether in thinking of the digital economy, thinking of the digital society, digital security or digital government, embody these values, these values of being a bridge builder, being an entity that cares about the longer-term impact, that serves a higher purpose. So those were the kinds of things that we brought into the discussions on our own renaming. That's quite a good experience for the whole team.Swyx [00:07:49]: From the outside, I actually was surprised, I was looking for MCI and I couldn't find it. Since you renamed it.Josephine [00:07:54]: There, there, there.Swyx [00:07:55]: Yeah, exactly. We have to plug the little logo for the cameras. I really like that you are now recognizing the role of the web, digital development, technology. We never really had it officially, it used to be Ministry of Information Communication and the Arts. One thing that we're going to touch on is the growth of Singapore as an engineering hub. OpenAI is opening an office in Singapore and how we can grow more AI engineers in Singapore as well. Because I do think that that is something that people are interested in, whether or not it's for their own careers or to hire out in Singapore. Maybe it's a good time to get into the National AI Strategy. You presented it to the PM, now PM, I guess. I don't know what the process was because we have a new PM. Most of our audience is not going to be Singaporeans. There are going to be more Singaporeans than normal, but most of our audience are not Singaporeans, they've never heard of it. But they all come from countries which are all trying to figure out the National AI Strategy. So how did you go about defining a National AI Strategy?Josephine [00:08:49]: Well, in some sense, we went back to the drawing board and said, what do we want to see AI be able to do in Singapore? I mean, there are all these exciting developments, obviously we would like to be part of the action. But it has to be in service of something. And what we were interested in is just trying to find a way to continuously uplift our people. 
Because ultimately, for any national strategy to work, it must bring benefits to the local communities. And the local communities can be defined very broadly. You have citizen communities, and citizens would like to be able to do better jobs, and they would like to be able to earn higher wages. But it's not just citizen communities. Citizens are themselves sometimes involved in businesses. So how about the enterprise community? And in the enterprise community, in the Singapore landscape, it's really interesting. Like most other economies, we do have SMEs. But we also have multinationals that are at the very cutting edge. Because in order to succeed in Singapore, they have to be very competitive. So the question is, how can they, through the use of technologies, and including AI, offer an even higher value proposition to their customers, to their owners. And so we were very interested in seeing enterprise applications of AI. That in a way also relates back to the workforce. Because for all of the employees of these organisations, then to see that their employers are implementing AI models, and they are identifying AI use cases, is tremendously motivating for the broader workforce to themselves want to acquire AI-related skills. Then not forgetting that for the large body of small and medium enterprises, it's always going to be a little bit harder for smaller businesses to access technologies. So what do we put in place to enable these small businesses to take advantage of what AI has to offer? So you have to have a holistic strategy that can fire up many different engines. So we work across the board to make compute available, firstly to the research community, but also taking care to ensure that compute capacity could be available to companies that are in need of them. So how do we do that? That's one question that we have to go get it organised. Then another very important aspect is making data available. And I think in this regard, some of the earlier work that we did was helpful. We did, from more than a decade ago, already have privacy laws in place. We have data protection, and these laws have also been updated so as to support businesses with legitimate use cases. So the clarity and the certainty is there. And then we've also tried to organise data, make it more readily available. Some of it, for example, could be specific to the finance sector, some specific to the logistics sector. But then there are also different kinds of data that lies within government possession, and we are making it much more readily available to the private sector. So that deals with the data part of it. I think the third and very important part of it is talent. And we're thinking of talent at different levels. We're thinking of talent at the uppermost level, you know, for want of a better term, we call them AI creators. We know that they are very highly sought after, there aren't all that many in the world. And we want to interest them to do work with Singapore. Sometimes they will be in Singapore, but there is a value in them being plugged into the international networks, to be plugged into globally leading-edge projects that may or may not be done out of Singapore. We think that keeping those linkages are very important. These AI creators have to be supported by what we generally refer to as AI practitioners. We're talking about people who do data science, we're talking about people who do machine learning, they're engineers, they're absolutely engineers. 
But then you also need the broad swath of AI users, people who are going to be comfortable using the tools that are made available to them. So you may have, for example, a group within a company that designs AI bots or finds use cases, but if their colleagues aren't comfortable using them, then in some sense, the picture is not complete. So we want to address the talent question at all of these levels. In a sense, we are fortunate that Singapore is compact enough for us to be able to get these kinds of interventions organised. We already have a robust training infrastructure, we can rely on that. People know what funding support is available to them. Training providers know that if they curate programmes that lead to good employment outcomes, they are very likely to be able to get support to offer these programmes at subsidised rates. So in a sense, that ecosystem is able to support what we hope to see come out of an AI strategy. So those are just some of the pieces that we put in place.Swyx [00:14:15]: Many pieces. 15 items. Okay. So for people who are interested, they can look it up, but I just wanted to get an introduction to people. Many people don't even know that we have a very active AI strategy, and actually it's the second one. There's already been a five-year plan, pre-generative AI, which was very foresighted.Josephine [00:14:32]: One thing that we also pay attention to is how can AI be developed and deployed in a responsible manner, in a way that is trustworthy. And we want to plug ourselves into conversations at the forefront. We have an AI Safety Institute, and we work together with our colleagues in the US, as well as in the UK, and anywhere else that has AI Safety Institutes to try and advance our understanding of this topic. But I think more importantly is that in the meantime, we've got to offer the business community, offer AI developers something practical to work with. So we've developed testing tools, by no means perfect, but they're a start. And then we also said that because AI Verify was developed for traditional AI, classical AI, then for generative AI, you need something different. Something that also does red teaming, something that also does benchmarking. But actually our interests go beyond that, beyond AI governance frameworks and practical tools. We are interested in getting into the research as to how do you prove that an AI system is really safe? How do you get into the mathematics of it? I'm not an expert in this field, but I think it's not difficult for people to understand that until you can get to a proof, then some of the other testing is reassuring, but to an extent.Swyx [00:15:58]: It may be fundamentally unprovable.Josephine [00:16:00]: It may well be.Swyx [00:16:01]: You might have to be comfortable with that and go ahead anyway.Josephine [00:16:03]: Yes.Alessio [00:16:04]: Yeah. Yeah. The simulations especially are really interesting. I think NTU is going to be one of the first universities to have these cyber ranges for like an AI red teaming training. One of our companies does AI red teaming and their customers are like some of the biggest foundation model labs. And then GovTech is like the only government organization working. So yeah, Singapore has been at the forefront of this. We sat down with the CPO of Grab, Philipp Kandal, on my trip there, and they shut down their whole company for a week to just focus on Gen AI training. Literally, if you work at Grab, you have to do something in Gen AI and learn and get comfortable with it.
Going back to your point, I think the interest of the government easily transpires into the companies. This is like a national priority, so we should all spend time in it.Josephine [00:16:50]: You're right. Companies like Grab, what they are trying to do is to make awareness so broad within their organization and to get to a level of comfort with using Gen AI tools, which I think is a smart move because the returns will come later, but they will surely come. They're not the only ones doing that, I'm glad to say, some of our leading banks, even Singapore Airlines, which may be the airline that you flew into Singapore, they've got a serious team looking at AI use cases, and I don't know whether you are aware of it, they have definitely quite a good number. I'm not sure that they have talked about it openly because airline operations are quite complex.Swyx [00:17:37]: At least Singapore Airlines offer.Josephine [00:17:38]: No, because airline operations are very complex. There are lots of things that you can optimize. There are lots of things that you have to comply with. There are lots of processes that you must follow, and this kind of context makes it interesting for AI. You can put it to good use. And government mustn't be lagging too. We've always believed that in time to come, we may well have to put in place guardrails, but you are able to put in place guardrails better if you yourself have used the technology. So that's the approach that we are taking. Quite early on, we decided to lay out some guidelines on how Gen AI could be used by government offices. And then we also went about developing tools that will enable them to practice and also to try their hand at it. I think in today's context, we're quite happy with the fact that there are enough colleagues within government that are competent, that know, in fact, how to generate their own AI and create a system for their colleagues. And that's quite an exciting development.Swyx [00:18:47]: I will mention that as a citizen and someone keen on developing AI in Singapore, I do worry that we lead with safety, lead with public good. I'm not sure that the Singapore government is aware that safety sometimes is a bad word in some AI circles because their work is associated with censorship.Josephine [00:19:09]: Or over-regulation.Swyx [00:19:10]: Over-regulation. And nerfing is the Gen Z word for this, of capabilities in order to be safe. And actually that pushes what you call AI creators, some others might call LLM trainers, whatever. There are trade-offs. You cannot have it all. You cannot have safe and cutting edge sometimes, because sometimes cutting edge means unsafe. I don't know what the right answer is, but I will say that my perception is a lot of the Bay Area, San Francisco is on the, let everything be unregulated as possible. Let's explore the frontier. And Europe's approach is like, we're going to have government conferences on the safety of AI, even before creating frontier AI. And Singapore, I think is like in the middle of that. There's a risk. Maybe not. I saw you shake your head.Josephine [00:19:53]: It's a really interesting question. How do you approach AI development? Do you say that there are some ethical principles that should be adhered to? Do you say that there are certain guidelines that should inform the developer's thinking? And we don't have a law in place just yet. We've only introduced very recently a law that has yet to be passed. 
This is on AI generated content, other synthetic materials that could be used during an election. But that's very specific to an election. It's very specific to election. For the broader base of AI developers and AI model deployers, the way in which we've gone about it is to put in place the principles. We articulate what good AI governance should look like. And then we've decided to take it one step further. We have testing tools, we have frameworks, and we've also tried to say, well, if you go about AI development, what are some of the safety considerations that you should put in place? And then we suggest to AI model developers that they should be transparent. What are the things they ought to be transparent about? For example, your data. How is it sourced? You should also be transparent about the use cases. What do you intend for it to be used for? So there are some of these specific guidelines that we provide. They are, to a large extent, voluntary in nature. But on the other hand, we hope that through this process, there is enough education being done so that on the receiving end, those who are impacted by those models will learn to ask the right questions. And when they ask the right questions of the model developers and the deployers, then that generates a virtual cycle where good questions are being brought to the surface, and there is a certain sense of responsibility to address those questions. I take your point that until you are very clear about the outcomes you want to achieve, putting in place regulations could be counterproductive. And I think we see this in many different sectors. Well, since AI is often talked about as general purpose technology, yes, of course, in another general purpose technology, electricity, in its production, of course, there are regulations around that. You know, how to keep the workers safe in a power plant, for example. But many of the regulations do not attempt to stifle electricity usage to begin with. It says that, well, if you use electricity in this particular manner or in that particular manner, then here are the rules that you have to follow. I believe that that could be true of AI too. It depends on the use cases. If you use it for elections, then okay, we will have a set of rules. But if you're not using it for elections, then actually in Singapore today, go ahead. But of course, if you do harmful things, that's a different story altogether.Alessio [00:22:56]: How do you structure a ministry when the technology moves so quickly? Even if you think about the moratorium that Singapore had on data center build-out that was lifted recently, obviously, you know, that's a forward-looking thing. As you think about what you want to put in place for AI versus what you want to wait out and see, like, how do you make that decision? You know, CEOs have to make the same decision. Should I invest in AI now? Should I follow and see where it goes? What's the thought process and who do you work with?Josephine [00:23:23]: The fortunate thing for Singapore, I think, is that we're a single tier of government. In many other countries, you may have the federal level and then you have the provincial or state level governments, depending on the nomenclature in that particular jurisdiction. For us, it's a single tier.Swyx [00:23:41]: City-state.Josephine [00:23:42]: City-state. When you're referring to the government, well, is the government, no one asks, okay, is it the federal government or is it the local government? 
So that in itself is greatly facilitative already. The second thing is that we do have a strong culture of cooperating across different ministries. In the digital domain, you absolutely have to, because it's not just my ministry that is interested in seeing applications being developed and percolate throughout our system. If you are the Ministry of Transport, you'd be very interested how artificial intelligence, machine learning can be applied to the rail system to help it to advance from corrective maintenance where you go in and maintain equipment after they've broken down to preventive maintenance, which is still costly because you can't go around maintaining everything preventatively. So how do you prioritize? If you use machine learning to prioritize and move more effectively into predictive maintenance, then potentially you can have a more reliable rail system without it costing a lot more. So Ministry of Transport would have this set of considerations and they have to be willing to support innovations in their particular sector. In healthcare, there would be equally a different set of considerations. How can machine learning, how can AI algorithms be applied to help physicians, not to overtake physicians? I don't think physicians can be overtaken so easily, not at all for the imaginable future. But can it help them with diagnosis? Can it help them with treatment plans? What constitutes an optimized treatment plan that would take into consideration the patient's whole set of health indicators? And how does a physician look at all these inputs and still apply judgment? Those are the areas that we would be very interested in as MDDI, but equally, I think, my colleagues in the Ministry of Health. So the way in which we organize ourselves must allow for ownership to also be taken by our colleagues, that they want to push it forward. We keep ourselves relatively lean. At the broad level, we may say there's a group of colleagues who looked at digital economy, another group that looks at digital society, another group looks at digital government. But actually, there are many occasions where you have to be cross-disciplinary. Even digital government, the more you digitalize your service delivery to citizens, the more you have to think about the security architecture, the more you have to think about whether this delivery mechanism is resilient. And you can't do it in isolation. You have to then say, if the standards that we set for ourselves are totally dislocated with what the industry does, how hyperscalers go about architecting their security, then the two are not interoperable. So a degree of flexibility, a way of allowing people to take ownership of the areas that come within their charge, and very importantly, constantly building bridges, and also encouraging a culture of not saying that, here's where my job stops. In a field that is, as you say, developing as quickly as it does, you can't rigidly say that, beyond this, not my problem. It is your problem until you find somebody else to take care of it.Swyx [00:27:08]: The thing you raised about healthcare is something that a lot of people here are interested in. If someone, let's say a foreign startup or company, or someone who is a Singaporean founder wants to do this in the healthcare system, what should they do? Who do they reach out to? 
It often seems impenetrable, but I feel like we want to say Singapore is open for business, but where do they go?Josephine [00:27:30]: Well, the good thing about Singapore is that it's not that difficult eventually to reach the right person. But we can also understand that to someone who is less familiar with Singapore, you need an entry point. And fortunately, that entry point has been very well served by the Economic Development Board. The Economic Development Board has got colleagues who are based in, I believe, more than 40 locations. And they serve as a very useful initial touch point. And then they might provide advice as to who do you link up with in Singapore. And it doesn't take more than a few clicks, in a way, to get to the right person.Swyx [00:28:09]: I will say I've been dealing with EDB a little bit from my conference, and they've been extremely responsive and it's been nice to see, because I never get to see this out of government, nice to see that as someone that wants to bring a foreign business into Singapore, they're kind of rolling out the welcome mat.Josephine [00:28:24]: But we also recognise that in newer areas, there could be a question of, oh, okay, this is something unfamiliar. The way in which we go about it is to say that, okay, even if there is no particular group or entity that champions a topic, we don't have to immediately turn away that opportunity. There must be a way for us to connect to the right group of people. So that tends to be the approach that we take.Swyx [00:28:52]: There's a bit of tension. The external perception of Singapore, people are very influenced by still the Michael Fay incident of like 30 years ago. And they feel us as conservative. And I feel like within Singapore, we know what the OB markers are, quote unquote, and then we can live within that. And it's actually, you can have a lot of experimentation within that. In fact, I think a lot of Singapore's success in finance has been due to a liberal acceptance of what we can do. I don't have a point apart from which to say, I hope that people who are looking to enter Singapore, don't have that preconception that we are hard to deal with because we're very eager, I think, is my perception.Josephine [00:29:29]: You need to hop on a plane and get to Singapore, and then we are happy to show them around.Swyx [00:29:34]: I'll take this chance to mention that, so next year, I kind of have been pitching as the Olympics of Singapore year, in the sense that ICLR, one of the big machine learning conferences is coming. I think one of your agencies had a part to do with that, and I'm bringing my own conference as well to host alongside. Excellent.Josephine [00:29:50]: So you're hosting a conference on AI engineers? Yes. Fantastic. You'll be very welcome. Oh, yeah. Thanks.Swyx [00:29:56]: I hope so. Well, you can't deny me entry.Josephine [00:29:58]: Should we have reason to? No, no, no.Swyx [00:30:02]: My general hope is that when conferences like ICLR happen in Singapore, that a lot of AI creators will be coming to Singapore for the first time, and they'll be able to see the kind of work that's being done. Yes. And that will be on the research side. And I hope that the engineering side grows as well. Yeah.
We can talk about the talent side if you want.Josephine [00:30:18]: Well, it's quite interesting for me because I was listening to your podcast explaining the different dimensions of what an AI engineer does, and maybe we haven't called them AI engineers just yet, but we are seeing very healthy interest amongst people in companies that take an enthusiastic approach to try and see how AI can be helpful to their business. They seem to me to fit the bill. They seem to me already, whether they recognize it or not, to be the kind of AI engineers that you have in mind, meaning that they may not have done a PhD, they may not have gotten their degrees in computer science, they may not have themselves used NLP. They may not be steep in this area, but they are acquiring the skills very quickly. They are pivoting. They have the domain knowledge.Swyx [00:31:11]: Correct. It's not even about the pivoting. They might just train from the start, but the point is that they can take a foundation model that is capable of anything and actually fashion it into a useful product at the end of it. Yes. Right? Which is what we all want. Everybody downstairs wants that. Everybody here wants that. They want useful products, not just general capable models. I see the job title. There are some people walking around with their lanyards today, which is kind of cool. I think you have a lot of terms, which are AI creators, AI practitioners. I want to call out that there was this interesting goal to increase the triple the number of AI practitioners, which is part of the national AI strategy from 5,000 to 15,000. But people don't walk around with the title AI practitioners.Josephine [00:31:49]: Absolutely not.Swyx [00:31:50]: So I'm like, no, you have to focus on job title because job titles get people jobs. Yeah.Josephine [00:31:55]: Fair enough.Swyx [00:31:56]: It is just shorthand for companies to hire and it's a shorthand for people to skill up in whatever they need in order to get those jobs. I'm a very practical person. I think many Singaporeans are, and that's kind of my pitch on the AI engineer side.Josephine [00:32:10]: Thank you for that suggestion. We'll be thinking about how we also help Singaporeans understand the opportunities to be AI engineers, how they can get into it.Swyx [00:32:21]: A lot of governments are trying to do this, right? Like train their citizens and offer opportunities. I have not been in the Singapore workforce my adult career, so I don't really know what's available apart from SkillsFuture. I think that there are a lot of people wanting help and they go for courses, they get certificates. I don't know how we get them over the hump of going into industry and being successful engineers and I fear that we're going to create a whole bunch of certificates that don't mean anything. I don't know if you have any thoughts or responses on that.Josephine [00:32:53]: This idea that you don't want to over-rely on qualifications and credentials is also something that has been recognised in Singapore for some years now. That even includes your academic qualifications. Every now and then you do hear people decide that that's not the path that they're going to take and they're going to experiment and they're going to try different ways. Entrepreneurship could be one of it. For the broad workforce, what we have discovered is that the signal from the employer is usually the most important. As members of the workforce, they are very responsive to what employers are telling them. 
In the organisational context, like in the case of Grab, Alessio was talking about them shutting down completely for one week so that everyone can pick up generative AI skills. That sends a very strong signal. So quite a lot of the government funding will go to the company, to say that if this is an initiative you want to undertake, we recognise that it does take up some of your company's resources and we are willing to help with it. These are what we call company-led training programmes. But not everyone works for a company that is progressive. If the company is not ready to introduce an organisation-wide training initiative, then what does an individual do? So we have an alternative to offer. What we've done is to work with knowledgeable industry practitioners to identify, for specific sectors, the kinds of technology that will disrupt jobs within the next three to five years. We're not choosing to look at a very long horizon, because no one really knows what the future of work will be like in 15 or 35 years, except in very broad terms. You can say in very broad terms that you are going to have shorter learning cycles and that skills will atrophy at a much quicker rate. Those broad things we can say. But specifically, the job that I'm doing today, the tasks that I have to perform today, how will I do them differently? I think in three to five years you can say, and you can also be quite specific. If you're in logistics, what kinds of technology will change the way you work? Robotics will be one of them. Robotics isn't as likely to change jobs in financial services, but AI and machine learning will. So if you identify the timeframe and you identify the specific technologies, then you go to a specific job role and say, here's what you're doing today and here's what you're going to be doing in this new timeframe. Then you have a chance to allow individuals to take ownership of their learning and ask, how do I plug that gap? One of the examples I like to give is the accounting profession, where a lot of the routine work will be replaceable. A lot of the tasks that are currently done by individuals can be done with a good model backing you. Then what happens to the individual? They have to be able to use the model. They have to be able to use the AI tools, and then they will have to pivot to doing other things. For example, there will still be a great shortage of people who are able to do forensics. Say a financial crime has taken place, and within an organisation there was a discovery of fraud. How did this come about? That forensics work still needs an application of human understanding of the problem. Now, one of the things we found is that a person with audit experience is actually quite suitable to do digital forensics, because of their experience in audit. So then how do we help a person like that pivot? It's good if their employer is interested in investing in their training, but we would also like to encourage individuals to refer to what we call jobs transformation maps to plan their own career trajectory. That's exactly what we have done. I think we have definitely more than a dozen such jobs transformation maps available, and they cut across a variety of sectors.

Swyx [00:37:05]: So it's like open source career change programmes. Exactly.

Josephine [00:37:08]: I think you put it better than I did, Sean.

Swyx [00:37:11]: You can count on me for marketing.

Josephine [00:37:13]: Yeah.
So actually, one day, somebody is going to feed this into a model.

Swyx [00:37:17]: Yeah, I was thinking exactly that.

Josephine [00:37:19]: Yeah, they have to. Actually, if they just use RAG, it wouldn't be too difficult, right? Because those documents, added to a database for the purposes of RAG, will still all fit into the window. It's going to be possible.

Swyx [00:37:32]: This is a planning task. That is the talk of the town this week, because of OpenAI's o1 model: the next frontier after RAG is planning and reasoning. So the steps need to make sense, and that is not typically a part of RAG. RAG is more recall of facts, and this is much more about planning, a sequence that makes sense to get to a destination. Which could be really interesting. I would love the auditors to spell out their reasoning traces so that the language model guys can go and train on them.

Josephine [00:38:04]: The planning part, I was trying to do this a couple of years ago. That was when I was still in the manpower ministry. We were talking to, in fact, some recruitment firms in the US. And it's exactly as you described. It's a planning process. To pivot from one career to the next is very often not a single step. There might be a path for you to take there. And if you were able to research the whole database of people's career paths, then potentially, for every person who shows up and asks the question, you can use this database to map a new career path.

Swyx [00:38:44]: I'm very open about my own career transition from finance to tech. That's why I brought Quincy Larson here to RAISE, because he taught me to code. And I think he can teach Singapore to code. Wow, why not?

Josephine [00:38:55]: If they want to. Many do. Yeah, many do.

Swyx [00:38:58]: Many do.

Josephine [00:38:59]: So they will be complementary. There is the planning aspect of it. But if you wanted to use RAG, it does not have individual personalised career paths to draw on. That one has got a frame, a proposal of how you could go about it. It could tell you, maybe from A, you could get to B. Whereas what you're talking about with planning is, well, here's how someone else has gotten from A to B by going through C, D, E in between. So they're complementary things.

Swyx [00:39:33]: You and I talked a little bit this morning about winning the 30-year war, right? A lot of the plans are very short term, very like, how can we get it now? How can we, like, we got OpenAI to open an office here, great, let's go and get Anthropic, Google DeepMind, all these guys, the AI creators, to move to Singapore. Hopefully we can get there, maybe not. Maybe, maybe not, right? It's hard to tell. The 30-year war, in my mind, is the kind of scale of operation that we did that leads me to speak English today. We as a government decided, strategically, English is an important thing, we'll teach it in schools, we'll adopt it as the language of business. And you and I discussed, is there something like that for code? Is it at that level? Is it time for the kind of shift that we've done for English, for Mandarin? Is the third one that we speak Python as a second language? And I want to just get your reactions to this crazy idea.

Josephine [00:40:19]: This may not be so crazy, the idea that you need to acquire literacy in a particular field.
I mean, some years ago, we decided that computer literacy was important for everyone to have and put in place quite a lot of programmes to enable people at various stages of learning, including those who are already adult learners, to try and acquire these kinds of skills. So, you know, AI literacy is not a far-fetched idea. Is it all going to be coding? Perhaps for some people, this type of skill will be very relevant. Is it necessary for everyone? On that, I think the jury is still out. I don't think there is a clear conclusion. We've discussed this also with colleagues from around the world who are interested in trying to improve educational outcomes. These are professional educators who are very interested in curriculum. They're interested in helping children become more effective in the future. And as far as we are able to see, there is no real landing point yet. Does everyone need to learn coding? Even for some of the participants at RAISE today, they did not necessarily start with a technical background. Some of them came into it quite late. This is not to say that we are completely closed to the idea. I think it is something that we will continue to investigate. And the good thing about Singapore is that if and when we come to the conclusion that it has to become either a third language for everyone or as widespread as mathematics or some other skillset, digital skills, or rather reading skills, then maybe it's something that we have to think about introducing on a wider scale.

Alessio [00:42:17]: In July, we were in Singapore. We hosted the Sovereign AI Summit. We gave a presentation to a lot of the leaders from Temasek, GSE, EDBI about some of the stuff we've seen in Silicon Valley and how different countries are building out AI. Singapore was 15% of NVIDIA's revenue in Q3 of 2024. So you have a big investment in sovereign data infrastructure and the power grid and all the build-outs there. Malaysia has been a very active space for that too. How do you think about the importance of owning the infrastructure and understanding where the models are run, both from the autonomous workforce perspective, as you enable people to use this, but also, you mentioned the elections. If you have a model that is being used to generate election-related content, you want to see where it runs, whether or not it's running in a safe environment. And obviously, there's more on the geopolitical side that we will not touch on. But why was it so important for Singapore to do this so early, to make such a big investment? And how do you think about, especially, the Saudi, Sino, Asian, not bloc, but coalition? I was at an office in Singapore, and you can see Indonesia from one window, you can see Malaysia from another window. So everything there is pretty interconnected.

Josephine [00:43:28]: There seem to be a couple of strands in your question. There was a strand on digital infrastructure, and then I believe there was also a strand on digital governance. How do you make sure that the environment continues to be supportive of innovation activities, but also that you manage the potential harms?

Swyx [00:43:48]: I think there's a key term of sovereign AI as well that's kind of going around. I don't know what level this is at.

Josephine [00:43:52]: What did you have in mind?

Alessio [00:43:54]: Especially as you think about deploying some of these technologies and using them, you could deploy them in any data center in the world, in theory.
But as they become a bigger part of your government, they become a bigger part of the infrastructure that the country runs on, and maybe bringing them closer to you is more important. You're one of the most advanced countries in doing that. So I'm curious to hear what the planning and the decision-making were going into it, the sense that this is something important for us to do today versus waiting until later. We want to touch on the elections thing that you also mentioned, but that's kind of a separate topic.

Swyx [00:44:29]: He's squeezing two questions into one.

Josephine [00:44:32]: Right. Alessio, a couple of years ago, we articulated for the government a cloud-first strategy, which means that we accept that there are benefits to putting some of our workloads on the cloud. For one thing, it means that you don't have to have all the capacity available to you on a dedicated basis all the time. We acknowledge the need for flexibility. We acknowledge the need to be able to expand more quickly when the workload needs increase. But when we say a cloud-first strategy, it also means that there will be certain things that are perhaps not suitable to put on the cloud. And for those, you need a different set of infrastructure to support them. So we have a hybrid approach where some of the workloads, even for government, can go to the cloud, and some of the workloads have to remain on-prem. I think that is a question of the mix. To the extent that you are able to identify the systems that are suitable to go to the cloud, the need to have workloads run on your on-prem systems becomes more circumscribed as a result. And potentially, you can devote better resources to safeguarding this smaller bucket rather than trying to spread your resources to protect the whole, because you are also relying on the security architecture of cloud service providers. So this hybrid approach, I think, has defined how we think about government workloads. In some sense, how we will think about AI workloads is not going to be entirely different. That is looking at the question from the government standpoint. But more broadly, if you think about Singapore as a whole, equally, not all the AI workloads can be hosted in Singapore. The analogy I like to make sometimes is with manufacturing: some of the earlier activities that were carried out in Singapore at some point became not feasible to continue, and then they had to be redistributed elsewhere. You're always going to be part of this supply chain. There is a global supply chain. There is a regional supply chain. And if everyone occupies a point in that supply chain that is optimal for their own circumstances, that plays to their advantage, then in fact the whole system gains. That's also how we will think of it. Not all the AI workloads, no matter how much we expand our data center capacity, will be possible to host. The only way we could host all the AI workloads is if we were totally unambitious, with so little AI workload that you could host everything in Singapore. That has to be the case, right? I mean, if there are more AI workloads, they have to be distributed elsewhere. Does all of it require the very tight latency margins such that you absolutely have to have it in Singapore? Some of it actually can be distributed; we'll have to see. But a reasonable guess would be that there is always going to be scope for redistribution. And in that sense, we look at the whole development in our region in a positive way.
There is just more scope to be able to host these activities.

Swyx [00:47:44]: For Southeast Asia?

Josephine [00:47:46]: For Southeast Asia, or it could be elsewhere in the world. And it's generally a helpful thing to happen. Keep in mind also that when you look at data center capacity in Singapore, relative to our GDP, relative to our population, it's already one of the densest in the world. That doesn't mean that we stop expanding the capacity. We are still trying to open up headroom. And that means greener data centers. There are really two main ways of making greener data centers a reality: one is to use less energy, and the other is to use greener energy. We are pursuing activities on both fronts.

Alessio [00:48:22]: I think one of the ideas in the sovereign AI theme is the government also becoming an intelligence provider. So if you think about the accounting work that you mentioned, some of these AI models can do some of that work. In the future, do you see the government being able to offer AI accountants as a service on Singaporean infrastructure? I think that's one of the themes that are very new. Most countries have shrinking populations and declining workforces, so there needs to be a way to close the gap for productivity growth. And I think governments owning some of this infrastructure for workloads and then re-offering it to local enterprises and small businesses will be one of the drivers of closing that gap. So yeah, I was just curious to get your thoughts. But it seems like you're already thinking about how to scale versus what to put outside of the country.

Josephine [00:49:12]: But we were. We were thinking about access for startups. We were concerned about access by the research community. So we did set aside, I think, a reasonable budget in Singapore to make compute capacity available for these two groups in particular. What we are seeing is a lot of interest on the part of private providers. Some are hyperscalers, but it's not confined to hyperscalers. There are also data center operators that are offering to provide compute as a service. So they would be interested in linking up with entities that have the demand. We'll monitor the situation. In some sense, government ought to complement what is available in the private sector. It's not always the case that the government has to step in. So we'll look at where the needs are. Yeah.

Swyx [00:50:04]: You told me that this is a recent change in the way the government works with the private sector.

Josephine [00:50:09]: Certainly. We were talking specifically about training. We said that with adult education in particular, it's very often the case that training intermediaries in the private sector are closer to the needs of industry. They're more familiar with what the employers want. The government should not assume that it needs to be the sole provider. So yes, our institutes of higher learning, meaning our polytechnics and our universities, also run programmes that are helpful to industry, but they're not the only ones. So it would have to depend on the situation, and who is in a better position to fulfil those requirements. Yeah, excellent.

Swyx [00:50:48]: We do have to wrap up for your other events going on. There are a lot of programmes that the Singapore government, and GovTech in particular, runs to make use of AI within the government to serve citizens and for internal use.
I'll put that in the show notes for readers and listeners.

Josephine [00:51:02]: Sure.

Swyx [00:51:02]: But I was wondering if you personally have a favourite AI use case that has inspired you or maybe affected your life or your kids' lives in some way.

Josephine [00:51:11]: That's a really good question. I would say I'm more proud of the fact that my colleagues are so enthusiastic. I'm not sure whether you've heard of it. Internally, we have something called AIBot. Yes.

Swyx [00:51:21]: Your staff actually said it to me like three times: AIBot, AIBot, AIBot.

Josephine [00:51:24]: Oh, okay.

Swyx [00:51:25]: I was like, what is this AIBot? I've never heard of it. But apparently, it's like the RAG system for the Singapore government. Yeah.

Josephine [00:51:30]: What happens is that we're encouraging our colleagues to experiment. They have access to internal memos in each ministry or agency that are a treasure trove of how the agency has thought about a problem. So for example, if you're the Inland Revenue and somebody comes to you with an appeal for a tax case, well, it has been decided on before, many times over. But a newer colleague may not know what the decision was to begin with. Now they can input, through a RAG system, all the stuff that has been done in the past, and it can help the newer colleague figure out the answer much faster. It doesn't mean that there's no longer a need to pause and understand, okay, why is it done this way? To your point earlier, the reasoning part of it also has to come to the fore. That's potentially one next step that we can take. But at least there are many bots being developed now that are helping lots of agencies. It could be the Inland Revenue, as I mentioned earlier. It could be the agency that looks after our social security, which has a certain degree of complexity, such that if you simply did a search, or if you relied on our previous assistant, which was not so smart, if I could put it that way, it gave a standard answer. It wasn't really able to understand your question. It was frustrating when, after asking A, you say, okay, then how about B? And then how about C? It wasn't able to take you to the next level. It just kept spewing out the same answer. So I think with the AI bots that we've created, the ability to give a more intelligent answer to the question has improved a great deal. But it's still early days yet. They represent the kind of advancements that we'd like to see our colleagues make more of.

Swyx [00:53:21]: Jensen Huang calls this preservation of institutional knowledge. You can actually transfer knowledge much more easily. And I'm also very positive on the impact of this for an aging population. We have one of the lowest birth rates in the world, and making our systems, our government systems, smarter for them is the most motivating thing that I, as an engineer, would work on.

Josephine [00:53:37]: Great.

Swyx [00:53:38]: Yeah, I'm very excited about that. Is there anything we should ask you, like open-ended?

Josephine [00:53:43]: Unless you had another question that we didn't really finish.

Alessio [00:53:47]: Yeah, I think just the elections piece. Singapore has elections coming up.

Swyx [00:53:52]: How worried are you? How worried are you about AI? It's a very topical thing for the US as well.

Josephine [00:53:58]: Well, we have seen it show up elsewhere. It's not only in the US. There have been several other elections.
I think in Slovakia, for example, there was material, there was content that was put out that eventually turned out to be false. And it was very damaging to the person being portrayed in that content. So the way we think about it is that political discourse has to be built on a foundation of facts. Otherwise, it's very difficult to have honest discourse. You can be critical of each other. It doesn't mean that I have to agree with your opinions. It doesn't mean that only what you say or what somebody else says is acceptable. But the discourse has to be based on facts. So the troubling point about AI-generated content or other synthetic material is that it no longer contains facts. It's made up. That in itself is problematic. If a person is depicted in a realistic manner saying something that he did not say, or doing something that he did not do, that's very confusing for people who want to participate in the discourse. In an election, it could also affect people favourably or in a prejudicial manner, and neither is right. So we had to take a decision that when it comes to an election, we have to decide on the basis of what actually happened, what was actually said. We may not like what was said, but that was what was actually said. You can't create something and override it, as it were. So that was where we were coming from. It is, in a way, a very specific set of requirements that we are putting in place, which is that in an election setting, we should only be shown saying what we actually said, or doing what we actually did. Anything else would be an assault on factual accuracy, and that should not become the norm in our elections. People should be able to trust what was said and what they are seeing. So that's where it's coming from.

Swyx [00:56:13]: Thank you so much for your time. It's extremely generous of you, and it's an honour to have a minister as a listener of our little thing, but hopefully it's useful to you as well. If you're interested in anything, let us know.

Josephine [00:56:21]: I hope your AI engineer conference in Singapore is a great success.

Swyx [00:56:26]: Yeah, well, you can help us. Okay.

Get full access to Latent Space at www.latent.space/subscribe
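The conversation keeps circling back to RAG, from feeding jobs transformation maps into a model (~37:20) to the government's internal AIBot (~51:20), so here is a minimal sketch of the retrieve-then-answer pattern being described. Everything in it is an assumption for illustration: the toy corpus, the bag-of-words similarity, and the `call_llm` stub are placeholders, not the actual AIBot stack or any real government system.

```python
# Minimal retrieve-then-answer (RAG) sketch. Illustrative only:
# the corpus, scoring, and call_llm stub are assumptions, not a real deployment.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count. A real system would use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical corpus: past memos and jobs transformation map excerpts, pre-chunked.
corpus = [
    "Tax appeal precedent: similar cases were decided by reviewing contribution history and prior rulings.",
    "Jobs transformation map, accounting: routine reconciliation shifts toward review and digital forensics.",
    "Social security query handling: eligibility depends on contribution records and household circumstances.",
]
index = [(doc, embed(doc)) for doc in corpus]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Return the k most similar chunks to the query.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def call_llm(prompt: str) -> str:
    # Placeholder for whatever model a deployment actually uses.
    return f"[model response to a {len(prompt)}-character prompt]"

def answer(question: str) -> str:
    # Ground the model's answer in the retrieved chunks.
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("How has a tax appeal like this been decided before?"))
```

As Swyx notes in the conversation, retrieval like this only recalls relevant passages; sequencing a multi-step career path or a forensic investigation is a planning and reasoning problem layered on top of it.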
    --------  
    56:39
