Jake Levirne On How to Use AI and The Internet the Right Way, How It Affects Our Psyche and How to Use New Tools Ethically.

With AI changing the landscape of software development, expert computer scientist and 20-year veteran product manager Jake Levirne offers a look into how software development will change, on both the product and the human level, in the future. Jake’s special blend of expertise and virtue led to his new venture Ducky.foo, a platform that teaches up-and-coming software developers the trade without internet-troll scorn. Jake also teaches us some basic AI terminology to improve software literacy, part of his larger, honorable mission to keep the bounty of AI innovation from remaining in the hands of the wealthy.

Key Takeaways

  • On the risk of AI programming affecting software quality: "At the end of the day AI is just a tool, right? And so it's how we choose to use it that could have impacts there. If we allow AI usage to be an excuse to move quickly [when developing software], but sloppily, then yeah, we're going to build more and more software that is tenuous and has the potential of falling over."

  • On the greatest risk of AI’s utilization in software development: "Unless we are intentional as an industry, we run the risk of losing the natural apprenticeship that's been in place for a few decades."

  • On AI taking our jobs: "Humans will always seek to work on the things that they're uniquely able to deliver value on, and I think we'll just keep doing that in software development. But I am worried about what the path looks like for people to get to that level of expertise."

  • On his new venture Ducky.foo: "Ducky.foo is the outcome of me wrestling with the disparity that AI assistants are creating in terms of junior developer versus senior developer productivity... Create a human community where more experienced developers can teach and mentor and share their hard-won expertise and real-world knowledge with junior developers, but do it at scale... It's not novel to think of a community of software developers of different experience levels helping each other out. But I think what is novel is that I think we can hyper-scale this type of community by injecting AI into it."

  • "Those with the most going into any kind of innovation tend to be the ones who benefit the most." Ducky.foo is hoping to curb this from taking place with AI innovation, and prevent the fruits of AI innovation from resting in the hands of the wealthy.

  • On the toxicity of Stack Overflow and general trolling: "Here's a place where AI does have a leg up. It's infinitely patient, infinitely pleasant. And so I think that's one thing we can borrow from AI as we're building Ducky.foo."

  • “It's in your interest to love your customers, because customers who feel loved will not complain to their representatives about being abused. And so you are free to maintain and generate monopoly profits, as long as you do that in a way where people don't feel like they're being harmed.”

Transcript

Tom: Can you create an online community that doesn't fall victim to trolls? How is AI being used by software developers, and what do we mean when we talk about AI? These are some of the questions I asked Jake Levirne, computer scientist and 20-year veteran of product management.

Jake has spent his career building products that improve the lives of software engineers, with stints at DigitalOcean and Docker, a product used by tens of millions of engineers. Jake is pursuing a new venture, Ducky.foo. Ducky.foo helps engineers take advantage of the good parts of AI: responsiveness, tirelessness, always being positive, and so on, without the bad parts: bad advice and bad information, the kinds of things that AIs are prone to.

Jake has a big dream: he wants to make AI something that improves the lives of all software developers without making anyone feel marginalized and without harvesting the work of many for the benefit of a very few. We talk about love as a business value, virtuous monopolies, what knowledge is, and lots of our other favorite subjects on this episode of the Fortune's Path podcast.

Tom: Jake, it's great to see you. Thank you so much for coming.

Jake: It's great to chat again, Tom.

Tom: You've had a history in product and in development and working specifically with developers, and you spent some time at Docker. Can you tell me a little bit about your time there at Docker?

Jake: That's exactly right. I've done product management for most of my career, 20-plus years, but maybe 7 or 8 years ago I decided, personally, I just love helping people build and ship things. And there's this amazing group that I've worked with my whole career, software developers, who are building and shipping every day. So that's the group I decided to focus on, and Docker was a step in that journey, actually. It started maybe seven years ago, first at DigitalOcean, a cloud provider that helps individual developers and hobbyists and startups get their software projects off the ground. And then Docker was a continuation of that journey, because Docker is used by literally tens of millions of developers to help build modern software; it makes it really easy to reuse existing components as you're building software.

Tom: That's interesting. A lot of my software experience is in organizations where the software is pretty tangled. This idea of things being separable, of reusable components, feels very born-on-the-web to me. Is that a reasonable way to look at it?

Jake: I think that's right. Usually when people talk about this concept, they use the term cloud native, or web native. And this cloud native concept is two things at once. It's a technology concept about the way that you componentize and reuse in the modern distributed web era. It's also tied to an organizational principle that I think a lot of us are familiar with, around agile software development and this idea of empowered teams that can each move independently and move quickly but still work together. Those two concepts go hand in hand.

Tom: So that idea of empowered teams makes me wonder about AI. I know we want to spend a chunk of our time today talking about AI and how it is being used, and will in the future be used, in the process of software development. I'll start off with something very speculative, just because it's what popped into my head. Could you see a time when AI is almost like an independent team working within a team of individuals?

Jake: I don't think of AI as an independent team. I think of AI more as a prospective team member, or set of team members, on an existing team, right? I think a lot about the way that people and AI can combine to build software. But if I think about AI as its own team, completely decoupled from humans, then yeah, all sorts of bad things start to come to mind.

Tom: Yes. I think other people have gone there. We don't have to go there right now. So you've been doing some studying about how AI is being used today in the development of software, and what have you found so far? What's been interesting about that process?

Jake: I've been digging in, you know, using AI myself for building software and doing DevOps, but also talking to a lot of developers, folks I've worked with throughout my career. And the anecdotal evidence is that people are adopting AI into their software development workflows pretty aggressively and pretty happily. And it's not just my anecdotes. There are some good surveys out there, like the Stack Overflow survey from 2023, which shows something like 70% of all developers responding are either using or planning to use AI tools in their development process over the coming year, and over 75% are favorable or very favorable towards the use of AI tools in development. So in general, you know, maybe we're just somewhere on the curve before the trough of disillusionment, but software developers are pretty optimistic about the use of AI and are adopting it pretty aggressively.

Tom: Are there any concerns that you have about that? I've heard an analogy that if we designed buildings the way we design software, all the buildings around us would be falling down constantly. And software is like a hidden infrastructure that we're all completely dependent upon. Is AI going to make it easier to find where things are broken? And I'm concerned about how it arrives at decisions without necessarily showing its work. Is that a false concern?

Jake: I don't think it's a false concern, but I think there's a lot of nuance, right? At the end of the day AI is just a tool, and so it's how we choose to use it that could have impacts there. Some examples in software development: if we use AI to help us write the code that we would have been writing anyway, in order to build the software systems that we are building out, there's no inherent problem with that, as long as we keep our other best practices and controls in place. A typical software development team has controls and best practices around peer review of any code changes that are happening, and that goes a long way to establish quality and security, and even a culture of engineering that values these things. If we allow AI usage to be an excuse to move quickly, but sloppily, then yeah, we're going to build more and more software that is tenuous and has the potential of falling over.

Tom: Let's talk about the concept of pair programming. My understanding of it is it's basically two developers, one screen, and they're writing the code together. Is that a correct understanding of pair programming?

Jake: That would be my definition of pair programming as well. But I think there's a second, and maybe more prevalent, way that developers collaborate with each other these days, which is through the review of pull requests in GitHub. When one developer makes a code change, they do it as a proposal, right? They don't do it as a final change that moves immediately to production. And it's that proposal review that serves as the standard way development teams collaborate with each other when it comes to coding these days. Many use pair programming as well, though.

Tom: In that proposal phase, where I make a pull request, a proposal to have something pulled in on GitHub, is that a good place for the AI to be one of the first sets of eyes on the code?

Jake: I actually think that's a horrible place for the AI to be a set of eyes on the code.

Tom: It's good to know.

Jake: The AI tools that software developers use today are typically inside that local development workflow, and that is actually much more like the side-by-side, elbow-to-elbow pair programming you were describing. And so, given that, if you're inside your development environment relying on AI to do what is effectively really, really advanced auto-completion of your code for you, I don't think you also want to be relying on AI, and certainly not the exact same AI, at the point of review of that code. I think you want to create some separation of concerns, ideally by getting a set of human eyes on that code first, or at the very least a different AI model looking at it.

Tom: I'm going to use a music analogy. Let's suppose I lay down a click track in order to keep everything on beat over the course of the song. I'm going to take that out before I release the song; the song's not going to have a click track in it. And I'm not necessarily going to use the click track to adjust the tempo within the song to make sure everything's still on beat. Anyway, I'm not sure if I've beaten that analogy to death, but you don't want the original thought partner, the AI, to also be the one who approves the work, because it doesn't spot any of its own mistakes.

Jake: It doesn't. That's right. Your analogy is a really interesting one. People can usually tell when you use a click track. Almost all modern audio production tools have this sort of quantize or snap feature that aligns every note perfectly to the beat, and people notice how staccato that is. So yeah, I don't think the machine that creates these "beats", or, in the case of software, the code, is the best tool to use as the gatekeeper.

Tom: If I'm a young developer, how much can I use it as a tool, as my buddy, to accelerate my development? Do you think it's going to shorten the internship process, the seasoning process, for someone to go from being a junior developer to a senior developer?

Jake: There are a couple of really interesting phenomena at play here. Maybe I'll take us on a little bit of a winding path to answer that question, using a metaphor I've been thinking about. Say we had an amazing robotic chef: somebody went out and developed the best robotic chef the world has ever seen. At first blush, it might seem like it could perform at the level of a sous chef, or, in our world, since we're talking about developers, a junior developer. Great, so I've got my robotic sous chef working amazingly. Does it help a human sous chef actually get any better? I'm not sure it does that much. In fact, I think what ends up happening is that with this robotic sous chef, all of the best chefs in the world can extend their abilities, because maybe they can even manage two or three or four kitchens at once. They've got these robotic chefs that have read every recipe in the world, have seen every cooking show in the world, have perfect recall, never complain, never give lip. There's no drama, right? So from a world-class chef's perspective, it feels great to use this robot: if I forget what some unique recipe calls for in terms of proportions or ratios, the robot sous chef will do its thing. But from the human sous chef's perspective, that's not going to be enough for you to leapfrog into becoming a world-class master chef. Usually, the way you learn to become a world-class master chef is by sous-chefing under the guidance of a master chef. So I actually worry that opportunities will dry up a bit for junior developers if we're not explicit about helping them come along. And look, I think there are a lot of ways that AI does help the learning process. If you're explicit about learning, if you've got a good curriculum, if you're using AI plus other tools like coding bootcamps or online materials and resources, and you're learning concepts as well as just troubleshooting specific problems with AI, I think it can be super beneficial. But unless we're intentional as an industry, we run the risk of losing the natural apprenticeship that's been in place for a few decades now.

Tom: That's the classic thing everybody worries about with AI: AI's coming to take our jobs. I don't hear you saying that's the case, unless we want it to. Unless we say, I want all sous chefs in the world to be the same sous chef, essentially, and I have no desire for them to be anything other than what exists within that sous chef box.

Jake: In this metaphor, and we'll just keep extending it till it breaks, I'm not worried about AI and the robots coming to take our jobs, because our robotic sous chef can't smell and can't taste. It might understand how to cook theoretically, "understand" as a stretch of the word, but ask it to spice a dish appropriately and it's just going to pull spices based on probabilities, not based on real taste. So no, I'm not worried that ultimately we're going to lose human jobs. Humans will always seek to work on the things that they're uniquely able to deliver value on, and I think we'll just keep doing that in software development. But I am worried about what the path looks like for people to get to that level of expertise.

Tom: Interesting. Let's take a second, if you don't mind, to back up and give an AI 101. I think AI is a term that gets bandied about and is poorly understood, typically poorly defined.

Someone I respect says most of what we call AI is really machine learning, and that there is a difference between these things. Do you mind giving me a little bit of that 101, to make sure I'm not walking around with some misconceptions?

Jake: AI, or artificial intelligence, I actually think of as the umbrella term, and that umbrella term covers lots of different forms of trying to get machines to behave intelligently. One path of inquiry and study, going back decades to the '60s and earlier, has been through symbolic AI and knowledge models: intentionally building out the rules and connections and structure of the concepts that we interact with on a daily basis. Another path has been the more mathematical, mechanistic, and probabilistic models, which is the machine learning based approach, and we've seen pendulum swings between those two approaches. But yes, today most of what we talk about when we talk about AI is machine learning, the more mathematical approaches using neural nets. And what this means, ultimately, is that we get some interesting capabilities and effects that maybe weren't all that predictable, so we're learning what we mean by AI, and what we mean by generative AI, as we use it. Unlike models where somebody trained the knowledge very directly by hand, where we can understand the decision-making flow, with the AI tools that we use today we don't always understand that decision-making flow. And that creates some interesting properties.

Tom: So let me make sure I understand. The kind of old-school AI idea was: we're going to develop these representations of knowledge that have a predictable path through them, because it's a path that we've defined. That was old-school AI. And the new AI, the probabilistic, mathematically driven AI, is: well, we don't really know what the outcome is going to be, because the machine, as you say, makes probabilistic judgments in order to produce its output. In the old school, you could almost say the machine had knowledge. It was the knowledge that we gave it, and it had a specific structure. It wasn't sentient, it didn't understand, it didn't know, but it had knowledge. And in the new methodology, it doesn't know, and it actually has no knowledge. All it has is a gigantic training corpus: a bunch of, like you say, every recipe ever written, every cooking show ever recorded. And then it just looks at what happened before and what happened after, and it does that over and over again. It comes up with probabilities, uses math to sequence something, then gives it to us and says, do you like it? And we either go, yeah, I like it, or not. But it has absolutely no idea what it just did. There's no cognition, there's no meaning, there's no understanding. It's simply executing a bunch of equations, sequencing something, and then giving that back.

Jake: That's exactly what it is doing. And I would agree: in both cases I think it's a stretch to say that there's any real knowledge. There's certainly no cognition. But in machine learning based approaches, and especially what we are all mostly talking about today, generative AI, it is just a big probabilistic model. There's a lot of math under the covers, and when we prompt the AI, it just walks a tree and tries to guess what the best next word to output is. Maybe back to a little bit of terminology: with generative AI, a tool like ChatGPT, even the terminology hints at this. You write a prompt, right? The prompt is the beginning of the response that you're going to get back. The response you get back is called a completion, because essentially it's just completing the word sequence, the train of thought, from the words that you first put in. So this is exactly what's happening.
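
A minimal sketch of the prompt-and-completion loop Jake describes, using an invented next-word probability table in place of a trained model (every word and number here is made up for illustration, not taken from the episode):

```python
import random

# Toy stand-in for a trained model: for each word, the probabilities of
# the words that might follow it. A real model learns these numbers
# from its training corpus; these are invented for the example.
NEXT_WORD_PROBS = {
    "I":    {"like": 0.6, "ate": 0.4},
    "like": {"apples": 0.5, "fruit": 0.3, "grapes": 0.2},
    "ate":  {"apples": 0.7, "grapes": 0.3},
}

def complete(prompt: str, max_words: int = 10) -> str:
    """Extend the prompt one word at a time by sampling a likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:  # no known continuation, so the completion ends
            break
        words.append(random.choices(list(choices),
                                    weights=list(choices.values()))[0])
    return " ".join(words)

print(complete("I"))  # e.g. "I like apples": the output literally completes the prompt
```

The point of the sketch is the shape of the loop: the "completion" is nothing more than the prompt, repeatedly extended by a statistically likely next word.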

Tom: Is writing computer code a really good use case for something that is essentially making determinations based on probability about what comes next? I can't write code, so I don't know if code is more deterministic than, say, deciding between "fruit" and "apple." If I'm writing a sentence in English, I think about the meaning I'm trying to create: do I want to say fruit, or do I want to say apple? There's a fairly wide number of words that could complete that sentence: fruit, apple, pear, grape, vegetable. You get where I'm going; it could be all kinds of things. Is computer code a lot more deterministic, where really there are only two choices?

Jake: It is a lot more deterministic. So to answer your question, yes, I think writing computer code is probably one of the best use cases for generative AI in terms of its ability to do a good job at it. In part that's because computer languages are a much smaller universe than human languages, so its likelihood of getting the right next word, the right next token, in a computer program is much higher. Having said that, computer programs are different than human language in that if they're even slightly off, the computer can't understand them, whereas we as human readers can infer. So it's a mixed bag, and some of the most interesting cases are when the two get combined. It's really interesting when you're using a development environment that's got AI in it, like GitHub Copilot: not only will it complete the code for you, it'll complete your comments for you, the English words too. And you get this really interesting thing: it'll see that you've just created an array of, like, banana, pear, and grape, and it'll guess what the next three fruits you meant were. That combination is kind of interesting. And then you find yourself saying yes and tab-completing too quickly, and you're like, no, no, no, that wasn't my thought. So that's the other interesting thing there. But yeah, it's a good use case. Having said that, it's nowhere near perfect. In fact, I was just looking at a Purdue University research study from May showing that 52% of programming answers generated by ChatGPT are incorrect. So it's batting less than 500. And the study takes a pretty strong stance on correctness. We were just talking about how code is either right or wrong, so if there's any flaw in the code, they flagged it as wrong. Maybe in that sense it's kind of amazing that it's right 48% of the time. The interesting thing about the other 52%, though, is that it's not completely wrong; it's often wrong in some knowable and understandable way, if you've got the expertise. So back to our metaphor: you're the master chef, and you ask your robotic sous chef to hand you the mustard. It hands you the brown mustard when you actually need the Dijon mustard. You can correct it. It's not an end-of-the-world thing; it's actually really easy to correct. And by analogy, when you've got an experienced software developer completing code using AI and it makes some wrong suggestions, like the wrong set of parameters to pass into a function call, it's okay. A trained human can spot that pretty easily and correct it. So they're still getting a lot of the benefits, as long as they've got that training and expertise already.
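
A hedged illustration (not from the episode) of the "wrong in a knowable way" completions Jake describes; the function and dates are hypothetical, standing in for an assistant suggestion that swaps two arguments:

```python
from datetime import date

def days_between(start: date, end: date) -> int:
    """Number of days from start to end."""
    return (end - start).days

# A plausible-looking AI suggestion with the arguments in the wrong order:
#   days_between(date(2024, 12, 31), date(2024, 1, 1))  # returns -365
# The code runs; it's just subtly wrong. A trained developer spots the
# swap immediately and flips the arguments:
print(days_between(date(2024, 1, 1), date(2024, 12, 31)))  # 365
```

The error is easy to correct if you already have the expertise to notice it, which is exactly the junior-versus-senior gap the conversation turns to next.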

Tom: So that's where Ducky.foo comes in.

Jake: It is.

Tom: Tell me about what Ducky.foo is and what's drawn you to this next adventure.

Jake: Yeah. Ducky.foo is the outcome of me wrestling with a couple of problems. One of the problems is the disparity that I believe AI assistants are creating in terms of junior developer versus senior developer productivity. We saw this McKinsey study from last year where, on many computer programming tasks, senior developers can do them in half the time using AI, while junior developers actually take 7 to 10% longer to do those tasks than they took when they weren't using AI at all. Using AI actually slows them down, because of this phenomenon we were talking about. And so a natural answer here is to do what we've always done and help each other out: create a human community where more experienced developers can teach and mentor and share their hard-won expertise and real-world knowledge with junior developers, but do it at scale. And we've got great communities like this today. There's Stack Overflow, and there's a bunch of subreddits focused on software development and programming. So it's not novel to think of a community of software developers of different experience levels helping each other out. But what is novel is that I think we can hyper-scale this type of community by injecting AI into it. At Ducky.foo, what we're imagining is that a junior developer comes in with a question, a problem, something that they're trying to tackle. They ask that question and get an immediate response back from the AI. So you get that quick hit: maybe this is in the 48% of cases where I can just get the answer that I need. But if it's not, rather than spinning on my own, what I get is a set of eyes from the rest of the software development community looking at my question and at the AI's answer to it, helping point out the places where it got things wrong. Maybe it was just something small and simple, and a knowledgeable human giving me a nudge, giving me a hand, will get me on track much faster than me just struggling in single-player mode with the AI.

Tom: There's a lot that goes through my head in that story. One thing is that this is capitalism playing itself out over and over again, in the sense that those with the most going into any kind of innovation tend to be the ones who benefit the most. Those with the most knowledge about software development going into an AI revolution are the ones who stand to benefit the most from the automation of certain software tasks, rather than AI "democratizing" things so that everybody gets a chance to play software developer. The other thing that comes to mind is: who owns the correction? The junior dev in your story asks the AI for help, it gives some help, and then senior developers look at it and say, "It messed up here, here, and here. Fix that and you'll be fine." Who owns that correction? And is that correction sold to future audiences?

Jake: That's a great question. We'll model on the best software development communities and examples that we've seen, and those are, again, Stack Overflow, or open source projects and communities like the Apache Foundation or the Cloud Native Computing Foundation. Which is to say: the community generated that content, so the community should own that content. It's a flavor of Creative Commons license that we'll use for all of those corrections. The practical implication is that anybody should be able to benefit from all of those corrections. The people I'm most interested in seeing benefit are developers, people who are trying to learn and grow, learning directly from those corrections, but also all of the researchers, software developers, and open source projects that are creating open AI models, like Llama and Mistral and dozens of others, which can then offer great and continually improving AI experiences without a proprietary black box around them.

Tom: Now we're into AI governance, which is a whole topic of itself. I'm not that familiar with the story of OpenAI; I know just a little bit of it. I know it's going through a transformation now from being a not-for-profit to a for-profit. So if I'm a Ducky.foo community member contributing to the intellectual content of Ducky.foo, I might have some concern that somebody else can come into the Ducky.foo community, scrape everything we have, put it into theirs, and then use it for some proprietary purpose. We've seen this happen on the web over and over. Can you tell me, like, don't worry about it, it's not a big deal? Or, you know, here's what we're doing to stop that?

Jake: I think that is what will happen, so rather than trying to build up walls to prevent it, we accept that it's the natural flow. We see it with Stack Overflow content, we see it with Reddit content, and before that we saw it with all the independent blogs people were writing that got indexed by Google. It does happen over and over again, and I think we need to build with that in mind. The important thing, and we see this with the Reddit IPO, for example, is that when you create community, it's more than just the content. It is the set of people and interactions that happen on a daily basis. So I think the only way you can be successful is that it can't just feel like a mechanistic place to get a response and get out quickly. It needs to feel like a real community. And this is one of the areas where Stack Overflow, unfortunately, I think has fallen over. You can watch YouTube videos or read articles about the toxicity on Stack Overflow. If you're a new software developer just trying to get some help with a question you're struggling with, your first experience of asking a question on Stack Overflow will feel like getting wrist-slapped and then poked and then prodded and then beat up and then beat up again, and trolled a little bit in the mix, because you didn't ask your question properly, or because you used some informality like, "Hey, I could really use some help on this." Those types of things get edited right out of questions immediately, and it just makes it a really tough place, especially for new software developers. So here's a place where AI does have a leg up: it's infinitely patient, infinitely pleasant. I think that's one thing we can borrow from AI as we're building Ducky.foo. It's also really interesting to look at Reddit and see the human aspects of the interaction there, with all its silliness and wackiness, and the fact that people can share opinions and it's okay. Again, on Stack Overflow, opinions are considered a negative thing, not a positive thing; they're really just looking for coding facts. With Ducky.foo, like with Reddit, we have the opportunity for people to share real-world experience, to share opinion, to share the human parts of their thinking.

Tom: I love that vision. I worked in a cognitive psychology research center back at the beginning of the internet, and there's a lot of overlap between cognitive psychology, AI, and software development. Back then, Netscape was the browser of the day, if you remember, and there was so much excitement about the free transfer of knowledge. It was an academic phenomenon at the time, and just putting things on simple web pages felt fantastic. And then everything went to shit. It's gotten worse and worse and worse, until we have the catastrophe that is the internet of today. So, I'm being a little cynical, but I have concerns about AI going through the same thing. When generative AI first poked up, it was like, oh, here's this cool thing, and there's an organization that has a community board and is concerned about the ethics of it all. That stuff's getting thrown overboard: no more community board, no more concern, just full steam ahead on monetization. So help me not be so cynical. Why should I not think that this is just going to be one more poison pill?

Jake: It's a good question. We still have pockets of goodness on the internet, and I think it's just a matter of feeding them and caring for them and helping them grow. From my viewpoint, focused on software development and software developer tools, the clearest example is open source. Open source projects and communities do an amazing job of bringing large groups of people together, increasingly diverse people. Nothing's perfect, but it's making strides, bringing increasingly diverse people together across space and also across time: we have projects that change hands and are very long-lived, all in order to build some amazing software. And not amazing software that's tangential to what we do; it's the amazing software that powers everything else we see. You could look into any major website, any major mobile application, any major electronic device, and 99 times out of 100 you will find open source software inside as a major component. So I think we have some examples of community and commercialism coming together and getting it right.

Jake: People contribute to open source for a lot of different reasons, but many of them are what we would consider capitalistic reasons. They want to build their personal brand and reputation, or get a foot in the door: maybe they don't have commercial experience, but through open source projects they're able to gain the experience that they need. Or they just have passion for a given area. I have a friend and former colleague who has worked on text search for decades and just loves it, and he goes from company to company; whoever will pay him to work on open source search, he's happy to be there. So I think there are ways to tap into that capitalistic spirit and get positive outcomes. In this example around open source, what it comes down to is aligning motivations and building a community that has a set of shared purposes and beliefs. Writing those down becomes critical and important.

Tom: Wikipedia is my example of a miracle. Just thinking about Wikipedia almost makes me want to cry that it has survived. It's an amazing invention, just absolutely amazing; literally, I get choked up thinking about what Wikipedia has accomplished. And from an open source standpoint, to your point, it is possible to create positive communities that are able to develop antibodies to protect themselves from trolls. Wikipedia is not perfect, but it does a decent job of being troll-proof. So if Ducky.foo can develop that, can become a community for developers to use AI safely, efficiently, responsibly, in a troll-free environment? That's a heck of an accomplishment.

Jake: That would be a heck of an accomplishment, and that's the goal, right? That's a big hill to climb, but that is the goal. I think the early days matter a lot; early days set course and direction. When you have belief and purpose, that attracts a set of individuals with that same belief and purpose. But over time, yeah, then it just takes consistent vigilance.

Tom: I think, to your point, it's almost the operating principles that you establish at the beginning that determine the success or failure of the project. Religions establish a set of operating principles early in their careers, so to speak, and those get all kinds of interpretations, manipulations, however you want to say it, over the course of their lives as phenomena, but they still hew to some original idea. And if there isn't that original driving idea, they just vanish. I think the operating principles you establish at the beginning of any enterprise are so critical, and then trouble starts when you betray them. Think of Animal Farm: when the animals first liberate themselves from the farmer, they have a set of beliefs that they write down. And then they keep coming back and crossing things out and changing them, until they get down to just one belief at the end: all animals are equal, but some animals are more equal than others. That story is such a fantastic fable of the corruption of a great idea, and I think we see that in business over and over, all the time.

Jake: A negative example is when you look at Google and "don't be evil," right, alongside "organize all the world's information." Those two things hand in hand are amazing, great beliefs, and I think they carried the company a long way. I actually think they'll carry the company long into the future. But you see this eroding every once in a while. I think I saw an article about the treatment of sponsored results and the changes they've made visually to those over time in order to increase revenues, or even just the ranking algorithms and the quality of results. So that's a counterexample. I'll give a positive example of culture and values mattering a lot. At DigitalOcean, one of the core values established from day one, and maintained through the present, is "love is at our core." They actually have a company value around love. And it feels, yeah, it feels...

Tom: Warm.

Jake: And fuzzy and touchy-feely, but it translated into direct outcomes day in and day out. For example, the product documentation team that I managed would make decisions about how to write what they were writing, the style and tone they would use, the topics they would cover, based on wanting to share the love with our developers. When you do that, it just naturally points you towards outcomes that are going to be positive for others. And that went a long way.

Tom: It's funny, I believe that the objective of every business is to establish a monopoly. Competition kills profits; I want to be the sole source for something. And a monopoly can be a force for good. They typically are not, because the power that comes with a monopoly is so corrupting, and without regulation there is nothing to prevent monopolies from becoming abusive, unless they have tremendous foresight. Monopolies come in all sizes and have all kinds of different market powers, but you have to be foresighted enough to understand that government is the only force in society that is a match for the power of monopoly, and that is capable of, I'll say, rescuing the rest of society from its abuses. So to take a case like Google: it's in your interest to love your customers, because customers who feel loved will not complain to their representatives about being abused, and so you are free to maintain and generate monopoly profits, as long as you do that in a way where people don't feel like they're being harmed. Apple's a genius at essentially milking people in a way that they don't feel like they're being screwed. At least as a consumer, and I'm going way upfield here, I don't feel screwed. If I'm a developer, I feel totally screwed; if I'm the maker of Fortnite, I feel like I'm really getting shafted. So I'll try to bring it back to AI. There's this teaching of the model as a community activity, and each little correction is a piece of intellectual property that's being freely shared under this Creative Commons license, one for all and all for one. I don't know what your revenue model is, but subscription revenue is still generated for Ducky.foo, to benefit Ducky.foo shareholders and Ducky.foo investors, etc. Talk to me: is there some concept within Ducky.foo of, like, royalties?

Jake: Not royalties. I'll caveat first by saying we're figuring it all out, but there are two clear streams.

Tom: Not to put you on the spot.

Jake: That's okay. There are two clear streams, right? One is that, when done in the right way and made transparent, I think it's fine to include sponsorship and advertisement in a community. Stack Overflow does it. It's a great way, for example, for companies to advertise open positions and to hire great developers, and a way for cloud providers and developer tools to get their message across. So that's one clear revenue stream. Another is that there are some places where a public community is valuable and appropriate, and other instances where you actually want a private community. Imagine a large software engineering organization, a major bank like a Citibank or a Fidelity, or a government or manufacturing setting, where the information you're using to figure out what to build and how to build it is not, and should not be, publicly available. I think there's opportunity there for a private community version of Ducky.foo as well, and that would have subscription revenue associated with it.

Tom: Of what you describe, the second one to me is much more appealing from the virtue standpoint than the first. I could be wrong, but I feel like Wikipedia would not have been able to exist on anything other than a pure nonprofit donation model.

Jake: Wikipedia took that extreme approach, right? They went with a pure nonprofit foundation model, and they live off of donations. I'm part of the, you know, 1% or 0.5% that makes an annual contribution to them. I think that's a perfectly fine model, and I've actually contemplated, and am still contemplating, a nonprofit or foundation model for Ducky.foo.

Tom: The other model you're talking about, of "I want a wall around my instance of Ducky.foo; I can pull in information from the outside world, but inside we're sharing stuff that I don't want anybody else to see, because they're secrets," that to me makes a ton of sense. And it seems like it would fit even in organizations that aren't vast, anybody who has a concern about security. I could see startups doing the same sort of thing: we only have three developers, but we need a way to collaborate, and we want AI to be part of that collaboration. It's essentially one more, I'm going to say, employee, one more partner, and I need a way to comment on the quality of its work. If I say such-and-such is a mistake, I want it to learn that and understand that, but I don't want to share that with the rest of the world. I want that within my, you know, my little secret tribe.

Jake: The challenge there, of course, is that if everybody worked in private, then there would be no public good from the community. So figuring out that balance: one of the things we've been thinking about is to what extent we can identify secrets, work with secrets, redact secrets, and still get some of the shared value out of communities. And maybe it's a community of communities, right? Maybe it's tribes that work independently when they need to, but by default are also giving back in some way to the public good.

Tom: You're heading straight into the teeth of capitalism, because I feel like so much of value in capitalism is premised on a concept of secret knowledge. Even if you don't have secret knowledge, pretending to have it can be an extremely profitable position to occupy, because others believe you, and you can move sentiment or markets or whatever. You can create the illusion of value in something because you have some secret knowledge that others don't have. And I think what you're saying is: there are no secrets here.

Jake: At first, there will be no secrets. I mean, that is the starting point of a public community. Well, it's interesting that you say that; I'll even caveat myself. In order to get people to adopt it and use it, engaged day in and day out: there are questions that you have that you want an AI answer to and don't want to share with the world. So even though we will share by default, we'll have a way for people to mark their conversation as private if they don't want to share it.

Tom: This brings us back to a good practical question. You now have practical experience collaborating with AI, because I believe you're building Ducky.foo using AI.

Jake: That's right. It's very early days in terms of where we've gotten so far, but my co-founder and sort of part-time CTO, Sean, and I, and you know Sean very well too, just started from a shared document. Ducky.foo at first was just a shared document where I'd ask a question, then go to ChatGPT, ask that same question, and copy and paste the answer into the shared document, and Sean would comment on it. And the questions I was asking were the questions I needed answered to build the first iteration of the site, which I got out. It's really just a community of two, me and Sean, right now. I ask questions in there, he comments back on them, and we're using that to help build out the second iteration of the site.

Tom: I love that. You're discovering how to do it by doing it, which I think is fantastic. So what have you learned in that? How has AI been as a collaborator?

Jake: Yeah, it's funny. It's a journey that sort of follows the, what is it, the technology adoption curve, with the trough of disillusionment. I think we're still flying pretty high. My initial experiences with using AI for software development have been, on the whole, pretty positive, and they resonate with that Stack Overflow survey, the 75-plus percent of people with a positive take on AI. It's helped me build something using a programming language I'd never used before. I had another hobby project that I started in Flutter and Dart; I had never programmed with that framework or language, and I was able to get to a working, running piece of software in a matter of a couple of weeks. So it definitely helps speed things up for me. It's a little boring and predictable, though; that's one of the other things that comes across. You can spot an AI answer from a mile away, and that has actually played into some of the design thinking around Ducky.foo, the reason for a fun and silly name, and for really embracing something that doesn't feel as sterile as the typical AI responses. Some of the other things I've learned: the biggest one was that even as a trained computer scientist, I have a CS degree, though I haven't written code professionally in decades, I do it for fun and for hobby and for Ducky.foo, even with all that, I find myself spiraling when I use AI in certain circumstances. I'll ask a question; a recent one was about how best to structure my git repository for Ducky.foo, because I had some upstream dependencies, other projects I wanted to pull code in from, going back to open source and all the great things we get from having that as a starting point. I wanted to structure my repository in a certain way, and the answers I was getting back from the AI just kept me circling and vacillating back and forth on which approach to take. And literally a two-minute chat or comment exchange with Sean helped me get to a really practical answer for the stage we're at now and the places we want to go, without over-optimizing for the future. That experience of spiraling out of control and then coming back through the help of a friend is what I want to capture and embody.

Tom: I've talked about my cognitive psychology research background. We used to talk about a concept called flexible expertise, and the analogy we used was a sushi chef. There's a lot of knowledge required to become a master sushi chef, and some of them are able to execute any known recipe flawlessly, similar to the way you described the AI sous chef. But they don't invent new sushi; they don't invent new things. That's what flexible expertise is, and what it actually requires is the ability to see the world as a beginner. One of the traps of expert structures is that they confine the way you think; they lock you within a discipline. And AI, probably particularly because it's using probabilities to determine what comes next, is going to be locked within the discipline it's using to generate its probabilities. In other words, outside-the-box thinking is outside of the programming, so to speak. I hate to end on such a horrible analogy, but can the AI see the world as a beginner? In some ways, it almost needs an ignorance button.

Jake: To some extent it can't, right? It has all knowledge. But I guess to some extent it can. One of the most fun things to do when you're using ChatGPT or any other generative AI tool is to play around with the temperature, and you can do this just through the way you prompt. You can ask it to be creative, to be silly, to stretch beyond what would be a typical scenario. Under the covers, what it's doing is widening the paths that it might follow in terms of probabilities. Typically these algorithms will sort of stop at the top n choices, the top 1,000 choices, and you can flex it to instead include the top 10,000 or 20,000 choices for the next word. You do get some really interesting things, and sometimes you get some nonsense. But fundamentally, I don't think it's enough to quite make up for the fact that it is trapped within a box. Literally trapped within a box: it knows everything in the world and can't seem to forget it.
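
A minimal sketch of the mechanism Jake is gesturing at: top-k truncation ("stop at the top n choices") combined with temperature, which flattens or sharpens the probability distribution. The words and scores are invented for illustration:

```python
import math
import random

def sample_next_word(scores: dict[str, float],
                     temperature: float = 1.0,
                     top_k: int = 3) -> str:
    """Pick the next word from raw model scores using top-k plus temperature."""
    # Keep only the k highest-scoring candidates ("stop at the top n choices").
    top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    words = [w for w, _ in top]
    # Higher temperature flattens the distribution, widening the paths the
    # model might follow; lower temperature sharpens it toward the favorite.
    weights = [math.exp(s / temperature) for _, s in top]
    return random.choices(words, weights=weights)[0]

# Invented scores for the word after "I want to say..."
scores = {"apple": 2.0, "pear": 1.5, "grape": 1.0, "submarine": -4.0}
print(sample_next_word(scores, temperature=0.5, top_k=2))  # almost always "apple"
print(sample_next_word(scores, temperature=2.0, top_k=4))  # occasionally "submarine"
```

Raising top_k and the temperature is the "be creative, be silly" knob: the sampler starts reaching words it would normally never pick.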

Tom: That might be one way to sink into... So, Aldous Huxley has a theory of the mind, the inverted funnel. I don't think it's originally his, but he uses it in The Doors of Perception to describe why we see and experience things in new ways when we're on drugs. He says, essentially, our mind is always aware of all that there is in the universe, but we have funneled that down to the teeny amount necessary for survival. What drugs do is make your brain less efficient, you're basically poisoning your brain, and that allows all that stuff that's already out there in the universe to flow in, because your filtering function isn't as good. Could you do something similar with an AI? It's almost like: I want you to ignore certain parts of your corpus for this question, and overemphasize other parts of your corpus for this following question. Could you get it to forget or fixate on parts of its LLM, and would that flavor the outcome in some way?

Jake: That's an interesting question. I think you can. There are all sorts of approaches that go beyond just ask a simple question, get a simple answer. Some of them involve things like mixtures of models, and maybe that's that notion of focusing attention, right? If I can get focused in on one specific topic or one specific task, here I am trying to generate an image, or specifically an image that is going to be used for branding, that type of focus is, I mean, it's a stretch, but maybe akin to meditating on a single object. You start to see or get patterns that are interesting or unique because of that focus.

Tom: I like that idea a lot, because of the perfect attention of a machine. Anyway, this is one of the reasons this is such a fun topic: it's a blend of philosophy and technology and a lot of other things, ethics included, and that makes it particularly interesting. I'll come back to some very practical questions. What are you most excited about with Ducky.foo for the next 30 days? What's the thing you're jazzed about working on over the next 30 days?

Jake: The next 30 days, it's startup mode, right? For any startup, a lot of what it comes down to is proving desirability and viability and feasibility. Does anybody care about the thing, and do they desire to use it? Can we make it a going concern? And can we actually build the thing? Usually with software startups, feasibility is not the hang-up, even though we all, as technologists, often get caught on that one. So it's those first two that we're focused on for the next 30 days. We built something that Sean and I could use, and I got some value out of it; he, as the expert commenting, maybe didn't get much value out of it other than helping a friend. So I think the next 30 days are about proving desirability at scale on the expert side. Can we get the hundreds of expert developers that we know personally, and then the thousands of expert developers that we don't know personally, to contribute on a regular basis? That's what we're going to be proving out over the next 30 days.

Tom: The open source community might be a really good one to reach out to for that, to the points you talked about. That's a passion-motivated, non-capitalist group of people for the most part, people who do this because they feel a sense of responsibility to the community. It's also an underappreciated group, and I have always found that a great way to go to market with something is to become an advocate for someone who feels stepped on. And I have a concern that the open source community is in danger of collapsing, because they see fortunes being made with the work that they've done, and the only benefit they receive is that a handful of people know how important they are and how important their contribution was; they're famous among a very, very select group of people. So maybe there's some sustainability model required, because the dangers of unmaintained open source are vast, and no one is taking on that challenge.

Jake: This problem was very apparent at Docker. We did a lot of things to try to at least expose the issue. But you're right, there are also structural and systemic challenges, when all of the value of the work of the open source community ends up flowing to people who are not members of that community. So yeah, I agree with you. It's critical infrastructure, and it's also constantly at risk of being overlooked or devalued.

Tom: I wonder if there's an opportunity in go-to-market, in proving out that desirability of Ducky.foo, through something with the open source community.

Jake: I think that's right. And yeah, there's probably an interesting intersection of the open source community and especially open source projects around AI, of which there are now a few thousand just from the past two years. So I do think there's a sweet spot there. And then, of course, not to overlook friends and family, right? Sean and I are lucky to have worked with lots and lots of great developers, so we'll call in personal favors early on.

Tom: Jake, it's been a lot of fun. I really appreciate you taking time out to talk to me today. We've got to have you back in a year so I can see how Ducky.foo evolves.

Jake: Sounds great. Thanks, Tom, for having me.

Tom: The Fortune's Path podcast is a production of Fortune's Path. We help SaaS and health tech companies address the root causes that prevent rapid growth. Find your genius with Fortune's Path. Special thanks to Jake Levirne for being our guest. Music and editing of the Fortune's Path podcast are by my son, Ted Noser. Look for the Fortune's Path ebook from Advantage Books on FortunesPath.com. I'm Tom Noser. Thanks for listening, and I hope we meet along Fortune's Path.
