GitHub CEO Exits, Claude 1M Tokens, Gemma 3 & GPT-5 Tweaks
Hi,
Murilo:Hi everyone. Welcome to the Monkey Patching podcast, where we go bananas about all things gen AI, ML, and more. My name is Murilo, joined as always by my friend Bart. Hi, Murilo. Hi, Bart.
Murilo:How are you?
Bart:I'm doing very well. Completely relaxed off of three days of holiday. Three weeks? Three weeks. Sorry.
Murilo:Three weeks. Yes. Yes. In that way. Exactly.
Murilo:He was like, woah. Time. What is time? You know? It's all the same.
Murilo:Any any highlights you wanna share?
Bart:No. It was all good. Good.
Murilo:It was just like
Bart:we traveled around a bit. We went to the South of France, and then we took the ferry and went to the north of Corsica, and also the more mountainous area in the middle, then took the ferry again to Italy, and then to Switzerland for hut trekking. So we enjoyed our time, and we had good weather everywhere.
Murilo:Yeah. Really? Did you catch the heat waves everywhere? Well,
Bart:I guess it's part of the good weather, but nothing too extreme.
Murilo:That's nice. That's nice. Glad you had a good time. Happy to have you back with us. The people missed you.
Murilo:You know?
Bart:Thank you. Happy to be back. Happy to be back.
Murilo:Yes. Yes. So a lot of stuff happened in between?
Bart:A lot of stuff. Yeah.
Murilo:A lot of stuff as always. Right? I feel like if you don't if you blink if you blink twice, it's too late. You know, things pass.
Bart:Yeah. These days in especially in AI, it's if you miss a week Yeah. You miss a lot. Right?
Murilo:Yeah. Exactly. Exactly.
Bart:Like, because it's every time so much news, you get numb to it a bit as well.
Murilo:You get numbed, and it's also, like, on one hand, you feel like you're falling behind, but on the other hand, it's like, I'll catch the next train because I know something else. You know? It's like, it's fine. It's a bit of a weird weird time. Right?
Murilo:But what do we have for today? We have: GitHub CEO Thomas Dohmke is stepping down, and Microsoft is folding GitHub's leadership more tightly into its core AI organization. He'll stick around to ease the shift, writing, and I quote, I'll be staying through the end of 2025 to help guide the transition. So GitHub is popular for the Git repos and stuff.
Murilo:It was a separate organization, and it was acquired by Microsoft?
Bart:It was acquired around, this is testing my memory, around 2018, I want to say. And then Nat Friedman took over as CEO from the founder of GitHub. And then Thomas took over from Nat Friedman, I want to say, in 2021, something like that. Yep.
Bart:And now he's stepping down, and he's gonna look at the startup world. He's getting involved more as a founder in the startup community. As we understand, there's very little information on what exactly he's gonna do. But the other side of the coin is that Microsoft also announced that there will be no new CEO of GitHub, and that GitHub's organization will basically report directly to the CoreAI team within Microsoft. Okay.
Bart:There will not be a single person leading GitHub; the different, quote, unquote, team leads within GitHub will report directly to the Microsoft team leads. So
Murilo:GitHub has been getting more AI stuff. Right? They even released GitHub Spark a while ago, which is kinda like vibe coding. There's been more and more AI features. Right?
Murilo:So I think that's, like, the only reason I can see why they're reporting to the CoreAI team. But I still feel like
Bart:But I looked into it a little bit because I didn't know what the CoreAI team was, and the CoreAI team is AI platform. So Azure AI Foundry is under there, but it's also tools. For example, VS Code is under there. Copilot is under there. GitHub will now also be folded into that organization.
Murilo:And Copilot is the GitHub Copilot, or the Microsoft 365 Copilot, or both?
Bart:I thought it was the GitHub Copilot, but you make me doubt now.
Murilo:Okay.
Bart:So there, I think people are a bit afraid of what this will mean for GitHub. Right? Will it keep its own identity? Will it shift going forward? I think so far, Microsoft has done a very good job at keeping GitHub what it was, basically this community space where everybody around open source decides, like, this is where we want to start from.
Bart:This is what we believe in, what we want to build on. But I think everybody's a bit afraid of what this will mean for the future. Let's see. I think everybody was also afraid when Microsoft acquired it, and that turned out quite okay. There's maybe also something to say for being within the CoreAI team: being closer to the development of these other, closely related tools and platforms might improve GitHub as well?
Bart:Question mark. But, let's see. Time will tell.
Murilo:Time will tell. But it does feel like a big change. No? Like, a whole position is being removed.
Bart:It feels to me a bit like when GitHub got acquired. When GitHub got acquired, everybody had a bit of this fear that it would lose this feeling of being this neutral hub. Like, it very much has this image of being this neutral, open source focused hub for the community. And I think it's somewhat fair to say that that fear wasn't really needed. I think Microsoft did a very good job.
Bart:Now we're maybe in a different situation. Right? Maybe GitHub gets less independence to make its own decisions, to remain this neutral hub, if there is no single person responsible for GitHub as an organization.
Murilo:Yeah. I think GitHub has a, no, Microsoft has had a history with open source that is interesting as well, right? Being very, let's say, not against open source, but it has since changed its image a bit, really doing a lot for open source too. So maybe back then, when Microsoft acquired GitHub, my impression is things could have gotten mixed up.
Murilo:Right? But indeed, realistically, I don't think there will be a big change for us. Maybe for the people within GitHub it's different: now they report to a different person, they align with other teams. Maybe you don't have someone whose job is to really advocate for you.
Murilo:Now you have to, you know, share. But for us, I'm not too concerned right now. Right now. Maybe tomorrow will be a different story. Time will tell.
Murilo:Time will tell indeed. What else do we have?
Bart:FFmpeg 8.0 is picking up a native Whisper filter, bringing built-in speech transcription via OpenAI's model through whisper.cpp. And it's coming soon. And I quote, FFmpeg 8.0 should release within a few weeks, alongside optional GPU acceleration and handy SRT/JSON outputs. That's a whole mouthful. What does it mean?
Bart:Because I'm quite excited about what it means. So as I was saying, you typically use FFmpeg to record stuff, to convert stuff, etcetera, and now it will have native built-in Whisper support. Whisper basically does speech transcription. So it basically understands what is said and turns that into either an SRT file, a subtitle file, or a JSON file. So it becomes very easy,
Bart:for example, when we're recording this podcast, to run the video file through FFmpeg and automatically inject subtitles into it. Yeah. Or just produce a subtitle file that you can then use for other purposes, like making summaries or whatever.
Murilo:Yep. Yeah. And I guess the JSON file is something analogous to the SRT, right, to the subtitles? Like, timestamps:
Murilo:this was said at this time, this was said at that time.
Bart:Yeah. I would assume. Yeah.
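For a concrete picture of what those outputs carry, here's a small sketch that parses an SRT snippet into timestamped segments. The JSON output would presumably carry the same information in structured form; the exact JSON schema FFmpeg will emit isn't documented yet, so this only illustrates the SRT side:

```python
import re

def parse_srt(srt_text):
    """Parse SRT subtitle text into a list of {start, end, text} dicts."""
    segments = []
    # SRT blocks are separated by blank lines: index, timing line, text lines
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue
        start, end = [t.strip() for t in lines[1].split("-->")]
        segments.append({"start": start, "end": end, "text": " ".join(lines[2:])})
    return segments

sample = """1
00:00:00,000 --> 00:00:02,500
Welcome to the monkey patching podcast.

2
00:00:02,500 --> 00:00:05,000
Where we go bananas."""

for seg in parse_srt(sample):
    print(seg["start"], seg["text"])
```

Each segment pairs a start/end timestamp with the text spoken in that window, which is exactly what you'd feed into a summarizer afterwards.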
Murilo:The whisper.cpp model, does it come with FFmpeg, or do you have to bring your own model and it's compatible? How does it work?
Bart:That is a very good question. What I assume, what I hope, actually, is that it's bundled, but I'm not sure what that would mean for the install size.
Murilo:Yeah. That's what I was that's what I was thinking indeed.
Bart:I'm not sure if the licenses for Whisper and FFmpeg are even compatible to do that, but I hope it's very simple for an end user to install. Yeah. Yeah. But I haven't looked into those details, to be honest.
Murilo:I see here, reading: FFmpeg 8.0 can be built with the flag --enable-whisper when the whisper library is present on the system, adding OpenAI Whisper model support. So maybe you need to have it, like, you use the model you have, but when you're building it, it probably adds it to the binary
Bart:So this is when you build the source code yourself, to build a binary yourself. Yep. And what I hope is that, like, I'm a Mac user. If I use Homebrew to install FFmpeg, then it's very easy for me to say
Murilo:Yeah. It's just there.
Bart:With Whisper.
Murilo:Yeah. Yeah. Yeah. Indeed. Indeed.
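As a sketch of what invoking the new filter could look like once 8.0 lands: the snippet below only builds the ffmpeg command line, it doesn't run it, and the whisper filter's option names (model, destination, format) are assumptions based on the announcement, not a confirmed API:

```python
# Build (but don't execute) an ffmpeg invocation using the announced
# whisper audio filter. Filter/option names are assumptions until the
# FFmpeg 8.0 release documents them.

def build_whisper_cmd(video, model_path, out_srt):
    """Return an argv list that would transcribe a video's audio to SRT."""
    whisper_filter = f"whisper=model={model_path}:destination={out_srt}:format=srt"
    # "-f null -" discards the video output; we only want the side-effect
    # of the audio filter writing the subtitle file.
    return ["ffmpeg", "-i", video, "-af", whisper_filter, "-f", "null", "-"]

cmd = build_whisper_cmd("episode.mp4", "ggml-base.en.bin", "episode.srt")
print(" ".join(cmd))
```

The file names (`episode.mp4`, `ggml-base.en.bin`) are placeholders; whisper.cpp models are typically distributed as GGML/GGUF files you download separately.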
Murilo:And the Whisper model, whether it is an open source model or an open weights model, I'm not gonna go into that debate. And for the .cpp, for people that are not aware: basically, one guy decided to rewrite some things in C++ to make it more efficient to run, with a smaller memory footprint, so you can run these models that before you kinda couldn't run on your machine. With whisper.cpp, now you can. So it got very popular.
Murilo:And there's also llama.cpp, which is a sister repo of that. Very cool. Very cool. I feel like there's a lot of things I could use this for, but we already have solutions, like we use Riverside, right, some things that kinda come with it. But I think it's really nice that they're offering this as well.
Murilo:I wonder also how well it does for other languages other than English as well.
Bart:Well, I've tried Whisper actually a few times on the audio from our podcast.
Murilo:Yeah. And
Bart:It's good, but it's not great. But we're also, me especially, maybe you partially, not native speakers. Right?
Murilo:Yeah. We're not.
Bart:That probably also confuses the model sometimes. So when I say it's good but not great: it means it's not good enough to just inject into the video stream and everything will be perfect, but it's definitely good enough to generate summaries, etcetera.
Murilo:Yeah. Yeah. But I think it's the same thing with all of gen AI.
Murilo:Right? Like, it's 80% right. So it's like, it's good enough for a lot of stuff, but it's not Exactly. Cool. And what else do we have?
Murilo:The Wikimedia Foundation's court challenge to The UK's online safety act categorization rules was dismissed, but the judgment puts pressure on Ofcom. As the foundation puts it, and I quote, our concerns on the looming threats to Wikipedia and its contributors remain unaddressed, underscoring why they brought the case. What is this about?
Bart:So in The UK, there's an Online Safety Act that they are implementing, and there are some categories in that act. I'm definitely not an expert in this. Category one is the most stringent, and it refers to platforms where there are a lot of users, where there's a lot of user-to-user interaction, where there are recommendations. These platforms, which are called category one platforms under this regulation, have some duties they need to adhere to.
Bart:The most important one, and why Wikimedia went to court to try to get an exemption, is that there is a duty of identity verification for contributors. Wikimedia always held it in very high regard not to require that: not having to share any information, and letting the community do their work; if something is not correct, the community corrects it. And there's also something to say, like, there are cases where we don't really care that much about identification. Like, when I write an article on Gouda cheese, and at some point it comes out that I wrote an article on Gouda cheese, I mean, it's not the end of the world. Right?
Bart:Even if there was a data breach. But if you are writing about a repressive regime, and you're in that country at the moment, and it comes out one day that it was you that was writing this, it can of course be very dangerous. Yeah. So I think it's good that the Wikimedia Foundation takes this stance to try and safeguard the privacy of their contributors. They tried to get basically a sort of exemption to not have to adhere to these rules.
Bart:They went to court for it, and it got dismissed. But there is sort of this conclusion that even though they do need to adhere to the Online Safety Act regulations, they will more or less try to make it work. That's how I read it, without any very good guarantees on what this means for Wikimedia and how they should actually implement it. So I think it's still very vague on
Murilo:So Wikimedia, the foundation, is the one that governs Wikipedia.
Bart:Yeah. Exactly. Yeah.
Murilo:So that would mean that if they fully comply, at least in The UK, every time you wanna make a change on a Wikipedia article, you have to identify yourself.
Bart:Well, that's the fear, that it would go as far as that. Yeah.
Murilo:if it goes fully under The UK Online Safety Act. Yeah.
Bart:Not that you'd need to identify yourself to the world, but you'd need to identify yourself to Wikipedia, which is a data risk, because if there would be a data breach, your information is out there.
Murilo:Yeah. Yeah. Yeah. And also yeah. Indeed.
Murilo:And probably even fewer people are gonna actually contribute; it's another reason not to contribute. But do you know why it was dismissed? Because to me, Wikipedia is something that's been here for so long. I don't remember any story about issues with that. Right?
Murilo:Like, that people would add stuff and, I don't know, there'd be dire consequences or anything.
Bart:No, I didn't dig into the why exactly, but when I reason about it: when you implement something that is that far-reaching, which for a lot of platforms out there you actually want to happen, right? Like, I can imagine there are social media platforms where you find it important, if it's category one, that users who share important information with other users do identify themselves as being real people. At this early stage of implementing this regulation, if there are already exemptions on such a scale, where they're going to say, oh no, even though you check all the boxes to be category one, because we like you and you're Wikimedia, you're exempted,
Bart:I mean, I think those precedents are probably very dangerous. It puts a regulation like that on loose footing if you do it in such early days.
Murilo:Yeah. I see what you're saying. So it's more about not wanting to set a precedent; otherwise it becomes a very easily breakable rule.
Bart:When I try to reason about it, I don't know Yeah. If that was their explanation.
Murilo:Yeah. Yeah. Yeah. No. But that's also the the thought I had.
Murilo:But at the same time, the whole Wikipedia concept is that the world is self-healing, kinda. Right? Like, people are in general well intended, and people will take the time and effort, and I think that's been a very successful experiment, let's say. Right? True.
Murilo:So I also think it would be a bit of a pity to kinda take steps back when, I don't know, I feel like that's where we wanna go. Right? We would like to be in a place where we, as a community, as a society, want the information there to be accurate. We take the time.
Murilo:You know? Like, we don't need anyone to babysit us. Like, we we got this. Right? We as a so I think it's a that's that's the bit that's why I was also not challenging, but, like, a bit curious why it was dismissed.
Murilo:But, yeah, I also understand the other side. I definitely understand the other side. Yeah. And what else do we have, Bart?
Bart:We have: Anthropic boosted Claude Sonnet 4 to a 1,000,000 token context window, enabling huge codebases and document sets in a single request. It's already rolling out, with long context support for Sonnet 4 now in public beta. It will start on the API platform and on Bedrock on AWS. 1,000,000 tokens of context, Murilo.
Bart:What do you think?
Murilo:It's good. For context, I think before it was, how many tokens? Before it was
Bart:It's a five-time increase. Five-time increase, yeah, indeed. So 200,000 before.
Murilo:Yeah. It's pretty good.
Bart:It's pretty good, and it basically means, it says in their article, that it will allow codebases of up to 75,000 lines of code, which is already significant. Right?
Murilo:Yeah. Indeed.
Bart:Why it's good, if you use a lot of AI-assisted coding, is that when you edit a file, when you generate a new component for your application, it's important for the LLM to have enough context of what you're trying to build, basically. Because if you build a new component in that application, it probably interacts with a lot of stuff. And if the context window is big enough to have your whole project in it, it will typically improve the output.
Murilo:Yeah. We also talked about context engineering, how that's a bit of a challenge. Also, a lot of the time, when models have a smaller context window, they have to, quote, unquote, decide what goes in the context. So they have to do queries, and they have to search and add this and add that.
Murilo:And I think by increasing the context window, you move a bit of those decisions. Right? Like, you can just cast a bigger net, and you trust the information is gonna be there, and it's one less problem to have. Right? Even things like RAG: if you have something that just fits, and I think a million tokens is quite a lot,
Murilo:there's a lot of stuff where you don't need a whole complex architecture. Right? So it's another example that these things help. Maybe also, this is Claude Sonnet 4, which is the cheaper model. Right?
Murilo:Like, the less powerful model. I think the more powerful one is Opus. Right?
Bart:Yeah, that is true. What Opus is used for a lot is more, let's say, complex reasoning, architecture design for something new, or bug fixing; Sonnet 4 is really used a lot for generating code.
Murilo:Indeed. Indeed. So, actually, does Sonnet 4 have a 4.1? Because I think Opus had a 4.1.
Murilo:No?
Bart:I'm not that I'm aware of, but it could be.
Murilo:But, yeah, I think it does. But in any case, it's like the model got an upgrade, but it's not a new model. It's just the context window, which, to be honest, maybe I even get more excited about than a new model, because I feel like this is more concrete in how it changes my life.
Bart:Concrete, like, a result on what you can do today.
Murilo:Like, when you say there's a new model, and we're gonna talk about GPT-5, sometimes you get the benchmarks, and it's a bit like, I kinda see how it's better, but not really, or I don't really feel it. And I think this is something very palpable. Right? Like, okay.
Murilo:Now I can do these things that I definitely couldn't do before.
Bart:It's interesting to see, because what we also saw in the past with huge context windows is that the recall is not always great. I haven't looked at whether there are recall benchmarks on this yet. Pricing-wise, the price will go up from the moment that your context window is bigger than the original 200,000; then the price will double. But you still have the ability to do caching on these large context windows. So I think if you're working on a very big codebase, it's probably still worth it.
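That threshold pricing is easy to sketch. A minimal illustration in Python, assuming the input rates reported at launch ($3 per million input tokens up to 200K, doubling to $6 beyond, applied to the whole request); check Anthropic's current pricing page before relying on these numbers:

```python
def sonnet4_input_cost(prompt_tokens,
                       base_rate=3.00,    # $/MTok up to 200K (assumed launch rate)
                       long_rate=6.00):   # $/MTok once the prompt exceeds 200K
    """Estimate input cost: the higher rate applies to the whole request
    once the prompt crosses the 200K-token threshold (assumption based
    on the announcement's tiered pricing)."""
    rate = long_rate if prompt_tokens > 200_000 else base_rate
    return prompt_tokens / 1_000_000 * rate

print(sonnet4_input_cost(150_000))  # short prompt: 0.45
print(sonnet4_input_cost(800_000))  # long-context prompt: 4.8
```

So a prompt just over the threshold costs more than double a prompt just under it, which is why caching and keeping prompts under 200K still matters.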
Murilo:Yep. Yeah. I think so too. I mean, again,
Bart:It's interesting if you look at it from a pricing perspective. What you said earlier makes sense. When you use Claude Code, you see today that it does a lot of tool calling to understand where in your project something is, to also make sure it injects it into the context. Maybe it will reduce the amount of tool calling, and actually the impact on total spend, if you use it via the API, is somewhat neutral. It will be interesting to see what the impact is.
Murilo:I think it definitely could. I think it definitely could. One thing also, maybe it's a bit of a side note, but you mentioned tool calling. One thing I noticed a lot is that these LLMs have a bias towards writing code. So if you let it run a few times, even the same Claude Code over a few days, sometimes you see duplicated functions in different places of the code.
Murilo:A lot of these little things. Right? And I imagine that if you could reduce the amount of tool calls, you know, and all these things, it will help. And I remember there was even an article some weeks ago that was covered on the podcast about the smells of vibe coding. The vibe coding smells.
Murilo:You know? It's like, yeah, if you have a nice utility function, but you duplicate it here, duplicate it there, it's probably because you're using LLMs. Right? Which is just something to be aware of. Right?
Murilo:Right. Very cool. Excited about this as well. Anything else you wanna mention here, or shall we move on to the next?
Bart:Let's move on to the next.
Murilo:Google unveiled Gemma 3 270M, a compact, on-device-friendly model built for fast, cheap fine-tuning and solid instruction following. I quote: internal tests on a Pixel 9 Pro SoC show the INT4-quantized model used just 0.75% of the battery for 25 conversations. So it's another model from Google.
Bart:It's another model from Google, really focusing on the edge device space, which I think is very interesting: we now have quite performant models with very low power consumption. That's what you were presenting just now: for 25 chats, it only uses 0.75% of the battery of a Pixel 9 Pro. Well, you can debate whether that is a lot or not.
Bart:Yeah. And how they do this: this Gemma 3 270M model is really trained to be very good natively at tool calling. But you need to fine-tune it for the specific tool calling that you want to do. So typically you have a fine-tuning stage before you actually start using this model and bring it into your application on a mobile device.
Bart:So you really train it for a specific goal, and apparently that is what it's very well suited to do.
Murilo:So the model was fine-tuned for specific function calling. Does that mean that in the training data you have prompts, you have the available tools, and then the actual expected function call that was supposed to be made, and you fine-tune on that? Or what does it mean?
Bart:Well, I think what they focus on, and to be honest, I'm not exactly sure it matches your definition, but I think what they focus on is that what this model is very good at is instruction following. Mhmm. But you need to fine-tune it for that. So this can be, like: when you get this voice input, or the transcription from this voice as input, make sure to generate a summary and put it in my Apple Notes.
Murilo:I see. I see.
Bart:So, like, really, this specific instruction you need to fine-tune it for, to be very good at that specific use case.
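One way to picture such a fine-tuning example, purely as a hypothetical illustration (this is not Google's actual training format, and `save_note`/`send_email` are made-up tools), is a prompt, the declared tools, and the target call the model should learn to emit:

```python
# Hypothetical shape of one supervised fine-tuning example for tool
# calling: a user prompt, the tools the model may call, and the target
# call we want it to learn to emit.
example = {
    "prompt": "Summarize this voice note and save it to my notes.",
    "tools": [
        {"name": "save_note", "params": ["title", "body"]},
        {"name": "send_email", "params": ["to", "subject", "body"]},
    ],
    "target_call": {
        "name": "save_note",
        "arguments": {"title": "Voice note summary", "body": "<summary here>"},
    },
}

# Sanity check: the target call must reference a declared tool.
declared = {t["name"] for t in example["tools"]}
assert example["target_call"]["name"] in declared
print(example["target_call"]["name"])
```

A fine-tuning set would be many such (prompt, tools, target call) triples for the one narrow task you ship in your app.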
Murilo:I see. I see. I see. So it kinda like steers towards those kinds of actions and those kinds of things. Exactly.
Murilo:I see. I see. Interesting. And I also see here it comes with quantization-aware training checkpoints. Right?
Murilo:So quantization, if memory serves me well, is basically when you shrink the memory of the model and also improve the inference speed. Right? You have a lot of floating point numbers, which have a lot of decimals after the dot. Right? So you can kinda cut them a bit and say it's not exactly 0.11227, it's 0.1, and the model still does okay.
Murilo:It's not as perfect, but the memory is much smaller, inference is much faster. So when you talk about edge devices, this is
Bart:True. Yeah.
Murilo:Big win. Right? Very cool. You can also run it on Ollama. So, yeah, Kaggle, LM Studio, Docker.
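The quantization idea described above can be sketched in a few lines: round full-precision weights onto a small grid of integer levels (a toy symmetric 4-bit scheme here; real INT4 schemes are per-block and more careful) and see that the dequantized values only move slightly:

```python
def quantize_int4(weights):
    """Toy symmetric INT4 quantization: map floats to 16 integer levels."""
    scale = max(abs(w) for w in weights) / 7  # int4 range is roughly [-8, 7]
    q = [round(w / scale) for w in weights]   # what would be stored: small ints
    return [v * scale for v in q], q          # dequantized values + raw ints

weights = [0.11227, -0.80, 0.35, 0.02]
dequant, raw = quantize_int4(weights)
for w, d in zip(weights, dequant):
    print(f"{w:+.5f} -> {d:+.5f}")
```

Storing a 4-bit integer plus a shared scale instead of a 32-bit float is where the memory saving comes from, at the cost of the small rounding error you see printed.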
Bart:Yeah. And actually, because you mentioned Ollama here: you can very easily test it on your local machine, but it's definitely not meant, they also explicitly say, for conversational use cases.
Murilo:Yeah. I see. I see. But it's nice.
Murilo:I think, and again, we started talking about the phone usage, which is maybe the headline. Right? But maybe more realistically for people is that you could probably run this locally. Right? Like, I don't know,
Murilo:you're on a long plane ride, you wanna have a model available for you. You can spin it up on Ollama, and maybe it's not gonna be perfect, but you have a friend you can talk to during your flight.
Bart:Yeah. Then maybe this one is not the best one. Maybe not. But this is what you could do. Yeah.
Bart:Yeah.
Murilo:This is what you could do. Yeah. Maybe not to talk to a friend, but yeah. Yeah. Indeed.
Murilo:Cool. And here, we see llama.cpp. Just linking a bit back to our previous news article. Right? So there is a gemma.cpp as well.
Murilo:Alrighty. What else do we have?
Bart:We have more Google news. Well, DeepMind news. Google DeepMind news, to be specific. I'm very excited about it. DeepMind's Genie 3 steps beyond video generation to real-time interactive world models that run at 24 frames per second in 720p, for minutes at a time.
Bart:They describe it as, and I quote, a general purpose world model that can generate an unprecedented diversity of interactive environments.
Murilo:So three?
Bart:Genie 3, it's called. We've seen video generation over the last two years becoming better and better. We've seen some examples, and we've discussed some examples as well, of world generation. Actually, not that long ago we discussed a new research paper.
Bart:What Genie 3 does is that, via text prompts, you can generate a world and then actually walk through it.
Murilo:Yes. You can try this as well. Yeah.
Bart:What is impressive is that it does this in real time, at 24 frames per second, which is very impressive. It does this at quite a high resolution. But what to me is the most interesting thing is that it has memory.
Murilo:Yeah. Indeed. I saw that. By memory, and just to make sure I follow: for example, I think we even did a live on this when we were still Data Topics Unplugged, which was a Minecraft-generated world, I think. You could walk a bit in a Minecraft-generated world, but if you look back and then look forward again, the scene has completely changed.
Bart:Exactly. Exactly. Yeah.
Bart:when what they do here and it's actually in the you're you're sharing the screen and there's a there's an example demo there or further if you go up a little bit where they are painting the room. This one. Yeah. And then they're they're drawing something on the wall basically with with a paintbrush. And then they they are looking away, and then they will look back and you still see exactly the same painting.
Murilo:Yeah. That's impressive.
Bart:Which is very impressive. Yeah. The limitation they have now, and it's more a limitation of how the model is set up than a hard limitation, is that the memory goes up to one minute. So if you would look away for more than one minute, it's probably gone.
Bart:It looks different, but it's still super impressive. I'm very excited about what this will bring as well. We just talked about video generation, what we've been doing for two years now. Here you can do this interactively, just by walking in that scene and typing: please replace this cow with a horse. And then you walk there and push it a bit to the side.
Bart:Yeah. Like it's a, it's, it's, this opens up a lot of
Murilo:different use cases. Yeah. For sure. I mean, to me, this is, how can I say it? I feel like people got a bit excited about Veo, actually.
Murilo:I think it was also a model from Google, for video generation. Right? But to me, this is like
Bart:The interesting thing with Veo was that it included audio. It also generated the audio.
Murilo:No. Yeah. Yeah.
Bart:With lifelike people talking.
Murilo:To me, this is, like, five steps ahead. It's not even, like, it's not a video. It's really an interactive world.
Bart:And the interesting thing is also, when you go through the demos, you see examples of a person swimming, a person skiing. So it's a world model, and these things are emergent properties. It's not that they said, we are gonna build something and it needs to be able to ski or swim; it learned this from whatever training set it has, probably YouTube.
Bart:And you also see that, through their training data set, it to some extent learns physical properties of the environment. For the skiing, for example, you notice that when they ski to the side, it goes slower than when they ski straight down. And all of these are just emergent properties learned from the training data set.
Murilo:Yeah. It's it's yeah. It's pretty insane.
Bart:Impressive. Very impressive.
Murilo:Pretty yeah. Yeah. And, also, I'm also wondering because, again, the vibe you get, and if you're following the screen, you can see a bit some demos. The vibe you get is like a video game. Mhmm.
Murilo:Right? I was wondering if this is also gonna bubble back to the video game industry. Like, maybe it won't do the whole job for them, but if this is an asset they can import into Unreal Engine or whatever and then tweak, that's already, like, so many steps ahead. Right?
Murilo:Or just, like, stitching scenes together or something. That feels really
Bart:True. Like, if this becomes, this will become a tool in gaming engines. Right? For sure. This is the way you can build your world.
Murilo:Exactly.
Bart:By writing prompts and working through it and fixing some stuff along the way.
Murilo:And, like, game developers are gonna be vibe coding, you know, vibe developing, vibe game developers.
Bart:What I do, what I personally think as well, is that this technology within five years will also very much spur on the VR slash metaverse world.
Murilo:That is true.
Bart:This opens up so many new possibilities for collaborative gaming, for building communities online, for whatever, in a virtual setting that is bounded only by your own imagination.
Murilo:Yeah. It's true. It's true. There is a yeah. You mentioned, like, bounded by imagination.
Murilo:I also heard an interview with a guy talking about vibe coding, and he said something like that. Some developers are not super happy with the whole vibe coding movement, and he said he almost gets angry at them, because, just have some creativity. You know? There's so much more you can do now; you have this magic power, you can do so many things, and you're getting mad?
Murilo:You have the powers. You know? And I feel like it's a bit like that with this as well. I think it would be really cool to see what people build with this. I also agree it's gonna be an accelerator for a lot of these rendered worlds.
Murilo:Right? So, yeah, I think it's really cool. I get excited about this. Well, maybe a practical question. If I want to try this, do you know how to go about it?
Murilo:Do you need to
Bart:You can't.
Murilo:You cannot. Yeah. That's So
Bart:They opened it up for a limited set of researchers and some academic institutions, basically, and also some, quote unquote, influencers.
Murilo:Did you receive yours? Yes. Like
Bart:Not yet. I'm expecting it in the coming days.
Murilo:Yeah. Maybe they got my email wrong or something. It's fine. Forgive them. It's cool.
Murilo:Alrighty. Well, even if they don't open this up, I think just showing the world that this is possible, motivated people are gonna try to reproduce it, and that's already gonna move the space forward. Right? So good things will happen for sure. Very cool.
Bart:Also in AI news. We actually didn't really talk about it in the last month because of holidays, but we have a new GPT release, GPT-5. And they're already saying that they will tune it to sound warmer and less formal after feedback from the initial rollout. Did you try GPT-5?
Murilo:I did not try it yet. It came out when? Like, a few days ago. No?
Bart:I want to say already two weeks ago.
Murilo:Two weeks ago? Is that possible? I haven't tried it yet. But I looked at a few things, some reactions, and some people are saying that it's basically a bundle of models. So now you have one interface, and it routes your question to the right model.
Murilo:You tried it. No?
Bart:I tried it. Yeah. So I use ChatGPT quite a bit for everything that is not coding. Before GPT-5, you basically had the option to choose whatever GPT-4 flavor model you wanted to use. Now with GPT-5, you basically have the GPT-5 base model and a thinking model.
Bart:And then, after a lot of protests from the community, they also reintroduced the 4o model. You can still choose that now. But I think the general consensus of the community is that everybody was a bit disappointed that it's just that. Right? Like, it's nothing more.
Bart:I personally think the problem with OpenAI in general is that Sam Altman, in a lot of interviews, makes these hints like, yeah, we're really getting to AGI now. I was trying it this morning and, oh, wow, I was really surprised, I didn't know that was possible. And then it makes you think, wow.
Bart:What's this gonna be? Like, this is gonna be AGI. But then it's just slightly better than GPT-4. Right? And personally, I'm not disappointed for the things that I use it for. For example, we also use it to generate summaries for the monkey patching podcast, and it's actually better than the o3 model that was available before.
Bart:And it's very subtle. Right? Before, I often had to tweak the result a little bit with a different prompt; now it's often good in one go. But it's very subtle to explain, like
Murilo:Yeah. Yeah. Yeah.
Bart:Yeah. What you feel. But it's definitely not the change you had going from GPT-3 to GPT-4. Right? That was more tangible.
Murilo:But I think that's a bit the vibe I get. People get disappointed with LLMs because so much is promised, and I think Sam Altman plays a big role in that. But if you get too caught up in that, you also lose sight a bit of what's actually useful. Right? Like, okay.
Murilo:It's not a 10, but it's an eight, right, which is still pretty good. I think even in the announcement he said, oh, yeah, we could release way more powerful models, but we want to make this accessible, because I think even GPT-5 is actually available for everyone. Right?
Murilo:Which is
Bart:The light one, not the thinking one.
Murilo:Ah, the thinking one.
Bart:The thinking one is only for pro users. But what he actually said is, we have way more powerful models that we can make available, but we ran out of resources because we are onboarding so many new users. So
Murilo:Nice. Okay.
Bart:Let's see. Right? I think the only downside as a pro user, and I haven't really delved into it yet: before GPT-5 was released, you also had a deep research functionality, where ChatGPT would browse a lot of different sources to basically build a research report on whatever you asked it. But that got dropped; it was moved to the max subscription that costs €200 per month.
Murilo:We felt a bit cheated there.
Bart:It feels like you get a bit cheated there. So far, for my use cases, I've been quite impressed with GPT-5's thinking model. You can also prompt it to browse a lot of different sources and to take a lot of actions on its own. So far, I haven't noticed any decrease in functionality. So to me, it's only been an improvement, even though it's not a major step upwards.
Murilo:Yeah. You know who did a similar move that got a lot of people upset? Cursor. They also changed the limits for paying customers, because they have the max subscription, right, with Claude and its limits, and they also moved that, and now a lot of people are really upset. I mean, rightfully so. Right?
Murilo:Like, people paid for a yearly subscription, and then they changed the limits and said, okay, now you need to pay more. So it's a bit of a not-nice move. Let's just say that.
Bart:It's not a nice move. I think in their case it's a little bit more understandable, because they are very dependent on model providers, and they rely heavily on Anthropic, and Anthropic very much upped their prices over the last months.
Murilo:No, I agree. I think it's more that a lot of people saw Cursor as this one-stop shop to have access to all the models. So yeah. I think the kerfuffle was also that they weren't very transparent about it.
Murilo:They just kinda did it without announcing it to everyone, and they didn't give people an option to get a partial refund or anything. So it was poorly received, to say the least.
Bart:I see. Yeah.
Murilo:And maybe talking about IDEs, I actually recently talked to my brother, and he tried Kiro. You know, we talked about it, the agentic IDE; it's also a VS Code fork.
Bart:The AWS one.
Murilo:Yeah. He liked it, actually. He said he hasn't used it that much, but that, yeah, he can use it for anything. Right? But it kinda forces you to have a bit of a diagram, like, to review a diagram and approve it.
Murilo:So it removes a bit of the prompting; it's very opinionated. It's like: you have this diagram, do you agree with it or not? This is the plan.
Murilo:Do you agree with it? Then it goes off and does these things, which I think is kind of something that you
Bart:have more guardrails in place for, like you said.
Murilo:Yeah. I think so. Because I remember even you mentioned, right, if you're coding with AI, you ask for a plan, and then you review the plan, and you say, okay.
Murilo:This is good. Or you say, okay, ask me follow-up questions, don't just give me a plan on the first go. Be critical. Ask me questions.
Murilo:And I think this probably has some prompts or some steps baked in that you have to go through.
Bart:Yeah. It forces you a bit to follow these
Murilo:Structured approaches, yeah. Indeed.
Murilo:But it kinda forces that structure on you, which is not bad if you've never used these AI tools. At the same time, if you are using them a lot and you already have these good habits, let's say, maybe, I don't know. Again, I haven't used it myself, but he was very impressed. Yeah. Cool.
Murilo:Cool. Cool. And what do we have as the very last topic?
Murilo:We have NGINX, and I think that's how you pronounce it. NGINX is adding native ACME support via a Rust-based dynamic module, letting servers request and renew certificates directly in config. And I quote: the implementation introduces a new module, ngx_http_acme_module, that provides built-in directives for requesting, installing, and renewing certificates directly from the NGINX configuration. So, Bart, for the people that are not familiar with NGINX?
Bart:This is a bit more technical, and maybe a bit more removed from AI. Most of our topics focus a bit more on AI.
Murilo:Right? Yeah. Well, that's not a bad thing, you know. It can be nice sometimes.
Bart:So NGINX is a lot of things, but for most people it's a web server. Say you have your own server at home and you want to host a website, or you have a server on AWS or wherever, and you want to host a website: you need a web server. NGINX is probably the de facto standard for that today.
Bart:What you probably want is for people to visit your website via HTTPS, a, quote unquote, secured connection. And to do that, you need to have certificates in place. Getting certificates manually is a whole hassle; let's not get into that.
Bart:I think what the community has moved towards over the last, I want to say, ten years, is to use something like Let's Encrypt, which basically provisions certificates for you using the ACME protocol.
Murilo:ACME stands for Automated Certificate Management Environment.
Bart:Yes. And there are other services, but I think Let's Encrypt is probably the most known. And you had a lot of tooling in place, like certbot, that you could use, but you still had to set up this automated renewal via this protocol with a service like Let's Encrypt. Now what NGINX has announced is that there will be a new NGINX module, the HTTP ACME module, and that will basically take care of all of this for you via the NGINX config file. Because it's a big hurdle if you're at the stage where this is the first time you're setting up an NGINX-hosted website; getting certificates in place is a very technical thing.
Bart:You really need to understand how it works. But if this gets taken care of for you automatically, this is a big win for end users, I think.
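For a concrete picture of what's being discussed, here is a rough sketch of the kind of configuration the preview module describes. The directive names (`acme_issuer`, `acme_certificate`) and the `$acme_certificate`/`$acme_certificate_key` variables are taken from NGINX's preview announcement and may change before a stable release; the domain, contact address, and paths are placeholders.

```nginx
# Sketch based on the ngx_http_acme_module preview; treat as illustrative.
resolver 127.0.0.1:53;

# Declare an ACME certificate authority to request certificates from.
acme_issuer letsencrypt {
    uri         https://acme-v02.api.letsencrypt.org/directory;
    contact     mailto:admin@example.com;   # placeholder contact
    state_path  /var/lib/nginx/acme-letsencrypt;
    accept_terms_of_service;
}

server {
    listen 443 ssl;
    server_name example.com;                # placeholder domain

    # Request and auto-renew a certificate for server_name via the issuer.
    acme_certificate letsencrypt;

    # Serve the certificate the module obtained.
    ssl_certificate     $acme_certificate;
    ssl_certificate_key $acme_certificate_key;
}
```

The point of the change is visible here: issuance and renewal live entirely in the NGINX config, with no external certbot cron job or manual file placement.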
Murilo:Yeah. Yeah, for sure. For sure.
Bart:It's not a solution for everything, because when you're in a microservices environment, like I said, it probably doesn't make sense to do it like this. But for a lot of simple cases, it makes a lot of sense for end users.
Murilo:Let me ask you this, being a bit of a devil's advocate now. I wanna host a website, so I bought a domain. A lot of the time, the domain providers also offer these certificates.
Murilo:No? They manage this, and then I don't have to worry about all these things. Right?
Bart:Yeah. But then you pay too much.
Murilo:I see. Pockets.
Bart:That's true. If you go to something very typical, like GoDaddy, and you buy a domain there, they charge a lot for an SSL certificate, while with Let's Encrypt it's literally free.
Murilo:Yeah. Yeah. I know.
Bart:And if you do it manually, so you buy it at something like GoDaddy, you get a file that you then need to put in the right place. But that also introduces a risk. Right? You need to understand what this file is, where you should put it, where you should definitely not put it, how to secure these directories, etcetera. By doing this manually, you introduce another risk.
Murilo:And if you hosted the website on GitHub Pages or something, could GitHub Pages handle the certificate?
Bart:GitHub Pages does it. If you use something like GitHub Pages, it has an abstraction over the virtual machine and NGINX. They probably use NGINX under the hood, but they do it for you; that's not something you need to worry about. This is really for when you either have a server in your basement and want to geek out and host something on there, or you rent a virtual machine somewhere and want to serve something to the Internet from it.
Bart:Those are the typical use cases.
Murilo:Yeah, I see. Nice to see that it's there. But I think it also goes to show that everything is a rabbit hole.
Bart:Everything is a rabbit hole. Definitely.
Murilo:I like it. Sure, you can go deep. It's like the world is complicated and interesting.
Bart:It's true.
Murilo:Alrighty. Cool. Anything else you wanna add here? Nope. And I think that brings us to the end of our topics today, after a short summer hiatus.
Bart:Just something to think about before we leave. We were actually discussing it before we started recording: Google search summaries have become very good.
Murilo:Yeah. That's true. So, for people that don't know Google search summaries: if you Google something, at the top it gives a little AI-generated summary. Right? Like, ah, if you wanna do this, and it adds links.
Murilo:And I remember it was in the news some years ago because it would say some absurd things, like
Bart:Yeah.
Murilo:Putting glue on pizza. But nowadays, it's really good.
Bart:Yeah. It's like, for me, personally, like, eight out of 10 times, the summary is good enough for what I wanted to search.
Murilo:Yeah. For sure. And, I mean, nine out of 10 times, I don't even check the sources, because, like, if it's
Bart:wrong. And I really wonder what that means for, oh, it's actually a very interesting one. I was gonna say I was wondering what it means for SEO, but we actually know. Yeah. We talked about the Cloudflare stuff. No.
Bart:No. No. No. Something else, okay. We actually know, because monday.com, the productivity app, I would like to call it, their stock dropped hugely.
Bart:I think 30%, yesterday or the day before, something like that. Because their CEO, with their earnings report, announced that they allocate a lot of their budget to SEO to basically acquire new customers, and SEO is broken. It doesn't work anymore. Yeah.
Bart:Because people don't look anymore at the top 10 results in Google. They just look at the Google summary. And Google is not the whole story here, but basically SEO doesn't work anymore because of AI. Yeah, indeed. Because you find your information in different locations, and your typical way of just paying for AdWords and stuff like this to get a higher ranking,
Bart:it's not that simple anymore. And I think monday.com is the first big company where they actually attribute bad results to this happening.
Murilo:But when did they allocate that budget for SEO? Because I feel like a lot of people have been saying this already, right? That SEO was dead. I remember we covered an article on this podcast, that Cloudflare said they wanted to ban AI crawlers because they noticed that
Bart:It's probably a bit of a hockey stick thing. You notice, like, SEO is changing, is changing, is changing, and at some point the change is big enough to be completely different. Right?
Murilo:Yeah, indeed. I think there are some interesting questions there, because a big part of the Internet works on SEO optimization and all these things. And I remember listening to a podcast where the guy was saying: now I'm trying to build my website so AI models will find it more easily and redirect to me. Because, you know, even if you do click the sources on the Google search summary, it's gonna be the sources that Google's model selected. Mhmm.
Murilo:Right? So even to have a chance that they will visit your website, the game now is not being at the top of the search engine; it's being at the top of the AI search.
Bart:Yeah. And I also heard you can do these kinds of tricks. Like, if you build your page so that you put a ten-second TLDR at the top of the content, apparently these crawlers pick that up more quickly; they prioritize it over a long text.
Murilo:Makes sense. Cheaper.
Bart:So you get these new tricks. I think the SEO space will be completely revamped by the end of this year.
Murilo:Yeah. And I think there are also, indeed, other motivations around letting your website be indexed. Right? Because before, it was like: I'll let you index my website, but you're also gonna give me traffic.
Murilo:But now it's like: you will index my website, and I will get little to no traffic.
Bart:Yeah. But it will still be the only source of traffic. Right?
Murilo:Indeed. And that's a bit the thing. Right? On one hand, it's like, nah. Fuck this.
Murilo:Like, I don't want this anymore. But at the same time, if you're not gonna do this, then what are you gonna do?
Bart:Mhmm. True.
Murilo:So, yeah, I'm curious to see what's gonna happen there. Let's just say that. Alrighty.
Murilo:And with that, any big plans, Bart? Any new holidays popping up?
Bart:Nothing in the coming weeks.
Murilo:Alright. Then I guess we'll see everyone next week.
Bart:Thanks a lot for listening.
Murilo:Thanks a lot, everyone.
Bart:Thank you for joining me, Murilo.
Murilo:Thanks for joining me, Bart. Happy to have you back. Talk to you soon. Ciao. Ciao.