AI Bubble Fears, AWS CEO Slams Layoffs, DeepSeek V3.1 & Google’s 33x Efficiency Jump

Speaker 1:

Trailer — the movie-trailer voice guy. You know? Like, "one man, this summer."

Speaker 2:

You know? Yeah. This is the blockbuster.

Speaker 1:

Yeah. Yeah. Yeah. Like, explosion sounds like Distant thunder. How about we have thunder?

Speaker 1:

No. We don't.

Speaker 2:

In a world where code ships at midnight and AIs never sleep, one show refuses to press default. They bend APIs. They break bad IDs. They patch reality. This is Monkey Patching, where devs, data, and daring collide. Hosted by Bart and Murillo.

Speaker 1:

Buckle up. Do you hear the cheering, actually? Oh, very faint. Very faint. Oh, it's okay.

Speaker 1:

Okay. Or maybe clapping. I got it real. Maybe we should actually, like, at the end of the intro, I'll I'll I'll edit the sound. You know?

Speaker 1:

It's like, oh, wow.

Speaker 2:

Or we actually gonna use this intro. Yeah. Yeah. Let's do it. But then we need to add a bit of a banter before that this was just a test to generate an intense blockbuster intro.

Speaker 1:

Okay. I'll do — I'll ChatGPT it. I'll see what I can do with the editing.

Speaker 2:

K. This is a

Speaker 1:

okay. So shall we shall we get started?

Speaker 2:

Yes. Let's play the sound note. Yes.

Speaker 1:

Hi, everyone. Welcome to the monkey patching podcast, where we go bananas about all things search, wearables, and more. My name is Murillo, joined by my cohost and friend, Bart.

Speaker 2:

Hi. Hey, Murillo. Hey. How are you doing? I'm doing very well.

Speaker 1:

Nice shirt.

Speaker 2:

Thank you.

Speaker 1:

You too. What?

Speaker 2:

Where'd you get the shirt? At the Kendrick Lamar concert.

Speaker 1:

Oh, I was there too.

Speaker 2:

Oh, what a coincidence.

Speaker 1:

Oh, wow. Great minds. Everything good? No, wait — last time we talked about your holidays and everything.

Speaker 1:

Right? Yeah. Holidays planned?

Speaker 2:

I actually do have some holidays planned.

Speaker 1:

Damn. What a life.

Speaker 2:

Yeah. A few days to the Alps to go run, basically.

Speaker 1:

Nice. Nice. Nice. Nice. What do we have?

Speaker 1:

I think we have a lot of stuff to cover this week. Think we maybe have some

Speaker 2:

We had too much. Had to cut a few.

Speaker 1:

I think we have some tidbits at the end. Yeah. Some small extras that we'll just touch upon, but then let's get started.

Speaker 2:

Let's do that. What do we have? We have MIT's new NANDA report. It says 95% of corporate gen AI pilots aren't moving the P&L. A sobering reality check for CFOs chasing quick wins.

Speaker 2:

It finds companies that buy specialized tools fare far better than DIY builds, citing a learning gap and poor workflow integration as the real blockers. For the rest of us, the playbook is narrow use cases, measurable outcomes, and partnerships that speed the path from pilot to payoff.

Speaker 1:

So this is a report, a very recent report, I wanna say. This is from August 18? Yeah. A few days ago. Yeah.

Speaker 1:

And it's saying that the gen AI pilots at companies are failing.

Speaker 2:

Yeah. And it basically says that only 5% of gen AI pilots reach production at scale, and that the rest basically don't have an impact on profit or loss, even though these companies are investing heavily in it.

Speaker 1:

But does it mean — like, so, counter-thought: a lot of times when something is new, people are experimenting with it. Right? That's why we have, like, the POCs and all these things. But just because something doesn't go to production doesn't mean it failed, right?

Speaker 2:

Mhmm. No. But 5% is very low, of course.

Speaker 1:

5% is low. But I also feel like — I think it also depends a bit on how you approach it when you're starting this thing. Right? Like, are you just experimenting? Or are you investing because you know it's gonna succeed, and then it didn't?

Speaker 1:

Or were you just very curious? Because I do think the number is high, but at the same time, I also don't think it's very constructive to think that everything that doesn't go to production has failed. I think whenever there's something new and you need to innovate, there is a bit of risk that you're taking, and maybe some people don't acknowledge the risk as they should. Mhmm. But I think it's also normal. Like, it's expected.

Speaker 1:

Right? Like, maybe people are not as informed, but, like, 95% is a lot. But even if it was — yeah, if it was 70%, I'd be like, okay. Yeah.

Speaker 1:

It's normal.

Speaker 2:

Yeah. But we're also no longer in the first year. Right? I think what we see is a lot of large companies putting a huge amount of budget into this. And what the report is basically saying is that in most of the cases, it doesn't really lead to a measurable increase in profits or decrease in losses.

Speaker 2:

That's basically what the report is saying. And that's also, like, it's better to buy a specialized tool for the problem that they're trying to solve than DIY it with Gen AI.

Speaker 1:

Yeah. Do do you find any of these things surprising?

Speaker 2:

Surprising, not necessarily, but I think this is the first time we have such a large study at this scale. Yeah. And it's actually caused a bit of noise on the stock market as well. Ah. The last days there's been a bit of a decrease for the big tech players.

Speaker 2:

Because of this, and because it's potentially one of the signs that we're in an AI bubble that is about to pop.

Speaker 1:

We talked about this in another episode. Right? Like, are we in a dot-com-style bubble?

Speaker 2:

Yeah. But I think the report is a bit what introduced it. This is the first time that we're also seeing signs that people are more cautious on the stock market. Yeah. And especially because AI is priced to be this big change agent for companies.

Speaker 2:

Like, if you use this solution from this AI company, then you will move forward very quickly. Yeah. Right? Like, it will bring ROI very quickly. This large report from MIT is basically saying,

Speaker 1:

like it's not. It's not.

Speaker 2:

Yeah. Yeah. Yeah. Yeah. Yeah.

Speaker 2:

I think everybody is a bit scared because a lot of these AI companies are not publicly traded. Mhmm. They're owned by VC or private equity, which means it's a bit of a black box for investors because you don't really know how they're doing. So investors typically only notice this when NVIDIA has bad results. So there's also this lag effect, like, what is going on?

Speaker 2:

Like, we have these reports going on. We don't really know how these companies are doing. Like, what will the next quarterly results of NVIDIA be? And I think that all of that is making everybody a bit a bit cautious.

Speaker 1:

I see. I see. I see. Maybe so, I mean, in other words, we're saying this is this is hype. Right?

Speaker 1:

The numbers are showing this is really hype. There's a lot of investment, but there's not actual value.

Speaker 2:

I think the fear is that there's a bubble. Yeah.

Speaker 1:

Yeah. But do you think that, like, next year, if MIT does the same report, they'll come up with similar or comparable numbers? Or do you think — because I'm also wondering, like, maybe next year, instead of 95%, it will be 70%, which is a big improvement, kinda. And maybe the year after that, instead of 70, it would be 40%. Right?

Speaker 1:

Because, again, there is a lag effect, and maybe we're underestimating how short the lag will be.

Speaker 2:

I think — we're a few years in, but a few years for a large company that is not tech native, not tech first, is still a very short time. So I think you still have this, and the report also touches a bit on this: it's still hard to get these gen AI solutions neatly integrated into workflows that have potentially existed for decades in the company. There are also, like, the capabilities of an organization — do you have the right people to actually implement this?

Speaker 2:

And, like, the a combination of these things also, like, point a bit, like, maybe it just takes time to to learn this new capability.

Speaker 1:

Because I was also wondering — I also think maybe the feeling that gen AI gives value very quickly comes from the quick wins. Right? And the quick wins usually come at, like, a user level. Right? Like, ah, before we needed to prepare show notes for the podcast episode, and now ChatGPT does it quickly.

Speaker 1:

Like, you're coding — now you have the AI-assisted things. And I think at this micro scale, you see very quick wins, and then it's easy to daydream, but I think it takes more time to change, indeed, the processes, how things are done. I feel like a lot of the times the reflex for people is to try to use AI on the current process, but sometimes, because the agents have so many capabilities, you have to rethink the process. Right?

Speaker 1:

Like, why are you trying to do this this way when the agent can do x and y and z much better, faster, etcetera? Right? So I think, again, I'm a bit surprised by the magnitude, not surprised that it is a bit hype, still a bit of a bubble. People are saying — there's a lot of promises.

Speaker 1:

You see, like, the big consultancies, you know, they talk about the super verticals, the hyperscalers, and all these things, like, this is what we're investing in. But I think we will get there. Again, it's hard to say if it will actually fulfill the whole promise of the AI stuff, because there's a lot of promises, but — I mean, the world's not gonna be the same. It's like the Internet, kinda, maybe a different scale. But, like, you see kids these days — they're gonna be growing up with this. It's gonna be very different.

Speaker 2:

Yeah. And talking about kids growing up with AI.

Speaker 1:

Exactly.

Speaker 2:

What else do we have, Milo?

Speaker 1:

We have — and let me put this on the screen before I start — AWS CEO Matt Garman says firing junior workers because AI can do their jobs is, and I quote, "the dumbest thing I've ever heard," end quote. He argues juniors are inexpensive and AI native, and he calls code-percentage bragging a silly metric that celebrates bad code. A key takeaway: keep hiring grads, then use tools like Kiro, which is an AWS IDE, to train and accelerate them instead of hollowing out the pipeline. So I think he's the first CEO that really went against replacing less experienced people, and I actually thought that he made good points.

Speaker 1:

And I think it links back to what we were just saying. Right? Nowadays — it depends on the person, of course — but if you've been programming for twenty, thirty years and now have to change your workflow a bit, there's a lot of people that struggle with that change. So I think the arguments that he makes are, one, they're the cheapest workforce.

Speaker 1:

So if you wanna cut costs, don't start there. And they grew up with these tools. Right? The same way that older people struggle working with computers while younger people, even without the education, are usually more tech savvy, the junior developers also grew up with the — well, they grew up.

Speaker 1:

I know it's very short, but they're more adept with the AI coding assistants. So he says instead of cutting them off, mold them into the employee you want them to be, train them to use the tools well because they already know the tools, and then he mentions their IDE, right, Kiro, to make them very productive and, like, shape them into the employee that you actually want. Right? They are gonna be the ones in, like, five years that will be the most productive. So firing them now and not hiring them now is very stupid.

Speaker 1:

That's his point. What do you think of that?

Speaker 2:

I think it sounds coherent. I think this is logical. I think mainly the two things are, like, indeed, they are the cheapest part of the workforce, they probably learn new tools the quickest, and they've very much grown up on this.

Speaker 1:

And a lot of the times, I think they're well, I don't know if it's a bias, but I feel like a lot of the times, junior people, they're more hungry. They're more, like, motivated. Like, I think if you again, I'm maybe generalizing too much here, but and maybe it's a bias. But if you have someone who says, okay. Now you have to change the way you're doing stuff.

Speaker 1:

I feel like you usually find more resistance with people that have a lot of experience.

Speaker 2:

Yeah. That's true. I think I I can see some truth to that.

Speaker 1:

Definitely. So yeah. I don't think — well, again, I also think that there were a lot of statements before that AI is gonna replace even mid-level engineers or junior engineers. I thought that was also a bit much.

Speaker 2:

I think one of the struggles that we've also touched upon here and there is people's concern, like, how will people learn how to program with AI? Yeah. Right? I've shifted my opinion on that a bit.

Speaker 1:

What was your opinion before, and what's your opinion now?

Speaker 2:

I think my opinion before was indeed this is challenging because potentially you won't exactly know how everything works and what good practices are. But it's just to me, like, it's a bit of a new paradigm. Right? For sure. I think you can also make this parallel where before you were programming on cards or you were programming assembly, and the assembly programmer saying, oh, yeah.

Speaker 2:

We see this new high level language coming out. Oh, what the fuck is Python? I mean, these people will not know what is going on with the system.

Speaker 1:

Yeah. Even, like, C, C++, and Python. Right? Exactly.

Speaker 2:

And I think this is maybe a bigger step, but it's it's comparable. In the in the end, it's about, like, learning best practices and and with whatever tools that you're using.

Speaker 1:

Yeah. I think also so I'll take analogy that is further away from programming languages. Like when I I'm from Brazil. Right? And when you move to another place, sometimes it's it's hard to imagine what your day to day is gonna be because you've never been to that place.

Speaker 2:

Mhmm.

Speaker 1:

Right? And I think it's a bit the same, like, because there's this new paradigm of programming and learning, you cannot imagine what it's gonna be like, but people assume that this is not gonna work because I've learned it like this, my peers learn like this, and my teachers learn like this. So that's the only way that's gonna work. But when you do get there, there are ways. Right?

Speaker 1:

Like, there there are many like, it will it will be different for sure. It will be something but it's you cannot imagine something that you never experienced.

Speaker 2:

True. And I think what Garman here is saying is you should still hire junior people. I don't think anyone has said you don't need them. I think the feeling is a bit, like, that AI will, quote, unquote, make things more efficient, so, potentially, you need fewer people. And I think even AWS, in the last two years when they did layoffs, hinted towards being more efficient thanks to gen AI and just needing fewer people.

Speaker 2:

And I think the easiest way to decrease your workforce is to, for a certain time, stop hiring junior people. Whether that's a smart thing, I don't know. But, like, it's the easiest way. Right?

Speaker 2:

Because you have churn. If you have no new people coming in, then you decrease your workforce.

Speaker 1:

Well, he also says — I mean, at least in the article, right — it's not necessarily about not hiring. It's really about firing junior people or replacing junior people. Right? But I also think the reason why juniors are the first ones — well, one is because you pay them less, you think it's less valuable. I guess maybe there's a correlation people make in their heads a bit.

Speaker 1:

The second thing is that usually it's easier to find junior people than mid-level and senior people. True. So it's like, if I fire them now and I need more next month, I'll hire them more easily. But if I fire senior people and I wanna hire them back, it's gonna be way more challenging.

Speaker 2:

But to be honest, and and to the view on the the whole news, like, I've heard very little about companies act actively firing junior people because of Gen AI.

Speaker 1:

Yeah. Indeed. I've heard a

Speaker 2:

lot about hiring stops. And, actually, we got a report, at least in Belgium, that confirms basically that junior people take a lot longer to find their first job if you compare this year versus the previous year.

Speaker 1:

There was already a report.

Speaker 2:

There was recent news by the Tate.

Speaker 1:

Oh, wow. That confirms a bit. We we mentioned that

Speaker 2:

We felt it, we heard a bit in the community that it was harder to get a junior job, but now the numbers also confirm it.

Speaker 1:

Yeah. Actually, for me, I finished the master's the year before COVID, and I actually considered starting the master's one year later as well. So I think for me, if I had actually said, oh, I'll start later, it would have been very hard for me to find a job. So I think it was one of those things that was a bit of luck, I guess. But it really worked out for me because, like, COVID happened and, I mean, it's been okay.

Speaker 1:

COVID was in twenty twenty ish. We're in 2025. Five years later, there's this. Right? So it's like, I'm happy that I started when I did.

Speaker 1:

Yeah. I should say that. True. I should say that. What else do we have, Bart?

Speaker 2:

DeepSeek unveiled V3.1, calling it, and I quote, "our first step toward the agent era," with a hybrid thinking/non-thinking model. Both DeepSeek Chat and DeepSeek Reasoner get a 128K context, stricter function calling, and stronger tool-using agent skills tuned for multistep tasks. Practically, you'll toggle heavy reasoning only when needed, and they note there will be price changes as off-peak discounts end September 5. DeepSeek with a new model.

Speaker 1:

Yes, V3.1. Actually, the previous one, V3, was a while ago.

Speaker 2:

No? It feels long. Yeah.

Speaker 1:

But I want to say six months? Really? That's it. August 1?

Speaker 2:

That's this this v 3.1.

Speaker 1:

The R1 was three months ago, and the V3 was five months ago.

Speaker 2:

Not so that far off. Right?

Speaker 1:

Yeah. Yeah. Not that far off. But it feels like a long time.

Speaker 2:

It's five years in AI.

Speaker 1:

Yeah. Exactly. Yeah. It's, older. So one thing I noted is that I mean, they didn't say it, and I don't know if I'm actually accurate on here saying this.

Speaker 1:

But it feels like the GPT five kind of thing, how they have, like, thinking and non thinking.

Speaker 2:

Exactly. It feels very much like how I use the ChatGPT UI. Yeah.

Speaker 1:

Yeah. And did you try it? DeepSeek, I mean?

Speaker 2:

No. DeepSeek? No. I haven't.

Speaker 1:

Yeah. Yeah. So it felt like that. So I guess — I don't know if they're still gonna have an R line, because the R1 was the reasoning model and the V3 was the non-reasoning one. Right?

Speaker 1:

And again, a recap for people listening: reasoning basically means that if you ask a question, the model will output some stuff for itself first. So it's almost like, if you were to anthropomorphize it, it's talking to itself before it answers. R1 was a reasoning model, V3 wasn't, and now V3.1 apparently merges the two. It's a hybrid thing.

Speaker 1:

So I guess, depending on the question, it will be able to tell if it should talk to itself or not before it answers. I also saw there are a few things on agent upgrades: more reliable tool use with strict JSON, better multistep reasoning, and so on, which also agrees with the overall trend that these models are becoming better agents, right, at calling these things. It seems like people are focusing more on that now.
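For listeners who want to see what that tool-calling story looks like in practice, here is a minimal sketch against DeepSeek's OpenAI-compatible API. The base URL and model name follow what DeepSeek documents; the get_weather tool is purely an invented example, not something from the release notes.

```python
# Minimal sketch of tool calling against DeepSeek's OpenAI-compatible endpoint.
# Assumptions: base URL https://api.deepseek.com and model "deepseek-chat" as
# documented by DeepSeek; the get_weather tool is illustrative only.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-chat",  # the non-thinking mode of V3.1
    messages=[{"role": "user", "content": "What's the weather in Brussels?"}],
    tools=tools,
)

# If the model decides a tool is needed, it returns a structured call
# (function name plus JSON arguments) instead of free text.
print(response.choices[0].message.tool_calls)
```

The "stricter function calling" claim is about how reliably the model sticks to that JSON schema over multiple steps, which is what makes agent loops less brittle.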

Speaker 2:

Exactly. And they are becoming better at understanding which tools are available and when they should call which tool for what. Yeah. And clearly, all these new-generation models have some of this baked in.

Speaker 1:

Yeah. Exactly. So, yeah, another model there. And actually, I don't know if it's cheaper than the other open-source models that we saw. We saw Qwen.

Speaker 1:

We saw Kimi. But I think DeepSeek became very popular because it was the first open-source one — like, really open, open, open — that really rivaled.

Speaker 2:

Rivaled with minimal resources?

Speaker 1:

Right, with minimal resources. Yeah. Because I remember even my parents called me, like, oh, did you hear about this DeepSeek?

Speaker 2:

Yeah. It was really in the news everywhere.

Speaker 1:

Yeah. Like, it was really, like, a different dimension. But, like, Qwen also is really good.

Speaker 2:

Indeed, and I think that is why it was in the news. It had an impact on the stock market at that point.

Speaker 1:

Yeah. Indeed. Was a huge impact. Huge impact.

Speaker 2:

Even though, like, Qwen and stuff like that, like, they're also good models, but for some reason, like, it never really

Speaker 1:

Indeed. I think — I don't know. Yeah. We can speculate a bit. But, yeah, I think the bottom line is they are very good models, but I think DeepSeek got some popularity back then because it was probably one of the first ones.

Speaker 1:

But it's cool. Nice to see things keep moving, and here are some numbers for people. It is the best DeepSeek model. On the benchmarks, they just compared it with their own models. Do you know off the top of your head what the SWE-bench Verified scores are for the non-open-source models, or even the other models?

Speaker 1:

No. But on the release notes they have here, they compare DeepSeek V3, DeepSeek R1, and DeepSeek V3.1. So the new model is actually the best across the board. And, again, still at a good price. So it would be cool to have — actually, we did it.

Speaker 1:

Like, a battle of the LLMs. Go ahead. I think you need to do this once every week just to make sure you stay up to date.

Speaker 2:

Yeah. I think it's very hard, like, even from the benchmarks. All the benchmarks are for very specific use cases. Without testing the model out in your specific context, it's hard to understand whether it's better out of the box for you or not.

Speaker 1:

Yeah. Yeah. Indeed. I think even within a domain, sometimes it's harder. If you have two different ways of doing the same thing?

Speaker 1:

Is this one really better than the other? It's a bit arguable. But I think it's like — I don't remember where I heard it — like iPhone releases: from one to the other, maybe you don't see as much difference, but if you go back, like, five generations, then you're like, oh, yeah.

Speaker 1:

This is very different. True. So nice to see this moving along. Anything else you wanna add here before I move on to the next topic? No.

Speaker 1:

Then what else do we have? Marty Kagan — I don't know how to say his last name — argues AI won't end SaaS or make everyone build. The real shift is components plus user-programmed workflows with, quote, unquote, literally thousands of hidden business rules, and the rise of MCP for agents. The future is yes to both: expect to buy robust cores and layer vibe-coded or agent-driven customizations on top, not reinvent the enterprise wheel.

Speaker 1:

I came across this article. I thought it had some interesting points, and also some things that resonated with what I feel. Right? So he starts with, like, build versus buy. Basically, build versus buy for software is: do you want to build a capability that you need, or would you like to buy it from someone that offers the capability for you?

Speaker 2:

Exactly.

Speaker 1:

And he says that trend or the general rule of thumb is if this is a core competency of your company, then you build. If it's not, you buy.

Speaker 2:

Exactly. Yeah. Yeah. I think that has been a bit more more or less the default for the last decade.

Speaker 1:

Exactly. Right? So it's like, if it's something that you're an expert in, that you need to be an expert in, you build and maintain it. But if it's not, don't bother with it. Otherwise, you're gonna spend too much time and effort on these things.

Speaker 1:

Then he adds another layer. Right? He says that now with what he calls, well, vibe coding — I don't know, AI-assisted coding — for a lot of people it's very easy to build now. Like, very easy.

Speaker 2:

A bit too easy sometimes.

Speaker 1:

A bit too easy. Yeah. Indeed. So does it shift the equation? Should we build more?

Speaker 1:

And and then that's when we go to the the SaaS stuff. Right? Basically saying like, do you think that SaaS are not gonna exist because it's so easy to build these days that everyone's just gonna build everything? Because also, arguably, if you build something for yourself, it probably is a little bit better than something you buy. Right?

Speaker 1:

Because it's really custom-made for your needs. Maybe also even thinking, yeah, if this agentic or agent-assisted coding were perfect, which it's not, but even if in a few years it is, would it work out? And he says no, because a lot of the things that these SaaS products actually do are very clear, very specific rules that are encoded in the processes of how things should work. So the hard thing is not encoding these things, it's knowing what you want to build. So for example, he gives the example of the citizen data scientist or citizen developer, right, with, like, Excel and all these things.

Speaker 1:

A lot of these things existed for a long time, but if you want to build a — I don't know, a financial application or FinOps, whatever — it's not the technical challenges, it's really knowing what to encode and what not to encode, what the processes should look like and what they shouldn't look like. And I think people oversimplify these things in their heads, but these processes a lot of times are very complex. And knowing what you want to build is actually the challenge, not the actual building of it. So he says you're not going to kill SaaS; they're still going to be there, they're still going to provide value. But at the same time, you will probably build a layer on top of these things, because now building software is much easier, to really make it very custom for your company.

Speaker 1:

So that's what he means by yes to both.

Speaker 2:

Yeah. You've heard these voices saying SaaS will no longer exist in the AI era. I don't believe in that either. I think these are expert products that are built for a very specific goal; to rival that with something that you hack together yourself is very hard. I think maybe the interface to the SaaS products will change.

Speaker 2:

Like, maybe they will be more conversational, like, via MCP or whatever. I think in the buy-versus-build discussion, you also have — I think it becomes easier to buy to some extent, because with vibe coding it is much easier to integrate stuff. You can quickly integrate something new. True. I think that is an argument for why it becomes easier to buy stuff.

Speaker 2:

I do think it also makes sense to more quickly go the build route in some cases. So in my previous role as a founder, for most of the things that we needed that were not our core business, we went the buy route. But that also means, for example for HR tooling, that these are opinionated tools and that sometimes you change your processes in function of the tools that you buy, which is arguably not ideal. And if it is simple enough to build, I think there are also arguments to say, look, it fits your workflow better to build it yourself, even if it's not the core thing that you do, but it is important to you — yeah.

Speaker 2:

For whatever reason, How you look at different actions that need to be taken or culture or whatever. Like, there's also it becomes easier to to to build these things. Yeah. You also need to take that into the equation.

Speaker 1:

True. True. And I think what you said as well, like, it's easier to integrate. I think it's gonna be more, like, easier and easier. Right?

Speaker 1:

Because I'm also thinking now — really thinking just of the gen AI layer — a lot of the applications that you buy nowadays also have a gen AI component to them. So if you go to Notion, they have Notion AI. If you go to Google Drive, they have Gemini. If you go to Office 365, they have Copilot.

Speaker 2:

If you pay the AI tax.

Speaker 1:

If you pay the AI tax, yeah. If you go to Riverside, there's also an AI thing. But I still feel like they fall short, because a lot of the times, for an actual use case, you need to have data from different places. So if you have a contract, it's gonna come from one source, and then you probably need to move it to internal documentation, and you need to do this and do that. So sometimes in that flow you have blocks that do things really well, but to really get the value end to end, it's still lacking a bit. Right?

Speaker 1:

Do you want to reinvent the wheel? Right? Like if Notion has a really nice AI as well to search stuff, do you want to build something else to search just because you want to make the thing flow? So it's a bit I think also the ideal world is everything can be exposed like, everything is exposed with an MCP, with a lot of APIs and all these things, and you can just interact with these services. Right?

Speaker 1:

And then your application becomes way smaller, but it's really just a layer that integrates things end to end. I don't think we're there yet, but I think it's a logical step. It's whether we are gonna get there — because maybe they don't wanna do this, because maybe they wanna add another, like, AI tax on top of it for you to do these things. Right? But I think that would be the world that I would like to live in.

Speaker 1:

So you can have, like you just care about the end to end flow and the process. Right? And you can still build a little layer, but underneath, you don't need to reinvent the wheel for everything.

Speaker 2:

It's the glue — the thing that glues everything together becomes very

Speaker 1:

Exactly.

Speaker 2:

Efficient.

Speaker 1:

And today, I feel like it's not easy to glue things yet. But I think, logically speaking, that will be the best place to be.

Speaker 2:

And today, it's not perfect yet, because a lot of these existing SaaS players don't support MCP yet, so it's difficult to interact with them via an agent. I think it's also what we're seeing, like, with DeepSeek, with GPT-5, that they are focusing very much on improved tool calling. Like, you also need good performance there. But we're getting there. Right?
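To make the MCP idea a bit more concrete, here's a minimal sketch of a SaaS capability exposed as an MCP tool, using the FastMCP helper from the Python MCP SDK. The "reservations" service and the book_table function are hypothetical; only the FastMCP usage follows the SDK's documented pattern.

```python
# Minimal sketch of a SaaS capability exposed as an MCP tool, so an agent can
# call it instead of a human clicking through a UI. The reservations service
# is hypothetical; the FastMCP usage follows the Python MCP SDK quickstart.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("reservations")

@mcp.tool()
def book_table(restaurant: str, date: str, time: str, people: int) -> str:
    """Book a table at a restaurant and return a confirmation string."""
    # In a real integration this would call the SaaS vendor's booking API.
    return f"Booked {restaurant} for {people} people on {date} at {time}."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-capable agent or client
```

Once a vendor ships something like this, the "glue layer" the hosts describe shrinks to orchestrating tool calls rather than scraping or re-implementing each product.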

Speaker 1:

I think we're getting there. Again, I think the only reason why I'm a bit hesitant to say that's where we're gonna be in a few years, or even next year, is because, yeah, maybe people see there's a lot of value there. Right? And from the company point of view, maybe they'll say, I want to make this a pro-plus-plus feature.

Speaker 2:

Yeah. Yeah. I see it. Yeah.

Speaker 1:

And then it's like, you have to pay more to

Speaker 2:

use our MCP. Yeah.

Speaker 1:

Exactly. Exactly. And that's the bit, like that's that's so I hope that doesn't happen. Right? But I also can see that happening.

Speaker 1:

Right? And I think they're in their right to do so. Alright. Anything else you wanna add here? Did you get to have a look at this article as well?

Speaker 1:

Anything that stood out for you?

Speaker 2:

Not necessarily. Well, like I say, I think I agree with the conclusion in that it's shifting from "buy is always the logical choice if it's not your core" to "maybe there are some reasons now to also build part of it, or build an abstraction."

Speaker 1:

Yeah. I think the gen AI thing adds a nice, interesting layer to it. Right? He even mentions tools like Bolt and Lovable. They're, like, giving this to people that are not technical.

Speaker 1:

Right? So I think — I also saw, on a podcast, a guy was saying that the company's policy now is that every time someone has an idea for something, they need to deliver a POC. Doesn't matter if you're back office or HR. Like, vibe code something. Just give me a simple UI just to show it, and then we talk about it.

Speaker 2:

That is also a good thing. Like, it makes an idea very concrete very quickly.

Speaker 1:

Exactly. Right? And it's very much the fact that you can do it yourself. So it goes from your head to the POC, not from your head to someone else's head to the POC. It really shortens things a lot.

Speaker 1:

Right? And and I think it's easier for people to take ownership. I think it's, like, overall, it's a very nice approach. Alrighty. And what else do we have?

Speaker 2:

Google claims a typical AI text prompt now uses about 0.24 watt-hours, roughly nine seconds of TV, after a 33-times efficiency jump. It pegs water at about five drops and CO2 at 0.03 grams, but experts say the framing ignores key factors. The net effect: your prompts are cheaper, yet scale and uncounted training mean the overall impact may still climb.
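For the curious, the back-of-the-envelope arithmetic behind the "nine seconds of TV" framing looks roughly like this. Google's figure is the 0.24 Wh per median prompt; the roughly 100 W TV draw is our own assumption, not from the report.

```python
# Back-of-the-envelope check of Google's framing. The 0.24 Wh per median prompt
# is Google's claim; the ~100 W TV power draw is an assumed round number.
prompt_wh = 0.24   # watt-hours per median text prompt (Google's figure)
tv_watts = 100     # assumed power draw of a typical TV, in watts

tv_seconds = prompt_wh / tv_watts * 3600   # Wh / W = hours, then convert to seconds
print(f"{tv_seconds:.1f} s of TV per prompt")  # ~8.6 s, i.e. roughly nine seconds

# The claimed 33x efficiency jump would imply roughly 33 * 0.24 ≈ 7.9 Wh per
# prompt a year earlier — about 4.7 minutes of TV under the same assumption.
print(f"{33 * prompt_wh:.1f} Wh before the jump")
```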

Speaker 1:

I also saw this in a different article, actually, and in the actual announcement from Google. They also had a video that showed a bit — maybe a video version of the report, like just a trailer, let's say.

Speaker 1:

I think the surprising thing, and I think everyone's mentioning it, is that a query is the equivalent of nine seconds of TV. And there were a lot of numbers, a lot of speculation, that it is much, much bigger than that. So now Google is kinda saying that's not true.

Speaker 2:

Well, after their efficiency jump.

Speaker 1:

After the efficiency. Because if

Speaker 2:

if you take the headline, it's a 33-times efficiency jump. Before that, then, I think it becomes a lot. Right? Like, it's 33 times nine seconds.

Speaker 1:

But the headline from Google wasn't that it was really just like we care about sustainability and it's only nine seconds.

Speaker 2:

You cook.

Speaker 1:

Right? But it's true. And I think I even saw 44 times or something. So here it says 33 times in one year. And I think last week we talked about some I think what was it?

Speaker 1:

Was it Gemma? No. It wasn't Gemma. The Edge model that was more efficient as well.

Speaker 2:

Gemma three.

Speaker 1:

Yeah. A

Speaker 2:

version of Gemma three. Yeah.

Speaker 1:

A version of it. I think also, indeed, we see gen AI moving towards being more efficient, less energy-consuming. We see GPT-5 now kinda routing queries for you, so you don't have to do these things yourself. So I think it's also nice to see. I think Google is trying to get some PR out of this.

Speaker 1:

Right?

Speaker 2:

Sure. But it's also good to get a bit more objective facts on what a query actually costs.

Speaker 1:

True.

Speaker 2:

Right?

Speaker 1:

Well, I think they also mentioned the median query, I saw at some point. Yep. So I think it depends,

Speaker 2:

like, based on what they call a median query.

Speaker 1:

Exactly. So it's like, if you're someone that usually has long queries to ChatGPT and expects, like, summaries of documents or stuff that is very — you know, a lot of output tokens and stuff.

Speaker 2:

I would not assume that that is the median.

Speaker 1:

No. I would not assume that's the median.

Speaker 2:

Yeah. I think it's more like a simple question. Exactly.

Speaker 1:

Exactly. I think for a lot of people it's like a Google search.

Speaker 2:

Yeah. Basically, the the Yeah. The alternative to a Google search.

Speaker 1:

Exactly. I think I think that's that's what I would expect. Right?

Speaker 2:

And do do you think this is a lot or not nine seconds of TV?

Speaker 1:

I think it's less than I thought, but I think it's something that I can live with, but it's not negligible. Right? I think it's like how can I say it? If well, and I also hope that this is not the end goal. Right?

Speaker 1:

I also hope that it will keep

Speaker 2:

I think the good thing here is that it's in everybody's best interest to decrease this as much as possible because more resource consumption will mean a higher cost for these players.

Speaker 1:

Yeah. Exactly. I think this is a clear example of, like, sustainability, but it's a win win.

Speaker 2:

The incentives are aligned.

Speaker 1:

Exactly. Yeah. Yeah. Which is good. But also I also thought it was nice to see that, and it was a bit surprising because before Gen AI, there was a lot of environment focus, let's say, like sustainability focus.

Speaker 1:

And then these things came, and then the US kinda said, like, fuck everything, you know, let's just go — and then you also feel like you cannot fall behind as well. And then it's a bit of a weird

Speaker 2:

but I feel what you mean.

Speaker 1:

It's true. But it's a bit of a weird place, especially like

Speaker 2:

It's a weird place to be.

Speaker 1:

Yeah. Working in AI, you say, in the next five years we should definitely focus on sustainable AI and gen AI. And it's like, how? It feels like they're very contradictory. So I think it's nice to see Google doing this.

Speaker 1:

It would be nice to have a wave effect, let's say — if Anthropic also kind of publishes more, and there's more of this, more of that. I think also they mention here — again, maybe I'm paraphrasing a bit —

Speaker 1:

This is not just the model. Right? There's also so there's the efficiency of the model, but I think there's also efficiency of the hardware. There's also the sustainability of where the energy comes from from their data centers. Yep.

Speaker 1:

Right? So it's a whole bunch of things that are done to reduce the emissions. And actually, maybe a question here for you. I mean, remind me — when they say the equivalent of nine seconds of TV, it's in terms of power?

Speaker 2:

Power. Yes.

Speaker 1:

So it's not, like, CO2 emissions or anything.

Speaker 2:

Well, the CO2 emissions are explicitly mentioned elsewhere. I think they're separate. Yeah.

Speaker 1:

Okay. Okay. So yeah. So I I I think Google as well is in a good place to do that because they also have their own TPUs and all these things. So what did you think, actually?

Speaker 1:

Because you asked me if I thought it was

Speaker 2:

So I knew that it was a bit overstated in the past. Right? Like, because people are frustrated, like, what about sustainability? You had a lot of large numbers on how much a query actually costs, and you had a lot of other sources that were, like, bringing a bit more rationale into this. So it was more or less, I would want to say, what I was expecting.

Speaker 2:

Maybe it is even a bit more than I was expecting. I think nine seconds is still feels like a lot for a small query.

Speaker 1:

Yeah. But how much do you did you do you think nine seconds of TV is a lot of energy?

Speaker 2:

Yeah. The thing is, I didn't really know what to compare it to. Comparing it to something like this makes it very tangible. Right?

Speaker 1:

Like Yeah. Yeah. Because I'm also thinking when they also say nine seconds of TV, is it just the power of my device? Or does it also include because, like, if I'm watching Netflix, it's also streaming data from some other place.

Speaker 2:

I think you make it too complex now.

Speaker 1:

Yeah. Maybe. Because I was thinking, like, nine seconds of TV. If I toast my bread in the morning in a toaster —

Speaker 2:

That's way more.

Speaker 1:

It's way more. Right?

Speaker 2:

Yeah. It's way more.

Speaker 1:

So the next years, I was like, yeah. That's that's kinda small. Right?

Speaker 2:

That's — and you do this every morning?

Speaker 1:

I'm not sure if I do it every morning.

Speaker 2:

But almost. Like, nature is fucked because of you.

Speaker 1:

Yes. Maybe. I don't know.

Speaker 2:

It was Murillo all along.

Speaker 1:

Yeah. Exactly. But I feel like, okay.

Speaker 2:

Anyway. There goes another glacier.

Speaker 1:

Yeah. Yeah. It's okay. One degree warmer. Thank you, Murillo.

Speaker 1:

And still, the toaster is the most efficient way to toast bread, no? If I put it in a pan and toast it, is it gonna be less efficient, or no?

Speaker 2:

I have no clue. I've never I really

Speaker 1:

do know. I do think toasters are very energy hungry.

Speaker 2:

We can ask ChatGPT to do deep research on this, but I think it will cost you the next 500 toasts of bread.

Speaker 1:

It defeats the purpose a bit. Yeah. Maybe a side note: I saw there was a video of a guy, a cyclist, like, super buff, and he was trying to toast bread with the energy produced by cycling. And the guy was, like, struggling.

Speaker 1:

He was sweating. Then in the end, like, you know, and he's, like, all of sudden, he's, like, takes a bite, but, like, the guy is super big, you know, and it's, like, a bit absurd of the the Yeah. Yeah. Yeah. Yeah.

Speaker 1:

Crazy. It's crazy. It's crazy. Oh, yeah. I'm also wondering here.

Speaker 1:

So when they did the analysis — how did they do this? I also think they wanted this analysis to be a starting point for other companies, like a framework, let's say, for other places to be able to measure their power consumption. Mhmm. Right? So when they say one query, I guess it's just inference, so it doesn't take into account the training cost. Mhmm. Which is — is it higher, you think?

Speaker 1:

Like, if you take if you say how much power, how much energy we use to train these models?

Speaker 2:

Well, that is one of the reactions to this: indeed, training is not counted in it. Yeah. And training is probably — well, it's huge, because these models are trained for sometimes weeks or even months. The question is — it really depends on how much of that you attribute — how much does it contribute to a single query. Yeah.

Speaker 2:

Difficult for me to to guesstimate, to be honest. And they

Speaker 1:

they weren't very transparent, then, on what the actual analysis included. Mhmm. That's a bit of a pity. Let's see.

Speaker 1:

I think sometimes when a big player releases something, the open-source community kinda rallies behind it and shows alternative versions, possibly more transparent ones. So I also hope that, even if Google didn't make it that transparent, we have more stuff moving in that direction. Which maybe leads us to Mirage 2. Yeah. Last week, we talked about Genie 3 from DeepMind.

Speaker 1:

Yes. And now we have Mirage 2. So Dynamics Lab released Mirage 2 and presents it as, and I quote, "the next leap in generative world engines," turning text and controls into live, playable worlds. The research preview features GTA-style and Forza-like demos, cloud-streamed near 16 frames per second, with on-the-fly UGC and ten-minute sessions. For players and creators, it hints at co-creating games — prompting, steering, and sharing worlds as you play.

Speaker 1:

So rhymes with Genie three. Right? I actually didn't know this. So, basically, recap from Genie three and Mirage two. Right?

Speaker 1:

You have a prompt, and it generates a world that you can actually interact with.

Speaker 2:

Well, I understand that with Genie 3, which we discussed, you have a text prompt. Yes. Mirage 2, today, has an image prompt.

Speaker 1:

Oh, yeah.

Speaker 2:

With an input image.

Speaker 1:

Okay.

Speaker 2:

So it's only been a week since we were amazed by Genie 3. Now we have Mirage 2 coming from Dynamics Lab.

Speaker 1:

Dynamics Lab? Did you know them before?

Speaker 2:

No. I didn't know. But, apparently, they released Mirage one just a few months ago.

Speaker 1:

Oh, wow. Let's see here. Mirage one.

Speaker 2:

And they're calling this generative world engine, basically meaning, like, you, a user, can generate whatever you want to play. You start with an image. You can actually play a demo on the website. They have all kinds of things. There's, like, a a doodle you can walk through.

Speaker 2:

You have, like, Starry Night by Van Gogh as an input image. I tried it yesterday. You get placed in a queue, but then you can walk around a bit if there's enough room on their systems. It is very impressive.

Speaker 2:

I think it opens up so many creative avenues. There's also, on the website, actually a comparison with Genie 3, because, of course, it's very comparable. And the interaction horizon — what I understand, but I'm not 100% sure if that is the correct interpretation — is how long it understands the world and also remembers the world. But I'm not 100% sure I'm correct here. And they're saying it's ten-plus minutes. Oh, that's a lot.

Speaker 2:

That is a lot. Yeah.

Speaker 1:

It's a bit fewer frames per second. Right? Genie 3 was 24 frames per second, I wanna say. Yep. Exactly.

Speaker 1:

But this is — well, I'm trying it here, live-streaming, doing a bit of gameplay. They only allow for twenty seconds or so, so it's already over. It looks pretty cool.

Speaker 1:

So the one I tried as well was just like a child-like drawing. Yeah. And then it actually generated a world, like a 3D world, and you can kinda walk and play and run and fight with people, apparently.

Speaker 2:

And the big difference here, of course, is you can actually play the demo. Yeah. With Genie 3, the general public doesn't have access to it.

Speaker 1:

Yeah. That's true. That's true. Let's see. There there's some more examples here.

Speaker 2:

So to me, like, this feels like such a big step. Like, we're very much at the beginning of this, these user-generated creations. Like, add a few years to this — what will it bring?

Speaker 1:

Right? Again, I'm really really wondering how what's this gonna mean for the the game developers.

Speaker 2:

Game developers, but for me also, like, AR and VR space will get a a big boom from this.

Speaker 1:

True. True. True. True. Yeah.

Speaker 1:

Again, I think the only thing that I would like to be sure of is, like, if you can start with this and then bring it into the old-school way of building these things and then tweak it. Right? Kind of like the same thing that you could do with, I don't know, Claude Code. You generate a whole bunch of stuff, but then you still go to your VS Code afterwards and tweak the stuff. Right?

Speaker 1:

So you have the cannonball that gets you 80% there very quickly, and then you can still go and chisel the details. Right? I think that would be the best place to be. So I think — I don't know how they built this. I know very little about the engine and stuff, like how you can import and export artifacts and

Speaker 2:

But I think what you will see, when you talk about gaming, is that where you have today, for example, the Unity game engine — this very, quote, unquote, classic sort of IDE, maybe you can call it, and it's very visual, of course — that will just get, like, an AI prompting component where you can say, okay, I want the world to look like this, I want to be able to interact like this. Like it shifted how software developers work day in, day out, it will also shift how game developers work.

Speaker 1:

For sure.

Speaker 2:

Maybe allow your your uncle wherever to say, oh, this is my proof of concept game now. Yeah. Yeah. Because I'm just gonna prompt it with something like like lovablegames.ai.

Speaker 1:

Sure. I think it kinda links to what we said before as well. Right? Like, everyone can bring their POC.

Speaker 2:

Yeah. Exactly.

Speaker 1:

Like, I'm thinking of this world that you can do this, and you can jump and get fast. Like, don't don't tell me. Just show me. Show me. Right?

Speaker 1:

Think

Speaker 2:

Type it, then then show me.

Speaker 1:

Exactly. Like, type it. If it's not what you think, tweak it. I think sometimes for these things, need to really try and see how it looks. Yeah.

Speaker 1:

To see if it really it's really what you has the effect that you expected. Right? So I think it's cool. And I also think a lot of people go into programming for game development as well still. So I think it would be nice too.

Speaker 1:

I don't know. It's nice. Nice. Nice. Nice.

Speaker 1:

Nice add-on. Something that I wasn't really expecting, actually, when I first saw the Genie stuff. No. True. It came a bit out of nowhere for me.

Speaker 1:

Yeah. Very cool. What else do we have there, Bart?

Speaker 2:

We have: researchers trained a wearables foundation model on 2,500,000,000 hours of data from 162,000 people to better predict health states. Across 57 tasks, behavior-driven signals excel, especially for sleep, and combining them with raw sensors boosts accuracy further. This means, basically, expect smarter, more timely nudges from wearables, not just step counts and generic recovery scores. Cool. Yeah.

Speaker 2:

This is cool. Yeah. So what they tried to do, what they actually did, is they built a foundation model. And foundation models, how we typically look at them, are LLMs — text-based models. Right?

Speaker 2:

But instead of text, they input, basically, sensor measurements and trends over time and linked them to health states. Health states being: you recovered well last night, or you're stressed — these types of health states, and also some precursors to medical conditions. I skimmed a bit through the article, and apparently the performance is very good and typically also beats simple models that try to understand these signals.

Speaker 1:

And simple models are usually rule based, I guess?

Speaker 2:

Rule-based or, let's say, simple regression models that were built on this. If you look at the performance, it seems a worthy avenue for these types of smart-device manufacturers to explore further. The question is, of course — and that's, I think, something that you need to at least understand here — that the simple models are way more energy efficient. Right? So you're probably not gonna run a foundation model tomorrow on your Garmin with the technology that is out there today.

Speaker 2:

But you might run this in the companion cell phone app. Right?
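As a toy sketch of the idea — pretrain one encoder over raw sensor streams, then fit small, cheap heads per health state on top of its embeddings — the downstream step could look something like this. The encoder, the window shape, and the stress labels below are all hypothetical stand-ins, not the paper's actual pipeline.

```python
# Toy sketch of the "foundation-model embeddings plus simple head" workflow.
# pretrained_encoder, the window shape, and the labels are hypothetical; the
# point is only the shape of the workflow, not the paper's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
windows = rng.normal(size=(1000, 24 * 60, 3))  # 1000 day-long windows, 3 sensor channels
labels = rng.integers(0, 2, size=1000)         # e.g. "stressed" yes/no (placeholder labels)

def pretrained_encoder(batch: np.ndarray) -> np.ndarray:
    """Stand-in for a wearables foundation model: map a sensor window to an embedding."""
    # Here just summary statistics; a real encoder would be a trained network.
    return np.concatenate([batch.mean(axis=1), batch.std(axis=1)], axis=1)

embeddings = pretrained_encoder(windows)
x_train, x_test, y_train, y_test = train_test_split(embeddings, labels, random_state=0)

# A small, cheap head per health state on top of the frozen embeddings.
head = LogisticRegression(max_iter=1000).fit(x_train, y_train)
print("toy accuracy:", head.score(x_test, y_test))
```

The appeal is that the expensive part (the encoder) is trained once, while each new health state only needs a lightweight head — which is also why running it in a phone app is more plausible than on the watch itself.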

Speaker 1:

And this is wearables in general? Wearables in general. So it can even be the Oura Ring.

Speaker 2:

They could be like these type of signals.

Speaker 1:

What about explainability though? Because I'm also wondering, like, if it says I'm gonna have a high likelihood of having a heart condition, maybe this is a bit simpler. But would I want to

Speaker 2:

I I think, like, the heart condition is already, like, looked as a medical

Speaker 1:

Yeah. Maybe.

Speaker 2:

It's maybe that goes very close to a diagnosis. Like, they really looked at health states, like, are you tired? Are you recovered? Are you stressed? Like, it's a bit more precursors to that.

Speaker 1:

And then if I say, like, okay. Let's imagine I'm stressed. Because I feel like sometimes these these type of things, at least for me, they trigger follow-up questions. Like you said, I'm I'm stressed. Oh, why is that?

Speaker 1:

Ah, it's because your heart rate during the night is is elevated. Okay. What can I do to to to help that? You know? Like, maybe not I don't know.

Speaker 1:

Not eat a heavy meal before bed, or not do this. I wonder how this would all play out if it's all LLM based.

Speaker 2:

Well, it's foundation model based.

Speaker 1:

Foundation model. Yeah. Yeah. Not a language model.

Speaker 2:

But I think even better than it would today, because the results they're presenting are basically, like, we're better able to understand your health state versus traditional models. So it's better — like, when your Oura Ring says you're stressed, it will be closer to reality. Right? And there, of course, there's gonna be a time period where it needs improvement, maybe a bit of a loop approach.

Speaker 2:

But then the next thing, which is what you're referring to, is like, what do I do with it? I am stressed — what now? I think LLMs can even be a bit better than the current approach. And the current approach, like, with Oura is: you are stressed, we have this meditation that you can follow.

Speaker 2:

Do you want to follow this? Yes or no. And they're, like, an an LM model can maybe say, we have these type of things. What is your interest? Are you interested in meditation?

Speaker 2:

So but we also have these these four other things and, like, a bit better learn

Speaker 1:

Yeah. Yeah. Yeah.

Speaker 2:

And memorize also, like, what who are you as a person and what works best for you.

Speaker 1:

Yeah. I see what you're saying. True. I think it's, again, there's a lot of stuff you can explore there. Right?

Speaker 1:

I think wearables — when I see wearables, it gives me flashbacks of the AI wearables that were piloted. Right? Like, what was it, the Humane AI Pin? Things like that that really never caught on.

Speaker 2:

Yeah. But that's really, like, dedicated AI hardware. It's not really

Speaker 1:

Yeah. I think this is, like, take what already exists — yeah, exactly — what's already used and already works, and try to see if we can add this layer of foundation models on top.

Speaker 1:

That's nice. That's nice. Is there one use case that relates to this that you would like to see? Because you also have wearables. I know you also look into, like, how to augment your training regime with foundation models, or in that case with LLMs?

Speaker 2:

Well, what comes to mind — and it actually goes a bit beyond this article and maybe links more to the buy-or-build discussion that we had earlier — is that what I am typically quite annoyed by is that all these wearable or sports-related sensors and metrics live a bit in isolation. Yeah. So I have my Suunto heart rate monitor, a smartwatch, basically. I have my Oura Ring. I use some other things.

Speaker 2:

Like, I have a Zwift bike that has its own data. And they all live a bit in isolation, and there exist very few solutions to build a good overview on top and to really understand, like, what is it that you did in the last week.

Speaker 1:

Yeah. I see what you're saying. I think the the best thing I see today is just, like, Apple Health because everything syncs to Apple Health, and then you have a bit of a one view there. But it's there's tons of room for improvement.

Speaker 2:

Yeah. That's good.

Speaker 1:

To say the least. Alrighty. Anything else you wanna say on this? No. Alrighty.

Speaker 1:

The last one — well, kinda, but definitely not least — we have Google adding agentic, quote, unquote, get-it-done skills to Search's AI Mode. Starting with restaurant reservations for AI Ultra subscribers in the US, it's also expanding AI Mode to over 180 countries in English, adding opt-in personalization and link sharing for collaborative planning. Through that lens, search shifts from answers to actions: bookings today, local services and event tickets soon. So I actually didn't see this before, so I had a quick look. I think it's, again, more agentic stuff.

Speaker 1:

Right? Like, if you want to book a reservation at a restaurant, or, I think it also mentioned, if you want tickets for your favorite artist, they're actually working on these things and getting better at them as well. And I think Google is in a good place to do so, right? Because they already have a lot of stuff with Google Maps and interactions with that. Yep.

Speaker 2:

And to me, this is also a bit of: we've seen this with OpenAI before. I want to say we discussed it roughly a month ago, when they introduced agent mode. This feels very similar to that. Right?

Speaker 1:

Indeed. It does.

Speaker 2:

You want to give it a prompt to, like, book a reservation at this restaurant, or maybe something even more complex, like: I want to go to a Chinese restaurant within this radius with the best score, please book a table for me on Friday at nine. And then it goes and does it for you, which I think is also a bit the premise of what ChatGPT's agent mode is.

Speaker 1:

Yeah, indeed. I think the one thing they added, I would say, is the collaboration. I don't remember seeing that on the OpenAI side, which, from what I understand, is like

Speaker 2:

Collaboration... I haven't seen what it looks like, but it sounds very interesting. Yeah.

Speaker 1:

Yeah. Right. From what I understand again, it's like: we need to find a restaurant, and somehow we need to choose the restaurant together, so I can share it with you and we can decide together. That's cool.

Speaker 2:

And then maybe it pings me to say: oh yeah, are you available at that time, or should we shift it an hour?

Speaker 3:

Yeah. Indeed. Indeed. Or like, maybe this or maybe let's go for

Speaker 1:

this one, but this one is not available at this time. Would you prefer this one or that one? So I think it'd be nice.

Speaker 2:

Remove all need for any communication. Exactly.

Speaker 1:

That's the goal.

Speaker 2:

Yeah. That is the goal.

Speaker 1:

This is available, but just in the US, as I understand, and only for

Speaker 2:

But it is getting rolled out in a lot of countries.

Speaker 1:

But is it, like, quote unquote, only for plus-plus users? Or

Speaker 2:

It's for ultra users. Yeah.

Speaker 1:

Ultra users. Yeah. How much is that? Do you know?

Speaker 2:

No, to be honest, I don't. I think it's the more expensive tier. You're just verifying it now?

Speaker 1:

No. How is

Speaker 2:

Let me quickly check it. The Pro is the default one, which in euros is €22 per month. The Ultra is €275 per month.

Speaker 1:

Oh, I'm surprised I didn't know this before, but it kind of looks like the OpenAI setup as well. Right? They have, like, an affordable paid tier, and then they have one that is more expensive.

Speaker 2:

Right? And interestingly, talking about pricing, I don't think we put this in the news items, but I think somewhere last week OpenAI also introduced a new pricing option, which sits between their basic paid plan and their free one. It's only in India, and it's priced somewhere in between those two. So it's way more affordable.

Speaker 2:

When you translate it, it's something like $5 or $6. And it's more limited in terms of the queries you can do, of course, but it suddenly opens up to a lot more people at that price point. Right? Yeah. Yeah.

Speaker 2:

Yeah. €20, I think, or €25 a month, these days. I mean, you're going to consider: do I actually need this? Right?

Speaker 2:

Like Yeah. Yeah. Yeah. Yeah. You need good arguments.

Speaker 1:

Yeah. But I think maybe it's also a gateway drug. Right?

Speaker 2:

True. True. True.

Speaker 1:

I mean, if it works well, to be honest, I think that's a bit of the thing. Right? There are a lot of promises. If it works well, I'd be down to invest in it. Yeah.

Speaker 1:

Right? But it's a big if.

Speaker 2:

Well, especially for people like us who are active in this domain, the tech domain. Right?

Speaker 1:

Yeah. Yeah. For sure.

Speaker 2:

I think the cost of €20 to €25 a month is very reasonable.

Speaker 1:

Yeah, I think so. Especially when you consider the time gained and the quality-of-life improvement and all these things. Right?

Speaker 2:

And I could even argue that the more expensive one, which is, like, 200 a month, is still cheap, because it comes down to the cost of, basically, a cheap laptop a year. Yeah, that's true. If you look at it like that, from the moment it actually starts significantly accelerating what you do, I think you should think of it in those terms.

Speaker 1:

Indeed. You start thinking about how much your time is worth and how much you can do in that time, and so on. Yeah. Sure. Sure.

Speaker 1:

Do you have the super plus-plus plan for OpenAI, or something like that? Not yet. Not yet.

Speaker 2:

Not yet.

Speaker 1:

Not yet. Alrighty. Anything else you wanna say on AI Mode in Search, from Google Gemini?

Speaker 2:

No. We have some small tidbits left.

Speaker 1:

We have some small tidbits. Lots of stuff we wanted to cover, so we kind of agreed: let's add it at the end quickly. So the first one is imagestyles.ai.

Speaker 2:

Yeah, a page that you shared. Indeed. It basically shows a variety of image styles generated by ChatGPT.

Speaker 1:

Exactly. So if you're following on YouTube, we're sharing the screen so you can see a few examples. There's a picture of a very colorful teddy bear with a dog next to it, and then it shows the different styles. The reason I came across this is that I wanted to generate images for a presentation. I wanted them all in the same style, but whenever I tried to describe the style, it wasn't really getting there.

Speaker 1:

And also, when I asked it to describe the style and then generate a new image in that same style, it wasn't quite getting there either. And there are a lot of them here, let's see, at least 100 different styles, including things I would never think of. Just reading the first ones in order: No Man's Sky, watercolor, Invader Zim, eight-bit, low poly, plexus. Things I wouldn't have thought of.

Speaker 2:

No, I think it's very valuable to give a quick overview of what a certain styling keyword actually looks like, and to get a bit of

Speaker 1:

Exactly.

Speaker 2:

Consistency in what you generate. Yeah.

Speaker 1:

Exactly. If I'm doing a presentation, sometimes I like to put in images because I think it catches people's attention more. But I also want some coherence between the images. So you can just pick a style and say: okay, generate this in this style, and then follow through with it.

Speaker 1:

So it was a nice reference point for me as well, so I wanted to share it. Cool. And what else do we have? We also have Sickofancy, but I couldn't find anything on that one.

Speaker 2:

Sickofancy is South Park's latest episode, if you've been following South Park this year.

Speaker 1:

Not this year. No. I used to watch it more before.

Speaker 2:

There are three episodes now, and I think the first two, actually the third one as well, go very much in protest against Trump.

Speaker 1:

Oh, really?

Speaker 2:

Yeah. Wow. Whatever you think about Trump, they're funny, because the humor is extremely directly targeted.

Speaker 1:

Yeah. Yeah. They didn't care.

Speaker 2:

I'm really wondering how long it will take until we see a lawsuit coming their way. But the third one is relevant here. It's called Sickofancy. It's a play on... sycophants? Sycophancy?

Speaker 1:

Syco... yeah, I know what you're saying.

Speaker 2:

I'm not sure I'm pronouncing it exactly the same way as the title there, but it's written a bit differently. Sycophancy, in terms of LLMs, basically means that LLMs are very pleasing. And I'm paraphrasing a little bit here what happens in the episode. Like: I have this business idea.

Speaker 2:

I want to combine fries with walking dogs, like, making fries recipes with walking dogs. And then ChatGPT doesn't go: what the fuck are you talking about? It is very sycophantic, meaning it's going to go: wow, you're so creative.

Speaker 1:

That's a great idea.

Speaker 2:

Let's take these two aspects and build something for you. And, like, that really displeasing aspect, it gets Randy, you're showing the screen there, a bit in trouble with a marijuana farm he's trying to get off the ground.

Speaker 1:

Because he was chatting with the

Speaker 2:

Yeah. Yeah. Yeah. His assistant for really understanding what direction he needs to take is ChatGPT. And it's very, very pleasing.

Speaker 2:

Everything that he So

Speaker 1:

he's just like: that's a great idea. Yeah. Oh, yeah. Yeah. I think you... everything

Speaker 2:

Everything. It's really funny. And I think that's what the South Park creators do: they find a funny way of, quote unquote, protesting things that people at least need to think about.

Speaker 1:

Like, what does this mean? Yeah. You're laughing, but at the end of the day, yeah, they have a point.

Speaker 2:

Yeah. Exactly.

Speaker 1:

Yeah. Yeah. I get it. I get it. I get it.

Speaker 1:

Oh, that's fine. Where do you watch it, actually? Is it on, like, Netflix and stuff? No.

Speaker 2:

It's not on Netflix, so I can't really mention on stream where I watch it. Okay.

Speaker 1:

We'll talk offline. We'll talk offline, but I'll definitely try to watch it. And this is season twenty-seven, episode three. A lot of seasons. A lot of seasons.

Speaker 1:

And now last, but definitely not least: ElevenLabs released a new model. Meet Eleven v3, their most expressive text-to-speech model. So, again, it's text to speech: you type some text, and it will say it for you. Apparently, you can also add some voice tags. I'll give you some examples here.

Speaker 1:

Excited, shouting. There's also one for laughing, chuckles. Right? And it sounds really good, actually.
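As an aside for anyone who wants to script those tags rather than use the web playground: a minimal sketch with the ElevenLabs Python SDK might look like the following. The voice ID is a placeholder and the exact v3 model identifier is an assumption, so check the current ElevenLabs documentation before relying on it.

```python
# pip install elevenlabs
# Minimal sketch: ElevenLabs text-to-speech with inline voice tags such as
# [excited], [laughs], [whispers]. The voice_id and the "eleven_v3" model id
# below are placeholders/assumptions -- check the ElevenLabs docs for the
# values available on your account.
from elevenlabs.client import ElevenLabs
from elevenlabs import play

client = ElevenLabs(api_key="YOUR_API_KEY")  # hypothetical API key

text = (
    "[excited] We're off under the lights for this semifinal clash! "
    "[laughs] Oh, that's a lovely bit of footwork. "
    "[whispers] And the stadium goes completely quiet..."
)

audio = client.text_to_speech.convert(
    voice_id="YOUR_VOICE_ID",       # placeholder: pick a voice in the dashboard
    model_id="eleven_v3",           # assumed identifier for the v3 model
    text=text,
    output_format="mp3_44100_128",
)
play(audio)  # requires a local audio backend (e.g. ffmpeg/mpv)
```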

Speaker 2:

I'm gonna say, yeah. And these voice tags, these annotations, like "excited": this used to be way, way, way more limited in v2.

Speaker 1:

Yeah. But, like, it's getting really good. If I play it, will it come through? Let me try clicking the sample.

Speaker 4:

We're off under the lights here for this semifinal clash, the stadium buzzing with anticipation. ElevenLabs United, in their iconic black and white shirts, pushing forward with intent straight from the opening whistle. Driving down the wing, pace to burn, he skips past one, skips past two. Oh, this is beautiful.

Speaker 4:

One on one with the fullback cuts inside. Oh, that's a lovely bit of footwork.

Speaker 1:

And got

Speaker 4:

all. Oh my goodness. They've done it again. Pretty cool. Never

Speaker 2:

again. Very good. I think it's slightly less impressive in Dutch. I also

Speaker 1:

tried it already.

Speaker 2:

No. But it's still good, just slightly less impressive. I also noticed a few bugs when it's in Dutch. Like, sometimes the annotation tags, like "excited" or "chuckles", get read out loud.

Speaker 1:

Oh, really?

Speaker 2:

Yeah. Yeah. Oh, wow. But in this dialog box, you can basically type whatever you want and then choose the language.

Speaker 1:

I see. So that's really a voice.

Speaker 2:

Yeah. That's very good.

Speaker 1:

Try this. So for people only following the audio, it's definitely

Speaker 2:

worth going there and testing it.

Speaker 1:

Exactly. And I think programming with speech is a big thing too, because there are voice agents.

Speaker 2:

Right? Maybe try something like, take giggles. Start with the tag "giggles" maybe, and then say: oh, Bart. Oh, Bart. Oh, Bart.

Speaker 2:

And then maybe go to whispers, because it's already there. "Oh, Bart", comma. Yeah. And then "whispers". And then: your slippers are so cute.

Speaker 1:

Is this what you wanted to hear? Bart's gonna be, like, walking down the street with his AirPods on. Let's see. Let's see. Now I'm gonna play it.

Speaker 1:

I just want to put it on. Just like, this guy. "Oh, Bart, your slippers are so cute."

Speaker 2:

Thank you.

Speaker 1:

I... oh my god. And it picked a female voice as well. Maybe it was just based on the text?

Speaker 2:

Yeah. Below, you can actually choose voices. So copy-paste the text for a second, and then we can also try it with a different

Speaker 1:

Okay.

Speaker 2:

Because it will remove the text.

Speaker 1:

Okay. Maybe the sergeant. You have

Speaker 4:

what it takes.

Speaker 2:

Jesus. Okay. Let's try this voice, yeah, the sergeant. So it's gonna be different if the sergeant tells me.

Speaker 1:

Oh, Bart, your slippers are so cute.

Speaker 2:

Okay. That's weird. I felt like I had to do, like, 10 push-ups or

Speaker 1:

something. It was like: I'll give you 10. Cool. I feel like it's always fun to play with these things. True.

Speaker 1:

Alrighty. I think that's it for today.

Speaker 2:

That's it for today.

Speaker 1:

We had a lot of topics. Had fun as well. Any parting words of wisdom that you want to share with the world, Bart?

Speaker 2:

Invest in slippers.

Speaker 1:

Invest in slippers. Nice. And in socks as well, I guess. White. White.

Speaker 1:

Wow. Nice. Alrighty. Thanks, everyone.

Speaker 2:

Thanks for listening. Thanks for watching.

Speaker 1:

Thanks for joining me, Bart, and I'll see y'all next week. Ciao.

Creators and Guests

Bart Smeets
Host
Mostly dad of three. Tech founder. Sometimes a trail runner, now and then a cyclist. Trying to survive creative & outdoor splurges.

Murilo Kuniyoshi Suzart Cunha
Host
AI enthusiast turned MLOps specialist who balances his passion for machine learning with interests in open source, sports (particularly football and tennis), philosophy, and mindfulness, while actively contributing to the tech community through conference speaking and as an organizer for Python User Group Belgium.