GPT-5.1 Drops, Netflix Sets Boundaries, and an AI Song Hits No. 1
Murilo (00:07)
Hi everyone, welcome to the Monkey Patching Podcast where we go bananas about all things country songs, brain rot and more. My name is Murilo, I'm joined as always by my friend Bart. Hi Bart. How are you?
Bart (00:18)
Hey Murilo.
I'm doing very fine, sitting inside while the rain just stopped, actually.
Murilo (00:21)
Very fine.
Yeah, it stopped. I think today is a bit less gray, so it's nice. But I'm always amazed when daylight saving time ends. It just looks like it's dark the whole day. So...
Bart (00:31)
Huh?
Murilo (00:38)
We'll live. We'll live. We have a lot of stuff today.
Bart (00:41)
Let's get to it.
Murilo (00:42)
Let's get to it.
Bart (00:43)
An AI-generated country song is topping a Billboard chart, stoking a fresh debate over authenticity and how charts treat synthetic artists. Walk My Way by Breaking Rust hit number one on Country Digital Song Sales, raising questions about disclosure, bot-inflated metrics and where Billboard draws the line.
Murilo (01:08)
Is it Walk My Way or Walk My Walk?
Bart (01:11)
It is indeed Walk My Walk.
Murilo (01:13)
All right.
It's fine, we can keep that in.
Bart (01:16)
Did you listen to it?
We should be able to hear it too.
Murilo (01:19)
Yeah, I feel like the audio is not coming through on the right.
sure.
Bart (01:22)
Yeah.
Murilo (01:23)
Okay
Bart (01:24)
So what do you think? It's currently number one on Billboard's Country Digital Song Sales.
Murilo (01:30)
So it's not like... yeah, okay, I see what you're saying. Because I think when they say topping a Billboard chart, it's specific, right?
Bart (01:37)
From what I understand, it's Billboard's country songs and the digital sales of that, on Spotify and stuff.
Murilo (01:44)
And what does digital country song mean for Billboard? Is it AI generated or? Okay, okay. The song sounds like a country song. I'm not a big country song guy. I listened to it. There are a few songs I can jam to, but I don't listen enough to say this is good or bad. It's a country song.
Bart (01:48)
No, I think it's like digital channels. So it's country on digital channels.
Yeah, me neither.
Yeah, same for me, same for me. I had a listen to it. There's a lot of interaction on socials on this. A lot of positive, also a lot of frustration from actual artists. It's a...
Murilo (02:20)
Actual artists are the main ones that have come out and voiced their frustration.
Bart (02:23)
Yeah, yeah, yeah, yeah.
And to me it sounds a little bit... Well, if I didn't know it was AI, I probably wouldn't pick it out. But because you know, it sounds a bit cringy or something. It doesn't really feel like a natural voice. But it currently has 1.8 million Spotify listeners.
Murilo (02:41)
Yeah.
Yeah, maybe to shortcut the discussion a bit, because the thing that struck me when I read this article was mainly this statement here: "and that should infuriate us all." Why should it infuriate us all? Because the author talks about how Billboard also kind of doesn't care that it's AI generated. For this song, they do mention AI generation
on, I think, somewhere like Spotify or something, but not on Instagram. So, but...
Bart (03:19)
Well, but they don't mention it on Billboard. Billboard currently doesn't have separate charts, they don't have separate labels. And I do think if you take the Billboard charts and you just go through them, you're gonna assume this is authentic music, right?
Murilo (03:39)
Yeah. No, that's true.
Bart (03:39)
And that's not really the case here. Because everything is fake, from the instruments to the vocals, right? And that feels a bit misleading.
Murilo (03:48)
Yeah, yeah, for sure.
But if that wasn't there... I mean, I don't know the author, but he makes a lot of points here suggesting that even if it was very clear in the description, he would still be a critic.
Bart (04:06)
I think if it were clearly labeled, like AI-generated numbers versus human-generated numbers on Billboard, they should be able to compete with each other. But they should be labeled as such, right? I don't think they need separate charts, but they do need clear labels.
Murilo (04:22)
Yeah, I mean, I'm with you. Yeah, I'm with you. I'm with you. Yeah.
But I think, for example, he mentions here, I don't remember where, I cannot find it, but he says that AI-generated music getting traction discourages artists from producing content. I see the logic in the argument, but I also feel like it's a bit...
Like, I don't know. I don't know if I fully agree, you know? There are people that are gonna like the AI-generated music because they listen to music and it's catchy, whatever. And there are people that listen because they want to know there's someone behind the pen thinking about it, feeling these things and writing it out, right? But I still feel like there are probably people that will be interested in both. Like the other thing he mentions, I think it's about country songs as well, is that they are a bit more
emotional, let's say. They do have lyrics that try, and he mentions here that the lyrics are soulless and all these different things. But at the same time, I feel like for a lot of people, if they didn't know it was AI generated, they wouldn't notice, right? And I feel like there are a lot of songs written by actual artists that are also soulless, right? They're not talking about anything and still get very catchy.
Bart (05:32)
Yeah, but I don't think that negates the fact that it might demotivate actual artists. An actual artist making a song is probably gonna work on it for one or two months to get it all perfectly right. And then your vibe artist is gonna do this in three hours, and it's only slightly less good. I can imagine that in the same time you spend
generating a single song like this, the vibe artist is gonna generate like 20 quite okay songs.
Murilo (06:07)
Yeah, yeah. And I get it. I mean, I get the argument that you're incentivizing slop, right? AI slop, though maybe slop is a big word for this, but...
You're incentivizing noise over quality, right? You're incentivizing people to just push a lot because maybe one thing will resonate with people instead of actually trying to create. then the scale of things that are actually good and the things that are just noise is going to increase a lot, which is also what you see on YouTube and all the other like LinkedIn and all these things, right?
Bart (06:37)
Yet to me this will probably also raise the bar for what is the minimal quality that you expect, right? It will elevate the really good artists and it will take out artists that are at the bottom when it comes to quality, because they will not be able to surpass things made by Suno and the likes.
Murilo (06:58)
Yeah, true. The artists that are actually passionate about music, right? Because I'm also thinking, if people just want to be famous, maybe those mediocre artists, let's say, are also going to use AI then, right?
Bart (07:12)
potentially.
Murilo (07:13)
Potentially, right? So yeah, I think the devil's advocate point is: if people listen and they like it and there's no deception, right? Like it's clear that it's AI generated, do we care?
Bart (07:25)
We'll let the people decide.
Murilo (07:27)
We'll let the people decide. Yeah, I'm not saying I personally subscribe to that necessarily. I haven't made up my mind yet, but I think that's a bit the devil's advocate position, right? And I think people are very opinionated, but that's a bit...
That's where I feel like I have a bit of a hard time, you know, saying it's all better, all good, right? I think the reality is that it has changed the industry. Yeah, that I fully agree with. That I can definitely stand behind.
Bart (07:46)
it comes down to transparency.
Murilo (07:51)
What else? And talking about art, I guess, right? Next up we have: Netflix outlines when productions can use generative AI and what must be cleared before anything hits the screen. Ideation is generally low risk, but final deliverables, digital replicas of talent, personal data, or third-party IP require written approval and enterprise-grade safeguards, backed by a handy use case matrix.
So these are Netflix's guidelines for using GenAI, right? And when I...
Bart (08:21)
Exactly. For basically
production teams, right? The teams that are actually making series or movies.
Murilo (08:26)
Yeah, because the way I understand it as well is that Netflix has the streaming platform, but then they actually partner up with a lot of almost independent directors and such to produce content for Netflix that becomes Netflix content, right? Like Netflix exclusive content. And these guidelines are for everyone that basically wants to publish anything under the Netflix brand.
I guess, from what I saw here, I've just put up the use case matrix, the main concern from them is like a legal thing. That's what it looked like here.
Bart (08:54)
Exactly.
Well, it's probably triggered by legal, but their wording is also like, we should not mislead viewers, we should also have respect for actual performers and not create digital twins of them. And probably this comes down to the legal risks as well, but it's not necessarily worded just from a legal risk point of view.
Actually, going through this, to me it sounds very sane.
Murilo (09:44)
Yeah, I also
had that feeling. It's like...
Feels grounded, feels... I don't know, it feels like they're doing the right thing, right?
Bart (09:51)
Yeah, exactly. Like, don't mislead people. Don't use data that you do not own. If you do something, get written approval. Don't copy actors based on other material, et cetera. It makes a lot of sense, I guess.
Murilo (10:11)
Yeah. They also say, if you're using it for intermediate stuff, right, like mood boards or reference images, okay, low-risk things, of course, that's fine. But as soon as something goes in the final cut, even if it's a detail: use judgment, escalate, talk to us, right? So yeah, it looks... I mean, it looks very sensible indeed. Probably if I spent
some time thinking about it, I'd come to something close to it, right? Like use your common sense, don't deceive people and all that. Maybe one thing, because for me it was also... I can't say I was surprised, but I also thought it was a bit, quote unquote, early, right? So it made me wonder as well if there's already a lot of GenAI content on Netflix today. What do you think?
Bart (10:55)
Uhm... It's a good question, I don't know. I've never noticed it as such. And it could be that the shitty CGI is just GenAI, maybe.
Murilo (10:58)
Yeah.
Bart (11:04)
I've never clearly noticed it, to be honest. I think one of the bigger guiding principles, like a lot of them, makes sense. There are five different guiding principles; maybe to zoom in on one of them: GenAI should not be used to replace, or to generate new, talent performances without consent. And that's...
It's a hard stance. Basically, when you need an actor, be it the actual actor, be it the voice actor, whatever, the default should always be to use an actual person.
Murilo (11:40)
Yeah, yeah, I would say.
Bart (11:41)
And of course there is a back door, because you can get prior written consent, et cetera. But it's much more a conscious decision.
Murilo (11:49)
Yeah, it feels like, again, it feels like it's the right thing, right? You're not trying to take jobs away. You're not trying to add more to the noise. It's like: don't take shortcuts. That's a bit the thing. Also, what I do imagine GenAI is used for, and that's actually what I thought when I first saw the article, is that it was gonna be about generating thumbnails
Bart (11:54)
Go.
Murilo (12:15)
or personalized text for the descriptions or something. But actually it's really about the actual video, I guess more like the actual content, right? Like not the marketing around it, right? So.
Bart (12:15)
Hmm.
Yeah, exactly. It's really about the content. And something that they also clearly give a green light on is more the brainstorming part, like ideation: I want to create a mood board, this is the type of thing we're thinking about, let's speed up this ideation part.
Murilo (12:46)
Yeah, and then again, it makes sense, right? If it's really just a tool to get your brain juices flowing and these things are not gonna make the final cut, like if you're using it as a tool, it's okay. If you're using it as a replacement for something, then probably not okay. Or at least talk to someone, right? So yeah, again.
I mean, and again, the timing as well. The impression I have, and I don't know how much GenAI there is today, is that they're not doing this as a reaction to anything. They're really doing this to set the standard from the beginning. They are thinking critically about how this could impact the content they put forward. So it felt like a very modern approach, right? Very forward-thinking and well thought out. So they gained points in my book, let's say, Netflix.
Bart (13:35)
Yeah,
I agree.
Murilo (13:36)
Okay, what do we have next?
Bart (13:37)
The paper, LLMs Can Get Brain Rot, drills into the risk that large language models degrade when trained on junk inputs instead of diverse human data. It collects early research and symptoms like blandness, repetition and drifting facts, framing the problem in plain language for non-specialists and policymakers. So this is a website, which we'll link in the show notes, that basically gives an outline of a paper they published on arXiv.
Murilo (14:06)
Yes, this paper got a lot of traction. I mean, also because the topic is very newsy, let's say, right? Brain rot, I've heard a lot about. They define it here as low-effort, endless engagement-bait content that can dull human cognition,
eroding focus, memory, discipline and social judgment through compulsive online consumption. So brain rot I first heard of with the Italian brain rot, which is GenAI content, like some animations that just say stuff that makes no sense. And the first time I heard of it, people were cracking up. But to me, I'm like...
Yeah, what the fuck, this is not funny. This is just dumb. And then you see a lot of videos, usually of the younger generation, not toddlers, but like high schoolers or something, where they memorize the nonsensical words and repeat the same word in the same tempo the same number of times, and then make a big deal out of it. Right.
So I think the idea here is that this erodes your cognitive capacity, right? Your brain. And they actually did a study on this for LLMs. So they took content from X, what was Twitter before, and they basically rated the tweets, let's say, or the content in general, as junk or control, right?
Bart (15:48)
Junk being brain rot content.
Murilo (15:50)
Brain rot content. So I think they mentioned, I think it was in one article here, it's stuff that is mainly for engagement, mainly low effort, not something that challenges your brain, basically, right? Something that is, yeah, simple, straightforward. And they trained LLMs with this, or I think they fine-tuned LLMs, I want to say. Maybe just going here from one of the articles that covered this.
Murilo (16:16)
The junk set included highly popular content engineered to grab attention with minimal information: clickbait threads, recycled meme commentary, posts designed to spark outrage, and algorithmically generated articles. So basically just clickbait stuff. And they trained, or fine-tuned, models on this. And they actually saw that the LLMs that got the junk content performed worse on,
basically, the datasets they had here, like ARC, I think we covered ARC a while ago, which is the game thing where you try to find the rules based on the outputs of the game. But they also give some examples here, right? They have some questions with alternatives, and they saw that the baseline model actually performed better. For the junk-trained models, the logic was wrong, or sometimes there was no logic or no plan, and the answer was basically wrong. And they actually saw that even after you do this and you try to...
Bart (16:48)
Hmm.
Murilo (17:10)
to mitigate the brain rotting, let's say, with clean data that still persisted to a large degree. And again, there are some questions on like this mirror also human brains, right? Like in the sense that if you just consume junk, stuff that doesn't challenge your brain, right, to think about it, if you can also have impacts on the biological brain as well.
Murilo (17:39)
I do think that there needs to be more studies or larger scale studies on this, but I thought it was quite interesting.
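To make the junk-versus-control split discussed above concrete, here is a toy sketch in Python. To be clear, this is invented for illustration: the paper's actual selection used richer criteria (engagement degree and semantic quality, scored over real tweets), and the `label_post` heuristic, the thresholds and the example posts below are all made up.

```python
def label_post(text: str, likes: int) -> str:
    """Toy labeler: short, high-engagement posts count as 'junk',
    everything else as 'control'. Invented heuristic, not the
    paper's actual methodology."""
    words = text.split()
    engagement_per_word = likes / max(len(words), 1)
    if len(words) < 30 and engagement_per_word > 100:
        return "junk"
    return "control"

# Hypothetical posts: (text, like count)
posts = [
    ("you won't BELIEVE what happened next", 50_000),
    ("a long thread walking through how attention layers trade memory for compute", 400),
]
for text, likes in posts:
    print(label_post(text, likes))
```

The idea is just that "junk" is defined by how content performs (short, viral), not by whether the text itself is grammatical, which is why standard data quality filters would let it through.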
Bart (17:48)
Yeah, and also a bit worrisome for future training data, right? Given the amount of data currently flooding the internet which is, quote unquote, just brain rot content.
Murilo (18:02)
Yeah, indeed. Also the...
Bart (18:04)
and very much also a bit of a vicious circle, because a lot of this brain rot content comes from LLMs.
Murilo (18:11)
Yeah, yeah. I think we talked about it, I don't know if it was on Data Topics or Monkey Patching, but some people are saying that AI poisoned its own well, kind of, right? We released GenAI, and now so many people use it to write articles and such that it's being used again to retrain the thing, you know, and that has an impact on the performance and the quality of all these different...
Bart (18:23)
Hmm, yeah, yeah, yeah.
Murilo (18:39)
things, right? One thing I'll just say: data quality in AI or machine learning is not something super new, right? There were even data-centric competitions for machine learning, where basically, instead of submitting your own model, you'd submit a cleaned dataset, and then the same model is trained on the different datasets, to show the importance of data quality.
But this also talks about a different level, right? Because, as they put it, traditional data quality classifiers think it's fine, but it quietly degrades reasoning, because it teaches models to mimic attention rather than understanding. So it's another dimension of data quality, right? It's not that the data looks dirty; the text looks fine, the sentences make sense. But when you take a more subjective, deeper look at the cognitive
demand, let's say, of this data, then it also has a big impact, right? So I'm also wondering if for future models people will pay more attention to this, right? Like also try to classify what is junk and what's not junk. In a way, it's a bit analogous to what we saw in traditional machine learning, right? Before, it was just about gathering a lot of data and putting it all into training.
And now people are saying: okay, it's not just about getting data, it's about having the right data, the right features, the clean examples, right? You don't want to include everything. And I'm wondering if this is a similar moment for LLMs. It's not just about getting as much data as possible; maybe we should also spend a bit more time curating the data, seeing what makes sense and what doesn't, and see if there is an impact. I think last week we also covered the small playbook, which was like a...
Bart (20:06)
Yeah.
Murilo (20:21)
a bit of a playbook on how to train your own LLMs. And they also talk about how even with a relatively small data sample, you can have a quite good LLM. So yeah, I don't have first-hand experience to comment on this, but I would be very curious to see the impact of a model trained from scratch on all the data versus a model trained on a curated subset of that, which is still a lot of data,
picking the best examples, and whether you get a very big performance difference.
Bart (20:51)
Yeah, fair point.
Murilo (20:52)
Did this surprise you at all Bart? The data quality thing?
Bart (20:56)
No, to me it sounds a bit intuitive, right? If you have a pre-existing set of knowledge, so a pre-existing model, and then you start feeding it bad-quality data, the performance gets worse. Feels pretty intuitive, right? But here this article, or this paper, basically proves that.
Murilo (21:15)
Yeah. ⁓
Bart (21:16)
But the big challenge is of course that this is what is happening in real life at a very large scale, where a large part of the internet is now generated. So yeah, big question on what that means long term.
Murilo (21:30)
Okay, time will tell.
And talking about machine learning models, or new LLMs: OpenAI announced GPT-5.1, aiming for a smarter, more conversational ChatGPT, rolling out to paid users first. Two flavors, Instant and Thinking, promise better instruction following, clearer reasoning that adapts to task difficulty, and controls to shape tone so responses feel warmer or more precise. So...
a new model from OpenAI. Have you tried this one already?
Bart (22:01)
I've tried it a little bit. I've tried it in the chat UI, but also through Codex, which is OpenAI's coding assistant. I'm not sure, to be very honest, if it's really a new model. I think it's a tweaked version of GPT-5 which is, quote unquote, a bit smarter about understanding when it needs to use thinking tokens or not. And it also
promises to be a bit smarter and warmer in communication.
Murilo (22:30)
Yeah, so I haven't used it much yet, but from the announcement here, it does... I mean, they mention improvements to the communication, right, to make it warmer and stuff. They also mention it's a bit quirky, which to me is subjective, right? People like it or not, or want more of this or not; maybe you just want something that is super direct and to the point.
Bart (22:42)
Mmm, yeah.
Murilo (22:57)
One thing they did mention that I thought could be useful is the instruction following, right? So the example is always respond in six words, and you see that 5.1 Instant sticks more to the instructions. I think that can be very, very useful. The Thinking one as well, but I think that's something they also did before. And, for example, clearer responses with less jargon and fewer undefined terms, which I think is...
Bart (23:01)
Yeah, it also improves, yeah.
Murilo (23:24)
It's good when you look at it unless you're someone that is very technical and you understand the jargon, which is more accurate, right? So the vibe I got has also been like, there are improvements for the general population, but for example, tone is more warmer and more empathetic. I feel like it's a bit subjective and I don't know, feel like it's, I don't feel like, don't know.
Bart (23:43)
Well, they do also mention that with these improvements they have significant improvements both on math and coding evaluations. So there is that.
Murilo (23:52)
Hmm, okay.
But that I appreciate. Looking at the post here, they are focusing a lot on the tone and how warm it feels. And to me, again, I'm biased, but I don't know if it's a step in the right direction, right? I feel like you're equating chatbots more and more with people. For me, we should still look at it as a tool, right? And with all these changes, it's like
you're taking a step away from this is a tool and towards this is a personal companion that you have, which I think is a bit dangerous as well.
Yeah, if that was the only change, I would be a bit, I don't know, conflicted. They also mention at the very end that you can change ChatGPT settings for tone and all these things. You can make it more uniquely yours, they call it. So more professional, more quirky, more nerdy, something like that. I think that can be helpful, but...
it makes me wonder if ChatGPT is aiming to be more and more like a personal friend than a tool, which I think is, again, dangerous territory.
For the more objective things, like you said you used it in Codex; did you feel a difference in the coding abilities?
Bart (25:12)
So maybe just on SWE-bench, they showed that GPT-5.1 is slightly above GPT-5 levels. And that also means it's more or less the same as Claude Opus 4.1, more or less the same as Grok 4 when it comes to coding. It's very comparable.
Murilo (25:32)
Which is very good, right? Those are the...
Bart (25:35)
which is very good.
It's apparently also the best ranked on AIME, the high school math competition; it outperforms all the others. My own experience with Codex is actually quite positive, although it's very limited at this point. It was with 5.1 in Codex, actually just this morning, that I tried it out.
I was stuck on a task with Claude Code and I tried to fix it with Codex, and I actually got it in one go.
Murilo (26:06)
Okay.
Bart (26:06)
So it's, yeah, I'm happily surprised.
I think, also seeing that you now have GPT-5.1, you have Grok 4, you have Claude Sonnet 4.5, I want to say, you have Kimi that is doing very well... I'm hopeful that this will also bring the cost of AI-assisted coding down, because token costs are still very high for these types of models,
especially the Anthropic ones.
Murilo (26:34)
Yeah.
Yeah, true. True.
Bart (26:36)
So I'm happy
to see these improvements. Apparently everybody is able to at least be good at this. Maybe not the best, but at least be very good at this.
Murilo (26:38)
like more.
Which I think a lot of the time is enough, right? Yeah, I think cost will go down. I think we're gonna cover another article later that talks a bit about this, but I agree. It feels like people are figuring this out more, right? Like coding assistants.
Bart (27:00)
With the OpenAI chat UI, I haven't really noticed any big changes. I haven't tried it out extensively either. What I used to do
with GPT-5 is very quickly switch to Thinking, because it does a bit more tool usage, and then it's much better with, let's say, up-to-date news and stuff like this. It really crawls the internet for it. I hope this has become a bit smarter, that it knows when to use it or not, and at the same time takes less time than GPT-5 Thinking, because that takes way, way more time to get, let's say, accurate
news data than for example Perplexity or Google's AI answers.
Murilo (27:47)
And do you think that's because the tool calling is slower, or because it does more tool calls, or... I don't know.
Bart (27:55)
I think it's a combination
of those things, yeah. Probably the two.
Murilo (27:59)
Well, let's see.
And you mentioned Perplexity, you mentioned Claude Code, Codex, ChatGPT. Do you have a workflow, or do you sometimes try things on different ones? You try one, you didn't get what you want, and you go to Perplexity? How do you think about these things?
Bart (28:18)
Typically, my 80% is that I use Claude Code for coding and OpenAI for all the rest.
Murilo (28:26)
Okay, that's the default. But when do you deviate a bit? Like, if you feel the quality is not good?
Bart (28:27)
as a default.
Yeah, for updates on news and stuff, I tend to use Perplexity, just because it's really good at this and very fast. Whereas GPT is very slow.
Murilo (28:40)
Okay, interesting. What's next, Bart?
Bart (28:42)
Something that everybody has been talking about; I can't get it out of my LinkedIn feed. Token-Oriented Object Notation. TOON pitches a compact, schema-aware alternative to JSON for LLM prompts. With a v1 release on November 10th and a TypeScript SDK, TOON targets uniform arrays for major token savings while keeping lossless structure, positioning it as a pragmatic bridge between JSON and model-friendly text.
Murilo (29:09)
Do you feel like this is bike-shedding? Like, are we all bike-shedding as a community?
Bart (29:18)
Explain the concept of bike-shedding.
Murilo (29:21)
So bike-shedding. I read an explanation of it somewhere: it's basically, if you're in a meeting with many topics, and all the topics are super important, and then you have one like, where should you put the bike shed at your office? And then there's, I don't know if I should call it a phenomenon, but basically you spend most of the meeting talking about the bike shed, where you should put it, because it's something everyone has an opinion on. It affects everyone, but it's really...
It's not that relevant, right? Yeah, it's trivial, right? The other things are way more important, but you spend so much more time discussing this because everyone has an opinion and it affects everyone. And I'm saying this here because, yeah, everyone can talk about token usage. It's very objective. It's something everyone can understand because it's very simple, but is it really...
Bart (29:48)
Trivial, right?
Murilo (30:09)
Is it that big of a deal? So what do you think, Bart?
Bart (30:13)
Fair point.
I think it's a good analogy. But maybe explain, for people that haven't seen it yet: what is TOON?
Murilo (30:25)
So basically, if you're describing information, right, you have different formats. You have JSON, you have YAML. Exactly.
Bart (30:34)
Like you have a data object, right? Multiple rows of data, an array of data, whatever. Some data.
Murilo (30:39)
Exactly. And some data. And the main criticism is that JSON, which I think is what most LLMs were using, or maybe it started like that, I don't know, but it's very, very standard, is very verbose. Exactly: a lot of new lines, quotes, this and that, and that adds cost, right? I mean, we talked about
Bart (30:51)
But it's very verbose, right? Like you need a lot of characters and...
Murilo (31:03)
tool calling using natural language instead of a JSON schema, right? That also reduced the number of tokens. And there's YAML, which is more human-readable, I'm showing it on the screen for people following the video, by the way. YAML feels a bit more Markdown-y, slightly fewer tokens. And TOON is basically a new format that is
kind of a mix between the two, right? The example they have here is hikes[3], so I guess it's three elements, and then you have braces with kind of the headers of the table, right? So you have id, name, distance in kilometers, elevation gain, companion, and was it sunny. And after that, indented, you have the values as comma-separated lists. And that's it.
There are fewer tokens; they claim typically 30 to 60 percent fewer. I'm wondering if this is as flexible, because in JSON you can have nested objects, right? A thing inside of a thing inside of a thing. I'm not sure how TOON would handle that. I'm also not sure how different this is from just a CSV, to be very honest.
I it's, I don't know. Maybe this is just the nested, I don't know. But it's a new format, right? And I think even if...
I think it's something that could be relevant, something I would like to know about. The thing for me that is a bit surprising is how much noise there is, how much this exploded, you know? It's something like, yeah, okay, that's cool. But there are a lot of people talking about this, a lot of LinkedIn posts, a lot of people saying why this is the future, this is better. And
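For listeners who want to see the shape of the format, here is a minimal Python sketch of a TOON-style encoder for a flat array of objects. This is an illustration of the idea only, not the official TOON library, and the field names are made up:

```python
def to_toon(name, rows):
    # Header carries a length counter plus the field names;
    # each row is then a comma-separated line of values.
    fields = list(rows[0].keys())
    header = f"{name}[{len(rows)}]{{{','.join(fields)}}}:"
    body = ["  " + ",".join(str(row[f]) for f in fields) for row in rows]
    return "\n".join([header] + body)

hikes = [
    {"id": 1, "name": "Blue Ridge", "km": 12},
    {"id": 2, "name": "Lakeside", "km": 8},
]
print(to_toon("hikes", hikes))
# hikes[2]{id,name,km}:
#   1,Blue Ridge,12
#   2,Lakeside,8
```

Note that appending a third hike would also require bumping the `[2]` counter in the header, which is the maintenance quirk discussed below.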
Bart (32:49)
I think it's a good thing to explore how you can convey the same information with fewer tokens. I think that's a good thing, that these experiments are going on. I think it's probably hard to get one-to-one performance on this versus something like JSON or YAML, just because there's very little of this in the training data.
How do you need to interpret this? What I don't like myself about this is that you have this length counter. So if you go down a bit to the example that you were showing, ⁓ you have this hikes, like three hikes there. But in order to add a fourth hike, you also need to update the counter. So that's a bit counterintuitive or something. ⁓
Yeah, but it's good to see these explorations. I think it's a valid thing to find a way to spend fewer tokens. ⁓ But indeed it's very interesting how this went absurdly viral.
Murilo (33:46)
Yeah, yeah. And when I'm looking here at the efficiency ranking per 1000 tokens, TOON is first, of course, but then JSON Compact is second. And I guess JSON Compact is just removing the new lines, you know. And you have like 26.9 and 22.9. So
I don't know, it also feels, I mean, it is good. It's good that people are thinking about this. Huh?
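For context, "compact JSON" is just serializing without whitespace, which the Python standard library supports directly; the data here is invented for illustration:

```python
import json

hikes = [{"id": 1, "name": "Blue Ridge", "km": 12},
         {"id": 2, "name": "Lakeside", "km": 8}]

pretty = json.dumps(hikes, indent=2)                # whitespace-heavy form
compact = json.dumps(hikes, separators=(",", ":"))  # no spaces or newlines

print(len(pretty), len(compact))  # the compact form is noticeably shorter
```

Fewer characters roughly means fewer tokens, which is why compact JSON already closes much of the gap in these rankings.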
Bart (34:09)
And everybody knows JSON, right?
Murilo (34:14)
And everybody knows JSON, right? And now you're starting to have all these Python packages, TypeScript packages, because now you want to be able to parse these things and convert from JSON to TOON. I don't know, again, I think people should still work on this, I think it's good, but I also think the reason why this is gaining so much attention is because it's simple. Everyone can understand it. Everyone's going to have an opinion on it.
Like the whole YAML versus JSON debate was already something, right? ⁓ So it feels a bit like bike-shedding. But again, there is value, right? That's what I wanna get to. I like it, but I also heard so much about this that I was like, pfft. Yeah.
Would you use this?
Bart (34:59)
⁓ Maybe for a specific use case? Not at the moment.
Murilo (35:03)
And do you think this is only for, I mean, clearly it started for LLMs, but
do you see a future for this outside LLMs? Like people just using the TOON format because it's the new standard.
Bart (35:15)
⁓ Well, if it can encapsulate everything that JSON today does, why not, right? I would maybe argue that it's less easy to maintain, like I was explaining, right? With the counter, or if... I'm not sure if it's possible to enforce a schema, because you need a lot of utility functions for that again, like...
I don't think it will quickly become a new standard, but maybe for LLMs it does. And I can think of use cases where people are building solutions that are extremely LLM-heavy and that are sending a lot of JSON back and forth. That's where even a 5% decrease in tokens results in a big decrease in cost, right?
Murilo (35:53)
True
Yeah, sure. I don't know if I find it so readable though, compared to YAML or even JSON, right? So I think that's the thing you lose. I agree. I was also curious here that in the proposed workflow, you have JSON and then you encode the JSON into TOON, and that's what you pass into the LLM. So it does look a bit like an intermediate layer, right? People shouldn't really have to worry about it, but yeah.
Bart (36:27)
Mmm, yeah.
Murilo (36:29)
What else do we have? Is it my turn?
Bart (36:31)
I think so, yeah.
Murilo (36:33)
So we have ElevenLabs introducing Scribe v2 Realtime, a streaming speech-to-text model built for agents, meetings, and live captions. It promises sub-150 millisecond latency, automatic language detection, and enterprise options, claiming 93.5% accuracy across 30 languages, with API access and direct integration into ElevenLabs Agents.
So a speech-to-text model. I'm not sure what state of the art means for speech to text, but very, very low latency, right? And I think they mention here that the use cases are especially voice agents, right? ⁓ Today, what do you use, Bart, for speech to text?
Bart (37:15)
⁓ I don't do a lot of speech to text, to be honest. I use it a little bit, like the voice mode in ChatGPT. ⁓ I actually tried recently, I forgot the name, but basically an OS X client that can help you do speech to text, but the performance wasn't great. ⁓ So I don't actually use it. So yeah.
Murilo (37:37)
This is not an open model, right? Because the first thing I think of when I think of speech to text is Whisper, right? ⁓ I don't know if you agree. Yeah, I think so too. ⁓
Bart (37:43)
Exactly.
think that's what everybody defaults to these days.
We actually use it for the podcast to some extent. ⁓ We generate transcripts basically using whisper. Then those transcripts we use to find interesting slices and to generate shorts.
Murilo (37:56)
To some extent, yeah. You wanna explain how?
Yeah, indeed. You actually run Whisper locally, and actually it's whisper.cpp, right?
Bart (38:14)
Yep.
Murilo (38:15)
Yeah, indeed. ⁓ Which is...
quote unquote good enough I feel, no?
Bart (38:19)
Yeah, I wouldn't generate a newsletter from it, right? But it's good enough for an LLM to pick it up and understand enough context to propose good short locations.
Murilo (38:24)
Yeah.
Yeah. So I guess the latency for whisper.cpp is actually quite okay, I feel. Well, again, not having used it as much, right? But from the little experience I have, it looked okay. So I guess the main difference with models like this is that maybe it's even more instant and the accuracy is much higher, right? One thing that I'm always curious about, especially since moving to Belgium,
Bart (38:52)
Exactly.
Murilo (38:59)
is the different languages part, right? How well does it work with all the different languages? And also how well does it work if you switch languages in the middle? Which is not... Worldwide maybe it's not that common, but I feel like there are multilingual countries that this does happen. Right, if you're in a meeting... ⁓
Bart (39:03)
Mmm.
Yeah, true.
But even like, say you have this voice assistant by phone, and you're in Belgium: there are French speakers, there are Flemish people calling in, maybe even German-speaking people calling in, and then you have international people speaking English calling in. And it helps if you have just one model that can respond to everything, to avoid having to detect the language first, right.
Murilo (39:39)
Yeah, no, for sure. Fully agree. Fully agree. But I'm also wondering like the what if someone starts in, I don't know. I mean, this happens with me. So I'm learning Dutch. I'm learning Flemish. And I start and maybe I start first. My accent is probably horrible. Right. So I wonder how good these models are. But the other thing, sometimes I switch from like Dutch to English because I'm in the conversation and at one point it's OK to speak English because it's just easier to.
Bart (39:40)
So there are definitely some use case for that.
Murilo (40:06)
carry on like this, right? And I wonder if the model can also switch like that, you know, mid-conversation, just switch languages. Never tested it, but I've been curious about it. Yeah, right? We should give it a try, see how it goes. ⁓ Maybe one last thing: on ElevenLabs, have you used a lot of ElevenLabs? ⁓ The agents and all these different things, or no?
Bart (40:19)
We should check it out.
⁓ The agents not yet, ⁓ because they released the agent platform only a few months ago, right? ⁓ But I did use a lot of the text-to-speech. I think it's the strongest player in the field. They've also been around more or less from the beginning. ⁓ And it's just very good. Also, when I talk to founders that are active in this space, they all use ElevenLabs.
Murilo (40:42)
Yeah.
Yeah, yeah, they've been there for quite a long time as well, right? Cool. What else? What do we have?
Bart (41:06)
ByteDance's Volcano Engine launched a low-cost coding agent priced at 1.30 USD for the first month, escalating China's AI coding price war. The Doubao Seed Code model later costs 41 yuan monthly, ⁓ roughly 5 USD, and touts SWE-bench Verified results on par with other leading systems.
Murilo (41:30)
So very, very cheap, huh?
Bart (41:32)
Yeah, if we ignore the first month, which is very cheap, even going further it's $5 a month, which outperforms, I want to say, all Western providers. More or less, maybe I'm skipping one here or there. But yeah, I haven't really dived into what the quotas are or stuff like this. How good is it today?
Murilo (41:44)
think so.
Bart (41:55)
But it's good to see these price wars going on.
Murilo (41:58)
Yeah,
yeah, I think ByteDance, I don't remember which models I saw, but I remember them releasing models that we did discuss, I want to say here on the podcast. So like good models. Maybe it was an image model indeed. There's the Doubao Seed Code. I don't know. Yeah, this model, right. ⁓ I guess this is an open-source model. I guess.
Bart (42:07)
I think it was an image model. ⁓
It's actually the first time I hear about it, to be honest. Yeah.
Murilo (42:25)
think so. Let's check.
Bart (42:26)
I think so
as well, think so as well. I think it's an open weights model.
Murilo (42:30)
Open weights indeed. And yeah, here in the article they talk about the performance on the benchmarks, right? Which looks good. ⁓ What I thought was also interesting is that they mention that this was launched after, ⁓ where is it? I think it was launched after Anthropic's Claude Code was banned for Chinese companies. So it was a bit of a reaction to that.
Right. So yeah, six days, six days after ByteDance was cut off from access to Anthropic's Claude models, following the American firm's service restrictions affecting Chinese-owned entities worldwide. So basically there's some geopolitics involved with this, right? It also says the launch comes after Anthropic, the US AI startup, updated service restrictions in September to block access by subsidiaries of Chinese firms, the latest sign of growing polarization
Bart (42:57)
⁓ interesting, yes.
Murilo (43:23)
in the global AI landscape. So basically, Anthropic said you cannot use these models anymore. And then ByteDance said, okay, you know what? I'm just gonna release my own and I'm gonna make it super cheap. ⁓ So there's a big market for them as well, I guess. If you just think of the Chinese companies that cannot use this, there's gonna be a huge market there. ⁓ And I think there's gonna be more incentive for Chinese companies to release these coding models. They are very cheap. So I also feel, I don't know, I'm not sure, but I feel like they poked the bear a bit.
Right, like by.
You know, by making these sanctions, by restricting the use like this, now they have a lot more competition. ⁓
Bart (44:01)
Well, this has not been a smart decision, I think. This shows it for software solutions, but they're doing the same thing on chips and hardware, and the only reaction that comes from ⁓ China is to start building their own capabilities, and not only that, exporting their own capabilities to the rest of the world.
Murilo (44:10)
Exactly.
And there.
Yeah, and it looks like they're good at it as well, right? It's not just that they come up with good models; manufacturing-wise, I think they're also very, very strong. And maybe not to get too much into the politics, right? But it feels like the US hasn't been the most reliable partner, with tariffs and all these things, right? So I think there's another argument to be open to deal with
all these things, right? So I'm wondering as well if this hurts American companies in the long run. I guess it does, I think it does, but how much, right? Is the impact as big as I'm thinking right now, or yeah.
Bart (44:55)
I think it does. I don't think there was a question there. I think it does.
I think it's potentially very big. We discussed it a little bit on the previous episode. Even if you ignore that this is a very affordable coding agent, there's also the reality that China is releasing a lot of open-weight models into the global market that are more or less on par with today's leading US models. It will very much
heighten the competition on this. It will also very much devalue these premium US companies like Anthropic, like OpenAI, if they can only compete on "we have the best models", which today they can maybe still argue, but the last year has shown us how close these open-source models have come, like Kimi K2. ⁓
Like DeepSeek, it's not far off, maybe another year. And I would be surprised if they still have a very big lead over these OpenAIs and Anthropics of the world. And then the economic reality sets in a bit, because if they're overvalued, and that's another question, are they overvalued today? If there's much more competition, it's more likely that they're overvalued. And then it also reflects badly on the US economy, because the S&P...
Murilo (46:06)
Yeah, and I
Bart (46:22)
500, like all the growth in it this year just came from AI companies. So long term it actually has a very big impact on their economy.
Murilo (46:27)
Yeah. Or like related, like Nvidia or something, right?
Yeah, that's true. It's true. And again, OpenAI, I'm not sure if they're cash-flow positive by now. I think they have a lot of revenue, but they also have a lot of costs, right? And I think we talked a bit about the different strategies between Anthropic and OpenAI. So I feel like they were projecting the, yeah, indeed. So yeah, the more you delay these things, the more of these things happen. And again,
Bart (46:52)
They're investing very heavily.
Murilo (47:00)
not just models, right? You also mentioned the GPUs, right? I think the US is also banning Nvidia chips from powering these things. So China is also building their own chips, and yeah, I really feel like, again, they poked the bear, you know, they woke up the bear. That's the vibe that I get. Yeah.
Alright, last but not least, we have the tidbits.
Bart (47:23)
GitHub's Octoverse interview explores how AI tools are reshaping language choices and developer workflows, with TypeScript rising alongside Python's AI strengths. In conversation with GitHub Next's Idan Gazit, the piece frames an AI feedback loop where typed languages help agents refactor reliably across tasks.
Murilo (47:43)
So this is from a GitHub event, GitHub Next. And he kind of broke it down a bit.
Bart (47:46)
Yeah, and they basically did some interviews, actually a survey, among developers. And they also used GitHub data. And what they see is that TypeScript basically overtook JavaScript and Python on GitHub in 2025, growing 66% year over year.
Which is crazy,
Murilo (48:11)
I think, before maybe
the most popular language was JavaScript. I think... ⁓
Bart (48:16)
⁓
⁓ JavaScript or Python, I don't know.
I think Python actually, but I'm not 100 % sure. We actually discussed this last year as well when it came out. It's a yearly analysis they do. The interesting thing is that there is a bit of this AI feedback loop because TypeScript, because it is typed, it gives a lot of feedback to an AI model to see like the code that I generated, is it valid code or not.
And I think that part, plus also that TypeScript is used a lot in front-end and application design, which has also grown extremely with tools like Lovable, with Bolt, ⁓ with all these vibe coding tools that have sprung up in the last year or two. So we've seen this,
like, these tools that focus on easy, rapid front-end application development, plus TypeScript being the logical choice there because the LLMs get easy feedback. So this huge surge, basically, in the TypeScript language, and TypeScript becoming a de facto standard again for developers.
Murilo (49:20)
Yeah, maybe being a bit of a devil's advocate: with Python, you could have static type hints, right? You could check them.
Bart (49:24)
Mm-hmm.
Murilo (49:30)
Is it much different from TypeScript? Like the actual type system, I mean.
Bart (49:35)
I think TypeScript is stricter, ⁓ but it's much more intuitive to use TypeScript in the frontend than it is to use Python.
Murilo (49:49)
Yeah, but that I agree with. Because the main argument here is TypeScript over Python because of the type system, but with Python you could enforce, quote unquote, a type system, right?
Bart (50:04)
⁓ You could. So in TypeScript, what happens when something is incorrectly typed: you get a warning that you explicitly need to fix or ignore ⁓ before you can continue. ⁓ I think that's way more strictness than in Python, right? Because in Python you could do that, but you need to set it up yourself to be that strict, and there's a
Murilo (50:27)
Yeah.
Yeah.
Bart (50:33)
need to be more opinionated, I think, with Python to have the same level of strictness.
Murilo (50:36)
Yeah, and also this is for LLMs, right? The argument here is that LLMs benefit from the strictness, right? So I guess if you want to do the same thing for Python and LLMs, you need to set up some hooks, right? In Claude Code, they actually have hooks, so you can say: every time you write some code, run mypy. Or there's ty, which is, I think, the attempt from Astral. Maybe it's also slower, because I know mypy can also be a bit slow
Compared to TypeScript. So like there's more scaffolding, but I guess for me, I'm thinking like, should I write more Python? I don't write a lot of TypeScript, but if it's really, it's such a big plus, should I also go through the trouble of doing all these things? I think that's the question that I had for myself, right?
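As a small illustration of that Python scaffolding: plain type hints that a checker like mypy (or Astral's ty) would enforce, while the interpreter itself ignores them. The function and values here are made up for the example:

```python
def total_distance(kms: list[float]) -> float:
    # The annotations document intent; only a separate checker
    # (e.g. running `mypy script.py`) actually enforces them.
    return sum(kms)

print(total_distance([12.0, 8.0]))  # 20.0

# A call like total_distance(["12", "8"]) would be flagged by mypy,
# while the plain interpreter only fails at runtime inside sum().
```

This is the gap being discussed: TypeScript makes the check a default part of the build, whereas in Python you have to wire the checker in yourself.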
Bart (51:22)
But I think maybe also what changed a little bit is that, where Python was being used a lot for machine learning purposes, for training models, et cetera, now like 80% of those original users today just call an API. And you can debate whether or not you even need to use
something like Python for that; it's just as easy in TypeScript. And then it becomes also easy to unify your whole stack, because your frontend uses TypeScript, so why not your backend, right?
Murilo (51:57)
Yeah, I think that's a good point, I was also thinking of that. People are maybe using more TypeScript because people are building more applications. They need a front end, right? And usually that's TypeScript. You can do this with Python, I guess, but the ecosystem is not there; it doesn't feel like the language was made for this, right? Python is more for data analysis and all these things. They also mention here that Python is the dominant language for machine learning, so why would I choose anything else, whatever.
Like there's an ecosystem, right? But to me, then that's the reason why TypeScript's more used because people now that there's AI, they're building their own applications. And then because you want to have something that has a nice ecosystem with front-end, then you go for TypeScript, but not necessarily the types, right? The types is a bit of a, I mean, then maybe the types come in when you say JavaScript versus TypeScript.
Bart (52:47)
Yeah, I think it's very much a combination of things, yeah. Because the types do give much more feedback to the LLM, right? It improves the performance.
Murilo (52:52)
Yeah.
Yeah, but I guess for me, I just felt that the comparison between TypeScript and Python, which is what they are comparing a bit, is a bit misleading, because there is a bigger context.
Bart (53:03)
I think that is something that a Python
developer would say.
Murilo (53:06)
⁓ But I just think ⁓ there's a broader context to that. I think the ecosystem maybe plays a bigger role than the fact that one is typed and the other one is not. Even though Python can be strict about types if you set up some extra scaffolding. That was maybe my reaction to this. But I do agree with what you said. I do think that if you have a simple web app,
Bart (53:21)
True.
Murilo (53:32)
right, and you don't need a backend, maybe you can just call things from there. Just use TypeScript. If you're already learning languages, you're already doing stuff in TypeScript, you have your Cursor rules or a CLAUDE.md file, whatever, set up for TypeScript. You already have your preferred stack. Maybe you can even use it on the backend as well. I do think all these things make sense, right. And I do think that if you compare TypeScript with JavaScript, then it's a better comparison saying TypeScript wins, because the type system adds some guarantees that JavaScript doesn't, right. I think...
Bart (53:59)
Yeah.
Murilo (54:01)
or like checks or feedback, right, to the model. I think all these things make a lot of sense, for sure. ⁓ One thing that I thought was interesting is that they mention the surprise winners of the AI era: the duct tape languages. And I think they're talking really about bash, right? So bash is the language that runs in your terminal, kinda, right? So you can actually have scripts that you can execute. Exactly. ⁓
Bart (54:23)
shell script basically.
Murilo (54:27)
And a lot of people call these the duct tape languages because, I mean, again, maybe I'm butchering this, so feel free to jump in and correct me, but it's a lot of the stuff you run in the terminal. So you can have a bash script that basically executes shell commands, and it becomes a bit the duct tape, because that's the common platform, let's say, for all the different languages, the system stuff. Most of the stuff you can do in your terminal. So there's a lot of
bash scripts that developers have come to use quite a bit. ⁓ And, this is what they are saying here, right, people don't really like writing bash, ⁓ but it has become a very easy way for LLMs to interact with different things in your system, right? Because it goes at your terminal level, your shell level. ⁓
And now, because the LLMs are taking care of this, even though people don't enjoy it, people are using it more.
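The duct-tape role being described, running a command, capturing its output, handing it onward, looks roughly like this when expressed in Python instead of bash (the command here is just an example):

```python
import subprocess

# The kind of glue a quick bash one-liner usually does:
# run a shell command, capture its output, and pass it onward.
result = subprocess.run(
    ["echo", "hello from the shell"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # hello from the shell
```

In bash the same thing is a single line, which is exactly why it stays the glue layer, even if nobody enjoys writing it.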
Bart (55:23)
Yeah, I do the same thing. To me, shell scripts, I used them before the whole AI era too, I also wrote shell scripts, but way less, because it's something that you do every now and then. And every time you do it, you need to look up the syntax, et cetera, et cetera. ⁓ And it's not that intuitive. It takes more effort, and with an LLM it's like...
Murilo (55:29)
Yeah.
It never feels like it's the thing you want to do.
Bart (55:51)
You blink one time and it's there, right? ⁓ So it's become very easy to do these things with LLMs.
Murilo (55:55)
Yeah.
I feel like with Bash as well, it's never something that people want to do. It's almost like you have to do it, but it's not... When you sit on a project and you're like, okay, I want to do this. I feel like it's very rare when people say, I want to do this in Bash. Like I'm excited about setting up this Bash script to automate this. think usually it's more like you want to do this, okay, now I need to... Let's write a quick Bash script to do this and this and this because it's easier. And I think also what I noticed from my experience with LLMs is that...
Bart (56:20)
Yeah, exactly.
Murilo (56:26)
LLMs are also very eager to write bash scripts. ⁓ Yeah, a few times it actually wrote bash scripts for me, even though I didn't say write bash or anything like that. It just said, yeah, let me write a quick script for you to test this, and then it would just do it. It was a good use case for bash, but still it was something that I noticed. ⁓ 206%
Bart (56:28)
Hmm, interesting.
Okay, interesting.
Murilo (56:53)
year over year growth, quite a bit. Yeah, it is a lot. Yeah, maybe not much else. Maybe the last thing, which is maybe a bit controversial, I don't know: the next horizon, where language stops being a constraint. ⁓ He says here, WebAssembly is changing the rules. ⁓
Bart (56:55)
slaughter.
Murilo (57:18)
If any language can target Wasm and run everywhere, that removes a key consideration when picking your stack. And I think he says picking your stack because he also said that people prefer programming languages with good AI support. I've also heard that argument for frameworks, for example people using pandas because LLMs are good at pandas and not as good at Polars. But then he's saying, okay, if you combine AI with WebAssembly, so...
Maybe what is WebAssembly BART for people that don't know what it is?
Bart (57:49)
Good question, really. How can we summarize that?
Okay, I'll try something else.
Murilo (57:53)
I can also try but I feel like your explanation would probably be better than mine.
Bart (57:57)
Yeah, I'm not sure. ⁓ So WebAssembly itself is basically ⁓ a sort of instruction format ⁓ for a virtual machine, and it is designed to be very portable. You can run it in your browser, you can run it anywhere. ⁓ Compiling to Wasm, using Wasm as a target, basically means that from whatever you write, you
can generate a set of binary instructions for Wasm, so that it basically just runs whatever you built, even though it's not necessarily run by the original interpreter or execution runtime of your code. So you can write ⁓ Rust and compile to WebAssembly as a target, or you can write Python and run it in a WebAssembly context. I'm not sure if that ⁓
completely explains it, and there are some complexities as well: there are actually compiled languages, and there are interpreted languages for which you have the interpreter in Wasm and you're still executing the script. ⁓
Murilo (59:04)
And Python is compiled, right? It compiles to bytecode and the bytecode gets interpreted, right? So if you write Python and compile to WebAssembly, maybe that's how it works, I'm not too sure. Maybe the parallel to draw between assembly and WebAssembly is that assembly is like the machine code, right? It's very, very low level, like the ones and zeros. And WebAssembly, I think a lot of it runs in your browser, kinda, right? And maybe it's more lightweight.
One thing that I was surprised to hear is that fonts can have a WebAssembly thingy, so you can actually execute code. That's why there was llama.ttf: it's a font that actually has a model built in. So it was a big file that loaded the weights, and it had a WebAssembly thing to run the inference there. ⁓ What he's saying here, I guess, is that WebAssembly is becoming more popular. I actually heard podcasts of people saying that
WebAssembly, and again, this is maybe a bit over my head, but you have containers, right? Which are a bit there to make sure your code runs in different environments. And WebAssembly also plays a role there, right? You can have a WebAssembly target and then you can run on many different targets, because you can run in the browser, you can run on different machines and all these different things. And I guess what he's saying here is that,
Bart (1:00:09)
You
Murilo (1:00:20)
Right now, today, we're still looking at languages because we're looking at support and we're looking at the ecosystem. But more and more as WebAssembly becomes a thing, ⁓ this is going to be less relevant.
Bart (1:00:33)
Yeah, just to give an example. So the moment that you can compile to ⁓ WebAssembly, you can basically run anywhere, right? Because WebAssembly provides its own virtual machine. If you do not have WebAssembly and you say, for me it's very important to be able to use my, let's say, CLI on both Windows, ⁓ Linux, and ⁓ OS X, ⁓ then probably you're gonna say, maybe I need to use something like Go because it's very easy to compile,
and not go the Python route, for example. And WebAssembly takes this whole consideration away.
Murilo (1:01:09)
Yeah, yeah, indeed. So yeah, I mean, here he says as well: it runs on the edge, in clouds, in local sandboxes. You have a Wasm target and then you can just run everywhere. ⁓ Do you see a future where this is the case? Like AI plus WebAssembly, then you just tell it to write something and you don't even care what's in between. It's almost like a
natural language programming.
Bart (1:01:37)
I don't think it's people's main concern at the moment. Portability. I think for specific use cases, portability is very important. I think for the 80 % it's not.
Murilo (1:01:45)
True, true. I had heard about WebAssembly before, even in Python. I've been curious about it. I've seen projects. I saw one at FOSDEM actually, last year, or this year rather. And it was a project that basically had Dockerfiles, but it would compile to WebAssembly. So it was still the same syntax as a Dockerfile, right? And I was actually very curious about it.
I even thought, maybe I should do a little project with this to learn a bit more about WebAssembly. But I actually never did. There was actually no need, right, which I think speaks to what you're saying. Most of the time, for most developers, you don't need to worry about this, right. But I still think it's cool. It's a cool technology, something cool that exists, and I think it can have an impact in the future. But I don't know how close in the future or how far away.
And I think that's it for the main topics, let's say, but we do have ⁓ tidbits. How do you want to go about them?
Bart (1:02:42)
⁓ Maybe we can just... I'll go over the titles, we will do them one by one, and we will add the articles in the newsletter, which you can sign up for at newsletter.monkeypatching.io. So we have Yann LeCun, Meta's chief AI scientist, who is planning to leave Meta ⁓ to build his own startup. ⁓ What else, Murilo?
Murilo (1:02:49)
Sounds good.
Yes.
Maybe just quickly on this, I thought it could be interesting: I don't know if it was Yann LeCun that said that, well, he has been a bit critical about LLMs, that's one. And the second thing is that Meta has been changing a bit how they're approaching research, right, with open sourcing, all these things. I think those two things make it, I don't know,
an interesting piece of news, right? And Yann LeCun was very, very big in AI and machine learning even before LLMs, right? One of the godfathers of AI. Up next: Morgan Freeman says his lawyers are, quote unquote, very busy cracking down on unauthorized use of his voice. Well, more voice cloning, I guess, the legal stuff. Morgan Freeman, very iconic.
Bart (1:03:56)
This is an uphill battle that he's fighting. His voice is so iconic, like everybody, everybody wants to use it just for novelty's sake, right? And probably everybody reading this headline of the article you're showing can also hear it in their mind, in Morgan Freeman's voice.
Murilo (1:04:00)
There.
Yeah, exactly. You know, everyone can.
Yeah, everyone wants to have their voicemail in the Morgan Freeman voice, right? Yeah, I mean, maybe he has a lot of money, so he just kind of keeps some lawyers busy to try to do something, right? But yeah. What's next?
Bart (1:04:30)
ClickHouse welcomes LibreChat: introducing the open source agentic data stack. So they acquired LibreChat, which is like your open source version of ChatGPT. And they're trying to build a full agentic data stack.
Murilo (1:04:45)
So LibreChat is like the open source what?
Bart (1:04:47)
An open source ChatGPT clone, to use the derogatory term.
Murilo (1:04:56)
Okay, do they have plans with this, do you know?
Bart (1:04:59)
Well, they really want to incorporate it into what I think they call their agentic data stack, so that there really is a chat interface to their ClickHouse instance.
Murilo (1:05:09)
to ClickHouse. OK. And ClickHouse is like a data platform, I guess. I'm not sure how to.
Bart (1:05:16)
Yes, it provides basically a fast way to store a lot of data and to query it. A very fast analytical database that has grown a lot in the last three years.
Murilo (1:05:32)
And it is open source.
I heard really nice things about ClickHouse. Are they open source or no?
Bart (1:05:37)
Yeah, yeah, people are very, very positive about this, yeah.
They are open source. They definitely started open source. I'm not sure if all functionality is still open source, because they do have a managed platform and an enterprise license, et cetera.
Murilo (1:05:52)
Feels like they're cooking something, huh? Could be interesting, definitely one to keep an eye on. Up next we have: introducing iPhone Pocket, a beautiful way to wear and carry iPhone.
This is from Apple, no? It's an Apple product.
Bart (1:06:09)
It's an Apple product, together with a designer. Issey Miyake?
Murilo (1:06:12)
Okay. And it's ⁓
Bart (1:06:14)
It's basically a sock. A sock with a string on it that you can put your iPhone in. You could put any type of phone in, really. Or your keys. And it costs 230 USD.
Murilo (1:06:15)
Yeah, it's basically...
Yeah.
Bart (1:06:29)
So are you going for a sock, Murilo?
Murilo (1:06:29)
How many of them?
This sock? Not sure about that. Yeah, that's true. But I feel like I could probably make something myself, right? I just take, you know, those football socks, and then you just cut, cut, cut, then you sew, and then boom, that's it. And I'll sell it for, I don't know, 70 euros. Way cheaper, way cheaper. Exactly. Gambiarra version.
Bart (1:06:33)
That's okay. You can put your phone in it,
An old sock. Mmm, yeah.
Okay.
Yeah, the Brazilian version.
Murilo (1:06:57)
How many of those have you bought already? How many colors? Five. Yeah, I mean, you get them now while they're cheaper, otherwise... And actually, there are only a few places you can buy it, right? The promo pictures, I think they're so funny. They have a whole person, and then it's just the spotlight on the... Yeah, exactly. It's just so... I don't know, so strange. And you can attach it to your bag as well. I mean...
Bart (1:06:59)
Like five, huh? Exactly, exactly.
sock hanging off the arm.
It's really multifunctional.
Murilo (1:07:23)
So many functionalities,
Bart (1:07:24)
Wow.
Murilo (1:07:25)
it's crazy. And you can only buy it in a few places, I guess: Hong Kong, Tokyo, Shanghai, Paris, Seoul, Singapore, Milan, London, New York, and Taipei.
Bart (1:07:38)
I'll take a ticket to Milan and go and get the pocket.
Murilo (1:07:41)
I guess.
Not Paris? It is closer. I mean...
Bart (1:07:46)
I think for something
like this haute couture you need to go to Milan.
Murilo (1:07:49)
You need to go to Milan, exactly. And then you're gonna... yeah. I wonder about the board, you know. They're discussing like, we need to release some things, maybe we should do an iPhone pocket, an iPhone sock, you know. And then it's like, yeah, but we should also get, like, a designer.
It's also, I mean, okay, I don't know fashion, huh? But to me it's a bit weird that there's a whole designer thing, but it's just a one-color sock.
Like, there's no...
There's no design, it's just the color. And maybe the whole thing is the design, right, but it's like...
Bart (1:08:20)
The whole thing is the design, Murilo.
Murilo (1:08:21)
I don't know, it's a bit bland still, you know?
Bart (1:08:24)
It's eight different colors you can get it in. That ain't nothing, right?
Murilo (1:08:29)
That ain't nothing, that's true, that's true. I'm just hating for no reason. What is the last tidbit that we have Bart?
Bart (1:08:35)
Yes, it's a bit of an update on the Nexperia case. We discussed this, I want to say a month ago or something, and people have been asking us for an update. So what is this?
What is Nexperia? Nexperia is basically a company that is China-owned but present in the Netherlands. It's a subsidiary of a larger Chinese corporation. But roughly a month ago, the Dutch minister invoked the Goods Availability Act, citing serious governance shortcomings and risk to chip availability.
And what is called the Enterprise Chamber in the Netherlands, they suspended the current CEO and imposed interim governance measures. A lot of discussion came from that. A big Chinese response, where they basically said that China was not going to export these chips anymore, because basically these Nexperia products were shipped to China to be finalized. And they
kind of threatened, and also for a very short term applied, a ban on exports, which would basically put the whole European car manufacturing sector at risk. And after some US-China talks, the export ban was basically put on hold, but the Goods Availability Act is still in force on Nexperia.
Murilo (1:10:00)
US China.
It was the US-China talk, but like the...
Bart (1:10:06)
Yes, exactly. Because the US had played some role in this, although it's not clear exactly to what extent. The US basically said that the parent company of Nexperia, which is called Wingtech, was added to their blacklist, the US entity list, which they do not buy from, basically. And then they said that
there's also an affiliates 50% rule, where sanctions basically extend to affiliates, and Nexperia is an affiliate of that. Meaning that everybody globally was trying to understand: what is the impact if we are buying from an affiliate or subsidiary of a company blacklisted by the US? And there is some rumor that this was also part of the rationale of the
Murilo (1:10:40)
Okay.
Bart (1:10:58)
Dutch minister, but they're not really transparent on that.
Murilo (1:11:02)
I see. But it's more
like... it's not like the US influenced the Netherlands to take control of this. It's more like, maybe the minister or someone in the government saw this and thought, we should look into it. And then they invoked this law for the first time, something like that. And then you mentioned...
Bart (1:11:06)
Not directly, that's what I understand, but maybe indirectly through this.
Yeah.
And the Dutch government basically
says that the Goods Availability Act will stay until supply stabilizes again. Which then to me raises the question: why was it there in the beginning? Because the supply is destabilized because this was brought into practice. So yeah, it's still not very transparent to me why the Dutch government did this.
Murilo (1:11:49)
Yeah,
that's what I was gonna s-
Bart (1:11:50)
it's very
vague. Like, there are governance shortcomings, there is a fear of not having chips available in the Netherlands or in Europe, but it's all very vague, and it's not really well argued, or at least not publicly.
Murilo (1:12:04)
Yeah, that's what I was going to ask, because indeed it doesn't bring answers.
Bart (1:12:07)
Yeah, there is no...
it doesn't bring answers, no. There is something that happened recently, I want to say last week, that brought about this same feeling, with people reacting like: why should we not have the same act come into play now? It's that Kyndryl, which is a large US
information technology firm, they provide infrastructure, acquired Solvinity. Solvinity is a large Dutch infrastructure firm that does a lot of service and basically cloud provisioning for major public sector contractors, like municipal, justice, and police workloads they have. So it's a big player as a partner for the
Dutch government. And you had the same reactions here from the public, like: this brings very much sovereignty questions into play. Why does the Dutch government now not take the same action as what they did with Nexperia? Because you could argue that the work Solvinity does is very strategic to the continued functioning of the Dutch government, because they power a lot of the infrastructure that is needed.
But it's not being done. Also, the argument here is different, in the sense that it's not the act that they used for Nexperia; the Goods Availability Act is very specific on availability of goods. While Solvinity does not necessarily deal in tangible goods; it provides infrastructure services.
And while it does raise a lot of questions on sovereignty, apparently this same act could not be applied here for that reason.
Murilo (1:14:01)
But they did have an answer why they couldn't apply the same act because of
Bart (1:14:05)
Yeah, yeah, just because of the
circumstances. I think the question of why this was allowed to go through is still open.
Murilo (1:14:12)
Yeah, I see. And Kyndryl, you said, is an American company. I feel like the conspiracy theorist in me is wondering if it's because one is Chinese and the other is American, right? I don't want to speculate or add more gasoline to the fire. But...
Bart (1:14:15)
Yeah.
Let's see.
Murilo (1:14:31)
Here's a question.
Bart (1:14:31)
Well, I think
the thing with the Nexperia case is that it's not very transparent what the actual arguments are, right? Like, if the arguments were: we see less and less of these chips that are being produced by Nexperia in the Netherlands actually being distributed in Europe, and instead moving to China, I mean, that is a very concrete thing.
That's probably not what you're gonna see with Solvinity, because Solvinity lives in the Netherlands and provides services to the Netherlands. They have every incentive to provide more services, because they will become more profitable. But the problem with the Nexperia case is that there's not really any clear, transparent argumentation on this.
Murilo (1:14:51)
Yeah.
Yeah.
Which is sketchy, right? If people cannot articulate clearly why, what are you holding back? There's a reason, right? Either you're not doing something 100% correct, quote unquote, or maybe there is some other reason. But I feel like that's where the discomfort grows, right? It makes you feel like there's a...
Bart (1:15:27)
Well, it probably
is also... it's still relatively new. It's the first time that they're applying this act, and it's also quite recent; it was a month ago. I can imagine this still has to go through the whole legal machinery before they're allowed to talk about these arguments transparently.
Murilo (1:15:45)
That is true, and I hope that is the case. Let's just say I hope the world is right. Let's assume. Let's assume. All right. Thanks, by the way, to whoever reacted and asked for updates. If there are any topics you would like us to cover, feel free to reach out to us on YouTube, on TikTok. We are on Instagram as well now, right Bart? Since a week.
Bart (1:15:52)
Yeah, let's assume that.
since a week.
Murilo (1:16:10)
Also feel free to reach out via email or anything. So if there's anything you'd like us to cover or give an update on, please do. We went a bit faster through the last topics, the tidbits, but we will send them in the newsletter as well. So if you haven't signed up: newsletter.monkeypatching.io. And yeah, I think that's it for today, Bart. Anything else? Any final words?
Bart (1:16:34)
That's it. Thank you all for listening.
Murilo (1:16:38)
Thanks everyone. Ciao.
Bart (1:16:41)
Ciao!