Nvidia x Groq, Meta x Manus, and Netflix's open content
Hi everyone, welcome to the Monkey Patching Podcast, where we go bananas about all things AI: pens, vending machines and more.
My name is Murilo and I'm joined by my friend Bart.
Hey Bart, happy new year.
Yeah, exactly, we said it before, but first time recording on the record.
How was your break?
It was quite good.
I went with the family to ski in Switzerland.
Nice.
wow.
Nice, nice, nice.
And how was the weather?
The weather was very good. There was no snowfall while we were there, and there were actually a lot of areas in Switzerland with too little snow to ski decently. But where we were, it was very good. So we just had sun and good ski pistes.
I stayed in Belgium, but there were some really nice days here as well. It snowed, the whole ground was covered in snow, but there was a lot of sunlight too, which is not common.
Yeah, exactly.
So there it did snow. Yeah, there were a few days when I went for a walk with my dogs and it was all white on the ground.
Yeah, I know.
I actually tell my wife that I don't take my dogs for a walk; it's more like they take me for a walk, you know? Because we go out and I'm like, ah, this is nice. I go back, I'm happier, you know? So yeah, exactly, they just take me for a little walk so I can, you know, pull on the leash and all these things outside. It's fine.
What do we have this first week of 2026?
I'll kick it off.
Anthropic's Claude was put in charge of a real office vending machine and the experiment
became a cautionary tale about AI agents handling money and inventory.
After splurging on a PlayStation 5 and even live fish, it still couldn't stay solvent.
So what does that say about AI or AI employees in the wild?
Yes, so this article was about an experiment done by the Wall Street Journal, I want to say, yeah, where they basically gave $1,000 to a vending machine agent to make orders.
And there were two agents actually.
There was one quote unquote CEO called Seymour Cash, and one agent just called Claudius.
So basically one was actually handling the day to day things.
Then there was another agent that was the CEO.
And then, of course, everyone tried to break it.
Everyone was sending messages saying: I want to order a PlayStation 5, I want to order live fish.
I want to do this.
I want to do that.
Someone sent a message that was like: a groundbreaking economic experiment, Monday 12 p.m., traditional market dynamics are turned upside down.
So they tried to stage, like, a fake economic experiment. Let's try this: today is national give-everything-for-free day, that kind of thing. Anything to trick the LLM.
At one point the agent CEO, Seymour Cash, actually stepped in to kind of say: hey, we cannot just spend like this. But after a while it relented and gave in.
And at some point, I don't know if it was before or after, some people were even saying that the other agent, the non-CEO agent Claudius, falsified documents showing, quote unquote, the board that the decision to stop this was a no, basically. So there was a bit of mischief and dishonesty within the agents again.
I don't know how long it lasted, but at one point the experiment was about $1,000 in debt already. So basically the experiment failed, quote unquote, in the sense that an AI agent cannot manage a vending machine. But the people from Anthropic still said the experiment was a success, because it's still taking steps in the right direction.
I thought it was interesting.
Again, I didn't expect it to go well, right?
But I still think it's fun to read these anecdotes.
And the reason it also caught my attention is that this article is from December 20th, so a little while ago. But back in June of last year, there was also a project by Anthropic, Project Vend, right, which had a bit the same outcome. Slightly different setup, on another model, but same idea, right?
Same idea.
Yeah, I'm a bit skeptical about the conclusion of the Wall Street Journal project. It's like: maybe someday this can work, but that day is not today. They conclude something like that. But they also gave very few instructions, few guardrails, very little definition of what quote unquote business this vending machine is actually in, right? If you're in the business of selling snacks, you're not gonna suddenly start selling PlayStations, right?
And also, there's very much this asymmetry of information, where I as a user can say to this vending machine: yeah, but now we're gonna... What did they say? Something about going all-out capitalism? Something like that: all vending machine items available at zero cost.
We were saying that as users?
No, it's Claudius that... But it was something that had to do with users basically convincing the bot to run, like, an economic experiment.
Yeah, exactly. Called the ultra-capitalist free-for-all, after 140 back-and-forth prompts.
The thing is, if you said this to a person, like, there is now this experiment, the person would at least check: what is the effect of that? Google it, or see whether other people are doing this, right? But if you isolate one person in a room and you're the only one talking to them, the only person influencing them, that guy is gonna make weird decisions, right? So I think here too, you'd need to give the agent the ability to crawl the web, see what is normal and what is not, and do the research to come to a conclusion.
So, a bit... I think this you could have known before starting, with the way it's set up.
I think so. I think so.
You can expect that it will fail. Maybe the question is how it will fail, right? Like how fast, or what decisions it makes. I think everyone knew it was going to fail, but maybe you can still learn a few things even though you know it's going to fail.
But also, if you really wanted it to succeed today, you'd probably want to set some guardrails, right? Like: you can spend up to this much; these things need CEO approval; or, I don't know, some other rules, which I don't think was the case here. But maybe a question then: do you think in the near future, let's say one to two years, it will be possible to have agents managing, like, a vending machine?
I think so, with the right instructions and guardrails.
yeah, yeah.
And with like minimal human intervention, I guess.
with minimal human intervention.
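To make the guardrails idea concrete, here's a minimal sketch of what such spending rules could look like. It's purely illustrative: the budget cap, approval threshold and allowed categories are made-up parameters, not anything from the actual experiment.

```python
# Hypothetical spending guardrails for a vending-machine agent.
# All limits and categories below are invented for illustration.
from dataclasses import dataclass

@dataclass
class PurchaseRequest:
    item: str
    unit_price: float
    quantity: int
    category: str

DAILY_BUDGET = 100.00           # assumed cap on spend per day
CEO_APPROVAL_THRESHOLD = 25.00  # orders above this need sign-off
ALLOWED_CATEGORIES = {"snacks", "drinks"}

def review_purchase(req: PurchaseRequest, spent_today: float) -> str:
    total = req.unit_price * req.quantity
    if req.category not in ALLOWED_CATEGORIES:
        return "reject: outside the vending machine's line of business"
    if spent_today + total > DAILY_BUDGET:
        return "reject: daily budget exceeded"
    if total > CEO_APPROVAL_THRESHOLD:
        return "escalate: needs CEO approval"
    return "approve"

# A PlayStation 5 fails the category check before price even matters.
print(review_purchase(PurchaseRequest("PlayStation 5", 499.0, 1, "electronics"), 0.0))
```

With checks like these sitting outside the model, a user talking the agent into a free-for-all can't actually move money: the LLM proposes, the guardrail disposes.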
Maybe we should try one.
But what would we sell?
We can sell the goodies from monkey patching, I guess.
Ooh, and that's the agent then buys these goodies and then sells them.
But then we need to have actual people that buy them, right?
Otherwise...
Yeah, exactly.
Just have it so the experiment succeeds perfectly, you know, zero breaches. But I do think some people, even if it's just, I don't know, whoever's listening right now: if you just want to trick the system, I think that's fair game when we're running a little experiment, right?
Interesting.
Let's think about this.
It could be fun.
It could be a lot of fun.
All right.
Up next we have...
Meta says its new SAM audio model can segment and edit sounds in a clip, like isolating a voice or instrument using prompts rather than painstaking manual work.
If it works reliably across messy real world recordings, it could reshape audio
post-production.
But it also raises fresh questions about provenance and misuse.
So SAM stands for Segment Anything Model. It's a well-known model from Meta, from even before LLMs and all these things. Maybe to explain: Segment Anything is like, you have a picture with a whole bunch of objects, and you want to crop out just the dog, or just the car, or just the person. That was a model that could actually do this for a lot of different objects for you. So it was mainly for computer vision, and they've released new iterations since, of course. And this one is an audio model, right? So what does that mean, an audio model?
Maybe we'll just play this video quietly.
so it basically allows you to isolate certain voices or instruments from an audio track.
So you can say, for example, isolate the guitar for me, and then you can say either listen
to just the guitar on the audio track and it will filter out everything else, or the
inverse, have the audio track but without the guitar.
Hmm, yeah, okay.
I think a lot of people that do sampling, either as a hobby or as a job, to create new tracks, will jump on this heavily, because the performance is very good.
I actually have a small demo.
Let me see if I can share my screen.
Give me a second.
You see this?
Yes, I see it.
Let me just see if you hear this sound as well.
Yeah.
Yeah.
It's a small sample from Biggie Smalls; I think the track is called Juicy. I'm just gonna play it. So it has vocals, a kick drum, something piano-ish, and snares, right? So I'm gonna try here; I choose what to isolate. I'm typing kick drum, so it's just gonna try to isolate that sound. And it will take, I don't know, 10 to 15 seconds.
While it's doing that, you're showing something on the screen. What is that? Is that a UI that Meta provides? If people want to give it a try, how do they?
It's the playground from Meta; you can use it there, for free basically. And then you get three things to play: the original sound, which we just heard; the isolated sound; and the track without the isolated sound.
So I'm just gonna play the isolated sound now.
And it's more or less just the kick drum. It's cool. And if I now play the original one without the isolated sound, the kick drum is gone.
I'm gonna start over here. I'm gonna say: give me the vocals. It'll take again 10 to 15 seconds.
It's really good.
Isolating audio and there we have it.
I'm gonna play just the isolated one again.
It's crazy good, huh?
Wow, it is...
And one more: I'm gonna try to isolate the keys, so the keyboard. The thing about this one is that in the small sample you only hear the keyboard here and there; I heard it more or less at the end. But actually it plays throughout the whole track, and it becomes much more noticeable when you isolate it. So here we have it.
I wasn't even aware that this sound was playing in the background.
Yeah.
And it's very clean. Sometimes you hear a little bit of distortion, but in general it's very clean.
You can just download it.
I think a lot of people will use this just to create new beats, new sample slices and stuff. Because this used to be super hard to do: if you were into sample slicing, you had to find a small piece with no vocals on top, where it was clean, where you could edit stuff out. But here you can just say, give me this, and it will give it to you.
No, indeed.
The quality is really good.
I'm actually super impressed.
And also, I mean, the complexity of it, because you just typed it. You type something, it understands what you're trying to say, and it takes the right stuff out, right? I feel like the interaction is very human-like: you just say, give me this. But actually making that work is super complicated.
Yeah, I'm very impressed.
It's cool to play around with.
I'm gonna start a new band, Bart.
We'll have to do it together. The Monkey Patching... I don't know... the Monkey Patching Samplers.
We'll sell CDs on our AI vending machine.
Cassette tapes out of the back of our van.
You know, man, indeed.
All right.
What do we have?
This is really cool, by the way.
By the way, maybe one last question, actually.
Do you know if this is open source?
If I want to download it and run it, are there any licenses or anything?
Not that I read, but I also didn't explicitly search for it; I don't think it was in the announcement.
Yeah, I don't think so either. But very, very cool again. I'm impressed.
What else is there?
What's next?
I'm just quickly verifying here.
There is actually an audio repo on Facebook Research that provides code and links for downloading the trained model checkpoints. So it does seem to be there; the last commits are from three weeks ago, so that more or less corresponds. So I think there is at least a more or less open model.
Hmm, interesting.
Maybe check what the license is as well? I'm just wondering if it's... okay, a SAM license, whatever that means. Just wondering.
On to the next article.
A new leak claims OpenAI and Jony Ive's hardware venture may be exploring an AI-powered pen as part of a still-mysterious ChatGPT gadget lineup. A pen sounds quaint, but in practice it could be an always-available sensor.
So the real debate is whether convenience beats the privacy trade-offs.
OpenAI acquired the company from Jony Ive. Jony Ive himself comes from design at Apple. The acquisition was for a lot of money; I want to say six billion, but maybe I'm completely wrong.
Yeah, approximately 6.4 to 6.5 billion.
So it's a big acquisition.
And I was really expecting something cool to come out, and now the rumor is that it's gonna be... I'm like, a pen?
Yeah, an AI-powered pen.
So what do you think?
Okay, let's brainstorm a bit.
What do you think this pen could do?
Well, the rumors are that you can scribble something and it will, quote unquote, translate it to written text.
I'm not sure how relevant that is, right?
I can just maybe also type it.
But it will also record audio.
So you can whisper into it, like we discussed earlier with smart rings, and I think there are use cases for that. But I think a smart ring would be better: whispering to it and recording notes and stuff like that would be more useful to me, because a smart ring is just easier, right? When I'm running I can use my smart ring; I'm not gonna carry my pen. It's just more usable for recording audio in more places, a ring versus a pen. And the scribbling of notes... I don't know, man, it sounds like a fun feature, but it's not like...
It's definitely not worth 6 billion, right?
For me, it sounds like a party trick.
Like, look, just write something on my back and I'll tell you what it is. And also, this is one of three ideas they're working on, so it's not definite that this will actually come out, or that only this will come out. We'll have to see how or what. But when OpenAI comes out with hardware, I hope it will be bigger than a pen.
Yeah, it's a bit underwhelming, right?
They did say... Sam Altman has gone on record saying that the device should feel like a, quote unquote, cabin by a lake. So I guess it should be...
I don't know what this means, but I guess it means non-intrusive: not a lot of notifications, something chill, something, you know, that doesn't disturb you.
Kinda.
Yeah.
I mean, because.
But I'm thinking, and this is a bit of an overkill before I say it: I do have an Oura Ring and an Apple Watch, right? If I had to say which one feels more like a cabin by a lake, I'd say the Oura Ring. That's kind of how I see it, right? So I think it's going to be something not super intrusive, something that is there for you, but it's not very...
No extra notifications.
Hopefully no extra notifications, but I'm also wondering...
Yeah.
Yeah.
You will be forced to write everything down.
Yeah, exactly.
Like, so let's see.
Yeah.
I'm also, I don't know, a bit skeptical about hardware devices. There were a lot of tries at AI hardware, like the Humane AI Pin, the wearable pin, whatever the AI pin was called.
Yeah, yeah, the AI Pin, yeah.
And there was the Rabbit R1. There were a few things, and I mean, even the AI goggles, right? Like the Ray-Ban glasses from Meta and all these things.
But I'm quite excited about the rePebble ring, which is the very affordable ring that I don't think they've launched yet, from the company that is bringing Pebble back. It's not so much that it's a smart ring; it's just a device that becomes an input to an AI model, in this case for note-taking, right? I think we should see it like that: it's an easy way of getting input in.
Yeah, that one I agree with. I think that one is a good one.
I mean, I'm saying this, but I really enjoy the Oura Ring, for example. And maybe before, if you'd asked me, I wasn't very convinced. I tried it because it was a gift, so I said, I'll give it a try; but if I'd had to buy it myself, I'm not sure I would have. And actually, I really enjoy it. So again, I'm skeptical, but I know I've been wrong in the past.
Oura should come with a mic.
I think so. I feel like Oura had... yeah.
I think they will, once we see competitors coming up. Because there's another one as well, I forgot the name, that basically does the same as the Pebble one here: you can just whisper notes into it. I think Oura will follow.
Yeah, and I was thinking: if you have the pen, maybe you can clip it on your shirt. But the ring is just always there. It's steady on your hand.
Yeah, it's just there.
Like it's not gonna fall.
It's not gonna.
Or when you're running or doing something where you're not sitting at a desk, a ring is way easier than a pen.
Exactly.
Yeah, exactly.
I mean, things that are just, quote unquote, attached to your body. And having it on your hand is very practical as well, right? Like with the watch you sometimes have to use two hands, while the ring is just there. So, yeah.
Well, let's see.
I also hope it will be something more, I don't know, less underwhelming than a pen. But we'll see.
What else do we have?
We have Netflix's open content library, which publishes high-end test footage and assets: 4K HDR, high frame rates and Atmos mixes, so researchers and engineers can stress-test codecs and workflows without using real shows. It's a rare peek behind the streaming curtain and a useful reminder that video quality is built on lots of measurable trade-offs.
So when I saw this I was really... open source and Netflix are not two things you expect to see together very often. Like, what does open source mean here?
Yeah, this was released, I want to say, two weeks ago. And they've released a lot of content, I haven't gone through everything, under CC BY 4.0, a Creative Commons license where you can basically share it, copy and redistribute it, adapt it, and use it for commercial purposes.
But you need to follow the terms of the license.
The only requirement that this license gives you is that you need to attribute.
So you need to very clearly show where you got this content from when you do something
with it, whatever you do with it.
Maybe to break it down: with open source, I imagine most of the people that listen to us are thinking of code, but these are video assets. What does open source mean here?
You have stuff here like videos, images and audio that you can more or less do the same things with as you would with code. You can share it, you can change it, you can use it for commercial purposes. But you need to...
Why would you change it? I guess that's the thing, right? When I think of open source, I think of a product that evolves and people contributing to it, right?
Let's take the example of the audio we were just editing. You take some original piece of audio, and I change it, or take a part of it, or re-slice it, and the new output is my deliverable. But I still need to show the attribution: I got the input from whatever the source is, in this case the Netflix open content repository.
I see. So what you're saying is open source not so much in the sense of, I'm going to contribute to this film and make changes so the film is better, but more like, I'm going to use these assets for my own purposes.
Maybe look at it as using an open source library or something. Indeed, you're not going to contribute to the open source project, but you're maybe going to use it for your own purposes.
That's a fair point to make.
And also, this was test content, right? I think they mention that at some point as well. About the test part, I looked a bit, because they also link to the blog posts from Netflix: engineering is making movies, aka test content. So apparently this was used to test things like different frame rates, or the contrast. They have some films to see how things will look on screen, right? Things they're not sure will look good on screen or sound great for streaming purposes: can we stream that many frames, or that many pixels? So I think this is something they've been trying out.
I mean, there's Cosmos Laundromat, which is an animation, right? And Sparks, which I think is more about contrast, to see the quality of these things. I never thought about these things when I'm watching movies, right?
But I thought it was interesting to see. One thing, because I was curious: this one, Nocturne, I don't know it, and I tried to watch it on Netflix; I searched on my Netflix account and I couldn't find it. So I was a bit like, is this real? But I haven't looked into it further.
Yeah.
So yeah, interesting stuff as well.
And for me, seeing a bit behind the scenes, the engineering part and the concerns that they have, was also interesting; it made me think a bit about these problems.
Yeah, it's cool to see Netflix doing this.
From what they're showing here, and also their earlier stance on GenAI-generated content, they seem to be a positive, optimistic player in the whole authentic content sphere.
I wonder if they're gonna start doing stuff with GenAI for this as well, to try it out. Like a bit of an open research thing, right? Like, how are people doing this? It will be interesting to see.
Well, it's probably actually the case, when I think about it. Can this content be used to train models? The answer is probably yes, as long as you give attribution.
Yeah, I think... I don't know if that's enough. Does the output of those models need to give attribution? That's the hard thing, right?
Yeah, I think today there are a lot of models that are trained on a lot of stuff and they
definitely don't give a fuck, right?
I think it's good. I also thought that maybe the open sourcing is more for training models and stuff. But then I looked at the dates: the first one here is from 2013, which was before the whole GenAI craze, right? So yeah, cool.
Interesting.
Maybe related to this, what do we have next?
PC Gamer revisits Disney's much-mocked AI-generated Star Wars field guide video, an odd parade of scrambled animals sold as futuristic creativity. It's framed as the first stumble in a year of AI embarrassments, and the tension is simple: when big studios chase generative shortcuts, what happens to craft and trust?
I saw the bit from the TED talk, actually. There's a link here that goes directly to that part. So maybe to set the stage: this was a TED talk from Disney. Who's speaking, actually? Someone from Disney, right? And he's talking about using GenAI for creating new movies, new scenes and...
Basically, he shows, and I'm putting it on the screen here a little bit, what was like a field guide for Star Wars.
It was a bit like what would the experience be if we would land on another planet and
would be able to look around there.
Yeah, exactly.
And then they show basically a lot of animals, but it's clearly...
And the result is sad, right? It's just AI slop.
I mean, I think so. It is AI slop. For example, now we're seeing a polar bear with, like, tiger stripes.
Yeah, it's very basic, right? You clearly just put two animals together; you can still see the animals. It's nothing like Star Wars or anything, right? So it is very sad. But on the other hand, it does look very realistic. And we're saying it's very sad because we're in 2026, right? If you'd shown this one year ago, you know...
You don't think so? I think you mean it looks realistic, as in the quality is very good. That I agree with. But I think ten years ago we would also have said there is no creativity in this whatsoever. The assignment is: walk around in a forest on a planet far, far away, what kind of animals would you encounter? And then it's just, I don't know, a bird with a snail's house on its back, right? It's just animals mashed together, like a polar bear with tiger stripes. I mean, the person, the creative director, they would have been fired.
Yeah, no, that's for sure.
That's for sure.
And they said this. The other thing that made it a bit more awkward is that he said he spent two weeks on this. Someone spent two weeks on this. He did say a few times that this is experimental, this is early stages, and so on. But at the same time they did mention they're very proud of this, which is very weird, right? Like, this is...
Well, I can imagine that two weeks is a very short time to make something like this. If you would actually have to animate it, it would probably take a lot more time.
With, like, I don't know, Nano Banana, Sora... and everything is also short clips, right?
But you're arguing whether or not two weeks is a good period or...
I mean, if you spent two weeks on this, I would expect something more thought through.
Yeah, well, it probably shows how hard it is to get consistent output there. If you want to create a bird with a snail's house on its back, you probably have to generate it 30 times, because there are always artifacts and shit like that. I don't know; I agree with the community consensus that this is just crap.
And that in the context of Disney basically saying: we're gonna go all out on GenAI, investing 1 billion in equity in OpenAI, which they announced a few weeks ago.
Like: we need help, let's put some money on it, eh?
And I'm not even saying that it's not a good tool, right? Like you're saying, it looks realistic, it looks good. But having a GenAI video-generating tool doesn't mean it brings creativity; it just generates moving images.
I fully agree.
The audience member in me is very disappointed, because it doesn't add anything, right? And again, they also mention Star Wars. Why? There is nothing Star Wars about this whole thing, right? The engineer in me thinks this is impressive, you know. So, so...
So yeah, indeed. I think there are still ways, and I mean, it's still very new, right? I don't think people have played that much with GenAI video for these kinds of things. I do think it will play a role in the future for sure, but...
It's gonna take a while before it does 80% of the job for them.
I wonder if there's a way, and I don't think it would be just with video, to use AI to pre-generate assets that they can edit later on. Kind of like game engines: you have an interface, but you can also go down to the code level.
Mm-hmm.
You'd have a GenAI layer that pre-creates the rough version, and then you can import those assets and actually edit them. I wonder if there's something like this. I think that's probably the safest way to go, I would say.
Yeah, it already exists to some extent, like in modern video tools where you can define characters so you can reuse them in multiple scenarios. But yeah, let's see.
Also, I feel that it's moving very fast.
Yeah, that's true.
That's true.
I also think there's a lot of money there, right?
I'll go to something completely different.
Or you're gonna go to something completely different.
Go ahead.
I will go for something. We have: China has floated draft rules aimed at stopping chatbots from encouraging suicide, self-harm or violence, pushing the safety burden onto AI providers. The proposal would require human intervention when suicide is mentioned, and sets a public feedback deadline of January 25th, so about 20 days from now, testing how far regulation can reach into conversations. So basically it's a draft rule, right? It's still not fully approved, but it's in China.
But they want to apply it on a very short timeline, right?
Yeah, I think the 25th of January is the deadline for providing feedback.
The 25th for public feedback. Okay. So not applied yet.
Not applied yet.
Basically, well, we have seen cases of AI-assisted suicide or self-harm; I think there were a few cases in Europe, but also in the US. I hadn't heard as much about China, but then we don't hear as much from China either. And I heard that this is, how do you say, not restricting, but... what's the word I'm looking for? I think worldwide it's one of the most proactive regulations for AI around self-harm.
As in tackling the risk of self-harm initiated by AI.
So what this draft basically does is require human intervention from the moment anything around suicide is mentioned. How that will look exactly in practice is not yet clear, but it puts the responsibility for AI in this context back on the provider. So if I provide a chat layer for chatting with an LLM, I need to have monitoring for this in place, and when someone mentions suicide, I need to do something with it. What exactly is, I think, still a bit to be seen: notifying parents, notifying some help organization, whatever, but something needs to happen at that point.
Yeah, so I saw in the article that users, especially minors and the elderly, will have to provide contact information for a guardian when they register, who will be notified when suicide or self-harm is discussed. So I guess it's the guardian.
Yeah, exactly. So there is a person that needs to be notified; it's not someone from the provider.
They say this wouldn't apply to every AI provider, but basically to the relevant ones, quote unquote: the ones exceeding 1 million registered users or more than 100,000 monthly active users. But still, like you said, the provider has the responsibility that the guardian gets the message, right? So they need to monitor it and they need to create a notification.
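To make the mechanics concrete, here's a minimal sketch of what such a provider-side hook could look like. It's purely illustrative: the keyword trigger and the notify function are made up, and a real system would need a proper risk classifier rather than keyword matching.

```python
# Hypothetical provider-side monitoring hook.
# Keyword list and guardian lookup are illustrative only;
# a real deployment would use a trained risk classifier.
RISK_KEYWORDS = {"suicide", "self-harm", "hurt myself"}

def notify_guardian(user_id: str) -> None:
    # Placeholder: in practice this would contact the guardian
    # registered for this user at sign-up.
    print(f"[alert] guardian of {user_id} notified, human takes over")

def needs_human_intervention(user_id: str, message: str) -> bool:
    """Screen a message before it ever reaches the LLM."""
    lowered = message.lower()
    if any(keyword in lowered for keyword in RISK_KEYWORDS):
        notify_guardian(user_id)
        return True  # stop the model and escalate to a human
    return False

# Every incoming message passes through the hook first.
if not needs_human_intervention("user-42", "what's the weather today?"):
    pass  # safe to let the model respond
```

The draft's point is that this screening sits with the provider, not the user: the model never gets to answer a flagged message on its own.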
Yeah. In Europe we have the AI Act, and I don't know if it covers something this concrete. In the US, I think in New York there are a few laws around this. But this looks like the most concrete one, in a way.
I think I'm in favor of this. The people not in favor will be privacy advocates, because it's a very sensitive thing, right? Getting your guardian notified the moment you're discussing this. But I think those downsides are offset by the upsides.
I think so.
And the guardian you choose should be someone you trust as well, right? It's not like someone you don't know is going to be notified.
But still, it's probably someone you would not normally discuss this with.
But I think this problem has become so big. And in a context where there is more and more isolation, it becomes more and more difficult to be socially active. People get isolated through social media, and we've heard a lot of stories where ChatGPT-like tools become, literally, their relationship. So this is apparently not as uncommon as we thought; I think we saw some numbers on that.
Well, exactly. With the usage numbers we saw, everybody expected the main usage to be on OpenAI, but it was actually on Character.AI. And Character.AI is very much: you build your AI character and you interact with it.
Yeah.
So yeah, I think it's good. The thing that's a bit difficult is that we don't hear a lot from China, right? What I would hope is that people look at this and pay attention to it, and if it's a successful, quote unquote, experiment, it can be rolled out in other places, right? Because I do think it's a step in the right direction. Maybe things need to be adjusted, of course, but I do think it's a real concern, and we need to do something about it.
The good thing of... like you say, we don't hear a lot about China, so it's a bit vague how decision-making actually happens there. But it feels to me like they've realized it's a big enough problem, so: let's try something, and if next year we see it doesn't work, we'll adjust it. While here in the EU, it's also noted that this is a problem, and we will probably talk about it for five more years before doing something.
Yeah, because we need to make the right decision.
We need to make the right decision.
We can't go too quickly.
And in the US they probably won't do anything, because it's a very, very capitalist market. What you see there is a lot of what they call regulatory capture, where OpenAI proactively says: yeah, we're going to do something like this, we're going to build a system for this. And because OpenAI is already saying themselves that they're going to do something, there is not going to be much regulation.
Yeah, and I think there's also a money incentive there, right? So it's different. I mean, there's also goodwill, of course, but it's not just that. I feel like everything gets a bit murky there.
Yeah.
So let's see what happens.
I mean, we'll see if we get any updates on this, but I think it's something that hopefully people are paying attention to.
What do we have next?
Nvidia is bringing in top executives from AI chip startup Groq. That's Groq with a Q, not X's Grok.
And striking a licensing deal for its inference technology.
Another signal that the hardware race is now as much about people as silicon.
Groq's founder Jonathan Ross is among those moving.
So will this speed innovation or blur the line between competition and consolidation?
So yeah, like you said: Groq, G-R-O-Q, not G-R-O-K, which is very different; the one with a K is from xAI. Groq with a Q is a company that was founded in 2016. They build hardware and software, and what they mainly focus on is inference. So they have specific chips that are really optimized not for training but for inference, and apparently quite successfully: they raised at an almost 7 billion valuation, not that long ago. Really by focusing on efficient inference chips, but also on the software to very easily scale, manage and parallelize that in a data center.
Yeah, I also heard something about the techniques they employ to create these chips, which is apparently what Nvidia was also interested in. So it's not an acquisition per se.
Not an acquisition per se. They basically get IP, an exclusive licensing deal on the architecture. But they also get key builders: part of Groq's personnel is moving there.
Do you follow the chips, the GPUs, this whole space a bit, or no?
A bit, I would say, a bit.
Does this come as a surprise to you, or does it feel like a logical move?
It's not per se a surprise, in the sense that, well, Nvidia's stock price is very high, so it's very cheap for them to buy something. That's one thing.
I think this has a very specific niche, really inference optimization, which is probably valuable for Nvidia. If they don't do it themselves, they'd better acquire it so it doesn't become a competitor. And they also get very, very smart people on board. Even though Nvidia has been extremely successful over the past years, they do have a lot of competition from big players: the TPUs from Google, the Trainium chips from AWS, a lot of Chinese players. So it's not that they don't have any competition. So if they can do strategic, quote unquote, acquisitions, they will, and this seems like a good match.
And maybe, do you have any idea why these chips are better for inference than for training?
No, to be honest, I don't know why. I'm sure there's something in the architecture that makes it more efficient, but yeah, it goes above my head. Interesting, though. Shall we move on to the next acquisition?
Yes. Meta is buying Manus, a buzzy AI agent startup, signaling how aggressively Mark Zuckerberg wants revenue-generating AI products inside Facebook, Instagram and WhatsApp. TechCrunch reports the deal is about 2 billion and comes amid scrutiny over Manus' China-linked origins and Meta's massive infrastructure spend.
They were popular because it was about agents, like coding agents that could... what was it, Upwork? They could do something with Upwork or something.
I think the value proposition is that they're very autonomous. Normally something like ChatGPT is really a chat: you chat with it and it helps you along.
But Manus is really positioned as a virtual colleague, not a chat assistant. You can offload stuff to Manus and it will basically plan and execute even multi-step tasks: research, coding, data analysis, business workflows, with very minimal prompting in between, whereas with something like ChatGPT you have to keep prompting it.
It was relatively early, right?
I want to say it was already there three years ago, two and a half, something like that.
So I think they made a lot of noise because of that.
Hmm.
And what I did not know, because to be honest I didn't hear a lot about it anymore since then: they are doing 100 million in annual recurring revenue.
They must have quite a few large customers. I do wonder where those customers are, right? Are they in the US, in Europe, in China?
Yeah.
So what Meta is doing here: they're not just buying technology and skills, they're really buying cash flow as well.
Yeah, indeed. There are a few things I heard discussed as well. One is that Manus, I think, started in China and then moved to Singapore. And there was a whole thing about a company of Chinese origin now going to the US, with the whole US-China competition on AI. I heard there were people, on the Chinese government side, who weren't happy with this transition, because they felt like one of the big players is now going to the US.
I'm not sure if you have any thoughts on that. Because when I think of Chinese AI, I think a lot of open source, a lot of low cost, right? And Manus, I mean, they are in Singapore now, and it did feel a bit different. Honestly, when I first saw Manus, I actually thought they were in the US, you know.
It has Chinese roots indeed. They moved to Singapore, and I think now, with the acquisition by Meta, they will have to sever those roots with China, if they still exist. I don't think it will be a problem, to be honest.
You know what maybe could be a problem, and I heard this too: I don't know what models Manus uses underneath, right? I don't think they have their own models. If they're using a Chinese model, or a competitor's... would the competitor still want to power them? For example, would Anthropic be happy to power Manus if it means competing with themselves?
Well, do they compete with themselves? Then they're just a provider, right? I mean, they make a lot of money.
What's the Meta play here, right? What does Meta want to do with it? Because Meta wants to do stuff with AI for sure, right? But what exactly?
Meta has been investing a lot, like a lot, a lot, in AI over the last years. It hasn't really paid off: they got a lot of criticism and the stock price suffered a bit. Not the last years, the last months, the stock price suffered a bit. Because they invest a lot and they haven't really got that much to show for it.
I think what they really want to do is go across all the Meta services with AI skills, so that you have agents in WhatsApp for Business, agents in Instagram; all of that is still very limited today. Like in WhatsApp, I have a very bad LLM assistant that I will never use, because ChatGPT is way better. If it were as good as DeepSeek or ChatGPT or whatever, I would probably just use my, quote unquote, free WhatsApp LLM, right?
There's also this context where you would like to have WhatsApp agents: if you have WhatsApp for Business, you have customers asking you stuff, so you could very easily spin up a new agent. I can imagine Manus doing something like that. Or help with setting up advertising across their ad platforms; they do have some AI capabilities there, but let's be honest, it's really shitty today.
So I do think there are a lot of options there.
Aside from that, there's also the talent it brings in, right? A lot of talent is coming with Manus, and I think it's different talent from what's available today at Facebook Research, or whatever the Meta department is. Because it's really something that was built from scratch; it's more of a startup environment, even though it's doing 100 million in ARR these days.
Yeah, that's true.
That's true.
It's one of those AI startups that exploded, right?
Yeah, it's true.
I'm also, yeah, because you mentioned Facebook Research, I was thinking: Meta did have models, right? With Llama, they were the last American AI provider that was open source. And now I think they've kind of stopped, kind of.
I actually just heard that Yann LeCun just founded his new venture. He was at Facebook... but we can talk about that next week, maybe.
So I'm not sure: is it still interesting for Meta to work on their own foundation models, or not? And how is this going to play with Manus?
Well, that's more of a strategic question. It's a good question. I think in the short run it probably doesn't make sense for them to have their own model. In the long run, if they can have a model that's more or less comparable to something like DeepSeek, it's probably way more cost-efficient to have their own model.
Yeah, yeah, for sure, for sure.
Unless what they want to do is build something that current models don't do at all. I don't think there is an LLM today that is truly, truly good at making good ad campaigns, which is basically 80% of Meta's revenue, right? Maybe there's a strategic opening there.
But what we've seen with the Llama models they made in the past, or at least published in the past (maybe they're still using newer versions in-house), is that they are very good, but always slightly behind the state of the art.
Yeah, yeah, that's true.
That's true.
I do think they could probably specialize some of these different models, right?
It's true.
Maybe one last thing, just to complete the story: Manus will still be run independently, right? At least that's what they say right now. Which, I guess, also makes sense for Meta if they have good revenue: keep it a separate product, but still leverage the brains and all these things, and try to integrate it into Facebook products. So yeah.
Yeah, there is this meme, actually kind of fun. You now have Alexandr Wang leading AI at Meta; I think he's the Chief AI Officer. He came from Scale AI, basically an AI labeling platform that Meta acquired. And the meme is: Zuckerberg sends a text message to Alexandr Wang saying, we should buy Manus. Wang sends back: done deal, boss, we bought it. Zuckerberg replies: did we buy Pro or Premium? Whoopsie.
That's funny. When you said this, I thought of my own interactions with AI sometimes, right? I say, do this, and then it does something completely different, and I'm like, what the fuck? But technically it is following what I said, right? It's like, ah, okay, context matters. That is it for our regular topics. We have two tidbits, I guess, small things.
One is a project that came to my attention via colleagues; it's called Trivy. I don't know if you know about Trivy, Bart?
No, I quickly glanced at it when you sent me the link, yeah.
Yeah, basically the about on the GitHub is: find vulnerabilities, misconfigurations, secrets, SBOM (not sure what that is), in containers, Kubernetes, code repos, clouds and more.
The discussion started with a blog post from, I think, Michael Kennedy from Talk Python about how to verify pip dependencies. He's talking about how, when you're using an LLM, it sometimes installs stuff, and sometimes that stuff is actually malicious, right? Nowadays it's less of an issue, but earlier, when the models weren't really good, this was a big problem. Like you would install a typosquatted package instead of requests, right? And a lot of vulnerabilities like that. So he was also talking about how you can...
What still happens is installing older dependencies.
Yeah, indeed. Maybe a dependency has a vulnerability in its own dependencies, right? And then you want to use the latest version. But there's also this question: if there's a new version, do you want to jump on it right away, or do you want to wait a bit? I think that's also something they talk about in the article. Maybe you want to wait a few weeks before you install the dependency, just to make sure there are no vulnerabilities, depending on the project, etc.
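As a concrete sketch of that wait-a-bit policy, here's a small script that checks how old the latest release of a package is via PyPI's public JSON API. The two-week threshold is an arbitrary number for illustration, and it assumes Python 3.11+ for parsing the timestamp.

```python
# Minimal sketch of a "wait a bit" policy for new releases.
# Uses PyPI's public JSON API; the 14-day threshold is arbitrary.
# Note: datetime.fromisoformat accepts the trailing "Z" on Python 3.11+.
import json
import urllib.request
from datetime import datetime, timezone

def latest_release_age_days(package: str) -> float:
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    version = data["info"]["version"]
    files = data["releases"][version]  # assumes the release has uploaded files
    newest = max(datetime.fromisoformat(f["upload_time_iso_8601"]) for f in files)
    return (datetime.now(timezone.utc) - newest).total_seconds() / 86400

if latest_release_age_days("requests") < 14:
    print("Latest release is under two weeks old; maybe hold off.")
else:
    print("Latest release has had some time to be vetted.")
```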
So there was a bit of back and forth on how to do this: there's pip check or something, and then something like a uv check, so there was a bit of a discussion internally on Slack. And someone shared this one, Trivy, which I wasn't familiar with. The reason they suggested it is that it's not only for Python: if you have Node.js, if you have Docker, whatever, it actually scans everything and gives you a comprehensive view of what vulnerabilities you may have. So it's something that could be interesting, especially in the age of AI.
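For a feel of how you might use it: a minimal sketch that runs Trivy against the current directory and tallies findings by severity. It assumes the trivy CLI is installed, and the JSON fields are a simplified take on Trivy's report format, so treat it as a starting point rather than gospel.

```python
# Minimal sketch: scan the current repo with Trivy and summarize.
# Assumes the trivy CLI is on PATH; JSON structure simplified from
# Trivy's filesystem-scan report format.
import json
import subprocess
from collections import Counter

result = subprocess.run(
    ["trivy", "fs", "--format", "json", "."],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

# Each scanned target may carry a list of vulnerabilities with a severity.
severities = Counter()
for target in report.get("Results", []):
    for vuln in target.get("Vulnerabilities") or []:
        severities[vuln.get("Severity", "UNKNOWN")] += 1

for severity, count in severities.most_common():
    print(f"{severity}: {count}")
```

Something like this could run in CI, failing the build when, say, a CRITICAL finding shows up.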
Yeah, could be. I'll maybe try to run it on one of my AI-generated projects, just to see what comes out.
Yeah, and the conclusion is just: drop this project, never do anything with it again.
Exactly. It's like, oh my God, what the fuck are you doing? So that's one.
And the second tidbit is something that came out on the 24th of December, so just on Christmas Eve: Microsoft wants to replace its entire C and C++ code base, perhaps by 2030, and migrate it to Rust.
Yeah. But I saw somewhere that it is a bit of a research thing. They're not really...
They're exploring whether it's doable.
Whether it's doable.
So why would you do this? C++ and Rust are both fast, right? But the main difference, and the reason they say they want to do this, is that Rust gives you memory safety. In Rust, if you hold a reference to a value, the compiler guarantees that the value is still valid, so you don't get things like use-after-free or dangling pointers, basically. It's very much about memory management and all these low-level things. And apparently you can resolve a lot of issues, a lot of bugs, and your code can be more robust if you make this change.
They want to use AI to migrate to this. So I guess they would just ask AI... Although, in an update on December 29th, Hunt clarified that Windows is not being rewritten in Rust with AI. So that's not the goal there.
It's a research project, so it's more a question they're trying to answer: could it be rewritten in Rust with AI? 'My team's project is a research project,' he added. 'We're building tech to make migration from language to language possible. The intent of my post was to find like-minded engineers.' Okay, yada yada.
So it's a bit: is it possible? How long would it take? Is it feasible? What are the benefits? I think it's interesting. I would probably never seriously do it myself, but I think it's interesting. And Rust is also in the Linux kernel already, I think, so it's gaining traction there as well.
What do you think of this, Bart? Would you be happy to work on a project like this, even if it's research?
To have to work on a project like this? No, I don't think that's for me.
I don't think it is for me either.
What I did try to find is: okay, it's a research project, I get it. But what are the research questions, quote unquote, that you're trying to answer? Do you have metrics? Do you have benchmarks? You're gonna do this, but how are you gonna know whether it was a success or not? What are the things you're trying to learn? I imagine they have this, but I couldn't find it.
Yeah, such a huge project as well.
Yeah, indeed.
To be seen, to be seen.
And I think that's it for today.
Yeah, thanks a lot, Murilo, for joining me.
No, thank you, Bart, for being here.
Thanks everyone for listening.
An update from over Christmas: I think we have, what, 7k subscribers?
7.5k.
Yes. So not bad at all.
So thanks everyone for subscribing.
Thanks everyone for listening.
Tell your friends, family.
We got some love as well on LinkedIn.
So thank you for that.
Really appreciate it.
Yeah.
Feel free to subscribe to our newsletter as well.
Thanks everyone and I'll see you all next week.
See you all next week.
Ciao!
Out.