AI’s Collision With Work, Art, and Hardware
Hi everyone, welcome to the Monkey Patching Podcast, where we go bananas about all things
AI, the physical world, AI-generated art, and more.
My name is Murilo, I'm joined by my friend Bart, hey Bart.
Doing good, how are you?
Doing fine as well, doing fine as well.
Quite busy these days with the new startup that we launched.
Top of mind.
Everybody check it out.
topofmind.cloud.
There we go.
It's been a little while since we recorded last.
I don't know if you had a website back then, but you definitely have one now, right?
Yeah, we do.
I'm not sure if we had it back then, actually.
Everything is a few weeks old, but we're moving very quickly.
Maybe again, for people that didn't listen last time: what is Top of Mind, in one line?
I want to say an application, but it's more than an application. We're basically building
a system for people who both try to store a lot of knowledge, a lot of information, and
are also very active with their network.
Think about salespeople, think about investors, think about business leaders.
Like, you want to be aware of: I'm having a talk with Murilo, he's saying that he's going
to run a marathon in two months. It would be nice if you get reminded in two months: he
was really looking forward to this marathon. So that you have this context top of mind.
And what we're basically building is a system that allows you, with very, very low
friction, to just dump in any kind of information. It then structures it for you and
surfaces it for you at the right time.
Yeah, very cool. You also said you're busy, right? Maybe a small shout-out: you're also
looking for people to join the team.
Yes. We have a total team of four people now, of which two are technical, and that's not
enough.
So we're looking for a new software engineer.
Next to that, also a marketing engineer.
Okay, okay, so if anyone is interested, yeah, I think.
Well, we've got to pay the podcast bill somehow, Bart.
So yeah, very cool.
So you've been busy with, yeah, I guess just sorting everything out. And you also have
some users already, like test users?
Yeah, we went live with test users.
I think actually last time we recorded, I said we were going to go live with test users.
And we went live, yeah.
I think for now, this is the second week.
I think it's positive overall.
I think the biggest challenge that we have actually is that our back end is basically an
AI agent. So you ingest a lot of knowledge, and this agent can do a lot for you because
it has access to a lot of different tools.
It can structure your knowledge, but it also can use a lot of external sources.
And because of all these combinations of tools that it has access to, it's hard to get an
overview of what are actually all the features that this app has.
And we need to make sure that we surface the right features to the right users, if that
makes sense.
I see what you're saying. Maybe to make sure I understand: you're saying this person
would leverage this kind of feature a lot, so we need to make sure this person knows that
this feature is there. But because the agent does so much, maybe it's not easy for the
person to know that they can just use it.
So what our agent now does is, you could say, for example: at the end of the week, give
me a summary of all the new information that I collected about people and organizations
this week. Or I can say at the beginning of the week: give me an overview of which new
invitees I have in upcoming meetings, and build a profile on them.
Or I can say: I think Apple is a very nice company, send me an update if there's any
daily news on them. All of these are possible today already, but they're not enabled by
default. There's no button to click; you need to say it to the application. And I think
what we need to do is see which features 80% of users like, and automatically enable
those for them.
Yeah, I see what you're saying.
Okay, very, very cool.
Very cool.
So again, if someone is interested: it's an AI product looking for AI-native engineers,
and it's remote-enabled as well. So if you want to work on AI and you want to work with
AI, I think this is a cool opportunity, right?
Yeah, no?
Like I said, we'll adjust the advertisement fees; I'll send the bill to you later. But
no, very cool.
I've also been playing more with AI, not so much Claude Code, which I was doing before,
but also with Claude Cowork.
yeah.
I have some less technical work, let's say, like preparing proposals or slides and all
these things in my day job. And I've been using it more, and actually, as we were
discussing a bit before we started recording,
I think the output is better, because I feel like I have a sparring partner that can
brainstorm with me. It's much faster, and it thinks of things that I maybe didn't think
of.
I think it's also faster because I used to spend a lot of time preparing slides, making
sure the boxes are the same size, creating text boxes, whatever.
And also, it's more fun. So I feel like I spend less time on those things, and it's been
interesting.
Yeah.
It's a bit of a workflow that you need to get into, but once you're into it, it really
makes you more efficient, right?
Yeah, really it does.
I was even reflecting to myself, because with Cowork you give access to a directory,
right? So in some ways it feels like Claude Code: you go to a terminal, you talk to it,
you create and edit files, then you go in and change them again, read them, iterate on
them. But Claude Cowork feels, I don't know, like it fits better with non-technical work
somehow, even though I feel like I could do the same things with Claude Code.
Right.
So I was also reflecting: is it just a UX thing? Is it just because there's an app
instead of a terminal? What is it?
Right.
So.
I think it's also a bit primed for desktop work. It has some built-in skills around, for
example, generating PowerPoint presentations, stuff like that, which you don't have by
default in Claude Code, right?
yeah for sure.
But we also use Granola, and I have a Granola MCP, and sometimes we have meetings to
discuss a bit what the format of the proposal should be, and then you can just hook it
up. I don't know, I think maybe there's also some prompting that makes it better. These
built-in skills, maybe. But yeah.
And have you also used Claude Projects? That's the one that has existed for a long time,
right?
Yeah, so you can have Projects, and you can spin off from Projects to Cowork as well. But
to me it was a bit like: what's the difference between them? I did a quick search, and
they say Projects is more so you have the same context throughout, while Cowork is more
for automating, according to them. Like if you want to convert from one thing to another.
And they interact with each other as well. So you can go from a Project and say: do this
in Cowork, and then let me know when it's done, and all these things.
So I'm also giving it a try, but yeah, still need to organize myself a bit and see what
works really well for me.
So yeah, do you use Claude Cowork?
I use it, but not as a daily driver, let's put it that way.
Okay.
Very cool.
I actually ran out... I reached the limit with Claude Cowork when I was using it. So I
had to wait, I don't know, three hours. I'm on the team plan, but yeah.
Actually, the team plan doesn't have Max, I realized. So I was setting up our team plan,
the Anthropic team plan for Claude Code, and it doesn't have a Max plan. It only has a
sort of teams version of the Pro plan, which is at most 5x, I think, while the Max plan
is 20x. So you basically need to subscribe as an individual to get to the 20x plan. And I
think if you use this for AI-native coding as a daily driver, you need 20x. Otherwise,
you're going to run out of your limits very soon.
Yeah, yeah, I see what you're saying.
the max is...
but so...
So I'm looking here at the page as we're talking.
So you have Pro, and then you have Max, where you choose between 5x and 20x. I think the
5x is $100 here, and if I'm not mistaken, the 20x is $180. So that's Max. And then if you
go for Team, you have this premium seat, which is 5x, at $100.
Yeah, okay, okay, that's interesting.
Huh, interesting, yeah, yeah.
Maybe one last thing.
Did you see that little tidbit as well, since we're talking about Claude? I think Claude
also expanded the usage for Claude Code on weekends, or something like that, to try to
encourage people to tinker more with Claude on the weekends.
Did you see that?
Something like that.
I don't have a link for that.
Yeah, it was like: weekend usage doesn't count, because at Claude you have a session
limit and then you have a weekly limit.
Right.
And now they've extended it on the weekends: if you use it on the weekend, it doesn't
count toward the weekly limit, or something like this.
I think actually in Claude Code there was a hint or tip rendered at some point about
this. Rings a bell, yeah.
Yeah, that was funny. I mean, they're kind of encouraging people to tinker on the weekend
and not at work, but yeah.
And for Codex, they did something similar. They said, I think Codex is still free, I'm
not sure. But they said if you're using the app instead of the terminal, you get 2x usage
as well.
Okay, okay.
Not the terminal, only the app.
Yeah, I use the Codex app because of the extra usage, and I want to try Codex, but I also
prefer the terminal.
I'm not too sure.
Yeah, I feel a bit limited when I'm not in the terminal using Claude Code.
It is nice, though, if you're on the road and you can quickly create a PR for something,
right?
One thing I have also used is the remote control from the terminal. So I sent something
off to work on, and then I went to the gym or something, and then it would ask for
clarifications, like: is it this, is it that? And I could say: go ahead, do this, do
this. I was quite happy with it as well.
So cool.
What do we have for today Bart?
Yann LeCun is back with a billion-dollar bet that AI should learn from the physical
world, not just from text. It's a direct challenge to the language-model consensus. The
new startup, AMI, launches at a 3.5 billion valuation and aims first at industries like
manufacturing, biomedicine, and robotics.
Cool. Maybe for people that have never heard of him, or don't know as much about the AI
world: who is Yann LeCun? Why does it matter that it's this guy?
Why does it matter that it's this guy?
I think Yann LeCun is probably one of the... not sure about godfather, but at least one
of the grandfathers of modern AI, I would say.
He is probably most known to the general public through his work at Facebook, where,
among other things, and I don't think he was solely responsible, the Llama models came
out of his team.
Yeah, he did a lot of stuff on the research side as well, right? He worked on computer
vision, for instance.
Yeah, exactly.
And maybe more recently, he was very vocal; he had a very famous tweet, I think it was
him, that said: I'm not interested in LLMs anymore, I'm moving on to something else.
He also said a few times that he didn't think LLMs are the path to AGI. So he's been,
quote-unquote, critical of today's state-of-the-art AI, right? Which is basically LLMs.
Very critical, right? Like in the last two years, I want to say.
Something like that.
yeah.
And what is this now that he has a new startup?
What is it about?
So his point basically is that the current architecture, the current text-based LLMs, are
limited in how much further they can evolve, because they have a lot of difficulty
understanding the physical world, basically.
And what AMI is building... AMI apparently stands for Advanced Machine Intelligence.
Sounds very advanced, right?
Yeah.
But they're building a new type of model, which they call the JEPA architecture, the
Joint Embedding Predictive Architecture, which I think we should probably do a deep-dive
session on at some point.
But it learns abstract representations rather than predicting something pixel by pixel or
word by word.
How that exactly translates to an architecture, I would be interested to do a deep dive
on; I'm not sure at this point.
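To make that "abstract representations, not pixels" idea a bit more concrete, here is a toy numpy sketch contrasting a pixel-reconstruction objective with a JEPA-style objective. Everything in it (the linear encoder, the identity predictor, the random data) is an illustrative stand-in, not AMI's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Stand-in encoder: a fixed linear map into a small "abstract" embedding space.
    return np.tanh(x @ W)

# Toy data: a partial "context" view and the full "target" view of the same scene.
x_context = rng.normal(size=(4, 16))                    # masked / partial observation
x_target = x_context + 0.01 * rng.normal(size=(4, 16))  # full observation

W = 0.1 * rng.normal(size=(16, 8))  # shared encoder weights
P = np.eye(8)                       # predictor network (identity stand-in)

# Generative / pixel-style objective: reconstruct every input dimension.
pixel_loss = np.mean((x_context - x_target) ** 2)

# JEPA-style objective: predict the *embedding* of the target from the
# embedding of the context; low-level detail is never directly penalized.
z_context = encoder(x_context, W)
z_target = encoder(x_target, W)
embedding_loss = np.mean((z_context @ P - z_target) ** 2)

print(pixel_loss, embedding_loss)
```

The point of the sketch is only the shape of the objective: the second loss compares compact embeddings, so the model is never asked to reproduce every pixel-level detail of the target.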
Are there initial results? Because again, to me it's like: we have LLMs, which are
basically the transformer architecture that exploded, and now there are variations of
this architecture, as I understand it.
I know that I'm not as much in the know on the latest and greatest architectures, but it
took a long time to get there. And I feel like introducing a new one is also... it's not
because it's new that it's going to be better than LLMs, right?
Difficult to say, but there are clearly a lot of people that believe in it, if you can
raise one billion at this moment, right? For something that, as far as we know, has no
real practical proof yet.
But what they did, I think... so AMI now consists of a lot of people from Yann LeCun's
old team at Facebook, the FAIR team. And the FAIR team was focusing more on research. We
all know Facebook AI from Llama, right? But Llama actually came from the GenAI team, and
while Yann LeCun was probably involved, he was not the one driving the Llama models. His
FAIR team was, from what I understand, a lot more involved in fundamental research, but
also in reinforcement learning and these kinds of things. And how I understand it is that
already there, he was investigating these types of newer architectures.
Okay.
And,
he's been getting a lot of flak in the last year, I want to say, because he's been
extremely negative for the last two years on the advances that quote-unquote traditional
LLMs can still make. But at the same time, we're in a space where, in the last two years,
these traditional LLMs have become, one, multimodal, and two, better than he ever thought
possible.
Probably, right?
Like if we see how good these things are in certain tasks today.
And the other thing is also, let's be honest: Facebook was never able to compete on LLMs.
The only reason they were relevant is because they had an open-weight model that
everybody could easily and cheaply use.
Yeah, that's true.
That's true.
Yeah, maybe the...
He said, I just did a quick search: one year ago, Yann LeCun said, if you're interested
in human-level AI, don't work on LLMs.
So I think he was also saying: a lot of these things are marginal gains on LLMs; it's not
going to be a breakthrough, it's not going to be the path to AGI, which...
I mean, I can also grant that it's probably not going to be the path to AGI, but AGI is
also very idealistic, right?
I mean, who's to say that whatever he's building now is the path to AGI, right?
Exactly.
But there is also this: at least he's trying a new architecture, right? A lot of these
big players are so heavily invested in their transformer architecture. These models are
so huge; it takes months and months and months to train a new model. It's also super
risky to experiment with a lot of new things, right? And that's at least what he's doing.
And the question is a bit: is it the right timing, right? It could be that Yann LeCun is
this brilliant researcher and that he's very right in theory, but maybe he's wrong on
timing. And for a startup, that just means that you're wrong, right? Because you need to
go to market at some point.
Also, I'm thinking: part of the reason why LLMs are good is because they basically said,
fuck everyone, I'm going to train on all your data. Do you think, if he needs that level
of scaling data, he could do that again? Even if he has this new architecture, he'd need
all the data on the internet to train it. I'm also wondering if, like,
GPT-2, 3, whatever, were able to get away with it because no one really knew what was
right from wrong, legally speaking, let's say. But I feel like if someone tried to do the
same thing now, they wouldn't be able to.
So I'm also wondering if there's also a disadvantage there, probably.
Fair question.
What I'm wondering is: where does the majority of the data for this new JEPA architecture
come from, right? What is this physical-world understanding? Is it video? Is it lidar
sensor data? Is it text? Is it audio, right?
But it's a fair point.
I still don't think the landscape's changed that much, though, right?
There have been some court cases. But what we're basically seeing is that a lot of the
court rulings are saying: ah yeah, it was more or less fair use of the copyrighted
material. So there's not a major issue. And if there is a major issue, then let's settle;
we're going to pay you something to stop this court case. So I'm not sure that landscape
really changed from a few years ago, to be honest.
maybe, I don't know.
Maybe one last thought that I have, just on this claim: human-level AI will come from
mastering the physical world, not language.
I'm also thinking from a maybe philosophical standpoint: I don't fully agree with this. I
actually think that the things that make humans human are more abstract than the physical
world.
I feel like animals master the physical world.
but they don't have the complex language that we do.
I think the thing that makes human-level intelligence human-level is actually the ability
to understand complex concepts, right?
And I don't think it's really tied to the physical world itself.
Again, maybe I'm being a bit too philosophical here.
I understand what you're saying. But at the same time, if you combine that with the
physical world, it's again a step forward.
That I agree. But I...
We need something like this to actually take the next step in robotics as well.
That I also fully agree.
I think...
Still, maybe that's a good example of where something like this might be much more
valuable: something as simple as controlling a browser is still super shitty with an LLM.
Whatever the LLM, even the latest, like GPT 5.4, which should be better at this, they're
just shit at it, right?
Yeah.
They go pixel by pixel: let's move the mouse, and then let's try to click. Oh shit, I was
in the wrong location. I mean, this is clearly not performing, right? Even though these
are huge, huge models. So there is clearly some mastering needed of whatever physical
world is out there, aside from the binary or the text string, right? There's more to it
than that.
No, that I fully agree.
I do think that LLMs became popular because of text, and now they're trying to transpose
that into different domains, but I'm not sure if that's the way to go.
But yeah, let's see.
I think, again, at the end of the day, whether it's going to succeed or not, whether I
think it's a good idea or not, or whatever I think about the guy, I also think it's good
that people are trying new things. I think that's always going to be good. It's
intelligent people, people that have the resources and the mind for it. So let's see.
It's also a very nice achievement for Yann LeCun. It's one of the largest seed rounds
ever: 1 billion at a 3.5 billion pre-money valuation.
But it's the largest ever for a European company.
And its headquarters is in Paris.
He is French, I think, no? Yann LeCun. But I thought he was in the US, actually. With
Facebook, I thought he was in the US, and I also thought he was teaching at New York
University. But his company is in Paris now. Maybe he moved back.
The HQ is in Paris, with offices in the US in New York, plus Montreal and Singapore.
Wow, but I definitely agree that this is very good for Europe.
So yeah, we'll see.
Best of luck to him.
And I hope to be proven wrong in my, let's say, skepticism towards this. But we'll see.
Next, what do we have?
We have Kapwing reflecting on an unusual AI experiment: paying artists royalties when
their styles helped power generated images. A live test of whether ethical AI art can
really work. The post looks back on test.design, from launch in May 2024 to shutdown in
January 2026, with the hard lessons in between.
with the hard lessons in between.
I think this brings us back to what we said about the copyrights and using the data for AI
and trying to be ethical about it.
So this is test.design. They actually tried to make an ethical, artist-friendly AI
marketplace, but it didn't work.
Yeah, so I'll try to summarize; I have a feeling it's been two weeks since I read it. But
what they tried to do is: in 2024 they set up a marketplace of fine-tuned Stable
Diffusion models, fine-tuned for specific artists, where as an artist you could say: you
can use my data to fine-tune this Stable Diffusion model.
And everybody that uses that Stable Diffusion model via the website, I assume via an API
or something like that, pays a few cents, and some royalties from that go to the artist.
But then basically, me as a user, I say: I want this style, so I'm going to use this.
It's almost like the model becomes a proxy for the artist.
Exactly, exactly.
Let's say Rembrandt was still alive. He could sign an agreement with test.design, as it
was called, where you could say: okay, test.design, you can use my works to fine-tune a
Stable Diffusion model. And that Rembrandt Stable Diffusion model you can then offer on
your website, and your users can interact with it. And basically, for, let's say, a few
cents, I don't know what it would cost, they can generate an AI-generated image using
that fine-tuned Stable Diffusion model.
And of the payment that the user makes to the platform, a few cents go to the original
artist.
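As a back-of-the-envelope sketch of how a per-generation royalty split like that could work; the fee and the royalty share below are made-up numbers for illustration, not figures from the post:

```python
def generation_payout(price_cents: float, royalty_share: float) -> tuple[float, float]:
    """Split one image-generation fee between the artist and the platform.

    royalty_share is the fraction of each fee owed to the artist whose
    fine-tuned model was used; the numbers here are hypothetical.
    """
    artist_cut = price_cents * royalty_share
    platform_cut = price_cents - artist_cut
    return artist_cut, platform_cut

# Example: a 5-cent generation with a 30% royalty to the artist.
artist, platform = generation_payout(5.0, 0.30)
print(artist, platform)  # 1.5 3.5
```

The mechanism is trivial; as the episode goes on to discuss, the hard part was never the accounting but getting artists to opt in and customers to pay the premium.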
That also solves a bit of the problem, because we talked in the past about how, if you
ask ChatGPT to generate an image, it's hard to know how much of the style came from this
data or that data, and consequently from this artist or that artist.
So they kind of solved that by saying: we're going to have multiple models, and each
model is exclusively fine-tuned on one artist.
And it didn't work.
It didn't work.
I think there were two challenges. One was getting artists to basically sign over their
rights to them: you as a platform can now use my intellectual property to train
something. They had to convince artists, or copyright holders in general.
I think the other, and that's probably the major reason why it failed, is that customers
just don't care about it. There was, at that point in 2024, some uncertainty about: if I
generate something with AI, what will happen? Do I actually own the copyright or not? And
there was all this noise about court cases.
There was a lot of legal uncertainty.
And I think over time, it's not necessarily that the legal uncertainty went away. But we
see that the leadership of major countries doesn't care about this. And that also means
that corporations don't really need to care about compliance, so there's not really a
willingness to pay for something like this.
So basically, the country leadership didn't necessarily care. Or maybe not that they
didn't care, but it wasn't a priority.
Maybe, but I don't see the people that should care about this actually acting on it,
right? We're not seeing a majority of court cases being ruled in favor of the original
copyright holders. On the contrary, we're seeing a majority of rulings coming out saying
that a lot of these things have been fair use of copyrighted material, right?
Like we had the huge court case, I think it was Anthropic, I'm not sure if it was
Anthropic actually, where the books, I think a lot of them, came from Anna's Archive, and
where basically the ruling was: it was okay to train on the books, but what you did wrong
is that you didn't buy them, you torrented them. So you just need to buy a single copy
and you can do whatever you want with it, right?
It's a bit like: if I take a legal picture of a Rembrandt, I can do whatever I want with
it. That's a bit of what seems to be becoming the norm.
And if that becomes the norm, then something like test.design becomes a purely ethical
choice, right? The only people purchasing this are people who say: yeah, I feel this is
more honest. And there is probably a very small audience for that, but it's not viable,
right? You can't live off of it.
Probably the people that would do this are the people that would just donate to artists
directly,
Well, yeah, I think what they tried to do is sell ethics in a marketplace where the
customer doesn't care about ethics.
Yeah.
I think you can abstract everything they did down to that.
Yeah, yeah, yeah.
Yeah, it's true.
So yeah, I think what's interesting is that this is a concrete example: everyone will
complain that these things are not right and not ethical, but when you give people the
power to do something about it, they don't want to do it either, right?
Well, yeah. You as a consumer, let's say you want to generate some AI art. I mean, you're
going to pay your $20 Gemini license and just do whatever you want, right? You're not
going to pay per image, with something going to an artist, which means paying more.
You're not going to do that out of your own free will. You need a strong legal system to
enforce something like that.
Yeah, but I do remember, in the beginning especially, there were a lot of blog posts and
articles saying: this is not fair, this is stealing from the artists, the artists don't
see anything. A lot of the public opinion was: this is not right, and people need to do
something about it. And that's probably where test.design came up. But the reality is...
People are very enthusiastic when they need to give their opinion, but when it's time to
act on it, they're not as excited, let's say. When you need to pay for it yourself, it's
less interesting.
Yeah, I think in the end people just want the companies to pay, but you cannot control
that, right? For companies to pay, you need a strong legal system that enforces it.
I mean, companies are owned by shareholders, and shareholders want short-term gains. It's
as simple as that.
As long as we're in a capitalistic society, that's reality simply.
Yeah, exactly.
It's just like physics, right?
That's just the way it is.
Can't fight it.
All right.
All right.
But still, I thought it was an interesting experiment at least. And I thought it was also
very nice that they were so open about it, right? Like: these are our lessons learned.
Yeah, and to be clear, I hope at some point something like this does come up, where there
are royalties going to original artists and original authors. But today is not that day.
Agree, agree. To better days, to better days.
What is next?
Anthropic is trying to measure AI's effects on jobs with new occupation-level datasets,
moving the debate from vibes to something closer to evidence. Its early read surfaces a
tension worth watching: AI use is spreading fastest in some knowledge-work tasks, but the
broader employment impact is still far from settled.
It's a new report from Anthropic.
By now it's already a bit more than a week old. What is it about, Murilo?
Practically stale, right? One week old. So what I understood is that they basically
looked at the impact of AI on the workforce. There's a lot that's theoretical, right?
Places where AI could have an impact. But there are also places where it's already having
an impact. So they did a bit of analysis, basically trying to see how the workforce is
going to change, or how it's going to be impacted by AI going forward.
So actually I think there was one image here that I thought was quite interesting.
This one, I wanna say.
It shows capabilities per occupation. So maybe for people listening: how do you call it?
A radar chart? It's kind of like a radar chart. And for people that don't know what a
radar chart is: if you play FIFA, at least before, you had the different player skills,
and it kind of looks like a net, right? You have a circle with a whole bunch of
concentric circles, and then different points that reflect a score from zero to 100. I'll
just link the article; if people are interested, they can click on it.
Yeah.
It's actually quite intuitive when you look at it.
But what do we see on the chart?
So basically you see the different domains on the different axes: management, business
and finance, computer and math, architecture and engineering, and it goes on and on,
including sales, office, even agriculture. And then there's a blue area, which shows the
theoretical AI coverage. And below it, smaller, there's a red area, which is the observed
AI coverage. So for example, for management there is a very high theoretical AI coverage,
meaning there's a lot of stuff that management people could use AI for, but the reality
is that very few people actually do it today. So I guess, as time goes by and these
things become more commonplace, I would expect the red area to expand, right? To get
closer to the blue area.
But I think the management, business and finance, computer math, architecture,
engineering.
Life and social sciences, legal arts and media, and office and admin, and sales, they all
have very high theoretical AI coverage.
So I guess the message here is if you're in one of those areas or part of the job is one
of those areas, I think looking into AI is gonna be very relevant for your job going
forward, right?
And of course, for computer and math, business and finance, legal, arts and media, sales,
and office and admin, it's already a reality today.
Right.
So.
Yeah, I think what they say is that the most exposed occupations today, because they
already have 75% task coverage, are computer programmers, closely followed by customer
service reps.
Also recognizable, right?
And data entry people.
People that do data entry on whatever, right?
You get some information in and you need to insert it into some system.
Very administrative tasks done in a lot of big institutions.
Indeed.
So it's already very relevant.
Maybe we can discuss a bit the...
I think you mentioned during a recording that computer science is not going to be as thriving a career, right?
Like the workforce is going to decrease quite a lot because of AI.
That sounds very pessimistic.
So I've been a bit both positive and negative about this.
Maybe the negative part is: I think what we will see is that
your typical software engineer will become five times more efficient if they adopt these
skills.
I think that will take some time.
Like people need to pick it up, but I think they will at least become five times more
efficient.
That means that your typical engineering team suddenly can output five times more stuff,
but your customers as a company are probably not asking for five times more stuff.
So I think there will be this lag effect.
I think at some point that will equalize, but I think that lag effect will cause some displacement.
And I think this report is also saying: we cannot yet prove that there is less employment, but we see more hesitation around hiring juniors, which is, I think, something that we all recognize from hearing around.
In Belgium you do hear it, and in the US you also heard it a lot: people saying they're gonna cut their workforce with AI, that they're planning to reduce the workforce.
So this is also a reality today.
It's a reality.
I think some of them are also overstated, because AI is a convenient explanation for why you're cutting jobs, right?
So there's a lot of noise in that messaging as well.
But we do hear it a lot here.
It's not all noise, right?
Well, at least for this particular signal.
I do agree with you that, as a general trend, this will probably happen, like growing pains, right?
The market will probably adjust what it's asking for, but there will be some time where companies are going to be like, whoa, you can produce way more than what we're asking for, right?
And it's gonna take some time before they start asking for more.
But I also think that for this very particular signal, like the company saying they're
gonna cut the workforce because of AI, I also wonder if there's a bit of a herd mentality.
Like they see one place doing this and the people are like, okay, why are they doing this
and we are not, right?
If you can actually be more productive with AI, why are we not?
And then you start discussing more of these things, and then more people kind of jump on it, right?
Yeah, I think it's also a way to force yourself as a company to pick up these skills, of
course.
If you have a very big engineering team, you're in a very comfortable situation, and
nobody's really incentivized to become five times more efficient.
Unless you have a problem: you suddenly have fewer people, and you need to do it to survive.
So you said, like, do you think everyone's gonna be 5X?
Or do you think the good ones are gonna be 5X?
Because I'm also wondering like, I think everyone's gonna be more productive, but I'm
wondering if like, everyone is gonna be 2X and then the very good ones are gonna be 5X?
The very good ones will be 100X.
Where we used to say the good ones are 10X engineers, I think it will be 100X engineers.
Okay, okay, cool.
The thing that I am positive about, and I actually did a write-up on my blog the other day on this, is that I was talking a while back to a startup, non-technical founders, and they actually had quite a good product-market fit, but they were... oh yeah, you don't need to show the article, I'm a bit shy on these things.
I insist.
So, they were non-technical founders, and they actually had a very strong offering, but a very limited offering.
They had a difficult time hooking customers for the long run.
And they had a very clear pathway, like we need to build these and these and these
features to hook them in for the long run.
And the offering was very strong.
And...
But the challenge was like: ah yeah, but it takes so much money to develop these features, and it takes so much time, so we can only get this feature to market like six months from now.
And I was thinking to myself: I think this landscape will completely change.
To me, it's a bit of a parallel with online advertising, back when online advertising didn't exist.
We're creating a brand now, Top of Mind.
If I wanted to become known in, let's say, India, I probably had to go there, contract people there, do a lot of manual advertising there.
It was gonna take me six months to get to, I don't know, 10,000 people that view my brand.
But now with online advertising, I can put credits towards it.
It's not free, right?
But I can get those 10,000 views on my brand next week.
And I think we're also going to see that dynamic in software development, where software development used to be prohibitively expensive.
Like you needed very big investors to get something done.
Suddenly we're in this new stage where, I mean, the cost is not zero, of course, but it's much more doable.
It's just something that you can do, that you also have to do.
It's not the major investment to get stuff started.
And to complete that parallel, maybe that's why I'm optimistic: the advertising market as such only grew over time, and the roles shifted, right?
The jobs that were there completely evolved from what they were before, but the space as such only grew.
And that's a positive thing.
And I hope that we see that with the tech space as well.
I see what you're saying.
And as you're saying this, I'm also wondering if the signal-to-noise ratio is going to be at a different scale as well.
You're gonna have a lot of these features, a lot of these companies, a lot of these things, but because there's so much more, it's going to be harder to find the things that are truly valuable, the things that you should pay attention to, right?
What is actually robust and what is not?
I mean, it...
You mean like applications being offered, SaaS platforms being offered.
That's what you mean?
Yeah, but that I agree with you.
There is actually a nice report, maybe for next time: the survey from RevenueCat that just came out.
RevenueCat is a bit of the man in the middle when it comes to integrating with the iOS payment system, and it's very big in the SaaS ecosystem.
And they have a lot of data on how subscriptions are evolving, what churn looks like, how many new entrants we had in the last year, which is a crazy amount.
It's worth a read to get a bit of a view on how these dynamics are.
Yeah, exactly.
Let's go for it next time.
We'll cover it next time for sure.
But yeah, like you said, six months to build something.
My first thought was kind of the same as yours: six months, is that right?
Like if you have a team of four people that are efficient with AI, I cannot think of something that, if you know what you want, if you have a very concrete view of what you want, would take six months, right?
Because I think that's a big part of it.
So the horizon shrinks for sure, right?
So yeah, to be seen, to be seen.
And last but not least,
Nvidia's GTC conference is shaping up as a pivot point, with Jensen Huang expected to show how CPUs and new inference chips fit alongside the company's GPU empire.
One striking forecast hangs over the story: inference could make up 75% of a $1.2 trillion AI data center market by 2030.
So Nvidia, they have the GTC coming up, which actually stands for... for what?
It's basically their conference, but I forgot what it stood for.
But the expectation is that Nvidia will announce their new CPU chip, right?
So Nvidia historically has been the market standard, right?
The industry standard for GPUs.
And now they're also entering the CPU space, which is dominated, I think, by Intel and ARM, right?
Still in the AI play, let's say: they're saying that CPUs have become the bottleneck for AI inference, right?
And I actually read, it's not in this article, but I think I saw it somewhere else, that they were even thinking of how to link this with... because they also acquired Groq, with a q, so not Grok the AI, and it was linking a bit how that acquisition also leads to the CPU technology, right?
Yeah, my understanding of this is, well, they are two a bit different things, right? Groq and the CPU.
So Groq is an acquisition that they did.
We actually covered it, I want to say six months ago, a $20 billion acquisition.
And what Groq does is it creates LPUs, language processing units.
And they really focus on being very good at inference.
So they're probably not very efficient at training, but they're very good at inference.
And Jensen Huang's outlook for the future is actually that we will see more of a commoditization of inference, like on LLMs.
We will see this growth of large AI factories that do inference at a very, very large scale.
And a lot of these new AI factories will use chips really focused on inference, very good at inference, like this Groq offering.
Nvidia actually didn't have that.
Their acquisition is a bit of their answer to this, because they didn't have such an offering.
But they do have competitors, like Google's TPU.
And there's the thing by AWS, Trainium I think it's called, or something like that.
So there are actually a few competitors, and Nvidia did have a lot of competitors in the space, so Groq is a bit like they bought one of the competitors.
But what we will also see like in these very big AI factories is not just a lot of very
efficient inference.
What we also see is an optimization for agentic workloads, where you will have, for example, a lot of tool calls, a lot of agentic workloads that typically run on a CPU.
And I think that is a bit the reasoning why they're now saying the CPU is becoming the bottleneck: data centers are expected to do much more than just inference, also these agentic workloads.
I see.
So the CPU and Groq are connected in the sense that both support AI usage, right?
Groq is more for the model inference, and the CPU is more for the stuff around the model inference, like tool calling or interacting with the operating system and all these different things, which according to them is the...
And in terms of CPUs, so they have, and I'm not too knowledgeable on the exact specs, but they have a CPU called Grace.
They launched it already, I want to say four years ago, but now they announced a new generation called Vera.
Yeah, exactly.
2021 was Grace, and now Vera is in production.
And I think I also saw somewhere that there's a multi-year deal with Meta as well.
Like you mentioned servers and all these things, but they already have a very concrete client there, right?
Are you hopeful that we will see more commoditization of these elements?
Yes, I am.
Well, again, commoditization, there are degrees to it, right?
But I do think so, yes.
I saw a report a while ago that Chinese open models are on average six months behind US closed-source models.
Maybe this will close the gap a bit.
So there's that; you still need specialized infrastructure.
But I also think, and maybe I'm not the best person to say it, my impression is that research and industry swing back and forth a bit, right?
First research shows something is possible, then the industry makes sure it's usable, and then the industry makes sure it's usable for everyone, right?
So I think that's also the next step, I don't know.
I do think in the future we will see more of these things, and not just in model size.
We talked about inference, but maybe also models being smaller for more specific tasks.
I think we covered, months ago, the tiny models as well, the tiny recursion models.
So I know there are some things that people are looking into.
There's still a lot of people that want to run things more locally as well.
I think this is a reality today.
It's not gonna be a reality tomorrow, but...
But I do think it's something that people are asking themselves, right?
And I think if people are asking themselves, I think people are looking into it.
And I think at some point in the future, it will be more commoditized.
So again, I think it will, but I'm not sure how much, right?
I'm not sure if you're still gonna need, like...
Maybe it's not closed-source models; maybe more open models will do the trick.
Maybe you don't need racks and racks of GPUs to run inference; maybe it's CPUs, maybe it's these cheaper chips.
But I do think something will change as well.
I also hear that the costs for running LLMs are still super high.
There was actually another article, maybe we'll cover it next time, talking about OpenAI and Anthropic and even xAI: how they're not profitable, not even close to being profitable, so they're also appealing a bit to government subsidies in the US.
And the only one in a very luxury position is Gemini, because they can take the costs, right?
They can eat the losses, because they have very healthy revenue from other sources, from their other products.
I do think to counterbalance that even the big players are probably also looking into
this.
I don't know if OpenAI announced like a chip or something.
I know Google has the TPU.
I know Anthropic had some deals with, I think, AWS as well, or even Google, which I think we covered as well.
So I do think something will change there, but I'm not sure what it's going to look like.
How far down the commoditization spectrum are we going to get?
What do you think?
I'm hopeful.
I think if you look at most benchmarks, like the big one, the Artificial Analysis one, for example, the top models are very, very close in terms of performance.
I also see it now, because we have a very large test suite for the thing that we're building, over a lot of different cases.
And if you compare, I don't know, Gemini 3.1 with, for example, Sonnet or Opus, and then you compare that to GLM 5, there are some differences, but the differences are not significant enough to say that this is a six-month difference in pace of evolution, to me.
So I'm actually very hopeful.
I think a lot of people, even if you're using, let's say, Gemini as an app on your phone, if someone behind the screen switched it to Sonnet or to OpenAI or to GLM or to Kimi, I doubt that a lot of people would even notice.
I think so too.
I think they would only notice when you hit a certain problem and then you hit your head
against it a few times and then if you switch models it just kind of goes through.
Yeah, that's a thing that could happen.
I still believe that there are some models that are better than others, but I think it's
very, it's a very close race.
That's why I'm also hopeful, in the sense that we will not get these two or three very, very big, almost monopolistic players that basically rule the world.
In the end, there's enough competition to basically move the value that gets created to the bigger ecosystem, right?
Through these things, we are able to create more value for society at large, not just for these few major players.
I am hopeful for that, to be honest.
If you see the evolution now... but let's see in five years.
I'm mostly wondering, for example, we talked about Claude Code a few times.
I think for sure the models are really good, but I also think a lot of it is the application behind it, right?
Making sure it has the right context, making sure it keeps the right things.
Like ChatGPT, or OpenAI, came out with Codex, right?
And it says it's optimized for long-running tasks.
And I don't think it's because they have such a better model.
I think it's probably the engineering behind it that made it better for long-running tasks.
That's what people these days call the harness, right?
You have an LLM and you have a harness around it, whether it's Claude Code, whether it's Codex, whether it's Gemini CLI, whether it's whatever solution you're building.
It's this harness that keeps it in check, gives it direction.
And I think Claude Code is a very, very, very strong harness.
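To make the harness idea concrete, here's a tiny sketch. This is purely illustrative, not how Claude Code or Codex actually works internally: the model is a stub, and the tool registry, message format, and step limit are all made-up names. The point is just that the harness, not the model, runs the loop, executes tool calls, feeds results back, and enforces limits.

```python
# Minimal sketch of an LLM "harness": the loop around the model that
# executes tool calls, feeds results back, and keeps the run in check.
# The model here is a stub; in practice it would be an API call.

def stub_model(messages):
    # Pretend the model first asks for a tool, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The sum is {messages[-1]['content']}"}

TOOLS = {"add": lambda a, b: a + b}  # tool registry the harness exposes

def run_harness(model, user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):       # hard step limit: keeps the agent in check
        out = model(messages)
        if "answer" in out:          # model says it's done
            return out["answer"]
        result = TOOLS[out["tool"]](**out["args"])  # execute the tool call
        messages.append({"role": "tool", "content": str(result)})
    return "step limit reached"      # harness-enforced stop

print(run_harness(stub_model, "What is 2 + 3?"))  # → The sum is 5
```

Swapping the stub for a real model call is what distinguishes one harness from another; the loop shape stays roughly the same.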
Yeah, no, I agree.
I agree.
Yeah, and I think again, maybe one thing we talked about a while ago that is interesting to me: in the beginning of GenAI, I wasn't very excited, because it was all about prompting, and prompting is like, that's your job now?
But if it's about creating the harness and making these applications work, I think that's way more exciting for me.
Someone once shared a position with me, what was it?
Full-stack prompt engineer or something.
Like, I'm sorry, if this were my job title, I would be ashamed.
This doesn't look interesting at all.
But if it's about building an application, I mean, prompts are part of it, right?
But engineering the context and all these things: what are the tools, what to keep, how to make sure the context doesn't overflow, et cetera, et cetera.
I think that's more interesting.
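One of those context-engineering concerns, making sure the context doesn't overflow, can be sketched in a few lines. This is a toy, nothing vendor-specific: word counts stand in for real tokenization, and the policy (drop the oldest turns until the conversation fits a budget) is just one of many possible strategies.

```python
# Toy sketch of context trimming: keep the most recent messages that
# fit in a token budget, dropping the oldest turns first.
# Word count stands in for real tokenization here.

def trim_context(messages, budget=50):
    """Return the newest messages whose total 'token' cost fits in budget."""
    kept, used = [], 0
    for msg in reversed(messages):    # walk newest-first
        cost = len(msg.split())
        if used + cost > budget:
            break                     # this turn (and anything older) is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))       # restore chronological order

history = ["long intro " * 10, "key fact", "latest question"]
print(trim_context(history, budget=6))  # the oldest, longest turn is dropped
```

A real harness would do something smarter, like summarizing dropped turns instead of discarding them, but the budget-enforcement shape is the same.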
Boom, I think that's it for the main articles.
Do you want to quickly go over the two tidbits that we have?
Actually, I just ran out of time.
It's 11 o'clock here.
So let's keep them for next time.
We'll keep them for next time.
All righty.
I think this is it for today.
Again, we're still cooking some changes.
We've mentioned it a few times, but for the listeners: things may change in the future.
But it's something we can hopefully announce in the next few weeks.
Thank you Bart.
Any last final words of wisdom?
Keep coding.
Keep coding away.
Burn those tokens.
Keep plotting.
Thank you.
Ciao everyone.
Ciao.